Feasibility of on-line speed policies in real-time systems

We consider a real-time system in which a single processor with variable speed executes an infinite sequence of sporadic and independent jobs. We assume that job sizes and relative deadlines are bounded by $C$ and $\varDelta$, respectively, and we write $S_{\max}$ for the maximal speed of the processor. In such a real-time system, a speed selection policy dynamically chooses (i.e., on-line) the speed of the processor to execute the current, not yet finished, jobs. An on-line speed policy is feasible if it can execute any sequence of jobs while meeting two constraints: the processor speed never exceeds $S_{\max}$, and no job misses its deadline. In this paper, we compare the feasibility regions of four on-line speed selection policies in single-processor real-time systems: Optimal Available (OA) (Yao et al., IEEE Annual Foundations of Computer Science, 1995), Average Rate (AVR) (Yao et al., 1995), the Bansal-Kimbrel-Pruhs policy (BKP) (Bansal et al., J. ACM 54(1), 2007), and a Markovian policy based on dynamic programming (MP) (Gaujal et al., Technical Report hal-01615835, Inria, 2017). We prove the following results:

- (OA) is feasible if and only if $S_{\max} \ge C (h_{\varDelta-1}+1)$, where $h_n$ is the $n$-th harmonic number ($h_n = \sum_{i=1}^n 1/i \approx \log n$).
- (AVR) is feasible if and only if $S_{\max} \ge C h_\varDelta$.
- (BKP) is feasible if and only if $S_{\max} \ge e C$, where $e = \exp(1)$.
- (MP) is feasible if and only if $S_{\max} \ge C$. This feasibility condition is optimal, because no policy can be feasible when $S_{\max} < C$.

These results reinforce the interest of (MP): it is not only optimal for energy consumption (on average) but also optimal with respect to feasibility.
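To make these thresholds concrete, the following Python sketch (ours, not from the paper) evaluates the minimal feasible $S_{\max}$ of each policy for given bounds $C$ and $\varDelta$, directly from the four conditions above; the function and variable names are illustrative.

```python
from math import e

def harmonic(n: int) -> float:
    """n-th harmonic number h_n = sum_{i=1..n} 1/i (with h_0 = 0)."""
    return sum(1.0 / i for i in range(1, n + 1))

def feasibility_thresholds(C: float, Delta: int) -> dict:
    """Smallest S_max under which each policy is feasible,
    per the if-and-only-if conditions stated above."""
    return {
        "OA":  C * (harmonic(Delta - 1) + 1),  # S_max >= C (h_{Delta-1} + 1)
        "AVR": C * harmonic(Delta),            # S_max >= C h_Delta
        "BKP": e * C,                          # S_max >= e C
        "MP":  C,                              # S_max >= C (optimal)
    }

if __name__ == "__main__":
    # Example: job sizes at most C = 2, relative deadlines at most Delta = 5.
    for policy, s in feasibility_thresholds(2.0, 5).items():
        print(f"{policy}: S_max >= {s:.3f}")
```

For instance, with $C = 2$ and $\varDelta = 5$ this reports thresholds of about 6.17 for (OA), 4.57 for (AVR), 5.44 for (BKP), and 2 for (MP), illustrating that the (OA) and (AVR) requirements grow with the deadline bound while those of (BKP) and (MP) do not.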
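For context, here is a minimal sketch of the (AVR) and (OA) speed rules as defined by Yao et al. (1995); the Job representation and function names are our own illustration, and job sizes are assumed to be kept up to date with the remaining work.

```python
from dataclasses import dataclass

@dataclass
class Job:
    arrival: float   # release time
    size: float      # remaining work (processor cycles)
    deadline: float  # absolute deadline

def avr_speed(active: list[Job]) -> float:
    """Average Rate (AVR): run at the sum of the densities
    size / (deadline - arrival) of all currently active jobs."""
    return sum(j.size / (j.deadline - j.arrival) for j in active)

def oa_speed(pending: list[Job], t: float) -> float:
    """Optimal Available (OA): run at the maximal ratio of remaining
    work over remaining time, max_d w(t, d) / (d - t), where w(t, d)
    is the remaining work of pending jobs with deadline at most d.
    Assumes every pending deadline satisfies d > t."""
    deadlines = sorted({j.deadline for j in pending})
    return max(
        (sum(j.size for j in pending if j.deadline <= d) / (d - t)
         for d in deadlines),
        default=0.0,
    )
```

Note the structural difference: (AVR) treats each job in isolation, whereas (OA) aggregates the remaining work per deadline; this per-job versus aggregate view is where the two policies diverge.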

[1] Richard S. Sutton, et al. Reinforcement Learning: An Introduction. MIT Press, 1998.

[2] Nikhil Bansal, et al. Speed scaling to manage energy and temperature. Journal of the ACM, 54(1), 2007.

[3] Lothar Thiele, et al. Real-time calculus for scheduling hard real-time systems. Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS), 2000.

[4] Chung Laung Liu, et al. Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment. Journal of the ACM, 20(1), 1973.

[5] Rajesh K. Gupta, et al. Procrastination scheduling in fixed priority real-time systems. Proceedings of LCTES '04, 2004.

[6] Bruno Gaujal, et al. Dynamic speed scaling minimizing expected energy consumption for real-time tasks. Journal of Scheduling, 2020.

[7] Lothar Thiele, et al. Feasibility Analysis of On-Line DVS Algorithms for Scheduling Arbitrary Event Streams. Proceedings of the 30th IEEE Real-Time Systems Symposium (RTSS), 2009.

[8] Martin Grohe. The complexity of homomorphism and constraint satisfaction problems seen from the other side. Journal of the ACM, 54(1), 2007.

[9] F. Frances Yao, et al. A scheduling model for reduced CPU energy. Proceedings of the 36th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 1995.

[10] John Augustine, et al. Optimal power-down strategies. Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2004.