In mixed-criticality systems, functionalities of different degrees of importance (or criticalities) are implemented upon a common platform. Such mixed-criticality implementations are becoming increasingly common in embedded systems – consider, for example, the Integrated Modular Avionics (IMA) software architecture used in aviation [6] and the AUTOSAR initiative (AUTomotive Open System ARchitecture – www.autosar.org) for automotive systems. As a consequence, the real-time systems research community has recently been devoting much attention to better understanding the challenges that arise in implementing such mixed-criticality systems; this includes research on various aspects of mixed-criticality scheduling.

Most of this prior work draws inspiration from the seminal work of Vestal [1], and has taken the approach of validating the correctness of highly critical functionalities under more pessimistic assumptions than those used in validating the correctness of less critical functionalities. (For example, a piece of code may be characterized by a larger worst-case execution time (WCET) [1] in the more pessimistic analysis, or recurrent code that is triggered by some external recurrent event may be characterized by a higher frequency [2].) All functionalities are expected to be demonstrated correct under the less pessimistic analysis, whereas the analysis under the more pessimistic assumptions need only demonstrate the correctness of the more critical functionalities.

In this paper we take a somewhat different perspective on mixed-criticality scheduling: the system is analyzed only once, under a single set of assumptions. The mixed-criticality nature of the system is expressed in the requirement that, while we would like all functionalities to execute correctly under normal circumstances, it is essential that the more critical functionalities execute correctly even when circumstances are not normal.

To express this formally, we model the workload of an MC system as a collection of real-time jobs — these jobs may be independent, or they may be generated by recurrent tasks. Each job is characterized by a release date, a worst-case execution time (WCET), and a deadline; each job is further designated as being hi-criticality (more important) or lo-criticality (less important). We desire to schedule the system upon a single processor. This processor is unreliable in the following sense:
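For concreteness, the following is a minimal, hypothetical Python sketch of the job model described above: each `Job` carries the release date, WCET, deadline, and hi/lo criticality designation from the text. The `edf_order` helper is only a generic earliest-deadline-first ordering added for illustration; it is not the scheduling algorithm developed in the paper and it ignores the unreliable-processor aspect entirely.

```python
from dataclasses import dataclass
from enum import Enum


class Criticality(Enum):
    LO = 0   # less important functionality
    HI = 1   # more important functionality


@dataclass(frozen=True)
class Job:
    """A real-time job in the mixed-criticality model sketched above."""
    release: float            # release date (earliest time the job may execute)
    wcet: float               # worst-case execution time
    deadline: float           # absolute deadline
    criticality: Criticality  # hi- or lo-criticality designation


def edf_order(jobs: list[Job]) -> list[Job]:
    """Illustrative earliest-deadline-first priority ordering of a job set.

    This is only a generic EDF ordering for illustration, not the paper's
    scheduling algorithm; the unreliability of the processor is not modeled.
    """
    return sorted(jobs, key=lambda j: j.deadline)


if __name__ == "__main__":
    jobs = [
        Job(release=0.0, wcet=2.0, deadline=10.0, criticality=Criticality.HI),
        Job(release=1.0, wcet=3.0, deadline=6.0, criticality=Criticality.LO),
    ]
    for j in edf_order(jobs):
        print(j)
```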
[1] S. Vestal. Preemptive Scheduling of Multi-criticality Systems with Varying Degrees of Execution Time Assurance. Proceedings of the IEEE Real-Time Systems Symposium (RTSS), 2007.
[2] S. K. Baruah et al. Certification-cognizant scheduling of tasks with pessimistic frequency specification. Proceedings of the 7th IEEE International Symposium on Industrial Embedded Systems (SIES), 2012.
[3] C. L. Liu and J. W. Layland. Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment. Journal of the ACM, 1973.
[4] A. K. Mok. Fundamental design problems of distributed systems for the hard-real-time environment. Ph.D. dissertation, Massachusetts Institute of Technology, 1983.
[5] M. L. Dertouzos. Control Robotics: The Procedural Control of Physical Processes. Proceedings of the IFIP Congress, 1974.
[6] P. J. Prisaznuk. Integrated modular avionics. Proceedings of the IEEE 1992 National Aerospace and Electronics Conference (NAECON), 1992.