Today, embedding fast processors in portable devices is infeasible because such battery-operated systems can neither be supplied with enough power nor be kept cool for a reasonable period of time. Many processors allow their clock speed to be reduced in order to save power. Reducing the clock speed improves the performance-to-energy ratio for three reasons. First, the number of stall cycles during which the CPU waits for main memory shrinks, because the clock-cycle time of the processor core moves closer to the memory latency. Second, a lower clock speed lowers the battery's discharge rate, which increases its effective capacity, so more energy can be drawn from it. Finally, a reduced clock speed opens the way to further energy savings, because the supply voltage can be reduced as well. Our approach to OS-directed power management adds the clock speed to the runtime context of a thread. In addition to deciding when a thread is executed and which CPU it runs on, scheduling gains a third degree of freedom: the speed of execution. By tuning the clock speed, the operating system can adjust the quality of service to the power constraints of the device.
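To make the idea concrete, the following sketch in plain C shows what "adding the clock speed to the runtime context of a thread" could look like: the thread context carries a speed field alongside CPU and priority, and the dispatcher applies it on every context switch. All names (set_cpu_clock, the speed levels, the context layout) are hypothetical illustrations under these assumptions, not the paper's implementation.

    /*
     * Minimal sketch (hypothetical names): the clock speed becomes part of a
     * thread's runtime context, and the dispatcher applies it on every switch.
     */
    #include <stdint.h>
    #include <stdio.h>

    /* Discrete speed levels the (hypothetical) platform supports, in MHz. */
    typedef enum { SPEED_33 = 33, SPEED_66 = 66, SPEED_100 = 100 } clock_speed_t;

    /* Per-thread runtime context: in addition to "when" (priority) and
     * "where" (CPU), it now carries "how fast" (clock speed). */
    struct thread_context {
        int           tid;
        int           cpu;       /* which CPU the thread runs on        */
        int           priority;  /* when the thread is dispatched       */
        clock_speed_t speed;     /* third dimension: speed of execution */
    };

    /* Hypothetical platform hook: program the clock generator (and, where
     * the hardware allows it, lower the supply voltage along with the
     * frequency for the additional savings mentioned above). */
    static void set_cpu_clock(int cpu, clock_speed_t mhz)
    {
        printf("cpu %d: clock set to %u MHz\n", cpu, (unsigned)mhz);
    }

    /* Dispatcher fragment: restoring a thread's context now also restores
     * its clock speed, so quality of service follows the power budget. */
    void dispatch(const struct thread_context *next)
    {
        set_cpu_clock(next->cpu, next->speed);
        /* ... restore registers, switch stacks, etc. ... */
    }

    int main(void)
    {
        struct thread_context decoder = { .tid = 1, .cpu = 0,
                                          .priority = 5, .speed = SPEED_66 };
        dispatch(&decoder);
        return 0;
    }

In this reading, a power-constrained device would simply assign lower speed levels to threads whose deadlines still permit it, trading execution speed for energy without changing when or where the threads run.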