EATS: Energy-Aware Tasks Scheduling in Cloud Computing Systems

Abstract The increasing cost of power consumption in data centers and the corresponding environmental threats have raised a growing demand for energy-efficient computing. Despite its importance, little work has been done on models that manage this consumption efficiently. With the growing use of Cloud computing, the issue becomes crucial. In Cloud computing, services run in a data center on a set of clusters managed by the Cloud computing environment, and they are provided as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). The amount of energy consumed by underutilized and overloaded computing systems can be substantial. Therefore, scheduling algorithms need to take the power consumption of the Cloud into account for energy-efficient resource utilization. At the same time, Cloud computing is seen as crucial for high-performance computing, for instance for Big Data processing, and this should not be compromised much for the sake of reducing energy consumption. In this work, we derive an energy-aware tasks scheduling (EATS) model that divides big data workloads and schedules them in the Cloud. The main goal of EATS is to increase application efficiency and reduce the energy consumption of the underlying resources. The power consumption of a computing server was measured under different workload conditions. Experiments show that the ratio of energy consumption at peak performance to that in an idle state is 1.3, i.e., an idle server still draws roughly 77% of its peak power, so resources must be utilized correctly without sacrificing performance. The results of the proposed approach are very promising and encouraging; hence, the adoption of such strategies by Cloud providers results in energy savings for data centers.
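To make the scheduling idea concrete, the sketch below shows one way a scheduler could divide a workload into chunks and place them greedily under a simple linear power model in which peak power is about 1.3x idle power, matching the ratio reported above. This is a minimal illustration under stated assumptions: the Server class, the power model, the chunking helper, and the greedy policy are invented for exposition and are not the paper's EATS algorithm.

```python
# Minimal, illustrative sketch of an energy-aware task scheduler.
# NOTE: the power model, server parameters, and greedy policy below are
# assumptions for illustration; they are NOT the paper's EATS algorithm.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Server:
    name: str
    speed: float        # work units processed per second (assumed)
    p_idle: float       # idle power draw in watts (assumed)
    p_peak: float       # peak power draw in watts (assumed)
    assigned: List[float] = field(default_factory=list)

    @property
    def busy_time(self) -> float:
        """Seconds of work already assigned to this server."""
        return sum(self.assigned) / self.speed

    def energy_if_added(self, chunk: float) -> float:
        """Estimated extra energy (joules) to process one more chunk,
        assuming the server draws roughly peak power while busy."""
        return (chunk / self.speed) * self.p_peak


def split_into_chunks(total_work: float, chunk_size: float) -> List[float]:
    """Divide a big-data workload into roughly equal chunks."""
    chunks = [chunk_size] * int(total_work // chunk_size)
    remainder = total_work - chunk_size * len(chunks)
    if remainder > 0:
        chunks.append(remainder)
    return chunks


def schedule(chunks: List[float], servers: List[Server]) -> None:
    """Greedy energy-aware placement: each chunk goes to the server whose
    estimated finish time and incremental energy are jointly smallest."""
    for chunk in chunks:
        best = min(
            servers,
            key=lambda s: (s.busy_time + chunk / s.speed, s.energy_if_added(chunk)),
        )
        best.assigned.append(chunk)


if __name__ == "__main__":
    # Hypothetical cluster: peak power is about 1.3x idle power,
    # consistent with the peak-to-idle ratio measured in this work.
    cluster = [
        Server("fast-node", speed=2.0, p_idle=200.0, p_peak=260.0),
        Server("slow-node", speed=1.0, p_idle=150.0, p_peak=195.0),
    ]
    work = split_into_chunks(total_work=100.0, chunk_size=10.0)
    schedule(work, cluster)
    for s in cluster:
        print(f"{s.name}: {len(s.assigned)} chunks, ~{s.busy_time:.1f}s busy")
```

In this kind of greedy policy, finish time is the primary key so that performance is not traded away, while the energy estimate breaks ties between otherwise similar placements; the real EATS model may weigh these factors differently.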
