Leveraging Applications of Formal Methods, Verification and Validation. Specialized Techniques and Applications

Virtualization is a key technology enabler for cloud computing. Despite the added value and compelling business drivers of cloud computing, this new paradigm poses considerable new challenges that must be addressed to make its use effective for industry. Virtualization makes elastic amounts of resources available to application-level services; for example, the processing capacity allocated to a service may be changed according to demand. Current software development methods, however, do not support the modeling and validation of services running on virtualized resources in a satisfactory way. This seriously limits the potential for fine-tuning services to the available virtualized resources, as well as for designing services for scalability and dynamic resource management. The track on Engineering Virtualized Services aims to discuss key challenges that need to be addressed to enable software development methods to target resource-aware virtualized services.

1 Moving into the Clouds

The planet's data storage and processing is about to move into the clouds. This has the potential to revolutionize how we interact with computers. Although the privacy of data stored in the cloud remains a challenge, cloud-based data processing, or cloud computing, is already emerging as an economically interesting business model, due to an undeniable added value and compelling business drivers [5]. One such driver is elasticity: businesses pay for computing resources when they are needed, instead of provisioning in advance with huge upfront investments. New resources such as processing power or memory can be added to the cloud's virtual computers on the fly, or additional virtual computers can be provided to the client application. Going beyond shared storage, the main potential of cloud computing lies in its scalable virtualized framework for data processing.
If a service uses cloud-based processing, its capacity can be automatically adjusted when new users arrive. Another driver is agility: new services can be deployed on the market quickly, flexibly, and at limited cost. This allows a service to handle its end-users in a flexible manner without requiring initial investments in hardware before the service can be launched.

Partly funded by the EU project FP7-610582 ENVISAGE: Engineering Virtualized Services (http://www.envisage-project.eu).

R. Hähnle and E.B. Johnsen: Introduction to Track on Engineering Virtualized Services. In: T. Margaria and B. Steffen (Eds.): ISoLA 2014, Part II, LNCS 8803, pp. 1–4, 2014. © Springer-Verlag Berlin Heidelberg 2014.

Reliability and control of resources are barriers to the industrial adoption of cloud computing today. To overcome these barriers and to gain control of the virtualized resources on the cloud, client services need to become resource-aware. Looking beyond today's cloud, we may then expect virtualized services which dynamically combine distributed and heterogeneous resources from providers of utility computing in an increasingly fine-grained way. Making full use of the potential of virtualized computation requires that we rethink the way in which we design and develop software.

2 Empowering the Designer

The elasticity of software executed in the cloud means that its designers are given far-reaching control over the resource parameters of the execution environment, such as the number and kind of processors, the amount of memory and storage capacity, and the bandwidth. In principle, these parameters can even be changed dynamically, at runtime. This means that the client of a cloud service can not only deploy and run software, but is also in full control of the trade-offs between the incurred cost and the delivered quality of service. To exploit these new possibilities, software in the cloud must be designed for scalability.
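The idea of a resource-aware service that trades incurred cost against delivered quality of service can be made concrete with a small sketch. The following is our illustration only, not part of the original track description: the `ScalingPolicy` class, its thresholds, and the load figures are all hypothetical.

```python
import math
from dataclasses import dataclass

# Sketch of a resource-aware scaling policy (hypothetical names and numbers).
# The service inspects a resource parameter of its execution environment
# (here, the number of virtual machines) and adjusts it so that per-machine
# load stays near a target, making the cost/quality trade-off explicit.

@dataclass
class ScalingPolicy:
    target_load: float   # desired average load per virtual machine
    cost_per_vm: float   # cost of one virtual machine per time unit

    def decide(self, total_load: float) -> int:
        """Number of VMs needed to keep per-VM load at or below the target."""
        return max(1, math.ceil(total_load / self.target_load))

    def cost(self, vms: int) -> float:
        """Incurred cost per time unit for the chosen deployment."""
        return vms * self.cost_per_vm

policy = ScalingPolicy(target_load=100.0, cost_per_vm=0.5)
vms = policy.decide(total_load=350.0)   # demand grows, the service scales up
print(vms, policy.cost(vms))            # → 4 2.0
```

Raising `target_load` lowers cost at the expense of quality of service; the point of resource awareness is that this trade-off becomes an explicit, analyzable part of the service design rather than a fixed property of the hardware.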
Today, software is often designed based on specific assumptions about deployment, such as the size of data structures, the amount of random access memory, or the number of processors. When scalability has not been taken into account from the start, rescaling usually requires extensive design changes. It is therefore essential to detect and fix deployment errors, such as the impossibility of meeting a service level agreement, already in the design phase. To make full use of the opportunities of cloud computing, software development for the cloud demands a design methodology that

– takes deployment modeling into account at early design stages and
– permits the detection of deployment errors early and efficiently, preferably using software tools such as simulators, test generators, and static analyzers.

Clearly, there is a new software engineering challenge to be addressed: how can the validation of deployment decisions be pushed up to the modeling phase of the software development chain without convoluting the design with deployment details?

3 Controlling Deployment in the Design Phase

When a service is developed today, the developers first design its functionality, then they determine which resources are needed for the service, and ultimately the provisioning of these resources is controlled through a service level agreement (SLA). So far, these three parts of a deployed cloud service tend to live in separate worlds, and it is important to bridge the gaps between them. The first gap, between the client-layer functionality and the provisioning layer, can be closed by a virtualization interface which allows the client layer to read and change resource parameters. The second gap is between SLAs and the client layer. Here the key observation is that the service contract part of an SLA can be formalized as a specification contract with rigorous semantics.
This enables formal analysis of the client behavior with respect to the SLA at design time. Possible analyses include resource consumption analysis, performance analysis, test case generation, and formal verification [2]. For suitable modeling and specification languages, such analyses can be highly automated [3].
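As a minimal sketch of what formalizing the service contract part of an SLA could look like, consider expressing SLA clauses as machine-checkable predicates over observed behavior. This is our illustration only, not from the Envisage tool chain: the clause thresholds and the `check_sla` helper are hypothetical.

```python
# Sketch: two SLA clauses (a 95th-percentile response-time bound and an
# error-rate bound) formalized as a predicate over a trace of observations.
# Hypothetical clause names and thresholds, illustration only.

def percentile(samples, p):
    """Return the p-th percentile of a list of values (nearest-rank method)."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

def check_sla(response_times_ms, errors=0, max_p95_ms=200.0, max_error_rate=0.01):
    """Check both formalized clauses against a trace of observations."""
    p95 = percentile(response_times_ms, 95)
    error_rate = errors / max(1, len(response_times_ms))
    return p95 <= max_p95_ms and error_rate <= max_error_rate

trace = [120.0] * 95 + [180.0] * 5      # 100 observed response times
print(check_sla(trace, errors=0))       # → True: both clauses hold
print(check_sla(trace, errors=5))       # → False: error-rate clause violated
```

Once the clauses are predicates with rigorous semantics, design-time analyses can evaluate them against simulated or formally derived traces, instead of discovering violations only through production monitoring.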

[1] V. Bruyère et al. On the optimal reachability problem of weighted timed automata. Formal Methods in System Design, 2007.

[2] J. Armstrong. Programming Erlang: Software for a Concurrent World, 1993.

[3] K.G. Larsen et al. Statistical Model Checking for Networks of Priced Timed Automata. FORMATS, 2011.

[4] T.A. Henzinger et al. The theory of hybrid automata. Proc. 11th Annual IEEE Symposium on Logic in Computer Science (LICS), 1996.

[5] E.B. Johnsen et al. An Asynchronous Communication Model for Distributed Concurrent Objects. SEFM, 2004.

[6] J. Leite et al. Statistical Model Checking for Distributed Probabilistic-Control Hybrid Automata with Smart Grid Applications. ICFEM, 2011.

[7] R. Pulungan et al. Effective Minimization of Acyclic Phase-Type Representations. ASMTA, 2008.

[8] B.R. Haverkort et al. Which battery model to use? IET Software, 2008.

[9] I. Lanese et al. Fault Model Design Space for Cooperative Concurrency. ISoLA, 2014.

[10] J. Lygeros et al. Probabilistic reachability and safety for controlled discrete time stochastic hybrid systems. Automatica, 2008.

[11] J.F. Manwell et al. Lead-acid battery storage model for hybrid energy systems, 1993.

[12] A. van der Schaft et al. Stochastic Hybrid Systems: Theory and Safety Critical Applications, 2006.

[13] F.S. de Boer et al. A Complete Guide to the Future. ESOP, 2007.

[14] K.G. Larsen et al. An evaluation framework for energy aware buildings using statistical model checking. Science China Information Sciences, 2012.

[15] B. Jonsson et al. Extracting the process structure of Erlang applications, 2001.

[16] S. Vinoski. Reliability with Erlang. IEEE Internet Computing, 2007.

[17] K.G. Larsen et al. Schedulability Analysis Using Uppaal: Herschel-Planck Case Study. ISoLA, 2010.

[18] J.-P. Katoen et al. A compositional modelling and analysis framework for stochastic hybrid systems. Formal Methods in System Design, 2012.

[19] T.A. Henzinger et al. The Algorithmic Analysis of Hybrid Systems. Theoretical Computer Science, 1995.

[20] R. Hähnle et al. ABS: A Core Language for Abstract Behavioral Specification. FMCO, 2010.

[21] K.G. Larsen et al. Minimum-Cost Reachability for Priced Timed Automata. HSCC, 2001.

[22] K.G. Larsen. Statistical Model Checking, Refinement Checking, Optimization, ... for Stochastic Hybrid Systems. FORMATS, 2012.

[23] K.G. Larsen et al. Time for Statistical Model Checking of Real-Time Systems. CAV, 2011.

[24] G. Candea et al. Crash-Only Software. HotOS, 2003.

[25] D. Holmes et al. Java Concurrency in Practice, 2006.

[26] J. Dovland et al. Observable behavior of distributed systems: Component reasoning for concurrent objects. Journal of Logical and Algebraic Methods in Programming, 2012.

[27] K.G. Larsen et al. Optimal infinite scheduling for multi-priced timed automata. Formal Methods in System Design, 2008.

[28] J.-P. Katoen et al. Approximate Model Checking of Stochastic Hybrid Systems. European Journal of Control, 2010.

[29] F.S. de Boer et al. User-defined schedulers for real-time concurrent objects. Innovations in Systems and Software Engineering, 2012.

[30] J.-P. Katoen et al. Maximizing system lifetime by battery scheduling. IEEE/IFIP International Conference on Dependable Systems & Networks (DSN), 2009.

[31] L. Zhang et al. Safety Verification for Probabilistic Hybrid Systems. European Journal of Control, 2010.

[32] E. Bartocci et al. Proceedings of the First International Workshop on Hybrid Systems and Biology, 2012.

[33] I. Lanese et al. Fault in the Future. COORDINATION, 2011.

[34] R. Alur et al. A Theory of Timed Automata. Theoretical Computer Science, 1994.

[35] R. Alur et al. Formal verification of hybrid systems. Proc. 9th ACM International Conference on Embedded Software (EMSOFT), 2011.