A Comparison of Markov Reward Based Resource-Latency Aware Heuristics for the Virtual Network Embedding Problem

The increasing use of virtualization (e.g., in Cloud Computing, Software Defined Networks) requires Infrastructure Providers (InPs) to optimize the placement of virtual network requests (VNRs) onto a substrate network. In addition, InPs must comply with QoS requirements, particularly for the growing number of time-critical applications (e.g., healthcare, VoIP). Resource optimization and QoS compliance are two competing goals. In this work, we compare our QoS-aware virtual network embedding (VNE) algorithms. The first approach (MCRR-LA) relies on a powerful resource-latency aware metric, while the second (MCRM) builds on the former by adding a node proximity and similarity concept; both aim at an effective ranking/mapping. We extend the MCRR-LA metric and the MCRM node/link mapping stage to test new performing strategies. We evaluated our algorithms extensively through simulation. Our experiments show that the algorithms reduce the average path delay while achieving good resource performance in terms of a lower VNR blocking rate and higher revenues. We also compared them with a previous two-stage approach, obtaining results that underline the strengths of the novel approaches.

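The abstract describes a two-stage ranking/mapping pipeline: rank substrate nodes with a resource-latency aware metric, map virtual nodes greedily, then embed virtual links on latency-aware paths. The Python sketch below only illustrates that generic structure; the scoring function is a simple resource-latency proxy, not the Markov-reward metric of MCRR-LA/MCRM, and all data structures and names are assumptions for illustration (residual-resource updates and rollback are omitted for brevity).

```python
import heapq

# Assumed substrate model (illustrative, not from the paper):
#   cpu[u]      residual CPU on substrate node u
#   adj[u]      list of (v, latency) for substrate links incident to u
#   bw[(u, v)]  residual bandwidth, stored symmetrically for both orientations

def node_rank(u, cpu, adj, bw):
    # Simple resource-latency proxy: CPU weighted by bandwidth-per-latency of incident links.
    return cpu[u] * sum(bw[(u, v)] / max(lat, 1e-9) for v, lat in adj[u])

def min_latency_path(adj, bw, src, dst, bw_req):
    # Dijkstra on link latency, skipping links without enough residual bandwidth.
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return list(reversed(path))
        if d > dist.get(u, float("inf")):
            continue
        for v, lat in adj[u]:
            if bw[(u, v)] < bw_req or d + lat >= dist.get(v, float("inf")):
                continue
            dist[v], prev[v] = d + lat, u
            heapq.heappush(pq, (d + lat, v))
    return None  # no feasible path: the VNR would be blocked

def embed(vnr_nodes, vnr_links, cpu, adj, bw):
    # Stage 1: greedy node mapping onto the best-ranked substrate nodes.
    ranked = sorted(cpu, key=lambda u: node_rank(u, cpu, adj, bw), reverse=True)
    mapping, used = {}, set()
    for vnode, cpu_req in sorted(vnr_nodes.items(), key=lambda kv: -kv[1]):
        host = next((u for u in ranked if u not in used and cpu[u] >= cpu_req), None)
        if host is None:
            return None  # block the request
        mapping[vnode] = host
        used.add(host)
    # Stage 2: link mapping on minimum-latency feasible paths.
    paths = {}
    for (a, b), bw_req in vnr_links.items():
        path = min_latency_path(adj, bw, mapping[a], mapping[b], bw_req)
        if path is None:
            return None
        paths[(a, b)] = path
    return mapping, paths
```

A usage example would pass a VNR as `vnr_nodes = {"v1": 10, "v2": 5}` and `vnr_links = {("v1", "v2"): 20}`; a `None` return corresponds to blocking the request, which is one of the performance indicators compared in the paper.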