The explosion of Video-on-Demand (VoD) traffic has been a major driving force behind the Internet's evolution from a traditional connection-centric network architecture toward a content-centric one. To support this evolution, operators are deploying caches of VoD content closer to users, across network equipment in the core, metro, and even access segments, to mitigate traffic growth and improve VoD quality of service. The deployment of storage elements (caches) across the network to deliver content to end users is known as a Content Delivery Network (CDN). For a CDN operator, it is important to minimize the cache-deployment cost while satisfying end users' performance requirements. On one hand, deploying many large caches close to users improves CDN performance (e.g., lower latency) but incurs large capital and operational costs. On the other hand, deploying fewer caches in higher network segments incurs high operational costs due to heavy data traffic and might not satisfy future traffic demands, thus failing to meet users' requirements. In this paper, we aim to identify the most cost-efficient cache deployment in a CDN and to study the tradeoff between CDN performance and cost. We propose a CDN cost model that takes into account the capital and operational expenditures of CDN devices (e.g., caches and video interfaces) and of the traffic required to serve end users. We examine the effect of content popularity on the cost of CDN deployment strategies, showing that different popularity distributions lead to different optimal cache-deployment strategies. Results show that deploying a large number of big caches in the access segment optimizes the quality of service for end users but increases the operational expenditure. Instead, a CDN deployment that uses caches across both the access and metro segments achieves a more balanced solution in terms of overall performance and cost.
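The CAPEX/OPEX tradeoff described above can be illustrated with a toy model. This is a minimal sketch, not the paper's actual cost model: it assumes a Zipf content-popularity distribution, caches that store the most popular items (an LFU-style idealization), CAPEX proportional to deployed storage, and OPEX proportional to the upstream traffic generated by cache misses. All parameter values are hypothetical.

```python
def zipf_popularity(n_items, alpha):
    """Zipf popularity distribution over a catalog of n_items (rank 1 = most popular)."""
    weights = [1.0 / (rank ** alpha) for rank in range(1, n_items + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def hit_ratio(popularity, cache_size):
    """Idealized cache holding the cache_size most popular items."""
    return sum(popularity[:cache_size])

def deployment_cost(n_caches, cache_size, demand, popularity,
                    capex_per_slot, opex_per_unit_traffic):
    """Total cost: storage CAPEX plus OPEX for upstream miss traffic."""
    h = hit_ratio(popularity, cache_size)
    capex = n_caches * cache_size * capex_per_slot      # grows with deployed storage
    opex = demand * (1.0 - h) * opex_per_unit_traffic   # misses travel upstream
    return capex + opex

# Hypothetical scenario: 10,000-item catalog, mildly skewed popularity.
pop = zipf_popularity(10_000, alpha=0.8)

# Access-heavy strategy: many small caches near users.
access = deployment_cost(n_caches=100, cache_size=200, demand=1e6,
                         popularity=pop, capex_per_slot=1.0,
                         opex_per_unit_traffic=0.5)

# Metro strategy: fewer, larger caches higher in the network.
metro = deployment_cost(n_caches=10, cache_size=2000, demand=1e6,
                        popularity=pop, capex_per_slot=1.0,
                        opex_per_unit_traffic=0.5)
```

Varying `alpha` in this sketch shows the popularity effect the abstract mentions: a more skewed distribution (larger `alpha`) lets small access caches absorb most requests, while a flatter one favors pooling storage into fewer, larger caches.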