Within the next 10 years, advances in resource disaggregation will enable full transparency for most Cloud applications: the ability to run unmodified single-machine applications over effectively unlimited remote computing resources. In this article, we present five serverless predictions for the next decade that will realize this vision of transparency, a vision equivalent to Tim Wagner's Serverless SuperComputer or AnyScale's Infinite Laptop proposals.

SERVERLESS computing has gained popularity in industry and academia over the last few years [1]. It has attracted companies and developers with a simple Function-as-a-Service (FaaS) programming model that realizes the original promise of the Cloud: elasticity and fine-grained pay-as-you-go billing for actual usage. When code runs in FaaS, developers have no control over where it executes and do not need to worry about scaling: serverless cloud providers create transparency by removing servers from the programming model, or at least by making them more transparent.

Transparency is an archetypal challenge in distributed systems that has not yet been adequately solved. Transparency implies concealing the complexities of distributed systems from the user and the application programmer. According to Coulouris [2], access transparency enables local and remote resources to be accessed using identical operations. Nevertheless, Waldo et al. [3] explain that the goal of merging the programming and computational models of local and remote computing is not new. They state that around every ten years "a furious bout of language and protocol design takes place and a new distributed computing paradigm is announced". In every iteration, a new wave of software modernization is generated, and applications are ported to the newest, hottest paradigm.

We believe the Serverless Compute paradigm, as emerging today [1], [9], will converge on the level of resource abstraction needed to enable transparency. What we call The Serverless End Game is the process of mapping this principle onto emerging disaggregated computing resources (compute, storage, memory), eventually enabling unlimited, flexible scaling. The major hypothesis of this paper is that full transparency will become possible in the next years thanks to predicted advances in latency reduction in distributed systems [4], [6], [16]. This will put an end to the aforementioned cycles of software modernization. The consequences for the field will be enormous: development and maintenance of software systems will become considerably simpler for the majority of users.

BACKGROUND

Latency improvements [4], [6] are boosting resource disaggregation in the Cloud, which is the definitive catalyst to achieve transparency. As we can see in Table 1, current data center networks already enable disk storage disaggregation, where reads from local disk (10ms) are comparable to reads over the network. In contrast, creating a thread in Linux takes about 10μs, still far better than the 15ms/100ms (warm/cold) achieved today in Function-as-a-Service (FaaS) settings. The level of resource disaggregation possible today is already exploited by serverless platforms, and it is the focus of research on Disaggregated Data Centers (DDC) in general [8].

Table 1. Latencies for remote resource access

Resource   Local              Remote today        Remote soon
Storage    10ms               10ms                0.05ms (NVM)
Compute    0.01ms (thread)    15-100ms (FaaS)     0.001-0.1ms (RPC)
Memory     0.0001ms           0.25ms (Redis)      0.002-0.01ms (PMEM)
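To make access transparency concrete, the following minimal Python sketch (illustrative only; LocalBackend, RemoteBackend and run_map are hypothetical names of ours, not an existing API) shows the "identical operations" property: the caller invokes the same map operation whether the work runs on local threads or, in a fully transparent system, on remote serverless functions, which Table 1 suggests is becoming latency-feasible.

# Access transparency sketch: one interface, interchangeable backends.
# LocalBackend and RemoteBackend are hypothetical; only the local path
# is implemented here, using the Python standard library.
from concurrent.futures import ThreadPoolExecutor


def is_prime(n: int) -> bool:
    # Small CPU-bound task used as an example workload.
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))


class LocalBackend:
    # Runs tasks on local threads (thread creation is ~10us, see Table 1).
    def map(self, fn, items):
        with ThreadPoolExecutor() as pool:
            return list(pool.map(fn, items))


class RemoteBackend:
    # Stand-in for a FaaS-backed executor: a transparent runtime would
    # ship fn and items to remote functions behind this same interface.
    def map(self, fn, items):
        raise NotImplementedError("placeholder for a serverless executor")


def run_map(fn, items, backend):
    # Identical operation regardless of where the resources live.
    return backend.map(fn, items)


if __name__ == "__main__":
    print(run_map(is_prime, range(2, 20), LocalBackend()))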
Providing access transparency over DDC resources is the aim of LegoOS, a disseminated, distributed OS for hardware resource disaggregation [13]. LegoOS exposes a distributed set of virtual nodes (vNodes) to users. Each vNode is like a virtual machine that manages its own disaggregated processing, memory and storage resources. LegoOS achieves transparency and backwards compatibility by supporting the Linux system call interface, so that unmodified Linux applications can run on top of it. For example, LegoOS executes two unmodified applications: Phoenix (a single-node, multi-threaded implementation of MapReduce) and TensorFlow.

A good example of providing access transparency over serverless resources is Lithops [14]. Lithops intercepts the Python multiprocessing library in order to access remote serverless resources transparently. Lithops is, however, limited to Python applications that use that library.

Another example of transparency in a serverless context is Faasm [19]. Faasm exposes a specialized system interface that includes some POSIX syscalls, serverless-specific tasks, and frameworks such as OpenMP and MPI. Faasm transparently intercepts calls to this interface to automatically distribute unmodified applications and to execute existing HPC applications over serverless compute resources. Faasm allows colocated functions to share pages of memory and synchronizes these pages across hosts to provide distributed state. However, this is done through a custom API that requires the user to have knowledge of the underlying system, hence breaking full transparency. Furthermore, when functions are widely distributed, this approach exhibits performance similar to traditional distributed shared memory (DSM), which has proven to be poor without hardware support.

Nevertheless, resource disaggregation is still in its infancy, and there is no current solution that provides flexible scaling and access transparency over remote shared memory. Container instantiation is slow compared to local threads, and even fast NVMs [16] are an order of magnitude slower than local memory accesses, which are in the nanosecond range [6]. Besides the aforementioned constraints of current resource disaggregation, serverless computing has a number of well-known limitations [9]: a focus on stateless computations, a lack of efficient communication between executed tasks or functions, maximum runtime limits, and deficiencies in the transparent integration of hardware accelerators.
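To illustrate the Lithops approach described above, the sketch below shows a plain Python multiprocessing program; based on our reading of the Lithops documentation (an assumption, not verified here), changing the first import to lithops.multiprocessing should be enough to run the same map over remote serverless functions.

# Unmodified Python multiprocessing code. To our understanding, swapping
# the import below for "from lithops.multiprocessing import Pool" lets
# Lithops run the same map over remote serverless functions (assumption
# based on the Lithops documentation; a configured backend is required).
from multiprocessing import Pool


def count_words(text: str) -> int:
    # Toy task: count the words in one chunk of text.
    return len(text.split())


if __name__ == "__main__":
    chunks = ["the quick brown fox", "jumps over", "the lazy dog"]
    with Pool() as pool:
        counts = pool.map(count_words, chunks)
    print(sum(counts))  # 9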
REFERENCES

[1] J. Sampé et al., "Toward Multicloud Access Transparency in Serverless Computing," IEEE Software, 2021.
[2] D. A. Patterson et al., "Cloud Programming Simplified: A Berkeley View on Serverless Computing," arXiv, 2019.
[3] M. Rosenblum et al., "It's Time for Low Latency," HotOS, 2011.
[4] T. Coughlin, "Nonvolatile Memory Express: The Link That Binds Them," Computer, 2019.
[5] A. H. A. Hashim et al., "Execution time prediction of imperative paradigm tasks for grid scheduling optimization," 2009.
[6] D. A. Patterson et al., "Attack of the killer microseconds," Commun. ACM, 2017.
[7] T. Roscoe et al., "Arrakis," OSDI, 2014.
[8] J. Waldo et al., "A Note on Distributed Computing," Mobile Object Systems, 1996.
[9] Y. Yao et al., "Granular Computing," 2008.
[10] P. Pietzuch et al., "Faasm: Lightweight Isolation for Efficient Stateful Serverless Computing," USENIX Annual Technical Conference, 2020.
[11] G. Coulouris et al., "Distributed Systems - Concepts and Design," 1988.
[12] S. Shenker et al., "Network Requirements for Resource Disaggregation," OSDI, 2016.
[13] M. Castro et al., "FaRM: Fast Remote Memory," NSDI, 2014.
[14] K. G. Shin et al., "Efficient Memory Disaggregation with Infiniswap," NSDI, 2017.
[15] C. E. Kozyrakis et al., "Pocket: Elastic Ephemeral Storage for Serverless Analytics," OSDI, 2018.
[16] V. Ishakian et al., "The rise of serverless computing," Commun. ACM, 2019.
[17] S. Sen et al., "Disaggregation and the Application," HotCloud, 2019.
[18] M. Kaminsky et al., "Datacenter RPCs can be General and Fast," NSDI, 2018.