MORPHEMIC: a complete paradigm from Cloud to Edge

Ali J. Fahs, Maroun Koussaifi and Mohamed Khalil Labidi
Activeeon, Research & Development Department, December 20, 2021


Cloud limitations and how the Edge can address them

Cloud computing has revolutionized the IT industry by shifting workloads from traditional in-house servers toward Cloud-based services. A 2019 survey showed that 94% of technical professionals across a broad cross-section of organizations use the Cloud in one way or another [1]. Of the 786 surveyed professionals, 84% follow a Multi-Cloud strategy. This adoption can be attributed to the low cost of obtaining and maintaining resources and the ease of scaling them in response to a surge in traffic.

However, the upsurge of fields like Artificial Intelligence (AI), Virtual Reality (VR), stream processing, autonomous vehicles, and most prominently the Internet of Things (IoT) has introduced requirements beyond those of traditional client-server applications [2]. These requirements are strict in terms of:

  • Latency: for example, some VR and gaming applications can only tolerate end-to-end response times of 20 ms (including network and computation delays).
  • Bandwidth: for example, stream processing and IoT applications require transferring large volumes of data to the Cloud for processing.

In contrast, Cloud computing is constrained by a relatively high network latency that can be estimated at 40 ms for a wired connection and 150 ms for a 4G connection [3]. As a result, latency-sensitive applications like VR and gaming perform poorly on the Cloud [4]. In addition, data transfer from the user to the Cloud is limited by the Internet Service Provider (ISP) bandwidth, which impedes data-intensive applications like stream processing and IoT.

Unlike Cloud computing, where resources are centralized in a handful of data centers, Edge computing aims at distributing resources in the vicinity of the end-user [5]. This approach guarantees a low user-to-resource latency that can be as low as a couple of milliseconds [6, 7]. Moreover, the end-user is typically connected to the application resources through a Local Area Network (LAN) and, as a result, does not suffer from ISP bandwidth limitations [8].

MORPHEMIC combines the best of both worlds

The MORPHEMIC project supports the accumulation of computational nodes from versatile resource pools. MORPHEMIC not only provides a Multi-Cloud solution but also allows application developers to acquire Edge nodes. This paradigm combines the best of both worlds: resource-intensive applications can profit from a Multi-Cloud deployment, while latency-sensitive applications can deploy their services on the Edge through the same platform.

This opens the door for a multitude of optimization possibilities. Using MORPHEMIC, an application can be deployed in a hybrid fashion where its latency- and bandwidth-critical components run on the Edge, while its latency-tolerant, data-intensive, or resource-intensive components run on the Cloud.
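The hybrid placement rule above can be sketched as a simple latency test. This is an illustrative sketch, not the MORPHEMIC placement logic; the component names and the latency figures (drawn from the estimates cited earlier: a couple of milliseconds for Edge, roughly 40 ms for wired Cloud access) are assumptions.

```python
# Hypothetical sketch of hybrid Cloud/Edge placement (not the MORPHEMIC API):
# latency-critical components go to the Edge, the rest to the Cloud.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    max_latency_ms: float  # end-to-end latency the component can tolerate

# Assumed latency estimates from the text: ~2 ms Edge, ~40 ms wired Cloud.
EDGE_LATENCY_MS = 2.0
CLOUD_LATENCY_MS = 40.0

def place(component: Component) -> str:
    """Place a component on the Edge only when the Cloud round-trip
    already exceeds its latency budget."""
    if component.max_latency_ms < CLOUD_LATENCY_MS:
        return "edge"
    return "cloud"

app = [Component("vr-renderer", 20.0), Component("batch-analytics", 500.0)]
plan = {c.name: place(c) for c in app}
print(plan)  # {'vr-renderer': 'edge', 'batch-analytics': 'cloud'}
```

A VR renderer with a 20 ms budget cannot be served by a ~40 ms Cloud round-trip, so it lands on the Edge; the latency-tolerant analytics component stays on the Cloud.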

This optimization is the core of the MELODIC Upperware, a component capable of analyzing the application model and deciding which nodes to provision based on each application component's requirements. This process is further enhanced by the monitoring and forecasting features of the MORPHEMIC pre-processor [9], so that the application resources are not only decided at the initial deployment but are also subject to on-the-fly modifications.

Develop your Edge application and MORPHEMIC will handle the resource management

MORPHEMIC uses the Scheduler and Resource Manager of ProActive to handle the resources [9, 10]. ProActive, a product of Activeeon, is a Java-based cross-platform workflow scheduler and resource manager. ProActive acts as a central point connecting MORPHEMIC with all Cloud providers, clusters, and on-premises computational nodes.

ProActive supports the integration of Edge nodes using two main methods:

  • BYON (Bring Your Own Nodes) infrastructure: supports any private node provided by the MORPHEMIC user. ProActive opens an SSH connection to the machine, then creates and connects an abstract node by starting a ProActive node agent on it. This approach supports any kind of node (bare-metal or virtual machine) regardless of its origin (Cloud, Edge, on-premises, cluster, etc.).
  • Edge infrastructure: targets Edge nodes specifically and accounts for the stringent resource limitations such nodes are known for. Similar to BYON, it deploys a node by starting a ProActive node agent on it, but with a focus on a lightweight implementation.
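The BYON flow (SSH in, then launch an agent that registers the node) can be sketched as follows. The agent path, its `--server` flag, and the host details are hypothetical placeholders, not ProActive's actual CLI; the sketch only builds the command instead of executing it.

```python
# Hedged sketch of the BYON flow: connect over SSH and start a node agent
# that registers the machine with the scheduler. All paths and flags below
# are hypothetical, not ProActive's real agent interface.
import shlex

def byon_command(host: str, user: str, scheduler_url: str) -> list[str]:
    """Build the ssh command that would start a node agent on a private machine."""
    remote = f"nohup /opt/node-agent/agent.sh --server {shlex.quote(scheduler_url)} &"
    return ["ssh", f"{user}@{host}", remote]

cmd = byon_command("192.168.1.20", "pi", "https://scheduler.example.org")
print(" ".join(cmd))
```

Because the only requirement on the target machine is SSH access plus the ability to run the agent, the same flow covers bare-metal servers, virtual machines, and small Edge devices alike.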

As the current state of the art shows, ARMv7 devices, and more specifically Raspberry Pis, are commonly used as mainstream Edge nodes. MORPHEMIC and ProActive have taken notice of this trend and enhanced their resource management to support such devices, both by supporting ARMv7 and by coping with their extreme resource limitations.

MORPHEMIC adapts resources according to your application needs

In conclusion, supporting different resource types is the first building block of MORPHEMIC. Using this capability, MORPHEMIC is able to select the resources that deliver the best performance for the application's specific needs. Equally, MORPHEMIC can leverage this capability to deliver tailored proactive adaptation based on the application's traffic.
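The traffic-driven proactive adaptation can be illustrated with a toy forecaster and scaler. The moving-average forecast and the per-replica capacity below are simplifying assumptions; MORPHEMIC's pre-processor uses far more sophisticated forecasting models [9].

```python
# Illustrative sketch (not MORPHEMIC's forecaster) of proactive adaptation:
# derive the replica count from a forecast of the next traffic window.

def forecast_next(history: list, window: int = 3) -> float:
    """Naive moving-average forecast of requests per second."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def replicas_needed(rps: float, capacity_per_replica: int = 100) -> int:
    """Round up so the forecast load fits in the provisioned replicas."""
    return max(1, -(-int(rps) // capacity_per_replica))

history = [80, 120, 250, 310, 340]   # observed requests/s, hypothetical data
rps = forecast_next(history)          # (250 + 310 + 340) / 3 = 300.0
print(replicas_needed(rps))           # 3
```

Acting on the forecast rather than the last observation is what makes the adaptation proactive: the extra replicas are provisioned before the traffic surge arrives rather than after it is measured.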

References

[1] Flexera, “Rightscale state of the cloud report 2019,” 2019, https://bit.ly/RightScaleReport.
[2] A. Ahmed, H. Arkian, D. Battulga, A. J. Fahs, M. Farhadi, D. Giouroukis, A. Gougeon, F. O.
Gutierrez, G. Pierre, P. R. Souza Jr, M. A. Tamiru, and L. Wu, “Fog computing applications:
Taxonomy and requirements,” arXiv preprint arXiv:1907.11621, 2019.
[3] CLAudit project, “Planetary-scale cloud latency auditing platform,” http://claudit.feld.cvut.cz/, 2017.
[4] M. S. Elbamby, C. Perfecto, M. Bennis, and K. Doppler, “Toward low-latency and ultra-reliable
virtual reality,” IEEE Network, vol. 32, no. 2, 2018.
[5] A. Sill, “Standards at the edge of the cloud,” IEEE Cloud Computing, vol. 4, no. 2, 2017.
[6] A. Fahs, “Proximity-aware replicas management in geo-distributed fog computing platforms,”
Ph.D. dissertation, Université de Rennes 1, 2020.
[7] A. J. Fahs, G. Pierre, and E. Elmroth, “Voilà: Tail-latency-aware fog application replicas
autoscaler,” in Proceedings of the IEEE Symposium on Modelling, Analysis, and Simulation
of Computer and Telecommunication Systems (MASCOTS), 2020.
[8] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, “Fog computing and its role in the internet of
things,” in Proceedings of the workshop on Mobile computing, 2012.
[9] MORPHEMIC, “D4.1: Architecture of pre-processor and proactive reconfiguration,” 2021,
https://bit.ly/mor-d-4-1.
[10] MORPHEMIC, “D5.3: Deployment artefact manager,” 2021, https://bit.ly/mor-d-5-3.