
Decentralized Fog Computing Infrastructure Control

Team and supervisors
Department / Team: 
Team website: 
Thesis supervisor
Guillaume Pierre
Co-director(s), co-supervisor(s)
Name / E-mail / Phone
Guillaume Pierre
02 99 84 25 20
Thesis subject

Cloud computing infrastructures are very powerful and flexible, but they are also located very far from their end users. Typical network latencies between an end user and the closest public cloud data center are in the order of 20-40 ms over high-quality wired networks, and 100-150 ms over 4G mobile phone connections. This performance level is acceptable for simple applications such as web browsing, but it makes it impossible to create a wide range of interactive applications. For example, to enable an "instantaneous" feeling, augmented reality applications require that end-to-end latencies (including all networking and processing delays) remain below 20 ms.

To address these issues, a new type of "fog computing" infrastructure is being designed [1,5]. Instead of treating the mobile operator's network as a high-latency dumb pipe between the end users and the external service providers, fog platforms aim to bring cloud resources to the edge of the network, in very close physical proximity with the end users. This is expected to offer extremely low latency between the client devices and the cloud resources serving them.

Fog platforms have a very different geographical distribution compared to traditional clouds. Classical datacenter clouds are composed of many reliable and powerful machines located in a very small number of data centers and interconnected by very high-speed networks. In contrast, fogs are composed of a very large number of points-of-presence, each containing a couple of weak and potentially unreliable servers, interconnected with each other by commodity long-distance networks.

However, the management part of current fog computing platforms remains centralized: a single node (or small group of nodes) is in charge of maintaining the list of available server machines, monitoring them, distributing software to them, deciding which server must take care of which task, etc. This organization generates unnecessary long-distance network traffic, does not handle network partitions well, and may even create legal issues if the controller and the compute/storage nodes are located in different jurisdictions.

The goal of this project is to reduce the discrepancy between the broadly distributed compute/storage resources and the -- currently -- extremely centralized control of these resources. We can exploit the fact that the virtual resources in a fog computing platform are in most cases created in immediate proximity to the user(s) who will access them. In this perspective, the platform management processes could be distributed evenly across the infrastructure nodes, so that the virtual and physical resources, the users accessing them, and the management processes organizing this system are co-located within a few hundred meters of each other. One interesting direction to address this problem -- which remains to be (in)validated by the doctoral student -- is to execute cloud resource scheduling algorithms [7] on every point-of-presence of the system (whereas traditional clouds centralize these algorithms in a single node), and to base the necessary coordination of multiple schedulers on gossiping algorithms [6] between neighboring points-of-presence.
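To give a flavor of how gossip-based coordination between per-point-of-presence schedulers could work, the sketch below simulates push-pull averaging gossip [6] over a small ring of points-of-presence: each node repeatedly exchanges its local load estimate with a random neighbor, and all estimates converge toward the global mean without any central controller. The `PoP` class, the ring topology, and the load values are illustrative assumptions, not part of the proposed design.

```python
import random

random.seed(42)  # make this illustrative run deterministic


class PoP:
    """A fog point-of-presence running its own scheduler replica."""

    def __init__(self, name, load):
        self.name = name
        self.load = float(load)  # local estimate of platform load
        self.neighbors = []      # partial view: nearby PoPs only


def gossip_round(pops):
    """One round of push-pull averaging gossip: each PoP exchanges its
    load estimate with one random neighbor, and both keep the mean."""
    for pop in pops:
        peer = random.choice(pop.neighbors)
        avg = (pop.load + peer.load) / 2
        pop.load = peer.load = avg


# Build a small ring of PoPs with an uneven initial load distribution.
pops = [PoP(f"pop{i}", load) for i, load in enumerate([90, 10, 50, 30])]
for i, pop in enumerate(pops):
    pop.neighbors = [pops[(i - 1) % len(pops)], pops[(i + 1) % len(pops)]]

for _ in range(30):
    gossip_round(pops)

# All local estimates converge toward the global mean (45 here), so every
# scheduler obtains a consistent global view using only neighbor exchanges.
print([round(p.load, 1) for p in pops])
```

Averaging is only the simplest example of a gossip aggregate; the same neighbor-to-neighbor exchange pattern can disseminate membership lists, monitoring data, or scheduling decisions across points-of-presence.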

This project will be conducted within the IRISA Myriads team which is working on the design of innovative infrastructures and middleware for future fog computing platforms [2,3,4].

Tentative work plan:

  • M1-M6: State-of-the-art study of fog computing platforms, cloud scheduling algorithms, and gossiping algorithms.
  • M7-M12: Study of end-user mobility patterns (based on existing publicly available traces) to characterize the spatial and temporal dynamicity of the fog resource requirements generated by these users.
  • M13-M24: Design, evaluation, and validation of a decentralized fog resource scheduler.
  • M25-M30: Extension of this work to other features of fog infrastructure control, such as resource monitoring and anomaly detection.
  • M31-M36: PhD dissertation writing.

[1] "MEC-ConPaaS: An experimental single-board based mobile edge cloud." Alexandre van Kempen, Teodor Crivat, Benjamin Trubert, Debaditya Roy and Guillaume Pierre. In Proceedings of the IEEE Mobile Cloud conference, April 2017.

[2] "Kangaroo: A Tenant-Centric Software-Defined Cloud Infrastructure." Kaveh Razavi, Ana Ion, Genc Tato, Kyuho Jeong, Renato Figueiredo, Guillaume Pierre and Thilo Kielmann. In Proceedings of the IEEE International Conference on Cloud Engineering (IC2E), Tempe, AZ, USA, March 2015. http://www.globule.org/publi/KTCSDCI_ic2e2015.html

[3] "ConPaaS: a Platform for Hosting Elastic Cloud Applications." Guillaume Pierre and Corina Stratan. IEEE Internet Computing 16(5), September-October 2012. http://www.globule.org/publi/CPHECA_ic2012.html

[4] "The mobile edge cloud testbed at IRISA Myriads team." https://youtu.be/7uLkLitiSPo

[5] "Fog Computing and its Ecosystem." Ramin Elahi, tutorial at the USENIX FAST conference, 2016. http://bit.do/c2Ta7

[6] "Gossip-based peer sampling." Mark Jelasity, Spyros Voulgaris, Rachid Guerraoui, Anne-Marie Kermarrec and Maarten Van Steen. ACM Transactions on Computer Systems 25(3), 2007. http://acropolis.cs.vu.nl/~spyros/www/papers/Gossip-based%20Peer%20Sampl...

[7] "A Survey on Resource Scheduling in Cloud Computing: Issues and Challenges." Sukhpal Singh and Inderveer Chana. Journal of Grid Computing 14, 2016. http://link.springer.com/article/10.1007/s10723-015-9359-2

Start date: 
As soon as possible
Keywords: 
Fog computing, Decentralized computing, Resource scheduling, Infrastructure control
IRISA - Campus universitaire de Beaulieu, Rennes