Workshop Program:

  • 9:30 – 10:10 Pascal Felber (Unine, CH) Privacy-Preserving Publish/Subscribe
  • 10:10 – 10:50 Antonio Carzaniga (Univ. Lugano, CH) Fully Decentralized Estimation of Some Global Properties of a Network
  • 10:50 – 11:10 Break
  • 11:10 – 11:50 Gael Thomas (Lip6, FR) A Study of the Scalability of Stop-the-world Garbage Collectors on Multicores
  • 11:50 – 12:30 Guillaume Pierre (Univ. Rennes 1, FR) Robust Performance Control for Web Applications in the Cloud
  • 12:30 – 13:30 Lunch
  • 13:30 – 14:10 Pablo Rodriguez (Telefonica, ES) Opening up the mobile web
  • 14:10 – 14:50 Rachid Guerraoui (EPFL, CH) Adversary Oriented Computing
  • 14:50 – 15:30 Peter Triantafillou (Univ. Glasgow, UK) Facilitating Time Traveling Queries on Cloud Stores
  • 15:30 – 16:00 Break
  • 16:00 – 16:40 Amr El Abbadi (Univ. California, US) Consistently Spanning the Globe for Fault-Tolerance
  • 16:40 – 17:20 Paolo Costa (Imperial College, UK) Bridging the Tenant-Provider Gap in Networked Cloud Services

Pascal Felber

Privacy-Preserving Publish/Subscribe

Content-based publish/subscribe is an appealing paradigm for building large-scale distributed applications. Such applications are often deployed over multiple administrative domains, some of which may not be trusted. Recent attacks in public clouds indicate that a major concern in untrusted domains is the enforcement of privacy. By routing data based on subscriptions evaluated against the content of publications, publish/subscribe systems can expose critical information to unauthorized parties. Information leakage can be avoided by means of privacy-preserving filtering, which is supported by several mechanisms for encrypted matching. Unfortunately, all existing approaches share a high performance overhead and make it difficult to apply classical optimizations for content-based filtering, such as per-attribute containment.  We propose a novel mechanism that greatly reduces the cost of supporting privacy-preserving filtering based on encrypted matching operators. It is based on a pre-filtering stage that can be combined with containment graphs, if available. Our experiments indicate that pre-filtering can significantly reduce the number of encrypted matching operations for a variety of workloads, and therefore the costs associated with the cryptographic mechanisms. Furthermore, our analysis shows that the additional data structures used for pre-filtering have very limited impact on the effectiveness of privacy preservation.
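As a loose illustration of how a pre-filtering stage can avoid costly encrypted matching, consider a sketch where each subscription and publication carries a cheap fingerprint of the attribute names it involves, and the expensive encrypted match runs only when the subscription's fingerprint is a subset of the publication's. This is a hypothetical simplification under my own assumptions, not the actual scheme from the talk, and `encrypted_match` stands in for the real cryptographic operator:

```python
import hashlib

def fingerprint(attr_names):
    """Cheap, order-independent fingerprint of a set of attribute names."""
    return frozenset(hashlib.sha256(n.encode()).hexdigest() for n in attr_names)

def prefilter_match(pub_fp, sub_fp):
    """A subscription can only match a publication if every attribute it
    constrains is present, so its fingerprint must be a subset of the
    publication's fingerprint. This pre-filter never rejects a true match."""
    return sub_fp <= pub_fp

def route(publication_attrs, subscriptions, encrypted_match):
    """Run the costly encrypted matching operator only on subscriptions
    that survive the cheap pre-filter."""
    pub_fp = fingerprint(publication_attrs)
    return [s for s in subscriptions
            if prefilter_match(pub_fp, fingerprint(s)) and encrypted_match(s)]
```

Because attribute names hash the same way on both sides, the pre-filter has no false negatives; it can only let through extra candidates, which the encrypted match then rejects.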

Antonio Carzaniga

Fully Decentralized Estimation of Some Global Properties of a Network

It is often beneficial to architect networks and overlays as fully decentralized systems, in the sense that any computation (e.g., routing or search) would use only local information, and no single node would have a complete view or control over the whole network. Yet it might still be necessary to compute global properties of the network.  In this talk, I will present a fully decentralized algorithm to compute some global properties of a network that can be derived from the spectrum of the network.  More specifically, the algorithm computes the most significant eigenvalues of a descriptive matrix closely related to the adjacency matrix of the network graph.  Such spectral properties can then be used to derive, for example, the “mixing time” of the network, which is an important parameter of random walks and related search algorithms.  The key insight is to view the network as a linear dynamic system whose impulse response can be computed efficiently and locally by each node.  This impulse response can then be used to identify the spectral properties of the network. The algorithm is completely decentralized and requires only minimal local state and local communication.  I will present experimental results that show that the algorithm works well on different kinds of networks and also in the presence of network instability.
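A centralized toy simulation conveys the underlying idea (this is a rough sketch under my own assumptions, not the algorithm from the talk): model the network as the linear system x_{t+1} = A x_t, where one round of the system corresponds to every node summing its neighbors' values. A node that injects an impulse and records its own local trace h_t = (A^t e_i)_i sees a signal dominated by the largest eigenvalue of A, which it can therefore estimate from even-step ratios, since h_{2t+2}/h_{2t} tends to the square of that eigenvalue (assuming the dominant term does not vanish at that node):

```python
import numpy as np

def local_impulse_response(A, node, steps):
    """Impulse response observed locally at `node`: h_t = (A^t e_node)[node].
    Each step corresponds to one round of neighbor-to-neighbor exchange."""
    x = np.zeros(A.shape[0])
    x[node] = 1.0
    h = [x[node]]
    for _ in range(steps):
        x = A @ x
        h.append(x[node])
    return h

def estimate_dominant_eigenvalue(A, node=0, steps=40):
    """Estimate |lambda_1| from even-step ratios of the local trace:
    h_{2t+2}/h_{2t} -> lambda_1^2 as t grows."""
    h = local_impulse_response(A, node, steps)
    t = (steps // 2) * 2  # largest even index <= steps
    return float(np.sqrt(h[t] / h[t - 2]))

# Example: the complete graph on 4 nodes has adjacency spectrum {3, -1, -1, -1},
# so the estimate should converge to 3.
A = np.ones((4, 4)) - np.eye(4)
est = estimate_dominant_eigenvalue(A)
```

The node needs only its own trace, which is what makes a fully decentralized version plausible: every exchange in the iteration is between immediate neighbors.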

Gaël Thomas

A Study of the Scalability of Stop-the-world Garbage Collectors on Multicores

Large-scale multicore architectures create new challenges for garbage collectors (GCs). In particular, throughput-oriented stop-the-world algorithms demonstrate good performance with a small number of cores, but have been shown to degrade badly beyond approximately 8 cores on a 48-core machine with OpenJDK 7. This negative result raises the question of whether the stop-the-world design has intrinsic limitations that would require a radically different approach. Our study suggests that the answer is no, and that there is no compelling scalability reason to discard the existing highly-optimised throughput-oriented GC code on contemporary hardware. We study the default throughput-oriented garbage collector of OpenJDK 7, called Parallel Scavenge. We identify its bottlenecks, and show how to eliminate them using well-established parallel programming techniques. On the SPECjbb2005, SPECjvm2008 and DaCapo 9.12 benchmarks, the improved GC matches the performance of Parallel Scavenge at low core count, but scales well, up to 48 cores.

Bio: Gaël Thomas is an associate professor at UPMC in Paris, France, where he joined the faculty in 2006. He is a member of the Regal team, a joint research team of Inria and UPMC that investigates large-scale distributed systems. His main research interests include operating systems, managed runtime environments, large-scale distributed massively multiplayer games, and multicore programming. He received the PhD degree from UPMC in 2005 and subsequently performed postdoctoral research as a member of the Adele Team at the Université de Grenoble “Joseph Fourier”. Since 2011, he has been the chair of the French chapter of the ACM SIGOPS.
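One classic example of the kind of well-established parallel programming technique that removes such bottlenecks is work stealing: instead of all GC worker threads contending on one shared task queue, each worker keeps its own deque and steals from a random victim only when it runs dry. The toy sequential simulation below is purely illustrative and is not taken from OpenJDK's actual GC code:

```python
import random
from collections import deque

class Worker:
    """A GC worker thread, simulated sequentially for illustration."""
    def __init__(self):
        self.tasks = deque()
        self.done = 0

    def run_step(self, workers, rng):
        if self.tasks:
            self.tasks.pop()  # LIFO pop from own deque (cache-friendly)
            self.done += 1
            return
        # Own deque is empty: try to steal from a random victim's other end.
        victim = rng.choice(workers)
        if victim is not self and victim.tasks:
            self.tasks.append(victim.tasks.popleft())  # FIFO steal

def process_all(n_workers, n_tasks, seed=42):
    """Process n_tasks starting from a maximally skewed load: every task
    begins on worker 0, and the others must steal to stay busy."""
    rng = random.Random(seed)
    workers = [Worker() for _ in range(n_workers)]
    workers[0].tasks.extend(range(n_tasks))
    while any(w.tasks for w in workers):
        for w in workers:
            w.run_step(workers, rng)
    return sum(w.done for w in workers)
```

The point of the structure is that the common case (pop from one's own deque) involves no shared state at all; coordination costs are paid only when a worker would otherwise idle.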

Amr El Abbadi

Consistently Spanning the Globe for Fault-Tolerance

Over the past few years, cloud computing and the growth of global large-scale computing systems have led to applications which require data management across multiple datacenters. Initially, key-value stores were proposed to provide single row-level operations with eventual consistency. Although key-value stores provide high availability, they are not ideal for applications that require a consistent view of the data. More recently, there has been a gradual shift to provide transactions with strong consistency, for example in Google’s Megastore and Spanner. In this talk, we will explore the data management design space for geo-replicated data and discuss different approaches, starting from key-value stores and moving to full transactional execution while replicating data in multi-datacenter environments.

Guillaume Pierre

Robust Performance Control for Web Applications in the Cloud

One of the major innovations provided by Cloud computing platforms is their pay-per-use model, where applications can request and release resources at any time according to their needs and pay only for the resources they actually used. This business model is particularly favorable for application domains where workloads vary widely over time, such as Web applications. Resource provisioning algorithms aim to drive this on-demand scalability by deciding which resources should be added or removed at which moment to keep an application’s performance under control at the lowest possible cost. However, we observe a significant discrepancy between the sophisticated resource provisioning algorithms proposed by academic researchers and the very simple systems that are actually used in practice. We will discuss the reasons behind this discrepancy and propose practical and robust mechanisms that may actually be deployed in real systems.
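The simple systems used in practice are typically threshold rules of the kind offered by commercial auto-scalers: add a server when average utilization exceeds an upper bound, remove one when it falls below a lower bound. A minimal sketch, with illustrative threshold values that are not from the talk:

```python
def scale_decision(cpu_utilization, servers, low=0.3, high=0.7, min_servers=1):
    """Threshold-based provisioning rule: return the new server count
    given the current average CPU utilization (0.0-1.0) and fleet size."""
    if cpu_utilization > high:
        return servers + 1          # scale out under load
    if cpu_utilization < low and servers > min_servers:
        return servers - 1          # scale in when idle, keeping a floor
    return servers                  # otherwise leave the fleet unchanged
```

Rules like this are robust and easy to reason about, but they react only after utilization has already drifted, which is exactly the gap that more sophisticated provisioning algorithms try to close.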

Peter Triantafillou

Facilitating Time Traveling Queries on Cloud Stores