Objectives
Data@Exascale is an associated team bringing together the KerData team from Inria Rennes - Bretagne Atlantique, Argonne National Laboratory (ANL), and the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (UIUC).
Principal investigator (Inria): Gabriel Antoniu, KerData team.
Principal investigators (partners): Robert Ross, Argonne National Laboratory, USA, and Marc Snir, University of Illinois at Urbana-Champaign and Argonne National Laboratory, USA.
Our research addresses large-scale data management for post-petascale supercomputers and for clouds. We aim to investigate several open issues related to storage and I/O in HPC, as well as in situ data visualization and analysis for large-scale simulations.
Investigate new storage architectures for Exascale systems, leveraging BLOB-based large-scale storage able to cope with complex data models.
Approach: explore how to combine the benefits of the approaches to Big Data storage currently developed by the partners: the BlobSeer approach (KerData, Inria), which provides support for multi-versioning and efficient fine-grained access to massive data under heavy concurrency, and the Triton approach (ANL), which introduces new object storage semantics. The final goal of the resulting architecture is to propose efficient solutions to data-related bottlenecks in Exascale HPC systems.
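To make the multi-versioning idea concrete, below is a minimal C++ sketch of a versioned BLOB in which every write produces a new version and unmodified chunks are shared between versions, so readers of older versions never conflict with concurrent writers. The names (VersionedBlob, write, read) are ours for illustration; this is not BlobSeer's or Triton's actual interface.

    // Hypothetical sketch, not the BlobSeer or Triton API: a chunked BLOB where
    // each write yields a new version and untouched chunks are aliased, not copied.
    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <memory>
    #include <string>

    using Chunk = std::shared_ptr<std::string>;    // immutable chunk payload

    class VersionedBlob {
        // version id -> (chunk index -> chunk); chunks are shared across versions
        std::map<uint64_t, std::map<size_t, Chunk>> versions_;
        uint64_t latest_ = 0;
    public:
        VersionedBlob() { versions_[0] = {}; }
        // A write produces a new version; unmodified chunks are shared, not copied.
        uint64_t write(size_t chunk_idx, const std::string& data) {
            auto snapshot = versions_[latest_];            // alias old chunks
            snapshot[chunk_idx] = std::make_shared<std::string>(data);
            versions_[++latest_] = std::move(snapshot);
            return latest_;
        }
        // Fine-grained read of a single chunk from any past version.
        std::string read(uint64_t version, size_t chunk_idx) const {
            auto v = versions_.find(version);
            if (v == versions_.end()) return {};
            auto c = v->second.find(chunk_idx);
            return c == v->second.end() ? std::string{} : *c->second;
        }
    };

    int main() {
        VersionedBlob blob;
        uint64_t v1 = blob.write(0, "aaaa");
        uint64_t v2 = blob.write(1, "bbbb");   // chunk 0 is shared with v1
        std::cout << blob.read(v1, 0) << " " << blob.read(v2, 1) << "\n";
    }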
Investigate new approaches to the design of I/O middleware for Exascale systems.
We aim to optimize data storage, processing, and visualization by leveraging two complementary techniques. The first is I/O forwarding for leadership-class computing systems with IOFSL (ANL), which aggregates I/O requests and executes larger, better-organized requests on dedicated nodes. The second is in situ asynchronous data processing and movement through dedicated I/O cores, as enabled by the Damaris approach, the result of joint efforts by two partners of this proposal (the KerData team at Inria and UIUC).
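The sketch below illustrates the dedicated-I/O-core pattern underlying this approach, with a standard-library thread standing in for a dedicated core: compute code hands data off to a queue and returns immediately, while the I/O thread aggregates pending requests and flushes them in batches. The code is a hypothetical model, not the Damaris or IOFSL API.

    // Minimal sketch of the dedicated-I/O-core pattern (hypothetical code):
    // compute threads enqueue write requests without blocking; one dedicated
    // thread aggregates and flushes them asynchronously.
    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <vector>

    struct WriteRequest { int step; std::string data; };

    std::queue<WriteRequest> pending;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    void io_core() {                          // runs on the "dedicated core"
        for (;;) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [] { return !pending.empty() || done; });
            std::vector<WriteRequest> batch;  // aggregate everything pending
            while (!pending.empty()) { batch.push_back(pending.front()); pending.pop(); }
            bool finished = done;
            lk.unlock();
            for (auto& r : batch)             // one large, well-organized write
                std::cout << "flush step " << r.step << ": " << r.data << "\n";
            if (finished) return;
        }
    }

    void expose(int step, std::string data) { // called by compute threads
        { std::lock_guard<std::mutex> lk(m); pending.push({step, std::move(data)}); }
        cv.notify_one();                      // returns immediately
    }

    int main() {
        std::thread io(io_core);
        for (int step = 0; step < 3; ++step)
            expose(step, "field snapshot");   // non-blocking on the compute side
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_one();
        io.join();
    }

In Damaris itself, the dedicated cores on each multicore node receive data through shared memory rather than an in-process queue, but the asynchronous hand-off pattern is the same.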
Propose improved communication algorithms for new network topologies.
New supercomputers rely on network topologies that pose new challenges for communication algorithms. Some collective communication algorithms in MPI implementations must be redesigned to take these new topologies into account. However, studying such algorithms directly on the actual machine requires prohibitively expensive allocations and makes any investigation difficult to reproduce. Our goal is therefore to study new algorithms in the context of event-driven network simulation, in order to identify which algorithms offer the greatest potential for performance improvement. This work is based on CODES, an event-driven network simulator developed by ANL.
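As a toy example of the kind of study this enables, the following discrete-event sketch estimates the completion time of a binomial-tree broadcast under a uniform per-hop latency; replacing that constant with topology-aware link costs is what lets one compare collective algorithms without an allocation on the real machine. This is a self-contained illustration, not CODES code.

    // Toy discrete-event simulation (not CODES): completion time of a
    // binomial-tree broadcast over 16 ranks with a uniform per-hop latency.
    #include <algorithm>
    #include <iostream>
    #include <queue>
    #include <vector>

    struct Event { double time; int rank; };  // "rank receives the data at time"
    struct Later {
        bool operator()(const Event& a, const Event& b) const { return a.time > b.time; }
    };

    int main() {
        const int nranks = 16;
        const double latency = 1.0;           // per-hop cost: the model parameter
        std::vector<double> recv_time(nranks, -1.0);
        std::priority_queue<Event, std::vector<Event>, Later> events;
        events.push({0.0, 0});                // the root holds the data at t = 0

        double makespan = 0.0;
        while (!events.empty()) {
            Event e = events.top(); events.pop();
            if (recv_time[e.rank] >= 0.0) continue;  // already reached earlier
            recv_time[e.rank] = e.time;
            makespan = std::max(makespan, e.time);
            // Binomial tree: a rank forwards one message per round, first to
            // rank + s (s = smallest power of two > rank), then rank + 2s, ...
            int s = 1;
            while (s <= e.rank) s <<= 1;
            double t = e.time;
            for (; e.rank + s < nranks; s <<= 1)
                events.push({t += latency, e.rank + s});
        }
        std::cout << "broadcast done at t = " << makespan << "\n";  // 4 = log2(16)
    }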
Explore techniques enabling adaptive, effective cloud services for HPC.
We aim to explore ways of leveraging elastic computing environments such as Nimbus (ANL), which are highly available, scalable, and able to adapt quickly to changing demands, in conjunction with approaches to high-throughput, concurrency-optimized massive data processing on clouds that exploit data locality, such as BlobSeer (KerData).
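Below is a minimal sketch of the locality-aware scheduling idea, using assumed data structures of our own (a replica map and per-node load counters) rather than any actual Nimbus or BlobSeer interface: a task that reads a chunk is dispatched to a node already hosting a replica of that chunk, falling back to the least-loaded node only when no replica exists.

    // Hypothetical sketch of locality-aware task placement (not the Nimbus or
    // BlobSeer API): prefer a replica-holding node, else the least-loaded one.
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    struct Node { std::string name; int load = 0; };

    int main() {
        std::vector<Node> nodes = {{"n0"}, {"n1"}, {"n2"}};
        // chunk id -> indices of nodes holding a replica of that chunk
        std::map<std::string, std::vector<int>> replicas = {
            {"chunk-a", {0, 2}}, {"chunk-b", {1}}};

        std::vector<std::string> tasks = {"chunk-a", "chunk-b", "chunk-c"};
        for (const auto& chunk : tasks) {
            int target = -1;
            auto it = replicas.find(chunk);
            if (it != replicas.end())             // prefer a node with the data
                for (int n : it->second)
                    if (target < 0 || nodes[n].load < nodes[target].load) target = n;
            if (target < 0)                       // no replica: least-loaded node
                for (int n = 0; n < (int)nodes.size(); ++n)
                    if (target < 0 || nodes[n].load < nodes[target].load) target = n;
            ++nodes[target].load;
            std::cout << chunk << " -> " << nodes[target].name << "\n";
        }
    }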