Applications continue to demand ever more computing resources for faster execution. This demand has led to specialized devices such as accelerators and processing-in-memory (PIM) components that meet diverse application requirements. Servers' operating systems therefore face a collection of heterogeneous components and must manage data copies between host memory and device memory. To reduce the cost, and simplify the management, of accessing remote memory (device memory other than host memory), the industry is converging on a cache-coherent protocol: Compute Express Link (CXL). CXL's goal is to expose device memory in the PCIe addressable space so that CPUs can issue direct read and write operations to it. Although CXL has been discussed since 2020, CXL-capable devices only started to ship in November 2022.
However, the changes applications require to support CXL are not yet well understood, which makes it difficult to fully leverage CXL's potential. We claim that this difficulty stems from (1) the lack of intuitive, user-friendly interfaces that user-space applications can use, (2) a limited understanding of the use cases where CXL actually improves performance, and (3) no characterization of the compute overhead that using CXL may introduce.
The main aim of this Ph.D. is to unlock the full potential of CXL. The objective is to ease the adoption of CXL by user-space applications, providing them with the information and features they need to fully express their requirements in context. Our key insight is that a comprehensive analysis of CXL's benefits and overheads, coupled with a well-tuned, user-friendly interface, will ease its adoption. Our starting point is therefore an extensive evaluation of CXL in different contexts, highlighting both its improvements and its overheads in terms of performance and computing resources. The output of this evaluation will be the set of use cases where CXL helps with minimal overhead. Based on these results, we will design and implement user-space interfaces that match applications' needs, letting them easily and fully leverage CXL while abstracting its low-level complexity. We intend to test the resulting interface on several real-world applications deployed on both simulated and real datacenter testbeds.
- Kevin Loughlin, Stefan Saroiu, Alec Wolman, Yatin A. Manerkar, and Baris Kasikci. 2022. MOESI-prime: preventing coherence-induced hammering in commodity workloads. In Proceedings of the 49th Annual International Symposium on Computer Architecture (ISCA '22). Association for Computing Machinery, New York, NY, USA, 670–684. https://doi.org/10.1145/3470496.3527427
- Huaicheng Li, Daniel S. Berger, Stanko Novakovic, Lisa Hsu, Dan Ernst, Pantea Zardoshti, Monish Shah, Samir Rajadnya, Scott Lee, Ishwar Agarwal, Mark D. Hill, Marcus Fontoura, and Ricardo Bianchini. Pond: CXL-Based Memory Pooling Systems for Cloud Platforms. To appear in ASPLOS 2023.
- Yibo Huang, Yukai Huang, Ming Yan, Jiayu Hu, Cunming Liang, Yang Xu, Wenxiong Zou, Yiming Zhang, Rui Zhang, Chunpu Huang, and Jie Wu. 2022. An ultra-low latency and compatible PCIe interconnect for rack-scale communication. In Proceedings of the 18th International Conference on emerging Networking EXperiments and Technologies (CoNEXT '22). Association for Computing Machinery, New York, NY, USA, 232–244. https://doi.org/10.1145/3555050.3569128
- Yizhou Shan, Will Lin, Zhiyuan Guo, and Yiying Zhang. 2022. Towards a fully disaggregated and programmable data center. In Proceedings of the 13th ACM SIGOPS Asia-Pacific Workshop on Systems (APSys '22). Association for Computing Machinery, New York, NY, USA, 18–28. https://doi.org/10.1145/3546591.3547527
- Jacob Wahlgren, Maya Gokhale, and Ivy B. Peng. Evaluating Emerging CXL-enabled Memory Pooling for HPC Systems. SC 2022.
- Q. Yang, R. Jin, B. Davis, D. Inupakutika, and M. Zhao. Performance Evaluation on CXL-enabled Hybrid Memory Pool. 2022 IEEE International Conference on Networking, Architecture and Storage (NAS), Philadelphia, PA, USA, 2022, pp. 1–5. https://doi.org/10.1109/NAS55553.2022.9925356