GekkoFS ranked #4 in the IO500’s “10-Node Challenge” at SC’19

The GekkoFS file system, jointly developed by the Storage Systems for Extreme Computing team at the Barcelona Supercomputing Center (BSC) and the Efficient Computing and Storage team at Johannes Gutenberg-Universität Mainz (JGU), has taken the number 4 spot in the IO500’s ‘10-Node Challenge’.

GekkoFS is a file system capable of aggregating the local I/O capacity and performance of an HPC cluster’s compute nodes to create an ephemeral high-performance storage space that applications can access in a distributed manner. This storage space allows HPC applications and simulations to run in isolation from each other with regard to I/O, which reduces interference and improves performance.
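Conceptually, such a file system spreads file data across all participating nodes so that no single node becomes a bottleneck. The sketch below illustrates the general idea with simple hash-based chunk placement; it is a toy model under assumed names (node list, chunk size, home_node helper), not GekkoFS’ actual implementation:

    import hashlib

    NODES = [f"node{i:02d}" for i in range(10)]  # hypothetical 10-node job
    CHUNK_SIZE = 512 * 1024                      # illustrative chunk size

    def home_node(path: str, chunk_index: int) -> str:
        """Map a (file, chunk) pair to the node that stores it via hashing."""
        key = f"{path}:{chunk_index}".encode()
        digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
        return NODES[digest % len(NODES)]

    # Consecutive chunks of one file land on different nodes, so reads and
    # writes to a single file are spread across the whole job's local storage.
    for i in range(4):
        print(i, home_node("/gkfs/output/checkpoint.dat", i))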

GekkoFS is being developed as part of the Horizon 2020 NEXTGenIO project and Germany’s SPPEXA programme.

The IO500’s ‘10-Node Challenge’ list is a global ranking that runs multiple concurrent processes on 10 compute nodes to benchmark the I/O performance of an HPC storage system. GekkoFS’ score of 125 ranks it fourth on IO500’s 10-Node Challenge list and ninth on IO500’s Full List, with an average bandwidth of 21.41 GiB/s and an average of 728,680 operations per second.
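For reference, an IO500 score is the geometric mean of the achieved bandwidth (in GiB/s) and the achieved operation rate (in kIOP/s). A minimal sketch reproducing GekkoFS’ score from the figures above (variable names are illustrative):

    import math

    # IO500 reports a single score: the geometric mean of the aggregate
    # bandwidth (GiB/s) and the aggregate operation rate (kIOP/s).
    bandwidth_gib_s = 21.41    # GekkoFS average bandwidth
    rate_kiops = 728.680       # GekkoFS average rate (728,680 op/s)

    score = math.sqrt(bandwidth_gib_s * rate_kiops)
    print(f"IO500 score: {score:.0f}")  # prints: IO500 score: 125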

GekkoFS’ IO500 benchmark was run on the 34 compute nodes of the NEXTGenIO prototype cluster. NEXTGenIO is an R&D project involving EPCC, Intel, Fujitsu, Arm, ECMWF, TUD, Arctur, and BSC that was granted €8 million in funding by the European Commission to prototype an I/O-specialized HPC cluster based on Intel® Optane™ DC persistent memory technology. Each of the prototype’s 34 compute nodes is equipped with two second-generation Intel® Xeon® Scalable processors and 3TB of Intel® Optane™ DC persistent memory, providing approximately 102TB of persistent I/O capacity to HPC applications.

Within the project, BSC collaborated with JGU in the design and development of GekkoFS.
