The progressive development of the flagship supercomputing system of Italian research relies on state-of-the-art microprocessor technology, delivering an extremely high-performance computing system that keeps a ‘green’ soul. One of the goals of the project conceived by Cineca is to gradually increase the computational power up to 50 Pflop/s without exceeding, at any stage, a power consumption of 3 MW. The aim is to help researchers address the major scientific and socio-economic challenges of our time: supercomputing and Big Data analytics are essential tools for computational and data-driven science in national and international research.
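To make the efficiency constraint concrete, here is a quick back-of-the-envelope calculation; the 50 Pflop/s and 3 MW figures come from the paragraph above, while the Gflop/s-per-watt framing is only an illustration, not a figure from the project documents:

```python
# Efficiency implied by the project target: 50 Pflop/s within a 3 MW envelope.
peak_flops = 50e15       # 50 Pflop/s (target peak performance)
power_watts = 3e6        # 3 MW (power-consumption ceiling)

gflops_per_watt = peak_flops / power_watts / 1e9
print(f"Implied efficiency: {gflops_per_watt:.1f} Gflop/s per watt")
# Implied efficiency: 16.7 Gflop/s per watt
```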
The HPC Infrastructure
Cineca is currently one of the Large Scale Facilities in Europe and a PRACE Tier-0 hosting site.
- MARCONI: It is the Tier-0 system that replaced FERMI in July 2016. It is based on the LENOVO NeXtScale platform and the Intel Xeon Phi product family, and has been gradually upgraded since June 2016. The current configuration consists of Marconi-A3 with Intel SkyLake processors (in production since August 2017, upgraded in January 2018 and completed in November 2018). In previous configuration phases, Marconi had two additional partitions: Marconi-A1 with Intel Broadwell processors, in production from July 2016 until September 2018, and Marconi-A2 with Intel Knights Landing processors, in production from January 2017 until January 2020. Marconi has been ranked among the most powerful supercomputers in the Top500 list: rank 12 in November 2016 and rank 19 in November 2018.
- MARCONI100: It is the most recent accelerated Marconi partition, available since April 2020. This is an IBM system equipped with NVIDIA Volta V100 GPUs, opening the way to the accelerated pre-exascale Leonardo supercomputer.
- DGX: It is an NVIDIA A100 accelerated system, available since January 2021 and particularly suitable for Deep Learning frameworks. It is a 3-node AMD-based system equipped with 8 NVIDIA A100 Tensor Core GPUs per node.
- GALILEO100: This is our Tier-1 infrastructure for scientific research, co-funded by the European ICEI (Interactive Computing e-Infrastructure) project and engineered by DELL. It has been available to Italian public and industrial researchers since August 2021 (in pre-production); full production started in October 2021.
- ADA CLOUD: The CINECA HPC cloud service was renewed in September 2021 with Intel Xeon 8260 (CascadeLake) nodes, completing the HPC ecosystem.
| System | CPU (model, clock, cores) | Total cores / Total nodes | Memory per node | Accelerator | Notes |
|---|---|---|---|---|---|
| MARCONI-A3 | Intel SkyLake | 48 × 3216 / 3216 | 192 GB | - | |
| MARCONI100 | IBM Power9 AC922 | 32 × 980 / 980 | 256 GB | 4x NVIDIA Volta V100 GPUs, NVLink 2.0, 16 GB | |
| DGX | 2x AMD Rome 7742 @2.6 GHz, 32 cores, HT 2 each | 384 / 3 | 980 GB | 8x NVIDIA A100 Tensor Core GPUs, NVLink 3.0, 80 GB | |
| GALILEO100 | 2x Intel CascadeLake 8260 | 48 × 554 / 554 | 384 GB (3.0 TB on fat nodes) | 34 nodes with 2x NVIDIA V100 per node, PCIe3 | |
| ADA CLOUD | 2x Intel CascadeLake 8260 @2.4 GHz, 24 cores each | 48 × 2 × 68 / 68 | 768 GB | | |
A virtual tour of the Cineca hardware facility is available.
The Data Storage Facility
- Scratch: each system has its own local scratch area (pointed to by the $CINECA_SCRATCH environment variable; see the usage sketch after the storage table below)
- Work: each system has its own local working storage area (pointed to by the $WORK environment variable)
- DRes: a shared storage area for Long Term Archive, mounted on the login nodes of all machines (pointed to by the $DRES environment variable)
- Tape: a tape library (12 PB, expandable to 16 PB) connected to the DRes storage area as a multi-level archive (via LTFS)
| System | Scratch (local) | Work (local) | DRes (shared) | Tape (shared) |
|---|---|---|---|---|
| MARCONI | 2.5 PB | 7.5 PB | 6.5 PB | 20 PB |
| GALILEO100 | TBD | TBD | | |
| MARCONI100 | 1.8 PB | 2.3 PB | | |
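As a minimal illustration of how the storage areas described above are typically addressed from user code, the sketch below resolves the $CINECA_SCRATCH, $WORK and $DRES environment variables and builds working paths under them. The environment variable names come from this section; the directory names are hypothetical.

```python
import os
from pathlib import Path

# Resolve the storage areas from the environment variables described above.
# $CINECA_SCRATCH and $WORK are local to each system; $DRES is the shared
# long-term archive area mounted on the login nodes.
scratch = Path(os.environ["CINECA_SCRATCH"])
work = Path(os.environ["WORK"])
dres = os.environ.get("DRES")  # shared area; may not be visible everywhere

# Hypothetical layout: temporary job files on scratch, curated results on work.
job_tmp = scratch / "my_job_tmp"            # hypothetical directory name
results = work / "my_project" / "results"   # hypothetical directory name
job_tmp.mkdir(parents=True, exist_ok=True)
results.mkdir(parents=True, exist_ok=True)

print(f"Scratch (local, temporary): {job_tmp}")
print(f"Work (local, project data): {results}")
if dres:
    print(f"DRes (shared, long-term archive): {dres}")
```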