The progressive development of the top supercomputing system for Italian research allows the use of state-of-the-art microprocessor technology, enabling an extremely high-performance computing system that still keeps a ‘green’ soul. One of the design parameters of the project conceived by Cineca is to increase the computational power gradually, up to 50 Pflop/s, without exceeding a power consumption of 3 MW at any stage. The aim is to help researchers address the major scientific and socio-economic challenges of our time: supercomputing and Big Data analytics are essential tools for computational and data-driven science in national and international research.
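Taken together, the two targets imply an energy efficiency of roughly 50 Pflop/s / 3 MW ≈ 16.7 Gflop/s per watt in the final configuration.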

The HPC Infrastructure 

Cineca is currently one of the Large Scale Facilities in Europe and a PRACE Tier-0 hosting site.

  • MARCONI: It is the new Tier-0 system that replaced FERMI in July 2016. It is based on the LENOVO NeXtScale platform and the next generation of the Intel Xeon Phi product family. It was completed gradually over about 12 months and is now in its final configuration: the first partition (Marconi-A1), with Intel Broadwell processors, went into production in July 2016 and was closed in September 2018; the second partition (Marconi-A2), with Intel Knights Landing, has been in production since January 2017; and the third partition (Marconi-A3), with Intel Skylake, has been in production since August 2017, upgraded in January 2018 and again in November 2018.
    Marconi is listed in the Top500 among the most powerful supercomputers: rank 12 in November 2016 and rank 19 in the November 2018 list.
  • GALILEO: It was renewed in March 2018 with Intel Xeon E5-2697 v4 (Broadwell) nodes and is available to the Italian research community.
  • D.A.V.I.D.E. (Development of an Added Value Infrastructure Designed in Europe): It is an energy-aware, high-performance cluster based on OpenPOWER8 servers and NVIDIA Tesla P100 data center GPUs. It entered the Top500 and Green500 lists in June 2017 and went into production in January 2018.
  System       | CPU                                                                  | Total cores / Total nodes | Memory per node | Accelerator
  MARCONI-A2   | Intel Knights Landing: 1x Intel Xeon Phi 7250 @1.4GHz, 68 cores each | 244800 / 3600             | 96 GB           | -
  MARCONI-A3   | Intel Skylake: 2x Intel Xeon 8160 @2.1GHz, 24 cores each             | 1791552 / 3188            | 192 GB          | -
  GALILEO      | Intel Broadwell: 2x Intel Xeon E5-2697 v4 @2.3GHz, 18 cores each     | 466560 / 1020             | 128 GB          | Nvidia K80
  D.A.V.I.D.E. | OpenPOWER8 @2GHz, 16 cores each                                      | 720 / 45                  |                 | NVIDIA Tesla P100 SXM2
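As a reading aid, the "Total cores / Total nodes" figures follow from the per-node configuration; a minimal sketch of the relation (the helper name is illustrative, with figures taken from the table rows above):

    def total_cores(nodes, sockets_per_node, cores_per_socket):
        """Total cores = nodes x sockets per node x cores per socket."""
        return nodes * sockets_per_node * cores_per_socket

    # MARCONI-A2: 3600 nodes, each with 1x Xeon Phi 7250 (68 cores)
    assert total_cores(3600, 1, 68) == 244800
    # D.A.V.I.D.E.: 45 nodes with 16 cores each
    assert total_cores(45, 1, 16) == 720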


The Data Storage Facility

  • Scratch: each system has its own local scratch area (pointed to by the $CINECA_SCRATCH environment variable)
  • Work: a working storage area is mounted on the three systems (pointed to by the $WORK environment variable)
  • DRes (Data Resources): a shared storage area is mounted on the login nodes of all machines and on all of Pico's compute nodes (pointed to by the $DRES environment variable)
  • Tape: a tape library (12 PB, expandable to 16 PB) is connected to the DRES storage area as a multi-level archive (via LTFS); a short sketch of how these variables are typically used follows this list
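As a minimal sketch (assuming a Python script run on one of the systems; the helper name is illustrative, not a Cineca-provided tool), the storage areas above can be resolved from their environment variables like this:

    import os
    from pathlib import Path

    def storage_area(var):
        """Return the path a storage environment variable points to,
        or None if the variable is not defined on the current system."""
        value = os.environ.get(var)
        return Path(value) if value else None

    scratch = storage_area("CINECA_SCRATCH")  # local, per-system scratch
    work = storage_area("WORK")               # working storage on the three systems
    dres = storage_area("DRES")               # shared data resources area

    # Example: stage a result file from scratch to the shared DRES area.
    if scratch and dres:
        print(f"would copy {scratch / 'results.dat'} to {dres}")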
 
  System  | Scratch (local) | Work (local) | DRes (shared) | Tape (shared)
  MARCONI | 2.5 PB          | 7.5 PB       | 6.5 PB        | 20 PB
  GALILEO | 300 TB          | 1500 TB      |               |