Newsletter

Open position at Mercator for an HPC expert

Dear users, this is to advertise an open position at Mercator for an HPC expert that we think may be of interest. Job opportunities – Mercator Ocean (mercator-ocean.eu) Expert […]

MARCONI: scratch quota removed

Dear Marconi users, the occupation of the scratch area of MARCONI has returned to sustainable levels. Therefore, the imposed 20 TB limit has been removed. Best regards, HPC User Support – CINECA […]

Marconi: Scheduled Maintenance, Tuesday, March 14th

Dear Users, this is to inform you that Marconi will be stopped next Tuesday, March 14th, for scheduled maintenance. The stop will begin at 08:00 AM and the cluster […]

CINECA GPU HACKATHON 2023 – Gentle Reminder

Dear Users, CINECA GPU Hackathons provide exciting opportunities for scientists to accelerate their HPC codes or AI research under the guidance of expert mentors from National Labs, […]

Enforced 2FA on gitlab.hpc.cineca.it

Dear Users, we are adopting a series of measures to improve the quality and security of our HPC services. Starting from 15th March, the users of gitlab.hpc.cineca.it will be forced […]

MARCONI: scratch is almost full

Dear Marconi Users, we inform you that the scratch space exceeded 88% occupancy this morning. This may cause filesystem malfunctions. To avoid reaching […]

Marconi100 end of production

Dear Users, in view of the forthcoming start of production of Leonardo (our next Tier-0 cluster), we write to inform you that Marconi100 is approaching the end of its activity. We […]

Marconi: OPA network issue UPDATE

Dear Marconi users, this is to inform you that the critical situation experienced this morning with the OPA network has been resolved and the cluster is back in production. We are still investigating […]

Marconi: OPA network issue

Dear Marconi users, we are sorry to inform you that we are experiencing some issues with the OPA network, which is causing several nodes to fail. We are working to fix this issue. […]

G100: back to production and Slurm update to version 22.05.8 – IMPORTANT NOTE for hybrid runs

Dear Users, this is to inform you that the maintenance operations on G100 are over and the cluster is back in full production. During the maintenance the Slurm scheduler was […]