Newsletter
Scratch problems on Fermi solved
Dear users, the problem on the SCRATCH area has been fixed. However, two midplanes on Fermi are temporarily draining. We apologize for the inconvenience. Regards, HPC User Support @ […]
Problems in accessing the scratch filesystem on Fermi
Dear Users, we are experiencing problems with the scratch filesystem on Fermi that started over the weekend. Currently the scratch area is not fully operational; we are in the process […]
CINECA_DATA filesystem back in RW mode
Dear Users, the upgrade of the storage system at CINECA has been successfully completed and the CINECA_DATA filesystem on Fermi and PLX is now back in read/write mode. In case […]
Change in qstat output on PLX
Dear Users, for confidentiality reasons, the PBS qstat command on PLX has been modified to show only your own submitted jobs, not those of other users. […]
New LAMMPS installation on Fermi: removed the limitation in the number of MPI processes per node
Dear Users, the LAMMPS molecular dynamics package has been recompiled on FERMI in both its MPI and MPI+OpenMP versions. The new installation now allows the full parallelism of LAMMPS to […]
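With the node-level limitation removed, a hybrid MPI+OpenMP run can use all hardware threads of a BlueGene/Q node. The following is a minimal sketch of a LoadLeveler job script for FERMI; the block size, wall-clock limit, executable path, input file name, and the 16 ranks x 4 threads split are illustrative assumptions, not values taken from the announcement:

```shell
#!/bin/bash
# Hypothetical LoadLeveler command file for hybrid MPI+OpenMP LAMMPS on FERMI (BlueGene/Q).
# bg_size, wall_clock_limit, and the executable/input paths below are assumptions.
# @ job_type = bluegene
# @ bg_size = 64
# @ wall_clock_limit = 01:00:00
# @ queue

# A BG/Q node has 16 cores with 4 hardware threads each (64-way per node):
# 16 MPI ranks per node x 4 OpenMP threads per rank fills the node.
runjob --ranks-per-node 16 --envs OMP_NUM_THREADS=4 : /path/to/lmp_bgq -in in.lammps
```

Check the LAMMPS module on FERMI for the actual executable name and recommended ranks-per-node/thread combinations.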
Accounting problem on FERMI
Dear Users, we inform you that an accounting problem affecting some accounts has been detected on FERMI. It was due to a LoadLeveler error. The budgets have been rectified […]
FERMI: Sw update to V1R1M2
Dear users, yesterday the full software stack of BlueGene/Q was upgraded to version V1R1M2. A description of the new version can be found in the IBM Technical Document available in […]
FERMI status: up and running
Dear Users, the scheduled system update on Fermi is complete. The system is now up and running (only two node boards are off due to hardware problems). Best regards, HPC Support @ CINECA
Reminder of Fermi Update on January 8th 2013
Dear Users, this is just a reminder that the Fermi update has started. It will last all day. The draining of the queues will ensure that the running jobs […]
Fermi status update
Dear Users, the scheduled system update on Fermi is complete. However, 80% of the cluster is currently still draining and on hold for a reboot. Furthermore, […]