An HPC cluster dedicated to CERN studies is now available at INFN-CNAF (Bologna, Italy). Basic info:

  • The cluster presently consists of 12 nodes equipped with 32 cores each. HyperThreading is enabled, so 64 logical cores are visible on each machine. The next stage is to extend the cluster with 8 additional nodes with the same number of cores, for an eventual total of 640 cores. This second phase was launched after the successful run of the acceptance tests (see note below).
  • Access to the cluster can be requested by sending an email to abp-cwg-admin@cern.ch including the following information:
    • The filled-in form, in which you accept the conditions by ticking all the boxes and signing, after having read the AUP (Acceptable Use Policy). Please add Daniele Cesini as contact person and "Access to the hpc_acc Cluster to run High Energy Physics simulation codes" as the reason.
    • A copy of your identity card or another valid identification document (e.g. a passport).
  • Users should connect via ssh to bastion.cnaf.infn.it and from there access the cluster front end, ui-hpc2.cr.cnaf.infn.it (see the connection sketch after this list).
  • The LSF batch system is installed on the cluster to manage job submission. Jobs should be submitted to the "hpc_acc" queue (a submission sketch also follows this list).
  • If you have inquiries about running your jobs on this cluster or need technical support, please write an email to hpc-support@lists.cnaf.infn.it.
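
For reference, a minimal connection sketch based on the hosts named above; <username> is a placeholder for your personal account name, not a value given on this page:

    ssh <username>@bastion.cnaf.infn.it      # first hop: the CNAF bastion
    ssh <username>@ui-hpc2.cr.cnaf.infn.it   # second hop: the cluster front end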
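
Similarly, a hedged sketch of a typical submission with standard LSF commands; the core count, output file name and executable are illustrative placeholders, not values prescribed here:

    bsub -q hpc_acc -n 32 -o myjob.%J.out ./my_simulation   # submit to the hpc_acc queue
    bjobs -q hpc_acc                                         # check the status of your jobs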

Relevant links to obtain further information on this cluster are:

Cluster status updates:

  • 22 December 2017: As a consequence of the flooding in November 2017, the HPC cluster at CNAF (Bologna) has remained inactive for some time. A brief update on the present status:
    • 4 of the 12 nodes were damaged during the flooding. The remaining 8 nodes had to be physically moved from CNAF to Cineca last week and are presently being restored to operation by our CNAF colleagues.
    • While the filesystems are intact and no data appears to have been lost, a couple of configuration steps are still missing before the surviving part of the cluster can be accessed and used as before the incident. In particular, our CNAF colleagues still need to set up the bastion host for access and LSF for job submission.
    • Due to the Christmas holidays, the remaining configuration work will be resumed in the first days of the new year, and the (reduced) cluster is expected to become available for running jobs shortly afterwards.
    • The replacement of the 4 damaged nodes, as well as the completion of the cluster with the 8 additional nodes (launched earlier this year), will happen over a longer timescale.
  • 12 January 2018: The HPC cluster relocated from CNAF to Cineca is finally up and running, although with fewer cores; we expect the missing cores to be replaced soon. Here are the instructions to connect:
    1. Connect to the temporary bastion at CNAF: ssh <username>@login05.cnaf.infn.it (where <username> is your personal account name)
    2. Once logged in, connect to the Cineca user interface: ssh <username>@130.186.16.113
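
With a reasonably recent OpenSSH client (an assumption, not stated in the original instructions), the two hops can be combined into a single command using the -J (ProxyJump) option:

    ssh -J <username>@login05.cnaf.infn.it <username>@130.186.16.113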

  • From this user interface you will be able to submit your jobs with your usual LSF commands. The only thing not available at this stage is the common software area, which was exported from a server at CNAF. This can probably be worked around by installing any required software locally (see the sketch after this list).
  • Please note that Antonio Falabella (antonio.falabella@cnaf.infn.it) is available to assist you, should you encounter any problems connecting or running your jobs in the present configuration.
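
A minimal sketch of the local-installation workaround mentioned above; the directory layout is an assumption, not a CNAF convention:

    mkdir -p $HOME/sw/bin                # private software area in your home directory
    export PATH=$HOME/sw/bin:$PATH       # make locally installed tools visible to your jobs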