InfiniBand QDR Cluster


13 nodes connected by a low-latency InfiniBand QDR interconnect.

  • InfiniBand HCA: (Intel) QLogic Corp. IBA7322 QDR InfiniBand HCA (rev 02)
  • InfiniBand Switch: (Intel) QLogic 12300
  • Front-end node (the /data export described here can be checked with the sketch that follows the compute node table below):

| Chassis | Cores | Memory | GPU/Co-processor | Internal I/O | Network | OS Version | Software installed | Machine name |
| E4 | SandyBridge, 2 x 6 cores = 12 cores | 64 GB @ 1333 MHz | NA | InfiniBand QDR | 1 Gb | SLC 6.2 | InfiniBand Subnet Manager; NFS server; software RAID1 storage of 1.9 TB exported through NFS to all compute nodes, available there under /data | lxbrf65c01 |
  • Compute nodes (12 nodes in total):

| Chassis | Cores | Memory | GPU/Co-processor | Internal I/O | Network | OS Version | Software installed | Machine name |
| E4 | SandyBridge, 2 x 6 cores = 12 cores | 64 GB @ 1333 MHz | NA | InfiniBand QDR | 1 Gb | SLC 6.2 | | lxbrf63c02 to lxbrf63c07 |
| E4 | SandyBridge, 2 x 6 cores = 12 cores | 64 GB @ 1333 MHz | NA | InfiniBand QDR | 1 Gb | SLC 6.2 | | lxbrf65c03 to lxbrf65c07 |
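
The front-end node exports its RAID1 storage over NFS, and the compute nodes see it under /data. The following is a minimal sketch for checking, from any compute node, that /data is mounted with the expected capacity and is writable; it assumes only the /data path given above, and the probe file name is hypothetical.

<verbatim>
/* data_check.c -- sketch: verify the NFS-exported /data area from a compute node. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/statvfs.h>

int main(void)
{
    struct statvfs fs;

    /* Report size and free space of the filesystem backing /data. */
    if (statvfs("/data", &fs) != 0) {
        perror("statvfs /data");
        return EXIT_FAILURE;
    }
    double total_gb = (double)fs.f_blocks * fs.f_frsize / 1e9;
    double free_gb  = (double)fs.f_bavail * fs.f_frsize / 1e9;
    printf("/data: %.1f GB total, %.1f GB available\n", total_gb, free_gb);

    /* Simple write test: create and remove a scratch file (name is arbitrary). */
    const char *probe = "/data/.nfs_write_probe";
    FILE *f = fopen(probe, "w");
    if (f == NULL) {
        perror("write test on /data");
        return EXIT_FAILURE;
    }
    fclose(f);
    unlink(probe);
    printf("/data is writable from this node\n");
    return EXIT_SUCCESS;
}
</verbatim>

Compile with gcc data_check.c -o data_check on a compute node and run it locally.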

Hostfile

lxbrf63c01
lxbrf63c02
lxbrf63c03
lxbrf63c04
lxbrf63c05
lxbrf63c06
lxbrf63c07
lxbrf65c04
lxbrf65c05
lxbrf65c06
lxbrf65c07
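
A quick way to exercise all hosts in the hostfile is an MPI "hello" run in which every rank reports the node it landed on. The sketch below assumes an MPI implementation with InfiniBand support is installed on the cluster; this page does not say which one, so the build and launch commands in the header comment are only indicative and the exact mpirun/mpiexec flags may differ.

<verbatim>
/* hello_ib.c -- sketch: each MPI rank prints its rank and host name.
 * Indicative build/launch (flags depend on the installed MPI flavour):
 *   mpicc hello_ib.c -o hello_ib
 *   mpirun -np <nranks> -hostfile <hostfile> ./hello_ib
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &name_len);

    /* Each rank reports where it is running; the output should cover
     * every host listed in the hostfile. */
    printf("rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
</verbatim>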
