Computer Centre visit

Note: in EDH, request Computer Centre access in addition to access to building 513, or you will not be able to enter the room! At most two groups at a time (one downstairs, one upstairs).

Please read through all of this page.

Key messages

  • don't drink the tap water
  • don't touch any computers, cables or switches
  • visit http://gridcafe.org

Floor plan

Insert Picture Here

Stops

Openlab

This is the area hosting the CERN openlab equipment.

CiXP

The CERN Internet Exchange Point is famous both for its history and for our Internet2 Land Speed Records.

Batch machines

Some 3500 batch nodes in total - NOT a supercomputer. They are dual-CPU PCs, mostly Intel, though recently we allow AMD as well; PCs are cheaper than mainframes or supercomputers, and because physics events are independent of each other, they are easily distributed among independent CPUs. Each node has a single disk (no redundancy) and 1-2 GiB of RAM. Most of the data processing will be done on the Grid.
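The independence of events is the whole point of the farm: any event can go to any CPU, with no communication between tasks. A minimal sketch in Python of this "embarrassingly parallel" pattern (`reconstruct` is a made-up toy stand-in, not real reconstruction code):

```python
from multiprocessing import Pool

def reconstruct(event):
    """Toy stand-in for event reconstruction: just a checksum."""
    return sum(event) % 997

def process_events(events, workers=4):
    # Events are independent, so they can be farmed out to any
    # number of worker processes with no inter-task communication.
    with Pool(workers) as pool:
        return pool.map(reconstruct, events)

if __name__ == "__main__":
    events = [[i, i + 1, i + 2] for i in range(8)]
    print(process_events(events))  # → [3, 6, 9, 12, 15, 18, 21, 24]
```

On the real farm the "pool" is the batch scheduler and each "worker" is a whole node, but the structure is the same: no shared state, so throughput scales with the number of CPUs.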

Elevator

Note the wooden floor and the doors, which tend to get stuck when the elevator is full of visitors. If that happens, take the stairs instead.

Tape storage

Check the A4 papers Charles stuck on the tape robots; they contain useful information (preferably read them in advance). Tape is NOT backup (backup is only a very small part of it) but the primary data storage for the LHC. Each tape drive is connected to a 'tape server' (a redundant PC running Linux) via Fibre Channel.

There are more similar robots in b.613 (which cannot be visited).

There's no 'backup' of the physics data, but each experiment will have its own copies, as will the Tier1 centres.

Disk storage (disk servers)

These are also PCs, running Linux. Disk servers are used as a buffer in front of the tapes (both for reading and writing). The file access process (copying from tape to disk, or archiving from disk to tape) is transparent to the users and handled by the CASTOR software. These are redundant machines (ECC RAM, RAID disks, n+1 power supplies).
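As a rough illustration of the disk-buffer-in-front-of-tape idea (the paths and function names below are invented for the sketch, not the actual CASTOR interface):

```python
import os
import shutil
import tempfile

# Hypothetical directories standing in for a disk-server pool
# and the tape store behind it.
DISK_POOL = tempfile.mkdtemp(prefix="diskpool_")
TAPE_STORE = tempfile.mkdtemp(prefix="tapestore_")

def stage_in(name):
    """Return a disk-resident copy of `name`, recalling it from
    'tape' only on a cache miss; the caller never sees the tape layer."""
    cached = os.path.join(DISK_POOL, name)
    if not os.path.exists(cached):                           # cache miss
        shutil.copy(os.path.join(TAPE_STORE, name), cached)  # tape recall
    return cached

# Archive one 'file on tape', then read it back through the disk buffer.
with open(os.path.join(TAPE_STORE, "run1.dat"), "w") as f:
    f.write("event data")
print(open(stage_in("run1.dat")).read())  # → event data
```

The user only ever asks for a file by name; whether it is already on a disk server or has to be recalled from a tape robot is the system's business, not theirs.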

Other (or FAQ)

  • the upstairs room ("Computer centre") is 1450 sq.m; the downstairs room (called the "Vault" because it used to be a tape vault) is 1200 sq.m.
  • the cooling capacities are 2 MW and 500 kW respectively, and this is what limits the capacity we can put in there (it is also the maximum amount of power the machines can consume!)
  • two electrical inputs (one Swiss, one French); we automatically switch from one to the other in case of a failure. 100% of the surface of the computer centre is covered by UPS capacity, but only the critical areas (the strip at the back upstairs, and at the far right downstairs as you enter from the lift) are backed by diesel generators. So if the Swiss/French autotransfer mechanism fails, physics services die within 10 minutes.
  • At present we have one UPS system for physics with four 400 kVA modules, so 1,200 kVA usable given the N+1 redundancy configuration. We will be installing more to get to 3.6 MVA. In addition, we have two 300 kVA units to support critical services; again in an N+1 redundancy configuration, so the usable capacity is 300 kVA.
  • cooling is an issue when running from UPS: only part of the cooling can run from UPS.
  • there's no automatic fire-extinguishing system; however, there is fire detection and we have manual extinguishers. The room is simply too big for automatic extinguishing to be effective (and we're on a budget).
  • most machines run Linux, except for the mail and web services, which use Windows, and some special services on Solaris or OpenBSD. Most desktops (more than half, but not all) also run Windows; the rest run Linux - Scientific Linux, a recompiled version of Red Hat with some additions, if anyone is interested.
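The N+1 arithmetic in the UPS figures above can be checked in a couple of lines (a trivial sketch, not any real monitoring code):

```python
def usable_capacity(n_modules, module_kva):
    """Usable capacity of an N+1 redundant UPS bank: one module is
    held in reserve, so only n_modules - 1 carry the load."""
    return (n_modules - 1) * module_kva

# Figures quoted on this page:
print(usable_capacity(4, 400))  # physics UPS → 1200 (kVA)
print(usable_capacity(2, 300))  # critical-services UPS → 300 (kVA)
```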

-- AndrasHorvath - 10 Aug 2006
