Site | Status | Recent changes | Planned changes |
---|---|---|---|
CERN | CASTOR 2.1.10 (CMS, ATLAS, ALICE); CASTOR 2.1.9-9 (LHCb); SRM 2.9-4 (all); xrootd 2.1.9-7 | | |
ASGC | CASTOR 2.1.7-19 (stager, nameserver); CASTOR 2.1.8-14 (tapeserver); SRM 2.8-2 | 28/1: 30 minutes of unscheduled downtime | None |
BNL | dCache 1.9.5-23 (PNFS, Postgres 9) | | |
CNAF | StoRM 1.5.6-3 (ATLAS, CMS, LHCb, ALICE) | | |
FNAL | dCache 1.9.5-23 (PNFS); Scalla xrootd 2.9.1/1.4.2-4; Oracle Lustre 1.8.3 | None | Moving unmerged pools from dCache to Lustre; deploying scalable SRM servers with DNS load balancing |
IN2P3 | dCache 1.9.5-24 (Chimera) | Upgraded to version 1.9.5-24 on 2011-02-08 | |
KIT | dCache 1.9.5-15 (admin nodes; Chimera); dCache 1.9.5-5 through 1.9.5-15 (pool nodes) | | |
NDGF | dCache 1.9.11 | None | None |
NL-T1 | dCache 1.9.5-23 (Chimera) (SARA); DPM 1.7.3 (NIKHEF) | | |
PIC | dCache 1.9.5-23 (PNFS) | | |
RAL | CASTOR 2.1.9-6 (stagers); CASTOR 2.1.9-1 (tape servers); SRM 2.8-6 | CMS disk server upgrades to SL5 64-bit | ALICE disk server upgrades to SL5 64-bit; NS upgrade to 2.1.10 |
TRIUMF | dCache 1.9.5-21 with Chimera namespace | None | None |
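FNAL's planned move to scalable SRM servers behind DNS load balancing (see the FNAL row above) works by publishing several A records under a single alias, so that clients spread across the server pool as the resolver rotates its answers. Below is a minimal client-side sketch; the alias and port are assumptions for illustration, not FNAL's actual configuration.

```python
# Illustrative only: how a round-robin DNS alias exposes multiple SRM
# endpoints to clients. The alias and port are assumptions, not FNAL's
# real deployment.
import socket

SRM_ALIAS = "srm.example.org"  # hypothetical alias with several A records
SRM_PORT = 8443                # common SRM port; site-specific in practice

def resolve_srm_endpoints(alias, port):
    """Return every IPv4 address the alias currently resolves to."""
    infos = socket.getaddrinfo(alias, port, socket.AF_INET, socket.SOCK_STREAM)
    # Each entry is (family, type, proto, canonname, (address, port));
    # de-duplicate because getaddrinfo may repeat addresses.
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    for address in resolve_srm_endpoints(SRM_ALIAS, SRM_PORT):
        print(address)
```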
Site | Version | OS, n-bit | Backend | Upgrade plans |
---|---|---|---|---|
ASGC | 1.7.4-7 | SLC5 64-bit | Oracle | None |
BNL | 1.8.0-1 | SL5 64-bit | Oracle |
CERN | 1.7.3 | SLC4 64-bit | Oracle | Will upgrade to SLC5 64-bit by the end of January or the beginning of February.
CNAF | 1.7.4-7 | SL5 64-bit | Oracle | |
FNAL | N/A | N/A | N/A | Not deployed at Fermilab
IN2P3 | 1.8.0-1 | SL5 64-bit | Oracle | Upgraded to LFC 1.8.0 on January 4th |
KIT | 1.7.4 | SL5 64-bit | Oracle | |
NDGF | 1.7.4.7-1 | Ubuntu 9.10 64-bit | MySQL | None |
NL-T1 | 1.7.4-7 | CentOS5 64-bit | Oracle | |
PIC | 1.7.4-7 | SL5 64-bit | Oracle | |
RAL | 1.7.4-7 | SL5 64-bit | Oracle | |
TRIUMF | 1.7.3-1 | SL5 64-bit | MySQL |
Site | Status, recent changes, incidents, ... | Planned interventions |
---|---|---|
ASGC | ||
BNL | Conditions database successfully upgraded to 10.2.0.5, with no issues. The former LFC_FTS cluster was reconfigured as a physical standby database: upgrades included the OS (RHEL5), the cluster/database server (10.2.0.5) and the storage firmware; initially enabled on only one of the production clusters, as part of integrating Data Guard technology into Oracle database operations. IPMI enabled on all Oracle production clusters. | Enable Data Guard for the LFC database; decommission the TAGS database service. |
CNAF | | 16 Feb: LHCb cluster upgrade to 10.2.0.5. 2 Mar (to be confirmed): FTS DB upgrade to 10.2.0.5, plus a purge of old FTS data and set-up of the periodic cleaning job that was previously missing. |
KIT | Jan 26: upgraded the 3D RACs (ATLAS, LHCb) to 10.2.0.5. | None |
IN2P3 | ||
NDGF | Nothing to report | None |
PIC | 8 Feb: upgraded the FTS database. | Planning to upgrade all other databases before the end of February; no exact date yet. |
RAL | Upgraded the 3D, LFC, FTS and CASTOR databases to 10.2.0.5. In a few days we should receive the new hardware, ready for us to install Oracle and start testing (this is the hardware that will be used for Data Guard for CASTOR and FTS/LFC). | |
SARA | Nothing to report | No interventions |
TRIUMF | Upgraded the Oracle 3D RAC to 10.2.0.5. | None |
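Several of the reports above amount to verifying that a cluster reports the expected Oracle release after the 10.2.0.5 upgrades. Below is a minimal sketch of such a check using the `cx_Oracle` client library; the host, SID and credentials are placeholders, not any site's real settings.

```python
# Sketch of a post-upgrade version check; connection details are
# placeholders, not a real production database.
import cx_Oracle

EXPECTED = "10.2.0.5"
DSN = cx_Oracle.makedsn("db.example.org", 1521, "ORCL")  # host, port, SID

def check_version(user, password, dsn=DSN, expected=EXPECTED):
    """Connect and compare the server's reported version with the target."""
    conn = cx_Oracle.connect(user, password, dsn)
    try:
        # conn.version reports e.g. "10.2.0.5.0" on an upgraded server
        return conn.version.startswith(expected), conn.version
    finally:
        conn.close()

if __name__ == "__main__":
    ok, version = check_version("monitor", "secret")  # placeholder credentials
    print("OK" if ok else "MISMATCH:", version)
```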