Week of 100712

Daily WLCG Operations Call details

To join the call, at 15.00 CE(S)T Monday to Friday inclusive (in CERN 513 R-068) do one of the following:

  1. Dial +41227676000 (Main) and enter access code 0119168, or
  2. To have the system call you, click here

The SCOD rota for the next few weeks is at ScodRota.

WLCG Service Incidents, Interventions and Availability, Change / Risk Assessments

VO Summaries of Site Usability: ALICE, ATLAS, CMS, LHCb
SIRs, Open Issues & Broadcasts: WLCG Service Incident Reports, WLCG Service Open Issues, Broadcast archive
Change assessments: CASTOR Change Assessments

General Information

General Information: CERN IT status board, M/W PPSCoordinationWorkLog, WLCG Baseline Versions, WLCG Blogs
GGUS Information: GgusInformation
LHC Machine Information: Sharepoint site - Cooldown Status - News


Monday:

Attendance: local(Harry, Patricia, Lola, Dirk, Jamie, Maria, Jean-Philippe, Carlos, Luca, Manuel, MariaDZ);remote(Jon, Gonzalo, Federico, Gang, Rolf, Kyle, Alexander, Angela, Gareth, Alessandro(?)).

Experiments round table:

  • ATLAS reports -
    • FZK cloud is now online after weekend outage, FZK-LCG site offline for production
    • SARA remains 0% for T0 data export

  • ALICE reports -
    • T0 site
      • GGUS:59974: local CREAM ce203 showing error messages at submission time (authentication problems). Experts working on the problem; ALICE has taken this system out of production. Latest news: SOLVED at 12:00 (system back in production)
      • GGUS:60007: local CREAM ce202 refusing any connection at submission time (system has been taken out of production). Latest news: SOLVED (system back in production) [ Manuel - Ulrich fixed it by Googling and found the fix in South Africa! ]
    • T1 sites
      • CNAF: GGUS:60004: Wrong information reported by voview concerning the local CREAM CEs. This information drives the submission of new bunches of jobs. The site has been taken out of production
      • FZK: Cooling problems reported by the site.
    • T2 sites
      • Catania and TriGrid: local CREAM systems timing out, apparently due to operations performed by the sysadmins. The issue has been taken off the ALICE TF list to be treated directly with the CREAM developers; conclusions will be shared through the list with all sites
      • Mephi: blparser service errors at the local cream-ce. Alice contact person at the site contacted.

  • LHCb reports -
    • Reconstruction production, on a selected number of runs (only T1 sites involved), is running. Only problems are GridKA (powercut) and IN2P3 (shared area)
    • Issues at the sites and services
      • Downtime notifications: no notifications arrived over the weekend (e.g. the GridKA notification was not received)
    • T1 site issues:
      • Degradation of the IN2P3 shared area. Still no solution provided. GGUS:59880 (opened 4 days ago) [ Rolf - people are working on this problem. ]

Sites / Services round table:

  • FNAL - ntr
  • PIC - ntr
  • IN2P3 - nothing to add
  • NL-T1 - still working with vendor to solve problem but no solution yet
  • KIT - had a cooling problem on Saturday - all chiller plants went down. An engineer was called and tried to restart them; 3 restarted, the last one has problems still to be fixed. Most of the water-cooled racks went down - only the air-cooled systems mainly stayed up. Storage: doors opened automatically but the air conditioning was not enough. A lot of staff were there in the evening to restart services. On the engineer's advice it was decided not to restart everything - the compute cluster was left down and effort focussed on storage and other services (FTS, LFC, BDII). Some pools had h/w problems not repairable over the w/e; most are up by now. Some CMS pools are still down (details missing); 8 LHCb pools and a few ATLAS ones also stay down. In contact with companies to repair the broken h/w; some controllers will be switched over from less important racks. Notification went out from Andreas via wlcg-operations, but the start-of-downtime broadcast was not seen. 20% of WNs are now back online; most of the cluster will be restarted now, but not all - waiting for the 4th cooling machine.
  • RAL - 1) Some problems with the ATLAS s/w server at the end of last week and into the w/e; we controlled the number of ATLAS jobs starting and running. Were ATLAS jobs more widespread over the w/e? 2) Trip of a transformer in the computer building. Doesn't affect us too badly, but this is the 2nd time this one has tripped. Lost some cooling but it was restored; none of the services were directly affected.
  • CNAF - still investigating the transfer issue with PIC. From our point of view the problem is not at CNAF but at PIC; needs investigation. Gonzalo - which ticket is it?
  • ASGC - ntr
  • OSG - ntr

  • CERN - ntr

AOB:

Tuesday:

Attendance: local(Jacek, Nilo, Harry, Miguel, Manuel, Alessandro, Maria, Jamie, MariaDZ, Patricia, Xavier, Flavia, Jean-Philippe);remote(Ronald, Gareth, Rolf, Roger, Jon, Michael, Federico, Xavier, Oliver, CNAF).

Experiments round table:

  • ATLAS reports -
    • Detector running stably and collecting data. Expected calibration period after this fill.
    • ASGC:
      • Possible data loss of group data: /perf-tau/mc09_7TeV/ GGUS:60022 ongoing.
      • Tape pre-staging issues: GGUS:60042 ongoing. Site responded immediately - stuck tape. Resetting request.
    • FZK: testing production queues - fully back online in Panda once shifters confirm correct behavior. (ATLAS still missing two file servers; 11 disk-only and 4 read pools down)
    • SARA still in downtime and activities remain stopped. Downtime ending tomorrow at 19:00 according to GOCDB. (Working on a plan to bring services back online in as safe a way as possible)
    • BNL: Two files in the BNL dCache were lost. DDM experts are recovering them.
    • MC Production:
      • Keeping up the pace: ~30k concurrent jobs.
      • New project name mc10_7TeV will be used for the summer re-simulation campaign. Sites should consider it when setting up file families in tape areas. New campaign starting in about a week.
    • Site interventions: LHC technical stop foreseen for next week (Mo-Wed). On Monday, ATLAS will still process and export data processed over the week-end. Sites may consider short intervention (~1day) Tuesday/Wednesday.
    • Security incident - Savannah bug used for spam attack - trying to restrict access
    • CERN - ATLAS had a pending request to upgrade to SLC5 the machines behind the queue - noticed yesterday that many jobs were pending. New ticket opened. PES group provided an update - the machines were waiting for the LSF upgrade to take place.

  • CMS reports -
    • KIT released successfully all CMS components after the downtime due to cooling troubles
    • AFS at CERN still shows slow access behavior, noticed by users accessing the CMS software. T0 move to read-only software AFS volume was successful, will move all users next week. Nothing more can be done currently.
    • Plans for the next week:
      • T0: data taking
      • T1: few central workflows requested, work started to check consistency on T1 tape storage and cleaning up of remainders of past processing (unmerged areas for example)
      • T2: very few MC requests, mostly used by analysis

  • ALICE reports -
    • T0 site
      • The r/w AFS software area of ALICE at CERN became unreachable yesterday night. Thanks a lot to Harry, who pointed out the problem. The area was still reachable for reading and, since no new packages were being installed via PackMan, production was maintained. The area is still very slow for writing this morning
      • GGUS:60049: blparser service of ce201 and ce203 is not alive.
    • CREAM reports (valid for T1 and T2 sites)
      • Due to the common problems observed at many sites (connection timeouts, blparser services down, ...) we have proposed to the CREAM developers the creation of a guide covering the most common problems observed by the site admins and the corresponding solutions
        • Suffering from such problems, Mephi is out of production today
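
The common failure modes listed above (connection timeouts, blparser down) are usually handled by retrying with a bounded number of attempts; a minimal generic sketch, with invented function names rather than real CREAM client calls:

```python
import time

# Generic retry wrapper for calls that may hit connection timeouts, in the
# spirit of the troubleshooting guide proposed above. This is NOT CREAM
# client code; all names here are invented for illustration.
def call_with_retries(fn, attempts=3, delay=0.0):
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except (TimeoutError, ConnectionError) as exc:
            last_exc = exc        # remember the failure and back off
            time.sleep(delay)
    raise last_exc                # all attempts exhausted

# A stand-in for a flaky submission endpoint: fails twice, then succeeds.
calls = {"n": 0}
def flaky_submit():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("connection timed out")
    return "submitted"
```

Here `call_with_retries(flaky_submit)` succeeds on the third attempt; a real guide entry would add logging and escalate to a GGUS ticket after the final failure.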

  • LHCb reports -
    • Reconstruction productions running, as well as MC.
    • T0 site issues:
    • T1 site issues:
      • NTR
    • T2 site issues:
      • Jobs failing at IN2P3-LAPP. Possible shared area problems, might be temporary. GGUS:60040

Sites / Services round table:

  • CNAF - ticket discussed yesterday on ATLAS functional transfer tests PIC-CNAF: GGUS:59591 didn't observe any problems on transfers CNAF-PIC (Xavier - also looking at PIC but could not find any problem. Production etc is fine. Ale - we didn't have production data to move on this link ... )
  • NL-T1 - ntr
  • RAL - 1 ATLAS disk server out for a few hours this afternoon. Email sent to ATLAS
  • IN2P3 - ntr
  • NDGF - ntr
  • FNAL - ntr
  • BNL - had one gridftp door hanging this morning. Some transfer timeouts seen. Door restarted and transfers resumed.
  • KIT - early this morning the pnfs manager of one dCache instance for LHCb ran out of memory and was restarted. It has been under heavy load since; trying to find out why. dCache developers are helping. Production reduced for the moment.
  • ASGC - files mentioned in ATLAS GGUS team ticket recovered and will close ticket
  • OSG - reports that some updates between the BNL and OSG ticketing systems are not going through; OSG->BNL updates are not propagating.

  • CERN Storage - a few announcements will come about a transparent nameserver update during the LHC technical stop: increase memory and apply patches. Risk assessments and proposed dates will be sent in the next 24h
  • CERN DB - 2 interruptions of streaming (online -> offline) last night and this morning. The first was due to memory fragmentation around 01:30; the 2nd, around 09:00, was due to a DBA mistake.

AOB: (MariaDZ)

  • GGUS:59041 was escalated to the highest level. Assigned to a peculiar combination, please have a look and re-assign as appropriate. (Ale - two replies; a very peculiar problem of one non-critical SAM test that was failing. A site responsible asked for an explanation. Put some info in - he was waiting for more. Seems to be a glitch - cannot correlate the failures with any other failures. Verified the code of the test is ok. It should report a timeout and not an error.)

  • Reminder to collect GGUS issues of concern sent by email. Action on experiment reps to highlight tickets of concern for follow up at this Thursday's WLCG T1SCM.

Wednesday

Attendance: local(Jean-Philippe, Xavier, Jamie, Maria, Miguel, Luca, Harry, Edoardo, MariaDZ, Patricia, Pepe, Steve, Lola, IanF);remote(Angela, Jon, Michael, Ron, Federico, Elisabetta, Tiju, Roger).

Experiments round table:

  • ATLAS reports -
    • BNL failing importing/exporting data (since ~8:00 CERN time) - also affecting T0 export. GGUS:60048
      • Site people working on this, M. Ernst: "These transfer failures are caused by high load on the dCache namespace component. We are monitoring the situation and are taking action as appropriate. More information will be provided as soon as it becomes available"
        • Seems solved after 11:00AM CERN time
    • PIC->CNAF data transfer problems still opened and being followed up by sites and experts: GGUS:59791
      • Criticality is increasing. The problem now affects all transfers from PIC to CNAF for CMS (since 25/6) and ATLAS (since 7/7), but not LHCb! (NDGF is also suffering sending data to CNAF)
      • Transfers CNAF->PIC are ok today, while yesterday they were failing.
    • FZK: ATLAS needs important data to be brought back online for production validation - probably stored on the offline pools. Estimate? GGUS:60045 (300K files - can the site provide a list?) [ Angela - a list of files is available and was just sent to the local ATLAS list; it is also available on the VO box. Disk for the pools - RAID rebuilt; should be available by tomorrow evening at the latest, maybe tomorrow noon ]
    • ASGC:

  • CMS reports -
    • Tier-1s
      • Tier-1s were asked to configure new /cms/Role=t1production role. Savannah tickets opened for each T1.
      • We will ask Tier-1s to provide a review of their tape systems (number of read/write drives, shares with other VOs, etc...) and updates in SiteDB (mainly cores). CMS has made an I/O output study for different kinds of workflows at T1s and we want to be sure we will not exceed tape ingestion limits when running those at the Tier-1s.
      • There is an urgent reprocessing campaign going to start (JetMETTau@FNAL,EG@RAL,Mu@PIC, MuOnia@CNAF).
      • CNAF -> PIC is also broken for CMS. ( Since June 26 all CMS (debug) transfers failing on this link )
    • Tier-2s
      • [ OPEN ]Savannah #115672: JobRobot with 0% and SAM error in T2_IN_TIFR
    • Notes:
      • AFS at CERN still shows slow access behaviour, noticed by users accessing the CMS software. T0 move to read-only software AFS volume was successful, will move all users next week. Nothing more can be done currently.
        • GGUS team ticket still open possibly to track the issue GGUS:59728

  • ALICE reports -
    • T0 site
      • The problem reported yesterday concerning slow access to the CERN AFS area has been understood. ALICE was still reading from the writable volume, causing the same problems already observed a few months ago. The problem was AFS-independent and therefore no GGUS ticket was submitted. Harry has just confirmed a good balance in the number of accesses to the writable area after the corrections
    • T1 sites
      • FZK-LCG2: GGUS:60097. One of the three cream systems is timing out. ( KIT - restarted Tomcat, now ok, ticket closed )
    • T2 sites
      • Currently ALICE is starting a new MC production cycle. T2 sites are being checked and tested to ensure the maximum number of resources

  • LHCb reports -
    • Many reconstruction productions running, as well as MC. 35k jobs submitted in the last 24 hours
    • T0 site issues:
      • NTR
    • T1 site issues:
      • FZK-LCG2: SAM tests failing, problems staging and accessing files. SEs put out of the mask. GGUS:60087 (SAM tests not failing anymore - will update ticket soon) [ Angela - had high load on pnfs around 11:00 - SE should now recover. Last test UTC 09:25 - next test should be ok ]
      • Shared area degradation at IN2P3: problem understood and reproduced. GGUS:59880
    • T2 site issues:
      • Transfers from UK (and not only) to CERN: seems OK regarding UK site, not yet for Spanish ones. GGUS:59422 (opened one month ago or more! Don't see more problems from UK sites but now from some Spanish sites... Contacted directly one of the sites. Keep ticket open until we find solution) [ Miguel - this ticket is one with TCP parameters for sites behind firewalls - tuning of tcp parameters ]
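
The TCP-parameter tuning Miguel refers to in GGUS:59422 typically means sizing socket buffers to the path's bandwidth-delay product; a rough sketch with illustrative figures (none of the numbers below come from the ticket):

```python
# Back-of-the-envelope TCP buffer sizing: a transfer can only fill the pipe
# if the socket buffer holds at least one bandwidth-delay product (BDP).
def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bytes that must be in flight to saturate the given path."""
    return int(bandwidth_bps / 8 * rtt_seconds)

# Illustrative figures (assumptions, not from the ticket):
# a 1 Gbit/s path at 100 ms round-trip time.
buffer_needed = bdp_bytes(1_000_000_000, 0.100)   # 12_500_000 bytes = 12.5 MB
# On Linux such a value would feed net.core.rmem_max / net.ipv4.tcp_rmem;
# a firewall that strips TCP window scaling caps the window at 64 KB and
# defeats this tuning entirely, which is the kind of issue mentioned above.
```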

Sites / Services round table:

  • FNAL - ntr
  • KIT - nta - still on reduced WN power.
  • BNL - we had an incident last night in the US where we observed a very demanding workflow, based on 2 components: merging jobs reading in 50 files/job, and massive staging from tape, which put a lot of load on the n/s component. The slowdown of dCache operations also caused transfer timeouts. By 11:00 CERN time the problem was fixed.
  • CNAF - problem CNAF-PIC still investigating. As soon as we understand problem will close ticket!
  • NL-T1 - yesterday afternoon put a large amount of storage back in production - all except 3 racks. Observed that only those 3 racks are affected by Infiniband problems. Investigation still goes on, but the unaffected racks are back in production.
  • RAL - ntr
  • NDGF - small downtime today (forgot to mention yesterday) - one of more important dCache machines failing. Think uncorrectable ECC memory problems. Swapping modules now.
  • OSG - 2 things: 1) the problems with BNL-OSG ticket exchange reported yesterday were actually a misdiagnosis - all is ok. 2) Gratia accounting maintenance today - records will be held at resources until the accounting system comes back up.

  • Network: CNAF-PIC issue - routing policy depends on the sites. Traffic may go via the GPN or the OPN; if via the OPN it should pass by CERN as there is no direct link in this case. Link peaking at 9Gbps to Bologna and 6Gbps to Spain.

  • DB - 4 hour downtime on replication of PVSS data for CMS (18:30 - 22:30). A PVSS application was running a statement that blocked streams, as it was on an account that was not replicated. Communicated to CMS to avoid this in future.

AOB:

Thursday

Attendance: local(Xavier, Pepe, Manuel, Maria, Jamie, Jacek, Andrea, Gang, Harry, Lola, MariaDZ, Patricia, Nilo);remote(Jon, Michael, Ronald, Roger, Angela, Tiju, Rolf, Federico, Rob).

Experiments round table:

  • ATLAS reports -
    • CNAF-PIC transfer issue solved: (Elisabetta - just know the router was reconfigured for the PIC connection) [ a Service Incident Report should be produced ] Gerard from PIC added to the ticket that LHCOPN plans to make tests that will be picked up by monitoring
      • MTU black hole. Badly configured router; everything ok since yesterday at ~16h when the router was set to MTU=9000. GGUS:59791 closed
    • NDGF:
      • SRM broke yesterday ~16h. GGUS ALARM ticket sent.
        • HW failure; spare hardware used but transfer problems continued - probably an underlying SW problem.
        • Excluded for all ATLAS data transfers.
      • After resuming DDM transfers we see the issue is solved since 14 CERN time. ALARM ticket closed.
    • SARA:
      • Fully back online in ATLAS data distribution (since 12:20AM CERN time).
      • Data processing activities will be resumed as soon as shifters do the certification.
    • LRZ-LMU_ATLASCALIBDISK: Warning due to shortage of disk space yesterday ~15h (14% free). atlas-muoncalib-oper is handling the issue.
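
The MTU black hole behind GGUS:59791 can be probed with a don't-fragment ping; a sketch of the header arithmetic (the 9000-byte MTU is from the minutes, the probe host is illustrative):

```python
# An MTU black hole passes small packets but silently drops large ones that
# carry the "don't fragment" bit. The largest ICMP echo payload that fits in
# a single unfragmented IPv4 frame is the MTU minus the IP and ICMP headers.
IP_HEADER = 20     # bytes, IPv4 without options
ICMP_HEADER = 8    # bytes

def max_ping_payload(mtu):
    """Largest ICMP payload that fits in one unfragmented frame."""
    return mtu - IP_HEADER - ICMP_HEADER

payload = max_ping_payload(9000)   # jumbo frames, as set on the fixed router
# A Linux probe such as `ping -M do -s 8972 <remote-host>` (host name is
# illustrative) then fails immediately on any hop still limited to a smaller
# MTU, instead of letting large transfers black-hole.
```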

  • CMS reports -
    • Tier-1s
      • Tier-1s asked to configure new /cms/Role=t1production role. A few sites deployed it already (PIC, ASGC,KIT, IN2P3).
      • Urgent reprocessing campaign ongoing (JetMETTau@FNAL,EG@RAL,Mu@PIC, MuOnia@CNAF).
      • CNAF -> PIC transfer broken for CMS. [ CLOSED ] GGUS:59791 by ATLAS. Link recovered yesterday, after fixing network issues at CNAF/GARR (jumbo frames, MTU=9000). Ticket closed.
      • [ OPEN ]Savannah #115696: KIT: Cannot install CMSSW due to issues with NFS locking .
    • Tier-2s
      • [ OPEN ]Savannah #115697: SAM SRM tests might indicate storage is full in T2_UK_SGrid_RALPP
      • [ OPEN ]Savannah #115672: JobRobot with 0% and SAM error in T2_IN_TIFR
      • [ OPEN ]Savannah #115694: Frontier seems down in T2_US_MIT although SAM tests are ok. Investigating.
    • Notes:
      • [ CLOSED ]Savannah #115691: File invalidation: /QCD_Pt-15_7TeV-pythia6/Spring10-START3X_V26B-PU_E7TeV_AVE_2_8_BX_50ns-v1/GEN-SIM-RECO(RAW). 6 files with bad checksum.
      • AFS at CERN slow access behavior --> CMS is in 'watching-mode'.
        • GGUS team ticket still open possibly to track more issues [ OPEN ]https://gus.fzk.de/ws/ticket_info.php?ticket=59728

  • ALICE reports - GENERAL INFORMATION: As we reported yesterday, CREAM-CE documentation covering the most common problems observed by ALICE at the sites and their possible solutions has been put together with the CREAM developers' advice. It will be announced this afternoon during the ALICE TF Meeting
    • T0 site
      • sysadmins are upgrading the logfile parser software following the developers' suggestions. ce201 and ce203 are out of production. Ticket GGUS:60125 submitted by ALICE
      • The deployment of the latest set of 2.1.9 patches for CASTOR (including xroot) has been announced this morning. The transparent update will take place on the 21st of July between 9h00 and 10h30 Geneva time. The operation has been agreed by ALICE
    • T1 sites
      • All T1 sites in production (increasing the number of running jobs at FZK)
      • All transfers to CNAF failing - the disk seems to be full. More capacity promised.
    • T2 sites
      • Kolkata-T2: local SE updated. Site back in production
      • Cyfronet: it seems ALICE is not accessing the full number of resources available at the site. The ALICE contact person has been contacted

  • LHCb reports -
    • Reconstruction productions running, fewer MC jobs.
    • T0 site issues:
      • NTR
    • T1 site issues:
      • Pilots aborting at INFN-T1. Very urgent. GGUS:60104. In reply: known bug with the CREAM CE - either a patch may be made or it may be downgraded to the former version. The experiment would prefer the latter (and apply the fix when ready).
      • The ticket against FZK-LCG2 can now be closed. GGUS:60087. Site running at 20% capacity, CPU share momentarily reduced.
    • T2 site issues:
      • NTR

Sites / Services round table:

  • FNAL - ntr
  • BNL - ntr
  • NDGF - incident report for the dCache problem is available at https://wiki.ndgf.org/display/ndgfwiki/20100714+dCache+server+failure - also a problem with the OPN network failing towards some sites, fixed by using non-OPN routing instead (some fibre problems somewhere in Sweden...)
  • KIT - now started rest of WNs again.
  • IN2P3 - ntr
  • RAL - ntr
  • CNAF - ntr
  • NL-T1 - 2 issues: 1) all storage at SARA is up and running again 2) maintenance scheduled for 26 July postponed to 28 July; correction - it will be Wednesday 21st July. Includes a dCache upgrade, CREAM CE and some other issues. Q: some pools offline? A: h/w available again, will have to be validated before being put online again.
  • OSG - ntr
  • ASGC - several possibilities could lead to the staging problem. Some tape files could have been destroyed - checked and seems ok. Tape drive problems? Not confirmed. Heavy DB load? ATLAS pre-staging tests have stopped.
  • PIC - ntr

  • CERN - ntr

AOB:

Friday

Attendance: local(Jean-Philippe, Alessandro, Stephane, Xavier, Uri, Manuel, Jamie, Maria, Dirk, Simone, Patricia, Jan, Nilo, Jacek);remote(Rolf/IN2P3, CNAF, KIT, Jon/FNAL, Mattias/NDGF, Pepe (CMS), Alexander/NL-T1, Gang/ASGC, John/RAL, Reda/TRIUMF).

Experiments round table:

  • ATLAS reports -
    • BNL: small glitches yesterday in the storage because of restarts of dCache's namespace and gsidcap doors. This was well announced in advance by BNL to the shifters.
      • Caused some backlog of data export to BNL - importing data at a sustained rate of ~700MB/s during the last 24h.
    • FZK: some pools still offline - causing some transfer errors. [ Only 1 single pool offline since yesterday - maybe related to the cooling outage - relying on the manufacturer to find out. Recovering the files is very difficult - may take more than 1 week! ATLAS should think of these files as lost. ATLAS.de informed, 4500 files affected. ] Simone - if we declare files lost they had better be! If recovered they will have to be deleted.
    • ASGC: pre-staging errors persist. GGUS:60042. Started affecting more data.
    • SARA: tape to disk staging problems: GGUS:60175

  • CMS reports -
    • Tier-1s
      • Tier-1s asked to configure new /cms/Role=t1production role. A few sites deployed it already (PIC, ASGC,KIT,IN2P3). In progress.
      • [OPEN ]Team GGUS:60154: Tape backlog at CNAF - MuOnia datasets not written to tape. We cannot process data until blocks are closed (which means, transferred to tape). This is critical, as we need to promptly reconstruct this new data. CNAF contacted and they are investigating. We stopped all non-custodial data transfers to CNAF; still, the backlog is big. [ Daniele - received a report about this ticket but just "investigating" - no more news than in the ticket. ]
      • [OPEN ]Savannah #115696: KIT: Cannot install CMSSW due to issues with NFS locking .
    • Tier-2s
      • [ CLOSED ]Savannah #115697: SAM SRM tests might indicate storage is full in T2_UK_SGrid_RALPP. One pool went offline, which left CMS with less space than had been reserved for production transfers. The space reservation has been relaxed until the pool comes back into production.
      • [ OPEN ]Savannah #115724: SAM tests errors in T2_BR_SPRACE.
      • [ OPEN ]Savannah #115672: JobRobot with 0% and SAM error in T2_IN_TIFR
      • [ OPEN ]Savannah #115694: Frontier seems down in T2_US_MIT although SAM tests are ok. Investigating.
    • Notes:
      • AFS at CERN slow access behavior. GGUS team ticket still open possibly to track more issues [ OPEN ]https://gus.fzk.de/ws/ticket_info.php?ticket=59728

  • ALICE reports -
    • T0 site
      • Reconstruction tasks ongoing. No remarkable issues to report
    • T1 sites
      • CNAF. GGUS:60169: the SE (disk) used by ALICE appears to be full - new disk installed and being configured, should be in production in a few days
    • T2 sites
      • SpbSU: out of production; site currently under maintenance operations, waiting for the ALICE responsible's report

  • LHCb reports -
    • Reconstruction and MC productions running.
    • T0 site issues:
      • NTR
    • T1 site issues:
    • T2 site issues:
      • NTR

Sites / Services round table:

  • CNAF - as said in ALICE report disk will be available in a few days, otherwise ntr
  • IN2P3 - ntr
  • KIT - nta - trying to find out about the issues raised earlier [ CMS issue: s/w installation problem - is this the same problem as GGUS:60126? A: Yes - solved this morning! ]
  • FNAL - ntr
  • NDGF - prelim SIR for the earlier incident is now up and linked in. We only got the first notification of the alarm ticket - subsequent follow-ups did not result in e-mails, which led to a delay in noticing that ATLAS transfers were still failing [ did not receive any notification when the ticket was re-opened. Ale - will follow up ]
  • RAL - all proposed outages for next week submitted to GOCDB - all "AT RISK".
  • TRIUMF - next week during the LHC TS, planning a firmware upgrade on the old storage i/f - 30% of storage. Minor incident yesterday whilst testing some of the procedures: 5% of data unavailable for 3-4 hours, but ATLAS not affected.
  • OSG - we had an outage of several central operations services yesterday when a "?" was corrupted. The only LCG service affected was ticket exchange - resynced.
  • ASGC - still trying to find the root cause of the ATLAS pre-staging problem. The example in the ticket is a very old status - it has to be changed manually.
  • NL-T1: NIKHEF had a problem with expired certificates on their LDAP servers. This affected some services like Torque and Nagios. It has been fixed. At SARA all dCache pools are back online. This afternoon GGUS ticket GGUS:60175 was opened for SARA regarding failed transfers. We're looking into it.

  • CERN Storage - CASTOR transparent upgrades next week to latest version; DB team would like to patch CASTOR NS DB.

AOB:

-- JamieShiers - 12-Jul-2010

Topic revision: r11 - 2010-07-16 - JamieShiers
 