LCG Management Board


Tuesday 5 February 16:00-18:00 – F2F Meeting




(Version 1 - 10.2.2008)


A.Aimar (notes), D.Barberis, L.Betev, I.Bird (chair), T.Cass, Ph.Charpentier, L.Dell’Agnello, T.Doyle, I.Fisk, S.Foffano, J.Gordon, C.Grandi, F.Hernandez, M.Kasemann, M.Lamanna, E.Laure, U.Marconi, H.Marten, P.Mato, G.Merino, A.Pace, R.Pordes, Di Qing, M.Schulz, J.Shiers, O.Smirnova, J.Templon

Action List

Mailing List Archive:

Next Meeting:

Tuesday 12 February 2008 16:00-18:00 – Phone Meeting

1.   Minutes and Matters arising (Minutes)


1.1      Minutes of Previous Meeting

The minutes of the previous MB meeting were approved.

1.2      High Level Milestones Update

The High Level Milestones have been updated.


Update: Here is the High Level Milestones dashboard updated after the F2F meeting (PDF file, updated 5.2.2008).

1.3      Move the QR report to end Feb 2008

The next QR is for the Overview Board meeting in March (31.3.2008) and should also cover CCRC08-Feb. If it reports only until January it will be obsolete right away. Therefore the proposal is to extend the period covered by the QR in preparation until the end of February 2008.



The MB accepted the proposal to extend the current QR report until the end of February 2008.

1.4      Do we continue with the LCG Bulletin? (LcgBulletins)

A.Aimar asked the MB whether he should continue to produce the LCG Bulletin. Currently there are about 35-40 unique readers for each issue.


Several MB members commented that they consider it useful, both as a record of project information and as a summary of upcoming events.


I.Bird concluded that the LCG Bulletin can continue monthly and that the LCG Office should check whether the bulletins can be publicized better.

1.5      Approval of the Interim Policy for User Accounting (H.Marten's mail; Interim Policy Document)

The Interim User Accounting Policy proposed by J.Gordon at the previous MB meeting received the feedback below from H.Marten.


Email from: H.Marten

Dear MB,

In reaction to this thread during the last MB TelConf I attached a document with comments that I sent to Dave Kelsey in May 2006. Please note that this was not my personal view but a summary of a discussion with the data privacy commissioner of Forschungszentrum Karlsruhe.

During the April GDB 2006 Dave provided a draft of the content of a user level accounting policy and asked for feedback. I didn't cross-check Dave’s slides again. However, just from reading our feedback again I got the impression that several of the legal requirements for such a policy were originally contained in Dave’s draft but are now missing in the Interim Policy. I thus strongly encourage the JSPG to quickly review these three docs (Dave’s draft, the interim policy and our feedback) and propose a new version of an interim policy that could be accepted at least for a limited period of time. In my opinion it is very important for the whole process that this interim policy is at least an official one from the JSPG.

In view of the discussions with our data privacy commissioner (documented in our attached feedback) I have real concerns about

- VOs controlling and steering the activities of individuals
- VOs setting up their own user level accounting
- VOs / groups developing their own monitoring systems down to user levels

without clearly documenting the need, purpose, use and protection of the respective information. Yes, I know that the answer by VOs will be "we can't work without this", and it is not my intention to re-initiate these discussions, especially because I am not the legal expert. But I have learnt that data privacy is to be taken really seriously. Everybody dealing with private data at FZK (including IT admins) gets respective instructions and is taught that misuse is punished with fines or even a custodial sentence.

Please don't misunderstand me. We (FZK) are definitely interested in general security and data privacy solutions such that grid methodologies become acceptable for everybody (including industrial partners, medical science etc.) rather than to block the whole process, and we can certainly live with properly defined interim solutions for a while (cf. second paragraph above).


H.Marten summarized the email above and noted that his main point is to use the current draft of the JSPG User Policy document instead of the proposed interim solution, which was not designed and verified by the JSPG.


I.Bird replied that D.Kelsey (chairman of the JSPG) does not consider the current JSPG draft a usable document yet. The interim policy proposed is very close to that proposal and was reviewed by the JSPG. Waiting for the final JSPG document would take several more months.


R.Pordes clarified that points 4, 6 and 7 (shown below) in the Interim Policy Document do not apply to the OSG Sites at the moment. Only the EGEE middleware currently sends the VOMS roles and groups under which the jobs are run. The UserDN information for the OSG jobs is actually collected in Gratia but is protected until a clear privacy policy is defined at OSG.


I.Bird added that the interpretation by FZK is stricter than at other Tier-1 sites; the information should be published by those sites that agree. It is important to introduce the policy in order to see how this data can be used and presented by the VOs and Sites. We should proceed with the sites that agree on the interim policy.




4. Access to a portal that allows the decoding of the anonymised name into a person’s DN is restricted to individuals in the VO appointed to be "VO Resource Managers".



6. A user should have access to summaries of their own jobs


7. Site System Administrators have access to summary data about jobs run at their site. This includes the UserDN and VOMS information. The System Administrators should not make information obtained through the portal available to any un-authorised person.




D.Barberis added that a common system for accounting is really needed and the policy should be clear. The VOs need clear usage reports of the resources for which they are accounted. If the VOs collect the user information by themselves it will be more difficult to protect any kind of user privacy.


H.Marten clarified that the issues at FZK can be solved if the exact usage, distribution and retention of the user data is documented and executed in detail.


J.Templon explained that NIKHEF had issues similar to FZK’s. They solved them by having the users sign that they agree to have their usage data collected and used by NIKHEF as NIKHEF considers necessary.


O.Smirnova noted that the information should not be published, but should be available on request of the VO management.


I.Bird concluded that the interim policy should be completed by adding “why the data is needed and how it can be used”. At the same time the interim policy should proceed with the sites that agree with it.


2.   Action List Review (List of actions)

Actions that are late are highlighted in RED.

  • 21 Jan 2008 - The LCG Office should define where (a web area, wiki, share point?) the Sites can upload their statistics about their tape storage performance and efficiency.


Email from A.Aimar:

Dear Colleagues
     individual pages for publishing tape efficiency data, for each Tier-0
and Tier-1 Site, are now available.

Please go to:

There you will find links to site-specific pages: one page for each site.
Currently only the CERN page contains real data.
For the other sites it is just some fake data, needed to show the graphs
that are automatically generated (thanks to T.Bell).

Please replace the fake data with your efficiency data; the rest should
be automatic.


New Action:

12 Feb 2008 – Sites should start publishing their tape efficiency data in their wiki page (see

  • 24 Jan 2008 - Sites should update the High Level Milestones. A.Aimar will send a reminder via email during the week.


  • 5 Feb 2008 - I.Bird will find a speaker for the Experiments Status at the LHCC Referees Meeting.

Removed. The Experiments will have a 10' presentation each.


3.   CCRC08 Update (CCRC F2F Agenda; CCRC Wiki; Slides; WLCG Service Readiness Checklist) – J.Shiers


J.Shiers reported on the status and progress of the CCRC08 February challenge. See the Slides for more details.


Current Status (see slide 3 for details)

Compared with previous challenges, CCRC08 is better prepared. The information flow has been considerably improved, and a lot of focused work has gone into ensuring that middleware and storage-ware are ready.


The information flow, particularly to / from Tier2s, still needs to be improved. Accurate and timely reporting of problems and their resolution – be they scheduled or unscheduled – still needs to be standardized.


The “week of stability” did not happen; some known fixes will even be deployed during February. Unexpected issues will probably add to the other interventions.


J.Shiers emphasized that the information flow is also still plagued by problems with remote meetings (see slide 7 for details) and by the lack of documentation on using the communication infrastructure.


Current Problems (see slide 5 for details)

A table of “baseline versions” for clients / servers is linked from the CCRC’08 wiki. This is updated regularly and sites have been encouraged to upgrade to these versions.


The known main outstanding issues are:

-       “Space” related features in SRM v2.2 still need more production experience.

-       Relatively minor outstanding bugs with DM tools.


SRM 2.2 Issues

A meeting was held yesterday to try to find acceptable solutions to a short list of problems:

-       Tokens for selecting tape sets

-       Write access on spaces (follow the DPM strategy)

-       Tokens and PrepareToGet / BringOnline (pool selection)

-       Support for true bulk operations (list/remove)


Each of the items above is quite complex. The target is for “solutions” – which may well be “work-arounds” that can be deployed in production in time for May 2008. Some issues may need to be revisited in the longer term.


CCRC08 Communication

The “daily” CCRC’08 meetings are at 15:00 Geneva time, except Mondays, when there is a joint operations meeting at 16:00.


There is a fixed agenda, with good attendance from the CERN teams but low participation from the Sites. There are some issues around communication with Tier-2 sites, and there was even speculation that CCRC’08 did not involve Tier-2 sites.


The proposal is to have WLCG Tier-2 coordinators with the specific role of bi-directional communication. These people have to agree to perform this role actively.


The communication with Asia-Pacific Tier-2 sites is being addressed by regular con-calls every two weeks, covering not only CCRC’08 but also the preparation for the pre-ISGC A-P Tier-2 workshop. Similar, but lighter-weight, coordination is also being set up with the DB community.



The problems must be acknowledged and recorded in detail: they must not only be solved but also documented in detail.


The ramp-up over the past couple of months has uncovered quite a few bugs and configuration issues. The teams involved have been very responsive and professional in addressing the issues in a timely manner.


Critical Services Support

An on-call service is being set up (in principle from Feb 4th) for CASTOR, LFC & FTS. “Critical” DB services were also recommended but not included.


The possible response targets are shown below.


Time Interval     Issue (Tier0 Services)
----------------  --------------------------------------------
End 2008          Consistent use of all WLCG Service Standards
(not specified)   Operator response to alarm / call to x5011
1 hour            Operator response to alarm / call to x5011
4 hours           Expert intervention in response to above
8 hours           Problem resolved
24 hours          Problem resolved



The MoU says:

”All storage and computational services shall be “grid enabled” according to standards agreed between the LHC Experiments and the regional centres.”


Does this mean that all sites should (by when?) implement the agreed standards for service design / deployment and operation? This would certainly have a positive impact on service reliability, as well as the ability to perform transparent interventions.


Some better understanding of how 24x7 support is implemented – in particular, a common baseline across Tier-1s – is needed.


Monitoring and Reporting Progress

For February, we have the following three kinds of metrics:

1.    The scaling factors published by the experiments for the various functional blocks that will be tested. These are monitored continuously by the Experiments and reported on at least weekly;

2.    The lists of Critical Services, also defined by the experiments. These are complementary to the above and provide additional detail as well as service targets. It is a goal that all such services are handled in a standard fashion – i.e. as for other IT-supported services – with appropriate monitoring, procedures, alarms and so forth. Whilst there is no commitment to the problem-resolution targets – as short as 30 minutes in some cases – the follow-up on these services will be through the daily and weekly operations meetings;

3.    The services that a site must offer and the corresponding availability targets based on the WLCG MoU. These will also be tracked by the Operations meetings.


It was agreed at the CCRC F2F to measure whether sites have upgraded to the baseline services:

1.    The upgrades are explicitly and/or implicitly needed – some aspects of the challenge require these versions to work.

2.    Some of this reporting is still too “manual” and needs to be improved.


Slide 14 shows some of the current views; others were agreed at the CCRC08 F2F, in order to provide more VO-oriented views.


Next Steps

The aim is to agree on the baseline versions for May during the April F2F meetings. These must be based on versions as close to production as possible at that time (and not pre-pre-certification).


One should aim for stability from April 21st at least. Beyond May everything needs to be working in continuous full production mode.


The CCRC’08 “post-mortem” workshop is scheduled for 12-13 June 2008.


J.Templon commented that the methods used to reach the reliability targets of the services are a choice that should be left to the Sites. Sites should not be required to use one of the methods proposed to improve availability.

I.Bird replied that these recommendations are needed because the reliability seems insufficient at several sites and the targets are not easily reached nor sustained.

J.Shiers added that the requirements are on the quality of service, not on the solutions to adopt, although standard solutions are proposed.


4.   CASTOR Metrics for the LHCC Referees Meeting (Slides) - T.Cass


The LHCC referees asked for CASTOR metrics to address this point following the November review.

T.Cass proposed the set of metrics that could be collected to measure and monitor the performance of the MSS storage at the Tier-0 site.


There are many different CASTOR related performance measurements that can be collected, but these are not easily available from a single location, and have not been tracked consistently to show (hopefully) performance improvements over time.


One should note that the metrics can/will be prepared per VO as relevant. Few (if any) of the proposed metrics are specific to CASTOR; these metrics could thus be used to compare performance across Tier-1 sites.


The metrics proposed can be grouped in 4 classes:

4.1      Usage & Usage Patterns

-       Average file size

-       Average data transferred per tape mount

-       Average daily mounts per tape volume

-       Requests/second

-       Percentage of requests for non-disk resident files

4.2      Performance

-       Network utilization per pool, where relevant, with comparison to target

-       Number of simultaneous active transfers, with breakdown by read/write and access mode (streaming/random)

-       Average I/O performance of tape system




4.3      Responsiveness (by component)


-       Time between request and availability of first byte. Distinguish cases where the file is in the requested pool, in another pool, or on tape

-       Queued transfer requests

-       Average & maximum disk server busy time


SRM Layer

-       Time between initial request and return of TURL


Tape Layer

-       Time between CASTOR request and drive allocation

4.4      Miscellaneous

-       Drive availability: Percentage of installed drives available for users

-       Tape fragmentation: Data on full tapes divided by nominal capacity

-       Available Tape Capacity, expressed as time to exhaust pool at current usage rate (daily/weekly/monthly)
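
The last two metrics above are simple ratios. The sketch below is purely illustrative (the function names and sample numbers are hypothetical, not part of the proposal) and shows how such values could be derived from site monitoring data:

```python
# Illustrative sketch (hypothetical helper names and made-up numbers):
# two of the proposed miscellaneous MSS metrics expressed as ratios.

def drive_availability(available_drives: int, installed_drives: int) -> float:
    """Percentage of installed tape drives available for users."""
    return 100.0 * available_drives / installed_drives

def tape_fragmentation(data_on_full_tapes_tb: float, nominal_capacity_tb: float) -> float:
    """Data on full tapes divided by their nominal capacity."""
    return data_on_full_tapes_tb / nominal_capacity_tb

print(drive_availability(45, 50))          # 90.0
print(tape_fragmentation(800.0, 1000.0))   # 0.8
```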



I.Bird asked whether all these metrics will be available for the meeting with the LHCC Referees in a couple of weeks.

T.Cass replied that not all will be available, but a delivery date will be defined for those missing.


5.   Post SLC4 options at CERN (Slides) – T.Cass


The possible options for the future platform to support are:

-       Progressively deliver SLC5 services (test clusters, build services, etc),
leading to introduction of batch and interactive services in ~September and phase out of SLC4 in Q1 09.

-       Skip RHES5 based platform,
introduce SLC6 services from delivery of RHES6 (expected Q4 08 – Q1 09); and phase out SLC4 in Q4 09.


It should NOT be an option to develop SLC5 service support infrastructure without deployment of batch and interactive SLC5 services.

5.1      Requests Received

The following informal requests for SLC5 have been received from the Experiments:

-       Install service
ATLAS online (hardware compatibility tests)

-       Test environment
Do SLC4 binaries run on SLC5?
Does code compile with gcc4?

-       Build services
ATLAS (certification representative)
Full SLC5 s/w release planned for early 08 (status unclear)


D.Barberis noted that ATLAS is moving all its software to SL4 in early 2008 and therefore SL5 is not a priority now.

5.2      Moving from SLC4 to SLC5


-       Stability was only just achieved on SLC4; can we move rapidly to SLC5?
Many fewer changes to the Grid middleware; GD’s opinion is that the migration is possible.

-       Short timeframe to deliver services (and experiment software); effort in 2008 in parallel with CCRC & LHC start-up


-       Would avoid the risks of moving from SLC4 to SLC6.

5.3      Moving from SLC4 to SLC6


-       RHES4 support for new hardware ends in April 08.
Higher price/performance for 2009 hardware?

-       Release timescale for RHES6 uncertain.
“Autumn 08”, but RHES5 slipped 6 months; if RHES6 arrives in “Spring 09”, deployment of SLC6 services and software testing before end-09 would be very tight.

-       Relaxing the focus on middleware (and software) portability may further complicate the migration to SLC6; it is easier to keep porting the software.


-       Would avoid the SLC4 to SLC5 effort


5.4      Recommendations

IT/FIO & IT/GD recommend:

-       Development of SLC5 services at CERN during 2008,

-       Introduction of production batch and interactive services in September, and

-       Phasing out SLC4 batch and interactive services during Q1 2009.


I.Bird asked whether this proposal should be discussed by the Architect Forum.

P.Mato agreed with the AF discussion but added that it is the Experiments that will have to agree on which option to select.


P.Mato also reminded the MB that it is mostly the version of the compiler and run-time libraries that makes a difference for the applications software. The default SL5 compiler is gcc 4.1, but the Applications Area is not testing gcc 4.2, which is delivered with SL5.1.


D.Barberis added that, for example, the ATLAS off-line software depends more on the versions of gcc and Python than on the version of Linux. The Experiments’ online systems instead depend also on the version of the Linux kernel.


T.Cass added that there must be a clear agreement before starting: IT and EGEE should not proceed with the porting of the IT and middleware software if the Experiments’ applications are not going to use it.


J.Templon asked why the Experiments do not just try the SLC5 porting on stand-alone hosts.

T.Cass replied that the Experiments are used to having a build service at CERN supported by IT.

D.Barberis added that the build infrastructure exists at CERN and SLC5 would be one more platform to support on the existing build service.


I.Bird asked whether it is possible to have an update in September 2008.

T.Cass replied that it will depend on how actively the other groups port their software.


Ph.Charpentier noted that all clients of the middleware should then be ported to SL5.

M.Schulz replied that the certified versions will be limited to a specific combination of client/server versions, and for specified compilers.

I.Bird added that different versions could have different levels of support: some “certified”, others only “built and made available after basic testing”. The levels of certification should be reviewed.


P.Mato clarified that the compiler is changed because some versions of the compiler give 10-20% gains in performance. That is why the Experiments want to change compiler version.


R.Pordes asked whether the GDB will discuss this issue and provide written recommendations.

I.Bird replied that the GDB will surely be involved in the process; but first it must be clear what the Experiments want.

R.Pordes added that all sites need to be prepared for the move, and should study the porting of their software to SLC5.


M.Schulz added that not all services have to be ported to a new platform. The WN service is the crucial one, in order to be able to run the same applications at all sites.



6.   VOBOXes Support and 24x7 at the Sites – Sites Roundtable


This item was postponed to next week.


I.Bird asked for written input from the Sites because clarifications were requested by the LHCC Referees.


New Action:

11 Feb 2008 - Sites should send information about the progress of their 24x7 Support and VOBox SLA milestones (if the milestones are still to be done, i.e. in red).


7.   ALICE Quarterly Report (Slides) – L.Betev


L.Betev presented the QR for ALICE, from Nov 07 to Jan 08.

7.1      Production and Analysis Activities

The ALICE MC production continued with very good site and services availability (including the Christmas break).




The number of users increased, and the priorities were tuned in order to reduce the user waiting time. About 55% of the user requests were taken into account within a minute (see the graph in the Slides).



The CAF/PROOF service at CERN was used for chaotic analysis, and quotas for CPU and disk had to be introduced.



7.2      FDR Phase 1

RAW data volume and rates:

-       17.5TB total written, 18MB/s – 1/3 of the p+p rate

-       30MB/s (½ of the p+p rate) expected in the February CCRC’08



The replication from Tier-0 to Tier-1 sites is also progressing:

-       Running quasi-online through FTS, following the registration of RAW data at CERN

-       Export to 2 of the big ALICE T1s (GridKA, CCIN2P3)



The Conditions data collection was in operation from day one (Shuttle system to Offline CondDB).


All data source components are ready and integrated, including the DAQ/DCS/HLT databases and fileservers.

The focus is now on having a full complement of conditions data - and the corresponding online software - for all detectors.

The Conditions data access on the Grid is working well.


The general status of Data Production:

-       Systematic production of all RAW data completed in January (Pass I at T0)

-       Detector experts are verifying the code and detector performance

-       FDR Phase II – simultaneous reconstruction at T0/T1

7.3      Milestones for the Quarter

-       MS-120 Oct 07: MC raw data for FDR: Done

-       MS-121 Oct 07: Online DA and Shuttle integrated in DAQ: Done

-       MS-122 Oct 07: FDR Phase II: Postponed to February 2008


New Milestones:

-       MS-124 Feb. 08: Start of FDR Phase II

-       MS-125 Apr 08: Start of FDR Phase III

-       MS-126 Feb 08: Ready for CCRC 08

-       MS-127 Apr 08: Ready for CCRC 08


J.Templon asked whether ALICE will start to test all Tier-1 Sites (NIKHEF in particular) by mid-February.

L.Betev replied that ALICE is already testing several sites and progressing daily by solving the issues encountered. The raw data will start being distributed by 15 February, when produced by the online software.

J.Templon asked that the ALICE tests be performed in order to test the Sites; the other Experiments have been performing such tests for several months.


L.Betev reminded the MB that at CNAF and RAL the xrootd plug-ins for CASTOR 2 are not yet deployed.

L.Dell’Agnello replied that at CNAF, until very recently, there was no request for xrootd from the local ALICE representatives. Now that the request has been made, it will be taken into account by working with the ALICE and xrootd experts.


J.Templon asked about the implementation of xrootd security, authentication and authorization: if the access to data is found to be insecure it could be stopped at any time for security reasons.

L.Betev replied that the security concerns raised in the past are being taken into account by the xrootd experts.


8.   Applications Area before CCRC (Slides) – P.Mato


The AA software version to be used by the Experiments in the CCRC is very weakly coupled to the grid services; there are very few points of contact (e.g. access to event and conditions data).


Each experiment will use the version they have managed to fully integrate and validate with their applications:

-       CCRC February run: based on last year’s releases

-       CCRC May run: based on the new AA LCG_54x configuration.


The rest of the presentation is about the current status of the Applications software.

8.1      Applications Area Structure


All AA software (external and internal) is available in /afs/ and is organized as follows:

-       The platform keyword is made of operating system, architecture and compiler version

-       Tar files (sources and/or binaries) for distribution are also available
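
The platform keyword above is simply the concatenation of the three fields. The sketch below is only an illustration of the naming convention (the helper function is hypothetical, not an AA tool):

```python
# Hypothetical sketch: an AA platform keyword is composed of the
# operating system, architecture and compiler version.
def platform_keyword(os_name: str, arch: str, compiler: str) -> str:
    return f"{os_name}_{arch}_{compiler}"

print(platform_keyword("slc4", "ia32", "gcc34"))   # slc4_ia32_gcc34
```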


More than 100 external packages are available. For many packages only the client-side part is required:

-       Automated installations made by a system of scripts within CMT  (LCGCMT)

-       The packages and versions are decided by Architects Forum (AF)

8.2      AA Configurations

An AA LCG configuration is a combination of packages and versions which are coherent and compatible.

Configurations are given names like “LCG_54”.


Experiments build their application software based on a given LCG configuration. Interfaces to the Experiments’ configuration systems are provided (for both SCRAM and CMT). The content of the configurations is decided in the AF, and concurrent configurations are an everyday situation.


For example:

[figure: example of concurrent LCG configurations]

8.3      Supported Platforms

The platforms currently supported in the Applications Area are:

-       slc3_ia32_gcc323 - Only available for old configurations

-       slc4_ia32_gcc34 - Current production platform for all experiments

-       slc4_amd64_gcc34 - Fully functional but some difficulties for some experiments

-       win32_vc71 - Used mainly by LHCb as a second platform for development/testing. Missing interface to Castor, DPM, etc.  Requested to IT/DM

-       osx104_ia32_gcc401 - Requested by experiments as a second platform for dev/test. Missing interface to Castor, DPM, etc. Requested to IT/DM 

-       slc4_ia32_gcc41 and slc4_ia32_gcc42 - All AA packages successfully tested. No need yet for a release


8.4      Latest Release: Configuration LCG_54

Configuration LCG_54 was released and announced on January 21st.


A long list of version changes in the external libraries/tools:

-       Python 2.5, Boost 1.34, and 17 others.


New versions of all AA packages

-       ROOT 5.18.00 - production version, major release compared with 5.14 (which is in production by 3 Experiments)

-       SEAL 1.9.4 - minor changes

-       RELAX 1.1.11 - minor changes

-       CORAL 1.9.3 - adaptations and cleanup

-       COOL 2.3.0 - channel selection by name, bulk retrieval of channel ID-name mapping, partial tag locking and adaptation

-       POOL 2.7.0 - improved collections and adaptation

8.5      Nightly Builds

The nightly builds have enabled the Experiments to validate the candidate releases on all platforms:

-       A number of issues (show-stoppers in some cases) were discovered before release

-       Unfortunately not all the Experiments validated at the same level of detail

-       More testing/validation is needed for major releases (1 month is not sufficient)


The current complete release took only 1-2 days after ROOT was tagged (the main goal in 2007 was to speed up the release process).



8.6      Simulation packages

Geant4 version 9.1 released on December 14th as planned:

-       Includes GDML as a new plug-in and an extension of the binary cascade model for hadronic physics.

-       A CPU improvement in the hadronic part (of the order of 5%) is expected

-       Experimental new physics list to enable the analysis of test beam data.

-       The validation of the hadronic physics was done with 5000 jobs submitted to the Grid. This is about half the number of jobs compared with last time, mainly due to the reduced SLC4 resources available for Geant4

MC Generators

-       New structure is more stable and is used by the Experiments

-       In total 22 generators in various versions are installed

8.7      Conclusions

The main AA activity for 2008 is the consolidation and optimization of the existing software packages. No big functionality changes are planned for 2008.


LCG_54 was released a few weeks ago:

-       Was better validated than previous releases

-       This is the main configuration the Experiments will be using for the rest of the year.


The machinery is ready to produce new complete software releases (mainly bug-fix releases) in a short time (days). The solution adopted is to optimize and speed up the cycle of reporting - debugging - fixing - testing - validating - releasing.


D.Barberis expressed ATLAS’s satisfaction with the Applications Area work. His only worry is that Geant4 V9 will be twice as slow as the previous version. He also asked that the major releases of Geant4 be less frequent, giving the Experiments time to migrate first.


P.Mato replied that often the improved (but slower) algorithms are requested by the Experiments’ representatives.

In 2008 Geant4 should be added to the nightly builds, so that the Experiments can test it immediately, before the official release.


I.Bird asked how the middleware could benefit from validation by the Experiments before the software releases; currently this approach does not work for middleware testing.


M.Schulz noted that gcc 4.1 and 4.2 are link-incompatible; therefore all external software would have to be recompiled.


9.   AOB



J.Templon noted that LHCb needs all sites to have the latest update of lcg-utils. For instance, even CERN is still running WN version 1.5.2, which is about one year old. All Sites should be asked to update the WN.


10.   Summary of New Actions


The full Action List, current and past items, will be in this wiki page before next MB meeting.


12 Feb 2008 – Sites should start publishing their tape efficiency data in their wiki page (see


11 Feb 2008 - Sites should send information about the progress of their 24x7 Support and VOBox SLA milestones (if the milestones are still to be done, i.e. in red).