LHCb Core Software Meeting

Date and Location

10:30 - 11:10
CERN (2-R-030)

Present
Ben Couturier, Gaylord Cherencey, Gloria Corti, Illya Shapoval, Joel Closier, Marco Cattaneo, Marco Clemencic (minutes), Markus Frank, Stefan Lohn, Thomas Hartmann (Vidyo)

News

  • LCGCMT 63 has been announced
    • one of the aims of this version was to make the AFS LCG release area and local installations (including CVMFS) uniform, so that we could get rid of CMTSITE=LOCAL/CERN
    • unfortunately, although the structure has been made uniform, CMTSITE is still mandatory in this version
    • Marco Cl. and Ben proposed an alternative implementation of a requirements file that fixes the issue, which is being tested; if the test is successful, SFT will release LCGCMT 64 (no change in the externals) and Gaudi will use it
  • Problem with the nightly builds based on LCG nightlies
    • caused by (errors in) the changes needed to adopt the proposal by Marco Cl. and Ben
    • under investigation
  • Gaylord is a new Summer Student
    • will work on tuning of cache parameters to use Frontier to access the CondDB
  • Seminar on CMake next Tuesday, followed by a smaller meeting where users can show their cases
    • Marco Cl. will present the case of Gaudi
  • Computing Seminar next Wednesday: Trends in Database Research
  • There will be no Core Software Meeting next week; next meeting on the 27th of June

Round Table

Marco Cl.

  • Gaudi ready to be tagged, but waiting for LCGCMT 64
  • Problems testing the upgrade of Coverity
    • the database of defects cannot be upgraded: we need to contact the company for support


  • The Oracle connection strings have been disabled in the configuration: after 10 days without Oracle, we can soon tell the Tier-1s that we do not need Oracle and Streams (not even for the LFC)
  • Preparing the list of Python bindings required by DIRAC, for the request to EMI for Python 2.7.
  • New LHCbDirac under certification; it will be released next week

Thomas: how long does it take to get a new machine for the nightlies?
Joel will make the request (it has been agreed to ask for an SLC6 machine).


  • Problem of dependencies between a couple of scripts in the "make" of DecFiles
    • works sequentially, but not in parallel (will investigate with Marco Cl.)
  • Started the Upgrade Code Review
    • it is clear that developers require guidelines (suggested ways of doing things)
    • proposed to update the C++ coding conventions
    • prepare a Python coding convention document
      Joel suggested using the DIRAC Coding Conventions, since there is already a lot of code following them and a pylint configuration to validate the code.
    • Marco Ca. suggested that we need guidelines on the use of Configurables too
  • Somebody expressed the worry that using the event number to seed the random number generator may cause problems in filtered productions, but this is not the case: only 31 bits of the seed vary with the event number (the other two 31-bit words are fixed within a job), which gives us room for about 2.1G (2^31) events per job, which is enough.
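The seeding scheme discussed above can be sketched as follows. This is a hypothetical illustration, not the actual LHCb code: the function name, word layout, and word order are assumptions; only the idea that two words are fixed per job while 31 bits carry the event number comes from the minutes.

```python
# Hypothetical sketch of the per-event seeding scheme discussed above.
EVENT_BITS = 31
EVENT_MASK = (1 << EVENT_BITS) - 1  # low 31 bits vary per event (~2.1G values)


def make_seed_words(job_word1, job_word2, event_number):
    """Build three seed words: two fixed within a job, one from the event number.

    Raises if the event number exceeds the 31-bit range, i.e. the point at
    which the worry about filtered productions would become real.
    """
    if event_number > EVENT_MASK:
        raise ValueError("event number exceeds the 31-bit range")
    return (job_word1, job_word2, event_number & EVENT_MASK)
```

Since 2^31 is about 2.1 billion, even a very large filtered production stays well within the range of the event-number word.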


  • Prepared the infrastructure for Upgrade detector description databases.
    • Updated the packages: Tools/CondDBUI, Det/DetCond and Det/SQLDDDB


  • It turned out that we cannot yet put the secondary CVMFS mount point for the CondDB into production (e.g. IN2P3 needs some downtime). Joel also pointed out that we can enable the extra partition only if the vast majority (>95%) of our Tier sites support it.
    The plan is to keep what we have and review the situation after the summer.
  • Fixed generation of GENSER tarballs (we now get the right Pythia8).
  • New version of LbScripts released.
  • Another version of LbScripts will come soon to be able to distribute the externals with the new directory structure.

Marco Ca.

  • Major change in Brunel output (see minutes of PAC on Friday)
    • the RAW data will be copied to DSTs
    • since part was already copied, only the missing part will be copied
    • required adaptation of the decoders to allow for a list of locations to access the RAW data
    • the old DAQ/RawEvent will not be accessible anymore in DSTs, to avoid confusion

AOB
Joel: we are getting fewer "swap full" warnings.
Marco Cl. said that he noticed that the Coverity (CIM) web server (tomcat) was using a lot of memory; after a restart, the memory usage was reasonable again.
Joel will schedule regular restarts of the server (every Sunday).
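A weekly restart like the one above could be scheduled with a cron entry along these lines. This is only a sketch: the service name "cim", the restart command, and the time of day are assumptions, not the actual setup.

```shell
# Hypothetical crontab entry: restart the Coverity (CIM) tomcat web server
# every Sunday at 06:00 (day-of-week 0 = Sunday).
0 6 * * 0 /sbin/service cim restart
```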


Actions

  • Contact Coverity to complete the upgrade (Marco Cl.)
  • Request an SLC6 build server (Joel)
  • Prepare the list of Python bindings required by DIRAC (Joel)
  • Investigate the dependency problem in DecFiles (Marco Cl. + Gloria)
  • Configure lhcb-coverity to restart the web server regularly (Joel)

-- MarcoClemencic - 14-Jun-2012
