MOST OF THE ISSUES ARE RESOLVED NOW.

Conditions database open issues

This page lists issues related to the deployment of the Conditions database (CondDB).

Online conditions

Last update: 25th June 2008

Synchronisation issues

Synchronisation with PVSS

Online conditions are single-version conditions which are inserted into the ONLINE partition of the CondDB, starting from PVSS data points.
  • Carmen and Marco Cl. have a working mechanism for the automatic insertion (a sketch follows this list)
  • Currently the ONLINE partition does not contain any of the conditions required by the HLT
    • A different mechanism from the automatic insertion is needed, see below
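
To make the insertion mechanism concrete, here is a minimal sketch of the logic in Python. The PVSS and CondDB interfaces shown (read_pvss_datapoint, CondDBWriter), the data point name and the condition path are all hypothetical stand-ins for illustration, not the actual tools used by Carmen and Marco Cl.

    from datetime import datetime, timezone

    INFINITY = 2**63 - 1  # COOL-style "valid until further notice" (64-bit ns timestamp)

    def read_pvss_datapoint(name):
        """Placeholder for the PVSS interface; returns a dummy reading here."""
        return 5850.0

    class CondDBWriter:
        """Placeholder for the CondDB insertion layer."""
        def __init__(self, partition):
            self.partition = partition
        def store(self, path, payload, since, until):
            print("store %s into %s, valid [%d, %d)" % (path, self.partition, since, until))

    def make_condition_xml(name, value):
        """Wrap a PVSS reading in a CondDB-style XML condition object."""
        return ('<condition name="%s"><param name="value" type="double">%s</param>'
                '</condition>' % (name, value))

    value = read_pvss_datapoint("lbMagnet/SetCurrent")       # latest PVSS reading
    now_ns = int(datetime.now(timezone.utc).timestamp() * 1e9)
    db = CondDBWriter("ONLINE")                              # ONLINE partition, mastered at the pit
    db.store("/Conditions/Magnet/SetCurrent",
             make_condition_xml("SetCurrent", value),
             since=now_ns, until=INFINITY)                   # single version: open-ended validity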

Synchronisation with offline

The CondDB has two different masters, depending on the partition: for ONLINE, the master is the Oracle service at the pit; for the other partitions, it is the Oracle service at CERN/IT. The databases are synchronised with each other, and with the copies at the Tier1s, using Oracle streams.
  • The streaming between the pit and CERN/IT is not yet operational. Radu is following this.

HLT requirements

  • The HLT requires a very limited number of online conditions:
    • Conditions to determine the Magnetic Field scale factor (Set Current (unsigned), Polarity); see the sketch after this list
      • Carmen and Marco Cl. are working to make these values available in ONLINE
      • Adlene is responsible for interfacing these to the Magnetic Field Service
    • Conditions to determine the absolute positions of the Velo halves relative to LHCb (Velo "Resolver" (a.k.a. stepper motor) readings)
      • Silvia Borghi is responsible for specifying what the Velo should insert into ONLINE and for interfacing it to the Velo Detector Element.
  • The HLT requires that the online conditions which it uses have a constant value during a given run.
    • The values will be downloaded to the HLT at run change.
      • Marco Cl. and Clara are working on the download mechanism
    • Even if the PVSS values change during the run, only the value downloaded to the HLT at run change will be inserted into the ONLINE CondDB partition. This is different from the standard ONLINE updating, which happens whenever the PVSS values have changed by more than some pre-defined fraction.
      • Carmen, Marco Cl. and Clara are working on this update mechanism.
      • Is it clear to everybody that the finer grained (i.e. finer than a run) history of these conditions will only be available in the PVSS archive?
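
As an illustration of the magnetic field requirement above, here is a minimal sketch of the scale-factor computation. The function and the 5850 A nominal dipole current are assumptions for illustration, not the actual Magnetic Field Service implementation.

    NOMINAL_CURRENT = 5850.0  # A; assumed nominal LHCb dipole current (illustrative)

    def field_scale_factor(set_current, polarity):
        """Field-map scale factor: signed current over nominal current.

        set_current -- the unsigned 'Set Current' condition from ONLINE (amperes)
        polarity    -- the 'Polarity' condition from ONLINE, +1 or -1
        """
        if polarity not in (+1, -1):
            raise ValueError("polarity must be +1 or -1")
        return polarity * set_current / NOMINAL_CURRENT

    # Downloaded once at run change and kept constant for the whole run:
    scale = field_scale_factor(set_current=5850.0, polarity=-1)  # -> -1.0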

Offline requirements

  • The offline code should not normally use any conditions directly from the ONLINE CondDB partition, because the online readings may need to be corrected or recalibrated (not possible for single-version conditions). Rather, it should use an offline number derived from an ONLINE condition by applying a correction (stored in LHCBCOND), or take from LHCBCOND an offline number that replaces the ONLINE one once the calibration has been done.
    • For conditions used in the HLT, it must be possible to continue to use the uncorrected ONLINE number even when an offline correction is available, so that the HLT decision can be reproduced offline.
    • It should also be possible to use the uncorrected ONLINE values if an LHCBCOND correction is not yet available; this is typically the case when running the prompt reconstruction soon after data taking. One possibility is to initialise the LHCBCOND correction as a null correction
  • Since the HLT has to run both online and offline, and since we want to share code between the online HLT and the offline analysis, it should be possible to tell the relevant objects (e.g. MagneticFieldSvc, VeloDet) whether to take the relevant conditions uncorrected from ONLINE or corrected from LHCBCOND; a sketch of such a switch follows this list.
    • Currently, neither the Magnetic Field Service nor the Velo detector element has this mechanism in place
      • It is the responsibility of the Magnet and Velo groups to provide code for this, after discussion with Marco Cl.
    • Note that, since these components are shared by the HLT and offline code, it is not possible with this mechanism to run, in the same job, the HLT using ONLINE values and an analysis using LHCBCOND values
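
A minimal sketch of what such a switch might look like; the class, flag and values below are invented for illustration and are not the actual MagneticFieldSvc or VeloDet interfaces. It also shows the null-correction idea mentioned above.

    class FieldCurrentProvider:
        """Illustrative shared component: serves a value either raw from ONLINE
        or corrected with an LHCBCOND offset (a null correction by default)."""

        def __init__(self, use_online=True):
            self.use_online = use_online  # True in the HLT; False offline once calibrated

        def current(self, online_value, lhcbcond_correction=0.0):
            if self.use_online:
                return online_value                    # reproduces the HLT decision
            return online_value + lhcbcond_correction  # null correction == ONLINE value

    hlt = FieldCurrentProvider(use_online=True)
    offline = FieldCurrentProvider(use_online=False)
    print(hlt.current(5850.0, lhcbcond_correction=-2.5))      # 5850.0
    print(offline.current(5850.0, lhcbcond_correction=-2.5))  # 5847.5

Because the flag lives on the shared component, a single job can run in only one mode, which is the limitation noted in the last bullet above.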

Conditions for simulation

Last update: 8th July 2008

  • The simulation (and reconstruction of simulated data) will get a copy of the necessary online conditions from a SIMCOND database. The directory structure of this database is identical to that of ONLINE. SIMCOND also contains a copy of the relevant conditions from LHCBCOND. Only DDDB is shared for real and simulated data reconstruction and analysis.
  • Marco Cl. has provided a first version of SIMCOND containing the existing ONLINE magnetic field conditions, and the conditions included in the latest LHCBCOND tag. It is released with SQLDDDB v4r5
    • This prototype can be used to commission the access to ONLINE conditions from the offline software
    • It is agreed that several tags will be provided for various simulation use cases, such as Magnetic Field on/off, Velo open/closed, ideal or survey detector positions.
    • Each different combination of simulation conditions will give rise to a new SIMCOND tag
      • A side effect of this is that only one combination of simulation conditions can be analysed in a given job.
  • When reconstructing or analysing simulated data, Brunel and DaVinci should access SIMCOND and not LHCBCOND. Marco Ca. will set this up; a sketch of such a configuration follows this list.
    • The choice of database and of database tag will have to be set up manually to be consistent with the input data. Though it would be desirable to automate this, a technical solution does not currently exist
  • The content of SIMCOND has to be maintained separately from ONLINE and LHCBCOND. This is considered part of the validation to be done before any new simulation. Thomas will have an applied fellow look after this
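
A sketch of what such a job-options fragment might look like. The LHCbApp properties are used here as assumed configuration hooks, and the tag names are invented; the actual mechanism is whatever Marco Ca. puts in place.

    from Gaudi.Configuration import *
    from Configurables import LHCbApp

    app = LHCbApp()
    app.Simulation = True                # assumed switch: use SIMCOND, not LHCBCOND/ONLINE
    app.DDDBtag   = "head-20080708"      # hypothetical tags; must be chosen manually,
    app.CondDBtag = "sim-20080708-vo"    # consistent with the input data (one SIMCOND tag
                                         # per combination of simulation conditions)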

See also: Gloria's talk on 20th June 2008 (33rd software week)

Survey for simulation

Last update: 8th July 2008

  • It is agreed that, ultimately, the 2008 simulation should simulate a realistic detector based on the surveyed detector, not the ideal ("Optimisation TDR") detector simulated in DC06
  • The current geometry structure is such that the surveyed detector cannot be simulated for TT/IT/OT. A workaround exists for the other detectors
    • The short term solution is to adapt the TT/IT/OT geometry either to use the same workaround as the other detectors, or to put the survey geometry as the baseline in DDDB (as opposed to a correction to the baseline in LHCBCOND)
      • Wouter and Jan will look into this for OT
      • There is currently no manpower to adapt the TT/IT geometry
    • A longer term solution is to reimplement the geometry conversion software so that any LHCb geometry can be converted into a GEANT4 geometry
      • M.Pappagallo from the Bari group (new group joining LHCb) will work on this from September
  • Any geometry to be simulated in Gauss must be free of overlaps. It is agreed that the survey geometry must be made overlap free
    • This is a responsibility of the sub-detectors, to be coordinated in the Gauss meeting
    • Manpower has been identified to do the validation work for the common infrastructure (cavern and magnet by a summer student, beam pipe by a Ph.D. student to be supervised by Gloria)

The status of the geometry for the simulation is tracked here

Access to CondDB

Last update: 26th June 2008

The ONLINE partition of the CondDB is currently only an Oracle DB, to be synchronised to the Tier1s via Oracle streams, though this synchronisation is not yet switched on. The master copy of DDDB and LHCBCOND is already in Oracle and synchronised to the Tier1s, and was used for the recent CCRC. SQLite snapshots of DDDB and LHCBCOND are regularly released (SQLDDDB). Currently the snapshots are not snapshots at all: they contain the whole database. In future it will become necessary to be selective because of the growing size of the database, so a policy will be needed for what to put in the snapshots. Snapshots of ONLINE would also be possible, but would be needed very frequently to apply to the latest data.

The alternative to SQLite snapshots is a direct connection to the Oracle DB; this has always been the baseline design. For the copy at the pit, on the LHCb internal network, one could imagine having anonymous read access to the database, but in the offline world secure access is mandatory. This is implemented as a username/password pair obtained from a grid service, and thus requires authentication via a grid certificate. If this becomes the default access mechanism, every job (interactive, batch, grid) will have to be executed with a valid grid certificate. However, SQLite snapshots would still be needed (e.g. to work disconnected, on Windows, or for simulation on Tier2s), but they could be created on demand depending on the dataset to be analysed.
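For illustration, the two access modes amount to different COOL/CORAL-style connection strings handed to the CondDB access layer; the paths, server and schema names below are placeholders, not the real ones.

    # Offline default: a local SQLite snapshot file containing one partition.
    sqlite_conn = "sqlite_file:/path/to/LHCBCOND.db/LHCBCOND"

    # Baseline/production: direct Oracle access. The username/password pair is
    # fetched from a grid service, so a valid grid certificate is required.
    oracle_conn = "oracle://conddb-server/LHCB_LHCBCOND"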

Proposal for CondDB access

The following proposal was discussed at the Core Software meeting on 25th June 2008. It should be validated by the OPG
  • Production jobs (reconstruction, stripping) running at Tier1 will always access the CondDB from Oracle, which is the master copy and which is guaranteed to have the most up to date tags and information. This will be the default mode in Dirac
  • The default for all other offline use cases is to use SQLite files
    • A SQLite snapshot of ONLINE will be (semi-)automatically made once per month, containing the online conditions data for the month.
      • Marco Cl. will adapt the CondDB access mechanism to open the monthly file corresponding to the time of the event being analysed; a sketch of this file selection follows this list
        • The access mechanism should be able to deal with different snapshot periodicities (e.g. weekly, daily)
      • Anyone wanting offline access to ONLINE conditions of the current month will have to use the Oracle access mechanism, or create his/her own snapshot
      • A procedure for creating and deploying the monthly snapshot must be defined and put in place
        • It can be envisaged to produce the snapshot file more frequently, but this obviously requires additional service manpower
    • A SQLite snapshot of DDDB, LHCBCOND, SIMCOND will be made and distributed whenever a new tag is added to these partitions. The current SQLDDDB can be used for the time being (i.e. containing a complete copy of the CondDB, including all tags), but a different procedure will be needed once the size of the complete database becomes too big (e.g. one snapshot file per tag)
  • In the online network at point 8, Oracle access should be possible without prior authentication (how?)
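
A minimal sketch of the file-selection logic mentioned above, assuming a hypothetical file-naming convention; the real procedure for creating and naming the snapshots is still to be defined.

    from datetime import datetime, timedelta

    def snapshot_file(event_time, period="monthly"):
        """Return the ONLINE snapshot file covering the given event time.

        The naming convention is invented; the point is that the access
        mechanism maps an event timestamp to one snapshot of a known period.
        """
        if period == "monthly":
            return event_time.strftime("ONLINE-%Y%m.db")
        if period == "weekly":
            monday = event_time - timedelta(days=event_time.weekday())
            return monday.strftime("ONLINE-week-%Y%m%d.db")
        if period == "daily":
            return event_time.strftime("ONLINE-%Y%m%d.db")
        raise ValueError("unknown snapshot periodicity: %r" % period)

    print(snapshot_file(datetime(2008, 6, 25)))             # ONLINE-200806.db
    print(snapshot_file(datetime(2008, 6, 25), "weekly"))   # ONLINE-week-20080623.db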

See also Brainstorming on CondDB on the GRID that took place in 2006

Content management

Last update: 16th February 2009

  • Currently, database commits are all centralised through Marco Cl. While it is clear that, at least in the beginning, database commits should be done manually by a database manager, it is also clear that Marco cannot run the service single-handed and requires one or more deputies to cover for absences or at busy times. These deputies must be identified and appointed soon.
  • The procedure for validating and deploying updates to the LHCBCOND needs to be formalised. This includes policies for deploying the updates to the reconstruction. The proposed procedure is documented here.
  • There are a number of conditions in LHCBCOND that look online related (e.g. readout maps, dead channels). It probably makes sense to leave these in LHCBCOND since one may want to correct e.g. cabling mistakes found after looking at the data. It should be made clear however that any "fixes" to these conditions have to follow the usual procedure for an update to LHCBCOND, so the time needed for them to make their way back into the HLT is of order hours, not minutes.
  • The procedure for deploying updates to DDDB or LHCBCOND to the HLT must be defined and documented (by whom? Data Quality? HLT?)

-- MarcoCattaneo
