Difference: CondDBOpenIssues (1 vs. 9)

Revision 9 (2013-03-20) - IllyaShapoval

Line: 1 to 1
 
META TOPICPARENT name="LHCbComputing"
Added:
>
>

MOST OF THE ISSUES ARE RESOLVED NOW.

 

Conditions database open issues

This page lists issues related to deployment of the Conditions database (CondDB).

Revision 8 (2009-02-16) - MarcoCattaneo

Line: 1 to 1
 
META TOPICPARENT name="LHCbComputing"

Conditions database open issues

This page lists issues related to deployment of the Conditions database (CondDB).
Line: 97 to 97
 See also Brainstorming on CondDB on the GRID that took place in 2006

Content management

Changed:
<
<
Last update: 26th June 2008
>
>
Last update: 16th February 2009
 
  • Currently, database commits are all centralised through Marco Cl. - while it is clear that, at least in the beginning, database commits should be done manually by a database manager, it is also clear that Marco cannot run the service single handed and requires one or more deputies to cover for absences or at busy times. These deputies must be identified and appointed soon.
Changed:
<
<
  • The procedure for validating and deploying updates to the LHCBCOND needs to be formalised. This includes policies for deploying the updates to the reconstruction. It is expected that this procedure will be defined by the Data Quality group
>
>
  • The procedure for validating and deploying updates to the LHCBCOND needs to be formalised. This includes policies for deploying the updates to the reconstruction. The proposed procedure is documented here.
 
  • There are a number of conditions in LHCBCOND that look online related (e.g. readout maps, dead channels). It probably makes sense to leave these in LHCBCOND since one may want to correct e.g. cabling mistakes found after looking at the data. It should be made clear however that any "fixes" to these conditions have to follow the usual procedure for an update to LHCBCOND, so the time needed for them to make their way back into the HLT is of order hours, not minutes.
  • The procedure for deploying updates to DDDB or LHCBCOND to the HLT must be defined and documented (by whom? Data Quality? HLT?)

Revision 7 (2008-07-08) - MarcoCattaneo

Line: 1 to 1
 
META TOPICPARENT name="LHCbComputing"

Conditions database open issues

This page lists issues related to deployment of the Conditions database (CondDB).
Line: 44 to 44
 
    • Note that, since these components are shared by HLT and offline code, it is not possible with this mechanism to run, in the same job, the HLT using ONLINE values and an analysis using LHCBCOND values

Conditions for simulation

Changed:
<
<
Last update: 26th June 2008
>
>
Last update: 8th July 2008
 
  • The simulation (and reconstruction of simulated data) will get a copy of the necessary online conditions from a SIMCOND database. The directory structure of this database is identical to that of ONLINE. SIMCOND also contains a copy of the relevant conditions from LHCBCOND. Only DDDB is shared for real and simulated data reconstruction and analysis.
Changed:
<
<
  • Marco Cl. is going to provide a first version of SIMCOND containing the existing ONLINE data structure, and the conditions included in the latest LHCBCOND tag.
    • A prototype is needed before end July, to allow commissioning of the offline software with realistic 2008 geometry and conditions
>
>
  • Marco Cl. has provided a first version of SIMCOND containing the existing ONLINE magnetic field conditions, and the conditions included in the latest LHCBCOND tag. It is released with SQLDDDB v4r5
    • This prototype can be used to commission the access to ONLINE conditions from the offline software
 
    • It is agreed that several tags will be provided for various simulation use cases, such as Magnetic Field on/off, Velo open/closed, ideal or survey detector positions.
    • Each different combination of simulation conditions will give rise to a new SIMCOND tag
      • A side effect of this is that only one combination of simulation conditions can be analysed in a given job.
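To make the one-tag-per-combination rule concrete, here is a minimal Python sketch; all tag names and condition labels are invented for illustration and are not real SIMCOND tags.

```python
# Hypothetical mapping: each combination of simulation conditions has
# exactly one SIMCOND tag, so a job picks one combination up front.
SIMCOND_TAGS = {
    ("field-on", "velo-closed", "survey"): "sim-field-on-vc-survey",
    ("field-off", "velo-open", "ideal"): "sim-field-off-vo-ideal",
}

def simcond_tag(field, velo, geometry):
    """Return the unique SIMCOND tag for one combination of conditions."""
    key = (field, velo, geometry)
    if key not in SIMCOND_TAGS:
        # A new combination requires creating and releasing a new tag.
        raise KeyError(f"no SIMCOND tag for {key}")
    return SIMCOND_TAGS[key]

# A job configures a single tag, hence a single combination, for all events:
# simcond_tag("field-on", "velo-closed", "survey") -> 'sim-field-on-vc-survey'
```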
Line: 59 to 59
 See also: Gloria's talk on 20th June 2008 (33rd software week)

Survey for simulation

Changed:
<
<
Last update: 26th June 2008
>
>
Last update: 8th July 2008
 
  • It is agreed that, ultimately, the 2008 simulation should simulate a realistic detector based on the surveyed detector, not the ideal ("Optimisation TDR") detector simulated in DC06
  • The current geometry structure is such that the surveyed detector cannot be simulated for TT/IT/OT. A workaround exists for the other detectors
Line: 67 to 67
 
      • Wouter and Jan will look into this for OT
      • There is currently no manpower to adapt the TT/IT geometry
    • A longer term solution is to reimplement the geometry conversion software so that any LHCb geometry can be converted into a GEANT4 geometry
Changed:
<
<
      • Manpower for this has been identified (new group joining LHCb) and will be available from September
>
>
      • M.Pappagallo from the Bari group (new group joining LHCb) will work on this from September
 
  • Any geometry to be simulated in Gauss must be free of overlaps. It is agreed that the survey geometry must be made overlap free
    • This is a responsibility of the sub-detectors, to be coordinated in the Gauss meeting
    • Manpower has been identified to do the validation work for the common infrastructure (cavern and magnet by a summer student, beam pipe by a Ph.D. student to be supervised by Gloria)
Added:
>
>
The status of the geometry for the simulation is tracked here
 

Access to CondDB

Last update: 26th June 2008

Revision 6 (2008-06-26) - MarcoCattaneo

Line: 1 to 1
 
META TOPICPARENT name="LHCbComputing"

Conditions database open issues

This page lists issues related to deployment of the Conditions database (CondDB).
Line: 8 to 8
 

Online conditions

Last update: 25th June 2008
Changed:
<
<

Insertion mechanism

>
>

Synchronisation issues

Synchronisation with PVSS

 Online conditions are single version conditions which are inserted into the ONLINE partition of the CondDB, starting from PVSS data points.
  • Carmen and Marco Cl. have a working mechanism for the automatic insertion
  • Currently the ONLINE partition does not contain any of the conditions required by the HLT
Added:
>
>
    • A different mechanism from the automatic insertion is needed, see below

Synchronisation with offline

The CondDB has two different masters, depending on the partition. For ONLINE, the master is the Oracle service at the pit, for the other partitions it is the Oracle service at CERN/IT. The databases are synchronised with each other, and with the copies at Tier1s, using Oracle streams.
  • The streaming between the pit and CERN/IT is not yet operational. Radu is following this.
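As a minimal sketch of this layout (the endpoint names are placeholders, not real service addresses):

```python
# Placeholder endpoints: ONLINE is mastered at the pit, the other
# partitions at CERN/IT; Tier1 copies are read-only replicas kept in
# sync by Oracle streams.
CONDDB_MASTER = {
    "ONLINE": "oracle://lhcb-pit",      # hypothetical endpoint
    "DDDB": "oracle://cern-it",         # hypothetical endpoint
    "LHCBCOND": "oracle://cern-it",
    "SIMCOND": "oracle://cern-it",
}
```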
 

HLT requirements

  • The HLT requires a very limited number of online conditions:
Line: 19 to 25
 
      • Carmen and Marco Cl. are working to make these values available in ONLINE
      • Adlene is responsible for interfacing these to the Magnetic Field Service
    • Conditions to determine the absolute Velo Halves positions relative to LHCb (Velo "Resolver" (a.k.a. stepper motor) readings)
Changed:
<
<
      • Thomas will check who in the Velo group is specifying what to insert into ONLINE and on interfacing to the Velo Detector Element.
>
>
      • Silvia Borghi is responsible for specifying what the Velo should insert into ONLINE and for interfacing it to the Velo Detector Element.
 
  • The HLT requires that the online conditions which it uses have a constant value during a given run.
Changed:
<
<
    • The values will be downloaded to the HLT at run change
>
>
    • The values will be downloaded to the HLT at run change.
      • Marco Cl. and Clara are working on the download mechanism
 
    • Even if the PVSS values change during the run, only the value downloaded to the HLT at run change will be inserted into the ONLINE CondDB partition. This is different from the standard ONLINE updating, which happens whenever the PVSS values have changed by more than some pre-defined fraction. The two policies are contrasted in the sketch after this list.
Changed:
<
<
      • How is this different mechanism implemented, and by who?
>
>
      • Carmen, Marco Cl. and Clara are working on this update mechanism.
 
      • Is it clear to everybody that the finer grained (i.e. finer than a run) history of these conditions will only be available in the PVSS archive?
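The two insertion policies can be contrasted in a short sketch; the function names and the example threshold are assumptions for illustration, not the actual PVSS/CondDB implementation.

```python
# Sketch of the two ONLINE insertion policies described above.
# Names and the default 1% fraction are illustrative assumptions.

def standard_online_update(last_db_value, pvss_value, fraction=0.01):
    """Standard ONLINE updating: insert whenever the PVSS value has
    changed by more than a pre-defined fraction of the stored value."""
    if last_db_value is None:           # nothing stored yet
        return True
    return abs(pvss_value - last_db_value) > fraction * abs(last_db_value)

def hlt_condition_update(current_run, last_inserted_run):
    """HLT-used conditions: insert only the value downloaded at run
    change, so the condition stays constant for the whole run."""
    return current_run != last_inserted_run
```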

Offline requirements

Changed:
<
<
  • The offline code should not use directly any conditions from the ONLINE CondDB partition. This is because the calibration of the readings may need to be refined (not possible for single value conditions), or because some readings may be wrong due to a malfuction of the readout software
  • On the other hand, at least the first version of the condition used offline must be derived from the online value using a default calibration. This allows reconstruction of events as soon as they are taken (e.g. in the monitoring farm or in the first pass reconstruction on the grid)
  • Since we want to share code online in the HLT and offline, the location of the condition to use should be passed as a job option to the relevant object (e.g. MagneticFieldSvc, VeloDet)
>
>
  • The offline code should not normally use conditions from the ONLINE CondDB partition directly, because the online readings may need to be corrected or recalibrated (not possible for single value conditions). Rather, it should use an offline number derived from an ONLINE condition by applying a correction stored in LHCBCOND, or take from LHCBCOND an offline number that replaces the ONLINE one once the calibration has been done.
    • For conditions used in the HLT, it must be possible to continue to use the uncorrected ONLINE number even when an offline correction is available, so that the HLT decision can be reproduced offline.
    • It should be possible to use the uncorrected ONLINE values if an LHCBCOND correction is not yet available. This is typically the case when running the prompt reconstruction soon after data taking. One possibility is to initialise the LHCBCOND correction as a null correction (see the sketch after this list).
  • Since the HLT has to run both online and offline, and since we want to share code between the online HLT and the offline analysis, it should be possible to tell the relevant objects (e.g. MagneticFieldSvc, VeloDet) whether to take their conditions uncorrected from ONLINE or corrected from LHCBCOND.
 
  • Currently, neither the Magnetic Field Service nor the Velo detector element have this mechanism in place
Changed:
<
<
    • It is the responsibility of the Magnet and Velo groups to provide code for this
>
>
      • It is the responsibility of the Magnet and Velo groups to provide code for this after discussion with Marco Cl
    • Note that, since these components are shared by HLT and offline code, it is not possible with this mechanism to run, in the same job, the HLT using ONLINE values and an analysis using LHCBCOND values
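A minimal sketch of this correction scheme, with invented names (offline_value, the (offset, scale) form of the correction): the raw ONLINE reading is passed through an LHCBCOND correction that defaults to the identity (null) correction until a calibration exists.

```python
# Hedged sketch (names invented): derive the offline value from the raw
# ONLINE reading via a correction stored in LHCBCOND. The null correction
# reproduces the uncorrected ONLINE value, as needed both for prompt
# reconstruction before any calibration and for re-running the HLT
# decision offline.

def offline_value(online_reading, correction=None):
    """`correction` is an (offset, scale) pair from LHCBCOND, or None."""
    if correction is None:              # null correction: ONLINE as-is
        return online_reading
    offset, scale = correction
    return offset + scale * online_reading

# Re-running the HLT offline:   offline_value(x)              -> x
# After an offline calibration: offline_value(x, (0.0, 1.02)) -> 1.02 * x
```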
 

Conditions for simulation

Last update: 26th June 2008
Line: 75 to 85
 
  • The default for all other offline use cases is to use SQLite files
    • A SQLite snapshot of ONLINE will be (semi-)automatically made once per month, containing the online conditions data for the month.
      • Marco Cl. will adapt the CondDB access mechanism to open the monthly file corresponding to the time of the event being analysed.
Added:
>
>
      • The access mechanism should be able to deal with different snapshot periodicities (e.g. weekly, daily); see the sketch after this list.
 
      • Anyone wanting offline access to ONLINE conditions of the current month will have to use the Oracle access mechanism, or create his/her own snapshot
      • A procedure for creating and deploying the monthly snapshot must be defined and put in place
Added:
>
>
        • It can be envisaged to produce the snapshot file more frequently, but this obviously requires additional service manpower
 
    • A SQLite snapshot of DDDB, LHCBCOND, SIMCOND will be made and distributed whenever a new tag is added to these partitions. The current SQLDDDB can be used for the time being (i.e. containing a complete copy of the CondDB, including all tags), but a different procedure will be needed once the size of the complete database becomes too big (e.g. one snapshot file per tag)
  • In the online network at point 8, Oracle access should be possible without prior authentication (how)?
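As a sketch of what the adapted access mechanism might do, the snapshot file covering the time of the event being analysed can be resolved for a configurable periodicity; the file-naming scheme below is an assumption, not the real one.

```python
# Hypothetical naming scheme for ONLINE snapshot files. Supports the
# monthly default plus the other periodicities mentioned above.

from datetime import datetime

def online_snapshot(event_time, periodicity="monthly"):
    """Name of the SQLite snapshot file covering `event_time`."""
    formats = {"monthly": "%Y-%m", "weekly": "%Y-W%W", "daily": "%Y-%m-%d"}
    if periodicity not in formats:
        raise ValueError("unknown periodicity: %s" % periodicity)
    return "ONLINE-%s.db" % event_time.strftime(formats[periodicity])

# online_snapshot(datetime(2008, 7, 8)) -> 'ONLINE-2008-07.db'
```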

Revision 5 (2008-06-26) - MarcoCattaneo

Line: 1 to 1
 
META TOPICPARENT name="LHCbComputing"

Conditions database open issues

This page lists issues related to deployment of the Conditions database (CondDB).
Line: 22 to 22
 
      • Thomas will check who in the Velo group is specifying what to insert into ONLINE and on interfacing to the Velo Detector Element.
  • The HLT requires that the online conditions which it uses have a constant value during a given run.
    • The values will be downloaded to the HLT at run change
Changed:
<
<
    • Even if the PVSS values change during the run, only the value downloaded at run change will be inserted into the CondDB
      • How is this mechanism implemented, and by who?
>
>
    • Even if the PVSS values change during the run, only the value downloaded to the HLT at run change will be inserted into the ONLINE CondDB partition. This is different from the standard ONLINE updating, which happens whenever the PVSS values have changed by more than some pre-defined fraction.
      • How is this different mechanism implemented, and by who?
 
      • Is it clear to everybody that the finer grained (i.e. finer than a run) history of these conditions will only be available in the PVSS archive?

Offline requirements

Revision 4 (2008-06-26) - MarcoCattaneo

Line: 1 to 1
 
META TOPICPARENT name="LHCbComputing"

Conditions database open issues

This page lists issues related to deployment of the Conditions database (CondDB).
Line: 85 to 85
 

Content management

Last update: 26th June 2008
Changed:
<
<
  • Currently, database commits are all centralised through Marco Cl. - while it is clear that, at least in the beginning, database commits should be done manually by a database manager, it is also clear that Marco cannot run the service single handed and requires a deputy to cover for absences or at busy times. This deputy must be identified and appointed soon.
  • The procedure for validating and deploying updates to the LHCbCond needs to be formalised. This includes policies for deploying the updates to the reconstruction. It would be good to document a procedure that could be tried out in a "dress rehearsal"
  • There are a number of conditions in LHCbCond that look online related (e.g. readout maps, dead channels). It probably makes sense to leave these in LHCbCond since one may want to correct e.g. cabling mistakes found after looking at the data. It should be made clear however that any "fixes" to these conditions have to follow the usual procedure for an update to LHCbCond, so the time needed for them to make their way back into the HLT is of order hours, not minutes.
>
>
  • Currently, database commits are all centralised through Marco Cl. - while it is clear that, at least in the beginning, database commits should be done manually by a database manager, it is also clear that Marco cannot run the service single handed and requires one or more deputies to cover for absences or at busy times. These deputies must be identified and appointed soon.
  • The procedure for validating and deploying updates to the LHCBCOND needs to be formalised. This includes policies for deploying the updates to the reconstruction. It is expected that this procedure will be defined by the Data Quality group
  • There are a number of conditions in LHCBCOND that look online related (e.g. readout maps, dead channels). It probably makes sense to leave these in LHCBCOND since one may want to correct e.g. cabling mistakes found after looking at the data. It should be made clear however that any "fixes" to these conditions have to follow the usual procedure for an update to LHCBCOND, so the time needed for them to make their way back into the HLT is of order hours, not minutes.
  • The procedure for deploying updates to DDDB or LHCBCOND to the HLT must be defined and documented (by whom? Data Quality? HLT?)
 
Changed:
<
<
-- MarcoCattaneo - 25 Jun 2008
>
>
-- MarcoCattaneo
 \ No newline at end of file

Revision 3 (2008-06-26) - MarcoCattaneo

Line: 1 to 1
 
META TOPICPARENT name="LHCbComputing"

Conditions database open issues

This page lists issues related to deployment of the Conditions database (CondDB).
Line: 27 to 27
 
      • Is it clear to everybody that the finer grained (i.e. finer than a run) history of these conditions will only be available in the PVSS archive?

Offline requirements

Changed:
<
<
  • The offline code should not use directly any conditions from the ONLINE CondDB partition. This is because the calibration of the readings may need to be refined (not possible for single value conditions), or because some readings may be wrong due to a malfuction of the readout softwar
>
>
  • The offline code should not use directly any conditions from the ONLINE CondDB partition. This is because the calibration of the readings may need to be refined (not possible for single value conditions), or because some readings may be wrong due to a malfuction of the readout software
 
  • On the other hand, at least the first version of the condition used offline must be derived from the online value using a default calibration. This allows reconstruction of events as soon as they are taken (e.g. in the monitoring farm or in the first pass reconstruction on the grid)
Changed:
<
<
  • Since we want to share code online in the HLT and offline, the location of the condition to use should be passed as a job option to the relevant object (e.g. !MagneticFeldSvc, _VeloDet_)
>
>
  • Since we want to share code online in the HLT and offline, the location of the condition to use should be passed as a job option to the relevant object (e.g. MagneticFieldSvc, VeloDet)
 
  • Currently, neither the Magnetic Field Service nor the Velo detector element have this mechanism in place
    • It is the responsibility of the Magnet and Velo groups to provide code for this

Conditions for simulation

Changed:
<
<
Last update: 25th June 2008
  • The simulation (and reconstruction of simulated data) will get a copy of the necessary online conditions from a SimCondDB database. The directory structure of this database is identical to that of OnlineCondDB.
  • A prototype of this DB is needed to allow commissioning of the offline software
  • The simulation (and reconstruction of simulated data) does not see the LHCbCond. SimCondDB must contain a copy of the relevant offline conditions from LHCbCond, with appropriate tags and validity. An inventory of these conditions must be done, and the SimCondDB populated.
  • When reconstructing or analysing simulated data, Brunel and DaVinci should access SimCondDB and not LHCbCond. Currently this has to be set up manually, should it be automated? Can it?
>
>
Last update: 26th June 2008

  • The simulation (and reconstruction of simulated data) will get a copy of the necessary online conditions from a SIMCOND database. The directory structure of this database is identical to that of ONLINE. SIMCOND also contains a copy of the relevant conditions from LHCBCOND. Only DDDB is shared for real and simulated data reconstruction and analysis.
  • Marco Cl. is going to provide a first version of SIMCOND containing the existing ONLINE data structure, and the conditions included in the latest LHCBCOND tag.
    • A prototype is needed before end July, to allow commissioning of the offline software with realistic 2008 geometry and conditions
    • It is agreed that several tags will be provided for various simulation use cases, such as Magnetic Field on/off, Velo open/closed, ideal or survey detector positions.
    • Each different combination of simulation conditions will give rise to a new SIMCOND tag
      • A side effect of this is that only one combination of simulation conditions can be analysed in a given job.
  • When reconstructing or analysing simulated data, Brunel and DaVinci should access SIMCOND and not LHCBCOND. Marco Ca. will set this up.
    • The choice of database and of database tag will have to be set up manually to be consistent with the input data. Though it would be desirable to automate this, a technical solution does not currently exist
  • The content of SIMCOND has to be maintained separately from ONLINE and LHCBCOND. This is considered part of the validation to be done before any new simulation. Thomas will have an applied fellow look after this.

See also: Gloria's talk on 20th June 2008 (33rd software week)

 

Survey for simulation

Changed:
<
<
Last update: 25th June 2008
  • Current situation is that we cannot simulate survey for IT/TT/OT
  • Solving this in Gauss/GiGa is possible but requires major work from experts who are not available on a short timescale.
  • Wouter is going to investigate fixing this in the Xml for OT. IT/TT to be discussed
  • Work is required by all sub-detectors to make the Survey geometry overlap free
  • Who does the validation work for the common infrastructure (cavern, magnet, beam pipe etc.)? New manpower?
>
>
Last update: 26th June 2008
 
Changed:
<
<

Access to CondDB

Last update: 25th June 2008
  • OnlineCondDB is currently only an Oracle DB, synchronised on the Tier1s via Oracle streams, though this synchronisation is not yet switched on. The master copy of DDDB and LHCbCond is already in Oracle and synchrnised on Tier 1s, and was used for the recent CCRC. SQLite snapshots of DDDB and LHCbCond are regularly released (SQLDDDB). Currently the snaphots are not snapshots at all, they contain the whole database; in future it will become necessary to be selective because of the growing size of the database, a policy is then needed for what to put in the snapshots. Snaphots of the OnlineCondDB would also be possible but would be needed very frequently to apply to the latest data.
  • The alternative to SQLite slices is direct connection to the OracleDB, this has always been the baseline design. For the copy at the pit, on the LHCb internal network, one could imagine having anonymous read access to the database, but in the offline world a secure access is mandatory. This is implemented as a username/password pair which is obtained from a grid service, and thus requires authentication via a grid certificate. If this becomes the default access mechanism, every job (interactive, batch, grid) will have to be executed with a valid grid certificate. SQLite snapshots would still be needed (e.g. to work disconnected or on Windows or for simulation on Tier 2), but they could be created on demand depending on the dataset to be analysed
  • A decision must be taken on the default way of working. Should the default be Oracle (and so grid certificate for everybody) or is it SQLite slices (in which case a release procedure for slices, and in particular OnlineCondDB slices, must be defined)
>
>
  • It is agreed that, ultimately, the 2008 simulation should simulate a realistic detector based on the surveyed detector, not the ideal ("Optimisation TDR") detector simulated in DC06
  • The current geometry structure is such that the surveyed detector cannot be simulated for TT/IT/OT. A workaround exists for the other detectors
    • The short term solution is to adapt the TT/IT/OT geometry either to use the same workaround as the other detectors, or to put the survey geometry as the baseline in DDDB (as opposed to a correction to the baseline in LHCBCOND)
      • Wouter and Jan will look into this for OT
      • There is currently no manpower to adapt the TT/IT geometry
    • A longer term solution is to reimplement the geometry conversion software so that any LHCb geometry can be converted into a GEANT4 geometry
      • Manpower for this has been identified (new group joining LHCb) and will be available from September
  • Any geometry to be simulated in Gauss must be free of overlaps. It is agreed that the survey geometry must be made overlap free
    • This is a responsibility of the sub-detectors, to be coordinated in the Gauss meeting
    • Manpower has been identified to do the validation work for the common infrastructure (cavern and magnet by a summer student, beam pipe by a Ph.D. student to be supervised by Gloria)

Access to CondDB

Last update: 26th June 2008

The ONLINE partition of CondDB is currently only an Oracle DB, to be synchronised on the Tier1s via Oracle streams, though this synchronisation is not yet switched on. The master copy of DDDB and LHCBCOND is already in Oracle and synchronised on Tier 1s, and was used for the recent CCRC. SQLite snapshots of DDDB and LHCBCOND are regularly released (SQLDDDB). Currently the snapshots are not snapshots at all: they contain the whole database. In future it will become necessary to be selective because of the growing size of the database, and a policy will then be needed for what to put in the snapshots. Snapshots of the ONLINE partition would also be possible but would be needed very frequently to apply to the latest data.

The alternative to SQLite snapshots is direct connection to the Oracle DB; this has always been the baseline design. For the copy at the pit, on the LHCb internal network, one could imagine having anonymous read access to the database, but in the offline world secure access is mandatory. This is implemented as a username/password pair which is obtained from a grid service, and thus requires authentication via a grid certificate. If this becomes the default access mechanism, every job (interactive, batch, grid) will have to be executed with a valid grid certificate. SQLite snapshots would still be needed (e.g. to work disconnected, on Windows, or for simulation at Tier 2), but they could be created on demand depending on the dataset to be analysed.
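A sketch of how a job might resolve its CondDB connection under this scheme; the connection-string forms and the certificate check are illustrative assumptions, not the exact CORAL/COOL syntax.

```python
# Illustrative resolver for the two access modes discussed above.
import os

def conddb_connection(partition, use_oracle):
    """Return a (made-up) connection string for the given partition."""
    if use_oracle:
        # Direct Oracle access: needs a valid grid certificate/proxy to
        # obtain the username/password pair from the grid service.
        if not os.environ.get("X509_USER_PROXY"):   # hypothetical check
            raise RuntimeError("grid credentials required for Oracle access")
        return "oracle://cern-it/%s" % partition    # placeholder form
    # Default offline mode: a local SQLite snapshot, usable disconnected,
    # on Windows, or for simulation at a Tier 2.
    return "sqlite_file:snapshots/%s.db/%s" % (partition, partition)  # placeholder

# Production at Tier1 (Dirac default): conddb_connection("LHCBCOND", True)
# All other offline use cases:         conddb_connection("LHCBCOND", False)
```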

Proposal for CondDB access

The following proposal was discussed at the Core Software meeting on 25th June 2008. It should be validated by the OPG.
  • Production jobs (reconstruction, stripping) running at Tier1 will always access the CondDB from Oracle, which is the master copy and which is guaranteed to have the most up-to-date tags and information. This will be the default mode in Dirac.
  • The default for all other offline use cases is to use SQLite files
    • A SQLite snapshot of ONLINE will be (semi-)automatically made once per month, containing the online conditions data for the month.
      • Marco Cl. will adapt the CondDB access mechanism to open the monthly file corresponding to the time of the event being analysed.
      • Anyone wanting offline access to ONLINE conditions of the current month will have to use the Oracle access mechanism, or create his/her own snapshot
      • A procedure for creating and deploying the monthly snapshot must be defined and put in place
    • A SQLite snapshot of DDDB, LHCBCOND, SIMCOND will be made and distributed whenever a new tag is added to these partitions. The current SQLDDDB can be used for the time being (i.e. containing a complete copy of the CondDB, including all tags), but a different procedure will be needed once the size of the complete database becomes too big (e.g. one snapshot file per tag)
  • In the online network at point 8, Oracle access should be possible without prior authentication (how)?
  See also Brainstorming on CondDB on the GRID that took place in 2006

Content management

Added:
>
>
Last update: 26th June 2008
 
  • Currently, database commits are all centralised through Marco Cl. - while it is clear that, at least in the beginning, database commits should be done manually by a database manager, it is also clear that Marco cannot run the service single handed and requires a deputy to cover for absences or at busy times. This deputy must be identified and appointed soon.
  • The procedure for validating and deploying updates to the LHCbCond needs to be formalised. This includes policies for deploying the updates to the reconstruction. It would be good to document a procedure that could be tried out in a "dress rehearsal"
  • There are a number of conditions in LHCbCond that look online related (e.g. readout maps, dead channels). It probably makes sense to leave these in LHCbCond since one may want to correct e.g. cabling mistakes found after looking at the data. It should be made clear however that any "fixes" to these conditions have to follow the usual procedure for an update to LHCbCond, so the time needed for them to make their way back into the HLT is of order hours, not minutes.

Revision 2 (2008-06-25) - MarcoCattaneo

Line: 1 to 1
 
META TOPICPARENT name="LHCbComputing"

Conditions database open issues

Added:
>
>
This page lists issues related to deployment of the Conditions database (CondDB).
 
Changed:
<
<

Online conditions for HLT

  • Only a very limited number of Online conditions are needed for the HLT: magnetic field scale factor derived from Set Current, Velo position derived from stepper motor.
  • Plan is to download these values to the HLT at run change, and at the same time insert the downloaded values into the OnlineCondDB. These are single version conditions that are only updated when downloaded to the HLT; they are distinct from conditions added to the OnlineCondDB directly by PVSS, for example magnet measured current.
    • Marco Cl., Markus, Clara are working on the downloading mechanism
    • Carmen has defined the xml files and directory structure for these conditions and implemented them in OnlineCondDB, but this is not yet available for use by the physics application software
  • Another client of these conditions is the alignment code
  • The mechanism for updating these conditions for use offline needs to be documented. One requirement of our code is that the same code (HLT, reconstruction) can run both online and offline without changes. But if a parameter is obtained from some location in the (single version) OnlineCondDB, how can the same parameter be obtained from the same location but with an updated value offline? Is the plan to always calculate e.g. the Velo half global position using a formula whose coefficients are in the LHCbCond (so can be recalibrated), and whose variable is the OnlineCondDB value?
>
>

Online conditions

Last update: 25th June 2008

Insertion mechanism

Online conditions are single version conditions which are inserted into the ONLINE partition of the CondDB, starting from PVSS data points.
  • Carmen and Marco Cl. have a working mechanism for the automatic insertion
  • Currently the ONLINE partition does not contain any of the conditions required by the HLT

HLT requirements

  • The HLT requires a very limited number of online conditions:
    • Conditions to determine the Magnetic Field scale factor (Set Current (unsigned), Polarity)
      • Carmen and Marco Cl. are working to make these values available in ONLINE
      • Adlene is responsible for interfacing these to the Magnetic Field Service
    • Conditions to determine the absolute Velo Halves positions relative to LHCb (Velo "Resolver" (a.k.a. stepper motor) readings)
      • Thomas will check who in the Velo group is specifying what to insert into ONLINE and on interfacing to the Velo Detector Element.
  • The HLT requires that the online conditions which it uses have a constant value during a given run.
    • The values will be downloaded to the HLT at run change
    • Even if the PVSS values change during the run, only the value downloaded at run change will be inserted into the CondDB
      • How is this mechanism implemented, and by who?
      • Is it clear to everybody that the finer grained (i.e. finer than a run) history of these conditions will only be available in the PVSS archive?

Offline requirements

  • The offline code should not use directly any conditions from the ONLINE CondDB partition. This is because the calibration of the readings may need to be refined (not possible for single value conditions), or because some readings may be wrong due to a malfuction of the readout softwar
  • On the other hand, at least the first version of the condition used offline must be derived from the online value using a default calibration. This allows reconstruction of events as soon as they are taken (e.g. in the monitoring farm or in the first pass reconstruction on the grid)
  • Since we want to share code online in the HLT and offline, the location of the condition to use should be passed as a job option to the relevant object (e.g. !MagneticFeldSvc, _VeloDet_)
  • Currently, neither the Magnetic Field Service nor the Velo detector element have this mechanism in place
    • It is the responsibility of the Magnet and Velo groups to provide code for this
 

Conditions for simulation

Added:
>
>
Last update: 25th June 2008
 
  • The simulation (and reconstruction of simulated data) will get a copy of the necessary online conditions from a SimCondDB database. The directory structure of this database is identical to that of OnlineCondDB.
  • A prototype of this DB is needed to allow commissioning of the offline software
  • The simulation (and reconstruction of simulated data) does not see the LHCbCond. SimCondDB must contain a copy of the relevant offline conditions from LHCbCond, with appropriate tags and validity. An inventory of these conditions must be done, and the SimCondDB populated.
  • When reconstructing or analysing simulated data, Brunel and DaVinci should access SimCondDB and not LHCbCond. Currently this has to be set up manually, should it be automated? Can it?

Survey for simulation

Added:
>
>
Last update: 25th June 2008
 
  • Current situation is that we cannot simulate survey for IT/TT/OT
  • Solving this in Gauss/GiGa is possible but requires major work from experts who are not available on a short timescale.
  • Wouter is going to investigate fixing this in the Xml for OT. IT/TT to be discussed
Line: 28 to 49
 
  • Who does the validation work for the common infrastructure (cavern, magnet, beam pipe etc.)? New manpower?

Access to CondDB

Added:
>
>
Last update: 25th June 2008
 
  • OnlineCondDB is currently only an Oracle DB, synchronised on the Tier1s via Oracle streams, though this synchronisation is not yet switched on. The master copy of DDDB and LHCbCond is already in Oracle and synchrnised on Tier 1s, and was used for the recent CCRC. SQLite snapshots of DDDB and LHCbCond are regularly released (SQLDDDB). Currently the snaphots are not snapshots at all, they contain the whole database; in future it will become necessary to be selective because of the growing size of the database, a policy is then needed for what to put in the snapshots. Snaphots of the OnlineCondDB would also be possible but would be needed very frequently to apply to the latest data.
  • The alternative to SQLite slices is direct connection to the OracleDB, this has always been the baseline design. For the copy at the pit, on the LHCb internal network, one could imagine having anonymous read access to the database, but in the offline world a secure access is mandatory. This is implemented as a username/password pair which is obtained from a grid service, and thus requires authentication via a grid certificate. If this becomes the default access mechanism, every job (interactive, batch, grid) will have to be executed with a valid grid certificate. SQLite snapshots would still be needed (e.g. to work disconnected or on Windows or for simulation on Tier 2), but they could be created on demand depending on the dataset to be analysed
  • A decision must be taken on the default way of working. Should the default be Oracle (and so grid certificate for everybody) or is it SQLite slices (in which case a release procedure for slices, and in particular OnlineCondDB slices, must be defined)

Revision 1 (2008-06-25) - MarcoCattaneo

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="LHCbComputing"

Conditions database open issues

Online conditions for HLT

  • Only a very limited number of Online conditions are needed for the HLT: magnetic field scale factor derived from Set Current, Velo position derived from stepper motor.
  • Plan is to download these values to the HLT at run change, and at the same time insert the downloaded values into the OnlineCondDB. These are single version conditions that are only updated when downloaded to the HLT; they are distinct from conditions added to the OnlineCondDB directly by PVSS, for example magnet measured current.
    • Marco Cl., Markus, Clara are working on the downloading mechanism
    • Carmen has defined the xml files and directory structure for these conditions and implemented them in OnlineCondDB, but this is not yet available for use by the physics application software
  • Another client of these conditions is the alignment code
  • The mechanism for updating these conditions for use offline needs to be documented. One requirement of our code is that the same code (HLT, reconstruction) can run both online and offline without changes. But if a parameter is obtained from some location in the (single version) OnlineCondDB, how can the same parameter be obtained from the same location but with an updated value offline? Is the plan to always calculate e.g. the Velo half global position using a formula whose coefficients are in the LHCbCond (so can be recalibrated), and whose variable is the OnlineCondDB value?

Conditions for simulation

  • The simulation (and reconstruction of simulated data) will get a copy of the necessary online conditions from a SimCondDB database. The directory structure of this database is identical to that of OnlineCondDB.
  • A prototype of this DB is needed to allow commissioning of the offline software
  • The simulation (and reconstruction of simulated data) does not see the LHCbCond. SimCondDB must contain a copy of the relevant offline conditions from LHCbCond, with appropriate tags and validity. An inventory of these conditions must be done, and the SimCondDB populated.
  • When reconstructing or analysing simulated data, Brunel and DaVinci should access SimCondDB and not LHCbCond. Currently this has to be set up manually, should it be automated? Can it?

Survey for simulation

  • Current situation is that we cannot simulate survey for IT/TT/OT
  • Solving this in Gauss/GiGa is possible but requires major work from experts who are not available on a short timescale.
  • Wouter is going to investigate fixing this in the Xml for OT. IT/TT to be discussed
  • Work is required by all sub-detectors to make the Survey geometry overlap free
  • Who does the validation work for the common infrastructure (cavern, magnet, beam pipe etc.)? New manpower?

Access to CondDB

  • OnlineCondDB is currently only an Oracle DB, synchronised on the Tier1s via Oracle streams, though this synchronisation is not yet switched on. The master copy of DDDB and LHCbCond is already in Oracle and synchrnised on Tier 1s, and was used for the recent CCRC. SQLite snapshots of DDDB and LHCbCond are regularly released (SQLDDDB). Currently the snaphots are not snapshots at all, they contain the whole database; in future it will become necessary to be selective because of the growing size of the database, a policy is then needed for what to put in the snapshots. Snaphots of the OnlineCondDB would also be possible but would be needed very frequently to apply to the latest data.
  • The alternative to SQLite slices is direct connection to the OracleDB, this has always been the baseline design. For the copy at the pit, on the LHCb internal network, one could imagine having anonymous read access to the database, but in the offline world a secure access is mandatory. This is implemented as a username/password pair which is obtained from a grid service, and thus requires authentication via a grid certificate. If this becomes the default access mechanism, every job (interactive, batch, grid) will have to be executed with a valid grid certificate. SQLite snapshots would still be needed (e.g. to work disconnected or on Windows or for simulation on Tier 2), but they could be created on demand depending on the dataset to be analysed
  • A decision must be taken on the default way of working. Should the default be Oracle (and so grid certificate for everybody) or is it SQLite slices (in which case a release procedure for slices, and in particular OnlineCondDB slices, must be defined)

See also Brainstorming on CondDB on the GRID that took place in 2006

Content management

  • Currently, database commits are all centralised through Marco Cl. - while it is clear that, at least in the beginning, database commits should be done manually by a database manager, it is also clear that Marco cannot run the service single handed and requires a deputy to cover for absences or at busy times. This deputy must be identified and appointed soon.
  • The procedure for validating and deploying updates to the LHCbCond needs to be formalised. This includes policies for deploying the updates to the reconstruction. It would be good to document a procedure that could be tried out in a "dress rehearsal"
  • There are a number of conditions in LHCbCond that look online related (e.g. readout maps, dead channels). It probably makes sense to leave these in LHCbCond since one may want to correct e.g. cabling mistakes found after looking at the data. It should be made clear however that any "fixes" to these conditions have to follow the usual procedure for an update to LHCbCond, so the time needed for them to make their way back into the HLT is of order hours, not minutes.

-- MarcoCattaneo - 25 Jun 2008

 