Difference: CalibrationUploadProtocol (1 vs. 6)

Revision 6 - 2009-10-26 - unknown

Line: 1 to 1
 
META TOPICPARENT name="LHCbDetectorAlignment"
Line: 90 to 90
 If we see scaling problems we can deploy it on multiple SEs, but all jobs (running on the express stream) should run at CERN.
Added:
>
>

Run boundaries

Currently, there is no easy way to find the beginning and end of a run. The dirac-bookkeeping-run-informations script will be adapted to also return the begin and end time of a run. You can run the script with

   SetupProject Dirac
   dirac-bookkeeping-run-informations <run_number>
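
For example, to derive an interval of validity covering a range of runs, one could query the first and last run of the range. A minimal sketch (the run numbers below are hypothetical, and the begin/end times in the output assume the adapted version of the script):

   SetupProject Dirac
   # query metadata for the first and last run of the range; once the script
   # is adapted, it should also report each run's begin and end time
   dirac-bookkeeping-run-informations 63949
   dirac-bookkeeping-run-informations 63965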
 

Open issues

A rather non-exhaustive list ...

Revision 5 - 2009-10-02 - MarcoCattaneo

Line: 1 to 1
 
META TOPICPARENT name="LHCbDetectorAlignment"
Line: 15 to 15
 
  • FSR: File summary record
  • SE: storage element
Changed:
<
<

Protocal (first draft)

>
>

Protocol (first draft)

 
  1. alignment/calibration/monitoring runs on ES data on the CAF and produces new xml files (see also below).
Line: 26 to 26
 
  1. the database content manager copies the sqlite file into the official database
  2. tier 1 processing starts
Changed:
<
<
Summary of protocal, to be transformed into a fancy flow diagram:
>
>
Summary of protocol, to be transformed into a fancy flow diagram:
 
| step | process | input | output | where | who |
| 1 | alignment/calibration/monitoring | ES data | xml files, monitoring histos | CAF | sub-detector experts, alignment convenor |
Line: 61 to 61
 with a finite interval of validity. Typically, this will be 'valid from run XXX', but I don't quite understand yet what the requirements are.
Changed:
<
<
=Escher+ contains algorithms to write the alignment constants for all
>
>
Escher contains algorithms to write the alignment constants for all
 subdetectors. The instructions for creating an sqlite layer can be found
Changed:
<
<
[[https://twiki.cern.ch/twiki/bin/view/LHCb/CondDBHowTo#From_XML_files][here].
>
>
[[https://twiki.cern.ch/twiki/bin/view/LHCb/CondDBHowTo#From_XML_files][here]].
 If you run the Escher/options/gaudipariter.py script, the sqlite file will be created automatically.
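A minimal sketch of that workflow, assuming the standard SetupProject environment (the gaudirun.py invocation and the $ESCHEROPTS path are assumptions, so check your Escher installation):

   SetupProject Escher
   # launch the alignment job; gaudipariter.py also writes the sqlite layer
   # (running it through gaudirun.py is an assumption; the script may instead
   # be launched directly with python)
   gaudirun.py $ESCHEROPTS/gaudipariter.py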

Revision 4 - 2009-09-02 - unknown

Line: 1 to 1
 
META TOPICPARENT name="LHCbDetectorAlignment"
Changed:
<
<
ES: express stream; DCM: database content manager; FSR: File summary record
>
>
 
Changed:
<
<

Protocal (first draft)

>
>

Procedure for obtaining, validating and deploying alignment constants


Glossary

 
Changed:
<
<
Procedure for obtaining and validating calibation constants using Express Stream (ES) and Cern Analysis Farm (CAF)
>
>
  • CAF: Cern Analysis Farm
  • ES: express stream
  • DCM: database content manager
  • FSR: File summary record
  • SE: storage element

Protocal (first draft)

 
Changed:
<
<
  • alignment/calibration/monitoring runs on CAF, produces new xml files
  • xml files are stored in database layer for LHCbCond with certain interval of validity
  • job runs again, using database layer as input
  • iterate
  • if monitoring results acceptable, give the database layer to DCM, who will include it
  • production team processes the data at Tier1 after green light from monitoring
>
>
  1. alignment/calibration/monitoring runs on ES data on the CAF and produces new xml files (see also below).
  2. the xml constants are collected into a single sqlite file with a corresponding interval of validity; the result is copied to a grid storage element (see instructions below).
  3. the production team runs a test production over the ES data.
  4. the monitoring team looks at the histograms from this production and signs off.
  5. the database content manager copies the sqlite file into the official database.
  6. tier 1 processing starts.

Summary of protocal, to be transformed into a fancy flow diagram:

| step | process | input | output | where | who |
| 1 | alignment/calibration/monitoring | ES data | xml files, monitoring histos | CAF | sub-detector experts, alignment convenor |
| 2 | creating temporary DB for production | xml files | sqlite file on storage element | - | alignment convenor |
| 3 | test production | ES data, sqlite file | logfiles, monitoring histos | cern tier1 or CAF ? | production team |
| 4 | sign-off | logfiles, monitoring histos | green/red light | - | monitoring team |
| 5 | updating official DB for production | sqlite file | updated DB | - | database contents manager |
| 6 | tier 1 processing | full data stream, updated DB | dst | tier1 | production team |

Various steps in testing this

  • What was done?
    • running production using an sqlite file on a storage element
  • What has not yet been done, among others?
    • writing an sqlite file with a finite interval of validity.
    • interaction with monitoring team
    • finalizing ES trigger
 
Changed:
<
<

Running the prompt calibration on the CAF

>
>

Step 1. Running the prompt calibration on the CAF

  There are two scenarios.
Line: 23 to 53
  2. More realistically, certainly in the beginning, the alignment constants will be produced by non-automated tasks, probably performed by different people for different subdetectors. The alignment convenor collects the database constants and creates a database layer. The production team runs a standard Brunel task with the new layer, such that the monitoring group can look at the result even before anything is added to the 'official' database.
Changed:
<
<

Various steps in testing this

>
>

Step 2. Handing over alignment xml to production team

 
Changed:
<
<
* write a workflow for alignment on ES on the CAF. can the production team run jobs on the CAF? do we want this to be handles b
>
>

Creating the sqlite file

 
Changed:
<
<
* write a workflow for monitoring on ES on the CAF. shouldn't this just be the same job? if we make this a brunel job, can we write this first?
>
>
The alignment xml is written into a sqlite file as a database layer with a finite interval of validity. Typically, this will be 'valid from run XXX', but I don't quite understand yet what the requirements are.
 
Changed:
<
<
* give the production team a new data base layer and let them reprocess the ES with this. at least it will allow us to test if the monitoring sees the improvements.
>
>
=Escher+ contains algorithms to write the alignment constants for all subdetectors. The instructions for creating an sqlite layer can be found [[https://twiki.cern.ch/twiki/bin/view/LHCb/CondDBHowTo#From_XML_files][here]. If you run the Escher/options/gaudipariter.py script, the sqlite file will be created automatically.
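As an illustration only, a hedged sketch of creating the layer by hand (the copy_file_to_db.py option names, source directory and target folder are assumptions; take the actual invocation from the how-to linked above):

   SetupProject LHCb
   # collect the alignment xml from the local 'xml' directory into a sqlite
   # layer for LHCBCOND (option names and destination folder are assumptions)
   copy_file_to_db.py -c sqlite_file:LHCBCOND.db/LHCBCOND -s xml /
   # whether an interval of validity can be set at this step is an open issue
   # (see 'Open issues' below)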
 
Deleted:
<
<
* create a database layer with a finite interval of validity. This requires some changes to create_lhcb_cond. How can we extract the interval of validy for the constants for a particular run?
 
Changed:
<
<

Copying a new alignment file to a storage element

>
>

Copying the sqlite file to a storage element

To use the sqlite file in production, it must be copied to a storage element (SE) using the following DIRAC command:

 
Deleted:
<
<
The file should be uploaded to a Storage Element using the following DIRAC command:
 
SetupProject Dirac
dirac-dms-add-file <LFN> <local file> <SE>
Line: 51 to 90
 If we see scaling problems we can deploy it on multiple SEs, but all jobs (running on the express stream) should run at CERN.
Added:
>
>

Open issues

A rather non-exhaustive list ...

  • Does the latest version of copy_file_to_db already allow setting the interval of validity?
  • How do we extract the interval of validity for a run range? Or for the start of a particular run?
 
Changed:
<
<
-- WouterHulsbergen - 30 Jun 2009
>
>
-- WouterHulsbergen - 01 September 2009

Revision 3 - 2009-07-01 - WouterHulsbergen

Line: 1 to 1
 
META TOPICPARENT name="LHCbDetectorAlignment"
Deleted:
<
<
 ES: express stream; DCM: database content manager; FSR: File summary record
Line: 35 to 33
  * create a database layer with a finite interval of validity. This requires some changes to create_lhcb_cond. How can we extract the interval of validy for the constants for a particular run?
Added:
>
>

Copying a new alignment file to a storage element

The file should be uploaded to a Storage Element using the following DIRAC command:

SetupProject Dirac
dirac-dms-add-file <LFN> <local file> <SE>

with for example

 
  <LFN> = /lhcb/user/w/wouter/Alignment/Alignment_20090701.db
  <local file> = Alignment_20090701.db
  <SE> = CERN-USER

If we see scaling problems we can deploy it on multiple SEs, but all jobs (running on the express stream) should run at CERN.
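
To check where the file ended up, and to spread it over more SEs if needed, the standard DIRAC data-management commands can be used. A sketch (the second SE name is only an example):

SetupProject Dirac
# list the replicas of the uploaded file
dirac-dms-lfn-replicas /lhcb/user/w/wouter/Alignment/Alignment_20090701.db
# replicate to an additional SE in case of scaling problems (example SE)
dirac-dms-replicate-lfn /lhcb/user/w/wouter/Alignment/Alignment_20090701.db CNAF-USER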

 

-- WouterHulsbergen - 30 Jun 2009

Revision 2 - 2009-07-01 - WouterHulsbergen

Line: 1 to 1
 
META TOPICPARENT name="LHCbDetectorAlignment"

Revision 1 - 2009-06-30 - WouterHulsbergen

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="LHCbDetectorAlignment"

ES: express stream; DCM: database content manager; FSR: File summary record

Protocal (first draft)

Procedure for obtaining and validating calibation constants using Express Stream (ES) and Cern Analysis Farm (CAF)

  • alignment/calibration/monitoring runs on CAF, produces new xml files
  • xml files are stored in database layer for LHCbCond with certain interval of validity
  • job runs again, using database layer as input
  • iterate
  • if monitoring results acceptable, give the database layer to DCM, who will include it
  • production team processes the data at Tier1 after green light from monitoring

Running the prompt calibration on the CAF

There are two scenarios.

1. Ultimately, the alignment/calibration process should run without too much manual intervention. The easiest is if this is a production task. The idea is that the express stream will be produced by several parallel tasks that write DSTs with alignment derivatives in the FSR. There will then be a single 'update' task that reads the FSR DSTs and spits out a new database layer. This process can eventually be repeated if the monitoring group decides that more iterations are needed.

2. More realistically, certainly in the beginning, the alignment constants will be produced by non-automated tasks, probably performed by different people for different subdetectors. The alignment convenor collects the database constants and creates a database layer. The production team runs a standard Brunel task with the new layer, such that the monitoring group can look at the result even before anything is added to the 'official' database.

Various steps in testing this

* write a workflow for alignment on ES on the CAF. can the production team run jobs on the CAF? do we want this to be handles b

* write a workflow for monitoring on ES on the CAF. shouldn't this just be the same job? if we make this a brunel job, can we write this first?

* give the production team a new data base layer and let them reprocess the ES with this. at least it will allow us to test if the monitoring sees the improvements.

* create a database layer with a finite interval of validity. This requires some changes to create_lhcb_cond. How can we extract the interval of validy for the constants for a particular run?

-- WouterHulsbergen - 30 Jun 2009

 