General information
CondDB Project Management Team
The current CondDB and CondDB Upgrade management team consists of the following people: Marco Clemencic, Liang Sun.
Limited patch-management responsibility for the CondDB Upgrade project is conferred on Sajan Easo.
Migration to the new Conditions Database Backend (GitCondDB)
For 2017 data taking we introduced the new backend for conditions data (GitCondDB), but kept the old backend (COOL) to ease the migration.
The decommissioning of the COOL backend is planned for the end of 2017, so until then we must keep the two systems in sync: any new global tag created in either system must be propagated to the other.
This page describes the procedure both for integrating changes in each system and for porting changes from one system to the other.
Git Conditions Database
The management of contributions to Conditions Databases in Git (https://gitlab.cern.ch/lhcb-conddb) is not much different from the workflow of any regular software project.
Changes in GitLab are automatically propagated to the Conditions Databases copies on CVMFS.
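For instance, a typical contribution might look like the following minimal sketch (the data type branch 2017 and the topic branch name are only illustrative):
git clone https://gitlab.cern.ch/lhcb-conddb/LHCBCOND.git
cd LHCBCOND
git checkout 2017                      # start from the relevant data type branch
git checkout -b velo-alignment-fix     # hypothetical topic branch
# ... edit the condition files ...
git commit -a -m "Update VELO alignment conditions"
git push origin velo-alignment-fix     # then open a merge request in GitLab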
Creation of a Global Tag
Once all required changes have been merged in the data type branch, a new global tag can be created via the "create new tag" link in the GitLab project.
The tag message must contain a section enclosed between two lines made of only 3 dashes (---). This section must contain valid YAML code declaring the field datatypes and, optionally, the fields simtypes and recotypes, each containing a list of numbers or strings, one for each data/sim/reco type the tag is meant for.
For example:
---
datatypes: [2017, 2016]
simtypes: [Sim10, Sim09]
recotypes: [Reco14]
---
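Should you prefer the command line, an annotated tag carrying the same message works as well; a minimal sketch (the tag name and message file are illustrative):
git tag -a cond-20170315 -F tag_message.txt   # tag_message.txt contains the YAML section above
git push origin cond-20170315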
Once the tag has been created, it should be ported to the COOL database.
Porting changes from Git to COOL
A GitCondDB version (tag, commit) can be converted to COOL easily by cloning the GitCondDB repository, checking out the tag and using the usual COOL procedure on the resulting directory.
As an example:
git clone https://gitlab.cern.ch/lhcb-conddb/LHCBCOND.git
cd LHCBCOND
git checkout cond-12345678   # the Git tag to be ported
cd ..
lb-run LHCb/latest CondDBAdmin_Commit.py ...
Changes for Upgrade
Upgrade related changes are kept in branches and tags prefixed with upgrade, so the master branch for Upgrade is upgrade/master, etc.
For Upgrade tags, the only allowed data type is Upgrade, so the tag message must always be:
---
datatypes: [Upgrade]
---
Porting changes for Upgrade to MagnetUp
A branch for magnet up, upgrade/magup, is set up to match the contents of upgrade/master except for the magnetic field.
When ready to port changes added to upgrade/master and to make a corresponding mag up tag, one should:
* create a Merge Request with upgrade/master as the source branch and upgrade/magup as the target branch
* double check that the only changes will be those applied to master
* accept the merge request
* create a tag in upgrade/magup corresponding to the one in upgrade/master (see the sketch after this list)
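Once the merge request has been accepted, the tagging step can also be done from the command line; a minimal sketch (the tag name here is purely illustrative):
git checkout upgrade/magup
git pull
git tag -a upgrade/cond-12345678-magup -F tag_message.txt   # message in the format shown above
git push origin tag upgrade/cond-12345678-magup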
COOL Conditions Database
CondDB operation assistance systems
DBS - CondDB deployment and backup system;
ITS - CondDB integrity tracking system;
CTS - CondDB compatibility tracking system.
CondDB release preparation procedures
Committing CondDB patches
In order to commit CondDB (or CondDB Upgrade) patches to the master CondDB (or CondDB Upgrade) databases, one has to use the CondDBAdmin_Commit.py script, available in the LHCb project environment:
lb-run LHCb/latest CondDBAdmin_Commit.py -h
The script accepts the input to be committed in two interchangeable forms:
- XML files
- SQLite database slice
If a patch contains no completely new files, then the tree of the input XML files (or the internal node tree of the SQLite DB slice) has to reproduce the internal tree of the destination CondDB SQLite database; any CondDB node with a wrong or incomplete path will be added as a new node.
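For instance (the path is purely illustrative), to update an existing condition stored in LHCBCOND under Conditions/Velo/Alignment/Global.xml, the SOURCE directory passed to the script would have to mirror that path:
SOURCE/
  Conditions/
    Velo/
      Alignment/
        Global.xml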
The script looks up the destination SQLite database at the $SQLITEDBPATH location, so it is usually a good idea to do echo $SQLITEDBPATH before committing, to cross-check the destination. Currently, two locations are commonly used to commit and test CondDB patches:
$LHCBDEV/DBASE/Det/SQLDDDB/db/
$LHCBDEV/DBASE/Det/SQLDDDB_Upgrade/db/
The first is the standard SQLite CondDB location, which you get when requesting the development environment (lb-run --dev LHCb/latest); the second holds the SQLite CondDB Upgrade files (see the next section).
An example patch commit looks like:
lb-run --dev LHCb/latest bash --norc
echo $SQLITEDBPATH
CondDBAdmin_Commit.py -c "NAME OF REQUESTOR" -P PATCH_NUMBER -t DATATYPES_LIST -m "PATCH DESCRIPTION" SOURCE DESTINATION_PARTITION HEAD LOCAL_TAG_NAME
where
- NAME OF REQUESTOR - full name of the person who submitted the patch
- PATCH_NUMBER - the numeric part of the JIRA ticket (e.g. 589 for the ticket LHCBCNDB-589)
- DATATYPES_LIST - list of data types the patch is meant for (if there are several, use a comma-separated list without spaces, e.g. 2012,2011,test)
- PATCH DESCRIPTION - a short but meaningful patch description, which will appear on the web CondDB Release Notes portal
- SOURCE - path to the source of the changes, containing either XML files or an SQLite DB slice
- DESTINATION_PARTITION - the CondDB destination partition. Currently the following ones are available: DDDB, LHCBCOND, SIMCOND (there is also a fourth one, DQFLAGS, but it is STRONGLY FORBIDDEN to commit anything to it with the CondDBAdmin_Commit.py script; there is a dedicated script for that, see below)
- HEAD - has to stay literally as written; it means that the script will compare the new changes provided in SOURCE to the latest content previously committed to the CondDB
- LOCAL_TAG_NAME - the name of a local tag (e.g. it-20120101)
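For illustration, a fully spelled-out invocation might look like this (the requestor name, description and source path are hypothetical; the ticket number and tag name reuse the examples above):
CondDBAdmin_Commit.py -c "John Doe" -P 589 -t 2012,2011 -m "Updated IT alignment" ./xml_slice LHCBCOND HEAD it-20120101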
After this command is executed, the script diffs the SOURCE against the cumulative database content (at the HEAD of the database in this example) and schedules the real changes for the commit (a difference of a single space in a file is enough for the script to consider it really different). The commit is not performed immediately: the script first presents a commit summary to the user. At this point one has to cross-check all the settings (especially the SOURCE and DESTINATION values) and confirm or reject the commit.
After committing a patch, check the result with CondDBBrowser:
CondDBBrowser PARTITION
If the changes are committed and you are sure everything is correct, it is advisable to create a compressed backup of the destination SQLite file, e.g.:
pbzip2 -vkc $SQLITEDBPATH/PARTITION.db > SOME_BACKUP_LOCATION/PARTITION+LOCAL_TAG_NAME.db.bz2
This may be useful if a subsequent commit fails: you will then be able to roll back to any backed-up CondDB state.
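For example, to restore a partition from such a backup (paths as in the backup command above):
pbzip2 -dvc SOME_BACKUP_LOCATION/PARTITION+LOCAL_TAG_NAME.db.bz2 > $SQLITEDBPATH/PARTITION.db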
Committing CondDB Upgrade patches
The procedure is the same as for the regular CondDB patches, with one difference: you have to change the destination location of the cumulative SQLite CondDB files by hand. So the environment preparation will be:
lb-run --dev LHCb/latest bash --norc
export SQLITEDBPATH=/afs/cern.ch/lhcb/software/DEV/DBASE/Det/SQLDDDB_Upgrade/db
From here on, the procedure is the same as in the previous section.
Committing CondDB DQFlags patches
Prepare the LHCb project environment and check the script options:
lb-run --dev LHCb/latest CondDBAdmin_DQ_Commit.py -h
An example of how to commit a DQ patch is as follows:
lb-run --dev LHCb/latest CondDBAdmin_DQ_Commit.py -c "Marco Adinolfi" -P 9999 -m "VELO flag set as 'BAD' during [2012-08-13_01:19:00, 2012-08-13_06:58:42)." VELO 1 -s 2012-08-13_01:19:00 -u 2012-08-13_06:58:42 velo-20120907
Check the result with CondDBBrowser:
lb-run --dev LHCb/latest CondDBBrowser DQFLAGS
Creating a global tag in CondDB
Prepare the LHCb project environment and check the script options:
lb-run --dev LHCb/latest CondDBAdmin_GlobalTag.py -h
An example of global tagging is as follows:
lb-run --dev LHCb/latest CondDBAdmin_GlobalTag.py -c "Illya Shapoval" -d 2012,HLT LHCBCOND cond-20120831 cond-20120829 rich-20120831-AerogelCalib rich-20120831-MirrorAlign it-20120831

NB: new global tags must be cloned to Git before the SQLite files are published (see the next section).
Porting changes from COOL to GitCondDB
When new global tags are created in COOL databases, they have to be ported to the Git copies.
The tool used originally to clone SQLite COOL databases to Git can be used to update the global tags:
part=LHCBCOND
if [ ! -e ${part} ] ; then
git clone --reference /cvmfs/lhcb.cern.ch/lib/lhcb/git-conddb/${part}.git ssh://git@gitlab.cern.ch:7999/lhcb-conddb/${part}.git
else
(cd ${part} && git checkout master && git pull)
fi
lb-run --dev LHCb/latest \$GITENTITYRESOLVERROOT/utils/make_git_conddb.py --no-head \$SQLITEDBPATH/${part}.db \$SQLITEDBPATH/../doc/release_notes.xml ${part}
# the option --dev to lb-run is needed to get tags not yet published
# For Upgrade use the following instead:
lb-run --dev LHCb/latest \$GITENTITYRESOLVERROOT/utils/make_git_conddb.py --no-head --tag-prefix=upgrade/ \$SQLITEUPGRADEDBPATH/${part}.db \$SQLITEUPGRADEDBPATH/../doc/release_notes.xml ${part}
Check the generated tags (and branches) and push them. On top of the new tags, the script generates some internal branches (branch-N), which should be ignored, and some data type branches (dt-TYPE), which should be pushed to the main repository too.
For example:
cd ${part}
git push origin tag dddb-12345678 dt-2017 master
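What the script has generated can also be inspected beforehand with standard Git commands, e.g. (the tag pattern is illustrative):
git tag --list 'dddb-*'       # newly created tags
git branch --list 'dt-*'      # data type branches to push
git branch --list 'branch-*'  # internal branches, to be ignored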
Releasing CondDB
To make a CondDB release a set of procedures has to be completed in the order stated below.
Releasing SQLite CondDB
The release procedure is simple: copy the new SQLite CondDB files to the CondDB (or CondDB Upgrade) gateway, which is regularly analyzed by dedicated acron jobs. The publishing frequency is "every time the LHCb Online run is finished", but not more often than once per 10 minutes. If the acron job finds the CondDB gateway state changed, it publishes the changed files to the public repository on the web server. From that moment another machinery comes into action, deploying the new files to all release locations: AFS, CVMFS, the LHCb Pit.
So, to make the CondDB release, set the correct gateway path:
set gateway=/afs/cern.ch/lhcb/software/SQLiteMaster/SQLDDDB
OR, in the case of CondDB Upgrade:
set gateway=/afs/cern.ch/lhcb/software/SQLiteMaster/SQLDDDB_Upgrade
and just copy the files in a safe manner to the gateway:
cp -pfv PATH2NEWSQLITEFILE/PARTITION.db $gateway/db/PARTITION.db-tmp~
diff PATH2NEWSQLITEFILE/PARTITION.db $gateway/db/PARTITION.db-tmp~   # verify the copy is identical to the source
cp -pfv PATH2NEWSQLITEFILE/../doc/release_notes.xml $gateway/doc/release_notes.xml-tmp~
mv -f $gateway/db/PARTITION.db-tmp~ $gateway/db/PARTITION.db
mv -f $gateway/doc/release_notes.xml-tmp~ $gateway/doc/release_notes.xml
As already mentioned above, the publishing acron job runs every 10 minutes, but the actual check of whether the gateway state has changed is started only if there is a new LHCb run.
CAUTIONs:
- Use rsync with caution when copying files to the gateway: for large files it has occasionally failed silently, for unknown reasons (most probably AFS glitches), leaving the destination not identical to the source.
- Once the new files are ready to be released, try to transmit all of them to the gateway between two publishing cron sessions, so that when the cron job detects the changed gateway state it pushes all of the new files in one go. Otherwise it may happen that, e.g., the release notes file is published alone with the new tag entries while the SQLite file does not yet contain those tags.
Creating flow control files to manually switch the SQLite CondDB publishing/updating process on and off
The CondDB acron pilots regularly check the CondDB SQLite gateway directory (currently $LHCBHOME/software/SQLiteMaster/SQLDDDB[_Upgrade]) for modified files and publish them to the CondDB SQLite distribution server. The following controls over the CondDB acron pilots are available (see the examples after this list):
- To switch off the publishing process, create a flow control file named ".stopPublishing" in the gateway directory, using the touch command.
- Likewise, to allow any changes under the gateway path to be published, first remove the stop control file ".stopPublishing" if it exists, then create another flow control file named ".startPublishing" (NB: the file ".stopPublishing" overrides the file ".startPublishing"). Once all updates are successfully published, the ".startPublishing" control file is deleted automatically.
- To disable the updating process for the SQLite snapshots, a flow control file named ".stopUpdatingSnapshots" needs to be placed in the gateway directory.
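As a concrete illustration of the above (using the non-Upgrade gateway path):
cd $LHCBHOME/software/SQLiteMaster/SQLDDDB
touch .stopPublishing          # disable publishing
rm -f .stopPublishing          # re-enable publishing ...
touch .startPublishing         # ... and trigger publication of pending changes
touch .stopUpdatingSnapshots   # disable snapshot updates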
Releasing Oracle CondDB: synchronizing Oracle CondDB from SQLite CondDB
From lxplus5 (still to be validated on lxplus6), prepare the environment with
SetupProject LHCb
Change to a directory where the authentication.xml and dblookup.xml files are present (the ones needed to connect to the CERN Oracle CondDB with read/write privileges). Then make sure you are going to use the correct SQLite source (echo $SQLITEDBPATH) and call the coolReplicateDB tool on those partitions (DDDB, LHCBCOND or SIMCOND) which have been modified:
coolReplicateDB sqlite_file:$SQLITEDBPATH/DDDB.db/DDDB "CondDB(owner)/DDDB"
coolReplicateDB sqlite_file:$SQLITEDBPATH/LHCBCOND.db/LHCBCOND "CondDB(owner)/LHCBCOND"
coolReplicateDB sqlite_file:$SQLITEDBPATH/SIMCOND.db/SIMCOND "CondDB(owner)/SIMCOND"
If there are new files (nodes) in the replicated changes, then also access permissions have to be updated:
coolPrivileges "CondDB(owner)/DDDB" GRANT READER lhcb_conddb_reader
coolPrivileges "CondDB(owner)/LHCBCOND" GRANT READER lhcb_conddb_reader
coolPrivileges "CondDB(owner)/SIMCOND" GRANT READER lhcb_conddb_reader
NO-GOs:
- Never, ever replicate the same SQLite database to Oracle more than once: this may happen by mistake if, e.g., $SQLITEDBPATH points to previously replicated SQLite files, and it will result in a corrupted Oracle state.
- Never, ever modify or re-apply tags (either local or global) that have already been replicated to Oracle. Apart from the fact that this is never allowed from the point of view of LHCb Productions, such an attempt will result in a failed replication and a corrupted Oracle state.
Adding new CondDB tags to ITS
New tags have to be passed to the integrity tracking system in order to let it track their deployment and integrity status.
Activating new global tags for the LHCb HLT
This has to be done only for those CondDB global tags which introduce changes important for the HLT; that knowledge is typically provided by the patch requester. In any case, this action has to be discussed with the LHCb Computing project leader and the LHCb HLT coordinator. It may happen that, even though a new tag has changes relevant for the HLT, putting it into action for the HLT is postponed for whatever reason.
Adding new CondDB tags to the BookKeeping database