Index

Introduction

Database Structure

The database will contain:

  • conditions
  • detector description
The "conditions" part will contain whatever is in the package Det/XmlConditions, while the "detector description" will contain Det/XmlDDDB.

The database structure will be modelled on the filesystem structure of XmlDDDB (v30r9) and XmlConditions (v2r0).

New conditions will come in two flavors: on-line and off-line. The on-line conditions will be written to the Oracle server in the PIT, while the off-line ones will be added to the Oracle server at IT. The changes occurring on one server need to be replicated to the other (from PIT to IT and vice versa); since we use Oracle Streams, this means the two flavors have to live in different Oracle schemas (handled through functionality added to Det/DetCond in LHCb v21r3).
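Schematically, the flavor-to-schema split could be sketched as follows. This is a minimal illustration only: the schema names match the accounts listed later on this page, but the mapping function itself is hypothetical, not the actual Det/DetCond code.

```python
# Hypothetical sketch of the condition-flavor -> Oracle-schema mapping.
# Schema names come from the account list on this page; the function is
# an illustration, not the real Det/DetCond implementation.

FLAVOR_TO_SCHEMA = {
    "online": "LHCB_ONLINE_CONDDB",   # conditions written in the PIT
    "offline": "LHCB_CONDDB",         # conditions written at IT (CERN)
}

def schema_for(flavor: str) -> str:
    """Return the Oracle schema owning conditions of the given flavor."""
    try:
        return FLAVOR_TO_SCHEMA[flavor]
    except KeyError:
        raise ValueError(f"unknown condition flavor: {flavor!r}")
```

Keeping the two flavors in separate schemas is what allows Oracle Streams to replicate them independently in the two directions.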

Before the introduction of the support for multiple database schemas, it was assumed that the detector description part of the database would go into the same database schema as the off-line conditions. Now it looks more sensible to put it in a different schema, to keep the separation we have in the XML files. We can also extract from the current XmlDDDB the parts used only for simulation or visualization and put them in their own schemas.

Plan

I presented our plan at the 3D Meeting on the 22nd of June (see the slides).

The main points are:

  • set up streaming from CERN to Tier-1s
  • test replication CERN -> Tier-1
  • set up streaming with an LHCb managed RAC (PIT)
  • test replication
    • CERN -> PIT
    • PIT -> CERN -> Tier-1

Schedule

The schedule we aim to follow is:
  • 31 Jul 2006 - 04 Aug 2006 -
    • Set up streaming to one Tier-1 (RAL?) (two schemas to be replicated).
    • Data integrity tests.
    • Test privileges transmission through streams.
  • 07 Aug 2006 - 11 Aug 2006 -
    • Repeat the tests of the previous week with 2 Tier-1s (RAL and CNAF?)
  • 14 Aug 2006 - 18 Aug 2006 -
    • Pause
  • 21 Aug 2006 - 25 Aug 2006 -
    • Add another Tier-1.
    • Recovery Tests.
  • 28 Aug 2006 - 01 Sep 2006 -
    • Test access to the DBs from the GRID (hopefully with the CORAL GRID-enabled functionalities).
  • 04 Sep 2006 - 08 Sep 2006 -
    • Set up an LHCb-managed RAC to simulate the PIT one.
    • Test cross replication PIT <-> CERN
    • Test the two step replication PIT -> CERN -> Tier-1.
  • 11 Sep 2006 - 15 Sep 2006 -
    • Pause (LHCb week in Heidelberg)
  • 18 Sep 2006 - 22 Sep 2006 -
    • Include the 3 missing Tier-1s and test from the GRID (with the CORAL GRID enabled functionalities).
  • Oct 2006 -
    • Conditions Database used by production jobs.

Actual Achievements

  • 07 Aug 2006
    • Test of the software on the Grid: Nicolas submitted a few jobs to the grid to verify the installation of the libraries needed for the CondDB (COOL, Oracle, etc.). The result is that the libraries we need are correctly installed.
  • 18 Aug 2006
    • Replication activated from LHCb integration RAC to RAL and GridKa: Eva prepared the configuration and tested it.
  • 22 Aug 2006
    • Creation of databases LHCBCOND and DDDB: started at 14:00 and ended at ~18:30. LHCBCOND was replicated quickly (tag v2r0 applied); DDDB replication at RAL ended at 23:50, while at GridKa it failed because of memory problems.
  • 24 Aug 2006
    • Testing on the GRID: Prepared a package to run DetDescChecks on the GRID (accessing the master copy) and submitted it (Ganga+DIRAC).
    • Replication to GridKa: The server has been recovered and the stream re-connected (Eva). The copy of DDDB is OK.
    • Replication to IN2P3: Eva configured the replication to IN2P3 and started to export the schema from the master to the slave (we agreed not to write to the master until the end of the export).
  • 25 Aug 2006
    • Test on the GRID: The test failed because of various problems: wrong version of Brunel used (v30r7 instead of v30r6), an unsupported use case, and Oracle not installed on the CEs. In the end Marco adopted the solution of a custom wrapper script (based on the Ganga one) plus a tarball with all the needed files, downloaded with gridFTP (successful).
    • Replication to IN2P3: IN2P3 copy ready and stream connected around 16:00 (Eva). The copies of both LHCBCOND and DDDB are OK.
    • Replication of OnLine database: lhcb_online_conddb accounts created on the 3 slaves and streams connected (Eva).
  • 28 Aug 2006
    • Replication to Tier-1s: Applied the tag v30r9 to DDDB. The replication was smooth. The problem is that tagging the whole DB is not atomic, and the "global" tag appears before the end of the entire operation (so we cannot use it to check whether the replication is complete).
      Modified LHCBCOND too (tag v2r0p1).
  • 30 Aug 2006
    • Privileges replication: Smooth, only a bit long (a few hours).
  • 05 Sep 2006
    • Fake PIT:
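Since the global tag can appear before the copy is actually complete (the 28 Aug observation above), any script waiting for a replica needs an explicit completion check. Below is a minimal polling sketch; the `is_complete` callable (e.g. a row-count comparison, or looking for a sentinel entry written on the master after the tag) is left to the caller and is purely an assumption here, not something provided by COOL.

```python
import time

def wait_for_replication(is_complete, timeout=4 * 3600, poll=60,
                         sleep=time.sleep, clock=time.monotonic):
    """Poll is_complete() until it returns True or timeout (seconds) expires.

    is_complete is a caller-supplied check (e.g. comparing row counts, or
    testing for a sentinel written after the tag on the master); relying on
    the appearance of the global tag alone is not safe, since it shows up
    before the whole copy is done.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if is_complete():
            return True       # replica confirmed complete
        sleep(poll)           # wait before the next check
    return False              # gave up after `timeout` seconds
```

The `sleep` and `clock` parameters are injectable only to make the helper testable; in normal use the defaults apply.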

Streaming from CERN to Tier-1

The first thing is the definition of the accounts on the master copy. Currently we have (on the integration RAC int4r_lb):
lhcb_conddb
schema owner for the off-line conditions (alignments, etc.), 10GB tablespace
lhcb_online_conddb
schema owner for on-line conditions (those written in the PIT), 10GB tablespace
lhcb_conddb_manager
account with read/write permission on the other schemas (it can store/tag data, but not add folders), no tablespace
lhcb_conddb_reader
account with read only permission on the main schemas, no tablespace
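For illustration, the account setup above could be expressed as DDL along these lines. This is a sketch only: the passwords, the tablespace name `lhcb_data`, and the exact grant lists are placeholders, not the real int4r_lb configuration.

```python
# Hypothetical DDL generator for the four accounts described above.
# Account names come from the text; passwords, the tablespace name and
# the per-table grants are placeholders, NOT the actual configuration.

OWNERS = ["lhcb_conddb", "lhcb_online_conddb"]   # 10 GB tablespace each
READ_WRITE = "lhcb_conddb_manager"               # store/tag, cannot add folders
READ_ONLY = "lhcb_conddb_reader"                 # read only, no tablespace

def account_ddl():
    """Return the list of (placeholder) SQL statements for the accounts."""
    stmts = []
    for owner in OWNERS:
        stmts.append(f"CREATE USER {owner} IDENTIFIED BY <password> "
                     f"QUOTA 10G ON lhcb_data")
    for acct in (READ_WRITE, READ_ONLY):
        stmts.append(f"CREATE USER {acct} IDENTIFIED BY <password> "
                     f"QUOTA 0 ON lhcb_data")
    # Object-level grants would then be issued per COOL table, e.g.:
    stmts.append(f"GRANT SELECT, INSERT, UPDATE ON lhcb_conddb.<table> "
                 f"TO {READ_WRITE}")
    stmts.append(f"GRANT SELECT ON lhcb_conddb.<table> TO {READ_ONLY}")
    return stmts
```

Note that these privileges must themselves be replicated through the streams, which is what the "privileges transmission" test in the schedule covers.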

Due to the absence of the relevant people on vacation, Eva could set up the streaming from the LHCb integration RAC to RAL and GridKa only on Friday 18/08/2006.

Links

Contacts

CERN
Miguel Angio (integration RAC)
Maria Girone (production RAC)
Dirk Duellmann (3D)
Eva Dafonte Perez (Oracle streaming)
CNAF
Angelo Carbone
GridKa
waiting for reply
IN2P3
David Bouvet
NIKHEF
waiting for reply
PIC
Adria Casajus
RAL
Raja Nandakumar

-- MarcoClemencic - 29 Aug 2006

Topic revision: r12 - 2006-09-28 - MarcoClemencic