Database Structure

The database will contain:

  • conditions
  • detector description
The "conditions" part will contain whatever is currently in the package Det/XmlConditions, while the "detector description" part will contain Det/XmlDDDB.

The database structure will be modelled on the filesystem structure of XmlDDDB and XmlConditions.

I am currently restructuring XmlConditions.

New conditions will come in two flavors: on-line and off-line. The on-line conditions will be written to the Oracle server at the PIT, while the off-line ones will be added to the Oracle server at CERN IT. Changes occurring on one server need to be replicated to the other (from PIT to IT and vice versa). Since we use Oracle streams, this means the two flavors have to live in different Oracle schemas (support for this was added to Det/DetCond in LHCb v21r3).

Before the introduction of support for multiple database schemas, it was assumed that the detector description part of the database would go into the same schema as the off-line conditions. It now looks more sensible to put it in a different schema, keeping the separation we have in the XML files. We can also extract from the current XmlDDDB the parts used only in simulation or visualization and put them in their own schema.
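The schema layout described above can be summarised as a small configuration table. A minimal sketch follows; the schema names are invented placeholders for illustration, not the actual Oracle account names:

```python
# Hypothetical sketch of the proposed schema separation.
# Schema names are illustrative placeholders, not the real Oracle accounts.
SCHEMAS = {
    "COND_ONLINE": {
        "content": "on-line conditions",
        "written_at": "PIT",
    },
    "COND_OFFLINE": {
        "content": "off-line conditions (alignments, etc.)",
        "written_at": "CERN IT",
    },
    "DDDB": {
        "content": "detector description (from Det/XmlDDDB)",
        "written_at": "CERN IT",
    },
    "SIM_DDDB": {
        "content": "simulation/visualization-only parts of XmlDDDB",
        "written_at": "CERN IT",
    },
}

def schemas_written_at(site):
    """Return the schemas whose master copy is updated at the given site."""
    return sorted(n for n, info in SCHEMAS.items()
                  if info["written_at"] == site)
```

Keeping the on-line and off-line data in separate schemas is what allows Oracle streams to replicate them in opposite directions (PIT to IT and vice versa) without conflicts.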


I presented our plan at the 3D Meeting of 22 June (see the slides).

The main points are:

  • set up streaming from CERN to Tier-1s
  • test replication CERN -> Tier-1
  • set up streaming with an LHCb managed RAC (PIT)
  • test replication
    • CERN -> PIT
    • PIT -> CERN -> Tier-1
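The replication paths listed above can be modelled as a small directed graph. The sketch below is only a model of the planned topology (site names only, no real Oracle streams configuration); it checks which sites a change made at one site eventually reaches:

```python
# Hypothetical model of the planned streams topology, not a real streams setup.
# Each key lists the destinations that site streams its changes to.
STREAMS = {
    "PIT": ["CERN"],             # on-line conditions written at the PIT
    "CERN": ["PIT", "Tier-1s"],  # CERN forwards in both directions
    "Tier-1s": [],               # read-only replicas, no outgoing streams
}

def reachable(start, edges=STREAMS):
    """Sites that a change made at `start` is eventually replicated to."""
    seen, todo = set(), [start]
    while todo:
        site = todo.pop()
        for dst in edges.get(site, []):
            if dst != start and dst not in seen:
                seen.add(dst)
                todo.append(dst)
    return seen
```

In this model a change made at the PIT reaches the Tier-1s only via the two-step path PIT -> CERN -> Tier-1, which is exactly the path to be tested.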


The schedule we aim to follow is:
  • 31 Jul 2006 - 04 Aug 2006 -
    • Setup streaming to one Tier-1 (RAL?) (two schemas to be replicated).
    • Data integrity tests.
    • Test privileges transmission through streams.
  • 07 Aug 2006 - 11 Aug 2006 -
    • Repeat the tests of the previous week with 2 Tier-1s (RAL and CNAF?)
  • 14 Aug 2006 - 18 Aug 2006 -
    • Pause
  • 21 Aug 2006 - 25 Aug 2006 -
    • Add another Tier-1.
    • Recovery Tests.
  • 28 Aug 2006 - 01 Sep 2006 -
    • Test access to the DBs from the GRID (hopefully with the CORAL GRID-enabled functionalities)
  • 04 Sep 2006 - 08 Sep 2006 -
    • Set up an LHCb-managed RAC to simulate the PIT one.
    • Test cross replication PIT <-> CERN
    • Test the two step replication PIT -> CERN -> Tier-1.
  • 11 Sep 2006 - 15 Sep 2006 -
    • Pause (LHCb week in Heidelberg)
  • 18 Sep 2006 - 22 Sep 2006 -
    • Include the three missing Tier-1s and test from the GRID (with the CORAL GRID-enabled functionalities).
  • Oct 2006 -
    • Conditions Database used by production jobs.

Streaming from CERN to Tier-1

The first step is the definition of the accounts on the master copy. Currently we have (on the integration RAC int4r_lb):

  • schema owner for the off-line conditions (alignments, etc.), 10 GB tablespace
  • schema owner for the on-line conditions (those written at the PIT), 10 GB tablespace
  • account with read/write permission on the other schemas (it can store/tag data, but not add folders), no tablespace
  • account with read-only permission on the main schemas, no tablespace
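The four accounts can be seen as a small privilege matrix. Here is a minimal sketch of that matrix; the account names are invented for illustration, and only the role structure follows the list above:

```python
# Hypothetical privilege matrix for the four accounts on the master copy.
# Account names are placeholders; only the structure mirrors the text above.
ACCOUNTS = {
    "COND_OFFLINE_OWNER": {"role": "owner",  "tablespace_gb": 10},
    "COND_ONLINE_OWNER":  {"role": "owner",  "tablespace_gb": 10},
    "COND_WRITER":        {"role": "writer", "tablespace_gb": 0},
    "COND_READER":        {"role": "reader", "tablespace_gb": 0},
}

# What each role may do in the main schemas.
PERMISSIONS = {
    "owner":  {"read", "write", "tag", "create_folder"},
    "writer": {"read", "write", "tag"},   # can store/tag data, not add folders
    "reader": {"read"},
}

def allowed(account, action):
    """Check whether an account may perform an action in the main schemas."""
    return action in PERMISSIONS[ACCOUNTS[account]["role"]]
```

One point to verify during the tests is precisely this matrix: whether such privileges survive the trip through the streams ("test privileges transmission through streams" in the schedule above).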



Contacts:

  • Miguel Angio (integration RAC)
  • Maria Girone (production RAC)
  • Dirk Duellmann (3D)
  • Eva Dafonte Perez (Oracle streaming)
  • Angelo Carbone (waiting for reply)
  • David Bouvet (waiting for reply)
  • Adria Casajus
  • Raja Nandakumar

-- MarcoClemencic - 25 Jul 2006
