Calibration Requirements using Reconstructed Data

This page summarizes the various requirements for calibration use cases needing reconstructed data and the possible implementations.

Reminder of the processing flow

  1. The event filter farm (EFF) writes out events at 2 kHz.
  2. All events go through the monitoring farm (MF).
    1. Some (~50 Hz) are reconstructed in the MF for monitoring purposes.
  3. The raw data is copied to castor and distributed to Tier1s.
  4. The raw data is reconstructed at Tier1s after the green light has been given for reconstruction. This can be a few days after data taking.
  5. The reconstructed data is stripped once enough reconstructed data is available. This can be several days (weeks?) after data taking.
  6. Steps 4-5 are repeated if needed.
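
As a rough cross-check of the relative rates in this chain, here is a minimal back-of-envelope sketch (plain Python; the 2 kHz EFF output and ~50 Hz monitoring reconstruction are the numbers from the list above):

    # Back-of-envelope rates for the processing chain above.
    EFF_RATE_HZ = 2000.0     # event filter farm output (step 1)
    MF_RECO_RATE_HZ = 50.0   # reconstructed in the monitoring farm (step 2.1)

    monitoring_fraction = MF_RECO_RATE_HZ / EFF_RATE_HZ
    events_per_day = EFF_RATE_HZ * 86400  # assuming 100% efficient running

    print(f"MF reconstructs {monitoring_fraction:.1%} of all events")  # 2.5%
    print(f"raw events per day: {events_per_day:.2e}")                 # ~1.7e8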

The question discussed on this page is where and how we perform the calibrations that require reconstructed data.

Sources of reconstructed data

Last update: Jul 15 2008.

Monitoring Farm

In the monitoring farm we will reconstruct of the order of 50 Hz of events using Brunel. Which events are to be reconstructed is defined by the routing bits (see the sketch after this list). This data is used to produce histograms that will be analysed in real time and stored.
  • All monitoring done at this level is in real time.
  • It is not foreseen to save this data, but it could be done.
    • The data is not in ROOT format, so the easiest would be to save it as an MSF file. Such a file could then only be read back with the same version of the event model.
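
For illustration, here is a minimal sketch of such a routing-bit decision, assuming the bits are exposed as a plain integer mask; the bit number used here is purely hypothetical:

    # Hypothetical routing-bit check deciding what the MF reconstructs.
    MONITORING_BIT = 46  # illustrative bit number, not the real assignment

    def wants_mf_reconstruction(routing_bits: int) -> bool:
        """True if the monitoring bit is set in the event's routing mask."""
        return bool(routing_bits & (1 << MONITORING_BIT))

    assert wants_mf_reconstruction((1 << 0) | (1 << MONITORING_BIT))
    assert not wants_mf_reconstruction(1 << 0)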

Hot stream

A special calibration stream, already mentioned by the Streaming Task Force, was advocated. We would like a low rate of "hot" events suitable for calibration purposes, such as alignment or PID, to be forked off the standard data flow, reconstructed and made available to experts for analysis (a toy sketch of such a fork follows the bullet below).
  • This data would have to be reconstructed, probably at the pit (PLUS farm).
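
A toy sketch of what such a fork could look like (the event dictionaries, the is_hot selection and the two sinks are hypothetical stand-ins, not real online software):

    # Toy fork of a "hot" calibration stream off the standard data flow.
    from typing import Callable, Iterable

    def fork_hot_stream(events: Iterable[dict],
                        is_hot: Callable[[dict], bool],
                        main_sink: list, hot_sink: list) -> None:
        """Pass every event to the main stream; copy 'hot' ones aside."""
        for event in events:
            main_sink.append(event)      # standard flow is untouched
            if is_hot(event):
                hot_sink.append(event)   # forked copy for calibration experts

    main, hot = [], []
    fork_hot_stream([{"id": 1, "hot": True}, {"id": 2, "hot": False}],
                    lambda e: e["hot"], main, hot)
    assert len(main) == 2 and len(hot) == 1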

Distribution

From this point on, the data from the MF and hot streams are reconstructed and equivalent.
  • The PLUS farm could be used to analyse them. There is a buffer of 30 TB at the pit (i.e. 10^9 events, or about 6 days of 100% efficient running at 2 kHz; these numbers are cross-checked below), of which some could be used for this data. In principle data is deleted after some time, but one could pin some data for later use. This data should be used quickly and not kept for a long time anyway. It is still possible to copy some of the data to scratch space or a laptop if needed.
  • One could migrate the data to castor.
    • Even distribute to Tier1s?
It all depends on the timescale during which we need this data. In 2008 we are likely to need all the data all the time. But what about 2009? Will we ever look at this data once the processing has been done?
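
The buffer numbers quoted above can be cross-checked with a few lines of arithmetic (a sketch; the ~30 kB event size is simply what the 30 TB / 10^9 figures imply):

    # Cross-check of the pit buffer numbers quoted above.
    BUFFER_TB = 30.0
    BUFFER_EVENTS = 1e9
    RATE_HZ = 2000.0

    event_size_kb = BUFFER_TB * 1e9 / BUFFER_EVENTS  # 1 TB = 1e9 kB
    buffer_days = BUFFER_EVENTS / RATE_HZ / 86400

    print(f"implied event size: {event_size_kb:.0f} kB")     # ~30 kB
    print(f"buffer depth at 2 kHz: {buffer_days:.1f} days")  # ~5.8, i.e. the ~6 quoted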

Online calibration

Some quantities will be monitored and calibrated using the monitoring farm. The result will be put in the conditions database for use in the processing. See above.
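
A minimal sketch of the kind of record such a calibration would write, assuming a condition is essentially a value plus an interval of validity (this shows the idea only, not the actual conditions-database API; the path name is invented):

    # Hypothetical shape of a condition produced by online calibration.
    from dataclasses import dataclass

    @dataclass
    class Condition:
        path: str         # e.g. "Rich1/RefractiveIndexScale" (invented name)
        value: float      # the calibrated quantity
        first_run: int    # start of the interval of validity
        last_run: int     # end of the interval of validity (inclusive)

    cond = Condition("Rich1/RefractiveIndexScale", 1.0003, 1000, 1200)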

Calibration farm

This is a special farm (so far consisting of one node) that has access to special calibration events which are not saved. The calorimeter is the only user so far.

Alignment

Last update: Jul 8 2008.

Contact person: Wouter Hulsbergen

The alignment of the tracking stations can be monitored online, but if a problem is found one cannot re-run the alignment in the monitoring farm. One needs fully reconstructed events. The hot stream would be ideal.

  • Where would it run?
    • plus farm?
      • Could the monitoring farm save more information (residuals...), allowing the alignment not to have to redo all the track fits?
    • lxbatch?
    • Grid?

The alignment group needs, for instance, to run several times over the same reconstructed events, as sketched below.
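
As a minimal illustration of why several passes are needed, here is a toy one-parameter alignment loop (pure Python, invented numbers; the real alignment fits many degrees of freedom and re-runs the track fit at every iteration, which is why fully reconstructed events are required):

    # Toy iterative alignment: one detector plane with an unknown shift.
    import random

    TRUE_SHIFT = 0.3  # unknown misalignment, arbitrary units
    hits = [random.gauss(TRUE_SHIFT, 0.1) for _ in range(10000)]

    alignment = 0.0
    for iteration in range(5):  # several passes over the same events
        residuals = [h - alignment for h in hits]  # stands in for a track refit
        alignment += sum(residuals) / len(residuals)
        print(f"iteration {iteration}: alignment = {alignment:.4f}")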

RICH refractive index calibration

Last update: Jun 20 2008.

It needs tracks. Is this similar to the alignment, or can it be done in the Brunel jobs running in the monitoring farm?
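
The underlying relation is cos(theta_c) = 1/(n*beta), so with tracks of known momentum (beta ~ 1) the mean measured Cherenkov angle fixes n. A toy sketch with invented numbers:

    # Toy refractive-index calibration from Cherenkov angles of tracks.
    import math, random

    BETA = 1.0        # ultra-relativistic tracks
    N_TRUE = 1.00052  # "true" index, illustrative gas-radiator value
    theta_true = math.acos(1.0 / (N_TRUE * BETA))
    angles = [random.gauss(theta_true, 1e-4) for _ in range(5000)]

    mean_theta = sum(angles) / len(angles)
    n_calibrated = 1.0 / (BETA * math.cos(mean_theta))
    print(f"calibrated n = {n_calibrated:.6f}")  # recovers ~1.00052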

Offline processing

Last update: Jun 20 2008.

Typically the mass scale, i.e. the magnetic field calibration, will be determined to full precision only at the stripping level.
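
Schematically (all numbers below are invented): one compares the fitted mass of a known resonance in stripped data with its reference value; to first order the invariant mass scales linearly with the momentum scale, which in turn reflects the magnetic field:

    # Toy mass-scale determination from a known resonance.
    M_REFERENCE = 3096.9  # J/psi mass in MeV (PDG)
    m_fitted = 3094.1     # hypothetical fitted mass from stripped data, MeV

    alpha = M_REFERENCE / m_fitted  # first-order momentum scale correction
    print(f"momentum scale correction: {alpha:.5f}")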

Calibration user jobs

Last update: Jun 20 2008.

Some calibrations will need detailed user analyses. Typical examples are the D* and Lambda PID calibrations; a toy example is sketched after the list below.

  • In principle these calibrations determine high level conditions, like the mis-ID rate.
  • In general the data quality flag in the bookkeeping should not depend on these jobs.
  • One must ensure that all data samples are surveyed by the appropriate jobs.
  • If possible these calibrations should be done automatically in the processing step.
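
A toy version of such a tag-and-probe counting (the D* -> D0(K pi) pi decay tags the kaon kinematically, so the PID response can be counted on an unbiased probe; all numbers below are invented):

    # Toy PID calibration in the D* tag-and-probe spirit.
    probes = [  # (true species from the D* tag, passed kaon PID?)
        ("K", True), ("K", True), ("K", False), ("K", True),
        ("pi", False), ("pi", True), ("pi", False), ("pi", False),
    ]

    kaons = [ok for s, ok in probes if s == "K"]
    pions = [ok for s, ok in probes if s == "pi"]
    print(f"kaon ID efficiency = {sum(kaons)/len(kaons):.2f}")   # 0.75
    print(f"pi -> K mis-ID rate = {sum(pions)/len(pions):.2f}")  # 0.25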

-- PatrickKoppenburg - 15 Jul 2008
