Minutes of the Data Quality Meeting of Monday 7 July 2008

Offline Monitoring

Muon PID

Discussion about the best place to run the high-level muon monitoring using partially reconstructed J/psi->mumu (only one muon identified as such). See Gaia's slides from the software week. She expects 1 Hz after the final selection and would like to monitor changes of the muID efficiency with time. A granularity of about 6 hours (roughly 20k signal events) would be ideal.
  • It cannot be run in the monitoring farm, as these events are selected by the single muon stream, whose rate is too high.
  • One could run this in the Brunel reconstruction. It's not forbidden to make particles in Brunel (a bit of trivial re-packaging would be needed). Brunel knows about time, and one can then merge histograms from several runs centrally.
  • The stripping is not suitable, as there can be a significant delay between the Brunel and DaVinci steps and there is no guarantee that the files are processed in time-ordered batches. At least it is not foreseen, and the production team would rather not do it unless absolutely necessary. The stripping will run on many rDST files as provided by a database, in no particular order.
  • It was also suggested to add a dedicated Hlt2 selection. Hlt2 can do whatever Brunel can do, so if the event can be selected in Brunel, it can be selected in the HLT as well.
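Wherever the selection ends up running, the monitoring step itself is simple: bin the selected candidates into 6-hour time slices and compute the muID efficiency per slice. A minimal sketch of that bookkeeping, assuming hypothetical tag-and-probe event records of the form (timestamp, probe-passed-muID):

```python
from collections import defaultdict

SLICE_SECONDS = 6 * 3600  # the 6-hour granularity discussed above

def muid_efficiency_by_slice(events):
    """Tag-and-probe style efficiency vs time.

    `events` is an iterable of (timestamp, probe_passed_muid) pairs from
    partially reconstructed J/psi->mumu candidates (one identified muon
    as tag, the other track as probe). Returns {slice_index: efficiency}.
    """
    passed = defaultdict(int)
    total = defaultdict(int)
    for timestamp, probe_passed in events:
        s = int(timestamp) // SLICE_SECONDS
        total[s] += 1
        if probe_passed:
            passed[s] += 1
    return {s: passed[s] / total[s] for s in total}

# At 1 Hz after the final selection, one 6-hour slice holds about
# 6 * 3600 = 21600 ~ 20k signal events, consistent with the numbers
# quoted in the discussion.
```

The event-record format and function name are made up for illustration; the real monitoring would fill histograms rather than Python dictionaries.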

Offline histogram analyser

Following the last meeting, Patrick wrote a trivial Python script that retrieves Brunel histograms over HTTP. Philippe suggested doing the histogram merging directly using the ROOT http protocol. This requires Dirac not to zip the histogram files as it presently does.
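The retrieve-and-merge step could look roughly like the sketch below. The URL and the list-of-bin-contents histogram model are placeholders; in practice the merging would be done with ROOT (e.g. TH1::Add or hadd), possibly reading the files directly over ROOT's http protocol as Philippe suggested:

```python
import urllib.request

def fetch_histogram_file(url, dest):
    """Retrieve one Brunel histogram file over plain HTTP
    (hypothetical URL; requires the files not to be zipped)."""
    urllib.request.urlretrieve(url, dest)

def merge_histograms(histos):
    """Merge histograms with identical binning by summing bin contents.

    Each histogram is modelled here as a plain list of bin contents;
    a real implementation would operate on ROOT TH1 objects instead.
    """
    merged = [0.0] * len(histos[0])
    for h in histos:
        if len(h) != len(merged):
            raise ValueError("inconsistent binning")
        for i, content in enumerate(h):
            merged[i] += content
    return merged
```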

We also had a look at OMAlib, the online histogram analyser package. There's not much in it yet, and it is very much tied to the online world. It would be nice to extract all the online-independent code and put it in a package shared by the online and offline monitoring tasks, which will be very much alike. This will be followed up when Giacomo is back from holiday. The alignment group is interested in testing it.

Wouter also suggested that, rather than saving time-dependent profiles in Brunel (as is done in the HLT), one could produce the profiles in the merging job: Brunel would fill 1D histograms, and when merging one would create profile histograms tracking the mean, RMS, etc. with time.
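Wouter's scheme can be sketched as follows: each time slice contributes one 1D histogram, and the merging job reduces each of them to a (mean, RMS) profile point. The histogram representation and function names below are illustrative only:

```python
import math

def profile_point(bin_centers, bin_contents):
    """Reduce one 1D histogram (one time slice) to a profile point:
    the (mean, rms) of the histogrammed quantity, weighted by the
    bin contents. Returns None for an empty histogram."""
    n = sum(bin_contents)
    if n == 0:
        return None
    mean = sum(c * x for x, c in zip(bin_centers, bin_contents)) / n
    var = sum(c * (x - mean) ** 2 for x, c in zip(bin_centers, bin_contents)) / n
    return mean, math.sqrt(var)

def build_profile(slices, bin_centers):
    """`slices` maps a time label to that slice's bin contents.
    Builds the time-ordered profile [(time, mean, rms), ...] at merging
    time, instead of filling profile histograms in Brunel itself."""
    points = []
    for t in sorted(slices):
        p = profile_point(bin_centers, slices[t])
        if p is not None:
            points.append((t, p[0], p[1]))
    return points
```

The advantage of this approach is that Brunel never needs to know the final time binning; any granularity can be chosen later from the same 1D histograms.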

Routing bits

Gerhard explained what he expects from the clients of the routing bits. We have 3 x 32 bits available for routing events to different tasks in the monitoring farm. They are filled by the trigger and stored in the data. For each bit one needs to provide a logical combination of HLT selections. If the appropriate selections do not exist, the clients have to provide them.
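As a purely illustrative sketch of what a client has to supply: each of the 3 x 32 bits is defined by a logical combination (here a predicate) over the set of fired HLT selections, and the trigger packs the decisions into three 32-bit words. All selection names below are invented for the example:

```python
# Hypothetical bit definitions: one predicate per routing bit, combining
# HLT selection decisions. The selection names are made up.
ROUTING_BIT_DEFINITIONS = {
    0: lambda fired: "Hlt2SingleMuon" in fired,
    1: lambda fired: "Hlt2JPsiMuMu" in fired or "Hlt2BiasedJPsi" in fired,
    33: lambda fired: "Hlt2AlignmentCalib" in fired,
}

def fill_routing_bits(fired_selections):
    """Pack the per-bit decisions into three 32-bit words, mirroring
    how the trigger stores the routing bits in the data."""
    words = [0, 0, 0]
    for bit, predicate in ROUTING_BIT_DEFINITIONS.items():
        if predicate(fired_selections):
            words[bit // 32] |= 1 << (bit % 32)
    return words
```

Tasks in the monitoring farm would then subscribe to events by testing the bit(s) they care about, without re-running any selection.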

Special calibration stream

A special calibration stream, already mentioned by the Streaming Task Force, was advocated. We would like a low rate of "hot" events suitable for calibration purposes, such as alignment or PID, to be forked off the standard data flow, reconstructed, and made available to experts for analysis at CERN rather than distributed to the Tier1s. The alignment group, for instance, needs to run several times on reconstructed events. Two models were discussed:
  1. Fork off a low rate (about 10 Hz) hot stream at DAQ level
    • How? The routing bits seem to be the way.
    • Where would this be reconstructed? PLUS farm? High priority batch queues?
    • The data would stay on castor. This is a negligible amount of data compared to the 2kHz.
  2. Save events reconstructed in the Monitoring Farm in DST format.
    • These events are already reconstructed, but not in the appropriate format for saving to disk. Considerable core-software work would be needed.
    • Gerhard pointed out that this was a lossy stream, as events get registered for reconstruction but can be discarded if the CPU is not available at the monitoring farm.

Option 1 was preferred, as it needs less work. But no one from the online group was present to comment.

Action: Patrick to investigate with Online group.


Alignment

We had a short discussion about the procedure to change alignment constants online. In particular the VELO positioning was mentioned by Sheldon. Clearly, offline we want the best possible alignment. But the present workflow implies that a change of alignment is not trivial, so a new alignment has to be proven to be better for physics before being used. BaBar had a running alignment, but only 6 parameters were actually changed. The procedure for getting the best VELO alignment after each repositioning is still to be defined (that's an OPG issue). In 2008 we'll be very careful. Experience will then show what's the best scenario for 2009.

Problems database

As time was running out, only a snapshot of the Savannah portal was shown. You need to be registered with Savannah to use it.
  • It is run and centrally managed at CERN. They seem to be happy to get more users.
  • It is very easy to configure: defining the fields, the entry forms and the queries.
  • A few limitations were hit. The main one is that one cannot do binary operations on numbers (they are text fields), although this is possible for dates. Patrick is following that up with the support people.

New actions

  • Patrick to investigate hot stream with Online group.
  • See if OMAlib can be split. Test it.


I had to run, but I got the shuttle.

-- PatrickKoppenburg - 08 Jul 2008

Topic revision: r3 - 2018-09-23 - MarcoCattaneo