-- VivekJain - 27 Sep 2006

User Issues relating to the RTT

Here is the current status of what I understand about the RTT, i.e., what packages are run, etc.

Files generated with release 12 geometries

Note that the following prefixes are used.
prefix DetDescrVersion
calib0 ATLAS-CSC-01-00-00
calibg ATLAS-CSC-01-01-00
calib1 ATLAS-CSC-01-02-00
One file is normally 50 events.

The list of files can be obtained using dq2_ls with some amount of wild-carding

  • NOTE: The new way is to put the file name and the DQ2 site where it can be found in your XML configuration file, and RTT will automatically pick it up from there.
  • Go here for more details

  • This method is now obsolete: do NOT put your requests here
    • David Rousseau's list is here
    • Stephane Willocq's list is here
    • Karim Bernardet's list is here
    • Dirk Zerwas's list is here - Please note that some files are common with David and Dirk.
      • Dirk is still waiting for some of his files to be available - Nov 13'06
    • See the thread here for requests
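For illustration, the newer XML-based method amounts to declaring the dataset and its DQ2 site directly in the package's RTT configuration file. The element names below are assumptions for illustration only, not the actual RTT schema; the dataset name is the top sample used elsewhere on this page.

```xml
<!-- Hypothetical sketch: element names are illustrative, not the real RTT schema -->
<jobList>
  <athena>
    <options>MyPackage/MyJobOptions.py</options>
    <dataset>mc11.004100.T1_McAtNLO_top.digit.RDO.v11000301</dataset>
    <!-- DQ2 site where the file can be found; RTT picks it up automatically -->
    <datasetSite>CERN</datasetSite>
  </athena>
</jobList>
```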

Description of tests

Software tests - basically testing the underlying software:

  • AthExHelloWorld: Alex Undrus

  • CBNT_AOD: Vassilios Vassilakopoulos
    • "...is for the validation of the AOD content and AOD reconstruction..." - What does this really mean?

  • GeneratorsRTT: George.Stavropoulos
    • 5 tests: meant to validate the Generators packages at run time.
    • Athena jobs are set up to test the main generators used for the production of MC events.
    • Each job writes out an ntuple file and, as a consistency check, performs a fit on the invariant mass of the Z->mumu events produced.
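The consistency check on the Z->mumu invariant mass boils down to recomputing the dimuon mass from the muon four-vectors and fitting its distribution. A minimal self-contained sketch of the mass computation (not the actual GeneratorsRTT code):

```python
import math

def inv_mass(p1, p2):
    """Invariant mass of a two-particle system from (E, px, py, pz) four-vectors (GeV)."""
    E  = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    # Guard against tiny negative values from floating-point rounding
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Two back-to-back 45.594 GeV muons (massless approximation) reconstruct to m_Z
mu_plus  = (45.594, 0.0, 0.0,  45.594)
mu_minus = (45.594, 0.0, 0.0, -45.594)
print(round(inv_mass(mu_plus, mu_minus), 2))  # → 91.19
```

In the real test the per-event masses fill an ntuple histogram, and the fitted peak position is compared against the Z mass.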

  • G4AtlasApps: Andrea di Simone
    • 6 jobs, two each for single muons, electrons and pions (pT = 5 GeV and 50 GeV)
    • Input is an EvGen file; the jobs are probably testing the simulation stage, looking at variables such as time, virtual memory, etc.
    • Muon description is MUONQ02, so they are probably using Rome geometry.

  • JiveXML: Nikos Konstantinidis
    • What we try to do in the RTT jobs of JiveXML is to have separate jobs per subsystem, so that we can localise problems more easily
    • So, if you look at JiveXML/share/JiveXML_jobOptions_Muons.py, you will see that we switch off the InDet and Calos by

Digitization tests:

  • CaloDigEx: Karim Bernardet
    • "...digitization, only one test (I run a python script to check the CPU time; I have to fix the python script): could be removed in fact"
    • Input file is "T1_McAtNLO_top.simul.Hits" events and detector description is ATLAS-DC3-02

  • InDetSimRTT: Seth Zenz
    • A much younger package, this is supposed to find obvious holes in the geometry. It runs 100 (CSC11 EVGEN) events through InDet-only simulation, digitization, and reconstruction. In the last step it puts out an InDetRecStat ntuple and uses some of the hit plotting scripts from InDetRTT to make some pretty pictures. In addition to finding geometry holes, in principle it can also test whether the full chain runs properly for the Inner Detector, and whether POOL file input/output works at each step; these are issues that needed to be checked at points during the last release cycle.
    • Detector descriptions used are ATLAS-CSC-01-00-00, ATLAS-CSC-01-01-00

  • Muon Digi Example: Daniela Rebuzzi
    • Run geantinos through RPC/TGC/MDT digitizations - 2K events for each technology.
    • Detector description is "ATLAS-CSC-01-00-00"
    • Definition of success?

  • Digitization: Sven Vahsen
    • Digitization tests are duplicated because they were put in place before detector-specific tests. What I propose is to replace the detector tests with a full ATLAS digitization (e.g., an integration test) and possibly a test with pileup.
    • ID, CALO and MUON systems are tested separately (what exactly is being tested?) - Why is LVL1 being set on?
    • Input file is "simul.T1_McAtNLO_top" and det. description is "Rome-Initial"

Detector/Software test: Checking reconstruction software:

Overall tests:

  • Details for the following three tests are here
    • All tests use ATLAS-DC3-02 description
    • All but one test use a top file - "mc11.004100.T1_McAtNLO_top.digit.RDO.v11000301"
    • The one lone test uses "T1_McAtNLO_Jimmy_digit_RDO.v12000201"

  • RecExAnaTest : David Rousseau
    • RecExAnaTest tests in AtlasAnalysis have a very similar scope to RecExRecoTest. They are basic tests of the integration of reconstruction up to AOD and trigger. As AODs typically depend on all of reconstruction, no attempt is made to run only pieces of reconstruction, but some tests are done with and without trigger.


  • Trigger Release : Simon George
    • The trigger tests I set up in the RTT are meant to measure the rate of memory increase for some standard jobs
    • Documentation is here

  • TrigInDetValidation : Dmitry Emeliyanov
    • Runs LVL2 InDet algorithms (IdScan, SiTrack) on 25 GeV single electrons dataset and checks reconstruction efficiency vs. eta, phi of a MonteCarlo track and multiplicities of tracks reconstructed by the algorithms.
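The efficiency check amounts to matching reconstructed tracks to MonteCarlo tracks in (eta, phi). A minimal sketch of such matching, assuming a simple deltaR criterion; the actual algorithm and cut used in TrigInDetValidation may differ:

```python
import math

def delta_r(t1, t2):
    """Angular distance between two tracks given as (eta, phi) tuples."""
    deta = t1[0] - t2[0]
    dphi = abs(t1[1] - t2[1])
    if dphi > math.pi:           # wrap phi difference into [0, pi]
        dphi = 2 * math.pi - dphi
    return math.sqrt(deta * deta + dphi * dphi)

def efficiency(mc_tracks, reco_tracks, dr_max=0.05):
    """Fraction of MC tracks with at least one reco track within dr_max."""
    if not mc_tracks:
        return 0.0
    matched = sum(1 for mc in mc_tracks
                  if any(delta_r(mc, r) < dr_max for r in reco_tracks))
    return matched / len(mc_tracks)

mc   = [(0.5, 1.0), (-1.2, 2.5), (2.0, -0.3)]
reco = [(0.51, 1.01), (-1.19, 2.52)]
print(efficiency(mc, reco))  # 2 of the 3 MC tracks are matched
```

Binning the matched/total counts in eta or phi of the MC track gives the efficiency-vs-eta and efficiency-vs-phi plots mentioned above.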

  • TrigEgammaValidation : Iwona
    • This test aims to run the entire egamma trigger chain for ~1k events. It will produce a CBNTAA ntuple which will serve as a basis for control histograms for all steps in the egamma reconstruction.

Inner Detector:

  • InDetRTT: Seth Zenz
    • This does Inner Detector reconstruction, plotting properties of tracks and hits for several different physics samples (and, more and more, different geometries). The input is mostly 11.0.41 digits, with reconstruction outputting the InDetRecStatistics ntuple; the digits will soon be changed to mostly 12.0.2.
      • Single mu (pT=10, 100), J5, Single e (pT=25), Top (-T1-McAtNLO-), Jimmy.Zee, Jimmy.Zmumu use 11.0.41 digits and ATLAS-DC3-02
      • One Pythia.Zmumu job uses 12.0.2 digits and CSC-00-01-00
      • One minbias job uses 11.0.3? digits and another one uses csc11 (11.0.42) ATLAS-DC3-02
    • It has a system of scripts that plots track pulls/resolutions, hit locations/residuals, efficiencies, fakes, etc., and compares them with reference files I provide (currently through CVS, but this will change soon). It also does various comparisons between the three track authors (new/default, IPatRec, XKalman).
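The reference-file comparison comes down to a bin-by-bin test between a freshly produced histogram and the stored reference. A sketch, assuming a simple chi-square statistic and an illustrative threshold; this is not the actual InDetRTT script:

```python
def hist_chi2(hist, ref):
    """Bin-by-bin chi-square between a test histogram and a reference
    (assumes comparable statistics in both; empty bin pairs are skipped)."""
    chi2 = 0.0
    for h, r in zip(hist, ref):
        if h + r > 0:
            chi2 += (h - r) ** 2 / (h + r)
    return chi2

def compatible(hist, ref, threshold=20.0):
    # threshold is an illustrative cut, not the value used by InDetRTT
    return hist_chi2(hist, ref) < threshold

sample = [10, 52, 103, 49, 11]
ref    = [12, 50, 100, 50, 10]
print(compatible(sample, ref))  # → True
```

A Kolmogorov test would be an equally plausible choice of statistic; the point is only that each plot reduces to a pass/fail against its reference.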

  • InDetRecValidation: Steve Dallison - Software/Detector test
    • Looks at single muon tracks and makes plots of the perigee parameters
    • Testing the standard reconstruction in one set of jobs and the stand alone inner detector reconstruction in another set of jobs
    • ATLAS-DC3-02 jobs: Full reco. + ID only reco. v11.0.41(?) digits for single mu (pt=10,100,300) for IPAT, XKAL, NTRK
    • Rome-Initial jobs:
      • Full reco. for v10.50 (?) digits for single mu = (pt = +-5) for IPAT, XKAL, NTRK
      • InDet reco. only for v10.50 (?) digits for single mu = (pt = +-10, +-100, +-1000) for IPAT, XKAL, NTRK

  • It would seem that there is some level of duplication in the InDet jobs
    • Perhaps we should start a dialog between Steve, Seth/Sven, and Markus about this.


  • CaloAnaEx: Karim Bernardet
    • Reco with production of ESD+AOD (2 tests)
    • Reco with production of ESD (and an ntuple). Then read the ESD to produce an AOD. With this AOD produce an ntuple, which is compared to the first ntuple to make sure that I find the same things
      • I run a ROOT macro (checkPOOL.C) on the ESD and AOD files to check their content. - What does this really mean?
      • I run a python script to check the CPU time against a ref file (for the ESD+AOD tests)
    • The previous three jobs are two H-2e-2mu jobs and a top job: Rome-Initial. Muons use Q.02
    • Reco with testbeam: I run a python script to check the CPU time against a ref file
      => 4 tests in this package
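The CPU-time check against a reference file, mentioned for several of these Calo packages, can be sketched as follows. The log-line format and the tolerance here are assumptions for illustration, not the actual script:

```python
import re

TOLERANCE = 0.15  # assumed 15% slack; the real script's threshold is not documented here

def extract_cpu_time(log_text):
    """Pull a 'CPU time: <seconds>' figure out of a job log (hypothetical format)."""
    m = re.search(r"CPU time:\s*([0-9.]+)", log_text)
    return float(m.group(1)) if m else None

def cpu_time_ok(current, reference, tolerance=TOLERANCE):
    """Pass if the measured CPU time is within the tolerance of the reference value."""
    return current <= reference * (1.0 + tolerance)

measured = extract_cpu_time("... event loop done. CPU time: 105.0 s")
print(cpu_time_ok(measured, 100.0))  # → True (105 s vs 100 s reference, within 15%)
```

The reference value itself would be read from the ref file shipped with the test.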

  • CaloRecEx: Karim Bernardet
    • There are 4 tests. Each test produces a ntuple.
    • 2 ROOT macros are run on this ntuple:
      • One does histogram comparisons and the other truth plots.
      • Then a python script (didAnyTestFail) is run to check the results for the comparison and the truth (if one of the tests fails, the RTT test is marked as failed). The ROOT macros use thresholds which are stored in files in my web area (easier to update them)
    • Run a python script to check the CPU time with a ref file
    • Single photon (pT=100 GeV) and H-2e-2mu use Rome-Initial
    • Top job (-T1-McAtNLO-top) uses digits with v11.0.31(?) and the same redone with 11.5.0. ATLAS-DC3-02
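The didAnyTestFail logic amounts to aggregating the per-check results against their thresholds and failing the whole RTT test if any single check fails. A minimal sketch; the check names and threshold values are illustrative, not the ones stored in the web-area files:

```python
def did_any_test_fail(results, thresholds):
    """Each entry of results is a test statistic; a check fails if it exceeds
    its threshold. Checks with no threshold configured never fail."""
    return any(results[name] > thresholds.get(name, float("inf"))
               for name in results)

thresholds = {"histogram_comparison": 20.0, "truth_plots": 15.0}  # illustrative values
results    = {"histogram_comparison": 3.2, "truth_plots": 18.7}
print(did_any_test_fail(results, thresholds))  # → True: truth_plots exceeds its threshold
```

Keeping the thresholds in external files, as described above, means they can be tuned without re-tagging the package.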

  • CaloSimEx: Karim Bernardet
    • Simulation, only one test (I run a python script to check the CPU time; I have to fix the python script): could be removed in fact
    • Uses ATLAS-DC3-05 layout

  • CaloTests: Karim Bernardet
    • 5 tests: full chain tests (simulation, digitization and reconstruction) with single particles
    • I use them to test the latest tags of the geometry. ROOT macros are run to plot some histograms and truth plots
    • Single electron (pT=5,50 GeV) use ATLAS-DC3-07
    • Single photon (pT=50 GeV) use CSC-00-00-00
    • Single electron and photon (pT=50 GeV) use CSC-00-01-00

  • LArMonTools:Tayfun Ince
    • Tested on commissioning data in bytestream format. Output is a ROOT file with plenty of monitoring histograms, which are simply dumped into a ps file with a macro. Just to double-check that the updates to the monitoring tools run with the latest athena version

Muon Spectrometer:

  • MboyPerformance: Samira Hassani
    • Check performance of MuonBoy, Staco and MuTag reconstruction code
    • Single muon (pT = +- 100 GeV), 12.0.3 digits, CSC-01-00-00

  • MooPerformance: Stephane Willocq
    • Check performance of MOORE reconstruction code
    • Single muon (pt=10,100,300), Jimmy.Zmumu use 11.0.41(?) digits and ATLAS-DC3-02

  • MuonEvtValidator : Daniela Rebuzzi and Nectarios Benekos
    • Validation of simulation and digitization, across different Athena releases and/or Muon Spectrometer geometries.
    • Also provides important check of plots at lower levels (chamber or even tube level)
      • The packages MuonHitTest and MuonDigitTest can be interpreted as an interface to the MuonEvtValidator package, which compares their output. These two packages provide a common format to describe the hit and digit collections. The main advantage of this interface structure is the flexibility of the MuonEvtValidator package, which is now independent of the original format of the input information
      • The hit and digit information is represented on an event-by-event basis in the MuonHitTest and MuonDigitTest packages. The chosen validation variables have a direct impact on how the information is represented inside the MuonEvtValidator package, and therefore on the structure of the whole package.

Checking Physics quantities:

  • Analysis Examples: Laurent Vacavant
    • "...deals with b-tagging validation. The job always reads the same AOD file, re-runs the b-tagging on it and compares the resulting histograms with some reference histograms..."
    • ATLAS-DC3-02

  • BPhysValidation: Steve Dallison
    • Looks at Bs -> J/psi Phi events. v11.0.41(?) digits. ATLAS-DC3-02

  • JetRec: Rolf Seuster
    • Tested on CSC data. This monitors how the jet reconstruction works: various jet reconstruction algorithms like Kt and Cone, clustering effects (from topoclusters), etc.
    • Both J5 and single Pi jobs use 12.0.1(?) digits. ATLAS-DC3-06

  • Missing ET : Silvia Resconi
    • Z -> tau/tau events. Rome-Initial

  • egammaRec : Dirk Zerwas
    • Single electron (pt=100), v11.0.31 digits and photon (pT=60), v11.0.41 digits, use ATLAS-DC3
    • Single electron (pt=100), g4dig(?) uses ATLAS-DC2

  • tauRec: Michael Heldmann
    • Z -> tau/tau events. Rome-Initial

Outstanding Problems

Census of problems with 12.0.4 (NEW - Dec 19, 2006)

I went through a recent 12.0.4 run at Lancaster and looked at those packages that failed to run. I summarize the problems here. Some of these problems are old ones and others may be new - Dec. 19, 2006

General Issues:

Ability of packages to use files from other packages

  • This will be very useful in streamlining tests. The idea is that, say, a simulation package runs first and produces an output file. A digitization package then starts and uses this file as an input and produces an output. A reconstruction then starts and picks up the latter output file and so on
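The proposed chaining can be sketched as a simple file handoff between stages. This is an illustration of the idea only, not RTT code; the stage names follow the description above:

```python
import os
import tempfile

def run_stage(name, input_path, workdir):
    """Illustrative stage: read the predecessor's output file (or start from
    generator-level input) and write this stage's output file."""
    data = open(input_path).read() if input_path else "EVGEN"
    out = os.path.join(workdir, name + ".out")
    with open(out, "w") as f:
        f.write(data + " -> " + name)
    return out

with tempfile.TemporaryDirectory() as wd:
    hits = run_stage("simulation", None, wd)       # simulation package runs first
    rdo  = run_stage("digitization", hits, wd)     # digitization picks up its output
    esd  = run_stage("reconstruction", rdo, wd)    # reconstruction picks up the RDOs
    print(open(esd).read())  # → EVGEN -> simulation -> digitization -> reconstruction
```

In the RTT itself the handoff would be between separate packages rather than function calls, but the dependency structure is the same.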

Streamlining RTT tests

  • Are there obsolete tests or tests whose results are not being looked at?
    • Talk to the package owners to see which tests are obsolete; either remove them or upgrade them, e.g., use newer geometry (next point).

  • Can we combine tests?
    • For instance, the digitization package can run tests that will satisfy all the detector groups, so that they don't have to run their own. This is in discussion with Sven, Karim, Daniela and Seth.
      • From Karim "I would like to keep CaloDigEx (only one test). It is used to check the cpu time against reference"
    • The previous topic, i.e., the ability of packages to use files from other packages, is also relevant to this issue

When to run on 12.X.0 nightlies

  • This is under discussion with Fred. Perhaps it can be run at Lancaster (see running on the grid)

What geometry to use in samples

  • As of Nov. 30, 2006, Moore, RecExCommon, some Calo tests and egammaRec have started to use Release 12 geometries. What about other users?

  • Many jobs use Rome geometry in their tests. Perhaps they should use newer geometry versions. I believe there are four new versions for production:
    • ATLAS-CSC-00-00-00, ATLAS-CSC-01-00-00, ATLAS-CSC-01-01-00, ATLAS-CSC-01-02-00
    • What about keeping ATLAS-DC3-02 as a reference?
    • Should we really drop Rome-Initial? A lot of tests have been done with it.
    • Details of these tags are here
    • See relevant message on RTT - HN here

Duplication of tests

  • There is probably some level of duplication. What is the best way to reduce the redundancy? Still under discussion

Moving jobs from RTT to KV

  • Peter points out that it would be very useful to tag some jobs currently running in RTT as KV, so that they can be run on the short queue and provide faster feedback on the Kit. Of course, the results will need to be checked ASAP. To avoid confusion, we should call this rttKitValidation; this distinguishes it from Alessandro's KV suite.
  • What jobs are suitable for running as KV? Still under discussion

Error reporting

  • Improve error reporting
    • failureReport.html could include all messages reported with ERROR and FATAL tags - not feasible
    • "Not feasible" means that there have been too many ERROR messages for this to be very useful - Why so many ERROR messages?
  • Currently, if you chain athena jobs (e.g. in CaloTests) and any one of them works, the web page will flag success (as they all write to the same log file). This needs to be improved.
    • Fixed as of Nov. 10'06
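Collecting ERROR and FATAL messages from a job log is straightforward in principle; a minimal sketch of what failureReport.html could aggregate (the log lines here are made up for illustration):

```python
def collect_errors(log_lines, tags=("ERROR", "FATAL")):
    """Return every log line carrying one of the given severity tags."""
    return [line for line in log_lines if any(tag in line for tag in tags)]

log = [
    "INFO  starting event loop",
    "ERROR missing calib constants",
    "FATAL segmentation fault in reco",
    "INFO  done",
]
print(collect_errors(log))  # → ['ERROR missing calib constants', 'FATAL segmentation fault in reco']
```

The sheer volume of ERROR lines noted above is exactly why a raw dump like this was judged not feasible; some deduplication or per-algorithm grouping would be needed first.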

Interactive RTT

  • Get interactive RTT up and running. This will provide an easy way for users to test their scripts before a full RTT run
    • Steve Dallison and Seth Zenz are testing it.
    • See Steve's latest report here
    • A non-RTT expert needs to test this. Is there someone doing this? (Nov 10'06)
      • Nov 30'06: Markus Bischofberger is exercising this system. See details here


  • Keep user scripts on the RTT webpage
    • From Peter, "No. User scripts are code. Need versioning. Need to be in CVS." - Nov 10'06

Running on the Grid

  • Can RTT run on the Grid?
    • Nov 30'06: From Eric: Alessandro has done some work on the installation procedure and scripts for nightly kits on the grid. Peter Love has provided space on the Lancaster CE where the kits have been installed. The RTT team will continue testing this system.

Specific issues:


  • Database errors in some jobs
  • The muon job fails, probably because the ID was switched off. Need to check whether setMUID = false fixes this problem
    • Appropriate tag has been submitted for 12.X.0
    • From Peter, "I think the tag was screwed up. Needs to be checked." - Nov 10'06

Calo test jobs

  • Of late, Karim's ROOT macros seem to have problems - it seems they don't find the logfile anymore
    • It was fine with rel_1, for example, and it fails for rel_5. In both cases the tag for CaloRecEx is the same (message on Oct 9'06)
    • Dec. 1'06: From Karim "The problem with my ROOT macros was fixed (the most important)".
    • Dec. 1'06: However, some other issues have cropped up:
      • I still have to modify my python scripts because of the new RTT version. Something to do with FileGrepper not being valid anymore.
      • Also, is the "listarg" tag still valid?

InDetRTT tests

  • Seth has been having trouble with transfers of files requested by InDetRTT timing out
    • Dec 1,'06 - Seth and Eric have been exchanging e-mails regarding this issue

Trigger Release job

RecEx tests

  • David wants extra features, e.g., ability to use jobO from other packages, etc.
    • Brinick said that some features are being tested, others will come in future releases
    • Feature is now available in RTT tag 00-01-53. Details here

Jet Rec

  • Rolf says, "I prepared a reference root file to which I want to compare the results from the RTT. As this file is rather big, I don't want to store it in CVS."
    • Nov 13'06 - It is working. From Rolf, "...In the long term, I'd prefer another solution, as now I have to store O(10MB) in my limited scratch0 directory..."

GeneratorsRTT and Missing ET

  • Jobs are OK but tests fail, probably because users have old-style tags in their XML files
    • Brinick's response: "However....there is not currently an equivalent in <test></test> land, so you should not bug the developer about this. It is us who need to update"
Topic revision: r37 - 2007-06-25 - LashkarKashif