Difference: RttIssues (1 vs. 37)

Revision 37 (2007-06-25) - LashkarKashif

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 77 to 77
 
    • Detector descriptions used are ATLAS-CSC-01-00-00, ATLAS-CSC-01-01-00

  • Muon Digi Example : Daniela Rebuzzi
Changed:
<
<
    • Run geantinos through RPC/TGC/MDT/CSC digitizations - 60K events for each sub-det.
    • Detector description is "Q.02", i.e., Rome-Initial-02
    • What is definition of success?
>
>
    • Run geantinos through RPC/TGC/MDT digitizations - 2K events for each technology.
    • Detector description is "ATLAS-CSC-01-00-00"
    • Definition of success?
 
  • Digitization: Sven Vahsen
    • Digitization tests are duplicated because they were put in place before detector specific tests. What I propose is to replace the detector tests with a full ATLAS digitization (eg an integration test) and possibly a test with pileup.

Revision 36 (2007-04-23) - SamiraHassani

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 175 to 175
 

Muon Spectrometer:

Changed:
<
<
  • MboyPerformance: Eric Lancon
    • Check performance of Muon Boy reconstruction code
    • Single muon (pT = +- 100 GeV), 10.01 digits, Rome-Initial
>
>
  • MboyPerformance: Samira Hassani
    • Check performance of MuonBoy, Staco and MuTag reconstruction code
    • Single muon (pT = +- 100 GeV), 12.0.3 digits, CSC-01-00-00
 
  • MooPerformance: Stephane Willocq
    • Check performance of MOORE reconstruction code

Revision 34 (2007-02-02) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 6 to 6
  Here is a current status of what I understand about the RTT, i.e., what packages are run, etc.
Added:
>
>
<!--STARTPAWORKBOOK-->
 

Files generated with release 12 geometries


Note that the following prefixes are used.
Line: 15 to 17
 
calib1 ATLAS-CSC-01-02-00
One file is normally 50 events.
Added:
>
>
<!--STOPPAWORKBOOK-->
 The list of files can be obtained using dq2_ls with some amount of wild-carding
Line: 33 to 37
 

Description of tests


Changed:
<
<
>
>
<!--STARTPAWORKBOOK-->
 

Software tests - basically testing the underlying software:

  • AthExHelloWorld: Alex Undrus
Line: 59 to 63
 
Added:
>
>
<!--STOPPAWORKBOOK-->
 

Digitization tests:

  • CaloDigEx: Karim Bernardet
Line: 80 to 85
 
    • Input file is "simul.T1_McAtNLO_top" and det. description is "Rome-Initial"

Detector/Software test: Checking reconstruction software:

Changed:
<
<
>
>
<!--STARTPAWORKBOOK-->
 

Overall tests:

  • Details for the following three tests are here
Line: 97 to 102
 
  • RecExAnaTest : David Rousseau
    • RecExAnaTest tests in AtlasAnalysis have a very similar scope to RecExRecoTest and RecExTrigTest. They are basic tests of the integration of reconstruction up to AOD and trigger. As AOD typically depends on all of reconstruction, no attempt is made to run only pieces of reconstruction, but some tests are done with or without the trigger

Changed:
<
<
>
>
<!--STOPPAWORKBOOK-->
 

Trigger:

  • Trigger Release : Simon George

Revision 33 (2006-12-19) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 208 to 208
 

Outstanding Problems


Added:
>
>

Census of problems with 12.0.4 (NEW - Dec 19, 2006)

I went through a recent 12.0.4 run at Lancaster and looked at those packages that failed to run. I summarize the problems here. Some of these problems are old ones and others may be new - Dec. 19, 2006

 

General Issues:

Revision 32 (2006-12-01) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 105 to 105
 
    • Documentation is here

Added:
>
>
    • Runs LVL2 InDet algorithms (IdScan, SiTrack) on a 25 GeV single-electron dataset and checks the reconstruction efficiency vs. eta and phi of the Monte Carlo track, as well as the multiplicities of tracks reconstructed by the algorithms.
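As a rough illustration of the efficiency check described above (histogram names, binning and the matching step are assumptions, not the package's actual code), the efficiency vs. eta is just the ratio of matched to generated tracks per bin:

import ROOT

# denominator: Monte Carlo tracks; numerator: MC tracks matched to a LVL2 track
h_truth = ROOT.TH1F("h_truth", "MC tracks;eta", 25, -2.5, 2.5)
h_match = ROOT.TH1F("h_match", "matched MC tracks;eta", 25, -2.5, 2.5)

# ... both histograms would be filled from the validation ntuple here ...

h_eff = h_truth.Clone("h_eff")
h_eff.Reset()
h_eff.Divide(h_match, h_truth, 1.0, 1.0, "B")  # option "B": binomial errors, appropriate for an efficiency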
 

Revision 31 (2006-12-01) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 51 to 51
 
    • Input is EvGen file and they are probably testing the simulation stage and looking at variables such as time, Virtual Memory, etc.
    • Muon description is MUONQ02, so they are probably using Rome geometry.
Added:
>
>
 
  • JiveXML: Nikos Konstantinidis
    • What we try to do in the RTT jobs of JiveXML is to have separate jobs per subsystem, so that we can localise problems more easily
    • So, if you look at JiveXML/share/JiveXML_jobOptions_Muons.py, you will see that we switch off the InDet and Calos by
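The diff cuts the jobOptions fragment off at this point. As a hedged sketch only - this shows the usual Athena DetFlags mechanism for switching sub-detectors off, not necessarily the actual content of JiveXML_jobOptions_Muons.py:

from AthenaCommon.DetFlags import DetFlags

# hypothetical fragment: keep only the muon spectrometer active
DetFlags.ID_setOff()    # switch off the Inner Detector
DetFlags.Calo_setOff()  # switch off the LAr and Tile calorimeters
DetFlags.Muon_setOn()   # keep the Muon system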
Line: 102 to 104
 
    • The trigger tests I set up in the RTT are meant to measure the rate of memory increase for some standard jobs
    • Documentation is here
Added:
>
>

  • TrigEgammaValidation : Iwona
    • This test aims to run the entire egamma trigger chain for ~1k events. It will produce a CBNTAA ntuple which will serve as a base for control histograms for all steps of the egamma reconstruction
 

Inner Detector:

  • InDetRTT: Seth Zenz
Line: 166 to 175
 
    • Check performance of MOORE reconstruction code
    • Single muon (pt=10,100,300), Jimmy.Zmumu use 11.0.41(?) digits and ATLAS-DC3-02
Added:
>
>
  • MuonEvtValidator : Daniela Rebuzzi and Nectarios Benekos
    • Validation of simulation and digitization, different Athena releases or/and Muon Spectrometer geometries.
    • Also provides important check of plots at lower levels (chamber or even tube level)
      • The packages MuonHitTest and MuonDigitTest can be interpreted as an interface to the MuonEvtValidator package, which compares their output. These two packages provide a common format to describe the hit and digit collections. The main advantage of this interface structure is the flexibility of the MuonEvtValidator package, which is now independent of the original format of the input information
      • Hit and digit information is represented on an event-by-event basis in the MuonHitTest and MuonDigitTest packages. The chosen validation variables have a direct impact on how the information is represented inside the MuonEvtValidator package and therefore shape the structure of the whole package.
 

Checking Physics quantities:

  • Analysis Examples: Laurent Vacavant
Line: 194 to 210
 

General Issues:

Changed:
<
<

Obsolete tests

>
>

Ability of packages to use files from other packages

  • This will be very useful in streamlining tests. The idea is that, say, a simulation package runs first and produces an output file. A digitization package then starts and uses this file as an input and produces an output. A reconstruction then starts and picks up the latter output file and so on
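A minimal sketch of the chaining idea, assuming a shared results area and illustrative file names (this is not the actual RTT mechanism): the downstream job locates the upstream job's output and feeds it to its own jobOptions.

import glob
import os

# look for the simulation job's output in an assumed shared work area
upstream = sorted(glob.glob(os.path.join("..", "SimJob", "*.HITS.pool.root")))
if not upstream:
    raise RuntimeError("no simulation output found - cannot start the digitization step")

# in the Athena jobOptions this list would then be handed to the event selector, e.g.
#   ServiceMgr.EventSelector.InputCollections = upstream
print("digitization input: %s" % upstream)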

Streamlining RTT tests

 
  • Are there obsolete tests or tests whose results are not being looked at?
Deleted:
<
<
    • What is the best way to get an idea of the latter? Have a tool that checks to see which URL's have not been accessed?
 
    • Talk to the package owners to see which tests are obsolete; either remove them or upgrade them, e.g., use newer geometry (next point).
Added:
>
>
  • Can we combine tests?
    • For instance, the digitization package can run tests that will satisfy all the detector groups, so that they don't have to run their own. In conversation with Sven, Karim, Daniela and Seth.
      • From Karim "I would like to keep CaloDigEx (only one test). It is used to check the cpu time against reference"
    • The previous topic, i.e the Ability of packages to use files from other packages, is also relevant to this issue
 

When to run on 12.X.0 nightlies

  • This is under discussion with Fred. Perhaps it can be run at Lancaster (see running on the grid)
Line: 262 to 286
 

Calo test jobs

Deleted:
<
<
  • I don't understand why the CaloRecEx jobs in RTT finish successfully, but fail the tests
    • Karim's response - "I understand why they fail, it is normal. CheckForFail looks at the results produced by ROOT macros if one of them failed then the test is marked as failed.
 
  • Of late, Karim's ROOT macros seem to have problems - it seems they don't find the logfile anymore
    • It was fine with rel_1, for example, but fails for rel_5. In both cases the tag for CaloRecEx is the same (message on Oct 9/06)
Changed:
<
<
    • Being investigated (Oct 16-2006) - From Peter, "...Solution well advanced" - Nov 10'06
>
>
    • Dec. 1'06: From Karim "The problem with my ROOT macros was fixed (the most important)".
    • Dec. 1'06: However, some other issues have cropped up:
      • I still have to modify my python scripts because of the new RTT version. Something to do with FileGrepper not being valid anymore.
      • Also, is the "listarg" tag still valid?

InDetRTT tests

  • Seth has been having trouble: transfers of files requested by InDetRTT are timing out
    • Dec 1,'06 - Seth and Eric have been exchanging e-mails regarding this issue
 

Trigger Release job

Deleted:
<
<
    • Need to investigate (Oct 16-2006)
 
    • Simon/Brinick exchanged e-mails on Nov 10'06...

RecEx tests

Line: 283 to 312
 

Jet Rec

  • Rolf says, "I prepared a reference root file to which I want to compare the results from the RTT. As this file is rather big, I don't want to store it in CVS."
Changed:
<
<
    • Being investigated (Oct 16-2006) - It is working - Nov 13'06
    • From Rolf, "...In the long term, I'd prefer another solution, as now, I have to store O(10MB) in my limited scratch0 directory..."
>
>
    • Nov 13'06 - It is working. From Rolf, "...In the long term, I'd prefer another solution, as now, I have to store O(10MB) in my limited scratch0 directory..."
 

GeneratorsRTT and Missing ET

Revision 30 (2006-12-01) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 20 to 20
 
  • NOTE: The new way is to put the file name and the DQ2 site where it can be found in your XML configuration file, and RTT will automatically pick it up from there.
Added:
>
>
  • Go here for more details
 
  • This method is obsolete now : Do NOT put your requests here
    • David Rousseau's list is here
Line: 201 to 202
 

When to run on 12.X.0 nightlies

Changed:
<
<
  • This is under discussion with Fred. Perhaps it can be run at Lancaster (see running on the grid)
>
>
  • This is under discussion with Fred. Perhaps it can be run at Lancaster (see running on the grid)
 

What geometry to use in samples

Line: 245 to 246
 
    • From Peter, "No. User scripts are code. Need versioning. Need to be in CVS." - Nov 10'06

Running on the Grid

Added:
>
>
 
  • Can RTT run on the Grid?
    • Nov 30'06: From Eric: Alessandro has done some work on the installation procedure and scripts for nightly kits on the grid. Peter Love has provided space on the Lancaster CE where the kits have been installed. The RTT will continue working on testing this system.

Revision 29 (2006-11-30) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 6 to 6
  Here is a current status of what I understand about the RTT, i.e., what packages are run, etc.
Changed:
<
<

List of files needed for 12.0.3 samples

>
>

Files generated with release 12 geometries

 
Note that the following prefixes are used.
prefix DetDescrVersion
Line: 18 to 18
 The list of files can be obtained using dq2_ls with some amount of wild-carding
Changed:
<
<
  • Please put your requests here:
  • David Rousseau's list
>
>
  • NOTE: The new way is to put the file name and the DQ2 site where it can be found in your XML configuration file, and RTT will automatically pick it up from there.
 
Changed:
<
<
data set N files availability purpose
calib0_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 2 files yes RecExXYZTest integration test
calibg_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 2 files no RecExXYZTest integration test
calib1_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 2 files yes RecExXYZTest integration test

  • Stephane Willocq's list

data set N files availability purpose
calib0_mc11.007211.singlepart_mu10.digit.RDO.v12000301 30 files yes Muon reco test
calib0_csc11.007234.singlepart_mu200.digit.RDO.v12000301 30 files yes Muon reco test
calib0_csc11.005145.PythiaZmumu.digit.RDO.v12000301 40 files yes Muon reco test
calib1_mc11.007211.singlepart_mu10.digit.RDO.v12000301 30 files yes Muon reco test
calib1_csc11.007234.singlepart_mu200.digit.RDO.v12000301 30 files yes Muon reco test
calib1_csc11.005145.PythiaZmumu.digit.RDO.v12000301 40 files yes Muon reco test

  • Karim Bernardet's list

data set N files availability purpose
calib0_csc11.007085.singlepart_gamma_E500.digit.RDO.v12000301 10 files yes Calo reco test
calib0.007063.singlepart_gamma_E100.digit.RDO.v12003101 10 files Calo reco test
calibg_csc11.007063.singlepart_gamma_E100.digit.RDO.v12000301 10 files yes Calo reco test
calib1_csc11.007063.singlepart_gamma_E100.digit.RDO.v12000301 10 files yes Calo reco test
calib0_csc11.007080.singlepart_gamma_E5.digit.RDO.v12000301 10 files yes Calo reco test
calib0_csc11.007075.singlepart_e_E500.digit.RDO.v12000301 10 files yes Calo reco test
calib0.007061.singlepart_e_E100.digit.RDO.v12003101 10 files no Calo reco test
calibg_csc11.007061.singlepart_e_E100.digit.RDO.v12000301 10 files yes Calo reco test
calib1_csc11.007061.singlepart_e_E100.digit.RDO.v12000301 10 files yes Calo reco test
calib0_csc11.007070.singlepart_e_E5.digit.RDO.v12000301 10 files yes Calo reco test
calib0.005144.PythiaZee.digit.RDO.v12003101 10 files no Calo reco test
calib1_csc11.005144.PythiaZee.digit.RDO.v12000301 10 files yes Calo reco test
calib0_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 10 files yes Calo reco test
calibg_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 10 files no Calo reco test
calib1_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 10 files yes Calo reco test

Please note that some files are common with David and Dirk.

  • Dirk Zerwas's list
>
>
  • This method is obsolete now : Do NOT put your requests here
    • David Rousseau's list is here
    • Stephane Willocq's list is here
    • Karim Bernardet's list is here
    • Dirk Zerwas's list is here - Please note that some files are common with David and Dirk.
 
    • Dirk is still waiting for some of his files to be available - Nov 13'06
Changed:
<
<
dataset events purpose
calib0_csc11.007061.singlepart_e_E100.digit.RDO.v12000301 2000evts egamma reco test
calib0_csc11.007063.singlepart_gamma_E100.digit.RDO.v12000301 2000evts egamma reco test
calibg_csc11.007061.singlepart_e_E100.digit.RDO.v12000301 2000evts egamma reco test
calib1_csc11.007061.singlepart_e_E100.digit.RDO.v12000301 2000evts egamma reco test
calibg_csc11.007063.singlepart_gamma_E100.digit.RDO.v12000301 2000evts egamma reco test
calib1_csc11.007063.singlepart_gamma_E100.digit.RDO.v12000301 2000evts egamma reco test

  • See the thread here for requests
>
>
    • See the thread here for requests
 

Description of tests

Line: 241 to 199
 
    • What is the best way to get an idea of the latter? Have a tool that checks to see which URL's have not been accessed?
    • Talk to the package owners to see which tests are obsolete; either remove them or upgrade them, e.g., use newer geometry (next point).
Added:
>
>

When to run on 12.X.0 nightlies

  • This is under discussion with Fred. Perhaps it can be run at Lancaster (see running on the grid)
 

What geometry to use in samples

Added:
>
>
  • As of Nov. 30, 2006, Moore, RecExCommon, some Calo tests and egammaRec have started to use Release 12 geometries. What about other users?
 
  • Many jobs use Rome geometry in their tests. Perhaps they should use newer geometry versions. I believe that there are four new versions for production:
    • ATLAS-CSC-00-00-00 ATLAS-CSC 01-00-00 ATLAS-CSC 01-01-00 ATLAS-CSC 01-02-00
    • What about keeping ATLAS-DC3-02 as a reference?
Line: 256 to 220
 

Moving jobs from RTT to KV

Changed:
<
<
  • Peter points out that it would be very useful to tag some jobs currently running in RTT as KV, so that they can be run on the short queue and provide faster feedback on the Kit. Of course, the results will need to be checked ASAP. To avoid confusion, we should call this rttKitValidation; this distinguishes it from Alessandro's KV suite.
>
>
  • Peter points out that it would be very useful to tag some jobs currently running in RTT as KV, so that they can be run on the short queue and provide faster feedback on the Kit. Of course, the results will need to be checked ASAP. To avoid confusion, we should call this rttKitValidation; this distinguishes it from Alessandro's KV suite.
 
  • What jobs are suitable for running as KV? Still under discussion

Error reporting

  • Improve error reporting
    • failureReport.html could include all messages reported with ERROR and FATAL tags - not feasible
Added:
>
>
    • "Not feasible" means that there have been too many ERROR messages for this to be very useful - why so many ERROR messages?
 
  • Currently, if you chain athena jobs (e.g. in CaloTests) and any one of them works, the web page will flag success (as they all write to the same log file). We need to improve things then.
    • Fixed as of Nov. 10'06
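As a rough illustration of the kind of ERROR/FATAL collection discussed above (a generic sketch, not the RTT's FileGrepper or failureReport machinery):

import re
import sys

def scan_log(path):
    """Count ERROR and FATAL messages in an Athena log file."""
    counts = {"ERROR": 0, "FATAL": 0}
    with open(path) as log:
        for line in log:
            for tag in counts:
                if re.search(r"\b%s\b" % tag, line):
                    counts[tag] += 1
    return counts

if __name__ == "__main__":
    counts = scan_log(sys.argv[1])
    print("ERROR: %(ERROR)d  FATAL: %(FATAL)d" % counts)
    # non-zero exit code so that one successful job in a chain cannot mask a failed one
    sys.exit(1 if (counts["ERROR"] or counts["FATAL"]) else 0)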
Line: 270 to 235
 
  • Get interactive RTT up and running. Will provide an easy way for users to test their scripts before a full RTT run
    • Steve Dallison and Seth Zenz are testing it.
Changed:
<
<
    • See Steve's latest report here
    • A non-RTT expert needs to test this. Is there someone doing this? (Nov 10'06)
>
>
    • See Steve's latest report here
    • A non-RTT expert needs to test this. Is there someone doing this? (Nov 10'06)
      • Nov 30'06: Markus Bischofberger is exercising this system. See details here
 

Miscellaneous

Line: 281 to 247
 

Running on the Grid

  • Can RTT run on the Grid?
Added:
>
>
    • Nov 30'06: From Eric: Alessandro has done some work on the installation procedure and scripts for nightly kits on the grid. Peter Love has provided space on the Lancaster CE where the kits have been installed. The RTT will continue working on testing this system.
 

Specific issues:

Revision 28 (2006-11-13) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 278 to 278
 
  • Keep user scripts on the RTT webpage
    • From Peter, "No. User scripts are code. Need versioning. Need to be in CVS." - Nov 10'06
Added:
>
>

Running on the Grid

  • Can RTT run on the Grid?
 

Specific issues:

JiveXML

Revision 27 (2006-11-13) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 59 to 59
 Please note that some files are common with David and Dirk.

  • Dirk Zerwas's list
Added:
>
>
    • Dirk is still waiting for some of his files to be available - Nov 13'06
 
dataset events purpose
calib0_csc11.007061.singlepart_e_E100.digit.RDO.v12000301 2000evts egamma reco test
calib0_csc11.007063.singlepart_gamma_E100.digit.RDO.v12000301 2000evts egamma reco test
Line: 307 to 310
 

Jet Rec

  • Rolf says, "I prepared a reference root file to which I want to compare the results from the RTT to. As this file is rather big, I don't want to store it in CVS.
Changed:
<
<
    • Being investigated (Oct 16-2006) - Waiting to hear from Rolf - Nov 10'06
>
>
    • Being investigated (Oct 16-2006) - It is working - Nov 13'06
    • From Rolf, "...In the long term, I'd prefer another solution, as now, I have to store O(10MB) in my limited scratch0 directory..."
 

GeneratorsRTT and Missing ET

Revision 26 (2006-11-10) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 253 to 253
 

Moving jobs from RTT to KV

Changed:
<
<
  • Peter points out that it would be very useful to tag some jobs currently running in RTT as KV, so that they can be run on the short queue and provide faster feedback on the Kit. Of course, the results will need to be checked ASAP.
>
>
  • Peter points out that it would be very useful to tag some jobs currently running in RTT as KV, so that they can be run on the short queue and provide faster feedback on the Kit. Of course, the results will need to be checked ASAP. To avoid confusion, we should call this rttKitValidation; this distinguishes it from Alessandro's KV suite.
 
  • What jobs are suitable for running as KV? Still under discussion

Error reporting

  • Improve error reporting
Changed:
<
<
    • failureReport.html could include all messages reported with ERROR and FATAL tags
>
>
    • failureReport.html could include all messages reported with ERROR and FATAL tags - not feasible
 
  • Currently, if you chain athena jobs (e.g. in CaloTests) and any one of them works, the web page will flag success (as they all write to the same log file). We need to improve things then.
Changed:
<
<
    • What's going on with this - Oct 16, 2006?
>
>
    • Fixed as of Nov. 10'06
 

Interactive RTT

  • Get interative RTT up and running. Will provide an easy way for users to test their scripts before a full RTT run
    • Steve Dallison and Seth Zenz are testing it.
    • See Steve's latest report here
Added:
>
>
    • A non-RTT expert needs to test this. Is there someone doing this? (Nov 10'06)
 

Miscellaneous

  • Keep user scripts on the RTT webpage
Added:
>
>
    • From Peter, "No. User scripts are code. Need versioning. Need to be in CVS." - Nov 10'06
 

Specific issues:

Line: 280 to 282
 
  • Database errors in some jobs
  • Muon job fails probably because ID was set off. Need to check if setMUID = false fixes this problem or not
    • Appropriate tag has been submitted for 12.X.0
Added:
>
>
    • From Peter, "I think the tag was screwed up. Needs to be checked." - Nov 10'06
 

Calo test jobs

Line: 287 to 290
 
    • Karim's response - "I understand why they fail, it is normal. CheckForFail looks at the results produced by ROOT macros if one of them failed then the test is marked as failed.
  • Of late, Karim's ROOT macros seem to have problems - it seems it doesnt find the logfile anymore
    • it was fine with rel_1 for example and it fails for rel_5. In both cases the tag for CaloRecEx is the same (message on Oct 9/06)
Changed:
<
<
    • Being investigated (Oct 16-2006)
>
>
    • Being investigated (Oct 16-2006) - From Peter, "...Solution well advanced" - Nov 10'06
 

Trigger Release job

Added:
>
>
    • Simon/Brinick exchanged e-mails on Nov 10'06...
 

RecEx tests

Line: 303 to 307
 

Jet Rec

  • Rolf says, "I prepared a reference root file to which I want to compare the results from the RTT to. As this file is rather big, I don't want to store it in CVS.
Changed:
<
<
    • Being investigated (Oct 16-2006)
>
>
    • Being investigated (Oct 16-2006) - Waiting to hear from Rolf - Nov 10'06
 

GeneratorsRTT and Missing ET

Revision 25 (2006-10-24) - StephaneWillocq

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 29 to 29
 
  • Stephane Willocq's list

data set N files availability purpose
Changed:
<
<
calib0_mc11.007211.singlepart_mu10.digit.v12000301 30 files yes Muon reco test
calib0_csc11.007234.singlepart_mu200.digit.v12000301 30 files yes Muon reco test
calib0_csc11.005145.PythiaZmumu.digit.v12000301 40 files yes Muon reco test
calib1_mc11.007211.singlepart_mu10.digit.v12000301 30 files yes Muon reco test
calib1_csc11.007234.singlepart_mu200.digit.v12000301 30 files yes Muon reco test
calib1_csc11.005145.PythiaZmumu.digit.v12000301 40 files yes Muon reco test
>
>
calib0_mc11.007211.singlepart_mu10.digit.RDO.v12000301 30 files yes Muon reco test
calib0_csc11.007234.singlepart_mu200.digit.RDO.v12000301 30 files yes Muon reco test
calib0_csc11.005145.PythiaZmumu.digit.RDO.v12000301 40 files yes Muon reco test
calib1_mc11.007211.singlepart_mu10.digit.RDO.v12000301 30 files yes Muon reco test
calib1_csc11.007234.singlepart_mu200.digit.RDO.v12000301 30 files yes Muon reco test
calib1_csc11.005145.PythiaZmumu.digit.RDO.v12000301 40 files yes Muon reco test
 

  • Karim Bernardet's list

Revision 24 (2006-10-18) - KarimBernardet

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 40 to 40
 
  • Karim Bernardet's list

data set N files availability purpose
Changed:
<
<
calib0_csc11.007085.singlepart_gamma_E500.digit.RDO.v12000301 yes 10 files Calo reco test
calib0.007063.singlepart_gamma_E100.digit.RDO.v12003101 no 10 files Calo reco test
calibg_csc11.007063.singlepart_gamma_E100.digit.RDO.v12000301 yes 10 files Calo reco test
calib1_csc11.007063.singlepart_gamma_E100.digit.RDO.v12000301 yes 10 files Calo reco test
calib0_csc11.007080.singlepart_gamma_E5.digit.RDO.v12000301 yes 10 files Calo reco test
calib0_csc11.007075.singlepart_e_E500.digit.RDO.v12000301 yes 10 files Calo reco test
calib0.007061.singlepart_e_E100.digit.RDO.v12003101 no 10 files Calo reco test
calibg_csc11.007061.singlepart_e_E100.digit.RDO.v12000301 yes 10 files Calo reco test
calib1_csc11.007061.singlepart_e_E100.digit.RDO.v12000301 yes 10 files Calo reco test
calib0_csc11.007070.singlepart_e_E5.digit.RDO.v12000301 yes 10 files Calo reco test
calib0.005144.PythiaZee.digit.RDO.v12003101 no 10 files Calo reco test
calib1_csc11.005144.PythiaZee.digit.RDO.v12000301 yes 10 files Calo reco test
calib0_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 yes 10 files Calo reco test
calibg_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 no 10 files Calo reco test
calib1_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 yes 10 files Calo reco test
>
>
calib0_csc11.007085.singlepart_gamma_E500.digit.RDO.v12000301 10 files yes Calo reco test
calib0.007063.singlepart_gamma_E100.digit.RDO.v12003101 10 files Calo reco test
calibg_csc11.007063.singlepart_gamma_E100.digit.RDO.v12000301 10 files yes Calo reco test
calib1_csc11.007063.singlepart_gamma_E100.digit.RDO.v12000301 10 files yes Calo reco test
calib0_csc11.007080.singlepart_gamma_E5.digit.RDO.v12000301 10 files yes Calo reco test
calib0_csc11.007075.singlepart_e_E500.digit.RDO.v12000301 10 files yes Calo reco test
calib0.007061.singlepart_e_E100.digit.RDO.v12003101 10 files no Calo reco test
calibg_csc11.007061.singlepart_e_E100.digit.RDO.v12000301 10 files yes Calo reco test
calib1_csc11.007061.singlepart_e_E100.digit.RDO.v12000301 10 files yes Calo reco test
calib0_csc11.007070.singlepart_e_E5.digit.RDO.v12000301 10 files yes Calo reco test
calib0.005144.PythiaZee.digit.RDO.v12003101 10 files no Calo reco test
calib1_csc11.005144.PythiaZee.digit.RDO.v12000301 10 files yes Calo reco test
calib0_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 10 files yes Calo reco test
calibg_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 10 files no Calo reco test
calib1_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 10 files yes Calo reco test

Please note that some files are common with David and Dirk.

 
  • Dirk Zerwas's list
dataset events purpose

Revision 23 (2006-10-18) - DirkZerwas

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 56 to 56
 
calibg_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 no 10 files Calo reco test
calib1_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 yes 10 files Calo reco test
Added:
>
>
  • Dirk Zerwas's list
dataset events purpose
calib0_csc11.007061.singlepart_e_E100.digit.RDO.v12000301 2000evts egamma reco test
calib0_csc11.007063.singlepart_gamma_E100.digit.RDO.v12000301 2000evts egamma reco test
calibg_csc11.007061.singlepart_e_E100.digit.RDO.v12000301 2000evts egamma reco test
calib1_csc11.007061.singlepart_e_E100.digit.RDO.v12000301 2000evts egamma reco test
calibg_csc11.007063.singlepart_gamma_E100.digit.RDO.v12000301 2000evts egamma reco test
calib1_csc11.007063.singlepart_gamma_E100.digit.RDO.v12000301 2000evts egamma reco test
 
  • See the thread here for requests

Description of tests

Revision 22 (2006-10-18) - KarimBernardet

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 40 to 40
 
  • Karim Bernardet's list

data set N files availability purpose
Changed:
<
<
>
>
calib0_csc11.007085.singlepart_gamma_E500.digit.RDO.v12000301 yes 10 files Calo reco test
calib0.007063.singlepart_gamma_E100.digit.RDO.v12003101 no 10 files Calo reco test
calibg_csc11.007063.singlepart_gamma_E100.digit.RDO.v12000301 yes 10 files Calo reco test
calib1_csc11.007063.singlepart_gamma_E100.digit.RDO.v12000301 yes 10 files Calo reco test
calib0_csc11.007080.singlepart_gamma_E5.digit.RDO.v12000301 yes 10 files Calo reco test
calib0_csc11.007075.singlepart_e_E500.digit.RDO.v12000301 yes 10 files Calo reco test
calib0.007061.singlepart_e_E100.digit.RDO.v12003101 no 10 files Calo reco test
calibg_csc11.007061.singlepart_e_E100.digit.RDO.v12000301 yes 10 files Calo reco test
calib1_csc11.007061.singlepart_e_E100.digit.RDO.v12000301 yes 10 files Calo reco test
calib0_csc11.007070.singlepart_e_E5.digit.RDO.v12000301 yes 10 files Calo reco test
calib0.005144.PythiaZee.digit.RDO.v12003101 no 10 files Calo reco test
calib1_csc11.005144.PythiaZee.digit.RDO.v12000301 yes 10 files Calo reco test
calib0_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 yes 10 files Calo reco test
calibg_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 no 10 files Calo reco test
calib1_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 yes 10 files Calo reco test
 
  • See the thread here for requests

Revision 21 (2006-10-18) - KarimBernardet

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 36 to 36
 
calib1_csc11.007234.singlepart_mu200.digit.v12000301 30 files yes Muon reco test
calib1_csc11.005145.PythiaZmumu.digit.v12000301 40 files yes Muon reco test
Added:
>
>
  • Karim Bernardet's list

data set N files availability purpose
 
  • See the thread here for requests

Description of tests

Revision 20 (2006-10-18) - StephaneWillocq

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 26 to 26
 
calibg_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 2 files no RecExXYZTest integration test
calib1_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 2 files yes RecExXYZTest integration test
Added:
>
>
  • Stephane Willocq's list

data set N files availability purpose
calib0_mc11.007211.singlepart_mu10.digit.v12000301 30 files yes Muon reco test
calib0_csc11.007234.singlepart_mu200.digit.v12000301 30 files yes Muon reco test
calib0_csc11.005145.PythiaZmumu.digit.v12000301 40 files yes Muon reco test
calib1_mc11.007211.singlepart_mu10.digit.v12000301 30 files yes Muon reco test
calib1_csc11.007234.singlepart_mu200.digit.v12000301 30 files yes Muon reco test
calib1_csc11.005145.PythiaZmumu.digit.v12000301 40 files yes Muon reco test
 
  • See the thread here for requests

Description of tests

Revision 19 (2006-10-17) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 19 to 19
 dq2_ls with some amount of wild-carding

  • Please put your requests here:
Added:
>
>
  • David Rousseau's list
 
data set N files availability purpose
calib0_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 2 files yes RecExXYZTest integration test

Revision 18 (2006-10-17) - DavidRousseau

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 8 to 8
 

List of files needed for 12.0.3 samples


Added:
>
>
Note that the following prefixes are used.
prefix DetDescrVersion
calib0 ATLAS-CSC-01-00-00
calibg ATLAS-CSC-01-01-00
calib1 ATLAS-CSC-01-02-00
One file is normally 50 events.

The list of files can be obtained using dq2_ls with some amount of wild-carding

 
  • Please put your requests here:
Added:
>
>
data set N files availability purpose
calib0_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 2 files yes RecExXYZTest integration test
calibg_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 2 files no RecExXYZTest integration test
calib1_csc11.005200.T1_McAtNlo_Jimmy.digit.RDO.v12000301 2 files yes RecExXYZTest integration test
 
  • See the thread here for requests

Description of tests

Revision 17 (2006-10-16) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 10 to 10
 

  • Please put your requests here:
Added:
>
>
  • See the thread here for requests
 

Description of tests


Revision 16 (2006-10-16) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 187 to 187
 
    • What about keeping ATLAS-DC3-02 as a reference?
    • Should we really drop Rome-Initial? A lot of tests have been done with it?
    • Details of these tags are here
Added:
>
>
    • See relevant message on RTT - HN here
 

Duplication of tests

Changed:
<
<
  • There is probably some level of duplication. What is the best way to reduce the redundancy?
>
>
  • There is probably some level of duplication. What is the best way to reduce the redundancy? Still under discussion
 

Moving jobs from RTT to KV

  • Peter points out that it would be very useful to tag some jobs currently running in RTT as KV, so that they can be run on the short queue and provide faster feedback on the Kit. Of course, the results will need to be checked ASAP.
Changed:
<
<
  • What jobs are suitable for running as KV?
>
>
  • What jobs are suitable for running as KV? Still under discussion
 

Error reporting

  • Improve error reporting
    • failureReport.html could include all messages reported with ERROR and FATAL tags
  • Currently, if you chain athena jobs (e.g. in CaloTests) and any one of them works, the web page will flag success (as they all write to the same log file). We need to improve things then.
Added:
>
>
    • What's going on with this - Oct 16, 2006?
 

Interactive RTT

  • Get interactive RTT up and running. Will provide an easy way for users to test their scripts before a full RTT run
    • Steve Dallison and Seth Zenz are testing it.
Added:
>
>
    • See Steve's latest report here
 

Miscellaneous

Line: 218 to 221
 
  • Database errors in some jobs
  • Muon job fails probably because ID was set off. Need to check if setMUID = false fixes this problem or not
Added:
>
>
    • Appropriate tag has been submitted for 12.X.0
 

Calo test jobs

Line: 225 to 229
 
    • Karim's response - "I understand why they fail, it is normal. CheckForFail looks at the results produced by ROOT macros if one of them failed then the test is marked as failed.
  • Of late, Karim's ROOT macros seem to have problems - it seems it doesnt find the logfile anymore
    • it was fine with rel_1 for example and it fails for rel_5. In both cases the tag for CaloRecEx is the same (message on Oct 9/06)
Added:
>
>
    • Being investigated (Oct 16-2006)
 

Trigger Release job

Added:
>
>
    • Need to investigate (Oct 16-2006)
 

RecEx tests

  • David wants extra features, e.g., ability to use jobO from other packages, etc.
    • Brinick said that some features are being tested, others will come in future releases
Added:
>
>
    • Feature is now available in RTT tag 00-01-53. Details here
 

Jet Rec

  • Rolf says, "I prepared a reference root file to which I want to compare the results from the RTT to. As this file is rather big, I don't want to store it in CVS.
Added:
>
>
    • Being investigated (Oct 16-2006)
 

GeneratorsRTT and Missing ET

Revision 15 (2006-10-13) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 6 to 6
  Here is a current status of what I understand about the RTT, i.e., what packages are run, etc.
Added:
>
>

List of files needed for 12.0.3 samples


  • Please put your requests here:
 

Description of tests


Revision 14 (2006-10-09) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 218 to 218
 
  • I don't understand why the CaloRecEx jobs in RTT finish successfully, but fail the tests
    • Karim's response - "I understand why they fail, it is normal. CheckForFail looks at the results produced by ROOT macros if one of them failed then the test is marked as failed.
Added:
>
>
  • Of late, Karim's ROOT macros seem to have problems - it seems they don't find the logfile anymore
    • it was fine with rel_1 for example and it fails for rel_5. In both cases the tag for CaloRecEx is the same (message on Oct 9/06)
 

Trigger Release job

Revision 13 (2006-10-05) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 22 to 22
 
    • Each job writes out an ntuple file and, as a consistency check, performs a fit on the invariant mass of the Z->mumu events produced.

  • G4AtlasApps: Andrea di Simone
Added:
>
>
    • 6 jobs, two each for single muons, electrons and pions (pT = 5 GeV and 50 GeV)
    • Input is EvGen file and they are probably testing the simulation stage and looking at variables such as time, Virtual Memory, etc.
    • Muon description is MUONQ02, so they are probably using Rome geometry.
 
  • JiveXML: Nikos Konstantinidis
    • What we try to do in the RTT jobs of JiveXML is to have separate jobs per subsystem, so that we can localise problems more easily
Line: 198 to 201
 

Interactive RTT

  • Get interative RTT up and running. Will provide an easy way for users to test their scripts before a full RTT run
Changed:
<
<
    • Steve Dallison is testing it.
>
>
    • Steve Dallison and Seth Zenz are testing it.
 

Miscellaneous

  • Keep user scripts on the RTT webpage
Deleted:
<
<
  • From Peter "As you know, we put in some control machinery to run short queue kit tests for quick(ish) turn around tests with the testing of kits specifically in mind. Perhaps, with Vivek;s help, we could try again to encourage users to label selected tests - which will be routed to the short queue - as RTT kit validation tests
 

Specific issues:

Revision 12 (2006-10-05) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 193 to 193
 
  • Improve error reporting
    • failureReport.html could include all messages reported with ERROR and FATAL tags
Added:
>
>
  • Currently, if you chain athena jobs (e.g. in CaloTests) and any one of them works, the web page will flag success (as they all write to the same log file). We need to improve things then.
 

Interactive RTT

Revision 11 (2006-10-04) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 194 to 194
 
  • Improve error reporting
    • failureReport.html could include all messages reported with ERROR and FATAL tags
Added:
>
>

Interactive RTT

  • Get interactive RTT up and running. Will provide an easy way for users to test their scripts before a full RTT run
    • Steve Dallison is testing it.
 

Miscellaneous

  • Keep user scripts on the RTT webpage

Revision 10 (2006-10-02) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 56 to 56
 
  • Details for the following three tests are here
    • All tests use ATLAS-DC3-02 description
    • All but one test use a top file - "mc11.004100.T1_McAtNLO_top.digit.RDO.v11000301"
Changed:
<
<
    • The one event uses "T1_McAtNLO_Jimmy_digit_RDO.v12000201"
>
>
    • The one lone test uses "T1_McAtNLO_Jimmy_digit_RDO.v12000201"
 
Line: 78 to 78
 
  • InDetRTT: Seth Zenz
    • This does Inner Detector reconstruction, plotting properties of tracks and hits for several different physics samples (and, more and more, different geometries). The input is mostly 11.0.41 digits, with reconstruction outputting the InDetRecStatistics ntuple; the digits will soon be changed to mostly 12.0.2
Added:
>
>
      • Single mu (pT=10, 100), J5, Single e (pT=25), Top (-T1-McAtNLO-), Jimmy.Zee, Jimmy.Zmumu use 11.0.41 digits and ATLAS-DC3-02
      • One Pythia.Zmumu job uses 12.0.2 digits and CSC-00-01-00
      • One minbias job uses 11.0.3? digits and another one uses csc11 (11.0.42) ATLAS-DC3-02
 
    • It has a system of scripts that plots track pulls/resolutions, hit locations/residuals, efficiencies, fakes, etc., and compares them with reference files I provide (currently through CVS, but this will change soon). It also does various comparisons between the three track authors (new/default, IPatRec, XKalman)
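For reference, the track-parameter pulls mentioned above are defined as (reconstructed - true) divided by the estimated error; a trivial sketch with assumed variable names:

def pull(reco_value, true_value, reco_error):
    """Track-parameter pull: (reconstructed - true) / estimated error."""
    return (reco_value - true_value) / reco_error

# e.g. for the transverse impact parameter d0 (names are illustrative):
#   pull_d0 = pull(track_d0, truth_d0, track_d0_error)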

  • InDetRecValidation: Steve Dallison - Software/Detector test
    • Look at single muon tracks and makes plots of the perigee parameters
    • Testing the standard reconstruction in one set of jobs and the stand alone inner detector reconstruction in another set of jobs
Added:
>
>
    • ATLAS-DC3-02 jobs: Full reco. + ID only reco. v11.0.41(?) digits for single mu (pt=10,100,300) for IPAT, XKAL, NTRK
    • Rome-Initial jobs:
      • Full reco. for v10.50 (?) digits for single mu = (pt = +-5) for IPAT, XKAL, NTRK
      • InDet reco. only for for v10.50 (?) digits for single mu = (pt = +-10, +-100, +-1000) for IPAT, XKAL, NTRK
 
  • It would seem that there is some level of duplication in the InDet jobs
    • Perhaps we should start a dialog between Steve, Seth/Sven, and Markus about this.
Line: 90 to 97
 

Calorimeter:

  • CaloAnaEx: Karim Bernardet
Changed:
<
<
    • reco with production of ESD+AOD (2 tests)
    • reco with production of ESD (and a ntuple). Then read the ESD to produce an AOD. With this AOD produce a ntuple which is compared to the first ntuple to make sure that I find the same things
>
>
    • Reco with production of ESD+AOD (2 tests)
    • Reco with production of ESD (and a ntuple). Then read the ESD to produce an AOD. With this AOD produce a ntuple which is compared to the first ntuple to make sure that I find the same things
 
I run a ROOT macro (checkPOOL.C) on the ESD and AOD files to check the content of them. - What does this really mean
I run a python script to check the CPU time with a ref file (for the ESD+AOD tests)
Added:
>
>
    • Previous three jobs are two H-2e-2mu and a top job: Rome-Initial. Muons use Q.02
 
    • (c) reco with testbeam : I run a python script to check the CPU time with a ref file
      => 4 tests in this package
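Several of the Calo packages above check the job's CPU time against a reference file. A minimal sketch of such a check, assuming the time is stored as a single number in a small text file and allowing a 20% tolerance (not Karim's actual script):

def read_cpu_seconds(path):
    """Read a single CPU-time value (in seconds) from a small text file."""
    with open(path) as f:
        return float(f.read().strip())

def cpu_time_ok(current_file, reference_file, tolerance=0.20):
    """Pass if the current CPU time does not exceed the reference by more than the tolerance."""
    return read_cpu_seconds(current_file) <= read_cpu_seconds(reference_file) * (1.0 + tolerance)

# hypothetical file names
print(cpu_time_ok("cputime_current.txt", "cputime_reference.txt"))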
Line: 103 to 111
 
      • One does histos comparison and the other truth plots.
      • Then a python script (didAnyTestFail) is run to check the results for the comparison and the truth (if one of the tests fails then the RTT test is marked as failed). The ROOT macros use thresholds which are stored in files in my web area (easier to update them)
    • Run a python script to check the CPU time with a ref file
Added:
>
>
    • Single photon (pT=100 GeV) and H-2e-2mu use Rome-Initial
    • Top job (-T1-McAtNLO-top) use digits with v11.0.31(?) and same redone with 11.5.0. ATLAS-DC3-02
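The histogram-versus-threshold comparison described above for CaloRecEx could look roughly like the following PyROOT sketch (file, histogram and threshold values are invented for illustration; this is not the actual macro):

import ROOT

def compare_hist(new_file, ref_file, hist_name, threshold):
    """Return True if every bin of hist_name agrees with the reference within threshold."""
    f_new = ROOT.TFile.Open(new_file)
    f_ref = ROOT.TFile.Open(ref_file)
    h_new = f_new.Get(hist_name)
    h_ref = f_ref.Get(hist_name)
    for i in range(1, h_ref.GetNbinsX() + 1):
        if abs(h_new.GetBinContent(i) - h_ref.GetBinContent(i)) > threshold:
            return False
    return True

# hypothetical file/histogram names and threshold
if not compare_hist("CaloRecEx_new.root", "CaloRecEx_ref.root", "hClusterEnergy", 0.05):
    print("histogram comparison failed - the RTT test would be marked as failed")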
 
  • CaloSimEx: Karim Bernardet
Changed:
<
<
    • simulation, only one test ( I run a python script to check the CPU time, I have to fix the python script) : could be removed in fact
>
>
    • Simulation, only one test ( I run a python script to check the CPU time, I have to fix the python script) : could be removed in fact
    • Uses ATLAS-DC3-05 layout
 
  • CaloTests: Karim Bernardet
Changed:
<
<
    • 7 tests: full chain tests (simulation, digitization and reconstruction) with single particles
>
>
    • 5 tests: full chain tests (simulation, digitization and reconstruction) with single particles
 
    • I use them to test the last tags of the geometry. ROOT macros are run plot some histos and truth plots
Added:
>
>
    • Single electron (pT=5,50 GeV) use ATLAS-DC3-07
    • Single photon (pT=50 GeV) use CSC-00-00-00
    • Single electron and photon (pT=50 GeV) use CSC-00-01-00
 
  • LArMonTools:Tayfun Ince
    • tested on commissioning data in bytestream format. Output is a root file with plenty of monitoring histograms which are simply dumped in a ps file with a macro. Just to double check if the updates to the monitoring tools run with latest athena version
Line: 118 to 132
 
  • MboyPerformance: Eric Lancon
    • Check performance of Muon Boy reconstruction code
Added:
>
>
    • Single muon (pT = +- 100 GeV), 10.01 digits, Rome-Initial
 
  • MooPerformance: Stephane Willocq
    • Check performance of MOORE reconstruction code
Added:
>
>
    • Single muon (pt=10,100,300), Jimmy.Zmumu use 11.0.41(?) digits and ATLAS-DC3-02
 

Checking Physics quantities:

  • Analysis Examples: Laurent Vacavant
    • ...deals with b-tagging validation. The jobs reads always the same AOD file, re-runs the b-tagging on it and compares the resulting histograms with some reference histograms..."
Added:
>
>
    • ATLAS-DC3-02
 
  • BPhysValidation: Steve Dallison
Changed:
<
<
    • ...
>
>
    • Looks at Bs -> J/psi Phi events. v11.0.41(?) digits. ATLAS-DC3-02
 
  • JetRec: Rolf Seuster
    • Tested on CSC data. This is for monitoring of how the Jet reconstruction works: various jet reconstruction algorithms like Kt, Cone, and clustering effects (from topoclusters), etc.
Added:
>
>
    • Both J5 and single Pi jobs use 12.0.1(?) digits. ATLAS-DC3-06
 
  • Missing ET : Silvia Resconi
Added:
>
>
    • Z -> tau/tau events. Rome-Initial
 
  • egammaRec : Dirk Zerwas
Changed:
<
<
    • ...
>
>
    • Single electron (pt=100), v11.0.31 digits and photon (pT=60), v11.0.41 digits, use ATLAS-DC3
    • Single electron (pt=100), g4dig(?) uses ATLAS-DC2
 
  • tauRec: Michael Heldmann
Changed:
<
<
    • ...
>
>
    • Z -> tau/tau events. Rome-Initial
 

Outstanding Problems


Revision 9 (2006-10-02) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 33 to 33
 
  • CaloDigEx: Karim Bernardet
    • "...digitization, only one test ( I run a python script to check the CPU time, I have to fix the python script) : could be removed in fact
Added:
>
>
    • Input file is "T1_McAtNLO_top.simul.Hits" events" and detector description is ATLAS-DC3-02
 
  • InDetSimRTT: Seth Zenz
Changed:
<
<
    • A much younger package, this is supposed to find obvious holes in the geometry. It runs 100 events (for two different gemeometries) through inner detector-only simulation, digitization, and reconstruction. In the last step it puts out an InDetRecStat ntuple and uses some of the hit plotting scripts from InDetRTT to make some pretty pictures. In addition to finding geometry holes, in principle it can also be a test of whether the full chain runs properly for the Inner Detector, and whether pool file input/output works at each steps--which are issues that needed to be checked at points during the last release cycle
>
>
    • A much younger package, this is supposed to find obvious holes in the geometry. It runs 100 (CSC11 EVGEN) events through InDet-only simulation, digitization, and reconstruction. In the last step it puts out an InDetRecStat ntuple and uses some of the hit plotting scripts from InDetRTT to make some pretty pictures. In addition to finding geometry holes, in principle it can also be a test of whether the full chain runs properly for the Inner Detector, and whether pool file input/output works at each steps--which are issues that needed to be checked at points during the last release cycle
    • Detector descriptions used are ATLAS-CSC-01-00-00, ATLAS-CSC-01-01-00
 
  • Muon Digi Example : Daniella Rebuzzi
Changed:
<
<
    • ...
>
>
    • Run geantinos through RPC/TGC/MDT/CSC digitizations - 60K events for each sub-det.
    • Detector description is "Q.02", i.e., Rome-Initial-02
    • What is definition of success?
 
  • Digitization: Sven Vahsen
    • Digitization tests are duplicated because they were put in place before detector specific tests. What I propose is to replace the detector tests with a full ATLAS digitization (eg an integration test) and possibly a test with pileup.
Added:
>
>
    • ID, CALO and MUON systems are separately tested (what exactly is being tested?) - Why is LVL1 being set on?
    • Input file is "simul.T1_McAtNLO_top" and det. description is "Rome-Initial"
 

Detector/Software test: Checking reconstruction software:

Overall tests:

Added:
>
>
  • Details for the following three tests are here
    • All tests use ATLAS-DC3-02 description
    • All but one test use a top file - "mc11.004100.T1_McAtNLO_top.digit.RDO.v11000301"
    • The one event uses "T1_McAtNLO_Jimmy_digit_RDO.v12000201"
 

Line: 55 to 66
 
  • RecExAnaTest : David Rousseau
    • RecExAnaTest tests in AtlasAnalysis have a very similar scope as RecExRecoTest and RecExRecoTest. They are basic test of integration of reconstruction up to AOD and trigger. As AOD typically depend of all reconstruction no attempt is made to run only pieces of reconstruction, but some tests are done with or without trigger
Changed:
<
<
>
>
 

Trigger:

Revision 8 (2006-10-02) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 143 to 143
 

What geometry to use in samples

  • Many jobs use Rome- geomtery in their tests. Perhaps they should use newer geometry versions. I believe that the there are four new versions for production:
Changed:
<
<
    • ATLAS-CSC-00-00-00 ATLAS-CSC 01-00-00 ATLAS-CSC 01-01-00 ATLAS-CSC 02-02-00
>
>
    • ATLAS-CSC-00-00-00 ATLAS-CSC 01-00-00 ATLAS-CSC 01-01-00 ATLAS-CSC 01-02-00
    • What about keeping ATLAS-DC3-02 as a reference?
    • Should we really drop Rome-Initial? A lot of tests have been done with it?
 
    • Details of these tags are here

Duplication of tests

Revision 7 (2006-09-29) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 134 to 134
 

General Issues:

Added:
>
>

Obsolete tests

  • Are there obsolete tests or tests whose results are not being looked at?
    • What is the best way to get an idea of the latter? Have a tool that checks to see which URL's have not been accessed?
    • Talk to the package owners to see which tests are obsolete; either remove them or upgrade them, e.g., use newer geometry (next point).
 

What geometry to use in samples

  • Many jobs use Rome- geomtery in their tests. Perhaps they should use newer geometry versions. I believe that the there are four new versions for production:
Line: 144 to 150
 
  • There is probably some level of duplication. What is the best way to reduce the redundancy?
Added:
>
>

Moving jobs from RTT to KV

  • Peter points out that it would be very useful to tag some jobs currently running in RTT as KV, so that they can be run on the short queue and provide faster feedback on the Kit. Of course, the results will need to be checked ASAP.
  • What jobs are suitable for running as KV?
 

Error reporting

  • Improve error reporting

Revision 6 (2006-09-28) - FrederickLuehring

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 73 to 73
 
    • Look at single muon tracks and makes plots of the perigee parameters
    • Testing the standard reconstruction in one set of jobs and the stand alone inner detector reconstruction in another set of jobs
Changed:
<
<
  • It would seem that there is some level of duplication in the InDet jobs
>
>
  • It would seem that there is some level of duplication in the InDet jobs
    • Perhaps we should start a dialog between Steve, Seth/Sven, and Markus about this.
 

Calorimeter:

Revision 5 (2006-09-28) - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 11 to 11
 

Software tests - basically testing the underlying software:

Changed:
<
<
  • AthExHelloWorld: Alex Undrus
>
>
  • AthExHelloWorld: Alex Undrus
 
Changed:
<
<
  • CBNT_AOD: Vassilios Vassilakopoulos
>
>
  • CBNT_AOD: Vassilios Vassilakopoulos
 
    • "...is for the validation of the AOD content and AOD reconstruction..." - What does this really mean
Changed:
<
<
  • GeneratorsRTT: George.Stavropoulos
>
>
  • GeneratorsRTT: George.Stavropoulos
 
    • 5 tests: meant to validate the Generators packages at run time.
    • Athena jobs are set up and test the main generators used for the production of the MC events.
    • Each job writes out an ntuple file and, as a consistency check, performs a fit on the invariant mass of the Z->mumu events produced.
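A hedged sketch of the Z->mumu consistency check mentioned above (ntuple, tree and branch names are assumptions, not the actual GeneratorsRTT job): fill the dimuon invariant mass from the ntuple and fit a simple Gaussian around the Z peak.

import ROOT

f = ROOT.TFile.Open("generators_rtt.root")   # assumed ntuple file name
tree = f.Get("CollectionTree")               # assumed tree name
h_mass = ROOT.TH1F("h_mass", "dimuon invariant mass;m(mumu) [GeV]", 60, 60.0, 120.0)
tree.Draw("m_mumu >> h_mass")                # assumed branch name
h_mass.Fit("gaus", "Q")                      # quiet Gaussian fit around the Z peak
fitted_mass = h_mass.GetFunction("gaus").GetParameter(1)
print("fitted Z mass: %.1f GeV" % fitted_mass)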
Changed:
<
<
  • G4AtlasApps: Andrea di Simone
>
>
  • G4AtlasApps: Andrea di Simone
 
Changed:
<
<
  • JiveXML: Nikos Konstantinidis
>
>
  • JiveXML: Nikos Konstantinidis
 
    • What we try to do in the RTT jobs of JiveXML is to have separate jobs per subsystem, so that we can localise problems more easily
    • So, if you look at JiveXML/share/JiveXML_jobOptions_Muons.py, you will see that we switch off the InDet and Calos by
Line: 31 to 31
 

Digitization tests:

Changed:
<
<
  • CaloDigEx: Karim Bernardet
>
>
  • CaloDigEx: Karim Bernardet
 
    • "...digitization, only one test ( I run a python script to check the CPU time, I have to fix the python script) : could be removed in fact
Changed:
<
<
  • InDetSimRTT: Seth Zenz
>
>
  • InDetSimRTT: Seth Zenz
 
    • A much younger package, this is supposed to find obvious holes in the geometry. It runs 100 events (for two different gemeometries) through inner detector-only simulation, digitization, and reconstruction. In the last step it puts out an InDetRecStat ntuple and uses some of the hit plotting scripts from InDetRTT to make some pretty pictures. In addition to finding geometry holes, in principle it can also be a test of whether the full chain runs properly for the Inner Detector, and whether pool file input/output works at each steps--which are issues that needed to be checked at points during the last release cycle
Changed:
<
<
  • Muon Digi Example : Daniella Rebuzzi
>
>
  • Muon Digi Example : Daniella Rebuzzi
 
    • ...
Changed:
<
<
  • Digitization: Sven Vahsen
>
>
  • Digitization: Sven Vahsen
 
    • Digitization tests are duplicated because they were put in place before detector specific tests. What I propose is to replace the detector tests with a full ATLAS digitization (eg an integration test) and possibly a test with pileup.

Detector/Software test: Checking reconstruction software:

Overall tests:

Changed:
<
<
>
>
 

Changed:
<
<
>
>
 
    • RecExTrigTest tests in AtlasTrigger have a very similar scope as RecExRecoTest. They are basic test of integration of reconstruction and trigger, assuming reconstruction and trigger have been tested separately
Changed:
<
<
>
>
 
    • RecExAnaTest tests in AtlasAnalysis have a very similar scope as RecExRecoTest and RecExRecoTest. They are basic test of integration of reconstruction up to AOD and trigger. As AOD typically depend of all reconstruction no attempt is made to run only pieces of reconstruction, but some tests are done with or without trigger
Changed:
<
<
>
>
 

Trigger:

Changed:
<
<
  • Trigger Release : Simon George
>
>
  • Trigger Release : Simon George
 
    • The trigger tests I set up in the RTT are meant to measure the rate of memory increase for some standard jobs
Changed:
<
<
>
>
    • Documentation is here
 

Inner Detector:

Changed:
<
<
  • InDetRTT: Seth Zenz
>
>
  • InDetRTT: Seth Zenz
 
    • This does Inner Detector reconstruction, plotting properties of tracks and hits for a several different physics samples (and, more and more, different geometries). The input is mostly 11.0.41 digits, with reconstruction outputting the InDetRecStatistics ntuple; the digits will soon be changed to mostly 12.0.2
    • It has a system of scripts that plots track pulls/resolutions, hit locations/residuals, efficiencies, fakes, etc. etc., and compares them with reference files I provide (currently through CVS, but this will change soon). It also does various comparisons between these the three track authors (new/default, IPatRec, XKalman)
Changed:
<
<
  • InDetRecValidation: Steve Dallison - Software/Detector test
>
>
  • InDetRecValidation: Steve Dallison - Software/Detector test
 
    • Look at single muon tracks and makes plots of the perigee parameters
    • Testing the standard reconstruction in one set of jobs and the stand alone inner detector reconstruction in another set of jobs
Line: 77 to 77
 

Calorimeter:

Changed:
<
<
  • CaloAnaEx: Karim Bernardet
>
>
  • CaloAnaEx: Karim Bernardet
 
    • reco with production of ESD+AOD (2 tests)
    • reco with production of ESD (and a ntuple). Then read the ESD to produce an AOD. With this AOD produce a ntuple which is compared to the first ntuple to make sure that I find the same things
      I run a ROOT macro (checkPOOL.C) on the ESD and AOD files to check the content of them. - What does this really mean
Line: 85 to 85
 
    • (c) reco with testbeam : I run a python script to check the CPU time with a ref file
      => 4 tests in this package
Changed:
<
<
  • CaloRecEx: Karim Bernardet
>
>
  • CaloRecEx: Karim Bernardet
 
    • There are 4 tests. Each test produces a ntuple.
    • 2 ROOT macros are run on this ntuple :
      • One does histos comparison and the other truth plots.
      • Then a python script (didAnyTestFail) is run to check the results for the comparison and the truth (If one of the tests fails then the RTT test is marked as failed). The ROOT macros uses thresholds which are stored in files in my web area (easier to update them)
    • Run a python script to check the CPU time with a ref file
  • CaloSimEx: Karim Bernardet
    • Simulation, only one test (I run a python script to check the CPU time; I have to fix the python script): could in fact be removed
  • CaloTests: Karim Bernardet
    • 7 tests: full chain tests (simulation, digitization and reconstruction) with single particles
    • I use them to test the latest tags of the geometry. ROOT macros are run to plot some histograms and truth plots
  • LArMonTools: Tayfun Ince
    • Tested on commissioning data in bytestream format. The output is a ROOT file with plenty of monitoring histograms, which are simply dumped into a ps file with a macro. This is just a double-check that the updates to the monitoring tools run with the latest Athena version
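
The threshold-based histogram comparison mentioned for CaloRecEx above (and used in a similar spirit by InDetRTT and the b-tagging validation) can be sketched as follows. This is not the actual didAnyTestFail script; it is a minimal illustration using PyROOT, and the file names, histogram names and the threshold-file format are assumptions.

    import sys
    import ROOT

    def load_thresholds(path):
        # One "histogram_name minimum_probability" pair per line; '#' starts a comment.
        thresholds = {}
        for line in open(path):
            if line.strip() and not line.startswith("#"):
                name, value = line.split()
                thresholds[name] = float(value)
        return thresholds

    def compare(new_file, ref_file, thresholds):
        new_f = ROOT.TFile.Open(new_file)
        ref_f = ROOT.TFile.Open(ref_file)
        failures = []
        for name, threshold in thresholds.items():
            new_h, ref_h = new_f.Get(name), ref_f.Get(name)
            if not new_h or not ref_h:
                failures.append(name + " (histogram missing)")
                continue
            prob = new_h.KolmogorovTest(ref_h)  # compatibility probability
            if prob < threshold:
                failures.append("%s (prob %.3g < %.3g)" % (name, prob, threshold))
        return failures

    if __name__ == "__main__":
        # usage: python compareHistos.py new.root reference.root thresholds.txt
        failed = compare(sys.argv[1], sys.argv[2], load_thresholds(sys.argv[3]))
        for item in failed:
            print("FAILED: " + item)
        sys.exit(1 if failed else 0)
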

Muon Spectrometer:

  • MboyPerformance: Eric Lancon
    • Check performance of Muon Boy reconstruction code
  • MooPerformance: Stephane Willocq
    • Check performance of MOORE reconstruction code

Checking Physics quantities:

  • Analysis Examples: Laurent Vacavant
    • ...deals with b-tagging validation. The job always reads the same AOD file, re-runs the b-tagging on it and compares the resulting histograms with some reference histograms..."
  • BPhysValidation: Steve Dallison
    • ...
  • JetRec: Rolf Seuster
    • Tested on CSC data. This is for monitoring how the jet reconstruction works: the various jet reconstruction algorithms such as Kt and Cone, clustering effects (from topoclusters), etc.
  • Missing ET : Silvia Resconi
  • egammaRec : Dirk Zerwas
    • ...
  • tauRec: Michael Heldmann
    • ...

Outstanding Problems


Added:
>
>

General Issues:

What geometry to use in samples

  • Many jobs use a Rome- geometry in their tests. Perhaps they should use newer geometry versions. I believe that there are four new versions for production:
    • ATLAS-CSC-00-00-00, ATLAS-CSC-01-00-00, ATLAS-CSC-01-01-00, ATLAS-CSC-02-02-00
    • Details of these tags are here

Duplication of tests

  • There is probably some level of duplication. What is the best way to reduce the redundancy?

Error reporting

  • Improve error reporting
    • failureReport.html could include all messages reported with ERROR and FATAL tags (a minimal extraction sketch is given below)
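
A minimal sketch of how such a report could be put together, assuming Athena-style log lines that contain the severity words ERROR or FATAL; the log format and the output layout are assumptions, not the RTT's actual failureReport machinery.

    import sys

    def escape(text):
        # Minimal HTML escaping for the report body.
        return text.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")

    def collect_messages(logfile, severities=("ERROR", "FATAL")):
        # Yield (line number, message) for every log line flagged ERROR or FATAL.
        for number, line in enumerate(open(logfile), 1):
            if any(" %s " % level in line for level in severities):
                yield number, line.rstrip()

    def write_report(logfile, htmlfile):
        out = open(htmlfile, "w")
        out.write("<html><body><h2>ERROR/FATAL messages</h2><pre>\n")
        for number, message in collect_messages(logfile):
            out.write("%6d: %s\n" % (number, escape(message)))
        out.write("</pre></body></html>\n")
        out.close()

    if __name__ == "__main__":
        # usage: python failureReport.py <job log> failureReport.html
        write_report(sys.argv[1], sys.argv[2])
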

Miscellaneous

  • Keep user scripts on the RTT webpage
  • From Peter: "As you know, we put in some control machinery to run short-queue kit tests for quick(ish) turnaround, with the testing of kits specifically in mind. Perhaps, with Vivek's help, we could try again to encourage users to label selected tests - which will be routed to the short queue - as RTT kit validation tests."

Specific issues:

JiveXML

  • Database errors in some jobs
  • The Muon job fails, probably because the ID was switched off. Need to check whether setMUID = false fixes this problem or not

Calo test jobs

  • I don't understand why the CaloRecEx jobs in RTT finish successfully, but fail the tests
    • Karim's response - "I understand why they fail; it is normal. CheckForFail looks at the results produced by the ROOT macros; if one of them failed, then the test is marked as failed."

Trigger Release job

RecEx tests

  • David wants extra features, e.g., ability to use jobO from other packages, etc.
    • Brinick said that some features are being tested, others will come in future releases

Jet Rec

  • Rolf says, "I prepared a reference root file to which I want to compare the results from the RTT. As this file is rather big, I don't want to store it in CVS."

GeneratorsRTT and Missing ET

  • Jobs are OK but tests fail probably because users have old-style tags in their XML files
    • Brinick's response: "However....there is not currently an equivalent in <test></test> land, so you should not bug the developer about this. It is us who need to update"

Revision 4 - 2006-09-28 - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Line: 9 to 9
 

Description of tests


Changed:
<
<
  • Analysis Examples: Laurent Vacavant - Physics test
    • ...deals with b-tagging validation. The jobs reads always the same AOD file, re-runs the b-tagging on it and compares the resulting histograms with some reference histograms..."
>
>

Software tests - basically testing the underlying software:

 
Changed:
<
<
  • AthExHelloWorld: Alex Undrus - Athena framework test
>
>
  • AthExHelloWorld: Alex Undrus
 
Changed:
<
<
  • BPhysValidation: Steve Dallison - Physics test
    • ...
>
>
  • CBNT_AOD: Vassilios Vassilakopoulos
    • "...is for the validation of the AOD content and AOD reconstruction..." - What does this really mean
 
Changed:
<
<
  • CBNT_AOD: Vassilios Vassilakopoulos - Software test
    • "...is for the validation of the AOD content and AOD reconstruction..." - What does this really mean
>
>
  • GeneratorsRTT: George.Stavropoulos
    • 5 tests: meant to validate the Generators packages at run time.
    • Athena jobs are setup and test the main generators used for the production of the MC events.
    • Each job writes out an ntuple file and, as a consistency check, performs a fit on the invariant mass of the Z->mumu events produced.
 
Changed:
<
<
  • CaloAnaEx: Karim Bernardet - Software/Detector test
    • reco with production of ESD+AOD (2 tests)
    • reco with production of ESD (and a ntuple). Then read the ESD to produce an AOD. With this AOD produce a ntuple which is compared to the first ntuple to make sure that I find the same things
      I run a ROOT macro (checkPOOL.C) on the ESD and AOD files to check the content of them. - What does this really mean
      I run a python script to check the CPU time with a ref file (for the ESD+AOD tests)
    • (c) reco with testbeam : I run a python script to check the CPU time with a ref file
      => 4 tests in this package
>
>
  • G4AtlasApps: Andrea di Simone
 
Changed:
<
<
  • CaloDigEx: Karim Bernardet - Software/Detector test
    • "...digitization, only one test ( I run a python script to check the CPU time, I have to fix the python script) : could be removed in fact"
>
>
  • JiveXML: Nikos Konstantinidis
    • What we try to do in the RTT jobs of JiveXML is to have separate jobs per subsystem, so that we can localise problems more easily
    • So, if you look at JiveXML/share/JiveXML_jobOptions_Muons.py, you will see that we switch off the InDet and Calos by
 
Changed:
<
<
  • CaloRecEx: Karim Bernardet - Software/Detector test
    • There are 4 tests. Each test produces a ntuple.
    • 2 ROOT macros are run on this ntuple :
      • One does histos comparison and the other truth plots.
      • Then a python script (didAnyTestFail) is run to check the results for the comparison and the truth (If one of the tests fails then the RTT test is marked as failed). The ROOT macros uses thresholds which are stored in files in my web area (easier to update them)
    • Run a python script to check the CPU time with a ref file
>
>

Digitization tests:

 
Changed:
<
<
  • CaloSimEx: Karim Bernardet - Software/Detector test
    • simulation, only one test ( I run a python script to check the CPU time, I have to fix the python script) : could be removed in fact
>
>
  • CaloDigEx: Karim Bernardet
    • "...digitization, only one test ( I run a python script to check the CPU time, I have to fix the python script) : could be removed in fact
 
Changed:
<
<
  • CaloTests: Karim Bernardet - Software/Detector test
    • 7 tests: full chain tests (simulation, digitization and reconstruction) with single particles
    • I use them to test the last tags of the geometry. ROOT macros are run plot some histos and truth plots
>
>
  • InDetSimRTT: Seth Zenz
    • A much younger package, this is supposed to find obvious holes in the geometry. It runs 100 events (for two different gemeometries) through inner detector-only simulation, digitization, and reconstruction. In the last step it puts out an InDetRecStat ntuple and uses some of the hit plotting scripts from InDetRTT to make some pretty pictures. In addition to finding geometry holes, in principle it can also be a test of whether the full chain runs properly for the Inner Detector, and whether pool file input/output works at each steps--which are issues that needed to be checked at points during the last release cycle

  • Muon Digi Example : Daniella Rebuzzi
    • ...
 
Changed:
<
<
  • Digitization: Sven Vahsen - Software/detector test
>
>
  • Digitization: Sven Vahsen
 
    • Digitization tests are duplicated because they were put in place before detector specific tests. What I propose is to replace the detector tests with a full ATLAS digitization (eg an integration test) and possibly a test with pileup.
Changed:
<
<
  • GeneratorsRTT: George.Stavropoulos - Software test
    • 5 tests: meant to validate the Generators packages at run time.
    • Athena jobs are setup and test the main generators used for the production of the MC events.
    • Each job writes out an ntuple file and, as a consistency check, performs a fit on the invariant mass of the Z->mumu events produced.
>
>

Detector/Software test: Checking reconstruction software:

Overall tests:

Trigger:

 
Changed:
<
<
  • G4AtlasApps: Andrea di Simone - Software test
>
>

Inner Detector:

 
Changed:
<
<
  • InDetRTT: Seth Zenz - Software/Detector test
>
>
  • InDetRTT: Seth Zenz
 
    • This does Inner Detector reconstruction, plotting properties of tracks and hits for a several different physics samples (and, more and more, different geometries). The input is mostly 11.0.41 digits, with reconstruction outputting the InDetRecStatistics ntuple; the digits will soon be changed to mostly 12.0.2
    • It has a system of scripts that plots track pulls/resolutions, hit locations/residuals, efficiencies, fakes, etc. etc., and compares them with reference files I provide (currently through CVS, but this will change soon). It also does various comparisons between these the three track authors (new/default, IPatRec, XKalman)
Deleted:
<
<
  • InDetSimRTT: Seth Zenz - Software/Detectror test
    • A much younger package, this is supposed to find obvious holes in the geometry. It runs 100 events (for two different gemeometries) through inner detector-only simulation, digitization, and reconstruction. In the last step it puts out an InDetRecStat ntuple and uses some of the hit plotting scripts from InDetRTT to make some pretty pictures. In addition to finding geometry holes, in principle it can also be a test of whether the full chain runs properly for the Inner Detector, and whether pool file input/output works at each steps--which are issues that needed to be checked at points during the last release cycle
 
  • InDetRecValidation: Steve Dallison - Software/Detector test
    • Look at single muon tracks and makes plots of the perigee parameters
    • Testing the standard reconstruction in one set of jobs and the stand alone inner detector reconstruction in another set of jobs
Changed:
<
<
  • It would seem that there is some level of duplication in the InDet jobs
>
>
  • It would seem that there is some level of duplication in the InDet jobs
 
Changed:
<
<
  • JetRec: Rolf Seuster - Software/Physics test
    • Tested on CSC data. This is for monitoring of how the Jet reconstruction work, various jet reconstruction algorithms like Kt, Cone and clustering effects (from topoclusters), etc.
>
>

Calorimeter:

 
Changed:
<
<
  • JiveXML: Nikos Konstantinidis - Software test
    • What we try to do in the RTT jobs of JiveXML is to have separate jobs per subsystem, so that we can localise problems more easily
    • So, if you look at JiveXML/share/JiveXML_jobOptions_Muons.py, you will see that we switch off the InDet and Calos by
>
>
  • CaloAnaEx: Karim Bernardet
    • reco with production of ESD+AOD (2 tests)
    • reco with production of ESD (and a ntuple). Then read the ESD to produce an AOD. With this AOD produce a ntuple which is compared to the first ntuple to make sure that I find the same things
      I run a ROOT macro (checkPOOL.C) on the ESD and AOD files to check the content of them. - What does this really mean
      I run a python script to check the CPU time with a ref file (for the ESD+AOD tests)
    • (c) reco with testbeam : I run a python script to check the CPU time with a ref file
      => 4 tests in this package
 
Changed:
<
<
  • LArMonTools:Tayfun Ince - Detector test
>
>
  • CaloRecEx: Karim Bernardet
    • There are 4 tests. Each test produces a ntuple.
    • 2 ROOT macros are run on this ntuple :
      • One does histos comparison and the other truth plots.
      • Then a python script (didAnyTestFail) is run to check the results for the comparison and the truth (If one of the tests fails then the RTT test is marked as failed). The ROOT macros uses thresholds which are stored in files in my web area (easier to update them)
    • Run a python script to check the CPU time with a ref file

  • CaloSimEx: Karim Bernardet
    • simulation, only one test ( I run a python script to check the CPU time, I have to fix the python script) : could be removed in fact

  • CaloTests: Karim Bernardet
    • 7 tests: full chain tests (simulation, digitization and reconstruction) with single particles
    • I use them to test the last tags of the geometry. ROOT macros are run plot some histos and truth plots

  • LArMonTools:Tayfun Ince
 
    • tested on commissioning data in bytestream format. Output is a root file with plenty of monitoring histograms which are simply dumped in a ps file with a macro. Just to double check if the updates to the monitoring tools run with latest athena version
Changed:
<
<
  • MboyPerformance: Eric Lancon - Software/Detector test
>
>

Muon Spectrometer:

  • MboyPerformance: Eric Lancon
 
    • Check performance of Muon Boy reconstruction code
Changed:
<
<
  • MooPerformance: Stephane Willocq - Software/Detector test
>
>
  • MooPerformance: Stephane Willocq
 
    • Check performance of MOORE reconstruction code
Changed:
<
<
  • Missing ET : Silvia Resconi - Software/Physics test

  • Muon Digi Example : Daniella Rebuzzi - Software/Detector test
    • ...
>
>

Checking Physics quantities:

 
Changed:
<
<
>
>
  • Analysis Examples: Laurent Vacavant
    • ...deals with b-tagging validation. The jobs reads always the same AOD file, re-runs the b-tagging on it and compares the resulting histograms with some reference histograms..."
 
Changed:
<
<
  • RecoExTrigTest : David Rousseau - Software/Detector test
    • RecExTrigTest tests in AtlasTrigger have a very similar scope as RecExRecoTest. They are basic test of integration of reconstruction and trigger, assuming reconstruction and trigger have been tested separately
>
>
  • BPhysValidation: Steve Dallison
    • ...
 
Changed:
<
<
>
>
  • JetRec: Rolf Seuster
    • Tested on CSC data. This is for monitoring of how the Jet reconstruction work, various jet reconstruction algorithms like Kt, Cone and clustering effects (from topoclusters), etc.
 
Changed:
<
<
>
>
  • Missing ET : Silvia Resconi
 
Changed:
<
<
  • egammaRec : Dirk Zerwas - Software/Physics test
>
>
  • egammaRec : Dirk Zerwas
 
    • ...
Changed:
<
<
  • tauRec: Michael Heldmann - Software/Physics test
>
>
  • tauRec: Michael Heldmann
 
    • ...

Outstanding Problems

Revision 3 - 2006-09-28 - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006
Added:
>
>
 Here is a current status of what I understand about the RTT, i.e., what packages are run, etc.
Added:
>
>

Description of tests


 
  • Analysis Examples: Laurent Vacavant - Physics test
    • ...deals with b-tagging validation. The jobs reads always the same AOD file, re-runs the b-tagging on it and compares the resulting histograms with some reference histograms..."
Added:
>
>
 
  • AthExHelloWorld: Alex Undrus - Athena framework test
Added:
>
>
 
  • BPhysValidation: Steve Dallison - Physics test
    • ...
Added:
>
>
 
  • CBNT_AOD: Vassilios Vassilakopoulos - Software test
    • "...is for the validation of the AOD content and AOD reconstruction..." - What does this really mean
Added:
>
>
 
  • CaloAnaEx: Karim Bernardet - Software/Detector test
    • reco with production of ESD+AOD (2 tests)
    • reco with production of ESD (and a ntuple). Then read the ESD to produce an AOD. With this AOD produce a ntuple which is compared to the first ntuple to make sure that I find the same things
Line: 31 to 44
 
  • CaloTests: Karim Bernardet - Software/Detector test
    • 7 tests: full chain tests (simulation, digitization and reconstruction) with single particles
    • I use them to test the last tags of the geometry. ROOT macros are run plot some histos and truth plots
Changed:
<
<
  • Digitization: George.Stavropoulos - Software test
>
>
  • Digitization: Sven Vahsen - Software/detector test
    • Digitization tests are duplicated because they were put in place before detector specific tests. What I propose is to replace the detector tests with a full ATLAS digitization (eg an integration test) and possibly a test with pileup.

  • GeneratorsRTT: George.Stavropoulos - Software test
 
    • 5 tests: meant to validate the Generators packages at run time.
    • Athena jobs are setup and test the main generators used for the production of the MC events.
    • Each job writes out an ntuple file and, as a consistency check, performs a fit on the invariant mass of the Z->mumu events produced.
Line: 39 to 58
 
  • InDetRTT: Seth Zenz - Software/Detector test
    • This does Inner Detector reconstruction, plotting properties of tracks and hits for a several different physics samples (and, more and more, different geometries). The input is mostly 11.0.41 digits, with reconstruction outputting the InDetRecStatistics ntuple; the digits will soon be changed to mostly 12.0.2
    • It has a system of scripts that plots track pulls/resolutions, hit locations/residuals, efficiencies, fakes, etc. etc., and compares them with reference files I provide (currently through CVS, but this will change soon). It also does various comparisons between these the three track authors (new/default, IPatRec, XKalman)
Added:
>
>
 
  • InDetSimRTT: Seth Zenz - Software/Detectror test
    • A much younger package, this is supposed to find obvious holes in the geometry. It runs 100 events (for two different gemeometries) through inner detector-only simulation, digitization, and reconstruction. In the last step it puts out an InDetRecStat ntuple and uses some of the hit plotting scripts from InDetRTT to make some pretty pictures. In addition to finding geometry holes, in principle it can also be a test of whether the full chain runs properly for the Inner Detector, and whether pool file input/output works at each steps--which are issues that needed to be checked at points during the last release cycle
Added:
>
>
 
  • InDetRecValidation: Steve Dallison - Software/Detector test
Added:
>
>
    • Look at single muon tracks and makes plots of the perigee parameters
    • Testing the standard reconstruction in one set of jobs and the stand alone inner detector reconstruction in another set of jobs

  • It would seem that there is some level of duplication in the InDet jobs
 
  • JetRec: Rolf Seuster - Software/Physics test
    • Tested on CSC data. This is for monitoring of how the Jet reconstruction work, various jet reconstruction algorithms like Kt, Cone and clustering effects (from topoclusters), etc.
Added:
>
>
 
  • JiveXML: Nikos Konstantinidis - Software test
    • What we try to do in the RTT jobs of JiveXML is to have separate jobs per subsystem, so that we can localise problems more easily
    • So, if you look at JiveXML/share/JiveXML_jobOptions_Muons.py, you will see that we switch off the InDet and Calos by
Line: 49 to 76
 
    • So, if you look at JiveXML/share/JiveXML_jobOptions_Muons.py, you will see that we switch off the InDet and Calos by
Added:
>
>
 
  • LArMonTools:Tayfun Ince - Detector test
    • tested on commissioning data in bytestream format. Output is a root file with plenty of monitoring histograms which are simply dumped in a ps file with a macro. Just to double check if the updates to the monitoring tools run with latest athena version
Added:
>
>
 
  • MboyPerformance: Eric Lancon - Software/Detector test
    • Check performance of Muon Boy reconstruction code
Added:
>
>
 
  • MooPerformance: Stephane Willocq - Software/Detector test
    • Check performance of MOORE reconstruction code
Added:
>
>
  • Missing ET : Silvia Resconi - Software/Physics test

  • Muon Digi Example : Daniella Rebuzzi - Software/Detector test
    • ...

  • RecoExTrigTest : David Rousseau - Software/Detector test
    • RecExTrigTest tests in AtlasTrigger have a very similar scope as RecExRecoTest. They are basic test of integration of reconstruction and trigger, assuming reconstruction and trigger have been tested separately

  • egammaRec : Dirk Zerwas - Software/Physics test
    • ...

  • tauRec: Michael Heldmann - Software/Physics test
    • ...

Outstanding Problems



Revision 2 - 2006-09-28 - VivekJain

Line: 1 to 1
 
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006

Here is a current status of what I understand about the RTT, i.e., what packages are run, etc.

Added:
>
>
  • Analysis Examples: Laurent Vacavant - Physics test
    • ...deals with b-tagging validation. The jobs reads always the same AOD file, re-runs the b-tagging on it and compares the resulting histograms with some reference histograms..."
  • AthExHelloWorld: Alex Undrus - Athena framework test
  • BPhysValidation: Steve Dallison - Physics test
    • ...
  • CBNT_AOD: Vassilios Vassilakopoulos - Software test
    • "...is for the validation of the AOD content and AOD reconstruction..." - What does this really mean
  • CaloAnaEx: Karim Bernardet - Software/Detector test
    • reco with production of ESD+AOD (2 tests)
    • reco with production of ESD (and a ntuple). Then read the ESD to produce an AOD. With this AOD produce a ntuple which is compared to the first ntuple to make sure that I find the same things
      I run a ROOT macro (checkPOOL.C) on the ESD and AOD files to check the content of them. - What does this really mean
      I run a python script to check the CPU time with a ref file (for the ESD+AOD tests)
    • (c) reco with testbeam : I run a python script to check the CPU time with a ref file
      => 4 tests in this package
  • CaloDigEx: Karim Bernardet - Software/Detector test
    • "...digitization, only one test ( I run a python script to check the CPU time, I have to fix the python script) : could be removed in fact"
  • CaloRecEx: Karim Bernardet - Software/Detector test
    • There are 4 tests. Each test produces a ntuple.
    • 2 ROOT macros are run on this ntuple :
      • One does histos comparison and the other truth plots.
      • Then a python script (didAnyTestFail) is run to check the results for the comparison and the truth (If one of the tests fails then the RTT test is marked as failed). The ROOT macros uses thresholds which are stored in files in my web area (easier to update them)
    • Run a python script to check the CPU time with a ref file
  • CaloSimEx: Karim Bernardet - Software/Detector test
    • simulation, only one test ( I run a python script to check the CPU time, I have to fix the python script) : could be removed in fact
  • CaloTests: Karim Bernardet - Software/Detector test
    • 7 tests: full chain tests (simulation, digitization and reconstruction) with single particles
    • I use them to test the last tags of the geometry. ROOT macros are run plot some histos and truth plots
  • Digitization: George.Stavropoulos - Software test
    • 5 tests: meant to validate the Generators packages at run time.
    • Athena jobs are setup and test the main generators used for the production of the MC events.
    • Each job writes out an ntuple file and, as a consistency check, performs a fit on the invariant mass of the Z->mumu events produced.
  • G4AtlasApps: Andrea di Simone - Software test
  • InDetRTT: Seth Zenz - Software/Detector test
    • This does Inner Detector reconstruction, plotting properties of tracks and hits for a several different physics samples (and, more and more, different geometries). The input is mostly 11.0.41 digits, with reconstruction outputting the InDetRecStatistics ntuple; the digits will soon be changed to mostly 12.0.2
    • It has a system of scripts that plots track pulls/resolutions, hit locations/residuals, efficiencies, fakes, etc. etc., and compares them with reference files I provide (currently through CVS, but this will change soon). It also does various comparisons between these the three track authors (new/default, IPatRec, XKalman)
  • InDetSimRTT: Seth Zenz - Software/Detectror test
    • A much younger package, this is supposed to find obvious holes in the geometry. It runs 100 events (for two different gemeometries) through inner detector-only simulation, digitization, and reconstruction. In the last step it puts out an InDetRecStat ntuple and uses some of the hit plotting scripts from InDetRTT to make some pretty pictures. In addition to finding geometry holes, in principle it can also be a test of whether the full chain runs properly for the Inner Detector, and whether pool file input/output works at each steps--which are issues that needed to be checked at points during the last release cycle
  • InDetRecValidation: Steve Dallison - Software/Detector test
  • JetRec: Rolf Seuster - Software/Physics test
    • Tested on CSC data. This is for monitoring of how the Jet reconstruction work, various jet reconstruction algorithms like Kt, Cone and clustering effects (from topoclusters), etc.
  • JiveXML: Nikos Konstantinidis - Software test
    • What we try to do in the RTT jobs of JiveXML is to have separate jobs per subsystem, so that we can localise problems more easily
    • So, if you look at JiveXML/share/JiveXML_jobOptions_Muons.py, you will see that we switch off the InDet and Calos by
  • LArMonTools:Tayfun Ince - Detector test
    • tested on commissioning data in bytestream format. Output is a root file with plenty of monitoring histograms which are simply dumped in a ps file with a macro. Just to double check if the updates to the monitoring tools run with latest athena version
  • MboyPerformance: Eric Lancon - Software/Detector test
    • Check performance of Muon Boy reconstruction code
  • MooPerformance: Stephane Willocq - Software/Detector test
    • Check performance of MOORE reconstruction code

Revision 1 - 2006-09-27 - VivekJain

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="VivekJain"
-- VivekJain - 27 Sep 2006

Here is a current status of what I understand about the RTT, i.e., what packages are run, etc.

 