-- YeChen - 2017-01-24

DQ twiki







Useful JIRA

HLTMon memory usage: https://its.cern.ch/jira/browse/ATR-15869

HLT online monitoring review before 2017 beams: https://its.cern.ch/jira/browse/ATR-16200

OHP webdisplay: https://atlasop.cern.ch/tdaq/web_is/ohp/Trigger.html#page_HLT_Muon_EF_Muon

DQMD: https://atlasop.cern.ch/operRef.php?subs=wmi/DQM.html

Online DQ


  • OHP: The Online Histogram Presenter allows for histogram visualization. It displays a set of pre-defined monitoring histograms from the IS server. It is also based on xml configuration files, which reside at P1 and can be directly edited there.
  • DQMF: The Data Quality Monitoring Framework is the online framework for data quality assessment. It analyzes histograms through user-defined algorithms and relays the summary of this analysis in the form of DQ flags. Results are visualized with the DQMD (Data Quality Monitoring Display). The framework is based on xml configuration files that are part of the OKS database.

Prescription for OHP

For an on-going run, the OHP histograms can be checked via the web: https://atlasop.cern.ch/tdaq/web_is/ohp/Trigger.html

  1. You need to get an account on the Point 1 machines: request the roles (TRG:MON:expert, TRG:remote, TRG:shifter) through the web interface and enable them. Note: TRG:MON:expert is only necessary if you are a developer.
  2. Connect to the Point 1 machine: ssh -XY username@atlasgw.cern.ch (if you are at CERN), or log in to lxplus first and then type ssh -XY atlasgw.cern.ch (if you are remote). The password is the same as for your lxplus account.
  3. You may be asked for a remote access check; fill it in as shown in the attached screenshot: ohp_p1_access.png
  4. Type pc-atlas-pub when asked for a Hostname, and simply press Enter for STEP 2 and STEP 3.
  5. Setup the tdaq release (currently tdaq-05-05-00): source /det/tdaq/scripts/setup_TDAQ_tdaq-05-05-00.sh
  6. The trigger OHP configuration is located at: cd /atlas/moncfg/tdaq-05-05-00/trigger/ohp/
  7. Start the histogram presenter window: ohp -c atlas_trigger.ohp_current.xml &
  8. To modify the configuration file for the muon slice, it is recommended to test the xml file in a local directory before committing it to the online running configuration. Start by copying all the configuration files into a local directory:
    • cd
    • cp -r /atlas/moncfg/tdaq-05-05-00/trigger/ohp/ myohp
    • cd myohp
    • edit the file signatures/MuonSlice.ohp.xml and make the changes you need
    • check whether the changes work correctly: ohp -c atlas_trigger.ohp_current.xml &
    • once everything works, commit the muon-slice changes to the online configuration: cp signatures/MuonSlice.ohp.xml /atlas/moncfg/tdaq-05-05-00/trigger/ohp/signatures/MuonSlice.ohp.xml

Prescription for DQMF

For an on-going run, the online DQMF histograms can be checked via the web: https://atlasop.cern.ch/operRef.php?subs=wmi/DQM.html (go to ATLAS > ATLAS > TriggerSystems > HLT)

  1. You need an account on the Point 1 machine; follow the same procedure as for OHP to get it.
  2. Connect to the Point 1 machine, following the same procedure as for OHP.
  3. Setup the tdaq release (currently tdaq-05-05-00): source /det/tdaq/scripts/setup_TDAQ_tdaq-05-05-00.sh
  4. Start the histogram presenter window: dqm_display -p ATLAS &
  5. Developers should follow the procedure described in this page for a test partition. Note: the current testbed does not work well for individual tests; we can only test up to the OKS check (i.e. oks_data_editor daq/segments/DQM/DQM.HLT.xml). If the check shows no warning or error, send the new xml file to Joana Machado Miguéns <jmiguens@cern.ch> and ask her to commit it to the central partition.
  6. The latest configuration xml file for the muon slice can be viewed on the web: https://atlasop.cern.ch/cvs/viewvc.cgi/tdaq-05-05-00/daq/segments/DQM/Signatures/MuonSlice/DQM.MuonSlice.xml?view=log
  7. Currently, three dqm_algorithms are applied in the muon slice: KolmogorovTest_MaxDistPlusNorm, Bins_GreaterThanNonZeroMedian_Threshold, and Histogram_Not_Empty. Online and offline DQMF share the same dqm_algorithms, as listed on the web.
  1. In the source code folder (src), there is only a file named after the base class of a DQ algorithm; for the KolmogorovTest_MaxDistPlusNorm algorithm the base class is KolmogorovTest. In the header folder (dqm_algorithms) there are several instances inheriting from the base class, e.g. KolmogorovTest_MaxDist.h, KolmogorovTest_MaxDistPlusNorm.h, KolmogorovTest_Norm.h, KolmogorovTest_Prob.h. Each of these instances is used as an independent dqm algorithm.
  2. As explained in the source code of KolmogorovTest.cxx, the KolmogorovTest_MaxDistPlusNorm algorithm compares two histograms, both in normalization and in the maximum discrepancy between them. The algorithm has a single parameter, "MaxDist". A larger value of "MaxDist" means worse agreement, while a smaller value means better agreement. Thus, when defining the red and green thresholds for this parameter, the green threshold must be smaller than the red threshold.
  3. An example of KolmogorovTest_MaxDistPlusNorm usage in the DQMF configuration can be seen at lines 82 ~ 112 of the muon configuration file. Since this algorithm checks the agreement between two histograms, a reference is needed.
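The threshold logic described above can be sketched as a small helper: the flag is Green below the green threshold, Red above the red threshold, and Yellow in between. The threshold values used here are invented for illustration and are not taken from the muon configuration file:

```shell
#!/bin/sh
# Map a KolmogorovTest_MaxDistPlusNorm result ("MaxDist") to a DQ flag.
# Smaller MaxDist = better agreement, so the green threshold sits below red.
dq_flag() {
    # $1 = MaxDist value, $2 = green threshold, $3 = red threshold
    awk -v d="$1" -v g="$2" -v r="$3" \
        'BEGIN { print ((d < g) ? "Green" : (d < r) ? "Yellow" : "Red") }'
}

dq_flag 0.02 0.05 0.15   # below the green threshold -> Green
dq_flag 0.10 0.05 0.15   # between the thresholds    -> Yellow
dq_flag 0.30 0.05 0.15   # above the red threshold   -> Red
```

This only illustrates the flag assignment; the actual KS distance is computed by the framework from the monitored and reference histograms.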

Make reference file for OHP and online DQMF

  1. Log on to lxplus
  2. Setup root
  3. Copy the following scripts (makeDQM.C, makeDQM.sh, makeHLTRef.C) to your directory from /afs/cern.ch/user/l/lyuan/public/DQmonitoring/DQonlineReference/Run2_reference
  4. Download the online histograms for a run from EOS. You can list all the runs with: eos ls /eos/atlas/atlascerngroupdisk/tdaq-mon/coca/2015/Histogramming-HLT. The command for downloading a file from EOS is: xrdcp root://eosatlas.cern.ch//eos/atlas/atlascerngroupdisk/tdaq-mon/coca/2015/Histogramming-HLT/r0000279932_lEoR_ATLAS_MDA-HistogrammingHLT_HistogrammingHLT.root ./ (or PATH_WHEREVER_YOU_LIKE)
  5. Modify the line CMDLS="/tmp/lyuan/r0000279169_lEoR_ATLAS_MDA-HistogrammingHLT_HistogrammingHLT.root" in makeDQM.sh to point at the file you downloaded; give the full path to the file.
  6. Type sh makeDQM.sh. A new reference file MuonSliceReference_phys00xxxxxx.root will be produced.
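Step 5 can be scripted rather than edited by hand. The sketch below assumes the CMDLS line has the single-line quoted form shown above; it operates on a stand-in copy of makeDQM.sh so the real script is untouched, and the run number is just an example:

```shell
#!/bin/sh
# Stand-in for the one line of makeDQM.sh we need to change:
printf 'CMDLS="/tmp/lyuan/r0000279169_lEoR_ATLAS_MDA-HistogrammingHLT_HistogrammingHLT.root"\n' \
    > makeDQM.demo.sh

# Full path of the histogram file downloaded with xrdcp (example run number):
HISTFILE=/tmp/$USER/r0000279932_lEoR_ATLAS_MDA-HistogrammingHLT_HistogrammingHLT.root

# Replace the whole CMDLS=... line, keeping the original quoting:
sed -i "s|^CMDLS=.*|CMDLS=\"$HISTFILE\"|" makeDQM.demo.sh
cat makeDQM.demo.sh
```

In real use, run only the sed command on the actual makeDQM.sh.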

Change reference file for OHP and online DQMF

  1. Copy the reference file to the directory /atlas/moncfg/tdaq-05-05-00/trigger/dqm/Ref_Histo/ on the Point 1 machine. Note: you can only copy it to atlasgw from your lxplus account (i.e. scp MuonSliceReference_phys00279169.root lyuan@atlasgw.cern.ch:/atlas/moncfg/tdaq-05-05-00/trigger/dqm/Ref_Histo/ run on lxplus). It does NOT work the other way around (scp lyuan@lxplus.cern.ch:~/MuonSliceReference_phys00279169.root /atlas/moncfg/tdaq-05-05-00/trigger/dqm/Ref_Histo/ run on the Point 1 machine).
  2. Change the reference file:
    • remove the current link: rm -rf /atlas/moncfg/tdaq-05-05-00/trigger/ohp/references/Muon_RefOHP.root
    • create the new link: ln -s /atlas/moncfg/tdaq-05-05-00/trigger/dqm/Ref_Histo/MuonSliceReference_phys00279169.root /atlas/moncfg/tdaq-05-05-00/trigger/ohp/references/Muon_RefOHP.root
    • Currently OHP and online DQMF share the same reference file.
    • Note: it is recommended to change the reference while no physics run is being taken, to avoid problems.
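The link swap in step 2 is a plain rm + ln -s. The sketch below reproduces it on scratch paths created with mktemp so it can be tried anywhere; at Point 1 you would use the /atlas/moncfg paths quoted above instead:

```shell
#!/bin/sh
# Swap a reference symlink to a new file (scratch paths for illustration).
REFDIR=$(mktemp -d)
touch "$REFDIR/old_reference.root"
touch "$REFDIR/MuonSliceReference_phys00279169.root"   # the new reference
LINK=$REFDIR/Muon_RefOHP.root

ln -s "$REFDIR/old_reference.root" "$LINK"             # pre-existing link

rm -f "$LINK"                                          # remove the current link
ln -s "$REFDIR/MuonSliceReference_phys00279169.root" "$LINK"   # create the new link

readlink "$LINK"                                       # now points at the new file
```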

Offline DQ

Introduction to offline DQ

Offline DQ accesses the data at the ESD level, i.e. after the HLT trigger selection and full reconstruction. The data quality for the trigger is assessed offline. The online data quality monitoring only guides the experts in debugging the trigger system as quickly as possible; it has no effect on the data quality assessment. Runs are signed off using the ES (express stream) histograms, which are rapidly processed and available a few hours after a run finishes. Only muons additionally sign off the BULK (physics stream) reprocessing.

Work on three packages

  • TrigMuonMonitoring (produce the histograms)
  • DataQualityUtils (a.k.a. post-processor; merges histograms for different luminosity blocks)
  • DataQualityConfigurations (a.k.a. han-config; publishes selected histograms in the tier0 webdisplay)


environment setup

  • setupATLAS
  • localSetupDQ2Client
  • voms-proxy-init -voms atlas
  • go to your workarea: asetup AtlasProduction,opt,gcc48,here
check out the package and compile
  • cmt co Trigger/TrigMonitoring/TrigMuonMonitoring (checks out the head trunk version)
  • svn co svn+ssh://svn.cern.ch/reps/atlasoff/Trigger/TrigMonitoring/TrigMuonMonitoring/tags/TrigMuonMonitoring-00-02-18 Trigger/TrigMonitoring/TrigMuonMonitoring (checks out the specific tag you want)
  • cd Trigger/TrigMonitoring/TrigMuonMonitoring/cmt
  • cmt config
  • make
run the package
  • put the following commands into a run script, e.g. run_job.sh. The outputDQMonitorFile name can be changed to whatever you like.
    ESDtoESD_trf.py \
    'root://eosatlas//eos/atlas/atlastier0/rucio/data15_13TeV/physics_Main/00267638/data15_13TeV.00267638.physics_Main.recon.ESD.f598/data15_13TeV.00267638.physics_Main.recon.ESD.f598._lb0652._SFO-2._0001.1','root://eosatlas//eos/atlas/atlastier0/rucio/data15_13TeV/physics_Main/00267638/data15_13TeV.00267638.physics_Main.recon.ESD.f598/data15_13TeV.00267638.physics_Main.recon.ESD.f598._lb0652._SFO-2._0002.1' \
    outputAODFile=myAODMC12.pool.root \
    outputDQMonitorFile=Monitor_newtestlb121651_6521.root \
    'DQMonFlags.doCaloMon.set_Value_and_Lock(False)','DQMonFlags.doTileMon.set_Value_and_Lock(False)','DQMonFlags.doLArMon.set_Value_and_Lock(False)','DQMonFlags.doJetMon.set_Value_and_Lock(False)',\
    'DQMonFlags.doPixelMon.set_Value_and_Lock(False)','DQMonFlags.doSCTMon.set_Value_and_Lock(False)','DQMonFlags.doTRTMon.set_Value_and_Lock(False)','DQMonFlags.doInDetPerfMon.set_Value_and_Lock(False)',\
    'DQMonFlags.doMissingEtMon.set_Value_and_Lock(False)','DQMonFlags.doMuonCombinedMon.set_Value_and_Lock(False)','DQMonFlags.doTauMon.set_Value_and_Lock(False)','DQMonFlags.doJetTagMon.set_Value_and_Lock(False)',\
    'DQMonFlags.doHLTMon.set_Value_and_Lock(True)','DQMonFlags.doEgammaMon.set_Value_and_Lock(False)','DQMonFlags.doLucidMon.set_Value_and_Lock(False)',\
    'DQMonFlags.doMuonRawMon.set_Value_and_Lock(False)'

  • Then simply type source run_job.sh and the job will run. Be patient while it finishes: a job with 100 events takes about 10 minutes.



environment setup

  • setupATLAS
  • go to your workarea: asetup AtlasProduction,opt,gcc48,here
check out the package and compile
  • cmt co DataQuality/DataQualityUtils (checks out the head trunk version)
  • cd DataQuality/DataQualityUtils/cmt
  • cmt config
  • make
run the package
  • DQHistogramMerge.py inputfilelist.txt outfile.root True
  • inputfilelist.txt lists all the root files you want to merge. An example is shown below:
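The example list appears to have been lost from this page. The format is simply one ROOT file per line; the file names below are hypothetical placeholders:

```shell
#!/bin/sh
# Build an input list for DQHistogramMerge.py: one ROOT file per line.
# File names are hypothetical placeholders.
cat > inputfilelist.txt <<'EOF'
Monitor_lb0652_0001.root
Monitor_lb0652_0002.root
Monitor_lb0653_0001.root
EOF
cat inputfilelist.txt

# Then, in the Athena environment:
#   DQHistogramMerge.py inputfilelist.txt outfile.root True
```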







environment setup

  • setupATLAS
  • go to your workarea: asetup AtlasProduction,opt,gcc48,here
check out the packages and compile
  • cmt co DataQuality/dqm_algorithms (checks out the head trunk version)
  • cd DataQuality/dqm_algorithms/cmt
  • cmt config
  • make
  • go back to your workarea: cmt co DataQuality/DataQualityConfigurations (checks out the head trunk version)
  • cd DataQuality/DataQualityConfigurations/cmt
  • cmt config
  • make
run the package
  • To make life easier, we can publish just the HLT/Muon histograms in the test webdisplay.

  • cd ../config

  • cp HLT/HLTmuon/collisions_run.config ./

  • han-config-gen.exe collisions_run.config (make sure this step shows no warning or error message; otherwise the webdisplay will not work)

  • DQWebDisplay data15_13TeV.00279932.express_express.merge.HIST.x353_h79._0001.1 TestDisplay 1111 (Note: the file name should contain the run number, stream type, tag, and version number. The iteration number 1111 can be changed to any number you like.)
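The file-name requirement in the last step can be sanity-checked mechanically. The regular expression below is my own loose approximation of the project.runNumber.stream.step.HIST.tag._version.attempt pattern, not an official specification:

```shell
#!/bin/sh
# Rough check that a HIST file name carries a run number, stream, tag and
# version, as DQWebDisplay expects. The pattern is an approximation.
check_hist_name() {
    echo "$1" | grep -Eq \
        '^[A-Za-z0-9_]+\.[0-9]{8}\.[A-Za-z_]+\.[A-Za-z]+\.HIST\.[A-Za-z0-9_]+\._[0-9]{4}\.[0-9]+$'
}

if check_hist_name data15_13TeV.00279932.express_express.merge.HIST.x353_h79._0001.1; then
    echo "name looks valid"
fi
```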

Check the test webdisplay

Request a new tag of TrigMuonMonitoring

  • perform the q-tests (q220, q221, q222, q431) as described on the web
  • once the tests finish successfully, send an email to the tag collection list (atlas-trig-relcoord@cern.ch) including the following:
     Here is the information for the new tag:
    1) TrigMuonMonitoring-00-02-18
    2) cacheable (yes)
    3) motivation: tier0 data monitoring. This tag fixes muonType bug in CommonMon.cxx, adds chain mu20_msonly for supporting MSonly triggers
    4) passed the tests of q220, q221, q222, q431
    5) svn diff: https://svnweb.cern.ch/trac/atlasoff/changeset?reponame=&new=695097%40Trigger%2FTrigMonitoring%2FTrigMuonMonitoring%2Ftags%2FTrigMuonMonitoring-00-02-18&old=690913%40Trigger%2FTrigMonitoring%2FTrigMuonMonitoring%2Ftrunk

Git for Release 21 and later




Release 21

  • mkdir testarea
  • cd testarea
  • mkdir run source build
  • cd source
  • asetup --cmtconfig=x86_64-slc6-gcc49-opt AtlasOffline,21.0.13,here
  • svnco -t TrigMuonMonitoring (edit src)
  • cd ../build/
  • cmake ../source/
  • make
  • source x86_64-slc6-gcc49-opt/setup.sh
  • cd ../run/
  • Reco_tf.py --AMI=q431 >& log.q431 &


Menu Aware Monitoring

Unfortunately, there is no specific twiki page discussing this. Some knowledge can be gained by reading the following talks:

One summary talk from Elin in Dec 2014: https://indico.cern.ch/event/353161/session/1/contribution/10/attachments/699697/960690/elin_TrigMon_meeting_141210.pdf

Another talk by Christos in May 2015: https://indico.cern.ch/event/396070/contribution/2/attachments/793648/1087870/HLT_Monitoring_29MAY15.pdf

Yet another talk by Ben in July 2015: https://indico.cern.ch/event/403367/contribution/7/attachments/1124368/1604676/MaM_Ben_Smart_10_7_15.pdf

2015 p-p collision offline DQ setup

| *primary trigger* | *supporting triggers* | *comment* |
| HLT_mu24_imedium | HLT_mu6_idperf or HLT_mu20_idperf | Events passing the supporting triggers are used |
| HLT_mu50 | HLT_mu6_idperf or HLT_mu20_idperf | Events passing the supporting triggers are used |
| HLT_mu60_0eta105_msonly | HLT_mu6_idperf or HLT_mu20_idperf | Events passing the supporting triggers are used |
| HLT_mu18_mu8noL1 | HLT_mu18 or HLT_mu24_imedium | Events passing the supporting triggers are used, with two offline muons required, one of which must match any of the supporting triggers |

The rates assigned to each supporting trigger in the express stream can be found at https://twiki.cern.ch/twiki/bin/viewauth/Atlas/ExpressStream

2015 Heavy ion offline DQ setup

| *primary triggers* | *supporting triggers* | *comment* |
| HLT_mu10 | HLT_noalg_L1MU4 or HLT_noalg_L1MU6 | Events passing the supporting triggers are used |
| HLT_mu14 | HLT_noalg_L1MU4 or HLT_noalg_L1MU6 | Events passing the supporting triggers are used |
| | HLT_noalg_L1MU4 or HLT_noalg_L1MU6 or | Events passing the supporting triggers are used |
| HLT_mu4_mu4noL1 | HLT_mu4 or HLT_mu10 | Events passing the supporting triggers are used, with two offline muons required, one of which must match any of the supporting triggers |

The rates assigned to each primary and supporting trigger in the express stream:

HLT_mu4_mu4noL1: 0.3Hz
HLT_mu10: 0.4Hz

HLT_mu4: 0.4Hz
HLT_noalg_L1MU4: 0.3Hz
HLT_noalg_L1MU6: 0.3Hz
HLT_noalg_L1MU11: 0.3Hz

Container name of each step of trigger algorithms

L2MuonSA: HLT_xAOD__L2StandAloneMuonContainer_MuonL2SAInfo (L2StandAloneMuonContainer)

MuComb: HLT_xAOD__L2CombinedMuonContainer_MuonL2CBInfo (L2CombinedMuonContainer)

EF: HLT_xAOD__MuonContainer_MuonEFInfo (MuonContainer)

Indet: HLT_xAOD__TrackParticleContainer_InDetTrigTrackingxAODCnv_Muon_FTF (TrackParticleContainer)

Note: when retrieving these containers you may see the compiler warning 'StatusCode StoreGateSvc::retrieve(const DataHandle<DATA>&, const DataHandle<DATA>&) [with T = TileMuFeatureContainer] is deprecated [-Wdeprecated-declarations]'. It can be suppressed with: #pragma GCC diagnostic ignored "-Wdeprecated-declarations"

Trigger Test Link

In the future, ESD will be phased out. The ESDtoESD command no longer works, so you should use AOD to test your modifications.


Topic revision: r14 - 2017-05-25 - YeChen