---+!! 2019 L1 Trigger Hands-on at HCAL Days at LPC

%TOC{title="Contents:"}%

| *Newsbox* |
| This page is for the [[https://indico.cern.ch/event/830764/timetable/#b-341095-hands-on-l1-framework][2019 L1 Trigger Hands-on at HCAL Days at LPC]]. It is designed to run in CMSSW 92X (Section 1) and 102X (Section 2). |
| We assume attendees are familiar with basic C++ and Python and have completed the [[SWGuideCMSDataAnalysisSchoolPreExerciseFirstSet][first set of CMSDAS pre-exercises]], including setting up computing accounts for *both LPC and LXPLUS*. The [[%SCRIPTURL{"view"}%auth/CMS/WorkBookExercisesCMSDataAnalysisSchool#PreExercises2018][CMSDAS pre-exercises]] may also prove useful. |

---++ Introduction

Requirements:
   * CERN lxplus account: https://uscms.org/uscms_at_work/physics/computing/getstarted/index.shtml#GetCERNAcct
   * Fermilab cmslpc account: http://www.uscms.org/uscms_at_work/computing/getstarted/getaccount_fermilab.shtml
   * CMS personal grid certificate: https://uscms.org/uscms_at_work/physics/computing/getstarted/get_grid_cert.shtml

---+++ Accompanying slides

The slides shown during the HATS can be viewed on the [[https://indico.cern.ch/event/830764/sessions/316341/attachments/1908852/3155337/2019_L1_Trigger_Hands_on_at_HCAL_days_at_LPC.pdf][Indico page]].

---++ Section 1: Level 1 (L1) Trigger Exercises

---+++ Exercise 1.1: Build your customized L1 menu

Build your customized menu with the Trigger Menu Editor (TME).
   * You will need to download it to your local machine (Mac or Linux) or to your ~/nobackup area on cmslpc. If setting it up in your nobackup area, use the Linux binary tarball for SLC 6/Red Hat 6, 64-bit, and be sure to log in to cmslpc-sl6.
   * TME official website: http://globaltrigger.hephy.at/upgrade/tme/
   * Download TME: http://globaltrigger.hephy.at/upgrade/tme/downloads
      * Linux binary tarball for SLC 6/Red Hat 6, 64-bit: [[http://globaltrigger.hephy.at/tarball/tm-editor-0.9.0-1-slc6-x86_64.tar.gz][tm-editor-0.9.0-1-slc6-x86_64.tar.gz]]
   * Move the tarball to your desired location, then open a terminal and cd to that directory. To untar and run the editor, simply do:
%CODE{ lang="bash"}%
tar -xzvf tm-editor-0.9.0-1-slc6-x86_64.tar.gz
cd tm-editor-0.9.0-1-slc6-x86_64
./tm-editor
%ENDCODE%
   * If running on cmslpc-sl6, you need to have run cmsenv to set up Python: see the start of the "Ingredient 1: menu" section below and run those commands through cmsenv first.
   * If running locally and you encounter an error about missing modules, open the README in the same directory as tm-editor and yum install the required dependencies.

Let's start by modifying an official L1 menu.
   * 2018 official L1 menu XML files: https://github.com/cms-l1-dpg/L1Menu2018/tree/master/official/XMLs
   * Download the latest one ([[https://github.com/cms-l1-dpg/L1Menu2018/blob/master/official/XMLs/L1Menu_Collisions2018_v2_1_0.xml][L1Menu_Collisions2018_v2_1_0.xml]]) with the following command:
%CODE{ lang="bash"}%
wget https://raw.githubusercontent.com/cms-l1-dpg/L1Menu2018/master/official/XMLs/L1Menu_Collisions2018_v2_1_0.xml
%ENDCODE%
   * Relaunch TME and open the file.
   * Go to Menu --> Algorithms/seeds.
   * Here you will see the list of seeds, organized by bit index (a command-line way to skim the same list is sketched just after this list).
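If you would like a quick look at the seed list from the command line as well, a simple text scrape of the XML works. This is only a rough sketch, not a proper XML parse: it assumes the file name downloaded above and that the seed names appear inside <name> tags.
%CODE{ lang="bash"}%
# Rough list of the algorithm (seed) names defined in the menu XML
grep -o '<name>L1_[A-Za-z0-9_]*</name>' L1Menu_Collisions2018_v2_1_0.xml | sed -e 's/<[^>]*>//g' | sort -u | head -20
%ENDCODE%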
We will be looking at an example VBF seed: L1_DoubleJet_110_35_DoubleJet35_Mass_Min620 (bit 357).
   * Goal: study the rate of this seed that comes from HF.
   * Approach: apply an |eta| < 3.0 cut to the jets in this seed.

   1 Go to Menu --> Cuts. We want to make a new eta cut with the range [-3, 3].
   1 Copy an existing jet eta cut, e.g. JET-ETA_2p52. This opens a window that lets you edit the name and properties of the copy.
   1 Change the eta range of the copy to [-3, 3] and rename it JET-ETA_3p0.
   1 Go back to Menu --> Algorithms/seeds.
   1 Copy the example seed L1_DoubleJet_110_35_DoubleJet35_Mass_Min620 (bit 357).
   1 Rename the copy L1_DoubleJet_110_35_er3p0_DoubleJet35er3p0_Mass_Min620 and assign it an empty bit (e.g. bit 362). You can click on "Select index" to see a list of available indices (green = available, red = in use, blue = current), or use the up/down arrows to move to the next available index.
   1 Now add the JET-ETA_3p0 cut to each of the jets. To see an example, look at the expression listed for L1_DoubleJet30er2p5_Mass_Min360_dEta_Max1p5 (bit 353). Then go back to your new seed, edit it, and modify the expression on the left.

%TWISTY{mode="div" showlink="Show solution:" hidelink="Hide solution" remember="on" showimgleft="%ICONURLPATH{toggleopen-small}%" hideimgleft="%ICONURLPATH{toggleclose-small}%"}%
Replace the expression with the following:
(comb{JET110[JET-ETA_3p0], JET35[JET-ETA_3p0]} AND mass_inv{JET35[JET-ETA_3p0], JET35[JET-ETA_3p0]}[MASS_MIN_620])
%ENDTWISTY%

Finish editing your seed, then File --> Save and quit TME.

---+++ Exercise 1.2: Rate of your customized menu

There are 4 ingredients required to calculate the rate of your customized menu:
   1. Menu
   1. Ntuple
   1. Lumi Section (LS) information table
   1. Prescale (PS) table

---++++!! Ingredient 1: menu

First we need to convert the menu XML file to a .cc file.
   * Log in to cmslpc if you haven't already:
%CODE{ lang="bash"}%
ssh -XY your_name@cmslpc-sl6.fnal.gov
source /cvmfs/cms.cern.ch/cmsset_default.csh
cmsrel CMSSW_9_2_15
cd CMSSW_9_2_15/src
cmsenv
git clone --branch develop https://gitlab.cern.ch/cms-l1t-utm/scripts.git
source /uscms/home/hats/nobackup/Trigger2017/UTM/utm-setup.csh
cd scripts
# Copy your menu XML file (the one you saved in Exercise 1.1) here.
# Edit it and change the grammar version on line 8 from 0.7 to 0.6,
# because the UTM version at LPC is older (but works well):
#   <grammar_version>0.7</grammar_version>  -->  <grammar_version>0.6</grammar_version>
python menu2lib.py --menu L1Menu_Collisions2018_v2_1_0.xml
%ENDCODE%
   * After a successful execution, you will find menulib.hh and menulib.cc in the same directory.
   * Check out the [[https://github.com/cms-l1-dpg/L1Menu][cms-l1-dpg/L1Menu]] package:
%CODE{ lang="bash"}%
cd $CMSSW_BASE/src
git clone --branch 2019-HATs https://github.com/cms-l1-dpg/L1Menu.git L1TriggerDPG/L1Menu
%ENDCODE%
   * Copy your customized menulib.hh/cc files into the L1Menu/macros directory and compile:
%CODE{ lang="bash"}%
cd L1TriggerDPG/L1Menu/macros
cp $CMSSW_BASE/src/scripts/menulib* .
make -j 4
%ENDCODE%
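As an optional sanity check, you can confirm that your customized seed made it into the generated menu library. This is only a sketch: it assumes you ran menu2lib.py on the XML you saved in Exercise 1.1 and uses the seed name from that exercise.
%CODE{ lang="bash"}%
cd $CMSSW_BASE/src/L1TriggerDPG/L1Menu/macros
# If the seed is in the menu you converted, its name should appear in the generated files
grep -n "L1_DoubleJet_110_35_er3p0_DoubleJet35er3p0_Mass_Min620" menulib.hh menulib.cc
%ENDCODE%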
---++++!! Ingredient 2: ntuple

   * L1Ntuples are produced by TEA shifters weekly and stored on [[CMS.EOS][EOS]]:
      * /eos/cms/store/group/dpg_trigger/comm_trigger/L1Trigger/TEAshiftNtuples/
   * The nanoDST ntuple is recommended for rate studies because of its large statistics.
   * You need to use the ZeroBias ntuple if you want to run the emulation.
   * Fills 7118 and 7131 are two 2018 fills that are often used in trigger studies because of their high PU and long run time.
   * nanoDST ntuples copied to LPC:
      * /eos/uscms/store/group/lpctrig/comm_trigger/L1Trigger/TEAshiftNtuples/

We will be using a small ntuple for today's tutorial, already included in the branch of the git repository you checked out: macros/ntuple/fill_7118_nanoDST_shifter_test.list

---++++!! Ingredient 3: Lumi Section (LS) information table

   * This is important because the 2018 ntuples are produced without a JSON file; we need to pick our preferred JSON file to avoid bad LS.
      * 2018 DCS-only JSON file: https://cms-service-dqm.web.cern.ch/cms-service-dqm/CAF/certification/Collisions18/13TeV/DCSOnly/json_DCSONLY.txt
   * The LS vs PU information is also stored in this table; we will use it later for the rate vs PU plots.
   * LS information is provided by the CMS Beam Radiation Instrumentation and Luminosity (BRIL) project (see also [[BrilCalc]]): https://cms-service-lumi.web.cern.ch/cms-service-lumi/brilwsdoc.html

Please use the LS information table provided in macros/menu/run_lumi.csv; the luminosity database appears to be down at the moment (brilcalc fails with sqlalchemy.exc errors).

---++++!! Ingredient 4: prescale (PS) table

   * The 2018 official PS tables are in https://github.com/cms-l1-dpg/L1Menu2018/tree/master/official/PrescaleTables
   * The PS table is kept in .xlsx format for better presentation, but it is often easier to work with tab-separated .txt or .csv files. The provided file menu/Prescale_2018_v2_1_0_Col_2.0.txt is a copy of the latest PS table, column 2.0e34.
   * Add your customized seed to the PS table with your desired PS value.

---++++!! Run the rate locally

   * You can run ./testMenu2016 --help to see all arguments.
   * Some useful arguments:
      * -u : LS information table you just made
      * -m : PS table you just customized
      * -l : ntuple list you just made
      * -o : name of the output files
      * -b : number of bunches; this is usually 2544 for 2018 data
      * -n : max number of events (default is the whole ntuple)
      * --UseUnpackTree : use the UnpackTree (default is the EmuTree)
      * --SelectRun : select a run number if your ntuple list contains multiple runs (default is the whole ntuple list)
      * --SelectLS : select a range of LS (default is all LS). You can look up the LS information table for help.
%CODE{ lang="bash"}%
cd $CMSSW_BASE/src/L1TriggerDPG/L1Menu/macros
./testMenu2016 -u menu/run_lumi.csv -m menu/Prescale_2018_v2_1_0_Col_2.0.txt -l ntuple/fill_7118_nanoDST_shifter_test.list -b 2544 -n 50000 --UseUnpackTree
%ENDCODE%

---++++ Batch jobs for rate vs PU plots

   * Add the argument --doPrintPU (an example command is sketched at the end of this section).
   * Remove the arguments --SelectRun and --SelectLS, because we need to cover the full PU range.
   * You can use batch/SubmitLPC.py or SubmitLSF.py to split the jobs.
   * Batch job output files for today's tutorial: /eos/uscms/store/user/huiwang/L1Menu2017/Sep07fill_7118_and_7131_nanoDST_Prescale_2018_v2_1_0_Col_2.0_HATS
   * Edit plot/CompPUDep.py and run it!
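Putting the changes above together, a single run that also prints the PU information could look like the following. This is only a sketch based on the local command above; the output name my_rate_vs_PU is just an example.
%CODE{ lang="bash"}%
cd $CMSSW_BASE/src/L1TriggerDPG/L1Menu/macros
# Same inputs as the local test, but with --doPrintPU, without --SelectRun/--SelectLS,
# and without the -n limit, so the full ntuple (and PU range) is covered
./testMenu2016 -u menu/run_lumi.csv -m menu/Prescale_2018_v2_1_0_Col_2.0.txt \
  -l ntuple/fill_7118_nanoDST_shifter_test.list -b 2544 --doPrintPU --UseUnpackTree \
  -o my_rate_vs_PU
%ENDCODE%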
---++ Section 2: HCAL calibration/conditions updates and impact on L1 / rates validation

---+++ LUT generation and validation (Rhys)

For the LUTs exercise, please see LUTsAtHCALdays2019.

---+++ HCAL conditions impact on L1 rates (Georgia)

These scripts use the L1Ntuple framework, which should be set up as described here: [[CMSPublic.SWGuideL1TStage2Instructions#Environment_Setup_with_Integrati][L1 environment setup]]. They assume that you run the scripts from lxplus, though they are easily modified to run on cmslpc; in this exercise we are running on lxplus.

After setting up the L1Ntuple environment, issue the following:
%CODE{ lang="bash"}%
cd CMSSW_10_2_1/src
git clone git@github.com:cms-hcal-trigger/Validation.git HcalTrigger/Validation
cd HcalTrigger/Validation
%ENDCODE%
and compile with "scram b".

Before starting, you also need to set up the CRAB environment:
%CODE{ lang="bash"}%
source /cvmfs/cms.cern.ch/crab3/crab.csh
cmsenv
voms-proxy-init --voms cms --valid 168:00
%ENDCODE%

The script that submits CRAB jobs is called submit_jobs.py. Its required arguments are a good-run lumimask, a dataset name, the new HcalL1TriggerObjects tag, and the storage site for the output. Here we choose:
   * Release: CMSSW_10_2_1
   * Dataset: /ZeroBias/Run2018D-v1/RAW
   * Run 325170: create a file lumimask_325170.json containing the single line {"325170": [[1, 300]]} (a way to create it from the shell is sketched just after this list)
   * Reference HLT GT: 101X_dataRun2_HLT_v8
   * New tag for HcalL1TriggerObjectsRcd: HcalL1TriggerObjects_2018_v16.0_data
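One way to create the one-line lumimask file from the shell (just a convenience sketch; writing it with any text editor works equally well):
%CODE{ lang="bash"}%
# Lumimask selecting run 325170, lumi sections 1-300
cat > lumimask_325170.json << 'EOF'
{"325170": [[1, 300]]}
EOF
%ENDCODE%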
<div style="padding-left: 30px;"> %CODE{ lang="bash"}% ./scripts/submit_hist_jobs.py -d root://eoscms.cern.ch//eos/cms/store/user/georgia/ZeroBias/Hcal302472_def/170908_192827/0000/ -n root://eoscms.cern.ch//eos/cms/store/user/georgia/ZeroBias/Hcal302472_new_cond/170908_192908/0000/ %ENDCODE%</div> For this exerice, we will simply run the rates histogram locally: <div style="padding-left: 30px;"> %CODE{ lang="bash"}% rates.exe def def_dir/ rates.exe new new_cond_dir/ %ENDCODE%</div> This will create histogram files rates_def.root and rates_new_cond.root with the rates for the default and new conditions separately. Then execute: <div style="padding-left: 30px;"> %CODE{ lang="bash"}% mkdir plots/ draw_rates.exe %ENDCODE%</div> to draw the rate histograms. ---+++ -- Main.HuiWang - 2019-09-11