Goals of conditions-based Monte Carlo Production

LHC beam conditions have been changing rapidly since March 2010, and conditions are also variable for many subdetectors. The framework for conditions-based MC production, nicknamed RunDMC, is meant to:

  • Do pileup re-weighting for all analyses (in analysis, simply apply the same good-run list filter to data and MC).
  • Do luminosity-weighted subdetector efficiency corrections (properly correlated to pileup and other subdetector efficiency effects), for inefficiencies from transient dead channels.
  • Prepare for zero-bias overlay...
The combined effects of varying pile-up and varying detector acceptance can be modeled in Monte Carlo, making efficiency corrections simpler for analysis.
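The good-run-list recipe in the first bullet can be sketched in a few lines. This is an illustrative stand-in, not the ATLAS GoodRunsLists tools, and the run ranges are invented:

```python
# Illustrative sketch: with run-dependent MC, the same (run, lumiblock)
# selection can filter both data and simulation. Not the real GRL tools.
def load_grl(xml_ranges):
    """xml_ranges: {run: [(lb_start, lb_end), ...]} as parsed from a GRL XML file."""
    return xml_ranges

def passes_grl(grl, run, lumiblock):
    """True if (run, lumiblock) falls inside any good range for that run."""
    return any(lo <= lumiblock <= hi for lo, hi in grl.get(run, []))

grl = load_grl({165591: [(1, 120), (130, 400)]})   # hypothetical good ranges
print(passes_grl(grl, 165591, 125))  # in a gap between ranges -> False
print(passes_grl(grl, 165591, 200))  # inside a good range -> True
```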

What (and where) are the RunDMC datasets?

A run-dependent MC dataset contains events with "real" run and luminosity block numbers stored in the EventID. In a complete dataset, the number of events in any luminosity block is proportional to the integrated luminosity in that block.
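The proportionality rule can be sketched in a few lines of Python; the lumiblock numbers and luminosities here are invented for illustration:

```python
# Sketch of the proportionality rule above: allocate a fixed total of MC
# events across lumiblocks in proportion to each block's integrated
# luminosity. All numbers are invented.
def events_per_lumiblock(total_events, lumi_by_lb):
    total_lumi = sum(lumi_by_lb.values())
    return {lb: round(total_events * lumi / total_lumi)
            for lb, lumi in lumi_by_lb.items()}

alloc = events_per_lumiblock(1000, {10: 2.0, 11: 3.0, 12: 5.0})
print(alloc)  # {10: 200, 11: 300, 12: 500}
```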

The run-dependent MC dataset may:

  • combine events with different pileup parameters (numberOfCollisions, numberOfCavern, numberOfBeamGas, numberOfBeamHalo...). These parameters are determined by the assigned run and luminosity block number from the online luminosity database.
    • Example: truth-level in-time pileup event multiplicity in RunDMC validation sample, run 165591:
      Run165591ND.png
  • combine events with different sets of disabled channels in various subdetectors. A complete list of the possible subdetectors with run-dependent conditions is listed below.
By default, both pileup and disabled detector conditions are turned on.

Currently available RunDMC:

| Simulation run number | Run Period | Dataset name | Size | Status | Comments |
| 105200 | G1 | valid1.105200.T1_McAtNlo_Jimmy.digit.RDO.e603_s932_d363 | 50k? | validating | |

RunDMC inputs and assumptions

Luminosity and average pileup per lumiblock.

The integrated luminosity of each luminosity block, and the mean number of interactions per crossing, are obtained from the COOL luminosity database (COOLONL_TRIGGER/COMP200, COOLOFL_TRIGGER/COMP200).
  • The offline database is needed for the best estimate of the instantaneous luminosity and the number of events per bunch crossing; it uses tag OflLumi-7TeV-002 by default.
  • The online database must be used for the lumiblock numbers and L1 accepts, which are used to calculate the livetime fraction during a lumiblock. The livetime is calculated using an L1 trigger: by default L1_MBTS_1, with the livetime defined as (events after prescale, masks, and busy vetoes)/(events after prescale). This should work well as long as:
    • The L1 trigger used is not masked
    • The number of events after the prescale is large enough that numerical errors do not distort the livetime.
  • There is a tiny fraction of luminosity blocks with negative instantaneous luminosity in the offline database. These are ignored.
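The livetime estimate described in the bullets above can be sketched directly; the trigger counts here are invented for illustration:

```python
# Sketch of the livetime estimate: the fraction of a lumiblock during which
# the detector was actually recording, estimated from L1 counts for a
# reference trigger (L1_MBTS_1 by default). Counts are invented.
def livetime_fraction(events_after_veto, events_after_prescale):
    """(events after prescale, masks, and busy vetoes) / (events after prescale)."""
    if events_after_prescale <= 0:
        # too few events after prescale: the ratio would be meaningless
        raise ValueError("cannot estimate livetime without accepted events")
    return events_after_veto / events_after_prescale

print(livetime_fraction(9800, 10000))  # 0.98
```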

Configuration files.

  • Latest (RunDependentSimData-00-00-05)

Conditions.

  • Conditions DB folders for digitization:
  • Conditions tag for reconstruction: TBA (expect OFLCOND-SDR-BS7T-04-IOVDEP)
    • /TILE/OFL02/STATUS/ADC//TileOfl02StatusAdc-REP-Sept2010-03
    • /LAR/BadChannelsOfl/MissingFEBs//LARBadChannelsOflMissingFEBs-REPP-02
    • /LAR/BadChannelsOfl/BadChannels//LARBadChannelsOflBadChannels-REPP-05
    • /TRT/Cond/StatusPermanent//TrtStrawStatusPermCol-02
    • /TRT/Cond/Status//TrtStrawStatusTemporaryEmpty-BLK-UPD4-00-00
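In Athena jobOptions, folder-level tags like those above are typically applied as IOVDbSvc overrides. The fragment below is a hedged sketch of that pattern, not the exact contents of the RunDMC includes:

```python
# Hedged sketch: applying the per-folder conditions tags listed above as
# IOVDbSvc overrides in jobOptions. The exact mechanism used by the RunDMC
# includes may differ.
from IOVDbSvc.CondDB import conddb

conddb.addOverride('/TILE/OFL02/STATUS/ADC', 'TileOfl02StatusAdc-REP-Sept2010-03')
conddb.addOverride('/LAR/BadChannelsOfl/MissingFEBs', 'LARBadChannelsOflMissingFEBs-REPP-02')
```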

Making RunDMC

RunDMC tasks are controlled by configuration files specifying, in a list, for each run/lumiblock pair:

  • the number of desired events per lumiblock,
  • the timestamp at the beginning of the lumiblock,
  • the pileup parameters of that lumiblock, and
  • a (currently unused) flag for controlling the allocation of lumiblocks to production jobs.
This information is used by the digitization job to assign run/lumiblock numbers and to add pileup. All other conditions applied to the job are accessed via the IOVDbSvc.
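As a concrete, deliberately simplified illustration, such a configuration file might look like the following jobOptions fragment. The field names are invented for this sketch and are not the authoritative RunDependentSimData schema:

```python
# Hypothetical RunDMC configuration fragment: one dictionary per
# run/lumiblock pair, carrying the quantities listed above. Field names
# are illustrative only.
JobMaker = [
    {'run': 165591, 'lb': 12, 'starttstamp': 1284633600,  # LB start time (s)
     'evts': 250,           # desired events in this lumiblock
     'mu': 6.1,             # pileup parameter for this lumiblock
     'force_new': False},   # (currently unused) job-allocation flag
    {'run': 165591, 'lb': 13, 'starttstamp': 1284633720,
     'evts': 300, 'mu': 6.0, 'force_new': False},
]
print(sum(job['evts'] for job in JobMaker))  # 550 events in the task
```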

Tools for creating these configuration files, and for examining them, are in the ATLAS offline release: Simulation/RunDependentSim/RunDependentSimComps. Official configuration files can be found in Simulation/RunDependentSim/RunDependentSimData.

Creating a RunDMC dataset

RunDMC dataset creation begins with digitization. To create a RunDMC dataset, one provides a configuration file, as described above. This configuration takes the form of python jobOptions; examples can be found in Simulation/RunDependentSim/RunDependentSimData. All of the files were created by the RunDepTaskMaker tool, described below.

Using the RunDepTaskMaker tool (release 16.0.1.6 and later)

  1. First, obtain a good run list corresponding to the luminosity blocks you want to model with the dataset. One simple way to do this is with the online ATLAS run query: select the run periods you wish to simulate, and then use the selection "ready and dq lumi g". The good run list can be downloaded from the xml link at the bottom of the page.
  2. Next, determine the number of events you want in the output dataset (probably the size of the HITS dataset you will use for input, available from the AMI browser.)
  3. Run the tool:
    • If your good run list is called MyLBCollection.xml, you can create a configuration file for a production task processing 1,000,000 HITS events with the command
  RunDepTaskMaker.py --nMC 1000000 --outfile MyRunDMCConfiguration.py MyLBCollection.xml
  4. The output file, MyRunDMCConfiguration.py, can be passed to the digitization job as described below.

Using the PrintFirstJobForRun tool (Release 16.0.2.1 and later)

Since a configuration file can assign a different number of events to each lumiblock, the number of lumiblocks processed by each job in a task can vary. To make it easier to determine which output file in a dataset contains events from a given run, the tool PrintFirstJobForRun will process a configuration file and determine, based on the number of events per job, which job first processes events from that run. It will also return a list of the runs and luminosity blocks simulated by that job.

The configuration file specifies the total number of events, but not the total number of jobs (or the equivalent, which is the number of events processed per job). This is decided when a task is requested in the production system. Hence, the number of events per job should be specified when using PrintFirstJobForRun:

  PrintFirstJobForRun.py --nev 50 167607 MyRunDMCConfiguration.py
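The lookup the tool performs can be sketched as follows, with an invented two-run configuration (the real tool reads the jobOptions configuration file directly):

```python
# Sketch of the PrintFirstJobForRun logic: walk the configuration in order,
# accumulate events, and report which job (at a fixed events-per-job) first
# touches the requested run. Entries are invented for illustration.
def first_job_for_run(config, target_run, events_per_job):
    processed = 0
    for entry in config:  # entries ordered as in the configuration file
        if entry['run'] == target_run:
            return processed // events_per_job  # jobs numbered from 0
        processed += entry['evts']
    return None  # run not present in this configuration

config = [{'run': 167576, 'evts': 120}, {'run': 167607, 'evts': 80}]
print(first_job_for_run(config, 167607, 50))  # 120 // 50 -> job 2
```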

Using the Digitization job transform options (release 16.0.0 and later)

You can then run digitization jobs using Digi_trf.py with the job transform option preInclude=MyRunDMCConfiguration.py,RunDependentSimData/EnableConditions.py to override the run and luminosity block numbers. You must also set the jobNumber parameter, since the configuration file specifies an entire production system "task," not a single job. For example:
Digi_trf.py maxEvents=50 inputHitsFile=HITS.pool.root outputRDOFile=RDO.pool.root \
  conditionsTag='OFLCOND-SIM-BS14T-00' \
  'NDMinbiasHitsFile=HITS.00000._000[075,079,104,106].pool.root.1' \
  jobNumber=0 \
  preInclude=SimuJobTransforms/Lumi010DigitConfig.py,RunDependentSimData/configLumi_PeriodD_50k.py,RunDependentSimData/EnableConditions.py

The last two options are unique to RunDMC production.

Using RunDMC datasets

Getting old (truth) event info (16.0.3 and later).

The event number from the simulation is not changed in creating a RunDMC event. The run number is changed: the old run number (the Monte Carlo dataset ID) will be stored in EventInfo->event_type()->mc_channel_number(). (This is not available in 16.0.2.1. For now, the run number can be found for a given dataset by the dataset name, or the provenance tool in AMI.)

Reweighting events in RunDMC datasets

RunDMC datasets are currently made without applying prescales. For analysis using unprescaled triggers, this is ideal. For analysis using a prescaled trigger, it would be useful to have the event weight that should be applied to events in each luminosity block. (In progress).
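A minimal sketch of what such a weight could look like, assuming a per-lumiblock prescale table; the values are invented, and the actual prescription is still being worked out:

```python
# Hedged sketch of prescale reweighting for RunDMC (made without prescales):
# an event assigned to a lumiblock where the trigger ran with prescale p
# would carry weight 1/p. The prescale table here is invented.
def event_weight(prescales, run, lumiblock):
    return 1.0 / prescales[(run, lumiblock)]

prescales = {(165591, 12): 4.0}  # hypothetical prescale for that block
print(event_weight(prescales, 165591, 12))  # 0.25
```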

Status (what's run/lumiblock dependent?)

| 16.0.2.1 | System | Conditions in digitization | Conditions in reconstruction | Folders | Comments (relevant to 16.0.2.1) |
| Cancel | Beam | none | none | | would require special simulation vs. run |
| DONE | Pileup | yes | | /TRIGGER/OFLLUMI/LBLESTOFL | numberOfCollisions/Cavern; numberOfBeamGas/Halo are fixed by default (for now) |
| Cancel | Trigger | none | none | | |
| DONE | Pixels | none | disabled modules | /TDAQ/PIXEL/EnabledResources/Modules | Could be enabled for digitization instead |
| Cancel | SCT | none | none | | Not requested |
| DONE | TRT | none | dead wires | /TRT/Cond/StatusPermanent, /TRT/Cond/Status | Dead and temporarily dead wires |
| DONE | LAr | bad channels/FEBs | | /LAR/BadChannelsOfl/MissingFEBs, /LAR/BadChannelsOfl/BadChannels | |
| DONE | Tile | none | disabled channels | /TILE/OFL02/STATUS/ADC | Note: small inconsistencies due to cabling maps in MC are being ignored |
| No | MDT | dead stations | | /MDT/DCS/DROPPEDCH, /MDT/DCS/PSLVCHSTATE | ready for next cache |
| No | RPC | disabled | | /RPC/DQMF/ELEMENT_STATUS | Not yet implemented... |

Validation

Pileup validation

ID conditions validation

Calorimeter noise validation

Muon conditions validation

-- AyanaHollowayArce - 25-Oct-2010

Topic attachments
| Attachment | Size | Date | Who | Comment |
| Run165591ND.png | 16.5 K | 2010-11-14 | AyanaHollowayArce | In-time, non-diffractive pileup events (truth) in RunDMC validation sample, run 165591 |
Topic revision: r7 - 2010-11-16 - AyanaHollowayArce
 