This document is a work in progress. Its goal is to provide a clear, well-linked introduction to the Atlas Simulation framework.

The Atlas Work Book provides a straightforward introduction to the topics covered here, for users who are interested only in a working example of running athena. The work book sections are divided into Generation, Simulation, and Digitization, all of which fall under the Atlas Simulation umbrella. There is a separate section on using ATLFAST to run simulation.

For issues with simulation, one useful starting point is the Simulation HN Thread.

A detailed introduction to generation job options

Event generation in athena is done in a separate step to ensure the reliability of the underlying physics events being simulated and to isolate the simulation, as much as possible, from variations in hard-process modeling, initial- and final-state radiation models, multiple interactions and beam remnants, hadronization and decay models, parton distribution functions, and the interplay of these effects. Generation is run until all particles are "stable," as defined by the generator options themselves. The definition of stability depends on the experiment: at Belle or BaBar, for example, B-mesons are treated as unstable, but at the LHC they must be considered stable by the generator, since they might fly through the first interacting layer of the detector. Simple filtering algorithms are provided at the athena level.
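A generator-level filter simply accepts or rejects whole events based on the stable particles they contain. The following is a minimal, hypothetical sketch of that logic in plain Python (it is not the athena filter API; the particle dictionaries and thresholds are illustrative):

```python
import math

def is_stable(particle):
    """HepMC-like convention: status 1 means a stable final-state particle."""
    return particle["status"] == 1

def passes_filter(event, pt_min=10.0, eta_max=2.7):
    """Accept the event if any stable particle has pT > pt_min (GeV)
    and |eta| < eta_max -- the logic of a simple single-object filter."""
    for p in event:
        if not is_stable(p):
            continue
        px, py, pz = p["px"], p["py"], p["pz"]
        pt = math.hypot(px, py)
        if pt < pt_min:
            continue
        # pseudorapidity from the polar angle
        theta = math.atan2(pt, pz)
        eta = -math.log(math.tan(theta / 2.0))
        if abs(eta) < eta_max:
            return True
    return False

# Toy event: one stable particle at pT = 50 GeV, central
event = [{"status": 1, "px": 50.0, "py": 0.0, "pz": 10.0}]
print(passes_filter(event))  # True
```

The real athena filters run at the end of generation, before the event is written out, so that rejected events never reach simulation.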

All the event generators available in athena are described in detail here. After a generator is run, its output is converted into the common HepMC format, and a container of these events is created and filled. The container can be accessed through StoreGate or written out to a POOL file. Each generator is run through a thin athena interface that calls common generator code; the common code is supplied by external authors and is maintained either by the LCG Genser project or by Atlas members in an external area.

Below are the workbook job options, as an example.

The first piece is the common setup options for generation which load common code and services that include particle properties.

# General Application Configuration options
include( "AthenaCommon/Atlas_Gen.UnixStandardJob.py" )

include( "PartPropSvc/PartPropSvc.py" )

Next, dynamically loaded libraries ("dlls") are added to the application. Athena loads libraries dynamically so that only what is required for the desired output is read into memory. Here "TruthExamples" is a library with code for storing particle truth, and "Pythia_i" is the athena interface to Pythia. The algorithms that the application is to execute are then added. One could read any number of libraries into memory without executing their contents, but best practice is to load exactly what is needed and execute it. "DumpMC" is an algorithm that prints a large amount of human-readable information about each generated event to stdout. Options beyond Pythia are described further here.

# Private Application Configuration options
theApp.Dlls  += [ "TruthExamples", "Pythia_i" ]
theApp.TopAlg = ["Pythia","DumpMC"]

The following piece is a common athena jobOptions fragment for altering the default output verbosity of all methods called during simulation. The DEBUG output is often not comprehensible to a non-expert.

# ------------------------------------------------------------
# Set output level threshold (2=DEBUG, 3=INFO, 4=WARNING, 5=ERROR, 6=FATAL )
# ------------------------------------------------------------
MessageSvc = Service( "MessageSvc" )
MessageSvc.OutputLevel               = 3

The next piece sets common athena flags. In this case, the number of events to be generated is set to 10.

# Event related parameters
# Number of events to be processed (default is 10)
theApp.EvtMax = 10

The next set of options defines the random number seeds to be used by generation. These must be tracked (and are kept with the job) to ensure reproducibility of the events. Here the access is to the "Athena Random Number Generation Service," an external service.

# Algorithms Private Options
theApp.ExtSvc += ["AtRndmGenSvc"]
AtRndmGenSvc = Service( "AtRndmGenSvc" )
AtRndmGenSvc.Seeds = ["PYTHIA 4789899 989240512", "PYTHIA_INIT 820021 2347532"]

Next the Pythia algorithm is constructed (remember that a few lines earlier we added "Pythia" to the algorithms to be executed - now we must create the algorithm!). Each generator provided in athena has a method for accessing the interface of the generator itself. In this case, PythiaCommand is the list of Pythia setup options. These commands determine the decay modes and generation modes available to the interacting particles.

# Generate Z->ee
Pythia = Algorithm( "Pythia" )
Pythia.PythiaCommand = ["pysubs msel 0","pysubs msub 1 1",
                        "pypars mstp 43 2","pydat3 mdme 174 1 0",
                        "pydat3 mdme 175 1 0","pydat3 mdme 176 1 0",
                        "pydat3 mdme 177 1 0","pydat3 mdme 178 1 0",
                        "pydat3 mdme 179 1 0","pydat3 mdme 180 1 0",
                        "pydat3 mdme 181 1 0","pydat3 mdme 182 1 1",
                        "pydat3 mdme 183 1 0","pydat3 mdme 184 1 0",
                        "pydat3 mdme 185 1 0","pydat3 mdme 186 1 0",
                        "pydat3 mdme 187 1 0"]
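The mdme block above enables only channel 182 (the electron pair decay) and switches off the other Z decay channels, 174 through 187. The same list can be built programmatically; a small sketch (the channel numbers are copied from the block above, and the helper function is illustrative, not part of the athena interface):

```python
def z_decay_commands(open_channel=182, channels=range(174, 188)):
    """Build Pythia mdme commands that switch off all listed Z decay
    channels except `open_channel` (last digit: 1 = on, 0 = off)."""
    return ["pydat3 mdme %d 1 %d" % (ch, 1 if ch == open_channel else 0)
            for ch in channels]

# The same three setup commands as in the block above
cmds = ["pysubs msel 0", "pysubs msub 1 1", "pypars mstp 43 2"]
cmds += z_decay_commands()
# Pythia.PythiaCommand = cmds   # hands the list to the athena Pythia algorithm
print(cmds[3])   # pydat3 mdme 174 1 0
print(cmds[11])  # pydat3 mdme 182 1 1
```

Building the list this way makes it harder to miss a channel when switching the open decay mode.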

Finally, the generated events must be stored in a file that can then be read by the simulation. This storage is unnecessary if the generation options are read directly by the simulation (see below). Here the event information and Monte Carlo event collection are written out into a POOL file, readable by ROOT.

# Pool Persistency
include( "AthenaPoolCnvSvc/WriteAthenaPool_jobOptions.py" )
theApp.Dlls   += [ "GeneratorObjectsAthenaPoolPoolCnv" ]
Stream1 = Algorithm( "Stream1" )
Stream1.ItemList += [ 'EventInfo#*', 'McEventCollection#*' ]

Stream1.OutputFile = "pythia.pool.root"

A detailed introduction to simulation job options

The simulation itself is run using a jobOptions file called jobOptions.G4Atlas_Sim.py. A great deal can be learned by examining these top level job options alone with the proper documentation. The jobOptions are broken up here so that we might examine them more carefully.

The first piece is a standard doxygen header.

# Job options file for Geant4 Simulations
# Atlas simulation 
__version__="$Revision: 1.1 $"

Next comes the selection of subdetectors to be simulated. The lines below turn on each of the main subdetector groups. Within each set are subsets that can individually be turned on and off. The inner detector includes Pixel, SCT, TRT, and bpipe (for "beampipe") flags that can be set on and off. The calorimeter includes LAr and Tile flags. These options may be modified by the Layout selected (see below). Included in this section is also a flag for turning on the storage of the particle truth. There are several strategies defined in the G4TruthStrategies package, implementing the recommendations of the Monte Carlo Truth Task Force.

#--- Detector flags -------------------------------------------
from AthenaCommon.DetFlags import DetFlags
# - Select detectors (a hedged completion; flag names per AthenaCommon.DetFlags)
DetFlags.ID_setOn()
DetFlags.Calo_setOn()
DetFlags.Muon_setOn()
DetFlags.simulate.Truth_setOn()

The next section defines flags that are common to all athena jobs. The PoolEvgenInput flag defines the event generation input file to be read in, if one wishes to do so. The PoolHitsOutput file defines the file in which to store all hits. The EvtMax flag defines how many events will be simulated. Other possible flags are documented within common flags.

#--- AthenaCommon flags ---------------------------------------
from AthenaCommon.AthenaCommonFlags import athenaCommonFlags
# Hedged example values; the file names here are placeholders
athenaCommonFlags.PoolEvgenInput = ['pythia.pool.root']
athenaCommonFlags.PoolHitsOutput = 'atlas_hits.pool.root'
athenaCommonFlags.EvtMax = 3

The next section includes simulation-specific flags, all of which are documented in Simulation Flags. Several of these are rather clear. The EventFilters are described further in the G4AtlasApps documentation, and include filters based on the initial pseudorapidity and phi of the primary particle (default acceptance is +/-6 in eta, 0 to 2 pi in phi), a vertex range checker for discarding non-central primary vertices, a vertex rotator for altering primary vertices being read in, and a vertex positioner for changing the starting locations of the vertices being read in. The magnetic field can be turned on or off (default is on). The fast G4 simulation can be enabled at one of three levels (0=off), as described in fast G4 simulation documentation. The Simulation Layout must also be selected. This layout affects many different pieces of the simulation initialization, including the geometry to be constructed (e.g. with material distortions, misaligned, older versions) and the truth to be stored. All the various possibilities can be found here and suggested values in the workbook, with further explanation of the geometry versions in DB tags documentation.

#--- Simulation flags -----------------------------------------
from G4AtlasApps.SimFlags import SimFlags
# Look into SimFlags.SimLayout for other possible values 
#SimFlags.SimLayout='ATLAS-CSC-02-00-00' # specific value 
SimFlags.SimLayout.set_On()              # uses the default value 

#  sets the EtaPhi, VertexSpread and VertexRange checks on
#  sets LArParametrization 
#  No magnetic field 
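For illustration, the flags described in the comments above might be set as follows. This is a hedged sketch: the flag names follow the G4AtlasApps SimFlags conventions of this era and should be checked against the Simulation Flags documentation before use.

```python
# Hedged sketch of common SimFlags settings (names per the comments above)
SimFlags.EventFilter.set_On()        # EtaPhi, VertexSpread and VertexRange checks
SimFlags.MagneticField.set_Off()     # default is on; switch off for special tests
SimFlags.LArParameterization = 2     # fast G4 simulation level (0 = off)
```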

The next jobOptions affect only the SingleParticle generator, and in this case the particles it can generate. The generator is useful for creating simple, single-particle events. Orders can be sent to it using the final piece of code; any of the orders described in the linked documentation will work here.

# - uses single particle generator
#   set energy constant to 10 GeV

# set your own particle generator orders here.
# do this for example if you need generation at fixed pt
# PDG code will be set following user instructions in the SimFlags.ParticlePDG
# SimFlags.Energy will be ignored if you uncomment the following lines

#SimFlags.ParticleGeneratorOrders={'vertX:' : ' constant 1.0','vertY:' :' constant 1.0',
#          'vertZ:' : ' constant 0.0','t:' :' constant 0.0',
#          'eta:' : ' flat -3.0 3.0', 'phi:' : ' flat  0 6.28318',
#          'pt:' : ' constant 50000'}

If events that have already been generated are to be read in, one must set KinematicsMode='ReadGeneratedEvents'. There is an additional option to provide event generation job options (as described above) to be run at the beginning of each event during simulation.

# - reads events already generated
#   (a hedged completion; the flag name follows the text above)
SimFlags.KinematicsMode = 'ReadGeneratedEvents'
# (the input file name is athenaCommonFlags.PoolEvgenInput)
# - uses a given generator

As in the generation jobOptions, the following fragment alters the default output verbosity of all methods called during simulation; here the threshold is raised to WARNING, since the DEBUG output is often not comprehensible to a non-expert.

#---  Output printout level ----------------------------------- 
#output threshold (2=DEBUG, 3=INFO, 4=WARNING, 5=ERROR, 6=FATAL)
#you can override this for individual modules if necessary
MessageSvc = Service( "MessageSvc" )
MessageSvc.OutputLevel = 4

The final piece of code gets the simulation kernel in preparation for the beginning of simulation.

# Job configuration
# ***>> Do not add flags or simulation options below this line
from G4AtlasApps import SimKernel

If one wishes to modify the simulation in a non-trivial way, it may be desirable to first initialize the simulation to the default level or some lower level (set using SimFlags.init_level(1) ). At level zero, only externals and a few top level python interfaces have been loaded. At level one, the Physics and Detector Facilities are initialized. At level two, Geant4 is initialized. At level three, any fast simulation models, physics regions, sensitive detectors, truth, fields, and recording envelopes are initialized.

# enter interactive mode 
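A hedged sketch of such an interactive session, using the SimFlags.init_level call mentioned above (everything beyond that call is illustrative and would need to be adapted to the modification at hand):

```python
# Stop automatic initialization at level 1 (Physics and Detector Facilities)
SimFlags.init_level(1)
from G4AtlasApps import SimKernel
# ... non-trivial modifications to the detector facilities go here ...
# then finish initialization and run:
# theApp.initialize()
# theApp.nextEvent(theApp.EvtMax)
```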

Two rarely used options (omitted here) provide detailed information about the stepping of particles through Geant4. The information can be useful for debugging, but it can create rather large output files and slow down the simulation, so it should be used with care.


The final few lines start the event loop, run through all the events, and then request a clean exit from athena.

# start run after the interactive mode 
#theApp.nextEvent( theApp.EvtMax )

#--- End jobOptions.G4Atlas_Sim.py file  ------------------------------

Digitization and Pile Up

Everything you ever wanted to know about digitization is nicely documented on the digitization page.

Pile up can include both multiple interactions per bunch crossing and multiple bunch crossings overlaid in time. At the moment, all pile up and cavern background are handled as part of the digitization routines.

An introduction to functionality

Now that you know what job options look like and how to modify them, it is useful to know how these options are then translated and run in athena.

As in the rest of athena, simulation is run with a python interface guiding object-oriented C++ algorithms. The main steering is done in the G4AtlasApps package, and other libraries are loaded as-needed during the run. The Atlas simulation package is based on the Geant4 toolkit, an open source particle physics simulation maintained by a world-wide collaboration and led by a development team from CERN. Atlas-specific packages provide tailor-made handling of geometry, kinematics, materials, physics, fields, sensitive detectors, Monte Carlo truth, run-specific issues, visualization, and so on.

The input to the simulation can be an event generation file, a script for running generation on the fly, or commands to the single particle generator, as described above. Before being accepted into the simulation, the particles can be filtered by the optional Event Filter already described. The output of simulation is a HITS file which stores information about the energy deposits in the detector, in some cases merged together to save space, and the stored particle truth. The hits can then be digitized to produce a Raw Data Object (RDO) file, which roughly contains the response of the detector (e.g. voltages, currents) to these hits.

The RDO can optionally be converted into byte stream data, during which process the truth information is stripped away to emulate as closely as possible real data that will come out of the detector. The reconstruction software either uses the byte stream information directly or includes a decoding method that translates the byte stream data back into RDO format.

Before beginning simulation with Geant4, the Atlas geometry is translated from the common GeoModel framework into a form that Geant can use via the Geo2G4 package. This allows the one-time implementation of new detector geometries in GeoModel which can then be made available to the entire chain of Atlas software, from simulation through reconstruction and analysis.

Geant4 simulation employs a particle stack for sequential simulation of individual particles. Each particle is picked up from the stack and moved, one step at a time, through the detector. No single step may cross a volume boundary, so a step may be ended either by a physics process (e.g. bremsstrahlung or decay) or for a geometric reason (a volume boundary). At the end of each step, Geant4 calculates an energy deposition for that step, adds any secondaries produced to the stack of particles to be moved, and adjusts the current particle's trajectory appropriately.

For certain processes, including bremsstrahlung and ionization, range cuts determine whether a secondary particle is produced or its energy is instead deposited along the original particle's track. By default in Atlas, any secondary that could travel more than 1 mm is produced everywhere, and any secondary that could travel more than 30 um is produced in the calorimetry; a few other volumes in the inner detector and muon system have specially defined range cuts. When moving a charged particle, Geant4 employs a stepper that calculates its next location from the values and derivatives of the magnetic field. Tolerances are set on the errors and accumulated biases of these steppers to ensure that the simulated particle does not stray considerably from its ideal path.
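The range-cut decision can be illustrated in isolation: a secondary is created only if its expected range in the current region exceeds that region's cut; otherwise its energy is deposited locally along the parent track. A toy sketch (this is not Geant4 API; only the two quoted default cut values come from the text above, and the region names are made up):

```python
# Production range cuts (mm) per detector region, as quoted in the text:
# 1 mm everywhere by default, 0.03 mm (30 um) in the calorimetry.
RANGE_CUTS = {"default": 1.0, "calorimeter": 0.03}

def handle_secondary(expected_range_mm, region):
    """Return 'produce' if the secondary would travel farther than the
    region's range cut, else 'deposit' its energy on the parent track."""
    cut = RANGE_CUTS.get(region, RANGE_CUTS["default"])
    return "produce" if expected_range_mm > cut else "deposit"

print(handle_secondary(0.5, "calorimeter"))     # produce (0.5 mm > 0.03 mm)
print(handle_secondary(0.5, "inner_detector"))  # deposit (0.5 mm < 1 mm)
```

Tighter cuts in the calorimetry mean more low-energy secondaries are tracked there, which improves shower detail at a CPU cost; the looser cut elsewhere keeps the simulation affordable.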

Whenever a particle deposits energy in the detector, the energy can go into dead material, in which case only calibration hits store the associated energy, or into a sensitive detector. Each Atlas subdetector has its own sensitive detector associated with its sensitive volumes, which allows energy deposits to be altered (including merging, or early collection in the case of the liquid argon calorimetry) before the deposit is stored as a hit. These hits are then recorded and later transformed (via digitization) into an RDO, as described above.
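Hit merging of the kind mentioned for the liquid argon calorimetry can be sketched as summing deposits that land in the same cell within a short time window. The following is a toy illustration of that idea, not the LArG4 code; the cell identifiers, times, and window are invented:

```python
def merge_hits(deposits, time_window=2.5):
    """Merge energy deposits (cell_id, time_ns, energy) that fall in the
    same cell within `time_window` ns of an existing merged hit."""
    out = []  # merged hits as [cell_id, time_ns, energy]
    for cell, t, e in deposits:
        for hit in out:
            if hit[0] == cell and abs(hit[1] - t) < time_window:
                hit[2] += e   # same cell, close in time: merge the energy
                break
        else:
            out.append([cell, t, e])  # otherwise start a new hit
    return out

hits = merge_hits([(7, 0.0, 1.0), (7, 1.0, 0.5), (7, 10.0, 2.0), (8, 0.0, 3.0)])
print(hits)  # [[7, 0.0, 1.5], [7, 10.0, 2.0], [8, 0.0, 3.0]]
```

Merging like this is what keeps the HITS file size manageable in the highest-occupancy detectors.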

At each break in the simulation process, the user is provided with handles to stop or insert actions into the simulation. In Atlas, Monte Carlo truth storing algorithms are handled at each step to make sure any interesting secondaries or decays are saved into the truth tree. More information on truth storage is in the report of the Monte Carlo Truth Task Force.

Currently the Simulation Optimization Task Force is looking into optimizing the parameters of Geant4 simulation, for both CPU and physics performance. The Simulation Strategy group is examining the simulation requirements for each physics analysis and providing recommendations for the flavor of simulation (full G4, fast G4, ATLFAST, ATLFAST-II) to be run for various samples. Agenda pages for both groups are here. In order to speed up the simulation, an EM shower parameterization and an EM shower library can now be used inside standard G4 simulation. More documentation is available on the fast G4 simulation working group page.

Forward (Luminosity) Detectors

The simulation of data for ALFA, LUCID, and ZDC is an open problem at the moment. More details will be made available soon.


ATLFAST is a fast detector simulation for Atlas. Full documentation can be found on its documentation page. It takes as input generated events and outputs reconstructed objects, consuming about 1 s per event. ATLFAST-II is a somewhat slower (roughly 5x) but more accurate version of ATLFAST. Detailed explanations of what is done by ATLFAST can be found on the procedure webpages.

The ATLFAST validation group provides information about the progress of validating ATLFAST in the latest Athena releases. There is also a production webpage.


Detailed explanations of running job transforms in athena (grid-like jobs) can be found on the Python Transforms Twiki page. For simulation, the Simu Job Transforms package contains all the relevant scripts.

Atlas Geant4 Code

Python control of the Geant4 simulation for ATLAS is done almost exclusively by the G4AtlasApps project. This project interacts through G4AtlasControl with the underlying FADS infrastructure. Dictionaries are constructed that contain all the necessary pieces: code to be loaded, options to be used, geometry to be constructed, and so on. Various packages exist solely to wrap python options set by the user into a format accessible to C++ code later (e.g. LArG4RunControl). At the user's request, the simulation is then initialized to one of four levels:

Level zero: only externals and a few top-level python interfaces have been loaded.
Level one: the Physics and Detector Facilities are initialized.
Level two: Geant4 is initialized.
Level three: any fast simulation models, physics regions, sensitive detectors, truth, fields, and recording envelopes are initialized.

The initialization is done in this specific order to ensure that no piece of code is constructed that requires the existence of an uninitialized service. At this point, control of the simulation is handed over to the services in the event loop.

Each subdetector provides code for its sensitive (hit) detectors, in packages including:

The fast simulation is added to the load dictionaries via the G4FastSimulation package. It then runs from the LArG4FastSimulation package.

More details will be provided here as time allows.


Validation of athena software takes many forms, and is certainly an ongoing process.

Since 2001 a combined test beam (CTB) validation program has been underway, and its efforts are still continuing today. The CTB meetings' agendas (agenda 1 and agenda 2) and working group page provide considerably more information about the status of this work.

Most release-to-release validation is done in the context of either the physics validation group or the software validation group (with concentrations appropriate to the titles). Their meeting agendas can be found here.

Computing performance validation is also done release-by-release and for certain special studies (new physics lists, alterations to Geant4 cuts), in both memory and CPU time. The main validation page provides more information about these studies.

Nightly validation is also done in Real Time Tests and the Full Chain Test. The current nightly status can be found on the Simulation Validation webpage.

Each software release is run on the LHC Computing Grid, and bug reports are filed in Savannah under both simulation and validation.

Further documentation

The following papers and websites contain a great deal of the information above:

Simulation Papers

Group Pages

Major updates:
-- ZacharyMarshall - 27 Nov 2007

