Frequently Asked Questions



About input files

I want to use a private generation fragment

Note: this topic is not FastSim specific.

When cmsDriver is configured to include the GEN step, its first argument defines the GEN content of the events. This argument points to a configuration file (a "genfragment") that defines an object named generator. The cases below cover the typical situations.

Case 1: The genfragment of your choice is available in the CMSSW release of your choice

  • Check whether your genfragment is indeed available: you can do that e.g. on github. For e.g. CMSSW_7_4_0_pre5 you find all genfragments listed here. Adapt the link if you are interested in another release.
  • If it is not available, move to Case 2
  • If it is available, provide the filename of the genfragment to cmsDriver. E.g., say you'd like to use a fragment that is part of Configuration/Generator/python/ in CMSSW_7_4_0_pre5: <MORE OPTIONS>

Case 2: The genfragment of your choice is not available in the CMSSW release of your choice

Possibility 1: add the genfragment of your choice to Configuration/Generator inside your local area, e.g.:

# setup a local area
cmsrel CMSSW_7_4_0_pre5
cd CMSSW_7_4_0_pre5/src
# add the Configuration/Generator package to your local area
git cms-addpkg Configuration/Generator
# for the sake of this example: add a random genfragment to your local Configuration/Generator/python 
curl -o Configuration/Generator/python/
# compile
scram b
# provide the filename of the genfragment as first argument to cmsDriver. If you like, you can strip the extension from the filename. <MORE OPTIONS>

Possibility 2: add the genfragment of your choice to a package of your choice

# setup cmssw
cmsrel CMSSW_7_4_0_pre5
cd CMSSW_7_4_0_pre5/src
# create a package
mkdir MyPackage
cd MyPackage
mkedanlzr MyGun
# add a gen-fragment to your package and compile
curl -o MyGun/python/
scram b
# specify the path relative to $CMSSW_BASE/src MyPackage/MyGun/python/ <MORE OPTIONS>

I want to read generated events from edm files

Use e.g. a command like: fastsim -s SIM,RECO --fast --filein=file:edmFile.root --pileup=NoPileUp --geometry DB --conditions auto:startup_GRun --eventcontent=FEVTDEBUGHLT --datatier GEN-SIM-DIGI-RECO -n 10

If you use a release older than CMSSW_7_2_0_pre5, BE VERY CAREFUL: make sure you read in gen-level events with the primary vertex at (0,0,0,0). If it is not, you will over-smear your gen-level primary vertex (it will be moved to (0,0,0,0) + beamspot + beamspot + fluctuations).

When producing the input edm files, avoid smearing of the gen-level primary vertex by disabling the module process.VertexSmearing.
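A minimal sketch of that, written as a cmsDriver customise function (the sequence name pgen is an assumption here; check the actual sequence and module names in your release):

```python
import FWCore.ParameterSet.Config as cms

def disableVertexSmearing(process):
    # drop the vertex smearing module from the generation sequence,
    # so the gen-level primary vertex stays at (0,0,0,0)
    if hasattr(process, "pgen") and hasattr(process, "VertexSmearing"):
        process.pgen.remove(process.VertexSmearing)
    return process
```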

I want to use LHE events as input

See SWGuideFastSimulation, section "Relevant cmsDriver options", options "--filein" and "--filetype"

About non-standard settings

I want to split my jobs in two steps

This is possible since 6_1_0_pre6.
You may want to split into SIM and RECO, as in typical FullSim jobs, or you can use simExtended and recoHighLevel, as motivated in slides 10-12 of this talk.
Instructions here

I want to simulate events with a fixed vertex at (0,0,0) and no smearing

This requires, first, switching off the smearing of the vertex and, second, setting the beam spot to (0,0,0). More information can be found here.

I want no tracker simulation

If you just want to save time by not performing the simulation of the response to charged particles and the execution of the tracking emulation, you can just set
process.famosSimHits.SimulateTracking = False
Beware: the tracker is still there, and it affects all the particles that pass through it!
You can make the tracker "transparent" by switching off all the interactions in it (link). In short:
process.famosSimHits.MaterialEffects.PairProduction = False
process.famosSimHits.MaterialEffects.Bremsstrahlung = False
process.famosSimHits.MaterialEffects.EnergyLoss = False
process.famosSimHits.MaterialEffects.MultipleScattering = False
process.famosSimHits.MaterialEffects.NuclearInteraction = False
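Equivalently, the five flags above can be cleared in a loop in your configuration file (a sketch; same parameter names as listed above):

```python
# switch off all interactions in the tracker material, making it "transparent"
for effect in ["PairProduction", "Bremsstrahlung", "EnergyLoss",
               "MultipleScattering", "NuclearInteraction"]:
    setattr(process.famosSimHits.MaterialEffects, effect, False)
```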

Is it possible to set a 900 GeV centre-of-mass energy ?

Yes, it is possible: it should be set in the generator fragment:
comEnergy = cms.untracked.double(900.)
at the same level as the pythiaHepMCVerbosity setting, i.e., outside the PythiaParameters PSet. Finally, the average number of pile-up events should be set to 0 !
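For orientation, a sketch of where comEnergy sits in a Pythia6 generator fragment (the class name and surrounding parameters are typical examples, not taken from this page; the PythiaParameters contents depend on your process):

```python
import FWCore.ParameterSet.Config as cms

generator = cms.EDFilter("Pythia6GeneratorFilter",
    pythiaHepMCVerbosity = cms.untracked.bool(False),
    # comEnergy goes here, at the same level as pythiaHepMCVerbosity,
    # i.e. outside the PythiaParameters PSet
    comEnergy = cms.untracked.double(900.),
    PythiaParameters = cms.PSet(
        # ... process and tune settings ...
    )
)
```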

Alternatively, you can pick one existing config file. So, to simulate 900 GeV min-bias, do:

cvs co Configuration/GenProduction/python/
cvs co Configuration/GenProduction/python/
scram b
cmsDriver.py Configuration/GenProduction/python/ -s GEN,FASTSIM --pileup=NoPileUp --conditions=MC_31X_V9::All --eventcontent=FEVTDEBUGHLT --beamspot=Early10TeVCollision --datatier GEN-SIM-DIGI-RECO -n 100

How to run simulation with 14 TeV pileup if it's not included in the release ?

Note: in the latest CMSSW releases 14 TeV pileup files are in the release. Nevertheless, this section stays as a reference for users who want to simulate non-standard energies.

To perform simulations with 14 TeV files, one first needs to make the pileup files available locally (in the developer area):

  • copy the list of file URLs from one of the old versions of FastSimulation/PileUpProducer/data/download.url in the CVS repository

into FastSimulation/PileUpProducer/data/download.url, and then download all the files listed (~740 MB in total) in one go:

cd FastSimulation/PileUpProducer/data
cat download.url | xargs wget
Additional remark: for any alternative access to these files (other than from the local .../FastSimulation/PileUpProducer/data/ directory just described), make sure CMSSW_SEARCH_PATH includes the alternative location, so that the job can find them.

  • In FastSimulation/PileUpProducer/python/ make the following replacement:
#from FastSimulation.PileUpProducer.PileUpSimulator10TeV_cfi import *
from FastSimulation.PileUpProducer.PileUpSimulator_cfi import *

  • Vertex specificity: two configs to be changed (current 10 TeV vertex smearing -> nominal 14 TeV one) :
#from FastSimulation.Event.Early10TeVCollisionVertexGenerator_cfi import *
from FastSimulation.Event.NominalCollisionVertexGenerator_cfi import *

#from FastSimulation.Event.Early10TeVCollisionVertexGenerator_cfi import *
from FastSimulation.Event.NominalCollisionVertexGenerator_cfi import *

  • Center-of-mass energy (re-setting): make sure the generation fragment (config) uses the 14 TeV setting (comEnergy), as discussed for 900 GeV in the previous FAQ item. Currently there are no 14 TeV configurations to check out from the dedicated Configuration/GenProduction CVS repository, but one can still find some 14 TeV configurations in the list after clicking on the "Show NNNN dead files" link in the upper left part of the mentioned CVS page header.

How to switch off the magnetic field ?

Remove the regular include of the magnetic field and use this one instead

Alternatively, with the cmsDriver, add "--magField=0T" to the list of arguments

How to switch off the zero-suppression in the ECAL ?

In this mode, there will be one RecHit per crystal.
It is possible only as of FastSimulation/CaloRecHitsProducer V07-00-07. The parameters to add are:
process.ecalRecHit.RecHitsFactory.ECALBarrel.HighNoiseParameters = cms.vdouble()
process.ecalRecHit.RecHitsFactory.ECALBarrel.SRThreshold = -999.
process.ecalRecHit.RecHitsFactory.ECALEndcap.HighNoiseParameters =  cms.vdouble()
process.ecalRecHit.RecHitsFactory.ECALEndcap.SRThreshold = -999.

Important: this only makes sense when CaloMode is equal to 0 or 2 (i.e., when no digitization is performed for ECAL) in CaloRecHitsProducer/python/

How to simulate long-lived exotic charged particles

The current Fast Simulation doesn't support long-lived exotic particles (which are implicitly treated as invisible, hence giving an artificially large missing energy); we plan to add this functionality during 2013. In the meantime, this section gives some hints on how to hack the existing code to get a simulation of these particles.

Currently, the list of particles not simulated in FastSim is hard-coded here. You can add more particles by their PDG code. (In future releases, this will be made configurable.) Exotic particles pass this filter, but in many cases do not pass criteria hard-coded elsewhere in the simulation.

In case you are interested in the response in the calorimeters and in the muon chambers, make sure that these detectors treat the new particle as you decide (e.g., as a muon or as a hadron).

For example, if the new particle is not sensitive to strong interactions (and is therefore muon-like) you can add its PDG code in the following places (credit to Lukas Vanelderen who tested this recipe):

  • method KineParticleFilter::isOKForMe() in FastSimulation/Event/src/, which dictates which particles can be simulated; note that particles decaying outside the tracker volume do not pass the filter and do not enter the simulation

  • method FSimEvent::load() in FastSimulation/Event/src/, where it is specified that only particles with PDG id +/-13 have to be treated as muons; add your new PDG id here; note that in the same if statement, particles with an 'end vertex' do not pass

  • method MuonSimHitProducer::produce() in FastSimulation/MuonSimHitProducer/src/, where only particles with PDG id +/-13 are processed; add your new PDG id here

  • method CalorimetryManager::reconstruct() in FastSimulation/Calorimetry/src/, where exotic particles are ignored. You can add here the kind of behaviour that you want them to have in the calorimeters.

Important caveats:

Although the above recipe technically works, in the sense that you get tracks and muon candidates corresponding to the new particle, the simulation has the following important shortcomings:

  • it does not model material effects for charginos in the calorimeters, the magnet and the yoke, so the new particles don't get stopped in the detector at the right place;

  • even for those charginos with transverse momenta that in FullSim typically get stopped in the tracker (pt < ~50 GeV for a mass of 300 GeV) there is a problem: they lose a lot of energy in FastSim (because dE/dx is simulated), but never get stopped, and they make it all the way through the tracker and the muon chambers.

So if you desire a realistic simulation you have to implement a probability to get stopped in the detector as a function of traversed material. As mentioned above, work in this sense is planned during 2013. Don't hesitate to volunteer if you are interested in contributing to that.

Running on Fast Simulation events

Running the Fast Simulation and your own analyzer (or PATtuplizer) in the same job

Fast Simulation is fast! Therefore, it is often more convenient to generate, simulate, reconstruct and run your own analyzer on the events in one go, without having to store the events anywhere. In that case, one should take care that the analyzer only runs once all the products it needs have been created. If only objects from the reconstruction are needed, one can put the analyzer just after the reconstruction step in the sequence. But whenever a TriggerResults object is also needed, one has to take into account that it is only created by the Framework when all Paths have completed. Therefore, in that case the EndPath is the correct location for an analyzer module which needs to access TriggerResults.
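A sketch of the scheduling (the module label and analyzer type here are hypothetical):

```python
# an analyzer that reads TriggerResults must run after all Paths have
# completed, so it is scheduled in an EndPath rather than in a Path
process.myAnalyzer = cms.EDAnalyzer("MyHltAnalyzer",
    triggerResults = cms.InputTag("TriggerResults")
)
process.analysisStep = cms.EndPath(process.myAnalyzer)
```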

Filtering events at RECO level

Although Fast Simulation is fast, its output is not slimmer than Full Simulation. Therefore, often private productions are limited by the logistics of storing very large files locally. In some cases, for example when you want to fast-simulate large quantities of some background and you already know that your offline selection will get rid of the vast majority of them, it can be convenient to apply a loose pre-selection at RECO level in the same job. Technically this can be done with an EDFilter.

An example of such a filter is here and can be added to your job by configuring the PoolOutputModule as done here (tested in 5_2_X)
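Schematically, the filter goes into a Path, and the output module keeps only events accepted by that Path (module and path names here are hypothetical):

```python
# loose RECO-level preselection in its own Path
process.recoPreselection = cms.EDFilter("MyRecoPreselection")
process.skimPath = cms.Path(process.recoPreselection)

# write out only the events that passed the skim Path
process.out = cms.OutputModule("PoolOutputModule",
    fileName = cms.untracked.string("skimmed.root"),
    SelectEvents = cms.untracked.PSet(SelectEvents = cms.vstring("skimPath"))
)
```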

Applying pileup reweighting when analyzing Fast Simulation events

Everything works as for Full Simulation events (see instructions here and here) if you use the true-number-of-interactions reweighting, with just one subtlety to consider in the specific case of 2011 analyses: only in-time pileup is simulated in FastSim, meaning that the 1D reweighting should be applied, and not the 3D one which is recommended for Full Sim in 2011 analyses.
In practice this means using the LumiReWeighting class instead of the Lumi3DReWeighting class in PhysicsTools/Utilities.
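Conceptually, the 1D reweighting amounts to taking, per event, the ratio of the normalized data and MC distributions of the true number of in-time interactions. A minimal pure-Python sketch of that logic, with toy distributions (not real pileup profiles):

```python
def make_pileup_weights(data_dist, mc_dist):
    """Per-bin event weights: normalized data over normalized MC,
    binned in the true number of in-time interactions."""
    sum_data = float(sum(data_dist))
    sum_mc = float(sum(mc_dist))
    weights = []
    for d, m in zip(data_dist, mc_dist):
        # protect against empty MC bins
        weights.append((d / sum_data) / (m / sum_mc) if m > 0 else 0.0)
    return weights

# toy distributions for bins n_true = 0..4
data = [10.0, 40.0, 30.0, 15.0, 5.0]
mc = [20.0, 20.0, 20.0, 20.0, 20.0]
weights = make_pileup_weights(data, mc)  # [0.5, 2.0, 1.5, 0.75, 0.25]
```

An analysis would then look up the weight in the bin of the true number of interactions stored in the event.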

Running the track refitter on previously produced Fast Simulation file

For some use cases it is necessary to run the track refitter over a file already produced. There are only a few parameters of the configuration that should be changed in order to be consistent with the configuration of FastSimulation tracking (no RungeKutta, hits splitting, TTRH building without refit):

 ### Track refitter
process.TrackRefitter.TrajectoryInEvent = cms.bool(True)
process.TrackRefitter.Fitter = cms.string('KFFittingSmoother')
process.TrackRefitter.useHitsSplitting = cms.bool(True)
process.TrackRefitter.TTRHBuilder = cms.string('WithoutRefit')
process.TrackRefitter.Propagator = cms.string('PropagatorWithMaterial')

My HLT output analyzer does not work on the Fast Sim output

Some producers have a different name in the Fast Sim; a couple of replacements are needed in the config file:
  • the L1GlobalTriggerObjectMapRecord, L1GlobalTriggerEvmReadoutRecord and L1GlobalTriggerReadoutRecord are made by gtDigis
  • the l1extra objects are made by l1extraParticles, except the one corresponding to the muons, which is written by l1ParamMuons

See this extract of an edmDumpEventContent output:

L1GlobalTriggerEvmReadoutRecord    "gtDigis"    ""
L1GlobalTriggerObjectMapRecord    "gtDigis"    ""
L1GlobalTriggerReadoutRecord    "gtDigis"    ""

vector<l1extra::L1EmParticle>     "l1extraParticles"    "Isolated"
vector<l1extra::L1EmParticle>     "l1extraParticles"    "NonIsolated"
vector<l1extra::L1EtMissParticle>     "l1extraParticles"    ""
vector<l1extra::L1JetParticle>     "l1extraParticles"    "Central"
vector<l1extra::L1JetParticle>     "l1extraParticles"    "Forward"
vector<l1extra::L1JetParticle>     "l1extraParticles"    "Tau"
vector<l1extra::L1MuonParticle>     "l1ParamMuons"    ""
vector<l1extra::L1MuonParticle>     "l1extraParticles"    ""
Do not use the last one for the muons, but the one made by l1ParamMuons.
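For example, if your analyzer reads the L1 collections through InputTag parameters (the parameter names here are hypothetical; check your analyzer's configuration), the replacements would look like:

```python
# in Fast Sim, take the L1 muons from l1ParamMuons...
process.myHltAnalyzer.l1MuonSource = cms.InputTag("l1ParamMuons")
# ...and the other l1extra objects from l1extraParticles
process.myHltAnalyzer.l1JetSource = cms.InputTag("l1extraParticles", "Central")
```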

About warning messages

What does Number of Cells in file is 0 - probably bad file format mean ? Can I ignore it ?

Yes, this warning can be safely ignored. It is produced by the class reading the XML file containing the HCAL mis-calibration constants. To match the Full Sim, the HCAL is no longer miscalibrated, even with STARTUP conditions. Therefore, an (almost) empty file is given to the software as input. It complains a bit (hence this error message), but it creates an array filled with 1s, which is exactly what is needed.

Can I safely ignore the error message "received a hit very far from the detector (x,y,z) coming from an electromagnetic shower. - Ignoring it"

Yes, these warnings can be safely ignored. They can appear when the fast sim checks whether an electromagnetic shower leaks behind the ECAL. The formula of the shower parameterization can diverge in such cases and create a spot with a very large radius. The energy of the spot in this case is infinitesimal, so it is really not a problem to discard it.

Technical problems

When running with very high pileup, my jobs are terribly slow

This is related to memory issues. The simplest (and recommended) FastSim workflow does the Sim and Reco steps in one go, but because of that, the memory consumption is higher than in most other workflows. In very high pileup conditions (e.g. 200 pileup collisions), some users have reported that their FastSim jobs saturate the memory and therefore become very slow.

Please notice that this problem is not related to FastSim itself, but to the fact that everything is run at once, differently from typical FullSim workflows, which are split into GEN-SIM and DIGI-RECO. If FullSim is run from GEN to RECO in one go, memory usage versus event number looks like this, with a plateau at roughly the same value as FastSim.

A trivial workaround, possible since 6_1_0_pre6, is to split the FastSim job in two steps, simExtended and recoHighLevel, as motivated in slides 10-12 of this talk, see the instructions here.

Some module doesn't find genParticles

Most likely this problem is not related to Fast Simulation. It typically happens when you use a private generation fragment (or you backported it from another release) and you did not recompile. See more details here.

Also notice that cmsDriver looks by default for a configuration fragment in Configuration/Generator, and not in Configuration/GenProduction, so even if you are using an official fragment from the release, if it is in Configuration/GenProduction you will have to specify the location as Configuration/GenProduction/python/

If the problem persists after recompilation and after checking that the file path is correct, please contact the hypernews.

My Fast Sim jobs crash when I run interactively on lxplus.

The default virtual memory setting on lxplus is 1.5 GB. This is a bit short when one is running FastSim+Reco+L1+HLT simultaneously. As long as the overall memory consumption has not been reduced, it is possible to increase the virtual memory with limit vmem unlim (tcsh) or ulimit -v 3000000 (bash). See this post from Pete.

I observe duplicates in the gsfElectrons collection

The problem of electron duplicates appeared since 4_2_0_pre7. It seems to affect more strongly the processes with a large final-state multiplicity, see the SUSY report in this thread.
It is not observed in pfElectrons, but only in gsfElectrons. Although the use of pfElectrons is generally recommended for analysis, some groups still access gsfElectrons. See slides 5-8 of this talk and slide 9 of this one.
This is due to track candidates with the very same momentum (up to the last digit), which used to be cleaned away at gsfElectron level before 4_2_0_pre7 (so, nobody had ever cared for this FastSim feature before).
The reason why the duplicate cleaning works for pfElectrons but not for gsfElectrons is that at particle-flow level the same ECAL clusters can not be counted twice, so there is a further implicit cleaning.
Therefore if you loop over gsfElectrons you have to take care to skip duplicates (you can either take inspiration from how pfElectrons are cleaned, or just check by hand if pt/eta/phi are the same).

The effect seems to be sample-dependent.
To quickly check in Root how many duplicates are in your case:

In CMSSW_7_4_X and prior releases:

TFile f("filename.root")

In CMSSW_7_5_X and later releases:

TFile f("filename.root")

Status: solved.
Patch provided here, introduced in 6_1_0_pre4 and backported to 5_3_6.
In the other affected releases, users can reimplement the same change to FastSimulation/Tracking/plugins/ and recompile, or just manually skip gsfElectron duplicates. (Hint: the objects in the collection are already sorted, so just compare each candidate with the preceding one.)
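A sketch of that manual cleaning in plain Python, with dicts standing in for gsfElectron candidates (the collection is assumed to be sorted, as noted above):

```python
def skip_duplicates(electrons, tol=1e-6):
    """Keep only candidates that differ from the preceding one in pt, eta or phi."""
    cleaned = []
    for ele in electrons:
        if cleaned and all(abs(ele[k] - cleaned[-1][k]) < tol
                           for k in ("pt", "eta", "phi")):
            continue  # same kinematics as the preceding candidate: a duplicate
        cleaned.append(ele)
    return cleaned

electrons = [
    {"pt": 35.2, "eta": 0.4, "phi": 1.1},
    {"pt": 35.2, "eta": 0.4, "phi": 1.1},   # duplicate of the first candidate
    {"pt": 21.7, "eta": -1.3, "phi": -2.0},
]
good = skip_duplicates(electrons)  # keeps 2 of the 3 candidates
```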

I am getting Fatal error: BLOCK DATA PYDATA has not been loaded! when running with Pythia8 or Herwig (or other non-Pythia6-interfaced generators)

Problem reported here, appearing in 5_1_X, early 5_2_X (up to 5_2_5) and early 5_3_X, due to the following facts:
- the Fast Simulation has no internal implementation of the decays inside the detector volume, and since the beginning the solution was to use the decay routine from Pythia6 (in Fortran!) as external decayer
- going from 5_0_X to 5_1_X, the compiler and linker changed
- the new linker behaves differently in many respects, and in particular it does not correctly initialize the common block coming from pydata, because of how the dependency chain is structured

The consequence is that, in the few releases affected before an elegant solution was found (see here), Pythia8 or Herwig and Fast Simulation cannot be run in a single step. (Everything is fine if you produce the GEN file with Pythia8 or Herwig and then run Fast Simulation on it as a second step.)

It is advised to move to one of the recommended releases (e.g., 5_2_X with X>=6, or anything since 6_0_X).
For the affected releases, here follow several workarounds, in increasing order of complexity.
You don't need these tricks for generators which are already interfaced to Pythia6 (e.g., for hadronization), like MadGraph or PowHeg.

Workaround #1: split the job in two steps: the GEN part first (e.g., cmsDriver.py your_generator_fragment_cfi --conditions auto:startup -s GEN --datatier GEN -n 10000 --eventcontent RAWSIM), and then FastSim alone, taking as input the files produced by the first step (see instructions above).

Workaround #2: if you use Pythia8 as generator, and prefer to run everything in one job, check out GeneratorInterface/Pythia8Interface V00-01-32 (in 5_2_X and 5_3_0) or GeneratorInterface/Pythia8Interface V00-01-26-01 (in 5_1_X) and recompile.
With other generators, you have to add these lines in the respective BuildFile.xml:

<architecture name="slc._[^_]*_gcc4[5-9]">
  <flags LDFLAGS="$(PYTHIA6_BASE)/lib/pydata.o"/>
</architecture>

Workaround #3: be a validator of the new decayer scheme based on Pythia8!
Check out:

FastSimulation/TrajectoryManager V01-05-01
FastSimulation/ParticleDecay V03-00-04
and recompile, then edit FastSimulation/TrajectoryManager/python/ and set Decayer = cms.string('pythia8') (the default, at the moment, is still pythia6).
This works for whatever program you use as generator (not only for Pythia8).
As soon as we make sure that the new decayer scheme gives the same results as the old one (or, in case of differences, that it moves towards FullSim), we will switch the default to pythia8 and get rid for good of one of the last bits of Fortran still surviving in CMSSW.

I am getting crashes when using Herwig++ and MC@NLO

Note: to be checked if this problem is still present

It is unfortunately a known problem for which there is no satisfactory technical solution yet. A workaround is however available: it consists in modifying the BuildFile of the relevant generator package. Add

<use name="GeneratorInterface/CommonInterface"/>

in the BuildFile of
for herwig++, and in the BuildFile of
for MC@NLO, and recompile.

Comparison between fast and full simulations

I observe a discrepancy in energy scale (or neutral energy fraction) for Particle Flow jets in the barrel and endcap regions

This is a known issue, see for example slides 16 and 17 of this talk. This is understood as coming from the shower modeling, that cannot account properly for outliers.

A long-term solution would be a complete GFlash implementation in FastSim (but nobody is currently working on that, although it has been on our wish list for a long time - please volunteer!).

In the meantime, we provide a patch that fixes the distributions by adding dummy neutral clusters:

cvs co -r CMSSW_X_Y_Z FastSimulation/Configuration
comment out
from FastSimulation.ParticleFlow.ParticleFlowFastSim_cff import *
and uncomment
from FastSimulation.ParticleFlow.ParticleFlowFastSimNeutralHadron_cff import *
then recompile, and rerun.

Please tell us whether the patch is sufficient to restore Fast/Full agreement in your analysis use case.

This patch is available, but not active by default, in CMSSW_4_2_8_patch4 and following.
It has been made active by default in 5_2_5 and following and in all the 5_3_X, 6_0_X, etc. releases.

Warning for PF2PAT users

After the "PF patch" started to be widely used, it was realized that the first version had no effect when PF2PAT is applied. The reason is that PF2PAT re-clusters jets on the fly, and with the first implementation of the patch it happened to take the uncorrected PF candidates collection as input.
A fix to the "PF patch" that properly takes this into account is FastSimulation/ParticleFlow V00-00-08, which entered 6_0_0_pre5, 5_3_1 and 5_2_6. This tag also works on top of any previous release of the 6_0_X, 5_3_X and 5_2_X series.
The backport to 4_4_X is FastSimulation/ParticleFlow V00-00-06-01.
On top of the releases in the 4_2_X series, a different implementation of the fix is needed: FastSimulation/ParticleFlow V00-00-01-02.

As an alternative (for example if you already produced large amounts of FastSim events and you don't want to recreate them), you can edit the configuration files of PF2PAT where the PF collections to be used for re-clustering the jets are specified, or do the corresponding "replaces" in your configuration file.
In recent releases these are CommonTools/ParticleFlow/python/TopProjectors/ and CommonTools/ParticleFlow/python/; in both files, replace cms.InputTag("particleFlowTmp") with cms.InputTag("FSparticleFlow").

The forward jets are very different between the Fast & Full Sim

The HF calorimeter, which covers the 3<|eta|<5 range, has a structure very different from the central calorimeter. It has no separate electromagnetic or hadronic part; it is actually made of long and short fibres. A description of this calorimeter can be found in the beginning of this talk. The total energy deposited can be decomposed into the energy deposited in the long fibres and in the short fibres: E(tot)=E(L)+E(S). The electromagnetic and hadronic energies can then be derived as E(EM)=E(L)-E(S) and E(HAD)=2E(S). These two quantities are rather artificial: E(EM) can be negative, and is obviously very different from the electromagnetic fraction that is computed in the central region from the CaloTowers, which do contain an electromagnetic and a hadronic part. As a result, the electromagnetic fraction of the forward jets should be used very carefully.
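As a quick numerical check of these relations (E(tot)=E(L)+E(S), E(EM)=E(L)-E(S), E(HAD)=2E(S)), with toy fibre signals in GeV:

```python
def hf_energies(e_long, e_short):
    """Decompose the HF long/short fibre signals into total, 'EM' and 'HAD' parts."""
    e_tot = e_long + e_short
    e_em = e_long - e_short   # rather artificial: can be negative
    e_had = 2.0 * e_short
    return e_tot, e_em, e_had

# a deposit with more signal in the short fibres gives a negative E(EM)
tot, em, had = hf_energies(10.0, 15.0)  # -> (25.0, -5.0, 30.0)
```

Note that E(EM)+E(HAD)=E(tot) by construction, which is why these two components are a bookkeeping convention rather than physical EM/hadronic fractions.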

In the Fast Simulation, the energy sharing between the long and short fibres is not properly simulated; only the important quantity, i.e. the overall energy deposit, is tuned and well reproduced (see slide 16). In other words, the electromagnetic fraction of the forward jets is not well reproduced in the Fast Sim, but to our knowledge it is not yet used in the jet reconstruction. There is nevertheless some on-going work to improve the situation.

Update : As of CMSSW_2_1_0 the HF simulation has been improved. See the presentation of P. Verdier in this meeting.

The number of hits per track has a much longer tail in Full than in Fast Sim

This is due to the presence of loopers in Full Simulation (and data, of course), which are not emulated in Fast Simulation (intentionally, to save CPU time).
In principle this should have no effect on high-pt analyses (loopers are very low-pt particles). So far, no effect has been seen, for example, in lepton track-based isolation, where a cut of 1 GeV is applied on the tracks, or in Particle Flow quantities.
In case you see a visible discrepancy, attributable to loopers, in any high-level observable of your interest, please let us know as soon as possible and we might reconsider our strategy.

The b-tagging performance is different from Full Sim

This is well known, and unavoidable given the many differences in track simulation. See for example slides 12-14 of this talk.

The BTV POG provides data-to-MC scale factors for each major MC production, in both Full Sim and Fast Sim. Please contact analysis experts to get advice on which set of scale factors is more appropriate for the release you are using.

By the way, there are several ideas to explore in order to improve the Fast-vs-Full agreement in b-tagging, and you can get ESP credit for working on those; don't hesitate to volunteer if b-tagging is important for your analysis or you need ESP credits.

Review Status

Reviewer/Editor and Date Comments
AndreaGiammanco - 27 Apr 2012 Reorganized by clumping entries in categories, removed some obsolete entries, added some new ones
FlorianBeaudette - 25 Nov 2008 HLT/L1
PatriziaAzzi - 13 Oct 2008 No comment
FlorianBeaudette - 9 Oct 2008 Various updates : H/E, muons, Geometry
FlorianBeaudette - 10 Jun 2008 Jets in HF
PatrickJanot - 31 Mar 2008 Page template creation

Responsible: AndreaGiammanco

Last reviewed by:

Topic revision: r81 - 2015-06-16 - WilliamTanenbaum
