Questions and answers

This page collects frequently asked questions. Some questions (and answers) concerning common problems have been included here from the HyperNews forums. You can also submit a question through the CMSSW Savannah page: choose "Submit" from the "Support" menu bar or go directly to the Submit page.

Please check the existing documentation, especially the CMSWorkbook, before adding your question.

InputTag src = "secondCluster"

hn-cms-crabFeedback, Khristian Kotov - Sep 09

I am running CMSSW_1_6_0 and the default version of CRAB on the lxplus cluster. When I create jobs, CRAB generates a crab_0_*/job/CMSSW.cfg file which has two problems:

line 3885:   InputTag src = "secondClusters"
line 4814:   InputTag src = "thirdClusters"
Perhaps CRAB just uses some corrupted .cfi file from the release. I tried to fix CMSSW.cfg and submit my jobs, but I still have the same problem in the job's output. Does anyone know how to solve this problem?


I fixed it in my local area by taking the two files (RecoParticleFlow/PFTracking/data/second.cff and RecoParticleFlow/PFTracking/data/third.cff) from CVS and removing the quotes. (Andrea Rizzi)

project CMSSW: project: command not found

Help desk, 30 May 2007

An error message of the type

project CMSSW
sh: project: command not found

may appear if you have changed your shell at lxplus without changing the default shell.


See the next question. The problem may also arise when connecting to lxplus from a Windows PC and opening a second xterm: the environment variables do not get properly set in this case, so use the original X window instead. It may also happen for a similar reason if you have saved your profile and are not prompted for your password when opening an lxplus window on a Windows PC.

How to change the default shell on lxplus

Help desk, Dayong Wang, 30 Apr 2007

How to change the default shell on lxplus?


You can find the instructions in the CERN IT list of frequently asked questions.

Copying a small amount of data locally

cms-User.Support, Michel Della Negra, 29 Mar 2007

In trying to execute

rfcp /castor/ tbh2.root
I got:
Disk quota exceeded.
The file contains 50K events from TB2006 H2 data. I do not need the full statistics to debug an analysis. Is there an exec file for copying only, say, 1000 events?


You can copy a fraction of a data set from local storage (i.e. from castor at CERN) by running cmsRun with a configuration file which just copies part of the file locally, as explained in WorkBookDataSamples#LocalCopy.

You can skip a number of events by adding untracked uint32 skipEvents = 100 (where 100 is the number of events to be skipped) to the PoolSource block, and you can start with a specified event by adding

   untracked uint32 firstRun = 5006
   untracked uint32 firstEvent = 96
in the PoolSource block. (Luca Lista, Chris Jones, Iana Osborne)
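Putting these pieces together, a PoolSource block for copying roughly the first 1000 events might look like the sketch below. This is a sketch only: the file name is a placeholder, and in releases of this era the event limit was the untracked int32 maxEvents parameter of PoolSource, so check the syntax of your release:

```
source = PoolSource {
    untracked vstring fileNames = { "file:tbh2.root" }  # placeholder file name
    untracked int32  maxEvents  = 1000                  # copy only 1000 events
    untracked uint32 skipEvents = 100                   # optionally skip the first 100
}
```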


CLHEP::HepLorentzVector::pt(): no matching function

hn-cms-progQuestions, Carmelo Marchica, Mar 12 2007

I would like to use the following variable pt:

std::vector<HepMC::GenParticle*> electrons;
double pt = electrons[0]->momentum().pt();

And I get:

no matching function for call to `CLHEP::HepLorentzVector::pt()'

What do I have to use for the pt? And where can I find the definitions of the variables in momentum()?


You need to use momentum().perp(). You can check the interface of CLHEP::HepLorentzVector from (for example) LXR (22 Mar 07: the link is temporarily broken due to LXR) or check the CLHEP documentation (not necessarily very helpful). (Andrea Bocci)

No "EcalWeightXtalGroupsRcd" record found -> FakeConditions.cff

hn-cms-ecalDB, Ian Tomalin, Mar 03 2007

I am running the b-jet HLT code with CMSSW 1.3.0.pre5. In order to reconstruct jets in the calorimeters (starting from Digis), the configuration file includes:

include "HLTrigger/Configuration/data/common/CaloTowers.cff"
include "HLTrigger/Configuration/data/common/RecoJetMET.cff"

which in turn include:

include "RecoLocalCalo/Configuration/data/RecoLocalCalo.cff"

When I run this job, I get the error message:

No "EcalWeightXtalGroupsRcd" record found in the EventSetup.
Please add an ESSource or ESProducer that delivers such a record.
cms::Exception going through module

I didn't get this error with pre2. I am not an ECAL expert. How do I get rid of this error?


Starting in pre4 you need to add to your cfg

include "Configuration/StandardSequences/data/FakeConditions.cff"

if you are going to do reco. (Shahram Rahatlou)

SuperCluster->e3x3(), Invalid Reference

hn-cms-physTools, James Jackson, Mar 03 2007

I am using MC data (FEVT) generated with 1_2_0, analysing with 1_2_1. Getting the supercluster associated with a PixelMatchGsfElectron is working fine, and the energy() method returns sensible values. However, if I attempt to call e3x3() on the supercluster, I get the following exception:

%MSG-w ScheduleExecutionFailure:  PostModule 03-Mar-2007 18:13:08 GMT
        Run: 3766 Event: 46
an exception occurred and event is being skipped:
---- ScheduleExecutionFailure BEGIN
---- InvalidReference BEGIN
NullID RefCore::getProduct: Attempt to use a null Ref.cms::Exception  
going through module L1TriggerEfficiencyAnalyzer/analysis run: 3766  
event: 46
---- InvalidReference END
Exception going through path p
---- ScheduleExecutionFailure END

Looks like something fishy is going on - is this a known bug / incompatibility between 1_2_0 and 1_2_1?


This is a known issue: the e3x3() and other cluster shape accessor methods are removed from the SuperCluster interface from 130_pre3 onwards. These methods do not work because the Ref to the ClusterShape is not set, and it was decided to set up an association map instead. Therefore, to access shape information it is necessary to use the association map between BasicClusters and ClusterShape, for example:

edm::Handle<reco::BasicClusterShapeAssociationCollection>  barrelClShpHandle;
iEvent.getByLabel("hybridSuperClusters","hybridShapeAssoc",  barrelClShpHandle);

edm::Handle<reco::BasicClusterShapeAssociationCollection> endcapClShpHandle;
iEvent.getByLabel("islandBasicClusters","islandEndcapShapeAssoc", endcapClShpHandle);

reco::BasicClusterShapeAssociationCollection::const_iterator seedShpItr =
    barrelClShpHandle->find(myBarrelSuperCluster->seed());
const reco::ClusterShapeRef& seedShapeRef = seedShpItr->val;
float e9 = seedShapeRef->e3x3();
(David Futyan)

MagGeometry::fieldInTesla: failed to find volume for (nan,nan,nan)

hn-cms-swdevelopment, Javier Fernandez, 12/11/2006

I'm randomly experiencing, in some jobs over the grid, the following error when doing re-reconstruction with CMSSW_1_0_6 over CSA06 1_0_6 samples (I'm not saying this is related to these samples/this version):

MagGeometry::fieldInTesla: failed to find volume for (nan,nan,nan)
The stdout log file is filled with this line at some random event until the maximum output log file size is reached, while the stderr log file is filled with the following line:

  *** Break *** write on a pipe with no one to read it

resulting in two 2 GB log files. Other jobs run fine on the same sample.


The warning means that the field has been requested at the position (nan,nan,nan), which is obviously invalid (it has nothing to do with the field itself!).

Any piece of CMSSW can cause this printout - a rare occurrence of a numerical problem creates a NaN that is propagated in the code and triggers this message when it reaches the field.

(It should be quite worrying that, apart from this warning message, jobs go on and the NaN is silently accepted...)

This particular issue with 106 is solved as mentioned by Chang, but there can be more such problems in other parts of CMSSW.

The way to trace down the source of the problem is to add:

service = EnableFloatingPointExceptions {}
service = Tracer {}

That will cause an exception to be thrown at the place where the NaN is created, making it possible to debug it. This does not help Javier, though: the application will normally crash, unless there is a way to instruct the framework to skip events with such an exception.

I think that it would be a good idea to require release validation to be run with EnableFloatingPointExceptions so that these bugs are found early in the release cycle.

What I can do on the magnetic field side is, on the other hand, one (or more) of the following:

  1. print the message up to a given number of times
  2. throw an exception (will force people to debug their code), possibly printing the stack trace so the culprit can be found easily
  3. stop complaining and just go on (as all the rest of CMSSW does)

This will happen in 130 I hope. (Nicola Amapane)

Naming of the samples in DBS

hn-cms-btag, Pavel Demin, 07 February 2007

Could someone please explain to me which QCD samples are intended to be used for b-tag validation? I can see that currently there are two types of samples:

  • mc-onsel-120_PU_QCD_pt_80_120
  • mc-onsel-120_QCD_pt_80_120
I could not find any description of the samples in DBS.


In general:

  1. Datasets with "onsel" in the name were produced at the request of the HLT group. The corresponding configuration files are in the CMSSW directory Configuration/CSA06Production/data/.
  2. Datasets with "physval" in the name were requested by the Physics Validation Group. The .cfg files are in Configuration/PhysicsValidation/data/.
  3. Datasets with "relval" in the name were requested by the Software Validation Group. The .cfg files are in Configuration/ReleaseValidation/data/.

The data/ directories all contain a subdirectory PileUp/, which contains any .cfg files used for pileup production.

(In some cases, you may need to check out the CVS HEAD of Configuration to find them ...)

So in your example: mc-onsel-120_PU_QCD_pt_80_120 was produced with Configuration/CSA06Production/data/PileUp/QCD_pt_80_120_PU_OnSel.cfg, whilst mc-onsel-120_QCD_pt_80_120 was produced with Configuration/CSA06Production/data/QCD_pt_80_120_OnSel.cfg. And our pileup bb and cc dijets are produced with PhysicsValidation/data/PileUp/BBbar_pt80to120_LowLumi.cfg etc. (Ian Tomalin)

Compatibility between CMSSW versions

hn-cms-edmFramework, Pedrame Bargassa, 23 Nov 2006

Is there any known 120pre2 -> 106 incompatibility as well?


Yes, you should expect to read the CSA06 MC data only up to CMSSW_1_0_6.

The general picture is:

Type of data         produced in    run in
GEN-SIM-DIGI         0_8_x-1_0_x    1_0_x (preferably 1_0_6)
GEN-SIM-DIGI-RECO    1_0_x          1_0_x (preferably 1_0_6)
GEN-SIM-DIGI         1_1_1          1_1_x
GEN-SIM-DIGI-RECO    1_1_1          1_1_x

i.e. in general, don't expect to read any MC data written in 1_0_x with 1_1_x, and don't expect to read anything written in 1_1_x with 1_2_x. (It may run, but produce garbage results; don't waste your time on it.)

Right now it looks like only GEN-SIM data will be usefully compatible between 1_1_x and 1_2_x. (Peter Elmer)

NB, this table has been updated in WorkBookWhichRelease

How to access trajectories and the track state on each crossed layer

hn-cms-swDevelopment, Boris Mangano, 18 Oct 2006

In the past, there were many requests to access a track's parameters (in particular the complete TrajectoryMeasurement) on the surface of each layer crossed by the track. The Trajectory is the object devoted to collecting all the TrajectoryMeasurements (TMs) of a track. Nevertheless, TrajectoryMeasurement objects were not designed to be saved to disk and, after the final fit of the tracks is done, the information about the TMs is discarded. The final Track contains only the TSOS at the innermost and outermost hits.


Since the last nightly and, hopefully, in 120pre2: thanks to a very recent update of the Framework, there is the possibility to define transient-only products which can be put into the event, even though they are not allowed to be saved to the ROOT file.

The RecoTracker/TrackProducer has been modified accordingly to be able to put the vector of trajectories into the event. The default behavior, to save CPU time, is to discard the copy. Nevertheless, the user can now change the new TrackProducer parameter "TrajectoryInEvent" from 'false' to 'true' and the trajectories will be put into the Event during the execution of the job.
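In the old .cfg syntax of that era, turning this on is a one-line replace. The module label ctfWithMaterialTracks below is only the usual CTF track producer label and may differ in your configuration, so treat it as a placeholder:

```
replace ctfWithMaterialTracks.TrajectoryInEvent = true
```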

A subsequent module can get them back from the event using the standard calls, and then analyze them:

edm::Handle<std::vector<Trajectory> > trajCollectionHandle;
iEvent.getByLabel("ctfWithMaterialTracks", trajCollectionHandle); // use the label of your track producer

for (std::vector<Trajectory>::const_iterator it = trajCollectionHandle->begin();
     it != trajCollectionHandle->end(); ++it) {
   edm::LogVerbatim("TrajectoryAnalyzer") << "this traj has "
      << it->foundHits() << " valid hits";

   std::vector<TrajectoryMeasurement> tmColl = it->measurements();
}

A complete analyzer (TrajectoryAnalyzer .cc .cfg) has been committed in the RecoTracker/TrackProducer/test directory. It shows how the values of the TMs can be accessed.

Last remark: the trajectories can be accessed after the TrackProducer module is executed (with its default TrajectoryInEvent parameter changed) or after the Refitter module; this second producer also has the new TrajectoryInEvent parameter.

The important thing is that the two modules (TrackProducer + "user-defined trajectory analyzer", or TrackRefitter + "user-defined...") are run in the same job. The trajectories are not persistent! (Boris Mangano)

Can't compile an EDAnalyzer created with mkedanlzr

hn-cms-swDevelopment, Benedict Huckvale, 10 Oct 2006

I've done nothing more than

> eval ...
> mkedanlzr WGAnalyser
> rm -r interface/ test/ doc/
But when I try to build I get this:
benedict@lxplus061:~/w0/cmssw/CMSSW_1_0_1/src/WGAnalyser > scramv1 b -r
Resetting caches
Parsing BuildFiles
 /afs/ *** target file
`src_clean' has both : and :: entries.  Stop.


You need to put the generated package into a subsystem (i.e. NOT under /src/myAnalyzer but /src/subsys/myAnalyzer). (Shaun Ashby)
Unfortunately you might also find you need to check out an entirely new CMSSW project area. Try not to build a wrongly placed package by accident in future! (Benedict Huckvale)

Problems when removing RECO.cff from a config file

hn-cms-swDevelopment, Taylan Yetkin, Sep 23

I would like to separate the reconstruction part from the others. When I remove include "Configuration/Examples/data/RECO.cff" and reconstruction from the path path p = {smear_and_geant, digitization, reconstruction}, which are in the exampleRunAll.cfg, my naive guess is that it should work. However, it does not.


I have found what in RECO.cff depends on the SWGuideParticleDataTable: it is the HepMCCandidateProducer, which is then used by different Jet and MET algorithms to do their work on the MC particles instead of the reconstructed quantities. So I guess RECO.cff is not a completely accurate name; it probably should be called MCRECO.cff, since it is designed to work with MC and not actual data. To solve your problem, I suggest adding

include "SimGeneral/HepPDTESSource/data/pdt.cfi"

to your config file. (Chris Jones)

Problems accessing more than one file

hn-cms-swDevelopment, Christian Veelken, Wed, 13 Sep 2006

I have two Monte Carlo files that were produced independently and each contain 500 events. I have written a class derived from EDAnalyzer to make plots of the file content. When I run over one of the two files at a time, everything is fine, no problem. But when I run over both MC files in the same cmsRun job, cmsRun crashes when processing the first event in the second file.


Update 16 Apr 07: A description and an example of how to save histograms or other objects from different modules in one ROOT file is available in SWGuideTFileService.

My guess is the program crashes in your EDAnalyzer. Are you by chance filling your own ROOT file from within the EDAnalyzer? If so, then others have found that when the PoolSource goes from one input ROOT file to another, their code crashes. What was determined is that unless you have explicitly opened your own ROOT file and tied your histograms, ntuples, etc. to your file, ROOT will tie those objects to the first 'input' file, and when that input file goes away the pointers to your histograms, ntuples, etc. then point to objects that no longer exist (since ROOT deleted them when the file closed). So it looks like you need to modify the ROOT calls you are making in your EDAnalyzer. (Chris Jones)

If Chris' diagnosis is correct (very likely), you can have a look at RecoEcal/EgammaClusterProducers/src/, where I had a similar problem which is now solved by correctly binding the histos and trees to my ROOT file without interfering with PoolSource. (Shahram Rahatlou)

Where can I find the data?

How do I know what data is available (real data from MTCC or simulation data) and how can I access it?


See the Workbook page on data access.

For the MTCC data (cosmic muons at CMS) see the instructions in the IGUANA web page, and in particular the data file location.

The CSA06 samples can be found using the DBS tool.

For each release of the CMSSW packages, a set of small event samples is produced, mainly used to validate the release. Instructions on how to access these data are on the release validation samples page.

-- KatiLassilaPerini - 13 Jul 2006

Topic revision: r49 - 2007-09-25 - JennyWilliams
