Analysis Work

CMSSW WorkBook
New workspace area, 20GB, /afs/



CMS Public Higgs results PAS for H->gg, HIG-13-001 CMS Higgs Combination Total_Width_datacards
Physics model for decay width SW guide, Higgs Combination Signal Separation Comb analysis talk ATLAS Higgs page

  • Higgs-4l-MC-dists-AN2012-141-v9.jpg:

AAA, Any Data, Anytime, Anywhere



TFile *f = TFile::Open("root://");

CMSSW, old way:

process.source = cms.Source("PoolSource",
                            # replace 'myfile.root' with the source file you want to use
                            fileNames = cms.untracked.vstring('/store/myfile.root')
)

With AAA:

process.source = cms.Source("PoolSource",
                            # replace 'myfile.root' with the source file you want to use
                            fileNames = cms.untracked.vstring('root://')
)


DAS home page, DAS FAQs, DAS Query Guide, DAS commands

Workbook: Locating Data Samples with DAS

To check if file is accessible:

xrdfs locate /store/data/............

If unavailable, returns with:
[ERROR] Server responded with an error: [3011] No servers have the file

If OK, returns with:
[::]:1094 Manager ReadWrite
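The two response formats above can be classified automatically. A minimal sketch (the function `file_available` is my own helper, not part of any CMS or XRootD tool); it only inspects the text that `xrdfs locate` prints:

```python
def file_available(locate_output: str) -> bool:
    """Classify `xrdfs locate` output: True if at least one server has the file."""
    if "No servers have the file" in locate_output or "[ERROR]" in locate_output:
        return False
    # a successful lookup prints one "host:port ServerType Mode" line per location
    return any(line.strip() for line in locate_output.splitlines())

print(file_available("[ERROR] Server responded with an error: [3011] No servers have the file"))  # False
print(file_available("[::]:1094 Manager ReadWrite"))  # True
```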

To check the release in DAS

Dataset: /DoubleEG/Run2016H-PromptReco-v1/AOD
Creation time: 2016-09-21 00:02:10, Dataset size: 501.4MB, Number of blocks: 12, Number of events: 352, Number of files: 13, Physics group: NoGroup, Status: VALID, Type: data

Click on "Release" and see:
Release: CMSSW_8_0_19_patch1

release dataset=/DoubleEG/Run2016H-PromptReco-v1/AOD

DAS web page examples of successful searches:

config dataset=/SingleElectron/Run2016H-PromptReco-v3/AOD
gives Release: CMSSW_8_0_22
and Global Tag: 80X_dataRun2_Prompt_v14

dataset dataset=/*/Run2016H*/AOD run=284036


run dataset=/DoubleEG/Run2016G-PromptReco-v1/RECO
dataset run=278309

config dataset=/DoubleMuon/Run2016F-PromptReco-v1/AOD

site dataset=/SingleMuon/Run2015B-PromptReco-v1/AOD
run dataset=/SingleMuon/Run2015C-PromptReco-v1/AOD

file dataset=/SingleMuon_0T/Run2015D-PromptReco-v4/AOD run=260493

Note how the DAS dataset name and the file path have a different structure!


Dataset: /SinglePhoton/Run2015D-PromptReco-v4/AOD 
Creation time: 2015-10-06 03:57:18, Dataset size: 3.2TB, 
Number of blocks: 61, Number of events: 21845413, Number of files: 1055, 
Physics group: NoGroup, Status: VALID, Type: data 

run dataset=/SinglePhoton/Run2015D-PromptReco-v4/AOD

Run: 258159
Datasets Sources: dbs3 show 
... and so on for 134 runs in total

file dataset=/SinglePhoton/Run2015D-PromptReco-v4/AOD run=260725

run=260725: 4 Nov 2015, 0.0 lumi (!), 3.8 T, non-CERN sites

file dataset=/SinglePhoton/Run2015D-PromptReco-v4/AOD run=260540

run=260540: 1 Nov 2015, 3.2*10**33, 3.8 T, HLT triggers: 1470, non-CERN sites

file dataset=/SinglePhoton/Run2015D-PromptReco-v4/AOD run=260425

and many others. HLT triggers: 5606983
file has Lumi: [[191, 198], [201, 208], [211, 218], [221, 228]], non-CERN sites
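The Lumi ranges DAS reports are inclusive [first, last] pairs. A minimal sketch (the helper name `expand_lumi_ranges` is mine, not a DAS tool) to expand them into the individual lumi sections contained in the file:

```python
def expand_lumi_ranges(ranges):
    """Expand DAS-style inclusive [first, last] lumi ranges into a flat list."""
    lumis = []
    for first, last in ranges:
        lumis.extend(range(first, last + 1))
    return lumis

# the ranges quoted above for this file
lumis = expand_lumi_ranges([[191, 198], [201, 208], [211, 218], [221, 228]])
print(len(lumis), lumis[0], lumis[-1])  # 32 191 228
```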

summary dataset=/SingleMuon_0T/Run2015D-PromptReco-v4/AOD run=260493 with Add filter/aggregator function to the query: grep and summary.nevents gives:
Number of blocks: 1, Number of events: 9309, Number of files: 1, Number of lumis: 43, Sum(file_size): 1.2GB

Look for T2_CH_CERN, StorageElement:

Difference between the eos-reported location format and that for DAS:
eoscms ls -l /eos/cms/store/data/Run2015B/JetHT/RECO/PromptReco-v1/000/251/562/
Run2015B ==> Run2015B-PromptReco-v1

$ ./ --query="dataset dataset=/JetHT/*/RECO run=251562"
>>> returns /JetHT/Run2015B-PromptReco-v1/RECO

From DAS webpage:
file dataset=/JetHT/Run2015B-PromptReco-v1/RECO run=251562 lumi=285
>>> returns explicit file location for this run and lumi

DAS to get root file locations with:
file dataset=/DoublePhoton/Run2012B-PromptReco-v1/AOD run=194108 lumi=575

File: /store/data/Run2012B/DoublePhoton/AOD/PromptReco-v1/000/194/108/D0FEA2C9-649F-E111-9368-003048D2BF1C.root
Clicking on file, size is 3.1GB
Site: T2_CH_CERN_HLT

DAS supports wild-card query for dataset names and then applying the filters, e.g.
> dataset=/SingleMu/*-22Jan2013-v1/AOD | grep,dataset.modification_time

will give everything for SingleMu primary dataset and AOD data-tier.

CMSWBM Run-Event-Lumi info

For run info, go to cmswbm and click on Run Summary.

For run 194108, get start 2012.05.13 16:24:36, end 2012.05.13 22:06:09, nearly 6 hrs, triggers 831,997,827
Famous event 564,224,000 about 70% into run.

Click on Lumi Sections:
888 lumi sections listed. ~1 M events per lumi section. Each lumi section only ~23 seconds !!

Famous event between following times, info from clicking on Run Info:
CMS.TRG:NumTriggers 562,145,377 2012.05.13 20:07:24
CMS.TRG:NumTriggers 564,576,503 2012.05.13 20:08:24

Now back to Lumi Sections to search for event 564,224,000
573 start 20:07:06 too early ??
574 start 20:07:30 possible
575 start 20:07:53 Possible Probable
576 start 20:08:16 too late
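The same pick can be made by linearly interpolating the trigger counter between the two Run Info timestamps above. A sketch (all numbers come from the notes; the helper `interpolate_time` is mine):

```python
from datetime import datetime

def interpolate_time(ev, ev0, t0, ev1, t1):
    """Linearly interpolate the time at which trigger counter `ev` was reached."""
    frac = (ev - ev0) / (ev1 - ev0)
    return t0 + (t1 - t0) * frac

t0 = datetime(2012, 5, 13, 20, 7, 24)   # NumTriggers 562,145,377
t1 = datetime(2012, 5, 13, 20, 8, 24)   # NumTriggers 564,576,503
t = interpolate_time(564_224_000, 562_145_377, t0, 564_576_503, t1)
print(t.strftime("%H:%M:%S"))  # 20:08:15
```

That lands just before the 20:08:16 start of lumi section 576, i.e. inside lumi section 575, matching the "Possible Probable" pick above.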

GIT and doxygen


CMSSW code cms-analysis code HiggsToZZ4Leptons code JetMETCorrections-METPUSubtraction

git documentation Search with git git chapters git reference manual git cheat sheet

git faq for CMSSW Migrating from CVS to git, slides git UserCode FAQs
git tutorial git faqs pull requests

How to browse/search with git CMSSW on git, advanced topics

Get a single file from GIT

  • search GiT for the file, from the GIT web interface
  • click on the file
  • select "Raw"
  • In browser, "save file as" to save the file to a directory

GIT syntax (A Bocci, 23 Sep 2015)
Most of the CMSSW git tools use the syntax /RecoVertex/BeamSpotProducer/ in .git/info/sparse-checkout to signify that the package RecoVertex/BeamSpotProducer should be checked out.

  • The leading / means that the package is at the base of the project, and not a subdirectory.
  • For example /Configuration/ will match all the packages in the main Configuration directory, while
  • Configuration/
  • will match all the packages */Configuration (e.g. RecoVertex/Configuration, RecoPixelVertexing/Configuration, etc.)

However, git 1.7.1 (the default version on SLC6) does not understand the leading / .

  • As you say, if one does a git checkout using that, it will remove all the packages.
  • The only suggestion I have is to do git config --global push.default simple
  • This will set a (useful) option that git 1.7.1 does not understand, so you will not be able to use it by mistake.

Old CVS links:
Old CVS tutorial/syntax



static float swissCross( const DetId& id,
                         const EcalRecHitCollection & recHits,
                         float recHitThreshold,
                         bool avoidIeta85=true);

static bool isNextToDead( const DetId& id, const edm::EventSetup& es);

static bool isNextToDeadFromNeighbours( const DetId& id,
                                        const EcalChannelStatus& chs,
                                        int chStatusThreshold);

static bool isNextToBoundary (const DetId& id);
/// true if near a crack or ecal border

static bool deadNeighbour(const DetId& id, const EcalChannelStatus& chs,
                          int chStatusThreshold,
                          int dx, int dy);

WorkBook WorkBookGlossary CMS Offline Operations mtgs - releases etc

CMSSW releases and architectures CMS Release Schedule consumes etc

CMSSW path info How to read events from an EDM/ROOT file

Search with git, CMSSW code on git ECAL Objects list for 7_6_X

Tutorials Code Tutorial

New consumes format:

In the class definition I have the following:
    edm::EDGetTokenT<EBRecHitCollection> tok_EB_;
    edm::EDGetTokenT<EERecHitCollection> tok_EE_;
    edm::EDGetTokenT<EBDigiCollection> tok_EB_digi;

In class constructor where we have access to "config"
    tok_EB_    = consumes<EcalRecHitCollection>(edm::InputTag("reducedEcalRecHitsEB"));
    tok_EE_    = consumes<EcalRecHitCollection>(edm::InputTag("reducedEcalRecHitsEE"));

    tok_EB_digi = consumes<EBDigiCollection>(edm::InputTag("selectDigi","selectedEcalEBDigiCollection"));

And then in the analysis part I have the following:
     edm::Handle<EBRecHitCollection> EBRecHits;
     edm::Handle<EERecHitCollection> EERecHits;

    iEvent.getByToken( tok_EB_, EBRecHits );
    iEvent.getByToken( tok_EE_, EERecHits );

Pulling in packages (and EcalLaserDbService.h), downloaded with git into /afs/ with
git-cms-addpkg CalibCalorimetry
NOTE - git wanted an empty src directory. Had to move "Reco" away temporarily.
NOTE - have to do "scramv1 b" in /afs/ to pick up the edits


In  EcalLaserDbService.h
mutable int dbprint;

mutable is needed because of the const in:
  float getLaserCorrection (DetId const & xid, edm::Timestamp const & iTime) const; 

Parameters for modules Paths and trigger bits Framework and Event Data Model Offline Guide
CMSSW installation notes scramv1 intro scram build/usage tips


CMSSW path info

alias cmsenv ==> eval `scramv1 runtime -csh`

scram runtime -csh gives setenv PATH "path/folder list used to compile..."


SCRAM: Source, Configuration, Release, And Management tool. It is the CMS build program. It is responsible for building framework applications and also for making sure that all the necessary shared libraries are available.

Compile information for gcc optimization

Example of cmsenv operation, after setting SCRAM_ARCH = slc6_amd64_gcc472 :
Setup the runtime environment with cmsenv:
cmsenv is an alias for eval `scramv1 runtime -csh`

Print the resulting environment with:

  • scram runtime -csh
setenv PATH "/afs/";

where the slc6 ROOT is correctly included in the path.

Scram commands:

  • scram (without any argument): prints the scram help about all the available commands
  • scram runtime -csh: print the CMSSW environment
  • eval `scram runtime -csh`: set the CMSSW environment
  • scram <cmnd> -help: help for a given command, ie scram arch -help
  • scram arch: gives the current architecture, ie slc6_amd64_gcc472
    (amd64 = amd 64 bit machine, gcc472 = GCC compiler version 4.7.2)
  • scram list <ProjectName>, ie scram list CMSSW: gives releases available for a given architecture
  • scram -arch slc4_ia32_gcc345 list CMSSW: lists CMSSW versions for slc4_ia32_gcc345
  • scram b: builds everything available under your current working directory
    • running from the dev area's src directory (e.g. CMSSW_x_y_z_pre1/src) will build every Subsystem/Package available
    • running it from a specific Subsystem/Package directory (e.g. CMSSW_x_y_z_pre1/src/subsystemA/packageB) will only build this package and all the other packages it depends on (if needed)

CMSSW utilities:
edmPluginDump -a | grep MuonTriggerSelectionEmbedder

  • Tell you exactly where the plugin manager thinks that module should come from.
  • The first one in the list is the one that is actually used.

General flow through a CMSSW programme is as follows:

Class setup:
class Pedestal : public edm::EDAnalyzer { // Pedestal inherits from EDAnalyzer
public:
  explicit Pedestal(const edm::ParameterSet& ps); /* constructor */
  ~Pedestal();                                    /* destructor */
private:
  virtual void beginJob(const edm::EventSetup&);                    // private member function declarations
  virtual void analyze(const edm::Event&, const edm::EventSetup&);  // in class Pedestal
  virtual void endJob();
  // ---------- then member data ----------
  TH2D *h001; // entries, gain = 12
  const EcalElectronicsMapping* ecalElectronicsMap_;
};


Begin job
Initialisation... conditions database, variables etc
Pedestal::Pedestal(iConfig);  <= pass iConfig as follows:
Pedestal::Pedestal(const edm::ParameterSet& iConfig)
{
   //now do whatever initialization is needed
   edm::Service<TFileService> fs;
   h001 = fs->make<TH2D>("h001"," ped entries, gain 12, z = +1", 102,-0.5,101.5, 102,-0.5, 101.5 );
   // and insert any further histos here, but also globally defined above
}
// Object or variable 'c' passed into my code - Pedestal::beginJob(const edm::EventSetup& c) and is of type 'EventSetup'
Pedestal::beginJob(const edm::EventSetup& c)    <= defining beginJob, 'c' passed from the main programme
{   edm::ESHandle< EcalElectronicsMapping > elecHandle;
    c.get< EcalMappingRcd >().get(elecHandle);    <= 'c' is the EventSetup
    ecalElectronicsMap_ = elecHandle.product();
}

Event loop
Pedestal::analyze(iEvent, iSetup);  <= passes iEvent and iSetup to my analyze function as follows:
void Pedestal::analyze(const edm::Event& iEvent, const edm::EventSetup& iSetup) {...all my code...}

End job
Pedestal::endJob();       <= calls my private function 'endJob', no arguments, as below:
void Pedestal::endJob() {
  cout << "ievcount = " << ievcount << endl;
  // now normalise histogrammes
  // get means
  // TH2D h002 = h002/h001;
  adcsq = h003->GetBinContent(49,87);
  cout << "adc squared = " << adcsq << endl;
}


CAF and EOS usage

17 feb 2020
Set up a new eos web site for linux work:

Started from D Petyt's eos page
At the bottom of the page it has: "Like this page? Get it here." Contains: index.php, res, example.

Directory area from linux: /eos/home-d/davec/www
Subdirectory laser-analysis
Used github to set up .htaccess file - ie for just CERN members, and for indexing across the site
Had to set up a "test" website via cernbox, since davec already taken in dfs
Site is, pointed at my eos www site
EOS path /eos/user/d/davec/www/
Web site management

Configuring the page, ie where/how to show plots: done with the "res" file.
The res file contains a css file (css = cascading style sheet)

eos info/procedures
To search for files on eos:

  • eoscms ls -l /eos/cms/

How to access from cfg file:

  • 'root://eoscms//eos/cms/store/relval/CMSSW_5_2_0_pre5/RelValQCD_FlatPt_15_3000/GEN-SIM-RECO/START52_V1-v1/

Copy a file from eos to my work area, ie Sherwin double electron file:

$ eos cp /eos/cms/store/user/shervin/calibration/8TeV/ZNtuples/alcareco/DoubleElectron-ZSkim-RUN2012A-13Jul-v1/190456-193621/190456-202305-13Jul_Prompt_Cal_Nov2012/DoubleElectron-ZSkim-RUN2012A-13Jul-v1-190456-193621.root /afs/

[eos-cp] going to copy 1 files and 41.10 MB

[eoscp] DoubleElectron-ZSkim-RUN2012A-13Jul-v1-190456-193621.root Total 39.20 MB   |====================| 100.00 % [3.4 MB/s]

[eos-cp] copied 1/1 files and 41.10 MB in 13.56 seconds with 3.03 MB/s
[lxplus316] ~ $ work

[lxplus316] /afs/ $ ls -al *621.root
-r-------- 1 davec zh 41101089 Feb  5 14:49 DoubleElectron-ZSkim-RUN2012A-13Jul-v1-190456-193621.root
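The transfer numbers in the eos-cp messages above are self-consistent. A quick arithmetic check (file size taken from the `ls -al` output):

```python
size_bytes = 41_101_089   # from ls -al on the copied file
copy_mb    = 41.10        # size reported by eos-cp
seconds    = 13.56        # transfer time reported by eos-cp
rate_mb_s  = copy_mb / seconds
print(round(size_bytes / 1e6, 2), round(rate_mb_s, 2))  # 41.1 3.03
```

The derived 3.03 MB/s matches the average rate eos-cp reports (the instantaneous 3.4 MB/s in the progress bar is just the final sample).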

Other EOS notes: Setup

  • By default you will most likely find 'eos' in the setup provided by your experiment when you login on lxplus. Experiments configure the stable release which is used also in grid frameworks etc.
  • If you want to make use of most recent features mentioned in this FAQ you have to pick the 'client' version of EOS using a bash or tcsh setup script:
  • source /afs/[atlas|cms|lhcb|alice]/etc/
  • source /afs/[atlas|cms|lhcb|alice]/etc/setup.csh
  • Verify first that you are connecting to 'your' experiment dedicated EOS instance.
  • bash-3.2$ source /afs/[atlas|cms|lhcb|alice]/etc/ *bash-3.2$ eos
# ---------------------------------------------------------------------------
# EOS  Copyright (C) 2011 CERN/Switzerland
# This program comes with ABSOLUTELY NO WARRANTY; for details type `license'.
# This is free software, and you are welcome to redistribute it
# under certain conditions; type `license' for details.
# ---------------------------------------------------------------------------
The automatic EOS endpoint setting uses your group membership to sort you into an experiment.
 If this does not work for you, you can just specify your endpoint via an environment variable:
ATLAS: export EOS_MGM_URL=root://
CMS:   export EOS_MGM_URL=root://
LHCB:  export EOS_MGM_URL=root://  [ there is no user space here - only usable via LHCB tools ]
ALICE: export EOS_MGM_URL=root:// [ there is no user space here - only usable via ALICE tools ]

Using EOS

  • The 'eos' CLI (Command Line Interface) supports most of the standard filesystem commands like:
  • ls, cd, mkdir, rm, rmdir, find, cp ...
  • 'eos help' shows all available commands, 'eos --help' explains each command

The 'eos' CLI can be used as an interactive shell with history

    bash$ eos
    EOS Console [root://] |/> whoami
    Virtual Identity: uid=755 (755,99) gid=1338 (1338,99) [authz:krb5] host=lxplus423

    as a busy-box command
    bash$ eos whoami
    Virtual Identity: uid=755 (755,99) gid=1338 (1338,99) [authz:krb5] host=lxplus423

Accessing an EOS file from ROOT

  • You have to use URLs as file names which are built in this way: root://eos[experiment][experiment]/... e.g.
  • root://
  • root://
  • root [0]: TFile::Open("root://");

Found data on eos, stepping through all cms directories one by one, with:
eoscms ls -l /eos/cms/store/data/Run2012B/DoublePhoton/AOD/PromptReco-v1/000/194/108/D0FEA2C9-649F-E111-9368-003048D2BF1C.root
eos copy messages, copying to my work area:

eos cp /eos/cms/store/data/Run2012B/DoublePhoton/AOD/PromptReco-v1/000/194/108/D0FEA2C9-649F-E111-9368-003048D2BF1C.root /afs/
[eos-cp] path=/eos/cms/store/data/Run2012B/DoublePhoton/AOD/PromptReco-v1/000/194/108/D0FEA2C9-649F-E111-9368-003048D2BF1C.root size=3134841652
[eos-cp] going to copy 1 files and 3.13 GB
append: /eos/cms/store/data/Run2012B/DoublePhoton/AOD/PromptReco-v1/000/194/108/D0FEA2C9-649F-E111-9368-003048D2BF1C.root D0FEA2C9-649F-E111-9368-003048D2BF1C.root
[eoscp] D0FEA2C9-649F-E111-9368-003048D2BF1C.root Total 2989.62 MB      |====================| 100.00 % [40.1 MB/s]
[eos-cp] copied 1/1 files and 3.13 GB in 85.07 seconds with 36.85 MB/s

Copying files around at CERN

  • You probably need to move files from/to EOS to your local computer/AFS or from CASTOR to EOS. Here are a few examples:
# copy a single file
eos cp /eos/atlas/user/t/test/histo.root /tmp/                   

# copy all files within a directory - no subdirectories
eos cp /eos/atlas/user/t/test/histodirectory/ /afs/  

# copy recursive the complete hierarchy in a directory
eos cp -r /eos/atlas/user/t/test/histodirectory/ /afs/

# copy recursive the complete hierarchiy into the directory 'histordirectory' in the current local working directory
eos cp -r /eos/atlas/user/t/test/histodirectory/ histodirectory

# copy recursive the complete hierarchy of a CASTOR directory to an EOS directory (make sure you have the proper CASTOR settings)
eos cp -r root://castorpublic//castor/ /eos/atlas/user/t/test/histodirectory/

# copy a WEB file [ currently the reported copy size is 0 ]
eos cp /tmp/

# copy all ROOT files from an Amazon S3 bucket
# define the environment variables: S3_ACCESS_KEY, S3_SECRET_ACCESS_KEY & SE_HOSTNAME

eos cp -r as3:mybucket/*.root /tmp/mybucket/

Creating file lists

  • You can run 'find' commands in EOS or XRootD storage like CASTOR or S3 storage using the 'eos' CLI. This command returns full pathnames!
# find all files under an EOS subdirectory (if you are an ordinary user, the file list is limited to 100k files and 50k directories)
eos find -f /eos/atlas/user/t/test/

# find all directories
eos find -d /eos/atlas/user/t/test/

# find all files in a CASTOR directory
eos find -f root://castorpublic//castor/

# find all files on a mounted file system
eos find -f file:/afs/

# find all files in my Amazon S3 bucket
eos find -f as3:mybucket/

Listing directories

# list files in eos
eos ls [-la] /eos/atlas/user/t/test/

# list files in castor [ if you use '-la' be aware that the ownership and permissions shown are not correct ]

eos ls [-la] root://castorpublic//castor/

# to list files in an S3 bucket you have to use the find command - see before

Mounting EOS on lxplus

You can mount EOS into your AFS home directory as follows:

bash-3.2$ mkdir -p $HOME/eos

bash-3.2$ eosmount $HOME/eos
===> Mountpoint   : /afs/
===> Fuse-Options : kernel_cache,attr_timeout=30,entry_timeout=30,max_readahead=131072,max_write=4194304, root://
===> xrootd ra             : 131072
===> xrootd cache          : 393216
===> fuse debug            : 0
===> fuse write-cache      : 1
===> fuse write-cache-size : 100000000

Please unmount it once you are over or before you log out !!!

bash-3.2$ eosumount $HOME/eos

or if you have some hanging mount

bash-3.2$ eosforceumount $HOME/eos

Warning: after 24h the mount has to re-authenticate and your kerberos token will have expired in the meanwhile. So whenever you login into lxplus re-fresh (=eosumount + eosmount) any existing mount (even better don't let it there).

Disclaimer: the FUSE module is not recommended for production usage and the use is at your own risk. EOS is not mounted/mountable on lxbatch nodes!
Trouble Shooting

You can get help for all kind of problems at the service desk!

    When I am reading a file, I get 'unable to open - machine not on the network'

    - this happens if all copies of a file are inaccessible. You can verify the state of a file using 'eos fileinfo <path>'. At least one copy needs to be in the state 'active -> online'

    [root@eosdummy tmp]# eos file info /eos/atlas/user/t/test/histo.root

    File: '/eos/atlas/user/t/test/histo.root'  Size: 2052

    Modify: Fri Jun 15 14:12:43 2012 Timestamp: 1341416753.962860000

    Change: Wed Jul  4 17:45:53 2012 Timestamp: 1339762363.349244000

    CUid: 99 CGid: 99  Fxid: 002a26e2 Fid: 2762466    Pid: 1456

    XStype: adler    XS: 49 05 c2 e1 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

    replica Stripes: 2 Blocksize: 4k *******

    #Rep: 2
    <#> <fs-id>
    #                   host #   id #     schedgroup #           path #     boot # configstatus #      drain # active
    0      62     62        default.7          /data08     opserror           rw  offline 
    1      35     35        default.7          /data08     opserror           rw  offline

    If you have offline files please report via the service desk!
    I get 'no space left on device'

    - please verify that you have quota in the part of the namespace you are writing to. 
The quota is not managed by IT but by each experiment. Please request to the experiment responsibles.

Check your quota using 'eos quota ls'

    bash-3.2$ eos quota ls
    By user ...
    # _______________________________________________________________________________________________
    # ==> Quota Node: /eos/atlas/user/t/test/          
    # _______________________________________________________________________________________________
    user       used bytes logi bytes used files aval bytes aval logib aval files filled[%]  vol-status ino-status
    test       289.69 TB  144.69 TB  2.74 M-    100.00 MB  50.00 MB   10.00 k-   100.00     exceeded   exceeded
    In this example volume and inode quota are exceeded  (the space one can use and the number of files one can create).
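Note that "used bytes" is roughly twice "logi bytes" in this listing, which is what you expect when each file is stored with two replicas (cf. "replica Stripes: 2" in the fileinfo output earlier). A quick check on the quoted numbers:

```python
used_tb, logical_tb = 289.69, 144.69   # raw vs logical usage from the quota listing
replication = used_tb / logical_tb     # raw/logical ratio ~ number of replicas
print(round(replication, 2))  # 2.0
```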


 eoscms ls -l /eos/cms/store/data/Run2011A/Photon/RAW/v1/000/165/121
-rw-r--r--   1 phedex   zh         4011987444 Oct 24 12:28 4A78D0C2-C57F-E011-83C8-000423D9A212.root
-rw-r--r--   1 phedex   zh         4020465582 Oct 24 12:28 5CA4A288-B87F-E011-AFFE-003048F117B6.root
-rw-r--r--   1 phedex   zh         3957335961 Oct 24 12:28 6864450C-C17F-E011-8BAF-0030487C8CB6.root
-rw-r--r--   1 phedex   zh         3907128205 Oct 24 12:28 6E674C59-B47F-E011-847A-001D09F24FEC.root
-rw-r--r--   1 phedex   zh         3620500624 Oct 24 12:28 866E56B0-D47F-E011-89FC-003048F024FA.root
-rw-r--r--   1 phedex   zh         3982682572 Oct 24 12:28 A0692214-CC7F-E011-A7F8-0030487CAEAC.root
-rw-r--r--   1 phedex   zh         3974456617 Oct 24 12:28 C470493D-B07F-E011-8F32-001D09F24E39.root
-rw-r--r--   1 phedex   zh         4014233070 Oct 24 12:28 F4FE198D-BD7F-E011-8F79-001D09F34488.root

[lxplus414] ~ $ eoscms ls -l /eos/cms/store/data/Run2011A
drwxr-sr-+   2 phedex   zh                  1 Jun 22 00:04 AlCaP0
drwxr-sr-+   2 phedex   zh                  1 Jun 25 14:59 AlCaPhiSym
drwxr-sr-+   2 phedex   zh                  1 Jun 29 12:09 BTag
drwxr-sr-+   3 phedex   zh                  2 Jun 19 19:26 Cosmics
drwxr-sr-+   4 phedex   zh                  3 Jun 29 01:41 DoubleElectron
drwxr-sr-+   5 phedex   zh                  4 Jun 25 17:51 DoubleMu
drwxr-sr-+   2 phedex   zh                  1 Jun 29 01:47 ElectronHad
drwxr-sr-+   4 phedex   zh                  3 Jun 28 22:22 Jet
drwxr-sr-+   4 phedex   zh                  3 Jun 25 18:33 MinimumBias
drwxr-sr-+   3 phedex   zh                  2 Jun 29 01:39 MuEG
drwxr-sr-+   3 phedex   zh                  2 Jun 25 18:32 MuOnia
drwxr-sr-+   2 phedex   zh                  1 Jun 29 04:19 Photon
drwxr-sr-+   3 phedex   zh                  2 Jun 28 21:29 SingleElectron
drwxr-sr-+   5 phedex   zh                  4 Jun 17 23:42 SingleMu
drwxr-sr-+   2 phedex   zh                  1 Jun 28 21:29 TauPlusX

CASTOR notes

  • My area: /castor/

staging Querying the stager and pre-staging of files
  stager_get -M /castor/
  Received 1 responses /castor/ SUBREQUEST_READY
  stager_qry -M /castor/
  Received 1 responses /castor/ 12345@castorns STAGEIN
  or Error 2/No such file or directory
  or STAGED, ie on disk
  multiple files, ie
  stager_get -M /castor/ -M /castor/

  • Each user has a CASTOR home directory
    • e.g. /castor/
    • my files on CASTOR: rfdir /castor/
    • make directories: rfmkdir /castor/
    • copy to CASTOR: rfcp filename /castor/
    • rename files on castor: rfrename $CASTOR_HOME/some_file.ext $CASTOR_HOME/some_new_file.ext
    • deleting files: rfrm $CASTOR_HOME/some_file.ext
  • Listing Contents of a Directory on Castor
    • list your home directory on castor: nsls
    • list some subdir of your home on castor, i.e. some ganga output: nsls 416/outputdata
    • list some absolute path: nsls /castor/
  • Opening files in CASTOR for an interactive root session, type
    • TFile *f = TFile::Open("rfio:/castor/")

A (large) output file can be saved to CASTOR by using the following script called runMyJob.(c)sh

cd /afs/
eval `scram runtime -(c)sh`
cd -
cmsRun /afs/
rfcp outputFile.root /castor/
The batch job starts to run in a directory called /pool/lsf/username/jobnumber, and this directory has a large scratch space that is available for the duration of the job. You then cd to scratch0/CMSSW_x_y_z/src/.../.../test to set the environment variables. The command "cd -" takes you back to /pool/lsf/username/jobnumber so that the large scratch space is available. aConfigFile.cfg is run from that directory, and the last line copies the output root file from there directly to your space in CASTOR. Note that while writing directly to a file in CASTOR is possible, it is not recommended, as it is very error-prone. The recommended practice is the one indicated above, i.e. writing to a local area and copying the file after cmsRun has finished.

You submit the job in the same way as that for smaller jobs.

bsub -q 1nd -J job1 < runMyjob.(c)sh
It is also possible to request a minimum amount of space in /pool/lsf/username/jobnumber using the -R option, for example to request a minimum of 30GB type
bsub -R "pool>30000" -q 1nd -J job1 < runMyjob.(c)sh

In cms cfg files, data on castor accessed with

process.source = cms.Source("PoolSource",
    # replace 'myfile.root' with the source file you want to use
    fileNames = cms.untracked.vstring(
     #   'file:myfile.root'
     # rfdir  /castor/
     # gives 3908245997 blocks size = 3.9Gbytes, written on Sep 30  2010 
The string '/castor/' is not to be used in the cfg file. The file in Run2010B/Photon/RAW/v1/000/146/944/ had ~23600 events, ~170 kbytes per event.
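The ~170 kbytes/event figure follows directly from the rfdir size quoted in the config comment. A quick check:

```python
size_bytes = 3_908_245_997   # file size from rfdir (= 3.9 Gbytes)
n_events   = 23_600          # approximate event count in the file
kb_per_event = size_bytes / n_events / 1000
print(round(kb_per_event))   # 166
```

So ~166 kB/event, i.e. the "~170 kbytes per event" quoted above.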

Can drill down the directories in CASTOR with

rfdir /castor/
drwxrwxr-x  12 cmsprod  zh                          0 Jun 26  2003 BigJets
drwxrwxr-x 25880 cmsprod  zh                          0 Jan 30  2002 BigMB123
drwxrwxr-x   7 cmsprod  zh                          0 Oct 18  2005 CMSGLIDE
drwxr-xr-x  58 cmsprod  zh                          0 Mar 29  2006 DAR
drwxrwxr-x   7 cmsprod  zh                          0 Mar 23  2004 DSTs
drwxrwxr-x   7 cmsprod  zh                          0 Apr 01  2004 DSTs_801
drwxrwxr-x  22 cmsprod  zh                          0 Apr 30  2004 DSTs_801a
drwxrwxr-x   6 cmsprod  zh                          0 Jan 30  2004 DVD
drwxr-xr-x 3380 cmsprod  zh                          0 Jun 30  2005 FNAL
drwxrwxr-x   5 cmsprod  zh                          0 Feb 09  2007 MTCC
drwxrwxr-x  46 cmsprod  zh                          0 Apr 05  2006 PCP04
drwxrwxr-x 166 cmsprod  zh                          0 Apr 13  2006 PTDR
drwxrwxrwx   0 cmsprod  zh                          0 Apr 30  2007 PitData
drwxrwxr-x   1 cmsprod  zh                          0 Oct 26  2005 RefDB
drwxrwxr-x   7 cmsprod  zh                          0 Feb 24  2010 T0
drwxrwxr-x   1 cmsprod  zh                          0 Sep 12  2007 T0Prototype
drwxrwxr-x   2 cmsprod  zh                          0 Feb 18  2004 TW
drwxrwxr-x  50 cmsprod  zh                          0 Mar 01  2004 Valid
drwxrwxr-x  84 cmsprod  zh                          0 May 26  2010 Validation
drwxrwxr-x   9 cmsprod  zh                          0 Dec 08  2008 archive
drwxrwxr-x   5 cmsprod  zh                          0 Jan 03  2004 archive-shift20-obsolete
drwxr-xr-x   1 1046     c3                          0 Jun 22  2005 archives
drwxrwxrwx   2 12406    zh                          0 Oct 01  2001 cmsbt
drwxrwxr-x   5 cmshi    zh                          0 May 24  2006 cmshi
drwxrwxr-x   1 cmsprod  zh                          0 Aug 08  2002 comphep
drwxr-xr-x   1 cmsprod  zh                          0 Jul 07  2005 cosmic
drwxrwxr-x   2 cmsprod  zh                          0 May 01  2003 eff
drwxr-xr-x   6 emuslice zh                          0 Jun 11  2008 emuslice
drwxrwxr-x  24 cmsprod  zh                          0 Feb 14  2008 generation
drwxrwxr-x  35 cmsprod  zh                          0 Jul 17  2006 grid
drwxr-xr-x   2 1066     zh                          0 Aug 21  2006 h2_testbeam
drwxrwxr--   0 cmsprod  zh                          0 Aug 22  2002 import
drwxrwxr-x   4 duccio   zh                          0 Oct 10  2006 integration
-rwxr-xr-x   1 cmsprod  zh                   89161728 May 17  2002 jet0900.FDDB
-rw-rw-r--   1 cmsprod  zh                     541096 Jun 29  2005 me-0629-100
-rw-rw-r--   1 cmsprod  zh                     598580 May 02  2006 me-te-0502-01
drwxrwxr-x  13 cmsprod  zh                          0 Feb 04  2002 official_geometry_files
drwxrwxr-x   1 outreach zh                          0 Jul 14  2006 outreach
drwxrwxr-x 3629 cmsprod  zh                          0 Mar 12  2008 phedex_heartbeat
drwxrwxr-x 317 cmsprod  zh                          0 Nov 28  2006 phedex_loadtest
drwxr-xr-x  23 cmsprod  zh                          0 Jul 16  2003 reconstruction
drwxrwxr-x   3 23928    zh                          0 Jun 28  2005 repos_standalone
drwxrwxr-x  41 cmsprod  zh                          0 Jan 31  2006 simulation
drwxr-xr-x  56 cmsprod  zh                          0 Nov 04 16:47 store
drwxrwxrwx   3 cmsprod  zh                          0 Jul 24  2006 t0test
drwxrwxr-x 181 cmsprod  zh                          0 May 24  2008 test
-rw-rw-r--   1 cmsprod  zh                     658517 Feb 19  2004 test-2
-rw-r--r--   1 cmsprod  zh                    5008283 Dec 11  2001 test.transfer
drwxrwxr-x  20 cmsprod  zh                          0 Aug 21  2008 testbeam
drwxr-xr-x   0 cmsprod  zh                          0 Nov 16  2007 xrdt

rfdir /castor/

The full set of installed CASTOR man-pages can be viewed with the command 'man -k castor'.
Every LXPLUS user has a directory in CASTOR. It has the same structure as your AFS directory but it is not reachable via standard unix commands (ls, cp, etc...). Special commands must be used.

nsmkdir create a subdirectory
  nsmkdir /castor/
rfdir is a RFIO command and can therefore also be used to list local or remote files
  ie rfdir /castor/
nsls like ls
  nsls -l /castor/
  mrwxr--r-- 1 laman vy 10 Mar 21 15:06 castor.txt
xrdcp copy file
  xrdcp xroot:// myfile.txt
  xroot is the protocol, followed by the host of the xroot server
  /castor/ is the CASTOR full path
  “castorpublic” is the entry point for all users not affiliated with an LHC experiment.
rfcp copy file from castor
  rfcp /castor/ .
stager_qry get status of file on castor
  stager_qry -M /castor/

Castor help:

  • RFIO commands such as rfcp don't work
  • User comment: In my case, this line was in /etc/sysconfig/iptables.castor and not in /etc/sysconfig/iptables
  • After re-inserting it back and restarting iptables rfcp works.


Naming conventions inside CMSSW .root files

  • 'C++ namespace' + 'class name' + '_label' + '__RECO',
  • for example: recoJetdmRefProdTofloatAssociationVector_jetProbabilityBJetTags__RECO
  • suggests C++ namespace = 'reco'
  • suggests class name = 'JetdmRefProdTofloatAssociationVector'
  • suggests label = 'jetProbabilityBJetTags'
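The branch name above splits on underscores into four fields: the friendly type name (with the C++ namespace fused onto the front), the module label, the product instance name (often empty, hence the double underscore), and the process name. A minimal Python sketch of that split (the helper name is mine, not a CMSSW utility):

```python
# Minimal sketch: split a CMSSW branch name of the form
# <type>_<moduleLabel>_<instanceName>_<processName> on underscores.
# parse_branch_name is a hypothetical helper for illustration only.

def parse_branch_name(branch):
    """Split a CMSSW product branch name into its four fields."""
    type_name, label, instance, process = branch.split("_")
    return {
        "type": type_name,    # e.g. recoJetdmRefProdTofloatAssociationVector
        "label": label,       # module label
        "instance": instance, # product instance name (often empty)
        "process": process,   # process name, e.g. RECO
    }

fields = parse_branch_name(
    "recoJetdmRefProdTofloatAssociationVector_jetProbabilityBJetTags__RECO")
print(fields["label"])    # jetProbabilityBJetTags
print(fields["process"])  # RECO
```

The empty third field is why the example name contains "__" before RECO.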

Code and code generators

SkeletonCodeGenerator Writing your own EDAnalyzer

Token/Handle examples


Classes, Methods and .h files


Compiling and running CMSSW

To start
  • 1) Create a new project area -> cmsrel CMSSW_3_6_2 -> cd CMSSW_3_6_2/src/ -> cmsenv
  • 2) In CMSSW_3_6_2/src/ -> mkdir Demo -> cd Demo
  • 3) Create "skeleton" of EDAnalyzer module (only way to compile properly) -> mkedanlzr DemoAnalyzer <- your own fn here
  • 4) Compile -> cd DemoAnalyzer -> scram b
  • 5) Also, in /DemoAnalyzer/, a configuration file is automatically created. Change it for new data sources etc.
  • 6) Run the job -> cmsRun
  • 7) Code changes - go to: /Demo/DemoAnalyzer/src/
  • 8) Compile and run, in the /DemoAnalyzer/ directory, edit DemoAnalyzer/BuildFile, if necessary -> scram b -> cmsRun
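Steps 5 and 6 refer to the auto-generated configuration that cmsRun takes as its argument. A minimal sketch of such a config is below; the file and analyzer names are placeholders, not the ones mkedanlzr actually writes for you:

```python
# Minimal cmsRun configuration sketch (file names here are placeholders).
import FWCore.ParameterSet.Config as cms

process = cms.Process("Demo")
process.load("FWCore.MessageService.MessageLogger_cfi")

# limit the job to a few events while testing
process.maxEvents = cms.untracked.PSet(input=cms.untracked.int32(10))

process.source = cms.Source("PoolSource",
    # replace with the /store path of the file you want to analyse
    fileNames=cms.untracked.vstring('file:myfile.root')
)

process.demo = cms.EDAnalyzer("DemoAnalyzer")

process.p = cms.Path(process.demo)
```

Changing the data source (step 5) means editing the fileNames vstring above.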





FirstCollisionsAnalysis Collisions Dec 09

I also made a skim of L1 EG 2:


BSC 40-41 skims (from Chiara): /castor/ bit40or41skim_expressPhysics_run123591_full.root /castor/ bit40or41skim_expressPhysics_run123592_full.root /castor/ bit40or41skim_expressPhysics_run123596_full.root

BSC 40-41 && L1_SingleEG2 skim: /castor/ L1EGSkim_bit40or41skim_expressPhysics_run123591_full.root /castor/ L1EGSkim_bit40or41skim_expressPhysics_run123592_full.root /castor/ L1EGSkim_bit40or41skim_expressPhysics_run123596_full.root

iSpy files of above BSC 40-41 && L1_SingleEG2 skim files: iSpy/ispy_L1EGSkim_bit40or41skim_expressPhysics_run123591_full.ig iSpy/ispy_L1EGSkim_bit40or41skim_expressPhysics_run123592_full.ig iSpy/ispy_L1EGSkim_bit40or41skim_expressPhysics_run123596_full.ig

P Merid, 6 Dec 09, I'm producing skims from the Minimum Bias sample and filtering on bit 40||41. The samples are available here:


Toyoko, 6 Dec 09 I also made a skim of L1 EG 2: Summary: BSC 40-41 skims (from Chiara):


BSC 40-41 && L1_SingleEG2 skim:


iSpy files of above BSC 40-41 && L1_SingleEG2 skim files:

General CMS sites

CMSSW CVS site DQM run registry user instructions Access to DQM run registry
Access from outside CERN CMS Dashboard DBS (Database Bookkeeping Service) discovery page
DBS tutorial Data discovery interface Frontier tags
Beam Spot data

ECAL sites

ECAL Twiki pages ECAL software - EcalReco ECAL DPG
ECAL DQM ECAL channel Status ECAL DQM shift instructions
ECAL data archiving at point 5 ECAL local DAQ files in DBS
EE vfe txt map etc in CVS EE TCC map in CVS
DetId SWguideEcalReco


900GeV iso studies Di_electrons_studies_with_first_collisions

EE map in CVS Google 'CMSSW CVS', then CMSSW->Geometry->EcalMapping->data->EEMap.txt Lines like:

ix iy iz  
100 56 -1 21 1 2 1 4 19 4 10 1

ECAL local DAQ files in DBS Within few minutes from the end of the Ecal local DAQ data taking, the Streamer raw data files of the Ecal local DAQ runs and the binary Ecal MATACQ files are copied to a central CMS dropbox area (cms-tier0-stage:/dropbox/ecal/daq-data/ and cms-tier0-stage:/dropbox/ecal/matacq-data/).

Two cronjobs run on cms-tier0-stage, monitor the availability of new files, and inject them in the CMS Tier0 DB. After a successful injection, the files will be managed by the central Tier0 team: they will be copied to Castor, will be inserted in DBS, and will be automatically removed from the CMS dropbox area.

Click the Runinfo icon and switch to the RunInfo Table - this brings up an empty line of fields. Under the number column, in the empty field, enter a filter such as '> 113990' to bring up the next Global runs after this one.


get Luminosity with CMSWBM
Go to ConditionBrowser, mid way down 'Core Services' on CMSWBM page

CMSWBM -> Condition browser
-> cms_omds_lb
-> CMS_BEAM_cond
-> CMS_lhc_luminosity
->lumi_totinst      <== tick box, set time, click on 1D, prescaler box set to 5 (could try 1)

select a plot value

Fill Luminosity on CMSWBM

CMS Luminosity plots

Example of a script to get the luminosity:

lxplus  ~/scratch0/CMS/CMSSW_3_5_6/src/QCDPhotonAnalysis/DataAnalyzers/test 
$ python json_2.txt 
Total luminosity: 98.16 /ub, 0.10 /nb, 9.82e-05 /pb, 9.82e-08 /fb 
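The four numbers in that printout are the same luminosity on the inverse-barn ladder: 1 /nb = 1e3 /ub, 1 /pb = 1e6 /ub, 1 /fb = 1e9 /ub. A sketch of the conversion (the helper name is mine):

```python
# Sketch of the inverse-luminosity unit ladder used in the printout above.
# convert_inverse_ub is a hypothetical helper for illustration.

def convert_inverse_ub(lumi_inv_ub):
    """Express a luminosity given in /ub also in /nb, /pb and /fb."""
    return {
        "/ub": lumi_inv_ub,
        "/nb": lumi_inv_ub / 1e3,
        "/pb": lumi_inv_ub / 1e6,
        "/fb": lumi_inv_ub / 1e9,
    }

lumi = convert_inverse_ub(98.16)
print("Total luminosity: %.2f /ub, %.2f /nb, %.2e /pb, %.2e /fb"
      % (lumi["/ub"], lumi["/nb"], lumi["/pb"], lumi["/fb"]))
# Total luminosity: 98.16 /ub, 0.10 /nb, 9.82e-05 /pb, 9.82e-08 /fb
```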


To access the DQM from outside P5 you can go to the following pages:
DQM online
DQM offline
To find a specific run click on run and enter your run number. For the offline DQM you need to enter the run number and click on 'Vary: Any' and then select the StreamExpress.

ECAL Database information

CMSSW code links | PFG code | Filters and skims code |

LXR cross referencer Twiki on EcalRawToRecHit EcalDigiCollections.h

GsfElectron e-gamma electron interface

8 Dec 09 Hi James, Do you want the raw, unreconstructed amplitude samples as digitized by the ADC? If so, the basic prescription is to start from RAW data, run the Ecal unpacker, and then read the EcalDigiCollection out of the event. If you look here [1] in the selectDigi method, you will see a way to obtain a Digi for a given DetId from the DigiCollection in the event, and then how the ADC values for each sample are extracted. Cheers, Seth



RECO data format table

RECO data format table

Luminosity data

1) cmswbm -> ConditionBrowser -> cms_omds_lb -> CMS_BEAM_COND -> CMS_LHC_LUMINOSITY 
2) enter begin and end dates -> click submit query -> will offer box "with Prescaler"
3) note - may need to set the prescale to 5 or 100 etc to get a file of allowed size
4) A plot of luminosity versus time is returned to the screen
5) Click on root, or text etc to choose the data format to save all the data to your area

With the root file, e.g. ConditionBrowser_1286193847132.root, 
1) root -l ConditionBrowser_1286193847132.root
2) TBrowser b,  go to ROOT files -> ConditionBrowser_1286193847132.root -> tree -> VALUE for the distribution of lumis
3) tree->Draw("VALUE:TIME")      for a plot of lumi vs time
4) tree->Draw("VALUE:TIME","VALUE>0.1")  plot, after a cut on VALUE
5) Returns (Long64_t)2331   <- number of times the cut succeeded
6) tree->Scan("VALUE:TIME")  <- listing of VALUE and  TIME
7) tree->Scan("VALUE:TIME","VALUE>0.1","colsize=10")  <- lists with each column 10 wide, to avoid printing in scientific, E10 etc

Find data files and events

16 Nov 2017

Run information, Jean Fay, email Thu 13/08/2015 10:34
You can get this information from WBM, Ecal Summary, Configuration Compare/Show button then expand Sequence 0:

Also, can go directly to a run on WBM by:
if WBM top page is broken (ie illegal run numbers!)

Default sequence (1 cycle) Cycle 0: SelectiveReadout (the default cycle for run Cosmics-SR) and check ECAL_TTCCI_CONFIGURATION

Looking to the configuration of this night run (254232), find the following 'CONFIGURATION_SCRIPT-PARAMS' :

--n-phot-2 0 --n-ir 0 --n-ped 1 --n-tp 1 --n-phot-1 600 --n-green 600 --n-orange-led 600 --n-blue-led 600 
--las-switch-time 400 --led-switch-time 1200 --las-switch-rst-time 3000 --eb-to-ee-las-switch-time 2000 
--starting-lme-from /nfshome0/ecalpro/.lsup_starting_lme

If I understand it well, there is no blue2, no infrared laser, 1 pedestal event, 1 Testpulse event, 
600 blue1 laser events, 600 green laser events, 600 orange LED events, 600 blue LED events in the calibration sequence.


Use conddb to locate files used by the CMS database.
Various commands:
conddb listTags | grep "Ecal" > listTags.txt

conddb list EcalPedestals_hlt

conddb search EcalPedestals_hlt

conddb list EcalPedestals_hlt > peds.txt

conddb listTag will give an error - but also list all possible items, ie
usage: conddb [-h] [--db DB] [--verbose] [--quiet] [--yes] [--nocolors]
[--editor EDITOR] [--force] [--noLimit] [--authPath AUTHPATH]
NOTE: the conddb entry is usually different from the actual run, ie:
EcalPedestals_hlt and EcalPedestals_express 
have been updated : IOV 305085 with the payload from the pedestal run 304680.


To get one file (or a few) from a T1, use the PhEDEx FileMover

9 Dec 2011

Now use the Data Aggregation System, das

A typical string in the das window

  • file dataset=/SingleElectron/Run2012B-PromptReco-v1/AOD run=194455 lumi=257
  • File: /store/data/Run2012B/SingleElectron/AOD/PromptReco-v1/000/194/455/C813A592-23A4-E111-B4C3-5404A6388694.root
  • file dataset=/SingleElectron/Run2012B-PromptReco-v1/AOD run=194455 lumi=258
  • file dataset=/ElectronHad/Run2011B-PromptReco-v1/AOD run=178708 lumi=326
  • das-event-search window.pdf

A dbs search for the global tag of a dataset

  • dbs search --query='find dataset.tag where dataset=/HT/Run2011A-PromptReco-v4/AOD'


Using DBS instance at:


 ./ --query="file dataset=/DoubleElectron/Run2011B*/AOD"
Showing 1-10 out of 5536 results, for more results use --idx/--limit options

Using dbs, 26 May 2012, to find dataset then run/lumi:

  • dbsql "find dataset where run = 194455" > fn.txt
  • dbsql "find file where dataset = /SingleElectron/Run2012B-PromptReco-v1/AOD and run =194455"
  • dbsql "find file where dataset = /SingleElectron/Run2012B-PromptReco-v1/AOD and run =194455 and lumi=257"

Using dbs, 4 Apr 2011:

dbsql "find dataset where run = 149442" > fn.txt    (to pipe the output to fn.txt)
           "find file where dataset = /Electron......................
            and lumi =
(can get a lumi section of data, not an individual event this way)

/store/...................          indicates a file on CASTOR


Code examples from Brian Heltsley, May 2011


Rec Hits

doxygen EcalRecHit Class Reference

CMS IN-2011/002 -- Definition of calibrated ECAL RecHits and the ECAL calibration and correction scheme

View of /CMSSW/DataFormats/EcalRecHit/src/


Revision 1.19
Fri Feb 4 13:34:06 2011 UTC (2 months ago) by argiro
Branch: MAIN
CVS Tags: CMSSW_4_3_0_pre2, CMSSW_4_3_0_pre1, CMSSW_4_2_0, V02-02-05, V02-02-04, V02-02-06, V02-02-03, V02-02-02, CMSSW_4_2_0_pre4, CMSSW_4_2_0_pre5, CMSSW_4_2_0_pre6, CMSSW_4_2_0_pre7, CMSSW_4_2_0_pre2, CMSSW_4_2_0_pre3, CMSSW_4_2_0_pre8, HEAD
Changes since 1.18: +0 -2 lines

added unsetFlag function, RecHit is created by default with no flag

#include "DataFormats/EcalRecHit/interface/EcalRecHit.h"
#include "DataFormats/EcalDetId/interface/EBDetId.h"
#include "DataFormats/EcalDetId/interface/EEDetId.h"
#include "DataFormats/EcalDetId/interface/ESDetId.h"
#include "FWCore/MessageLogger/interface/MessageLogger.h"
#include <cassert>
#include <math.h>

EcalRecHit::EcalRecHit() : CaloRecHit(), flagBits_(0) {

EcalRecHit::EcalRecHit(const DetId& id, float energy, float time, uint32_t flags, uint32_t flagBits) :

bool EcalRecHit::isRecovered() const {

  return (    checkFlag(kLeadingEdgeRecovered) || 
         checkFlag(kNeighboursRecovered)  ||

float EcalRecHit::chi2() const
        uint32_t rawChi2 = 0x7F & (flags()>>4);
        return (float)rawChi2 / (float)((1<<7)-1) * 64.;

float EcalRecHit::outOfTimeChi2() const
        uint32_t rawChi2Prob = 0x7F & (flags()>>24);
        return (float)rawChi2Prob / (float)((1<<7)-1) * 64.;

float EcalRecHit::outOfTimeEnergy() const
        uint32_t rawEnergy = (0x1FFF & flags()>>11);
        uint16_t exponent = rawEnergy>>10;
        uint16_t significand = ~(0xE<<9) & rawEnergy;
        return (float) significand*pow(10,exponent-5);

void EcalRecHit::setChi2( float chi2 )
        // bound the max value of the chi2
        if ( chi2 > 64 ) chi2 = 64;
        // use 7 bits
        uint32_t rawChi2 = lround( chi2 / 64. * ((1<<7)-1) );
        // shift by 4 bits (recoFlag)
        setFlags( (~(0x7F<<4) & flags()) | ((rawChi2 & 0x7F)<<4) );

void EcalRecHit::setOutOfTimeEnergy( float energy )
        if ( energy > 0.001 ) {
                uint16_t exponent = lround(floor(log10(energy)))+3;
                uint16_t significand = lround(energy/pow(10,exponent-5));
                // use 13 bits (3 exponent, 10 significand)
                uint32_t rawEnergy = exponent<<10 | significand;
                // shift by 11 bits (recoFlag + chi2)
                setFlags( ( ~(0x1FFF<<11) & flags()) | ((rawEnergy & 0x1FFF)<<11) );

void EcalRecHit::setOutOfTimeChi2( float chi2 )
        // bound the max value of chi2
        if ( chi2 > 64 ) chi2 = 64;
        // use 7 bits
        uint32_t rawChi2 = lround( chi2 / 64. * ((1<<7)-1) );
        // shift by 24 bits (recoFlag + chi2 + outOfTimeEnergy)
        setFlags( (~(0x7F<<24) & flags()) | ((rawChi2 & 0x7F)<<24) );

void EcalRecHit::setTimeError( uint8_t timeErrBits )
        // take the bits and put them in the right spot
        setAux( (~0xFF & aux()) | timeErrBits );

float EcalRecHit::timeError() const
        uint32_t timeErrorBits = 0xFF & aux();
        // all bits off --> time reco bailed out (return negative value)
        if( (0xFF & timeErrorBits) == 0x00 )
                return -1;
        // all bits on  --> time error over 5 ns (return large value)
        if( (0xFF & timeErrorBits) == 0xFF )
                return 10000;

        float LSB = 1.26008;
        uint8_t exponent = timeErrorBits>>5;
        uint8_t significand = timeErrorBits & ~(0x7<<5);
        return pow(2.,exponent)*significand*LSB/1000.;

bool EcalRecHit::isTimeValid() const
        if(timeError() <= 0)
          return false;
          return true;

bool EcalRecHit::isTimeErrorValid() const
        if(!isTimeValid())
          return false;
        if(timeError() >= 10000)
          return false;

        return true;

 /// DEPRECATED provided for temporary backward compatibility
EcalRecHit::Flags EcalRecHit::recoFlag() const {
  for (int i=kUnknown; ; --i){
    if (checkFlag(i)) return Flags(i);
    if (i==0) break;
  // no flag assigned, assume good
  return kGood;

std::ostream& operator<<(std::ostream& s, const EcalRecHit& hit) {
  if (hit.detid().det() == DetId::Ecal && hit.detid().subdetId() == EcalBarrel) 
    return s << EBDetId(hit.detid()) << ": " << << " GeV, " << hit.time() << " ns";
  else if (hit.detid().det() == DetId::Ecal && hit.detid().subdetId() == EcalEndcap) 
    return s << EEDetId(hit.detid()) << ": " << << " GeV, " << hit.time() << " ns";
  else if (hit.detid().det() == DetId::Ecal && hit.detid().subdetId() == EcalPreshower) 
    return s << ESDetId(hit.detid()) << ": " << << " GeV, " << hit.time() << " ns";
  else
    return s << "EcalRecHit undefined subdetector" ;
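The setChi2/chi2 and setOutOfTimeEnergy/outOfTimeEnergy pairs above pack their values into bit fields of the flags word: chi2 as 7 bits scaled to the range 0..64, and the out-of-time energy as a 3-bit decimal exponent plus 10-bit significand. A Python re-implementation of just the packing arithmetic, for illustration (function names are mine; chi2 is quantised, so the round trip is only accurate to about a quarter of a unit):

```python
# Python re-implementation (for illustration) of the bit packing used by
# EcalRecHit::setChi2/chi2 and setOutOfTimeEnergy/outOfTimeEnergy above.
import math

def set_chi2(flags, chi2):
    """Store chi2 in bits 4..10 of 'flags': 7 bits covering 0..64."""
    chi2 = min(chi2, 64.0)                      # bound the max value
    raw = int(round(chi2 / 64.0 * ((1 << 7) - 1)))
    return (~(0x7F << 4) & flags) | ((raw & 0x7F) << 4)

def get_chi2(flags):
    raw = 0x7F & (flags >> 4)
    return float(raw) / ((1 << 7) - 1) * 64.0

def set_oot_energy(flags, energy):
    """Store energy in bits 11..23: 3-bit decimal exponent, 10-bit significand."""
    exponent = int(round(math.floor(math.log10(energy)))) + 3
    significand = int(round(energy / 10.0 ** (exponent - 5)))
    raw = exponent << 10 | significand
    return (~(0x1FFF << 11) & flags) | ((raw & 0x1FFF) << 11)

def get_oot_energy(flags):
    raw = 0x1FFF & (flags >> 11)
    exponent = raw >> 10
    significand = raw & 0x3FF
    return significand * 10.0 ** (exponent - 5)

flags = 0
flags = set_chi2(flags, 10.0)
flags = set_oot_energy(flags, 12.3)
print(round(get_chi2(flags), 2))          # 10.08 (7-bit quantisation)
print(round(get_oot_energy(flags), 6))    # 12.3 (3 significant figures survive)
```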


View of /CMSSW/DataFormats/EcalDetId/interface/EBDetId.h


add a function to return the approximate eta of a DetId


#include <ostream>
#include <cmath>
#include <cstdlib>
#include "DataFormats/DetId/interface/DetId.h"
#include "DataFormats/EcalDetId/interface/EcalSubdetector.h"
#include "DataFormats/EcalDetId/interface/EcalTrigTowerDetId.h"

/** \class EBDetId
 *  Crystal identifier class for the ECAL barrel

class EBDetId : public DetId {
  enum { Subdet=EcalBarrel};
  /** Constructor of a null id */
  EBDetId() {}
  /** Constructor from a raw value */
  EBDetId(uint32_t rawid) : DetId(rawid) {}
  /** Constructor from crystal ieta and iphi 
      or from SM# and crystal# */
  EBDetId(int index1, int index2, int mode = ETAPHIMODE);
  /** Constructor from a generic cell id */
  EBDetId(const DetId& id);
  /** Assignment operator from cell id */
  EBDetId& operator=(const DetId& id);

  /// get the subdetector, i.e. EcalBarrel (what else?)
  // EcalSubdetector subdet() const { return EcalSubdetector(subdetId()); }
  static EcalSubdetector subdet() { return EcalBarrel;}

  /// get the z-side of the crystal (1/-1)
  int zside() const { return (id_&0x10000)?(1):(-1); }
  /// get the absolute value of the crystal ieta
  int ietaAbs() const { return (id_>>9)&0x7F; }
  /// get the crystal ieta
  int ieta() const { return zside()*ietaAbs(); }
  /// get the crystal iphi
  int iphi() const { return id_&0x1FF; }
  /// get the HCAL/trigger ieta of this crystal
  int tower_ieta() const { return ((ietaAbs()-1)/5+1)*zside(); }
  /// get the HCAL/trigger iphi of this crystal
  int tower_iphi() const;
  /// get the HCAL/trigger iphi of this crystal
  EcalTrigTowerDetId tower() const { return EcalTrigTowerDetId(zside(),EcalBarrel,abs(tower_ieta()),tower_iphi()); }
  /// get the ECAL/SM id
  int ism() const;
  /// get the number of module inside the SM (1-4)
  int im() const;
  /// get ECAL/crystal number inside SM
  int ic() const;
  /// get the crystal ieta in the SM convention (1-85)
  int ietaSM() const { return ietaAbs(); }
  /// get the crystal iphi (1-20)
  int iphiSM() const { return (( ic() -1 ) % kCrystalsInPhi ) + 1; }
  // is z positive?
  bool positiveZ() const { return id_&0x10000;}
  // crystal number in eta-phi grid
  int numberByEtaPhi() const { 
    return (MAX_IETA + (positiveZ() ? ietaAbs()-1 : -ietaAbs()) )*MAX_IPHI+ iphi()-1;
  // index numbering crystal by SM
  int numberBySM() const; 
  /// get a compact index for arrays
  int hashedIndex() const { return numberByEtaPhi(); }

  uint32_t denseIndex() const { return hashedIndex() ; }

  /** returns a new EBDetId offset by nrStepsEta and nrStepsPhi (can be negative), 
    * returns EBDetId(0) if invalid */
  EBDetId offsetBy( int nrStepsEta, int nrStepsPhi ) const;

  /** returns a new EBDetId on the other zside of barrel (ie iEta*-1), 
    * returns EBDetId(0) if invalid (shouldnt happen) */
  EBDetId switchZSide() const;
  /** following are static member functions of the above two functions
    * which take and return a DetId, returns DetId(0) if invalid 
  static DetId offsetBy( const DetId startId, int nrStepsEta, int nrStepsPhi );
  static DetId switchZSide( const DetId startId );

  /** return an approximate values of eta (~0.15% precise)
  float approxEta() const { return ieta() * crystalUnitToEta; }
  static float approxEta( const DetId id );

  static bool validDenseIndex( uint32_t din ) { return ( din < kSizeForDenseIndexing ) ; }

  static EBDetId detIdFromDenseIndex( uint32_t di ) { return unhashIndex( di ) ; }

  /// get a DetId from a compact index for arrays
  static EBDetId unhashIndex( int hi ) ;

  static bool validHashIndex(int i) { return !(i<MIN_HASH || i>MAX_HASH); }

  /// check if a valid index combination
  static bool validDetId(int i, int j) ;

  static bool isNextToBoundary(EBDetId id);

  static bool isNextToEtaBoundary(EBDetId id);

  static bool isNextToPhiBoundary(EBDetId id);

  //return the distance in eta units between two EBDetId
  static int distanceEta(const EBDetId& a,const EBDetId& b); 
  //return the distance in phi units between two EBDetId
  static int distancePhi(const EBDetId& a,const EBDetId& b); 

  /// range constants
  static const int MIN_IETA = 1;
  static const int MIN_IPHI = 1;
  static const int MAX_IETA = 85;
  static const int MAX_IPHI = 360;
  static const int kChannelsPerCard = 5;
  static const int kTowersInPhi = 4;  // per SM
  static const int kModulesPerSM = 4;
  static const int kModuleBoundaries[4] ;
  static const int kCrystalsInPhi = 20; // per SM
  static const int kCrystalsInEta = 85; // per SM
  static const int kCrystalsPerSM = 1700;
  static const int MIN_SM = 1;
  static const int MAX_SM = 36;
  static const int MIN_C = 1;
  static const int MAX_C = kCrystalsPerSM;
  static const int MIN_HASH =  0; // always 0 ...
  static const int MAX_HASH =  2*MAX_IPHI*MAX_IETA-1;

  // eta coverage of one crystal (approximate)
  static const float crystalUnitToEta;

  enum { kSizeForDenseIndexing = MAX_HASH + 1 } ;
  // function modes for (int, int) constructor
  static const int ETAPHIMODE = 0;
  static const int SMCRYSTALMODE = 1;

std::ostream& operator<<(std::ostream& s,const EBDetId& id);
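The accessors in the header above show how a barrel crystal id is packed: iphi in bits 0-8, |ieta| in bits 9-15, and the z side in bit 16, with hashedIndex() mapping the 2x85x360 crystals onto the dense range 0..61199. A Python sketch of the same arithmetic (function names are mine, the bit layout and formula follow the header):

```python
# Illustration of the EBDetId bit layout shown above: iphi in bits 0-8,
# |ieta| in bits 9-15, and the z side in bit 16 of the raw id.

MAX_IETA, MAX_IPHI = 85, 360

def eb_rawid(ieta_val, iphi_val):
    """Pack crystal (ieta, iphi) into EBDetId-style bit fields."""
    zbit = 0x10000 if ieta_val > 0 else 0
    return zbit | (abs(ieta_val) << 9) | iphi_val

def ieta(rawid):
    zside = 1 if rawid & 0x10000 else -1
    return zside * ((rawid >> 9) & 0x7F)

def iphi(rawid):
    return rawid & 0x1FF

def hashed_index(rawid):
    """Dense index 0..61199, as in EBDetId::numberByEtaPhi()."""
    abs_eta = abs(ieta(rawid))
    offset = abs_eta - 1 if ieta(rawid) > 0 else -abs_eta
    return (MAX_IETA + offset) * MAX_IPHI + iphi(rawid) - 1

r = eb_rawid(1, 1)
print(ieta(r), iphi(r), hashed_index(r))   # 1 1 30600
print(hashed_index(eb_rawid(-85, 1)))      # 0
print(hashed_index(eb_rawid(85, 360)))     # 61199  (= MAX_HASH)
```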


View of /CMSSW/DataFormats/EcalRecHit/interface/EcalRecHitCollections.h

CMSSW/ DataFormats/ EcalRecHit/ interface/ EcalRecHitCollections.h

#include "DataFormats/Common/interface/SortedCollection.h"
#include "DataFormats/EcalRecHit/interface/EcalRecHit.h"
#include "DataFormats/EcalRecHit/interface/EcalUncalibratedRecHit.h"
#include "DataFormats/Common/interface/Ref.h"
#include "DataFormats/Common/interface/RefVector.h"

typedef edm::SortedCollection<EcalRecHit> EcalRecHitCollection;
typedef edm::Ref<EcalRecHitCollection> EcalRecHitRef;
typedef edm::RefVector<EcalRecHitCollection> EcalRecHitRefs;
typedef edm::RefProd<EcalRecHitCollection> EcalRecHitsRef;

typedef EcalRecHitCollection EBRecHitCollection;
typedef EcalRecHitCollection EERecHitCollection;
typedef EcalRecHitCollection ESRecHitCollection;

typedef edm::SortedCollection<EcalUncalibratedRecHit> EcalUncalibratedRecHitCollection;
typedef edm::Ref<EcalUncalibratedRecHitCollection> EcalUncalibratedRecHitRef;
typedef edm::RefVector<EcalUncalibratedRecHitCollection> EcalUncalibratedRecHitRefs;
typedef edm::RefProd<EcalUncalibratedRecHitCollection> EcalUncalibratedRecHitsRef;

typedef EcalUncalibratedRecHitCollection EBUncalibratedRecHitCollection;
typedef EcalUncalibratedRecHitCollection EEUncalibratedRecHitCollection;

#endif

View of CMSSW/DataFormats/EcalRecHit/interface/EcalRecHit.h

View of CMSSW/DataFormats/EcalRecHit/interface/EcalRecHit.h
added checkFlagMask() function


#include "DataFormats/CaloRecHit/interface/CaloRecHit.h"

/** \class EcalRecHit
 * \author P. Meridiani INFN Roma1

class EcalRecHit : public CaloRecHit {
  typedef DetId key_type;

  // recHit flags
    enum Flags { 
          kGood=0,                   // channel ok, the energy and time measurement are reliable
          kPoorReco,                 // the energy is available from the UncalibRecHit, but approximate
                                              (bad shape, large chi2)
          kOutOfTime,                // the energy is available from the UncalibRecHit (sync reco),
                                              but the event is out of time
          kFaultyHardware,        // The energy is available from the UncalibRecHit, 
                                              channel is faulty at some hardware level (e.g. noisy)
          kNoisy,                    // the channel is very noisy
          kPoorCalib,                // the energy is available from the UncalibRecHit, but the
                                            calibration of the channel is poor
          kSaturated,                // saturated channel (recovery not tried)
          kLeadingEdgeRecovered,     // saturated channel: energy estimated from the leading
                                            edge before saturation
          kNeighboursRecovered,      // saturated/isolated dead: energy estimated from neighbours
          kTowerRecovered,           // channel in TT with no data link, info retrieved from
                                                  Trigger Primitive
          kDead,                     // channel is dead and any recovery fails
          kKilled,                   // MC only flag: the channel is killed in the real detector
          kTPSaturated,              // the channel is in a region with saturated TP
          kL1SpikeFlag,              // the channel is in a region with TP with sFGVB = 0
          kWeird,                    // the signal is believed to originate from an anomalous deposit
          kDiWeird,                  // the signal is anomalous, and neighbours another anomalous channel
          kUnknown                   // to ease the interface with functions returning flags. 

  // ES recHit flags
  enum ESFlags {

  /** bit structure of CaloRecHit::flags_ used in EcalRecHit:
   *  | 32 | 31...25 | 24...12 | 11...5 | 4...1 |
   *     |      |         |         |       |
   *     |      |         |         |       +--> reco flags       ( 4 bits)
   *     |      |         |         +--> chi2 for in time events  ( 7 bits)
   *     |      |         +--> energy for out-of-time events      (13 bits)
   *     |      +--> chi2 for out-of-time events                  ( 7 bits)
   *     +--> spare                                               ( 1 bit )

  // by default a recHit is created with no flag
  EcalRecHit(const DetId& id, float energy, float time, uint32_t flags = 0, uint32_t flagBits = 0);
  /// get the id
  // For the moment not returning a specific id for subdetector
  DetId id() const { return DetId(detid());}
  bool isRecovered() const;
  bool isTimeValid() const;
  bool isTimeErrorValid() const;

  float chi2() const;
  float outOfTimeChi2() const;

  // set the energy for out of time events
  // (only energy >= 0 will be stored)
  float outOfTimeEnergy() const;
  float timeError() const;

  void setChi2( float chi2 );
  void setOutOfTimeChi2( float chi2 );
  void setOutOfTimeEnergy( float energy );

  void setTimeError( uint8_t timeErrBits );

  /// set the flags (from Flags or ESFlags) 
  void setFlag(int flag) {flagBits_|= (0x1 << flag);}
  void unsetFlag(int flag) {flagBits_ &= ~(0x1 << flag);}

  /// check if the flag is true
  bool checkFlag(int flag) const{return flagBits_ & ( 0x1<<flag);}

  /// apply a bitmask to our flags. Experts only
  bool checkFlagMask(uint32_t mask) const { return flagBits_&mask; }

  /// DEPRECATED provided for temporary backward compatibility
  Flags recoFlag() const ;


  /// store rechit condition (see Flags enum) in a bit-wise way 
  uint32_t flagBits_;

std::ostream& operator<<(std::ostream& s, const EcalRecHit& hit);
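The setFlag/unsetFlag/checkFlag members above are plain single-bit operations on the flagBits_ word, with each Flags enum value (kGood=0, then kPoorReco, kOutOfTime, ...) naming one bit. A Python sketch of the same bit-wise storage (enum values taken from the order listed above):

```python
# Sketch of the flagBits_ handling shown above: each Flags value indexes
# one bit of a 32-bit word (values follow the order of enum Flags).
K_GOOD, K_POOR_RECO, K_OUT_OF_TIME = 0, 1, 2   # first entries of enum Flags

def set_flag(flag_bits, flag):
    return flag_bits | (0x1 << flag)

def unset_flag(flag_bits, flag):
    return flag_bits & ~(0x1 << flag)

def check_flag(flag_bits, flag):
    return bool(flag_bits & (0x1 << flag))

bits = 0                                   # a rec hit is created with no flag
bits = set_flag(bits, K_OUT_OF_TIME)
print(check_flag(bits, K_OUT_OF_TIME))     # True
bits = unset_flag(bits, K_OUT_OF_TIME)
print(check_flag(bits, K_OUT_OF_TIME))     # False
```

Because the flags are independent bits, several can be set on the same hit, which is why checkFlagMask() can test a whole set of them at once.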


At the RecHit level, the raw rechit is multiplied by Lasercalib x Intercalib x ADCtoGeV. The EcalRecProducers/plugins area shows how to obtain the Intercalib, Time calib, ADCtoGeV, channel status and laser correction data.

-- DavidCockerill - 06-Nov-2009

  • Higgs-4l-MC-dists-AN2012-141-v9.jpg:

Topic attachments
Attachment | History | Action | Size | Date | Who
BuildFile | r1 | manage | 0.6 K | 2011-09-14 17:13 | DavidCockerill
(.cc file, name lost) | r1 | manage | 3.7 K | 2011-09-14 17:12 | DavidCockerill
Git-search-and-faqs.pdf | r1 | manage | 68.7 K | 2014-11-12 16:42 | DavidCockerill
WBM-conddb-info.pdf | r2 r1 | manage | 340.6 K | 2017-11-17 09:56 | DavidCockerill
crab-gennai.cfg | r1 | manage | 11.0 K | 2012-11-22 09:10 | DavidCockerill
das-event-search.pdf | r1 | manage | 105.2 K | 2012-02-20 18:30 | DavidCockerill
geom-example.txt | r1 | manage | 0.7 K | 2011-09-14 17:16 | DavidCockerill
mkcfg-jansenn.cfg | r1 | manage | 8.4 K | 2012-11-22 09:10 | DavidCockerill
Topic revision: r119 - 2020-02-17 - DavidCockerill