Analysis Work
CMSSW WorkBook
New workspace area, 20GB, /afs/cern.ch/work/d/davec/
Higgs
HiggsPropertiesExercise
- Higgs-4l-MC-dists-AN2012-141-v9.jpg:
AAA, Any Data, Anytime, Anywhere
XrootdService
In ROOT:
TFile *f = TFile::Open("root://cmsxrootd.fnal.gov//store/mc/SAM/GenericTTbar/GEN-SIM-RECO/CMSSW_5_3_1_START53_V5-v1/0013/CE4D66EB-5AAE-E111-96D6-003048D37524.root");
CMSSW, old way:
process.source = cms.Source("PoolSource",
# replace 'myfile.root' with the source file you want to use
fileNames = cms.untracked.vstring('/store/myfile.root')
)
With AAA:
process.source = cms.Source("PoolSource",
# replace 'myfile.root' with the source file you want to use
fileNames = cms.untracked.vstring('root://cmsxrootd.fnal.gov//store/myfile.root')
)
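The AAA form is just the LFN with a redirector prefix; a minimal sketch in plain Python (the redirector host is the one from the example above) that builds the URL:

```python
# Turn a CMS logical file name (LFN, which starts with /store/) into an
# AAA-readable xrootd URL by prefixing a redirector. Sketch only.
def aaa_url(lfn, redirector="cmsxrootd.fnal.gov"):
    if not lfn.startswith("/store/"):
        raise ValueError("expected an LFN starting with /store/")
    # root://<host>/ plus /store/... gives root://host//store/...
    return "root://%s/%s" % (redirector, lfn)

print(aaa_url("/store/myfile.root"))
# root://cmsxrootd.fnal.gov//store/myfile.root
```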
DAS
DAS home page, DAS FAQs, DAS Query Guide, DAS commands
Workbook: Locating Data Samples with DAS
To check if a file is accessible:
xrdfs cms-xrd-global.cern.ch locate /store/data/............
If unavailable, returns with:
[ERROR] Server responded with an error: [3011] No servers have the file
If OK, returns with:
[::90.147.66.75]:1094 Manager ReadWrite
To check the release in DAS:
Dataset: /DoubleEG/Run2016H-PromptReco-v1/AOD
Creation time: 2016-09-21 00:02:10, Dataset size: 501.4MB, Number of blocks: 12, Number of events: 352, Number of files: 13, Physics group: NoGroup, Status: VALID, Type: data
Click on "Release" and see:
Release: CMSSW_8_0_19_patch1
Can also do, in DAS:
release dataset=/DoubleEG/Run2016H-PromptReco-v1/AOD
DAS web page examples of successful searches:
config dataset=/SingleElectron/Run2016H-PromptReco-v3/AOD
gives Release: CMSSW_8_0_22
and Global Tag: 80X_dataRun2_Prompt_v14
dataset dataset=/*/Run2016H*/AOD run=284036
dataset=/DoubleEG/Run2016H*/AOD
run dataset=/DoubleEG/Run2016G-PromptReco-v1/RECO
dataset run=278309
config dataset=/DoubleMuon/Run2016F-PromptReco-v1/AOD
dataset=/SingleMuon/Run2015B-PromptReco-v1/AOD
site dataset=/SingleMuon/Run2015B-PromptReco-v1/AOD
run dataset=/SingleMuon/Run2015C-PromptReco-v1/AOD
dataset=/SingleMu*/*/AOD
file dataset=/SingleMuon_0T/Run2015D-PromptReco-v4/AOD run=260493
/store/data/Run2015D/SingleMuon_0T/AOD/PromptReco-v4/000/260/493/00000/34B5777B-C283-E511-A8EF-02163E011EA3.root
Note how the DAS dataset name and the file path have a different structure!!
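The mapping between the two structures is mechanical for data datasets; a sketch, with the layout inferred only from the examples in these notes (/store/data/&lt;era&gt;/&lt;primary&gt;/&lt;tier&gt;/&lt;processing&gt;/ plus the run number split into three 3-digit groups):

```python
# Rebuild the LFN directory prefix from a DAS dataset name plus run number.
# Dataset: /<primary>/<era>-<processing>/<tier>
# LFN:     /store/data/<era>/<primary>/<tier>/<processing>/<000/NNN/NNN>/
def lfn_prefix(dataset, run):
    _, primary, era_proc, tier = dataset.split("/")
    era, proc = era_proc.split("-", 1)   # "Run2015D-PromptReco-v4"
    r = "%09d" % run                     # 260493 -> "000260493"
    run_dirs = "/".join([r[0:3], r[3:6], r[6:9]])
    return "/store/data/%s/%s/%s/%s/%s/" % (era, primary, tier, proc, run_dirs)

print(lfn_prefix("/SingleMuon_0T/Run2015D-PromptReco-v4/AOD", 260493))
# /store/data/Run2015D/SingleMuon_0T/AOD/PromptReco-v4/000/260/493/
```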
dataset=/SinglePhoton/Run2015D-PromptReco-v4/AOD
Dataset: /SinglePhoton/Run2015D-PromptReco-v4/AOD
Creation time: 2015-10-06 03:57:18, Dataset size: 3.2TB,
Number of blocks: 61, Number of events: 21845413, Number of files: 1055,
Physics group: NoGroup, Status: VALID, Type: data
run dataset=/SinglePhoton/Run2015D-PromptReco-v4/AOD
Run: 258159
...... etc etc for 134 runs in total
file dataset=/SinglePhoton/Run2015D-PromptReco-v4/AOD run=260725
/store/data/Run2015D/SinglePhoton/AOD/PromptReco-v4/000/260/725/00000/3C4BE0E0-BE84-E511-9C30-02163E013420.root
run=260725: 4 Nov 2015, 0.0 lumi !!!!!!!!!, 3.8T, non CERN sites
file dataset=/SinglePhoton/Run2015D-PromptReco-v4/AOD run=260540
/store/data/Run2015D/SinglePhoton/AOD/PromptReco-v4/000/260/540/00000/F834E11C-6A83-E511-9D0C-02163E01274F.root
run=260540: 1 Nov 2015, 3.2*10**33, 3.8T, HLT triggers: 1470, non CERN sites
file dataset=/SinglePhoton/Run2015D-PromptReco-v4/AOD run=260425
/store/data/Run2015D/SinglePhoton/AOD/PromptReco-v4/000/260/425/00000/10D17BB7-3281-E511-8559-02163E014755.root
and many others. HLT triggers: 5606983
file has Lumi: [[191, 198], [201, 208], [211, 218], [221, 228]], non CERN sites
summary dataset=/SingleMuon_0T/Run2015D-PromptReco-v4/AOD run=260493
with
Add filter/aggregator function to the query: grep and summary.nevents
gives:
Number of blocks: 1, Number of events: 9309, Number of files: 1, Number of lumis: 43, Sum(file_size): 1.2GB
Look for T2_CH_CERN,
StorageElement: srm-eoscms.cern.ch
Difference between the eos-reported location format and that for DAS:
eoscms ls -l /eos/cms/store/data/Run2015B/JetHT/RECO/PromptReco-v1/000/251/562/
Run2015B ==> Run2015B-PromptReco-v1
$ ./das_client.py --query="dataset dataset=/JetHT/*/RECO run=251562"
>>> returns /JetHT/Run2015B-PromptReco-v1/RECO
From DAS webpage:
file dataset=/JetHT/Run2015B-PromptReco-v1/RECO run=251562 lumi=285
>>> returns explicit file location for this run and lumi
/store/data/Run2015B/JetHT/RECO/PromptReco-v1/000/251/562/00000/0249E9DA-7F2A-E511-AC3F-02163E011D23.root
DAS to get root file locations with:
file dataset=/DoublePhoton/Run2012B-PromptReco-v1/AOD run=194108 lumi=575
Get:
File: /store/data/Run2012B/DoublePhoton/AOD/PromptReco-v1/000/194/108/D0FEA2C9-649F-E111-9368-003048D2BF1C.root
Clicking on file, size is 3.1GB
Site: T2_CH_CERN_HLT
site=srm-eoscms.cern.ch
DAS supports wild-card queries for dataset names, with filters applied afterwards, e.g.
dataset=/SingleMu/*-22Jan2013-v1/AOD | grep dataset.name,dataset.modification_time
dataset=/SingleMu*/*/AOD
will give everything for the SingleMu primary dataset and AOD data-tier.
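The wildcard semantics can be mimicked locally with Python's fnmatch, e.g. to pre-check which dataset names a pattern such as /SingleMu*/*/AOD would select (a rough sketch; DAS's own matching may differ in detail):

```python
import fnmatch

# A few dataset names taken from the query examples in these notes.
datasets = [
    "/SingleMu/Run2012A-22Jan2013-v1/AOD",
    "/SingleMuon/Run2015B-PromptReco-v1/AOD",
    "/SingleElectron/Run2016H-PromptReco-v3/AOD",
]
# Shell-style glob, as in the DAS query dataset=/SingleMu*/*/AOD
matches = [d for d in datasets if fnmatch.fnmatchcase(d, "/SingleMu*/*/AOD")]
print(matches)
# ['/SingleMu/Run2012A-22Jan2013-v1/AOD', '/SingleMuon/Run2015B-PromptReco-v1/AOD']
```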
CMSWBM Run-Event-Lumi info
For run info, go to cmswbm and click on Run Summary.
For run 194108, get start 2012.05.13 16:24:36, end 2012.05.13 22:06:09, nearly 6 hrs, triggers 831,997,827
Famous event 564,224,000 about 70% into run.
Click on Lumi Sections:
888 lumi sections listed. ~1 M events per lumi section. Each lumi section is only ~23 seconds !!
Famous event between the following times, info from clicking on Run Info:
CMS.TRG:NumTriggers 562,145,377 2012.05.13 20:07:24
CMS.TRG:NumTriggers 564,576,503 2012.05.13 20:08:24
Now back to Lumi Sections to search for event 564,224,000:
573 start 20:07:06 too early ??
574 start 20:07:30 possible
575 start 20:07:53 probable
576 start 20:08:16 too late
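The per-lumi-section numbers can be checked from the run summary figures quoted above (start/end time, 888 lumi sections, 831,997,827 triggers); a quick sketch:

```python
# Run 194108: start 16:24:36, end 22:06:09 (same day), 888 lumi sections,
# 831,997,827 triggers. All numbers taken from the CMSWBM run summary above.
start = 16 * 3600 + 24 * 60 + 36
end = 22 * 3600 + 6 * 60 + 9
duration = end - start                 # 20493 s, nearly 6 hrs
ls_length = duration / 888.0           # ~23 s per lumi section
events_per_ls = 831997827 / 888.0      # ~0.94 M events per lumi section
print(round(ls_length, 1), round(events_per_ls / 1e6, 2))
```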
GIT and doxygen
Get a single file from GIT
- search GIT for the file, from the GIT web interface
- click on the file
- select "Raw"
- in the browser, "save file as" to save the file to a directory
GIT syntax (A Bocci, 23 Sep 2015)
Most of the CMSSW git tools use the syntax /RecoVertex/BeamSpotProducer/ in .git/info/sparse-checkout to signify that the package
RecoVertex/BeamSpotProducer should be checked out.
- The leading / means that the package is at the base of the project, and not a subdirectory.
- For example /Configuration/ will match all the packages in the main Configuration directory, while
- Configuration/
- will match all the packages */Configuration (e.g. RecoVertex/Configuration, RecoPixelVertexing/Configuration, etc.)
However, git 1.7.1 (the default version on SLC6) does not understand the leading / .
- As you say, if one does a git checkout using that, it will remove all the packages.
- The only suggestion I have is to do git config --global push.default simple
- This will set a (useful) option that git 1.7.1 does not understand, so you will not be able to use it by mistake.
Old CVS links:
Old CVS tutorial/syntax
CMSSW CVS
CMSSW
EcalTools.h
static float swissCross( const DetId& id,
const EcalRecHitCollection & recHits,
float recHitThreshold ,
bool avoidIeta85=true);
static bool isNextToDead( const DetId& id, const edm::EventSetup& es);
static bool isNextToDeadFromNeighbours( const DetId& id,
const EcalChannelStatus& chs,
int chStatusThreshold) ;
static bool isNextToBoundary (const DetId& id);
/// true if near a crack or ecal border
static bool deadNeighbour(const DetId& id, const EcalChannelStatus& chs,
int chStatusThreshold,
int dx, int dy);
New consumes format:
In the class definition I have the following:
edm::EDGetTokenT<EBRecHitCollection> tok_EB_;
edm::EDGetTokenT<EERecHitCollection> tok_EE_;
edm::EDGetTokenT<EBDigiCollection> tok_EB_digi;
In the class constructor, where we have access to "config":
tok_EB_ = consumes<EcalRecHitCollection>(edm::InputTag("reducedEcalRecHitsEB"));
tok_EE_ = consumes<EcalRecHitCollection>(edm::InputTag("reducedEcalRecHitsEE"));
tok_EB_digi = consumes<EBDigiCollection>(edm::InputTag("selectDigi","selectedEcalEBDigiCollection"));
And then in the analysis part I have the following:
edm::Handle<EBRecHitCollection> EBRecHits;
edm::Handle<EERecHitCollection> EERecHits;
iEvent.getByToken( tok_EB_, EBRecHits );
iEvent.getByToken( tok_EE_, EERecHits );
Pulling in packages
EcalLaserDbService.cc and EcalLaserDbService.h downloaded with git into /afs/cern.ch/user/d/davec/CMSSW_8_0_8/src with
git-cms-addpkg CalibCalorimetry
NOTE - git wanted an empty src directory. Had to move "Reco" away temporarily.
NOTE - have to do "scramv1 b" in /afs/cern.ch/user/d/davec/CMSSW_8_0_8/src to get edited EcalLaserDbService.cc
Needed:
In EcalLaserDbService.h
mutable int dbprint;
mutable is needed because dbprint is modified inside the const member function:
float getLaserCorrection (DetId const & xid, edm::Timestamp const & iTime) const;
CMSSW PATH INFO
CMSSW path info
alias cmsenv ==> eval `scramv1 runtime -csh`
scram runtime -csh gives setenv PATH "path/folder list used to compile....................."
SCRAM
Source, Configuration, Release, And Management tool. It is the CMS build program. It is responsible for building framework applications and also making sure that all the necessary shared libraries are available.
Compile information for gcc optimization
Example of cmsenv operation, after setting SCRAM_ARCH=slc6_amd64_gcc472:
Setup the runtime environment with cmsenv:
cmsenv is an alias for eval `scramv1 runtime -csh`
Print the resulting environment with scram runtime -csh, which gives:
setenv PATH "/afs/cern.ch/user/d/davec/HZZ/sl6/CMSSW_5_3_18/bin/slc6_amd64_gcc472:/afs/cern.ch/user/d/davec/HZZ/sl6/CMSSW_5_3_18/external/slc6_amd64_gcc472/bin:/cvmfs/cms.cern.ch/slc6_amd64_gcc472/cms/cmssw/CMSSW_5_3_18/bin/slc6_amd64_gcc472:/cvmfs/cms.cern.ch/slc6_amd64_gcc472/cms/cmssw/CMSSW_5_3_18/external/slc6_amd64_gcc472/bin:/cvmfs/cms.cern.ch/slc6_amd64_gcc472/external/llvm/3.2-cms2/bin:/cvmfs/cms.cern.ch/slc6_amd64_gcc472/external/gcc/4.7.2-cms/bin:/afs/cern.ch/project/gd/LCG-share/3.2.11-1/d-cache/srm/bin:/afs/cern.ch/project/gd/LCG-share/3.2.11-1/d-cache/dcap/bin:/afs/cern.ch/project/gd/LCG-share/3.2.11-1/edg/bin:/afs/cern.ch/project/gd/LCG-share/3.2.11-1/glite/bin:/afs/cern.ch/project/gd/LCG-share/3.2.11-1/globus/bin:/afs/cern.ch/project/gd/LCG-share/3.2.11-1/lcg/bin:/afs/cern.ch/group/zh/bin:/afs/cern.ch/user/d/davec/scripts:/afs/cern.ch/cms/caf/scripts:/cvmfs/cms.cern.ch/common:/cvmfs/cms.cern.ch/bin:/usr/sue/bin:/usr/lib64/qt-3.3/bin:/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/afs/cern.ch/project/eos/installation/pro/bin/:/afs/cern.ch/sw/lcg/app/releases/ROOT/6.02.00/x86_64-slc6-gcc48-opt/root//bin:/afs/cern.ch/project/eos/installation/pro/bin/:/afs/cern.ch/sw/lcg/app/releases/ROOT/6.02.00/x86_64-slc6-gcc48-opt/root//bin";
where the sl6 ROOT is correctly there as:
ROOT/6.02.00/x86_64-slc6-gcc48-opt/root//bin
Scram commands:
scram | without any argument, prints the scram help about all the available commands
scram runtime -csh | prints the CMSSW environment
eval `scram runtime -csh` | sets the CMSSW environment
scram cmnd -help | help! ie scram arch -help
scram arch | gives the current architecture, ie slc6_amd64_gcc472 (amd64 = AMD 64-bit machine, gcc472 = GCC compiler version 4.7.2)
scram list ProjectName | ie scram list CMSSW gives the releases available for a given architecture
scram -arch slc4_ia32_gcc345 list CMSSW | lists CMSSW versions for slc4_ia32_gcc345
scram b | builds everything under your current working directory; run from the dev src directory (e.g. CMSSW_x_y_z_pre1/src) it builds every Subsystem/Package available; run from a specific Subsystem/Package directory (e.g. CMSSW_x_y_z_pre1/src/subsystemA/packageB) it builds only that package and the other packages it depends on (if needed)
CMSSW utilities:
edmPluginDump -a | grep MuonTriggerSelectionEmbedder
- tells you exactly where the plugin manager thinks that module should come from
- the first one in the list is the one that is actually used
General flow through a CMSSW programme is as follows:
Class setup:
class Pedestal : public edm::EDAnalyzer { // Pedestal inherits from EDAnalyzer
public:
explicit Pedestal(const edm::ParameterSet& ps); /* constructor */
~Pedestal(); /* destructor */
private:
virtual void beginJob(const edm::EventSetup&) ; <= private member function declaration in class Pedestal
virtual void analyze(const edm::Event&, const edm::EventSetup&); <= private member function declaration in class Pedestal
virtual void endJob() ; <= private member function declaration in class Pedestal
---------- then member data ---------------------------
TH2D *h001; // entries, gain = 12
const EcalElectronicsMapping* ecalElectronicsMap_;
};
Job
Begin job
Initialisation...conditions database, variables etc
Pedestal::Pedestal(iConfig); <= pass iConfig as follows:
Pedestal::Pedestal(const edm::ParameterSet& iConfig)
{
//now do what ever initialization is needed
edm::Service<TFileService> fs;
h001 = fs->make<TH2D>("h001"," ped entries, gain 12, z = +1", 102,-0.5,101.5, 102,-0.5, 101.5 );
// and insert any further histos here, but also globally defined above
}
Pedestal::beginJob(c);
// Object or variable 'c' passed into my code - Pedestal::beginJob(const edm::EventSetup& c) and is of type 'EventSetup'
Pedestal::beginJob(const edm::EventSetup& c) <= defining beginJob, 'c' passed from main programme
{ edm::ESHandle< EcalElectronicsMapping > elecHandle;
c.get< EcalMappingRcd >().get(elecHandle); <= 'c' is the EventSetup
ecalElectronicsMap_ = elecHandle.product(); }
Event loop
Pedestal::analyze(iEvent, iSetup); <= passes iEvent and iSetup to my analyze function as follows:
void Pedestal::analyze(const edm::Event& iEvent, const edm::EventSetup& iSetup) {...all my code...}
End job
Pedestal::endJob(); <= calls my private function 'endJob', no arguments, as below:
void Pedestal::endJob() { cout << "ievcount = " << ievcount << endl;
// now normalise histogrammes
// get means
// TH2D h002 = h002/h001;
adcsq = h003->GetBinContent(49,87);
cout << "adc squared = " << adcsq << endl;
}
EOS and CAF
CAF and EOS usage
17 feb 2020
Set up new eos web site for linux work:
https://test-davec2.web.cern.ch/test-davec2, with https://test-davec2.web.cern.ch/test-davec2/laser-analysis/
Started from D Petyt's eos page.
At the bottom of the page have: Like this page? Get it here. Contains: README.md, index.php, res, example.
Directory area from linux: /eos/home-d/davec/www
Subdirectory laser-analysis
Used github to set up the .htaccess file - ie for just CERN members, and for indexing across the site
Had to set up a "test" website via cernbox, since davec already taken in dfs
Site is http://cern.ch/test-davec2, pointed at my eos www site
EOS path /eos/user/d/davec/www/
Web site management
Configuring the page, ie where/how to show plots: done with the "res" file.
The res file contains a css file (css = cascading style sheet).
eos info/procedures
To search for files on eos:
How to access from cfg file:
- 'root://eoscms//eos/cms/store/relval/CMSSW_5_2_0_pre5/RelValQCD_FlatPt_15_3000/GEN-SIM-RECO/START52_V1-v1/
Copy a file from eos to my work area, ie Shervin's double electron file:
$ eos cp /eos/cms/store/user/shervin/calibration/8TeV/ZNtuples/alcareco/DoubleElectron-ZSkim-RUN2012A-13Jul-v1/190456-193621/190456-202305-13Jul_Prompt_Cal_Nov2012/DoubleElectron-ZSkim-RUN2012A-13Jul-v1-190456-193621.root /afs/cern.ch/work/d/davec/
[eos-cp] going to copy 1 files and 41.10 MB
[eoscp] DoubleElectron-ZSkim-RUN2012A-13Jul-v1-190456-193621.root Total 39.20 MB |====================| 100.00 % [3.4 MB/s]
[eos-cp] copied 1/1 files and 41.10 MB in 13.56 seconds with 3.03 MB/s
[lxplus316] ~ $ work
[lxplus316] /afs/cern.ch/work/d/davec $ ls -al *621.root
-r-------- 1 davec zh 41101089 Feb 5 14:49 DoubleElectron-ZSkim-RUN2012A-13Jul-v1-190456-193621.root
Other EOS notes:
Setup
- By default you will most likely find 'eos' in the setup provided by your experiment when you login on lxplus. Experiments configure the stable release which is used also in grid frameworks etc.
- If you want to make use of most recent features mentioned in this FAQ you have to pick the 'client' version of EOS using a bash or tcsh setup script:
- source /afs/cern.ch/project/eos/installation/[atlas|cms|lhcb|alice]/etc/setup.sh
- source /afs/cern.ch/project/eos/installation/[atlas|cms|lhcb|alice]/etc/setup.csh
- Verify first that you are connecting to 'your' experiment dedicated EOS instance.
- bash-3.2$ source /afs/cern.ch/project/eos/installation/[atlas|cms|lhcb|alice]/etc/setup.sh
- bash-3.2$ eos
# ---------------------------------------------------------------------------
# EOS Copyright (C) 2011 CERN/Switzerland
# This program comes with ABSOLUTELY NO WARRANTY; for details type `license'.
# This is free software, and you are welcome to redistribute it
# under certain conditions; type `license' for details.
# ---------------------------------------------------------------------------
EOS_INSTANCE=eoslhcb
EOS_SERVER_VERSION=0.1.6 EOS_SERVER_RELEASE=1
EOS_CLIENT_VERSION=0.2.5 EOS_CLIENT_RELEASE=1
The automatic EOS endpoint setting uses your group membership to sort you into an experiment.
If this does not work for you, you can just specify your endpoint via an environment variable:
ATLAS: export EOS_MGM_URL=root://eosatlas.cern.ch
CMS: export EOS_MGM_URL=root://eoscms.cern.ch
LHCB: export EOS_MGM_URL=root://eoslhcb.cern.ch [ there is no user space here - only usable via LHCB tools ]
ALICE: export EOS_MGM_URL=root://eosalice.cern.ch [ there is no user space here - only usable via ALICE tools ]
Using EOS
- The 'eos' CLI (Command Line Interface) supports most of the standard filesystem commands like:
- ls, cd, mkdir, rm, rmdir, find, cp ...
- 'eos help' shows all available commands, 'eos --help' explains each command
The 'eos' CLI can be used as an interactive shell with history
bash$ eos
EOS Console [root://eoslhcb.cern.ch] |/> whoami
Virtual Identity: uid=755 (755,99) gid=1338 (1338,99) [authz:krb5] host=lxplus423
as a busy-box command
bash$ eos whoami
Virtual Identity: uid=755 (755,99) gid=1338 (1338,99) [authz:krb5] host=lxplus423
Accessing an EOS file from ROOT
- You have to use URLs as file names which are built in this way: root://eos[experiment].cern.ch//eos/[experiment]/... e.g.
- root://eosatlas.cern.ch//eos/atlas/.../histo.root
- root://eoscms.cern.ch//eos/cms/.../histo.root
- root [0]: TFile::Open("root://eosatlas.cern.ch//eos/atlas/user/t/test/histo.root");
Found data on eos, stepping through all cms directories one by one, with:
eoscms ls -l /eos/cms/store/data/Run2012B/DoublePhoton/AOD/PromptReco-v1/000/194/108/D0FEA2C9-649F-E111-9368-003048D2BF1C.root
eos copy messages, copying to my work area:
eos cp /eos/cms/store/data/Run2012B/DoublePhoton/AOD/PromptReco-v1/000/194/108/D0FEA2C9-649F-E111-9368-003048D2BF1C.root /afs/cern.ch/work/d/davec/
[eos-cp] path=/eos/cms/store/data/Run2012B/DoublePhoton/AOD/PromptReco-v1/000/194/108/D0FEA2C9-649F-E111-9368-003048D2BF1C.root size=3134841652
[eos-cp] going to copy 1 files and 3.13 GB
append: /eos/cms/store/data/Run2012B/DoublePhoton/AOD/PromptReco-v1/000/194/108/D0FEA2C9-649F-E111-9368-003048D2BF1C.root D0FEA2C9-649F-E111-9368-003048D2BF1C.root
[eoscp] D0FEA2C9-649F-E111-9368-003048D2BF1C.root Total 2989.62 MB |====================| 100.00 % [40.1 MB/s]
[eos-cp] copied 1/1 files and 3.13 GB in 85.07 seconds with 36.85 MB/s
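The reported transfer rate is just size over time; a one-line check of the eos cp summary above:

```python
# Check the eos-cp summary: 3134841652 bytes in 85.07 s, quoted as 36.85 MB/s.
# Numbers taken from the copy log above; MB here means 10**6 bytes.
size_bytes = 3134841652
seconds = 85.07
rate_mb_s = size_bytes / seconds / 1e6
print(round(rate_mb_s, 2))   # matches the rate in the "[eos-cp] copied" line
```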
Copying files around at CERN
- You probably need to move files from/to EOS to your local computer/AFS or from CASTOR to EOS. Here are a few examples
# copy a single file
eos cp /eos/atlas/user/t/test/histo.root /tmp/
# copy all files within a directory - no subdirectories
eos cp /eos/atlas/user/t/test/histodirectory/ /afs/cern.ch/user/t/test/histodirectory
# copy recursively the complete hierarchy in a directory
eos cp -r /eos/atlas/user/t/test/histodirectory/ /afs/cern.ch/user/t/test/histodirectory
# copy recursively the complete hierarchy into the directory 'histodirectory' in the current local working directory
eos cp -r /eos/atlas/user/t/test/histodirectory/ histodirectory
# copy recursively the complete hierarchy of a CASTOR directory to an EOS directory (make sure you have the proper CASTOR settings)
eos cp -r root://castorpublic//castor/cern.ch/user/t/test/histordirectory/ /eos/atlas/user/t/test/histodirectory/
# copy a WEB file [ currently the reported copy size is 0 ]
eos cp http://root.cern.ch/files/atlas.root /tmp/
# copy all ROOT files from an Amazon S3 bucket
# define the environment variables: S3_ACCESS_KEY, S3_SECRET_ACCESS_KEY & SE_HOSTNAME
eos cp -r as3:mybucket/*.root /tmp/mybucket/
Creating file lists
- You can run 'find' commands in EOS or XRootD storage like CASTOR or S3 storage using the 'eos' CLI. This command returns full pathnames!
# find all files under an EOS subdirectory (if you are an ordinary user, the file list is limited to 100k files and 50k directories)
eos find -f /eos/atlas/user/t/test/
# find all directories
eos find -d /eos/atlas/user/t/test/
# find all files in a CASTOR directory
eos find -f root://castorpublic//castor/cern.ch/user/t/test/
# find all files on a mounted file system
eos find -f file:/afs/cern.ch/user/t/test/
# find all files in my Amazon S3 bucket
eos find -f as3:mybucket/
Listing directories
# list files in eos
eos ls [-la] /eos/atlas/user/t/test/
# list files in castor [ if you use '-la' be aware that the ownership and permissions shown are not correct ]
eos ls [-la] root://castorpublic//castor/cern.ch/user/t/test/
# to list files in an S3 bucket you have to use the find command - see before
Mounting EOS on lxplus
You can mount EOS into your AFS home directory as follows:
bash-3.2$ mkdir -p $HOME/eos
bash-3.2$ eosmount $HOME/eos
OK
===> Mountpoint : /afs/cern.ch/user/t/test/eos
===> Fuse-Options : kernel_cache,attr_timeout=30,entry_timeout=30,max_readahead=131072,max_write=4194304,fsname=eoslhcb.cern.ch root://eoslhcb.cern.ch//eos/
===> xrootd ra : 131072
===> xrootd cache : 393216
===> fuse debug : 0
===> fuse write-cache : 1
===> fuse write-cache-size : 100000000
Please unmount it once you are done or before you log out !!!
bash-3.2$ eosumount $HOME/eos
or if you have some hanging mount
bash-3.2$ eosforceumount $HOME/eos
Warning: after 24h the mount has to re-authenticate and your kerberos token will have expired in the meanwhile. So whenever you log into lxplus, refresh (=eosumount + eosmount) any existing mount (even better, don't leave it there).
Disclaimer: the FUSE module is not recommended for production usage and the use is at your own risk. EOS is not mounted/mountable on lxbatch nodes!
Troubleshooting
You can get help for all kind of problems at the service desk!
When I am reading a file, I get 'unable to open - machine not on the network'
- this happens if all copies of a file are inaccessible. You can verify the state of a file using 'eos fileinfo <path>'. At least one copy needs to be in the state 'active -> online'
[root@eosdummy tmp]# eos file info /eos/atlas/user/t/test/histo.root
File: '/eos/atlas/user/t/test/histo.root' Size: 2052
Modify: Fri Jun 15 14:12:43 2012 Timestamp: 1341416753.962860000
Change: Wed Jul 4 17:45:53 2012 Timestamp: 1339762363.349244000
CUid: 99 CGid: 99 Fxid: 002a26e2 Fid: 2762466 Pid: 1456
XStype: adler XS: 49 05 c2 e1 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
replica Stripes: 2 Blocksize: 4k *******
#Rep: 2
<#> <fs-id>
#................................................................................................................
# host # id # schedgroup # path # boot # configstatus # drain # active
#................................................................................................................
0 62 lxfsra..........cern.ch 62 default.7 /data08 opserror rw offline nodrain
1 35 lxfsra..........cern.ch 35 default.7 /data08 opserror rw offline nodrain
*******
If you have offline files please report via the service desk!
I get 'no space left on device'
- please verify that you have quota in the part of the namespace you are writing to.
The quota is not managed by IT but by each experiment. Please ask the experiment responsibles.
Check your quota using 'eos quota ls'
bash-3.2$ eos quota ls
By user ...
# _______________________________________________________________________________________________
# ==> Quota Node: /eos/atlas/user/t/test/
# _______________________________________________________________________________________________
user used bytes logi bytes used files aval bytes aval logib aval files filled[%] vol-status ino-status
test 289.69 TB 144.69 TB 2.74 M- 100.00 MB 50.00 MB 10.00 k- 100.00 exceeded exceeded
In this example volume and inode quota are exceeded (the space one can use and the number of files one can create).
Examples:
eoscms ls -l /eos/cms/store/data/Run2011A/Photon/RAW/v1/000/165/121
-rw-r--r-- 1 phedex zh 4011987444 Oct 24 12:28 4A78D0C2-C57F-E011-83C8-000423D9A212.root
-rw-r--r-- 1 phedex zh 4020465582 Oct 24 12:28 5CA4A288-B87F-E011-AFFE-003048F117B6.root
-rw-r--r-- 1 phedex zh 3957335961 Oct 24 12:28 6864450C-C17F-E011-8BAF-0030487C8CB6.root
-rw-r--r-- 1 phedex zh 3907128205 Oct 24 12:28 6E674C59-B47F-E011-847A-001D09F24FEC.root
-rw-r--r-- 1 phedex zh 3620500624 Oct 24 12:28 866E56B0-D47F-E011-89FC-003048F024FA.root
-rw-r--r-- 1 phedex zh 3982682572 Oct 24 12:28 A0692214-CC7F-E011-A7F8-0030487CAEAC.root
-rw-r--r-- 1 phedex zh 3974456617 Oct 24 12:28 C470493D-B07F-E011-8F32-001D09F24E39.root
-rw-r--r-- 1 phedex zh 4014233070 Oct 24 12:28 F4FE198D-BD7F-E011-8F79-001D09F34488.root
[lxplus414] ~ $ eoscms ls -l /eos/cms/store/data/Run2011A
drwxr-sr-+ 2 phedex zh 1 Jun 22 00:04 AlCaP0
drwxr-sr-+ 2 phedex zh 1 Jun 25 14:59 AlCaPhiSym
drwxr-sr-+ 2 phedex zh 1 Jun 29 12:09 BTag
drwxr-sr-+ 3 phedex zh 2 Jun 19 19:26 Cosmics
drwxr-sr-+ 4 phedex zh 3 Jun 29 01:41 DoubleElectron
drwxr-sr-+ 5 phedex zh 4 Jun 25 17:51 DoubleMu
drwxr-sr-+ 2 phedex zh 1 Jun 29 01:47 ElectronHad
drwxr-sr-+ 4 phedex zh 3 Jun 28 22:22 Jet
drwxr-sr-+ 4 phedex zh 3 Jun 25 18:33 MinimumBias
drwxr-sr-+ 3 phedex zh 2 Jun 29 01:39 MuEG
drwxr-sr-+ 3 phedex zh 2 Jun 25 18:32 MuOnia
drwxr-sr-+ 2 phedex zh 1 Jun 29 04:19 Photon
drwxr-sr-+ 3 phedex zh 2 Jun 28 21:29 SingleElectron
drwxr-sr-+ 5 phedex zh 4 Jun 17 23:42 SingleMu
drwxr-sr-+ 2 phedex zh 1 Jun 28 21:29 TauPlusX
CASTOR notes
- My area: /castor/cern.ch/user/d/davec/
staging | querying the stager and pre-staging of files
stager_get -M /castor/cern.ch/user/l/linda/thesis.tar
Received 1 responses /castor/cern.ch/user/l/linda/thesis.tar SUBREQUEST_READY
stager_qry -M /castor/cern.ch/user/l/linda/thesis.tar
Received 1 responses /castor/cern.ch/user/l/linda/thesis.tar 12345@castorns STAGEIN
or Error 2/No such file or directory
or STAGED, ie on disk
multiple files, ie
stager_get -M /castor/cern.ch/user/l/linda/thesis.tar -M /castor/cern.ch/user/l/linda/higgs/newHiggs.root
- Each user has a CASTOR home directory
- e.g. /castor/cern.ch/user/d/davec/
- my files on CASTOR: rfdir /castor/cern.ch/user/d/davec/
- make directories: rfmkdir /castor/cern.ch/user/d/davec/dir
- copy to CASTOR: rfcp filename /castor/cern.ch/user/d/davec/dir
- rename files on castor: rfrename $CASTOR_HOME/some_file.ext $CASTOR_HOME/some_new_file.ext
- deleting files: rfrm $CASTOR_HOME/some_file.ext
- Listing Contents of a Directory on Castor
- list your home directory on castor: nsls
- list some subdir of your home on castor i.e. some ganga output: nsls 416/outputdata
- list some absolute path: nsls /castor/cern.ch/path/to/data
- Opening files in CASTOR for an interactive root session, type
- TFile *f = TFile::Open("rfio:/castor/cern.ch/user/d/davec/myFile.root")
A (large) output file can be saved to CASTOR by using the following script called runMyJob.(c)sh
#!/bin/(c)sh
cd /afs/cern.ch/user/initial/username/scratch0/CMSSW_x_y_z/src/.../.../test
eval `scram runtime -(c)sh`
cd -
cmsRun /afs/cern.ch/user/initial/username/scratch0/CMSSW_x_y_z/src/../../test/aConfigFile.cfg
rfcp outputFile.root /castor/cern.ch/user/initial/username/largeOutputFile.root
The batch job starts to run in a directory called /pool/lsf/username/jobnumber, and this directory has a large scratch space that is available for the duration of the job. You then cd to scratch0/CMSSW_x_y_z/src/.../.../test to set the environment variables. The command "cd -" takes you back to /pool/lsf/username/jobnumber so that the large scratch space is available. aConfigFile.cfg is run from that directory, and the last line copies the output root file from there directly to your space in CASTOR. Note that while writing directly to a file in CASTOR is possible, it is not recommended, as it is very error-prone. The recommended practice is the one indicated above, i.e. writing to a local area and copying the file after cmsRun has finished.
You submit the job in the same way as that for smaller jobs.
bsub -q 1nd -J job1 < runMyJob.(c)sh
It is also possible to request a minimum amount of space in /pool/lsf/username/jobnumber using the -R option, for example to request a minimum of 30GB type
bsub -R "pool>30000" -q 1nd -J job1 < runMyJob.(c)sh
In cms cfg files, data on castor is accessed with
process.source = cms.Source("PoolSource",
# replace 'myfile.root' with the source file you want to use
fileNames = cms.untracked.vstring(
# 'file:myfile.root'
# rfdir /castor/cern.ch/cms/store/data/Run2010B/Photon/RAW/v1/000/146/944/2CB08BA6-34CC-DF11-99F8-003048F118AC.root
# gives 3908245997 bytes, size = 3.9 Gbytes, written on Sep 30 2010
'/store/data/Run2010B/Photon/RAW/v1/000/146/944/2CB08BA6-34CC-DF11-99F8-003048F118AC.root'
)
)
The string '/castor/cern.ch/cms' is not to be used in the cfg file. File in Run2010B/Photon/RAW/v1/000/146/944/ had ~23600 events, ~170kbytes per event.
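Since '/castor/cern.ch/cms' must be dropped, a tiny sketch converting a full CASTOR path into the LFN that goes into fileNames:

```python
# Strip the CASTOR mount prefix so the remainder can go into
# fileNames = cms.untracked.vstring(...). Sketch only.
CASTOR_PREFIX = "/castor/cern.ch/cms"

def castor_to_lfn(path):
    if not path.startswith(CASTOR_PREFIX + "/store/"):
        raise ValueError("not a CASTOR CMS /store path")
    return path[len(CASTOR_PREFIX):]

print(castor_to_lfn(
    "/castor/cern.ch/cms/store/data/Run2010B/Photon/RAW/v1/000/146/944/"
    "2CB08BA6-34CC-DF11-99F8-003048F118AC.root"))
# /store/data/Run2010B/Photon/RAW/v1/000/146/944/2CB08BA6-34CC-DF11-99F8-003048F118AC.root
```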
Can drill down the directories in CASTOR with
rfdir /castor/cern.ch/cms/
drwxrwxr-x 12 cmsprod zh 0 Jun 26 2003 BigJets
drwxrwxr-x 25880 cmsprod zh 0 Jan 30 2002 BigMB123
drwxrwxr-x 7 cmsprod zh 0 Oct 18 2005 CMSGLIDE
drwxr-xr-x 58 cmsprod zh 0 Mar 29 2006 DAR
drwxrwxr-x 7 cmsprod zh 0 Mar 23 2004 DSTs
drwxrwxr-x 7 cmsprod zh 0 Apr 01 2004 DSTs_801
drwxrwxr-x 22 cmsprod zh 0 Apr 30 2004 DSTs_801a
drwxrwxr-x 6 cmsprod zh 0 Jan 30 2004 DVD
drwxr-xr-x 3380 cmsprod zh 0 Jun 30 2005 FNAL
drwxrwxr-x 5 cmsprod zh 0 Feb 09 2007 MTCC
drwxrwxr-x 46 cmsprod zh 0 Apr 05 2006 PCP04
drwxrwxr-x 166 cmsprod zh 0 Apr 13 2006 PTDR
drwxrwxrwx 0 cmsprod zh 0 Apr 30 2007 PitData
drwxrwxr-x 1 cmsprod zh 0 Oct 26 2005 RefDB
drwxrwxr-x 7 cmsprod zh 0 Feb 24 2010 T0
drwxrwxr-x 1 cmsprod zh 0 Sep 12 2007 T0Prototype
drwxrwxr-x 2 cmsprod zh 0 Feb 18 2004 TW
drwxrwxr-x 50 cmsprod zh 0 Mar 01 2004 Valid
drwxrwxr-x 84 cmsprod zh 0 May 26 2010 Validation
drwxrwxr-x 9 cmsprod zh 0 Dec 08 2008 archive
drwxrwxr-x 5 cmsprod zh 0 Jan 03 2004 archive-shift20-obsolete
drwxr-xr-x 1 1046 c3 0 Jun 22 2005 archives
drwxrwxrwx 2 12406 zh 0 Oct 01 2001 cmsbt
drwxrwxr-x 5 cmshi zh 0 May 24 2006 cmshi
drwxrwxr-x 1 cmsprod zh 0 Aug 08 2002 comphep
drwxr-xr-x 1 cmsprod zh 0 Jul 07 2005 cosmic
drwxrwxr-x 2 cmsprod zh 0 May 01 2003 eff
drwxr-xr-x 6 emuslice zh 0 Jun 11 2008 emuslice
drwxrwxr-x 24 cmsprod zh 0 Feb 14 2008 generation
drwxrwxr-x 35 cmsprod zh 0 Jul 17 2006 grid
drwxr-xr-x 2 1066 zh 0 Aug 21 2006 h2_testbeam
drwxrwxr-- 0 cmsprod zh 0 Aug 22 2002 import
drwxrwxr-x 4 duccio zh 0 Oct 10 2006 integration
-rwxr-xr-x 1 cmsprod zh 89161728 May 17 2002 jet0900.FDDB
-rw-rw-r-- 1 cmsprod zh 541096 Jun 29 2005 me-0629-100
-rw-rw-r-- 1 cmsprod zh 598580 May 02 2006 me-te-0502-01
drwxrwxr-x 13 cmsprod zh 0 Feb 04 2002 official_geometry_files
drwxrwxr-x 1 outreach zh 0 Jul 14 2006 outreach
drwxrwxr-x 3629 cmsprod zh 0 Mar 12 2008 phedex_heartbeat
drwxrwxr-x 317 cmsprod zh 0 Nov 28 2006 phedex_loadtest
drwxr-xr-x 23 cmsprod zh 0 Jul 16 2003 reconstruction
drwxrwxr-x 3 23928 zh 0 Jun 28 2005 repos_standalone
drwxrwxr-x 41 cmsprod zh 0 Jan 31 2006 simulation
drwxr-xr-x 56 cmsprod zh 0 Nov 04 16:47 store
drwxrwxrwx 3 cmsprod zh 0 Jul 24 2006 t0test
drwxrwxr-x 181 cmsprod zh 0 May 24 2008 test
-rw-rw-r-- 1 cmsprod zh 658517 Feb 19 2004 test-2
-rw-r--r-- 1 cmsprod zh 5008283 Dec 11 2001 test.transfer
drwxrwxr-x 20 cmsprod zh 0 Aug 21 2008 testbeam
drwxr-xr-x 0 cmsprod zh 0 Nov 16 2007 xrdt
rfdir /castor/cern.ch/cms/store/
etc.
The full set of installed CASTOR man-pages can be viewed with the command 'man -k castor'.
Every LXPLUS user has a directory in CASTOR. It has the same structure as your AFS directory, but it is not reachable via standard Unix commands (ls, cp, etc.); special commands must be used.
nsmkdir - create a subdirectory
  e.g. nsmkdir /castor/cern.ch/user/l/laman/castor_tutorial
rfdir - an RFIO command, so it can also be used to list local or remote files
  e.g. rfdir /castor/cern.ch/user/l/linda
nsls - like ls
  e.g. nsls -l /castor/cern.ch/user/l/laman/castor_tutorial
  mrwxr--r-- 1 laman vy 10 Mar 21 15:06 castor.txt
xrdcp - copy a file
  e.g. xrdcp xroot://castorpublic.cern.ch//castor/cern.ch/user/l/laman/castor_tutorial/castor.txt myfile.txt
  xroot is the protocol, castorpublic.cern.ch is the host of the xroot server, and /castor/cern.ch/user/l/laman/castor_tutorial/castor.txt is the full CASTOR path.
  "castorpublic" is the entry point for all users not affiliated with an LHC experiment.
rfcp - copy a file from CASTOR
  e.g. rfcp /castor/cern.ch/user/n/ndefilip/Paper/MCSummer12/roottree_leptons_GluGluToHToZZTo4L_M-125_8TeV-powheg15-pythia6.root .
stager_qry - get the status of a file on CASTOR
  e.g. stager_qry -M /castor/cern.ch/user/n/ndefilip/Paper/MCSummer12/roottree_leptons_GluGluToHToZZTo4L_M-125_8TeV-powheg15-pythia6.root
  STAGED = file is on disk, STAGEIN = file being transferred to disk
Castor help:
- RFIO commands such as rfcp don't work
- User comment: In my case, this line was in /etc/sysconfig/iptables.castor and not in /etc/sysconfig/iptables
- After re-inserting it and restarting iptables, rfcp works.
Naming conventions inside CMSSW .root files
- Format: 'C++ namespace'+'class name'_'module label'_'instance label'_'process name' (the instance label is empty here, hence the double underscore before RECO),
- for example: recoJetdmRefProdTofloatAssociationVector_jetProbabilityBJetTags__RECO
- suggests C++ namespace = 'reco'
- suggests class name = 'JetdmRefProdTofloatAssociationVector'
- suggests label = 'jetProbabilityBJetTags'
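The underscore-separated convention above can be checked with a short Python sketch (a hypothetical helper for illustration, not a CMSSW tool):

```python
# Split a CMSSW branch name of the form
#   <namespace+class>_<module label>_<instance label>_<process>
# Hypothetical helper, for illustration only; assumes the class name
# itself contains no underscores (true for the example above).
def parse_branch_name(branch):
    cls, label, instance, process = branch.split("_")
    return {"class": cls, "label": label,
            "instance": instance, "process": process}

parts = parse_branch_name(
    "recoJetdmRefProdTofloatAssociationVector_jetProbabilityBJetTags__RECO")
# parts["label"] is "jetProbabilityBJetTags"; the empty instance label
# is what produces the double underscore before RECO
```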
Code and code generators
SkeletonCodeGenerator
Writing your own EDAnalyzer
Token/Handle examples
HanDle
Classes, Methods and .h files
ClasSes
Compiling and running CMSSW
To start
- 1) Create a new project area -> cmsrel CMSSW_3_6_2 -> cd CMSSW_3_6_2/src/ -> cmsenv
- 2) In CMSSW_3_6_2/src/ -> mkdir Demo -> cd Demo
- 3) Create "skeleton" of EDAnalyzer module (only way to compile properly) -> mkedanlzr DemoAnalyzer <- your own fn here
- 4) Compile -> cd DemoAnalyzer -> scram b
- 5) Also, in /DemoAnalyzer/, file demoanalyzer_cfg.py automatically created. Change for new data sources etc.
- 6) Run the job -> cmsRun demoanalyzer_cfg.py
- 7) Code changes - go to: /Demo/DemoAnalyzer/src/DemoAnalyzer.cc
- 8) Compile and run, in the /DemoAnalyzer/ directory, edit DemoAnalyzer/BuildFile, if necessary -> scram b -> cmsRun demoanalyzer_cfg.py
Data
I also made a skim of L1 EG 2:
Summary:
BSC 40-41 skims (from Chiara):
/castor/cern.ch/user/c/ccecal/BEAM/Skims/InterestingEvents/bit40or41skim_expressPhysics_run123591_full.root
/castor/cern.ch/user/c/ccecal/BEAM/Skims/InterestingEvents/bit40or41skim_expressPhysics_run123592_full.root
/castor/cern.ch/user/c/ccecal/BEAM/Skims/InterestingEvents/bit40or41skim_expressPhysics_run123596_full.root
BSC 40-41 && L1_SingleEG2 skim:
/castor/cern.ch/user/c/ccecal/BEAM/Skims/L1EGSkim/L1EGSkim_bit40or41skim_expressPhysics_run123591_full.root
/castor/cern.ch/user/c/ccecal/BEAM/Skims/L1EGSkim/L1EGSkim_bit40or41skim_expressPhysics_run123592_full.root
/castor/cern.ch/user/c/ccecal/BEAM/Skims/L1EGSkim/L1EGSkim_bit40or41skim_expressPhysics_run123596_full.root
iSpy files of above BSC 40-41 && L1_SingleEG2 skim files:
http://test-ecal-cosmics.web.cern.ch/test-ecal-cosmics/Visualization/iSpy/ispy_L1EGSkim_bit40or41skim_expressPhysics_run123591_full.ig
http://test-ecal-cosmics.web.cern.ch/test-ecal-cosmics/Visualization/iSpy/ispy_L1EGSkim_bit40or41skim_expressPhysics_run123592_full.ig
http://test-ecal-cosmics.web.cern.ch/test-ecal-cosmics/Visualization/iSpy/ispy_L1EGSkim_bit40or41skim_expressPhysics_run123596_full.ig
P Merid, 6 Dec 09, I'm producing skims from the Minimum Bias sample and filtering on bit 40||41.
The samples are available here:
/castor/cern.ch/cms/store/caf/user/meridian/MinimumBias/BeamCommissioning09_BSCFilter/01d438aa8cf896a53b8c31904618a1ea
Toyoko, 6 Dec 09, made the L1 EG 2 skim of the BSC 40-41 files listed above.
General CMS sites
ECAL sites
E-Gamma
EE map in CVS: Google 'CMSSW CVS', then CMSSW -> Geometry -> EcalMapping -> data -> EEMap.txt
Lines like (the first three columns are ix, iy, iz):
100 56 -1 21 1 2 1 4 19 4 10 1
ECAL local DAQ files in DBS
Within a few minutes of the end of Ecal local DAQ data taking, the Streamer raw data files of the Ecal local DAQ runs and the binary Ecal MATACQ files are copied to a central CMS dropbox area (cms-tier0-stage:/dropbox/ecal/daq-data/ and cms-tier0-stage:/dropbox/ecal/matacq-data/).
Two cronjobs run on cms-tier0-stage, monitor the availability of new files, and inject them in the CMS Tier0 DB. After a successful injection, the files will be managed by the central Tier0 team: they will be copied to Castor, will be inserted in DBS, and will be automatically removed from the CMS dropbox area.
Click the RunInfo icon and switch to the RunInfo Table - this brings up an empty line of fields. Under the number column, enter a filter such as '> 113990' to bring up the Global runs after that one.
Luminosity
get Luminosity with
CMSWBM
Go to
ConditionBrowser
, mid way down 'Core Services' on CMSWBM page
CMSWBM -> Condition browser
-> cms_omds_lb
-> CMS_BEAM_cond
-> CMS_lhc_luminosity
->lumi_totinst <== tick box, set time, click on 1D, prescalar box set to 5 (could try 1)
select a plot value
Fill Luminosity on CMSWBM
CMS Luminosity plots
- CMS run coordination page, click on 'Lumi results'
- Public lumi results
- Yearly report, 2010
- peak lumi per day 2010
- peak lumi per day 2010, log scale
- cumulative, 2010
- cumulative 2010, log scale
- peak lumi per day 2011
- peak lumi per day 2012
- peak lumi, 2010, 2011, 2012, log scale
- int lumi cumulative, 2010, 2011, 2012
- int lumi cumulative, 2010, 2011, 2012, log scale
Example of a script to get lumi:
lxplus ~/scratch0/CMS/CMSSW_3_5_6/src/QCDPhotonAnalysis/DataAnalyzers/test
$ python getLumi.py json_2.txt
Total luminosity: 98.16 /ub, 0.10 /nb, 9.82e-05 /pb, 9.82e-08 /fb
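The units in the getLumi.py output above are just the same number in successive inverse-barn units, each a factor of 1000 apart; a minimal sketch (hypothetical helper, not part of getLumi.py):

```python
# Convert an integrated luminosity in inverse microbarns (/ub) to the
# other inverse units printed by getLumi.py; each step is a factor of 1000.
# Hypothetical helper for illustration.
def lumi_units(per_ub):
    return {"/ub": per_ub,
            "/nb": per_ub / 1e3,
            "/pb": per_ub / 1e6,
            "/fb": per_ub / 1e9}

lumi_units(98.16)
# 98.16 /ub is 0.09816 /nb, 9.816e-05 /pb, 9.816e-08 /fb,
# which matches the rounded values in the output above
```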
ECAL DQM
To access the DQM from outside P5 you can go to the following pages:
To find a specific run click on run and enter your run number. For the offline DQM you need to enter the run number, click on 'Vary: Any', and then select StreamExpress.
ECAL Database information
https://twiki.cern.ch/twiki/bin/view/CMS/EcalDB
CMSSW code links
- EcalElectronicsMapping.cc
- PFG code
- Filters and skims code
8 Dec 09, Hi James, Do you want the raw, unreconstructed amplitude samples as digitized by the ADC? If so, the basic prescription is to start from RAW data, run the Ecal unpacker, and then read the EcalDigiCollection out of the event. If you look here [1] in the selectDigi method, you will see a way to obtain a Digi for a given DetId from the DigiCollection in the event, and then how the ADC values for each sample are extracted. Cheers, Seth
RECO data format table
Luminosity data
1) cmswbm -> ConditionBrowser -> cms_omds_lb -> CMS_BEAM_COND -> CMS_LHC_LUMINOSITY
2) tick LUMI_TOTINST
3) enter begin and end dates -> click submit query -> will offer box "with Prescaler"
4) note - may need to set prescale by 5 or 100 etc to get file of allowed size
5) Plot is returned to screen of luminosity versus time
6) Click on root, or text etc to choose the data format to save all the data to your area
With the root file, e.g. ConditionBrowser_1286193847132.root:
1) root -l ConditionBrowser_1286193847132.root
2) TBrowser b; go to ROOT Files -> ConditionBrowser_1286193847132.root -> tree -> VALUE for the distribution of lumis
3) tree->Draw("VALUE:TIME") for a plot of lumi vs time
4) tree->Draw("VALUE:TIME","VALUE>0.1") plot, after a cut on VALUE
5) Returns (Long64_t)2331 <- number of times the cut succeeded
6) tree->Scan("VALUE:TIME") <- listing of VALUE and TIME
7) tree->Scan("VALUE:TIME","VALUE>0.1","colsize=10") <- lists with each column 10 wide, to avoid printing in scientific, E10 etc
Find data files and events
16 Nov 2017
Run information, Jean Fay, email Thu 13/08/2015 10:34
You can get this information from WBM, Ecal Summary, Configuration Compare/Show button, then expand Sequence 0:
WBM-conddb-info.pdf
Also, can go directly to a run on WBM by:
https://cmswbm.cern.ch/cmsdb/servlet/ECALSummary?RUN=304680
if WBM top page is broken (ie illegal run numbers!)
Default sequence (1 cycle) Cycle 0:
SelectiveReadout (the default cycle for run Cosmics-SR) and check ECAL_TTCCI_CONFIGURATION
Looking to the configuration of this night run (254232), find the following 'CONFIGURATION_SCRIPT-PARAMS' :
--n-phot-2 0 --n-ir 0 --n-ped 1 --n-tp 1 --n-phot-1 600 --n-green 600 --n-orange-led 600 --n-blue-led 600
--las-switch-time 400 --led-switch-time 1200 --las-switch-rst-time 3000 --eb-to-ee-las-switch-time 2000
--starting-lme-from /nfshome0/ecalpro/.lsup_starting_lme
If I understand it well, there is no blue2, no infrared laser, 1 pedestal event, 1 Testpulse event,
600 blue1 laser events, 600 green laser events, 600 orange LED events, 600 blue LED events in the calibration sequence.
conddb
Use conddb to locate files used by the CMS database.
Various commands:
conddb listTags | grep "Ecal" > listTags.txt
conddb list EcalPedestals_hlt
conddb search EcalPedestals_hlt
conddb list EcalPedestals_hlt > peds.txt
conddb listTag will give an error - but also list all possible items, ie
usage: conddb [-h] [--db DB] [--verbose] [--quiet] [--yes] [--nocolors]
[--editor EDITOR] [--force] [--noLimit] [--authPath AUTHPATH]
{status,listGTsForTag,listTags,help,dump,edit,list,search,listGTs,showFCSR,init,listRuns,diff,copy,delete}
NOTE: the condb entry is usually different from the actual run, ie:
EcalPedestals_hlt and EcalPedestals_express
have been updated : IOV 305085 with the payload from the pedestal run 304680.
To get one file (or a few) from a T1, use the
PhEDEx FileMover
9 Dec 2011
Now use the Data Aggregation System (DAS)
A typical string in the DAS window
- file dataset=/SingleElectron/Run2012B-PromptReco-v1/AOD run=194455 lumi=257
Output
- File: /store/data/Run2012B/SingleElectron/AOD/PromptReco-v1/000/194/455/C813A592-23A4-E111-B4C3-5404A6388694.root
Also:
- file dataset=/SingleElectron/Run2012B-PromptReco-v1/AOD run=194455 lumi=258
- file dataset=/ElectronHad/Run2011B-PromptReco-v1/AOD run=178708 lumi=326
- das-event-search window.pdf
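The DAS query strings above all follow the same pattern; a minimal sketch of a helper that builds them (hypothetical, for illustration — the resulting string is what you would paste into the DAS web page or pass to das_client.py):

```python
# Build a DAS file query string like the examples above.
# Hypothetical helper; run and lumi are optional, as in the examples.
def das_file_query(dataset, run=None, lumi=None):
    q = "file dataset=%s" % dataset
    if run is not None:
        q += " run=%d" % run
    if lumi is not None:
        q += " lumi=%d" % lumi
    return q

das_file_query("/SingleElectron/Run2012B-PromptReco-v1/AOD", 194455, 257)
# -> "file dataset=/SingleElectron/Run2012B-PromptReco-v1/AOD run=194455 lumi=257"
```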
A dbs search for the global tag of a dataset
- dbs search --query='find dataset.tag where dataset=/HT/Run2011A-PromptReco-v4/AOD'
Output:
Using DBS instance at: http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet
dataset.tag
GR_P_V20::All
Examples:
./das_client.py --query="file dataset=/DoubleElectron/Run2011B*/AOD"
Showing 1-10 out of 5536 results, for more results use --idx/--limit options
/store/data/Run2011B/DoubleElectron/AOD/HZZ-PromptSkim-v1/0000/FEE381C3-06F9-E011-A9D7-00151796D774.root
/store/data/Run2011B/DoubleElectron/AOD/HZZ-PromptSkim-v1/0000/FEF71492-37E4-E011-98FE-001D0967D517.root
/store/data/Run2011B/DoubleElectron/AOD/HZZ-PromptSkim-v1/0000/FEB14727-67F6-E011-9AB7-00151796D8D4.root
/store/data/Run2011B/DoubleElectron/AOD/HZZ-PromptSkim-v1/0000/FC0D7A14-8AE6-E011-A338-0024E8768265.root
/store/data/Run2011B/DoubleElectron/AOD/HZZ-PromptSkim-v1/0000/FED824D3-7DDF-E011-8208-001D0967D9B3.root
/store/data/Run2011B/DoubleElectron/AOD/HZZ-PromptSkim-v1/0000/FADEC850-26E6-E011-973F-001D0967DC0B.root
/store/data/Run2011B/DoubleElectron/AOD/HZZ-PromptSkim-v1/0000/FED9173B-48FA-E011-B97C-0024E8768D68.root
/store/data/Run2011B/DoubleElectron/AOD/HZZ-PromptSkim-v1/0000/FA9D311E-02FC-E011-901B-001D0967D48B.root
/store/data/Run2011B/DoubleElectron/AOD/HZZ-PromptSkim-v1/0000/FECE5A44-BFEC-E011-8E88-0015178C49C0.root
/store/data/Run2011B/DoubleElectron/AOD/HZZ-PromptSkim-v1/0000/FECB4B12-11EE-E011-8607-0015178C48F4.root
Using dbs, 26 May 2012, to find dataset then run/lumi:
- dbsql "find dataset where run = 194455" > fn.txt
- dbsql "find file where dataset = /SingleElectron/Run2012B-PromptReco-v1/AOD and run =194455"
- dbsql "find file where dataset = /SingleElectron/Run2012B-PromptReco-v1/AOD and run =194455 and lumi=257"
Using dbs, 4 Apr 2011:
dbsql "find dataset where run = 149442" > fn.txt (to pipe output to fn.txt)
"find file where dataset = /Electron......................
and lumi =
(can get a lumi section of data, not an individual event this way)
/store/................... indicates a file on CASTOR
Geometry
Code examples from Brian Heltsley, May 2011
Statistics
Rec Hits
doxygen EcalRecHit Class Reference
CMS IN-2011/002 -- Definition of calibrated ECAL RecHits and the ECAL calibration and correction scheme: http://cms.cern.ch/iCMS/jsp/openfile.jsp?type=IN&year=2011&files=IN2011_002.pdf
View of /CMSSW/DataFormats/EcalRecHit/src/EcalRecHit.cc
Revision 1.19
Fri Feb 4 13:34:06 2011 UTC (2 months ago) by argiro
Branch: MAIN
CVS Tags: CMSSW_4_3_0_pre2, CMSSW_4_3_0_pre1, CMSSW_4_2_0, V02-02-05, V02-02-04, V02-02-06, V02-02-03, V02-02-02, CMSSW_4_2_0_pre4, CMSSW_4_2_0_pre5, CMSSW_4_2_0_pre6, CMSSW_4_2_0_pre7, CMSSW_4_2_0_pre2, CMSSW_4_2_0_pre3, CMSSW_4_2_0_pre8, HEAD
Changes since 1.18: +0 -2 lines
added unsetFlag function, RecHit is created by default with no flag
#include "DataFormats/EcalRecHit/interface/EcalRecHit.h"
#include "DataFormats/EcalDetId/interface/EBDetId.h"
#include "DataFormats/EcalDetId/interface/EEDetId.h"
#include "DataFormats/EcalDetId/interface/ESDetId.h"
#include "FWCore/MessageLogger/interface/MessageLogger.h"
#include <cassert>
#include <math.h>
EcalRecHit::EcalRecHit() : CaloRecHit(), flagBits_(0) {
}
EcalRecHit::EcalRecHit(const DetId& id, float energy, float time, uint32_t flags, uint32_t flagBits) :
CaloRecHit(id,energy,time,flags),
flagBits_(flagBits)
{
}
bool EcalRecHit::isRecovered() const {
return ( checkFlag(kLeadingEdgeRecovered) ||
checkFlag(kNeighboursRecovered) ||
checkFlag(kTowerRecovered)
);
}
float EcalRecHit::chi2() const
{
uint32_t rawChi2 = 0x7F & (flags()>>4);
return (float)rawChi2 / (float)((1<<7)-1) * 64.;
}
float EcalRecHit::outOfTimeChi2() const
{
uint32_t rawChi2Prob = 0x7F & (flags()>>24);
return (float)rawChi2Prob / (float)((1<<7)-1) * 64.;
}
float EcalRecHit::outOfTimeEnergy() const
{
uint32_t rawEnergy = (0x1FFF & flags()>>11);
uint16_t exponent = rawEnergy>>10;
uint16_t significand = ~(0xE<<9) & rawEnergy;
return (float) significand*pow(10,exponent-5);
}
void EcalRecHit::setChi2( float chi2 )
{
// bound the max value of the chi2
if ( chi2 > 64 ) chi2 = 64;
// use 7 bits
uint32_t rawChi2 = lround( chi2 / 64. * ((1<<7)-1) );
// shift by 4 bits (recoFlag)
setFlags( (~(0x7F<<4) & flags()) | ((rawChi2 & 0x7F)<<4) );
}
void EcalRecHit::setOutOfTimeEnergy( float energy )
{
if ( energy > 0.001 ) {
uint16_t exponent = lround(floor(log10(energy)))+3;
uint16_t significand = lround(energy/pow(10,exponent-5));
// use 13 bits (3 exponent, 10 significand)
uint32_t rawEnergy = exponent<<10 | significand;
// shift by 11 bits (recoFlag + chi2)
setFlags( ( ~(0x1FFF<<11) & flags()) | ((rawEnergy & 0x1FFF)<<11) );
}
}
void EcalRecHit::setOutOfTimeChi2( float chi2 )
{
// bound the max value of chi2
if ( chi2 > 64 ) chi2 = 64;
// use 7 bits
uint32_t rawChi2 = lround( chi2 / 64. * ((1<<7)-1) );
// shift by 24 bits (recoFlag + chi2 + outOfTimeEnergy)
setFlags( (~(0x7F<<24) & flags()) | ((rawChi2 & 0x7F)<<24) );
}
void EcalRecHit::setTimeError( uint8_t timeErrBits )
{
// take the bits and put them in the right spot
setAux( (~0xFF & aux()) | timeErrBits );
}
float EcalRecHit::timeError() const
{
uint32_t timeErrorBits = 0xFF & aux();
// all bits off --> time reco bailed out (return negative value)
if( (0xFF & timeErrorBits) == 0x00 )
return -1;
// all bits on --> time error over 5 ns (return large value)
if( (0xFF & timeErrorBits) == 0xFF )
return 10000;
float LSB = 1.26008;
uint8_t exponent = timeErrorBits>>5;
uint8_t significand = timeErrorBits & ~(0x7<<5);
return pow(2.,exponent)*significand*LSB/1000.;
}
bool EcalRecHit::isTimeValid() const
{
if(timeError() <= 0)
return false;
else
return true;
}
bool EcalRecHit::isTimeErrorValid() const
{
if(!isTimeValid())
return false;
if(timeError() >= 10000)
return false;
return true;
}
/// DEPRECATED provided for temporary backward compatibility
EcalRecHit::Flags EcalRecHit::recoFlag() const {
for (int i=kUnknown; ; --i){
if (checkFlag(i)) return Flags(i);
if (i==0) break;
}
// no flag assigned, assume good
return kGood;
}
std::ostream& operator<<(std::ostream& s, const EcalRecHit& hit) {
if (hit.detid().det() == DetId::Ecal && hit.detid().subdetId() == EcalBarrel)
return s << EBDetId(hit.detid()) << ": " << hit.energy() << " GeV, " << hit.time() << " ns";
else if (hit.detid().det() == DetId::Ecal && hit.detid().subdetId() == EcalEndcap)
return s << EEDetId(hit.detid()) << ": " << hit.energy() << " GeV, " << hit.time() << " ns";
else if (hit.detid().det() == DetId::Ecal && hit.detid().subdetId() == EcalPreshower)
return s << ESDetId(hit.detid()) << ": " << hit.energy() << " GeV, " << hit.time() << " ns";
else
return s << "EcalRecHit undefined subdetector" ;
}
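The 13-bit out-of-time energy packing in setOutOfTimeEnergy/outOfTimeEnergy above (3-bit exponent, 10-bit significand, value = significand x 10^(exponent-5)) can be mimicked in Python to check the round-trip behaviour. This is a sketch of the bit arithmetic only, not CMSSW code:

```python
import math

# Mimic EcalRecHit's 13-bit out-of-time energy packing shown above:
# 3-bit exponent, 10-bit significand, value = significand * 10**(exponent-5).
# Sketch only; the real code stores the 13 bits inside CaloRecHit::flags_.
def pack_oot_energy(energy):
    exponent = int(math.floor(math.log10(energy))) + 3
    significand = round(energy / 10.0 ** (exponent - 5))
    return (exponent << 10) | significand

def unpack_oot_energy(raw):
    exponent = raw >> 10
    significand = raw & 0x3FF          # same mask as ~(0xE<<9) on 13 bits
    return significand * 10.0 ** (exponent - 5)

unpack_oot_energy(pack_oot_energy(12.34))
# ~12.3: only about three significant digits survive the packing
```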
View of /CMSSW/DataFormats/EcalDetId/interface/EBDetId.h
add a function to return the approximate eta of a DetId
#ifndef ECALDETID_EBDETID_H
#define ECALDETID_EBDETID_H
#include <ostream>
#include <cmath>
#include <cstdlib>
#include "DataFormats/DetId/interface/DetId.h"
#include "DataFormats/EcalDetId/interface/EcalSubdetector.h"
#include "DataFormats/EcalDetId/interface/EcalTrigTowerDetId.h"
/** \class EBDetId
* Crystal identifier class for the ECAL barrel
*
*
* $Id: AnalysisWork.txt,v 1.119 2020/02/17 12:15:39 davec Exp $
*/
class EBDetId : public DetId {
public:
enum { Subdet=EcalBarrel};
/** Constructor of a null id */
EBDetId() {}
/** Constructor from a raw value */
EBDetId(uint32_t rawid) : DetId(rawid) {}
/** Constructor from crystal ieta and iphi
or from SM# and crystal# */
EBDetId(int index1, int index2, int mode = ETAPHIMODE);
/** Constructor from a generic cell id */
EBDetId(const DetId& id);
/** Assignment operator from cell id */
EBDetId& operator=(const DetId& id);
/// get the subdetector .i.e EcalBarrel (what else?)
// EcalSubdetector subdet() const { return EcalSubdetector(subdetId()); }
static EcalSubdetector subdet() { return EcalBarrel;}
/// get the z-side of the crystal (1/-1)
int zside() const { return (id_&0x10000)?(1):(-1); }
/// get the absolute value of the crystal ieta
int ietaAbs() const { return (id_>>9)&0x7F; }
/// get the crystal ieta
int ieta() const { return zside()*ietaAbs(); }
/// get the crystal iphi
int iphi() const { return id_&0x1FF; }
/// get the HCAL/trigger ieta of this crystal
int tower_ieta() const { return ((ietaAbs()-1)/5+1)*zside(); }
/// get the HCAL/trigger iphi of this crystal
int tower_iphi() const;
/// get the HCAL/trigger iphi of this crystal
EcalTrigTowerDetId tower() const { return EcalTrigTowerDetId(zside(),EcalBarrel,abs(tower_ieta()),tower_iphi()); }
/// get the ECAL/SM id
int ism() const;
/// get the number of module inside the SM (1-4)
int im() const;
/// get ECAL/crystal number inside SM
int ic() const;
/// get the crystal ieta in the SM convention (1-85)
int ietaSM() const { return ietaAbs(); }
/// get the crystal iphi (1-20)
int iphiSM() const { return (( ic() -1 ) % kCrystalsInPhi ) + 1; }
// is z positive?
bool positiveZ() const { return id_&0x10000;}
// crystal number in eta-phi grid
int numberByEtaPhi() const {
return (MAX_IETA + (positiveZ() ? ietaAbs()-1 : -ietaAbs()) )*MAX_IPHI+ iphi()-1;
}
// index numbering crystal by SM
int numberBySM() const;
/// get a compact index for arrays
int hashedIndex() const { return numberByEtaPhi(); }
uint32_t denseIndex() const { return hashedIndex() ; }
/** returns a new EBDetId offset by nrStepsEta and nrStepsPhi (can be negative),
* returns EBDetId(0) if invalid */
EBDetId offsetBy( int nrStepsEta, int nrStepsPhi ) const;
/** returns a new EBDetId on the other zside of barrel (ie iEta*-1),
* returns EBDetId(0) if invalid (shouldnt happen) */
EBDetId switchZSide() const;
/** following are static member functions of the above two functions
* which take and return a DetId, returns DetId(0) if invalid
*/
static DetId offsetBy( const DetId startId, int nrStepsEta, int nrStepsPhi );
static DetId switchZSide( const DetId startId );
/** return an approximate values of eta (~0.15% precise)
*/
float approxEta() const { return ieta() * crystalUnitToEta; }
static float approxEta( const DetId id );
static bool validDenseIndex( uint32_t din ) { return ( din < kSizeForDenseIndexing ) ; }
static EBDetId detIdFromDenseIndex( uint32_t di ) { return unhashIndex( di ) ; }
/// get a DetId from a compact index for arrays
static EBDetId unhashIndex( int hi ) ;
static bool validHashIndex(int i) { return !(i<MIN_HASH || i>MAX_HASH); }
/// check if a valid index combination
static bool validDetId(int i, int j) ;
static bool isNextToBoundary(EBDetId id);
static bool isNextToEtaBoundary(EBDetId id);
static bool isNextToPhiBoundary(EBDetId id);
//return the distance in eta units between two EBDetId
static int distanceEta(const EBDetId& a,const EBDetId& b);
//return the distance in phi units between two EBDetId
static int distancePhi(const EBDetId& a,const EBDetId& b);
/// range constants
static const int MIN_IETA = 1;
static const int MIN_IPHI = 1;
static const int MAX_IETA = 85;
static const int MAX_IPHI = 360;
static const int kChannelsPerCard = 5;
static const int kTowersInPhi = 4; // per SM
static const int kModulesPerSM = 4;
static const int kModuleBoundaries[4] ;
static const int kCrystalsInPhi = 20; // per SM
static const int kCrystalsInEta = 85; // per SM
static const int kCrystalsPerSM = 1700;
static const int MIN_SM = 1;
static const int MAX_SM = 36;
static const int MIN_C = 1;
static const int MAX_C = kCrystalsPerSM;
static const int MIN_HASH = 0; // always 0 ...
static const int MAX_HASH = 2*MAX_IPHI*MAX_IETA-1;
// eta coverage of one crystal (approximate)
static const float crystalUnitToEta;
enum { kSizeForDenseIndexing = MAX_HASH + 1 } ;
// function modes for (int, int) constructor
static const int ETAPHIMODE = 0;
static const int SMCRYSTALMODE = 1;
};
std::ostream& operator<<(std::ostream& s,const EBDetId& id);
#endif
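The bit layout used by zside()/ietaAbs()/iphi()/tower_ieta() in the header above can be decoded with a few lines of Python. This sketch looks only at the low 17 bits of a raw EBDetId; the detector/subdetector bits above bit 16 are ignored here:

```python
# Decode the crystal fields from the low bits of an EBDetId raw id,
# mirroring zside()/ietaAbs()/iphi()/tower_ieta() in the header above.
# Sketch only: a real EBDetId also carries det/subdet bits above bit 16.
def zside(raw):      return 1 if raw & 0x10000 else -1
def ieta_abs(raw):   return (raw >> 9) & 0x7F
def iphi(raw):       return raw & 0x1FF
def tower_ieta(raw): return ((ieta_abs(raw) - 1) // 5 + 1) * zside(raw)

raw = 0x10000 | (85 << 9) | 360      # ieta = +85, iphi = 360
# zside -> 1, ieta_abs -> 85, iphi -> 360, tower_ieta -> 17
```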
View of /CMSSW/DataFormats/EcalRecHit/interface/EcalRecHitCollections.h
CMSSW/ DataFormats/ EcalRecHit/ interface/ EcalRecHitCollections.h
#ifndef DATAFORMATS_ECALRECHIT_ECALRECHITCOLLECTION_H
#define DATAFORMATS_ECALRECHIT_ECALRECHITCOLLECTION_H

#include "DataFormats/Common/interface/SortedCollection.h"
#include "DataFormats/EcalRecHit/interface/EcalRecHit.h"
#include "DataFormats/EcalRecHit/interface/EcalUncalibratedRecHit.h"
#include "DataFormats/Common/interface/Ref.h"
#include "DataFormats/Common/interface/RefVector.h"

typedef edm::SortedCollection<EcalRecHit> EcalRecHitCollection;
typedef edm::Ref<EcalRecHitCollection> EcalRecHitRef;
typedef edm::RefVector<EcalRecHitCollection> EcalRecHitRefs;
typedef edm::RefProd<EcalRecHitCollection> EcalRecHitsRef;

typedef EcalRecHitCollection EBRecHitCollection;
typedef EcalRecHitCollection EERecHitCollection;
typedef EcalRecHitCollection ESRecHitCollection;

typedef edm::SortedCollection<EcalUncalibratedRecHit> EcalUncalibratedRecHitCollection;
typedef edm::Ref<EcalUncalibratedRecHitCollection> EcalUncalibratedRecHitRef;
typedef edm::RefVector<EcalUncalibratedRecHitCollection> EcalUncalibratedRecHitRefs;
typedef edm::RefProd<EcalUncalibratedRecHitCollection> EcalUncalibratedRecHitsRef;

typedef EcalUncalibratedRecHitCollection EBUncalibratedRecHitCollection;
typedef EcalUncalibratedRecHitCollection EEUncalibratedRecHitCollection;

#endif
View of CMSSW/DataFormats/EcalRecHit/interface/EcalRecHit.h
added checkFlagMask() function
#ifndef DATAFORMATS_ECALRECHIT_H
#define DATAFORMATS_ECALRECHIT_H 1
#include "DataFormats/CaloRecHit/interface/CaloRecHit.h"
/** \class EcalRecHit
*
* $Id: AnalysisWork.txt,v 1.119 2020/02/17 12:15:39 davec Exp $
* \author P. Meridiani INFN Roma1
*/
class EcalRecHit : public CaloRecHit {
public:
typedef DetId key_type;
// recHit flags
enum Flags {
kGood=0, // channel ok, the energy and time measurement are reliable
kPoorReco, // the energy is available from the UncalibRecHit, but approximate (bad shape, large chi2)
kOutOfTime, // the energy is available from the UncalibRecHit (sync reco), but the event is out of time
kFaultyHardware, // the energy is available from the UncalibRecHit, channel is faulty at some hardware level (e.g. noisy)
kNoisy, // the channel is very noisy
kPoorCalib, // the energy is available from the UncalibRecHit, but the calibration of the channel is poor
kSaturated, // saturated channel (recovery not tried)
kLeadingEdgeRecovered, // saturated channel: energy estimated from the leading edge before saturation
kNeighboursRecovered, // saturated/isolated dead: energy estimated from neighbours
kTowerRecovered, // channel in TT with no data link, info retrieved from Trigger Primitive
kDead, // channel is dead and any recovery fails
kKilled, // MC only flag: the channel is killed in the real detector
kTPSaturated, // the channel is in a region with saturated TP
kL1SpikeFlag, // the channel is in a region with TP with sFGVB = 0
kWeird, // the signal is believed to originate from an anomalous deposit (spike)
kDiWeird, // the signal is anomalous, and neighbors another anomalous signal
//
kUnknown // to ease the interface with functions returning flags
};
// ES recHit flags
enum ESFlags {
kESGood,
kESDead,
kESHot,
kESPassBX,
kESTwoGoodRatios,
kESBadRatioFor12,
kESBadRatioFor23Upper,
kESBadRatioFor23Lower,
kESTS1Largest,
kESTS3Largest,
kESTS3Negative,
kESSaturated,
kESTS2Saturated,
kESTS3Saturated,
kESTS13Sigmas,
kESTS15Sigmas
};
/** bit structure of CaloRecHit::flags_ used in EcalRecHit:
*
* | 32 | 31...25 | 24...12 | 11...5 | 4...1 |
* | | | | |
* | | | | +--> reco flags ( 4 bits)
* | | | +--> chi2 for in time events ( 7 bits)
* | | +--> energy for out-of-time events (13 bits)
* | +--> chi2 for out-of-time events ( 7 bits)
* +--> spare ( 1 bit )
*/
EcalRecHit();
// by default a recHit is created with no flag
EcalRecHit(const DetId& id, float energy, float time, uint32_t flags = 0, uint32_t flagBits = 0);
/// get the id
// For the moment not returning a specific id for subdetector
DetId id() const { return DetId(detid());}
bool isRecovered() const;
bool isTimeValid() const;
bool isTimeErrorValid() const;
float chi2() const;
float outOfTimeChi2() const;
// set the energy for out of time events
// (only energy >= 0 will be stored)
float outOfTimeEnergy() const;
float timeError() const;
void setChi2( float chi2 );
void setOutOfTimeChi2( float chi2 );
void setOutOfTimeEnergy( float energy );
void setTimeError( uint8_t timeErrBits );
/// set the flags (from Flags or ESFlags)
void setFlag(int flag) {flagBits_|= (0x1 << flag);}
void unsetFlag(int flag) {flagBits_ &= ~(0x1 << flag);}
/// check if the flag is true
bool checkFlag(int flag) const{return flagBits_ & ( 0x1<<flag);}
/// apply a bitmask to our flags. Experts only
bool checkFlagMask(uint32_t mask) const { return flagBits_&mask; }
/// DEPRECATED provided for temporary backward compatibility
Flags recoFlag() const ;
private:
/// store rechit condition (see Flags enum) in a bit-wise way
uint32_t flagBits_;
};
std::ostream& operator<<(std::ostream& s, const EcalRecHit& hit);
#endif
At the RecHit level, the raw rechit is multiplied by Lasercalib x Intercalib x ADCtoGeV. In the EcalRecProducers/plugins area, see EcalRecHitWorkerSimple.cc, which shows how to obtain the Intercalib, Time calib, ADCtoGeV, channel status and laser correction data.
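The multiplication chain above can be written as a one-line sketch. The constants below are made-up illustration values, not real conditions data:

```python
# Sketch of the RecHit calibration chain described above:
#   calibrated energy = amplitude(ADC) * ADCtoGeV * intercalib * lasercorr
# All numbers here are toy values for illustration, not real ECAL constants.
def calibrated_energy(amplitude_adc, adc_to_gev, intercalib, lasercorr):
    return amplitude_adc * adc_to_gev * intercalib * lasercorr

calibrated_energy(200.0, 0.039, 1.02, 1.05)   # ~8.35 GeV with these toy numbers
```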
--
DavidCockerill - 06-Nov-2009