Heavy-ion centrality software information

Page under construction

Software release CMSSW_7_5_X

Software pieces in CMSSW

The package RecoHI/HiCentralityAlgos contains the code that produces the centrality and centrality bin objects in the release. The objects themselves are defined in DataFormats/HeavyIonEvent. The information below is valid for CMSSW_7_5_X and higher.

CentralityProducer

The CentralityProducer returns a reco::Centrality object, which collects the summed information used for the centrality determination from the different detectors.

The configuration file HiCentrality_cfi.py contains the settings for the CentralityProducer module hiCentrality, which is run as part of the standard heavy-ion reconstruction sequence found in Configuration/StandardSequences/python/ReconstructionHeavyIons_cff.py. For PbPb it is therefore enough to run the standard reconstruction sequence.

There is also a module for pPb, called pACentrality, which has to be run on the fly on data reconstructed with the standard pp reconstruction.

Running the CentralityProducer on the fly on a heavy-ion reco data sample

process.load('RecoHI.HiCentralityAlgos.HiCentrality_cfi')
process.p = cms.Path(... * process.hiCentrality * ...)
or in case of pp reco data
process.load('RecoHI.HiCentralityAlgos.pACentrality_cfi')
process.p = cms.Path(... * process.pACentrality * ...)
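
For the PbPb case, a minimal standalone configuration might look like the sketch below. The input file name is a placeholder; depending on which collections are stored in the input and which produce flags are enabled (see the next paragraph), geometry and conditions may also have to be loaded as in a standard reconstruction configuration.

import FWCore.ParameterSet.Config as cms

process = cms.Process("RECENTRALITY")
process.load('RecoHI.HiCentralityAlgos.HiCentrality_cfi')

process.source = cms.Source("PoolSource",
    fileNames = cms.untracked.vstring('file:pbpb_reco.root')   # placeholder input file
)
process.maxEvents = cms.untracked.PSet(input = cms.untracked.int32(10))

process.out = cms.OutputModule("PoolOutputModule",
    fileName = cms.untracked.string('centrality_rereco.root'),
    outputCommands = cms.untracked.vstring('drop *', 'keep *_hiCentrality_*_*')
)

process.p = cms.Path(process.hiCentrality)
process.e = cms.EndPath(process.out)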

The CentralityProducer is able to update the centrality information by reproducing some of the variables while keeping the rest unchanged. For example, if the track collection used as input to the centrality determination is changed, one can switch off the flags for all other variables and specify the label of the centrality object previously produced in these events:

process.hiCentrality.produceHFhits = False
process.hiCentrality.produceHFtowers = False
process.hiCentrality.produceEcalhits = False
process.hiCentrality.produceBasicClusters = False
process.hiCentrality.produceZDChits = False
process.hiCentrality.produceETmidRapidity = False
process.hiCentrality.producePixelhits = False
process.hiCentrality.produceTracks = True
process.hiCentrality.producePixelTracks = False
process.hiCentrality.srcReUse = cms.InputTag("hiCentrality","","FIRSTRECO")

CentralityBinProducer

The CentralityBinProducer.cc takes a centrality variable from the centrality object and a calibration from the database defined by the global tag and produces a new centrality bin object.

Default settings are available in CentralityBin_cfi.py, but it is recommended to always specify them explicitly in the configuration file:

process.centralityBin.Centrality = cms.InputTag("hiCentrality")
process.centralityBin.centralityVariable = cms.string("HFtowers")
process.centralityBin.nonDefaultGlauberModel = cms.string("")

The nonDefaultGlauberModel parameter has to be changed for non-default MC samples. The default MC sample is Hydjet Drum5 for Run2 and Hydjet Drum for Run1. The centrality bin producer looks for a database entry with the label centralityVariable+nonDefaultGlauberModel, so the default MC is stored in the database under the "HFtowers" label, while non-default samples carry a longer label such as "HFtowersHydjetDrum5".
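
Putting the pieces together, a minimal sketch for running the bin producer on the fly looks like the following (assuming CentralityBin_cfi.py sits in RecoHI/HiCentralityAlgos like the rest of the centrality code; the non-default model name mentioned in the comment is only an illustration):

process.load('RecoHI.HiCentralityAlgos.CentralityBin_cfi')
process.centralityBin.Centrality = cms.InputTag("hiCentrality")
process.centralityBin.centralityVariable = cms.string("HFtowers")
process.centralityBin.nonDefaultGlauberModel = cms.string("")   # e.g. a hypothetical "EposLHC" would make the producer look for the label "HFtowersEposLHC"
process.p = cms.Path(... * process.centralityBin * ...)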

When testing a new calibration that is not yet in the global tag, the following global tag extension is needed in the configuration file:

process.GlobalTag.snapshotTime = cms.string("9999-12-31 23:59:59.000")
process.GlobalTag.toGet.extend([
   cms.PSet(record = cms.string("HeavyIonRcd"),
      tag = cms.string("CentralityTable_HFtowers200_HydjetDrum5_v750x02_mc"),
      connect = cms.string("frontier://FrontierProd/CMS_CONDITIONS"),
      label = cms.untracked.string("HFtowers")
   ),
])

Centrality tables workflow

Centrality tables contain the bin boundaries for a certain centrality variable (e.g. HFtowers) and the corresponding average and RMS values of Npart, Ncoll and b.

The tools described below can be found in git: https://github.com/azsigmon/cmssw/blob/centrality_tools/HeavyIonsAnalysis/CentralityAnalysis/tools/

The input is an event tree that is present in HiForest files but can be produced in a standalone way as well. Config files are available in the forest branch for that: https://github.com/CmsHI/cmssw/tree/forest_CMSSW_7_5_0/HeavyIonsAnalysis/EventAnalysis/test

Producing tables for MC

The macro makeMCCentralityTable.C can be used to create a centrality table from an event tree. The event tree should have the variables from the centrality object and Npart, Ncoll, b values.

Things to be edited in the file or given at run time:

  • number of bins: e.g. 200
  • which centrality variable: e.g. "HFtowers"
  • centrality table tag: see naming conventions below
  • input file: output of the event analyzer, containing "hiEvtAnalyzer/HiTree" without any selection, i.e. all generated events
  • output file: it is useful to encode all relevant information in the name for later use

Before running the macro you need to run the rootlogon.C script, also available in the branch, or include the necessary lines in your own rootlogon file.

root -l makeMCCentralityTable.C+
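
As an illustration of what the macro does (this is not the macro code itself, and all names are invented for the sketch), the MC table construction amounts to slicing the generated events into equal-population bins of the centrality variable and storing the mean and RMS of Npart (and similarly Ncoll and b) in each slice:

import numpy as np

def mc_table(hf, npart, nbins=200):
    # bin boundaries such that each bin contains 1/nbins of all generated events
    edges = np.quantile(hf, np.linspace(0.0, 1.0, nbins + 1))
    idx = np.digitize(hf, edges[1:-1])            # bin index per event (0 = most peripheral here)
    return [(edges[i], npart[idx == i].mean(), npart[idx == i].std())
            for i in range(nbins)]                # (lower boundary, <Npart>, Npart RMS) per bin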

Producing tables for data

The macro makeDataCentralityTable.C can be used to create a centrality table from an event tree. The event tree should have the variables from the centrality object for events that are selected by the minimum bias trigger and the standard event selection.

Things to be edited in the file or given at run time:

  • number of bins: e.g. 200
  • which centrality variable: e.g. "HFtowers"
  • centrality table tag: see naming conventions below
  • input centrality table: default Glauber Npart, Ncoll, b average and RMS values for each bin
  • input file: output of the event analyzer, containing "hiEvtAnalyzer/HiTree" with the event selection applied
  • output file: it is useful to encode all relevant information in the name for later use
  • efficiency assumption: e.g. 0.99

To do (work in progress)

  • include possibility of efficiency histogram instead of one value

Before running the macro you need to run the rootlogon.C script, also available in the branch, or include the necessary lines in your own rootlogon file.

root -l makeDataCentralityTable.C+
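
To illustrate how the efficiency assumption enters (again a standalone Python sketch with invented names, not the macro itself): with an overall selection efficiency below one, the lost events are assumed to be the most peripheral ones, so the bin boundaries are placed at fixed fractions of the efficiency-corrected number of events:

import numpy as np

def data_bin_boundaries(hf_values, nbins=200, efficiency=0.99):
    hf_sorted = np.sort(hf_values)[::-1]            # most central (largest HF) first
    n_total = len(hf_sorted) / efficiency           # events expected at 100% efficiency
    boundaries = []
    for k in range(1, nbins):
        idx = int(round(k * n_total / nbins))       # rank in the corrected distribution
        # boundaries falling into the lost, most peripheral part are set to zero
        boundaries.append(hf_sorted[idx] if idx < len(hf_sorted) else 0.0)
    return boundaries                               # descending HF boundaries between bins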

Uploading tables to the database

Below are the steps to be followed for uploading the centrality table produced above to the database for use in the global tag.

naming conventions

CentralityTable_HFtowers200_HydjetDrum5_v750x01_mc

  • start with CentralityTable
  • centrality variable
  • number of bins
  • Glauber model: Glauber2015A is the default; non-default entries are named after the generator and tune
  • version number: v750 refers to the release; x01 can be increased for a new table
  • offline for data table and mc for MC

makeDBFromTFile.py

After the root file containing a CentralityTable object is produced, a db file has to be created with cmsRun makeDBFromTFile.py

In makeDBFromTFile.py only three variables have to be edited or given at run time:

  • outputTag: tag that is in the root file and will be inserted in the db file
  • inputFile: root file name
  • outputFile: db file name
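
For orientation, a DB-writing configuration in CMSSW typically has the structure sketched below; the actual makeDBFromTFile.py in the branch may differ in detail, and in addition it runs the module that reads the CentralityTable object from the input root file and hands it to the output service (not shown here).

import FWCore.ParameterSet.Config as cms

outputTag  = "CentralityTable_HFtowers200_HydjetDrum5_v750x01_mc"       # tag inside the root file and in the db file
inputFile  = "CentralityTable_HFtowers200_HydjetDrum5_v750x01_mc.root"  # read by the table-writer module (not shown)
outputFile = "CentralityTable_HFtowers200_HydjetDrum5_v750x01_mc.db"

process = cms.Process("DBWRITE")
process.load("CondCore.CondDB.CondDB_cfi")
process.CondDB.connect = "sqlite_file:" + outputFile

process.PoolDBOutputService = cms.Service("PoolDBOutputService",
    process.CondDB,
    timetype = cms.untracked.string("runnumber"),
    toPut = cms.VPSet(cms.PSet(record = cms.string("HeavyIonRcd"),
                               tag = cms.string(outputTag)))
)

process.source = cms.Source("EmptyIOVSource",
    timetype = cms.string("runnumber"),
    firstValue = cms.uint64(1), lastValue = cms.uint64(1),
    interval = cms.uint64(1)
)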

testing your db file locally

You can test the db file with the new centrality calibration locally by running the CentralityBinProducer over a few events. To do this, the following global tag extension is needed in the configuration file:

process.GlobalTag.snapshotTime = cms.string("9999-12-31 23:59:59.000")
process.GlobalTag.toGet.extend([
   cms.PSet(record = cms.string("HeavyIonRcd"),
      tag = cms.string("CentralityTable_HFtowers200_HydjetDrum5_v750x02_mc"),
      connect = cms.string("sqlite_file:/path/to/db/file.db"),
      label = cms.untracked.string("HFtowers")
   ),
])

uploading with dropbox

Follow the instructions on the DropBox twiki page to set up your upload.py once; this does not need to be repeated.

The usage of uploadConditions.py is simple:

$ uploadConditions.py ~/somewhere/filename.db

Create a text file as below with the same name as your db file. The "destination tag" is the one that will appear in the database, so the naming conventions above must be respected; the "input tag" is the one present in the db file, and for simplicity the two can be the same. The user text is a comment describing what exactly was uploaded, so use it. The "since" parameter (the IOV) is only relevant for data, where it gives the run number from which the uploaded table is valid. The IOVs allow one destination tag to hold multiple tables for different run ranges, but the run number has to increase with each upload.

{
    "destinationDatabase": "oracle://cms_orcon_prod/CMS_COND_PAT_000",
    "destinationTags": {
        "CentralityTable_HFtowers200_HydjetDrum5_v750x01_mc": {
            "dependencies": {},
            "synchronizeTo": "offline"
        }
    },
    "inputTag": "CentralityTable_HFtowers200_HydjetDrum5_v750x01_mc",
    "since": 1,
    "userText": "Centrality table for Hydjet MC at 5 TeV"
}

list of available database entries

What tags are uploaded to cms conditions?

From the terminal: conddb search CentralityTable

From a web browser: https://cms-conddb.cern.ch/browser/

Glauber tables

For data, the Npart, Ncoll and b values come from a Glauber model, taking detector effects into account. There are two methods to produce the values: the smearing method, which uses an MC sample with Npart and detector simulation, or fitting the data multiplicity distribution with a negative binomial distribution.

Glauber model parameters and details are found on a separate page https://twiki.cern.ch/twiki/bin/viewauth/CMS/Glauber5TeVPbPb

Smearing method

...

A Glauber table is produced with the Npart, Ncoll and b average and RMS values for each bin. This table, stored in a root file, can be used as input for the data centrality tables.

NDB fitting

Information on separate page https://twiki.cern.ch/twiki/bin/viewauth/CMS/GlauberNBD

Introduction to using git

cmsrel CMSSW_7_5_0
cd CMSSW_7_5_0/src
cmsenv
git cms-merge-topic -u CmsHI:forest_CMSSW_7_5_0
git checkout -b forest_CMSSW_7_5_0
scram b

change things as you need, test if it is working, etc.

git commit -a -m "comment what you did"
git push my-cmssw forest_CMSSW_7_5_0

Point the forest experts to the forest branch in your own cmssw fork on git so that they can merge your changes into the central forest branch.

Old page

Software development using git

Centrality branch in CMSSW_5_3_20 is set up in github within the CmsHI community: https://github.com/CmsHI/cmssw/tree/centrality_5_3_20

Packages concerning centrality and event selection include
RecoHI/HiCentralityAlgos
HeavyIonsAnalysis/Configuration
HeavyIonsAnalysis/VertexAnalysis

For testing purposes the event analyzer is also added
HeavyIonsAnalysis/EventAnalysis

For centrality analysis, calibration, table production, etc., a development branch will be needed that also includes
HeavyIonsAnalysis/CentralityAnalysis
a package that is currently only in the forest branch.

Instructions

cmsrel CMSSW_5_3_20
cd CMSSW_5_3_20/src
cmsenv
git cms-addpkg RecoHI/HiCentralityAlgos
git cms-merge-topic -u CmsHI:centrality_5_3_20
scram b
git checkout -b centrality_dev

change things as you need, test if it is working, etc.

git commit -a -m "comment what you did"
git remote add cmshi git@github.com:CmsHI/cmssw.git
git push cmshi centrality_dev

Software pieces in CMSSW

These three packages are needed for every analysis that uses the centrality software pieces:

CondFormats/HIObjects
DataFormats/HeavyIonEvent
RecoHI/HiCentralityAlgos

For the latest version developed for pPb

cvs co -r pPbProd_v04 DataFormats/HeavyIonEvent
cvs co -r pPbProd_v06 RecoHI/HiCentralityAlgos
cvs co -r pPbProd_v07 HeavyIonsAnalysis/Configuration
cvs co -d Appeltel/RpPbAnalysis UserCode/Appeltel/RpPbAnalysis

CentralityProducer

The CentralityProducer returns a reco::Centrality object, which collects the summed information used for the centrality determination from the different detectors.

The configuration file RecoHI/HiCentralityAlgos/python/HiCentrality_cfi.py contains the settings for the CentralityProducer module hiCentrality, which is run as part of the standard heavy-ion reconstruction sequence found in Configuration/StandardSequences/python/ReconstructionHeavyIons_cff.py. For PbPb it is therefore enough to run the standard reconstruction sequence.

There is a new module for pPb, called pACentrality.

The RecoHI/HiCentralityAlgos/src/CentralityProducer.cc can be run on the fly by including in the cfg:

process.load('RecoHI.HiCentralityAlgos.HiCentrality_cfi')
process.p = cms.Path(... * process.pACentrality * ...)

The centrality producer is able to update the centrality information by reproducing some of the variables while keeping the rest unchanged. For example, if the track collection used as input to the centrality determination is changed, one can switch off the flags for all other variables and specify the label of the centrality object previously produced in these events:

process.hiCentrality.produceHFhits = False
process.hiCentrality.produceHFtowers = False
process.hiCentrality.produceEcalhits = False
process.hiCentrality.produceBasicClusters = False
process.hiCentrality.produceZDChits = False
process.hiCentrality.produceETmidRapidity = False
process.hiCentrality.producePixelhits = False
process.hiCentrality.produceTracks = True
process.hiCentrality.producePixelTracks = False
process.hiCentrality.srcReUse = cms.InputTag("hiCentrality","","FIRSTRECO")

CentralityProvider

The CentralityProvider takes the centralityVariable string from the config file and provides the bins for this variable from a table that was uploaded to the conditions database. CommonFunctions_cff takes the uploaded tag from the database and uses it with the given label, which should be identical to the centralityVariable string.

from HeavyIonsAnalysis.Configuration.CommonFunctions_cff import *
overrideCentrality(process)

process.HeavyIonGlobalParameters = cms.PSet(
  centralityVariable = cms.string("HFtowersPlusTrunc"),
  nonDefaultGlauberModel = cms.string(""),
  centralitySrc = cms.InputTag("pACentrality"),
  pPbRunFlip = cms.untracked.uint32(211313)
  )

CentralityBins object

EDAnalyzer

In the full framework, one can read the values directly from the DB. Instructions are as follows:

  • Make an EDAnalyzer
  • Include the header for Centrality:
#include "DataFormats/HeavyIonEvent/interface/Centrality.h"
  • Declare a pointer to the centrality bins object, and initialize it with the getCentralityBinsFromDB function:
const CentralityBins *cbins_ = 0;
...
if(!cbins_) cbins_ = getCentralityBinsFromDB(iSetup);

where iSetup is the EventSetup. Make sure you re-initialize for every run in case the bins are run-dependent.

  • To build the analyzer, you need the following dependencies:
<use name=DataFormats/HeavyIonEvent>
<use name=CondFormats/HIObjects>
<use name=CondFormats/DataRecord>
  • If the centrality tables are not a part of the global tag yet, you need to load an ESSource for the EventSetup.
process.load('Configuration.StandardSequences.FrontierConditions_GlobalTag_cff')
process.GlobalTag.globaltag = 'MC_38Y_V8::All'

process.GlobalTag.toGet = cms.VPSet(
    cms.PSet(record = cms.string("HeavyIonRcd"),
             tag = cms.string("CentralityTable_HFhits40_Hydjet2760GeV_v0_mc"),
             connect = cms.untracked.string("frontier://FrontierPrep/CMS_COND_PHYSICSTOOLS")
             )
    )

FWlite

The CentralityBins object is the same in the full framework and in FWLite; however, it is more convenient to use the RunMap directly when running in FWLite.

// event loop ... {
   edm::Handle<reco::Centrality> cent;   // reco::Centrality object produced by hiCentrality
   iEvent.getByLabel(edm::InputTag("hiCentrality"),cent);
   double hf = cent->EtHFhitSum();

   int bin = HFhitBins->getBin(hf);      // HFhitBins: CentralityBins object, e.g. from getCentralityBinsFromDB

   // in full framework, making sure the bins are re-initialized for each run.
   double npartMean = HFhitBins->NpartMean(hf);
   double npartSigma = HFhitBins->NpartSigma(hf);

   // in FWLite
   int runnum = iEvent.id().run();
   double npartMean = HFhitBinMap[runnum]->NpartMean(hf);
   double npartSigma = HFhitBinMap[runnum]->NpartSigma(hf);
//  ... end of event loop  }

Centrality tables

All the tools for creating centrality tables can be found in RecoHI/HiCentralityAlgos/tools/

Summary of the workflow

Basically makeTable2.C creates the centrality table root files, with different tags for data and MC. For MC the input is simply the HiTree (in the forest or produced separately), but for data extra input files are needed besides the HiTree: an efficiency histogram that contains the selection efficiency versus the centrality variable (this became necessary for pPb, where there is inefficiency in more than one bin) and an input centrality table. The efficiency histograms can be created e.g. with getEfficiency.C. The input centrality table comes from the smeared Glauber calculation and is created by simulate.C; its input is the Glauber ntuple and some 2D histograms created from the HiForest with ProduceResponsePlots.C. The last two macros have also been merged into a new version called makeSmearedTable.C, also available in cvs.

(Figure: tableworkflow.png, a diagram of the table production workflow)

makeTable2.C

Used for creating data and MC tables.

  • The MC tables are just sliced in a given variable and the corresponding Npart mean and RMS values are taken as the histogram's mean and RMS.
  • For the data tables an efficiency correction and an input table from the smeared Glauber calculation are needed. The tag should also specify the efficiency scenario and the Glauber model.

makeDBFromTFile.py

From the root files created by makeTable2.C, create db files for uploading to the database. The input tag (the directory name in the root file) must be specified.

What tags are in the db file?

cmscond_list_iov -c sqlite_file:TestFile.db -a

Uploading with dropbox

Follow the instructions on the DropBox twiki page.

Naming conventions for tags in the database:

CentralityTable_HFplus100_PA2012B_v533x01_offline

  • start with CentralityTable
  • something that identifies the variable
  • Glauber model: PA2012B is the default; non-default entries are named after the generator
  • version number: v533 refers to the release; x01 can be increased for a new table
  • offline for data table and mc for MC

What tags are uploaded to cms conditions?

conddb search CentralityTable

Files available in cvs to help with the upload: upload.py (downloadable from the dropbox twiki page), template.txt (a skeleton for the text file to be uploaded with the same name as the db file), makeJEC.sh (example script to make more than one text file for the different db files).

Example text file (currently used for pPb and Pbp data):

{
    "destinationDatabase": "oracle://cms_orcon_prod/CMS_COND_31X_PHYSICSTOOLS",
    "destinationTags": {
        "CentralityTable_HFplus100_PA2012B_v533x01_offline": {
            "dependencies": {},
            "synchronizeTo": "offline"
        }
    },
    "inputTag": "CentralityTable_HFplus100_PA2012B_v533x01_offline",
    "since": 211300,
    "userText": ""
}

simulate.C

It reads the output file from ProduceResponsePlots.C (basically using the 2-D histograms) and also a Glauber output file (Phob_Glau_pPb_sNN70mb_v15_1M_dmin04.root; only b, npart and ncoll are used from it).

The output is an ntuple with some global variables (including the centrality bin for each Glauber event) and 2-D histograms (showing npart, ncoll or b vs. the centrality variable).

The main thing the code does is: read each Glauber event, get its npart, and use the 2D smearing histograms from ProduceResponsePlots.C to find the Gen-ET and then the Reco-HF energy for this event after smearing (this is done by double Ana::getHFbyET(double Npart); void Ana::getProjections() is used before that).

After reading 1M Glauber events, you can get the Reco-HF energy distribution and define centrality for each event.

Then you can obtain the npart distribution for each centrality bin. The code fits the npart distribution in each centrality bin to get the Npart mean and Npart_RMS (done by void fitSlices(TH2* hCorr, TF1* func)).
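
The same chain in illustrative Python pseudo-code (names and interfaces are invented for the sketch; the macro itself works on the 2-D response histograms from ProduceResponsePlots.C):

import numpy as np

def smear_and_bin(glauber_npart, sample_genet, sample_hf, nbins=100):
    # sample_genet(npart): draw a Gen-ET value from the Gen-ET vs Npart response
    # sample_hf(genet):    draw a Reco-HF energy from the HF vs Gen-ET response
    hf = np.array([sample_hf(sample_genet(n)) for n in glauber_npart])
    # centrality bins = equal-population slices of the smeared Reco-HF distribution
    edges = np.quantile(hf, np.linspace(0.0, 1.0, nbins + 1))
    bins = np.digitize(hf, edges[1:-1])             # 0 = most peripheral in this sketch
    return hf, bins

# with the per-event bins in hand, the Npart distribution of each bin is fitted
# (fitSlices in the macro) to extract the Npart mean and RMS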

ProduceResponsePlots.C

The input is an MC HiForest. For different event selection cases (bool accept[MAXHIST] = {1,1,1,1,1,1,1,1,1,1,1};) it fills some 2D histograms, for example Gen-particle ET vs. Npart and HF energy vs. Gen-particle ET. It also saves an ntuple with some global variables.

The histogram of Gen-particle ET vs. Npart shows the smearing of Gen-ET from each Npart. The histogram of HF energy vs. Gen-particle ET shows the smearing of HF energy from each Gen-ET.

Other tools

Inspecting db file with sqlite3

Try the following commands, starting from shell:

sqlite3 CentralityTables.db
.tables
.dump CENTRAL_M_TABLE
.quit
