Facilitators

Responsible for this short exercise at CMSDAS@Beijing:

Introduction

Color coding

Throughout this TWiki page, the following colour-coding convention is used:

  • Commands that you should run will be shown in a gray box, e.g.:
    ls -ltr
  • Code snippets to be looked at (e.g., as examples) or edited will be shown in a pink box, e.g.:
     std::cout << "This is a test" << std::endl; 
  • Output such as screen printouts will be shown in a green box, e.g.:
    This is a test

Part 1: Using MadGraph to generate parton-level events

MG Exercise 1: Running standalone MadGraph (interactive mode)

MadGraph (MG) is a tool for matrix-element (ME) level calculations. In LHC physics, we describe a physics process by energy scale, going from large to small scales, i.e. from the hard to the soft part of the process. Since QCD is perturbative at high energy scales, the hard process can be calculated order by order in alpha_s at the ME level. At this level we construct the "skeleton" of the process.

(Figure: illustration of the hard-scattering process; image from http://www.ep.ph.bham.ac.uk/general/seminars/slides/Dimitris-Varouchas-2015.pdf)

Let's start with the Drell-Yan process (q q~ -> Z/gamma* -> l+ l-).

Connect to lxplus or the PKU machine via ssh.

Get the MG5 source:

wget https://cms-project-generators.web.cern.ch/cms-project-generators/MG5_aMC_v2.6.5.tar.gz
or, if you use the PKU machine:
cp /home/cmsdas/junho/MGsource/MG5_aMC_v2.6.5.tar.gz . 

Untar the source and start MG5:

tar xf MG5_aMC_v2.6.5.tar.gz
cd MG5_aMC_v2_6_5/bin/
./mg5_aMC

set automatic_html_opening False
generate p p > l+ l- 
output myrun
launch myrun 
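
The four commands after ./mg5_aMC are typed at the interactive MG5 prompt. They can also be collected in a plain text file and passed to mg5_aMC to run non-interactively; a minimal sketch (the file name proc_dy.txt is just an illustrative choice):

cat > proc_dy.txt << 'EOF'
set automatic_html_opening False
generate p p > l+ l-
output myrun
launch myrun
EOF
./mg5_aMC proc_dy.txt   ## run the same commands without the interactive prompt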

When the run finishes, the cross section (xsec) of the process is printed.

Let's look at the detailed information of the process.

- Diagrams

ls myrun/SubProcesses/P1_qq_ll/*.ps 
ls myrun/SubProcesses/P1_qq_ll/*.jpg 
(diagram1, diagram2: the two tree-level diagrams, with photon and Z exchange)

- Event record

Open the LHE file with a text editor. The file is gzipped; recent versions of vim and emacs can usually open .gz files transparently, otherwise gunzip it first.

vi myrun/Events/run_01/unweighted_events.lhe.gz

(or

emacs -nw myrun/Events/run_01/unweighted_events.lhe.gz 

)


Find an <event> block, e.g.:

<event>
 5 1 +1.6860000e+03 8.89432000e+01 7.54677100e-03 1.30543100e-01
       -1 -1 0 0 0 501 -0.0000000000e+00 +0.0000000000e+00 +2.2398225191e+01 2.2398225191e+01 0.0000000000e+00 0.0000e+00 1.0000e+00
        1 -1 0 0 501 0 +0.0000000000e+00 -0.0000000000e+00 -8.8298210922e+01 8.8298210922e+01 0.0000000000e+00 0.0000e+00 -1.0000e+00
       23 2 1 2 0 0 +0.0000000000e+00 +0.0000000000e+00 -6.5899985731e+01 1.1069643611e+02 8.8943200127e+01 0.0000e+00 0.0000e+00
      -11 1 3 3 0 0 +1.6824537576e+01 -5.9830707237e+00 +1.7740426939e+01 2.5171113363e+01 0.0000000000e+00 0.0000e+00 1.0000e+00
       11 1 3 3 0 0 -1.6824537576e+01 +5.9830707237e+00 -8.3640412670e+01 8.5525322750e+01 0.0000000000e+00 0.0000e+00 -1.0000e+00

These records are written in the Les Houches Event (LHE) format (https://arxiv.org/abs/hep-ph/0109068).
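
For orientation, the fields follow the layout defined in the LHE paper; annotated against the example above (the lines below are comments, not part of the file):

## event header:   NUP IDPRUP XWGTUP SCALUP AQEDUP AQCDUP
##   "5 1 +1.686e+03 8.894e+01 ..."  -> 5 particles, process ID 1, event weight,
##                                      event scale [GeV], alpha_QED, alpha_s
## particle lines:  IDUP ISTUP MOTHUP1 MOTHUP2 ICOLUP1 ICOLUP2 Px Py Pz E M VTIMUP SPINUP
##   -1 / 1   : incoming d~ and d quarks (status -1)
##   23       : intermediate Z boson (status 2), with mothers 1 and 2
##   -11 / 11 : outgoing e+ and e- (status 1)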

MG Exercise 2: Gridpack Generation

A gridpack is a precompiled package for running event generation of a specific process. It is better suited to large-scale sample production because it already contains all the information and setup needed for the process. Let's make a gridpack for the DY process.

export VO_CMS_SW_DIR=/cvmfs/cms.cern.ch
source $VO_CMS_SW_DIR/cmsset_default.sh
#git clone git@github.com:soarnsoar/genproductions.git -b DAS2019
tar xf /home/cmsdas/junho/genproductions.tar
cd genproductions/bin/MadGraph5_aMCatNLO
./gridpack_generation.sh dyellell0j_5f_LO_MLM cards/examples/dyellell0j_5f_LO_MLM local ALL slc7_amd64_gcc630 CMSSW_9_3_16
ls dyellell0j_5f_LO_MLM_slc7_amd64_gcc630_CMSSW_9_3_16_tarball.tar.xz

Now, anyone who wants to produce DY->ll events can use this gridpack you've made!

Only two steps are needed to generate events from it:

Step 1) untar the gridpack

Step 2) run "./runcmsgrid.sh <NEVENT> <SEED> <NCORE>"



You can check this procedure in the script used in official CMS MC production!

In an example of an MC production setup, you can see that 'run_generic_tarball_cvmfs.sh' is set as the 'scriptName'.

Let's look into what this script does.

run_generic_tarball_cvmfs.sh

1) untar the gridpack (lines 65-66 of the script)

2) run "runcmsgrid.sh" (lines 72-73 of the script)

Yes: only untar -> runcmsgrid.sh.

Then, let's make events with the gridpack.

mkdir -p test_gridpack ## make a test dir
cd test_gridpack
tar -xf ../dyellell0j_5f_LO_MLM_slc7_amd64_gcc630_CMSSW_9_3_16_tarball.tar.xz
./runcmsgrid.sh 100 1 1 ## 100 events with seed=1, ncore=1
ls cmsgrid_final.lhe ## output LHE file

Can you see the DY events in the LHE output?
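
A quick way to check is to count the <event> blocks and look at a few final-state lepton lines, for example (the column alignment in the file may differ slightly):

grep -c "<event>" cmsgrid_final.lhe ## should match the 100 generated events
grep -E '^ *-?1[13] ' cmsgrid_final.lhe | head -4 ## a few electron/muon lines (PDG IDs +-11, +-13)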

MG Exercise 3: Parton Shower

Now we know how to get the "skeleton" of a physics process, the hard process. Next, let's add some detail: the softer-scale physics (parton showering).

(Figure: illustration of parton showering and hadronization; image from http://www.ep.ph.bham.ac.uk/general/seminars/slides/Dimitris-Varouchas-2015.pdf)

mkdir -p PartonShowerProduction
cd PartonShowerProduction
mv ../dyellell0j_5f_LO_MLM_slc7_amd64_gcc630_CMSSW_9_3_16_tarball.tar.xz .
cp ../../../python/DAS/* . ## genproductions/python/DAS/ 
#copy 'setup.sh'
#DY0j_MLM_fragment.py
source setup.sh

Here's what setup.sh does:

  • 1) set up the CMSSW environment
  • 2) copy the fragment file for the sample production
The fragment file contains basic information for the production, such as:

- the gridpack location

- the name of the script used to run the gridpack

- options for the parton showering

(Here the fragment file is DY0j_MLM_fragment.py.)

  • 3) make the python configuration file from the fragment file.

>cmsDriver.py [options]

Now we have the output python configuration file; its name is set here to 'DAS_MG_EXERCISE.py'. For reference, a typical cmsDriver.py invocation is sketched below.
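
A cmsDriver.py command producing such a config might look roughly like the following (a sketch only: the exact flags, conditions, and fragment path are defined in setup.sh, and the values shown here are illustrative assumptions):

cmsDriver.py Configuration/GenProduction/python/DY0j_MLM_fragment.py \
    --python_filename DAS_MG_EXERCISE.py --fileout file:DAS_MG_EXERCISE.root \
    --mc --eventcontent RAWSIM,LHE --datatier GEN,LHE \
    --step LHE,GEN --conditions auto:run2_mc -n 100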



Let's run the script:

cmsRun DAS_MG_EXERCISE.py &> log.txt
cp /home/cmsdas/junho/Output/*.root . ## alternatively, copy the pre-generated outputs

We get two kinds of output:

-> DAS_MG_EXERCISE_inLHE.root

-> DAS_MG_EXERCISE.root

DAS_MG_EXERCISE_inLHE.root is the EDM ROOT version of the .lhe output from MG.

DAS_MG_EXERCISE.root is the output after the parton shower (PS) has been applied to the LHE events.

Let's look at some kinematic distributions of the generated events.

cd CMSSW_10_0_2/src
mkdir -p Tools/
cd Tools
##Get the analyzer sources (use the git@github.com: SSH URLs if you prefer)
git clone https://github.com/soarnsoar/LHE_Analyzer.git -b DAS2019
git clone https://github.com/soarnsoar/GEN_Analyzer.git -b DAS2019
cd ../
scram b ## compile the analyzers
cd ../../
cmsRun $CMSSW_BASE/src/Tools/LHE_Analyzer/python/run_DYanalyzerLHE.py
cmsRun $CMSSW_BASE/src/Tools/GEN_Analyzer/python/run_DYanalyzerGEN.py
ls histoGEN.root
ls histoLHE.root

Open the ROOT files histoGEN.root and histoLHE.root and check the histograms; a quick way to browse them is sketched after the list below. They contain the following distributions:

pT(ll), eta(ll), phi(ll), M(ll)

pT(l), eta(l), phi(l), M(l)

l = e+ e- µ+ µ-
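
To open and browse both files at once (the exact histogram names depend on the analyzer code):

root -l histoLHE.root histoGEN.root
## at the ROOT prompt:
##   _file0->ls(); _file1->ls();   // list the histograms in each file
##   new TBrowser;                 // browse them interactively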

If you look at pT(µµ) and pT(ee), you can see a different shape at the LHE level and at the GEN level. Why?

Part 2: Using Sherpa to generate parton-level events

Part 3: Using Powheg Box to generate parton-level events

Here we will see how to use the Powheg Box generator to produce MC events. We are going to use a python script provided by the generator group to build the Powheg Box binary and to perform the rest of the jobs.

A detailed description of the usage of the script can be found at this link: https://twiki.cern.ch/twiki/bin/view/CMS/PowhegBOXPrecompiled#Gridpack_production_with_multipl . Below we briefly go through the whole procedure with an example of gg_H production.

Step-by-step tutorial for POWHEG BOX 2

Download all the necessary scripts and prepare input cards

  • Log on to PKU CMSDAS machine (SLC7)
    ssh -Y -p 9001 hepfarm02.phy.pku.edu.cn

  • Create a CMSSW work area (on lxplus, preferably under /afs/cern.ch/work so that you have enough disk space). Note the corresponding SCRAM_ARCH.
    export SCRAM_ARCH=slc7_amd64_gcc630 
    source /cvmfs/cms.cern.ch/cmsset_default.sh
    cmsrel CMSSW_9_3_0
    cd CMSSW_9_3_0/src
    cmsenv

  • Checkout the scripts
    # one possibility is to clone the genproductions repository
    git clone https://github.com/cms-sw/genproductions.git genproductions
    cd genproductions/bin/Powheg

    # otherwise copy the relevant files from an existing one
    genpath=/path/to/your/local/genproduction
    cp $genpath/bin/Powheg/*.py .
    cp $genpath/bin/Powheg/*.sh .
    cp -r $genpath/bin/Powheg/patches .

Alternatively, one can copy the zip files directly.

   
    unzip /home/cmsdas/yuanchao/genproductions.zip

    cd genproductions/bin/Powheg


  • Get the input card files
    mkdir gg_H; cd gg_H
    wget --no-check-certificate https://raw.githubusercontent.com/cms-sw/genproductions/master/bin/Powheg/examples/gg_H_quark-mass-effects_withJHUGen_NNPDF30_13TeV/gg_H_quark-mass-effects_NNPDF30_13TeV.input
    wget --no-check-certificate https://raw.githubusercontent.com/cms-sw/genproductions/master/bin/Powheg/examples/gg_H_quark-mass-effects_withJHUGen_NNPDF30_13TeV/JHUGen.input
    cd ..

Other examples can be found in the genproductions repository on GitHub.



Additional batch queue configurations (HTCondor ONLY, to be tested on PKU cluster)

You may need specific resource requirements for some of the POWHEG jobs described below (source compilation, MC-grid computation, etc.). For example, the ttH process requires a lot of memory for compiling, and some processes (HJ NNLOPS, WWJ) produce large text/.top files, so the work directory used when running events must be large enough.

In such cases HTCondor jobs may fail because the machine where they run does not have enough disk space, memory, etc. One may add a text file named additional.condorConf in the main work directory from which the jobs are submitted; its HTCondor commands will be automatically appended to all HTCondor configuration files launched from that directory. Typical content of additional.condorConf could be:

request_memory =                  2000M
request_disk   =                  500M

Be careful: if you put conflicting/impossible requirements, the HTCondor jobs will stay pending forever.
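
If jobs do stay pending, the standard HTCondor tools help diagnose why; for example (the JobId is a placeholder):

condor_q                          ## list your jobs and their status
condor_q -better-analyze <JobId>  ## explain why a given job does not match any machine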



Gridpack production with single processor

If you want to create a gridpack in one go:

    cmsenv
    ON HTCondor:             python ./run_pwg_condor.py -p f -i gg_H/gg_H_quark-mass-effects_NNPDF30_13TeV.input -m gg_H_quark-mass-effects -f my_ggH -q longlunch -n 1000 -d 1
    ON LSF (PHASING OUT):    python ./run_pwg.py -p f -i gg_H/gg_H_quark-mass-effects_NNPDF30_13TeV.input -m gg_H_quark-mass-effects -f my_ggH -q 2nd -n 1000

 Definition of the input parameters:
  (1) -p grid production stage [f]  (one go)
  (2) -i input card name [powheg.input]
  (3) -m process name (process defined in POWHEG)
  (4) -f working folder [my_ggH]
  (5) -q job flavor / batch queue name (run locally if not specified)
  (6) -n the number of events to run
  (7) -d bypass the LHAPDF set check
 

A tarball with the name below is created (a quick way to inspect it is shown after the notes below):

    my_ggH_gg_H_quark-mass-effects_<SCRAM_ARCH>_<CMSSW_VERSION>.tgz
  • If a proper URL path is given for the input cards, run_pwg.py will download it, e.g.:
     -i slc6_amd64_gcc481/powheg/V2.0/13TeV/examples/DMGG_NNPDF30_13TeV/DMGG_NNPDF30_13TeV.input 
  • Please make sure that the "-m" option is fed with the correct process name listed in the previous section. Only the specified process code will be compiled. No binary will be produced if a wrong process name is given.
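
To quickly verify the content of the produced tarball (the exact file name depends on your SCRAM_ARCH and CMSSW version), you can list the first few files inside it:

    tar tzf my_ggH_gg_H_quark-mass-effects_*.tgz | head   ## list the first files in the gridpack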

If you want to separate the steps of compiling, producing the grids, and creating the gridpack (for debugging or testing purposes):

Step 1: Compiling the POWHEG source

    cmsenv
    python ./run_pwg_condor.py -p 0 -i gg_H/gg_H_quark-mass-effects_NNPDF30_13TeV.input -m gg_H_quark-mass-effects -f my_ggH

 Definition of the input parameters:
  (1) -p grid production stage [0]  (compiling source)
  (2) -i input card name [powheg.input]
  (3) -m process name (process defined in POWHEG)
  (4) -f working folder [my_ggH]
  (5) -q  job flavor / batch queue name (run locally if not specified)
 

  • If a proper URL path is given for the input cards, run_pwg.py will download it, e.g.:
     -i slc6_amd64_gcc481/powheg/V2.0/13TeV/examples/DMGG_NNPDF30_13TeV/DMGG_NNPDF30_13TeV.input 
  • Please make sure that the "-m" option is fed with the correct process name listed in the previous section.

Step 2: Producing grids

    ON HTCondor:             python ./run_pwg_condor.py -p 123 -i gg_H/gg_H_quark-mass-effects_NNPDF30_13TeV.input -m gg_H_quark-mass-effects -f my_ggH -q workday -n 1000
    ON LSF (PHASING OUT):    python ./run_pwg.py -p 123 -i gg_H/gg_H_quark-mass-effects_NNPDF30_13TeV.input -m gg_H_quark-mass-effects -f my_ggH -q 2nd -n 1000

 Definition of the input parameters:
  (1) -p grid production stage; '123' stands for a single run through the three internal stages
  (2) -i input card name [powheg.input]
  (3) -m process name (process defined in POWHEG)
  (4) -f working folder [testProd]
  (5) -q job flavor / batch queue name (run locally if not specified)
  (6) -n the number of events to run

Step 3: Creating POWHEG gridpack tarball

    python ./run_pwg_condor.py -p 9 -i gg_H/gg_H_quark-mass-effects_NNPDF30_13TeV.input -m gg_H_quark-mass-effects -f my_ggH -k 1

 Definition of the input parameters:
  (1) -p grid production stage; '9' stands for tarball creation
  (2) -i input card name [powheg.input]
  (3) -m process name (process defined in POWHEG)
  (4) -f working folder [my_ggH]
  (5) -k keep the validation .top plots [0]



Gridpack production with multiple processors (needed in case of complex processes, e.g. those using MINLO or NNLOPS)


First, download all the needed files, including the input datacard, following these instructions: https://twiki.cern.ch/twiki/bin/view/CMS/PowhegBOXPrecompiled#Download_all_the_necessary_scrip

PLEASE READ BEFORE PROCEEDING: The basic features of the multicore grid production are outlined in section 4.1 of the W2jet process manual, attached to this page: manual-BOX-WZ2jet.pdf. They are equally applicable to all the POWHEG processes supporting it.

Set up the datacard: number of calls, iterations, etc.

The important figure of merit is the TOTAL number of calls, i.e. (number of cores) x (number of calls) x (number of iterations). The individual numbers can vary, for instance according to the available CPUs; what matters is the product (for instance, 10 cores with xgriditeration up to 5 and ncall1 100000 is equivalent to 100 cores with xgriditeration up to 3 and ncall1 200000). IMPORTANT: in the multi-core approach itmx1 is NOT used; what matters is xgriditeration. By contrast, itmx2 is relevant.

If the TOTAL number of calls for stage 1 is X, a rule of thumb for stage 2 is to use a TOTAL number of calls equal to 3*X (one should check the NLO .top files to make sure everything is OK before going to stage 3). Another rule-of-thumb recommendation is to ensure that for each core the .top plots are not a complete disaster but look "decent".

Finally, for nubound one should use a number of the same order as the calls used for the previous stages.

The suggested setup for the number of calls during the grid creation (under review with the POWHEG authors; ncall2, itmx2 and nubound have been updated but are still to be checked) is:

ncall1  100000  ! number of calls for initializing the integration grid
itmx1    1     ! number of iterations for initializing the integration grid
ncall2  100000  ! number of calls for computing the integral and finding upper bound
itmx2     5    ! number of iterations for computing the integral and finding upper bound
nubound 200000  ! number of bbarra calls to setup norm of upper bounding function

to be used with 10 parallel jobs for step 1, launched 5 times (more details below).
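
With these numbers, the TOTAL number of stage-1 calls is, for example, 10 jobs x 5 xgrid iterations x 100000 calls = 5,000,000.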

The suggested phase-space folding, which reduces the fraction of negative-weight events from roughly 30% to the order of 5%, is:

foldcsi   2    ! number of folds on csi integration
foldy     2    ! number of folds on  y  integration
foldphi   2    ! number of folds on phi integration

for Zj, Wj, HJ. It can be used with the LSF batch queue 1nd or the 'longlunch' HTCondor job flavor. Further reduction can be achieved by using:

foldcsi   2    ! number of folds on csi integration
foldy     5    ! number of folds on  y  integration
foldphi   2    ! number of folds on phi integration

but then the 'longlunch' job flavor will not be sufficient; some testing with 'tomorrow' or 'testmatch' is needed (the latter is discouraged).

A script that automates all the steps described below has been added and can be used, for example, as:

ON HTCondor:            python ./run_pwg_parallel_condor.py -i powheg_Zj.input -m Zj -f my_Zj -q longlunch -j 10
ON LSF (PHASING OUT):   python ./run_pwg_parallel.py -i powheg_Zj.input -m Zj -f my_Zj -q 2nd -j 10

Further details and steps performed are listed below.

Usage of the python script

python ./run_pwg_parallel_condor.py -h
Usage: run_pwg_parallel_condor.py [options]

Options:
  -h, --help            show this help message and exit
  -f FOLDERNAME, --folderName=FOLDERNAME
                        local folder and last eos folder name[testProd]
  -j NUMJOBS, --numJobs=NUMJOBS
                        number of jobs to be used for multicore grid step
                        1,2,3
  -x NUMX, --numX=NUMX  number of xgrid iterations for multicore grid step 1
  -i INPUTTEMPLATE, --inputTemplate=INPUTTEMPLATE
                        input cfg file (fixed) [=powheg.input]
  -q DOQUEUE, --doQueue=DOQUEUE
                        Condor job flavor [longlunch]
  -m PRCNAME, --prcName=PRCNAME
                        POWHEG process name [DMGG]
  --step3pilot          do a pilot job to combine the grids, calculate upper
                        bounds afterwards (otherwise afs jobs might fail)
  --dry-run             show commands only, do not submit

  • The same working modes as for the basic script apply.

Step 1: Compiling the POWHEG source

    cmsenv
    # interactive mode (recommended, but for complex processes, it could take a while)
    python ./run_pwg_condor.py -p 0 -i powheg_Zj.input -m Zj -f my_Zj 
    # batch mode
    ON HTCondor:           python ./run_pwg_condor.py -p 0 -i powheg_Zj.input -m Zj -f my_Zj -q microcentury
    ON LSF (PHASING OUT):  python ./run_pwg.py -p 0 -i powheg_Zj.input -m Zj -f my_Zj -q 8nh

Definition of the input parameters:

  (1) -p grid production stage [0]  (compiling source)
  (2) -i input card name [powheg.input]
  (3) -m process name (process defined in POWHEG)
  (4) -f working folder [my_ggH]
  (5) -q job flavor / batch queue name (run locally if not specified)


  • Please make sure that the "-m" option is always fed with the same process name (Zj in this example).
For the complete list of processes, refer to the POWHEG website http://powhegbox.mib.infn.it

Step 2: Producing grids with 3 separate internal stages

This step must be run 3 times, corresponding to the different internal stages of the grid production. Each stage is labeled by a different number (1, 2, or 3) passed to the -p option.

The submission is split into chunks according to the total number of required events (-t) divided by the number of events per job (-n). The number of events is fictitious, unless you want to produce a specific number of events for purposes other than validation. A low number of events per job, like 1000, ensures no large additional waiting time.
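
For example, -t 10000 together with -n 1000 would split the submission into 10 chunks.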

Each stage must be run only after all jobs of the previous stage have finished. One can double-check that the output files of the previous stage are read by looking at the log files in the working directory. ON HTCondor:

# step 1-1
    python ./run_pwg_condor.py -p 1 -x 1 -i powheg_Zj.input -m Zj -f my_Zj -q longlunch -j 10
# step 1-2
    python ./run_pwg_condor.py -p 1 -x 2 -i powheg_Zj.input -m Zj -f my_Zj -q longlunch -j 10
# step 1-3
    python ./run_pwg_condor.py -p 1 -x 3 -i powheg_Zj.input -m Zj -f my_Zj -q longlunch -j 10
# step 1-4
    python ./run_pwg_condor.py -p 1 -x 4 -i powheg_Zj.input -m Zj -f my_Zj -q longlunch -j 10
# step 1-5
    python ./run_pwg_condor.py -p 1 -x 5 -i powheg_Zj.input -m Zj -f my_Zj -q longlunch -j 10
# step 1-n (suggested number is n=5)
# ...

# step 2
   python ./run_pwg_condor.py -p 2 -i powheg_Zj.input -m Zj -f my_Zj -q longlunch -j 10
# step 3
   python ./run_pwg_condor.py -p 3 -i powheg_Zj.input -m Zj -f my_Zj -q longlunch -j 10 

ON LSF (PHASING OUT):

# step 1-1
    python ./run_pwg.py -p 1 -x 1 -i powheg_Zj.input -m Zj -f my_Zj -q 1nd -j 10
# etc. etc.

Definition of the input parameters:

  (1) -p grid production parallel stage '1', '2', '3'
  (2) -x grid refinement steps '1', '2', or '3'... for parallel stage '1'
  (3) -i input card name [powheg.input]
  (4) -m process name (process defined in POWHEG)
  (5) -f working folder [my_ggH]
  (6) -q job flavor / batch queue name (run locally if not specified)
  (7) -t the total number of events to run
  (8) -n the number of events in each parallel job
  (9) -j number of parallel jobs


  • In this example, 10 jobs will be submitted to queue 'longlunch' for each step.

Step 3: Create the POWHEG gridpack tarball

This step must be run only after all jobs of the previous stage are finished.

    python ./run_pwg_condor.py -p 9 -i gg_H/gg_H_quark-mass-effects_NNPDF30_13TeV.input -m gg_H_quark-mass-effects -f my_ggH -k 1

Definition of the input parameters:

  (1) -p grid production stage; '9' stands for tarball creation
  (2) -i input card name [powheg.input]
  (3) -m process name (process defined in POWHEG)
  (4) -f working folder [my_ggH]
  (5) -k keep the validation .top plots [0]


Step 4: Checking the integration grids

MANUAL WAY

https://twiki.cern.ch/twiki/bin/viewauth/CMS/PowhegBOXPrecompiledCheckGrids

AUTOMATIC WAY (starting from revision 3610)

Add the following two commands to the powheg.input datacard (used in steps 2 and 3, respectively):

check_bad_st1 1
check_bad_st2 1

Producing the LHE files

Gridpacks produced for NNLOPS and run in official production will not give sensible results, because only a few events per job are produced, while the NNLOPS reweighting needs a baseline of several tens of thousands of events (better a few hundred thousand) to be meaningful. It is therefore mandatory to run the LHE production privately, following these steps:

  • Run several parallel jobs in order to reach the needed statistics, as explained below (suggestion: better not to use more than 2000-3000 events per job, especially if there are many weights).
  • When all jobs are done and you have n LHE files in the work directory, run the script run_nnlops.sh to merge them and run the reweighting. As usual, the final result is named cmsgrid_final.lhe.

How to Run a POWHEG gridpack and produce LHE files



Running the gridpack locally

First, create a CMSSW area

   cmsrel CMSSW_7_1_30
   cd CMSSW_7_1_30/src
   cmsenv

Then, untar the gridpack

    tar xvzf my_ggH_gg_H_quark-mass-effects-etc-etc.tgz

  • For single processor production, use the following command:
    ./runcmsgrid.sh <NEVENT> <SEED> 1

  • For multi-processor production, a patched version (beta, being validated) is created, still to be run with:
    ./runcmsgrid.sh <NEVENT> <SEED> 1

The standard version of the macro is stored in runcmsgrid_singlecore.sh

  • For producing large samples on the queues use:
    ./run_lhe_condor.sh    1

The resulting LHE files will be named cmsgrid_final<n>.lhe in the directory where the job is sent.



Running the gridpack via externalLHEProducer

  • An example python config file for cmsDriver.py can be found here.

  • An LHE file cmsgrid_final.lhe or cmsgrid_final_XXXX.lhe is produced. You can find an example of an LHE file with weights at /afs/cern.ch/work/s/syu/public/LHE/cmsgrid_final.lhe.

How to check the fraction of events with negative weights

When "withnegweights" is set to 1, POWHEG log file will print out the information of negative weights as follows:

 tot:   10.396661228247105      +-   2.2674314776262974     
 abs:   10.552610613248710      +-   2.2674303490903318     
 pos:   10.474635920747978      +-   2.2674300770006690     
 neg:   7.7974692500833637E-002 +-   1.9475025001334601E-003
  powheginput keyword ubsigmadetails       absent; set to   -1000000.0000000000     
 btilde pos.   weights:   10.474635920747978       +-   2.2674300770006690     
 btilde |neg.| weights:   7.7974692500833637E-002  +-   1.9475025001334601E-003
 btilde total (pos.-|neg.|):   10.396661228247105       +-   2.2674314776262974     
 negative weight fraction:   6.7595163087627646E-003
 

However, to check how the kinematic distributions are affected, one must compare histograms filled with all weights (positive and negative) against histograms filled with only the positive weights. You can use either Rivet or LHEAnalyzer to study the effect.
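
As a rough rule of thumb (quoted here for illustration, not from this page): a negative-weight fraction f dilutes the statistical power of a sample by a factor of about (1-2f)^2, so the ~0.68% in the example above costs only (1 - 2*0.0068)^2 ≈ 0.97, i.e. about 3% of the effective statistics.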

The following parameters (present in all powheg.input cards) can be used to tune the fraction of negative weights:

foldcsi 1 ! number of folds on csi integration
foldy 1 ! number of folds on y integration
foldphi 1 ! number of folds on phi integration

for example changing to:

foldcsi 2 ! number of folds on csi integration
foldy 5 ! number of folds on y integration
foldphi 2 ! number of folds on phi integration

will reduce the fraction. However, the computation time for steps 1, 2 and 3 (or the full run) will be longer.

For the showering and jet matching validation, please refer to the corresponding parts in the MG5 exercise.

-- YuanChao - 2019-11-15
