wget https://cms-project-generators.web.cern.ch/cms-project-generators/MG5_aMC_v2.6.5.tar.gz
OR, if you use the PKU machine:
cp /home/cmsdas/junho/MGsource/MG5_aMC_v2.6.5.tar.gz .
Untar the source and run MG:
tar xf MG5_aMC_v2.6.5.tar.gz
cd MG5_aMC_v2_6_5/bin/
./mg5_aMC
set automatic_html_opening False
generate p p > l+ l-
output myrun
launch myrun
You can see the cross section (xsec) of the process. Let's look at the detailed information of the process - the diagrams.
ls myrun/SubProcesses/P1_qq_ll/*.ps
ls myrun/SubProcesses/P1_qq_ll/*.jpg
vi myrun/Events/run_01/unweighted_events.lhe.gz
(or emacs -nw myrun/Events/run_01/unweighted_events.lhe.gz)
The unweighted_events.lhe file looks like this:
<event>
 5 1 +1.6860000e+03 8.89432000e+01 7.54677100e-03 1.30543100e-01
 -1 -1 0 0 0 501 -0.0000000000e+00 +0.0000000000e+00 +2.2398225191e+01 2.2398225191e+01 0.0000000000e+00 0.0000e+00 1.0000e+00
 1 -1 0 0 501 0 +0.0000000000e+00 -0.0000000000e+00 -8.8298210922e+01 8.8298210922e+01 0.0000000000e+00 0.0000e+00 -1.0000e+00
 23 2 1 2 0 0 +0.0000000000e+00 +0.0000000000e+00 -6.5899985731e+01 1.1069643611e+02 8.8943200127e+01 0.0000e+00 0.0000e+00
 -11 1 3 3 0 0 +1.6824537576e+01 -5.9830707237e+00 +1.7740426939e+01 2.5171113363e+01 0.0000000000e+00 0.0000e+00 1.0000e+00
 11 1 3 3 0 0 -1.6824537576e+01 +5.9830707237e+00 -8.3640412670e+01 8.5525322750e+01 0.0000000000e+00 0.0000e+00 -1.0000e+00
</event>
The events are stored in the Les Houches Event (LHE) format (https://arxiv.org/abs/hep-ph/0109068).
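If you just want to peek at the first event without opening the whole file in an editor, one option (assuming zcat and GNU grep are available) is:
zcat myrun/Events/run_01/unweighted_events.lhe.gz | grep -m 1 -A 7 "<event>"  # print the first <event> block (header + 5 particles + closing tag)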
export VO_CMS_SW_DIR=/cvmfs/cms.cern.ch
source $VO_CMS_SW_DIR/cmsset_default.sh
#git clone git@github.com:soarnsoar/genproductions.git -b DAS2019
tar xf /home/cmsdas/junho/genproductions.tar
cd genproductions/bin/MadGraph5_aMCatNLO
./gridpack_generation.sh dyellell0j_5f_LO_MLM cards/examples/dyellell0j_5f_LO_MLM local ALL slc7_amd64_gcc630 CMSSW_9_3_16
ls dyellell0j_5f_LO_MLM_slc7_amd64_gcc630_CMSSW_9_3_16_tarball.tar.xz
Now, anyone who wants to produce DY->ll events can use this gridpack you've made!
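If you want to see what the gridpack contains (for instance the runcmsgrid.sh driver used below) without unpacking it, you can list the tarball contents:
tar -tf dyellell0j_5f_LO_MLM_slc7_amd64_gcc630_CMSSW_9_3_16_tarball.tar.xz | head  # list the first files in the gridpack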
We need only two steps to generate events -
Step1) untar the gridpack
Step2) run "./runcmsgrid.sh <NEVENT> <SEED> <NCORE>"
You can check this procedure in the script which is used in official CMS MC production!
An example of MC production setup.
You can see ' - run_generic_tarball_cvmfs.sh' is set to 'scriptName'.
Let's look into what this script does.
run_generic_tarball_cvmfs.sh
Then, let's make events with the gridpack.
mkdir -p test_gridpack ## make a test directory
cd test_gridpack
tar -xf ../dyellell0j_5f_LO_MLM_slc7_amd64_gcc630_CMSSW_9_3_16_tarball.tar.xz
./runcmsgrid.sh 100 1 1 # 100 events with seed=1, ncore=1
ls cmsgrid_final.lhe ## the output LHE file
Can you see the DY events in the LHE output?
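A quick sanity check: the file should contain exactly as many <event> blocks as the number of events you requested:
grep -c "<event>" cmsgrid_final.lhe  # should print 100 for the command above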
Now we know how to generate the 'skeleton' of a physics process, the hard process. Let's add some details: softer-scale physics.
mkdir -p PartonShowerProduction
cd PartonShowerProduction
mv ../dyellell0j_5f_LO_MLM_slc7_amd64_gcc630_CMSSW_9_3_16_tarball.tar.xz .
cp ../../../python/DAS/* . ## from genproductions/python/DAS/ : copies 'setup.sh' and 'DY0j_MLM_fragment.py'
source setup.sh
Here's what setup.sh does. It sets:
- the gridpack location
- the name of the script to run the gridpack
- options for parton showering
- my fragment file's name = DY0j_MLM_fragment.py
and then runs:
> cmsDriver.py [options]
Now we get the output python configuration file. I set the name of this script to 'DAS_MG_EXERCISE.py'.
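For reference, a cmsDriver.py command of roughly this shape would produce configuration and output files like the ones named below. The fragment path, conditions tag, and event content here are illustrative assumptions; check setup.sh for the exact options used in this exercise.
cmsDriver.py Configuration/GenProduction/python/DY0j_MLM_fragment.py \
    --python_filename DAS_MG_EXERCISE.py --fileout file:DAS_MG_EXERCISE.root \
    --mc --step LHE,GEN --datatier LHE,GEN --eventcontent LHE,RAWSIM \
    --conditions auto:run2_mc --no_exec -n 100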
Let's run the script using
cmsRun DAS_MG_EXERCISE.py &> log.txt
cp /home/cmsdas/junho/Output/*.root .
We get two kinds of output:
-> DAS_MG_EXERCISE_inLHE.root
-> DAS_MG_EXERCISE.root
DAS_MG_EXERCISE_inLHE.root is the EDM ROOT version of the .lhe file from MG.
DAS_MG_EXERCISE.root is the one obtained after the parton shower (PS) has been run on the LHE output.
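To see which collections each file actually contains, you can use the standard EDM inspection tool from within the CMSSW environment (what you see depends on the event content chosen by cmsDriver):
edmDumpEventContent DAS_MG_EXERCISE_inLHE.root   # expect LHE-level products here
edmDumpEventContent DAS_MG_EXERCISE.root         # expect GEN-level products (e.g. genParticles) in addition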
Let's get into some kinematic distributions of the generated events.
cd CMSSW_10_0_2/src
mkdir -p Tools/
cd Tools
## Get the analyzer sources
#git clone git@github.com:soarnsoar/LHE_Analyzer.git -b DAS2019
#git clone git@github.com:soarnsoar/GEN_Analyzer.git -b DAS2019
git clone https://github.com/soarnsoar/LHE_Analyzer.git -b DAS2019
git clone https://github.com/soarnsoar/GEN_Analyzer.git -b DAS2019
cd ../
scram b ## compile the analyzers
cd ../../
cmsRun $CMSSW_BASE/src/Tools/LHE_Analyzer/python/run_DYanalyzerLHE.py
cmsRun $CMSSW_BASE/src/Tools/GEN_Analyzer/python/run_DYanalyzerGEN.py
ls histoGEN.root
ls histoLHE.root
Open the root files (histoGEN.root, histoLHE.root) and check the histograms:
pT(ll), eta(ll), phi(ll), M(ll)
pT(l), eta(l), phi(l), M(l)
l = e+ e- µ+ µ-
If you look at pT(µµ) and pT(ee), you can see a different shape at the LHE and GEN levels. Why?
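To compare the two levels directly, you can open both files in a single ROOT session and browse the histograms side by side:
root -l histoLHE.root histoGEN.root   # then open a browser from the ROOT prompt with: new TBrowser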
ssh -Y -p 9001 hepfarm02.phy.pku.edu.cn
export SCRAM_ARCH=slc7_amd64_gcc630
source /cvmfs/cms.cern.ch/cmsset_default.sh
cmsrel CMSSW_9_3_0
cd CMSSW_9_3_0/src
cmsenv
# one possibility is to clone the genproductions repository
git clone https://github.com/cms-sw/genproductions.git genproductions
cd genproductions/bin/Powheg
# otherwise copy the relevant files from an existing one
genpath=/path/to/your/local/genproduction
cp $genpath/bin/Powheg/*.py .
cp $genpath/bin/Powheg/*.sh .
cp -r $genpath/bin/Powheg/patches .
Alternatively, one can copy the zip files directly:
unzip /home/cmsdas/yuanchao/genproductions.zip
cd genproductions/bin/Powheg
# otherwise copy the relevant files from an existing one
genpath=/path/to/your/local/genproduction
cp $genpath/bin/Powheg/*.py .
cp $genpath/bin/Powheg/*.sh .
cp -r $genpath/bin/Powheg/patches .
mkdir gg_H; cd gg_H
wget --no-check-certificate https://raw.githubusercontent.com/cms-sw/genproductions/master/bin/Powheg/examples/gg_H_quark-mass-effects_withJHUGen_NNPDF30_13TeV/gg_H_quark-mass-effects_NNPDF30_13TeV.input
wget --no-check-certificate https://raw.githubusercontent.com/cms-sw/genproductions/master/bin/Powheg/examples/gg_H_quark-mass-effects_withJHUGen_NNPDF30_13TeV/JHUGen.input
cd ..
Other examples can be found on the Generator GitHub.
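It is worth having a look at the process card you just downloaded; the integration and folding parameters discussed later in this exercise (ncall1, itmx1, foldcsi, ...) are typically set in this file:
head -n 40 gg_H/gg_H_quark-mass-effects_NNPDF30_13TeV.input  # inspect the first part of the POWHEG input card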
The content of additional.condorConf could be:
request_memory = 2000M
request_disk = 500M
Be careful: if you put conflicting/impossible requirements, the HTCondor jobs will stay pending forever.
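If your jobs do seem stuck, the standard HTCondor client commands can help you find out why (the job ID below is a placeholder):
condor_q                           # list your jobs and their current status
condor_q -better-analyze <job_id>  # explain why a particular job is still idle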
cmsenv
ON HTCondor:
python ./run_pwg_condor.py -p f -i gg_H/gg_H_quark-mass-effects_NNPDF30_13TeV.input -m gg_H_quark-mass-effects -f my_ggH -q longlunch -n 1000 -d 1
ON LSF (PHASING OUT):
python ./run_pwg.py -p f -i gg_H/gg_H_quark-mass-effects_NNPDF30_13TeV.input -m gg_H_quark-mass-effects -f my_ggH -q 2nd -n 1000
Definition of the input parameters:
(1) -p grid production stage [f] (one go)
(2) -i input card name [powheg.input]
(3) -m process name (process defined in POWHEG)
(4) -f working folder [my_ggH]
(5) -q job flavor / batch queue name (run locally if not specified)
(6) -n the number of events to run
(7) -d bypass the LHAPDF set check
A tarball with the name below is created:
my_ggH_gg_H_quark-mass-effects_<SCRAM_ARCH>_<CMSSW_VERSION>.tgz
-i slc6_amd64_gcc481/powheg/V2.0/13TeV/examples/DMGG_NNPDF30_13TeV/DMGG_NNPDF30_13TeV.input
cmsenv
python ./run_pwg_condor.py -p 0 -i gg_H/gg_H_quark-mass-effects_NNPDF30_13TeV.input -m gg_H_quark-mass-effects -f my_ggH
Definition of the input parameters:
(1) -p grid production stage [0] (compiling source)
(2) -i input card name [powheg.input]
(3) -m process name (process defined in POWHEG)
(4) -f working folder [my_ggH]
(5) -q job flavor / batch queue name (run locally if not specified)
-i slc6_amd64_gcc481/powheg/V2.0/13TeV/examples/DMGG_NNPDF30_13TeV/DMGG_NNPDF30_13TeV.input
ON HTCondor:
python ./run_pwg_condor.py -p 123 -i gg_H/gg_H_quark-mass-effects_NNPDF30_13TeV.input -m gg_H_quark-mass-effects -f my_ggH -q workday -n 1000
ON LSF (PHASING OUT):
python ./run_pwg.py -p 123 -i gg_H/gg_H_quark-mass-effects_NNPDF30_13TeV.input -m gg_H_quark-mass-effects -f my_ggH -q 2nd -n 1000
Definition of the input parameters:
(1) -p grid production stage: '123' stands for running a single process through the three internal stages
(2) -i input card name [powheg.input]
(3) -m process name (process defined in POWHEG)
(4) -f working folder [testProd]
(5) -q job flavor / batch queue name (run locally if not specified)
(6) -n the number of events to run
python ./run_pwg_condor.py -p 9 -i gg_H/gg_H_quark-mass-effects_NNPDF30_13TeV.input -m gg_H_quark-mass-effects -f my_ggH -k 1
Definition of the input parameters:
(1) -p grid production stage: '9' stands for tarball creation
(2) -i input card name [powheg.input]
(3) -m process name (process defined in POWHEG)
(4) -f working folder [my_ggH]
(5) -k keep the validation .top plots [0]
ncall1 100000   ! number of calls for initializing the integration grid
itmx1 1         ! number of iterations for initializing the integration grid
ncall2 100000   ! number of calls for computing the integral and finding upper bound
itmx2 5         ! number of iterations for computing the integral and finding upper bound
nubound 200000  ! number of bbarra calls to setup norm of upper bounding function
to be used with 10 parallel jobs for step 1, launched 5 times (more details below). The suggested phase-space folding, which reduces the fraction of negative-weight events from roughly 30% to order 5%, is:
foldcsi 2 ! number of folds on csi integration
foldy 2   ! number of folds on y integration
foldphi 2 ! number of folds on phi integration
for Zj, Wj, HJ. It can be used with the LSF batch queue 1nd or the 'longlunch' HTCondor job flavor. Further reduction can be achieved by using:
foldcsi 2 ! number of folds on csi integration
foldy 5   ! number of folds on y integration
foldphi 2 ! number of folds on phi integration
but then the 'longlunch' queue will not be sufficient; this needs some testing with 'tomorrow' or 'testmatch' (the latter queue is discouraged). A script to automate all the steps described below has been added and can be used, for example, as:
ON HTCondor:
python ./run_pwg_parallel_condor.py -i powheg_Zj.input -m Zj -f my_Zj -q longlunch -j 10
ON LSF (PHASING OUT):
python ./run_pwg_parallel.py -i powheg_Zj.input -m Zj -f my_Zj -q 2nd -j 10
Further details and the steps performed are listed below.
Usage of the python script:
python ./run_pwg_parallel_condor.py -h
Usage: run_pwg_parallel_condor.py [options]

Options:
  -h, --help            show this help message and exit
  -f FOLDERNAME, --folderName=FOLDERNAME
                        local folder and last eos folder name [testProd]
  -j NUMJOBS, --numJobs=NUMJOBS
                        number of jobs to be used for multicore grid step 1,2,3
  -x NUMX, --numX=NUMX  number of xgrid iterations for multicore grid step 1
  -i INPUTTEMPLATE, --inputTemplate=INPUTTEMPLATE
                        input cfg file (fixed) [=powheg.input]
  -q DOQUEUE, --doQueue=DOQUEUE
                        Condor job flavor [longlunch]
  -m PRCNAME, --prcName=PRCNAME
                        POWHEG process name [DMGG]
  --step3pilot          do a pilot job to combine the grids, calculate upper
                        bounds afterwards (otherwise afs jobs might fail)
  --dry-run             show commands only, do not submit
cmsenv
# interactive mode (recommended, but for complex processes, it could take a while)
python ./run_pwg_condor.py -p 0 -i powheg_Zj.input -m Zj -f my_Zj
# batch mode
ON HTCondor:
python ./run_pwg_condor.py -p 0 -i powheg_Zj.input -m Zj -f my_Zj -q microcentury
ON LSF (PHASING OUT):
python ./run_pwg.py -p 0 -i powheg_Zj.input -m Zj -f my_Zj -q 8nh
Definition of the input parameters:
(1) -p grid production stage [0] (compiling source)
(2) -i input card name [powheg.input]
(3) -m process name (process defined in POWHEG)
(4) -f working folder [my_ggH]
(5) -q job flavor / batch queue name (run locally if not specified)
# step 1-1
python ./run_pwg_condor.py -p 1 -x 1 -i powheg_Zj.input -m Zj -f my_Zj -q longlunch -j 10
# step 1-2
python ./run_pwg_condor.py -p 1 -x 2 -i powheg_Zj.input -m Zj -f my_Zj -q longlunch -j 10
# step 1-3
python ./run_pwg_condor.py -p 1 -x 3 -i powheg_Zj.input -m Zj -f my_Zj -q longlunch -j 10
# step 1-4
python ./run_pwg_condor.py -p 1 -x 4 -i powheg_Zj.input -m Zj -f my_Zj -q longlunch -j 10
# step 1-5
python ./run_pwg_condor.py -p 1 -x 5 -i powheg_Zj.input -m Zj -f my_Zj -q longlunch -j 10
# step 1-n (suggested number is n=5)
# ...
# step 2
python ./run_pwg_condor.py -p 2 -i powheg_Zj.input -m Zj -f my_Zj -q longlunch -j 10
# step 3
python ./run_pwg_condor.py -p 3 -i powheg_Zj.input -m Zj -f my_Zj -q longlunch -j 10
ON LSF (PHASING OUT):
# step 1-1
python ./run_pwg.py -p 1 -x 1 -i powheg_Zj.input -m Zj -f my_Zj -q 1nd -j 10
# etc. etc.
Definition of the input parameters:
(1) -p grid production parallel stage: '1', '2', '3'
(2) -x grid refinement step: '1', '2', or '3'... for parallel stage '1'
(3) -i input card name [powheg.input]
(4) -m process name (process defined in POWHEG)
(5) -f working folder [my_ggH]
(6) -q job flavor / batch queue name (run locally if not specified)
(7) -t the total number of events to run
(8) -n the number of events in each parallel job
(9) -j number of parallel jobs
python ./run_pwg_condor.py -p 9 -i gg_H/gg_H_quark-mass-effects_NNPDF30_13TeV.input -m gg_H_quark-mass-effects -f my_ggH -k 1
Definition of the input parameters:
(1) -p grid production stage: '9' stands for tarball creation
(2) -i input card name [powheg.input]
(3) -m process name (process defined in POWHEG)
(4) -f working folder [my_ggH]
(5) -k keep the validation .top plots [0]
check_bad_st1 1
check_bad_st2 1
cmsrel CMSSW_7_1_30
cd CMSSW_7_1_30/src
cmsenv
Then, untar the gridpack:
tar xvzf my_ggH_gg_H_quark-mass-effects-etc-etc.tgz
./runcmsgrid.sh
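As in the MadGraph part of the exercise, the script takes the number of events, the random seed, and the number of cores as arguments (check its usage message if your tarball's version differs), for example:
./runcmsgrid.sh 100 1 1   # 100 events with seed=1, ncore=1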
The standard version of the macro is stored in runcmsgrid_singlecore.sh.
./run_lhe_condor.sh
The resulting LHE files will be named cmsgrid_final<n>.lhe in the directory where the job is sent.
tot: 10.396661228247105 +- 2.2674314776262974
abs: 10.552610613248710 +- 2.2674303490903318
pos: 10.474635920747978 +- 2.2674300770006690
neg: 7.7974692500833637E-002 +- 1.9475025001334601E-003
powheginput keyword ubsigmadetails absent; set to -1000000.0000000000
btilde pos. weights: 10.474635920747978 +- 2.2674300770006690
btilde |neg.| weights: 7.7974692500833637E-002 +- 1.9475025001334601E-003
btilde total (pos.-|neg.|): 10.396661228247105 +- 2.2674314776262974
negative weight fraction: 6.7595163087627646E-003
However, to check how the kinematic distributions are affected, one must compare the histograms filled with both weights and those with only positive weights. You could use either Rivet or the LHEAnalyzer to study the effect.
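If you want to cross-check the negative-weight fraction on the LHE file produced by runcmsgrid.sh, a rough sketch is the one-liner below; it assumes the standard LHE layout, where the event weight XWGTUP is the third field on the line immediately after each <event> tag:
grep -A 1 "<event>" cmsgrid_final.lhe | awk 'NF>=6 { n++; if ($3 < 0) neg++ } END { print neg/n }'  # fraction of events with a negative weight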
foldcsi 1 ! number of folds on csi integration
foldy 1   ! number of folds on y integration
foldphi 1 ! number of folds on phi integration
for example, changing to:
foldcsi 2 ! number of folds on csi integration
foldy 5   ! number of folds on y integration
foldphi 2 ! number of folds on phi integration
will reduce the fraction. However, the computation time for steps 1, 2 and 3 (or the full run) will be longer. For the showering and jet-matching validation, please refer to the corresponding parts in the MG5 exercise.
-- YuanChao - 2019-11-15