Quick Summary of what is being done.


Generate multi-run by shell script

  • To get 5 LHE files of 10000 events each, runTimes.sh:
#! /bin/bash
# generate five runs, then collect the LHE file from each
for i in 1 2 3 4 5; do
   ./bin/generate_events -f
done

cd Events/

for i in 1 2 3 4 5; do
   cd run_0$i
   gunzip unweighted_events.lhe.gz
   mv unweighted_events.lhe file_$i.lhe
   cp file_$i.lhe /Users/KellyTsai/Documents/MadGraph/Zp_zh_lheFile
   cd ..
done

So there are five runs under the Events directory: run_01, run_02, run_03, run_04, and run_05. Each run_0x contains an LHE file, file_x.lhe, and those LHE files are then copied to the Zp_zh_lheFile directory.

  • To get ROOT files using the CMS analyzer and a shell script:
#! /bin/bash
for i in 1 2 3 4 5; do
   cmsRun ../dumpLHE_cfg.py inputFiles='file:'file_$i.lhe outputFile=../Zp_zh_rootFile/file_$i.root
done

So five ROOT files are generated: file_1.root, file_2.root, file_3.root, file_4.root, and file_5.root.

X-sec Values

x-sec for a 1000 GeV Z' mass, process Z' → zh, tan(beta) = 3.0, 13 TeV.
average: 0.000753388 (pb)
x-sec for a 1000 GeV Z' mass, process Z' → hA0, tan(beta) = 3.0, 13 TeV.
average: 0.00025128 (pb)
total: 0.001004668 (pb)
ratio (hA0/zh): 0.33
accounting for the h → bb branching ratio (≈57.7%): xsec = total/0.577 = 0.00174119237 (pb)

x-sec for an 800 GeV Z' mass, process Z' → zh, tan(beta) = 1.5, 13 TeV.
average: 0.000376546 (pb)
x-sec for an 800 GeV Z' mass, process Z' → hA0, tan(beta) = 1.5, 13 TeV.
average: 0.000394864 (pb)
total: 0.00077141 (pb)
ratio (hA0/zh): 1.05
accounting for the h → bb branching ratio (≈57.7%): xsec = total/0.577 = 0.0013369324 (pb)

x-sec for an 800 GeV Z' mass, process Z' → zh, tan(beta) = 1.5, 8 TeV.
average: 0.000136446 (pb)
x-sec for an 800 GeV Z' mass, process Z' → hA0, tan(beta) = 1.5, 8 TeV.
average: 0.000143228 (pb)
total: 0.000279674 (pb)
ratio (hA0/zh): 1.05
accounting for the h → bb branching ratio (≈57.7%): xsec = total/0.577 = 0.00048470363 (pb)
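The totals, ratios, and BR-corrected values are simple arithmetic; as a cross-check, a minimal Python sketch using the 1000 GeV numbers from this note:

```python
# cross-check of the 1000 GeV point (values in pb, taken from this note)
xsec_zh = 0.000753388    # Z' -> zh average
xsec_hA0 = 0.00025128    # Z' -> hA0 average
br_hbb = 0.577           # h -> bb branching ratio

total = xsec_zh + xsec_hA0       # 0.001004668
ratio = xsec_hA0 / xsec_zh       # rounds to 0.33
corrected = total / br_hbb       # 0.00174119237

print(total, round(ratio, 2), corrected)
```

The same three lines reproduce the 800 GeV points when their averages are substituted.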

Correcting Some Variables

The cross-sections come out too small because the Z' input should account for 'Sa' (which depends on tan(beta)) and 'gz' (which depends on the Z' mass):

Sa means sin(alpha):

value = 'cmath.sin(cmath.atan(Tb)-(cmath.pi/2))'

gz is the weak coupling constant, gz = ge/sw, with
ge = 2*cmath.sqrt(aEW)*cmath.sqrt(cmath.pi)
(note that aEW**1/2 in Python parses as (aEW**1)/2, so the square root must be written with cmath.sqrt or **0.5). Here aEW is the fine-structure constant at the EW scale, not the EW coupling.

value = '0.03*(gw/(cw*Sb**2))*(MZp**2 - MZ**2)**0.5/MZ'

DM Background Study

from this twiki page: https://twiki.cern.ch/twiki/bin/view/CMS/NCUDarkMatterRun2
1. Add ntuples from the same process generated in different HT bins:

  • Merging Files with Different Cross-sections

Use mergeFiles to merge files with the SAME cross-section. Repeat until you have a small set of ROOT files, each with a different cross-section, and then you can merge/plot from them.

To merge the histograms correctly, download the ROOT macro hadd.C (http://www.hep.wisc.edu/cms/comp/examples/hadd.C) into your current directory and edit it.

That will combine the HISTOGRAMS, taking the cross-sections into account, into one final ROOT file.
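A common convention for this kind of cross-section-weighted combination is to scale each sample by xsec × lumi / N_generated before adding histograms; a minimal sketch of that weight (the sample cross-sections, event counts, and luminosity are illustrative placeholders, not values from this note):

```python
# per-sample scale factor used when combining samples with different
# cross-sections: w = xsec * lumi / n_generated
def merge_weight(xsec_pb, lumi_invpb, n_generated):
    """Scale factor normalizing a sample to the target luminosity."""
    return xsec_pb * lumi_invpb / n_generated

# illustrative HT-binned samples: (cross-section in pb, generated events)
samples = [(1.2, 100000), (0.3, 100000)]
lumi = 1000.0  # target luminosity in pb^-1 (assumed)

for xsec, n_gen in samples:
    print(merge_weight(xsec, lumi, n_gen))
```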

2. Kinematics of DM Analysis with 8 TeV MC sample:


1. follow the PKU steps: https://twiki.cern.ch/twiki/bin/viewauth/CMS/PKURunII0LHESIM:
* delete the generator tag:

<generator name='MadGraph5_aMC@NLO' version='2.2.3'>please cite
1405.0301 </generator>

2. In step1, I hit this error in the log:

PYTHIA Error in Pythia::check: unknown particle code , i = 5, id = 50

Solved it by remapping the particle ids:
--> id = 50 (z') to 32
--> id = 28 (A0) to 33
--> id = 1000022 (n1) to 18
--> untracked.int32(-1)
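The id remapping can be applied directly to the LHE files. A minimal sketch, assuming the standard LHE layout where each particle line inside an <event> block starts with its PDG id (a quick sketch, not a production-safe parser — the event header line is skipped only because its leading count never matches these ids):

```python
# remap the PDG ids Pythia does not recognize:
# 50 -> 32 (Z'), 28 -> 33 (A0), 1000022 -> 18 (n1)
ID_MAP = {"50": "32", "28": "33", "1000022": "18"}

def remap_ids(lhe_lines):
    out, in_event = [], False
    for line in lhe_lines:
        stripped = line.strip()
        if stripped == "<event>":
            in_event = True
        elif stripped == "</event>":
            in_event = False
        elif in_event:
            cols = line.split()
            # particle lines start with the PDG id
            if cols and cols[0] in ID_MAP:
                line = line.replace(cols[0], ID_MAP[cols[0]], 1)
        out.append(line)
    return out

demo = remap_ids(["<event>", " 5  1 0.5", "      50 1 0 0", "</event>"])
print(demo[2])   # the id 50 has been replaced by 32
```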

CMSSW batch job submission

Do it in the work area.
Job script, runJob.sh:

#! /bin/bash
echo $1
cd $1
export SCRAM_ARCH=slc6_amd64_gcc472; eval `scramv1 runtime -sh`
cmsRun  test_PY8_cfg.py

Check the SCRAM_ARCH for the release:

scram arch list CMSSW_7_2_0


bsub -q 2nd $PWD/runJob.sh $PWD

In either shell flavor (csh setenv or bash export), LSF confirms submission with "Job is submitted to queue".

For jobs that need large memory/CPU

bsub -q 1nw -C 0  -R "rusage[mem=50000]" $PWD/runJob.sh $PWD

To kill all of your pending jobs you can use the command:

bkill ` bjobs -u fatsai |grep PEND |cut -f1 -d" "`

Example command

Now you can submit the job by using bsub, passing it the above script. An example command is

bsub -R "pool>30000" -q 1nw -J job1 < lxplusbatchscript.csh

There are a few arguments specified in this example

  • -R "pool>30000" means you want a minimum free space of 30G to run your job.
  • -q 1nw means you are submitting to the 1-week queue. Other available queues are:
    • 8nm (8 minutes)
    • 1nh (1 hour)
    • 8nh (8 hours)
    • 1nd (1 day)
    • 2nd (2 days)
    • 1nw (1 week)
    • 2nw (2 weeks)
  • -J job1 sets job1 as your job name.
  • < lxplusbatchscript.csh gives your script to the job.

Create my own website

1. Register a site at the CERN web services page: https://webservices.web.cern.ch/webservices/

2. Create a new .htaccess configuration file in the AFS path, e.g. /afs/cern.ch/work/f/fatsai:

AuthName KellyTsai
AuthType Basic
AuthUserFile /CERN_WWW/Apache/fyweb/Apache/users
require valid-user
Options +Indexes
ShibRequireAll Off
ShibRequireSession Off
ShibExportAssertion Off
Satisfy Any

Allow from all

3. Open it in a browser: http://fyweb.web.cern.ch/fyweb/web/

more details: https://espace2013.cern.ch/webservices-help/websitemanagement/ConfiguringAFSSites/Pages/default.aspx
*permission for the AFS folder: https://espace2013.cern.ch/webservices-help/websitemanagement/ConfiguringAFSSites/Pages/PermissionsforyourAFSfolder.aspx

Interactive Run

1. create a folder under the path GenProduction/bin/MadGraph5_aMCatNLO/cards/production/13TeV/xxxxxx (ZPrimeTohA0_M1000)
2. create the proc card and run card in the xxxxxx folder: ZptohA0_1_proc_card.dat, ZptohA0_1_run_card.dat
3. add model
4. I wrote a script for multiple runs totalling 20K events:

#! /bin/bash
# adjust the range to the number of gridpacks needed
for i in 1 2 3 4 5; do
   ./gridpack_generation.sh ZptohA0_$i cards/production/13TeV/ZPrimeTohA0_M1000 1nd
done
Note: adjust iseed for getting statistically independent events:
If you perform the runs serially in the same directory, you can safely use iseed=0, which gives unique seeds to each subsequent run. If you run in different directories (I assume to get even better parallelization of e.g. the Pythia step), then you can use any values you want as long as they are different for all runs. iseed=1 through 20 should be just fine - as long as they are different, you will get statistically independent events. Note that there is no such thing as "best random seeds" or "as unique as possible" - either the events are statistically independent or they are not.
With iseed=0, the seed is automatically increased for each run, ensuring that you get statistically independent events (as long as you run in the same directory). So there is no need to modify the iseed in the run_card by hand.

Statistically independent means that there are no two identical events between the runs, and that the distribution of events is statistically correct when you add together the events from multiple event files (i.e., the variation decreases as 1/sqrt(N) where N is the number of events). This means that there is no statistical difference between making one run with 100,000 events and 10 runs with 10,000 events each (as long as you either use iseed=0 or provide different iseed values for each run).
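The seed logic can be illustrated with any pseudo-random generator: distinct seeds give distinct, independent streams, while reusing a seed simply reproduces the same events. A minimal Python sketch, with random.Random standing in for the event generator:

```python
import random

# same seed -> identical "events"; different seeds -> independent streams
run1 = random.Random(1).sample(range(10**6), 5)
run2 = random.Random(2).sample(range(10**6), 5)
run1_again = random.Random(1).sample(range(10**6), 5)

print(run1 == run1_again)  # True: a reused seed just duplicates the run
print(run1 == run2)        # False: distinct seeds give different events
```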

Topic revision: r16 - 2015-05-10 - FangYingTsai