Daily web log book


2014 04 22

  • Ran reweight on a 100,000-event pp --> w+zjj --> l+l-l+vl data set with the SM_LT012 model.
    • It appears that this process is not currently configured to run on Condor, and it is veeeery slow:
      • First reweight completed at 1:46 (AM) April 20
      • 31st reweight completed at 0:45 April 23
    • Files are located in ~/cms/Madgraph/generated_events/w+zjj_ewk_LT_reweights and in /nfs_scratch/kdlong/w+zjj_ewk_LT_reweights

2014 04 23

2014 04 25

  • Created simple text files for easier submission of Madgraph results. I should have done this a long time ago!
    • Contents of survey_refine.cmd:
      • survey
      • refine 100000
      • refine 100000
      • refine 100000
      • refine 100000
    • Contents of combine.cmd
      • combine_events
      • store_events
  • Run processes as follows:
    • Initially edit file Cards/me5_configuration.txt to:
      • run_mode = 1 (cluster mode)
      • cluster_type = condor
      • cluster_queue = None
    • Run: nohup ./bin/madevent survey_refine.cmd > survey_refine.out &
      • Note: nohup is roughly equivalent to using C-z then bg (which, like &, sends the process to the background) followed by disown. This completely detaches the job from the shell.
    • Edit Cards/me5_configuration.txt to:
      • run_mode = 2 (combine_events script gives error when run on cluster)
      • Run: nohup ./bin/madevent combine.cmd > combine.out &

  • Attempting to create a 100,000 event data set with the process pp --> w+l+l- QED = 6, w+ --> l+ vl. This allows the inclusion of zg interference and is more realistic.
  • Generated 1000 events using generate p p > w+ l- l+ j j QED=6, w+ > l+ vl with default settings in the SM. Stored as wpz_zg_default_sm in /nfs_scratch/kdlong and ~/cms/Madgraph/generated_events.
    • I do get quite a few errors during the survey command, of the form: WARNING: resubmit job (for the 1 times)
      • See survey_refine.out in the respective directories for details.
      • This doesn't seem to affect the refine step in this case; the events were still generated without apparent issue.
      • The error seems to be related to submission to the condor cluster, not to Madgraph itself. A similar problem is discussed here: https://answers.launchpad.net/mg5amcnlo/+question/245706
      • Ran 100 events with the SM pp --> w+zjj process to check whether this error is process-dependent. I still get errors of the same form in the survey step; again they cause no issues in the refine step.
      • Files are stored in /nfs_scratch/kdlong/w+zjj_ewk_sm. See survey.out for the error.

2014 04 26

  • I didn't do).
  • Run as follows:
    • ./bin/madevent
    • import model SM_LT012
    • define p = g u c s d b u~ c~ s~ d~ b~
    • define j = p
    • define l+ = e+ mu+ ta+
    • define l- = e- mu- ta-
    • define vl = ve vm vt
    • define vl~ = ve~ vm~ vt~
    • generate p p > w+ l- l+ j j QED=6, w+ > l+ vl
    • output wpz_zg_ewk

2014 04 30

  • Condor submission failed with errors of the form:
    • ERROR: Failed to set ClusterId=549192 for job 549192.0 (110)
    • ERROR: Failed to queue job.
    • ERROR: Failed to set ClusterId=549293 for job 549293.0 (110)
    • ERROR: Failed to queue job.

2014 05 06

  • Met with Dan Bradley to discuss the condor errors I have been consistently seeing (e.g. 2014 04 25). As expected, the issue comes from jobs running on machines that are not "properly configured" for MadGraph; specifically, some machines do not have CVMFS. Dan Bradley fixed this by making CVMFS a requirement for all jobs submitted by CMS users.
  • It's not clear why Matt had not encountered this issue, but it should now be fixed for everyone.

2014 05 15

  • Matt has been investigating the differences between the CMS recommended card (which does not converge for Z-gamma interference processes) and the default Madgraph card. It appears that the convergence errors are linked to dRll = 0 (separation between leptons) and ptl = 0 (minimum pt for leptons). Using dRll = 0.001 and ptl = 5 does converge. The current run_card.dat, modified from Matt's, is attached.
  • Currently running the zg interference process, including b quarks and new physics, with this run_card.dat, located in /nfs_scratch/kdlong/wpz_zg_ewk_all_aqgc_FT1

  • I now understand the solution to the "Failed to set ClusterId=549293 for job 549293.0 (110)" / "Failed to queue job." problem, referenced e.g. on 2014 04 30. The solution is embarrassingly simple: the run_card.dat file has the code path directory hardcoded in it. The errors arise when I copy a run_card.dat from another process and use it!
  • Investigating running reweight feature on condor cluster. Running 31 reweights took ~2 days (see 2014 04 22).
    • Parallelizing the reweight process is addressed here: https://answers.launchpad.net/mg5amcnlo/+question/244512, but the solution is unclear.
    • Matt suggests splitting the 100,000-event data file into 100 pieces and reweighting each piece separately. This would be easy to implement and is likely the best option, though it comes with some inconvenience, such as file transfers and the large number of files produced.
    • I have submitted a question to the authors asking for more information about how they suggest doing this.
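Matt's splitting scheme could be sketched roughly like this. This is a hypothetical illustration, not the actual tooling: it assumes a well-formed LHE file whose header ends at </init>, followed by one <event>...</event> block per event, and illustrative function names.

```python
# Hypothetical sketch: split an LHE file into pieces for parallel reweighting.
# Assumes a well-formed file: header through </init>, then <event> blocks,
# then </LesHouchesEvents>. Not the actual script used in this log.

def split_lhe(lines, n_chunks):
    """Split LHE file lines into at most n_chunks self-contained LHE files."""
    header_end = lines.index("</init>\n") + 1
    header = lines[:header_end]
    footer = ["</LesHouchesEvents>\n"]
    # Collect each <event> ... </event> block as a list of lines
    events, current = [], None
    for line in lines[header_end:]:
        if line.startswith("<event"):
            current = [line]
        elif line.startswith("</event"):
            current.append(line)
            events.append(current)
            current = None
        elif current is not None:
            current.append(line)
    # Contiguous chunks, so concatenating the reweighted pieces in order
    # preserves the original event ordering
    per = (len(events) + n_chunks - 1) // n_chunks
    chunks = [events[i:i + per] for i in range(0, len(events), per)]
    return [header + sum(c, []) + footer for c in chunks]
```

Contiguous chunks (rather than round-robin) are deliberate: recombining the pieces in order then keeps the events in their original sequence.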
  • Registered for a Fermilab computing account. Will attend a hadron calorimeter upgrade discussion/class tomorrow: https://indico.cern.ch/event/314999/

2014 05 16

  • Attended HATS class on hadron calorimeter upgrade at Fermilab
  • Received an answer from the Madgraph authors. They suggest that Matt's prescription is easiest, but also outline how one would parallelize the code.
  • 100,000 event zg interference process from yesterday appears to have successfully run survey/refine.

2014 05 19

  • The 100,000 event zg interference process only created 660 events after the combine_events stage. Not sure why; apparently I did not run enough refines. Trying again.
  • Discussed reorganizing the WpZ_ana.C code with Matt.
    • He agrees that a large reorganization of the code is a good idea and worth the effort.
    • Looking into creating a makefile to run independently from root. Getting errors from shared object files. Possibly more of a hassle than I realize.
    • See GitHub for a log of edits.

2014 05 20

  • 100,000 event zg interference process ran and combined successfully. Located in /nfs_scratch/kdlong/wpz_zg_ewk_all_aqgc_FT1 and ~/cms/Madgraph/generated_events/wpz_zg_ewk_all_aqgc_FT1
  • Successfully compiled, linked, and ran the WpZ_ana.C file from a makefile. The path of the libExRootAnalysis.so file has to be added to LD_LIBRARY_PATH so it can be found at run time.
    • Added LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/WpZ_ana/lib to the .bashrc file.

2014 05 21



2014 06 18

2014 06 19

  • Attended second day of HATS class at Fermilab on Jet substructure
  • Spoke with Matt about physics details of project
    • Important Note: values in param card are in units of 1/M^4, with M in units of GeV. Thus a mass scale of 1 TeV gives param = 1 x 10^{-12}
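As a quick sanity check of the unit conversion above (assuming the param-card value is 1/M^4 in GeV^-4, as noted):

```python
# Param-card values are in units of 1/M^4 with M in GeV,
# so a 1 TeV mass scale corresponds to 1e-12 GeV^-4.
mass_scale_gev = 1.0 * 1000.0  # 1 TeV expressed in GeV
param = 1.0 / mass_scale_gev ** 4
print(param)  # 1e-12
```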

2014 06 20

  • Day off in Chicago

2014 06 23

  • Working on a root script that reads histograms from the .root file created by the WpZ_ana program and compares plots for a specified weight 'number' (its number in reweight_card.dat) with the SM plot
    • Modified the LHEWeights.cpp file to identify the SM weight (the set with param = 0.0) and append _SM for this histogram

2014 06 30

  • The reweighting for 10000 events in /nfs_scratch/kdlong/wpz_zg_ewk_all_LT012 seems to have run successfully.
    • Madgraph actually includes a (perl) script for combining N .lhe.gz files, merge.pl in PROCESS_DIRECTORY/bin/internal
    • run as ./merge.pl file1.lhe.gz file2.lhe.gz ... fileN.lhe.gz combinedfile.lhe.gz banner.txt where banner.txt is an empty file
    • Obviously passing 1000 command-line arguments isn't feasible. Wrote setup_merge.py to create a simple shell script for combining N files of the form unweighted_events_XXX.lhe.gz
    • Tested by combining the 100 .lhe.gz files from this reweighting
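A minimal sketch of what a setup_merge.py-style generator might look like (illustrative file names and batch size; not the actual script). It sidesteps the long-argument-list problem by merging in batches with merge.pl, then merging the intermediate results:

```python
# Hypothetical sketch of a setup_merge.py-style script generator.
# merge.pl usage (from above): ./merge.pl in1.lhe.gz ... inN.lhe.gz out.lhe.gz banner.txt

def make_merge_script(n_files, batch=50):
    """Return the text of a shell script that merges n_files LHE files
    in batches, then merges the batch outputs into combined.lhe.gz."""
    files = ["unweighted_events_%03d.lhe.gz" % i for i in range(n_files)]
    lines = ["#!/bin/bash", "touch banner.txt"]  # merge.pl wants an empty banner
    intermediates = []
    for b, start in enumerate(range(0, len(files), batch)):
        out = "merged_batch_%d.lhe.gz" % b
        args = " ".join(files[start:start + batch])
        lines.append("./merge.pl %s %s banner.txt" % (args, out))
        intermediates.append(out)
    # Final pass over the (much shorter) list of intermediate files
    lines.append("./merge.pl %s combined.lhe.gz banner.txt"
                 % " ".join(intermediates))
    return "\n".join(lines) + "\n"
```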
  • Finished reworking code to add the following plots:
    • pt and eta for each of the (3) leptons in an event, ordered by highest pt
    • pt_ll and m_ll for each lepton particle/antiparticle combination in an event. For events with 3 same-flavor leptons, these are ordered by the distance of m_ll from the Z mass.
  • Began 100,000 event processes in /nfs_scratch/kdlong/wpz_zg_ewk_all_LT012 using run_card.dat with mmll = 12.
    • Command "reweight" interrupted with error: AttributeError : 'Banner' object has no attribute 'lhe_version'. Please report this bug on https://bugs.launchpad.net/madgraph5. More information is found in '/nfs_scratch/kdlong/w+zjj_all_LT102_update/run_01_tag_2_debug.log'. Please attach this file to your report.
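The pairing rule described above (order opposite-sign same-flavor lepton pairs by how close m_ll is to the Z mass) can be sketched in isolation. This is an illustration with made-up input conventions, not the actual WpZ_ana code:

```python
# Illustrative sketch of the OSSF-pair ordering used for 3-lepton events.
# Leptons are (pdg_id, (E, px, py, pz)) tuples in GeV; not the real analysis code.
from itertools import combinations
from math import sqrt

M_Z = 91.1876  # GeV

def inv_mass(p1, p2):
    e = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in range(1, 4))
    return sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def order_pairs(leptons):
    """Return (m_ll, (id1, id2)) for every opposite-sign same-flavor pair,
    sorted so the pair closest to the Z mass comes first."""
    pairs = []
    for (id1, p1), (id2, p2) in combinations(leptons, 2):
        if id1 == -id2:  # particle/antiparticle of the same flavor
            pairs.append((inv_mass(p1, p2), (id1, id2)))
    pairs.sort(key=lambda pair: abs(pair[0] - M_Z))
    return pairs
```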


2014 09 17

I've got to do a better job maintaining this.

  • Sorting through parton level processes generated in MadGraph
  • Define "signal" by total QCD + QED generated events - QCD only
  • In Madgraph:
    • QED = 6 QCD = 0 is "signal", but does not include QED/QCD interference. Instead use QED = 4 QCD = 2 as background and subtract from QED = 6
    • Note that the order of QCD or QED is the maximum number of vertices allowed. Lower is also allowed, see "examples/format" at http://madgraph.hep.uiuc.edu/
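The subtraction itself is just bin-by-bin arithmetic (in ROOT, cloning the QED=6 histogram and calling TH1::Add with weight -1 does the same); a minimal sketch with histograms as plain lists of bin counts:

```python
# "Signal" = inclusive (QED=6) minus QCD-involved (QED=4 QCD=2), bin by bin.
# Histograms here are plain lists of bin counts with identical binning.
def subtract_hists(h_qed6, h_qed4_qcd2):
    assert len(h_qed6) == len(h_qed4_qcd2), "binnings must match"
    return [a - b for a, b in zip(h_qed6, h_qed4_qcd2)]
```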

2014 09 18

2014 09 19

  • Generated 1,000,000 event samples for QED = 6 and QED = 4 QCD = 2 pp > w+ l+ l- jj > l+ l- vl l+
  • Attempts at following the recipes I find online have been pretty frustrating. I haven't found anything yet that gives a specific example of taking in a .lhe file and converting it to EDM. My procedure keeps failing in the hadronization step with error:
    • [2] Calling event method for module BetafuncEvtVtxGenerator/'VtxSmeared'
    • Exception Message:
    • Principal::getByLabel: Found zero products matching all criteria
    • Looking for type: edm::HepMCProduct
    • Looking for module label: generator
    • Looking for productInstanceName:

2014 09 22

  • CMSSW integration continues to frustrate
  • Working on making plot of signal = (QED6) - (QED4 QCD2). Can make the histogram subtraction work in CINT but am having trouble getting it to work in my plotting script.

2014 09 23

  • Successfully made the QED = 6 QCD = 0 and [QED6 - (QED4 QCD2)] "signal" comparison plot.
  • Presented at meeting and met with Matt
  • Matt points out that the errors in my CMSSW generation appear to be from the first (LHE->EDM) step not working correctly
  • Met with Robert (an undergraduate student working with Matt) and passed on my version of the WpZ_ana code. Noticed a few updates that should be made:
    • Make a simpler command line argument interface to accept a root and lhe file at run time
    • Create a simple technique for making unweighted plots (no lhe file)
    • Make LHEWeights function compatible with weights that set multiple Lagrangian parameters.
  • A few other points from talking with Robert not related to analysis code:
    • Update condor reweight code with a simple user interface so that it can be passed to Robert
    • Matt, Robert, and I should be doing a better job making sure we are on the same page with our MadGraph release and run/configuration cards.
  • Wrote script to allow commonly edited parameters to be changed from default cards while keeping run_card parameters that we don't usually change untouched. Also takes care of setting run_mode in me5_configuration.txt so that survey/refine is done via condor but combine/store is not.
    • The script is located at /nfs_scratch/kdlong/Default_MadGraph_Run. Invoked as ./run_madevent <path_to_run> <num_events> <COM energy in TeV>
    • Tested successfully under Tyler's username. Hopefully it will work out of the box for Matt and Robert.
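The configuration-editing step the script automates could look roughly like this. This is a simplified sketch assuming plain "key = value" lines (as in me5_configuration.txt, e.g. run_mode = 1 for cluster, 2 for multicore); the real script handles more cases and also edits run_card.dat:

```python
# Simplified sketch of toggling an option in me5_configuration.txt,
# e.g. run_mode = 1 (cluster) for survey/refine, 2 for combine/store.
# Only handles simple "key = value" lines; '#' starts a comment.
def set_option(text, key, value):
    out = []
    for line in text.splitlines():
        stripped = line.split("#")[0].strip()
        if stripped.startswith(key) and "=" in stripped:
            line = "%s = %s" % (key, value)  # note: drops any inline comment
        out.append(line)
    return "\n".join(out)
```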

2014 09 24

  • Emailed Matt and Robert about madevent script implementation
  • Successfully updated code to run with no LHE file
  • Trying to fix LHEWeights to cooperate with multiple parameters being set. Getting the same mysterious errors, with file reading hanging for no apparent reason, as at the beginning of the summer.

2015 01 06

  • New Years resolution: keep my log book updated!
  • Researching jet matching
  • Found the LHE file from the official Monte Carlo campaign for W+/-Z + jets > 3lv from the SUSY group. Currently located in ~/SUSY_WZ
    • For future reference:
    • The Monte Carlo campaign through which the sample was created is here: https://cms-pdmv.cern.ch/mcm/requests?dataset_name=WZJetsTo3LNu_Tune4C_13TeV-madgraph-tauola&page=0&shown=127
      • I found this by:
      • searching for dataset=/*WZ*/*/* on https://cmsweb.cern.ch/das/ to find the name of the data set
      • Go to https://cms-pdmv.cern.ch/mcm/
      • Click on "Request" at the top
      • Click on "Output dataset" in blue, above the lists of campaigns, but below the topmost menus
      • In the search box which appears, search for "/WZJetsTo3LNu_Tune4C_13TeV-madgraph-tauola/Fall13pLHE-START62_V1-v1/GEN"
      • This should give one hit, "SUS-Fall13pLHE-00008." Click on "WZJetsTo3LNu_Tune4C_13TeV-madgraph-tauola" under "DataSet Name"
    • There should be a list of various stages of the Monte Carlo generation. To get the LHE file, look at the pLHE step. On the line that begins with "SUSFall13pLHE00008", click the small check mark under the "Actions" heading. Hovering your mouse over it should give the text "get test command"
    • This is the command to convert a private LHE file to a CMS EDM file. From the command, the input file is lhe:12632. This indicates that the LHE has been uploaded to the EOS storage system (https://cern.service-now.com/service-portal/article.do?n=KB0001998), which is where we can find it!
      • ssh to lxplus
      • Use "eos ls store/lhe/12632" to list the contents of this directory. It should contain the file WZJetsTo3LNu_500k_events_q15_noclus.lhe.xz
      • Copy from eos with the "cmsStage" script:
        cmsStage /store/lhe/12632/WZJetsTo3LNu_500k_events_q15_noclus.lhe.xz .
      • This file is zipped with xz. You can unzip it with: unxz WZJetsTo3LNu_500k_events_q15_noclus.lhe.xz
      • Ta da! You can always scp it back to the login machines.

2015 01 07

  • Found the solution to the problem of discontinuities at the m_{4l} cut when combining two samples formed with high and low m_{4l} cuts. (See attached)
    • The problem is neither a physics problem nor a code problem, but a usage-of-the-code problem: when the reweighting is done via condor, the lhe file is split and then recombined, and this recombining step does not necessarily restore the events to their original order. The application of weights in the WpZ_analysis program depends on the events in the root file being in the same order as in the lhe file.
    • The solution is simple: create a new root file from the new weighted lhe file with ExRootAnalysis/ExRootLHEFConverter and use this root file instead.
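One way to catch this kind of ordering mismatch is to fingerprint each <event> block and compare two files; a hypothetical sketch (here the fingerprint is just the first line inside each event, so a more robust check might hash the full block):

```python
# Hypothetical sanity check for the event-ordering pitfall: extract a
# per-event fingerprint from two LHE files and compare the sequences.
# If the recombined file's order differs, weights applied by index will
# be attached to the wrong events.
def event_fingerprints(lines):
    prints, in_event, grab = [], False, False
    for line in lines:
        if line.startswith("<event"):
            in_event, grab = True, True
        elif line.startswith("</event"):
            in_event = False
        elif in_event and grab:
            prints.append(line.strip())  # first line inside the event block
            grab = False
    return prints

def same_order(lhe_a, lhe_b):
    return event_fingerprints(lhe_a) == event_fingerprints(lhe_b)
```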
  • Investigated the usage of MatchChecker (https://cp3.irmp.ucl.ac.be/projects/madgraph/wiki/MatchChecker) for checking that the differential jet rate plots are smooth, which indicates a reasonable setting of the matching parameter xqcut.
    • My impression is that this program may have been superseded by MadAnalysis. Trying to properly configure MadAnalysis and hoping that these plots are then automatically generated.

2015 01 08

  • The differential jet rate plots are automatically generated if MadAnalysis is installed. Then after running pythia the plots are created. Ran for wz+jets sample with xqcut = 10, 20, 30. I believe qcut is automatically set in MadGraph to be > xqcut, something like 1.2*xqcut or xqcut + 5.
  • I need to understand what these plots actually mean.
  • Working on pre-exercises for CMSDAS

2015 01 09

  • Worked on trying to get the recipe for submitting jobs finished before going off to CMSDAS.
  • Talked to Bhawna about which pileup settings she used. She had the minbias files moved to the UW cluster so the completion time should be improved for this step.
  • Talked to Matt about which pileup setting to use. He recommends AVE_20_BX_25ns, i.e. 20 PU and 25 ns bunch crossings, which should be the running conditions in the longer term (~months to a year, rather than immediately after turn-on), so this is appropriate for longer-term studies such as this one.
  • Worked on pre-exercises for CMSDAS

2015 01 12 - 16

At CMS Data Analysis School: https://indico.cern.ch/event/346968/


2015 01 19

  • The jobs which I submitted the week before the CMSDAS all seem to give incorrect results in the pythia step. Out of 500 events, many fail with errors of the type:
    • 495 Abort from Pythia::next: parton+hadronLevel failed; giving up
    • 33145 Error in Pythia::check: unmatched particle energy/momentum/mass
  • I believe this error can be traced to lhe files with weight information; running a file without any event weights does not give such errors.
  • It appears that using a newer version of the software fixes the errors. I ran 100 events with CMSSW_7_0_6_patch1 without any such errors. Submitted the Fall13 step to condor for the m4l > 600 and m4l < 600 100,000-event samples.

2015 01 20

  • Jobs run through the Pythia step successfully when using CMSSW_7_0_6_patch1 instead of the 6_2_3 used in Fall13.
  • Ran Madgraph process for W+Z + 2 jets QCD production (100,000 events) with exact same settings as
  • Attended UW meeting and Devin's prelim

2015 01 21

  • Added CMSSW generation code and config files to github
    • The code now includes the proper FarmoutAnalysisJobs call for each step. Using a simple bash script to avoid excessively long names for output files
    • Will move to a config file style input rather than command line options to simplify input
  • Finished all steps for wpz_all_m600h_rwgt data set (inclusive wl+l-jj sample with weights, with m4l < 600 GeV), close to completing wpz_all_m600l_rwgt and wpz_qcd2jet (no matching)
Topic attachments

  • m4l600_disc.pdf (15.2 K, 2015-01-08, KennethDavidLong): discontinuity between the high and low m4l samples caused by weights being applied to the wrong events (before the fix)
  • m4l600_fixed.pdf (15.2 K, 2015-01-08, KennethDavidLong): the same comparison after the fix
  • run_card_14_5_15.dat (15.8 K, 2014-05-15, KennethDavidLong): current run_card.dat for full simulations at 13 TeV as of May 15, 2014
Topic revision: r24 - 2015-01-22 - KennethDavidLong