Abraham Tishelman-Charny's Homepage


Hello, my name is Abraham (Abe) Tishelman-Charny and I'm a PhD student from Northeastern University based at CERN in Geneva, Switzerland. I'm interested in lifting, classical music, hip-hop, chocolate, and of course, Physics. I'm using this page to gather my thoughts and document my research.

HH→WWγγ Analysis

The diHiggs to WWγγ channel can be studied in the context of the standard model and multiple beyond the standard model theories, including searches for the exotic Radion or Graviton.

Private Signal Production

A github repository exists for creating HH→WWγγ samples, but in principle it can take any pythia configuration file and run your desired cmsDriver commands. In this example, the steps for HH→WWγγ private MC production will be followed.

Begin by cloning this github repository:


git clone https://github.com/NEUAnalyses/HH_WWgg.git
cd HH_WWgg

or via SSH:

git clone git@github.com:NEUAnalyses/HH_WWgg.git
cd HH_WWgg

Based on how the repository is currently set up (25 June 2019), the pythia fragment must be compatible with CMSSW_9_3_9_patch1. To begin, set up a CMSSW_9_3_9_patch1 release area:

cmsrel CMSSW_9_3_9_patch1

Then you should place your pythia configuration file in CMSSW_9_3_9_patch1/src/Configuration/GenProduction/python/. An example configuration file can be found here: ggF_X750_WWgg_qqlnugg.txt (just change the extension from .txt to .py before use). Note the madgraph gridpack included in the preamble. This will create a 750 GeV Radion in MadGraph, and pythia will decay it into two SM Higgs Bosons, where one Higgs decays into two photons and the other Higgs decays into two W bosons. One of the W bosons will then decay into two quarks (which can be u, d, s or c) which will hadronize, and the other W boson will decay into a lepton and neutrino (of the electron or muon flavor).


The next step is to create the proper keys and values in the MC_Configs json file. If you only want to study the GEN signal without detector simulation, you only need to run the GEN step of the production. In this case, the json entry should look like this:

        {
            "step"    : "GEN",
            "events"  : 100000,
            "jobs_jobsize"    : 200,
            "fragment_directory"  :  "ggF_X750_WWgg_qqlnugg",
            "pileup"              :  "dummy"
        }

This will create 100k events of the signal spread over 200 CRAB jobs. The script will look for the fragment at the path: CMSSW_9_3_9_patch1/src/Configuration/GenProduction/python/ggF_X750_WWgg_qqlnugg.py. For the GEN step, the pileup entry is not read by the script, so it can be set to a dummy value or string. If you also want to include detector simulation, which takes much more time but allows you to perform the Digi Reco steps afterwards, simply change the "step" value from "GEN" to "GEN-SIM".

N.B. You can add as many configurations as you'd like here by adding a comma after the closing curly bracket and then adding the next set of keys. This allows you to submit jobs for as many pythia configurations as you'd like.
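For concreteness, a two-entry MC_Configs.json might look like the following sketch. The exact top-level layout (an array of objects) is an assumption based on the description above, and the second fragment name is hypothetical:

```json
[
    {
        "step"    : "GEN",
        "events"  : 100000,
        "jobs_jobsize"    : 200,
        "fragment_directory"  : "ggF_X750_WWgg_qqlnugg",
        "pileup"              : "dummy"
    },
    {
        "step"    : "GEN",
        "events"  : 100000,
        "jobs_jobsize"    : 200,
        "fragment_directory"  : "ggF_X300_WWgg_qqlnugg",
        "pileup"              : "dummy"
    }
]
```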

After saving the MC_Configs.json file, you can run the script with the command:

. main.sh

Note that when running this, the output directory of the CRAB jobs is currently hardcoded; you may wish to change it to your desired output location. It's also useful to manually set your VOMS proxy in case the script doesn't catch that you don't have one set. This can be done with the command:

voms-proxy-init --voms cms --valid 168:00

as long as you have your grid certificate set up properly.

If everything worked as it should (which is very, very rare the first time with anything), you should have a crab project created for each of your pythia configuration files. You can check the status of a crab project with the command:

crab status -d CMSSW_9_3_9_patch1/src/crab_projects/<crab_project_name>


If CRAB has successfully output the files for your GEN-SIM step, you're now ready to submit jobs for the DR1 (DigiReco 1) step. An example MC_Configs.json entry following the previous GEN-SIM step settings is the following:

        {
            "step"    : "DR1",
            "events"  : 100000,
            "jobs_jobsize"    : 1,
            "fragment_directory"  :  "/eos/cms/store/group/phys_higgs/resonant_HH/RunII/MicroAOD/HHWWggSignal/ggF_X750_WWgg_lnulnugg/100000events_GEN-SIM/190624_230034/0000/",
            "pileup"              :  "wPU"
        }

The steps following GEN or GEN-SIM have different configuration parameters. 'DR1' is chosen for the step, and 100000 events because this was the number of GEN-SIM events produced. The third parameter, 'jobsize', is the number of GEN-SIM files to use per job; because the last step output 200 files, choosing a jobsize of 1 here creates 200 DR1 jobs. The 'fragment_directory' key takes the full path to the directory containing the output GEN-SIM files, which should end with a slash '/'. The pileup options are 'wPU' or 'woPU', to include or exclude pileup. If you include pileup, this step may take some time because it may be configured to read in a great many pileup files, all of which are printed on the terminal.

When DR1 finishes, running DR2 is the same procedure as DR1, except the "step" value becomes "DR2" and the directory of files points to the location of the output DR1 files. Just make sure to keep the pileup choice consistent between DR1 and DR2: it should be 'wPU' for both steps, or 'woPU' for both steps. Here is an example for DR2:

        {
            "step"    : "DR2",
            "events"  : 100000,
            "jobs_jobsize"    : 1,
            "fragment_directory"  :  "/eos/cms/store/group/phys_higgs/resonant_HH/RunII/MicroAOD/HHWWggSignal/ggF_X750_WWgg_lnulnugg/100000events_wPU_DR1/190625_073937/0000/",
            "pileup"              :  "wPU"
        }

When DR2 finishes, the final step is to produce MINIAODs. The json config follows the same convention as DR1 and DR2, but now the "pileup" key won't be read, and can be given a dummy value. The directory key should contain the path to the DR2 output, like so:

        {
            "step"    : "MINIAOD",
            "events"  : 100000,
            "jobs_jobsize"    : 1,
            "fragment_directory"  :  "/eos/cms/store/group/phys_higgs/resonant_HH/RunII/MicroAOD/HHWWggSignal/ggF_X750_WWgg_lnulnugg/100000events_wPU_MINIAOD/190625_073937/0000/",
            "pileup"              :  "wPU"
        }

When this step finishes, your output directory should have MINIAOD files containing all of your events. If this worked, good job! If it did not, something needs to be fixed.

HH MC Studies

To study an HH signal with custom BSM parameters, you can begin by cloning the CMS genProductions github repository:

git clone git@github.com:cms-sw/genproductions.git genproductions
cd genproductions/bin/MadGraph5_aMCatNLO/

You can proceed by editing an existing gluon fusion HH MadGraph card, for example the Benchmark 1 scenario. The BSM parameters can be changed in the customization card, as shown in the next section.

Creating a Custom BSM Gridpack for MadGraph5_aMCatNLO

General steps to produce a gridpack can be found here, but I'm going to include BSM HH specific instructions below.

In order to study a desired HH signal produced by MadGraph5, you must first create the gridpack. This means creating a "run", "proc", "extramodels" and "customizecards" card. In this example, we'll see how to create a GEN sample for (κ_{λ}, κ_{t}) = (10,1).

You can begin by creating a folder for your new gridpack, for example 'genproductions/bin/MadGraph5_aMCatNLO/cards/HH_MC_Study/kl_10_kt_1'; this folder is where you'll place the madgraph cards. You can find examples of these cards by looking at an existing set. For the Benchmark 1 scenario (one of 12 BSM scenarios defined in a DiHiggs MC study), the customizecards.dat file looks like this:

set param_card bsm 30 -1.0
set param_card bsm 31 0.0 
set param_card bsm 32 0.0 
set param_card bsm 189 1.0 
set param_card bsm 188 7.5

The final two constants, 1.0 and 7.5, represent the ratios of the BSM to SM coupling constants for the Higgs boson Yukawa coupling to heavy quarks and the Higgs boson tri-linear self-coupling, respectively. This means that in this BSM benchmark scenario, the coupling of the Higgs to itself is 7.5 times its SM value. In this study, these ratios are referred to as κ_{t} and κ_{λ}. The other three constants, -1.0, 0.0 and 0.0, represent c2, cg and c2g: the contact coupling of two Higgs bosons to a top/antitop pair, the effective coupling of the Higgs to two gluons, and the effective coupling of two Higgs bosons to two gluons. In the SM these three parameters are zero because these contact couplings do not appear at tree level; they can arise in BSM scenarios in the presence of a heavy new state (see Section 2 of the HH MC Benchmark Study).

To adjust this for the (κ_{λ}, κ_{t}) = (10,1) BSM model, these lines should be changed to the following:

set param_card bsm 30 0.0
set param_card bsm 31 0.0 
set param_card bsm 32 0.0 
set param_card bsm 189 1.0 
set param_card bsm 188 10.0
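The mapping between BSM-block indices and couplings described above can be captured in a small helper for generating these card fragments. The function and its interface are hypothetical (my own, not part of genproductions), and the index-to-coupling mapping is taken from the description above:

```python
# Hypothetical helper for writing a customizecards.dat fragment.
# Assumed index-to-coupling mapping, as described in the text:
# 30 -> c2, 31 -> cg, 32 -> c2g, 189 -> kappa_t, 188 -> kappa_lambda.
def make_customizecards(kl, kt, c2=0.0, cg=0.0, c2g=0.0):
    entries = [(30, c2), (31, cg), (32, c2g), (189, kt), (188, kl)]
    return "\n".join(
        "set param_card bsm %d %.1f" % (index, value) for index, value in entries
    )

print(make_customizecards(kl=10.0, kt=1.0))
```

Calling it with (κ_{λ}, κ_{t}) = (10,1) reproduces the card shown above.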

The other file that must be changed is the 'proc_card' (process card). This contains the name of the output in the final line of the card, which in this case could be changed from GF_HH_1 to kl_10_kt_1 like so:

set group_subprocesses Auto 
set ignore_six_quark_processes False 
set loop_optimized_output True 
set complex_mass_scheme False 
import model BSM_gg_hh
define lep = e+ e- mu+ mu-
define nus = ve~ vm~ ve vm
generate p p > h h 
output kl_10_kt_1 -nojpeg

You should place these cards, in addition to the 'extramodels' and 'run' cards, in your gridpack folder genproductions/bin/MadGraph5_aMCatNLO/cards/HH_MC_Study/kl_10_kt_1. This folder should contain four files:

  1. kl_10_kt_1_customizecards.dat
  2. kl_10_kt_1_extramodels.dat
  3. kl_10_kt_1_proc_card.dat
  4. kl_10_kt_1_run_card.dat
Naming them properly matters in the next steps: it's ideal for the file names to begin with the folder name, as they do in this case.

The next step is to move to 'genproductions/bin/MadGraph5_aMCatNLO', and run the following command:

. gridpack_generation.sh kl_10_kt_1 cards/HH_MC_Study/kl_10_kt_1

This will search the directory 'cards/HH_MC_Study/kl_10_kt_1' for the madgraph cards, and will expect the name 'kl_10_kt_1' before '_proc_card.dat' in the process card file name.
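Since the card prefix must match the gridpack name, a quick sanity check before launching gridpack_generation.sh can save a failed run. This checker is hypothetical (the script does its own lookup):

```python
import os

# The four card files expected in <cards_dir>, each prefixed by the gridpack name.
REQUIRED_SUFFIXES = ["customizecards.dat", "extramodels.dat", "proc_card.dat", "run_card.dat"]

def missing_cards(name, cards_dir):
    """Return the expected card file names that are not present in cards_dir."""
    expected = ["%s_%s" % (name, suffix) for suffix in REQUIRED_SUFFIXES]
    present = set(os.listdir(cards_dir))
    return [card for card in expected if card not in present]
```

For example, missing_cards("kl_10_kt_1", "cards/HH_MC_Study/kl_10_kt_1") should return an empty list if the folder is set up correctly.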

κ_{λ} Extrapolation

In the gluon-gluon fusion Standard Model HH production scenario, there are two relevant coupling constants: the Higgs boson tri-linear coupling, which is the Higgs boson's coupling to two other Higgs bosons, and the top Yukawa coupling, which controls the strength of the 'box' diagram through the Higgs' coupling to a heavy quark. Because the Yukawa coupling scales with the fermion mass, the Higgs coupling to quarks lighter than the top quark is small and may be neglected.

In BSM HH scenarios, the values of these two coupling constants can change. The factors by which the tri-linear and Yukawa coupling constants change with respect to the standard model are κ_{λ} and κ_{t}. In the standard model scenario, these are both equal to 1. In BSM scenarios these can vary, leading to many possible BSM scenarios to study.

In order to model the distributions of the many possible κ_{λ}, κ_{t} combinations without producing a gridpack for every point, an extrapolation method can be used to model the kinematics of many points with a limited number of samples. The following case will serve as an example for how to follow the procedure:


In this example, it is assumed you have three samples corresponding to the (κ_{λ}, κ_{t}) points (0,1), (1,1) and (5,1), and that you want to plot the extrapolated invariant HH mass for the scenario (20,1); the desired extrapolated κ_{λ} value is 20. For this procedure we assume you will be using the κ_{λ} = 0 and κ_{λ} = 1 samples, so the third sample in this case has κ_{λ} = 5. First, the HH invariant mass distribution must be obtained for each sample. This can be done using FWLite to plot GEN variables. The distribution must then be normalized to the total cross section of the sample; this value can be obtained using GenXSecAnalyzer, and should be applied as a multiplicative factor to the distribution.

After obtaining total cross section normalized distributions, each distribution must be multiplied by a weight. The weights are calculated with the formula [1, slide 4]:

σ_{k'} = k'^{2} * t(k) + b + k' * i(k)

where

t(k) = ( (σ_{k} - σ_{0}) / ( k * ( k - 1 ) ) ) - ( (σ_{1} - σ_{0}) / ( k - 1 ) )

i(k) = ( k * (σ_{1} - σ_{0}) / ( k - 1 ) ) - ( (σ_{k} - σ_{0}) / ( k * ( k - 1 ) ) )

b = σ_{0}

In this case, having k' = 20 and k = 5 gives us:

σ_{20} = (57)*σ_{0} + (-75)*σ_{1} + (19)*σ_{5}

This means the extrapolated (κ_{λ}, κ_{t}) = (20,1) distribution can be obtained by summing the three total cross section normalized distributions with the weights 57, -75 and 19 for (κ_{λ}, κ_{t}) = (0,1), (1,1) and (5,1) respectively.
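The weight calculation can be reproduced in a few lines of Python; this is just the expansion of the σ_{k'} formula above with the coefficients of σ_{0}, σ_{1} and σ_{k} collected:

```python
def extrapolation_weights(k_prime, k):
    """Weights (w0, w1, wk) for the sigma_0, sigma_1 and sigma_k normalized
    distributions, from sigma_k' = k'^2 * t(k) + b + k' * i(k)."""
    wk = (k_prime**2 - k_prime) / (k * (k - 1))   # coefficient of sigma_k
    w1 = (k_prime * k - k_prime**2) / (k - 1)     # coefficient of sigma_1
    w0 = 1.0 - w1 - wk                            # weights sum to 1, since b = sigma_0
    return (w0, w1, wk)

print(extrapolation_weights(20, 5))  # -> (57.0, -75.0, 19.0)
```

As a sanity check, k' = 1 gives weights (0, 1, 0): the extrapolation reproduces the κ_{λ} = 1 sample exactly.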


ECAL

The Compact Muon Solenoid (CMS) Electromagnetic Calorimeter (ECAL) is a high-precision calorimeter comprising 75,848 lead tungstate crystals. The purpose of this subdetector is to measure the energies of electrons and photons.

Run 3 L1 Optimization

For Run 3 operation of ECAL, set to begin in 2021, improvements to the ECAL algorithm used at Level-1 (L1) to create trigger primitives (TPs) are being studied. See the GitLab Repository, GitHub Repository (probably out of date compared to GitLab), and the Thursday 11:05am - 12:20pm talks from ECAL Days at Zurich. A particular point of interest is the adjustment of the weights in the FIR filter used to calculate transverse energy values on a bunch-crossing-by-bunch-crossing basis. In Run 2, one set of L1 weights was used for the entire barrel (EB), and one set for both endcaps (EE). There is evidence that these weights aren't ideal, as the bias away from the correct energy increases with pseudorapidity. In addition to this potential area of improvement, investigation of the FENIX chip design (used by the ECAL electronics) has shown that it may be possible to use a second set of weights at L1 to improve spike and noise mitigation.

Ideal Weights Calculation

In October 2017, June 2018 and September 2018, timing scans were performed on ECAL crystals in order to precisely measure the electronic pulse shape output for all crystals. The method with which this was done results in a pulse shape that is free of pileup (PU) interactions. For each waveform, one can derive a set of 'ideal' weights, meaning weights designed to have minimum bias when applied to the samples of that waveform; they are the most accurate possible weights for the chosen weights configuration.
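As a toy illustration of what 'minimum bias' means here, one standard construction picks the weights that minimize the weight norm subject to unit amplitude response and zero pedestal bias. The pulse-shape numbers below are made up, and the real ECAL derivation also accounts for noise correlations, so treat this as a sketch of the idea only:

```python
# Toy "ideal weights": minimize sum(w_i^2) subject to
#   sum(w_i * p_i) = 1  (unit response to the pulse shape p)
#   sum(w_i)       = 0  (zero response to a flat pedestal)
# Solved in closed form via Lagrange multipliers: w = alpha * p + beta.
def ideal_weights(p):
    n = len(p)
    s_p = sum(p)
    s_pp = sum(x * x for x in p)
    det = s_pp * n - s_p * s_p
    alpha, beta = n / det, -s_p / det
    return [alpha * x + beta for x in p]

pulse = [0.00, 0.05, 0.40, 0.90, 1.00, 0.85, 0.60, 0.40, 0.25, 0.15]
w = ideal_weights(pulse)

# Applying w to pedestal-shifted samples recovers the amplitude:
samples = [3.2 * x + 200.0 for x in pulse]  # amplitude 3.2, pedestal 200
print(round(sum(wi * si for wi, si in zip(w, samples)), 6))  # -> 3.2
```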

Updating Flashgg

If you are working from a forked flashgg branch that uses an old version of flashgg compared to the main flashgg repository, and you would like to work with the latest version, start by cloning the latest version of flashgg (a specific branch if desired). An example of this is with the commands:

cmsrel CMSSW_10_5_0
cd CMSSW_10_5_0/src
git cms-init
cd $CMSSW_BASE/src
git clone -b dev_legacy_runII https://github.com/cms-analysis/flashgg
source flashgg/setup_flashgg.sh

If everything looks good, you can then build:

cd $CMSSW_BASE/src
scram b -j

This will clone and build the latest flashgg branch. You can then create a new branch with:

git checkout -b <New_Dev_Branch>

(The -b flag creates the branch if it does not already exist; without it, git will only switch to an already-existing branch.)

You will now be on a new branch called <New_Dev_Branch>, identical to dev_legacy_runII in this example (in general, identical to whichever branch you cloned from the main repository with the -b flag). You can then stage your development-specific files with 'git add', and commit them with 'git commit' or "git commit -m '<commit message>'". Next, decide where to push this new branch. In order to push it to your own remote (forked) flashgg repository, you need to fork the main flashgg repository, which you can do by clicking the 'Fork' button at the top right of the github.com repo.

Once you have a forked repository (the remote repository), you need to set it as the remote repository to which you will push commits. If you work via SSH, this is done with the command:

git remote set-url origin git@github.com:<your_git_username>/flashgg.git

If you work via HTTPS instead, use the command:

git remote set-url origin https://github.com/<your_git_username>/flashgg.git

These commands point the 'origin' remote at the url of your forked repository. Now, you can push your new branch with:

git push origin <New_Dev_Branch>

You should now see <New_Dev_Branch> on your forked repository on github.com. Now you can work on this development branch locally. In order to make commits of your development branch to your forked repository, you simply add the changes with 'git add <new_file>', commit with 'git commit' or "git commit -m '<commit_message>' ", and then push with:

git push origin <New_Dev_Branch>

Now you can save your changes commit by commit, and when you would like to add it to the main flashgg repository, you can make a pull request of your forked development branch <New_Dev_Branch> to the main flashgg repository.

Flashgg MicroAODs with CRAB

In order to produce flashgg microAODs from existing MINIAODs, it's recommended to begin from a fresh clone of flashgg. This is because the tarball eventually produced for crab submission will not run if it exceeds 100 MB, which means it will be necessary to delete files from flashgg that are unnecessary for CRAB job submission, but could potentially be necessary for other work in flashgg.

You can clone flashgg by following these commands:

cmsrel CMSSW_10_5_0
cd CMSSW_10_5_0/src
git cms-init
cd $CMSSW_BASE/src
git clone -b dev_legacy_runII https://github.com/cms-analysis/flashgg
source flashgg/setup_flashgg.sh

If everything then looks good, you can build:

cd $CMSSW_BASE/src
scram b -j 

If everything worked properly (which it does not always do for me), you now have a freshly cloned version of flashgg. I'm now going to follow some steps listed on flashgg [see here], but with some differences in steps and comments.

You should now switch to the following directory:

cd $CMSSW_BASE/src/flashgg/MetaData/work

You next need to create a JSON configuration file that contains the sample names of your data, signal and background. Note that these are not physical paths, but path-like names used to define different samples. Your configuration file should be located in MetaData/work, and look like this example: JSON_Config_File.txt (just make sure to change the extension from .txt to .json before using). This example includes only one sample, a Drell-Yan background sample. If you created MINIAODs with the Private Production steps above, and allowed publication in your crab config file, you can find the sample names with this command:

dasgoclient --query='/<config.Data.outputPrimaryDataset>*/<yourusername>*/USER instance=prod/phys03'

If you don't remember the name of the output PrimaryDataset, you can set that part to '*' to see all of your published samples. Note: if you use one of these samples, you'll later need to manually change 'prod/global' to 'prod/phys03' in your crab config file, because this sample is from a different DBS instance.

At this point you should make sure crab has been sourced and your VOMS proxy is active. You can perform these two steps with the following commands:

source /cvmfs/cms.cern.ch/crab3/crab.sh
voms-proxy-init --voms cms --valid 168:00

This will allow you to use crab commands, and will activate your proxy for 168 hours, which allows you to access the grid (I recommend creating bash aliases for these two commands if you use them often).

The next step is to run the crab configuration. In order to do this you need to choose:

  • a CMSSW configuration file (a pset file), which in this case will be the default flashgg microAOD producer: flashgg/MicroAOD/test/microAODstd.py
  • a Campaign name, which we can call 'test'
  • the JSON file you just created, which may be called JSON_Config_File.json

Note: The max number of events to run per file should be set in the microAODstd.py file. This is done for 1000 events with the following line:

process.maxEvents = cms.untracked.PSet( input = cms.untracked.int32( 1000 ) )


These will be passed as flags when running the crab config with the following command:

./prepareCrabJobs.py -p ../../MicroAOD/test/microAODstd.py -C test -s JSON_Config_File.json --mkPilot

If you get the following error:

Exception: jobname remains too long, additional hacks needed in prepareCrabJobs.py

You need to go into prepareCrabJobs.py and look for the list 'replacements'. Each element in this list is a pair of a long string and a shortened version of it. You should add to this list a long string contained in the offending jobname, together with a shortened version of it, so that the jobname is no longer too long. The job name needs to be <= 97 characters, and the error message will tell you the current length of the job name. For example, when I run this on my private samples, I'm told that my job name:


is too long. Therefore in the replacements list I add:


If you run the ./prepareCrabJobs.py command again, you will see whether any job names are still too long. I recommend removing a lot of the clutter from the jobname: I've had at least one instance where the jobname was not flagged as too long, but CRAB still complained that config.General.requestName was too long.
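The replacement mechanism in prepareCrabJobs.py amounts to simple string substitution; here is a sketch with hypothetical (long, short) pairs, not the script's actual list:

```python
# Sketch of the jobname shortening done via the 'replacements' list in
# prepareCrabJobs.py. The (long, short) pairs below are hypothetical examples.
MAX_JOBNAME_LENGTH = 97

replacements = [
    ("RunIIFall17MiniAODv2", "Fall17"),
    ("_TuneCP5_13TeV-madgraph-pythia8", ""),
]

def shorten_jobname(jobname):
    # Apply each substitution in order; pairs that don't match are no-ops.
    for long_string, short_string in replacements:
        jobname = jobname.replace(long_string, short_string)
    return jobname

print(shorten_jobname("GluGluToHHTo2G2Qlnu_TuneCP5_13TeV-madgraph-pythia8_RunIIFall17MiniAODv2"))
# -> GluGluToHHTo2G2Qlnu_Fall17
```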

Running this command will create a folder named 'test' containing the crab configuration files. The 'mkPilot' option will additionally create a crab configuration file that runs on a single file, rather than all of the sample files. If your sample is from a phys03 DBS instance and not a global one, you'll now need to edit the inputDBS line of your crab configuration file from:

config.Data.inputDBS = 'global'

to:

config.Data.inputDBS = 'phys03'

If you try to submit your crab jobs at this point, it is likely (maybe almost certain) that the tarball created for your CRAB submission will exceed the 100 MB limit due to extra flashgg files. There is a script called 'deleted_to_submit.sh' which will delete lots of files to clear up space, but this may remove files necessary for microAOD production, as the latest flashgg version requires a conditions json file. Make sure you still have the json files in flashgg/MetaData/data/MetaConditions/*, since choosing one of these during microAOD production allows you to run on your choice of 2016, 2017 or 2018 samples.

The default conditionsJSON is an empty string, so the next step is to edit the crab configuration file you want to run (either the one that begins with 'pilot' or 'crabConfig') and change "conditionsJSON" in the 'pyCfgParams' option from an empty string to a json file in MetaData/data/MetaConditions/*, for example $CMSSW_BASE/src/flashgg/MetaData/data/MetaConditions/Era2017_RR-31Mar2018_v1.json, which is meant for 2017 era samples.
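The edited line in the generated crab configuration might then look like the following fragment. This is a sketch: the surrounding config object comes from CRABClient, and the other contents of pyCfgParams depend on what prepareCrabJobs.py generated for you:

```python
import os
from CRABClient.UserUtilities import config

config = config()
# ... other options generated by prepareCrabJobs.py ...
config.JobType.pyCfgParams = [
    "conditionsJSON=%s/src/flashgg/MetaData/data/MetaConditions/Era2017_RR-31Mar2018_v1.json"
    % os.environ["CMSSW_BASE"],
]
```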

If you now attempt to run the crab configuration with the following step:

cd test
echo crabConfig_*.py | xargs -n 1 crab sub

you should run into the problem of a tarball that is too large. If you look at the log file for the crab submission, it will list the largest files in the repository, giving you good candidates to delete. Personally, I deleted the DNN files and everything in Taggers/data/*.

After deleting extraneous files, you'll want to run again. You'll first need to delete the directory made for your previous crab submission (because submitting again under the same crab job name will fail), and run again:

cd test
echo crabConfig_*.py | xargs -n 1 crab sub

If everything worked, crab will tell you your job was successfully submitted to the CRAB server. You can check the status of the job with:

crab status -d <directory_of_crab_project>

The output files should appear in the directory:


Adding MicroAODs to a flashgg Campaign

As long as you had the config.Data.publication option set to True (the default in flashgg), you can now add your microAOD files to a flashgg campaign. This will then allow you to run your fgg modules over your microAODs by specifying the campaign and dataset name in your fggrunjobs json configuration file.

The steps can be found on the flashgg page [here], but I will list and organize them here.

Step 1) Import

fggManageSamples.py -C <campaignName> -V <flashggVersion> import

Searches for datasets matching:


Step 2) Review

fggManageSamples.py -C <campaignName> review 

This step allows you to decide which samples to keep from the import step; for example, you can save all of them, or choose to remove certain samples.

Step 3) Check

fggManageSamples.py -C <campaignName> check 

This step marks bad files as such in the catalog created in MetaData/data/, and reports the total weight for each file.

Flashgg PhotonID


This section documents the ongoing studies to improve the flashgg photon ID using a CNN (Convolutional Neural Network). For my local area, work begins in:

cd /afs/cern.ch/work/a/atishelm/21JuneFlashgg/CMSSW_9_4_0/src; cmsenv; 

example command to prepare condor jobs:

python condor_production_MINIAOD.py -o /eos/cms/store/group/phys_higgs/cmshgg/atishelm/flashgg/GJetMINIAOD/RunIIFall17 -c /afs/cern.ch/work/a/atishelm/21JuneFlashgg/CMSSW_9_4_0 -q tomorrow -e cms -d MakeMINIAOD -nE 5

This will create condor_job.txt. You can run the jobs with:

condor_submit condor_job.txt

and it will queue arguments from arguments.txt, which has one line per input file. In order to run properly, you first need to run your voms command.

After this runs, some jobs may fail. You can check the number of 'bad' jobs with $ . skimLogs.sh, making sure to set the clusterID variable equal to the clusterID of the logs of interest in the "output" and "error" directories. Also make sure to delete output and error files from previous job submissions. This gives the number of jobs that had a fatal error in their output file. You can then create a new arguments text file with ReproduceArgs.sh, with its "input" variable set equal to the arguments text file.

After changing the name of the old arguments file (mv arguments.txt oldArguments.txt) and naming the newly created text file arguments.txt, you can rerun with condor_submit condor_job.txt.

You can then repeat this process if jobs from the next submission fail.
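The bookkeeping that skimLogs.sh and ReproduceArgs.sh perform can be sketched as follows. The failure marker, log naming scheme and file layout here are assumptions for illustration, not the actual scripts:

```python
import os

FAILURE_MARKER = "Fatal Exception"  # assumed marker; check your actual output files

def failed_job_ids(output_dir):
    """Job indices whose 'job_<N>.out' log contains the failure marker."""
    failed = []
    for name in os.listdir(output_dir):
        # assume log names like 'job_<N>.out'; skip everything else
        if not (name.startswith("job_") and name.endswith(".out")):
            continue
        with open(os.path.join(output_dir, name)) as log:
            if FAILURE_MARKER in log.read():
                failed.append(int(name[len("job_"):-len(".out")]))
    return sorted(failed)

def rerun_arguments(arguments_file, failed):
    """Keep one argument line per failed job (arguments.txt has one line per input file)."""
    with open(arguments_file) as f:
        lines = f.read().splitlines()
    return [lines[i] for i in failed]
```

A new arguments.txt written from rerun_arguments(...) then drives the resubmission with condor_submit.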


The next step is to publish the condor-produced MINIAODs.

20to40 sample:



I think MicroAOD:


cd /afs/cern.ch/work/a/atishelm/CrabFlashgg/CMSSW_10_5_0/src/flashgg/MetaData/work
cd Publish

After setting appropriate paths and crab config params, simply run: crab submit -c crabConfig_Publish.py.


The next step is to create flashgg microAODs from the published dataset of MINIAODs. For me this is done in:

cd /afs/cern.ch/work/a/atishelm/CrabFlashgg/CMSSW_10_5_0/src/flashgg/MetaData/work

Use MicroAOD/test/PhotonID_microAODstd.py to save ECAL collections in MicroAODs. From the above directory:

./prepareCrabJobs.py -p ../../MicroAOD/test/PhotonID_microAODstd.py -C GJet40toInf -s GJet_40toInf.json 
cd GJet40toInf

An example crab config is here: CRAB_Template.txt.

Then simply run crab submit -c <crabconfig>.

NOTE: If you run into the error:

edm::FileInPath unable to find file flashgg/MicroAOD/data/TMVAClassification_BDTVtxId_SL_2016.xml anywhere in the search path.

You should first check that the metaconditions file does not specify this path, and follow the instructions on this page.

Make sure to also fix the paths of the



NOTE: If you fix the paths but simply resubmit the crab submission, you will run into an error, because in your crab project directory the file:


may still have the incorrect path with the "flashgg/MicroAOD/data" prefix. This can be changed by hand or a new crab project can be created with the properly updated metaconditions file.


CERN Website: http://atishelm.web.cern.ch/atishelm/

Github: https://github.com/atishelmanch

Indico: https://indico.cern.ch/user/89338/dashboard/

Twiki Pages I like



Topic attachments
Attachment | Size | Date | Who | Comment
CRAB_Template.txt | 2.3 K | 2020-01-24 | AbrahamTishelmanCharny |
JSON_Config_File.txt | 0.2 K | 2019-06-29 | AbrahamTishelmanCharny |
ggF_X250_WWgg_qqlnugg.py.txt | 3.3 K | 2019-12-20 | AbrahamTishelmanCharny | pythia/madgraph config file
ggF_X750_WWgg_qqlnugg.txt | 3.3 K | 2019-06-25 | AbrahamTishelmanCharny | HHWWgg_PythiaDecayFragment
Topic revision: r37 - 2020-02-04 - AbrahamTishelmanCharny