-- RachikSoualah - 2015-04-16

TopMCHowTo_Saved

Introduction

This page details how to handle MC11/MC12 requests from the point of view of the Top MC contacts: Andrea Helen Knue, Rachik Soualah

Preparing a request

Generally there are two types of request: those required by the working group as a whole (baseline and systematic samples) and those required by a specific analysis team. What follows applies equally to both cases.

JIRA portal for MC request

Go to the JIRA production website https://its.cern.ch/jira/browse/ATLMCPROD and click the "Create" button in the upper part of the page. Pick Issue = Task and click "next". You will then get a form where you have to fill in the following boxes:

  • Summary: pick a meaningful title for the request
  • Components: Top
  • Description: state what you want to produce, for which analysis, how many events, etc.
  • Label: TopMC

Careful: if you don't set the components and labels correctly, no notification will be sent to the Top MC contacts!

Discussion of request

In both cases, first ensure that the analysis team outlines their proposal within their sub-group and additionally (where possible) performs some particle-level validation with showered events. Please also ask the analysis team to upload their validation plots to the TopMCValidation twiki in the appropriate section. In the discussion you will need to obtain answers to the following questions:

  • Generator (ME+PS or standalone): additionally check which version is required and whether it has been included in a pCache.
  • UE tune: the default fPythia tune for ATLAS is AUET2B, but the top group have found better agreement with the Perugia2011C (CTEQ6L1) tune, so we ask whether they have any objections to switching. Since fHerwig is quickly being phased out, we ask whether it is really required (perhaps still as a systematic sample).
  • PDF: check which PDF is being used - the defaults can be found on McProductionCommonParametersMC12
  • Size of request: get an outline of how many samples, what statistics per sample and what proportion of AFII vs fullsim. Since AFII is now very good it is hard to justify fullsim, so always query a request that asks for 100% fullsim without a good reason.
  • 4-vectors: have any 4-vectors already been generated? If not, point them to PreparingLesHouchesEven (or, in future, a more Top-specific twiki that we can write). If 4-vectors are ready, ask them to copy the files to /tmp/ on an lxplus node and to give others read permission on that directory.
  • jobOptions: have they already written some jobOptions? If so, ask them for the jOs, since problems can often be spotted early on.
You should also check that the sample being requested has not already been requested by another physics group (for this, I'm afraid, you will end up having to use dq2-ls to search for similarly named samples; see the sketch below).
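For example (the wildcard patterns here are purely illustrative - adapt them to the process in question):

   dq2-ls "mc12*ttbar*"
   dq2-ls "mc11*singletop*"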

The next step is to assign as many DSIDs as needed if these are new samples. If they are extensions to existing samples or a request for a MC12 version of an existing MC11 dataset then no new DSIDs are required (see below for clarification).

Assigning a DSID

DSIDs need to be unique between MC11 and MC12. That is, if a DSID has been assigned in MC11/12 then it can only be reused in MC12/11 for the same process with the same configuration (UE tune, PDF, etc.). If any parameter other than the ECM changes, then we must assign a new DSID to avoid confusion.

The top group have been assigned DSIDs 110xxx and 117xxx (117xxx is almost all used up, whereas 110xxx still has a lot free). To check whether a DSID is free, perform the following steps:

 
   dq2-ls "mc11*.{DSID}.*/"
   dq2-ls "mc12*.{DSID}.*/"
   dq2-ls "group.phys-gener*.{DSID}.*/"
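If you need several consecutive free DSIDs, a minimal bash sketch such as the following can save some typing (assuming dq2 is already set up; the DSID range here is purely illustrative):

   for dsid in $(seq 110400 110410); do
       hits=$( { dq2-ls "mc11*.${dsid}.*/"; dq2-ls "mc12*.${dsid}.*/"; \
                 dq2-ls "group.phys-gener*.${dsid}.*/"; } | wc -l )
       [ "$hits" -eq 0 ] && echo "${dsid} appears to be free"   # no matching datasets found
   done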

Go to the TopMC afs area:

  • cd /afs/cern.ch/user/a/atltopmc/twikis/TopMC12twiki
  • cd topDSIDs
  • add the DSID, for example for MC12 to reservedDSIDs_MC12.txt
  • save the file
  • source cronTopDSIDtwiki.sh
  • after the script has run, check that you see your changes on TopMCDSIDs (the full sequence is sketched below)
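A minimal sketch of the whole sequence (the line format for reservedDSIDs_MC12.txt below is hypothetical - copy the format of the existing entries, and check where cronTopDSIDtwiki.sh actually resides):

   cd /afs/cern.ch/user/a/atltopmc/twikis/TopMC12twiki/topDSIDs
   echo "110400  <process, requester, date>" >> reservedDSIDs_MC12.txt   # hypothetical format
   source cronTopDSIDtwiki.sh   # regenerates the TopMCDSIDs twiki page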
You can now pass the DSID(s) back to the analysis teams so that they can then name the jobOptions and any 4-vector input datasets. For advice on naming jOs please refer to AtlasMcProductionMC12.

Uploading jobOptions

jobOptions need to be uploaded into the MC11JobOptions or MC12JobOptions SVN via JIRA. Here's how:

  • In case you are uploading jOs for MC11, make sure that the full name of the container is specified in the jO file; it's not enough to put a substring.
  • check that the jO are written in the requested format, following the procedure here: https://twiki.cern.ch/twiki/bin/view/AtlasProtected/TechnicalValidationRules
  • Go to the JIRA production website: https://its.cern.ch/jira/browse/ATLMCPROD and click the "Create" button in the upper part of the webpage. Pick Issue = Bug (yes, bug!) and then click "next".
  • Components: MCJobOptions
  • Labels: MC12JobOptions
  • Summary: "Registration of jobOptions for ..." (put the name of the process and whether it is MC12, MC15, etc.)
  • Main text: describe the jobOptions and ask that they are included in a new MC11JobOptions or MC12JobOptions tag. Remember that you need to know the tag when you come to submit the request for production.
If you are requesting MC12 jobOptions, remember that you also have to say that MC12JobOptions/evgeninputfiles.csv has to be updated, and of course you have to provide this information. This file contains the input dataset to be used for each DSID as well as its energy.

The team uploading the jobOptions are very good and often spot problems before they include them in a tag.

Uploading 4-vector inputs

If the request requires 4-vectors then they must be uploaded to the GRID so that they are available during production. This step is not required for generators with the new "auto" feature, such as MadGraph and Alpgen (Powheg to come), which generate the 4-vectors on the fly.

To upload inputs follow the procedure here:

  • Log on to an lxplus node (normally the node where the inputs have been copied to).
  • Setup dq2 as you would normally, but instead of doing voms-proxy-init -voms atlas you need to do voms-proxy-init -voms atlas:/atlas/phys-gener/Role=production to give you production rights:
   source /afs/cern.ch/atlas/offline/external/GRID/ddm/DQ2Clients/setup.sh
   voms-proxy-init -voms atlas:/atlas/phys-gener/Role=production

> mkdir ~/tmp
> cd ~/tmp
> cp phys-gener-reg.sh ./
> cp validate_ds2eos.py ./
  • Now you need to make a soft link from where the inputs have been copied to this temporary directory. For example, if the inputs have been copied to /tmp//group.phys-gener.powheg.110140.singletop_Wtch_DR_incl_7TeV.TXT.mc11_v1/ then do the following:
> cd ~/tmp
> ln -s /tmp//group.phys-gener.powheg.110140.singletop_Wtch_DR_incl_7TeV.TXT.mc11_v1/ ./
  • Repeat the previous step for all input datasets that you wish to register.
Before continuing it is best to check that the inputs are correctly formatted (a command sketch follows this list). The checks you should do are:
  • Check that the numbering of the files starts from 00001 and there are no holes in the numbering.
  • Check that all files have approximately the same size and that there are no zero-sized files.
  • Copy a random file from the dataset to a separate directory and untar it. Now check that there are two files: *.dat (the parameters / run card) and *.events (the LHEF events). Note that some generators do not require a *.dat file (e.g. Powheg), but all require a *.events file. Protos inputs additionally require soft links protos.dat -> *.dat and protos.events -> *.events to be included in the tarball.
  • Check that there are at least 5000 events in the *.events file (unless evgenConfig.minEvents has been set in the jOs); grep -c "<event>" *.events should help here. A word on the number of events per file: where a filter is applied or there is a matching efficiency from the PS, you must ensure that there are enough events in each input file that at least 5000 events (or evgenConfig.minEvents) can be produced per file. Better still, keep a buffer of ~10%, i.e. enough for, say, 5500 events.
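A minimal sketch of these checks, assuming the files follow the *_NNNNN.tar.gz numbering shown in the example above:

   cd /tmp//group.phys-gener.powheg.110140.singletop_Wtch_DR_incl_7TeV.TXT.mc11_v1/

   # contiguous numbering starting at 00001
   ls | grep -o '_[0-9]\{5\}\.' | sort > /tmp/found.txt
   seq -f '_%05g.' 1 "$(wc -l < /tmp/found.txt)" > /tmp/expected.txt
   diff /tmp/found.txt /tmp/expected.txt && echo "numbering OK"

   # no zero-sized files
   find . -maxdepth 1 -name '*.tar.gz' -size 0

   # untar one file elsewhere and inspect it: expect a *.dat and a *.events file
   mkdir -p ~/checkinput && cp "$(ls *.tar.gz | head -1)" ~/checkinput/
   ( cd ~/checkinput && tar xzf *.tar.gz && ls )

   # count the LHEF events (each event is wrapped in <event> ... </event> tags)
   grep -c "<event>" ~/checkinput/*.events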
All input datasets should be named according to AtlasMcProductionMC12. The general procedure is that you upload the inputs to a dataset with an additional suffix _iXY, where Y increases 1, 2, ... with each extension of the dataset and X increases with each re-registration attempt; i.e. _i23 is the second registration attempt of the 3rd extension. The first attempted dataset should read _i11, and a subsequent extension would then be _i12. All the datasets are then added to a container. (See PreparingLesHouchesEven for a better explanation.)

Now you can start to upload the inputs:

  • First edit phys-gener-reg.sh and replace all instances (if needed) of dset=${fbase}_i11 with dset=${fbase}_iXY, where X and Y are the numbers explained above. (This could be written better or turned into a command-line argument of the script, so feel free to adapt it.)
  • Next run the script:
> cd ~/tmp
> source phys-gener-reg.sh > runCommands.txt
  • This prints a list of commands to the file runCommands.txt rather than running them directly. The reason is that the dq2-put step can often have problems: if you don't check with the validate_ds2eos.py script that the inputs were successfully uploaded, you can end up freezing a dataset with missing/problematic files.
  • From the runCommands.txt file copy the dq2-put commands and execute them in the terminal. This step can take a while, so I often run it in a screen session.
  • Next copy the python validate_ds2eos.py commands and execute those. For each command you need to check that the output does not throw any errors and that you receive a message saying something like:
count=XX/XX/XX (where XX is the number of files in the dataset)
CERN location CERN-PROD_SCRATCHDISK checked
  • If there were no errors then you can safely execute the remaining commands in the runCommands.txt file. Most of the commands are self-explanatory.
  • It's generally good practice to pipe the output of the above commands to a text file so that, in case of problems further down the line, you can refer back to the log (see the sketch below).
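For example, a sketch of the bookkeeping (this assumes the dq2-put and validate_ds2eos.py calls appear verbatim in runCommands.txt, as produced by the script):

   cd ~/tmp
   source phys-gener-reg.sh > runCommands.txt

   grep "dq2-put" runCommands.txt > putCommands.sh
   bash putCommands.sh 2>&1 | tee put.log              # slow - consider a screen session

   grep "validate_ds2eos.py" runCommands.txt > checkCommands.sh
   bash checkCommands.sh 2>&1 | tee validate.log       # look for errors and the count=XX/XX/XX lines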

Validate the jobOptions/inputs

The previous two steps (uploading 4-vectors and jOs) can be done in parallel with this step. Since we ask the analysis teams to perform physics validation of the requests they are making, the validation at this point is to ensure that the evgen jobs will run smoothly in the production system.

For MC12 do the following:

  • Setup athena 17.2.X.Y (or any recent version):
export AtlasSetup=/afs/cern.ch/atlas/software/dist/AtlasSetup
alias asetup='source $AtlasSetup/scripts/asetup.sh'
asetup 17.2.X.Y,rel_0 --testarea=~/testarea/AtlasNightly-17.2.X.Y
mkdir ~/testarea/AtlasNightly-17.2.X.Y/testMC
cd ~/testarea/AtlasNightly-17.2.X.Y/testMC
  • Copy over a random input file (tar.gz) from the input dataset (e.g. group.phys-gener.mcatnlo409.117304.ttbar_8TeV_mass4.TXT.mc12_v1._00001.tar.gz)
  • Now run the following command to generate showered events, replacing the items runNumber (the DSID), jobConfig (the jOs), evgenJobOpts (the tag that the jOs were included in; available tags can be found in EvgenJobOpts), outputEVNTFile and inputGeneratorFile (the input 4-vector tar.gz file):
Generate_trf.py ecmEnergy=8000 runNumber=117304 firstEvent=1 randomSeed=1234567 jobConfig=MC12.117304.McAtNloJimmy_AUET2CT10_ttbar_noAllHad_mtt_1100_1300.py evgenJobOpts=MC12JobOpts-00-07-82_v5.tar.gz outputEVNTFile=/tmp//117304.EVGEN.pool.root inputGeneratorFile=group.phys-gener.mcatnlo409.117304.ttbar_8TeV_mass4.TXT.mc12_v1._00001.tar.gz | tee /tmp//117304.test.log

  • Check that the job completed successfully and ran as expected (see the quick log checks after this list).
  • Additionally, you could do some more validation yourself (generally we only do this for "in house" requests, e.g. new ttbar/single top baseline and systematic samples that we plan to use in the future). See the page by Daniel Hayden - https://www.pp.rhul.ac.uk/~hayden/MC12.html - for some useful validation tools. Some people also use Rivet to write validation routines.
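A couple of quick log checks for the example job above (the exact success message depends on the transform version, so also eyeball the end of the log):

   grep -cE "ERROR|FATAL" /tmp//117304.test.log   # ideally 0
   tail -n 5 /tmp//117304.test.log                # the transform should report exit code 0
   ls -l /tmp//117304.EVGEN.pool.root             # the output EVNT file should exist and be non-empty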
For MC11 the procedure is the same except that you use a different release (and change the ECM to 7000):

export AtlasSetup=/afs/cern.ch/atlas/software/dist/AtlasSetup
alias asetup='source $AtlasSetup/scripts/asetup.sh'
asetup 16.9.X.Y,rel_0 --testarea=~/testarea/AtlasNightly-16.9.X.Y
mkdir ~/testarea/AtlasNightly-16.9.X.Y/testMC
cd ~/testarea/AtlasNightly-16.9.X.Y/testMC

Preparing the approval spreadsheet

Once all of the above steps have been completed you need to create a spreadsheet detailing the request. PC and the production team use these spreadsheets to check what request you are making, and also during submission. The template can be found here: http://www-f9.ijs.si/~kersevan/request_template.xls

An example of a completed spreadsheet is also attached to the twiki. In general the fields "Simul", "Merge", "Digi", "Reco" and "Rec Merge" only need to be filled in when you require a specific set of tags to be used (e.g. you already produced one channel and now want to produce another with exactly the same configuration). The priority goes from 0 to 3, where 0 is the highest priority (you need a good reason for this).

Once you have uploaded the jOs and inputs and done some validation, attach the spreadsheet to the savannah ticket.

Requesting approval

At this point you can now mark the status of the request as "Ready for Approval" in the savannah ticket. At the same time include the Top Convenors in the cc field with a message detailing the reasons for requesting the sample and asking if they approve of the request.

Once approved by the Convenors you can send a mail to PC (Kevin E and Bill M) and the atlas-csc-prodman@cern.ch list. Include in cc the Convenors and possibly also the relevant sub-group convenors. In your mail you should give a brief description of the sample you are requesting, why it is needed (and who will benefit), and justify the size of the request and the AFII vs fullsim split.

Sometimes PC will come back with queries regarding the request, so it is best to discuss with the Convenors first (and possibly with the analysers) to form a response back to PC. Once approved, the production team takes over.

Post-approval tasks

  • Pass the production (either Panda or Prod) links to the requesters.
  • Add the new DSID to Production/TopMC12twiki/data/ , as well as to the monitoring script Production/TopMC12twiki/inProduction/tasks_in_progress.sh
  • When the AOD merge tasks are defined/done (use tasks_in_progress.sh to check), add the files (dq2-ls mc*.DSID.* | grep merge.AOD) to TopMC12twiki /output/D3PDProdList/MC*_D3PD_requests_p*.txt and inform the Top D3PD production managers (Douglas Benjamin, Farida Fassi) so that they can submit the D3PD jobs.
  • Set the status of the task in savannah to "Complete" and close it.
  • Make sure the new samples appear on the TWiki as further detailed below.

TopMC12 twiki pages

The TopMC12 twiki pages are arranged differently to the TopMC11 pages in that each set of samples has its own page. This way analysis teams can link to the pages relevant to their specific analysis. The TopMC12 page gives an overview of the MC12 production campaign, known problems with generators/samples and an explanation of the content of the MC sample tables.

All the tables on the TopMC12* twiki pages (e.g. TopMC12DiTopSamples) are generated automatically by python scripts which take as input the DSIDs of the samples. The scripts search AMI for the corresponding chain of EVNT, AOD and NTUP_TOP datasets; the evgen, sim, reco and D3PD tags can either be specified in the input file or (by default) internal logic retrieves the latest valid samples. This way you don't have to work out by hand which of, say, three samples with different tags is the "correct" one. The scripts then retrieve from AMI (via the pyAMI interface) the metadata for the dataset chain, including the event generation configuration, cross-section (* filter-eff), channel, NEvts, and links to jOs, production tasks and AMI datasets. In addition, the scripts can also be used to calculate kFactors, provided a reference xsec (and BR) are supplied.

The scripts reside in the TopMC12twiki /scripts directory, and the input files in TopMC12twiki /data.

The input files are structured as follows using ID_MC12a_DiTop_Baseline_FS.txt as an example:

DSID,evgenTag,simTag,recoTag,ntupTag
105200,e1513,,,p1312
110001,,,,
105204,,,,

For 105200 the tag e1513 is specified so that the script doesn't pick up an older (buggy) version. Additionally, the p1312 tag is specified since there was a problem with the default p1269 production. For 110001 and 105204 no tags are specified, and the scripts will use internal logic to search for the corresponding valid dataset chain. Note that there are separate input files for AFII and fullsim samples, since the scripts treat them slightly differently.

The two main python scripts are TopMC12_twikiTable_status.py and pyAMIHelpers.py. The first is the one you execute; it deals with reading the input files, defining lists of valid dataset tags (overriding the defaults if tags have been specified in the input file), finding the chain of datasets and printing the results to various files. The second contains a couple of classes which take the results of pyAMI queries, extract and modify the relevant metadata and convert it into a more comprehensible object.

To execute the main script you can do:

source /afs/cern.ch/atlas/software/tools/pyAMI/setup.sh
python TopMC12_twikiTable_status.py -f ID_MC12a_DiTop_Baseline_FS.txt

You can turn on debug output by adding the -d option. For input files with AFII in the name you must also add the -m option.
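For example, for an AFII input file (the file name here is hypothetical, following the FS naming pattern above):

   python TopMC12_twikiTable_status.py -f ID_MC12a_DiTop_Baseline_AFII.txt -m -d

The output from the script for the FS example above would be: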

  • ID_MC12a_DiTop_Baseline_FS_twiki.txt: the twiki table lines - 1 line per input DSID
  • ID_MC12a_DiTop_Baseline_FS_info.csv: a csv file with the metadata for the dataset chains
  • ID_MC12a_DiTop_Baseline_FS_missingEVNTs.csv: a csv file containing cases where no EVNT can be found for a DSID
  • ID_MC12a_DiTop_Baseline_FS_missingAOD.csv: a csv file containing cases where an EVNT exists but no AOD can be found for a DSID
  • ID_MC12a_DiTop_Baseline_FS_missingNTUP.csv: a csv file containing cases where an AOD exists but no NTUP_TOP can be found for a DSID
  • ID_MC12a_DiTop_Baseline_FS_D3PDnames.txt: a text file with the names of the D3PD datasets (the plan is to upload these next to the tables on the twiki so that users can download a list of the datasets they should use)
  • ID_MC12a_DiTop_Baseline_FS_evgenInfo.txt: a text file that gives evgen info (DSID, xsec, kFactor) as needed for TopDataPreparation (Samuel Calvet)
Finally, there is a text file containing data on the samples' xsec, filter efficiency (FE), etc. - sampledata.txt - which the scripts use to calculate kFactors and to add comments to the "Comments" field of the output twiki tables. The file is structured as follows:

# DSID, xsec(pb), FE, Ref xsec, BR, kFactor, "BriefName", "Channel", "Comments"

The xsec and FE fields can be used to override the default behaviour of extracting the xsec and FE from AMI, which is useful when the AMI metadata is wrong or unavailable. The Ref xsec and BR fields can be used to provide the theoretical (N)NLO xsec*BR. When present, the script will calculate the kFactor (LO->(N)NLO) for a sample according to:

kFactor = (Ref xsec * BR) / (AMI xsec * FE)
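
For example, with purely illustrative numbers: Ref xsec = 250 pb, BR = 0.5, AMI xsec = 100 pb and FE = 0.9 would give kFactor = (250 * 0.5) / (100 * 0.9) = 125 / 90 ≈ 1.39.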

Note that sometimes it is not appropriate to use the FE; rather, the kFactor should be calculated for the inclusive sample. The script also allows you to specify a kFactor that has been calculated elsewhere or cannot be calculated on a per-sample basis, e.g. for LO multi-leg generators. The last three string fields are used to add information to the output twiki tables: "BriefName" and "Channel" are appended to the text auto-extracted from AMI, and "Comments" can be used to provide additional information or highlight problems.

How to run the scripts

How to move a job to the MCFinished page:

You should check the TopMCProductionStatus page every morning. If a job is finished and D3PDs need to be requested, follow the instructions further down this page. If the D3PDs are done, you can move the tables as follows:

  • first check that the numbers of events in the D3PDs and AODs are identical! We have had several problems with the D3PD production and this really needs to be checked carefully!
  • if everything is ok, please click on the button: Move to TopMCProductionFinished
  • you will get some lines with commands that you have to copy
  • login to lxplus and go to: /afs/cern.ch/user/a/atltopmc/twikis/TopMC12twiki/inProduction
  • paste the text from above into the window
  • if no error message is given, check that the table was correctly moved from the TopMCProductionStatus page to the TopMCProductionFinished page
  • since the TopMCProductionFinished page is extremely long, it takes a very long time to load; we need some mechanism to shorten the list, e.g. only show the last 50 entries or so
  • send a mail/entry in the respective JIRA ticket for the request, so that the people that sent the request know that the samples are ready to be used!

How to make the tables

Several wrapper (bash) scripts are included to help with running the scripts. First, on lxplus, check out the TopMC12twiki repository from SVN and navigate to the scripts directory:

cd ~
svn co svn+ssh://svn.cern.ch/reps/atlasphys/Physics/Top/Software/Production/TopMC12twiki/ TopMC12twiki
cd TopMC12twiki/scripts

The generateTopMC12_twikiTable_status.sh script will set up pyAMI and create a unique directory:

dateTime=$(date +"%m_%d_%Y_%H_%M")
mkdir ~/tmp_$dateTime 

It then copies over the relevant scripts/inputs and runs the script for each input file (AFII and FS). Each instance of the script is run as a background process (the & at the end of the call) so that they run in parallel, and a sleep command (20 s) is used between instances so as not to create an I/O problem from many files being opened and written to in a short space of time. A sketch of this pattern is shown below.
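In essence the pattern is the following (an illustrative sketch - the actual file lists and options live in the wrapper script itself):

   for f in ID_MC12a_*_FS.txt; do
       python TopMC12_twikiTable_status.py -f "$f" &      # background instance
       sleep 20                                           # stagger the starts
   done
   for f in ID_MC12a_*_AFII.txt; do
       python TopMC12_twikiTable_status.py -f "$f" -m &
       sleep 20
   done
   wait   # block until all instances have finished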

Execute the script:

./generateTopMC12_twikiTable_status.sh

A lot of output is written to the screen, but it's hard to read since many processes are running. To check on the progress of the scripts you can use another script:

cd ~/tmp_$dateTime
~/TopMC12twiki/scripts/checkMC12OutputStatus.sh

which gives a list of the input files still being processed, along with how many samples per file have been processed vs how many are expected. Occasionally a script may fail to process all entries in an input file (normally due to AMI connection problems on the server side). You can either rerun the scripts for the affected input file by hand, or use the following script:

ln -s ~/TopMC12twiki/scripts/checkMC12OutputStatus.sh .
~/TopMC12twiki/scripts/combineMC12InfoCSV.sh

which will rerun any failed samples. If there are no failed samples and everything ran fine, this script will combine the csv outputs from all of the samples into one file, e.g. the line:

cat *MC12a*info.csv > MC12a_Status.csv

combines all the sample info files into one large csv file, which can be opened with Excel and manipulated to find problems in the processing of samples. The other lines are similar and let you immediately see cases where, for example, AODs exist but there are no D3PDs.

Finally, when all the scripts have run successfully you can copy the new output files to the TopMC12twiki /output folders and check them in:

cp ID*twiki.txt ~/TopMC12twiki/output/Twiki/
cp ID*info.csv ~/TopMC12twiki/output/MCstatus/
cp MC* ~/TopMC12twiki/output/MCstatus/
cp *EVGENnames.txt ~/TopMC12twiki/output/EVGENnames/
cp *TruthD3PDnames.txt ~/TopMC12twiki/output/TruthD3PDnames/
cp *_D3PDnames.txt ~/TopMC12twiki/output/D3PDnames/
cd ~/TopMC12twiki/ 
svn ci -m "updating output files..."

Creating the twiki pages

All the twiki tables generated by the script are in the TopMC12twiki /output/Twiki/ directory. Due to the sheer volume of twiki tables and pages, it would take a long time to insert the tables by hand. Instead, templates for each page are created and a script inserts the tables into placeholders in the templates. The templates and the script are in TopMC12twiki /twikiPages. For each page on TopMC12twiki you will see a corresponding MC12_XXX.txt file, which is the template for that page. In the template you will see the table headers followed by a string like xxxPOWHEGWWFSxxx. The insertTwikiTables.py script contains a dictionary/map between the filenames of the twiki tables and the xxxBLAHxxx names in the templates, and when executed it replaces each xxxBLAHxxx placeholder with the contents of the corresponding file. For each MC12_XXX.txt file you will get a corresponding MC12_XXX_tables.txt, which is the twiki page that you then need to upload.
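As an illustration, the substitution boils down to something like this bash sketch (the template and table file names here are hypothetical; the real mapping lives in insertTwikiTables.py):

   while IFS= read -r line; do
       if [ "$line" = "xxxPOWHEGWWFSxxx" ]; then
           cat ID_MC12a_PowhegWW_FS_twiki.txt   # splice the generated table in
       else
           printf '%s\n' "$line"
       fi
   done < MC12_WW.txt > MC12_WW_tables.txt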

There is no (easy) automated way of uploading the pages, and therefore it has to be done by hand which can be a bit time consuming.

Adding samples

To add a new set of samples, create a new input file in TopMC12twiki /data, following the conventions above for the list of samples, and run the scripts to generate the output files. If the new samples require a new twiki page, first create the MC12_XXX.txt template with placeholders in TopMC12twiki /twikiPages, then modify insertTwikiTables.py to connect the placeholders to the filename of the twiki table you wish to insert.

Remember to do svn add for new files and their corresponding output files.

Details on pyAMI scripts - automated fields in twiki tables

pyAMIHelpers.py uses the jobOptions name to create the "Channel" field in the twiki tables. Since no one conforms to any single naming convention, special rules have been written in the function getChannelFromJOs(jo) to extract something meaningful. The default is to split the jobOptions name by "_" and take the last two items, e.g. for MC12.xxx.A_B_C.py, "Channel = B C" (see the sketch below). The "Brief/Generator" field is a combination of the generator, UE tune and PDF. The generator name extracted from AMI is normalised, since the same generator appears in many formats (e.g. Herwig vs Jimmy, Pythia8 vs Pythiapp, etc.). The convention chosen is Pythia -> fPythia, Pythia8, Herwig/Jimmy -> fHerwig, Herwigpp -> Herwig++, etc.
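A one-line sketch of the default rule (the jO name here is hypothetical):

   jo="MC12.117304.McAtNloJimmy_AUET2CT10_ttbar_noAllHad.py"
   echo "${jo%.py}" | awk -F_ '{print "Channel =", $(NF-1), $NF}'   # -> Channel = ttbar noAllHad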

Generator contacts (for internal top requests)

  • Powheg/MC@NLO ttbar: Gia Khoriauli
  • Powheg/MC@NLO single top: Cunfeng Feng / Huaqiao Zhang
  • Alpgen: Alexander Grohsjean (previously: Thorsten Kuhl / James Ferrando)
  • Sherpa ttbar: Kiran Joshi
  • Sherpa W+jets: Paul Thompson (HSG5)
  • Protos: generally we ask the analysers to generate the 4-vectors, but Nuno Castro and Carlos Escobar can help if they have time
  • Acer: Liza Mijovic

You can also check this twiki: ATLAS MC generator responsibles

Find a version of a specific generator

If you want to find out which version of a generator was used for a certain sample, check in AMI which Atlas transformation package was used (click on "Details") and then look up on the following webpage which generator version belongs to which Atlas setup: http://test-jgarcian.web.cern.ch/test-jgarcian/Generators/

Useful tools and repositories

TopSoftware contains many useful tools and information collected over time from the contacts and liaisons.

A few scripts are available to help with the uploading and validation of inputs; see here.

Add new MC contacts to SVN TopProduction

Here is the link to add new MC contacts to SVN TopProduction: https://atlas-svnadmin.cern.ch

Request D3PD Production

First you have to check whether the D3PDs are already available. For NTUP_COMMON, the list can be found here:

https://svnweb.cern.ch/trac/atlasphys/browser/Physics/Top/Software/Production/TopMC12twiki/D3PDProdList/MC12_NTUP_COMMON_requests_p1575.txt

The AOD names you put into the request should be the merged ones with the complete rtag, not the recon ones (see the sketch below).
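For example, following the dq2-ls patterns used earlier ({DSID} is a placeholder):

   dq2-ls "mc12*.{DSID}.*merge.AOD*/" > aod_list.txt   # merged AODs only, not the recon ones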

Then you have to check which disks have enough space to store the new D3PDs. For that, go to https://rucio-ui.cern.ch/account_usage,

select user phys-gener and look for PHYS-TOP. Choose one site to store the new NTUP_TOP files (no specification of site is necessary for NTUP_COMMON!).

Then you have to open a new ticket here: https://twiki.cern.ch/twiki/bin/viewauth/AtlasProtected/DPDProductionTeam#via_DEFT_Database_Engine_For_Tas

And fill in the relevant information. For the sample tag to use, please check here:

https://twiki.cern.ch/twiki/bin/viewauth/AtlasProtected/TopMC12#CommonD3PD_Tags https://twiki.cern.ch/twiki/bin/viewauth/AtlasProtected/CommonNTUPTags

Make sure that you are logged in before filling in the fields. Furthermore, add the relevant people for the email alert.

Finally, check again that the information you filled in is correct and upload the list of the AOD files that should be processed. Then update the SVN list here:

https://svnweb.cern.ch/trac/atlasphys/browser/Physics/Top/Software/Production/TopMC12twiki/D3PDProdList/MC12_NTUP_COMMON_requests_p1575.txt

Useful links

AtlasProductionGroup

AtlasMcProductionMC12

PreparingLesHouchesEven

TopMC12

TopMC11

Borut's Production task page

Booking AMI DSIDs

Full overview over ATLAS MC prod

TopSystematics2012

Mailing lists to join:

All the relevant Top mailing lists (WG, Reco, xsec, properties etc)

atlas-csc-prodman@cern.ch

hn-atlas-generators@cern.ch


Major updates:
-- NeilCooperSmith - 22-Jan-2013

%RESPONSIBLE% RachikSoualah
%REVIEW% Never reviewed
