This page provides information on running on the Grid for top analysis with 8 TeV data and MC12 samples. A useful reference for setting up and using the different tools is the ATLASOfflineTutorial, and a very good page covering the commands needed to run on the Grid is here. If you get stuck you can write to ATLAS Distributed Computing (ADC) support, browse the Email Archive, or refer to the guidelines here.

The current consensus is to process data and MC differently, to optimise processing speed and conserve disk space:

  • data_8TeV: slim and skim the data samples on the Grid, download the skimmed/slimmed samples locally onto the atlasscratchdisk or atlaslocalgroupdisk, and run TopRootCore locally (on the batch farm) to make your mini ntuples
  • mc12: due to their size, it was decided to run TopRootCore on the Grid (so NO skimming or slimming is done on MC) and download the mini ntuples locally


The TopD3PDProduction page documents the D3PD samples for the Top Physics group. You can find datasets using AMI, and DQ2 can be used to search for and download them.

You can search for and download datasets using DQ2. An important point is that you cannot set up both DQ2 and athena at the same time; the best approach is to use a separate terminal for any DQ2 commands to avoid errors. To set up DQ2 you first need to set up the grid:

source /afs/

Then you need to give your local site; in our INFN case:


and finally the voms-proxy:

voms-proxy-init -voms atlas

Then you can search for samples using:

dq2-ls *<partofsamplename>*

and download them using

dq2-get <samplename> 
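Putting the two commands together, a minimal DQ2 session might look like the sketch below. The wildcard pattern and the dataset name are hypothetical placeholders (substitute your own), and each command is printed with echo so the sketch can be run anywhere as a dry run; drop the echo to execute for real.

```shell
#!/bin/bash
# Sketch of a DQ2 search-and-download session.
# PATTERN and DATASET are hypothetical placeholders.
PATTERN="data12_8TeV.*physics_Egamma*NTUP_TOP*"
DATASET="data12_8TeV.00200842.physics_Egamma.merge.NTUP_TOP.f431_m1111_p937/"

# Quote the wildcard pattern so the shell does not expand it locally.
# (echo makes this a dry run; remove it to run the real commands.)
echo dq2-ls "$PATTERN"
echo dq2-get "$DATASET"
```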


Data can be slimmed and skimmed on the GRID, and common slimming code is available. You need to make a text file with the list of branches you want to keep and, if you are doing skimming, a selection file with your selection. Then set up the grid, then athena, and then run a prun script.
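As a rough sketch of what such a prun submission could look like (not the group's exact script): the dataset names, the wrapper script slim.sh, and the file names branches.txt and selection.txt below are all hypothetical placeholders, and the prun call is echoed as a dry run.

```shell
#!/bin/bash
# Hypothetical sketch of a prun slim/skim submission (dry run via echo).
INDS="data12_8TeV.00200842.physics_Egamma.merge.NTUP_TOP.f431_m1111_p937/"
OUTDS="user.yourname.slim.00200842.v1"

# branches.txt lists the branches to keep; selection.txt holds the
# skimming selection; slim.sh is a placeholder wrapper that runs the
# common slimming code over the input files (%IN is expanded by prun
# to the comma-separated list of input files).
echo prun \
  --inDS="$INDS" \
  --outDS="$OUTDS" \
  --extFile=branches.txt,selection.txt \
  --outputs=slimmed.root \
  --exec="./slim.sh %IN"
```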


Detailed information on running TopRootCore on the GRID can be found here. Before submitting to the grid, you need to do the following setup, specifically in this order:
  • setup TopRootCore (RootCore/scripts/
  • setup the grid
  • setup athena

Some general comments:

  • Remember to give the output file in the prun script the same name as defined in TopRootCore - for example, if running the mini ntuple maker D3PD2MiniSL, the default output names are el.root,mu.root
  • The InputFileList.txt that contains all your samples can just be entered as normal, unlike with data, where the samples must be split up
  • The execute command for prun that I use when running D3PD2MiniSL is

--exec="ln -s \$ROOTCOREDIR/data .; ln -s \$ROOTCOREDIR/../TopD3PDAnalysis/control .; echo >> InputFilesList.txt; D3PD2MiniSL -f InputFilesList.txt -p control/settings.txt -mcType mc12"
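For context, the full prun command wrapping that --exec line might look like the sketch below. The input and output dataset names are hypothetical placeholders; --outputs matches the default el.root,mu.root names mentioned above, and --writeInputToTxt is assumed here to be what creates the InputFilesList.txt the exec line appends to. The call is echoed as a dry run.

```shell
#!/bin/bash
# Hypothetical sketch of the full prun submission (dry run via echo).
INDS="mc12_8TeV.117050.PowhegPythia_P2011C_ttbar.merge.NTUP_TOP.e1728_s1581_s1586_r3658_r3549_p1400/"
OUTDS="user.yourname.mini.117050.ttbar.v1"

echo prun \
  --inDS="$INDS" \
  --outDS="$OUTDS" \
  --outputs=el.root,mu.root \
  --useRootCore \
  --writeInputToTxt=IN:InputFilesList.txt \
  --exec="ln -s \$ROOTCOREDIR/data .; ln -s \$ROOTCOREDIR/../TopD3PDAnalysis/control .; echo >> InputFilesList.txt; D3PD2MiniSL -f InputFilesList.txt -p control/settings.txt -mcType mc12"
```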

Setting up the grid

First you need your grid certificate; information on how to obtain this, and general information on the grid, is here. Once you have your certificate, to be able to use the grid you need a setup script that looks something like the following:
source /afs/
export PATH=$PATH:/afs/
echo "Setting valid time = 90hrs"
voms-proxy-init -voms atlas -valid 90:00
I also set up Panda in the same script.

* Troubleshooting - if many jobs are failing on a specific site, you can add --excludedSite=SiteName to your prun job


Panda is used to submit jobs to the grid. You need to set up the grid and Panda, and then athena. To set up Panda you need to do the following (which I add to my grid setup script):

source /afs/

  • To submit jobs, prun is used
  • Bookkeeping of your analysis jobs is done using the pbook command
  • You can monitor your jobs online using the Panda monitor


The prun utility allows one to submit non-athena jobs to the grid via the Panda backend. There is a good twiki on How to submit ROOT/general jobs to Panda that you may find useful.
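A minimal non-athena prun job, here a hypothetical ROOT macro run over the input files, might look like the following sketch. The dataset names and macro.C are placeholders, and the call is echoed as a dry run.

```shell
#!/bin/bash
# Minimal hypothetical sketch of a plain ROOT prun job (dry run via echo).
INDS="data12_8TeV.00200842.physics_Egamma.merge.NTUP_TOP.f431_m1111_p937/"
OUTDS="user.yourname.test.v1"

# %IN is replaced by prun with the comma-separated list of input files;
# here it is written to a text file for the (placeholder) macro to read.
echo prun \
  --inDS="$INDS" \
  --outDS="$OUTDS" \
  --outputs=hist.root \
  --exec="echo %IN > input.txt; root -l -b -q macro.C"
```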


You can control your Panda analysis jobs using pbook. Once you have the grid set up, just type pbook. It will print out the jobs that are being updated:

$ pbook
INFO : Synchronizing local repository ...
INFO : Got 3 jobs to be updated
INFO : Updating JobID=374 ...
INFO : Updating JobID=373 ...
INFO : Updating JobID=372 ...

INFO : Synchronization Completed
INFO : Done

Start pBook 0.4.8

Here are a few useful commands you can then type:

To show all jobs:

show()

For help:

help()

To kill jobs:

kill(JobID)

To retry jobs (for example if they failed):

retry(JobID)

To exit pbook press Ctrl-D

Panda Monitor

Using the Panda monitor you can find your individual user page by searching for your name. This shows all the jobs you have running on the grid. To download the job output files you can either do a simple dq2-get or, if you need the files saved to an atlaslocalgroupdisk, you need to first register, and then you can request the data transfer using this page.

-- KateShaw - 23-Oct-2012

Topic revision: r14 - 2013-01-10 - KateShaw