-- MonicaVerducci - 11 Oct 2006

RunningCalibrationOnGrid

Introduction to Running an Athena Calibration Job on the Grid

Useful web pages with more information about how to run the full Athena chain on the Grid:

Source of the different setups

First of all you need a Grid account; see the web page https://twiki.cern.ch/twiki/bin/view/Atlas/WorkBookGetAccount for more information and for access to the VO registration page.

After that, before submitting any job to the Grid, you have to create a temporary proxy of your Grid certificate (you need to do this once per session, or after 12 hours have elapsed):

> grid-proxy-init

Your identity: /C=CH/O=CERN/OU=GRID/CN=Monica Verducci xxxx
Enter GRID pass phrase for this identity:
Creating proxy .................................. Done
Your proxy is valid until: Thu Oct 12 22:37:59 2006
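At any time you can check how much lifetime is left on the proxy with the standard Globus commands (a sketch; the exact output format may vary between middleware versions):

```shell
# Check the remaining lifetime of the current Grid proxy
grid-proxy-info -timeleft   # prints the seconds of validity remaining
grid-proxy-info             # full summary: identity, type, path, timeleft
```

If the proxy has expired, simply run grid-proxy-init again.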

You need access to a node that is running the User Interface (UI) middleware. You can use lxplus as a UI if you issue the command:

> source /afs/cern.ch/user/l/lcgatlas/scripts/ATLAS-setenv.sh  [or .csh]
With this command it is also possible to specify a particular service as Resource Broker, File Catalog, or BDII. Use the --help option for details.

Preparing an ATHENA script to submit a job on the Grid

Two scripts must be prepared to run ATHENA on the Grid (here the job is called job_grid):

1) job_grid.jdl: in the InputSandbox you put the job options to run Athena and the job_grid.sh script; in the OutputSandbox you declare the expected output of the job.
2) job_grid.sh: the usual commands to run Athena, plus some instructions to retrieve the data file (see the comments in the file).

An example of each is reported here:

1) job_grid.jdl:

#######################################  
############# job_grid.jdl #################
Executable = "job_grid.sh";
StdOutput = "job_grid.out";
StdError = "job_grid.err";
InputSandbox = {"job_grid.sh","test12.py"};
OutputSandbox = {"job_grid.out", "job_grid.err", "CLIDDBout.txt", "CalibrationNtuple.root"};
Requirements = Member("VO-atlas-offline-12.0.2", other.GlueHostApplicationSoftwareRunTimeEnvironment);
######################################
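Before actually submitting, it can be useful to check which Computing Elements match the Requirements expression in the JDL; a sketch using the standard EDG command:

```shell
# List the CEs that satisfy the Requirements in job_grid.jdl
edg-job-list-match --vo atlas job_grid.jdl
```

If no CE is listed, the requested software tag (here VO-atlas-offline-12.0.2) is not published anywhere and the job would stay pending.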

2) job_grid.sh:

#!/bin/bash
# Script to run job_grid on the Grid

source $VO_ATLAS_SW_DIR/software/12.0.2/setup.sh
source $SITEROOT/AtlasOffline/12.0.2/AtlasOfflineRunTime/cmt/setup.sh

lcg-cp -v --vo atlas lfn:/grid/atlas/verducci/G4_mu_100GeV_etaphi_1_2_100K.03.digi.root file://`pwd`/G4_mu_100GeV_etaphi_1_2_100K.03.digi.root
#### here the data file is copied from /grid/atlas/verducci/ into the working directory of the job (on the worker node, not your local machine)


ls -lt

# Insert the file into the FileCatalog
pool_insertFileToCatalog G4_mu_100GeV_etaphi_1_2_100K.03.digi.root
#athena.py /afs/cern.ch/atlas/maxidisk/d98/calibration/Reconstruction/RecExample/RecExCommon/RecExCommon-00-07-01/run/test12.py
athena.py test12.py 

## in the jobOptions the file is included as follows: PoolRDOInput = [ "./G4_mu_100GeV_etaphi_1_2_100K.03.digi.root" ]
## run athena with the test12.py jobOptions, in this case copied to the local directory from which you submit the job_grid.jdl job.

  • ATTENTION: in this case the file had already been copied to the SE (castor/grid at CERN); in general, if you produce files they must be registered on the Grid before a job can read them.
The file G4_mu_100GeV_etaphi_1_2_100K.03.digi.root was copied from a local scratch area to the castor/grid/atlas area available to every ATLAS user:

lcg-cr -v --vo atlas -d castorgrid.cern.ch -l /grid/atlas/verducci/G4_mu_100GeV_etaphi_1_2_100K.01.digi.root file://`pwd`/G4_mu_100GeV_etaphi_1_2_100K.01.digi.root

You can choose the SE (in this example castorgrid.cern.ch) by selecting one from the list obtained with:

> lcg-infosites --vo atlas se

In this way the file is on the Grid. For some data discovery tools, such as DQ2, this procedure is not enough: you should, in addition, register the file in the DQ2 catalog. But it is not mandatory!
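To verify that the file is really on the Grid, you can list its replicas starting from the logical file name; a sketch with the standard lcg utility:

```shell
# List the Grid replicas registered for the logical file name
lcg-lr --vo atlas lfn:/grid/atlas/verducci/G4_mu_100GeV_etaphi_1_2_100K.01.digi.root
```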

You can add to job_grid.sh, after the athena run, the same kind of command to copy the output file (CalibrationNtuple.root) directly to the SE.
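For example, the following lines could be appended to job_grid.sh after the athena.py step (the LFN path is only illustrative; choose your own area under /grid/atlas/):

```shell
# Copy the ntuple produced by Athena to the SE and register it on the Grid;
# lcg-cr uploads the local file and records it under the given logical name
lcg-cr -v --vo atlas -d castorgrid.cern.ch \
    -l lfn:/grid/atlas/verducci/CalibrationNtuple.root \
    file://`pwd`/CalibrationNtuple.root
```

This way the ntuple is available on the Grid even if it is too large for the OutputSandbox.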

Run an ATHENA Job

Now submit the job:

> edg-job-submit --vo atlas -o jobIDfile  job_grid.jdl

To check the status of the job:

> edg-job-status -i jobIDfile

If you have more than one job, you can choose from the output list:

1 : https://rb105.cern.ch:9000/IZesyhx4XC-5GkhvjimSTw
.................
7 : https://gdrb01.cern.ch:9000/xH17m1b4jH4KDYAkJgJ2lg
a : all
q : quit

Check the last one:

*************************************************************
BOOKKEEPING INFORMATION:

Status info for the Job : https://gdrb01.cern.ch:9000/xH17m1b4jH4KDYAkJgJ2lg
Current Status:     Running 
Status Reason:      Job successfully submitted to Globus
Destination:        dgce0.icepp.jp:2119/jobmanager-lcgpbs-atlas
reached on:         Thu Oct 12 08:43:14 2006
*************************************************************

After the job has finished, you should retrieve the sandbox, which contains the CalibrationNtuple.root file.

> edg-job-get-output -dir . -i jobIDfile

As before, after choosing the right job number, you should see on the screen:

Retrieving files from host: gdrb01.cern.ch ( for https://gdrb01.cern.ch:9000/xH17m1b4jH4KDYAkJgJ2lg )

*********************************************************************************
                        JOB GET OUTPUT OUTCOME

 Output sandbox files for the job:
 - https://gdrb01.cern.ch:9000/xH17m1b4jH4KDYAkJgJ2lg
 have been successfully retrieved and stored in the directory:
 /afs/cern.ch/atlas/testbeam/muontbh8/scratch03/verducci_xH17m1b4jH4KDYAkJgJ2lg

*********************************************************************************

In the local directory /afs/cern.ch/atlas/testbeam/muontbh8/scratch03/verducci_xH17m1b4jH4KDYAkJgJ2lg you now have all the output files, including the ROOT file, ready to be analysed.
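The retrieved ntuple can then be inspected locally with ROOT, for example:

```shell
# Open the calibration ntuple interactively in ROOT
root -l CalibrationNtuple.root
```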

Topic revision: r4 - 2006-10-12 - MonicaVerducci
 