SBAint1 User's Guide

General Tier3 Info

Some general information on Tier 3s can be found here: https://twiki.cern.ch/twiki/bin/viewauth/Atlas/Tier3gUsersGuide. Some of that information may be out of date or irrelevant (handled behind the scenes by Dean), but it is a good place to start if you have questions or problems not addressed here.

Setting Up Your Environment

Your home directory is /export/home/<username>. You do not need a cmthome directory or requirements file.

You should begin by creating a work area.

mkdir ~/testarea
cd ~/testarea

To set up your environment for release 16.0.0, issue these commands

setupATLAS
export ATLAS_TEST_AREA=~/testarea
localSetupGcc --gccVersion=gcc432_x86_64_slc5
source /opt/atlas/software/i686_slc5_gcc43_opt/16.0.0/cmtsite/setup.sh -tag=16.0.0,setup

Your environment is now (minimally) set up. You should be able to run the SBD3PDAnalysis package at this point, but not Athena.
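
If you want a quick sanity check (a suggestion only, not part of the official recipe), the following should confirm that the compiler and test area were picked up

# Optional sanity check -- not required for the setup itself
gcc --version            # should report a gcc 4.3 series compiler
echo $ATLAS_TEST_AREA    # should print /export/home/<username>/testarea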

SBD3PDAnalysis package

To check out the analysis package, simply do

svn co $SVNGRP/Institutes/StonyBrook/SBD3PDAnalysis/trunk

To run the package, do

cd trunk/python/pyAnalysis
python looper.py <looperConfigFile>

Athena

To run Athena, you must also issue the asetup command

cd ~/testarea
asetup 16.0.0 --testarea=$PWD
mkdir $TestArea

To test your setup, do

cd $TestArea
cmt co -r UserAnalysis-00-15-04 PhysicsAnalysis/AnalysisCommon/UserAnalysis
cd PhysicsAnalysis/AnalysisCommon/UserAnalysis/cmt
gmake
cd ../run
get_files HelloWorldOptions.py 
athena.py HelloWorldOptions.py

You will know you are set up correctly if your job finishes with

Py:Athena            INFO leaving with code 0: "successful run"

DQ2

WARNING: Always use DQ2 tools in a fresh environment

Note: You must have a valid grid certificate installed to use DQ2 tools. If you don't, follow these instructions: http://atlaswww.hep.anl.gov/asc/GettingGridCertificates.html

Starting from a fresh environment (if yours is not fresh, log out and log back in), begin with

setupATLAS
localSetupDQ2Client --skipConfirm
voms-proxy-init -voms atlas -valid 96:00

Now you are ready to use DQ2 tools. See https://twiki.cern.ch/twiki/bin/view/Atlas/DQ2ClientsHowTo for usage instructions.
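
As a quick illustration only (the dataset names below are placeholders; the TWiki page above is the authoritative reference), typical usage looks like

voms-proxy-info --all                 # check that your grid proxy is valid
dq2-ls "<dataset name pattern>*"      # search for datasets matching a pattern
dq2-ls -f <dataset name>              # list the files in a dataset
dq2-get <dataset name>                # download a dataset to the current directory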

XRootD

Putting data on the XRootD disk

See /export/home/atlasadmin/copy_data_to_xrootd.txt
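
That file is the authoritative recipe. As a general illustration only (the destination path below is hypothetical), copying a single file onto an XRootD server typically uses xrdcp

# Illustration only -- follow copy_data_to_xrootd.txt for the real procedure
xrdcp myFile.root root://sbahead.physics.sunysb.edu//<destination path>/myFile.root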

Removing data from the XRootD disk

See /export/home/atlasadmin/clean_xrootd_area.txt

Condor

The batch system on SBAint1 is Condor. Currently, there are 228 batch slots.

Once you have an interactive job running successfully, you need two additional files to run it as a batch job: a job description file and an executable shell script. You should also create a directory for each individual job to run in (outdir.0, outdir.1, ...).
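
For example, the output directories for 10 jobs could be created ahead of time with a loop like this (a sketch; where you keep your condor area is up to you)

cd <path to condor area>
for i in $(seq 0 9); do mkdir -p outdir.$i; done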

The job description file should look like this

Universe      = vanilla
Notification  = Always
Executable    = <path to condor area>/<shell script>
Arguments     = $(Process)
GetEnv        = True
Output        = <path to condor area>/outdir.$(Process)/log.$(Process)
Error         = <path to condor area>/outdir.$(Process)/err.$(Process)
Log           = <path to condor area>/outdir.$(Process)/myJob.$(Process)

Queue <number of batch jobs desired>

The <shell script> should perform the actual execution of the job, and should be executable by any user. It must also contain the environment setup discussed above. A minimal example would be

#!/bin/bash
<environment setup>
cd ~/testarea/outdir.$1
<instructions to run job>

<instructions to run job> could be as simple as 'athena.py HelloWorldOptions.py'. '$1' is a placeholder for the job number, so in this example job 0 would run in outdir.0, job 1 in outdir.1, etc.
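
Putting the pieces together, a concrete script for the Athena HelloWorld example might look like the sketch below. This is only a sketch: it assumes the release 16.0.0 setup from earlier in this guide works unchanged inside a batch shell, and that the output directories live under ~/testarea.

#!/bin/bash
# Environment setup, as described earlier in this guide
setupATLAS
export ATLAS_TEST_AREA=~/testarea
localSetupGcc --gccVersion=gcc432_x86_64_slc5
source /opt/atlas/software/i686_slc5_gcc43_opt/16.0.0/cmtsite/setup.sh -tag=16.0.0,setup
cd ~/testarea
asetup 16.0.0 --testarea=$PWD

# Run job number $1 in its own output directory
cd ~/testarea/outdir.$1
get_files HelloWorldOptions.py
athena.py HelloWorldOptions.py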

To submit your job, simply enter

condor_submit <job description file>

You can check the status of all condor jobs with

condor_q

To kill a running job, use

condor_rm <ID number>

If you are accessing data from one of the XRootD areas, there is an additional complication: the path to the XRootD area is different for interactive and batch jobs. To convert an 'interactive path' to a 'batch path', replace '/sbahead/' with 'root://sbahead.physics.sunysb.edu//'. The same goes for data on the sbanfs server.
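
For example (the directory and file names here are hypothetical), the same file would be referred to as

# Interactive path:
/sbahead/<some directory>/<some file>.root

# Equivalent batch path:
root://sbahead.physics.sunysb.edu//<some directory>/<some file>.root

# One way to do the conversion in a shell script:
batchPath=$(echo "$interactivePath" | sed 's|^/sbahead/|root://sbahead.physics.sunysb.edu//|')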

This also means that glob will not work on files in the XRootD area from within a batch job. As a workaround, ~jstupak/public/dataFiles.txt lists every data file on the sbahead and sbanfs servers (with the correct 'batch paths'). From within a batch job, you can scan through this file, searching for a particular dataset name, to find the locations of all of its files.

Algorithmically, it would go something like this:

from fnmatch import fnmatchcase

# Wildcard pattern for the dataset of interest
DSName="*group09.phys-exotics.mc09_7TeV.105985.WW_Herwig.merge.AOD.e521_s765_s767_r1302_r1306.WZphys.100612.01_D3PD*"

# Collect the 'batch path' of every file belonging to that dataset
inputs=[]
with open("/export/home/jstupak/public/dataFiles.txt") as DSLookupFile:
    for line in DSLookupFile:
        if fnmatchcase(line,DSName):
            inputs.append(line.rstrip('\n'))

Whenever new data is added to the XRootD area, dataFiles.txt must be regenerated. For now, that means asking John Stupak to regenerate it. This should be automated in the near future.
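
If the list ever needs to be regenerated by hand, a rough sketch would be the following. This assumes the interactive areas are mounted at /sbahead and /sbanfs, that the sbanfs path is converted the same way as the sbahead one, and that you have write access to the output file; the official procedure may differ.

# Rough sketch only -- not the official regeneration procedure
find /sbahead /sbanfs -type f \
  | sed -e 's|^/sbahead/|root://sbahead.physics.sunysb.edu//|' \
        -e 's|^/sbanfs/|root://sbanfs.physics.sunysb.edu//|' \
  > dataFiles.txt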
