ATLAS Tier3 Cluster Usage

Initial Login

You'll be logging in to login.tier3-atlas.uta.edu. This can only be done from a UTA machine -- please use VPN or a gateway machine if you are off campus.

Your username will match your lxplus username. Your password will be the one you use on the UTA HEP machines -- please change this as soon as you can!
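
For example, from a UTA machine or over VPN (the username below is just a placeholder -- substitute your own lxplus username):

ssh <your_lxplus_username>@login.tier3-atlas.uta.edu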

When you log in to the cluster for the first time, you'll be prompted to create SSH keys automatically. Just hit Enter at each prompt until you get to a shell prompt.

Please use the passwd command to change your password to something new and different. Since this system is not on the campus network (with its inconveniences and protections), we need to be extra careful.

Setting Up ATLAS Software

Each of you has a few settings, run at login from your .bashrc or .tcshrc, that set up access to the Athena installations on the machines you are using. The most important is the setupATLAS command: setupATLAS is an alias for /cluster/app/ATLASLocalRootBase/user/atlasLocalSetup.sh, defined in your .bashrc. It is also run automatically at login, since there's little you can do on the cluster without it.
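
For reference, the relevant lines in your .bashrc look roughly like this (the same definitions reused in the batch-script header later on this page):

export ATLAS_LOCAL_ROOT_BASE='/cluster/app/ATLASLocalRootBase'
alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh'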

There are a few subcommands in the setupATLAS command. These are shown each time you run setupATLAS unless you use the --quiet option, as is done in the initial session setup.

...Type localSetupDQ2Client to use DQ2 Client
...Type localSetupGanga to use Ganga
...Type localSetupGcc to use alternate gcc
...Type localSetupGLite to use GLite
...Type localSetupPacman to use Pacman
...Type localSetupPandaClient to use Panda Client
...Type localSetupROOT to setup (standalone) ROOT
...Type saveSnapshot [--help] to save your settings
...Type showVersions to show versions of installed software
...Type changeASetup [--help] to change asetup configuration
...Type setupDBRelease to use an alternate DBRelease
...Type diagnostics for diagnostic tools

Most of these are self-explanatory. You see options to let you use pathena, ganga, dq2, ROOT, pacman and others. The commands are enabled after running setupATLAS.
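
For example, a fresh session might start like this (showVersions is one of the subcommands in the menu above):

setupATLAS --quiet
showVersions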

Setting up Athena

Once you have run setupATLAS, you can also run asetup as an automated Athena setup tool. asetup -h gives you a complete options list, but for the most part you'll just run it as

asetup 15.6.3.8

using whichever release you desire. Available releases are listed in the folder

/opt/atlas/software/i686-slc5-gcc43-opt/setupScripts/

as a series of setup scripts, which can also be run without the setupATLAS infrastructure (of interest for batch processing, for example).
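
A sketch of sourcing one of those scripts directly (the exact file name below is a guess at the naming convention; check the directory listing for the real names):

source /opt/atlas/software/i686-slc5-gcc43-opt/setupScripts/setupAtlasProduction_15.6.3.8.sh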

Since Athena is delivered via cvmfs, the first setup and run of a release can be time-consuming -- files you have never used before are being fetched and cached from a central server at BNL or CERN. Once fetched and cached, the load time will be much smaller, no matter which compute node is doing the work. The cache is local to the Tier3 and is shared among the compute nodes over a fast local network.

Go ahead and create an appropriate working directory (like ~/athena/AtlasProduction-15.6.3.8), and then:

cd ~/athena/AtlasProduction-15.6.3.8
mkdir run
cd run
get_files -jo HelloWorldOptions.py
athena HelloWorldOptions.py

You should (after a pause) get the Hello World sequence from Athena, and it will run much faster once everything has been cached. If something goes unused for a long time it will eventually fall out of the cache and have to be downloaded again -- but the cache is large enough to hold several entire releases, plus the separately cached DB transactions.

Setting up ROOT

localSetupROOT (easy enough with tab completion) will turn on ROOT 5.22 by default. This is also the version installed with Athena. If you need a newer version, check back here when I have figured out how to use the newer version of ROOT from the packages, or after I have given up and built my own ROOT installations on these machines. (Soon.)
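
A typical standalone ROOT session would then start like:

setupATLAS --quiet
localSetupROOT
root -l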

Setting up PyRoot

With setupATLAS done, run localSetupPython as well to get a compatible Python version in place. Then run PyROOT as you are accustomed to.
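
A minimal check that PyROOT is working (Python 2 print syntax, matching the releases of that era):

setupATLAS --quiet
localSetupROOT
localSetupPython
python -c "import ROOT; print ROOT.gROOT.GetVersion()"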

Submitting Torque Jobs

You will not (at least initially) need to log in to any machine other than login -- you'll be able to use all the other machines through the PBS/Torque batch system. You can even use them interactively with the command:

qsub -IV

where the -I option makes it an interactive session, and the -V brings along all environment variables from your session.

All the compute nodes are up and ready to accept PBS batch jobs. Basics on Torque usage can be found here.
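
The day-to-day Torque commands are along these lines (myjob.sh and <jobid> are placeholders):

qsub myjob.sh      # submit a batch script
qstat -u $USER     # list your queued and running jobs
qdel <jobid>       # remove a job from the queue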

Using Athena through the asetup command in PBS can be tricky. Use this header for all submit scripts involving Athena. This specific one sets up Athena 17.0.3.

#! /usr/bin/env bash

# Release to set up
athVersion='17.0.3'
# Aliases are not expanded in non-interactive shells unless this is set
shopt -s expand_aliases

export ATLAS_LOCAL_ROOT_BASE='/cluster/app/ATLASLocalRootBase'
alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh'

setupATLAS -q

# -p: don't fail if the directory already exists
mkdir -p $HOME/athena/AtlasOffline-$athVersion
alias asetup='source $AtlasSetup/scripts/asetup.sh'
asetup $athVersion
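
After that header you can append your actual Athena work. For instance, a sketch reusing the HelloWorld example from above:

cd $HOME/athena/AtlasOffline-$athVersion
mkdir -p run && cd run
get_files -jo HelloWorldOptions.py
athena HelloWorldOptions.py

Save the whole thing as a script (say athena_hello.sh -- the name is up to you), submit it with qsub athena_hello.sh, and check on it with qstat -u $USER.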

Using Xrootd

xrootd has been installed across the cluster, and it all points to storage on the storage server.

Unless you are doing a very specific set of actions, you should stay logged out of the storage server -- it is not a compute node, interactive node, or scratch area. I am leaving it open for now for reasons of convenience. If the machine is misused, even unintentionally, I will close it to most interactive use. So be sure you know what you're doing before starting anything there.

Files on storage-1-8 (hereafter XRD) are placed in a 48 TB pair of RAID shelves. xrootd provides extremely fast access to all of these files.

An example of using the copy command xrdcp is as follows:

xrdcp testfile xroot://redirector-1-18.local:1094//atlas/local/$USER/testfile     # copy a local file into your xrootd user area

xrdcp xroot://redirector-1-18.local:1094//atlas/local/$USER/testfile testfile1    # copy it back out to a new local file

The main xrootd cache is in place and working. To see how full it is, use:

ssh storage-1-8 "df -h /xrd_cache_*"

Listing files can be done on the login node (or any other) as, for example:

ls /xrootdfs/atlas/dq2/mc10_7TeV

The /xrootdfs/ will need to be replaced with root://redirector-1-18.local:1094// for any file operations in ROOT or SPyRoot or xrdcp.

For copies in the shell,

xrdcp root://redirector-1-18.local:1094//atlas/dq2/mc10_7TeV/NTUP_SUSY/e598_s933_s946_r1831_r1700_p403/mc10_7TeV.105200.T1_McAtNlo_Jimmy.merge.NTUP_SUSY.e598_s933_s946_r1831_r1700_p403_tid254878_00/NTUP_SUSY.254878._000001.root.1 .

For file access inside ROOT, for example:

TFile *f = TFile::Open("root://redirector-1-18.local:1094//atlas/dq2/mc10_7TeV/NTUP_SUSY/e598_s933_s946_r1831_r1700_p403/mc10_7TeV.105200.T1_McAtNlo_Jimmy.merge.NTUP_SUSY.e598_s933_s946_r1831_r1700_p403_tid254878_00/NTUP_SUSY.254878._000001.root.1")

Inside PyROOT:

import ROOT
f = ROOT.TFile.Open('root://redirector-1-18.local:1094//atlas/dq2/mc10_7TeV/NTUP_SUSY/e598_s933_s946_r1831_r1700_p403/mc10_7TeV.105200.T1_McAtNlo_Jimmy.merge.NTUP_SUSY.e598_s933_s946_r1831_r1700_p403_tid254878_00/NTUP_SUSY.254878._000001.root.1')

Copying files into the xrootd store from DQ2 is still something you should request of Alden, for the moment... but permissions will be handed out more generally soon as we get comfortable with training and file management. If you haven't been told you have permission, you probably don't. If it's after March 2011, you'll probably get permission immediately.

Copying your own files in via scp and then doing an xrdcp into your user space (/atlas/local/$USER/...) is completely up to you, and you can feel free to do so within reason.
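
A sketch of that scp + xrdcp workflow, with placeholder file names:

# from your own machine: copy the file to the login node
scp myfile.root <username>@login.tier3-atlas.uta.edu:
# then, on the login node: push it into your xrootd user area
xrdcp myfile.root xroot://redirector-1-18.local:1094//atlas/local/$USER/myfile.root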

The technique for pulling datasets from DQ2 into the xrootd store is as follows:

On the login node, set up dq2:

setupATLAS

localSetupDQ2Client --quiet

voms-proxy-init --voms atlas --valid 96:00

Then select the dataset names (can include wildcards, as you'll see in the example), and submit the following command:

dq2-get -T 6,10 -Y -L SWT2_CPB_LOCALGROUPDISK -q FTS \
  -o https://fts.usatlas.bnl.gov:8443/glite-data-transfer-fts/services/FileTransfer \
  -S gsiftp://gw01.tier3-atlas.uta.edu/atlas \
  mc10_7TeV.107*.AlpgenJimmyZ*Np*_pt20.merge.NTUP_SUSY.e600_s933_s946_r1831_r1700_p403/

This command will TAKE A LONG TIME in many cases -- be sure you're on a machine that doesn't have to move or shut down for a while. If the command fails to find datasets, make sure the / is attached to the end of the container name.

The files will be downloaded to the T3 storage area and when complete, the command will exit cleanly.
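
Since these transfers can run for hours, one option (a sketch, not a requirement) is to wrap the dq2-get command above in a small script -- call it get_datasets.sh, a name of your choosing -- and run it under nohup so it survives a dropped connection:

nohup ./get_datasets.sh > dq2get.log 2>&1 &
tail -f dq2get.log    # watch progress; Ctrl-C stops the tail, not the transfer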

Using DQ2

localSetupDQ2Client will set up DQ2. Since you have already copied in your ~/.globus folder, just type

voms-proxy-init -voms atlas

and you are ready to go. If you want to put the files you are getting into xrootd, please read the xrootd section.
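
For example, to list the files in a container before copying anything (the dataset name is taken from the xrootd example above):

setupATLAS --quiet
localSetupDQ2Client
voms-proxy-init -voms atlas
dq2-ls -f mc10_7TeV.105200.T1_McAtNlo_Jimmy.merge.NTUP_SUSY.e598_s933_s946_r1831_r1700_p403/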

-- AldenStradling - 01-Oct-2010
