TWiki > Main Web > WebPreferences > IHEPDESYComputing (revision 16)

1. DESY computer resources

DESY hosts

Hosts working with Qsub: ~

Hosts working with HTCondor: ~

submit HTCondor jobs:

Since Dec. 2017, the DESY local computing system has been migrated to HTCondor.

The following table is a cheat sheet mapping Qsub (SGE) commands to HTCondor:

QSub (SGE)      HTCondor
qsub 123.job    echo "executable = 123.job" > 123.sub
                condor_submit 123.sub   (more details below)
qstat           condor_q
qdel            condor_rm
                condor_rm [username]: kill all your jobs
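
As a minimal worked example of the first row (123.job is the table's placeholder name), the one-line qsub submission becomes a small submit file plus condor_submit. This is a sketch; the extra output/error/log names are assumptions:

```shell
# Minimal HTCondor equivalent of "qsub 123.job" (123.job is a placeholder):
cat > 123.sub <<'EOF'
executable = 123.job
universe   = vanilla
output     = 123.out
error      = 123.err
log        = 123.log
queue
EOF
# then: condor_submit 123.sub ; monitor with condor_q ; remove with condor_rm
```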

When submitting jobs, the submission command has to be changed from

qsub -q short.q,default.q  -l os=sld6 -l site=hh -l h_vmem=8G -o ${base_dir}/out/${jobname}_"$ijob".out  -e  ${base_dir}/err/${jobname}_"$ijob".err  ${jobfilename}
to:


rm -fr  ${submitfilename}
echo "executable     = ${jobfilename}"  >  ${submitfilename}
echo "should_transfer_files   = Yes" >> ${submitfilename}
echo "when_to_transfer_output = ON_EXIT" >> ${submitfilename}
echo "input          = ${base_dir}/input/${jobname}_"$ijob".txt"  >> ${submitfilename}
echo "output         = ${base_dir}/out/${jobname}_"$ijob".out2" >> ${submitfilename}
echo "error          = ${base_dir}/err/${jobname}_"$ijob".err2" >> ${submitfilename}
echo "log            = ${base_dir}/log/${jobname}_"$ijob".log2" >> ${submitfilename}
echo "+RequestRuntime = 43200" >> ${submitfilename}
echo "universe       = vanilla" >> ${submitfilename}
echo "RequestMemory   = 4G" >> ${submitfilename}
echo "queue" >> ${submitfilename}
condor_submit ${submitfilename}
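
A possible simplification (a sketch, not from this page): instead of generating one submit file per sub-job, a single submit file can use HTCondor's $(Process) macro together with "queue N". The variable names follow the recipe above; base_dir, jobname, and njobs below are placeholder assumptions:

```shell
# One submit file for all sub-jobs via $(Process); names follow the recipe above.
base_dir=./demo                          # assumed analysis directory (placeholder)
jobname=myjob                            # placeholder job name
jobfilename=${base_dir}/${jobname}.sh    # assumed job script
njobs=10
submitfilename=${base_dir}/${jobname}.sub
mkdir -p ${base_dir}/out ${base_dir}/err ${base_dir}/log
cat > ${submitfilename} <<EOF
executable      = ${jobfilename}
arguments       = \$(Process)
output          = ${base_dir}/out/${jobname}_\$(Process).out2
error           = ${base_dir}/err/${jobname}_\$(Process).err2
log             = ${base_dir}/log/${jobname}.log
+RequestRuntime = 43200
universe        = vanilla
request_memory  = 4 GB
queue ${njobs}
EOF
# then: condor_submit ${submitfilename}
```

Each sub-job receives its index as $(Process), so the per-job input file can be derived inside the job script instead of baked into the submit file.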

The example scripts are in this path:

DESY HTCondor queue :

As of 14 Feb. 2018 there are ~2500 CPUs; HTCondor distributes CPUs among jobs by itself.

More and more CPUs are being moved to HTCondor, so please always check.

As of 14 Feb. 2018 there are still ~4000 CPUs for Qsub (SGE); please always check.

DESY disks

Login to check AFS and NAF quota

The AFS quota is usually 16 GB; the NAF quota is 1 TB (default).

DESY Tier2

The quota on DESY-HH_LOCALGROUPDISK will be 20 TB (default) after applying for the atlas/de VO.

The transferred files on this disk can be found under the /pnfs path.

2. IHEP computer resources

IHEP hosts:

SLC6 hosts: ~ (IHEP inner hosts)

SLC5 hosts:

submit HTCondor jobs:

before doing anything, use the command:

export PATH=/afs/$PATH  # SLC5
export PATH=/afs/$PATH  # SLC6

Then use the following cheat table:

Qsub    HTCondor
qsub    hep_sub
qstat   hep_q
qdel    hep_rm

hep_rm -a: kill all your jobs

hep_q -u yourIhepUserName: check the status of your jobs

Normally it is very similar to using Qsub; we just need the command:

hep_sub -o out -e err -mem 4800 -g atlas

Other optional arguments:
                        # set the system version of the resource you want.
-prio PRIORITY, --priority PRIORITY
                        # set the internal priority among your own jobs.
                        # set the total number of cores required by your job.
The example scripts are in this path (do not submit in /workfs or /afs):

IHEP HTCondor queue :

There is one universal queue sharing 4300 CPUs; the allowed run time is unlimited, and HTCondor distributes CPUs among jobs by itself.

IHEP disks :

| disk name | disk path | size | backup | can submit HTCondor jobs |
| afs | /afs/ | 500 MB | unknown | no |
| workfs | /workfs/atlas/ | 10 GB | yes | no |
| scratchfs | /scratchfs/atlas/ | 500 GB | deleted every 2 weeks | yes |
| publicfs atlasnew | /publicfs/atlas/atlasnew/higgs/hgg/ | 320 TB shared | no | yes |
| publicfs codesbackup | /publicfs/atlas/codesbackup/ | 320 TB shared | yes | yes |

IHEP Tier2

The quota on BEIJING-LCG2_LOCALGROUPDISK will be 8 TB after applying for the atlas/cn VO.

Files on that disk (under /dpm) cannot be read directly; list them with "dpns-ls -l" and download them to Tier3 with "rfcp".

3. ATLAS environment

Setup ATLAS environment:

export ATLAS_LOCAL_ROOT_BASE=/cvmfs/
alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/'
Then, setup Root:
lsetup root

Certificate, ATLAS VO, Rucio, AMI

Certificate: "New Grid User Certificate" in
Setup certificate on DESY/IHEP/Lxplus:

openssl pkcs12 -in usercert.p12 -nokeys -clcerts -out ~/.globus/usercert.pem
openssl pkcs12 -in usercert.p12 -nocerts  -out ~/.globus/userkey.pem
chmod 400 ~/.globus/userkey.pem
chmod 444 ~/.globus/usercert.pem
voms-proxy-init -voms atlas
ATLAS VO page (request here):

Rucio transfer request page:
Rucio commands:

voms-proxy-init -voms atlas
lsetup rucio
rucio list-dids DATASETNAME
rucio list-file-replicas DATASETNAME
rucio list-rules --file FILETYPE(e.g. data16_13TeV):FILEPATH

AMI page:
pyAMI commands:

voms-proxy-init -voms atlas
lsetup pyami
ami show dataset info  DATASETNAME

How to read Ntuple from CERN eos disk (see

voms-proxy-init -voms atlas
lsetup xrootd rucio root
root -l
root [0] TFile *mytuple =TFile::Open("root://*/myntuple.root")

When submitting jobs in condor, one should put the following in the job file:

echo "#!/bin/expect" >
echo "spawn voms-proxy-init -voms atlas" >>
echo "expect 'Enter GRID pass phrase for this identity:'"  >>
echo "send 'your_proxy_passwd\r'" >>
echo "interact" >>
lsetup xrootd rucio root
and in the condor submit file, type
transfer_input_files  = ${PATHTOPROXY}
where $PATHTOPROXY defaults to $X509_USER_PROXY on the host, but the proxy needs to be copied to a readable path first, e.g. cp /tmp/x509up_u2xxxx ${HOME}/nfs/proxyfile/
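
The proxy-copy step above can be sketched as follows. The shared directory ${HOME}/nfs/proxyfile is the page's own example; the default proxy location /tmp/x509up_u$(id -u) and the submit-file name job.sub are assumptions:

```shell
# Copy the grid proxy to a path the job can read, then reference it in the
# submit file (run voms-proxy-init -voms atlas first on a real host).
PROXY=/tmp/x509up_u$(id -u)        # default $X509_USER_PROXY location (assumed)
PROXYDIR=${HOME}/nfs/proxyfile     # readable shared directory (page's example)
mkdir -p ${PROXYDIR}
if [ -f ${PROXY} ]; then cp ${PROXY} ${PROXYDIR}/ ; fi   # copy if a proxy exists
echo "transfer_input_files = ${PROXYDIR}/$(basename ${PROXY})" >> job.sub
```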

ATLAS Configs/Tools

Truth Type and Truth origin :
Electron / Photon ID config:
Binary location of e/gamma ID:
e/gamma Author definitions:
e/gamma Enum (ConversionType, EgammaType, ShowerShapeType):
Tracking Enum :
Isolation working points twiki:

Lowest un-prescaled trigger:
Cross-Section and Branching-Ratio page:
Current Luminosity:
GRL page:
Lumi-calculator:, select "Create output plots and ntuples (--plots)" to create PRW input

Athena nightly builds:
MC JobOptions SVN:

ATLAS Twiki and Tutorials

PRW config page:
MC Reconstruction twiki: and
MC production twiki:
HGamma twiki:
MxAOD twiki:

Creating Event-Loop package:
Analysis Base tutorial:
ATLAS tutorial meetings:
ATLAS software tutorial twiki:
PubCome Latex twiki:

4. Software


Get an SSH key here :
Add the SSH key here :
Start a new project here :
Basic commands here :
Some basic commands:

git clone SSH_PATH
git checkout BRANCH_OR_TAG
git pull
git add --all
git commit -am "something"
git push origin BRANCH
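
A minimal local walk-through of the commands above (all names are placeholders; on a real project, start from "git clone SSH_PATH" instead of "git init", and finish with "git push origin BRANCH"):

```shell
# Create a toy repository, branch, and commit with the commands listed above.
mkdir myproject && cd myproject
git init -q
git checkout -q -b my-feature          # works even before the first commit
echo "demo" > README.md
git add --all
git -c user.name=demo -c user.email=demo@example.com commit -qm "something"
# then: git push origin my-feature
```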


CMakeLists.txt of an Athena-based package :
CMakeLists.txt of local code :

Use a virtual machine (or Docker) to run Athena on your computer

Docker with Athena:

docker pull lukasheinrich/atlas_base_fromhepsw
docker run --privileged -it -v $HOME:$HOME -e DISPLAY=${ip}:0 lukasheinrich/atlas_base_fromhepsw /bin/bash

Follow this page until "setupATLAS", so you have a Linux virtual machine with the ATLAS environment.
Administrator (in case you want to use sudo): account: atlas, password: boson

ATLAS Event Display

Latest event:
Public plots:
Event Pickup (JML and ESD):
Atlantis: download the latest version here, and run it on a system where Java is available

asetup,AtlasProduction, here
vp1 DATASET.ESD.root


Home Page:
Setup on DESY NAF disk:

lsetup "root 6.10.04-x86_64-slc6-gcc62-opt"
export ILCSOFT=/nfs/dust/atlas/user/hans/EUTelescrope_desy
git clone -b dev-desynaf $ILCSOFT/ilcinstall
cd $ILCSOFT/ilcinstall
./ilcsoft-install -i examples/eutelescope/release-standalone.cfg

5. ADCoS Shift links

6. Internal Code/Packages

Topic revision: r16 - 2018-02-15 - ShuoHan