RESPONSIBLE
ShuoHan

Cern Web Service (Personal Website)

This twiki stopped being updated in June 2018. A personal website was built following this tutorial: https://cernbox.web.cern.ch/cernbox/en/web/personal_website_content.html

The new website is: https://cern.ch/shhan

1. DESY computer resources

DESY hosts

Hosts working with Qsub:

nafhh-atlas01.desy.de ~ nafhh-atlas02.desy.de

Hosts working with HTCondor:

naf-atlas11.desy.de ~ naf-atlas16.desy.de.

Submit HTCondor jobs:

Since Dec. 2017, the DESY local computing system has been migrated to HTCondor: https://confluence.desy.de/pages/viewpage.action?pageId=67639562

The following table is a cheat sheet for migrating from Qsub to HTCondor:

QSub (SGE)     HTCondor
qsub 123.job   echo "executable = 123.job" > 123.sub
               condor_submit 123.sub   (more details below)
qstat          condor_q
qdel           condor_rm
               condor_rm [username]   (kill all your jobs)

When submitting jobs, the command has to be changed from

qsub -q short.q,default.q  -l os=sld6 -l site=hh -l h_vmem=8G -o ${base_dir}/out/${jobname}_"$ijob".out  -e  ${base_dir}/err/${jobname}_"$ijob".err  ${jobfilename}

To

rm -fr  ${submitfilename}
echo "executable     = ${jobfilename}"  >  ${submitfilename}
echo "should_transfer_files   = Yes" >> ${submitfilename}
echo "when_to_transfer_output = ON_EXIT" >> ${submitfilename}
echo "input          = ${base_dir}/input/${jobname}_"$ijob".txt"  >> ${submitfilename}
echo "output         = ${base_dir}/out/${jobname}_"$ijob".out2" >> ${submitfilename}
echo "error          = ${base_dir}/err/${jobname}_"$ijob".err2" >> ${submitfilename}
echo "log            = ${base_dir}/log/${jobname}_"$ijob".log2" >> ${submitfilename}
echo "+RequestRuntime = 43200" >> ${submitfilename}
echo "universe       = vanilla" >> ${submitfilename}
echo "RequestMemory   = 4G" >> ${submitfilename}
echo "queue" >> ${submitfilename}
condor_submit ${submitfilename}
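The echo chain above can also be written as a single heredoc, which is easier to read and edit. This is a minimal sketch: the variable values (base_dir, jobname, ijob) are placeholders for illustration, and the final condor_submit is left commented out so the script can be tried outside the NAF.

```shell
# Sketch: build the same HTCondor submit file with a heredoc.
# base_dir/jobname/ijob values here are illustrative placeholders.
base_dir=$(mktemp -d)
mkdir -p ${base_dir}/input ${base_dir}/out ${base_dir}/err ${base_dir}/log
jobname=myjob
ijob=0
jobfilename=${base_dir}/${jobname}.job
submitfilename=${base_dir}/${jobname}.sub

cat > ${submitfilename} <<EOF
executable              = ${jobfilename}
should_transfer_files   = Yes
when_to_transfer_output = ON_EXIT
input                   = ${base_dir}/input/${jobname}_${ijob}.txt
output                  = ${base_dir}/out/${jobname}_${ijob}.out2
error                   = ${base_dir}/err/${jobname}_${ijob}.err2
log                     = ${base_dir}/log/${jobname}_${ijob}.log2
+RequestRuntime         = 43200
universe                = vanilla
RequestMemory           = 4G
queue
EOF
# condor_submit ${submitfilename}   # submit as before, on a NAF host
```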

The example scripts are in this path:
/afs/desy.de/user/h/hans/public/condor_example

DESY HTCondor queue:

On 25 May 2018 there were ~7500 CPUs; HTCondor distributes the CPUs among jobs by itself.

HTCondor will receive more and more CPUs; always check http://bird.desy.de/stats/day.html

On 25 May 2018 there were still ~500 CPUs for Qsub (SGE); always check http://bird.desy.de/status/day.html

DESY disks

Log in to https://amfora.desy.de to check your AFS and NAF quotas.

The AFS quota is usually 16 GB; the NAF quota is 1 TB (default).

DESY Tier2

The quota on DESY-HH_LOCALGROUPDISK will be 20 TB (default) after applying for the atlas/de VO.

The files transferred to this disk can be found under the /pnfs path.

2. IHEP computer resources

http://afsapply.ihep.ac.cn:86/quick/

IHEP hosts:

SLC6 hosts:

lxslc6.ihep.ac.cn
atlasui01.ihep.ac.cn
atlasui04.ihep.ac.cn ~ atlasui06.ihep.ac.cn (IHEP inner hosts)

SLC5 hosts:

atlasui03.ihep.ac.cn
lxslc5.ihep.ac.cn

Submit HTCondor jobs:

Before doing anything, add the job tools to your PATH:

export PATH=/afs/ihep.ac.cn/soft/common/sysgroup/hep_job/bin5:$PATH  # SLC5
export PATH=/afs/ihep.ac.cn/soft/common/sysgroup/hep_job/bin:$PATH  # SLC6

Then use the following cheat table:

Qsub    HTCondor
qsub    hep_sub
qstat   hep_q
qdel    hep_rm
        hep_rm -a   (kill all your jobs)

hep_q -u yourIhepUserName: check the status of your jobs

Normally it is very similar to using Qsub; we just need to use the command:

hep_sub job.sh -o out -e err -mem 4800 -g atlas
other optional arguments:
-os OPERATINGSYSTEM, --OperatingSystem OPERATINGSYSTEM
                        # set the system version of resource you want.
-prio PRIORITY, --priority PRIORITY
                        # set the inner job priority of your own jobs.
-np NUMBERPROCESS, --numberprocess NUMBERPROCESS
                        # set the total cores required by your job.
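For many similar jobs, the hep_sub call above is typically wrapped in a loop. Since hep_sub only exists on IHEP machines, the sketch below is a dry run that just records the commands that would be issued; the job script names are illustrative.

```shell
# Hedged sketch: record the hep_sub commands for three illustrative job
# scripts instead of submitting them (hep_sub is IHEP-only).
sublog=$(mktemp)
for ijob in 0 1 2; do
  echo "hep_sub job_${ijob}.sh -o out -e err -mem 4800 -g atlas" >> ${sublog}
done
cat ${sublog}   # on an IHEP host, run these lines directly instead
```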

The example scripts are in this path (do not submit in /workfs or /afs):
/publicfs/atlas/atlasnew/higgs/hgg/ShuoHan/ATLAS/job1_data2016

IHEP HTCondor queue:

There is one universal queue sharing ~4300 CPUs, with no limit on running time; HTCondor distributes the CPUs among jobs by itself.

IHEP disks:

disk name             disk path                             size           backup                  can submit HTCondor jobs
afs                   /afs/ihep.ac.cn/users/                500 MB         unknown                 no
workfs                /workfs/atlas/                        10 GB          yes                     no
scratchfs             /scratchfs/atlas/                     500 GB         deleted every 2 weeks   yes
publicfs atlasnew     /publicfs/atlas/atlasnew/higgs/hgg/   320 TB shared  no                      yes
publicfs codesbackup  /publicfs/atlas/codesbackup/          320 TB shared  yes                     yes

IHEP Tier2

The quota on BEIJING-LCG2_LOCALGROUPDISK will be 8 TB after applying for the atlas/cn VO.

Files on this disk (under /dpm) cannot be read directly; list them with "dpns-ls -l" and download them to Tier3 with "rfcp".
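The dpns-ls/rfcp step above can be scripted. Since those commands only exist on IHEP grid UIs, this hedged sketch is a dry run that only prints the copy commands; the /dpm directory, destination path, and file names are illustrative.

```shell
# Hedged sketch: print the rfcp commands that would download files from
# /dpm to Tier3 storage. All paths and file names below are illustrative;
# in practice the file list would come from `dpns-ls -l ${dpm_dir}`.
dpm_dir=/dpm/ihep.ac.cn/home/atlas/datadisk/example
dest=/publicfs/atlas/codesbackup/example
cmds=$(for f in file1.root file2.root; do
  echo "rfcp ${dpm_dir}/${f} ${dest}/"
done)
echo "$cmds"   # on an IHEP UI, run these commands directly
```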

3. ATLAS environment

Setup ATLAS environment:

export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh'
setupATLAS
Then, set up ROOT:
lsetup root

Certificate, ATLAS VO, Rucio, AMI

Certificate: "New Grid User Certificate" in https://ca.cern.ch/ca/
Setup certificate on DESY/IHEP/Lxplus:

openssl pkcs12 -in usercert.p12 -nokeys -clcerts -out ~/.globus/usercert.pem
openssl pkcs12 -in usercert.p12 -nocerts  -out ~/.globus/userkey.pem
chmod 400 ~/.globus/userkey.pem
chmod 444 ~/.globus/usercert.pem
voms-proxy-init -voms atlas
ATLAS VO page (request here): https://voms2.cern.ch:8443/voms/atlas/user/home.action


Rucio transfer request page: https://rucio-ui.cern.ch/r2d2/request
Rucio commands:

voms-proxy-init -voms atlas
lsetup rucio
rucio list-dids DATASETNAME
rucio list-file-replicas DATASETNAME
rucio get DATASETNAME
rucio list-rules --file FILETYPE(e.g. data16_13TeV):FILEPATH
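The Rucio commands above can be chained in a small script, for example to fetch every dataset listed in a text file. This hedged sketch is a dry run that only prints the rucio commands (rucio itself is available only after `lsetup rucio` with a valid proxy), and the dataset names are purely illustrative.

```shell
# Hedged sketch: print one `rucio get` per dataset listed in a file.
# The dataset names are illustrative placeholders, not real samples.
dslist=$(mktemp)
printf 'data16_13TeV.EXAMPLE.DATASET1\nmc16_13TeV.EXAMPLE.DATASET2\n' > ${dslist}
cmds=$(while read -r ds; do
  echo "rucio get ${ds}"
done < ${dslist})
echo "$cmds"   # with a proxy and `lsetup rucio`, run these directly
```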


AMI page: https://ami.in2p3.fr/
pyAMI commands: https://ami.in2p3.fr/pyAMI/

voms-proxy-init -voms atlas
lsetup pyami
ami show dataset info  DATASETNAME


How to read an ntuple from the CERN EOS disk (see https://confluence.desy.de/display/IS/How+to+use+Grid+resources+in+batch+jobs)

voms-proxy-init -voms atlas
lsetup xrootd rucio root
root -l
root [0] TFile *mytuple =TFile::Open("root://eosatlas.cern.ch//eos/atlas/atlascerngroupdisk/phys-higgs/*/myntuple.root")

While submitting jobs to condor, in the job file one should write:

export X509_USER_PROXY=${PATHTOPROXY}
echo '#!/bin/expect' > voms.sh
echo 'spawn voms-proxy-init -voms atlas' >> voms.sh
echo 'expect "Enter GRID pass phrase for this identity:"' >> voms.sh
echo 'send "your_proxy_passwd\r"' >> voms.sh
echo 'interact' >> voms.sh
expect voms.sh
lsetup xrootd rucio root

and in the condor submit file:

transfer_input_files = ${PATHTOPROXY}

where ${PATHTOPROXY} defaults to ${X509_USER_PROXY} on the host, but the proxy needs to be copied to a path readable by the job, e.g. cp /tmp/x509up_u2xxxx ${HOME}/nfs/proxyfile/
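The proxy bookkeeping described above can be sketched as below. A dummy temporary file stands in for the real /tmp/x509up_uNNNN proxy so the steps can be tried anywhere; all paths are illustrative.

```shell
# Hedged sketch of the proxy handling: copy the proxy to a job-readable
# path, export X509_USER_PROXY, and add the transfer line to a submit file.
# mktemp paths stand in for /tmp/x509up_u2xxxx and ${HOME}/nfs/proxyfile.
proxy_src=$(mktemp)              # stands in for /tmp/x509up_u2xxxx
proxy_dir=$(mktemp -d)           # stands in for ${HOME}/nfs/proxyfile
cp ${proxy_src} ${proxy_dir}/proxyfile
export X509_USER_PROXY=${proxy_dir}/proxyfile

subfile=${proxy_dir}/123.sub
echo "transfer_input_files = ${X509_USER_PROXY}" > ${subfile}
cat ${subfile}
```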

ATLAS Configs/Tools

Truth Type and Truth origin : https://gitlab.cern.ch/atlas/athena/blob/master/PhysicsAnalysis/MCTruthClassifier/MCTruthClassifier/MCTruthClassifierDefs.h
Electron / Photon ID config: http://atlas.web.cern.ch/Atlas/GROUPS/DATABASE/GroupData/ElectronPhotonSelectorTools/offline/
Binary location of e/gamma ID: https://gitlab.cern.ch/atlas/athena/blob/master/PhysicsAnalysis/ElectronPhotonID/ElectronPhotonSelectorTools/ElectronPhotonSelectorTools/egammaPIDdefs.h
e/gamma Author definitions: https://gitlab.cern.ch/atlas/athena/blob/master/Event/xAOD/xAODEgamma/xAODEgamma/EgammaDefs.h
e/gamma Enum (ConversionType, EgammaType, ShowerShapeType): https://gitlab.cern.ch/atlas/athena/blob/master/Event/xAOD/xAODEgamma/xAODEgamma/EgammaEnums.h
Tracking Enum : https://gitlab.cern.ch/atlas/athena/blob/master/Event/xAOD/xAODTracking/xAODTracking/TrackingPrimitives.h
Isolation working points twiki: https://twiki.cern.ch/twiki/bin/viewauth/AtlasProtected/IsolationSelectionTool


Lowest un-prescaled trigger: https://twiki.cern.ch/twiki/bin/view/Atlas/LowestUnprescaled
Cross-Section and Branching-Ratio page: https://twiki.cern.ch/twiki/bin/view/LHCPhysics/LHCHXSWG
Current Luminosity: https://twiki.cern.ch/twiki/bin/view/AtlasPublic/LuminosityPublicResultsRun2
GRL page: http://atlasdqm.web.cern.ch/atlasdqm/grlgen/All_Good/
Lumi-calculator: https://atlas-lumicalc.cern.ch/, select "Create output plots and ntuples (--plots)" to create PRW input


Athena nightly builds: http://atlas-nightlies-browser.cern.ch/~platinum/nightlies/globalpage
MC JobOptions SVN: https://svnweb.cern.ch/trac/atlasoff/browser/Generators/MC15JobOptions/trunk/share

ATLAS Twiki and Tutorials

PRW config page: https://twiki.cern.ch/twiki/bin/viewauth/AtlasProtected/ExtendedPileupReweighting
MC Reconstruction twiki: https://twiki.cern.ch/twiki/bin/view/AtlasComputing/RecoTf and https://twiki.cern.ch/twiki/bin/view/Atlas/PileupMC2016
MC production twiki: https://twiki.cern.ch/twiki/bin/viewauth/AtlasProtected/AtlasProductionGroup
Derivation twiki: https://twiki.cern.ch/twiki/bin/viewauth/AtlasProtected/DerivationProductionTeam
HGamma twiki: https://twiki.cern.ch/twiki/bin/viewauth/AtlasProtected/HggDerivationAnalysisFramework
MxAOD twiki: https://twiki.cern.ch/twiki/bin/view/AtlasProtected/MxAODs


Creating Event-Loop package: https://twiki.cern.ch/twiki/bin/viewauth/AtlasComputing/SoftwareTutorialxAODAnalysisInROOT
Analysis Base tutorial: https://atlassoftwaredocs.web.cern.ch/ABtutorial/
ATLAS tutorial meetings: https://indico.cern.ch/category/397/
ATLAS software tutorial twiki: https://twiki.cern.ch/twiki/bin/view/AtlasComputing/SoftwareTutorialSoftwareBasics
PubCom LaTeX twiki: https://twiki.cern.ch/twiki/bin/viewauth/AtlasProtected/PubComLaTeX

4. Software

GitLab

Get an SSH key here: https://gitlab.cern.ch/help/ssh/README
Add the SSH key here: https://gitlab.cern.ch/profile/keys
Start a new project here: https://gitlab.cern.ch/dashboard/projects
Basic commands here: https://gitlab.cern.ch/help/gitlab-basics/README.md
Some basic commands:

git clone SSH_PATH
git checkout BRANCH_OR_TAG
git pull
git add --all
git commit -am "something"
git push origin BRANCH

CMake

CMakeLists.txt of an Athena-based package: https://gitlab.cern.ch/atlas-hgam-sw/HGamCore/blob/master/HGamTools/CMakeLists.txt
CMakeLists.txt of local code: https://gitlab.cern.ch/shhan/Purity_2D/blob/master/CMakeLists.txt

Use a virtual machine (or Docker) to run Athena on your computer

Docker: https://docs.docker.com/docker-for-mac/
Docker with Athena: https://twiki.cern.ch/twiki/bin/viewauth/AtlasComputing/AthenaMacDockerSetup

docker pull lukasheinrich/atlas_base_fromhepsw
docker run --privileged -it -v $HOME:$HOME -e DISPLAY=${ip}:0 lukasheinrich/atlas_base_fromhepsw /bin/bash

CERNVM: https://atlas-vp1.web.cern.ch/atlas-vp1/blog/tutorial/running-vp1-on-any-system-in-a-virtual-machine/
Follow this page until "SetupATLAS", so you have a Linux virtual machine with the ATLAS environment
Administrator (in case you want to use sudo): account: atlas, password: boson

ATLAS Event Display

Latest event: http://atlas-live.cern.ch/
Public plots: https://twiki.cern.ch/twiki/bin/view/AtlasPublic/EventDisplayPublicResults
Event Pickup (JiveXML and ESD): https://twiki.cern.ch/twiki/bin/view/AtlasComputing/Atlantis
Atlantis: download the latest version here (http://atlantis.web.cern.ch/atlantis/) and run it on a system where Java is available
VP1:

asetup 20.7.9.2,AtlasProduction
vp1 DATASET.ESD.root

EUTelescope

Home Page: http://eutelescope.web.cern.ch
Setup on DESY NAF disk:

setupATLAS
lsetup "root 6.10.04-x86_64-slc6-gcc62-opt"
export ILCSOFT=/nfs/dust/atlas/user/hans/EUTelescrope_desy
cd $ILCSOFT
git clone -b dev-desynaf https://github.com/eutelescope/ilcinstall $ILCSOFT/ilcinstall
cd $ILCSOFT/ilcinstall
./ilcsoft-install -i examples/eutelescope/release-standalone.cfg

5. ADCoS Shift links

6. Internal Code/Packages

Topic revision: r20 - 2018-06-27 - ShuoHan
 