
IRB Tier 3 Instructions

The official CMS name of the site is T3_HR_IRB. The PHEDEX configuration is found under (user: phedex)

Site hosts: (headnode), and UPDATE: and have been added in the meantime.

To use the site, first a new local account needs to be created.

Site contacts are and

Various information on setting up the user workflow on the site (for local users), as well as several administration tasks, are described in the following sections.

New CMSSW installation (using CVMFS)

* set up scram using:

export SCRAM_ARCH=slc5_amd64_gcc462
source /cvmfs/

or, for a newer release:

export SCRAM_ARCH=slc6_amd64_gcc491

* other gcc versions are also supported. CVMFS caches approx 20 GB of CMSSW installation data (this can be increased if necessary), so any version is available without separate installation.

* this installation (as well as two others found in /users/cms and /users/cmssw) is now configured to use the local Squid to proxy and cache condition data needed for CMS data processing.

* Old CMSSW installations in /users/cms and /users/cmssw are obsoleted by this and may be deleted in the future to free disk space (the exception is CRAB in /users/cms). At that point users would only need to create a new project area and recompile their code.
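As a sketch, a fresh working area on top of the CVMFS installation might be set up as follows. The path /cvmfs/cms.cern.ch/cmsset_default.sh is the standard CMS CVMFS setup script and the release name is only an example; both are assumptions, not site-specific instructions.

```shell
# Standard CMS CVMFS bootstrap (assumed path); skipped if CVMFS is absent.
export SCRAM_ARCH=slc6_amd64_gcc491
if [ -r /cvmfs/cms.cern.ch/cmsset_default.sh ]; then
    . /cvmfs/cms.cern.ch/cmsset_default.sh
    scram project CMSSW CMSSW_7_4_0   # create a new project area (example release)
    cd CMSSW_7_4_0/src
    eval "$(scram runtime -sh)"       # non-alias equivalent of "cmsenv"
fi
```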

CRAB installation on the site

* After "cmsenv" (UPDATE: also works before "cmsenv" so you can add this to your environment):

source /users/cms/CRAB/CRAB_2_9_1/crab.(c)sh

UPDATE: for CRAB3 use

source /cvmfs/

The Grid environment (UI) should already be set up automatically.

How to submit grid jobs that copy data back to IRB

* Add/edit these lines in crab.cfg (user section)

return_data = 0
copy_data = 1
eMail = your.mail@xxx
user_remote_dir=/somedir # or set it (as below) in multicrab.cfg using "USER.user_remote_dir" (this is a subdirectory in the user's default directory)
#do not set storage_path here. Files will end up in LFN /store/user/%username% (PFN: /STORE/se/cms/store/user/%username%)

If your user directory is not created in /STORE/se/cms/store/user, please ask site contacts (Srecko, Vuko). This directory must belong to "storm" user and group. There is a periodically running cron script that makes these directories writable for anyone (setfacl), so that analysis output can be deleted.

* Please be careful not to write to someone else's directory. Currently, access rights do not distinguish between different users.

* In multicrab.cfg (these options can also go in crab.cfg, in the section given by the capitalized prefix below):

CMSSW.lumis_per_job=50 #set your own
USER.user_remote_dir=IRBtest #this sets subdirectory under "storage_path" as above

* Note: instead of "lumis_per_job" (recommended), it is possible to use "CMSSW.number_of_jobs = XX" in the section of each dataset. The latter can be dangerous because of the limited amount of space in the Condor working directory, which is on the system partition. The number of available Condor job slots across all three machines is 90, but it is fine to queue more jobs (they will wait for free slots).

* Note: manual deletion or moving of files copied to SRM might still not be possible for local users (e.g. if you want to delete data no longer needed). This will be addressed by running a cron job to set proper permissions.

* Note: in some cases default voms-proxy-* installed might not work for some grid related activities (e.g. using srmcp). In case of problems, it is recommended to use /opt/voms-clients-compat/voms-proxy-init and related tools for creating voms proxies. E.g.

/opt/voms-clients-compat/voms-proxy-init -voms cms # to get proxy with access to CMS resources

* Crab takes care of creating the voms proxy itself, so use the above only in case of problems. Just make sure .globus is populated with your proper CERN certificate and key.

* Alternative copying mode (use ONLY if above doesn't work):

return_data = 0
copy_data = 1
eMail = your.mail@xxx
storage_element =
storage_path = /srm/managerv2?SFN=/STORE/se/cms/store/user/username  #set your user dir
storage_port = 8444
#srm_version = srmv2 #optional

UPDATE: To use CRAB3, set storageSite to T3_HR_IRB in your Python configuration script (you don't need to have a local account!):

config.Site.storageSite     = 'T3_HR_IRB'
config.Data.outLFNDirBase  = '/store/user/%username%/%directory%'
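For context, those two lines sit inside a standard CRAB3 configuration file. A minimal sketch could look like the following; apart from the two lines above, all fields and values are generic CRAB3 settings assumed here for illustration, not site-specific instructions:

```python
# Hypothetical minimal crabConfig.py; only storageSite and outLFNDirBase
# come from this page, the rest are generic CRAB3 config fields.
from CRABClient.UserUtilities import config

config = config()
config.General.requestName = 'IRBtest'
config.JobType.pluginName  = 'Analysis'
config.JobType.psetName    = 'your_cmssw_cfg.py'
config.Data.inputDataset   = '/SomeDataset/SomeEra/MINIAOD'
config.Data.splitting      = 'LumiBased'
config.Data.unitsPerJob    = 50
config.Data.outLFNDirBase  = '/store/user/%username%/%directory%'
config.Site.storageSite    = 'T3_HR_IRB'
```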

To check and/or copy your files from T3_HR_IRB, use lcg-ls and lcg-cp, for example:

lcg-ls -v -b -l -T srmv2 --vo cms srm://\?SFN=/STORE/se/cms/store/user/%username%
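As a self-contained sketch (the SRM endpoint host below is a placeholder, since the actual hostname is not given on this page, and the listing only runs if the lcg-utils tools are installed):

```shell
# Placeholders: replace SRM_HOST with the real T3_HR_IRB SRM endpoint
# and jdoe with your username.
SRM_HOST="srm.example.org"
SE_DIR="/STORE/se/cms/store/user/jdoe"
if command -v lcg-ls >/dev/null 2>&1; then
    lcg-ls -v -b -l -T srmv2 --vo cms "srm://${SRM_HOST}?SFN=${SE_DIR}"
fi
```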

How to run CRAB jobs directly on T3_HR_IRB:

To run jobs on the Condor batch system directly on the site (on the three machines that are available), change the scheduler in crab.cfg to:

scheduler = condor

* Jobs must be submitted directly from any of the three site hosts. Also, any samples to be processed must be available on the site (transferred using PHEDEX, which site admins can do). Do not use the "condor_g" scheduler, only plain "condor".
* Note: use the first method above for copying (the alternative one ignores the subdirectory setting).

List of datasets currently replicated on T3_HR_IRB

Query it here:

Custom analysis datasets (add your analysis dataset here):

| Dataset name | Size | Status | DBS instance |

Manually copying data from other sites (examples)

#using lcg-cp:

lcg-cp -v -b -D srmv2 srm://\?SFN=/hdfs/store/user/smorovic/53Xtest/DataPatTrilepton-W07-03-00-DoubleMu-Run2012A-13Jul2012-v1/patTuple_10_1_QVr.root srm://\?SFN=/STORE/se/cms/store/user/test/patTuple_10_1_QVr.root

#using srmcp

srmcp -retry_num=0 file:////tmp/testfile srm://\?SFN=/STORE/se/user/smorovic/test890708133 -debug -2

* Note: srm-to-srm copying needs the -pushmode switch. Also, srmcp only recognizes certificates created by the /opt/voms-clients-compat/* tools (not the default ones installed).


Xrootd service is presently not installed. This is on a TODO list.

UPDATE: the xrootd service is installed and working. After "cmsenv" and creating a proxy (voms-proxy-init -voms cms), simply use, for example, xrdcp for copying.
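For example (the redirector host and file names below are placeholders, and the copy only runs if xrdcp is installed):

```shell
# Placeholders: XRD_HOST is a hypothetical xrootd endpoint; adjust to the site.
XRD_HOST="xrootd.example.org"
LFN="/store/user/jdoe/test.root"
if command -v xrdcp >/dev/null 2>&1; then
    xrdcp "root://${XRD_HOST}/${LFN}" /tmp/test.root
fi
```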

Local DBS for publishing locally processed datasets to private DB (updated)

It is possible to publish processed data to a CMS DBS analysis instance. This has not been tried here yet, but it is detailed here (for another T3 site)

The name of your analysis dataset can be arbitrary; however, this convention is recommended:


User analysis datasets can be published to cms_dbs_ph_analysis_01_writer or cms_dbs_ph_analysis_02_writer DBS instance. For the former, use this in crab.cfg:

return_data = 0
copy_data = 1
eMail =
storage_element = T3_HR_IRB
publish_data_name = T3HRIRB_V00
dbs_url_for_publication =

After all jobs have fully processed, run:

(multi)crab -getoutput
(multi)crab -publish

This will produce dataset in the form:

Your dataset can be found on DAS after selecting the DBS instance "cms_dbs_ph_analysis_01_writer" in the drop-down menu and performing the search (e.g. "dataset dataset=YOUR_DATASET")

Now you can process this dataset in CRAB similarly to the previous steps. Be careful to specify the name of the output file produced (e.g. name of the TTree root file produced by a WZAnalyzer module) and a separate output subdirectory.

See also CRAB FAQ:

Submitting jobs to CONDOR

This section briefly describes how to submit jobs to Condor. It can be used for submitting any type of executable, including CMSSW programs.

Note: you are strongly encouraged to use condor to run any CPU and/or memory intensive job that will take longer than a few minutes to run.

Note: If you run CMSSW on local datasets using CRAB as explained above, you are actually implicitly using CONDOR.

Create a job description file job_desc.txt with this content:

executable  =  your_executable
universe    =  vanilla
log         =  Your_Log_File
output      =  Your_output_file
error       =  Your_error_file
initialdir  =  your_initial_directory
queue

If you will need the environment from which you submit the job, add the following line to the job description file (before the last line, "queue" should always be the final line):

getenv      =  True

The values of the variables executable, log and initialdir should be adapted to your job. You can then submit the job with the command:

condor_submit job_desc.txt
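As a sketch, the whole procedure can be scripted; the executable and file names below are placeholders, and condor_submit only runs if Condor is actually installed:

```shell
# Write a minimal Condor job description and submit it.
cat > job_desc.txt <<'EOF'
executable  = /bin/hostname
universe    = vanilla
log         = job.log
output      = job.out
error       = job.err
getenv      = True
queue
EOF
if command -v condor_submit >/dev/null 2>&1; then
    condor_submit job_desc.txt
fi
```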

You can check the status of your job with:

condor_q

And you can check the status of all Condor queues with:

condor_q -global
All these commands can be issued from any of the 3 hosts in the "lorien forest".

This should be enough to get you started and should probably satisfy most of your needs. You can find more detailed instructions and options in the CONDOR manual.

There is also a CONDOR monitoring page where you can get statistics and history plots of CONDOR jobs.

Site Administration

Restarting gluster after shutdown

As of now, gluster will not be able to restart cleanly after a shutdown, and you need to do the following:

Adding a new user

This procedure is currently not automated, however the job could be simplified by a script.

On lorienmaster, as "root" go to the following directory:

cd /etc/openldap/inputs
cp usertemplate.ldif %username%.ldif

Choose a new name for %username%. Modify all instances of "username" and "User Name" in the file to reflect the credentials of the new user, and set a new UID number.

The UID must not overlap with any existing UID. After picking a number, for example 12345 (try to use numbers over 10000), check that the following commands don't find anything:

grep "12345" /etc/passwd
ldapsearch -x | grep 12345
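The template editing could be scripted roughly as below. The stand-in template content is invented for illustration only; the real usertemplate.ldif in /etc/openldap/inputs will contain more attributes:

```shell
# Create a tiny stand-in template for illustration only.
cat > usertemplate.ldif <<'EOF'
dn: uid=username,ou=People,dc=irb,dc=hr
uid: username
uidNumber: 99999
EOF
NEWUSER="jdoe"
NEWUID="12345"
# Substitute the username and UID placeholders into the new user's ldif.
sed -e "s/username/${NEWUSER}/g" -e "s/99999/${NEWUID}/g" \
    usertemplate.ldif > "${NEWUSER}.ldif"
```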

* Note: Setting the password hash in the ldif file is no longer necessary. If for some reason you need it, an SHA hash can be generated using the slappasswd command.

NOTE (17-03-2016): the hostname from which home directories in the automounter map has to be, and not!!! Mounting will not work otherwise.

After completing the ldif file, upload it to the LDAP server:

ldapadd -D"cn=root,dc=irb,dc=hr" -W -x -f newusername.ldif
#Type password for ldap server. It can be found under "rootpw" entry in /etc/openldap/slapd.conf .

If the file is inconsistent or some entries are missing, this can fail. If successful, restart the NSCD daemon to refresh the name cache (it might take a few minutes for the username to be picked up by the system):

/etc/init.d/nscd restart

Add a home directory and storage directory

mkdir /home/%username%
chown username:users /home/%username%
mkdir /STORE/se/cms/store/user/%username%
chown storm:storm /STORE/se/cms/store/user/%username%

It might be necessary to "reload" autofs (on any of the site hosts), though it is possibly not needed:

/etc/init.d/autofs reload

Finally, add the user to Kerberos DB and set password:

kadmin.local #run the Kerberos admin shell (as root on the KDC)
addprinc %username%@IRB.HR
#now set the password when prompted
q #quit

The password can also be changed by the user with "kpasswd" (run "kdestroy" first).

If the user wants to use AFS, they need to obtain a CERN token:

kinit %username%@CERN.CH
aklog #converts it to legacy krb4 token for AFS

Modifying an LDAP entry for an existing user

This procedure is currently not automated. However, the job could be simplified by a script.

On lorienmaster, as "root" go to the following directory:

cd /etc/openldap/inputs

Create a text file which specifies which entry needs to be modified and for which user. Here is an example of a recent change that modifies the default shell for user ceci to Bash:

dn: uid=ceci,ou=People,dc=irb,dc=hr
changetype: modify
replace: loginShell
loginShell: /bin/bash

After completing the modification description file (in this case called modif-shell-ceci), upload it to the LDAP server:

ldapmodify -D"cn=root,dc=irb,dc=hr" -W -x -f modif-shell-ceci
#Type password for ldap server. It can be found under "rootpw" entry in /etc/openldap/slapd.conf .

If successful, restart the NSCD daemon to refresh the name cache (it might take a few minutes for the username to be picked up by the system):

/etc/init.d/nscd restart

Note: restarting the NSCD daemon may not strictly be needed, but it is done here just to be on the safe side.

(Re)starting PHEDEX scripts

su - phedex #as root
cd ~
source stopallkill
source cleanall #wipes all logs and previous state (optional)
#check that no phedex perl scripts are running
ps aux | grep phedex
source startall

Completing PHEDEX transfers which complain about duplicate files

In some cases the transfer job can fail for some reason (for example, an overloaded server causing the checksum script to time out) while the file remains on disk. Phedex will then retry the transfer, but complain about a duplicate file (noticeable in /var/log/storm/storm-backend.log, or in the error log on the Phedex web page).

The simplest workaround is to log in as root and rename the dataset directory to a temporary name, allowing those transfers to complete. After the transfer is at 100%, move the files back from the temporary location into the correct one (preferably keeping the later copy of any duplicates, because it passed the checksum). It is also possible to delete the offending files, but this has to be done for each of them individually (a lot of work). It is generally recommended not to overload the "forest" machines while Phedex transfers are ongoing, to avoid these problems. Alternatively, the checksum script could be modified to detect the stall and take appropriate action (or catch the kill signal and delete the file before terminating?), so there is a TODO item for this.

File system troubleshooting


Home directories in /users (or part of it) not visible

In case some of the home directories under /users become invisible (mostly on lorientree01 and lorientree02), you can try to restart the NFS service on lorienmaster:

/sbin/service nfs restart

This will probably solve the problem (though give it a few seconds to become effective). In case it does not, you can also try to reload the automounter (autofs) maps on the two tree machines:

[root@lorientree01 ~]# /sbin/service autofs reload
Reloading maps

Parts of gluster distributed filesystem /STORE invisible

It can happen that parts of the distributed gluster file system in /STORE become invisible. You may notice that some files are missing, or simply see with the df command that the total visible /STORE file system is smaller than it should be, e.g.:

[root@lorienmaster ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
                      162G   57G   97G  38% /
                      7.8T  3.2T  4.2T  44% /home
/dev/sda1              99M   27M   68M  28% /boot
tmpfs                  32G  612K   32G   1% /dev/shm
/dev/sdc1              25T   24T  831G  97% /export/brick1
AFS                   8.6G     0  8.6G   0% /afs
                       71T   55T   17T  78% /STORE
cvmfs2                 20G   16G  3.8G  81% /cvmfs/

The size of the /STORE file system should be 79 TB (NEW: the total capacity is 96 TB after the integration of 2 new servers in May 2015). So in the example above, part of it is missing. We can find out which "brick" (element of the gluster fs) is missing by checking the status of the gluster volume:

[root@lorienmaster ~]# gluster volume status
Status of volume: gv0
Gluster process                                         Port    Online  Pid
Brick            24009   Y       25405
Brick            24009   Y       1519
Brick            24010   N       20910
NFS Server on localhost                                 38467   Y       20916
NFS Server on                   38467   Y       1524
NFS Server on                   38467   Y       25410
In this case, we see that the brick on lorienmaster is not online. We can restart it as follows:

[root@lorienmaster ~]# gluster volume stop gv0
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
Stopping volume gv0 has been successful
[root@lorienmaster ~]# gluster volume start gv0
Starting volume gv0 has been successful
If you see that another host is not online, you need to login to that host and execute those commands there.
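The "which brick is down" check can be automated with a small filter over the status output. Here a captured sample is used so the snippet is self-contained; on a live system you would pipe `gluster volume status` in instead:

```shell
# Print bricks whose "Online" column is N. The sample mimics the status
# output shown above; hostnames and paths here are illustrative.
sample='Brick lorienmaster:/export/brick1  24010  N  20910
Brick lorientree01:/export/brick1  24009  Y  1519'
offline=$(printf '%s\n' "$sample" | awk '/^Brick/ && $4 == "N" { print $2 }')
echo "Offline bricks: $offline"
```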

We can now recheck the status.

[root@lorienmaster ~]# gluster volume status
Status of volume: gv0
Gluster process                                         Port    Online  Pid
Brick            24009   Y       21159
Brick            24009   Y       9833
Brick            24010   Y       25150
NFS Server on localhost                                 38467   Y       25156
NFS Server on                   38467   Y       9838
NFS Server on                   38467   Y       21164
Everything is now fine, and we can now see that /STORE recovered its full size:

[root@lorienmaster ~]# df -h /STORE
Filesystem            Size  Used Avail Use% Mounted on
                       96T   79T   17T  83% /STORE

NOTE on 23.02.2016: We added 2 new servers in May 2015 and split the bricks so that all bricks have equal size; the status is now:

[root@lorienmaster ~]# gluster volume status
Status of volume: gv0
Gluster process						Port	Online	Pid
Brick lorientree03:/export/sdb1/brick1			49152	Y	2923
Brick lorientree03:/export/sdb2/brick2			49153	Y	2922
Brick lorientree03:/export/sdb3/brick3			49154	Y	2932
Brick lorientree04:/export/sdb1/brick1			49152	Y	2953
Brick lorientree04:/export/sdb2/brick2			49153	Y	2959
Brick lorientree04:/export/sdb3/brick3			49154	Y	2964
Brick lorientree02:/export/sdb1/brick1			49152	Y	12898
Brick lorientree02:/export/sdb2/brick2			49153	Y	12906
Brick lorientree01:/export/sdb2/brick2			49156	Y	22128
Brick lorientree01:/export/sdb3/brick3			49157	Y	22129
Brick lorienmaster:/export/sdb2/brick2			49152	Y	3076
Brick lorienmaster:/export/sdb3/brick3			49153	Y	3075
Brick lorientree02:/export/sdb3/brick3			49154	Y	12911
Brick lorientree01:/export/sdb1/brick1			49158	Y	22127
NFS Server on localhost					2419	Y	3089
NFS Server on lorientree01				2419	Y	3372
NFS Server on lorientree02				2419	Y	12933
NFS Server on				2419	Y	2939
NFS Server on lorientree04				2419	Y	2971
Task Status of Volume gv0
Task                 : Rebalance           
ID                   : 697ef0ff-91c5-4e82-9f14-25a6b956944d
Status               : completed           

/STORE not accessible on lorienmaster

It has happened that the whole /STORE file system became invisible on lorienmaster, due to a problem with the mount point. What you would then see is:

[lorienmaster] /users/tsusa/wbbAna/compiled_code/code> df -h /STORE
df: `/STORE': Transport endpoint is not connected

For now, we have only been able to solve this by rebooting lorienmaster, which is obviously highly unsatisfactory. The problem has usually been observed when many processes were accessing files on /STORE. It has, however, been noted that /STORE remained fully visible on the other two hosts.

Condor troubleshooting

Sometimes the Condor schedd complains about a directory in /tmp to which it does not have write permissions. You may have to delete this directory to get Condor working again (the exact error is logged in one of the Condor log files in /var/log/condor/).

If you see that Condor restarts jobs after it has been restarted, wipe them out by running (possibly as root):

condor_rm -all -forcex

Useful links

Log of changes

  • 19.12.2014: add PREEMPT and CLAIM_WORKLIFE options to /etc/condor/condor_config.local
  • 06.06.2016: /STORE was not mounted after rebooting on lorientree0{1,2,4}: simply mount it with "mount /STORE"

-- VukoBrigljevic - 26 May 2014

Topic revision: r10 - 2016-08-18 - DinkoFerencek