ECDF is the Edinburgh Compute and Data Facility, a University-wide resource available to all research groups. This is a short introduction to getting your LHCb jobs running on the cluster and accessing your data.

Getting an account

You can request an account online or by sending an email to Science Support.

Logging in

First, you need to request a user account on the machine; the username/password should be the same as your EASE credentials. Email ecdf-systems-team AT


You will see that you have a home area at /exports/home/username. There is also a PPE group work directory at /exports/work/ppe/, which has more space available for us to use.

Sourcing the LHCb environment

For the moment, you need to do something like this:

source /exports/work/ppe/lhcb/lhcb-soft/scripts/

Then you can set your application environment in the usual way:

setenvGauss v30r3

  • Before using getpack, you will need to follow the instructions for a local installation of the LHCb software (to set up an SSH key; bear in mind you will also need to have a .hepix directory in your ECDF home area).

After that, you can get the applications and build your own algorithms, e.g.,

getpack Sim/Gauss v30r3
source Sim/Gauss/v30r3/cmt/

You all know the rest....

32bit compatibility libraries

At the moment (August 2007) the LHCb software is still not fully supported on 64-bit platforms such as ECDF. Therefore, before you build any software, you should prepend the 32-bit compatibility libraries to your LD_LIBRARY_PATH (bash and csh versions, respectively):

export LD_LIBRARY_PATH=/exports/work/ppe/lib32:$LD_LIBRARY_PATH
setenv LD_LIBRARY_PATH /exports/work/ppe/lib32:${LD_LIBRARY_PATH}

Submitting Jobs

ECDF uses the Sun Grid Engine (SGE) batch system, with the familiar qsub and qstat commands. To set up this environment, run:

module load sge

You can then submit jobs doing something like:

qsub -l h_rt=HH:MM:SS jobscript.csh

where HH:MM:SS is the time that you expect your job to run for. If you don't specify -l h_rt=HH:MM:SS, your job will be added to the 30-minute queue. There are four different run-time limits for various portions of ECDF, currently set to 30 minutes (the default), 6 hours, 24 hours and 48 hours. You should get into the habit of specifying your required run time when you submit your job.
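Those four limits can be captured in a small helper. The following is a sketch (pick_limit is a hypothetical function, not an ECDF tool) that maps a requested run time in minutes onto the smallest queue limit that covers it:

```shell
# Hypothetical helper: map a requested run time (in minutes) onto the
# smallest of the four ECDF limits (30 min, 6 h, 24 h, 48 h) that covers it.
pick_limit() {
    local mins=$1
    local limit
    for limit in 30 360 1440 2880; do
        if [ "$mins" -le "$limit" ]; then
            echo "$limit"
            return 0
        fi
    done
    echo "no ECDF queue is long enough" >&2
    return 1
}

pick_limit 20     # -> 30   (the default queue is enough)
pick_limit 300    # -> 360  (ask for the 6 hour queue)
pick_limit 2000   # -> 2880 (ask for the 48 hour queue)
```

You would then translate the chosen limit into the -l h_rt=HH:MM:SS option for qsub.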

qstat -u username

gives you a list of your jobs and their status.

Test first on interactive node

Before submitting your job to the batch queue, you should test that it works by logging into one of the interactive machines:


You can then proceed as normal.

Job output

By default, the stdout and stderr will be written to your home area (/exports/home/username/) as jobscript.csh.oJOBID and jobscript.csh.eJOBID. You can customise the output location by using the -o and -e options when submitting the job with qsub.

Any other output from the job will be written to the $TMPDIR on the node where the job runs. You need to copy this back to your home or work area once the job is complete. See the example script for an idea of what to do.
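As a minimal sketch of that copy-back step (the directories here are placeholders created with mktemp so the snippet runs anywhere; on ECDF, $TMPDIR is set by SGE on the worker node and the destination would be your home or work area):

```shell
JOBTMP=$(mktemp -d)     # stands in for $TMPDIR on the worker node
OUTDIR=$(mktemp -d)     # stands in for e.g. /exports/work/ppe/$USER/output

touch "$JOBTMP/GaussHistos.root"    # stand-in for real job output

# copy everything worth keeping back before the job ends and $TMPDIR is cleaned
cp "$JOBTMP"/*.root "$OUTDIR"/
ls "$OUTDIR"    # -> GaussHistos.root
```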

Array jobs

If you want to submit a large number of jobs, each of which changes a parameter, then you can use the concept of an array of jobs. For example:

qsub -t 2-20:2 jobscript.csh

This submits 10 jobs: the first with 2 as its parameter, then 4, 6, ..., 20. You can use the shell variable $SGE_TASK_ID to get hold of this number and use it in your script. Again, see the example.
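A minimal array-job script might look like the following; the mapping from $SGE_TASK_ID to an event number is an illustrative assumption, and the default value just lets the script run outside SGE:

```shell
#!/bin/bash
# SGE sets $SGE_TASK_ID on each task: 2, 4, ..., 20 for `qsub -t 2-20:2`
SGE_TASK_ID=${SGE_TASK_ID:-2}
EVTNUM=$(( SGE_TASK_ID * 1000 ))    # hypothetical per-task event offset
echo "task ${SGE_TASK_ID}: FirstEventNumber = ${EVTNUM}"
```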

Example script

#!/bin/csh
# Greig A Cowan, August 2007

# Set output location
set OUTDIR=/exports/work/gridpp/gcowan1/Gauss
#$ -o /exports/home/gcowan1/work/Gauss
#$ -e /exports/home/gcowan1/work/Gauss

# Source the environment
source /exports/work/ppe/lhcb/lhcb-soft/scripts/local-setup.csh
setenvGauss v30r3
source Sim/Gauss/v30r3/cmt/setup.csh

# Set up LD so that it gets the correct libs
setenv LD_LIBRARY_PATH ${HOME}/temp/lib32:${LD_LIBRARY_PATH}

# Options and Executable
set MYOPTS=myGauss.opts
set OPTS=~/cmtuser/Gauss_v30r3/Sim/Gauss/v30r3/options/${MYOPTS}
set EXE=~/cmtuser/Gauss_v30r3/Sim/Gauss/v30r3/slc4_ia32_gcc34/Gauss.exe

# Change to the job working directory and get a local copy of the options file
cd ${TMPDIR}
cp -r ${OPTS} .

# Need to change this for each run to get independent events
# (assumed here: take EVTNUM from the array-job task ID)
set EVTNUM=${SGE_TASK_ID}
echo "GaussGen.FirstEventNumber = ${EVTNUM};" >> ${MYOPTS}

# Run the job
${EXE} ${MYOPTS}
# Rename the output so that we don't overwrite files
mv ${TMPDIR}/GaussHistos.root ${OUTDIR}/GaussHistos-${EVTNUM}-${JOB_ID}.root
mv ${TMPDIR}/GaussMonitor.root ${OUTDIR}/GaussMonitor-${EVTNUM}-${JOB_ID}.root

# Tidy up
rm *.sim


ROOT can be set up within an ECDF (a.k.a. Eddie) session by adding the following lines to your .bash_profile script:
#Path to LHCb-PPE group software
export LHCBPPE=/exports/work/ppe/lhcb

#set ROOT
export ROOTSYS=$LHCBPPE/cern/root/pro
export PATH=$ROOTSYS/bin:$PATH
It is also possible to set up the ROOT environment on ECDF using:

 module add root

This module line can also be added to your .bash_profile script.
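A quick sanity check that these settings took effect is to look at the first entry of PATH; this sketch just re-runs the exports from above and prints it:

```shell
export LHCBPPE=/exports/work/ppe/lhcb
export ROOTSYS=$LHCBPPE/cern/root/pro
export PATH=$ROOTSYS/bin:$PATH

# the first PATH entry should now be the shared ROOT build's bin directory
echo "${PATH%%:*}"    # -> /exports/work/ppe/lhcb/cern/root/pro/bin
```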

Using Ganga on ECDF

Ganga SGE setup

Ganga supports SGE as a backend, in the same way that it supports LSF at CERN, so if you are used to using Ganga for configuring your job environment and options, then you can continue to use it while submitting jobs to ECDF. If you use Ganga, you don't have to worry about all of the SGE scripts, qsub and qstat commands.

You'll need some entries in your .bashrc so that the grid libraries can be found:

export X509_CERT_DIR='/exports/work/middleware/WN/etc/grid-security/certificates'

Before using Ganga, you first need to set up your .gangarc file so that it can submit to SGE with some special options that we need. Add this stanza to the bottom of your .gangarc file on the ECDF frontend nodes:

kill_str = qdel %s
submit_str = cd %s;qsub -cwd -l h_vmem=2500M -V %s %s %s %s
preexecute = os.chdir(os.environ["TMPDIR"])

If you're running with dCache data or DPM data you will need extra bits and pieces here, check out the individual pages for more details.

Ganga local filesystem setup

The file system in use on Eddie is very sensitive to large numbers of small files in your home directory, so it is advisable to place all output files in the shared filespace.

You will need to change the job workspace location to a new directory under /exports/work/physics_ifp_ppe, either by making a softlink in your gangadir/workspace to point there, or by editing your .gangarc.


You must also change the outputdata location for files like .sim, .digi and .dst to somewhere with more space, i.e. the scratch space under /exports/work/physics_ifp_ppe/scratch.


You can check these in your ganga session by typing:

In [34]:config['FileWorkspace']
Out[34]: [FileWorkspace]
*    topdir = '/exports/work/physics_ifp_ppe/scratch/rlambert/ganga_workspace'
     splittree = 0

In [35]:config['LHCb']
Out[35]: [LHCb]
*    DiracTopDir = '/afs/'
     DiracLoggerLevel = 'ERROR'
     copy_cmd = '/bin/cp'
*    DataOutput = '/exports/work/physics_ifp_ppe/scratch/rlambert/ganga_outputdata'
     mkdir_cmd = '/bin/mkdir'
     maximum_cache_age = 10080
*    LocalSite = 'CERN'
*    SEProtocol = 'castor'

If you like, you can also change the location of your job repository to point to shared filespace, or use a remote repository to reduce the number of files in the filesystem.

Ganga Quickstart

To start Ganga, first source the LHCb environment (see above) and then use:


to pick your Ganga version. You can then start up ganga with:


To submit a test "Hello, World!" Executable() job you can do something like:

j = Job(backend=SGE())
j.submit()

As you will have worked out from the options added to the .gangarc file, you can use Ganga to submit DaVinci jobs that need access to the files stored on the LHCbEdinburghGroupDcache.

Submitting to different queues

On LSF you would type:

j = Job(backend=LSF(queue='8nh'))

to submit to the 8nh queue. There is no equivalent implementation in the Ganga SGE backend; instead you must modify the configuration to submit to different queues.

Add the string: -l h_rt=hh:mm:ss to the submission string, where hh:mm:ss is the expected length of the job.

Set a default length in your .gangarc:

submit_str = cd %s;qsub -cwd -l h_vmem=2500M -l h_rt=4:00:00 -V %s %s %s %s

Or set it directly in a Ganga session before submitting your job:

config['SGE']['submit_str'] = 'cd %s;qsub -cwd -l h_vmem=2500M -l h_rt=8:00:00 -V %s %s %s %s'

Submitting ROOT jobs from Ganga

I would recommend using the Root() application type:

j=Job(application=Root(), splitter=ArgSplitter(), merger=RootMerger())

You should first set the path in the [ROOT] section of .gangarc. This guarantees that Ganga picks up the correct version of ROOT to run your jobs. It is probably best not to have $ROOTSYS set in your environment in addition to this.

path = /exports/work/ppe/sw/builds/root

If instead you want to submit a precompiled executable with ROOT, make sure you build, run and merge with the same version of ROOT. For example, version 5.12:

export ROOTSYS='/exports/work/physics_ifp_ppe/sw/builds/root5_12/root'

#  Location of ROOT
location = /exports/work/physics_ifp_ppe/sw/builds/root5_12/root

#  Set to a specific ROOT version. Will override other options.
path = /exports/work/physics_ifp_ppe/sw/builds/root5_12/root

version = 5.12.00g

Current Issues

  • Using setenvGauss in a bash script does not work; it is OK if you use csh. Strange.
  • Using Gauss v30r3 you need to get a copy of a couple of libraries that are not installed on ECDF. You then need to modify your LD_LIBRARY_PATH before compiling the code.

-- GreigCowan - 03 Aug 2007

Topic revision: r12 - 2008-09-04 - unknown