First Data stuff

ssh -L 8080:pccmsdqm04:80 <username>

IPN Lyon commands

T3_FR_IPNL links

T3_FR_IPNL batch how-to

For launching/running a job on the batch system, you mainly need two files: a bash script, which will be run on the batch node, and a JDL file, specifying your job's setup.

  • If you want to have access to the T3 storage element (/dpm/...) during the execution of your bash script, you need to add the lines Requirements = other.GlueCEUniqueID==""; and VirtualOrganisation = "cms"; to your JDL file. For such access, please make sure the variable DPNS_HOST is correctly set to lyogrid06.in2p3.fr (should be OK by default). The commands to access the files are the same as for castor at CERN (rfdir, rfcp, rfio:/dpm/.../file.root in a CMSSW python config file / ROOT macro, etc.)
  • You should have default read access to the /gridgroup directory from batch (with the above JDL file). However, for write access, you need to fully open the directory permissions (i.e. chmod a+rwx directory), because the batch system is not a known user of the cms group. For security reasons, if you need to write to gridgroup, please set up such an open directory inside /gridgroup/cms/. THIS SHOULD BE AVOIDED: please consider using the dpm storage element first.
  • In case you want to submit long jobs: your job is automatically killed when your proxy lifetime expires (12 hours by default), even if it was waiting the whole time! You may want to ask for a longer proxy: voms-proxy-init --voms cms --valid 48:00. An automatic renewal of the proxy can also be put into place (see, not tested). For convenience, consider:
    • Renew your proxy just before launching a job
    • Do not launch lots of jobs at once (most of them will stay in "waiting" or "scheduled" for a while, and may be killed while "running" because of the proxy lifetime)
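Putting the JDL pieces above together, a minimal sketch (the executable, sandbox, and output file names are hypothetical placeholders; only the last two lines come from the note above):

```
# Hypothetical minimal JDL sketch; file names are placeholders.
Executable    = "myscript.sh";
StdOutput     = "job.out";
StdError      = "job.err";
InputSandbox  = {"myscript.sh"};
OutputSandbox = {"job.out", "job.err"};
# The two lines required for T3 storage-element access:
Requirements  = other.GlueCEUniqueID=="";
VirtualOrganisation = "cms";
```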

Connect remotely to windows (lyoserv)

First open a connection to lyoserv:
ssh -XY -L
Then in another shell type:
rdesktop localhost:1024
and connect!

CERN commands

batch at CERN: bsub -q queue_name (8nh, 1nh, 8nm) -L /bin/csh 'Process' -M memory_limit
bpeek (-f to follow with tail) job_number
stager_qry -M my_castor_file
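As a sketch, a complete submission line assembled from the bsub fields above (the script name and the -M value are hypothetical placeholders; check bsub -h for the exact memory units on your site). The command is only printed, not submitted:

```shell
# Dry-run sketch: assemble and print a bsub command line from the fields above.
# "myjob.csh" and the -M value are hypothetical placeholders.
CMD='bsub -q 8nh -L /bin/csh -M 2000000 ./myjob.csh'
echo "$CMD"
```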
Setting backspace to backspace in vim (instead of printing ^?): type stty sane in the terminal (for every new session)


yum install vim-X11 vim-common vim-enhanced vim-minimal
yum install git
yum install zlib-devel.x86_64

Give write/read access to some place on afs

afind ${thePathName} -t d -e "fs setacl -dir {} -acl system:anyuser  rl"

Compile a CMS note locally on ubuntu (and not on lxplus....)

The information is available on the following TWiki page: CMSSubversionCheckoutUsingUbuntu. Please note that the page is outdated, and that some of the required packages are not available anymore... but they turn out not to be required in the end!

CCIN2P3 commands

Loading the environment variables in the right order:

source /afs/
source ${VO_CMS_SW_DIR}/
source /afs/
cd $CMSSW_X_Y_Z/src
eval `scramv1 runtime -sh`


The following extra commands might solve a couple of CRAB-related bugs:


Copy grid output staged out on T2_FR_CCIN2P3 to sps

First (you need a valid proxy), get the job report to find the exact path where the files are stored on the T2:

crab -report -c Zmumu/

The complete path looks like srm://

Once you know the path, you can ls it with the srmls command:

srmls srm://

Then copy it where you want using the srmcp command: FIRST, remove the "/srm/managerv2?SFN=" part from the path name; SECOND, prepend file:/// to the destination:

srmcp srm:// file:///{path}
Notes:
  • Wildcards such as * do not seem to work with this command
  • {path} is any local path, from /sps/cms/obondu/ to simply .
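Since wildcards do not work, one way around it is a loop that issues one srmcp per file. A dry-run sketch (the SRM prefix and file names are hypothetical placeholders; the commands are only printed, not executed):

```shell
# Build one srmcp command per file, since srmcp does not accept wildcards.
# The SRM prefix and file names are hypothetical placeholders; the commands
# are collected and printed as a dry run -- pipe them to sh to actually copy.
SRM_PREFIX="srm://ccsrm.in2p3.fr/pnfs/in2p3.fr/data/cms/store/user/obondu"
DEST=.
CMDS=""
for f in output_1.root output_2.root; do
    CMDS="${CMDS}srmcp ${SRM_PREFIX}/${f} file:///${DEST}/${f}
"
done
printf '%s' "$CMDS"
```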


srm access addresses for all T1 T2 T3

Install Morgan's TotoAnalyzer:

To check out the code, you must obtain a Kerberos token for CERN CVS:

kinit -5 obondu@CERN.CH

Then:

  • for CMSSW >= 3_2_X : cvs co -r {tagName} -d ./UserCode/IpnTreeProducer UserCode/Morgan/IpnTreeProducer
  • for CMSSW <= 3_1_X : cvs co -r {tagName} UserCode/Morgan
then go in UserCode/IpnTreeProducer/src and make it

go in CMSSW_X_Y_Z/src and run scramv1 build, then go in CMSSW_X_Y_Z/src/IpnTreeProducer/src/ and run make

Check for memory leaks in TotoAnalyzer:

To find a memory leak, use valgrind:

valgrind --tool=memcheck `cmsvgsupp` --suppressions=$ROOTSYS/etc/valgrind-root.supp --leak-check=yes --show-reachable=yes --num-callers=20 --track-fds=yes cmsRun >& log.out

Compiling a ROOT macro YourMacro.C running on Toto-uples:

  1. Compile the libraries: go into the src directory, then run make. It will produce the shared library. Then edit YourMacro.C:
  2. include all the needed .h headers from ROOT (especially TSystem.h)
  3. include all the TRoot*.h headers
  4. replace int YourMacro() by int main()
  5. declare properly the collections used in the analysis, according to the C++ standard (and not the C interpreter, which is more relaxed)
  6. add just after the beginning of main: gSystem->Load("../../src/"); (or the equivalent place in your area)
  7. Compile: g++ YourRootMacro.C `root-config --libs --cflags` -o YourRootMacro
  8. To compile with RooFit: include all necessary RooFit header + g++ YourRootMacro.C -lRooFit -lRooFitCore `root-config --libs --cflags` -o YourRootMacro

NOTE: you should NOT include the TRoot*.h headers or replace int YourMacro() by int main() if you want to run the macro interactively

Blacklisting a grid computing element after submission of the jobs:

crab -resubmit x

class to analyse root-tuples located in the SE


Execute a compiled macro on a distant machine (e.g. CC batch worker)

For this to work properly, you need the following prerequisites:
  • You need to link all IpnTree header references in your macro to a folder interface/ in the directory where you compile
  • You need to load the shared library in your macro from lib/ (it does not matter for the actual compilation, but it will for execution)

The core of the thing is:

  • Load the usual environment variables ON SPS (cf
  • COPY IpnTree header files (interface files) to the worker : mkdir ${TMPBATCH}/interface; cp ${SPSDIR}/UserCode/IpnTreeProducer/interface/*.h ${TMPBATCH}/interface/
  • COPY the compiled IpnTree lib to the worker : mkdir ${TMPBATCH}/lib; cp ${SPSDIR}/UserCode/IpnTreeProducer/src/ ${TMPBATCH}/lib/
  • COPY the executable to the worker : cp ${SPSDIR}/PATH/EXECUTABLE ${TMPBATCH}/


  • With this, the only sps/batch interaction will be opening/chaining the input Toto-uple files. If you do not copy the libraries and the executable locally, sps will be accessed at execution time, and every time the executable reads an object from the root file. This slows down the job execution, because sps/batch I/O is very inefficient.
  • In order to speed up execution, you can also keep the outfiles and errfiles (i.e. cout and cerr) to a minimum. If you need to dump more information, you can write it to local files that can be copied from the worker to sps after the job execution. This will also decrease sps/batch I/O and speed up the job.
  • Same remark as above for output root files: please prefer creating the file locally (on the worker) and copying it to sps at the end of the job.
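The copy-to-worker steps above can be exercised as a self-contained sketch, with throwaway directories standing in for the real ${SPSDIR} and ${TMPBATCH} (which the batch system provides), and a stand-in header name:

```shell
# Self-contained demo of the copy-to-worker pattern; mktemp dirs stand in for
# the real sps area and the batch scratch directory, and TRootEvent.h is a
# stand-in header name.
SPSDIR=$(mktemp -d)
TMPBATCH=$(mktemp -d)
mkdir -p "${SPSDIR}/UserCode/IpnTreeProducer/interface"
touch "${SPSDIR}/UserCode/IpnTreeProducer/interface/TRootEvent.h"
mkdir -p "${TMPBATCH}/interface" "${TMPBATCH}/lib"
cp "${SPSDIR}"/UserCode/IpnTreeProducer/interface/*.h "${TMPBATCH}/interface/"
ls "${TMPBATCH}/interface"
```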

Shell Commands

Create a link ln -s (symbolic) {target} {linkname}
Change permissions chmod {u (user), g (group), o (others), a (all)}+{r,w,x} {file,folder}
Change permissions (afs folder) find ./ -type d -exec fs setacl {} account_name rlidwka \; -print
Find file find {path} -name "{name}"
Usage of awk for scientific notation echo "65" | awk '{printf "%4.2e\n" , $0}'
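The recursive find -exec pattern from the permissions lines above can be tried safely on a throwaway tree, with chmod standing in for fs setacl (which only exists on AFS):

```shell
# Demo of the recursive find -exec pattern on a throwaway directory tree;
# chmod stands in here for fs setacl, which is AFS-only.
TOP=$(mktemp -d)
mkdir -p "${TOP}/a/b"
find "${TOP}" -type d -exec chmod g+rx {} \;
ls -ld "${TOP}/a/b"
```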

CVS stuff

Files modified between two CVS tags: cvs diff -u -r RECO_3_8_X_v4 -r RECO_3_8_X_v5

Specific folder name for CMSSW: scramv1 project -n CMSSW_4_2_8_patch2__V12_00_03 CMSSW CMSSW_4_2_8_patch2

SVN stuff

On lxplus: instructions to check out a note for the first time under a directory "myDir" and make a PDF file

svn co -N svn+ssh:// myDir

cd myDir

svn update utils

svn update -N notes

svn update notes/AN-11-088

eval `./notes/tdr runtime -csh`

cd notes/AN-11-088/trunk

tdr --style=an b AN-11-088

The created PDF file is: ../../tmp/AN-11-088_temp.pdf

To add a file, remove a file, update, commit

svn add newFile

svn remove oldFile

svn update

svn commit 


/UserCode/hbrun/HughFilter for filtering single events

ROOT Stuff

Disable CINT loop optimization

It might be a cause of bugs from time to time (i.e. ROOT crashes even though your code is perfectly correct, because CINT precompiles loops)...

If you are running in interactive mode, the solution is:

.O 0
(the first character is a capital O and the second one is a zero)

If you are running in batch mode (either with the -b option or truly on a batch node), the solution is to put the following line in your code:

#pragma optimize 0

How to browse a RECO file


Dumping miniTree information

TChain* plaf = new TChain("miniTree");
plaf->Scan("Mmumugamma", "isTightMMG"); > dump.txt
(the trailing "; > dump.txt" is CINT output redirection, not standard C++)


Password generation

  • pwgen -1 -B -y -N 10
Topic revision: r47 - 2018-02-22 - OlivierBondu