# MPI student quick start guide

This is a quick guide to setting up the most essential software you will need at the three locations you might use. This is not a tutorial, and there is no explanation of what any of the commands actually mean - you will learn this over time. Also, you will likely find that you will want to further customise your environments to help you work more efficiently - for example, by defining aliases for commands you normally use, setting up ssh tunnels, etc. The intention here is just to provide you with a minimal working setup, which you can then develop over the course of your project work.

# MPI IT services

In case you are encountering a problem with the MPI computing infrastructure, you can open a ticket here.

# Working at MPI

## Working directly on MPI computers

Step 1: Customise your login. This isn't strictly necessary, but it will make some things a little easier. Create a file called .bashrc (with the leading ".") in your home directory, with this content:

# .bashrc

### User specific aliases and functions

### Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi

### Page-up and Page-down bindings
bind '"\e[5~": history-search-backward'
bind '"\e[6~": history-search-forward'

### Places to ssh to
function pc()
{
ssh -Y pcatlas$1
}
alias rzggate='ssh -Y gateafs.rzg.mpg.de'
function rzg()
{
ssh -Y mppui${1}.t2.rzg.mpg.de
}
alias lxplus='ssh -Y lxplus.cern.ch'
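A broken .bashrc can make new shells misbehave, so it is worth checking the file for syntax errors before opening a new terminal. This is a quick sanity check of my own, not part of the official setup; `bash -n` parses a script without executing it:

```shell
# Check a bashrc-style file for shell syntax errors without executing it
# (demonstrated here on a throwaway copy; in practice run "bash -n ~/.bashrc").
cat > /tmp/bashrc_check <<'EOF'
alias lxplus='ssh -Y lxplus.cern.ch'
function rzg() { ssh -Y mppui${1}.t2.rzg.mpg.de; }
EOF
bash -n /tmp/bashrc_check && echo "syntax OK"
```

A silent exit with status 0 (here signalled by "syntax OK") means the file parses cleanly.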


Step 2: Open a terminal, eg using Konsole.

Step 3: Copy some useful scripts. You can edit these later to suit your own needs, but they should be enough to get you started.

$ cp ~flowerde/rootsetup.sh .

Note that the $ symbol represents the prompt, and should not be typed!

Step 4: Start using ROOT. This is the only step you need to do every time you log in.

$ source rootsetup.sh
$ root

This will give you the interactive prompt; see the various tutorials for other ways of using ROOT.

### Basic ROOT tutorials

You can, for example, test your setup using this quick tutorial. The level of the tutorial is very basic, but it contains several useful links.

More in-depth tutorials can be found here and here. Also, codecademy has some tutorials on python and git, if you need those. Thanks to Philipp Gadow for the links!

## Working on RZG

Rechenzentrum Garching, or RZG, is our local Tier 3 computing site. This means that we are able to use it for analysis, while still benefiting from the fact that it is a Grid site, with access to data files and other useful tools.

Step 1: Obtain an account following the instructions on this page. On the electronic registration form, you will first need to select the institute (MPI fuer Physik) and the person responsible for authorising your account (Stefan Kluth). You will need to request access to the MPP linux cluster. It is recommended that you request the same username as you have on the MPI system.

Step 2: Log in. Once you have your account (you will get an email), you can log in to RZG with the following command:

$ ssh -Y [username@]mppui1.t2.rzg.mpg.de

The user name is only necessary if it is different to your local account. Other machines (mppui2, mppui3) are also available. Alternatively, use the command defined in the .bashrc file you created earlier:

$ rzg 1


From external locations (eg at home), you will need to first access the "gate" machine before you can access the mppui machines:

$ ssh -Y [username@]gateafs.rzg.mpg.de

Either way, you will need to type in your RZG password.

Step 3: Change your password. I think there are instructions on this in your welcome email; ask for help if it's not there.

Step 4: Customise your login. Again, this is managed by a .bashrc file (note that certain editors might not be available at every location).

# .bashrc

### User specific aliases and functions

### Page-up and Page-down bindings
bind '"\e[5~": history-search-backward'
bind '"\e[6~": history-search-forward'

### Login nodes
function mppui()
{
ssh -Y mppui${1}.t2.rzg.mpg.de
}

if [[ ${HOSTNAME} != rzgate ]]; then
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh
fi
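If you mostly connect from outside, you can also let ssh hop through the gate machine automatically with a ProxyJump entry in the ~/.ssh/config on your own machine. This is just a sketch using the hostnames above (ProxyJump needs a reasonably recent OpenSSH):

```
# ~/.ssh/config on your laptop (sketch)
Host mppui1 mppui2 mppui3
    HostName %h.t2.rzg.mpg.de
    ProxyJump gateafs.rzg.mpg.de
    ForwardX11 yes
    # User <your RZG username>   (uncomment if it differs from your local one)
```

With this in place, `ssh mppui1` goes via the gate in a single command.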


In addition, you will need a file called .bash_profile, with the following contents:

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi


Step 5: Start a new session. It is best to leave your existing session running in case there is a mistake in the previous step which prevents you from logging in. If you are using Konsole, press CTRL-SHIFT-T to open a new session in a new tab. You will need to access the gate machine first. From there, you can use the simpler aliased command in .bashrc:

$ mppui 1

Simply replace the number by 2, 3 etc to access a different machine.

Step 6: Read the new information printed to screen. Information is printed about several new commands set up by atlasLocalSetup.sh. These each have inbuilt help, which you should refer to for more information. For example, to begin ROOT, do the following:

$ localSetupROOT
(some screen output is printed)
$ root -l

The -l in this case suppresses the splash screen, which can be very slow over a remote connection. To set up athena, just execute the following (for example):

$ cd some/directory/you/want/to/use
$ asetup 17.2.7.5.2,AtlasPhysics,here

Note that the version number will change depending on exactly what it is you are studying. Make sure to always use a fresh shell when executing these commands!

### Running on the batch system

RZG has a very good batch system that uses the Sun Grid Engine (SGE). Before starting to use it, please read the official documentation here. There is also some useful information here. In particular, you need to run the save-password command before you do anything, and every time you change your RZG password. Here is an example job script to get you started:

# setupATLASUI is not aliased by default, we have to call the underlying command instead
source /t2/sw/setupui_cvmfs.sh atlas

# Note: if you want your .bashrc, you need to source it yourself
# . .bashrc

# Run in $TMPDIR for fast I/O - it will be something like /tmp/7993986.1.short, based on the task ID and the queue name
testdir=$TMPDIR/BatchTest
if [ -d $testdir ]
then
rm -r $testdir
fi
mkdir $testdir
cd $testdir

# Set up athena for an example. You could also set up ROOT (localSetupROOT) or anything else you like
asetup MCProd,19.2.4.4.2,here

# Technical detail for this example
export DATAPATH=/cvmfs/atlas.cern.ch/repo/sw/Generators/MC15JobOptions/latest/share/DSID392xxx/:$DATAPATH

# Example: Generate 1k SUSY events with Herwig++
Generate_tf.py --ecmEnergy=13000. \
--runNumber=392522 \
--firstEvent=1 \
--maxEvents=1000 \
--randomSeed=234 \
--jobConfig=/cvmfs/atlas.cern.ch/repo/sw/Generators/MC15JobOptions/latest/share/DSID392xxx/MC15.392522.HppEG_UE5C6L1_C1C1_SlepSnu_x0p50_500p0_200p0_2L8.py \
--outputEVNTFile=TestJob.pool.root

# In a real job you would now copy the output you wish to keep to somewhere you can access, eg /ptmp/mpp/$USER

To submit the job, copy and paste the above into a file, let's call it BatchTest.sh. A minimal submit command would be:

$ qsub BatchTest.sh

A more complete (read: useful) command might be

$ qsub -j y -m ea -o ${HOME}/Qout BatchTest.sh

The "best" combination of arguments is subjective, so check man qsub to find out more about what is possible. The man page also has a very useful section on environment variables available within the job, for example $JOB_NAME, $JOB_ID, $SGE_TASK_ID, etc.
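For jobs that process several similar inputs, SGE array jobs (submitted with qsub -t 1-N) run the same script once per task, with $SGE_TASK_ID telling each task apart. The sketch below combines this with the stage-in-$TMPDIR pattern from the job script above; the payload, file names, and output location are made up for illustration (in a real job the destination would be something like /ptmp/mpp/$USER):

```shell
#!/bin/bash
# Sketch of an SGE array-job script: do the work in $TMPDIR, copy output out once.
# Submit with eg: qsub -t 1-10 BatchArray.sh   (names here are illustrative)

workdir=${TMPDIR:-/tmp}/ArrayTest_${SGE_TASK_ID:-1}
outdir=${OUTDIR:-$HOME/ArrayTestOutput}   # in real life eg /ptmp/mpp/$USER

mkdir -p "$workdir"
cd "$workdir"

# ... the real payload (asetup, Generate_tf.py, ...) would go here;
# we fake some output so the staging pattern is visible
echo "result of task ${SGE_TASK_ID:-1}" > "result_${SGE_TASK_ID:-1}.txt"

# Copy everything back in one go, to spare the shared filesystem many small writes
mkdir -p "$outdir"
cp result_*.txt "$outdir"/
```

Each task writes only inside its own $TMPDIR subdirectory, so tasks cannot trample each other's files.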

Neither /ptmp nor /afs are designed to handle many small subsequent I/O operations well. Even when your jobs effectively do not generate much output (e.g. only filling a lot of histograms), they might slow down the discs quite significantly (for you and for all other users). For that reason it is best to let your jobs produce output on the /tmp directories (typically on fast SSDs) of the worker nodes and later copy your output to the target directories in one go. In particular /ptmp is meant to be used like this, see here. For convenience, slurm automatically creates a folder on /tmp for every job (and deletes it automatically after the job finishes), which you can access via the $TMPDIR variable in your slurm script. There are also higher-level frameworks for job submission available which may do this automatically for you, e.g. ClusterSubmission, which was developed and is curated at MPI.

## Working on official CERN (virtual) machines (lxplus)

Step 1: Register for an account. Instructions are on this page. Again, it is recommended to try and get the same user name as at MPI and RZG, if possible.

Correction: The above page cannot be seen by non-users (?), so email Atlas.Secretariat@cernNOSPAMPLEASE.ch after reading the following information.

Some extra information:

• The ATLAS group code ("Grp") is zp.
• You need an AFS/PLUS account to use the public Linux systems and to authorise access to some other ATLAS computing resources.
• You need a NICE/MAIL account to authorise access to some (network) services such as wireless access and the mailing list archives.

Step 2: Log in. Again, the lxplus system is accessed via ssh. As with RZG, you will need to provide your user name if it is different to your local account.

$ ssh -Y [username@]lxplus.cern.ch

If you are using the .bashrc file above at MPI, you can use the shorter alias
$ lxplus

Either way, you will need to enter your password.

Step 3: Customise your account. First, follow the instructions on the page in Step 1 to change the password. Then, check which shell you are using:

$ echo $SHELL

It is in principle possible to change this, but the default is usually zsh, not bash. If this is the case, open a file called .zshrc in your home directory, and fill it like this:

# .zshrc

### Source global definitions
if [ -f /etc/zshrc ]; then
. /etc/zshrc
fi

export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh

If instead you have (or set up) bash, just replace zsh with bash throughout. In practice, the two shells are very similar.

Step 4: Start a new session. You need to log in again in order for your changes in step 3 to take effect.

Step 5: Begin work! The atlasLocalSetup.sh script does the same thing as at RZG and sets up the same commands, customised for the local environment. See the previous section on how to set up ROOT and athena.

## Accessing your CERN account (including SVN) directly from RZG

If you use scp or svn commands a lot to transfer things between RZG and CERN, typing in your password all the time is a pain. To make this easier you can create an ssh key to give you password-less access. This procedure is stolen from here.

First, execute these lines on your local system:

ssh-keygen -t rsa       # Don't provide a password

Then, copy the public key to lxplus (eg with scp ~/.ssh/id_rsa.pub lxplus.cern.ch:), log in to lxplus, and do
/afs/cern.ch/project/svn/dist/bin/set_ssh
cat id_rsa.pub >> .ssh/authorized_keys


Still on lxplus, create a file ~/.ssh/config, with the following contents:

Host lxplus.cern.ch lxplus
Protocol 2
PubkeyAuthentication no

Host svn.cern.ch svn
GSSAPIAuthentication yes
GSSAPIDelegateCredentials yes
Protocol 2
ForwardX11 no


Note that this procedure has been observed to screw up some other things (I think pull requests via scp from CERN to RZG), so if anyone knows a better recipe do please share it!
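If ssh still asks for a password after this, overly permissive file modes are a common culprit: ssh silently ignores keys when ~/.ssh or the key files are group/world readable. Here is a small helper of my own (not part of the official recipe) that flags the usual offenders; ssh wants the directory at mode 700 and private keys / authorized_keys at 600:

```shell
# Warn about ssh-related files whose permissions are too open.
check_ssh_perms() {
    local dir=${1:-$HOME/.ssh}
    local ok=0
    [ "$(stat -c %a "$dir")" = "700" ] || { echo "fix: chmod 700 $dir"; ok=1; }
    for f in "$dir"/id_* "$dir"/authorized_keys; do
        [ -e "$f" ] || continue
        case "$f" in *.pub) continue;; esac   # public keys may stay readable
        [ "$(stat -c %a "$f")" = "600" ] || { echo "fix: chmod 600 $f"; ok=1; }
    done
    return $ok
}
```

Run check_ssh_perms on both ends of the connection; it prints the chmod commands needed, and returns non-zero if anything needs fixing.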

## Accessing your code at RZG or LXPlus from MPI or your laptop

You will find the selection of text editors / coding tools at RZG and lxplus a bit limited. It can be very useful to edit your code from the MPI machine or your laptop, and only use the ssh sessions to compile / run. To do this, there are two tools:

### Obtaining a token for AFS access:

Using AFS, you can access your RZG/LXPlus home folders from MPI or your laptop as if they were local folders on that machine. This way, you can for example edit your code using the local editors like kate, IDEs like kdevelop or eclipse, or even just add files to emails etc.

To make this work on your laptop (tested on Ubuntu 14), make sure the following packages are installed: openafs-modules-dkms, openafs-client, openafs-krb5, krb5-user, krb5-config. You may need to restart once after first installing them.

The following commands should be used to obtain an AFS token. You need to do this every time you log into the computer (if you just lock the screen, the token will stay valid for 7(?) days).

#### for RZG:

From MPI or your laptop:
kinit <your username at rzg>@IPP-GARCHING.MPG.DE
# when requested, enter your RZG password
aklog -c IPP-GARCHING.MPG.DE -noprdb


On doing this, you should see your folder at /afs/ipp-garching.mpg.de/home/<first letter of username>/<username>. To access this in a more handy location, you can use a symlink:

ln -s /afs/ipp-garching.mpg.de/home/<first letter of username>/<username> ~/garching

Then your home folder will have a subfolder "garching" that contains your RZG home folder.

#### for CERN lxplus:

From MPI or your laptop:
kinit <your CERN username>@CERN.CH
# when requested, enter your lxplus password
aklog -c CERN.CH


Here, there are two interesting places to look at. Your actual home folder is at /afs/cern.ch/user/<first letter of username>/<username>/, so let's create another link:

ln -s /afs/cern.ch/user/<first letter of lxplus username>/<username>/ ~/cernhome

In addition, there is a 'workdir' where you can get a lot of quota (up to 50Gig I think!): /afs/cern.ch/work/<first letter of username>/<username>/. You know the drill...

ln -s /afs/cern.ch/work/<first letter of lxplus username>/<username>/ ~/cernwork


All AFS tokens expire after 10h - so you basically need to renew them daily.
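Since the two kinit/aklog pairs above differ only in the realm, you could wrap them in a small convenience function in your .bashrc. This is a hypothetical helper of my own, not an official tool; the realm names and aklog flags are exactly those from the commands above:

```shell
# Obtain an AFS token for either site: "afstoken rzg" or "afstoken cern".
# An optional second argument overrides the username (defaults to $USER).
afstoken() {
    case "$1" in
        rzg)  kinit "${2:-$USER}@IPP-GARCHING.MPG.DE" && aklog -c IPP-GARCHING.MPG.DE -noprdb ;;
        cern) kinit "${2:-$USER}@CERN.CH" && aklog -c CERN.CH ;;
        *)    echo "usage: afstoken rzg|cern [username]"; return 1 ;;
    esac
}
```

Then renewing the daily token is just one short command per site.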

### Setting up SSHFS:

sshfs allows you to mount a remote computer's file system via ssh. This is not extremely fast, but more than sufficient to edit text or look at plots etc.

On your laptop, you need the sshfs package. MPI machines have it installed.

The command to set up sshfs is the following:

 sshfs <username>@<remote machine>:<path on remote machine> <empty folder on your PC to mount it in>


Sounds complicated? Let's try an example! We will mount the RZG's /ptmp space ('unlimited' scratch disk...) to MPI. First, we create an empty (important!) directory where we want to see the content on our PC. For a lack of creativity, let's put it in our home folder and call it ptmp...

 mkdir -p ~/ptmp

Done! Now we mount the ptmp volume via sshfs:
sshfs -o follow_symlinks <username>@mppui2.t2.rzg.mpg.de:/ptmp ~/ptmp

And that's it! As with AFS, the sshfs command needs to be run each time you log into your computer/laptop. If you happen to lose the network connection, you may need to rerun it.
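One pitfall: sshfs will happily mount over a non-empty directory and hide its contents until you unmount. A small wrapper of my own (the hostname in the example is the RZG one from above) can guard against that:

```shell
# Mount a remote path via sshfs only if the local mountpoint is empty.
mount_remote() {
    local remote=$1 mountpoint=$2
    mkdir -p "$mountpoint"
    if [ -n "$(ls -A "$mountpoint" 2>/dev/null)" ]; then
        echo "refusing to mount: $mountpoint is not empty" >&2
        return 1
    fi
    sshfs -o follow_symlinks "$remote" "$mountpoint"
}
# Example: mount_remote <username>@mppui2.t2.rzg.mpg.de:/ptmp ~/ptmp
```

To undo the mount later, `fusermount -u ~/ptmp` releases it again.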

### Installing the ATLAS ROOT style

When you are using some macros for the first time, and you are not using the ATLAS ROOT style by default in your working area, you may encounter an error message like
        Error in <TROOT::Macro>: macro rootlogon.C not found in path .:/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/x86_64/root/...
Error in <TROOT::SetStyle>: Unknown style:ATLAS

This means that you have not installed the ATLAS ROOT style yet. To do so, create a file called rootlogon.C (somewhere in your ROOT macro path, eg in ~/RootUtils next to AtlasStyle.C) with the following content:
        #include "AtlasStyle.C"
void rootlogon()
{
SetAtlasStyle();
}

• In case there is no file called ~/.rootrc, enter
        echo -e 'Unix.*.Root.DynamicPath:    .:$(ROOTSYS)/lib:$(HOME)/RootUtils/lib:\nUnix.*.Root.MacroPath:      .:$(HOME)/RootUtils:' > ~/.rootrc

• From now on, every time you open ROOT, the "Applying ATLAS style settings..." message should get printed.

# Working in the MPI office at CERN (Building 40, room 40-2-C16)

There are 3 desktop machines in room 40-2-C16. To ssh into them (only from inside the CERN network), use

        # machine located directly behind the door
ssh -Y <cern username>@pcphmpi04.dyndns.cern.ch
# machine located on the left
ssh -Y <cern username>@pcphmpi14.dyndns.cern.ch
# machine located on the right
ssh -Y <cern username>@mppnuc4002c16.dyndns.cern.ch

Obviously, you can also log in on the machines themselves via the graphical interface. In case you cannot log in, your username was not put on the list of allowed (MPI) usernames. This can only be done by Stefan Stonjek (stonjek@mppNOSPAMPLEASE.mpg.de, +49 89 32354 296).

# PhD thesis submission for students enrolled at TUM

Students enrolled at TUM have to be members of the TUM Graduate School (https://www.gs.tum.de/promovieren-an-der-tum/). This means you have to be registered at https://www.docgs.tum.de/.

At the end of your PhD, there are some steps required to submit your thesis. These are further explained here: https://www.gs.tum.de/promovierende/administratives/einreichung-der-dissertation/neue-promo/. This Twiki is meant to give additional help and contains some documents to speed up the submission.

## Before printing

• At the time of handing in, you need an amtliches Führungszeugnis (official certificate of conduct), which has to be requested to be sent directly to TUM Promotionsamt, Arcisstr. 21, 80333 München. Be aware that the creation of that document takes up to 3 weeks from the request until it is sent to TUM, so submit the request in good time before the date you want to hand in the thesis. Also be aware that the document must not be older than 3 months, so do not request it too early.

• You will need a TUM PhD coverpage (not the first page of the thesis, but usually the third one) which has all the information listed here: https://www.tum.de/fileadmin/w00bfo/www/Studium/Dokumente/Promotionsamt/Titelblatt.pdf. An example .tex file producing the needed coverpage is attached (tumcoverpage.tex: LaTeX template to create a TUM coverpage). Note: You do not necessarily need to know your second examiner or the head of the exam committee yet; you can just write NN on the coverpage. The committee will be decided later on and can be filled in after the first 5 copies have been handed in.

## Before handing in at the Promotionsamt

• You need to request the thesis submission at https://www.docgs.tum.de/ by clicking on Mein Fortschritt and subsequently on Antrag auf Einreichung. Be aware that Antrag auf Einreichung will not show up at https://www.docgs.tum.de/ until you have finished your qualification period, which then has to be signed off by the Physics Department of TUM.

• You need to fill in the Eidesstattliche Erklärung, which can be downloaded here. A pre-filled pdf version of this dotx is attached to this Twiki (EidesstattlicheErklaerung.pdf: Pre-filled Eidesstattliche Erklärung).

• You need to register your thesis at the TUM library. To do so, go to Melden Ihres Dissertationsthemas and fill out the form. You will need to use HTML syntax for special characters; you cannot use LaTeX symbols. For the \sqrt{s} sign, use &radic;. The online form will show a space after the &radic;, and you cannot do anything about it: removing the ; at the end would remove the space, but the system needs the ;, so do not remove it. HTML characters will also not be displayed correctly in the summary pdf file created by the submission form. Don't worry: in the end, the symbol will be correctly displayed on the library's website, so just leave the pdf as it is.

• You need a list of publications. A LaTex template for the list is attached here: list_of_publications.tex: LaTeX template to create a list of publications

• You are not submitting a publikationsbasierte Dissertation, so no signed form is needed for that.

• You need to upload officially certified copies (amtlich beglaubigte Kopien) of your Master certificate (both Urkunde and Abschlusszeugnis). So have them ready.

• You need to bring a printed version of your CV when you hand in the thesis!

## Before your defense

• After you have handed in, the printed copies are sent to the examiners. As soon as they have submitted their reports, you need to wait at least 2 more weeks until the date of your exam. You will have to collect 20 signatures for your thesis (the so-called Rundlauf) from professors/PDs at TUM. After collecting the 20 signatures, you need to wait at least 1 more week until the date of your exam. You can find the form (version from August 2018) you will get from the secretary here: TUM_rundlauf_leer.pdf.

## After your defense

• After your defense, you have to print the official copies including the official TUM coverpage (usually on page 3), which has the date of submission, the date of acceptance, and the names of the referees.

• For the IMPRS secretariat, you also need two printed copies of the thesis (which are handed in during your last month at MPI, as part of your checkout list procedure).
## Topic attachments

• EidesstattlicheErklaerung.pdf (37.2 K, 2018-06-28, NicolasMaximilianKoehler): Pre-filled Eidesstattliche Erklärung
• TUM_rundlauf_leer.pdf (147.7 K, 2018-08-14, NicolasMaximilianKoehler): Empty Rundlauf form from August 2018
• list_of_publications.tex (0.7 K, 2018-06-28, NicolasMaximilianKoehler): LaTeX template to create a list of publications
• tumcoverpage.tex (1.1 K, 2018-06-28, NicolasMaximilianKoehler): LaTeX template to create a TUM coverpage
Topic revision: r30 - 2020-08-13 - MichaelHolzbock

Copyright © 2008-2021 by the contributing authors. All material on this collaboration platform is the property of the contributing authors.