Welcome to the UCT Twiki


Welcome to ATLAS


Group meetings are every Monday, 13:30-15:00 SAST, on Zoom and in RW James room 5.05. The South African Indico pages are here.

Skills development sessions are on Tuesdays, from 14:00 to 17:00, for those who are in Cape Town: https://twiki.cern.ch/twiki/bin/viewauth/AtlasComputing/SoftwareTutorial

Weekly updates will be given in the following order:

  • James: Bye
  • Sahal: 08/11
  • Ryan: 22/11
  • Kevin: 29/12
  • Cameron: Bye
  • Senzo: 06/12
  • Engineering: when ready

Holidays: 05/04 (Family Day), 09/08 (National Women's Day)

Suggested Topics for Supervisor Talks

Hierarchy problem, naturalness, fine-tuning.


For printing on campus, from the office open https://mpsportal.uct.ac.za and sign in using your UCT credentials. Click "Submit a job", choose the "srvwinppc003\UCT_XRX_Follow" printer, upload the documents and submit. Wait for the job to reach the "held" state in the queue. Once it is held, go to the printing room on the 5th floor, tap your UCT card on the printer to sign in, and enter your PIN to log in. Then release your print job. If you are doing this for the first time, follow the instructions in the printing room for setting up your PIN. More info here: https://lib.uct.ac.za/services-tools/print-copy-scan

Visa Stuff

Unless you are staying for more than 90 days in a 6-month period, apply for a multi-entry visa, as documented on the ATLAS website. If asked for the details of a contact person within the host organisation, you may use the ATLAS secretariat.

If you visit CERN for longer than 90 days in a 180-day period, you need to apply for a long-stay (type D) visa through the embassy of the country in which you will spend the most time. For example, if you will be predominantly in Switzerland, apply through the Swiss embassy. You then need to go to the CERN Users Office and apply for a Swiss legitimation card and a French work permit. Note that the CERN hostel only allows a stay of at most 90 days within a year, which is shorter than the over-90-day stay for which a residency card is required, so you will need other accommodation for a long stay.

First CERN Visit

To get to the CERN Meyrin site: once you have collected your luggage, but before you leave the luggage area, collect a free bus/tram ticket from the machine near the doors. When you exit the airport, go up to the next level of parking and head to the train station. You can catch the 68 bus directly to CERN, or take one of the other buses to Blandonnet, where you can get a tram to CERN. When you arrive at CERN, the first place to go is building 55, just outside Entrance B, to sort out the access card you need to enter CERN. However, you may need to go to the Users Office first, in building 61 on the other side of the main entrance from Restaurant 1, to complete your registration as a CERN member. Unfortunately everything is closed on weekends, so if you are staying at the CERN hostel and arrive during the weekend, just show your proof of accommodation and your passport to the guard at Entrance B.

The new SA-CERN funding forms can be found at the following address: http://sa-cern.tlabs.ac.za/travel-documents/

SA-Cern travel procedures: https://twiki.cern.ch/twiki/pub/AtlasSandbox/SouthAfricanCluster/SA-Cern-TravelProcedures-1.pdf

Contract Extensions

If you only want to extend your CERN USERS contract, you can do so at the earliest one month before it expires. If you want to increase your presence at CERN to 55% or above, you first need a Swiss type D visa: the Users Office will ask to see it before increasing your presence.

More info on CERN contract modifications here. More info on ATLAS contracts here.

Obtaining a Grid Certificate

Follow the link and instructions to obtain a Grid Certificate: Installing Your Grid Certificate.

Signing up for shifts

This applies to taking shifts in the ATLAS Control Room (ACR)

  • First, sign up for the two-day shift training. The training should be done no more than 1-2 months before your first shift; if you have not done shifts in a while, you will need to do the training again. The first day is general ACR training, while the second day is desk-specific. You may need to email the person in charge of the desk you will be shifting at.
  • When the booking period for shifts opens (http://atlas-otp.cern.ch/), book your shifts; be ready the second it opens, as slots go very quickly. You have to book at least 20 OTP points' worth of shifts in the year, with a minimum of 10 points' worth in a given shift period, of which there are two in a year. In other words, you can do 10 points per period, or all 20 in one period. Morning and evening shifts count about 0.66 points each; night and weekend shifts count about 1.33.
  • Once your shifts are booked, you need to go back and book your 2-3 shadow shifts. They must be after your training and before your first actual shift.
  • You then need to go to edh.cern.ch and request access to ATL_CR, stating in the reasons section that you wish to do shifts. Also on edh.cern.ch you can connect to SIR, where you need to take the ATLAS Level 4A and Safety at CERN safety courses. These courses must be completed before requesting access. The computer safety course should already be done; if not, do it.
  • Once all of these are done, just attend the training and shadow shifts and you can begin your shifts.

Setting up ATLAS Access Locally on hep01

This can be done once you have a CERN computer account.

The ATLAS style sheet is available here, but also on hep01 in /atlas

Make sure you subscribe to the "atlas-sa-uct" egroup at CERN here.

Access to local data on hep01

Samples are in /atlas/DATAMC: Monte Carlo samples in the 'mc' directory and data samples in the 'data' directory. When downloading samples, download to /atlas/temp_data first and then move them to the correct area.

Useful Resources

Useful GitLab links

Useful Bits of Computing Help

  • To save your history in the zsh shell on lxplus, add the following to your .zshrc:
       export HISTSIZE=1000
       export SAVEHIST=1000
       export HISTFILE=~/.history

Using Rucio

  • To use Rucio to download random files from a sample:
       lsetup rucio
       voms-proxy-init -voms atlas
  • You will be prompted for your GRID password. Then to download {n} random files from {sample_name}:
       rucio download --nrandom {n} {sample_name} 
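  • Before downloading, it can be useful to inspect what is in a dataset. The standard rucio list-files command does this; {sample_name} is the same placeholder as above:

```shell
# List the files in a dataset before downloading it
rucio list-files {sample_name}

# Download the whole dataset rather than a random subset
rucio download {sample_name}
```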

Using XRootD to download files from EOS

This applies to downloading from EOS to the hep01 server. First set up XRootD:
     lsetup xrootd

If you are on lxplus, you can access EOS as if it were a directory, e.g., ls /eos/user/g/guest. If you want to access someone else's CERNBox, you may need them to grant you permission.

First find files on EOS

  • to list the contents of /eos/user/g/guest (CERNBOX Storage):

xrd eosuser.cern.ch dirlist /eos/user/g/guest

  • to list the contents of /eos/atlas/user/g/guest (ATLAS Storage):

xrd eosatlas.cern.ch dirlist /eos/atlas/user/g/guest
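On recent XRootD releases the legacy xrd client has been replaced by xrdfs; the equivalent listing commands are:

```shell
# xrdfs 'ls' is the modern equivalent of 'xrd ... dirlist'
xrdfs eosuser.cern.ch ls /eos/user/g/guest
xrdfs eosatlas.cern.ch ls /eos/atlas/user/g/guest
```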

Then Copy Them to the Local Directory

  • From eosatlas:

xrdcp root://eosatlas.cern.ch/{remote_file_path} {local_directory_path}

  • From eosuser (CERNBox):

xrdcp root://eosuser.cern.ch/{remote_file_path} {local_directory_path}

Using GEANT4 on HEP01

  • Create a directory in which to work on GEANT4, e.g.:
       mkdir ${HOME}/workdir/Geant_Workdir             # once
       export G4WORKDIR=${HOME}/workdir/Geant_Workdir  # every time you log in
  • Set up LCG 85 from sft.cern.ch
       source /cvmfs/sft.cern.ch/lcg/contrib/gcc/4.9/x86_64-slc6-gcc49-opt/setup.sh
       source /cvmfs/sft.cern.ch/lcg/views/LCG_85/x86_64-slc6-gcc49-opt/setup.sh

  • An example of how to run exampleB1:
       export G4Examples="/cvmfs/sft.cern.ch/lcg/releases/LCG_85/Geant4/10.02.p02/x86_64-slc6-gcc49-opt/share/Geant4-10.2.2/examples"
       cd $G4WORKDIR
       mkdir B1-build
       cd B1-build
       cmake -DGeant4_DIR=${G4LIB} ${G4Examples}/basic/B1
       make -jN               (N is the number of cores on your machine)
  • Once that is done the exampleB1 executable should be created. To run the application just type:
       ./exampleB1 exampleB1.in

Using Allpix on HEP01

  • Create a directory Allpix, then
       cd Allpix
       mkdir allpix-build allpix-install
       git clone https://github.com/ALLPix/allpix.git
  • Create a setup file, which must then be sourced every time you log into a new session. The setup file contains:
       source /cvmfs/sft.cern.ch/lcg/views/LCG_85/x86_64-slc6-gcc49-opt/setup.sh
       export G4WORKDIR=/Path/to/your/Allpix/directory
       export PATH=$PATH:$G4WORKDIR
       export G4INSTALL=/cvmfs/sft.cern.ch/lcg/releases/Geant4/10.02.p02-1c9b9/x86_64-slc6-gcc49-opt
  • After all this
       cd allpix-build
       cmake ../allpix -DCMAKE_INSTALL_PREFIX=../allpix-install
       make -j install
  • Once that is done Allpix should be installed. To run an example, change to the allpix directory and run:
       allpix macros/YourChoiceOfMacro



Using batch on lxplus (lxbatch)

  • Useful twiki: https://twiki.cern.ch/twiki/bin/view/Main/BatchJobs
  • The batch system is very useful if you are running jobs that require a lot of CPU time. It is quite easy to use, and the CPU time allocated to you depends on the queue to which you submit the job.
  • You basically need to write a shell script and submit it to one of the queues listed on the above twiki. For example, if the script is called test.sh: bsub -q 1nd test.sh. This submits to the one-day (1nd) queue.
  • See below two batch workflows: one for ATHENA and another for ROOTCORE.
  • For ATHENA:
  • Set up the environment and compile locally.
  • Do a test run over a few events to make sure everything runs well locally.
  • An example script can be found here: /afs/cern.ch/work/c/cmwewa/public/BatchJobs_Examples/batchtest.sh
  • NOTE: you have to set up ATHENA on the batch system too.
  • For ROOTCORE:
  • Set up the environment and compile locally.
  • Do a test run over a few events to make sure everything runs well locally.
  • Before submitting jobs to batch, delete the ROOTCOREBIN directory and the rcsetup.sh/csh scripts, then log out of lxplus. Log in again and, without setting anything up, submit the jobs to batch; you will set up ROOTCORE on the batch system. **Not sure why, but if this is not done, the batch job returns with a complaint like "rcSetup: command not found". This solution was only found by trial and error and no technical details are known. It would be nice if someone figured out how to run without deleting the ROOTCORE environment on the local system.
  • For HWW framework users: you can run various small jobs in parallel and merge them once they are done. Compile and run the makeSample step before submitting to batch, since that is much quicker to run.
  • An example script can be found here: /afs/cern.ch/work/c/cmwewa/public/BatchJobs_Examples/batchtest_data_m.sh
  • However, the easiest way is to use the submitAnalysis.py script, which takes the list of sub-jobs in jobs.txt and runs them in parallel on the batch system (or whatever cluster you specify, or that it automatically suggests for you). You can see how to run the script on the HWWFramework twiki. I run it like this: ./submitAnalysis.py config/runAnalysis.cfg --jobs jobs.txt --submit bsub --queue 8nh
  • To merge outputs, check the HWWAnalysis code twiki on parallelisation: https://twiki.cern.ch/twiki/bin/viewauth/AtlasProtected/HWWAnalysisCode#Running_in_parallel_and_on_batch I currently merge using a script created by one of the ssWW contacts. To merge using the instructions on the above twiki, I do this:
       cd QFramework
       TQPATH=$(pwd)
       alias tqmerge='$TQPATH/share/tqmerge'
       export PATH=$PATH:$TQPATH/share
       cd SSWWDualUseUtils/share/
       tqmerge -o MergedOutputname.root firstfiletomerge.root secondfiletomerge.root thirdfiletomerge.root -t runAnalysis
  • You can merge any number of files at once.
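As a rough sketch, a minimal batch script following the recipe above could look like the following; the work directory, setup command and executable are placeholders, not the actual framework paths:

```shell
#!/bin/bash
# Minimal lxbatch job sketch -- all paths/commands below are placeholders.
# Submit with: bsub -q 1nd batchtest.sh

# Batch jobs start in a scratch directory, so move to your work area first
cd /afs/cern.ch/work/g/guest/myanalysis

# Set up the environment ON THE BATCH NODE (e.g. rcSetup or asetup);
# as noted above, the local setup does not carry over to the batch system.

# Then run the analysis executable
# ./runAnalysis config/runAnalysis.cfg
```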

Getting extra space on lxplus, and access to your CERNBox (EOS)

  • Your default lxplus home directory (/afs/cern.ch/user/g/guest) is only allocated 3 GB of space. To increase it to 10 GB, go to the CERN computer resources site, open the List Services tab, and under Storage click the entry for AFS storage. Then go to Settings to increase your limit and to gain access to an additional work directory with 100 GB (/afs/cern.ch/work/g/guest). Back under Storage you should also see an option for EOS/CERNBox that explains how to get access to your own 1 TB EOS directory. To access EOS, read the "Using XRootD to download files from EOS" section above.
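To see how much of your quota you are currently using, the standard AFS fs listquota command works on lxplus (paths shown for user 'guest'; substitute your own):

```shell
# Show quota and usage for the AFS home and work directories
fs listquota /afs/cern.ch/user/g/guest
fs listquota /afs/cern.ch/work/g/guest
```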

Machine Learning

Written by Matthew Leigh

Machine learning (ML) is one of the fastest-growing fields in data science today. ML is now used in almost all aspects of HEP at ATLAS: from reconstruction, to modelling, to event classification. It is highly likely that a student in the UCT-ATLAS group will come across ML at some point during their research, so the following resources are available to help understand the topic. Most ML models students encounter at ATLAS are either decision-tree based or artificial-neural-network based.

Artificial Neural Networks

The advised tutorials on neural networks and deep learning can be found here.

  • Matthew Leigh's Dissertation: This is my dissertation, which focused on the development of neural networks. Chapter 3 provides an introduction to and overview of deep learning with multilayer perceptrons. It was written to be approachable by any postgraduate student with an understanding of linear algebra. It covers many of the latest techniques used in deep learning, such as adaptive optimisers and normalisation layers. However, the following texts might be a better place to start for new students.
  • Neural Networks: This is a fantastic online textbook on neural networks that reads more like a blog post. It is only 6 chapters long and a very easy read. It is probably the most useful starting place for students unfamiliar with ML. Some of the topics, however, are slightly outdated. Chapter 2 delves into how the backpropagation algorithm works, but since most deep learning libraries use automatic differentiation, this is just a nice-to-know. Also, a lot of the worked examples use pure Python, which is useful for understanding the mechanisms behind deep learning, but the libraries listed below should be used where possible.
  • Deep Learning: This is another online textbook that focuses on neural networks and is written more like a classical textbook than the previous example. For students already familiar with linear algebra, Chapters 5-12 are probably the most useful. If you are already familiar with neural networks and/or have read the above, then this is the next best resource on the topic.

There are three very good software packages for deep learning. They are all based on Python and the debate on which to use will probably never be settled. The optimal choice will come down to personal preference as all of these packages have good tutorials, active community support and very strong development tools.

  • PyTorch is a relatively new library that has been gaining popularity over the past few years. Still seen as the 'alternative choice' by many, it is the package I used in my research, so I can attest to its usefulness. It has a smaller but very active community compared to the next options, better debugging capability, and is more popular for academic research.
  • TensorFlow is the most widely used platform and thus has the largest community. It has a steeper learning curve than PyTorch but can be better for larger datasets and visualization.
  • Keras is a higher-level API that runs on top of TensorFlow. It is very easy to learn and use, but it is noticeably slower than the other packages. It is good for building models that run on smaller datasets.

Decision Trees

For decision tree based models, the following online tutorials are advised.

There are two main Python packages for decision tree learning. Both have fantastic tutorials that are worth a read.

  • Scikit-Learn is a very broad package for machine learning in Python, but its main strengths are building and reviewing decision trees and random forests. It can also perform gradient boosting, but that is best left to the next package, and don't even think about using it to build neural nets.
  • XGBoost is a package specifically built to perform a type of gradient boosting (also called XGBoost). A boosted decision tree grown using XGBoost has become the standard method for many classification tasks.
Topic revision: r133 - 2022-11-02 - RyanJustinAtkin