Using Containers

Introduction

What are Containers?

Containers are an operating-system virtualization technology used to package applications and their dependencies and run them in isolated environments. Unlike virtual machines, they share the host OS kernel, thus providing a lightweight method of packaging and deploying applications in a standardized way across many different types of infrastructure.

The software required to run containers is known as a runtime; several runtimes are available, but the most popular ones, which are supported in ATLAS, are described in this section. You can create a container using these runtimes on your laptop, and then run it anywhere, including on the grid. This TWiki describes only how to run containers and submit them to the grid, not how to create them.

Singularity

The Singularity runtime allows you to create and run containers that package up pieces of software in a portable and reproducible way. Singularity does not grant superuser privileges, and it can access the GPU on a host node at native speed.

Docker

Docker is another supported and well-established runtime with many publicly available containers. Docker-created containers can be run with Singularity, but Singularity-created containers cannot be run with the Docker runtime.

Installation and Preparation

ATLASLocalRootBase (ALRB)

Note for Windows users: this part has to be done inside the Linux distro after you install WSL.

You will need ALRB. The preferred way is to use it from cvmfs rather than install it; but if you want to work offline without cvmfs on your laptop, these instructions show how to install a minimal version on your Mac or Linux machine.

# install a minimal ALRB
git clone https://gitlab.cern.ch/atlas-tier3sw/manageTier3SW.git ~/userSupport/manageTier3SW
cd ~/userSupport/manageTier3SW
./updateManageTier3SW.sh -a <some local dir> -i "," -j

# keep it updated manually at least daily
export ATLAS_LOCAL_ROOT_BASE=<some local dir>/ATLASLocalRootBase
updateManageTier3SW.sh -i "," -j

# in your login script, define local ALRB location
export ATLAS_LOCAL_ROOT_BASE=<some local dir>/ATLASLocalRootBase
alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh'
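
Thereafter, in a new shell, ALRB can be set up as usual. For example:

# in a new shell, after the login script above has run
setupATLAS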

cvmfs

Note for Windows users: this part has to be done inside the Linux distro after you install WSL.

You can install cvmfs; configure automount so that it does not unmount the cvmfs repositories.

  • e.g. on Linux, the /cvmfs entry in /etc/auto.master should have no timeout: /cvmfs /etc/auto.cvmfs --timeout 0
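
For reference, a minimal client configuration might look like the following (a sketch only; the repository list and proxy setting are illustrative and site-dependent):

# /etc/cvmfs/default.local (illustrative values)
CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch,unpacked.cern.ch
CVMFS_HTTP_PROXY=DIRECT    # or your site's squid proxy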

Installing cvmfs allows access to every container and release available to ATLAS, but you will need to be connected to the internet for access. If you have a laptop, you may want to install a local minimal version of ALRB and switch between it and the one from cvmfs as needed. For example:

alias cvmfsALRB='echo "using cvmfs ALRB"; export  ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase'
alias localALRB='echo "using local ALRB"; export  ATLAS_LOCAL_ROOT_BASE=<some local dir>/ATLASLocalRootBase'
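
With these aliases defined, switching looks like this (the echoed line comes from the alias definition above):

$ localALRB
using local ALRB
$ setupATLAS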

Runtime

Runtime for Linux OS

The recommended runtime to use is Singularity but Docker will also work.

The preferred method is to use Singularity from cvmfs. To use this, as root, enable user namespaces:

      echo "user.max_user_namespaces = 15000"  > /etc/sysctl.d/90-max_user_namespaces.conf
      sysctl -p /etc/sysctl.d/90-max_user_namespaces.conf
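
You can verify that the setting took effect with:

      sysctl user.max_user_namespaces
      user.max_user_namespaces = 15000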

You can also install Singularity from your Linux distribution's packages if you cannot use the version from cvmfs, or see the documentation at https://sylabs.io.

Runtime for MacOS

The recommended runtime for use is Docker.

Singularity for MacOS is available as a beta version, but it is not yet supported in ATLAS. It will be revisited once a stable version is released.

Runtime for Windows 10

In order to use Singularity on Windows, you first need to install a Linux distro. This can be done through the Windows Subsystem for Linux (WSL) without involving a virtual machine. WSL is a Windows 10 feature that enables you to run native Linux command-line tools directly on Windows; it is not available on older Windows versions such as Windows 7.

Installation of Windows Subsystem for Linux (WSL)

Please refer to the WSL installation guide for Windows 10.

First enable the optional feature Microsoft-Windows-Subsystem-Linux. Open PowerShell as Administrator and run:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux
When prompted, restart the computer.

Then you can install your preferred Linux distro from the Microsoft Store (CentOS is not available there). You can, however, find an installation guide for CentOS at TipsMake.com, or download the zip file directly from GitHub and follow the instructions to install it.

Once your distro has been downloaded and installed, you will be prompted to create a new user account and password to initialize the new Linux distro.

Running Windows 10 build 18917 or higher

In order to install WSL2, ensure that you are running Windows 10 build 18917 or higher. You can check your Windows version by opening Command Prompt and running the ver command.

Microsoft Windows [Version 10.0.19587.1000]
(c) 2020 Microsoft Corporation. All rights reserved.

C:\Users\Shuwei>ver

Microsoft Windows [Version 10.0.19587.1000]

In fact, the Windows build information is already displayed at the top of the terminal when the Command Prompt app is opened.

You can also check the Windows build info in PowerShell with the systeminfo command:

PS C:\Users\Shuwei> systeminfo | Select-String "^OS Name","^OS Version"

OS Name:                   Microsoft Windows 10 Home Insider Preview
OS Version:                10.0.19587 N/A Build 19587

If your Windows build is lower than 18917, you can update it by joining the Windows Insider Program and selecting the Fast or Slow ring. Search for "Windows Insider Program" in the Windows Start search box and click the Windows Insider Program settings result. In the settings, set your Insider level to the Fast or Slow ring. You can find more details at How to get started with Windows 10 Insider Preview builds.

Installation of WSL2

You can find detailed instructions on installing WSL2 on Windows 10. The first two requirements have already been discussed above. Next you need to:

  • Enable the 'Virtual Machine Platform' optional component
  • Set a distro to be backed by WSL2 using the command line
  • Verify what versions of WSL your distros are using

To enable the Virtual Machine Platform, run PowerShell as Administrator with:

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
Then restart the computer.

Next, set the Linux distro to WSL2 with the following command in PowerShell:

wsl --set-default-version 2
which may take a while to apply.
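
If a distro was already installed before you changed the default, you can also convert it individually (the distro name here is an example; use the name shown by wsl -l):

wsl --set-version Ubuntu-18.04 2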

Now you can verify which WSL version the Linux distro uses:

PS C:\Users\Shuwei> wsl -l -v
  NAME            STATE           VERSION
* Ubuntu-18.04    Stopped         2

Use the Installed Linux on Windows 10

Open PowerShell under a regular user and run wsl:

PS C:\Users\Shuwei> wsl
yesw2000@Home-Dell660:/mnt/c/Users/Shuwei$ echo $0
-bash
yesw2000@Home-Dell660:/mnt/c/Users/Shuwei$
which starts the Linux distro and enters bash.

You can also start the Linux distro by searching for wsl or bash in the Windows Start search box and clicking on the wsl Run command or bash Run command.

If the Linux distro is already running, you can run bash to enter it:

PS C:\Users\Shuwei> wsl -l -v
  NAME            STATE           VERSION
* Ubuntu-18.04    Running         2
PS C:\Users\Shuwei> bash
yesw2000@Home-Dell660:/mnt/c/Users/Shuwei$

After all terminals associated with the Linux distro have been closed, the running distro will stop.
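
You can also stop a distro explicitly from PowerShell: wsl -t terminates the named distro, while wsl --shutdown stops all running distros (the distro name is an example):

PS C:\Users\Shuwei> wsl -t Ubuntu-18.04
PS C:\Users\Shuwei> wsl --shutdown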

Singularity Installation on WSL2

Start the Linux distro on Windows. Then install cvmfs in the same way as on a Linux OS. You can also install a minimal ALRB and/or Singularity if you do not want to use cvmfs.

Post Installation Preparation

  • All Platforms
    • Create $HOME/.globus and copy your grid credentials. Make sure that the protections are correct
      • chmod 744 $HOME/.globus
      • chmod 444 $HOME/.globus/usercert.pem
      • chmod 400 $HOME/.globus/userkey.pem
    • Define RUCIO_ACCOUNT either in your $HOME login script or in the $HOME/.*rc.container scripts (see the Login Scripts section below)
    • Define SINGULARITY_TMPDIR to a directory; it will contain the Singularity downloaded caches and built images. (It is recommended not to use /afs or /eos, as image builds fail on those shared filesystems.) See the sketch after this list.

  • MacOSX:
    • Install XQuartz https://www.xquartz.org
      • Launch XQuartz preferences, select Security and enable "Authenticate connections" and "Allow connections from network clients"
    • In System Preferences, select Sharing and enter a name in "Computer Name". Give it a name without spaces or special characters (only A-Za-z0-9).
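
For example, the corresponding lines in your login script might look like this (a sketch; the account name and directory are placeholders):

export RUCIO_ACCOUNT=<your rucio account>
export SINGULARITY_TMPDIR=$HOME/singularity-tmp    # avoid /afs and /eos
mkdir -p $SINGULARITY_TMPDIR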

Running the Container

Running a container consists of specifying the -c option to setupATLAS, which can handle any type of container. Note that:

  • setupATLAS -c -h will show an extensive help list
  • setupATLAS -c --showVersions will show all available Singularity containers in /cvmfs/unpacked.cern.ch (works with the Singularity runtime only)

Here are some examples of usage:

  • setup a CentOS7 or SLC6 ATLAS-ready container (OS containers):
    • setupATLAS -c centos7
    • setupATLAS -c slc6
  • setup a standalone ATLAS container from docker
    • setupATLAS -c docker://atlas/athena:21.0.15_100.0.2-noAtlasSetup
  • setup the same ATLAS container from /cvmfs/unpacked.cern.ch (shortcut and full path):
    • setupATLAS -c atlas/athena:21.0.15_100.0.2-noAtlasSetup
    • setupATLAS -c /cvmfs/unpacked.cern.ch/registry.hub.docker.com/atlas/athena:21.0.15_100.0.2-noAtlasSetup
  • setup various public containers from docker
    • setupATLAS -c docker://alpine
    • setupATLAS -c docker://busybox
    • setupATLAS -c docker://ubuntu
    • setupATLAS -c docker://opensuse/leap
    • setupATLAS -c docker://dockershelf/latex
    • setupATLAS -c docker://fastgenomics/sklearn:0.19.1-p36-v5

More details on setupATLAS -c can be found here.

Examples

Using Containers for LaTeX

If you need LaTeX and do not have LaTeX installed locally, you can use LaTeX container images.

$ setupATLAS -c docker://dockershelf/latex
Using ($SINGULARITY_TMPDIR defined) /home/desilva/singularity-tmp for sandbox ...
Now building container ...
Locking dir ...
lockfile: creating /home/desilva/singularity-tmp/dockershelf_latex/lockfile
 /home/desilva/singularity-tmp/dockershelf_latex/noupdate found; will not do 'singularity build -u'
Removing lock dir ...
Info: /cvmfs mounted; do 'setupATLAS -d -c ...' to skip default mounts.
Info: $HOME mounted; do 'setupATLAS -d -c ...' to skip default mounts.
------------------------------------------------------------------------------
Singularity: 3.2.1
From: /cvmfs/atlas.cern.ch/repo/containers/sw/singularity/x86_64-el7/3.2.1/bin/singularity
ContainerType: non-atlas
singularity  exec  -e  -H /home/desilva/.alrb/container/singularity/home.JBuw0W:/alrb -B /cvmfs:/cvmfs -B /home:/home -B /home/desilva:/srv /home/desilva/singularity-tmp/dockershelf_latex/image /bin/bash
------------------------------------------------------------------------------

         This image was built using Dockershelf.
         For more information, visit
         https://github.com/Dockershelf/dockershelf

 setupATLAS is available for this non-atlas container type.
          
$ 

There is another, older PDFLaTeX image on the Docker Hub, but with many more packages installed.

$ setupATLAS -c docker://astrotrop/pdflatex
Using ($SINGULARITY_TMPDIR defined) /home/desilva/singularity-tmp for sandbox ...
Now building container ...
Locking dir ...
lockfile: creating /home/desilva/singularity-tmp/astrotrop_pdflatex/lockfile
 /home/desilva/singularity-tmp/astrotrop_pdflatex/noupdate found; will not do 'singularity build -u'
Removing lock dir ...
Info: /cvmfs mounted; do 'setupATLAS -d -c ...' to skip default mounts.
Info: $HOME mounted; do 'setupATLAS -d -c ...' to skip default mounts.
------------------------------------------------------------------------------
Singularity: 3.2.1
From: /cvmfs/atlas.cern.ch/repo/containers/sw/singularity/x86_64-el7/3.2.1/bin/singularity
ContainerType: non-atlas
singularity  exec  -e  -H /home/desilva/.alrb/container/singularity/home.cWSX64:/alrb -B /cvmfs:/cvmfs -B /home:/home -B /home/desilva:/srv /home/desilva/singularity-tmp/astrotrop_pdflatex/image /bin/bash
------------------------------------------------------------------------------
 setupATLAS is available for this non-atlas container type.
          
Singularity> 

Then you can process your tex file inside the container.
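
For example, assuming a file mydoc.tex (a hypothetical name) in the directory you started from, and that the image provides pdflatex:

Singularity> pdflatex mydoc.tex    # mydoc.tex is an example file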

Using Containers for Machine Learning

There are many containers available for machine learning.

For example, you can use sklearn containers.

$ setupATLAS -c docker://fastgenomics/sklearn:0.19.1-p36-v5
Using ($SINGULARITY_TMPDIR defined) /home/desilva/singularity-tmp for sandbox ...
Now building container ...
Locking dir ...
lockfile: creating /home/desilva/singularity-tmp/fastgenomics_sklearn_0.19.1-p36-v5/lockfile
 /home/desilva/singularity-tmp/fastgenomics_sklearn_0.19.1-p36-v5/noupdate found; will not do 'singularity build -u'
Removing lock dir ...
Info: /cvmfs mounted; do 'setupATLAS -d -c ...' to skip default mounts.
Info: $HOME mounted; do 'setupATLAS -d -c ...' to skip default mounts.
------------------------------------------------------------------------------
Singularity: 3.2.1
From: /cvmfs/atlas.cern.ch/repo/containers/sw/singularity/x86_64-el7/3.2.1/bin/singularity
ContainerType: non-atlas
singularity  exec  -e  -H /home/desilva/.alrb/container/singularity/home.nlv1uq:/alrb -B /cvmfs:/cvmfs -B /home:/home -B /home/desilva:/srv /home/desilva/singularity-tmp/fastgenomics_sklearn_0.19.1-p36-v5/image /bin/bash
------------------------------------------------------------------------------
 setupATLAS is available for this non-atlas container type.
          
Singularity> python3
Python 3.6.6 (default, Aug 24 2018, 05:04:18) 
[GCC 6.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sklearn
/usr/lib/python3.6/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
>>> import pandas
>>> import numpy 
>>> import scipy
>>> 

Using Linux OS

If you would like to run ATLAS applications built for CentOS7, but you are running a Linux OS not compatible with CentOS7, you can make use of the CentOS7 container with ALRB:

lxplus$ setupATLAS  -c centos7
------------------------------------------------------------------------------
Singularity: 3.5.3
From: /usr/bin/singularity
ContainerType: atlas-default
singularity  exec  -e  -H /afs/cern.ch/user/y/yesw/.alrb/container/singularity/home.snlmtE:/alrb -B /cvmfs:/cvmfs -B /afs/cern.ch/user/y:/home -B /tmp/yesw:/srv /cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7 /bin/bash
------------------------------------------------------------------------------
lsetup               lsetup <tool1> [ <tool2> ...] (see lsetup -h):
 lsetup agis          ATLAS Grid Information System
 lsetup asetup        (or asetup) to setup an Athena release
 lsetup atlantis      Atlantis: event display
 lsetup eiclient      Event Index 
 lsetup emi           EMI: grid middleware user interface 
 lsetup ganga         Ganga: job definition and management client
 lsetup lcgenv        lcgenv: setup tools from cvmfs SFT repository
 lsetup panda         Panda: Production ANd Distributed Analysis
 lsetup pod           Proof-on-Demand (obsolete)
 lsetup pyami         pyAMI: ATLAS Metadata Interface python client
 lsetup root          ROOT data processing framework
 lsetup rucio         distributed data management system client
 lsetup views         Set up a full LCG release
 lsetup xcache        XRootD local proxy cache
 lsetup xrootd        XRootD data access
advancedTools        advanced tools menu
diagnostics          diagnostic tools menu
helpMe               more help
printMenu            show this menu
showVersions         show versions of installed software

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
You are running on a RHEL7 compatible OS.
Please refer to these pages for the status and open issues:
 For releases:
  https://twiki.cern.ch/twiki/bin/view/AtlasComputing/CentOS7Readiness#ATLAS_software_status
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


Singularity> 

Using Atlas Release Containers

Many ATLAS releases have also been distributed as container images. You can run "setupATLAS -c --showVersions | grep atlas/" to find the full list. The container name has the syntax project:releasename, where the project name is all lowercase. For the AnalysisBase project, you can run "setupATLAS -c --showVersions | grep atlas/analysisbase" to get the full list.

Let us take as an example the release AnalysisBase,21.2.126 under /cvmfs/unpacked.cern.ch/registry.hub.docker.com/atlas/

lxplus$ setupATLAS -c atlas/analysisbase:21.2.126
Info: /cvmfs mounted; do 'setupATLAS -d -c ...' to skip default mounts.
Info: $HOME mounted; do 'setupATLAS -d -c ...' to skip default mounts.
------------------------------------------------------------------------------
Singularity: 3.5.3
From: /usr/bin/singularity
ContainerType: non-atlas-standalone
singularity  exec -B /afs:/afs -B /eos:/eos -e  -H /afs/cern.ch/user/y/yesw/.alrb/container/singularity/home.6HTys2:/alrb -B /cvmfs:/cvmfs -B /afs/cern.ch/user/y:/home -B /tmp/yesw:/srv /cvmfs/unpacked.cern.ch/registry.hub.docker.com/atlas/analysisbase:21.2.126 /bin/bash
------------------------------------------------------------------------------
 setupATLAS is available for this non-atlas-standalone container type.
          
 /afs /eos available in this container
             _ _____ _      _   ___ 
            /_\_   _| |    /_\ / __|
           / _ \| | | |__ / _ \\__ \
          /_/ \_\_| |____/_/ \_\___/

This is a self-contained ATLAS AnalysisBase image.
To set up the analysis release of the image, please
execute:

          source /release_setup.sh

Singularity> source /release_setup.sh
Configured GCC from: /opt/lcg/gcc/8.3.0-cebb0/x86_64-centos7/bin/gcc
Configured AnalysisBase from: /usr/AnalysisBase/21.2.126/InstallArea/x86_64-centos7-gcc8-opt
[bash][yesw AnalysisBase-21.2.126]:srv >

That is, start the desired container, then source /release_setup.sh.

Let us take another example of release AthAnalysis,21.2.115.

lxplus$ setupATLAS -c atlas/athanalysis:21.2.115
Info: /cvmfs mounted; do 'setupATLAS -d -c ...' to skip default mounts.
Info: $HOME mounted; do 'setupATLAS -d -c ...' to skip default mounts.
------------------------------------------------------------------------------
Singularity: 3.5.3
From: /usr/bin/singularity
ContainerType: non-atlas-standalone
singularity  exec -B /afs:/afs -B /eos:/eos -e  -H /afs/cern.ch/user/y/yesw/.alrb/container/singularity/home.6wzpdF:/alrb -B /cvmfs:/cvmfs -B /afs/cern.ch/user/y:/home -B /tmp/yesw:/srv /cvmfs/unpacked.cern.ch/registry.hub.docker.com/atlas/athanalysis:21.2.115 /bin/bash
------------------------------------------------------------------------------
 setupATLAS is available for this non-atlas-standalone container type.
          
 /afs /eos available in this container
             _ _____ _      _   ___ 
            /_\_   _| |    /_\ / __|
           / _ \| | | |__ / _ \\__ \
          /_/ \_\_| |____/_/ \_\___/

This is a self-contained ATLAS AthAnalysis image.
To set up the analysis release of the image, please
execute:

          source /release_setup.sh

Singularity> source /release_setup.sh
Configured GCC from: /opt/lcg/gcc/8.3.0-cebb0/x86_64-centos7/bin/gcc
Taking LCG releases from: /opt/lcg
Taking Gaudi from: /usr/GAUDI/21.2.115/InstallArea/x86_64-centos7-gcc8-opt
Configured AthAnalysis from: /usr/AthAnalysis/21.2.115/InstallArea/x86_64-centos7-gcc8-opt
[bash][yesw AthAnalysis-21.2.115]:~ >

Using Atlas Containers for Machine Learning

Atlas also provides machine learning containers on CVMFS:

lxplus$ ls /cvmfs/unpacked.cern.ch/registry.hub.docker.com/atlasml
atlasml-base:latest    ml-base:centos           ml-base:py-3.6.8
atlasml-base:py-3.6.8  ml-base:centos-py-3.6.8  ml-base:py-3.7.2
atlasml-base:py-3.7.2  ml-base:centos-py-3.7.2
ml-base:bionic         ml-base:latest

lxplus$ singularity shell /cvmfs/unpacked.cern.ch/registry.hub.docker.com/atlasml/atlasml-base:py-3.7.2
Singularity> python3
>>> import sklearn
>>> import torch

As the container path /cvmfs/unpacked.cern.ch/registry.hub.docker.com/atlasml/ indicates, the above container is also available on the Docker Hub via docker://. However, it would take quite a while (about an hour) to pull the container from the Docker Hub and convert it into a Singularity image file (sif). To use the latest atlasml container, you can run:

lxplus$ setupATLAS -c atlasml/atlasml-base:latest
Info: /cvmfs mounted; do 'setupATLAS -d -c ...' to skip default mounts.
Info: $HOME mounted; do 'setupATLAS -d -c ...' to skip default mounts.
------------------------------------------------------------------------------
Singularity: 3.5.3
From: /usr/bin/singularity
ContainerType: non-atlas
singularity  exec -B /afs:/afs -B /eos:/eos -e  -H /afs/cern.ch/user/y/yesw/.alrb/container/singularity/home.Zv9FcG:/alrb -B /cvmfs:/cvmfs -B /afs/cern.ch/user/y:/home -B /tmp/yesw:/srv /cvmfs/unpacked.cern.ch/registry.hub.docker.com/atlasml/atlasml-base:latest /bin/bash
------------------------------------------------------------------------------
 setupATLAS is available for this non-atlas container type.
          
 /afs /eos available in this container
Singularity> 

Explore Container Images

Containers on CVMFS

There are a few Singularity containers accessible by the keywords slc5, slc6, centos6 and centos7 through the command setupATLAS -c. There are many other images available under /cvmfs/unpacked.cern.ch/. If this CVMFS path is not visible, please add this mount point to the CVMFS client configuration on your computer.
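
For example, with the standard cvmfs client tools (a sketch; adapt to your installation):

# add unpacked.cern.ch to CVMFS_REPOSITORIES in /etc/cvmfs/default.local, then:
sudo cvmfs_config setup
cvmfs_config probe unpacked.cern.ch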

lxplus$ ls /cvmfs/unpacked.cern.ch/
gitlab-registry.cern.ch  logDir  registry.hub.docker.com

lxplus$ ls /cvmfs/unpacked.cern.ch/registry.hub.docker.com/
atlas        atlasml   cmssw      jodafons  lofaruser      siscia
atlasadc     atlrpv1l  danikam    kratsg    lukasheinrich  stfc
atlasamglab  clelange  engineren  library   pyhf           sweber613

lxplus$ ls /cvmfs/unpacked.cern.ch/registry.hub.docker.com/atlasml
atlasml-base:latest    ml-base:centos           ml-base:py-3.6.8
atlasml-base:py-3.6.8  ml-base:centos-py-3.6.8  ml-base:py-3.7.2
atlasml-base:py-3.7.2  ml-base:centos-py-3.7.2

The containers under /cvmfs/unpacked.cern.ch/registry.hub.docker.com/atlasml/ are for machine learning. ATLAS release containers are under /cvmfs/unpacked.cern.ch/registry.hub.docker.com/atlas/

lxplus$ ls /cvmfs/unpacked.cern.ch/registry.hub.docker.com/atlas
analysisbase:21.2.10            athanalysis:21.2.102
analysisbase:21.2.100           athanalysis:21.2.10-20171115
[...]
analysisbase:21.2.16            athanalysis:21.2.19
analysisbase:21.2.16-20180129   athanalysis:21.2.19-20180221
analysisbase:21.2.17            athena:21.0.15
analysisbase:21.2.17-20180206   athena:21.0.15_100.0.2
analysisbase:21.2.18            athena:21.0.15_31.8.1
analysisbase:21.2.18-20180213   athena:21.0.15_DBRelease-100.0.2_Patched
analysisbase:21.2.19            athena:21.0.23
analysisbase:21.2.19-20180221   athena:21.0.23_DBRelease-200.0.1
analysisbase:21.2.60            athena:21.0.31
analysisbase:21.2.88            athena:21.0.31_100.0.2
athanalysis:21.2.10             athena:21.0.31_31.8.1
athanalysis:21.2.100            athena:22.0.5_2019-09-24T2128_100.0.2
athanalysis:21.2.100-20191127   athena:22.0.6_2019-10-04T2129
athanalysis:21.2.101            athena:22.0.9
athanalysis:21.2.101-20191208

For containers under /cvmfs/unpacked.cern.ch/, the image location given to ALRB can be shortened by omitting the leading path (e.g. /cvmfs/unpacked.cern.ch/registry.hub.docker.com/), as shown by setupATLAS -c --showVersions.
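
For example, the following two commands are equivalent:

setupATLAS -c /cvmfs/unpacked.cern.ch/registry.hub.docker.com/atlas/analysisbase:21.2.126
setupATLAS -c atlas/analysisbase:21.2.126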

Containers on Docker Hub

The Docker Hub hosts the largest collection of container images. You can enter a keyword to search on the hub. For example, you can put the keyword "atlas/" in the search field as shown below:

  • A screenshot of searching for "Atlas/" on the Docker Hub:
    DockerHub-Atlas.jpg

Clicking on a container in the results shows the pull command and sometimes also a brief description.

Containers on Singularity Hub and Library

There are many container images on the Singularity Hub and Library.

  • Singularity Hub: https://singularity-hub.org/. Click "Collections" on the top menu to search by Label, Tag or App name.
  • Singularity Library: https://cloud.sylabs.io/library. It is not supported in Singularity version 2. Enter a keyword in the search field at the very top to search for the container you want.

Contained-based Jobs on the Grid

Since there are more resources available on the grid, you can run container-based jobs there. Both prun and pathena provide the option --containerImage to allow jobs to run inside a specified container on the grid.

Note that if you are using an OS container (that is, setupATLAS -c centos7 or setupATLAS -c slc6), you can do lsetup panda inside that container and submit the job to the grid; it will automatically run on the grid using the same container you submitted from, as sketched below. For other containers, the rest of this section has the relevant details.
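
A minimal sketch of that workflow (the exec string and dataset names are placeholders):

setupATLAS -c centos7
lsetup panda
prun --exec "echo %IN > out.dat" --outputs out.dat --outDS user.blah --inDS ...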

Usage help on grid container-based jobs

You can run prun --helpGroup=containerJob or pathena --helpGroup=containerJob for more container-related options, as shown below:

containerJob:
  For container-based jobs

  --containerImage CONTAINERIMAGE
                        Name of a container image
  --alrb                Use ALRB for container execution
  --wrapExecInContainer
                        Execute the --exec string through runGen in the
                        container
  --alrbArgs ALRBARGS   Additional arguments for ALRB to run the container.
                        "setupATLAS -c --help" shows available ALRB arguments.
                        For example, --alrbArgs "--nocvmfs --nohome" to skip
                        mounting /cvmfs and $HOME. This option is mainly for
                        experts who know how the system and the container
                        communicates with each other and how additional ALRB
                        arguments affect the consequence
  --oldContMode         Use runcontainer for container execution. Note that
                        this option will be deleted near future. Try the new
                        ARLB scheme as soon as possible and report if there is
                        a problem

Visit the following wiki page for examples:
  https://twiki.cern.ch/twiki/bin/view/PanDA/PandaRun#Run_user_containers_jobs

Please test the job interactively first prior to submitting to the grid.
Check the following on how to test container job interactively:
  https://twiki.cern.ch/twiki/bin/viewauth/AtlasComputing/SingularityInAtlas

Currently there are two implementations for running container-based jobs on the grid:

  1. oldContMode: at present this is still the default mode, but it will be deprecated soon.
  2. ALRB: this will become the default mode soon.

Please use the option --alrb to try the ALRB mode. You can test your job interactively with "setupATLAS -c" (see the examples above) prior to submitting it to the grid.

A few notes about container-based jobs on the grid:

  • Other options of prun/pathena should work for both standard and container jobs.
  • Files in the local working directory are sent to the grid in the same way as for standard jobs, so users can use those files on top of containers.
  • The initial working directory is mounted to /srv/workDir, where all output files must be placed for stage-out. Input files are also copied there if sites or jobs use the copy-to-scratch mode for stage-in. Users' applications are executed in that directory.
  • The path to the initial working directory should not be hardcoded, since it is just a convention and might change in the future. Users' applications are encouraged to read/write files where they are executed, or to get the path dynamically using os.getcwd() or similar.
  • The --containerImage argument is docker://blah, docker://gitlab-registry.cern.ch/blah, or /cvmfs/unpacked.cern.ch/blah
  • Containers are executed in read-only mode, so all write operations need to be done in the current working directory.
  • ALRB uses the /srv, /home, /scratch, /cvmfs, /alrb, /eos, and /afs mount points. If containers have those directories inside, they will be hidden.
  • It is possible to disable the default mounting, except for /srv (where the initial working directory is mounted to meet the minimum requirement for jobs to run on the grid), by using --alrbArgs="-d". The --alrbArgs option takes an additional argument string for ALRB; run setupATLAS -c -h for available arguments. See the example after this list.
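
For example, a minimal-mount variant of the busybox job shown in the next section might look like this (a sketch built only from the options described above; dataset names are placeholders):

prun --exec "echo %IN > out.dat" --containerImage docker://busybox --alrb --alrbArgs="-d" --outputs out.dat --outDS user.blah --inDS ...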

Examples of grid container-based jobs

For example, you can use Docker images or images on cvmfs to run prun jobs like:

  • $ prun --exec "echo %IN > out.dat" --containerImage docker://busybox --alrb --outputs out.dat --outDS user.blah --inDS ...
  • $ prun --exec "source /release_setup.sh; echo %IN > input.txt; root.exe -b -q macrotest.C" --containerImage /cvmfs/unpacked.cern.ch/gitlab-registry.cern.ch/multibjets/mbj_analysis:21-2-114-ewk-2 --alrb --outputs out.dat --outDS user.blah --inDS ...

For pathena, you can run similarly:

  • $ pathena --containerImage docker://atlas/athsimulation:sw2-1 --outDS user.blah --inDS ... myOptions.py
  • $ pathena --containerImage /cvmfs/unpacked.cern.ch/registry.hub.docker.com/atlas/athsimulation:sw2-1 --outDS user.blah --trf "Reco_tf.py ..." --inDS ...

Major updates: -- ShuweiYE - 2020-06-30

%RESPONSIBLE% AsokaDeSilva
%REVIEW% Never reviewed

-- AsokaDeSilva - 2020-06-30
