ATLAS Storage at CERN


This page describes how to access ATLAS data at CERN in its different storage locations, explaining the commands and tools to use and the access policy for each storage area.
ATLAS storage at CERN now relies mostly on EOS for disk-only space and on Castor for pools with a tape back-end.
The rfio protocol (nsls, rfdir, rfcp, ...) is strongly deprecated for accessing data stored at CERN and will be dropped in the near future (except for special users and activities, such as Tier0, Tier3 and the CAF).
xrootd should be used instead: it works with both Castor and EOS.
All commands and examples in this page are meant to be issued on lxplus nodes inside the CERN network, although most of them also work from outside CERN.

Storage resources

Disk-only spaces use the newly developed EOS system (based on the xrootd architecture), while spaces with a tape back-end still use the Castor system.
The following table summarizes the storage resources for ATLAS.

| Space token | Path | Notes |
| CERN-PROD_DAQ | /castor/ | the pool is not readable directly; request a Rucio rule |
| CERN-PROD_DATATAPE | /castor/ | the pool is not readable directly; request a Rucio rule |
| CERN-PROD_LOCALGROUPDISK | /castor/ | the pool is readable by all ATLAS users; only CERN Tier3 users can write to it |
| CERN-PROD_MCTAPE | /castor/ | the pool is not readable directly; request a Rucio rule |
| CERN-PROD_SPECIALDISK | /castor/ | used for COND data; read-only pool |
| CERN-PROD_TZERO | /castor/ | the pool is not readable directly; request a Rucio rule |
| atlcal | /castor/ | the pool is readable by all ATLAS users; only CAF members can write to it (more info at AtlasCAF) |
| CERN-PROD_DATADISK | /eos/atlas/atlasdatadisk/ | deployed on 13 Sept 2011 |
| CERN-PROD_SCRATCHDISK | /eos/atlas/atlasscratchdisk/ | deployed on 13 Sept 2011 |
| CERN-PROD_[GROUP_NAME] | /eos/atlas/atlasgroupdisk/[group_name]/dq2 | deployed mid October 2011 |
| private ATLAS users areas | /eos/atlas/user/[l]/[login] | deployed on 13 Sept 2011; deprecated since May 2017 |
| personal users areas | /eos/user/[l]/[login] | |
| local groups areas | /eos/atlas/atlascerngroupdisk/[group_name] | deployed on 19 Sept 2011 |
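The path conventions in the table above can be turned into full xrootd URLs. The following sketch shows how; the login "tkouba" and the group "phys-higgs" are illustrative examples taken from elsewhere on this page, not requirements:

```shell
# Build full xrootd URLs from the EOS path conventions in the table above.
# "tkouba" and "phys-higgs" are placeholder examples.
login="tkouba"
letter="${login:0:1}"        # user areas are bucketed by first letter
group="phys-higgs"

echo "root://eosatlas//eos/atlas/atlasdatadisk/"                 # DATADISK
echo "root://eosuser//eos/user/${letter}/${login}/"              # personal CERNBox area
echo "root://eosatlas//eos/atlas/atlascerngroupdisk/${group}/"   # local group area
```

Note that personal areas live on the eosuser instance, while ATLAS data and group areas live on eosatlas.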

Main changes in the migration from Castor to EOS

The main change in the migration is the removal of the rfio protocol (EOS does not support it). Although Castor pools can still handle rfio requests, the protocol is strongly deprecated and access to the storage should always go through xrootd.
The following table shows the former and new ways to access data in the different locations (more details in the following sections).

| Former access | New access |
CERN-PROD_DATADISK
| rfio://castoratlas//castor/ | no rfio |
| root://castoratlas//castor/ | root://eosatlas//eos/atlas/atlasdatadisk/ |
| no local access | file:/eos/atlas/atlasdatadisk/ (advanced *) |
CERN-PROD_SCRATCHDISK
| rfio://castoratlas//castor/ | no rfio |
| root://castoratlas//castor/ | root://eosatlas//eos/atlas/atlasscratchdisk/ |
| no local access | file:/eos/atlas/atlasscratchdisk/ (advanced *) |
private ATLAS users areas
| rfio://castoratlas//castor/[l]/[login] | no rfio |
| root://castoratlas//castor/[l]/[login] | root://eosatlas//eos/atlas/user/[l]/[login] |
| no local access | file:/eos/atlas/user/[l]/[login] (advanced *) |
personal users areas
| | no rfio |
| no local access | file:/eos/user/[l]/[login] (advanced *) |
local groups areas
| rfio://castoratlas//castor/[group_name] | no rfio |
| root://castoratlas//castor/[group_name] | root://eosatlas//eos/atlas/atlascerngroupdisk/[group_name] |
| no local access | file:/eos/atlas/atlascerngroupdisk/[group_name] (advanced *) |
* To have local file access, you first need to mount EOS as a local filesystem; please refer to the Advanced Usage page.
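The mapping can be spelled out for a single hypothetical file; "myfile.root" below is a placeholder name, not a real dataset:

```shell
# Former vs. new access for a hypothetical file "myfile.root" on DATADISK.
# rfio is gone entirely; the same file is reached via xrootd or, if you
# have mounted EOS locally (see Advanced Usage), as a plain local path.
f="myfile.root"                                    # placeholder file name
new_url="root://eosatlas//eos/atlas/atlasdatadisk/${f}"
local_path="/eos/atlas/atlasdatadisk/${f}"         # requires the EOS mount
echo "$new_url"
echo "$local_path"
```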

EOS storage system

The following links contain the main EOS documentation:

Users area on EOS

Until May 2017, space for individual users was allocated on ATLAS EOS in the directory /eos/atlas/user/<letter>/<username>

Since May 2017, all users should use EOS via the CERNBox service:

  • The path is /eos/user/<letter>/<username>
  • The quota is 1TB
  • The space is allocated automatically with the first login to the CERNBox web interface
  • The CERN IT provides detailed documentation on how to access CERNBox on various platforms with various protocols.
    • It includes useful tips and tricks on sharing your data and on accessing your data from web applications such as the ROOT file viewer, the SWAN service, etc.

The ATLAS-personal space under /eos/atlas/user/ is deprecated and no new areas will be created there.

EOS is now automatically mounted on all lxplus and lxbatch nodes under the /eos directory, which means all files can be copied, opened, etc. with standard Linux tools.
The following recipe provides instructions on how to mount EOS on your own Linux box.
Setting permissions and sharing for /eos/user is currently only possible via the CERNBox UI; please see the CERNBox Tutorial for details.
If you choose to install the CERNBox Desktop Sync Client, please be careful about what you sync:
syncing the entire area is not recommended, especially if you use your EOS area for storing grid files.
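Since the mount behaves like an ordinary directory, standard tools just work. The sketch below uses an illustrative DATADISK path and guards against the mount being absent (e.g. off-site):

```shell
# With EOS fuse-mounted under /eos on lxplus/lxbatch, plain Linux tools
# work directly on it. The path is illustrative; the guard handles hosts
# where the mount is absent.
EOSDIR=/eos/atlas/atlasdatadisk
if [ -d "$EOSDIR" ]; then
    msg="EOS mounted"
    ls "$EOSDIR" | head -5        # ordinary directory listing
    df -h "$EOSDIR"               # mount and usage information
else
    msg="EOS is not mounted on this host"
fi
echo "$msg"
```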

If you need to open files from ROOT, you must specify the path including the protocol (root://), the server hostname (eosatlas for ATLAS data, eosuser for user data), one slash, and the full path (including its leading slash). The same applies if you want to initiate a transfer with the xrdcp command. For example:

TFile *file = TFile::Open("root://eosatlas//eos/atlas/atlascerngroupdisk/phys-higgs/HSG1/MxAOD/h012/Archive.h012.SmallFiles_LessThan100Mb/data16/data16_13TeV.periodAll25ns_410ipb_onlyToroidIssues.physics_Main.MxAOD.p2623.h012.root");

xrdcp local.file root://eosuser//eos/user/t/tkouba/dest_file

If you need to issue special management commands (e.g. changing permissions, checking quota), use the eos CLI as described in the EOS FAQ.
Note that the CLI does NOT work for the EOS space on CERNBox under /eos/user/
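A minimal sketch of such management commands is shown below; it assumes the eos CLI is installed (as on lxplus), and the group path is an illustrative example. Remember these commands do not apply under /eos/user/:

```shell
# Hedged sketch of common eos CLI management commands. Run on a host
# where the CLI is installed (e.g. lxplus); the group path is only an
# illustrative example. Does NOT work for CERNBox space under /eos/user/.
if command -v eos >/dev/null 2>&1; then
    export EOS_MGM_URL=root://eosatlas.cern.ch       # target the ATLAS instance
    eos ls /eos/atlas/atlascerngroupdisk/phys-higgs  # list a directory
    eos quota                                        # show your quota and usage
    status="eos CLI found"
else
    status="eos CLI not available on this host"
fi
echo "$status"
```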


Local groups area on EOS

ATLAS groups who need local (non-Grid) space at CERN may have a reserved space in EOS under the /eos/atlas/atlascerngroupdisk/ directory.
To ask for space or to check the quota, please read the instructions in ATLASGroupsOnEOS.

Castor pools

This information is obsolete

This section is intended for the Tier0 resources only!

How to list directories on Castor

Directories in Castor can be listed using xrootd or the rfio protocol.

With xrootd, the command xrd castoratlas ls /castor/ will list the content of a given directory.
To recursively list the content of a directory, issue xrd castoratlas dirlistrec /castor/

Going through rfio, two commands can be used: nsls or rfdir.
rfdir is a generic command (it can also be used to browse your local filesystem), while nsls works only on Castor files and has a few more options (e.g. printing the tape a file resides on) and features (e.g. the "m" flag for tape-migrated files) that make it the preferable command:

nsls -l /castor/

How to write files to Castor

ATLAS Castor pools are not meant for direct write access by users; they are managed by the DDM system.
In order to write data to a Castor pool, you should request a Rucio rule. Note that regular users can only request rules to SCRATCHDISK; rules to other tokens require membership in special projects or groups.

How to read files from Castor

Tape pools cannot be read directly by users; data on them can only be accessed by requesting a Rucio rule to an accessible space token (e.g. CERN-PROD_SCRATCHDISK).
Data on disk pools can be accessed either through such a rule or directly using dq2 tools (namely dq2-get) or xrdcp. Please note that rfcp is strongly deprecated.
To copy a file from a disk pool, just follow this example:
xrdcp root://castoratlas//castor/ .
You can also directly access a root file from inside ROOT:
TFile *file = TFile::Open("root://castoratlas//castor/");

Access to RAW datasets and to Tier0 products

This information is obsolete

The way you access RAW data depends on the kind of activity you are planning to run.

If your activity is in the scope of Calibration and Alignment, you can access such data from the CAF. A twiki explaining the way the CAF works and the contacts for the various subgroups can be found at AtlasCAF.
Instructions on how to access data on the CAF are reported in this twiki page.

If your activity is not in the scope of the CAF, you can request samples of RAW data to be moved to a user-accessible area. You can use the DDM request interface, provided the data are registered in DDM, selecting the T0 cloud and the CERN-PROD_DATADISK site as destination. Your data will be moved (upon approval) into the subpath


and you can access them following these instructions.

Advanced Usage

The Advanced Usage page describes a few advanced procedures for special situations not covered in this twiki; it is not intended for basic users.

Dumps for consistency checks

The dumps of CERN-PROD_ endpoints are stored according to the requirements described in DDMDarkDataAndLostFiles. They are created by the user ddmusr03.


Please, check the FAQ page for further information and troubleshooting.

Support and contacts

You can report problems to service-now.
When reporting problems or errors to the e-group, please provide the name of the machine you issued your commands from, the complete commands that fail, and any other significant information that could help in understanding the problem.
In case of urgent problems, or if you need further information, you can contact the e-group.
Please note that ATLAS user support for EOS is on a best-effort basis outside CERN working hours. We will do our best to address your EOS quota request within a few working days.

For CERNBox-related issues and questions, please open a ticket in the CERN Service Portal.


  • 2011 Aug 09 "Status of EOS migration" by Guido Negri at ATLAS Weekly
  • 2009 Jul 07 "Storage for group and user data" by I UEDA at ATLAS Week

Major updates:

Responsible: please contact the atlas-adc-cloud-cern AT

Last reviewed by: Never reviewed

Topic revision: r65 - 2022-07-01 - ChristopherLee