This page describes the storage organisation for DDM-aware sites. It is oriented towards site admins and cloud squad support. T3s without DDM components are described in AtlasTier3.

Requirements on SE functions

  • Space reservation: ATLAS uses the space tokens described below (see the section Space Tokens)
  • Checksums: ATLAS uses adler32 to verify files
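As an illustration, an adler32 checksum can be computed with Python's standard zlib module. This is a minimal sketch; `adler32_hex` is a hypothetical helper name, and the zero-padded 8-digit lower-case hex formatting is a common convention in grid file catalogues:

```python
import zlib

def adler32_hex(path, chunk_size=1024 * 1024):
    """Compute the adler32 checksum of a file, formatted as a
    zero-padded 8-digit hex string (hypothetical helper)."""
    checksum = 1  # adler32 starts from 1 by definition
    with open(path, "rb") as f:
        # read in chunks so arbitrarily large files fit in memory
        for chunk in iter(lambda: f.read(chunk_size), b""):
            checksum = zlib.adler32(chunk, checksum)
    return f"{checksum & 0xFFFFFFFF:08x}"
```

Reading in fixed-size chunks keeps memory use constant even for multi-GB files.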


Some recommended storage technologies already used in WLCG:



HTTP/WebDAV

  • All sites should support read, write and deletion via WebDAV
  • ATLAS uses HTTP/WebDAV for:
    • LAN data access (r/w)
    • 3rd-party transfer protocol (r/w)
    • Deletion


Xrootd

  • All sites should support read access via xrootd, for both internal and external access
  • ATLAS uses the xrootd protocol for:
    • LAN data access when this is more performant than other protocols, for example direct access (r/o)
    • Occasional WAN data access, usually <1% of the volume of LAN access (r/o)
    • 3rd-party transfer protocol in limited cases where HTTP is not an option (e.g. CTA) (r/w)


SRM

  • SRM is no longer used for disk-only storage. It is only used on tape storage to stage files from tape, after which 3rd-party copy uses the HTTP or xrootd protocols


GridFTP

  • No longer necessary for ATLAS sites

Space Tokens or quota tokens


_ATLAS Space Tokens for Data Storage at DDM-aware sites_

| Space Token | Storage Type | Used For | @T0 | @T1 | @T2 | @T3 | Comments |
| ATLASLOCALGROUPTAPE | T1D0 | Local user data | o | o | o | o | |
| ATLASDATATAPE | T1D0 | RAW from T0, AOD from re-processing | X | X | | | |
| ATLASDATADISK | T0D1 | AOD + data on demand | X | X | X | o | |
| ATLASMCTAPE | T1D0 | Custodial copy of MC files | | X | | | |
| ATLASCALIBDISK | T0D1 | Files for detector calibration | a | a | a | a | |
| ATLASGROUPDISK | T0D1 | Data managed by groups | a | a | a | a | Managed by group space managers |
| ATLASSCRATCHDISK | T0D1 | Temp. user data | X | X | X | o | Mandatory to host Grid analysis outputs |
| ATLASLOCALGROUPDISK | T0D1 | Local user data | o | o | o | o | |

  • X : space token is mandatory
  • o : space token is optional and decided by the site
  • a : space token request is validated by CREM

Each space token is associated with a path on the SE. Writing SP for the space token name in capital letters and sp for the same name in small letters:

Space token name : SP
Associated path : se_path/sp/...
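The naming convention above can be sketched as a small helper; `token_path` is a hypothetical function name, and `se_path` stands for the site-specific storage prefix:

```python
def token_path(se_path, space_token):
    """Map a space token name (capital letters) to its path on the SE,
    which uses the lower-case token name as the directory name."""
    return f"{se_path.rstrip('/')}/{space_token.lower()}/"

# e.g. token_path("/dpm/example.org/home/atlas", "ATLASDATADISK")
#      -> "/dpm/example.org/home/atlas/atlasdatadisk/"
```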

Nota Bene 1
The list of sites serving each perf-phys group (ATLASGROUPDISK) is defined by ATLAS. More information at GroupsOnGrid.

Nota Bene 2
All space tokens contribute to the pledged storage resources, except ATLASLOCALGROUPDISK and ATLASLOCALGROUPTAPE.

Nota Bene 3
ATLASCALIBDISK is deployed only on demand from detector groups which want to run prompt calibration at those sites.

ATLASLOCALGROUPTAPE

A DDM endpoint with a TAPE backend, used for local interest; not pledged, and not necessarily at a T1.
  • Set up on request by a site, and accepted as long as it does not put extra load on DDM-ops
  • The setup should follow the standard one, similar to DATATAPE/MCTAPE plus LOCALGROUPDISK
    • A non-standard setup can be discussed, though it may not be supported

Space Reservation


  • ATLASSCRATCHDISK (for analysis outputs, if the site runs analysis jobs)
    • 50 GB for each job slot dedicated to Grid analysis, with a minimum of 1 TB
  • ATLASDATADISK (as production buffer)
    • 20 GB for each job slot dedicated to MC production, with a minimum of 0.5 TB
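As a sketch, the two sizing rules above can be expressed as follows (hypothetical helper names, sizes in TB):

```python
def scratchdisk_tb(analysis_slots):
    """50 GB per Grid-analysis job slot, with a minimum of 1 TB."""
    return max(analysis_slots * 50 / 1000, 1.0)

def datadisk_buffer_tb(production_slots):
    """20 GB per MC-production job slot, with a minimum of 0.5 TB."""
    return max(production_slots * 20 / 1000, 0.5)

# e.g. a site with 200 analysis slots and 100 production slots:
# scratchdisk_tb(200)      -> 10.0 TB
# datadisk_buffer_tb(100)  ->  2.0 TB
```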


The Tier2s host Grid analysis and production jobs. Tier2s can be grouped into federations, and the pledged resources are defined per federation. For ATLAS it is important that each site provides appropriate disk space, so that the disk space is not too fragmented.

Required Space Tokens at Tier2s

  • ATLASDATADISK: 500 TB as a bare minimum for each site (i.e. not the whole federation)
  • ATLASSCRATCHDISK: 100 TB per 1k analysis job slots (based on a calculation of average job output)
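A minimal sketch of the Tier-2 sizing rules above (hypothetical helper names, sizes in TB):

```python
def t2_datadisk_tb(share_tb):
    """A site's DATADISK share, enforcing the 500 TB bare minimum
    per site (not per federation)."""
    return max(share_tb, 500)

def t2_scratchdisk_tb(analysis_slots):
    """100 TB of SCRATCHDISK per 1000 analysis job slots."""
    return analysis_slots / 1000 * 100
```

For example, a small site whose nominal federation share would be 300 TB should still deploy 500 TB of DATADISK.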


Required Space Tokens at Tier1s

  • ATLASDATADISK: size according to MoU
  • ATLASSCRATCHDISK: 100 TB per 1k analysis job slots
  • ATLASDATATAPE, ATLASMCTAPE: size according to MoU
  • Tape staging buffers
    • Can be shared or split over the space tokens
    • Recommended size: enough to sustain the required rate of reading/writing; a rough rule of thumb is between 10 and 20 TB per PB of data stored
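The rule of thumb for staging buffers can be sketched as follows (hypothetical helper; input in PB, output in TB):

```python
def tape_buffer_range_tb(stored_pb):
    """Rule of thumb: between 10 and 20 TB of staging buffer
    per PB of data stored on tape."""
    return (10 * stored_pb, 20 * stored_pb)

# e.g. 30 PB on tape -> a buffer between 300 and 600 TB
```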



ACLs for DDM endpoint and space token/storage

Why different ACLs between DDM and storage?

The ACLs on space tokens are implemented by sites, while the DDM ACLs are managed by the ATLAS team. The DDM ACLs should be at least as restrictive as the space token ones. The ACLs on space tokens are defined to prevent the DDM ACLs from being bypassed when reading or writing files.

DDM ACLs:

  • atlas/Role=production is able to do any action on any DDM endpoint (including ATLASLOCALGROUPDISK)
  • atlas/<country>/Role=production is able to do any action on any LOCALGROUPDISK in the same country
  • atlas/<group>/Role=production is able to do any action only on the associated DDM group disk endpoints.

| Space Token | Read Access for user | Write Access for user |
| ATLASLOCALGROUPDISK | Yes | Yes (for users in the country) |

Some US Tier3s have decided to forbid read access to their LOCALGROUPDISK.

Space token acls

Not all storage implementations are currently fully VOMS-aware, and not all of them support ACLs. After discussion with developers and experts, the following are the recommendations for setting up the various ATLAS space tokens for groups and users, keeping in mind this limitation of the currently deployed storage solutions:

N.B. In dCache, one can bind (one or more) directory trees to a space token. We therefore ask sites to bind the space tokens to the corresponding path in CRIC. Contact the ATLAS contact in your cloud if you need clarification on what this means.


The following table presents the ACLs which ATLAS would like to set up. It is consistent with the ACLs implemented in DDM. The ACLs should be identical for the space token and the associated storage path. As stated in the introduction, the effective ACLs will not be exactly the same.

ATLAS Space Tokens

| Space Token | atlas/Role=production | atlas/Role=pilot | atlas/Role=NULL | atlas/<country>/Role=production | atlas/<country>/Role=NULL |
| ATLASLOCALGROUPTAPE | Read/Write | Read | Read | Read/Write | Read |
| ATLASDATATAPE | Read/Write | *No Access* | *No Access* | *No Access* | *No Access* |
| ATLASDATADISK | Read/Write | Read | Read | Read | Read |
| ATLASMCTAPE | Read/Write | *No Access* | *No Access* | *No Access* | *No Access* |
| ATLASGROUPDISK | Read/Write | Read | Read | Read | Read |
| ATLASCALIBDISK | Read/Write | Read | Read | Read | Read |
| ATLASSCRATCHDISK | Read/Write | Read/Write | Read/Write | Read/Write | Read/Write |
| ATLASLOCALGROUPDISK | Read/Write | Read | Read | Read/Write | Read |
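For automated consistency checks, the table above can be encoded in a machine-readable form. This is only a sketch: the role labels `prod`, `pilot`, `user`, `country_prod` and `country_user` are shorthand invented here, not real VOMS FQANs.

```python
# "rw" = Read/Write, "r" = Read, "-" = No Access (shorthand keys, see lead-in)
SPACE_TOKEN_ACLS = {
    "ATLASLOCALGROUPTAPE": {"prod": "rw", "pilot": "r",  "user": "r",  "country_prod": "rw", "country_user": "r"},
    "ATLASDATATAPE":       {"prod": "rw", "pilot": "-",  "user": "-",  "country_prod": "-",  "country_user": "-"},
    "ATLASDATADISK":       {"prod": "rw", "pilot": "r",  "user": "r",  "country_prod": "r",  "country_user": "r"},
    "ATLASMCTAPE":         {"prod": "rw", "pilot": "-",  "user": "-",  "country_prod": "-",  "country_user": "-"},
    "ATLASGROUPDISK":      {"prod": "rw", "pilot": "r",  "user": "r",  "country_prod": "r",  "country_user": "r"},
    "ATLASCALIBDISK":      {"prod": "rw", "pilot": "r",  "user": "r",  "country_prod": "r",  "country_user": "r"},
    "ATLASSCRATCHDISK":    {"prod": "rw", "pilot": "rw", "user": "rw", "country_prod": "rw", "country_user": "rw"},
    "ATLASLOCALGROUPDISK": {"prod": "rw", "pilot": "r",  "user": "r",  "country_prod": "rw", "country_user": "r"},
}

def can_write(token, role):
    """Return True if the given (shorthand) role has write access."""
    return "w" in SPACE_TOKEN_ACLS[token][role]
```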

  • Write access to ATLASLOCALGROUPDISK/ATLASLOCALGROUPTAPE for atlas/<country> via rucio upload is not recommended (to avoid the creation of dark data in these areas). Data should be placed in those spaces via R2D2

Space token acl implementation


ATLASDATADISK

  • In case of DPM: configure the space token to be owned only by atlas/Role=production, and configure the namespace area to be owned by the atlas/Role=production VOMS FQAN.
  • In case of dCache: configure both the space token and the corresponding namespace path to allow rwx to everyone.


ATLASSCRATCHDISK

Permissions should be rwx for every user, both at the level of the space token and of the namespace.


ATLASGROUPDISK

Permissions should be identical to ATLASDATADISK, since groups are not allowed to write datasets directly to the DDM endpoint.


ATLASLOCALGROUPDISK

In principle ATLAS should not impose any requirement here, since these are not pledged resources. Nevertheless, here are some suggestions:

  • In case of DPM: configure the space token to be owned by atlas to allow rwx to everyone (DPM < 1.7.0), or owned by atlas/Role=production and the local group (usually atlas/<country>). Configure the namespace to grant rwx privileges to the local group (in VOMS) and to the atlas/Role=production VOMS FQAN (this is needed to serve those sites via DDM). These privileges should be granted with default ACLs, to ensure that permissions propagate correctly.
  • In case of dCache: configure both the space token and the corresponding namespace path to allow rwx to everyone.

Deletion policy per DDM endpoint

All files in space tokens managed by DDM are deleted through central deletion.


ATLASDATADISK

A free-space watermark is set to min(10%, 300 TB). When the free space goes below this limit, unlocked (secondary) data is deleted until the free space is back above the limit.


ATLASSCRATCHDISK

A free-space watermark is set to min(25%, 50 TB). When the free space goes below this limit, unlocked (secondary) data is deleted until the free space is back above the limit.
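The watermark formulas can be sketched as follows (hypothetical helper, sizes in TB):

```python
def deletion_watermark_tb(total_tb, percent, cap_tb):
    """Free-space watermark: min(percent of total capacity, cap).
    Unlocked (secondary) data is deleted when free space falls below it."""
    return min(total_tb * percent / 100, cap_tb)

# DATADISK uses min(10%, 300 TB), SCRATCHDISK min(25%, 50 TB):
# a 5000 TB DATADISK   -> min(500, 300) = 300 TB watermark
# a 1000 TB DATADISK   -> min(100, 300) = 100 TB watermark
# a  100 TB SCRATCHDISK -> min(25, 50)  =  25 TB watermark
```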


Deletion is triggered by the responsible persons (e.g. the group space managers).

Major updates:
-- KorsBos - 06 Jul 2008 -- DavidCameron - 2015-09-15

Responsible: DavidCameron Last reviewed by: Never reviewed

Topic attachments
  • 20140227_ADCOpsSiteClassification_rev01.pdf (PDF, 512.3 K, 2015-11-03, AleDiGGi)
  • Setup script for ATLASGROUPDISK for sites using DPM (shell script, 1.7 K, 2008-09-09)
Topic revision: r112 - 2022-11-02 - DavidCameron