ATLASStorageAtCERN (2021-05-18, ArminNairz)
%CERTIFY%

---+!! %ATLASLOGO% ATLAS Storage at CERN

%TOC%

%STARTINCLUDE%
---+Introduction

This page describes how to access data at !CERN in its different locations, explaining the commands and tools to be used and the access policy for the storage. <br>
!ATLAS storage at CERN now mostly relies on !EOS for disk-only space and on !Castor for pools with a tape back-end. <br>
The *rfio* protocol (nsls, rfdir, rfcp, ...) is %RED%highly deprecated%ENDCOLOR% for accessing data stored at !CERN and will be dropped in the near future (except for special users and activities, such as !Tier0, !Tier3 and !CAF). <br>
*xrootd should be used instead*: it works with both the !Castor and the !EOS technology. <br>
All commands and examples are intended to be issued on lxplus nodes inside the !CERN network. Most of them work from outside !CERN as well.

---+Storage resources

Disk-only spaces use the newly developed !EOS system (based on the xrootd architecture), while spaces with a tape back-end still use the !Castor system.
<br>
In detail, the following table summarizes the architecture of the storage resources for !ATLAS.<br>

| *Space token* | *Path* | *Notes* |
| *CASTOR* |||
| CERN-PROD_DAQ | /castor/cern.ch/grid/atlas/DAQ/ | the pool is not readable directly; request a [[https://rucio-ui.cern.ch/request_rule][Rucio rule]] |
| CERN-PROD_DATATAPE | /castor/cern.ch/grid/atlas/atlasdatatape/ | the pool is not readable directly; request a [[https://rucio-ui.cern.ch/request_rule][Rucio rule]] |
| CERN-PROD_LOCALGROUPDISK | /castor/cern.ch/grid/atlas/atlaslocalgroupdisk/ | the pool is readable by all !ATLAS users; only !CERN !Tier3 users can write to it |
| CERN-PROD_MCTAPE | /castor/cern.ch/grid/atlas/atlasmctape/ | the pool is not readable directly; request a [[https://rucio-ui.cern.ch/request_rule][Rucio rule]] |
| CERN-PROD_SPECIALDISK | /castor/cern.ch/grid/atlas/atlasspecialdisk/ | used for !COND data; read-only pool |
| CERN-PROD_TZERO | /castor/cern.ch/grid/atlas/tzero/ | the pool is not readable directly; request a [[https://rucio-ui.cern.ch/request_rule][Rucio rule]] |
| atlcal | /castor/cern.ch/grid/atlas/caf/ | the pool is readable by all !ATLAS users; only !CAF members can write to it (more info at AtlasCAF) |
| *EOS* |||
| CERN-PROD_DATADISK | /eos/atlas/atlasdatadisk/ | deployed on 13th Sept 2011 |
| CERN-PROD_SCRATCHDISK | /eos/atlas/atlasscratchdisk/ | deployed on 13th Sept 2011 |
| CERN-PROD_[GROUP_NAME] | /eos/atlas/atlasgroupdisk/[group_name]/dq2 | deployed mid October 2011 |
| private ATLAS users areas | /eos/atlas/user/[l]/[login] | deployed on 13th Sept 2011, deprecated since May 2017 |
| personal users areas | /eos/user/[l]/[login] | |
| local groups areas | /eos/atlas/atlascerngroupdisk/[group_name] | deployed on 19th Sept 2011 |

<br>
---+Main changes in the migration from !Castor to !EOS

The main point of the migration is the retirement of the *rfio* protocol (!EOS will not support it).
Though !Castor pools can still handle rfio requests, this protocol is %RED%highly deprecated%ENDCOLOR% and access to the storage should always go through *xrootd*. <br>
The following table shows the former and the new way to access data in the different locations (more detailed information in the following sections). <br>

| *Former access* | *New access* |
| *CERN-PROD_DATADISK* ||
| rfio://castoratlas//castor/cern.ch/grid/atlas/atlasdatadisk/ | <b>no rfio</b> |
| root://castoratlas//castor/cern.ch/grid/atlas/atlasdatadisk/ | root://eosatlas//eos/atlas/atlasdatadisk/ |
| no local access | *advanced* * !file:/eos/atlas/atlasdatadisk/ |
| *CERN-PROD_SCRATCHDISK* ||
| rfio://castoratlas//castor/cern.ch/grid/atlas/atlasscratchdisk/ | <b>no rfio</b> |
| root://castoratlas//castor/cern.ch/grid/atlas/atlasscratchdisk/ | root://eosatlas//eos/atlas/atlasscratchdisk/ |
| no local access | *advanced* * !file:/eos/atlas/atlasscratchdisk/ |
| *private ATLAS users areas* ||
| rfio://castoratlas//castor/cern.ch/user/[l]/[login] | <b>no rfio</b> |
| root://castoratlas//castor/cern.ch/user/[l]/[login] | root://eosatlas//eos/atlas/user/[l]/[login] |
| no local access | *advanced* * !file:/eos/atlas/user/[l]/[login] |
| *personal users areas* ||
| | <b>no rfio</b> |
| | root://eosuser//eos/user/[l]/[login] |
| no local access | *advanced* * !file:/eos/user/[l]/[login] |
| *local groups areas* ||
| rfio://castoratlas//castor/cern.ch/atlas/atlascerngroupdisk/[group_name] | <b>no rfio</b> |
| root://castoratlas//castor/cern.ch/atlas/atlascerngroupdisk/[group_name] | root://eosatlas//eos/atlas/atlascerngroupdisk/[group_name] |
| no local access | *advanced* * !file:/eos/atlas/atlascerngroupdisk/[group_name] |
| * In order to have local file access, you first have to mount !EOS as a local filesystem; please refer to the [[AtlasComputing.AdvancedUsage][Advanced Usage]] page ||

<br>
---+EOS storage system

The following links keep the main documentation of !EOS:
   * [[https://cern.service-now.com/service-portal/faq.do?se=eos-service][Knowledge base for users]]
   * [[https://eos.readthedocs.org/en/latest/][Internals and admin documentation]]

---++Users area on !EOS

Until May 2017 the space for individual users was allocated on !ATLAS !EOS in the directory /eos/atlas/user/&lt;letter&gt;/&lt;username&gt;. Since May 2017 all users should use !EOS via the [[https://cernbox.web.cern.ch/][CERNBox service]]:
   * The path is /eos/user/&lt;letter&gt;/&lt;username&gt;
   * The quota is 1TB
   * The space is allocated automatically on first login to the [[https://cernbox.cern.ch/][CERNBox web interface]]
   * For details about the allocation of CERNBox resources, see the [[https://resources.web.cern.ch/resources/Manage/EOS/Default.aspx][CERN Resources Portal]]
   * CERN IT provides [[https://cernbox-manual.web.cern.ch/cernbox-manual/en/][detailed documentation]] on how to access CERNBox on various platforms with various protocols. It includes useful tips and tricks on sharing your data and on accessing your data from web applications such as the ROOT file viewer, SWAN projects etc.

The ATLAS-personal space under /eos/atlas/user/ is deprecated and no new areas will be created there.

!EOS is now automatically *mounted* on all lxplus and lxbatch nodes in the */eos* directory. This means that all files can be copied, opened etc. with standard Linux tools. <br />
The [[https://cern.service-now.com/service-portal?id=kb_article&n=KB0003846][following recipe]] provides instructions on how to mount !EOS on your own Linux box.<br />
Setting permissions and sharing for */eos/user* is currently only possible via the CERNBox UI.
Please see the <a href="https://cern.service-now.com/service-portal?id=kb_article&n=KB0003174" target="_blank" title="CERNBox Tutorial">CERNBox Tutorial</a> for details.<br />
*If you choose to install the CERNBox Desktop Sync Client, please be careful about what you sync.* <br />
*Syncing the entire area is not recommended, especially if you use your EOS area for storing grid files.*

If you need to open files from ROOT, you have to specify the full path including the protocol (root://), the server hostname (eosatlas for ATLAS data, eosuser for user data), one slash, and the full path (including its leading slash). The same applies if you want to initiate a transfer with the xrdcp command. For example:
<verbatim>TFile *file = TFile::Open("root://eosatlas//eos/atlas/atlascerngroupdisk/phys-higgs/HSG1/MxAOD/h012/Archive.h012.SmallFiles_LessThan100Mb/data16/data16_13TeV.periodAll25ns_410ipb_onlyToroidIssues.physics_Main.MxAOD.p2623.h012.root");</verbatim>
<verbatim>xrdcp local.file root://eosuser//eos/user/t/tkouba/dest_file</verbatim>
If you need to issue special management commands (e.g. changing permissions, checking quota etc.), you have to use the *eos* CLI command as described in the [[https://cern.service-now.com/service-portal?id=kb_article&n=KB0004567][EOS FAQ]].<br />
%RED%The CLI does *NOT* work for the EOS space on CERNBox under */eos/user/* %ENDCOLOR%

---+++Communications

   * All users having personal space on !ATLAS !EOS are added to the atlas-eos-user@cern.ch e-group.<br>
   * Users with personal space in CERNBox will be contacted by CERN IT if needed.
   * For support, check the [[#Support_and_contacts][Support and contacts]] section.
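The URL convention above (root://, then the server, one slash, then the absolute path with its own leading slash, hence the characteristic double slash) can be sketched as a small helper. This is purely illustrative and not an official ATLAS or CERN tool; the function names and the example username are made up:

```python
# Illustrative sketch of the xrootd URL convention described on this page:
# "root://" + server + "/" + absolute path (which itself starts with "/"),
# giving the characteristic double slash. Hypothetical helpers, not a tool.

def xrootd_url(server: str, path: str) -> str:
    """Compose a root:// URL from a server name and an absolute path."""
    if not path.startswith("/"):
        raise ValueError("path must be absolute (start with '/')")
    return "root://%s/%s" % (server, path)

def cernbox_user_dir(username: str) -> str:
    """Personal CERNBox area: /eos/user/<first letter>/<username>."""
    return "/eos/user/%s/%s" % (username[0], username)
```

For instance, `xrootd_url("eosuser", cernbox_user_dir("tkouba") + "/dest_file")` reproduces the root://eosuser//eos/user/t/tkouba/dest_file URL used in the xrdcp example above; the same composition with server eosatlas applies to paths under /eos/atlas/.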
---++Local groups area on !EOS

ATLAS groups who need local (non-Grid) space at CERN may have a reserved space in !EOS under the =/eos/atlas/atlascerngroupdisk/= directory.<br>
To ask for space or to check the quota, please read the instructions in [[ATLASGroupsOnEOS]].
<br>
<br>
---+Castor pools

<font size="+1" color="red">This information is obsolete</font>

This section is intended for the !Tier0 resources only! <br>

---++How to list directories on !Castor

Directories in !Castor can be listed using xrootd or the rfio protocol.

With xrootd, the command =xrd castoratlas ls /castor/cern.ch/...= lists the content of a given directory. <br>
To recursively list the content of a directory, issue =xrd castoratlas dirlistrec /castor/cern.ch/...=.

Going through rfio, two commands can be used: =nsls= or =rfdir=. <br>
=rfdir= is a generic command (it can also be used to browse your local filesystem); =nsls= only works on !Castor files and has a few more options (e.g. to print the tape the files reside on) and features (e.g. the "m" flag for tape-migrated files) that make it the preferable command:
<verbatim>nsls -l /castor/cern.ch/grid/atlas/caf</verbatim>

---++How to write files to !Castor

ATLAS !Castor pools are not meant for direct write access by users; they are managed by the !DDM system.<br>
In order to write data to a !Castor pool, you should request a [[https://rucio-ui.cern.ch/request_rule][Rucio rule]]. Note that regular users can only ask for subscriptions to !SCRATCHDISK; subscriptions to other tokens require membership in special projects or groups.

---++How to read files from !Castor

*Tape pools* cannot be read directly by users. Data on them can only be accessed by requesting a [[https://rucio-ui.cern.ch/request_rule][Rucio rule]] to an accessible space token (e.g. !CERN-PROD_SCRATCHDISK).<br>
Data on *disk pools* can be accessed either through a subscription or directly, using dq2 tools (namely, dq2-get) or xrdcp.
Please note that rfcp is %RED%highly deprecated%ENDCOLOR%. <br>
To copy a file from a disk pool, just follow this example:
<verbatim>xrdcp root://castoratlas//castor/cern.ch/grid/atlas/caf/.../source_file .</verbatim>
You can also directly access a ROOT file from inside !ROOT:
<verbatim>TFile *file = TFile::Open("root://castoratlas//castor/cern.ch/grid/atlas/caf/.../source_file");</verbatim>
<br>
---++Access to RAW datasets and to Tier0 products

<font size="+1" color="red">This information is obsolete</font>

How you access RAW data depends on the kind of activity you are planning to run.

If your activity is in the scope of Calibration and Alignment, you can access such data from the CAF. A twiki explaining how the CAF works, and the contacts for the various subgroups, can be found at [[https://twiki.cern.ch/twiki/bin/view/AtlasComputing/AtlasCAF][AtlasCAF]]. <br>
Instructions on how to access data on the CAF are given in [[https://twiki.cern.ch/twiki/bin/view/AtlasComputing/FastAccessToRawOnCAF][this twiki page]].

If your activity is not in the scope of the CAF, you can request samples of RAW data to be moved to a user-accessible area. You can use the [[http://panda.cern.ch:25980/server/pandamon/query?mode=ddm_req][DDM request interface]], provided the data are registered in DDM, selecting as destination site the T0 cloud and the =CERN-PROD_DATADISK= site. Your data will be moved (upon approval) into the subpath =/eos/atlas/atlasdatadisk/= and you can access them following [[#EOS_storage_system][these instructions]]. <br>

---+Advanced Usage

The [[https://twiki.cern.ch/twiki/bin/viewauth/AtlasComputing/AdvancedUsage][Advanced Usage]] page describes a few advanced procedures that can be used in special situations to perform operations not covered in this twiki; it is not intended for basic users. <br>

---+ Dumps for consistency checks

The dumps of CERN-PROD_ endpoints are stored according to the requirements described in DDMDarkDataAndLostFiles.
They are created on the adcops.cern.ch machine by the user ddmusr03.

---+FAQ

Please check the [[AtlasEosFAQPage][FAQ page]] for further information and troubleshooting.

---+Support and contacts

You can report problems via [[https://cern.service-now.com/service-portal?id=sc_cat_item&?name=incident&fe=EOS][service-now]].<br>
When reporting problems or errors to the e-group, please try to provide the name of the machine you are issuing your commands from, the complete commands that fail, and any other significant information that could help in understanding the problem.<br>
In case of urgent problems, or if you want further information, you can contact the atlas-comp-cern-storage-support@cern.ch e-group.<br />
Please note that ATLAS user support for EOS is on a best-effort basis outside CERN working hours. We will do our best to address your EOS quota request within several working days. <br />
For CERNBox-related issues and questions, please open a ticket in the [[https://cern.service-now.com/service-portal/home.do][CERN Service portal]].
<br>
---+ References

   * 2011 Aug 09 _"Status of EOS migration"_ by Guido Negri at the [[https://indico.cern.ch/conferenceDisplay.py?confId=119650][ATLAS Weekly]]
   * 2009 Jul 07 _"Storage for group and user data"_ by I UEDA at the [[https://indico.cern.ch/contributionDisplay.py?contribId=4&sessionId=8&confId=47255][ATLAS Week]]

-----
*Major updates*:%BR%

%RESPONSIBLE% : please contact the atlas-adc-cloud-cern AT cern.ch %BR%
%REVIEW% *Never reviewed* %STOPINCLUDE%