TWiki > HEPTape Web > Survey > KIT-GridKa_Survey (revision 2)

Site

Subsection | Question | Response
Site and Endpoints | What is the site name? | FZK_LCG2
Site and Endpoints | Which endpoint URLs do your archival systems expose? | srm:{atlassrm-fzk,cmssrm-kit,lhcbsrm-kit}.gridka.de
Site and Endpoints | How is tape storage selected for a write (choice of endpoint, specification of a space token, namespace prefix)? | dCache and xrootd both archive into the same tape storage backend.
Protocol support | Are there any unsupported or partially supported operations (e.g. pinning)? | All features that dCache and xrootd support natively should work for GridKa, too.
Queuing | What limits should clients respect? |
Queuing | → Max number of outstanding requests |
Queuing | → Min/Max bulk request size? |
Queuing | → Min/Max requests submitted at one time | dCache assigns flush and stage tasks to pools, each of which has an upper limit for concurrent active tasks, usually 2k. Requests beyond that are queued. For xrootd the limit is a total of 3200 concurrent flushing and staging tasks.
Queuing | Should clients back off under certain circumstances? | SRM feature with dCache: a limit can be set for every request type, including srm-bring-online (10k by default). Once more requests accumulate, SRM will block and return an "overloaded" error. For xrootd there is no such feature that would recognise an overload situation; if a file cannot be staged from tape, xrootd will fail each subsequent request immediately.
Queuing | → For which operations? | srm-bring-online / open
Queuing | → How is this signalled to the client? | srm-bring-online and accessing a file will fail.
Queuing | Is it advantageous to group requests by a particular criterion (e.g. tape family, date)? | In theory, yes, that would be advantageous, but we cannot guarantee that requests will stay in that order or grouping; there is only a loose chronological order.
Queuing | → What criterion? | Timestamps would be most useful, since tape families don't necessarily match what the VOs use as classification.
Prioritisation | Can you handle priority requests? | No.
Prioritisation | → How is this requested? |
Timeouts | Do you have hardcoded or default timeouts? | Yes, we have default timeouts with dCache for flushing and staging of at least 24 hours (may be larger on request). No timeouts are enforced with xrootd.
Timeouts | What timeouts do you recommend? |
Recommendations for clients | |
Operations and metrics | How do you allocate free tape space to VOs? | We do not allocate space on tape for any VO.
Operations and metrics | What is the frequency with which you run repack operations to reclaim space on tapes after data deletion? | We have defined a threshold for tape occupancy which triggers reclamation per tape.
Operations and metrics | Can you provide the total sum of data stored by VO in the archive to 100 TB accuracy? | Yes
Operations and metrics | Can you provide space occupied on tapes by VO (includes deleted data, but not yet reclaimed space) to 100 TB accuracy? | Yes
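Given the per-pool dCache limit (usually 2k concurrent tasks) and the 3200-task xrootd ceiling quoted in the table, a client can avoid flooding the site by capping how many requests it has outstanding at once. The batch size of 1000 and the helper names below are illustrative assumptions, not site-mandated values; the actual per-client share should be agreed with GridKa.

```python
def batched(surls, max_outstanding=1000):
    """Yield successive slices of at most max_outstanding SURLs, so the
    client never has more than one batch of requests in flight at a time."""
    for i in range(0, len(surls), max_outstanding):
        yield surls[i:i + max_outstanding]

# Usage sketch: submit one batch, wait for it to drain, then submit the next.
# (submit_bring_online / wait_until_staged are hypothetical client calls.)
# for batch in batched(all_surls, max_outstanding=1000):
#     submit_bring_online(batch)
#     wait_until_staged(batch)
```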
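When dCache's SRM rejects srm-bring-online with the "overloaded" error mentioned in the table, the sensible client reaction is to retry with an increasing delay rather than resubmit immediately. A minimal sketch of such a backoff loop follows; `OverloadedError`, `submit`, and the retry parameters are assumptions for illustration, not part of any GridKa or dCache API.

```python
import time

class OverloadedError(Exception):
    """Stands in for the SRM rejecting a request as 'overloaded'."""

def stage_with_backoff(submit, surl, max_retries=5, base_delay=1.0,
                       sleep=time.sleep):
    """Call submit(surl), doubling the wait after each 'overloaded'
    rejection, and give up after max_retries attempts."""
    delay = base_delay
    for _ in range(max_retries):
        try:
            return submit(surl)
        except OverloadedError:
            sleep(delay)
            delay *= 2
    raise OverloadedError(f"gave up on {surl} after {max_retries} attempts")
```

The `sleep` parameter is injected only so the loop can be exercised without real waiting; in production the default `time.sleep` applies.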
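The grouping advice in the table (timestamps are more useful than tape families) can be followed client-side by bucketing stage requests by archival day before bulk submission, since files written around the same time are likely to sit on the same tapes. The SURLs, timestamps, and the `group_by_day` helper below are illustrative assumptions, not a GridKa interface.

```python
from collections import defaultdict
from datetime import datetime, timezone

def group_by_day(requests):
    """Bucket (surl, unix_timestamp) pairs by the UTC day of their
    archival timestamp, ready for one bulk bring-online per bucket."""
    buckets = defaultdict(list)
    for surl, ts in requests:
        day = datetime.fromtimestamp(ts, tz=timezone.utc).date().isoformat()
        buckets[day].append(surl)
    return dict(buckets)

# Hypothetical request list: (SURL, archival timestamp in seconds).
requests = [
    ("srm://atlassrm-fzk.gridka.de/pnfs/gridka.de/atlas/file1", 1522800000),
    ("srm://atlassrm-fzk.gridka.de/pnfs/gridka.de/atlas/file2", 1522886400),
    ("srm://atlassrm-fzk.gridka.de/pnfs/gridka.de/atlas/file3", 1522800500),
]
for day, surls in sorted(group_by_day(requests).items()):
    # Submit one bulk srm-bring-online per day-bucket (submission not shown).
    print(day, len(surls))
```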
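Since GridKa's default dCache stage timeout is at least 24 hours, a client deadline shorter than that would abandon requests the site still considers live. The polling sketch below reflects that; the function and parameter names are assumptions, and the clock/sleep hooks exist only so the loop can be tested without real waiting.

```python
import time

SERVER_STAGE_TIMEOUT = 24 * 3600  # GridKa's default dCache stage timeout, seconds

def poll_until_online(is_online, surl, deadline=SERVER_STAGE_TIMEOUT,
                      interval=600, clock=time.monotonic, sleep=time.sleep):
    """Poll is_online(surl) until it reports True or the deadline passes.
    The deadline should not be shorter than the server-side stage timeout."""
    start = clock()
    while clock() - start < deadline:
        if is_online(surl):
            return True
        sleep(interval)
    return False
```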
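The repack policy in the table (reclaim a tape once its occupancy drops below a defined threshold) can be illustrated with a small selection function. The 50% threshold and the tape-record fields below are assumptions for the sketch, not GridKa's actual configuration.

```python
def tapes_to_repack(tapes, occupancy_threshold=0.5):
    """Return the IDs of tapes whose live-data occupancy has fallen below
    the threshold, i.e. candidates for repack/reclamation."""
    return [t["id"] for t in tapes
            if t["live_bytes"] / t["capacity_bytes"] < occupancy_threshold]
```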

History:

OliverKeeble - 2018-01-30
Created article.
XavierMol - 2018-04-16
Inserted answers for KIT/GridKa on behalf of Doris Ressmann.
XavierMol - 2018-04-16
Changed format of the table.
Topic revision: r2 - 2018-04-16 - XavierMol