• We assert that we have a working system and do not need a new one, but that the current system's cost and complexity exceed what is needed. Would the experiment agree with this statement?
  • What elements are being used, and how? Which have been added and why?
  • Which components failed to deliver the (larger) functionality needed? Does that mean we can deprecate and obsolete these components, or was the functionality not truly needed?
  • If the middleware dropped complexity and removed functionality, does the experiment have the resources to adapt to the change?
  • What is the experiment's interest in revising data placement models? What kinds of revisions have already been carried out?
    • What is the experiment's interest in data federations?
  • What sort of assumptions does the experiment need to make for data federations to work?
  • Could you work directly with clustered file systems at smaller sites?
  • Could you work directly with cloud file systems (i.e., GET/PUT)? Assume "cloud file systems" implies REST-like APIs, transfer via HTTP, no random access within files, no third-party transfer, and possibly no file/directory structure. See Amazon S3 for inspiration (a minimal GET/PUT sketch follows this list).
  • How thoroughly does the experiment use space management today?
  • Does your framework support HTTP-based access?
  • Is the archive / disk split an agreed-upon strategy in your experiment? Can HSM be dropped?
  • What role do you see for Data Federation versus Data Push? Is this useful at smaller sites?
  • For smaller sites, would caching work for your experiment?
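As a concrete illustration of the GET/PUT access model described above (whole-file transfers over plain HTTP, no random access within files, no third-party transfer), here is a minimal Python sketch. The endpoint, bucket, and object names are hypothetical, and authentication (e.g. S3 request signing) is omitted; this is illustrative only, not part of any experiment framework.

import urllib.request

ENDPOINT = "https://storage.example.org"   # hypothetical S3-like service
OBJECT_URL = ENDPOINT + "/cms-bucket/store/data/run123/file.root"

def put_file(local_path, url):
    """Upload a whole file with a single HTTP PUT (no partial writes)."""
    with open(local_path, "rb") as f:
        req = urllib.request.Request(url, data=f.read(), method="PUT")
    with urllib.request.urlopen(req) as resp:
        return resp.status

def get_file(url, local_path):
    """Download a whole file with a single HTTP GET (no byte-range reads)."""
    req = urllib.request.Request(url, method="GET")
    with urllib.request.urlopen(req) as resp:
        with open(local_path, "wb") as f:
            f.write(resp.read())
        return resp.status

if __name__ == "__main__":
    put_file("file.root", OBJECT_URL)
    get_file(OBJECT_URL, "file_copy.root")

Note that a real S3-style service would additionally require signed, authenticated requests, which are not shown here.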

