ATLAS Computing and Data Model Evolution
We describe the evolution of the ATLAS Computing and Data Placement Model. The original model was proposed in the early 2000s
and reflected the limitations and constraints inherent in the early Grid models and infrastructure.
The concepts of datasets and containers, dynamic data caching, the relaxation of regional tier boundaries, and the optimization of data
formats were important additions to the model. We describe the recent changes and their implementation in detail.
In 2011 we reached a delicate balance between planned and dynamic data placement. We monitor the data usage patterns of physics
analysis jobs and tune the data placement algorithm accordingly.
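The abstract does not spell out the tuning algorithm itself. As a rough illustration only, a popularity-driven replication policy could look like the sketch below; the function name, thresholds, and data structures are all hypothetical and are not the actual ATLAS implementation, which weighs many more factors (site capacity, network topology, task brokerage).

```python
from collections import Counter

def plan_replicas(access_log, current_replicas,
                  hot_threshold=10, cold_threshold=1, max_replicas=5):
    """Decide replica additions/removals from recent dataset access counts.

    access_log: list of dataset names, one entry per analysis-job access.
    current_replicas: dict mapping dataset name -> current replica count.
    Thresholds are illustrative placeholders, not production values.
    """
    counts = Counter(access_log)
    add, remove = [], []
    for dataset, n_replicas in current_replicas.items():
        n_accesses = counts.get(dataset, 0)
        if n_accesses >= hot_threshold and n_replicas < max_replicas:
            add.append(dataset)      # popular: replicate to more sites
        elif n_accesses <= cold_threshold and n_replicas > 1:
            remove.append(dataset)   # rarely used: reclaim cache space
    return add, remove

# Example: one heavily accessed dataset, one idle dataset with spare replicas.
add, remove = plan_replicas(["d1"] * 12 + ["d2"],
                            {"d1": 2, "d2": 3, "d3": 1})
```

Here `d1` would gain a replica, `d2` would lose one, and `d3` is left alone because a last replica is never removed; the same feedback loop, run periodically over job-monitoring data, captures the idea of tuning placement to observed usage.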
- Track: Distributed Processing and Analysis on Grids and Clouds
- Primary Author: Dr. Alexei Klimentov (Brookhaven National Laboratory (US))
- Co-Authors from IT-ES: Simone Campana, Fernando Harald Barreiro Megino
- Full author list: Dr. Dario Barberis (Universita e INFN (IT)), Fernando Harald Barreiro Megino (Universidad Autonoma de Madrid (ES)), Kors Bos (NIKHEF), Dr. Jamie Boyd (CERN), Simone Campana (CERN), Kaushik De (University of Texas at Arlington (US)), Dr. Stephane Jezequel (LAPP), Dr. Roger Jones (Univ. of Manchester), Dr. Beate Heinemann (LBNL and UC Berkeley), Borut Kersevan (Jozef Stefan Institute), Dr. Alexei Klimentov (Brookhaven National Laboratory (US)), Tadashi Maeno (Brookhaven National Laboratory (US)), Sergey Panitkin (Brookhaven National Laboratory (US)), Jim Shank (Boston University (US)), I Ueda (University of Tokyo (JP)), Dr. Torre Wenaus (Brookhaven National Laboratory (US))
- Presentation Type: parallel
--
SimoneCampana - 06-Oct-2011