Activity summary

Activity Support
| *VO/project* | *Purpose* | *Contacts (grid users / managerial / technical)* | *Web site* |
| LQCD | Support the CERN TH / ETH | Ph. de Forcrand (CERN PH and ETH) | |
| LQCD-DM | Grid Data Management tools for data sharing in the Lattice QCD community | A. Juttner (CERN) | |
| CERNTH | Support the CERN TH Unit (PH dept) | M. Mangano (CERN PH) | |

Activity reporting

| *VO/project* | *Period* | *Goals* | *Achievements* | *Support requested* | *Time spent* | *Issues* | *Changes in responsibility or support* |
| LQCD-DM | 2010 | Enable a Castor-based file repository for users in remote HPC centers | After a thorough technology investigation, xrdcp was chosen as the client solution for transfers to/from the HPC centers. Castor access is provided by an xrootd farm and an xrootd proxy server running in a VO-box at CERN. | Evaluation activity (usage of Grid tools for different use cases) | 3% | Integration into mainstream CERN/IT services | Discussion in progress with IT-DSS on a possible hand-over and the future setup of this service |
| LQCD | 2009 | Collect new data for studying the QGP phase transition | Multiple runs (~700 CPU-years). Links to Lattice here. In 2009 this activity was presented at the ECSA09 conference and at other workshops/conferences. | Application porting + establishing a system to run Ganga/DIANE; ran on a best-effort basis. Essential development (Technical Student in 2008) allowed the running of the system to be automated; the remaining tasks are left to the users. | 5% | Still running on GEAR + Swiss resources. Do we really need a specialised VO? | Contacts with LSU (Shantenu Jha et al.); the goal is to have them run the system on TeraGrid. A consultancy role is kept with us. |
| CERNTH | 2010-Q2 | Evaluation phase (no commitment): run with 1-2 pilot users to evaluate requirements and needs; use Ganga to support the TH unit | Twiki refurbished. Two coffee meetings with pilot users. A few hours to put together a couple of scripts (examples to transparently switch between LCG and LSF). | Pilot users: check how much support effort would be needed | | One user (Jeppe) uses a C++ application based on SFT libraries. For the moment we agreed to avoid software installations at the site (the .so is downloaded with wget). Examples and test results are under ~laman/public/jeppe. Although the reliability is low (a number of jobs fail), this application does not rely on all jobs succeeding: every subjob contributes 1/Nth of the desired statistics and all jobs are completely independent. | |
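The xrdcp-based Castor access described for LQCD-DM can be sketched as below; the proxy host name and Castor paths are placeholders for illustration only, not the actual service endpoints.

```shell
# Upload a file from an HPC center to Castor through the xrootd proxy
# (host and paths are hypothetical placeholders).
xrdcp ./gauge_config.dat \
    root://xrootd-proxy.cern.ch//castor/cern.ch/lqcd/gauge_config.dat

# Download the same file back to the local scratch area.
xrdcp root://xrootd-proxy.cern.ch//castor/cern.ch/lqcd/gauge_config.dat \
    /scratch/$USER/gauge_config.dat
```

The same `root://` URLs work for both directions; only the client (`xrdcp`) needs to be installed at the remote HPC center, which is what made this solution attractive for sites without a full Grid middleware stack.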

The use case of the second pilot user is more complex (an application running on a cluster and using the shared file system for interprocess synchronisation). We are waiting for more feedback from the first pilot user.
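The fault tolerance of the first pilot application (every subjob contributing 1/Nth of the statistics, all jobs independent) can be sketched as follows; the function and numbers are illustrative only, not the pilot user's actual code.

```python
def combine_subjobs(results):
    """Combine the outputs of N independent Grid subjobs.

    `results` is a list of per-subjob sample lists; None marks a failed
    subjob, which is simply skipped rather than causing the whole
    analysis to fail.
    """
    ok = [r for r in results if r is not None]
    if not ok:
        raise RuntimeError("no subjob succeeded")
    samples = [x for r in ok for x in r]
    mean = sum(samples) / len(samples)
    return mean, len(ok)

# Four subjobs submitted, one failed: the estimate still uses the
# statistics from the three successful subjobs.
mean, n_ok = combine_subjobs([[1.0, 2.0], [3.0, 5.0], None, [4.0]])
print(mean, n_ok)  # → 3.0 3
```

Because each subjob is independent and only degrades the accumulated statistics when it fails, a relatively high job-failure rate on the Grid is acceptable for this use case without any resubmission machinery.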



   * is the list to contact the grid users
   * is the list to contact the responsible(s) for a grid activity
   * is the list to contact the support team for a grid activity

-- MassimoLamanna - 27-Apr-2010

Topic revision: r4 - 2010-06-04 - MassimoLamanna