NA3.2.4 - Collaborations and Use Cases: EMI - FutureGrid Collaboration

Work on understanding the collaboration between EMI and FutureGrid.

FutureGrid Information
'FutureGrid is a distributed, high-performance test-bed that allows scientists to collaboratively develop and test innovative approaches to parallel, grid, and cloud computing.'

  • FutureGrid Contact:
    • New: Gregor von Laszewski (Indiana University, USA) - Technical Contact, SW Architect
    • Old: Gary F. Miksik (Indiana University, USA) - Project Manager
  • EMI Contact:
    • Coordination: Morris Riedel (Juelich Supercomputing Centre, Germany)
    • EMI Testbed activities: Danilo Dongiovanni (INFN-CNAF, Italy)
      • Works on the testing infrastructure: inter-component testing and test infrastructure that users can use for training
  • Comments: EMI is represented in the FutureGrid User Advisory Board; EMI could also become a user of FutureGrid



Planned: 2011-11-15 - Brief report about EMI & FutureGrid plans at the User Advisory Board (UAB) meeting @ SC2011



Yes / Done 2011-11-02 - EMI FutureGrid project proposal submitted



Yes / Done 2011-10-28 - Applied for Account

  • Applied for FutureGrid Account (pending approval)
  • Next step is to create a FutureGrid EMI project; others (like Danilo) should then join this particular project



Yes / Done 2011-10-06 - Clarified EMI component list for FutureGrid Installation

  • ToDo Item from Telcon
  • Result:

Defined list of EMI products to be installed on FutureGrid (see the phased-plan sketch below):

(1) EMI-UI, WMS, A-Rex, CREAM, EMI-WN, UNICORE, topBDII, siteBDII

(2) DPM, dCache, LFC

(3) ARGUS, VOMS?

New question: certificate handling, initially via VOMS and then a bridge with FutureGrid accounting...?
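
As an illustration only, the tiered product list above can be written down as a phased deployment plan. The Python sketch below is a hypothetical encoding: the phase labels, the DEPLOYMENT_PLAN structure, and the rollout_order() helper are assumptions, not anything agreed with FutureGrid; only the product names come from the list above.

```python
# Hypothetical sketch: the product list above, encoded as a phased
# deployment plan. Product names come from the list; the phase labels
# and this data structure are illustrative assumptions.
DEPLOYMENT_PLAN = {
    "phase-1-compute": ["EMI-UI", "WMS", "A-Rex", "CREAM", "EMI-WN",
                        "UNICORE", "topBDII", "siteBDII"],
    "phase-2-data": ["DPM", "dCache", "LFC"],
    "phase-3-security": ["ARGUS", "VOMS"],  # VOMS is still an open question above
}

def rollout_order(plan):
    """Yield (phase, product) pairs in installation order."""
    for phase in sorted(plan):  # phase names are chosen so they sort in order
        for product in plan[phase]:
            yield phase, product

for phase, product in rollout_order(DEPLOYMENT_PLAN):
    print(f"{phase}: install {product}")
```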



Yes / Done 2011-09-07 - TelCon with FutureGrid

Participants: Gregor, Danilo, Morris

The following questions need to be clarified:

  • Who is the major contact on the technical side?
    • Gregor, who may refer questions to other experts (sys admins, etc.), but is always on CC

  • What are the resources FutureGrid offers?
    • no restrictive technology policies (i.e. not Globus-only); all technologies are allowed
    • Small resources (5000 cores in total, distributed among 5 machines, ~1000 cores/machine)
    • The list of resources is on the website: Link
    • a new machine, but with a lot of large memory (for biology applications)
    • 500 cores dedicated to HPC, ~200 to Eucalyptus
    • Another status page shows the percentage usage of each technology: Link

  • How is the deployment done? (see the sketch below)
    • 1) via virtual machines, e.g. EMI running inside a VM (as is already done with Globus: a virtualized Globus)
    • 2) deployed on bare metal, with core services running continuously (the GIN permanent endpoints are not listed, but run on another node, one of the servers)
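
Purely as an illustration of these two options, here is a minimal sketch that maps services to a deployment mode; the service names and the mode assignments are assumptions for illustration, not decisions from the telcon.

```python
# Hypothetical sketch of the two deployment options discussed above:
# option 1 = services inside virtual machines, option 2 = bare metal for
# permanently running core services (e.g. the GIN endpoints).
# The service-to-mode mapping is an illustrative assumption.
from enum import Enum

class Mode(Enum):
    VIRTUAL_MACHINE = "vm"         # option 1: EMI inside a VM
    BARE_METAL = "bare-metal"      # option 2: continuously running core services

DEPLOYMENT_MODE = {
    "EMI-UI": Mode.VIRTUAL_MACHINE,
    "CREAM": Mode.VIRTUAL_MACHINE,
    "GIN-endpoint": Mode.BARE_METAL,  # permanent endpoint on a dedicated node
}

for service, mode in DEPLOYMENT_MODE.items():
    print(f"{service}: deploy as {mode.value}")
```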

  • What is the support for installing the EMI software on FutureGrid resources?
    • the same problem exists with XSEDE
    • policy: EMI folks could do the installation on their own, but the allowed commands would need to be limited
    • sudo is not allowed at the moment, but there is work going on to change the policies
    • first establish a level of trust
    • FutureGrid staff would do this at the moment, but they are overworked and do not have the capacity to install the software
    • How exactly is access to the systems done? Access could also be given indirectly

  • What types of platform do you support?
    • When is SSH access available?
    • Some EMI services need specific ports to be open and specific OSs (see the port-check sketch below)
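
To illustrate the ports question, here is a minimal, hypothetical Python sketch that probes whether typical EMI service ports are reachable on a candidate host. The host name is a placeholder, and the port numbers (8443 for CREAM, 2170 for the site-BDII, 2811 for GridFTP) are common defaults rather than confirmed FutureGrid settings.

```python
# Hypothetical sketch: check whether the ports some EMI services need
# are reachable on a candidate host. Host name and port numbers are
# illustrative assumptions, not a confirmed FutureGrid configuration.
import socket

SERVICE_PORTS = {"CREAM": 8443, "site-BDII": 2170, "GridFTP": 2811}

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

host = "example-node.futuregrid.org"  # placeholder host name
for service, port in SERVICE_PORTS.items():
    status = "open" if port_open(host, port) else "closed/filtered"
    print(f"{service} (port {port}): {status}")
```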

  • Permanent training infrastructure setup with permanent endpoints?
    • other end-users?
    • certificates?
    • for UNICORE, there is a CA from which to get the credentials
    • A portal account is needed; everybody needs to have a portal account
    • not InCommon, but a security working group should be set up, e.g. around IGTF and X.509?
    • Integration of accounts for EMI?! Automatically get a portal account
    • Nimbus automatically creates certificates; end-users pick them up from FutureGrid (see the certificate-inspection sketch below)
    • Events, e.g. the GridKa School or similar
      • 1 - get a portal account; a join button will be added, e.g. "EMI training project"
      • 2 - ... needs to be
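
For context on the certificate questions above: once an end-user has picked up a credential (e.g. from the UNICORE CA, or generated via Nimbus), it can be sanity-checked before use. The sketch below is a hypothetical example using the third-party Python 'cryptography' package; the file path is a placeholder.

```python
# Hypothetical sketch: inspect an X.509 user certificate before using it
# on the testbed. The path is a placeholder; requires the third-party
# 'cryptography' package (pip install cryptography).
from datetime import datetime, timezone
from cryptography import x509

with open("usercert.pem", "rb") as f:  # placeholder path
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject:", cert.subject.rfc4514_string())
print("Issuer:", cert.issuer.rfc4514_string())
expires = cert.not_valid_after.replace(tzinfo=timezone.utc)
remaining = expires - datetime.now(timezone.utc)
print(f"Expires: {expires} ({remaining.days} days left)")
```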

  • Large-scale testing?
    • What exactly does this mean: requirements, minimum number of sites, number of end-users at the same time
    • Scalability tests?
    • Testbed: the 5 distributed environments of FutureGrid are good for tests
    • not really large-scale testing, but many users' concurrent usage of the same resources (see the load sketch below)
    • using all resources would not be possible; large-scale testing needs to be done on XSEDE
    • a test plan for large-scale testing needs to be provided by the PTs, then checked case by case
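
To make "many users, concurrent usage of the same resources" concrete, the load sketch below simulates a number of concurrent users against a placeholder submit_job() call. Everything in it (user count, latency, failure rate, the function itself) is an illustrative assumption, not a real EMI client.

```python
# Hypothetical sketch: simulate N users hitting one endpoint at the same
# time and report failures. submit_job() is a placeholder for a real
# EMI client call (e.g. a job submission).
import concurrent.futures
import random
import time

def submit_job(user_id):
    """Placeholder for a real submission; sleeps to mimic service latency."""
    time.sleep(random.uniform(0.05, 0.2))
    if random.random() < 0.1:  # pretend ~10% of requests fail under load
        raise RuntimeError(f"user {user_id}: submission rejected")
    return f"user {user_id}: job accepted"

USERS = 50  # number of simulated concurrent users (illustrative)
ok = err = 0
with concurrent.futures.ThreadPoolExecutor(max_workers=USERS) as pool:
    futures = [pool.submit(submit_job, u) for u in range(USERS)]
    for fut in concurrent.futures.as_completed(futures):
        try:
            fut.result()
            ok += 1
        except RuntimeError:
            err += 1
print(f"{ok} submissions succeeded, {err} failed")
```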

Next Steps

  • TBD (Morris): Send e-mail to FutureGrid contact
    • Done; Follow-up!
    • Possibly a telcon on 2011-09-06 with Gregor, Danilo and Morris
    • TODO (Morris, Danilo): Formulate requirements (OS); a description of the services is needed
    • TODO (Morris): Identify the EMI packages and services out of this that should be on FutureGrid - perhaps use one very flexible service
    • TODO (Morris): Project submission to FutureGrid - file a project
    • TODO (Morris): EMI slide set for Gregor
    • TODO (Morris, Gregor): Meet at ISC Cloud



Topic attachments

  • 2011-11-02-EMI-fg-project-proposal.pdf (PDF, 70.5 K, 2011-11-02 22:31, MorrisRiedelExCern): EMI FutureGrid Project Proposal (submitted on 2011-11-02)