-- JamieShiers - 22 Jan 2008

General

General concern that sites - in particular Tier2s - might not be fully in the loop regarding requirements, their involvement, configuration issues, etc.

Overall site readiness is also a concern.

NDGF

The CERN FTS, AFAIK, still does not support GridFTP protocol version 2. This means that transfers to NDGF handled by the CERN FTS are limited to a total of about 1 Gbit/s. While this is sufficient for MoU data rates, it is a factor of 5-10 lower than what should be possible.

There are some remaining uncertainties about how the space manager will interact with everything else once we enable it, but that will just have to be worked out in production as we go along.
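For reference, enabling the space manager in dCache 1.8 involves, roughly, switching it on in the dCacheSetup file on the SRM node and authorizing VO roles to reserve space in link groups. A minimal sketch; the link group name and FQAN here are illustrative assumptions, not NDGF's actual configuration:

```
# dCacheSetup (SRM/head node): turn on the space manager,
# and let transfers without an explicit space token be placed
# implicitly into a matching link group.
srmSpaceManagerEnabled=yes
srmImplicitSpaceManagerEnabled=yes

# LinkGroupAuthorization.conf: allow the ATLAS production role
# to reserve space in the (hypothetical) link group "atlas-lg".
LinkGroup atlas-lg
/atlas/Role=production
```

Which FQANs need to appear per link group depends on the experiment's space-token layout, which is exactly the kind of information sites are asking the experiments to spell out.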

The automatic GOCDB broadcasts don't reach the right people. We currently have a scheduled read-only downtime for our T1 storage, and the only ATLAS people who knew were a few Nordic people who had got that information through other channels.

This clearly needs to be addressed, both in being able to select the right people to be notified and in getting proper feedback on who the downtime announcement has been sent to.

Mattias Wadenstein

IN2P3

From my site's point of view, what I would like to see from each experiment is specific and clear answers to each of the questions we (sites) documented and presented on December 17th (http://indico.cern.ch/conferenceDisplay.py?confId=25342).

Currently, at our site, we are trying to answer those questions ourselves based on the information provided by the experiments. We are using the information in the wikis at the links below:

ATLAS, LHCb, ALICE, CMS

However, the task is not easy. First, there is another link for CMS (N.B. this link is the correct one! -- ed.), and there is no specific information on the sizes of the SRM storage spaces that should be configured for CMS. Second, at the ALICE URL above there is no information about the size of the requested zones.

My concern is that there are 11 sites all doing exactly the same work of digging through the available documentation to extract the information needed to configure their infrastructure for this exercise: this does not seem very efficient to me. Given the lack of unambiguous reference information, it is very likely that the configured systems will not match the experiments' requirements. In addition, there is no easy way for us (sites) to check that we did our homework and configured all the necessary pieces.

I hope this clarifies my concerns, which obviously may not necessarily be shared by other sites.

Regards,

Fabio

NL-T1

The biggest issue we currently have is that our SRM SE running dCache is not in production. After upgrading to dCache 1.8.0-11 and configuring space management, we ran into problems. The problem we see is that users are not correctly mapped to local accounts according to their VOMS attributes. Yesterday we upgraded to the latest version, patch level 12, but this did not solve the problem.

We reported this problem to DESY last week. As far as I know, Tigran is currently working on it. If it is not solved, it is a show-stopper for us for CCRC.
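For context, the VOMS-to-user mapping in question is typically driven by gPlazma's grid-vorolemap file, which maps a certificate DN plus VOMS FQAN to a local account. A minimal sketch of what such a mapping looks like; the account names are illustrative assumptions, not NL-T1's actual configuration:

```
# /etc/grid-security/grid-vorolemap (gPlazma vorolemap):
# format is "DN" "FQAN" username; "*" matches any DN,
# so these lines map by VOMS role/group alone.
"*" "/atlas/Role=production" atlasprd
"*" "/atlas"                 atlas001
```

The symptom described above would correspond to transfers not ending up under the account their FQAN selects here; more specific lines are matched before more general ones.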

Greetz., Mark

Topic revision: r3 - 2008-01-23 - JamieShiers