---+ Workshop and Tutorials
-- Main.JamieShiers - 17 Mar 2006

---++ Draft Agendas

   * [[http://indico.cern.ch/conferenceDisplay.py?confId=1148&view=egee_meeting&showDate=all&showSession=all&detailLevel=contribution Tier2 workshop]] and [[http://indico.cern.ch/conferenceDisplay.py?confId=a058483 tutorials]] - CERN, 12 - 16 June
   * (There will also be a JointOperations workshop at CERN the following week)

---++ Comments and Suggestions

%RED%Please feel free to add your comments and suggestions below! It will help organise the workshop so that it addresses the real needs of the sites and the experiments.%ENDCOLOR%

---++ From Marco La Rosa [mlarosa@physics.unimelb.edu.au]

Is it possible to have a discussion about the timeframe for migration to SL4? Specifically, it would be good to know whether SE packages are available for SL4 and, if so, where we can get them.

Recent tests have revealed an RTT of ~350 ms from Australia to Taiwan. Consequently, for us to achieve any reasonable transfer rates we will need to tune the TCP stack at the endpoints of the connection (our Tier 2 and Taiwan, our Tier 1) - and this requires a Linux distribution with a 2.6 kernel.

Marco La Rosa (University of Melbourne, Australia)
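A back-of-the-envelope bandwidth-delay product calculation illustrates why a 2.6 kernel (with TCP window scaling and auto-tuned buffers) matters on such a path. Only the ~350 ms RTT comes from the tests quoted above; the 1 Gb/s link rate is an assumed figure for illustration:

<verbatim>
# Bandwidth-delay product: the TCP window needed to keep a long-fat
# pipe full. Only the ~350 ms RTT is from the tests quoted above;
# the 1 Gb/s link rate is an assumption for illustration.

def bdp_bytes(bandwidth_bps, rtt_s):
    """Return the bandwidth-delay product in bytes."""
    return bandwidth_bps * rtt_s / 8

rtt = 0.350    # ~350 ms Australia <-> Taiwan
link = 1e9     # assumed 1 Gb/s end-to-end path
window = bdp_bytes(link, rtt)
print("Required TCP window: %.1f MiB" % (window / 2**20))
# ~41.7 MiB - far beyond the 64 KiB limit of an unscaled TCP window,
# hence the need for window scaling and the larger, auto-tuned
# buffers available in 2.6-series kernels.
</verbatim>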
---++ From Cal Loomis [Charles.Loomis@cern.ch]

Aim to have a Quattor tutorial in conjunction with the T2 workshop. See the presentation at the [[http://agenda.cern.ch/fullAgenda.php?ida=a057704 Rome GDB]] for more information.

---++ From Coles, J (Jeremy) [J.Coles@rl.ac.uk]

[[http://www.gridpp.ac.uk GridPP]] agreed to fund one person per site, plus those who are contributing like Graeme. So far I have had 20 site requests and a couple of others, so I imagine we could send 25 people. In principle this is good, as you'll get direct communication with the people who will make it work at the root level. In practice it means the event will need more input to ensure people get what they need for the full period. For example, if there is a spare morning or afternoon I am keen to set up some specific sessions to address training that UK people really need (we don't get them in one place very often) - other T2s may also need it; I just have not had time to develop a list and plan. Some example areas that come to mind:

   1. Configuring schedulers with fair-share policies.
   1. Detailed information on job/data flows within the middleware (this comes down to a better understanding of the architecture).
   1. Review of middleware configuration options - we've seen sites using incorrect scaling factors etc.

I would set up any extra sessions in parallel with those of less relevance to UK people, but I would need help securing a room - laptops could provide the link for any hands-on work required. Just an idea.

> _But should it simply be first-come-first-served? Should the experiments / ROCs / T1s decide / provide their input?_

We see this as important, as there are not many occasions to get the experiment representatives and sysadmins in one place. Cascading the knowledge is less effective, which is why we are encouraging people to go. I doubt other countries will send so many, but there needs to be some capping process in place. Either this is done in proportion to the resources that are or will be provided by a given site or country, or via a flat quota, which will discriminate more! If there is a flat quota then the Tier-1/local organisation will have to select. First-come-first-served may bias attendance towards countries like the UK which are already organised! I can give you a full list of names today.

If I had to decide, I would try to get those unable to attend the LCG session to go to the operations workshop the week after! Personally I think the event should be open, but unless you have a registration procedure you will not know how many people are likely to come. Perhaps the question should be asked at the GDB next week, and countries should come prepared to answer.

On the experiment side it would be useful to have present the people who actually submit the jobs and experience problems with/at the sites, not just someone who can talk at the generalist level. We do need the higher and broader view represented and explained, but experience from [[http://www.gridpp.ac.uk GridPP]] meetings is that sysadmins pay more attention to technical discussions that are of direct concern to them. Thus, having people from all sides who can talk at the technical level and understand sysadmin issues would produce more useful discussion and output!

I will put together more ideas for some of the possible parallel sessions if you think this might work. Having just looked at the tutorials page, it looks like there is still space for ideas in the main programme anyway. What happens in the experiment session though? Experiment software is supposed to be installed by the sgm for the VO, so I'm curious about what will fall into these sessions.

Regards, Jeremy

---++ From Graeme Stewart [g.stewart@physics.gla.ac.uk]

I was musing over the T2 workshop, looking at the WLCG services and tutorial sections, and thinking about contributions that the UKI ROC could make. I make these suggestions without any idea of what the other ROCs have been doing; I'm sure you both have a better overview and I won't be offended if you turn these down! (Oh, Jeremy might have suggestions on the operational front - I don't know, but I see Jeff's chairing that session.) So, areas where we are well advanced:

Storage: every UK T2 has an SRM. The split ended up ~1/3 dCache and ~2/3 DPM. This offers the possibility of something in the LCG services discussion: SRM in general? DPM specifically (me) or dCache specifically (Greig)? But... it's possible that it would be better to have one of the SRM developers give an overview of the storage systems (Patrick, Jean-Philippe?) and for us to collaborate on and contribute to a storage tutorial on the Thursday or Friday.

Data management: as you know, the UK has done a lot in testing T1<->T2 file transfers. This is, of course, intimately related to storage, and indeed the client setup for T2s and data management is pretty much a no-brainer these days (especially if we realise our goal of FTS clients using the EGEE.BDII). We do have quite a nice script for doing T2 data transfers, which should probably be publicised more widely. Shaking out all the problems with networks, FTS and storage is a lot of work, and a review of our experiences here might be useful. The main thing that remains tricky for T2s to optimise (once they've ripped out that last piece of 100Mb cable) is T2 storage write rates; we now seem to have a good understanding of the issues. There's scope for an overview of FTS, as well as a tutorial on using it - of course, you have Gavin and Paolo to hand.

That's all I can think of right now - let us know how we can help!

Cheers, Graeme
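For the FTS tutorial angle, something along the following lines could serve as a starting point. This is not the UK transfer script mentioned above, just a rough sketch of driving the glite-transfer command-line clients from Python: the FTS endpoint and SURLs are placeholders, and the exact client options and job-state names may differ between FTS releases, so check =glite-transfer-submit --help= on your UI.

<verbatim>
# A rough sketch of submitting and polling an FTS transfer via the
# glite-transfer CLI. The endpoint and SURLs below are placeholders;
# exact options and state names may differ between FTS releases.
import subprocess
import time

FTS = "https://fts.example.org:8443/glite-data-transfer-fts/services/FileTransfer"
SRC = "srm://t1-se.example.org/dpm/example.org/home/dteam/testfile"
DST = "srm://t2-se.example.org/dpm/example.org/home/dteam/testfile"

# Submit the job; the client prints the job ID on stdout.
job_id = subprocess.check_output(
    ["glite-transfer-submit", "-s", FTS, SRC, DST], text=True).strip()

# Poll until the job reaches a terminal state.
while True:
    state = subprocess.check_output(
        ["glite-transfer-status", "-s", FTS, job_id], text=True).strip()
    print(job_id, state)
    if state in ("Done", "Failed", "Canceled"):
        break
    time.sleep(30)
</verbatim>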
---++ From LEROY Christine DAPNIA [c.leroy@cea.fr]

For the tutorial, it would also be good to have sessions:

   * on the information system (misconfigured sites can become black holes ... see the sketch at the end of this page);
   * on monitoring and accounting (how will we measure the production at a site?);
   * on the submission systems of the experiments (DIRAC, BOSS, ProdSys, AliEn?? ... to know what the flow of the jobs is, their requirements, ...);
   * on xrootd, if it is required;
   * on the VObox, if it is required.

Cheers, Christine

---++ From DONNO Flavia CERN [Flavia.Donno@cern.ch]

The tutorials are a good occasion for GGUS supporters to learn more about the services run for SC4 and the problems site admins might encounter during installation and operation. Therefore GGUS ROC supporters should be strongly encouraged to attend these tutorials, and GGUS TPMs (Ticket Processing Managers) should be encouraged to follow them as well. The tutorial material should be made available online. If possible, the tutorials should also be digitally recorded and the recordings made available online for further use.

Cheers, Flavia
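On the information-system point in Christine's list above: one quick way to spot potential black-hole sites is to query the top-level BDII for implausible GlueCE values. Below is a minimal sketch using python-ldap; the BDII host is a placeholder (substitute your ROC's or CERN's top-level BDII), and 444444 is the conventional "value unknown" placeholder in Glue 1.x publishing.

<verbatim>
# A minimal sketch, assuming python-ldap and a reachable top-level
# BDII (the host below is a placeholder - substitute your own).
import ldap

con = ldap.initialize("ldap://lcg-bdii.example.org:2170")
entries = con.search_s(
    "o=grid", ldap.SCOPE_SUBTREE,
    "(objectClass=GlueCE)",
    ["GlueCEUniqueID", "GlueCEStateWaitingJobs"])

for dn, attrs in entries:
    ce = attrs.get("GlueCEUniqueID", [b"?"])[0].decode()
    waiting = int(attrs.get("GlueCEStateWaitingJobs", [b"-1"])[0])
    # 444444 is the conventional "value unknown" placeholder in
    # Glue 1.x; a CE stuck publishing it (or any wildly implausible
    # figure) deserves a closer look before it swallows jobs.
    if waiting >= 444444:
        print("suspicious CE: %s publishes WaitingJobs=%d" % (ce, waiting))
</verbatim>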