1a. [Igor] Document expected scale and test at 2x scale.
  • File open rate
  • Number of connections
  • Data rate (MB/s)
  • Detailed assumptions: one client reads 10 events/sec at ~100 kB/event from 2 GB files; peak global analysis is 35k jobs, with a 1/3 US share and a 1/2 overflow share (see the sketch below).
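A quick check of what these assumptions imply, as a Python sketch. Interpreting "10 Hz evt/sec" as 10 events/sec and "100k/evt" as 100 kB/event, and applying the 1/2 overflow share to the US third, are all assumptions here:

    # Per-client I/O implied by the assumptions above (assumption:
    # 10 events/sec at ~100 kB/event, read from 2 GB files).
    evt_rate = 10.0        # events/sec per client
    evt_size = 100e3       # bytes/event
    file_size = 2e9        # bytes/file

    client_rate = evt_rate * evt_size        # 1 MB/s per client
    open_rate = client_rate / file_size      # ~5e-4 file opens/sec per client

    # Peak global analysis: 35k jobs; assume 1/3 US and 1/2 of those overflow.
    overflow_jobs = 35000 / 3.0 / 2.0        # ~5800 concurrent clients

    print("aggregate rate:  %.1f GB/s" % (overflow_jobs * client_rate / 1e9))
    print("aggregate opens: %.1f /s" % (overflow_jobs * open_rate))
    print("connections at 2x test scale: %d" % (2 * overflow_jobs))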

1b. [Frank] User documentation
- workbook twiki
- anaOps education

1c. [Brian - progressing on target] Deploy system at 5 sites and make it manageable for long-term operation

  • How does admin/manager know if it is broken? Nagios emails; RSV probe written
  • Manager should get report in morning RSV emails. RSV probe written, not deployed.
  • alarms when something is broken. In place, needs review.
  • site admin documentation. In place, needs review.
  • SAM-like test (James Letts is working on a JobRobot-like test that reads one file that is supposed to be there and one that isn't; see the sketch after this list). Not done.
  • monitor IO perf for WAN and LAN. Partially done - see monitoring section.
  • dashboard will examine job report to determine if job overflowed, so this data should be available. Done.
  • Place this info in Gratia. Done.
  • Operations plan. Not done.
  • Decide on a set of metrics for overall system that we expect it to deliver on a regular basis. Not done.
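A minimal sketch of the probe logic for the SAM-like test, in Python. The redirector is the real US top-level one, but both LFNs are hypothetical placeholders, and this assumes xrdcp is on the PATH:

    import subprocess

    REDIRECTOR = "root://xrootd.unl.edu/"
    GOOD_LFN = "/store/test/should_exist.root"      # hypothetical test file
    BAD_LFN  = "/store/test/should_not_exist.root"  # hypothetical missing file

    def can_read(lfn):
        # Copy the file to /dev/null; exit code 0 means it was readable.
        return subprocess.call(["xrdcp", "-f", REDIRECTOR + lfn, "/dev/null"]) == 0

    # Healthy if the real file is readable and the missing one correctly fails.
    ok = can_read(GOOD_LFN) and not can_read(BAD_LFN)
    print("OK" if ok else "CRITICAL")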

1d. [Ken -- have discussed with Rob S., but limited adoption so far -- try again at OSG AHM] work with at least one T3
- have a T3 (with good WAN) use xrootd to access local NFS plus xrootd.unl.edu
- create a test instance to study scale and performance of this
- this may relate to OSG campus grids activity: want to show that this storage solution allows CMS to make use of campus resources with little investment

2. [Matevz] Deploy usage accounting and monitoring of abuse.
- accounting of xrootd activity that is relevant to AAA
(e.g. we need to filter out local xrootd usage at Wisconsin; apparently this can be done by looking at a difference in the URL)
- We think 95% accuracy is acceptable, and we believe a single MonALISA instance (at UCSD) can achieve this (more instances could be added with a configuration change at all sites)
- Status monitoring
- Abuse monitoring
- high-water mark for individuals/total, based on the scale we have defined as reasonable (see the sketch after this list)
- redirector/cmsd monitoring
- visualization (Derek's fancy thing)
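A sketch of what the high-water-mark abuse check could look like, assuming the summary monitoring can be reduced to per-user aggregate read rates (with local-access records, e.g. Wisconsin's, already filtered out); the limits are placeholders, not agreed values:

    # Hypothetical per-user aggregate read rates (MB/s) from summary monitoring.
    user_rates = {"alice": 120.0, "bob": 2400.0}

    USER_LIMIT_MBS  = 1000.0   # placeholder per-user high-water mark
    TOTAL_LIMIT_MBS = 6000.0   # placeholder total, tied to the 1a scale numbers

    alarms = [u for u, r in user_rates.items() if r > USER_LIMIT_MBS]
    if sum(user_rates.values()) > TOTAL_LIMIT_MBS:
        alarms.append("TOTAL")

    for a in alarms:
        print("ABUSE ALARM: %s over high-water mark" % a)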

In general, this is going well. Both the MonALISA and Gled detailed monitoring work well, and there is steady progress on all fronts.

What is still missing:

  • abuse alarms (we need to decide what constitutes abuse; otherwise all the machinery is really there);
  • a top-level dashboard page, mashing up relevant info from all monitoring;
  • redirection detailed monitoring (was waiting for xrootd implementation to stabilize);
  • fancy visualization -- on the side, but not forgotten.

Problems:

  • correlation with jobs -- we'll get some info from CMSSW jobs into xrootd monitoring stream with 5_2;
  • handling of alarms in MonALISA is somewhat broken;
  • Wisconsin sends monitoring info for internal and external access mixed together. This is only a problem for summary monitoring.

3. [Dan - not started] Smarter source routing v1.0
- current source selection in xrootd is weighted round robin
- weights are reported by the sites themselves, so they may not be very accurate
- perhaps a site should only be able to "downgrade" itself and not "promote" itself (see the sketch after this list)
- currently, files are at 2 T2s plus FNAL; in future, it may be 1 T2 plus FNAL, in which case we would prefer T2 over FNAL unless T2 is overloaded
3a. define metric to determine how well system is working
3b. If 3a determines current algorithm is "bad", develop a new one
- idea for v1.0 (1st year): be able to "upgrade" weight of site via central configuration
- idea for v2.0 (2nd year): have a centralized mechanism for adjusting the configuration
- potential variables to include: network usage of sites, # connections, cpu eff of overflow jobs, exit codes
- the above considers only xrootd metrics, but it may also be good to include actual network metrics. Brian thinks networking people (such as Harvey) would be happy to provide ideas.
- Brian says that Miron considers this a Condor problem (in the broader sense of Condor), so he would be good to consult.
3c. implement
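A sketch of the selection logic under discussion: weighted random choice standing in for xrootd's weighted round robin, with the proposed "downgrade-only" rule enforced by capping the site-reported weight at a centrally configured maximum. Site names and weights are illustrative:

    import random

    # Central configuration: the maximum weight each source may claim.
    CENTRAL_MAX   = {"T2_US_Nebraska": 10, "T2_US_UCSD": 10, "T1_US_FNAL": 5}
    # Weights as reported by the sites themselves (possibly inaccurate).
    SITE_REPORTED = {"T2_US_Nebraska": 10, "T2_US_UCSD": 4, "T1_US_FNAL": 20}

    def effective_weight(site):
        # A site may only "downgrade" itself: its self-reported weight
        # cannot exceed the central cap.
        return min(SITE_REPORTED[site], CENTRAL_MAX[site])

    def pick_source(sites):
        # Weighted random choice over the effective weights.
        weights = [effective_weight(s) for s in sites]
        r = random.uniform(0, sum(weights))
        for site, w in zip(sites, weights):
            r -= w
            if r <= 0:
                return site
        return sites[-1]

    # FNAL's low cap encodes "prefer a T2 over FNAL unless it is overloaded".
    print(pick_source(["T2_US_Nebraska", "T2_US_UCSD", "T1_US_FNAL"]))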

4. Develop necessary gWMS enhancements for opportunistic and T3s

4a. [Dan - not started] regulation mechanism
- existing solution uses frontend to partition overflows (done)
- discussion of concurrency limits vs. i/o slots
- if we wanted to regulate how many data "sinks" there are, we could use concurrency limits for that, but currently we don't worry about this
- data source regulation:
- concurrency limits: need the site to control the max (see the config sketch after this list)
- i/o slots: need to optimize matchmaking for this to even be feasible
- conclusion: implement concurrency limit enhancements; do i/o slot work as part of matchmaker enhancements in point 7 (year-2 deliverable)
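For reference, a minimal sketch of how the concurrency-limit approach could look in HTCondor; the limit name and values are purely illustrative:

    # Pool (negotiator) configuration: cap concurrent consumers of a data source.
    XROOTD_UNL_LIMIT = 50            # hypothetical per-source limit, site-controlled
    CONCURRENCY_LIMIT_DEFAULT = 10   # fallback for limits with no explicit value

    # Job submit description: each overflow job consumes one unit of the limit.
    concurrency_limits = xrootd_unl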

4b. [Dan - parrot/CVMFS integration done, deployed in Wisc T3 done, deploying to crab server not yet done] "transparent" software access
- CVMFS using parrot instead of fuse
- Brian says Doug Thain expressed interest in doing this
- prototype + scaling + test & evaluation
- Document why we chose this approach instead of dynamically installing software in the glidein
4c. [Igor] End-to-end overflow into opportunistic site using xrootd & 4b

5. CMSSW I/O

5a. [Brian] metrics. In progress
5b. [Matevz] standard candle
- be able to detect regressions in i/o performance
5c. improved 2-file solution (depends on 5d)
- putting less-used objects in a 2nd file that is read remotely
- many have expressed the opinion that this is problematic
5d. [Matevz] assess use of edm objects
- sample FJR from CRAB server for X days
- we don't want to bias the measurement towards small tasks; we want it proportional to data read, so if looking at just one FJR per CRAB task (for efficiency), perhaps multiply by the number of jobs in the task (see the sketch after this list)
- produce a histogram of branch usage
- find a subset of branches that is not used for Y% of the jobs
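A sketch of the 5d analysis, assuming each sampled FJR can be reduced to a weight (the job count of its CRAB task, per the note above) plus the set of branches it read; real FJR parsing is omitted and the branch names are invented:

    from collections import Counter

    # One sampled FJR per CRAB task: (task job count, branches read).
    samples = [
        (120, {"Electrons", "Muons", "Jets"}),       # illustrative
        (15,  {"Electrons", "Jets", "CaloTowers"}),  # illustrative
    ]

    usage = Counter()
    total = 0
    for weight, branches in samples:
        total += weight
        for b in branches:
            usage[b] += weight

    # Histogram of branch usage, plus branches unused by >= Y% of weighted jobs.
    Y = 80.0
    rarely_used = [b for b, n in usage.items()
                   if 100.0 * (total - n) / total >= Y]
    print(usage.most_common())
    print("candidates for the 2nd file:", rarely_used)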
5e. [Brian] improve measurements in CMSSW
- xrootd statistics in FJR. Done.
- exit codes; use a different exit code if there was a fallback and the fallback failed. Done.
- recording fallback in FJR. Needs verification.

Matevz: Haven't really started, but I have been thinking about this (especially in the context of the caching-proxy implementation), and we have better tools ready now.

  1. Detailed xrootd monitoring;
  2. XrdAdaptor on the CMSSW side.

Brian: Furiously writing XrdAdaptor support into CMSSW_5_2. It will be a close finish.

6. (year 2) Caching
- based on last week's discussion at FNAL, we don't envision trying to turn the T2s into caches purely managed by xrootd
- however, we may still want some cache management capabilities
- Matevz will at least keep this option open in his work with the xrootd folks

  • We decided to focus on a caching proxy for "automatically healing storage".
  • Spent ~ two weeks understanding the situation in xrootd.
  • I could have something in about a month at 50% engagement.

7. (year-2 deliverable)

7a. [Igor] Condor matchmaking changes for data locality
- the current CRAB mechanism is very coarse-grained (dataset-level)
- we perhaps want per-file granularity

10 PB/year of data from the experiment.
Suppose the total data we need to select from is 100 PB and the file size is 10 GB --> 10^7 files x 10 locations per file = 10^8 database rows.
Each LFN is ~10^2 bytes, so the table is ~10^10 bytes (checked below).
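The same estimate as a quick check in Python:

    total_data = 100e15     # bytes of selectable data (assumed)
    file_size  = 10e9       # bytes per file
    nfiles = total_data / file_size    # 1e7 files
    rows   = nfiles * 10               # 10 locations/file -> 1e8 rows
    size   = rows * 100                # ~100 bytes per LFN -> 1e10 bytes
    print("%g files, %g rows, %g GB" % (nfiles, rows, size / 1e9))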

Plan: Igor + a student will work on implementing matchmaking via a DB join; Dan will work on making a reasonable plugin architecture in the matchmaker so this can fit in (toy illustration below).
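A toy illustration of per-file matchmaking as a DB join (Python + sqlite3; the schema and data are invented, and the real version would live behind the matchmaker plugin interface):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE file_location (lfn TEXT, site TEXT);
        CREATE TABLE job_input     (jobid INTEGER, lfn TEXT);
    """)
    con.executemany("INSERT INTO file_location VALUES (?, ?)", [
        ("/store/a.root", "T2_US_Nebraska"),
        ("/store/a.root", "T1_US_FNAL"),
        ("/store/b.root", "T2_US_UCSD"),
    ])
    con.executemany("INSERT INTO job_input VALUES (?, ?)",
                    [(1, "/store/a.root"), (1, "/store/b.root")])

    # Rank candidate sites for job 1 by how many of its input files are local.
    for site, nlocal in con.execute("""
            SELECT f.site, COUNT(*) AS nlocal
            FROM job_input j JOIN file_location f ON j.lfn = f.lfn
            WHERE j.jobid = 1
            GROUP BY f.site ORDER BY nlocal DESC"""):
        print(site, nlocal)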

7b. [Dan - not started] I/O slot issues that have nothing to do with matchmaking (e.g. schedd claiming multiple resources)
- Igor will deal with the matchmaking part of I/O slots as part of his DB MM project

8. Operations Tasks
8a. [Brian - progressing nicely] Operate Services
- top level redirector in US and at CERN
- monitoring infrastructure in US
- listserv for AAA (ANYDATA@listserv.unl.edu)
- twiki (CMS?)
8b. [Brian] attend to trouble tickets
- respond to savannah tickets from sites/anaOps/...

8c. [Matevz] collect and present internal project metrics
- ops presentation at AAA meetings (and annual report, CMS meetings, OSG meetings, ...)
- attend to monitoring to investigate issues before users generate tickets


MonALISA plots seem to be doing fine for presentations so far.

There are practically no recurring problems. For two weeks it seemed authentication failures would be with us forever, but they have subsided (plus I have email alarms for sites that run xrootd-3.1 or later).

Summary of who "owns" what:

Brian 1c, 5a, 5e, 8a, 8b
Dan 3, 4a, 4b, 7b, misc condor
Frank 1b
Igor 1a, 4c, 7a
Ken 1d, mgmt
Matevz 2, 5b, 5d, 8c

Student Work
- reports based on monitoring (tables, charts, emails)

Proposed Agenda for bi-weekly AAA meeting:
1. Infrastructure
2. xrootd s/w
3. roundtable
4. special topics

-- KenBloom - 07-Feb-2012
