Summary of GDB meeting, December 10, 2014 (CERN)


Introduction - M. Jouvin

Future GDB exceptions

  • The March meeting will be in Amsterdam and co-hosted by EGI and Nikhef
  • The meeting in April is cancelled due to the workshop in Okinawa.
  • October GDB during HEPiX at BNL: an occasion for a meeting in the US?

Pre-GDBs planned at each GDB until next summer

  • January: Data Management
  • February: F2F meeting of Cloud traceability TF
  • March (Amsterdam): cloud issues
  • Spring: batch systems, volunteer computing, cloud accounting
  • As always, let Michel know about topics of interest to consider for future meetings.

Status of actions in progress

  • perfSONAR reinstall becomes urgent
  • Feedback on ginfo by experiments
  • gstat relevance for WLCG: no longer critical, sites can ignore it
    • Reminder that BDII issues are now reported by SAM tests (glue-validator in particular)
  • xrootd v4 deployment: milestone refinement needed if we want to make progress
  • T1s requested to join IPv6 working group: some did since last month, still a few missing
  • H2020 VRE proposal in EINFRA-9: work in progress, should make it in time
  • GEANT Data Protection Code of conduct endorsed by MB
    • Letter addressed to identity providers and agreement on how service providers will handle personal data (attributes) released by IdP
    • Will have to be taken into account by WLCG services using federated identity but imposes no difficult requirements
  • Cloud adoption document: still time to send feedback to Laurence Field

Reminder of forthcoming meetings:

  • HEP Software Foundation workshop at SLAC (20-22nd Jan)
  • ISGC Taipei (15-20th March)
  • HEPiX Oxford (23-27th March)
  • WLCG workshop Okinawa (11th-12th April)
  • CHEP Okinawa (13th-17th April).

Machine/Job Features TF Update - S. Roiser

The mechanism provides a way to convey information to the user either via the WN filesystem or through some other resource or store. Two blocks of information are provided to jobs:

  • Machine: WN power, shutdown time, total # of slots, # of physical and logical cores...
  • Job: CPU/WC time limit, scratch space limit, allocated number of cores...
  • Actual list of information can be extended easily
  • Architecture reviewed

The resource provider is responsible for populating the feature store and returns a pointer to the user that can be consumed once

  • Pointer can be environment variables or an http-based metadata service
  • Can be read only once to avoid high load on the metadata service

Implementation available for all batch systems

  • SLURM not yet complete but should be soon
  • More reference implementations than off-the-shelf component: may require some tweaking at site

Virtualized infrastructure status

  • First implementation proposed on top of CouchDB, but with several differences from the batch system implementation: a client was developed to abstract these differences
  • Second implementation based on Apache that mimics the batch system implementation 1 to 1 and makes a dedicated client no longer necessary
    • Python urllib can deal both with http urls and local files
    • X509 authentication easy to integrate
    • Easy to scale
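A minimal sketch of the consumption pattern above, assuming the pointer is an environment variable such as $MACHINEFEATURES that resolves either to a directory on the WN filesystem or to an http URL, with one file/resource per key (the key name total_cpu and the one-file-per-key layout are illustrative assumptions):

```python
import os
import urllib.request  # handles both http:// URLs and local file:// paths


def read_feature(block_dir, key):
    """Read one machine/job feature value.

    block_dir: the pointer given by the resource provider, e.g. the value
    of $MACHINEFEATURES or $JOBFEATURES. It may be a local directory on
    the WN or an http URL; urllib covers both, as noted in the talk.
    """
    location = block_dir.rstrip("/") + "/" + key
    if "://" not in location:  # plain WN filesystem path
        location = "file://" + os.path.abspath(location)
    with urllib.request.urlopen(location) as handle:
        return handle.read().decode().strip()


# Hypothetical usage on a worker node:
# machine_dir = os.environ["MACHINEFEATURES"]
# total_cpu = int(read_feature(machine_dir, "total_cpu"))
```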

Status & ToDo

  • 2 sites for each batch system implementation: get more sites deploying them
  • Apache-based service running at one UK site and one experiment VOBOX
  • Client not needed anymore
  • Bi-directional use case: a CMS request but no interest from the other experiments and not clear if still needed
    • Would make the implementation more complex
    • Do we really want to pursue?
  • More information:

Proposal: roll-out the existing implementation to all sites

  • Finish the TF by Easter


  • How does a site test their implementation?
    • There is a SAM job that can be adapted.
  • CMS no longer has effort to follow-up on the bi-directional feature. It would be good to keep the possibility though.
    • Stefan: architecture allows it, just the implementation is not done and not necessarily trivial
    • Michel: agree on the pragmatic approach, the existing implementation is already a great improvement on the current situation
  • MJ: How do we ensure a general deployment? Still too early to follow the CVMFS rollout approach, probably need to get experience with a wider adoption first.
    • Stefan: also need to understand who is the resource provider in the case of virtualized infrastructure like VAC (more clear for clouds)

Action: collect sites with interest to be adopters of this technology in the short term.

  • Send names to Stefan.
  • Review the list at next GDB

Accounting Update - S. Pullinger

Batch system accounting WG has not been very active since its inception. A few outcomes anyway:

  • Improvement to GE parser
    • GE has no built-in mechanism for CPU time normalization
    • cpumult and wallmult attributes added to the node definition and have to be set by the site
    • Works with both the open source and Univa versions
  • An HTCondor parser
    • A new parser was written that produces a CSV file from Condor history files (rather than log files), which can then be parsed by the new parsing framework
  • Epoch dates
    • A workaround for some batch systems logging an epoch end time for jobs that failed to start: this resulted in very long apparent job times!
    • APEL will also properly handle jobs that started but have no end time registered because they were removed by site admins
  • Scheduling problem
    • Proper handling of preempted jobs that take a long WC time to complete, with other jobs interleaved
  • All of this should be part of next release expected Feb. 15
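The epoch-date workaround above amounts to guarding the duration calculation so that an epoch (or missing) end time does not turn a failed or admin-removed job into one apparently decades long. A hedged sketch (function and handling are illustrative, not APEL's actual code):

```python
from datetime import datetime, timezone

# The Unix epoch, which some batch systems log as the "end time" of
# jobs that never actually started.
EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)


def job_duration(start, end):
    """Wall-clock duration in seconds, guarding against the two
    pathologies described above: an epoch end time for jobs that failed
    to start, and a missing end time for jobs removed by site admins."""
    if start is None or end is None or start <= EPOCH or end <= EPOCH:
        return 0  # treat as zero-length rather than a multi-decade job
    return int((end - start).total_seconds())
```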


  • EMI3 APEL client gathers data on number of CPUs but has to be explicitly enabled
  • Several sites publishing directly with their own accounting database: all have plans to migrate to SSM2
  • Development portal now has a view that reports cores and uses WC*cores to compute efficiency: need to carefully check the numbers before putting them in the production portal
    • Several views allow seeing the respective numbers of single-core and multicore jobs per VO, site, submission host...
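The WC*cores efficiency calculation above can be illustrated as follows (a sketch, not the portal's actual code):

```python
def cpu_efficiency(cpu_seconds, wall_seconds, cores):
    """Efficiency as CPU time over wall-clock time scaled by core count.

    Without the core factor, an 8-core job running at full load would
    report roughly 800% efficiency: the >100% anomaly discussed later
    in the accounting session.
    """
    if wall_seconds <= 0 or cores <= 0:
        return 0.0
    return cpu_seconds / (wall_seconds * cores)


# 8-core job, 1 hour wall time, 7.2 core-hours of CPU: efficiency ~0.9
```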

EMI2 to EMI3 database migration still to be done: waiting for the last sites publishing to EMI2 to have switched to EMI3

  • Historic data will be moved then to EMI3
  • Unlikely before March

Cloud accounting

  • Accounting scripts available for OpenStack, OpenNebula and Synnefo, written by third parties
  • Accounting portal has a simple cloud view, VOView under test and Tree view coming soon
    • EGI may ask for a FedCloud view
    • Do we want a WLCG cloud view?
  • Cloud Usage Record being revised: granularity below cloud level, benchmark type and value, IP numbers, image names from a marketplace
    • Final round of discussions: as soon as agreed, will deploy it in database and then in providers
    • VM benchmarking still triggering a lot of discussions: share your thoughts!
  • Long-running VMs issue: accounting summaries only take into account payloads that completed during that month. May be a problem if VMs span many months.
    • One alternative could be to have each summary report the updates since the last report: a significant change, need to agree on the final proposal
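The "each summary reports the updates since the last report" alternative above can be pictured as turning cumulative per-VM usage snapshots into per-month deltas (purely illustrative; the actual Usage Record format is still under discussion):

```python
def monthly_deltas(cumulative_usage):
    """Given cumulative CPU seconds recorded for one VM at each monthly
    snapshot, emit per-month deltas so a VM spanning many months is
    accounted month by month instead of only in the month it ends."""
    deltas = []
    previous = 0
    for total in cumulative_usage:
        deltas.append(total - previous)
        previous = total
    return deltas


# Three monthly snapshots of one long-running VM:
# monthly_deltas([100, 250, 400]) -> [100, 150, 150]
```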


  • C. Grandi: INFN moved from DGAS to APEL. One thing missing is the ability to assign local jobs to a VO. Perhaps a plug-in?
    • J. Gordon (JG): some of the data is picked up, but showing local jobs is not implemented in the portal. Need to look at the use case. Easier if all local jobs are assigned to one "pseudo-VO".
  • Ulrich: cloud accounting tools: CERN using its own solution, which is in GitHub, ready for sharing and comments. Also, worried about long-running VMs... the approach is wrong. How should views be fed back?
    • JG: discussion mainly happens in the EGI Federated Cloud group: feel free to join that.
  • Stefan: what are the info collected for VMs?
    • S. Pullinger (SP): number of VMs run but also their duration and the CPU they used
  • J. Coles: why not enable proper publishing of multicore jobs by default, as it seems to be harmless for single-core jobs?
    • JG: was made optional when implemented in 2012 as there were not that many multicore jobs, mainly MPI ones requiring a specific action from the site. Also need to take into account that it triggers generation of a few additional reports...
    • MJ: every site now receives multicore jobs. Critical for WLCG to have the information published correctly ASAP: required to get an accurate efficiency calculation, and reporting more than 100% efficiency to FAs ruins the credibility of our other reports. Having the option turned on by default in the next release would help fix the problem quickly at many sites.
    • JG: new RPMs take ages to be widely deployed
    • MJ: not completely true now that almost all sites rely on YUM. And it can be tracked by the package reporter or similar tools, whereas it is currently very difficult to detect whether a site is publishing correctly.
    • JG: the only way to check whether a site publishes correctly right now is to treat as suspect any site not publishing multicore jobs...
    • P. Solagna: correct publishing of multicore jobs also critical for EGI. EGI OMB next week will probably ask all sites to turn on the 'parallel' option.
  • Jeff T: on long-running VMs, please do not assume VMs, in clouds in particular, will run for a long time. ATLAS presented a service last week showing how a site can shut off VMs whenever it wants without major impact.
    • Stefan: LHCb also planning on the same timescale as pilot jobs (ie. weeks not months).

Action: consensus seems to be that we should have the parallel flag on by default.

  • Stuart must check that there is no showstopper we may have overlooked.

Actions in Progress

OpsCoord Report - M. Dimou

Site survey out since Nov. 26, deadline Dec. 19th

  • Currently ~25 answers


  • Bug in RHEL 6.6 kernel affecting CVMFS NFS installation: patch sent back to RH, sites recommended not to upgrade to 6.6 if using this config

DPM 1.8.9

  • Too verbose level of gridftp log, not possible to disable, fixed: pushed to EPEL
    • Discovered by the MW readiness verification
  • SAM problem fixed: new probe being deployed this week by NGIs


  • ALICE: contributed a SAM probe for ARC CE
  • ATLAS: Rucio/Prodsys migration over
  • LHCb: stripping 21 campaign ongoing, Xmas activities not yet decided

VOMS and VOMS-Admin migration

  • New CERN VOMS servers: ok for all VOs
    • LHCb found that some users were storing their own vomses files, thus having problems...
  • CMS started to evaluate voms-admin
  • Old VOMS servers and VOMRS scheduled to be closed Feb. 3
    • VOMS ports are already blocked on these servers: they don't deliver proxies anymore


  • ATLAS testing it in PanDA: good results so far

Multicore TF

  • Twiki ready for configuring passing of job parameters to batch systems
  • CMS testing multithreaded jobs at T1s
  • Accounting: sites reminded that they must enable the new APEL option ('parallel')

Squid monitoring and HTTP Proxy discovery

  • GOCDB & OIM now support multiple servers per site
  • WLCG monitoring page automatically updated from GOCDB and OIM: sites must check their status (and registration)

MW Readiness TF

  • Many products on board: the DPM bug discovery proved its usefulness
  • Progress tracked in JIRA

Network and Transfer Metrics WG

  • Metrics meeting last week: progress, waiting for feedback
  • perfSONAR: sites must redeploy it before Jan. 8

WLCG critical services: list being updated, feedback already received from experiments, dedicated meeting with T0 on Dec. 12

Next OpsCoord meeting is on Dec. 18, first 2015 meeting will be on January 22.


  • Can we know per region the sites that have (not) responded so we can chase others?
    • Maria/Andrea: yes, contact us. No problem to share site names but not the response contents.

MW Readiness WG Update - A. Manzi

Mandate: ensure sites do not suffer when called upon to upgrade.

Products on board: DPM, CREAM, dCache, StoRM, EOS, VOMS client, FTS3, Xrootd...

  • For some products the test setup is being completed
  • Good list of volunteer sites
  • ATLAS and CMS contribute workflows to this activity
    • ALICE not yet participating
    • LHCb has a document on certification

Lessons learnt

  • DPM bug with gridftp logging proved this effort is needed
  • When an experiment validates a version, other experiments should start their validation effort with this version to help converge quickly
  • Deadline for verification must be set
  • Not all products are equal with respect to a well-established validation process...

Good coordination with EGI Staged Rollout effort: benefit to both sides

  • TF participating in bi-weekly URT meetings

Package reporter developed to collect information about SW deployed at sites

  • Aim to have it deployed everywhere at some point... currently only at some of the volunteer sites
  • Integration with Pakiti agreed: code will be shared but the data will not be shared between MW readiness and Security activities
    • Sites will be able to choose to whom they report their packages
    • A new release has been submitted to EPEL
  • MWR reporting will be done through the Site Status Board
    • Also planning a tool for site admins allowing easy identification of the packages below the baseline
    • Foreseen by end of next quarter
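The planned baseline-identification tool could work along these lines (a sketch with a naive dotted-version comparison; real RPM version comparison, and the tool's actual design, may well differ):

```python
def below_baseline(installed, baseline):
    """Return the packages whose installed version is older than the
    WLCG baseline.

    installed, baseline: dicts mapping package name to a dotted version
    string. The tuple-of-ints comparison here is deliberately naive;
    real RPM epoch/version/release comparison is more involved.
    """
    def vtuple(version):
        return tuple(int(part) for part in version.split("."))

    return {pkg: ver for pkg, ver in installed.items()
            if pkg in baseline and vtuple(ver) < vtuple(baseline[pkg])}


# Hypothetical example: a site still on DPM 1.8.8 against a 1.8.9 baseline
# below_baseline({"dpm": "1.8.8", "fts3": "3.2.1"},
#                {"dpm": "1.8.9", "fts3": "3.2.0"}) -> {"dpm": "1.8.8"}
```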

Wishes for the future

  • More participation in meetings from volunteering sites
  • (More active) participation of ALICE and LHCb and the sites supporting them

SAM Test Framework Update - M. Babik

STF: main source of information for SAM3 A/R calculation and WLCG monthly reports

STF relies on Nagios as the check scheduler

  • Checks implemented as Nagios probes
  • Nagios interfaced to messaging infrastructure

3 categories of tests

  • Public grid services, e.g. storage
  • Job submission
  • WNs
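Since all checks are implemented as Nagios probes, they follow the standard Nagios plugin contract: one status line on stdout plus an exit code encoding the result (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN). A minimal illustrative probe skeleton, not an actual SAM probe:

```python
import sys

# Standard Nagios plugin exit codes
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3


def report(status, message):
    """Print a Nagios-style status line and return the exit code."""
    labels = {OK: "OK", WARNING: "WARNING",
              CRITICAL: "CRITICAL", UNKNOWN: "UNKNOWN"}
    print(f"{labels[status]}: {message}")
    return status


# A real probe would perform its check (e.g. contact a storage
# endpoint) and then exit with the corresponding code:
# sys.exit(report(OK, "storage endpoint reachable"))
```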

Recent changes

  • Direct CREAM submission plugins since June
  • Condor-based submission replaced WMS in October
  • Direct HTCondor submission since November
    • Used to test OSG sites: already used for CMS sites
  • WebDAV probe developed and tested: in prod soon
  • Planning to merge changes from SAM update 23 related to UMD3

Test submission timeouts

  • Problem reported in February analyzed and acknowledged: turned out to be a too-short timeout in the WMS waiting state
    • Up to 35% for CMS and ATLAS
    • With the new direct submission, now back to ~0.1%
    • Timeouts now producing warnings and not affecting A/R calculation
  • Still discover queues through BDII: correct setup/information needed
  • Fundamental limitation remains: if a test job cannot be scheduled, this will lead to a timeout
    • SAM3 offers the possibility of alternative sources

Future evolution

  • Test framework: see WLCG monitoring consolidation Twiki
    • Migration to OMD distribution, looking at new "Nagios compatible" solutions
    • Ultimate goal remains simplification
    • Auto-generation of Nagios configuration planned
  • Storage probes based on GFAL2: base Python framework available
    • Also support new protocols
  • Direct ARC submission: almost ready to be deployed in production
  • Improvements to WN tests to provide better flexibility


  • Cristina: what CREAM probes do you use? There are several… Do you talk with the EGI SAM team?
    • Marian: EGI moving in slightly different direction – more lightweight tests of basic infrastructure, all probes/plugins in UMD. WLCG need to probe more deeply on behalf of experiments (i.e. down to WN level). Need ability to run probes developed/maintained by experiments.
    • MJ: if there is interest, WLCG is certainly ready to share ideas. Must take into account that the manpower for this effort is very limited both on EGI and WLCG sides.

OSG Update - B. Bockelman

OSG organization: positive review last summer by FAs but the review summary is not yet public

  • Main faces stable!


  • OSG-run CVMFS Stratum 0/1 stable
    • Still running 2.0 on Stratum 0
    • Request from OSG VO to mirror repos but need ability to "blank" a mirrored repo in case of a security emergency: still in progress
  • Only minimal progress on IPv6 support on central services
  • SLA met everywhere
  • WLCG accounting for multicore jobs: validation done internally


  • New 3.4 version currently in deployment with the new central archive component
    • Central archiving is done by querying archive of each instance in a mesh

User support

  • Continuing to attract new users, outside WLCG
    • Mostly individual users grouped in the OSG VO
  • Non HEP usage now at the level of 15-20%

Campus grids: enable users to use HTC workflow on their local resources without putting them on the grid or using certs

  • Key service: OSG-connect


  • 3.2 is the current release, 3.1 about to be EOL
    • 1 minor release per month
  • Recent updates
    • HTCondor 8.2
    • GUMS 1.4: central banning improvement, performance improvements (~100 Hz easily sustained)
    • HTCondor-CE 1.x
  • OSG SW is fully SHA-2 and RFC proxy compliant
  • Starting to build RPM for PanDA and APF
  • 3.3 release being planned, first release envisioned mid-May
    • EL7 support, considering dropping EL5
    • Reduce the number of RPMs produced, relying more on EPEL


  • Now the default CE in OSG: aim to have sites converted before Run2
  • Already a good deployment in USCMS and USATLAS: not all production service yet
    • Already 2 sites have completed the migration, i.e. turned off GRAM
  • ATLAS requested an improvement to the info service: work needed to properly expose the information without a queue-based model
    • A script returning the information to pass to Condor based on user requirements
    • New approach should allow turning off publishing to the BDII

glExec: no longer required by OSG for all the VOs but still required by USCMS

  • Doing an audit of VO systems to ensure the needed information is collected, secured and can be retrieved (aka "traceability requirements")
  • Sites supporting WLCG agreed to continue supporting glExec for these VOs


  • Dave: on security traceability, we have arguments showing that this is not fully achievable via the audits
    • Brian: not possible either with glexec... Decision was that security risk is sufficiently low that audited traceability requirements could be considered enough...
  • Ian: potentially interesting for GUMS and ARGUS to work together. Is this possible to merge, become a similar/same solution?
    • Brian: OSG provides a very limited support for GUMS. Door is not completely shut but also not widely open. More discussion at the ARGUS/authorization post-GDB meeting tomorrow.
    • Michel: we'll report the outcome of the meeting at next GDB
  • S. Lin: how is the progress for users other than LHC?
    • One particular area of growth is the “intensity frontier” experiments at FNAL.

Batch Systems

Condor Workshop Summary - I. Collier

Very well attended: ~40 people in the room

HTCondor based on a central negotiation (match-making) between machine requirements/preferences and user (job) requirements/preferences

Several site experiences reported

  • FNAL: 10+ years of production at a large scale
  • Also INFN Milan, RAL, IAC (Canarias)

Several advanced topics presented, mainly related to site management

  • Scripting APIs, policies
  • Job isolation: containers, cgroups, docker...

Several panel discussions with experts: see summary


Jeff: HTCondor was not able some time ago to group users and implement hierarchical shares. Has there been progress?

  • Todd Tannenbaum (TD, Condor developer): you can create groups of users and groups of groups (a hierarchy) and define (hierarchical) quotas for these groups, either as a number of cores or as a percentage of the total share of the group's parent (for example 50% to ATLAS, with 30% of that to production and the rest (70%) to analysis).
  • TD: users within a group have a historicised fairshare. A group can get a "surplus" from another group below its share, but no history is kept and this cannot be "reimbursed" later.
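Todd's description maps onto HTCondor's group-quota configuration knobs. A hypothetical negotiator configuration implementing the 50% ATLAS share split 30/70 might look like this (group names are examples, and a real setup also needs jobs to advertise their accounting group):

```
GROUP_NAMES = group_atlas, group_atlas.prod, group_atlas.analysis
# 50% of the pool to ATLAS...
GROUP_QUOTA_DYNAMIC_group_atlas = 0.5
# ...split 30/70 between production and analysis within that share
GROUP_QUOTA_DYNAMIC_group_atlas.prod = 0.3
GROUP_QUOTA_DYNAMIC_group_atlas.analysis = 0.7
# Let a group borrow unused slots ("surplus") from siblings,
# with the caveat Todd notes: the loan is not reimbursed later
GROUP_ACCEPT_SURPLUS = True
```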

Michel: time for WLCG to think about recommendation(s) for sites running Torque/MAUI and wanting/having to move. This workshop is one contribution in this area. Maybe also interesting to hear about SLURM if sites are interested in it.

  • Alessandro G (AG): is it worth preparing an action list for moving in this direction?
    • Michel: probably still too early, need to get more feedback from sites that have really moved to production (currently many sites are still running pre-production or small production resources beside their main CE). Upgrading is more than setting up and running HTCondor: we also need to understand how to efficiently integrate HTCondor, which has no concept of queues, in our infrastructure. The RAL example, described in detail during the workshop, is a source of inspiration for many advanced topics.
    • Alessandra: can we have a pre-GDB on batch systems?
    • Michel: yes, this is the plan, but we need more sites to have adopted the system before scheduling it. We also need to work closely with HEPiX: the next workshop is at the end of March and could be the next step. Maybe spring is a good date for a pre-GDB.
    • Helge: batch system is a traditional HEPiX topic and agreed that we should collaborate. Some related topics like BDII integration can only be addressed by WLCG/GDB.

Jeff: need to separate what we suggest to new sites and what is the urgency for existing sites to move ahead.

  • Pepe: just before Run2, sites will not take potentially disruptive actions.
  • Michel: if carefully planned, should be doable during data taking.
  • Brian: probably less stressful when done outside a Run

Pepe: this Condor Workshop was very helpful and much appreciated; can we have it annually?

  • To be discussed later...
  • Michel: thanks again to Condor developers (Todd and Greg) for coming and for the quality of their presentations and interactions
