Software and Computing open tasks

This is a collection of open tasks in the Software and Computing area. The descriptions are intentionally kept short, as what is needed evolves with time. We also want to make sure we can match projects to people's interests. If you would like to contribute, please get in touch with the people listed.


Offline software

(Main contacts: Ed Moyse, Walter Lampl)
  • Currently looking for a reconstruction coordinator to join Mark Hodgkinson. Contact Ed Moyse and Walter Lampl
  • Shifters to handle git merge requests, in particular L2 shifters. See GitShifts. Contact Ed Moyse and Walter Lampl
  • Core simulation software. Both software maintenance and new developments. Contact John Chapman and Heather Grey
  • Fast simulation software. Contact Jana Schaarschmidt
  • Analysis Software Group (ASG). Contact Kerim Suruliz and Attila Krasznahorkay
  • Event visualization. Contact Ed Moyse and Walter Lampl
  • Maintainer for the AtlasExternals project. Contact Ed Moyse and Walter Lampl
  • Detector software. This is maintained by the detector software coordinators. Please enquire within the relevant detectors for more information.
  • Combined performance software development. See AtlasPhysics for details

ATLAS Distributed computing (ADC)

(Main contacts: Alessandro Di Girolamo, Johannes Elmsheuser)
  • Support the daily ADC operations: data management and distributed production and analysis.
  • Implement, set up, and monitor an ART test for direct I/O support in all recent offline software releases. This could be combined with a performance study and similar HammerCloud functional tests.
  • Add network monitoring to the existing MemoryMonitor tool that is used in every TRF and PanDA job.
  • Validate the new monit-based job and DDM monitoring and accounting infrastructure, and compare it with the existing legacy dashboard-based monitoring and accounting.
  • Set up Elasticsearch/Kibana/Grafana-based monitoring and accounting plots that are frequently or occasionally requested by CREM, ICB, or other review panels.
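As a sketch of what the network-monitoring task above might collect, the snippet below parses the per-interface byte counters that Linux exposes in /proc/net/dev. The file format is standard kernel output, but how the result would be wired into MemoryMonitor is an assumption, not the actual implementation.

```python
# Sketch: read per-interface RX/TX byte counters from Linux /proc/net/dev.
# A MemoryMonitor-style tool could sample this periodically and report deltas
# per job. (The integration with MemoryMonitor itself is hypothetical here.)

def parse_net_dev(text):
    """Return {interface: (rx_bytes, tx_bytes)} from /proc/net/dev content."""
    counters = {}
    for line in text.splitlines()[2:]:      # skip the two header lines
        iface, _, data = line.partition(":")
        fields = data.split()
        if len(fields) >= 16:
            # field 0 = RX bytes, field 8 = TX bytes (see proc(5))
            counters[iface.strip()] = (int(fields[0]), int(fields[8]))
    return counters

if __name__ == "__main__":
    with open("/proc/net/dev") as f:
        for iface, (rx, tx) in parse_net_dev(f.read()).items():
            print(f"{iface}: rx={rx} tx={tx}")
```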


Databases

(Main contacts: Elizabeth Gallas and Nurcan Ozturc)
  • Develop a better long-term infrastructure for offline DCS data (currently in COOL). A "time series" database structure (e.g. InfluxDB/Grafana) seems more appropriate for making DCS-type data available offline.
  • Grow a repository of conditions data, migrated from COOL into the new Conditions DB, which can be used by tests accessing it.
  • Develop simple unit tests for "CondDB for Run 3", comparing metrics such as performance and memory footprint to similar tests against COOL.
  • Run dedicated tests of event processing using data from the Run 3 Conditions DB.
  • Detector conditions and related software. Contact the subsystem software responsibles.
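To illustrate the kind of "time series" storage suggested above, the snippet below formats DCS-like readings in the InfluxDB line protocol (measurement, tags, fields, nanosecond timestamp). The measurement and tag names (`dcs_hv`, `detector`, `channel`) are invented for the example; real ATLAS schema names would come out of the migration work itself.

```python
# Sketch: format DCS-style readings as InfluxDB line protocol, the text
# format InfluxDB ingests:  measurement,tag=... field=... timestamp
# The names used ("dcs_hv", "detector", "channel") are illustrative only.

def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Build one InfluxDB line-protocol record."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

line = to_line_protocol(
    "dcs_hv",
    {"detector": "pixel", "channel": "B1_S2"},
    {"voltage": 150.2, "current": 0.013},
    1509376000000000000,
)
print(line)
# dcs_hv,channel=B1_S2,detector=pixel current=0.013,voltage=150.2 1509376000000000000
```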

Data Curation and Characterization

(Main contacts: Borut Kersevan and Davide Costanzo)
  • Develop an analysis dashboard, based on Glance and interfaced to the CERN Analysis Preservation (CAP) portal.
  • Analysis preservation tests and development

-- DavideCostanzo1 - 2017-10-25

Topic revision: r2 - 2017-10-30 - WalterLampl