Dashboard

Tier0 time limit for job execution

Changing the memory requested for jobs at job creation

Getting some numbers from jobs

SM Ops and transfers to Tier0 - how to know whether a run has ended

Job with an input file too large

  • Coming from https://cms-logbook.cern.ch/elog/Tier-0+processing/13044
  • Job killed by the Condor periodic expression because of a massive input file (~52 GB)
  • This is a possible case (not an error) due to multicore repacking
  • The data is going to end up in the error dataset anyway
  • Fail the job and report the data loss (dataset and lumi sections) to the commissioning HyperNews (see the sketch after this list)
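
The kill itself comes from a Condor periodic expression on resource usage. As a minimal sketch of the operator-side check, the hypothetical Python script below flags an oversized repack input so it can be failed and reported as above. The 50 GB cutoff, the script, and its names are assumptions for illustration only, not the actual Tier-0 or Condor configuration; check the agent configuration for the real limit.

    #!/usr/bin/env python
    # Hypothetical sketch: flag a repack input file large enough to have
    # tripped the Condor periodic (removal) expression.
    # SIZE_LIMIT_BYTES is an assumed cutoff, not the real Tier-0 value;
    # the elog case above involved a ~52 GB input file.

    import os
    import sys

    SIZE_LIMIT_BYTES = 50 * 1024**3  # assumed ~50 GB cutoff

    def is_oversized(path):
        """Return True if the input file exceeds the assumed size limit."""
        return os.stat(path).st_size > SIZE_LIMIT_BYTES

    if __name__ == "__main__":
        path = sys.argv[1]
        if is_oversized(path):
            # Expected action per the notes above: fail the job and report
            # the lost dataset / lumi sections to the commissioning HyperNews.
            print("%s exceeds %d bytes: fail the job and report the data loss"
                  % (path, SIZE_LIMIT_BYTES))
        else:
            print("%s is within the assumed limit" % path)

Run it as `python check_input_size.py /path/to/input.dat`; the script name is hypothetical. The point is only that an oversized input is an expected multicore-repacking outcome, so the response is to fail the job and report, not to debug the job itself.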

-- JohnHarveyCasallasLeon - 2015-06-23

Related elog entry: https://cms-logbook.cern.ch/elog/Tier-0+processing/13039
