DRAFT Tier2s associated with the Muon group

The Tier2s associated with the Muon group are listed below (the full list of Tier2 associations to Physics groups is in CMST2Associations):

  • T2_ES_CIEMAT
  • T2_IT_Legnaro
  • T2_US_Florida
  • T2_US_Purdue
  • T2_RU

Useful links for each site:

T2_ES_CIEMAT
CMS Pledge resources from SiteDB
Site Status (downtime etc.)
Data & disk space
  PhEDEx Data subscription by muon group
  PhEDEx space usage by groups
  space usage (look for Available BDII space)
Analysis jobs
  jobs by accessed dataset

T2_IT_Legnaro
CMS Pledge resources from SiteDB
Site Status (downtime etc.)
Data & disk space
  PhEDEx Data subscription by muon group
  PhEDEx space usage by groups
  space usage from local monitor
Analysis jobs
  jobs by accessed dataset

T2_US_Florida
CMS Pledge resources from SiteDB
Site Status (downtime etc.)
Data & disk space
  PhEDEx Data subscription by muon group
  PhEDEx space usage by groups
  space usage from local monitor
Analysis jobs
  jobs by accessed dataset

T2_US_Purdue
CMS Pledge resources from SiteDB (CPU: 5080 kSI2k, Job Slots: 1728, Disk: 470.0 TB)
Site Status (downtime etc.)
Data & disk space
  PhEDEx Data subscription by muon group
  PhEDEx space usage by groups
  space usage from local monitor
Analysis jobs
  jobs by accessed dataset

T2_RU
to be defined

DRAFT How to run DQM code for DT Offline using CRAB at CAF

Prerequisites: you need to be on lxplus and to be allowed to run at CAF.

1. Set up CMSSW area

  • Download and build the code the first time:
        cd YOURDIR
        scramv1 project CMSSW CMSSW_2_1_10
        cd CMSSW_2_1_10/src
        cvs co DQM/DTMonitorModule
        cvs co DQM/DTMonitorClient
        cvs co UserCode/DTDPGAnalysis
        eval `scramv1 runtime -csh`
        scramv1 b
  • Set the environment (in later sessions):
        cd YOURDIR/CMSSW_2_1_10/src
        eval `scramv1 runtime -csh`
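  • Note: the commands above assume a (t)csh-like shell; in a bash-like shell the equivalent environment command is:
        eval `scramv1 runtime -sh`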

2. Set up CRAB environment

   source /afs/cern.ch/cms/ccs/wm/scripts/Crab/crab.csh    
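   If you work in a bash-like shell, source the corresponding .sh setup script instead (assuming it is provided alongside the csh one in the same directory):
   source /afs/cern.ch/cms/ccs/wm/scripts/Crab/crab.sh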

3. Configure CRAB to run DQM DT Offline code

  • Copy the crab template configuration file:
       cp YOURDIR/CMSSW_2_1_10/src/UserCode/DTDPGAnalysis/test/crab_runDQM_template.cfg .
  • Configure it, changing:
    • the pset location, filling in YOURDIR:
              pset  = YOURDIR/CMSSW_2_1_10/src/UserCode/DTDPGAnalysis/python/test/runDQMOfflineDPGSources_cfg.py
    • the area where the output will be staged out, filling in CASTOR_AREA:
              storage_path=/castor/cern.ch
              lfn=/CASTOR_AREA/DQMDTRunINSERTRUN
              ## for example:
              # lfn=/user/a/afanfani/DQMDTRunINSERTRUN
    • you might want to change the number of events to process and the splitting into several jobs:
              total_number_of_events  = 400000
              number_of_jobs          = 20

  • The replacement of the run number to use (e.g. 66733) can be done simply with:
      sed -e "s?INSERTRUN?66733?g" crab_runDQM_template.cfg > crab_runDQM_66733.cfg
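  • For orientation, after these edits the relevant fragment of crab_runDQM_66733.cfg should look roughly like the following (a sketch: the section names follow the usual CRAB [CMSSW]/[USER] layout, and the template already contains the complete configuration, including dataset and scheduler settings):
        [CMSSW]
        pset                    = YOURDIR/CMSSW_2_1_10/src/UserCode/DTDPGAnalysis/python/test/runDQMOfflineDPGSources_cfg.py
        total_number_of_events  = 400000
        number_of_jobs          = 20

        [USER]
        storage_path = /castor/cern.ch
        lfn          = /CASTOR_AREA/DQMDTRun66733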

4. Submit the jobs to produce DQM root files

  • Submit the jobs:
      crab -create -submit -cfg crab_runDQM_66733.cfg
  • Check the status with:
      crab -status -c runDQM_66733
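  • Once the jobs have finished you can also retrieve the job reports and log files with the standard CRAB retrieval command, using the same working directory:
      crab -getoutput -c runDQM_66733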

5. Produce DQM plots

Once the jobs above have finished, they will have produced several DQM root files that need to be read to produce the DQM plots.

  • Download the template python configuration file to produce those plots and an auxiliary script:
        cp YOURDIR/CMSSW_2_1_10/src/UserCode/DTDPGAnalysis/python/test/runDQMOfflineDPGClients_cfg_template.py .
        cp YOURDIR/CMSSW_2_1_10/src/UserCode/DTDPGAnalysis/test/configureDQMPlotter.sh .
  • Run the configureDQMPlotter.sh script, providing the run number and the CASTOR area where the DQM root files were produced:
       ./configureDQMPlotter.sh <RunNumber> /castor/cern.ch/user/a/afanfani/DQMCRAFT/DQMDTRun<Run Number>
  • Run cmsRun:
        cmsRun runDQMOfflineDPGClients_cfg_<Run Number>.py

The output will be a Run directory containing summary plots and plots split into the usual per-wheel sub-directories.
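For example, for run 66733 (and reusing the example CASTOR area from the command above; substitute your own area), the two commands would read:

     ./configureDQMPlotter.sh 66733 /castor/cern.ch/user/a/afanfani/DQMCRAFT/DQMDTRun66733
     cmsRun runDQMOfflineDPGClients_cfg_66733.py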

DRAFT How to store output with CRAB 2_4_0

Warning: since the CRAB_2_4_0 release the stageout configuration parameters have changed, so your older crab.cfg cannot be used.

CRAB allows you to copy your analysis outputs directly to a Tier2 or Tier3 Storage Element. You can decide to store them in a Storage Element of an "official CMS site" or in your local SE.

The various options are:

1. Stage out to an "official CMS site" without publication in DBS

You have to configure the crab.cfg with:
     [USER]
     copy_data = 1
     storage_element = "The official CMS site name"  
     user_remote_dir = subdirectory where your output will be stored

  • The official CMS site names are reported in the SiteDB list. The mapping between the StorageElement name and CMS site names is reported here. Note that there are two exceptions: you have to use T1_FR_CCIN2P3_Buffer instead of T2_FR_CCIN2P3 and T2_RU_IHEP_Disk instead of T2_RU_IHEP.

  • The area in which your output file will be written is:
            site's endpoint +  /store/user/<yourHNusername>/<user_remote_dir>/<output-file-name>
       
     where the site's endpoint is discovered by CRAB and your HyperNews username is extracted from SiteDB.
Important Note: you need to be registered in SiteDB. The instructions to register in SiteDB are in SiteDBForCRAB.

For example:

   storage_element = T2_ES_CIEMAT
   user_remote_dir = myTestDir
will write the output into:
   srm://srm.ciemat.es:8443/srm/managerv2?SFN=/pnfs/ciemat.es/data/cms/store/user/<yourHNusername>/myTestDir/<output-file-name>

2. Stage out to an "official CMS site" with publication in DBS

You have to configure the crab.cfg with:
    [USER]
    copy_data = 1
    storage_element = "The official CMS site name"  
    publish_data=1
    publish_data_name = "data name to publish"
    dbs_url_for_publication = "your local dbs_url" 

  • The official CMS site names are reported in the SiteDB list. The mapping between the StorageElement name and CMS site names is reported here. Note that there are two exceptions: you have to use T1_FR_CCIN2P3_Buffer instead of T2_FR_CCIN2P3 and T2_RU_IHEP_Disk instead of T2_RU_IHEP.
  • The directory and LFN of your output file are enforced to be:
      site's endpoint +  /store/user/<yourHyperNewsusername>/<primarydataset>/<publish_data_name>/<PSETHASH>/<output_file_name>
where the site's endpoint is discovered by CRAB and your HyperNews username is extracted from SiteDB. More information about publication is available in SWGuideCrabForPublication.
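For example (a sketch: the dataset name is a made-up placeholder and the DBS URL must be replaced with the URL of your local analysis DBS instance):

    storage_element = T2_ES_CIEMAT
    publish_data=1
    publish_data_name = MyDQMTest_v1
    dbs_url_for_publication = <your local DBS instance URL>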

3. Stage out to a site that is not an "official CMS site", i.e. not included in SiteDB

You have to write in the crab.cfg:
    [USER]
    copy_data = 1
    storage_element = the complete Storage Element name (e.g. se.xxx.infn.it)
    storage_path = the full path on the Storage Element writable by all
    lfn = the directory or tree of directories that CRAB will create under the storage path of the SE
where storage_path is the mountpoint of the SE (e.g. /srm/managerv2?SFN=/pnfs/se.xxx.infn.it/yyy/zzz/).

That lfn will be used as the logical file name of your files in the case of publication in DBS. Publication in DBS makes sense if the data are accessible via a Grid Computing Element.
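For example, using the placeholder names above (a sketch; replace the host name, mount point and directory with your site's real values):

    copy_data = 1
    storage_element = se.xxx.infn.it
    storage_path = /srm/managerv2?SFN=/pnfs/se.xxx.infn.it/yyy/zzz/
    lfn = /myTestDir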

4. Stage out in your own directory in CASTOR at CERN

You are encouraged to stage out your data at the Tier2s you are associated with; however, if you want to stage out to your area in CASTOR, you have to configure the crab.cfg with:
  [USER] 
  copy_data = 1
  storage_element=srm-cms.cern.ch
  storage_path=/srm/managerv2?SFN=/castor/cern.ch
  lfn=/user/<yourinitial>/<username>/whatever

You should also make sure that your area has permissions that allow write access to the group:

rfchmod 775 /castor/cern.ch/user/.....
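To check that the permissions are set as expected, you can for instance list the directory attributes with the CASTOR name-server listing command (a suggested check; adjust the path to your own area):

nsls -ld /castor/cern.ch/user/<yourinitial>/<username>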

5. CAF Stage out

If you are running jobs at CAF then the stageout options are:

  • Stage out into your own directory in CASTOR configuring crab.cfg with:
  [USER] 
    copy_data = 1
    storage_element=srm-cms.cern.ch
    storage_path=/castor/cern.ch
    lfn=/user/<yourinitial>/<username>/whatever

  • Stage out into CAF /store/user area
    [USER] 
    copy_data = 1
    storage_element=T2_CH_CAF
T2_CH_CAF is the official site name for the CAF, and the stageout configuration there is the same as described in Sections 1 and 2 above.


DRAFT User registration in SiteDB

1. Access SiteDB people page

Go to SiteDB people page:
   https://cmsweb.cern.ch/sitedb/sitedb/people/

2. Log in with your HyperNews account

3. Edit your own information to add the "Distinguished Name" field

  • Edit contacts and associations

  • Edit your own details here

  • Get your DN as provided by the command (an example of the output format is shown after this list):
  voms-proxy-info -identity

  • Click "Edit these details" button

DRAFT version of MC Development Plan

Contents:

ProdAgent Release Plan

Each major release may include several sub-releases as the required features are rolled out over time. The release schedule can be strongly affected by unscheduled requests coming from MC operations.

PRODAGENT_0_2_0 series: Mid April
This version is targeted at:
  • Sync with CMSSW 1_3_x massive production starting from 15 April
  • Switch to use DBS-2 (including DLS functionality in DBS2)
  • First release with DBS-2 Integration
  • Automation of block management and global migration

It contains:

  • Use new PU algorithm in CMSSW to remove need to select PU files (not used/tested)
  • Update to new CMSSW python API
    • Implication for ProdRequest: Dependency on CMSSW release for Cfg python API
    • Allows validation of cfg file at time request is made
  • Support for PhEDEX injection using PhEDEX micro client
  • Support for GlideIn bulk submission (Testing Phase)
  • Support for OSG Condor bulk submission (Testing Phase)
  • Tier-0 Support for LSF submission (for use in Tier-0)
  • Support for time based processing usable for GEN-SIM like jobs
  • Support for CMSSW_1_4_0_preX Release Validation
    • Includes maxEvents PSet

PRODAGENT_0_3_0 series: Beginning of June
This version is targeted at:
  • Sync with CMSSW_1_4_x massive production and CMSSW_1_5_x processing
  • switch to use CMSSW python API
  • First prototype for the Tier-0

It contains:

  • ResourceMonitor and JobQueue Components (Testing Phase)
  • Support for CmsGen nodes in workflows (Prototyping Phase with Alpgen)
    • Sync with release of CmsGen tool
    • Has implications for ProdRequest supporting requests with a cmsGen step
  • JobKiller Component Prototype
  • Support for automatic PhEDEX injection (to be enabled via configuration)
  • Support for Glite bulk submission (Testing Phase)
  • Tier-0 Evolution based on LSF submission testing
  • Tier-0 Prototype for Repacker job injection

PRODAGENT_0_4_0 series: June/July
This version is targeted at:
  • Development/testing in several areas
  • Tier-0 setup for Tier-0 testing plan and Global Run

It contains:

  • Support for Alpgen via CmsGen interface
  • Testing the ProdRequest/ProdMgr/ProdAgents chain
  • RelValInjector Component prototype
  • Log Archiving on local storage element for production jobs. (No logfile collection)
  • Merge Sensor supporting plugins: run-by-run plugin added
  • Support for clean up jobs (No automatic strategy and deployment)
  • Improvements in JobQueue/ResourceMonitor
  • Initial release of ProdMon Global Monitor
  • Tier-0 Testing & refinement of submitter and repacker

PRODAGENT_0_5_0 series: End of July
This version is targeted at:
  • Sync with CMSSW_1_4_x ALPGEN massive production
  • Switch to using Session, Trigger and Workflow entities in the PA DB

It contains:

  • Alpgen production support
  • ProdMon testing
  • JQ/RM for teams to start testing

Minor version releases will have:

  • Progress on Global monitor: port Brian's plots out of Dashboard/ProdMon DB
  • Migration of old production data to Dashboard/ProdMon DB
  • ProdMon deployed to teams and Global monitor
  • Test ProdMon for DataOps & Tier 0 monitoring
  • Support Multiple cmsRun Chains in single jobs (if required by CSA07 workflow: Need external input to confirm/deny )
  • Workflow/JobSpec factory for Tier0 repacker jobs and for tweaking the unmerged/mergedness of output modules
  • Tier-0 : prototype DatasetScrambler for CSA07 (Time Ordered Mixing of datasets)

PRODAGENT_0_6_0 series: Mid September
This version is targeted at:
  • Support for CSA07 activities
  • Deployment of ProdMgr for CSA07 Signal GEN-SIM production

It will contain:

  • PM/PA chain for GEN-SIM
  • Tier-0 : Thread JobCreator
  • Improve ProdRequest GUI for request tracking/progress
  • Add in ProdRequest link to ProdMon
  • Support Integration of JQ/RM and (optionally) bulk ops for ops teams
  • Note: pre-stage is entirely left up to sites for CSA07 so no MC devel foreseen

Minor version releases:

  • Thread DBS Component to increase the speed of file registration
  • Probot Prototype Component to increase the CSA07 load, if needed
  • Tier-0 : Thread JobSubmission
  • Testing of PM/PA chain for file based requests

PRODAGENT_0_7_0 series: Mid October

This version is targeted at:

  • improvements in Reports and Diagnostics

It will contain:

  • Refactor workflow injector : plugin based
  • Preliminary prototype at CERN for Reco Monitoring with DQM histograms
  • Collection/Display of Diagnostic reports: Requirements & design
  • Fast PhEDEX injection (injecting open fileblocks)
  • ProcSensor prototype & testing (ask Mike confirmation...)

PRODAGENT_0_8_0 series: Mid November

  • Refine prototype for Reco Monitoring
  • Cleanup strategy
  • Log archive collection
  • Deployment of Local (MySQL) DBS-2 Instances (pending testing experiences)
  • Changes to LFN Convention (PENDING CONVENTION)

PRODAGENT_0_9_0 series: Mid December

  • Time based processing (like DIGI-RECO) jobs
  • BOSS-lite refactoring
  • pre-stage strategy
  • Tier-0 ReReco Handler Prototype
  • Tier-0 Prototype for ConfigSensor job injection for Calibration Changes

-- AlessandraFanfani - 21 Jan 2007
