FSQ Trigger Studies: HLT_50ns_Dimuon0_Jpsi, HLT_50ns_DoubleMu0

  • Importing from old menu:
  • Old Test Sample:
  • Responsible: Eliza
  • ConfDB: Check Triggers Menu
  • Working on test menu: /users/eliza/HLTtest/V5; HLT_Dimuon0_Jpsi and HLT_DoubleMu0 were imported.
  • L1 seeds (test): L1_DoubleMu0, L1_SingleMuOpen

Useful Information




Configuration Software Tools for Run-2


Please follow this link to set up your confDB menu and your CMSSW area. Currently CMSSW_7_3_0 is used.

Recipe Step by Step

1 - Creating confDB HLT Private menu


First, you need to create or import your own HLT menu using confDB. The instructions from here are being followed. You need a password to access confDB; ask the experts if you do not have it.

- open a new terminal, and run

env

- open a new terminal, and set up the tunnel with

ssh user@lxplus.cern.ch -v -v -L 10126:cmsr3-s.cern.ch:10121 -f -N

- open a new terminal, try to set some variables before running javaws

export TZ='-03:00'

export LANG=C

export LOCALE=C

export LC_ALL=C

- and launch the GUI from the command line with

javaws http://j2eeps.cern.ch/cms-project-confdb-hltdev/gui/start.jnlp

- connect to "HLT Development (SSH tunnel)" over port 10126


2 - Setup CMSSW release recommended for Run-2


Detailed commands and explanations are here.

Note: the specific GlobalTag to use depends on the samples you process:

  • Fall13 25ns / 50ns: MCRUN2_72_V3A / MCRUN2_72_V4A
  • Spring14 25ns / 50ns: PHYS14_25_V1 / PHYS14_50_V1
  • Phys14 samples:

Note: at the moment there is no 50ns L1 menu; please use the 25ns L1 menu v2 as described here, until the 50ns L1 menu is advertised (we will need to change the seeds then).

cmsrel CMSSW_7_3_1_patch2
cd CMSSW_7_3_1_patch2/src
cmsenv
git cms-merge-topic cms-tsg-storm:firstHltMenuFoL1MenuCollisions201525nsV2_73X

# Setup standard config files: this is optional
cd HLTrigger/Configuration/test
./cmsDriver.csh 
cd -

# Compile
scram b


To run your own menu containing only HLT paths (and possibly additional ESModules), but using the services and event setup from the master configuration, use the following.
For data:
edmConfigFromDB --cff --configName /dev/CMSSW_7_3_0/GRun --nopaths > setup_cff.py
hltGetConfiguration /your/test/menu/with/only/paths --full --offline --data --unprescale --process TEST --globaltag auto:hltonline > hlt.py

# Running on Monte Carlo simulation (reset HLT prescales all to 1)
hltGetConfiguration /dev/CMSSW_7_3_0/GRun --full --offline --mc --unprescale --process TEST --globaltag auto:run2_mc_GRun > hlt.py

hltGetConfiguration /users/{username}/{config name}/V{number} --full --offline --mc --unprescale --process TEST --l1-emulator 'stage1,gt' --l1Xml L1Menu_Collisions2015_25ns_v2_L1T_Scales_20141121_Imp0_0x1030.xml --globaltag PHYS14_25_V3 > hlt_user_XXns.py


Add this line to hlt.py, just after process = cms.Process( "HLT" ):
process.load("setup_cff")


Important: "The first option will be tested with 25ns seeds (a kind of placeholder); then only the instances of the HLTLevel1GTSeed module will be updated to the new 50ns seeds, while the structure of the 50ns paths will stay unchanged."

Note that post-LS1 samples produced with releases older than 7.2.0 do not pack the CSC digis in the raw data, but include them directly from the simulation step. When using CMSSW 7.2.X or newer to process such samples, one needs to adapt the HLT configuration to read the simulated CSC digis, adding at the end of the HLT configuration:

# In all config files, ONLY if you process samples produced in releases older than 7_2_X, add at the end:

process.hltCsc2DRecHits.wireDigiTag  = cms.InputTag("simMuonCSCDigis","MuonCSCWireDigi") 
process.hltCsc2DRecHits.stripDigiTag = cms.InputTag("simMuonCSCDigis","MuonCSCStripDigi")

3 - L1 Instructions


Detailed commands and explanations are here.

As of CMSSW_7_2_0_pre8, running the Stage1 Emulator is fully compatible with the HLT MC Run-2 workflow recipe given on the SWGuideGlobalHLT twiki. To run it, please setup your CMSSW development area according to the instructions given on SWGuideGlobalHLT Preparing a CMSSW developer area and run the following:

rehash  # (hash -r if using bash shell) 

$ hltGetConfiguration /dev/CMSSW_7_2_0/GRun/V11 --full --offline --mc --unprescale --process TEST --globaltag auto:run2_mc_GRun --l1-emulator 'stage1,gt' --l1Xml L1Menu_Collisions2015_25ns_v2_L1T_Scales_20141121_Imp0_0x1030.xml > hlt_stage1.py

$ cmsRun hlt_stage1.py >& runTest.log &

$ cd /afs/cern.ch/work/e/eliza/private/TriggerStudy2014/CMSSW_7_3_0/src/L1Trigger/L1TCalorimeter/test

$ cmsRun SimL1Emulator_Stage1.py >& runStage1.log

eos ls -l /eos/cms/store/caf/user/eliza/customL1Ntuple_TEST/13TeV_0PU_50ns_62X_ReEmul2015_PYTHIA
eos ls -l /eos/cms/store/caf/user/eliza/customL1Ntuple_TEST/13TeV_0PU_50ns_62X_ReEmul2015_POMPYTMINUS

cmsRun customL1NtupleFromRaw.py reEmulation=True reEmulMuons=True reEmulCalos=True patchNtuple=True force2012Config=True customDTTF=True dttfLutsFile=sqlite:../data/dttf_config.db globalTag=POSTLS162_V2::All runOnMC=True runOnPostLS1=True useStage1Layer2=True


../../../../L1TriggerDPG/L1Menu/test/L1Tree.root

FILE_TEST:  'root://xrootd.unl.edu//store/user/eliza/POMPYT_minus_JpsiMinus_13TeV_POSTLS162_GENSIM_50/POMPYT_minus_JpsiMinus_13TeV_POSTLS162_V2_RAW_step2/dd24ac44ecca7092ed3cb973499d6fc4/step2_DIGI_L1_DIGI2RAW_HLT_PU_102_1_Rai.root'

4 - OpenHLT Instructions

Simple Recipe for Finding the rate of a Trigger

If you want to get the rate for your new (or an already existing) trigger, there are two simple steps:

* Find out how many events pass your trigger, i.e. get the count.

* Convert that count to a rate.

The equation to convert the count into rate when using MC is:

rate_mc = collrate * (1 - math.exp(-1* (xs*ilumi*counts/nevt)/collrate))

If the collision rate is small (i.e. nfillb is very small), then you have to use the full equation above. Usually, you can just use the simplified version:

rate_mc_simple = (xs * ilumi * counts)/nevt
The equation for the error of your rate is:
rateerr_mc = xs * ilumi * ((math.sqrt(counts + ((counts)**2)/nevt))/nevt)
rateerr_mc_simple = ((xs * ilumi)/nevt)*math.sqrt(counts) 
* collrate = (nfillb/mfillb)/xtime

* nfillb = the number of filled bunches (usually 2662 for 25ns bunch spacing, 1331 for 50ns bunch spacing)

* mfillb = the maximum number of bunches, which is 3564

* xtime = the spacing between the bunches (25e-9 for 25ns bunch spacing, 50e-9 for 50ns bunch spacing)

* xs = the cross section of your sample multiplied by 1e-36 to convert from pb to cm^2

* ilumi = the luminosity you want to find the rate for. For example, the highest luminosity we can achieve in 2015 with 25ns bunch spacing is estimated to be 1.4e34

* counts = the number of events that pass your trigger

* nevt = the total number of events in your sample
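The formulas above can be collected into a short Python sketch (a sketch only; the numerical inputs are placeholders taken from the 50ns signal example further down):

```python
import math

# Placeholder inputs, matching the 50ns J/psi signal example below
nfillb = 1331            # filled bunches (50ns spacing)
mfillb = 3564            # maximum number of bunches
xtime  = 50e-9           # bunch spacing in seconds
xs     = 2.5e3 * 1e-36   # cross section: 2.5e3 pb converted to cm^2
ilumi  = 2.0e30          # instantaneous luminosity in Hz/cm^2
counts = 508             # events passing the trigger
nevt   = 175505.0        # total events in the sample

collrate = (nfillb / mfillb) / xtime

# Full formula, needed when the trigger rate approaches the collision rate
rate_mc = collrate * (1 - math.exp(-(xs * ilumi * counts / nevt) / collrate))

# Simplified version, valid when the rate is far below collrate
rate_mc_simple = (xs * ilumi * counts) / nevt

# Statistical errors on the rate
rateerr_mc        = xs * ilumi * math.sqrt(counts + counts**2 / nevt) / nevt
rateerr_mc_simple = (xs * ilumi / nevt) * math.sqrt(counts)
```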

The lumi landscape is the following:

- So called "Week 1" - from 1 to 40 bunches, PU=0.4

- So called "Week 2" -

- For LHCf fills - 40 bunches, PU=0.01, L = 5e28 Hz/cm2

- For CMS VdM scan - only ZeroBias trigger (also 40 bunches)

- For VdM scans in other IPs (e.g. ATLAS) - probably other triggers possible. Beam conditions: PU=0.4, L = 2e30 Hz/cm2

hlt_50ns_Dimuon0_Jpsi

hltGetConfiguration /users/eliza/HLT_50ns_Dimuon0/V2 --full --offline --mc --unprescale --process TEST --l1-emulator 'stage1,gt' --l1Xml L1Menu_Collisions2015_25ns_v2_L1T_Scales_20141121_Imp0_0x1030.xml  --globaltag MCRUN2_72_V4A > hlt_50ns_Dimuon0_Jpsi_gt.py

nohup python openHLT.py -p -i /store/user/eliza/POMPYT_minus_JpsiMinus_13TeV_POSTLS162_GENSIM_50/POMPYT_minus_JpsiMinus_13TeV_POSTLS162_V2_RAW_step2/dd24ac44ecca7092ed3cb973499d6fc4/step2_DIGI_L1_DIGI2RAW_HLT_PU_102_1_Rai.root -o Prod.root -t hlt_50ns_Dimuon0_Jpsi_gt.py -n 2000 --go >& out_JpsiToMuMu.log &

eos ls /eos/cms/store/caf/user/eliza/JpsiMuMu_13TeV_openhlt_pythia6
eos ls /eos/cms/store/caf/user/eliza/JpsiMuMu_13TeV_openhlt_pompytminus

"root://eoscms//eos/cms/store/caf/user/eliza/JpsiMuMu_13TeV_openhlt_pythia6/Prod_10_1_6lz.root"
"root://eoscms//eos/cms/store/caf/user/eliza/JpsiMuMu_13TeV_openhlt_pompytminus/Prod_10_1_2MZ.root"

python openHLT.py -i Prod.root -o Filt.root -t hlt_50ns_Dimuon0_Jpsi_gt.py -n 100 --go

rate_mc_simple = (xs * ilumi * counts)/nevt

For the single-diffractive signal sample JpsiToMuMu (POMPYT):

ilumi=2.0e30 Hz/cm2

xsec_signal = 2.5E03(pb*1e-36)

counts = 508

nevt = 175505.0

The total rate is ~ 1.5e-05 Hz

For bckg sample JpsiToMuMu (PYTHIA):

ilumi=2.0e30 Hz/cm2

xsec_bckg = 7.5E04(pb*1e-36)

counts = 908

nevt = 87826.0
The total rate is ~ 1.5e-03 Hz
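As a cross-check, plugging the numbers above into the simplified formula in Python reproduces both quoted rates:

```python
def rate_mc_simple(xs_pb, ilumi, counts, nevt):
    """Simplified MC rate; the 1e-36 factor converts the cross section from pb to cm^2."""
    return (xs_pb * 1e-36 * ilumi * counts) / nevt

signal = rate_mc_simple(2.5e3, 2.0e30, 508, 175505.0)  # ~1.5e-05 Hz
bckg   = rate_mc_simple(7.5e4, 2.0e30, 908, 87826.0)   # ~1.5e-03 Hz
```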

hlt_50ns_DoubleMu0

hltGetConfiguration /users/eliza/HLT_50ns_DoubleMu0_Test/V2 --full --offline --mc --unprescale --process TEST --l1-emulator 'stage1,gt' --l1Xml L1Menu_Collisions2015_25ns_v2_L1T_Scales_20141121_Imp0_0x1030.xml  --globaltag MCRUN2_72_V4A > hlt_50ns_DoubleMu0_gt.py

nohup python openHLT.py -p -i /store/user/eliza/POMPYT_minus_JpsiMinus_13TeV_POSTLS162_GENSIM_50/POMPYT_minus_JpsiMinus_13TeV_POSTLS162_V2_RAW_step2/dd24ac44ecca7092ed3cb973499d6fc4/step2_DIGI_L1_DIGI2RAW_HLT_PU_102_1_Rai.root -o Prod.root -t hlt_50ns_DoubleMu0_gt.py -n 2000 --go >& out_JpsiToMuMuHLTDoubleMu0.log &

hlt_HIL1DoubleMu0_HighQ

nohup python openHLT.py -p -i /store/user/eliza/POMPYT_minus_JpsiMinus_13TeV_POSTLS162_GENSIM_50/POMPYT_minus_JpsiMinus_13TeV_POSTLS162_V2_RAW_step2/dd24ac44ecca7092ed3cb973499d6fc4/step2_DIGI_L1_DIGI2RAW_HLT_PU_102_1_Rai.root -o Prod.root -t hlt_HIL1DoubleMu0_HighQ.py -n 2000 --go >& out_JpsiToMuMu_HLTHI.log &

sample:  JpsiToMuMuPompyt       

nevt:  175505.0 

count: 1350.0

 xsec:  2500.0(pb*1e-36)

The total rate is  3.84604427224e-05  +-  1.04676066613e-06

Instructions

These instructions explain how to calculate a rate in a way acceptable to the TSG. From Inga: there is more than one way to get the number of events that pass your trigger. One of them is by creating the ntuples; another (more cumbersome for MC) is using openHLT. You say you 'dumped' the paths, so I assume you did something like
hltGetConfiguration /dev/CMSSW_7_2_0/GRun --full --offline --mc --unprescale --process TEST --globaltag auto:run2_mc_GRun --l1-emulator 'stage1,gt' --l1Xml L1Menu_Collisions2015_25ns_v1_L1T_Scales_20101224_Imp0_0x102f.xml > hlt.py
See https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideGlobalHLT for the latest instructions.

Now you can use this hlt.py to make your ntuples for the QCD30-1800 MC. If you use 62X MC, you will want to rerun the HLT, so you should have something like this in your hlt.py:

# Define the analyzer modules
process.load("HLTrigger.HLTanalyzers.HLTBitAnalyser_cfi")
process.hltbitanalysis.hltresults = cms.InputTag( 'TriggerResults','','TEST' )
process.hltbitanalysis.RunParameters.HistogramFile="hltbitanalysis720.root"
process.hltbitanalysis.l1GtReadoutRecord = cms.InputTag('hltGtDigis','',process.name_() )
#process.hltbitanalysis2.l1GtReadoutRecord = cms.InputTag('hltGtDigis','',process.name_() )
process.HLTriggerFinalPath = cms.Path( process.hltGtDigis +
process.hltScalersRawToDigi + process.hltFEDSelector +
process.hltTriggerSummaryAOD + process.hltTriggerSummaryRAW )
process.out=cms.EndPath(process.hltbitanalysis)
process.hltL1GtTrigReport = cms.EDAnalyzer( "L1GtTrigReport",
    PrintVerbosity = cms.untracked.int32( 10 ),
    UseL1GlobalTriggerRecord = cms.bool( False ),
    PrintOutput = cms.untracked.int32( 3 ),
    L1GtRecordInputTag = cms.InputTag( "hltGtDigis" )
)
process.HLTAnalyzerEndpath = cms.EndPath( process.hltL1GtTrigReport +
process.hltTrigReport )
To test it out you can just run cmsRun hlt.py, but you will probably want to submit the jobs to crab, since they take several hours to run. You can use this multicrab as an example:

https://github.com/cms-steam/HLTrigger/blob/master/HLTanalyzers/test/multicrab.cfg

or https://github.com/cms-steam/HLTrigger/blob/master/HLTanalyzers/test/crab.cfg

After crab jobs finish, in the crab res files you'll see something like

TrigReport Events total = 15005 passed = 15005 failed = 0

TrigReport ---------- Path   Summary ------------
TrigReport  Trig Bit#        Run     Passed     Failed      Error Name

TrigReport     1   12      15005          0      15005          0 HLT_MonoCentralPFJet80_PFMETnoMu105_NHEF0p95_v5
TrigReport     1   13      15005       6738       8267          0 HLT_SingleForJet25_v4
TrigReport     1   14      15005      13355       1650          0 HLT_SingleForJet15_v4
TrigReport     1   15      15005       5881       9124          0 HLT_DiPFJetAve40_v10

You can use those numbers (events that passed for your trigger) to convert that number to rate using the simple recipe in https://twiki.cern.ch/twiki/bin/viewauth/CMS/TmdRecipes#Samples_and_Common_TSG_Recipes
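To avoid copying the "Passed" numbers by hand, the TrigReport path summary can be parsed with a few lines of Python (a sketch; it assumes the column layout shown above, with the path name as the last field):

```python
import re

def passed_counts(log_text):
    """Extract {path name: passed count} from a TrigReport path summary."""
    counts = {}
    for line in log_text.splitlines():
        # Columns: Trig  Bit#  Run  Passed  Failed  Error  Name
        m = re.match(r"TrigReport\s+\d+\s+\d+\s+(\d+)\s+(\d+)\s+(\d+)\s+(\d+)\s+(\S+)", line)
        if m:
            counts[m.group(5)] = int(m.group(2))  # group 2 is the Passed column
    return counts

log = """TrigReport     1   13      15005       6738       8267          0 HLT_SingleForJet25_v4
TrigReport     1   14      15005      13355       1650          0 HLT_SingleForJet15_v4"""
print(passed_counts(log))
```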

Note: You can reduce the big numbers in process.MessageLogger in the .py file from 100000000 to e.g. 1000 by changing: reportEvery = 1000
> Also, you can add skipBadFiles = cms.bool(True)
> And try disabling the write-out of modules that output too many messages.
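For reference, the changes suggested in the note would look roughly like this in the configuration file (a sketch; check the exact parameter names against your CMSSW release):

```python
# Print a framework report only every 1000th event instead of every event
process.MessageLogger.cerr.FwkReport.reportEvery = 1000

# Skip unreadable input files instead of aborting the whole job
process.source.skipBadFiles = cms.untracked.bool(True)
```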

Samples

*DAS POMPYTSD-

*DAS POMPYTSD+

*DAS PYTHIA

Sample (POSTLS162_V1::All) | Type | Number of Events | T2 | Cross Section (pb) x <s^2> = 10%
/POMPYT_plus_JpsiPlus_13TeV_POSTLS162_GENSIM_50/eliza-POMPYT_plus_JpsiPlus_13TeV_POSTLS162_V2_RECO-7d9ad8f26548f72967911c05f338e6d3/USER | AODSIM | 179001 | T2_RU_IHEP | 2.5E+03 ± 3.8E+02
/POMPYT_minus_JpsiMinus_13TeV_POSTLS162_GENSIM_50/eliza-POMPYT_minus_JpsiMinus_13TeV_POSTLS162_V2_RECO-7d9ad8f26548f72967911c05f338e6d3/USER | AODSIM | 173507 | T2_RU_IHEP | 2.5E+03 ± 3.5E+02
/PYTHIA6_JpsiPt0_13TeV_POSTLS162_GENSIM_14Out2014/eliza-PYTHIA6_Pt0_13TeV_POSTLS162_STEP3_GENSIMDIGIRAWRECO-7d9ad8f26548f72967911c05f338e6d3/USER | AODSIM | 87826 | T2_RU_IHEP | 7.5E+04


Testing HLT paths


Input with 2k events: 'file:/afs/cern.ch/user/e/eliza/private/TriggerStudy2014/CMSSW_7_2_0_pre8/src/HLTrigger/Configuration/test/step2_DIGI_L1_DIGI2RAW_HLT_PU_102_1_Rai.root'
Before using version 2 of the L1 menu, run this command: git cms-merge-topic 6960
1. hlt_50ns_Dimuon0_Jpsi

* hlt_50ns_Dimuon0_Jpsi_L1SingleMuOpen
* hlt_50ns_Dimuon0_Jpsi_L1DoubleMuOpen

hltGetConfiguration /users/eliza/HLT_50ns_Dimuon0/V2 --full --offline --mc --unprescale --process TEST --globaltag auto:run2_mc_GRun --l1-emulator 'stage1,gt' --l1Xml L1Menu_Collisions2015_25ns_v2_L1T_Scales_20141121_Imp0_0x1030.xml > hlt_50ns_Dimuon0_Jpsi_l1v2.py
L1 seed used: L1_SingleMuOpen 
cmsRun hlt_50ns_Dimuon0_Jpsi_l1v2.py >& hlt_50ns_Dimuon0_Jpsi_L1SingleMuOpen.log &
L1 seed used: L1_DoubleMu0 
cmsRun hlt_50ns_Dimuon0_Jpsi_l1v2.py >& hlt_50ns_Dimuon0_Jpsi_L1DoubleMuOpen.log &
outputfile:DQMIO_L1DoubleMU_HLT_50ns_Dimuon0_Jpsi.root

2. hlt_50ns_DoubleMu4_3_Jpsi_Displaced
hltGetConfiguration /users/eliza/HLT_50ns_DoubleMu0/V1 --full --offline --mc --unprescale --process TEST --globaltag auto:run2_mc_GRun --l1-emulator 'stage1,gt' --l1Xml L1Menu_Collisions2015_25ns_v2_L1T_Scales_20141121_Imp0_0x1030.xml > hlt_50ns_DoubleMu4_3_Jpsi_Displaced_l1v2.py
L1 seed used: L1_SingleMuOpen
L1 seed used: L1_DoubleMu0 

3. hlt_50ns_DoubleMu_JpsiTrk_Displaced
hltGetConfiguration /users/eliza/HLT_50ns_DoubleMu_JpsiTrk_Displaced/V1 --full --offline --mc --unprescale --process TEST --globaltag auto:run2_mc_GRun --l1-emulator 'stage1,gt' --l1Xml L1Menu_Collisions2015_25ns_v2_L1T_Scales_20141121_Imp0_0x1030.xml > hlt_50ns_DoubleMu_JpsiTrk_Displaced_l1v2.py
L1 seed used: L1_SingleMuOpen
L1 seed used: L1_DoubleMu0 

4. hlt_50ns_DoubleMu0

hltGetConfiguration /users/eliza/HLT_50ns_DoubleMu0_Test/V2 --full --offline --mc --unprescale --process TEST --globaltag auto:run2_mc_GRun --l1-emulator 'stage1,gt' --l1Xml L1Menu_Collisions2015_25ns_v2_L1T_Scales_20141121_Imp0_0x1030.xml > hlt_50ns_DoubleMu0_Test_l1v2.py
L1 seed used: L1_DoubleMu0 

5. HLT_HIL1DoubleMu0_HighQ

hltGetConfiguration /users/krajczar/HLTHIMuonsFor2015/V2 --full --offline --mc --unprescale --process TEST --globaltag auto:run2_mc_GRun --l1-emulator 'stage1,gt' --l1Xml L1Menu_Collisions2015_25ns_v2_L1T_Scales_20141121_Imp0_0x1030.xml --path HLT_HIL1DoubleMu0_HighQ > hlt_HIL1DoubleMu0_HighQ.py

EOS storage


This describes the basic commands to use EOS storage at CERN. Since xrootd is now the preferred access method, and also works on other systems such as CASTOR, we focus on it rather than the rfio protocol (nsls, rfdir, rfcp, ...).

Basic handling of files

To list the content of a directory in EOS:


eos ls [-l] /eos/cms/store/caf/user/eliza
or, an alternative command (after setting the CMS software):
xrd eoscms dirlist /eos/cms/store/caf/user/eliza
This one is also available for CASTOR, in case it is needed:
xrd castorcms dirlist /castor/cern.ch/user/e/eliza
One can use the help command to retrieve the full list of ways to interact with EOS. In addition, I define the variable $EOSCAF_HOME to point to the previous directory.

However, it should be noted that EOS also allows access to the CAF-T2 area (/store/user/...), which can be used as an EOS area too. It is unclear what the difference is, but I will assume that EOS and xrootd will be the default access for "CERN-local" data, while the location will be the one at CAF-T2 (information on how to use it is in caft2notes.html).

To create a directory:


xrd eoscms mkdir /eos/cms/store/caf/user/eliza/test
In case one needs to set the permissions to allow copy from a group, which does not seem to be the case, that can be done using:
eos chmod -r 775 /eos/cms/store/caf/user/eliza/test
or also (in case we want to use it with CASTOR):
xrd castorcms chmod /castor/cern.ch/user/e/eliza 7 7 5

To copy a file to EOS:


xrdcp test.test root://eoscms//eos/cms/store/caf/user/eliza/test/oi.test

To copy a file from EOS:


xrdcp root://eoscms//eos/cms/store/caf/user/eliza/test/oi.test temp/test.test
xrd eoscms cp /eos/cms/store/caf/user/eliza/test/oi.test /eos/cms/store/caf/user/eliza/test/file.test2
Note that the last one assumes that the indicated directories are under root://eoscms/, so it is only convenient when moving things around (e.g. renaming by copying and then removing with xrd eoscms rm, since xrd eoscms mv is not supported).

To view a file from EOS:


xrd eoscms cat /eos/cms/store/caf/user/eliza/test/oi.test

To check the size of a directory or a file in EOS:


eos find --size /eos/cms/store/caf/user/eliza/test/oi.test | awk -F= '{size+=$3} END {print size/1024/1024/1024/1024" TB"}'
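The same sum can be done in Python; this sketch assumes, as the awk command does, that eos find --size prints key=value pairs with a size=<bytes> field on each line (check the actual output format of your EOS version):

```python
import re

def total_size_tb(find_output):
    """Sum all size=<bytes> fields from `eos find --size` output and return TB."""
    total = sum(int(s) for s in re.findall(r"size=(\d+)", find_output))
    return total / 1024**4

# Usage (hypothetical): feed it the captured output of
#   eos find --size /eos/cms/store/caf/user/eliza/test
```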


To delete a file in EOS:


eos rm /eos/cms/store/caf/user/eliza//test/oi.test
xrd eoscms rm [-r] /eos/cms/store/caf/user/eliza//test/oi.test

To check whether a file is online (i.e. staged and available):


xrd eoscms isfileonline /eos/cms/store/caf/user/eliza/test/file.test2

To check the quota:

eos quota
eos quota | grep -A 4 "Quota Node: /eos/cms/store/caf/user/" | head -5


Generating SSH key

$ ls -al ~/.ssh

# Lists the files in your .ssh directory, if they exist

$ ssh-keygen -t rsa -C "your_email@example.com"

# Creates a new ssh key using the provided email
# Generating public/private rsa key pair.
# Enter file in which to save the key (/your_home_path/.ssh/id_rsa): just press Enter to continue.

# Enter passphrase (empty for no passphrase): [Type a passphrase]
# Enter same passphrase again: [Type passphrase again]

Which should give you something like this:

# Your identification has been saved in /your_home_path/.ssh/id_rsa.
# Your public key has been saved in /your_home_path/.ssh/id_rsa.pub.
# The key fingerprint is:
# 01:0f:f4:3b:ca:85:d6:17:a1:7d:f0:68:9d:f0:a2:db your_email@example.com

If using csh as a shell

$ eval `ssh-agent -c`

next you only need to do something like:

$ ssh-add ~/.ssh/id_rsa

git-for-beginners

git init usercode

cd usercode

git remote add origin $MY_REMOTE

git push origin master

git push --mirror -u origin

Mounting uerjpowerp100 on lxplus:

1) Create the mount-point directory, for example:

/afs/cern.ch/user/e/eliza/private/uerj-1
2) Run the command:
sshfs eliza@uerjpowerp100:/storage1/eliza /afs/cern.ch/user/e/eliza/private/uerj-1/
This mounts the /storage1/eliza directory of uerjpowerp100.

3) If you want, define an alias for it.

4) In crab, set the output destination of the jobs:

ui_working_dir = /storage/eliza/uerj-1/

-- ElizaMelo - 28 Feb 2014

Topic revision: r33 - 2015-06-25 - ElizaMelo
 