Documentation

  • Ganga's online help: type help() at the Ganga prompt.

Software preparation

The user prepares a software directory containing the executable script and all files needed for the analysis or personal production task (maximum size 50 MB). At job submission time the software directory is packed into a compressed tarball (.tar.bz2); the tarball is transferred to the remote farm node and unpacked there. The software directory is then available in the job ${HOME} on the remote farm node.

At run time on the remote node, the user's executable should use the following environment variables:

  • WN_INPUTFILES : points to the directory where input files are downloaded; the user's executable should read its input files from this directory.
  • WN_OUTPUTFILES : points to the directory where the user's executable should write the output files to be registered on the grid.
  • WN_INPUTLIST : points to a txt file in the $WN_INPUTFILES directory containing the list of input file absolute paths; the user's executable can read this file to simplify input file access.
The user's executable script must honour these variables to run correctly, as in the sketch below.
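
For illustration, a minimal sketch of a user executable written in Python (the real executable can be any script; the output file name is illustrative, and one absolute path per line is assumed in $WN_INPUTLIST):

#!/usr/bin/env python
# minimal sketch of a user executable using the runtime environment variables
import os, shutil

input_dir  = os.environ['WN_INPUTFILES']   # input files were downloaded here
output_dir = os.environ['WN_OUTPUTFILES']  # files written here will be registered on the grid
input_list = os.environ['WN_INPUTLIST']    # txt file listing the input file absolute paths

# iterate over the input files, assuming one path per line
for line in open(input_list):
    input_file = line.strip()
    # ... analyse input_file, producing e.g. result.root (illustrative name) ...

# place any produced outputs where the system expects them, e.g.:
# shutil.move('result.root', output_dir)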

Start ganga

Obtain a certificate and set up your home directory on the UI: see CNAF_services/How_to_access_Grid_resources
Add to your ~/.bashrc file the following line: alias ganga="${VO_SUPERBVO_ORG_SW_DIR}/ganga/bin/ganga_wrap.sh"
Source your new ~/.bashrc
Then start ganga: $ ganga

Job preparation and submission step by step

A Ganga job should be understood as a bulk job: the user prepares and submits a bulk job composed of N subjobs.

Create a new job

Create a new job object and give it a name:

j=Job()
j.name = 'myJob'

Define the job application

Assign SBApp as the application used by the job:

j.application = SBApp()

Set the directory where your sources are located. Ganga will create a tarball and ship it in the inputsandbox:

j.application.sw_dir = '/storage/gpfs_superb/users/ganga_util/GangaSuperB/test/analysisSoftware'

Set the executable path, relative to the job working directory after the software is unpacked, e.g. analysisSoftware/analysisExe.sh:

j.application.executable = 'analysisSoftware/analysisExe.sh'

NOTE:

For expert users: SBApp overrides the standard Ganga splitting mechanism; the SuperB-specific splitter is used in place of the Ganga GenericSplitter or of any user-defined splitter.

Define job input dataset

The following step is where system behaviour differs between the two use cases.

Production analysis use case: jobs intended to analyse official and personal simulation production datasets

j.inputdata = SBInputProductionAnalysis()
  • The user can include the following instructions in a script (scripting mode):
j.inputdata.dataset_id = '4f394214a328d55f2900003b'
j.inputdata.events_total = 50000 # zero for all
j.inputdata.events_per_subjobs = 250000
  • Or the user can launch the following method and answer the prompts for the required information (interactive mode). The results can be filtered interactively:
j.inputdata.jobInputDefinitionWizard() # Returns all the dataset with no filter
Number of returned dataset is 100, confirm the operation? Type 'yes' to confirm or (q)uit: q
...
j.inputdata.jobInputDefinitionWizard(session='fullsim')
# Filter on FullSim generation dataset only
...
j.inputdata.jobInputDefinitionWizard(prod_series='2010_September_311', analysis='generics')
# Filters are case-insensitive
...
j.inputdata.jobInputDefinitionWizard(prod_series='2010_September_311', analysis='gen')
# Filters can work with first characters only
...
j.inputdata.jobInputDefinitionWizard(dataset_id='4f394214a328d55f2900003b')
# Univocally filter by dataset_id
...

# Example of interactive work:
j.inputdata.jobInputDefinitionWizard(prod_series='2010_September_311', analysis='Generics', bkg_mixing='MixSuperbBkg_NoPair')

Fastsim Official Production
+----+--------------------+------------+-----------------+------+---------------------+------------------+--------+
| id | prod_series        | analysis   | generator       | dg   | bkg_mixing          | analysis_type    | status |
+----+--------------------+------------+-----------------+------+---------------------+------------------+--------+
| 0  | 2010_September_311 | Generics   | B+B-_generic    | DG_4 | MixSuperbBkg_NoPair | HadRecoil        | closed |
| 1  | 2010_September_311 | Generics   | B+B-_generic    | DG_4 | MixSuperbBkg_NoPair | SemiLepKplusNuNu | closed |
| 2  | 2010_September_311 | Generics   | B0B0bar_generic | DG_4 | MixSuperbBkg_NoPair | HadRecoil        | closed |
| 3  | 2010_September_311 | Generics   | B0B0bar_generic | DG_4 | MixSuperbBkg_NoPair | SemiLepKplusNuNu | closed |
+----+--------------------+------------+-----------------+------+---------------------+------------------+--------+

Choose the dataset:
enter an integer or (q)uit: 0

Chosen dataset details:
+-----------------+----------------------------+
| key             | value                      |
+-----------------+----------------------------+
| analysis        | Generics                   |
| analysis_type   | HadRecoil                  | 
| bkg_mixing      | MixSuperbBkg_NoPair        |
| creation_date   | 2012-02-13 18:10:49.885510 |
| dataset_id      | 4f394214a328d55f2900003b   |
| dg              | DG_4                       |
| evt_file        | 50000                      |
| evt_tot         | 94500000                   |
| evt_tot_human   | 94.5M                      |
| files           | 1890                       |
| generator       | B+B-_generic               |
| id              | 0                          |
| occupancy       | 121915466273               |
| occupancy_human | 113.5GiB                   |
| owner           | Official                   |
| prod_series     | 2010_September_311         |
| session         | fastsim                    |
| status          | closed                     |
+-----------------+----------------------------+

Insert the minimum number of events that you need for your analysis (zero for all):
enter an integer or (q)uit: 510000

Total job input size: 685.4MiB
Total selected number of events: 550.0K
Total number of involved lfns: 11

Insert the maximum number of events for each subjob. Remember:
- maximum output size is 2GiB.
- suggested maximum job duration 18h.
- maximum input size job is 10GiB.
- at least 50000 (that is the number of events of one file).
enter an integer or (q)uit: 270000

Subjobs details:
+----+-------------------------------------------------------------------------------+----------+--------+------+
| id | list_path                                                                     | size     | events | lfns |
+----+-------------------------------------------------------------------------------+----------+--------+------+
| 0  | /home/SUPERB/galvani/gangadir/workspace/galvani/LocalXML/157/input/list_0.txt | 253.1MiB | 200.0K | 4    |
| 1  | /home/SUPERB/galvani/gangadir/workspace/galvani/LocalXML/157/input/list_1.txt | 246.9MiB | 200.0K | 4    |
| 2  | /home/SUPERB/galvani/gangadir/workspace/galvani/LocalXML/157/input/list_2.txt | 185.4MiB | 150.0K | 3    |
+----+-------------------------------------------------------------------------------+----------+--------+------+

Personal production use case: jobs intended to run an arbitrary user simulation production

j.inputdata = SBInputPersonalProduction() # class instantiation
j.inputdata.number_of_subjobs = 3 # insert the number of subjobs, mandatory
j.inputdata.background_frame = True # Enable job input background_frame, default is False

Set the simulation session (Fast or Full) and software version

  • scripting mode:
    • manual:
      j.inputdata.session = 'FastSim'
      j.inputdata.sw_version = 'V0.3.1'
    • automatic: the system tries to retrieve the information from the .sbcurrent file
  • or interactive mode:
    • automatic, not mandatory:
      j.inputdata.detectSwVersion()
    • manual:
      j.inputdata.setSwVersion()

Define job output dataset

If you have not already identified the appropriate datasets, create one or more first. Subjobs can create output files belonging to different datasets in an m-to-one relationship: a given output file subset (m elements) is associated with exactly one dataset.

The files included in the defined output datasets will be transferred back to the submission site storage area sb_analysis.

For example, at CNAF you can directly access the output dataset contents in:

/storage/gpfs_superb/sb_analysis/<owner>/<dataset_id>/<date>_<jobid>_<software>_<jobname>/output/<subjobid>_<filename>

Maximum job output size per file is 3GB.

Class instantiation:

j.outputdata = SBOutputDataset()

Dataset parameter definition: output file pattern - dataset relationship

  • scripting mode:
j.outputdata.pairs = {'<output files pattern>': '<dataset_id>'}
  • or interactive mode:
j.outputdata.setOutputDataset()
# insert <output files pattern> and <dataset_id>

<output files pattern>: identifies the job output files to be associated with one dataset_id, e.g. file_*.root. Each subjob can create more than one file, possibly associated with different datasets (e.g. HadRecoil.root and SemiLepKplusNuNu.root).

<dataset_id>: the unique dataset string id, e.g. 4f6a556ea328d57013000002. The dataset_id can be retrieved via the SBDatasetManager method showDatasets.
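
For example, using the two output file names mentioned above (a sketch: the dataset ids are illustrative and must be replaced with real ones retrieved via showDatasets):

# map each output file pattern to the dataset that should own the matching files
# (illustrative dataset ids)
j.outputdata.pairs = {'HadRecoil.root': '4f6a556ea328d57013000002',
                      'SemiLepKplusNuNu.root': '4f7d8e5aa328d51ea1000000'}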

Job outputsandbox management

The user can instruct the master job to retrieve a set of files from each subjob. The outputsandbox is set up by specifying which files should be copied from the worker node to the submitter machine. It can include individual file names, shell patterns or directory names, given as paths relative to the job working directory on the worker node. For example, the following files and directories reside in the job home directory:

j.outputsandbox = ['b.dat','a*.txt','dir_example']

The b.dat file will be copied (if it exists), as well as all files matching the a*.txt pattern and the dir_example directory. The outputsandbox content will be copied to the submission UI in ~/gangadir/workspace/<user>/LocalXML/<jobid>/<subjobid>/output

If the user executable generates its own log file, it can be transferred back via the outputsandbox.

Job computational backend definition

The local machine backend is very useful for testing purposes (with small jobs) and is the recommended starting point:

j.backend=Local()

Local batch system backend (at sites where the LSF batch system is in use, e.g. CNAF):

j.backend=LSF()

Distributed computing Grid backend:

j.backend=LCG()
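
A typical workflow (a sketch) is to validate the job with the Local backend first, then clone it and switch to the grid backend:

j.backend = Local()   # quick validation run on the UI
j.submit()
# once the local test completes successfully, clone the job and run it on the grid:
j2 = j.copy()
j2.backend = LCG()
j2.submit()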

job submission

Now you can retrieve a job information summary:

j
full_print(j)
If everything looks OK, submit the job:
j.submit()

Job monitoring via Ganga

jobs
jobs(id).subjobs
j
j.subjobs(0)
In [23]:jobs(161).subjobs
Out[23]: 
Registry Slice: jobs(161).subjobs (3 objects)
-------------
    fqid |    status |     name | subjobs |   application |    backend |         backend.actualCE |  exitcode 
------------------------------------------------------------------------------------------------------------
   161.0 | completed |    myJob |         |    Executable |      Local |   bbr-ui.cr.cnaf.infn.it |         0 
   161.1 | completed |    myJob |         |    Executable |      Local |   bbr-ui.cr.cnaf.infn.it |         0 
   161.2 | completed |    myJob |         |    Executable |      Local |   bbr-ui.cr.cnaf.infn.it |         0 

stdout and stderr for each subjob are available in your gangadir:

!cat $jobs(<jobid>).subjobs(<subjobid>).outputdir/stdout
!cat ~/gangadir/workspace/<user>/LocalXML/<jobid>/<subjobid>/output/stdout
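
The subjobs slice can also be iterated from the Ganga prompt, for instance to print the status and output directory of each subjob (a sketch, using job 161 from the example above):

# loop over the subjobs of a master job and show where each wrote its output
for sj in jobs(161).subjobs:
    print sj.id, sj.status, sj.outputdir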

Every SuperB (sub)job returns via the outputsandbox an output_files.txt that contains the list of LFNs registered on the grid. The following command shows the merge of the output_files.txt files belonging to the subjobs:

!zcat $j.outputdir/output_files.txt.gz

The merged job wrapper log is useful for debugging; it also includes the stdout and stderr of every subjob:

!zcat $j.outputdir/severus.log.gz

job resubmission

As you run over more and more data, you will find a failure rate on the grid of roughly 0-5% of your jobs. This is unfortunately a consequence of the massive complexity of the system; the failures are generally transient network errors rather than systematic issues.

Resubmission of failed subjobs only:

j.resubmit()
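
Failed subjobs can also be selected and resubmitted individually (a sketch; it assumes the standard Ganga select() filter on registry slices):

# resubmit only the subjobs that ended in 'failed' status
for sj in j.subjobs.select(status='failed'):
    sj.resubmit()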

Resubmission of an existing job as a new one:

j = jobs(<id>).copy()
j.submit()

Killing a job

j.kill() # the job j is killed
jobs.kill(keep_going=True) # kill all the registered master jobs

Dataset management

dataset status

Prepared
  • New datasets with no registered files are in 'prepared' status
  • Can be deleted (see the remove method below)
Open
  • Dataset with at least one registered output file
  • The status change between 'prepared' and 'open' is automatic
  • The dataset can be used as output of several submissions
Closed
  • No longer usable as an output dataset
  • Can be changed to 'open' or 'bad' status
  • Can be deleted (see the remove method below)
Bad
  • No longer usable as an output or input dataset
  • Can be changed to 'open' or 'bad' status
  • Can be deleted (see the remove method below)
Temp
  • Files not owned by a dataset reside in a system-generated dataset
  • The dataset can not be used as job input or job output
  • The related files can be downloaded
  • An automatic cleanup procedure deletes such datasets every 30 days
  • The 'temp' status can not be modified
Removed
  • Hidden status used by the system to manage the central deletion operation

dataset manager methods

m = SBDatasetManager()

show

show all datasets and all their metadata

m.showDatasets()
No arguments means ALL datasets. It is strongly suggested to apply a filter using the following keywords (see Section 4.3.1, grey box, key-value table for the complete filter list):

  • session -> 'fastsim', 'fullsim', 'analysis'.
  • owner -> 'official'[*], '<username>' (use the method m.whoami() to obtain yours, e.g.: IT-INFN-Personal_Certificate-Ferrara-Andrea_Galvani).
  • status -> 'prepared', 'open', 'closed', 'bad', 'temp'.
  • dataset_id -> '4f6a5539a328d57013000001'
  • free_string -> 'Version3'
  • prod_series -> '2010_September_311'
  • physics parameters -> 'DG_4', 'Generic', 'HadRecoil', 'B+B-_generic', 'MixSuperbBkg_NoPair'

[*] The official production dataset owner is 'official'

Filters are case-insensitive and are applied to the start of the string. A logical AND is placed between keywords.
You can specify multiple values for the 'session', 'owner' and 'status' keywords; a logical OR is placed between those values.

Some examples:

m.showDatasets(dataset_id='4f6a5539a328d57013000001')
m.showDatasets(prod_series='2010_September_311', analysis='generic', dg='DG_4a')
m.showDatasets(status=['prepared','closed'])
m.showDatasets(free_string='some')
m.showDatasets(session=['fastsim','fullsim'], owner='IT-INFN-Personal_Certificate-Ferrara-Andrea_Galvani')

This filtering system is used throughout the ganga SuperB plugin.

create

job output dataset creation.

m.createDataset()

remove

to set the dataset status from 'prepared', 'closed' or 'bad' to the hidden status 'removed'; such datasets are permanently removed once a day by a cronjob.

m.removeDataset()

bad

to set dataset status to 'bad'

m.badDataset()

closed

to set dataset status to 'closed'

m.closeDataset()

open

to set dataset status to 'open'

m.openDataset()

download

to retrieve all files belonging to a dataset from the grid to the submission machine. The files will be transferred into $HOME/<dataset_id>

m.downloadDataset()

who am I

print the effective username of the current user.

m.whoami()

getFileList

It creates a .txt file containing all file references belonging to the chosen dataset

m.getFileList()
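
The resulting text file can then be consumed from Python, for example to count or iterate over the file references (a sketch: the file name dataset_files.txt is illustrative; use the path actually produced by getFileList):

# read the file references produced by getFileList (illustrative file name)
lfns = [line.strip() for line in open('dataset_files.txt') if line.strip()]
print len(lfns), 'file references in the chosen dataset'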

Hands on session

This session contains step-by-step instructions to exercise the main analysis framework functionalities:

  1. A personal production will be performed: an output dataset will be created and populated with job output files
  2. The analysis of the dataset created in the previous step will be performed; the results will be set up as a new output dataset
  3. The dataset with the analysis results will be transferred back to the user home area or accessed via POSIX (CNAF-specific example)

Testbed description

The application used in this test session, referred to throughout the tutorial, has been set up to exercise the main system workflow steps.

It resides at CNAF here: /storage/gpfs_superb/users/ganga_util/GangaSuperB/test/analysisSoftware/

The analysisSoftware directory contains all the files needed for the computation; in the personal production case it can be the result of an sbnewrel command execution with proper user customization. The directory is automatically compressed and transferred with the jobs to the remote resources.

The content of the test application directory is the following:

bbr-ui $> ls -R /storage/gpfs_superb/users/ganga_util/GangaSuperB/test/analysisSoftware/

.:
analysisExe.sh  graphs  roots

./graphs:
fileout_1  fileout_2  fileout_3

./roots:
file_1.root  file_2.root  file_3.root

The executable analysisExe.sh does the following:

  1. writes the contents of the environment variables into a text file (see the Software preparation section above)
  2. moves the .root files to $WN_OUTPUTFILES

bbr-ui $> cat analysisExe.sh 

#!/bin/bash

NOW=$(date +"%F_%H-%M-%S")
HOST=$(hostname)
FILE="${NOW}_${HOST}_${$}.txt"
echo "$FILE"

pwd | tee $FILE

mv roots/* $WN_OUTPUTFILES/
cd $WN_OUTPUTFILES

echo "begin analysis..." | tee $FILE

echo 'echo $WN_INPUTFILES' | tee -a $FILE
echo $WN_INPUTFILES | tee -a $FILE
echo 'echo $WN_OUTPUTFILES' | tee -a $FILE
echo $WN_OUTPUTFILES | tee -a $FILE
echo 'echo $WN_INPUTLIST' | tee -a $FILE
echo $WN_INPUTLIST | tee -a $FILE
echo 'cat $WN_INPUTLIST' | tee -a $FILE
cat $WN_INPUTLIST | tee -a $FILE

echo "end analysis." | tee -a $FILE

Personal Production step

j=Job()
j.name = 'myJob'
j.application = SBApp()
j.application.sw_dir = '/storage/gpfs_superb/users/ganga_util/GangaSuperB/test/analysisSoftware'
j.application.executable = 'analysisSoftware/analysisExe.sh'
j.inputdata = SBInputPersonalProduction()
j.inputdata.number_of_subjobs = 5
j.inputdata.background_frame = True
j.inputdata.setSwVersion()
m = SBDatasetManager()
m.createDataset()

# see the "Dataset Creation" section below
j.outputdata = SBOutputDataset()
# interactive mode: j.outputdata.setOutputDataset()
# scripting mode: j.outputdata.pairs = {'<output files pattern>': '<dataset_id>'}
# insert the file name map (e.g. *.root) and the dataset_id
j.outputsandbox = ['graphs']
j.backend=LCG()
j
j.submit()

Dataset Creation

In [1]:m = SBDatasetManager()

In [2]:m.createDataset()

+----+-----------------------------+
| id | dataset_type                |
+----+-----------------------------+
| 0  | FastSim Personal Production |
| 1  | FullSim Personal Production |
| 2  | Analysis                    |
+----+-----------------------------+
enter an integer or (q)uit: 0

Enter Events per file: 50000

Choose Analysis:
+----+----------------------+
| id | value                |
+----+----------------------+
| 0  | BtoKNuNu             |
| 1  | BtoKstarNuNu         |
| 2  | DstD0ToXLL           |
| 3  | DstD0ToXLL           |
| 4  | Generics             |
| 5  | HadRecoilCocktail    |
| 6  | KplusNuNu            |
| 7  | SLRecoilCocktail     |
| 8  | tau->3mu             |
| 9  | Enter a custom value |
+----+----------------------+
enter an integer or (q)uit: 0

Choose Geometry:
+----+----------------------+
| id | value                |
+----+----------------------+
| 0  | DG_4                 |
| 1  | DG_4a                |
| 2  | DG_BaBar             |
| 3  | Enter a custom value |
+----+----------------------+
enter an integer or (q)uit: 0

Choose Generator:
+----+----------------------------------------------+
| id | value                                        |
+----+----------------------------------------------+
| 0  | B0B0bar_Btag-HD_Cocktail                     |
| 1  | B0B0bar_Btag-SL_e_mu_tau_Bsig-HD_SL_Cocktail |
| 2  | B0B0bar_generic                              |
| 3  | B0B0bar_K0nunu                               |
| 4  | B0B0bar_K0nunu_SL_e_mu_tau                   |
| 5  | B0B0bar_Kstar0nunu_Kpi                       |
| 6  | B0B0bar_Kstar0nunu_Kpi_SL_e_mu_tau           |
| 7  | B+B-_Btag-HD_Cocktail                        |
| 8  | B+B-_Btag-SL_e_mu_tau_Bsig-HD_SL_Cocktail    |
| 9  | B+B-_generic                                 |
| 10 | B+B-_K+nunu                                  |
| 11 | B+B-_K+nunu_SL_e_mu_tau                      |
| 12 | B+B-_Kstar+nunu                              |
| 13 | B+B-_Kstar+nunu_SL_e_mu_tau                  |
| 14 | B+B-_taunu_SL_e_mu_tau                       |
| 15 | bhabha_bhwide                                |
| 16 | ccbar                                        |
| 17 | tau+tau-_kk2f                                |
| 18 | uds                                          |
| 19 | udsc                                         |
| 20 | Upsilon4S_generic                            |
| 21 | Enter a custom value                         |
+----+----------------------------------------------+
enter an integer or (q)uit: 21
Custom value: someUserDefinedValue

Choose Background mixing type:
+----+----------------------+
| id | value                |
+----+----------------------+
| 0  | All                  |
| 1  | NoPair               |
| 2  | NoMixing             |
| 3  | Enter a custom value |
+----+----------------------+
enter an integer or (q)uit: 0

Choose Analysis Type:
+----+----------------------+
| id | value                |
+----+----------------------+
| 0  | BtoKNuNu             |
| 1  | BtoKstarNuNu         |
| 2  | HadRecoil            |
| 3  | SemiLepKplusNuNu     |
| 4  | Enter a custom value |
+----+----------------------+
enter an integer or (q)uit: 0

Enter free string: myDataset_01

New dataset details:
+---------------+-----------------------------------------------------+
| key           | value                                               |
+---------------+-----------------------------------------------------+
| analysis      | BtoKNuNu                                            |
| analysis_type | BtoKNuNu                                            |
| dataset_id    | 4f7d8e5aa328d51ea1000000                            |
| dg            | DG_4                                                |
| evt_file      | 50000                                               |
| free_string   | myDataset_01                                        |
| generator     | someUserDefinedValue                                |
| owner         | IT-INFN-Personal_Certificate-Ferrara-Andrea_Galvani |
| session       | fastsim                                             |
| site          | INFN-T1                                             |
| bkg_mixing    | All                                                 |
+---------------+-----------------------------------------------------+
Type 'yes' to confirm dataset creation or (q)uit: yes

Production Analysis step

j=Job()
j.name = 'myJob'
j.application = SBApp()
j.application.sw_dir = '/storage/gpfs_superb/users/ganga_util/GangaSuperB/test/analysisSoftware'
j.application.executable = 'analysisSoftware/analysisExe.sh'
j.inputdata = SBInputProductionAnalysis()
j.inputdata.jobInputDefinitionWizard() # interactive mode

or alternatively the direct scripting mode set of three commands:

j.inputdata.dataset_id = '4f394214a328d55f2900003b' # dataset_id can be retrieved via m.showDatasets(), see next section
j.inputdata.events_total = 250000 # zero for all
j.inputdata.events_per_subjobs = 100000

m = SBDatasetManager()
m.createDataset()
# see the "Dataset Creation" section; select an Analysis purpose dataset
j.outputdata = SBOutputDataset()
# interactive mode: j.outputdata.setOutputDataset()
# scripting mode: j.outputdata.pairs = {'<output files pattern>': '<dataset_id>'}
# insert the file name map (e.g. *.root) and the dataset_id
j.backend = LCG()
j.submit()

Show dataset details

In [4]:m.showDatasets()

...
...

Analysis
+-----+--------------+----------------------------+--------+
| id  | free_string  | creation_date              | status |
+-----+--------------+----------------------------+--------+
| 135 | myDataset_01 | 2012-03-21 23:25:52.527268 | open   |
+-----+--------------+----------------------------+--------+

Choose the dataset for detailed information:
enter an integer or (q)uit: 135

Parent datasets, older first:

+---------------+----------------------------+
| key           | value                      |
+---------------+----------------------------+
| analysis_type | HadRecoil                  |
| bkg_mixing    | MixSuperbBkg_NoPair        |
| creation_date | 2012-02-13 18:10:49.885510 |
| dataset_id    | 4f394214a328d55f2900003b   |
| dg            | DG_4                       |
| evt_file      | 50000                      |
| evt_tot       | 94500000                   |
| files         | 1890                       |
| generator     | B+B-_generic               |
| occupancy     | 121915466273               |
| owner         | Official                   |
| prod_series   | 2010_September_311         |
| analysis      | Generics                   |
| session       | fastsim                    |
| status        | closed                     |
+---------------+----------------------------+

Selected dataset:
+-----------------+-----------------------------------------------------+
| key             | value                                               |
+-----------------+-----------------------------------------------------+
| creation_date   | 2012-03-21 23:25:52.527268                          |
| dataset_id      | 4f6a556ea328d57013000002                            |
| files           | 15                                                  |
| free_string     | myDataset_01                                        |
| id              | 135                                                 |
| occupancy       | 180                                                 |
| occupancy_human | 180.0B                                              |
| owner           | IT-INFN-Personal_Certificate-Ferrara-Andrea_Galvani |
| parent          | 4f394214a328d55f2900003b                            |
| session         | analysis                                            |
| status          | open                                                |
+-----------------+-----------------------------------------------------+

Output retrieval

Download

This method retrieves all files belonging to a dataset from the grid to the submission machine. The files are transferred into $HOME/<dataset_id>.

In [2]:m.downloadDataset(dataset_id='4f6a556ea328d57013000002')

Analysis
+----+----------------+----------------------------+--------+
| id | free_string    | creation_date              | status |
+----+----------------+----------------------------+--------+
| 0  | myDataset_01   | 2012-03-21 23:25:52.527268 | open   |
+----+----------------+----------------------------+--------+

Automatically selected the only entry

Total download size: 180.0B

Downloading to /home/SUPERB/galvani/4f6a556ea328d57013000002 ...
15/15

$ ls ~/4f6a556ea328d57013000002/

0_file_1.root
0_file_2.root
0_file_3.root
1_file_1.root
1_file_2.root
1_file_3.root
2_file_1.root
2_file_2.root
2_file_3.root

Analysis storage area at CNAF

The default final destination of job output is the Storage Element of the site involved in the submission. CNAF is the first Ganga-enabled site; its Storage Element is accessible via POSIX. This is an exception with respect to the typical site scenario, in which the use of an external tool (m.downloadDataset) is a must.

The storage area at CNAF dedicated to distributed analysis job output is /storage/gpfs_superb/sb_analysis. Its content follows this synopsis:

/storage/gpfs_superb/sb_analysis/<owner>/<dataset_id>/<date>_<jobid>_<software>_<jobname>/output/<subjobid>_<filename>


bbr-ui $> ls /storage/gpfs_superb/sb_analysis/

IT-INFN-Personal_Certificate-Ferrara-Andrea_Galvani
IT-INFN-Personal_Certificate-CNAF-Armando_Fella
...

bbr-ui $> ls /storage/gpfs_superb/sb_analysis/IT-INFN-Personal_Certificate-Ferrara-Andrea_Galvani

4f6a556ea328d57013000002
56ass345ea234jsd9888022q
9t768rf7661uii9s00991161
...

bbr-ui $> ls 4f6a556ea328d57013000002

20120403_172_analysisSoftware_FastAnalysis
20120222_133_analysisSoftware_FastAnalysis
20120328_166_analysisSoftware_FastAnalysis
...

$ ls 4f6a556ea328d57013000002/20120403_172_analysisSoftware_myJob/output/
0_2012-04-03_18-45-36_bbr-ui.cr.cnaf.infn.it_8755.txt
0_file_1.root
0_file_2.root
0_file_3.root
1_2012-04-03_18-45-36_bbr-ui.cr.cnaf.infn.it_8736.txt
1_file_1.root
1_file_2.root
1_file_3.root
2_2012-04-03_18-45-36_bbr-ui.cr.cnaf.infn.it_8735.txt
2_file_1.root
2_file_2.root
2_file_3.root

-- MikeKenyon - 14-Jun-2012
