EMI user tutorial

Access the EMI UI

You log in via SSH to a remote machine in Catania; XX is a number between 01 and 40, and you will be told which one to use. The password will also be given verbally.

ssh -p 2222 -l vilniusXX emi-tutor.ct.infn.it


Authenticate yourself

Create your VOMS proxy; the certificate passphrase is VILNIUS, in capital letters:

voms-proxy-init --voms testers.eu-emi.eu

and check that it is valid:

voms-proxy-info -all
subject   : /C=IT/O=GILDA/OU=Personal Certificate/L=VILNIUS/CN=VILNIUS40/CN=proxy
issuer    : /C=IT/O=GILDA/OU=Personal Certificate/L=VILNIUS/CN=VILNIUS40
identity  : /C=IT/O=GILDA/OU=Personal Certificate/L=VILNIUS/CN=VILNIUS40
type      : proxy
strength  : 1024 bits
path      : /tmp/x509up_u539
timeleft  : 11:59:51
key usage : Digital Signature, Key Encipherment, Data Encipherment
=== VO testers.eu-emi.eu extension information ===
VO        : testers.eu-emi.eu
subject   : /C=IT/O=GILDA/OU=Personal Certificate/L=VILNIUS/CN=VILNIUS40
issuer    : /C=IT/O=INFN/OU=Host/L=CNAF/CN=emitestbed07.cnaf.infn.it
attribute : /testers.eu-emi.eu/Role=NULL/Capability=NULL
attribute : test_ga = test_ga_value for root group (/testers.eu-emi.eu)
timeleft  : 11:59:51
uri       : emitestbed07.cnaf.infn.it:15002
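Scripts that use the proxy often need to verify that enough lifetime remains. A minimal sketch, using the timeleft value shown above as sample input (the variable names and the one-hour threshold are illustrative, not part of the tutorial's commands):

```shell
# Convert the proxy "timeleft" field (HH:MM:SS, as printed by
# voms-proxy-info above) to seconds and check a minimum lifetime.
timeleft="11:59:51"   # sample value taken from the session above

seconds=$(echo "$timeleft" | awk -F: '{ print $1 * 3600 + $2 * 60 + $3 }')
echo "$seconds"

# Require at least one hour of remaining lifetime.
if [ "$seconds" -ge 3600 ]; then
    echo "proxy lifetime OK"
fi
```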

Browse available resources

With the lcg-infosites command we can list the resources available to our VO. First, we check which Computing Elements (CEs) are available:

[vilnius40@emi-tutor ~]$ lcg-infosites --vo testers.eu-emi.eu ce
#   CPU    Free Total Jobs      Running Waiting ComputingElement
     12      12          0            0       0 cert-09.cnaf.infn.it:8443/cream-lsf-demo
      0       0          0            0       0 cream-37.pd.infn.it:8443/cream-lsf-cert
      0       0          0            0       0 cream-37.pd.infn.it:8443/cream-lsf-creamtest1
      0       0          2            0       2 cream-37.pd.infn.it:8443/cream-lsf-creamtest2
      8       8          0            0       0 lxbra2308.cern.ch:8443/cream-pbs-testersemi
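Tables like this are easy to post-process in scripts. A sketch that picks out the CEs reporting free CPUs, using a few rows pasted from the output above (a live script would pipe the lcg-infosites output instead of the here-string):

```shell
# Select Computing Elements with free CPUs from lcg-infosites output.
# Sample rows copied from the table above; in practice you would run:
#   lcg-infosites --vo testers.eu-emi.eu ce | awk 'NR > 1 && $2 > 0 { print $6 }'
ce_table='   12      12          0            0       0 cert-09.cnaf.infn.it:8443/cream-lsf-demo
    0       0          0            0       0 cream-37.pd.infn.it:8443/cream-lsf-cert
    8       8          0            0       0 lxbra2308.cern.ch:8443/cream-pbs-testersemi'

# Column 2 is "Free", column 6 is the CE endpoint.
echo "$ce_table" | awk '$2 > 0 { print $6 }'
```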

Tip: if you get tired of typing such a long VO name, shorten it with an environment variable:

Now we query the information system to find out which Storage Elements (SEs) are available:

[vilnius40@emi-tutor ~]$ export MYVO="testers.eu-emi.eu"
[vilnius40@emi-tutor ~]$  lcg-infosites --vo $MYVO se
 Avail Space(kB)  Used Space(kB)  Type  SE
         7908181         1010947  SRM   cork.desy.de
       101168616         6153137  SRM   lxbra1910.cern.ch
        99630252         7691501  SRM   lxbra2502.cern.ch
        10511159          215773  SRM   lxbra2506v1.cern.ch
[vilnius40@emi-tutor ~]$ 
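The space columns are reported in kB. A quick sketch converting one line to GB for readability (sample line pasted from the table above; a live script would pipe the lcg-infosites output instead):

```shell
# Convert the "Avail Space(kB)" column to GB for one SE line.
se_line='        101168616         6153137  SRM   lxbra1910.cern.ch'

# $1 is available kB, $4 is the SE host; kB -> GB is a division by 1024^2.
echo "$se_line" | awk '{ printf "%s: %.1f GB free\n", $4, $1 / (1024 * 1024) }'
```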

Store a file on a Storage Element

Create a local file, and then store it on an available SE:

[vilnius40@emi-tutor ~]$ echo "This is a sample file" > example.txt
[vilnius40@emi-tutor ~]$ cat example.txt 
This is a sample file
[vilnius40@emi-tutor ~]$ lcg-cr -d  lxbra1910.cern.ch file:$PWD/example.txt 
 GSIFTP: default set up URL mode
GSIFTP: dest: set up FTP mode. DCAU disabled. Streams =  1, Tcp BS = 0

The file has been stored on the SE lxbra1910.cern.ch and automatically registered in the File Catalog. Since we did not specify a Logical File Name (LFN), the server assigned an automatically generated identifier; with the -l option we could have chosen an LFN when registering the file.

[vilnius40@emi-tutor ~]$ lfc-ls /grid/$MYVO/generated/2011-04-13
[vilnius40@emi-tutor ~]$ 

We can now delete the registered file using the GUID returned by lcg-cr; if we check for the file after deletion, it is, as expected, gone.

[vilnius40@emi-tutor ~]$ lcg-del -a guid:e28371ae-5e56-4c34-989d-21204b803212
[vilnius40@emi-tutor ~]$ 
[vilnius40@emi-tutor ~]$ lfc-ls /grid/$MYVO/generated/2011-04-13
[vilnius40@emi-tutor ~]$ 

Submit a job

Job submission requests are expressed in JDL (Job Description Language). Below is a working example that simply runs "uname -a" on the executing node:

[vilnius40@emi-tutor ~]$ cat uname.jdl
Type = "Job";
JobType = "Normal";
Executable = "/bin/uname";
StdOutput = "uname.out";
StdError = "uname.err";
OutputSandbox = {"uname.out", "uname.err"}; 
Arguments = "-a";
requirements = other.GlueCEStateStatus == "Production";
rank = -other.GlueCEStateEstimatedResponseTime;
RetryCount = 0;
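The JDL above can be written and sanity-checked locally before submission. A small sketch using a here-document (the list of checked attributes is just an illustration):

```shell
# Recreate the uname.jdl from the listing above and verify that the
# attributes the job relies on are present before submitting.
cat > uname.jdl <<'EOF'
Type = "Job";
JobType = "Normal";
Executable = "/bin/uname";
StdOutput = "uname.out";
StdError = "uname.err";
OutputSandbox = {"uname.out", "uname.err"};
Arguments = "-a";
requirements = other.GlueCEStateStatus == "Production";
rank = -other.GlueCEStateEstimatedResponseTime;
RetryCount = 0;
EOF

# Fail loudly if a key attribute is missing.
for key in Executable Arguments StdOutput StdError OutputSandbox; do
    grep -q "^$key" uname.jdl || echo "missing attribute: $key"
done
```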

We now run the job on one of the available resources:

[vilnius40@emi-tutor ~]$ lcg-infosites --vo $MYVO ce
#   CPU    Free Total Jobs      Running Waiting ComputingElement
     12      12          0            0       0 cert-09.cnaf.infn.it:8443/cream-lsf-demo
      0       0          0            0       0 cream-37.pd.infn.it:8443/cream-lsf-cert
      0       0          0            0       0 cream-37.pd.infn.it:8443/cream-lsf-creamtest1
      0       0         12           10       2 cream-37.pd.infn.it:8443/cream-lsf-creamtest2
      8       8          0            0       0 lxbra2308.cern.ch:8443/cream-pbs-testersemi
[vilnius40@emi-tutor ~]$ glite-ce-job-submit -a -r  lxbra2308.cern.ch:8443/cream-pbs-testersemi uname.jdl 

On success, the submission command returns a job identifier, which we use to monitor the job status and, once it is done, to retrieve the output:

[vilnius40@emi-tutor ~]$ glite-ce-job-status https://lxbra2308.cern.ch:8443/CREAM852910790

******  JobID=[https://lxbra2308.cern.ch:8443/CREAM852910790]
        Status        = [DONE-OK]
        ExitCode      = [0]

[vilnius40@emi-tutor ~]$ glite-ce-job-output https://lxbra2308.cern.ch:8443/CREAM852910790

2011-04-13 10:51:14,437 INFO - For JobID [https://lxbra2308.cern.ch:8443/CREAM852910790] output will be stored in the dir ./lxbra2308.cern.ch_8443_CREAM852910790
[vilnius40@emi-tutor ~]$ 
[vilnius40@emi-tutor ~]$ ls ./lxbra2308.cern.ch_8443_CREAM852910790/
uname.err  uname.out
[vilnius40@emi-tutor ~]$ ls -l ./lxbra2308.cern.ch_8443_CREAM852910790/
total 4
-rw------- 1 vilnius40 users   0 Apr 13 10:51 uname.err
-rw------- 1 vilnius40 users 114 Apr 13 10:51 uname.out
[vilnius40@emi-tutor ~]$ cat ./lxbra2308.cern.ch_8443_CREAM852910790/uname.out 
Linux lxbra2506v6.cern.ch 2.6.18-238.5.1.el5xen #1 SMP Tue Mar 1 19:22:01 EST 2011 x86_64 x86_64 x86_64 GNU/Linux


Creating and managing proxy certificates

The options in square brackets are optional; do not use --old for this tutorial.

$ arcproxy [--voms testers.eu-emi.eu] [--old]
$ arcproxy -I


The ARC client tools communicate with the information system, perform brokering, translate job descriptions, stage input files, and submit jobs to clusters:

$ arcinfo pgs03.grid.upjs.sk
$ cat <<EOF > myjob.xrsl
&(executable="/bin/echo")
 (arguments="Hello World")
 (stdout="stdout.txt")
EOF
$ arcsub -c pgs03.grid.upjs.sk myjob.xrsl -dump
$ arcsub -c pgs03.grid.upjs.sk myjob.xrsl


arcstat queries the status of active jobs on the clusters:

$ arcstat JOBID
$ arcstat -a


arccat prints the standard output and error of a job:

$ arccat JOBID
$ arccat -a


arcget retrieves the results of a job and removes it from the cluster:

$ arcget JOBID
$ arcget -a

Submitting to other types of resources

$ arcinfo pgs03.grid.upjs.sk:50000/arex
$ arcsub -c pgs03.grid.upjs.sk:50000/arex myjob.xrsl -dump
$ arcinfo CREAM:lxbra2308.cern.ch
$ arcsub -c CREAM:lxbra2308.cern.ch myjob.xrsl -dump


The UCC (UNICORE Commandline Client) is already configured; you can see this in ~/.ucc/preferences. (Note: you do not have to specify the password in the configuration file. If you omit that line, UCC will ask for it on every call. To avoid typing your password repeatedly, you can run ucc shell and then issue every UCC command from within the UCC shell.)

First, you have to run the connect command:

$ucc connect

Get help for each UCC command with -h:

$ucc -h

List the available sites:

$ucc list-sites

To enter the interactive mode:

$ucc shell

Inside the shell, list the available applications and storages, then exit (these commands run at the UCC shell prompt):

list-applications
list-storages
exit

UCC date.u

Copy this into a date.u file:

{
   # simple job: run Date
   ApplicationName: Date,
   ApplicationVersion: 1.0,
}

UCC - Running job

$ucc run date.u -v
In this case the standard output went to, for example, 58c55a2d-83ec-450f-b5f7-3e6f958312f7.stdout.

Get the status of a specific job using ucc get-status. As an argument you can use either the job file you got from run -a or the End Point Reference (EPR) you got from list-jobs:

$ucc run -a date.u -v -b
$ucc list-jobs
$ucc get-status job
$ucc get-output job

UCC Data Management

    Imports: [
       { From: "/path/fileName", To: "remoteFileName" },
    ],

    Exports: [
       { From: "remoteFileName", To: "/path/localFileName" },
    ],

UCC Data Management

    Imports: [
       { From: "u6://TS/Storage/fileName", To: "remoteFileName" },
    ],

    Exports: [
       { From: "remoteFileName", To: "u6://TS/Storage/fileName" },
    ],

UCC Resources

Resources: {
      Memory: 16M,
      CPUs: 32,
      Nodes: 4,
      Runtime: 3600,
}

UCC localScript.sh

Copy this file to a localScript.sh file:

echo "Hello" >> newFile

UCC Data Management

Copying a file into your remote home directory:

$ucc put-file -s localScript.sh -t u6://EMI-UNICOREX/Home/script.sh
$ucc ls u6://EMI-UNICOREX/Home

UCC - bash.u

Copy this file to a bash.u file:

{
   ApplicationName: "Bash shell",

   Environment: [ ],

   Imports: [
      { From: "u6://EMI-UNICOREX/Home/script.sh", To: "remoteScript.sh" },
   ],

   Exports: [
      { From: "newFile", To: "localNewFile" },
      { From: "newFile", To: "u6://EMI-UNICOREX/Home/homeNewFile" },
   ],

   Resources: {
      CPUs: 1,
   },
}

UCC - Running bash job

$ucc run bash.u -v

UCC - Running on a set of files

$mkdir ex
$cp *.u ex
$ucc batch -i ex -o out



Data management with SRM

Browsing files:
srmls -2 srm://xen-ep-emi-tb-se-3.desy.de:8443/pnfs/desy.de/data/testers.eu-emi.eu/

Writing a file to the SE:
srmcp -2 file://////etc/group  srm://xen-ep-emi-tb-se-3.desy.de:8443/pnfs/desy.de/data/testers.eu-emi.eu/group_DDMMYY_[A-Za-z]
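The suffix group_DDMMYY_[A-Za-z] in these commands is a naming convention: the current date plus a letter of your choice, so that each participant writes a unique file. One way to build such a name in a script (the letter "A" is an arbitrary example):

```shell
# Build a unique target file name following the group_DDMMYY_<letter>
# convention used in the srmcp commands above; pick your own letter.
letter="A"
fname="group_$(date +%d%m%y)_${letter}"
echo "$fname"
```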

srmls -2 srm://xen-ep-emi-tb-se-3.desy.de:8443/pnfs/desy.de/data/testers.eu-emi.eu/

Writing the file back from the SE:

srmcp -2 srm://xen-ep-emi-tb-se-3.desy.de:8443/pnfs/desy.de/data/testers.eu-emi.eu/group_DDMMYY_[A-Za-z] file://///tmp/groups_080711A.back

Deleting a file:

srmrm -2 srm://xen-ep-emi-tb-se-3.desy.de:8443/pnfs/desy.de/data/testers.eu-emi.eu/group_DDMMYY_[A-Za-z]


Writing a file to the SE with dccp:

dccp /etc/group dcap://xen-ep-emi-tb-se-3.desy.de:22125/pnfs/desy.de/data/testers.eu-emi.eu/group_DDMMYY_[A-Za-z]
srmls -2 srm://xen-ep-emi-tb-se-3.desy.de:8443/pnfs/desy.de/data/testers.eu-emi.eu/

Reading the file back from the SE:
dccp dcap://xen-ep-emi-tb-se-3.desy.de:22125/pnfs/desy.de/data/testers.eu-emi.eu/group_DDMMYY_[A-Za-z] /tmp/group_DDMMYY_[A-Za-z].back


Data access via WebDAV

Browse files from the command line:
     cadaver http://xen-ep-emi-tb-se-3.desy.de:2880

GUI clients: Nautilus, the Firefox add-on TrailMix (now proprietary), and OS file browsers that support WebDAV.

More information by Oleg and Tanja: http://trac.dcache.org/projects/dcache/wiki/WebDAV%20Hands%20on

Write files:

curl -v -T /etc/group http://xen-ep-emi-tb-se-3.desy.de:2880/pnfs/desy.de/data/testers.eu-emi.eu/testFileCURL_DDMMYY_[A-Za-z]

Look for the file with srmls or cadaver.

-- EmidioG - 13-Apr-2011

Topic revision: r9 - 2011-07-08 - ChristianBernardtExCern