How to set up your DREAM GRID account for use from CERN

Prerequisites

Grid users with a valid certificate as described on the GridHelp page can first load their certificate into their browser, then register with the DREAM VOMS service to become members of the DREAM VO. Once registered, they can use grid data transfer tools to move data into and out of the storage area at TTU or other grid locations, such as CERN, where the DREAM VO is supported.

To complete the setup steps described below, you need a working lxplus account (or an account on another local resource equipped with grid software), a grid certificate, and registration with the DREAM grid VO. You should be able to reproduce these steps from any computer at your home institution that is configured for use with grid software. See also our DataTransfer and GlobusOnline pages for related information.

First-time setup

You will have to complete the following steps only once.

  • Log in to lxplus and check whether the following command exists:

which voms-proxy-init

If not, you may need to set up grid software on your resource. It should work without further setup on lxplus and on the TTU machines for local users. Contact your local administrators to find out how to set up grid software if needed.
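On machines without pre-installed grid tools, one common pattern at CERN-connected sites is to source a grid UI environment distributed over CVMFS. The repository and script path vary by site and OS release, so the path below is purely a hypothetical illustration to discuss with your administrators:

# hypothetical path; ask your site admins for the correct setup script
source /cvmfs/grid.cern.ch/etc/profile.d/setup-ui.sh
which voms-proxy-init   # should now resolve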

Once the above command returns a result, test whether you can get a basic DREAM proxy using this command:

voms-proxy-init -voms dream

If this works, skip to the next sub-section, "Accessing your DREAM account on any supported resource". If not, proceed to add the DREAM virtual organization information to your local setup as described below.
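Whether the proxy worked right away or only after the setup below, you can inspect it at any time, including its VO attributes and remaining lifetime, with the standard companion command:

voms-proxy-info --all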

To add the DREAM VO information, first check that the DREAM VOMS server is defined (it should be on lxplus) by issuing the following command:

grep -r dream /etc/vomses

If it is absent (i.e., the above command returns nothing), add the following line to your list of VOMS servers in your own personal ~/.glite/vomses file, creating this file and the ~/.glite directory to contain it if necessary:

"dream" "voms.hpcc.ttu.edu" "15004" "/DC=com/DC=DigiCert-Grid/O=Open Science Grid/OU=Services/CN=voms.hpcc.ttu.edu" "dream"

Next, check to see if the LSC file for the DREAM VOMS server is available:

ls -l /etc/grid-security/vomsdir/*dream*

If no matching .lsc files are listed by the above command, do the following two steps (or use the combined sketch after the list):

  • create a directory ~/.glite/vomsdir and the subdirectory ~/.glite/vomsdir/dream:

mkdir ~/.glite/vomsdir
mkdir ~/.glite/vomsdir/dream

  • create a file called voms.hpcc.ttu.edu.lsc inside ~/.glite/vomsdir/dream with the contents:

/DC=com/DC=DigiCert-Grid/O=Open Science Grid/OU=Services/CN=voms/voms.hpcc.ttu.edu
/DC=com/DC=DigiCert-Grid/O=DigiCert Grid/CN=DigiCert Grid CA-1
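Equivalently, both steps can be done in one go from a bash shell:

# create the per-VO trust directory and write the LSC file for the DREAM VOMS server
mkdir -p ~/.glite/vomsdir/dream
cat > ~/.glite/vomsdir/dream/voms.hpcc.ttu.edu.lsc <<'EOF'
/DC=com/DC=DigiCert-Grid/O=Open Science Grid/OU=Services/CN=voms/voms.hpcc.ttu.edu
/DC=com/DC=DigiCert-Grid/O=DigiCert Grid/CN=DigiCert Grid CA-1
EOF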

The above steps should only have to be done once, and then the resulting files will be available to you from then on.

Finally, you may have to add the following lines to your login profile or issue them each time you want to use DREAM VO information:

If using bash shell:
export X509_VOMSES=~/.glite/vomses
export X509_VOMS_DIR=~/.glite/vomsdir

OR

If using csh/tcsh:
setenv X509_VOMSES ~/.glite/vomses
setenv X509_VOMS_DIR ~/.glite/vomsdir
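For bash users, for example, these lines could be appended to your login profile once; this is a sketch, and the profile file name (here ~/.bashrc) should be adjusted to your own setup:

# make the personal VOMS configuration available in every login shell
cat >> ~/.bashrc <<'EOF'
export X509_VOMSES=~/.glite/vomses
export X509_VOMS_DIR=~/.glite/vomsdir
EOF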

Accessing your DREAM account on any supported resource

Set up your grid software if needed and make sure the voms-proxy-init command exists and that you can get a DREAM proxy as described above.

For regular access to your own DREAM VO account as mapped on the target resource, get a VOMS proxy from the DREAM server (if you have not already done so above) using the following command:

voms-proxy-init -voms dream

A general DREAM proxy as above should be enough to let you log in or issue data transfer commands. In addition, Michele, Sehwook and Alan have special access to maintain pristine remote copies of the original DREAM data files and can assert an administrative role to do so. For those with this role, the regular proxy as above should still be used for data analysis, as for the rest of the collaboration. For maintaining the DREAM data copies, and only for that purpose, issue voms-proxy-init -voms dream:/dream/dreamdaq before a data maintenance operation; once the operation is done, issue voms-proxy-destroy followed by a regular voms-proxy-init -voms dream to revert to regular access, as illustrated below.
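For the role holders, the full cycle therefore looks like this:

voms-proxy-init -voms dream:/dream/dreamdaq   # assert the data-maintenance role
# ... perform the data maintenance operation ...
voms-proxy-destroy                            # drop the role proxy
voms-proxy-init -voms dream                   # get a regular proxy again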

Now you are ready to log in to one of the DREAM grid machines. The DREAM proxy obtained above is used automatically to authenticate your logins and data transfer commands.

At TTU, the grid nodes are antaeus.hpcc.ttu.edu (gatekeeper) and sigmorgh.hpcc.ttu.edu (data transfer). We use sigmorgh specifically for data transfers and it is tuned for transfer speed, so that is the machine to use for accessing storage, issuing gridftp or uberftp commands directly, etc. Antaeus can be used for general login, but data analysis jobs should not be run directly on this machine; submit them to the batch queue instead.

Log in to one of these machines with a command like:

gsissh antaeus.hpcc.ttu.edu -p 49922

NOTE: The TTU machines require a special port as specified above. Your grid resource may or may not support gsissh logins, and the port may be different, so check with your local grid administrator.
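If your installation also provides the companion gsiscp command (shipped in the same GSI-enabled OpenSSH package as gsissh; treat its availability as an assumption and check with which gsiscp), you can copy single files over the same port. Note that scp-style commands take the port with a capital -P; the file name below is only a placeholder:

gsiscp -P 49922 myfile.root antaeus.hpcc.ttu.edu:   # copy a local file to your remote home directory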

Simple data transfers for single files or small sets of files can be done using GridFTP or related commands, like UberFTP:

uberftp sigmorgh.hpcc.ttu.edu

Within uberftp, you can issue commands to set the remote directory, such as:

cd /lustre/hep/osg/dream
ls

You can then issue the command help to find out more about how uberftp works. (It is a grid-enabled version of FTP with very similar features, and can be used to list files and transfer them to your originating machine, etc.) For more extensive automated data transfers, see our GlobusOnline page. Other expert data transfer tasks are documented on our DataTransfer and DreamDataAnalysis pages.
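For scripted, non-interactive transfers of single files, the standard globus-url-copy command can also be pointed at the same storage; the file name here is only a placeholder:

# fetch one file from the DREAM storage area to the local machine
globus-url-copy gsiftp://sigmorgh.hpcc.ttu.edu/lustre/hep/osg/dream/somefile.dat file:///tmp/somefile.dat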

Finally, to analyze data, please either copy the data to your own machines as described above or consult the user manuals at the TTU High Performance Computing Center to submit batch jobs (which can start very quickly) to analyze data on the TTU cluster. Please do not run interactive analysis jobs directly on the antaeus.hpcc.ttu.edu grid gatekeeper or the sigmorgh.hpcc.ttu.edu data transfer node, which are dedicated to those functions. It is also possible to submit grid jobs using commands like globus-job-submit or through grid workflow engines like Condor-G; please contact Alan, Michele, or Sehwook for details.
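As a sketch of the grid-job route (the jobmanager name below is an assumption; ask the admins for the correct one at TTU):

globus-job-submit antaeus.hpcc.ttu.edu/jobmanager-condor /bin/hostname   # prints a job contact URL
globus-job-status <contact-URL>                                          # check the job status later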

-- AlanSill - 2014-11-30
