```
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa
```

The last command will produce a prompt similar to:
```
Generating public/private rsa key pair.
Enter file in which to save the key (/home/<local_user_name>/.ssh/id_rsa):
```

Unless you want to change the location of the key, continue by pressing Enter. You will now be asked for a passphrase. Enter a passphrase that you will be able to remember and which is secure:
```
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
```

When everything has completed successfully, the output should resemble the following:
```
Your identification has been saved in /home/<local_user_name>/.ssh/id_rsa.
Your public key has been saved in /home/<local_user_name>/.ssh/id_rsa.pub.
The key fingerprint is:
ae:89:72:0b:85:da:5a:f4:7c:1f:c2:43:fd:c6:44:38 myname@mymac.local
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
| .               |
| E .             |
| . . o           |
| o . . S .       |
| + + o . +       |
|. + o = o +      |
| o...o * o       |
|. oo.o .         |
+-----------------+
```

Windows (PuTTY)

1. Open the PuTTYgen program.
2. For Type of key to generate, select SSH-2 RSA.
3. Click the Generate button.
4. Move your mouse in the area below the progress bar. When the progress bar is full, PuTTYgen generates your key pair.
5. Type a passphrase in the Key passphrase field, and type the same passphrase in the Confirm passphrase field. You can use a key without a passphrase, but this is not recommended.
6. Click the Save private key button to save the private key. Warning! You must save the private key; you will need it to connect to your machine.
7. Right-click in the text field labeled Public key for pasting into OpenSSH authorized_keys file and choose Select All.
8. Right-click again in the same text field and choose Copy.

Follow the instructions here to generate keys: https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/#platform-windows
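Once a key pair exists, the public key has to end up in ~/.ssh/authorized_keys on the login node. As a minimal sketch, assuming password login is still available and using netid as a placeholder account name, you could install it with ssh-copy-id (skip this if your site registers keys through a web portal instead):

```
# Hypothetical: append the public key to ~/.ssh/authorized_keys on the login node
$ ssh-copy-id -i ~/.ssh/id_rsa.pub netid@login.uscms.org
```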
```
Permission denied (publickey).
```

This most likely means that the permissions on the remote side are too permissive. Please execute:
```
chmod go-w ~/
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
```

To verify access:

```
$ ssh netid@login.uscms.org
```
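If the error persists, it helps to confirm what the permissions actually are; this is just a quick check with standard tools:

```
# Expected after the fix: ~ not group/other-writable, .ssh at 700, authorized_keys at 600
$ ls -ld ~ ~/.ssh ~/.ssh/authorized_keys
```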
```
$ copy_certificates
=============================================================================
This script checks if you have globus certificates or lets you copy them
from another machine otherwise (default: lxplus.cern.ch)

NOTE: New certificates need to be requested first. Follow this Twiki for that:
https://twiki.cern.ch/twiki/bin/view/CMSPublic/WorkBookStartingGrid#ObtainingCert
=============================================================================
Check for certificates in /home/yourusername/.globus ...
Couldn't find any certificates. Copying certificates from another machine
Note: This requires certificates to be under the standard $HOME/.globus location ...
Enter hostname of machine to login: lxplus.cern.ch
Enter username for lxplus.cern.ch: yourusername
Warning: Permanently added the RSA host key for IP address '188.184.70.205' to the list of known hosts.
Password:
usercert.pem                               100% 3526   3.4KB/s   00:00
userkey.pem                                100% 2009   2.0KB/s   00:01
All Done...
You can execute the following to initialize your proxy:
voms-proxy-init -voms cms -valid 192:00
```
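After initializing the proxy, you can confirm it exists and check how long it remains valid; these are standard voms-proxy-info options:

```
# Show the proxy details, including VO attributes
$ voms-proxy-info -all
# Print only the remaining lifetime, in seconds
$ voms-proxy-info -timeleft
```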
```
$ tutorial quickstart
Installing quickstart (master)...
Tutorial files installed in ./tutorial-quickstart.
Running setup in ./tutorial-quickstart...
$ cd tutorial-quickstart
$ condor_submit tutorial01.submit
```
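For reference, a vanilla-universe HTCondor submit file such as tutorial01.submit generally has the shape sketched below; the executable name and argument are illustrative assumptions, not the tutorial's actual contents:

```
# Minimal HTCondor submit file sketch (illustrative names)
universe = vanilla
executable = short.sh          # hypothetical job script
arguments = 5                  # illustrative argument
output = job.output
error = job.err
log = job.log
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
queue
```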
Add:

```
+REQUIRED_OS = "rhel7"
```

to your submit file to run jobs under RedHat 7 on sites supporting Singularity. For more information on how to use Singularity on CMS Connect, click here.
Currently, a job can only use one GPU at a time:

```
request_gpus = 1
+RequiresGPU = 1
request_cpus = 1
request_memory = 2 GB
```

Note that the number of GPU resources in CMS is still limited at present, so matching can take longer than for regular (CPU) jobs. It is currently not possible to specify exactly what type of GPU you want, but you can match on, for example, CUDA compute capability. For example, use the following requirements expression in your job:

```
requirements = CUDACapability >= 3
```

For information about submitting GPU jobs to the Global Pool via CMS Connect, see this link.
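Putting those fragments together, a complete GPU submit file might look like the sketch below; the executable name and output file names are illustrative assumptions:

```
# Hypothetical GPU submit file combining the attributes above
universe = vanilla
executable = gpu_job.sh        # illustrative wrapper script
request_gpus = 1
+RequiresGPU = 1
request_cpus = 1
request_memory = 2 GB
requirements = CUDACapability >= 3
output = job.output
error = job.err
log = job.log
queue
```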
```
# Load the software
$ export LD_LIBRARY_PATH=/opt/xrootd/lib:$LD_LIBRARY_PATH
$ export PATH=/opt/xrootd/bin:/opt/StashCache/bin:$PATH
```

Now, to copy the file /stash/user/khurtado/work/test.txt with either of these tools, you can do:
```
$ xrdcp root://stash.osgconnect.net:1094//user/khurtado/work/test.txt .
$ stashcp /user/khurtado/work/test.txt .
```
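The same pattern works from inside a job: a wrapper script can stage the input in before running the payload. The sketch below reuses the paths above and assumes a hypothetical analysis script:

```
#!/bin/bash
# Hypothetical job wrapper: stage input in via stashcp, then run the payload
export LD_LIBRARY_PATH=/opt/xrootd/lib:$LD_LIBRARY_PATH
export PATH=/opt/xrootd/bin:/opt/StashCache/bin:$PATH

stashcp /user/khurtado/work/test.txt ./test.txt || exit 1
./analyze.sh test.txt          # illustrative payload script
```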
```
+DESIRED_Sites="T2_US_Purdue,T2_US_UCSD"
```

Setting Sites on your shell session

If you don't set +DESIRED_Sites in your submission file, all US Tier-2 and Tier-3 Sites will be used by default. You can change the default behavior on your shell session by sourcing /etc/ciconnect/set_condor_sites.sh:
```
# Usage: source /etc/ciconnect/set_condor_sites.sh "<pattern>"
# Examples:
#  - All Sites:     set_condor_sites "T*"
#  - T2 Sites:      set_condor_sites "T2_*"
#  - Tier US Sites: set_condor_sites "T?_US_*"
$ source /etc/ciconnect/set_condor_sites.sh "T*"
All Done
Note: To verify your list of sites, simply do: echo $CONDOR_DEFAULT_DESIRED_SITES
NOTE: Remember that condor submission files with +DESIRED_Sites
NOTE: will give priority to that over $CONDOR_DEFAULT_DESIRED_SITES
$CONDOR_DEFAULT_DESIRED_SITES has been set to:
T3_BY_NCPHEP,T2_IT_Bari,T3_US_Baylor,T2_CN_Beijing,T3_IT_Bologna,T2_UK_SGrid_Bristol,T2_UK_London_Brunel,T1_FR_CCIN2P3,T2_FR_CCIN2P3,T0_CH_CERN,T2_CH_CERN,T2_CH_CERN_AI,T2_CH_CERN_HLT,T2_ES_CIEMAT,T1_IT_CNAF,T3_CN_PKU,T2_CH_CSCS,T2_TH_CUNSTDA,T2_US_Caltech,T3_US_Colorado,T3_US_Cornell,T2_DE_DESY,T2_EE_Estonia,T3_RU_FIAN,T3_US_FIT,T1_US_FNAL,T3_US_FNALLPC,T3_US_Omaha,T2_US_Florida,T2_FR_GRIF_IRFU,T2_FR_GRIF_LLR,T2_BR_UERJ,T2_FI_HIP,T2_AT_Vienna,T2_HU_Budapest,T3_GR_IASA,T2_UK_London_IC,T2_ES_IFCA,T2_RU_IHEP,T2_BE_IIHE,T3_FR_IPNL,T3_IT_Napoli,T2_RU_INR,T2_FR_IPHC,T2_RU_ITEP,T2_GR_Ioannina,T3_US_JHU,T2_RU_JINR,T1_RU_JINR,T2_UA_KIPT,T1_DE_KIT,T2_KR_KNU,T3_KR_KNU,T3_US_Kansas,T2_IT_Legnaro,T2_BE_UCL,T2_TR_METU,T2_US_MIT,T2_PT_NCG_Lisbon,T2_PK_NCP,T3_TW_NCU,T3_TW_NTU_HEP,T3_US_NotreDame,T2_TW_NCHC,T2_US_Nebraska,T3_US_NU,T3_US_OSU,T3_ES_Oviedo,T3_UK_SGrid_Oxford,T1_ES_PIC,T2_RU_PNPI,T3_CH_PSI,T3_IN_PUHEP,T3_IT_Perugia,T2_IT_Pisa,T3_US_Princeton_ICSE,T2_US_Purdue,T3_UK_London_QMUL,T1_UK_RAL,T2_DE_RWTH,T3_US_Rice,T2_IT_Rome,T3_US_Rutgers,T2_UK_SGrid_RALPP,T2_RU_SINP,T2_BR_SPRACE,T2_PL_Swierk,T3_US_MIT,T3_US_NERSC,T3_US_SDSC,T3_CH_CERN_CAF,T3_HU_Debrecen,T3_US_FIU,T3_US_FSU,T3_US_OSG,T3_US_TAMU,T2_IN_TIFR,T3_US_TTU,T3_IT_Trieste,T3_US_UCR,T3_US_UCD,T3_US_UCSB,T2_US_UCSD,T3_UK_ScotGrid_GLA,T3_US_UMD,T3_US_UMiss,T3_CO_Uniandes,T3_KR_UOS,T2_MY_UPM_BIRUNI,T3_US_PuertoRico,T3_BG_UNI_SOFIA,T3_UK_London_UCL,T2_US_Vanderbilt,T2_PL_Warsaw,T2_US_Wisconsin,T3_MX_Cinvestav
```

Site Lists: A list of all CMS Sites can be obtained via get_condor_sites.
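For instance, to restrict the session default to US Tier-2 sites only (combining the pattern styles shown above), you might do:

```
# Hypothetical session: limit the default site list to US Tier-2 sites
$ source /etc/ciconnect/set_condor_sites.sh "T2_US_*"
$ echo $CONDOR_DEFAULT_DESIRED_SITES   # verify the new default
```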
```
$ get_condor_sites
Usage: get_condor_sites <pattern>
-----------------------------------
Examples:
 - All Sites:     get_condor_sites T*
 - T2 Sites:      get_condor_sites T2_*
 - Tier US Sites: get_condor_sites T?_US_*
```

Here is a list of the current Sites available, followed by some code block examples for DESIRED_Sites that you can use in your job submission files:

Tier-2 Resources: T2_US_MIT, T2_US_Florida, T2_US_Purdue, T2_US_UCSD, T2_US_Vanderbilt, T2_US_Wisconsin, T2_US_Caltech, T2_US_Nebraska

Tier-3 Resources: T3_US_Baylor, T3_US_Colorado, T3_US_Cornell, T3_US_FIT, T3_US_FIU, T3_US_NotreDame, T3_US_Rutgers, T3_US_TAMU, T3_US_TTU, T3_US_UCD, T3_US_UCR, T3_US_PuertoRico, T3_US_UCSB, T3_US_UMD, T3_US_UMiss, T3_US_OSU
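As examples built from the lists above, the following +DESIRED_Sites lines could go in a submit file:

```
# All US Tier-2 resources listed above
+DESIRED_Sites = "T2_US_MIT,T2_US_Florida,T2_US_Purdue,T2_US_UCSD,T2_US_Vanderbilt,T2_US_Wisconsin,T2_US_Caltech,T2_US_Nebraska"

# A smaller hand-picked subset
+DESIRED_Sites = "T2_US_Purdue,T2_US_UCSD"
```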
Report from the Worker Node
Each job is reported once it is assigned to an available machine and starts executing on it.
Unlike regular CRAB workflows, in CMS Connect users define their own submission scripts (as in any regular condor workflow). Because of this, tasks like stage-out, stage-in, and error-code management are implemented and handled by each user. For this reason, only a few parameters are reported by default, without the need for any further action from the user.
```
PARAMETER = VALUE
# Example: Print this at the end of your job to report its number of events.
CMS_DASHBOARD_N_EVENTS = 5000
```
Parameters | Description |
---|---|
CMS_DASHBOARD_N_EVENTS | Number of events in the job. Default: 0 |
CMS_DASHBOARD_EXE_WC_TIME | Executable wall clock time. Default: Condor executable WC time. |
CMS_DASHBOARD_EXE_CPU_TIME | Executable CPU time. Default: Condor executable CPU time. |
CMS_DASHBOARD_EXE_EXIT_CODE | Executable exit code. Default: Condor executable exit code. |
CMS_DASHBOARD_STAGEOUT_SE | Storage Element name. Default: unknown. |
CMS_DASHBOARD_STAGEOUT_EXIT_CODE | Stage-out exit code. |
CMS_DASHBOARD_STAGEOUT_TIME | Stage-out time. |
CMS_DASHBOARD_JOB_EXIT_CODE | Job exit code. Default: executable exit code. Users can report their own job exit codes to handle the overall completion state of the job. |
CMS_DASHBOARD_JOB_EXIT_REASON | Job exit reason. Default: empty. |

Note: You might want to override the default values for EXE_WC_TIME, EXE_CPU_TIME, and EXE_EXIT_CODE in cases where, e.g., the Condor executable is just a user wrapper running the actual executable.
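As a sketch of that override case, assuming the Condor executable is a bash wrapper around a hypothetical payload my_analysis, the wrapper could measure and print the values itself at the end of the job:

```
#!/bin/bash
# Hypothetical wrapper: time the real payload and report dashboard parameters
START=$(date +%s)
./my_analysis "$@"               # illustrative payload executable
CODE=$?
END=$(date +%s)

echo "CMS_DASHBOARD_EXE_WC_TIME = $((END - START))"
echo "CMS_DASHBOARD_EXE_EXIT_CODE = $CODE"
echo "CMS_DASHBOARD_JOB_EXIT_CODE = $CODE"
exit $CODE
```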
```
git clone https://github.com/CMSConnect/cmsconnect-client
cd cmsconnect-client
source cmsconnect_client.sh
# or for tcsh
source cmsconnect_client.tcsh
```
```
# First, cmsenv a CMSSW release providing python2.7. For example:
$ source /cvmfs/cms.cern.ch/cmsset_default.csh
$ cmsrel CMSSW_7_1_28; cd CMSSW_7_1_28; cmsenv
# Now, install the client
$ wget -O - https://raw.githubusercontent.com/CMSConnect/cmsconnect-client/master/cmsconnect_client_install.sh | bash
```

Note that if you change to a different CMSSW release (e.g., from 8.x to 9.x), you might need to reinstall the client.
```
# For tcsh:
source ~/software/connect-client/cmsconnect_client.tcsh
# For bash:
source ~/software/connect-client/cmsconnect_client.sh
```
```
$ connect setup username@login.uscms.org
```
```
$ connect shell
$ export HOME=/home/$USER
$ voms-proxy-init -voms cms -valid 192:00
$ exit
```
```
$ connect shell voms-proxy-info -all
```

Use the git repo to get a submit example from the tutorial:
```
$ git clone https://github.com/CMSConnect/tutorial-quickstart
$ cd tutorial-quickstart
```

You will need to add your project name to the submit file (tutorial01.submit):

```
+ProjectName="cms.org.yourinstitution"
```

To see your default project, you can type:
```
$ connect shell cat /home/\$USER/.ciconnect/defaultproject
```

To submit your job, you can type:
```
$ connect submit tutorial01.submit
```

To see the queue:
```
$ connect q
```

Once your job is done in the queue, you can pull the output via:
```
$ connect pull
```

Note: In the case of the tutorial01 example, you will see job.output and job.err being transferred.
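Putting the whole remote workflow together, a typical session with the client sourced might look like this sketch (commands as introduced above):

```
# End-to-end remote submission using the cmsconnect client
source ~/software/connect-client/cmsconnect_client.sh

connect submit tutorial01.submit   # send the job to CMS Connect
connect q                          # check the queue until the job finishes
connect pull                       # retrieve job.output and job.err locally
```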
Reviewer/Editor and Date | Comments |
---|---|
Kenyi Hurtado - 10 June 2016 | created documentation v1 |