AsoDeployment - Revision 82, 2019-09-13, by StefanoBelforte
Deployment of AsyncStageOut for CRAB3

 

Additional machine preparation steps

The host must be registered for proxy retrieval from myproxy.cern.ch. Request it by sending an e-mail to px.support@cern.ch giving the DN of the host certificate. If the host certificate is not correct or needs to be updated, contact the VOC. The DN can be obtained with:

voms-proxy-info -file /etc/grid-security/hostcert.pem -subject
  In case voms-proxy-info is not available, use
openssl x509 -subject -noout -in /data/certs/hostcert.pem
  Registration with myproxy.cern.ch can be checked with:
ldapsearch -p 2170 -h myproxy.cern.ch -x -LLL -b "mds-vo-name=resource,o=grid" | grep $(hostname)
  Prepare directories for the deployment, owned by the service account:
sudo mkdir /data/srv /data/admin /data/certs
sudo chown crab3:zh /data/srv /data/admin /data/certs
  Make a copy of the host certificate, accessible by the service account to be used by AsyncStageOut:
sudo cp -p /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem /data/certs
sudo chown crab3:zh /data/certs/*
 

Deployment

The deployment and operations are done as the service user, so we switch to it:

sudo -u crab3 -i bash
  Create directories for the deployment scripts and the deployment:
mkdir /data/admin/asyncstageout
mkdir /data/srv/asyncstageout
  If not already done by a previous deployment, create the secrets file, filling in the CouchDB username and password, the IP address of the local machine and the host where the CRAB3 REST interface is installed:
cat > $HOME/Async.secrets <<EOF
COUCH_USER=***
COUCH_PASS=***
UFC_SERVICE_URL=https://cmsweb-testbed.cern.ch/crabserver/preprod/filemetadata
COUCH_CERT_FILE=/data/certs/servicecert.pem
COUCH_KEY_FILE=/data/certs/servicekey.pem
EOF
  The file contains sensitive data and must be protected with the appropriate permissions:
chmod 600 $HOME/Async.secrets
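A quick sanity check here can save debugging later. The helper below is a hypothetical sketch (not part of ASO) that verifies the secrets file has owner-only permissions and contains the mandatory CouchDB keys:

```shell
# Hypothetical helper (not part of ASO): sanity-check a secrets file before
# the deployment scripts consume it.
check_secrets() {
    local f="$1" key
    # Owner-only permissions, as set by the chmod above.
    [ "$(stat -c %a "$f")" = "600" ] || { echo "bad permissions on $f"; return 1; }
    # The CouchDB credentials are the parameters ASO cannot do without.
    for key in COUCH_USER COUCH_PASS; do
        grep -q "^${key}=" "$f" || { echo "missing $key"; return 1; }
    done
    echo "secrets file ok"
}
# e.g.: check_secrets $HOME/Async.secrets
```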
Get the deployment scripts (check the github dmwm/deployment repository for the latest tag; here we assume it is HG1411a):
cd /data/admin/asyncstageout
rm -rf Deployment
wget -O cfg.zip --no-check-certificate https://github.com/dmwm/deployment/archive/HG1411a.zip
unzip cfg.zip; rm cfg.zip
mv deployment-* Deployment
cd Deployment
  Perform the deployment of the appropriate AsyncStageOut release tag from the corresponding CMS repository (contact Hassen Riahi in case of doubt):
ASOTAG=1.0.3pre1
REPO=comp.pre.riahi
ARCH=slc6_amd64_gcc481
./Deploy -R asyncstageout@$ASOTAG -s prep -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
./Deploy -R asyncstageout@$ASOTAG -s sw -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
./Deploy -R asyncstageout@$ASOTAG -s post -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
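The three Deploy invocations differ only in the -s argument, so operators sometimes wrap them in a loop that aborts on the first failure. This is a sketch, not the official procedure; run_stages is a hypothetical helper, and the command it receives is whatever wraps the real ./Deploy call:

```shell
# Hypothetical sketch: run the prep/sw/post stages in order, stopping at the
# first failure. "$@" is a command that takes the stage name as its argument,
# e.g. a small wrapper around ./Deploy -s <stage> ...
run_stages() {
    local stage
    for stage in prep sw post; do
        "$@" "$stage" || { echo "stage $stage failed"; return 1; }
    done
    echo "all stages done"
}
# Real usage would look roughly like:
# deploy_one() { ./Deploy -R asyncstageout@$ASOTAG -s "$1" -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite; }
# run_stages deploy_one
```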
  Create a directory to store user credentials obtained from myproxy, accessible to the service exclusively:
mkdir /data/srv/asyncstageout/state/asyncstageout/creds
chmod 700 /data/srv/asyncstageout/state/asyncstageout/
Apply patches if required:
cd /data/srv/asyncstageout/current/
#wget https://github.com/dmwm/WMCore/pull/4965.patch -O - | patch -d apps/asyncstageout/ -p 1
#wget https://github.com/dmwm/WMCore/pull/4967.patch -O - | patch -d apps/asyncstageout/ -p 1
#wget https://github.com/dmwm/WMCore/commit/a4563fa3cbc451dcce27669052518769a5e00a2a.patch -O - | patch -d apps/asyncstageout/lib/python2.6/site-packages/ -p 3
  Initialize the service:
cd /data/srv/asyncstageout/current
./config/asyncstageout/manage activate-asyncstageout
./config/asyncstageout/manage start-services
./config/asyncstageout/manage init-asyncstageout
  Set correct values of some essential configuration parameters in the config file config/asyncstageout/config.py:
sed --in-place "s|\.credentialDir = .*|\.credentialDir = '/data/srv/asyncstageout/state/asyncstageout/creds'|" config/asyncstageout/config.py
sed --in-place "s|\.serviceCert = .*|\.serviceCert = '/data/certs/hostcert.pem'|" config/asyncstageout/config.py
sed --in-place "s|\.serviceKey = .*|\.serviceKey = '/data/certs/hostkey.pem'|" config/asyncstageout/config.py
sed --in-place "s|\.cache_area = .*|\.cache_area = 'https://cmsweb-testbed.cern.ch/crabserver/preprod/filemetadata'|" config/asyncstageout/config.py
sed --in-place "s|\.opsProxy = .*|\.opsProxy = '/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy'|" config/asyncstageout/config.py
sed --in-place "s|\.UISetupScript = .*|\.UISetupScript = '/data/srv/tmp.sh'|" config/asyncstageout/config.py
sed --in-place "s|\.log_level = .*|\.log_level = 10|" config/asyncstageout/config.py
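Because these sed substitutions fail silently when the pattern does not match, it is worth confirming they took effect. A minimal, self-contained illustration of the edit-then-verify pattern (using a scratch file rather than the real config.py):

```shell
# Self-contained illustration: the scratch file stands in for config.py, and
# grep verifies that the substitution actually matched something.
scratch=$(mktemp)
echo "config.AsyncTransfer.log_level = 0" > "$scratch"
sed --in-place "s|\.log_level = .*|\.log_level = 10|" "$scratch"
count=$(grep -c "log_level = 10" "$scratch")
echo "$count"   # 1 if the edit applied; 0 (and grep fails) otherwise
rm -f "$scratch"
```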
 

Operations

OpsProxy Renewal or Creation

First connect to the machine and create a seed for proxy delegation:

ssh machine_name
sudo mkdir /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/
voms-proxy-init -voms cms
sudo cp -p /tmp/x509up_u$(id -u) /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/seed-100001.cert
sudo chown crab3:zh /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/seed-100001.cert
Create a delegation (don't forget to change MACHINE_NAME and DELEGATION_NAME):
myproxy-init -l DELEGATION_NAME_100001 -x -R "/DC=ch/DC=cern/OU=computers/CN=MACHINE_NAME.cern.ch" -c 720 -t 36 -Z "/DC=ch/DC=cern/OU=computers/CN=MACHINE_NAME.cern.ch" -s myproxy.cern.ch
Copy the renewal script to /data/admin/ProxyRenew.sh and change its ownership:
sudo chown crab3:zh /data/admin/ProxyRenew.sh # Not needed if renewing
Before adding it to the crontab, run the command once and check that the proxy is renewed:
/data/admin/ProxyRenew.sh /data/certs/ /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/ DELEGATION_NAME cms
Add it to the crontab if not already present:
MAILTO="justas.balcas@cern.ch"
3 */3 * * * /data/admin/ProxyRenew.sh /data/certs/ /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/ DELEGATION_NAME cms
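The cron entry runs every three hours, far more often than a renewal is strictly needed; renewal scripts typically renew only when the remaining proxy lifetime drops below a threshold. A hypothetical sketch of that check (the real ProxyRenew.sh may differ):

```shell
# Hypothetical sketch of the threshold logic a renewal script can apply:
# renew only when the remaining lifetime (e.g. from voms-proxy-info
# --timeleft, in seconds) drops below a threshold (default: 24 hours).
needs_renewal() {
    local left=$1 threshold=${2:-86400}
    if [ "$left" -lt "$threshold" ]; then echo "renew"; else echo "still valid"; fi
}
# e.g.: needs_renewal "$(voms-proxy-info --timeleft)" && ... run myproxy renewal ...
```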
 

Starting and stopping the service or CouchDB

Environment needed before starting or stopping AsyncStageOut

export X509_USER_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
cd /data/srv/asyncstageout/current/
source ~/Async.secrets
Starting AsyncStageOut (if starting AsyncStageOut for the first time, make sure you have entered the correct FTS3 server URLs in CouchDB):
./config/asyncstageout/manage start-asyncstageout
Stopping AsyncStageOut:
./config/asyncstageout/manage stop-asyncstageout


Starting and stopping CouchDB

./config/asyncstageout/manage start-services
./config/asyncstageout/manage stop-services
 

Switch to FTS3

Set by hand all FTS server endpoints in the asynctransfer_config database of the local CouchDB instance from the futon interface. There is one document per T1 site. For example, the RAL FTS server can be found at:

http://LocalCouchInstance:5984/_utils/document.html?asynctransfer_config/T1_UK_RAL
  Modify the URL from the default to:
 There must be scheduled cron jobs for:

  • compacting the local CouchDB databases:
 0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_compact' &> /dev/null

0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_stat/_compact' &> /dev/null

0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_agent/_compact' &> /dev/null
 
  • querying/caching views of the asynctransfer database in the global CouchDB:
 */10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/FilesByWorkflow?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/ftscp_all?stale=update_after' &> /dev/null

  */10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/MonitorFilesCount/_view/filesCountByDestSource?stale=update_after' &> /dev/null
*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/monitor/_view/publicationStateSizeByTime?stale=update_after' &> /dev/null
 
  • querying/caching views of the asynctransfer_config database (local CouchDB):
 */10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetAnalyticsConfig' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetStatConfig' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetTransferConfig' &> /dev/null
 
  • querying/caching views of the asynctransfer_agent database (local CouchDB):
*/5 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_agent/_design/Agent/_view/existWorkers' &> /dev/null
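All the local-CouchDB entries above share the same credential/host prefix, which makes hand-edited typos such as a doubled slash easy to introduce. One way to avoid that is to generate the lines from a single base URL; this is a sketch assuming COUCH_USER, COUCH_PASS and COUCH_HOST are set as in Async.secrets (the placeholder defaults match the entries above):

```shell
# Sketch: build the compaction crontab lines from one base URL instead of
# editing each line by hand. Assumes COUCH_USER/COUCH_PASS/COUCH_HOST are set
# (e.g. via "source $HOME/Async.secrets"); placeholders are used as defaults.
: "${COUCH_USER:=LocalCouchUserName}"
: "${COUCH_PASS:=LocalCouchPass}"
: "${COUCH_HOST:=LocalCouchHost}"
base="http://${COUCH_USER}:${COUCH_PASS}@${COUCH_HOST}:5984"
for db in asynctransfer asynctransfer_stat asynctransfer_agent; do
    echo "0 1 * * * curl -s -H 'Content-Type: application/json' -X POST '${base}/${db}/_compact' &> /dev/null"
done
```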
 

Deployment of AsyncStageOut for CRAB3 developers

 

Pre-configuration

The AsyncStageOut requires a couple of password/username settings for databases. These are provided using a simple formatted (parameter-key=parameter-value) secrets file, expected to be by default in the home area of the account running the agent. Thus, create a secrets file (the default location/name is $HOME/Async.secrets) with the following definitions for supported parameters:

COUCH_USER=<a-couchdb-username> # Choose a username for the ASO CouchDB.
COUCH_PASS=<a-couchdb-password> # Choose a password for the ASO CouchDB.
COUCH_HOST=<IP-of-this-ASO-host> # You can read the host IP in https://openstack.cern.ch/dashboard/project/instances/.
UFC_SERVICE_URL=https://osmatanasi2.cern.ch/crabserver/dev/filemetadata # The URL of the crabserver instance from where ASO should get the filemetadata.
OPS_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
COUCH_CERT_FILE=/data/certs/hostcert.pem
COUCH_KEY_FILE=/data/certs/hostkey.pem
  Notes:
  • The CouchDB password parameter is required. The other parameters will default to somewhat sensible values if not provided.
  • CouchDB requires an IP address; it has problems with hostnames.

The file contains sensitive data and must be protected with the appropriate permissions:

chmod 600 $HOME/Async.secrets
You can actually put this file in any directory and/or give it any name you want, but then you have to set the environment variable ASYNC_SECRETS_LOCATION to point to the file:
export ASYNC_SECRETS_LOCATION=/path/to/Async.secrets/file
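The lookup order just described (environment variable wins, otherwise the default in the home area) can be expressed in one line; secrets_location below is a hypothetical helper illustrating it:

```shell
# Sketch of the lookup order described above: ASYNC_SECRETS_LOCATION wins
# when set, otherwise the default $HOME/Async.secrets is used.
secrets_location() {
    echo "${ASYNC_SECRETS_LOCATION:-$HOME/Async.secrets}"
}
```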
  This secrets file will be used by the deployment scripts to set parameters in the configuration files (e.g. /data/srv/asyncstageout/current/config/asyncstageout/config.py).

Installation

1) Create the directories where you will do the deployment.

sudo mkdir /data/admin /data/srv
sudo chown :zh /data/admin /data/srv
mkdir /data/admin/asyncstageout /data/srv/asyncstageout
  2) Get the DMWM deployment package from github (https://github.com/dmwm/deployment). See https://github.com/dmwm/deployment/releases for the available releases. Note that you don't need to get the latest release.
cd /data/admin/asyncstageout
wget -O cfg.zip --no-check-certificate https://github.com/dmwm/deployment/archive/HG1509i.zip
unzip cfg.zip; rm cfg.zip
mv deployment-* Deployment
cd Deployment
3) Set auxiliary variables ASOTAG, REPO and ARCH. The only architecture (at this point) for which AsyncStageOut RPMs are built is slc6_amd64_gcc481. You don't need to change SCRAM_ARCH to match this architecture; the Deploy script will do that for you. The AsyncStageOut releases can be found in https://github.com/dmwm/AsyncStageout/releases and should be taken from the CMS repository comp.pre.riahi.
ASOTAG=1.0.3pre14
REPO=comp.pre.riahi
ARCH=slc6_amd64_gcc481
4) The deployment is separated into three steps: prep, sw and post.
./Deploy -R asyncstageout@$ASOTAG -s prep -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
INFO: 20150917103852: starting deployment of: asyncstageout/offsite
INFO: installation log can be found in /data/srv/asyncstageout/.deploy/20150917-103852-4382-prep.log
INFO: deploying backend - variant: default, version: default
INFO: deploying wmcore-auth - variant: default, version: default
INFO: deploying couchdb - variant: default, version: default
INFO: deploying asyncstageout - variant: offsite, version: default
INFO: installation completed sucessfully

./Deploy -R asyncstageout@$ASOTAG -s sw -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
INFO: 20150917103903: starting deployment of: asyncstageout/offsite
INFO: installation log can be found in /data/srv/asyncstageout/.deploy/20150917-103903-4464-sw.log
INFO: deploying backend - variant: default, version: default
INFO: deploying wmcore-auth - variant: default, version: default
INFO: deploying couchdb - variant: default, version: default
INFO: deploying asyncstageout - variant: offsite, version: default
INFO: installation completed sucessfully

./Deploy -R asyncstageout@$ASOTAG -s post -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
INFO: 20150917104616: starting deployment of: asyncstageout/offsite
INFO: installation log can be found in /data/srv/asyncstageout/.deploy/20150917-104616-5911-post.log
INFO: Updating /data/srv/asyncstageout/v1.0.3pre14/apps to apps.sw.pre.riahi
INFO: deploying backend - variant: default, version: default
INFO: deploying wmcore-auth - variant: default, version: default
INFO: deploying couchdb - variant: default, version: default
INFO: deploying asyncstageout - variant: offsite, version: default
INFO: installation completed sucessfully
 

Authentication

Certificate for interactions with CMSWEB

Access to CMSWEB is restricted to CMS users and services by requesting authentication with certificates registered in VO CMS and SiteDB. The AsyncStageOut configuration file has a parameter (actually one parameter for each component) to point to a certificate that each component should use for interactions with CMSWEB. The parameter value is taken from the secrets file:

OPS_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
  Some of the CMS services use Grid service certificates for interactions with CMSWEB, but the majority, and in particular the production and pre-production CRAB services, use the operator's proxy. The reasons are both for convenience and security. For private installations you are the operator, so you should use your own user proxy:
mkdir /data/srv/asyncstageout/state/asyncstageout/creds
chmod 700 /data/srv/asyncstageout/state/asyncstageout
voms-proxy-init --voms cms --valid 192:00
cp /tmp/x509up_u$UID /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
The proxy is created for 8 days (192 hours), because this is the maximum allowed duration of the VO CMS extension. Thus, the proxy has to be renewed at least every 7 days. You can do it manually (executing the last two commands) or you can set up an automatic renewal procedure as is done in production and pre-production. See OpsProxy Renewal or Creation.
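The arithmetic behind the "8 days vs. 7 days" rule is simple but worth making explicit: with a 192-hour proxy renewed weekly, the worst-case remaining lifetime at renewal time is one day.

```shell
# The numbers from the paragraph above: a 192-hour (8-day) proxy renewed
# every 7 days still has 24 hours of lifetime left in the worst case.
PROXY_HOURS=192
RENEW_PERIOD_DAYS=7
margin=$(( PROXY_HOURS - RENEW_PERIOD_DAYS * 24 ))
echo "worst-case remaining lifetime: ${margin} hours"
```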

Note: The operator proxy is also used by ASO to manage users' files in case the retrieval of the user's proxy has failed.

 
Certificate for interactions with myproxy server

CRAB services use the host certificate (and private key) for interacting with myproxy server. The AsyncStageOut configuration file has two parameters to point to a certificate and private key to use for interactions with myproxy server. (The ASO components that need to interact with myproxy server are AsyncTransfer and DBSPublisher.) The parameter values are taken from the secrets file:

COUCH_CERT_FILE=/data/certs/hostcert.pem
COUCH_KEY_FILE=/data/certs/hostkey.pem
  So copy the host certificate and private key to the /data/certs/ directory and change the owner to yourself:
#sudo mkdir /data/certs # This should have been done already by the Deploy script when installing the VM.
sudo chown :zh /data/certs
#sudo cp -p /etc/grid-security/host{cert,key}.pem /data/certs/ # This should have been done already by the Deploy script when installing the VM.
sudo chown :zh /data/certs/host{cert,key}.pem
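A mismatched certificate/key pair only fails later, with opaque SSL errors, so it can be worth checking the copied pair up front. A hypothetical sketch using openssl (assumes an RSA host key; not part of the official procedure):

```shell
# Hypothetical check (assumes an RSA host key): a certificate and its private
# key belong together iff their moduli are identical.
cert_key_match() {
    local cert_mod key_mod
    cert_mod=$(openssl x509 -noout -modulus -in "$1") || return 1
    key_mod=$(openssl rsa -noout -modulus -in "$2" 2>/dev/null) || return 1
    if [ "$cert_mod" = "$key_mod" ]; then echo "match"; else echo "MISMATCH"; fi
}
# e.g.: cert_key_match /data/certs/hostcert.pem /data/certs/hostkey.pem
```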
Add the Grid host certificate DN of the ASO host to the CRAB external REST configuration in the "delegate-dn" field. Do this for the REST Interface instance you will use together with this ASO deployment. This is so that this ASO instance is allowed to retrieve users' proxies from myproxy server. The external REST configuration file will look something like this (in my case I have my private TaskWorker installed in osmatanasi2.cern.ch and I am installing ASO in osmatanasi1.cern.ch):
{
  "cmsweb-dev" : {
    "delegate-dn": [ "/DC=ch/DC=cern/OU=computers/CN=osmatanasi2.cern.ch|/DC=ch/DC=cern/OU=computers/CN=osmatanasi1.cern.ch" ],
    ...
  }
}
 

Patches

No patches necessary.

Initialization

cd /data/srv/asyncstageout/current
  The next command creates some directories and copies some template configuration files into the actual directories where they should be. This step is sometimes called "activation of the system". A hidden file ./install/asyncstageout/.using is created to signal that this activation step has been done. If this file exists already, the activation does nothing.
./config/asyncstageout/manage activate-asyncstageout
  The next command initializes and starts CouchDB. A hidden file ./install/couchdb/.init is created to signal that this initialization step has been done. If this file exists already, the initialization part is skipped and only the starting part is executed. The initialization includes the generation of the appropriate databases.
./config/asyncstageout/manage start-services
Starting Services...
starting couch...
CouchDB has not been initialised... running pre initialisation
Initialising CouchDB on :5984
Apache CouchDB has started, time to relax.
CouchDB has not been initialised... running post initialisation
  The next command generates a couple of configuration files (e.g. /data/srv/asyncstageout/current/config/asyncstageout/config.py) with parameters for all AsyncStageOut components. Most parameters are read from the Async.secrets file. A hidden file ./install/asyncstageout/.init is created to signal that this initialization step has been done. If this file exists already, the initialization does nothing.
./config/asyncstageout/manage init-asyncstageout
Initialising AsyncStageOut...
Installing AsyncTransfer into asynctransfer
Installing monitor into asynctransfer
Installing stat into asynctransfer_stat
Installing DBSPublisher into asynctransfer
Installing config into asynctransfer_config
Installing Agent into asynctransfer_agent
 

Configuration

ASO configuration

1) In the configuration file /data/srv/asyncstageout/current/config/asyncstageout/config.py there are still some parameters that need to be modified "by hand":

sed --in-place "s|\.credentialDir = .*|\.credentialDir = '/data/srv/asyncstageout/state/asyncstageout/creds'|" config/asyncstageout/config.py
sed --in-place "s|\.serviceCert = .*|\.serviceCert = '/data/certs/hostcert.pem'|" config/asyncstageout/config.py
sed --in-place "s|\.serviceKey = .*|\.serviceKey = '/data/certs/hostkey.pem'|" config/asyncstageout/config.py
sed --in-place "s|\.serverDN = .*|\.serverDN = '$serverDN'|" config/asyncstageout/config.py
sed --in-place "s|\.log_level = .*|\.log_level = 10|" config/asyncstageout/config.py
sed --in-place "s|\.UISetupScript = .*|\.UISetupScript = '/data/srv/tmp.sh'|" config/asyncstageout/config.py
sed --in-place "s|\.opsProxy = .*|\.opsProxy = '/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy'|" config/asyncstageout/config.py
Have a look at the configuration file and change other parameter values if you consider it appropriate.

2) Create a file /data/srv/tmp.sh with just the following line:

#!/bin/sh
3) There is a configuration file shared by the Monitor component and PhEDEx, /data/srv/asyncstageout/current/config/asyncstageout/monitor.conf. In this file make sure the service parameter points to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8443, https://fts3.cern.ch:8443 or https://fts3-pilot.cern.ch:8443). For development ASO instances one should use https://fts3-pilot.cern.ch:8443:
service = https://fts3-pilot.cern.ch:8443
 
CouchDB configuration

First of all check that CouchDB is up and that you can access it from your VM:

curl -X GET "http://$(hostname):5984/"
{"couchdb":"Welcome","uuid":"3bf548a5f518781332037d68da62ff28","version":"1.6.1","vendor":{"version":"1.6.1","name":"The Apache Software Foundation"}}
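If you want the check scriptable, the version can be pulled out of the welcome JSON with sed (a sketch; jq would be cleaner but is not assumed to be installed):

```shell
# Sketch: extract the version field from the CouchDB welcome JSON without jq.
# resp would normally come from: resp=$(curl -s -X GET "http://$(hostname):5984/")
resp='{"couchdb":"Welcome","uuid":"3bf548a5f518781332037d68da62ff28","version":"1.6.1","vendor":{"version":"1.6.1","name":"The Apache Software Foundation"}}'
version=$(echo "$resp" | sed -n 's/.*"version":"\([^"]*\)".*/\1/p')
echo "$version"   # 1.6.1
```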
1) The CouchDB installed in the VM is protected by a firewall; it cannot be accessed even from another machine at CERN. To be able to access this CouchDB from another machine at CERN, one needs to stop the iptables:
sudo /etc/init.d/iptables stop # To start the iptables again: sudo /etc/init.d/iptables start; To check the status: sudo /etc/init.d/iptables status
2) To access the CouchDB installed in the VM from outside CERN, create an ssh tunnel between your machine and lxplus:
[mylaptop]$ ssh -D 1111 <username>@lxplus.cern.ch
  and configure your browser (Preferences - Network) to access the internet with SOCKS Proxy Server = localhost and Port = 1111.
 

Cron jobs for CouchDB

You should already have the following cron jobs for compacting the CouchDB databases:

 0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_compact' &> /dev/null

0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_stat/_compact' &> /dev/null

0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_agent/_compact' &> /dev/null
  Until this ticket https://github.com/dmwm/AsyncStageout/issues/4068 is solved, one needs to update the crontab by hand to create cron jobs for querying/caching the views every X minutes. To edit the crontab do:
crontab -e
  Add the following cron jobs for querying/caching views of the asynctransfer database:
 */10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/JobsIdsStatesByWorkflow?stale=update_after' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/LFNSiteByLastUpdate?stale=update_after' &> /dev/null

  */10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/DBSPublisher/_view/publish?stale=update_after' &> /dev/null
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/monitor/_view/publicationStateSizeByTime?stale=update_after' &> /dev/null
  Note: FYI, in (pre-)production the global asynctransfer database is installed in the cmsweb CouchDB. This means that the protocol is HTTPS and therefore the curl query is something like curl --capath /path/to/CA/certs/dir --cacert /path/to/proxy --cert /path/to/proxy --key /path/to/proxy -H 'Content-Type: application/json' -X GET 'https://...'.

Add the following cron jobs for querying/caching views of the asynctransfer_config database:

 */10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetAnalyticsConfig' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetStatConfig' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetTransferConfig' &> /dev/null
  Add the following cron jobs for querying/caching views of the asynctransfer_agent database:
*/5 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_agent/_design/Agent/_view/existWorkers' &> /dev/null
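The curl entries above differ only in the database/view path. A small helper (hypothetical, not part of ASO) can generate the repetitive */10 entries from one template so the crontab stays consistent; the credentials/host are the same placeholders used above:

```shell
# Hypothetical helper: print a */10 view-caching cron line per CouchDB view.
gen_view_crons() {
    local couch="http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984"
    local view
    for view in "$@"; do
        echo "*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET '${couch}/${view}?stale=update_after' &> /dev/null"
    done
}

gen_view_crons \
    "asynctransfer/_design/AsyncTransfer/_view/JobsIdsStatesByWorkflow" \
    "asynctransfer/_design/AsyncTransfer/_view/LFNSiteByLastUpdate" \
    "asynctransfer/_design/DBSPublisher/_view/publish" \
    "asynctransfer/_design/monitor/_view/publicationStateSizeByTime"
```

Append the printed lines to the crontab opened with crontab -e.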
  And if you have something like
@reboot /data/srv/asyncstageout/current/config/couchdb/manage sysboot
1 0 * * * /data/srv/asyncstageout/current/config/couchdb/manage compact wmstats 'I did read documentation'
1 6 * * * /data/srv/asyncstageout/current/config/couchdb/manage compact all_but_wmstats 'I did read documentation'
1 12 * * * /data/srv/asyncstageout/current/config/couchdb/manage compactviews wmstats WMStats 'I did read documentation'
1 18 * * * /data/srv/asyncstageout/current/config/couchdb/manage compactviews all_but_wmstats all 'I did read documentation'
  delete those lines.
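One way to remove those entries without hand-editing is to filter a crontab dump and load the result back (a sketch; the heredoc input is illustrative — on the host, feed it crontab -l and reload with crontab):

```shell
# Sketch: drop every crontab line that calls the CouchDB manage script.
drop_couch_compaction() {
    grep -v 'config/couchdb/manage'
}

# Illustrative input; on the ASO host this would come from `crontab -l`.
drop_couch_compaction <<'EOF'
@reboot /data/srv/asyncstageout/current/config/couchdb/manage sysboot
1 0 * * * /data/srv/asyncstageout/current/config/couchdb/manage compact wmstats 'I did read documentation'
*/5 * * * * curl -s -X GET 'http://LocalCouchHost:5984/asynctransfer_agent/_design/Agent/_view/existWorkers'
EOF
```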
Line: 679 to 580
 First start CouchDB; otherwise ASO will fail to start because it cannot connect to the database.

Note: If following from the above initialisation steps, you may have CouchDB already running. This can be checked using the status command.

./config/asyncstageout/manage start-services
  To start all ASO components:
./config/asyncstageout/manage start-asyncstageout
Starting AsyncStageOut...
Checking default database connection... ok.
Starting components: ['AsyncTransfer', 'Reporter', 'DBSPublisher', 'FilesCleaner', 'Statistics', 'RetryManager']
Line: 733 to 628
2015-06-15 15:32:12: ASOMon[24464]: Reading config file /data/srv/asyncstageout/v1.0.3pre8/config/asyncstageout/monitor.conf
2015-06-15 15:32:12: ASOMon[24464]: Using FTS service https://fts3-pilot.cern.ch:8443
ASOMon: pid 24466
writing logfile to /data/srv/asyncstageout/v1.0.3pre8/install/asyncstageout/Monitor/aso-monitor.log
  To stop all ASO components:
./config/asyncstageout/manage stop-asyncstageout
Shutting down asyncstageot...
Checking default database connection... ok.
Stopping components: ['AsyncTransfer', 'Reporter', 'DBSPublisher', 'FilesCleaner', 'Statistics', 'RetryManager']
Line: 752 to 642
Stopping: FilesCleaner
Stopping: Statistics
Stopping: RetryManager
Stopping: Monitor
  To stop CouchDB:
./config/asyncstageout/manage stop-services
Shutting down services...
stopping couch...
Apache CouchDB has been shutdown.
 

Start/stop a single AsyncStageOut component

To start the Monitor component:

source apps/asyncstageout/Monitor/setup.sh; ./apps/asyncstageout/Monitor/ASO-Monitor.pl --config config/asyncstageout/monitor.conf
  To stop the Monitor component:
kill -9 `cat install/asyncstageout/Monitor/aso-monitor.pid`
  To start any other component:
./config/asyncstageout/manage execute-asyncstageout wmcoreD --start --component <component-name>
  To stop any other component:
./config/asyncstageout/manage execute-asyncstageout wmcoreD --shutdown --component <component-name>
 

Use your private ASO in your CRAB jobs

There is a CRAB configuration parameter named Debug.ASOURL which specifies the CouchDB instance into which CRAB should inject the transfer documents. If this parameter is not specified, the parameter backend-urls.ASOURL from the external REST configuration takes effect. For the production (pre-production) installation of AsyncStageOut, the documents should be injected into the central CouchDB deployed in CMSWEB (CMSWEB-testbed). For a private installation of AsyncStageOut, the documents should be injected into the local private CouchDB. So if you want to use your private ASO instance, you can either set in the CRAB configuration file

config.Debug.ASOURL = 'http://<couchdb-hostname>.cern.ch:5984/'
  or set in the external REST configuration
 { "cmsweb-dev": { ...
Line: 813 to 686
  }, ... }
}
 

Possible "glite-delegation-init: command not found" error

When running ASO, I got the following error message in ./install/asyncstageout/AsyncTransfer/ComponentLog:

 2015-02-06 18:41:54,162:DEBUG:TransferDaemon:Starting <AsyncStageOut.TransferWorker.TransferWorker instance at 0x1fabe18> 2015-02-06 18:41:54,162:DEBUG:TransferWorker:executing command: export X509_USER_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/eebe07e240237dde878ab66bb56e7794e8d5b39e ; source /data/srv/tmp.sh ; glite-delegation-init -s https://fts3-pilot.cern.ch:8443 at: Fri, 06 Feb 2015 18:41:54 for: /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=atanasi/CN=710186/CN=Andres Jorge Tanasijczuk 2015-02-06 18:41:54,271:DEBUG:TransferWorker:Executing :
Line: 828 to 699
output : error: /bin/sh: glite-delegation-init: command not found
retcode : 127
2015-02-06 18:41:54,271:DEBUG:TransferWorker:User proxy of atanasi could not be delagated! Trying next time.
  To fix it, I had to install fts2-client. But first I had to create a file /etc/yum.repos.d/EMI-3-base.repo with the following content:
[EMI-3-base]
name=EMI3 base software
baseurl=http://linuxsoft.cern.ch/emi/3/sl6/x86_64/base
Line: 841 to 710
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-emi
exclude=
priority=15

sudo yum install fts2-client
 

Disk may become full after some days

After some days the disk in the VM may become full because of accumulated ASO documents and view files, and ASO will stop working. One has to stop all AsyncStageOut components and CouchDB, and then do a clean-all.

./config/asyncstageout/manage stop-asyncstageout
./config/asyncstageout/manage stop-services
./config/asyncstageout/manage clean-all
 

manage script commands

Line: 878 to 741
 Log files to watch for errors and to check and search in case of problems:

AsyncStageOut component logs:

./install/asyncstageout/AsyncTransfer/ComponentLog
./install/asyncstageout/DBSPublisher/ComponentLog
./install/asyncstageout/Reporter/ComponentLog
./install/asyncstageout/Statistics/ComponentLog
./install/asyncstageout/FilesCleaner/ComponentLog
./install/asyncstageout/RetryManager/ComponentLog
./install/asyncstageout/Monitor/aso-monitor.log
  CouchDB log:
./install/couchdb/logs/couch.log
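When searching these logs, grepping for error markers narrows things down quickly. A sketch, run here against sample lines like the TransferWorker excerpt shown in the troubleshooting section above (on the host, feed it a real ComponentLog instead):

```shell
# Sketch: keep only log lines that look like errors or tracebacks.
scan_for_errors() {
    grep -i -e 'error' -e 'traceback'
}

# Illustrative input taken from a ComponentLog excerpt on this page.
scan_for_errors <<'EOF'
2015-02-06 18:41:54,271:DEBUG:TransferWorker:Executing :
output : error: /bin/sh: glite-delegation-init: command not found
retcode : 127
EOF
```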
 

Tips for operators

ASO operations must be done as the service user (crab3 for production and pre-production):

sudo -u crab3 -i bash
  Export the operators proxy:
export X509_USER_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
  Source the ASO environment (the wild cards are for the arch and ASO version: for example, slc6_amd64_gcc481 and 1.0.3pre8 respectively):
cd /data/srv/asyncstageout/current
source sw.dciangot/slc6_amd64_gcc493/cms/asyncstageout/1.0.5pre3/etc/profile.d/init.sh
# equivalent at the moment to
# source sw.dciangot/*/cms/asyncstageout/*/etc/profile.d/init.sh
 

Killing transfers

Kill all the transfers in CouchDB for a given task:

./config/asyncstageout/manage execute-asyncstageout kill-transfer -t <taskname> [-i docID comma separated list]
  Note: If the FTS transfer was already submitted, it is (currently) not possible to kill it in FTS.

Retrying publication

Retry the publication for a given task:

./config/asyncstageout/manage execute-asyncstageout retry-publish -t <taskname> [-i docID comma separated list]
 

ASO update/upgrade

Line: 942 to 795
 

ASO on production (cmsweb)

Changed:
<
<
| Machine | ASO tag | Description | Date |
| vocms031 | 1.0.3pre8 from comp.pre.riahi release notes compatible with HG1503d | Full reinstall. Patched with 1.0.3pre14 TransferWorker.py, PublisherWorker.py and ReporterWorker.py. | 2015 Jun 3 |
>
>
| Machine | Path | ASO tag | Database |
| vocms031 | /data/srv/asyncstageout/current | 1.0.6pre1 downgraded to 105pre3 | cmsweb.cern.ch/couchdb2/asodb4 |
| | /data/srv/asyncstageout2/current | 1.0.6pre1 downgraded to 105pre3 | cmsweb.cern.ch/couchdb2/asodb1 |
| vocms0108 | /data/srv/asyncstageout/current | 1.0.6pre1 downgraded to 105pre3 | cmsweb.cern.ch/couchdb2/asodb2 |
| | /data/srv/asyncstageout2/current | 1.0.6pre1 downgraded to 105pre3 | cmsweb.cern.ch/couchdb2/asodb3 |
| vocms0105 | /data/srv/asyncstageout/current | 1.0.7pre1 patched ASO/Oracle | cmsweb.cern.ch |
 

ASO on pre-production (cmsweb-testbed)

Changed:
<
<
| Machine | ASO tag | Description | Date |
| vocms021 | 1.0.3pre14 from comp.pre.riahi release notes compatible with HG1509i | Full reinstall | 2015 Aug 28 |
>
>
| Machine | Path | ASO tag | Database |
| vocms0108 | /data/srv/asyncstageout/current | 1.0.6pre1 comp.pre.dciangot compatible with HG1612h | cmsweb-testbed.cern.ch/couchdb/asodb3 |
| | /data/user/asostandalone/ | ORACLE ASO v2.0 | |
 
META FILEATTACHMENT attachment="ProxyRenew.sh" attr="" comment="" date="1396593242" name="ProxyRenew.sh" path="ProxyRenew.sh" size="8920" user="jbalcas" version="1"
META TOPICMOVED by="atanasi" date="1412700584" from="CMS.ASODeployment" to="CMSPublic.AsoDeployment"

Revision 67 - 2016-10-26 - StefanoBelforte

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 911 to 911
 
cd /data/srv/asyncstageout/current
Changed:
<
<
source sw.pre.riahi/*/cms/asyncstageout/*/etc/profile.d/init.sh
>
>
source sw.dciangot/slc6_amd64_gcc493/cms/asyncstageout/1.0.5pre3/etc/profile.d/init.sh # equivalent at the moment to # source sw.dciangot/*/cms/asyncstageout/*/etc/profile.d/init.sh
 

Killing transfers

Revision 66 - 2015-11-11 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 554 to 554
 service = https://fts3-pilot.cern.ch:8443
Deleted:
<
<
4) Make sure the directory /data/srv/asyncstageout/current/install/asyncstageout/Monitor/work was created. If not, create it.

mkdir -p /data/srv/asyncstageout/current/install/asyncstageout/Monitor/work
 
CouchDB configuration

First of all check that CouchDB is up and that you can access it from your VM:

Revision 65 - 2015-09-18 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 20 to 20
 
Added:
>
>

About CouchDB

The ASO service uses four CouchDB databases which purposes are summarized in the following table:

| Database name | Description |
| asynctransfer | This is the database where the user transfer/publication request documents are stored. |
| asynctransfer_config | This database contains a few documents with some ASO configuration parameters. |
| asynctransfer_agent | This database contains a few documents where ASO keeps track of the status (running, not running, etc) of the different ASO components. |
| asynctransfer_stat | description needed |

There are asynctransfer databases deployed in the CMSWEB CouchDBs (one in production and one in the testbed). These databases are used by the production and pre-production ASO services respectively.

When deploying the ASO service, a CouchDB instance is deployed locally in the host. This CouchDB contains the four databases listed in the table above. While in a development ASO deployment all databases in this CouchDB are relevant, in production and pre-production the asynctransfer database of this CouchDB is redundant (because the asynctransfer databases from the CMSWEB CouchDBs are used).

To distinguish between the CouchDB deployed in the ASO host and the CouchDB in CMSWEB, we use the term "local" when referring to the first. In a development ASO deployment it is only the local CouchDB that is used, so we may omit the term "local".
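A quick sanity check after deployment is that all four databases exist in the local CouchDB; on the host this would be curl -s 'http://LocalCouchHost:5984/_all_dbs' (hostname/port are the placeholders used on this page). The sketch below runs the same check against an illustrative response:

```shell
# Sketch: verify the four ASO databases appear in a CouchDB _all_dbs reply.
# The response string is illustrative; on the host it would come from curl.
response='["asynctransfer","asynctransfer_config","asynctransfer_agent","asynctransfer_stat"]'
for db in asynctransfer asynctransfer_config asynctransfer_agent asynctransfer_stat; do
    echo "$response" | grep -q "\"$db\"" && echo "$db: present"
done
```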

 

Deployment of AsyncStageOut for CRAB3 operators

Machine details

Revision 64 - 2015-09-18 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 179 to 179
 3 */3 * * * /data/admin/ProxyRenew.sh /data/certs/ /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/ DELEGATION_NAME cms
Changed:
<
<

Starting and stopping the service or CouchDB:

>
>

Starting and stopping the service or CouchDB

  Environment needed before starting or stopping AsyncStageOut
Line: 188 to 188
 source ~/Async.secrets
Changed:
<
<
Starting AsyncStageOut (If first time starting AsyncStageOut, make sure you have entered the correct FTS3 server urls in CouchDB)
>
>
Starting AsyncStageOut (If first time starting AsyncStageOut, make sure you have entered the correct FTS3 server URLs in CouchDB)
 
./config/asyncstageout/manage start-asyncstageout
Line: 209 to 210
 

Switch to FTS3

Changed:
<
<
Set by hand all FTS servers endpoints in the ASO config database in the local CouchDB instance from the futon interface. There is one document per T1 site. For example, the RAL FTS server can be found here:
http://CouchInstance:port/_utils/document.html?asynctransfer_config/T1_UK_RAL
>
>
Set by hand all FTS servers endpoints in the asynctransfer_config database of the local CouchDB instance from the futon interface. There is one document per T1 site. For example, the RAL FTS server can be found at:

http://LocalCouchInstance:5984/_utils/document.html?asynctransfer_config/T1_UK_RAL

Modify the URL from the default to:

Cron jobs for CouchDB

There must be scheduled cron jobs for:

  • compacting the local CouchDB databases:

0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_compact' &> /dev/null

0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_stat/_compact' &> /dev/null

0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984//asynctransfer_agent/_compact' &> /dev/null

  • querying/caching views of the asynctransfer database in the global CouchDB:

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/FilesByWorkflow?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/ftscp_all?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/sites?stale=update_after' &> /dev/null

*/5 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/JobsIdsStatesByWorkflow?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/get_acquired?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/DBSPublisher/_view/publish?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/LFNSiteByLastUpdate?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/MonitorStartedEnded/_view/startedSizeByTime?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/MonitorStartedEnded/_view/endedSizeByTime?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/MonitorFilesCount/_view/filesCountByUser?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/MonitorFilesCount/_view/filesCountByTask?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/MonitorFilesCount/_view/filesCountByDestSource?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/monitor/_view/publicationStateSizeByTime?stale=update_after' &> /dev/null

  • querying/caching views of the asynctransfer_config database (local CouchDB):

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetAnalyticsConfig' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetStatConfig' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetTransferConfig' &> /dev/null
 
Changed:
<
<
Modify the url from https://fts-fzk.gridka.de:8443/glite-data-transfer-fts/services/FileTransfer to https://fts3-pilot.cern.ch:8446
>
>
  • querying/caching views of the asynctransfer_agent database (local CouchDB):

*/5 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_agent/_design/Agent/_view/existWorkers' &> /dev/null
 

Deployment of AsyncStageOut for CRAB3 developers

Line: 467 to 532
 #!/bin/sh
3) There is a configuration file shared by the Monitor component and PhEDEx, /data/srv/asyncstageout/current/config/asyncstageout/monitor.conf. In this file make sure the service parameter points to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8443, https://fts3.cern.ch:8443 or https://fts3-pilot.cern.ch:8443). For development ASO instances one should use https://fts3-pilot.cern.ch:8443:
 
service = https://fts3-pilot.cern.ch:8443
Line: 491 to 556
 {"couchdb":"Welcome","uuid":"3bf548a5f518781332037d68da62ff28","version":"1.6.1","vendor":{"version":"1.6.1","name":"The Apache Software Foundation"}}
Changed:
<
<
1) The local CouchDB installed in the VM is protected by a firewall; it cannot be accessed even from another machine at CERN. To be able to access the local CouchDB from another machine at CERN, one needs to stop the iptables:
>
>
1) The CouchDB installed in the VM is protected by a firewall; it cannot be accessed even from another machine at CERN. To be able to access this CouchDB from another machine at CERN, one needs to stop the iptables:
 
sudo /etc/init.d/iptables stop # To start the iptables again: sudo /etc/init.d/iptables start; To check the status: sudo /etc/init.d/iptables status
Changed:
<
<
2) To access the local CouchDB installed in the VM from outside CERN, create an ssh tunnel between your machine and lxplus:
>
>
2) To access the CouchDB installed in the VM from outside CERN, create an ssh tunnel between your machine and lxplus:
 
[mylaptop]$ ssh -D 1111 <username>@lxplus.cern.ch
Line: 505 to 570
  and configure your browser (Preferences - Network) to access the internet with SOCKS Proxy Server = localhost and Port = 1111.
Changed:
<
<
3) For each of the 8 documents "T1_*_*" in the asynctransfer_config database in the local CouchDB instance, change the FTS3 server URL to point to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8446, https://fts3.cern.ch:8446 or https://fts3-pilot.cern.ch:8446; for the production ASO instance use https://lcgfts3.gridpp.rl.ac.uk:8446; for development ASO instances use https://fts3-pilot.cern.ch:8446):
>
>
3) For each of the 8 documents "T1_*_*" in the asynctransfer_config database, change the FTS3 server URL to point to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8446, https://fts3.cern.ch:8446 or https://fts3-pilot.cern.ch:8446). For development ASO instances use https://fts3-pilot.cern.ch:8446:
 
Changed:
<
<
>
>
 
  • Login to CouchDB.
Changed:
<
<
>
>
 
  • Open each of the (8) documents "T1_*_*" and change the FTS server in the "url" key (remember to save each document after editing it).
  • Make sure the changes have propagated to the getRunningFTSserver view (ASO uses this view when selecting the FTS3 server to which should submit the transfer).
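The same URL edit can also be scripted instead of done in the futon web interface. A sketch of the JSON rewrite step (the input document below is illustrative, and the _rev value is hypothetical; CouchDB requires the current _rev on update, which survives here because the whole document is round-tripped):

```shell
# Sketch: rewrite the "url" field of a site document's JSON. On the host
# one would pipe `curl -s $DB/T1_UK_RAL` through this and PUT the result
# back to the same document URL.
set_fts_url() {
    sed 's#"url": *"[^"]*"#"url": "https://fts3-pilot.cern.ch:8446"#'
}

# Illustrative document; the _rev is a made-up example value.
set_fts_url <<'EOF'
{"_id": "T1_UK_RAL", "_rev": "1-abc", "url": "https://fts-fzk.gridka.de:8443/glite-data-transfer-fts/services/FileTransfer"}
EOF
```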
Changed:
<
<

Cron jobs

>
>

Cron jobs for CouchDB

 
Changed:
<
<
You should already have the following three cron jobs for compacting the CouchDB databases:
>
>
You should already have the following cron jobs for compacting the CouchDB databases:
 
Changed:
<
<
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_compact' > /dev/null
>
>
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_compact' &> /dev/null
 
Changed:
<
<
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer_stat/_compact' > /dev/null
>
>
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_stat/_compact' &> /dev/null
 
Changed:
<
<
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort//asynctransfer_agent/_compact' > /dev/null
>
>
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984//asynctransfer_agent/_compact' &> /dev/null
 
Changed:
<
<
Until this ticket https://github.com/dmwm/AsyncStageout/issues/4068 is fixed, one needs to update the crontab by hand to create cron jobs for querying the views every X minutes.
>
>
Until this ticket https://github.com/dmwm/AsyncStageout/issues/4068 is solved, one needs to update the crontab by hand to create cron jobs for querying/caching the views every X minutes. To edit the crontab do:
 
crontab -e
Changed:
<
<
Add the following cron jobs for caching the CouchDB views:
>
>
Add the following cron jobs for querying/caching views of the asynctransfer database:
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/AsyncTransfer/_view/FilesByWorkflow' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/JobsIdsStatesByWorkflow?stale=update_after' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/LFNSiteByLastUpdate?stale=update_after' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/PublicationStateByWorkflow?stale=update_after' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/TransfersByFailuresReasons?stale=update_after' &> /dev/null

 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/AsyncTransfer/_view/ftscp_all' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/UserByStartTime?stale=update_after' &> /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/AsyncTransfer/_view/sites' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/forKill?stale=update_after' &> /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/AsyncTransfer/_view/JobsStatesByWorkflow' &> /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/forResub?stale=update_after' &> /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/AsyncTransfer/_view/get_acquired' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/ftscp_all?stale=update_after' &> /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/AsyncTransfer/_view/LFNSiteByLastUpdate' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/getFilesToRetry?stale=update_after' &> /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/DBSPublisher/_view/publish' &> /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/get_acquired?stale=update_after' &> /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/monitor/_view/filesCountByUser' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/sites?stale=update_after' &> /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/monitor/_view/filesCountByTask' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/DBSPublisher/_view/PublicationFailedByWorkflow?stale=update_after' &> /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/monitor/_view/filesCountByDestSource' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/DBSPublisher/_view/cache_area?stale=update_after' &> /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/monitor/_view/FailedAttachmentsByDocId' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/DBSPublisher/_view/last_publication?stale=update_after' &> /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/monitor/_view/DoneAttachmentsByDocId' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/DBSPublisher/_view/publish?stale=update_after' &> /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/monitor/_view/publicationStateSizeByTime' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/monitor/_view/publicationStateSizeByTime?stale=update_after' &> /dev/null

Note: in (pre-)production the global asynctransfer database is installed in the cmsweb CouchDB. This means that the protocol is HTTPS and therefore the curl query looks like curl --capath /path/to/CA/certs/dir --cacert /path/to/proxy --cert /path/to/proxy --key /path/to/proxy -H 'Content-Type: application/json' -X GET 'https://...'.
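The note above can be sketched as a small helper that picks the curl options depending on whether the view URL is the local HTTP CouchDB or the HTTPS one behind cmsweb. The PROXY and CAPATH paths are assumptions; adjust them to your installation.

```shell
#!/bin/sh
# Pick the extra curl options for a CouchDB view query: none for the local
# HTTP instance, proxy/CA options for HTTPS endpoints such as cmsweb.
# PROXY and CAPATH are assumed paths -- adjust to your installation.
PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
CAPATH=/etc/grid-security/certificates

view_query_opts () {
    case $1 in
        https://*) echo "--capath $CAPATH --cacert $PROXY --cert $PROXY --key $PROXY" ;;
        *)         echo "" ;;
    esac
}

# Show the option string generated for a cmsweb-hosted view
view_query_opts 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/sites'
```

The same cron line then works for both deployments by prefixing the query with `$(view_query_opts "$URL")`.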

 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/monitor/_view/startedSizeByTime' > /dev/null
>
>
Add the following cron jobs for querying/caching views of the asynctransfer_config database:
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/monitor/_view/endedSizeByTime' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetAnalyticsConfig' &> /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer_config/_design/asynctransfer_config/_view/GetAnalyticsConfig' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetStatConfig' &> /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer_config/_design/asynctransfer_config/_view/GetStatConfig' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetTransferConfig' &> /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer_config/_design/asynctransfer_config/_view/GetTransferConfig' > /dev/null
>
>
Add the following cron jobs for querying/caching views of the asynctransfer_agent database:
 
Changed:
<
<
*/5 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer_agent/_design/Agent/_view/existWorkers' > /dev/null
>
>
*/5 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_agent/_design/Agent/_view/existWorkers' &> /dev/null
 

And if you have something like

Line: 585 to 662
  delete that.
Added:
>
>
Note: LocalCouchHost can simply be 127.0.0.1 if the cron jobs run on the same host where CouchDB is installed.
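Since all the crontab entries above share one shape, they can be generated instead of typed by hand. A sketch for the asynctransfer_config views; the user, password and host values are placeholders, with 127.0.0.1 used as per the note above.

```shell
#!/bin/sh
# Generate the asynctransfer_config view-caching cron entries listed above.
# COUCH_USER/COUCH_PASS are placeholders; 127.0.0.1 assumes cron runs on the
# CouchDB host itself.
COUCH_USER=LocalCouchUserName
COUCH_PASS=LocalCouchPass
COUCH_HOST=127.0.0.1

BASE="http://${COUCH_USER}:${COUCH_PASS}@${COUCH_HOST}:5984/asynctransfer_config/_design/asynctransfer_config/_view"

# One entry per view, every 10 minutes, output discarded
for view in GetAnalyticsConfig GetStatConfig GetTransferConfig; do
    printf "*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET '%s/%s' &> /dev/null\n" "$BASE" "$view"
done
```

Piping the output through `crontab -` (after appending the existing crontab) installs the entries in one go.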
 

Start/stop AsyncStageOut

CouchDB must be started first; otherwise ASO will fail to start because it cannot connect to CouchDB.
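The required start order can be enforced with a small retry helper that probes CouchDB before starting ASO. The helper itself is an illustration, not part of the manage script; the probe and manage paths are the ones used throughout this page.

```shell
#!/bin/sh
# Retry a probe command until it succeeds (or give up after N tries), so that
# start-asyncstageout only runs once CouchDB answers. Illustration only; not
# part of the ASO manage script.
wait_for () {
    probe=$1
    tries=${2:-30}
    i=0
    while [ "$i" -lt "$tries" ]; do
        if eval "$probe" >/dev/null 2>&1; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# Intended use on the ASO host:
# ./config/asyncstageout/manage start-services
# wait_for 'curl -sf http://127.0.0.1:5984/' && ./config/asyncstageout/manage start-asyncstageout
```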

Revision 632015-09-17 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 269 to 269
 mkdir /data/admin/asyncstageout /data/srv/asyncstageout
Changed:
<
<
2) Get the DMWM deployment package from github (https://github.com/dmwm/deployment). See https://github.com/dmwm/deployment/releases for the available releases.

Note that you don't need to get the latest release. HG1503d is the latest release that uses Couch 1.5. Newer releases use Couch 1.6 and will fail in the sw deployment step with error Couldn't find package external+couchdb15.

>
>
2) Get the DMWM deployment package from github (https://github.com/dmwm/deployment). See https://github.com/dmwm/deployment/releases for the available releases. Note that you don't need to get the latest release.
 
cd /data/admin/asyncstageout
Changed:
<
<
wget -O cfg.zip --no-check-certificate https://github.com/dmwm/deployment/archive/HG1503d.zip
>
>
wget -O cfg.zip --no-check-certificate https://github.com/dmwm/deployment/archive/HG1509i.zip
unzip cfg.zip; rm cfg.zip
mv deployment-* Deployment
cd Deployment
Line: 284 to 282
 3) Set auxiliary variables ASOTAG, REPO and ARCH. The only architecture -at this point- for which AsyncStageOut RPMs are build is slc6_amd64_gcc481. You don't need to change SCRAM_ARCH to match this architecture; the Deploy script will do that for you. The AsyncStageOut releases can be found in https://github.com/dmwm/AsyncStageout/releases and should be taken from the CMS repository comp.pre.riahi.
Changed:
<
<
ASOTAG=1.0.3pre8
>
>
ASOTAG=1.0.3pre14
REPO=comp.pre.riahi
ARCH=slc6_amd64_gcc481
Line: 296 to 294
 
Changed:
<
<
INFO: 20150615131021: starting deployment of: asyncstageout/offsite
INFO: installation log can be found in /data/srv/asyncstageout/.deploy/20150615-131021-15087-prep.log
>
>
INFO: 20150917103852: starting deployment of: asyncstageout/offsite
INFO: installation log can be found in /data/srv/asyncstageout/.deploy/20150917-103852-4382-prep.log
INFO: deploying backend - variant: default, version: default
INFO: deploying wmcore-auth - variant: default, version: default
INFO: deploying couchdb - variant: default, version: default
Line: 310 to 308
 
Changed:
<
<
INFO: 20150615131031: starting deployment of: asyncstageout/offsite
INFO: installation log can be found in /data/srv/asyncstageout/.deploy/20150615-131031-15163-sw.log
>
>
INFO: 20150917103903: starting deployment of: asyncstageout/offsite
INFO: installation log can be found in /data/srv/asyncstageout/.deploy/20150917-103903-4464-sw.log
 INFO: deploying backend - variant: default, version: default
Changed:
<
<
INFO: bootstrapping comp.pre.riahi software area in /data/srv/asyncstageout/v1.0.3pre8/sw.pre.riahi
>
>
INFO: bootstrapping comp.pre.riahi software area in /data/srv/asyncstageout/v1.0.3pre14/sw.pre.riahi
INFO: bootstrap successful
INFO: deploying wmcore-auth - variant: default, version: default
INFO: deploying couchdb - variant: default, version: default
Line: 326 to 324
 
Changed:
<
<
INFO: 20150615131533: starting deployment of: asyncstageout/offsite
INFO: installation log can be found in /data/srv/asyncstageout/.deploy/20150615-131533-16570-post.log
>
>
INFO: 20150917104616: starting deployment of: asyncstageout/offsite
INFO: installation log can be found in /data/srv/asyncstageout/.deploy/20150917-104616-5911-post.log
INFO: Updating /data/srv/asyncstageout/v1.0.3pre14/apps to apps.sw.pre.riahi
INFO: Updating /data/srv/asyncstageout/current to v1.0.3pre14
INFO: deploying backend - variant: default, version: default
INFO: deploying wmcore-auth - variant: default, version: default
INFO: deploying couchdb - variant: default, version: default
Line: 421 to 421
 Starting Services... starting couch... CouchDB has not been initialised... running pre initialisation
Changed:
<
<
Initialising CouchDB on :5984
>
>
Initialising CouchDB on :5984
 Apache CouchDB has started, time to relax. CouchDB has not been initialised... running post initialisation
Line: 488 to 488
 
Changed:
<
<
{"couchdb":"Welcome","version":"1.1.1"}
>
>
{"couchdb":"Welcome","uuid":"3bf548a5f518781332037d68da62ff28","version":"1.6.1","vendor":{"version":"1.6.1","name":"The Apache Software Foundation"}}
 

1) The local CouchDB installed in the VM is protected by a firewall. One can not access it not even from another machine at CERN. To be able to access the local CouchDB from another machine at CERN, one needs to stop the iptables:

Revision 622015-09-16 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 18 to 18
 
Complete: 5 Go to SWGuideCrab
Deleted:
<
<
Read the generic deployment instructions.
 

Deployment of AsyncStageOut for CRAB3 operators

Line: 101 to 99
 cd Deployment
Changed:
<
<
Perform the deployment of the appropriate AsyncStageOut release tag from the corresponding CMS repository (check the AsyncStageOutManagement page or contact Hassen):
>
>
Perform the deployment of the appropriate AsyncStageOut release tag from the corresponding CMS repository (contact Hassen Riahi in case of doubt):
 
ASOTAG=1.0.3pre1
REPO=comp.pre.riahi
Line: 520 to 518
 You should already have the following three cron jobs for compacting the CouchDB databases:
Changed:
<
<
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST ' http://LocalCouchUserName:LocalCouchPass@LacalCouchUrl:LocalPort/asynctransfer/_compact' > /dev/null
>
>
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_compact' > /dev/null
 
Changed:
<
<
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST ' http://LocalCouchUserName:LocalCouchPass@LacalCouchUrl:LocalPort/asynctransfer_stat/_compact' > /dev/null
>
>
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer_stat/_compact' > /dev/null
 
Changed:
<
<
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST ' http://LocalCouchUserName:LocalCouchPass@LacalCouchUrl:LocalPort//asynctransfer_agent/_compact' > /dev/null
>
>
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer_agent/_compact' > /dev/null
 

Until this ticket https://github.com/dmwm/AsyncStageout/issues/4068 is fixed, one needs to update the crontab by hand to create cron jobs for querying the views every X minutes.

Line: 544 to 542
  */10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/AsyncTransfer/_view/JobsStatesByWorkflow' &> /dev/null
Deleted:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/DBSPublisher/_view/publish' &> /dev/null
 */10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/AsyncTransfer/_view/get_acquired' > /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/AsyncTransfer/_view/LFNSiteByLastUpdate' > /dev/null

Added:
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/DBSPublisher/_view/publish' &> /dev/null
 */10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/monitor/_view/filesCountByUser' > /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/monitor/_view/filesCountByTask' > /dev/null

Line: 853 to 851
 

ASO on production (cmsweb)

Changed:
<
<
Machine Patch Description Date
vocms031 Version v1.0.3pre8 Full reinstall 2015 Jun 3
>
>
Machine ASO tag Description Date
vocms031 1.0.3pre8 from comp.pre.riahi release notes compatible with HG1503d Full reinstall. Patched with 1.0.3pre14 TransferWorker.py, PublisherWorker.py and ReporterWorker.py. 2015 Jun 3
 

ASO on pre-production (cmsweb-testbed)

Changed:
<
<
Machine Patch Description Date
vocms021 Version v1.0.3pre14 Full reinstall 2015 Aug 28
>
>
Machine ASO tag Description Date
vocms021 1.0.3pre14 from comp.pre.riahi release notes compatible with HG1509i Full reinstall 2015 Aug 28
 
META FILEATTACHMENT attachment="ProxyRenew.sh" attr="" comment="" date="1396593242" name="ProxyRenew.sh" path="ProxyRenew.sh" size="8920" user="jbalcas" version="1"
META TOPICMOVED by="atanasi" date="1412700584" from="CMS.ASODeployment" to="CMSPublic.AsoDeployment"

Revision 612015-09-15 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 230 to 230
 

Pre-configuration

Changed:
<
<
Create a secrets file (the default location is ~/Async.secrets) with the following content:
>
>
The AsyncStageOut requires a couple of username/password settings for databases. These are provided via a simply formatted (parameter-key=parameter-value) secrets file, expected by default to be in the home area of the account running the agent. Thus, create a secrets file (the default location/name is $HOME/Async.secrets) with the following definitions for the supported parameters:
 
COUCH_USER=<a-couchdb-username>  # Choose a username for the ASO CouchDB.
Line: 243 to 243
 COUCH_KEY_FILE=/data/certs/hostkey.pem
Added:
>
>
Notes:
  • The CouchDB password parameter is required. The other parameters will default to somewhat sensible values if not provided.
  • CouchDB requires an IP address; it has problems with hostnames.

The file contains sensitive data and must be protected with the appropriate permissions:

chmod 600 $HOME/Async.secrets
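Since the file is plain key=value lines, a single parameter can be read back with a one-liner, e.g. when debugging. This is only the file format; the actual parsing inside the ASO code may differ.

```shell
#!/bin/sh
# Read one parameter from the key=value secrets file, dropping any trailing
# "# comment". Illustrates the file format only; ASO's own parsing may differ.
SECRETS=${ASYNC_SECRETS_LOCATION:-$HOME/Async.secrets}

secret () {
    # print the value for key $1, stripping comments and trailing whitespace
    sed -n "s/^$1=\([^#]*\).*/\1/p" "$SECRETS" | sed 's/[[:space:]]*$//'
}

# Example: secret COUCH_PORT
```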
You can actually put this file in any directory and/or give it any name you want, but then you have to set the environment variable ASYNC_SECRETS_LOCATION to point to the file:
Line: 391 to 401
  No patches necessary.
Changed:
<
<

Initialisation

>
>

Initialization

 
cd /data/srv/asyncstageout/current
Changed:
<
<
The next command will copy some template configuration files into the actual directories where they should be.
>
>
The next command creates some directories and copies some template configuration files into the actual directories where they should be. This step is sometimes called "activation of the system". A hidden file ./install/asyncstageout/.using is created to signal that this activation step has been done. If this file exists already, the activation does nothing.
 
./config/asyncstageout/manage activate-asyncstageout
Changed:
<
<
The next command will initialize (and actually start) CouchDB.
>
>
The next command initializes and starts CouchDB. A hidden file ./install/couchdb/.init is created to signal that this initialization step has been done. If this file exists already, the initialization part is skipped and only the starting part is executed. The initialization includes the generation of the appropriate databases.
 
./config/asyncstageout/manage start-services
Line: 418 to 428
 CouchDB has not been initialised... running post initialisation
Changed:
<
<
The next command will create a couple of configuration files (e.g. /data/srv/asyncstageout/current/config/asyncstageout/config.py) with parameters for all AsyncStageOut components. Most parameters are read from the Async.secrets file.
>
>
The next command generates a couple of configuration files (e.g. /data/srv/asyncstageout/current/config/asyncstageout/config.py) with parameters for all AsyncStageOut components. Most parameters are read from the Async.secrets file. A hidden file ./install/asyncstageout/.init is created to signal that this initialization step has been done. If this file exists already, the initialization does nothing.
 
./config/asyncstageout/manage init-asyncstageout
Line: 451 to 461
 sed --in-place "s|\.opsProxy = .*|\.opsProxy = '/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy'|" config/asyncstageout/config.py
Added:
>
>
Have a look at the configuration file and change other parameter values if you consider it appropriate.
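What the sed one-liners above do can be demonstrated on a throwaway file instead of the real config.py. The attribute names below are illustrative; each expression rewrites the whole right-hand side of an assignment, keyed on the attribute name.

```shell
#!/bin/sh
# Demonstrate the sed substitutions on a throwaway copy instead of the real
# config.py. The config.* attribute names here are illustrative examples.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
config.General.opsProxy = '/path/to/ops/proxy'
config.AsyncTransfer.log_level = 20
EOF

# Replace the whole right-hand side of each assignment, keyed on the attribute
sed --in-place "s|\.opsProxy = .*|\.opsProxy = '/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy'|" "$cfg"
sed --in-place "s|\.log_level = .*|\.log_level = 10|" "$cfg"
cat "$cfg"
```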

 2) Create a file /data/srv/tmp.sh with just the following line:
Line: 495 to 507
  and configure your browser (Preferences - Network) to access the internet with SOCKS Proxy Server = localhost and Port = 1111.
Changed:
<
<
3) For each of the 8 documents "T1_*_*" in the asynctransfer_config database of the local CouchDB, change the FTS3 server URL to point to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8446, https://fts3.cern.ch:8446 or https://fts3-pilot.cern.ch:8446). For the production ASO instance use https://lcgfts3.gridpp.rl.ac.uk:8446. For development ASO instances use https://fts3-pilot.cern.ch:8446.
>
>
3) For each of the 8 documents "T1_*_*" in the asynctransfer_config database in the local CouchDB instance, change the FTS3 server URL to point to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8446, https://fts3.cern.ch:8446 or https://fts3-pilot.cern.ch:8446; for the production ASO instance use https://lcgfts3.gridpp.rl.ac.uk:8446; for development ASO instances use https://fts3-pilot.cern.ch:8446):
 
Line: 663 to 675
 
Changed:
<
<
bla
>
>
Shutting down services... stopping couch... Apache CouchDB has been shutdown.
 

Start/stop a single AsyncStageOut component

Line: 758 to 771
 ./config/asyncstageout/manage clean-all
Added:
>
>

manage script commands

  • activate-asyncstageout : activate the AsyncStageOut
  • status : print the status of the services and the AsyncStageOut
  • start-couch : start the CouchDB server
  • stop-couch : stop the CouchDB server
  • start-services : same as start-couch
  • stop-services : same as stop-couch
  • start-asyncstageout : start the AsyncStageOut
  • stop-asyncstageout : stop the AsyncStageOut
  • clean-couch : wipe out the CouchDB databases and couchapps (removes database completely)
  • clean-asyncstageout : remove all agents configuration and installation directories (non-recoverable)
  • clean-all : clean the AsyncStageOut and CouchDB (wipes everything, non-recoverable)
  • execute-asyncstageout <command> <args> : execute the asyncstageout/bin command with the arguments provided
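A thin wrapper can validate a subcommand against the list above before invoking manage, guarding against typos. The wrapper is a convenience illustration, not part of the ASO distribution.

```shell
#!/bin/sh
# Validate a manage subcommand against the known list before running it.
# Convenience illustration only; not part of the ASO distribution.
MANAGE=./config/asyncstageout/manage
KNOWN="activate-asyncstageout status start-couch stop-couch start-services \
stop-services start-asyncstageout stop-asyncstageout clean-couch \
clean-asyncstageout clean-all execute-asyncstageout"

known_cmd () {
    for c in $KNOWN; do
        [ "$c" = "$1" ] && return 0
    done
    return 1
}

# Intended use: known_cmd "$1" && $MANAGE "$@"
```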
 

AsyncStageOut log files

Log files to watch for errors and to check and search in case of problems:

Line: 817 to 845
 ./config/asyncstageout/manage execute-asyncstageout retry-publish -t [-i docID comma separated list]
Added:
>
>

ASO update/upgrade

You can move databases from an old installation to a new one by copying the database files located in the directory install/couchdb/database to the new installation path.

Note: If CouchDB is also upgraded during the ASO upgrade, the database files of the previous CouchDB version may be incompatible with the new CouchDB version.
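The database move described above amounts to the following, shown here with temporary directories standing in for the real old/new installation paths. CouchDB should be stopped on both installations while the files are copied.

```shell
#!/bin/sh
# Copy CouchDB database files from an old ASO installation into a new one.
# Temporary directories stand in for the real installation paths; stop
# CouchDB on both sides before copying.
OLD=$(mktemp -d)   # stands in for e.g. /data/srv/asyncstageout/v1.0.3pre8
NEW=$(mktemp -d)   # stands in for e.g. /data/srv/asyncstageout/v1.0.3pre14

mkdir -p "$OLD/install/couchdb/database" "$NEW/install/couchdb/database"
: > "$OLD/install/couchdb/database/asynctransfer.couch"   # dummy database file

# -p preserves timestamps and permissions of the database files
cp -p "$OLD"/install/couchdb/database/*.couch "$NEW/install/couchdb/database/"
```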

 

ASO on production (cmsweb)

Machine Patch Description Date
Line: 825 to 859
 

ASO on pre-production (cmsweb-testbed)

Machine Patch Description Date
Changed:
<
<
vocms021 Version v1.0.3pre8 Full reinstall 2015 Apr 20
>
>
vocms021 Version v1.0.3pre14 Full reinstall 2015 Aug 28
 
META FILEATTACHMENT attachment="ProxyRenew.sh" attr="" comment="" date="1396593242" name="ProxyRenew.sh" path="ProxyRenew.sh" size="8920" user="jbalcas" version="1"
META TOPICMOVED by="atanasi" date="1412700584" from="CMS.ASODeployment" to="CMSPublic.AsoDeployment"

Revision 602015-07-28 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 763 to 763
 Log files to watch for errors and to check and search in case of problems:

AsyncStageOut component logs:

Changed:
<
<
>
>
./install/asyncstageout/AsyncTransfer/ComponentLog
./install/asyncstageout/DBSPublisher/ComponentLog
./install/asyncstageout/Reporter/ComponentLog
Line: 774 to 774
 

CouchDB log:

Changed:
<
<
>
>
 ./install/couchdb/logs/couch.log
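All of the logs listed above can be scanned for errors in one pass. The glob matches the layout shown; run it from /data/srv/asyncstageout/current.

```shell
#!/bin/sh
# Scan every ASO component log plus the CouchDB log for errors in one pass.
# Run from /data/srv/asyncstageout/current; -l prints only matching log names.
grep -l -i 'error' \
    ./install/asyncstageout/*/ComponentLog \
    ./install/couchdb/logs/couch.log 2>/dev/null || echo "no matching logs"
```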
Line: 782 to 782
  ASO operations must be done as the service user (crab3 for production and pre-production):
Changed:
<
<
>
>
 sudo -u crab3 -i bash
Added:
>
>
Export the operator's proxy:

export X509_USER_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
 Source the ASO environment (the wild cards are for the arch and ASO version: for example, slc6_amd64_gcc481 and 1.0.3pre8 respectively):
Changed:
<
<
>
>
cd /data/srv/asyncstageout/current
source sw.pre.riahi/*/cms/asyncstageout/*/etc/profile.d/init.sh
Line: 797 to 803
  Kill all the transfers in CouchDB for a given task:
Changed:
<
<
>
>
 ./config/asyncstageout/manage execute-asyncstageout kill-transfer -t [-i docID comma separated list]
Line: 807 to 813
  Retry the publication for a given task:
Changed:
<
<
>
>
 ./config/asyncstageout/manage execute-asyncstageout retry-publish -t [-i docID comma separated list]

Revision 592015-07-28 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 798 to 798
 Kill all the transfers in CouchDB for a given task:
Deleted:
<
<
cd /data/srv/asyncstageout/current
 ./config/asyncstageout/manage execute-asyncstageout kill-transfer -t [-i docID comma separated list]
Line: 809 to 808
 Retry the publication for a given task:
Changed:
<
<
cd /data/srv/asyncstageout/current
./config/asyncstageout/manage execute-asyncstageout retry-publication -t [-i docID comma separated list]
>
>
./config/asyncstageout/manage execute-asyncstageout retry-publish -t [-i docID comma separated list]
 

ASO on production (cmsweb)

Revision 582015-07-16 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 463 to 463
 service = https://fts3-pilot.cern.ch:8443
Changed:
<
<
4) Make sure the directory /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/Monitor/work was created. If not, create it.
>
>
4) Make sure the directory /data/srv/asyncstageout/current/install/asyncstageout/Monitor/work was created. If not, create it.
 
mkdir -p /data/srv/asyncstageout/current/install/asyncstageout/Monitor/work

Revision 572015-06-18 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 448 to 448
 sed --in-place "s|\.serverDN = .*|\.serverDN = '$serverDN'|" config/asyncstageout/config.py sed --in-place "s|\.log_level = .*|\.log_level = 10|" config/asyncstageout/config.py sed --in-place "s|\.UISetupScript = .*|\.UISetupScript = '/data/srv/tmp.sh'|" config/asyncstageout/config.py
Changed:
<
<
sed --in-place "s|\.opsProxy = '/path/to/ops/proxy'|\.opsProxy = '/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy'|" config/asyncstageout/config.py
>
>
sed --in-place "s|\.opsProxy = .*|\.opsProxy = '/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy'|" config/asyncstageout/config.py
 

2) Create a file /data/srv/tmp.sh with just the following line:

Revision 562015-06-18 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 149 to 149
 

Operations

Changed:
<
<

OpsProxy Renewal or Creation

>
>

OpsProxy Renewal or Creation

  First connect to machine and create a seed for proxy delegation:
Line: 340 to 340
 Some of the CMS services use Grid service certificates for interactions with CMSWEB, but the majority, and in particular the production and pre-production CRAB services, use the operator's proxy. The reasons are both for convenience and security. For private installations you are the operator, so you should use your own user proxy:
Deleted:
<
<
voms-proxy-init --voms cms --valid 168:00
mkdir /data/srv/asyncstageout/state/asyncstageout/creds
chmod 700 /data/srv/asyncstageout/state/asyncstageout
Added:
>
>

voms-proxy-init --voms cms --valid 192:00
 cp /tmp/x509up_u$UID /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
Added:
>
>
The proxy is created for 8 days (192 hours), because this is the maximum allowed duration of the VO CMS extension. Thus, the proxy has to be renewed at least every 7 days. You can do it manually (by executing the last two commands) or set up an automatic renewal procedure, as is done in production and pre-production. See OpsProxy Renewal or Creation.
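The renewal decision can be sketched as follows. In a real cron job the remaining lifetime would come from something like `voms-proxy-info --timeleft --file /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy`; the 24-hour safety margin below is an assumption, not a value from this page.

```shell
#!/bin/sh
# Decide whether the ops proxy needs renewal, given its remaining lifetime in
# seconds. The 24-hour threshold is an assumed safety margin.
needs_renewal () {
    timeleft=$1                 # seconds of validity left
    threshold=$((24 * 3600))    # renew when less than one day remains
    [ "$timeleft" -lt "$threshold" ]
}

# Intended use:
# needs_renewal "$(voms-proxy-info --timeleft --file $OPS_PROXY)" && renew_proxy
```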
Note: The operator proxy is also used by ASO to manage users' files in case the retrieval of the user's proxy has failed.

Note: Grid host certificates cannot be used as Grid service certificates if they are not registered in VO CMS and SiteDB.

Revision 552015-06-17 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 226 to 226
  See Deployment of CRAB REST Interface / Get and install a virtual machine.
Deleted:
<
<

Add the DN of the ASO host to the external REST configuration

Add the Grid host certificate DN of the ASO host to the external REST configuration in the "delegate-dn" field. Do this for the REST Interface instance you will use together with this ASO deployment. This is so that your ASO instance is allowed to retrieve users proxies from myproxy server. The external REST configuration file will look something like this (in my case I have my private TaskWorker installed in osmatanasi2.cern.ch and I will install ASO in osmatanasi1.cern.ch):

{
    "cmsweb-dev" : {
        "delegate-dn": [
            "/DC=ch/DC=cern/OU=computers/CN=osmatanasi2.cern.ch|/DC=ch/DC=cern/OU=computers/CN=osmatanasi1.cern.ch"
        ],


    }
}
 

AsyncStageOut installation and configuration

Pre-configuration

Create a secrets file (the default location is ~/Async.secrets) with the following content:

Changed:
<
<
>
>
 COUCH_USER=<a-couchdb-username> # Choose a username for the ASO CouchDB. COUCH_PASS=<a-couchdb-password> # Choose a password for the ASO CouchDB. COUCH_HOST=<IP-of-this-ASO-host> # You can read the host IP in https://openstack.cern.ch/dashboard/project/instances/. COUCH_PORT=5984
Deleted:
<
<
OPS_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
 UFC_SERVICE_URL=https://osmatanasi2.cern.ch/crabserver/dev/filemetadata # The URL of the crabserver instance from where ASO should get the file metadata.
Added:
>
>
OPS_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
 COUCH_CERT_FILE=/data/certs/hostcert.pem COUCH_KEY_FILE=/data/certs/hostkey.pem
Line: 265 to 249
 export ASYNC_SECRETS_LOCATION=/path/to/Async.secrets/file
Changed:
<
<
This secrets file will be used by the deployment scripts to set parameters in the configuration files.
>
>
This secrets file will be used by the deployment scripts to set parameters in the configuration files (e.g. /data/srv/asyncstageout/current/config/asyncstageout/config.py).
 

Installation

Line: 345 to 329
 

Authentication

Changed:
<
<
Copy the host certificate and key to the /data/certs/ directory and change the owner to yourself:
>
>
Certificate for interactions with CMSWEB
 
Changed:
<
<
#sudo mkdir /data/certs  # This should have been done already by the Deploy script when installing the VM.
sudo chown <username>:zh /data/certs
#sudo cp -p /etc/grid-security/host{cert,key}.pem /data/certs/  # This should have been done already by the Deploy script when installing the VM.
sudo chown <username>:zh /data/certs/*
>
>
Access to CMSWEB is restricted to CMS users and services by requesting authentication with certificates registered in VO CMS and SiteDB. The AsyncStageOut configuration file has a parameter (actually one parameter for each component) to point to a certificate that each component should use for interactions with CMSWEB. The parameter value is taken from the secrets file:

OPS_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
 
Changed:
<
<
Create a directory necessary for credentials:
>
>
Some of the CMS services use Grid service certificates for interactions with CMSWEB, but the majority, and in particular the production and pre-production CRAB services, use the operator's proxy. The reasons are both for convenience and security. For private installations you are the operator, so you should use your own user proxy:
 
Added:
>
>
voms-proxy-init --voms cms --valid 168:00
mkdir /data/srv/asyncstageout/state/asyncstageout/creds
chmod 700 /data/srv/asyncstageout/state/asyncstageout
Added:
>
>
cp /tmp/x509up_u$UID /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
 
Changed:
<
<
Operators proxy
>
>
Note: The operator proxy is also used by ASO to manage users' files in case the retrieval of the user's proxy has failed.
 
Changed:
<
<
Copy a valid proxy into /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy and export X509_USER_PROXY to point to that file.
>
>
Note: Grid host certificates cannot be used as Grid service certificates if they are not registered in VO CMS and SiteDB.
 
Changed:
<
<
voms-proxy-init --voms cms --valid 168:00
>
>
Certificate for interactions with myproxy server
 
Changed:
<
<
Enter GRID pass phrase:
...
...
Your proxy is valid until ...
>
>
CRAB services use the host certificate (and private key) for interacting with myproxy server. The AsyncStageOut configuration file has two parameters to point to a certificate and private key to use for interactions with myproxy server. (The ASO components that need to interact with myproxy server are AsyncTransfer and DBSPublisher.) The parameter values are taken from the secrets file:

COUCH_CERT_FILE=/data/certs/hostcert.pem
COUCH_KEY_FILE=/data/certs/hostkey.pem
 
Added:
>
>
So copy the host certificate and private key to the /data/certs/ directory and change the owner to yourself:
 
Changed:
<
<
cp /tmp/x509up_u$(id -u) /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy export X509_USER_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
>
>
#sudo mkdir /data/certs  # This should have been done already by the Deploy script when installing the VM.
sudo chown <username>:zh /data/certs
#sudo cp -p /etc/grid-security/host{cert,key}.pem /data/certs/  # This should have been done already by the Deploy script when installing the VM.
sudo chown <username>:zh /data/certs/host{cert,key}.pem

Add the Grid host certificate DN of the ASO host to the CRAB external REST configuration in the "delegate-dn" field. Do this for the REST Interface instance you will use together with this ASO deployment. This is so that this ASO instance is allowed to retrieve users' proxies from myproxy server. The external REST configuration file will look something like this (in my case I have my private TaskWorker installed in osmatanasi2.cern.ch and I am installing ASO in osmatanasi1.cern.ch):

{
    "cmsweb-dev" : {
        "delegate-dn": [
            "/DC=ch/DC=cern/OU=computers/CN=osmatanasi2.cern.ch|/DC=ch/DC=cern/OU=computers/CN=osmatanasi1.cern.ch"
        ],


    }
}
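Before restarting the REST Interface it can be handy to check whether the host DN is already listed. A sketch follows; the `REST_CONFIG` path is a hypothetical placeholder for wherever your external REST configuration file lives, and note that the subject formatting printed by openssl varies between versions.

```shell
# Check whether the ASO host's DN already appears in the delegate-dn field.
REST_CONFIG=/data/srv/rest/config.json   # hypothetical location of the external REST configuration

dn_delegated() {
    # $1 = config file, $2 = host DN; succeed if the DN occurs (as a fixed string) in the file
    grep -F -q -- "$2" "$1"
}

HOST_DN=$(openssl x509 -subject -noout -in /data/certs/hostcert.pem 2>/dev/null | sed 's/^subject= *//')
if [ -n "$HOST_DN" ] && dn_delegated "$REST_CONFIG" "$HOST_DN" 2>/dev/null; then
    echo "DN already delegated: $HOST_DN"
fi
```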
 

Patches

No patches necessary.

Changed:
<
<

Initialization

>
>

Initialisation

 
cd /data/srv/asyncstageout/current
Line: 442 to 443
sed --in-place "s|\.serverDN = .*|\.serverDN = '$serverDN'|" config/asyncstageout/config.py
sed --in-place "s|\.log_level = .*|\.log_level = 10|" config/asyncstageout/config.py
sed --in-place "s|\.UISetupScript = .*|\.UISetupScript = '/data/srv/tmp.sh'|" config/asyncstageout/config.py
Added:
>
>
sed --in-place "s|\.opsProxy = '/path/to/ops/proxy'|\.opsProxy = '/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy'|" config/asyncstageout/config.py
 

2) Create a file /data/srv/tmp.sh with just the following line:

Line: 450 to 452
 #!/bin/sh
Changed:
<
<
3) There is configuration file shared by the Monitor component and PhEDEx, /data/srv/asyncstageout/current/config/asyncstageout/monitor.conf. In this file make sure the service parameter points to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8443, https://fts3.cern.ch:8446 or https://fts3-pilot.cern.ch:8446). For development ASO instances one should use https://fts3-pilot.cern.ch:8446:
>
>
3) There is a configuration file shared by the Monitor component and PhEDEx, /data/srv/asyncstageout/current/config/asyncstageout/monitor.conf. In this file make sure the service parameter points to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8443, https://fts3.cern.ch:8443 or https://fts3-pilot.cern.ch:8443). For development ASO instances one should use https://fts3-pilot.cern.ch:8443:
 
Changed:
<
<
service = https://fts3-pilot.cern.ch:8446
>
>
service = https://fts3-pilot.cern.ch:8443
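To double-check which endpoint the file currently points at, the service line can be extracted with sed. A sketch; `parse_service` is a helper introduced here.

```shell
# Print the FTS3 endpoint configured in monitor.conf.
MONCONF=/data/srv/asyncstageout/current/config/asyncstageout/monitor.conf

parse_service() {
    # print the value of the "service" parameter from a monitor.conf-style file
    sed -n 's/^ *service *= *//p' "$1" 2>/dev/null
}

parse_service "$MONCONF"
```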
 
Changed:
<
<
4) Make sure the directories /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/Monitor[/work] were created. If not, create them.
>
>
4) Make sure the directory /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/Monitor/work was created. If not, create it.
 
Changed:
<
<
mkdir /data/srv/asyncstageout/current/install/asyncstageout/Monitor mkdir /data/srv/asyncstageout/current/install/asyncstageout/Monitor/work
>
>
mkdir -p /data/srv/asyncstageout/current/install/asyncstageout/Monitor/work
 

CouchDB configuration
Line: 475 to 476
 {"couchdb":"Welcome","version":"1.1.1"}
Changed:
<
<
1) CouchDB is protected by the VM firewall. You can not access CouchDB not even from another machine at CERN. To be able to access CouchDB from another machine, you need to stop the iptables:
>
>
1) The local CouchDB installed in the VM is protected by a firewall. One cannot access it, not even from another machine at CERN. To be able to access the local CouchDB from another machine at CERN, one needs to stop the iptables:
 
sudo /etc/init.d/iptables stop
# To start the iptables again: sudo /etc/init.d/iptables start
# To check the status: sudo /etc/init.d/iptables status
Changed:
<
<
Or create an ssh tunnel between the machine from where you want to access CouchDB and the VM:
>
>
2) To access the local CouchDB installed in the VM from outside CERN, create an ssh tunnel between your machine and lxplus:
 
Changed:
<
<
[mylaptop]$ ssh -D 1111 @.cern.ch
>
>
[mylaptop]$ ssh -D 1111 <username>@lxplus.cern.ch
 
Changed:
<
<
and configure the browser (Preferences - Network) to access the internet with SOCKS Proxy Server = localhost and Port = 1111.
>
>
and configure your browser (Preferences - Network) to access the internet with SOCKS Proxy Server = localhost and Port = 1111.
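The same SOCKS tunnel also works from the command line. The helper below only assembles the curl invocation (curl's --socks5-hostname option routes the request through the tunnel); the port 1111 and the hostname osmatanasi1.cern.ch are just the examples used above.

```shell
# Build the curl command that queries CouchDB through the SOCKS tunnel.
couch_via_tunnel() {
    # $1 = CouchDB host; print the curl command to run once the tunnel is up
    printf 'curl --socks5-hostname localhost:1111 http://%s:5984/\n' "$1"
}

# After "ssh -D 1111 <username>@lxplus.cern.ch" is running, execute the printed command:
couch_via_tunnel osmatanasi1.cern.ch
```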
 
Changed:
<
<
2) Change the FTS3 server URL for each of the 8 documents "T1_*_*" in the asynctransfer_config database to point to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8443, https://fts3.cern.ch:8446 or https://fts3-pilot.cern.ch:8446). For development ASO instances one should use https://fts3-pilot.cern.ch:8446.
>
>
3) For each of the 8 documents "T1_*_*" in the asynctransfer_config database of the local CouchDB, change the FTS3 server URL to point to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8446, https://fts3.cern.ch:8446 or https://fts3-pilot.cern.ch:8446). For the production ASO instance use https://lcgfts3.gridpp.rl.ac.uk:8446. For development ASO instances use https://fts3-pilot.cern.ch:8446.
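Editing the 8 documents by hand in Futon works, but it can also be scripted against the CouchDB HTTP API: fetch the document (the fetched JSON includes its current _rev, which CouchDB requires for the update), rewrite the FTS server field, and PUT it back. This is only a sketch: the field name `fts_server` and the credentials in `COUCH` are assumptions, so inspect a real document before scripting this.

```shell
# Update the FTS3 server URL in one asynctransfer_config document.
COUCH=http://LocalCouchUserName:LocalCouchPass@localhost:5984

rewrite_fts() {
    # rewrite the value of a (hypothetical) "fts_server" field in a JSON doc on stdin
    sed 's|"fts_server" *: *"[^"]*"|"fts_server":"'"$1"'"|'
}

update_doc() {
    # $1 = document id, $2 = new FTS3 URL; the kept _rev lets the PUT succeed
    curl -s "$COUCH/asynctransfer_config/$1" \
      | rewrite_fts "$2" \
      | curl -s -X PUT "$COUCH/asynctransfer_config/$1" -H 'Content-Type: application/json' -d @-
}

# Example (document id follows the "T1_*_*" pattern):
# update_doc T1_IT_CNAF https://fts3-pilot.cern.ch:8446
```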
 
Line: 499 to 500
 

Cron jobs

Deleted:
<
<
Until this ticket https://github.com/dmwm/AsyncStageout/issues/4068 is fixed, one needs to update the crontab by hand to create cron jobs for querying the views every X minutes.

crontab -e
 You should already have the following three cron jobs for compacting the CouchDB databases:
Line: 515 to 510
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer_agent/_compact' > /dev/null
Added:
>
>
Until this ticket https://github.com/dmwm/AsyncStageout/issues/4068 is fixed, one needs to update the crontab by hand to create cron jobs for querying the views every X minutes.

crontab -e
 Add the following cron jobs for caching the CouchDB views:
Line: 557 to 558
 */5 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer_agent/_design/Agent/_view/existWorkers' > /dev/null
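The view-caching entries are repetitive, so they can be generated from a list of view paths instead of being typed one by one. A sketch using the same credential placeholders as above; append the printed lines via crontab -e (or pipe the whole crontab through crontab -).

```shell
# Generate the */10 view-caching crontab entries from a list of view paths.
BASE='http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort'

cron_line() {
    # $1 = db/_design/doc/_view/name path; print one crontab entry for it
    printf "*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET '%s/%s' > /dev/null\n" "$BASE" "$1"
}

for v in asynctransfer/_design/AsyncTransfer/_view/FilesByWorkflow \
         asynctransfer/_design/AsyncTransfer/_view/ftscp_all \
         asynctransfer/_design/AsyncTransfer/_view/sites; do
    cron_line "$v"
done
```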
Added:
>
>
If your crontab also contains entries like the following, delete them:

@reboot /data/srv/asyncstageout/current/config/couchdb/manage sysboot
1 0 * * * /data/srv/asyncstageout/current/config/couchdb/manage compact wmstats 'I did read documentation'
1 6 * * * /data/srv/asyncstageout/current/config/couchdb/manage compact all_but_wmstats 'I did read documentation'
1 12 * * * /data/srv/asyncstageout/current/config/couchdb/manage compactviews wmstats WMStats 'I did read documentation'
1 18 * * * /data/srv/asyncstageout/current/config/couchdb/manage compactviews all_but_wmstats all 'I did read documentation'

 

Start/stop AsyncStageOut

CouchDB has to be started first, otherwise ASO will not start because it will fail to connect to CouchDB.

Line: 614 to 627
  started with pid 24459 2015-06-15 15:32:12: ASOMon[24464]: Reading config file /data/srv/asyncstageout/v1.0.3pre8/config/asyncstageout/monitor.conf
Changed:
<
<
2015-06-15 15:32:12: ASOMon[24464]: Using FTS service https://fts3-pilot.cern.ch:8446
>
>
2015-06-15 15:32:12: ASOMon[24464]: Using FTS service https://fts3-pilot.cern.ch:8443
 ASOMon: pid 24466 writing logfile to /data/srv/asyncstageout/v1.0.3pre8/install/asyncstageout/Monitor/aso-monitor.log
Line: 674 to 687
 ./config/asyncstageout/manage execute-asyncstageout wmcoreD --shutdown --component <component-name>
Added:
>
>

Use your private ASO in your CRAB jobs

There is a CRAB configuration parameter named Debug.ASOURL which specifies into which CouchDB instance CRAB should inject the transfer documents. If this parameter is not specified, the parameter backend-urls.ASOURL from the external REST configuration is used instead. For the production (pre-production) installation of AsyncStageOut, the documents should be injected into the central CouchDB deployed in CMSWEB (CMSWEB-testbed). For a private installation of AsyncStageOut, the documents should be injected into the local private CouchDB. So if you want to use your private ASO instance, you can either set in the CRAB configuration file

config.Debug.ASOURL = 'http://<couchdb-hostname>.cern.ch:5984/'

or set in the external REST configuration

{
    "cmsweb-dev": {
        ...
        "backend-urls" : {
            ...
            "ASOURL" : "http://<couchdb-hostname>.cern.ch:5984/"
        },
        ...
    }
}
 

Possible "glite-delegation-init: command not found" error

When running ASO, I got the following error message in ./install/asyncstageout/AsyncTransfer/ComponentLog:

Revision 54 2015-06-16 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 224 to 224
 

Get and install a virtual machine

Changed:
<
<
Follow the instructions in Deployment of CRAB REST Interface / Get and install a virtual machine.
>
>
See Deployment of CRAB REST Interface / Get and install a virtual machine.
 

Add the DN of the ASO host to the external REST configuration

Changed:
<
<
Add the Grid host certificate DN of the ASO host to the external REST configuration in the "delegate-dn" field. Do this for the REST Interface instance you will use together with this ASO deployment. This is so that your ASO instance is allowed to retrieve users proxies from myproxy server. The external REST configuration file will look something like this (in my case I have a private TaskWorker installed in osmatanasi2.cern.ch and I will install ASO in osmatanasi1.cern.ch):
>
>
Add the Grid host certificate DN of the ASO host to the external REST configuration in the "delegate-dn" field. Do this for the REST Interface instance you will use together with this ASO deployment. This is so that your ASO instance is allowed to retrieve users' proxies from myproxy server. The external REST configuration file will look something like this (in my case I have my private TaskWorker installed in osmatanasi2.cern.ch and I will install ASO in osmatanasi1.cern.ch):
 
{
Line: 244 to 244
 

AsyncStageOut installation and configuration

Deleted:
<
<

Preparation

Create the necessary directories where you will do the deployment:

sudo mkdir /data/admin
sudo mkdir /data/srv
sudo chown <username>:zh /data/admin /data/srv
mkdir /data/admin/asyncstageout
mkdir /data/srv/asyncstageout
 

Pre-configuration

Create a secrets file (the default location is ~/Async.secrets) with the following content:

Line: 281 to 269
 

Installation

Changed:
<
<
Get the DMWM deployment package from github (https://github.com/dmwm/deployment). See https://github.com/dmwm/deployment/releases for the available releases. Note that you don't need to get the latest HG tag. I took HG1503d because this is the latest tag that uses Couch 1.5, while newer tags use Couch 1.6 and will fail in the Deploy sw step with error Couldn't find package external+couchdb15.
>
>
1) Create the directories where you will do the deployment.

sudo mkdir /data/admin /data/srv
sudo chown <username>:zh /data/admin /data/srv
mkdir /data/admin/asyncstageout /data/srv/asyncstageout

2) Get the DMWM deployment package from github (https://github.com/dmwm/deployment). See https://github.com/dmwm/deployment/releases for the available releases.

Note that you don't need to get the latest release. HG1503d is the latest release that uses Couch 1.5. Newer releases use Couch 1.6 and will fail in the sw deployment step with error Couldn't find package external+couchdb15.

 
cd /data/admin/asyncstageout
Line: 291 to 289
 cd Deployment
Changed:
<
<
Set the ASOTAG, REPO and ARCH. The only architecture -at this point- for which AsyncStageOut RPMs are build is slc6_amd64_gcc481. You don't need to change SCRAM_ARCH to match this architecture; the Deploy script will do that for you. The AsyncStageOut releases can be found in https://github.com/dmwm/AsyncStageout/releases and should be taken from the CMS repository comp.pre.riahi.
>
>
3) Set auxiliary variables ASOTAG, REPO and ARCH. The only architecture -at this point- for which AsyncStageOut RPMs are built is slc6_amd64_gcc481. You don't need to change SCRAM_ARCH to match this architecture; the Deploy script will do that for you. The AsyncStageOut releases can be found in https://github.com/dmwm/AsyncStageout/releases and should be taken from the CMS repository comp.pre.riahi.
 
ASOTAG=1.0.3pre8
Line: 299 to 297
 ARCH=slc6_amd64_gcc481
Changed:
<
<
The deployment is separated in three steps: prep, sw and post.
>
>
4) The deployment is separated in three steps: prep, sw and post.
 
./Deploy -R asyncstageout@$ASOTAG -s prep -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
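Since the sw and post stages take the same arguments as the prep invocation above, the three stages can be run in one loop. A convenience sketch; it stops at the first stage that fails.

```shell
# Run the three deployment stages in order, aborting on the first failure.
# Assumes ASOTAG, REPO and ARCH are already set as shown above.
for step in prep sw post; do
    ./Deploy -R asyncstageout@$ASOTAG -s $step -A $ARCH -t v$ASOTAG \
             -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite || break
done
```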
Line: 367 to 365
  Copy a valid proxy into /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy and export X509_USER_PROXY to point to that file.
Deleted:
<
<
If voms-proxy-init is not available in the VM, install the package that provides it.

yum provides */voms-proxy-init

Loaded plugins: changelog, kernel-module, priorities, protectbase, security, tsflags, versionlock
147 packages excluded due to repository priority protections
0 packages excluded due to repository protections
EGI-trustanchors/filelists                                                                                                                            |  15 kB     00:00     
glite-SCAS/filelists                                                                                                                                  | 2.8 kB     00:00     
glite-SCAS_ext/filelists                                                                                                                              |  12 kB     00:00     
glite-SCAS_updates/filelists                                                                                                                          | 2.3 kB     00:00     
slc6-extras/filelists_db                                                                                                                              | 181 kB     00:00     
slc6-updates/filelists_db                                                                                                                             |  28 MB     00:00     
voms-clients-2.0.12-1.el6.x86_64 : Virtual Organization Membership Service Clients
Repo        : epel
Matched from:
Filename    : /usr/bin/voms-proxy-info

sudo yum install voms-clients-2.0.12-1.el6.x86_64

You may also need to copy the directories /etc/vomses and /etc/grid-security/vomsdir/cms from lxplus.

sudo scp -r <username>@lxplus.cern.ch:/etc/vomses /etc/vomses
sudo mkdir /etc/grid-security/vomsdir
sudo scp -r <username>@lxplus.cern.ch:/etc/grid-security/vomsdir/cms /etc/grid-security/vomsdir/cms
 
voms-proxy-init --voms cms --valid 168:00

Enter GRID pass phrase:
Changed:
<
<
Your identity: /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=atanasi/CN=710186/CN=Andres Jorge Tanasijczuk
Creating temporary proxy ....................................... Done
Contacting  lcg-voms.cern.ch:15002 [/DC=ch/DC=cern/OU=computers/CN=lcg-voms.cern.ch] "cms" Failed

Error: Error during SSL handshake:

Trying next server for cms.
Creating temporary proxy ........................................... Done
Contacting  voms.cern.ch:15002 [/DC=ch/DC=cern/OU=computers/CN=voms.cern.ch] "cms" Failed

Error: Error during SSL handshake:

Trying next server for cms.
Creating temporary proxy ........................................................................ Done
Contacting  lcg-voms2.cern.ch:15002 [/DC=ch/DC=cern/OU=computers/CN=lcg-voms2.cern.ch] "cms" Done
Creating proxy ............................... Done

Your proxy is valid until Sun Feb 22 00:53:58 2015

>
>
... ... Your proxy is valid until ...
 
Line: 436 to 385
  No patches necessary.
Changed:
<
<

Initialize CouchDB and ASO

Perform the initialization of CouchDB and ASO.

>
>

Initialization

 
cd /data/srv/asyncstageout/current
Line: 450 to 397
 ./config/asyncstageout/manage activate-asyncstageout
Changed:
<
<
The next command will start CouchDB.
>
>
The next command will initialize (and actually start) CouchDB.
 
Changed:
<
<
./config/asyncstageout/manage start-services # start-services is equivalent to start-couch
>
>
./config/asyncstageout/manage start-services
 
Line: 518 to 465
 
CouchDB configuration
Changed:
<
<
1) First of all check that CouchDB is up and that you can access it from your VM:
>
>
First of all check that CouchDB is up and that you can access it from your VM:
 
curl -X GET "http://$(hostname):5984/"
Line: 528 to 475
 {"couchdb":"Welcome","version":"1.1.1"}
Changed:
<
<
2) CouchDB is protected by the VM firewall. You can not access CouchDB not even from another machine at CERN. To be able to access CouchDB from another machine, you need to stop the iptables:
>
>
1) CouchDB is protected by the VM firewall. You cannot access CouchDB, not even from another machine at CERN. To be able to access CouchDB from another machine, you need to stop the iptables:
 
sudo /etc/init.d/iptables stop
# To start the iptables again: sudo /etc/init.d/iptables start
# To check the status: sudo /etc/init.d/iptables status
Line: 552 to 499
 

Cron jobs

Changed:
<
<

CouchDB views caching

Until this ticket https://github.com/dmwm/AsyncStageout/issues/4068 is fixed, one needs to update the crontab by hand to create cron jobs for querying the views every X minutes:

>
>
Until this ticket https://github.com/dmwm/AsyncStageout/issues/4068 is fixed, one needs to update the crontab by hand to create cron jobs for querying the views every X minutes.
 
crontab -e
Added:
>
>
You should already have the following three cron jobs for compacting the CouchDB databases:
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/AsyncTransfer/_view/FilesByWorkflow' > /dev/null
>
>
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_compact' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/AsyncTransfer/_view/ftscp_all' > /dev/null
>
>
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer_stat/_compact' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/AsyncTransfer/_view/sites' > /dev/null
>
>
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer_agent/_compact' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/AsyncTransfer/_view/JobsStatesByWorkflow' &> /dev/null
>
>
Add the following cron jobs for caching the CouchDB views:
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/DBSPublisher/_view/publish' &> /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/AsyncTransfer/_view/FilesByWorkflow' > /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/AsyncTransfer/_view/ftscp_all' > /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/AsyncTransfer/_view/sites' > /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/AsyncTransfer/_view/JobsStatesByWorkflow' &> /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/AsyncTransfer/_view/get_acquired' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/DBSPublisher/_view/publish' &> /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/AsyncTransfer/_view/LFNSiteByLastUpdate' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/AsyncTransfer/_view/get_acquired' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/filesCountByUser' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/AsyncTransfer/_view/LFNSiteByLastUpdate' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/filesCountByTask' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/monitor/_view/filesCountByUser' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/filesCountByDestSource' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/monitor/_view/filesCountByTask' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/FailedAttachmentsByDocId' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/monitor/_view/filesCountByDestSource' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/DoneAttachmentsByDocId' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/monitor/_view/FailedAttachmentsByDocId' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/publicationStateSizeByTime' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/monitor/_view/DoneAttachmentsByDocId' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/startedSizeByTime' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/monitor/_view/publicationStateSizeByTime' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/endedSizeByTime' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/monitor/_view/startedSizeByTime' > /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_design/monitor/_view/endedSizeByTime' > /dev/null

  */10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer_config/_design/asynctransfer_config/_view/GetAnalyticsConfig' > /dev/null
Line: 598 to 555
 */10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer_config/_design/asynctransfer_config/_view/GetTransferConfig' > /dev/null

*/5 * * * * curl -s -H 'Content-Type: application/json' -X GET ' http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer_agent/_design/Agent/_view/existWorkers' > /dev/null

Deleted:
<
<
0 * * * * curl -s -H 'Content-Type: application/json' -X POST ' http://LocalCouchUserName:LocalCouchPass@LacalCouchUrl:LocalPort/asynctransfer/_compact' &> /dev/null

0 * * * * curl -s -H 'Content-Type: application/json' -X POST ' http://LocalCouchUserName:LocalCouchPass@LacalCouchUrl:LocalPort/asynctransfer_stat/_compact' &> /dev/null

0 * * * * curl -s -H 'Content-Type: application/json' -X POST ' http://LocalCouchUserName:LocalCouchPass@LacalCouchUrl:LocalPort/asynctransfer_agent/_compact' &> /dev/null

 

Start/stop AsyncStageOut

Changed:
<
<
note.gif Note: Without starting the services (CouchDB), ASO will not start as it will fail to connect to CouchDB if it is not running.
>
>
CouchDB has to be started first, otherwise ASO will not start because it will fail to connect to CouchDB.

Note: If following from the above initialisation steps, you may have CouchDB already running. This can be checked using the status command.

./config/asyncstageout/manage start-services
  To start all ASO components:
Line: 657 to 614
  started with pid 24459 2015-06-15 15:32:12: ASOMon[24464]: Reading config file /data/srv/asyncstageout/v1.0.3pre8/config/asyncstageout/monitor.conf
Changed:
<
<
2015-06-15 15:32:12: ASOMon[24464]: Using FTS service https://fts3-pilot.cern.ch:8443/
>
>
2015-06-15 15:32:12: ASOMon[24464]: Using FTS service https://fts3-pilot.cern.ch:8446
 ASOMon: pid 24466 writing logfile to /data/srv/asyncstageout/v1.0.3pre8/install/asyncstageout/Monitor/aso-monitor.log
Line: 681 to 638
 Stopping: Monitor
Added:
>
>
To stop CouchDB:

./config/asyncstageout/manage stop-services

 

Start/stop a single AsyncStageOut component

To start the Monitor component:

Line: 736 to 703
 
Changed:
<
<
yum install fts2-client
>
>
sudo yum install fts2-client
 

Disk may become full after some days

Revision 53 2015-06-16 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 244 to 244
 

AsyncStageOut installation and configuration

Changed:
<
<
1) Create the necessary directories.
>
>

Preparation

Create the necessary directories where you will do the deployment:

 
sudo mkdir /data/admin
sudo mkdir /data/srv
Changed:
<
<
#sudo mkdir /data/certs  # This should have been done already by the Deploy script when installing the VM.
sudo chown <username>:zh /data/admin /data/srv /data/certs

2) Copy the host certificate and key to /data/certs/ and change the owner to yourself.

#sudo cp -p /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem /data/certs/  # This should have been done already by the Deploy script when installing the VM.
sudo chown <username>:zh /data/certs/*

3) Create the subdirectories where you will put the deployment package that you will download from github.com/dmwm/deployment and where the deployment will be done.

>
>
sudo chown <username>:zh /data/admin /data/srv
 mkdir /data/admin/asyncstageout mkdir /data/srv/asyncstageout
Changed:
<
<
4) Create a secrets file (the default location is ~/Async.secrets) with the following content:
>
>

Pre-configuration

Create a secrets file (the default location is ~/Async.secrets) with the following content:

 
COUCH_USER=<a-couchdb-username>  # Choose a username for the ASO CouchDB.
Line: 286 to 277
 export ASYNC_SECRETS_LOCATION=/path/to/Async.secrets/file
Changed:
<
<
5) Deployment.
>
>
This secrets file will be used by the deployment scripts to set parameters in the configuration files.

Installation

 
Changed:
<
<
  • Get the deployment package from github. See https://github.com/dmwm/deployment/releases for the releases. You don't need to get the latest HG tag. I took HG1503d which is the latest that uses Couch 1.5 (newer tags will fail in the Deploy sw step with error Couldn't find package external+couchdb15).
>
>
Get the DMWM deployment package from github (https://github.com/dmwm/deployment). See https://github.com/dmwm/deployment/releases for the available releases. Note that you don't need to get the latest HG tag. I took HG1503d because this is the latest tag that uses Couch 1.5, while newer tags use Couch 1.6 and will fail in the Deploy sw step with error Couldn't find package external+couchdb15.
 
cd /data/admin/asyncstageout
Line: 298 to 291
 cd Deployment
Changed:
<
<
  • Set the ASOTAG, REPO and ARCH. The only architecture -at this point- for which AsyncStageOut RPMs are build is slc6_amd64_gcc481. You don't need to change SCRAM_ARCH to match this architecture; the Deploy script will do that for you. The AsyncStageOut releases can be found in https://github.com/dmwm/AsyncStageout/releases and should be taken from the CMS repository comp.pre.riahi.
>
>
Set the ASOTAG, REPO and ARCH. The only architecture -at this point- for which AsyncStageOut RPMs are built is slc6_amd64_gcc481. You don't need to change SCRAM_ARCH to match this architecture; the Deploy script will do that for you. The AsyncStageOut releases can be found in https://github.com/dmwm/AsyncStageout/releases and should be taken from the CMS repository comp.pre.riahi.
 
ASOTAG=1.0.3pre8
Line: 306 to 299
 ARCH=slc6_amd64_gcc481
Changed:
<
<
  • The deployment is separated in three steps: prep, sw and post.
>
>
The deployment is separated in three steps: prep, sw and post.
 
./Deploy -R asyncstageout@$ASOTAG -s prep -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
Line: 352 to 345
 INFO: installation completed sucessfully
Changed:
<
<
6) Create a directory necessary for credentials.
>
>

Authentication

Copy the host certificate and key to the /data/certs/ directory and change the owner to yourself:

#sudo mkdir /data/certs  # This should have been done already by the Deploy script when installing the VM.
sudo chown <username>:zh /data/certs
#sudo cp -p /etc/grid-security/host{cert,key}.pem /data/certs/  # This should have been done already by the Deploy script when installing the VM.
sudo chown <username>:zh /data/certs/*

Create a directory necessary for credentials:

 
mkdir /data/srv/asyncstageout/state/asyncstageout/creds
chmod 700 /data/srv/asyncstageout/state/asyncstageout
Added:
>
>
Operators proxy

Copy a valid proxy into /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy and export X509_USER_PROXY to point to that file.

If voms-proxy-init is not available in the VM, install the package that provides it.

yum provides */voms-proxy-init

Loaded plugins: changelog, kernel-module, priorities, protectbase, security, tsflags, versionlock
147 packages excluded due to repository priority protections
0 packages excluded due to repository protections
EGI-trustanchors/filelists                                                                                                                            |  15 kB     00:00     
glite-SCAS/filelists                                                                                                                                  | 2.8 kB     00:00     
glite-SCAS_ext/filelists                                                                                                                              |  12 kB     00:00     
glite-SCAS_updates/filelists                                                                                                                          | 2.3 kB     00:00     
slc6-extras/filelists_db                                                                                                                              | 181 kB     00:00     
slc6-updates/filelists_db                                                                                                                             |  28 MB     00:00     
voms-clients-2.0.12-1.el6.x86_64 : Virtual Organization Membership Service Clients
Repo        : epel
Matched from:
Filename    : /usr/bin/voms-proxy-info

sudo yum install voms-clients-2.0.12-1.el6.x86_64

You may also need to copy the directories /etc/vomses and /etc/grid-security/vomsdir/cms from lxplus.

sudo scp -r <username>@lxplus.cern.ch:/etc/vomses /etc/vomses
sudo mkdir /etc/grid-security/vomsdir
sudo scp -r <username>@lxplus.cern.ch:/etc/grid-security/vomsdir/cms /etc/grid-security/vomsdir/cms

voms-proxy-init --voms cms --valid 168:00

Enter GRID pass phrase:
Your identity: /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=atanasi/CN=710186/CN=Andres Jorge Tanasijczuk
Creating temporary proxy ....................................... Done
Contacting  lcg-voms.cern.ch:15002 [/DC=ch/DC=cern/OU=computers/CN=lcg-voms.cern.ch] "cms" Failed

Error: Error during SSL handshake:

Trying next server for cms.
Creating temporary proxy ........................................... Done
Contacting  voms.cern.ch:15002 [/DC=ch/DC=cern/OU=computers/CN=voms.cern.ch] "cms" Failed

Error: Error during SSL handshake:

Trying next server for cms.
Creating temporary proxy ........................................................................ Done
Contacting  lcg-voms2.cern.ch:15002 [/DC=ch/DC=cern/OU=computers/CN=lcg-voms2.cern.ch] "cms" Done
Creating proxy ............................... Done

Your proxy is valid until Sun Feb 22 00:53:58 2015

cp /tmp/x509up_u$(id -u) /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
export X509_USER_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
 

Patches

No patches necessary.

Changed:
<
<
8) Perform the initialization of CouchDB and ASO.
>
>

Initialize CouchDB and ASO

Perform the initialization of CouchDB and ASO.

 
cd /data/srv/asyncstageout/current
Line: 406 to 481
 Installing Agent into asynctransfer_agent
Changed:
<
<
9) In the configuration file /data/srv/asyncstageout/current/config/asyncstageout/config.py there are still some parameters that need to be modified "by hand".
>
>

Configuration

ASO configuration

1) In the configuration file /data/srv/asyncstageout/current/config/asyncstageout/config.py there are still some parameters that need to be modified "by hand":

 
sed --in-place "s|\.credentialDir = .*|\.credentialDir = '/data/srv/asyncstageout/state/asyncstageout/creds'|" config/asyncstageout/config.py
Line: 418 to 497
 sed --in-place "s|\.UISetupScript = .*|\.UISetupScript = '/data/srv/tmp.sh'|" config/asyncstageout/config.py
Changed:
<
<
And create a file /data/srv/tmp.sh with just the following line:
>
>
2) Create a file /data/srv/tmp.sh with just the following line:
 
#!/bin/sh
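A non-interactive way to create that one-line file (sketch; TMPSH is a temporary demo path standing in for /data/srv/tmp.sh):

```shell
# TMPSH stands in for /data/srv/tmp.sh so the sketch can run anywhere.
TMPSH="$(mktemp -d)/tmp.sh"
printf '#!/bin/sh\n' > "$TMPSH"   # the file contains just the shebang line
chmod +x "$TMPSH"
head -1 "$TMPSH"                  # prints: #!/bin/sh
```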
Changed:
<
<
10) There is a configuration file shared by the Monitor component and PhEDEx, /data/srv/asyncstageout/current/config/asyncstageout/monitor.conf. In this file, make sure the service parameter points to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8443, https://fts3.cern.ch:8446 or https://fts3-pilot.cern.ch:8446). For development ASO instances one should use https://fts3-pilot.cern.ch:8446.
>
>
3) There is a configuration file shared by the Monitor component and PhEDEx, /data/srv/asyncstageout/current/config/asyncstageout/monitor.conf. In this file, make sure the service parameter points to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8443, https://fts3.cern.ch:8446 or https://fts3-pilot.cern.ch:8446). For development ASO instances one should use https://fts3-pilot.cern.ch:8446:
 
service = https://fts3-pilot.cern.ch:8446
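The same edit can be scripted with sed, in the style of the config.py edits above. A sketch against a throwaway copy (CFG stands in for /data/srv/asyncstageout/current/config/asyncstageout/monitor.conf):

```shell
# CFG is a demo copy; on the VM, point it at .../config/asyncstageout/monitor.conf instead.
CFG="$(mktemp)"
echo 'service = https://lcgfts3.gridpp.rl.ac.uk:8443' > "$CFG"
sed --in-place 's|^service = .*|service = https://fts3-pilot.cern.ch:8446|' "$CFG"
grep '^service' "$CFG"   # prints: service = https://fts3-pilot.cern.ch:8446
```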
Changed:
<
<
11) Make sure the directories /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/Monitor[/work] have been created. If not, create them.
>
>
4) Make sure the directories /data/srv/asyncstageout/current/install/asyncstageout/Monitor[/work] have been created. If not, create them.
 
mkdir /data/srv/asyncstageout/current/install/asyncstageout/Monitor
mkdir /data/srv/asyncstageout/current/install/asyncstageout/Monitor/work
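Equivalently, a single mkdir -p creates both levels at once and is harmless if they already exist (sketch; ROOT is a demo stand-in for /data/srv/asyncstageout/current):

```shell
# ROOT stands in for /data/srv/asyncstageout/current in this demo.
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/install/asyncstageout/Monitor/work"
mkdir -p "$ROOT/install/asyncstageout/Monitor/work"   # second run: still succeeds, no error
```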
Changed:
<
<
12) Copy a valid proxy into /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy and export X509_USER_PROXY to point to that file.

If voms-proxy-init is not available in the VM, install the package that provides it.

yum provides */voms-proxy-init

Loaded plugins: changelog, kernel-module, priorities, protectbase, security, tsflags, versionlock
147 packages excluded due to repository priority protections
0 packages excluded due to repository protections
EGI-trustanchors/filelists                                                                                                                            |  15 kB     00:00     
glite-SCAS/filelists                                                                                                                                  | 2.8 kB     00:00     
glite-SCAS_ext/filelists                                                                                                                              |  12 kB     00:00     
glite-SCAS_updates/filelists                                                                                                                          | 2.3 kB     00:00     
slc6-extras/filelists_db                                                                                                                              | 181 kB     00:00     
slc6-updates/filelists_db                                                                                                                             |  28 MB     00:00     
voms-clients-2.0.12-1.el6.x86_64 : Virtual Organization Membership Service Clients
Repo        : epel
Matched from:
Filename    : /usr/bin/voms-proxy-info

sudo yum install voms-clients-2.0.12-1.el6.x86_64

You may also need to copy the directories /etc/vomses and /etc/grid-security/vomsdir/cms from lxplus.

sudo scp -r <username>@lxplus.cern.ch:/etc/vomses /etc/vomses
sudo mkdir /etc/grid-security/vomsdir
sudo scp -r <username>@lxplus.cern.ch:/etc/grid-security/vomsdir/cms /etc/grid-security/vomsdir/cms

voms-proxy-init --voms cms --valid 168:00

Enter GRID pass phrase:
Your identity: /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=atanasi/CN=710186/CN=Andres Jorge Tanasijczuk
Creating temporary proxy ....................................... Done
Contacting  lcg-voms.cern.ch:15002 [/DC=ch/DC=cern/OU=computers/CN=lcg-voms.cern.ch] "cms" Failed

Error: Error during SSL handshake:

Trying next server for cms.
Creating temporary proxy ........................................... Done
Contacting  voms.cern.ch:15002 [/DC=ch/DC=cern/OU=computers/CN=voms.cern.ch] "cms" Failed

Error: Error during SSL handshake:

Trying next server for cms.
Creating temporary proxy ........................................................................ Done
Contacting  lcg-voms2.cern.ch:15002 [/DC=ch/DC=cern/OU=computers/CN=lcg-voms2.cern.ch] "cms" Done
Creating proxy ............................... Done

Your proxy is valid until Sun Feb 22 00:53:58 2015

cp /tmp/x509up_u$(id -u) /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
export X509_USER_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
>
>
CouchDB configuration
 
Changed:
<
<
13) Check that CouchDB is up and that you can access it from your VM.
>
>
1) First of all, check that CouchDB is up and that you can access it from your VM:
 
curl -X GET "http://$(hostname):5984/"
Line: 514 to 528
 {"couchdb":"Welcome","version":"1.1.1"}
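To script this check, matching the welcome banner is enough. A sketch with a canned response (RESP is a hypothetical variable; in practice fill it from the live curl output):

```shell
# RESP holds a canned copy of the expected reply; on the VM you would use:
#   RESP="$(curl -s -X GET "http://$(hostname):5984/")"
RESP='{"couchdb":"Welcome","version":"1.1.1"}'
case "$RESP" in
  *'"couchdb":"Welcome"'*) echo "CouchDB is up" ;;
  *) echo "CouchDB check failed" >&2; exit 1 ;;
esac
```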
Changed:
<
<
14) CouchDB is protected by the VM firewall: it cannot be accessed even from another machine at CERN. To be able to access CouchDB from another machine, you need to stop iptables:
>
>
2) CouchDB is protected by the VM firewall: it cannot be accessed even from another machine at CERN. To be able to access CouchDB from another machine, you need to stop iptables:
 
sudo /etc/init.d/iptables stop # To start the iptables again: sudo /etc/init.d/iptables start; To check the status: sudo /etc/init.d/iptables status
Line: 528 to 542
  and configure the browser (Preferences - Network) to access the internet with SOCKS Proxy Server = localhost and Port = 1111.
Changed:
<
<
15) Change the FTS3 server URL for each of the 8 documents "T1_*_*" in the asynctransfer_config database to point to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8443, https://fts3.cern.ch:8446 or https://fts3-pilot.cern.ch:8446). For development ASO instances one should use https://fts3-pilot.cern.ch:8446.
>
>
3) Change the FTS3 server URL for each of the 8 documents "T1_*_*" in the asynctransfer_config database to point to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8443, https://fts3.cern.ch:8446 or https://fts3-pilot.cern.ch:8446). For development ASO instances one should use https://fts3-pilot.cern.ch:8446.
 
Line: 538 to 552
 

Cron jobs

Changed:
<
<

Views caching

>
>

CouchDB views caching

  Until this ticket https://github.com/dmwm/AsyncStageout/issues/4068 is fixed, one needs to update the crontab by hand to create cron jobs for querying the views every X minutes:

Revision 52 2015-06-15 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 424 to 424
 #!/bin/sh
Changed:
<
<
10) There is a configuration file shared by the Monitor component and PhEDEx, /data/srv/asyncstageout/current/config/asyncstageout/monitor.conf. In this file, make sure the service parameter points to the FTS3 server you intend to use.
>
>
10) There is a configuration file shared by the Monitor component and PhEDEx, /data/srv/asyncstageout/current/config/asyncstageout/monitor.conf. In this file, make sure the service parameter points to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8443, https://fts3.cern.ch:8446 or https://fts3-pilot.cern.ch:8446). For development ASO instances one should use https://fts3-pilot.cern.ch:8446.
 
Changed:
<
<
service = https://lcgfts3.gridpp.rl.ac.uk:8443  # For the FTS3-RAL server.
or
service = https://fts3.cern.ch:8443  # For the production FTS3-CERN server.
or
service = https://fts3-pilot.cern.ch:8443  # For the pilot FTS3-CERN server.
>
>
service = https://fts3-pilot.cern.ch:8446
 

11) Make sure the directories /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/Monitor[/work] were created. If not, create them.

Line: 515 to 507
 13) Check that CouchDB is up and that you can access it from your VM.
Changed:
<
<
curl -X GET "http://$(hostname):5984/" -k --key $X509_USER_PROXY --cert $X509_USER_PROXY
>
>
curl -X GET "http://$(hostname):5984/"
 
Line: 536 to 528
  and configure the browser (Preferences - Network) to access the internet with SOCKS Proxy Server = localhost and Port = 1111.
Changed:
<
<
15) Change the FTS3 server URL for each of the 8 documents "T1_*_*" in the asynctransfer_config database to point to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8443, https://fts3.cern.ch:8446 or https://fts3-pilot.cern.ch:8446).
>
>
15) Change the FTS3 server URL for each of the 8 documents "T1_*_*" in the asynctransfer_config database to point to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8443, https://fts3.cern.ch:8446 or https://fts3-pilot.cern.ch:8446). For development ASO instances one should use https://fts3-pilot.cern.ch:8446.
 
Line: 555 to 547
 
Changed:
<
<
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/AsyncTransfer/_view/FilesByWorkflow' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/AsyncTransfer/_view/FilesByWorkflow' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/AsyncTransfer/_view/ftscp_all' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/AsyncTransfer/_view/ftscp_all' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/AsyncTransfer/_view/sites' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/AsyncTransfer/_view/sites' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/AsyncTransfer/_view/JobsStatesByWorkflow' &> /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/AsyncTransfer/_view/JobsStatesByWorkflow' &> /dev/null
 
Changed:
<
<
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/DBSPublisher/_view/publish' &> /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/DBSPublisher/_view/publish' &> /dev/null
 
Changed:
<
<
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/_utils/database.html?asynctransfer/_design/AsyncTransfer/_view/get_acquired' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/AsyncTransfer/_view/get_acquired' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/_utils/database.html?asynctransfer/_design/AsyncTransfer/_view/LFNSiteByLastUpdate' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/AsyncTransfer/_view/LFNSiteByLastUpdate' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/filesCountByUser' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/filesCountByUser' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/filesCountByTask' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/filesCountByTask' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/filesCountByDestSource' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/filesCountByDestSource' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/FailedAttachmentsByDocId' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/FailedAttachmentsByDocId' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/DoneAttachmentsByDocId' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/DoneAttachmentsByDocId' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/publicationStateSizeByTime' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/publicationStateSizeByTime' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/startedSizeByTime' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/startedSizeByTime' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/endedSizeByTime' > /dev/null
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/endedSizeByTime' > /dev/null
  */10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer_config/_design/asynctransfer_config/_view/GetAnalyticsConfig' > /dev/null
Line: 616 to 608
 Starting components: ['AsyncTransfer', 'Reporter', 'DBSPublisher', 'FilesCleaner', 'Statistics', 'RetryManager'] Starting : AsyncTransfer Starting AsyncTransfer as a daemon
Changed:
<
<
Log will be in /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/AsyncTransfer
>
>
Log will be in /data/srv/asyncstageout/v1.0.3pre8/install/asyncstageout/AsyncTransfer
 Waiting 1 seconds, to ensure daemon file is created
Changed:
<
<
started with pid 23078
>
>
started with pid 24188
 Starting : Reporter Starting Reporter as a daemon
Changed:
<
<
Log will be in /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/Reporter
>
>
Log will be in /data/srv/asyncstageout/v1.0.3pre8/install/asyncstageout/Reporter
 Waiting 1 seconds, to ensure daemon file is created
Changed:
<
<
started with pid 23165
>
>
started with pid 24275
 Starting : DBSPublisher Starting DBSPublisher as a daemon
Changed:
<
<
Log will be in /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/DBSPublisher
>
>
Log will be in /data/srv/asyncstageout/v1.0.3pre8/install/asyncstageout/DBSPublisher
 Waiting 1 seconds, to ensure daemon file is created
Changed:
<
<
started with pid 23252
>
>
started with pid 24362
 Starting : FilesCleaner Starting FilesCleaner as a daemon
Changed:
<
<
Log will be in /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/FilesCleaner
>
>
Log will be in /data/srv/asyncstageout/v1.0.3pre8/install/asyncstageout/FilesCleaner
 Waiting 1 seconds, to ensure daemon file is created
Changed:
<
<
started with pid 23339
>
>
started with pid 24449
 Starting : Statistics Starting Statistics as a daemon
Changed:
<
<
Log will be in /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/Statistics
>
>
Log will be in /data/srv/asyncstageout/v1.0.3pre8/install/asyncstageout/Statistics
 Waiting 1 seconds, to ensure daemon file is created
Changed:
<
<
started with pid 23348
>
>
started with pid 24454
 Starting : RetryManager Starting RetryManager as a daemon
Changed:
<
<
Log will be in /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/RetryManager
>
>
Log will be in /data/srv/asyncstageout/v1.0.3pre8/install/asyncstageout/RetryManager
 Waiting 1 seconds, to ensure daemon file is created
Changed:
<
<
started with pid 23353 2015-02-15 09:26:20: ASOMon[23365]: Reading config file /data/srv/asyncstageout/v1.0.3pre6/config/asyncstageout/monitor.conf 2015-02-15 09:26:20: ASOMon[23365]: Using FTS service https://lcgfts3.gridpp.rl.ac.uk:8443 ASOMon: pid 23367 writing logfile to /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/Monitor/aso-monitor.log
>
>
started with pid 24459 2015-06-15 15:32:12: ASOMon[24464]: Reading config file /data/srv/asyncstageout/v1.0.3pre8/config/asyncstageout/monitor.conf 2015-06-15 15:32:12: ASOMon[24464]: Using FTS service https://fts3-pilot.cern.ch:8443/ ASOMon: pid 24466 writing logfile to /data/srv/asyncstageout/v1.0.3pre8/install/asyncstageout/Monitor/aso-monitor.log
 

To stop all ASO components:

Revision 51 2015-06-15 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 247 to 247
 1) Create the necessary directories.
Changed:
<
<
sudo mkdir /data/admin /data/srv /data/certs
sudo chown <username>:zh /data/admin /data/srv /data/certs  # Now one can change the ownership of the directories to yourself.
>
>
sudo mkdir /data/admin
sudo mkdir /data/srv
#sudo mkdir /data/certs  # This should have been done already by the Deploy script when installing the VM.
sudo chown <username>:zh /data/admin /data/srv /data/certs
 

2) Copy the host certificate and key to /data/certs/ and change the owner to yourself.

Changed:
<
<
sudo cp -p /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem /data/certs/  # Need to use sudo, because /etc/grid-security/host<cert,key>.pem are owned by root.
sudo chown <username>:zh /data/certs/*  # Now one can change the ownership of the files.
>
>
#sudo cp -p /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem /data/certs/  # This should have been done already by the Deploy script when installing the VM.
sudo chown <username>:zh /data/certs/*
 

3) Create the subdirectories where you will put the deployment package that you will download from github.com/dmwm/deployment and where the deployment will be done.

Line: 284 to 286
 export ASYNC_SECRETS_LOCATION=/path/to/Async.secrets/file
Changed:
<
<
5) Do the deployment. For the latest dmwm/deployment HG tag, see https://github.com/dmwm/deployment/releases.
>
>
5) Deployment.
 
Changed:
<
<
  • Get the deployment package from dmwm/deployment.
>
>
  • Get the deployment package from GitHub. See https://github.com/dmwm/deployment/releases for the releases. You don't need to get the latest HG tag. I took HG1503d, which is the latest tag that uses CouchDB 1.5 (newer tags will fail in the Deploy sw step with the error Couldn't find package external+couchdb15).
 
cd /data/admin/asyncstageout
Changed:
<
<
wget -O cfg.zip --no-check-certificate https://github.com/dmwm/deployment/archive/HG1506f.zip
>
>
wget -O cfg.zip --no-check-certificate https://github.com/dmwm/deployment/archive/HG1503d.zip
unzip cfg.zip; rm cfg.zip
mv deployment-* Deployment
cd Deployment
Line: 311 to 313
 
Changed:
<
<
INFO: 20150202172322: starting deployment of: asyncstageout/offsite
>
>
INFO: 20150615131021: starting deployment of: asyncstageout/offsite INFO: installation log can be found in /data/srv/asyncstageout/.deploy/20150615-131021-15087-prep.log
 INFO: deploying backend - variant: default, version: default INFO: deploying wmcore-auth - variant: default, version: default INFO: deploying couchdb - variant: default, version: default INFO: deploying asyncstageout - variant: offsite, version: default
Deleted:
<
<
INFO: installation log can be found in /data/srv/asyncstageout/.deploy/20150202-172322-25497-prep.log
 INFO: installation completed sucessfully
Line: 325 to 327
 
Changed:
<
<
INFO: 20150202172401: starting deployment of: asyncstageout/offsite
>
>
INFO: 20150615131031: starting deployment of: asyncstageout/offsite INFO: installation log can be found in /data/srv/asyncstageout/.deploy/20150615-131031-15163-sw.log
 INFO: deploying backend - variant: default, version: default
Changed:
<
<
INFO: bootstrapping comp.pre.riahi software area in /data/srv/asyncstageout/v1.0.3pre6/sw.pre.riahi
>
>
INFO: bootstrapping comp.pre.riahi software area in /data/srv/asyncstageout/v1.0.3pre8/sw.pre.riahi
 INFO: bootstrap successful INFO: deploying wmcore-auth - variant: default, version: default INFO: deploying couchdb - variant: default, version: default INFO: deploying asyncstageout - variant: offsite, version: default
Deleted:
<
<
INFO: installation log can be found in /data/srv/asyncstageout/.deploy/20150202-172401-25574-sw.log
 INFO: installation completed sucessfully
Line: 341 to 343
 
Changed:
<
<
INFO: 20150202172649: starting deployment of: asyncstageout/offsite
>
>
INFO: 20150615131533: starting deployment of: asyncstageout/offsite INFO: installation log can be found in /data/srv/asyncstageout/.deploy/20150615-131533-16570-post.log
 INFO: deploying backend - variant: default, version: default INFO: deploying wmcore-auth - variant: default, version: default INFO: deploying couchdb - variant: default, version: default INFO: deploying asyncstageout - variant: offsite, version: default
Deleted:
<
<
INFO: installation log can be found in /data/srv/asyncstageout/.deploy/20150202-172649-26929-post.log
 INFO: installation completed sucessfully

Revision 50 2015-06-15 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 226 to 226
  Follow the instructions in Deployment of CRAB REST Interface / Get and install a virtual machine.
Changed:
<
<

Add the DN of the ASO host to the external REST configuration file

>
>

Add the DN of the ASO host to the external REST configuration

  Add the Grid host certificate DN of the ASO host to the external REST configuration in the "delegate-dn" field. Do this for the REST Interface instance you will use together with this ASO deployment. This is so that your ASO instance is allowed to retrieve users' proxies from the myproxy server. The external REST configuration file will look something like this (in my case I have a private TaskWorker installed on osmatanasi2.cern.ch and I will install ASO on osmatanasi1.cern.ch):
Line: 265 to 265
 mkdir /data/srv/asyncstageout
Changed:
<
<
4) Create an Async.secrets file in the home directory of the host with the following content:
>
>
4) Create a secrets file (the default location is ~/Async.secrets) with the following content:
 
COUCH_USER=<a-couchdb-username>  # Choose a username for the ASO CouchDB.
Line: 284 to 284
 export ASYNC_SECRETS_LOCATION=/path/to/Async.secrets/file
Changed:
<
<
5) Do the deployment. See https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement for the current AsyncStageOut release and the repository it should be taken from (I will use 1.0.3pre8 from comp.pre.riahi). For the latest dmwm/deployment HG tag, see https://github.com/dmwm/deployment/releases.
>
>
5) Do the deployment. For the latest dmwm/deployment HG tag, see https://github.com/dmwm/deployment/releases.
 
  • Get the deployment package from dmwm/deployment.
Line: 296 to 296
 cd Deployment
Changed:
<
<
  • Set the ASOTAG, REPO and ARCH. The only architecture -at this point- for which AsyncStageOut RPMs are built is slc6_amd64_gcc481. You don't need to change SCRAM_ARCH to match this architecture; the Deploy script will do that for you.
>
>
  • Set the ASOTAG, REPO and ARCH. The only architecture -at this point- for which AsyncStageOut RPMs are built is slc6_amd64_gcc481. You don't need to change SCRAM_ARCH to match this architecture; the Deploy script will do that for you. The AsyncStageOut releases can be found at https://github.com/dmwm/AsyncStageout/releases and should be taken from the CMS repository comp.pre.riahi.
 
ASOTAG=1.0.3pre8
Line: 357 to 357
 chmod 700 /data/srv/asyncstageout/state/asyncstageout
Changed:
<
<
7) Commit patches to AsyncStageOut if required. See https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement for available patches. In my case there were no patches to apply.
>
>

Patches

No patches necessary.

  8) Perform the initialization of CouchDB and ASO.
Line: 542 to 544
 

Cron jobs

Changed:
<
<
Create cron jobs for querying the views every X minutes.
>
>

Views caching

Until this ticket https://github.com/dmwm/AsyncStageout/issues/4068 is fixed, one needs to update the crontab by hand to create cron jobs for querying the views every X minutes:

 
crontab -e
Changed:
<
<
See https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement "Views caching" for what cron jobs need to be added. I didn't create the cron that renews the proxy, and therefore whenever the cron curl command uses --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert I changed it to --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy.
>
>
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/AsyncTransfer/_view/FilesByWorkflow' > /dev/null

*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/AsyncTransfer/_view/ftscp_all' > /dev/null

*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/AsyncTransfer/_view/sites' > /dev/null

*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/AsyncTransfer/_view/JobsStatesByWorkflow' &> /dev/null

*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/DBSPublisher/_view/publish' &> /dev/null

*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/_utils/database.html?asynctransfer/_design/AsyncTransfer/_view/get_acquired' > /dev/null

*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/_utils/database.html?asynctransfer/_design/AsyncTransfer/_view/LFNSiteByLastUpdate' > /dev/null

*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/filesCountByUser' > /dev/null

*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/filesCountByTask' > /dev/null

*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/filesCountByDestSource' > /dev/null

*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/FailedAttachmentsByDocId' > /dev/null

*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/DoneAttachmentsByDocId' > /dev/null

*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/publicationStateSizeByTime' > /dev/null

*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/startedSizeByTime' > /dev/null

*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb/asynctransfer/_design/monitor/_view/endedSizeByTime' > /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer_config/_design/asynctransfer_config/_view/GetAnalyticsConfig' > /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer_config/_design/asynctransfer_config/_view/GetStatConfig' > /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer_config/_design/asynctransfer_config/_view/GetTransferConfig' > /dev/null

*/5 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer_agent/_design/Agent/_view/existWorkers' > /dev/null

0 * * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer/_compact' &> /dev/null

0 * * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer_stat/_compact' &> /dev/null

0 * * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/asynctransfer_agent/_compact' &> /dev/null
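The long list of near-identical view-warming entries above is easy to get wrong when edited by hand. As a sketch only (not part of the official deployment), the `*/10` entries could be generated by a small helper; `gen_cron` is my own name, and the `LocalCouch*` placeholders and OpsProxy path are the same ones used in the crontab above.

```shell
#!/bin/sh
# Sketch: emit one view-warming crontab entry per CouchDB view.
# PROXY and BASE reuse the placeholders from the crontab above;
# gen_cron is a hypothetical helper, not an ASO tool.
PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
BASE='https://LocalCouchUserName:LocalCouchPass@LocalCouchUrl:LocalPort/couchdb'

gen_cron() {
    # One crontab line per view path given as an argument.
    for view in "$@"; do
        printf "*/10 * * * * curl -k --cert %s --key %s -H 'Content-Type: application/json' -X GET '%s/%s' > /dev/null\n" \
            "$PROXY" "$PROXY" "$BASE" "$view"
    done
}

# Print the entries; review them, then add them to the crontab.
gen_cron \
    asynctransfer/_design/AsyncTransfer/_view/FilesByWorkflow \
    asynctransfer/_design/AsyncTransfer/_view/ftscp_all \
    asynctransfer/_design/AsyncTransfer/_view/sites
```

The same loop covers the remaining views by extending the argument list.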
 

Start/stop AsyncStageOut

Revision 492015-06-15 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 228 to 228
 

Add the DN of the ASO host to the external REST configuration file

Changed:
<
<
Add the Grid host certificate DN of the ASO host to the external REST configuration file in the "delegate-dn" field. Do this for the REST Interface instance you will use together with this ASO deployment. This is so that your ASO instance is allowed to retrieve users' proxies from the myproxy server. The external REST configuration file will look something like this (in my case my REST Interface is installed in osmatanasi2.cern.ch and I will install ASO in osmatanasi1.cern.ch):
>
>
Add the Grid host certificate DN of the ASO host to the external REST configuration in the "delegate-dn" field. Do this for the REST Interface instance you will use together with this ASO deployment. This is so that your ASO instance is allowed to retrieve users' proxies from the myproxy server. The external REST configuration file will look something like this (in my case I have a private TaskWorker installed in osmatanasi2.cern.ch and I will install ASO in osmatanasi1.cern.ch):
 
{
Line: 255 to 255
 
sudo cp -p /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem /data/certs/  # Need to use sudo, because /etc/grid-security/host<cert,key>.pem are owned by root.
Changed:
<
<
sudo chown :zh /data/certs/* # Now one can change the ownership of the files to yourself.
>
>
sudo chown :zh /data/certs/* # Now one can change the ownership of the files.
 

3) Create the subdirectories where you will put the deployment package that you will download from github.com/dmwm/deployment and where the deployment will be done.

Line: 273 to 273
COUCH_HOST=<IP-of-this-ASO-host>  # You can read the host IP in https://openstack.cern.ch/dashboard/project/instances/.
COUCH_PORT=5984
OPS_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
Changed:
<
<
UFC_SERVICE_URL=https://vmatanasi2.cern.ch/crabserver/dev/filemetadata # The URL of the crabserver instance from where ASO should get the file metadata.
>
>
UFC_SERVICE_URL=https://osmatanasi2.cern.ch/crabserver/dev/filemetadata # The URL of the crabserver instance from where ASO should get the file metadata.
COUCH_CERT_FILE=/data/certs/hostcert.pem
COUCH_KEY_FILE=/data/certs/hostkey.pem
Changed:
<
<
You can actually put this file in any directory you want and/or give to it any name you want, but then you have to set ASYNC_SECRETS_LOCATION to the path to the file.
>
>
You can actually put this file in any directory you want and/or give to it any name you want, but then you have to set the environment variable ASYNC_SECRETS_LOCATION to point to the file:
 
export ASYNC_SECRETS_LOCATION=/path/to/Async.secrets/file
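A quick sanity check of the secrets file before deploying can save a debugging round later. The sketch below is my own helper, not part of ASO; the key list is based on the fields shown above, plus COUCH_PASS, which I assume is also required.

```shell
#!/bin/sh
# check_secrets: hypothetical helper that reports any key missing from an
# Async.secrets file. Key list assumed from the example file above.
check_secrets() {
    # $1 = path to the Async.secrets file; returns non-zero if a key is missing.
    rc=0
    for key in COUCH_USER COUCH_PASS COUCH_HOST COUCH_PORT OPS_PROXY \
               UFC_SERVICE_URL COUCH_CERT_FILE COUCH_KEY_FILE; do
        grep -q "^${key}=" "$1" || { echo "missing: $key"; rc=1; }
    done
    return $rc
}

# Example: check the file ASYNC_SECRETS_LOCATION points to, if it exists.
f="${ASYNC_SECRETS_LOCATION:-$HOME/Async.secrets}"
[ -f "$f" ] && check_secrets "$f" || true
```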
Changed:
<
<
5) Do the deployment. See https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement for the current AsyncStageOut release and the repository it should be taken from (I will use 1.0.3pre6 from comp.pre.riahi). It may also suggest which dmwm/deployment HG tag to use (I will use HG1502g).
>
>
5) Do the deployment. See https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement for the current AsyncStageOut release and the repository it should be taken from (I will use 1.0.3pre8 from comp.pre.riahi). For the latest dmwm/deployment HG tag see https://github.com/dmwm/deployment/releases.
 
  • Get the deployment package from dmwm/deployment.

cd /data/admin/asyncstageout
Changed:
<
<
wget -O cfg.zip --no-check-certificate https://github.com/dmwm/deployment/archive/HG1502g.zip
>
>
wget -O cfg.zip --no-check-certificate https://github.com/dmwm/deployment/archive/HG1506f.zip
unzip cfg.zip; rm cfg.zip
mv deployment-* Deployment
cd Deployment
Line: 299 to 299
 
  • Set the ASOTAG, REPO and ARCH. The only architecture, at this point, for which AsyncStageOut RPMs are built is slc6_amd64_gcc481. You don't need to change SCRAM_ARCH to match this architecture; the Deploy script will do that for you.
Changed:
<
<
ASOTAG=1.0.3pre6
>
>
ASOTAG=1.0.3pre8
REPO=comp.pre.riahi
ARCH=slc6_amd64_gcc481

Revision 482015-06-08 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 222 to 222
  These are my (Andres Tanasijczuk) notes from when I did the installation of AsyncStageOut in a new CERN OpenStack virtual machine on February 2015.
Changed:
<
<

Request a CERN OpenStack virtual machine

>
>

Get and install a virtual machine

 
Changed:
<
<
Read the CERN OpenStack Private Cloud Guide (http://clouddocs.web.cern.ch/clouddocs/); in the second chapter it explains how to request a VM. The page where the request is done is https://openstack.cern.ch/dashboard/project/instances/. I requested a VM with the following requirements: 2 VCPUs, 4GB RAM, 40GB disk ("Flavor = m1.medium"). I selected "Instance Count = 1", "Boot from image" and "Image name = SLC6 CERN Server - x86_64 [2014-11-06] (776.3 MB)". And I called it osmatanasi1.
>
>
Follow the instructions in Deployment of CRAB REST Interface / Get and install a virtual machine.
 
Changed:
<
<

Get a host certificate for the virtual machine

>
>

Add the DN of the ASO host to the external REST configuration file

 
Changed:
<
<
  • Using a browser that has your certificate issued by the CERN CA imported on it, go to https://gridca.cern.ch/gridca/.
  • Sign in using your certificate.
  • Click on New Grid Host Certificate.
  • Click on Request certificate using OpenSSL (recommended for Linux machines).
  • A list of hosts for which you can request a certificate will appear. Choose the host for which you want to request a certificate and click on the Select button.
  • Log in to the host for which you want to request a certificate.
ssh <username>@<hostname>.cern.ch
  • Make sure you have the ~/.globus directory with your valid user certificate files usercert.pem and userkey.pem. (If your certificate was issued by an authority other than CERN CA, associate your CERN primary account to the certificate. You can do that in https://gridca.cern.ch/gridca/.)
  • Run the following command from your home area:
openssl req -new -subj "/CN=<hostname>.cern.ch" -out newcsr.csr -nodes -sha512 -newkey rsa:2048
  • Two files, newcsr.csr and privkey.pem, should have been created in your home area. The file newcsr.csr contains your certificate request, which you should send to CERN CA. Open the file, copy all its content and paste it in the webpage in the field "Certificate request:". Then click "Submit".
  • Download the host certificate by clicking in the "Base 64 encoded" link under "Download Certificate". The certificate will be in a file named host.cert. Copy this file to the home area in the VM.
  • In your VM home area run the following command:
openssl pkcs12 -export -inkey privkey.pem -in host.cert -out myCertificate.p12
  • Your certificate in pkcs12 format is ready in the file myCertificate.p12. You can delete the newcsr.csr file.
  • Move (or copy) the host certificate to the directory /etc/grid-security (if this directory doesn't exist, create it), change the owner:group to root:root and protect the private key with permission 400:
sudo mkdir /etc/grid-security
sudo cp host.cert /etc/grid-security/hostcert.pem
sudo cp privkey.pem /etc/grid-security/hostkey.pem
sudo chmod 400 /etc/grid-security/hostkey.pem
sudo chown root:root /etc/grid-security/hostcert.pem
sudo chown root:root /etc/grid-security/hostkey.pem
  • That's it.
logout

Request proxy renewal rights for the virtual machine.

Send an e-mail to px.support(AT)cern.ch with Cc to cms-service-webtools(AT)cern.ch.

E-mail subject:

myproxy registration request for <hostname>.cern.ch

E-mail body:

Could you please add the following host certificate to myproxy.cern.ch trusted retrievers, authorized retrievers, authorized renewers policy?
This is a development server for CM web services and requires use of grid proxy certificates.

/DC=ch/DC=cern/OU=computers/CN=<hostname>.cern.ch
>
>
Add the Grid host certificate DN of the ASO host to the external REST configuration file in the "delegate-dn" field. Do this for the REST Interface instance you will use together with this ASO deployment. This is so that your ASO instance is allowed to retrieve users' proxies from the myproxy server. The external REST configuration file will look something like this (in my case my REST Interface is installed in osmatanasi2.cern.ch and I will install ASO in osmatanasi1.cern.ch):
 
Changed:
<
<
Regards,

Add the DN of the ASO host to the external REST configuration file.

Add the DN of the ASO host to the external REST configuration file in the "delegate-dn" field. Do this for the REST Interface instance you will use together with this AsyncStageOut deployment. This is so that the REST Interface instance delegates the proxies from users to the ASO service. The external REST configuration file will look something like this (in my case my REST Interface is installed in vmatanasi2.cern.ch and I will install AsyncStageOut in osmatanasi1.cern.ch):

>
>
 {
Changed:
<
<
"private" : { "delegate-dn": ["/DC=ch/DC=cern/OU=computers/CN=vocms(3[136]|21|045|052|021|031).cern.ch|/DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=atanasi/CN=710186/CN=Andres Jorge Tanasijczuk|/DC=ch/DC=cern/OU=computers/CN=vmatanasi2.cern.ch|/DC=ch/DC=cern/OU=computers/CN=osmatanasi1.cern.ch"],
>
>
"cmsweb-dev" : { "delegate-dn": [ "/DC=ch/DC=cern/OU=computers/CN=osmatanasi2.cern.ch|/DC=ch/DC=cern/OU=computers/CN=osmatanasi1.cern.ch" ],
  } }
Changed:
<
<

AsyncStageOut deployment

1) Log-in to the VM (host) where you want to install AsyncStageOut.

ssh <username>@<hostname>.cern.ch

2) Do a basic system install (see 2. Basic system install in https://cms-http-group.web.cern.ch/cms-http-group/tutorials/environ/vm-setup.html).

sudo yum -y install git.x86_64
mkdir -p /tmp/foo
cd /tmp/foo
git clone git://github.com/dmwm/deployment.git cfg
sudo -l
cfg/Deploy -t dummy -s post $PWD system/devvm

INFO: 20150209151113: starting deployment of: system/devvm
INFO: deploying system - variant: devvm, version: default
INFO: installing required system packages. This operation may take a few minutes complete.

less /tmp/foo/.deploy/* # if you want to check what happened
cd ~
rm -fr /tmp/foo
>
>

AsyncStageOut installation and configuration

 
Changed:
<
<
3) Create the necessary directories.
>
>
1) Create the necessary directories.
 
Changed:
<
<
sudo mkdir /data  # The /data directory must be owned by root.
sudo mkdir /data/admin /data/srv /data/certs  # These directories must be created with sudo, because /data is owned by root.
>
>
sudo mkdir /data/admin /data/srv /data/certs
 sudo chown :zh /data/admin /data/srv /data/certs # Now one can change the ownership of the directories to yourself.
Changed:
<
<
4) Copy the host certificate and key to /data/certs/ and change the owner to yourself.
>
>
2) Copy the host certificate and key to /data/certs/ and change the owner to yourself.
 
sudo cp -p /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem /data/certs/  # Need to use sudo, because /etc/grid-security/host<cert,key>.pem are owned by root.
sudo chown <username>:zh /data/certs/*  # Now one can change the ownership of the files to yourself.
Changed:
<
<
5) Create the subdirectories where you will put the deployment package that you will download from github.com/dmwm/deployment and where the deployment will be done.
>
>
3) Create the subdirectories where you will put the deployment package that you will download from github.com/dmwm/deployment and where the deployment will be done.
 
mkdir /data/admin/asyncstageout
mkdir /data/srv/asyncstageout
Changed:
<
<
6) Create an Async.secrets file in the home directory of the host with the following content:
>
>
4) Create an Async.secrets file in the home directory of the host with the following content:
 
COUCH_USER=<a-couchdb-username>  # Choose a username for the ASO CouchDB.
Line: 371 to 284
 export ASYNC_SECRETS_LOCATION=/path/to/Async.secrets/file
Changed:
<
<
7) Do the deployment. See https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement for the current AsyncStageOut release and the repository it should be taken from (I will use 1.0.3pre6 from comp.pre.riahi). It may also suggest which dmwm/deployment HG tag to use (I will use HG1502g).
>
>
5) Do the deployment. See https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement for the current AsyncStageOut release and the repository it should be taken from (I will use 1.0.3pre6 from comp.pre.riahi). It may also suggest which dmwm/deployment HG tag to use (I will use HG1502g).
 
  • Get the deployment package from dmwm/deployment.
Line: 437 to 350
 INFO: installation completed sucessfully
Changed:
<
<
8) Create a directory necessary for credentials.
>
>
6) Create a directory necessary for credentials.
 
mkdir /data/srv/asyncstageout/state/asyncstageout/creds
chmod 700 /data/srv/asyncstageout/state/asyncstageout
Changed:
<
<
9) Commit patches to AsyncStageOut if required. See https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement for available patches. In my case there were no patches to apply.
>
>
7) Commit patches to AsyncStageOut if required. See https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement for available patches. In my case there were no patches to apply.
 
Changed:
<
<
10) Perform the initialization of CouchDB and ASO.
>
>
8) Perform the initialization of CouchDB and ASO.
 
cd /data/srv/asyncstageout/current
Line: 489 to 402
 Installing Agent into asynctransfer_agent
Changed:
<
<
11) In the configuration file /data/srv/asyncstageout/current/config/asyncstageout/config.py there are still some parameters that need to be modified "by hand".
>
>
9) In the configuration file /data/srv/asyncstageout/current/config/asyncstageout/config.py there are still some parameters that need to be modified "by hand".
 
sed --in-place "s|\.credentialDir = .*|\.credentialDir = '/data/srv/asyncstageout/state/asyncstageout/creds'|" config/asyncstageout/config.py
Line: 507 to 420
 #!/bin/sh
Changed:
<
<
12) There is a configuration file shared by the Monitor component and PhEDEx, /data/srv/asyncstageout/current/config/asyncstageout/monitor.conf. In this file make sure the service parameter points to the FTS3 server you intend to use.
>
>
10) There is a configuration file shared by the Monitor component and PhEDEx, /data/srv/asyncstageout/current/config/asyncstageout/monitor.conf. In this file make sure the service parameter points to the FTS3 server you intend to use.
 
service = https://lcgfts3.gridpp.rl.ac.uk:8443  # For the FTS3-RAL server.
Line: 521 to 434
 service = https://fts3-pilot.cern.ch:8443 # For the pilot FTS3-CERN server.
Changed:
<
<
13) Make sure the directories /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/Monitor[/work] were created. If not, create them.
>
>
11) Make sure the directories /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/Monitor[/work] were created. If not, create them.
 
mkdir /data/srv/asyncstageout/current/install/asyncstageout/Monitor
mkdir /data/srv/asyncstageout/current/install/asyncstageout/Monitor/work
Changed:
<
<
14) Copy a valid proxy into /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy and export X509_USER_PROXY to point to that file.
>
>
12) Copy a valid proxy into /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy and export X509_USER_PROXY to point to that file.
  If voms-proxy-init is not available in the VM, install the package that provides it.
Line: 595 to 508
 export X509_USER_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
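ASO keeps working only while the OpsProxy is valid, so it is worth checking its remaining lifetime after copying it into place. This is only a sketch: `proxy_warn` is my own helper name, it assumes `voms-proxy-info -timeleft` prints the remaining seconds, and the `$VPI` override exists purely so the function can be exercised without a real proxy.

```shell
#!/bin/sh
# proxy_warn: hypothetical check that a proxy file still has enough lifetime.
# The command used to query the proxy can be overridden via $VPI (defaults
# to voms-proxy-info), which is only there to make the helper testable.
proxy_warn() {
    # $1 = proxy file, $2 = minimum acceptable lifetime in seconds
    left=$("${VPI:-voms-proxy-info}" -file "$1" -timeleft 2>/dev/null)
    case "$left" in
        ''|*[!0-9]*) echo "cannot read proxy $1"; return 1 ;;
    esac
    if [ "$left" -lt "$2" ]; then
        echo "proxy $1 expires in ${left}s - renew it"
        return 1
    fi
    return 0
}

# Example: require at least one hour of lifetime on OpsProxy.
# proxy_warn /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy 3600
```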
Changed:
<
<
15) Check that CouchDB is up and that you can access it from your VM.
>
>
13) Check that CouchDB is up and that you can access it from your VM.
 
curl -X GET "http://$(hostname):5984/" -k --key $X509_USER_PROXY --cert $X509_USER_PROXY
Line: 605 to 518
 {"couchdb":"Welcome","version":"1.1.1"}
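Right after a (re)start CouchDB can take a moment to come up, so a single curl may fail spuriously. A small polling wrapper, sketched below under the same assumptions as the command above (proxy in $X509_USER_PROXY, CouchDB on port 5984), retries until the welcome banner appears; `wait_for_couch` is my own helper name, not an ASO tool.

```shell
#!/bin/sh
# wait_for_couch: poll a CouchDB URL until it returns the welcome banner,
# or give up after a number of attempts (2 seconds apart).
wait_for_couch() {
    # $1 = CouchDB base URL, $2 = max attempts
    attempt=0
    while [ "$attempt" -lt "$2" ]; do
        curl -s -k -X GET "$1" --key "$X509_USER_PROXY" --cert "$X509_USER_PROXY" \
            | grep -q '"couchdb":"Welcome"' && return 0
        attempt=$((attempt + 1))
        sleep 2
    done
    return 1
}

# Example (needs the VM's CouchDB running):
# wait_for_couch "http://$(hostname):5984/" 10 || echo "CouchDB did not come up"
```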
Changed:
<
<
16) CouchDB is protected by the VM firewall. You cannot access CouchDB even from another machine at CERN. To be able to access CouchDB from another machine, you need to stop the iptables:
>
>
14) CouchDB is protected by the VM firewall. You cannot access CouchDB even from another machine at CERN. To be able to access CouchDB from another machine, you need to stop the iptables:
 
sudo /etc/init.d/iptables stop # To start the iptables again: sudo /etc/init.d/iptables start; To check the status: sudo /etc/init.d/iptables status
Line: 619 to 532
  and configure the browser (Preferences - Network) to access the internet with SOCKS Proxy Server = localhost and Port = 1111.
Changed:
<
<
17) Change the FTS3 server URL for each of the 8 documents "T1_*_*" in the asynctransfer_config database to point to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8443, https://fts3.cern.ch:8446 or https://fts3-pilot.cern.ch:8446).
>
>
15) Change the FTS3 server URL for each of the 8 documents "T1_*_*" in the asynctransfer_config database to point to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8443, https://fts3.cern.ch:8446 or https://fts3-pilot.cern.ch:8446).
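Instead of editing each of the 8 documents in the CouchDB web interface, the "url" field could be rewritten from the command line. This is only a sketch of how I would script it, not an official procedure: `set_fts_url` is a hypothetical helper that naively assumes the field appears in the document JSON as `"url":"..."`, and the CouchDB credentials below are the usual placeholders.

```shell
#!/bin/sh
# Sketch: rewrite the "url" field of the T1_*_* documents in the
# asynctransfer_config database via the CouchDB HTTP API.

set_fts_url() {
    # stdin: a CouchDB document as JSON; $1: new FTS3 server URL.
    # Naive text rewrite of the "url" field (assumes no unusual spacing).
    sed 's|"url" *: *"[^"]*"|"url":"'"$1"'"|'
}

COUCH='http://LocalCouchUserName:LocalCouchPass@localhost:5984'
NEWFTS='https://fts3-pilot.cern.ch:8446'

# Example loop (commented out - requires a live CouchDB and the real
# T1_*_* document names):
# for doc in T1_UK_RAL T1_US_FNAL; do
#     curl -s "$COUCH/asynctransfer_config/$doc" | set_fts_url "$NEWFTS" \
#         | curl -s -X PUT "$COUCH/asynctransfer_config/$doc" \
#                -H 'Content-Type: application/json' -d @-
# done
```

After such an update, still verify the getRunningFTSserver view as described below, since ASO reads the FTS3 server from that view.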
 

Revision 472015-06-03 - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 839 to 839
 

ASO on production (cmsweb)

Machine Patch Description Date
Changed:
<
<
vocms031 Version v1.0.3pre5 Full reinstall 2015 Mar 20
>
>
vocms031 Version v1.0.3pre8 Full reinstall 2015 Jun 3
 

ASO on pre-production (cmsweb-testbed)

Revision 462015-06-03 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 16 to 16
 

Deployment of AsyncStageOut for CRAB3

Changed:
<
<
Complete: 4 Go to SWGuideCrab
>
>
Complete: 5 Go to SWGuideCrab
  Read the generic deployment instructions.
Line: 781 to 781
 ./config/asyncstageout/manage clean-all
Deleted:
<
<

Killing transfers

It will kill all transfers in CouchDB, but if FTS transfer was already submitted, currently it is not possible to kill it in FTS:

./config/asyncstageout/manage execute-asyncstageout kill-transfer -t <taskname> [-i docID comma separated list]

OTHER (Will be updated)

 

AsyncStageOut log files

Log files to watch for errors and to check and search in case of problems:

Line: 811 to 801
 ./install/couchdb/logs/couch.log
Added:
>
>

Tips for operators

ASO operations must be done as the service user (crab3 for production and pre-production):

sudo -u crab3 -i bash

Source the ASO environment (the wild cards are for the arch and ASO version: for example, slc6_amd64_gcc481 and 1.0.3pre8 respectively):

cd /data/srv/asyncstageout/current
source sw.pre.riahi/*/cms/asyncstageout/*/etc/profile.d/init.sh

Killing transfers

Kill all the transfers in CouchDB for a given task:

cd /data/srv/asyncstageout/current
./config/asyncstageout/manage execute-asyncstageout kill-transfer -t <taskname> [-i docID comma separated list]

Note: If the FTS transfer was already submitted, it is (currently) not possible to kill it in FTS.

Retrying publication

Retry the publication for a given task:

cd /data/srv/asyncstageout/current
./config/asyncstageout/manage execute-asyncstageout retry-publication -t <taskname> [-i docID comma separated list]
 

ASO on production (cmsweb)

Machine Patch Description Date

Revision 452015-06-03 - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 814 to 814
 

ASO on production (cmsweb)

Machine Patch Description Date
Changed:
<
<
vocms031 Version v1.0.3pre1 Full reinstall 2014 Nov 19
>
>
vocms031 Version v1.0.3pre5 Full reinstall 2015 Mar 20
 

ASO on pre-production (cmsweb-testbed)

Machine Patch Description Date
Changed:
<
<
vocms021 Version v1.0.3pre5 Full reinstall 2015 Jan 14
>
>
vocms021 Version v1.0.3pre8 Full reinstall 2015 Apr 20
 
META FILEATTACHMENT attachment="ProxyRenew.sh" attr="" comment="" date="1396593242" name="ProxyRenew.sh" path="ProxyRenew.sh" size="8920" user="jbalcas" version="1"
META TOPICMOVED by="atanasi" date="1412700584" from="CMS.ASODeployment" to="CMSPublic.AsoDeployment"

Revision 442015-02-24 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 224 to 224
 

Request a CERN OpenStack virtual machine

Changed:
<
<
Read the CERN IT OpenStack User's Guide (https://information-technology.web.cern.ch/book/cern-private-cloud-user-guide); in the first chapter it explains how to request a VM. The page where the request is done is https://openstack.cern.ch/dashboard/project/instances/. I requested a VM with the following requirements: 2 VCPUs, 4GB RAM, 40GB disk ("Flavor = m1.medium"). I selected "Instance Count = 1", "Boot from image" and "Image name = SLC6 CERN Server - x86_64 [2014-11-06] (776.3 MB)". And I called it osmatanasi1.
>
>
Read the CERN OpenStack Private Cloud Guide (http://clouddocs.web.cern.ch/clouddocs/); in the second chapter it explains how to request a VM. The page where the request is done is https://openstack.cern.ch/dashboard/project/instances/. I requested a VM with the following requirements: 2 VCPUs, 4GB RAM, 40GB disk ("Flavor = m1.medium"). I selected "Instance Count = 1", "Boot from image" and "Image name = SLC6 CERN Server - x86_64 [2014-11-06] (776.3 MB)". And I called it osmatanasi1.
 

Get a host certificate for the virtual machine

Changed:
<
<
>
>
 
  • Sign in using your certificate.
Changed:
<
<
  • Click on "New Host Certificate".
  • A list of hosts for which you can request a certificate will appear. Click on "[Request]" for the host you want to request a certificate.
>
>
  • Click on New Grid Host Certificate.
  • Click on Request certificate using OpenSSL (recommended for Linux machines).
  • A list of hosts for which you can request a certificate will appear. Choose the host for which you want to request a certificate and click on the Select button.
 
  • Log in to the host for which you want to request a certificate.
ssh <username>@<hostname>.cern.ch
Changed:
<
<
  • Make sure you have the ~/.globus directory with your user certificate.
>
>
  • Make sure you have the ~/.globus directory with your valid user certificate files usercert.pem and userkey.pem. (If your certificate was issued by an authority other than CERN CA, associate your CERN primary account to the certificate. You can do that in https://gridca.cern.ch/gridca/.)
 
  • Run the following command from your home area:
openssl req -new -subj "/CN=<hostname>.cern.ch" -out newcsr.csr -nodes -sha512 -newkey rsa:2048
Line: 306 to 307
ssh <username>@<hostname>.cern.ch
Changed:
<
<
2) Install some missing packages necessary for the deployment (see https://cms-http-group.web.cern.ch/cms-http-group/tutorials/environ/vm-setup.html).
>
>
2) Do a basic system install (see 2. Basic system install in https://cms-http-group.web.cern.ch/cms-http-group/tutorials/environ/vm-setup.html).
 
sudo yum -y install git.x86_64
mkdir -p /tmp/foo
cd /tmp/foo
git clone git://github.com/dmwm/deployment.git cfg
Changed:
<
<
sudo cfg/Deploy -t dummy -s post $PWD system/devvm
rm -fr /tmp/foo
>
>
sudo -l
cfg/Deploy -t dummy -s post $PWD system/devvm
 
Line: 323 to 324
 INFO: installing required system packages. This operation may take a few minutes complete.
Added:
>
>
less /tmp/foo/.deploy/* # if you want to check what happened
cd ~
rm -fr /tmp/foo
 3) Create the necessary directories.
Line: 553 to 560
 
sudo scp -r <username>@lxplus.cern.ch:/etc/vomses /etc/vomses
Added:
>
>
sudo mkdir /etc/grid-security/vomsdir
sudo scp -r <username>@lxplus.cern.ch:/etc/grid-security/vomsdir/cms /etc/grid-security/vomsdir/cms

Revision 432015-02-23 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 306 to 306
ssh <username>@<hostname>.cern.ch
Changed:
<
<
2) Install some missing packages necessary for the deployment (add link to dmwm page where this recipe is shown).
>
>
2) Install some missing packages necessary for the deployment (see https://cms-http-group.web.cern.ch/cms-http-group/tutorials/environ/vm-setup.html).
 
sudo yum -y install git.x86_64
Line: 314 to 314
cd /tmp/foo
git clone git://github.com/dmwm/deployment.git cfg
sudo cfg/Deploy -t dummy -s post $PWD system/devvm
Added:
>
>
rm -fr /tmp/foo
 

Revision 422015-02-23 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 610 to 610
  and configure the browser (Preferences - Network) to access the internet with SOCKS Proxy Server = localhost and Port = 1111.
Changed:
<
<
17) Change the FTS3 server for each of the 8 documents "T1_*_*" in the asynctransfer_config database to https://lcgfts3.gridpp.rl.ac.uk:8443, https://fts3.cern.ch:8446 or https://fts3-pilot.cern.ch:8446.
>
>
17) Change the FTS3 server URL for each of the 8 documents "T1_*_*" in the asynctransfer_config database to point to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8443, https://fts3.cern.ch:8446 or https://fts3-pilot.cern.ch:8446).
 
Changed:
<
<
>
>
 
  • Go to the asynctransfer_config database (http://<couchhost>:5984/_utils/database.html?asynctransfer_config/_all_docs).
  • Open each of the (8) documents "T1_*_*" and change the FTS server in the "url" key (remember to save each document after editing it).
  • Make sure the changes have propagated to the getRunningFTSserver view (ASO uses this view when selecting the FTS3 server to which it should submit the transfer).
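The bullet steps above can also be done with CouchDB's HTTP API. This is only a sketch: the host, document id, and design-document name below are placeholders/assumptions; only the database name and the getRunningFTSserver view name come from this page.

```shell
# All names below are placeholders: substitute your CouchDB host and a real "T1_*_*" document id.
COUCH="http://<couchhost>:5984"
DB="asynctransfer_config"
DOC="T1_XX_Example"                       # hypothetical document id
FTS="https://fts3-pilot.cern.ch:8446"

# CouchDB requires the document's current _rev on every update, so the flow is:
#   curl -s "$COUCH/$DB/$DOC"                       # GET: note the "_rev" value
#   curl -s -X PUT "$COUCH/$DB/$DOC" -H 'Content-Type: application/json' \
#        -d "{... whole document with only the \"url\" field changed, including \"_rev\" ...}"
# (send back the entire document body, not just the "url" key, or the other fields are lost)
# and then re-query the view ASO reads (the design-document name here is a guess):
#   curl -s "$COUCH/$DB/_design/asynctransfer_config/_view/getRunningFTSserver"
echo "$COUCH/$DB/$DOC -> url=$FTS"
```

The GET-then-PUT round trip is what the Futon web interface does for you when you edit and save a document.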

Revision 412015-02-16 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
<!-- /ActionTrackerPlugin -->
Line: 220 to 220
 

Deployment of AsyncStageOut for CRAB3 developers

Changed:
<
<
These are notes from Andres Tanasijczuk from when I did the installation of ASO in a new CERN OpenStack virtual machine.
>
>
These are my (Andres Tanasijczuk) notes from when I installed AsyncStageOut in a new CERN OpenStack virtual machine in February 2015.
 

Request a CERN OpenStack virtual machine

Line: 241 to 241
 
openssl req -new -subj "/CN=<hostname>.cern.ch" -out newcsr.csr -nodes -sha512 -newkey rsa:2048
Changed:
<
<
  • Two files, newcsr.csr and privkey.pem should have been created in your home area. The file newcsr.csr contains your certificate request, which you should send to CERN CA. Open the file, copy all its content and paste it in the webpage in the field "Certificate request:". Then click "Submit".
>
>
  • Two files, newcsr.csr and privkey.pem, should have been created in your home area. The file newcsr.csr contains your certificate request, which you should send to CERN CA. Open the file, copy all its content and paste it in the webpage in the field "Certificate request:". Then click "Submit".
 
  • Download the host certificate by clicking in the "Base 64 encoded" link under "Download Certificate". The certificate will be in a file named host.cert. Copy this file to the home area in the VM.
  • In your VM home area run the following command:
openssl pkcs12 -export -inkey privkey.pem -in host.cert -out myCertificate.p12
Changed:
<
<
Your certificate in pkcs12 format is ready in the file myCertificate.p12. You can delete the newcsr.csr file.
  • Move (or copy) the host certificate to the directory /etc/grid-security (if this directory doesn't exist, create it), change the owner:group to root:root and protect the key with permission 400:
>
>
  • Your certificate in pkcs12 format is ready in the file myCertificate.p12. You can delete the newcsr.csr file.
  • Move (or copy) the host certificate to the directory /etc/grid-security (if this directory doesn't exist, create it), change the owner:group to root:root and protect the private key with permission 400:
 
sudo mkdir /etc/grid-security
sudo cp host.cert /etc/grid-security/hostcert.pem
Line: 256 to 256
 sudo chmod 400 /etc/grid-security/hostkey.pem
sudo chown root:root /etc/grid-security/hostcert.pem
sudo chown root:root /etc/grid-security/hostkey.pem
Added:
>
>
  • That's it.
 logout
Line: 283 to 286
 

Add the DN of the ASO host to the external REST configuration file.

Changed:
<
<
Add the DN of the ASO host to the external REST configuration file in the "delegate-dn" field. Do this for the REST Interface instance you will use together with this ASO deployment. This is so that the REST Interface instance delegates the proxies from users to the ASO service. The external REST configuration file will look something like this (in my case my REST Interface is installed in vmatanasi2.cern.ch and I will install ASO in osmatanasi1.cern.ch):
>
>
Add the DN of the ASO host to the external REST configuration file in the "delegate-dn" field. Do this for the REST Interface instance you will use together with this AsyncStageOut deployment. This is so that the REST Interface instance delegates the proxies from users to the ASO service. The external REST configuration file will look something like this (in my case my REST Interface is installed in vmatanasi2.cern.ch and I will install AsyncStageOut in osmatanasi1.cern.ch):
 
{
Line: 295 to 298
 }
Changed:
<
<

ASO deployment

>
>

AsyncStageOut deployment

 
Changed:
<
<
1) Log-in to the VM (host) where you want to install ASO.
>
>
1) Log-in to the VM (host) where you want to install AsyncStageOut.
 
ssh <username>@<hostname>.cern.ch
Line: 360 to 363
 export ASYNC_SECRETS_LOCATION=/path/to/Async.secrets/file
Changed:
<
<
7) Now we do the deployment. See https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement for what is the current ASO release and from which repository should be taken from. It also may suggest which dmwm/deployment HG tag to use.
>
>
7) Do the deployment. See https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement for the current AsyncStageOut release and the repository it should be taken from (I will use 1.0.3pre6 from comp.pre.riahi). It may also suggest which dmwm/deployment HG tag to use (I will use HG1502g).

  • Get the deployment package from dmwm/deployment.
 
cd /data/admin/asyncstageout
Changed:
<
<
wget -O cfg.zip --no-check-certificate https://github.com/dmwm/deployment/archive/HG1502g.zip # See https://github.com/dmwm/deployment what is the latest HG tag.
>
>
wget -O cfg.zip --no-check-certificate https://github.com/dmwm/deployment/archive/HG1502g.zip
 unzip cfg.zip; rm cfg.zip
mv deployment-* Deployment
Added:
>
>
cd Deployment
 
Added:
>
>
  • Set the ASOTAG, REPO and ARCH. The only architecture (at this point) for which AsyncStageOut RPMs are built is slc6_amd64_gcc481. You don't need to change SCRAM_ARCH to match this architecture; the Deploy script will do that for you.
 
Deleted:
<
<
cd Deployment
 ASOTAG=1.0.3pre6 REPO=comp.pre.riahi
Changed:
<
<
ARCH=slc6_amd64_gcc481 # This is the only architecture -at this point- for which ASO RPMs are build. You don't need to change the SCRAM_ARCH to match this architecture; the Deploy script will do that for you.
>
>
ARCH=slc6_amd64_gcc481
 
Added:
>
>
  • The deployment is separated in three steps: prep, sw and post.
 
./Deploy -R asyncstageout@$ASOTAG -s prep -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
Line: 427 to 436
 chmod 700 /data/srv/asyncstageout/state/asyncstageout
Changed:
<
<
9) Commit patches to ASO if required. See https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement for available patches. In my case there were no patches to apply.
>
>
9) Commit patches to AsyncStageOut if required. See https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement for available patches. In my case there were no patches to apply.
 
Changed:
<
<
10) Perform the initialization of the CouchDB and ASO.
>
>
10) Perform the initialization of CouchDB and ASO.
 
cd /data/srv/asyncstageout/current
Line: 456 to 465
 CouchDB has not been initialised... running post initialisation
Changed:
<
<
The next command will create a couple of configuration files (e.g. /data/srv/asyncstageout/current/config/asyncstageout/config.py) with parameters for all ASO components. Most parameters are read from the Async.secrets file.
>
>
The next command will create a couple of configuration files (e.g. /data/srv/asyncstageout/current/config/asyncstageout/config.py) with parameters for all AsyncStageOut components. Most parameters are read from the Async.secrets file.
 
./config/asyncstageout/manage init-asyncstageout
Line: 577 to 586
 export X509_USER_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
Changed:
<
<
15) Check that CouchDB is up and you can access it from your VM.
>
>
15) Check that CouchDB is up and that you can access it from your VM.
 
curl -X GET "http://$(hostname):5984/" -k --key $X509_USER_PROXY --cert $X509_USER_PROXY
Line: 587 to 596
 {"couchdb":"Welcome","version":"1.1.1"}
Changed:
<
<
16) CouchDB is protected by the VM firewall. You can not access CouchDB not even from another machine at CERN. To be able to access CouchDB from another machine, you need to either stop the iptables:
>
>
16) CouchDB is protected by the VM firewall; you cannot access it even from another machine at CERN. To be able to access CouchDB from another machine, you need to stop iptables:
 
sudo /etc/init.d/iptables stop # To start the iptables again: sudo /etc/init.d/iptables start; To check the status: sudo /etc/init.d/iptables status
Line: 599 to 608
 [mylaptop]$ ssh -D 1111 <username>@<hostname>.cern.ch
Changed:
<
<
and configure the browser (Preferences - Network) to access the internet with SOCKS Proxy Server = localhost and Port = 1111. I did this in Firefox. If you close the ssh tunnel, you have to configure the browser back to the original proxy settings.
>
>
and configure the browser (Preferences - Network) to access the internet with SOCKS Proxy Server = localhost and Port = 1111.
  17) Change the FTS3 server for each of the 8 documents "T1_*_*" in the asynctransfer_config database to https://lcgfts3.gridpp.rl.ac.uk:8443, https://fts3.cern.ch:8446 or https://fts3-pilot.cern.ch:8446.
Line: 619 to 628
 See https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement "Views caching" for the cron jobs that need to be added. I didn't create the cron that renews the proxy; therefore, wherever a cron curl command uses --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert, I changed it to --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy.
Changed:
<
<

Start/stop ASO

>
>

Start/stop AsyncStageOut

note.gif Note: Start the services (CouchDB) first; ASO will not start if it cannot connect to CouchDB.

  To start all ASO components:
Line: 673 to 684
 writing logfile to /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/Monitor/aso-monitor.log
Deleted:
<
<
It may be that the monitor component will fail to start, in which case the above command output will end with a message like this:

...
started with pid 7207
2015-02-09 16:39:51: ASOMon[7212]: Reading config file /data/srv/asyncstageout/v1.0.3pre6/config/asyncstageout/monitor.conf
2015-02-09 16:39:51: ASOMon[7212]: Using FTS service https://lcgfts3.gridpp.rl.ac.uk:8443
Use of uninitialized value $me in concatenation (.) or string at /data/srv/asyncstageout/v1.0.3pre6/sw.pre.riahi/slc6_amd64_gcc481/cms/asyncstageout/1.0.3pre6/Monitor/   perl_lib/ASO/Monitor.pm line 144.
: fatal error: cannot write to PID file (/data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/Monitor/aso-monitor.pid): No such file or directory

In that case one has to start the monitor component "by hand":

source apps/asyncstageout/Monitor/setup.sh; ./apps/asyncstageout/Monitor/ASO-Monitor.pl --config config/asyncstageout/monitor.conf
 To stop all ASO components:
Line: 709 to 703
 Stopping: Monitor
Added:
>
>

Start/stop a single AsyncStageOut component

To start the Monitor component:

source apps/asyncstageout/Monitor/setup.sh; ./apps/asyncstageout/Monitor/ASO-Monitor.pl --config config/asyncstageout/monitor.conf

To stop the Monitor component:

kill -9 `cat install/asyncstageout/Monitor/aso-monitor.pid`

To start any other component:

./config/asyncstageout/manage execute-asyncstageout wmcoreD --start --component <component-name>

To stop any other component:

./config/asyncstageout/manage execute-asyncstageout wmcoreD --shutdown --component <component-name>
 

Possible "glite-delegation-init: command not found" error

When running ASO, I got the following error message in ./install/asyncstageout/AsyncTransfer/ComponentLog:

Line: 743 to 763
 

Disk may become full after some days

Changed:
<
<
After some days the disk in the VM may become full because of many ASO documents (or files related to the views, I don't know) and ASO will stop working. One has to stop ASO and QCouchDB, and then do a clean-all.
>
>
After some days the disk in the VM may become full because of many ASO documents (or files related to the views, I don't know) and ASO will stop working. One has to stop all AsyncStageOut components and CouchDB, and then do a clean-all.
 
Changed:
<
<
>
>
 ./config/asyncstageout/manage stop-asyncstageout
./config/asyncstageout/manage stop-services
./config/asyncstageout/manage clean-all

Revision 402015-02-15 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
Added:
>
>
<!-- /ActionTrackerPlugin -->
 

CRAB Logo

Line: 216 to 220
 

Deployment of AsyncStageOut for CRAB3 developers

Added:
>
>
These are notes from Andres Tanasijczuk from when I did the installation of ASO in a new CERN OpenStack virtual machine.
 
Added:
>
>

Request a CERN OpenStack virtual machine

Read the CERN IT OpenStack User's Guide (https://information-technology.web.cern.ch/book/cern-private-cloud-user-guide); in the first chapter it explains how to request a VM. The page where the request is done is https://openstack.cern.ch/dashboard/project/instances/. I requested a VM with the following requirements: 2 VCPUs, 4GB RAM, 40GB disk ("Flavor = m1.medium"). I selected "Instance Count = 1", "Boot from image" and "Image name = SLC6 CERN Server - x86_64 [2014-11-06] (776.3 MB)". And I called it osmatanasi1.

Get a host certificate for the virtual machine

  • Using a browser that has your certificate issued by the CERN CA imported on it, go to https://ca.cern.ch/ca/ (click on "CERN Grid Certification Authority", it should re-direct you to https://gridca.cern.ch/gridca/).
  • Sign in using your certificate.
  • Click on "New Host Certificate".
  • A list of hosts for which you can request a certificate will appear. Click on "[Request]" for the host you want to request a certificate.
  • Log in to the host for which you want to request a certificate.
ssh <username>@<hostname>.cern.ch
  • Make sure you have the ~/.globus directory with your user certificate.
  • Run the following command from your home area:
openssl req -new -subj "/CN=<hostname>.cern.ch" -out newcsr.csr -nodes -sha512 -newkey rsa:2048
  • Two files, newcsr.csr and privkey.pem should have been created in your home area. The file newcsr.csr contains your certificate request, which you should send to CERN CA. Open the file, copy all its content and paste it in the webpage in the field "Certificate request:". Then click "Submit".
  • Download the host certificate by clicking in the "Base 64 encoded" link under "Download Certificate". The certificate will be in a file named host.cert. Copy this file to the home area in the VM.
  • In your VM home area run the following command:
openssl pkcs12 -export -inkey privkey.pem -in host.cert -out myCertificate.p12
Your certificate in pkcs12 format is ready in the file myCertificate.p12. You can delete the newcsr.csr file.
  • Move (or copy) the host certificate to the directory /etc/grid-security (if this directory doesn't exist, create it), change the owner:group to root:root and protect the key with permission 400:
sudo mkdir /etc/grid-security
sudo cp host.cert /etc/grid-security/hostcert.pem
sudo cp privkey.pem /etc/grid-security/hostkey.pem
sudo chmod 400 /etc/grid-security/hostkey.pem
sudo chown root:root /etc/grid-security/hostcert.pem
sudo chown root:root /etc/grid-security/hostkey.pem
logout

Request proxy renewal rights for the virtual machine.

Send an e-mail to px.support(AT)cern.ch with Cc to cms-service-webtools(AT)cern.ch.

E-mail subject:

myproxy registration request for <hostname>.cern.ch

E-mail body:

Could you please add the following host certificate to myproxy.cern.ch trusted retrievers, authorized retrievers, authorized renewers policy?
This is a development server for CM web services and requires use of grid proxy certificates.

/DC=ch/DC=cern/OU=computers/CN=<hostname>.cern.ch

Regards,
<Your Name>

Add the DN of the ASO host to the external REST configuration file.

Add the DN of the ASO host to the external REST configuration file in the "delegate-dn" field. Do this for the REST Interface instance you will use together with this ASO deployment. This is so that the REST Interface instance delegates the proxies from users to the ASO service. The external REST configuration file will look something like this (in my case my REST Interface is installed in vmatanasi2.cern.ch and I will install ASO in osmatanasi1.cern.ch):

{
"private" : {
    "delegate-dn": ["/DC=ch/DC=cern/OU=computers/CN=vocms(3[136]|21|045|052|021|031).cern.ch|/DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=atanasi/CN=710186/CN=Andres Jorge Tanasijczuk|/DC=ch/DC=cern/OU=computers/CN=vmatanasi2.cern.ch|/DC=ch/DC=cern/OU=computers/CN=osmatanasi1.cern.ch"],


    }
}

ASO deployment

1) Log-in to the VM (host) where you want to install ASO.

ssh <username>@<hostname>.cern.ch

2) Install some missing packages necessary for the deployment (add link to dmwm page where this recipe is shown).

sudo yum -y install git.x86_64
mkdir -p /tmp/foo
cd /tmp/foo
git clone git://github.com/dmwm/deployment.git cfg
sudo cfg/Deploy -t dummy -s post $PWD system/devvm

INFO: 20150209151113: starting deployment of: system/devvm
INFO: deploying system - variant: devvm, version: default
INFO: installing required system packages. This operation may take a few minutes complete.

3) Create the necessary directories.

sudo mkdir /data  # The /data directory must be owned by root.
sudo mkdir /data/admin /data/srv /data/certs  # These directories must be created with sudo, because /data is owned by root.
sudo chown <username>:zh /data/admin /data/srv /data/certs  # Now one can change the ownership of the directories to yourself.

4) Copy the host certificate and key to /data/certs/ and change the owner to yourself.

sudo cp -p /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem /data/certs/  # Need to use sudo, because /etc/grid-security/host<cert,key>.pem are owned by root.
sudo chown <username>:zh /data/certs/*  # Now one can change the ownership of the files to yourself.

5) Create the subdirectories where you will put the deployment package that you will download from github.com/dmwm/deployment and where the deployment will be done.

mkdir /data/admin/asyncstageout
mkdir /data/srv/asyncstageout

6) Create an Async.secrets file in the home directory of the host with the following content:

COUCH_USER=<a-couchdb-username>  # Choose a username for the ASO CouchDB.
COUCH_PASS=<a-couchdb-password>  # Choose a password for the ASO CouchDB.
COUCH_HOST=<IP-of-this-ASO-host>  # You can read the host IP in https://openstack.cern.ch/dashboard/project/instances/.
COUCH_PORT=5984
OPS_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
UFC_SERVICE_URL=https://vmatanasi2.cern.ch/crabserver/dev/filemetadata  # The URL of the crabserver instance from where ASO should get the file metadata.
COUCH_CERT_FILE=/data/certs/hostcert.pem
COUCH_KEY_FILE=/data/certs/hostkey.pem

You can actually put this file in any directory you want and/or give to it any name you want, but then you have to set ASYNC_SECRETS_LOCATION to the path to the file.

export ASYNC_SECRETS_LOCATION=/path/to/Async.secrets/file

7) Now we do the deployment. See https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement for what is the current ASO release and from which repository should be taken from. It also may suggest which dmwm/deployment HG tag to use.

cd /data/admin/asyncstageout
wget -O cfg.zip --no-check-certificate https://github.com/dmwm/deployment/archive/HG1502g.zip  # See https://github.com/dmwm/deployment what is the latest HG tag.
unzip cfg.zip; rm cfg.zip
mv deployment-* Deployment

cd Deployment
ASOTAG=1.0.3pre6
REPO=comp.pre.riahi
ARCH=slc6_amd64_gcc481  # This is the only architecture -at this point- for which ASO RPMs are build. You don't need to change the SCRAM_ARCH to match this architecture; the Deploy script will do that for you.

./Deploy -R asyncstageout@$ASOTAG -s prep -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite

INFO: 20150202172322: starting deployment of: asyncstageout/offsite
INFO: deploying backend - variant: default, version: default
INFO: deploying wmcore-auth - variant: default, version: default
INFO: deploying couchdb - variant: default, version: default
INFO: deploying asyncstageout - variant: offsite, version: default
INFO: installation log can be found in /data/srv/asyncstageout/.deploy/20150202-172322-25497-prep.log
INFO: installation completed sucessfully

./Deploy -R asyncstageout@$ASOTAG -s sw -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite

INFO: 20150202172401: starting deployment of: asyncstageout/offsite
INFO: deploying backend - variant: default, version: default
INFO: bootstrapping comp.pre.riahi software area in /data/srv/asyncstageout/v1.0.3pre6/sw.pre.riahi
INFO: bootstrap successful
INFO: deploying wmcore-auth - variant: default, version: default
INFO: deploying couchdb - variant: default, version: default
INFO: deploying asyncstageout - variant: offsite, version: default
INFO: installation log can be found in /data/srv/asyncstageout/.deploy/20150202-172401-25574-sw.log
INFO: installation completed sucessfully

./Deploy -R asyncstageout@$ASOTAG -s post -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite

INFO: 20150202172649: starting deployment of: asyncstageout/offsite
INFO: deploying backend - variant: default, version: default
INFO: deploying wmcore-auth - variant: default, version: default
INFO: deploying couchdb - variant: default, version: default
INFO: deploying asyncstageout - variant: offsite, version: default
INFO: installation log can be found in /data/srv/asyncstageout/.deploy/20150202-172649-26929-post.log
INFO: installation completed sucessfully

8) Create a directory necessary for credentials.

mkdir /data/srv/asyncstageout/state/asyncstageout/creds
chmod 700 /data/srv/asyncstageout/state/asyncstageout

9) Commit patches to ASO if required. See https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement for available patches. In my case there were no patches to apply.

10) Perform the initialization of the CouchDB and ASO.

cd /data/srv/asyncstageout/current

The next command will copy some template configuration files into the actual directories where they should be.

./config/asyncstageout/manage activate-asyncstageout

The next command will start CouchDB.

./config/asyncstageout/manage start-services  # start-services is equivalent to start-couch

Starting Services...
starting couch...
CouchDB has not been initialised... running pre initialisation
Initialising CouchDB on <COUCH_HOST>:5984
Apache CouchDB has started, time to relax.
CouchDB has not been initialised... running post initialisation

The next command will create a couple of configuration files (e.g. /data/srv/asyncstageout/current/config/asyncstageout/config.py) with parameters for all ASO components. Most parameters are read from the Async.secrets file.

./config/asyncstageout/manage init-asyncstageout

Initialising AsyncStageOut...
Installing AsyncTransfer into asynctransfer
Installing monitor into asynctransfer
Installing stat into asynctransfer_stat
Installing DBSPublisher into asynctransfer
Installing config into asynctransfer_config
Installing Agent into asynctransfer_agent

11) In the configuration file /data/srv/asyncstageout/current/config/asyncstageout/config.py there are still some parameters that need to be modified "by hand".

sed --in-place "s|\.credentialDir = .*|\.credentialDir = '/data/srv/asyncstageout/state/asyncstageout/creds'|" config/asyncstageout/config.py
sed --in-place "s|\.serviceCert = .*|\.serviceCert = '/data/certs/hostcert.pem'|" config/asyncstageout/config.py
sed --in-place "s|\.serviceKey = .*|\.serviceKey = '/data/certs/hostkey.pem'|" config/asyncstageout/config.py
serverDN=$(openssl x509 -text -subject -noout -in /data/certs/hostcert.pem | grep subject= | sed 's/subject= //')
sed --in-place "s|\.serverDN = .*|\.serverDN = '$serverDN'|" config/asyncstageout/config.py
sed --in-place "s|\.log_level = .*|\.log_level = 10|" config/asyncstageout/config.py
sed --in-place "s|\.UISetupScript = .*|\.UISetupScript = '/data/srv/tmp.sh'|" config/asyncstageout/config.py

And create a file /data/srv/tmp.sh with just the following line:

#!/bin/sh

12) There is a configuration file shared by the Monitor component and PhEDEx, /data/srv/asyncstageout/current/config/asyncstageout/monitor.conf. In this file, make sure the service parameter points to the FTS3 server you intend to use.

service = https://lcgfts3.gridpp.rl.ac.uk:8443  # For the FTS3-RAL server.
or
service = https://fts3.cern.ch:8443  # For the production FTS3-CERN server.
or
service = https://fts3-pilot.cern.ch:8443  # For the pilot FTS3-CERN server.

13) Make sure the directories /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/Monitor[/work] were created. If not, create them.

mkdir /data/srv/asyncstageout/current/install/asyncstageout/Monitor
mkdir /data/srv/asyncstageout/current/install/asyncstageout/Monitor/work

14) Copy a valid proxy into /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy and export X509_USER_PROXY to point to that file.

If voms-proxy-init is not available in the VM, install the package that provides it.

yum provides */voms-proxy-init

Loaded plugins: changelog, kernel-module, priorities, protectbase, security, tsflags, versionlock
147 packages excluded due to repository priority protections
0 packages excluded due to repository protections
EGI-trustanchors/filelists                                                                                                                            |  15 kB     00:00     
glite-SCAS/filelists                                                                                                                                  | 2.8 kB     00:00     
glite-SCAS_ext/filelists                                                                                                                              |  12 kB     00:00     
glite-SCAS_updates/filelists                                                                                                                          | 2.3 kB     00:00     
slc6-extras/filelists_db                                                                                                                              | 181 kB     00:00     
slc6-updates/filelists_db                                                                                                                             |  28 MB     00:00     
voms-clients-2.0.12-1.el6.x86_64 : Virtual Organization Membership Service Clients
Repo        : epel
Matched from:
Filename    : /usr/bin/voms-proxy-info

sudo yum install voms-clients-2.0.12-1.el6.x86_64

You may also need to copy the directories /etc/vomses and /etc/grid-security/vomsdir/cms from lxplus.

sudo scp -r <username>@lxplus.cern.ch:/etc/vomses /etc/vomses
sudo scp -r <username>@lxplus.cern.ch:/etc/grid-security/vomsdir/cms /etc/grid-security/vomsdir/cms

voms-proxy-init --voms cms --valid 168:00

Enter GRID pass phrase:
Your identity: /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=atanasi/CN=710186/CN=Andres Jorge Tanasijczuk
Creating temporary proxy ....................................... Done
Contacting  lcg-voms.cern.ch:15002 [/DC=ch/DC=cern/OU=computers/CN=lcg-voms.cern.ch] "cms" Failed

Error: Error during SSL handshake:

Trying next server for cms.
Creating temporary proxy ........................................... Done
Contacting  voms.cern.ch:15002 [/DC=ch/DC=cern/OU=computers/CN=voms.cern.ch] "cms" Failed

Error: Error during SSL handshake:

Trying next server for cms.
Creating temporary proxy ........................................................................ Done
Contacting  lcg-voms2.cern.ch:15002 [/DC=ch/DC=cern/OU=computers/CN=lcg-voms2.cern.ch] "cms" Done
Creating proxy ............................... Done

Your proxy is valid until Sun Feb 22 00:53:58 2015

cp /tmp/x509up_u$(id -u) /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
export X509_USER_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy

15) Check that CouchDB is up and you can access it from your VM.

curl -X GET "http://$(hostname):5984/" -k --key $X509_USER_PROXY --cert $X509_USER_PROXY

{"couchdb":"Welcome","version":"1.1.1"}

16) CouchDB is protected by the VM firewall. You can not access CouchDB not even from another machine at CERN. To be able to access CouchDB from another machine, you need to either stop the iptables:

sudo /etc/init.d/iptables stop # To start the iptables again: sudo /etc/init.d/iptables start; To check the status: sudo /etc/init.d/iptables status

Or create an ssh tunnel between the machine from where you want to access CouchDB and the VM:

[mylaptop]$ ssh -D 1111 <username>@<hostname>.cern.ch

and configure the browser (Preferences - Network) to access the internet with SOCKS Proxy Server = localhost and Port = 1111. I did this in Firefox. If you close the ssh tunnel, you have to configure the browser back to the original proxy settings.

17) Change the FTS3 server for each of the 8 documents "T1_*_*" in the asynctransfer_config database to https://lcgfts3.gridpp.rl.ac.uk:8443, https://fts3.cern.ch:8446 or https://fts3-pilot.cern.ch:8446.

Cron jobs

Create cron jobs for querying the views every X minutes.

crontab -e

See https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement "Views caching" for the cron jobs that need to be added. I didn't create the cron that renews the proxy; therefore, wherever a cron curl command uses --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert, I changed it to --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy.
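As a hedged sketch of such a cron job (the database, design-document, and view names below are assumptions; take the authoritative list from the AsyncStageOutManagement page, and apply the proxy path substitution described above), an entry could look like:

```shell
# Hypothetical crontab entry: query one CouchDB view every 10 minutes so its index stays warm.
# The asynctransfer/AsyncTransfer/ftscp names are assumptions; use the real list from the wiki page.
*/10 * * * * curl -s -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy "http://localhost:5984/asynctransfer/_design/AsyncTransfer/_view/ftscp?limit=1" > /dev/null
```

Querying a view forces CouchDB to update its index, so the first "real" query from ASO does not pay the full indexing cost.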

Start/stop ASO

To start all ASO components:

./config/asyncstageout/manage start-asyncstageout

Starting AsyncStageOut...
Checking default database connection... ok.
Starting components: ['AsyncTransfer', 'Reporter', 'DBSPublisher', 'FilesCleaner', 'Statistics', 'RetryManager']
Starting : AsyncTransfer
Starting AsyncTransfer as a daemon 
Log will be in /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/AsyncTransfer 
Waiting 1 seconds, to ensure daemon file is created

started with pid 23078
Starting : Reporter
Starting Reporter as a daemon 
Log will be in /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/Reporter 
Waiting 1 seconds, to ensure daemon file is created

started with pid 23165
Starting : DBSPublisher
Starting DBSPublisher as a daemon 
Log will be in /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/DBSPublisher 
Waiting 1 seconds, to ensure daemon file is created

started with pid 23252
Starting : FilesCleaner
Starting FilesCleaner as a daemon 
Log will be in /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/FilesCleaner 
Waiting 1 seconds, to ensure daemon file is created

started with pid 23339
Starting : Statistics
Starting Statistics as a daemon 
Log will be in /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/Statistics 
Waiting 1 seconds, to ensure daemon file is created

started with pid 23348
Starting : RetryManager
Starting RetryManager as a daemon 
Log will be in /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/RetryManager 
Waiting 1 seconds, to ensure daemon file is created

started with pid 23353
2015-02-15 09:26:20: ASOMon[23365]: Reading config file /data/srv/asyncstageout/v1.0.3pre6/config/asyncstageout/monitor.conf
2015-02-15 09:26:20: ASOMon[23365]: Using FTS service https://lcgfts3.gridpp.rl.ac.uk:8443
ASOMon: pid 23367
writing logfile to /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/Monitor/aso-monitor.log

It may be that the monitor component will fail to start, in which case the above command output will end with a message like this:

...
started with pid 7207
2015-02-09 16:39:51: ASOMon[7212]: Reading config file /data/srv/asyncstageout/v1.0.3pre6/config/asyncstageout/monitor.conf
2015-02-09 16:39:51: ASOMon[7212]: Using FTS service https://lcgfts3.gridpp.rl.ac.uk:8443
Use of uninitialized value $me in concatenation (.) or string at /data/srv/asyncstageout/v1.0.3pre6/sw.pre.riahi/slc6_amd64_gcc481/cms/asyncstageout/1.0.3pre6/Monitor/perl_lib/ASO/Monitor.pm line 144.
: fatal error: cannot write to PID file (/data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/Monitor/aso-monitor.pid): No such file or directory

In that case one has to start the monitor component "by hand":

source apps/asyncstageout/Monitor/setup.sh; ./apps/asyncstageout/Monitor/ASO-Monitor.pl --config config/asyncstageout/monitor.conf
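The PID-file failure above usually means the Monitor's install directory is missing. Before retrying the by-hand start, a minimal pre-check sketch can help; the `ensure_state_dir` helper and the throwaway-directory example are illustrative, not part of ASO — on a real node pass the Monitor path from the error message:

```shell
# Hypothetical helper: make sure a component's state directory exists,
# since ASO-Monitor.pl aborts when it cannot write its PID file there.
ensure_state_dir() {
    # $1 = install directory of the component,
    # e.g. /data/srv/asyncstageout/v1.0.3pre6/install/asyncstageout/Monitor
    mkdir -p "$1" && echo "ready: $1"
}

# Example with a throwaway directory; substitute the real Monitor path.
ensure_state_dir "$(mktemp -d)/Monitor"
```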

To stop all ASO components:

./config/asyncstageout/manage stop-asyncstageout

Shutting down asyncstageot...
Checking default database connection... ok.
Stopping components: ['AsyncTransfer', 'Reporter', 'DBSPublisher', 'FilesCleaner', 'Statistics', 'RetryManager']
Stopping: AsyncTransfer
Stopping: Reporter
Stopping: DBSPublisher
Stopping: FilesCleaner
Stopping: Statistics
Stopping: RetryManager
Stopping: Monitor

Possible "glite-delegation-init: command not found" error

When running ASO, I got the following error message in ./install/asyncstageout/AsyncTransfer/ComponentLog:

2015-02-06 18:41:54,162:DEBUG:TransferDaemon:Starting <AsyncStageOut.TransferWorker.TransferWorker instance at 0x1fabe18>
2015-02-06 18:41:54,162:DEBUG:TransferWorker:executing command: export X509_USER_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/eebe07e240237dde878ab66bb56e7794e8d5b39e ; source /data/srv/tmp.sh ; glite-delegation-init -s https://fts3-pilot.cern.ch:8443 at: Fri, 06 Feb 2015 18:41:54 for: /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=atanasi/CN=710186/CN=Andres Jorge Tanasijczuk
2015-02-06 18:41:54,271:DEBUG:TransferWorker:Executing : 
command : export X509_USER_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/eebe07e240237dde878ab66bb56e7794e8d5b39e ; source /data/srv/tmp.sh ; glite-delegation-init -s https://fts3-pilot.cern.ch:8443 
output : 
error: /bin/sh: glite-delegation-init: command not found
retcode : 127
2015-02-06    18:41:54,271:DEBUG:TransferWorker:User proxy of atanasi could not be delagated! Trying next time.

To fix it, I had to install fts2-client. But first I had to create a file /etc/yum.repos.d/EMI-3-base.repo with the following content:

[EMI-3-base]
name=EMI3 base software
baseurl=http://linuxsoft.cern.ch/emi/3/sl6/x86_64/base
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-emi
exclude=
priority=15

yum install fts2-client
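A quick sanity check after the install, before restarting the AsyncTransfer component; the `have_cmd` helper is just an illustration:

```shell
# Hypothetical helper: report whether a command is available on PATH.
have_cmd() {
    command -v "$1" >/dev/null 2>&1 && echo yes || echo no
}

have_cmd glite-delegation-init   # should print "yes" once fts2-client is installed
have_cmd sh                      # sanity check: always "yes"
```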

Disk may become full after some days

After some days the disk in the VM may become full because of many ASO documents (or files related to the views, I don't know) and ASO will stop working. One has to stop ASO and CouchDB, and then do a clean-all.

./config/asyncstageout/manage stop-asyncstageout
./config/asyncstageout/manage stop-services
./config/asyncstageout/manage clean-all
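To catch the problem before ASO stops working, a usage check along these lines can be run from cron; the `disk_full` helper and the 90% threshold are illustrative choices, not ASO defaults:

```shell
# Hypothetical helper: flag a disk-usage percentage (as printed by df)
# that crosses the cleanup threshold.
disk_full() {
    # $1 = usage like "93%"; strip the trailing percent sign and compare.
    [ "${1%\%}" -ge 90 ] && echo FULL || echo OK
}

disk_full 93%   # prints FULL
disk_full 45%   # prints OK
# On a real node: disk_full "$(df --output=pcent /data | tail -1 | tr -d ' ')"
```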
 

Killing transfers

Revision 39 - 2015-02-11 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
Added:
>
>
CRAB Logo
 

Deployment of AsyncStageout for CRAB3

Added:
>
>
Complete: 4 Go to SWGuideCrab
 Read the generic deployment instructions.
Changed:
<
<

Machine details

>
>

Deployment of AsyncStageout for CRAB3 operators

Machine details

  Local service account used to deploy, run and operate the service is crab3.

If the machine is a new one, take a look at CAF Configuration; if the delegate DN does not exist, ask hn-cms-crabDevelopment@cernNOSPAMPLEASE.ch to update the REST configuration.

Changed:
<
<

Additional machine preparation steps

>
>

Additional machine preparation steps

The host must be registered for proxy retrieval from myproxy.cern.ch. Request it by sending an e-mail to px.support@cernNOSPAMPLEASE.ch giving the DN of the host certificate. If the host certificate is not correct, or needs to be updated, contact the VOC. The DN can be obtained by
Line: 47 to 55
 sudo chown crab3:zh /data/certs/*
Changed:
<
<

Deployment

>
>

Deployment

  The deployment and operations are done as the service user, so we switch to it:
Line: 135 to 143
 sed --in-place "s|\.log_level = .*|\.log_level = 10|" config/asyncstageout/config.py
Changed:
<
<

Operations

>
>

Operations

 
Changed:
<
<

OpsProxy Renewal or Creation:

>
>

OpsProxy Renewal or Creation:

  First connect to machine and create a seed for proxy delegation:
Line: 169 to 177
 3 */3 * * * /data/admin/ProxyRenew.sh /data/certs/ /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/ DELEGATION_NAME cms
Changed:
<
<

Starting and stopping the service or CouchDB:

>
>

Starting and stopping the service or CouchDB:

Environment needed before starting or stopping AsyncStageOut
Line: 197 to 205
 ./config/asyncstageout/manage stop-services
Changed:
<
<

Switch to FTS3

>
>

Switch to FTS3

  Set by hand all FTS servers endpoints in the ASO config database in the local CouchDB instance from the futon interface. There is one document per T1 site. For example the RAL FTS server can be found here:
Line: 207 to 215
 Modify the url from https://fts-fzk.gridka.de:8443/glite-data-transfer-fts/services/FileTransfer to https://fts3-pilot.cern.ch:8446
Changed:
<
<

Killing transfers

>
>

Killing transfers

It will kill all transfers in CouchDB, but if the FTS transfer was already submitted, it is currently not possible to kill it in FTS:
Line: 216 to 224
  OTHER (Will be updated)
Changed:
<
<

Log files

>
>

AsyncStageout log files

  Log files to watch for errors and to check and search in case of problems:

Revision 37 - 2015-01-14 - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
 

Deployment of AsyncStageout for CRAB3

Read the generic deployment instructions.

Machine details

Deleted:
<
<
Required software and initial configuration are provided by Quattor. One can take a look at the template.
 Local service account used to deploy, run and operate the service is crab3.
Changed:
<
<
If the machine is new one, take a look at CAF Configuration, if delegate DN does not exist, ask Justas Balcas or Marco Mascheroni to update the rest configuration.
>
>
If the machine is new one, take a look at CAF Configuration, if delegate DN does not exist, ask hn-cms-crabDevelopment@cernNOSPAMPLEASE.ch to update the rest configuration.
 

Additional machine preparation steps

Changed:
<
<
The host must be registered for proxy retrieval from myproxy.cern.ch. Request it by sending an e-mail to px.support@cernNOSPAMPLEASE.ch giving the DN of the host certificate. If host certificate is not correct, or need to be updated, need to contact Ivan Glushkov . It can be obtained by
>
>
The host must be registered for proxy retrieval from myproxy.cern.ch. Request it by sending an e-mail to px.support@cernNOSPAMPLEASE.ch giving the DN of the host certificate. If host certificate is not correct, or need to be updated, need to contact VOC . It can be obtained by
 voms-proxy-info -file /etc/grid-security/hostcert.pem -subject

In case voms-proxy-info is not available, use

Changed:
<
<
>
>
 openssl x509 -subject -noout -in /data/certs/hostcert.pem
Changed:
<
<
or
source /afs/cern.ch/cms/LCG/LCG-2/UI/cms_ui_env.sh

and then try voms-proxy-info. Registration with myproxy.cern.ch can be checked with:

>
>
Registration with myproxy.cern.ch can be checked with:
 ldapsearch -p 2170 -h myproxy.cern.ch -x -LLL -b "mds-vo-name=resource,o=grid" | grep $(hostname)

Prepare directories for the deployment, owned by the service account:

Changed:
<
<
>
>
 sudo mkdir /data/srv /data/admin /data/certs sudo chown crab3:zh /data/srv /data/admin /data/certs

Make a copy of the host certificate, accessible by the service account to be used by AsyncStageout:

Changed:
<
<
>
>
 sudo cp -p /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem /data/certs sudo chown crab3:zh /data/certs/*
Line: 49 to 50
 

Deployment

The deployment and operations are done as the service user, so we switch to it:

Changed:
<
<
>
>
 sudo -u crab3 -i bash

Create directories for the deployment scripts and the deployment:

Changed:
<
<
>
>
 mkdir /data/admin/asyncstageout mkdir /data/srv/asyncstageout

If not already done by a previous deployment, create the secrets file, filling in the CouchDB username and password, the IP address of the local machine and the host where the CRAB3 REST interface is installed:

Changed:
<
<
>
>
 cat > $HOME/Async.secrets <<EOF COUCH_USER=*** COUCH_PASS=*** COUCH_PORT=5984 COUCH_HOST=HOST_IP OPS_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
Changed:
<
<
UFC_SERVICE_URL=https://vocms29.cern.ch/crabserver/dev/filemetadata AMQ_AUTH_FILE=$HOME/AmqAuthFile
>
>
UFC_SERVICE_URL=https://cmsweb-testbed.cern.ch/crabserver/preprod/filemetadata
 COUCH_CERT_FILE=/data/certs/servicecert.pem COUCH_KEY_FILE=/data/certs/servicekey.pem EOF

The file contains sensitive data and must be protected with the appropriate permissions:

Changed:
<
<
>
>
 chmod 600 $HOME/Async.secrets
Changed:
<
<
Get the deployment scripts (take a look at the github dmwm/deployment repository for the latest tag -here we assume it is HG1404a-):
>
>
Get the deployment scripts (take a look at the github dmwm/deployment repository for the latest tag -here we assume it is HG1411a-):
 cd /data/admin/asyncstageout rm -rf Deployment
Changed:
<
<
wget -O cfg.zip --no-check-certificate https://github.com/dmwm/deployment/archive/HG1404a.zip
>
>
wget -O cfg.zip --no-check-certificate https://github.com/dmwm/deployment/archive/HG1411a.zip
 unzip cfg.zip; rm cfg.zip mv deployment-* Deployment cd Deployment

Perform the deployment of the appropriate AsyncStageout release tag from the corresponding CMS repository (check the AsyncStageOutManagement page or contact Hassen):

Changed:
<
<
ASOTAG=1.0.1pre11
>
>
ASOTAG=1.0.3pre1
 REPO=comp.pre.riahi
Changed:
<
<
ARCH=slc5_amd64_gcc461
>
>
ARCH=slc6_amd64_gcc481
 ./Deploy -R asyncstageout@$ASOTAG -s prep -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite ./Deploy -R asyncstageout@$ASOTAG -s sw -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite ./Deploy -R asyncstageout@$ASOTAG -s post -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite

Create a directory to store user credentials obtained from myproxy, accessible to the service exclusively:

Changed:
<
<
>
>
 mkdir /data/srv/asyncstageout/state/asyncstageout/creds chmod 700 /data/srv/asyncstageout/state/asyncstageout/

Commit patches if required:

Changed:
<
<
>
>
 cd /data/srv/asyncstageout/current/ #wget https://github.com/dmwm/WMCore/pull/4965.patch -O - | patch -d apps/asyncstageout/ -p 1 #wget https://github.com/dmwm/WMCore/pull/4967.patch -O - | patch -d apps/asyncstageout/ -p 1
Deleted:
<
<
# For testing patch (WMCore matching LFN) :
 #wget https://github.com/dmwm/WMCore/commit/a4563fa3cbc451dcce27669052518769a5e00a2a.patch -O - | patch -d apps/asyncstageout/lib/python2.6/site-packages/ -p 3

Initialize the service:

Changed:
<
<
>
>
 cd /data/srv/asyncstageout/current ./config/asyncstageout/manage activate-asyncstageout ./config/asyncstageout/manage start-services
Line: 123 to 122
 

Set correct values of some essential configuration parameters in the config file config/asyncstageout/config.py:

Changed:
<
<
>
>
 sed --in-place "s|\.credentialDir = .*|\.credentialDir = '/data/srv/asyncstageout/state/asyncstageout/creds'|" config/asyncstageout/config.py sed --in-place "s|\.serviceCert = .*|\.serviceCert = '/data/certs/hostcert.pem'|" config/asyncstageout/config.py sed --in-place "s|\.serviceKey = .*|\.serviceKey = '/data/certs/hostkey.pem'|" config/asyncstageout/config.py serverDN=$(openssl x509 -text -subject -noout -in /data/certs/hostcert.pem | grep subject= | sed 's/subject= //') sed --in-place "s|\.serverDN = .*|\.serverDN = '$serverDN'|" config/asyncstageout/config.py
Changed:
<
<
sed --in-place "s|\.couch_instance = .*|\.couch_instance = 'https://cmsweb.cern.ch/couchdb'|" config/asyncstageout/config.py sed --in-place "s|\.cache_area = .*|\.cache_area = 'https://cmsweb.cern.ch/crabserver/prod/filemetadata'|" config/asyncstageout/config.py
>
>
sed --in-place "s|\.couch_instance = .*|\.couch_instance = 'https://cmsweb-testbed.cern.ch/couchdb'|" config/asyncstageout/config.py sed --in-place "s|\.cache_area = .*|\.cache_area = 'https://cmsweb-testbed.cern.ch/crabserver/preprod/filemetadata'|" config/asyncstageout/config.py
 sed --in-place "s|\.opsProxy = .*|\.opsProxy = '/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy'|" config/asyncstageout/config.py
Deleted:
<
<
Monitoring (while the fix is not made for monitoring on cmsweb, we need to create a replication of the asynctransfer database):

curl -X POST http://username:password@server:5984/_replicator/ -H "Content-Type: application/json" -d '{"source":"https://cmsweb-testbed.cern.ch/couchdb/asynctransfer","target":"asynctransfer","continuous":true}'

 

Operations

OpsProxy Renewal or Creation:

First connect to machine and create a seed for proxy delegation:

Changed:
<
<
>
>
 ssh machine_name
Deleted:
<
<
source /afs/cern.ch/cms/LCG/LCG-2/UI/cms_ui_env.sh
 sudo mkdir /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/ voms-proxy-init -voms cms sudo cp -p /tmp/x509up_u$(id -u) /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/seed-100001.cert
Line: 153 to 147
 

Create a delegation (don't forget to change the MACHINE_NAME and DELEGATION_NAME):

Changed:
<
<
>
>
 myproxy-init -l DELEGATION_NAME_100001 -x -R "/DC=ch/DC=cern/OU=computers/CN=MACHINE_NAME.cern.ch" -c 720 -t 36 -Z "/DC=ch/DC=cern/OU=computers/CN=MACHINE_NAME.cern.ch" -s myproxy.cern.ch

Copy the script to /data/admin/ProxyRenew.sh and do chown:

Changed:
<
<
>
>
 sudo chown crab3:zh /data/admin/ProxyRenew.sh # Not needed if renewing

Before adding into crontab, try to run command and see if Proxy is renewed:

Changed:
<
<
>
>
 /data/admin/ProxyRenew.sh /data/certs/ /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/ DELEGATION_NAME cms

Add to crontab if it does not exist:

Changed:
<
<
>
>
 MAILTO="justas.balcas@cern.ch" 3 */3 * * * /data/admin/ProxyRenew.sh /data/certs/ /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/ DELEGATION_NAME cms
Changed:
<
<

Starting and stopping the service:

>
>

Starting and stopping the service or CouchDB:

Environment needed before starting or stopping AsyncStageOut

 export X509_USER_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy cd /data/srv/asyncstageout/current/ source ~/Async.secrets
Changed:
<
<
source /afs/cern.ch/cms/LCG/LCG-2/UI/cms_ui_env.sh
>
>
 
Added:
>
>
Starting AsyncStageOut (if starting AsyncStageOut for the first time, make sure you have entered the correct FTS3 server URLs in CouchDB)
 ./config/asyncstageout/manage start-asyncstageout
Added:
>
>

Stopping AsyncStageOut :

 ./config/asyncstageout/manage stop-asyncstageout
Changed:
<
<

Starting and stopping CouchDB

>
>

Starting CouchDB

 ./config/asyncstageout/manage start-services
Added:
>
>
 ./config/asyncstageout/manage stop-services
Changed:
<
<

Killing transfers

>
>

Switch to FTS3

 
Changed:
<
<
./config/asyncstageout/manage execute-asyncstageout kill-transfer -t <taskname> [-i docID comma separated list]
>
>
Set by hand all FTS servers endpoints in the ASO config database in the local CouchDB instance from the futon interface. There is one document per T1 site. For example the RAL FTS server can be found here:
http://CouchInstance:port/_utils/document.html?asynctransfer_config/T1_UK_RAL
 
Changed:
<
<
Examples:
>
>
Modify the url from https://fts-fzk.gridka.de:8443/glite-data-transfer-fts/services/FileTransfer to https://fts3-pilot.cern.ch:8446
 
Deleted:
<
<
- Kill all transfers in a task:
./config/asyncstageout/manage execute-asyncstageout kill-transfer -t <taskname>
 
Changed:
<
<
- Kill the transfers for specific documents in a task:
./config/asyncstageout/manage execute-asyncstageout kill-transfer -t <taskname> -i <docID1,docID2,...>
>
>

Killing transfers

It will kill all transfers in CouchDB, but if the FTS transfer was already submitted, it is currently not possible to kill it in FTS:

./config/asyncstageout/manage execute-asyncstageout kill-transfer -t <taskname> [-i docID comma separated list]
 
Changed:
<
<
(don't include the "<" and ">" symbols)
>
>
OTHER (Will be updated)
 

Log files

Log files to watch for errors and to check and search in case of problems:

AsyncStageout component logs:

Changed:
<
<
>
>
 ./install/asyncstageout/DBSPublisher/ComponentLog ./install/asyncstageout/Statistics/ComponentLog ./install/asyncstageout/AsyncTransfer/ComponentLog
Line: 223 to 227
 

CouchDB log:

Changed:
<
<
>
>
 ./install/couchdb/logs/couch.log
Changed:
<
<

Monitoring

The current monitoring in cmsweb-testbed uses old WMCore libraries. The asynctransfer database is replicated into the local CouchDB to get the monitoring.

Switch to FTS3

Set by hand all FTS servers endpoints in the ASO config database in the local CouchDB instance from the futon interface. There is one document per T1 site. For example the RAL FTS server can be found here:

http://CouchInstance:port/_utils/document.html?asynctransfer_config/T1_UK_RAL

Modify the url from https://fts-fzk.gridka.de:8443/glite-data-transfer-fts/services/FileTransfer to https://fts3-pilot.cern.ch:8446

Views caching

Until ticket https://github.com/dmwm/AsyncStageout/issues/4068 is fixed, you need to update your crontab by hand:

0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/asynctransfer/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/asynctransfer_stat/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/user_monitoring_asynctransfer/_compact' > /dev/null
0 */2 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984//asynctransfer_agent/_compact' > /dev/null

*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/ftscp_all' > /dev/null
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/sites' > /dev/null
*/10 * * * * curl -k --cert /home/vosusr01/gridcert/proxy.cert --key /home/vosusr01/gridcert/proxy.cert -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/JobsStatesByWorkflow' &> /dev/null
*/10 * * * * curl -k --cert /home/vosusr01/gridcert/proxy.cert --key /home/vosusr01/gridcert/proxy.cert -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/DBSPublisher/_view/publish' &> /dev/null
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'http://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/FilesByWorkflow' > /dev/null
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/_utils/database.html?asynctransfer/_design/AsyncTransfer/_view/get_acquired' > /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://username:password@server.cern.ch:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetAnalyticsConfig' > /dev/null
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://username:password@server.cern.ch:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetStatConfig' > /dev/null
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://username:password@server.cern.ch:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetTransferConfig' > /dev/null
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://username:password@server.cern.ch:5984/asynctransfer_agent/_design/Agent/_view/existWorkers' > /dev/null

*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'http://username:password@server.cern.ch:5984/asynctransfer/_design/monitor/_view/endedByTime' > /dev/null
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'http://username:password@server.cern.ch:5984/asynctransfer/_design/monitor/_view/filesCountByUser?group_level=1' > /dev/null
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'http://username:password@server.cern.ch:5984/asynctransfer/_design/monitor/_view/filesCountByTask?group_level=1' > /dev/null
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'http://username:password@server.cern.ch:5984/asynctransfer/_design/monitor/_view/filesCountByDestSource?group_level=2' > /dev/null
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'http://username:password@server.cern.ch:5984/asynctransfer/_design/monitor/_view/FailedAttachmentsByDocId' > /dev/null

0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/asynctransfer/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/asynctransfer_stat/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984//asynctransfer_agent/_compact' > /dev/null
4 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984//asynctransfer_agent/_compact' > /dev/null
8 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984//asynctransfer_agent/_compact' > /dev/null
12 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984//asynctransfer_agent/_compact' > /dev/null
16 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984//asynctransfer_agent/_compact' > /dev/null
20 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984//asynctransfer_agent/_compact' > /dev/null

@reboot /data/srv/asyncstageout/current/config/couchdb/manage sysboot
12 0 * * * /data/srv/asyncstageout/current/config/couchdb/manage compact all 'I did read documentation'
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/asynctransfer/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/asynctransfer_stat/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984//asynctransfer_agent/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/asynctransfer/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/asynctransfer_stat/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984//asynctransfer_agent/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/asynctransfer/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/asynctransfer_stat/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984//asynctransfer_agent/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/asynctransfer/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/asynctransfer_stat/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984//asynctransfer_agent/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/asynctransfer/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/asynctransfer_stat/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984//asynctransfer_agent/_compact' > /dev/null

MAILTO="justas.balcas@cern.ch"
3 */3 * * * /data/admin/ProxyRenew.sh /data/certs/ /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/ crab3vocms31 cms

vocms31 (prod) (cmsweb)

>
>

vocms031 (prod) (cmsweb)

 
Patch Description Date
Changed:
<
<
Version v1.0.3 Full reinstall 2014 August 8
https://github.com/juztas/AsyncStageout/commit/472a87698f140d06a3cf4094c8732bc528693da8

Myproxy valid until September 14 2014

>
>
Version v1.0.3pre1 Full reinstall 2014 Nov 19
 
Changed:
<
<

vocms33 (preprod) (cmsweb-testbed)

>
>

vocms021 (preprod) (cmsweb-testbed)

 
Patch Description Date
Changed:
<
<
Version v1.0.3pre3 Full reinstall 2014 Oct 28

Myproxy valid until October 28 2014

vocms21 (preprod bigcouch) (cmsweb-testbed)

Patch Description Date
Version v1.0.2pre3 Full reinstall 2014 July 03
https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement

Myproxy valid until September14 2014

>
>
Version v1.0.3pre1 Full reinstall 2014 Nov 19
 
META FILEATTACHMENT attachment="ProxyRenew.sh" attr="" comment="" date="1396593242" name="ProxyRenew.sh" path="ProxyRenew.sh" size="8920" user="jbalcas" version="1"
META TOPICMOVED by="atanasi" date="1412700584" from="CMS.ASODeployment" to="CMSPublic.AsoDeployment"

Revision 35 - 2014-11-12 - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"

Deployment of AsyncStageout for CRAB3

Line: 238 to 238
 http://CouchInstance:port/_utils/document.html?asynctransfer_config/T1_UK_RAL
Changed:
<
<
Modify the url from https://fts-fzk.gridka.de:8443/glite-data-transfer-fts/services/FileTransfer to https://fts3-pilot.cern.ch:8443
>
>
Modify the url from https://fts-fzk.gridka.de:8443/glite-data-transfer-fts/services/FileTransfer to https://fts3-pilot.cern.ch:8446
 

Views caching

Revision 34 - 2014-10-28 - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"

Deployment of AsyncStageout for CRAB3

Line: 308 to 308
 

vocms33 (preprod) (cmsweb-testbed)

Patch Description Date
Changed:
<
<
Version v1.0.2pre6 Full reinstall 2014 July 29
https://github.com/juztas/AsyncStageout/commit/472a87698f140d06a3cf4094c8732bc528693da8
>
>
Version v1.0.3pre3 Full reinstall 2014 Oct 28
 
Changed:
<
<
Myproxy valid until September 14 2014
>
>
Myproxy valid until October 28 2014
 

vocms21 (preprod bigcouch) (cmsweb-testbed)

Patch Description Date

Revision 33 - 2014-10-07 - AndresTanasijczuk

Line: 1 to 1
Changed:
<
<
META TOPICPARENT name="CMSPandaDeployment"
>
>
META TOPICPARENT name="https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideCrab"
 

Deployment of AsyncStageout for CRAB3

Read the generic deployment instructions.

Line: 323 to 323
 

META FILEATTACHMENT attachment="ProxyRenew.sh" attr="" comment="" date="1396593242" name="ProxyRenew.sh" path="ProxyRenew.sh" size="8920" user="jbalcas" version="1"
Added:
>
>
META TOPICMOVED by="atanasi" date="1412700584" from="CMS.ASODeployment" to="CMSPublic.AsoDeployment"

Revision 32 - 2014-09-09 - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 301 to 301
 

vocms31 (prod) (cmsweb)

Patch Description Date
Changed:
<
<
Version v1.0.2pre3 Full reinstall 2014 July 8
>
>
Version v1.0.3 Full reinstall 2014 August 8
 https://github.com/juztas/AsyncStageout/commit/472a87698f140d06a3cf4094c8732bc528693da8
Changed:
<
<
Myproxy valid until August 14 2014
>
>
Myproxy valid until September 14 2014
 

vocms33 (preprod) (cmsweb-testbed)

Patch Description Date
Version v1.0.2pre6 Full reinstall 2014 July 29
https://github.com/juztas/AsyncStageout/commit/472a87698f140d06a3cf4094c8732bc528693da8
Changed:
<
<
Myproxy valid until August 14 2014
>
>
Myproxy valid until September 14 2014
 

vocms21 (preprod bigcouch) (cmsweb-testbed)

Patch Description Date
Version v1.0.2pre3 Full reinstall 2014 July 03
https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement
Changed:
<
<
Myproxy valid until August 14 2014

vocms243 () (spare machine, not running)

Patch Description Date
Version v1.0.1pre8 Full reinstall 2014 Marc 27
#4965 Add cache_area to AsyncTransfer Comp 2014 Mar 27
#4072 Separate the optional component Analytics from the AsyncTransfer component 2014 Mar 27
#4961 Update matchLFN for fixing not matching LFN (TESTING) 2014 Mar 27
>
>
Myproxy valid until September14 2014
 

Revision 31 - 2014-07-30 - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 301 to 301
 

vocms31 (prod) (cmsweb)

Patch Description Date
Changed:
<
<
Version v1.0.1pre14 Full reinstall 2014 June 03
https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement
>
>
Version v1.0.2pre3 Full reinstall 2014 July 8
https://github.com/juztas/AsyncStageout/commit/472a87698f140d06a3cf4094c8732bc528693da8
  Myproxy valid until August 14 2014

vocms33 (preprod) (cmsweb-testbed)

Patch Description Date
Changed:
<
<
Version v1.0.1pre13 Full reinstall 2014 May 14
https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement
>
>
Version v1.0.2pre6 Full reinstall 2014 July 29
https://github.com/juztas/AsyncStageout/commit/472a87698f140d06a3cf4094c8732bc528693da8
  Myproxy valid until August 14 2014

Revision 30 - 2014-07-14 - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 304 to 304
 
Version v1.0.1pre14 Full reinstall 2014 June 03
https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement
Changed:
<
<
Myproxy valid until June 14 2014
>
>
Myproxy valid until August 14 2014
 

vocms33 (preprod) (cmsweb-testbed)

Patch Description Date
Version v1.0.1pre13 Full reinstall 2014 May 14
https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement
Changed:
<
<
Myproxy valid until June 14 2014
>
>
Myproxy valid until August 14 2014
 

vocms21 (preprod bigcouch) (cmsweb-testbed)

Patch Description Date
Changed:
<
<
Version v1.0.1pre14 Full reinstall 2014 June 03
>
>
Version v1.0.2pre3 Full reinstall 2014 July 03
 https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement
Changed:
<
<
Myproxy valid until June 16 2014
>
>
Myproxy valid until August 14 2014
 

vocms243 () (spare machine, not running)

Patch Description Date

Revision 29 (2014-06-03) - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 301 to 301
 

vocms31 (prod) (cmsweb)

Patch Description Date
Changed:
<
<
Version v1.0.1pre8 Full reinstall 2014 Marc 27
#4965 Add cache_area to AsyncTransfer Comp 2014 Mar 27
#4072 Separate the optional component Analytics from the AsyncTransfer component 2014 Mar 27
#4961 Update matchLFN for fixing not matching LFN (TESTING) 2014 Mar 27
#XXXX Bug in managing proxy 2014 April 16
>
>
Version v1.0.1pre14 Full reinstall 2014 June 03
https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement
  Myproxy valid until June 14 2014

vocms33 (preprod) (cmsweb-testbed)

Patch Description Date
Changed:
<
<
Version v1.0.1pre11 Full reinstall 2014 May 14
#5116 Fix ASO deployment using wmagent scripts. 2014 May 14
#4961 Update matchLFN for fixing not matching LFN (TESTING) 2014 May 14
>
>
Version v1.0.1pre13 Full reinstall 2014 May 14
https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement
  Myproxy valid until June 14 2014

vocms21 (preprod bigcouch) (cmsweb-testbed)

Patch Description Date
Changed:
<
<
Version v1.0.1pre11 Full reinstall 2014 May 16
#5116 Fix ASO deployment using wmagent scripts. 2014 May 16
#4961 Update matchLFN for fixing not matching LFN (TESTING) 2014 May 16
>
>
Version v1.0.1pre14 Full reinstall 2014 June 03
https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement
  Myproxy valid until June 16 2014

Revision 28 (2014-05-16) - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 65 to 65
COUCH_USER=***
COUCH_PASS=***
COUCH_PORT=5984
Changed:
<
<
COUCH_HOST=128.142.174.28
>
>
COUCH_HOST=HOST_IP
OPS_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
UFC_SERVICE_URL=https://vocms29.cern.ch/crabserver/dev/filemetadata
AMQ_AUTH_FILE=$HOME/AmqAuthFile
Line: 91 to 91
  Perform the deployment of the appropriate AsyncStageout release tag from the corresponding CMS repository (check the AsyncStageOutManagement page or contact Hassen):
Changed:
<
<
ASOTAG=1.0.1pre8
>
>
ASOTAG=1.0.1pre11
 REPO=comp.pre.riahi
Changed:
<
<
./Deploy -R asyncstageout@$ASOTAG -s prep -A slc5_amd64_gcc461 -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite ./Deploy -R asyncstageout@$ASOTAG -s sw -A slc5_amd64_gcc461 -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite ./Deploy -R asyncstageout@$ASOTAG -s post -A slc5_amd64_gcc461 -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
>
>
ARCH=slc5_amd64_gcc461
./Deploy -R asyncstageout@$ASOTAG -s prep -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
./Deploy -R asyncstageout@$ASOTAG -s sw -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
./Deploy -R asyncstageout@$ASOTAG -s post -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
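The three Deploy invocations above differ only in the stage argument (prep, sw, post), so as an illustrative convenience (not part of the documented procedure) they can be driven from a small loop; the echo keeps this a dry run:

```shell
#!/bin/sh
# Dry-run sketch: print the three deployment stage commands in order.
# ASOTAG, REPO and ARCH mirror the values used in this guide; remove the
# "echo" to actually execute the Deploy script.
ASOTAG=1.0.1pre11
REPO=comp.pre.riahi
ARCH=slc5_amd64_gcc461

deploy_stages() {
    for stage in prep sw post; do
        echo ./Deploy -R asyncstageout@$ASOTAG -s $stage -A $ARCH -t v$ASOTAG \
            -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
    done
}

deploy_stages
```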
 

Create a directory to store user credentials obtained from myproxy, accessible to the service exclusively:

Line: 107 to 108
 Commit patches if required:
cd /data/srv/asyncstageout/current/
Changed:
<
<
wget https://github.com/dmwm/WMCore/pull/4965.patch -O - | patch -d apps/asyncstageout/ -p 1 wget https://github.com/dmwm/WMCore/pull/4967.patch -O - | patch -d apps/asyncstageout/ -p 1
>
>
#wget https://github.com/dmwm/WMCore/pull/4965.patch -O - | patch -d apps/asyncstageout/ -p 1
#wget https://github.com/dmwm/WMCore/pull/4967.patch -O - | patch -d apps/asyncstageout/ -p 1
 # For testing patch (WMCore matching LFN) :
Changed:
<
<
wget https://github.com/dmwm/WMCore/commit/a4563fa3cbc451dcce27669052518769a5e00a2a.patch -O - | patch -d apps/asyncstageout/lib/python2.6/site-packages/ -p 3
>
>
#wget https://github.com/dmwm/WMCore/commit/a4563fa3cbc451dcce27669052518769a5e00a2a.patch -O - | patch -d apps/asyncstageout/lib/python2.6/site-packages/ -p 3
 

Initialize the service:

Line: 218 to 219
./install/asyncstageout/DBSPublisher/ComponentLog
./install/asyncstageout/Statistics/ComponentLog
./install/asyncstageout/AsyncTransfer/ComponentLog
Added:
>
>
./install/asyncstageout/FilesCleaner/ComponentLog
 

CouchDB log:

Line: 227 to 229
 

Monitoring

Changed:
<
<
The current monitoring in cmsweb-testbed for old WMCore libraries used. The asynctransfer database is replicated into the local couchDB to get the monitoring. Monitoring page of ASO Monitoring.
>
>
The current monitoring in cmsweb-testbed still uses the old WMCore libraries. The asynctransfer database is replicated into the local CouchDB to feed the monitoring.
 

Switch to FTS3

Line: 236 to 238
 http://CouchInstance:port/_utils/document.html?asynctransfer_config/T1_UK_RAL
Changed:
<
<
Modify the url from https://fts-fzk.gridka.de:8443/glite-data-transfer-fts/services/FileTransfer to https://lcgfts3.gridpp.rl.ac.uk:8443
>
>
Modify the url from https://fts-fzk.gridka.de:8443/glite-data-transfer-fts/services/FileTransfer to https://fts3-pilot.cern.ch:8443
 

Views caching

Line: 305 to 307
 
#4961 Update matchLFN for fixing not matching LFN (TESTING) 2014 Mar 27
#XXXX Bug in managing proxy 2014 April 16
Changed:
<
<
Myproxy valid until May 14 2014
>
>
Myproxy valid until June 14 2014
 

vocms33 (preprod) (cmsweb-testbed)

Patch Description Date
Changed:
<
<
Version v1.0.1pre7 Full reinstall 2014 feb 25
#4965 Add cache_area to AsyncTransfer Comp 2014 feb 25
#4072 Separate the optional component Analytics from the AsyncTransfer component 2014 feb 25
>
>
Version v1.0.1pre11 Full reinstall 2014 May 14
#5116 Fix ASO deployment using wmagent scripts. 2014 May 14
#4961 Update matchLFN for fixing not matching LFN (TESTING) 2014 May 14
 
Added:
>
>
Myproxy valid until June 14 2014
 
Changed:
<
<
Myproxy valid until May 14 2014
>
>

vocms21 (preprod bigcouch) (cmsweb-testbed)

Patch Description Date
Version v1.0.1pre11 Full reinstall 2014 May 16
#5116 Fix ASO deployment using wmagent scripts. 2014 May 16
#4961 Update matchLFN for fixing not matching LFN (TESTING) 2014 May 16

Myproxy valid until June 16 2014

 

vocms243 () (spare machine, not running)

Patch Description Date

Revision 27 (2014-04-17) - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 303 to 303
 
#4965 Add cache_area to AsyncTransfer Comp 2014 Mar 27
#4072 Separate the optional component Analytics from the AsyncTransfer component 2014 Mar 27
#4961 Update matchLFN for fixing not matching LFN (TESTING) 2014 Mar 27
Added:
>
>
#XXXX Bug in managing proxy 2014 April 16
  Myproxy valid until May 14 2014

Revision 26 (2014-04-14) - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 148 to 148
sudo mkdir /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/
voms-proxy-init -voms cms
sudo cp -p /tmp/x509up_u$(id -u) /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/seed-100001.cert
Added:
>
>
sudo chown crab3:zh /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/seed-100001.cert
 

Create a delegation (don't forget to change the MACHINE NAME and DELEGATION_NAME):

Line: 157 to 158
  Copy the script to /data/admin/ProxyRenew.sh and do chown:
Changed:
<
<
sudo chown crab3:zh /data/admin/ProxyRenew.sh
>
>
sudo chown crab3:zh /data/admin/ProxyRenew.sh # Not needed if renewing
 

Before adding into crontab, try to run command and see if Proxy is renewed:

Line: 165 to 166
 /data/admin/ProxyRenew.sh /data/certs/ /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/ DELEGATION_NAME cms
Added:
>
>
Add to crontab if it does not exist:
MAILTO="justas.balcas@cern.ch"
3 */3 * * * /data/admin/ProxyRenew.sh /data/certs/ /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/ DELEGATION_NAME cms
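The cron entry above re-runs ProxyRenew.sh every three hours. Purely as an illustration of the renewal decision such a script has to make (the 12-hour threshold below is an assumption, not a value taken from ProxyRenew.sh), the logic can be sketched as:

```shell
#!/bin/sh
# Sketch of a renewal decision based on the proxy lifetime that remains,
# e.g. as reported by `voms-proxy-info --timeleft` (seconds).
# The 12-hour threshold is an illustrative assumption.
needs_renewal() {
    timeleft=$1                  # seconds of validity left
    threshold=$((12 * 3600))     # renew below 12 hours
    [ "$timeleft" -lt "$threshold" ]
}

if needs_renewal 3600;  then echo "renew"; else echo "ok"; fi   # prints "renew"
if needs_renewal 86400; then echo "renew"; else echo "ok"; fi   # prints "ok"
```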
 

Starting and stopping the service:

export X509_USER_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
Line: 297 to 304
 
#4072 Separate the optional component Analytics from the AsyncTransfer component 2014 Mar 27
#4961 Update matchLFN for fixing not matching LFN (TESTING) 2014 Mar 27
Added:
>
>
Myproxy valid until May 14 2014
 

vocms33 (preprod) (cmsweb-testbed)

Patch Description Date
Version v1.0.1pre7 Full reinstall 2014 Feb 25
#4965 Add cache_area to AsyncTransfer Comp 2014 Feb 25
#4072 Separate the optional component Analytics from the AsyncTransfer component 2014 Feb 25
Added:
>
>
Myproxy valid until May 14 2014
 

vocms243 () (spare machine, not running)

Patch Description Date
Version v1.0.1pre8 Full reinstall 2014 Mar 27

Revision 25 (2014-04-05) - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 72 to 72
COUCH_CERT_FILE=/data/certs/servicecert.pem
COUCH_KEY_FILE=/data/certs/servicekey.pem
EOF
Deleted:
<
<
chmod 600 $HOME/Async.secrets
 
Changed:
<
<
The file contains sensitive data and must be protected with the appropriate permissions.
>
>
The file contains sensitive data and must be protected with the appropriate permissions:
chmod 600 $HOME/Async.secrets
 
Changed:
<
<
Get the deployment scripts**:
>
>
Get the deployment scripts (take a look at the GitHub dmwm/deployment repository for the latest tag; here we assume it is HG1404a):
 
cd /data/admin/asyncstageout
rm -rf Deployment
Line: 87 to 89
 cd Deployment
Changed:
<
<
** Before getting deployment script, please take a look here for the latest tag : https://github.com/dmwm/deployment

* For ASO please check : https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement or contact Hassen

Perform the deployment of the appropriate AsyncStageout*** release tag from the corresponding CMS repository:

>
>
Perform the deployment of the appropriate AsyncStageout release tag from the corresponding CMS repository (check the AsyncStageOutManagement page or contact Hassen):
 
ASOTAG=1.0.1pre8
REPO=comp.pre.riahi
Line: 100 to 98
 ./Deploy -R asyncstageout@$ASOTAG -s post -A slc5_amd64_gcc461 -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
Changed:
<
<
Create directory to store user credentials obtained from myproxy, accessible to the service exclusively:
>
>
Create a directory to store user credentials obtained from myproxy, accessible to the service exclusively:
 
mkdir /data/srv/asyncstageout/state/asyncstageout/creds
chmod 700 /data/srv/asyncstageout/state/asyncstageout/
Changed:
<
<
Commit patches if it is required :
>
>
Commit patches if required:
 
cd /data/srv/asyncstageout/current/
 wget https://github.com/dmwm/WMCore/pull/4965.patch -O - | patch -d apps/asyncstageout/ -p 1
Line: 135 to 133
 sed --in-place "s|\.opsProxy = .*|\.opsProxy = '/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy'|" config/asyncstageout/config.py
Changed:
<
<
Monitoring (While the fix is not made for monitoring on cmsweb, we need to create a replication ):
>
>
Monitoring (while the fix is not made for monitoring on cmsweb, we need to create a replication of the asynctransfer database):
 
Changed:
<
<
curl -X POST http://username:password@server:5984/_replicator/ -H "Content-Type: application/json" -d '{"source":"https://cmsweb-testbed.cern.ch/couchdb/asynctransfer","target":"asynctransfer","continuous":true}
>
>
curl -X POST http://username:password@server:5984/_replicator/ -H "Content-Type: application/json" -d '{"source":"https://cmsweb-testbed.cern.ch/couchdb/asynctransfer","target":"asynctransfer","continuous":true}'
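Note the closing single quote at the end of the -d argument: without it (as in the earlier version of the command) the shell never terminates the JSON string. As a hedged sketch, the replication document can also be assembled first and then POSTed; the server name and credentials are placeholders:

```shell
#!/bin/sh
# Build the JSON document for CouchDB's _replicator database.
# SOURCE/TARGET mirror the curl command above; server and credentials
# in the commented POST are placeholders.
SOURCE="https://cmsweb-testbed.cern.ch/couchdb/asynctransfer"
TARGET="asynctransfer"
BODY=$(printf '{"source":"%s","target":"%s","continuous":true}' "$SOURCE" "$TARGET")
echo "$BODY"
# POST it with:
#   curl -X POST http://username:password@server:5984/_replicator/ \
#        -H "Content-Type: application/json" -d "$BODY"
```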
 

Operations

OpsProxy Renewal or Creation:

Changed:
<
<
First connect to machine and create seed for proxy delegation:
>
>
First connect to the machine and create a seed for proxy delegation:
 
ssh machine_name
source /afs/cern.ch/cms/LCG/LCG-2/UI/cms_ui_env.sh
Line: 152 to 150
 sudo cp -p /tmp/x509up_u$(id -u) /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/seed-100001.cert
Changed:
<
<
Create a delegation, don`t forget to change the MACHINE NAME and DELEGATION_NAME:
>
>
Create a delegation (don't forget to change the MACHINE NAME and DELEGATION_NAME):
 
myproxy-init -l DELEGATION_NAME_100001 -x -R "/DC=ch/DC=cern/OU=computers/CN=MACHINE_NAME.cern.ch" -c 720 -t 36 -Z "/DC=ch/DC=cern/OU=computers/CN=MACHINE_NAME.cern.ch" -s myproxy.cern.ch
Changed:
<
<
Copy script to /data/admin/ProxyRenew.sh and do chown:
>
>
Copy the script to /data/admin/ProxyRenew.sh and do chown:
 
sudo chown crab3:zh /data/admin/ProxyRenew.sh
Line: 222 to 220
 

Monitoring

Changed:
<
<
The current monitoring in cmsweb-testbed for old WMCore libraries used. The asynctransfer database is replicated into the local couch to get the monitoring. Monitoring page of ASO ASO Monitoring
>
>
The current monitoring in cmsweb-testbed for old WMCore libraries used. The asynctransfer database is replicated into the local couchDB to get the monitoring. Monitoring page of ASO Monitoring.
 

Switch to FTS3

Revision 24 (2014-04-04) - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 10 to 10
  Local service account used to deploy, run and operate the service is crab3.
Added:
>
>
If the machine is a new one, take a look at the CAF Configuration; if the delegated DN does not exist, ask Justas Balcas or Marco Mascheroni to update the REST configuration.
 

Additional machine preparation steps

Added:
>
>
The host must be registered for proxy retrieval from myproxy.cern.ch. Request it by sending an e-mail to px.support@cernNOSPAMPLEASE.ch giving the DN of the host certificate. If the host certificate is not correct, or needs to be updated, contact Ivan Glushkov. The DN can be obtained with
voms-proxy-info -file /etc/grid-security/hostcert.pem -subject
Line: 23 to 28
 
source /afs/cern.ch/cms/LCG/LCG-2/UI/cms_ui_env.sh
Changed:
<
<
and then try voms-proxy-info. Registration with myproxy.cern.ch can be checked with:
>
>
and then try voms-proxy-info. Registration with myproxy.cern.ch can be checked with:
 
ldapsearch -p 2170 -h myproxy.cern.ch -x -LLL -b "mds-vo-name=resource,o=grid" | grep $(hostname)
Line: 134 to 141
 

Operations

Added:
>
>

OpsProxy Renewal or Creation:

First connect to machine and create seed for proxy delegation:

ssh machine_name
source /afs/cern.ch/cms/LCG/LCG-2/UI/cms_ui_env.sh
sudo mkdir /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/
voms-proxy-init -voms cms
sudo cp -p /tmp/x509up_u$(id -u) /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/seed-100001.cert

Create a delegation, don't forget to change the MACHINE NAME and DELEGATION_NAME:

myproxy-init -l DELEGATION_NAME_100001 -x -R "/DC=ch/DC=cern/OU=computers/CN=MACHINE_NAME.cern.ch" -c 720 -t 36 -Z "/DC=ch/DC=cern/OU=computers/CN=MACHINE_NAME.cern.ch" -s myproxy.cern.ch

Copy script to /data/admin/ProxyRenew.sh and do chown:

sudo chown crab3:zh /data/admin/ProxyRenew.sh

Before adding into crontab, try to run command and see if Proxy is renewed:

/data/admin/ProxyRenew.sh /data/certs/ /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/ DELEGATION_NAME cms
 

Starting and stopping the service:

export X509_USER_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
Line: 169 to 202
 ./config/asyncstageout/manage execute-asyncstageout kill-transfer -t -i <docID1,docID2,...>
Changed:
<
<
(don't include the "<" and ">" symbols)
>
>
(don't include the "<" and ">" symbols)
 

Log files

Added:
>
>
 Log files to watch for errors and to check and search in case of problems:

AsyncStageout component logs:

Line: 187 to 222
 

Monitoring

Changed:
<
<
The current monitoring in cmsweb-testbed for old WMCore libraries used. The asynctransfer database is replicated into the local couch to get the monitoring. Monitoring page of ASO ASO Monitoring
>
>
The current monitoring in cmsweb-testbed for old WMCore libraries used. The asynctransfer database is replicated into the local couch to get the monitoring. Monitoring page of ASO ASO Monitoring
 

Switch to FTS3

Line: 201 to 235
 

Views caching

Changed:
<
<
Until this ticket https://github.com/dmwm/AsyncStageout/issues/4068 will be fixed you need to update your contab by hand:

57 */4 * * * /data/proxy/vomsrenew.sh &> /tmp/vomsrenew.log

>
>
Until ticket https://github.com/dmwm/AsyncStageout/issues/4068 is fixed, you need to update your crontab by hand:
 
Changed:
<
<
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST ' http://user:password@127.0.0.1:5984/asynctransfer/_compact' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://cmsweb-testbed.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/ftscp_all' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://cmsweb-testbed.cern.ch/couchdb/asynctransfer/_design/DBSPublisher/_view/publish' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://cmsweb-testbed.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/ftscp' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://cmsweb-testbed.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/ftscp_retry' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://user:password@127.0.0.1:5984/couchdb/asynctransfer/_design/monitor/_view/endedByTime' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://user:password@127.0.0.1:5984/couchdb/asynctransfer/_design/monitor/_view/endedByTime?group_level=4' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://user:password@127.0.0.1:5984/couchdb//asynctransfer/design/monitor/_view/endedByTime' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://user:password@127.0.0.1:5984/couchdb/asynctransfer/_design/monitor/_view/filesCountByUser?group_level=1' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://user:password@127.0.0.1:5984/couchdb/asynctransfer/_design/monitor/_view/filesCountByTask?group_level=1' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://user:password@127.0.0.1:5984/couchdb/asynctransfer/_design/monitor/_view/filesCountByDestSource?group_level=2' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://user:password@127.0.0.1:5984/couchdb/asynctransfer/_design/monitor/_view/FailedAttachmentsByDocId' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://user:password@127.0.0.1:5984/couchdb/asynctransfer/_design/monitor/_view/FilesByWorkflow' > /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' http://user:password@127.0.0.1:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetAnalyticsConfig' > /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' http://user:password@127.0.0.1:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetStatConfig' > /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' http://user:password@127.0.0.1:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetTransferConfig' > /dev/null

>
>
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/asynctransfer/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/asynctransfer_stat/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/user_monitoring_asynctransfer/_compact' > /dev/null
0 */2 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984//asynctransfer_agent/_compact' > /dev/null

*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/ftscp_all' > /dev/null
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/sites' > /dev/null
*/10 * * * * curl -k --cert /home/vosusr01/gridcert/proxy.cert --key /home/vosusr01/gridcert/proxy.cert -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/JobsStatesByWorkflow' &> /dev/null
*/10 * * * * curl -k --cert /home/vosusr01/gridcert/proxy.cert --key /home/vosusr01/gridcert/proxy.cert -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/DBSPublisher/_view/publish' &> /dev/null
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'http://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/FilesByWorkflow' > /dev/null
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/_utils/database.html?asynctransfer/_design/AsyncTransfer/_view/get_acquired' > /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://username:password@server.cern.ch:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetAnalyticsConfig' > /dev/null
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://username:password@server.cern.ch:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetStatConfig' > /dev/null
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://username:password@server.cern.ch:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetTransferConfig' > /dev/null
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://username:password@server.cern.ch:5984/asynctransfer_agent/_design/Agent/_view/existWorkers' > /dev/null

*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'http://username:password@server.cern.ch:5984/asynctransfer/_design/monitor/_view/endedByTime' > /dev/null
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'http://username:password@server.cern.ch:5984/asynctransfer/_design/monitor/_view/filesCountByUser?group_level=1' > /dev/null
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'http://username:password@server.cern.ch:5984/asynctransfer/_design/monitor/_view/filesCountByTask?group_level=1' > /dev/null
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'http://username:password@server.cern.ch:5984/asynctransfer/_design/monitor/_view/filesCountByDestSource?group_level=2' > /dev/null
*/10 * * * * curl -k --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'http://username:password@server.cern.ch:5984/asynctransfer/_design/monitor/_view/FailedAttachmentsByDocId' > /dev/null

0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/asynctransfer/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/asynctransfer_stat/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984//asynctransfer_agent/_compact' > /dev/null
4 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984//asynctransfer_agent/_compact' > /dev/null
8 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984//asynctransfer_agent/_compact' > /dev/null
12 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984//asynctransfer_agent/_compact' > /dev/null
16 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984//asynctransfer_agent/_compact' > /dev/null
20 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984//asynctransfer_agent/_compact' > /dev/null

@reboot /data/srv/asyncstageout/current/config/couchdb/manage sysboot
12 0 * * * /data/srv/asyncstageout/current/config/couchdb/manage compact all 'I did read documentation'
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/asynctransfer/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984/asynctransfer_stat/_compact' > /dev/null
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://username:password@server.cern.ch:5984//asynctransfer_agent/_compact' > /dev/null
 
Changed:
<
<
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' http://user:password@127.0.0.1:5984/asynctransfer_config/_design/AsyncTransfer/_view/sites' > /dev/null
>
>
MAILTO="justas.balcas@cern.ch"
3 */3 * * * /data/admin/ProxyRenew.sh /data/certs/ /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/ crab3vocms31 cms
 
Changed:
<
<

vocms243 (prod) (cmsweb)

>
>

vocms31 (prod) (cmsweb)

 
Patch Description Date
Version v1.0.1pre8 Full reinstall 2014 Mar 27
#4965 Add cache_area to AsyncTransfer Comp 2014 Mar 27
Line: 252 to 304
 
Version v1.0.1pre7 Full reinstall 2014 Feb 25
#4965 Add cache_area to AsyncTransfer Comp 2014 Feb 25
#4072 Separate the optional component Analytics from the AsyncTransfer component 2014 Feb 25
\ No newline at end of file
Added:
>
>

vocms243 () (spare machine, not running)

Patch Description Date
Version v1.0.1pre8 Full reinstall 2014 Mar 27
#4965 Add cache_area to AsyncTransfer Comp 2014 Mar 27
#4072 Separate the optional component Analytics from the AsyncTransfer component 2014 Mar 27
#4961 Update matchLFN for fixing not matching LFN (TESTING) 2014 Mar 27

META FILEATTACHMENT attachment="ProxyRenew.sh" attr="" comment="" date="1396593242" name="ProxyRenew.sh" path="ProxyRenew.sh" size="8920" user="jbalcas" version="1"

Revision 23 (2014-04-03) - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Revision 22 (2014-04-03) - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 101 to 101
  Commit patches if it is required :
Added:
>
>
cd /data/srv/asyncstageout/current/
  wget https://github.com/dmwm/WMCore/pull/4965.patch -O - | patch -d apps/asyncstageout/ -p 1 wget https://github.com/dmwm/WMCore/pull/4967.patch -O - | patch -d apps/asyncstageout/ -p 1 # For testing patch (WMCore matching LFN) :
Line: 122 to 123
 sed --in-place "s|\.serviceKey = .*|\.serviceKey = '/data/certs/hostkey.pem'|" config/asyncstageout/config.py serverDN=$(openssl x509 -text -subject -noout -in /data/certs/hostcert.pem | grep subject= | sed 's/subject= //') sed --in-place "s|\.serverDN = .*|\.serverDN = '$serverDN'|" config/asyncstageout/config.py
Changed:
<
<

Also add these lines in the appropriate sections of the config file:

config.Statistics.opsProxy = '/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy'
>
>
sed --in-place "s|\.couch_instance = .*|\.couch_instance = 'https://cmsweb.cern.ch/couchdb'|" config/asyncstageout/config.py
sed --in-place "s|\.cache_area = .*|\.cache_area = 'https://cmsweb.cern.ch/crabserver/prod/filemetadata'|" config/asyncstageout/config.py
sed --in-place "s|\.opsProxy = .*|\.opsProxy = '/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy'|" config/asyncstageout/config.py
 

Monitoring (While the fix is not made for monitoring on cmsweb, we need to create a replication ):

Revision 21 (2014-04-03) - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 58 to 58
COUCH_USER=***
COUCH_PASS=***
COUCH_PORT=5984
Changed:
<
<
COUCH_HOST=128.142.172.118
>
>
COUCH_HOST=128.142.174.28
OPS_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
UFC_SERVICE_URL=https://vocms29.cern.ch/crabserver/dev/filemetadata
AMQ_AUTH_FILE=$HOME/AmqAuthFile
Added:
>
>
COUCH_CERT_FILE=/data/certs/servicecert.pem
COUCH_KEY_FILE=/data/certs/servicekey.pem
EOF
chmod 600 $HOME/Async.secrets

Revision 20 (2014-04-03) - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 19 to 19
 
openssl x509 -subject -noout -in /data/certs/hostcert.pem
Changed:
<
<
>
>
or
source /afs/cern.ch/cms/LCG/LCG-2/UI/cms_ui_env.sh
and then try voms-proxy-info.
 Registration with myproxy.cern.ch can be checked with:
ldapsearch -p 2170 -h myproxy.cern.ch -x -LLL -b "mds-vo-name=resource,o=grid" | grep $(hostname)
Line: 40 to 44
 

Deployment

The deployment and operations are done as the service user, so we switch to it:
Changed:
<
<
sudo -u panda -i bash
>
>
sudo -u crab3 -i bash
  Create directories for the deployment scripts and the deployment:

Revision 19 2014-04-02 - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 11 to 11
 Local service account used to deploy, run and operate the service is crab3.

Additional machine preparation steps

Changed:
<
<
The host must be registered for proxy retrieval from myproxy.cern.ch. Request it by sending an e-mail to px.support@cern.ch giving the DN of the host certificate. It can be obtained by
>
>
The host must be registered for proxy retrieval from myproxy.cern.ch. Request it by sending an e-mail to px.support@cern.ch giving the DN of the host certificate. If the host certificate is not correct or needs to be updated, contact Ivan Glushkov. It can be obtained by
 
voms-proxy-info -file /etc/grid-security/hostcert.pem -subject
Line: 28 to 28
 Prepare directories for the deployment, owned by the service account:
sudo mkdir /data/srv /data/admin /data/certs
Changed:
<
<
sudo chown panda:zh /data/srv /data/admin /data/certs
>
>
sudo chown crab3:zh /data/srv /data/admin /data/certs
 

Make a copy of the host certificate, accessible by the service account to be used by AsyncStageout:

sudo cp -p /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem /data/certs
Changed:
<
<
sudo chown panda:zh /data/certs/*
>
>
sudo chown crab3:zh /data/certs/*
 

Deployment

Revision 18 2014-04-02 - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 6 to 6
 

Machine details

Deleted:
<
<
The service is deployed on vocms243.
 Required software and initial configuration are provided by Quattor. One can take a look at the template.
Changed:
<
<
Local service account used to deploy, run and operate the service is panda.
>
>
Local service account used to deploy, run and operate the service is crab3.
 

Additional machine preparation steps

The host must be registered for proxy retrieval from myproxy.cern.ch. Request it by sending an e-mail to px.support@cern.ch giving the DN of the host certificate. It can be obtained by

Revision 17 2014-03-31 - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 238 to 238
 

vocms243 (prod) (cmsweb)

Patch Description Date
Changed:
<
<
Version v1.0.1pre5 Full reinstall 2014 Mar 27
>
>
Version v1.0.1pre8 Full reinstall 2014 Mar 27
 
#4965 Add cache_area to AsyncTransfer Comp 2014 Mar 27
#4072 Separate the optional component Analytics from the AsyncTransfer component 2014 Mar 27
#4961 Update matchLFN for fixing not matching LFN (TESTING) 2014 Mar 27

Revision 16 2014-03-27 - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 66 to 66
  The file contains sensitive data and must be protected with the appropriate permissions.
Changed:
<
<
Get the deployment scripts:
>
>
Get the deployment scripts**:
 
cd /data/admin/asyncstageout
rm -rf Deployment
Changed:
<
<
wget -O cfg.zip --no-check-certificate https://github.com/dmwm/deployment/archive/HG1306b.zip
>
>
wget -O cfg.zip --no-check-certificate https://github.com/dmwm/deployment/archive/HG1404a.zip
unzip cfg.zip; rm cfg.zip
mv deployment-* Deployment
cd Deployment
Changed:
<
<
Perform the deployment of the appropriate AsyncStageout release tag from the corresponding CMS repository:
>
>
** Before getting the deployment scripts, please take a look here for the latest tag: https://github.com/dmwm/deployment

* For ASO, please check: https://svnweb.cern.ch/trac/CMSDMWM/wiki/AsyncStageOutManagement or contact Hassen

Perform the deployment of the appropriate AsyncStageout release tag from the corresponding CMS repository:

 
Changed:
<
<
ASOTAG=1.0.1pre2
>
>
ASOTAG=1.0.1pre8
REPO=comp.pre.riahi
./Deploy -R asyncstageout@$ASOTAG -s prep -A slc5_amd64_gcc461 -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
./Deploy -R asyncstageout@$ASOTAG -s sw -A slc5_amd64_gcc461 -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
Line: 91 to 95
 chmod 700 /data/srv/asyncstageout/state/asyncstageout/
Added:
>
>
Apply patches if required:
 wget https://github.com/dmwm/WMCore/pull/4965.patch -O - | patch -d apps/asyncstageout/ -p 1
 wget https://github.com/dmwm/WMCore/pull/4967.patch -O - | patch -d apps/asyncstageout/ -p 1
# For testing patch (WMCore matching LFN) :
wget https://github.com/dmwm/WMCore/commit/a4563fa3cbc451dcce27669052518769a5e00a2a.patch -O - | patch -d apps/asyncstageout/lib/python2.6/site-packages/ -p 3
 Initialize the service:
cd /data/srv/asyncstageout/current
Line: 113 to 125
 config.Statistics.opsProxy = '/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy'
Added:
>
>
Monitoring (while the fix for monitoring on cmsweb is not in place, we need to create a replication):

curl -X POST http://username:password@server:5984/_replicator/ -H "Content-Type: application/json" -d '{"source":"https://cmsweb-testbed.cern.ch/couchdb/asynctransfer","target":"asynctransfer","continuous":true}'
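The replication document is plain JSON; building it in a variable first makes quoting mistakes (a missing closing quote, a stray space inside the source URL) easier to avoid. A sketch, with placeholder host and credentials:

```shell
# Build the _replicator document body before posting it.
# Host, port and credentials are placeholders for your local CouchDB.
SRC='https://cmsweb-testbed.cern.ch/couchdb/asynctransfer'
TGT='asynctransfer'
payload=$(printf '{"source":"%s","target":"%s","continuous":true}' "$SRC" "$TGT")
echo "$payload"

# Actual POST (requires a running CouchDB, so commented out in this sketch):
# curl -X POST "http://username:password@127.0.0.1:5984/_replicator/" \
#      -H "Content-Type: application/json" -d "$payload"
```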

 

Operations

Starting and stopping the service:

Line: 220 to 236
  */10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' http://user:password@127.0.0.1:5984/asynctransfer_config/_design/AsyncTransfer/_view/sites' > /dev/null
Changed:
<
<

vocms243

>
>

vocms243 (prod) (cmsweb)

Patch Description Date
Version v1.0.1pre5 Full reinstall 2014 Mar 27
#4965 Add cache_area to AsyncTransfer Comp 2014 Mar 27
#4072 Separate the optional component Analytics from the AsyncTransfer component 2014 Mar 27
#4961 Update matchLFN for fixing not matching LFN (TESTING) 2014 Mar 27

vocms33 (preprod) (cmsweb-testbed)

 
Patch Description Date
Changed:
<
<
Version v1.0.1pre5 Full reinstall 2014 feb 25
>
>
Version v1.0.1pre7 Full reinstall 2014 feb 25
 
#4965 Add cache_area to AsyncTransfer Comp 2014 feb 25
#4072 Separate the optional component Analytics from the AsyncTransfer component 2014 feb 25

Revision 15 2014-03-18 - AndresTanasijczuk

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 115 to 115
 

Operations

Changed:
<
<
Starting and stopping the service:
>
>

Starting and stopping the service:

 
export X509_USER_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
cd /data/srv/asyncstageout/current/
Line: 126 to 126
 ./config/asyncstageout/manage stop-asyncstageout
Changed:
<
<
Starting and stopping CouchDB
>
>

Starting and stopping CouchDB

 
./config/asyncstageout/manage start-services
./config/asyncstageout/manage stop-services
Added:
>
>

Killing transfers

./config/asyncstageout/manage execute-asyncstageout kill-transfer -t <taskname> [-i docID comma separated list]

Examples:

- Kill all transfers in a task:

./config/asyncstageout/manage execute-asyncstageout kill-transfer -t <taskname>

- Kill the transfers for specific documents in a task:

./config/asyncstageout/manage execute-asyncstageout kill-transfer -t <taskname> -i <docID1,docID2,...>

(don't include the "<" and ">" symbols)
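The `-i` option takes a comma-separated docID list. If you have the IDs as separate words (for example from a query), they can be joined like this (the IDs below are made-up placeholders):

```shell
# Join individual docIDs into the comma-separated list kill-transfer expects.
# The IDs here are illustrative placeholders.
ids="a1b2c3 d4e5f6 a7b8c9"
docid_list=$(echo "$ids" | tr ' ' ',')
echo "$docid_list"

# Then (taskname and IDs being placeholders):
# ./config/asyncstageout/manage execute-asyncstageout kill-transfer -t <taskname> -i "$docid_list"
```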

Log files

 Log files to watch for errors and to check and search in case of problems:
Added:
>
>
 AsyncStageout component logs:
./install/asyncstageout/DBSPublisher/ComponentLog

Revision 14 2014-02-26 - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 200 to 200
 

vocms243

Patch Description Date
Deleted:
<
<
Version v1.0.1pre2 Full reinstall 2014 feb 10
#4075 fix config server to connect to the local couchdb and pass the myproxyAccount parameter to Proxy.py 2014 feb 11
#4072 Report correctly broken sites TFC. 2014 feb 11
#4915 Source the UI with the script provided before checking if voms-proxy-init... 2014 feb 11
 \ No newline at end of file
Added:
>
>
Version v1.0.1pre5 Full reinstall 2014 feb 25
#4965 Add cache_area to AsyncTransfer Comp 2014 feb 25
#4072 Separate the optional component Analytics from the AsyncTransfer component 2014 feb 25
 \ No newline at end of file

Revision 13 2014-02-25 - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 117 to 117
  Starting and stopping the service:
Added:
>
>
export X509_USER_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
cd /data/srv/asyncstageout/current/
source ~/Async.secrets
 source /afs/cern.ch/cms/LCG/LCG-2/UI/cms_ui_env.sh
Added:
>
>
./config/asyncstageout/manage start-asyncstageout
./config/asyncstageout/manage stop-asyncstageout

Revision 12 2014-02-11 - HassenRiahi

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 110 to 110
  Also add these lines in the appropriate sections of the config file:
Deleted:
<
<
config.Analytics.opsProxy = '/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy'
 config.Statistics.opsProxy = '/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy'
Line: 132 to 131
 Log files to watch for errors and to check and search in case of problems: AsyncStageout component logs:
Deleted:
<
<
./install/asyncstageout/Analytics/ComponentLog
./install/asyncstageout/DBSPublisher/ComponentLog
./install/asyncstageout/Statistics/ComponentLog
./install/asyncstageout/AsyncTransfer/ComponentLog
Line: 142 to 140
 ./install/couchdb/logs/couch.log
Added:
>
>

Monitoring

The current monitoring in cmsweb-testbed uses the old WMCore libraries. The asynctransfer database is replicated into the local CouchDB to get the monitoring.

 Monitoring page of ASO
Changed:
<
<
ASO Monitoring
>
>
ASO Monitoring

Switch to FTS3

Set by hand all FTS server endpoints in the ASO config database in the local CouchDB instance from the Futon interface. There is one document per T1 site. For example, the RAL FTS server can be found here:

http://CouchInstance:port/_utils/document.html?asynctransfer_config/T1_UK_RAL
Modify the URL from https://fts-fzk.gridka.de:8443/glite-data-transfer-fts/services/FileTransfer to https://lcgfts3.gridpp.rl.ac.uk:8443

Views caching

Until this ticket https://github.com/dmwm/AsyncStageout/issues/4068 is fixed, you need to update your crontab by hand:

57 */4 * * * /data/proxy/vomsrenew.sh &> /tmp/vomsrenew.log

0 1 * * * curl -s -H 'Content-Type: application/json' -X POST ' http://user:password@127.0.0.1:5984/asynctransfer/_compact' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://cmsweb-testbed.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/ftscp_all' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://cmsweb-testbed.cern.ch/couchdb/asynctransfer/_design/DBSPublisher/_view/publish' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://cmsweb-testbed.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/ftscp' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://cmsweb-testbed.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/ftscp_retry' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://user:password@127.0.0.1:5984/couchdb/asynctransfer/_design/monitor/_view/endedByTime' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://user:password@127.0.0.1:5984/couchdb/asynctransfer/_design/monitor/_view/endedByTime?group_level=4' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://user:password@127.0.0.1:5984/couchdb/asynctransfer/_design/monitor/_view/endedByTime' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://user:password@127.0.0.1:5984/couchdb/asynctransfer/_design/monitor/_view/filesCountByUser?group_level=1' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://user:password@127.0.0.1:5984/couchdb/asynctransfer/_design/monitor/_view/filesCountByTask?group_level=1' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://user:password@127.0.0.1:5984/couchdb/asynctransfer/_design/monitor/_view/filesCountByDestSource?group_level=2' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://user:password@127.0.0.1:5984/couchdb/asynctransfer/_design/monitor/_view/FailedAttachmentsByDocId' > /dev/null

*/10 * * * * curl -k --cert /data/ASO/certs/proxy.cert --key /data/ASO/certs/proxy.cert -H 'Content-Type: application/json' -X GET ' https://user:password@127.0.0.1:5984/couchdb/asynctransfer/_design/monitor/_view/FilesByWorkflow' > /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' http://user:password@127.0.0.1:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetAnalyticsConfig' > /dev/null

 
Added:
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' http://user:password@127.0.0.1:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetStatConfig' > /dev/null
 
Added:
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' http://user:password@127.0.0.1:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetTransferConfig' > /dev/null
 
Added:
>
>
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET ' http://user:password@127.0.0.1:5984/asynctransfer_config/_design/AsyncTransfer/_view/sites' > /dev/null
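All of these cron entries do the same thing: issue a GET against a view so CouchDB keeps its index warm. They could be collapsed into one loop; a sketch, where the base URL, credentials and the view list are placeholders for your instance:

```shell
# Warm several CouchDB views in one pass instead of one cron line per view.
# BASE and the view paths are placeholders.
BASE='http://user:password@127.0.0.1:5984'
views="asynctransfer/_design/AsyncTransfer/_view/ftscp_all
asynctransfer/_design/DBSPublisher/_view/publish
asynctransfer_config/_design/AsyncTransfer/_view/sites"

warmed=0
for v in $views; do
  # curl -s -H 'Content-Type: application/json' -X GET "$BASE/$v" > /dev/null
  warmed=$((warmed + 1))   # counting only; the curl is commented out in this sketch
done
echo "warmed=$warmed"
```

A single `*/10 * * * *` crontab entry pointing at such a script would then replace the individual view lines.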
 

vocms243

Patch Description Date

Revision 11 2014-02-11 - HassenRiahi

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 6 to 6
 

Machine details

Changed:
<
<
The service is deployed on vocms103.
>
>
The service is deployed on vocms243.
  Required software and initial configuration are provided by Quattor. One can take a look at the template.
Line: 64 to 64
 chmod 600 $HOME/Async.secrets
Changed:
<
<
Create the ActiveMQ secrets file if not already done by a previous deployment:
cat > $HOME/AmqAuthFile <<EOF
{"MSG_HOST": "dashb-test-mb.cern.ch", "MSG_PWD": "***", "MSG_PORT": 6163, "MSG_USER": "***", "MSG_QUEUE": "/topic/poc.testbedMSG" }
EOF
chmod 600 $HOME/AmqAuthFile
The details for user name, password, messaging host and queue are obtained from the messaging server operators.

Both the above files contain sensitive data and must be protected with the appropriate permissions.

>
>
The file contains sensitive data and must be protected with the appropriate permissions.
  Get the deployment scripts:
Line: 87 to 78
  Perform the deployment of the appropriate AsyncStageout release tag from the corresponding CMS repository:
Changed:
<
<
ASOTAG=1.0.0pre18 REPO=comp.pre.spiga
>
>
ASOTAG=1.0.1pre2 REPO=comp.pre.riahi
./Deploy -R asyncstageout@$ASOTAG -s prep -A slc5_amd64_gcc461 -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
./Deploy -R asyncstageout@$ASOTAG -s sw -A slc5_amd64_gcc461 -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
./Deploy -R asyncstageout@$ASOTAG -s post -A slc5_amd64_gcc461 -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
Line: 152 to 143
 

Monitoring page of ASO

Changed:
<
<
ASO Monitoring
>
>
ASO Monitoring
 

Revision 10 2014-02-11 - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 153 to 153
  Monitoring page of ASO ASO Monitoring \ No newline at end of file
Added:
>
>

vocms243

Patch Description Date
Version v1.0.1pre2 Full reinstall 2014 feb 10
#4075 fix config server to connect to the local couchdb and pass the myproxyAccount parameter to Proxy.py 2014 feb 11
#4072 Report correctly broken sites TFC. 2014 feb 11
#4915 Source the UI with the script provided before checking if voms-proxy-init... 2014 feb 11

Revision 9 2014-02-04 - JustasBalcas

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3

Line: 97 to 97
 Create directory to store user credentials obtained from myproxy, accessible to the service exclusively:
mkdir /data/srv/asyncstageout/state/asyncstageout/creds
Changed:
<
<
chmod 700 /data/srv/asyncstageout
>
>
chmod 700 /data/srv/asyncstageout/state/asyncstageout/
 

Initialize the service:

Revision 8 2014-01-22 - HassenRiahi

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"
Changed:
<
<

Deployment of AsyncStageout for CRAB3 with Panda

>
>

Deployment of AsyncStageout for CRAB3

  Read the generic deployment instructions.

Revision 7 2014-01-17 - PreslavKonstantinov

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3 with Panda

Line: 19 to 19
  In case voms-proxy-info is not available, use
Changed:
<
<
openssl x509 -text -subject -noout -in /data/certs/hostcert.pem | grep subject=
>
>
openssl x509 -subject -noout -in /data/certs/hostcert.pem
 

Registration with myproxy.cern.ch can be checked with:

Revision 6 2013-11-20 - PreslavKonstantinov

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3 with Panda

Line: 67 to 67
 Create the ActiveMQ secrets file if not already done by a previous deployment:
cat > $HOME/AmqAuthFile <<EOF
Changed:
<
<
{"MSG_HOST": "gridmsg007.cern.ch", "MSG_PWD": "***", "MSG_PORT": 6163, "MSG_USER": "***", "MSG_QUEUE": "/topic/poc.testbedMSG" }
>
>
{"MSG_HOST": "dashb-test-mb.cern.ch", "MSG_PWD": "***", "MSG_PORT": 6163, "MSG_USER": "***", "MSG_QUEUE": "/topic/poc.testbedMSG" }
 EOF chmod 600 $HOME/AmqAuthFile

Revision 5 2013-10-03 - PreslavKonstantinov

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3 with Panda

Line: 87 to 87
  Perform the deployment of the appropriate AsyncStageout release tag from the corresponding CMS repository:
Changed:
<
<
ASOTAG=1.0.0pre16
>
>
ASOTAG=1.0.0pre18
REPO=comp.pre.spiga
./Deploy -R asyncstageout@$ASOTAG -s prep -A slc5_amd64_gcc461 -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
./Deploy -R asyncstageout@$ASOTAG -s sw -A slc5_amd64_gcc461 -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
Line: 108 to 108
 ./config/asyncstageout/manage init-asyncstageout
Changed:
<
<
Set some essential configuration parameters in the config file config/asyncstageout/config.py:
>
>
Set correct values of some essential configuration parameters in the config file config/asyncstageout/config.py:
 
sed --in-place "s|\.credentialDir = .*|\.credentialDir = '/data/srv/asyncstageout/state/asyncstageout/creds'|" config/asyncstageout/config.py
sed --in-place "s|\.serviceCert = .*|\.serviceCert = '/data/certs/hostcert.pem'|" config/asyncstageout/config.py
Line: 117 to 117
 sed --in-place "s|\.serverDN = .*|\.serverDN = '$serverDN'|" config/asyncstageout/config.py
Added:
>
>
Also add these lines in the appropriate sections of the config file:
config.Analytics.opsProxy = '/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy'
config.Statistics.opsProxy = '/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy'
 

Operations

Starting and stopping the service:

Revision 4 2013-08-16 - FedericaFanzago

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3 with Panda

Line: 144 to 144
 
./install/couchdb/logs/couch.log
\ No newline at end of file
Added:
>
>
Monitoring page of ASO ASO Monitoring

Revision 3 2013-08-02 - PreslavKonstantinov

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3 with Panda

Line: 121 to 121
  Starting and stopping the service:
Added:
>
>
source /afs/cern.ch/cms/LCG/LCG-2/UI/cms_ui_env.sh
./config/asyncstageout/manage start-asyncstageout
./config/asyncstageout/manage stop-asyncstageout

Revision 2 2013-07-29 - PreslavKonstantinov

Line: 1 to 1
 
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3 with Panda

Line: 87 to 87
  Perform the deployment of the appropriate AsyncStageout release tag from the corresponding CMS repository:
Changed:
<
<
ASOTAG=1.0.0pre13 REPO=comp.pre.mcinquil
>
>
ASOTAG=1.0.0pre16 REPO=comp.pre.spiga
./Deploy -R asyncstageout@$ASOTAG -s prep -A slc5_amd64_gcc461 -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
./Deploy -R asyncstageout@$ASOTAG -s sw -A slc5_amd64_gcc461 -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
./Deploy -R asyncstageout@$ASOTAG -s post -A slc5_amd64_gcc461 -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite

Revision 1 2013-07-18 - PreslavKonstantinov

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="CMSPandaDeployment"

Deployment of AsyncStageout for CRAB3 with Panda

Read the generic deployment instructions.

Machine details

The service is deployed on vocms103.

Required software and initial configuration are provided by Quattor. One can take a look at the template.

Local service account used to deploy, run and operate the service is panda.

Additional machine preparation steps

The host must be registered for proxy retrieval from myproxy.cern.ch. Request it by sending an e-mail to px.support@cern.ch giving the DN of the host certificate. It can be obtained by
voms-proxy-info -file /etc/grid-security/hostcert.pem -subject
In case voms-proxy-info is not available, use
openssl x509 -text -subject -noout -in /data/certs/hostcert.pem | grep subject=

Registration with myproxy.cern.ch can be checked with:

ldapsearch -p 2170 -h myproxy.cern.ch -x -LLL -b "mds-vo-name=resource,o=grid" | grep $(hostname)

Prepare directories for the deployment, owned by the service account:

sudo mkdir /data/srv /data/admin /data/certs
sudo chown panda:zh /data/srv /data/admin /data/certs

Make a copy of the host certificate, accessible by the service account to be used by AsyncStageout:

sudo cp -p /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem /data/certs
sudo chown panda:zh /data/certs/*

Deployment

The deployment and operations are done as the service user, so we switch to it:
sudo -u panda -i bash
Create directories for the deployment scripts and the deployment:
mkdir /data/admin/asyncstageout
mkdir /data/srv/asyncstageout

If not already done by a previous deployment, create the secrets file, filling in the CouchDB username and password, the IP address of the local machine and the host where the CRAB3 REST interface is installed:

cat > $HOME/Async.secrets <<EOF
COUCH_USER=***
COUCH_PASS=***
COUCH_PORT=5984
COUCH_HOST=128.142.172.118
OPS_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
UFC_SERVICE_URL=https://vocms29.cern.ch/crabserver/dev/filemetadata
AMQ_AUTH_FILE=$HOME/AmqAuthFile
EOF
chmod 600 $HOME/Async.secrets
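Since the secrets file must stay private, a quick check that the permissions really are 600 can save debugging later. A sketch using a temporary file (point it at $HOME/Async.secrets in practice):

```shell
# Verify that a secrets file is readable by its owner only (mode 600).
# A temporary file stands in for $HOME/Async.secrets in this sketch.
secrets=$(mktemp)
chmod 600 "$secrets"
mode=$(stat -c %a "$secrets")   # GNU stat; on BSD/macOS use: stat -f %Lp
echo "mode=$mode"
rm -f "$secrets"
```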

Create the ActiveMQ secrets file if not already done by a previous deployment:

cat > $HOME/AmqAuthFile <<EOF
{"MSG_HOST": "gridmsg007.cern.ch", "MSG_PWD": "***", "MSG_PORT": 6163, "MSG_USER": "***", "MSG_QUEUE": "/topic/poc.testbedMSG" }
EOF
chmod 600 $HOME/AmqAuthFile
The details for user name, password, messaging host and queue are obtained from the messaging server operators.

Both the above files contain sensitive data and must be protected with the appropriate permissions.

Get the deployment scripts:

cd /data/admin/asyncstageout
rm -rf Deployment
wget -O cfg.zip --no-check-certificate https://github.com/dmwm/deployment/archive/HG1306b.zip
unzip cfg.zip; rm cfg.zip
mv deployment-* Deployment
cd Deployment

Perform the deployment of the appropriate AsyncStageout release tag from the corresponding CMS repository:

ASOTAG=1.0.0pre13
REPO=comp.pre.mcinquil
./Deploy -R asyncstageout@$ASOTAG -s prep -A slc5_amd64_gcc461 -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
./Deploy -R asyncstageout@$ASOTAG -s sw -A slc5_amd64_gcc461 -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
./Deploy -R asyncstageout@$ASOTAG -s post -A slc5_amd64_gcc461 -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite

Create directory to store user credentials obtained from myproxy, accessible to the service exclusively:

mkdir /data/srv/asyncstageout/state/asyncstageout/creds
chmod 700 /data/srv/asyncstageout

Initialize the service:

cd /data/srv/asyncstageout/current
./config/asyncstageout/manage activate-asyncstageout
./config/asyncstageout/manage start-services
./config/asyncstageout/manage init-asyncstageout

Set some essential configuration parameters in the config file config/asyncstageout/config.py:

sed --in-place "s|\.credentialDir = .*|\.credentialDir = '/data/srv/asyncstageout/state/asyncstageout/creds'|" config/asyncstageout/config.py
sed --in-place "s|\.serviceCert = .*|\.serviceCert = '/data/certs/hostcert.pem'|" config/asyncstageout/config.py
sed --in-place "s|\.serviceKey = .*|\.serviceKey = '/data/certs/hostkey.pem'|" config/asyncstageout/config.py
serverDN=$(openssl x509 -text -subject -noout -in /data/certs/hostcert.pem | grep subject= | sed 's/subject= //')
sed --in-place "s|\.serverDN = .*|\.serverDN = '$serverDN'|" config/asyncstageout/config.py
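The serverDN pipeline above just strips the `subject= ` prefix from the openssl output. On a sample line it behaves like this (the DN below is fabricated for illustration):

```shell
# Show what the serverDN extraction does, using a made-up subject line.
sample='subject= /DC=ch/DC=cern/OU=computers/CN=vocms0000.cern.ch'
serverDN=$(echo "$sample" | grep subject= | sed 's/subject= //')
echo "$serverDN"
```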

Operations

Starting and stopping the service:

./config/asyncstageout/manage start-asyncstageout
./config/asyncstageout/manage stop-asyncstageout

Starting and stopping CouchDB

./config/asyncstageout/manage start-services
./config/asyncstageout/manage stop-services

Log files to watch for errors and to check and search in case of problems: AsyncStageout component logs:

./install/asyncstageout/Analytics/ComponentLog
./install/asyncstageout/DBSPublisher/ComponentLog
./install/asyncstageout/Statistics/ComponentLog
./install/asyncstageout/AsyncTransfer/ComponentLog
CouchDB log:
./install/couchdb/logs/couch.log
 
This site is powered by the TWiki collaboration platform (powered by Perl). Copyright © 2008-2019 by the contributing authors. All material on this collaboration platform is the property of the contributing authors.