
Deployment of AsyncStageOut for CRAB3


About CouchDB

The ASO service uses four CouchDB databases, whose purposes are summarized in the following table:

Database name         Description
asynctransfer         Stores the user transfer/publication request documents.
asynctransfer_config  Contains a few documents with some ASO configuration parameters.
asynctransfer_agent   Contains a few documents where ASO keeps track of the status (running, not running, etc.) of the different ASO components.
asynctransfer_stat    Stores ASO statistics documents (the stat couchapp is installed in this database; see the initialization output in the developer section below).
There are asynctransfer databases deployed in the CMSWEB CouchDBs (one in production and one in the testbed). These databases are used by the production and pre-production ASO services respectively.

When deploying the ASO service, a CouchDB instance is deployed locally in the host. This CouchDB contains the four databases listed in the table above. While in a development ASO deployment all databases in this CouchDB are relevant, in production and pre-production the asynctransfer database of this CouchDB is redundant (because the asynctransfer databases from the CMSWEB CouchDBs are used).
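
For instance, once the local CouchDB is up (see the deployment sections below), its databases can be listed through the CouchDB HTTP API; this is just a quick sanity check, using the same credential placeholders as the cron job examples later in this page:

curl -s 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/_all_dbs'
# expected to include: asynctransfer, asynctransfer_config, asynctransfer_agent, asynctransfer_stat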

To distinguish between the CouchDB deployed in the ASO host and the CouchDB in CMSWEB, we use the term "local" when referring to the first. In a development ASO deployment it is only the local CouchDB that is used, so we may omit the term "local".

Deployment of AsyncStageOut for CRAB3 operators

Machine details

The local service account used to deploy, run, and operate the service is crab3.

If the machine is a new one, take a look at CAF Configuration. If the delegate DN does not exist, ask hn-cms-crabDevelopment@cern.ch to update the REST configuration.

Additional machine preparation steps

The host must be registered for proxy retrieval from myproxy.cern.ch. Request it by sending an e-mail to px.support@cern.ch giving the DN of the host certificate. If the host certificate is not correct or needs to be updated, contact the VOC. The DN can be obtained with:

voms-proxy-info -file /etc/grid-security/hostcert.pem -subject

In case voms-proxy-info is not available, use

openssl x509 -subject -noout -in /data/certs/hostcert.pem

Registration with myproxy.cern.ch can be checked with:

ldapsearch -p 2170 -h myproxy.cern.ch -x -LLL -b "mds-vo-name=resource,o=grid" | grep $(hostname)

Prepare directories for the deployment, owned by the service account:

sudo mkdir /data/srv /data/admin /data/certs
sudo chown crab3:zh /data/srv /data/admin /data/certs

Make a copy of the host certificate, accessible by the service account to be used by AsyncStageOut:

sudo cp -p /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem /data/certs
sudo chown crab3:zh /data/certs/*

Deployment

The deployment and operations are done as the service user, so we switch to it:

sudo -u crab3 -i bash

Create directories for the deployment scripts and the deployment:

mkdir /data/admin/asyncstageout
mkdir /data/srv/asyncstageout

If not already done by a previous deployment, create the secrets file, filling in the CouchDB username and password, the IP address of the local machine and the host where the CRAB3 REST interface is installed:

cat > $HOME/Async.secrets <<EOF
COUCH_USER=***
COUCH_PASS=***
COUCH_PORT=5984
COUCH_HOST=HOST_IP
OPS_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
UFC_SERVICE_URL=https://cmsweb-testbed.cern.ch/crabserver/preprod/filemetadata
COUCH_CERT_FILE=/data/certs/servicecert.pem
COUCH_KEY_FILE=/data/certs/servicekey.pem
EOF

The file contains sensitive data and must be protected with the appropriate permissions:

chmod 600 $HOME/Async.secrets

Get the deployment scripts (take a look at the github dmwm/deployment repository for the latest tag; here we assume it is HG1411a):

cd /data/admin/asyncstageout
rm -rf Deployment
wget -O cfg.zip --no-check-certificate https://github.com/dmwm/deployment/archive/HG1411a.zip
unzip cfg.zip; rm cfg.zip
mv deployment-* Deployment
cd Deployment

Perform the deployment of the appropriate AsyncStageOut release tag from the corresponding CMS repository (contact Hassen Riahi in case of doubt):

ASOTAG=1.0.3pre1
REPO=comp.pre.riahi
ARCH=slc6_amd64_gcc481
./Deploy -R asyncstageout@$ASOTAG -s prep -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
./Deploy -R asyncstageout@$ASOTAG -s sw -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
./Deploy -R asyncstageout@$ASOTAG -s post -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite

Create a directory to store user credentials obtained from myproxy, accessible to the service exclusively:

mkdir /data/srv/asyncstageout/state/asyncstageout/creds
chmod 700 /data/srv/asyncstageout/state/asyncstageout/

Apply patches if required:

cd /data/srv/asyncstageout/current/
#wget https://github.com/dmwm/WMCore/pull/4965.patch -O - | patch -d apps/asyncstageout/ -p 1
#wget https://github.com/dmwm/WMCore/pull/4967.patch -O - | patch -d apps/asyncstageout/ -p 1
#wget https://github.com/dmwm/WMCore/commit/a4563fa3cbc451dcce27669052518769a5e00a2a.patch -O - | patch -d apps/asyncstageout/lib/python2.6/site-packages/ -p 3

Initialize the service:

cd /data/srv/asyncstageout/current
./config/asyncstageout/manage activate-asyncstageout
./config/asyncstageout/manage start-services
./config/asyncstageout/manage init-asyncstageout

Set correct values of some essential configuration parameters in the config file config/asyncstageout/config.py:

sed --in-place "s|\.credentialDir = .*|\.credentialDir = '/data/srv/asyncstageout/state/asyncstageout/creds'|" config/asyncstageout/config.py
sed --in-place "s|\.serviceCert = .*|\.serviceCert = '/data/certs/hostcert.pem'|" config/asyncstageout/config.py
sed --in-place "s|\.serviceKey = .*|\.serviceKey = '/data/certs/hostkey.pem'|" config/asyncstageout/config.py
serverDN=$(openssl x509 -text -subject -noout -in /data/certs/hostcert.pem | grep subject= | sed 's/subject= //')
sed --in-place "s|\.serverDN = .*|\.serverDN = '$serverDN'|" config/asyncstageout/config.py
sed --in-place "s|\.couch_instance = .*|\.couch_instance = 'https://cmsweb-testbed.cern.ch/couchdb'|" config/asyncstageout/config.py
sed --in-place "s|\.cache_area = .*|\.cache_area = 'https://cmsweb-testbed.cern.ch/crabserver/preprod/filemetadata'|" config/asyncstageout/config.py
sed --in-place "s|\.opsProxy = .*|\.opsProxy = '/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy'|" config/asyncstageout/config.py
sed --in-place "s|\.UISetupScript = .*|\.UISetupScript = '/data/srv/tmp.sh'|" config/asyncstageout/config.py
sed --in-place "s|\.log_level = .*|\.log_level = 10|" config/asyncstageout/config.py

Operations

OpsProxy Renewal or Creation

First connect to the machine and create a seed for the proxy delegation:

ssh machine_name
sudo mkdir /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/
voms-proxy-init -voms cms
sudo cp -p /tmp/x509up_u$(id -u) /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/seed-100001.cert
sudo chown crab3:zh /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/seed-100001.cert

Create a delegation (don't forget to change MACHINE_NAME and DELEGATION_NAME):

myproxy-init -l DELEGATION_NAME_100001 -x -R "/DC=ch/DC=cern/OU=computers/CN=MACHINE_NAME.cern.ch" -c 720 -t 36 -Z "/DC=ch/DC=cern/OU=computers/CN=MACHINE_NAME.cern.ch" -s myproxy.cern.ch
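
To verify that the credential was stored on the myproxy server, query it with the same delegation name (here assuming the DELEGATION_NAME_100001 used above):

myproxy-info -l DELEGATION_NAME_100001 -s myproxy.cern.ch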

Copy the ProxyRenew.sh script (attached to this page) to /data/admin/ProxyRenew.sh and change its owner:

sudo chown crab3:zh /data/admin/ProxyRenew.sh # Not needed if renewing 

Before adding it to the crontab, run the command manually and check that the proxy is renewed:

/data/admin/ProxyRenew.sh /data/certs/ /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/ DELEGATION_NAME cms

Add to the crontab if not already there:

MAILTO="justas.balcas@cern.ch"
3 */3 * * * /data/admin/ProxyRenew.sh /data/certs/ /data/srv/asyncstageout/state/asyncstageout/proxy-delegation/ DELEGATION_NAME cms

Starting and stopping the service or CouchDB

Environment needed before starting or stopping AsyncStageOut

export X509_USER_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
cd /data/srv/asyncstageout/current/
source ~/Async.secrets 

Starting AsyncStageOut (if starting AsyncStageOut for the first time, make sure you have entered the correct FTS3 server URLs in CouchDB; see the Switch to FTS3 section below):

./config/asyncstageout/manage start-asyncstageout

Stopping AsyncStageOut:

./config/asyncstageout/manage stop-asyncstageout

Starting CouchDB:

./config/asyncstageout/manage start-services

Stopping CouchDB:

./config/asyncstageout/manage stop-services

Switch to FTS3

Set by hand all FTS server endpoints in the asynctransfer_config database of the local CouchDB instance from the Futon interface. There is one document per T1 site. For example, the RAL FTS server document can be found at:

http://LocalCouchInstance:5984/_utils/document.html?asynctransfer_config/T1_UK_RAL

Modify the URL from the default to the FTS3 server endpoint you intend to use (the available FTS3 endpoints are listed in the developer section below).
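
If Futon is not convenient, the same change can be sketched through the CouchDB HTTP API; the exact name of the field holding the FTS server URL is not spelled out here, so check the actual document first (credential placeholders as in the cron job examples below):

DB='http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_config'
curl -s "$DB/T1_UK_RAL" > doc.json      # fetch the current document (includes the _rev needed for the update)
# edit doc.json by hand, changing the FTS server URL field, then upload it back:
curl -s -X PUT "$DB/T1_UK_RAL" -d @doc.json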

Cron jobs for CouchDB

There must be scheduled cron jobs for:

  • compacting the local CouchDB databases:
0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_compact' &> /dev/null

0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_stat/_compact' &> /dev/null

0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_agent/_compact' &> /dev/null

  • querying/caching views of the asynctransfer database in the global CouchDB:
*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/FilesByWorkflow?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/ftscp_all?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/sites?stale=update_after' &> /dev/null

*/5 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/JobsIdsStatesByWorkflow?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/get_acquired?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/DBSPublisher/_view/publish?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/AsyncTransfer/_view/LFNSiteByLastUpdate?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/MonitorStartedEnded/_view/startedSizeByTime?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/MonitorStartedEnded/_view/endedSizeByTime?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/MonitorFilesCount/_view/filesCountByUser?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/MonitorFilesCount/_view/filesCountByTask?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/MonitorFilesCount/_view/filesCountByDestSource?stale=update_after' &> /dev/null

*/10 * * * * curl --capath /etc/grid-security/certificates --cacert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --cert /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy --key /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -H 'Content-Type: application/json' -X GET 'https://cmsweb.cern.ch/couchdb/asynctransfer/_design/monitor/_view/publicationStateSizeByTime?stale=update_after' &> /dev/null

  • querying/caching views of the asynctransfer_config database (local CouchDB):
*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetAnalyticsConfig' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetStatConfig' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetTransferConfig' &> /dev/null

  • querying/caching views of the asynctransfer_agent database (local CouchDB):
*/5 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_agent/_design/Agent/_view/existWorkers' &> /dev/null

Deployment of AsyncStageOut for CRAB3 developers

These are my (Andres Tanasijczuk) notes from when I did the installation of AsyncStageOut in a new CERN OpenStack virtual machine in February 2015.

Get and install a virtual machine

See Deployment of CRAB REST Interface / Get and install a virtual machine.

AsyncStageOut installation and configuration

Pre-configuration

The AsyncStageOut requires a couple of username/password settings for databases. These are provided using a simple key=value formatted secrets file, expected by default to be in the home area of the account running the agent. Thus, create a secrets file (the default location/name is $HOME/Async.secrets) with the following definitions for the supported parameters:

COUCH_USER=<a-couchdb-username>  # Choose a username for the ASO CouchDB.
COUCH_PASS=<a-couchdb-password>  # Choose a password for the ASO CouchDB.
COUCH_HOST=<IP-of-this-ASO-host>  # You can read the host IP in https://openstack.cern.ch/dashboard/project/instances/.
COUCH_PORT=5984
UFC_SERVICE_URL=https://osmatanasi2.cern.ch/crabserver/dev/filemetadata  # The URL of the crabserver instance from where ASO should get the filemetadata.
OPS_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
COUCH_CERT_FILE=/data/certs/hostcert.pem
COUCH_KEY_FILE=/data/certs/hostkey.pem

Notes:

  • The CouchDB password parameter is required. The other parameters will default to somewhat sensible values if not provided.
  • CouchDB requires an IP address; it has problems with hostnames.

The file contains sensitive data and must be protected with the appropriate permissions:

chmod 600 $HOME/Async.secrets

You can actually put this file in any directory you want and/or give it any name you want, but then you have to set the environment variable ASYNC_SECRETS_LOCATION to point to the file:

export ASYNC_SECRETS_LOCATION=/path/to/Async.secrets/file

This secrets file will be used by the deployment scripts to set parameters in the configuration files (e.g. /data/srv/asyncstageout/current/config/asyncstageout/config.py).

Installation

1) Create the directories where you will do the deployment.

sudo mkdir /data/admin /data/srv
sudo chown <username>:zh /data/admin /data/srv
mkdir /data/admin/asyncstageout /data/srv/asyncstageout

2) Get the DMWM deployment package from github (https://github.com/dmwm/deployment). See https://github.com/dmwm/deployment/releases for the available releases. Note that you don't need to get the latest release.

cd /data/admin/asyncstageout
wget -O cfg.zip --no-check-certificate https://github.com/dmwm/deployment/archive/HG1509i.zip
unzip cfg.zip; rm cfg.zip
mv deployment-* Deployment
cd Deployment

3) Set the auxiliary variables ASOTAG, REPO and ARCH. The only architecture, at this point, for which AsyncStageOut RPMs are built is slc6_amd64_gcc481. You don't need to change SCRAM_ARCH to match this architecture; the Deploy script will do that for you. The AsyncStageOut releases can be found at https://github.com/dmwm/AsyncStageout/releases and should be taken from the CMS repository comp.pre.riahi.

ASOTAG=1.0.3pre14
REPO=comp.pre.riahi
ARCH=slc6_amd64_gcc481

4) The deployment is separated into three steps: prep, sw and post.

./Deploy -R asyncstageout@$ASOTAG -s prep -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
INFO: 20150917103852: starting deployment of: asyncstageout/offsite
INFO: installation log can be found in /data/srv/asyncstageout/.deploy/20150917-103852-4382-prep.log
INFO: deploying backend - variant: default, version: default
INFO: deploying wmcore-auth - variant: default, version: default
INFO: deploying couchdb - variant: default, version: default
INFO: deploying asyncstageout - variant: offsite, version: default
INFO: installation completed sucessfully
./Deploy -R asyncstageout@$ASOTAG -s sw -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
INFO: 20150917103903: starting deployment of: asyncstageout/offsite
INFO: installation log can be found in /data/srv/asyncstageout/.deploy/20150917-103903-4464-sw.log
INFO: deploying backend - variant: default, version: default
INFO: bootstrapping comp.pre.riahi software area in /data/srv/asyncstageout/v1.0.3pre14/sw.pre.riahi
INFO: bootstrap successful
INFO: deploying wmcore-auth - variant: default, version: default
INFO: deploying couchdb - variant: default, version: default
INFO: deploying asyncstageout - variant: offsite, version: default
INFO: installation completed sucessfully
./Deploy -R asyncstageout@$ASOTAG -s post -A $ARCH -t v$ASOTAG -r comp=$REPO /data/srv/asyncstageout asyncstageout/offsite
INFO: 20150917104616: starting deployment of: asyncstageout/offsite
INFO: installation log can be found in /data/srv/asyncstageout/.deploy/20150917-104616-5911-post.log
INFO: Updating /data/srv/asyncstageout/v1.0.3pre14/apps to apps.sw.pre.riahi
INFO: Updating /data/srv/asyncstageout/current to v1.0.3pre14
INFO: deploying backend - variant: default, version: default
INFO: deploying wmcore-auth - variant: default, version: default
INFO: deploying couchdb - variant: default, version: default
INFO: deploying asyncstageout - variant: offsite, version: default
INFO: installation completed sucessfully

Authentication

Certificate for interactions with CMSWEB

Access to CMSWEB is restricted to CMS users and services by requesting authentication with certificates registered in VO CMS and SiteDB. The AsyncStageOut configuration file has a parameter (actually one parameter for each component) to point to a certificate that each component should use for interactions with CMSWEB. The parameter value is taken from the secrets file:

OPS_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy

Some of the CMS services use Grid service certificates for interactions with CMSWEB, but the majority, and in particular the production and pre-production CRAB services, use the operator's proxy, for reasons of both convenience and security. For private installations you are the operator, so you should use your own user proxy:

mkdir /data/srv/asyncstageout/state/asyncstageout/creds
chmod 700 /data/srv/asyncstageout/state/asyncstageout

voms-proxy-init --voms cms --valid 192:00
cp /tmp/x509up_u$UID /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy
The proxy is created for 8 days (192 hours), because this is the maximum allowed duration of the VO CMS extension. Thus, the proxy has to be renewed every 7 days (at least). You can do it manually (executing the last two commands) or you can set up an automatic renewal procedure like is being done in production and pre-production. See OpsProxy Renewal or Creation.
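
To check how much lifetime is left on the proxy and on its VOMS attributes before deciding whether to renew:

voms-proxy-info -file /data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy -timeleft -actimeleft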

Note: The operator proxy is also used by ASO to manage the user's files in case the retrieval of the user's proxy has failed.

Note: Grid host certificates cannot be used as Grid service certificates if they are not registered in VO CMS and SiteDB.

Certificate for interactions with myproxy server

CRAB services use the host certificate (and private key) for interacting with myproxy server. The AsyncStageOut configuration file has two parameters to point to a certificate and private key to use for interactions with myproxy server. (The ASO components that need to interact with myproxy server are AsyncTransfer and DBSPublisher.) The parameter values are taken from the secrets file:

COUCH_CERT_FILE=/data/certs/hostcert.pem
COUCH_KEY_FILE=/data/certs/hostkey.pem

So copy the host certificate and private key to the /data/certs/ directory and change the owner to yourself:

#sudo mkdir /data/certs  # This should have been done already by the Deploy script when installing the VM.
sudo chown <username>:zh /data/certs
#sudo cp -p /etc/grid-security/host{cert,key}.pem /data/certs/  # This should have been done already by the Deploy script when installing the VM.
sudo chown <username>:zh /data/certs/host{cert,key}.pem

Add the Grid host certificate DN of the ASO host to the CRAB external REST configuration in the "delegate-dn" field. Do this for the REST Interface instance you will use together with this ASO deployment. This is so that this ASO instance is allowed to retrieve users' proxies from the myproxy server. The external REST configuration file will look something like this (in my case I have my private TaskWorker installed in osmatanasi2.cern.ch and I am installing ASO in osmatanasi1.cern.ch):

{
    "cmsweb-dev" : {
        "delegate-dn": [
            "/DC=ch/DC=cern/OU=computers/CN=osmatanasi2.cern.ch|/DC=ch/DC=cern/OU=computers/CN=osmatanasi1.cern.ch"
        ],
        ...
    }
}

Patches

No patches necessary.

Initialization

cd /data/srv/asyncstageout/current

The next command creates some directories and copies some template configuration files into the actual directories where they should be. This step is sometimes called "activation of the system". A hidden file ./install/asyncstageout/.using is created to signal that this activation step has been done. If this file exists already, the activation does nothing.

./config/asyncstageout/manage activate-asyncstageout

The next command initializes and starts CouchDB. A hidden file ./install/couchdb/.init is created to signal that this initialization step has been done. If this file exists already, the initialization part is skipped and only the starting part is executed. The initialization includes the generation of the appropriate databases.

./config/asyncstageout/manage start-services
Starting Services...
starting couch...
CouchDB has not been initialised... running pre initialisation
Initialising CouchDB on <COUCH_HOST_IP>:5984...
Apache CouchDB has started, time to relax.
CouchDB has not been initialised... running post initialisation

The next command generates a couple of configuration files (e.g. /data/srv/asyncstageout/current/config/asyncstageout/config.py) with parameters for all AsyncStageOut components. Most parameters are read from the Async.secrets file. A hidden file ./install/asyncstageout/.init is created to signal that this initialization step has been done. If this file exists already, the initialization does nothing.

./config/asyncstageout/manage init-asyncstageout
Initialising AsyncStageOut...
Installing AsyncTransfer into asynctransfer
Installing monitor into asynctransfer
Installing stat into asynctransfer_stat
Installing DBSPublisher into asynctransfer
Installing config into asynctransfer_config
Installing Agent into asynctransfer_agent

Configuration

ASO configuration

1) In the configuration file /data/srv/asyncstageout/current/config/asyncstageout/config.py there are still some parameters that need to be modified "by hand":

sed --in-place "s|\.credentialDir = .*|\.credentialDir = '/data/srv/asyncstageout/state/asyncstageout/creds'|" config/asyncstageout/config.py
sed --in-place "s|\.serviceCert = .*|\.serviceCert = '/data/certs/hostcert.pem'|" config/asyncstageout/config.py
sed --in-place "s|\.serviceKey = .*|\.serviceKey = '/data/certs/hostkey.pem'|" config/asyncstageout/config.py
serverDN=$(openssl x509 -text -subject -noout -in /data/certs/hostcert.pem | grep subject= | sed 's/subject= //')
sed --in-place "s|\.serverDN = .*|\.serverDN = '$serverDN'|" config/asyncstageout/config.py
sed --in-place "s|\.log_level = .*|\.log_level = 10|" config/asyncstageout/config.py
sed --in-place "s|\.UISetupScript = .*|\.UISetupScript = '/data/srv/tmp.sh'|" config/asyncstageout/config.py
sed --in-place "s|\.opsProxy = .*|\.opsProxy = '/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy'|" config/asyncstageout/config.py

Have a look at the configuration file and change other parameter values if you consider it appropriate.

2) Create a file /data/srv/tmp.sh with just the following line:

#!/bin/sh

3) There is a configuration file shared by the Monitor component and PhEDEx, /data/srv/asyncstageout/current/config/asyncstageout/monitor.conf. In this file, make sure the service parameter points to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8443, https://fts3.cern.ch:8443 or https://fts3-pilot.cern.ch:8443). For development ASO instances one should use https://fts3-pilot.cern.ch:8443:

service = https://fts3-pilot.cern.ch:8443

CouchDB configuration

First of all check that CouchDB is up and that you can access it from your VM:

curl -X GET "http://$(hostname):5984/"
{"couchdb":"Welcome","uuid":"3bf548a5f518781332037d68da62ff28","version":"1.6.1","vendor":{"version":"1.6.1","name":"The Apache Software Foundation"}}

1) The CouchDB installed in the VM is protected by a firewall. One cannot access it, not even from another machine at CERN. To be able to access this CouchDB from another machine at CERN, one needs to stop iptables:

sudo /etc/init.d/iptables stop # To start the iptables again: sudo /etc/init.d/iptables start; To check the status: sudo /etc/init.d/iptables status

2) To access the CouchDB installed in the VM from outside CERN, create an ssh tunnel between your machine and lxplus:

[mylaptop]$ ssh -D 1111 <username>@lxplus.cern.ch

and configure your browser (Preferences - Network) to access the internet with SOCKS Proxy Server = localhost and Port = 1111.

3) For each of the 8 documents "T1_*_*" in the asynctransfer_config database, change the FTS3 server URL to point to the FTS3 server you intend to use (https://lcgfts3.gridpp.rl.ac.uk:8446, https://fts3.cern.ch:8446 or https://fts3-pilot.cern.ch:8446). For development ASO instances use https://fts3-pilot.cern.ch:8446. This can be done from the Futon interface, as described in the Switch to FTS3 section above.

Cron jobs for CouchDB

You should already have the following cron jobs for compacting the CouchDB databases:

0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_compact' &> /dev/null

0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_stat/_compact' &> /dev/null

0 1 * * * curl -s -H 'Content-Type: application/json' -X POST 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_agent/_compact' &> /dev/null

Until this ticket https://github.com/dmwm/AsyncStageout/issues/4068 is solved, one needs to update the crontab by hand to create cron jobs that query/cache the views every few minutes. To edit the crontab do:

crontab -e

Add the following cron jobs for querying/caching views of the asynctransfer database:

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/JobsIdsStatesByWorkflow?stale=update_after' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/LFNSiteByLastUpdate?stale=update_after' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/PublicationStateByWorkflow?stale=update_after' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/TransfersByFailuresReasons?stale=update_after' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/UserByStartTime?stale=update_after' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/forKill?stale=update_after' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/forResub?stale=update_after' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/ftscp_all?stale=update_after' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/getFilesToRetry?stale=update_after' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/get_acquired?stale=update_after' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/AsyncTransfer/_view/sites?stale=update_after' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/DBSPublisher/_view/PublicationFailedByWorkflow?stale=update_after' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/DBSPublisher/_view/cache_area?stale=update_after' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/DBSPublisher/_view/last_publication?stale=update_after' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/DBSPublisher/_view/publish?stale=update_after' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer/_design/monitor/_view/publicationStateSizeByTime?stale=update_after' &> /dev/null

Note: in (pre-)production the global asynctransfer database is installed in the cmsweb CouchDB. This means that the protocol is HTTPS and therefore the curl query is something like curl --capath /path/to/CA/certs/dir --cacert /path/to/proxy --cert /path/to/proxy --key /path/to/proxy -H 'Content-Type: application/json' -X GET 'https://...'.

Add the following cron jobs for querying/caching views of the asynctransfer_config database:

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetAnalyticsConfig' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetStatConfig' &> /dev/null

*/10 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_config/_design/asynctransfer_config/_view/GetTransferConfig' &> /dev/null

Add the following cron jobs for querying/caching views of the asynctransfer_agent database:

*/5 * * * * curl -s -H 'Content-Type: application/json' -X GET 'http://LocalCouchUserName:LocalCouchPass@LocalCouchHost:5984/asynctransfer_agent/_design/Agent/_view/existWorkers' &> /dev/null

And if you have something like

@reboot /data/srv/asyncstageout/current/config/couchdb/manage sysboot
1 0 * * * /data/srv/asyncstageout/current/config/couchdb/manage compact wmstats 'I did read documentation'
1 6 * * * /data/srv/asyncstageout/current/config/couchdb/manage compact all_but_wmstats 'I did read documentation'
1 12 * * * /data/srv/asyncstageout/current/config/couchdb/manage compactviews wmstats WMStats 'I did read documentation'
1 18 * * * /data/srv/asyncstageout/current/config/couchdb/manage compactviews all_but_wmstats all 'I did read documentation'

delete that.

Note: LocalCouchHost can simply be 127.0.0.1 if the cron jobs are running in the same host where CouchDB is installed.

Start/stop AsyncStageOut

First one has to start CouchDB; otherwise ASO will not start, as it will fail to connect to CouchDB.

Note: If following from the above initialisation steps, you may have CouchDB already running. This can be checked using the status command:

./config/asyncstageout/manage status

If it is not running, start it with:

./config/asyncstageout/manage start-services

To start all ASO components, one needs to add the system python libraries to PYTHONPATH for the FTS3 bindings to work:

export PYTHONPATH="/usr/lib/python2.6/site-packages/:$PYTHONPATH"
export PYTHONPATH="/usr/lib64/python2.6/site-packages/:$PYTHONPATH"
./config/asyncstageout/manage start-asyncstageout
Starting AsyncStageOut...
Checking default database connection... ok.
Starting components: ['AsyncTransfer', 'Reporter', 'DBSPublisher', 'FilesCleaner', 'Statistics', 'RetryManager']
Starting : AsyncTransfer
Starting AsyncTransfer as a daemon 
Log will be in /data/srv/asyncstageout/v1.0.3pre8/install/asyncstageout/AsyncTransfer 
Waiting 1 seconds, to ensure daemon file is created

started with pid 24188
Starting : Reporter
Starting Reporter as a daemon 
Log will be in /data/srv/asyncstageout/v1.0.3pre8/install/asyncstageout/Reporter 
Waiting 1 seconds, to ensure daemon file is created

started with pid 24275
Starting : DBSPublisher
Starting DBSPublisher as a daemon 
Log will be in /data/srv/asyncstageout/v1.0.3pre8/install/asyncstageout/DBSPublisher 
Waiting 1 seconds, to ensure daemon file is created

started with pid 24362
Starting : FilesCleaner
Starting FilesCleaner as a daemon 
Log will be in /data/srv/asyncstageout/v1.0.3pre8/install/asyncstageout/FilesCleaner 
Waiting 1 seconds, to ensure daemon file is created

started with pid 24449
Starting : Statistics
Starting Statistics as a daemon 
Log will be in /data/srv/asyncstageout/v1.0.3pre8/install/asyncstageout/Statistics 
Waiting 1 seconds, to ensure daemon file is created

started with pid 24454
Starting : RetryManager
Starting RetryManager as a daemon 
Log will be in /data/srv/asyncstageout/v1.0.3pre8/install/asyncstageout/RetryManager 
Waiting 1 seconds, to ensure daemon file is created

started with pid 24459
2015-06-15 15:32:12: ASOMon[24464]: Reading config file /data/srv/asyncstageout/v1.0.3pre8/config/asyncstageout/monitor.conf
2015-06-15 15:32:12: ASOMon[24464]: Using FTS service https://fts3-pilot.cern.ch:8443
ASOMon: pid 24466
writing logfile to /data/srv/asyncstageout/v1.0.3pre8/install/asyncstageout/Monitor/aso-monitor.log

To stop all ASO components:

./config/asyncstageout/manage stop-asyncstageout
Shutting down asyncstageot...
Checking default database connection... ok.
Stopping components: ['AsyncTransfer', 'Reporter', 'DBSPublisher', 'FilesCleaner', 'Statistics', 'RetryManager']
Stopping: AsyncTransfer
Stopping: Reporter
Stopping: DBSPublisher
Stopping: FilesCleaner
Stopping: Statistics
Stopping: RetryManager
Stopping: Monitor

To stop CouchDB:

./config/asyncstageout/manage stop-services
Shutting down services...
stopping couch...
Apache CouchDB has been shutdown.

Start/stop a single AsyncStageOut component

Be careful to set $PYTHONPATH as indicated above.

To start the Monitor component:

source apps/asyncstageout/Monitor/setup.sh; ./apps/asyncstageout/Monitor/ASO-Monitor.pl --config config/asyncstageout/monitor.conf

To stop the Monitor component:

kill -9 `cat install/asyncstageout/Monitor/aso-monitor.pid`

To start any other component:

./config/asyncstageout/manage execute-asyncstageout wmcoreD --start --component <component-name>

To stop any other component:

./config/asyncstageout/manage execute-asyncstageout wmcoreD --shutdown --component <component-name>
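
For example, to restart only the AsyncTransfer component (component names as printed by start-asyncstageout above):

./config/asyncstageout/manage execute-asyncstageout wmcoreD --shutdown --component AsyncTransfer
./config/asyncstageout/manage execute-asyncstageout wmcoreD --start --component AsyncTransfer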

Use your private ASO in your CRAB jobs

There is a CRAB configuration parameter named Debug.ASOURL which specifies to which CouchDB instance CRAB should inject the transfer documents. If this parameter is not specified, the parameter backend-urls.ASOURL from the external REST configuration takes effect. For the production (pre-production) installation of AsyncStageOut, the documents should be injected into the central CouchDB deployed in CMSWEB (CMSWEB-testbed). For a private installation of AsyncStageOut, the documents should be injected into the local private CouchDB. So if you want to use your private ASO instance, you can either set in the CRAB configuration file

config.Debug.ASOURL = 'http://<couchdb-hostname>.cern.ch:5984/'

or set in the external REST configuration

{
    "cmsweb-dev": {
        ...
        "backend-urls" : {
            ...
            "ASOURL" : "http://<couchdb-hostname>.cern.ch:5984/"
        },
        ...
    }
}

Possible "glite-delegation-init: command not found" error

When running ASO, I got the following error message in ./install/asyncstageout/AsyncTransfer/ComponentLog:

2015-02-06 18:41:54,162:DEBUG:TransferDaemon:Starting <AsyncStageOut.TransferWorker.TransferWorker instance at 0x1fabe18>
2015-02-06 18:41:54,162:DEBUG:TransferWorker:executing command: export X509_USER_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/eebe07e240237dde878ab66bb56e7794e8d5b39e ; source /data/srv/tmp.sh ; glite-delegation-init -s https://fts3-pilot.cern.ch:8443 at: Fri, 06 Feb 2015 18:41:54 for: /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=atanasi/CN=710186/CN=Andres Jorge Tanasijczuk
2015-02-06 18:41:54,271:DEBUG:TransferWorker:Executing : 
command : export X509_USER_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/eebe07e240237dde878ab66bb56e7794e8d5b39e ; source /data/srv/tmp.sh ; glite-delegation-init -s https://fts3-pilot.cern.ch:8443 
output : 
error: /bin/sh: glite-delegation-init: command not found
retcode : 127
2015-02-06    18:41:54,271:DEBUG:TransferWorker:User proxy of atanasi could not be delagated! Trying next time.

To fix it, I had to install fts2-client. But first I had to create a file /etc/yum.repos.d/EMI-3-base.repo with the following content:

[EMI-3-base]
name=EMI3 base software
baseurl=http://linuxsoft.cern.ch/emi/3/sl6/x86_64/base
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-emi
exclude=
priority=15

Then install the client:

sudo yum install fts2-client

Disk may become full after some days

After some days the disk in the VM may become full because of many ASO documents (or files related to the views, I don't know) and ASO will stop working. One has to stop all AsyncStageOut components and CouchDB, and then do a clean-all.

./config/asyncstageout/manage stop-asyncstageout
./config/asyncstageout/manage stop-services
./config/asyncstageout/manage clean-all

manage script commands

  • activate-asyncstageout : activate the AsyncStageOut
  • status : print the status of the services and the AsyncStageOut
  • start-couch : start the CouchDB server
  • stop-couch : stop the CouchDB server
  • start-services : same as start-couch
  • stop-services : same as stop-couch
  • start-asyncstageout : start the AsyncStageOut
  • stop-asyncstageout : stop the AsyncStageOut
  • clean-couch : wipe out the CouchDB databases and couchapps (removes database completely)
  • clean-asyncstageout : remove all agents configuration and installation directories (non-recoverable)
  • clean-all : clean the AsyncStageOut and CouchDB (wipes everything, non-recoverable)
  • execute-asyncstageout <command> <args> : execute the asyncstageout/bin command with the arguments provided

AsyncStageOut log files

Log files to watch for errors and to check and search in case of problems:

AsyncStageOut component logs:

./install/asyncstageout/AsyncTransfer/ComponentLog
./install/asyncstageout/DBSPublisher/ComponentLog
./install/asyncstageout/Reporter/ComponentLog
./install/asyncstageout/Statistics/ComponentLog
./install/asyncstageout/FilesCleaner/ComponentLog
./install/asyncstageout/RetryManager/ComponentLog
./install/asyncstageout/Monitor/aso-monitor.log

CouchDB log:

./install/couchdb/logs/couch.log
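
A quick way to scan a component log for recent problems (plain tail/grep, nothing ASO-specific):

tail -n 1000 ./install/asyncstageout/AsyncTransfer/ComponentLog | grep -iE 'error|fail'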

Tips for operators

ASO operations must be done as the service user (crab3 for production and pre-production):

sudo -u crab3 -i bash

Export the operator's proxy:

export X509_USER_PROXY=/data/srv/asyncstageout/state/asyncstageout/creds/OpsProxy

Source the ASO environment (the wildcards are for the arch and the ASO version: for example, slc6_amd64_gcc481 and 1.0.3pre8 respectively):

cd /data/srv/asyncstageout/current
source sw.dciangot/slc6_amd64_gcc493/cms/asyncstageout/1.0.5pre3/etc/profile.d/init.sh
# equivalent at the moment to
# source sw.dciangot/*/cms/asyncstageout/*/etc/profile.d/init.sh

Restart local CouchDB

In case of a local CouchDB crash, ASO stops registering job transfers and keeps showing jobs in the transferring status.
To restart the local CouchDB:

./config/asyncstageout/manage stop-asyncstageout
export PYTHONPATH="/usr/lib/python2.6/site-packages/:$PYTHONPATH"
export PYTHONPATH="/usr/lib64/python2.6/site-packages/:$PYTHONPATH"

./config/asyncstageout/manage stop-services
./config/asyncstageout/manage start-services

After a few seconds:

./config/asyncstageout/manage start-asyncstageout

Killing transfers

Kill all the transfers in CouchDB for a given task:

./config/asyncstageout/manage execute-asyncstageout kill-transfer -t <taskname> [-i docID comma separated list]

Note: If the FTS transfer was already submitted, it is (currently) not possible to kill it in FTS.
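
For example, with a hypothetical taskname (CRAB3 tasknames have the form <YYMMDD>_<HHMMSS>:<username>_crab_<requestname>):

./config/asyncstageout/manage execute-asyncstageout kill-transfer -t 150206_123456:atanasi_crab_mytask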

Retrying publication

Retry the publication for a given task:

./retry-publish -h
usage: retry-publish [-h] [--task Taskname] [--username Username] [--dry]

Retry task publication.

optional arguments:
  -h, --help            show this help message and exit
  --task Taskname, -t Taskname
                        Taskname to be retried
  --username Username, -u Username
                        username to be retried
  --dry, -d             show list only

  • Important notes
    • beware: running retry-publish without the path always fails; one needs to prepend the path, i.e. ./retry-publish (see the example below)
    • do not use --username; the SQL query takes forever. It will need to be changed to limit to the last month or similar.
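
A typical invocation, first with --dry to only list what would be retried (hypothetical taskname):

cd /data/srv/asyncstageout/current
./retry-publish --task 150206_123456:atanasi_crab_mytask --dry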

ASO update/upgrade

You can move the databases from an old installation to a new one by copying the database files located in the directory install/couchdb/database to the new installation path.

Note: If CouchDB is also upgraded during the ASO upgrade, the database files of the previous CouchDB version may be incompatible with the new CouchDB version.
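
A minimal sketch of such a move (paths are illustrative; stop CouchDB in both installations first with ./config/asyncstageout/manage stop-services):

OLD=/data/srv/asyncstageout_old/install/couchdb/database   # hypothetical old installation path
NEW=/data/srv/asyncstageout/current/install/couchdb/database
cp -a "$OLD"/*.couch "$NEW"/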

ASO on production (cmsweb)

Machine    Path                               ASO tag                             Database
vocms031   /data/srv/asyncstageout/current    1.0.6pre1 downgraded to 1.0.5pre3   cmsweb.cern.ch/couchdb2/asodb4
vocms031   /data/srv/asyncstageout2/current   1.0.6pre1 downgraded to 1.0.5pre3   cmsweb.cern.ch/couchdb2/asodb2
vocms0108  /data/srv/asyncstageout/current    1.0.6pre1 downgraded to 1.0.5pre3   cmsweb.cern.ch/couchdb2/asodb3
vocms0108  /data/srv/asyncstageout2/current   1.0.6pre1 downgraded to 1.0.5pre3   cmsweb.cern.ch/couchdb2/asodb1
vocms0105  /data/srv/asyncstageout/current    1.0.7pre1 patched ASO/Oracle        cmsweb.cern.ch

ASO in Use/Drain

The following A and B are two setups, each of which can be either in use (production) or in drain. Which one is in production is defined by the CRAB REST configuration.

Setup A:
Host       Path                               Database
vocms0108  /data/srv/asyncstageout2/current   asodb1
vocms031   /data/srv/asyncstageout2/current   asodb2

Setup B:
Host       Path                               Database
vocms0108  /data/srv/asyncstageout/current    asodb3
vocms031   /data/srv/asyncstageout/current    asodb4

ASO on pre-production (cmsweb-testbed)

Machine    Path                              ASO tag                                              Database
vocms030   /data/srv/asyncstageout/current   1.0.6pre1 comp.pre.dciangot compatible with HG1612h  cmsweb-testbed.cern.ch/couchdb/asodb3
vocms030   /data/user/asostandalone/         ORACLE ASO v2.0                                      -
Topic attachments
ProxyRenew.sh (Unix shell script, 8.7 K, 2014-04-04, JustasBalcas)