The framework is coupled to several tools available at CERN. You will need to check that everything is set up correctly.

  • Every client machine should have AFS access. It allows us to distribute the code to the clients and gather the results in a single folder. NFS can be used as a substitute, but it will cause problems for the web-based report.

  • One of the client machines should be able to access all the other nodes via SSH without a password. This machine will be used to start the manager script.

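The two checks above can be scripted before installing anything; a minimal sketch, assuming hypothetical client hostnames client01 and client02:

```shell
# Pre-flight checks (client01/client02 are placeholder hostnames; adjust to your cluster).
clients="client01 client02"

# 1. AFS should be mounted on every client machine:
[ -d /afs ] && afs_status="mounted" || afs_status="missing"
echo "AFS: $afs_status"

# 2. The manager node must reach every client via passwordless SSH.
#    BatchMode=yes refuses password prompts, so a failure means keys are not set up.
for host in $clients; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; then
        echo "$host: ssh OK"
    else
        echo "$host: ssh FAILED (set up key-based authentication)"
    fi
done
```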
Source code

You can get the source code from the CORAL CVS repository (substitute your CVS repository root for <CVSROOT>):

cd ~/
cvs -d <CVSROOT> co -P -d PerfTests coral/Tests/Performance_RelationalAccess/

Web site configuration

A web site should be configured in order to publish the web-based report. An AFS-based site is easily created using this page: . Choose "AFS folder" as the Site type and enter the log folder path (i.e. /afs/…).

Install JPGraph

Get it from here: (use "PHP4: JpGraph 1.x - series") and put it in the log directory as jpgraph:

cd ~/PerfTests/logs/
wget -O /tmp/jpgraph.tar.gz
tar zxf /tmp/jpgraph.tar.gz
mv jpgraph* jpgraph

Access rights

Change the access privileges on the log directory to enable access from the web:

fs setacl -dir ~/PerfTests/logs/ ~/PerfTests/logs/jpgraph/src/ ~/PerfTests/logs/jpgraph/src/lang/ -acl system:anyuser rl

Monitoring schema on the target database

You should create a specific schema on the monitored database to use the monitoring feature. A script called install.sql is provided for this purpose; execute it with a DBA account. You will be asked for a username and a password, which the framework will need afterwards (in authentication.xml).

Framework configuration

Modify testConf.xml to configure the test environment.

  • Indicate the client machine hostnames
  • Indicate the RAC node hostnames
  • Enable or disable the monitoring graphs:
                cpu="1"   -- 0 to disable
                waitEvents="1"/>   -- 0 to disable
  • Tweak the ramping-up of the connections:
                numBoxes="1"   -- number of machines used for this test run
                numMaxClientsPerBox="2"   -- maximum number of client programs which will be started on each client machine
                numClientsPerBoxInit="1"   -- number of client programs which will be started on each client machine at the first step
                numClientsPerStep="1"   -- number of client programs which will be started on each client machine at the following steps
                stepPeriod="20"   -- waiting time between ramp-up steps
                testMaxDuration="3600"/>   -- maximum duration of the test; after this period the clients are killed and the test is shut down
  • Indicate the folders used by the program:
                baseDir="/afs/"   -- directory where you downloaded the framework
                clientLogDir="/afs/"/>   -- directory where you plan to store all the log folders
                                (it can be in another place than the base directory, but it must remain on a shared filesystem)
  • Indicate the information related to the client program:
                db_connection="/dbdev/OracleStress"   -- connection string used by the monitoring script (declared in dblookup.xml and authentication.xml)
                db_username="ST_CLIENT"   -- name of the Oracle user which will be used as the client for the test
                testClientCmd=""   -- command used to start the test program on the different client nodes
                testClientExe="python"   -- real name as seen by the operating system (eg: python for a python script)
                reportEveryXSecs="10"/>   -- time between monitoring outputs (the client program should use the same frequency, eg: write its results at this interval)
  • Save meaningful files in the log folder:
                <file name="Client execution script"></file>   -- the client program should be saved in order to reproduce the test easily
                <file name="Configuration file">testConf.xml</file>   -- a saved copy of the configuration file is also needed to generate the report

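Put together, the fragments above suggest a testConf.xml along these lines. This is only a sketch: the attribute names come from the list above, but the element names (testConf, monitoring, rampUp, directories, client, savedFiles) are guesses, and the /afs paths are placeholders:

```xml
<!-- Hypothetical layout: element names are assumptions, attributes are from the list above. -->
<testConf>
  <monitoring cpu="1" waitEvents="1"/>
  <rampUp numBoxes="1" numMaxClientsPerBox="2" numClientsPerBoxInit="1"
          numClientsPerStep="1" stepPeriod="20" testMaxDuration="3600"/>
  <directories baseDir="/afs/..." clientLogDir="/afs/..."/>
  <client db_connection="/dbdev/OracleStress" db_username="ST_CLIENT"
          testClientCmd="" testClientExe="python" reportEveryXSecs="10"/>
  <savedFiles>
    <file name="Client execution script"></file>
    <file name="Configuration file">testConf.xml</file>
  </savedFiles>
</testConf>
```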
Database access

You will have to modify the configuration files authentication.xml and dblookup.xml. They are needed by the monitoring extension, and possibly by the test client.

First, the account for the monitoring (in authentication.xml):
<?xml version="1.0" ?>
  <connection name="oracle://oracle_service_name/oracle_monitoring_username">
    <parameter name="user" value="oracle_monitoring_username" />
    <parameter name="password" value="oracle_monitoring_password" />
  </connection>
Then, the account for the test client, for each node (if using COOL or CORAL):
  <connection name="oracle://oracle_service_name_node1/oracle_client_username">
    <parameter name="user" value="oracle_client_username" />
    <parameter name="password" value="oracle_client_password" />
  </connection>

  <connection name="oracle://oracle_service_name_node2/oracle_client_username">
    <parameter name="user" value="oracle_client_username" />
    <parameter name="password" value="oracle_client_password" />
  </connection>

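Assembled, a complete authentication.xml usually wraps these entries in a single <connectionlist> element, following the usual CORAL convention (all service and user names below are placeholders):

```xml
<?xml version="1.0" ?>
<connectionlist>
  <connection name="oracle://oracle_service_name/oracle_monitoring_username">
    <parameter name="user" value="oracle_monitoring_username" />
    <parameter name="password" value="oracle_monitoring_password" />
  </connection>
  <connection name="oracle://oracle_service_name_node1/oracle_client_username">
    <parameter name="user" value="oracle_client_username" />
    <parameter name="password" value="oracle_client_password" />
  </connection>
</connectionlist>
```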
CORAL also uses dblookup.xml to declare the monitoring service name:
<?xml version="1.0" ?>
  <logicalservice name="/OracleStressReport">
     <service name="oracle://oracle_service_name/oracle_monitoring_username" accessMode="readonly" authentication="password" />
  </logicalservice>

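Similarly, a complete dblookup.xml usually wraps the logical service in a <servicelist> element (again the usual CORAL convention; names are placeholders):

```xml
<?xml version="1.0" ?>
<servicelist>
  <logicalservice name="/OracleStressReport">
    <service name="oracle://oracle_service_name/oracle_monitoring_username"
             accessMode="readonly" authentication="password" />
  </logicalservice>
</servicelist>
```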
Set the environment variables

There are two files provided depending on your shell. For bash, sh or zsh use bashrc and for csh or tcsh use cshrc.

You can copy them in your .bashrc (or .cshrc) or source them from there :

source ~/PerfTests/bin/bashrc

Don't forget to modify the following variable if needed :

export CMTCONFIG=slc4_amd64_gcc34   # eg: can be changed to slc4_ia32_gcc34

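In a ~/.bashrc, the sourcing step could look like this (the PerfTests path is the default location from the checkout step above):

```shell
# Source the framework environment only if it is present:
if [ -f "$HOME/PerfTests/bin/bashrc" ]; then
    . "$HOME/PerfTests/bin/bashrc"
fi

# Override the platform tag if your clients are not 64-bit SLC4:
export CMTCONFIG=slc4_amd64_gcc34
```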
Topic revision: r2 - 2007-12-07 - RomainBasset