Streams Configuration Checklist
The list below contains both the minimum and the recommended requirements for a Tier-1 center before Streams is configured and connected to CERN. As a checklist, it provides a method to verify the configuration.
- SQL> show parameter global_names (TRUE)
- SQL> show parameter parallel_max_servers (recommended 20; roughly the current value + 3 per capture process + 3 per apply process)
- SQL> show parameter streams_pool_size (recommended 600M, minimum 200M)
- SQL> show parameter _job_queue_interval (1; recheck after the upgrade to 10.2.0.3 and set it again if it is no longer there)
- SQL> show parameter job_queue_processes (minimum 4; the maximum number of jobs that can run simultaneously + 2)
- SQL> show parameter db_domain (should not be empty)
- SQL> show parameter listener (local_listener should point to the local instance and remote_listener to the cluster listener)
- $ lsnrctl services (should show the domain name in the "address" lines)
- $ $ORACLE_HOME/OPatch/opatch lsinventory (check that the required Streams patches are listed)
- SQL> select distinct(profile) from dba_profiles; (CERN_APP_PROFILE and CERN_DEV_PROFILE should exist)
- Have at least 4 redo log files per instance (thread) with a size of 200MB each (500MB recommended):
select group#,thread#,bytes/1024/1024 sizeMB from v$log order by 2,1;
- Have an UNDO tablespace for each instance with at least 1GB and autoextend up to 32GB:
select a.tablespace_name, round(b.bytes/1024/1024),round(b.maxbytes/1024/1024)
from dba_tablespaces a, dba_data_files b
where a.tablespace_name=b.tablespace_name
and contents='UNDO'
order by 1;
- Have a TEMP tablespace with at least 1GB and autoextend up to 32GB:
select a.tablespace_name, a.contents,round(b.bytes/1024/1024),round(b.maxbytes/1024/1024)
from dba_tablespaces a, dba_temp_files b
where a.tablespace_name=b.tablespace_name
order by 1;
- Check that the database services carry the domain name in the application connection strings/tnsnames.ora.
- Have a Streams administrator tablespace with at least 1GB (see the example below).
- If running in archivelog mode (check with: select log_mode from v$database;), set up RMAN to back up the archive logs and delete them afterwards; see the RMAN sketch after this list. Reserve enough space in the "Flash Recovery Area" for this (the size of the database is the minimum).
- The parameter aq_tm_processes must not be explicitly set to 0, as this disables queue monitoring for all queues.
- The recycle bin must be disabled: SQL> show parameter recyclebin (OFF)
- Profiles: following the example below, the Tier-1 DBAs have to adjust the values according to their cluster configuration limits (for example, the SESSIONS_PER_USER value should be lower than the 'sessions' initialization parameter of the Tier-1 database).
- Verify that the parameter RESOURCE_LIMIT is set to true.
- Script for automatic killing of sniped sessions: a short description, instructions and the script can be found here: Killing Sniped Sessions. Please edit the script if you want to change the user or remove the logging.
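For the archivelog item above, a minimal RMAN sketch, assuming the archive logs go to the Flash Recovery Area and are backed up to disk; the 200G size and the retention handling are placeholders to be adapted to the local backup policy:
SQL> alter system set db_recovery_file_dest_size = 200G scope=both;
$ rman target /
RMAN> backup archivelog all delete input;
RMAN> delete noprompt obsolete;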
An automatic script to check these requirements is available below:
Other checks (for the source database):
Examples:
Streams administrator tablespace creation
define data_ts=LHCB_STREAMS_DATA01
define initsize=1G
define nextsize=1G
define maxsize=100G
CREATE BIGFILE TABLESPACE "&DATA_TS"
DATAFILE SIZE &INITSIZE AUTOEXTEND ON NEXT &NEXTSIZE MAXSIZE &MAXSIZE
EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;
Please follow the script
create_strmadmin_10g.sql in order to create this schema.
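The contents of create_strmadmin_10g.sql are not reproduced here; as a rough sketch only (the user name and password are placeholders, and the exact grants in the script may differ), a 10g Streams administrator is typically created with the tablespace defined above and DBMS_STREAMS_AUTH:
CREATE USER strmadmin IDENTIFIED BY "choose_a_password"
  DEFAULT TABLESPACE &DATA_TS QUOTA UNLIMITED ON &DATA_TS;
GRANT DBA TO strmadmin;
BEGIN
  -- standard Oracle procedure that grants the privileges needed to administer Streams
  DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'strmadmin');
END;
/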
Roles and profiles creation
CREATE ROLE cern_dev_role;
GRANT ALTER SESSION, CREATE CLUSTER, CREATE DATABASE LINK, CREATE MATERIALIZED VIEW,
CREATE PROCEDURE, CREATE ROLE, CREATE SEQUENCE, CREATE SESSION, CREATE SYNONYM,
CREATE TABLE, CREATE TRIGGER, CREATE TYPE, CREATE VIEW, QUERY REWRITE, ADVISOR TO cern_dev_role;
CREATE ROLE cern_app_role;
GRANT CREATE SESSION, ALTER SESSION, CREATE SYNONYM, CREATE VIEW, QUERY REWRITE to cern_app_role;
-- Development profile
-- To be given to developers and production owner accounts
CREATE PROFILE cern_dev_profile LIMIT
FAILED_LOGIN_ATTEMPTS 5
PASSWORD_LOCK_TIME 1/1440
PASSWORD_LIFE_TIME 365
PASSWORD_REUSE_TIME 180
PASSWORD_GRACE_TIME 10
SESSIONS_PER_USER 10
IDLE_TIME 2880;
-- Application profile
-- To be given to application reader and writer accounts
CREATE PROFILE cern_app_profile LIMIT
FAILED_LOGIN_ATTEMPTS 10
PASSWORD_LOCK_TIME 1/1440
PASSWORD_LIFE_TIME UNLIMITED
SESSIONS_PER_USER 1000
IDLE_TIME 2880;
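As a usage sketch (the account name is a placeholder): the resource limits in these profiles (SESSIONS_PER_USER, IDLE_TIME) are only enforced when RESOURCE_LIMIT is TRUE, as required in the checklist above.
-- enforce the resource limits defined in profiles
ALTER SYSTEM SET resource_limit = TRUE SCOPE=BOTH;
-- assign the development profile and role to an example account
CREATE USER lcg_example_dev IDENTIFIED BY "choose_a_password" PROFILE cern_dev_profile;
GRANT cern_dev_role TO lcg_example_dev;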
Import command and parameter file
The example below shows how to run the import of schemas into a database remotely.
- The xxxxx user should have DBA privileges in the database.
- The network link corresponds to a database link from the remote database to a DBA user in the source database (see the sketch after the parameter file).
$ impdp xxxxx/yyyy@remote.db.domain/remote.db.service parfile=import.par
$ more import.par
network_link=lhcbr.cern.ch
schemas=lcg_lfc_lhcb,lcg_lfc_lhcb_r,lcg_lfc_lhcb_w
EXCLUDE=TABLE:"IN ('CNS_USERINFO','CNS_GROUPINFO')"
EXCLUDE=STATISTICS
FLASHBACK_TIME="TO_TIMESTAMP('02-11-2006 10:00:00', 'DD-MM-YYYY HH24:MI:SS')"
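The database link named in network_link must already exist in the database where impdp connects and point to a DBA account in the source database; with global_names set to TRUE, as required above, the link name has to match the global name of the source database. A sketch with placeholder credentials and TNS alias:
CREATE DATABASE LINK lhcbr.cern.ch
  CONNECT TO source_dba_user IDENTIFIED BY "password"
  USING 'source_tns_alias';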