WARNING: This web is not used anymore. Please use PDBService.Installation_procedure_RHEL3 instead!
 

10g RAC on Linux - Installation Procedures RHEL3 (see RHEL4 for newer installations)

This document describes the installation steps for Oracle 10g for the physics database services at CERN. The key software and hardware components are: Oracle 10gR2, RAC, ASM, Linux RHEL 3, dual CPU Intel servers, SAN network, SATA disks in FC arrays (see also https://twiki.cern.ch/twiki/pub/PSSGroup/HAandPerf/Architecture_description.pdf)

OS Setup and Prerequisites

  • RHEL 3, the Oracle user and group, kernel parameters and other OS installation details, as set up by the FIO procedures, are prerequisites for the steps that follow
  • Optional tools (download the tarball CERNDB_scripts.tar.gz), for example using
    wget https://twiki.cern.ch/twiki/pub/PSSGroup/Installation_verbose/CERNDB_scripts.tar.gz
         
    • pconsole. Parallel console tool: it connects to each node of the cluster simultaneously, and administrative commands typed in a dedicated window are 'multiplied' to each of the open connections.
    • clu-ssh. A wrapper script for ssh.
    • storage_reorg directory. A set of scripts to add and configure LUNs on Linux and with ASMlib. For example, it contains exec_addasmlib.sh to automatically map 128 partitions to ASMlib disks.
    • check_rac. Checks kernel configuration parameters.
    • crsstat.sh. Displays Oracle Clusterware service status.
    • listdisks.sql. Script to list ASM disks (run from an ASM instance).
    • login.sql. Our sqlplus login script.
    • rlwrap. Provides command-line history for sqlplus (see also https://twiki.cern.ch/twiki/bin/viewauth/DbaServices/ToolsForOracle).
    • logrot_ora_install. A tool to install log rotation for bdump, udump and listener.log.

Configure the private interconnect

On all cluster nodes as the root user (pconsole can be useful for more than 2 nodes):
  • Edit /etc/hosts to add cluster interconnect names, virtual ip names, etc (see sample below)
  • Edit and check network configs /etc/sysconfig/network-scripts/ifcfg-ethX (X=0,1,2)
    • eth0 is the public interface, should be OK
    • for eth1 and eth2 use the example in the docs below (note NETMASK and ONBOOT values)
  • Run /etc/rc.d/init.d/network restart to activate the new network configs
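A sample of the relevant configuration follows; the hostnames, IP addresses and netmask are illustrative values (the private interconnect names and 172.31.x.x addresses match the ssh example further below) and must be adapted to the actual cluster:

# /etc/hosts (excerpt)
<public-ip>     servername1.cern.ch servername1       # public interface (eth0)
<vip-ip>        servername1-v.cern.ch servername1-v   # virtual IP (VIP)
172.31.5.1      test1-priv1-3                         # private interconnect 1 (eth1)
172.31.6.1      test1-priv2-3                         # private interconnect 2 (eth2)

# /etc/sysconfig/network-scripts/ifcfg-eth1 (private interconnect)
DEVICE=eth1
BOOTPROTO=static
IPADDR=172.31.5.1
NETMASK=255.255.255.0
ONBOOT=yes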

Setup ssh and host equivalence

On all cluster nodes as the oracle user (pconsole can be useful for more than 2 nodes):
  • if needed: mkdir ~/.ssh; chmod 700 ~/.ssh
  • /usr/bin/ssh-keygen -t rsa (default location and no password)
  • /usr/bin/ssh-keygen -t dsa (default location and no password)
  • create the ~oracle/.ssh/config file with the following contents:
Host *
ForwardX11 no
StrictHostKeyChecking=no
  • Choose a cluster node to setup the authorized_keys file (use the clu-ssh script):
    • cd $HOME/scripts and edit racnodes.db
    • clu-ssh racnodes "cat ~/.ssh/id_rsa.pub ~/.ssh/id_dsa.pub" >> ~/.ssh/authorized_keys.local
    • cat ~/.ssh/authorized_keys.local >> ~/.ssh/authorized_keys
    • deploy authorized_keys files using scp (ex: scp ~/.ssh/authorized_keys ~/.ssh/authorized_keys.local srv2:~/.ssh)
    • edit the file known_hosts: for each entry add a comma-separated (no spaces) list of the IPs and host aliases for the public and private networks (see the sample entry at the end of this list)
    • deploy the file known_hosts, using scp (ex: scp ~/.ssh/known_hosts srv2:~/.ssh)
  • check config. Example: $ clu-ssh racnodes "ssh test1-priv1-03 date"
  • optional: remove from the .ssh/config file the entry StrictHostKeyChecking=no
  • Note: register the configuration of the new nodes using DAM, or else a script running at 2am will overwrite them
  • Note: rm /etc/issue.net. This is important, otherwise the Oracle installer will have problems connecting to the remote nodes
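Sample known_hosts entry (one line per host; the aliases and IPs are illustrative and the key material is abbreviated):

servername1,servername1.cern.ch,servername1-v,test1-priv1-3,test1-priv2-3,172.31.5.1,172.31.6.1 ssh-rsa AAAAB3Nza...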

Other checks and OS parameters

  • On all cluster nodes: optionally run the ./check_rac.sh script (see below) across the cluster nodes. On 10gR2 optionally run Oracle's Cluster Verification Utility (CVU); a sample invocation is sketched below
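A possible CVU invocation, run as the oracle user from the 10gR2 clusterware CD (the node names are examples):

./runcluvfy.sh stage -pre crsinst -n itracXX,itracYY -verbose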

  • On all cluster nodes using pconsole: echo "options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180" >> /etc/modules.conf

  • On all cluster nodes using pconsole: echo "/sbin/modprobe hangcheck-timer" >> /etc/rc.d/rc.local

  • On all cluster nodes using pconsole: echo "session required /lib/security/pam_limits.so" >> /etc/pam.d/login

  • On all cluster nodes using pconsole: edit .bashrc as in the example below
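The full example file is attached to the original page; below is a minimal sketch of the Oracle-related .bashrc settings, assuming the Oracle Home locations used in this document (ORACLE_BASE and ORACLE_SID are illustrative, and the SID value changes per node):

# Oracle environment (sketch)
export ORACLE_BASE=/ORA/dbs01/oracle
export ORA_CRS_HOME=/ORA/dbs01/oracle/product/10.2.0/crs
export ORACLE_HOME=/ORA/dbs01/oracle/product/10.2.0/rdbms
export ORACLE_SID=mydb1
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH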

  • On all cluster nodes using pconsole:
    • mkdir /ORA/dbs00/oracle
    • chown oracle:ci /ORA/dbs00/oracle
    • chown oracle:ci /ORA/dbs01/oracle

  • On all cluster nodes using pconsole: vi /etc/oraInst.loc
    • inventory_loc=/ORA/dbs01/oracle/oraInventory
    • inst_group=ci

  • Check that the /ORA/dbs00 and /ORA/dbs01 filesystems are ext3 (run the mount command). Otherwise umount them, run tune2fs -j /dev/sda..., change the corresponding entries in /etc/fstab and remount the filesystems (see the sketch below)
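A sketch of the conversion, assuming the filesystem is on /dev/sda5 (check the real device name in /etc/fstab before running tune2fs):

umount /ORA/dbs01
tune2fs -j /dev/sda5        # adds the ext3 journal to the ext2 filesystem
# change the filesystem type from ext2 to ext3 in the /etc/fstab entry
mount /ORA/dbs01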

Setup storage: multipathing, rawdevices and asmlib disks

Raw devices are set up using devlabel to guarantee persistence across reboots: 6 raw devices per cluster are needed. Disks for DB storage are mapped via ASMlib (root access is needed). NOTE: some of the operations described below are irreversible, therefore extra care should be taken not to erase needed data.

  • Setup multipathing
    • enables the use of redundant paths to access the storage arrays
    • enables failover over the two HBA ports
    • enables load balancing at the LUN level (improves performance)
    • see also multipathing configuration short guide

  • Check the shared storage mapped to all cluster nodes and identify the LUN-to-disk mapping (that is /dev/sdXX to physical disk mapping):
    • fdisk -l |grep Disk
    • cat /proc/scsi/qla2300/1 /proc/scsi/qla2300/2
    • crosscheck the list of 'SCSI Device Information' obtained above with the WWPNs of the storage array controllers

  • Identify and partition the LUN to be used for raw devices.
    • they are visible from the OS as disks of about 1GB in size.
    • example: /dev/sdb, /dev/sds, /dev/sdaj, /dev/sdba
    • clean the disk partitions with dd (see also exec_cleandisk_prepart.sh in the scripts directory and the sketch below)
    • create 3 primary partitions of 256M on each disk using fdisk
    • sync partitions on other cluster nodes. This can be done with partprobe or with fdisk by rewriting the partition table(s).
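A sketch of the clean-and-partition sequence for one LUN; /dev/sdb is an example device, double check it before running dd (this destroys any data on the disk):

dd if=/dev/zero of=/dev/sdb bs=1M count=10   # wipe old partition table and headers
fdisk /dev/sdb                               # interactively create the 3 primary partitions of 256M
partprobe                                    # re-read the partition tables (run on the other nodes too)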

  • Configure rawdevices using devlabel (instead of using the rawdevices service)
    • On one cluster node add all the needed rawdevices volumes (6 raw devices are needed for 10gR2).
    • note: raw1, raw11 -> OCR; raw2, raw12, raw22 -> voting disk; raw3 -> ASM spfile

devlabel add -s /dev/raw/raw1 -d /dev/sdb1
devlabel add -s /dev/raw/raw11 -d /dev/sds1
devlabel add -s /dev/raw/raw2 -d /dev/sdb2
devlabel add -s /dev/raw/raw12 -d /dev/sds2
devlabel add -s /dev/raw/raw22 -d /dev/sdaj2
devlabel add -s /dev/raw/raw3 -d /dev/sdaj3
    • Copy the /etc/sysconfig/devlabel file to all the other cluster nodes and run devlabel reload there.
    • On all cluster nodes: chown oracle:ci /dev/raw/raw[123] /dev/raw/raw1[12] /dev/raw/raw22

  • On all cluster nodes using pconsole: setup devlabel for automatic startup
    • create the /etc/init.d/devlabel file (see the example in the doc list below and the minimal sketch that follows)
    • chmod +x /etc/init.d/devlabel
    • run chkconfig devlabel on
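The full init script is attached to the original page; below is a minimal sketch that reloads the devlabel mappings at boot and fixes the raw device ownership (the devlabel path and the chkconfig run levels are assumptions to be verified on the target nodes):

#!/bin/bash
# chkconfig: 345 20 80
# description: reload devlabel raw device mappings at boot
case "$1" in
  start)
    /sbin/devlabel reload
    chown oracle:ci /dev/raw/raw[123] /dev/raw/raw1[12] /dev/raw/raw22
    ;;
  stop)
    ;;
esac
exit 0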

  • On all cluster nodes configure ASMlib: /etc/init.d/oracleasm configure
    • Default user and group: oracle, ci
    • autostart on boot = yes and fix permissions on boot = yes

  • On one node configure disk devices with ASMlib
      • identify each LUN with its storage array number and disk volume (e.g.: /dev/sdb -> ITSTOR01-VOL1)
    • use scripts (see storage_reorg.tar or the scripts directory) to
      • format disks with 2 partitions (Ex: p1=200GB, p2=200GB)
      • clean partitions headers
      • associate partitions to ASMlib disks

Sample script from storage_reorg.tar (or scripts directory)

#!/bin/bash
# Map the pre-partitioned LUNs of one storage array to ASMlib disks.
# Each /dev/sdX holds two partitions: partition 1 -> <array>_<n>_ext disk,
# partition 2 -> <array>_<n>_int disk. Run the loop over the group that
# matches the array being configured (group1..group4).

group1="c d e f g h i j k l m n o p q r"
group2="t u v w x y z aa ab ac ad ae af ag ah ai"
group3="ak al am an ao ap aq ar as at au av aw ax ay az"
group4="bb bc bd be bf bg bh bi bj bk bl bm bn bo bp bq"

diskname="itstor01"
i=1
for letter in $group1
do
 /etc/init.d/oracleasm createdisk ${diskname}_${i}_ext /dev/sd${letter}1
 /etc/init.d/oracleasm createdisk ${diskname}_${i}_int /dev/sd${letter}2
 let i=$i+1
done

  • Sync storage config on all cluster nodes
    • sync partition tables with partprobe or sudo /sbin/rmmod qla2300; sudo /sbin/modprobe qla2300
    • load asmlib config /etc/init.d/oracleasm scandisks
    • (optional) devlabel reload
    • check results with /etc/init.d/oracleasm listdisks|wc -l

  • Write to netops and ask for network aliases to be used in the tnsnames.ora

Oracle binaries Installation

Oracle rdbms and Oracle clusterware need to be installed in separate oracle homes.

Oracle clusterware installation

Oracle Clusterware installation in a separate home from the DB is the first installation step. The following info will be needed: node names, public and private interface details, VIP names. Use eth0 as the cluster interface.

  • mkdir /ORA/dbs01/oracle/oraInventory
  • vi /etc/oraInst.loc
    • inventory_loc=/ORA/dbs01/oracle/oraInventory
    • inst_group=ci

  • Install from clusterware CD
    • ORA_CRS_HOME=/ORA/dbs01/oracle/product/10.2.0/crs
    • configure cluster and node names
      • the cluster name can be the name of the DB
      • node names (public and private) -> edit as in the /etc/hosts file
    • 6 raw devices are needed:
      • 2 copies of the OCR (/dev/raw/raw1, /dev/raw/raw11); take care to place them on 2 different disk arrays
      • 3 copies of the voting disk (/dev/raw/raw2, /dev/raw/raw12, /dev/raw/raw22) on 3 different disk arrays
      • /dev/raw/raw3 for the ASM spfile

  • Stop CRS on all nodes: $ORA_CRS_HOME/bin/crsctl stop crs
    • apply the latest patchset and patches, as required

RDBMS binaries installation

For a 2-node RAC a standard installation with the Oracle runInstaller can be done. Alternatively, the following approach (cloning) can be followed (advised for RACs with more than 2 nodes):

  • Use Oracle runInstaller to install RAC on first node only
    • note: you can skip to cloning (next paragraph) if you have a .tar image of the Oracle Home taken from another installation
    • ORACLE_HOME=/ORA/dbs01/oracle/product/10.2.0/rdbms
    • install Oracle Enterprise Edition (default)
    • choose cluster install on first node only
    • Install only the software (no DB creation at this stage)
    • Apply patchsets and security patches on the installed node

  • Clone the Oracle Home to the other cluster nodes
    • on the node where the installation and patch was done, create a TAR image
      • cd $ORACLE_HOME/..; tar cfp rdbms_10_2_0_2.tar rdbms
    • distribute the tar (Ex: scp rdbms_10_2_0_2.tar srv2:$PWD)
    • at the destinations
      • tar xfp rdbms_10_2_0_2.tar
    • find out oracle_home path and oracle_home name on the installed node
      • cat /ORA/dbs01/oracle/oraInventory/ContentsXML/inventory.xml
    • on the new nodes perform the cloning operation
      • cd $ORACLE_HOME/clone/bin
      • perl clone.pl ORACLE_HOME="/ORA/dbs01/oracle/product/10.2.0/rdbms" ORACLE_HOME_NAME="OraDb10g_rdbms" '-O"CLUSTER_NODES={itracXX,itracYY}"' '-O"LOCAL_NODE=itracXX"' (edit node names)
      • repeat for all new nodes, editing LOCAL_NODE value
      • run root.sh on new nodes, as instructed by clone.pl
    • update the nodelist on the first installed node
      • (edit node names) cd $ORACLE_HOME/oui/bin; ./runInstaller -updateNodeList ORACLE_HOME="/ORA/dbs01/oracle/product/10.2.0/rdbms" ORACLE_HOME_NAME="OraDb10g_rdbms" CLUSTER_NODES="node_A,node_B" LOCAL_NODE="node_A"

  • Run netca to configure listener.ora for the cluster nodes (only for 10gr1: run vipca before this step)
    • cluster configuration
    • listener name: LISTENER (each node will get a suffix with the node name automatically)
    • choose a port
    • after netca, manually remove the EXTPROC entry from listener.ora (and tnsnames.ora)

  • 10gR1 only: post-installation step to enable async I/O as per Metalink note 225751.1
    • cd $ORACLE_HOME/rdbms/lib
    • make PL_ORALIBS=-laio -f ins_rdbms.mk async_on
    • check with: nm $ORACLE_HOME/bin/oracle | grep io_getevent (the output should be w io_getevents@@LIBAIO_0.1)

Database creation

ASM instances and diskgroups creation

  • run dbca, select to configure ASM for all nodes
  • spfile in raw device (if using the convention described above it's /dev/raw/raw3)
    • tune parameters:
*.asm_diskgroups='' -> Note: will be populated after group creation
*.asm_diskstring='ORCL:*'
*.db_cache_size=50M
*.cluster_database=true
*.cluster_database_instances=4
*.instance_type='asm'
*.large_pool_size=50M
*.processes=100
*.remote_login_passwordfile='exclusive'
*.sga_max_size=200M
*.shared_pool_size=70M
*.user_dump_dest='/ORA/dbs00/oracle/admin/+ASM/udump'
*.background_dump_dest='/ORA/dbs00/oracle/admin/+ASM/bdump'
*.core_dump_dest='/ORA/dbs00/oracle/admin/+ASM/cdump'
+ASM4.instance_number=4
+ASM3.instance_number=3
+ASM1.instance_number=1
+ASM2.instance_number=2
+ASM1.local_listener='LISTENER_+ASM1'
+ASM2.local_listener='LISTENER_+ASM2'
+ASM3.local_listener='LISTENER_+ASM3'
+ASM4.local_listener='LISTENER_+ASM4'

  • don't use dbca to create diskgroups. Instead, exit and run sqlsys_ASM (sqlplus to ASM instance with command line history). Example:
create diskgroup comp_datadg1
failgroup fg1 disk 'ORCL:ITSTOR01*_EXT','ORCL:ITSTOR14*_EXT'
failgroup fg2 disk 'ORCL:ITSTOR05*_EXT','ORCL:ITSTOR15*_EXT';

create diskgroup comp_recodg1
failgroup fg1 disk 'ORCL:ITSTOR01*_INT','ORCL:ITSTOR14*_INT'
failgroup fg2 disk 'ORCL:ITSTOR05*_INT','ORCL:ITSTOR15*_INT';

  • check the spfile parameter *.asm_diskgroups; it may happen that the change has been applied to +ASM1.asm_diskgroups instead of the '*' entry. In that case:
    • shutdown the ASM instances: srvctl stop asm -n ...
    • sqlsys_ASM -> create pfile='/tmp/pfileasm.txt' from spfile='/dev/raw/raw3';
    • vi /tmp/pfileasm.txt
    • sqlsys_ASM -> create spfile='/dev/raw/raw3' from pfile='/tmp/pfileasm.txt';
    • srvctl start asm -n ...
    • check: sqlsys_ASM -> select * from v$asm_diskgroup;

Database instances creation

  • run dbca, select to create cluster database for all nodes
  • custom database (not from template), no EM
  • ASM storage, select diskgroup to use (ex: comp_datadg1)
  • specify the flash recovery area diskgroup (ex: comp_recodg1) and its size. No archivelog (it will be enabled post-install)
  • specify options to install (ex: leave only java and xdb)
  • parameters:
    • use defaults, but check: block size 8k, character set used.
    • change adump,bdump,udump,cdump parameters (advanced parameters) to use the /ORA/dbs00 filesystem
    • the rest of the parameters can be changed in post install
  • select create database and generate database creation scripts
  • click finish (click only once and wait for a few minutes)

  • fine tune db parameters:
    • sqlsys_DB -> create pfile='/tmp/initdb.txt' from spfile;
    • edit /tmp/initdb.txt (see the parameter values below)
    • shut down the DB instances (srvctl stop database -d dbname) and start up one instance in nomount mode
    • sqlsys_DB -> ex: create spfile='+COMP_DATADG1/compr/spfilecompr.ora' from pfile='/tmp/initdb.txt';

*.cluster_database=TRUE
*.cluster_database_instances=4
*.compatible='10.2.0.2.0'
*.db_block_size=8192
*.db_create_file_dest='+DATA_DG1'
*.db_domain='cern.ch'
# autotuned in 10.2 -> don't set *.db_file_multiblock_read_count (128) 
*.db_name=..dbname..
*.db_recovery_file_dest='+recovery_db1'
*.db_recovery_file_dest_size=1024g
*.global_names=TRUE
*.job_queue_processes=10
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
*.log_archive_format='log_%t_%s_%r.arc'
*.log_buffer=10485760 
*.open_cursors=300
*.parallel_max_servers=0  # if streams parallel propagation is needed set this >0
*.pga_aggregate_target=1400m
*.processes=800
*.recyclebin=OFF
*.remote_listener='...listener_alias_here....'
*.remote_login_passwordfile='exclusive'
*.resource_limit=TRUE
*.sga_target=2200m
*.streams_pool_size=256m
*.undo_management='AUTO'
*.undo_retention=3600
*.audit_file_dest='/ORA/dbs00/oracle/admin/..dbname../adump'
*.core_dump_dest='/ORA/dbs00/oracle/admin/..dbname../cdump'
*.background_dump_dest='/ORA/dbs00/oracle/admin/..dbname../bdump'
*.user_dump_dest='/ORA/dbs00/oracle/admin/..dbname../udump'
*.audit_trail='db'
inst1.local_listener='LISTENER_INT1' (edit as explained in the note below)
OTHER parameters with instance dependent values: instance_number, local_listener, thread, undo_tablespace

  • NOTE: configure local_listener even when using port 1521; check also the listener alias in tnsnames.ora. The server name to be used is the VIP address with its fully qualified name (Ex: ..-v.cern.ch). Edit tnsnames.ora accordingly (a sample alias is sketched below).
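A sketch of the tnsnames.ora aliases used by local_listener and remote_listener; the node names are placeholders, the remote alias name (LISTENERS_..DBNAME..) is only an illustration following this document's ..dbname.. convention, and the port must match the one chosen with netca:

LISTENER_INT1 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = itracXX-v.cern.ch)(PORT = 1521))

LISTENERS_..DBNAME.. =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = itracXX-v.cern.ch)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = itracYY-v.cern.ch)(PORT = 1521))
  )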

  • apply catcpu from the latest security patch. Ex:
         cd $ORACLE_HOME/cpu/CPU...2006
         sqlsys_DB
         SQL> select count(*) from dba_objects where status='INVALID';
         SQL> @catcpu.sql
         SQL> @?/rdbms/admin/utlrp.sql
         SQL> select count(*) from dba_objects where status='INVALID';
         SQL> exit
    

  • change redo log size, number and multiplexing as appropriate. Ex: add 5 redo groups per thread, no multiplexing, redo size 1g, and drop the old redo logs
       alter database add logfile thread 1 group 11 '+INT5R_DATADG1' size 1g;
       alter database add logfile thread 2 group 21 '+INT5R_DATADG1' size 1g;
       ...
        -- drop old redo logs
       (on all instances) alter system switch logfile; 
       alter system checkpoint global;
       (alternative: alter system archive log all)
       alter database drop logfile group 1; 
       ...
       
  • change undo and temp tablespace sizes as appropriate (see the sketch below)
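A sketch of possible resize commands; the file names are illustrative OMF-style names, query dba_data_files and dba_temp_files for the actual ones:

       alter database datafile '+COMP_DATADG1/compr/datafile/undotbs1.265.1' resize 10g;
       alter database tempfile '+COMP_DATADG1/compr/tempfile/temp.264.1' resize 10g;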
  • (optional) revoke from public unneeded privs, such as execute on vulnerable packages
         revoke execute on sys.utl_tcp from public;
         revoke execute on sys.utl_http from public;
         revoke execute on sys.utl_smtp from public;
         revoke execute on sys.dbms_export_extension from public;
       
  • Edit tnsnames.ora
    • local tnsnames: in particular the service_name parameter (add .cern.ch where appropriate); a sample alias is sketched below
    • AFS version
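A sketch of a connection alias for the local tnsnames.ora; the VIP host names are placeholders (following the ..dbname.. convention of this document) and the service_name includes the cern.ch domain as noted above:

..DBNAME.. =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = itracXX-v.cern.ch)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = itracYY-v.cern.ch)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ..dbname...cern.ch)
    )
  )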

Post Installation

  • Install EM agent in a separate Oracle_Home
  • setup RAC services and tns_names aliases
  • setup monitoring
  • Setup backup
  • See also the post-install steps in the PSS wiki 'dba area'


Document history:

  • Version 1 - first version, Luca.Canali -AT- cern.ch Sep-2005
  • Version 2 - fully updated, revised and tested on 10g R2, Luca Feb-2006
  • Version 2.1 - Minor revisions, Luca Apr 2006
  • Version 2.2 - Added entry for /etc/rc.local, Luca,Nov 2006
  • END of life, see installation procedure doc (RHEL4)