Migration of a RAC to new nodes
Purpose
This document describes the steps needed to move a RAC installation and its database to a new set of nodes.
Steps
- Configure cluster interconnects, ssh, host equivalence and OS parameters on the nodes to which the RAC is being moved. Follow the instructions described here. Copy the .bashrc files from the source nodes, for example as shown below.
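For example, on each target node (the source node name is a placeholder):
scp <name_of_the_corresponding_source_node>:~/.bashrc ~/.bashrc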
- Update zoning on the FC switches to make the storage visible to the new nodes.
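After the zoning change it is worth verifying that the new LUNs are visible on the target nodes, for example (indicative commands only, a reboot or SCSI rescan may be needed first):
cat /proc/scsi/scsi
ls -l /dev/sd*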
- Install the desired version of Oracle Clusterware on the new nodes, following the instructions here (the "Setup storage" and "Oracle Clusterware installation" sections).
Note 1: To determine which /dev/sdXX devices to label with devlabel one can use the ls and grep commands. E.g. to identify the name of the first disk belonging to itstor13 one can execute:
ll /dev/oracleasm/disks/ITSTOR13_1_EXT
brw-rw---- 1 oracle ci 65, 49 Feb 13 17:29 /dev/oracleasm/disks/ITSTOR13_1_EXT
ll /dev |grep "65, 49"
brw-rw---- 1 root disk 65, 49 Jun 24 2004 sdt1
---> first disk of itstor13 = /dev/sdt
Note 2: It is important to clear the devices to be used for the OCR, the voting disk and the ASM spfile before starting the installation procedure. For this purpose one can use, for example, the "dd" command.
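For example, to zero out the beginning of one of the raw devices (the device name below is a placeholder, double-check it before running dd):
sudo dd if=/dev/zero of=/dev/raw/rawN bs=1024 count=100000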
- Install Oracle RAC (the same version as the one used on the source nodes). Again, follow the instructions here (the "RDBMS binaries installation" section).
Note: Before running the root.sh script, set the CRS_HOME environment variable:
export CRS_HOME=/ORA/dbs01/product/10.2.0/crs
- Apply to the new nodes the same security patches as those installed on the source system (an indicative way to compare the installed packages is sketched below).
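For example, assuming the patches are delivered as RPM packages, the package lists of a source node and a target node can be dumped and compared (host names in the file names are placeholders):
rpm -qa | sort > /tmp/rpmlist_$(hostname -s).txt
diff /tmp/rpmlist_<source_node>.txt /tmp/rpmlist_<target_node>.txt
(after copying both lists to the same node)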
- Copy from the source nodes: the ASM parameter files, the DB parameter files and the tnsnames.ora files. Create the password files and logging directories. Load the ASM spfile:
on the TARGET_NODE_1:
export DB_NAME=<name_of_database_being_migrated>
export TARGET_NODE_1=<name_of_the_first_target_node>
export SOURCE_NODE_1=<name_of_the_first_source_node>
cd $ORACLE_HOME/dbs
scp ${SOURCE_NODE_1}:${PWD}/init+ASM1.ora ./
scp ${SOURCE_NODE_1}:${PWD}/init${DB_NAME}1.ora ./
(note: check the init.ora files, they should only contain one line: spfile=...)
orapwd file=./orapw+ASM1 password=xxxxxx entries=20
orapwd file=./orapw${DB_NAME}1 password=xxxxxx entries=20
cd ../network/admin
scp ${SOURCE_NODE_1}:${PWD}/tnsnames.ora ./
scp ${SOURCE_NODE_1}:work/rwd_backups/asmspfile.bak ./
sudo dd if=./asmspfile.bak of=/dev/raw/raw3 bs=1024 count=100000
rm ./asmspfile.bak
on the TARGET_NODE_2:
export DB_NAME=<name_of_database_being_migrated>
export TARGET_NODE_2=<name_of_the_second_target_node>
export SOURCE_NODE_2=<name_of_the_second_source_node>
cd $ORACLE_HOME/dbs
scp ${SOURCE_NODE_2}:${PWD}/init+ASM2.ora ./
scp ${SOURCE_NODE_2}:${PWD}/init${DB_NAME}2.ora ./
orapwd file=./orapw+ASM2 password=xxxxxx entries=20
orapwd file=./orapw${DB_NAME}2 password=xxxxxx entries=20
cd ../network/admin
scp ${SOURCE_NODE_2}:${PWD}/tnsnames.ora ./
on both target nodes:
sudo mkdir /ORA/dbs00/oracle/
sudo chown oracle:ci /ORA/dbs00/oracle/
mkdir -p /ORA/dbs00/oracle/admin/+ASM/bdump
mkdir -p /ORA/dbs00/oracle/admin/+ASM/cdump
mkdir -p /ORA/dbs00/oracle/admin/+ASM/udump
mkdir -p /ORA/dbs00/oracle/admin/${DB_NAME}/adump
mkdir -p /ORA/dbs00/oracle/admin/${DB_NAME}/bdump
mkdir -p /ORA/dbs00/oracle/admin/${DB_NAME}/cdump
mkdir -p /ORA/dbs00/oracle/admin/${DB_NAME}/udump
Note: the copied tnsnames.ora files may need editing, see the indicative example below.
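Typically the HOST fields have to point to the new nodes (or their VIPs). An indicative entry, with placeholder names:
<db_name>1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <target_node_1_vip>)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = <db_name>)
      (INSTANCE_NAME = <db_name>1)
    )
  )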
- On each node of the cluster, connect to the database instance and grant the 'SYSDBA' privilege to the 'PDB_ADMIN' user, for example:
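For example, connecting locally as SYSDBA (the ORACLE_SID value is indicative, use the local instance name; ${DB_NAME}2 on the second node):
export ORACLE_SID=${DB_NAME}1
sqlplus / as sysdba
SQL> grant sysdba to PDB_ADMIN;
SQL> exit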
- Configure devlabel startup and log rotation, and migrate the machines' aliases.
- Install the EM agents and configure the EM targets properly (an indicative post-install check is sketched below).
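For example, after the agent installation its status and the upload to the management service can be checked with (the agent home path is a placeholder):
<agent_home>/bin/emctl status agent
<agent_home>/bin/emctl upload agent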
- Configure TSM backups if needed.
- Append the SSH public key of the pdb-backup machine to the authorized_keys and authorized_keys.local files on the new nodes, for example:
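For example, assuming the public key has been copied to the target node as /tmp/pdb-backup_key.pub (the file name is a placeholder):
cat /tmp/pdb-backup_key.pub >> ~/.ssh/authorized_keys
cat /tmp/pdb-backup_key.pub >> ~/.ssh/authorized_keys.local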