WARNING: This web is not used anymore. Please use PDBService.Migration_to_new_nodes instead!
 

Migration of a RAC to new nodes

Purpose

This document describes the steps needed to move a RAC installation and its database to a new set of nodes.

Steps

  • Configure cluster interconnects, ssh, host equivalence and OS parameters on the nodes to which the RAC is being moved. Follow the instructions described here. Copy the .bashrc files from the source nodes.
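
Note: a minimal sketch of copying the .bashrc files, assuming password-less ssh as the oracle account is already in place between the source and target nodes:

       on each target node:
       scp <SOURCE_NODE_1>:~/.bashrc ~/
       ssh <SOURCE_NODE_1> date    # quick check that host equivalence works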

  • Update zoning on the FC switches to make the storage visible to the new nodes.

  • Export the contents of the cluster registry and stop the Oracle processes running on the source system:
           on the SOURCE_NODE_1:
           srvconfig -exp /tmp/ocr.txt
           srvctl stop database -d <DB_NAME>
           srvctl stop asm -n <SOURCE_NODE_1>
           srvctl stop asm -n <SOURCE_NODE_2>
           srvctl stop nodeapps -n <SOURCE_NODE_1>
           srvctl stop nodeapps -n <SOURCE_NODE_2>
           sudo /etc/init.d/init.crs stop
           on the SOURCE_NODE_2:
           sudo /etc/init.d/init.crs stop
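
Note: as an optional sanity check, one can verify on both source nodes that the clusterware stack is really down, e.g.:

       $ORA_CRS_HOME/bin/crsctl check crs
       ps -ef | grep -E 'crsd|ocssd|evmd'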
         

  • Back up the OCR, voting disk and ASM spfile:
           on the SOURCE_NODE_1:
           mkdir -p ~/work/rwd_backups
           cd ~/work/rwd_backups
           sudo dd if=/dev/raw/raw1 of=./ocr.bak bs=1024 count=100000
           sudo dd if=/dev/raw/raw2 of=./vd.bak bs=1024 count=100000
           sudo dd if=/dev/raw/raw3 of=./asmspfile.bak bs=1024 count=100000
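
           optionally, record the sizes and checksums of the backup files before proceeding:
           ls -l ~/work/rwd_backups
           md5sum ~/work/rwd_backups/*.bak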
         

  • Install the desired version of Oracle Clusterware on the new nodes, following the instructions here (the "Setup storage" and "Oracle clusterware installation" sections).

Note 1: To determine which /dev/sdXX devices to label with devlabel, one can use the ls and grep commands. For example, to identify the name of the first disk belonging to itstor13, one can execute:

       ll /dev/oracleasm/disks/ITSTOR13_1_EXT
       brw-rw----    1 oracle   ci        65,  49 Feb 13 17:29 /dev/oracleasm/disks/ITSTOR13_1_EXT

       ll /dev |grep "65,  49"
       brw-rw----    1 root     disk      65,  49 Jun 24  2004 sdt1

        ---> first disk of itstor13 = /dev/sdt
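
The same lookup can be scripted over all the ASM disk labels; a minimal sketch of the ls/grep method, assuming the /dev/oracleasm/disks layout shown above:

       # print, for each ASM disk label, the matching /dev device name
       for d in /dev/oracleasm/disks/*; do
           mm=$(ls -l "$d" | awk '{print $5 $6}')                    # e.g. "65,49"
           dev=$(ls -l /dev | awk -v mm="$mm" '($5 $6) == mm {print $NF}')
           echo "$d -> /dev/$dev"
       done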
     

Note 2: It is important to clear the devices to be used for the OCR, voting disk and ASM spfile before starting the installation procedure. For this purpose one can use, for example, the dd command.
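
For example, a sketch of clearing the raw devices used earlier in this procedure (adjust the count to the device size):

       sudo dd if=/dev/zero of=/dev/raw/raw1 bs=1024 count=100000
       sudo dd if=/dev/zero of=/dev/raw/raw2 bs=1024 count=100000
       sudo dd if=/dev/zero of=/dev/raw/raw3 bs=1024 count=100000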

  • Install Oracle RAC (same version as the one used on the source nodes). Again, follow the instructions here (the "RDBMS binaries installation" section).

Note: Before running the root.sh script, set the CRS_HOME environment variable:

       export CRS_HOME=/ORA/dbs01/product/10.2.0/crs
     

  • Apply to the new nodes the same security patches as the ones installed on the source system.

  • Copy from the source nodes the ASM parameter files, DB parameter files and tnsnames.ora files. Create password files and logging directories. Load the ASM spfile:
           on the TARGET_NODE_1:
           export DB_NAME=<name_of_database_being_migrated>
           export SOURCE_NODE_1=<name_of_the_first_source_node>
           
           cd $ORACLE_HOME/dbs
           scp ${SOURCE_NODE_1}:${PWD}/init+ASM1.ora ./
           scp ${SOURCE_NODE_1}:${PWD}/init${DB_NAME}1.ora ./
           (note: check the init.ora files; they should contain only one line, spfile=...)
           orapwd  file=./orapw+ASM1 password=xxxxxx entries=20
           orapwd  file=./orapw${DB_NAME}1 password=xxxxxx entries=20
    
           cd ../network/admin
           scp ${SOURCE_NODE_1}:${PWD}/tnsnames.ora ./
    
           scp ${SOURCE_NODE_1}:work/rwd_backups/asmspfile.bak ./
           sudo dd if=./asmspfile.bak of=/dev/raw/raw3 bs=1024 count=100000
           rm ./asmspfile.bak
    
           on the TARGET_NODE_2:
           export DB_NAME=<name_of_database_being_migrated>
           export SOURCE_NODE_2=<name_of_the_second_source_node>
           
           cd $ORACLE_HOME/dbs
           scp ${SOURCE_NODE_2}:${PWD}/init+ASM2.ora ./
           scp ${SOURCE_NODE_2}:${PWD}/init${DB_NAME}2.ora ./
           orapwd  file=./orapw+ASM2 password=xxxxxx entries=20
           orapwd  file=./orapw${DB_NAME}2 password=xxxxxx entries=20
    
           cd ../network/admin
           scp ${SOURCE_NODE_2}:${PWD}/tnsnames.ora ./
    
           on both target nodes:
           sudo mkdir /ORA/dbs00/oracle/
           sudo chown oracle:ci /ORA/dbs00/oracle/
           mkdir -p /ORA/dbs00/oracle/admin/+ASM/bdump
           mkdir -p /ORA/dbs00/oracle/admin/+ASM/cdump
           mkdir -p /ORA/dbs00/oracle/admin/+ASM/udump
    
           mkdir -p /ORA/dbs00/oracle/admin/${DB_NAME}/adump
           mkdir -p /ORA/dbs00/oracle/admin/${DB_NAME}/bdump
           mkdir -p /ORA/dbs00/oracle/admin/${DB_NAME}/cdump  
           mkdir -p /ORA/dbs00/oracle/admin/${DB_NAME}/udump
         

Note: the tnsnames.ora files may need editing.
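
For example, to spot entries that still point to the old nodes one could run (the node names are placeholders):

       grep -i -E '<SOURCE_NODE_1>|<SOURCE_NODE_2>' $ORACLE_HOME/network/admin/tnsnames.ora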

  • Add information about the database being migrated to the Cluster Registry. Use the contents of the OCR export file generated in one of the previous steps to determine the DB spfile location and the names of the services. Use NETCA to configure the listeners.
           on the TARGET_NODE_1:
           export TARGET_NODE_1=<name_of_the_target_node_1>
           export TARGET_NODE_2=<name_of_the_target_node_2>
           $ORA_CRS_HOME/bin/srvctl add asm -n ${TARGET_NODE_1} -i +ASM1 -o ${ORACLE_HOME}
           $ORA_CRS_HOME/bin/srvctl enable asm -n ${TARGET_NODE_1} -i +ASM1
           $ORA_CRS_HOME/bin/srvctl add asm -n ${TARGET_NODE_2} -i +ASM2 -o ${ORACLE_HOME}
           $ORA_CRS_HOME/bin/srvctl enable asm -n ${TARGET_NODE_2} -i +ASM2
    
           $ORA_CRS_HOME/bin/srvctl add database -d ${DB_NAME} -o ${ORACLE_HOME} -p '<DB_spfile_location>'
            (ex: $ORA_CRS_HOME/bin/srvctl add database -d ${DB_NAME} -o ${ORACLE_HOME} -p '+TEST1_DATADG1/test1/spfiletest1.ora')
           $ORA_CRS_HOME/bin/srvctl enable database -d ${DB_NAME}
    
           $ORA_CRS_HOME/bin/srvctl add instance -d ${DB_NAME} -i ${DB_NAME}1 -n ${TARGET_NODE_1}
           $ORA_CRS_HOME/bin/srvctl enable instance -d ${DB_NAME} -i ${DB_NAME}1 
           $ORA_CRS_HOME/bin/srvctl add instance -d ${DB_NAME} -i ${DB_NAME}2 -n ${TARGET_NODE_2}
           $ORA_CRS_HOME/bin/srvctl enable instance -d ${DB_NAME} -i ${DB_NAME}2
    
           netca
    
           $ORA_CRS_HOME/bin/srvctl start nodeapps -n ${TARGET_NODE_1}
           $ORA_CRS_HOME/bin/srvctl start nodeapps -n ${TARGET_NODE_2}
           $ORA_CRS_HOME/bin/srvctl start asm -n ${TARGET_NODE_1}
           $ORA_CRS_HOME/bin/srvctl start asm -n ${TARGET_NODE_2}
           $ORA_CRS_HOME/bin/srvctl start database -d ${DB_NAME}
    
           for each identified service do:
           srvctl add service -d ${DB_NAME} -s <name_of_the_service> -r <list_of_preferred_instances> -a <list_of_available_instances> -P BASIC

           srvctl start service -d ${DB_NAME}
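
           at this point it may be useful to verify that all registered resources are online, e.g.:
           $ORA_CRS_HOME/bin/crs_stat -t
           $ORA_CRS_HOME/bin/srvctl status database -d ${DB_NAME}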
         

  • On each node of the cluster, connect to the database instance and grant the 'SYSDBA' privilege to the 'PDB_ADMIN' user, for example:
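           a minimal sketch, assuming an OS-authenticated SYSDBA connection as the oracle user (the instance number is 1 or 2 depending on the node):
           export ORACLE_SID=${DB_NAME}<instance_number>
           sqlplus / as sysdba
           SQL> grant sysdba to PDB_ADMIN;
           SQL> exit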

  • Configure the devlabel startup and log rotation, and migrate the machines' aliases.

  • Install the EM agents and properly configure the EM targets.
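           for example, after the installation one can check that the agent is up and upload the new target configuration (standard emctl commands; <AGENT_HOME> is a placeholder for the agent installation home):
           <AGENT_HOME>/bin/emctl status agent
           <AGENT_HOME>/bin/emctl upload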

  • Configure TSM backups if needed.

  • Append the SSH public key of the pdb-backup machine to the authorized_keys and authorized_keys.local files on the new nodes.
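           a minimal sketch, assuming the public key has been saved as ~/pdb-backup.pub (placeholder name) and the authorized_keys files live under ~/.ssh of the oracle account:
           on both target nodes:
           cat ~/pdb-backup.pub >> ~/.ssh/authorized_keys
           cat ~/pdb-backup.pub >> ~/.ssh/authorized_keys.local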
Topic revision: r9 - 2006-05-02 - LucaCanali
 