10g RAC on Linux, removing cluster nodes
See also the corresponding setup steps in Installation procedure
Stop Oracle services on the node to be removed from the RAC cluster
- where relevant, relocate services running on the node to be evicted to the remaining cluster nodes
- check that you have ocr backups (and take one if needed) with: ocrconfig -showbackup
- run:
srvctl stop instance -d .. -i ..
srvctl remove instance -d .. -i ..
srvctl stop asm -n ..
srvctl remove asm -n ..
srvctl stop nodeapps -n ..
as root:
srvctl remove nodeapps -n ..
crs_unregister ora.itrac40.<listener_name>.lsnr
crs_unregister ora.<nodename>.vip
- check ifconfig -a and, if the VIP alias interface (ethX:Y) is still up, bring it down
- note 1: disabled CRS resources cannot be removed, so enable them before removal
- note 2: if a 'remove' command fails, for example because the services have already been partially dropped, the -f (force) flag can be used
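As an illustration, the whole sequence for a hypothetical database ORCL with instance ORCL2 on node itrac40 (all names made up, substitute your own) could look like this; it requires a live cluster, so it is shown only as a command transcript:

```shell
# hypothetical names: database ORCL, instance ORCL2, node itrac40
srvctl stop instance -d ORCL -i ORCL2
srvctl remove instance -d ORCL -i ORCL2
srvctl stop asm -n itrac40
srvctl remove asm -n itrac40
srvctl stop nodeapps -n itrac40
# as root:
srvctl remove nodeapps -n itrac40
crs_unregister ora.itrac40.LISTENER_ITRAC40.lsnr
crs_unregister ora.itrac40.vip
```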
Remove DB objects and parameters
- connect to the db
- if the database is in archivelog mode, make sure all online logs of the removed instance have been archived:
select * from v$log where thread#=X;
if needed, run: alter system archive log all;
then check again that the logs have been archived
- disable the redo thread of the removed instance:
alter database disable thread X;
- drop the logfile groups belonging to the disabled thread (repeat for each group):
alter database drop logfile group YY;
- drop the undo tablespace of the removed instance:
drop tablespace undo...;
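Taken together, for a hypothetical thread 2 with redo groups 3 and 4 and undo tablespace UNDOTBS2 (all names assumed, check your own with the queries above), the sequence would look like:

```sql
-- all names hypothetical: thread 2, groups 3-4, tablespace UNDOTBS2
select group#, thread#, status, archived from v$log where thread#=2;
alter system archive log all;
alter database disable thread 2;
alter database drop logfile group 3;
alter database drop logfile group 4;
-- 'including contents and datafiles' also removes the files on disk
drop tablespace UNDOTBS2 including contents and datafiles;
```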
- unset the DB parameters specific to the removed instance (sid='XX' below is the SID of the removed instance)
sqlsys_DB
alter system reset local_listener scope=spfile sid='XX';
alter system reset thread scope=spfile sid='XX';
alter system reset undo_tablespace scope=spfile sid='XX';
alter system reset instance_number scope=spfile sid='XX';
alter system reset "__db_cache_size" scope=spfile sid='XX';
alter system reset "__java_pool_size" scope=spfile sid='XX';
alter system reset "__large_pool_size" scope=spfile sid='XX';
alter system reset "__shared_pool_size" scope=spfile sid='XX';
alter system reset "__streams_pool_size" scope=spfile sid='XX';
create pfile='/tmp/pfileDB_for_further_checks' from spfile;
- analogously unset the ASM parameters
sqlsys_ASM
alter system reset local_listener scope=spfile sid='+ASM...';
alter system reset instance_number scope=spfile sid='+ASM...';
create pfile='/tmp/pfileASM_for_further_checks' from spfile;
- edit tnsnames.ora (on all nodes)
- edit the remote_listener entries
- edit the TNS entries (remove the node from the list of addresses)
- edit the AFS copy of tnsnames.ora
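As a sketch of the tnsnames cleanup, assuming a made-up service MYDB and VIP hostnames itrac40-v (removed) and itrac41-v, the evicted node's ADDRESS line can be dropped non-interactively with sed, shown here on a temporary copy:

```shell
# hypothetical entry: service MYDB, VIPs itrac40-v (removed) and itrac41-v
cat > /tmp/tnsnames_sample.ora <<'EOF'
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = itrac40-v)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = itrac41-v)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = MYDB))
  )
EOF
# drop every ADDRESS line that points at the removed node's VIP
sed -i '/HOST = itrac40-v/d' /tmp/tnsnames_sample.ora
grep -c ADDRESS /tmp/tnsnames_sample.ora    # 1
```

On a real cluster the same sed line would be applied to tnsnames.ora on every remaining node (and to the AFS copy).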
Remove the node from CRS
- check the current list of nodes: olsnodes -n
- on the node to be removed, run as root: $ORA_CRS_HOME/install/rootdelete.sh remote nosharedvar
- on a different node (one that stays in the cluster), run as root: $ORA_CRS_HOME/install/rootdeletenode.sh [nodename],[node_number] (node name and number as reported by olsnodes -n)
- check that the node is gone from the list: olsnodes -n
Update the Oracle inventory
- update the RDBMS inventory:
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=<node1>,<node2>,..
- update the CRS inventory:
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORA_CRS_HOME CRS=TRUE CLUSTER_NODES=<node1>,<node2>,..
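For example, if nodes itrac41 and itrac42 remain after removing itrac40 (node names assumed), the inventory update would be; note that the removed node must NOT appear in CLUSTER_NODES:

```shell
# remaining nodes only — itrac40 has been removed (hypothetical names)
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=itrac41,itrac42
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORA_CRS_HOME CRS=TRUE CLUSTER_NODES=itrac41,itrac42
```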
- if the EM agent is installed, stop it and update its node list as well
Remove Oracle files on the evicted node
- Main files of the Oracle installation
rm -rf /etc/oracle /etc/oratab /oraInst.loc
rm -rf $ORACLE_BASE/product $ORACLE_BASE/oraInventory
/etc/init.d/oracleasm stop
chkconfig --del oracleasm
rm /etc/init.d/oracleasm
service devlabel stop
chkconfig --del devlabel
rm -rf /var/tmp/.oracle
vi $HOME/.bashrc (remove entries related to Oracle instances)
rm /etc/logrotate.d/ora_*
vi /etc/modules.conf
(comment out the 'options qla2300' and 'options hangcheck-timer' lines)
rm /etc/qla2300.conf
rm /boot/initrd-2.4.21-37.ELsmp.img; mkinitrd /boot/initrd-2.4.21-37.ELsmp.img 2.4.21-37.ELsmp (customize with current kernel name)
rmmod qla2300
ifconfig eth1 down
ifconfig eth2 down
vi /etc/sysconfig/network-scripts/ifcfg-eth1 (edit: ONBOOT=no)
vi /etc/sysconfig/network-scripts/ifcfg-eth2 (edit: ONBOOT=no)
shutdown -r now
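The ONBOOT edits above can also be done non-interactively; a minimal sketch on a throw-away copy (on a real node the files live in /etc/sysconfig/network-scripts):

```shell
# work on a temporary copy so the sketch is safe to run anywhere
printf 'DEVICE=eth1\nONBOOT=yes\n' > /tmp/ifcfg-eth1
# keep the interface down across reboots
sed -i 's/^ONBOOT=yes$/ONBOOT=no/' /tmp/ifcfg-eth1
grep ONBOOT /tmp/ifcfg-eth1    # ONBOOT=no
```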