WARNING: This web is no longer used. Please use PDBService.Adding_nodes instead!
On each NODE (new and old), set the following environment variables:

  export OLD_NODE_1=<hostname of the first existing node of the cluster>
  export OLD_NODE_2=<hostname of the second existing node of the cluster>
  ...
  export NEW_NODE_1=<hostname of the first machine to be added to the cluster>
  export NEW_NODE_2=<hostname of the second machine to be added to the cluster>
  ...
  export ASM_OLD_NAME_1=<name of the ASM instance on the first old node>
  export ASM_OLD_NAME_2=<name of the ASM instance on the second old node>
  ...
  export ASM_NEW_NAME_1=<name of the ASM instance on the first new node>
  export ASM_NEW_NAME_2=<name of the ASM instance on the second new node>
  ...

On each NEW_NODE, copy the Oracle environment settings from an existing node and adjust as needed:

  scp oracle@${OLD_NODE_1}:.bashrc ~/
  vi ~/.bashrc
  source ~/.bashrc
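A quick sanity check that the copied .bashrc sets the expected Oracle environment (variable names assumed to follow the standard setup on the old nodes):

  env | grep -E 'ORACLE|ORA_CRS'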
On each NEW_NODE, configure the private network interfaces:

  sudo vi /etc/sysconfig/network-scripts/ifcfg-eth1

    DEVICE=eth1
    BOOTPROTO=static
    IPADDR=192.168.XXX.XXX
    NETMASK=255.255.255.0
    ONBOOT=yes
    TYPE=Ethernet

  sudo vi /etc/sysconfig/network-scripts/ifcfg-eth2

    DEVICE=eth2
    BOOTPROTO=static
    IPADDR=192.168.XXX.XXX
    NETMASK=255.255.255.0
    ONBOOT=yes
    TYPE=Ethernet

  sudo /etc/rc.d/init.d/network restart
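To verify the interfaces came up with the configured addresses (a simple check, not part of the original procedure):

  /sbin/ifconfig eth1
  /sbin/ifconfig eth2
  ping -c 3 <private IP of one of the old nodes>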
Update the /etc/hosts file. The best way to do this is to update the file on one of the existing RAC nodes, adding entries describing the private, public, and virtual interfaces of the new nodes, and then distribute this file to all RAC nodes.
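For illustration, the entries for one node might look like the following (IP addresses and the VIP hostname are purely illustrative, reusing the naming scheme from the known_hosts example below):

  137.138.35.79   itrac29.cern.ch   itrac29       # public interface
  172.31.5.1      compr-priv1-1                   # private interface (eth1)
  172.31.6.1      compr-priv2-1                   # private interface (eth2)
  137.138.35.80   itrac29-v.cern.ch itrac29-v     # virtual interface (VIP)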
Update the .ssh/authorized_keys.local and .ssh/authorized_keys files and distribute them to all nodes of the cluster.

On each NEW_NODE:

  /usr/bin/ssh-keygen -t rsa
  /usr/bin/ssh-keygen -t dsa
  vi ~oracle/.ssh/config

    Host *
      ForwardX11 no
      StrictHostKeyChecking=no

On OLD_NODE_1:

  wget https://twiki.cern.ch/twiki/pub/PSSGroup/Installation_verbose/clu-ssh
  chmod +x clu-ssh
  echo "$OLD_NODE_1 $OLD_NODE_2 $NEW_NODE_1 $NEW_NODE_2" > racnodes.db
  ./clu-ssh racnodes "cat ~/.ssh/id_rsa.pub ~/.ssh/id_dsa.pub" >> ~/.ssh/authorized_keys.local
  cat ~/.ssh/authorized_keys.local >> ~/.ssh/authorized_keys
  scp ~/.ssh/authorized_keys ~/.ssh/authorized_keys.local ${OLD_NODE_2}:~/.ssh
  scp ~/.ssh/authorized_keys ~/.ssh/authorized_keys.local ${NEW_NODE_1}:~/.ssh
  scp ~/.ssh/authorized_keys ~/.ssh/authorized_keys.local ${NEW_NODE_2}:~/.ssh
Edit the .ssh/known_hosts file. For each entry add a comma-separated (no spaces) list of IPs and host aliases for the public and private networks. Example of a complete entry:

  itrac29,itrac29.cern.ch,137.138.35.79,itrac29-v,compr-priv1-1,compr-priv2-1,172.31.5.1,172.31.6.1 ssh-rsa AAAAB3NzaC...
Distribute the .ssh/known_hosts file to all cluster nodes using scp and check the configuration:

  scp ~/.ssh/known_hosts ${OLD_NODE_2}:~/.ssh
  scp ~/.ssh/known_hosts ${NEW_NODE_1}:~/.ssh
  scp ~/.ssh/known_hosts ${NEW_NODE_2}:~/.ssh
  ssh PRIVATE_HOSTNAME_OF_ONE_OF_THE_RAC_NODES date
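A fuller check, looping over all nodes defined above, confirms that passwordless ssh works from the current node (a convenience sketch, not part of the original procedure):

  for node in $OLD_NODE_1 $OLD_NODE_2 $NEW_NODE_1 $NEW_NODE_2; do
    ssh $node date    # should print the date without a password prompt
  done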
Remove the /etc/issue.net file:

  sudo rm -f /etc/issue.net
Configure the hangcheck-timer module options and the PAM limits (tee is used here because the output redirection in sudo echo ... >> would run as the unprivileged user and fail):

  echo "options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180" | sudo tee -a /etc/modules.conf
  echo "session required /lib/security/pam_limits.so" | sudo tee -a /etc/pam.d/login
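The module can also be loaded immediately rather than waiting for a reboot (an added check; on 2.4 kernels modprobe picks up the options from /etc/modules.conf):

  sudo /sbin/modprobe hangcheck-timer
  /sbin/lsmod | grep hangcheck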
Copy the /etc/sysconfig/devlabel file from any of the old nodes. Reload devlabel. If needed, configure ASMLib and rescan the disks.

On each NEW_NODE:

  sudo vi /etc/sysconfig/devlabel
  sudo /sbin/devlabel reload
  sudo /etc/init.d/oracleasm configure
  sudo /etc/init.d/oracleasm scandisks
  sudo /etc/init.d/oracleasm listdisks
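To confirm the shared storage is visible on the new node (a simple check, using the raw device bindings referenced later in this procedure):

  sudo /etc/init.d/oracleasm status
  ls -l /dev/raw/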
On OLD_NODE_1:

  cd $ORA_CRS_HOME/oui/bin
  ./addNode.sh
  ...
On OLD_NODE_1:

  cat $ORACLE_HOME/opmn/conf/ons.config

Then use the remote port number found above to run the following command, again on OLD_NODE_1:

  $ORA_CRS_HOME/bin/racgons add_config ${NEW_NODE_1}:REMOTE_PORT_SELECTED_ABOVE ${NEW_NODE_2}:REMOTE_PORT_SELECTED_ABOVE ...
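For reference, the ons.config consulted above typically looks like the following (port numbers illustrative); the remoteport value is the one to pass to racgons:

  localport=6100
  remoteport=6200
  loglevel=3
  useocr=on

ONS connectivity can then be verified from any node (assuming the standard 10g location of onsctl):

  $ORACLE_HOME/opmn/bin/onsctl ping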
  cd $ORACLE_HOME/oui/bin
  ./addNode.sh
  ...
  sudo su -
  export DISPLAY=YOUR_PC_NAME:0.0
  cd /ORA/dbs01/oracle/product/10.1.0/rdbms/bin
  ./vipca -nodelist ${NEW_NODE_1},${NEW_NODE_2},...

Note 1: choose to configure the VIP only on eth0.

Note 2: In point 10, check with the oifcfg tool that the cluster interconnect is specified correctly:
On OLD_NODE_1:

  oifcfg getif
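The output should list one public interface and one cluster_interconnect interface, for example (subnets illustrative):

  eth0  137.138.32.0  global  public
  eth1  172.31.4.0    global  cluster_interconnect

If the interconnect entry is missing or wrong, it can be set with oifcfg setif (subnet again illustrative):

  oifcfg setif -global eth1/172.31.4.0:cluster_interconnect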
On OLD_NODE_1:

  sqlsys_ASM

For each new node:

  SQL> alter system set instance_number=<number of the instance> scope=spfile sid='<name_of_the_ASM_instance>';

e.g.

  SQL> alter system set instance_number=3 scope=spfile sid='+ASM3';
On NEW_NODE_1:

  cd $ORACLE_HOME/dbs
  echo "spfile=/dev/raw/raw3" > init${ASM_NEW_NAME_1}.ora

On NEW_NODE_2:

  cd $ORACLE_HOME/dbs
  echo "spfile=/dev/raw/raw3" > init${ASM_NEW_NAME_2}.ora
  ...
On NEW_NODE_1:

  cd $ORACLE_HOME/dbs
  orapwd file=./orapw${ASM_NEW_NAME_1} password=xxxxxx entries=20

On NEW_NODE_2:

  cd $ORACLE_HOME/dbs
  orapwd file=./orapw${ASM_NEW_NAME_2} password=xxxxxx entries=20
  ...
On each NEW_NODE:

  mkdir -p /ORA/dbs00/oracle/admin/+ASM/adump
  mkdir -p /ORA/dbs00/oracle/admin/+ASM/bdump
  mkdir -p /ORA/dbs00/oracle/admin/+ASM/cdump
  mkdir -p /ORA/dbs00/oracle/admin/+ASM/udump
On any NODE, for each NEW_NODE run:

  $ORACLE_HOME/bin/srvctl add asm -n ${NEW_NODE_1} -i ${ASM_NEW_NAME_1} -o ${ORACLE_HOME}
  $ORACLE_HOME/bin/srvctl enable asm -n ${NEW_NODE_1} -i ${ASM_NEW_NAME_1}
  $ORACLE_HOME/bin/srvctl start nodeapps -n ${NEW_NODE_1}
  $ORACLE_HOME/bin/srvctl start asm -n ${NEW_NODE_1}
  crsstat.sh
create pfile='/tmp/asmpfile33.txt' from spfile='/dev/raw/raw3';
On each NEW_NODE:

  sqlsys_ASM
  SQL> SELECT * FROM V$ASM_DISKGROUP;
  SQL> exit
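If a diskgroup is listed with state DISMOUNTED on the new instance, it can be mounted manually (diskgroup name illustrative):

  SQL> alter diskgroup DATA1 mount;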
From OLD_NODE_1, for each NEW_INSTANCE run:

  sqlsys_DB
  SQL> alter system set instance_number=<new_DB_instance_number> scope=spfile sid='<new_instance_sid>';
  SQL> alter system set thread=<new_DB_instance_number> scope=spfile sid='<new_instance_sid>';
For each NEW_INSTANCE run:

  sqlsys_DB
  SQL> select tablespace_name from dba_tablespaces where tablespace_name like 'UNDO%';
  SQL> create undo tablespace <name_of_the_undo_tablespace> datafile size 1G autoextend on next 100M maxsize 10G extent management local;
  SQL> alter system set undo_tablespace='<name_of_the_undo_tablespace>' scope=spfile sid='<new_instance_sid>';
  SQL> select * from v$log;
  SQL> alter database add logfile thread <new_DB_instance_number> group <group_number> size 500M;
  SQL> alter database add logfile thread <new_DB_instance_number> group <group_number> size 500M;
  SQL> alter database add logfile thread <new_DB_instance_number> group <group_number> size 500M;
  SQL> alter database add logfile thread <new_DB_instance_number> group <group_number> size 500M;
  SQL> alter database add logfile thread <new_DB_instance_number> group <group_number> size 500M;
  SQL> alter database enable public thread <new_DB_instance_number>;
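To confirm the new thread's redo groups were created and the thread enabled (a quick check, not in the original procedure):

  SQL> select thread#, group#, status from v$log order by thread#, group#;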
On each NEW_NODE:

  scp ${OLD_NODE_1}:$ORACLE_HOME/dbs/init<old_instance_sid>.ora $ORACLE_HOME/dbs/init<new_instance_sid>.ora
  orapwd file='/ORA/dbs01/oracle/product/10.1.0/rdbms/dbs/orapw<new_instance_sid>' password=xxxxx entries=20
  mkdir -p /ORA/dbs00/oracle/admin/DB_NAME/adump
  mkdir -p /ORA/dbs00/oracle/admin/DB_NAME/bdump
  mkdir -p /ORA/dbs00/oracle/admin/DB_NAME/cdump
  mkdir -p /ORA/dbs00/oracle/admin/DB_NAME/udump
  srvctl add instance -d <db_name> -i <new_instance_sid> -n <new_node_name>
  srvctl start instance -d <db_name> -i <new_instance_sid>
  sqlsys_DB
  SQL> grant sysdba to pdb_admin;
  SQL> exit
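A status check across all instances of the database (standard srvctl usage, added here as a verification step):

  srvctl status database -d <db_name>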
  srvctl config service -d $DB_NAME    (displays current config)
  srvctl modify service -d $DB_NAME -s lcg_fts -n -i lcgr1 -a "lcgr2,lcgr3,lcgr4"
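After modifying a service, its new placement can be re-checked and the service started on the preferred instance if it is not already running (the lcg_fts service name is taken from the example above):

  srvctl config service -d $DB_NAME -s lcg_fts
  srvctl start service -d $DB_NAME -s lcg_fts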