Upgrade 10gR1 to 10gR2
This page describes the steps to upgrade PDB RAC databases with ASM from 10.1.0.4 to 10.2.0.2
- Perform a full backup of the DB before starting
upgrade from crs=10.1.0.4
- Backup OCR config and inventory before cleanup
- save the output of crsstat.sh, srvctl config service -d DBNAME and srvconfig -exp ...
- DB: create pfile='$HOME/work/init[DBNAME]_bck.ora' from spfile;
- ASM: create pfile='$HOME/work/initASM_bck.ora' from spfile;
- save oraInventory/ContentsXML/inventory.xml
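The "save output" steps above can be wrapped in a small helper that writes each command's output to a timestamped file; the helper name, BACKUP_DIR and the file-naming scheme are illustrative additions, not part of the original procedure:

```shell
#!/bin/bash
# Capture the output of a command into a timestamped backup file.
# Usage: save_output <label> <command...>
BACKUP_DIR="${BACKUP_DIR:-$HOME/work/upgrade_backup}"

save_output() {
    local label=$1; shift
    mkdir -p "$BACKUP_DIR"
    # Run the command and keep stdout+stderr next to the other backups.
    "$@" > "$BACKUP_DIR/${label}_$(date +%Y%m%d).out" 2>&1
}

# Examples (run while the 10.1 environment is still in place):
# save_output crsstat  crsstat.sh
# save_output services srvctl config service -d DBNAME
```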
- stop db, asm, nodeapps and clusterware
- edit /etc/inittab (remove the init.cssd/init.crsd/init.evmd respawn entries) and run telinit q
rm -rf /etc/init.d/init.crsd /etc/init.d/init.crs /etc/init.d/init.cssd /etc/init.d/init.evmd
rm -rf /etc/rc2.d/K96init.crs /etc/rc2.d/S96init.crs /etc/rc3.d/K96init.crs /etc/rc3.d/S96init.crs /etc/rc5.d/K96init.crs /etc/rc5.d/S96init.crs
rm -rf /etc/oracle
rm -rf /var/tmp/.oracle
rm -rf /etc/ORCLcluster
(optional) rm /etc/inittab.*
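The rm steps above can be collected into one function; the FILE_ROOT prefix is an addition made here (defaulting to the real root) only so the removal list can be exercised safely outside /:

```shell
#!/bin/bash
# Remove the 10.1 CRS startup files and state directories (steps above).
# FILE_ROOT is a testing convenience, not in the original notes: it defaults
# to "" so that by default the script acts on the real /etc and /var.
FILE_ROOT="${FILE_ROOT:-}"

clean_crs_files() {
    rm -rf "$FILE_ROOT"/etc/init.d/init.crsd "$FILE_ROOT"/etc/init.d/init.crs \
           "$FILE_ROOT"/etc/init.d/init.cssd "$FILE_ROOT"/etc/init.d/init.evmd
    local rc
    for rc in 2 3 5; do
        rm -rf "$FILE_ROOT/etc/rc${rc}.d/K96init.crs" \
               "$FILE_ROOT/etc/rc${rc}.d/S96init.crs"
    done
    rm -rf "$FILE_ROOT"/etc/oracle "$FILE_ROOT"/var/tmp/.oracle \
           "$FILE_ROOT"/etc/ORCLcluster
}
```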
- create oraInventory.tar and rm -r oraInventory
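A sketch of the archive step that removes oraInventory only if the tar was created successfully (the function name and parametrization are illustrative):

```shell
#!/bin/bash
# Archive a directory and remove it only if the tar succeeded.
# Usage: archive_and_remove <directory> <tarball>
archive_and_remove() {
    local dir=$1 tarball=$2
    tar cf "$tarball" -C "$(dirname "$dir")" "$(basename "$dir")" \
        && rm -r "$dir"
}

# Example: archive_and_remove /ORA/dbs01/oracle/oraInventory $HOME/work/oraInventory.tar
```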
- cleanup /etc/oratab
- /etc/init.d/oma stop or /ORA/dbs01/oracle/product/10.2.0/agent10g/bin/emctl stop agent
- edit .bashrc (ORA_CRS_HOME, etc.)
- map the new rawdevices for OCR and voting disk on one node
- copy devlabel config to other nodes and run devlabel reload
- clean up the raw devices with dd (make sure you have a procedure to restore the services, and do not clean /dev/raw/raw3, which holds the ASM spfile)
- chown oracle:ci /dev/raw/raw[123] /dev/raw/raw1[12] /dev/raw/raw22
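A sketch of the dd cleanup, with a guard added here (an assumption, not in the notes) that refuses to touch /dev/raw/raw3, which still holds the ASM spfile; the 25 MB size is an illustrative value:

```shell
#!/bin/bash
# Zero out a raw device before reusing it for OCR / voting disk.
# Usage: zero_raw_device <device> [size_mb]
zero_raw_device() {
    local dev=$1 size_mb=${2:-25}   # 25 MB default is illustrative
    if [ "$dev" = /dev/raw/raw3 ]; then
        echo "refusing to clean $dev (ASM spfile)" >&2
        return 1
    fi
    dd if=/dev/zero of="$dev" bs=1M count="$size_mb"
}
```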
- if needed, sudo ifconfig eth0:1 down
- Install from clusterware CD
- ORA_CRS_HOME=/ORA/dbs01/oracle/product/10.2.0/crs
- configure cluster and node names
- the cluster name can be the name of the database
- node names (public and private) must match the entries in /etc/hosts
- 6 raw devices are needed:
- 2 copies of the OCR (/dev/raw/raw1, /dev/raw/raw11); take care to place them on 2 different disk arrays
- 3 copies of the voting disk (/dev/raw/raw2, /dev/raw/raw12, /dev/raw/raw22) on 3 different disk arrays
- /dev/raw/raw3 for the ASM spfile (no need to modify it)
- crsctl stop crs
- patch to 10.2.0.2
- continue with the "upgrade DB" steps below
upgrade from crs=10.2.0.2
- dump info on the services with crsstat.sh or srvctl (use the versions from $ORA_CRS_HOME)
- create copy of pfile
- create pfile='$HOME/init[DBNAME]_bck.ora' from spfile;
- stop the database, ASM and nodeapps
- srvctl stop database -d [DBNAME], etc.
- delete services from CRS
- srvctl remove database -d [DBNAME]
- srvctl remove asm -n ...
- use netca to remove the LISTENER
- detach old oracle home
cd $ORACLE_HOME/oui/bin
./runInstaller -silent -removeHome ORACLE_HOME_NAME="OraRDBMSHOME" ORACLE_HOME="/ORA/dbs01/oracle/product/10.1.0/rdbms"
upgrade DB starts here (with db=10.1.0.4 and crs=10.2.0.2; target db=10.2.0.2)
- copy tar of new oracle home
- if needed, edit .bashrc with the new values for ORACLE_HOME, etc.
- (log off and log on again so that $PATH is also set easily and correctly)
- attach new Oracle Home (see steps here below)
- distribute the tar to the destination nodes (ex: scp rdbms_10_2_0_2.tar srv2:$PWD)
- tar xfp rdbms_10_2_0_2.tar
- find the appropriate value for ORACLE_HOME and its path
- ex: see existing nodes: cat /ORA/dbs01/oracle/oraInventory/ContentsXML/inventory.xml
- on the new nodes perform the cloning operation
- cd $ORACLE_HOME/clone/bin
- perl clone.pl ORACLE_HOME="/ORA/dbs01/oracle/product/10.2.0/rdbms" ORACLE_HOME_NAME="OraDb10g_rdbms" '-O"CLUSTER_NODES={[node1],[node2],}"' '-O"LOCAL_NODE=[nodename]"' (edit node names)
- repeat for all new nodes, editing LOCAL_NODE value
- run root.sh on new nodes, as instructed by clone.pl
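The per-node cloning above can be scripted by generating the clone.pl command line for each node and running it remotely; the node names, NEW_HOME default and helper function are placeholders mirroring the command shown above:

```shell
#!/bin/bash
# Build the clone.pl command line for one node (names are placeholders).
NEW_HOME=${NEW_HOME:-/ORA/dbs01/oracle/product/10.2.0/rdbms}
CLUSTER_NODES=${CLUSTER_NODES:-node1,node2}

build_clone_cmd() {
    local local_node=$1
    printf '%s' "perl $NEW_HOME/clone/bin/clone.pl ORACLE_HOME=\"$NEW_HOME\" ORACLE_HOME_NAME=\"OraDb10g_rdbms\" '-O\"CLUSTER_NODES={$CLUSTER_NODES}\"' '-O\"LOCAL_NODE=$local_node\"'"
}

# Looping over the nodes mirrors the "repeat for all new nodes" step:
# for node in node1 node2; do
#     ssh "$node" "$(build_clone_cmd "$node")"
# done
```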
- run netca to create listeners and register them with crs
- add pfile and password file for the ASM and DB instances (on all nodes)
- cp /ORA/dbs01/oracle/product/10.1.0/rdbms/dbs/init*.ora /ORA/dbs01/oracle/product/10.2.0/rdbms/dbs
- orapwd file=./orapw+ASM1.. password=xxxxxx entries=20
- orapwd file=./orapwdbinstance1.. password=xxxxxx entries=20
- check asm spfile
- create pfile='/tmp/asmpfile13_03_pre.txt' from spfile='/dev/raw/raw3';
- change the pfile, recreate the spfile according to the standards in the installation guide, and restart
- add asm instances and start them
- srvctl add asm -n ... -i +ASM1 -o $ORACLE_HOME
- cp /ORA/dbs01/oracle/product/10.1.0/rdbms/network/admin/tnsnames.ora /ORA/dbs01/oracle/product/10.2.0/rdbms/network/admin/
- edit database pfile following new standard (see installation guide)
- edit _recyclebin, cluster_database and other params
- edit shared_pool (for example via the __shared_pool_size parameter) so that it is >= 500 MB (threshold to be confirmed)
- cd $ORACLE_HOME/rdbms/admin
- startup upgrade pfile=$HOME/init[DBNAME]_migr.ora (the edited pfile)
- SPOOL upgrade.log
- DO YOU HAVE BACKUPS ?
- @catupgrd.sql
- shutdown and startup pfile=$HOME/init[DBNAME]_migr.ora (still cluster_database=false)
- run @utlrp.sql
- shutdown, edit the pfile to set cluster_database=true
- create spfile='+DATA_DG1/DBNAME/spfileDBNAME.ora' from pfile='.. ';
- check and if needed update $ORACLE_HOME/dbs/initdbinst.ora (all nodes)
- add db to CRS
srvctl add database -d DBNAME -o $ORACLE_HOME -p '+DATA_DG1/DBNAME/spfileDBNAME'
srvctl add instance -d DBNAME -i INSTNAME -n ...
srvctl add instance -d DBNAME -i INSTNAME -n ...
srvctl modify instance -d DBNAME -i INSTNAME -s +ASM... # add dependencies between db instance and asm
- startup using srvctl
- recreate services
srvctl add service -d ... -s .... -r "instname1,instname2,..."
if needed, srvctl start service -d ..
- edit /etc/logrotate.d/ora_cern_listener_udump_bdump_adump_rotate
- on all nodes of the cluster grant the 'sysdba' privilege to superusers.
- update /etc/oratab files on all nodes of the cluster
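A minimal, idempotent sketch for the oratab update (the oratab path is parametrized here so it can be tried on a copy first; the ':N' autostart flag is an assumption):

```shell
#!/bin/bash
# Append a database entry to an oratab file if it is not already there.
# Usage: add_oratab_entry <oratab_file> <sid> <oracle_home>
add_oratab_entry() {
    local oratab=$1 sid=$2 home=$3
    # Only add the line if no entry for this SID exists yet (idempotent).
    grep -q "^${sid}:" "$oratab" 2>/dev/null \
        || echo "${sid}:${home}:N" >> "$oratab"
}

# Example: add_oratab_entry /etc/oratab DBNAME1 /ORA/dbs01/oracle/product/10.2.0/rdbms
```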
- install or reattach the ORACLE_HOME for the EM agent
- Perform a full backup of the DB when finished