Sunday, October 21, 2012

RAC Upgrade Using the Out-of-Place Upgrade Method

In my case I am using a 2-node RAC setup with 32-bit Clusterware (Grid home /u01/app/11.2.0/grid), and I will upgrade it to a new 32-bit Grid home (/u01/app/11.2.3/grid).

Software requirement

1. Patchset

2. Latest OPatch, plus the prerequisite patch required for the upgrade (patch 9413827)

STEPS for Upgrade

Step-1: Verify upgrade readiness on all the servers

$ ./runcluvfy.sh stage -pre crsinst -upgrade -n host01,host02 -rolling -src_crshome /u01/app/11.2.0/grid -dest_crshome /u01/app/11.2.3/grid -dest_version 11.2.0.3.0 -fixup -fixupdir /home/grid/fixupscript -verbose

NOTE-1: Here we have a new home for upgrade, /u01/app/11.2.3/grid

OUTPUT of this runcluvfy:

After running this you will get two errors:

a) Related to "resolv.conf"

b) for latest Opatch

a) Solution for "resolv.conf"

1. Open the named.conf file on the DNS server; you will see an entry like:

# vim /var/named/chroot/etc/named.conf

zone "." IN {
        type hint;
        file "";
};

2. Replace this entry with:

zone "." IN {
        type hint;
        file "/dev/null";
};

3. Finally, restart the named service:

# service named restart
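The edit from step 2 can also be scripted. The sketch below performs the same substitution with sed, but on a scratch copy first so you can inspect the result before touching the real /var/named/chroot/etc/named.conf; the scratch-file approach is my own illustration, not part of the original steps.

```shell
# Sketch: the step-2 edit done with sed, on a scratch copy of the zone entry.
# Inspect the result, then run the same sed against the real named.conf.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
zone "." IN {
        type hint;
        file "";
};
EOF

# Point the root zone's hint file at /dev/null (the fix from step 2).
sed -i 's|file "";|file "/dev/null";|' "$CONF"
grep 'file' "$CONF"
```

Once the real named.conf is updated the same way, restart named as shown above.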

b) Solution for opatch (apply patch 9413827)

Here you'll have to apply the patch on your hosts one by one, using the following steps.

1. From host01, run this as root to unlock the clusterware home for patching:

# <CRS_HOME>/crs/install/rootcrs.pl -unlock

2. Set the Grid environment and JAVA_HOME to apply the patch:

    $ . oraenv        (enter +ASM1 when prompted for ORACLE_SID)

  $ export JAVA_HOME=/u01/app/11.2.0/grid/jdk

3. Change to the directory where you have the patch (9413827); in my case it is /stage/9413827:

$ cd /stage/9413827

4. Apply the patch on the 1st node, where your RAC server is running, as the grid user:

$ /u01/app/11.2.0/grid/OPatch/opatch napply -jdk $JAVA_HOME -local -oh /u01/app/11.2.0/grid -id 9413827

5. After applying the patch, change the permissions on a few log directories as the root user (#):

# chmod +w /u01/app/11.2.0/grid/log/host01/agent
# chmod +w /u01/app/11.2.0/grid/log/host01/agent/crsd

6. Then restart all the resources from the patched home as the root user:

# <CRS_HOME>/crs/install/rootcrs.pl -patch

7. Verify that the clusterware home is patched, as the grid user:

$ /u01/app/11.2.0/grid/OPatch/opatch lsinventory

8. Repeat the same steps on host02 and any other nodes (rolling upgrade).
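Since steps 1-7 are repeated verbatim on every node, it helps to see the whole rolling sequence at once. This is a dry-run sketch that only prints the per-node command sequence; the node list, Grid home, and patch directory are the values assumed in this post, and nothing is actually executed.

```shell
# Dry-run sketch: print the rolling patch sequence for each node in turn.
# GRID_HOME, PATCH_DIR and NODES are the values assumed in this post.
GRID_HOME=/u01/app/11.2.0/grid
PATCH_DIR=/stage/9413827
NODES="host01 host02"

for node in $NODES; do
    echo "== $node =="
    echo "root# $GRID_HOME/crs/install/rootcrs.pl -unlock"
    echo "grid\$ cd $PATCH_DIR && $GRID_HOME/OPatch/opatch napply -local -oh $GRID_HOME -id 9413827"
    echo "root# chmod +w $GRID_HOME/log/$node/agent $GRID_HOME/log/$node/agent/crsd"
    echo "root# $GRID_HOME/crs/install/rootcrs.pl -patch"
done
```

Only move on to the next node after `opatch lsinventory` confirms the patch on the current one, so the cluster stays up throughout.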

Step-2: After resolving all the errors, go to the patchset directory and invoke runInstaller.

$ /stage/runInstaller

Step-3: Click through the installer screens just as in a normal installation, but remember that the new installation will share the existing central inventory (e.g. /u01/app/oraInventory) with the old home.

Step-4: At the end, run the rootupgrade.sh script on the 1st node first; then run it on the other nodes in parallel, but not on the last node. After it has completed on all of those nodes, run the same script on the last node to finish the upgrade.
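The ordering in Step-4 matters: first node alone, the middle nodes in parallel, and the last node strictly last. The sketch below derives that plan from a node list; the four hostnames are hypothetical examples and the script only prints the order, it does not run anything.

```shell
# Sketch: compute the rootupgrade.sh run order for an N-node cluster.
# Hypothetical node names; this only prints the plan.
NODES="host01 host02 host03 host04"

set -- $NODES
FIRST=$1
LAST=$(echo "$NODES" | awk '{print $NF}')
MIDDLE=$(echo "$NODES" | awk '{for (i = 2; i < NF; i++) printf "%s ", $i}')

echo "1) run rootupgrade.sh on $FIRST and wait for it to complete"
echo "2) run rootupgrade.sh in parallel on: $MIDDLE"
echo "3) only after those finish, run rootupgrade.sh on $LAST"
```

For a 2-node cluster the "parallel" middle group is simply empty: host01 first, host02 last.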

Step-5: Verify your upgrade:

$ crsctl query crs activeversion

Oracle Clusterware active version on the cluster is []

$ crsctl query crs releaseversion

Oracle High Availability Services release version on the local node  is []

$ crsctl query crs softwareversion

Oracle Clusterware version on node [host01] is []

$ crsctl query crs softwareversion host02

Oracle Clusterware version on node [host02] is []
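After the upgrade, activeversion and every node's softwareversion should report the same number. Here is a small parsing sketch for that comparison; the SAMPLE_* strings stand in for real `crsctl query crs ...` output, and the version value 11.2.0.3.0 in them is only illustrative.

```shell
# Sketch: extract the bracketed version from crsctl-style output and compare.
# SAMPLE_* strings are stand-ins for real `crsctl query crs ...` output.
SAMPLE_ACTIVE='Oracle Clusterware active version on the cluster is [11.2.0.3.0]'
SAMPLE_NODE='Oracle Clusterware version on node [host02] is [11.2.0.3.0]'

# The version is the last [...] field on each line.
active=$(echo "$SAMPLE_ACTIVE" | sed 's/.*\[\(.*\)\]/\1/')
nodever=$(echo "$SAMPLE_NODE"  | sed 's/.*\[\(.*\)\]/\1/')

if [ "$active" = "$nodever" ]; then
    echo "host02 matches active version $active"
fi
```

If any node's softwareversion is lower than the activeversion, that node has not finished its rootupgrade.sh run yet.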
