Patching an Exadata Machine

################################################################
##    EXADATA MACHINE INFRASTRUCTURE PATCHING OF A 1/8 RACK   ##
################################################################

This post describes, step by step, how to patch the infrastructure components of an Exadata Machine: first the storage cells, then the database servers.
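
Before starting, it helps to record the image versions currently active on the database nodes and storage cells, so there is a baseline to compare against after each phase. A minimal sketch, assuming the same dcli group files used throughout this post (/home/oracle/dbhosts and /home/oracle/cellhosts):

--Record the pre-patch image versions
[root@ch01db01 oracle]# dcli -g /home/oracle/dbhosts -l root 'imageinfo -ver'
[root@ch01db01 oracle]# dcli -g /home/oracle/cellhosts -l root 'imageinfo -ver'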

-----------------------------------------------------------
-- Cell Storage Prerequisites
-----------------------------------------------------------

--Stop CRS using dcli
[root@ch01db01 oracle]# dcli -g /home/oracle/dbhosts -l root '/u01/app/12.1.0.2/grid/bin/crsctl stop crs'
 [root@ch01db01 oracle]# dcli -g /home/oracle/dbhosts -l root '/u01/app/12.1.0.2/grid/bin/crsctl stat res -t -init'
ch01db01: CRS-4639: Could not contact Oracle High Availability Services
ch01db01: CRS-4000: Command Status failed, or completed with errors.
ch01db02: CRS-4639: Could not contact Oracle High Availability Services
ch01db02: CRS-4000: Command Status failed, or completed with errors.
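
The CRS-4639 messages above are expected once CRS is down on both nodes. As an extra check before touching the cells, you can verify that no clusterware daemons are left running (a sketch, same group file assumed):

[root@ch01db01 oracle]# dcli -g /home/oracle/dbhosts -l root "ps -ef | grep -E 'ocssd|crsd|evmd' | grep -v grep"
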
--Stop All Cell Storage Services
 [root@ch01db01 oracle]# dcli -g /home/oracle/cellhosts_ALL -l root "cellcli -e alter cell shutdown services all"
ch01celadm01:
ch01celadm01: Stopping the RS, CELLSRV, and MS services...
 ch01celadm01: The SHUTDOWN of services was successful.
 ch01celadm02:
 ch01celadm02: Stopping the RS, CELLSRV, and MS services...
 ch01celadm02: The SHUTDOWN of services was successful.
 ch01celadm03:
 ch01celadm03: Stopping the RS, CELLSRV, and MS services...
 ch01celadm03: The SHUTDOWN of services was successful.

[root@ch01db01 oracle]#
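
To confirm the services really are down on every cell before patching, you can query the service status attributes (a sketch, using the same cell group file):

[root@ch01db01 oracle]# dcli -g /home/oracle/cellhosts_ALL -l root "cellcli -e list cell attributes rsStatus,msStatus,cellsrvStatus"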

 

-----------------------------------------------------------
-- Cell Storage Patching
-----------------------------------------------------------

--Reset the cells to a clean state and remove any leftovers from previous patchmgr runs
[root@ch01db01 patch_12.1.2.1.0.141206.1]# ./patchmgr -cells /home/oracle/cellhosts -reset_force
2016-02-05 11:17:07 +0100 :DONE: reset_force
[root@ch01db01 patch_12.1.2.1.0.141206.1]# ./patchmgr -cells /home/oracle/cellhosts -cleanup
2016-02-05 11:19:19 +0100        :Working: DO: Cleanup ...
2016-02-05 11:19:20 +0100        :SUCCESS: DONE: Cleanup
[root@ch01db01 patch_12.1.2.1.0.141206.1]# ./patchmgr -cells /home/oracle/cellhosts -patch_check_prereq
2016-02-05 11:20:56 +0100        :Working: DO: Check cells have ssh equivalence for root user. Up to 10 seconds per cell ...
 2016-02-05 11:20:57 +0100        :SUCCESS: DONE: Check cells have ssh equivalence for root user.
 2016-02-05 11:20:59 +0100        :Working: DO: Initialize files, check space and state of cell services. Up to 1 minute ...
 2016-02-05 11:21:01 +0100        :SUCCESS: DONE: Initialize files, check space and state of cell services.
 2016-02-05 11:22:19 +0100        :SUCCESS: DONE: Initialize files, check space and state of cell services.
 2016-02-05 11:22:19 +0100        :Working: DO: Copy, extract prerequisite check archive to cells. If required start md11 mismatched partner size correction. Up to 40 minutes ...
 2016-02-05 11:22:33 +0100 Wait correction of degraded md11 due to md partner size mismatch. Up to 30 minutes.
2016-02-05 11:22:34 +0100        :SUCCESS: DONE: Copy, extract prerequisite check archive to cells. If required start md11 mismatched partner size correction.
 2016-02-05 11:22:34 +0100        :Working: DO: Check prerequisites on all cells. Up to 2 minutes ...
 2016-02-05 11:23:38 +0100        :SUCCESS: DONE: Check prerequisites on all cells.
 2016-02-05 11:23:38 +0100        :Working: DO: Execute plugin check for Patch Check Prereq ...
 2016-02-05 11:23:38 +0100        :INFO: Patchmgr plugin start: Prereq check for exposure to bug 17854520 v1.3. Details in logfile /u02/p17885582_121210_Linux-x86-64/patch_12.1.2.1.0.141206.1/patchmgr.stdout.
 2016-02-05 11:23:38 +0100        :SUCCESS: No exposure to bug 17854520 with non-rolling patching
 2016-02-05 11:23:39 +0100        :SUCCESS: DONE: Execute plugin check for Patch Check Prereq.
[root@ch01db01 patch_12.1.2.1.0.141206.1]#
 [root@ch01db01 patch_12.1.2.1.0.141206.1]# ./patchmgr -cells /home/oracle/cellhosts -patch
********************************************************************************
 NOTE Cells will reboot during the patch or rollback process.
 NOTE For non-rolling patch or rollback, ensure all ASM instances using
 NOTE the cells are shut down for the duration of the patch or rollback.
 NOTE For rolling patch or rollback, ensure all ASM instances using
 NOTE the cells are up for the duration of the patch or rollback.
WARNING Do not start more than one instance of patchmgr.
 WARNING Do not interrupt the patchmgr session.
 WARNING Do not alter state of ASM instances during patch or rollback.
 WARNING Do not resize the screen. It may disturb the screen layout.
 WARNING Do not reboot cells or alter cell services during patch or rollback.
 WARNING Do not open log files in editor in write mode or try to alter them.
NOTE All time estimates are approximate.
 NOTE You may interrupt this patchmgr run in next 60 seconds with CONTROL-c.
********************************************************************************
2016-02-05 11:27:08 +0100        :Working: DO: Check cells have ssh equivalence for root user. Up to 10 seconds per cell ...
 2016-02-05 11:27:09 +0100        :SUCCESS: DONE: Check cells have ssh equivalence for root user.
 2016-02-05 11:27:12 +0100        :Working: DO: Initialize files, check space and state of cell services. Up to 1 minute ...
 2016-02-05 11:27:32 +0100        :SUCCESS: DONE: Initialize files, check space and state of cell services.
 2016-02-05 11:27:32 +0100        :Working: DO: Copy, extract prerequisite check archive to cells. If required start md11 mismatched partner size correction. Up to 40 minutes ...
 2016-02-05 11:27:45 +0100 Wait correction of degraded md11 due to md partner size mismatch. Up to 30 minutes.
2016-02-05 11:27:46 +0100        :SUCCESS: DONE: Copy, extract prerequisite check archive to cells. If required start md11 mismatched partner size correction.
 2016-02-05 11:27:46 +0100        :Working: DO: Check prerequisites on all cells. Up to 2 minutes ...
 2016-02-05 11:28:50 +0100        :SUCCESS: DONE: Check prerequisites on all cells.
 2016-02-05 11:28:50 +0100        :Working: DO: Copy the patch to all cells. Up to 3 minutes ...
 2016-02-05 11:29:22 +0100        :SUCCESS: DONE: Copy the patch to all cells.
 2016-02-05 11:29:24 +0100        :Working: DO: Execute plugin check for Patch Check Prereq ...
 2016-02-05 11:29:24 +0100        :INFO: Patchmgr plugin start: Prereq check for exposure to bug 17854520 v1.3. Details in logfile /u02/p17885582_121210_Linux-x86-64/patch_12.1.2.1.0.141206.1/patchmgr.stdout.
 2016-02-05 11:29:24 +0100        :SUCCESS: No exposure to bug 17854520 with non-rolling patching
 2016-02-05 11:29:25 +0100        :SUCCESS: DONE: Execute plugin check for Patch Check Prereq.
 2016-02-05 11:29:25 +0100 1 of 5 :Working: DO: Initiate patch on cells. Cells will remain up. Up to 5 minutes ...
 2016-02-05 11:29:37 +0100 1 of 5 :SUCCESS: DONE: Initiate patch on cells.
 2016-02-05 11:29:37 +0100 2 of 5 :Working: DO: Waiting to finish pre-reboot patch actions. Cells will remain up. Up to 45 minutes ...
 2016-02-05 11:30:37 +0100 Wait for patch pre-reboot procedures
2016-02-05 11:44:56 +0100 2 of 5 :SUCCESS: DONE: Waiting to finish pre-reboot patch actions.
 2016-02-05 11:44:56 +0100        :Working: DO: Execute plugin check for Patching ...
 2016-02-05 11:44:56 +0100        :SUCCESS: DONE: Execute plugin check for Patching.
 2016-02-05 11:44:56 +0100 3 of 5 :Working: DO: Finalize patch on cells. Cells will reboot. Up to 5 minutes ...
 2016-02-05 11:45:17 +0100 3 of 5 :SUCCESS: DONE: Finalize patch on cells.
 2016-02-05 11:45:17 +0100 4 of 5 :Working: DO: Wait for cells to reboot and come online. Up to 120 minutes ...
 2016-02-05 11:46:17 +0100 Wait for patch finalization and reboot
2016-02-05 13:09:24 +0100 4 of 5 :SUCCESS: DONE: Wait for cells to reboot and come online.
 2016-02-05 13:09:24 +0100 5 of 5 :Working: DO: Check the state of patch on cells. Up to 5 minutes ...
 2016-02-05 13:10:09 +0100 5 of 5 :SUCCESS: DONE: Check the state of patch on cells.
 2016-02-05 13:10:09 +0100        :Working: DO: Execute plugin check for Post Patch ...
 2016-02-05 13:10:10 +0100        :SUCCESS: DONE: Execute plugin check for Post Patch.
[root@ch01db01 patch_12.1.2.1.0.141206.1]#
[root@ch01db01 patch_12.1.2.1.0.141206.1]# dcli -c ch01celadm01 -l root 'imageinfo'
 ch01celadm01:
 ch01celadm01: Kernel version: 2.6.39-400.243.1.el6uek.x86_64 #1 SMP Wed Nov 26 09:15:35 PST 2014 x86_64
 ch01celadm01: Cell version: OSS_12.1.2.1.0_LINUX.X64_141206.1
 ch01celadm01: Cell rpm version: cell-12.1.2.1.0_LINUX.X64_141206.1-1.x86_64
 ch01celadm01:
 ch01celadm01: Active image version: 12.1.2.1.0.141206.1
 ch01celadm01: Active image activated: 2016-02-05 20:14:52 +0100
 ch01celadm01: Active image status: success
 ch01celadm01: Active system partition on device: /dev/md5
 ch01celadm01: Active software partition on device: /dev/md7
 ch01celadm01:
 ch01celadm01: Cell boot usb partition: /dev/sdac1
 ch01celadm01: Cell boot usb version: 12.1.2.1.0.141206.1
 ch01celadm01:
 ch01celadm01: Inactive image version: 12.1.1.1.1.140712
 ch01celadm01: Inactive image activated: 2014-08-06 11:50:09 +0200
 ch01celadm01: Inactive image status: success
 ch01celadm01: Inactive system partition on device: /dev/md6
 ch01celadm01: Inactive software partition on device: /dev/md8
 ch01celadm01:
 ch01celadm01: Inactive marker for the rollback: /boot/I_am_hd_boot.inactive
 ch01celadm01: Inactive grub config for the rollback: /boot/grub/grub.conf.inactive
 ch01celadm01: Inactive kernel version for the rollback: 2.6.39-400.128.17.el5uek
 ch01celadm01: Rollback to the inactive partitions: Possible
 [root@ch01db01 patch_12.1.2.1.0.141206.1]#
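
To check every cell at once, and to see the full patch trail on a cell, imageinfo and imagehistory can also be run through dcli (a sketch):

[root@ch01db01 patch_12.1.2.1.0.141206.1]# dcli -g /home/oracle/cellhosts -l root 'imageinfo -ver'
[root@ch01db01 patch_12.1.2.1.0.141206.1]# dcli -c ch01celadm01 -l root 'imagehistory'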

-----------------------------------------------------------
-- DB Server Patching
-----------------------------------------------------------

[root@ch01db02 dbnodeupdate]# ./dbnodeupdate.sh -h

Usage: dbnodeupdate.sh [ -u | -r | -c ] -l <baseurl|zip file> [-p] <phase> [-n] [-s] [-q] [-v] [-t] [-a] <alert.sh> [-b] [-m] | [-V] | [-h]
-u                       Upgrade
 -r                       Rollback
 -c                       Complete post actions (verify image status, cleanup, apply fixes, relink all homes, enable GI to start/start all domU's)
 -l <baseurl|zip file>    Baseurl (http or zipped iso file for the repository)
 -s                       Shutdown stack (domU's for VM) before upgrading/rolling back
 -p                       Bootstrap phase (1 or 2) only to be used when instructed by dbnodeupdate.sh
 -q                       Quiet mode (no prompting) only be used in combination with -t
 -n                       No backup will be created (Option disabled for systems being updated from Oracle Linux 5 to Oracle Linux 6)
 -t                       'to release' - used when in quiet mode or used when updating to one-offs/releases via 'latest' channel (requires 11.2.3.2.1)
 -v                       Verify prereqs only. Only to be used with -u and -l option
 -b                       Perform backup only
 -a <alert.sh>            Full path to shell script used for alert trapping
 -m                       Install / update-to exadata-sun/hp-computenode-minimum only (11.2.3.3.0 and later)
 -i                       Ignore /etc/oratab - relinking will be disabled. Only possible in combination with -c.
 -V                       Print version
 -h                       Print usage
For upgrading from releases 11.2.2.4.2 and later:
 Example using iso  : ./dbnodeupdate.sh -u -l /u01/p16432033_112321_Linux-x86-64.zip
 Example using http : ./dbnodeupdate.sh -u -l http://my-yum-repo.my-domain.com/yum/unknown/EXADATA/dbserver/11.2.3.3.0/base/x86_64/
 Example: ./dbnodeupdate.sh -u -l http://my-yum-repo.my-domain.com/yum/unknown/EXADATA/dbserver/11.2.3.2.1/base/x86_64/
 Example: ./dbnodeupdate.sh -u -l http://my-yum-repo.my-domain.com/yum/unknown/EXADATA/dbserver/11.2.3.3.0/base/x86_64/
For upgrading from releases 11.2.2.4.2 and later in quiet mode:
 Example: ./dbnodeupdate.sh -u -l /u01/p16432033_112321_Linux-x86-64.zip -q -t 11.2.3.2.1.130302
For completion steps:
 Example: ./dbnodeupdate.sh -c
For rollback:
 Example: ./dbnodeupdate.sh -r
For pre-req checks only:
 Example using iso  : ./dbnodeupdate.sh -u -l /u01/p16432033_112321_Linux-x86-64.zip -v
 Example using http : ./dbnodeupdate.sh -u -l http://my-yum-repo.my-domain.com/yum/unknown/EXADATA/dbserver/11.2.3.3.0/base/x86_64/ -v
For backup only:
 Example: ./dbnodeupdate.sh -u -l /u01/p16432033_112321_Linux-x86-64.zip -b
See MOS 1553103.1 for more examples
[root@ch01db02 dbnodeupdate]#

-----------------------------------------------------------
-- DB Server Patching Verification
-----------------------------------------------------------

[root@ch01db02 dbnodeupdate]# ./dbnodeupdate.sh -u -l /u01/exapatch/p20170913_121210_Linux-x86-64/p20170913_121210_Linux-x86-64.zip -v
##########################################################################################################################
 #                                                                                                                        #
 # Guidelines for using dbnodeupdate.sh (rel. 4.18):                                                                      #
 #                                                                                                                        #
 # - Prerequisites for usage:                                                                                             #
 #         1. Refer to dbnodeupdate.sh options. See MOS 1553103.1                                                         #
 #         2. Use the latest release of dbnodeupdate.sh. See patch 16486998                                               #
 #         3. Run the prereq check with the '-v' option.                                                                  #
 #                                                                                                                        #
 #   I.e.:  ./dbnodeupdate.sh -u -l /u01/my-iso-repo.zip -v                                                               #
 #          ./dbnodeupdate.sh -u -l http://my-yum-repo -v                                                                 #
 #                                                                                                                        #
 # - Prerequisite dependency check failures can happen due to customization:                                              #
 #     - The prereq check detects dependency issues that need to be addressed prior to running a successful update.       #
 #     - Customized rpm packages may fail the built-in dependency check and system updates cannot proceed until resolved. #
 #                                                                                                                        #
 #   When upgrading from releases later than 11.2.2.4.2 to releases before 11.2.3.3.0:                                    #
 #      - Conflicting packages should be removed before proceeding the update.                                            #
 #                                                                                                                        #
 #   When upgrading to releases 11.2.3.3.0 or later:                                                                      #
 #      - When the 'exact' package dependency check fails 'minimum' package dependency check will be tried.               #
 #      - When the 'minimum' package dependency check also fails,                                                         #
 #        the conflicting packages should be removed before proceeding.                                                   #
 #                                                                                                                        #
 # - As part of the prereq checks and as part of the update, a number of rpms will be removed.                            #
 #   This removal is required to preserve Exadata functioning. This should not be confused with obsolete packages.        #
 #      - See /var/log/cellos/packages_to_be_removed.txt for details on what packages will be removed.                    #
 #                                                                                                                        #
 # - In case of any problem when filing an SR, upload the following:                                                      #
 #      - /var/log/cellos/dbnodeupdate.log                                                                                #
 #      - /var/log/cellos/dbnodeupdate.<runid>.diag                                                                       #
 #      - where <runid> is the unique number of the failing run.                                                          #
 #                                                                                                                        #
 ##########################################################################################################################
Continue ? [y/n]
 y
(*) 2016-02-05 17:06:43: Unzipping helpers (/u01/exapatch/dbnodeupdate/dbupdate-helpers.zip) to /opt/oracle.SupportTools/dbnodeupdate_helpers
 (*) 2016-02-05 17:06:43: Initializing logfile /var/log/cellos/dbnodeupdate.log
 (*) 2016-02-05 17:06:44: Collecting system configuration settings. This may take a while...
 (*) 2016-02-05 17:07:10: Validating system settings for known issues and best practices. This may take a while...
 (*) 2016-02-05 17:07:10: Checking free space in /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170641
 (*) 2016-02-05 17:07:10: Unzipping /u01/exapatch/p20170913_121210_Linux-x86-64/p20170913_121210_Linux-x86-64.zip to /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170641, this may take a while
 (*) 2016-02-05 17:07:23: Original /etc/yum.conf moved to /etc/yum.conf.050215170641, generating new yum.conf
 (*) 2016-02-05 17:07:23: Generating Exadata repository file /etc/yum.repos.d/Exadata-computenode.repo
 (*) 2016-02-05 17:07:56: Validating the specified source location.
 (*) 2016-02-05 17:07:57: Cleaning up the yum cache.

-----------------------------------------------------------
Running in prereq check mode
-----------------------------------------------------------

Active Image version   : 12.1.1.1.1.140712
 Active Kernel version  : 2.6.39-400.128.17.el5uek
 Active LVM Name        : /dev/mapper/VGExaDb-LVDbSys1
 Inactive Image version : 12.1.1.1.0.131219
 Inactive LVM Name      : /dev/mapper/VGExaDb-LVDbSys2
 Current user id        : root
 Action                 : upgrade
 Upgrading to           : 12.1.2.1.0.141206.1 - Oracle Linux 5->6 upgrade
 Baseurl                : file:///var/www/html/yum/unknown/EXADATA/dbserver/050215170641/x86_64/ (iso)
 Iso file               : /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170641/exadata_ol6_base_repo_141206.1.iso
 Create a backup        : Yes (backup at update mandatory when updating from OL5 to OL6)
 Shutdown stack         : No (Currently stack is down)
 RPM exclusion list     : Function not available for OL5->OL6 upgrades
 RPM obsolete list      : Function not available for OL5->OL6 upgrades
 Exact dependencies     : Function not available for OL5->OL6 upgrades
 Minimum dependencies   : Function not available for OL5->OL6 upgrades
 Logfile                : /var/log/cellos/dbnodeupdate.log (runid: 050215170641)
 Diagfile               : /var/log/cellos/dbnodeupdate.050215170641.diag
 Server model           : SUN FIRE X4170 M3
 dbnodeupdate.sh rel.   : 4.18 (always check MOS 1553103.1 for the latest release of dbnodeupdate.sh)
 Note                   : After upgrading and rebooting run './dbnodeupdate.sh -c' to finish post steps.
The following known issues will be checked for and automatically corrected by dbnodeupdate.sh:
 (*) - Issue - Fix for CVE-2014-9295 AND ELSA-2014-1974
The following known issues will be checked for but require manual follow-up:
 (*) - Issue - Yum rolling update requires fix for 11768055 when Grid Infrastructure is below 11.2.0.2 BP12
 (*) - Issue - Exafusion silently enabled for database 12.1.0.2.0 with kernel 2.6.39-400.200 and later. See MOS 1947476.1 for more details.
---------------------------------------------------------------------------------------------------------------------
 NOTE:
 When upgrading to Oracle Linux 6 a backup is required for systems configured with logical volume manager (lvm).
 It appears no backup of the current image exist on the inactive lvm.
 This means a mandatory backup will be made using dbnodeupdate.sh before the actual update starts.
 ---------------------------------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------------------------
 Prereq check finished successfully, check the above report for next steps.
-----------------------------------------------------------------------------------------------------------------------------
(*) 2016-02-05 17:08:01: Cleaning up iso and temp mount points
[root@ch01db02 dbnodeupdate]#

-----------------------------------------------------------
-- DB Server Patching Execution
-----------------------------------------------------------

[root@ch01db02 dbnodeupdate]# ./dbnodeupdate.sh -u -l /u01/exapatch/p20170913_121210_Linux-x86-64/p20170913_121210_Linux-x86-64.zip
##########################################################################################################################
 #                                                                                                                        #
 # Guidelines for using dbnodeupdate.sh (rel. 4.18):                                                                      #
 #                                                                                                                        #
 # - Prerequisites for usage:                                                                                             #
 #         1. Refer to dbnodeupdate.sh options. See MOS 1553103.1                                                         #
 #         2. Use the latest release of dbnodeupdate.sh. See patch 16486998                                               #
 #         3. Run the prereq check with the '-v' option.                                                                  #
 #                                                                                                                        #
 #   I.e.:  ./dbnodeupdate.sh -u -l /u01/my-iso-repo.zip -v                                                               #
 #          ./dbnodeupdate.sh -u -l http://my-yum-repo -v                                                                 #
 #                                                                                                                        #
 # - Prerequisite dependency check failures can happen due to customization:                                              #
 #     - The prereq check detects dependency issues that need to be addressed prior to running a successful update.       #
 #     - Customized rpm packages may fail the built-in dependency check and system updates cannot proceed until resolved. #
 #                                                                                                                        #
 #   When upgrading from releases later than 11.2.2.4.2 to releases before 11.2.3.3.0:                                    #
 #      - Conflicting packages should be removed before proceeding the update.                                            #
 #                                                                                                                        #
 #   When upgrading to releases 11.2.3.3.0 or later:                                                                      #
 #      - When the 'exact' package dependency check fails 'minimum' package dependency check will be tried.               #
 #      - When the 'minimum' package dependency check also fails,                                                         #
 #        the conflicting packages should be removed before proceeding.                                                   #
 #                                                                                                                        #
 # - As part of the prereq checks and as part of the update, a number of rpms will be removed.                            #
 #   This removal is required to preserve Exadata functioning. This should not be confused with obsolete packages.        #
 #      - See /var/log/cellos/packages_to_be_removed.txt for details on what packages will be removed.                    #
 #                                                                                                                        #
 # - In case of any problem when filing an SR, upload the following:                                                      #
 #      - /var/log/cellos/dbnodeupdate.log                                                                                #
 #      - /var/log/cellos/dbnodeupdate.<runid>.diag                                                                       #
 #      - where <runid> is the unique number of the failing run.                                                          #
 #                                                                                                                        #
 ##########################################################################################################################
Continue ? [y/n]
 y
(*) 2016-02-05 17:09:38: Unzipping helpers (/u01/exapatch/dbnodeupdate/dbupdate-helpers.zip) to /opt/oracle.SupportTools/dbnodeupdate_helpers
 (*) 2016-02-05 17:09:38: Initializing logfile /var/log/cellos/dbnodeupdate.log
 (*) 2016-02-05 17:09:39: Collecting system configuration settings. This may take a while...
 (*) 2016-02-05 17:10:07: Validating system settings for known issues and best practices. This may take a while...
 (*) 2016-02-05 17:10:07: Checking free space in /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170936
 (*) 2016-02-05 17:10:07: Unzipping /u01/exapatch/p20170913_121210_Linux-x86-64/p20170913_121210_Linux-x86-64.zip to /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170936, this may take a while
 (*) 2016-02-05 17:10:19: Original /etc/yum.conf moved to /etc/yum.conf.050215170936, generating new yum.conf
 (*) 2016-02-05 17:10:19: Generating Exadata repository file /etc/yum.repos.d/Exadata-computenode.repo
 (*) 2016-02-05 17:10:42: Validating the specified source location.
 (*) 2016-02-05 17:10:43: Cleaning up the yum cache.
Active Image version   : 12.1.1.1.1.140712
 Active Kernel version  : 2.6.39-400.128.17.el5uek
 Active LVM Name        : /dev/mapper/VGExaDb-LVDbSys1
 Inactive Image version : 12.1.1.1.0.131219
 Inactive LVM Name      : /dev/mapper/VGExaDb-LVDbSys2
 Current user id        : root
 Action                 : upgrade
 Upgrading to           : 12.1.2.1.0.141206.1 - Oracle Linux 5->6 upgrade
 Baseurl                : file:///var/www/html/yum/unknown/EXADATA/dbserver/050215170936/x86_64/ (iso)
 Iso file               : /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170936/exadata_ol6_base_repo_141206.1.iso
 Create a backup        : Yes (backup at update mandatory when updating from OL5 to OL6)
 Shutdown stack         : No (Currently stack is down)
 RPM exclusion list     : Function not available for OL5->OL6 upgrades
 RPM obsolete list      : Function not available for OL5->OL6 upgrades
 Exact dependencies     : Function not available for OL5->OL6 upgrades
 Minimum dependencies   : Function not available for OL5->OL6 upgrade
 Logfile                : /var/log/cellos/dbnodeupdate.log (runid: 050215170936)
 Diagfile               : /var/log/cellos/dbnodeupdate.050215170936.diag
 Server model           : SUN FIRE X4170 M3
 dbnodeupdate.sh rel.   : 4.18 (always check MOS 1553103.1 for the latest release of dbnodeupdate.sh)
 Note                   : After upgrading and rebooting run './dbnodeupdate.sh -c' to finish post steps.
The following known issues will be checked for and automatically corrected by dbnodeupdate.sh:
 (*) - Issue - Fix for CVE-2014-9295 AND ELSA-2014-1974
The following known issues will be checked for but require manual follow-up:
 (*) - Issue - Yum rolling update requires fix for 11768055 when Grid Infrastructure is below 11.2.0.2 BP12
 (*) - Issue - Exafusion silently enabled for database 12.1.0.2.0 with kernel 2.6.39-400.200 and later. See MOS 1947476.1 for more details.
Continue ? [y/n]
 y
(*) 2016-02-05 17:11:59: Verifying GI and DB's are shutdown
 (*) 2016-02-05 17:12:00: Collecting console history for diag purposes
 (*) 2016-02-05 17:12:32: Unmount of /boot successful
 (*) 2016-02-05 17:12:32: Check for /dev/sda1 successful
 (*) 2016-02-05 17:12:32: Mount of /boot successful
 (*) 2016-02-05 17:12:32: Disabling stack from starting
 (*) 2016-02-05 17:12:33: Performing filesystem backup to /dev/mapper/VGExaDb-LVDbSys2. Avg. 30 minutes (maximum 120) depends per environment.......
 (*) 2016-02-05 17:18:44: Backup successful
 (*) 2016-02-05 17:18:47: ExaWatcher stopped successful
 (*) 2016-02-05 17:19:07: EM Agent (in /u01/app/oracle/product/agent12c/core/12.1.0.4.0) stopped successfully
 (*) 2016-02-05 17:19:07: Capturing service status and file attributes. This may take a while...
 (*) 2016-02-05 17:19:12: Service status and file attribute report in: /etc/exadata/reports
 (*) 2016-02-05 17:19:12: Validating the specified source location.
 (*) 2016-02-05 17:19:13: Cleaning up the yum cache.
 (*) 2016-02-05 17:19:14: Executing OL5->OL6 upgrade steps, system is expected to reboot multiple times.
 (*) 2016-02-05 17:21:37: Initialize of Oracle Linux 6 Upgrade successful. Rebooting now...
Broadcast message from root (pts/0) (Thu Feb  5 17:21:37 2015):
The system is going down for reboot NOW!
[root@ch01db02 dbnodeupdate]#
--After the node has rebooted and is back up, run the completion step to finish the post actions
[root@ch01db02 dbnodeupdate]# ./dbnodeupdate.sh -c
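
Once the completion step finishes on each node, a quick sanity check confirms the new image is active and the clusterware stack comes back (a sketch; grid home path as used earlier in this post):

[root@ch01db02 ~]# imageinfo -ver
[root@ch01db02 ~]# /u01/app/12.1.0.2/grid/bin/crsctl check crs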

-----------------------------------------------------------
-- Verify the new image version
-----------------------------------------------------------

[root@ch01db01 ibdiagtools]# imageinfo
Kernel version: 2.6.39-400.243.1.el6uek.x86_64 #1 SMP Wed Nov 26 09:15:35 PST 2014 x86_64
 Image version: 12.1.2.1.0.141206.1
 Image activated: 2016-02-05 18:24:46 +0100
 Image status: success
 System partition on device: /dev/mapper/VGExaDb-LVDbSys1

ODA CPU Capping

####################################################################
# How to reduce the number of active CPU cores on ODA system
####################################################################

--Find the ODA Serial Number
 [root@odanode1 ~]# /usr/sbin/dmidecode -t1 |grep Serial
 Serial Number: 1xxxxxXXXXxG
--Log in to MOS and generate the CPU key using the ODA serial number.
------------------------------------
-- Target active CPU cores --
------------------------------------
   HOSTNAME     |   CPU COUNT
 ---------------|----------------
   odanode1     |      6
   odanode2     |      6
 --------------------------------
-------------------------------------------------------------------------------
--Reduce the CPU cores by running the following commands from the first node only!
-------------------------------------------------------------------------------
 /opt/oracle/oak/bin/oakcli show core_config_key
 /opt/oracle/oak/bin/oakcli apply core_config_key /tmp/CPU_KEY
------------------------------------------
 --Activity Log
 ------------------------------------------
 [root@odanode1 tmp]# vi CPU_KEY  <--- Store the CPU key generated on MOS
 [root@odanode1 tmp]# /opt/oracle/oak/bin/oakcli show core_config_key
 Optional core_config_key is not applied on this machine yet !
 [root@odanode1 tmp]# pwd
 /tmp
 [root@odanode1 tmp]# /opt/oracle/oak/bin/oakcli apply core_config_key /tmp/CPU_KEY
 INFO: Both nodes get rebooted automatically after applying the license
 Do you want to continue: [Y/N]?:
 Y
 INFO: User has confirmed the reboot
Please enter the root password:
............done
INFO: Applying core_config_key on '192.168.16.25'
 ...
 INFO   : Running as root: /usr/bin/ssh -l root 192.168.16.25 /tmp/tmp_lic_exec.pl
 INFO   : Running as root: /usr/bin/ssh -l root 192.168.16.25 /opt/oracle/oak/bin/oakcli enforce core_config_key /tmp/.lic_file
 Waiting for the Node '192.168.16.25' to reboot...........................
 Node '192.168.16.25' is  rebooted
 Waiting for the Node '192.168.16.25' to be up before applying the license on the node '192.168.16.24'.
 .............................................
 INFO: Applying core_config_key on '192.168.16.24'
 ...
 INFO   : Running as root: /usr/bin/ssh -l root 192.168.16.24 /tmp/tmp_lic_exec.pl
 INFO   : Running as root: /usr/bin/ssh -l root 192.168.16.24 /opt/oracle/oak/bin/oakcli enforce core_config_key /tmp/.lic_file
Broadcast message from root (Fri Jun  7 15:18:34 2013):
The system is going down for reboot NOW!
 [root@odanode1 tmp]#
-------------------------------------------------------------
 --Check the new Number of active cores
 -------------------------------------------------------------
[root@odanode1 ~]# /opt/oracle/oak/bin/oakcli show core_config_key
Host's serialnumber                    =                     1xxxxxXXXXxG
 Enabled Cores (per server)             =                                   6
 Total Enabled Cores (on two servers)   =                             12
 Server type                            =        V1 -> SUN FIRE X4370 M2
 Hyperthreading is enabled.  Each core has 2 threads. Operating system displays 12 processors per server
 [root@odanode1 ~]#
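
The core count reported by oakcli can be cross-checked from the OS on each node; with 6 cores per server and hyperthreading enabled, 12 logical processors are expected (a sketch):

[root@odanode1 ~]# grep -c ^processor /proc/cpuinfo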

Oracle Database Appliance Bundle Patch 2.6

##################################################################
# Installing Oracle Database Appliance (ODA) Bundle Patch 2.6.0.0.0
##################################################################

--Path where all ODA logs are stored:
 /opt/oracle/oak/log/odanode1/patch/2.6.0.0.0
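
The patch steps below run for quite a while; progress can be followed from a second terminal by tailing the logs in that directory (a sketch; exact file names vary per run):

[root@odanode1 ~]# tail -f /opt/oracle/oak/log/odanode1/patch/2.6.0.0.0/*.log
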
-------------------------------------------------
 --ODA Software version before patching
 -------------------------------------------------
 [root@odanode1 bin]# /opt/oracle/oak/bin/oakcli show version -detail
 reading the metadata. It takes a while...
 System Version   Component Name   Installed Version      Supported Version
 --------------   --------------   --------------------   -----------------
 2.4.0.0.0
                  Controller       11.05.02.00            Up-to-date
                  Expander         0342                   Up-to-date
                  SSD_SHARED       E125                   Up-to-date
                  HDD_LOCAL        5G08                   Up-to-date
                  HDD_SHARED       A700                   A6C0
                  ILOM             3.0.16.22.a r75629     Up-to-date
                  BIOS             12010310               Up-to-date
                  IPMI             1.8.10.5               Up-to-date
                  HMP              2.2.4                  Up-to-date
                  OAK              2.4.0.0.0              Up-to-date
                  OEL              5.8                    Up-to-date
                  TFA              2.4                    Up-to-date
                  GI_HOME          11.2.0.3.4(14275605,   Up-to-date
                                   14275572)
                  DB_HOME          11.2.0.3.4(14275605,   Up-to-date
                                   14275572)
                  ASR              Unknown                3.9
 [root@odanode1 bin]#
-------------------------------------------------
 --Unzip the patch bundle 2.6.0.0.0 on both nodes
 -------------------------------------------------
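
Before unpacking, it is worth validating the downloaded bundle against the checksum published on the MOS patch page, since a corrupt zip can fail the patch midway (a sketch):

[root@odanode1 u01]# md5sum /u01/ODA_patches/bundle_2600/p16744915_26000_Linux-x86-64.zip
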
 --ODA Node 1
 [root@odanode1 u01]# cd /opt/oracle/oak/bin
 [root@odanode1 bin]# ./oakcli unpack -package /u01/ODA_patches/bundle_2600/p16744915_26000_Linux-x86-64.zip
 Unpacking takes a while,  pls wait....
 Successfully unpacked the files to repository.
 [root@odanode1 bin]#
--ODA Node 2
 [root@odanode2 u01]# cd /opt/oracle/oak/bin
 [root@odanode2 bin]# ./oakcli unpack -package /u01/ODA_patches/bundle_2600/p16744915_26000_Linux-x86-64.zip
 Unpacking takes a while,  pls wait....
 Successfully unpacked the files to repository.
 [root@odanode2 bin]#
-------------------------------------------------
 --Apply the Patch to the Infrastructure
 -------------------------------------------------
--ODA Node 1 ONLY
 [root@odanode1 bin]# cd /opt/oracle/oak/bin
 [root@odanode1 bin]# ./oakcli update -patch 2.6.0.0.0 --infra
 INFO: DB, ASM, Clusterware may be stopped during the patch if required
 INFO: Both nodes may get rebooted automatically during the patch if required
 Do you want to continue: [Y/N]?: Y
 INFO: User has confirmed the reboot
 INFO: Patch bundle must be unpacked on the second node also before applying this patch
 Did you unpack the patch bundle on the second node?: [Y/N]?: Y
Please enter the 'root' user password:
 Please re-enter the 'root' user password:
 INFO: Setting up the SSH
 ..........done
 INFO: Running pre-install scripts
 ..........done
 INFO: 2013-05-15 11:04:11: Running pre patch script for 2.6.0.0.0
 INFO: 2013-05-15 11:04:14: Completed pre patch script for 2.6.0.0.0
INFO: 2013-05-15 11:04:19: ------------------Patching HMP-------------------------
 SUCCESS: 2013-05-15 11:04:50: Successfully upgraded the HMP
INFO: 2013-05-15 11:04:50: ----------------------Patching OAK---------------------
 SUCCESS: 2013-05-15 11:05:13: Succesfully upgraded OAK
INFO: 2013-05-15 11:05:15: -----------------Installing / Patching  TFA-----------------
 SUCCESS: 2013-05-15 11:06:55: Successfully updated / installed the TFA
 ...
INFO: 2013-05-15 11:06:56: ------------------Patching OS-------------------------
 INFO: 2013-05-15 11:07:05: Clusterware is running on one or more nodes of the cluster
 INFO: 2013-05-15 11:07:05: Attempting to stop clusterware and its resources across the cluster
 SUCCESS: 2013-05-15 11:09:08: Successfully stopped the clusterware
SUCCESS: 2013-05-15 11:09:49: Successfully upgraded the OS
INFO: 2013-05-15 11:09:53: ----------------------Patching IPMI---------------------
 SUCCESS: 2013-05-15 11:09:55: Succesfully upgraded IPMI
INFO: 2013-05-15 11:10:02: ----------------Patching the Storage-------------------
 INFO: 2013-05-15 11:10:02: ....................Patching SSDs...............
 INFO: 2013-05-15 11:10:02: Updating the  Disk : d20 with the firmware : ZeusIOPs G3 E12B
 SUCCESS: 2013-05-15 11:10:27: Successfully updated the firmware on  Disk : d20 to ZeusIOPs G3 E12B
 INFO: 2013-05-15 11:10:27: Updating the  Disk : d21 with the firmware : ZeusIOPs G3 E12B
 SUCCESS: 2013-05-15 11:10:48: Successfully updated the firmware on  Disk : d21 to ZeusIOPs G3 E12B
 INFO: 2013-05-15 11:10:48: Updating the  Disk : d22 with the firmware : ZeusIOPs G3 E12B
 SUCCESS: 2013-05-15 11:11:10: Successfully updated the firmware on  Disk : d22 to ZeusIOPs G3 E12B
 INFO: 2013-05-15 11:11:11: Updating the  Disk : d23 with the firmware : ZeusIOPs G3 E12B
 SUCCESS: 2013-05-15 11:11:34: Successfully updated the firmware on  Disk : d23 to ZeusIOPs G3 E12B
 INFO: 2013-05-15 11:11:34: ....................Patching shared HDDs...............
 INFO: 2013-05-15 11:11:34: Disk : d0  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:34: Disk : d1  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:34: Disk : d2  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:35: Disk : d3  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:35: Disk : d4  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:35: Disk : d5  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:35: Disk : d6  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:35: Disk : d7  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:36: Disk : d8  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:36: Disk : d9  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:36: Disk : d10  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:36: Disk : d11  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:37: Disk : d12  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:37: Disk : d13  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:37: Disk : d14  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:37: Disk : d15  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:38: Disk : d16  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:38: Disk : d17  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:38: Disk : d18  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:38: Disk : d19  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:38: ....................Patching local HDDs...............
 INFO: 2013-05-15 11:11:38: Disk : c0d0  is already running with : WD500BLHXSUN 5G08
 INFO: 2013-05-15 11:11:39: Disk : c0d1  is already running with : WD500BLHXSUN 5G08
 INFO: 2013-05-15 11:11:39: ....................Patching Expanders...............
 INFO: 2013-05-15 11:11:39: Expander : x0  is already running with : T4 Storage 0342
 INFO: 2013-05-15 11:11:39: Expander : x1  is already running with : T4 Storage 0342
 INFO: 2013-05-15 11:11:39: ....................Patching Controllers...............
 INFO: 2013-05-15 11:11:39: No-update for the Controller: c0
 INFO: 2013-05-15 11:11:39: Controller : c1  is already running with : 0x0072 11.05.02.00
 INFO: 2013-05-15 11:11:39: Controller : c2  is already running with : 0x0072 11.05.02.00
 INFO: 2013-05-15 11:11:39: ------------Finished the storage Patching------------
INFO: 2013-05-15 11:11:40: -----------------Patching Ilom & Bios-----------------
 INFO: 2013-05-15 11:11:41: Getting the ILOM Ip address
 INFO: 2013-05-15 11:11:42: Updating the Ilom using LAN+ protocol
 INFO: 2013-05-15 11:11:43: Updating the ILOM. It takes a while
 INFO: 2013-05-15 11:16:24: Verifying the updated Ilom Version, it may take a while if ServiceProcessor is booting
 INFO: 2013-05-15 11:16:25: Waiting for the service processor to be up
 SUCCESS: 2013-05-15 11:20:09: Successfully updated the ILOM with the firmware 3.0.16.22.b r78329
INFO: Patching the infrastructure on node: odanode2 , it may take upto 30 minutes. Please wait
 ...
 ............done
INFO: Infrastructure patching summary on node: 192.168.16.24
 SUCCESS: 2013-05-15 11:31:05:  Successfully upgraded the HMP
 SUCCESS: 2013-05-15 11:31:05:  Succesfully updated the OAK
 SUCCESS: 2013-05-15 11:31:05:  Successfully updated the TFA
 SUCCESS: 2013-05-15 11:31:05:  Successfully upgraded the OS
 SUCCESS: 2013-05-15 11:31:05:  Succesfully updated the IPMI
 INFO: 2013-05-15 11:31:05:  Storage patching summary
 SUCCESS: 2013-05-15 11:31:05:  No failures during storage upgrade
 SUCCESS: 2013-05-15 11:31:05:  Successfully updated the ILOM & Bios
INFO: Infrastructure patching summary on node: 192.168.16.25
 SUCCESS: 2013-05-15 11:31:05:  Successfully upgraded the HMP
 SUCCESS: 2013-05-15 11:31:05:  Succesfully updated the OAK
 SUCCESS: 2013-05-15 11:31:05:  Successfully upgraded the OS
 SUCCESS: 2013-05-15 11:31:05:  Succesfully updated the IPMI
 INFO: 2013-05-15 11:31:05:  Storage patching summary
 SUCCESS: 2013-05-15 11:31:05:  No failures during storage upgrade
 SUCCESS: 2013-05-15 11:31:05:  Successfully updated the ILOM & Bios
INFO: Running post-install scripts
 ............done
 INFO: Some of the patched components require node reboot. Rebooting the nodes
 INFO: Setting up the SSH
 ............done
Broadcast message from root (Wed May 15 11:35:50 2013):
The system is going down for system halt NOW!
-------------------------------------------------
 --Apply the Patch to the Grid Infrastructure
 -------------------------------------------------
--Stop the EM agent on BOTH ODA nodes
 [oracle@odanode1 OPatch]$ /u01/app/oracle/product/agent12c/agent_inst/bin/emctl stop agent
 Oracle Enterprise Manager Cloud Control 12c Release 2
 Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
 Stopping agent ..... stopped.
--ODA Node 1 ONLY
 [root@odanode1 bin]# cd /opt/oracle/oak/bin
 [root@odanode1 bin]# ./oakcli update -patch 2.6.0.0.0 --gi
Please enter the 'root' user password:
 Please re-enter the 'root' user password:
Please enter the 'grid' user password:
 Please re-enter the 'grid' user password:
 INFO: Setting up the SSH
 ..........done
 ...
 ...
..........done
 ...
 SUCCESS: All nodes in /opt/oracle/oak/temp_clunodes.txt are pingable and alive.
 INFO: 2013-05-15 11:56:10: Setting up the ssh for grid user
 ..........done
 ...
 SUCCESS: All nodes in /opt/oracle/oak/temp_clunodes.txt are pingable and alive.
 INFO: 2013-05-15 11:56:30: Patching the GI home on node odanode1
 INFO: 2013-05-15 11:56:30: Updating the opatch
 INFO: 2013-05-15 11:56:56: Performing the conflict checks
 SUCCESS: 2013-05-15 11:57:07: Conflict checks passed for all the homes
 INFO: 2013-05-15 11:57:07: Checking if the patch is already applied on any of the homes
 INFO: 2013-05-15 11:57:10: No home is already up-to-date
 SUCCESS: 2013-05-15 11:57:21: Successfully stopped the dbconsoles
 SUCCESS: 2013-05-15 11:57:36: Successfully stopped the EM agents
 INFO: 2013-05-15 11:57:41: Applying patch on the homes: /u01/app/11.2.0.3/grid
 INFO: 2013-05-15 11:57:41: It may take upto 15 mins
 SUCCESS: 2013-05-15 12:08:27: Successfully applied the patch on home: /u01/app/11.2.0.3/grid
 SUCCESS: 2013-05-15 12:08:27: Successfully started the dbconsoles
 SUCCESS: 2013-05-15 12:08:38: Successfully started the EM Agents
 INFO: 2013-05-15 12:08:39: Patching the GI home on node odanode2
 ...
..........done
INFO: GI patching summary on node: odanode1
 SUCCESS: 2013-05-15 12:22:58:  Successfully applied the patch on home /u01/app/11.2.0.3/grid
INFO: GI patching summary on node: odanode2
 SUCCESS: 2013-05-15 12:22:58:  Successfully applied the patch on home /u01/app/11.2.0.3/grid
INFO: Running post-install scripts
 ..........done
 INFO: Setting up the SSH
 ..........done
[root@odanode1 bin]#
[root@odanode2 ~]# su - grid
 [grid@odanode2 ~]$ cd /u01/app/11.2.0.3/grid/OPatch/
 [grid@odanode2 OPatch]$ ./opatch lsinv
 Oracle Interim Patch Installer version 11.2.0.3.4
 Copyright (c) 2012, Oracle Corporation.  All rights reserved.
Oracle Home       : /u01/app/11.2.0.3/grid
 Central Inventory : /u01/app/oraInventory
 from           : /u01/app/11.2.0.3/grid/oraInst.loc
 OPatch version    : 11.2.0.3.4
 OUI version       : 11.2.0.3.0
 Log file location : /u01/app/11.2.0.3/grid/cfgtoollogs/opatch/opatch2013-05-15_12-33-15PM_1.log
Lsinventory Output file location : /u01/app/11.2.0.3/grid/cfgtoollogs/opatch/lsinv/lsinventory2013-05-15_12-33-15PM.txt
--------------------------------------------------------------------------------
 Installed Top-level Products (1):
Oracle Grid Infrastructure                                           11.2.0.3.0
 There are 1 products installed in this Oracle Home.
Interim patches (2) :
Patch  16056266     : applied on Wed May 15 12:18:38 CEST 2013
 Unique Patch ID:  15962803
 Patch description:  "Database Patch Set Update : 11.2.0.3.6 (16056266)"
 Created on 12 Mar 2013, 02:14:47 hrs PST8PDT
 Sub-patch  14727310; "Database Patch Set Update : 11.2.0.3.5 (14727310)"
 Sub-patch  14275605; "Database Patch Set Update : 11.2.0.3.4 (14275605)"
 Sub-patch  13923374; "Database Patch Set Update : 11.2.0.3.3 (13923374)"
 Sub-patch  13696216; "Database Patch Set Update : 11.2.0.3.2 (13696216)"
 Sub-patch  13343438; "Database Patch Set Update : 11.2.0.3.1 (13343438)"
 Bugs fixed:
 13566938, 13593999, 10350832, 14138130, 12919564, 13561951, 13624984
 13588248, 13080778, 13914613, 13804294, 14258925, 12873183, 13645875
 14472647, 12880299, 14664355, 14409183, 12998795, 14469008, 13719081
 13492735, 13496884, 12857027, 14263036, 14263073, 13732226, 13742433
 16368108, 16314469, 12905058, 13742434, 12849688, 12950644, 13742435
 13464002, 13534412, 12879027, 13958038, 14613900, 12585543, 12535346
 12588744, 11877623, 13786142, 12847466, 13649031, 13981051, 12582664
 12797765, 14262913, 12923168, 13384182, 13612575, 13466801, 13484963
 14207163, 11063191, 13772618, 13070939, 12797420, 13041324, 16314467
 16314468, 12976376, 11708510, 13680405, 14589750, 13026410, 13742437
 13737746, 14644185, 13742438, 13326736, 13596521, 13001379, 16344871
 13099577, 9873405, 14275605, 13742436, 9858539, 14841812, 11715084
 16231699, 14040433, 12662040, 9703627, 12617123, 12845115, 12764337
 13354082, 14459552, 13397104, 13913630, 12964067, 12983611, 13550185
 13810393, 12780983, 12583611, 14546575, 13476583, 15862016, 11840910
 13903046, 15862017, 13572659, 16294378, 13718279, 14088346, 13657605
 13448206, 16314466, 14480676, 13419660, 13632717, 14063281, 14110275
 13430938, 13467683, 13420224, 13812031, 14548763, 16299830, 12646784
 13616375, 14035825, 12861463, 12834027, 15862021, 13632809, 13377816
 13036331, 14727310, 13685544, 15862018, 13499128, 16175381, 13584130
 12829021, 15862019, 12794305, 14546673, 12791981, 13787482, 13503598
 10133521, 12718090, 13399435, 14023636, 13860201, 12401111, 13257247
 13362079, 14176879, 12917230, 13923374, 14220725, 14480675, 13524899
 13559697, 9706792, 14480674, 13916709, 13098318, 13773133, 14076523
 13340388, 13366202, 13528551, 12894807, 13454210, 13343438, 12748240
 14205448, 13385346, 15853081, 14273397, 12971775, 13582702, 10242202
 13035804, 13544396, 16382353, 8547978, 14226599, 14062795, 13035360
 12693626, 13332439, 14038787, 14062796, 12913474, 14841409, 14390252
 16314470, 13370330, 13059165, 14062797, 14062794, 12959852, 13358781
 12345082, 12960925, 9659614, 13699124, 14546638, 13936424, 13338048
 12938841, 12658411, 12620823, 12656535, 14062793, 12678920, 13038684
 14062792, 13807411, 13250244, 12594032, 15862022, 9761357, 12612118
 13742464, 14052474, 13911821, 13457582, 13527323, 15862020, 13910420
 13502183, 12780098, 13705338, 13696216, 14841558, 10263668, 15862023
 16056266, 15862024, 13554409, 13645917, 13103913, 13011409, 14063280
Patch  16315641     : applied on Wed May 15 12:17:13 CEST 2013
 Unique Patch ID:  15966967
 Patch description:  "Grid Infrastructure Patch Set Update : 11.2.0.3.6 (16083653)"
 Created on 1 Apr 2013, 03:41:20 hrs PST8PDT
 Bugs fixed:
 16315641, 15876003, 14275572, 13919095, 13696251, 13348650, 12659561
 14305980, 14277586, 13987807, 14625969, 13825231, 12794268, 13000491
 13498267, 11675721, 14082976, 12771830, 14515980, 14085018, 13943175
 14102704, 14171552, 12594616, 13879428, 12897902, 12726222, 12829429
 13079948, 13090686, 12995950, 13251796, 13582411, 12990582, 13857364
 13082238, 12947871, 13256955, 13037709, 14535011, 12878750, 14048512
 11772838, 13058611, 13001955, 13440962, 13727853, 13425727, 12885323
 12870400, 14212634, 14407395, 13332363, 13430626, 13811209, 12709476
 14168708, 14096821, 14626717, 13460353, 13694885, 12857064, 12899169
 13111013, 12558569, 13323698, 10260842, 13085732, 10317921, 13869978
 12914824, 13789135, 12730342, 12950823, 13355963, 13531373, 14268365
 13776758, 12720728, 13620816, 13023609, 13024624, 13039908, 13036424
 13938166, 13011520, 13569812, 12758736, 13001901, 13077654, 13430715
 13550689, 13806545, 13634583, 14271305, 12538907, 13947200, 12996428
 13066371, 13483672, 12897651, 13540563, 12896850, 13241779, 12728585
 12876314, 12925041, 12650672, 12398492, 12848480, 13652088, 16307750
 12917897, 12975811, 13653178, 13371153, 14800989, 10114953, 14001941
 11836951, 14179376, 12965049, 14773530, 12765467, 13339443, 13965075
 16210540, 14307855, 12784559, 14242977, 13955385, 12704789, 13745317
 13074261, 12971251, 13993634, 13523527, 13719731, 13396284, 12639013
 12867511, 12959140, 14748254, 12829917, 12349553, 12849377, 12934171
 13843080, 14496536, 13924431, 12680491, 13334158, 10418841, 12832204
 13838047, 13002015, 12791719, 13886023, 13821454, 12782756, 14100232
 14186070, 14569263, 12873909, 13845120, 14214257, 12914722, 12842804
 12772345, 12663376, 14059576, 13889047, 12695029, 13924910, 13146560
 14070200, 13820621, 14304758, 12996572, 13941934, 14711358, 13019958
 13888719, 16463033, 12823838, 13877508, 12823042, 14494305, 13582706
 13617861, 12825835, 13025879, 13853089, 13410987, 13570879, 13247273
 13255295, 14152875, 13912373, 13011182, 13243172, 13045518, 12765868
 11825850, 15986571, 13345868, 13683090, 12932852, 13038806, 14588629
 14251904, 13396356, 13697828, 12834777, 13258062, 14371335, 13657366
 12810890, 15917085, 13502441, 14637577, 13880925, 13726162, 14153867
 13506114, 12820045, 13604057, 13263435, 14009845, 12827493, 13637590, 13068077
Rac system comprising of multiple nodes
 Local node = odanode2
 Remote node = odanode1
--------------------------------------------------------------------------------
OPatch succeeded.
 [grid@odanode2 OPatch]$
--Stop CRS on both Nodes
 [root@odanode1 2.6.0.0.0]# /u01/app/11.2.0.3/grid/bin/crsctl stop crs
 CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'odanode1'
 CRS-2673: Attempting to stop 'ora.crsd' on 'odanode1'
 CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'odanode1'
 CRS-2673: Attempting to stop 'ora.cvu' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.efboeur.efbo_applb.efow.com.svc' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.efcteur.efct_applb.efow.com.svc' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.efpheur.efph_applb.efow.com.svc' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'odanode1'
 CRS-2677: Stop of 'ora.efboeur.efbo_applb.efow.com.svc' on 'odanode1' succeeded
 CRS-2677: Stop of 'ora.efpheur.efph_applb.efow.com.svc' on 'odanode1' succeeded
 CRS-2677: Stop of 'ora.efcteur.efct_applb.efow.com.svc' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.efboeur.db' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.efpheur.db' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.efcteur.db' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.registry.acfs' on 'odanode1'
 CRS-2677: Stop of 'ora.cvu' on 'odanode1' succeeded
 CRS-2672: Attempting to start 'ora.cvu' on 'odanode2'
 CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.odanode1.vip' on 'odanode1'
 CRS-2676: Start of 'ora.cvu' on 'odanode2' succeeded
 CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.scan2.vip' on 'odanode1'
 CRS-2677: Stop of 'ora.registry.acfs' on 'odanode1' succeeded
 CRS-2677: Stop of 'ora.efboeur.db' on 'odanode1' succeeded
 CRS-2677: Stop of 'ora.efpheur.db' on 'odanode1' succeeded
 CRS-2677: Stop of 'ora.efcteur.db' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.RECO.dg' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.REDO.dg' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.DATA.dg' on 'odanode1'
 CRS-2677: Stop of 'ora.odanode1.vip' on 'odanode1' succeeded
 CRS-2672: Attempting to start 'ora.odanode1.vip' on 'odanode2'
 CRS-2677: Stop of 'ora.REDO.dg' on 'odanode1' succeeded
 CRS-2677: Stop of 'ora.scan2.vip' on 'odanode1' succeeded
 CRS-2672: Attempting to start 'ora.scan2.vip' on 'odanode2'
 CRS-2677: Stop of 'ora.RECO.dg' on 'odanode1' succeeded
 CRS-2676: Start of 'ora.odanode1.vip' on 'odanode2' succeeded
 CRS-2676: Start of 'ora.scan2.vip' on 'odanode2' succeeded
 CRS-2677: Stop of 'ora.DATA.dg' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.asm' on 'odanode1'
 CRS-2677: Stop of 'ora.asm' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.ons' on 'odanode1'
 CRS-2677: Stop of 'ora.ons' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.net1.network' on 'odanode1'
 CRS-2677: Stop of 'ora.net1.network' on 'odanode1' succeeded
 CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'odanode1' has completed
 CRS-2677: Stop of 'ora.crsd' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.ctssd' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.evmd' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.asm' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.mdnsd' on 'odanode1'
 CRS-2677: Stop of 'ora.evmd' on 'odanode1' succeeded
 CRS-2677: Stop of 'ora.mdnsd' on 'odanode1' succeeded
 CRS-2677: Stop of 'ora.ctssd' on 'odanode1' succeeded
 CRS-2677: Stop of 'ora.drivers.acfs' on 'odanode1' succeeded
 CRS-2677: Stop of 'ora.asm' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'odanode1'
 CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.cssd' on 'odanode1'
 CRS-2677: Stop of 'ora.cssd' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.crf' on 'odanode1'
 CRS-2677: Stop of 'ora.crf' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.gipcd' on 'odanode1'
 CRS-2677: Stop of 'ora.gipcd' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.gpnpd' on 'odanode1'
 CRS-2677: Stop of 'ora.gpnpd' on 'odanode1' succeeded
 CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'odanode1' has completed
 CRS-4133: Oracle High Availability Services has been stopped.
 [root@odanode1 2.6.0.0.0]#
--Start CRS on both Nodes
 [root@odanode1 2.6.0.0.0]# /u01/app/11.2.0.3/grid/bin/crsctl start crs
 CRS-4123: Oracle High Availability Services has been started.
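The same crsctl start crs is then run on odanode2 (output not shown here). As an optional sanity check before querying resources, the standard crsctl cluster check should report CRS, CSS and EVM online on both nodes (this extra check was not part of the original run):

 [root@odanode1 2.6.0.0.0]# /u01/app/11.2.0.3/grid/bin/crsctl check cluster -all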
--Check GI status
 [root@odanode1 2.6.0.0.0]# /u01/app/11.2.0.3/grid/bin/crsctl status res -t
 --------------------------------------------------------------------------------
 NAME           TARGET  STATE        SERVER                   STATE_DETAILS
 --------------------------------------------------------------------------------
 Local Resources
 --------------------------------------------------------------------------------
 ora.DATA.dg
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.LISTENER.lsnr
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.RECO.dg
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.REDO.dg
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.asm
 ONLINE  ONLINE       odanode1               Started
 ONLINE  ONLINE       odanode2               Started
 ora.gsd
 OFFLINE OFFLINE      odanode1
 OFFLINE OFFLINE      odanode2
 ora.net1.network
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.ons
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.registry.acfs
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 --------------------------------------------------------------------------------
 Cluster Resources
 --------------------------------------------------------------------------------
 ora.LISTENER_SCAN1.lsnr
 1        ONLINE  ONLINE       odanode1
 ora.LISTENER_SCAN2.lsnr
 1        ONLINE  ONLINE       odanode2
 ora.cvu
 1        ONLINE  ONLINE       odanode2
 ora.efboeur.db
 1        ONLINE  ONLINE       odanode1               Open
 2        ONLINE  ONLINE       odanode2               Open
 ora.efboeur.efbo_applb.efow.com.svc
 1        ONLINE  ONLINE       odanode1
 2        ONLINE  ONLINE       odanode2
 ora.efboeur.efbo_report.efow.com.svc
 1        OFFLINE OFFLINE
 2        OFFLINE OFFLINE
 ora.efcteur.db
 1        ONLINE  ONLINE       odanode1               Open
 2        ONLINE  ONLINE       odanode2               Open
 ora.efcteur.efct_applb.efow.com.svc
 1        ONLINE  ONLINE       odanode1
 2        ONLINE  ONLINE       odanode2
 ora.efcteur.efct_report.efow.com.svc
 1        OFFLINE OFFLINE
 2        OFFLINE OFFLINE
 ora.odanode1.vip
 1        ONLINE  ONLINE       odanode1
 ora.odanode2.vip
 1        ONLINE  ONLINE       odanode2
 ora.efpheur.db
 1        ONLINE  ONLINE       odanode1               Open
 2        ONLINE  ONLINE       odanode2               Open
 ora.efpheur.efph_applb.efow.com.svc
 1        ONLINE  ONLINE       odanode1
 2        ONLINE  ONLINE       odanode2
 ora.efpheur.efph_report.efow.com.svc
 1        OFFLINE OFFLINE
 2        OFFLINE OFFLINE
 ora.oc4j
 1        ONLINE  ONLINE       odanode2
 ora.scan1.vip
 1        ONLINE  ONLINE       odanode1
 ora.scan2.vip
 1        ONLINE  ONLINE       odanode2
 [root@odanode1 2.6.0.0.0]#
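With this many resources, a quick way to list only what is not running is crsctl's attribute filter (an optional extra check, not part of the original session; the *_report services above are intentionally OFFLINE):

 [root@odanode1 2.6.0.0.0]# /u01/app/11.2.0.3/grid/bin/crsctl stat res -w "STATE = OFFLINE" -t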
-------------------------------------------------
 --Apply the Patch to the RDBMS
 -------------------------------------------------
 --Check the RDBMS patch level before applying the PSU
 [root@odanode1 bin]# /opt/oracle/oak/bin/oakcli show databases
 Database Name    Database Type   Database HomeName    Database HomeLocation                        Database Version
 -------------    -------------   -----------------    ---------------------                        ----------------
 efboeur          RAC             OraDb11203_home1     /u01/app/oracle/product/11.2.0.3/dbhome_1    11.2.0.3.4(14275605,14275572)
 efcteur          RAC             OraDb11203_home1     /u01/app/oracle/product/11.2.0.3/dbhome_1    11.2.0.3.4(14275605,14275572)
 efpheur          RAC             OraDb11203_home1     /u01/app/oracle/product/11.2.0.3/dbhome_1    11.2.0.3.4(14275605,14275572)
 [root@odanode1 bin]#
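The patch level can also be cross-checked directly against the home's inventory before patching (an optional check; the OPatch directory below simply lives inside the database home used throughout this post):

 [oracle@odanode1 ~]$ /u01/app/oracle/product/11.2.0.3/dbhome_1/OPatch/opatch lsinventory | grep "Patch Set Update"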
--Stop the EM Cloud Control agent on BOTH Nodes
 [oracle@odanode1 OPatch]$ /u01/app/oracle/product/agent12c/agent_inst/bin/emctl stop agent
 Oracle Enterprise Manager Cloud Control 12c Release 2
 Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
 Stopping agent ..... stopped.
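Optionally confirm the agent is really down before proceeding (same emctl as above):

 [oracle@odanode1 OPatch]$ /u01/app/oracle/product/agent12c/agent_inst/bin/emctl status agent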
--In addition, to avoid issues while patching the RDBMS, check on BOTH nodes for processes still holding libraries from the database home:
 [root@efoda01n1 ~]# /sbin/fuser /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/libclntsh.so.11.1
 /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/libclntsh.so.11.1: 18877m 18911m
[root@efoda01n1 ~]# ps -ef|grep 18877
 oracle   18877 18791  0 10:06 ?        00:00:22 /u01/app/oracle/product/11.2.0.3/dbhome_1/jdk/bin/java -server -Xmx384M -XX:MaxPermSize=400M -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -DORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1 -Doracle.home=/u01/app/oracle/product/11.2.0.3/dbhome_1/oc4j -Doracle.oc4j.localhome=/u01/app/oracle/product/11.2.0.3/dbhome_1/efoda01n1_test/sysman -DEMSTATE=/u01/app/oracle/product/11.2.0.3/dbhome_1/efoda01n1_test -Doracle.j2ee.dont.use.memory.archive=true -Djava.protocol.handler.pkgs=HTTPClient -Doracle.security.jazn.config=/u01/app/oracle/product/11.2.0.3/dbhome_1/oc4j/j2ee/OC4J_DBConsole_efoda01n1_test/config/jazn.xml -Djava.security.policy=/u01/app/oracle/product/11.2.0.3/dbhome_1/oc4j/j2ee/OC4J_DBConsole_efoda01n1_test/config/java2.policy -Djavax.net.ssl.KeyStore=/u01/app/oracle/product/11.2.0.3/dbhome_1/sysman/config/OCMTrustedCerts.txt-Djava.security.properties=/u01/app/oracle/product/11.2.0.3/dbhome_1/oc4j/j2ee/home/config/jazn.security.props -DEMDROOT=/u01/app/oracle/product/11.2.0.3/dbhome_1/efoda01n1_test -Dsysman.md5password=true -Drepapi.oracle.home=/u01/app/oracle/product/11.2.0.3/dbhome_1 -Ddisable.checkForUpdate=true -Doracle.sysman.ccr.ocmSDK.websvc.keystore=/u01/app/oracle/product/11.2.0.3/dbhome_1/jlib/emocmclnt.ks -Dice.pilots.html4.ignoreNonGenericFonts=true -Djava.awt.headless=true -jar /u01/app/oracle/product/11.2.0.3/dbhome_1/oc4j/j2ee/home/oc4j.jar -config /u01/app/oracle/product/11.2.0.3/dbhome_1/oc4j/j2ee/OC4J_DBConsole_efoda01n1_test/config/server.xml
[root@efoda01n1 ~]# kill -9 18877 18911
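As an alternative to the manual ps/kill step above, fuser can kill every process holding the library in one go; -k sends SIGKILL to all of them, so run it only once the databases and agents on that node are down (a variant, not what was run here):

 [root@efoda01n1 ~]# /sbin/fuser -k /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/libclntsh.so.11.1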
--ODA Node 1 ONLY
 [root@odanode1 bin]# ./oakcli update -patch 2.6.0.0.0 --database
Please enter the 'root' user password:
 Please re-enter the 'root' user password:
Please enter the 'oracle' user password:
 Please re-enter the 'oracle' user password:
 INFO: Setting up the SSH
 ..........done
 ...
 ...
..........done
 ...
 SUCCESS: All nodes in /opt/oracle/oak/temp_clunodes.txt are pingable and alive.
 INFO: 2013-05-15 15:10:16: Getting the possible database homes for patching
 ...
 INFO: 2013-05-15 15:10:21: Patching 11.2.0.3 Database homes on node odanode1
Found the following 11.2.0.3 homes possible for patching:
HOME_NAME                      HOME_LOCATION
 ---------                      -------------
 OraDb11203_home1               /u01/app/oracle/product/11.2.0.3/dbhome_1
[Please note that few of the above database homes may be already up-to-date. They will be automatically ignored]
Would you like to patch all the above homes: Y | N ? :Y
 INFO: 2013-05-15 15:15:48: Setting up ssh for the user oracle
 ..........done
 ...
 SUCCESS: All nodes in /opt/oracle/oak/temp_clunodes.txt are pingable and alive.
 INFO: 2013-05-15 15:16:07: Updating the opatch
 INFO: 2013-05-15 15:16:24: Performing the conflict checks
 SUCCESS: 2013-05-15 15:16:42: Conflict checks passed for all the homes
 INFO: 2013-05-15 15:16:42: Checking if the patch is already applied on any of the homes
 INFO: 2013-05-15 15:16:46: No home is already up-to-date
 SUCCESS: 2013-05-15 15:16:52: Successfully stopped the dbconsoles
 SUCCESS: 2013-05-15 15:16:58: Successfully stopped the EM agents
 INFO: 2013-05-15 15:17:03: Applying patch on the homes: /u01/app/oracle/product/11.2.0.3/dbhome_1
 INFO: 2013-05-15 15:17:03: It may take upto 15 mins
 SUCCESS: 2013-05-15 15:21:35: Successfully applied the patch on home: /u01/app/oracle/product/11.2.0.3/dbhome_1
 SUCCESS: 2013-05-15 15:21:35: Successfully started the dbconsoles
 SUCCESS: 2013-05-15 15:21:35: Successfully started the EM Agents
 INFO: 2013-05-15 15:21:37: Patching 11.2.0.3 Database homes on node odanode2
 INFO: 2013-05-15 15:22:11: Running the catbundle.sql
 INFO: 2013-05-15 15:22:18: Running catbundle.sql on the database efboeur
 INFO: 2013-05-15 15:22:26: Running catbundle.sql on the database efcteur
 INFO: 2013-05-15 15:22:35: Running catbundle.sql on the database efpheur
..........done
INFO: DB patching summary on node: odanode1
 SUCCESS: 2013-05-15 15:22:57:  Successfully applied the patch on home /u01/app/oracle/product/11.2.0.3/dbhome_1
INFO: DB patching summary on node: odanode2
 INFO: 2013-05-15 15:22:57:  Homes /u01/app/oracle/product/11.2.0.3/dbhome_1 are already up-to-date
INFO: Setting up the SSH
 ..........done
[root@odanode1 bin]#
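If the Cloud Control agent that was stopped manually earlier is still down (the oakcli run above restarts only the dbconsoles and EM agents it stopped itself), start it again on BOTH nodes with the same emctl used before:

 [oracle@odanode1 OPatch]$ /u01/app/oracle/product/agent12c/agent_inst/bin/emctl start agent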
[root@odanode1 2.6.0.0.0]# /opt/oracle/oak/bin/oakcli show databases
 Database Name    Database Type   Database HomeName    Database HomeLocation                        Database Version
 -------------    -------------   -----------------    ---------------------                        ----------------
 efboeur          RAC             OraDb11203_home1     /u01/app/oracle/product/11.2.0.3/dbhome_1    11.2.0.3.6(16056266,16083653)
 efcteur          RAC             OraDb11203_home1     /u01/app/oracle/product/11.2.0.3/dbhome_1    11.2.0.3.6(16056266,16083653)
 efpheur          RAC             OraDb11203_home1     /u01/app/oracle/product/11.2.0.3/dbhome_1    11.2.0.3.6(16056266,16083653)
 [root@odanode1 bin]#
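Inside each database, the PSU registration performed by catbundle.sql can be verified as well (a minimal check, run as SYSDBA; the query below is an optional addition to the original procedure):

 SQL> select action_time, action, version, comments from dba_registry_history order by action_time;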
[oracle@odanode2 OPatch]$ ./opatch lsinv
 Oracle Interim Patch Installer version 11.2.0.3.4
 Copyright (c) 2012, Oracle Corporation.  All rights reserved.
Oracle Home       : /u01/app/oracle/product/11.2.0.3/dbhome_1
 Central Inventory : /u01/app/oraInventory
 from           : /u01/app/oracle/product/11.2.0.3/dbhome_1/oraInst.loc
 OPatch version    : 11.2.0.3.4
 OUI version       : 11.2.0.3.0
 Log file location : /u01/app/oracle/product/11.2.0.3/dbhome_1/cfgtoollogs/opatch/opatch2013-05-15_15-49-48PM_1.log
Lsinventory Output file location : /u01/app/oracle/product/11.2.0.3/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2013-05-15_15-49-48PM.txt
--------------------------------------------------------------------------------
 Installed Top-level Products (1):
Oracle Database 11g                                                  11.2.0.3.0
 There are 1 products installed in this Oracle Home.
Interim patches (2) :
Patch  16056266     : applied on Wed May 15 15:00:15 CEST 2013
 Unique Patch ID:  15962803
 Patch description:  "Database Patch Set Update : 11.2.0.3.6 (16056266)"
 Created on 12 Mar 2013, 02:14:47 hrs PST8PDT
 Sub-patch  14727310; "Database Patch Set Update : 11.2.0.3.5 (14727310)"
 Sub-patch  14275605; "Database Patch Set Update : 11.2.0.3.4 (14275605)"
 Sub-patch  13923374; "Database Patch Set Update : 11.2.0.3.3 (13923374)"
 Sub-patch  13696216; "Database Patch Set Update : 11.2.0.3.2 (13696216)"
 Sub-patch  13343438; "Database Patch Set Update : 11.2.0.3.1 (13343438)"
 Bugs fixed:
 13566938, 13593999, 10350832, 14138130, 12919564, 13561951, 13624984
 13588248, 13080778, 13914613, 13804294, 14258925, 12873183, 13645875
 14472647, 12880299, 14664355, 14409183, 12998795, 14469008, 13719081
 13492735, 13496884, 12857027, 14263036, 14263073, 13732226, 13742433
 16368108, 16314469, 12905058, 13742434, 12849688, 12950644, 13742435
 13464002, 13534412, 12879027, 13958038, 14613900, 12585543, 12535346
 12588744, 11877623, 13786142, 12847466, 13649031, 13981051, 12582664
 12797765, 14262913, 12923168, 13384182, 13612575, 13466801, 13484963
 14207163, 11063191, 13772618, 13070939, 12797420, 13041324, 16314467
 16314468, 12976376, 11708510, 13680405, 14589750, 13026410, 13742437
 13737746, 14644185, 13742438, 13326736, 13596521, 13001379, 16344871
 13099577, 9873405, 14275605, 13742436, 9858539, 14841812, 11715084
 16231699, 14040433, 12662040, 9703627, 12617123, 12845115, 12764337
 13354082, 14459552, 13397104, 13913630, 12964067, 12983611, 13550185
 13810393, 12780983, 12583611, 14546575, 13476583, 15862016, 11840910
 13903046, 15862017, 13572659, 16294378, 13718279, 14088346, 13657605
 13448206, 16314466, 14480676, 13419660, 13632717, 14063281, 14110275
 13430938, 13467683, 13420224, 13812031, 14548763, 16299830, 12646784
 13616375, 14035825, 12861463, 12834027, 15862021, 13632809, 13377816
 13036331, 14727310, 13685544, 15862018, 13499128, 16175381, 13584130
 12829021, 15862019, 12794305, 14546673, 12791981, 13787482, 13503598
 10133521, 12718090, 13399435, 14023636, 13860201, 12401111, 13257247
 13362079, 14176879, 12917230, 13923374, 14220725, 14480675, 13524899
 13559697, 9706792, 14480674, 13916709, 13098318, 13773133, 14076523
 13340388, 13366202, 13528551, 12894807, 13454210, 13343438, 12748240
 14205448, 13385346, 15853081, 14273397, 12971775, 13582702, 10242202
 13035804, 13544396, 16382353, 8547978, 14226599, 14062795, 13035360
 12693626, 13332439, 14038787, 14062796, 12913474, 14841409, 14390252
 16314470, 13370330, 13059165, 14062797, 14062794, 12959852, 13358781
 12345082, 12960925, 9659614, 13699124, 14546638, 13936424, 13338048
 12938841, 12658411, 12620823, 12656535, 14062793, 12678920, 13038684
 14062792, 13807411, 13250244, 12594032, 15862022, 9761357, 12612118
 13742464, 14052474, 13911821, 13457582, 13527323, 15862020, 13910420
 13502183, 12780098, 13705338, 13696216, 14841558, 10263668, 15862023
 16056266, 15862024, 13554409, 13645917, 13103913, 13011409, 14063280
Patch  16315641     : applied on Wed May 15 13:58:54 CEST 2013
 Unique Patch ID:  15966967
 Patch description:  "Grid Infrastructure Patch Set Update : 11.2.0.3.6 (16083653)"
 Created on 1 Apr 2013, 03:41:20 hrs PST8PDT
 Bugs fixed:
 16315641, 15876003, 14275572, 13919095, 13696251, 13348650, 12659561
 14305980, 14277586, 13987807, 14625969, 13825231, 12794268, 13000491
 13498267, 11675721, 14082976, 12771830, 14515980, 14085018, 13943175
 14102704, 14171552, 12594616, 13879428, 12897902, 12726222, 12829429
 13079948, 13090686, 12995950, 13251796, 13582411, 12990582, 13857364
 13082238, 12947871, 13256955, 13037709, 14535011, 12878750, 14048512
 11772838, 13058611, 13001955, 13440962, 13727853, 13425727, 12885323
 12870400, 14212634, 14407395, 13332363, 13430626, 13811209, 12709476
 14168708, 14096821, 14626717, 13460353, 13694885, 12857064, 12899169
 13111013, 12558569, 13323698, 10260842, 13085732, 10317921, 13869978
 12914824, 13789135, 12730342, 12950823, 13355963, 13531373, 14268365
 13776758, 12720728, 13620816, 13023609, 13024624, 13039908, 13036424
 13938166, 13011520, 13569812, 12758736, 13001901, 13077654, 13430715
 13550689, 13806545, 13634583, 14271305, 12538907, 13947200, 12996428
 13066371, 13483672, 12897651, 13540563, 12896850, 13241779, 12728585
 12876314, 12925041, 12650672, 12398492, 12848480, 13652088, 16307750
 12917897, 12975811, 13653178, 13371153, 14800989, 10114953, 14001941
 11836951, 14179376, 12965049, 14773530, 12765467, 13339443, 13965075
 16210540, 14307855, 12784559, 14242977, 13955385, 12704789, 13745317
 13074261, 12971251, 13993634, 13523527, 13719731, 13396284, 12639013
 12867511, 12959140, 14748254, 12829917, 12349553, 12849377, 12934171
 13843080, 14496536, 13924431, 12680491, 13334158, 10418841, 12832204
 13838047, 13002015, 12791719
Rac system comprising of multiple nodes
 Local node = odanode2
 Remote node = odanode1
--------------------------------------------------------------------------------
OPatch succeeded.
 [oracle@odanode2 OPatch]$
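The same kind of inventory check against the Grid Infrastructure home confirms the GI PSU (an optional check; the OPatch path below assumes it sits under the grid home used earlier in this post):

 [grid@odanode2 ~]$ /u01/app/11.2.0.3/grid/OPatch/opatch lsinventory | grep "16083653"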
-------------------------------------------------
 --ODA Software version after patching
 -------------------------------------------------
 [root@odanode1 bin]# /opt/oracle/oak/bin/oakcli show version -detail
 Reading the metadata. It takes a while...
 System Version   Component Name   Installed Version               Supported Version
 --------------   --------------   -----------------               -----------------
 2.6.0.0.0
                  Controller       11.05.02.00                     Up-to-date
                  Expander         0342                            Up-to-date
                  SSD_SHARED       E12B                            Up-to-date
                  HDD_LOCAL        5G08                            Up-to-date
                  HDD_SHARED       A700                            Up-to-date
                  ILOM             3.0.16.22.b r78329              Up-to-date
                  BIOS             12010310                        Up-to-date
                  IPMI             1.8.10.5                        Up-to-date
                  HMP              2.2.6.1                         Up-to-date
                  OAK              2.6.0.0.0                       Up-to-date
                  OEL              5.8                             Up-to-date
                  TFA              2.5.1.4                         Up-to-date
                  GI_HOME          11.2.0.3.6(16056266,16083653)   Up-to-date
                  DB_HOME          11.2.0.3.6(16056266,16083653)   Up-to-date
                  ASR              Unknown                         4.4
 [root@odanode1 bin]#
[grid@odanode2 ~]$ crsctl stat res -t
 --------------------------------------------------------------------------------
 NAME           TARGET  STATE        SERVER                   STATE_DETAILS
 --------------------------------------------------------------------------------
 Local Resources
 --------------------------------------------------------------------------------
 ora.DATA.dg
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.LISTENER.lsnr
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.RECO.dg
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.REDO.dg
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.asm
 ONLINE  ONLINE       odanode1               Started
 ONLINE  ONLINE       odanode2               Started
 ora.gsd
 OFFLINE OFFLINE      odanode1
 OFFLINE OFFLINE      odanode2
 ora.net1.network
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.ons
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.registry.acfs
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 --------------------------------------------------------------------------------
 Cluster Resources
 --------------------------------------------------------------------------------
 ora.LISTENER_SCAN1.lsnr
 1        ONLINE  ONLINE       odanode2
 ora.LISTENER_SCAN2.lsnr
 1        ONLINE  ONLINE       odanode1
 ora.cvu
 1        ONLINE  ONLINE       odanode1
 ora.efboeur.db
 1        ONLINE  ONLINE       odanode1               Open
 2        ONLINE  ONLINE       odanode2               Open
 ora.efboeur.efbo_applb.efow.com.svc
 1        ONLINE  ONLINE       odanode2
 2        ONLINE  ONLINE       odanode1
 ora.efboeur.efbo_report.efow.com.svc
 1        OFFLINE OFFLINE
 2        OFFLINE OFFLINE
 ora.efcteur.db
 1        ONLINE  ONLINE       odanode1               Open
 2        ONLINE  ONLINE       odanode2               Open
 ora.efcteur.efct_applb.efow.com.svc
 1        ONLINE  ONLINE       odanode1
 2        ONLINE  ONLINE       odanode2
 ora.efcteur.efct_report.efow.com.svc
 1        OFFLINE OFFLINE
 2        OFFLINE OFFLINE
 ora.odanode1.vip
 1        ONLINE  ONLINE       odanode1
 ora.odanode2.vip
 1        ONLINE  ONLINE       odanode2
 ora.efpheur.db
 1        ONLINE  ONLINE       odanode1               Open
 2        ONLINE  ONLINE       odanode2               Open
 ora.efpheur.efph_applb.efow.com.svc
 1        ONLINE  ONLINE       odanode2
 2        ONLINE  ONLINE       odanode1
 ora.efpheur.efph_report.efow.com.svc
 1        OFFLINE OFFLINE
 2        OFFLINE OFFLINE
 ora.oc4j
 1        ONLINE  ONLINE       odanode1
 ora.scan1.vip
 1        ONLINE  ONLINE       odanode2
 ora.scan2.vip
 1        ONLINE  ONLINE       odanode1
 [grid@odanode2 ~]$
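After the rolling restarts, a few singleton resources (the SCAN listeners, cvu, oc4j) ended up on the opposite node compared to the pre-patch listing. That is harmless, but if the original placement is preferred they can be moved back individually, for example the first SCAN listener (11.2 srvctl syntax; an optional cleanup step, not part of the original run):

 [grid@odanode2 ~]$ srvctl relocate scan_listener -i 1 -n odanode1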