Grid Management DB filling up ASM disk space

I recently discovered on Oracle Grid Infrastructure 12cR2 that the ASM disk group hosting the Management DB (-MGMTDB) was filling up very quickly.

This is due to a bug in the oclumon data purge procedure.
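Before choosing an option, it can be useful to check how the CHM repository is configured; a quick, hedged check using oclumon (run as the grid user, sample output omitted):

$ oclumon manage -get reppath
$ oclumon manage -get repsize
-- repsize is expressed in seconds of retained data; compare it with the
-- actual size of the CHM segments inside the MGMTDB (see Option 2 below).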

To fix the problem, two possibilities are available:

  1. Recreate the Management DB
  2. Manually truncate the tables that are not being purged and shrink the tablespace

 

Both options are described below.

 

Option 1 – Recreate the Management DB

As root user on each cluster node:

# /u01/app/12.2.0.1/grid/bin/crsctl stop res ora.crf -init
# /u01/app/12.2.0.1/grid/bin/crsctl modify res ora.crf -attr ENABLED=0 -init

 

As the grid user, from the node hosting the Management Database instance, run the following commands:

$ /u01/app/12.2.0.1/grid/bin/srvctl status mgmtdb
$ /u01/app/12.2.0.1/grid/bin/dbca -silent -deleteDatabase -sourceDB -MGMTDB
Connecting to database
4% complete
9% complete
14% complete
19% complete
23% complete
28% complete
47% complete
Updating network configuration files
48% complete
52% complete
Deleting instance and datafiles
76% complete
100% complete

 

Recreate the MGMTDB:

$ /u01/app/12.2.0.1/grid/bin/dbca -silent -createDatabase -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc \
-sid -MGMTDB \
-gdbName _mgmtdb \
-storageType ASM \
-diskGroupName GIMR \
-datafileJarLocation <GI HOME>/assistants/dbca/templates \
-characterset AL32UTF8 \
-autoGeneratePassword \
-skipUserTemplateCheck

 

Create the GIMR pluggable database:

$ /u01/app/12.2.0.1/grid/bin/mgmtca -local
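Once the Management DB has been recreated, re-enable and restart the ora.crf resource disabled earlier; as root on each cluster node the commands mirror the ones used to stop it:

# /u01/app/12.2.0.1/grid/bin/crsctl modify res ora.crf -attr ENABLED=1 -init
# /u01/app/12.2.0.1/grid/bin/crsctl start res ora.crf -init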

 


 

Option 2 – Manually truncate the tables

 

As root user stop and disable ora.crf resource on each cluster node:

# /u01/app/12.2.0.1/grid/bin/crsctl stop res ora.crf -init
# /u01/app/12.2.0.1/grid/bin/crsctl modify res ora.crf -attr ENABLED=0 -init

 

Connect to MGMTDB and identify the segments to truncate:

export ORACLE_SID=-MGMTDB
$ORACLE_HOME/bin/sqlplus / as sysdba
SQL> select pdb_name from dba_pdbs where pdb_name!='PDB$SEED';

SQL> alter session set container=GIMR_DSCREP_10;

Session altered.

SQL> col obj format a50
SQL> select owner||'.'||SEGMENT_NAME obj, BYTES from dba_segments where owner='CHM' order by 2 asc;

 

These two tables are likely to be much larger than the rest:

  • CHM.CHMOS_PROCESS_INT_TBL
  • CHM.CHMOS_DEVICE_INT_TBL

Truncate the tables:

SQL> truncate table CHM.CHMOS_PROCESS_INT_TBL;
SQL> truncate table CHM.CHMOS_DEVICE_INT_TBL;

 

Then, if needed, shrink the tablespace and the job is done.
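As a sketch, assuming the CHM segments live in the default SYSMGMTDATA tablespace (verify the tablespace and datafile names in your own environment), the datafile can be resized after the truncate, and ora.crf re-enabled afterwards:

SQL> select tablespace_name, file_name, round(bytes/1024/1024) MB from dba_data_files;
SQL> alter database datafile '<file_name from the query above>' resize 2048M;

-- As root, on each cluster node:
# /u01/app/12.2.0.1/grid/bin/crsctl modify res ora.crf -attr ENABLED=1 -init
# /u01/app/12.2.0.1/grid/bin/crsctl start res ora.crf -init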

 


 

Patching Exadata Machine

################################################################
##    EXADATA MACHINE  INFRASTRUCTURE PATCHING of 1/8 RACK     ##
################################################################

This post describes, step by step, how to patch the infrastructure components of an Exadata Machine.

-----------------------------------------------------------
-- Cell Storage Pre-requisites
-----------------------------------------------------------

--Stop CRS using dcli
[root@ch01db01 oracle]# dcli -g /home/oracle/dbhosts -l root '/u01/app/12.1.0.2/grid/bin/crsctl stop crs'
 [root@ch01db01 oracle]# dcli -g /home/oracle/dbhosts -l root '/u01/app/12.1.0.2/grid/bin/crsctl stat res -t -init'
ch01db01: CRS-4639: Could not contact Oracle High Availability Services
ch01db01: CRS-4000: Command Status failed, or completed with errors.
ch01db02: CRS-4639: Could not contact Oracle High Availability Services
ch01db02: CRS-4000: Command Status failed, or completed with errors.
--Stop All Cell Storage Services
 [root@ch01db01 oracle]# dcli -g /home/oracle/cellhosts_ALL -l root "cellcli -e alter cell shutdown services all"
ch01celadm01:
ch01celadm01: Stopping the RS, CELLSRV, and MS services...
 ch01celadm01: The SHUTDOWN of services was successful.
 ch01celadm02:
 ch01celadm02: Stopping the RS, CELLSRV, and MS services...
 ch01celadm02: The SHUTDOWN of services was successful.
 ch01celadm03:
 ch01celadm03: Stopping the RS, CELLSRV, and MS services...
 ch01celadm03: The SHUTDOWN of services was successful.

[root@ch01db01 oracle]#
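Before patching it is worth double-checking that all cell services are really down; a possible verification (cellcli attribute names assumed, adapt if needed):

[root@ch01db01 oracle]# dcli -g /home/oracle/cellhosts_ALL -l root "cellcli -e list cell attributes name,rsStatus,msStatus,cellsrvStatus"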

 

-----------------------------------------------------------
-- Cell Storage Patching
-----------------------------------------------------------

[root@ch01db01 patch_12.1.2.1.0.141206.1]# ./patchmgr -cells /home/oracle/cellhosts -reset_force
2016-02-05 11:17:07 +0100 :DONE: reset_force
[root@ch01db01 patch_12.1.2.1.0.141206.1]# ./patchmgr -cells /home/oracle/cellhosts -cleanup
2016-02-05 11:19:19 +0100        :Working: DO: Cleanup ...
2016-02-05 11:19:20 +0100        :SUCCESS: DONE: Cleanup
[root@ch01db01 patch_12.1.2.1.0.141206.1]# ./patchmgr -cells /home/oracle/cellhosts -patch_check_prereq
2016-02-05 11:20:56 +0100        :Working: DO: Check cells have ssh equivalence for root user. Up to 10 seconds per cell ...
 2016-02-05 11:20:57 +0100        :SUCCESS: DONE: Check cells have ssh equivalence for root user.
 2016-02-05 11:20:59 +0100        :Working: DO: Initialize files, check space and state of cell services. Up to 1 minute ...
 2016-02-05 11:21:01 +0100        :SUCCESS: DONE: Initialize files, check space and state of cell services.
 2016-02-05 11:22:19 +0100        :SUCCESS: DONE: Initialize files, check space and state of cell services.
 2016-02-05 11:22:19 +0100        :Working: DO: Copy, extract prerequisite check archive to cells. If required start md11 mismatched partner size correction. Up to 40 minutes ...
 2016-02-05 11:22:33 +0100 Wait correction of degraded md11 due to md partner size mismatch. Up to 30 minutes.
2016-02-05 11:22:34 +0100        :SUCCESS: DONE: Copy, extract prerequisite check archive to cells. If required start md11 mismatched partner size correction.
 2016-02-05 11:22:34 +0100        :Working: DO: Check prerequisites on all cells. Up to 2 minutes ...
 2016-02-05 11:23:38 +0100        :SUCCESS: DONE: Check prerequisites on all cells.
 2016-02-05 11:23:38 +0100        :Working: DO: Execute plugin check for Patch Check Prereq ...
 2016-02-05 11:23:38 +0100        :INFO: Patchmgr plugin start: Prereq check for exposure to bug 17854520 v1.3. Details in logfile /u02/p17885582_121210_Linux-x86-64/patch_12.1.2.1.0.141206.1/patchmgr.stdout.
 2016-02-05 11:23:38 +0100        :SUCCESS: No exposure to bug 17854520 with non-rolling patching
 2016-02-05 11:23:39 +0100        :SUCCESS: DONE: Execute plugin check for Patch Check Prereq.
[root@ch01db01 patch_12.1.2.1.0.141206.1]#
 [root@ch01db01 patch_12.1.2.1.0.141206.1]# ./patchmgr -cells /home/oracle/cellhosts -patch
********************************************************************************
 NOTE Cells will reboot during the patch or rollback process.
 NOTE For non-rolling patch or rollback, ensure all ASM instances using
 NOTE the cells are shut down for the duration of the patch or rollback.
 NOTE For rolling patch or rollback, ensure all ASM instances using
 NOTE the cells are up for the duration of the patch or rollback.
WARNING Do not start more than one instance of patchmgr.
 WARNING Do not interrupt the patchmgr session.
 WARNING Do not alter state of ASM instances during patch or rollback.
 WARNING Do not resize the screen. It may disturb the screen layout.
 WARNING Do not reboot cells or alter cell services during patch or rollback.
 WARNING Do not open log files in editor in write mode or try to alter them.
NOTE All time estimates are approximate.
 NOTE You may interrupt this patchmgr run in next 60 seconds with CONTROL-c.
********************************************************************************
2016-02-05 11:27:08 +0100        :Working: DO: Check cells have ssh equivalence for root user. Up to 10 seconds per cell ...
 2016-02-05 11:27:09 +0100        :SUCCESS: DONE: Check cells have ssh equivalence for root user.
 2016-02-05 11:27:12 +0100        :Working: DO: Initialize files, check space and state of cell services. Up to 1 minute ...
 2016-02-05 11:27:32 +0100        :SUCCESS: DONE: Initialize files, check space and state of cell services.
 2016-02-05 11:27:32 +0100        :Working: DO: Copy, extract prerequisite check archive to cells. If required start md11 mismatched partner size correction. Up to 40 minutes ...
 2016-02-05 11:27:45 +0100 Wait correction of degraded md11 due to md partner size mismatch. Up to 30 minutes.
2016-02-05 11:27:46 +0100        :SUCCESS: DONE: Copy, extract prerequisite check archive to cells. If required start md11 mismatched partner size correction.
 2016-02-05 11:27:46 +0100        :Working: DO: Check prerequisites on all cells. Up to 2 minutes ...
 2016-02-05 11:28:50 +0100        :SUCCESS: DONE: Check prerequisites on all cells.
 2016-02-05 11:28:50 +0100        :Working: DO: Copy the patch to all cells. Up to 3 minutes ...
 2016-02-05 11:29:22 +0100        :SUCCESS: DONE: Copy the patch to all cells.
 2016-02-05 11:29:24 +0100        :Working: DO: Execute plugin check for Patch Check Prereq ...
 2016-02-05 11:29:24 +0100        :INFO: Patchmgr plugin start: Prereq check for exposure to bug 17854520 v1.3. Details in logfile /u02/p17885582_121210_Linux-x86-64/patch_12.1.2.1.0.141206.1/patchmgr.stdout.
 2016-02-05 11:29:24 +0100        :SUCCESS: No exposure to bug 17854520 with non-rolling patching
 2016-02-05 11:29:25 +0100        :SUCCESS: DONE: Execute plugin check for Patch Check Prereq.
 2016-02-05 11:29:25 +0100 1 of 5 :Working: DO: Initiate patch on cells. Cells will remain up. Up to 5 minutes ...
 2016-02-05 11:29:37 +0100 1 of 5 :SUCCESS: DONE: Initiate patch on cells.
 2016-02-05 11:29:37 +0100 2 of 5 :Working: DO: Waiting to finish pre-reboot patch actions. Cells will remain up. Up to 45 minutes ...
 2016-02-05 11:30:37 +0100 Wait for patch pre-reboot procedures
2016-02-05 11:44:56 +0100 2 of 5 :SUCCESS: DONE: Waiting to finish pre-reboot patch actions.
 2016-02-05 11:44:56 +0100        :Working: DO: Execute plugin check for Patching ...
 2016-02-05 11:44:56 +0100        :SUCCESS: DONE: Execute plugin check for Patching.
 2016-02-05 11:44:56 +0100 3 of 5 :Working: DO: Finalize patch on cells. Cells will reboot. Up to 5 minutes ...
 2016-02-05 11:45:17 +0100 3 of 5 :SUCCESS: DONE: Finalize patch on cells.
 2016-02-05 11:45:17 +0100 4 of 5 :Working: DO: Wait for cells to reboot and come online. Up to 120 minutes ...
 2016-02-05 11:46:17 +0100 Wait for patch finalization and reboot
2016-02-05 13:09:24 +0100 4 of 5 :SUCCESS: DONE: Wait for cells to reboot and come online.
 2016-02-05 13:09:24 +0100 5 of 5 :Working: DO: Check the state of patch on cells. Up to 5 minutes ...
 2016-02-05 13:10:09 +0100 5 of 5 :SUCCESS: DONE: Check the state of patch on cells.
 2016-02-05 13:10:09 +0100        :Working: DO: Execute plugin check for Post Patch ...
 2016-02-05 13:10:10 +0100        :SUCCESS: DONE: Execute plugin check for Post Patch.
[root@ch01db01 patch_12.1.2.1.0.141206.1]#
[root@ch01db01 patch_12.1.2.1.0.141206.1]# dcli -c ch01celadm01 -l root 'imageinfo'
 ch01celadm01:
 ch01celadm01: Kernel version: 2.6.39-400.243.1.el6uek.x86_64 #1 SMP Wed Nov 26 09:15:35 PST 2014 x86_64
 ch01celadm01: Cell version: OSS_12.1.2.1.0_LINUX.X64_141206.1
 ch01celadm01: Cell rpm version: cell-12.1.2.1.0_LINUX.X64_141206.1-1.x86_64
 ch01celadm01:
 ch01celadm01: Active image version: 12.1.2.1.0.141206.1
 ch01celadm01: Active image activated: 2016-02-05 20:14:52 +0100
 ch01celadm01: Active image status: success
 ch01celadm01: Active system partition on device: /dev/md5
 ch01celadm01: Active software partition on device: /dev/md7
 ch01celadm01:
 ch01celadm01: Cell boot usb partition: /dev/sdac1
 ch01celadm01: Cell boot usb version: 12.1.2.1.0.141206.1
 ch01celadm01:
 ch01celadm01: Inactive image version: 12.1.1.1.1.140712
 ch01celadm01: Inactive image activated: 2014-08-06 11:50:09 +0200
 ch01celadm01: Inactive image status: success
 ch01celadm01: Inactive system partition on device: /dev/md6
 ch01celadm01: Inactive software partition on device: /dev/md8
 ch01celadm01:
 ch01celadm01: Inactive marker for the rollback: /boot/I_am_hd_boot.inactive
 ch01celadm01: Inactive grub config for the rollback: /boot/grub/grub.conf.inactive
 ch01celadm01: Inactive kernel version for the rollback: 2.6.39-400.128.17.el5uek
 ch01celadm01: Rollback to the inactive partitions: Possible
 [root@ch01db01 patch_12.1.2.1.0.141206.1]#
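The same verification can be run against all cells in one shot; a short sketch using dcli (imageinfo flags assumed):

[root@ch01db01 patch_12.1.2.1.0.141206.1]# dcli -g /home/oracle/cellhosts_ALL -l root 'imageinfo -ver'
[root@ch01db01 patch_12.1.2.1.0.141206.1]# dcli -g /home/oracle/cellhosts_ALL -l root 'imageinfo -status'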

-----------------------------------------------------------
-- DB Server Patching
-----------------------------------------------------------

[root@ch01db02 dbnodeupdate]# ./dbnodeupdate.sh -h

Usage: dbnodeupdate.sh [ -u | -r | -c ] -l <baseurl|zip file> [-p] <phase> [-n] [-s] [-q] [-v] [-t] [-a] <alert.sh> [-b] [-m] | [-V] | [-h]
-u                       Upgrade
 -r                       Rollback
 -c                       Complete post actions (verify image status, cleanup, apply fixes, relink all homes, enable GI to start/start all domU's)
 -l <baseurl|zip file>    Baseurl (http or zipped iso file for the repository)
 -s                       Shutdown stack (domU's for VM) before upgrading/rolling back
 -p                       Bootstrap phase (1 or 2) only to be used when instructed by dbnodeupdate.sh
 -q                       Quiet mode (no prompting) only be used in combination with -t
 -n                       No backup will be created (Option disabled for systems being updated from Oracle Linux 5 to Oracle Linux 6)
 -t                       'to release' - used when in quiet mode or used when updating to one-offs/releases via 'latest' channel (requires 11.2.3.2.1)
 -v                       Verify prereqs only. Only to be used with -u and -l option
 -b                       Perform backup only
 -a <alert.sh>            Full path to shell script used for alert trapping
 -m                       Install / update-to exadata-sun/hp-computenode-minimum only (11.2.3.3.0 and later)
 -i                       Ignore /etc/oratab - relinking will be disabled. Only possible in combination with -c.
 -V                       Print version
 -h                       Print usage
For upgrading from releases 11.2.2.4.2 and later:
 Example using iso  : ./dbnodeupdate.sh -u -l /u01/p16432033_112321_Linux-x86-64.zip
 Example using http : ./dbnodeupdate.sh -u -l http://my-yum-repo.my-domain.com/yum/unknown/EXADATA/dbserver/11.2.3.3.0/base/x86_64/
 Example: ./dbnodeupdate.sh -u -l http://my-yum-repo.my-domain.com/yum/unknown/EXADATA/dbserver/11.2.3.2.1/base/x86_64/
 Example: ./dbnodeupdate.sh -u -l http://my-yum-repo.my-domain.com/yum/unknown/EXADATA/dbserver/11.2.3.3.0/base/x86_64/
For upgrading from releases 11.2.2.4.2 and later in quiet mode:
 Example: ./dbnodeupdate.sh -u -l /u01/p16432033_112321_Linux-x86-64.zip -q -t 11.2.3.2.1.130302
For completion steps:
 Example: ./dbnodeupdate.sh -c
For rollback:
 Example: ./dbnodeupdate.sh -r
For pre-req checks only:
 Example using iso  : ./dbnodeupdate.sh -u -l /u01/p16432033_112321_Linux-x86-64.zip -v
 Example using http : ./dbnodeupdate.sh -u -l http://my-yum-repo.my-domain.com/yum/unknown/EXADATA/dbserver/11.2.3.3.0/base/x86_64/ -v
For backup only:
 Example: ./dbnodeupdate.sh -u -l /u01/p16432033_112321_Linux-x86-64.zip -b
See MOS 1553103.1 for more examples
[root@ch01db02 dbnodeupdate]#

-----------------------------------------------------------
-- DB Server Patching Verification
-----------------------------------------------------------

[root@ch01db02 dbnodeupdate]# ./dbnodeupdate.sh -u -l /u01/exapatch/p20170913_121210_Linux-x86-64/p20170913_121210_Linux-x86-64.zip -v
##########################################################################################################################
 #                                                                                                                        #
 # Guidelines for using dbnodeupdate.sh (rel. 4.18):                                                                      #
 #                                                                                                                        #
 # - Prerequisites for usage:                                                                                             #
 #         1. Refer to dbnodeupdate.sh options. See MOS 1553103.1                                                         #
 #         2. Use the latest release of dbnodeupdate.sh. See patch 16486998                                               #
 #         3. Run the prereq check with the '-v' option.                                                                  #
 #                                                                                                                        #
 #   I.e.:  ./dbnodeupdate.sh -u -l /u01/my-iso-repo.zip -v                                                               #
 #          ./dbnodeupdate.sh -u -l http://my-yum-repo -v                                                                 #
 #                                                                                                                        #
 # - Prerequisite dependency check failures can happen due to customization:                                              #
 #     - The prereq check detects dependency issues that need to be addressed prior to running a successful update.       #
 #     - Customized rpm packages may fail the built-in dependency check and system updates cannot proceed until resolved. #
 #                                                                                                                        #
 #   When upgrading from releases later than 11.2.2.4.2 to releases before 11.2.3.3.0:                                    #
 #      - Conflicting packages should be removed before proceeding the update.                                            #
 #                                                                                                                        #
 #   When upgrading to releases 11.2.3.3.0 or later:                                                                      #
 #      - When the 'exact' package dependency check fails 'minimum' package dependency check will be tried.               #
 #      - When the 'minimum' package dependency check also fails,                                                         #
 #        the conflicting packages should be removed before proceeding.                                                   #
 #                                                                                                                        #
 # - As part of the prereq checks and as part of the update, a number of rpms will be removed.                            #
 #   This removal is required to preserve Exadata functioning. This should not be confused with obsolete packages.        #
 #      - See /var/log/cellos/packages_to_be_removed.txt for details on what packages will be removed.                    #
 #                                                                                                                        #
 # - In case of any problem when filing an SR, upload the following:                                                      #
 #      - /var/log/cellos/dbnodeupdate.log                                                                                #
 #      - /var/log/cellos/dbnodeupdate.<runid>.diag                                                                       #
 #      - where <runid> is the unique number of the failing run.                                                          #
 #                                                                                                                        #
 ##########################################################################################################################
Continue ? [y/n]
 y
(*) 2016-02-05 17:06:43: Unzipping helpers (/u01/exapatch/dbnodeupdate/dbupdate-helpers.zip) to /opt/oracle.SupportTools/dbnodeupdate_helpers
 (*) 2016-02-05 17:06:43: Initializing logfile /var/log/cellos/dbnodeupdate.log
 (*) 2016-02-05 17:06:44: Collecting system configuration settings. This may take a while...
 (*) 2016-02-05 17:07:10: Validating system settings for known issues and best practices. This may take a while...
 (*) 2016-02-05 17:07:10: Checking free space in /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170641
 (*) 2016-02-05 17:07:10: Unzipping /u01/exapatch/p20170913_121210_Linux-x86-64/p20170913_121210_Linux-x86-64.zip to /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170641, this may take a while
 (*) 2016-02-05 17:07:23: Original /etc/yum.conf moved to /etc/yum.conf.050215170641, generating new yum.conf
 (*) 2016-02-05 17:07:23: Generating Exadata repository file /etc/yum.repos.d/Exadata-computenode.repo
 (*) 2016-02-05 17:07:56: Validating the specified source location.
 (*) 2016-02-05 17:07:57: Cleaning up the yum cache.

---------------------------------------------------------------------------------------------------------------------
Running in prereq check mode
---------------------------------------------------------------------------------------------------------------------

Active Image version   : 12.1.1.1.1.140712
 Active Kernel version  : 2.6.39-400.128.17.el5uek
 Active LVM Name        : /dev/mapper/VGExaDb-LVDbSys1
 Inactive Image version : 12.1.1.1.0.131219
 Inactive LVM Name      : /dev/mapper/VGExaDb-LVDbSys2
 Current user id        : root
 Action                 : upgrade
 Upgrading to           : 12.1.2.1.0.141206.1 - Oracle Linux 5->6 upgrade
 Baseurl                : file:///var/www/html/yum/unknown/EXADATA/dbserver/050215170641/x86_64/ (iso)
 Iso file               : /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170641/exadata_ol6_base_repo_141206.1.iso
 Create a backup        : Yes (backup at update mandatory when updating from OL5 to OL6)
 Shutdown stack         : No (Currently stack is down)
 RPM exclusion list     : Function not available for OL5->OL6 upgrades
 RPM obsolete list      : Function not available for OL5->OL6 upgrades
 Exact dependencies     : Function not available for OL5->OL6 upgrades
 Minimum dependencies   : Function not available for OL5->OL6 upgrades
 Logfile                : /var/log/cellos/dbnodeupdate.log (runid: 050215170641)
 Diagfile               : /var/log/cellos/dbnodeupdate.050215170641.diag
 Server model           : SUN FIRE X4170 M3
 dbnodeupdate.sh rel.   : 4.18 (always check MOS 1553103.1 for the latest release of dbnodeupdate.sh)
 Note                   : After upgrading and rebooting run './dbnodeupdate.sh -c' to finish post steps.
The following known issues will be checked for and automatically corrected by dbnodeupdate.sh:
 (*) - Issue - Fix for CVE-2014-9295 AND ELSA-2014-1974
The following known issues will be checked for but require manual follow-up:
 (*) - Issue - Yum rolling update requires fix for 11768055 when Grid Infrastructure is below 11.2.0.2 BP12
 (*) - Issue - Exafusion silently enabled for database 12.1.0.2.0 with kernel 2.6.39-400.200 and later. See MOS 1947476.1 for more details.
---------------------------------------------------------------------------------------------------------------------
 NOTE:
 When upgrading to Oracle Linux 6 a backup is required for systems configured with logical volume manager (lvm).
 It appears no backup of the current image exist on the inactive lvm.
 This means a mandatory backup will be made using dbnodeupdate.sh before the actual update starts.
 ---------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------

-------------------------------------------
 Prereq check finished successfully, check the above report for next steps.
 -----------------------------------------------------------------------------------------------------------------------------
(*) 2016-02-05 17:08:01: Cleaning up iso and temp mount points
[root@ch01db02 dbnodeupdate]#

-----------------------------------------------------------
-- DB Server Patching Execution
-----------------------------------------------------------

[root@ch01db02 dbnodeupdate]# ./dbnodeupdate.sh -u -l /u01/exapatch/p20170913_121210_Linux-x86-64/p20170913_121210_Linux-x86-64.zip
##########################################################################################################################
 #                                                                                                                        #
 # Guidelines for using dbnodeupdate.sh (rel. 4.18):                                                                      #
 #                                                                                                                        #
 # - Prerequisites for usage:                                                                                             #
 #         1. Refer to dbnodeupdate.sh options. See MOS 1553103.1                                                         #
 #         2. Use the latest release of dbnodeupdate.sh. See patch 16486998                                               #
 #         3. Run the prereq check with the '-v' option.                                                                  #
 #                                                                                                                        #
 #   I.e.:  ./dbnodeupdate.sh -u -l /u01/my-iso-repo.zip -v                                                               #
 #          ./dbnodeupdate.sh -u -l http://my-yum-repo -v                                                                 #
 #                                                                                                                        #
 # - Prerequisite dependency check failures can happen due to customization:                                              #
 #     - The prereq check detects dependency issues that need to be addressed prior to running a successful update.       #
 #     - Customized rpm packages may fail the built-in dependency check and system updates cannot proceed until resolved. #
 #                                                                                                                        #
 #   When upgrading from releases later than 11.2.2.4.2 to releases before 11.2.3.3.0:                                    #
 #      - Conflicting packages should be removed before proceeding the update.                                            #
 #                                                                                                                        #
 #   When upgrading to releases 11.2.3.3.0 or later:                                                                      #
 #      - When the 'exact' package dependency check fails 'minimum' package dependency check will be tried.               #
 #      - When the 'minimum' package dependency check also fails,                                                         #
 #        the conflicting packages should be removed before proceeding.                                                   #
 #                                                                                                                        #
 # - As part of the prereq checks and as part of the update, a number of rpms will be removed.                            #
 #   This removal is required to preserve Exadata functioning. This should not be confused with obsolete packages.        #
 #      - See /var/log/cellos/packages_to_be_removed.txt for details on what packages will be removed.                    #
 #                                                                                                                        #
 # - In case of any problem when filing an SR, upload the following:                                                      #
 #      - /var/log/cellos/dbnodeupdate.log                                                                                #
 #      - /var/log/cellos/dbnodeupdate.<runid>.diag                                                                       #
 #      - where <runid> is the unique number of the failing run.                                                          #
 #                                                                                                                        #
 ##########################################################################################################################
Continue ? [y/n]
 y
(*) 2016-02-05 17:09:38: Unzipping helpers (/u01/exapatch/dbnodeupdate/dbupdate-helpers.zip) to /opt/oracle.SupportTools/dbnodeupdate_helpers
 (*) 2016-02-05 17:09:38: Initializing logfile /var/log/cellos/dbnodeupdate.log
 (*) 2016-02-05 17:09:39: Collecting system configuration settings. This may take a while...
 (*) 2016-02-05 17:10:07: Validating system settings for known issues and best practices. This may take a while...
 (*) 2016-02-05 17:10:07: Checking free space in /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170936
 (*) 2016-02-05 17:10:07: Unzipping /u01/exapatch/p20170913_121210_Linux-x86-64/p20170913_121210_Linux-x86-64.zip to /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170936, this may take a while
 (*) 2016-02-05 17:10:19: Original /etc/yum.conf moved to /etc/yum.conf.050215170936, generating new yum.conf
 (*) 2016-02-05 17:10:19: Generating Exadata repository file /etc/yum.repos.d/Exadata-computenode.repo
 (*) 2016-02-05 17:10:42: Validating the specified source location.
 (*) 2016-02-05 17:10:43: Cleaning up the yum cache.
Active Image version   : 12.1.1.1.1.140712
 Active Kernel version  : 2.6.39-400.128.17.el5uek
 Active LVM Name        : /dev/mapper/VGExaDb-LVDbSys1
 Inactive Image version : 12.1.1.1.0.131219
 Inactive LVM Name      : /dev/mapper/VGExaDb-LVDbSys2
 Current user id        : root
 Action                 : upgrade
 Upgrading to           : 12.1.2.1.0.141206.1 - Oracle Linux 5->6 upgrade
 Baseurl                : file:///var/www/html/yum/unknown/EXADATA/dbserver/050215170936/x86_64/ (iso)
 Iso file               : /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170936/exadata_ol6_base_repo_141206.1.iso
 Create a backup        : Yes (backup at update mandatory when updating from OL5 to OL6)
 Shutdown stack         : No (Currently stack is down)
 RPM exclusion list     : Function not available for OL5->OL6 upgrades
 RPM obsolete list      : Function not available for OL5->OL6 upgrades
 Exact dependencies     : Function not available for OL5->OL6 upgrades
 Minimum dependencies   : Function not available for OL5->OL6 upgrade
 Logfile                : /var/log/cellos/dbnodeupdate.log (runid: 050215170936)
 Diagfile               : /var/log/cellos/dbnodeupdate.050215170936.diag
 Server model           : SUN FIRE X4170 M3
 dbnodeupdate.sh rel.   : 4.18 (always check MOS 1553103.1 for the latest release of dbnodeupdate.sh)
 Note                   : After upgrading and rebooting run './dbnodeupdate.sh -c' to finish post steps.
The following known issues will be checked for and automatically corrected by dbnodeupdate.sh:
 (*) - Issue - Fix for CVE-2014-9295 AND ELSA-2014-1974
The following known issues will be checked for but require manual follow-up:
 (*) - Issue - Yum rolling update requires fix for 11768055 when Grid Infrastructure is below 11.2.0.2 BP12
 (*) - Issue - Exafusion silently enabled for database 12.1.0.2.0 with kernel 2.6.39-400.200 and later. See MOS 1947476.1 for more details.
Continue ? [y/n]
 y
(*) 2016-02-05 17:11:59: Verifying GI and DB's are shutdown
 (*) 2016-02-05 17:12:00: Collecting console history for diag purposes
 (*) 2016-02-05 17:12:32: Unmount of /boot successful
 (*) 2016-02-05 17:12:32: Check for /dev/sda1 successful
 (*) 2016-02-05 17:12:32: Mount of /boot successful
 (*) 2016-02-05 17:12:32: Disabling stack from starting
 (*) 2016-02-05 17:12:33: Performing filesystem backup to /dev/mapper/VGExaDb-LVDbSys2. Avg. 30 minutes (maximum 120) depends per environment.......
 (*) 2016-02-05 17:18:44: Backup successful
 (*) 2016-02-05 17:18:47: ExaWatcher stopped successful
 (*) 2016-02-05 17:19:07: EM Agent (in /u01/app/oracle/product/agent12c/core/12.1.0.4.0) stopped successfully
 (*) 2016-02-05 17:19:07: Capturing service status and file attributes. This may take a while...
 (*) 2016-02-05 17:19:12: Service status and file attribute report in: /etc/exadata/reports
 (*) 2016-02-05 17:19:12: Validating the specified source location.
 (*) 2016-02-05 17:19:13: Cleaning up the yum cache.
 (*) 2016-02-05 17:19:14: Executing OL5->OL6 upgrade steps, system is expected to reboot multiple times.
 (*) 2016-02-05 17:21:37: Initialize of Oracle Linux 6 Upgrade successful. Rebooting now...
Broadcast message from root (pts/0) (Thu Feb  5 17:21:37 2015):
The system is going down for reboot NOW!
[root@ch01db02 dbnodeupdate]#
[root@ch01db02 dbnodeupdate]# ./dbnodeupdate.sh -c

-----------------------------------------------------------
-- Output New Image Version
-----------------------------------------------------------

[root@ch01db01 ibdiagtools]# imageinfo
Kernel version: 2.6.39-400.243.1.el6uek.x86_64 #1 SMP Wed Nov 26 09:15:35 PST 2014 x86_64
 Image version: 12.1.2.1.0.141206.1
 Image activated: 2016-02-05 18:24:46 +0100
 Image status: success
 System partition on device: /dev/mapper/VGExaDb-LVDbSys1
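CRS was stopped on all database nodes at the beginning of the procedure; once both nodes have been updated and the completion step has run, the clusterware stack can be brought back and verified, for example:

[root@ch01db01 oracle]# dcli -g /home/oracle/dbhosts -l root '/u01/app/12.1.0.2/grid/bin/crsctl start crs'
[root@ch01db01 oracle]# dcli -g /home/oracle/dbhosts -l root '/u01/app/12.1.0.2/grid/bin/crsctl check crs'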

Create Application VIP on GI 11gR2

###################################################################
## How to add an Application VIP to Oracle Cluster 11gR2
###################################################################

Oracle Clusterware includes the utility appvipcfg, which makes it easy to create application VIPs; below is an example based on an 11.2.0.3.1 cluster.

[root@lnxcld02 ~]# appvipcfg -h
 Production Copyright 2007, 2008, Oracle.All rights reserved
 Unknown option: h
Usage: appvipcfg create -network=<network_number> -ip=<ip_address> -vipname=<vipname>
 -user=<user_name>[-group=<group_name>] [-failback=0 | 1]
 delete -vipname=<vipname>
--Example to run as root user:
 [root@lnxcld02 ~]# appvipcfg create -network=1 -ip=192.168.2.200 -vipname=myappvip -user=grid -group=oinstall
Production Copyright 2007, 2008, Oracle.All rights reserved
 2012-02-10 14:39:23: Creating Resource Type
 2012-02-10 14:39:23: Executing /home/GRID_INFRA/product/11.2.0.3/bin/crsctl add type app.appvip_net1.type -basetype ora.cluster_vip_net1.type -file /home/GRID_INFRA/product/11.2.0.3/crs/template/appvip.type
 2012-02-10 14:39:23: Executing cmd: /home/GRID_INFRA/product/11.2.0.3/bin/crsctl add type app.appvip_net1.type -basetype ora.cluster_vip_net1.type -file /home/GRID_INFRA/product/11.2.0.3/crs/template/appvip.type
 2012-02-10 14:39:26: Create the Resource
 2012-02-10 14:39:26: Executing /home/GRID_INFRA/product/11.2.0.3/bin/crsctl add resource myappvip -type app.appvip_net1.type -attr "USR_ORA_VIP=192.168.2.200,START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network),STOP_DEPENDENCIES=hard(ora.net1.network),ACL='owner:root:rwx,pgrp:root:r-x,other::r--,group:oinstall:r-x,user:grid:r-x',HOSTING_MEMBERS=lnxcld02,APPSVIP_FAILBACK="
 2012-02-10 14:39:26: Executing cmd: /home/GRID_INFRA/product/11.2.0.3/bin/crsctl add resource myappvip -type app.appvip_net1.type -attr "USR_ORA_VIP=192.168.2.200,START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network),STOP_DEPENDENCIES=hard(ora.net1.network),ACL='owner:root:rwx,pgrp:root:r-x,other::r--,group:oinstall:r-x,user:grid:r-x',HOSTING_MEMBERS=lnxcld02,APPSVIP_FAILBACK="
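After creation the VIP is registered but not started; because the ACL above grants r-x to the grid user, it can be started and checked as grid (a minimal example):

[grid@lnxcld02 ~]$ crsctl start resource myappvip
[grid@lnxcld02 ~]$ crsctl status resource myappvip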
##############################################################################################
[grid@lnxcld02 trace]$ crsctl stat res -t
 --------------------------------------------------------------------------------
 NAME           TARGET  STATE        SERVER                   STATE_DETAILS
 --------------------------------------------------------------------------------
 Local Resources
 --------------------------------------------------------------------------------
 ora.DATA1.dg
 ONLINE  ONLINE       lnxcld01
 ONLINE  ONLINE       lnxcld02
 ora.FRA1.dg
 ONLINE  ONLINE       lnxcld01
 ONLINE  ONLINE       lnxcld02
 ora.LISTENER.lsnr
 ONLINE  ONLINE       lnxcld01
 ONLINE  ONLINE       lnxcld02
 ora.OCRVOTING.dg
 ONLINE  ONLINE       lnxcld01
 ONLINE  ONLINE       lnxcld02
 ora.asm
 ONLINE  ONLINE       lnxcld01                 Started
 ONLINE  ONLINE       lnxcld02                 Started
 ora.gsd
 OFFLINE OFFLINE      lnxcld01
 OFFLINE OFFLINE      lnxcld02
 ora.net1.network
 ONLINE  ONLINE       lnxcld01
 ONLINE  ONLINE       lnxcld02
 ora.ons
 ONLINE  ONLINE       lnxcld01
 ONLINE  ONLINE       lnxcld02
 ora.registry.acfs
 ONLINE  ONLINE       lnxcld01
 ONLINE  ONLINE       lnxcld02
 --------------------------------------------------------------------------------
 Cluster Resources
 --------------------------------------------------------------------------------
 myappvip
 1        ONLINE  ONLINE       lnxcld02
 ora.LISTENER_SCAN1.lsnr
 1        ONLINE  ONLINE       lnxcld02
 ora.cvu
 1        ONLINE  ONLINE       lnxcld02
 ora.lnxcld01.vip
 1        ONLINE  ONLINE       lnxcld01
 ora.lnxcld02.vip
 1        ONLINE  ONLINE       lnxcld02
 ora.oc4j
 1        ONLINE  ONLINE       lnxcld02
 ora.scan1.vip
 1        ONLINE  ONLINE       lnxcld02
 ora.tpolicy.db
 1        ONLINE  ONLINE       lnxcld01                 Open
 2        ONLINE  ONLINE       lnxcld02                 Open
 ora.tpolicy.loadbalance_rw.svc
 1        ONLINE  ONLINE       lnxcld01
 2        ONLINE  ONLINE       lnxcld02
##############################################################################################
[grid@lnxcld02 ~]$ crsctl stat res myappvip -p
 NAME=myappvip
 TYPE=app.appvip_net1.type
 ACL=owner:root:rwx,pgrp:root:r-x,other::r--,group:oinstall:r-x,user:grid:r-x
 ACTION_FAILURE_TEMPLATE=
 ACTION_SCRIPT=
 ACTIVE_PLACEMENT=1
 AGENT_FILENAME=%CRS_HOME%/bin/orarootagent%CRS_EXE_SUFFIX%
 APPSVIP_FAILBACK=0
 AUTO_START=restore
 CARDINALITY=1
 CHECK_INTERVAL=1
 CHECK_TIMEOUT=30
 DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=vip)
 DEGREE=1
 DESCRIPTION=Application VIP
 ENABLED=1
 FAILOVER_DELAY=0
 FAILURE_INTERVAL=0
 FAILURE_THRESHOLD=0
 GEN_USR_ORA_STATIC_VIP=
 GEN_USR_ORA_VIP=
 HOSTING_MEMBERS=lnxcld02
 LOAD=1
 LOGGING_LEVEL=1
 NLS_LANG=
 NOT_RESTARTING_TEMPLATE=
 OFFLINE_CHECK_INTERVAL=0
 PLACEMENT=balanced
 PROFILE_CHANGE_TEMPLATE=
 RESTART_ATTEMPTS=0
 SCRIPT_TIMEOUT=60
 SERVER_POOLS=*
 START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network)
 START_TIMEOUT=0
 STATE_CHANGE_TEMPLATE=
 STOP_DEPENDENCIES=hard(ora.net1.network)
 STOP_TIMEOUT=0
 TYPE_VERSION=2.1
 UPTIME_THRESHOLD=7d
 USR_ORA_ENV=
 USR_ORA_VIP=192.168.2.200
 VERSION=11.2.0.3.0

Installation of Oracle Grid Infrastructure 12c

####################################

LINUX Setup

####################################

Setup OS

– Disable the Firewall

[root@oel6srv01 ~]# service iptables save
[root@oel6srv01 ~]# service iptables stop
[root@oel6srv01 ~]# chkconfig iptables off
[root@oel6srv01 ~]# service iptables status
-- If you are using IPv6 firewall, enter:
 [root@oel6srv01 ~]# service ip6tables save
 [root@oel6srv01 ~]# service ip6tables stop
 [root@oel6srv01 ~]# chkconfig ip6tables off
 [root@oel6srv01 ~]# service ip6tables status


– Disable SELinux

[root@oel6srv01 ~]# vi /etc/sysconfig/selinux


– Disable kdump

[root@oel6srv01 ~]# chkconfig kdump off
[root@oel6srv01 ~]# chkconfig --list |grep kdump
kdump           0:off   1:off   2:off   3:off   4:off   5:off   6:off


– Network Setup

Public cluster interfaces, VIPs and SCAN

Subnet 10.0.0.x
Netmask 255.255.255.0

Private cluster interfaces

Subnet  192.168.0.x
Netmask 255.255.255.0
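For reference, a hypothetical name resolution layout consistent with the subnets above (hostnames and addresses are illustrative; the SCAN name should resolve to three public addresses via DNS):

# vi /etc/hosts
-- Public
10.0.0.11     oel6srv01.localdomain       oel6srv01
10.0.0.12     oel6srv02.localdomain       oel6srv02
-- VIPs
10.0.0.21     oel6srv01-vip.localdomain   oel6srv01-vip
10.0.0.22     oel6srv02-vip.localdomain   oel6srv02-vip
-- Private interconnect
192.168.0.11  oel6srv01-priv.localdomain  oel6srv01-priv
192.168.0.12  oel6srv02-priv.localdomain  oel6srv02-priv
-- ...and similarly for oel6srv03 and oel6srv04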

 


– Kernel parameters: add or amend the following lines in the "/etc/sysctl.conf" file.

# vi /etc/sysctl.conf
fs.aio-max-nr = 1048576
 fs.file-max = 6815744
 kernel.shmall = 2097152
 kernel.shmmax = 1062637568
 kernel.shmmni = 4096
 # semaphores: semmsl, semmns, semopm, semmni
 kernel.sem = 250 32000 100 128
 net.ipv4.ip_local_port_range = 9000 65500
 net.core.rmem_default=262144
 net.core.rmem_max=4194304
 net.core.wmem_default=262144
 net.core.wmem_max=1048576
--Activate the current Kernel parameters:
/sbin/sysctl -p

– Add the following lines to /etc/security/limits.conf

# vi  /etc/security/limits.conf
## Go to the end
 grid     soft     nproc      2047
 grid     hard     nproc     16384
 grid     soft     nofile     1024
 grid     hard     nofile    65536
 oracle   soft     nproc      2047
 oracle   hard     nproc     16384
 oracle   soft     nofile     1024
 oracle   hard     nofile    65536

– Add the following line to /etc/pam.d/login

# vi /etc/pam.d/login
session     required    pam_limits.so


– Disable Secure Linux (SELinux)

– Make sure the SELINUX flag is set as follows:

# vi /etc/selinux/config
SELINUX=disabled


– NTP Setup

– If you are using NTP, you must add the "-x" option to the following line in the "/etc/sysconfig/ntpd" file.

# vi /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# service ntpd restart
 -- OR stop the NTP server; in that case Grid Infrastructure will run CTSS (octssd) in active mode instead of observer mode:
 # service ntpd stop
 # chkconfig ntpd off
---------------------------------------------
 - Mandatory Packages for Oracle Linux 6  and Red Hat Enterprise Linux 6
 ---------------------------------------------
 binutils-2.20.51.0.2-5.11.el6 (x86_64)
 compat-libcap1-1.10-1 (x86_64)
 compat-libstdc++-33-3.2.3-69.el6 (x86_64)
 compat-libstdc++-33-3.2.3-69.el6.i686
 gcc-4.4.4-13.el6 (x86_64)
 gcc-c++-4.4.4-13.el6 (x86_64)
 glibc-2.12-1.7.el6 (i686)
 glibc-2.12-1.7.el6 (x86_64)
 glibc-devel-2.12-1.7.el6 (x86_64)
 glibc-devel-2.12-1.7.el6.i686
 ksh
 libgcc-4.4.4-13.el6 (i686)
 libgcc-4.4.4-13.el6 (x86_64)
 libstdc++-4.4.4-13.el6 (x86_64)
 libstdc++-4.4.4-13.el6.i686
 libstdc++-devel-4.4.4-13.el6 (x86_64)
 libstdc++-devel-4.4.4-13.el6.i686
 libaio-0.3.107-10.el6 (x86_64)
 libaio-0.3.107-10.el6.i686
 libaio-devel-0.3.107-10.el6 (x86_64)
 libaio-devel-0.3.107-10.el6.i686
 make-3.81-19.el6
 sysstat-9.0.4-11.el6 (x86_64)

– Install the cvuqdisk RPM. Without cvuqdisk, Cluster Verification Utility cannot discover shared disks and raises the error message "Package cvuqdisk not installed" when executed.

– A copy of the cvuqdisk package is included in the first Grid Infrastructure ZIP file.

— Log in as root.

  1. Use the following command to find if you have an existing version of the cvuqdisk package:
# rpm -qi cvuqdisk

2.  If you have an existing version, then enter the following command to deinstall the existing version:

rpm -e cvuqdisk

  3. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall. For example:
# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
4. Install the cvuqdisk package:
rpm -iv package
# rpm -iv cvuqdisk-1.0.9-1.rpm
-- OR install the oracle-rdbms-server-12cR1-preinstall package, which takes care of the required RPMs and OS settings automatically.


– OPTIONAL RPMs

--Minimum ODBC Drivers for Oracle and Red Hat Linux 6 on x86-64
 unixODBC-2.2.14-11.el6 (64-bit) or later
 unixODBC-devel-2.2.14-11.el6 (64-bit) or later

– UNIX Groups

/usr/sbin/groupadd -g 1000 oinstall
/usr/sbin/groupadd -g 1001 asmadmin
/usr/sbin/groupadd -g 1002 dba
/usr/sbin/groupadd -g 1003 asmdba
/usr/sbin/groupadd -g 1004 asmoper

– New optional OS groups granting access to specific features (for example Data Guard, RMAN and security administration) were introduced in 12c, but they are not used in this example.


– UNIX Users

useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper grid
useradd -u 1101 -g oinstall -G asmdba,dba oracle
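Set a password for both accounts and verify the group memberships just created:

# passwd grid
# passwd oracle
# id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1001(asmadmin),1003(asmdba),1004(asmoper)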


– Set SSH timeout wait to unlimited

# vi /etc/ssh/sshd_config
LoginGraceTime 0
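Restart the SSH daemon for the change to take effect:

[root@oel6srv01 ~]# service sshd restart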

 


– Create the u01 file system

[root@oel6srv01 ~]# fdisk /dev/sdb
The number of cylinders for this disk is set to 2871.
 There is nothing wrong with that, but this is larger than 1024,
 and could in certain setups cause problems with:
 1) software that runs at boot time (e.g., old versions of LILO)
 2) booting and partitioning software from other OSs
 (e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): n
 Command action
 e   extended
 p   primary partition (1-4)
 p
Partition number (1-4): 1
 First cylinder (1-2871, default 1):
 Using default value 1
 Last cylinder or +size or +sizeM or +sizeK (1-2871, default 2871):
 Using default value 2871
Command (m for help): w
 The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
 The kernel still uses the old table.
 The new table will be used at the next reboot.
 Syncing disks.
 [root@oel6srv01 ~]#
 [root@oel6srv01 ~]# fdisk -l
Disk /dev/sda: 21.4 GB, 21474836480 bytes
 255 heads, 63 sectors/track, 2610 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot      Start         End      Blocks   Id  System
 /dev/sda1   *           1          13      104391   83  Linux
 /dev/sda2              14        2610    20860402+  8e  Linux LVM
Disk /dev/sdb: 23.6 GB, 23622320128 bytes
 255 heads, 63 sectors/track, 2871 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot      Start         End      Blocks   Id  System
 /dev/sdb1               1        2871    23061276   83  Linux
[root@oel6srv01 ~]# mkfs.ext4 /dev/sdb1
 mke4fs 1.41.12 (17-May-2010)
 Filesystem label=
 OS type: Linux
 Block size=4096 (log=2)
 Fragment size=4096 (log=2)
 Stride=0 blocks, Stripe width=0 blocks
 1441792 inodes, 5765319 blocks
 288265 blocks (5.00%) reserved for the super user
 First data block=0
 Maximum filesystem blocks=4294967296
 176 block groups
 32768 blocks per group, 32768 fragments per group
 8192 inodes per group
 Superblock backups stored on blocks:
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
 4096000
Writing inode tables: done
 Creating journal (32768 blocks): done
 Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 37 mounts or
 180 days, whichever comes first.  Use tune4fs -c or -i to override.
 [root@oel6srv01 ~]#
[root@oel6srv01 /]# mkdir u01
 [root@oel6srv01 /]# cat /etc/fstab
..
..
/dev/sdb1               /u01                    ext4    defaults        0 0
[root@oel6srv01 /]# mount /u01
 [root@oel6srv01 /]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
 15G  3.2G   12G  22% /
 /dev/sda1              99M   24M   71M  25% /boot
 tmpfs                 1.3G     0  1.3G   0% /dev/shm
 /dev/sdb1              22G  172M   21G   1% /u01

– Creation of GI and RDBMS directories

mkdir -p /u01/GRID/12.1.0.1
 mkdir -p /u01/app/product/12.1.0.1
#Oracle Base
 chown -R oracle:oinstall /u01/app
 chmod -R 775 /u01/app
#Oracle RDBMS Home
 chown -R oracle:oinstall /u01/app/product/12.1.0.1
 chmod -R 775 /u01/app/product/12.1.0.1
#Grid Home
 chown -R grid:oinstall /u01/GRID
 chmod -R 775 /u01/GRID/12.1.0.1

– Add these entries to the generic user profile

# vi /etc/profile
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
 if [ $SHELL = "/bin/ksh" ]; then
 ulimit -p 16384
 ulimit -n 65536
 else
 ulimit -u 16384 -n 65536
 fi
 umask 022
 fi
 if [ $USER = "root" ]; then
 umask 022
 fi

– Configure the shared storage for ASM
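The original post does not detail this step; a minimal sketch using Oracle ASMLib, assuming the shared LUNs are partitioned and visible as /dev/sdc1 and /dev/sdd1 on all nodes (device and disk names are illustrative):

-- On every node, configure ASMLib to own the disks as grid:asmadmin
[root@oel6srv01 ~]# oracleasm configure -i
[root@oel6srv01 ~]# oracleasm init
-- On the first node only, label the disks
[root@oel6srv01 ~]# oracleasm createdisk OCRVOTING01 /dev/sdc1
[root@oel6srv01 ~]# oracleasm createdisk DATA01 /dev/sdd1
-- On the remaining nodes, scan and list the disks
[root@oel6srv02 ~]# oracleasm scandisks
[root@oel6srv02 ~]# oracleasm listdisks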

##########################################################
##  Installation Oracle Grid Infrastructure
##########################################################

--Run 12c Cluvfy with the following options to verify that all prerequisites are met:
 ./runcluvfy.sh stage -post hwos -n oel6srv01,oel6srv02,oel6srv03,oel6srv04 -verbose
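Optionally, the pre-installation stage can be checked as well right before launching the installer (same node list as above):

./runcluvfy.sh stage -pre crsinst -n oel6srv01,oel6srv02,oel6srv03,oel6srv04 -verbose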

# Start the Grid Installation...

[grid@oel6srv01 grid]$ ./runInstaller
 Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB.   Actual 39776 MB    Passed
 Checking swap space: must be greater than 150 MB.   Actual 4031 MB    Passed
 Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
 Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-06-25_08-23-26PM. Please wait ..

 

##############################################################
##  Grid Infrastructure crsctl output
##############################################################

[grid@oel6srv01 ~]$ crsctl stat res -t
 --------------------------------------------------------------------------------
 Name           Target  State        Server                   State details
 --------------------------------------------------------------------------------
 Local Resources
 --------------------------------------------------------------------------------
 ora.ASMNET2LSNR_ASM.lsnr
 ONLINE  ONLINE       oel6srv01                STABLE
 ONLINE  ONLINE       oel6srv02                STABLE
 ONLINE  ONLINE       oel6srv03                STABLE
 ONLINE  ONLINE       oel6srv04                STABLE
 ora.DATA1.VOL_CLOUD01.advm
 ONLINE  ONLINE       oel6srv01                Volume device /dev/a
 sm/vol_cloud01-178 i
 s online,STABLE
 ONLINE  ONLINE       oel6srv02                Volume device /dev/a
 sm/vol_cloud01-178 i
 s online,STABLE
 ONLINE  ONLINE       oel6srv03                Volume device /dev/a
 sm/vol_cloud01-178 i
 s online,STABLE
 ONLINE  ONLINE       oel6srv04                Volume device /dev/a
 sm/vol_cloud01-178 i
 s online,STABLE
 ora.DATA1.dg
 OFFLINE OFFLINE      oel6srv01               STABLE
 OFFLINE OFFLINE      oel6srv02               STABLE
 ONLINE  ONLINE       oel6srv03                STABLE
 ONLINE  ONLINE       oel6srv04                STABLE
 ora.FRA1.dg
 OFFLINE OFFLINE      oel6srv01               STABLE
 OFFLINE OFFLINE      oel6srv02               STABLE
 ONLINE  ONLINE       oel6srv03                STABLE
 ONLINE  ONLINE       oel6srv04                STABLE
 ora.LISTENER.lsnr
 ONLINE  ONLINE       oel6srv01                STABLE
 ONLINE  ONLINE       oel6srv02                STABLE
 ONLINE  ONLINE       oel6srv03                STABLE
 ONLINE  ONLINE       oel6srv04                STABLE
 ora.OCRVOTING.dg
 OFFLINE OFFLINE      oel6srv01               STABLE
 OFFLINE OFFLINE      oel6srv02               STABLE
 ONLINE  ONLINE       oel6srv03                STABLE
 ONLINE  ONLINE       oel6srv04                STABLE
 ora.data1.vol_cloud01.acfs
 ONLINE  ONLINE       oel6srv01                mounted on /cloudfs,
 STABLE
 ONLINE  ONLINE       oel6srv02                mounted on /cloudfs,
 STABLE
 ONLINE  ONLINE       oel6srv03                mounted on /cloudfs,
 STABLE
 ONLINE  ONLINE       oel6srv04                mounted on /cloudfs,
 STABLE
 ora.net1.network
 ONLINE  ONLINE       oel6srv01                STABLE
 ONLINE  ONLINE       oel6srv02                STABLE
 ONLINE  ONLINE       oel6srv03                STABLE
 ONLINE  ONLINE       oel6srv04                STABLE
 ora.ons
 ONLINE  ONLINE       oel6srv01                STABLE
 ONLINE  ONLINE       oel6srv02                STABLE
 ONLINE  ONLINE       oel6srv03                STABLE
 ONLINE  ONLINE       oel6srv04                STABLE
 ora.proxy_advm
 ONLINE  ONLINE       oel6srv01                STABLE
 ONLINE  ONLINE       oel6srv02                STABLE
 ONLINE  ONLINE       oel6srv03                STABLE
 ONLINE  ONLINE       oel6srv04                STABLE
 --------------------------------------------------------------------------------
 Cluster Resources
 --------------------------------------------------------------------------------
 ora.LISTENER_SCAN1.lsnr
 1        ONLINE  ONLINE       oel6srv04                STABLE
 ora.MGMTLSNR
 1        ONLINE  ONLINE       oel6srv04                169.254.25.188 192.1
 68.0.114 192.168.0.1
 24,STABLE
 ora.asm
 1        ONLINE  ONLINE       oel6srv04                STABLE
 3        ONLINE  ONLINE       oel6srv03                STABLE
 ora.cvu
 1        ONLINE  ONLINE       oel6srv04                STABLE
 ora.mgmtdb
 1        ONLINE  ONLINE       oel6srv04                Open,STABLE
 ora.oc4j
 1        ONLINE  ONLINE       oel6srv01                STABLE
 ora.oel6srv01.vip
 1        ONLINE  ONLINE       oel6srv01                STABLE
 ora.oel6srv02.vip
 1        ONLINE  ONLINE       oel6srv02                STABLE
 ora.oel6srv03.vip
 1        ONLINE  ONLINE       oel6srv03                STABLE
 ora.oel6srv04.vip
 1        ONLINE  ONLINE       oel6srv04                STABLE
 ora.scan1.vip
 1        ONLINE  ONLINE       oel6srv04                STABLE
 --------------------------------------------------------------------------------

Oracle Cluster Name

#################################################
##    How to display Oracle Cluster name       ##
#################################################

Oracle Clusterware includes the utility cemutlo, which provides the cluster name and version.

[grid@lnxcld01 ~]$ cemutlo -h
Usage: /GRID_INFRA/product/11.2.0.3/bin/cemutlo.bin [-n] [-w]
        where:
        -n prints the cluster name
        -w prints the clusterware version in the following format:
                 <major_version>:<minor_version>:<vendor_info>


--Cluster Name
[grid@lnxcld01 ~]$ cemutlo -n
cloud01

--Cluster Version
[grid@lnxcld01 ~]$ cemutlo -w
2:1:
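
The cluster name can also be obtained with olsnodes, for example:

--Cluster Name via olsnodes
[grid@lnxcld01 ~]$ olsnodes -c
cloud01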