Bulk Exadata Patching

More than 11 years after its launch, the Oracle Exadata Database Machine has become popular in many companies across industries, leaving administrators, developers and end users almost unanimously satisfied with its performance and availability.

But even on Exadata there are cumbersome maintenance activities, like patching.

Most of my Exadata customers have acquired two non-full racks, which keeps the patching effort quite reasonable; but recently I started working on a project with multiple full racks, with tens of Storage Servers and Compute Nodes and hundreds of Virtual Machines…

A very challenging environment, especially when it came to patching…

Patching all those systems using the standard patchmgr utility was not acceptable, therefore I had to replace my standard patching procedure with a new one offering automation and scalability.

On this subject, Oracle provides a few handy options:

Patching Exadata Infrastructure

  • Storage Server patching via HTTP/HTTPS server: starting with Oracle Exadata System Software release 18.1.0.0.0, it is possible to patch the Storage Servers using an external HTTP server hosting the new software image. The activity can be scheduled up to one week before the installation, allowing the Management Server (MS) on each Cell to download the software and run the pre-checks in advance. MS interrupts the software upgrade and generates an alert if the Cell does not comply with all prerequisites.
  • Unbreakable Linux Network: ULN offers software patches, updates, and fixes for Oracle Linux and Oracle VM. Implementing a local YUM repository enables the automated patching of the bare-metal OS or of dom0/domU.
  • InfiniBand Switch: standard rolling upgrade patching procedure using patchmgr.

Patching Grid Infrastructure & RDBMS

  • GI & RDBMS: those components are patched using the standard Oracle tools common to all platforms, but the entire process has been parallalized using OS tools like dcli commands.

Bulk Exadata Patching Overview


Main Patching Commands

Storage Server – Scheduling Automated Storage Server Update via HTTP/HTTPS

On the Storage Cells, set the location of the local Apache server hosting the cell software:

[root@efucndb01-a ~]# dcli -l root -g ~/cells cellcli -e 'alter softwareUpdate store=\"http://uln-yum.emilianofusaglia.net/cellsw\"'
efucncel01-a: Software Update successfully altered.
efucncel02-a: Software Update successfully altered.
efucncel03-a: Software Update successfully altered.
efucncel04-a: Software Update successfully altered.
efucncel05-a: Software Update successfully altered.
efucncel06-a: Software Update successfully altered.
efucncel07-a: Software Update successfully altered.
efucncel08-a: Software Update successfully altered.
efucncel09-a: Software Update successfully altered.
efucncel10-a: Software Update successfully altered.
efucncel11-a: Software Update successfully altered.
efucncel12-a: Software Update successfully altered.
efucncel13-a: Software Update successfully altered.
efucncel14-a: Software Update successfully altered.
[root@efucndb01-a ~]#
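
Before scheduling the update, it is worth verifying that every cell can actually reach the HTTP store. A quick sketch using curl through dcli (assuming curl is available on the cells):

[root@efucndb01-a ~]# dcli -l root -g ~/cells "curl -sI http://uln-yum.emilianofusaglia.net/cellsw/ | head -1"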

Schedule the update

[root@efucndb01-a ~]# dcli -l root -g ~/cells cellcli -e 'alter softwareUpdate time=\"03:20 AM WEDNESDAY\"'
efucncel01-a: Software update is scheduled to begin at: 2020-02-05T03:20:00+01:00.
efucncel02-a: Software update is scheduled to begin at: 2020-02-05T03:20:00+01:00.
efucncel03-a: Software update is scheduled to begin at: 2020-02-05T03:20:00+01:00.
efucncel04-a: Software update is scheduled to begin at: 2020-02-05T03:20:00+01:00.
efucncel05-a: Software update is scheduled to begin at: 2020-02-05T03:20:00+01:00.
efucncel06-a: Software update is scheduled to begin at: 2020-02-05T03:20:00+01:00.
efucncel07-a: Software update is scheduled to begin at: 2020-02-05T03:20:00+01:00.
efucncel08-a: Software update is scheduled to begin at: 2020-02-05T03:20:00+01:00.
efucncel09-a: Software update is scheduled to begin at: 2020-02-05T03:20:00+01:00.
efucncel10-a: Software update is scheduled to begin at: 2020-02-05T03:20:00+01:00.
efucncel11-a: Software update is scheduled to begin at: 2020-02-05T03:20:00+01:00.
efucncel12-a: Software update is scheduled to begin at: 2020-02-05T03:20:00+01:00.
efucncel13-a: Software update is scheduled to begin at: 2020-02-05T03:20:00+01:00.
efucncel14-a: Software update is scheduled to begin at: 2020-02-05T03:20:00+01:00.
[root@efucndb01-a ~]#

Verify the scheduled upgrade

[root@efucndb01-a ~]# dcli -l root -g ~/cells cellcli -e 'list softwareupdate detail'
efucncel01-a: name: 19.3.4.0.0.200130
efucncel01-a: status: PreReq OK. Ready to update at: 2020-02-05T03:20:00+01:00
efucncel01-a: store: http://uln-yum.emilianofusaglia.net/cellsw
efucncel01-a: time: 2020-02-05T03:20:00+01:00
efucncel02-a: name: 19.3.4.0.0.200130
efucncel02-a: status: PreReq OK. Ready to update at: 2020-02-05T03:20:00+01:00
efucncel02-a: store: http://uln-yum.emilianofusaglia.net/cellsw
efucncel02-a: time: 2020-02-05T03:20:00+01:00
efucncel03-a: name: 19.3.4.0.0.200130
efucncel03-a: status: PreReq OK. Ready to update at: 2020-02-05T03:20:00+01:00
efucncel03-a: store: http://uln-yum.emilianofusaglia.net/cellsw
efucncel03-a: time: 2020-02-05T03:20:00+01:00
efucncel04-a: name: 19.3.4.0.0.200130
efucncel04-a: status: PreReq OK. Ready to update at: 2020-02-05T03:20:00+01:00
efucncel04-a: store: http://uln-yum.emilianofusaglia.net/cellsw
efucncel04-a: time: 2020-02-05T03:20:00+01:00
efucncel05-a: name: 19.3.4.0.0.200130
efucncel05-a: status: PreReq OK. Ready to update at: 2020-02-05T03:20:00+01:00
efucncel05-a: store: http://uln-yum.emilianofusaglia.net/cellsw
efucncel05-a: time: 2020-02-05T03:20:00+01:00
efucncel06-a: name: 19.3.4.0.0.200130
efucncel06-a: status: PreReq OK. Ready to update at: 2020-02-05T03:20:00+01:00
efucncel06-a: store: http://uln-yum.emilianofusaglia.net/cellsw
efucncel06-a: time: 2020-02-05T03:20:00+01:00
efucncel07-a: name: 19.3.4.0.0.200130
efucncel07-a: status: PreReq OK. Ready to update at: 2020-02-05T03:20:00+01:00
efucncel07-a: store: http://uln-yum.emilianofusaglia.net/cellsw
efucncel07-a: time: 2020-02-05T03:20:00+01:00
efucncel08-a: name: 19.3.4.0.0.200130
efucncel08-a: status: PreReq OK. Ready to update at: 2020-02-05T03:20:00+01:00
efucncel08-a: store: http://uln-yum.emilianofusaglia.net/cellsw
efucncel08-a: time: 2020-02-05T03:20:00+01:00
efucncel09-a: name: 19.3.4.0.0.200130
efucncel09-a: status: PreReq OK. Ready to update at: 2020-02-05T03:20:00+01:00
efucncel09-a: store: http://uln-yum.emilianofusaglia.net/cellsw
efucncel09-a: time: 2020-02-05T03:20:00+01:00
efucncel10-a: name: 19.3.4.0.0.200130
efucncel10-a: status: PreReq OK. Ready to update at: 2020-02-05T03:20:00+01:00
efucncel10-a: store: http://uln-yum.emilianofusaglia.net/cellsw
efucncel10-a: time: 2020-02-05T03:20:00+01:00
efucncel11-a: name: 19.3.4.0.0.200130
efucncel11-a: status: PreReq OK. Ready to update at: 2020-02-05T03:20:00+01:00
efucncel11-a: store: http://uln-yum.emilianofusaglia.net/cellsw
efucncel11-a: time: 2020-02-05T03:20:00+01:00
efucncel12-a: name: 19.3.4.0.0.200130
efucncel12-a: status: PreReq OK. Ready to update at: 2020-02-05T03:20:00+01:00
efucncel12-a: store: http://uln-yum.emilianofusaglia.net/cellsw
efucncel12-a: time: 2020-02-05T03:20:00+01:00
efucncel13-a: name: 19.3.4.0.0.200130
efucncel13-a: status: PreReq OK. Ready to update at: 2020-02-05T03:20:00+01:00
efucncel13-a: store: http://uln-yum.emilianofusaglia.net/cellsw
efucncel13-a: time: 2020-02-05T03:20:00+01:00
efucncel14-a: name: 19.3.4.0.0.200130
efucncel14-a: status: PreReq OK. Ready to update at: 2020-02-05T03:20:00+01:00
efucncel14-a: store: http://uln-yum.emilianofusaglia.net/cellsw
efucncel14-a: time: 2020-02-05T03:20:00+01:00
[root@efucndb01-a ~]#
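
Once the scheduled maintenance window has passed, the new image version can be confirmed in one shot across all cells, for example:

[root@efucndb01-a ~]# dcli -l root -g ~/cells "imageinfo -ver"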

Unbreakable Linux Network

dom0 checks
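
For reference, the ~/dom0 file passed to patchmgr is simply a plain-text list of the dom0 hostnames, one per line:

[root@efuconsole dbserver_patch_19.200120]# cat ~/dom0
efucndb01-a
efucndb02-a
efucndb03-a
efucndb04-a
efucndb05-a
efucndb06-a
efucndb07-a
efucndb08-a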

[root@efuconsole dbserver_patch_19.200120]# ./patchmgr -dbnodes ~/dom0 -precheck -yum_repo http://uln-yum.emilianofusaglia.net/yum/EngineeredSystems/exadata/dbserver/dom0/19.3.4.0.0/base/x86_64 -target_version 19.3.4.0.0.200130

NOTE patchmgr release: 19.200120 (always check MOS 1553103.1 for the latest release of dbserver.patch.zip)
NOTE
WARNING Do not interrupt the patchmgr session.
WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot database nodes during update or rollback.
WARNING Do not open logfiles in write mode and do not try to alter them.

2020-02-06 14:06:17 +0100 :Working: Verify SSH equivalence for the root user to node(s)
2020-02-06 14:06:19 +0100 :SUCCESS: Verify SSH equivalence for the root user to node(s)
2020-02-06 14:06:22 +0100 :Working: Initiate precheck on 8 node(s)
2020-02-06 14:07:36 +0100 :Working: Check free space on node(s)
2020-02-06 14:07:42 +0100 :SUCCESS: Check free space on node(s)
2020-02-06 14:08:07 +0100 :Working: dbnodeupdate.sh running a precheck on node(s).
2020-02-06 14:09:43 +0100 :SUCCESS: Initiate precheck on node(s).
2020-02-06 14:09:45 +0100 :SUCCESS: Completed run of command: ./patchmgr -dbnodes /root/dom0 -precheck -yum_repo http://uln-yum.emilianofusaglia.net/yum/EngineeredSystems/exadata/dbserver/dom0/19.3.4.0.0/base/x86_64 -target_version 19.3.4.0.0.200130
2020-02-06 14:09:45 +0100 :INFO : Precheck attempted on nodes in file /root/dom0: [efucndb01-a efucndb02-a efucndb03-a efucndb04-a efucndb05-a efucndb06-a efucndb07-a efucndb08-a]
2020-02-06 14:09:45 +0100 :INFO : Current image version on dbnode(s) is:
2020-02-06 14:09:45 +0100 :INFO : efucndb01-a: 19.2.6.0.0.190911.1
2020-02-06 14:09:45 +0100 :INFO : efucndb02-a: 19.2.6.0.0.190911.1
2020-02-06 14:09:45 +0100 :INFO : efucndb03-a: 19.2.6.0.0.190911.1
2020-02-06 14:09:45 +0100 :INFO : efucndb04-a: 19.2.6.0.0.190911.1
2020-02-06 14:09:45 +0100 :INFO : efucndb05-a: 19.2.6.0.0.190911.1
2020-02-06 14:09:45 +0100 :INFO : efucndb06-a: 19.2.6.0.0.190911.1
2020-02-06 14:09:45 +0100 :INFO : efucndb07-a: 19.2.6.0.0.190911.1
2020-02-06 14:09:45 +0100 :INFO : efucndb08-a: 19.2.6.0.0.190911.1
2020-02-06 14:09:45 +0100 :INFO : For details, check the following files in /EXAVMIMAGES/Patch/patchmgr_DBSERVER/dbserver_patch_19.200120:
2020-02-06 14:09:45 +0100 :INFO : - _dbnodeupdate.log
2020-02-06 14:09:45 +0100 :INFO : - patchmgr.log
2020-02-06 14:09:45 +0100 :INFO : - patchmgr.trc
2020-02-06 14:09:45 +0100 :INFO : Exit status:0
2020-02-06 14:09:45 +0100 :INFO : Exiting.
[root@efucndb01-a dbserver_patch_19.200120]#

dom0 upgrade

[root@efuconsole dbserver_patch_19.200120]# ./patchmgr -dbnodes ~/dom0 -upgrade -yum_repo http://uln-yum.emilianofusaglia.net/yum/EngineeredSystems/exadata/dbserver/dom0/19.3.4.0.0/base/x86_64 -target_version 19.3.4.0.0.200130

NOTE patchmgr release: 19.200120 (always check MOS 1553103.1 for the latest release of dbserver.patch.zip)
NOTE
NOTE Database nodes will reboot during the update process.
NOTE
WARNING Do not interrupt the patchmgr session.
WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot database nodes during update or rollback.
WARNING Do not open logfiles in write mode and do not try to alter them.

2020-02-06 14:29:11 +0100 :Working: Verify SSH equivalence for the root user to node(s)
2020-02-06 14:29:13 +0100 :SUCCESS: Verify SSH equivalence for the root user to node(s)
2020-02-06 14:29:15 +0100 :Working: Initiate prepare steps on node(s).
2020-02-06 14:29:17 +0100 :Working: Check free space on node(s)
2020-02-06 14:29:23 +0100 :SUCCESS: Check free space on node(s)
2020-02-06 14:29:59 +0100 :SUCCESS: Initiate prepare steps on node(s).
2020-02-06 14:29:59 +0100 :Working: Initiate update on 8 node(s).
2020-02-06 14:29:59 +0100 :Working: dbnodeupdate.sh running a backup on 8 node(s).
2020-02-06 14:36:16 +0100 :SUCCESS: dbnodeupdate.sh running a backup on 8 node(s).
2020-02-06 14:36:16 +0100 :Working: Initiate update on node(s)
2020-02-06 14:36:16 +0100 :Working: Get information about any required OS upgrades from node(s).
2020-02-06 14:36:28 +0100 :SUCCESS: Get information about any required OS upgrades from node(s).
2020-02-06 14:36:28 +0100 :Working: dbnodeupdate.sh running an update step on all nodes.
2020-02-06 14:56:38 +0100 :INFO : efucndb01-a is ready to reboot.
2020-02-06 14:56:38 +0100 :INFO : efucndb02-a is ready to reboot.
2020-02-06 14:56:38 +0100 :INFO : efucndb03-a is ready to reboot.
2020-02-06 14:56:38 +0100 :INFO : efucndb04-a is ready to reboot.
2020-02-06 14:56:38 +0100 :INFO : efucndb05-a is ready to reboot.
2020-02-06 14:56:38 +0100 :INFO : efucndb06-a is ready to reboot.
2020-02-06 14:56:38 +0100 :INFO : efucndb07-a is ready to reboot.
2020-02-06 14:56:38 +0100 :INFO : efucndb08-a is ready to reboot.
2020-02-06 14:56:39 +0100 :SUCCESS: dbnodeupdate.sh running an update step on all nodes.
2020-02-06 14:56:51 +0100 :Working: Initiate reboot on node(s)
2020-02-06 14:56:55 +0100 :SUCCESS: Initiate reboot on node(s)
2020-02-06 14:56:55 +0100 :Working: Waiting to ensure node(s) is down before reboot.
2020-02-06 14:58:20 +0100 :SUCCESS: Waiting to ensure node(s) is down before reboot.
2020-02-06 14:58:20 +0100 :Working: Waiting to ensure node(s) is up after reboot.
2020-02-06 15:04:23 +0100 :SUCCESS: Waiting to ensure node(s) is up after reboot.
2020-02-06 15:04:23 +0100 :Working: Waiting to connect to node(s) with SSH. During Linux upgrades this can take some time.
2020-02-06 15:27:46 +0100 :SUCCESS: Waiting to connect to node(s) with SSH. During Linux upgrades this can take some time.
2020-02-06 15:27:46 +0100 :Working: Wait for node(s) is ready for the completion step of update.
2020-02-06 15:31:29 +0100 :SUCCESS: Wait for node(s) is ready for the completion step of update.
2020-02-06 15:31:30 +0100 :Working: Initiate completion step from dbnodeupdate.sh on node(s)
2020-02-06 15:48:10 +0100 :SUCCESS: Initiate completion step from dbnodeupdate.sh on efucndb01-a
2020-02-06 15:48:14 +0100 :SUCCESS: Initiate completion step from dbnodeupdate.sh on efucndb02-a
2020-02-06 15:48:19 +0100 :SUCCESS: Initiate completion step from dbnodeupdate.sh on efucndb03-a
2020-02-06 15:48:30 +0100 :SUCCESS: Initiate completion step from dbnodeupdate.sh on efucndb04-a
2020-02-06 15:48:35 +0100 :SUCCESS: Initiate completion step from dbnodeupdate.sh on efucndb05-a
2020-02-06 15:48:46 +0100 :SUCCESS: Initiate completion step from dbnodeupdate.sh on efucndb06-a
2020-02-06 15:48:50 +0100 :SUCCESS: Initiate completion step from dbnodeupdate.sh on efucndb07-a
2020-02-06 15:49:02 +0100 :SUCCESS: Initiate completion step from dbnodeupdate.sh on efucndb08-a
2020-02-06 15:49:18 +0100 :SUCCESS: Initiate update on node(s).
2020-02-06 15:49:18 +0100 :SUCCESS: Initiate update on 0 node(s).
[INFO ] Collected dbnodeupdate diag in file: Diag_patchmgr_dbnode_upgrade_060220142909.tbz
-rw-r--r-- 1 root root 6381043 Feb 6 15:49 Diag_patchmgr_dbnode_upgrade_060220142909.tbz
2020-02-06 15:49:22 +0100 :SUCCESS: Completed run of command: ./patchmgr -dbnodes /root/dom0 -upgrade -yum_repo http://uln-yum.emilianofusaglia.net/yum/EngineeredSystems/exadata/dbserver/dom0/19.3.4.0.0/base/x86_64 -target_version 19.3.4.0.0.200130
2020-02-06 15:49:22 +0100 :INFO : Upgrade attempted on nodes in file /root/dom0: [efucndb01-a efucndb02-a efucndb03-a efucndb04-a efucndb05-a efucndb06-a efucndb07-a efucndb08-a]
2020-02-06 15:49:22 +0100 :INFO : Current image version on dbnode(s) is:
2020-02-06 15:49:22 +0100 :INFO : efucndb01-a: 19.3.4.0.0.200130
2020-02-06 15:49:22 +0100 :INFO : efucndb02-a: 19.3.4.0.0.200130
2020-02-06 15:49:22 +0100 :INFO : efucndb03-a: 19.3.4.0.0.200130
2020-02-06 15:49:22 +0100 :INFO : efucndb04-a: 19.3.4.0.0.200130
2020-02-06 15:49:22 +0100 :INFO : efucndb05-a: 19.3.4.0.0.200130
2020-02-06 15:49:22 +0100 :INFO : efucndb06-a: 19.3.4.0.0.200130
2020-02-06 15:49:22 +0100 :INFO : efucndb07-a: 19.3.4.0.0.200130
2020-02-06 15:49:22 +0100 :INFO : efucndb08-a: 19.3.4.0.0.200130
2020-02-06 15:49:22 +0100 :INFO : For details, check the following files in /EXAVMIMAGES/Patch/patchmgr_DBSERVER/dbserver_patch_19.200120:
2020-02-06 15:49:22 +0100 :INFO : - _dbnodeupdate.log
2020-02-06 15:49:22 +0100 :INFO : - patchmgr.log
2020-02-06 15:49:22 +0100 :INFO : - patchmgr.trc
2020-02-06 15:49:22 +0100 :INFO : Exit status:0
2020-02-06 15:49:22 +0100 :INFO : Exiting.
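
The domU guests can be updated with the same patchmgr syntax, pointing -yum_repo to the domU channel of the local mirror; the repository path and the ~/domU group file in the sketch below are assumptions and must be adapted to the actual layout of the local YUM server:

[root@efuconsole dbserver_patch_19.200120]# ./patchmgr -dbnodes ~/domU -upgrade -yum_repo http://uln-yum.emilianofusaglia.net/yum/EngineeredSystems/exadata/dbserver/19.3.4.0.0/base/x86_64 -target_version 19.3.4.0.0.200130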

InfiniBand Switch

IB switch checks

[root@efucndb01-a patch_switch_19.3.4.0.0.200130]# ./patchmgr -ibswitches ~/ibs -upgrade -ibswitch_precheck
2020-02-10 07:57:44 +0100 :Working: Verify SSH equivalence for the root user to node(s)
2020-02-10 07:57:46 +0100 :SUCCESS: Verify SSH equivalence for the root user to node(s)
2020-02-10 07:57:47 +0100 1 of 1 :Working: Initiate pre-upgrade validation check on InfiniBand switch(es).
----- InfiniBand switch update process started 2020-02-10 07:57:48 +0100 -----
[NOTE ] Log file at /EXAVMIMAGES/Patch/IBs_19.3.4.0.0/patch_switch_19.3.4.0.0.200130/upgradeIBSwitch.log
[INFO ] List of InfiniBand switches for upgrade: ( efucnsw-iba01-a efucnsw-ibb01-a )
[SUCCESS ] Verifying Network connectivity to efucnsw-iba01-a
[SUCCESS ] Verifying Network connectivity to efucnsw-ibb01-a
[SUCCESS ] Validating verify-topology output
[INFO ] Master Subnet Manager is set to "efucnsw-iba01-a" in all Switches
[INFO ] ---------- Starting with InfiniBand Switch efucnsw-iba01-a
[WARNING ] Infiniband switch meets minimal version requirements, but downgrade is only available to 2.2.13-2 with the current package.
To downgrade to other versions:
Manually download the InfiniBand switch firmware package to the patch directory
Set export variable "EXADATA_IMAGE_IBSWITCH_DOWNGRADE_VERSION" to the appropriate version
Run patchmgr command to initiate downgrade.
[SUCCESS ] Verify SSH access to the patchmgr host efucndb01-a.emilianofusaglia.net from the InfiniBand Switch efucnsw-iba01-a.
[INFO ] Starting pre-update validation on efucnsw-iba01-a
[SUCCESS ] Verifying that /tmp has 150M in efucnsw-iba01-a, found 492M
[SUCCESS ] Verifying that / has 20M in efucnsw-iba01-a, found 28M
[SUCCESS ] NTP daemon is running on efucnsw-iba01-a.
[INFO ] Manually validate the following entries Date:(YYYY-aM-DD) 2020-02-10 Time:(HH:MM:SS) 07:58:05
[INFO ] Validating the current firmware on the InfiniBand Switch
[SUCCESS ] Firmware verification on InfiniBand switch efucnsw-iba01-a
[SUCCESS ] Verifying that the patchmgr host efucndb01-a.emilianofusaglia.net is recognized on the InfiniBand Switch efucnsw-iba01-a through getHostByName
[SUCCESS ] Execute plugin check for Patch Check Prereq on efucnsw-iba01-a
[INFO ] Finished pre-update validation on efucnsw-iba01-a
[SUCCESS ] Pre-update validation on efucnsw-iba01-a
[SUCCESS ] Prereq check on efucnsw-iba01-a
[INFO ] ---------- Starting with InfiniBand Switch efucnsw-ibb01-a
[WARNING ] Infiniband switch meets minimal version requirements, but downgrade is only available to 2.2.13-2 with the current package.
To downgrade to other versions:
Manually download the InfiniBand switch firmware package to the patch directory
Set export variable "EXADATA_IMAGE_IBSWITCH_DOWNGRADE_VERSION" to the appropriate version
Run patchmgr command to initiate downgrade.
[SUCCESS ] Verify SSH access to the patchmgr host efucndb01-a.emilianofusaglia.net from the InfiniBand Switch efucnsw-ibb01-a.
[INFO ] Starting pre-update validation on efucnsw-ibb01-a
[SUCCESS ] Verifying that /tmp has 150M in efucnsw-ibb01-a, found 492M
[SUCCESS ] Verifying that / has 20M in efucnsw-ibb01-a, found 28M
[SUCCESS ] NTP daemon is running on efucnsw-ibb01-a.
[INFO ] Manually validate the following entries Date:(YYYY-aM-DD) 2020-02-10 Time:(HH:MM:SS) 07:58:25
[INFO ] Validating the current firmware on the InfiniBand Switch
[SUCCESS ] Firmware verification on InfiniBand switch efucnsw-ibb01-a
[SUCCESS ] Verifying that the patchmgr host efucndb01-a.emilianofusaglia.net is recognized on the InfiniBand Switch efucnsw-ibb01-a through getHostByName
[SUCCESS ] Execute plugin check for Patch Check Prereq on efucnsw-ibb01-a
[INFO ] Finished pre-update validation on efucnsw-ibb01-a
[SUCCESS ] Pre-update validation on efucnsw-ibb01-a
[SUCCESS ] Prereq check on efucnsw-ibb01-a
[SUCCESS ] Overall status
----- InfiniBand switch update process ended 2020-02-10 07:58:42 +0100 -----
2020-02-10 07:58:42 +0100 1 of 1 :SUCCESS: Initiate pre-upgrade validation check on InfiniBand switch(es).
2020-02-10 07:58:42 +0100 :SUCCESS: Completed run of command: ./patchmgr -ibswitches /root/ibs -upgrade -ibswitch_precheck
2020-02-10 07:58:42 +0100 :INFO : upgrade attempted on nodes in file /root/ibs: [efucnsw-iba01-a efucnsw-ibb01-a]
2020-02-10 07:58:42 +0100 :INFO : For details, check the following files in /EXAVMIMAGES/Patch/IBs_19.3.4.0.0/patch_switch_19.3.4.0.0.200130:
2020-02-10 07:58:42 +0100 :INFO : - upgradeIBSwitch.log
2020-02-10 07:58:42 +0100 :INFO : - upgradeIBSwitch.trc
2020-02-10 07:58:42 +0100 :INFO : - patchmgr.stdout
2020-02-10 07:58:42 +0100 :INFO : - patchmgr.stderr
2020-02-10 07:58:42 +0100 :INFO : - patchmgr.log
2020-02-10 07:58:42 +0100 :INFO : - patchmgr.trc
2020-02-10 07:58:42 +0100 :INFO : Exit status:0
2020-02-10 07:58:42 +0100 :INFO : Exiting.
[root@efucndb01-a patch_switch_19.3.4.0.0.200130]#

IB switch upgrade

[root@efucndb01-a patch_switch_19.3.4.0.0.200130]# ./patchmgr -ibswitches ~/ibs -upgrade
2020-02-10 07:59:22 +0100 :Working: Verify SSH equivalence for the root user to node(s)
2020-02-10 07:59:24 +0100 :SUCCESS: Verify SSH equivalence for the root user to node(s)
2020-02-10 07:59:25 +0100 1 of 1 :Working: Initiate upgrade of InfiniBand switches to 2.2.14-1. Expect up to 40 minutes for each switch
----- InfiniBand switch update process started 2020-02-10 07:59:25 +0100 -----
[NOTE ] Log file at /EXAVMIMAGES/Patch/IBs_19.3.4.0.0/patch_switch_19.3.4.0.0.200130/upgradeIBSwitch.log
[INFO ] List of InfiniBand switches for upgrade: ( efucnsw-iba01-a efucnsw-ibb01-a )
[SUCCESS ] Verifying Network connectivity to efucnsw-iba01-a
[SUCCESS ] Verifying Network connectivity to efucnsw-ibb01-a
[SUCCESS ] Validating verify-topology output
[INFO ] Proceeding with upgrade of InfiniBand switches to version 2.2.14_1
[INFO ] Master Subnet Manager is set to "efucnsw-iba01-a" in all Switches
[INFO ] ---------- Starting with InfiniBand Switch efucnsw-iba01-a
[WARNING ] Infiniband switch meets minimal version requirements, but downgrade is only available to 2.2.13-2 with the current package.
To downgrade to other versions:
Manually download the InfiniBand switch firmware package to the patch directory
Set export variable "EXADATA_IMAGE_IBSWITCH_DOWNGRADE_VERSION" to the appropriate version
Run patchmgr command to initiate downgrade.
[SUCCESS ] Verify SSH access to the patchmgr host efucndb01-a.emilianofusaglia.net from the InfiniBand Switch efucnsw-iba01-a.
[INFO ] Starting pre-update validation on efucnsw-iba01-a
[SUCCESS ] Verifying that /tmp has 150M in efucnsw-iba01-a, found 492M
[SUCCESS ] Verifying that / has 20M in efucnsw-iba01-a, found 26M
[SUCCESS ] Service opensmd is running on InfiniBand Switch efucnsw-iba01-a
[SUCCESS ] NTP daemon is running on efucnsw-iba01-a.
[INFO ] Manually validate the following entries Date:(YYYY-aM-DD) 2020-02-10 Time:(HH:MM:SS) 07:59:41
[INFO ] Validating the current firmware on the InfiniBand Switch
[SUCCESS ] Firmware verification on InfiniBand switch efucnsw-iba01-a
[SUCCESS ] Verifying that the patchmgr host efucndb01-a.emilianofusaglia.net is recognized on the InfiniBand Switch efucnsw-iba01-a through getHostByName
[SUCCESS ] Execute plugin check for Patch Check Prereq on efucnsw-iba01-a
[INFO ] Finished pre-update validation on efucnsw-iba01-a
[SUCCESS ] Pre-update validation on efucnsw-iba01-a
[INFO ] Package will be downloaded at firmware update time via scp
[SUCCESS ] Execute plugin check for Patching on efucnsw-iba01-a
[INFO ] Starting upgrade on efucnsw-iba01-a to 2.2.14_1. Please give upto 15 mins for the process to complete. DO NOT INTERRUPT or HIT CTRL+C during the upgrade
[INFO ] Rebooting efucnsw-iba01-a to complete the firmware update. Wait for 15 minutes before continuing. DO NOT MANUALLY REBOOT THE INFINIBAND SWITCH
Connection to efucndb01-a closed by remote host.
Connection to efucndb01-a closed.
2020-02-10 08:27:49 +0100 :Working: Verify SSH equivalence for the root user to node(s)
2020-02-10 08:27:51 +0100 :SUCCESS: Verify SSH equivalence for the root user to node(s)
2020-02-10 08:27:52 +0100 1 of 1 :Working: Initiate upgrade of InfiniBand switches to 2.2.14-1. Expect up to 40 minutes for each switch
----- InfiniBand switch update process started 2020-02-10 08:27:52 +0100 -----
[NOTE ] Log file at /EXAVMIMAGES/Patch/IBs_19.3.4.0.0/patch_switch_19.3.4.0.0.200130/upgradeIBSwitch.log
[INFO ] List of InfiniBand switches for upgrade: ( efucnsw-iba01-a efucnsw-ibb01-a )
[SUCCESS ] Verifying Network connectivity to efucnsw-iba01-a
[SUCCESS ] Verifying Network connectivity to efucnsw-ibb01-a
[INFO ] InfiniBand switch efucnsw-iba01-a is already at target version.
[SUCCESS ] Validating verify-topology output
[INFO ] Proceeding with upgrade of InfiniBand switches to version 2.2.14_1
[INFO ] Master Subnet Manager is set to "efucnsw-ibb01-a" in all Switches
[INFO ] ---------- Starting with InfiniBand Switch efucnsw-ibb01-a
[WARNING ] Infiniband switch meets minimal version requirements, but downgrade is only available to 2.2.13-2 with the current package.
To downgrade to other versions:
Manually download the InfiniBand switch firmware package to the patch directory
Set export variable "EXADATA_IMAGE_IBSWITCH_DOWNGRADE_VERSION" to the appropriate version
Run patchmgr command to initiate downgrade.
[SUCCESS ] Verify SSH access to the patchmgr host efucndb01-a.emilianofusaglia.net from the InfiniBand Switch efucnsw-ibb01-a.
[INFO ] Starting pre-update validation on efucnsw-ibb01-a
[SUCCESS ] Verifying that /tmp has 150M in efucnsw-ibb01-a, found 492M
[SUCCESS ] Verifying that / has 20M in efucnsw-ibb01-a, found 26M
[SUCCESS ] Service opensmd is running on InfiniBand Switch efucnsw-ibb01-a
[SUCCESS ] NTP daemon is running on efucnsw-ibb01-a.
[INFO ] Manually validate the following entries Date:(YYYY-aM-DD) 2020-02-10 Time:(HH:MM:SS) 08:28:07
[INFO ] Validating the current firmware on the InfiniBand Switch
[SUCCESS ] Firmware verification on InfiniBand switch efucnsw-ibb01-a
[SUCCESS ] Verifying that the patchmgr host efucndb01-a.emilianofusaglia.net is recognized on the InfiniBand Switch efucnsw-ibb01-a through getHostByName
[SUCCESS ] Execute plugin check for Patch Check Prereq on efucnsw-ibb01-a
[INFO ] Finished pre-update validation on efucnsw-ibb01-a
[SUCCESS ] Pre-update validation on efucnsw-ibb01-a
[INFO ] Package will be downloaded at firmware update time via scp
[SUCCESS ] Execute plugin check for Patching on efucnsw-ibb01-a
[INFO ] Starting upgrade on efucnsw-ibb01-a to 2.2.14_1. Please give upto 15 mins for the process to complete. DO NOT INTERRUPT or HIT CTRL+C during the upgrade
[INFO ] Rebooting efucnsw-ibb01-a to complete the firmware update. Wait for 15 minutes before continuing. DO NOT MANUALLY REBOOT THE INFINIBAND SWITCH
Connection to efucndb01-a closed by remote host.
Connection to efucndb01-a closed.
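
After the second switch reboots, the new firmware level can be verified directly on the switches, for instance with the version command (assuming root SSH equivalence to the switches, which patchmgr already requires):

[root@efucndb01-a ~]# ssh root@efucnsw-iba01-a version
[root@efucndb01-a ~]# ssh root@efucnsw-ibb01-a version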

Exadata X8M introduces BIG Architectural Changes

Launched at the 2019 Oracle Open World conference, the Exadata X8M introduces a few big architectural changes which make the leading Oracle Database Machine even more attractive.

Among the most relevant changes are:

  • InfiniBand Network replacement with an Ethernet network fabric
  • Intel Optane DC Persistent Memory inside the Storage Cell
  • New Remote Direct Memory Access (RDMA) functionalities
  • Replacement of the XEN Hypervisor with KVM

InfiniBand Network replacement with an Ethernet network fabric

The characteristic 40 Gbit/second InfiniBand network used for all private network communications among database nodes and storage cells has been replaced by a new 100 Gbit/second RDMA over Converged Ethernet (RoCE) fabric based on the Cisco 9336C switch.

The new network not only increases the throughput by 2.5x but also reduces the communication latency.

The schema below highlights the network architecture change for all private communications.

Intel Optane DC Persistent Memory inside the Storage Cell

Oracle has introduced 1.5 TB of Intel Optane DC Persistent Memory as an additional storage device inside every Exadata X8M Storage Cell (whether equipped with HC or EF devices), where it is used as an accelerator in front of the Flash memory cards. In terms of speed, this new type of ultra-fast storage device sits between the DRAM and the Flash memory, bringing to three the number of multi-tiered storage devices present inside the Storage Cell.

The unique Exadata software is then capable of extracting the maximum performance from this hardware configuration, automatically detecting and placing the hottest data on the Persistent Memory and reducing the I/O latency of the most critical tasks.

Ordered by speed, the Storage Cell devices are: DRAM, Persistent Memory, Flash and, on High Capacity cells, Hard Disk.

New Remote Direct Memory Access (RDMA) functionalities

Until now, RDMA was used among database nodes for exchanging Exafusion messages or for Smart Fusion Block Transfer. Starting with Exadata X8M, the RDMA technology is also used to perform direct I/O access to the Persistent Memory of the Storage Cells, bypassing the network and I/O software stacks and eliminating expensive CPU interrupts and context switches. This optimization reduces the latency by 10x, from about 200μs to less than 19μs.

The picture below highlights the “Database Node to Database Node” and the new “Database Node to Storage Cell” communications using RDMA.

Replacement of the XEN Hypervisor with KVM

Oracle's virtualization technology is called Oracle VM (OVM) and, in a production environment, it can be implemented with one of these two products:

  • Xen
  • KVM

Starting with Exadata X8M-2, the virtualization technology in use is KVM instead of Xen. Oracle started replacing Xen with KVM a few years ago, for example on the smaller engineered systems ODA X7-2M & X7-2S, but for Exadata it took longer, and I think the root cause was the InfiniBand network. In fact, KVM is not fully integrated with InfiniBand and does not support bridging.

Exadata Deployment with Elastic Configuration

Recently, for one of my customers, I had the chance to install a couple of Exadata X7-2 machines using the new Elastic Configuration. The major benefit of the Elastic Configuration is the possibility to acquire the Exadata Machine with almost any combination of Database Nodes and Storage Cells.

In the past we were bound to the standard Oracle pre-defined Exadata Machine configurations: Eighth Rack, Quarter Rack, Half Rack and Full Rack. Ordering those is still possible, but not flexible enough.

The pictures below highlight the differences between the two configurations:

Exadata Classic vs Elastic configuration

source: Oracle Data Sheet Exadata Database Machine X7-2

Deployment of the Exadata Elastic Configuration

The elastic configuration process automates the initial IP address allocation to database nodes and storage cells, regardless of the ordered configuration. The Exadata Machine is connected to the InfiniBand switches using a standard cabling methodology, which makes it possible to determine each node's location in the rack. This information is then used, when the nodes are powered up for the first time, to assign the initial default IPs.

[root@exatest-iba0 ~]# ibhosts
Ca : 0x579b0123796ba0 ports 2 "node10 elasticNode 192.168.10.17,192.168.10.18 ETH0"
Ca : 0x579b01237966e0 ports 2 "node8 elasticNode 192.168.10.15,192.168.10.16 ETH0"
Ca : 0x579b0123844ab0 ports 2 "node6 elasticNode 192.168.10.11,192.168.10.12 ETH0"
Ca : 0x579b0123845e50 ports 2 "node5 elasticNode 192.168.10.7,192.168.10.8 ETH0"
Ca : 0x579b0123845fe0 ports 2 "node4 elasticNode 192.168.10.40,172.16.2.40 ETH0"
Ca : 0x579b0123845ea0 ports 2 "node3 elasticNode 192.168.10.9,192.168.10.10 ETH0"
Ca : 0x579b0123812b90 ports 2 "node2 elasticNode 192.168.10.1,192.168.10.2 ETH0"
Ca : 0x579b0123812970 ports 2 "node1 elasticNode 192.168.10.3,192.168.10.4 ETH0"
[root@exatest-iba0 ~]#

 

 

Because the Virtualization option was required, it had to be activated at this stage:

[root@node8 ~]# /opt/oracle.SupportTools/switch_to_ovm.sh
2019-03-07 01:05:22 -0800 [INFO] Switch to DOM0 system partition /dev/VGExaDb/LVDbSys3 (/dev/mapper/VGExaDb-LVDbSys3)
2019-03-07 01:05:22 -0800 [INFO] Active system device: /dev/mapper/VGExaDb-LVDbSys1
2019-03-07 01:05:22 -0800 [INFO] Active system device in boot area: /dev/mapper/VGExaDb-LVDbSys1
2019-03-07 01:05:23 -0800 [INFO] Set active system device to /dev/VGExaDb/LVDbSys3 in /boot/I_am_hd_boot
2019-03-07 01:05:23 -0800 [INFO] Creating /.elasticConfig on DOM0 boot partition /boot
2019-03-07 01:05:34 -0800 [INFO] Reboot has been initiated to switch to the DOM0 system partition
Connection to 192.168.1.8 closed by remote host.
Connection to 192.168.1.8 closed.

After the switch-to-OVM command, it is time to reclaim the space initially used by the Linux bare-metal Logical Volumes:

[root@node8 ~]# /opt/oracle.SupportTools/reclaimdisks.sh -free -reclaim
Model is ORACLE SERVER X7-2
Number of LSI controllers: 1
Physical disks found: 4 (252:0 252:1 252:2 252:3)
Logical drives found: 1
Linux logical drive: 0
RAID Level for the Linux logical drive: 5
Physical disks in the Linux logical drive: 4 (252:0 252:1 252:2 252:3)
Dedicated Hot Spares for the Linux logical drive: 0
Global Hot Spares: 0
[INFO ] Check for DOM0 with inactive Linux system disk
[INFO ] Valid DOM0 with inactive Linux system disk is detected
[INFO ] Number of partitions on the system device /dev/sda: 3
[INFO ] Higher partition number on the system device /dev/sda: 3
[INFO ] Last sector on the system device /dev/sda: 3509760000
[INFO ] End sector of the last partition on the system device /dev/sda: 3509759966
[INFO ] Remove inactive system logical volume /dev/VGExaDb/LVDbSys1
[INFO ] Remove logical volume /dev/VGExaDb/LVDbOra1
[INFO ] Extend logical volume /dev/VGExaDb/LVDbExaVMImages
[INFO ] Resize ocfs2 on logical volume /dev/VGExaDb/LVDbExaVMImages
[INFO ] XEN boot version and rpm versions are in sync
[INFO ] XEN EFI files will not be updated
[INFO ] Force setup grub
[root@node8 ~]#

 

Check the success of the reclaim disks procedure:

[root@node8 ~]# /opt/oracle.SupportTools/reclaimdisks.sh -check
Model is ORACLE SERVER X7-2
Number of LSI controllers: 1
Physical disks found: 4 (252:0 252:1 252:2 252:3)
Logical drives found: 1
Linux logical drive: 0
RAID Level for the Linux logical drive: 5
Physical disks in the Linux logical drive: 4 (252:0 252:1 252:2 252:3)
Dedicated Hot Spares for the Linux logical drive: 0
Global Hot Spares: 0
Valid. Disks configuration: RAID5 from 4 disks with no global and dedicated hot spare disks.
Valid. Booted: DOM0. Layout: DOM0.
[root@node8 ~]#

 

Upload the Oracle Exadata Deployment Assistant (OEDA) configuration files to the database server, together with all software images, and run the OneCommand procedure.

List of all Steps

[root@exatestdbadm01 linux-x64]# ./install.sh -cf TVD-exatest.xml -l
Initializing

1. Validate Configuration File
2. Update Nodes for Eighth Rack
3. Create Virtual Machine
4. Create Users
5. Setup Cell Connectivity
6. Calibrate Cells
7. Create Cell Disks
8. Create Grid Disks
9. Install Cluster Software
10. Initialize Cluster Software
11. Install Database Software
12. Relink Database with RDS
13. Create ASM Diskgroups
14. Create Databases
15. Apply Security Fixes
16. Install Exachk
17. Create Installation Summary
18. Resecure Machine
[root@exatestdbadm01 linux-x64]#

 

Run Step One to validate the setup

This example includes the creation of three different Clusters.

[root@exatestdbadm01 linux-x64]# ./install.sh -cf TVD-exatest.xml -s 1
Initializing
Executing Validate Configuration File
Validating cluster: Cluster-EFU
Locating machines...
Verifying operating systems...
Validating cluster networks...
Validating network connectivity...
Validating private ips on virtual cluster
Validating NTP setup...
Validating physical disks on storage cells...
Validating users...
Validating cluster: Cluster-PR1
Locating machines...
Verifying operating systems...
Validating cluster networks...
Validating network connectivity...
Validating private ips on virtual cluster
Validating NTP setup...
Validating physical disks on storage cells...
Validating users...
Validating cluster: Cluster-VAL
Locating machines...
Verifying operating systems...
Validating cluster networks...
Validating network connectivity...
Validating private ips on virtual cluster
Validating NTP setup...
Validating physical disks on storage cells...
Validating users...
Validating platinum...
Validating switches...
Checking disk reclaim status...
Checking Disk Tests Status....
Completed validation...

SUCCESS: Ip address: 10.x8.xx.40 is configured correctly
SUCCESS: Ip address: 10.x9.xx.55 is configured correctly
SUCCESS: Ip address: 10.x8.xx.41 is configured correctly
SUCCESS: Ip address: 10.x9.xx.56 is configured correctly
SUCCESS: Ip address: 10.x8.xx.45 is configured correctly
SUCCESS: Ip address: 10.x8.xx.46 is configured correctly
SUCCESS: Ip address: 10.x8.xx.44 is configured correctly
SUCCESS: Ip address: 10.x8.xx.43 is configured correctly
SUCCESS: Ip address: 10.x8.xx.42 is configured correctly
SUCCESS: 10.x8.xx.40 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x9.xx.55 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.41 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x9.xx.56 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.45 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.46 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.44 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.43 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.42 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.40 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x9.xx.55 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.41 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x9.xx.56 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.45 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.46 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.44 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.43 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.42 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.40 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x9.xx.55 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.41 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x9.xx.56 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.45 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.46 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.44 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.43 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.42 configured correctly on exatestceladm03.my.domain.com
SUCCESS: Ip address: 10.x8.xx.47 is configured correctly
SUCCESS: Ip address: 10.x9.xx.57 is configured correctly
SUCCESS: Ip address: 10.x8.xx.48 is configured correctly
SUCCESS: Ip address: 10.x9.xx.58 is configured correctly
SUCCESS: Ip address: 10.x8.xx.52 is configured correctly
SUCCESS: Ip address: 10.x8.xx.51 is configured correctly
SUCCESS: Ip address: 10.x8.xx.53 is configured correctly
SUCCESS: Ip address: 10.x8.xx.50 is configured correctly
SUCCESS: Ip address: 10.x8.xx.49 is configured correctly
SUCCESS: 10.x8.xx.47 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x9.xx.57 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.48 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x9.xx.58 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.52 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.51 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.53 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.50 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.49 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.47 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x9.xx.57 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.48 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x9.xx.58 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.52 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.51 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.53 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.50 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.49 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.47 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x9.xx.57 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.48 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x9.xx.58 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.52 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.51 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.53 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.50 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.49 configured correctly on exatestceladm03.my.domain.com
SUCCESS: Ip address: 10.x8.xx.54 is configured correctly
SUCCESS: Ip address: 10.x9.xx.59 is configured correctly
SUCCESS: Ip address: 10.x8.xx.55 is configured correctly
SUCCESS: Ip address: 10.x9.xx.60 is configured correctly
SUCCESS: Ip address: 10.x8.xx.58 is configured correctly
SUCCESS: Ip address: 10.x8.xx.60 is configured correctly
SUCCESS: Ip address: 10.x8.xx.59 is configured correctly
SUCCESS: Ip address: 10.x8.xx.57 is configured correctly
SUCCESS: Ip address: 10.x8.xx.56 is configured correctly
SUCCESS: 10.x8.xx.54 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x9.xx.59 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.55 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x9.xx.60 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.58 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.60 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.59 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.57 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.56 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.54 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x9.xx.59 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.55 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x9.xx.60 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.58 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.60 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.59 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.57 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.56 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.54 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x9.xx.59 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.55 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x9.xx.60 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.58 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.60 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.59 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.57 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.56 configured correctly on exatestceladm03.my.domain.com
SUCCESS: Validated NTP server 10.x3.xx.xx0
SUCCESS: Validated NTP server 10.x3.xx.xx1
SUCCESS: Required file /EXAVMIMAGES/onecommand/linux-x64/WorkDir/p28514222_122118_Linux-x86-64.zip exists...
SUCCESS: Required file /EXAVMIMAGES/onecommand/linux-x64/WorkDir/p28762988_12201181016GIOCT2018RU_Linux-x86-64.zip exists...
SUCCESS: Required file /EXAVMIMAGES/onecommand/linux-x64/WorkDir/p28762989_12201181016DBOCT2018RU_Linux-x86-64.zip exists...
SUCCESS: Required file config/exachk.zip exists...
SUCCESS: Found Operating system LinuxPhysical and configuration file expects LinuxPhysical on machine exatestceladm03.my.domain.com, machine type: storage
SUCCESS: Found Operating system LinuxPhysical and configuration file expects LinuxPhysical on machine exatestceladm02.my.domain.com, machine type: storage
SUCCESS: Found Operating system LinuxPhysical and configuration file expects LinuxPhysical on machine exatestceladm01.my.domain.com, machine type: storage
SUCCESS: Expected machine exatestdbadm01.my.domain.com to have OS Type of Linux Dom0, and found OsType LinuxDom0
SUCCESS: Expected machine exatestdbadm02.my.domain.com to have OS Type of Linux Dom0, and found OsType LinuxDom0
SUCCESS: NTP servers on machine exatestceladm02.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestceladm01.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestceladm03.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestdbadm01.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestdbadm02.my.domain.com verified successfully
SUCCESS: Sufficient memory for all the guests on database node exatestdbadm02.my.domain.com
SUCCESS: Sufficient memory for all the guests on database node exatestdbadm01.my.domain.com
SUCCESS: Expected machine exatestdbadm02.my.domain.com to have OS Type of Linux Dom0, and found OsType LinuxDom0
SUCCESS: Found Operating system LinuxPhysical and configuration file expects LinuxPhysical on machine exatestceladm01.my.domain.com, machine type: storage
SUCCESS: Found Operating system LinuxPhysical and configuration file expects LinuxPhysical on machine exatestceladm02.my.domain.com, machine type: storage
SUCCESS: Expected machine exatestdbadm01.my.domain.com to have OS Type of Linux Dom0, and found OsType LinuxDom0
SUCCESS: Found Operating system LinuxPhysical and configuration file expects LinuxPhysical on machine exatestceladm03.my.domain.com, machine type: storage
SUCCESS: NTP servers on machine exatestceladm03.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestceladm01.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestceladm02.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestdbadm02.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestdbadm01.my.domain.com verified successfully
SUCCESS: Sufficient memory for all the guests on database node exatestdbadm02.my.domain.com
SUCCESS: Sufficient memory for all the guests on database node exatestdbadm01.my.domain.com
SUCCESS: Found Operating system LinuxPhysical and configuration file expects LinuxPhysical on machine exatestceladm03.my.domain.com, machine type: storage
SUCCESS: Found Operating system LinuxPhysical and configuration file expects LinuxPhysical on machine exatestceladm02.my.domain.com, machine type: storage
SUCCESS: Found Operating system LinuxPhysical and configuration file expects LinuxPhysical on machine exatestceladm01.my.domain.com, machine type: storage
SUCCESS: Expected machine exatestdbadm02.my.domain.com to have OS Type of Linux Dom0, and found OsType LinuxDom0
SUCCESS: Expected machine exatestdbadm01.my.domain.com to have OS Type of Linux Dom0, and found OsType LinuxDom0
SUCCESS: NTP servers on machine exatestceladm03.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestceladm02.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestceladm01.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestdbadm01.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestdbadm02.my.domain.com verified successfully
SUCCESS: Sufficient memory for all the guests on database node exatestdbadm02.my.domain.com
SUCCESS: Sufficient memory for all the guests on database node exatestdbadm01.my.domain.com
SUCCESS: Switch IP 10.x9.xx.51 resolves successfully to host exatest-iba0.my.domain.com on node exatestceladm03.my.domain.com
SUCCESS:
SUCCESS: Switch IP 10.x9.xx.51 resolves successfully to host exatest-iba0.my.domain.com on node exatestceladm02.my.domain.com
SUCCESS: Switch IP 10.x9.xx.52 resolves successfully to host exatest-ibb0.my.domain.com on node exatestceladm03.my.domain.com
SUCCESS:
SUCCESS:
SUCCESS:
SUCCESS: Switch IP 10.x9.xx.52 resolves successfully to host exatest-ibb0.my.domain.com on node exatestceladm02.my.domain.com
SUCCESS:
SUCCESS: Switch IP 10.x9.xx.51 resolves successfully to host exatest-iba0.my.domain.com on node exatestceladm01.my.domain.com
SUCCESS: Switch IP 10.x9.xx.52 resolves successfully to host exatest-ibb0.my.domain.com on node exatestceladm01.my.domain.com
SUCCESS:
SUCCESS: X7 compute node exatestdbadm01.my.domain.com has updated Broadcom firmware
SUCCESS: X7 compute node exatestdbadm02.my.domain.com has updated Broadcom firmware
SUCCESS: Disk Tests are not running/active on any of the Storage Servers.
SUCCESS: Cluster Version 12.2.0.1.181016 is compatible with OL7 on exatestdbadm01
SUCCESS: Cluster Version 12.2.0.1.181016 is compatible with OL7 on exatestdbadm02
SUCCESS: Cluster Version 12.2.0.1.181016 is compatible with OL7 on exatestdbadm01
SUCCESS: Cluster Version 12.2.0.1.181016 is compatible with OL7 on exatestdbadm02
SUCCESS: Cluster Version 12.2.0.1.181016 is compatible with OL7 on exatestdbadm01
SUCCESS: Cluster Version 12.2.0.1.181016 is compatible with OL7 on exatestdbadm02
SUCCESS: Disk size 10000GB on cell exatestceladm01.my.domain.com matches the value specified in the OEDA configuration file
SUCCESS: Disk size 10000GB on cell exatestceladm02.my.domain.com matches the value specified in the OEDA configuration file
SUCCESS: Disk size 10000GB on cell exatestceladm03.my.domain.com matches the value specified in the OEDA configuration file
SUCCESS: Disk size 10000GB on cell exatestceladm04.my.domain.com matches the value specified in the OEDA configuration file
SUCCESS: Disk size 10000GB on cell exatestceladm05.my.domain.com matches the value specified in the OEDA configuration file
SUCCESS: Disk size 10000GB on cell exatestceladm06.my.domain.com matches the value specified in the OEDA configuration file
Successfully completed execution of step Validate Configuration File [elapsed Time [Elapsed = 250301 mS [4.0 minutes] Thu Mar 07 12:35:31 CET 2019]]
[root@exatestdbadm01 linux-x64]#

 

 

Execution of all remaining steps

Then, because we felt confident, we decided to invoke all remaining steps together:

[root@exatestdbadm01 linux-x64]# ./install.sh -cf TVD-exatest.xml -r 1-18
...
..

 

The final result is the Exadata Machine installed with six Oracle VMs and three Grid Infrastructure clusters, each one running a test RAC database.

 

 

Oracle VM Server 3.4.5 – Kernel Memory Leak

 

Oracle VM Server version 3.4.5 suffers from instability due to an Oracle Unbreakable Enterprise Kernel (UEK) bug.

The kernel version 4.1.12-124.14.5.el6uek.x86_64 has introduced a memory leak in the network module i40e.
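
A simple way to spot the symptom before the OOM killer strikes is to confirm the affected kernel release and keep an eye on the kernel (slab) memory growth over time, for example:

[root@efuovs02 ~]# uname -r
4.1.12-124.14.5.el6uek.x86_64
[root@efuovs02 ~]# grep -E 'MemFree|Slab|SUnreclaim' /proc/meminfo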

 

Here are the backtraces collected from /var/log/messages after the problem occurred:

Dec 12 07:06:30 efuovs02 kernel: [1508192.885203] ntpd invoked oom-killer: gfp_mask=0x200da, order=0, oom_score_adj=0
Dec 12 07:06:30 efuovs02 kernel: [1508192.885208] ntpd cpuset=/ mems_allowed=0
Dec 12 07:06:30 efuovs02 kernel: [1508192.885217] CPU: 3 PID: 4751 Comm: ntpd Not tainted 4.1.12-124.14.5.el6uek.x86_64 #2
Dec 12 07:06:30 efuovs02 kernel: [1508192.885221] Hardware name: HPE ProLiant DL360 Gen10/ProLiant DL360 Gen10, BIOS U32 02/14/2018
Dec 12 07:06:30 efuovs02 kernel: [1508192.885224]  0000000000000000 ffff8804484cf678 ffffffff816e4bdb ffff88044d44aa00
Dec 12 07:06:30 efuovs02 kernel: [1508192.885230]  0000000000000000 ffff8804484cf708 ffffffff816e32d1 01ff8804484cf688
Dec 12 07:06:30 efuovs02 kernel: [1508192.885235]  ffff8804484cf718 ffff8804484cf6c8 ffffffff811fc561 ffff8804484cf800
Dec 12 07:06:30 efuovs02 kernel: [1508192.885241] Call Trace:
Dec 12 07:06:30 efuovs02 kernel: [1508192.885251]  [<ffffffff816e4bdb>] dump_stack+0x63/0x81
Dec 12 07:06:30 efuovs02 kernel: [1508192.885256]  [<ffffffff816e32d1>] dump_header+0x7f/0x1f3
Dec 12 07:06:30 efuovs02 kernel: [1508192.885264]  [<ffffffff811fc561>] ? vmpressure+0x21/0x90
Dec 12 07:06:30 efuovs02 kernel: [1508192.885272]  [<ffffffff8118e53c>] oom_kill_process+0x1cc/0x3c0
Dec 12 07:06:30 efuovs02 kernel: [1508192.885283]  [<ffffffff8108de0e>] ? has_capability_noaudit+0x1e/0x30
Dec 12 07:06:31 efuovs02 kernel: [1508192.885288]  [<ffffffff8118eaab>] __out_of_memory+0x31b/0x530
Dec 12 07:06:31 efuovs02 kernel: [1508192.885294]  [<ffffffff8118ee5b>] out_of_memory+0x5b/0x80
Dec 12 07:06:31 efuovs02 kernel: [1508192.885300]  [<ffffffff81194d42>] __alloc_pages_nodemask+0x952/0xab0
Dec 12 07:06:31 efuovs02 kernel: [1508192.885307]  [<ffffffff811de28d>] alloc_pages_vma+0xbd/0x260
Dec 12 07:06:31 efuovs02 kernel: [1508192.885311]  [<ffffffff8118a59e>] ? find_get_entry+0x1e/0xc0
Dec 12 07:06:31 efuovs02 kernel: [1508192.885317]  [<ffffffff811ce6bd>] read_swap_cache_async+0xed/0x170
Dec 12 07:06:31 efuovs02 kernel: [1508192.885322]  [<ffffffff811ce82d>] swapin_readahead+0xed/0x190
Dec 12 07:06:31 efuovs02 kernel: [1508192.885328]  [<ffffffff811bbfe0>] handle_mm_fault+0x12d0/0x1770
Dec 12 07:06:31 efuovs02 kernel: [1508192.885335]  [<ffffffff8121d910>] ? poll_select_copy_remaining+0x130/0x130
Dec 12 07:06:31 efuovs02 kernel: [1508192.885340]  [<ffffffff8106d57f>] __do_page_fault+0x1af/0x480
Dec 12 07:06:31 efuovs02 kernel: [1508192.885346]  [<ffffffff816f2c1c>] ? page_fault+0xcc/0x120
Dec 12 07:06:31 efuovs02 kernel: [1508192.885350]  [<ffffffff8106d87f>] do_page_fault+0x2f/0x80
Dec 12 07:06:31 efuovs02 kernel: [1508192.885354]  [<ffffffff816f2be4>] ? page_fault+0x94/0x120
Dec 12 07:06:31 efuovs02 kernel: [1508192.885359]  [<ffffffff816f2bdd>] ? page_fault+0x8d/0x120
Dec 12 07:06:31 efuovs02 kernel: [1508192.885363]  [<ffffffff816f2bd6>] ? page_fault+0x86/0x120
Dec 12 07:06:31 efuovs02 kernel: [1508192.885367]  [<ffffffff816f2c5f>] page_fault+0x10f/0x120
Dec 12 07:06:31 efuovs02 kernel: [1508192.885375]  [<ffffffff813316c5>] ? copy_user_enhanced_fast_string+0x5/0x10
Dec 12 07:06:31 efuovs02 kernel: [1508192.885379]  [<ffffffff8121d7d1>] ? set_fd_set+0x21/0x30
Dec 12 07:06:31 efuovs02 kernel: [1508192.885384]  [<ffffffff8121e5aa>] core_sys_select+0x1fa/0x2f0
Dec 12 07:06:31 efuovs02 kernel: [1508192.885392]  [<ffffffff810f8fc3>] ? ntp_notify_cmos_timer+0x23/0x30
Dec 12 07:06:31 efuovs02 kernel: [1508192.885396]  [<ffffffff810f8a1d>] ? do_adjtimex+0xed/0x100
Dec 12 07:06:31 efuovs02 kernel: [1508192.885402]  [<ffffffff810ed3ac>] ? SYSC_adjtimex+0x4c/0x80
Dec 12 07:06:31 efuovs02 kernel: [1508192.885410]  [<ffffffff810209e9>] ? read_tsc+0x9/0x10
Dec 12 07:06:31 efuovs02 kernel: [1508192.885414]  [<ffffffff810f68cb>] ? ktime_get_ts64+0x4b/0x110
Dec 12 07:06:31 efuovs02 kernel: [1508192.885419]  [<ffffffff8121e74b>] SyS_select+0xab/0x100
Dec 12 07:06:31 efuovs02 kernel: [1508192.885424]  [<ffffffff816ed451>] ? system_call_after_swapgs+0xdb/0x18c
Dec 12 07:06:31 efuovs02 kernel: [1508192.885428]  [<ffffffff816ed51a>] system_call_fastpath+0x18/0xd4
Dec 12 07:06:31 efuovs02 kernel: [1508192.885457] Mem-Info:
Dec 12 07:06:31 efuovs02 kernel: [1508192.885469] active_anon:1452 inactive_anon:1426 isolated_anon:65
Dec 12 07:06:31 efuovs02 kernel: [1508192.885469]  active_file:4559 inactive_file:873 isolated_file:0
Dec 12 07:06:31 efuovs02 kernel: [1508192.885469]  unevictable:1547 dirty:20 writeback:31 unstable:0
Dec 12 07:06:31 efuovs02 kernel: [1508192.885469]  slab_reclaimable:6776 slab_unreclaimable:8649
Dec 12 07:06:31 efuovs02 kernel: [1508192.885469]  mapped:3007 shmem:0 pagetables:1705 bounce:0
Dec 12 07:06:31 efuovs02 kernel: [1508192.885469]  free:33536 free_pcp:918 free_cma:0
Dec 12 07:06:31 efuovs02 kernel: [1508192.885483] Node 0 DMA free:15740kB min:60kB low:72kB high:84kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15988kB managed:15900kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
Dec 12 07:06:31 efuovs02 kernel: [1508192.885499] lowmem_reserve[]: 0 2661 15921 15921
Dec 12 07:06:31 efuovs02 kernel: [1508192.885508] Node 0 DMA32 free:64064kB min:11076kB low:13844kB high:16612kB active_anon:5912kB inactive_anon:5876kB active_file:112kB inactive_file:52kB unevictable:836kB isolated(anon):256kB isolated(file):0kB present:2781336kB managed:2751088kB mlocked:836kB dirty:0kB writeback:0kB mapped:668kB shmem:0kB slab_reclaimable:4792kB slab_unreclaimable:6452kB kernel_stack:912kB pagetables:1692kB unstable:0kB bounce:0kB free_pcp:1156kB local_pcp:248kB free_cma:0kB writeback_tmp:0kB pages_scanned:619296 all_unreclaimable? yes
Dec 12 07:06:31 efuovs02 kernel: [1508192.885524] lowmem_reserve[]: 0 0 13260 13260
Dec 12 07:06:31 efuovs02 kernel: [1508192.885532] Node 0 Normal free:54340kB min:54392kB low:67988kB high:81584kB active_anon:0kB inactive_anon:0kB active_file:18124kB inactive_file:3440kB unevictable:5352kB isolated(anon):4kB isolated(file):0kB present:13979888kB managed:13534768kB mlocked:5352kB dirty:80kB writeback:124kB mapped:11360kB shmem:0kB slab_reclaimable:22312kB slab_unreclaimable:28144kB kernel_stack:2880kB pagetables:5128kB unstable:0kB bounce:0kB free_pcp:2516kB local_pcp:572kB free_cma:0kB writeback_tmp:0kB pages_scanned:129384 all_unreclaimable? yes
Dec 12 07:06:31 efuovs02 kernel: [1508192.885546] lowmem_reserve[]: 0 0 0 0
Dec 12 07:06:31 efuovs02 kernel: [1508192.885551] Node 0 DMA: 1*4kB (U) 1*8kB (U) 1*16kB (U) 1*32kB (U) 1*64kB (U) 0*128kB 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (R) 3*4096kB (M) = 15740kB
Dec 12 07:06:31 efuovs02 kernel: [1508192.885573] Node 0 DMA32: 143*4kB (UE) 111*8kB (UEM) 209*16kB (UE) 140*32kB (UE) 94*64kB (UEM) 57*128kB (UEM) 28*256kB (UEM) 7*512kB (UEM) 4*1024kB (EM) 9*2048kB (MR) 2*4096kB (MR) = 64068kB
Dec 12 07:06:31 efuovs02 kernel: [1508192.885596] Node 0 Normal: 8736*4kB (UEM) 1360*8kB (UEM) 208*16kB (UEM) 32*32kB (UE) 2*64kB (UE) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB (R) = 54400kB
Dec 12 07:06:31 efuovs02 kernel: [1508192.885615] 8002 total pagecache pages
Dec 12 07:06:31 efuovs02 kernel: [1508192.885618] 1494 pages in swap cache
Dec 12 07:06:31 efuovs02 kernel: [1508192.885621] Swap cache stats: add 3717250313, delete 3717248819, find 2895172777/5168362256
Dec 12 07:06:31 efuovs02 kernel: [1508192.885624] Free swap  = 4129656kB
Dec 12 07:06:31 efuovs02 kernel: [1508192.885626] Total swap = 4194300kB
Dec 12 07:06:31 efuovs02 kernel: [1508192.885628] 4194303 pages RAM
Dec 12 07:06:31 efuovs02 kernel: [1508192.885630] 0 pages HighMem/MovableOnly
Dec 12 07:06:31 efuovs02 kernel: [1508192.885632] 118864 pages reserved
Dec 12 07:06:31 efuovs02 kernel: [1508192.885634] 0 pages cma reserved
Dec 12 07:06:31 efuovs02 kernel: [1508192.885636] 0 pages hwpoisoned
Dec 12 07:06:31 efuovs02 kernel: [1508192.885638] [ pid ]   uid  tgid total_vm      rss nr_ptes nr_pmds swapents oom_score_adj name
Dec 12 07:06:31 efuovs02 kernel: [1508192.885650] [  983]     0   983     2677      266      11       3      111         -1000 udevd
Dec 12 07:06:31 efuovs02 kernel: [1508192.885657] [ 3783]     0  3783   125771     1414      48       5        0         -1000 multipathd
Dec 12 07:06:31 efuovs02 kernel: [1508192.885662] [ 4334]     0  4334     6944      399      15       3      108         -1000 auditd
Dec 12 07:06:31 efuovs02 kernel: [1508192.885667] [ 4368]     0  4368    61281      438      23       3      377             0 rsyslogd
Dec 12 07:06:31 efuovs02 kernel: [1508192.885671] [ 4383]     0  4383     2832      352      11       3      164             0 irqbalance
Dec 12 07:06:31 efuovs02 kernel: [1508192.885675] [ 4412]    32  4412     4760      397      16       3       74             0 rpcbind
Dec 12 07:06:31 efuovs02 kernel: [1508192.885680] [ 4436]    29  4436     5853      354      17       3      112             0 rpc.statd
Dec 12 07:06:31 efuovs02 kernel: [1508192.885684] [ 4481]     0  4481     5790        0      15       3       50             0 rpc.idmapd
Dec 12 07:06:31 efuovs02 kernel: [1508192.885689] [ 4522]     0  4522     2106      268      12       5       28             0 fcoemon
Dec 12 07:06:31 efuovs02 kernel: [1508192.885694] [ 4537]    81  4537     5373        0      15       3       62             0 dbus-daemon
Dec 12 07:06:31 efuovs02 kernel: [1508192.885698] [ 4612]     0  4612     1030      310       9       5       41             0 o2hbmonitor
Dec 12 07:06:31 efuovs02 kernel: [1508192.885702] [ 4632]     0  4632    47286      410      51       3      221             0 cupsd
Dec 12 07:06:31 efuovs02 kernel: [1508192.885706] [ 4692]     0  4692     1039      323       9       5       30             0 acpid
Dec 12 07:06:31 efuovs02 kernel: [1508192.885711] [ 4718]     0  4718     1580      211       8       5       27             0 mcelog
Dec 12 07:06:31 efuovs02 kernel: [1508192.885715] [ 4738]     0  4738    16579      304      34       3      185         -1000 sshd
Dec 12 07:06:32 efuovs02 kernel: [1508192.885720] [ 4751]    38  4751     6644      567      18       3      162             0 ntpd
Dec 12 07:06:32 efuovs02 kernel: [1508192.885724] [ 4796]     0  4796     3235      331      15       6      143             0 xenstored
Dec 12 07:06:32 efuovs02 kernel: [1508192.885729] [ 4803]     0  4803    21126      307      21       6       69             0 xenconsoled
Dec 12 07:06:32 efuovs02 kernel: [1508192.885733] [ 4807]     0  4807    53362      393      65       3      525             0 qemu-system-i38
Dec 12 07:06:32 efuovs02 kernel: [1508192.885737] [ 4910]     0  4910    20252      478      45       3      239             0 master
Dec 12 07:06:32 efuovs02 kernel: [1508192.885742] [ 4922]    89  4922    20315      486      46       3      238             0 qmgr
Dec 12 07:06:32 efuovs02 kernel: [1508192.885746] [ 4930]     0  4930    29223      395      16       3      171             0 crond
Dec 12 07:06:32 efuovs02 kernel: [1508192.885750] [ 5036]     0  5036     5291      283      15       3       67             0 atd
Dec 12 07:06:32 efuovs02 kernel: [1508192.885755] [ 5345]     0  5345    38468      249      14       5       30             0 osmdaemon
Dec 12 07:06:32 efuovs02 kernel: [1508192.885759] [ 5366]     0  5366    85597      650      61       7     1514             0 python
Dec 12 07:06:32 efuovs02 kernel: [1508192.885764] [ 5378]     0  5378    24079      467      27       6      113             0 ovmport
Dec 12 07:06:32 efuovs02 kernel: [1508192.885768] [ 5390]     0  5390    60521      410      65       6      920             0 ovmwatch
Dec 12 07:06:32 efuovs02 kernel: [1508192.885772] [ 5405]     0  5405   208969      656      87       7     1558             0 python
Dec 12 07:06:32 efuovs02 kernel: [1508192.885777] [ 5772]     0  5772   177327     1015      89       6     1775             0 python
Dec 12 07:06:32 efuovs02 kernel: [1508192.885782] [ 5789]     0  5789    49154      741      71       7     1366             0 python
Dec 12 07:06:32 efuovs02 kernel: [1508192.885786] [ 5831]     0  5831    82559      555      70       6     1491             0 devmon
Dec 12 07:06:32 efuovs02 kernel: [1508192.885790] [ 5901]     0  5901     1031      292       9       5       18             0 mingetty
Dec 12 07:06:32 efuovs02 kernel: [1508192.885794] [ 5903]     0  5903     1031      292       8       5       19             0 mingetty
Dec 12 07:06:32 efuovs02 kernel: [1508192.885798] [ 5905]     0  5905     1031      292       9       5       19             0 mingetty
Dec 12 07:06:32 efuovs02 kernel: [1508192.885802] [ 5907]     0  5907     1031      292       9       5       19             0 mingetty
Dec 12 07:06:32 efuovs02 kernel: [1508192.885806] [ 5909]     0  5909     1031      292       9       5       19             0 mingetty
Dec 12 07:06:32 efuovs02 kernel: [1508192.885812] [26455]     0 26455    11091      458      44       5      108             0 socat
Dec 12 07:06:32 efuovs02 kernel: [1508192.885816] [27087]     0 27087    11091      458      45       5      108             0 socat
Dec 12 07:06:32 efuovs02 kernel: [1508192.885820] [27845]     0 27845    11091      458      44       5      109             0 socat
Dec 12 07:06:32 efuovs02 kernel: [1508192.885825] [27996]     0 27996    11091      458      44       5      107             0 socat
Dec 12 07:06:32 efuovs02 kernel: [1508192.885829] [14189]     0 14189    11091      458      44       5      109             0 socat
Dec 12 07:06:32 efuovs02 kernel: [1508192.885833] [16371]     0 16371    11091      458      44       5      109             0 socat
Dec 12 07:06:32 efuovs02 kernel: [1508192.885838] [14238]     0 14238     2676      256      11       3      129         -1000 udevd
Dec 12 07:06:32 efuovs02 kernel: [1508192.885842] [14374]     0 14374     2676      240      11       3      119         -1000 udevd
Dec 12 07:06:32 efuovs02 kernel: [1508192.885846] [15869]     0 15869    22957      931      62       6     1730             0 python
Dec 12 07:06:32 efuovs02 kernel: [1508192.885851] [16935]     0 16935    28695     2029      16       5       64             0 OSWatcher
Dec 12 07:06:32 efuovs02 kernel: [1508192.885855] [ 5867]    89  5867    20272     1250      45       3      229             0 pickup
Dec 12 07:06:32 efuovs02 kernel: [1508192.885860] [ 8948]     0  8948    27070      682      17       5       71             0 vmsub
Dec 12 07:06:32 efuovs02 kernel: [1508192.885864] [ 8951]     0  8951    27070      675      17       5       77             0 mpsub
Dec 12 07:06:32 efuovs02 kernel: [1508192.885868] [ 8953]     0  8953     1581      328      11       5       42             0 vmstat
Dec 12 07:06:32 efuovs02 kernel: [1508192.885872] [ 8958]     0  8958    27070      659      17       6       41             0 iosub
Dec 12 07:06:32 efuovs02 kernel: [1508192.885876] [ 8959]     0  8959    25258      441      13       5       46             0 mpstat
Dec 12 07:06:32 efuovs02 kernel: [1508192.885880] [ 8966]     0  8966    25261      420      11       5       19             0 iostat
Dec 12 07:06:32 efuovs02 kernel: [1508192.885884] [ 8971]     0  8971    27070      679      17       5        0             0 xtop
Dec 12 07:06:32 efuovs02 kernel: [1508192.885888] [ 8976]     0  8976    27070      695      17       5        0             0 psmemsub
Dec 12 07:06:32 efuovs02 kernel: [1508192.885892] [ 8977]     0  8977     3771      483      20       5        3             0 top
Dec 12 07:06:32 efuovs02 kernel: [1508192.885896] [ 8980]     0  8980    27070      680      17       5        0             0 oswsub
Dec 12 07:06:32 efuovs02 kernel: [1508192.885901] [ 8985]     0  8985    28695     1794      15       5      131             0 OSWatcher
Dec 12 07:06:32 efuovs02 kernel: [1508192.885905] [ 8986]     0  8986    27564      523      19       5        8             0 ps
Dec 12 07:06:32 efuovs02 kernel: [1508192.885909] [ 8987]     0  8987    27070       54      12       5        0             0 psmemsub
Dec 12 07:06:32 efuovs02 kernel: [1508192.885913] [ 8988]     0  8988    27070       52      11       5        0             0 oswsub
Dec 12 07:06:32 efuovs02 kernel: [1508192.885917] Out of memory: Kill process 5772 (python) score 0 or sacrifice child
Dec 12 07:06:32 efuovs02 kernel: [1508192.886216] Killed process 5772 (python) total-vm:709308kB, anon-rss:0kB, file-rss:4060kB

 

 

How to fix the OVS Kernel Memory Leak

Download the following kernel version, which includes the memory leak fix for the i40e module: link to Oracle RPM repository

kernel-uek-4.1.12-124.21.1.el6uek.x86_64.rpm
kernel-uek-firmware-4.1.12-124.21.1.el6uek.noarch.rpm


[root@efuovs02 new_Kernel]# rpm -qp --changelog kernel-uek-4.1.12-124.21.1.el6uek.x86_64.rpm | grep -B 3 28228724
warning: kernel-uek-4.1.12-124.21.1.el6uek.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
* Tue Oct 30 2018 Brian Maly <brian.maly@oracle.com> [4.1.12-124.20.8.el6uek] 
- scsi: lpfc: devloss timeout race condition caused null pointer reference (James Smart) [Orabug: 27994179] 
- scsi: qla2xxx: Fix race condition between iocb timeout and initialisation (Ben Hutchings) [Orabug: 28013813] 
- i40e: Add programming descriptors to cleaned_count (Alexander Duyck) [Orabug: 28228724] 
- i40e: Fix memory leak related filter programming status (Alexander Duyck) [Orabug: 28228724]

 

 

Install the new OVS Kernel

Using the steps reported below, the new kernel was installed on all OVS servers of the farm.

[root@efuovs02 new_Kernel]# rpm -ivh kernel*
warning: kernel-uek-4.1.12-124.21.1.el6uek.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing... ########################################### [100%]
1:kernel-uek-firmware ########################################### [ 50%]
2:kernel-uek ########################################### [100%]
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.1.12-124.21.1.el6uek.x86_64
Found linux image: /boot/vmlinuz-4.1.12-124.14.5.el6uek.x86_64
Found initrd image: /boot/initramfs-4.1.12-124.14.5.el6uek.x86_64.img
done
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.1.12-124.21.1.el6uek.x86_64
Found initrd image: /boot/initramfs-4.1.12-124.21.1.el6uek.x86_64.img
Found linux image: /boot/vmlinuz-4.1.12-124.14.5.el6uek.x86_64
Found initrd image: /boot/initramfs-4.1.12-124.14.5.el6uek.x86_64.img
done
[root@efuovs02 new_Kernel]#

....

[root@efuovs02 new_Kernel]# reboot

[root@efuovs02 ~]# uname -a
Linux efuovs02 4.1.12-124.21.1.el6uek.x86_64 #2 SMP Tue Nov 6 13:31:13 PST 2018 x86_64 x86_64 x86_64 GNU/Linux

 

 

 

 

Exadata: How to Safely Erase All Data

When the time arrives to decommission an environment with sensitive data, we are frequently confronted with the problem of how to certify to our customer or management the erasure of all data and logs.

On Exadata, starting with software release 12.2.1.1.0, this problem has been elegantly solved by Oracle with a new utility called Secure Eraser, which securely erases data on hard drives, flash devices and internal USBs, and resets ILOM to factory default.

 

In earlier software versions, the Exadata Storage Software already included CellCLI commands to securely erase the user data:

CellCLI> DROP GRIDDISK ALL FLASHDISK PREFIX=DATA, ERASE=7pass
CellCLI> DROP GRIDDISK ALL PREFIX=DATA, ERASE=3pass

and

CellCLI> DROP CELLDISK ALL FLASHDISK ERASE=7pass 
CellCLI> DROP CELL ERASE=3pass

Unfortunately those commands only cover the user data stored on the Storage Cells, and none of them produces an official certificate summarizing the actions taken to guarantee the wipe of the data. Secure Eraser, on the other hand, does all of this on both Compute and Storage nodes, sanitizing every type of device: user data, OS logs and network configurations.

 

Depending on the Exadata model, a subset of the following options to execute Secure Eraser is available:

  • Automatic Secure Eraser through PXE Boot
  • Interactive Secure Eraser through PXE Boot
  • Interactive Secure Eraser through Network Boot
  • Interactive Secure Eraser through External USB

 


 

Recently I used Secure Eraser through External USB on an Exadata X7-2 machine; the different steps are reported below.

 

Copy the Secure Eraser Diagnostic image from MOS 2180963.1 to a USB stick.

 # dd if=image_diagnostics_18.1.4.0.0_LINUX.X64_180125.3-1.x86_64.usb of=/dev/sdb
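
Note that /dev/sdb above is only an example: before running dd, double-check which block device corresponds to the USB stick, for instance with lsblk:

 # lsblk -o NAME,SIZE,TYPE,TRAN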

 

Boot the server using the USB device with the Secure Eraser Diagnostic image

Exa_BootList.jpg

 

After login, start the Secure Erase process

/usr/sbin/secureeraser --erase --all --flash_erasure_method=7pass --hdd_erasure_method=3pass --technician=Emiliano_Fusaglia --witness=Mario_Bros --output=/mnt/iso

 

 

At the end of the erase process, a Data Erasure Certificate similar to the example below will be available in TXT, HTML and PDF format.

Exa_SecureErase_Report


 

 

 

Feedback on Modern Consolidated Database Environments

 

Since the launch of Oracle 12c R1 Beta Program (August 2012) at Trivadis, we have been intensively testing, engineering and implementing Multitenant architectures for our customers.

Today, we can provide our feedback and that of our customers!

The overall feedback related to Oracle Multitenant is very positive: customers have been able to increase flexibility and automation, improving the efficiency of their software development life cycles.

Even the Single-tenant configuration (free of charge) brings a few advantages compared to the non-CDB architecture. Therefore, from a technology point of view I recommend adopting the Container Database (CDB) architecture for all Oracle databases.

 

Examples of Multitenant architectures implemented

Oracle Multitenant is a technological revolution in the relational database space; when combined with other 12c features it becomes a game changer for flexibility, automation and velocity.

Listed here are a few examples of successful architectures implemented with our customers, using the Oracle Container Database (CDB):

 

  • Database consolidation without performance and stability compromise here.

 

  • Multitenant and DevOps here.

 

  • Operating Database Disaster Recovery in Multitenant environment here.

 

 


 

RHEL 7.4 fails to mount ACFS File System due to KMOD package

After a fresh OS installation or an upgrade to RHEL 7.4, any attempt to install ACFS drivers will fail with the following message: “ACFS-9459 ADVM/ACFS is not supported on this OS version”

The error persists even if the Oracle Grid Infrastructure software includes the  Patch 26247490: 12.2 ACFS MODULE ERRORS & CRASH DURING MODULE LOAD & UNLOAD WITH OL7U4 RHCK.

 

This problem has been identified by Oracle with BUG 26320387 – 7.4 kmod weak-modules not checking kABI compatibility correctly,

and by Red Hat Bugzilla bug 1477073 – 7.4 kmod weak-modules –dry-run changed output format missing ‘is compatible’ messages.

root@oel7node06:/u01/app/12.2.0.1/grid/crs/install# /u01/app/12.2.0.1/grid/bin/acfsroot install
ACFS-9459: ADVM/ACFS is not supported on this OS version: '3.10.0-514.6.1.el7.x86_64'

root@oel7node06:~# /sbin/lsmod | grep oracle
oracleadvm 776830 7
oracleoks 654476 1 oracleadvm
oracleafd 205543 1

 

The current workaround consists in downgrading the kmod RPM to kmod-20-9.el7.x86_64.

root@oel7node06:~# yum downgrade kmod-20-9.el7
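
After the downgrade, the ACFS driver installation can be re-attempted; a minimal sketch, reusing the same Grid Infrastructure home shown above:

root@oel7node06:~# /u01/app/12.2.0.1/grid/bin/acfsroot install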

 

After the package downgrade the ACFS drivers are correctly loaded:

root@oel7node06:~# /sbin/lsmod | grep oracle
oracleacfs 4597925 2
oracleadvm 776830 8
oracleoks 654476 2 oracleacfs,oracleadvm
oracleafd 205543 1

 


 

 

 

Adding Flexibility to Oracle GI by Implementing Multiple SCANs

Nowadays business requirements force IT to implement ever more sophisticated and consolidated environments, without compromising the availability, performance and flexibility of each application running on them.

In this post, I explain how to improve Grid Infrastructure network flexibility by implementing multiple SCANs, and how to associate one or more networks with the Oracle databases.

To better understand the reasons for this type of implementation, a few common use cases are listed below:

  • Applications are deployed on different/dedicated subnets.
  • Network isolation due to security requirement.
  • Different database protocols are in use (TCP, TCPS, etc.).

 

 

Single Client Access Name (SCAN)

By default on each Oracle Grid Infrastructure cluster, independently of the number of nodes, one SCAN with 3 SCAN VIPs is created.

Below is depicted the default Oracle Clusterware network/SCAN configuration.

 

Single_Scan_Listener

 

Multiple Single Client Access Name (SCAN) implementation

Before implementing additional SCANs, the OS provisioning of new network interfaces or new VLAN tagging has to be completed.

The current example uses the second option (VLAN tagging): the bond0 interface is an Active/Active setup of two 10GbE cards, to which a VLAN tag has been added.

Below is represented the customized Oracle Clusterware network/SCAN configuration, having added a second SCAN.

 

Multi_Scan_Listeners

 

Step-by-step implementation

After completing the OS network setup, as grid owner add the new interface to the Grid Infrastructure:

grid@host01a:~# oifcfg setif -global bond0.764/10.15.69.0:public

grid@host01a:~# oifcfg getif
eno49 192.168.7.32 global cluster_interconnect,asm
eno50 192.168.9.48 global cluster_interconnect,asm
bond0 10.11.8.0 global public
bond0.764 10.15.69.0 global public
grid@host01a:~#

 

Then as root create the network number 2 and display the configuration:

root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add network -netnum 2 -subnet 10.15.69.0/255.255.255.0/bond0.764 -nettype STATIC

root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl config network -netnum 2
Network 2 exists
Subnet IPv4: 10.15.69.0/255.255.255.0/, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:

 

As root user add the node VIPs:

root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add vip -node host01a -netnum 2 -address host01b-vip.emilianofusaglia.net/255.255.255.0
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add vip -node host02a -netnum 2 -address host02b-vip.emilianofusaglia.net/255.255.255.0
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add vip -node host03a -netnum 2 -address host03b-vip.emilianofusaglia.net/255.255.255.0
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add vip -node host04a -netnum 2 -address host04b-vip.emilianofusaglia.net/255.255.255.0
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add vip -node host05a -netnum 2 -address host05b-vip.emilianofusaglia.net/255.255.255.0
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add vip -node host06a -netnum 2 -address host06b-vip.emilianofusaglia.net/255.255.255.0

 

As grid user  create a new listener based on the network number 2:

grid@host01a:~# srvctl add listener -listener LISTENER2 -netnum 2 -endpoints "TCP:1532"

 

As root user add the new SCAN to the network number 2:

 root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add scan -scanname scan-02.emilianofusaglia.net -netnum 2

 

As root user start the new node VIPs:

root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl start vip -vip host01b-vip.emilianofusaglia.net
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl start vip -vip host02b-vip.emilianofusaglia.net
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl start vip -vip host03b-vip.emilianofusaglia.net
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl start vip -vip host04b-vip.emilianofusaglia.net
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl start vip -vip host05b-vip.emilianofusaglia.net
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl start vip -vip host06b-vip.emilianofusaglia.net

 

As grid user start the new node Listeners:

grid@host01a:~# srvctl start listener -listener LISTENER2
grid@host01a:~# srvctl status listener -listener LISTENER2
Listener LISTENER2 is enabled
Listener LISTENER2 is running on node(s): host01a,host02a,host03a,host04a,host05a,host06a

 

As root user start the new SCAN and as grid user check the configuration:

root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl start scan -netnum 2

grid@host01a:~# srvctl config scan -netnum 2
SCAN name: scan-02.emilianofusaglia.net, Network: 2
Subnet IPv4: 10.15.69.0/255.255.255.0/, static
Subnet IPv6:
SCAN 1 IPv4 VIP: 10.15.69.44
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: 10.15.69.45
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 3 IPv4 VIP: 10.15.69.43
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:

grid@host01a:~# srvctl status scan -netnum 2
SCAN VIP scan1_net2 is enabled
SCAN VIP scan1_net2 is running on node host02a
SCAN VIP scan2_net2 is enabled
SCAN VIP scan2_net2 is running on node host01a
SCAN VIP scan3_net2 is enabled
SCAN VIP scan3_net2 is running on node host03a

 

As grid user add the SCAN Listener and check the configuration:

grid@host01a:~# srvctl add scan_listener -netnum 2 -listener LISTENER2 -endpoints TCP:1532

grid@host01a:~# srvctl config scan_listener -netnum 2
SCAN Listener LISTENER2_SCAN1_NET2 exists. Port: TCP:1532
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER2_SCAN2_NET2 exists. Port: TCP:1532
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER2_SCAN3_NET2 exists. Port: TCP:1532
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:

 

As grid user start the SCAN Listener2 and check the status:

grid@host01a:~# srvctl start scan_listener -netnum 2

grid@host01a:~# srvctl status scan_listener -netnum 2
SCAN Listener LISTENER2_SCAN1_NET2 is enabled
SCAN listener LISTENER2_SCAN1_NET2 is running on node host02a
SCAN Listener LISTENER2_SCAN2_NET2 is enabled
SCAN listener LISTENER2_SCAN2_NET2 is running on node host01a
SCAN Listener LISTENER2_SCAN3_NET2 is enabled
SCAN listener LISTENER2_SCAN3_NET2 is running on node host03a
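
At this point the second SCAN is fully operational. A client willing to connect through it would use a tnsnames.ora entry similar to the minimal sketch below; the alias and service name are hypothetical examples:

MYDB_NET2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = scan-02.emilianofusaglia.net)(PORT = 1532))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = mydb.emilianofusaglia.net)
    )
  )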

 

Defining the multi-SCAN configuration per database

Once the above configuration is completed, it remains to define which SCAN(s) should be used by each database.

When multiple SCANs exist, by default CRS populates the LISTENER_NETWORKS parameter to register the database against all SCANs and listeners.

To override this default behavior, allowing for example a specific database to register only against the SCAN scan-02.emilianofusaglia.net, the database parameter LISTENER_NETWORKS should be configured manually.
The parameter LISTENER_NETWORKS can be set dynamically, but the new value is only enforced at the next instance restart.
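
As an illustration only, here is a minimal sketch of setting the parameter so that one instance registers exclusively against LISTENER2 and scan-02; the node VIP and port reuse the values configured above, the instance name is hypothetical, and the statement would be repeated per instance with the corresponding local node VIP:

SQL> ALTER SYSTEM SET LISTENER_NETWORKS='((NAME=network2)(LOCAL_LISTENER=host01b-vip.emilianofusaglia.net:1532)(REMOTE_LISTENER=scan-02.emilianofusaglia.net:1532))' SCOPE=BOTH SID='MYDB1';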

 


 

ASM Filter Driver (ASMFD)

 

ASM Filter Driver is a Linux kernel module introduced in 12c R1. It resides in the I/O path of the Oracle ASM disks providing the following features:

  • Rejecting all non-Oracle I/O write requests to ASM Disks.
  • Device name persistency.
  • Node level fencing without reboot.

 

In 12c R2 ASMFD can be enabled from the GUI of the Grid Infrastructure installation, as shown in the post GI 12c R2 Installation at step #8 “Create ASM Disk Group”.

Once ASM Filter Driver is in use, similarly to ASMLib, the disks are managed using the ASMFD label name.

 

Here are a few examples of how to work with ASM Filter Driver.

--How to create an ASMFD label in SQL*Plus
SQL> Alter system label set 'DATA1' to '/dev/mapper/mpathak';

System altered.


--How to create an ASM Disk Group with ASMFD
CREATE DISKGROUP DATA_DG EXTERNAL REDUNDANCY DISK 'AFD:DATA1' SIZE 30720M
ATTRIBUTE 'SECTOR_SIZE'='512','LOGICAL_SECTOR_SIZE'='512','compatible.asm'='12.2.0.1',
'compatible.rdbms'='12.2.0.1','compatible.advm'='12.2.0.1','au_size'='4M';

Diskgroup created.

 

ASM Filter Driver can also be managed from the ASM command-line utility ASMCMD:

--Check ASMFD status
ASMCMD> afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'oel7node06.localdomain'


--List ASM Disks where ASMFD is enabled
ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                    Filtering                Path
================================================================================
DATA1                      ENABLED                /dev/mapper/mpathak
DATA2                      ENABLED                /dev/mapper/mpathan
DATA3                      ENABLED                /dev/mapper/mpathw
DATA4                      ENABLED                /dev/mapper/mpathac
GIMR1                      ENABLED                /dev/mapper/mpatham
GIMR2                      ENABLED                /dev/mapper/mpathaj
GIMR3                      ENABLED                /dev/mapper/mpathal
GIMR4                      ENABLED                /dev/mapper/mpathaf
GIMR5                      ENABLED                /dev/mapper/mpathai
RECO3                      ENABLED                /dev/mapper/mpathy
RECO1                      ENABLED                /dev/mapper/mpathab
RECO2                      ENABLED                /dev/mapper/mpathx
ASMCMD>


--How to remove an ASMFD label in ASMCMD
ASMCMD> afd_unlabel DATA4
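
For completeness, a new label can also be created directly from ASMCMD; in the sketch below the device path is just an example:

--How to create an ASMFD label in ASMCMD
ASMCMD> afd_label DATA5 '/dev/mapper/mpathzz'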

 

 


 

Installing Oracle Grid Infrastructure 12c R2

It has been an exciting week: Oracle 12c R2 came out and suddenly it was time to refresh the RAC test environments. My friend Jacques opted for an upgrade from 12.1.0.2 to 12.2.0.1 (here the link to his blog post), while I started with a fresh installation, because I also upgraded the Operating System to OEL 7.3.

Compared to 12c R1 there are new options in the installation process, but generally speaking the wizard is quite similar.

The first breakthrough is the simplified image-based installation: there is no runInstaller.sh to invoke anymore.

Unpack the .zip file directly inside the Grid Infrastructure Home of the first cluster node, as described below:

[grid@oel7node06 ~]$ mkdir -p /u01/app/12.2.0.1/grid 
[grid@oel7node06 ~]$ chown grid:oinstall /u01/app/12.2.0.1/grid 
[grid@oel7node06 ~]$ cd /u01/app/12.2.0.1/grid 
[grid@oel7node06 grid]$ unzip -q download_location/grid_home_image.zip

# From an X session invoke the Grid Infrastructure wizard: 
[grid@oel7node06 grid]$ ./gridSetup.sh

 

[Screenshot 01: Grid Infrastructure setup wizard, first screen]

 

 

The second screenshot lists the new cluster topologies available in 12c R2:

  • Oracle Standalone Cluster
  • Oracle Cluster Domain
    • Oracle Domain Services Cluster
    • Oracle Member Clusters
      • Oracle Member Cluster for Oracle Database
      • Oracle Member Cluster for Applications

 

In my case I’m installing an Oracle Standalone Cluster

[Screenshots 02–22: remaining steps of the Grid Infrastructure installation wizard]

And now time for testing.

 

 

Linux for DBA: Basic “vi” Editor Tutorial

 

UNIX/Linux “vi” is a very powerful text editor; unfortunately, it can be difficult to use at the beginning. To help our memory, I wrote this post.

This is NOT an exhaustive guide, but a digest of the most useful commands and options.

 

vi Operation Modes:

Command mode: allows executing administrative tasks (run commands, move the cursor, search/replace strings, save, etc.). This is the default mode when vi starts.
When Insert mode is active, press ESC to revert to Command mode.

Insert mode: enables writing into the file. To switch to Insert mode simply type i.

 

To open a file in edit mode:

# vi filename

 

Basic Moving commands

Enable Command mode (pressing ESC twice)
j  -- Cursor down one line
k  -- Cursor up one line
h  -- Cursor left one character
l  -- Cursor right one character
Multiple lines/columns move ex.: 5h -- Move the cursor 5 columns left

$   -- Cursor at the end of the line.
0   -- Cursor at the beginning of the line. Same as |
b   -- Cursor at the beginning of the previous word.
w   -- Cursor at the beginning of the next word.
G   -- Cursor at the end of the file.
1G  -- Cursor at the beginning of the file.
:4  -- Cursor at the 4th line.

 

Basic Editing commands

Enable Command mode (pressing ESC twice); a, A, i, I, o and O then switch to Insert mode

a  -- Insert text after the cursor location. 
A  -- Insert text at the end of the line. 
i  -- Insert text before the cursor location. 
I  -- Insert text at the beginning of the line. 
o  -- Insert a new line below the cursor location. 
O  -- Insert a new line above the cursor location.
dd -- Delete the current line.
x  -- Delete the character under the cursor location.
cw -- Change the word under the cursor location.
r  -- Replace the character under the cursor location.
R  -- Replace multiple characters starting from the cursor location. ESC to stop the replacement.
yy -- Copy the current line.
yw -- Copy the current word.
p  -- Paste the copied text after the current cursor location
P  -- Paste the copied text before the current cursor location

 

Basic Search and  Replace options

Enable Command mode (pressing ESC twice)

:set ic -- Ignore case when searching.
:set nu -- Display line numbers on the left side.
:%s/<search_string>/<replacement_string>/g -- Global search and replace

 

Exiting from vi

:q  -- Exit (fails if there are unsaved changes)
:q! -- Force Exit without Saving
:w  -- Save the file
:wq -- Save & Exit

 

 

Linux for DBA: Red Hat 7 removed and deprecated a few commands

 

Linux Red Hat 7 and derived distributions have removed or deprecated a few commands, among them netstat and lsof, which are popular among DBAs.

This post shows how to obtain the same network information using the new OS commands.

 

NETSTAT

netstat is now considered obsolete and has been replaced by ss:

root@oel7qa01:~$ ss -t
State       Recv-Q Send-Q       Local Address:Port           Peer Address:Port 
ESTAB       0      0            192.168.1.117:54360          192.0.78.23:https 
ESTAB       0      0            192.168.1.117:48538          198.252.206.25:https 
ESTAB       0      0            192.168.1.117:42744          162.125.18.133:https 
ESTAB       0      0            127.0.0.1:38106              127.0.0.1:52828 
ESTAB       0      0            192.168.1.117:54008          192.0.78.23:https 
CLOSE-WAIT  1      0            192.168.1.117:60054          51.2xx.195.xx:https 
ESTAB       0      0            192.168.1.117:47904          198.2xx.202.xx:https 
CLOSE-WAIT  32     0            192.168.1.117:56724          108.1xx.172.xxx:https 
CLOSE-WAIT  32     0            192.168.1.117:47050          54.xx.201.xxx:https 
ESTAB       0      0            127.0.0.1:52828              127.0.0.1:38106 
CLOSE-WAIT  32     0            192.168.1.117:44728          108.1xx.xxx.6x:https 
ESTAB       0      0            192.168.1.117:41848          195.xxx.2xx.xxx:https 
ESTAB       0      0            192.168.7.50:41268           192.168.7.60:ssh 
ESTAB       0      0            2a02:1203:ecb0:7b80:58d9:f6e5:90d9:f266:53060 2a00:1450:400e:800::2003:https 
ESTAB       0      0            2a02:1203:ecb0:7b80:58d9:f6e5:90d9:f266:37978 2a00:1450:400a:804::200e:https 
ESTAB       0      0            2a02:1203:ecb0:7b80:58d9:f6e5:90d9:f266:51682 2a00:1450:400a:804::2003:https
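
Another frequent DBA need, listing the listening TCP sockets together with the owning process (formerly obtained with netstat), is covered by ss as well:

root@oel7qa01:~$ ss -ltnp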

 

The netstat -r information is now provided by the command ip route:

--Until Red Hat 6
[root@oel7node00 ~]# netstat -r
Kernel IP routing table
Destination     Gateway     Genmask        Flags  MSS Window irtt Iface
default         gateway     0.0.0.0        UG       0 0         0 enp0s8
default         gateway     0.0.0.0        UG       0 0         0 enp0s3
10.0.2.0        0.0.0.0     255.255.255.0  U        0 0         0 enp0s3
172.31.100.0    0.0.0.0     255.255.255.0  U        0 0         0 enp0s9
192.168.7.0     0.0.0.0     255.255.255.0  U        0 0         0 enp0s8
192.168.200.0   0.0.0.0     255.255.255.0  U        0 0         0 enp0s10


--As of Red Hat 7
[root@oel7node00 ~]# ip route
default via 192.168.7.50 dev enp0s8 proto static metric 100 
default via 10.0.2.2 dev enp0s3 proto static metric 101 
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 metric 100 
172.31.100.0/24 dev enp0s9 proto kernel scope link src 172.31.100.10 metric 100 
192.168.7.0/24 dev enp0s8 proto kernel scope link src 192.168.7.60 metric 100 
192.168.200.0/24 dev enp0s10 proto kernel scope link src 192.168.200.10 metric 100 

 

The netstat -i information is now provided by the command ip -s link:

--Until Red Hat 6
[root@oel7node00 ~]# netstat -i
Kernel Interface table
Iface     MTU    RX-OK RX-ERR RX-DRP RX-OVR   TX-OK TX-ERR TX-DRP TX-OVR Flg
enp0s3   1500       66      0      0 0           72      0      0      0 BMRU
enp0s8   1500     1201      0      0 0          687      0      0      0 BMRU
enp0s9   1500        2      0      0 0            2      0      0      0 BMRU
enp0s10  1500        2      0      0 0            7      0      0      0 BMRU
lo      65536        0      0      0 0            0      0      0      0 LRU


--As of Red Hat 7
[root@oel7node00 ~]# ip -s link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT 
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 RX: bytes packets errors dropped overrun mcast 
 0         0       0      0       0       0 
 TX: bytes packets errors dropped carrier collsns 
 0         0       0      0       0       0 
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
 link/ether 08:00:27:4c:63:1b brd ff:ff:ff:ff:ff:ff
 RX: bytes packets errors dropped overrun mcast 
 5860      66      0      0       0       0 
 TX: bytes packets errors dropped carrier collsns 
 5662      72      0      0       0       0 
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
 link/ether 08:00:27:2b:ca:66 brd ff:ff:ff:ff:ff:ff
 RX: bytes packets errors dropped overrun mcast 
 131645    1237    0      0       0       0 
 TX: bytes packets errors dropped carrier collsns 
 223396    704     0      0       0       0 
4: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
 link/ether 08:00:27:cc:fb:2e brd ff:ff:ff:ff:ff:ff
 RX: bytes packets errors dropped overrun mcast 
 120        2      0      0       0       0 
 TX: bytes packets errors dropped carrier collsns 
 120       2       0      0       0       0 
5: enp0s10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
 link/ether 08:00:27:6f:7e:47 brd ff:ff:ff:ff:ff:ff
 RX: bytes packets errors dropped overrun mcast 
 120       2       0      0       0       0 
 TX: bytes packets errors dropped carrier collsns 
 558       7       0      0       0       0

 

The netstat -g information is now provided by the command ip maddr:

--Until Red Hat 6
[root@oel7node00 ~]# netstat -g
IPv6/IPv4 Group Memberships
Interface RefCnt Group
--------------- ------ ---------------------
lo 1 all-systems.mcast.net
enp0s3 1 all-systems.mcast.net
enp0s8 1 all-systems.mcast.net
enp0s9 1 all-systems.mcast.net
enp0s10 1 all-systems.mcast.net
lo 1 ff02::1
lo 1 ff01::1
enp0s3 1 ff02::1
enp0s3 1 ff01::1
enp0s8 1 ff02::1
enp0s8 1 ff01::1
enp0s9 1 ff02::1
enp0s9 1 ff01::1
enp0s10 1 ff02::1
enp0s10 1 ff01::1


--As of Red Hat 7
[root@oel7node00 ~]# ip maddr
1: lo
 inet 224.0.0.1
 inet6 ff02::1
 inet6 ff01::1
2: enp0s3
 link 01:00:5e:00:00:01
 inet 224.0.0.1
 inet6 ff02::1
 inet6 ff01::1
3: enp0s8
 link 01:00:5e:00:00:01
 inet 224.0.0.1
 inet6 ff02::1
 inet6 ff01::1
4: enp0s9
 link 01:00:5e:00:00:01
 inet 224.0.0.1
 inet6 ff02::1
 inet6 ff01::1
5: enp0s10
 link 01:00:5e:00:00:01
 inet 224.0.0.1
 inet6 ff02::1
 inet6 ff01::1

 

 

LSOF

lsof is no longer included in the OS minimal installation, but it is not considered obsolete or deprecated; therefore simply use yum to install the missing package:

[root@oel7node00 ~]# which lsof
/usr/bin/which: no lsof in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin)

[root@oel7node00 ~]# yum install lsof
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package lsof.x86_64 0:4.87-4.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=========================================================================================================================================================
 Package Arch Version Repository Size
=========================================================================================================================================================
Installing:
 lsof x86_64 4.87-4.el7 ol7_latest 330 k

Transaction Summary
=========================================================================================================================================================
Install 1 Package

Total download size: 330 k
Installed size: 927 k
Is this ok [y/d/N]: y
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
lsof-4.87-4.el7.x86_64.rpm | 330 kB 00:00:00 
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
 Installing : lsof-4.87-4.el7.x86_64 1/1 
 Verifying : lsof-4.87-4.el7.x86_64 1/1

Installed:
 lsof.x86_64 0:4.87-4.el7

Complete!
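
Once the package is installed, a typical DBA usage example is checking which processes hold a given port open; here port 1521 is assumed to be the listener port:

[root@oel7node00 ~]# lsof -i :1521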

 

 

 

 

Linux for DBA: How to disable the ssh banner for a given user

Ready to install a new Oracle RAC cluster, but the ssh banner (in /etc/issue.net protected by root privileges) is compromising the non-interactive ssh commands issued by grid & oracle?

Here is the trick to disable it:

--Add this empty file to the grid and oracle UNIX home
touch ~/.hushlogin 

--or
mkdir -p .ssh
chmod 700 .ssh 
echo "LogLevel quiet" > ~/.ssh/config

Severe Oracle instability due to new RedHat 7.2 feature which releases IPC objects

I recently installed a two-node RAC version 12.1.0.2 on top of RedHat 7.2, and a few hours after the initial setup I started experiencing ASM and database crashes.

Checking in the alert log I found the following errors:

Tue Oct 04 05:25:17 2016
Dumping diagnostic data in directory=[cdmp_20161004052517], requested by (instance=1, osid=84872 (MMAN)), summary=[abnormal instance termination].
Tue Oct 04 05:25:18 2016
Instance terminated by USER, pid = 84872
Tue Oct 04 05:25:18 2016
Errors in file /oams/base/diag/rdbms/txdop/txdop1/trace/txdop1_mman_84872.trc:
ORA-27300: OS system dependent operation:semctl failed with status: 22
ORA-27301: OS failure message: Invalid argument
ORA-27302: failure occurred at: sskgpwrm1
ORA-27157: OS post/wait facility removed
ORA-27300: OS system dependent operation:semop failed with status: 43
ORA-27301: OS failure message: Identifier removed
ORA-27302: failure occurred at: sskgpwwait1

 

The errors pointed to the OS, and in particular to the possibility that semaphores in use by Oracle had been removed.

Because this was a fresh installation and I was the only person using the cluster, it was easy to exclude any third-party activity. I then double-checked the kernel parameters and all other system prerequisites without finding any wrong configuration.

Finally, on MOS I found the following note: "ALERT: Setting RemoveIPC=yes on Redhat 7.2 Crashes ASM and Database Instances as Well as Any Application That Uses a Shared Memory Segment (SHM) or Semaphores (SEM) (Doc ID 2081410.1)"

In RedHat 7.2, the systemd-logind service introduced a new feature that removes all IPC objects when a user fully logs out.
The feature is controlled by the option RemoveIPC in the /etc/systemd/logind.conf configuration file; see man logind.conf(5) for details.

The default value for RemoveIPC in RHEL7.2 is yes.

As a result, when the last oracle or grid user disconnects, the OS removes shared memory segments and semaphores for those users.
As Oracle ASM and Databases use shared memory segments for SGA, removing shared memory segments will crash the Oracle ASM and database instances.
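
The workaround described in the MOS note is to disable this behavior on every cluster node; a minimal sketch of the change:

--Set RemoveIPC=no in /etc/systemd/logind.conf
RemoveIPC=no

--Reload systemd and restart the logind service to apply the change
# systemctl daemon-reload
# systemctl restart systemd-logind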

 

Patching ODA X5-2 Virtualized to version 12.1.2.6

Here is described the procedure to upgrade the ODA to the Bundle Patch 12.1.2.6.0.

This Bundle contains a BIG change, because it replaces Oracle Enterprise Linux 5.11 with version 6.7.

One critical requirement: this patch can only be installed on top of 12.1.2.5.0. To check the existing ODA version run:

# /opt/oracle/oak/bin/oakcli show version
Version
12.1.2.5.0

The patch can be downloaded from MOS selecting the following note: 22328442 ORACLE DATABASE APPLIANCE PATCH BUNDLE 12.1.2.6.0 (Patch)

 

And now let’s start with the installation:

  • Upload the patch to /tmp on both ODA_Base (Dom1) nodes
  • Remove any Extra RPM installed by the user on the ODA_Base
  • Unpack both ZIP files of the patch on both ODA_Base using the following oakcli command:
[root@oda_base01 / ] # cd /tmp/Patch_12.1.2.6.0
[root@oda_base01 patch]# oakcli unpack -package /tmp/patch/p22328442_121260_Linux-x86-64_1of2.zip
Unpacking takes a while, pls wait....
Successfully unpacked the files to repository.
[root@oda_base01 patch]#
[root@oda_base01 patch]#
[root@oda_base01 patch]# oakcli unpack -package /tmp/patch/p22328442_121260_Linux-x86-64_2of2.zip
Unpacking takes a while, pls wait....
Successfully unpacked the files to repository.
[root@oda_base01 patch]#


Verify the patch compatibility on both ODA_Base with the following check:

[root@oda_base01 patch]# oakcli update -patch 12.1.2.6.0 -verify

INFO: 2016-03-31 17:07:29: Reading the metadata file now...
 Component Name Installed Version Proposed Patch Version
 --------------- ------------------ -----------------
 Controller_INT     4.230.40-3739       Up-to-date
 Controller_EXT     06.00.02.00         Up-to-date
 Expander           0018                Up-to-date
 SSD_SHARED {
 [ c1d20,c1d21,c1d22, A29A              Up-to-date
 c1d23 ]
 [ c1d16,c1d17,c1d18, A29A              Up-to-date
 c1d19 ]
 }
 HDD_LOCAL            A720              Up-to-date
 HDD_SHARED           P554              Up-to-date
 ILOM             3.2.4.42 r99377     3.2.4.52 r101649
 BIOS               30040200              30050100
 IPMI               1.8.12.0              1.8.12.4
 HMP                2.3.2.4.1             2.3.4.0.1
 OAK               12.1.2.5.0            12.1.2.6.0
 OL                    5.11                  6.7
 OVM                  3.2.9              Up-to-date
 GI_HOME           12.1.0.2.5(21359755, 12.1.0.2.160119(2194
                              21359758) 8354,21948344)
 DB_HOME {
 [ OraDb11204_home1 ] 11.2.0.4.8(21352635, 11.2.0.4.160119(2194
 21352649) 8347,21948348)
 [ OraDb12102_home2,O 12.1.0.2.5(21359755, 12.1.0.2.160119(2194
 raDb12102_home1 ] 21359758) 8354,21948344)
 }
[root@oda_base01 patch]#

Validate the upgrade to OEL6 by checking:

  • The minimum required version
  • The space requirement
  • The list of valid ol5 rpms.
[root@oda_base01 patch]# oakcli validate -c ol6upgrade -prechecks
INFO: Validating the OL6 upgrade -prechecks
INFO: 2016-04-09 17:11:41: Checking for minimum compatible version
SUCCESS: 2016-04-09 17:11:41: Minimum compatible version check passed

INFO: 2016-04-09 17:11:41: Checking available free space on /u01
INFO: 2016-04-09 17:11:41: Free space on /u01 is 39734588 1K-blocks
SUCCESS: 2016-04-09 17:11:41: Check for available free space passed

INFO: 2016-04-09 17:11:42: Checking for additional RPMs
SUCCESS: 2016-04-09 17:11:42: Check for additional RPMs passed

INFO: 2016-04-09 17:11:42: Checking for expected RPMs installed
INFO: 2016-04-09 17:11:42: Please take backup of ODA_BASE. Ensure ODA_BASE, Share Repos and all the VMs are shutdown cleanly before taking backup.
INFO: 2016-04-09 17:11:42: You may use eg tar -cvzf oakDom1.<node>.tar.gz /OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1.
SUCCESS: 2016-04-09 17:11:42: All the expected ol5 RPMs are installed
SUCCESS: Node is ready for upgrade
[root@oda_base01 patch]#

Apply the patch to the first node using the flag -local

[root@oda_base01 patch]# /opt/oracle/oak/bin/oakcli update -patch 12.1.2.6.0 --infra -local
INFO: Local patch is running on the Node <0>
INFO: ***************************************************
INFO: ** Please do not patch both nodes simultaneously **
INFO: ***************************************************
INFO: DB, ASM, Clusterware may be stopped during the patch if required
INFO: Local Node may get rebooted automatically during the patch if necessary
Do you want to continue: [Y/N]?: Y
INFO: User has confirmed for the reboot
INFO: 2016-04-09 17:14:22: Checking for minimum compatible version
SUCCESS: 2016-04-09 17:14:22: Minimum compatible version check passed

INFO: 2016-04-09 17:14:22: Checking available free space on /u01
INFO: 2016-04-09 17:14:22: Free space on /u01 is 39733684 1K-blocks
SUCCESS: 2016-04-09 17:14:22: Check for available free space passed

INFO: 2016-04-09 17:14:22: Checking for additional RPMs
SUCCESS: 2016-04-09 17:14:22: Check for additional RPMs passed

INFO: 2016-04-09 17:14:22: Checking for expected RPMs installed
INFO: 2016-04-09 17:14:22: Please take backup of ODA_BASE. Ensure ODA_BASE, Share Repos and all the VMs are shutdown cleanly before taking backup.
INFO: 2016-04-09 17:14:22: You may use eg tar -cvzf oakDom1.<node>.tar.gz /OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1.
SUCCESS: 2016-04-09 17:14:22: All the expected ol5 RPMs are installed
INFO: All the VMs except the ODABASE will be shutdown forcefully if needed
Do you want to continue : [Y/N]? : Y
INFO: Running pre-install scripts
INFO: Running prepatching on local node
INFO: Completed pre-install scripts
INFO: local patching code START
INFO: Stopping local VMs, repos and oakd...
INFO: Shutdown of local VM, Repo and OAKD on node <0>.
INFO: Stopping OAKD on the local node.
INFO: Stopped Oakd on local node
INFO: Waiting for processes to sync up...
INFO: Oakd running on remote node
INFO: Stopping local VMs...
INFO: Stopping local shared repos...
INFO: Patching Dom0 components

INFO: Patching dom0 components on Local Node... <12.1.2.6.0>
INFO: 2016-04-09 17:27:02: Attempting to patch the HMP on Dom0...
SUCCESS: 2016-04-09 17:27:08: Successfully updated the device HMP to the version 2.3.4.0.1 on Dom0
INFO: 2016-04-09 17:27:08: Attempting to patch the IPMI on Dom0...
INFO: 2016-04-09 17:27:08: Successfully updated the IPMI on Dom0
INFO: 2016-04-09 17:27:08: Attempting to patch OS on Dom0...
INFO: 2016-04-09 17:27:18: Clusterware is running on local node
INFO: 2016-04-09 17:27:18: Attempting to stop clusterware and its resources locally
SUCCESS: 2016-04-09 17:29:12: Successfully stopped the clusterware on local node

SUCCESS: 2016-04-09 17:31:36: Successfully updated the device OVM to 3.2.9

INFO: Patching ODABASE components

INFO: Patching Infrastructure on the Local Node...

INFO: 2016-04-09 17:31:38: ------------------Patching OS-------------------------
INFO: 2016-04-09 17:31:38: OSPatching : Patching will start from step 0
INFO: 2016-04-09 17:31:38: OSPatching : Performing the step 0
INFO: 2016-04-09 17:31:39: OSPatching : step 0 completed
==================================================================================
INFO: 2016-04-09 17:31:39: OSPatching : Performing the step 1
INFO: 2016-04-09 17:31:39: OSPatching : step 1 completed
==================================================================================
INFO: 2016-04-09 17:31:39: OSPatching : Performing the step 2
INFO: 2016-04-09 17:31:42: OSPatching : step 2 completed.
==================================================================================
INFO: 2016-04-09 17:31:42: OSPatching : Performing the step 3
INFO: 2016-04-09 17:31:51: OSPatching : step 3 completed
==================================================================================
INFO: 2016-04-09 17:31:51: OSPatching : Performing the step 4
INFO: 2016-04-09 17:31:51: OSPatching : step 4 completed.
==================================================================================
INFO: 2016-04-09 17:31:51: OSPatching : Performing the step 5
INFO: 2016-04-09 17:31:52: OSPatching : step 5 completed
==================================================================================
INFO: 2016-04-09 17:31:52: OSPatching : Performing the step 6
INFO: 2016-04-09 17:31:52: OSPatching : Installing OL6 RPMs. Please wait...
INFO: 2016-04-09 17:35:05: OSPatching : step 6 completed
==================================================================================
INFO: 2016-04-09 17:35:05: OSPatching : Performing the step 7
INFO: 2016-04-09 17:37:36: OSPatching : step 7 completed
==================================================================================
INFO: 2016-04-09 17:37:36: OSPatching : Performing the step 8
INFO: 2016-04-09 17:37:37: OSPatching : step 8 completed
==================================================================================
INFO: 2016-04-09 17:37:37: OSPatching : Performing the step 9
INFO: 2016-04-09 17:38:14: OSPatching : step 9 completed
==================================================================================
INFO: 2016-04-09 17:38:14: OSPatching : Performing the step 10
INFO: 2016-04-09 17:38:50: OSPatching : step 10 completed
==================================================================================
INFO: 2016-04-09 17:38:50: OSPatching : Performing the step 11
INFO: 2016-04-09 17:38:50: OSPatching : step 11 completed
==================================================================================
INFO: 2016-04-09 17:38:50: OSPatching : Performing the step 12
INFO: 2016-04-09 17:38:50: Checking for expected RPMs installed
SUCCESS: 2016-04-09 17:38:51: All the expected ol6 RPMs are installed
INFO: 2016-04-09 17:38:51: OSPatching : step 12 completed
==================================================================================
SUCCESS: 2016-04-09 17:38:51: Successfully upgraded the OS

INFO: 2016-04-09 17:38:52: ----------------------Patching IPMI---------------------
INFO: 2016-04-09 17:38:52: IPMI is already upgraded or running with the latest version

INFO: 2016-04-09 17:38:52: ------------------Patching HMP-------------------------
INFO: 2016-04-09 17:38:53: HMP is already Up-to-date
INFO: 2016-04-09 17:38:53: /usr/lib64/sun-ssm already exists.

INFO: 2016-04-09 17:38:53: ----------------------Patching OAK---------------------
SUCCESS: 2016-04-09 17:39:27: Successfully upgraded OAK

INFO: 2016-04-09 17:39:31: ----------------------Patching JDK---------------------
SUCCESS: 2016-04-09 17:39:36: Successfully upgraded JDK

INFO: local patching code END

INFO: patching summary on local node
SUCCESS: 2016-04-09 17:39:39: Successfully upgraded the HMP on Dom0
SUCCESS: 2016-04-09 17:39:39: Successfully updated the device OVM
SUCCESS: 2016-04-09 17:39:39: Successfully upgraded the OS
INFO: 2016-04-09 17:39:39: IPMI is already upgraded
INFO: 2016-04-09 17:39:39: HMP is already updated
SUCCESS: 2016-04-09 17:39:39: Successfully updated the OAK
SUCCESS: 2016-04-09 17:39:39: Successfully updated the JDK

INFO: Running post-install scripts
INFO: Running postpatch on local node
INFO: Dom0 Needs to be rebooted, will be rebooting the Dom0

Broadcast message from root@oda_base01
 (unknown) at 17:40 ...

The system is going down for power off NOW!

Validate the steps with the infrastructure post-patch checks:

[root@oda_base01 ~]# /u01/app/12.1.0.2/grid/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

[root@oda_base01 ~]# /opt/oracle/oak/bin/oakcli validate -c ol6upgrade -postchecks
INFO: Validating the OL6 upgrade -postchecks

INFO: 2016-04-09 19:50:40: Current kernel is OL6
INFO: 2016-04-09 19:50:43: Checking for expected RPMs installed
SUCCESS: 2016-04-09 19:50:43: All the expected ol6 RPMs are installed
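
Optionally, at this point the component versions can be re-checked with the same oakcli command used before the patch:

[root@oda_base01 ~]# /opt/oracle/oak/bin/oakcli show version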

Apply the patch to the second node using the flag -local

[root@oda_base02 patch]# /opt/oracle/oak/bin/oakcli update -patch 12.1.2.6.0 --infra -local
INFO: Local patch is running on the Node <1>
INFO: ***************************************************
INFO: ** Please do not patch both nodes simultaneously **
INFO: ***************************************************
INFO: DB, ASM, Clusterware may be stopped during the patch if required
INFO: Local Node may get rebooted automatically during the patch if necessary
Do you want to continue: [Y/N]?: Y
INFO: User has confirmed for the reboot
INFO: 2016-04-09 19:58:07: Checking for minimum compatible version
SUCCESS: 2016-04-09 19:58:07: Minimum compatible version check passed

INFO: 2016-04-09 19:58:07: Checking available free space on /u01
INFO: 2016-04-09 19:58:07: Free space on /u01 is 45790328 1K-blocks
SUCCESS: 2016-04-09 19:58:07: Check for available free space passed

INFO: 2016-04-09 19:58:07: Checking for additional RPMs
SUCCESS: 2016-04-09 19:58:07: Check for additional RPMs passed

INFO: 2016-04-09 19:58:07: Checking for expected RPMs installed
INFO: 2016-04-09 19:58:08: Please take backup of ODA_BASE. Ensure ODA_BASE, Share Repos and all the VMs are shutdown cleanly before taking backup.
INFO: 2016-04-09 19:58:08: You may use eg tar -cvzf oakDom1.<node>.tar.gz /OVS/Repositories/odabaseRepo/VirtualMachines/oakDom1.
SUCCESS: 2016-04-09 19:58:08: All the expected ol5 RPMs are installed
INFO: All the VMs except the ODABASE will be shutdown forcefully if needed
Do you want to continue : [Y/N]? : Y
INFO: Running pre-install scripts
INFO: Running prepatching on local node
INFO: Completed pre-install scripts
INFO: local patching code START
INFO: Stopping local VMs, repos and oakd...
INFO: Shutdown of local VM, Repo and OAKD on node <1>.
INFO: Stopping OAKD on the local node.
INFO: Stopped Oakd on local node
INFO: Waiting for processes to sync up...
INFO: Oakd running on remote node
INFO: Stopping local VMs...
INFO: Stopping local shared repos...
INFO: Patching Dom0 components

INFO: Patching dom0 components on Local Node... <12.1.2.6.0>
INFO: 2016-04-09 20:04:26: Attempting to patch the HMP on Dom0...
SUCCESS: 2016-04-09 20:04:33: Successfully updated the device HMP to the version 2.3.4.0.1 on Dom0
INFO: 2016-04-09 20:04:33: Attempting to patch the IPMI on Dom0...
INFO: 2016-04-09 20:04:33: Successfully updated the IPMI on Dom0
INFO: 2016-04-09 20:04:33: Attempting to patch OS on Dom0...
INFO: 2016-04-09 20:04:43: Clusterware is running on local node
INFO: 2016-04-09 20:04:43: Attempting to stop clusterware and its resources locally
SUCCESS: 2016-04-09 20:08:20: Successfully stopped the clusterware on local node

SUCCESS: 2016-04-09 20:10:44: Successfully updated the device OVM to 3.2.9

INFO: Patching ODABASE components

INFO: Patching Infrastructure on the Local Node...

INFO: 2016-04-09 20:10:48: ------------------Patching OS-------------------------
INFO: 2016-04-09 20:10:48: OSPatching : Patching will start from step 0
INFO: 2016-04-09 20:10:48: OSPatching : Performing the step 0
INFO: 2016-04-09 20:10:51: OSPatching : step 0 completed
==================================================================================
INFO: 2016-04-09 20:10:51: OSPatching : Performing the step 1
INFO: 2016-04-09 20:10:51: OSPatching : step 1 completed
==================================================================================
INFO: 2016-04-09 20:10:51: OSPatching : Performing the step 2
INFO: 2016-04-09 20:10:53: OSPatching : step 2 completed.
==================================================================================
INFO: 2016-04-09 20:10:53: OSPatching : Performing the step 3
INFO: 2016-04-09 20:11:00: OSPatching : step 3 completed
==================================================================================
INFO: 2016-04-09 20:11:00: OSPatching : Performing the step 4
INFO: 2016-04-09 20:11:00: OSPatching : step 4 completed.
==================================================================================
INFO: 2016-04-09 20:11:00: OSPatching : Performing the step 5
INFO: 2016-04-09 20:11:00: OSPatching : step 5 completed
==================================================================================
INFO: 2016-04-09 20:11:00: OSPatching : Performing the step 6
INFO: 2016-04-09 20:11:00: OSPatching : Installing OL6 RPMs. Please wait...
INFO: 2016-04-09 20:14:25: OSPatching : step 6 completed
==================================================================================
INFO: 2016-04-09 20:14:25: OSPatching : Performing the step 7
INFO: 2016-04-09 20:16:58: OSPatching : step 7 completed
==================================================================================
INFO: 2016-04-09 20:16:58: OSPatching : Performing the step 8
INFO: 2016-04-09 20:16:59: OSPatching : step 8 completed
==================================================================================
INFO: 2016-04-09 20:16:59: OSPatching : Performing the step 9
INFO: 2016-04-09 20:17:35: OSPatching : step 9 completed
==================================================================================
INFO: 2016-04-09 20:17:35: OSPatching : Performing the step 10
INFO: 2016-04-09 20:18:11: OSPatching : step 10 completed
==================================================================================
INFO: 2016-04-09 20:18:11: OSPatching : Performing the step 11
INFO: 2016-04-09 20:18:11: OSPatching : step 11 completed
==================================================================================
INFO: 2016-04-09 20:18:11: OSPatching : Performing the step 12
INFO: 2016-04-09 20:18:12: Checking for expected RPMs installed
SUCCESS: 2016-04-09 20:18:12: All the expected ol6 RPMs are installed
INFO: 2016-04-09 20:18:12: OSPatching : step 12 completed
==================================================================================
SUCCESS: 2016-04-09 20:18:12: Successfully upgraded the OS

INFO: 2016-04-09 20:18:12: ----------------------Patching IPMI---------------------
INFO: 2016-04-09 20:18:13: IPMI is already upgraded or running with the latest version

INFO: 2016-04-09 20:18:13: ------------------Patching HMP-------------------------
INFO: 2016-04-09 20:18:15: HMP is already Up-to-date
INFO: 2016-04-09 20:18:15: /usr/lib64/sun-ssm already exists.

INFO: 2016-04-09 20:18:15: ----------------------Patching OAK---------------------
SUCCESS: 2016-04-09 20:18:53: Successfully upgraded OAK

INFO: 2016-04-09 20:18:56: ----------------------Patching JDK---------------------
SUCCESS: 2016-04-09 20:19:02: Successfully upgraded JDK

INFO: local patching code END

INFO: patching summary on local node
SUCCESS: 2016-04-09 20:19:06: Successfully upgraded the HMP on Dom0
SUCCESS: 2016-04-09 20:19:06: Successfully updated the device OVM
SUCCESS: 2016-04-09 20:19:06: Successfully upgraded the OS
INFO: 2016-04-09 20:19:06: IPMI is already upgraded
INFO: 2016-04-09 20:19:06: HMP is already updated
SUCCESS: 2016-04-09 20:19:06: Successfully updated the OAK
SUCCESS: 2016-04-09 20:19:06: Successfully updated the JDK

INFO: Running post-install scripts
INFO: Running postpatch on local node
INFO: Dom0 Needs to be rebooted, will be rebooting the Dom0

Broadcast message from root@oda_base02
 (unknown) at 20:20 ...

The system is going down for power off NOW!

From the first ODA_BASE node, apply the fix to the InfiniBand connection:

[root@oda_base01 ~]# python /opt/oracle/oak/bin/infiniFixSetup.py
IB Fix requires nodes reboot. Do you want to continue? [Y/N] : Y
INFO: Checking version for IB Fix setup
INFO: Checking whether IB Fix setup is already done or not
INFO: Checking default HAVIP for IB Fix setup
INFO: Setting up IB fix
INFO: Enabling IB fix and rebooting all nodes....
[root@oda_base01 ~]#
Broadcast message from root@oda_base01
 (unknown) at 20:40 ...

The system is going down for power off NOW!

Check that the InfiniBand fix was applied correctly: the file below should contain the value 1.

[root@oda_base01 ~]# cat /opt/oracle/oak/conf/ib_fix
1
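
Since the InfiniBand fix reboots both nodes, the same check is worth repeating on each ODA_BASE. A minimal sketch using ssh (assuming root SSH equivalence between the two nodes; otherwise run the cat command locally on each node):

[root@oda_base01 ~]# for node in oda_base01 oda_base02; do
>   echo -n "${node} ib_fix = "
>   ssh ${node} cat /opt/oracle/oak/conf/ib_fix   # expected value: 1 on both nodes
> done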

The Grid Infrastructure patch can be installed using one of two methods:

  • Full Downtime
  • Rolling Upgrade

The example below shows the first method (full downtime):

[root@oda_base01 ~]# oakcli update -patch 12.1.2.6.0 --gi

Please enter the 'SYSASM' password : (During deployment we set the SYSASM password to 'welcome1'):
Please re-enter the 'SYSASM' password:
INFO: Running pre-install scripts
INFO: Running prepatching on node 0
INFO: Running prepatching on node 1
INFO: Completed pre-install scripts
...
...
INFO: Stopped Oakd
...
...

......
SUCCESS: All nodes in /opt/oracle/oak/temp_clunodes.txt are pingable and alive.
INFO: 2016-04-09 22:32:16: Setting up SSH for grid User
......
SUCCESS: All nodes in /opt/oracle/oak/temp_clunodes.txt are pingable and alive.
INFO: 2016-04-09 22:32:34: Patching the GI Home on the Node oda_base01 ...
INFO: 2016-04-09 22:32:34: Updating OPATCH...
INFO: 2016-04-09 22:32:36: Rolling back GI on oda_base01 (if necessary)...
INFO: 2016-04-09 22:32:39: Rolling back GI on oda_base02 (if necessary)...
INFO: 2016-04-09 22:32:46: Patching the GI Home on the Node oda_base01
INFO: 2016-04-09 22:34:02: Performing the conflict checks...
SUCCESS: 2016-04-09 22:34:16: Conflict checks passed for all the Homes
INFO: 2016-04-09 22:34:16: Checking if the patch is already applied on any of the Homes
INFO: 2016-04-09 22:34:28: Home is not Up-to-date
SUCCESS: 2016-04-09 22:37:01: Successfully stopped the Database consoles
SUCCESS: 2016-04-09 22:37:18: Successfully stopped the EM agents
INFO: 2016-04-09 22:37:23: Applying patch on /u01/app/12.1.0.2/grid Homes
INFO: 2016-04-09 22:37:23: It may take upto 15 mins. Please wait...
SUCCESS: 2016-04-09 22:50:57: Successfully applied the patch on the Home : /u01/app/12.1.0.2/grid
SUCCESS: 2016-04-09 22:51:24: Successfully started the Database consoles
SUCCESS: 2016-04-09 22:51:40: Successfully started the EM Agents
INFO: 2016-04-09 22:51:41: Patching the GI Home on the Node oda_base02
...
INFO: 2016-04-09 23:16:27: ASM is running in Flex mode


INFO: GI patching summary on node: oda_base01
SUCCESS: 2016-04-09 23:16:28: Successfully applied the patch on the Home /u01/app/12.1.0.2/grid

INFO: GI patching summary on node: oda_base02
SUCCESS: 2016-04-09 23:16:28: Successfully applied the patch on the Home /u01/app/12.1.0.2/grid

INFO: GI versions: installed <12.1.0.2.160119> expected <12.1.0.2.160119>
INFO: Running post-install scripts
INFO: Running postpatch on node 1...
INFO: Running postpatch on node 0...
...
...
INFO: Started Oakd
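
Besides the oakcli summary, the applied GI patch level can be cross-checked directly against the OPatch inventory on each node. A minimal sketch, assuming the default grid OS user owns the Grid Infrastructure home (adjust the user if your deployment differs):

[root@oda_base01 ~]# su - grid -c "/u01/app/12.1.0.2/grid/OPatch/opatch lspatches"

The inventory should list patches 21948354 and 21948344 (GI PSU 12.1.0.2.160119), consistent with the "GI versions: installed <12.1.0.2.160119>" line reported above.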

The RDBMS patch can be installed using one of two methods:

  • Full Downtime
  • Rolling Upgrade

The example below shows the first method (full downtime):

[root@oda_base01 ~]# oakcli update -patch 12.1.2.6.0 --database
INFO: Running pre-install scripts
INFO: Running prepatching on node 0
INFO: Running prepatching on node 1
INFO: Completed pre-install scripts
...
...

......
SUCCESS: All nodes in /opt/oracle/oak/temp_clunodes.txt are pingable and alive.
INFO: 2016-04-09 23:27:31: Getting all the possible Database Homes for patching
...
INFO: 2016-04-09 23:27:42: Patching 11.2.0.4 Database Homes on the Node oda_base01

Found the following 11.2.0.4 homes possible for patching:

HOME_NAME HOME_LOCATION
--------- -------------
OraDb11204_home1 /u01/app/oracle/product/11.2.0.4/dbhome_1

[Please note that few of the above Database Homes may be already up-to-date. They will be automatically ignored]

Would you like to patch all the above homes: Y | N ? : Y
INFO: 2016-04-09 23:29:17: Setting up SSH for the User oracle
......
SUCCESS: All nodes in /opt/oracle/oak/temp_clunodes.txt are pingable and alive.
INFO: 2016-04-09 23:29:35: Updating OPATCH
Fixing home : /u01/app/oracle/product/11.2.0.4/dbhome_1...done
INFO: 2016-04-09 23:30:33: Performing the conflict checks...
SUCCESS: 2016-04-09 23:30:43: Conflict checks passed for all the Homes
INFO: 2016-04-09 23:30:43: Checking if the patch is already applied on any of the Homes
INFO: 2016-04-09 23:30:47: Home is not Up-to-date
SUCCESS: 2016-04-09 23:31:13: Successfully stopped the Database consoles
SUCCESS: 2016-04-09 23:31:31: Successfully stopped the EM agents
INFO: 2016-04-09 23:31:36: Applying the patch on oracle home : /u01/app/oracle/product/11.2.0.4/dbhome_1 ...
SUCCESS: 2016-04-09 23:32:52: Successfully applied the patch on the Home : /u01/app/oracle/product/11.2.0.4/dbhome_1
SUCCESS: 2016-04-09 23:32:52: Successfully started the Database consoles
SUCCESS: 2016-04-09 23:33:08: Successfully started the EM Agents
INFO: 2016-04-09 23:33:17: Patching 11.2.0.4 Database Homes on the Node oda_base02
INFO: 2016-04-09 23:40:45: Running the catbundle.sql
INFO: 2016-04-09 23:40:52: Running catbundle.sql on the Database XXXXXXX
INFO: 2016-04-09 23:41:29: Running catbundle.sql on the Database YYYYYYY
INFO: 2016-04-09 23:42:07: Running catbundle.sql on the Database ZZZZZZZ
INFO: 2016-04-09 23:42:42: Running catbundle.sql on the Database WWWWWWW
...
INFO: 2016-04-09 23:47:56: Patching 12.1.0.2 Database Homes on the Node oda_base01

Found the following 12.1.0.2 homes possible for patching:

HOME_NAME HOME_LOCATION
--------- -------------
OraDb12102_home1 /u01/app/oracle/product/12.1.0.2/dbhome_1
OraDb12102_home2 /u01/app/oracle/product/12.1.0.2/dbhome_2

[Please note that few of the above Database Homes may be already up-to-date. They will be automatically ignored]

Would you like to patch all the above homes: Y | N ? : Y
INFO: 2016-04-09 23:49:11: Updating OPATCH
INFO: 2016-04-09 23:49:55: Performing the conflict checks...
SUCCESS: 2016-04-09 23:50:21: Conflict checks passed for all the Homes
INFO: 2016-04-09 23:50:21: Checking if the patch is already applied on any of the Homes
INFO: 2016-04-09 23:50:28: Home is not Up-to-date
SUCCESS: 2016-04-09 23:50:47: Successfully stopped the Database consoles
SUCCESS: 2016-04-09 23:51:04: Successfully stopped the EM agents
INFO: 2016-04-09 23:51:10: Applying patch on /u01/app/oracle/product/12.1.0.2/dbhome_1,/u01/app/oracle/product/12.1.0.2/dbhome_2 Homes
INFO: 2016-04-09 23:51:10: It may take upto 30 mins. Please wait...
SUCCESS: 2016-04-09 23:54:20: Successfully applied the patch on the Home : /u01/app/oracle/product/12.1.0.2/dbhome_1,/u01/app/oracle/product/12.1.0.2/dbhome_2
SUCCESS: 2016-04-09 23:54:20: Successfully started the Database consoles
SUCCESS: 2016-04-09 23:54:37: Successfully started the EM Agents
INFO: 2016-04-09 23:54:47: Patching 12.1.0.2 Database Homes on the Node oda_base02


INFO: DB patching summary on node: oda_base01
SUCCESS: 2016-04-01 00:03:19: Successfully applied the patch on the Home /u01/app/oracle/product/11.2.0.4/dbhome_1
SUCCESS: 2016-04-01 00:03:19: Successfully applied the patch on the Home /u01/app/oracle/product/12.1.0.2/dbhome_1,/u01/app/oracle/product/12.1.0.2/dbhome_2

INFO: DB patching summary on node: oda_base02
SUCCESS: 2016-04-01 00:03:20: Successfully applied the patch on the Home /u01/app/oracle/product/11.2.0.4/dbhome_1
SUCCESS: 2016-04-01 00:03:20: Successfully applied the patch on the Home /u01/app/oracle/product/12.1.0.2/dbhome_1,/u01/app/oracle/product/12.1.0.2/dbhome_2
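
As an extra verification, the bundle registration performed by catbundle.sql can be checked inside each 11.2.0.4 database (a minimal sketch using sqlplus; the database names are masked in the log above, so select the real ORACLE_SID via oraenv first):

[oracle@oda_base01 ~]$ . oraenv                 # select the target ORACLE_SID / ORACLE_HOME
[oracle@oda_base01 ~]$ sqlplus -s / as sysdba <<'EOF'
set lines 200
col comments for a60
select action_time, action, comments
  from dba_registry_history
 order by action_time;
EOF

The most recent rows should reference the PSU applied above (11.2.0.4.160119).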

Post-patching validation:

[root@oda_base01 ~]# /opt/oracle/oak/bin/oakcli validate -d
INFO: oak system information and Validations
RESULT: System Software inventory details
 Reading the metadata. It takes a while...
 System Version Component Name Installed Version Supported Version
 -------------- --------------- ------------------ -----------------
 12.1.2.6.0
                  Controller_INT   4.230.40-3739     Up-to-date
                  Controller_EXT   06.00.02.00       Up-to-date
                  Expander         0018              Up-to-date
 SSD_SHARED {
 [ c1d20,c1d21,c1d22,              A29A               Up-to-date
 c1d23 ]
 [ c1d16,c1d17,c1d18,              A29A               Up-to-date
 c1d19 ]
 }
 HDD_LOCAL                         A720               Up-to-date
 HDD_SHARED                        P554               Up-to-date
 ILOM                              3.2.4.42 r99377    Up-to-date
 BIOS                              30040200           Up-to-date
 IPMI                              1.8.12.4           Up-to-date
 HMP                               2.3.4.0.1          Up-to-date
 OAK                               12.1.2.6.0         Up-to-date
 OL                                6.7                Up-to-date
 OVM                               3.2.9              Up-to-date
 GI_HOME                         12.1.0.2.160119(2194 Up-to-date
                                 8354,21948344)
 DB_HOME {
 [ OraDb11204_home1 ]            11.2.0.4.160119(2194 Up-to-date
                                 8347,21948348)
 [ OraDb12102_home2,O            12.1.0.2.160119(2194 Up-to-date
 raDb12102_home1 ]               8354,21948344)
 }
RESULT: System Information:-
 Manufacturer:Oracle Corporation
 Product Name:ORACLE SERVER X5-2
 Serial Number:1548NM102F
RESULT: BIOS Information:-
 Vendor:American Megatrends Inc.
 Version:30040200
 Release Date:04/29/2015
 BIOS Revision:4.2
 Firmware Revision:3.2
SUCCESS: Controller p1 has the IR Bypass mode set correctly
SUCCESS: Controller p2 has the IR Bypass mode set correctly
INFO: Reading ilom data, may take short while..
INFO: Read the ilom data. Doing Validations
RESULT: System ILOM Version: 3.2.4.42 r99377
RESULT: System BMC firmware version 3.02
RESULT: Powersupply PS0 V_IN=230 Volts IN_POWER=180 Watts OUT_POWER=170 Watts
RESULT: Powersupply PS1 V_IN=230 Volts IN_POWER=190 Watts OUT_POWER=160 Watts
SUCCESS: Both the powersupply are ok and functioning
RESULT: Cooling Unit FM0 fan speed F0=5000 RPM F1=4500 RPM
RESULT: Cooling Unit FM1 fan speed F0=9100 RPM F1=8000 RPM
SUCCESS: Both the cooling unit are present
RESULT: Processor P0 present Details:-
 Version:Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
 Current Speed:2300 MHz Core Enabled:18 Thread Count:36
SUCCESS: All 4 memory modules of CPU P0 ok, each module is of Size:32767 MB Type:Other Speed:2133 MHz manufacturer:Samsung
RESULT: Processor P1 present Details:-
 Version:Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
 Current Speed:2300 MHz Core Enabled:18 Thread Count:36
SUCCESS: All 4 memory modules of CPU P1 ok, each module is of Size:32767 MB Type:Other Speed:2133 MHz manufacturer:Samsung
RESULT: Total Physical System Memory is 132037124 kB
SUCCESS: All OS Disks are present and in ok state
RESULT: Power Supply=24 degrees C
INFO: Checking Operating System Storage
SUCCESS: The OS disks have the boot stamp
RESULT: Device /dev/xvda2 is mounted on / of type ext3 in (rw)
RESULT: Device /dev/xvda1 is mounted on /boot of type ext3 in (rw)
RESULT: Device /dev/xvdb1 is mounted on /u01 of type ext3 in (rw)
RESULT: / has 19218 MB free out of total 55852 MB
RESULT: /boot has 384 MB free out of total 460 MB
RESULT: /u01 has 34501 MB free out of total 93868 MB
INFO: Checking Shared Storage
RESULT: Disk HDD_E0_S00_993971920 path1 status active device sdy with status active path2 status active device sda with status active
SUCCESS: HDD_E0_S00_993971920 has both the paths up and active
RESULT: Disk HDD_E0_S01_993379760 path1 status active device sdz with status active path2 status active device sdb with status active
SUCCESS: HDD_E0_S01_993379760 has both the paths up and active
RESULT: Disk HDD_E0_S02_993993052 path1 status active device sdaa with status active path2 status active device sdc with status active
SUCCESS: HDD_E0_S02_993993052 has both the paths up and active
RESULT: Disk HDD_E0_S03_993310956 path1 status active device sdab with status active path2 status active device sdd with status active
SUCCESS: HDD_E0_S03_993310956 has both the paths up and active
RESULT: Disk HDD_E0_S04_993385276 path1 status active device sdac with status active path2 status active device sde with status active
SUCCESS: HDD_E0_S04_993385276 has both the paths up and active
RESULT: Disk HDD_E0_S05_993388928 path1 status active device sdf with status active path2 status active device sdad with status active
SUCCESS: HDD_E0_S05_993388928 has both the paths up and active
RESULT: Disk HDD_E0_S06_993310572 path1 status active device sdae with status active path2 status active device sdg with status active
SUCCESS: HDD_E0_S06_993310572 has both the paths up and active
RESULT: Disk HDD_E0_S07_991849548 path1 status active device sdh with status active path2 status active device sdaf with status active
SUCCESS: HDD_E0_S07_991849548 has both the paths up and active
RESULT: Disk HDD_E0_S08_992415004 path1 status active device sdag with status active path2 status active device sdi with status active
SUCCESS: HDD_E0_S08_992415004 has both the paths up and active
RESULT: Disk HDD_E0_S09_992392444 path1 status active device sdj with status active path2 status active device sdah with status active
SUCCESS: HDD_E0_S09_992392444 has both the paths up and active
RESULT: Disk HDD_E0_S10_992233592 path1 status active device sdai with status active path2 status active device sdk with status active
SUCCESS: HDD_E0_S10_992233592 has both the paths up and active
RESULT: Disk HDD_E0_S11_992337644 path1 status active device sdl with status active path2 status active device sdaj with status active
SUCCESS: HDD_E0_S11_992337644 has both the paths up and active
RESULT: Disk HDD_E0_S12_993363524 path1 status active device sdm with status active path2 status active device sdak with status active
SUCCESS: HDD_E0_S12_993363524 has both the paths up and active
RESULT: Disk HDD_E0_S13_992394252 path1 status active device sdn with status active path2 status active device sdal with status active
SUCCESS: HDD_E0_S13_992394252 has both the paths up and active
RESULT: Disk HDD_E0_S14_993366344 path1 status active device sdam with status active path2 status active device sdo with status active
SUCCESS: HDD_E0_S14_993366344 has both the paths up and active
RESULT: Disk HDD_E0_S15_993407552 path1 status active device sdp with status active path2 status active device sdan with status active
SUCCESS: HDD_E0_S15_993407552 has both the paths up and active
RESULT: Disk SSD_E0_S16_1313537708 path1 status active device sdq with status active path2 status active device sdao with status active
SUCCESS: SSD_E0_S16_1313537708 has both the paths up and active
RESULT: Disk SSD_E0_S17_1313522352 path1 status active device sdr with status active path2 status active device sdap with status active
SUCCESS: SSD_E0_S17_1313522352 has both the paths up and active
RESULT: Disk SSD_E0_S18_1313531936 path1 status active device sds with status active path2 status active device sdaq with status active
SUCCESS: SSD_E0_S18_1313531936 has both the paths up and active
RESULT: Disk SSD_E0_S19_1313534520 path1 status active device sdt with status active path2 status active device sdar with status active
SUCCESS: SSD_E0_S19_1313534520 has both the paths up and active
RESULT: Disk SSD_E0_S20_1313568492 path1 status active device sdu with status active path2 status active device sdas with status active
SUCCESS: SSD_E0_S20_1313568492 has both the paths up and active
RESULT: Disk SSD_E0_S21_1313571440 path1 status active device sdv with status active path2 status active device sdat with status active
SUCCESS: SSD_E0_S21_1313571440 has both the paths up and active
RESULT: Disk SSD_E0_S22_1313568380 path1 status active device sdw with status active path2 status active device sdau with status active
SUCCESS: SSD_E0_S22_1313568380 has both the paths up and active
RESULT: Disk SSD_E0_S23_1313568480 path1 status active device sdx with status active path2 status active device sdav with status active
SUCCESS: SSD_E0_S23_1313568480 has both the paths up and active
INFO: Doing oak network checks
RESULT: Detected active link for interface eth0 with link speed 10000Mb/s and cable type as TwistedPair
RESULT: Detected active link for interface eth1 with link speed 10000Mb/s and cable type as TwistedPair
WARNING: No Link detected for interface eth2 with cable type as TwistedPair
WARNING: No Link detected for interface eth3 with cable type as TwistedPair
INFO: Checking bonding interface status
RESULT: No Bond Interface Found
SUCCESS: ibbond0 is running 192.168.16.27
 It may take a while. Please wait...
 INFO : ODA Topology Verification
 INFO : Running on Node0
 INFO : Check hardware type
 SUCCESS : Type of hardware found : X5-2
 INFO : Check for Environment(Bare Metal or Virtual Machine)
 SUCCESS : Type of environment found : Virtual Machine(ODA BASE)
 SUCCESS : Number of External SCSI controllers found : 2
 INFO : Check for Controllers correct PCIe slot address
 SUCCESS : External LSI SAS controller 0 : 00:04.0
 SUCCESS : External LSI SAS controller 1 : 00:05.0
 INFO : Check if JBOD powered on
 SUCCESS : 1JBOD : Powered-on
 INFO : Check for correct number of EBODS(2 or 4)
 SUCCESS : EBOD found : 2
 INFO : Check for External Controller 0
 SUCCESS : Controller connected to correct EBOD number
 SUCCESS : Controller port connected to correct EBOD port
 SUCCESS : Overall Cable check for controller 0
 INFO : Check for External Controller 1
 SUCCESS : Controller connected to correct EBOD number
 SUCCESS : Controller port connected to correct EBOD port
 SUCCESS : Overall Cable check for Controller 1
 INFO : Check for overall status of cable validation on Node0
 SUCCESS : Overall Cable Validation on Node0
 INFO : Check Node Identification status
 SUCCESS : Node Identification
 SUCCESS : Node name based on cable configuration found : NODE0
 INFO : Check JBOD Nickname
 SUCCESS : JBOD Nickname set correctly : Oracle Database Appliance - E0
 INFO : The details for Storage Topology Validation can also be found in the log file=/opt/oracle/oak/log/oda_base01/storagetopology/StorageTopology-2016-04-01-00:06:34_28446_1789.log
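
Because oakcli validate -d produces a long report, it can be convenient to capture the output to a file and scan it for anything that is not a SUCCESS. A minimal sketch (the two WARNING lines for eth2/eth3 above may simply indicate uncabled interfaces and should be assessed against the site's network design):

[root@oda_base01 ~]# /opt/oracle/oak/bin/oakcli validate -d > /tmp/validate_$(hostname -s).log 2>&1
[root@oda_base01 ~]# egrep -i 'warning|error|failed' /tmp/validate_$(hostname -s).log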

One Takeaway

Although patching an Oracle Engineered System should be a straightforward task, it is recommended to carefully read the patch instructions (README) and the related MOS notes, which are continuously updated with bugs, known issues and other relevant information.