The “Great” ODA overwhelming the Exadata

Introduction

This article tries to explain the technical reasons behind the success of the Oracle Database Appliance (ODA), a well-known appliance with which Oracle targets small and medium businesses, as well as departments of larger companies looking for privacy and isolation from the rest of the IT infrastructure. Nowadays this small and relatively inexpensive appliance (around $65,000 list price) has evolved considerably: the storage has reached an important capacity of 128TB raw, expandable to 256TB, and the two X5-2 servers are the same ones used in the database nodes of the Exadata machine. Many customers, while defining a new database architecture, evaluate the pros and cons of acquiring an ODA compared to the smallest Exadata configuration (one eighth of a rack). If the customer is not looking for a system with extreme performance and horizontal scalability beyond the two X5-2 servers, the Oracle Database Appliance is frequently the option retained.

Some of the ODA major features are:

  • High Availability: no single point of failure across hardware and software components.
  • Performance: each server is equipped with two 18-core Intel Xeon processors and 256GB of RAM (expandable to 768GB), with cluster communication over InfiniBand. The shared storage offers a multi-tier configuration with 7.2K rpm HDDs and two types of SSDs: one for frequently accessed data and one for database redo logs.
  • Flexibility & Scalability: running RAC, RAC One Node and single-instance databases.
  • Virtualized configuration: designed to offer a solution-in-a-box, with highly available virtual machines.
  • Optimized licensing model: a pay-as-you-grow model, activating an increasing number of CPU cores on demand with the Bare Metal configuration; or capping the resources by combining Oracle VM with the Hard Partitioning setup.
  • Time-to-market: whether the ODA is installed bare metal or virtualized, the deployment is a standardized and automated process generally completed in one or two days of work.
  • Price: the ODA is very competitive when comparing its cost to an equivalent commodity architecture, which in addition must be engineered, integrated and maintained by the customer.
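The pay-as-you-grow licensing model mentioned above can be illustrated with simple arithmetic. This is only a sketch: the 0.5 core factor for Intel Xeon processors is an assumption taken from Oracle's public processor core factor table, and actual license counts must always be verified against the current table and contract.

```python
# Illustration of pay-as-you-grow: Enterprise Edition processor licenses
# are counted only on the activated cores, not on the full capacity of
# the appliance. Core factor 0.5 (Intel x86) is an assumption here.
def ee_processor_licenses(active_cores: int, core_factor: float = 0.5) -> float:
    """Licenses required for a given number of activated CPU cores."""
    return active_cores * core_factor

# Activating cores in steps instead of licensing all 72 cores
# (2 servers x 2 sockets x 18 cores) of the ODA X5-2 at once:
for cores in (4, 16, 72):
    print(cores, "active cores ->", ee_processor_licenses(cores), "EE licenses")
```

With Hard Partitioning on the virtualized setup, the same idea applies to the virtual CPUs pinned to the ODA Base.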

 

At the time of writing, the latest hardware model is the ODA X5-2 and the software version is 12.1.2.6.0. This HW and SW combination offers unique features, a few of them not even available on the Exadata machine, such as the possibility to host databases and applications in one single box, or to rapidly and space-efficiently clone an 11gR2 or 12c database using ACFS snapshots.

 

 

ODA HW & SW Architecture

Oracle Database Appliance is composed of two X5-2 servers and a shared storage shelf, which can optionally be doubled. Each server is equipped with two 18-core Intel Xeon E5-2699 v3 processors, 256GB of RAM (optionally upgradable to 768GB) and two 600GB 10K rpm internal disks in RAID 1 for the OS and software binaries.

The appliance is equipped with redundant network connectivity up to 10Gb, redundant SAS HBAs and storage I/O modules, and a redundant InfiniBand interconnect enabling 40Gb/s server-to-server cluster communication.

The software components are all part of the Oracle "Red Stack": Oracle Linux 6 UEK or OVM 3, Grid Infrastructure 12c, Oracle RDBMS 12c & 11gR2, and Oracle Appliance Manager.

 

 

ODA Front view

Components 1 & 2 are the X5-2 servers. Components 3 & 4 are the storage shelf and the optional storage expansion.

ODA_Front

 

ODA Rear view

Highlighted are the multiple redundant connections, including InfiniBand for Oracle Clusterware, ASM and RAC communications. There is no single point of HW or SW failure.

ODA_Back

 

 

Storage Organization

With 16x8TB SAS HDDs, a total raw space of 128TB is available on each storage shelf (64TB with ASM double mirroring and 42.7TB with ASM triple mirroring). To offer better I/O performance without exploding the price, Oracle has implemented the following SSD devices: 4x400GB, ASM double-mirrored, for frequently accessed data, and 4x200GB, ASM triple-mirrored, for database redo logs.
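The usable-capacity figures above follow directly from the ASM mirroring level; a quick sketch of the arithmetic:

```python
# Raw-to-usable capacity on a single ODA X5-2 storage shelf: ASM keeps
# 2 (normal redundancy) or 3 (high redundancy) copies of each extent,
# so usable space is raw capacity divided by the number of copies.
def usable_tb(raw_tb: float, mirror_copies: int) -> float:
    return raw_tb / mirror_copies

raw = 16 * 8  # sixteen 8TB SAS HDDs = 128TB raw
print(usable_tb(raw, 2))            # double-mirrored -> 64.0
print(round(usable_tb(raw, 3), 1))  # triple-mirrored -> 42.7
```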

As shown in the picture aside, each rotating disk has two slices: the external, more performant partition is assigned to the +DATA ASM disk group, and the internal one to the +RECO ASM disk group.

 

ODA_Disk

This storage optimization allows the ODA to achieve competitive I/O performance. In a production-like environment using the three types of disks, as per the ODA database template odb-24 (https://docs.oracle.com/cd/E22693_01/doc.12/e55580/sizing.htm), Trivadis has measured 12,000 I/O operations per second and a throughput of 2300 MB/s with an average latency of 10ms. As per the Oracle documentation, the maximum number of I/O operations per second of the rotating disks with a single storage shelf is 3,300; but this value increases significantly when relocating the hottest data files to a +FLASH disk group created on the SSD devices.

 

ACFS becomes the default database storage of ODA

Starting from ODA software version 12.1.0.2, any fresh installation enforces ASM Cluster File System (ACFS) as the only supported type of database storage, restricting the supported database versions to 11.2.0.4 and greater. In case of an ODA upgrade from a previous release, pre-existing databases are not automatically migrated to ACFS, but Oracle provides a tool called acfs_mig.pl to execute this mandatory step on all non-CDB databases of version 11.2.0.4 or greater.

Oracle has decided to promote ACFS as the default database storage in the ODA environment for the following reasons:

  • ACFS provides performance almost equivalent to that of Oracle ASM disk groups.
  • It adds functionalities on an industry-standard POSIX file system.
  • It enables database snapshot copies of PDBs, and of non-CDBs of version 11.2.0.4 or greater.
  • It offers advanced functionality for general-purpose files, such as replication, tagging, encryption, security and auditing.

Databases created on ACFS follow the same Oracle Managed Files (OMF) standard used by ASM.

As in the past, database provisioning requires the command line interface oakcli and the selection of a database template, which defines several characteristics including the amount of space to allocate on each file system. Container and non-container databases can coexist on the same Oracle Database Appliance.

The ACFS file systems are created during the database provisioning process on top of the ASM disk groups +DATA, +RECO, +REDO and, optionally, +FLASH. The file systems have two possible setups, depending on the database type: container or non-container.

  • Container database: for each CDB, the ODA database-provisioning job creates dedicated ACFS file systems with the following characteristics:

Disk Characteristics         | ASM Disk Group | ACFS Mount Point
SAS disk, external partition | +DATA          | /u02/app/oracle/oradata/datc<db_unique_name>
SAS disk, internal partition | +RECO          | /u01/app/oracle/fast_recovery_area/rcoc<db_unique_name>
SSD, triple-mirrored         | +REDO          | /u01/app/oracle/oradata/rdoc<db_unique_name>
SSD, double-mirrored         | +FLASH (*)     | /u02/app/oracle/oradata/flashdata

 

  • Non-container database: in case of a non-CDB, the ODA database-provisioning job creates or resizes the following shared ACFS file systems:

Disk Characteristics         | ASM Disk Group | ACFS Mount Point
SAS disk, external partition | +DATA          | /u02/app/oracle/oradata/datastore
SAS disk, internal partition | +RECO          | /u01/app/oracle/fast_recovery_area/datastore
SSD, triple-mirrored         | +REDO          | /u01/app/oracle/oradata/datastore
SSD, double-mirrored         | +FLASH (*)     | /u02/app/oracle/oradata/flashdata

(*) Optionally used by the databases as Smart Flash Cache (an extension of the SGA buffer cache), or allocated to store the hottest data files, leveraging the I/O performance of the SSD disks.
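The naming conventions of the two tables above can be captured in a small helper. This is purely illustrative: the function and the example db_unique_name are hypothetical, and the paths simply reproduce the mount points listed in the tables.

```python
# Hypothetical helper reproducing the ACFS mount-point naming shown in
# the tables above: per-database datc/rcoc/rdoc file systems for CDBs,
# shared "datastore" file systems for non-CDBs.
def acfs_mount_points(db_unique_name: str, is_cdb: bool) -> dict:
    if is_cdb:
        return {
            "+DATA": f"/u02/app/oracle/oradata/datc{db_unique_name}",
            "+RECO": f"/u01/app/oracle/fast_recovery_area/rcoc{db_unique_name}",
            "+REDO": f"/u01/app/oracle/oradata/rdoc{db_unique_name}",
        }
    return {
        "+DATA": "/u02/app/oracle/oradata/datastore",
        "+RECO": "/u01/app/oracle/fast_recovery_area/datastore",
        "+REDO": "/u01/app/oracle/oradata/datastore",
    }

print(acfs_mount_points("PROD1", is_cdb=True)["+DATA"])
# -> /u02/app/oracle/oradata/datcPROD1
```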

 

Oracle Database Appliance Bare Metal

The bare metal configuration has been available since version one of the appliance, and it remains the default option proposed by Oracle, which pre-installs Oracle Linux on any new system. The installation is very simple and intuitive thanks to the pre-built software bundle, which automates most of the steps. At the end of the installation, the architecture is very similar to any other two-node RAC setup based on commodity hardware; but from an operational point of view there are also great advantages, because the Oracle Appliance Manager framework simplifies and accelerates the execution of almost any system and database administration task.

The picture below depicts the ODA architecture when the bare metal configuration is in use:

ODA_Bare_Metal

 

Oracle Database Appliance Virtualized

When the ODA is deployed virtualized, both servers run Oracle VM Server, also called Dom0. Each Dom0 hosts, in a local dedicated repository, the ODA Base (or Dom Base), a privileged virtual machine where the Appliance Manager, Grid Infrastructure and RDBMS binaries are installed. The ODA Base takes advantage of the Xen PCI pass-through technology to provide direct access to the ODA shared disks presented and managed by ASM. This configuration reduces the VM flexibility (no VM migration is allowed for the two ODA Base VMs), but it guarantees almost no I/O performance penalty. With the Dom Base setup completed, the basic installation is done and it is possible to start provisioning databases using Oracle Appliance Manager.

At the same time, the administrator can create new shared repositories, hosted on ACFS and exported via NFS to the hypervisor, for hosting the application virtual machines. Those application virtual machines are also identified by the name Domain U. The Domain U VMs and the templates can be stored on a local or shared Oracle VM Server repository, but to enable migration between the two Oracle VM Servers, a shared repository on the ACFS file system should be used.

Even when virtualization is in use, Oracle Appliance Manager remains the only framework for system and database administration tasks, such as repository creation, template import, virtual machine deployment, network configuration, database provisioning and so on, relieving the administrator from much of the complexity.

The implementation of the solution-in-a-box guarantees the maximum return on investment of the ODA: by restricting the virtual CPUs to be licensed on the Dom Base, it allows relocating the spare resources to the application virtual machines, as shown in the picture below.

ODA_Virtualized

 

 

ODA compared to Exadata Machine and Commodity Hardware

As described in the previous sections, Oracle Database Appliance offers unique features, such as pay-as-you-grow and solution-in-a-box, which can heavily influence the decision for a new database architecture. The aim of the table below is to list the main architectural characteristics to evaluate while defining a new database infrastructure, comparing the results across Oracle Database Appliance, Exadata Machine and a commodity architecture based on Intel/Linux engineered to run RAC databases.

Table_Architectures

As shown by the different scores of the three architectures, each solution comes with strengths and weaknesses; as for the Oracle Database Appliance, it is evident that, due to its characteristics, the smallest Oracle Engineered System remains a great option for small and medium database environments.

 

Conclusion

I hope this article has kept its initial promise to explain the technical reasons behind the success of the Oracle Database Appliance, and that it has highlighted the great work done by Oracle, engineering this solution at the edge of technology while keeping the price under control.

One last summary of what in my opinion are the major benefits offered by the ODA:

  • Time-to-market: thanks to automated processes and pre-built software images, the deployment phase is extremely rapid.
  • Simplicity: the use of standard software components, combined with the appliance orchestrator Oracle Appliance Manager, makes the ODA very simple to operate.
  • Standardization & Automation: the Appliance Manager encapsulates and automates all repeatable and error-prone tasks, such as provisioning, decommissioning, patching and so on.
  • Vendor-certified platform: Oracle validates and certifies the compatibility among all HW & SW components.
  • Evolution: over time, the ODA benefits from specific bug fixes and software evolutions (introduced by Oracle through the quarterly patch sets), keeping the system current for longer when compared to a commodity architecture.

ASM 12c

A powerful framework for storage management

 

1 INTRODUCTION

Oracle Automatic Storage Management (ASM) is a well-known, widely used multi-platform volume manager and file system, designed for single-instance and clustered environments. Developed to manage Oracle database files with optimal performance and native data protection while simplifying storage management, ASM nowadays includes several functionalities for general-purpose files too.
This article focuses on the architecture and characteristics of version 12c, which introduces great changes and enhances pre-existing capabilities.
Dedicated sections explaining how Oracle has leveraged ASM within the Oracle Engineered Systems complete the paper.

 

1.1 ASM 12c Instance Architecture Diagram

Below are highlighted the functionalities and the main background components associated with an ASM instance. It is important to notice how, starting from Oracle 12c, a database can run either on ASM disk groups or on top of ASM Cluster File System (ACFS).

 

ASM_db

 

Overview of the ASM options available in Oracle 12c.

ACFS

 

1.2 ASM 12c Multi-Node Architecture Diagram

In a Multi-node cluster environment, ASM 12c is now available in two configurations:

  • 11gR2-like: with one ASM instance on each Grid Infrastructure node.
  • Flex ASM: a new concept that leverages the availability and performance of the cluster architecture, removing the 1:1 hard dependency between a cluster node and a local ASM instance. With Flex ASM, only a few nodes of the cluster run an ASM instance (the default cardinality is 3), and the database instances communicate with ASM in two possible ways: locally or over the ASM network. In case of failure of one ASM instance, the databases automatically and transparently reconnect to another surviving instance in the cluster. This major architectural change required the introduction of two new cluster resources: the ASM listener, supporting remote client connections, and the ADVM proxy, which permits access to the ACFS layer. In large cluster installations, Flex ASM enhances the performance and scalability of the Grid Infrastructure, reducing the amount of network traffic generated between ASM instances.
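The reconnection behavior described above can be sketched with a toy model. This is not Oracle code, only an illustration of the idea: a database instance prefers its local ASM instance when one exists, and otherwise connects to any surviving ASM instance over the ASM network.

```python
# Toy illustration (not Oracle code) of the Flex ASM concept: database
# instances can be served by any surviving ASM instance, so the failure
# of one ASM instance no longer brings down the local databases.
class FlexASMCluster:
    def __init__(self, asm_nodes):
        self.asm_nodes = set(asm_nodes)  # nodes currently running an ASM instance

    def connect(self, db_node):
        """Return the node whose ASM instance serves a database on db_node."""
        if db_node in self.asm_nodes:
            return db_node                 # local connection
        if not self.asm_nodes:
            raise RuntimeError("no ASM instance available")
        return sorted(self.asm_nodes)[0]   # remote connection over the ASM network

    def fail(self, node):
        self.asm_nodes.discard(node)       # one ASM instance crashes

cluster = FlexASMCluster(["node1", "node2", "node3"])  # default cardinality 3
print(cluster.connect("node1"))  # node1 (local ASM instance)
cluster.fail("node1")
print(cluster.connect("node1"))  # node2 (transparent remote reconnect)
```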

 

Below are two graphical representations of the same Oracle cluster: in the first drawing ASM is configured with the pre-12c setup, in the second one Flex ASM is in use.

ASM architecture 11gR2 like

01_NO_FlexASM_Drawing

 

 

Flex ASM architecture

01_FlexASM_Drawing

 

 

2  ASM 12c NEW FEATURES

The following summarizes the new functionalities introduced in ASM 12c R1:

  • Filter Driver: Oracle ASM Filter Driver (ASMFD) is a kernel module that resides in the I/O path of the Oracle ASM disks. It validates write I/O requests to the ASM disks, eliminating accidental overwrites that would cause corruption; for example, it filters out all non-Oracle I/Os.
  • General ASM Enhancements: Oracle ASM now replicates physically addressed metadata, such as the disk header and allocation tables, within each disk, offering better protection against bad disk sectors and external corruptions. Storage limits have been increased: ASM can manage up to 511 disk groups and a maximum disk size of 32PB. A new REPLACE clause has been added to the ALTER DISKGROUP statement.
  • Disk Scrubbing: checks for logical data corruptions and repairs them automatically in normal and high redundancy disk groups. The process starts automatically during rebalance operations, or the administrator can trigger it.
  • Disk Resync Enhancements: enable fast recovery from instance failure and faster resync performance. Multiple disks can be brought online simultaneously, and checkpoint functionality permits resuming from the point where the process was interrupted.
  • Even Read for Disk Groups: if ASM mirroring is in use, each read request submitted to the system can be satisfied by more than one disk. With this feature, each read is sent to the least loaded of the possible source disks.
  • ASM Rebalance Enhancements: the rebalance operation has been improved in terms of scalability, performance and reliability, supporting concurrent operations on multiple disk groups in a single instance. This version also enhances support for thin provisioning, user-data validation and error handling.
  • ASM Password File in a Disk Group: the ASM password file is now stored within an ASM disk group.
  • Access Control Enhancements on Windows: it is now possible to use access control to separate roles in Windows environments. With Oracle Database services running as users rather than Local System, the Oracle ASM access control feature supports role separation on Windows.
  • Rolling Migration Framework for ASM One-off Patches: enhances the rolling migration framework to apply one-off patches released for ASM in a rolling manner, without affecting the overall availability of the cluster or the database.
  • Updated Key Management Framework: updates the Oracle key management commands to unify the key management application programming interface (API) layer, making interaction with keys in the wallet easier and adding new key metadata that describes how the keys are being used.
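The "Even Read" behavior listed above is easy to picture in code. A hedged sketch, not Oracle's implementation: with mirroring, several disks hold a copy of the same extent, and the read is routed to the candidate with the fewest queued I/Os.

```python
# Sketch of the "Even Read" idea: each extent exists on several mirror
# disks; the read request is sent to the least-loaded candidate.
def pick_read_disk(mirror_disks, outstanding_io):
    """mirror_disks: disks holding a copy; outstanding_io: disk -> queued I/Os."""
    return min(mirror_disks, key=lambda d: outstanding_io.get(d, 0))

load = {"disk_a": 12, "disk_b": 3, "disk_c": 7}
print(pick_read_disk(["disk_a", "disk_b", "disk_c"], load))  # disk_b
```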

 

 

2.1 ASM 12c Client Cluster

One more ASM functionality, explored here but still under development and therefore not yet fully documented by Oracle, is ASM Client Cluster. It is designed to host applications requiring cluster functionalities (monitoring, restart and failover capabilities) without the need to provision local shared storage.

The ASM Client Cluster installation is available as a configuration option of the Grid Infrastructure binaries, starting from version 12.1.0.2.1 with the Oct. 2014 GI PSU.

The use of ASM Client Cluster imposes the following pre-requisites and limitations:

  • An ASM Server Cluster of version 12.1.0.2.1 with the Oct. 2014 GI PSU must exist, configured with the GNS server, with or without zone delegation.
  • The ASM Server Cluster becomes aware of the ASM Client Cluster by importing an ad hoc XML configuration containing all the details.
  • The ASM Client Cluster uses the OCR, voting files and password file of the ASM Server Cluster.
  • The ASM Client Cluster communicates with the ASM Server Cluster over the ASM network.
  • The ASM Server Cluster provides remote shared storage to the ASM Client Cluster.

 

As already mentioned, at the time of writing this feature is still under development and no official documentation is available; the only possible comment is that ASM Client Cluster looks similar to another option introduced in Oracle 12c, called Flex Cluster. In fact, Flex Cluster has the concept of HUB and LEAF nodes: the first used to run database workloads with direct access to the ASM disks, the second used to host applications in an HA configuration but without direct access to the ASM disks.

 

 

3  ACFS NEW FEATURES

In Oracle 12c the Automatic Storage Management Cluster File System supports more and more types of files, offering advanced functionalities like snapshots, replication, encryption, ACLs and tagging. It is also important to highlight that this cluster file system complies with the POSIX standards of Linux/UNIX and with the Windows standards.

Access to ACFS from outside the Grid Infrastructure cluster is granted via the NFS protocol; the NFS export can be registered as a clusterware resource, becoming available from any of the cluster nodes (HANFS).

Here is a list of the file types supported by ACFS: executables, trace files, logs, application reports, BFILEs, configuration files, video, audio, text, images, engineering drawings, general-purpose files and Oracle database files.

The major change introduced in this version of ACFS is definitely the capability and support to host Oracle database files, granting access to a set of functionalities that in the past were restricted to customer files only. Among them, the most important is the snapshot image, which has been fully integrated with the database Multitenant architecture, allowing the cloning of entire pluggable databases in a few seconds, independently of their size and in a space-efficient way, using copy-on-write technology.

The snapshots are created and immediately available in the "<FS_mount_point>/.ACFS/snaps" directory; they can be generated as read-only or read/write, and later converted from one mode to the other. In addition, ACFS supports nested snapshots.

 

Example of ACFS snapshot copy:

-- Create a read/write Snapshot copy
[grid@oel6srv02 bin]$ acfsutil snap create -w cloudfs_snap /cloudfs

-- Display Snapshot Info
[grid@oel6srv02 ~]$ acfsutil snap info cloudfs_snap /cloudfs
snapshot name:               cloudfs_snap
RO snapshot or RW snapshot:  RW
parent name:                 /cloudfs
snapshot creation time:      Wed May 27 16:54:53 2015

-- Display specific file info 
[grid@oel6srv02 ~]$ acfsutil info file /cloudfs/scripts/utl_env/NEW_SESSION.SQL
/cloudfs/scripts/utl_env/NEW_SESSION.SQL
flags:        File
inode:        42
owner:        oracle
group:        oinstall
size:         684
allocated:    4096
hardlinks:    1
device index: 1
major, minor: 251,91137
access time:  Wed May 27 10:34:18 2013
modify time:  Wed May 27 10:34:18 2013
change time:  Wed May 27 10:34:18 2013
extents:
-offset ----length | -dev --------offset
0       4096 |    1     1496457216
extent count: 1

--Convert the snapshot from Read/Write to Read-only
acfsutil snap convert -r cloudfs_snap /cloudfs

 --Drop the snapshot 
[grid@oel6srv02 ~]$ acfsutil snap delete cloudfs_snap /cloudfs

Example of a pluggable database cloned using an ACFS snapshot copy. The following requirements must be met to use the ACFS SNAPSHOT COPY clause:

      • All pluggable database files of the source PDB must be stored on ACFS.
      • The source PDB cannot be in a remote CDB.
      • The source PDB must be in read-only mode.
      • Dropping the parent PDB with the INCLUDING DATAFILES clause does not automatically remove the snapshot dependencies; manual intervention is required.

SQL> CREATE PLUGGABLE DATABASE pt02 FROM ppq01
2  FILE_NAME_CONVERT = ('/u02/oradata/CDB4/PPQ01/',
3                       '/u02/oradata/CDB4/PT02/')
4  SNAPSHOT COPY;
Pluggable database created.
Elapsed: 00:00:13.70

The PDB snapshot copy imposes a few restrictions, among which the source database must be opened read-only. This requirement prevents implementation in most production environments, where the database must remain available in read/write mode 24x7. For this reason, ACFS for database files is particularly recommended for test and development environments, where flexibility, speed and space efficiency of the clones are key factors for a highly productive environment.
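The space efficiency mentioned above comes from copy-on-write. A toy sketch of the mechanism, purely illustrative: a snapshot initially shares all blocks with its parent and allocates private copies only for blocks modified after the clone.

```python
# Toy copy-on-write sketch: a snapshot shares all blocks with its parent
# until a block is rewritten; only then is new space allocated.
class COWSnapshot:
    def __init__(self, parent_blocks):
        self.parent = parent_blocks   # shared, read-only view of the parent
        self.private = {}             # blocks rewritten after the snapshot

    def read(self, n):
        return self.private.get(n, self.parent[n])

    def write(self, n, data):
        self.private[n] = data        # copy-on-write: space is consumed only now

base = {0: "SYSTEM", 1: "USERS"}
snap = COWSnapshot(base)
print(len(snap.private))            # 0 -> the clone consumes almost no space
snap.write(1, "USERS*")
print(snap.read(1), snap.read(0))   # USERS* SYSTEM
```

This is why an ACFS-based PDB clone is created in seconds, independently of the source size.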

Graphical representation of how to efficiently create and maintain a test & development database environment:

DB_Snapshot

 

 

4 ASM 12c and ORACLE ENGINEERED SYSTEMS

Oracle has developed a few ASM features to leverage the characteristics of the Engineered Systems. Analyzing the architecture of the Exadata storage, we see how the unique capabilities of ASM make it possible to stripe and mirror data across independent sets of disks grouped in different storage cells.

The sections below describe the implementation of ASM on the Oracle Database Appliance (ODA) and Exadata systems.

 

 

4.1 ASM 12c on Oracle Database Appliance

Oracle Database Appliance is a simple, reliable and affordable system engineered for running database workloads. One of the key characteristics, present since the first version, is the pay-as-you-grow model; it permits activating an increasing number of CPU cores when needed, optimizing the licensing cost. With the new version of the ODA software bundle, Oracle has introduced the Solution-in-a-box configuration, which includes the virtualization layer for hosting Oracle databases and application components on the same appliance, but in separate virtual machines. The next sections highlight how the two configurations are architected and the role played by ASM:

  • ODA Bare metal: available since version one of the appliance, this is still the default configuration proposed by Oracle. Beyond the automated installation process, it is like any other two-node cluster, with all ASM and ACFS features available.

 

ODA_Bare_Metal

 

  • ODA Virtualized: both ODA servers run the Oracle VM Server software, also called Dom0. Each Dom0 hosts the ODA Base (or Dom Base), a privileged virtual machine where the Appliance Manager, Grid Infrastructure and RDBMS binaries are installed. The ODA Base takes advantage of the Xen PCI pass-through technology to provide direct access to the ODA shared disks presented and managed by ASM. This configuration reduces the VM flexibility (no VM migration is allowed), but it guarantees almost no I/O performance penalty. After the Dom Base creation, it is possible to add virtual machines running application components. Those optional application virtual machines are also identified by the name Domain U.

By default, all VMs and templates are stored on a local Oracle VM Server repository, but in order to migrate application virtual machines between the two Oracle VM Servers, a shared repository on the ACFS file system should be created.

The implementation of the Solution-in-a-box guarantees the maximum return on investment of the ODA, because while licensing only the virtual CPUs allocated to the Dom Base, the remaining resources can be assigned to the application components, as shown in the picture below.

ODA_Virtualized

 

 

4.2 ACFS Becomes the default database storage of ODA

Starting from version 12.1.0.2, a fresh installation of the Oracle Database Appliance adopts ACFS as the primary cluster file system to store database files and general-purpose data. Three file systems are created in the ASM disk groups (DATA, RECO and REDO), and new databases are stored in these three ACFS file systems instead of directly in the ASM disk groups.

In case of an ODA upgrade from a previous release to 12.1.0.2, pre-existing databases are not automatically migrated to ACFS, but they can coexist with the new databases created on ACFS.

At any time, the databases can be migrated from ASM to ACFS as a post-upgrade step.

Oracle has decided to promote ACFS as default database storage on ODA environment for the following reasons:

 

  • ACFS provides performance almost equivalent to that of Oracle ASM disk groups.
  • It adds functionalities on an industry-standard POSIX file system.
  • It enables database snapshot copies of PDBs, and of non-CDBs of version 11.2.0.4 or greater.
  • It offers advanced functionality for general-purpose files, such as replication, tagging, encryption, security and auditing.

Databases created on ACFS follow the same Oracle Managed Files (OMF) standard used by ASM.

 

 

4.3 ASM 12c on Exadata Machine

Oracle Exadata Database Machine is now at its fifth hardware generation; the latest software update has embraced the possibility to run virtual environments, but differently from the ODA or other Engineered Systems like the Oracle Virtual Appliance, the VMs are not intended to host application components. ASM plays a key role in the success of the Exadata, because it orchestrates all the storage cells so that they appear as a single entity, while in reality they do not know of, and do not talk to, each other.

The Exadata, available in a wide range of hardware configurations from one-eighth rack to multi-rack, offers great flexibility in the storage setup too. The sections below illustrate what is possible in terms of storage configuration when the Exadata is deployed bare metal and virtualized:

  • Exadata Bare Metal: the default storage configuration stripes three disk groups across all storage cells, guaranteeing the best I/O performance; however, a different configuration can be deployed as a post-installation step. Before changing the storage setup, it is vital to understand and evaluate all the associated consequences: even though in specific cases it can be a meaningful decision, any storage configuration different from the default one results in a shift from optimal performance towards flexibility and workload isolation.

Shown below is a graphical representation of the default Exadata storage setup, compared to a custom configuration where the storage cells have been divided into multiple groups, segmenting the I/O workloads and avoiding disruption between environments.

Exa_BareMetal_Disks_Default

Exa_BareMetal_Disks_Segmented

  • Exadata Virtualized: the installation of the Exadata with the virtualization option requires a first step of meticulous capacity planning, defining the resources to allocate to the virtual machines (CPU and memory) and the size of each ASM disk group (DBFS, DATA, RECO) of the clusters. This last step is particularly important because, unlike the VM resources, the characteristics of the ASM disk groups cannot be changed later.

The new version of the Exadata Deployment Assistant, which generates the configuration file to submit to the Exadata installation process, now permits, in conjunction with the use of Oracle VMs, entering the information related to multiple Grid Infrastructure clusters.

The hardware-based I/O virtualization (so-called Xen SR-IOV virtualization) implemented on the Oracle VMs running on the Exadata database servers guarantees almost native I/O and networking performance over InfiniBand, with lower CPU consumption compared to Xen software I/O virtualization. Unfortunately, this performance advantage comes at the expense of other virtualization features, like load balancing, live migration and VM save/restore operations.

While the Exadata combined with virtualization opens new horizons in terms of database consolidation and licensing optimization, it does not leave any options for the storage configuration. In fact, the only possible user definition is the amount of space to allocate to each disk group; with this information, the installation procedure defines the size of the grid disks on all available storage cells.

Following is a graphical representation of the Exadata storage cells, partitioned to hold three virtualized clusters. For each cluster, ASM access is automatically restricted to the associated grid disks.

Exa_BareMetal_Disk_Virtual

 

 

4.4 ACFS on Linux Exadata Database Machine

Starting from version 12.1.0.2, the Exadata Database Machine running Oracle Linux supports ACFS for database files and general-purpose files, with no functional restrictions.

This makes ACFS an attractive storage alternative for holding: external tables, data loads, scripts and general-purpose files.

In addition, Oracle ACFS on Exadata Database Machines supports database files for the following database versions:

  • Oracle Database 10g Rel. 2 (10.2.0.4 and 10.2.0.5)
  • Oracle Database 11g (11.2.0.4 and higher)
  • Oracle Database 12c (12.1.0.1 and higher)

Since the Exadata Storage Cells do not support database version 10g, ACFS becomes an important storage option for customers wishing to host older databases on their Exadata system.

However, these new configuration options and this flexibility come with one major performance restriction: when ACFS for database files is in use, the Exadata does not support Smart Scan operations and cannot push database operations directly to the storage. Hence, for best performance, it is recommended to store database files on the Exadata storage using ASM disk groups.

As on any other platform, when implementing ACFS on the Exadata Database Machine, snapshots and tagging are supported for both database and general-purpose files, while replication, security, encryption, audit and high-availability NFS functionalities are supported only with general-purpose files.

 

 

5 Conclusion

Oracle Automatic Storage Management 12c is a single integrated solution, designed to manage database files and general-purpose data under different hardware and software configurations. The adoption of ASM and ACFS not only eliminates the need for third-party volume managers and file systems, but also simplifies storage management while offering the best I/O performance and enforcing Oracle best practices. In addition, ASM 12c with the Flex ASM setup removes important limitations of the previous architecture:

  • Availability: the hard dependency between the local ASM instance and the database instances was a single point of failure. Without Flex ASM, the failure of the ASM instance causes the crash of all local database instances.
  • Performance: Flex ASM reduces the network traffic generated among the ASM instances, improving the scalability of the architecture, and makes it easier and faster to keep the ASM metadata synchronized across large clusters. Last but not least, only a few nodes of the cluster have to carry the burden of an ASM instance, leaving additional resources to application processing.

 

Oracle ASM offers a large set of configurations and options; it is now our duty to understand, case by case, when it is relevant to use one setup or another, with the aim of maximizing the performance, availability and flexibility of the infrastructure.

 

 

ODA CPU Capping

####################################################################
# How to reduce the number of active CPU cores on ODA system
####################################################################

--Find the ODA Serial Number
 [root@odanode1 ~]# /usr/sbin/dmidecode -t1 |grep Serial
 Serial Number: 1xxxxxXXXXxG
--Log in to MOS and generate the CPU Key using the ODA Serial Number.
------------------------------------
 -- Target active CPU cores --
 ------------------------------------
    HOSTNAME    |   CPU COUNT
 ---------------|----------------
  odanode1      |       6
 ---------------|----------------
  odanode2      |       6
 --------------------------------
-------------------------------------------------------------------------------
 --Reduce the CPU cores running the following command from the first node only!
 -------------------------------------------------------------------------------
 /opt/oracle/oak/bin/oakcli show core_config_key
 /opt/oracle/oak/bin/oakcli apply core_config_key /tmp/CPU_KEY
------------------------------------------
 --Activity Log
 ------------------------------------------
 [root@odanode1 tmp]# vi CPU_KEY  <--- Store the CPU key generated on MOS
 [root@odanode1 tmp]# /opt/oracle/oak/bin/oakcli show core_config_key
 Optional core_config_key is not applied on this machine yet !
 [root@odanode1 tmp]# pwd
 /tmp
 [root@odanode1 tmp]# /opt/oracle/oak/bin/oakcli apply core_config_key /tmp/CPU_KEY
 INFO: Both nodes get rebooted automatically after applying the license
 Do you want to continue: [Y/N]?:
 Y
 INFO: User has confirmed the reboot
Please enter the root password:
............done
INFO: Applying core_config_key on '192.168.16.25'
 ...
 INFO   : Running as root: /usr/bin/ssh -l root 192.168.16.25 /tmp/tmp_lic_exec.pl
 INFO   : Running as root: /usr/bin/ssh -l root 192.168.16.25 /opt/oracle/oak/bin/oakcli enforce core_config_key /tmp/.lic_file
 Waiting for the Node '192.168.16.25' to reboot...........................
 Node '192.168.16.25' is  rebooted
 Waiting for the Node '192.168.16.25' to be up before applying the license on the node '192.168.16.24'.
 .............................................
 INFO: Applying core_config_key on '192.168.16.24'
 ...
 INFO   : Running as root: /usr/bin/ssh -l root 192.168.16.24 /tmp/tmp_lic_exec.pl
 INFO   : Running as root: /usr/bin/ssh -l root 192.168.16.24 /opt/oracle/oak/bin/oakcli enforce core_config_key /tmp/.lic_file
Broadcast message from root (Fri Jun  7 15:18:34 2013):
The system is going down for reboot NOW!
 [root@odanode1 tmp]#
-------------------------------------------------------------
 --Check the new Number of active cores
 -------------------------------------------------------------
[root@odanode1 ~]# /opt/oracle/oak/bin/oakcli show core_config_key
Host's serialnumber                    =             1xxxxxXXXXxG
 Enabled Cores (per server)             =                        6
 Total Enabled Cores (on two servers)   =                       12
 Server type                            = V1 -> SUN FIRE X4370 M2
 Hyperthreading is enabled. Each core has 2 threads. The operating system displays 12 processors per server.
 [root@odanode1 ~]#
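The enabled core count can also be extracted programmatically, e.g. in a post-change check script. A sketch against a trimmed sample of the report above (a here-doc stands in for the real `oakcli` command):

```shell
# Parse the enabled core count out of `oakcli show core_config_key`
# output. The here-doc replays a trimmed sample of the report; on a
# real ODA, pipe the oakcli command itself into the awk filter.
show_core_config() {
  cat <<'EOF'
Host's serialnumber                    = 1xxxxxXXXXxG
Enabled Cores (per server)             = 6
Total Enabled Cores (on two servers)   = 12
EOF
}
cores=$(show_core_config | awk -F'=' '/Enabled Cores \(per server\)/ {gsub(/ /,"",$2); print $2}')
echo "Enabled cores per server: ${cores}"
```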

	

Oracle Database Appliance Bundle 2.6

##################################################################
# Installation Oracle Database Appliance (ODA) bundle patch 2.6.0.0
##################################################################

--Path where all ODA logs are stored:
 /opt/oracle/oak/log/odanode1/patch/2.6.0.0.0
-------------------------------------------------
 --ODA Software version before patching
 -------------------------------------------------
 [root@odanode1 bin]# /opt/oracle/oak/bin/oakcli show version -detail
 Reading the metadata. It takes a while...
 System Version   Component Name   Installed Version        Supported Version
 --------------   --------------   -----------------        -----------------
 2.4.0.0.0
                  Controller       11.05.02.00              Up-to-date
                  Expander         0342                     Up-to-date
                  SSD_SHARED       E125                     Up-to-date
                  HDD_LOCAL        5G08                     Up-to-date
                  HDD_SHARED       A700                     A6C0
                  ILOM             3.0.16.22.a r75629       Up-to-date
                  BIOS             12010310                 Up-to-date
                  IPMI             1.8.10.5                 Up-to-date
                  HMP              2.2.4                    Up-to-date
                  OAK              2.4.0.0.0                Up-to-date
                  OEL              5.8                      Up-to-date
                  TFA              2.4                      Up-to-date
                  GI_HOME          11.2.0.3.4(14275605,     Up-to-date
                                   14275572)
                  DB_HOME          11.2.0.3.4(14275605,     Up-to-date
                                   14275572)
                  ASR              Unknown                  3.9
 [root@odanode1 bin]#
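The components the bundle patch will upgrade are those not marked "Up-to-date" in this report, and they can be listed with a one-line filter. A sketch using a trimmed, hypothetical sample of the output:

```shell
# List the components that are not marked "Up-to-date" in the
# `oakcli show version -detail` report, i.e. those the bundle patch
# will upgrade. The here-doc is a trimmed, hypothetical sample.
version_report() {
  cat <<'EOF'
Controller   11.05.02.00   Up-to-date
HDD_SHARED   A700          A6C0
ILOM         3.0.16.22.a   Up-to-date
ASR          Unknown       3.9
EOF
}
version_report | awk '$NF != "Up-to-date" {print $1, "->", $NF}'
```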
####################################################################################################
-------------------------------------------------
 --Unzip the patch bundle 2.6.0.0.0
 -------------------------------------------------
 --ODA Node 1
 [root@odanode1 u01]# cd /opt/oracle/oak/bin
 [root@odanode1 bin]# ./oakcli unpack -package /u01/ODA_patches/bundle_2600/p16744915_26000_Linux-x86-64.zip
 Unpacking takes a while,  pls wait....
 Successfully unpacked the files to repository.
 [root@odanode1 bin]#
--ODA Node 2
 [root@odanode2 u01]# cd /opt/oracle/oak/bin
 [root@odanode2 bin]# ./oakcli unpack -package /u01/ODA_patches/bundle_2600/p16744915_26000_Linux-x86-64.zip
 Unpacking takes a while,  pls wait....
 Successfully unpacked the files to repository.
 [root@odanode2 bin]#
-------------------------------------------------
 --Apply the Patch to the Infrastructure
 -------------------------------------------------
--ODA Node 1 ONLY
 [root@odanode1 bin]# cd /opt/oracle/oak/bin
 [root@odanode1 bin]# ./oakcli update -patch 2.6.0.0.0 --infra
 INFO: DB, ASM, Clusterware may be stopped during the patch if required
 INFO: Both nodes may get rebooted automatically during the patch if required
 Do you want to continue: [Y/N]?: Y
 INFO: User has confirmed the reboot
 INFO: Patch bundle must be unpacked on the second node also before applying this patch
 Did you unpack the patch bundle on the second node?: [Y/N]?: Y
Please enter the 'root' user password:
 Please re-enter the 'root' user password:
 INFO: Setting up the SSH
 ..........done
 INFO: Running pre-install scripts
 ..........done
 INFO: 2013-05-15 11:04:11: Running pre patch script for 2.6.0.0.0
 INFO: 2013-05-15 11:04:14: Completed pre patch script for 2.6.0.0.0
INFO: 2013-05-15 11:04:19: ------------------Patching HMP-------------------------
 SUCCESS: 2013-05-15 11:04:50: Successfully upgraded the HMP
INFO: 2013-05-15 11:04:50: ----------------------Patching OAK---------------------
 SUCCESS: 2013-05-15 11:05:13: Succesfully upgraded OAK
INFO: 2013-05-15 11:05:15: -----------------Installing / Patching  TFA-----------------
 SUCCESS: 2013-05-15 11:06:55: Successfully updated / installed the TFA
 ...
INFO: 2013-05-15 11:06:56: ------------------Patching OS-------------------------
 INFO: 2013-05-15 11:07:05: Clusterware is running on one or more nodes of the cluster
 INFO: 2013-05-15 11:07:05: Attempting to stop clusterware and its resources across the cluster
 SUCCESS: 2013-05-15 11:09:08: Successfully stopped the clusterware
SUCCESS: 2013-05-15 11:09:49: Successfully upgraded the OS
INFO: 2013-05-15 11:09:53: ----------------------Patching IPMI---------------------
 SUCCESS: 2013-05-15 11:09:55: Succesfully upgraded IPMI
INFO: 2013-05-15 11:10:02: ----------------Patching the Storage-------------------
 INFO: 2013-05-15 11:10:02: ....................Patching SSDs...............
 INFO: 2013-05-15 11:10:02: Updating the  Disk : d20 with the firmware : ZeusIOPs G3 E12B
 SUCCESS: 2013-05-15 11:10:27: Successfully updated the firmware on  Disk : d20 to ZeusIOPs G3 E12B
 INFO: 2013-05-15 11:10:27: Updating the  Disk : d21 with the firmware : ZeusIOPs G3 E12B
 SUCCESS: 2013-05-15 11:10:48: Successfully updated the firmware on  Disk : d21 to ZeusIOPs G3 E12B
 INFO: 2013-05-15 11:10:48: Updating the  Disk : d22 with the firmware : ZeusIOPs G3 E12B
 SUCCESS: 2013-05-15 11:11:10: Successfully updated the firmware on  Disk : d22 to ZeusIOPs G3 E12B
 INFO: 2013-05-15 11:11:11: Updating the  Disk : d23 with the firmware : ZeusIOPs G3 E12B
 SUCCESS: 2013-05-15 11:11:34: Successfully updated the firmware on  Disk : d23 to ZeusIOPs G3 E12B
 INFO: 2013-05-15 11:11:34: ....................Patching shared HDDs...............
 INFO: 2013-05-15 11:11:34: Disk : d0  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:34: Disk : d1  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:34: Disk : d2  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:35: Disk : d3  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:35: Disk : d4  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:35: Disk : d5  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:35: Disk : d6  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:35: Disk : d7  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:36: Disk : d8  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:36: Disk : d9  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:36: Disk : d10  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:36: Disk : d11  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:37: Disk : d12  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:37: Disk : d13  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:37: Disk : d14  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:37: Disk : d15  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:38: Disk : d16  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:38: Disk : d17  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:38: Disk : d18  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:38: Disk : d19  is already running with : HUS1560SCSUN600G A700
 INFO: 2013-05-15 11:11:38: ....................Patching local HDDs...............
 INFO: 2013-05-15 11:11:38: Disk : c0d0  is already running with : WD500BLHXSUN 5G08
 INFO: 2013-05-15 11:11:39: Disk : c0d1  is already running with : WD500BLHXSUN 5G08
 INFO: 2013-05-15 11:11:39: ....................Patching Expanders...............
 INFO: 2013-05-15 11:11:39: Expander : x0  is already running with : T4 Storage 0342
 INFO: 2013-05-15 11:11:39: Expander : x1  is already running with : T4 Storage 0342
 INFO: 2013-05-15 11:11:39: ....................Patching Controllers...............
 INFO: 2013-05-15 11:11:39: No-update for the Controller: c0
 INFO: 2013-05-15 11:11:39: Controller : c1  is already running with : 0x0072 11.05.02.00
 INFO: 2013-05-15 11:11:39: Controller : c2  is already running with : 0x0072 11.05.02.00
 INFO: 2013-05-15 11:11:39: ------------Finished the storage Patching------------
INFO: 2013-05-15 11:11:40: -----------------Patching Ilom & Bios-----------------
 INFO: 2013-05-15 11:11:41: Getting the ILOM Ip address
 INFO: 2013-05-15 11:11:42: Updating the Ilom using LAN+ protocol
 INFO: 2013-05-15 11:11:43: Updating the ILOM. It takes a while
 INFO: 2013-05-15 11:16:24: Verifying the updated Ilom Version, it may take a while if ServiceProcessor is booting
 INFO: 2013-05-15 11:16:25: Waiting for the service processor to be up
 SUCCESS: 2013-05-15 11:20:09: Successfully updated the ILOM with the firmware 3.0.16.22.b r78329
INFO: Patching the infrastructure on node: odanode2 , it may take upto 30 minutes. Please wait
 ...
 ............done
INFO: Infrastructure patching summary on node: 192.168.16.24
 SUCCESS: 2013-05-15 11:31:05:  Successfully upgraded the HMP
 SUCCESS: 2013-05-15 11:31:05:  Succesfully updated the OAK
 SUCCESS: 2013-05-15 11:31:05:  Successfully updated the TFA
 SUCCESS: 2013-05-15 11:31:05:  Successfully upgraded the OS
 SUCCESS: 2013-05-15 11:31:05:  Succesfully updated the IPMI
 INFO: 2013-05-15 11:31:05:  Storage patching summary
 SUCCESS: 2013-05-15 11:31:05:  No failures during storage upgrade
 SUCCESS: 2013-05-15 11:31:05:  Successfully updated the ILOM & Bios
INFO: Infrastructure patching summary on node: 192.168.16.25
 SUCCESS: 2013-05-15 11:31:05:  Successfully upgraded the HMP
 SUCCESS: 2013-05-15 11:31:05:  Succesfully updated the OAK
 SUCCESS: 2013-05-15 11:31:05:  Successfully upgraded the OS
 SUCCESS: 2013-05-15 11:31:05:  Succesfully updated the IPMI
 INFO: 2013-05-15 11:31:05:  Storage patching summary
 SUCCESS: 2013-05-15 11:31:05:  No failures during storage upgrade
 SUCCESS: 2013-05-15 11:31:05:  Successfully updated the ILOM & Bios
INFO: Running post-install scripts
 ............done
 INFO: Some of the patched components require node reboot. Rebooting the nodes
 INFO: Setting up the SSH
 ............done
Broadcast message from root (Wed May 15 11:35:50 2013):
The system is going down for system halt NOW!
-------------------------------------------------
 --Apply the Patch to the Grid Infrastructure
 -------------------------------------------------
--ODA on BOTH Nodes
 [oracle@odanode1 OPatch]$ /u01/app/oracle/product/agent12c/agent_inst/bin/emctl stop agent
 Oracle Enterprise Manager Cloud Control 12c Release 2
 Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
 Stopping agent ..... stopped.
--ODA Node 1 ONLY
 [root@odanode1 bin]# cd /opt/oracle/oak/bin
 [root@odanode1 bin]# ./oakcli update -patch 2.6.0.0.0 --gi
Please enter the 'root' user password:
 Please re-enter the 'root' user password:
Please enter the 'grid' user password:
 Please re-enter the 'grid' user password:
 INFO: Setting up the SSH
 ..........done
 ...
 ...
..........done
 ...
 SUCCESS: All nodes in /opt/oracle/oak/temp_clunodes.txt are pingable and alive.
 INFO: 2013-05-15 11:56:10: Setting up the ssh for grid user
 ..........done
 ...
 SUCCESS: All nodes in /opt/oracle/oak/temp_clunodes.txt are pingable and alive.
 INFO: 2013-05-15 11:56:30: Patching the GI home on node odanode1
 INFO: 2013-05-15 11:56:30: Updating the opatch
 INFO: 2013-05-15 11:56:56: Performing the conflict checks
 SUCCESS: 2013-05-15 11:57:07: Conflict checks passed for all the homes
 INFO: 2013-05-15 11:57:07: Checking if the patch is already applied on any of the homes
 INFO: 2013-05-15 11:57:10: No home is already up-to-date
 SUCCESS: 2013-05-15 11:57:21: Successfully stopped the dbconsoles
 SUCCESS: 2013-05-15 11:57:36: Successfully stopped the EM agents
 INFO: 2013-05-15 11:57:41: Applying patch on the homes: /u01/app/11.2.0.3/grid
 INFO: 2013-05-15 11:57:41: It may take upto 15 mins
 SUCCESS: 2013-05-15 12:08:27: Successfully applied the patch on home: /u01/app/11.2.0.3/grid
 SUCCESS: 2013-05-15 12:08:27: Successfully started the dbconsoles
 SUCCESS: 2013-05-15 12:08:38: Successfully started the EM Agents
 INFO: 2013-05-15 12:08:39: Patching the GI home on node odanode2
 ...
..........done
INFO: GI patching summary on node: odanode1
 SUCCESS: 2013-05-15 12:22:58:  Successfully applied the patch on home /u01/app/11.2.0.3/grid
INFO: GI patching summary on node: odanode2
 SUCCESS: 2013-05-15 12:22:58:  Successfully applied the patch on home /u01/app/11.2.0.3/grid
INFO: Running post-install scripts
 ..........done
 INFO: Setting up the SSH
 ..........done
[root@odanode1 bin]#
[root@odanode2 ~]# su - grid
 [grid@odanode2 ~]$ cd /u01/app/11.2.0.3/grid/OPatch/
 [grid@odanode2 OPatch]$ ./opatch lsinv
 Oracle Interim Patch Installer version 11.2.0.3.4
 Copyright (c) 2012, Oracle Corporation.  All rights reserved.
Oracle Home       : /u01/app/11.2.0.3/grid
 Central Inventory : /u01/app/oraInventory
 from           : /u01/app/11.2.0.3/grid/oraInst.loc
 OPatch version    : 11.2.0.3.4
 OUI version       : 11.2.0.3.0
 Log file location : /u01/app/11.2.0.3/grid/cfgtoollogs/opatch/opatch2013-05-15_12-33-15PM_1.log
Lsinventory Output file location : /u01/app/11.2.0.3/grid/cfgtoollogs/opatch/lsinv/lsinventory2013-05-15_12-33-15PM.txt
--------------------------------------------------------------------------------
 Installed Top-level Products (1):
Oracle Grid Infrastructure                                           11.2.0.3.0
 There are 1 products installed in this Oracle Home.
Interim patches (2) :
Patch  16056266     : applied on Wed May 15 12:18:38 CEST 2013
 Unique Patch ID:  15962803
 Patch description:  "Database Patch Set Update : 11.2.0.3.6 (16056266)"
 Created on 12 Mar 2013, 02:14:47 hrs PST8PDT
 Sub-patch  14727310; "Database Patch Set Update : 11.2.0.3.5 (14727310)"
 Sub-patch  14275605; "Database Patch Set Update : 11.2.0.3.4 (14275605)"
 Sub-patch  13923374; "Database Patch Set Update : 11.2.0.3.3 (13923374)"
 Sub-patch  13696216; "Database Patch Set Update : 11.2.0.3.2 (13696216)"
 Sub-patch  13343438; "Database Patch Set Update : 11.2.0.3.1 (13343438)"
 Bugs fixed:
 13566938, 13593999, 10350832, 14138130, 12919564, 13561951, 13624984
 13588248, 13080778, 13914613, 13804294, 14258925, 12873183, 13645875
 14472647, 12880299, 14664355, 14409183, 12998795, 14469008, 13719081
 13492735, 13496884, 12857027, 14263036, 14263073, 13732226, 13742433
 16368108, 16314469, 12905058, 13742434, 12849688, 12950644, 13742435
 13464002, 13534412, 12879027, 13958038, 14613900, 12585543, 12535346
 12588744, 11877623, 13786142, 12847466, 13649031, 13981051, 12582664
 12797765, 14262913, 12923168, 13384182, 13612575, 13466801, 13484963
 14207163, 11063191, 13772618, 13070939, 12797420, 13041324, 16314467
 16314468, 12976376, 11708510, 13680405, 14589750, 13026410, 13742437
 13737746, 14644185, 13742438, 13326736, 13596521, 13001379, 16344871
 13099577, 9873405, 14275605, 13742436, 9858539, 14841812, 11715084
 16231699, 14040433, 12662040, 9703627, 12617123, 12845115, 12764337
 13354082, 14459552, 13397104, 13913630, 12964067, 12983611, 13550185
 13810393, 12780983, 12583611, 14546575, 13476583, 15862016, 11840910
 13903046, 15862017, 13572659, 16294378, 13718279, 14088346, 13657605
 13448206, 16314466, 14480676, 13419660, 13632717, 14063281, 14110275
 13430938, 13467683, 13420224, 13812031, 14548763, 16299830, 12646784
 13616375, 14035825, 12861463, 12834027, 15862021, 13632809, 13377816
 13036331, 14727310, 13685544, 15862018, 13499128, 16175381, 13584130
 12829021, 15862019, 12794305, 14546673, 12791981, 13787482, 13503598
 10133521, 12718090, 13399435, 14023636, 13860201, 12401111, 13257247
 13362079, 14176879, 12917230, 13923374, 14220725, 14480675, 13524899
 13559697, 9706792, 14480674, 13916709, 13098318, 13773133, 14076523
 13340388, 13366202, 13528551, 12894807, 13454210, 13343438, 12748240
 14205448, 13385346, 15853081, 14273397, 12971775, 13582702, 10242202
 13035804, 13544396, 16382353, 8547978, 14226599, 14062795, 13035360
 12693626, 13332439, 14038787, 14062796, 12913474, 14841409, 14390252
 16314470, 13370330, 13059165, 14062797, 14062794, 12959852, 13358781
 12345082, 12960925, 9659614, 13699124, 14546638, 13936424, 13338048
 12938841, 12658411, 12620823, 12656535, 14062793, 12678920, 13038684
 14062792, 13807411, 13250244, 12594032, 15862022, 9761357, 12612118
 13742464, 14052474, 13911821, 13457582, 13527323, 15862020, 13910420
 13502183, 12780098, 13705338, 13696216, 14841558, 10263668, 15862023
 16056266, 15862024, 13554409, 13645917, 13103913, 13011409, 14063280
Patch  16315641     : applied on Wed May 15 12:17:13 CEST 2013
 Unique Patch ID:  15966967
 Patch description:  "Grid Infrastructure Patch Set Update : 11.2.0.3.6 (16083653)"
 Created on 1 Apr 2013, 03:41:20 hrs PST8PDT
 Bugs fixed:
 16315641, 15876003, 14275572, 13919095, 13696251, 13348650, 12659561
 14305980, 14277586, 13987807, 14625969, 13825231, 12794268, 13000491
 13498267, 11675721, 14082976, 12771830, 14515980, 14085018, 13943175
 14102704, 14171552, 12594616, 13879428, 12897902, 12726222, 12829429
 13079948, 13090686, 12995950, 13251796, 13582411, 12990582, 13857364
 13082238, 12947871, 13256955, 13037709, 14535011, 12878750, 14048512
 11772838, 13058611, 13001955, 13440962, 13727853, 13425727, 12885323
 12870400, 14212634, 14407395, 13332363, 13430626, 13811209, 12709476
 14168708, 14096821, 14626717, 13460353, 13694885, 12857064, 12899169
 13111013, 12558569, 13323698, 10260842, 13085732, 10317921, 13869978
 12914824, 13789135, 12730342, 12950823, 13355963, 13531373, 14268365
 13776758, 12720728, 13620816, 13023609, 13024624, 13039908, 13036424
 13938166, 13011520, 13569812, 12758736, 13001901, 13077654, 13430715
 13550689, 13806545, 13634583, 14271305, 12538907, 13947200, 12996428
 13066371, 13483672, 12897651, 13540563, 12896850, 13241779, 12728585
 12876314, 12925041, 12650672, 12398492, 12848480, 13652088, 16307750
 12917897, 12975811, 13653178, 13371153, 14800989, 10114953, 14001941
 11836951, 14179376, 12965049, 14773530, 12765467, 13339443, 13965075
 16210540, 14307855, 12784559, 14242977, 13955385, 12704789, 13745317
 13074261, 12971251, 13993634, 13523527, 13719731, 13396284, 12639013
 12867511, 12959140, 14748254, 12829917, 12349553, 12849377, 12934171
 13843080, 14496536, 13924431, 12680491, 13334158, 10418841, 12832204
 13838047, 13002015, 12791719, 13886023, 13821454, 12782756, 14100232
 14186070, 14569263, 12873909, 13845120, 14214257, 12914722, 12842804
 12772345, 12663376, 14059576, 13889047, 12695029, 13924910, 13146560
 14070200, 13820621, 14304758, 12996572, 13941934, 14711358, 13019958
 13888719, 16463033, 12823838, 13877508, 12823042, 14494305, 13582706
 13617861, 12825835, 13025879, 13853089, 13410987, 13570879, 13247273
 13255295, 14152875, 13912373, 13011182, 13243172, 13045518, 12765868
 11825850, 15986571, 13345868, 13683090, 12932852, 13038806, 14588629
 14251904, 13396356, 13697828, 12834777, 13258062, 14371335, 13657366
 12810890, 15917085, 13502441, 14637577, 13880925, 13726162, 14153867
 13506114, 12820045, 13604057, 13263435, 14009845, 12827493, 13637590, 13068077
Rac system comprising of multiple nodes
 Local node = odanode2
 Remote node = odanode1
--------------------------------------------------------------------------------
OPatch succeeded.
 [grid@odanode2 OPatch]$
--Stop CRS on both Nodes
 [root@odanode1 2.6.0.0.0]# /u01/app/11.2.0.3/grid/bin/crsctl stop crs
 CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'odanode1'
 CRS-2673: Attempting to stop 'ora.crsd' on 'odanode1'
 CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'odanode1'
 CRS-2673: Attempting to stop 'ora.cvu' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.efboeur.efbo_applb.efow.com.svc' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.efcteur.efct_applb.efow.com.svc' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.efpheur.efph_applb.efow.com.svc' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'odanode1'
 CRS-2677: Stop of 'ora.efboeur.efbo_applb.efow.com.svc' on 'odanode1' succeeded
 CRS-2677: Stop of 'ora.efpheur.efph_applb.efow.com.svc' on 'odanode1' succeeded
 CRS-2677: Stop of 'ora.efcteur.efct_applb.efow.com.svc' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.efboeur.db' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.efpheur.db' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.efcteur.db' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.registry.acfs' on 'odanode1'
 CRS-2677: Stop of 'ora.cvu' on 'odanode1' succeeded
 CRS-2672: Attempting to start 'ora.cvu' on 'odanode2'
 CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.odanode1.vip' on 'odanode1'
 CRS-2676: Start of 'ora.cvu' on 'odanode2' succeeded
 CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.scan2.vip' on 'odanode1'
 CRS-2677: Stop of 'ora.registry.acfs' on 'odanode1' succeeded
 CRS-2677: Stop of 'ora.efboeur.db' on 'odanode1' succeeded
 CRS-2677: Stop of 'ora.efpheur.db' on 'odanode1' succeeded
 CRS-2677: Stop of 'ora.efcteur.db' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.RECO.dg' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.REDO.dg' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.DATA.dg' on 'odanode1'
 CRS-2677: Stop of 'ora.odanode1.vip' on 'odanode1' succeeded
 CRS-2672: Attempting to start 'ora.odanode1.vip' on 'odanode2'
 CRS-2677: Stop of 'ora.REDO.dg' on 'odanode1' succeeded
 CRS-2677: Stop of 'ora.scan2.vip' on 'odanode1' succeeded
 CRS-2672: Attempting to start 'ora.scan2.vip' on 'odanode2'
 CRS-2677: Stop of 'ora.RECO.dg' on 'odanode1' succeeded
 CRS-2676: Start of 'ora.odanode1.vip' on 'odanode2' succeeded
 CRS-2676: Start of 'ora.scan2.vip' on 'odanode2' succeeded
 CRS-2677: Stop of 'ora.DATA.dg' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.asm' on 'odanode1'
 CRS-2677: Stop of 'ora.asm' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.ons' on 'odanode1'
 CRS-2677: Stop of 'ora.ons' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.net1.network' on 'odanode1'
 CRS-2677: Stop of 'ora.net1.network' on 'odanode1' succeeded
 CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'odanode1' has completed
 CRS-2677: Stop of 'ora.crsd' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.ctssd' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.evmd' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.asm' on 'odanode1'
 CRS-2673: Attempting to stop 'ora.mdnsd' on 'odanode1'
 CRS-2677: Stop of 'ora.evmd' on 'odanode1' succeeded
 CRS-2677: Stop of 'ora.mdnsd' on 'odanode1' succeeded
 CRS-2677: Stop of 'ora.ctssd' on 'odanode1' succeeded
 CRS-2677: Stop of 'ora.drivers.acfs' on 'odanode1' succeeded
 CRS-2677: Stop of 'ora.asm' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'odanode1'
 CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.cssd' on 'odanode1'
 CRS-2677: Stop of 'ora.cssd' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.crf' on 'odanode1'
 CRS-2677: Stop of 'ora.crf' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.gipcd' on 'odanode1'
 CRS-2677: Stop of 'ora.gipcd' on 'odanode1' succeeded
 CRS-2673: Attempting to stop 'ora.gpnpd' on 'odanode1'
 CRS-2677: Stop of 'ora.gpnpd' on 'odanode1' succeeded
 CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'odanode1' has completed
 CRS-4133: Oracle High Availability Services has been stopped.
 [root@odanode1 2.6.0.0.0]#
--Start CRS on both Nodes
 [root@odanode1 2.6.0.0.0]# /u01/app/11.2.0.3/grid/bin/crsctl start crs
 CRS-4123: Oracle High Availability Services has been started.
--Check GI status
 [root@odanode1 2.6.0.0.0]# /u01/app/11.2.0.3/grid/bin/crsctl status res -t
 --------------------------------------------------------------------------------
 NAME           TARGET  STATE        SERVER                   STATE_DETAILS
 --------------------------------------------------------------------------------
 Local Resources
 --------------------------------------------------------------------------------
 ora.DATA.dg
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.LISTENER.lsnr
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.RECO.dg
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.REDO.dg
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.asm
 ONLINE  ONLINE       odanode1               Started
 ONLINE  ONLINE       odanode2               Started
 ora.gsd
 OFFLINE OFFLINE      odanode1
 OFFLINE OFFLINE      odanode2
 ora.net1.network
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.ons
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.registry.acfs
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 --------------------------------------------------------------------------------
 Cluster Resources
 --------------------------------------------------------------------------------
 ora.LISTENER_SCAN1.lsnr
 1        ONLINE  ONLINE       odanode1
 ora.LISTENER_SCAN2.lsnr
 1        ONLINE  ONLINE       odanode2
 ora.cvu
 1        ONLINE  ONLINE       odanode2
 ora.efboeur.db
 1        ONLINE  ONLINE       odanode1               Open
 2        ONLINE  ONLINE       odanode2               Open
 ora.efboeur.efbo_applb.efow.com.svc
 1        ONLINE  ONLINE       odanode1
 2        ONLINE  ONLINE       odanode2
 ora.efboeur.efbo_report.efow.com.svc
 1        OFFLINE OFFLINE
 2        OFFLINE OFFLINE
 ora.efcteur.db
 1        ONLINE  ONLINE       odanode1               Open
 2        ONLINE  ONLINE       odanode2               Open
 ora.efcteur.efct_applb.efow.com.svc
 1        ONLINE  ONLINE       odanode1
 2        ONLINE  ONLINE       odanode2
 ora.efcteur.efct_report.efow.com.svc
 1        OFFLINE OFFLINE
 2        OFFLINE OFFLINE
 ora.odanode1.vip
 1        ONLINE  ONLINE       odanode1
 ora.odanode2.vip
 1        ONLINE  ONLINE       odanode2
 ora.efpheur.db
 1        ONLINE  ONLINE       odanode1               Open
 2        ONLINE  ONLINE       odanode2               Open
 ora.efpheur.efph_applb.efow.com.svc
 1        ONLINE  ONLINE       odanode1
 2        ONLINE  ONLINE       odanode2
 ora.efpheur.efph_report.efow.com.svc
 1        OFFLINE OFFLINE
 2        OFFLINE OFFLINE
 ora.oc4j
 1        ONLINE  ONLINE       odanode2
 ora.scan1.vip
 1        ONLINE  ONLINE       odanode1
 ora.scan2.vip
 1        ONLINE  ONLINE       odanode2
 [root@odanode1 2.6.0.0.0]#
-------------------------------------------------
 --Apply the Patch to the RDBMS
 -------------------------------------------------
 --Check the RDBMS patch level before applying the PSU
 [root@odanode1 bin]# /opt/oracle/oak/bin/oakcli show databases
 Database Name    Database Type   Database HomeName    Database HomeLocation                       Database Version
 ---------------  --------------  -------------------  ------------------------------------------  -----------------------------
 efboeur          RAC             OraDb11203_home1     /u01/app/oracle/product/11.2.0.3/dbhome_1   11.2.0.3.4(14275605,14275572)
 efcteur          RAC             OraDb11203_home1     /u01/app/oracle/product/11.2.0.3/dbhome_1   11.2.0.3.4(14275605,14275572)
 efpheur          RAC             OraDb11203_home1     /u01/app/oracle/product/11.2.0.3/dbhome_1   11.2.0.3.4(14275605,14275572)
 [root@odanode1 bin]#
--ODA on BOTH Nodes
 [oracle@odanode1 OPatch]$ /u01/app/oracle/product/agent12c/agent_inst/bin/emctl stop agent
 Oracle Enterprise Manager Cloud Control 12c Release 2
 Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
 Stopping agent ..... stopped.
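The agent must be stopped on both nodes before the database patch is applied. A minimal, hypothetical sketch of doing this from one node (the node names and agent path are taken from the output above; the dry-run approach is an assumption, adjust to your environment):

```shell
#!/bin/sh
# Hypothetical sketch: stop the EM 12c agent on both ODA nodes before patching
# the RDBMS homes. Node names and the agent path match the session above, but
# verify them on your own appliance.
AGENT_HOME=/u01/app/oracle/product/agent12c/agent_inst

stop_agents() {
  for node in odanode1 odanode2; do
    # Print the command first; pipe the function to sh to actually execute it.
    echo "ssh oracle@$node $AGENT_HOME/bin/emctl stop agent"
  done
}

stop_agents        # review the commands, then run: stop_agents | sh
```

Printing the commands before executing them makes it easy to double-check the target nodes on a shared appliance.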
--In addition, to avoid issues while patching the RDBMS:
 --ODA on BOTH Nodes
 [root@efoda01n1 ~]# /sbin/fuser /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/libclntsh.so.11.1
 /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/libclntsh.so.11.1: 18877m 18911m
[root@efoda01n1 ~]# ps -ef|grep 18877
 oracle   18877 18791  0 10:06 ?        00:00:22 /u01/app/oracle/product/11.2.0.3/dbhome_1/jdk/bin/java -server -Xmx384M -XX:MaxPermSize=400M -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -DORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1 -Doracle.home=/u01/app/oracle/product/11.2.0.3/dbhome_1/oc4j -Doracle.oc4j.localhome=/u01/app/oracle/product/11.2.0.3/dbhome_1/efoda01n1_test/sysman -DEMSTATE=/u01/app/oracle/product/11.2.0.3/dbhome_1/efoda01n1_test -Doracle.j2ee.dont.use.memory.archive=true -Djava.protocol.handler.pkgs=HTTPClient -Doracle.security.jazn.config=/u01/app/oracle/product/11.2.0.3/dbhome_1/oc4j/j2ee/OC4J_DBConsole_efoda01n1_test/config/jazn.xml -Djava.security.policy=/u01/app/oracle/product/11.2.0.3/dbhome_1/oc4j/j2ee/OC4J_DBConsole_efoda01n1_test/config/java2.policy -Djavax.net.ssl.KeyStore=/u01/app/oracle/product/11.2.0.3/dbhome_1/sysman/config/OCMTrustedCerts.txt-Djava.security.properties=/u01/app/oracle/product/11.2.0.3/dbhome_1/oc4j/j2ee/home/config/jazn.security.props -DEMDROOT=/u01/app/oracle/product/11.2.0.3/dbhome_1/efoda01n1_test -Dsysman.md5password=true -Drepapi.oracle.home=/u01/app/oracle/product/11.2.0.3/dbhome_1 -Ddisable.checkForUpdate=true -Doracle.sysman.ccr.ocmSDK.websvc.keystore=/u01/app/oracle/product/11.2.0.3/dbhome_1/jlib/emocmclnt.ks -Dice.pilots.html4.ignoreNonGenericFonts=true -Djava.awt.headless=true -jar /u01/app/oracle/product/11.2.0.3/dbhome_1/oc4j/j2ee/home/oc4j.jar -config /u01/app/oracle/product/11.2.0.3/dbhome_1/oc4j/j2ee/OC4J_DBConsole_efoda01n1_test/config/server.xml
[root@efoda01n1 ~]# kill -9 18877 18911
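The fuser/kill step above can be scripted: fuser lists each PID with an access-mode suffix (m = mmap), which has to be stripped before the PIDs can be passed to kill. A small sketch, using the fuser output captured above as a sample (on a live node you would pipe the real fuser output instead):

```shell
#!/bin/sh
# Sketch: parse fuser output for the client library and derive the PIDs holding
# it open. The sample line copies the session above; on a live node replace it
# with the output of:
#   fuser /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/libclntsh.so.11.1
sample='/u01/app/oracle/product/11.2.0.3/dbhome_1/lib/libclntsh.so.11.1: 18877m 18911m'

# Drop the "path:" prefix, then the access-mode suffix (m = mmap) on each PID.
pids=$(echo "$sample" | cut -d: -f2 | tr -d 'm')
echo $pids            # prints "18877 18911"
# kill -9 $pids       # only after confirming they are dbconsole/agent processes
```

Always check with `ps -ef` what the PIDs belong to (as done above) before killing them; here they were the local dbconsole OC4J processes.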
--ODA Node 1 ONLY
 [root@odanode1 bin]# ./oakcli update -patch 2.6.0.0.0 --database
Please enter the 'root' user password:
 Please re-enter the 'root' user password:
Please enter the 'oracle' user password:
 Please re-enter the 'oracle' user password:
 INFO: Setting up the SSH
 ..........done
 ...
 ...
..........done
 ...
 SUCCESS: All nodes in /opt/oracle/oak/temp_clunodes.txt are pingable and alive.
 INFO: 2013-05-15 15:10:16: Getting the possible database homes for patching
 ...
 INFO: 2013-05-15 15:10:21: Patching 11.2.0.3 Database homes on node odanode1
Found the following 11.2.0.3 homes possible for patching:
HOME_NAME                      HOME_LOCATION
 ---------                      -------------
 OraDb11203_home1               /u01/app/oracle/product/11.2.0.3/dbhome_1
[Please note that few of the above database homes may be already up-to-date. They will be automatically ignored]
Would you like to patch all the above homes: Y | N ? :Y
 INFO: 2013-05-15 15:15:48: Setting up ssh for the user oracle
 ..........done
 ...
 SUCCESS: All nodes in /opt/oracle/oak/temp_clunodes.txt are pingable and alive.
 INFO: 2013-05-15 15:16:07: Updating the opatch
 INFO: 2013-05-15 15:16:24: Performing the conflict checks
 SUCCESS: 2013-05-15 15:16:42: Conflict checks passed for all the homes
 INFO: 2013-05-15 15:16:42: Checking if the patch is already applied on any of the homes
 INFO: 2013-05-15 15:16:46: No home is already up-to-date
 SUCCESS: 2013-05-15 15:16:52: Successfully stopped the dbconsoles
 SUCCESS: 2013-05-15 15:16:58: Successfully stopped the EM agents
 INFO: 2013-05-15 15:17:03: Applying patch on the homes: /u01/app/oracle/product/11.2.0.3/dbhome_1
 INFO: 2013-05-15 15:17:03: It may take upto 15 mins
 SUCCESS: 2013-05-15 15:21:35: Successfully applied the patch on home: /u01/app/oracle/product/11.2.0.3/dbhome_1
 SUCCESS: 2013-05-15 15:21:35: Successfully started the dbconsoles
 SUCCESS: 2013-05-15 15:21:35: Successfully started the EM Agents
 INFO: 2013-05-15 15:21:37: Patching 11.2.0.3 Database homes on node odanode2
 INFO: 2013-05-15 15:22:11: Running the catbundle.sql
 INFO: 2013-05-15 15:22:18: Running catbundle.sql on the database efboeur
 INFO: 2013-05-15 15:22:26: Running catbundle.sql on the database efcteur
 INFO: 2013-05-15 15:22:35: Running catbundle.sql on the database efpheur
..........done
INFO: DB patching summary on node: odanode1
 SUCCESS: 2013-05-15 15:22:57:  Successfully applied the patch on home /u01/app/oracle/product/11.2.0.3/dbhome_1
INFO: DB patching summary on node: odanode2
 INFO: 2013-05-15 15:22:57:  Homes /u01/app/oracle/product/11.2.0.3/dbhome_1 are already up-to-date
INFO: Setting up the SSH
 ..........done
[root@odanode1 bin]#
[root@odanode1 2.6.0.0.0]# /opt/oracle/oak/bin/oakcli show databases
 Database Name    Database Type   Database HomeName    Database HomeLocation                       Database Version
 ---------------  --------------  -------------------  ------------------------------------------  -----------------------------
 efboeur          RAC             OraDb11203_home1     /u01/app/oracle/product/11.2.0.3/dbhome_1   11.2.0.3.6(16056266,16083653)
 efcteur          RAC             OraDb11203_home1     /u01/app/oracle/product/11.2.0.3/dbhome_1   11.2.0.3.6(16056266,16083653)
 efpheur          RAC             OraDb11203_home1     /u01/app/oracle/product/11.2.0.3/dbhome_1   11.2.0.3.6(16056266,16083653)
 [root@odanode1 bin]#
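The post-patch check above can be automated: every home should now report the target PSU in the last column of `oakcli show databases`. A hedged sketch, filtering a sample of that output with awk (on a live ODA, pipe the real command into the same filter):

```shell
#!/bin/sh
# Sketch: confirm every database home reports the target PSU after patching.
# The here-doc mimics the 'oakcli show databases' output above; the awk filter
# skips the two header lines and prints any database whose version column does
# not contain the target.
TARGET='11.2.0.3.6'
not_ok=$(awk -v t="$TARGET" 'NR>2 && $NF !~ t {print $1}' <<'EOF'
Database Name    Database Type   Database HomeName    Database HomeLocation                       Database Version
---------------  --------------  -------------------  ------------------------------------------  -----------------------------
efboeur          RAC             OraDb11203_home1     /u01/app/oracle/product/11.2.0.3/dbhome_1   11.2.0.3.6(16056266,16083653)
efcteur          RAC             OraDb11203_home1     /u01/app/oracle/product/11.2.0.3/dbhome_1   11.2.0.3.6(16056266,16083653)
efpheur          RAC             OraDb11203_home1     /u01/app/oracle/product/11.2.0.3/dbhome_1   11.2.0.3.6(16056266,16083653)
EOF
)
[ -z "$not_ok" ] && echo "all homes at $TARGET" || echo "needs patching: $not_ok"
```

An empty result means all three databases are on 11.2.0.3.6, matching the opatch lsinventory check that follows.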
[oracle@odanode2 OPatch]$ ./opatch lsinv
 Oracle Interim Patch Installer version 11.2.0.3.4
 Copyright (c) 2012, Oracle Corporation.  All rights reserved.
Oracle Home       : /u01/app/oracle/product/11.2.0.3/dbhome_1
 Central Inventory : /u01/app/oraInventory
 from           : /u01/app/oracle/product/11.2.0.3/dbhome_1/oraInst.loc
 OPatch version    : 11.2.0.3.4
 OUI version       : 11.2.0.3.0
 Log file location : /u01/app/oracle/product/11.2.0.3/dbhome_1/cfgtoollogs/opatch/opatch2013-05-15_15-49-48PM_1.log
Lsinventory Output file location : /u01/app/oracle/product/11.2.0.3/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2013-05-15_15-49-48PM.txt
--------------------------------------------------------------------------------
 Installed Top-level Products (1):
Oracle Database 11g                                                  11.2.0.3.0
 There are 1 products installed in this Oracle Home.
Interim patches (2) :
Patch  16056266     : applied on Wed May 15 15:00:15 CEST 2013
 Unique Patch ID:  15962803
 Patch description:  "Database Patch Set Update : 11.2.0.3.6 (16056266)"
 Created on 12 Mar 2013, 02:14:47 hrs PST8PDT
 Sub-patch  14727310; "Database Patch Set Update : 11.2.0.3.5 (14727310)"
 Sub-patch  14275605; "Database Patch Set Update : 11.2.0.3.4 (14275605)"
 Sub-patch  13923374; "Database Patch Set Update : 11.2.0.3.3 (13923374)"
 Sub-patch  13696216; "Database Patch Set Update : 11.2.0.3.2 (13696216)"
 Sub-patch  13343438; "Database Patch Set Update : 11.2.0.3.1 (13343438)"
 Bugs fixed:
 13566938, 13593999, 10350832, 14138130, 12919564, 13561951, 13624984
 13588248, 13080778, 13914613, 13804294, 14258925, 12873183, 13645875
 14472647, 12880299, 14664355, 14409183, 12998795, 14469008, 13719081
 13492735, 13496884, 12857027, 14263036, 14263073, 13732226, 13742433
 16368108, 16314469, 12905058, 13742434, 12849688, 12950644, 13742435
 13464002, 13534412, 12879027, 13958038, 14613900, 12585543, 12535346
 12588744, 11877623, 13786142, 12847466, 13649031, 13981051, 12582664
 12797765, 14262913, 12923168, 13384182, 13612575, 13466801, 13484963
 14207163, 11063191, 13772618, 13070939, 12797420, 13041324, 16314467
 16314468, 12976376, 11708510, 13680405, 14589750, 13026410, 13742437
 13737746, 14644185, 13742438, 13326736, 13596521, 13001379, 16344871
 13099577, 9873405, 14275605, 13742436, 9858539, 14841812, 11715084
 16231699, 14040433, 12662040, 9703627, 12617123, 12845115, 12764337
 13354082, 14459552, 13397104, 13913630, 12964067, 12983611, 13550185
 13810393, 12780983, 12583611, 14546575, 13476583, 15862016, 11840910
 13903046, 15862017, 13572659, 16294378, 13718279, 14088346, 13657605
 13448206, 16314466, 14480676, 13419660, 13632717, 14063281, 14110275
 13430938, 13467683, 13420224, 13812031, 14548763, 16299830, 12646784
 13616375, 14035825, 12861463, 12834027, 15862021, 13632809, 13377816
 13036331, 14727310, 13685544, 15862018, 13499128, 16175381, 13584130
 12829021, 15862019, 12794305, 14546673, 12791981, 13787482, 13503598
 10133521, 12718090, 13399435, 14023636, 13860201, 12401111, 13257247
 13362079, 14176879, 12917230, 13923374, 14220725, 14480675, 13524899
 13559697, 9706792, 14480674, 13916709, 13098318, 13773133, 14076523
 13340388, 13366202, 13528551, 12894807, 13454210, 13343438, 12748240
 14205448, 13385346, 15853081, 14273397, 12971775, 13582702, 10242202
 13035804, 13544396, 16382353, 8547978, 14226599, 14062795, 13035360
 12693626, 13332439, 14038787, 14062796, 12913474, 14841409, 14390252
 16314470, 13370330, 13059165, 14062797, 14062794, 12959852, 13358781
 12345082, 12960925, 9659614, 13699124, 14546638, 13936424, 13338048
 12938841, 12658411, 12620823, 12656535, 14062793, 12678920, 13038684
 14062792, 13807411, 13250244, 12594032, 15862022, 9761357, 12612118
 13742464, 14052474, 13911821, 13457582, 13527323, 15862020, 13910420
 13502183, 12780098, 13705338, 13696216, 14841558, 10263668, 15862023
 16056266, 15862024, 13554409, 13645917, 13103913, 13011409, 14063280
Patch  16315641     : applied on Wed May 15 13:58:54 CEST 2013
 Unique Patch ID:  15966967
 Patch description:  "Grid Infrastructure Patch Set Update : 11.2.0.3.6 (16083653)"
 Created on 1 Apr 2013, 03:41:20 hrs PST8PDT
 Bugs fixed:
 16315641, 15876003, 14275572, 13919095, 13696251, 13348650, 12659561
 14305980, 14277586, 13987807, 14625969, 13825231, 12794268, 13000491
 13498267, 11675721, 14082976, 12771830, 14515980, 14085018, 13943175
 14102704, 14171552, 12594616, 13879428, 12897902, 12726222, 12829429
 13079948, 13090686, 12995950, 13251796, 13582411, 12990582, 13857364
 13082238, 12947871, 13256955, 13037709, 14535011, 12878750, 14048512
 11772838, 13058611, 13001955, 13440962, 13727853, 13425727, 12885323
 12870400, 14212634, 14407395, 13332363, 13430626, 13811209, 12709476
 14168708, 14096821, 14626717, 13460353, 13694885, 12857064, 12899169
 13111013, 12558569, 13323698, 10260842, 13085732, 10317921, 13869978
 12914824, 13789135, 12730342, 12950823, 13355963, 13531373, 14268365
 13776758, 12720728, 13620816, 13023609, 13024624, 13039908, 13036424
 13938166, 13011520, 13569812, 12758736, 13001901, 13077654, 13430715
 13550689, 13806545, 13634583, 14271305, 12538907, 13947200, 12996428
 13066371, 13483672, 12897651, 13540563, 12896850, 13241779, 12728585
 12876314, 12925041, 12650672, 12398492, 12848480, 13652088, 16307750
 12917897, 12975811, 13653178, 13371153, 14800989, 10114953, 14001941
 11836951, 14179376, 12965049, 14773530, 12765467, 13339443, 13965075
 16210540, 14307855, 12784559, 14242977, 13955385, 12704789, 13745317
 13074261, 12971251, 13993634, 13523527, 13719731, 13396284, 12639013
 12867511, 12959140, 14748254, 12829917, 12349553, 12849377, 12934171
 13843080, 14496536, 13924431, 12680491, 13334158, 10418841, 12832204
 13838047, 13002015, 12791719
Rac system comprising of multiple nodes
 Local node = odanode2
 Remote node = odanode1
--------------------------------------------------------------------------------
OPatch succeeded.
 [oracle@odanode2 OPatch]$
-------------------------------------------------
 --ODA Software version after patching
 -------------------------------------------------
 [root@odanode1 bin]# /opt/oracle/oak/bin/oakcli show version -detail
 Reading the metadata. It takes a while...
 System Version   Component Name   Installed Version               Supported Version
 ---------------  ---------------  ------------------------------  -----------------
 2.6.0.0.0
                  Controller       11.05.02.00                     Up-to-date
                  Expander         0342                            Up-to-date
                  SSD_SHARED       E12B                            Up-to-date
                  HDD_LOCAL        5G08                            Up-to-date
                  HDD_SHARED       A700                            Up-to-date
                  ILOM             3.0.16.22.b r78329              Up-to-date
                  BIOS             12010310                        Up-to-date
                  IPMI             1.8.10.5                        Up-to-date
                  HMP              2.2.6.1                         Up-to-date
                  OAK              2.6.0.0.0                       Up-to-date
                  OEL              5.8                             Up-to-date
                  TFA              2.5.1.4                         Up-to-date
                  GI_HOME          11.2.0.3.6(16056266,16083653)   Up-to-date
                  DB_HOME          11.2.0.3.6(16056266,16083653)   Up-to-date
                  ASR              Unknown                         4.4
 [root@odanode1 bin]#
[grid@odanode2 ~]$ crsctl stat res -t
 --------------------------------------------------------------------------------
 NAME           TARGET  STATE        SERVER                   STATE_DETAILS
 --------------------------------------------------------------------------------
 Local Resources
 --------------------------------------------------------------------------------
 ora.DATA.dg
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.LISTENER.lsnr
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.RECO.dg
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.REDO.dg
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.asm
 ONLINE  ONLINE       odanode1               Started
 ONLINE  ONLINE       odanode2               Started
 ora.gsd
 OFFLINE OFFLINE      odanode1
 OFFLINE OFFLINE      odanode2
 ora.net1.network
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.ons
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 ora.registry.acfs
 ONLINE  ONLINE       odanode1
 ONLINE  ONLINE       odanode2
 --------------------------------------------------------------------------------
 Cluster Resources
 --------------------------------------------------------------------------------
 ora.LISTENER_SCAN1.lsnr
 1        ONLINE  ONLINE       odanode2
 ora.LISTENER_SCAN2.lsnr
 1        ONLINE  ONLINE       odanode1
 ora.cvu
 1        ONLINE  ONLINE       odanode1
 ora.efboeur.db
 1        ONLINE  ONLINE       odanode1               Open
 2        ONLINE  ONLINE       odanode2               Open
 ora.efboeur.efbo_applb.efow.com.svc
 1        ONLINE  ONLINE       odanode2
 2        ONLINE  ONLINE       odanode1
 ora.efboeur.efbo_report.efow.com.svc
 1        OFFLINE OFFLINE
 2        OFFLINE OFFLINE
 ora.efcteur.db
 1        ONLINE  ONLINE       odanode1               Open
 2        ONLINE  ONLINE       odanode2               Open
 ora.efcteur.efct_applb.efow.com.svc
 1        ONLINE  ONLINE       odanode1
 2        ONLINE  ONLINE       odanode2
 ora.efcteur.efct_report.efow.com.svc
 1        OFFLINE OFFLINE
 2        OFFLINE OFFLINE
 ora.odanode1.vip
 1        ONLINE  ONLINE       odanode1
 ora.odanode2.vip
 1        ONLINE  ONLINE       odanode2
 ora.efpheur.db
 1        ONLINE  ONLINE       odanode1               Open
 2        ONLINE  ONLINE       odanode2               Open
 ora.efpheur.efph_applb.efow.com.svc
 1        ONLINE  ONLINE       odanode2
 2        ONLINE  ONLINE       odanode1
 ora.efpheur.efph_report.efow.com.svc
 1        OFFLINE OFFLINE
 2        OFFLINE OFFLINE
 ora.oc4j
 1        ONLINE  ONLINE       odanode1
 ora.scan1.vip
 1        ONLINE  ONLINE       odanode2
 ora.scan2.vip
 1        ONLINE  ONLINE       odanode1
 [grid@odanode2 ~]$
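In the `crsctl stat res -t` output above, only ora.gsd and the *_report services are OFFLINE, which is the expected state. A small sketch of a post-patch check that lists every resource with an OFFLINE instance, so expected offline resources can be told apart from real failures (the embedded sample is a trimmed copy of the output above; on a live cluster pipe the real crsctl output into the same awk filter):

```shell
#!/bin/sh
# Sketch: list resources that 'crsctl stat res -t' reports as OFFLINE.
# The sample function mimics a trimmed version of the output above.
crs_sample() {
cat <<'EOF'
ora.gsd
 OFFLINE OFFLINE      odanode1
 OFFLINE OFFLINE      odanode2
ora.efboeur.db
 1        ONLINE  ONLINE       odanode1               Open
ora.efboeur.efbo_report.efow.com.svc
 1        OFFLINE OFFLINE
EOF
}

# Resource names start at column 0 with "ora."; remember the current one and
# print it for every OFFLINE state line, de-duplicating multi-instance resources.
offline=$(crs_sample | awk '/^ora\./ {res=$1; next} /OFFLINE/ {print res}' | sort -u)
echo "$offline"
```

Comparing this list before and after the patch is a quick way to confirm the cluster came back in the same state.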