My OOW18 Summary

 

For those who are interested, here are my major takeaways from OOW18.

 

As we all know, for a few years now the HOTTEST topic advertised at OOW has been “Cloud Computing”, but this time Oracle Cloud was no longer alone!

In fact the focus was split between the new Oracle OCI Cloud, which Larry also called the Second Generation of Cloud, and the Autonomous Database.

 

OCI Second Gen of Cloud

Here is a summary of the major advantages compared to the previous generation:

– Security, guaranteed by robots which scan the network for malicious attacks.

– The cutting-edge virtual network, which brings bandwidth of up to 50 Gb/s and extreme flexibility.

– Bare Metal Infrastructure based on Exadata Machines.

– Aggressive pricing, compared to the competitors.

 

Autonomous Database

The Autonomous Database option is now available for OLTP and DWH databases and includes new capabilities like automatic index creation and conversion of tables to columnar storage. With version 19 it will also manage online memory increases and additional tuning options.
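
As an illustration of the automatic index creation building block, here is a minimal sketch (my own example, shown for reference only) of how automatic indexing is configured with the DBMS_AUTO_INDEX package starting with 19c; on the Autonomous Database this is driven by the service itself.

# Minimal sketch: enable and review automatic indexing on a 19c database
sqlplus -s / as sysdba <<'EOF'
-- Let the engine create and use automatic indexes
exec DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');
-- Review what the automatic indexing task has done
set long 100000
select dbms_auto_index.report_activity() from dual;
EOF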

As announced during Larry’s keynote, the Autonomous Database will also be available with the Cloud@Customer option (on Exadata only), and it will no longer require human labor (DBA and Sys Admin intervention), because it is Self-Provisioning, Self-Driving, Self-Tuning and Self-Repairing.

For non-technical people it looks like magic, but it is only a few steps away from what we already use in a standard Oracle 12c Database. In fact the Autonomous Database leverages a bunch of database advisors and tuning options, now orchestrated by Artificial Intelligence and Machine Learning software, in order to provide data-driven predictions and decisions.
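
To make the point concrete, here is a minimal sketch of the kind of 12c building blocks mentioned above: the automated maintenance tasks (SQL Tuning Advisor, Segment Advisor, optimizer statistics) that the Autonomous Database orchestrates on our behalf.

# Which advisors already run automatically on a standard 12c database
sqlplus -s / as sysdba <<'EOF'
select client_name, status from dba_autotask_client;
-- Example: make sure the automatic SQL Tuning Advisor is enabled
exec DBMS_AUTO_TASK_ADMIN.ENABLE(client_name => 'sql tuning advisor', operation => NULL, window_name => NULL);
EOF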

Over the next few years, the Autonomous Database will be enriched with several new options, improving the quality of life of many DBAs, who will be relieved of the majority of the tedious and recurring tasks, while the highest added-value tasks remain under their own responsibility.

Last but not least, the Autonomous Database runs on a very high-end configuration (Oracle guarantees 99.995% availability), which is quite expensive to acquire due to the list of mandatory requirements: Exadata, RAC, Active Data Guard, Multitenant, Tuning Pack, Diagnostics Pack, etc.

 

Exadata Machine

Several interesting features are coming next year with the introduction of the INTEL Optane DC Persistent Memory for even faster OLTP.

This new type of memory will be installed on the Storage Cells and used as an accelerator in front of the Flash memory.

The database nodes will access the Persistent Memory via RDMA, reducing access latency by up to 20x.

Oracle is making more and more use of Remote Direct Memory Access (RDMA) for Cache Fusion and Storage Cell operations, in order to offload the database nodes and increase the overall performance.

Stay tuned on the Exadata Machine, because the next generation will also include a BIG architectural change…

 

Oracle Virtual Machine (OVM)

One curiosity collected directly at the Linux Virtualization booth is that even though the next generation of the hypervisor will be based on KVM, Oracle will keep calling it OVM, and of course the current OVM product based on XEN (OVS, OVM) will still be in use by many companies.

How could customers possibly get confused ?!?

 

With this I am done, although there would be much more to write.

 


 


Exadata: How to Safely Erase All Data

When the time arrives to decommission an environment containing sensitive data, we are frequently confronted with the problem of how to certify to our customer or management the erasure of all data and logs.

On the Exadata Machine, starting from software release 12.2.1.1.0, this problem has been elegantly solved by Oracle with the introduction of a new utility called Secure Eraser, which securely erases data on hard drives, flash devices and internal USBs, and resets the ILOM to factory default.

 

In earlier software versions, the Exadata Storage Software already included CellCLI commands to securely erase the user data:

CellCLI> DROP GRIDDISK ALL FLASHDISK PREFIX=DATA, ERASE=7pass
CellCLI> DROP GRIDDISK ALL PREFIX=DATA, ERASE=3pass

and

CellCLI> DROP CELLDISK ALL FLASHDISK ERASE=7pass 
CellCLI> DROP CELL ERASE=3pass

Unfortunately those commands only cover the user data stored on the Storage Cells, and none of them produces an official certificate summarizing the actions taken to guarantee the wipe of the data. Secure Eraser, on the other hand, does all of this on both Compute and Storage nodes, sanitizing all types of content: user data, OS logs and network configurations.

 

Depending on the Exadata model, a subset of the following options to execute Secure Eraser is available:

  • Automatic Secure Eraser through PXE Boot
  • Interactive Secure Eraser through PXE Boot
  • Interactive Secure Eraser through Network Boot
  • Interactive Secure Eraser through External USB

 


 

Recently I used Secure Eraser through External USB on an Exadata X7-2 Machine, and here are the different steps.

 

Copy the Secure Eraser Diagnostic image from MOS 2180963.1 to a USB stick.

 # dd if=image_diagnostics_18.1.4.0.0_LINUX.X64_180125.3-1.x86_64.usb of=/dev/sdb
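
In the command above /dev/sdb is just an example; before running dd it is worth confirming which device name the USB stick actually received:

# List block devices and sizes to identify the USB stick
lsblk
# The latest kernel messages also show the device assigned to the newly plugged USB stick
dmesg | tail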

 

Boot the server using the USB device with the Secure Eraser Diagnostic image

[Image: Exa_BootList.jpg]

 

After logging in, start the Secure Eraser process

/usr/sbin/secureeraser --erase --all --flash_erasure_method=7pass --hdd_erasure_method=3pass --technician=Emiliano_Fusaglia --witness=Mario_Bros --output=/mnt/iso

 

 

At the end of the erase process, a Data Erasure Certificate similar to the example below will be available in TXT, HTML and PDF formats.

[Image: Exa_SecureErase_Report]


 

 

 

Exadata Storage Snapshots

This post describes how to implement the Oracle Database Snapshot Technology on the Exadata Machine.

Because the Exadata Storage Cell Smart Features, Storage Indexes, IORM and Network Resource Manager work only at the ASM Volume Manager level (and not on top of the ACFS Cluster File System), the implementation of the snapshot technology is different from any other non-Exadata environment.

For this purpose Oracle has developed a new type of ASM Disk Group called SPARSE Disk Group. It uses ASM SPARSE Grid Disks based on Thin Provisioning to store the database snapshot copies and the associated metadata, and it supports both non-CDB and PDB snapshot copies.

The implementation requires the following minimum software versions:

  • Exadata Storage Software version 12.1.2.1.0.
  • Oracle Database version 12.1.0.2 with bundle patch 5.
One major restriction applies to Exadata Storage Snapshots compared to ACFS:
the source database must be a shared copy, open read only, called the Test Master. The Test Master Database cannot be modified or deleted as long as the latest child snapshot is in use.
This restriction exists because the Exadata Snapshot technology uses “allocate on first write”, and not “copy on write” (as ACFS does), and the snapshot is per database datafile.
When a child snapshot issues a write, the write goes to a private copy of that block inside the snapshot, preserving the original block value, which can still be accessed by the other child snapshots of the same Test Master.

How to Implement Exadata Storage Snapshots in a PDB Environment

Check the celldisks for available free space to allocate to a new SPARSE Disk Group

[root@strgceladm01 ~]# cellcli -e list celldisk attributes name,freespace
 CD_00_strgceladm01 853.34375G
 CD_01_strgceladm01 853.34375G
 CD_02_strgceladm01 853.34375G
 CD_03_strgceladm01 853.34375G
 CD_04_strgceladm01 853.34375G
 CD_05_strgceladm01 853.34375G
 CD_06_strgceladm01 853.34375G
 CD_07_strgceladm01 853.34375G
 CD_08_strgceladm01 853.34375G
 CD_09_strgceladm01 853.34375G
 CD_10_strgceladm01 853.34375G
 CD_11_strgceladm01 853.34375G
 FD_00_strgceladm01 0
 FD_01_strgceladm01 0
 FD_02_strgceladm01 0
 FD_03_strgceladm01 0
[root@strgceladm01 ~]#


[root@strgceladm02 ~]# cellcli -e list celldisk attributes name,freespace
 CD_00_strgceladm02 853.34375G
 CD_01_strgceladm02 853.34375G
 CD_02_strgceladm02 853.34375G
 CD_03_strgceladm02 853.34375G
 CD_04_strgceladm02 853.34375G
 CD_05_strgceladm02 853.34375G
 CD_06_strgceladm02 853.34375G
 CD_07_strgceladm02 853.34375G
 CD_08_strgceladm02 853.34375G
 CD_09_strgceladm02 853.34375G
 CD_10_strgceladm02 853.34375G
 CD_11_strgceladm02 853.34375G
 FD_00_strgceladm02 0
 FD_01_strgceladm02 0
 FD_02_strgceladm02 0
 FD_03_strgceladm02 0
[root@strgceladm02 ~]#


[root@strgceladm03 ~]# cellcli -e list celldisk attributes name,freespace
 CD_00_strgceladm03 853.34375G
 CD_01_strgceladm03 853.34375G
 CD_02_strgceladm03 853.34375G
 CD_03_strgceladm03 853.34375G
 CD_04_strgceladm03 853.34375G
 CD_05_strgceladm03 853.34375G
 CD_06_strgceladm03 853.34375G
 CD_07_strgceladm03 853.34375G
 CD_08_strgceladm03 853.34375G
 CD_09_strgceladm03 853.34375G
 CD_10_strgceladm03 853.34375G
 CD_11_strgceladm03 853.34375G
 FD_00_strgceladm03 0
 FD_01_strgceladm03 0
 FD_02_strgceladm03 0
 FD_03_strgceladm03 0
[root@strgceladm03 ~]#

On each Storage Cell, create the SPARSE Grid Disks as described below

[root@strgceladm01 ~]# cellcli -e CREATE GRIDDISK ALL PREFIX=SPARSE, sparse=true, SIZE=853.34375G
Cell disks were skipped because they had no freespace for grid disks: FD_00_strgceladm01, FD_01_strgceladm01, FD_02_strgceladm01, FD_03_strgceladm01.
GridDisk SPARSE_CD_00_strgceladm01 successfully created
GridDisk SPARSE_CD_01_strgceladm01 successfully created
GridDisk SPARSE_CD_02_strgceladm01 successfully created
GridDisk SPARSE_CD_03_strgceladm01 successfully created
GridDisk SPARSE_CD_04_strgceladm01 successfully created
GridDisk SPARSE_CD_05_strgceladm01 successfully created
GridDisk SPARSE_CD_06_strgceladm01 successfully created
GridDisk SPARSE_CD_07_strgceladm01 successfully created
GridDisk SPARSE_CD_08_strgceladm01 successfully created
GridDisk SPARSE_CD_09_strgceladm01 successfully created
GridDisk SPARSE_CD_10_strgceladm01 successfully created
GridDisk SPARSE_CD_11_strgceladm01 successfully created
[root@strgceladm01 ~]#

On each Storage Cell, list all Grid Disks

[root@strgceladm01 ~]# cellcli -e list griddisk attributes name,size
 DATAC1_CD_00_strgceladm01 6.294586181640625T
 DATAC1_CD_01_strgceladm01 6.294586181640625T
 DATAC1_CD_02_strgceladm01 6.294586181640625T
 DATAC1_CD_03_strgceladm01 6.294586181640625T
 DATAC1_CD_04_strgceladm01 6.294586181640625T
 DATAC1_CD_05_strgceladm01 6.294586181640625T
 DATAC1_CD_06_strgceladm01 6.294586181640625T
 DATAC1_CD_07_strgceladm01 6.294586181640625T
 DATAC1_CD_08_strgceladm01 6.294586181640625T
 DATAC1_CD_09_strgceladm01 6.294586181640625T
 DATAC1_CD_10_strgceladm01 6.294586181640625T
 DATAC1_CD_11_strgceladm01 6.294586181640625T
 FGRID_FD_00_strgceladm01 2.0717315673828125T
 FGRID_FD_01_strgceladm01 2.0717315673828125T
 FGRID_FD_02_strgceladm01 2.0717315673828125T
 FGRID_FD_03_strgceladm01 2.0717315673828125T
 RECOC1_CD_00_strgceladm01 1.78143310546875T
 RECOC1_CD_01_strgceladm01 1.78143310546875T
 RECOC1_CD_02_strgceladm01 1.78143310546875T
 RECOC1_CD_03_strgceladm01 1.78143310546875T
 RECOC1_CD_04_strgceladm01 1.78143310546875T
 RECOC1_CD_05_strgceladm01 1.78143310546875T
 RECOC1_CD_06_strgceladm01 1.78143310546875T
 RECOC1_CD_07_strgceladm01 1.78143310546875T
 RECOC1_CD_08_strgceladm01 1.78143310546875T
 RECOC1_CD_09_strgceladm01 1.78143310546875T
 RECOC1_CD_10_strgceladm01 1.78143310546875T
 RECOC1_CD_11_strgceladm01 1.78143310546875T
 SPARSE_CD_00_strgceladm01 853.34375G
 SPARSE_CD_01_strgceladm01 853.34375G
 SPARSE_CD_02_strgceladm01 853.34375G
 SPARSE_CD_03_strgceladm01 853.34375G
 SPARSE_CD_04_strgceladm01 853.34375G
 SPARSE_CD_05_strgceladm01 853.34375G
 SPARSE_CD_06_strgceladm01 853.34375G
 SPARSE_CD_07_strgceladm01 853.34375G
 SPARSE_CD_08_strgceladm01 853.34375G
 SPARSE_CD_09_strgceladm01 853.34375G
 SPARSE_CD_10_strgceladm01 853.34375G
 SPARSE_CD_11_strgceladm01 853.34375G
[root@strgceladm01 ~]#

From an ASM Instance, create the SPARSE Disk Group

SQL> CREATE DISKGROUP SPARSEC1 EXTERNAL REDUNDANCY DISK 'o/*/SPARSE_CD_*'
ATTRIBUTE
'compatible.asm' = '12.2.0.1',
'compatible.rdbms' = '12.2.0.1',
'cell.smart_scan_capable'='TRUE',
'cell.sparse_dg' = 'allsparse',
'AU_SIZE' = '4M';

Diskgroup created.

Set the following ASM attributes on the Disk Group hosting the Test Master Database

ALTER DISKGROUP DATAC1 SET ATTRIBUTE 'access_control.enabled' = 'true';

Grant access to the OS RDBMS user used to access the Disk Group

ALTER DISKGROUP DATAC1 ADD USER 'oracle';

From an ASM Instance, set the ownership permissions for every file that belongs solely to the PDB being snapshot cloned, as per the example below

alter diskgroup DATAC1 set ownership owner='oracle' for file '+DATAC1/CDBT/<xxxxxxxxxxxxxxxxxxx>/DATAFILE/system.xxx.xxxxxxx';
alter diskgroup DATAC1 set ownership owner='oracle' for file '+DATAC1/CDBT/<xxxxxxxxxxxxxxxxxxx>/DATAFILE/sysaux.xxx.xxxxxxx';
alter diskgroup DATAC1 set ownership owner='oracle' for file '+DATAC1/CDBT/<xxxxxxxxxxxxxxxxxxx>/DATAFILE/users.xxx.xxxxxxx';
...
..

Restart the Test Master PDB in Read Only

alter pluggable database PDBTESTMASTER close immediate instances=all;
alter pluggable database PDBTESTMASTER open read only;

Create the first PDB Snapshot Copy on Exadata SPARSE Disk Group

Create pluggable database PDBDEV01 from PDBTESTMASTER tempfile reuse create_file_dest='+SPARSEC1' snapshot copy;
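
As a quick sanity check (a sketch, assuming the same names used above), verify from the CDB that the datafiles of the new snapshot copy were created on the sparse disk group:

# The snapshot copy PDB should place its datafiles on +SPARSEC1
sqlplus -s / as sysdba <<'EOF'
alter session set container=PDBDEV01;
select name from v$datafile;
EOF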

Feedback on Exadata Storage Snapshots

The ability to create storage-efficient database copies in a few seconds, independently of the size of the Test Master, is very useful for today’s IT departments; but such extreme velocity and flexibility is not entirely free. In fact, performance tests on an I/O-bound workload have highlighted significant performance degradation. This reminds us that, as defined by Oracle Corporation, the Snapshot Technology included in the Exadata Machine remains an option for non-production use.

The “Great” ODA overwhelming the Exadata

Introduction

This article tries to explain the technical reasons for the success of the Oracle Database Appliance, a well-known appliance with which Oracle targets small and medium businesses, or specific departments of big companies looking for privacy and isolation from the rest of the IT. Nowadays this small and relatively cheap appliance (around 65’000$ list price) has evolved a lot: the storage has reached an important capacity of 128TB raw, expandable to 256TB, and the two X5-2 servers are the same ones used as database nodes of the Exadata machine. Many customers, while defining a new database architecture, evaluate the pros and cons of acquiring an ODA compared to the smallest Exadata configuration (one eighth of a Rack). If the customer is not looking for a system with extreme performance and horizontal scalability beyond the two X5-2 servers, the Oracle Database Appliance is frequently the retained option.

Some of the ODA major features are:

  • High Availability: no single point of failure on all hardware and software components.
  • Performance: each server is equipped with 2×18-core Intel Xeon CPUs and 256GB of RAM, extensible up to 768GB, with cluster communication over InfiniBand. The shared storage offers a multi-tier configuration with HDDs at 7.2K rpm and two types of SSDs, for frequently accessed data and for database redo logs.
  • Flexibility & Scalability: it runs RAC, RAC One Node and Single Instance databases.
  • Virtualized configuration: designed for offering a Solution-in-a-box, with highly available virtual machines.
  • Optimized licensing model: a pay-as-you-grow model activating an increasing number of CPU cores on demand with the Bare Metal configuration, or capping the resources by combining Oracle VM with the Hard Partitioning setup.
  • Time-to-market: no matter whether the ODA has to be installed bare metal or virtualized, this is a standardized and automated process generally completed in one or two days of work.
  • Price: the ODA is very competitive when comparing its cost to an equivalent commodity architecture, which, in addition, must be engineered, integrated and maintained by the customer.

 

At the time of writing of this article, the latest hardware model is the ODA X5-2 and 12.1.2.6.0 is the software version. This HW and SW combination offers unique features, a few of them not even available on the Exadata machine, like the possibility to host databases and applications in one single box, or the possibility to rapidly and space-efficiently clone 11gR2 and 12c databases using ACFS Snapshots.

 

 

ODA HW & SW Architecture

The Oracle Database Appliance is composed of two X5-2 servers and a shared storage shelf, which can optionally be doubled. Each server is equipped with two 18-core Intel Xeon E5-2699 v3 CPUs, 256GB RAM (optionally upgradable to 768GB) and two 600GB 10k rpm internal disks in RAID 1 for the OS and software binaries.

The appliance is equipped with redundant network connectivity up to 10Gb, redundant SAS HBAs and Storage I/O modules, and a redundant InfiniBand interconnect enabling 40 Gb/s server-to-server cluster communication.

The software components are all part of Oracle “Red Stack” with Oracle Linux 6 UEK or OVM 3, Grid Infrastructure 12c, Oracle RDBMS 12c & 11gR2 and Oracle Appliance Manager.

 

 

ODA Front view

Components 1 & 2 are the X5-2 servers. Components 3 & 4 are the storage shelf and the optional storage expansion.

[Image: ODA_Front]

 

ODA Rear view

Highlight of the multiple redundant connections, including InfiniBand for Oracle Clusterware, ASM and RAC communications. No single point of HW or SW failure.

[Image: ODA_Back]

 

 

Storage Organization

With 16x8TB SAS HDDs, a total raw space of 128TB is available on each storage shelf (64TB with ASM double mirroring and 42.7TB with ASM triple mirroring). To offer better I/O performance without exploding the price, Oracle has added the following SSD devices: 4x400GB, ASM double-mirrored, for frequently accessed data, and 4x200GB, ASM triple-mirrored, for the database redo logs.

As shown in the picture below, each rotating disk has two slices: the external and more performant partition is assigned to the +DATA ASM disk group, and the internal one is allocated to the +RECO ASM disk group.

 

[Image: ODA_Disk]

This storage optimization allows the ODA to achieve competitive I/O performance. In a production-like environment, using the three types of disks as per the ODA database template odb-24 (https://docs.oracle.com/cd/E22693_01/doc.12/e55580/sizing.htm), Trivadis has measured 12k I/Os per second and a throughput of 2300 MB/s with an average latency of 10ms. As per the Oracle documentation, the maximum number of I/Os per second of the rotating disks with a single storage shelf is 3300; but this value increases significantly when relocating the hottest data files to the +FLASH disk group created on the SSD devices.
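
As an illustration of the last point, here is a minimal sketch of an online datafile relocation (12c syntax); the file names below are purely hypothetical examples.

# Move a hot datafile to the flash-backed storage online (Oracle 12c onwards)
sqlplus -s / as sysdba <<'EOF'
ALTER DATABASE MOVE DATAFILE '/u02/app/oracle/oradata/datastore/MYDB/hot_tbs01.dbf'
  TO '/u02/app/oracle/oradata/flashdata/MYDB/hot_tbs01.dbf';
EOF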

 

ACFS becomes the default database storage of ODA

Starting from ODA software version 12.1.0.2, any fresh installation enforces ASM Cluster File System (ACFS) as the only type of database storage, restricting the supported database versions to 11.2.0.4 and greater. In case of an ODA upgrade from a previous release, pre-existing databases are not automatically migrated to ACFS, but Oracle provides a tool called acfs_mig.pl for executing this mandatory step on all Non-CDB databases of version >= 11.2.0.4.

Oracle has decided to promote ACFS as the default database storage in the ODA environment for the following reasons:

  • ACFS provides almost equivalent performance to Oracle ASM disk groups.
  • Additional functionality on an industry-standard POSIX file system.
  • Database snapshot copies of PDBs and Non-CDBs of version 11.2.0.4 or greater.
  • Advanced functionality for general-purpose files such as replication, tagging, encryption, security, and auditing.

Databases created on ACFS follow the same Oracle Managed Files (OMF) standard used by ASM.

As in the past, database provisioning requires the use of the command line interface oakcli and the selection of a database template, which defines several characteristics including the amount of space to allocate on each file system. Container and Non-Container databases can coexist on the same Oracle Database Appliance.
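
As a minimal sketch (the database name is hypothetical and the exact prompts depend on the ODA software version), provisioning and verifying a database through the Appliance Manager CLI looks like:

# Create a new database via the Appliance Manager command line interface
# (oakcli prompts interactively for template, class, version, etc.)
oakcli create database -db DBTEST
# List the databases managed by the appliance
oakcli show databases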

The ACFS file systems are created during the database provisioning process on top of the ASM disk groups +DATA, +RECO, +REDO, and optionally +FLASH. The file systems have two possible setups, depending on the database type Container or Non-Container.

  • Container database: for each CDB the ODA database-provisioning job creates dedicated ACFS file systems with the following characteristics:
Disk Characteristics ASM Disk group ACFS Mount Point
SAS Disk external partition +DATA /u02/app/oracle/oradata/datc<db_unique_name>
SAS Disk internal partition +RECO /u01/app/oracle/fast_recovery_area/rcoc<db_unique_name>
SSD Triple-mirrored +REDO /u01/app/oracle/oradata/rdoc<db_unique_name>
SSD Double-mirrored +FLASH (*) /u02/app/oracle/oradata/flashdata

 

  • Non-Container database: in case of Non-CDB the ODA database-provisioning job creates or resizes the following shared ACFS file systems:
Disk Characteristics ASM Disk group ACFS Mount Point
SAS Disk external partition +DATA /u02/app/oracle/oradata/datastore
SAS Disk internal partition +RECO /u01/app/oracle/fast_recovery_area/datastore
SSD Triple-mirrored +REDO /u01/app/oracle/oradata/datastore
SSD Double-mirrored +FLASH (*) /u02/app/oracle/oradata/flashdata

(*) Optionally used by the databases as Smart Flash Cache (extension of the SGA buffer cache), or allocated to store the hottest data files leveraging the I/O performance of the SSD disks.
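
For reference, a database opts into the Database Smart Flash Cache through two initialization parameters; the sketch below is only an example, with a hypothetical file name on the flashdata mount point.

# Point the database flash cache to the +FLASH / flashdata ACFS file system
sqlplus -s / as sysdba <<'EOF'
ALTER SYSTEM SET db_flash_cache_file='/u02/app/oracle/oradata/flashdata/MYDB/flashcache.dat' SCOPE=SPFILE;
ALTER SYSTEM SET db_flash_cache_size=64G SCOPE=SPFILE;
-- an instance restart is required for the flash cache to become active
EOF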

 

Oracle Database Appliance Bare Metal

The bare metal configuration has been available since version one of the appliance, and nowadays it remains the default option proposed by Oracle, which pre-installs Oracle Linux on any new system. It is very simple and intuitive to install thanks to the pre-built software bundle, which automates most of the steps. At the end of the installation, the architecture is very similar to any other two-node RAC setup based on commodity hardware; but even from an operational point of view there are great advantages, because the Oracle Appliance Manager framework simplifies and accelerates the execution of almost any system and database administration task.

The picture below depicts the ODA architecture when the bare metal configuration is in use:

[Image: ODA_Bare_Metal]

 

Oracle Database Appliance Virtualized

When the ODA is deployed with virtualization, both servers run Oracle VM Server, also called Dom0. Each Dom0 hosts, in a local dedicated repository, the ODA Base (or Dom Base), a privileged virtual machine where the Appliance Manager, Grid Infrastructure and RDBMS binaries are installed. The ODA Base takes advantage of the Xen PCI pass-through technology to get direct access to the ODA shared disks presented and managed by ASM. This configuration reduces the VM flexibility (no VM migration is allowed for the two ODA Base machines), but it guarantees almost no I/O penalty in terms of performance. With the Dom Base set up, the basic installation is complete and it is possible to start provisioning databases using Oracle Appliance Manager.

At the same time, the administrator can create new shared repositories, hosted on ACFS and NFS-exported to the hypervisor, for hosting the application virtual machines. Those application virtual machines are also identified by the name Domain U. The Domain U machines and the templates can be stored on a local or shared Oracle VM Server repository, but to enable the ability to migrate between the two Oracle VM Servers a shared repository on the ACFS file system should be used.

Even when virtualization is in use, Oracle Appliance Manager is the only framework for system and database administration tasks like repository creation, template import, virtual machine deployment, network configuration, database provisioning and so on, relieving the administrator of most of the complexity.
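
A hedged sketch of that workflow is shown below; the repository, template and VM names are purely hypothetical, and the exact oakcli options may vary with the appliance software version.

# Create a shared repository on ACFS for the application virtual machines (size in GB)
oakcli create repo apprepo -dg DATA -size 200
# Import an Oracle VM template (assembly) into the shared repository
oakcli import vmtemplate OL6_TMPL -assembly /tmp/OVM_OL6.ova -repo apprepo -node 0
# Clone an application VM from the template and start it
oakcli clone vm appvm01 -vmtemplate OL6_TMPL -repo apprepo -node 0
oakcli start vm appvm01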

The implementation of the Solution-in-a-box maximizes the Return on Investment of the ODA; in fact, while restricting the virtual CPUs to license on the Dom Base, it allows relocating the spare resources to the application virtual machines, as shown in the picture below.

[Image: ODA_Virtualized]

 

 

ODA compared to Exadata Machine and Commodity Hardware

As described in the previous sections, the Oracle Database Appliance offers unique features such as pay-as-you-grow, solution-in-a-box and so on, which can heavily influence the decision for a new database architecture. The aim of the table below is to list the main architectural characteristics to evaluate while defining a new database infrastructure, comparing the results between the Oracle Database Appliance, the Exadata Machine and a Commodity Architecture based on Intel and Linux engineered to run RAC databases.

[Image: Table_Architectures]

As shown by the different scores of the three architectures, each solution comes with its strengths and weaknesses; regarding the Oracle Database Appliance, it is evident that, due to its characteristics, the smallest Oracle Engineered System remains a great option for small and medium database environments.

 

Conclusion

I hope this article keeps its initial promise to explain the technical reasons for the Oracle Database Appliance’s success, and that it has highlighted the great work done by Oracle in engineering this solution on the edge of the technology while keeping the price under control.

One last summary of what in my opinion are the major benefits offered by the ODA:

  • Time-to-market: Thanks to automated processes and pre-built software images, the deployment phase is extremely rapid.
  • Simplicity: The use of standard software components, combined with the appliance orchestrator Oracle Appliance Manager, makes the ODA very simple to operate.
  • Standardization & Automation: The Appliance Manager encapsulates and automates all repeatable and error-prone tasks like provisioning, decommissioning, patching and so on.
  • Vendor certified platform: Oracle validates and certifies the compatibility among all HW & SW components.
  • Evolution: Over time, the ODA benefits from specific bug fixing and software evolution (introduced by Oracle through the quarterly patch sets), keeping the system on the edge for a longer time when compared to a commodity architecture.

EXADATA: How to enable Flash Cache WriteBack on a running system

In a recent tuning activity it was necessary to change the Exadata Smart Flash Cache from “WriteThrough” to “WriteBack“. Because the system was used in a 24/7 environment, we had to implement the change in a rolling fashion.

The different steps are described below.

 

From one DB node, using dcli, check the current status of the storage cells:

[root@efudbadm02 ~]# dcli -g ~/cell_group -l root cellcli -e "list cell attributes flashcachemode"
efuceladm01: WriteThrough
efuceladm02: WriteThrough
efuceladm03: WriteThrough
efuceladm04: WriteThrough
efuceladm05: WriteThrough
efuceladm06: WriteThrough
efuceladm07: WriteThrough
efuceladm08: WriteThrough
efuceladm09: WriteThrough
efuceladm10: WriteThrough
efuceladm11: WriteThrough

From one DB node, using dcli, check that the attributes asmdeactivationoutcome and asmmodestatus of all griddisks are respectively “Yes” and “ONLINE” before continuing with the change.

[root@efudbadm02 ~]# dcli -g cell_group -l root cellcli -e list griddisk attributes asmdeactivationoutcome, asmmodestatus
efuceladm01: Yes ONLINE
efuceladm01: Yes ONLINE
efuceladm01: Yes ONLINE
efuceladm01: Yes ONLINE
efuceladm01: Yes ONLINE
efuceladm01: Yes ONLINE
efuceladm01: Yes ONLINE
efuceladm01: Yes ONLINE
efuceladm01: Yes ONLINE
efuceladm01: Yes ONLINE
efuceladm01: Yes ONLINE
efuceladm01: Yes ONLINE
efuceladm01: Yes ONLINE
efuceladm01: Yes ONLINE
efuceladm01: Yes ONLINE
efuceladm01: Yes ONLINE
efuceladm02: Yes ONLINE
efuceladm02: Yes ONLINE
efuceladm02: Yes ONLINE
efuceladm02: Yes ONLINE
efuceladm02: Yes ONLINE
efuceladm02: Yes ONLINE
efuceladm02: Yes ONLINE
efuceladm02: Yes ONLINE
efuceladm02: Yes ONLINE
efuceladm02: Yes ONLINE
efuceladm02: Yes ONLINE
efuceladm02: Yes ONLINE
efuceladm02: Yes ONLINE
efuceladm02: Yes ONLINE
efuceladm02: Yes ONLINE
efuceladm02: Yes ONLINE
...
..
.

From one DB node, using dcli, check that all flashcache modules are in the “normal” state and that no flash disk is in a degraded or critical state.

[root@efudbadm02 ~]# dcli -g cell_group -l root cellcli -e list flashcache detail
efuceladm01: name: efuceladm01_FLASHCACHE
efuceladm01: cellDisk: FD_00_efuceladm01,FD_07_efuceladm01,FD_06_efuceladm01,FD_03_efuceladm01,FD_05_efuceladm01,FD_01_efuceladm01,FD_02_efuceladm01,FD_04_efuceladm01
efuceladm01: creationTime: 2013-06-18T15:21:13+02:00
efuceladm01: degradedCelldisks:
efuceladm01: effectiveCacheSize: 744.125G
efuceladm01: id: 35b61001-438f-4d66-8ce9-40704f758d3f
efuceladm01: size: 744.125G
efuceladm01: status: normal
efuceladm02: name: efuceladm02_FLASHCACHE
efuceladm02: cellDisk: FD_06_efuceladm02,FD_05_efuceladm02,FD_00_efuceladm02,FD_02_efuceladm02,FD_01_efuceladm02,FD_07_efuceladm02,FD_03_efuceladm02,FD_04_efuceladm02
efuceladm02: creationTime: 2013-06-18T15:21:12+02:00
efuceladm02: degradedCelldisks:
efuceladm02: effectiveCacheSize: 744.125G
efuceladm02: id: 2f7eedd6-cda2-496e-98ec-417b94fb8ee7
efuceladm02: size: 744.125G
efuceladm02: status: normal
efuceladm03: name: efuceladm03_FLASHCACHE
efuceladm03: cellDisk: FD_00_efuceladm03,FD_04_efuceladm03,FD_01_efuceladm03,FD_02_efuceladm03,FD_03_efuceladm03,FD_06_efuceladm03,FD_05_efuceladm03,FD_07_efuceladm03
efuceladm03: creationTime: 2013-06-18T15:21:10+02:00
efuceladm03: degradedCelldisks:
efuceladm03: effectiveCacheSize: 744.125G
efuceladm03: id: c271cdb8-dc70-4009-ba97-dfc4c26b00ef
efuceladm03: size: 744.125G
efuceladm03: status: normal
...
..
.

Log on to the first Storage Cell and, using the CellCLI interface, perform the following procedure to enable the WriteBack Flash Cache in a rolling fashion.

 

Drop the existing flash cache

CellCLI> drop flashcache
Flash cache efuceladm01_FLASHCACHE successfully dropped

Inactivate the griddisk on the cell

CellCLI> alter griddisk all inactive
GridDisk DATA_CD_00_efuceladm01 successfully altered
GridDisk DATA_CD_01_efuceladm01 successfully altered
GridDisk DATA_CD_02_efuceladm01 successfully altered
GridDisk DATA_CD_03_efuceladm01 successfully altered
GridDisk DATA_CD_04_efuceladm01 successfully altered
GridDisk DATA_CD_05_efuceladm01 successfully altered
GridDisk DBFS_DG_CD_02_efuceladm01 successfully altered
GridDisk DBFS_DG_CD_03_efuceladm01 successfully altered
GridDisk DBFS_DG_CD_04_efuceladm01 successfully altered
GridDisk DBFS_DG_CD_05_efuceladm01 successfully altered
GridDisk RECO_CD_00_efuceladm01 successfully altered
GridDisk RECO_CD_01_efuceladm01 successfully altered
GridDisk RECO_CD_02_efuceladm01 successfully altered
GridDisk RECO_CD_03_efuceladm01 successfully altered
GridDisk RECO_CD_04_efuceladm01 successfully altered
GridDisk RECO_CD_05_efuceladm01 successfully altered

Shut down cellsrv service

CellCLI> alter cell shutdown services cellsrv

Stopping CELLSRV services...
The SHUTDOWN of CELLSRV services was successful.

Enable the Smart Flash Cache WriteBack

CellCLI> alter cell flashCacheMode=writeback
Cell efuceladm01 successfully altered

Restart the cellsrv service

CellCLI> alter cell startup services cellsrv

Starting CELLSRV services...
The STARTUP of CELLSRV services was successful.

Reactivate the griddisk on the cell

CellCLI> alter griddisk all active
GridDisk DATA_CD_00_efuceladm01 successfully altered
GridDisk DATA_CD_01_efuceladm01 successfully altered
GridDisk DATA_CD_02_efuceladm01 successfully altered
GridDisk DATA_CD_03_efuceladm01 successfully altered
GridDisk DATA_CD_04_efuceladm01 successfully altered
GridDisk DATA_CD_05_efuceladm01 successfully altered
GridDisk DBFS_DG_CD_02_efuceladm01 successfully altered
GridDisk DBFS_DG_CD_03_efuceladm01 successfully altered
GridDisk DBFS_DG_CD_04_efuceladm01 successfully altered
GridDisk DBFS_DG_CD_05_efuceladm01 successfully altered
GridDisk RECO_CD_00_efuceladm01 successfully altered
GridDisk RECO_CD_01_efuceladm01 successfully altered
GridDisk RECO_CD_02_efuceladm01 successfully altered
GridDisk RECO_CD_03_efuceladm01 successfully altered
GridDisk RECO_CD_04_efuceladm01 successfully altered
GridDisk RECO_CD_05_efuceladm01 successfully altered

Recreate the flash cache

CellCLI> create flashcache all
Flash cache efuceladm01_FLASHCACHE successfully created

 


Verify that the Smart Flash Cache WriteBack option is enabled

[root@efuceladm01 ~]# cellcli -e list cell detail | grep flashCacheMode
 flashCacheMode: writeback

Before applying the change to the next Exadata Storage Server, wait until all griddisks are synchronized and online.

[root@efuceladm01 ~]# cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome
 DATA_CD_00_efuceladm01 SYNCING Yes
 DATA_CD_01_efuceladm01 SYNCING Yes
 DATA_CD_02_efuceladm01 SYNCING Yes
 DATA_CD_03_efuceladm01 SYNCING Yes
 DATA_CD_04_efuceladm01 SYNCING Yes
 DATA_CD_05_efuceladm01 SYNCING Yes
 DBFS_DG_CD_02_efuceladm01 ONLINE Yes
 DBFS_DG_CD_03_efuceladm01 ONLINE Yes
 DBFS_DG_CD_04_efuceladm01 ONLINE Yes
 DBFS_DG_CD_05_efuceladm01 ONLINE Yes
 RECO_CD_00_efuceladm01 OFFLINE Yes
 RECO_CD_01_efuceladm01 OFFLINE Yes
 RECO_CD_02_efuceladm01 OFFLINE Yes
 RECO_CD_03_efuceladm01 OFFLINE Yes
 RECO_CD_04_efuceladm01 OFFLINE Yes
 RECO_CD_05_efuceladm01 OFFLINE Yes

Once the asmmodestatus is ONLINE on all griddisks it is safe to move to the next Storage Server.
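
A simple way to follow the resynchronization from the cell just altered (a small convenience sketch) is to repeat the check until no griddisk remains in SYNCING state:

# Re-run the griddisk status check every 30 seconds; only non-ONLINE disks are displayed
watch -n 30 'cellcli -e list griddisk attributes name,asmmodestatus | grep -v ONLINE'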


 

At the end of the procedure all Storage Servers are configured with the Smart Flash Cache WriteBack option:

[root@efudbadm02 ~]# dcli -g ~/cell_group -l root cellcli -e "list cell attributes flashcachemode"
efuceladm01: writeback
efuceladm02: writeback
efuceladm03: writeback
efuceladm04: writeback
efuceladm05: writeback
efuceladm06: writeback
efuceladm07: writeback
efuceladm08: writeback
efuceladm09: writeback
efuceladm10: writeback
efuceladm11: writeback



Patching Exadata Machine

################################################################
##    EXADATA MACHINE  INFRASTRUCTURE PATCHING of 1/8 RACK     ##
################################################################

This post describes step-by-step how to patch the infrastructure components of an Exadata Machine.

-----------------------------------------------------------
-- Cell Storage Pre-requisites
-----------------------------------------------------------

--Stop CRS using dcli
[root@ch01db01 oracle]# dcli -g /home/oracle/dbhosts -l root '/u01/app/12.1.0.2/grid/bin/crsctl stop crs'
 [root@ch01db01 oracle]# dcli -g /home/oracle/dbhosts -l root '/u01/app/12.1.0.2/grid/bin/crsctl stat res -t -init'
ch01db01: CRS-4639: Could not contact Oracle High Availability Services
ch01db01: CRS-4000: Command Status failed, or completed with errors.
ch01db02: CRS-4639: Could not contact Oracle High Availability Services
ch01db02: CRS-4000: Command Status failed, or completed with errors.
--Stop All Cell Storage Services
 [root@ch01db01 oracle]# dcli -g /home/oracle/cellhosts_ALL -l root "cellcli -e alter cell shutdown services all"
ch01celadm01:
ch01celadm01: Stopping the RS, CELLSRV, and MS services...
 ch01celadm01: The SHUTDOWN of services was successful.
 ch01celadm02:
 ch01celadm02: Stopping the RS, CELLSRV, and MS services...
 ch01celadm02: The SHUTDOWN of services was successful.
 ch01celadm03:
 ch01celadm03: Stopping the RS, CELLSRV, and MS services...
 ch01celadm03: The SHUTDOWN of services was successful.

[root@ch01db01 oracle]#

 

-----------------------------------------------------------
-- Cell Storage Patching
-----------------------------------------------------------

[root@ch01db01 patch_12.1.2.1.0.141206.1]# ./patchmgr -cells /home/oracle/cellhosts -reset_force
2016-02-05 11:17:07 +0100 :DONE: reset_force
[root@ch01db01 patch_12.1.2.1.0.141206.1]# ./patchmgr -cells /home/oracle/cellhosts -cleanup
2016-02-05 11:19:19 +0100        :Working: DO: Cleanup ...
2016-02-05 11:19:20 +0100        :SUCCESS: DONE: Cleanup
[root@ch01db01 patch_12.1.2.1.0.141206.1]# ./patchmgr -cells /home/oracle/cellhosts -patch_check_prereq
2016-02-05 11:20:56 +0100        :Working: DO: Check cells have ssh equivalence for root user. Up to 10 seconds per cell ...
 2016-02-05 11:20:57 +0100        :SUCCESS: DONE: Check cells have ssh equivalence for root user.
 2016-02-05 11:20:59 +0100        :Working: DO: Initialize files, check space and state of cell services. Up to 1 minute ...
 2016-02-05 11:21:01 +0100        :SUCCESS: DONE: Initialize files, check space and state of cell services.
 2016-02-05 11:22:19 +0100        :SUCCESS: DONE: Initialize files, check space and state of cell services.
 2016-02-05 11:22:19 +0100        :Working: DO: Copy, extract prerequisite check archive to cells. If required start md11 mismatched partner size correction. Up to 40 minutes ...
 2016-02-05 11:22:33 +0100 Wait correction of degraded md11 due to md partner size mismatch. Up to 30 minutes.
2016-02-05 11:22:34 +0100        :SUCCESS: DONE: Copy, extract prerequisite check archive to cells. If required start md11 mismatched partner size correction.
 2016-02-05 11:22:34 +0100        :Working: DO: Check prerequisites on all cells. Up to 2 minutes ...
 2016-02-05 11:23:38 +0100        :SUCCESS: DONE: Check prerequisites on all cells.
 2016-02-05 11:23:38 +0100        :Working: DO: Execute plugin check for Patch Check Prereq ...
 2016-02-05 11:23:38 +0100        :INFO: Patchmgr plugin start: Prereq check for exposure to bug 17854520 v1.3. Details in logfile /u02/p17885582_121210_Linux-x86-64/patch_12.1.2.1.0.141206.1/patchmgr.stdout.
 2016-02-05 11:23:38 +0100        :SUCCESS: No exposure to bug 17854520 with non-rolling patching
 2016-02-05 11:23:39 +0100        :SUCCESS: DONE: Execute plugin check for Patch Check Prereq.
[root@ch01db01 patch_12.1.2.1.0.141206.1]#
 [root@ch01db01 patch_12.1.2.1.0.141206.1]# ./patchmgr -cells /home/oracle/cellhosts -patch
********************************************************************************
 NOTE Cells will reboot during the patch or rollback process.
 NOTE For non-rolling patch or rollback, ensure all ASM instances using
 NOTE the cells are shut down for the duration of the patch or rollback.
 NOTE For rolling patch or rollback, ensure all ASM instances using
 NOTE the cells are up for the duration of the patch or rollback.
WARNING Do not start more than one instance of patchmgr.
 WARNING Do not interrupt the patchmgr session.
 WARNING Do not alter state of ASM instances during patch or rollback.
 WARNING Do not resize the screen. It may disturb the screen layout.
 WARNING Do not reboot cells or alter cell services during patch or rollback.
 WARNING Do not open log files in editor in write mode or try to alter them.
NOTE All time estimates are approximate.
 NOTE You may interrupt this patchmgr run in next 60 seconds with CONTROL-c.
********************************************************************************
2016-02-05 11:27:08 +0100        :Working: DO: Check cells have ssh equivalence for root user. Up to 10 seconds per cell ...
 2016-02-05 11:27:09 +0100        :SUCCESS: DONE: Check cells have ssh equivalence for root user.
 2016-02-05 11:27:12 +0100        :Working: DO: Initialize files, check space and state of cell services. Up to 1 minute ...
 2016-02-05 11:27:32 +0100        :SUCCESS: DONE: Initialize files, check space and state of cell services.
 2016-02-05 11:27:32 +0100        :Working: DO: Copy, extract prerequisite check archive to cells. If required start md11 mismatched partner size correction. Up to 40 minutes ...
 2016-02-05 11:27:45 +0100 Wait correction of degraded md11 due to md partner size mismatch. Up to 30 minutes.
2016-02-05 11:27:46 +0100        :SUCCESS: DONE: Copy, extract prerequisite check archive to cells. If required start md11 mismatched partner size correction.
 2016-02-05 11:27:46 +0100        :Working: DO: Check prerequisites on all cells. Up to 2 minutes ...
 2016-02-05 11:28:50 +0100        :SUCCESS: DONE: Check prerequisites on all cells.
 2016-02-05 11:28:50 +0100        :Working: DO: Copy the patch to all cells. Up to 3 minutes ...
 2016-02-05 11:29:22 +0100        :SUCCESS: DONE: Copy the patch to all cells.
 2016-02-05 11:29:24 +0100        :Working: DO: Execute plugin check for Patch Check Prereq ...
 2016-02-05 11:29:24 +0100        :INFO: Patchmgr plugin start: Prereq check for exposure to bug 17854520 v1.3. Details in logfile /u02/p17885582_121210_Linux-x86-64/patch_12.1.2.1.0.141206.1/patchmgr.stdout.
 2016-02-05 11:29:24 +0100        :SUCCESS: No exposure to bug 17854520 with non-rolling patching
 2016-02-05 11:29:25 +0100        :SUCCESS: DONE: Execute plugin check for Patch Check Prereq.
 2016-02-05 11:29:25 +0100 1 of 5 :Working: DO: Initiate patch on cells. Cells will remain up. Up to 5 minutes ...
 2016-02-05 11:29:37 +0100 1 of 5 :SUCCESS: DONE: Initiate patch on cells.
 2016-02-05 11:29:37 +0100 2 of 5 :Working: DO: Waiting to finish pre-reboot patch actions. Cells will remain up. Up to 45 minutes ...
 2016-02-05 11:30:37 +0100 Wait for patch pre-reboot procedures
2016-02-05 11:44:56 +0100 2 of 5 :SUCCESS: DONE: Waiting to finish pre-reboot patch actions.
 2016-02-05 11:44:56 +0100        :Working: DO: Execute plugin check for Patching ...
 2016-02-05 11:44:56 +0100        :SUCCESS: DONE: Execute plugin check for Patching.
 2016-02-05 11:44:56 +0100 3 of 5 :Working: DO: Finalize patch on cells. Cells will reboot. Up to 5 minutes ...
 2016-02-05 11:45:17 +0100 3 of 5 :SUCCESS: DONE: Finalize patch on cells.
 2016-02-05 11:45:17 +0100 4 of 5 :Working: DO: Wait for cells to reboot and come online. Up to 120 minutes ...
 2016-02-05 11:46:17 +0100 Wait for patch finalization and reboot
2016-02-05 13:09:24 +0100 4 of 5 :SUCCESS: DONE: Wait for cells to reboot and come online.
 2016-02-05 13:09:24 +0100 5 of 5 :Working: DO: Check the state of patch on cells. Up to 5 minutes ...
 2016-02-05 13:10:09 +0100 5 of 5 :SUCCESS: DONE: Check the state of patch on cells.
 2016-02-05 13:10:09 +0100        :Working: DO: Execute plugin check for Post Patch ...
 2016-02-05 13:10:10 +0100        :SUCCESS: DONE: Execute plugin check for Post Patch.
[root@ch01db01 patch_12.1.2.1.0.141206.1]#
[root@ch01db01 patch_12.1.2.1.0.141206.1]# dcli -c ch01celadm01 -l root 'imageinfo'
 ch01celadm01:
 ch01celadm01: Kernel version: 2.6.39-400.243.1.el6uek.x86_64 #1 SMP Wed Nov 26 09:15:35 PST 2014 x86_64
 ch01celadm01: Cell version: OSS_12.1.2.1.0_LINUX.X64_141206.1
 ch01celadm01: Cell rpm version: cell-12.1.2.1.0_LINUX.X64_141206.1-1.x86_64
 ch01celadm01:
 ch01celadm01: Active image version: 12.1.2.1.0.141206.1
 ch01celadm01: Active image activated: 2016-02-05 20:14:52 +0100
 ch01celadm01: Active image status: success
 ch01celadm01: Active system partition on device: /dev/md5
 ch01celadm01: Active software partition on device: /dev/md7
 ch01celadm01:
 ch01celadm01: Cell boot usb partition: /dev/sdac1
 ch01celadm01: Cell boot usb version: 12.1.2.1.0.141206.1
 ch01celadm01:
 ch01celadm01: Inactive image version: 12.1.1.1.1.140712
 ch01celadm01: Inactive image activated: 2014-08-06 11:50:09 +0200
 ch01celadm01: Inactive image status: success
 ch01celadm01: Inactive system partition on device: /dev/md6
 ch01celadm01: Inactive software partition on device: /dev/md8
 ch01celadm01:
 ch01celadm01: Inactive marker for the rollback: /boot/I_am_hd_boot.inactive
 ch01celadm01: Inactive grub config for the rollback: /boot/grub/grub.conf.inactive
 ch01celadm01: Inactive kernel version for the rollback: 2.6.39-400.128.17.el5uek
 ch01celadm01: Rollback to the inactive partitions: Possible
 [root@ch01db01 patch_12.1.2.1.0.141206.1]#

-----------------------------------------------------------
-- DB Server Patching
-----------------------------------------------------------

[root@ch01db02 dbnodeupdate]# ./dbnodeupdate.sh -h

Usage: dbnodeupdate.sh [ -u | -r | -c ] -l <baseurl|zip file> [-p] <phase> [-n] [-s] [-q] [-v] [-t] [-a] <alert.sh> [-b] [-m] | [-V] | [-h]
-u                       Upgrade
 -r                       Rollback
 -c                       Complete post actions (verify image status, cleanup, apply fixes, relink all homes, enable GI to start/start all domU's)
 -l <baseurl|zip file>    Baseurl (http or zipped iso file for the repository)
 -s                       Shutdown stack (domU's for VM) before upgrading/rolling back
 -p                       Bootstrap phase (1 or 2) only to be used when instructed by dbnodeupdate.sh
 -q                       Quiet mode (no prompting) only be used in combination with -t
 -n                       No backup will be created (Option disabled for systems being updated from Oracle Linux 5 to Oracle Linux 6)
 -t                       'to release' - used when in quiet mode or used when updating to one-offs/releases via 'latest' channel (requires 11.2.3.2.1)
 -v                       Verify prereqs only. Only to be used with -u and -l option
 -b                       Perform backup only
 -a <alert.sh>            Full path to shell script used for alert trapping
 -m                       Install / update-to exadata-sun/hp-computenode-minimum only (11.2.3.3.0 and later)
 -i                       Ignore /etc/oratab - relinking will be disabled. Only possible in combination with -c.
 -V                       Print version
 -h                       Print usage
For upgrading from releases 11.2.2.4.2 and later:
 Example using iso  : ./dbnodeupdate.sh -u -l /u01/p16432033_112321_Linux-x86-64.zip
 Example using http : ./dbnodeupdate.sh -u -l http://my-yum-repo.my-domain.com/yum/unknown/EXADATA/dbserver/11.2.3.3.0/base/x86_64/
 Example: ./dbnodeupdate.sh -u -l http://my-yum-repo.my-domain.com/yum/unknown/EXADATA/dbserver/11.2.3.2.1/base/x86_64/
 Example: ./dbnodeupdate.sh -u -l http://my-yum-repo.my-domain.com/yum/unknown/EXADATA/dbserver/11.2.3.3.0/base/x86_64/
For upgrading from releases 11.2.2.4.2 and later in quiet mode:
 Example: ./dbnodeupdate.sh -u -l /u01/p16432033_112321_Linux-x86-64.zip -q -t 11.2.3.2.1.130302
For completion steps:
 Example: ./dbnodeupdate.sh -c
For rollback:
 Example: ./dbnodeupdate.sh -r
For pre-req checks only:
 Example using iso  : ./dbnodeupdate.sh -u -l /u01/p16432033_112321_Linux-x86-64.zip -v
 Example using http : ./dbnodeupdate.sh -u -l http://my-yum-repo.my-domain.com/yum/unknown/EXADATA/dbserver/11.2.3.3.0/base/x86_64/ -v
For backup only:
 Example: ./dbnodeupdate.sh -u -l /u01/p16432033_112321_Linux-x86-64.zip -b
See MOS 1553103.1 for more examples
[root@ch01db02 dbnodeupdate]#

-----------------------------------------------------------
-- DB Server Patching Verification
-----------------------------------------------------------

[root@ch01db02 dbnodeupdate]# ./dbnodeupdate.sh -u -l /u01/exapatch/p20170913_121210_Linux-x86-64/p20170913_121210_Linux-x86-64.zip -v
##########################################################################################################################
 #                                                                                                                        #
 # Guidelines for using dbnodeupdate.sh (rel. 4.18):                                                                      #
 #                                                                                                                        #
 # - Prerequisites for usage:                                                                                             #
 #         1. Refer to dbnodeupdate.sh options. See MOS 1553103.1                                                         #
 #         2. Use the latest release of dbnodeupdate.sh. See patch 16486998                                               #
 #         3. Run the prereq check with the '-v' option.                                                                  #
 #                                                                                                                        #
 #   I.e.:  ./dbnodeupdate.sh -u -l /u01/my-iso-repo.zip -v                                                               #
 #          ./dbnodeupdate.sh -u -l http://my-yum-repo -v                                                                 #
 #                                                                                                                        #
 # - Prerequisite dependency check failures can happen due to customization:                                              #
 #     - The prereq check detects dependency issues that need to be addressed prior to running a successful update.       #
 #     - Customized rpm packages may fail the built-in dependency check and system updates cannot proceed until resolved. #
 #                                                                                                                        #
 #   When upgrading from releases later than 11.2.2.4.2 to releases before 11.2.3.3.0:                                    #
 #      - Conflicting packages should be removed before proceeding the update.                                            #
 #                                                                                                                        #
 #   When upgrading to releases 11.2.3.3.0 or later:                                                                      #
 #      - When the 'exact' package dependency check fails 'minimum' package dependency check will be tried.               #
 #      - When the 'minimum' package dependency check also fails,                                                         #
 #        the conflicting packages should be removed before proceeding.                                                   #
 #                                                                                                                        #
 # - As part of the prereq checks and as part of the update, a number of rpms will be removed.                            #
 #   This removal is required to preserve Exadata functioning. This should not be confused with obsolete packages.        #
 #      - See /var/log/cellos/packages_to_be_removed.txt for details on what packages will be removed.                    #
 #                                                                                                                        #
 # - In case of any problem when filing an SR, upload the following:                                                      #
 #      - /var/log/cellos/dbnodeupdate.log                                                                                #
 #      - /var/log/cellos/dbnodeupdate.<runid>.diag                                                                       #
 #      - where <runid> is the unique number of the failing run.                                                          #
 #                                                                                                                        #
 ##########################################################################################################################
Continue ? [y/n]
 y
(*) 2016-02-05 17:06:43: Unzipping helpers (/u01/exapatch/dbnodeupdate/dbupdate-helpers.zip) to /opt/oracle.SupportTools/dbnodeupdate_helpers
 (*) 2016-02-05 17:06:43: Initializing logfile /var/log/cellos/dbnodeupdate.log
 (*) 2016-02-05 17:06:44: Collecting system configuration settings. This may take a while...
 (*) 2016-02-05 17:07:10: Validating system settings for known issues and best practices. This may take a while...
 (*) 2016-02-05 17:07:10: Checking free space in /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170641
 (*) 2016-02-05 17:07:10: Unzipping /u01/exapatch/p20170913_121210_Linux-x86-64/p20170913_121210_Linux-x86-64.zip to /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170641, this may take a while
 (*) 2016-02-05 17:07:23: Original /etc/yum.conf moved to /etc/yum.conf.050215170641, generating new yum.conf
 (*) 2016-02-05 17:07:23: Generating Exadata repository file /etc/yum.repos.d/Exadata-computenode.repo
 (*) 2016-02-05 17:07:56: Validating the specified source location.
 (*) 2016-02-05 17:07:57: Cleaning up the yum cache.

-----------------------------------------------------------------
Running in prereq check mode
-----------------------------------------------------------------

Active Image version   : 12.1.1.1.1.140712
 Active Kernel version  : 2.6.39-400.128.17.el5uek
 Active LVM Name        : /dev/mapper/VGExaDb-LVDbSys1
 Inactive Image version : 12.1.1.1.0.131219
 Inactive LVM Name      : /dev/mapper/VGExaDb-LVDbSys2
 Current user id        : root
 Action                 : upgrade
 Upgrading to           : 12.1.2.1.0.141206.1 - Oracle Linux 5->6 upgrade
 Baseurl                : file:///var/www/html/yum/unknown/EXADATA/dbserver/050215170641/x86_64/ (iso)
 Iso file               : /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170641/exadata_ol6_base_repo_141206.1.iso
 Create a backup        : Yes (backup at update mandatory when updating from OL5 to OL6)
 Shutdown stack         : No (Currently stack is down)
 RPM exclusion list     : Function not available for OL5->OL6 upgrades
 RPM obsolete list      : Function not available for OL5->OL6 upgrades
 Exact dependencies     : Function not available for OL5->OL6 upgrades
 Minimum dependencies   : Function not available for OL5->OL6 upgrades
 Logfile                : /var/log/cellos/dbnodeupdate.log (runid: 050215170641)
 Diagfile               : /var/log/cellos/dbnodeupdate.050215170641.diag
 Server model           : SUN FIRE X4170 M3
 dbnodeupdate.sh rel.   : 4.18 (always check MOS 1553103.1 for the latest release of dbnodeupdate.sh)
 Note                   : After upgrading and rebooting run './dbnodeupdate.sh -c' to finish post steps.
The following known issues will be checked for and automatically corrected by dbnodeupdate.sh:
 (*) - Issue - Fix for CVE-2014-9295 AND ELSA-2014-1974
The following known issues will be checked for but require manual follow-up:
 (*) - Issue - Yum rolling update requires fix for 11768055 when Grid Infrastructure is below 11.2.0.2 BP12
 (*) - Issue - Exafusion silently enabled for database 12.1.0.2.0 with kernel 2.6.39-400.200 and later. See MOS 1947476.1 for more details.
---------------------------------------------------------------------------------------------------------------------
 NOTE:
 When upgrading to Oracle Linux 6 a backup is required for systems configured with logical volume manager (lvm).
 It appears no backup of the current image exist on the inactive lvm.
 This means a mandatory backup will be made using dbnodeupdate.sh before the actual update starts.
 ---------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------

-------------------------------------------
 Prereq check finished successfully, check the above report for next steps.
 -----------------------------------------------------------------------------------------------------------------------------
(*) 2016-02-05 17:08:01: Cleaning up iso and temp mount points
[root@ch01db02 dbnodeupdate]#

-----------------------------------------------------------
-- DB Server Patching Execution
-----------------------------------------------------------

[root@ch01db02 dbnodeupdate]# ./dbnodeupdate.sh -u -l /u01/exapatch/p20170913_121210_Linux-x86-64/p20170913_121210_Linux-x86-64.zip
##########################################################################################################################
 #                                                                                                                        #
 # Guidelines for using dbnodeupdate.sh (rel. 4.18):                                                                      #
 #                                                                                                                        #
 # - Prerequisites for usage:                                                                                             #
 #         1. Refer to dbnodeupdate.sh options. See MOS 1553103.1                                                         #
 #         2. Use the latest release of dbnodeupdate.sh. See patch 16486998                                               #
 #         3. Run the prereq check with the '-v' option.                                                                  #
 #                                                                                                                        #
 #   I.e.:  ./dbnodeupdate.sh -u -l /u01/my-iso-repo.zip -v                                                               #
 #          ./dbnodeupdate.sh -u -l http://my-yum-repo -v                                                                 #
 #                                                                                                                        #
 # - Prerequisite dependency check failures can happen due to customization:                                              #
 #     - The prereq check detects dependency issues that need to be addressed prior to running a successful update.       #
 #     - Customized rpm packages may fail the built-in dependency check and system updates cannot proceed until resolved. #
 #                                                                                                                        #
 #   When upgrading from releases later than 11.2.2.4.2 to releases before 11.2.3.3.0:                                    #
 #      - Conflicting packages should be removed before proceeding the update.                                            #
 #                                                                                                                        #
 #   When upgrading to releases 11.2.3.3.0 or later:                                                                      #
 #      - When the 'exact' package dependency check fails 'minimum' package dependency check will be tried.               #
 #      - When the 'minimum' package dependency check also fails,                                                         #
 #        the conflicting packages should be removed before proceeding.                                                   #
 #                                                                                                                        #
 # - As part of the prereq checks and as part of the update, a number of rpms will be removed.                            #
 #   This removal is required to preserve Exadata functioning. This should not be confused with obsolete packages.        #
 #      - See /var/log/cellos/packages_to_be_removed.txt for details on what packages will be removed.                    #
 #                                                                                                                        #
 # - In case of any problem when filing an SR, upload the following:                                                      #
 #      - /var/log/cellos/dbnodeupdate.log                                                                                #
 #      - /var/log/cellos/dbnodeupdate.<runid>.diag                                                                       #
 #      - where <runid> is the unique number of the failing run.                                                          #
 #                                                                                                                        #
 ##########################################################################################################################
Continue ? [y/n]
 y
(*) 2016-02-05 17:09:38: Unzipping helpers (/u01/exapatch/dbnodeupdate/dbupdate-helpers.zip) to /opt/oracle.SupportTools/dbnodeupdate_helpers
 (*) 2016-02-05 17:09:38: Initializing logfile /var/log/cellos/dbnodeupdate.log
 (*) 2016-02-05 17:09:39: Collecting system configuration settings. This may take a while...
 (*) 2016-02-05 17:10:07: Validating system settings for known issues and best practices. This may take a while...
 (*) 2016-02-05 17:10:07: Checking free space in /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170936
 (*) 2016-02-05 17:10:07: Unzipping /u01/exapatch/p20170913_121210_Linux-x86-64/p20170913_121210_Linux-x86-64.zip to /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170936, this may take a while
 (*) 2016-02-05 17:10:19: Original /etc/yum.conf moved to /etc/yum.conf.050215170936, generating new yum.conf
 (*) 2016-02-05 17:10:19: Generating Exadata repository file /etc/yum.repos.d/Exadata-computenode.repo
 (*) 2016-02-05 17:10:42: Validating the specified source location.
 (*) 2016-02-05 17:10:43: Cleaning up the yum cache.
Active Image version   : 12.1.1.1.1.140712
 Active Kernel version  : 2.6.39-400.128.17.el5uek
 Active LVM Name        : /dev/mapper/VGExaDb-LVDbSys1
 Inactive Image version : 12.1.1.1.0.131219
 Inactive LVM Name      : /dev/mapper/VGExaDb-LVDbSys2
 Current user id        : root
 Action                 : upgrade
 Upgrading to           : 12.1.2.1.0.141206.1 - Oracle Linux 5->6 upgrade
 Baseurl                : file:///var/www/html/yum/unknown/EXADATA/dbserver/050215170936/x86_64/ (iso)
 Iso file               : /u01/exapatch/p20170913_121210_Linux-x86-64/iso.stage.050215170936/exadata_ol6_base_repo_141206.1.iso
 Create a backup        : Yes (backup at update mandatory when updating from OL5 to OL6)
 Shutdown stack         : No (Currently stack is down)
 RPM exclusion list     : Function not available for OL5->OL6 upgrades
 RPM obsolete list      : Function not available for OL5->OL6 upgrades
 Exact dependencies     : Function not available for OL5->OL6 upgrades
 Minimum dependencies   : Function not available for OL5->OL6 upgrade
 Logfile                : /var/log/cellos/dbnodeupdate.log (runid: 050215170936)
 Diagfile               : /var/log/cellos/dbnodeupdate.050215170936.diag
 Server model           : SUN FIRE X4170 M3
 dbnodeupdate.sh rel.   : 4.18 (always check MOS 1553103.1 for the latest release of dbnodeupdate.sh)
 Note                   : After upgrading and rebooting run './dbnodeupdate.sh -c' to finish post steps.
The following known issues will be checked for and automatically corrected by dbnodeupdate.sh:
 (*) - Issue - Fix for CVE-2014-9295 AND ELSA-2014-1974
The following known issues will be checked for but require manual follow-up:
 (*) - Issue - Yum rolling update requires fix for 11768055 when Grid Infrastructure is below 11.2.0.2 BP12
 (*) - Issue - Exafusion silently enabled for database 12.1.0.2.0 with kernel 2.6.39-400.200 and later. See MOS 1947476.1 for more details.
Continue ? [y/n]
 y
(*) 2016-02-05 17:11:59: Verifying GI and DB's are shutdown
 (*) 2016-02-05 17:12:00: Collecting console history for diag purposes
 (*) 2016-02-05 17:12:32: Unmount of /boot successful
 (*) 2016-02-05 17:12:32: Check for /dev/sda1 successful
 (*) 2016-02-05 17:12:32: Mount of /boot successful
 (*) 2016-02-05 17:12:32: Disabling stack from starting
 (*) 2016-02-05 17:12:33: Performing filesystem backup to /dev/mapper/VGExaDb-LVDbSys2. Avg. 30 minutes (maximum 120) depends per environment.......
 (*) 2016-02-05 17:18:44: Backup successful
 (*) 2016-02-05 17:18:47: ExaWatcher stopped successful
 (*) 2016-02-05 17:19:07: EM Agent (in /u01/app/oracle/product/agent12c/core/12.1.0.4.0) stopped successfully
 (*) 2016-02-05 17:19:07: Capturing service status and file attributes. This may take a while...
 (*) 2016-02-05 17:19:12: Service status and file attribute report in: /etc/exadata/reports
 (*) 2016-02-05 17:19:12: Validating the specified source location.
 (*) 2016-02-05 17:19:13: Cleaning up the yum cache.
 (*) 2016-02-05 17:19:14: Executing OL5->OL6 upgrade steps, system is expected to reboot multiple times.
 (*) 2016-02-05 17:21:37: Initialize of Oracle Linux 6 Upgrade successful. Rebooting now...
Broadcast message from root (pts/0) (Thu Feb  5 17:21:37 2015):
The system is going down for reboot NOW!
[root@ch01db02 dbnodeupdate]#
[root@ch01db02 dbnodeupdate]# ./dbnodeupdate.sh -c
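The output of the completion step is not shown here; once the node is back on Oracle Linux 6, './dbnodeupdate.sh -c' runs the post-upgrade steps and typically re-enables the Grid Infrastructure stack that was disabled before the update. A quick sanity check afterwards could be (sketch, with $GRID_HOME standing for the actual Grid Infrastructure home of this environment):

[root@ch01db02 ~]# $GRID_HOME/bin/crsctl check crs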

-----------------------------------
-- Output new Image Version
-----------------------------------

[root@ch01db01 ibdiagtools]# imageinfo
Kernel version: 2.6.39-400.243.1.el6uek.x86_64 #1 SMP Wed Nov 26 09:15:35 PST 2014 x86_64
 Image version: 12.1.2.1.0.141206.1
 Image activated: 2016-02-05 18:24:46 +0100
 Image status: success
 System partition on device: /dev/mapper/VGExaDb-LVDbSys1
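Because the compute nodes are patched one at a time, a final check that all of them report the same image release can be useful. A possible verification from the first node (sketch, assuming a dbs_group file listing all database servers already exists, as is common on Exadata):

[root@ch01db01 ~]# dcli -g ~/dbs_group -l root "imageinfo -ver"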