Exadata as Code – PDB Snapshot Cloning

As a follow-up to the Exadata as Code post, today I’m going to focus on one of the latest features added to our automation: PDB Snapshot Cloning.

PDB snapshot cloning is one of the best development options to offer in a CI/CD project. In an Exadata environment there are special requirements to implement before starting to use this technology: Sparse Grid Disks and a Sparse ASM Disk Group (a description and a step-by-step example are available here).

On Exadata, the PDB Snapshot benefits from all Smart features, including offload capabilities, in addition to space- and time-efficient provisioning.

After this brief introduction, let’s see how PDB Snapshot Cloning has been implemented in our Exadata as Code automation.

Exadata Sparse Storage Automation

Select the Oracle Container Database (CDB) where PDB Snapshot Cloning should be activated, and with one-click provisioning the Sparse Grid Disks and the Sparse ASM Disk Group are created.

After the initial provisioning, the automation monitors the space usage and automatically resizes the sparse storage (increasing or decreasing it) when necessary.
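
The one-click storage operations ultimately wrap standard CellCLI and ASM commands. Purely as an illustration (cell, grid disk and disk group names and the sizes are hypothetical, not the actual playbook content), a resize boils down to something like:

# On each storage cell, resize the sparse grid disks (hypothetical names and size)
cellcli -e alter griddisk SPARSE_CD_00_cel01,SPARSE_CD_01_cel01 size=1000G

-- From an ASM instance, let the disk group pick up the new grid disk size
SQL> ALTER DISKGROUP SPARSEC1 RESIZE ALL;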



Automated storage lifecycle management


PDB Snapshot Cloning Automation

The same principle applies to the different PDB actions: one-click provisioning/decommissioning of the PDB Test Master and of the PDB Snapshot.

These features are exposed via UI or API to the application developers, making them autonomous in the management of such a space-efficient database environment.

PDB Snapshot lifecycle management


Exadata as Code

It is very exciting for me to share this post, because what I’m going to describe here is not the final result, but the current, intermediate result achieved by the Trivadis team in the development and implementation of what I call the “Exadata as Code” project.

In the Cloud and automation era, this is the Trivadis answer to improving efficiency, time-to-market and quality on the most challenging Exadata projects. Trivadis has been working hard to achieve this level of automation, covering most of the recurring activities on the platform. And this is not all: like any CI/CD development, it gets better every day, enriched by new features and fixes which simplify the lifecycle management of such platforms.

A few months ago I posted a blog entitled Bulk Exadata Patching, which showed how to improve the Exadata patching automation; despite those improvements, it still required a number of manual interactions. Now we have reached a much better level of automation, with one-click actions to perform cumbersome tasks like Infrastructure and Database Provisioning/Decommissioning, Patching, and many other operational tasks.

How Exadata as Code works

The concept is quite simple: all developments are made available in the form of Ansible playbooks and encapsulated inside Jenkins; this brings the following advantages:

  • User friendly interface
  • Orchestration Pipelines
  • Enhanced Security, recording auditing information and logging job executions
  • Job Scheduling

Exadata Administration Workflow

A scalable solution with which to efficiently manage Exadata platforms

Oracle Database Administration Workflow

Secure, high-quality database operations with an effective DBA team

Wrap-up

One takeaway from this experience: automation is the only option to stay competitive and deliver a high-quality service.


Special thanks go to all the Trivadis colleagues working so passionately on the project every day. #BetterTogether


Exadata and IORM by Examples

 

The Exadata Machine is frequently used to consolidate the database infrastructure, and such environments must guarantee performance stability and governance. On Exadata, the I/O Resource Manager (IORM) extends the capabilities available on other platforms to allocate, cap and prioritize resources among databases and categories.

Available since the first version of the Storage Cell software, IORM has recently been enhanced to cope with the new Multitenant and Cloud requirements. The IORM Plan can optimize the workload with one of the following objectives: basic, auto, low_latency, balanced or high_throughput.

 

I/O Resource Manager Overview

IORM allows executing I/O requests based on their priority; this is achieved by handling separate queues for high- and low-priority requests, as shown in the image below.

 

IORM Overview

 

Default IORM status

IORM is automatically enabled and cannot be completely disabled. The default mode protects critical operations like flash cache and flash log I/Os.

CellCLI> list iormplan detail
name: tvdceladm06_IORMPLAN
catPlan:
dbPlan:
objective: basic
status: active

CellCLI>

 

Per Database IORM definition

This configuration is suitable in environments with a small number of databases, where the I/O resources are individually defined for each database.

alter iormplan objective=auto

ALTER IORMPLAN -
dbplan=((name=ERP01, level=1, allocation=75, limit=95, role=primary), -
(name=ERP01, level=1, allocation=5, limit=25, role=standby),          -
(name=TREP, level=1, allocation=2, limit=5, flashCacheSize=1G),       -
(name=EPA01, level=2, allocation=40, limit=80),                       -
(name=DHJ01, level=3, allocation=50, flashCacheSize=20G),             -
(name=other, level=3, allocation=30)) 

The above plan regulates, for each database: the level, the allocation (%, a guaranteed minimum), the limit (%, a hard cap), the amount of flash cache and the role (primary or standby).

 

DBaaS and IORM

This configuration is suitable for Cloud-like environments, where a large number of databases are consolidated on the same infrastructure. The database services are standardized into a few categories (for example Gold, Silver and Bronze) and the I/O resource plan regulates those service categories.

CellCLI> ALTER IORMPLAN
dbplan=((name=gold, share=20,limit=100, type=profile), 
        (name=silver, share=10, limit=60, type=profile),
        (name=bronze, share=5, limit=20, type=profile))
The database parameter db_performance_profile allows associating the corresponding IORM service category with the instance; because it is set in the spfile, the change takes effect at the next instance restart:
SQL> alter system set db_performance_profile=silver scope=spfile;
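
After the instance restart, the assignment can be verified with a trivial check:

SQL> show parameter db_performance_profile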

Exadata Deployment with Elastic Configuration

Recently, for one of my customers, I had the chance to install a couple of Exadata X7-2 machines using the new Elastic Configuration. The major benefit of the Elastic Configuration is the possibility to acquire the Exadata Machine with almost any combination of Database Nodes and Storage Cells.

In the past we were used to the standard Oracle pre-defined Exadata Machine configurations: Eighth Rack, Quarter Rack, Half Rack and Full Rack. Ordering one of those is still possible, but not flexible enough.

The pictures below highlight the differences between the two configurations:

Exadata Classic vs. Elastic configuration

source: Oracle Data Sheet Exadata Database Machine X7-2

 

Deployment of an Exadata Elastic Configuration

The elastic configuration process automates the initial IP address allocation to database nodes and storage cells, regardless of the ordered configuration. The Exadata Machine is connected to the InfiniBand switches using a standard cabling methodology which allows determining each node’s location in the rack. This information is then used, when the nodes are powered up for the first time, to assign the initial default IPs.

[root@exatest-iba0 ~]# ibhosts
Ca : 0x579b0123796ba0 ports 2 "node10 elasticNode 192.168.10.17,192.168.10.18 ETH0"
Ca : 0x579b01237966e0 ports 2 "node8 elasticNode 192.168.10.15,192.168.10.16 ETH0"
Ca : 0x579b0123844ab0 ports 2 "node6 elasticNode 192.168.10.11,192.168.10.12 ETH0"
Ca : 0x579b0123845e50 ports 2 "node5 elasticNode 192.168.10.7,192.168.10.8 ETH0"
Ca : 0x579b0123845fe0 ports 2 "node4 elasticNode 192.168.10.40,172.16.2.40 ETH0"
Ca : 0x579b0123845ea0 ports 2 "node3 elasticNode 192.168.10.9,192.168.10.10 ETH0"
Ca : 0x579b0123812b90 ports 2 "node2 elasticNode 192.168.10.1,192.168.10.2 ETH0"
Ca : 0x579b0123812970 ports 2 "node1 elasticNode 192.168.10.3,192.168.10.4 ETH0"
[root@exatest-iba0 ~]#

 

 

Because the Virtualization option was required, it had to be activated at this stage:

[root@node8 ~]# /opt/oracle.SupportTools/switch_to_ovm.sh
2019-03-07 01:05:22 -0800 [INFO] Switch to DOM0 system partition /dev/VGExaDb/LVDbSys3 (/dev/mapper/VGExaDb-LVDbSys3)
2019-03-07 01:05:22 -0800 [INFO] Active system device: /dev/mapper/VGExaDb-LVDbSys1
2019-03-07 01:05:22 -0800 [INFO] Active system device in boot area: /dev/mapper/VGExaDb-LVDbSys1
2019-03-07 01:05:23 -0800 [INFO] Set active system device to /dev/VGExaDb/LVDbSys3 in /boot/I_am_hd_boot
2019-03-07 01:05:23 -0800 [INFO] Creating /.elasticConfig on DOM0 boot partition /boot
2019-03-07 01:05:34 -0800 [INFO] Reboot has been initiated to switch to the DOM0 system partition
Connection to 192.168.1.8 closed by remote host.
Connection to 192.168.1.8 closed.

 

After the switch-to-OVM command, it is time to reclaim the space initially used by the Linux bare-metal Logical Volumes:

[root@node8 ~]# /opt/oracle.SupportTools/reclaimdisks.sh -free -reclaim
Model is ORACLE SERVER X7-2
Number of LSI controllers: 1
Physical disks found: 4 (252:0 252:1 252:2 252:3)
Logical drives found: 1
Linux logical drive: 0
RAID Level for the Linux logical drive: 5
Physical disks in the Linux logical drive: 4 (252:0 252:1 252:2 252:3)
Dedicated Hot Spares for the Linux logical drive: 0
Global Hot Spares: 0
[INFO ] Check for DOM0 with inactive Linux system disk
[INFO ] Valid DOM0 with inactive Linux system disk is detected
[INFO ] Number of partitions on the system device /dev/sda: 3
[INFO ] Higher partition number on the system device /dev/sda: 3
[INFO ] Last sector on the system device /dev/sda: 3509760000
[INFO ] End sector of the last partition on the system device /dev/sda: 3509759966
[INFO ] Remove inactive system logical volume /dev/VGExaDb/LVDbSys1
[INFO ] Remove logical volume /dev/VGExaDb/LVDbOra1
[INFO ] Extend logical volume /dev/VGExaDb/LVDbExaVMImages
[INFO ] Resize ocfs2 on logical volume /dev/VGExaDb/LVDbExaVMImages
[INFO ] XEN boot version and rpm versions are in sync
[INFO ] XEN EFI files will not be updated
[INFO ] Force setup grub
[root@node8 ~]#

 

Check the success of the reclaim disks procedure:

[root@node8 ~]# /opt/oracle.SupportTools/reclaimdisks.sh -check
Model is ORACLE SERVER X7-2
Number of LSI controllers: 1
Physical disks found: 4 (252:0 252:1 252:2 252:3)
Logical drives found: 1
Linux logical drive: 0
RAID Level for the Linux logical drive: 5
Physical disks in the Linux logical drive: 4 (252:0 252:1 252:2 252:3)
Dedicated Hot Spares for the Linux logical drive: 0
Global Hot Spares: 0
Valid. Disks configuration: RAID5 from 4 disks with no global and dedicated hot spare disks.
Valid. Booted: DOM0. Layout: DOM0.
[root@node8 ~]#

 

Upload the Oracle Exadata Database Machine Deployment Assistant (OEDA) configuration files to the database server, together with all software images, and run the OneCommand procedure.

List of all Steps

[root@exatestdbadm01 linux-x64]# ./install.sh -cf TVD-exatest.xml -l
Initializing

1. Validate Configuration File
2. Update Nodes for Eighth Rack
3. Create Virtual Machine
4. Create Users
5. Setup Cell Connectivity
6. Calibrate Cells
7. Create Cell Disks
8. Create Grid Disks
9. Install Cluster Software
10. Initialize Cluster Software
11. Install Database Software
12. Relink Database with RDS
13. Create ASM Diskgroups
14. Create Databases
15. Apply Security Fixes
16. Install Exachk
17. Create Installation Summary
18. Resecure Machine
[root@exatestdbadm01 linux-x64]#

 

Run Step One to validate the setup

This example includes the creation of three different Clusters.

[root@exatestdbadm01 linux-x64]# ./install.sh -cf TVD-exatest.xml -s 1
Initializing
Executing Validate Configuration File
Validating cluster: Cluster-EFU
Locating machines...
Verifying operating systems...
Validating cluster networks...
Validating network connectivity...
Validating private ips on virtual cluster
Validating NTP setup...
Validating physical disks on storage cells...
Validating users...
Validating cluster: Cluster-PR1
Locating machines...
Verifying operating systems...
Validating cluster networks...
Validating network connectivity...
Validating private ips on virtual cluster
Validating NTP setup...
Validating physical disks on storage cells...
Validating users...
Validating cluster: Cluster-VAL
Locating machines...
Verifying operating systems...
Validating cluster networks...
Validating network connectivity...
Validating private ips on virtual cluster
Validating NTP setup...
Validating physical disks on storage cells...
Validating users...
Validating platinum...
Validating switches...
Checking disk reclaim status...
Checking Disk Tests Status....
Completed validation...

SUCCESS: Ip address: 10.x8.xx.40 is configured correctly
SUCCESS: Ip address: 10.x9.xx.55 is configured correctly
SUCCESS: Ip address: 10.x8.xx.41 is configured correctly
SUCCESS: Ip address: 10.x9.xx.56 is configured correctly
SUCCESS: Ip address: 10.x8.xx.45 is configured correctly
SUCCESS: Ip address: 10.x8.xx.46 is configured correctly
SUCCESS: Ip address: 10.x8.xx.44 is configured correctly
SUCCESS: Ip address: 10.x8.xx.43 is configured correctly
SUCCESS: Ip address: 10.x8.xx.42 is configured correctly
SUCCESS: 10.x8.xx.40 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x9.xx.55 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.41 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x9.xx.56 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.45 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.46 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.44 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.43 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.42 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.40 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x9.xx.55 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.41 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x9.xx.56 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.45 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.46 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.44 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.43 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.42 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.40 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x9.xx.55 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.41 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x9.xx.56 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.45 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.46 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.44 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.43 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.42 configured correctly on exatestceladm03.my.domain.com
SUCCESS: Ip address: 10.x8.xx.47 is configured correctly
SUCCESS: Ip address: 10.x9.xx.57 is configured correctly
SUCCESS: Ip address: 10.x8.xx.48 is configured correctly
SUCCESS: Ip address: 10.x9.xx.58 is configured correctly
SUCCESS: Ip address: 10.x8.xx.52 is configured correctly
SUCCESS: Ip address: 10.x8.xx.51 is configured correctly
SUCCESS: Ip address: 10.x8.xx.53 is configured correctly
SUCCESS: Ip address: 10.x8.xx.50 is configured correctly
SUCCESS: Ip address: 10.x8.xx.49 is configured correctly
SUCCESS: 10.x8.xx.47 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x9.xx.57 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.48 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x9.xx.58 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.52 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.51 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.53 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.50 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.49 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.47 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x9.xx.57 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.48 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x9.xx.58 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.52 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.51 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.53 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.50 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.49 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.47 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x9.xx.57 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.48 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x9.xx.58 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.52 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.51 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.53 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.50 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.49 configured correctly on exatestceladm03.my.domain.com
SUCCESS: Ip address: 10.x8.xx.54 is configured correctly
SUCCESS: Ip address: 10.x9.xx.59 is configured correctly
SUCCESS: Ip address: 10.x8.xx.55 is configured correctly
SUCCESS: Ip address: 10.x9.xx.60 is configured correctly
SUCCESS: Ip address: 10.x8.xx.58 is configured correctly
SUCCESS: Ip address: 10.x8.xx.60 is configured correctly
SUCCESS: Ip address: 10.x8.xx.59 is configured correctly
SUCCESS: Ip address: 10.x8.xx.57 is configured correctly
SUCCESS: Ip address: 10.x8.xx.56 is configured correctly
SUCCESS: 10.x8.xx.54 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x9.xx.59 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.55 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x9.xx.60 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.58 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.60 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.59 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.57 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.56 configured correctly on exatestceladm01.my.domain.com
SUCCESS: 10.x8.xx.54 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x9.xx.59 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.55 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x9.xx.60 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.58 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.60 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.59 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.57 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.56 configured correctly on exatestceladm02.my.domain.com
SUCCESS: 10.x8.xx.54 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x9.xx.59 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.55 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x9.xx.60 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.58 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.60 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.59 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.57 configured correctly on exatestceladm03.my.domain.com
SUCCESS: 10.x8.xx.56 configured correctly on exatestceladm03.my.domain.com
SUCCESS: Validated NTP server 10.x3.xx.xx0
SUCCESS: Validated NTP server 10.x3.xx.xx1
SUCCESS: Required file /EXAVMIMAGES/onecommand/linux-x64/WorkDir/p28514222_122118_Linux-x86-64.zip exists...
SUCCESS: Required file /EXAVMIMAGES/onecommand/linux-x64/WorkDir/p28762988_12201181016GIOCT2018RU_Linux-x86-64.zip exists...
SUCCESS: Required file /EXAVMIMAGES/onecommand/linux-x64/WorkDir/p28762989_12201181016DBOCT2018RU_Linux-x86-64.zip exists...
SUCCESS: Required file config/exachk.zip exists...
SUCCESS: Found Operating system LinuxPhysical and configuration file expects LinuxPhysical on machine exatestceladm03.my.domain.com, machine type: storage
SUCCESS: Found Operating system LinuxPhysical and configuration file expects LinuxPhysical on machine exatestceladm02.my.domain.com, machine type: storage
SUCCESS: Found Operating system LinuxPhysical and configuration file expects LinuxPhysical on machine exatestceladm01.my.domain.com, machine type: storage
SUCCESS: Expected machine exatestdbadm01.my.domain.com to have OS Type of Linux Dom0, and found OsType LinuxDom0
SUCCESS: Expected machine exatestdbadm02.my.domain.com to have OS Type of Linux Dom0, and found OsType LinuxDom0
SUCCESS: NTP servers on machine exatestceladm02.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestceladm01.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestceladm03.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestdbadm01.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestdbadm02.my.domain.com verified successfully
SUCCESS: Sufficient memory for all the guests on database node exatestdbadm02.my.domain.com
SUCCESS: Sufficient memory for all the guests on database node exatestdbadm01.my.domain.com
SUCCESS: Expected machine exatestdbadm02.my.domain.com to have OS Type of Linux Dom0, and found OsType LinuxDom0
SUCCESS: Found Operating system LinuxPhysical and configuration file expects LinuxPhysical on machine exatestceladm01.my.domain.com, machine type: storage
SUCCESS: Found Operating system LinuxPhysical and configuration file expects LinuxPhysical on machine exatestceladm02.my.domain.com, machine type: storage
SUCCESS: Expected machine exatestdbadm01.my.domain.com to have OS Type of Linux Dom0, and found OsType LinuxDom0
SUCCESS: Found Operating system LinuxPhysical and configuration file expects LinuxPhysical on machine exatestceladm03.my.domain.com, machine type: storage
SUCCESS: NTP servers on machine exatestceladm03.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestceladm01.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestceladm02.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestdbadm02.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestdbadm01.my.domain.com verified successfully
SUCCESS: Sufficient memory for all the guests on database node exatestdbadm02.my.domain.com
SUCCESS: Sufficient memory for all the guests on database node exatestdbadm01.my.domain.com
SUCCESS: Found Operating system LinuxPhysical and configuration file expects LinuxPhysical on machine exatestceladm03.my.domain.com, machine type: storage
SUCCESS: Found Operating system LinuxPhysical and configuration file expects LinuxPhysical on machine exatestceladm02.my.domain.com, machine type: storage
SUCCESS: Found Operating system LinuxPhysical and configuration file expects LinuxPhysical on machine exatestceladm01.my.domain.com, machine type: storage
SUCCESS: Expected machine exatestdbadm02.my.domain.com to have OS Type of Linux Dom0, and found OsType LinuxDom0
SUCCESS: Expected machine exatestdbadm01.my.domain.com to have OS Type of Linux Dom0, and found OsType LinuxDom0
SUCCESS: NTP servers on machine exatestceladm03.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestceladm02.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestceladm01.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestdbadm01.my.domain.com verified successfully
SUCCESS: NTP servers on machine exatestdbadm02.my.domain.com verified successfully
SUCCESS: Sufficient memory for all the guests on database node exatestdbadm02.my.domain.com
SUCCESS: Sufficient memory for all the guests on database node exatestdbadm01.my.domain.com
SUCCESS: Switch IP 10.x9.xx.51 resolves successfully to host exatest-iba0.my.domain.com on node exatestceladm03.my.domain.com
SUCCESS:
SUCCESS: Switch IP 10.x9.xx.51 resolves successfully to host exatest-iba0.my.domain.com on node exatestceladm02.my.domain.com
SUCCESS: Switch IP 10.x9.xx.52 resolves successfully to host exatest-ibb0.my.domain.com on node exatestceladm03.my.domain.com
SUCCESS:
SUCCESS:
SUCCESS:
SUCCESS: Switch IP 10.x9.xx.52 resolves successfully to host exatest-ibb0.my.domain.com on node exatestceladm02.my.domain.com
SUCCESS:
SUCCESS: Switch IP 10.x9.xx.51 resolves successfully to host exatest-iba0.my.domain.com on node exatestceladm01.my.domain.com
SUCCESS: Switch IP 10.x9.xx.52 resolves successfully to host exatest-ibb0.my.domain.com on node exatestceladm01.my.domain.com
SUCCESS:
SUCCESS: X7 compute node exatestdbadm01.my.domain.com has updated Broadcom firmware
SUCCESS: X7 compute node exatestdbadm02.my.domain.com has updated Broadcom firmware
SUCCESS: Disk Tests are not running/active on any of the Storage Servers.
SUCCESS: Cluster Version 12.2.0.1.181016 is compatible with OL7 on exatestdbadm01
SUCCESS: Cluster Version 12.2.0.1.181016 is compatible with OL7 on exatestdbadm02
SUCCESS: Cluster Version 12.2.0.1.181016 is compatible with OL7 on exatestdbadm01
SUCCESS: Cluster Version 12.2.0.1.181016 is compatible with OL7 on exatestdbadm02
SUCCESS: Cluster Version 12.2.0.1.181016 is compatible with OL7 on exatestdbadm01
SUCCESS: Cluster Version 12.2.0.1.181016 is compatible with OL7 on exatestdbadm02
SUCCESS: Disk size 10000GB on cell exatestceladm01.my.domain.com matches the value specified in the OEDA configuration file
SUCCESS: Disk size 10000GB on cell exatestceladm02.my.domain.com matches the value specified in the OEDA configuration file
SUCCESS: Disk size 10000GB on cell exatestceladm03.my.domain.com matches the value specified in the OEDA configuration file
SUCCESS: Disk size 10000GB on cell exatestceladm04.my.domain.com matches the value specified in the OEDA configuration file
SUCCESS: Disk size 10000GB on cell exatestceladm05.my.domain.com matches the value specified in the OEDA configuration file
SUCCESS: Disk size 10000GB on cell exatestceladm06.my.domain.com matches the value specified in the OEDA configuration file
Successfully completed execution of step Validate Configuration File [elapsed Time [Elapsed = 250301 mS [4.0 minutes] Thu Mar 07 12:35:31 CET 2019]]
[root@exatestdbadm01 linux-x64]#

 

 

Execution of all remaining steps

Then, because we felt confident, we decided to invoke all the remaining steps together:

[root@exatestdbadm01 linux-x64]# ./install.sh -cf TVD-exatest.xml -r 1-18
...
..

 

The final result is the Exadata Machine installed with six Oracle VMs and three Grid Infrastructure clusters, each one running a test RAC database.

 

 

RMAN on Multitenant DB – Awareness of the Backup Optimization Behavior

Recovery Manager (RMAN) is one of the most popular Oracle Database components, with unique Backup/Recovery features. It is fully integrated with the Multitenant architecture, allowing the implementation of a Manage-Many-Databases-as-One strategy.

RMAN permits customizing and saving several database parameters used during the backup and recovery operations. Such parameters define, for example, the backup retention policy, the default device type, how many archive log copies should be stored, whether the backup sets should be compressed and/or encrypted, and so on.

 

Below is an example of an RMAN setup highlighting the parameter CONFIGURE BACKUP OPTIMIZATION ON, discussed in the next sections.

RMAN> show all;

RMAN configuration parameters for database with db_unique_name CEFUPRD are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 8 DAYS;
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/BACKUP/Databases/CEFUPRD/%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/BACKUP/Databases/CEFUPRD/%d_%T_%U';
CONFIGURE MAXSETSIZE TO UNLIMITED;
CONFIGURE ENCRYPTION FOR DATABASE OFF;
CONFIGURE ENCRYPTION ALGORITHM 'AES128';
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;
CONFIGURE RMAN OUTPUT TO KEEP FOR 10 DAYS;
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DISK;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/fast_recovery_area/rcocefuprd/cefuprd/snapcf_cefuprd.f';

RMAN>

 

 

Effects of RMAN Backup Optimization ON/OFF

In a Multitenant environment it is more important than ever to understand the effects of the parameter CONFIGURE BACKUP OPTIMIZATION, which can be set to ON or OFF.

Behavior when set to ON

If RMAN determines that a file is identical and it has been backed up, then it is a candidate to be skipped. RMAN must do further checking to determine whether to skip the file, however, because both the retention policy and the backup duplexing feature are factors in the algorithm that determines whether RMAN has sufficient backups on the specified device type. (Definition from Oracle Backup Recovery User’s Guide).

Behavior when set to OFF

The RMAN backup always includes all files, regardless of whether identical files have already been backed up within the backup retention window.

 

 

What happens by migrating from Non-CDB to PDB?

Assume that we have just migrated a non-CDB database to a PDB and that our pluggable database has 4 tablespaces, all open read/write. The container uses the same RMAN setup shown at the top of this post, with CONFIGURE BACKUP OPTIMIZATION ON.

Despite having a FULL database backup every night, only one backup every 8 days will be complete and consistent, because the RMAN backup optimization algorithm detects that the PDB$SEED datafiles are unchanged and skips them. Therefore, if we restore the CDB using the backup sets generated by a single FULL database backup, with no access to the rest of the backup sets inside the retention window, there is a high probability that the restore will fail.
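
As a hedged sketch only (to be validated against your own backup strategy), the skipping can be avoided either by disabling backup optimization or by overriding it for a specific backup with the FORCE option:

RMAN> CONFIGURE BACKUP OPTIMIZATION OFF;

-- or, keeping optimization ON, override it for a specific run
RMAN> BACKUP DATABASE FORCE;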

 

Extract of the CDB backup log, which shows that the PDB$SEED datafiles have been skipped because they were already backed up once during the last 8 days.

RMAN> BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL = 0 DATABASE PLUS ARCHIVELOG NOT BACKED UP 1 TIMES; 

Starting backup at May 15 2018 00:35:07
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=9 instance=clgbprd1 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=168 instance=clgbprd1 device type=DISK
skipping archived logs of thread 1 from sequence 39516 to 39931; already backed up
skipping archived logs of thread 2 from sequence 34457 to 34749; already backed up
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=2 sequence=34774 RECID=148645 STAMP=976088413
input archived log thread=2 sequence=34775 RECID=148649 STAMP=976088467
input archived log thread=1 sequence=39944 RECID=148655 STAMP=976088552
input archived log thread=2 sequence=34776 RECID=148651 STAMP=976088509
input archived log thread=2 sequence=34777 RECID=148653 STAMP=976088551
input archived log thread=2 sequence=34778 RECID=148657 STAMP=976088700
input archived log thread=1 sequence=39945 RECID=148662 STAMP=976088937
input archived log thread=2 sequence=34779 RECID=148659 STAMP=976088838

...
Starting backup at May 15 2018 00:50:02
using channel ORA_DISK_1
using channel ORA_DISK_2
skipping datafile 2; already backed up 1 time(s)
skipping datafile 4; already backed up 1 time(s)
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
...

 

Using only the backup sets above to restore the CDB means that Oracle has to recreate the two skipped datafiles (numbers 2 and 4) by applying the archived logs generated during the initial CDB provisioning.

Note that the full backup starts including archived logs from the following sequences:

  • For the Thread 1 – Sequence  39944
  • For the Thread 2 – Sequence  34774

But when Oracle initiates the Media Recovery, it complains because the archived log Thread 1 – Sequence 1 is unavailable:

RMAN> run {
allocate auxiliary channel dsk1 type disk ;
2> allocate auxiliary channel dsk2 type disk ;
allocate auxiliary channel dsk3 type disk ;
allocate auxiliary channel dsk4 type disk ;
3> allocate auxiliary channel dsk5 type disk ;
4> allocate auxiliary channel dsk6 type disk ;
5> duplicate database to 'CEFUAUX' noopen backup location '/BACKUP/Databases/CEFUPRD/backup_20180515_only' nofilenamecheck;
}6> 7> 8> 9>

allocated channel: dsk1
channel dsk1: SID=322 device type=DISK

allocated channel: dsk2
channel dsk2: SID=471 device type=DISK

allocated channel: dsk3
channel dsk3: SID=9 device type=DISK

allocated channel: dsk4
channel dsk4: SID=166 device type=DISK

allocated channel: dsk5
channel dsk5: SID=323 device type=DISK

allocated channel: dsk6
channel dsk6: SID=478 device type=DISK

Starting Duplicate Db at May 15 2018 09:29:15

....

contents of Memory Script:
{
 set until scn 2372623043;
 recover
 clone database
 delete archivelog
 ;
}
executing Memory Script

executing command: SET until clause

Starting recover at May 15 2018 11:24:39

starting media recovery

unable to find archived log
archived log thread=1 sequence=1
Oracle instance started

 

I hope this example helped you understand that, when migrating from non-CDB to Multitenant, many administration tasks should be carefully reviewed due to the major architecture changes.

 


 

Exadata Storage Snapshots

This post describes how to implement Oracle Database Snapshot Technology on Exadata Machine.

Because the Exadata Storage Cell Smart Features, Storage Indexes, IORM and Network Resource Manager work at the ASM Volume Manager level only (they don’t work on top of the ACFS Cluster File System), the implementation of the snapshot technology is different compared to any other non-Exadata environment.

For this purpose, Oracle has developed a new type of ASM Disk Group called the SPARSE Disk Group. It uses ASM SPARSE Grid Disks based on Thin Provisioning to save the database snapshot copies and the associated metadata, and it supports both non-CDB and PDB snapshot copies.

The implementation requires the following minimal software versions:

  • Exadata Storage Software version 12.1.2.1.0.
  • Oracle Database version 12.1.0.2 with bundle patch 5.
One major restriction applies to Exadata Storage Snapshots compared to ACFS: the source database must be a shared copy, open read-only, called the Test Master. The Test Master database cannot be modified or deleted as long as the latest child snapshot is in use.
This restriction exists because the Exadata Snapshot technology uses “allocate on first write” and not “copy on write” (as ACFS does), and the snapshot is per database datafile.
When a child snapshot issues a write, the write goes to a private copy of that block inside the snapshot, preserving the original block value, which can still be accessed by the other child snapshots of the same Test Master.

How to Implement Exadata Storage Snapshots in a PDB Environment

Check the celldisks for available free space to allocate to a new SPARSE Disk Group

[root@strgceladm01 ~]# cellcli -e list celldisk attributes name,freespace
 CD_00_strgceladm01 853.34375G
 CD_01_strgceladm01 853.34375G
 CD_02_strgceladm01 853.34375G
 CD_03_strgceladm01 853.34375G
 CD_04_strgceladm01 853.34375G
 CD_05_strgceladm01 853.34375G
 CD_06_strgceladm01 853.34375G
 CD_07_strgceladm01 853.34375G
 CD_08_strgceladm01 853.34375G
 CD_09_strgceladm01 853.34375G
 CD_10_strgceladm01 853.34375G
 CD_11_strgceladm01 853.34375G
 FD_00_strgceladm01 0
 FD_01_strgceladm01 0
 FD_02_strgceladm01 0
 FD_03_strgceladm01 0
[root@strgceladm01 ~]#


[root@strgceladm02 ~]# cellcli -e list celldisk attributes name,freespace
 CD_00_strgceladm02 853.34375G
 CD_01_strgceladm02 853.34375G
 CD_02_strgceladm02 853.34375G
 CD_03_strgceladm02 853.34375G
 CD_04_strgceladm02 853.34375G
 CD_05_strgceladm02 853.34375G
 CD_06_strgceladm02 853.34375G
 CD_07_strgceladm02 853.34375G
 CD_08_strgceladm02 853.34375G
 CD_09_strgceladm02 853.34375G
 CD_10_strgceladm02 853.34375G
 CD_11_strgceladm02 853.34375G
 FD_00_strgceladm02 0
 FD_01_strgceladm02 0
 FD_02_strgceladm02 0
 FD_03_strgceladm02 0
[root@strgceladm02 ~]#


[root@strgceladm03 ~]# cellcli -e list celldisk attributes name,freespace
 CD_00_strgceladm03 853.34375G
 CD_01_strgceladm03 853.34375G
 CD_02_strgceladm03 853.34375G
 CD_03_strgceladm03 853.34375G
 CD_04_strgceladm03 853.34375G
 CD_05_strgceladm03 853.34375G
 CD_06_strgceladm03 853.34375G
 CD_07_strgceladm03 853.34375G
 CD_08_strgceladm03 853.34375G
 CD_09_strgceladm03 853.34375G
 CD_10_strgceladm03 853.34375G
 CD_11_strgceladm03 853.34375G
 FD_00_strgceladm03 0
 FD_01_strgceladm03 0
 FD_02_strgceladm03 0
 FD_03_strgceladm03 0
[root@strgceladm03 ~]#

For each Storage Cell, create the SPARSE Grid Disks as described below

[root@strgceladm01 ~]# cellcli -e CREATE GRIDDISK ALL PREFIX=SPARSE, sparse=true, SIZE=853.34375G
Cell disks were skipped because they had no freespace for grid disks: FD_00_strgceladm01, FD_01_strgceladm01, FD_02_strgceladm01, FD_03_strgceladm01.
GridDisk SPARSE_CD_00_strgceladm01 successfully created
GridDisk SPARSE_CD_01_strgceladm01 successfully created
GridDisk SPARSE_CD_02_strgceladm01 successfully created
GridDisk SPARSE_CD_03_strgceladm01 successfully created
GridDisk SPARSE_CD_04_strgceladm01 successfully created
GridDisk SPARSE_CD_05_strgceladm01 successfully created
GridDisk SPARSE_CD_06_strgceladm01 successfully created
GridDisk SPARSE_CD_07_strgceladm01 successfully created
GridDisk SPARSE_CD_08_strgceladm01 successfully created
GridDisk SPARSE_CD_09_strgceladm01 successfully created
GridDisk SPARSE_CD_10_strgceladm01 successfully created
GridDisk SPARSE_CD_11_strgceladm01 successfully created
[root@strgceladm01 ~]#

For each Storage Cell, list all the Grid Disks

[root@strgceladm01 ~]# cellcli -e list griddisk attributes name,size
 DATAC1_CD_00_strgceladm01 6.294586181640625T
 DATAC1_CD_01_strgceladm01 6.294586181640625T
 DATAC1_CD_02_strgceladm01 6.294586181640625T
 DATAC1_CD_03_strgceladm01 6.294586181640625T
 DATAC1_CD_04_strgceladm01 6.294586181640625T
 DATAC1_CD_05_strgceladm01 6.294586181640625T
 DATAC1_CD_06_strgceladm01 6.294586181640625T
 DATAC1_CD_07_strgceladm01 6.294586181640625T
 DATAC1_CD_08_strgceladm01 6.294586181640625T
 DATAC1_CD_09_strgceladm01 6.294586181640625T
 DATAC1_CD_10_strgceladm01 6.294586181640625T
 DATAC1_CD_11_strgceladm01 6.294586181640625T
 FGRID_FD_00_strgceladm01 2.0717315673828125T
 FGRID_FD_01_strgceladm01 2.0717315673828125T
 FGRID_FD_02_strgceladm01 2.0717315673828125T
 FGRID_FD_03_strgceladm01 2.0717315673828125T
 RECOC1_CD_00_strgceladm01 1.78143310546875T
 RECOC1_CD_01_strgceladm01 1.78143310546875T
 RECOC1_CD_02_strgceladm01 1.78143310546875T
 RECOC1_CD_03_strgceladm01 1.78143310546875T
 RECOC1_CD_04_strgceladm01 1.78143310546875T
 RECOC1_CD_05_strgceladm01 1.78143310546875T
 RECOC1_CD_06_strgceladm01 1.78143310546875T
 RECOC1_CD_07_strgceladm01 1.78143310546875T
 RECOC1_CD_08_strgceladm01 1.78143310546875T
 RECOC1_CD_09_strgceladm01 1.78143310546875T
 RECOC1_CD_10_strgceladm01 1.78143310546875T
 RECOC1_CD_11_strgceladm01 1.78143310546875T
 SPARSE_CD_00_strgceladm01 853.34375G
 SPARSE_CD_01_strgceladm01 853.34375G
 SPARSE_CD_02_strgceladm01 853.34375G
 SPARSE_CD_03_strgceladm01 853.34375G
 SPARSE_CD_04_strgceladm01 853.34375G
 SPARSE_CD_05_strgceladm01 853.34375G
 SPARSE_CD_06_strgceladm01 853.34375G
 SPARSE_CD_07_strgceladm01 853.34375G
 SPARSE_CD_08_strgceladm01 853.34375G
 SPARSE_CD_09_strgceladm01 853.34375G
 SPARSE_CD_10_strgceladm01 853.34375G
 SPARSE_CD_11_strgceladm01 853.34375G
[root@strgceladm01 ~]#

From an ASM Instance, create the SPARSE Disk Group

SQL> CREATE DISKGROUP SPARSEC1 EXTERNAL REDUNDANCY DISK 'o/*/SPARSE_CD_*'
ATTRIBUTE
'compatible.asm' = '12.2.0.1',
'compatible.rdbms' = '12.2.0.1',
'cell.smart_scan_capable'='TRUE',
'cell.sparse_dg' = 'allsparse',
'AU_SIZE' = '4M';

Diskgroup created.

Set the following ASM attributes on the Disk Group hosting the Test Master Database

ALTER DISKGROUP DATAC1 SET ATTRIBUTE 'access_control.enabled' = 'true';

Grant access to the OS RDBMS user used to access the Disk Group

ALTER DISKGROUP DATAC1 ADD USER 'oracle';

From an ASM Instance, set ownership permissions for every file that belongs solely to the PDB being snapshot cloned, as per the example below

alter diskgroup DATAC1 set ownership owner='oracle' for file '+DATAC1/CDBT/<xxxxxxxxxxxxxxxxxxx>/DATAFILE/system.xxx.xxxxxxx';
alter diskgroup DATAC1 set ownership owner='oracle' for file '+DATAC1/CDBT/<xxxxxxxxxxxxxxxxxxx>/DATAFILE/sysaux.xxx.xxxxxxx';
alter diskgroup DATAC1 set ownership owner='oracle' for file '+DATAC1/CDBT/<xxxxxxxxxxxxxxxxxxx>/DATAFILE/users.xxx.xxxxxxx';
...
..

Restart the Test Master PDB in read-only mode

alter pluggable database PDBTESTMASTER close immediate instances=all;
alter pluggable database PDBTESTMASTER open read only;

Create the first PDB Snapshot Copy on Exadata SPARSE Disk Group

Create pluggable database PDBDEV01 from PDBTESTMASTER tempfile reuse create_file_dest='+SPARSEC1' snapshot copy;
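
To confirm that the snapshot PDB datafiles have landed on the sparse disk group, a simple check (illustrative only) from inside the new PDB shows the +SPARSEC1 paths:

SQL> alter session set container=PDBDEV01;
SQL> select name from v$datafile;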

Feedback of the Exadata Storage Snapshots

The ability to create storage-efficient database copies in a few seconds, independently of the size of the Test Master, is very useful for today’s IT departments; but such extreme velocity and flexibility is not entirely free. In fact, performance tests on an I/O-bound workload have highlighted significant performance degradation. This reminds us that, as defined by Oracle Corporation, the Snapshot Technology included on the Exadata Machine remains a non-production option.

Feedback of Modern Consolidated Database Environment

 

Since the launch of Oracle 12c R1 Beta Program (August 2012) at Trivadis, we have been intensively testing, engineering and implementing Multitenant architectures for our customers.

Today, we can provide our feedback and that of our customers!

The overall feedback related to Oracle Multitenant is very positive: customers have been able to increase flexibility and automation, improving the efficiency of their software development life cycles.

Even the single-tenant configuration (free of charge) brings a few advantages compared to the non-CDB architecture. Therefore, from a technology point of view, I recommend adopting the Container Database (CDB) architecture for all Oracle databases.

 

Examples of Multitenant architectures implemented

Having defined Oracle Multitenant as a technological revolution in the space of relational databases, when combined with other 12c features it becomes a game changer for flexibility, automation and velocity.

Listed here are a few examples of successful architectures implemented with our customers using the Oracle Container Database (CDB):

 

  • Database consolidation without performance and stability compromise here.

 

  • Multitenant and DevOps here.

 

  • Operating Database Disaster Recovery in Multitenant environment here.

 

 


 

Adding Pluggable Databases to an existing Multitenant Data Guard environment

We all know the benefits of the Oracle Multitenant “Many-as-One” consolidation: one container (CDB) operation from which many Pluggable Databases (PDBs) can benefit; for example, one CDB backup protects all PDBs stored inside the container itself.

While among DBAs the setup of Oracle Data Guard has become more than a standard routine, described in thousands of Internet pages and blogs (one example is available here), this post explains how to add new Pluggable Databases (PDBs) to an existing Multitenant environment protected by Data Guard.

 

How to create PDBs in Oracle Multitenant environment protected by Data Guard

There are multiple scenarios of PDB creation, and they integrate differently within the Data Guard architecture. The easiest way to proceed consists of creating a new Pluggable Database from the SEED PDB:

  • PDB creation from SEED
    The creation of a brand new empty pluggable database is automatically replicated to each physical standby database. No additional action is required (a minimal example follows this list).
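
As a minimal sketch (the PDB name, admin user and password are hypothetical), and assuming OMF is in place on both sites so the standby builds its own copy of the files automatically, the creation looks like this:

SQL> CREATE PLUGGABLE DATABASE PDBNEW ADMIN USER pdbadmin IDENTIFIED BY "MySecret#2019";
SQL> ALTER PLUGGABLE DATABASE PDBNEW OPEN INSTANCES=ALL;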

 

Unfortunately, this option is not always applicable, because the new PDB is often required to be a clone of an existing one. Therefore, it is important for the DBA to understand how to integrate a new, non-empty Pluggable Database into a Multitenant and Data Guard environment without impacting the pre-existing setup.

 

  • PDB clone 

Cloning a PDB in a Data Guard environment requires a few additional steps, which change across the different Oracle versions and depending on whether the Active Data Guard option is in use. When remote cloning the PDB, the option STANDBYS=NONE should be used to defer the PDB replication to the Standby container. Afterwards it is possible to replicate and protect the newly cloned PDB with Data Guard.

A full example of how to perform those tasks is reported below.

 

Cloning a PDB via DB Link using the STANDBYS=NONE option

create pluggable database PCJORD from PCLORD@ccls01_PCLORD tempfile reuse STANDBYS=NONE;

 

Open the newly cloned PDB

alter pluggable database PCJORD open instances=all;

 

On the Standby Container Restore PDB from Primary

run{
set newname for pluggable database PCJORD to new;
restore pluggable database PCJORD from service CMJP01;
switch datafile all;
}

 

Connect to the Standby container and STOP the Apply Process

dgmgrl
connect sys/xxxxxxxx@CMJP01
edit database 'CMJS01' set state='APPLY-OFF';

 

If Active Data Guard is in use, re-start the Standby container in MOUNT

srvctl stop database -db CMJS01 

sqlplus / as sysdba
startup mount

 

Enable the PDB recovery on the Standby Container

alter session set container=PCJORD;
alter pluggable database enable recovery;

 

Connect to the Standby container and RE-START the Apply Process

dgmgrl
connect sys/xxxxxxxx@CMJP01
edit database 'CMJS01' set state='APPLY-ON';

 

If Active Data Guard is in use Open the Container in Read Only

alter database open;

 

 


 

 

Oracle Multitenant supports database DevOps standards

As a consultant I constantly speak with my customers, and among a large number of them I have noticed that the speed and flexibility of database provisioning activities generate huge concern.

Hence I decided to describe in this post a few Oracle Multitenant options to resolve those problems.

While production is the most critical environment to maintain, it is definitely not the one generating the greatest effort in terms of provisioning. Applications are more and more complex and require continuous delivery; to satisfy those needs, the infrastructure has a few provisioning challenges to overcome.

Now, with Oracle version 12.2 and the Multitenant option, the DBaaS model becomes simpler than ever.

 

Clone PDB

The Clone PDB operation has been enhanced from Cold to Hot Clone. This improvement requires the use of PDB Local Undo. Hot Clone is now the default method and can be divided into three major steps (a quick check for Local Undo is shown after the list):

  1. PDB source datafile copy; because the PDB remains open in read/write at this stage, the cloned datafiles are physically inconsistent (fuzzy data files).
  2. The Redo Log entries generated on the source PDB during the copy are applied to the target PDB. This step makes the source and target PDBs two exact physical copies.
  3. Because the Redo Log entries coming from the source PDB contain committed and uncommitted transactions, to make the target PDB transactionally consistent, the undo entries of all uncommitted transactions must be applied.
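
Since Hot Clone relies on Local Undo, a quick preliminary check (not part of the clone itself) is to verify it is enabled:

SQL> SELECT property_name, property_value
     FROM   database_properties
     WHERE  property_name = 'LOCAL_UNDO_ENABLED';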

 

The command below shows how to clone a PDB open in read/write:

Create Pluggable Database ERP_Hot_Clone from ERP;

 

Refreshable PDB

The Refreshable PDB leverages the Hot Clone capability, creating an initial copy of the source PDB which is then refreshed over time, at a scheduled interval or on demand.

To better understand the possible use cases, the graphical example below covers a development team’s request to have a copy of the production data available every morning.

 

Refreshable PDB

 

How to create a Refreshable PDB

Syntax to create an automatic refreshable PDB:

Create Pluggable Database CRM_Test from CRM_Prod@db_link refresh mode every 720 minutes; -- (12H)

 

Syntax to create a manual PDB refresh:

Create Pluggable Database CRM_Test from CRM_Prod@db_link refresh mode manual;

 

After the clone the refreshable PDB should then be opened in read-only:

Alter Pluggable Database CRM_Test read only;

 

How to invoke a manual PDB refresh:

Alter Pluggable Database CRM_Test refresh;
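
To review the refresh settings and the last refresh of the PDB, a query like the following can be used (a minimal sketch; the exact columns depend on the 12.2 patch level):

SQL> SELECT pdb_name, refresh_mode, refresh_interval, last_refresh_scn
     FROM   dba_pdbs
     WHERE  pdb_name = 'CRM_TEST';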

 

Creation of the snapshot databases:

Create Pluggable Database CRM_TEST_Snap01 FROM CRM_Test
FILE_NAME_CONVERT = ('/u03/oradata/CDB122/CRM_Test/','/u03/oradata/CDB122/CRM_Test_Snap01/')
SNAPSHOT COPY;

 

 


 

 

New Resource Manager options in Oracle 12c R2 Multitenant

Since its introduction, Oracle Resource Manager has been a key tool to guarantee performance stability and predictability, regulating access to the database resources.

In the era of extreme consolidation, the role of Oracle Resource Manager is more important than ever, and Oracle Multitenant has extended its functionalities.

In Oracle 12.1, Resource Manager was capable of handling:

  • Parallel server processes
  • CPU

In version 12.2, the following options have been introduced:

  • I/O rate limits for PDBs (not supported in Exadata environments, where the I/O Resource Manager exists)
  • Memory  Management
  • Performance Profiles
  • Resource Monitoring

 

How to limit a PDB I/O consumption

--The following parameters are set at the PDB level, while connected to the target PDB.

--Limiting the number of I/O operations per second.
Alter System set MAX_IOPS = 9500 scope = BOTH;

--Limiting the amount of MB per second.
Alter System set MAX_MBPS = 70 scope = BOTH;

 

Memory  Management

Optionally, starting from Oracle 12.2, it is possible to limit the PDB memory utilization using the following parameters (a minimal example follows the list):

  • SGA_Target – Max SGA size for the PDB.
  • SGA_Min_Size – Amount of guaranteed SGA size for the PDB.
  • DB_Cache_Size – Amount of guaranteed Buffer Cache size for the PDB.
  • DB_Shared_Pool_Size – Amount of guaranteed Shared Pool size for the PDB.
  • PGA_Aggregate_Limit – Max PGA size for the PDB.
  • PGA_Aggregate_Target – Target PGA size for the PDB.
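
A minimal sketch of how these limits could be applied, assuming the CDB is configured to support PDB memory parameters (the PDB name and sizes are hypothetical):

SQL> alter session set container=ERP_PDB01;
SQL> alter system set sga_target=8G scope=both;
SQL> alter system set db_cache_size=4G scope=both;
SQL> alter system set pga_aggregate_limit=4G scope=both;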

 

Performance Profiles

Profiles allow creating standard and common Resource Profiles, based for example on the different SLA levels (platinum, gold, silver), to grant directly to the PDBs.

-- Assumes the CDB resource plan PDB_Silver already exists.
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PROFILE_DIRECTIVE(
    plan                  => 'PDB_Silver',
    profile               => 'silver',
    shares                => 2,
    utilization_limit     => 30,
    parallel_server_limit => 20,
    memory_limit          => 30);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

 

Resource Monitoring

The new system view V$RSRCPDBMETRIC can be used to monitor and charge back the I/O and memory consumption.

SELECT r.CON_ID, p.PDB_NAME, r.IOPS,r.SGA_BYTES, r.SHARED_POOL_BYTES
FROM V$RSRCPDBMETRIC r, CDB_PDBS p WHERE r.CON_ID = p.CON_ID;

 


 

New Oracle version (12.2.0.1) old BUG!

 

In June 2016 I posted the following bug: Bug on Oracle 12c Multitenant & PDB Clone as Snapshot Copy, promising to post an update once version 12cR2 became available, because in the service request, originally opened against version 12.1.0.2, Oracle stated that the bug would be fixed in 12cR2.

I was so impatient that, just a few hours after the general availability of Oracle Database 12c Release 2, I created a new cluster and tested the resolution.

 

For the record, the resolution of this bug is important for one of my clients, where we have implemented snapshot PDBs in the application development lifecycle.

 

So let’s see if the bug has been fixed!

SQL*Plus: Release 12.2.0.1.0 Production on Wed Mar 1 21:06:54 2017

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production


SQL> CREATE PLUGGABLE DATABASE PDBACFS1_SNAP1 from PDBACFS1 SNAPSHOT COPY;

Pluggable database created.

SQL> ALTER PLUGGABLE DATABASE PDBACFS1_SNAP1 OPEN instances=all;

Pluggable database altered.

SQL> select CON_ID, NAME, OPEN_MODE, SNAPSHOT_PARENT_CON_ID from v$pdbs where NAME in ('PDBACFS1','PDBACFS1_SNAP1');

 CON_ID     NAME               OPEN_MODE  SNAPSHOT_PARENT_CON_ID
---------- ------------------- ---------- ----------------------
 5          PDBACFS1            READ WRITE
 6          PDBACFS1_SNAP1      READ WRITE               <-- This should be 5 but is NULL

2 rows selected.

 

From a certain point of view progress has been made: in version 12.1.0.2 the column SNAPSHOT_PARENT_CON_ID was always zero (0); now it is null!

I’m sorry for my customer; I’ll keep testing and hoping …

 

 

Oracle 12c – Unified Audit Trail

 

Oracle 12c introduces the “Unified Audit Trail”, a faster, easier-to-access and more secure audit system.

It optionally allows staging the audit records in a dedicated memory buffer (UNIFIED_AUDIT_SGA_QUEUE_SIZE), where they are temporarily grouped before being written to the audit table via batch transactions.

This new audit configuration substantially reduces the transactional overhead generated by the auditing.
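
As an illustrative example only (the value is arbitrary, and the parameter is static, so it requires an spfile change and an instance restart), the staging buffer can be sized as follows:

SQL> alter system set unified_audit_sga_queue_size=5242880 scope=spfile;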

 

Important improvements have also been made to simplify the utilization:

– One single audit trail for all audit data: the UNIFIED_AUDIT_TRAIL view replaces SYS.AUD$/DBA_AUDIT_TRAIL, SYS.FGA_LOG$/DBA_FGA_AUDIT_TRAIL, DVSYS.AUDIT_TRAIL$, V$XML_AUDIT_TRAIL and the OS audit files in adump (a sample query is shown after this list).

– All audit data stored in Oracle secure files.

– Role segregation between:

  • DBA, responsible for maintaining free space and backups.
  • AUDIT_ADMIN, responsible for managing the audit policies and defining the data retention.
  • AUDIT_VIEWER, in charge of the reports.
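
For illustration, a simple query against the unified view could look like the following (the columns and the one-day filter are just an example):

SQL> SELECT event_timestamp, dbusername, action_name, object_schema, object_name
     FROM   unified_audit_trail
     WHERE  event_timestamp > SYSDATE - 1
     ORDER BY event_timestamp;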

 

Unified Audit Trail also introduces new security options that are important to mention:

– It is activated with a kernel relink and it doesn’t require additional steps or parameters.

– The new AUDSYS schema has read-only protection for all users. Even a user with the DBA privilege can’t manipulate the audit records!

 

How to activate Unified Audit Trail

--Stop all Oracle processes: databases, listener and Enterprise Manager agent.

--Relink Oracle with the uniaud_on option.
$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk uniaud_on ioracle

--Restart all Oracle processes: databases, listener and Enterprise Manager agent.

--Check if Unified Audit Trail is active
SQL> select * from v$option where PARAMETER='Unified Auditing';

INST_ID PARAMETER                       VALUE              CON_ID
------- ------------------------------ ------------------- ----------
 1      Unified Auditing                TRUE                0

 

Optionally, but strongly recommended, it is possible to relocate the AUDIT segments from the SYSAUX tablespace to a dedicated one.

SQL> Create tablespace TBS_AUDIT datafile SIZE 2G AUTOEXTEND ON;

BEGIN
DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
 audit_trail_type => dbms_audit_mgmt.audit_trail_unified,
 audit_trail_location_value => 'TBS_AUDIT');
END;
/


SQL> select OWNER, SEGMENT_NAME, PARTITION_NAME, SEGMENT_TYPE, BYTES from dba_segments where TABLESPACE_NAME='TBS_AUDIT';

OWNER           SEGMENT_NAME                    PARTITION_NAME                SEGMENT_TYPE       BYTES
--------------- ------------------------------ ------------------------------ ------------------ ----------
AUDSYS           CLI_SWP$1b2a49f1$1$1           HIGH_PART                      TABLE PARTITION   65536
AUDSYS           CLI_SWP$1b2a49f1$1$1           PART_2                         TABLE PARTITION   65536
AUDSYS           CLI_LOB$1b2a49f1$1$1           HIGH_PART                      INDEX PARTITION   65536
AUDSYS           CLI_TIME$1b2a49f1$1$1          HIGH_PART                      INDEX PARTITION   65536
AUDSYS           CLI_LOB$1b2a49f1$1$1           PART_2                         INDEX PARTITION   65536
AUDSYS           CLI_TIME$1b2a49f1$1$1          PART_2                         INDEX PARTITION   65536
AUDSYS           CLI_SCN$1b2a49f1$1$1           PART_2                         INDEX PARTITION   65536
AUDSYS           SYS_IL0000091784C00014$$       SYS_IL_P241                    INDEX PARTITION   65536
AUDSYS           CLI_SCN$1b2a49f1$1$1           HIGH_PART                      INDEX PARTITION   65536
AUDSYS           SYS_IL0000091784C00014$$       SYS_IL_P246                    INDEX PARTITION   65536
AUDSYS           SYS_LOB0000091784C00014$$      SYS_LOB_P244                   LOB PARTITION     131072
AUDSYS           SYS_LOB0000091784C00014$$      SYS_LOB_P239                   LOB PARTITION     131072

12 rows selected.

 

The introduction of Audit Policies has brought flexibility and granularity to what can be audited; here is an example using the Oracle SYS_CONTEXT function.

CREATE AUDIT POLICY hr_employees
 PRIVILEGES CREATE TABLE
 ACTIONS UPDATE ON HR.EMPLOYEES
 WHEN 'SYS_CONTEXT(''USERENV'', ''SESSION_USER'') != ''HR_ADMIN'''
 EVALUATE PER STATEMENT;

AUDIT POLICY hr_employees;
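
To verify that the policy is enabled and producing records, the catalog views can be queried; a minimal sketch (the column selection is limited to commonly available columns):

-- Which unified audit policies are enabled, and for whom
SQL> SELECT policy_name, user_name FROM audit_unified_enabled_policies;

-- Recent audit records generated by the policy
SQL> SELECT event_timestamp, dbusername, action_name, object_name
     FROM   unified_audit_trail
     WHERE  unified_audit_policies = 'HR_EMPLOYEES'
     ORDER  BY event_timestamp DESC;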

 

 

 

Oracle DB stored on ASM vs ACFS

Nowadays a new Oracle database environment with Grid Infrastructure has three main storage options:

  1. Third party clustered file system
  2. ASM Disk Groups
  3. ACFS File System

The first option was not in scope; this post compares the results of the tests between ASM and ACFS, highlighting when to use one or the other to store 12c non-CDB or CDB databases.

The tests, conducted on different environments using Oracle version 12.1.0.2 July PSU, have shown results that contrast with what Oracle promotes for the Oracle Database Appliance (ODA) in the following paper: “Frequently Asked Questions Storing Database Files in ACFS on Oracle Database Appliance”.

 

Outcome of the tests

ASM remains the preferred option to achieve the best I/O performance, while ACFS introduces interesting features like DB snapshots to quickly and space-efficiently provision new databases.

The performance gap between the two solutions is not negligible, as reported below by the AWR – Top Timed Events sections of two PDBs sharing the same infrastructure and executing the same workload, but using ASM and ACFS storage respectively:

  • PDBASM: Pluggable Database stored on an ASM Disk Group
  • PDBACFS: Pluggable Database stored on an ACFS File System

 

 

PDBASM AWR – TOP Timed Events and Other Stats

[Image: AWR Top Timed Events – PDBASM]

[Image: AWR Foreground Statistics – PDBASM]

 

 

PDBACFS AWR – TOP Timed Events and Other Stats

[Image: AWR Top Timed Events – PDBACFS]

[Image: AWR Foreground Statistics – PDBACFS]

 

Due to the different characteristics and results of ASM and ACFS, it is not possible to give a generic recommendation. Case by case, the choice should be driven by business needs such as maximum performance versus fast and space-efficient database cloning.
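
For reference, the ACFS snapshot capability mentioned above is exposed through the acfsutil command line; a minimal sketch, assuming the datafiles reside on an ACFS file system mounted under /u02/oradata (the path and snapshot name are illustrative assumptions):

# Create a writable snapshot of the ACFS file system hosting the datafiles
$ acfsutil snap create -w db_clone_snap /u02/oradata

# List the existing snapshots
$ acfsutil snap info /u02/oradata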

 

 

 

 

New to Oracle Multitenant?

Multitenant is the biggest architectural change of Oracle 12c and the enabler of many new database options in the years to come. Therefore I have decided to write, over time, a few blog posts with basic examples of what should and should not be done in a multitenant database environment.

 

Rule #1   – What should not be done

If you are a CDB DBA, always pay attention to which container you are connected to, and remember that application data should be stored in an application PDB only!

Unfortunately this golden rule is not enforced by the RDBMS; it is left to our responsibility, as shown in the example below:

oracle@lxoel7n01:~/ [CDB_TEST] sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Sep 21 18:28:23 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

CDB$ROOT SQL>
CDB$ROOT SQL> show user
USER is "SYS"
CDB$ROOT SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

 

Once connected to the ROOT container, let's see if I can mistakenly create an application table:

CDB$ROOT SQL> CREATE TABLE EMP_1
(emp_id NUMBER,
emp_name VARCHAR2(25),
start_date DATE,
emp_status VARCHAR2(10) DEFAULT 'ACTIVE',
resume CLOB);

Table created.

CDB$ROOT SQL> desc emp_1
 Name                                Null?    Type
 ----------------------------------- -------- ----------------------------
 EMP_ID                                        NUMBER
 EMP_NAME                                      VARCHAR2(25)
 START_DATE                                    DATE
 EMP_STATUS                                    VARCHAR2(10)
 RESUME                                        CLOB


CDB$ROOT SQL> insert into emp_1 values (1, 'Emiliano', sysdate, 'active', ' ');

1 row created.

CDB$ROOT SQL> commit;

Commit complete.


CDB$ROOT SQL> select * from emp_1;

EMP_ID     EMP_NAME                  START_DAT EMP_STATUS RESUME
---------- ------------------------- --------- ---------- ----------------
 1          Emiliano                  21-SEP-16 active

CDB$ROOT SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

The answer is “YES” and the consequences can be devastating…

 

Rule #2   – Overview of Local and Common Entities

Non-schema entities can be created as local or common. Local entities exist only in one PDB, similar to a non-CDB architecture, while Common entities exist in every current and future container.

List of possible Local / Common entities in a Multitenant database:

  • Users
  • Roles
  • Profiles
  • Audit Policies

All Local entities are created from the local PDB and all Common entities are created from the CDB$ROOT container.

User-defined common Users, Roles and Profiles require a standard prefix, defined by the spfile parameter COMMON_USER_PREFIX:

SQL> show parameter common_user_prefix

NAME                              TYPE        VALUE
--------------------------------- ----------- -----------------
common_user_prefix                string      C##

 

Example of Common User creation:

SQL> CREATE USER C##CDB_DBA1 IDENTIFIED BY PWD CONTAINER=ALL;

User created.


SQL> SELECT con_id, username, user_id, common
  2  FROM cdb_users where username='C##CDB_DBA1' ORDER BY con_id;

    CON_ID USERNAME                USER_ID COMMON
---------- -------------------- ---------- ------
         1 C##CDB_DBA1               102    YES
         2 C##CDB_DBA1               101    YES
         3 C##CDB_DBA1               107    YES
         4 C##CDB_DBA1               105    YES
         5 C##CDB_DBA1               109    YES
         ...
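
The same principle applies to Roles and Profiles; below is a minimal sketch of a common role (the role name and the granted privilege are illustrative assumptions):

SQL> CREATE ROLE C##APP_ADMIN CONTAINER=ALL;

Role created.

SQL> GRANT CREATE SESSION TO C##APP_ADMIN CONTAINER=ALL;

Grant succeeded.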

 

Example of Local user creation:

SQL> show con_name

CON_NAME
------------------------------
MYPDB

SQL> CREATE USER application IDENTIFIED BY pwd CONTAINER=CURRENT;

User created.

If we try to create a Local user from the CDB$ROOT container, the following error occurs: ORA-65049: creation of local user or role is not allowed in CDB$ROOT

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> CREATE USER application IDENTIFIED BY pwd   CONTAINER=CURRENT;

CREATE USER application IDENTIFIED BY pwd   CONTAINER=CURRENT

                                      *

ERROR at line 1:
ORA-65049: creation of local user or role is not allowed in CDB$ROOT

 

 

Rule #3  – Application should connect through user-defined database services only

We have been avoiding the creation of user-defined database services for many years, sometimes even for RAC databases. But in a Multitenant or Singletenant architecture the importance of user-defined database services is even greater. For each CDB and PDB Oracle still automatically creates a default service, but as in the past the default services should never be exposed to the applications.

 

To create a user-defined database service in a stand-alone environment, use the DBMS_SERVICE package while connected to the corresponding PDB:

BEGIN
 DBMS_SERVICE.CREATE_SERVICE(
     SERVICE_NAME     => 'mypdb_app.emilianofusaglia.net',
     NETWORK_NAME     => 'mypdb_app.emilianofusaglia.net',
     FAILOVER_METHOD  =>
     ...
      );
 DBMS_SERVICE.START_SERVICE('mypdb_app.emilianofusaglia.net');
END;
/

The database services will not start automatically after opening a PDB! Create a database trigger for this purpose, as in the sketch below.
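
A minimal sketch of such a trigger, created inside the PDB and reusing the service name from the example above (treat it as a starting point, not a definitive implementation):

CREATE OR REPLACE TRIGGER srv_on_pdb_open
AFTER STARTUP ON DATABASE
BEGIN
  -- Fires when the PDB is opened and (re)starts the user-defined service
  DBMS_SERVICE.START_SERVICE('mypdb_app.emilianofusaglia.net');
END;
/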

 

To create a user-defined database service in a clustered environment, use the srvctl utility from the corresponding RDBMS ORACLE_HOME:

oracle@oel7n01:~/ [EFU1] srvctl add service -db EFU \
> -pdb MYPDB -service mypdb_app.emilianofusaglia.net \
> -failovertype SELECT -failovermethod BASIC \
> -failoverdelay 2 -failoverretry 90

 

List all CDB database services ordered by container ID:

SQL> SELECT con_id, name, pdb FROM v$services ORDER BY con_id;

    CON_ID NAME                                     PDB
---------- --------------------------------------- -----------------

         1 EFUXDB                                   CDB$ROOT   <-- CDB Default Service 
         1 SYS$BACKGROUND                           CDB$ROOT   <-- CDB Default Service 
         1 SYS$USERS                                CDB$ROOT   <-- CDB Default Service 
         1 EFU.emilianofusaglia.net                 CDB$ROOT   <-- CDB Default Service 
         1 EFU_ADMIN.emilianofusaglia.net           CDB$ROOT   <-- CDB User-defined Service  
         3 mypdb.emilianofusaglia.net               MYPDB      <-- PDB Default Service 
         3 mypdb_app.emilianofusaglia.net           MYPDB      <-- PDB User-defined Service  

7 rows selected.

 

EZCONNECT to a PDB using the user-defined service:

sqlplus <username>/<password>@<host_name>:<local-listener-port>/<service-name>
sqlplus application/pwd@oel7c-scan:1522/mypdb_app.emilianofusaglia.net
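
Alternatively, the same user-defined service can be referenced through a tnsnames.ora alias; a minimal sketch (host, port and alias name mirror the illustrative values above):

MYPDB_APP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oel7c-scan)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = mypdb_app.emilianofusaglia.net)
    )
  )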

 

 

Rule #4  –  Backup/Recovery strategy in Multitenant

As a database administrator, one of the first responsibilities to fulfil is the Backup/Recovery strategy. The migration to a multitenant database, due to the high level of consolidation density, requires reviewing the existing procedures. A few infrastructure operations, like creating a Data Guard configuration or executing a backup, have been shifted from per-database to per-container, consolidating the number of tasks.

RMAN in 12c covers all CDB and PDB backup/restore combinations, even though the best practice suggests running the daily backup at CDB level; when a restore is needed, the granularity goes down to a single block of one PDB. Below are a few basic backup/restore operations in a Multitenant environment.

 

Backup a full CDB:

RMAN> connect target /;
RMAN> backup database plus archivelog;

 

Backup a list of PDBs:

RMAN> connect target /;
RMAN> backup pluggable database mypdb, hrpdb plus archivelog;

 

Backup one PDB directly connecting to it:

RMAN> connect target sys/manager@mypdb.emilianofusaglia.net;
RMAN> backup incremental level 0 database;

 

Backup a PDB tablespace:

RMAN> connect target /;
RMAN> backup tablespace mypdb:system;

 

Generate RMAN report:

RMAN> report need backup pluggable database mypdb;

 

Complete PDB Restore

RMAN> connect target /;
RMAN> alter pluggable database mypdb close;
RMAN> restore pluggable database mypdb;
RMAN> recover pluggable database mypdb;
RMAN> alter pluggable database mypdb open;
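
To illustrate the block-level granularity mentioned above, a corrupt block belonging to a PDB datafile can be repaired without touching the rest of the container; a minimal sketch (the datafile and block numbers are illustrative assumptions):

RMAN> connect target /;
RMAN> recover datafile 25 block 4217;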

 

 

Rule #5  –  Before moving to Multitenant

Oracle Multitenant has introduced many architectural changes that force the DBA to evolve how databases are administered. My last golden rule suggests thoroughly studying the multitenant/singletenant architecture before starting any implementation.

During my experiences implementing multitenant/singletenant architectures, I found strong dependencies on the following database areas:

  • Provisioning/Decommissioning
  • Patching and Upgrade
  • Backup/recovery
  • Capacity Planning and Management
  • Tuning
  • Separation of duties between CDB and PDB

 

 

Oracle Datapatch on Multitenant environment

The example below shows how to patch a Pluggable Database (PDB) migrated to a Container Database (CDB) with a different patch level.

 

List the PDB violations

col message for a150
col action for a60
select * FROM pdb_plug_in_violations WHERE STATUS <>'RESOLVED';

TIME                      NAME    CAUSE      TYPE   ERROR_NUMBER LINE
------------------------- ------- ---------- ------ ------------ -----
15-07-16 11:33:26.022539  CUSPPO  SQL Patch  ERROR  0            1

MESSAGE : PSU bundle patch 160419 (Database Patch Set Update : 12.1.0.2.160419 (22291127)): Installed in the CDB but not in the PDB.
STATUS  : ERROR
ACTION  : Call datapatch to install in the PDB or the CDB


1 row selected.

 

Datapatch help

[oracle@zlo6ka1n1 OPatch]$ ./datapatch -h
SQL Patching tool version 12.1.0.2.0 on Wed Jun 15 10:53:36 2016
Copyright (c) 2015, Oracle. All rights reserved.

sqlpatch usage:
All arguments are optional, if there are no arguments sqlpatch
will automatically determine which SQL scripts need to be run in
order to complete the installation of any SQL patches.

Optional arguments:
-db <db name>
 Use the specified database rather than $ORACLE_SID
-bundle_series <bundle_series>
 Specify if the patch is a bundle patch
 Should also be accompanied by -force option
 if -bundle_series option is specified,only 1 patch will
 be considered by the -force command
-apply <patch1,patch2,...,patchn>
 Only consider the specified patch list for apply operations
-rollback <patch1,patch2,...,patchn>
 Only consider the specified patch list for rollback operations
-upgrade_mode_only
 Only consider patches that require upgrade mode
-force
 Run the apply and/or rollback scripts even if not necessary
 per the SQL registry
-pdbs <pdb1,pdb2,...,pdbn>
 Only consider the specified list of PDBs for patching. All
 other PDBs will not be patched
-prereq
 Run prerequisite checks only, do not actually run any scripts
-oh <oracle_home value>
 Use the specified directory to check for installed patches
-verbose
 Output additional information used for debugging
-help
 Output usage information and exit
-version
 Output build information and exit

SQL Patching tool complete on Wed Jul 15 10:53:36 2016

 

Apply the patch to the PDB

[oracle@zlo6ka1n0 OPatch]$ ./datapatch -verbose
SQL Patching tool version 12.1.0.2.0 on Wed Jul 15 11:36:19 2016
Copyright (c) 2015, Oracle. All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_59195_2016_07_15_11_36_19/sqlpatch_invocation.log

Connecting to database...OK
Note: Datapatch will only apply or rollback SQL fixes for PDBs
 that are in an open state, no patches will be applied to closed PDBs.
 Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
 (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Bundle series PSU:
 ID 160419 in the binary registry and ID 160419 in PDB CDB$ROOT, ID 160419 in PDB PDB$SEED

Adding patches to installation queue and performing prereq checks...
Installation queue:
 For the following PDBs: CDB$ROOT PDB$SEED
 Nothing to roll back
 Nothing to apply
 For the following PDBs: CUSPPO
 Nothing to roll back
 The following patches will be applied:
 22291127 (Database Patch Set Update : 12.1.0.2.160419 (22291127))

Installing patches...
Patch installation complete. Total patches installed: 1

Validating logfiles...
Patch 22291127 apply (pdb CUSPPO): SUCCESS
 logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/22291127/19694308/22291127_apply_CLGBTE_CUSPPO_2016Jul15_11_37_25.log (no errors)
SQL Patching tool complete on Wed Jul 15 11:37:36 2016
[oracle@zlo6ka1n0 OPatch]$
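
Finally, the result can be double-checked from inside the PDB by querying the SQL patch registry; a minimal sketch (output omitted):

SQL> alter session set container=CUSPPO;
SQL> select patch_id, action, status, description
  2  from dba_registry_sqlpatch
  3  order by action_time;

A further query of PDB_PLUG_IN_VIOLATIONS from CDB$ROOT should now report the SQL Patch violation as RESOLVED.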