My OOW18 Summary

 

For those who are interested, here are my major takeaways from OOW18.

 

As we all know, for a few years now the HOTTEST topic advertised at OOW has been “Cloud Computing”, but this time Oracle Cloud was no longer alone!

In fact, the focus was divided between the new Oracle OCI Cloud, also named by Larry the Second Generation of Cloud, and the Autonomous Database.

 

OCI Second Gen of Cloud

Here is a summary of the major advantages compared to the previous version:

– Security, guaranteed by robots that scan the network for any malicious attack.

– The cutting-edge virtual network, which brings speeds of up to 50 Gb/s and extreme flexibility.

– Bare Metal Infrastructure based on Exadata Machines.

– Aggressive pricing, compared to the competitors.

 

Autonomous Database

The Autonomous Database option is now available for OLTP and DWH databases and includes new capabilities like automatic index creation and automatic conversion of tables to columnar format. In version 19 it will manage online memory increases and additional tuning options.
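To give an idea of what this automation looks like from the DBA side, here is a minimal sketch of enabling automatic indexing through the DBMS_AUTO_INDEX package, based on what has been announced for 19c (syntax to be confirmed once the release is out):

SQL> -- Enable automatic indexing: candidate indexes are created, validated and implemented automatically
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');
SQL> -- Report the automatic indexing activity of the last day
SQL> SELECT DBMS_AUTO_INDEX.REPORT_LAST_ACTIVITY() FROM DUAL;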

As announced during Larry’s keynote, the Autonomous Database will also be available with the Cloud@Customer option (on Exadata only), and it will no longer require human labor (DBA and Sys Admin intervention), because it is Self-Provisioning, Self-Driving, Self-Tuning and Self-Repairing.

For non-technical people it looks like magic, but it is only a few steps away from what we already use in a standard Oracle 12c database. In fact, Autonomous Database leverages a bunch of database advisors and tuning options, now orchestrated by Artificial Intelligence and Machine Learning software, in order to provide data-driven predictions and decisions.

Over the next few years, Autonomous Database will be enriched with several new options, improving the quality of life of many DBAs, who will be relieved of the majority of the tedious and recurring tasks, leaving the highest added-value tasks under their own responsibility.

Last but not least, the Autonomous Database runs on a very high-end configuration (Oracle guarantees 99.995% availability), which is quite expensive to acquire due to the list of mandatory requirements: Exadata, RAC, Active Data Guard, Multitenant, Tuning Pack, Diagnostic Pack, etc.

 

Exadata Machine

Several interesting features are coming next year with the introduction of Intel Optane DC Persistent Memory for even faster OLTP.

This new type of memory will be installed on the Storage Cells and used as an accelerator in front of the Flash memory.

The database nodes will access the Persistent Memory via RDMA, with up to 20x lower access latency.

Oracle is developing more and more Remote Direct Memory Access (RDMA) instructions for Cache Fusion and Storage Cell operations, in order to offload the database nodes and increase overall performance.

Stay tuned on the Exadata Machine, because the next generation will also include a BIG architectural change…

 

Oracle Virtual Machine (OVM)

One curiosity collected directly at the Linux Virtualization booth: even though the next generation of the hypervisor will be based on KVM, Oracle will keep calling it OVM, and of course the current XEN-based OVM products (OVS, OVM) will still be in use by many companies.

How could the customers possibly get confused?!?

 

And with this I am done, although there would be much more to write.

 


 


RMAN on Multitenant DB – Awareness of the Backup Optimization Behavior

Recovery Manager (RMAN) is one of the most popular Oracle Database components, with unique Backup/Recovery features. It is fully integrated with the Multitenant Architecture, allowing you to implement a Manage-Many-Databases-as-One strategy.

RMAN allows you to customize and save several database parameters used during backup and recovery operations. Such parameters define, for example, the backup retention policy, the default device type, how many archived log copies should be stored, whether the backup sets should be compressed and/or encrypted, and so on…
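As a quick reminder of the syntax, here is how such parameters are typically set and reset (the values below are illustrative only):

RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 8 DAYS;
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
RMAN> CONFIGURE BACKUP OPTIMIZATION CLEAR;   # reverts the parameter to its default value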

 

Below is an example of the RMAN setup, highlighting the parameter CONFIGURE BACKUP OPTIMIZATION ON, discussed in the next sections.

RMAN> show all;

RMAN configuration parameters for database with db_unique_name CEFUPRD are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 8 DAYS;
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/BACKUP/Databases/CEFUPRD/%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/BACKUP/Databases/CEFUPRD/%d_%T_%U';
CONFIGURE MAXSETSIZE TO UNLIMITED;
CONFIGURE ENCRYPTION FOR DATABASE OFF;
CONFIGURE ENCRYPTION ALGORITHM 'AES128';
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;
CONFIGURE RMAN OUTPUT TO KEEP FOR 10 DAYS;
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DISK;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/fast_recovery_area/rcocefuprd/cefuprd/snapcf_cefuprd.f';

RMAN>

 

 

Effects of RMAN Backup Optimization ON/OFF

In a Multitenant environment it is more important than ever to understand the effects of the parameter CONFIGURE BACKUP OPTIMIZATION, which can be set to ON or OFF.

Behavior when set to ON

If RMAN determines that a file is identical and it has been backed up, then it is a candidate to be skipped. RMAN must do further checking to determine whether to skip the file, however, because both the retention policy and the backup duplexing feature are factors in the algorithm that determines whether RMAN has sufficient backups on the specified device type. (Definition from the Oracle Backup and Recovery User’s Guide.)

Behavior when set to OFF

The RMAN backup always includes all files, no matter whether they are identical and already backed up within the backup retention window.
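Note that even with optimization ON, the skipping logic can be overridden for a single run using the standard FORCE option:

RMAN> BACKUP DATABASE FORCE;   # backs up all files, ignoring backup optimization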

 

 

What happens when migrating from non-CDB to PDB?

Assume that we have just migrated a non-CDB database to PDB and that our pluggable database has 4 tablespaces, all open read/write. The container uses the same RMAN setup shown at the top of this page, with CONFIGURE BACKUP OPTIMIZATION ON.

Despite having a FULL database backup every night, only 1 backup every 8 days will be complete and consistent, because the RMAN backup optimization algorithm will detect that the SEED PDB datafiles are unchanged and will skip those files. Therefore, if we restore the CDB using the backup sets generated by one FULL database backup, with no access to the rest of the backup sets inside the retention window, there is a high probability that the restore will fail.
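To verify which backups of the seed datafiles exist inside the retention window (in this example, datafiles 2 and 4 belong to PDB$SEED, as the log below shows), the standard LIST command can be used:

RMAN> LIST BACKUP OF DATAFILE 2, 4;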

 

Below is an extract of the CDB backup log, which shows that the PDB$SEED datafiles have been skipped because they were already backed up once during the last 8 days.

RMAN> BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL = 0 DATABASE PLUS ARCHIVELOG NOT BACKED UP 1 TIMES; 

Starting backup at May 15 2018 00:35:07
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=9 instance=clgbprd1 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=168 instance=clgbprd1 device type=DISK
skipping archived logs of thread 1 from sequence 39516 to 39931; already backed up
skipping archived logs of thread 2 from sequence 34457 to 34749; already backed up
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=2 sequence=34774 RECID=148645 STAMP=976088413
input archived log thread=2 sequence=34775 RECID=148649 STAMP=976088467
input archived log thread=1 sequence=39944 RECID=148655 STAMP=976088552
input archived log thread=2 sequence=34776 RECID=148651 STAMP=976088509
input archived log thread=2 sequence=34777 RECID=148653 STAMP=976088551
input archived log thread=2 sequence=34778 RECID=148657 STAMP=976088700
input archived log thread=1 sequence=39945 RECID=148662 STAMP=976088937
input archived log thread=2 sequence=34779 RECID=148659 STAMP=976088838

...
Starting backup at May 15 2018 00:50:02
using channel ORA_DISK_1
using channel ORA_DISK_2
skipping datafile 2; already backed up 1 time(s)
skipping datafile 4; already backed up 1 time(s)
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
...

 

Using only the backup sets above to restore the CDB means that Oracle has to recreate the two skipped datafiles (number 2 and 4), applying the archived logs generated during the initial CDB provisioning.

Note that the full backup only includes archived logs starting from the following sequences:

  • Thread 1 – Sequence 39944
  • Thread 2 – Sequence 34774

But when Oracle initiates the Media Recovery, it complains because archived log Thread 1 – Sequence 1 is unavailable:

RMAN> run {
2> allocate auxiliary channel dsk1 type disk ;
3> allocate auxiliary channel dsk2 type disk ;
4> allocate auxiliary channel dsk3 type disk ;
5> allocate auxiliary channel dsk4 type disk ;
6> allocate auxiliary channel dsk5 type disk ;
7> allocate auxiliary channel dsk6 type disk ;
8> duplicate database to 'CEFUAUX' noopen backup location '/BACKUP/Databases/CEFUPRD/backup_20180515_only' nofilenamecheck;
9> }

allocated channel: dsk1
channel dsk1: SID=322 device type=DISK

allocated channel: dsk2
channel dsk2: SID=471 device type=DISK

allocated channel: dsk3
channel dsk3: SID=9 device type=DISK

allocated channel: dsk4
channel dsk4: SID=166 device type=DISK

allocated channel: dsk5
channel dsk5: SID=323 device type=DISK

allocated channel: dsk6
channel dsk6: SID=478 device type=DISK

Starting Duplicate Db at May 15 2018 09:29:15

....

contents of Memory Script:
{
 set until scn 2372623043;
 recover
 clone database
 delete archivelog
 ;
}
executing Memory Script

executing command: SET until clause

Starting recover at May 15 2018 11:24:39

starting media recovery

unable to find archived log
archived log thread=1 sequence=1
Oracle instance started

 

I hope this example helps to understand that when migrating from non-CDB to Multitenant, many administration tasks should be carefully reviewed due to the major architecture changes.

 


 

Grid Management DB filling up ASM disk space

Recently I discovered on Oracle Grid Infrastructure 12cR2 that the ASM disk group hosting the Management DB (-MGMTDB) was filling up the disk space very quickly.

This is due to a bug in the oclumon data purge procedure.
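For reference, the size of the Cluster Health Monitor repository can be checked, and if necessary resized, with the standard oclumon commands (the resize value below is in MB and purely illustrative):

$ oclumon manage -get repsize
$ oclumon manage -repos changerepossize 2048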

To fix the problem, two possibilities are available:

  1. Recreate the Management DB
  2. Manually truncate the tables that are not being purged and shrink the tablespace size

 

Both options are described below.

 

Option 1 – Recreate the Management DB

As the root user, stop and disable the ora.crf resource on each cluster node:

# /u01/app/12.2.0.1/grid/bin/crsctl stop res ora.crf -init
# /u01/app/12.2.0.1/grid/bin/crsctl modify res ora.crf -attr ENABLED=0 -init

 

As the grid user, from the node hosting the Management Database instance, run the following commands:

$ /u01/app/12.2.0.1/grid/bin/srvctl status mgmtdb
$ /u01/app/12.2.0.1/grid/bin/dbca -silent -deleteDatabase -sourceDB -MGMTDB
Connecting to database
4% complete
9% complete
14% complete
19% complete
23% complete
28% complete
47% complete
Updating network configuration files
48% complete
52% complete
Deleting instance and datafiles
76% complete
100% complete

 

How to recreate the MGMTDB:

$ /u01/app/12.2.0.1/grid/bin/dbca -silent -createDatabase -createAsContainerDatabase true \
  -templateName MGMTSeed_Database.dbc \
  -sid -MGMTDB \
  -gdbName _mgmtdb \
  -storageType ASM \
  -diskGroupName GIMR \
  -datafileJarLocation <GI HOME>/assistants/dbca/templates \
  -characterset AL32UTF8 \
  -autoGeneratePassword \
  -skipUserTemplateCheck

 

Create the pluggable GIMR database

$ /u01/app/12.2.0.1/grid/bin/mgmtca -local

 


 

Option 2 – Manually truncate the tables

 

As the root user, stop and disable the ora.crf resource on each cluster node:

# /u01/app/12.2.0.1/grid/bin/crsctl stop res ora.crf -init
# /u01/app/12.2.0.1/grid/bin/crsctl modify res ora.crf -attr ENABLED=0 -init

 

Connect to MGMTDB and identify the segments to truncate:

export ORACLE_SID=-MGMTDB
$ORACLE_HOME/bin/sqlplus / as sysdba
SQL> select pdb_name from dba_pdbs where pdb_name!='PDB$SEED';

SQL> alter session set container=GIMR_DSCREP_10;

Session altered.

SQL> col obj format a50
SQL> select owner||'.'||SEGMENT_NAME obj, BYTES from dba_segments where owner='CHM' order by 2 asc;

 

Most likely these two tables are much bigger than the rest:

  • CHM.CHMOS_PROCESS_INT_TBL
  • CHM.CHMOS_DEVICE_INT_TBL

Truncate the tables:

SQL> truncate table CHM.CHMOS_PROCESS_INT_TBL;
SQL> truncate table CHM.CHMOS_DEVICE_INT_TBL;

 

Then, if needed, shrink the tablespace and the job is done!
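Finally, remember to re-enable and restart the ora.crf resource on each cluster node, mirroring the commands used above to disable it:

# /u01/app/12.2.0.1/grid/bin/crsctl modify res ora.crf -attr ENABLED=1 -init
# /u01/app/12.2.0.1/grid/bin/crsctl start res ora.crf -init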

 


 

Exadata – How to Safely Erase All Data

When the time arrives to decommission an environment with sensitive data, we are frequently confronted with the problem of how to certify to our customer or management the erasure of all data and logs.

On the Exadata Machine, starting from software release 12.2.1.1.0, this problem has been elegantly solved by Oracle with the introduction of a new utility called Secure Eraser, which securely erases data on hard drives, flash devices and internal USBs, and resets the ILOM to factory default.

 

In earlier software versions, the Exadata Storage Software included CellCLI commands to securely erase the user data:

CellCLI> DROP GRIDDISK ALL FLASHDISK PREFIX=DATA, ERASE=7pass
CellCLI> DROP GRIDDISK ALL PREFIX=DATA, ERASE=3pass

and

CellCLI> DROP CELLDISK ALL FLASHDISK ERASE=7pass 
CellCLI> DROP CELL ERASE=3pass

Unfortunately, those commands only cover the user data stored on the Storage Cells, and none of them produces an official certificate with the summary of the actions taken to guarantee the wipe of the data. Secure Eraser, on the other hand, does all of this on all Compute and Storage nodes, sanitizing all types of content: user data, OS logs and network configurations.

 

Depending on the Exadata model, a subset of the following options to execute Secure Eraser is available:

  • Automatic Secure Eraser through PXE Boot
  • Interactive Secure Eraser through PXE Boot
  • Interactive Secure Eraser through Network Boot
  • Interactive Secure Eraser through External USB

 


 

Recently I used Secure Eraser through External USB on one Exadata X7-2 Machine and here are reported the different steps.

 

Copy the Secure Eraser Diagnostic image from MOS 2180963.1 to a USB stick.

 # dd if=image_diagnostics_18.1.4.0.0_LINUX.X64_180125.3-1.x86_64.usb of=/dev/sdb

 

Boot the server using the USB device with the Secure Eraser Diagnostic image

[Image: Exa_BootList.jpg – boot device selection]

 

After login, start the Secure Eraser process:

/usr/sbin/secureeraser --erase --all --flash_erasure_method=7pass --hdd_erasure_method=3pass --technician=Emiliano_Fusaglia --witness=Mario_Bros --output=/mnt/iso

 

 

At the end of the erase process, a Data Erasure Certificate similar to the one in the example below will be available in TXT, HTML and PDF formats.

[Image: Exa_SecureErase_Report – example of Data Erasure Certificate]


 

 

 

Exadata Storage Snapshots

This post describes how to implement Oracle Database Snapshot Technology on Exadata Machine.

Because the Exadata Storage Cell Smart Features, Storage Indexes, IORM and Network Resource Manager work at the ASM Volume Manager level only (and not on top of the ACFS Cluster File System), the implementation of the snapshot technology is different compared to any other non-Exadata environment.

For this purpose Oracle has developed a new type of ASM Disk Group called a SPARSE Disk Group. It uses ASM SPARSE Grid Disks based on Thin Provisioning to save the database snapshot copies and the associated metadata, and it supports both non-CDB and PDB snapshot copies.

The implementation requires the following minimal software versions:

  • Exadata Storage Software version 12.1.2.1.0.
  • Oracle Database version 12.1.0.2 with bundle patch 5.

One major restriction applies to Exadata Storage Snapshots compared to ACFS:
the source database must be a shared copy, open read-only, called the Test Master. The Test Master Database cannot be modified or deleted as long as the latest child snapshot is in use.
This restriction exists because the Exadata Snapshot technology uses “allocate on first write”, and not “copy on write” (as ACFS does), and the snapshot is per-database-datafile.
When a child snapshot issues a write, the write goes to a private copy of that block inside the snapshot, preserving the original block value, which can still be accessed by the other child snapshots of the same Test Master.

How to Implement Exadata Storage Snapshots in a PDB Environment

Check the celldisks for available free space to allocate to a new SPARSE Disk Group

[root@strgceladm01 ~]# cellcli -e list celldisk attributes name,freespace
 CD_00_strgceladm01 853.34375G
 CD_01_strgceladm01 853.34375G
 CD_02_strgceladm01 853.34375G
 CD_03_strgceladm01 853.34375G
 CD_04_strgceladm01 853.34375G
 CD_05_strgceladm01 853.34375G
 CD_06_strgceladm01 853.34375G
 CD_07_strgceladm01 853.34375G
 CD_08_strgceladm01 853.34375G
 CD_09_strgceladm01 853.34375G
 CD_10_strgceladm01 853.34375G
 CD_11_strgceladm01 853.34375G
 FD_00_strgceladm01 0
 FD_01_strgceladm01 0
 FD_02_strgceladm01 0
 FD_03_strgceladm01 0
[root@strgceladm01 ~]#


[root@strgceladm02 ~]# cellcli -e list celldisk attributes name,freespace
 CD_00_strgceladm02 853.34375G
 CD_01_strgceladm02 853.34375G
 CD_02_strgceladm02 853.34375G
 CD_03_strgceladm02 853.34375G
 CD_04_strgceladm02 853.34375G
 CD_05_strgceladm02 853.34375G
 CD_06_strgceladm02 853.34375G
 CD_07_strgceladm02 853.34375G
 CD_08_strgceladm02 853.34375G
 CD_09_strgceladm02 853.34375G
 CD_10_strgceladm02 853.34375G
 CD_11_strgceladm02 853.34375G
 FD_00_strgceladm02 0
 FD_01_strgceladm02 0
 FD_02_strgceladm02 0
 FD_03_strgceladm02 0
[root@strgceladm02 ~]#


[root@strgceladm03 ~]# cellcli -e list celldisk attributes name,freespace
 CD_00_strgceladm03 853.34375G
 CD_01_strgceladm03 853.34375G
 CD_02_strgceladm03 853.34375G
 CD_03_strgceladm03 853.34375G
 CD_04_strgceladm03 853.34375G
 CD_05_strgceladm03 853.34375G
 CD_06_strgceladm03 853.34375G
 CD_07_strgceladm03 853.34375G
 CD_08_strgceladm03 853.34375G
 CD_09_strgceladm03 853.34375G
 CD_10_strgceladm03 853.34375G
 CD_11_strgceladm03 853.34375G
 FD_00_strgceladm03 0
 FD_01_strgceladm03 0
 FD_02_strgceladm03 0
 FD_03_strgceladm03 0
[root@strgceladm03 ~]#

For each Storage Cell, create the SPARSE Grid Disks as described below:

[root@strgceladm01 ~]# cellcli -e CREATE GRIDDISK ALL PREFIX=SPARSE, sparse=true, SIZE=853.34375G
Cell disks were skipped because they had no freespace for grid disks: FD_00_strgceladm01, FD_01_strgceladm01, FD_02_strgceladm01, FD_03_strgceladm01.
GridDisk SPARSE_CD_00_strgceladm01 successfully created
GridDisk SPARSE_CD_01_strgceladm01 successfully created
GridDisk SPARSE_CD_02_strgceladm01 successfully created
GridDisk SPARSE_CD_03_strgceladm01 successfully created
GridDisk SPARSE_CD_04_strgceladm01 successfully created
GridDisk SPARSE_CD_05_strgceladm01 successfully created
GridDisk SPARSE_CD_06_strgceladm01 successfully created
GridDisk SPARSE_CD_07_strgceladm01 successfully created
GridDisk SPARSE_CD_08_strgceladm01 successfully created
GridDisk SPARSE_CD_09_strgceladm01 successfully created
GridDisk SPARSE_CD_10_strgceladm01 successfully created
GridDisk SPARSE_CD_11_strgceladm01 successfully created
[root@strgceladm01 ~]#

For each Storage Cell, list all Grid Disks:

[root@strgceladm01 ~]# cellcli -e list griddisk attributes name,size
 DATAC1_CD_00_strgceladm01 6.294586181640625T
 DATAC1_CD_01_strgceladm01 6.294586181640625T
 DATAC1_CD_02_strgceladm01 6.294586181640625T
 DATAC1_CD_03_strgceladm01 6.294586181640625T
 DATAC1_CD_04_strgceladm01 6.294586181640625T
 DATAC1_CD_05_strgceladm01 6.294586181640625T
 DATAC1_CD_06_strgceladm01 6.294586181640625T
 DATAC1_CD_07_strgceladm01 6.294586181640625T
 DATAC1_CD_08_strgceladm01 6.294586181640625T
 DATAC1_CD_09_strgceladm01 6.294586181640625T
 DATAC1_CD_10_strgceladm01 6.294586181640625T
 DATAC1_CD_11_strgceladm01 6.294586181640625T
 FGRID_FD_00_strgceladm01 2.0717315673828125T
 FGRID_FD_01_strgceladm01 2.0717315673828125T
 FGRID_FD_02_strgceladm01 2.0717315673828125T
 FGRID_FD_03_strgceladm01 2.0717315673828125T
 RECOC1_CD_00_strgceladm01 1.78143310546875T
 RECOC1_CD_01_strgceladm01 1.78143310546875T
 RECOC1_CD_02_strgceladm01 1.78143310546875T
 RECOC1_CD_03_strgceladm01 1.78143310546875T
 RECOC1_CD_04_strgceladm01 1.78143310546875T
 RECOC1_CD_05_strgceladm01 1.78143310546875T
 RECOC1_CD_06_strgceladm01 1.78143310546875T
 RECOC1_CD_07_strgceladm01 1.78143310546875T
 RECOC1_CD_08_strgceladm01 1.78143310546875T
 RECOC1_CD_09_strgceladm01 1.78143310546875T
 RECOC1_CD_10_strgceladm01 1.78143310546875T
 RECOC1_CD_11_strgceladm01 1.78143310546875T
 SPARSE_CD_00_strgceladm01 853.34375G
 SPARSE_CD_01_strgceladm01 853.34375G
 SPARSE_CD_02_strgceladm01 853.34375G
 SPARSE_CD_03_strgceladm01 853.34375G
 SPARSE_CD_04_strgceladm01 853.34375G
 SPARSE_CD_05_strgceladm01 853.34375G
 SPARSE_CD_06_strgceladm01 853.34375G
 SPARSE_CD_07_strgceladm01 853.34375G
 SPARSE_CD_08_strgceladm01 853.34375G
 SPARSE_CD_09_strgceladm01 853.34375G
 SPARSE_CD_10_strgceladm01 853.34375G
 SPARSE_CD_11_strgceladm01 853.34375G
[root@strgceladm01 ~]#

From an ASM instance, create a SPARSE Disk Group:

SQL> CREATE DISKGROUP SPARSEC1 EXTERNAL REDUNDANCY DISK 'o/*/SPARSE_CD_*'
ATTRIBUTE
'compatible.asm' = '12.2.0.1',
'compatible.rdbms' = '12.2.0.1',
'cell.smart_scan_capable'='TRUE',
'cell.sparse_dg' = 'allsparse',
'AU_SIZE' = '4M';

Diskgroup created.

Set the following ASM attribute on the Disk Group hosting the Test Master Database:

ALTER DISKGROUP DATAC1 SET ATTRIBUTE 'access_control.enabled' = 'true';

Grant access to the OS RDBMS user used to access the Disk Group:

ALTER DISKGROUP DATAC1 ADD USER 'oracle';

From an ASM instance, set ownership permissions for every file that belongs solely to the PDB being snapshot cloned, as per the example below:

alter diskgroup DATAC1 set ownership owner='oracle' for file '+DATAC1/CDBT/<xxxxxxxxxxxxxxxxxxx>/DATAFILE/system.xxx.xxxxxxx';
alter diskgroup DATAC1 set ownership owner='oracle' for file '+DATAC1/CDBT/<xxxxxxxxxxxxxxxxxxx>/DATAFILE/sysaux.xxx.xxxxxxx';
alter diskgroup DATAC1 set ownership owner='oracle' for file '+DATAC1/CDBT/<xxxxxxxxxxxxxxxxxxx>/DATAFILE/users.xxx.xxxxxxx';
...
..

Restart the Test Master PDB in read-only mode:

alter pluggable database PDBTESTMASTER close immediate instances=all;
alter pluggable database PDBTESTMASTER open read only;

Create the first PDB Snapshot Copy on the Exadata SPARSE Disk Group:

Create pluggable database PDBDEV01 from PDBTESTMASTER tempfile reuse create_file_dest='+SPARSEC1' snapshot copy;
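The new thin-provisioned clone behaves like any other PDB and can be opened and verified right away (PDBDEV01 being the snapshot copy created above):

alter pluggable database PDBDEV01 open instances=all;
select name, open_mode from v$pdbs where name='PDBDEV01';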

Feedback on Exadata Storage Snapshots

The ability to create storage-efficient database copies in a few seconds, independently of the size of the Test Master, is very useful for today’s IT departments; but such extreme velocity and flexibility is not entirely free. In fact, performance tests on an I/O-bound workload have highlighted significant performance degradation. This reminds us that, as defined by Oracle Corporation, the Snapshot Technology included with the Exadata Machine remains a non-production option.

Feedback on a Modern Consolidated Database Environment

 

Since the launch of Oracle 12c R1 Beta Program (August 2012) at Trivadis, we have been intensively testing, engineering and implementing Multitenant architectures for our customers.

Today, we can provide our feedback and that of our customers!

The overall feedback related to Oracle Multitenant is very positive: customers have been able to increase flexibility and automation, improving the efficiency of their software development life cycles.

Even the Single-tenant configuration (free of charge) brings a few advantages compared to the non-CDB architecture. Therefore, from a technology point of view, I recommend adopting the Container Database (CDB) architecture for all Oracle databases.

 

Examples of Multitenant architectures implemented

Having defined Oracle Multitenant as a technological revolution in the relational database space, when combined with other 12c features it becomes a game changer for flexibility, automation and velocity.

Here are a few examples of successful architectures implemented with our customers, using the Oracle Container Database (CDB):

 

  • Database consolidation without performance and stability compromise here.

 

  • Multitenant and DevOps here.

 

  • Operating Database Disaster Recovery in Multitenant environment here.

 

 


 

RHEL 7.4 fails to mount ACFS File System due to KMOD package

After a fresh OS installation or an upgrade to RHEL 7.4, any attempt to install the ACFS drivers fails with the following message: “ACFS-9459 ADVM/ACFS is not supported on this OS version”.

The error persists even if the Oracle Grid Infrastructure software includes Patch 26247490: 12.2 ACFS MODULE ERRORS & CRASH DURING MODULE LOAD & UNLOAD WITH OL7U4 RHCK.

 

This problem has been identified by Oracle as BUG 26320387 – “7.4 kmod weak-modules not checking kABI compatibility correctly”,

and by Red Hat as Bugzilla bug 1477073 – “7.4 kmod weak-modules --dry-run changed output format missing ‘is compatible’ messages”.

root@oel7node06:/u01/app/12.2.0.1/grid/crs/install# /u01/app/12.2.0.1/grid/bin/acfsroot install
ACFS-9459: ADVM/ACFS is not supported on this OS version: '3.10.0-514.6.1.el7.x86_64'

root@oel7node06:~# /sbin/lsmod | grep oracle
oracleadvm 776830 7
oracleoks 654476 1 oracleadvm
oracleafd 205543 1

 

The current workaround consists of downgrading the kmod RPM to kmod-20-9.el7.x86_64:

root@oel7node06:~# yum downgrade kmod-20-9.el7
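After the downgrade, re-run the driver installation that previously failed; optionally, assuming the yum-plugin-versionlock package is available, the kmod package can be pinned so that a future yum update does not reintroduce the problem:

root@oel7node06:~# /u01/app/12.2.0.1/grid/bin/acfsroot install
root@oel7node06:~# yum versionlock add kmod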

 

Once this is done, the ACFS drivers are correctly loaded:

root@oel7node06:~# /sbin/lsmod | grep oracle
oracleacfs 4597925 2
oracleadvm 776830 8
oracleoks 654476 2 oracleacfs,oracleadvm
oracleafd 205543 1