Exadata: How to Safely Erase All Data

When the time arrives to decommission an environment holding sensitive data, we are frequently confronted with the problem of how to certify to our customer or management that all data and logs have been erased.

On Exadata machines, starting from recent software releases, this problem has been elegantly solved by Oracle with a new utility called Secure Eraser, which securely erases data on hard drives, flash devices and internal USBs, and resets the ILOM to factory defaults.


In earlier software versions, the Exadata Storage Server software already included CellCLI commands to securely erase the user data:
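For reference, those cell-level commands look like the following sketch (the exact syntax, disk group prefix and supported erasure methods depend on the Exadata Storage Server release, so verify against the CellCLI documentation for your version):

```
CellCLI> DROP GRIDDISK ALL PREFIX=DATA ERASE=1pass
CellCLI> DROP CELLDISK ALL ERASE=3pass
CellCLI> DROP CELL ERASE=7pass
```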




Unfortunately, those commands only cover the user data stored on the Storage Cells, and none of them produces an official certificate summarizing the actions taken to guarantee the data wipe. Secure Eraser, on the other hand, does all of this on all compute and storage nodes, sanitizing every type of device: user data, OS logs and network configurations.


Depending on the Exadata model, only a subset of the available options to execute Secure Eraser may be possible:

  • Automatic Secure Eraser through PXE Boot
  • Interactive Secure Eraser through PXE Boot
  • Interactive Secure Eraser through Network Boot
  • Interactive Secure Eraser through External USB



I recently used Secure Eraser through External USB on an Exadata X7-2 machine; the different steps are reported below.


Copy the Secure Eraser Diagnostic image from MOS 2180963.1 to a USB stick.
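Before writing the image, it is worth double-checking which device name the USB stick received, since a dd to the wrong disk is unrecoverable (a quick sketch; device names vary from system to system):

```shell
# List block devices with their transport type; the USB stick typically shows TRAN=usb
lsblk -o NAME,SIZE,MODEL,TRAN
# After the dd completes, flush the buffers before removing the stick
sync
```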

 # dd if=image_diagnostics_18. of=/dev/sdb


Boot the server using the USB device with the Secure Eraser Diagnostic image



After login, start the Secure Erase process

/usr/sbin/secureeraser --erase --all --flash_erasure_method=7pass --hdd_erasure_method=3pass --technician=Emiliano_Fusaglia --witness=Mario_Bros --output=/mnt/iso



At the end of the erase process, a Data Erasure Certificate similar to the example below is made available in TXT, HTML and PDF formats.






Feedback of Modern Consolidated Database Environment


Since the launch of Oracle 12c R1 Beta Program (August 2012) at Trivadis, we have been intensively testing, engineering and implementing Multitenant architectures for our customers.

Today we can share our feedback and that of our customers!

The overall feedback on Oracle Multitenant is very positive: customers have been able to increase flexibility and automation, improving the efficiency of their software development life cycle.

Even the single-tenant configuration (free of charge) brings a few advantages compared to the non-CDB architecture. Therefore, from a technology point of view, I recommend adopting the Container Database (CDB) architecture for all Oracle databases.


Examples of Multitenant architectures implemented

Oracle Multitenant is a technological revolution in the relational database space; combined with other 12c features, it becomes a game changer in terms of flexibility, automation and velocity.

Here are a few examples of successful architectures implemented with our customers using the Oracle Container Database (CDB):


  • Database consolidation without performance and stability compromise here.


  • Multitenant and DevOps here.


  • Operating Database Disaster Recovery in Multitenant environment here.




RHEL 7.4 fails to mount ACFS File System due to KMOD package

After a fresh OS installation or an upgrade to RHEL 7.4, any attempt to install ACFS drivers will fail with the following message: “ACFS-9459 ADVM/ACFS is not supported on this OS version”

The error persists even if the Oracle Grid Infrastructure software includes the  Patch 26247490: 12.2 ACFS MODULE ERRORS & CRASH DURING MODULE LOAD & UNLOAD WITH OL7U4 RHCK.


This problem has been identified by Oracle as BUG 26320387 – 7.4 kmod weak-modules not checking kABI compatibility correctly, and by Red Hat as Bugzilla bug 1477073 – 7.4 kmod weak-modules –dry-run changed output format missing ‘is compatible’ messages.

root@oel7node06:/u01/app/ /u01/app/ install
ACFS-9459: ADVM/ACFS is not supported on this OS version: '3.10.0-514.6.1.el7.x86_64'

root@oel7node06:~# /sbin/lsmod | grep oracle
oracleadvm 776830 7
oracleoks 654476 1 oracleadvm
oracleafd 205543 1


The current workaround consists in downgrading the kmod RPM to kmod-20-9.el7.x86_64.

root@oel7node06:~# yum downgrade kmod-20-9.el7
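To keep a later yum update from reinstalling the broken kmod before a fixed package is available, the downgraded version can additionally be pinned (a sketch, assuming the yum-plugin-versionlock package is available in your repositories):

```
# Pin kmod to the working release until the fix for BUG 26320387 ships
yum install -y yum-plugin-versionlock
yum versionlock add kmod-20-9.el7
yum versionlock list    # confirm the lock is in place
```

Remember to remove the lock (yum versionlock delete) once a corrected kmod package is released.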


After the package downgrade, the ACFS drivers are correctly loaded:

root@oel7node06:~# /sbin/lsmod | grep oracle
oracleacfs 4597925 2
oracleadvm 776830 8
oracleoks 654476 2 oracleacfs,oracleadvm
oracleafd 205543 1





Adding flexibility to Oracle GI Implementing Multiple SCANs

Nowadays, business requirements force IT to implement more and more sophisticated and consolidated environments, without compromising the availability, performance and flexibility of the applications running on them.

In this post, I explain how to improve the Grid Infrastructure Network flexibility, implementing multiple SCANs and how to associate one or multiple networks to the Oracle databases.

To better understand the reasons for this type of implementation, a few common use cases are listed below:

  • Applications are deployed on different/dedicated subnets.
  • Network isolation due to security requirement.
  • Different database protocols are in use (TCP, TCPS, etc.).
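As an illustration, a client on a dedicated subnet would then connect through its own SCAN, e.g. with a tnsnames.ora entry similar to this sketch (the alias and service name are hypothetical; the SCAN name and port match the configuration built in this post):

```
APP2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = scan-02.emilianofusaglia.net)(PORT = 1532))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = app2svc)
    )
  )
```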



Single Client Access Name (SCAN)

By default, on each Oracle Grid Infrastructure cluster, independently of the number of nodes, one SCAN with 3 SCAN VIPs is created.

Below is depicted the default Oracle Clusterware network/SCAN configuration.




Multiple Single Client Access Name (SCAN) implementation

Before implementing additional SCANs, the OS provisioning of the new network interfaces or of the new VLAN tagging has to be completed.

The current example uses the second option (VLAN tagging): the bond0 interface is an active/active bond of two 10 GbE cards, to which a VLAN tag has been added.

Below is represented the customized Oracle Clusterware network/SCAN configuration after adding a second SCAN.




Step-by-step implementation

After completing the OS network setup, as grid owner add the new interface to the Grid Infrastructure:

grid@host01a:~# oifcfg setif -global bond0.764/

grid@host01a:~# oifcfg getif
eno49 global cluster_interconnect,asm
eno50 global cluster_interconnect,asm
bond0 global public
bond0.764 global public


Then, as root, create network number 2 and display the configuration:

root@host01a:~# /u01/app/ add network -netnum 2 -subnet -nettype STATIC

root@host01a:~# /u01/app/ config network -netnum 2
Network 2 exists
Subnet IPv4:, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:


As root user add the node VIPs:

root@host01a:~# /u01/app/ add vip -node host01a -netnum 2 -address host01b-vip.emilianofusaglia.net/
root@host01a:~# /u01/app/ add vip -node host02a -netnum 2 -address host02b-vip.emilianofusaglia.net/
root@host01a:~# /u01/app/ add vip -node host03a -netnum 2 -address host03b-vip.emilianofusaglia.net/
root@host01a:~# /u01/app/ add vip -node host04a -netnum 2 -address host04b-vip.emilianofusaglia.net/
root@host01a:~# /u01/app/ add vip -node host05a -netnum 2 -address host05b-vip.emilianofusaglia.net/
root@host01a:~# /u01/app/ add vip -node host06a -netnum 2 -address host06b-vip.emilianofusaglia.net/


As grid user, create a new listener on network number 2:

grid@host01a:~# srvctl add listener -listener LISTENER2 -netnum 2 -endpoints "TCP:1532"


As root user add the new SCAN to the network number 2:

 root@host01a:~# /u01/app/ add scan -scanname scan-02.emilianofusaglia.net -netnum 2


As root user start the new node VIPs:

root@host01a:~# /u01/app/ start vip -vip host01b-vip.emilianofusaglia.net
root@host01a:~# /u01/app/ start vip -vip host02b-vip.emilianofusaglia.net
root@host01a:~# /u01/app/ start vip -vip host03b-vip.emilianofusaglia.net
root@host01a:~# /u01/app/ start vip -vip host04b-vip.emilianofusaglia.net
root@host01a:~# /u01/app/ start vip -vip host05b-vip.emilianofusaglia.net
root@host01a:~# /u01/app/ start vip -vip host06b-vip.emilianofusaglia.net


As grid user start the new node Listeners:

grid@host01a:~# srvctl start listener -listener LISTENER2
grid@host01a:~# srvctl status listener -listener LISTENER2
Listener LISTENER2 is enabled
Listener LISTENER2 is running on node(s): host01a,host02a,host03a,host04a,host05a,host06a


As root user start the new SCAN and as grid user check the configuration:

root@host01a:~# /u01/app/ start scan -netnum 2

grid@host01a:~# srvctl config scan -netnum 2
SCAN name: scan-02.emilianofusaglia.net, Network: 2
Subnet IPv4:, static
Subnet IPv6:
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:

grid@host01a:~# srvctl status scan -netnum 2
SCAN VIP scan1_net2 is enabled
SCAN VIP scan1_net2 is running on node host02a
SCAN VIP scan2_net2 is enabled
SCAN VIP scan2_net2 is running on node host01a
SCAN VIP scan3_net2 is enabled
SCAN VIP scan3_net2 is running on node host03a


As grid user add the SCAN Listener and check the configuration:

grid@host01a:~# srvctl add scan_listener -netnum 2 -listener LISTENER2 -endpoints TCP:1532

grid@host01a:~# srvctl config scan_listener -netnum 2
SCAN Listener LISTENER2_SCAN1_NET2 exists. Port: TCP:1532
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER2_SCAN2_NET2 exists. Port: TCP:1532
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER2_SCAN3_NET2 exists. Port: TCP:1532
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:


As grid user start the SCAN Listener2 and check the status:

grid@host01a:~# srvctl start scan_listener -netnum 2

grid@host01a:~# srvctl status scan_listener -netnum 2
SCAN Listener LISTENER2_SCAN1_NET2 is enabled
SCAN listener LISTENER2_SCAN1_NET2 is running on node host02a
SCAN Listener LISTENER2_SCAN2_NET2 is enabled
SCAN listener LISTENER2_SCAN2_NET2 is running on node host01a
SCAN Listener LISTENER2_SCAN3_NET2 is enabled
SCAN listener LISTENER2_SCAN3_NET2 is running on node host03a


Defining the multi SCANs configuration per database

Once the above configuration is completed, it remains to define which SCAN(s) should be used by each database.

When multiple SCANs exist, by default CRS populates the LISTENER_NETWORKS parameter to register the database against all SCANs and listeners.

To override this default behavior, allowing for example a specific database to register only against the SCAN scan-02.emilianofusaglia.net, the database parameter LISTENER_NETWORKS should be configured manually.
The parameter LISTENER_NETWORKS can be set dynamically, but the new value is only enforced at the next instance restart.
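A minimal sketch of such a setting (the VIP host and ports are illustrative and must match your node VIPs and listener endpoints):

```sql
-- Register the database only against network 2 (LISTENER2 / scan-02)
ALTER SYSTEM SET LISTENER_NETWORKS=
  '((NAME=network2)(LOCAL_LISTENER=host01b-vip.emilianofusaglia.net:1532)(REMOTE_LISTENER=scan-02.emilianofusaglia.net:1532))'
  SCOPE=BOTH SID='*';
```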



ASM Filter Driver (ASMFD)


ASM Filter Driver is a Linux kernel module introduced in 12c R1. It resides in the I/O path of the Oracle ASM disks and provides the following features:

  • Rejecting all non-Oracle I/O write requests to ASM Disks.
  • Device name persistency.
  • Node level fencing without reboot.


In 12c R2, ASMFD can be enabled from the GUI during the Grid Infrastructure installation, as shown in this post GI 12c R2 Installation at step #8 “Create ASM Disk Group”.

Once ASM Filter Driver is in use, the disks are managed through their ASMFD label names, similarly to ASMLib.
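Labels can also be created directly from ASMCMD; for example (the device path is one of those listed later in this post):

```
--How to create an ASMFD label in ASMCMD
ASMCMD> afd_label DATA1 /dev/mapper/mpathak
```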


Here are a few examples of how to work with ASM Filter Driver.

--How to create an ASMFD label in SQL*Plus
SQL> Alter system label set 'DATA1' to '/dev/mapper/mpathak';

System altered.

--How to create an ASM Disk Group with ASMFD
ATTRIBUTE 'SECTOR_SIZE'='512','LOGICAL_SECTOR_SIZE'='512','compatible.asm'='',

Diskgroup created.


ASM Filter Driver can also be managed from the ASM command-line utility ASMCMD:

--Check ASMFD status
ASMCMD> afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'oel7node06.localdomain'

--List ASM Disks where ASMFD is enabled
ASMCMD> afd_lsdsk
Label                    Filtering                Path
DATA1                      ENABLED                /dev/mapper/mpathak
DATA2                      ENABLED                /dev/mapper/mpathan
DATA3                      ENABLED                /dev/mapper/mpathw
DATA4                      ENABLED                /dev/mapper/mpathac
GIMR1                      ENABLED                /dev/mapper/mpatham
GIMR2                      ENABLED                /dev/mapper/mpathaj
GIMR3                      ENABLED                /dev/mapper/mpathal
GIMR4                      ENABLED                /dev/mapper/mpathaf
GIMR5                      ENABLED                /dev/mapper/mpathai
RECO3                      ENABLED                /dev/mapper/mpathy
RECO1                      ENABLED                /dev/mapper/mpathab
RECO2                      ENABLED                /dev/mapper/mpathx

--How to remove an ASMFD label in ASMCMD
ASMCMD> afd_unlabel DATA4




Installing Oracle Grid Infrastructure 12c R2

It has been an exciting week: Oracle 12c R2 came out, and suddenly it was time to refresh the RAC test environments. My friend Jacques opted for an upgrade (here the link to his blog post); I started with a fresh installation, because I also upgraded the operating system to OEL 7.3.

Compared to 12c R1 there are new options in the installation process, but generally speaking the wizard is quite similar.

The first breakthrough is the image-based installation: there is no longer a runInstaller to invoke. Instead, unpack the .zip file directly inside the Grid Infrastructure home of the first cluster node, as described below:

[grid@oel7node06 ~]$ mkdir -p /u01/app/ 
[grid@oel7node06 ~]$ chown grid:oinstall /u01/app/ 
[grid@oel7node06 ~]$ cd /u01/app/ 
[grid@oel7node06 grid]$ unzip -q download_location/grid_home_image.zip

# From an X session invoke the Grid Infrastructure wizard: 
[grid@oel7node06 grid]$ ./gridSetup.sh





The second screenshot lists the new cluster topologies available in 12c R2:

  • Oracle Standalone Cluster
  • Oracle Cluster Domain
    • Oracle Domain Services Cluster
    • Oracle Member Clusters
      • Oracle Member Cluster for Oracle Database
      • Oracle Member Cluster for Applications


In my case I’m installing an Oracle Standalone Cluster.


And now time for testing.



Linux for DBA: Basic “vi” Editor Tutorial


UNIX/Linux “vi” is a very powerful text editor; unfortunately, it can be difficult to use at the beginning. To help our memory, I wrote this post.

This is NOT an exhaustive guide, but a concentrate of the most useful commands and options.


vi Operation Modes:

Command mode: allows you to execute administrative tasks (run commands, move the cursor, search/replace strings, save, etc.). This is the default mode at startup.
When Insert mode is active, press ESC to revert to Command mode.

Insert mode: enables you to write into the file. To switch to Insert mode, simply type i.


To open a file in edit mode:

# vi filename


Basic Moving commands

Enable Command mode (pressing ESC twice)
j  -- Cursor down one line
k  -- Cursor up one line
h  -- Cursor left one character
l  -- Cursor right one character
Multiple lines/columns move, e.g.: 5h -- Move the cursor 5 characters left

$   -- Cursor at the end of the line.
0   -- Cursor at the beginning of the line. Same as |
b   -- Cursor at the beginning of the previous word.
w   -- Cursor at the beginning of the next word.
G   -- Cursor at the end of the file.
1G  -- Cursor at the beginning of the file.
:4  -- Cursor at the 4th line.
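Most motion commands accept a numeric count prefix; for example:

```
5j   -- Move the cursor down 5 lines
3w   -- Move forward 3 words
10G  -- Jump to line 10
```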


Basic Editing commands

Each of the following commands is typed from Command mode and switches to Insert mode:

a  -- Insert text after the cursor location. 
A  -- Insert text at the end of the line. 
i  -- Insert text before the cursor location. 
I  -- Insert text at the beginning of the line. 
o  -- Insert a new line below the cursor location. 
O  -- Insert a new line above the cursor location.
dd -- Delete the current line.
x  -- Delete the character under the cursor location.
cw -- Change the word under the cursor location.
r  -- Replace the character under the cursor location.
R  -- Replace multiple characters starting from the cursor location. ESC to stop the replacement.
yy -- Copy the current line.
yw -- Copy the current word.
p  -- Paste the copied text after the current cursor location
P  -- Paste the copied text before the current cursor location


Basic Search and  Replace options

Enable Command mode (pressing ESC twice)

:set ic -- Ignore case when searching.
:set nu -- Display line numbers on the left side.
:%s/<search_string>/<replacement_string>/g -- Global search and replace
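For example, to rename a string across the whole file, optionally confirming each change (the strings are illustrative):

```
:%s/oldname/newname/g  -- Replace every occurrence of oldname with newname
:%s/oldname/newname/gc -- Same, but ask for confirmation at each occurrence
```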


Exiting from vi

:q  -- Exit (fails if there are unsaved changes)
:q! -- Force exit, discarding unsaved changes
:w  -- Save the file
:wq -- Save & Exit