Grid Management DB filling up ASM disk space

Recently I discovered that on Oracle Grid Infrastructure 12cR2 the ASM disk group hosting the Management DB (-MGMTDB) was filling up very quickly.

This is due to a bug in the oclumon data purge procedure.
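
The current size of the Cluster Health Monitor repository can be checked with oclumon; a quick sketch (the output varies per cluster):

$ oclumon manage -get repsize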

To fix the problem, two possibilities are available:

  1. Recreate the Management DB
  2. Manually truncate the tables that are not purged and shrink the tablespace

 

Both options are described below.

 

Option 1 – Recreate the Management DB

As root user, stop and disable the ora.crf resource on each cluster node:

# /u01/app/12.2.0.1/grid/bin/crsctl stop res ora.crf -init
# /u01/app/12.2.0.1/grid/bin/crsctl modify res ora.crf -attr ENABLED=0 -init

 

As the grid user, from the node hosting the Management Database instance, run the following commands:

$ /u01/app/12.2.0.1/grid/bin/srvctl status mgmtdb
$ /u01/app/12.2.0.1/grid/bin/dbca -silent -deleteDatabase -sourceDB -MGMTDB
Connecting to database
4% complete
9% complete
14% complete
19% complete
23% complete
28% complete
47% complete
Updating network configuration files
48% complete
52% complete
Deleting instance and datafiles
76% complete
100% complete

 

How to recreate the MGMTDB:

$ /u01/app/12.2.0.1/grid/bin/dbca -silent -createDatabase -createAsContainerDatabase true \
  -templateName MGMTSeed_Database.dbc \
  -sid -MGMTDB \
  -gdbName _mgmtdb \
  -storageType ASM \
  -diskGroupName GIMR \
  -datafileJarLocation <GI HOME>/assistants/dbca/templates \
  -characterset AL32UTF8 \
  -autoGeneratePassword \
  -skipUserTemplateCheck

 

Create the GIMR pluggable database:

$ /u01/app/12.2.0.1/grid/bin/mgmtca -local
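
Once the Management DB has been recreated, the ora.crf resource disabled at the beginning can presumably be re-enabled and restarted; a sketch mirroring the commands above, as root on each cluster node:

# /u01/app/12.2.0.1/grid/bin/crsctl modify res ora.crf -attr ENABLED=1 -init
# /u01/app/12.2.0.1/grid/bin/crsctl start res ora.crf -init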

 


 

Option 2 – Manually truncate the tables

 

As root user stop and disable ora.crf resource on each cluster node:

# /u01/app/12.2.0.1/grid/bin/crsctl stop res ora.crf -init
# /u01/app/12.2.0.1/grid/bin/crsctl modify res ora.crf -attr ENABLED=0 -init

 

Connect to MGMTDB and identify the segments to truncate:

export ORACLE_SID=-MGMTDB
$ORACLE_HOME/bin/sqlplus / as sysdba
SQL> select pdb_name from dba_pdbs where pdb_name!='PDB$SEED';

SQL> alter session set container=GIMR_DSCREP_10;

Session altered.

SQL> col obj format a50
SQL> select owner||'.'||SEGMENT_NAME obj, BYTES from dba_segments where owner='CHM' order by 2 asc;

 

Most likely these two tables are much bigger than the rest:

  • CHM.CHMOS_PROCESS_INT_TBL
  • CHM.CHMOS_DEVICE_INT_TBL

Truncate the tables:

SQL> truncate table CHM.CHMOS_PROCESS_INT_TBL;
SQL> truncate table CHM.CHMOS_DEVICE_INT_TBL;

 

Then, if needed, shrink the tablespace as sketched below, and the job is done.
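
A minimal sketch of the shrink, still connected to the GIMR PDB; the tablespace and datafile names below are assumptions, so check dba_data_files first and adapt the resize value:

SQL> select tablespace_name, file_name, bytes/1024/1024 MB from dba_data_files;

SQL> -- SYSMGMTDATA and the file path below are illustrative only;
SQL> -- the resize fails with ORA-03297 if segments still sit above the target size
SQL> alter database datafile '+GIMR/_MGMTDB/<xxxxxxxxxxxxxxxxxxx>/DATAFILE/sysmgmtdata.xxx.xxxxxxx' resize 2048M;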

 


 


Exadata Storage Snapshots

This post describes how to implement Oracle Database Snapshot Technology on Exadata Machine.

Because the Exadata Storage Cell Smart Features, Storage Indexes, IORM and Network Resource Manager work only at the ASM Volume Manager level (they do not work on top of the ACFS Cluster File System), the implementation of the snapshot technology is different compared to any other non-Exadata environment.

For this purpose Oracle has developed a new type of ASM Disk Group called SPARSE Disk Group. It uses ASM SPARSE Grid Disks based on Thin Provisioning to save the database snapshot copies and the associated metadata, and it supports both non-CDB and PDB snapshot copies.

The implementation requires the following minimum software versions:

  • Exadata Storage Software version 12.1.2.1.0.
  • Oracle Database version 12.1.0.2 with bundle patch 5.
One major restriction applies to Exadata Storage Snapshots compared to ACFS:
the source database must be a shared copy, open in read only, called the Test Master. The Test Master database cannot be modified or deleted as long as the latest child snapshot is in use.
This restriction exists because the Exadata snapshot technology uses "allocate on first write", not "copy on write" (as ACFS does), and the snapshot is per database datafile.
When a child snapshot issues a write, the write goes to a private copy of that block inside the snapshot, preserving the original block value, which can still be accessed by the other child snapshots of the same Test Master.

How to Implement Exadata Storage Snapshots in a PDB Environment

Check the celldisks for available free space to allocate to a new SPARSE Disk Group

[root@strgceladm01 ~]# cellcli -e list celldisk attributes name,freespace
 CD_00_strgceladm01 853.34375G
 CD_01_strgceladm01 853.34375G
 CD_02_strgceladm01 853.34375G
 CD_03_strgceladm01 853.34375G
 CD_04_strgceladm01 853.34375G
 CD_05_strgceladm01 853.34375G
 CD_06_strgceladm01 853.34375G
 CD_07_strgceladm01 853.34375G
 CD_08_strgceladm01 853.34375G
 CD_09_strgceladm01 853.34375G
 CD_10_strgceladm01 853.34375G
 CD_11_strgceladm01 853.34375G
 FD_00_strgceladm01 0
 FD_01_strgceladm01 0
 FD_02_strgceladm01 0
 FD_03_strgceladm01 0
[root@strgceladm01 ~]#


[root@strgceladm02 ~]# cellcli -e list celldisk attributes name,freespace
 CD_00_strgceladm02 853.34375G
 CD_01_strgceladm02 853.34375G
 CD_02_strgceladm02 853.34375G
 CD_03_strgceladm02 853.34375G
 CD_04_strgceladm02 853.34375G
 CD_05_strgceladm02 853.34375G
 CD_06_strgceladm02 853.34375G
 CD_07_strgceladm02 853.34375G
 CD_08_strgceladm02 853.34375G
 CD_09_strgceladm02 853.34375G
 CD_10_strgceladm02 853.34375G
 CD_11_strgceladm02 853.34375G
 FD_00_strgceladm02 0
 FD_01_strgceladm02 0
 FD_02_strgceladm02 0
 FD_03_strgceladm02 0
[root@strgceladm02 ~]#


[root@strgceladm03 ~]# cellcli -e list celldisk attributes name,freespace
 CD_00_strgceladm03 853.34375G
 CD_01_strgceladm03 853.34375G
 CD_02_strgceladm03 853.34375G
 CD_03_strgceladm03 853.34375G
 CD_04_strgceladm03 853.34375G
 CD_05_strgceladm03 853.34375G
 CD_06_strgceladm03 853.34375G
 CD_07_strgceladm03 853.34375G
 CD_08_strgceladm03 853.34375G
 CD_09_strgceladm03 853.34375G
 CD_10_strgceladm03 853.34375G
 CD_11_strgceladm03 853.34375G
 FD_00_strgceladm03 0
 FD_01_strgceladm03 0
 FD_02_strgceladm03 0
 FD_03_strgceladm03 0
[root@strgceladm03 ~]#

On each Storage Cell create the SPARSE Grid Disks as described below

[root@strgceladm01 ~]# cellcli -e CREATE GRIDDISK ALL PREFIX=SPARSE, sparse=true, SIZE=853.34375G
Cell disks were skipped because they had no freespace for grid disks: FD_00_strgceladm01, FD_01_strgceladm01, FD_02_strgceladm01, FD_03_strgceladm01.
GridDisk SPARSE_CD_00_strgceladm01 successfully created
GridDisk SPARSE_CD_01_strgceladm01 successfully created
GridDisk SPARSE_CD_02_strgceladm01 successfully created
GridDisk SPARSE_CD_03_strgceladm01 successfully created
GridDisk SPARSE_CD_04_strgceladm01 successfully created
GridDisk SPARSE_CD_05_strgceladm01 successfully created
GridDisk SPARSE_CD_06_strgceladm01 successfully created
GridDisk SPARSE_CD_07_strgceladm01 successfully created
GridDisk SPARSE_CD_08_strgceladm01 successfully created
GridDisk SPARSE_CD_09_strgceladm01 successfully created
GridDisk SPARSE_CD_10_strgceladm01 successfully created
GridDisk SPARSE_CD_11_strgceladm01 successfully created
[root@strgceladm01 ~]#

On each Storage Cell list all Grid Disks

[root@strgceladm01 ~]# cellcli -e list griddisk attributes name,size
 DATAC1_CD_00_strgceladm01 6.294586181640625T
 DATAC1_CD_01_strgceladm01 6.294586181640625T
 DATAC1_CD_02_strgceladm01 6.294586181640625T
 DATAC1_CD_03_strgceladm01 6.294586181640625T
 DATAC1_CD_04_strgceladm01 6.294586181640625T
 DATAC1_CD_05_strgceladm01 6.294586181640625T
 DATAC1_CD_06_strgceladm01 6.294586181640625T
 DATAC1_CD_07_strgceladm01 6.294586181640625T
 DATAC1_CD_08_strgceladm01 6.294586181640625T
 DATAC1_CD_09_strgceladm01 6.294586181640625T
 DATAC1_CD_10_strgceladm01 6.294586181640625T
 DATAC1_CD_11_strgceladm01 6.294586181640625T
 FGRID_FD_00_strgceladm01 2.0717315673828125T
 FGRID_FD_01_strgceladm01 2.0717315673828125T
 FGRID_FD_02_strgceladm01 2.0717315673828125T
 FGRID_FD_03_strgceladm01 2.0717315673828125T
 RECOC1_CD_00_strgceladm01 1.78143310546875T
 RECOC1_CD_01_strgceladm01 1.78143310546875T
 RECOC1_CD_02_strgceladm01 1.78143310546875T
 RECOC1_CD_03_strgceladm01 1.78143310546875T
 RECOC1_CD_04_strgceladm01 1.78143310546875T
 RECOC1_CD_05_strgceladm01 1.78143310546875T
 RECOC1_CD_06_strgceladm01 1.78143310546875T
 RECOC1_CD_07_strgceladm01 1.78143310546875T
 RECOC1_CD_08_strgceladm01 1.78143310546875T
 RECOC1_CD_09_strgceladm01 1.78143310546875T
 RECOC1_CD_10_strgceladm01 1.78143310546875T
 RECOC1_CD_11_strgceladm01 1.78143310546875T
 SPARSE_CD_00_strgceladm01 853.34375G
 SPARSE_CD_01_strgceladm01 853.34375G
 SPARSE_CD_02_strgceladm01 853.34375G
 SPARSE_CD_03_strgceladm01 853.34375G
 SPARSE_CD_04_strgceladm01 853.34375G
 SPARSE_CD_05_strgceladm01 853.34375G
 SPARSE_CD_06_strgceladm01 853.34375G
 SPARSE_CD_07_strgceladm01 853.34375G
 SPARSE_CD_08_strgceladm01 853.34375G
 SPARSE_CD_09_strgceladm01 853.34375G
 SPARSE_CD_10_strgceladm01 853.34375G
 SPARSE_CD_11_strgceladm01 853.34375G
[root@strgceladm01 ~]#

From an ASM instance create the SPARSE Disk Group

SQL> CREATE DISKGROUP SPARSEC1 EXTERNAL REDUNDANCY DISK 'o/*/SPARSE_CD_*'
ATTRIBUTE
'compatible.asm' = '12.2.0.1',
'compatible.rdbms' = '12.2.0.1',
'cell.smart_scan_capable'='TRUE',
'cell.sparse_dg' = 'allsparse',
'AU_SIZE' = '4M';

Diskgroup created.

Set the following ASM attribute on the Disk Group hosting the Test Master Database

ALTER DISKGROUP DATAC1 SET ATTRIBUTE 'access_control.enabled' = 'true';

Grant access to the OS RDBMS user used to access the Disk Group

ALTER DISKGROUP DATAC1 ADD USER 'oracle';

From an ASM instance set the ownership for every file that belongs solely to the PDB being snapshot cloned, as per the example below

alter diskgroup DATAC1 set ownership owner='oracle' for file '+DATAC1/CDBT/<xxxxxxxxxxxxxxxxxxx>/DATAFILE/system.xxx.xxxxxxx';
alter diskgroup DATAC1 set ownership owner='oracle' for file '+DATAC1/CDBT/<xxxxxxxxxxxxxxxxxxx>/DATAFILE/sysaux.xxx.xxxxxxx';
alter diskgroup DATAC1 set ownership owner='oracle' for file '+DATAC1/CDBT/<xxxxxxxxxxxxxxxxxxx>/DATAFILE/users.xxx.xxxxxxx';
...
..

Restart the Test Master PDB in read only

alter pluggable database PDBTESTMASTER close immediate instances=all;
alter pluggable database PDBTESTMASTER open read only;

Create the first PDB Snapshot Copy on Exadata SPARSE Disk Group

Create pluggable database PDBDEV01 from PDBTESTMASTER tempfile reuse create_file_dest='+SPARSEC1' snapshot copy;

Feedback on Exadata Storage Snapshots

The ability to create storage-efficient database copies in a few seconds, independently of the size of the Test Master, is very useful for today's IT departments; but such extreme velocity and flexibility is not entirely free. In fact, performance tests on an I/O-bound workload have highlighted significant performance degradation. This reminds us that, as defined by Oracle Corporation, the snapshot technology included on the Exadata Machine remains a non-production option.

Feedback on a Modern Consolidated Database Environment

 

Since the launch of the Oracle 12c R1 Beta Program (August 2012), at Trivadis we have been intensively testing, engineering and implementing Multitenant architectures for our customers.

Today, we can share our feedback and that of our customers!

The overall feedback related to Oracle Multitenant is very positive: customers have been able to increase flexibility and automation, improving the efficiency of their software development life cycles.

Even the Single-tenant configuration (free of charge) brings a few advantages compared to the non-CDB architecture. Therefore, from a technology point of view, I recommend adopting the Container Database (CDB) architecture for all Oracle databases.

 

Examples of Multitenant architectures implemented

Oracle Multitenant can be defined as a technological revolution in the relational database space; when combined with other 12c features it becomes a game changer for flexibility, automation and velocity.

Listed here are a few examples of successful architectures implemented with our customers using the Oracle Container Database (CDB):

 

  • Database consolidation without compromising performance and stability here.

 

  • Multitenant and DevOps here.

 

  • Operating Database Disaster Recovery in a Multitenant environment here.

 

 


 

RHEL 7.4 fails to mount ACFS File System due to KMOD package

After a fresh OS installation or an upgrade to RHEL 7.4, any attempt to install ACFS drivers will fail with the following message: “ACFS-9459 ADVM/ACFS is not supported on this OS version”

The error persists even if the Oracle Grid Infrastructure software includes Patch 26247490: 12.2 ACFS MODULE ERRORS & CRASH DURING MODULE LOAD & UNLOAD WITH OL7U4 RHCK.

 

This problem has been identified by Oracle as BUG 26320387 – 7.4 kmod weak-modules not checking kABI compatibility correctly, and by Red Hat as Bugzilla bug 1477073 – 7.4 kmod weak-modules –dry-run changed output format missing ‘is compatible’ messages.

root@oel7node06:/u01/app/12.2.0.1/grid/crs/install# /u01/app/12.2.0.1/grid/bin/acfsroot install
ACFS-9459: ADVM/ACFS is not supported on this OS version: '3.10.0-514.6.1.el7.x86_64'

root@oel7node06:~# /sbin/lsmod | grep oracle
oracleadvm 776830 7
oracleoks 654476 1 oracleadvm
oracleafd 205543 1

 

The current workaround consists of downgrading the kmod RPM to kmod-20-9.el7.x86_64.

root@oel7node06:~# yum downgrade kmod-20-9.el7
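
The ACFS drivers presumably need to be installed again after the downgrade; a sketch reusing the acfsroot command and Grid Infrastructure home shown above:

root@oel7node06:~# /u01/app/12.2.0.1/grid/bin/acfsroot install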

 

After the package downgrade the ACFS drivers are correctly loaded:

root@oel7node06:~# /sbin/lsmod | grep oracle
oracleacfs 4597925 2
oracleadvm 776830 8
oracleoks 654476 2 oracleacfs,oracleadvm
oracleafd 205543 1

 


 

 

 

Adding Pluggable Databases to an existing Multitenant Data Guard environment

We all know the benefits of the Oracle Multitenant "Many-as-One" consolidation: one container (CDB) operation from which many Pluggable Databases (PDBs) can benefit; for example, one CDB backup protects all PDBs stored inside the container itself.

While for DBAs the setup of Oracle Data Guard has become a standard routine, described in thousands of Internet pages and blogs (one example available here), this post explains how to add new Pluggable Databases (PDBs) to an existing Multitenant environment protected by Data Guard.

 

How to create PDBs in Oracle Multitenant environment protected by Data Guard

There are multiple scenarios of PDB creation, and they integrate differently within the Data Guard architecture. The easiest way to proceed consists of creating a new Pluggable Database from the SEED PDB:

  • PDB creation from SEED
    The creation of a brand new empty pluggable database is automatically replicated to each physical standby database. No additional action is required; a minimal sketch is shown below.
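
A minimal sketch of this case; the PDB name PDBNEW and the admin user are hypothetical, and OMF-style datafile placement is assumed:

SQL> create pluggable database PDBNEW admin user pdbadmin identified by "xxxxxxxx";
SQL> alter pluggable database PDBNEW open instances=all;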

 

Unfortunately, this option is not always applicable, because the new PDB often has to be a clone of an existing one. Therefore, it is important for the DBA to understand how to integrate a new non-empty Pluggable Database into a Multitenant and Data Guard environment without impacting the pre-existing setup.

 

  • PDB clone 

Cloning a PDB in a Data Guard environment requires a few additional steps, which change across the different Oracle versions and when the Active Data Guard option is in use. While remote cloning the PDB, the option STANDBYS=NONE should be used to defer the PDB replication to the Standby container. Then it is possible to replicate and protect the newly cloned PDB with Data Guard.

A full example of how to perform those tasks is reported below.

 

Cloning a PDB via DB Link using the STANDBYS=NONE option

create pluggable database PCJORD from PCLORD@ccls01_PCLORD tempfile reuse STANDBYS=NONE;

 

Open the newly cloned PDB

alter pluggable database PCJORD open instances=all;

 

On the Standby container restore the PDB from the Primary

run{
set newname for pluggable database PCJORD to new;
restore pluggable database PCJORD from service CMJP01;
switch datafile all;
}

 

Connect to the Standby container and STOP the Apply Process

dgmgrl
connect sys/xxxxxxxx@CMJP01
edit database 'CMJS01' set state='APPLY-OFF';

 

If Active Data Guard is in use, re-start the Standby container in MOUNT

srvctl stop database -db CMJS01 

sqlplus / as sysdba
startup mount

 

Enable the PDB recovery on the Standby Container

alter session set container=PCJORD;
alter pluggable database enable recovery;

 

Connect to the Standby container and RE-START the Apply Process

dgmgrl
connect sys/xxxxxxxx@CMJP01
edit database 'CMJS01' set state='APPLY-ON';

 

If Active Data Guard is in use, open the container in read only

alter database open;

 

 


 

 

Adding flexibility to Oracle GI: Implementing Multiple SCANs

Nowadays business requirements force IT to implement more and more sophisticated and consolidated environments, without compromising the availability, performance and flexibility of each application running on them.

In this post, I explain how to improve the Grid Infrastructure network flexibility by implementing multiple SCANs, and how to associate one or multiple networks to the Oracle databases.

To better understand the reasons for this type of implementation, a few common use cases are listed below:

  • Applications are deployed on different/dedicated subnets.
  • Network isolation due to security requirement.
  • Different database protocols are in use (TCP, TCPS, etc.).

 

 

Single Client Access Name (SCAN)

By default, on each Oracle Grid Infrastructure cluster, independently of the number of nodes, one SCAN with 3 SCAN VIPs is created.

Below is depicted the default Oracle Clusterware network/SCAN configuration.

 

[Figure: Single SCAN Listener configuration]
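
The default configuration can be inspected with srvctl before any change; a quick sketch (the output is omitted, as it depends on the cluster):

grid@host01a:~# srvctl config network
grid@host01a:~# srvctl config scan
grid@host01a:~# srvctl config scan_listener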

 

Multiple Single Client Access Name (SCAN) implementation

Before implementing additional SCANs, the OS provisioning of new network interfaces or new VLAN Tagging has to be completed.

The current example uses the second option (VLAN Tagging): the bond0 interface is an Active/Active setup of two 10GbE cards, to which a VLAN tag has been added.

Below is the customized Oracle Clusterware network/SCAN configuration after adding a second SCAN.

 

[Figure: Multiple SCAN Listeners configuration]

 

Step-by-step implementation

After completing the OS network setup, as grid owner add the new interface to the Grid Infrastructure:

grid@host01a:~# oifcfg setif -global bond0.764/10.15.69.0:public

grid@host01a:~# oifcfg getif
eno49 192.168.7.32 global cluster_interconnect,asm
eno50 192.168.9.48 global cluster_interconnect,asm
bond0 10.11.8.0 global public
bond0.764 10.15.69.0 global public
grid@host01a:~#

 

Then as root create the network number 2 and display the configuration:

root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add network -netnum 2 -subnet 10.15.69.0/255.255.255.0/bond0.764 -nettype STATIC

root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl config network -netnum 2
Network 2 exists
Subnet IPv4: 10.15.69.0/255.255.255.0/, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:

 

As root user add the node VIPs:

root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add vip -node host01a -netnum 2 -address host01b-vip.emilianofusaglia.net/255.255.255.0
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add vip -node host02a -netnum 2 -address host02b-vip.emilianofusaglia.net/255.255.255.0
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add vip -node host03a -netnum 2 -address host03b-vip.emilianofusaglia.net/255.255.255.0
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add vip -node host04a -netnum 2 -address host04b-vip.emilianofusaglia.net/255.255.255.0
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add vip -node host05a -netnum 2 -address host05b-vip.emilianofusaglia.net/255.255.255.0
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add vip -node host06a -netnum 2 -address host06b-vip.emilianofusaglia.net/255.255.255.0

 

As grid user create a new listener based on the network number 2:

grid@host01a:~# srvctl add listener -listener LISTENER2 -netnum 2 -endpoints "TCP:1532"

 

As root user add the new SCAN to the network number 2:

 root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add scan -scanname scan-02.emilianofusaglia.net -netnum 2

 

As root user start the new node VIPs:

root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl start vip -vip host01b-vip.emilianofusaglia.net
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl start vip -vip host02b-vip.emilianofusaglia.net
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl start vip -vip host03b-vip.emilianofusaglia.net
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl start vip -vip host04b-vip.emilianofusaglia.net
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl start vip -vip host05b-vip.emilianofusaglia.net
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl start vip -vip host06b-vip.emilianofusaglia.net

 

As grid user start the new node Listeners:

grid@host01a:~# srvctl start listener -listener LISTENER2
grid@host01a:~# srvctl status listener -listener LISTENER2
Listener LISTENER2 is enabled
Listener LISTENER2 is running on node(s): host01a,host02a,host03a,host04a,host05a,host06a

 

As root user start the new SCAN and as grid user check the configuration:

root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl start scan -netnum 2

grid@host01a:~# srvctl config scan -netnum 2
SCAN name: scan-02.emilianofusaglia.net, Network: 2
Subnet IPv4: 10.15.69.0/255.255.255.0/, static
Subnet IPv6:
SCAN 1 IPv4 VIP: 10.15.69.44
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: 10.15.69.45
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 3 IPv4 VIP: 10.15.69.43
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:

grid@host01a:~# srvctl status scan -netnum 2
SCAN VIP scan1_net2 is enabled
SCAN VIP scan1_net2 is running on node host02a
SCAN VIP scan2_net2 is enabled
SCAN VIP scan2_net2 is running on node host01a
SCAN VIP scan3_net2 is enabled
SCAN VIP scan3_net2 is running on node host03a

 

As grid user add the SCAN Listener and check the configuration:

grid@host01a:~# srvctl add scan_listener -netnum 2 -listener LISTENER2 -endpoints TCP:1532

grid@host01a:~# srvctl config scan_listener -netnum 2
SCAN Listener LISTENER2_SCAN1_NET2 exists. Port: TCP:1532
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER2_SCAN2_NET2 exists. Port: TCP:1532
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER2_SCAN3_NET2 exists. Port: TCP:1532
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:

 

As grid user start the SCAN Listener2 and check the status:

grid@host01a:~# srvctl start scan_listener -netnum 2

grid@host01a:~# srvctl status scan_listener -netnum 2
SCAN Listener LISTENER2_SCAN1_NET2 is enabled
SCAN listener LISTENER2_SCAN1_NET2 is running on node host02a
SCAN Listener LISTENER2_SCAN2_NET2 is enabled
SCAN listener LISTENER2_SCAN2_NET2 is running on node host01a
SCAN Listener LISTENER2_SCAN3_NET2 is enabled
SCAN listener LISTENER2_SCAN3_NET2 is running on node host03a

 

Defining the multiple SCAN configuration per database

Once the above configuration is completed, it remains to define which SCAN(s) should be used by each database.

When multiple SCANs exist, by default CRS populates the LISTENER_NETWORKS parameter to register the database against all SCANs and listeners.

To override this default behavior, allowing for example the registration of a specific database only against the SCAN scan-02.emilianofusaglia.net, the database parameter LISTENER_NETWORKS should be manually configured, as sketched below.
The parameter LISTENER_NETWORKS can be set dynamically, but the new value is enforced at the next instance restart.
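
A sketch of such a manual configuration; the instance name MYDB1 is hypothetical, and each instance should point to its own node VIP of network 2 (here LISTENER2 on port 1532 and the SCAN scan-02):

--Hypothetical example for instance MYDB1: register only against network 2
alter system set listener_networks=
 '((NAME=network2)
   (LOCAL_LISTENER=(ADDRESS=(PROTOCOL=TCP)(HOST=host01b-vip.emilianofusaglia.net)(PORT=1532)))
   (REMOTE_LISTENER=scan-02.emilianofusaglia.net:1532))' scope=both sid='MYDB1';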

 


 

ASM Filter Driver (ASMFD)

 

ASM Filter Driver is a Linux kernel module introduced in 12c R1. It resides in the I/O path of the Oracle ASM disks providing the following features:

  • Rejecting all non-Oracle I/O write requests to ASM Disks.
  • Device name persistency.
  • Node level fencing without reboot.

 

In 12c R2, ASMFD can be enabled from the GUI of the Grid Infrastructure installation, as shown in the post GI 12c R2 Installation at step #8 "Create ASM Disk Group".

Once ASM Filter Driver is in use, similarly to ASMLib, the disks are managed using the ASMFD Label Name.

 

Here are a few examples of the implementation of ASM Filter Driver.

--How to create an ASMFD label in SQL*Plus
SQL> Alter system label set 'DATA1' to '/dev/mapper/mpathak';

System altered.


--How to create an ASM Disk Group with ASMFD
CREATE DISKGROUP DATA_DG EXTERNAL REDUNDANCY DISK 'AFD:DATA1' SIZE 30720M
ATTRIBUTE 'SECTOR_SIZE'='512','LOGICAL_SECTOR_SIZE'='512','compatible.asm'='12.2.0.1',
'compatible.rdbms'='12.2.0.1','compatible.advm'='12.2.0.1','au_size'='4M';

Diskgroup created.

 

ASM Filter Driver can also be managed from the ASM command-line utility ASMCMD.

--Check ASMFD status
ASMCMD> afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'oel7node06.localdomain'


--List ASM Disks where ASMFD is enabled
ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                    Filtering                Path
================================================================================
DATA1                      ENABLED                /dev/mapper/mpathak
DATA2                      ENABLED                /dev/mapper/mpathan
DATA3                      ENABLED                /dev/mapper/mpathw
DATA4                      ENABLED                /dev/mapper/mpathac
GIMR1                      ENABLED                /dev/mapper/mpatham
GIMR2                      ENABLED                /dev/mapper/mpathaj
GIMR3                      ENABLED                /dev/mapper/mpathal
GIMR4                      ENABLED                /dev/mapper/mpathaf
GIMR5                      ENABLED                /dev/mapper/mpathai
RECO3                      ENABLED                /dev/mapper/mpathy
RECO1                      ENABLED                /dev/mapper/mpathab
RECO2                      ENABLED                /dev/mapper/mpathx
ASMCMD>


--How to remove an ASMFD label in ASMCMD
ASMCMD> afd_unlabel DATA4
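
If the label later needs to be recreated, afd_label is available in ASMCMD as well; a sketch reusing the device path shown earlier for DATA4:

--How to create an ASMFD label in ASMCMD
ASMCMD> afd_label DATA4 /dev/mapper/mpathac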