Adding flexibility to Oracle GI: implementing multiple SCANs

Nowadays business requirements push IT departments to implement increasingly sophisticated and consolidated environments, without compromising the availability, performance and flexibility of the applications running on them.

In this post, I explain how to improve the Grid Infrastructure network flexibility by implementing multiple SCANs, and how to associate one or more networks with the Oracle databases.

To better understand the reasons for this type of implementation, a few common use cases are listed below:

  • Applications are deployed on different/dedicated subnets.
  • Network isolation is required for security reasons.
  • Different database protocols are in use (TCP, TCPS, etc.).

 

 

Single Client Access Name (SCAN)

By default, on each Oracle Grid Infrastructure cluster, independently of the number of nodes, one SCAN with three SCAN VIPs is created.
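
The current SCAN setup can be verified at any time with srvctl; for example, as grid user (output omitted here, as it varies per cluster):

grid@host01a:~# srvctl config scan
grid@host01a:~# srvctl config scan_listener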

The default Oracle Clusterware network/SCAN configuration is depicted below.

 

[Figure: single SCAN listener configuration]

 

Multiple Single Client Access Name (SCAN) implementation

Before implementing additional SCANs, the OS provisioning of the new network interfaces or of the new VLAN tagging has to be completed.

The current example uses the second option (VLAN tagging): the bond0 interface is an active/active bond of two 10GbE cards, to which the VLAN tag 764 has been added.
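
As a minimal sketch of the OS side (assuming RHEL-style network scripts; exact files and options depend on the distribution), the tagged interface used in this post could be defined as follows:

# /etc/sysconfig/network-scripts/ifcfg-bond0.764 (illustrative)
DEVICE=bond0.764
ONBOOT=yes
BOOTPROTO=none
VLAN=yes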

The customized Oracle Clusterware network/SCAN configuration, after adding the second SCAN, is represented below.

 

[Figure: multiple SCAN listeners configuration]

 

Step-by-step implementation

After completing the OS network setup, as grid user add the new interface to the Grid Infrastructure:

grid@host01a:~# oifcfg setif -global bond0.764/10.15.69.0:public

grid@host01a:~# oifcfg getif
eno49 192.168.7.32 global cluster_interconnect,asm
eno50 192.168.9.48 global cluster_interconnect,asm
bond0 10.11.8.0 global public
bond0.764 10.15.69.0 global public
grid@host01a:~#
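
The network interfaces visible to the clusterware can also be listed with oifcfg iflist, a quick sanity check before and after the change:

grid@host01a:~# oifcfg iflist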

 

Then as root create network number 2 and display the configuration:

root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add network -netnum 2 -subnet 10.15.69.0/255.255.255.0/bond0.764 -nettype STATIC

root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl config network -netnum 2
Network 2 exists
Subnet IPv4: 10.15.69.0/255.255.255.0/, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:

 

As root user add the node VIPs:

root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add vip -node host01a -netnum 2 -address host01b-vip.emilianofusaglia.net/255.255.255.0
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add vip -node host02a -netnum 2 -address host02b-vip.emilianofusaglia.net/255.255.255.0
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add vip -node host03a -netnum 2 -address host03b-vip.emilianofusaglia.net/255.255.255.0
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add vip -node host04a -netnum 2 -address host04b-vip.emilianofusaglia.net/255.255.255.0
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add vip -node host05a -netnum 2 -address host05b-vip.emilianofusaglia.net/255.255.255.0
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add vip -node host06a -netnum 2 -address host06b-vip.emilianofusaglia.net/255.255.255.0
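
Note that the new VIP names must already resolve in DNS before the VIPs can be started; a quick check with one of the hostnames of this example:

root@host01a:~# nslookup host01b-vip.emilianofusaglia.net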

 

As grid user create a new listener on network number 2:

grid@host01a:~# srvctl add listener -listener LISTENER2 -netnum 2 -endpoints "TCP:1532"

 

As root user add the new SCAN to network number 2:

root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl add scan -scanname scan-02.emilianofusaglia.net -netnum 2
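
Like the default SCAN, the new SCAN name is expected to resolve to three IP addresses in DNS (served round-robin); a quick verification:

root@host01a:~# nslookup scan-02.emilianofusaglia.net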

 

As root user start the new node VIPs:

root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl start vip -vip host01b-vip.emilianofusaglia.net
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl start vip -vip host02b-vip.emilianofusaglia.net
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl start vip -vip host03b-vip.emilianofusaglia.net
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl start vip -vip host04b-vip.emilianofusaglia.net
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl start vip -vip host05b-vip.emilianofusaglia.net
root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl start vip -vip host06b-vip.emilianofusaglia.net

 

As grid user start the new node Listeners:

grid@host01a:~# srvctl start listener -listener LISTENER2
grid@host01a:~# srvctl status listener -listener LISTENER2
Listener LISTENER2 is enabled
Listener LISTENER2 is running on node(s): host01a,host02a,host03a,host04a,host05a,host06a
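
The endpoints of the new listener can also be inspected directly with lsnrctl:

grid@host01a:~# lsnrctl status LISTENER2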

 

As root user start the new SCAN and as grid user check the configuration:

root@host01a:~# /u01/app/12.2.0.1/grid/bin/srvctl start scan -netnum 2

grid@host01a:~# srvctl config scan -netnum 2
SCAN name: scan-02.emilianofusaglia.net, Network: 2
Subnet IPv4: 10.15.69.0/255.255.255.0/, static
Subnet IPv6:
SCAN 1 IPv4 VIP: 10.15.69.44
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: 10.15.69.45
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 3 IPv4 VIP: 10.15.69.43
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:

grid@host01a:~# srvctl status scan -netnum 2
SCAN VIP scan1_net2 is enabled
SCAN VIP scan1_net2 is running on node host02a
SCAN VIP scan2_net2 is enabled
SCAN VIP scan2_net2 is running on node host01a
SCAN VIP scan3_net2 is enabled
SCAN VIP scan3_net2 is running on node host03a

 

As grid user add the SCAN Listener and check the configuration:

grid@host01a:~# srvctl add scan_listener -netnum 2 -listener LISTENER2 -endpoints TCP:1532

grid@host01a:~# srvctl config scan_listener -netnum 2
SCAN Listener LISTENER2_SCAN1_NET2 exists. Port: TCP:1532
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER2_SCAN2_NET2 exists. Port: TCP:1532
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER2_SCAN3_NET2 exists. Port: TCP:1532
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:

 

As grid user start the new SCAN listener and check its status:

grid@host01a:~# srvctl start scan_listener -netnum 2

grid@host01a:~# srvctl status scan_listener -netnum 2
SCAN Listener LISTENER2_SCAN1_NET2 is enabled
SCAN listener LISTENER2_SCAN1_NET2 is running on node host02a
SCAN Listener LISTENER2_SCAN2_NET2 is enabled
SCAN listener LISTENER2_SCAN2_NET2 is running on node host01a
SCAN Listener LISTENER2_SCAN3_NET2 is enabled
SCAN listener LISTENER2_SCAN3_NET2 is running on node host03a

 

Defining the multi-SCAN configuration per database

Once the above configuration is completed, it remains to define which SCAN(s) should be used by each database.

When multiple SCANs exist, by default CRS populates the LISTENER_NETWORKS parameter to register the database against all SCANs and listeners.

To override this default behavior, allowing for example the registration of a specific database only against the SCAN scan-02.emilianofusaglia.net, the database parameter LISTENER_NETWORKS should be configured manually.
The LISTENER_NETWORKS parameter can be set dynamically, but the new value is only enforced at the next instance restart.
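
Below is a minimal sketch of such a manual configuration, assuming a database MYDB whose instance MYDB1 runs on host01a and should register only against the second network (instance name, service name and scope are illustrative; the LOCAL_LISTENER endpoint must point to the node VIP of the instance's own node):

SQL> ALTER SYSTEM SET LISTENER_NETWORKS='((NAME=network2)(LOCAL_LISTENER=(ADDRESS=(PROTOCOL=TCP)(HOST=host01b-vip.emilianofusaglia.net)(PORT=1532)))(REMOTE_LISTENER=scan-02.emilianofusaglia.net:1532))' SCOPE=BOTH SID='MYDB1';

Clients would then reach the database through the new SCAN and port, for example with a tnsnames.ora entry like:

MYDB_NET2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = scan-02.emilianofusaglia.net)(PORT = 1532))
    (CONNECT_DATA = (SERVICE_NAME = MYDB))
  )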

 


 


Linux for DBA: Red Hat 7 removed and deprecated a few commands

 

Red Hat 7 and derived Linux distributions have removed or deprecated a few commands, among them netstat and lsof, which are popular among DBAs.

This post shows how to obtain the same network information using the new OS commands.

 

NETSTAT

netstat is now considered obsolete and has been replaced by ss:

root@oel7qa01:~$ ss -t
State       Recv-Q Send-Q       Local Address:Port           Peer Address:Port 
ESTAB       0      0            192.168.1.117:54360          192.0.78.23:https 
ESTAB       0      0            192.168.1.117:48538          198.252.206.25:https 
ESTAB       0      0            192.168.1.117:42744          162.125.18.133:https 
ESTAB       0      0            127.0.0.1:38106              127.0.0.1:52828 
ESTAB       0      0            192.168.1.117:54008          192.0.78.23:https 
CLOSE-WAIT  1      0            192.168.1.117:60054          51.2xx.195.xx:https 
ESTAB       0      0            192.168.1.117:47904          198.2xx.202.xx:https 
CLOSE-WAIT  32     0            192.168.1.117:56724          108.1xx.172.xxx:https 
CLOSE-WAIT  32     0            192.168.1.117:47050          54.xx.201.xxx:https 
ESTAB       0      0            127.0.0.1:52828              127.0.0.1:38106 
CLOSE-WAIT  32     0            192.168.1.117:44728          108.1xx.xxx.6x:https 
ESTAB       0      0            192.168.1.117:41848          195.xxx.2xx.xxx:https 
ESTAB       0      0            192.168.7.50:41268           192.168.7.60:ssh 
ESTAB       0      0            2a02:1203:ecb0:7b80:58d9:f6e5:90d9:f266:53060 2a00:1450:400e:800::2003:https 
ESTAB       0      0            2a02:1203:ecb0:7b80:58d9:f6e5:90d9:f266:37978 2a00:1450:400a:804::200e:https 
ESTAB       0      0            2a02:1203:ecb0:7b80:58d9:f6e5:90d9:f266:51682 2a00:1450:400a:804::2003:https
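
For a DBA, the most frequent use is probably listing the listening TCP sockets together with the owning processes, for example to verify a local listener port:

root@oel7qa01:~$ ss -ltnp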

 

The netstat -r information is now provided by the command ip route:

--Until Red Hat 6
[root@oel7node00 ~]# netstat -r
Kernel IP routing table
Destination     Gateway     Genmask        Flags  MSS Window irtt Iface
default         gateway     0.0.0.0        UG       0 0         0 enp0s8
default         gateway     0.0.0.0        UG       0 0         0 enp0s3
10.0.2.0        0.0.0.0     255.255.255.0  U        0 0         0 enp0s3
172.31.100.0    0.0.0.0     255.255.255.0  U        0 0         0 enp0s9
192.168.7.0     0.0.0.0     255.255.255.0  U        0 0         0 enp0s8
192.168.200.0   0.0.0.0     255.255.255.0  U        0 0         0 enp0s10


--As of Red Hat 7
[root@oel7node00 ~]# ip route
default via 192.168.7.50 dev enp0s8 proto static metric 100 
default via 10.0.2.2 dev enp0s3 proto static metric 101 
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 metric 100 
172.31.100.0/24 dev enp0s9 proto kernel scope link src 172.31.100.10 metric 100 
192.168.7.0/24 dev enp0s8 proto kernel scope link src 192.168.7.60 metric 100 
192.168.200.0/24 dev enp0s10 proto kernel scope link src 192.168.200.10 metric 100 

 

The netstat -i information is now provided by the command ip -s link:

--Until Red Hat 6
[root@oel7node00 ~]# netstat -i
Kernel Interface table
Iface     MTU    RX-OK RX-ERR RX-DRP RX-OVR   TX-OK TX-ERR TX-DRP TX-OVR Flg
enp0s3   1500       66      0      0 0           72      0      0      0 BMRU
enp0s8   1500     1201      0      0 0          687      0      0      0 BMRU
enp0s9   1500        2      0      0 0            2      0      0      0 BMRU
enp0s10  1500        2      0      0 0            7      0      0      0 BMRU
lo      65536        0      0      0 0            0      0      0      0 LRU


--As of Red Hat 7
[root@oel7node00 ~]# ip -s link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT 
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 RX: bytes packets errors dropped overrun mcast 
 0         0       0      0       0       0 
 TX: bytes packets errors dropped carrier collsns 
 0         0       0      0       0       0 
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
 link/ether 08:00:27:4c:63:1b brd ff:ff:ff:ff:ff:ff
 RX: bytes packets errors dropped overrun mcast 
 5860      66      0      0       0       0 
 TX: bytes packets errors dropped carrier collsns 
 5662      72      0      0       0       0 
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
 link/ether 08:00:27:2b:ca:66 brd ff:ff:ff:ff:ff:ff
 RX: bytes packets errors dropped overrun mcast 
 131645    1237    0      0       0       0 
 TX: bytes packets errors dropped carrier collsns 
 223396    704     0      0       0       0 
4: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
 link/ether 08:00:27:cc:fb:2e brd ff:ff:ff:ff:ff:ff
 RX: bytes packets errors dropped overrun mcast 
 120        2      0      0       0       0 
 TX: bytes packets errors dropped carrier collsns 
 120       2       0      0       0       0 
5: enp0s10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
 link/ether 08:00:27:6f:7e:47 brd ff:ff:ff:ff:ff:ff
 RX: bytes packets errors dropped overrun mcast 
 120       2       0      0       0       0 
 TX: bytes packets errors dropped carrier collsns 
 558       7       0      0       0       0

 

The netstat -g information is now provided by the command ip maddr:

--Until Red Hat 6
[root@oel7node00 ~]# netstat -g
IPv6/IPv4 Group Memberships
Interface       RefCnt Group
--------------- ------ ---------------------
lo              1      all-systems.mcast.net
enp0s3          1      all-systems.mcast.net
enp0s8          1      all-systems.mcast.net
enp0s9          1      all-systems.mcast.net
enp0s10         1      all-systems.mcast.net
lo              1      ff02::1
lo              1      ff01::1
enp0s3          1      ff02::1
enp0s3          1      ff01::1
enp0s8          1      ff02::1
enp0s8          1      ff01::1
enp0s9          1      ff02::1
enp0s9          1      ff01::1
enp0s10         1      ff02::1
enp0s10         1      ff01::1


--As of Red Hat 7
[root@oel7node00 ~]# ip maddr
1: lo
 inet 224.0.0.1
 inet6 ff02::1
 inet6 ff01::1
2: enp0s3
 link 01:00:5e:00:00:01
 inet 224.0.0.1
 inet6 ff02::1
 inet6 ff01::1
3: enp0s8
 link 01:00:5e:00:00:01
 inet 224.0.0.1
 inet6 ff02::1
 inet6 ff01::1
4: enp0s9
 link 01:00:5e:00:00:01
 inet 224.0.0.1
 inet6 ff02::1
 inet6 ff01::1
5: enp0s10
 link 01:00:5e:00:00:01
 inet 224.0.0.1
 inet6 ff02::1
 inet6 ff01::1

 

 

LSOF

lsof is no longer included in the OS minimal installation, but it is not considered obsolete or deprecated; therefore, simply use yum to install the missing package:

[root@oel7node00 ~]# which lsof
/usr/bin/which: no lsof in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin)

[root@oel7node00 ~]# yum install lsof
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package lsof.x86_64 0:4.87-4.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=========================================================================================================================================================
 Package                  Arch                    Version                      Repository                   Size
=========================================================================================================================================================
Installing:
 lsof                     x86_64                  4.87-4.el7                   ol7_latest                  330 k

Transaction Summary
=========================================================================================================================================================
Install 1 Package

Total download size: 330 k
Installed size: 927 k
Is this ok [y/d/N]: y
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
lsof-4.87-4.el7.x86_64.rpm | 330 kB 00:00:00 
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
 Installing : lsof-4.87-4.el7.x86_64 1/1 
 Verifying : lsof-4.87-4.el7.x86_64 1/1

Installed:
 lsof.x86_64 0:4.87-4.el7

Complete!
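
Once installed, lsof behaves exactly as on previous releases; a typical DBA check, listing the processes holding a listener port (1521 assumed here):

[root@oel7node00 ~]# lsof -i :1521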