New Resource Manager options in Oracle 12c R2 Multitenant

Since its introduction, Oracle Resource Manager has been a key tool to guarantee performance stability and predictability by regulating access to database resources.

In the era of extreme consolidation, the role of Oracle Resource Manager is more important than ever, and Oracle Multitenant has extended its functionality.

In Oracle 12.1 Resource Manager was capable of handling:

  • Parallel server processes
  • CPU

In version 12.2 the following options have been introduced:

  • I/O rate limits for PDBs (not supported on Exadata environments, where I/O Resource Manager already exists)
  • Memory Management
  • Performance Profiles
  • Resource Monitoring

 

How to limit a PDB's I/O consumption

--Note: MAX_IOPS and MAX_MBPS are PDB-level parameters; run these statements while connected to the PDB to be limited.

--Limiting the number of I/Os per second.
ALTER SYSTEM SET MAX_IOPS = 9500 SCOPE = BOTH;

--Limiting the amount of MB per second.
ALTER SYSTEM SET MAX_MBPS = 70 SCOPE = BOTH;
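
A quick sanity check of the values in force can then be run from inside the PDB, for example:

--Show the current I/O limits of the PDB.
SELECT name, value
FROM   v$parameter
WHERE  name IN ('max_iops', 'max_mbps');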

 

Memory Management

Starting with Oracle 12.2, it is optionally possible to limit the memory utilization of a PDB using the following parameters, as sketched in the example below:

  • SGA_TARGET – Max SGA size for the PDB.
  • SGA_MIN_SIZE – Amount of guaranteed SGA size for the PDB.
  • DB_CACHE_SIZE – Amount of guaranteed Buffer Cache size for the PDB.
  • SHARED_POOL_SIZE – Amount of guaranteed Shared Pool size for the PDB.
  • PGA_AGGREGATE_LIMIT – Max PGA size for the PDB.
  • PGA_AGGREGATE_TARGET – Target PGA size for the PDB.
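
As a minimal sketch (assuming a PDB named PDB1; the sizes are purely illustrative), the limits are set from within the PDB:

--Connect to the PDB and set the memory limits (illustrative values).
ALTER SESSION SET CONTAINER = PDB1;
ALTER SYSTEM SET SGA_TARGET = 4G SCOPE = BOTH;
ALTER SYSTEM SET SGA_MIN_SIZE = 1G SCOPE = BOTH;
ALTER SYSTEM SET PGA_AGGREGATE_LIMIT = 3G SCOPE = BOTH;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 2G SCOPE = BOTH;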

 

Performance Profiles

Profiles allow creating standard, reusable Resource Profiles, based for example on different SLA levels (platinum, gold, silver), which can then be granted directly to the PDBs.

--Create a CDB profile directive (assumes the CDB plan PDB_Silver already exists).
BEGIN
 DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
 DBMS_RESOURCE_MANAGER.CREATE_CDB_PROFILE_DIRECTIVE(
  plan                  => 'PDB_Silver',
  profile               => 'silver',
  shares                => 2,
  utilization_limit     => 30,
  parallel_server_limit => 20,
  memory_limit          => 30);
 DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
 DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
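
A PDB is then associated with a profile through its DB_PERFORMANCE_PROFILE initialization parameter; a minimal sketch, assuming a PDB named PDB1 (the parameter is static, so the PDB needs to be restarted):

--Assign the profile to the PDB (takes effect at the next PDB restart).
ALTER SESSION SET CONTAINER = PDB1;
ALTER SYSTEM SET DB_PERFORMANCE_PROFILE = 'silver' SCOPE = SPFILE;
ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE OPEN;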

 

Resource Monitoring

The new system view V$RSRCPDBMETRIC can be used to monitor, and charge back, the I/O and memory consumption of each PDB.

SELECT r.CON_ID, p.PDB_NAME, r.IOPS,r.SGA_BYTES, r.SHARED_POOL_BYTES
FROM V$RSRCPDBMETRIC r, CDB_PDBS p WHERE r.CON_ID = p.CON_ID;

 


 

Oracle Performance Impact of High % User Rollbacks

Recently one of my customers asked me to investigate a database performance problem on a system to which, for multiple reasons, I didn't have direct access.

To troubleshoot the performance, I started by asking for a few AWR reports; at first glance I didn't spot any bottleneck, but while re-reading the statistics I found a strange ratio between the number of transactions and rollbacks.

 

Extract of AWR Load Profile

                 Per Second   Per Transaction
Rollbacks             476.8               0.9
Transactions          546.6

 

Because all other OS and database statistics looked healthy, I decided to follow the trail of the high percentage of transactions rolled back.

Before drawing any hasty conclusion, I checked the nature of the rollbacks:

  • transaction rollbacks: rollbacks executed automatically by Oracle, for example in case of a constraint violation (e.g. a primary key violation).
  • user rollbacks: the number of times users manually issue the ROLLBACK statement, or an error occurs during a user's transaction.

 

Extract of AWR Activity Stats

Statistic                                                       Total   per Second   per Trans
user commits                                                  125,878        69.90        0.13
user rollbacks                                                858,562       476.76        0.87
transaction rollbacks                                           2,704         0.75        0.00
rollback changes – undo records applied                        46,132        25.60        0.06
data blocks consistent reads – undo records applied           423,396       235.14        0.49
transaction tables consistent reads – undo records applied        128         0.08        0.00
consistent changes                                          5,599,562     3,109.48        12.7

 

The AWR analysis surfaced the following facts:

  • 87% of the user transactions ended with a rollback.
  • While rolling back, Oracle applied 25.6 undo records per second, which means that at least a fraction of the user rollbacks generated “real work“.
  • Even more important in terms of performance analysis:
    • number of data block consistent reads: 235.14 per second.
    • number of consistent changes: 3,109.48 per second.

 

Because the database load profile is 90% SELECT and 10% DML, and the values for data block consistent reads and consistent changes are significant, we can assume that most of the activity is concentrated on a small number of user objects.
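
Once direct access is available, a query along these lines (just a sketch of the planned follow-up, not data from this system) should help confirm whether the consistent read activity is concentrated on a few hot segments:

--Top segments by logical reads and block changes.
SELECT owner, object_name, object_type, statistic_name, value
FROM   v$segment_statistics
WHERE  statistic_name IN ('logical reads', 'db block changes')
ORDER  BY value DESC
FETCH FIRST 20 ROWS ONLY;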

 

Next step: I have to gain access to the system and continue the investigation…

 

 

Oracle 12c – Unified Audit Trail

 

Oracle 12c introduces the “Unified Audit Trail”, a faster, easier to access and more secure audit system.

It optionally allows staging the audit records in a dedicated memory buffer (sized by UNIFIED_AUDIT_SGA_QUEUE_SIZE), where they are temporarily grouped before being written to the audit table in batch transactions.

This new audit configuration substantially reduces the transactional overhead generated by auditing.
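
When this queued write mode is in use, any records still sitting in the memory buffer can be flushed to disk on demand, for example:

--Write any audit records still queued in the SGA buffer to the audit trail.
EXEC DBMS_AUDIT_MGMT.FLUSH_UNIFIED_AUDIT_TRAIL;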

 

Important improvements have also been made to simplify its use:

– One single audit trail for all audit data: the UNIFIED_AUDIT_TRAIL view replaces SYS.AUD$/DBA_AUDIT_TRAIL, SYS.FGA_LOG$/DBA_FGA_AUDIT_TRAIL, DVSYS.AUDIT_TRAIL$, V$XML_AUDIT_TRAIL and the OS audit files in adump.

– All audit data is stored in Oracle SecureFiles.

– Role segregation between:

  • DBA: responsible for maintaining free space and backups.
  • AUDIT_ADMIN: responsible for managing the audit policies and defining the data retention (see the retention sketch below).
  • AUDIT_VIEWER: in charge of the audit reports.
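
As an illustration of the retention task (the 90-day window below is only an example, not a recommendation), unified audit records are typically purged with DBMS_AUDIT_MGMT:

BEGIN
 --Mark everything older than 90 days as archived...
 DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
  audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
  last_archive_time => SYSTIMESTAMP - INTERVAL '90' DAY);
 --...and purge only the records older than that timestamp.
 DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
  audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
  use_last_arch_timestamp => TRUE);
END;
/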

 

Unified Audit Trail also introduces new security options worth mentioning:

– It is activated with a relink of the Oracle kernel and doesn't require additional steps or parameters.

– The audit data, owned by the new AUDSYS schema, is read-only for all users. Even the DBA privilege can't manipulate the audit records!

 

How to activate Unified Audit Trail

--Stop all Oracle processes: databases, listener and Enterprise Manager agent.

--Relink Oracle with the uniaud_on option.
$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk uniaud_on ioracle

--Restart all Oracle processes: databases, listener and Enterprise Manager agent.

--Check if Unified Auditing is active.
SQL> select * from v$option where PARAMETER='Unified Auditing';

INST_ID PARAMETER                      VALUE   CON_ID
------- ------------------------------ ------- ------
      1 Unified Auditing               TRUE         0

 

Optionally, but strongly recommended, the audit segments can be relocated from the SYSAUX tablespace to a dedicated one.

SQL> Create tablespace TBS_AUDIT datafile SIZE 2G AUTOEXTEND ON;

BEGIN
DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
 audit_trail_type => dbms_audit_mgmt.audit_trail_unified,
 audit_trail_location_value => 'TBS_AUDIT');
END;
/


SQL> select OWNER, SEGMENT_NAME, PARTITION_NAME, SEGMENT_TYPE, BYTES from dba_segments where TABLESPACE_NAME='TBS_AUDIT';

OWNER           SEGMENT_NAME                    PARTITION_NAME                SEGMENT_TYPE       BYTES
--------------- ------------------------------ ------------------------------ ------------------ ----------
AUDSYS           CLI_SWP$1b2a49f1$1$1           HIGH_PART                      TABLE PARTITION   65536
AUDSYS           CLI_SWP$1b2a49f1$1$1           PART_2                         TABLE PARTITION   65536
AUDSYS           CLI_LOB$1b2a49f1$1$1           HIGH_PART                      INDEX PARTITION   65536
AUDSYS           CLI_TIME$1b2a49f1$1$1          HIGH_PART                      INDEX PARTITION   65536
AUDSYS           CLI_LOB$1b2a49f1$1$1           PART_2                         INDEX PARTITION   65536
AUDSYS           CLI_TIME$1b2a49f1$1$1          PART_2                         INDEX PARTITION   65536
AUDSYS           CLI_SCN$1b2a49f1$1$1           PART_2                         INDEX PARTITION   65536
AUDSYS           SYS_IL0000091784C00014$$       SYS_IL_P241                    INDEX PARTITION   65536
AUDSYS           CLI_SCN$1b2a49f1$1$1           HIGH_PART                      INDEX PARTITION   65536
AUDSYS           SYS_IL0000091784C00014$$       SYS_IL_P246                    INDEX PARTITION   65536
AUDSYS           SYS_LOB0000091784C00014$$      SYS_LOB_P244                   LOB PARTITION     131072
AUDSYS           SYS_LOB0000091784C00014$$      SYS_LOB_P239                   LOB PARTITION     131072

12 rows selected.

 

The introduction of audit policies has brought flexibility and granularity to what can be audited; here is an example using the Oracle SYS_CONTEXT function.

CREATE AUDIT POLICY hr_employees
 PRIVILEGES CREATE TABLE
 ACTIONS UPDATE ON HR.EMPLOYEES
 WHEN 'SYS_CONTEXT(''USERENV'', ''SESSION_USER'') != ''HR_ADMIN'''
 EVALUATE PER STATEMENT;

AUDIT POLICY hr_employees;
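
Once the policy is enabled, the captured records can be reviewed from the unified view; a minimal sketch (UNIFIED_AUDIT_POLICIES may list several policy names, hence the LIKE):

--Show the actions captured by the hr_employees policy.
SELECT event_timestamp, dbusername, action_name, object_schema, object_name, sql_text
FROM   unified_audit_trail
WHERE  unified_audit_policies LIKE '%HR_EMPLOYEES%'
ORDER  BY event_timestamp;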

 

 

 

Oracle DB stored on ASM vs ACFS

Nowadays a new Oracle database environment with Grid Infrastructure has three main storage options:

  1. Third party clustered file system
  2. ASM Disk Groups
  3. ACFS File System

The first option was not in scope for this exercise; this post compares the results of the tests between ASM and ACFS, highlighting when to use one or the other to store 12c non-CDB or CDB databases.

The tests, conducted on different environments using Oracle version 12.1.0.2 with the July PSU, have shown results that contrast with what Oracle promotes for the Oracle Database Appliance (ODA) in the following paper: “Frequently Asked Questions Storing Database Files in ACFS on Oracle Database Appliance”.

 

Outcome of the tests

ASM remains the preferred option to achieve the best I/O performance, while ACFS introduces interesting features like database snapshots to provision new databases quickly and space-efficiently.

The performance gap between the two solutions is not negligible, as reported below by the AWR Top Timed Events sections of two PDBs sharing the same infrastructure and executing the same workload, but using ASM and ACFS storage respectively:

  • PDBASM: Pluggable Database stored on an ASM Disk Group
  • PDBACFS: Pluggable Database stored on an ACFS File System

 

 

PDBASM AWR – TOP Timed Events and Other Stats

[screenshots: PDBASM Top Timed Events and statistics]

 

 

PDBACFS AWR – TOP Timed Events and Other Stats

[screenshots: PDBACFS Top Timed Events and statistics]

 

Due to the different characteristics and results of ASM and ACFS, it is not possible to give a generic recommendation. Case by case, the choice should be driven by business needs such as maximum performance versus fast and space-efficient database cloning.
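
To illustrate the cloning use case, when the source PDB files are stored on ACFS a snapshot copy clone avoids a full copy of the datafiles; a minimal sketch with hypothetical names and paths (on 12.1 the source PDB must first be opened read-only):

--Thin clone of a PDB whose files are stored on ACFS (names and paths are hypothetical).
CREATE PLUGGABLE DATABASE PDBCLONE FROM PDBACFS
 FILE_NAME_CONVERT = ('/u02/oradata/acfs/PDBACFS/', '/u02/oradata/acfs/PDBCLONE/')
 SNAPSHOT COPY;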