Working with Oracle Database Resource Manager

Database consolidation is an extremely common practice among companies these days. There are several reasons why people choose to consolidate their databases, but the most important one is cost. Consolidation saves money, whether on electricity bills, database licences, or operational costs.

There are 3 types of database consolidation:

  1. Multiple databases within same Operating System (OS)
  2. Multiple schemas/workload within same database
  3. Multiple databases within same database (12c multi-tenant feature!)

Instance Caging allows you to limit the amount of CPU a database instance can use. Oracle Database Resource Manager (DBRM) allows you to slice up the CPU cycles and share them among different workloads, schemas, or databases. For engineered systems, IO Resource Manager (IORM) allows you to manage the I/O resources as well. However, I will not be covering IORM here.

Below are the types of resources that can be managed by Oracle Database Resource Manager. I will be using CPU in my examples later.

  • CPU
  • Degree of Parallelism Limit
  • Parallel Target Percentage
  • Parallel Queue Timeout
  • Active Session Pool with Queuing
  • Automatic Consumer Group Switching
  • Canceling SQL and Terminating Sessions
  • Execution Time Limit
  • Undo Pool
  • Idle Time Limit

Multiple Databases within the same OS – Instance Caging

If you are consolidating multiple databases within the same OS, instance caging will be the feature to use! It’s extremely easy to configure.

SQL> alter system set cpu_count = 4;
SQL> alter system set resource_manager_plan = 'default_plan' sid='*';

Note 1: CPU_COUNT is a dynamic parameter.
Note 2: RESOURCE_MANAGER_PLAN is a dynamic parameter.
Note 3: CPU_COUNT refers to CPU threads, not CPU cores.
Note 4: It’s a good idea to oversubscribe CPU_COUNT to fully utilise the CPU cores.
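To verify that instance caging is actually in effect, a quick check along these lines should work (V$RSRC_PLAN exposes an INSTANCE_CAGING column that reports ON/OFF for the top plan):

```sql
SQL> show parameter cpu_count
SQL> show parameter resource_manager_plan

-- Instance caging is active only when both CPU_COUNT and a
-- resource plan are set:
SQL> select name, instance_caging from v$rsrc_plan where is_top_plan = 'TRUE';
```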

Multiple schemas/workload within same database – Resource Manager

The steps below will create a very basic resource plan to manage the workload within the database. Sessions using the service ‘ONLINE_OLTP’ will get 40% of the CPU cycles when the CPUs are utilised at 100%.

SQL> exec dbms_resource_manager.create_pending_area();
SQL> exec dbms_resource_manager.create_consumer_group('ONLINE_APP','Online OLTP Application');
SQL> exec dbms_resource_manager.set_consumer_group_mapping(attribute => dbms_resource_manager.service_name, value => 'ONLINE_OLTP', consumer_group => 'ONLINE_APP');
SQL> exec dbms_resource_manager_privs.grant_switch_consumer_group(grantee_name => 'public', consumer_group => 'ONLINE_APP', grant_option => FALSE);
SQL> exec dbms_resource_manager.create_plan('DB_CONSOLIDATION','Plan for Consolidation');
SQL> exec dbms_resource_manager.create_plan_directive('DB_CONSOLIDATION','ONLINE_APP','Shares',mgmt_p1 => 40);
SQL> exec dbms_resource_manager.create_plan_directive('DB_CONSOLIDATION','OTHER_GROUPS','Catch-all',mgmt_p1 => 10);
SQL> exec dbms_resource_manager.validate_pending_area();
SQL> exec dbms_resource_manager.submit_pending_area();
SQL> alter system set resource_manager_plan='DB_CONSOLIDATION';

OTHER_GROUPS is the catch-all consumer group for any workload not mapped to a specific consumer group. If the resource plan does not contain a directive for OTHER_GROUPS, you will hit the following error.

ORA-29377: consumer group OTHER_GROUPS is not part of top-plan DB_CONSOLIDATION
 ORA-06512: at "SYS.DBMS_RMIN_SYS", line 3640
 ORA-06512: at "SYS.DBMS_RMIN_SYS", line 3691
 ORA-06512: at "SYS.DBMS_RMIN_SYS", line 4364
 ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 756
 ORA-06512: at line 1

To check the consumer groups within the DB_CONSOLIDATION plan:

SQL> select group_or_subplan from dba_rsrc_plan_directives where plan='DB_CONSOLIDATION';

To check the currently active resource plan in the database:

SQL> select name, cpu_managed from v$rsrc_plan where is_top_plan = 'TRUE';

Note 1: Switching resource plans is a dynamic operation.
Note 2: Remember to grant the necessary privilege for sessions to switch into the consumer group. Otherwise, the mapping rule will be ignored.
Note 3: If a session is being throttled, you will see “resmgr:cpu quantum” wait events in your AWR report.
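To sanity-check the mapping, each session’s effective consumer group can be read straight from V$SESSION:

```sql
-- Sessions connecting through the ONLINE_OLTP service should show
-- ONLINE_APP here; anything unmapped falls into OTHER_GROUPS:
SQL> select username, service_name, resource_consumer_group
     from v$session
     where username is not null;
```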

Multiple databases within same database (12c multi-tenant feature!) – Resource Manager

Do the following within the root container:

SQL> exec dbms_resource_manager.create_pending_area();
SQL> exec dbms_resource_manager.create_cdb_plan(plan => 'CDB_PLAN', comment => 'CDB Resource Plan');
SQL> exec dbms_resource_manager.create_cdb_plan_directive(plan => 'CDB_PLAN', pluggable_database => 'pdb1', shares => 3);
SQL> exec dbms_resource_manager.validate_pending_area();
SQL> exec dbms_resource_manager.submit_pending_area();
SQL> alter system set resource_manager_plan='CDB_PLAN';
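To review the shares assigned to each PDB under the plan, something like this should do (column list trimmed for readability):

```sql
SQL> select plan, pluggable_database, shares, utilization_limit
     from dba_cdb_rsrc_plan_directives
     where plan = 'CDB_PLAN';
```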

So what happens when we use all 3 methods to manage our resources?

(Screenshot: the three resource management layers — instance caging, CDB plan, and intra-database plan — combined.)

Assuming the system has 24 CPU threads, the question is: how many CPU threads will OLTP get?

Answer: 0.5*0.5*8 = 2 :))
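Spelling out the arithmetic: assuming (per the screenshot) instance caging limits the database to 8 CPU threads, the CDB plan gives the PDB a 50% share, and the intra-database plan gives OLTP a 50% share of that:

```sql
SQL> select 8 * 0.5 * 0.5 as oltp_cpu_threads from dual;

OLTP_CPU_THREADS
----------------
               2
```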

Wei Shan

Why should we not upgrade to Oracle 12c……YET

Oracle Database 12c is one of the most amazing releases so far. Personally, it’s also the first major release that I have experienced since I started out as an Oracle DBA on 11gR2. Everyone has been raving about its features and performance. There are several new features that got every DBA out there excited.

  1. Multi-Tenant – Database Consolidation
  2. In-Memory – Storing data in memory using columnar store
  3. Automatic Data Optimisation – Information lifecycle management feature

In this post, I shall discuss about the Multi-Tenant feature.

Multi-Tenant requires an additional licence on top of Enterprise Edition. According to the Oracle Global Price List, it costs about 17,500 USD per core. This does not come cheap. But consider this: if we can consolidate multiple under-utilised databases onto a single machine, we could potentially save money on EE licences instead. So please do the necessary planning before embarking on your upgrade/migration journey. According to the Oracle documentation, there are several features still unsupported. Please do take note, else you will be in for a surprise!

The following is a list of components that are not available or are restricted in this release of Oracle Database 12c Release 1:

  • Oracle Fail Safe release 4.1.0, shipping with Oracle Database 12c Release 1, does not support the new multitenant container database (CDB) feature introduced in Release 12c. This restriction is documented in the ReadMe for Oracle Fail Safe 4.1.0 patch set 1. Support for CDB will be introduced in a later Oracle Fail Safe release (not a patch set), currently targeted for release in H1CY2015.
  • Only rolling upgrades of Oracle Grid Infrastructure are supported when upgrading clusters that are operating in Oracle Flex Cluster mode. The Oracle Clusterware stack on all nodes in the cluster (Hub Node and Leaf Node) needs to be up and running prior to the upgrade. For environments operating in Standard cluster mode, rolling and non-rolling upgrades are supported.
  • Database Resident Connection Pooling (DRCP) is not supported with Transaction Guard.
  • XStream does not support LONG columns in databases with varying width multibyte character sets.
  • Java Database Connectivity (JDBC) thin driver Database Change Notification (DCN) is not supported by PDB.
  • Oracle Automatic Storage Management Cluster File System (Oracle ACFS) does not currently support Hybrid Columnar Compression (HCC).

The following is a list of features that are not available or are restricted for a multitenant container database (CDB) in this release of Oracle Database 12c Release 1:

  • Data Recovery Advisor
  • Flashback Pluggable Database
  • Flashback Transaction Backout
  • Database Change Notification
  • Continuous Query Notification (CQN)
  • Client Side Cache
  • Heat Map
  • Automatic Data Optimization
  • Oracle Streams

If you are currently using any of the features above, you might want to stick it out until those features are ready. I am not saying that you should never upgrade to Oracle Database 12c; that would be a silly thing to say. My point is: do not jump into a new technology just because it seems promising. Please do your due diligence and homework. In fact, according to the Oracle documentation, the non-CDB architecture (i.e. pre-12c-style databases) is deprecated. By June 2018, when 11gR2 is out of premier support, we should see plenty of consolidated database environments :)

Till then, please start planning for your journey to 12c!

Wei Shan

Oracle Database – Gathering Statistics

Statistics are very important in databases, as they allow the Oracle optimizer (CBO) to determine the most efficient way to execute an SQL statement. Stale statistics will lead to poor database performance because SQL execution will not be optimal. Oracle knows this is important and has the statistics gathered for you automatically via Automatic Optimizer Statistics Collection.

The types of Optimizer Statistics are:

Table statistics

  1. Number of rows
  2. Number of blocks
  3. Average row length

Column statistics

  1. Number of distinct values (NDV) in column
  2. Number of nulls in column
  3. Data distribution (histogram)
  4. Extended statistics

Index statistics

  1. Number of leaf blocks
  2. Levels
  3. Clustering factor

System statistics

  1. I/O performance and utilization
  2. CPU performance and utilization

So how should we gather statistics in Oracle manually?

There are two ways to do it. Prior to Oracle 10gR1, ANALYZE was the recommended approach. It is a command that can be easily executed.

SQL> analyze table schema_name.table_name [estimate|compute] statistics;

The COMPUTE clause makes Oracle perform a full table scan and insert the statistics into the data dictionary for the CBO to use. However, it is very resource-intensive; on a data warehouse especially, it might take a long time.

The ESTIMATE clause makes Oracle update its statistics using a sample of data from the tables. It might not be as accurate, but it uses far less time and fewer resources than the COMPUTE clause.
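For example, ESTIMATE can be given an explicit sample size (the 10 percent here is purely illustrative):

```sql
-- Sample 10% of the rows instead of scanning the full table:
SQL> analyze table schema_name.table_name estimate statistics sample 10 percent;
```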

However, if you are running Oracle 10G and onwards, you should not be using ANALYZE. It is there for backward compatibility and may be removed in future releases. You should be using DBMS_STATS instead.

SQL> exec DBMS_STATS.GATHER_TABLE_STATS ('schema_name','table_name');

There are a few advantages:

  • DBMS_STATS is a stored procedure which can be manipulated easily
  • DBMS_STATS can be executed in parallel
  • Support for external and partitioned tables
  • Collects system statistics as well
  • The Optimizer, which depends upon statistics, will eventually use only statistics that have been collected by DBMS_STATS
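As a sketch, DBMS_STATS can also gather an entire schema’s statistics in parallel, letting Oracle choose the sample size (the schema name and degree below are illustrative):

```sql
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'SCHEMA_NAME',                -- schema to analyse
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- let Oracle pick the sample size
    degree           => 4);                           -- gather in parallel
END;
/
```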

Hope this helps

Wei Shan

Oracle Database 12c – Getting my hands dirty

I attended an Oracle Database 12c In-Memory workshop recently. It’s one of the newer database advancements: a columnar store inside a dedicated memory area for faster access. It is quite similar to IBM DB2 BLU Acceleration. You can enable a column, a table, a tablespace or the entire database to be stored in-memory.

How to turn on the In-Memory option:


SQL> show parameter sga_target
------------------------------------ ----------- ------------------------------
sga_target big integer 1920M

SQL> alter system set inmemory_size = 1G scope=spfile;
SQL> shutdown immediate;
SQL> startup;

SQL> show parameter inmemory_size
------------------------------------ ----------- ------------------------------
inmemory_size big integer 1G

SQL> show sga

Total System Global Area 2013265920 bytes
Fixed Size                  2926080 bytes
Variable Size             301992448 bytes
Database Buffers          620756992 bytes
Redo Buffers               13848576 bytes
In-Memory Area           1073741824 bytes

SQL> select pool, alloc_bytes, used_bytes from v$inmemory_area;

POOL                       ALLOC_BYTES USED_BYTES
-------------------------- ----------- ----------
1MB POOL                     854589440          0
64KB POOL                    201326592          0

Note: In-Memory uses a sub-heap in the SGA; it is part of the SGA. Your SGA_TARGET value must be greater than your INMEMORY_SIZE, else the instance won’t start up. AMM/ASMM will not resize INMEMORY_SIZE automatically. The memory area is split into an In-Memory Compression Unit (IMCU) 1MB pool and a Snapshot Metadata Unit (SMU) 64KB pool. The ratio is determined internally by Oracle and is usually around 80:20.

Playing around with In-Memory:

SQL> alter table TEST inmemory;  -- enable the table for In-Memory

Note 1: Just because a table is enabled for In-Memory doesn’t mean that it gets populated into memory automatically! It will only get populated when population is triggered manually (DBMS_INMEMORY), when the table gets accessed, or when a priority is set.

Note 2: USER_TABLES will not show whether a table has only a few columns in-memory. You will have to query V$IM_COLUMN_LEVEL.

Note 3: USER_TABLES does not show whether an in-memory table is partitioned. You will have to query USER_TAB_PARTITIONS.
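For example, to populate the table eagerly and watch population progress (a sketch using the TEST table from above):

```sql
-- PRIORITY HIGH queues population in the background, without
-- waiting for the table to be scanned first:
SQL> alter table TEST inmemory priority high;

-- BYTES_NOT_POPULATED drops to 0 once population completes:
SQL> select segment_name, populate_status, bytes_not_populated
     from v$im_segments;
```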

My 2-cents on Oracle database 12c In-Memory Option:

The idea of having both row and columnar storage in the same database looks extremely tempting. However, this does not come cheap. It is licensed per core and is priced in the same range as Oracle RAC (Link). Storing data in column format is especially useful for data warehouses due to the nature of their queries. However, if your tables are updated frequently (DML), it will be pretty expensive to keep the row and column formats synchronised.

I think it all boils down to the nature of the workload. Please do perform your own internal testing and evaluation. You never know, a free feature like “Full Database Caching” might suit your environment better! (Link).

Wei Shan


Debate on a Facebook DBA group that leads to a better understanding of Oracle Database

I recently had a debate with someone on a DBA group who said that it is possible to recover a datafile without any sort of backup. This did not tally with my understanding at all! He claimed that he had successfully done it on his laptop and challenged me to try it myself. So I took the challenge :)

Setting up the stage for the testing.

SQL> create tablespace tbs_test datafile '/u01/oradata/orcl/tbstest1.dbf' size 5M;
SQL> alter tablespace tbs_test add datafile '/u01/oradata/orcl/tbstest2.dbf' size 5M;

SQL> alter database datafile '/u01/oradata/orcl/tbstest1.dbf' offline;
SQL> create table testtable2 tablespace tbs_test as select * from dba_data_files;

SQL> select count(*) from testtable2;

SQL> select owner,table_name,tablespace_name from dba_tables where table_name='TESTTABLE2';

SQL> select B.FILE_NAME from dba_segments a, dba_data_files b where a.header_file=b.file_ID and segment_name='TESTTABLE2';

The above ensures that the table’s segment resides only in the ‘/u01/oradata/orcl/tbstest2.dbf’ datafile. Now comes test 1, which successfully restored the datafile without any backups!

 SQL> !rm '/u01/oradata/orcl/tbstest2.dbf'
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
[oracle@oel ~]$ sqlplus / as sysdba

SQL*Plus: Release Production on Thu Oct 30 01:19:54 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 409194496 bytes
Fixed Size 2228864 bytes
Variable Size 310381952 bytes
Database Buffers 92274688 bytes
Redo Buffers 4308992 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 7 - see DBWR trace file
ORA-01110: data file 7: '/u01/oradata/orcl/tbstest2.dbf'
SQL> alter database datafile '/u01/oradata/orcl/tbstest2.dbf' offline;
Database altered.
SQL> alter database create datafile '/u01/oradata/orcl/tbstest2.dbf';
Database altered.

SQL> recover datafile '/u01/oradata/orcl/tbstest2.dbf';
Media recovery complete.
SQL> alter database datafile '/u01/oradata/orcl/tbstest2.dbf' online;
Database altered.

OMG! Why did it WORK??!! I must admit I was quite intrigued at this point. Please be patient and read on about my second test!

SQL> !rm '/u01/oradata/orcl/tbstest2.dbf'

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 409194496 bytes
Fixed Size 2228864 bytes
Variable Size 310381952 bytes
Database Buffers 92274688 bytes
Redo Buffers 4308992 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 7 - see DBWR trace file
ORA-01110: data file 7: '/u01/oradata/orcl/tbstest2.dbf'
SQL> alter database datafile '/u01/oradata/orcl/tbstest2.dbf' offline;
Database altered.

SQL> alter database open;
Database altered.

SQL> alter system switch logfile;
System altered.

SQL> alter system switch logfile;
System altered.

SQL> alter system switch logfile;
System altered.

SQL> !ls -al /u01/archive
total 732
drwxr-xr-x. 2 oracle dba 4096 Oct 30 01:23 .
drwxr-xr-x. 6 root root 4096 Oct 30 00:39 ..
-rw-r-----. 1 oracle dba 262656 Oct 30 01:22 1_21_838508177.dbf
-rw-r-----. 1 oracle dba 466944 Oct 30 01:23 1_22_838508177.dbf
-rw-r-----. 1 oracle dba 1024 Oct 30 01:23 1_23_838508177.dbf
-rw-r-----. 1 oracle dba 2048 Oct 30 01:23 1_24_838508177.dbf

SQL> !rm /u01/archive/*

SQL> alter database create datafile '/u01/oradata/orcl/tbstest2.dbf';
Database altered.

SQL> recover datafile '/u01/oradata/orcl/tbstest2.dbf';
ORA-00279: change 1447147 generated at 10/30/2014 00:49:02 needed for thread 1
ORA-00289: suggestion : /u01/archive/1_20_838508177.dbf
ORA-00280: change 1447147 for thread 1 is in sequence #20
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}

So the reason for the successful restore was the archived logs and the data still sitting in the redo logs: ALTER DATABASE CREATE DATAFILE recreates an empty datafile, and media recovery then replays every change to that file from the redo stream. That is why the second test, with the archivelogs deleted, prompted for a missing log instead. I had almost forgotten my basics!

Wei Shan

NoSQL Tech Comparison Report

In the relational database world, only a few companies own the majority of the market share. It is usually very easy to decide which technology to go with, depending on budget and business objectives.

  • Oracle
  • Microsoft
  • DB2
  • Teradata

However, in the NoSQL world, it is far more difficult to make a decision. There are plenty of variants, each with its own advantages and disadvantages. The following report from DBTA is a very good source to aid your decision making.

Clean it up

Wei Shan:

An extremely good read for understanding how Oracle deals with clean/dirty buffers using the UNDO tablespace. It’s pretty confusing though!

Originally posted on Oracle Scratchpad:

There is some confusion about the expression “clean” in Oracle circles, so I thought I’d write a short note to explain the different ways in which the word may be applied to Oracle blocks. There are five terms to consider:

  1. clean
  2. commit cleanout
  3. block cleanout
  4. delayed block cleanout
  5. delayed logging block cleanout

[ALERT] – Oracle products affected by Shellshock Bash Bug

This Security Alert addresses CVE-2014-7169 (initially identified as CVE-2014-6271), a publicly disclosed vulnerability affecting GNU Bash. GNU Bash is a popular open source command line shell incorporated into Linux and other widely used operating systems. This vulnerability affects multiple Oracle products. This vulnerability may be remotely exploitable without authentication, i.e. it may be exploited over a network without the need for a username and password. A remote user can exploit this vulnerability to execute arbitrary code on systems that are running affected versions of Bash.

Some of the affected products are;

  • Solaris
  • Linux
  • Oracle Audit Vault and Database Firewall
  • All Oracle Engineered Systems

Public-facing systems SHOULD be patched immediately.

Please refer to this link.

Wei Shan
