How I became an (Oracle) Database Administrator

Every now and then, I get questions asking how I got into the field of database administration. Getting a role as a DBA is tricky. If you look at the requirements for a DBA position, they usually ask for more than 3 years of experience. Very few companies look for fresh graduates for a DBA role.

After completing my 2 years of mandatory National Service (NS), I began to look for an IT job. Those 2 years of NS reduced my IT knowledge to almost nothing; I was completely out of touch with the industry. I couldn't even tell you what version the .NET Framework was at the time. I attended an interview (System Engineer) with a web-hosting company in Singapore. It was a horrible experience.

Interviewer: So, why do you want to join the IT industry?
Me: Because I love technology and I think it’s rather interesting.

He started asking me technical questions about the position and I could barely answer his questions. At the end of the interview, he made this remark.

I’m going to be frank with you because I do not want to waste your time. I will not hire you. You have no talent in IT. Since you mentioned that you are interested in technology, why don’t you go and try to sell handphones. It will probably suit you better.

I was devastated. I felt so insulted and ashamed of myself. I had never been insulted like that before. I went home and cried my lungs out. I wasn't in the mood to attend any more interviews after that, and I was seriously contemplating what to do with my career. Then, my brother introduced me to a free 2-month Oracle database course (the MDeC KDI programme) in Kuala Lumpur, Malaysia.

I worked my ass off for 2 months straight and got my first Oracle certification (OCA 11g DBA). When I got back to Singapore, I applied for many Oracle DBA positions, and after about 3 months of job hunting I finally got an interview with Seagate. The interview went like this.

He asked me 2 technical questions.

1. Purpose of Redo log
2. Purpose of Undo log

Interviewer: Why do you want to join us?
Me: Because I want to learn. The job scope is exactly what I am looking for. The salary does not matter to me, you can pay me $800 and I will still want the job.

And yes, I got hired in the end. :)

Summary
Today, I am still very grateful to my hiring manager at that time (I still have coffee with him once in a while). He told me the only reason he hired me was my attitude: he knew I was eager to prove myself, and that I would be willing to put in everything I had to make it work. My advice to fellow IT professionals: keep that learning attitude and your chance will come. Don't give up, and don't get discouraged by the naysayers.

We have a saying in Chinese.

机会是留给有准备的人.

Opportunities are reserved for those who are prepared.

Preparing ASM disks for Oracle RAC Installation (Solaris SPARC)

Environment:
1. Solaris SPARC 64bit 11.2
2. Oracle RAC 12.1.0.2
3. Storage using HDS VSP

While I was doing my first Solaris SPARC RAC installation for a customer, I was slightly stumped. My previous installations were all on Linux, with or without ASMLib: easy and very well documented. I was not nearly as experienced with RAC installation on SPARC!

When I got on-site, the Solaris engineer told me that the LUNs (27 of them!) had been zoned to the system. When I ran "ls /dev/rdsk/", I saw plenty of disks!

c1d0s0 c1d11s2 c1d13s5 c1d16s0 c1d18s3 c1d1s6 c1d22s1 c1d24s4 c1d26s7 c1d3s3 c1d5s6 c1d8s1
c1d0s1 c1d11s3 c1d13s6 c1d16s1 c1d18s4 c1d2 c1d22s2 c1d24s5 c1d27s0 c1d3s4 c1d5s7 c1d8s2

I have formatted the above for readability.

Apparently, Solaris disks have 8 slices, from s0 to s7.

According to MOS[Doc ID 368840.1]:

Partition #2 is a standard partition and it SHOULD NOT be changed, as it reflects the total capacity of the disk. All other partitions are available for configuration. All this information for the partitions and other data is contained in a special structure stored within the first 512 bytes of the disk, on the first cylinder. This structure is called the VTOC (Volume Table of Contents). Actions like re-writing the VTOC will cause the disk to be lost. Thus, it is best to modify partition 0, starting at a non-zero cylinder.

So, how do I actually format the disk correctly?

#format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
0. c1d1 <HITACHI-OPEN-V -SUN-7303-100.00GB>
/virtual-devices@100/channel-devices@200/disk@1
1. c1d2 <HITACHI-OPEN-V -SUN-7303-50.00GB>
/virtual-devices@100/channel-devices@200/disk@2
2. c1d3 <HITACHI-OPEN-V -SUN-7303 cyl 1363 alt 2 hd 15 sec 512>
/virtual-devices@100/channel-devices@200/disk@3
3. c1d4 <HITACHI-OPEN-V -SUN-7303 cyl 1363 alt 2 hd 15 sec 512>
/virtual-devices@100/channel-devices@200/disk@4

Specify disk (enter its number): 3

partition> 0
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 - 1362 4.99GB (1363/0/0) 10467840

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 3
Enter partition size[10444800b, 1360c, 1362e, 5100.00mb, 4.98gb]: $
partition> p
Current partition table (unnamed):
Total disk cylinders available: 1363 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 unassigned wm 3 - 1362 4.98GB (1360/0/0) 10444800
1 unassigned wu 0 0 (0/0/0) 0
2 backup wu 0 - 1362 4.99GB (1363/0/0) 10467840
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0

Ta-Da! Now, you should see c1d4s0 as /dev/rdsk/c1d4s0 and be able to view it as a candidate disk!
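One more step before Grid Infrastructure will happily use the slice: the raw device must be readable and writable by the Grid owner. A minimal sketch, assuming the Grid owner is "grid" and the ASM admin group is "asmadmin" (adjust the user, group and device names for your own install):

```
# chown grid:asmadmin /dev/rdsk/c1d4s0
# chmod 660 /dev/rdsk/c1d4s0
# ls -lL /dev/rdsk/c1d4s0
```

With the ownership in place, point the ASM discovery string at the slices (e.g. /dev/rdsk/*s0) during the Grid installation and they should appear as candidate disks.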

Regards,
Wei Shan

PostgreSQL – PG_CTL Utility

pg_ctl is a utility for initializing, starting, stopping, or restarting the PostgreSQL database server or displaying the status of a running server.

#pg_ctl --help
pg_ctl is a utility to initialize, start, stop, or control a PostgreSQL server.

Usage:
pg_ctl init[db] [-D DATADIR] [-s] [-o "OPTIONS"]
pg_ctl start [-w] [-t SECS] [-D DATADIR] [-s] [-l FILENAME] [-o "OPTIONS"]
pg_ctl stop [-W] [-t SECS] [-D DATADIR] [-s] [-m SHUTDOWN-MODE]
pg_ctl restart [-w] [-t SECS] [-D DATADIR] [-s] [-m SHUTDOWN-MODE]
[-o "OPTIONS"]
pg_ctl reload [-D DATADIR] [-s]
pg_ctl status [-D DATADIR]
pg_ctl promote [-D DATADIR] [-s]
pg_ctl kill SIGNALNAME PID

Common options:
-D, --pgdata=DATADIR location of the database storage area
-s, --silent only print errors, no informational messages
-t, --timeout=SECS seconds to wait when using -w option
-V, --version output version information, then exit
-w wait until operation completes
-W do not wait until operation completes
-?, --help show this help, then exit
(The default is to wait for shutdown, but not for start or restart.)

If the -D option is omitted, the environment variable PGDATA is used.

Options for start or restart:
-c, --core-files allow postgres to produce core files
-l, --log=FILENAME write (or append) server log to FILENAME
-o OPTIONS command line options to pass to postgres
(PostgreSQL server executable) or initdb
-p PATH-TO-POSTGRES normally not necessary

Options for stop or restart:
-m, --mode=MODE MODE can be "smart", "fast", or "immediate"

Shutdown modes are:
smart quit after all clients have disconnected
fast quit directly, with proper shutdown
immediate quit without complete shutdown; will lead to recovery on restart

I wanted to test the difference in effect between “pg_ctl restart” and “pg_ctl reload”.

In session1, I run the following:

session1=# start transaction;
START TRANSACTION
session1=# update customer set title='Mrs' where customer_id='20';
UPDATE 1

PG_CTL RELOAD

Now, I execute the reload command in session2.

session2$ pg_ctl reload
server signaled

Back to session1.

session1=# end;
COMMIT

It works. The connection was not dropped or affected.

PG_CTL RESTART

Now, I execute the restart command in session2.

session2$ pg_ctl restart -m immediate
waiting for server to shut down.... done
server stopped
server starting

Back to session1.

session1=# end;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.

Summary

pg_ctl restart performs a full restart of the postgres server, while pg_ctl reload only signals the postmaster to re-read the configuration files and reload the settings. The effect is similar to a Linux service reload.
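As a side note, the same reload can be triggered from SQL via pg_reload_conf(), which is handy when you only have a psql session and no shell access to the server:

```
postgres=# select pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)
```

Either way, only reloadable parameters take effect; settings such as shared_buffers still require a full restart.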

Regards,
Wei Shan

Difference between PostgreSQL and Postgres Plus Advanced Server

Recently, I took up an opportunity to start working with not-so-mainstream databases like PostgreSQL and Redis. It's a really big challenge for me, as I have worked with Oracle Database for more than 4 years now. I started to research PostgreSQL and found a huge amount of information. As I pick up PostgreSQL, I hope to document the knowledge here.

What is PostgreSQL?

PostgreSQL is an open source relational database management system (RDBMS) that is supported on the majority of operating systems, including Unix, Linux and Windows. It is extremely mature, with more than 15 years of active development and a proven architecture that is quite similar to Oracle Database.

Taken from the PostgreSQL website:

An enterprise class database, PostgreSQL boasts sophisticated features such as Multi-Version Concurrency Control (MVCC), point in time recovery, tablespaces, asynchronous replication, nested transactions (savepoints), online/hot backups, a sophisticated query planner/optimizer, and write ahead logging for fault tolerance. It supports international character sets, multibyte character encodings, Unicode, and it is locale-aware for sorting, case-sensitivity, and formatting. It is highly scalable both in the sheer quantity of data it can manage and in the number of concurrent users it can accommodate. There are active PostgreSQL systems in production environments that manage in excess of 4 terabytes of data. Some general PostgreSQL limits are included in the table below.

What is Postgres Plus Advanced Server?

Postgres Plus Advanced Server is a distribution of PostgreSQL maintained and supported by EnterpriseDB. It is typically used by companies that require commercial maintenance support.

Taken from here:

EnterpriseDB adds additional functionality and capabilities in the following areas:

1. Database Feature Compatibility for Oracle
2. Performance Features
3. Productivity Tools – Postgres Enterprise Manager, xDB Replication Server and the Migration Toolkit
4. Indemnification (Read: SUPPORT)

Postgres Plus Advanced Server contains all the features and functionality of PostgreSQL and adds additional performance features, security, and database feature compatibility for Oracle. The performance features include: a SQL Profiler, Index Advisor, Query Hints, Infinite Cache, DynaTune, a Hi-speed bulk loader, Bulk Collect/Fetch/Binding, and Multi-threaded Replication. Security is improved with the addition of server side code obfuscation, fine grained audit logging and SQL injection attack prevention. Finally, Advanced Server provides database compatibility that includes: PL/SQL support, a PL/SQL Debugger, Oracle SQL Extensions, OCI support, User Defined Objects, popular Function Packages, Dictionary Views, Database Links, EDB*Plus, EDB*Wrap, and SQL wait statistics.

Summary

In my opinion, customers who are migrating from Oracle to PostgreSQL would go for Postgres Plus Advanced Server, as it provides plenty of compatibility support for Oracle databases. However, even if you stay on the open source version, it's always good to sign up for a support contract with them. You never know when you will need the extra help!

Regards,
Wei Shan

Working with Oracle Database Resource Manager

Database consolidation is an extremely common practice among companies these days. There are several reasons why people choose to consolidate their databases, the most important being cost. Consolidation allows you to save on electricity bills, database licences and operational costs.

There are 3 types of database consolidation:

  1. Multiple databases within same Operating System (OS)
  2. Multiple schemas/workload within same database
  3. Multiple databases within same database (12c multi-tenant feature!)

Instance Caging allows you to cap the amount of CPU a database instance can use. Oracle Database Resource Manager (DBRM) allows you to slice the CPU cycles and share them among different workloads, schemas or databases. For engineered systems, IO Resource Manager (IORM) allows you to manage the IO resources as well; however, I will not be covering IORM here.

Below are the types of resources that can be managed by Oracle Database Resource Manager. I will be using CPU in my examples later.

  • CPU
  • Degree of Parallelism Limit
  • Parallel Target Percentage
  • Parallel Queue Timeout
  • Active Session Pool with Queuing
  • Automatic Consumer Group Switching
  • Canceling SQL and Terminating Sessions
  • Execution Time Limit
  • Undo Pool
  • Idle Time Limit

Multiple Databases within the same OS – Instance Caging

If you are consolidating multiple databases within the same OS, instance caging will be the feature to use! It’s extremely easy to configure.

SQL> alter system set cpu_count = 4;
SQL> alter system set resource_manager_plan = 'default_plan' sid='*';

Note 1: CPU_COUNT is a dynamic parameter
Note 2: RESOURCE_MANAGER_PLAN is a dynamic parameter
Note 3: CPU_COUNT refers to CPU threads, not CPU cores.
Note 4: It's a good idea to oversubscribe CPU_COUNT (the sum across instances can exceed the physical thread count) to fully utilise the CPU cores
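A quick sanity check that instance caging is actually in effect (the INSTANCE_CAGING column should report ON once both parameters above are set; column availability assumed per 11gR2 and later):

```
SQL> select name, instance_caging from v$rsrc_plan where is_top_plan = 'TRUE';
```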

Multiple schemas/workload within same database – Resource Manager

The steps below create a very basic resource plan to manage workload within the database. Sessions connecting through the service 'ONLINE_OLTP' will get at least 40% of the CPU cycles when the CPU is 100% utilised.

SQL> exec dbms_resource_manager.create_pending_area();
SQL> exec dbms_resource_manager.create_consumer_group('ONLINE_APP','Online OTLP Application');
SQL> exec dbms_resource_manager.set_consumer_group_mapping(attribute => dbms_resource_manager.service_name, value => 'ONLINE_OLTP', consumer_group => 'ONLINE_APP');
SQL> exec dbms_resource_manager_privs.grant_switch_consumer_group(grantee_name => 'public', consumer_group => 'ONLINE_APP', grant_option => FALSE);
SQL> exec dbms_resource_manager.create_plan('DB_CONSOLIDATION','Plan for Consolidation');
SQL> exec dbms_resource_manager.create_plan_directive('DB_CONSOLIDATION','ONLINE_APP','Online OLTP directive', mgmt_p1 => 40);
SQL> exec dbms_resource_manager.create_plan_directive('DB_CONSOLIDATION','OTHER_GROUPS','Catch-all directive', mgmt_p1 => 10);
SQL> exec dbms_resource_manager.validate_pending_area();
SQL> exec dbms_resource_manager.submit_pending_area();
SQL> alter system set resource_manager_plan='DB_CONSOLIDATION';

OTHER_GROUPS is the catch-all consumer group for any workload not mapped to a specific consumer group. If the resource plan does not contain a directive for OTHER_GROUPS, you will hit the following error.

ORA-29377: consumer group OTHER_GROUPS is not part of top-plan DB_CONSOLIDATION
 ORA-06512: at "SYS.DBMS_RMIN_SYS", line 3640
 ORA-06512: at "SYS.DBMS_RMIN_SYS", line 3691
 ORA-06512: at "SYS.DBMS_RMIN_SYS", line 4364
 ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 756
 ORA-06512: at line 1

To check which consumer groups are within the DB_CONSOLIDATION plan:

SQL> select group_or_subplan from dba_rsrc_plan_directives where plan='DB_CONSOLIDATION';
GROUP_OR_SUBPLAN
 --------------------------------------------------------------------------------
 ONLINE_APP
 OTHER_GROUPS

To check the currently active resource plan in the database:

SQL> select name, cpu_managed from v$rsrc_plan where is_top_plan = 'TRUE';
NAME CPU
 -------------------------------- ---
 DB_CONSOLIDATION ON

Note 1: Switching the resource plan is a dynamic operation.
Note 2: Remember to grant the necessary privilege for the session to switch into the consumer group. Else, the mapping rule will be ignored.
Note 3: If a session is being throttled, you will see "resmgr:cpu quantum" wait events in your AWR report.
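Besides AWR, you can also watch CPU throttling live through the resource manager views; a quick sketch (column names as documented for 11g/12c):

```
SQL> select name, consumed_cpu_time, cpu_wait_time from v$rsrc_consumer_group;
```

A steadily growing CPU_WAIT_TIME for a consumer group means its sessions are being held back by the plan.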

Multiple databases within same database (12c multi-tenant feature!) – Resource Manager

Do the following within the root container:

SQL> exec dbms_resource_manager.create_pending_area();
SQL> exec dbms_resource_manager.create_cdb_plan(plan => 'CDB_PLAN', comment => 'CDB Resource Plan');
SQL> exec dbms_resource_manager.create_cdb_plan_directive(plan => 'CDB_PLAN', pluggable_database => 'pdb1', shares => 3);
SQL> exec dbms_resource_manager.validate_pending_area();
SQL> exec dbms_resource_manager.submit_pending_area();
SQL> alter system set resource_manager_plan='CDB_PLAN';

So what happens when we use all 3 methods to manage our resources?

Assuming the system has 24 CPU threads, the instance is caged to 8 of them, the CDB plan gives the PDB a 50% share, and the consumer group gets 50% of the CPU within the PDB, how many CPU threads will OLTP get?

Answer: the limits multiply: 0.5 × 0.5 × 8 = 2 CPU threads :))

Cheers!
Wei Shan

Why should we not upgrade to Oracle 12c……YET

Oracle Database 12c is one of the most amazing releases so far. Personally, it's also the first major release that I have experienced since I started out as an Oracle DBA on 11gR2. Everyone has been raving about its features and performance. Several new features have got every DBA out there excited.

  1. Multi-Tenant – Database Consolidation
  2. In-Memory – Storing data in memory using columnar store
  3. Automatic Data Optimisation – Information lifecycle management feature

In this post, I shall discuss the Multi-Tenant feature.

Multi-Tenant requires an additional licence on top of Enterprise Edition. According to the Oracle Global Price List, it costs about 17,500 USD per core, which does not come cheap. But consider this: if we can consolidate multiple under-utilised databases onto a single machine, we could potentially save money on EE licences instead. So please do the necessary planning before embarking on your upgrade/migration journey. According to the Oracle documentation, several features are still unsupported on 12.1.0.2. Please do take note, else you will be in for a surprise!

The following is a list of components that are not available or are restricted in this release of Oracle Database 12c Release 1 (12.1.0.2):

  • Oracle Fail Safe release 4.1.0, shipping with Oracle Database 12c Release 1 (12.1.0.2), does not support the new multitenant container database (CDB) feature introduced in Release 12c. This restriction is documented in the ReadMe for Oracle Fail Safe 4.1.0 patch set 1 (4.1.0.1). Support for CDB will be introduced in Oracle Fail Safe release 4.1.1.0 (not a patch set) currently targeted to release in H1CY2015.
  • Only rolling upgrades to Oracle Grid Infrastructure release 12.1.0.2 are supported when upgrading Oracle Grid Infrastructure clusters that are operating in Oracle Flex Cluster mode. The Oracle Clusterware stack on all nodes in the cluster (Hub Node and Leaf Node) needs to be up and running prior to the upgrade. For environments operating in Standard cluster mode, rolling and non-rolling upgrades are supported.
  • Database Resident Connection Pooling (DRCP) is not supported with Transaction Guard.
  • XStream does not support LONG columns in databases with varying width multibyte character sets.
  • Java Database Connectivity (JDBC) thin driver Database Change Notification (DCN) is not supported by PDB.
  • Oracle Automatic Storage Management Cluster File System (Oracle ACFS) does not currently support Hybrid Columnar Compression (HCC).

The following is a list of features that are not available or are restricted for a multitenant container database (CDB) in this release of Oracle Database 12c Release 1 (12.1.0.2):

  • DBVERIFY
  • Data Recovery Advisor
  • Flashback Pluggable Database
  • Flashback Transaction Backout
  • Database Change Notification
  • Continuous Query Notification (CQN)
  • Client Side Cache
  • Heat Map
  • Automatic Data Optimization
  • Oracle Streams

If you are currently using any of the features above, you might want to stick it out until the features are ready. I am not saying that you should never upgrade to Oracle Database 12c; that would be a silly thing to say. My point is: do not jump into a new technology just because it seems promising. Please do your due diligence and homework. In fact, according to the Oracle documentation, the non-CDB architecture (i.e. pre-12c-style databases) is deprecated. By June 2018, when 11gR2 is out of premier support, we should see plenty of consolidated database environments :)

Till then, please start planning for your journey to 12c!

Cheers,
Wei Shan

Oracle Database – Gathering Statistics

Statistics are very important in databases, as they allow the Oracle cost-based optimizer (CBO) to determine the most efficient execution plan for a SQL statement. Stale statistics will lead to poor database performance because the execution plans will not be optimal. Oracle knows this is important and gathers statistics for you automatically via Automatic Optimizer Statistics Collection.

The types of Optimizer Statistics are:

Table statistics

  1. Number of rows
  2. Number of blocks
  3. Average row length

Column statistics

  1. Number of distinct values (NDV) in column
  2. Number of nulls in column
  3. Data distribution (histogram)
  4. Extended statistics

Index statistics

  1. Number of leaf blocks
  2. Levels
  3. Clustering factor

System statistics

  1. I/O performance and utilization
  2. CPU performance and utilization

So how should we gather statistics in Oracle manually?

  • ANALYZE
  • DBMS_STATS

Prior to Oracle 10gR1, ANALYZE was the recommended way. It is a command that can be easily executed.

SQL> analyze table schema_name.table_name [estimate|compute] statistics;

The COMPUTE clause makes Oracle perform a full table scan and insert the statistics into the data dictionary for the CBO to use. However, it is very resource intensive and, especially for a data warehouse, it might take a long time.

The ESTIMATE clause makes Oracle update its statistics using a sample of data from the tables. It might not be as accurate; however, it uses far less time and resources than the COMPUTE clause.

However, if you are running Oracle 10g onwards, you should not be using ANALYZE for this. It remains only for backward compatibility and may be removed in future releases. You should be using DBMS_STATS instead.

SQL> exec DBMS_STATS.GATHER_TABLE_STATS ('schema_name','table_name');

There are a few advantages:

  • DBMS_STATS is a stored procedure which can be manipulated easily
  • DBMS_STATS can be executed in parallel
  • Support for external and partitioned tables
  • Collects system statistics as well
  • The Optimizer, which depends upon statistics, will eventually use only statistics that have been collected by DBMS_STATS
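As a sketch of a fuller DBMS_STATS call (the schema and table names are placeholders; the parameters shown are standard DBMS_STATS options):

```
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCHEMA_NAME',
    tabname          => 'TABLE_NAME',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- let Oracle pick the sample size
    degree           => 4,                             -- gather in parallel
    cascade          => TRUE);                         -- include index statistics
END;
/
```

AUTO_SAMPLE_SIZE lets Oracle choose an appropriate sample, DEGREE gathers in parallel, and CASCADE collects the index statistics in the same pass.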

Hope this helps

Cheers,
Wei Shan

Oracle Database 12c – Getting my hands dirty

I attended an Oracle Database 12c In-Memory workshop recently. It's one of the newer database advancements: it introduces a columnar store inside a dedicated memory area for faster analytic access. It is quite similar to IBM DB2 BLU Acceleration. You can enable a column, a table, a tablespace or the entire database to be stored in-memory.

How to turn on the In-Memory option.

SQL> ALTER SYSTEM SET INMEMORY_SIZE=1G SCOPE=SPFILE;

SQL> show parameter sga_target
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_target big integer 1920M

SQL> shutdown immediate;
SQL> startup;

SQL> show parameter inmemory_size
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
inmemory_size big integer 1G

SQL> SELECT NAME, VALUE FROM V$SGA;
NAME VALUE
-------------------- ----------
Fixed Size 2926080
Variable Size 301992448
Database Buffers 620756992
Redo Buffers 13848576
In-Memory Area 1073741824

SQL> SELECT POOL, ALLOC_BYTES, USED_BYTES FROM V$INMEMORY_AREA;

POOL ALLOC_BYTES USED_BYTES
-------------------------- ----------- ----------
1MB POOL 854589440 0
64KB POOL 201326592 0

Note: In-Memory uses a sub-heap in the SGA; it is part of the SGA. Your SGA_TARGET value must be greater than your INMEMORY_SIZE, else the instance won't start up. AMM/ASMM will not resize your INMEMORY_SIZE automatically. The memory area is split into an In-Memory Compression Unit (IMCU) 1MB pool and a Snapshot Metadata Unit (SMU) 64KB pool. The ratio is determined internally by Oracle and is usually around 80:20.

Playing around with In-Memory:

SQL> alter table TEST inmemory; => enable a table for inmemory
SQL> exec DBMS_INMEMORY.POPULATE('<SCHEMA_NAME>','<TABLE_NAME>');
SQL> SELECT SEGMENT_NAME, POPULATE_STATUS, BYTES, BYTES_NOT_POPULATED FROM V$IM_SEGMENTS WHERE SEGMENT_NAME = 'TEST';
SEGMENT_NAME POPULATE_ BYTES BYTES_NOT_POPULATED
---------------------------------------- --------- ---------- -------------------
TEST COMPLETED 13631488 0
SQL> SELECT INMEMORY, INMEMORY_PRIORITY, INMEMORY_COMPRESSION, INMEMORY_DISTRIBUTE, INMEMORY_DUPLICATE FROM USER_TABLES WHERE TABLE_NAME = 'TEST';
INMEMORY INMEMORY INMEMORY_COMPRESS INMEMORY_DISTRI INMEMORY_DUPL
-------- -------- ----------------- --------------- -------------
ENABLED NONE FOR QUERY LOW AUTO NO DUPLICATE
SQL> SELECT TABLE_NAME, COLUMN_NAME, INMEMORY_COMPRESSION FROM V$IM_COLUMN_LEVEL WHERE TABLE_NAME = 'TEST';
SQL> SELECT TABLE_NAME, INMEMORY, INMEMORY_PRIORITY, INMEMORY_DISTRIBUTE, INMEMORY_COMPRESSION FROM USER_TAB_PARTITIONS;

Note.1: Just because a table is enabled for In-Memory doesn't mean that it gets populated into memory automatically! It only gets populated when population is triggered manually (DBMS_INMEMORY), when the table gets accessed, or when a priority is set.

Note.2: USER_TABLES will not show whether only a few columns of the table are in-memory. You will have to query V$IM_COLUMN_LEVEL.

Note.3: USER_TABLES does not show whether an in-memory table is partitioned. You will have to query USER_TAB_PARTITIONS.
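For instance, to avoid the lazy population described in Note.1, you can assign an In-Memory priority so the table is populated in the background after instance startup (PRIORITY CRITICAL is standard 12c syntax; TEST is the sample table from above):

```
SQL> alter table TEST inmemory priority critical;
```

Tables with priority NONE (the default) are only populated on first access or on a manual trigger.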

My 2-cents on Oracle database 12c In-Memory Option:

The idea of having both row and columnar storage in the same database looks extremely tempting. However, this does not come cheap: it is licensed by core and priced in the same range as Oracle RAC (Link). Storing data in column format is especially useful for data warehouses due to the nature of their queries. However, if your tables are updated frequently (DML), it will be pretty expensive to keep the data in both row and column formats synchronised.

I think it all boils down to the nature of the workload. Please do perform your own internal testing and evaluation. You never know, a free feature like “Full Database Caching” might suit your environment better! (Link).

Regards,
Wei Shan
