Oracle Database 12c – Getting my hands dirty

I attended an Oracle Database 12c In-Memory workshop recently. It's one of the new database features: columnar storage inside a dedicated memory area for faster analytic access, quite similar to IBM DB2 BLU Acceleration. You can enable a column, table, tablespace, or the entire database to be stored in-memory.
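As a sketch of those different granularities (the table, column, and tablespace names below are hypothetical examples, not from the workshop):

```sql
-- Table level: store the whole table in the column store
ALTER TABLE sales INMEMORY;

-- Column level: keep the table in-memory but exclude one column
ALTER TABLE sales INMEMORY NO INMEMORY (prod_desc);

-- Tablespace level: new segments in this tablespace default to in-memory
ALTER TABLESPACE users DEFAULT INMEMORY;

-- Disable again
ALTER TABLE sales NO INMEMORY;
```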

How to turn on the In-Memory option:

SQL> ALTER SYSTEM SET INMEMORY_SIZE=1G SCOPE=SPFILE;

SQL> show parameter sga_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sga_target                           big integer 1920M

SQL> shutdown immediate;
SQL> startup;

SQL> show parameter inmemory_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
inmemory_size                        big integer 1G

SQL> SELECT NAME, VALUE FROM V$SGA;

NAME                      VALUE
-------------------- ----------
Fixed Size              2926080
Variable Size         301992448
Database Buffers      620756992
Redo Buffers           13848576
In-Memory Area       1073741824

SQL> SELECT POOL, ALLOC_BYTES, USED_BYTES FROM V$INMEMORY_AREA;

POOL                       ALLOC_BYTES USED_BYTES
-------------------------- ----------- ----------
1MB POOL                     854589440          0
64KB POOL                    201326592          0

Note: The In-Memory Area is a sub-heap within the SGA. Your SGA_TARGET value must be greater than your INMEMORY_SIZE, or else the instance won't start up. AMM/ASMM will not resize INMEMORY_SIZE automatically. The memory area is split into a 1MB pool for In-Memory Compression Units (IMCUs) and a 64KB pool for Snapshot Metadata Units (SMUs). The ratio between them is determined internally by Oracle and is usually around 80:20.

Playing around with In-Memory:

SQL> ALTER TABLE test INMEMORY;   -- enable a table for In-Memory

SQL> EXEC DBMS_INMEMORY.POPULATE('<SCHEMA_NAME>','<TABLE_NAME>');

SQL> SELECT SEGMENT_NAME, POPULATE_STATUS, BYTES, BYTES_NOT_POPULATED
     FROM V$IM_SEGMENTS WHERE SEGMENT_NAME = 'TEST';

SEGMENT_NAME   POPULATE_      BYTES BYTES_NOT_POPULATED
-------------- --------- ---------- -------------------
TEST           COMPLETED   13631488                   0

SQL> SELECT INMEMORY, INMEMORY_PRIORITY, INMEMORY_COMPRESSION,
            INMEMORY_DISTRIBUTE, INMEMORY_DUPLICATE
     FROM USER_TABLES WHERE TABLE_NAME = 'TEST';

INMEMORY INMEMORY INMEMORY_COMPRESS INMEMORY_DISTRI INMEMORY_DUPL
-------- -------- ----------------- --------------- -------------
ENABLED  NONE     FOR QUERY LOW     AUTO            NO DUPLICATE

SQL> SELECT TABLE_NAME, COLUMN_NAME, INMEMORY_COMPRESSION
     FROM V$IM_COLUMN_LEVEL WHERE TABLE_NAME = 'TEST';

SQL> SELECT TABLE_NAME, INMEMORY, INMEMORY_PRIORITY, INMEMORY_DISTRIBUTE,
            INMEMORY_COMPRESSION
     FROM USER_TAB_PARTITIONS;

Note.1: Just because a table is enabled for In-Memory doesn't mean it gets populated into memory automatically! It only gets populated when population is triggered manually (DBMS_INMEMORY), when the table gets accessed, or when a population priority is set.
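For example, setting a priority makes the table load without waiting for a first scan (the table name here is hypothetical):

```sql
-- CRITICAL: populate right after the database opens
ALTER TABLE test INMEMORY PRIORITY CRITICAL;

-- NONE (the default): populate only on first full scan
ALTER TABLE test INMEMORY PRIORITY NONE;
```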

Note.2: USER_TABLES will not show whether a table has only some of its columns in-memory. You will have to query V$IM_COLUMN_LEVEL.
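A minimal sketch of checking column-level settings, assuming a hypothetical PAYLOAD column was excluded from the column store:

```sql
ALTER TABLE test INMEMORY NO INMEMORY (payload);

SELECT table_name, column_name, inmemory_compression
FROM   v$im_column_level
WHERE  table_name = 'TEST';
```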

Note.3: USER_TABLES does not show whether an in-memory table is partitioned. You will have to query USER_TAB_PARTITIONS.
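Similarly, per-partition settings only show up in USER_TAB_PARTITIONS. A sketch with hypothetical table and partition names:

```sql
-- Keep only one partition in memory, at a higher compression level
ALTER TABLE orders MODIFY PARTITION p_2014
  INMEMORY MEMCOMPRESS FOR CAPACITY HIGH;

SELECT table_name, partition_name, inmemory, inmemory_compression
FROM   user_tab_partitions
WHERE  table_name = 'ORDERS';
```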

My 2-cents on Oracle database 12c In-Memory Option:

The idea of having both row and columnar storage in the same database looks extremely tempting. However, this does not come cheap. It is licensed per core and priced in the same range as Oracle RAC (Link). Storing data in column format is especially useful for data warehouses due to the nature of their queries. However, if your tables are updated frequently (DML), keeping the data in both row and column formats synchronised will be fairly expensive.

I think it all boils down to the nature of the workload. Please do perform your own internal testing and evaluation. You never know, a free feature like “Full Database Caching” might suit your environment better! (Link).

Regards,
Wei Shan

  1. #1 by koolkunal on November 12, 2014 - 08:20

    Hi Wei,

    Good content here. But I guess In-Memory would be a good choice if you have one database and you are using it for real-time reporting. You just need to be careful about what you put into In-Memory. I recently attended Maria Colgan's (Technical Project Manager of the In-Memory project) session.

  2. #2 by Wei Shan on November 24, 2014 - 07:38

    Hello Kunal,

    Thank you for the comment.

    Yes, I believe it is very much suitable for DWH workloads. But I still believe that sufficient PoC testing should be conducted. If the ETL process is not mature, we might be looking at expensive queries not utilising the columnar store.

    Thanks!
