Oracle 11gR2 – DBA_DATA_FILES MAXBYTES value 0

When we query the DBA_DATA_FILES view (for scripting), we need to take extra care with the MAXBYTES column. Notice that MAXBYTES is 0 for the last two datafiles. This is because MAXSIZE was not specified when those datafiles were created, so they are not autoextensible and there is no maximum size to report.

SQL> select TABLESPACE_NAME,FILE_NAME,MAXBYTES/1024/1024/1024 "GB" from dba_data_files;

TABLESPACE_NAME FILE_NAME GB
------------------------------ -------------------------------------------------- ----------
SYSTEM +DATA/testdb/datafile/system.269.894474543 10
AUDIT_TBS +DATA/testdb/datafile/audit_tbs.270.900889041 0
AUDIT_TBS +DATA/testdb/datafile/audit_tbs.271.909620769 0
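
You can confirm the cause by checking the AUTOEXTENSIBLE column; files created without an AUTOEXTEND/MAXSIZE clause report 'NO' here, and their MAXBYTES will be 0:

SQL> select FILE_NAME,AUTOEXTENSIBLE from dba_data_files where TABLESPACE_NAME='AUDIT_TBS';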

Now, let's add a datafile with MAXSIZE specified:

SQL> alter tablespace AUDIT_TBS add datafile '+DATA' size 20G autoextend on maxsize 30G;

Let’s review the DBA_DATA_FILES output again:

SQL> select TABLESPACE_NAME,FILE_NAME,MAXBYTES/1024/1024/1024 "GB" from dba_data_files;

TABLESPACE_NAME FILE_NAME GB
------------------------------ -------------------------------------------------- ----------
SYSTEM +DATA/testdb/datafile/system.269.894474543 10
AUDIT_TBS +DATA/testdb/datafile/audit_tbs.270.900889041 0
AUDIT_TBS +DATA/testdb/datafile/audit_tbs.271.909620769 0
AUDIT_TBS +DATA/testdb/datafile/audit_tbs.272.909622999 30
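
For scripting, a safer approach is to fall back to BYTES whenever a file is not autoextensible, so the 0 values do not skew the numbers. A minimal sketch:

SQL> select TABLESPACE_NAME,FILE_NAME,
  2  (case when AUTOEXTENSIBLE='YES' then MAXBYTES else BYTES end)/1024/1024/1024 "GB"
  3  from dba_data_files;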

That’s all for the time being!

Regards,
Wei Shan

Oracle RAC 11gR2 – Configure job scheduler to run on a preferred node

In Oracle RAC, a job defined in the scheduler may run on either node by default. However, there are scheduler job attributes you can configure to change that.

INSTANCE_STICKINESS

This attribute should only be used for a database running in an Oracle Real Application Clusters (Oracle RAC) environment. By default, it is set to TRUE. If you set instance_stickiness to TRUE, jobs start running on the instance with the lightest load, and the Scheduler thereafter attempts to run the job on the instance where it last ran. If that instance is either down or so overloaded that it does not start new jobs for a significant period of time, another instance runs the job. If the interval between runs is large, instance_stickiness is ignored and the job is handled as if it were a non-sticky job.

If instance_stickiness is set to FALSE, each instance of the job runs on the first instance available. For environments other than Oracle RAC, this attribute is not useful because there is only one instance.

INSTANCE_ID

Valid only in an Oracle Real Application Clusters environment. Indicates the instance on which the job is to be run.

Basic Example

SQL> select OWNER,JOB_NAME,INSTANCE_STICKINESS,INSTANCE_ID from dba_scheduler_jobs where JOB_NAME like 'JOB%';
OWNER JOB_NAME INSTA INSTANCE_ID
------------------------------ ------------------------------ ----- --------
USER DBA_JOB_1 TRUE

Use the DBMS_SCHEDULER.SET_ATTRIBUTE procedure

SQL> exec dbms_scheduler.set_attribute(name => 'USER.DBA_JOB_1' ,attribute=>'INSTANCE_ID', value=>'1');
PL/SQL procedure successfully completed.

SQL> exec dbms_scheduler.set_attribute(name => 'USER.DBA_JOB_1' ,attribute=>'INSTANCE_STICKINESS', value=>FALSE);
PL/SQL procedure successfully completed.

Results

SQL> select OWNER,JOB_NAME,INSTANCE_STICKINESS,INSTANCE_ID from dba_scheduler_jobs where JOB_NAME like '%JOB%';
OWNER JOB_NAME INSTA INSTANCE_ID
------------------------------ ------------------------------ ----- --------
USER DBA_JOB_1 FALSE 1
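
To let the job float across instances again later, clear the attribute with the SET_ATTRIBUTE_NULL procedure (a quick sketch, using the same job):

SQL> exec dbms_scheduler.set_attribute_null(name => 'USER.DBA_JOB_1', attribute => 'INSTANCE_ID');
PL/SQL procedure successfully completed.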

Regards,
Wei Shan

Oracle RAC Database – CRS-4535/CRS-4530/CRS-4534 Error

One of the RAC nodes crashed due to a 'device full' error. Upon starting the node back up, it hit the following errors:

node1# /u01/app/11.2.0/grid/bin/crsctl check cluster
 CRS-4535: Cannot communicate with Cluster Ready Services
 CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
 CRS-4534: Cannot communicate with Event Manager

From octssd.log (note the gipcretConnectionRefused and gipcretDaemonLost errors):

2016-03-06 14:39:17.943: [ CRSCCL][4036015872]USING GIPC ============ 
2016-03-06 14:39:17.943: [ CRSCCL][4036015872]clsCclGipcListen: Attempting to listen on gipcha://node1:CTSSGROUP_1. 
2016-03-06 14:39:17.943: [GIPCHGEN][4036015872] gipchaInternalRegister: Initializing HA GIPC 
2016-03-06 14:39:17.944: [GIPCHGEN][4036015872] gipchaNodeCreate: adding new node 0x7f6fe4031900 { host '', haName 'b8b8-843d-c412-444e', srcLuid 1d17e809-00000000, dstLuid 00000000-00000000 numInf 0, contigSeq 0, lastAck 0, lastValidAck 0, sendSeq [0 : 0], createTime 917676, sentRegister 0, localMonitor 0, flags 0x1 } 
2016-03-06 14:39:17.944: [GIPCHTHR][3935278848] gipchaWorkerThread: starting worker thread hctx 0x70b900 [0000000000000010] { gipchaContext : host 'node1', name 'b8b8-843d-c412-444e', luid '1d17e809-00000000', numNode 0, numInf 0, usrFlags 0x0, flags 0xc000 } 
2016-03-06 14:39:17.944: [GIPCHDEM][3933177600] gipchaDaemonThread: starting daemon thread hctx 0x70b900 [0000000000000010] { gipchaContext : host 'node1', name 'b8b8-843d-c412-444e', luid '1d17e809-00000000', numNode 0, numInf 0, usrFlags 0x0, flags 0xc000 } 
2016-03-06 14:39:17.945: [GIPCXCPT][3933177600] gipchaDaemonProcessConnect: connection to daemon failed for endp 0x731c20 [0000000000000118] { gipcEndpoint : localAddr 'ipc', remoteAddr 'ipc://gipcd_node1', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 0, wobj 0x71a660, sendp 0x71a4a0flags 0x8861a, usrFlags 0x24020 }, hctx 0x70b900 [0000000000000010] { gipchaContext : host 'node1', name 'b8b8-843d-c412-444e', luid '1d17e809-00000000', numNode 0, numInf 0, usrFlags 0x0, flags 0x5 }, ret gipcretConnectionRefused (29) 
2016-03-06 14:39:17.945: [GIPCHDEM][3933177600] gipchaDaemonThreadEntry: EXCEPTION[ ret gipcretConnectionRefused (29) ] terminating daemon thread due to exception 
2016-03-06 14:39:17.945: [GIPCHDEM][3933177600] gipchaDaemonThreadEntry: daemon thread exiting state gipchaThreadStateFailed (5) 
2016-03-06 14:39:17.945: [GIPCXCPT][4036015872] gipchaInternalResolve: failed to resolve ret gipcretDaemonLost (34), host 'node1', port 'CTSSGROUP_1', hctx 0x70b900 [0000000000000010] { gipchaContext : host 'node1', name 'b8b8-843d-c412-444e', luid '1d17e809-00000000', numNode 0, numInf 0, usrFlags 0x0, flags 0xd }, ret gipcretDaemonLost (34) 
2016-03-06 14:39:17.945: [GIPCHGEN][4036015872] gipchaResolveF [gipcmodGipcResolve : gipcmodGipc.c : 806]: EXCEPTION[ ret gipcretDaemonLost (34) ] failed to resolve ctx 0x70b900 [0000000000000010] { gipchaContext : host 'node1', name 'b8b8-843d-c412-444e', luid '1d17e809-00000000', numNode 0, numInf 0, usrFlags 0x0, flags 0xd }, host 'node1', port 'CTSSGROUP_1', flags 0x0 
2016-03-06 14:39:17.946: [GIPCXCPT][4036015872] gipchaProcessClientRequest: request failed due to failure in ha threads req 0x7ffc6a994200,req type 1, hctx 0x70b900 [0000000000000010] { gipchaContext : host 'node1', name 'b8b8-843d-c412-444e', luid '1d17e809-00000000', numNode 0, numInf 0, usrFlags 0x0, flags 0xd }, ret gipcretDaemonLost (34) 
2016-03-06 14:39:17.946: [GIPCHGEN][4036015872] gipchaPublishF [gipcmodGipcBind : gipcmodGipc.c : 884]: EXCEPTION[ ret gipcretDaemonLost (34) ] failed to publish ctx 0x70b900 [0000000000000010] { gipchaContext : host 'node1', name 'b8b8-843d-c412-444e', luid '1d17e809-00000000', numNode 0, numInf 0, usrFlags 0x0, flags 0xd }, endp 0x7f6fe40375b0 [0000000000000126] { gipchaEndpoint : port 'CTSSGROUP_1', peer ':', srcCid 00000000-00000000, dstCid 00000000-00000000, numSend 0, maxSend 100, groupListType 1, hagroup 0x7f6fe4032310, usrFlags 0x4000, flags 0x0 }, port 'CTSSGROUP_1', flags 0x4000 
2016-03-06 14:39:17.946: [GIPCXCPT][4036015872] gipcBindF [clsCclGipcListen : clsCclCommHandler.c : 3553]: EXCEPTION[ ret gipcretDaemonLost (34) ] failed to bind endp 0x7f6fe402fcc0 [0000000000000101] { gipcEndpoint : localAddr 'gipcha://node1:CTSSGROUP_1', remoteAddr '', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef (nil), ready 0, wobj 0x7f6fe4024fa0, sendp (nil)flags 0x30400, usrFlags 0x20 }, addr 0x7f6fe402f230 [00000000000000ff] { gipcAddress : name 'gipcha://node1:CTSSGROUP_1', objFlags 0x0, addrFlags 0x0 }, flags 0x0 
2016-03-06 14:39:17.946: [ CRSCCL][4036015872]gipcBind() failed. rc= 34. 
2016-03-06 14:39:17.946: [ CRSCCL][4036015872]cclLibShutdown called
2016-03-06 14:39:17.946: [ CRSCCL][4036015872]ccllibShutdown done.
2016-03-06 14:39:17.946: [ CTSS][4036015872](:ctss_ccl_init1:): Fails to initialize CCL [2]. Returns [16] 
2016-03-06 14:39:17.946: [ CTSS][4036015872]ctss_main: CCL init failed [16] 
2016-03-06 14:39:17.946: [ CTSS][4036015872]ctss_main: CTSS daemon aborting [16]. 
2016-03-06 14:39:17.946: [ CTSS][4036015872]CTSS daemon aborting

From gipcd.log (note the gipcListen() failure and the subsequent thread shutdowns):

2016-03-06 14:39:16.979: [GIPCDCLT][3731310336] gipcdClientThread: gipcListen() failed (1) on endp 000000000000010c 
2016-03-06 14:39:16.979: [ GIPCD][3731310336] gipcdSetThreadState: changing the status of clientThread. current status gipcdThreadStatusInit desired status gipcdThreadStatusOffline 
2016-03-06 14:39:16.979: [ GIPCLIB][3731310336] gipclibMapSearch: gipcMapSearch() -> gipcMapGetNodeAddr() failed: ret:gipcretKeyNotFound (36), ht:0x5a72e0, idxPtr:0x7f61df29a8c0, key:0x7f61de6725b0, flags:0x0 
2016-03-06 14:39:16.979: [GIPCXCPT][3731310336] gipcObjectLookupF [gipcPostF : gipc.c : 2022]: search found no matching oid 0000000000000000, ret gipcretKeyNotFound (36), ret gipcretInvalidObject (3) 
2016-03-06 14:39:16.979: [GIPCXCPT][3731310336] gipcPostF [gipcdClientThread : gipcdClientThread.c : 3485]: EXCEPTION[ ret gipcretInvalidObject (3) ] failed to post obj 0000000000000000, flags 0x0 
2016-03-06 14:39:16.980: [GIPCDCLT][3731310336] gipcdClientThread: Client thread has exited 
2016-03-06 14:39:16.980: [ GIPCD][3727107840] gipcdThreadWait: GIPCD received a shutdown msg from agent framework or client/node/monitor thread died 
2016-03-06 14:39:16.980: [GIPCDMON][3727107840] gipcdMonitorThread: clientThread/nodeThread did not came into 'READY' state 
2016-03-06 14:39:16.980: [ GIPCD][3727107840] gipcdSetThreadState: changing the status of monitorThread. current status gipcdThreadStatusReady desired status gipcdThreadStatusOffline 
2016-03-06 14:39:16.980: [GIPCDMON][3727107840] gipcdMonitorThread: Monitor thread is exiting.. 
2016-03-06 14:39:16.980: [ GIPCD][3831981856] gipcdMain: invalid status of gipcd Threads. Status of clientThread gipcdThreadStatusOffline Status of nodeThread gipcdThreadStatusInit Status of monitorThread gipcdThreadStatusOffline 
2016-03-06 14:39:16.986: [ GIPC][3722905344] gipcCheckInitialization: possible incompatible non-threaded init from [clsgpnp0.c : 769], original from [gipcd.c : 177] 
2016-03-06 14:39:16.986: [ GIPCLIB][3722905344] gipclibSetTraceLevel: to set level to 0 
[ CLWAL][3722905344]clsw_Initialize: OLR initlevel [30000] 
2016-03-06 14:39:16.987: [ GIPC][3722905344] gipcCheckInitialization: possible incompatible non-threaded init from [prom.c : 690], original from [gipcd.c : 177] 
2016-03-06 14:39:16.987: [ GIPCLIB][3722905344] gipclibSetTraceLevel: to set level to 0 
2016-03-06 14:39:17.007: [ GPNP][3722905344]clsgpnp_getCachedProfileEx: [at clsgpnp.c:613] Result: (26) CLSGPNP_NO_PROFILE. Can't get offline GPnP service profile: local gpnpd is up and running. Use getProfile instead. 
2016-03-06 14:39:17.007: [ GPNP][3722905344]clsgpnp_getCachedProfileEx: [at clsgpnp.c:623] Result: (26) CLSGPNP_NO_PROFILE. Failed to get offline GPnP service profile. 
2016-03-06 14:39:17.015: [ GIPCLIB][3722905344] gipclibGetClusterGuid: retrieved cluster guid 770e66ca03467f5bbfbba3432ee2aa7b 
2016-03-06 14:39:17.025: [ GIPCLIB][3722905344] gipclibSetTraceLevel: to set level to 0 
[ CLWAL][3722905344]clsw_Initialize: OLR initlevel [70000] 
2016-03-06 14:39:17.025: [ GIPCLIB][3722905344] gipclibSetTraceLevel: to set level to 0 
2016-03-06 14:39:17.048: [ CLSINET][3722905344] Returning NETDATA: 1 interfaces 
2016-03-06 14:39:17.048: [ CLSINET][3722905344] # 0 Interface 'bond0',ip='192.168.0.5',mac='40-f2-e9-25-b6-93',mask='255.255.255.0',net='192.168.0.0',use='cluster_interconnect' 
2016-03-06 14:39:17.048: [GIPCHGEN][3722905344] gipchaNodeAddInterface: adding interface information for inf 0x9530d0 { host '', haName 'gipcd_ha_name', local (nil), ip '192.168.0.5', subnet '192.168.0.0', mask '255.255.255.0', mac '40-f2-e9-25-b6-93', ifname 'bond0', numRef 0, numFail 0, idxBoot 0, flags 0x1841 } 
2016-03-06 14:39:17.048: [GIPCHTHR][3725006592] gipchaWorkerCreateInterface: created local interface for node 'node1', haName 'gipcd_ha_name', inf 'udp://192.168.0.5:42229' 
2016-03-06 14:39:17.049: [GIPCHTHR][3725006592] gipchaWorkerCreateInterface: created local bootstrap multicast interface for node 'node1', haName 'gipcd_ha_name', inf 'mcast://224.0.0.251:42424/192.168.0.5' 
2016-03-06 14:39:17.049: [GIPCHTHR][3725006592] gipchaWorkerCreateInterface: created local bootstrap multicast interface for node 'node1', haName 'gipcd_ha_name', inf 'mcast://230.0.1.0:42424/192.168.0.5' 
2016-03-06 14:39:17.049: [GIPCHTHR][3725006592] gipchaWorkerCreateInterface: created local bootstrap broadcast interface for node 'node1', haName 'gipcd_ha_name', inf 'udp://192.168.0.255:42424' 
2016-03-06 14:39:17.049: [GIPCDNDE][3729209088] gipcdNodeThread: gipcdNodeThread started 
2016-03-06 14:39:17.049: [ GIPCD][3729209088] gipcdSetThreadState: changing the status of nodeThread. current status gipcdThreadStatusInit desired status gipcdThreadStatusReady 
2016-03-06 14:39:17.050: [ GIPCD][3729209088] gipcdThreadWait: GIPCD received a shutdown msg from agent framework or client/node/monitor thread died 
2016-03-06 14:39:17.050: [GIPCDNDE][3729209088] gipcdNodeThread: clientThread/monitorThread did not came into 'READY' state 
2016-03-06 14:39:17.050: [GIPCDNDE][3729209088] gipcdNodeThreadShutdown: deleting all the peer connections 
2016-03-06 14:39:17.050: [ GIPCD][3729209088] gipcdSetThreadState: changing the status of nodeThread. current status gipcdThreadStatusReady desired status gipcdThreadStatusOffline 
2016-03-06 14:39:17.050: [GIPCDNDE][3729209088] gipcdNodeThread: Node thread has exited 
2016-03-06 14:39:17.050: [ GIPCD][3831981856] gipcdMain: All threads terminated 
2016-03-06 14:39:17.150: [ GIPCD][3831981856] gipcdMain: GIPCD terminated

Solution:

The issue was caused by stale network socket files. When the node crashed previously, it did not clean up its socket files. Thus, when the node started, it could not recreate the socket files needed to establish connectivity with the peer node.

root# crsctl stop crs -f
root# rm -rf /usr/tmp/.oracle/* /var/tmp/.oracle/* /tmp/.oracle/*
root# crsctl start crs
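
Once CRS is back up, the same cluster check should report all three daemons online. The expected healthy output looks like this:

node1# /u01/app/11.2.0/grid/bin/crsctl check cluster
 CRS-4537: Cluster Ready Services is online
 CRS-4529: Cluster Synchronization Services is online
 CRS-4533: Event Manager is online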

Regards,
Wei Shan

Installing MongoDB on FreeBSD 10.x

We can simply run the following to install MongoDB. However, the version installed will not be the latest; at the time of writing, it was 2.6.7.

# pkg install mongodb

Alternatively, we can build the latest MongoDB version from the ports tree:

# portsnap fetch
# portsnap extract
# portsnap fetch update
# cd /usr/ports/databases/mongodb32/
# make install clean

Add startup settings to /etc/rc.conf:

mongod_enable="YES"
mongod_config="/usr/local/etc/mongodb.conf"
mongod_dbpath="/var/db/mongodb"

Add settings to /usr/local/etc/mongodb.conf:

master = true
nohttpinterface = true

Start the mongod daemon:

# service mongod start
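
To confirm the daemon came up, check its status and run a quick query against it (a quick sanity check, assuming the default port):

# service mongod status
# mongo --eval 'db.version()'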

Regards,
Wei Shan

 

Playing Around with PostgreSQL date/time Datatype

To check current timestamp. (Notice it’s GMT+8 in the results)

postgres=# select now();
 now
-------------------------------
 2016-02-29 14:19:36.046499+08
(1 row)

To check current timestamp 1 hour ago.

postgres=# select (now() - interval '1 hour');
 ?column?
-------------------------------
 2016-02-29 13:20:47.030513+08
(1 row)

To check current timestamp 1 day ago.

postgres=# select (now() - interval '1 day');
 ?column?
-------------------------------
 2016-02-28 14:20:53.820484+08
(1 row)

To check current timestamp in epoch time. (Unix timestamp in seconds)

postgres=# select extract(epoch from NOW());
 date_part
------------------
 1456726955.54147
(1 row)

To check current timestamp in epoch time. (Unix timestamp in milliseconds)

postgres=# select extract(epoch from NOW())*1000;
 ?column?
-----------------
 1456727010006.5
(1 row)
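
Bonus direction: to convert an epoch value back into a timestamp, use to_timestamp(). A quick sketch using the epoch value from above (the displayed offset follows the session timezone):

postgres=# select to_timestamp(1456726955.54147);
 to_timestamp
-------------------------------
 2016-02-29 14:22:35.54147+08
(1 row)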

Bonus! By default, integer division returns an integer (the fractional part is truncated). You need to cast to float to get fractional results:

postgres=# select 1/2;
 ?column?
----------
 0
(1 row)

postgres=# select 1/2::float;
 ?column?
----------
 0.5
(1 row)

I had quite some “fun” playing with all these date/time datatypes recently.

Regards,
Wei Shan

 

Oracle Database – ORA-12014: table does not contain a primary key constraint

While creating a Materialized View (MVIEW) in an Oracle database, I hit the above error. It seemed kind of strange to me, so I used the oerr utility to find out more.

[oracle@localhost ~]$ oerr ora 12014
12014, 00000, "table '%s' does not contain a primary key constraint"
// *Cause: The CREATE MATERIALIZED VIEW LOG command was issued with the
// WITH PRIMARY KEY option and the master table did not contain
// a primary key constraint or the constraint was disabled.
// *Action: Reissue the command using only the WITH ROWID option, create a
// primary key constraint on the master table, or enable an existing
// primary key constraint.

Table that I wanted to create the MVIEW on

SQL> desc cust_id_sales_aggr
 Name Null? Type
 -------------------------------- -------- ------------------------
 CUST_ID NOT NULL NUMBER
 DOLLAR_SALES NUMBER
 CNT_DOLLARS NUMBER
 CNT NUMBER

How I created the MVIEW LOG

CREATE MATERIALIZED VIEW LOG ON cust_id_sales_aggr WITH ROWID
(CUST_ID,DOLLAR_SALES,CNT_DOLLARS,CNT)
INCLUDING NEW VALUES;

How I created the MVIEW

CREATE MATERIALIZED VIEW cust_id_sales_aggr_mv
PCTFREE 0 TABLESPACE users
STORAGE (INITIAL 8M)
BUILD IMMEDIATE
REFRESH FAST ON DEMAND
ENABLE QUERY REWRITE
AS SELECT cust_id,dollar_sales,cnt_dollars,cnt FROM cust_id_sales_aggr;

The error from the above MVIEW creation command.

ERROR at line 7:
ORA-12014: table 'CUST_ID_SALES_AGGR' does not contain a primary key constraint

How to resolve the error (Obvious from the oerr output)

  1. Create a primary key on the source table (CUST_ID_SALES_AGGR)
  2. While creating the MVIEW, use the WITH ROWID option.

I chose option #2 :)

CREATE MATERIALIZED VIEW cust_id_sales_aggr_mv
PCTFREE 0 TABLESPACE users
STORAGE (INITIAL 8M)
BUILD IMMEDIATE
REFRESH FAST ON DEMAND
WITH ROWID
ENABLE QUERY REWRITE
AS SELECT cust_id,dollar_sales,cnt_dollars,cnt FROM cust_id_sales_aggr;
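
For completeness, option #1 would have been to create a primary key on the master table instead. A sketch (this assumes CUST_ID uniquely identifies each row; the constraint name is made up):

ALTER TABLE cust_id_sales_aggr ADD CONSTRAINT cust_id_sales_aggr_pk PRIMARY KEY (cust_id);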

Regards,
Wei Shan

 

PostgreSQL 9.x – pg_basebackup: could not get transaction log end position from server: FATAL: archive member “core.56229” too large for tar format

While I was trying to re-deploy a PostgreSQL streaming replication slave from the master node, I hit the following error:

[root@node2 ~]# su - postgres
-bash-4.1$ /usr/pgsql-9.2/bin/pg_basebackup -D /data/var/lib/pgsql/9.2/data -h node1 --xlog -P -v
transaction log start point: 70/CD000020
166537602/235436024 kB (100%), 2/2 tablespaces
pg_basebackup: could not get transaction log end position from server: FATAL: archive member "core.56229" too large for tar format

On node1, I found the following files in the $PGDATA directory:

4.2G ./core.53808
4.2G ./core.55595
4.2G ./core.55698
4.2G ./core.56229

By default, PostgreSQL does not create files larger than 1 GB, so I figured these were not datafiles. I asked Uncle Google and found out that they were core dumps from previous PostgreSQL DB crashes.
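
To spot such leftovers quickly, something like the following works (a sketch; the core file naming depends on the kernel's core_pattern setting):

-bash-4.1$ find $PGDATA -maxdepth 1 -name 'core.*' -exec du -h {} \;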

Once I removed the files from the $PGDATA directory, the pg_basebackup command completed successfully. The interesting thing I learnt is that pg_basebackup uses tar in the backend to package the files and send them over to the slave database.

Regards,
Wei Shan

MariaDB/TokuDB – ERROR 1033 (HY000): Incorrect information in file:

While I was testing the TokuDB HotBackup plugin from Percona, I hit an error during a complete database recovery. After the recovery completed, I tried verifying the table contents:

MariaDB [partitiontest]> select count(*) from salaries;
ERROR 1033 (HY000): Incorrect information in file: './partitiontest/salaries.frm'

Looking at the error message, I was under the impression that the tables were corrupted.

From the logs:

160127 11:10:56 [ERROR] mysql.plugin: 1 client is using or hasn't closed the table properly
160127 11:10:56 [ERROR] Can't open shared library '/usr/lib64/mysql/plugin/tokudb_backup.so' (errno: 0, cannot open shared object file: No such file or directory)
160127 11:10:56 server_audit: MariaDB Audit Plugin version 1.3.0 STARTED.
160127 11:10:56 [Note] Server socket created on IP: '::'.
160127 11:10:56 [Warning] 'user' entry 'root@centos7.local' ignored in --skip-name-resolve mode.
160127 11:10:56 [Warning] 'user' entry '@centos7.local' ignored in --skip-name-resolve mode.
160127 11:10:56 [Warning] 'proxies_priv' entry '@% root@centos7.local' ignored in --skip-name-resolve mode.
160127 11:10:56 [Note] Reading of all Master_info entries succeded
160127 11:10:56 [Note] Added new Master_info '' to hash table
160127 11:10:56 [Note] /usr/sbin/mysqld: ready for connections.
Version: '10.0.22-MariaDB-enterprise-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Enterprise Certified Binary
160127 11:11:28 [ERROR] mysqld: Incorrect information in file: './partitiontest/salaries.frm'

The error message above looks really scary, and Google results for it were very vague, so I decided to look deeper. On examining the logs again, I noticed the following error:

160127 11:10:56 [ERROR] Can't open shared library '/usr/lib64/mysql/plugin/tokudb_backup.so' (errno: 0, cannot open shared object file: No such file or directory)

Once I enabled the TokuDB plugin in /etc/my.cnf.d/tokudb.conf, the recovery worked fine. I learnt to be more careful when interpreting such MariaDB errors.
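
For reference, the fix was just making sure the plugin libraries get loaded again. A sketch of what the relevant lines in /etc/my.cnf.d/tokudb.conf might look like (the exact contents depend on your packaging):

[mysqld]
plugin-load-add=ha_tokudb.so
plugin-load-add=tokudb_backup.so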

Regards,
Wei Shan
