Saturday, August 31, 2013

[how to] How often does the Impala StateStore refresh?


How often does the Impala StateStore refresh?

Posted: 31 Aug 2013 03:41 PM PDT

I am using Hive and Impala on the same cluster. I find that when I create new tables, the Impala StateStore does not refresh automatically, even after a few hours.

I know that I can work around this by running "refresh" in impala-shell (in a cron job if need be), but I would like to know: how often does impalad refresh its metadata, and can I set this interval to be shorter? If so, how?
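
For reference, a minimal sketch of the manual workaround, assuming Impala 1.1 or later (table name hypothetical):

    -- run inside impala-shell, or from cron via: impala-shell -q '...'
    REFRESH my_table;        -- reload metadata for one table
    INVALIDATE METADATA;     -- rediscover all objects created outside Impala, e.g. through Hive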

Mirroring 3 SQL Servers

Posted: 31 Aug 2013 07:36 PM PDT

We are developing an application for a company that has 3 branches. We designed a SQL Server database for the application; we could host the server at one of the branches, but the branches are connected over the internet, that connection is sometimes lost, and the SQL Server's availability is vital. So we decided to mirror the server at each branch. I need a simple solution for configuring these servers as mirrors of each other, each with the same privilege to make changes to the databases. All we want is to configure 3 SQL Servers with exactly the same data. We read about merge replication, but we want a simpler solution because our scenario is not really enterprise-scale.

How to determine when to reduce physical memory for SQL Server?

Posted: 31 Aug 2013 06:38 AM PDT

If I repeatedly notice SQL Server reporting a lot of free memory, can I likely remove about that amount of memory from the server without affecting performance? Or, can/should it use more for caching?

For this particular example, the server is running SQL Server 2012 Standard Edition on Windows Server 2012. It has 20 GB of physical memory and hosts over 100 GB of data. SQL Server is the only application running on the server.

SQLServer:Buffer Manager
    Buffer cache hit ratio    99.737 %
    Page life expectancy      874 s

SQLServer:Memory Manager
    Database Cache Memory     6,744 MB
    Free Memory               5,937 MB
    Optimizer Memory              5 MB
    SQL Cache Memory             29 MB
    Target Server Memory     19,015 MB
    Total Server Memory      18,407 MB
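
For anyone wanting to sample the same counters from T-SQL, a minimal sketch against the standard DMV (the filter values are the counter names shown above):

    SELECT [object_name], counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN ('Page life expectancy',
                           'Free Memory (KB)',
                           'Database Cache Memory (KB)',
                           'Target Server Memory (KB)',
                           'Total Server Memory (KB)');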

What exactly is notarization in an Oracle database? [on hold]

Posted: 31 Aug 2013 10:47 AM PDT

Recently, I was reading this article, which talks about how SSL (Secure Sockets Layer) can be applied to an Oracle database.

Sometimes one of the requirements for obtaining a certificate from a certification authority involves notarization of the certificate request form. I wanted to know what exactly notarization is, and how it can be applied to a database in general.

MongoDB GUI written on Node.js

Posted: 31 Aug 2013 04:30 AM PDT

Maybe I'm missing something, but given how often MongoDB is used with Node.js, I expected to find an admin GUI built on it. I read the official site's list of Admin UIs and the answers on Stack Overflow, and I haven't found a single GUI running on Node.js. Why should I install PHP just for RockMongo, or play with Django for Fang of Mongo? Is there really no MongoDB admin UI written in Node.js?

what are the ways to use SYSDATE and timestamp pattern? [on hold]

Posted: 31 Aug 2013 04:22 AM PDT

What are the ways to use SYSDATE and the timestamp format pattern? I would like to understand the role of SYSDATE and timestamps in database queries. Also, as a beginner to SQL, which websites can I refer to and what topics do I need to cover? Database experts, kindly help.
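
For a flavour of what these look like in practice, a minimal Oracle sketch (table and column names hypothetical):

    -- current date/time; SYSTIMESTAMP adds fractional seconds and time zone
    SELECT SYSDATE, SYSTIMESTAMP FROM dual;

    -- rows created in the last 7 days
    SELECT * FROM orders WHERE order_date >= SYSDATE - 7;

    -- render a timestamp with an explicit format pattern
    SELECT TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS.FF3') FROM dual;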

I am not able to fetch a record based on a condition [on hold]

Posted: 31 Aug 2013 06:03 AM PDT

I have a table with these records:

0 sales
1 salesOrder
2 SalesInfo
3 SalesDescr
2 Purchase

I want to transform it into another table like this:

0 sales
1 sales salesorder
2 sales salesorder salesInfo
3 sales salesorder salesInfo salesDescr
2 sales salesorder Purchase

I can produce everything up to the 4th record with a loop, but I am not able to produce the 5th record (where the level drops back from 3 to 2). The best code I have so far clears the record like this:

call function 'BAPI_COSTELEMENTGRP_GETDETAIL'
  exporting
    chartofaccounts = p_kokrs
    groupname       = v_groupname
  tables
    hierarchynodes  = i_hierarchynodes
    hierarchyvalues = i_hierarchyvalues.

if not i_hierarchynodes[] is initial.
  clear : i_group[], k_group.
  loop at i_hierarchynodes.
    case i_hierarchynodes-hierlevel.
      when 0.
        k_group-gname0 = i_hierarchynodes-groupname.
        k_group-desc0  = i_hierarchynodes-descript.
        clear : v_gname0, v_desc0.
        v_gname0 = i_hierarchynodes-groupname.
        v_desc0  = i_hierarchynodes-descript.
      when 1.
        if v_gname1 is initial.
          k_group-gname0 = v_gname0.
          k_group-desc0  = v_desc0.
          k_group-gname1 = i_hierarchynodes-groupname.
          k_group-desc1  = i_hierarchynodes-descript.
          v_gname1 = i_hierarchynodes-groupname.
          v_desc1  = i_hierarchynodes-descript.
        else.
          clear : v_gname1, v_desc1, v_gname2, v_desc2, v_gname3, v_desc3.
          k_group-gname0 = v_gname0.
          k_group-desc0  = v_desc0.
          k_group-gname1 = i_hierarchynodes-groupname.
          k_group-desc1  = i_hierarchynodes-descript.
          v_gname1 = i_hierarchynodes-groupname.
          v_desc1  = i_hierarchynodes-descript.
        endif.
      when 2.
        if v_gname2 is initial.
          k_group-gname0 = v_gname0.
          k_group-desc0  = v_desc0.
          k_group-gname1 = v_gname1.
          k_group-desc1  = v_desc1.
          k_group-gname2 = i_hierarchynodes-groupname.
          k_group-desc2  = i_hierarchynodes-descript.
          v_gname2 = i_hierarchynodes-groupname.
          v_desc2  = i_hierarchynodes-descript.
        else.
          clear : v_gname2, v_desc2, v_gname3, v_desc3.
          k_group-gname0 = v_gname0.
          k_group-desc0  = v_desc0.
          k_group-gname1 = v_gname1.
          k_group-desc1  = v_desc1.
          k_group-gname2 = i_hierarchynodes-groupname.
          k_group-desc2  = i_hierarchynodes-descript.
          v_gname2 = i_hierarchynodes-groupname.
          v_desc2  = i_hierarchynodes-descript.
        endif.
      when 3.
        if v_gname3 is initial.
          k_group-gname0 = v_gname0.
          k_group-desc0  = v_desc0.
          k_group-gname1 = v_gname1.
          k_group-desc1  = v_desc1.
          k_group-gname2 = v_gname2.
          k_group-desc2  = v_desc2.
          k_group-gname3 = i_hierarchynodes-groupname.
          k_group-desc3  = i_hierarchynodes-descript.
          v_gname3 = i_hierarchynodes-groupname.
          v_desc3  = i_hierarchynodes-descript.
        else.
          clear : v_gname3, v_desc3.
          k_group-gname0 = v_gname0.
          k_group-desc0  = v_desc0.
          k_group-gname1 = v_gname1.
          k_group-desc1  = v_desc1.
          k_group-gname2 = v_gname2.
          k_group-desc2  = v_desc2.
          k_group-gname3 = i_hierarchynodes-groupname.
          k_group-desc3  = i_hierarchynodes-descript.
          v_gname3 = i_hierarchynodes-groupname.
          v_desc3  = i_hierarchynodes-descript.
        endif.
    endcase.
    append k_group to i_group.
    clear : k_group, i_hierarchynodes.
  endloop.
endif.

Does a re-index update statistics?

Posted: 31 Aug 2013 05:28 AM PDT

I've been doing the MS10775A course this past week, and one question came up that the trainer couldn't answer reliably:

Does a re-index update the statistics?

We found discussions online arguing both that it does and that it doesn't.
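
One way to settle it empirically, a sketch assuming SQL Server and a hypothetical table dbo.MyTable:

    -- note when the statistics were last updated
    SELECT s.name, STATS_DATE(s.[object_id], s.stats_id) AS last_updated
    FROM sys.stats AS s
    WHERE s.[object_id] = OBJECT_ID('dbo.MyTable');

    -- rebuild the indexes, then rerun the query above; statistics tied to
    -- rebuilt indexes should show a fresh date, whereas a REORGANIZE
    -- should leave the dates untouched
    ALTER INDEX ALL ON dbo.MyTable REBUILD;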

How to make Postgres autovacuum not impact performance?

Posted: 31 Aug 2013 07:11 PM PDT

I am running into problems where Postgres autovacuum processes are interfering with the overall performance of my database, making my SELECT, INSERT, and UPDATE queries slower than usual. What settings and optimization strategies will make autovacuum have less of an impact on performance?

Is there a way to make it not consume as much memory/CPU so that client queries can run more efficiently?
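
For context, autovacuum's resource use is throttled through cost-based settings; a postgresql.conf sketch with illustrative values, not tuned recommendations:

    # pause each worker more often, trading vacuum speed for lower I/O impact
    autovacuum_vacuum_cost_delay = 20ms
    autovacuum_vacuum_cost_limit = 200

    # fewer concurrent workers means less simultaneous I/O
    autovacuum_max_workers = 2

    # memory each vacuum worker may use for dead-tuple tracking
    maintenance_work_mem = 256MB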

ORA-21780: Maximum number of object durations exceeded

Posted: 31 Aug 2013 06:23 AM PDT

I am getting this error when records are streamed over to another database

Oracle DB version: 11.2.0.3.0

Transaction flow: DML on Schema1@DB1 is streamed to Schema2@DB2; a trigger on that table inserts into Schema3@DB2; a trigger on that table in turn inserts into a table in Schema4@DB2.

The ORA-21780 error happens at the last stage.

Please advise.

Targeting MSX with job on Multi-Server Administration

Posted: 31 Aug 2013 02:17 PM PDT

I am using Multi-Server Administration to execute jobs on multiple targets. This works well, but I am not able to enlist the master as a target. I receive the following error when attempting to enlist the MSX as a TSX:

Server 'XXXXX' is an MSX. Cannot enlist one MSX into another MSX. (Microsoft SQL Server, Error: 14299)

I would think that running these same jobs on the master would not require managing a local job as well as the multi-server jobs.

Any help is appreciated.

FileWatcher Troubleshooting Options

Posted: 31 Aug 2013 11:17 AM PDT

I have set up a Credential/Procedure/Program/File Watcher/Job to monitor a folder for new files, but it is not working, and I am trying to figure out what I can check to troubleshoot why. After scaling the code back to a bare minimum, here are some things I have already done:

  • Verify that files are being created in the folder monitored - They are.
  • Verify that the procedure can be run by itself - It can.
  • Verify that the File Watcher shows up in DBA_Scheduler_File_Watchers and is enabled. - It does and is.
  • Verify that DBA_Scheduler_Job_Run_Details shows a successful execution - It does NOT show any entries for this job.
  • Check the alert log - Nothing interesting.
  • Check for other trace files - I am getting trace files matching *_j000_*.trc, but they just contain "FILE_TRANSFER error is:" with nothing after it.

Could someone explain what I am doing wrong or give me a way to troubleshoot this further? Here is my test code:

--Create Credential.
BEGIN
    sys.dbms_scheduler.create_credential(
        username => 'oracle',
        password => 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
        database_role => NULL,
        windows_domain => NULL,
        comments => NULL,
        credential_name => 'TestCredential'
    );
END;
/

CREATE TABLE FilesFound(FoundDate Date);

CREATE OR REPLACE PROCEDURE TestProcedure(iFileInfo In sys.scheduler_filewatcher_result) AS
BEGIN
   INSERT INTO FilesFound VALUES (sysdate);
   COMMIT;
END;
/

BEGIN
   DBMS_SCHEDULER.create_program(
      program_name => 'TestProgram',
      program_type => 'stored_procedure',
      program_action => 'TestProcedure',
      number_of_arguments => 1,
      enabled => False);
   DBMS_SCHEDULER.define_metadata_argument(
      program_name => 'TestProgram',
      metadata_attribute => 'event_message',
      argument_position => 1);
END;
/

BEGIN
   dbms_scheduler.create_file_watcher(
      file_watcher_name => 'TestFileWatcher',
      directory_path => '/u01/test',
      file_name => '*.*',
      credential_name => 'TestCredential',
      destination => NULL,
      enabled => False);
END;
/

BEGIN
   dbms_scheduler.create_job(
      job_name => 'TestJob',
      program_name => 'TestProgram',
      event_condition => NULL,
      queue_spec => 'TestFileWatcher',
      auto_drop => False,
      enabled => False);

   --Enable Everything.
   dbms_scheduler.enable('TestProgram, TestFileWatcher, TestJob');
end;
/

--Set a one minute check interval.
BEGIN
  DBMS_SCHEDULER.set_attribute(
    'file_watcher_schedule',
    'repeat_interval',
    'freq=minutely; interval=1');
END;
/

--Create a file.
DECLARE
  vFile utl_file.file_type;
BEGIN
  EXECUTE IMMEDIATE 'create or replace directory TESTDIRECTORY as ''/u01/test''';
  vFile := utl_file.fopen('TESTDIRECTORY', 'TestFileWatcher.txt', 'w', NULL);
  utl_file.put_line(vFile, 'File has arrived '||SYSTIMESTAMP, TRUE);
  utl_file.fclose(vFile);
END;
/

--Delay to give time for file to appear.
BEGIN
   DBMS_LOCK.SLEEP(120);
END;
/

--Check the table.
BEGIN
   FOR vx IN (select count(*) ct from sys.FilesFound) LOOP
      If (vx.ct = 0) Then
         DBMS_Output.Put_Line('Failed - No File Watcher Activity.');
      Else
         DBMS_Output.Put_Line('Success - File Watcher Activity Found.');
      End If;
   END Loop;
END;
/

--Cleanup.
EXECUTE dbms_scheduler.drop_job(job_name => 'TestJob');
EXECUTE dbms_scheduler.drop_program(program_name => 'TestProgram');
EXECUTE dbms_scheduler.drop_file_watcher(file_watcher_name => 'TestFileWatcher');
EXECUTE DBMS_SCHEDULER.drop_credential(credential_name => 'TestCredential');
drop table FilesFound;
drop procedure TestProcedure;
drop directory TestDirectory;
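
Beyond the checks listed above, two more dictionary views may be worth querying; a sketch using the job name from the test code:

    -- is the job enabled and in a runnable state?
    SELECT job_name, enabled, state
    FROM dba_scheduler_jobs
    WHERE job_name = 'TESTJOB';

    -- has the scheduler logged any operation for it at all?
    SELECT log_date, job_name, operation, status
    FROM dba_scheduler_job_log
    ORDER BY log_date DESC;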

restrict user host settings to socket connection only

Posted: 31 Aug 2013 01:17 PM PDT

Is there a way to restrict a user's host setting so that (s)he can only connect by socket and not TCP/IP? I'd like to have this setting for the root account.

edit: As Abdul Manaf pointed out, skip-networking can be used to turn off TCP/IP connectivity altogether. But can it be done on a per-account basis?
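
One partial approach, sketched below, relies on how MySQL matches account host values; it is not a hard guarantee, since the host a local TCP connection matches depends on skip-name-resolve and the bind address:

    -- socket connections match the 'localhost' row; local TCP connections
    -- arrive as 127.0.0.1/::1 when skip-name-resolve is enabled, so removing
    -- those rows (if present) biases root toward socket-only access
    SELECT user, host FROM mysql.user WHERE user = 'root';
    DROP USER 'root'@'127.0.0.1';
    DROP USER 'root'@'::1';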

Unable to create a new listener in oracle 11g

Posted: 31 Aug 2013 03:17 PM PDT

In Net Manager, I get the error message "A valid logfile is required" when I click on Listeners and choose the '+' button. It then keeps giving me that error when I click on any other tab or button in Net Manager.

Grant access to a table to all users

Posted: 31 Aug 2013 10:17 AM PDT

Is it possible to assign a grant on a table to all users, or a default permission, so that newly created users automatically have SELECT, UPDATE, INSERT and DELETE grants on that table?
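
If this is Oracle (the question doesn't name the DBMS), granting to PUBLIC covers every existing and future user; a sketch with a hypothetical table:

    GRANT SELECT, INSERT, UPDATE, DELETE ON app_owner.my_table TO PUBLIC;

PostgreSQL supports the same GRANT ... TO PUBLIC idiom, and additionally offers ALTER DEFAULT PRIVILEGES for objects created later.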

Performance difference between MySQL and PostgreSQL for the same schema/queries

Posted: 31 Aug 2013 07:14 PM PDT

I'm a newbie DBA with experience in Microsoft SQL Server, and I want to move to FLOSS.

I'm starting a company, and we're developing an app (PHP) with a Postgres backend. We also ran some comparative tests against MySQL, and we observed that MySQL is twice as fast as PostgreSQL.

I did a tangible performance test:

  • Same columns in table with equivalent column datatypes.
  • Same number of rows.
  • Same indexes in both (primary key included).
  • The CPU load is idle on both machines, and the Postgres machine is significantly more powerful.
  • And the same query (obviously).

What am I doing wrong?

P.S: I read many "howtos" on performance tuning for database engines.
P.S(2): We're using InnoDB (one file per table) on the MySQL database.


Hi Mat!

I tested the three most common (and heaviest) SELECT queries.

Regarding the disks, they are certainly not the same: the Postgres machine has an SSD (almost three times faster).

MySQL cache data:

+------------------------------+----------------------+
| Variable_name                | Value                |
+------------------------------+----------------------+
| binlog_cache_size            | 32768                |
| have_query_cache             | YES                  |
| key_cache_age_threshold      | 300                  |
| key_cache_block_size         | 1024                 |
| key_cache_division_limit     | 100                  |
| max_binlog_cache_size        | 18446744073709547520 |
| query_cache_limit            | 1048576              |
| query_cache_min_res_unit     | 4096                 |
| query_cache_size             | 16777216             |
| query_cache_type             | ON                   |
| query_cache_wlock_invalidate | OFF                  |
| table_definition_cache       | 256                  |
| table_open_cache             | 64                   |
| thread_cache_size            | 8                    |
+------------------------------+----------------------+

I don't know how to view this in PostgreSQL.
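
For what it's worth, a sketch of how the nearest PostgreSQL equivalents can be inspected:

    SHOW shared_buffers;
    SHOW effective_cache_size;
    SHOW work_mem;

    -- or all three at once
    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'effective_cache_size', 'work_mem');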

Thanks in advance.

Should I keep pg_xlog on the same disk as data if using disk snapshotting?

Posted: 31 Aug 2013 07:22 PM PDT

We're running on EBS volumes on EC2. We're interested in leveraging EBS snapshotting for backups. However, does this mean we'd need to ensure our pg_xlog is on the same EBS volume as our data?

(I believe) the usual reasoning for separating pg_xlog onto a separate volume is for performance. However, if they are on different volumes, the snapshots may be out of sync.

mysql second slave not syncing while first slave works fine

Posted: 30 Aug 2013 10:17 PM PDT

I have a master (m) - slave (s1) setup using mysql 5.1.45

When I try to add a second slave (s2) the slave lags behind and never catches up on the sync.

Even after syncing s2 with the whole system offline, so that Seconds_Behind_Master = 0, s2 falls out of sync again after a few hours.

The strange thing is that s1 always stays in sync.

Any ideas?

SHOW SLAVE STATUS \G  (on slave2)
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: xxx.xxx.xxx.xxx
                  Master_User: xxxx_xxxx5
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.013165
          Read_Master_Log_Pos: 208002803
               Relay_Log_File: xxxxxxxxxx-relay-bin.000100
                Relay_Log_Pos: 1052731555
        Relay_Master_Log_File: mysql-bin.013124
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: xxxxxxxxx
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 1052731410
              Relay_Log_Space: 44233859505
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 69594
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:

iperf results between servers:

M -> s2
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec    502 MBytes    420 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.05 GBytes    902 Mbits/sec

M -> s1
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec    637 MBytes    534 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec    925 MBytes    775 Mbits/sec

vmstat for s2

vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0    268 126568 199100 22692944    0    0   100   836    8   81  1  0 96  3

vmstat 2 10
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0    268 1150144 197128 21670808    0    0   100   835    9   81  1  0 96  3  0
 0  0    268 1144464 197160 21674940    0    0   644  3096 1328 1602  0  0 97  2  0
 0  2    268 1140680 197176 21679624    0    0   846  5362 1002 1567  0  0 98  2  0
 0  1    268 1135332 197192 21685040    0    0   960  3348  850 1193  0  0 98  1  0
 0  0    268 1130776 197204 21688752    0    0   576  2894  978 1232  0  0 98  2  0
 0  0    268 1127060 197264 21693556    0    0   586  5202 1075 1505  0  0 97  3  0
 0  0    268 1122184 197272 21698412    0    0   896  1160  614  727  0  0 98  1  0
 0  0    268 1118532 197300 21702780    0    0   586  5070 1279 1708  0  0 93  6  0
 0  0    268 1114000 197324 21705820    0    0   402  1522  947  942  0  0 95  4  0
 0  0    268 1109708 197336 21710188    0    0   704  9150 1224 2109  0  0 97  2  0

top output on s2

top - 14:44:25 up 16:36,  1 user,  load average: 1.62, 1.47, 1.42
Tasks: 140 total,   1 running, 139 sleeping,   0 stopped,   0 zombie
Cpu0  :  2.9%us,  1.1%sy,  0.0%ni, 73.8%id, 21.8%wa,  0.0%hi,  0.4%si,  0.0%st
Cpu1  :  0.8%us,  0.3%sy,  0.0%ni, 95.5%id,  3.3%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.6%us,  0.3%sy,  0.0%ni, 97.7%id,  1.4%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.5%us,  0.2%sy,  0.0%ni, 98.9%id,  0.4%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  0.0%us,  0.0%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  0.0%us,  0.0%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  0.0%us,  0.0%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us,  0.0%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  24744184k total, 24005508k used,   738676k free,   199136k buffers
Swap:  1050616k total,      268k used,  1050348k free, 22078920k cached

Any ideas?

Is there any chance that the MySQL version is the culprit, in conjunction with the nearly five-fold increase in traffic to the master?

If that is the case, why does s1 stay in sync and not s2?

Any idea whether 5.6.x solves similar problems?

How does MySQL determine the 'index first key' and 'index last key' with indexes?

Posted: 31 Aug 2013 12:17 AM PDT

In other words, how does MySQL find the result range using indexes as the first step when executing a SELECT query?
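
As a concrete illustration (hypothetical table t with a B-tree index on k), the range endpoints come straight from the predicate, and EXPLAIN reports the resulting range access:

    EXPLAIN SELECT * FROM t WHERE k BETWEEN 10 AND 20;
    -- expected: type=range, key=<index on k>; MySQL seeks the B-tree to the
    -- first entry with k >= 10 and scans leaf entries in order until k > 20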

Viewing MySQL Account Resource Limits

Posted: 31 Aug 2013 03:17 AM PDT

Is there any way of viewing an account's remaining resource allocation? I set up an account that's allowed 7200 queries an hour. At any point, could I then run a query to find out how many queries it has remaining?

MySQL must be storing this information somewhere, since FLUSH USER_RESOURCES; resets the counters. However, I tried a few variants such as SHOW USER_RESOURCES and they don't seem to display anything. I've also hunted around the information_schema and mysql tables.

Is it just not possible to retrieve that information?
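
For the configured limits, as opposed to the remaining quota, the grant tables can be queried; a sketch with a hypothetical account:

    -- how such a limit is set in the first place
    GRANT USAGE ON *.* TO 'app_user'@'localhost' WITH MAX_QUERIES_PER_HOUR 7200;

    -- the configured ceilings live in mysql.user; the in-flight counters are
    -- kept in memory only, which is why FLUSH USER_RESOURCES can reset them
    SELECT user, host, max_questions, max_updates, max_connections
    FROM mysql.user
    WHERE user = 'app_user';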

heroku pg:info shows 0 tables

Posted: 31 Aug 2013 08:38 PM PDT

My Heroku app is on the dev plan.

The app is usable and works, and if I use pg:backups to download a backup and view it locally, I can see that it has data.

However, when I do heroku pg:info, it gives me 0 tables and 0 rows:

Connections: 0
Created:     2012-10-25 09:18 UTC
Data Size:   6.1 MB
Fork/Follow: Unavailable
PG Version:  9.1.6
Plan:        Dev
Rows:        0/10000 (In compliance)
Status:      available
Tables:      0

What's going on?
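
One way to double-check what the database itself reports, a sketch via heroku pg:psql:

    -- count the user tables directly
    SELECT count(*)
    FROM information_schema.tables
    WHERE table_schema = 'public';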

When using a master plus multiple slaves, how do I change over from one server to a brand new one with minimal downtime?

Posted: 31 Aug 2013 12:17 AM PDT

In my particular case, I'm using Heroku Postgres (repackaged Amazon EC2 instances running PostgreSQL), but platform-agnostic answers are also good!

Heroku's instructions describe the process for transferring over a single database (i.e. without slaves):

  • You have an existing server, A.
  • Create server B as a follower of A.

    $ heroku addons:add heroku-postgresql:ronin --follow HEROKU_POSTGRESQL_AUBURN
    ... Attached as HEROKU_POSTGRESQL_BLUE
  • Wait for B to be up-to-date with A.

  • Disable the application so no new writes are made to A.
  • Wait for B to fully catch up.
  • Sever the connection between B and A, so B is an independent db.

    $ heroku pg:unfollow HEROKU_POSTGRESQL_BLUE  
  • Reconfigure the app to use B as your database.

    $ heroku pg:promote HEROKU_POSTGRESQL_BLUE  
  • Re-enable the application.

  • Destroy A.

    $ heroku addons:remove HEROKU_POSTGRESQL_AUBURN  

This all works fine if you have a single database, but what happens when you have slaves? Heroku's docs only say that you'll have to re-create the slaves, without going into further depth. (n.b. Once you've done the process above, you can't immediately create the new slaves. The server needs a few hours before it can be forked/followed.)

Now, if your app can handle ditching the slaves entirely, then you can do the following:

  • disable the slaves
  • change over the master server
  • create new slaves
  • enable the new slaves

But if reducing your server farm to just a single server is going to cause the app to crash and burn, you need a way to keep your slaves up and running throughout this whole process!

My first thought was to create a server C, following B, even while B is still following A. When switching over, B becomes the new master and C the new slave. But, alas, Heroku (and presumably also PostgreSQL) doesn't support followers-following-followers.

QUESTION: How do you change over from one whole master/slave set to another set? How do you change over your master without having to run without slaves for an extended period of time?

And just in case it's not clear: I'm not asking about failing over to an existing slave. I want to upgrade all of the servers to a different-sized EC2 instance, and I want to maintain my master/slave setup throughout the process.

Does pt-table-checksum point out rows that are different between Master and Slave?

Posted: 30 Aug 2013 11:17 PM PDT

Trying to verify the replication integrity of my replica, I found that pt-table-checksum is one of the best ways of doing it.

I have a simple Master->Slave setup. I created a sample table with 10 rows on the Master, which successfully replicated to the Slave. On the slave, I intentionally changed a column in one of the rows of that table, then executed:

pt-table-checksum "h=host,u=user,p=password,P=3306" --function=MD5 --databases=test --tables=tentotal  

And indeed, in the checksums table that pt-table-checksum created, I could see different checksums for master and slave for that table. Excellent so far!

The output was:

TS                  ERRORS  DIFFS  ROWS  CHUNKS  SKIPPED  TIME   TABLE
09-05T12:17:30      0       0      3     1       0        0.182  test.tentotal

Strangely, the DIFFS column says 0, which is incorrect. What am I missing here?

Percona Toolkit version: 2.1.3
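
To locate the chunks that actually differ, the checksum table on the replica can be queried directly; a sketch adapted from the Percona Toolkit documentation (assuming the default percona.checksums table):

    SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks
    FROM percona.checksums
    WHERE (
        master_cnt <> this_cnt
     OR master_crc <> this_crc
     OR ISNULL(master_crc <> this_crc)
    )
    GROUP BY db, tbl;

Running pt-table-checksum again with --replicate-check-only should also print the per-replica differences without re-checksumming.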
