Saturday, August 17, 2013

[how to] Merge multiple tables in a database

Merge multiple tables in a database

Posted: 17 Aug 2013 07:50 PM PDT

I have a MySQL database with tables named 1, 2, 3, ... up to 1000.

Each table has 2000 rows and the same structure.

This is how all 1000 of my tables are structured:

id,url,title,content  

id is of type INT, the primary key, with auto-increment.

Now I would like to create a new database named merged, with a table named merged_table.

Can someone tell me how to import all my tables into merged_table?

My problem is that every table has ids from 1 to 2000, so I have no idea how to import them.

I would like to have unique ids from 1 to 2,000,000 in my new merged_table.

I'm using Ubuntu, so if someone can suggest a terminal command, that would be awesome.
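
One way to do this is to let AUTO_INCREMENT renumber the rows: create the target table with its own id and insert only the non-id columns from each source table. A sketch, assuming the source database is named `mydb` and the columns are TEXT (adjust to the real types):

```sql
-- Hypothetical source database name: mydb; source tables `1` ... `1000`.
CREATE DATABASE merged;

CREATE TABLE merged.merged_table (
  id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  url     TEXT NOT NULL,
  title   TEXT,
  content TEXT
);

-- Omitting `id` from the column list lets AUTO_INCREMENT hand out
-- fresh ids (1 .. 2,000,000) across all source tables.
INSERT INTO merged.merged_table (url, title, content)
SELECT url, title, content FROM mydb.`1`;
-- ... repeat for `2` through `1000`. The 1000 statements can be
-- generated rather than typed:
SELECT CONCAT('INSERT INTO merged.merged_table (url, title, content) ',
              'SELECT url, title, content FROM mydb.`', table_name, '`;')
FROM information_schema.TABLES
WHERE table_schema = 'mydb';
```

Piping the generator's output back into the `mysql` client from the terminal runs the whole merge in one pass.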

MySQL 5.6.12: Lock wait timeout exceeded, yet no transactions/locks currently taking place?

Posted: 17 Aug 2013 06:58 PM PDT

On a development database with no users other than myself, I'm getting the error #1205 - Lock wait timeout exceeded; try restarting transaction when trying to run a simple ALTER statement.

ALTER TABLE proposals ADD STATUS INT( 1 ) NULL AFTER propnum ;  

If I run SHOW ENGINE INNODB STATUS, I get the following:

=====================================
2013-08-17 20:51:54 fa8 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 46 seconds
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 935 srv_active, 0 srv_shutdown, 104952 srv_idle
srv_master_thread log flush and writes: 105884
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 399
OS WAIT ARRAY INFO: signal count 408
Mutex spin waits 267, rounds 3238, OS waits 94
RW-shared spins 281, rounds 8430, OS waits 281
RW-excl spins 17, rounds 770, OS waits 23
Spin rounds per wait: 12.13 mutex, 30.00 RW-shared, 45.29 RW-excl
------------
TRANSACTIONS
------------
Trx id counter 29698
Purge done for trx's n:o < 29691 undo n:o < 0 state: running but idle
History list length 1091
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0, not started
MySQL thread id 1278, OS thread handle 0xfa8, query id 9178 localhost ::1 david init
SHOW ENGINE INNODB STATUS
--------
FILE I/O
--------
I/O thread 0 state: wait Windows aio (insert buffer thread)
I/O thread 1 state: wait Windows aio (log thread)
I/O thread 2 state: wait Windows aio (read thread)
I/O thread 3 state: wait Windows aio (read thread)
I/O thread 4 state: wait Windows aio (read thread)
I/O thread 5 state: wait Windows aio (read thread)
I/O thread 6 state: wait Windows aio (write thread)
I/O thread 7 state: wait Windows aio (write thread)
I/O thread 8 state: wait Windows aio (write thread)
I/O thread 9 state: wait Windows aio (write thread)
Pending normal aio reads: 0 [0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0] ,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
501 OS file reads, 3733 OS file writes, 1948 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 0, seg size 2, 0 merges
merged operations:
 insert 0, delete mark 0, delete 0
discarded operations:
 insert 0, delete mark 0, delete 0
Hash table size 591553, node heap has 2 buffer(s)
0.00 hash searches/s, 0.00 non-hash searches/s
---
LOG
---
Log sequence number 13235364
Log flushed up to   13235364
Pages flushed up to 13235364
Last checkpoint at  13235364
0 pending log writes, 0 pending chkp writes
1045 log i/o's done, 0.00 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 305856512; in additional pool allocated 0
Dictionary memory allocated 113054
Buffer pool size   18240
Free buffers       17691
Database pages     547
Old database pages 209
Modified db pages  0
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 0, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 450, created 97, written 2355
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 547, unzip_LRU len: 0
I/O sum[0]:cur[0], unzip sum[0]:cur[0]
--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
0 read views open inside InnoDB
Main thread id 1404, state: sleeping
Number of rows inserted 298, updated 336, deleted 9, read 136560
0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
----------------------------
END OF INNODB MONITOR OUTPUT
============================

To my limited knowledge, I don't see any pending transactions. If I stop/restart the MySQL service, I can run the statement, but this is the third time such an error has occurred, and I would like to get to the bottom of it so I can learn what is going wrong, if that's possible. Is this a known issue, or should I just restart the service every time this happens and move on?
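
A plain ALTER also needs a metadata lock, and an idle session that still has a transaction open (for example, a forgotten autocommit=0 connection that once read the table) can block it without showing up as an active query. While the ALTER is hanging, a sketch of how to look for the culprit:

```sql
-- Transactions InnoDB still considers open, even if their query is idle:
SELECT trx_id, trx_state, trx_started, trx_mysql_thread_id, trx_query
FROM information_schema.INNODB_TRX;

-- Who is blocking whom on row locks:
SELECT w.requesting_trx_id, w.blocking_trx_id,
       b.trx_mysql_thread_id AS blocking_thread
FROM information_schema.INNODB_LOCK_WAITS w
JOIN information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id;
```

KILLing the blocking thread id (or simply closing the stray client) usually lets the ALTER through without restarting the service.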

Why are some aggregates treated differently in the RHS of a rule in the model clause?

Posted: 17 Aug 2013 12:23 PM PDT

With my limited understanding of the model clause, I'd expected the following three queries to return the same results because there are no nulls in the data, and the functions are equivalent:

select *
from (select level k, 100 v from dual connect by level<=2)
model return updated rows
dimension by (k)
measures (v, 0 shr)
rules ( shr[any] = v[cv()]/sum(v)[any] );  --plain sum
/*
         K          V        SHR
---------- ---------- ----------
         1        100        0.5
         2        100        0.5
*/

select *
from (select level k, 100 v from dual connect by level<=2)
model return updated rows
dimension by (k)
measures (v, 0 shr)
rules ( shr[any] = v[cv()]/nullif(sum(v)[any],0) ); --with nullif
/*
         K          V        SHR
---------- ---------- ----------
         1        100       0.25   <------\___ why?
         2        100       0.25   <------/
*/

select *
from (select level k, 100 v from dual connect by level<=2)
model return updated rows
dimension by (k)
measures (v, 0 shr)
rules ( shr[any] = v[cv()]/decode(sum(v)[any],0,null,sum(v)[any]) );  -- with decode
/*
         K          V        SHR
---------- ---------- ----------
         1        100        0.5
         2        100        0.5
*/

What am I missing about the processing of rules that explains this behaviour?

SQLFiddle here

My log file is large, what should I do? [duplicate]

Posted: 17 Aug 2013 06:14 PM PDT

This question already has an answer here:

My transaction log is larger than I'd like it to be, and I want to know how I can fix that.

How to get SQL Server 2012 to use the invariant culture in format()?

Posted: 17 Aug 2013 03:12 PM PDT

I'm trying to get the built-in format() function in SQL Server 2012 to use the invariant culture.

It is said in the documentation that the function accepts a .NET culture identifier as the third parameter. The identifier for the invariant culture is a blank string:

You specify the invariant culture by name by using an empty string ("") in the call to a CultureInfo instantiation method.

That does not work with SQL Server however:

select format(getdate(), N'g', '');  

Msg 9818, Level 16, State 1, Line 1
The culture parameter '' provided in the function call is not supported.

It is also documented that the invariant culture is associated with the English language but not with any country/region. One might think this allows passing 'en' as the identifier, but in .NET, CultureInfo.InvariantCulture.Equals(CultureInfo.GetCultureInfo("")) yields true while CultureInfo.InvariantCulture.Equals(CultureInfo.GetCultureInfo("en")) yields false, so they aren't really the same.

So how do I make SQL Server use the invariant culture?

(Note: I'm interested in making the built-in function work. I already have my own CLR functions that do this; I was going to remove them in favor of the now-built-in functionality.)

Microsoft SQL Server 2012 (SP1) - 11.0.3128.0 (X64)
Dec 28 2012 20:23:12
Copyright (c) Microsoft Corporation
Business Intelligence Edition (64-bit) on Windows NT 6.0 (Build 6002: Service Pack 2) (Hypervisor)
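
One workaround, since the empty-string culture is rejected: use an explicit custom pattern, whose literal separators do not depend on culture, or fall back to CONVERT with a culture-independent style. A sketch ('en-US' below is a stand-in culture, not the true invariant one):

```sql
-- Explicit pattern: the literal '-', ' ' and ':' separators are fixed.
SELECT FORMAT(GETDATE(), 'yyyy-MM-dd HH:mm:ss', 'en-US');

-- CONVERT style 126 (ISO 8601) is culture-independent by definition.
SELECT CONVERT(varchar(30), GETDATE(), 126);
```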

Query approval/workflow for ad-hoc user direct access

Posted: 17 Aug 2013 09:52 AM PDT

I am coming into a project late, where non-IT users are being granted access to run ad-hoc read-only/SELECT-only queries against an Oracle database. The users will have Oracle accounts and connect directly with TOAD.

Automated reports are created for common reporting needs and presented to end users through Cognos. However, based on these basic reports, a select group of individuals will be given the ability to connect directly to the database to run ad-hoc queries for additional research/data mining. In order to control this activity, they are trying to implement a business process where the exact queries run, or at least their general form, will be preapproved (creating a whitelist); the logs will then be reviewed to ensure that only whitelisted queries were run, and any other queries would be subject to investigation.

I am not very happy with this setup from a risk-control standpoint: while the log review may be a suitable detective control and could be performed manually (fewer than 100 queries are expected to be run this way per work day), I would prefer that the end users not even have the ability to do something malicious (e.g., dump the whole database) in the first place, as this could create serious liability/risk given the data held in the database.

I have suggested that they develop the queries in a test environment on dummy data, refine them, and then have them run in production through a DBA or some other automated process, so that a non-whitelisted query cannot be run in the first place. They also claim that the queries will vary greatly, so they can't use some type of form-based app that just changes the key search terms and conditions.

My thought would be to create some type of app with a workflow where Alice enters a query and Bob has to approve it before it is actually executed. I was told there is no budget or time to create something like this. So I am wondering whether there is any built-in functionality or tool that provides a similar approve-before-execute flow. I would also be open to other suggestions that fit the parameters I am forced to deal with (business users must be able to submit ad-hoc queries and have them run within a reasonable time frame, and a custom app cannot be written to manage the workflow).

Whether or not to create separate tables

Posted: 17 Aug 2013 10:46 AM PDT

I am working on a project where I need to store information related to A and B. A and B have identical columns, and there might be thousands of As and Bs. Should I use a "type" column to differentiate between the two? Is there a performance issue with that, since every query would have to search the full table? I currently have 3 tables with a similar structure for A and B.
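
For comparison, a single-table sketch with an indexed discriminator column (table and column names here are placeholders): filtering on an indexed type column is an index range scan over that type's rows, not a full-table search.

```sql
CREATE TABLE ab_items (
  id        INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  item_type ENUM('A', 'B') NOT NULL,
  name      VARCHAR(255) NOT NULL,   -- placeholder for the shared columns
  KEY idx_item_type (item_type)
) ENGINE=InnoDB;

-- Served by idx_item_type, so only A rows are touched:
SELECT * FROM ab_items WHERE item_type = 'A';
```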

Error in running SQL Server script

Posted: 17 Aug 2013 10:12 AM PDT

I copied oldDatabase as newDatabase and modified newDatabase's schema.
Meanwhile, oldDatabase's data is still being processed and updated.

I want to copy oldDatabase's data to newDatabase.

What I have done is:

  1. Clean the data of newDatabase
  2. Generate a data-only script of oldDatabase
  3. Run this script in newDatabase.

And I got a lot of error messages like:

Cannot insert explicit value for identity column in table 'MyTableName' when IDENTITY_INSERT is set to OFF.
Invalid column name 'MyColumnName'.
Violation of PRIMARY KEY constraint 'PK_MyTableName'. Cannot insert duplicate key in object 'dbo.MyTableName'.

Some tables' data was inserted, but the tables hitting the errors above were not affected. How can I fix this? Is there a better way to do this?

PS

I run my script using this command:

sqlcmd -S myServerName -U userName -P password -i PathOfMyScriptFile
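
For the identity errors specifically, the generated inserts can be wrapped per table; a sketch with `dbo.MyTableName` and the columns standing in for the real ones. (The "Invalid column name" errors, by contrast, mean the two schemas have diverged, and the duplicate-key errors mean the target table still holds rows.)

```sql
SET IDENTITY_INSERT dbo.MyTableName ON;

-- An explicit column list is mandatory when supplying identity values.
INSERT INTO dbo.MyTableName (Id, SomeColumn)   -- placeholder columns
VALUES (1, 'example');

SET IDENTITY_INSERT dbo.MyTableName OFF;
```

Re-generating the script with SSMS's data-scripting options may also emit these wrappers automatically.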

Which one should I use for my criteria: triggers or events?

Posted: 17 Aug 2013 03:46 AM PDT

I have tables in a MySQL database: 1) absentees 2) Fee Collection 3) Sales 4) Purchases

When entries are inserted into these tables, I want to save my custom details in another table: 5) Events.

Example: suppose the attendance table has data inserted today. I want to take the count of absentees and insert that count into the Events table.

I want to do this task at 05:30 PM in the evening whenever insertions/updates were made to the above 4 tables.

Could anyone please tell me how to achieve this?
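
Since the requirement is "at 05:30 PM", a scheduled event fits better than triggers (triggers fire per row at insert time). A sketch assuming an `absent_date` column on `absentees` and an `events(event_date, absentee_count)` table, both of which are guesses at the real schema; the other three tables would get similar statements:

```sql
SET GLOBAL event_scheduler = ON;  -- the event scheduler must be running

CREATE EVENT ev_daily_absentee_count
ON SCHEDULE EVERY 1 DAY
STARTS TIMESTAMP(CURRENT_DATE, '17:30:00')
DO
  INSERT INTO events (event_date, absentee_count)
  SELECT CURRENT_DATE, COUNT(*)
  FROM absentees
  WHERE absent_date = CURRENT_DATE;
```

If days with no insertions should be skipped, a HAVING COUNT(*) > 0 clause on the SELECT does that.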

How to loop through AS400 table with cursor

Posted: 17 Aug 2013 10:06 AM PDT

I have an AS400 table that contains multiple rows per person. Each row contains data about a transaction by that person, including points assigned to the transaction. I read through all rows for one person, ordered by points. The first row, with the highest points, I leave alone; on subsequent rows the point values are cut in half and I do an update. I am currently using a cursor to open the table and a loop to read through the rows. I am getting a -508 error stating:

An UPDATE or DELETE statement with a WHERE CURRENT OF CL was attempted, but the cursor is not positioned on a row or is positioned on a row, but the row is not locked because a COMMIT HOLD or ROLLBACK HOLD statement released the lock on the row. A FETCH statement must be issued to position the cursor on a row and lock the row. Recovery . . . : Issue a FETCH statement to position the cursor on a row and lock the row; then, try the request again.

Part of my code is below:

DECLARE V_LNAME CHAR(30);
DECLARE V_LNAMEHOLD CHAR(30);
DECLARE V_FNAME CHAR(15);
DECLARE V_FNAMEHOLD CHAR(15);
DECLARE V_DOB DATE;
DECLARE V_DOBHOLD DATE;
DECLARE V_TRANSNMBR CHAR(9);
DECLARE V_TRANSNMBRHOLD CHAR(9);
DECLARE V_POINTS NUMERIC(5);
DECLARE V_POINTSHOLD NUMERIC(5);
DECLARE V_POINTSEQ NUMERIC(5);
DECLARE FIRSTRECORD CHAR(1);

DECLARE CL CURSOR FOR
  SELECT LNAME, FNAME, DOB, TRANSNCNMBR, TOPOINTS
  FROM DB_TRANSDATA
  ORDER BY LNAME ASC, FNAME ASC, DOB ASC, TOPOINTS DESC;

DECLARE CLHLD CURSOR FOR
  SELECT LNAME, FNAME, DOB, TRANSNCNMBR, TOPOINTS
  FROM DB_TRANSDATA
  ORDER BY LNAME ASC, FNAME ASC, DOB ASC, TOPOINTS DESC;

OPEN CLHLD;
FETCH CLHLD INTO V_LNAMEHOLD, V_FNAMEHOLD, V_DOBHOLD, V_TRANSNMBRHOLD;
CLOSE CLHLD;

OPEN CL;
SET FIRSTRECORD = 'Y';
LOOP
  FETCH CL INTO V_LNAME, V_FNAME, V_DOB, V_TRANSNMBR, V_POINTS, V_POINTSEQ;
  IF TRIM(CHAR(V_LNAME)) = TRIM(CHAR(V_LNAMEHOLD))
     AND TRIM(CHAR(V_FNAME)) = TRIM(CHAR(V_FNAMEHOLD))
     AND V_DOB = V_DOBHOLD
     AND V_TRANSNMBR = V_TRANSNMBRHOLD
     AND FIRSTRECORD = 'N' THEN
    SET V_POINTSEQ = V_POINTS * .5;

    UPDATE DB_TRANSDATA
    SET POINTSEQ = V_POINTSEQ
    WHERE CURRENT OF CL;

    SET V_LNAMEHOLD = V_LNAME;
    SET V_FNAMEHOLD = V_FNAME;
    SET V_DOBHOLD = V_DOB;
    SET V_TRANSNMBRHOLD = V_TRANSNMBR;
  ELSE
    UPDATE DB_TRANSDATA
    SET POINTSEQ = V_POINTS
    WHERE CURRENT OF CL;

    SET V_LNAMEHOLD = V_LNAME;
    SET V_FNAMEHOLD = V_FNAME;
    SET V_DOBHOLD = V_DOB;
    SET V_TRANSNMBRHOLD = V_TRANSNMBR;
    SET FIRSTRECORD = 'N';
  END IF;
END LOOP;
CLOSE CL;
END;
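
In DB2 for i, a cursor whose SELECT has an ORDER BY is typically read-only, so WHERE CURRENT OF cannot lock the fetched row, which matches the -508. One workaround (a sketch) is to update by the row's identifying columns instead, using the values just fetched; this assumes those four columns uniquely identify the row:

```sql
-- Replaces both "WHERE CURRENT OF CL" updates in the loop:
UPDATE DB_TRANSDATA
SET POINTSEQ = V_POINTSEQ
WHERE LNAME = V_LNAME
  AND FNAME = V_FNAME
  AND DOB = V_DOB
  AND TRANSNCNMBR = V_TRANSNMBR;
```

Separately, the CL cursor selects five columns while the FETCH lists six host variables (V_POINTSEQ has no matching column); that mismatch will also need fixing.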

Bulk insert with format file: 0 rows affected

Posted: 17 Aug 2013 12:06 PM PDT

When using the BULK INSERT command in SQL Server 2008, it returns:

(0 row(s) affected)  

I am using this command to carry out the bulk insert:

BULK INSERT Test
FROM 'C:\DataFiles\Tests.dat'
WITH (FORMATFILE = 'C:\DataFiles\FormatFiles\TestFormat.Fmt');
GO

Tests.dat contains:

b00d23fe-580e-42dc-abd4-e8a054395126,48dd5dd6e3a144f7a817f234dd51469c,452eb8ce-6ae2-4e7a-a389-1097882c83ab,,, ,,,,Aria,,,160,,,86400,,2004-04-03 23:23:00.000,,2012-07-06 13:26:31.633,2012-07-06 13:27:44.650,3,,,,51B7A831-4731-4E2E-ACEC-06636ADC7AD3,,0,,0,,Field Name 1,,Field Name 2,,Field Name 3,,Field Name 4,

and the format file TestFormat.fmt contains:

9.0
39
1     SQLCHAR   0   37    ","      1    Key                     ""
2     SQLCHAR   0   37    ","      2    TestType                ""
3     SQLCHAR   0   37    ","      3    CaseKey                 ""
4     SQLCHAR   0   30    ","      4    Height                  ""
5     SQLCHAR   0   30    ","      5    Weight                  ""
6     SQLCHAR   0   128   ","      6    PacemakerType           Latin1_General_CI_AI
7     SQLCHAR   0   0     ","      7    Diary                   Latin1_General_CI_AI
8     SQLCHAR   0   0     ","      8    Indication              Latin1_General_CI_AI
9     SQLCHAR   0   0     ","      9    Medication              Latin1_General_CI_AI
10    SQLCHAR   0   37    ","      10   RecorderType            ""
11    SQLCHAR   0   100   ","      11   RecorderSerial          Latin1_General_CI_AI
12    SQLCHAR   0   0     ","      12   Comments                Latin1_General_CI_AI
13    SQLCHAR   0   12    ","      13   Status                  ""
14    SQLCHAR   0   0     ","      14   AdditionalData          Latin1_General_CI_AI
15    SQLCHAR   0   37    ","      15   OrderKey                ""
16    SQLCHAR   0   12    ","      16   Duration                ""
17    SQLCHAR   0   12    ","      17   Age                     ""
18    SQLCHAR   0   24    ","      18   RecordingStartDateTime  ""
19    SQLCHAR   0   128   ","      19   Ward                    Latin1_General_CI_AI
20    SQLCHAR   0   24    ","      20   CreatedDateTime         ""
21    SQLCHAR   0   24    ","      21   UpdatedDateTime         ""
22    SQLCHAR   0   21    ","      22   UserGroupBits           ""
23    SQLCHAR   0   24    ","      23   LastArchive             ""
24    SQLCHAR   0   128   ","      24   PointOfCare             Latin1_General_CI_AI
25    SQLCHAR   0   128   ","      25   Bed                     Latin1_General_CI_AI
26    SQLCHAR   0   37    ","      26   DownloadFacilityKey     ""
27    SQLCHAR   0   37    ","      27   AnalysisFacilityKey     ""
28    SQLCHAR   0   12    ","      28   Priority                ""
29    SQLCHAR   0   37    ","      29   FacilityKey             ""
30    SQLCHAR   0   12    ","      30   PacemakerTypeStandard   ""
31    SQLCHAR   0   128   ","      31   TestTypeName            Latin1_General_CI_AI
32    SQLCHAR   0   128   ","      32   UserDefined1Name        Latin1_General_CI_AI
33    SQLCHAR   0   128   ","      33   UserDefined1Value       Latin1_General_CI_AI
34    SQLCHAR   0   128   ","      34   UserDefined2Name        Latin1_General_CI_AI
35    SQLCHAR   0   128   ","      35   UserDefined2Value       Latin1_General_CI_AI
36    SQLCHAR   0   128   ","      36   UserDefined3Name        Latin1_General_CI_AI
37    SQLCHAR   0   128   ","      37   UserDefined3Value       Latin1_General_CI_AI
38    SQLCHAR   0   128   ","      38   UserDefined4Name        Latin1_General_CI_AI
39    SQLCHAR   0   128   "\r\n"   39   UserDefined4Value       Latin1_General_CI_AI

I cannot figure out why this isn't working. Other people have had similar problems because they had more fields than actual columns in their database, or because they were using .csv files, which apparently are not supported.

This works fine on every other table in the database I am importing, with no errors, so I can't understand why it doesn't work here.
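
When BULK INSERT reports 0 rows without raising an error, pointing it at an error file usually surfaces the rejected rows. A sketch using the paths from the question plus a hypothetical error-file path:

```sql
BULK INSERT Test
FROM 'C:\DataFiles\Tests.dat'
WITH (
    FORMATFILE = 'C:\DataFiles\FormatFiles\TestFormat.Fmt',
    ERRORFILE  = 'C:\DataFiles\Tests_errors.log',  -- rejected rows are written here
    MAXERRORS  = 0                                 -- fail loudly on the first bad row
);
```

One common culprit with a format file like this is the row terminator: if the data file ends its lines with `\n` rather than the `"\r\n"` declared for field 39, the last field never terminates and the single-line file yields no rows.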

Does openquery use distributed transactions?

Posted: 17 Aug 2013 06:06 AM PDT

Does openquery in SQL Server 2005 use distributed transactions? If so, in which cases does it?

Strange Locking issues while playing back logs via percona playback

Posted: 17 Aug 2013 09:06 AM PDT

Problem while replaying logs

I am in the process of benchmarking a new DB node (specs at the end) and have run across some strange behavior:

As described here, I:

  • Created a dump (innobackupex ftw)
  • I logged all my queries for an hour
  • Set up my new db (same my.cnf as the live db, only with a higher innodb_buffer_pool_size)
  • Started the replay of my slow query log

As per the documentation:

percona-playback --mysql-host=127.0.0.1 \
  --mysql-user=root --mysql-schema=my_db \
  --query-log-file=slow.log

This works fine for about 15 minutes; then I start getting strange locking problems:

Error during query: Lock wait timeout exceeded; try restarting transaction, number of tries 0  

I started debugging the current load on the db and found that only one query was running:

(taken from innodb status)

---TRANSACTION 1C5264768, ACTIVE 44 sec inserting
mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 376, 1 row lock(s)
MySQL thread id 4289, OS thread handle 0x7f7fb0779700, query id 77515 localhost 127.0.0.1 root update
insert into sessions (a, b, c, d, e, e, f, g, h, i, j, k, l, m, n, o, p, q) values (0, 682,
------- TRX HAS BEEN WAITING 44 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 4549 page no 7875876 n bits 104 index `PRIMARY` of table `production`.`sessions` trx id 1C5264768 lock_mode X insert intention waiting
------------------
TABLE LOCK table `production`.`sessions` trx id 1C5264768 lock mode IX
RECORD LOCKS space id 4549 page no 7875876 n bits 104 index `PRIMARY` of table `production`.`sessions` trx id 1C5264768 lock_mode X insert intention waiting
---TRANSACTION 1C526475D, ACTIVE (PREPARED) 452 sec
2 lock struct(s), heap size 376, 1 row lock(s)
MySQL thread id 1722, OS thread handle 0x7f7fb083d700, query id 77311 localhost 127.0.0.1 root
Trx read view will not see trx with id >= 1C526475E, sees < 1C525BA04
TABLE LOCK table `production`.`sessions` trx id 1C526475D lock mode IX
RECORD LOCKS space id 4549 page no 7875876 n bits 104 index `PRIMARY` of table `production`.`sessions` trx id 1C526475D lock_mode X
----------------------------
END OF INNODB MONITOR OUTPUT

And only one table open:

mysql> SHOW OPEN TABLES from production where In_use != 0;
+------------+----------+--------+-------------+
| Database   | Table    | In_use | Name_locked |
+------------+----------+--------+-------------+
| production | sessions |      1 |           0 |
+------------+----------+--------+-------------+
1 row in set (0.00 sec)

The situation stays like this for about 3-4 minutes, and then playback suddenly continues.

These issues do not happen on the live db: we have some issues with locking but we have never exceeded the innodb_lock_wait_timeout value.

I am most likely missing something obvious, but for the life of me I can't figure it out: why would the replay hang like that, or better yet, why would MySQL remain in this locked state?

The relevant entries in the slow log are from our jee server:

XA START 0xbe681101606ce8d1676630322c7365727665722c5035313337,0x676630322c7365727665722c50353133372c00,0x4a5453;
insert into sessions (a, b, c, d, e, e, f, g, h, i, j, k, l, m, n, o, p, q) values (0, 682, ...);
XA END 0xbe681101606ce8d1676630322c7365727665722c5035313337,0x676630322c7365727665722c50353133372c00,0x4a5453;

Does hibernate's transaction handling have anything to do with the way the lock is generated and not closed?
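
Possibly: the second transaction in the monitor output is "ACTIVE (PREPARED)". An XA transaction that was prepared but whose XA COMMIT was never replayed keeps its locks until it is resolved, which would explain a stall that only clears minutes later. A sketch of checking for and clearing such leftovers (the xid literal below is just the one from the question's log):

```sql
-- List XA transactions stuck in the prepared state:
XA RECOVER;

-- Resolve one by its xid, taken from the XA RECOVER output:
XA ROLLBACK 0xbe681101606ce8d1676630322c7365727665722c5035313337,0x676630322c7365727665722c50353133372c00,0x4a5453;
```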

Server Specs

  • Ubuntu 12.04.2 LTS
  • percona-server-server-5.5 version 5.5.32-rel31.0-549.precise

Relevant config:

max_connections         = 1500
sort_buffer_size        = 1M
thread_cache_size       = 1000
max_heap_table_size     = 512M
tmp_table_size          = 512M
join_buffer_size        = 67108864
expand_fast_index_creation = ON
open_files_limit        = 65535
table_definition_cache  = 4096
table_open_cache        = 262144
max_allowed_packet      = 16M
thread_stack            = 192K
query_cache_limit       = 1M
query_cache_size        = 512M
thread_concurrency      = 8
query_cache_type        = 1
long_query_time         = 2
log_slave_updates       = 1
expire_logs_days        = 10
max_binlog_size         = 100M

Innodb config:

default_storage_engine   = InnoDB
innodb_file_per_table    = 1
innodb_old_blocks_time   = 1000
innodb_buffer_pool_size  = 163456M
innodb_log_file_size     = 256M
innodb_flush_method      = O_DIRECT
innodb_read_io_threads   = 4
innodb_write_io_threads  = 4
innodb_doublewrite       = FALSE
innodb_flush_log_at_trx_commit = 2

Thanks for any help or experience in this area!

Edit

I have been playing with some of the InnoDB variables and, with the help of innodb_show_verbose_locks, have been able to determine a bit more. In this example:

---TRANSACTION 1C52D8AB4, ACTIVE 49 sec inserting
mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 376, 1 row lock(s)
MySQL thread id 18602, OS thread handle 0x7f007a4a0700, query id 624263 localhost 127.0.0.1 root update
INSERT INTO `images` (A,B,C...) VALUES (....)
------- TRX HAS BEEN WAITING 49 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 51 page no 16791 n bits 152 index `PRIMARY` of table `production`.`images` trx id 1C52D8AB4 lock_mode X insert intention waiting
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
 0: len 8; hex 73757072656d756d; asc supremum;;
------------------
TABLE LOCK table `production`.`images` trx id 1C52D8AB4 lock mode IX
RECORD LOCKS space id 51 page no 16791 n bits 152 index `PRIMARY` of table `production`.`images` trx id 1C52D8AB4 lock_mode X insert intention waiting
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
 0: len 8; hex 73757072656d756d; asc supremum;;
---TRANSACTION 1C52D8AA9, ACTIVE 151 sec
2 lock struct(s), heap size 376, 1 row lock(s)
MySQL thread id 18460, OS thread handle 0x7f007454e700, query id 624243 localhost 127.0.0.1 root
TABLE LOCK table `production`.`images` trx id 1C52D8AA9 lock mode IX
RECORD LOCKS space id 51 page no 16791 n bits 152 index `PRIMARY` of table `production`.`images` trx id 1C52D8AA9 lock_mode X
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
 0: len 8; hex 73757072656d756d; asc supremum;;

Both transactions 1C52D8AA9 and 1C52D8AB4 hold locks on record 73757072656d756d (the "supremum" pseudo-record), which is fine, as I gather from this post, since InnoDB uses multiple-granularity locking. However, the follow-up X locking (seen here: "id 1C52D8AB4 lock_mode X insert intention waiting") is missing.

Greyed out menu options, broken activity monitor

Posted: 17 Aug 2013 10:05 AM PDT

I recently installed SP1 on my SQL Server 2012 instance. Ever since then, SSMS has behaved a little oddly when I connect from my PC rather than from the local server itself.

I no longer see the green "running" icons on the server and on the agent, and several menu options are greyed out (Start/Stop/Restart on server menu and agent menu).

[screenshot: object tree with greyed-out Start/Stop/Restart options on the server and agent menus]

Also, when I try to access the Activity Monitor, I get the following error:

[screenshot: Activity Monitor error dialog, with error detail]

I found the following article about Activity Monitor. I followed the steps there (although I am a local admin on the SQL Server and should already have had all of those permissions), to no avail. The server is running Windows 2008 R2 Datacenter SP1. When I remote desktop to the server itself and use SSMS directly, I have none of these issues, and when a domain admin connects to the server through SSMS remotely, everything also works for him.

Any ideas?

How to change a column type from SET to an ENUM

Posted: 17 Aug 2013 11:06 AM PDT

I have a table with a SET column that needs to be changed to an ENUM type. Can I do this with a simple ALTER, or will it mess up my data? NB: there are no multiple values (a,b,c) in the table.
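
Given that no row holds multiple members, a direct ALTER should convert the values in place. A sketch with placeholder table, column, and member names, including a dry-run check first:

```sql
-- Verify no multi-valued SET entries exist (commas would be lost):
SELECT COUNT(*) FROM my_table WHERE my_col LIKE '%,%';

-- If that returns 0, convert in place, keeping the member list identical:
ALTER TABLE my_table MODIFY my_col ENUM('a', 'b', 'c') NOT NULL;
```

Any stored value not present in the new ENUM list would be coerced to the empty string (or rejected in strict mode), so the member lists should match exactly.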

Create a trigger to update table data on another Server's database

Posted: 17 Aug 2013 08:07 PM PDT

I am creating a trigger in MySQL and I need a little help.

I have 2 websites, 2 databases (same name) on 2 different web servers, S1 & S2.

These databases have the same tables names.

I want the user data on both websites to be the same.

So if one user registers on S1, then that user registration information should be passed to S2.

If a user registration information is updated on S1, the same information should be updated on S2.

And the same applies for S2.

How can I create a trigger so that every time there is an insert/update/delete in the database on S1, the user table on S2 also gets automatically updated?

And every time there is an insert/update/delete in the database on S2, the user table on S1 also gets automatically updated?

Is this possible? Could you provide some examples?
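
A trigger by itself cannot write to another server, but a trigger on a FEDERATED table pointing at the remote database can approximate this. A sketch for one direction (host, credentials, and columns are placeholders); mirroring it in both directions risks update loops and failed writes whenever the link is down, which is why MySQL master-master replication of the user table is usually the sturdier answer:

```sql
-- On S1: a FEDERATED shadow of S2's users table.
CREATE TABLE users_on_s2 (
  id   INT NOT NULL PRIMARY KEY,
  name VARCHAR(100)
) ENGINE=FEDERATED
  CONNECTION='mysql://repl_user:repl_pass@s2.example.com:3306/mydb/users';

DELIMITER //
CREATE TRIGGER users_after_insert
AFTER INSERT ON users
FOR EACH ROW
BEGIN
  -- Every local insert is copied to S2 over the federated link.
  INSERT INTO users_on_s2 (id, name) VALUES (NEW.id, NEW.name);
END//
DELIMITER ;
```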

Tool to export data with all relational data?

Posted: 17 Aug 2013 06:19 PM PDT

Is there a tool to export data from selected rows in a table together with all related data stored in other tables, linked by the relational design?

The purpose is to ease migration of bits of data between servers for ad-hoc migrations. I am looking specifically for a tool for MySQL InnoDB with defined foreign keys.

MySQL Full Text search increase relevance for exact matches?

Posted: 17 Aug 2013 01:07 PM PDT

I have a MySQL database of ~10,000 organisation names that I want to be able to search. I would like to use a full text search because this would enable me to find "institute of doobry" by searching "doobry institute" etc.

The problem I have is that I have lots of entries like "institute of doobry canteen" and "institute of doobry alumni association".

MATCH (names) AGAINST ("doobry institute")  

will return all of these records and will not score institute of doobry higher than the canteen.

I'd sort of like to say: score it higher if the word count is similar.

Is there any way to achieve this?
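
One sketch of this idea: penalise the full-text score by how much longer the stored name is than the search phrase, so "institute of doobry" outranks "institute of doobry alumni association" (table and column names here are placeholders):

```sql
SELECT name,
       MATCH(name) AGAINST ('doobry institute') AS ft_score,
       -- shorter names that still match rank higher
       MATCH(name) AGAINST ('doobry institute')
         / (1 + ABS(CHAR_LENGTH(name) - CHAR_LENGTH('doobry institute'))) AS adj_score
FROM organisations
WHERE MATCH(name) AGAINST ('doobry institute')
ORDER BY adj_score DESC;
```

Word count (e.g. comparing counts of spaces) would work the same way; character length is just the cheaper proxy.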

mysql: need help to optimize my query/table

Posted: 17 Aug 2013 02:07 PM PDT

I'm wondering if someone could help me optimize my tables/query to speed up a query that is currently running ridiculously slowly. I think a well-thought-out index could help me. Any help would be really appreciated.

The URLS and TAGS tables mentioned below have 2 million and 20 million rows respectively (and will probably end up with 10x that). A query like the one below already takes 10 seconds to run.

An Example: http://whatrethebest.com/php+tutorials

Tables

CREATE TABLE IF NOT EXISTS `TAGS` (
  `hash` varchar(255) NOT NULL,
  `tag` varchar(255) NOT NULL,
  UNIQUE KEY `my_unique_key` (`hash`,`tag`),
  KEY `tag` (`tag`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

and

CREATE TABLE IF NOT EXISTS `URLS` (
  `url` text NOT NULL,
  `domain` text,
  `title` text NOT NULL,
  `description` text,
  `numsaves` int(11) NOT NULL,
  `firstsaved` varchar(256) DEFAULT NULL,
  `md5` varchar(255) NOT NULL DEFAULT '',
  PRIMARY KEY (`md5`),
  UNIQUE KEY `md5` (`md5`),
  KEY `numsaves` (`numsaves`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

QUERY

SELECT urls.md5, urls.url, urls.title, urls.numsaves
FROM urls
JOIN tags ON urls.md5 = tags.hash
WHERE tags.tag IN ('php', 'tutorials')
GROUP BY urls.md5
HAVING COUNT(*) = 2
ORDER BY urls.numsaves DESC
LIMIT 20;

EXPLAIN

I'm not sure what this shows

id  select_type  table  type    possible_keys      key      key_len  ref                                   rows    Extra
1   SIMPLE       tags   range   my_unique_key,tag  tag      767      NULL                                  230946  Using where; Using index; Using temporary; Using filesort
1   SIMPLE       urls   eq_ref  PRIMARY,md5        PRIMARY  767      jcooper_whatrethebest_urls.tags.hash  1

So I think the problem is:

Certain tags like 'php' have 34,000 entries, most of which have under 5 saves, but in order to get the 20 most-saved it has to sort them all. Right?

I can't really create a 'numsaves' column in TAGS and index on that, because that number will be changing up and down, and that wouldn't make sense. Is it possible to create a cross-table index between urls.numsaves and tags.tag? Or a third table to use in my query somehow? Would this solve my problem? I know almost nothing about indexing.
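MySQL cannot build an index that spans two tables, but one option, contrary to the worry above, is exactly the denormalization described: keep a copy of numsaves in TAGS, maintained automatically, and index (tag, numsaves) so a single-tag top-20 can be read straight off the index. A sketch only, against the schema above; the trigger name is hypothetical:

```sql
ALTER TABLE TAGS
  ADD COLUMN numsaves INT NOT NULL DEFAULT 0,
  ADD KEY tag_numsaves (tag, numsaves);

-- keep the copy current whenever URLS.numsaves changes
CREATE TRIGGER urls_numsaves_sync AFTER UPDATE ON URLS
FOR EACH ROW
  UPDATE TAGS SET numsaves = NEW.numsaves WHERE hash = NEW.md5;
```

The trade-off is extra write cost on every save, in exchange for avoiding the filesort over all 34,000 rows on every read.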

Any help would be really appreciated!


EDITS BELOW

RESPONSE TO YperCube:

Thank you, thank you! Your suggestions have sped up my queries by a factor of 10-20x. This is an immense improvement. I can't thank you enough.

I'm posting my current queries and tables with execution times in case you or anyone else has any more optimization suggestions. I am worried that as my table grows I may not be able to keep my search times under 3 seconds, which would be a killer.

New Query Example 1

SELECT u.id, u.url, u.title, u.numsaves
FROM urls AS u
JOIN tags AS t1 ON t1.url_id = u.id AND t1.tag = 'programming'
JOIN tags AS t2 ON t2.url_id = u.id AND t2.tag = 'language'
ORDER BY u.numsaves DESC
LIMIT 20;

Showing rows 20 - 19 (20 total, Query took 0.2009 sec)

id  select_type  table  type    possible_keys           key      key_len  ref                                         rows   Extra
1   SIMPLE       t2     ref     tag_id                  tag_id   767      const                                       53820  Using where; Using index; Using temporary; Using filesort
1   SIMPLE       t1     ref     tag_id                  tag_id   772      const,jcooper_whatrethebest_urls.t2.url_id  1      Using where; Using index
1   SIMPLE       u      eq_ref  PRIMARY,id_numsaves_IX  PRIMARY  4        jcooper_whatrethebest_urls.t2.url_id        1

New Query Example 2 (seems to be slower)

SELECT u.id, u.url, u.title, u.numsaves
FROM urls AS u
JOIN (
  SELECT ui.id, ui.numsaves
  FROM urls AS ui
  JOIN tags AS t1 ON t1.url_id = ui.id AND t1.tag = 'programming'
  JOIN tags AS t2 ON t2.url_id = ui.id AND t2.tag = 'language'
  ORDER BY ui.numsaves DESC
  LIMIT 20
) AS ulim ON ulim.id = u.id
ORDER BY ulim.numsaves DESC;

Showing rows 0 - 29 (2,794,577 total, Query took 0.4633 sec)

id  select_type  table       type    possible_keys           key      key_len  ref                                   rows   Extra
1   PRIMARY      <derived2>  ALL     NULL                    NULL     NULL     NULL                                  20     Using filesort
1   PRIMARY      u           eq_ref  PRIMARY,id_numsaves_IX  PRIMARY  4        ulim.id                               1
2   DERIVED      t2          ref     tag_id                  tag_id   767                                            53820  Using where; Using index; Using temporary; Using filesort
2   DERIVED      t1          ref     tag_id                  tag_id   772      jcooper_whatrethebest_urls.t2.url_id  1      Using where; Using index
2   DERIVED      ui          eq_ref  PRIMARY,id_numsaves_IX  PRIMARY  4        jcooper_whatrethebest_urls.t2.url_id  1

Using Query Example on a Single Tag (slower by a lot)

SELECT u.id, u.url, u.title, u.numsaves
FROM urls AS u
JOIN tags AS t1 ON t1.url_id = u.id AND t1.tag = 'programming'
ORDER BY u.numsaves DESC
LIMIT 20;

Showing rows 20 - 19 (20 total, Query took 3.7395 sec)

id  select_type  table  type    possible_keys           key      key_len  ref                                   rows    Extra
1   SIMPLE       t1     ref     tag_id                  tag_id   767      const                                 200576  Using where; Using index; Using temporary; Using filesort
1   SIMPLE       u      eq_ref  PRIMARY,id_numsaves_IX  PRIMARY  4        jcooper_whatrethebest_urls.t1.url_id  1

I'm not sure why this one is so much slower.

Do you have any ideas of a query to optimize for querying a single tag?

My Current Tables

CREATE TABLE `urls` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `url` text NOT NULL,
  `domain` text,
  `title` text NOT NULL,
  `description` text,
  `numsaves` int(11) NOT NULL,
  `firstsaved` varchar(256) DEFAULT NULL,
  `md5` varchar(255) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`),
  UNIQUE KEY `md5` (`md5`),
  KEY `id_numsaves_IX` (`id`,`numsaves`)
) ENGINE=InnoDB AUTO_INCREMENT=2958560 DEFAULT CHARSET=utf8

CREATE TABLE `tags` (
  `url_id` int(11) DEFAULT NULL,
  `hash` varchar(255) NOT NULL,
  `tag` varchar(255) NOT NULL,
  UNIQUE KEY `tag_id` (`tag`,`url_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

Thank you again

MySQL replication using a lot of IO

Posted: 17 Aug 2013 04:06 AM PDT

I'm having big troubles on a database pool I've setup recently.

There is a master, and 4 slaves replicating one DB. All using MyISAM engine.

The thing is, while a slave is catching up (i.e. not yet up to date), iotop shows mysqld at 99% IO.

Besides, I'm having a big latency when connecting a web application to any of these databases (master included).

All these servers are virtual machines running Ubuntu 12.04, having 4vcpu each and 12GB RAM.

If you need more information about the config (system/mysql), please tell me, I'll update my post.

I need some hints about where to look, because so far I have not been able to identify the problem.

EDIT:

I'm adding the conf files for the master and for a replica :

master configuration file

replica configuration file

Replication on MySQL server

Posted: 17 Aug 2013 04:07 PM PDT

I had to stop the slave server to test something. After I started the server again, there is a problem with replication on the MySQL server.

On the problematic server

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Connecting to master
                  Master_Host: servera
                  Master_User: replica
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: servera-bin.000024
          Read_Master_Log_Pos: 808459481
               Relay_Log_File: serverb-relay-bin.000071
                Relay_Log_Pos: 4
        Relay_Master_Log_File: servera-bin.000024
             Slave_IO_Running: No
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 808459481
              Relay_Log_Space: 106
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 1129
                Last_IO_Error: error connecting to master 'replica@servera:3306' - retry-time: 60  retries: 86400
               Last_SQL_Errno: 0
               Last_SQL_Error:

on the problematic server:

SELECT user, host FROM mysql.user WHERE Repl_slave_priv = 'Y';
+---------+-----------+
| user    | host      |
+---------+-----------+
| root    | localhost |
| root    | serverb   |
| root    | 127.0.0.1 |
| replica | servera   |
| replica | serverb   |
+---------+-----------+

on the main server:

SELECT user, host FROM mysql.user WHERE Repl_slave_priv = 'Y';
+---------+-----------+
| user    | host      |
+---------+-----------+
| root    | localhost |
| root    | servera   |
| root    | 127.0.0.1 |
| replica | servera   |
| replica | serverb   |
+---------+-----------+

According to what I've read, the following command needs to be executed on the main server:

mysql> FLUSH HOSTS;   

What will happen then? If any applications are connected to it, will they be disconnected as well?

Merge Replication identity field issues

Posted: 17 Aug 2013 03:07 PM PDT

One of our clients is using our software with merge replication of a database on a SQL Server 2008 R2 machine. There are two production environments in separate geographical locations, only one of which is live at any one time: basically one live and one on standby. Only the live database is updated by the applications. Every couple of months they fail over between the datacentres and the standby environment becomes the live one. There is an instance of SQL Server 2008 in each datacentre, and merge replication is used to keep them in sync. This was all working OK until the beginning of the year, when we started getting replication errors on some lTID columns in various tables that have the Identity property set.

The errors were like this one:

The insert failed. It conflicted with an identity range check constraint in database 'GateMain', replicated table 'dbo.tGateCalcsLog', column 'lTID'. If the identity column is automatically managed by replication, update the range as follows: for the Publisher, execute sp_adjustpublisheridentityrange; for the Subscriber, run the Distribution Agent or the Merge Agent.

Then, after the last failover, we noticed an issue with the lTID values in one specific table. Our application relies on lTID values always incrementing in order, such that the highest lTID value is always the newest entry in the table. We've found that, because of how the identity ranges are managed by replication, after a failover the now-live server may be allocated a range of identity values lower than those already present in the table. Is there a way to manage this in merge replication so we can guarantee that the next identity value allocated to the lTID column is greater than any lTID currently in the table? Or do we need to use a different type of replication, or possibly mirroring?

How can I replicate some tables without transferring the entire log?

Posted: 17 Aug 2013 05:06 AM PDT

I have a mysql database that contains some tables with private information, and some tables with public information.

I would like to replicate only the tables containing public information from one database to another, making sure that NO confidential information ever gets stored on the slave.

I know I can use the replicate-do-table to specify that only some tables are replicated, but my understanding is that the entire bin log is transferred to the slave.

Is there a way to ensure that only the public information is transferred to the slave?
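Correct: replicate-do-table filters on the slave, after the full binary log has already been transferred. To keep private data off the slave entirely, the filtering has to happen on the master, before events are written to the binary log at all. A my.cnf sketch; the database names are hypothetical, and it assumes the private and public tables live in separate databases:

```ini
# On the master: only statements touching the public database are logged
[mysqld]
binlog-do-db = public_db
# or, inversely:
# binlog-ignore-db = private_db
```

The usual caveat applies: with statement-based replication these filters match on the *default* database of each statement, so keeping confidential tables in their own database and always qualifying table names correctly is important for this to be safe.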

MySQL Slaves lag behind master

Posted: 17 Aug 2013 05:07 PM PDT

I have one master and four slaves. Sometimes all my slaves lag behind the master. I have implemented heartbeat monitoring for replication lag, and now I am trying to find out why the slaves are lagging.

I looked at today's slow queries on the master and found that the slowest DML query took 138 seconds. But the slaves were lagging by about 1400 seconds, and there were no slow DML queries (update, delete, insert, etc.) on the slaves at all.

Points to be taken into consideration:

  1. All tables are InnoDB.
  2. 68 GB of RAM (Master as well as slaves).
  3. Data size about 1 TB.
  4. Master and slaves have been running for a long time.

What may be the reason for lag?

content types of insertion

Posted: 17 Aug 2013 02:06 AM PDT

I have a CSV file that contains a lot of integer values. Is it better to insert the individual values in bulk within a single transaction, or should I load the CSV file itself directly into MySQL?
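MySQL can parse the file server-side with LOAD DATA INFILE, which is typically much faster than row-by-row INSERTs. A sketch, assuming a hypothetical one-column table and file path:

```sql
LOAD DATA LOCAL INFILE '/tmp/values.csv'
INTO TABLE my_integers
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(value);
```

If you insert with SQL instead, batching many rows per INSERT statement inside one transaction captures most of the same benefit.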

Indexing a longtext field

Posted: 17 Aug 2013 03:06 AM PDT

I would like to run an index on a longtext field using:

CREATE INDEX post_meta ON wp_postmeta (meta_value(8));

There are currently ~1 million records.

Questions:

  1. Will creating this index affect the data in any way whatsoever? Drop leading 0's or anything like that?
  2. Is there any reason NOT to do this? There are many rows with content longer than 8 characters, but I frequently query on a type of entry that is 8 characters or less.
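On the first point: a prefix index never modifies the stored values; it only indexes the first N characters, so leading zeros and everything else are preserved. The main caveat is selectivity: if many rows share the same first 8 characters, the index narrows the search less. You can estimate this before creating the index (a sketch against the table in the question):

```sql
-- fraction of rows distinguishable by their first 8 characters;
-- a value close to 1.0 means the prefix index will be highly selective
SELECT COUNT(DISTINCT LEFT(meta_value, 8)) / COUNT(*) AS prefix_selectivity
FROM wp_postmeta;
```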

Why Does the Transaction Log Keep Growing or Run Out of Space?

Posted: 17 Aug 2013 06:22 PM PDT

This one seems to be a common question in most forums and all over the web; it is asked here in many forms that typically sound like this:

In SQL Server -

  • What are some reasons the transaction log grows so large?
  • Why is my log file so big?
  • What are some ways to prevent this problem from occurring?
  • What do I do when I get myself on track with the underlying cause and want to put my transaction log file to a healthy size?

Partitioning a table will boost the performance?

Posted: 17 Aug 2013 10:37 AM PDT

I have a very large table, approximately 28 GB and continually growing. I am thinking about partitioning it.

The table is InnoDB and innodb_file_per_table is enabled.

I have a field named created, which is an integer field storing a timestamp, so I was thinking about partitioning the table like this:

ALTER TABLE TABLE_NAME PARTITION BY RANGE (created) (
  PARTITION p0 VALUES LESS THAN (1325356200) ENGINE = InnoDB, # data before 2012
  PARTITION p1 VALUES LESS THAN (1333218600) ENGINE = InnoDB, # Jan-Mar 2012
  PARTITION p2 VALUES LESS THAN (1341081000) ENGINE = InnoDB, # next three months of 2012
  PARTITION p3 VALUES LESS THAN (1349029800) ENGINE = InnoDB, # next three months of 2012
  PARTITION p4 VALUES LESS THAN (1356978600) ENGINE = InnoDB, # next three months of 2012
  PARTITION p5 VALUES LESS THAN MAXVALUE                      # data for later years
);

I have partitioned the year 2012 into quarters (as our requirements dictate) and will do the same for 2013, though that is not shown here.

Questions:

  1. Does partitioning boost INSERT and SELECT performance?
  2. How long will the ALTER take, given the huge amount of data (is there any way to minimize the ALTER time)?
  3. Do I need to use the created field in my queries so that the optimizer can use the partitions?
  4. Are there any limitations or risks in partitioning this table?
  5. Is there any way to ALTER the table without downtime?
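On question 3: yes, partition pruning only happens when the partitioning column appears in the WHERE clause, and you can verify which partitions a query actually touches with EXPLAIN PARTITIONS (available in MySQL 5.1+; the WHERE range below is illustrative):

```sql
EXPLAIN PARTITIONS
SELECT * FROM TABLE_NAME
WHERE created >= 1325356200 AND created < 1333218600;
-- the "partitions" column of the output should list only p1,
-- not every partition, if pruning is working
```

On question 5, tools such as pt-online-schema-change can perform this kind of table rebuild with minimal blocking, at the cost of extra disk space and triggers during the copy.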
