Sunday, March 10, 2013

[how to] How to model inheritance of two tables mysql

How to model inheritance of two tables mysql

Posted: 10 Mar 2013 08:45 PM PDT

I have some tables where I store data. Depending on the type of person (worker, civil) that did a job, I want to record it in a table 'event'; these people rescue an animal (there is a table 'animal').

Finally, I want a table that records the event of a person (WORKER, CIVIL) saving an animal, but how should I add the foreign key, and how do I know the id of the civil or worker that did the job?

I have

    DROP TABLE IF EXISTS `tbl_animal`;

    CREATE TABLE `tbl_animal` (
        id_animal       INTEGER     NOT NULL PRIMARY KEY AUTO_INCREMENT,
        name            VARCHAR(25) NOT NULL DEFAULT "no name",
        specie          VARCHAR(10) NOT NULL DEFAULT "Other",
        sex             CHAR(1)     NOT NULL DEFAULT "M",
        size            VARCHAR(10) NOT NULL DEFAULT "Mini",
        edad            VARCHAR(10) NOT NULL DEFAULT "Lact",
        pelo            VARCHAR(5)  NOT NULL DEFAULT "short",
        color           VARCHAR(25) NOT NULL DEFAULT "not defined",
        ra              VARCHAR(25) NOT NULL DEFAULT "not defined",
        CONSTRAINT `uc_Info_Animal` UNIQUE (`id_animal`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    INSERT INTO `tbl_animal` VALUES (1,'no name', 'dog', 'M','Mini','Lact','Long','black','Bobtail');
    INSERT INTO `tbl_animal` VALUES (2,'peluchin', 'cat', 'M','Mini','Lact','Long','white','not defined');
    INSERT INTO `tbl_animal` VALUES (3,'asechin', 'cat', 'M','Mini','Lact','Corto','orange','not defined');

    DROP TABLE IF EXISTS `tbl_person`;

    CREATE TABLE `tbl_person` (
        type_ID      INT AUTO_INCREMENT,
        type_person  VARCHAR(50) NOT NULL,
        CONSTRAINT `pk_Info_person` PRIMARY KEY (`type_ID`),
        CONSTRAINT `uc_Info_person` UNIQUE (`type_person`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    INSERT INTO `tbl_person` (type_person) VALUES ('Worker');
    INSERT INTO `tbl_person` (type_person) VALUES ('Civil');

    DROP TABLE IF EXISTS `tbl_worker`;

    CREATE TABLE `tbl_worker` (
        id_worker           INTEGER NOT NULL PRIMARY KEY,
        name_worker         VARCHAR(50) NOT NULL,
        address_worker      VARCHAR(40) NOT NULL DEFAULT "not defined",
        delegation          VARCHAR(40) NOT NULL DEFAULT "not defined",
        CONSTRAINT `uc_Info_worker` UNIQUE (`id_worker`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    INSERT INTO `tbl_worker` VALUES (1,'N_CEDENTE1', 'DIR Worker 1', 'DEL');
    INSERT INTO `tbl_worker` VALUES (2,'N_worker1', 'DIR Worker 2', 'DEL');
    INSERT INTO `tbl_worker` VALUES (3,'N_worker2', 'address worker','delegation worker');

    DROP TABLE IF EXISTS `tbl_civil`;

    CREATE TABLE `tbl_civil` (
        id_civil            INTEGER NOT NULL PRIMARY KEY,
        name_civil          VARCHAR(50),
        procedence_civil    VARCHAR(40) NOT NULL DEFAULT "Socorrism",
        CONSTRAINT `uc_Info_civil` UNIQUE (`id_civil`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    INSERT INTO `tbl_civil` VALUES (1,'N_civil1', 'Socorrism');

    CREATE TABLE `tbl_event` (
        id_event        INTEGER NOT NULL,
        id_animal       INTEGER NOT NULL,
        type_person     VARCHAR(50) NOT NULL,
        date_reception  DATE DEFAULT '2000-01-01 01:01:01',
        FOREIGN KEY (id_animal)   REFERENCES `tbl_animal` (id_animal),
        FOREIGN KEY (type_person) REFERENCES `tbl_person` (type_person),
        CONSTRAINT `uc_Info_event` UNIQUE (`id_animal`,`id_event`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    INSERT INTO `tbl_event` VALUES (1,1, 'Worker','2013-01-01 01:01:01');
    INSERT INTO `tbl_event` VALUES (2,2, 'Civil','2013-01-01 01:01:01');

Now, with this design I do not know how to record which person did the job. If I had only one kind of person (e.g. civil) I would simply store the civil_id in a person field in this last table, but how do I know whether it was a civil or a worker? Do I need another, intermediate table?

How can I make this design in MySQL? I have set up a SQLFiddle but do not know how to go on, or how to join in the other tables as shown in the attached diagram: [ER diagram image]
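
One common way to model this kind of inheritance (a sketch of my own, not from the original post) is a supertype/subtype design: every rescuer gets a row in a common "actor" table, the worker and civil tables share that table's primary key, and the event references the actor. All names below (tbl_actor, tbl_worker2, tbl_civil2, tbl_event2, ...) are illustrative assumptions; only tbl_animal is taken from the question.

    -- Sketch: class-table inheritance with a shared surrogate key.
    CREATE TABLE `tbl_actor` (
        id_actor    INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
        actor_type  ENUM('Worker','Civil') NOT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    -- Each subtype row reuses the supertype's primary key.
    CREATE TABLE `tbl_worker2` (
        id_actor     INTEGER NOT NULL PRIMARY KEY,
        name_worker  VARCHAR(50) NOT NULL,
        FOREIGN KEY (id_actor) REFERENCES `tbl_actor` (id_actor)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    CREATE TABLE `tbl_civil2` (
        id_actor    INTEGER NOT NULL PRIMARY KEY,
        name_civil  VARCHAR(50),
        FOREIGN KEY (id_actor) REFERENCES `tbl_actor` (id_actor)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    -- The event only needs the supertype key; actor_type tells you
    -- which subtype table to join for the details.
    CREATE TABLE `tbl_event2` (
        id_event        INTEGER NOT NULL PRIMARY KEY,
        id_animal       INTEGER NOT NULL,
        id_actor        INTEGER NOT NULL,
        date_reception  DATETIME DEFAULT '2000-01-01 01:01:01',
        FOREIGN KEY (id_animal) REFERENCES `tbl_animal` (id_animal),
        FOREIGN KEY (id_actor)  REFERENCES `tbl_actor`  (id_actor)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

To find out who performed an event you join tbl_event2 to tbl_actor and then, depending on actor_type, to the worker or civil subtype table.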

SQL Azure: Drop Constraint and Index

Posted: 10 Mar 2013 08:06 PM PDT

This may be a stupid question, but I'm not a DBA and just want to make sure of what I'm doing before I make a mistake.

Here is my question: When dropping Constraints, are their indexes dropped as well?

Basically, I'm asking because I want to modify an existing constraint. I want to change the foreign key to another column.

Here was the original query to create them:

    -- Creating foreign key on [Industry_Id] in table 'Courses'
    ALTER TABLE [dbo].[Courses]
    ADD CONSTRAINT [FK_CourseIndustry]
        FOREIGN KEY ([Industry_Id])
        REFERENCES [dbo].[Industries] ([Id])
        ON DELETE NO ACTION ON UPDATE NO ACTION;

    -- Creating non-clustered index for FOREIGN KEY 'FK_CourseIndustry'
    CREATE INDEX [IX_FK_CourseIndustry]
    ON [dbo].[Courses] ([Industry_Id]);
    GO

Here is what I'm thinking of doing:

    IF OBJECT_ID(N'[dbo].[FK_CourseIndustry]', 'F') IS NOT NULL
        ALTER TABLE [dbo].[Courses] DROP CONSTRAINT [FK_CourseIndustry];
    GO

    -- Creating foreign key on [IndustryId] in table 'Courses'
    ALTER TABLE [dbo].[Courses]
    ADD CONSTRAINT [FK_CourseIndustry]
        FOREIGN KEY ([IndustryId])
        REFERENCES [dbo].[Industries] ([Id])
        ON DELETE NO ACTION ON UPDATE NO ACTION;

    -- Creating non-clustered index for FOREIGN KEY 'FK_CourseIndustry'
    CREATE INDEX [IX_FK_CourseIndustry]
    ON [dbo].[Courses] ([IndustryId]);
    GO

Is that enough and will it work? Honestly, I don't mind if it's not an elegant solution just so long as it works.

Any piece of advice or information would be highly appreciated. Thanks!
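
One detail worth noting (my addition, not from the original question): in SQL Server, an index created separately with CREATE INDEX is independent of the foreign key constraint, so dropping the constraint does not drop the index. If the old IX_FK_CourseIndustry on [Industry_Id] should not be left behind, a sketch like the following would drop it explicitly before recreating it on the new column:

    -- Sketch: drop the old index explicitly; DROP CONSTRAINT on the FK
    -- leaves a manually created index in place.
    IF EXISTS (SELECT 1 FROM sys.indexes
               WHERE name = N'IX_FK_CourseIndustry'
                 AND object_id = OBJECT_ID(N'[dbo].[Courses]'))
        DROP INDEX [IX_FK_CourseIndustry] ON [dbo].[Courses];
    GO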

Physically migrate MySQL without a dump

Posted: 10 Mar 2013 04:56 PM PDT

A client asked me to migrate his MySQL DB because the server doesn't have any free space left.

It also has a very big table that is broken, so I can't dump it.

I can't REPAIR it either, because of the lack of free space.

Question: Is there a way to physically move the MySQL data files to another server and use them for the "new" MySQL?

AWS Marketplace Mongo, how to create replica

Posted: 10 Mar 2013 02:46 PM PDT

I am new to Mongo. One of the first things that apparently needs to be done is to set up a replica set, as this helps avoid Mongo's write-locking issues.

I set up Mongo on AWS by getting it off the AWS Marketplace (the Marketplace image is tuned for AWS).

What I don't know is how to go about setting up a Mongo replica. If I purchase another Mongo instance via the Marketplace, they will no doubt be separate instances, not replicas of the same one.

So, given that I currently have one EC2 instance running Mongo, what would the procedure be to convert this into a replica set, so that I always write to one server and always read from another?

I have read the instructions here: http://docs.mongodb.org/manual/tutorial/convert-standalone-to-replica-set/

But I am not confident that I understand what is going on there; I'm not sure what the dbpath is, etc.

Any help in getting this up and working would be greatly appreciated.

Stress test to saturate memory for MySQL innodb

Posted: 10 Mar 2013 02:29 PM PDT

I'm investigating Windows 32-bit memory problems (the 2 GB limit) and am after a test which will max out MySQL's memory use, both the InnoDB buffer pool and the per-connection memory.

Perhaps a query I could use for mysqlslap?
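
One possible approach (a sketch under assumptions, not a definitive benchmark): feed mysqlslap a query that forces large sorts and temporary tables, so that each concurrent connection allocates its own sort_buffer_size / tmp_table_size on top of the shared InnoDB buffer pool. The table name stress_table below is a placeholder.

    -- Hypothetical per-connection memory hog: a near-cross self-join with
    -- GROUP BY plus ORDER BY RAND() to force temporary tables and a filesort.
    SELECT t1.id, COUNT(*) AS cnt
    FROM stress_table t1
    JOIN stress_table t2 ON t2.id <> t1.id
    GROUP BY t1.id
    ORDER BY RAND();

    -- Run it concurrently with something like:
    --   mysqlslap --concurrency=50 --iterations=10 --create-schema=test --query=stress.sql

Raising the session sort_buffer_size and tmp_table_size before the run makes each connection's allocation, and therefore the pressure on the 32-bit address space, correspondingly larger.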

SQL availability groups log maintenance

Posted: 10 Mar 2013 07:17 PM PDT

I have an AG with 2 nodes in one site in synchronous mode and 1 offsite node in asynchronous mode. I'm having trouble understanding how log space is maintained in this scenario. For instance, I'm taking log backups every 15 minutes on the primary server and on the secondary server in sync mode. If I move my log backups off to the secondary node (in sync mode), does this mean log space will be released on the primary node too, or do I still need to run transaction log backups on the primary node, which would in turn release space on the secondary nodes?
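
Not part of the original question, but one way to verify whether log space is actually being released on each replica is to check what is blocking log truncation and how full each log is, for example:

    -- Sketch: run on every replica of the AG.
    -- LOG_BACKUP here means a log backup is still required before truncation.
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'YourAGDatabase';   -- hypothetical database name

    -- Current log size and percentage used per database.
    DBCC SQLPERF(LOGSPACE);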

"Lost connection to MySQL server during query" error

Posted: 10 Mar 2013 09:13 AM PDT

I have MySQL 5.5.20 and this table:

    mysql> desc country;
    +----------------+-------------+------+-----+---------+-------+
    | Field          | Type        | Null | Key | Default | Extra |
    +----------------+-------------+------+-----+---------+-------+
    | id             | int(255)    | NO   | PRI | NULL    |       |
    | iso            | varchar(2)  | NO   |     | NULL    |       |
    | name           | varchar(80) | NO   |     | NULL    |       |
    | printable_name | varchar(80) | NO   |     | NULL    |       |
    | iso3           | varchar(3)  | YES  |     | NULL    |       |
    | numcode        | smallint(6) | YES  |     | NULL    |       |
    +----------------+-------------+------+-----+---------+-------+

If I run a query like this

    SELECT country.ID, country.ISO, country.NAME,
           country.PRINTABLE_NAME, country.ISO3, country.NUMCODE
    FROM country;

It returns:

  ERROR 2013 (HY000): Lost connection to MySQL server during query  

If I change the order of the columns (ISO3 before PRINTABLE_NAME for example) like this:

    SELECT country.ID, country.ISO, country.NAME,
           country.PRINTABLE_NAME, country.NUMCODE, country.ISO3
    FROM country;

It works fine!

Or if I rewrite the query using lower-case letters for columns, it works as well.

This issue appears from time to time (about once a month) and the only way to fix it is to restart MySQL! I have checked the error log and this is what I found in it:

  130310 11:01:23 [Warning] /usr/sbin/mysqld: Forcing close of thread 401108  user: 'root'  

I am really confused and don't know why this happens! Any ideas on how this could be fixed?

'Incorrect datetime value: for column 'last_login' at row 1'

Posted: 09 Mar 2013 11:27 PM PST

I am getting the error below:

Last_SQL_Errno: 1292

    Last_SQL_Error: Error 'Incorrect datetime value: '2013-03-10 02:00:00' for column 'last_login' at row 1' on query. Default database: 'platform'. Query: 'INSERT INTO app_player (creation_date, last_login, removed_date, app_id, user_id) VALUES ('2013-03-10 02:00:00', '2013-03-10 02:00:00', null, 400, 77870956)'  1 row in set (0.00 sec)  

Table structure:

    mysql> show create table app_player\G
    *************************** 1. row ***************************
           Table: app_player
    Create Table: CREATE TABLE `app_player` (
      `app_id` int(10) unsigned NOT NULL,
      `user_id` bigint(20) unsigned NOT NULL,
      `creation_date` datetime NOT NULL,
      `removed_date` datetime DEFAULT NULL,
      `last_login` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00' ON UPDATE CURRENT_TIMESTAMP,
      PRIMARY KEY (`app_id`,`user_id`),
      KEY `idx_creation_date_app_id` (`creation_date`,`app_id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8
    1 row in set (0.00 sec)

MySQL version is 5.1.52.

    mysql> show global variables like '%date%';
    +-----------------------------------------+-------------------+
    | Variable_name                           | Value             |
    +-----------------------------------------+-------------------+
    | binlog_direct_non_transactional_updates | OFF               |
    | date_format                             | %Y-%m-%d          |
    | datetime_format                         | %Y-%m-%d %H:%i:%s |
    | innodb_stats_auto_update                | 1                 |
    | innodb_stats_update_need_lock           | 1                 |
    | log_slave_updates                       | OFF               |
    | low_priority_updates                    | OFF               |
    | query_cache_wlock_invalidate            | OFF               |
    | sql_log_update                          | ON                |
    | sql_low_priority_updates                | OFF               |
    | sql_safe_updates                        | OFF               |
    +-----------------------------------------+-------------------+
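
A note that may be relevant (my observation, not from the original post): 2013-03-10 02:00:00 falls exactly in the US daylight-saving "spring forward" gap, and last_login is a TIMESTAMP column, which is interpreted through the session time zone, while creation_date is a DATETIME and is stored as given. If the slave's time zone skips 02:00 on that date, the value can be rejected. A quick way to compare the time-zone settings on master and slave:

    -- Sketch: check the effective time zone on both servers.
    SHOW GLOBAL VARIABLES LIKE '%time_zone%';
    SELECT @@global.time_zone, @@session.time_zone;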

How do I know if my database partitioning is done well?

Posted: 09 Mar 2013 10:08 PM PST

I have a sneaking suspicion that the guy who wrote the partitioning functions at my place of work did a fairly poor job. For certain queries (maybe 20% of them) we see incredible performance boosts when using OPTION(LOOP JOIN). Isn't this bad?

Shouldn't partitioning be done in a balanced way that really allows the optimizer to take over and properly optimize? I know the optimizer isn't always right, and I don't know much about partitioning, but I just feel that queries shouldn't require hints as often as we use them. Thoughts?

We are currently running SQL Server 2008 R2.

Is this relation in 3NF?

Posted: 09 Mar 2013 10:52 PM PST

I created a relation called Customer and defined as:

[image: definition of the Customer relation]

I just wanted to know if it is in 3NF?

My concern is regarding address, since different customers may have the same address.

I killed a session and it is still running?

Posted: 10 Mar 2013 09:02 AM PDT

I have Oracle 9.2

I view my running sessions with the query:

    SELECT S92700.SYSUSR_VW."NAME", SID, SERIAL#, AUDSID
    FROM v$session, S92700.SYSUSR_VW, S92700.SECUSR
    WHERE S92700.SYSUSR_VW.SESSIONID = AUDSID
      AND S92700.SECUSR."NAME" = S92700.SYSUSR_VW."NAME"

Result:

    NAME      SID      SERIAL#    AUDSID
    prot6      9        6503       78239

And I run the following statement to kill the session:

alter system kill session '9,6503' immediate  

SQL Command execution

Executed successfully in 0 s, 0 rows affected. Line 1, column 1

Execution finished after 0 s, 0 error(s) occurred.


The session is still running.

The same approach works in another schema.
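
A note that may explain this (my addition): ALTER SYSTEM KILL SESSION normally just marks the session as KILLED and leaves PMON to roll back its work, which can take a long time for a large transaction, so the row lingers in v$session until cleanup finishes. A sketch for checking its state:

    -- Sketch: see whether the session is gone or merely marked KILLED.
    SELECT sid, serial#, status, username
    FROM   v$session
    WHERE  sid = 9 AND serial# = 6503;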

Avoiding "Waiting for table metadata lock" when `ALTER TABLE DROP PARTITION`?

Posted: 09 Mar 2013 11:00 PM PST

I have some tables that many users need to access:

    mysql> show create table v3_cam_date\G
    *************************** 1. row ***************************
           Table: v3_cam_date
    Create Table: CREATE TABLE `v3_cam_date` (
      `campaignid` mediumint(9) NOT NULL DEFAULT '0',
      `totalclick` mediumint(9) unsigned NOT NULL DEFAULT '0',
      `totalview` int(11) unsigned NOT NULL DEFAULT '0',
      `realclick` mediumint(9) unsigned NOT NULL DEFAULT '0',
      `clickcharge` mediumint(9) unsigned NOT NULL DEFAULT '0',
      `viewcharge` int(11) unsigned NOT NULL DEFAULT '0',
      `uv` mediumint(9) unsigned NOT NULL DEFAULT '0',
      `uc` mediumint(9) unsigned NOT NULL DEFAULT '0',
      `dt` date NOT NULL DEFAULT '0000-00-00',
      `ctr` decimal(5,3) NOT NULL DEFAULT '0.000' COMMENT '=-1: meaning not available(N/A)',
      `moneyc` int(11) unsigned NOT NULL DEFAULT '0',
      `moneyv` int(11) unsigned NOT NULL DEFAULT '0',
      KEY `ix_campaignid_dt` (`campaignid`,`dt`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1
    /*!50100 PARTITION BY RANGE (TO_DAYS(dt))
    (PARTITION p0 VALUES LESS THAN (0) ENGINE = InnoDB,
     PARTITION p01 VALUES LESS THAN (734502) ENGINE = InnoDB,
     PARTITION p2 VALUES LESS THAN (734683) ENGINE = InnoDB,
     PARTITION p03 VALUES LESS THAN (734863) ENGINE = InnoDB,
     PARTITION p04 VALUES LESS THAN (734959) ENGINE = InnoDB,
     PARTITION p5 VALUES LESS THAN (735141) ENGINE = InnoDB,
     PARTITION p06 VALUES LESS THAN (735210) ENGINE = InnoDB,
     PARTITION MERGER_2013227 VALUES LESS THAN (735291) ENGINE = InnoDB,
     PARTITION pcurrent_2013227 VALUES LESS THAN (735292) ENGINE = InnoDB) */

When a user wants to drop a partition (ALTER TABLE v3_cam_date DROP PARTITION pcurrent_2013227), it may leave many transactions stuck in the Waiting for table metadata lock state:

         Id: 31560182
       User: alice
       Host: 192.168.3.40:36132
         db: db
    Command: Query
       Time: 806
      State: Waiting for table metadata lock
       Info: SELECT COUNT(DISTINCT A.`campaignid`) INTO _campaigncomplete
             FROM `ox_campaigns` A
             INNER JOIN `selfserving_users` B ON B.`user_id` = A.`uid`
             INNER JOIN `v3_cam_date` C ON C.`campaignid` = A.`campaignid`
             WHERE A.`revenue_type` = 5 AND A.`deleted` = 0
               AND A.`expire` = DATE_ADD((SELECT `sys_date_cpc` FROM `000_sys_params_v4`), INTERVAL 1 DAY)
               AND A.`isExpired` = 0
               AND IF( NAME_CONST('_permitid',3) = -1, 1=1,
                    IF( NAME_CONST('_permitid',3) = 0,
                        A.`uid` IN (SELECT C.`user_id` FROM `selfserving_users` C WHERE C.`groupid` = NAME_CONST('_groupid',17)),
                        A.`uid` = NAME_CONST('userid',5770)))

What is an efficient way to accomplish this without locking?
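
One mitigation worth considering (my suggestion, not from the poster): the ALTER needs an exclusive metadata lock, so it queues behind any long-running query or open transaction on the table, and every new query then queues behind the ALTER. Giving the DDL a short metadata-lock timeout keeps the pile-up bounded:

    -- Sketch: let the DDL give up quickly instead of blocking everything behind it.
    -- lock_wait_timeout (in seconds) governs metadata-lock waits in MySQL 5.5+.
    SET SESSION lock_wait_timeout = 5;
    ALTER TABLE v3_cam_date DROP PARTITION pcurrent_2013227;
    -- If this fails with a lock wait timeout, retry once the long-running
    -- SELECTs on the table have finished.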

mysql better index?

Posted: 10 Mar 2013 01:35 PM PDT

Hi, I am not sure which column I should index in my sales order table. The table has an invoice date column (DATE datatype) where roughly 50 values are added every day, and an invoice number column (VARCHAR datatype) where roughly 40 values are added every day, possibly with repeated invoice numbers; the values look like a100, a100, a100, a101, a102, a102, etc. Which will be better for searching invoices: SELECT * FROM salesorders WHERE date ... or WHERE invoicenumber ...?
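
A sketch (the table and column names are my assumptions, since the exact schema is not shown): in MySQL you can index both columns and let the optimizer use whichever one matches the WHERE clause; the DATE column is a compact 3-byte key, while the invoice number is a longer VARCHAR key.

    -- Hypothetical table/column names, for illustration only.
    ALTER TABLE salesorders
        ADD INDEX idx_invoice_date   (invoice_date),
        ADD INDEX idx_invoice_number (invoice_number);

    -- Either predicate can then use its own index:
    SELECT * FROM salesorders WHERE invoice_date   = '2013-03-10';
    SELECT * FROM salesorders WHERE invoice_number = 'a101';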

db INDEX on ORDER BY FIELD in very simple query does not help - still slow response?

Posted: 10 Mar 2013 08:35 AM PDT

I have very simple query which returns 200K records very slow (20 seconds).

SELECT * FROM TABLE ORDER BY ID DESC  

If I do just

 SELECT * FROM TABLE  

it returns quick result.

I created an INDEX on that field ID (ALLOW REVERSE SCANS), but the response time is still very similar. Where could the problem be? What could be causing this query to stall? I have updated the statistics and the index/table metadata.

Shouldn't the INDEX help me with this ORDER BY field?

My server has 8 GB RAM. The buffer pool was 128 MB; I adjusted it to 1 GB, but still without much progress.

I am using a DB2 database.

The configuration parameters have these values: (SORTHEAP) = AUTOMATIC(164), (SHEAPTHRES_SHR) = AUTOMATIC(824) and (SHEAPTHRES) = 0. Should I adjust them, and to what values?

This is a live system in production, so I will be very thankful for help.

Thank you
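
A sketch of something to try (my suggestion; MYTABLE below is a placeholder for the real table name): a descending index on the ORDER BY column, or an ascending one that allows reverse scans, then a check of the access plan to confirm the sort has disappeared.

    -- DB2 LUW sketch; MYTABLE stands in for the real table name.
    CREATE INDEX IX_MYTABLE_ID_DESC ON MYTABLE (ID DESC) ALLOW REVERSE SCANS;

    -- Verify the plan (requires the explain tables, created from EXPLAIN.DDL):
    EXPLAIN PLAN FOR SELECT * FROM MYTABLE ORDER BY ID DESC;

Note that even with the sort eliminated, returning all 200K rows to the client still takes time on its own.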

What is the actual difference between innodb_autoinc_lock_mode 0,1,2

Posted: 10 Mar 2013 08:35 PM PDT

I would like to understand the differences between the innodb_autoinc_lock_mode options 0, 1 and 2 when parallel LOAD DATA INFILE statements are run.

With option "0", I see that it locks the entire table and processes the first transaction, say TX1, which requests N records. When the next transaction, say TX2, arrives while TX1 is still being loaded with "load data", it has to wait in the queue for TX1 to complete. It then takes max(AI_column)+1 from the table and loads the next set of data. In this case the auto-increment numbers do not jump.

With option "1", I see the same behaviour: the table is locked, TX1 is processed first, and TX2 has to wait in the queue for TX1 to complete before taking max(AI_column)+1 and loading its data. Afterwards, however, the counter ends up at max(AI_column) plus some creepy jump.

But with option "2", it does not lock the entire table. Instead it interleaves the inserts of all processes, inserting records for whichever request comes in, and all threads finish in roughly the same time (around 1.21 sec for 7 parallel threads using load data with 1,000,000 records each). In this case the transactions end up in mixed order, and the counter again ends at max(AI_column) plus some creepy jump.

I'm using MySQL 5.1.61.

  • My question is: what is the use of option 1?
  • And why has it stayed the default in MySQL versions later than 5.1.22?
  • Has anyone come across any disasters with option 2?

My project requires multiple processes using load data on one table. I have tested the above options and decided to go with option 2. Kindly correct me if my experiments are wrong.
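
For reference (my addition): innodb_autoinc_lock_mode is a startup-only option, so it cannot be changed at runtime; it is set in my.cnf under [mysqld] (e.g. innodb_autoinc_lock_mode = 2) followed by a restart. To confirm what a server is currently running with:

    -- 0 = traditional, 1 = consecutive (the default), 2 = interleaved.
    SHOW GLOBAL VARIABLES LIKE 'innodb_autoinc_lock_mode';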

Increase in memory use when database server switches

Posted: 10 Mar 2013 11:57 AM PDT

I have an ASP.NET 4.0 application running on Windows Server 2008 and an Oracle 11g database server running on Linux.

There are two database servers with the same virtual IP.

Recently one database server went down and the other database server took over. Everything went fine on the database side, apart from a small problem with the Oracle listener.

But on my application server, memory started increasing and went up to above 15 GB, almost 99% usage. Normally it uses 2 GB of memory.

I am using ADO.NET to connect to Oracle. My application closes its database connections properly.

I was not able to figure out what the problem is or what I should do in my application. Can anyone suggest what I should do?

Queries getting stuck on very simple COUNT queries

Posted: 10 Mar 2013 04:35 PM PDT

process list:

    | 16004 | metrics | xxx:64616 | metrics | Query | 29064 | Sending data | SELECT COUNT(*) FROM Plugin where LastUpdated >= '1356856200' | 0 | 0 | 1021 |
    | 16019 | metrics | xxx:23506 | metrics | Query | 29043 | Sending data | SELECT COUNT(*) FROM Plugin where LastUpdated >= '1356856200' | 0 | 0 | 1021 |
    | 16102 | metrics | xxx:9117  | metrics | Query | 28881 | Sending data | SELECT COUNT(*) FROM Plugin where LastUpdated >= '1356856200' | 0 | 0 | 1021 |
    | 16149 | metrics | xxx:14772 | metrics | Query | 28768 | Sending data | SELECT COUNT(*) FROM Plugin where LastUpdated >= '1356856200' | 0 | 0 | 1021 |
    | 16384 | metrics | xxx:51572 | metrics | Query | 27916 | Sending data | SELECT COUNT(*) FROM Plugin where LastUpdated >= '1356858000' | 0 | 0 | 1021 |
    ...
    | 17890 | metrics | xxx:28643 | metrics | Query |   709 | Sending data | SELECT COUNT(*) FROM Plugin where LastUpdated >= '1356885000' | 0 | 0 |   21 |
    | 17924 | metrics | xxx:48797 | metrics | Query |   681 | Sending data | SELECT COUNT(*) FROM Plugin where LastUpdated >= '1356885000' | 0 | 0 |   21 |
    | 17929 | metrics | xxx:55624 | metrics | Query |   671 | Sending data | SELECT COUNT(*) FROM Plugin where LastUpdated >= '1356885000' | 0 | 0 |   21 |
    | 17938 | metrics | xxx:2632  | metrics | Query |   654 | Sending data | SELECT COUNT(*) FROM Plugin where LastUpdated >= '1356885000' | 0 | 0 |   21 |

The table only has around 2500 rows, only ~1000 need to be scanned. Table is InnoDB.

This started happening last night. I was not able to run REPAIR TABLE but ANALYZE TABLE ran ok. I can't run any selects manually. When it happened last night I had to kill the mysql process and let it do crash recovery but it's doing it again.

UPDATEs on the table run without error (it is frequently updated); it just seems to be anything that selects from it using COUNT() that hangs. I am able to SELECT * FROM Plugin and have every row returned, but even SELECT COUNT(ID) FROM Plugin freezes in the 'Sending data' state.

Explain:

    mysql> explain SELECT COUNT(*) FROM Plugin where LastUpdated >= '1356856200';
    +----+-------------+--------+-------+---------------+-------------+---------+------+------+--------------------------+
    | id | select_type | table  | type  | possible_keys | key         | key_len | ref  | rows | Extra                    |
    +----+-------------+--------+-------+---------------+-------------+---------+------+------+--------------------------+
    |  1 | SIMPLE      | Plugin | range | LastUpdated   | LastUpdated | 5       | NULL | 1382 | Using where; Using index |

MySQL is using Percona Server 5.5.27

Any ideas on what is causing this?

Thanks a lot.

EDIT: innodb% variables and INNODB STATUS

Rolando: you say to increase innodb_io_capacity; could you please explain? I am using 2x 7200 rpm hard drives in a ZFS mirror. Why should I increase it to 2000? The MySQL website advises 100 for single 7200 rpm drives (I have it at the default of 200).

Excerpt from INNODB STATUS. These lovely selects have been stuck for half a day now. These are the oldest queries.

    ---TRANSACTION 27986E9D1, ACTIVE 48622 sec fetching rows
    mysql tables in use 1, locked 0
    MySQL thread id 570263, OS thread handle 0xad7380000, query id 1217058950 lb1.xxxx xxx.ip.xxx metrics Sending data
    SELECT COUNT(*) FROM Plugin where LastUpdated >= '1357635600'
    Trx read view will not see trx with id >= 27986E9D3, sees < 278C78F08
    ---TRANSACTION 2797C7249, ACTIVE 49200 sec fetching rows
    mysql tables in use 1, locked 0
    MySQL thread id 570063, OS thread handle 0xad7380800, query id 1216141959 lb1.xxxx xxx.ip.xxx metrics Sending data
    SELECT COUNT(*) FROM Plugin where LastUpdated >= '1357635600'
    Trx read view will not see trx with id >= 2797C724A, sees < 278C78F08

Full status:

=====================================  130109 22:44:19 INNODB MONITOR OUTPUT  =====================================  Per second averages calculated from the last 50 seconds  -----------------  BACKGROUND THREAD  -----------------  srv_master_thread loops: 737366 1_second, 728615 sleeps, 73673 10_second, 921 background, 911 flush  srv_master_thread log flush and writes: 766639  ----------  SEMAPHORES  ----------  OS WAIT ARRAY INFO: reservation count 75724985, signal count 103543991  --Thread 46560449536 has waited at row0sel.c line 3941 for 0.0000 seconds the semaphore:  S-lock on RW-latch at 0x83368d9b8 '&block->lock'  a writer (thread id 46560448512) has reserved it in mode  wait exclusive  number of readers 1, waiters flag 1, lock_word: ffffffffffffffff  Last time read locked in file btr0sea.c line 942  Last time write locked in file /usr/local/src/Percona-Server-5.5.27-rel28.1/storage/innobase/row/row0upd.c line 2194  --Thread 46560428032 has waited at row0sel.c line 2924 for 0.0000 seconds the semaphore:  S-lock on RW-latch at 0x8337b76f8 '&block->lock'  number of readers 1, waiters flag 0, lock_word: fffff  Last time read locked in file btr0sea.c line 942  Last time write locked in file /usr/local/src/Percona-Server-5.5.27-rel28.1/storage/innobase/row/row0upd.c line 2194  --Thread 46560448512 has waited at row0upd.c line 2194 for 0.0000 seconds the semaphore:  X-lock (wait_ex) on RW-latch at 0x83368d9b8 '&block->lock'  a writer (thread id 46560448512) has reserved it in mode  wait exclusive  number of readers 1, waiters flag 1, lock_word: ffffffffffffffff  Last time read locked in file btr0sea.c line 942  Last time write locked in file /usr/local/src/Percona-Server-5.5.27-rel28.1/storage/innobase/row/row0upd.c line 2194  --Thread 46560446464 has waited at row0sel.c line 2924 for 0.0000 seconds the semaphore:  S-lock on RW-latch at 0x83368d9b8 '&block->lock'  a writer (thread id 46560448512) has reserved it in mode  wait exclusive  number of readers 1, waiters flag 1, lock_word: ffffffffffffffff  Last time read locked in file btr0sea.c line 942  Last time write locked in file /usr/local/src/Percona-Server-5.5.27-rel28.1/storage/innobase/row/row0upd.c line 2194  Mutex spin waits 860007450, rounds 4788572135, OS waits 44460586  RW-shared spins 56868852, rounds 806806376, OS waits 12775136  RW-excl spins 115255109, rounds 878353038, OS waits 11254430  Spin rounds per wait: 5.57 mutex, 14.19 RW-shared, 7.62 RW-excl  --------  FILE I/O  --------  I/O thread 0 state: waiting for i/o request (insert buffer thread)  I/O thread 1 state: waiting for i/o request (log thread)  I/O thread 2 state: waiting for i/o request (read thread)  I/O thread 3 state: waiting for i/o request (read thread)  I/O thread 4 state: waiting for i/o request (read thread)  I/O thread 5 state: waiting for i/o request (read thread)  I/O thread 6 state: waiting for i/o request (read thread)  I/O thread 7 state: waiting for i/o request (read thread)  I/O thread 8 state: waiting for i/o request (read thread)  I/O thread 9 state: waiting for i/o request (read thread)  I/O thread 10 state: waiting for i/o request (write thread)  I/O thread 11 state: waiting for i/o request (write thread)  I/O thread 12 state: waiting for i/o request (write thread)  I/O thread 13 state: waiting for i/o request (write thread)  I/O thread 14 state: waiting for i/o request (write thread)  I/O thread 15 state: waiting for i/o request (write thread)  I/O thread 16 state: waiting for i/o request (write thread)  I/O thread 17 state: waiting for i/o 
request (write thread)  Pending normal aio reads: 0 [0, 0, 0, 0, 0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0, 0, 0, 0, 0] ,   ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0  Pending flushes (fsync) log: 0; buffer pool: 0  6880197 OS file reads, 409935916 OS file writes, 1108361 OS fsyncs  1 pending preads, 1 pending pwrites  15.60 reads/s, 16930 avg bytes/read, 190.92 writes/s, 2.34 fsyncs/s  -------------------------------------  INSERT BUFFER AND ADAPTIVE HASH INDEX  -------------------------------------  Ibuf: size 824, free list len 518, seg size 1343, 364193 merges  merged operations:   insert 589402, delete mark 221097, delete 9349  discarded operations:   insert 0, delete mark 0, delete 0  Hash table size 21249841, node heap has 32061 buffer(s)  142440.39 hash searches/s, 8066.70 non-hash searches/s  ---  LOG  ---  Log sequence number 3642503658567  Log flushed up to   3642503574648  Last checkpoint at  3636441447521  Max checkpoint age    6957135914  Checkpoint age target 6739725417  Modified age          6062211046  Checkpoint age        6062211046  0 pending log writes, 0 pending chkp writes  398344529 log i/o's done, 170.86 log i/o's/second  ----------------------  BUFFER POOL AND MEMORY  ----------------------  Total memory allocated 11031019520; in additional pool allocated 0  Internal hash tables (constant factor + variable factor)      Adaptive hash index 695290368       (169998728 + 525291640)      Page hash           10625704 (buffer pool 0 only)      Dictionary cache    42705560        (42501104 + 204456)      File system         99832   (82672 + 17160)      Lock system         26573920        (26563016 + 10904)      Recovery system     0       (0 + 0)  Dictionary memory allocated 204456  Buffer pool size        655359  Buffer pool size, bytes 10737401856  Free buffers            96  Database pages          623202  Old database pages      230029  Modified db pages       307887  Pending reads 1  Pending writes: LRU 0, flush list 0, single page 0  Pages made young 20888946, not young 0  183.06 youngs/s, 0.00 non-youngs/s  Pages read 6929404, created 2039787, written 15209639  16.10 reads/s, 11.04 creates/s, 23.24 writes/s  Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000  Pages read ahead 0.00/s, evicted without access 12.42/s, Random read ahead 0.00/s  LRU len: 623202, unzip_LRU len: 0  I/O sum[750]:cur[6], unzip sum[0]:cur[0]  --------------  ROW OPERATIONS  --------------  0 queries inside InnoDB, 0 queries in queue  20 read views open inside InnoDB  ---OLDEST VIEW---  Normal read view  Read view low limit trx n:o 2797C724A  Read view up limit trx id 278C78F08  Read view low limit trx id 2797C724A  Read view individually stored trx ids:  Read view trx id 2797C7249  Read view trx id 2797C6206  Read view trx id 2797A2F63  Read view trx id 27976266E  Read view trx id 279761B1E  Read view trx id 278C78F08  -----------------  Main thread id 34397547520, state: sleeping  Number of rows inserted 42523215, updated 894466983, deleted 12775, read 9294801842  12.36 inserts/s, 907.78 updates/s, 0.00 deletes/s, 17273.29 reads/s  ------------  TRANSACTIONS  ------------  Trx id counter 27B056747  Purge done for trx's n:o < 278FA84BC undo n:o < 5D  History list length 15024237  LIST OF TRANSACTIONS FOR EACH SESSION:  [...]  
---TRANSACTION 279D23E45, ACTIVE 42765 sec fetching rows  mysql tables in use 1, locked 0  MySQL thread id 572264, OS thread handle 0xb0420f400, query id 1225918036 lb1.xxxx xxx.ip.xxx metrics Sending data  SELECT COUNT(*) FROM Plugin where LastUpdated >= '1357641000'  Trx read view will not see trx with id >= 279D23E48, sees < 2797C7249  ---TRANSACTION 279B9A787, ACTIVE 45181 sec fetching rows  mysql tables in use 1, locked 0  MySQL thread id 571429, OS thread handle 0xad737c400, query id 1222536287 lb1.xxxx xxx.ip.xxx metrics Sending data  SELECT COUNT(*) FROM Plugin where LastUpdated >= '1357639200'  Trx read view will not see trx with id >= 279B9A78D, sees < 2797C7249  ---TRANSACTION 2799448B4, ACTIVE 47853 sec fetching rows  mysql tables in use 1, locked 0xxx.ip.xxx  MySQL thread id 570543, OS thread handle 0xadb3ea000, query id 1218229116 lb1.xxxx xxx.ip.xxx metrics Sending data  SELECT COUNT(*) FROM Plugin where LastUpdated >= '1357635600'  Trx read view will not see trx with id >= 2799448B5, sees < 278C78F08  ---TRANSACTION 27986E9D1, ACTIVE 48622 sec fetching rows  mysql tables in use 1, locked 0  MySQL thread id 570263, OS thread handle 0xad7380000, query id 1217058950 lb1.xxxx xxx.ip.xxx metrics Sending data  SELECT COUNT(*) FROM Plugin where LastUpdated >= '1357635600'  Trx read view will not see trx with id >= 27986E9D3, sees < 278C78F08  ---TRANSACTION 2797C7249, ACTIVE 49200 sec fetching rows  mysql tables in use 1, locked 0  MySQL thread id 570063, OS thread handle 0xad7380800, query id 1216141959 lb1.xxxx xxx.ip.xxx metrics Sending data  SELECT COUNT(*) FROM Plugin where LastUpdated >= '1357635600'  Trx read view will not see trx with id >= 2797C724A, sees < 278C78F08  ----------------------------  END OF INNODB MONITOR OUTPUT  ============================  

innodb% variables

| innodb_adaptive_flushing                  | OFF                    |  | innodb_adaptive_flushing_method           | estimate               |  | innodb_adaptive_hash_index                | ON                     |  | innodb_adaptive_hash_index_partitions     | 1                      |  | innodb_additional_mem_pool_size           | 10485760               |  | innodb_autoextend_increment               | 8                      |  | innodb_autoinc_lock_mode                  | 1                      |  | innodb_blocking_buffer_pool_restore       | OFF                    |  | innodb_buffer_pool_instances              | 1                      |  | innodb_buffer_pool_restore_at_startup     | 0                      |  | innodb_buffer_pool_shm_checksum           | ON                     |  | innodb_buffer_pool_shm_key                | 0                      |  | innodb_buffer_pool_size                   | 10737418240            |  | innodb_change_buffering                   | all                    |  | innodb_checkpoint_age_target              | 0                      |  | innodb_checksums                          | ON                     |  | innodb_commit_concurrency                 | 0                      |  | innodb_concurrency_tickets                | 500                    |  | innodb_corrupt_table_action               | assert                 |  | innodb_data_file_path                     | ibdata1:10M:autoextend |  | innodb_data_home_dir                      |                        |  | innodb_dict_size_limit                    | 0                      |  | innodb_doublewrite                        | OFF                    |  | innodb_doublewrite_file                   |                        |  | innodb_fake_changes                       | OFF                    |  | innodb_fast_checksum                      | OFF                    |  | innodb_fast_shutdown                      | 1                      |  | innodb_file_format                        | Barracuda              |  | innodb_file_format_check                  | ON                     |  | innodb_file_format_max                    | Antelope               |  | innodb_file_per_table                     | ON                     |  | innodb_flush_log_at_trx_commit            | 2                      |  | innodb_flush_method                       | O_DIRECT               |  | innodb_flush_neighbor_pages               | none                   |  | innodb_force_load_corrupted               | OFF                    |  | innodb_force_recovery                     | 0                      |  | innodb_ibuf_accel_rate                    | 100                    |  | innodb_ibuf_active_contract               | 1                      |  | innodb_ibuf_max_size                      | 5368692736             |  | innodb_import_table_from_xtrabackup       | 0                      |  | innodb_io_capacity                        | 200                    |  | innodb_kill_idle_transaction              | 0                      |  | innodb_large_prefix                       | OFF                    |  | innodb_lazy_drop_table                    | 0                      |  | innodb_lock_wait_timeout                  | 50                     |  | innodb_locks_unsafe_for_binlog            | OFF                    |  | innodb_log_block_size                     | 512                    |  | innodb_log_buffer_size                    | 8388608                |  | innodb_log_file_size                      | 4294967296             |  | 
innodb_log_files_in_group                 | 2                      |  | innodb_log_group_home_dir                 | ./                     |  | innodb_max_dirty_pages_pct                | 75                     |  | innodb_max_purge_lag                      | 0                      |  | innodb_merge_sort_block_size              | 1048576                |  | innodb_mirrored_log_groups                | 1                      |  | innodb_old_blocks_pct                     | 37                     |  | innodb_old_blocks_time                    | 0                      |  | innodb_open_files                         | 300                    |  | innodb_page_size                          | 16384                  |  | innodb_purge_batch_size                   | 20                     |  | innodb_purge_threads                      | 1                      |  | innodb_random_read_ahead                  | OFF                    |  | innodb_read_ahead                         | none                   |  | innodb_read_ahead_threshold               | 56                     |  | innodb_read_io_threads                    | 8                      |  | innodb_recovery_stats                     | OFF                    |  | innodb_recovery_update_relay_log          | OFF                    |  | innodb_replication_delay                  | 0                      |  | innodb_rollback_on_timeout                | OFF                    |  | innodb_rollback_segments                  | 128                    |  | innodb_show_locks_held                    | 10                     |  | innodb_show_verbose_locks                 | 0                      |  | innodb_spin_wait_delay                    | 6                      |  | innodb_stats_auto_update                  | 1                      |  | innodb_stats_method                       | nulls_equal            |  | innodb_stats_on_metadata                  | ON                     |  | innodb_stats_sample_pages                 | 8                      |  | innodb_stats_update_need_lock             | 1                      |  | innodb_strict_mode                        | OFF                    |  | innodb_support_xa                         | OFF                    |  | innodb_sync_spin_loops                    | 30                     |  | innodb_table_locks                        | ON                     |  | innodb_thread_concurrency                 | 0                      |  | innodb_thread_concurrency_timer_based     | OFF                    |  | innodb_thread_sleep_delay                 | 10000                  |  | innodb_use_global_flush_log_at_trx_commit | ON                     |  | innodb_use_native_aio                     | OFF                    |  | innodb_use_sys_malloc                     | ON                     |  | innodb_use_sys_stats_table                | OFF                    |  | innodb_version                            | 1.1.8-27.0             |  | innodb_write_io_threads                   | 8                      |xxxx  
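
An observation that may help narrow this down (my note, not from the thread): the status output shows a history list length of over 15 million and read views that still see very old transaction ids, which usually means a long-running or idle transaction is holding back purge, so every consistent-read COUNT(*) has to walk through a large pile of old row versions. A sketch for finding the culprit:

    -- Sketch: list the oldest open transactions and the connections holding them.
    SELECT trx_id, trx_state, trx_started, trx_mysql_thread_id, trx_query
    FROM   information_schema.INNODB_TRX
    ORDER  BY trx_started
    LIMIT  10;

    -- Cross-reference trx_mysql_thread_id with SHOW PROCESSLIST to decide
    -- whether the offending connection can be committed, rolled back or killed.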

Way to add referencing and referenced entities of a specific entity in E-R Diagram MYSQL

Posted: 09 Mar 2013 09:34 PM PST

Is there any way to add the entities that reference a specific entity in an E-R diagram? I have a database containing 400 tables. I want a view containing a specific entity together with the entities it references and the entities that reference it. Some MySQL clients, like Toad, offer a way to pull in referenced entities when you drag a specific entity onto the diagram, but I could not find a way to pull in the referencing entities, i.e. those which have that specific entity/table as a foreign key.

P.S. Managing an E-R diagram containing 400 tables is very, very tedious.
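
Not a diagramming answer, but as a fallback (my addition) the referencing tables can at least be listed from the data dictionary and then added to the diagram by hand:

    -- Sketch: every table/column with a foreign key pointing at `my_table`.
    -- Replace 'my_schema' and 'my_table' with the real names.
    SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, CONSTRAINT_NAME
    FROM   information_schema.KEY_COLUMN_USAGE
    WHERE  REFERENCED_TABLE_SCHEMA = 'my_schema'
      AND  REFERENCED_TABLE_NAME   = 'my_table';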

how often to run Tablediff

Posted: 10 Mar 2013 06:35 PM PDT

I wonder whether it is safe to run the tablediff tool against a database fairly often, say every 15 or 30 minutes. The data in the DB is around 100 GB.

How to compare two SQL Server 2008 configurations

Posted: 10 Mar 2013 12:30 PM PDT

I have a problem with my application: it works on my dev SQL Server 2008 (Developer Edition), but not on a production SQL Server 2008. Is there an easy way to export the settings from the production server and compare them to my server's configuration?

What I found out is that I can export Facets in SQL Server Management Studio to XML files and compare them in a diff tool.

Is there any other/better way to export and compare settings of two SQL Server instances?
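
One lightweight option (a sketch, not the only way): dump the server-level settings from the catalog views on both instances and diff the two result sets.

    -- Instance-level settings (the same values sp_configure shows).
    SELECT name, value, value_in_use
    FROM   sys.configurations
    ORDER  BY name;

    -- A few database-scoped options that often differ between dev and production.
    SELECT name, compatibility_level, collation_name, is_read_committed_snapshot_on
    FROM   sys.databases
    ORDER  BY name;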

ORA-01017: invalid username/password; logon denied

Posted: 10 Mar 2013 01:52 AM PST

I've come across the abnormal situation, and can't understand what happens.

I am connecting to databases using PL/SQL Developer 8.0.4.1514 (I've also tried v7, but it's the same). Sometimes (when I am trying to log on, when I open another session, or when I am trying to run a query in a previously opened session) this error message appears:

[image: ORA-01017 error dialog]

Sometimes the error goes away after 3 tries (using the same password each time), sometimes it takes only one retry...

What may be the reason?

Thank you ...
