Thursday, March 14, 2013

[how to] How to set up a database for multiple databases

How to set up a database for multiple databases

Posted: 14 Mar 2013 05:50 PM PDT

A friend of mine asked me to try to create a WordPress plugin for him that will allow him to display the results of the events in the little office golf "tour" that he runs.

My MySQL skills are pretty basic, so I am not sure how to set up the tables for this.

I need to record the event name, the course being played, the par for each hole on that course, the names of the golfers, and their scores for each hole.

The number of events will vary and the number of golfers will vary from event to event.

I am not really sure how to start and how to link everything together. Any help or direction will be greatly appreciated.

Thanks in advance.

Chris
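
One possible starting point is to split the data into separate tables for courses, holes, golfers, events, and per-hole scores. The sketch below is only illustrative; every table and column name here is an assumption, not something taken from the question:

-- Illustrative schema sketch (all names are assumptions)
CREATE TABLE course (
  course_id   INT AUTO_INCREMENT PRIMARY KEY,
  course_name VARCHAR(100) NOT NULL
);

CREATE TABLE hole (
  course_id   INT NOT NULL,
  hole_number TINYINT NOT NULL,
  par         TINYINT NOT NULL,          -- par for this hole on this course
  PRIMARY KEY (course_id, hole_number),
  FOREIGN KEY (course_id) REFERENCES course (course_id)
);

CREATE TABLE golfer (
  golfer_id   INT AUTO_INCREMENT PRIMARY KEY,
  golfer_name VARCHAR(100) NOT NULL
);

CREATE TABLE golf_event (
  event_id    INT AUTO_INCREMENT PRIMARY KEY,
  event_name  VARCHAR(100) NOT NULL,
  course_id   INT NOT NULL,              -- which course the event is played on
  event_date  DATE,
  FOREIGN KEY (course_id) REFERENCES course (course_id)
);

CREATE TABLE score (
  event_id    INT NOT NULL,
  golfer_id   INT NOT NULL,
  hole_number TINYINT NOT NULL,
  strokes     TINYINT NOT NULL,          -- the golfer's score on that hole
  PRIMARY KEY (event_id, golfer_id, hole_number),
  FOREIGN KEY (event_id)  REFERENCES golf_event (event_id),
  FOREIGN KEY (golfer_id) REFERENCES golfer (golfer_id)
);

Because golfers are tied to events only through the score table, the number of events and the number of golfers per event can vary freely.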

Migrate SQL database to multiple files (shrinkfile emptyfile vs log shipping)

Posted: 14 Mar 2013 07:57 PM PDT

I have a SQL Server 2008 database in production that we are moving to a new server. The current database has a single ~400 GB MDF file. The new server will be running SQL Server 2012, and we are running mirrored Intel 910 SSDs, which will present us with 4x 200 GB partitions.

To make this work, we will need to split the single MDF into 4 smaller files using DBCC SHRINKFILE with EMPTYFILE. We have done this in test, and it still takes ~3.5 hours, which is too long. The existing database is OLTP and runs 24/7/365, and I know blocking will occur during this process, so we can't do it on production first.

My question: is there a way to back up and restore the database to the new server in a temporary location, create the new files, EMPTY the temp MDF into the new locations, and then apply transaction logs afterwards? That way we could move the data while the current production server is up and running, then do a short shutdown, apply the logs, and bring up the new database.

Or are there any other options for getting from Server A with one file to Server B with four files on different drives with minimal downtime?
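
For reference, the mechanics of the split are to add the new files to the filegroup and then empty the original file into them; a rough sketch (database name, logical file names, paths and sizes are all illustrative assumptions, and the primary MDF cannot be emptied completely because it holds system pages):

-- Add the additional data files (names, paths and sizes are placeholders)
ALTER DATABASE MyDb ADD FILE (NAME = MyDb_Data2, FILENAME = 'E:\Data\MyDb_Data2.ndf', SIZE = 100GB);
ALTER DATABASE MyDb ADD FILE (NAME = MyDb_Data3, FILENAME = 'F:\Data\MyDb_Data3.ndf', SIZE = 100GB);
ALTER DATABASE MyDb ADD FILE (NAME = MyDb_Data4, FILENAME = 'G:\Data\MyDb_Data4.ndf', SIZE = 100GB);

-- Move data out of the original file into the other files in the same filegroup
DBCC SHRINKFILE (MyDb_Data, EMPTYFILE);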

Index use and column type

Posted: 14 Mar 2013 03:58 PM PDT

I was told today that a table structure similar to this is bad for index performance because of the mismatch of column types:

Table 1: id - int(7), primary key; other columns ...

Table 2: id - int(11), indexed (has to exist in Table 1 logically, though not enforced through a foreign key); other columns ...

I'm using MySQL with InnoDB.

I suppose the question could also apply to, for example, VARCHAR(200) vs. VARCHAR(220).

I know that the index size will be affected by the extra space needed per item (and storage space in general), probably in the 5-10% range, but will the time taken for joins increase?
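
For concreteness, a minimal sketch of the structure being described (table names are illustrative; the int display widths are as described above):

CREATE TABLE table_one (
  id INT(7) NOT NULL,
  -- other columns ...
  PRIMARY KEY (id)
) ENGINE=InnoDB;

CREATE TABLE table_two (
  id INT(11) NOT NULL,   -- must exist in table_one logically; no FK declared
  -- other columns ...
  KEY id_idx (id)
) ENGINE=InnoDB;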

Right Join vs Left Join - My joins are broken and I don't understand why

Posted: 14 Mar 2013 02:06 PM PDT

http://sqlfiddle.com/#!3/ef71e/1

I rather drastically cut down the fiddle, but I think the intent shines through. [Def] is a [DefaultClassification] table, and [Cls] is a [Classification] table that has some old records that need to be added in. The DefaultClassification table will be used in the future to spawn new groups of records into yet another table, and we're "unbreaking" an existing bit of logic/data with this process.

I did my best to distill the entire thing down to the barest essentials, but I've got another similar process that runs right after this one, so I'm looking to learn best practices and why this query is broken.

When I wrote this I intended it to run in a 2008+ environment, scripted or manually in SSMS. I don't know that it won't be scripted, but it might be. Right now it's entirely run by hand. The ROLLBACK and the SELECT at the top, above the PRINT messages at the end, are there because it doesn't work right; they let me validate it before committing. I would prefer a valid script that I don't have to muck with, however.

The specific issue I have is on line 62 of that revision-one link (http://sqlfiddle.com/#!3/ef71e/1), and it looks like this:

RIGHT OUTER JOIN [Def] tcd

If I flip that RIGHT to a LEFT, it does the insert but doesn't detect duplicates (to prevent insertion); if I use the RIGHT, it detects duplicates but doesn't handle insertions.

What did I break, and why?


And I'm told the use of @@ERROR was bad juju, but I don't know why, so bonus points (a 100 rep bounty) for that explanation too.
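
As a general point, the table on the preserved side of an outer join decides which rows survive, and the usual insert-only-missing-rows pattern is a LEFT JOIN from the source to the target with a filter for unmatched rows. A sketch against hypothetical column names (GroupId and Name are assumptions, not the fiddle's actual schema):

-- Insert only the [Def] rows that have no match in [Cls] (columns are hypothetical)
INSERT INTO [Cls] (GroupId, Name)
SELECT d.GroupId, d.Name
FROM [Def] AS d
LEFT JOIN [Cls] AS c
       ON c.GroupId = d.GroupId
      AND c.Name    = d.Name
WHERE c.GroupId IS NULL;   -- only source rows with no existing match get inserted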

Does PgBouncer create a separate pool for every database config entry?

Posted: 14 Mar 2013 12:54 PM PDT

If I set up a pgbouncer.ini config file with several database entries, does pgBouncer create a separate pool for each one? I would think so, but have not been able to confirm this.

For example, with something like this:

  [databases]
  db1 = ...
  db2 = ...

  [pgbouncer]
  ...
  max_client_conn = 480
  default_pool_size = 250

Would I have a default pool of 250 for each DB and a max of 480, or a total of 250 default and max of 480 connections that are shared between both?

Thanks very much!

MongoDB: group and document size limitation with aggregation and MapReduce

Posted: 14 Mar 2013 12:25 PM PDT

Hitting the limitations of MongoDB (in a single-machine environment), I wonder which part of MapReduce and aggregation with group/$group is causing the trouble.

Usually I use a mix of MR/Aggregation for counting documents with unique identifiers inside. Such an identifier could be based on a single field or also a combination of several fields etc.

Before counting the uniques, some $match/filtering is done first for the pipeline.

Anyway, MongoDB sometimes complains about the maximum document size, and sometimes about exceeding 16MB with $group.

How can I estimate the size involved? I really wonder whether something is wrong in my aggregation, or whether 16MB really is such a bad limitation that it makes MongoDB's aggregation framework useless for my "tiny" database of 50GB on a single machine.

BEGIN incorrect syntax error in sybase

Posted: 14 Mar 2013 12:10 PM PDT

I'm getting this error:

Incorrect syntax near keyword 'BEGIN'

while executing a stored procedure in isql, like this:

execute myDb..customNewProc param1='2013-03-14', param2='test'  

I don't understand why it is generating an error.

Is reorganizing indexes required after SHRINKDATABASE, or can I just reorganize indexes with fragmentation of more than 10%?

Posted: 14 Mar 2013 11:50 AM PDT

I am a new DBA on a SQL Server 2005 production report server. My predecessor created a maintenance plan with four tasks: CHECKDB; SHRINKDATABASE (N'DB1', 10, TRUNCATEONLY); a REORGANIZE of all indexes for all tables and views, with "Compact Large Objects" checked; and then UPDATE STATISTICS for all tables. There is also a separate job that runs after the maintenance plan and reorganizes any of the same DB1 indexes whose fragmentation is more than 10%. That seems redundant: if the indexes were just reorganized, why would there need to be a second job? And wouldn't the second job be more efficient anyway, since it doesn't spend resources on indexes below 10% fragmentation?
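
For context, a conditional job like that second one is usually driven by sys.dm_db_index_physical_stats; a sketch of the kind of check it presumably performs against DB1 with the 10% threshold:

-- List indexes in DB1 with more than 10% fragmentation (sketch only)
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM   sys.dm_db_index_physical_stats(DB_ID(N'DB1'), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN   sys.indexes AS i
       ON  i.object_id = ips.object_id
       AND i.index_id  = ips.index_id
WHERE  ips.avg_fragmentation_in_percent > 10;
-- ...followed by ALTER INDEX ... REORGANIZE for each index returned.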

Can't query one view from another?

Posted: 14 Mar 2013 11:03 AM PDT

Okay, I'm moving a database from SQL Server (which I already miss) to MySQL 5.6. I'm having a problem creating a list of valid dates. In SQL Server I just used CTEs to build up numbers, then months, then years, then days of the month, and then cross joined everything together to create a valid list. I'd rather do that than create a giant table listing all valid dates from the start to the end of time. Converting it to a set of views, however, doesn't seem to be working:

create view `Numbers` as
    select 0 N union all select 1 union all select 2 union all
    select 3 union all select 4 union all select 5 union all
    select 6 union all select 7 union all select 8 union all select 9;

create view `Months` as
    select n.N+1 MonthNum union all select 11 union all select 12
    from `Numbers` n;

Error from MySql: Error Code: 1109. Unknown table 'n' in field list.
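
The error happens because each SELECT in a UNION has its own FROM clause: as written, the from `Numbers` n part belongs only to the final select 12, so the first SELECT's n.N has no table to resolve against. If the intent is months 1-10 from Numbers plus the literals 11 and 12, one way to express it is to keep the FROM with the first SELECT (a sketch):

create view `Months` as
    select n.N+1 MonthNum from `Numbers` n
    union all select 11
    union all select 12;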

phpmyadmin - there is no style at all

Posted: 14 Mar 2013 10:30 AM PDT

I've installed phpMyAdmin, and when I open it, part of the styling is missing and I don't know why. I even checked whether some of the CSS files fail to load, but no, everything is okay.

I also set AllowNoPassword to true, and I still can't log in without a password.

MySQL: can not get rid of “Using filesort” in a simple query

Posted: 14 Mar 2013 07:06 PM PDT

It's a simple query, but I cannot get rid of "Using filesort":

CREATE TABLE IF NOT EXISTS `online` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `uid` int(11) NOT NULL,
  `expiration` int(11) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `uid` (`uid`),
  KEY `expiration` (`expiration`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1925234 ;

mysql> explain SELECT i.id, i.uid, i.expiration, u.nick, u.mainpicture
    -> FROM online i join usertable u on i.uid = u.id
    -> order by i.expiration DESC limit 0,12;

+----+-------------+-------+--------+---------------+---------+---------+----------------+------+----------------+
| id | select_type | table | type   | possible_keys | key     | key_len | ref            | rows | Extra          |
+----+-------------+-------+--------+---------------+---------+---------+----------------+------+----------------+
|  1 | SIMPLE      | i     | ALL    | uid           | NULL    | NULL    | NULL           | 1020 | Using filesort |
|  1 | SIMPLE      | u     | eq_ref | PRIMARY       | PRIMARY | 4       | dbasen01.i.uid |    1 |                |
+----+-------------+-------+--------+---------------+---------+---------+----------------+------+----------------+
2 rows in set (0.00 sec)

"usertable" has index on "id" column. Tried to order by "i.id" (primary key) but the same - query still uses filesort.

Update after RolandoMySQLDBA's answer:

+----+-------------+------------+--------+---------------+---------+---------+-------+------+----------------+
| id | select_type | table      | type   | possible_keys | key     | key_len | ref   | rows | Extra          |
+----+-------------+------------+--------+---------------+---------+---------+-------+------+----------------+
|  1 | PRIMARY     | <derived2> | ALL    | NULL          | NULL    | NULL    | NULL  |   12 |                |
|  1 | PRIMARY     | u          | eq_ref | PRIMARY       | PRIMARY | 4       | i.uid |    1 |                |
|  2 | DERIVED     | online     | ALL    | NULL          | NULL    | NULL    | NULL  | 1009 | Using filesort |
+----+-------------+------------+--------+---------------+---------+---------+-------+------+----------------+
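
Whether the sort can be avoided depends on the optimizer and version, but one thing worth experimenting with is reading the rows in index order on expiration, for example with an index hint (a sketch, not a guaranteed fix):

SELECT i.id, i.uid, i.expiration, u.nick, u.mainpicture
FROM online i FORCE INDEX (expiration)
JOIN usertable u ON i.uid = u.id
ORDER BY i.expiration DESC
LIMIT 0, 12;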

Enforce data integrity with a recursive table

Posted: 14 Mar 2013 01:32 PM PDT

I have an existing Oracle 11g database schema that works with a web application. I am planning an expansion of the application so that a web service can perform data operations on the database. As part of the planning I have realized that there are no data integrity checks on the parent/child relationships, which would make it problematic to let other applications work with the tables. I plan to do validation in the web service, but best practice is to have validation in both the database and the web service.

--the base lookup table has a table with text values that is not shown.
--Example Red, Green
CREATE TABLE PROPERTY
(
  ID                        NUMBER(9)           NOT NULL, --PRIMARY KEY
  TENANT_ID                 NUMBER(9)           NOT NULL
)

-- a property may or may not have a parent property.
--Example "Weight" of an item is a child of the "Shipping Weight"
CREATE TABLE PROPERTY_DEPENDENCY  --PRIMARY KEY PROPERTY_ID,PROPERTY_TYPE_ID
(
  PROPERTY_ID               NUMBER(9)           NOT NULL,
  PARENT_PROPERTY_ID        NUMBER(9),
  PROPERTY_TYPE_ID          NUMBER(9)           NOT NULL,
  ACTIVE                    NUMBER(1)           NOT NULL
)

--examples "Item Colour", "Item Trim Colour", "Shipping Weight", "Weight"
CREATE TABLE PROPERTY_TYPE
(
  ID                        NUMBER(9)           NOT NULL,  --PRIMARY KEY
  VALUE                     VARCHAR2(200 BYTE)  NOT NULL,
  PROPERTY_TYPE             NUMBER(10)          DEFAULT 1   NOT NULL
)

--and the table that you insert and update into
CREATE TABLE CASE_PROPERTY
(
  ID                        NUMBER(9)           NOT NULL, --PRIMARY KEY
  PARENT_ID                 NUMBER(9),          --constraint on PROPERTY
  CASE_ID                   NUMBER(9)           NOT NULL, --foreign key
  PROPERTY_ID               NUMBER(9),          --constraint on PROPERTY
  PROPERTY_TYPE_ID          NUMBER(9)           NOT NULL --constraint on PROPERTY_TYPE
)

These are the problems I have identified:

  • you can insert into CASE_PROPERTY and make a property its own parent or grandparent
  • you can insert the wrong PROPERTY_TYPE_ID for a PROPERTY_ID into CASE_PROPERTY
  • you can insert into CASE_PROPERTY a PARENT_ID that makes no sense for a PROPERTY_TYPE_ID

I can add a check constraint so that PARENT_ID <> PROPERTY_ID, meaning a property cannot be its own parent.

Edit 3: The real problem is that the tables are not normalized properly, which is great for reporting but hard on data validation. CASE_PROPERTY.PROPERTY_TYPE_ID should always be the same as the value in PROPERTY_DEPENDENCY.PROPERTY_TYPE_ID, but I don't know how to validate this.

Are there any ways other than triggers to enforce data integrity on CASE_PROPERTY?

Edit: I'll put together a complete example. If I added foreign key constraints on PROPERTY_DEPENDENCY, I would verify that only properties with parents were inserted, but would they be the correct parents?

Edit 2: Here is a complete example of the inserts that are allowed. The last two inserts are examples of data that is allowed but should not be.

ALTER TABLE CASE_PROPERTY ADD
  CONSTRAINT CASE_PROPERTY_R01
  FOREIGN KEY (PARENT_ID)
  REFERENCES CASE_PROPERTY (ID)
  ENABLE VALIDATE;

Insert into PROPERTY (ID, TENANT_ID) Values (2, 1);
Insert into PROPERTY (ID, TENANT_ID) Values (3, 1);
Insert into PROPERTY (ID, TENANT_ID) Values (4, 1);

Insert into PROPERTY_TYPE (ID, VALUE, PROPERTY_TYPE) Values (10, 'Colour', 2);
Insert into PROPERTY_TYPE (ID, VALUE, PROPERTY_TYPE) Values (11, 'Trim Colour', 1);
Insert into PROPERTY_TYPE (ID, VALUE, PROPERTY_TYPE) Values (12, 'Shipping Weight', 1);
Insert into PROPERTY_TYPE (ID, VALUE, PROPERTY_TYPE) Values (13, 'Weight', 3);

Insert into PROPERTY_DEPENDENCY (PROPERTY_ID, PARENT_PROPERTY_ID, PROPERTY_TYPE_ID) Values (4, 3, 11);
Insert into PROPERTY_DEPENDENCY (PROPERTY_ID, PARENT_PROPERTY_ID, PROPERTY_TYPE_ID) Values (3, NULL, 10);
Insert into PROPERTY_DEPENDENCY (PROPERTY_ID, PARENT_PROPERTY_ID, PROPERTY_TYPE_ID) Values (1, NULL, 12);
Insert into PROPERTY_DEPENDENCY (PROPERTY_ID, PARENT_PROPERTY_ID, PROPERTY_TYPE_ID) Values (2, 1, 13);

--example of a property validated data insert
--item 201 with type 13 is the child of item 200 of type 12
Insert into CASE_PROPERTY (ID, PARENT_ID, CASE_ID, PROPERTY_ID, PROPERTY_TYPE_ID) Values (200, NULL, 3000, 1, 12);
Insert into CASE_PROPERTY (ID, PARENT_ID, CASE_ID, PROPERTY_ID, PROPERTY_TYPE_ID) Values (201, 200, 3000, 2, 13);

--bad data inserts
--a property is parent to itself with an incorrect property_type_id
Insert into CASE_PROPERTY (ID, PARENT_ID, CASE_ID, PROPERTY_ID, PROPERTY_TYPE_ID) Values (202, 202, 4000, 3, 10);
--should be 202, null, 4000, 3, 10

--a property is inserted with a parent that is not allowed
Insert into CASE_PROPERTY (ID, PARENT_ID, CASE_ID, PROPERTY_ID, PROPERTY_TYPE_ID) Values (203, 200, 4000, 2, 13);
--parent property should be 1 not 2
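
For the first bad insert above (row 202, whose PARENT_ID is its own ID), a row-level check constraint can at least reject the direct self-parent case; a sketch (the constraint name is illustrative, and this does not catch longer cycles or the wrong-parent/wrong-type problems):

ALTER TABLE CASE_PROPERTY
  ADD CONSTRAINT CASE_PROPERTY_NO_SELF_PARENT
  CHECK (PARENT_ID IS NULL OR PARENT_ID <> ID);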

How do I optimize this complex query?

Posted: 14 Mar 2013 10:17 AM PDT

I'm pretty skilled in MySQL, although I wouldn't consider myself an expert at all. Recently I started working with more complex queries in osCommerce and other systems.

I stumbled upon a query at my job that was running slowly, and I was asked to look into optimizing it. However, everything I know about optimization is already being applied. I'm not sure whether using left joins all the way through is a good idea, though.

Here's the query:

SELECT
    po.products_options_name, p.jhk_sizetypes_id, p.jhk_products_size,
    pd.jhk_products_sizetext, ps.products_stock_code, p.products_id,
    pd.products_name, pd.products_description, ps.bestseller_score,
    p.products_date_added, p.products_price, p.products_price1,
    p.products_price2, p.products_price3, p.products_price4,
    p.products_price1_qty, p.products_price2_qty, p.products_price3_qty,
    p.products_price4_qty, p.products_tax_class_id, p.products_ordered,
    IF(s.status, s.specials_new_products_price, NULL) AS specials_new_products_price,
    IF(s.status, s.specials_new_products_price, p.products_price) AS final_price
FROM (products p, products_description pd)
LEFT JOIN specials s
       ON p.products_id = s.products_id
LEFT JOIN products_options po
       ON (po.products_options_id = p.jhk_sizetypes_id AND po.language_id = '4'),
     (categories c, products_to_categories p2c),
     products_stock ps
LEFT JOIN products_price pback
       ON (pback.products_stock_code = ps.products_stock_code AND pback.priceGroup = 'EUR')
LEFT JOIN products_price pp
       ON (pp.products_stock_code = ps.products_stock_code AND pp.priceGroup = 'Danmark')
WHERE ps.products_id = p.products_id
  AND (p2c.categories_id = 0 OR pp.price > 0.0 OR pback.price > 0.0)
  AND p.products_status = '1'
  AND p.products_id = pd.products_id
  AND pd.language_id = '4'
  AND p.products_id = p2c.products_id
  AND p2c.categories_id = c.categories_id
  AND c.categories_status = '1'
  AND (ps.products_stock_quantity > 0
       OR (p.products_delivery_time > 0 AND ps.products_stock_soldout = '0')
       OR (ps.products_date_available > 0 AND p.products_delivery_time >= 0))
GROUP BY p.products_id
ORDER BY p.products_ordered DESC;

Here's my explain data.

(The EXPLAIN output was attached as a screenshot in the original post and is not reproduced here.)

View Disc Usage

Posted: 14 Mar 2013 12:56 PM PDT

Is this statement correct?

Behind the scenes SQL Server actually stores the data in the view as a physical table which must be updated if any of the data behind the view changes

I did some searching but couldn't find any resources on how views work in SQL Server. I'd always thought that views don't take up any extra space for data unless you're using an indexed/materialized view.

Thanks
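
One way to check this yourself is to look in the catalog views for storage attached to views; a regular (non-indexed) view has no rows in sys.partitions, so a query like the sketch below returns rows only for indexed views:

-- Lists storage partitions attached to views; regular views produce no rows
SELECT v.name AS view_name, p.index_id, p.rows
FROM   sys.views      AS v
JOIN   sys.partitions AS p
       ON p.object_id = v.object_id;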

Can memcached improve performance for InnoDB in mem with a large innodb_buffer_pool_size?

Posted: 14 Mar 2013 11:18 AM PDT

I have learned that innodb_buffer_pool_size makes InnoDB behave like an in-memory database. If I give innodb_buffer_pool_size enough RAM for the entire database, could memcached improve performance further, or would memcached be unnecessary in such a setup?

Bypass MyISAM table lock

Posted: 14 Mar 2013 12:09 PM PDT

I have a big MyISAM table with a fulltext index for searching. The problem is that inserting data into that table takes a table lock, and in the meantime no other SQL query can do anything because the whole table is locked.

Is there a possibility or a workaround to bypass that table lock without ending up with a corrupt table? It seems that InnoDB is the right way to go (because of row-level locking), but InnoDB only includes full-text search as of MySQL 5.6, which is not in the current Debian repository; even the new Debian Wheezy does not include MySQL 5.6.

What can I do in my case?

How to Install/Configure Oracle Instant Client on Ubuntu

Posted: 14 Mar 2013 04:23 PM PDT

Installing the Oracle Instant Client on Ubuntu/Debian-based systems is awkward, since Oracle is RPM-centric in its Linux support.

What's the best way to easily install and configure the Instant Client on Ubuntu?

What alternatives exist when a table requires too many foreign keys?

Posted: 14 Mar 2013 04:23 PM PDT

We have a base table that defines parts and holds information like part number, description, price, weight, etc. We also have approximately 400 tables that reference the base table and provide additional information about the parts based on their type / category.

We started by using foreign key constraints so that a part cannot be deleted from the base table if it is referenced in one of the 400 part-specific tables, but we quickly reached the recommended maximum of 253 foreign keys for SQL Server 2005.

Are there any alternatives to foreign keys in this situation that will ensure data integrity? We haven't seen performance issues when accessing the data, but updating an existing part in the base table fails because the query plan is too complex.

Why can't RDBMSs cluster the way NoSQL does?

Posted: 14 Mar 2013 05:40 PM PDT

One of the big plusses for NoSQL DBMSs is that they can cluster more easily. Supposedly with NoSQL you can create hundreds of cheap machines that store different pieces of data and query it all at once.

My question is this: why can't relational DBMSs such as MySQL or SQL Server do this? Is it that the vendors just haven't figured out a technical way to do it with their existing products, or is there some issue with the relational model that prevents it from being feasible? What is so great about the NoSQL way of storing and accessing data (key/value, documents, etc.) that makes clustering easier, if this is true at all?

Are view references in a query properly called "derived tables" as such?

Posted: 14 Mar 2013 07:40 PM PDT

While answering a question on stackoverflow, I presented a definition of derived tables:

A derived table is a complete query, inside of parentheses, that is used as if it were a real table.

But a commenter objected:

Though there are other kinds of derived tables besides those "inside of parentheses". ... [such as] Views and Table-Valued Functions ... .

and further backed this up with:

From the ISO/IEC 2003 Spec, section 4.3 of the Framework volume, page 13 of the August 2003 draft Spec: "An operation that references zero or more base tables and returns a table is called a query. The result of a query is called a derived table." Note that both Views and Table-Valued Functions return "the result of a query", which is a derived table. Microsoft (and to a lesser extent, Oracle) are notorious for mistakenly equating "derived table" and "sub-query" in their documentation, but Derived Tables do also include pre-defined queries like Views.

So what's the real scoop here? Am I to relegate what I thought of as derived tables to simply "FROM clause aliased inline subqueries", or are views not properly derived tables?

Please note: I searched for quite a while online and could not find anything definitive. I don't have a copy of the said spec.

Also, I think it's worth addressing something else. Let's say that views are properly called "derived tables". Does this make the reference to the view also a "derived table" or merely a reference? For an example with a CTE that should drive the point home:

WITH SalesTotals AS (
   SELECT
      O.CustomerID,
      SalesTotal = Sum(OrderTotal)
   FROM
      dbo.CustomerOrder O
   GROUP BY
      O.CustomerID
)
SELECT
   C.Name,
   S.SalesTotal
FROM
   dbo.Customer C
   INNER JOIN SalesTotals S
      ON C.CustomerID = S.CustomerID;

The SalesTotals CTE introduced with WITH is a derived table. But is INNER JOIN SalesTotals also a derived table, or just a reference to a derived table? Does this query have two derived tables or one? If one, then by extension I think that a view may be a derived table, but referencing it may not have to be a derived table.

SQL Query to replace character based on join

Posted: 14 Mar 2013 01:39 PM PDT

I have a table called service entry which has a field called part description. I am trying to normalize my table structures.

Dbo.ServiceEntry

ServiceEntryID  Description
1200            270 ~ Syringe assembly  ~ 7,3 ~ Increase drive current from 3 --> 1.|282 ~ LPH pcb ~ 8 ~  | 287 ~ Valve block     assembly ~ null ~

Sample data for the part description column:

270 ~ Syringe assembly ~ 7,3 ~ Increase drive current from 3 --> 1.|282 ~ LPH pcb ~ 8 ~ | 287 ~ Valve block assembly ~ null ~

In the above, the structure of the column is: PartID ~ PartDescription ~ ServiceType ~ Comment

For multiple parts, the | character is added and the structure is repeated.

Now, the other table, ServiceEntryPart, has a one-to-many relationship with it: for instance, a ServiceEntry can have multiple ServiceEntryPart rows.

Dbo.ServiceEntryPart
ID, ServiceEntryID, PartID, Comment, ServiceType

A new update to the ServiceEntryPart table adds a column called PartDescription, so it will look like this:

Dbo.ServiceEntryPart
ID, ServiceEntryID, PartID, Comment, ServiceType, PartDescription

Existing Data in ServiceEntryPart:

ID  ServiceEntryID  PartDescription  PartID  ServiceType  Comment
1   1200                             270     7,3          Increase drive current from 3
2   1200                             282     8
3   1200                             287     null

Desired Data in ServiceEntryPart

ID  ServiceEntryID  PartDescription   PartID  ServiceType  Comment
1   1200            Syringe assembly  270     7,3          Increase drive current from 3
2   1200            LPH pcb           282     8
3   1200            Valve block       287     null

So essentially, for the sake of normalization, I want to look at the existing service entries, parse the part descriptions out of the Description column, and update the ServiceEntryPart table with the appropriate part descriptions.
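
A rough sketch of the kind of UPDATE that could do this, assuming SQL Server 2005 or later (given the dbo. prefixes), that each PartID appears in Description as "PartID ~ PartDescription ~ ...", and that a given PartID occurs only once per entry; the string handling would need testing against the real data:

-- Pull each part's description out of ServiceEntry.Description by locating
-- "PartID ~" and taking the text up to the next "~" (sketch only)
UPDATE sep
SET    PartDescription = LTRIM(RTRIM(SUBSTRING(se.Description, x.StartPos,
           CHARINDEX('~', se.Description + '~', x.StartPos) - x.StartPos)))
FROM   dbo.ServiceEntryPart AS sep
JOIN   dbo.ServiceEntry     AS se
       ON se.ServiceEntryID = sep.ServiceEntryID
CROSS APPLY (SELECT CHARINDEX(CAST(sep.PartID AS varchar(20)) + ' ~', se.Description)
                    + LEN(CAST(sep.PartID AS varchar(20)) + ' ~') AS StartPos) AS x
WHERE  CHARINDEX(CAST(sep.PartID AS varchar(20)) + ' ~', se.Description) > 0;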

Indexing is not working

Posted: 14 Mar 2013 06:40 PM PDT

Below is the table and index information; the queries in question are at the end.

Create Table: CREATE TABLE `food` (
  `id` int(10) unsigned NOT NULL DEFAULT '0',
  `page_no` int(11) DEFAULT NULL,
  `nid` int(11) DEFAULT NULL,
  `name` varchar(255) CHARACTER SET utf8 DEFAULT NULL,
  `source` varchar(4) CHARACTER SET utf8 NOT NULL,
  `brand` varchar(500) CHARACTER SET utf8 DEFAULT NULL,
  `cuisine` varchar(100) CHARACTER SET utf8 DEFAULT NULL,
  `subcuisine` varchar(100) CHARACTER SET utf8 DEFAULT NULL,
  `type` varchar(255) CHARACTER SET utf8 DEFAULT NULL,
  `per_nr` float DEFAULT NULL,
  `category` varchar(100) CHARACTER SET utf8 DEFAULT NULL,
  `source_category` varchar(255) CHARACTER SET utf8 DEFAULT NULL,
  `source_parent_category` varchar(255) CHARACTER SET utf8 DEFAULT NULL,
  KEY `id_idx` (`id`),
  KEY `page_no_idx` (`page_no`),
  KEY `source_idx` (`source`),
  KEY `nid_idx` (`nid`),
  KEY `per_nr_idx` (`per_nr`),
  KEY `per_nr_source_category_nid_idx` (`per_nr`,`source_category`,`nid`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

EXPLAIN 1

mysql> EXPLAIN SELECT food.nid, name, brand, cuisine, source_category, subcuisine, course, per_nr
       FROM food
       where category is not null and per_nr is not null and nid is not null;

+----+-------------+-------+------+---------------------------------------------------+------+---------+------+--------+-------------+
| id | select_type | table | type | possible_keys                                     | key  | key_len | ref  | rows   | Extra       |
+----+-------------+-------+------+---------------------------------------------------+------+---------+------+--------+-------------+
|  1 | SIMPLE      | food  | ALL  | nid_idx,per_nr_idx,per_nr_source_category_nid_idx | NULL | NULL    | NULL | 532752 | Using where |
+----+-------------+-------+------+---------------------------------------------------+------+---------+------+--------+-------------+

EXPLAIN 2

mysql> EXPLAIN SELECT name, nid, brand, cuisine, type, subcuisine, course, per_nr, source, source_category
       FROM food
       where per_nr>47 and source_category='Cereal Grains and Pasta' and food.nid!=205014 and food.nid is not null;

+----+-------------+-------+------+---------------------------------------------------+------+---------+------+--------+-------------+
| id | select_type | table | type | possible_keys                                     | key  | key_len | ref  | rows   | Extra       |
+----+-------------+-------+------+---------------------------------------------------+------+---------+------+--------+-------------+
|  1 | SIMPLE      | food  | ALL  | nid_idx,per_nr_idx,per_nr_source_category_nid_idx | NULL | NULL    | NULL | 532752 | Using where |
+----+-------------+-------+------+---------------------------------------------------+------+---------+------+--------+-------------+
1 row in set (0.09 sec)

Queries:

  1. Explain 1: As you can see, the query references columns that are indexed, but EXPLAIN still shows that MySQL was not able to pick a key to make it faster.
  2. Explain 2: I created the combined index per_nr_source_category_nid_idx in the same order in which the fields appear in the WHERE clause, but MySQL still doesn't choose that index.

Although I have created the indexes, the system is still not able to use them. That is my question for both of the queries above.

Detailed index information

mysql> SHOW INDEX FROM food;

+-------+------------+--------------------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name                       | Seq_in_index | Column_name     | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+--------------------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| food  |          1 | id_idx                         |            1 | id              | A         |      822286 |     NULL | NULL   |      | BTREE      |         |               |
| food  |          1 | page_no_idx                    |            1 | page_no         | A         |      822286 |     NULL | NULL   | YES  | BTREE      |         |               |
| food  |          1 | source_idx                     |            1 | source          | A         |          27 |     NULL | NULL   |      | BTREE      |         |               |
| food  |          1 | nid_idx                        |            1 | nid             | A         |      822286 |     NULL | NULL   | YES  | BTREE      |         |               |
| food  |          1 | per_nr_idx                     |            1 | per_nr          | A         |          27 |     NULL | NULL   | YES  | BTREE      |         |               |
| food  |          1 | per_nr_source_category_nid_idx |            1 | per_nr          | A         |          27 |     NULL | NULL   | YES  | BTREE      |         |               |
| food  |          1 | per_nr_source_category_nid_idx |            2 | source_category | A         |       12092 |     NULL | NULL   | YES  | BTREE      |         |               |
| food  |          1 | per_nr_source_category_nid_idx |            3 | nid             | A         |      822286 |     NULL | NULL   | YES  | BTREE      |         |               |
+-------+------------+--------------------------------+--------------+-----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
8 rows in set (4.35 sec)
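
As an aside on the second query: MySQL generally stops using the remaining columns of a composite index once it hits a range condition, so an index that leads with the equality column (source_category) and puts the range column (per_nr) after it is usually what the optimizer can actually use. A hedged sketch (the index name is illustrative):

ALTER TABLE food
  ADD INDEX source_category_per_nr_nid_idx (source_category, per_nr, nid);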

How to speed up backup and restore of a 500GB database using mysqldump?

Posted: 14 Mar 2013 03:40 PM PDT

There is a database A with a size of 500GB. Database A contains both MyISAM and InnoDB tables; the MyISAM tables are master tables and the InnoDB tables are the main transaction tables.

Backup and restore using mysqldump are taking quite a long time, even days, with the following settings (see the sketch after this list for how they are applied during the restore):

  • max_allowed_packet=1G
  • foreign_key_checks=0
  • auto_commit = off
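
A sketch of how those session settings are typically applied around the import, inside the mysql client (the dump path is a placeholder, and unique_checks is an extra setting commonly combined with these):

SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;
SOURCE /path/to/dump.sql;
COMMIT;
SET foreign_key_checks = 1;
SET unique_checks = 1;
SET autocommit = 1;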

How to check growth of database in mysql?

Posted: 14 Mar 2013 02:39 PM PDT

I want to know whether there is any method to check the growth of a database on disk. For example, database A contains all of its tables in the InnoDB storage engine, and another database B contains all of its tables in the MyISAM storage engine.
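
MySQL does not keep a growth history by itself, but the current size per database (and per engine) can be read from information_schema and sampled over time; a sketch:

-- Current data + index size per schema and engine, in MB
SELECT table_schema,
       engine,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS size_mb
FROM   information_schema.TABLES
GROUP  BY table_schema, engine;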

Tables not shown in SQL Server Enterprise Manager

Posted: 14 Mar 2013 11:26 AM PDT

For some reason, three tables that are clearly in use by the application are not listed in the Tables view in SQL Server Enterprise Manager. It's SQL Server 2000. When generating a diagram, a popup appears stating that those tables couldn't be loaded. The weird thing is that data from those tables is actually returned in the application (and not from a cache).
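
One quick sanity check in SQL Server 2000 is whether the three tables are actually present in this database's catalog and which owner they belong to; a sketch (the table names are placeholders):

-- List the user tables in question with their owners (names are placeholders)
SELECT name, USER_NAME(uid) AS owner, xtype
FROM   sysobjects
WHERE  xtype = 'U'
  AND  name IN ('MissingTable1', 'MissingTable2', 'MissingTable3');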

How to restrict MySQL user to not import and export database

Posted: 14 Mar 2013 10:39 AM PDT

I have created a read-only MySQL user, but this user still has the option to export the database through phpMyAdmin. How can I disable the Export and Import options?

MySQL 5.5 strange behavior with GROUP BY and LIMIT using INDEX vs. INDEX FOR GROUP BY

Posted: 14 Mar 2013 11:38 AM PDT

I feel like I've found a bug in mysql, but haven't been able to track it down. Any idea what's going on here?

This was working just fine in MySQL 5.1, but then it never used the INDEX FOR GROUP BY.

mysql> select count(DISTINCT(IF(user_id = 0, ip, user_id))) from tm_tmp group by artist_id limit 11;
+-----------------------------------------------+
| count(DISTINCT(IF(user_id = 0, ip, user_id))) |
+-----------------------------------------------+
|                                             1 |
|                                             2 |
|                                             1 |
|                                             2 |
|                                             1 |
|                                             4 |
|                                             1 |
|                                             3 |
|                                             2 |
|                                             1 |
|                                             1 |
+-----------------------------------------------+
11 rows in set (0.00 sec)

mysql> select count(DISTINCT(IF(user_id = 0, ip, user_id))) from tm_tmp group by artist_id limit 12;
+-----------------------------------------------+
| count(DISTINCT(IF(user_id = 0, ip, user_id))) |
+-----------------------------------------------+
|                                             0 |
|                                             0 |
|                                             0 |
|                                             0 |
|                                             0 |
|                                             0 |
|                                             0 |
|                                             0 |
|                                             0 |
|                                             0 |
|                                             0 |
|                                             0 |
+-----------------------------------------------+
12 rows in set (0.00 sec)

mysql> explain extended select count(DISTINCT(IF(user_id = 0, ip, user_id))) from tm_tmp group by artist_id limit 11;
+----+-------------+--------+-------+---------------+-----------------+---------+------+------+----------+-------------+
| id | select_type | table  | type  | possible_keys | key             | key_len | ref  | rows | filtered | Extra       |
+----+-------------+--------+-------+---------------+-----------------+---------+------+------+----------+-------------+
|  1 | SIMPLE      | tm_tmp | index | NULL          | sp_artist_reach | 12      | NULL |   45 |    26.67 | Using index |
+----+-------------+--------+-------+---------------+-----------------+---------+------+------+----------+-------------+
1 row in set, 1 warning (0.00 sec)

mysql> explain extended select count(DISTINCT(IF(user_id = 0, ip, user_id))) from tm_tmp group by artist_id limit 12;
+----+-------------+--------+-------+---------------+-----------------+---------+------+------+----------+--------------------------+
| id | select_type | table  | type  | possible_keys | key             | key_len | ref  | rows | filtered | Extra                    |
+----+-------------+--------+-------+---------------+-----------------+---------+------+------+----------+--------------------------+
|  1 | SIMPLE      | tm_tmp | range | NULL          | sp_artist_reach | 4       | NULL |   12 |   100.00 | Using index for group-by |
+----+-------------+--------+-------+---------------+-----------------+---------+------+------+----------+--------------------------+

mysql> show create table tm_tmp;
| Table  | Create Table
| tm_tmp | CREATE TABLE `tm_tmp` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `artist_id` int(11) NOT NULL DEFAULT '0',
  `user_id` int(11) NOT NULL DEFAULT '0',
  `ip` int(10) unsigned NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  KEY `sp_artist_reach` (`artist_id`,`user_id`,`ip`),
) ENGINE=MyISAM AUTO_INCREMENT=46 DEFAULT CHARSET=latin1 |
1 row in set (0.00 sec)

* Edit *

I have filed the following bug report: http://bugs.mysql.com/64445. We'll see what comes out of it.

The solution we were able to get working was forcing a derived table like this:

SELECT count(DISTINCT(IF(user_id = 0, ip, user_id)))
FROM (
  SELECT artist_id, user_id, ip
  FROM tm_tmp
  GROUP BY artist_id, user_id, ip
) summary
GROUP BY artist_id
LIMIT 12;

mysql> explain SELECT count(DISTINCT(IF(user_id = 0, ip, user_id))) FROM (
       SELECT artist_id, user_id, ip
       FROM tm_tmp
       GROUP BY artist_id, user_id, ip ) summary GROUP BY artist_id LIMIT 11;

+----+-------------+------------+-------+---------------+-----------------+---------+------+------+--------------------------+
| id | select_type | table      | type  | possible_keys | key             | key_len | ref  | rows | Extra                    |
+----+-------------+------------+-------+---------------+-----------------+---------+------+------+--------------------------+
|  1 | PRIMARY     | <derived2> | ALL   | NULL          | NULL            | NULL    | NULL |   21 | Using filesort           |
|  2 | DERIVED     | tm_tmp     | range | NULL          | sp_artist_reach | 12      | NULL |   23 | Using index for group-by |
+----+-------------+------------+-------+---------------+-----------------+---------+------+------+--------------------------+
