Thursday, March 21, 2013

Is using the alpha version of Percona 5.6 safe?

Posted: 21 Mar 2013 07:56 PM PDT

I'm sorry if my question is very basic, but it's something that has been making me think for a long time. As Percona carries a lot of optimization code, I run it instead of standard MySQL. I've always run Percona 5.5 with no problems. However, I'd like to upgrade to version 5.6 in order to get a plugin that uses full-text search working on my InnoDB tables (InnoDB supports FULLTEXT in the newest MySQL 5.6 versions). I have two options:

1) Remove Percona and install MySQL 5.6.10 (the standard version)

2) Install the Percona 5.6 alpha version, which will also allow me to use the plugin that requires fulltext on my InnoDB tables.

I'd like to know from anyone more experienced than me whether it is safe to use this Percona 5.6 version. I currently run Percona 5.5 on a server that hosts only WordPress, and I don't want to stop using Percona, but I also don't want something that could make me lose data (is that possible, given that Percona 5.6 is new?).
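For reference, the feature driving the upgrade is InnoDB full-text indexing; a minimal sketch of what 5.6 makes possible (table and column names are just examples of mine):

-- MySQL/Percona 5.6+: FULLTEXT indexes are now allowed on InnoDB tables.
CREATE TABLE posts (
  id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  body TEXT,
  FULLTEXT KEY ft_body (body)
) ENGINE=InnoDB;

SELECT id FROM posts
WHERE MATCH(body) AGAINST ('some words' IN NATURAL LANGUAGE MODE);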

Thank you all for your attention.

Refactoring/normalization - but almost empty table

Posted: 21 Mar 2013 03:20 PM PDT

I normalized a legacy DB into this structure (ER diagram not reproduced here):

But I'm not sure if it is correctly normalized. I don't feel very comfortable with the almost empty filters table.

The requirements

  • A group contains several items.
  • A filter organizes how an item is "displayed".
  • Multiple terms define a filter.
  • Different groups may contain the same item, but with different filters.
  • A filter can have no terms.

Not really important requirements

  • A filter might be used several times.

By the way, the diagram was created using http://cakeapp.com.

Cannot use python3 as stored procedure language in PostgreSQL database

Posted: 21 Mar 2013 05:05 PM PDT

I want to have Python 3 in my PostgreSQL database for writing stored procedures.

In the psql client, when I enter the command create extension plpython3u, I get the error:

couldn't open extension control file /usr/share/postgresql/9.1/extension/plpython3u.control : No such file or directory  

I have checked, there are plpythonu.control and plpython2u.control in the directory, but not the one for version 3.

However, I have installed the packages python3 and python3-postgresql (among others) from the repositories. I'm using Ubuntu 12.04, kernel 3.2.0-38, with PostgreSQL 9.1 installed.

What should I install (or do) to have the plpython3u.control file on my machine and have python3 available in my database?
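For what it's worth, my guess is that the PL/Python 3 support ships in a separate OS package that I haven't installed; a hedged sketch of what I would try (the package name is my assumption from Debian/Ubuntu naming conventions, please correct me):

# assumed package name for PostgreSQL 9.1 on Ubuntu 12.04
sudo apt-get install postgresql-plpython3-9.1

-- then, back in the psql client:
CREATE EXTENSION plpython3u;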

media family on device is incorrectly formed. SQL Server cannot process this media family

Posted: 21 Mar 2013 02:32 PM PDT

I am trying to restore a .BAK in SQL Server but get the following error:

Msg 3241, Level 16, State 7, Line 1
The media family on device 'c:\glyn\JA.bak' is incorrectly formed. SQL Server cannot process this media family.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.

I have tried restoring using 2012, 2008, and even 2005, but nothing works; I have tried both the query below and the Tasks menu in Management Studio, but nothing works. Here's my SQL:

RESTORE DATABASE JA FROM DISK='c:\glyn\JA.bak'  

Does anyone have any suggestions? I've read that the .bak could be corrupt; it was sent to me on DVD by another developer who is working on the project.
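In case it helps with diagnosis, these standard checks read only the backup header and file list, so they should show whether SQL Server can parse the file at all:

-- Quick sanity checks against the same file.
RESTORE VERIFYONLY   FROM DISK = 'c:\glyn\JA.bak';
RESTORE HEADERONLY   FROM DISK = 'c:\glyn\JA.bak';
RESTORE FILELISTONLY FROM DISK = 'c:\glyn\JA.bak';

If even HEADERONLY fails, that would point to a damaged file (or one written as part of a multi-device media set) rather than a version problem.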

Does avg_query for PgBouncer include updates and inserts?

Posted: 21 Mar 2013 01:23 PM PDT

I'm trying to tune our application and would like to know whether the stats value that PgBouncer reports as avg_query, described in the PgBouncer usage docs as:

  avg_query      Average query duration in microseconds.  

includes the average duration for INSERTs and UPDATEs as well as SELECTs.

Thanks very much!

Upgrade SQL Server 2012 Express to Evaluation Edition

Posted: 21 Mar 2013 05:41 PM PDT

Is there a way to upgrade an instance of SQL Server 2012 Express to SQL Server Evaluation Enterprise?

I tried using the SQL Server Installation Center, clicking Maintenance and then Edition Upgrade. I selected upgrading the Express instance to Evaluation, but I get an error on the Edition Upgrade Rules step, which says "The selected SQL Server instance does not meet upgrade matrix requirements." I also checked the Supported Version and Upgrade Paths page at http://msdn.microsoft.com/en-us/library/ms143393.aspx; Express 2012 to Enterprise 2012 is supported, so it would make sense that Express 2012 to Evaluation Enterprise 2012 should be supported too, especially since both editions are free and Express has fewer features than Enterprise. Am I doing something wrong? Is there a better way to perform this upgrade? Thanks.
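In case the GUI wizard is the problem, the same edition upgrade can be attempted from the setup command line; a hedged sketch, where the instance name and key are placeholders of mine:

REM run from the SQL Server 2012 installation media
setup.exe /q /ACTION=EditionUpgrade /INSTANCENAME=SQLEXPRESS /PID=<product key> /IACCEPTSQLSERVERLICENSETERMS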

SAP Business Warehouse 0APO_LOCNO_ATTR datasource extraction

Posted: 21 Mar 2013 12:59 PM PDT

We are currently importing 0APO_LOCNO_ATTR from several different source systems. I want to be able to import the field PRECISID into BW from this datasource. Here is what I have discovered/tried so far.

  • Using RSA2, PRECISID is in the extract structure, but it is not marked as transferable. I have tried looking up the datasource in RSA6 to unhide this field, but it is not present in RSA6.

  • When I run RSA3 on this datasource, PRECISID is present and populated. However, when I display the fields of the datasource in RSA1, it is not present. I have tried replicating metadata, but that hasn't done anything. I was under the impression that if a field is present in RSA3, it should be present in the fields of the PSA.

RESOURCE_SEMAPHORE_QUERY_COMPILER waits in Microsoft SQL Server 2008 Enterprise

Posted: 21 Mar 2013 01:58 PM PDT

Could fragmented indexes on Microsoft SQL Server 2008 Enterprise cause RESOURCE_SEMAPHORE_QUERY_COMPILER waits?

The last two weeks one of our applications has had downtime due to RESOURCE_SEMAPHORE_QUERY_COMPILER waits. We've "fixed" them by rebuilding indexes; it was just a shot in the dark.

The application vendor has "dug in" and "found the root cause" which they claim is index fragmentation and their recommendation was to implement the workaround we'd already put in place: rebuild indexes weekly (was in place at time of issue) and reorganize nightly (new).

My concern is that the application vendor just took our work and claimed it as their own. When we originally stumbled across the fix, we considered it a temporary workaround unlikely to cause issues (it's hard to object to rebuilding/reorganizing indexes). This issue started just after our largest user's busy time, so usage and index fragmentation were down. We had just purged a significant amount of data, so fragmentation percentages climb more easily (fragmentation peaked at about 22% while we were having the semaphore issue), but the percentages were still significantly down.

My other concern is that we're just putting a band-aid on the situation and that it's just a matter of time before we have issues again. If the query were taking a long time to run, I could understand, but it seems odd to me that we'd be having memory issues while compiling the query.

Random bits of information:

  • Two application servers with about 130 concurrent users
  • Dedicated database server
    • Microsoft SQL Server 2008 Enterprise
    • 2 x Xeon E5-2650 @ 2.3 GHz
    • 64 GB of RAM (I don't know the details)
    • Gigabit Ethernet x 3
  • The datafile for the database is 199GB
  • According to sp_spaceused
    • database_size is 37831736 KB
    • index_size is 54547920 KB
  • The log and data files are on dedicated drives; both drives are on an iSCSI SAN and are RAID 10
  • There is no partitioning in the database

I apologize if I left off something important; let me know and I'll try to add it.
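For anyone who wants to check my reading of the waits, this is roughly how we've been watching them (standard DMVs, nothing vendor-specific):

-- Cumulative waits on the compile semaphore since the last restart.
SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'RESOURCE_SEMAPHORE_QUERY_COMPILE';

-- Sessions stuck on it right now, with their statements.
SELECT r.session_id, r.wait_type, r.wait_time, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.wait_type = 'RESOURCE_SEMAPHORE_QUERY_COMPILE';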

Best solution to fixing database design with GUID as primary key

Posted: 21 Mar 2013 02:05 PM PDT

I am after some confirmation of this idea to fix a badly performing database or a better suggestion if any one has one. Always open to better suggestions.

I have a very large database (20+ million records, growing by about half a million per day) that uses a GUID as the PK.

An oversight on my part, but the PK is clustered on SQL Server and it is causing performance issues.

The reason for a GUID: this database is partially synchronised with 150 other databases, so the PK needed to be unique. The synchronisation is not managed by SQL Server; rather, there is a custom-built process which keeps the data in sync for the requirements of the system, all based on that GUID.

Each of the 150 remote databases doesn't store the full data held in the central SQL database. They only store a subset of the data they actually require, and the data they require is not unique to them (10 out of the 150 databases may share some of the same records from other sites' databases, for example). Also, the data is actually generated at the remote sites, not at the central point, hence the need for the GUIDs.

The central database is used not only for keeping everything in sync; queries from 3000+ users will also be executed against that very large, fragmented database. This is already a big problem in initial testing.

Fortunately we are not live yet, so I can make changes and take things offline if required, which is at least something.

The performance of the remote databases is not a problem: the data subsets are pretty small and each database usually never gets above 1 GB in total. The records are fed back to the main system quite regularly and removed from the smaller DBs when no longer required.

The performance of the central DB, which is the keeper of all records, is woeful, due to a clustered GUID as a primary key over that many records. The index fragmentation is off the charts.

So my thought to fix the performance issue is to create a new column, an unsigned BIGINT IDENTITY(1,1), and then change the clustered PK of the table to that BIGINT column.

I would create a unique nonclustered index on the GUID field which was the primary key.

The 150 smaller remote databases don't need to know about the new PK on the central SQL Server database; it's purely going to be used for organising the data in the database and stopping the bad performance and fragmentation, as in the sketch below.
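A minimal sketch of the change I have in mind (object names are hypothetical, and since SQL Server has no unsigned BIGINT it would be a plain BIGINT):

-- New narrow, ever-increasing surrogate key.
ALTER TABLE dbo.BigTable ADD RowId BIGINT IDENTITY(1,1) NOT NULL;

-- Drop the existing clustered PK on the GUID...
ALTER TABLE dbo.BigTable DROP CONSTRAINT PK_BigTable;

-- ...recluster on the new column...
ALTER TABLE dbo.BigTable ADD CONSTRAINT PK_BigTable_RowId PRIMARY KEY CLUSTERED (RowId);

-- ...and keep the GUID unique for the sync process.
CREATE UNIQUE NONCLUSTERED INDEX UX_BigTable_Guid ON dbo.BigTable (GuidId);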

Would this work and improve the performance of the central SQL database and prevent future index fragmentation hell (to a degree, of course)? Or have I missed something very important here which is going to jump up and bite me and cause even more grief?

MySQL Permissions

Posted: 21 Mar 2013 06:26 PM PDT

A web application that we're working on creates a new database for each user that signs up.

With this, I'm setting up a database user for each of these new databases and wonder what the best permissions are to set for these new users?

Currently I'm using the following permissions

GRANT SELECT, INSERT, UPDATE, DELETE ON database.* TO xxx@'%' IDENTIFIED BY 'xxx_pass';  

As this is a client-facing portal, I don't want to give away too many permissions, because that's just asking for hack attempts, but I'm not sure whether there are other permissions that might need to be granted.

I can't be too explicit on what this application will do, but it's not too heavy - it is mostly data driven, however, so it will be sending data to/receiving data from the database.
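For concreteness, the widest grant I have considered so far is below; the two extra privileges are candidates I am unsure about, not a recommendation:

GRANT SELECT, INSERT, UPDATE, DELETE,
      CREATE TEMPORARY TABLES, LOCK TABLES
ON database.* TO xxx@'%' IDENTIFIED BY 'xxx_pass';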

Apologies if this is on the wrong site - I'm not entirely sure where it belongs otherwise.

What is the real-world purpose of an optional 1:1 relationship?

Posted: 21 Mar 2013 11:10 AM PDT

I know that one-to-one relationships can be used to split data into multiple tables for performance or security, and that they are used to create is-a relationships.

But aside from those things, what is the real-world purpose of an optional one-to-one relationship, especially one where both sides are optional?
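To make the question concrete, here is a minimal sketch of the shape I mean (hypothetical tables): a person may have at most one parking space, a parking space may be assigned to at most one person, and both sides are optional:

CREATE TABLE person (
    person_id INT PRIMARY KEY
);

CREATE TABLE parking_space (
    space_id  INT PRIMARY KEY,
    -- NULL allowed (optional on this side); UNIQUE enforces at most one space per person.
    person_id INT NULL UNIQUE REFERENCES person (person_id)
);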

Can I replace NULLs in a PIVOT with zeroes?

Posted: 21 Mar 2013 11:35 AM PDT

I am using the PIVOT function in Oracle and am curious whether I can replace the NULL values with zeroes. I know I can wrap the entire query in another SELECT and then use COALESCE on the values, but I am curious if there is a shortcut.
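For context, the wrapper I would like to shortcut looks roughly like this (table and columns are hypothetical):

SELECT order_mode,
       COALESCE(y2012, 0) AS y2012,
       COALESCE(y2013, 0) AS y2013
FROM (SELECT order_mode, EXTRACT(YEAR FROM order_date) AS yr, order_total
      FROM orders)
PIVOT (SUM(order_total) FOR yr IN (2012 AS y2012, 2013 AS y2013));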

Does MySQL Replication hamper the performance of my DB?

Posted: 21 Mar 2013 07:20 PM PDT

I am not exactly a qualified DBA, but I am in charge of my own DB, which we use in our social app.

I have recently implemented master/slave replication on my database, for obvious reasons.

What I wish to know is whether MySQL replication kills the performance of my DB, since every user write on my master DB creates an additional write on the slave DB (I might be wrong here).

There is no lag between my slave and master, so replication is almost instant.

DB2 UPDATE TRIGGER

Posted: 21 Mar 2013 03:23 PM PDT

I am trying to create a trigger in a DB2 database that runs on the update of a column in one table, and then fills in another table with certain values.

For example, there is a power unit table with a FLEET_ID column. Every time the FLEET_ID is changed, I need to create a new row in the TRANS_AUDIT table.

The TRANS_AUDIT schema is as follows:

CREATE TABLE LYNX.TRANS_AUDIT (
  TA_ID INTEGER NOT NULL,
  TA_KEY_VALUE VARCHAR(100),
  TA_TABLE_CHANGED VARCHAR(40),
  TA_FIELD_CHANGED VARCHAR(40),
  TA_OLD_FIELD_VALUE VARCHAR(100),
  TA_NEW_FIELD_VALUE VARCHAR(100),
  TA_USER_WHO_CHANGED VARCHAR(128),
  TA_DATE_CHANGED TIMESTAMP,
  TA_COMMENT VARCHAR(40),
  TA_OLD_FIELD_DOUBLE DOUBLE DEFAULT 0,
  TA_NEW_FIELD_DOUBLE DOUBLE DEFAULT 0,
  PRIMARY KEY (TA_ID)
);

Here is what I have so far, but I can't seem to get it to work; I am getting a "function sequence" error.

CREATE TRIGGER PU_UPD_FLEETID
AFTER UPDATE OF FLEET_ID ON PUNIT
REFERENCING OLD AS O NEW AS N
FOR EACH ROW
MODE DB2SQL
BEGIN ATOMIC
  DECLARE vTA_ID INTEGER;
  IF (N.FLEET_ID <> O.FLEET_ID) THEN
    SELECT MAX(TA_ID)+1 INTO vTA_ID; --generate a unique sequential id
    INSERT INTO LYNX.TRANS_AUDIT
      (TA_ID, TA_KEY_VALUE, TA_TABLE_CHANGED, TA_FIELD_CHANGED,
       TA_OLD_FIELD_VALUE, TA_NEW_FIELD_VALUE, TA_USER_WHO_CHANGED,
       TA_DATE_CHANGED, TA_COMMENT, TA_OLD_FIELD_DOUBLE, TA_NEW_FIELD_DOUBLE)
    VALUES
      (TA_ID, N.UNIT_ID, 'PUNIT', 'FLEET_ID', O.FLEET_ID, N.FLEET_ID,
       SESSION_USER, CURRENT TIMESTAMP, '', '0', '0');
  END IF;

END;
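For comparison, a hedged sketch of what I believe a working version would look like: the SELECT ... INTO above is missing its FROM clause, and the VALUES list uses the column name TA_ID where the variable was intended. Folding the lookup into an INSERT ... SELECT sidesteps the variable entirely (MAX+1 is still race-prone; a SEQUENCE would be safer):

CREATE TRIGGER PU_UPD_FLEETID
AFTER UPDATE OF FLEET_ID ON PUNIT
REFERENCING OLD AS O NEW AS N
FOR EACH ROW MODE DB2SQL
WHEN (N.FLEET_ID <> O.FLEET_ID)
  INSERT INTO LYNX.TRANS_AUDIT
    (TA_ID, TA_KEY_VALUE, TA_TABLE_CHANGED, TA_FIELD_CHANGED,
     TA_OLD_FIELD_VALUE, TA_NEW_FIELD_VALUE, TA_USER_WHO_CHANGED,
     TA_DATE_CHANGED, TA_COMMENT, TA_OLD_FIELD_DOUBLE, TA_NEW_FIELD_DOUBLE)
  SELECT COALESCE(MAX(TA_ID), 0) + 1, N.UNIT_ID, 'PUNIT', 'FLEET_ID',
         O.FLEET_ID, N.FLEET_ID, SESSION_USER, CURRENT TIMESTAMP, '', 0, 0
  FROM LYNX.TRANS_AUDIT;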

Using Mirroring for incremental backups?

Posted: 21 Mar 2013 10:47 AM PDT

We have a machine tool application that runs on a SQL Server backend. These tools run all day, every day, and I am looking for a way to incrementally back up the database without any machine downtime.

I've seen mirroring solutions with two servers, but these solutions would not work for us due to cost and size constraints. Is it possible, programmatically or through some service, to mirror this database to a backup hard drive on the main PC?

Some data loss is acceptable.
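If the edition allows it, plain native backups might already cover this, since they run online with no downtime; a minimal sketch with hypothetical database name and paths:

-- Occasional full backup to the second drive...
BACKUP DATABASE MachineToolDB TO DISK = 'E:\Backups\MachineToolDB_full.bak';

-- ...plus frequent differential backups, which only capture changes since the full.
BACKUP DATABASE MachineToolDB TO DISK = 'E:\Backups\MachineToolDB_diff.bak'
    WITH DIFFERENTIAL;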

Will table inheritance in PostgreSql suit this situation?

Posted: 21 Mar 2013 05:56 PM PDT

Let's start off by saying I'm new to PostgreSQL and coming from an MSSQL background. I'm starting to design a DB in PostgreSQL.

My main aim is to have a main schema in this DB, which I am going to use to control certain services, with multiple schemas linked to it holding the different clients' data.

I have read up on table inheritance, but have never come across anyone using inheritance with an effect similar to that of DB migrations. What I want to achieve is basically a default schema, with the clients' schemas all inheriting from it. To me this gives control over creating/dropping/altering objects in all schemas by altering a single schema; see the sketch below.

Could this be viable? Has anyone else had any experience doing something similar? Does anyone have any suggestions?
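A minimal sketch of what I'm imagining (schema and table names are hypothetical):

-- The default schema defines the shape once...
CREATE SCHEMA defaults;
CREATE TABLE defaults.customer (
    id   integer,
    name text
);

-- ...and each client schema inherits it.
CREATE SCHEMA client_a;
CREATE TABLE client_a.customer () INHERITS (defaults.customer);

-- Adding a column to the parent propagates to every child:
ALTER TABLE defaults.customer ADD COLUMN created_at timestamptz;

My understanding is that only columns and CHECK constraints propagate to children; indexes, unique constraints, and foreign keys would still need per-schema handling.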

Create database on new partition

Posted: 21 Mar 2013 01:48 PM PDT

I use PostgreSQL 9.1 on Ubuntu 12.04, installed from the repository package.

I have added a new partition to my system, and I would like to create a PostgreSQL database on this partition (which will be entirely dedicated to that database).

My pgdata is located in /var/lib/postgresql/9.1/main. I plan to stop the postgresql service, copy the pgdata content to the new partition, make a symbolic link to the new partition, chown the new directory to the postgres user, and restart postgresql... but I'm afraid this all looks more like a hack.

Is there a way to create a database specifically on a given partition? Something more "canonical" (no play on words with Ubuntu intended).
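From what I've read, the built-in mechanism for this may be a tablespace rather than moving the whole data directory; a hedged sketch (mount point is hypothetical):

-- The directory must exist and be owned by the postgres OS user.
CREATE TABLESPACE bigdisk LOCATION '/mnt/newpartition/pgdata';

-- Create the database directly on the new partition.
CREATE DATABASE mydb TABLESPACE bigdisk;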

Create schema that allows for fluidity between entities

Posted: 21 Mar 2013 10:56 AM PDT

We are a small, rapidly evolving company, and we recently investigated switching databases from SQL Server to a graph database. We like the graph approach because of the fluidity and openness it offers.

For instance, with a graph DB you can add or remove new relationships quite easily, whereas in an RDBMS that is much more difficult.

Is it possible to have some or all of the fluidity in a RDBMS like SQL Server?
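One common way to approximate that fluidity relationally, as I understand it, is a generic node/edge pair of tables; a minimal sketch:

-- Relationships become rows, so new relationship types need no schema change.
CREATE TABLE node (
    node_id   INT PRIMARY KEY,
    node_type VARCHAR(50) NOT NULL
);

CREATE TABLE edge (
    from_node INT NOT NULL REFERENCES node (node_id),
    to_node   INT NOT NULL REFERENCES node (node_id),
    edge_type VARCHAR(50) NOT NULL,
    PRIMARY KEY (from_node, to_node, edge_type)
);

The trade-off is weaker typing and harder constraint enforcement than dedicated tables.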

What are the differences between leaf and non-leaf pages?

Posted: 21 Mar 2013 04:45 PM PDT

I've been running some index usage reports, and I'm trying to get a definition of leaf and non-leaf. There seem to be both leaf and non-leaf inserts, updates, deletes, page merges, and page allocations. I really don't know what they mean, or whether one is better than the other.

If someone could give a simple definition of each, and also explain why Leaf or Non-leaf matters, it would be appreciated!
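If it helps frame answers: in SQL Server the level of each page is visible per index, where index_level 0 is the leaf and higher values are the non-leaf levels; a quick sketch (table name hypothetical):

SELECT index_id, index_level, page_count
FROM sys.dm_db_index_physical_stats(
    DB_ID(), OBJECT_ID('dbo.SomeTable'), NULL, NULL, 'DETAILED');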

Partition Fact Table

Posted: 21 Mar 2013 04:01 PM PDT

One of my fact tables has close to 25 million records in it, and when the Cognos team tries to run reports on top of it, it takes a lot of time. So I am thinking of partitioning the table. We store records in the fact table on a daily basis, and we have the column id calaendarday dma in the fact table with data type int. Is there a way I can partition this fact table based on id calaendarday dma?

Please help with the above, as it is impacting the Cognos reports.
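Assuming this is SQL Server (my inference; the question doesn't say), a minimal sketch of partitioning on an int day key would look like the following, with all names and boundary values hypothetical; note that table partitioning requires Enterprise edition:

CREATE PARTITION FUNCTION pf_day (int)
    AS RANGE RIGHT FOR VALUES (20130101, 20130201, 20130301);

CREATE PARTITION SCHEME ps_day
    AS PARTITION pf_day ALL TO ([PRIMARY]);

-- Rebuild the fact table's clustered index on the scheme, keyed by the day column.
CREATE CLUSTERED INDEX cix_fact_day
    ON dbo.FactTable (id_calendarday_dma)
    ON ps_day (id_calendarday_dma);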

Does MySQL have problems with nested insert like with subqueries in where?

Posted: 21 Mar 2013 08:10 PM PDT

Query 1:

INSERT `personal`.`locations`
  SELECT DISTINCT `s`.*
  FROM `references` `t`
  JOIN `locations` `s` ON `first_id` = `s`.`id`
  WHERE `lat` >= 37.3
    AND `lat` <= 37.3
    AND `lng` >= -122.2
    AND `lng` <= -122.1
ON DUPLICATE KEY UPDATE
  `lat`  = `s`.`lat`,
  `lng`  = `s`.`lng`,
  `name` = `s`.`name`,
  `desr` = `s`.`desr`;

Query 2:

INSERT `personal`.`locations`
  SELECT DISTINCT `s`.*
  FROM `references` `t`
  JOIN `locations` `s` ON (`first_id` = `s`.`id` OR `second_id` = `s`.`id`)
  WHERE `lat` >= 37.3
    AND `lat` <= 37.3
    AND `lng` >= -122.2
    AND `lng` <= -122.1
ON DUPLICATE KEY UPDATE
  `lat`  = `s`.`lat`,
  `lng`  = `s`.`lng`,
  `name` = `s`.`name`,
  `desr` = `s`.`desr`;

The select in query 1 takes 0.008 seconds to select 4 million records.

The select in query 2 takes 1 second to select 300 thousand records.

Query 1 executes completely in 60 seconds.

Query 2 executes completely in 300 seconds.

The conclusion: MySQL seems to repeat the select for every insert, just like with WHERE-clause subqueries.

Is there a solution?


Edit 1: Added new query

Query 2: faster alternative but still with the same issue

INSERT `personal`.`locations`
  SELECT DISTINCT `s`.*
  FROM `references` `t`
  JOIN `locations` `s` ON `first_id` = `s`.`id`
  WHERE `lat` >= 37.3
    AND `lat` <= 37.3
    AND `lng` >= -122.2
    AND `lng` <= -122.1
  UNION ALL
  SELECT DISTINCT `s`.*
  FROM `references` `t`
  JOIN `locations` `s` ON `second_id` = `s`.`id`
  WHERE `lat` >= 37.3
    AND `lat` <= 37.3
    AND `lng` >= -122.2
    AND `lng` <= -122.1
ON DUPLICATE KEY UPDATE
  `lat`  = `s`.`lat`,
  `lng`  = `s`.`lng`,
  `name` = `s`.`name`,
  `desr` = `s`.`desr`;

Slightly faster, despite the fact that it actually executes more updates; since the two selects execute faster than the one they replace, it gains a bit of time.


Edit 2: Added table structure

CREATE TABLE IF NOT EXISTS `references` (
  `id`        bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `first_id`  bigint(20) unsigned NOT NULL DEFAULT '0',
  `second_id` bigint(20) unsigned NOT NULL DEFAULT '0',
  `name`      varchar(255) NOT NULL,
  `status`    enum('V','I','D') NOT NULL DEFAULT 'V',
  PRIMARY KEY (`id`),
  KEY `first_id`  (`first_id`),
  KEY `second_id` (`second_id`),
  KEY `status`    (`status`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

CREATE TABLE IF NOT EXISTS `locations` (
  `id`        bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `lat`       double NOT NULL DEFAULT '0',
  `lng`       double NOT NULL DEFAULT '0',
  `name`      varchar(40) NOT NULL DEFAULT '0',
  `desr`      varchar(254) NOT NULL DEFAULT '0',
  `status`    enum('V','I','D') NOT NULL DEFAULT 'V',
  PRIMARY KEY (`id`),
  KEY `lat`    (`lat`),
  KEY `lng`    (`lng`),
  KEY `status` (`status`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

Each reference has to have at least one location associated with it, based on first_id. The second is optional, but it exists for 70%-80% of the records. Each location can be associated with multiple references.
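One workaround I'm considering (a hedged sketch, untested): materialize the SELECT once into a temporary table so the insert no longer re-evaluates it per row; the bounds are copied from the original query:

CREATE TEMPORARY TABLE tmp_locations AS
  SELECT DISTINCT `s`.*
  FROM `references` `t`
  JOIN `locations` `s` ON (`first_id` = `s`.`id` OR `second_id` = `s`.`id`)
  WHERE `lat` >= 37.3 AND `lat` <= 37.3
    AND `lng` >= -122.2 AND `lng` <= -122.1;

INSERT INTO `personal`.`locations`
  SELECT * FROM tmp_locations
ON DUPLICATE KEY UPDATE
  `lat`  = tmp_locations.`lat`,
  `lng`  = tmp_locations.`lng`,
  `name` = tmp_locations.`name`,
  `desr` = tmp_locations.`desr`;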

MySQL Stalls and stops all queries

Posted: 21 Mar 2013 11:46 AM PDT

We occasionally have our servers stall and back up queries for 3-5 minutes, then return to normal, but this locks up all users.

We built a new server with the binlogs and log files on a separate disk from the data, made the data volume RAID 10, and made the config changes suggested by Oracle. The problem is not as frequent, but it still happens.

We have a query that runs once per minute for each user (about 150 usually). These run all day long very quickly, but they back up quickly when this stall happens. We have analyzed our slow query log, added indexes, and fixed some slow queries, and we see nothing more causing this. We can now go days without it, but then, bam, it is back.


SHOW GLOBAL STATUS

Variable_name   Value  Aborted_clients 4407  Aborted_connects    1551  Binlog_cache_disk_use   184072  Binlog_cache_use    661894  Binlog_stmt_cache_disk_use  0  Binlog_stmt_cache_use   0  Bytes_received  19063442840  Bytes_sent  132853133182  Com_admin_commands  6634187  Com_assign_to_keycache  0  Com_alter_db    0  Com_alter_db_upgrade    0  Com_alter_event 0  Com_alter_function  0  Com_alter_procedure 0  Com_alter_server    0  Com_alter_table 8  Com_alter_tablespace    0  Com_analyze 0  Com_begin   704470  Com_binlog  0  Com_call_procedure  6722493  Com_change_db   6658277  Com_change_master   0  Com_check   0  Com_checksum    0  Com_commit  1430211  Com_create_db   0  Com_create_event    0  Com_create_function 0  Com_create_index    62  Com_create_procedure    2882  Com_create_server   0  Com_create_table    2  Com_create_trigger  1  Com_create_udf  0  Com_create_user 0  Com_create_view 0  Com_dealloc_sql 6112846  Com_delete  11113  Com_delete_multi    0  Com_do  0  Com_drop_db 0  Com_drop_event  0  Com_drop_function   0  Com_drop_index  9  Com_drop_procedure  2882  Com_drop_server 0  Com_drop_table  0  Com_drop_trigger    1  Com_drop_user   0  Com_drop_view   0  Com_empty_query 0  Com_execute_sql 6112973  Com_flush   0  Com_grant   1  Com_ha_close    0  Com_ha_open 0  Com_ha_read 0  Com_help    0  Com_insert  89369  Com_insert_select   1  Com_install_plugin  0  Com_kill    521  Com_load    0  Com_lock_tables 0  Com_optimize    0  Com_preload_keys    0  Com_prepare_sql 6112973  Com_purge   0  Com_purge_before_date   0  Com_release_savepoint   0  Com_rename_table    0  Com_rename_user 0  Com_repair  0  Com_replace 0  Com_replace_select  0  Com_reset   0  Com_resignal    0  Com_revoke  0  Com_revoke_all  0  Com_rollback    218  Com_rollback_to_savepoint   0  Com_savepoint   0  Com_select  43871825  Com_set_option  8940769  Com_signal  0  Com_show_authors    0  Com_show_binlog_events  0  Com_show_binlogs    0  Com_show_charsets   1145  Com_show_collations 24150  Com_show_contributors   0  Com_show_create_db  0  Com_show_create_event   0  Com_show_create_func    0  Com_show_create_proc    6170775  Com_show_create_table   0  Com_show_create_trigger 0  Com_show_databases  8  Com_show_engine_logs    0  Com_show_engine_mutex   174  Com_show_engine_status  174  Com_show_events 0  Com_show_errors 0  Com_show_fields 253582  Com_show_function_status    1  Com_show_grants 1  Com_show_keys   4359  Com_show_master_status  0  Com_show_open_tables    174  Com_show_plugins    88  Com_show_privileges 0  Com_show_procedure_status   1  Com_show_processlist    16374  Com_show_profile    0  Com_show_profiles   0  Com_show_relaylog_events    0  Com_show_slave_hosts    0  Com_show_slave_status   0  Com_show_status 23158  Com_show_storage_engines    0  Com_show_table_status   4356  Com_show_tables 85  Com_show_triggers   0  Com_show_variables  25102  Com_show_warnings   69  Com_slave_start 0  Com_slave_stop  0  Com_stmt_close  6112851  Com_stmt_execute    6112975  Com_stmt_fetch  0  Com_stmt_prepare    6112975  Com_stmt_reprepare  0  Com_stmt_reset  0  Com_stmt_send_long_data 0  Com_truncate    3  Com_uninstall_plugin    0  Com_unlock_tables   0  Com_update  665414  Com_update_multi    301  Com_xa_commit   0  Com_xa_end  0  Com_xa_prepare  0  Com_xa_recover  0  Com_xa_rollback 0  Com_xa_start    0  Compression OFF  Connections 28988  Created_tmp_disk_tables 997343  Created_tmp_files   3428  Created_tmp_tables  1103738  Delayed_errors  0  Delayed_insert_threads  0  Delayed_writes  0  Flush_commands  1  
Handler_commit  40551793  Handler_delete  21962  Handler_discover    0  Handler_prepare 1183972  Handler_read_first  2803361  Handler_read_key    1817223370  Handler_read_last   299  Handler_read_next   2192159478  Handler_read_prev   2918171  Handler_read_rnd    4979417  Handler_read_rnd_next   1025367957  Handler_rollback    3129  Handler_savepoint   0  Handler_savepoint_rollback  0  Handler_update  480507  Handler_write   69142091  Innodb_buffer_pool_pages_data   424551  Innodb_buffer_pool_pages_dirty  13  Innodb_buffer_pool_pages_flushed    1382087  Innodb_buffer_pool_pages_free   2185371  Innodb_buffer_pool_pages_misc   11518  Innodb_buffer_pool_pages_total  2621440  Innodb_buffer_pool_read_ahead_rnd   0  Innodb_buffer_pool_read_ahead   57049  Innodb_buffer_pool_read_ahead_evicted   0  Innodb_buffer_pool_read_requests    1032232686  Innodb_buffer_pool_reads    353952  Innodb_buffer_pool_wait_free    0  Innodb_buffer_pool_write_requests   4456721  Innodb_data_fsyncs  1218430  Innodb_data_pending_fsyncs  0  Innodb_data_pending_reads   0  Innodb_data_pending_writes  0  Innodb_data_read    2617249792  Innodb_data_reads   434696  Innodb_data_writes  2030236  Innodb_data_written 1855358976  Innodb_dblwr_pages_written  0  Innodb_dblwr_writes 0  Innodb_have_atomic_builtins ON  Innodb_log_waits    0  Innodb_log_write_requests   978061  Innodb_log_writes   624738  Innodb_os_log_fsyncs    647788  Innodb_os_log_pending_fsyncs    0  Innodb_os_log_pending_writes    0  Innodb_os_log_written   674286592  Innodb_page_size    16384  Innodb_pages_created    4042  Innodb_pages_read   421754  Innodb_pages_written    1382087  Innodb_row_lock_current_waits   0  Innodb_row_lock_time    265740  Innodb_row_lock_time_avg    20441  Innodb_row_lock_time_max    34256  Innodb_row_lock_waits   13  Innodb_rows_deleted 20577  Innodb_rows_inserted    486519  Innodb_rows_read    552198569  Innodb_rows_updated 252163  Innodb_truncated_status_writes  0  Key_blocks_not_flushed  0  Key_blocks_unused   201000  Key_blocks_used 13343  Key_read_requests   182851753  Key_reads   13242  Key_write_requests  8998  Key_writes  8990  Last_query_cost 0.000000  Max_used_connections    606  Not_flushed_delayed_rows    0  Open_files  177  Open_streams    0  Open_table_definitions  10000  Open_tables 10000  Opened_files    4008968  Opened_table_definitions    15849  Opened_tables   18859  Performance_schema_cond_classes_lost    0  Performance_schema_cond_instances_lost  0  Performance_schema_file_classes_lost    0  Performance_schema_file_handles_lost    0  Performance_schema_file_instances_lost  0  Performance_schema_locker_lost  0  Performance_schema_mutex_classes_lost   0  Performance_schema_mutex_instances_lost 0  Performance_schema_rwlock_classes_lost  0  Performance_schema_rwlock_instances_lost    0  Performance_schema_table_handles_lost   0  Performance_schema_table_instances_lost 0  Performance_schema_thread_classes_lost  0  Performance_schema_thread_instances_lost    0  Prepared_stmt_count 25  Qcache_free_blocks  0  Qcache_free_memory  0  Qcache_hits 0  Qcache_inserts  0  Qcache_lowmem_prunes    0  Qcache_not_cached   0  Qcache_queries_in_cache 0  Qcache_total_blocks 0  Queries 89665502  Questions   62813329  Rpl_status  AUTH_MASTER  Select_full_join    10321  Select_full_range_join  2428  Select_range    7770260  Select_range_check  0  Select_scan 2796956  Slave_heartbeat_period  0.000  Slave_open_temp_tables  0  Slave_received_heartbeats   0  Slave_retried_transactions  0  Slave_running   OFF  Slow_launch_threads 0  
Slow_queries    2382879  Sort_merge_passes   23  Sort_range  153943  Sort_rows   223361444  Sort_scan   255572  Ssl_accept_renegotiates 0  Ssl_accepts 0  Ssl_callback_cache_hits 0  Ssl_cipher    Ssl_cipher_list   Ssl_client_connects 0  Ssl_connect_renegotiates    0  Ssl_ctx_verify_depth    0  Ssl_ctx_verify_mode 0  Ssl_default_timeout 0  Ssl_finished_accepts    0  Ssl_finished_connects   0  Ssl_session_cache_hits  0  Ssl_session_cache_misses    0  Ssl_session_cache_mode  NONE  Ssl_session_cache_overflows 0  Ssl_session_cache_size  0  Ssl_session_cache_timeouts  0  Ssl_sessions_reused 0  Ssl_used_session_cache_entries  0  Ssl_verify_depth    0  Ssl_verify_mode 0  Ssl_version   Table_locks_immediate   47028873  Table_locks_waited  214  Tc_log_max_pages_used   0  Tc_log_page_size    0  Tc_log_page_waits   0  Threads_cached  261  Threads_connected   345  Threads_created 606  Threads_running 34  Uptime  309942  Uptime_since_flush_status   309942  

SHOW ENGINE INNODB STATUS

*************************** 1. row ***************************    Type: InnoDB    Name:   Status:   =====================================  121219 17:19:36 INNODB MONITOR OUTPUT  =====================================  Per second averages calculated from the last 16 seconds  

BACKGROUND THREAD

srv_master_thread loops: 239642 1_second, 239639 sleeps, 23598 10_second, 3673 background, 3673 flush  srv_master_thread log flush and writes: 240469  

SEMAPHORES

OS WAIT ARRAY INFO: reservation count 78879, signal count 3065681  Mutex spin waits 702797, rounds 2282568, OS waits 49863  RW-shared spins 842193, rounds 2862837, OS waits 26976  RW-excl spins 434256, rounds 3777756, OS waits 1580  Spin rounds per wait: 3.25 mutex, 3.40 RW-shared, 8.70 RW-excl  

FILE I/O

I/O thread 0 state: wait Windows aio (insert buffer thread)  I/O thread 1 state: wait Windows aio (log thread)  I/O thread 2 state: wait Windows aio (read thread)  I/O thread 3 state: wait Windows aio (read thread)  I/O thread 4 state: wait Windows aio (read thread)  I/O thread 5 state: wait Windows aio (read thread)  I/O thread 6 state: wait Windows aio (read thread)  I/O thread 7 state: wait Windows aio (read thread)  I/O thread 8 state: wait Windows aio (read thread)  I/O thread 9 state: wait Windows aio (read thread)  I/O thread 10 state: wait Windows aio (read thread)  I/O thread 11 state: wait Windows aio (read thread)  I/O thread 12 state: wait Windows aio (read thread)  I/O thread 13 state: wait Windows aio (read thread)  I/O thread 14 state: wait Windows aio (read thread)  I/O thread 15 state: wait Windows aio (read thread)  I/O thread 16 state: wait Windows aio (read thread)  I/O thread 17 state: wait Windows aio (read thread)  I/O thread 18 state: wait Windows aio (write thread)  I/O thread 19 state: wait Windows aio (write thread)  I/O thread 20 state: wait Windows aio (write thread)  I/O thread 21 state: wait Windows aio (write thread)  I/O thread 22 state: wait Windows aio (write thread)  I/O thread 23 state: wait Windows aio (write thread)  I/O thread 24 state: wait Windows aio (write thread)  I/O thread 25 state: wait Windows aio (write thread)  I/O thread 26 state: wait Windows aio (write thread)  I/O thread 27 state: wait Windows aio (write thread)  I/O thread 28 state: wait Windows aio (write thread)  I/O thread 29 state: wait Windows aio (write thread)  I/O thread 30 state: wait Windows aio (write thread)  I/O thread 31 state: wait Windows aio (write thread)  I/O thread 32 state: wait Windows aio (write thread)  I/O thread 33 state: wait Windows aio (write thread)  Pending normal aio reads: 0 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] ,   ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0  Pending flushes (fsync) log: 0; buffer pool: 0  434696 OS file reads, 2030236 OS file writes, 1218430 OS fsyncs  0.00 reads/s, 0 avg bytes/read, 19.19 writes/s, 10.50 fsyncs/s  

INSERT BUFFER AND ADAPTIVE HASH INDEX

Ibuf: size 1, free list len 27, seg size 29, 11681 merges  merged operations:   insert 15777, delete mark 12559, delete 16  discarded operations:   insert 0, delete mark 0, delete 0  Hash table size 84999163, node heap has 11512 buffer(s)  26842.82 hash searches/s, 2386.60 non-hash searches/s  

LOG

Log sequence number 15116880820  Log flushed up to   15116880820  Last checkpoint at  15116877234  0 pending log writes, 0 pending chkp writes  647781 log i/o's done, 5.56 log i/o's/second  

BUFFER POOL AND MEMORY

Total memory allocated 43956305920; in additional pool allocated 0  Dictionary memory allocated 181322283  Buffer pool size   2621440  Free buffers       2185371  Database pages     424551  Old database pages 156874  Modified db pages  13  Pending reads 0  Pending writes: LRU 0, flush list 0, single page 0  Pages made young 978, not young 0  0.00 youngs/s, 0.00 non-youngs/s  Pages read 421754, created 4042, written 1382087  0.00 reads/s, 0.06 creates/s, 13.62 writes/s  Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000  Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s  LRU len: 424551, unzip_LRU len: 0  I/O sum[0]:cur[0], unzip sum[0]:cur[0]  

INDIVIDUAL BUFFER POOL INFO

---BUFFER POOL 0  Buffer pool size   327680  Free buffers       275235  Database pages     51011  Old database pages 18850  Modified db pages  2  Pending reads 0  Pending writes: LRU 0, flush list 0, single page 0  Pages made young 125, not young 0  0.00 youngs/s, 0.00 non-youngs/s  Pages read 50633, created 461, written 181321  0.00 reads/s, 0.00 creates/s, 1.69 writes/s  Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000  Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s  LRU len: 51011, unzip_LRU len: 0  I/O sum[0]:cur[0], unzip sum[0]:cur[0]  ---BUFFER POOL 1  Buffer pool size   327680  Free buffers       274121  Database pages     52133  Old database pages 19264  Modified db pages  0  Pending reads 0  Pending writes: LRU 0, flush list 0, single page 0  Pages made young 120, not young 0  0.00 youngs/s, 0.00 non-youngs/s  Pages read 51823, created 442, written 98744  0.00 reads/s, 0.00 creates/s, 0.31 writes/s  Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000  Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s  LRU len: 52133, unzip_LRU len: 0  I/O sum[0]:cur[0], unzip sum[0]:cur[0]  ---BUFFER POOL 2  Buffer pool size   327680  Free buffers       272641  Database pages     53580  Old database pages 19798  Modified db pages  1  Pending reads 0  Pending writes: LRU 0, flush list 0, single page 0  Pages made young 117, not young 0  0.00 youngs/s, 0.00 non-youngs/s  Pages read 53212, created 531, written 156145  0.00 reads/s, 0.00 creates/s, 1.94 writes/s  Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000  Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s  LRU len: 53580, unzip_LRU len: 0  I/O sum[0]:cur[0], unzip sum[0]:cur[0]  ---BUFFER POOL 3  Buffer pool size   327680  Free buffers       272101  Database pages     54139  Old database pages 20004  Modified db pages  1  Pending reads 0  Pending writes: LRU 0, flush list 0, single page 0  Pages made young 109, not young 0  0.00 youngs/s, 0.00 non-youngs/s  Pages read 53880, created 451, written 232510  0.00 reads/s, 0.00 creates/s, 1.94 writes/s  Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000  Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s  LRU len: 54139, unzip_LRU len: 0  I/O sum[0]:cur[0], unzip sum[0]:cur[0]  ---BUFFER POOL 4  Buffer pool size   327680  Free buffers       270671  Database pages     55579  Old database pages 20536  Modified db pages  4  Pending reads 0  Pending writes: LRU 0, flush list 0, single page 0  Pages made young 122, not young 0  0.00 youngs/s, 0.00 non-youngs/s  Pages read 55213, created 534, written 219942  0.00 reads/s, 0.00 creates/s, 2.25 writes/s  Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000  Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s  LRU len: 55579, unzip_LRU len: 0  I/O sum[0]:cur[0], unzip sum[0]:cur[0]  ---BUFFER POOL 5  Buffer pool size   327680  Free buffers       272982  Database pages     53257  Old database pages 19678  Modified db pages  1  Pending reads 0  Pending writes: LRU 0, flush list 0, single page 0  Pages made young 118, not young 0  0.00 youngs/s, 0.00 non-youngs/s  Pages read 52863, created 567, written 186229  0.00 reads/s, 0.06 creates/s, 2.06 writes/s  Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000  Pages read ahead 0.00/s, evicted without access 0.00/s, Random read 
ahead 0.00/s  LRU len: 53257, unzip_LRU len: 0  I/O sum[0]:cur[0], unzip sum[0]:cur[0]  ---BUFFER POOL 6  Buffer pool size   327680  Free buffers       273770  Database pages     52463  Old database pages 19386  Modified db pages  3  Pending reads 0  Pending writes: LRU 0, flush list 0, single page 0  Pages made young 117, not young 0  0.00 youngs/s, 0.00 non-youngs/s  Pages read 52078, created 577, written 181167  0.00 reads/s, 0.00 creates/s, 1.56 writes/s  Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000  Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s  LRU len: 52463, unzip_LRU len: 0  I/O sum[0]:cur[0], unzip sum[0]:cur[0]  ---BUFFER POOL 7  Buffer pool size   327680  Free buffers       273850  Database pages     52389  Old database pages 19358  Modified db pages  1  Pending reads 0  Pending writes: LRU 0, flush list 0, single page 0  Pages made young 150, not young 0  0.00 youngs/s, 0.00 non-youngs/s  Pages read 52052, created 479, written 126029  0.00 reads/s, 0.00 creates/s, 1.87 writes/s  Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000  Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s  LRU len: 52389, unzip_LRU len: 0  I/O sum[0]:cur[0], unzip sum[0]:cur[0]  

ROW OPERATIONS

0 queries inside InnoDB, 0 queries in queue  26 read views open inside InnoDB  Main thread id 4032, state: sleeping  Number of rows inserted 486519, updated 252163, deleted 20577, read 4847166839  1.37 inserts/s, 1.87 updates/s, 0.06 deletes/s, 54865.57 reads/s  

SHOW ENGINE INNODB MUTEX

Type    Name    Status  InnoDB  ibuf0ibuf.c:534 os_waits=6  InnoDB  dict0dict.c:699 os_waits=53  InnoDB  log0log.c:775   os_waits=4  InnoDB  log0log.c:771   os_waits=11076  InnoDB  buf0buf.c:1166  os_waits=15  InnoDB  buf0buf.c:1166  os_waits=13  InnoDB  buf0buf.c:1166  os_waits=6  InnoDB  buf0buf.c:1166  os_waits=10  InnoDB  buf0buf.c:1166  os_waits=34  InnoDB  buf0buf.c:1166  os_waits=5  InnoDB  buf0buf.c:1166  os_waits=4  InnoDB  buf0buf.c:1166  os_waits=24  InnoDB  fil0fil.c:1610  os_waits=26671  InnoDB  srv0srv.c:1018  os_waits=11942  InnoDB  sync0sync.c:1527    os_waits=2  InnoDB  dict0dict.c:1726    os_waits=1  InnoDB  dict0dict.c:1726    os_waits=3  InnoDB  dict0dict.c:1726    os_waits=1  InnoDB  dict0dict.c:1726    os_waits=4  InnoDB  dict0dict.c:1726    os_waits=4  InnoDB  dict0dict.c:1726    os_waits=4  InnoDB  dict0dict.c:1726    os_waits=1  InnoDB  dict0dict.c:1726    os_waits=1  InnoDB  dict0dict.c:1726    os_waits=5  InnoDB  dict0dict.c:1726    os_waits=7  InnoDB  dict0dict.c:1726    os_waits=6  InnoDB  dict0dict.c:1726    os_waits=9  InnoDB  dict0dict.c:1726    os_waits=11  InnoDB  dict0dict.c:1726    os_waits=4  InnoDB  dict0dict.c:1726    os_waits=13  InnoDB  dict0dict.c:1726    os_waits=1  InnoDB  dict0dict.c:1726    os_waits=1  InnoDB  dict0dict.c:1726    os_waits=4  InnoDB  dict0dict.c:1726    os_waits=1  InnoDB  dict0dict.c:1726    os_waits=1  InnoDB  dict0dict.c:1726    os_waits=2  InnoDB  dict0dict.c:1726    os_waits=2  InnoDB  dict0dict.c:1726    os_waits=3  InnoDB  dict0dict.c:1726    os_waits=4  InnoDB  dict0dict.c:1726    os_waits=2  InnoDB  dict0dict.c:1726    os_waits=2  InnoDB  dict0dict.c:1726    os_waits=3  InnoDB  dict0dict.c:1726    os_waits=16  InnoDB  dict0dict.c:722 os_waits=14  InnoDB  dict0dict.c:722 os_waits=6  InnoDB  dict0dict.c:722 os_waits=7  InnoDB  dict0dict.c:722 os_waits=6  InnoDB  dict0dict.c:722 os_waits=22  InnoDB  dict0dict.c:722 os_waits=18  InnoDB  dict0dict.c:722 os_waits=10  InnoDB  dict0dict.c:722 os_waits=23  InnoDB  dict0dict.c:722 os_waits=18  InnoDB  dict0dict.c:722 os_waits=20  InnoDB  dict0dict.c:722 os_waits=15  InnoDB  dict0dict.c:722 os_waits=13  InnoDB  dict0dict.c:722 os_waits=5  InnoDB  dict0dict.c:722 os_waits=12  InnoDB  dict0dict.c:722 os_waits=13  InnoDB  dict0dict.c:722 os_waits=309  InnoDB  dict0dict.c:722 os_waits=6  InnoDB  dict0dict.c:722 os_waits=16  InnoDB  dict0dict.c:722 os_waits=15  InnoDB  dict0dict.c:722 os_waits=22  InnoDB  dict0dict.c:722 os_waits=33  InnoDB  dict0dict.c:722 os_waits=21  InnoDB  dict0dict.c:722 os_waits=10  InnoDB  dict0dict.c:722 os_waits=5  InnoDB  dict0dict.c:722 os_waits=20  InnoDB  dict0dict.c:722 os_waits=6  InnoDB  dict0dict.c:722 os_waits=12  InnoDB  dict0dict.c:722 os_waits=7  InnoDB  dict0dict.c:722 os_waits=35  InnoDB  dict0dict.c:722 os_waits=13  InnoDB  dict0dict.c:722 os_waits=3  InnoDB  dict0dict.c:722 os_waits=14  InnoDB  dict0dict.c:722 os_waits=9  InnoDB  dict0dict.c:722 os_waits=20  InnoDB  dict0dict.c:722 os_waits=10  InnoDB  dict0dict.c:722 os_waits=11  InnoDB  dict0dict.c:722 os_waits=11  InnoDB  dict0dict.c:722 os_waits=13  InnoDB  dict0dict.c:722 os_waits=6  InnoDB  dict0dict.c:722 os_waits=13  InnoDB  dict0dict.c:722 os_waits=11  InnoDB  dict0dict.c:722 os_waits=5  InnoDB  dict0dict.c:722 os_waits=14  InnoDB  dict0dict.c:722 os_waits=10  InnoDB  dict0dict.c:722 os_waits=28  InnoDB  dict0dict.c:722 os_waits=17  InnoDB  dict0dict.c:722 os_waits=10  InnoDB  dict0dict.c:722 os_waits=10  InnoDB  dict0dict.c:722 os_waits=13  InnoDB  dict0dict.c:722 os_waits=18  InnoDB 
 dict0dict.c:722 os_waits=7  InnoDB  dict0dict.c:722 os_waits=13  InnoDB  dict0dict.c:722 os_waits=7  InnoDB  dict0dict.c:722 os_waits=22  InnoDB  dict0dict.c:722 os_waits=8  InnoDB  dict0dict.c:722 os_waits=32  InnoDB  dict0dict.c:722 os_waits=18  InnoDB  dict0dict.c:722 os_waits=10  InnoDB  dict0dict.c:722 os_waits=16  InnoDB  dict0dict.c:722 os_waits=10  InnoDB  dict0dict.c:722 os_waits=7  InnoDB  dict0dict.c:722 os_waits=13  InnoDB  dict0dict.c:722 os_waits=22  InnoDB  dict0dict.c:722 os_waits=15  InnoDB  log0log.c:832   os_waits=23038  InnoDB  btr0sea.c:178   os_waits=2965  InnoDB  combined buf0buf.c:916  os_waits=1261  

Linked Server Query Results: OPENQUERY vs Distributed Transactions

Posted: 21 Mar 2013 06:47 PM PDT

I have two databases. They are on different servers. One is named REPORTDB and the other is named TRANSDB. TRANSDB is referenced as a linked server on REPORTDB named TRANS.

I execute a query like this from REPORTDB using Four Part Naming and Distributed Transactions:

SET @val = ''
SET @DBName = 'TRANSDB'
SELECT @val = @val + 'CREATE SYNONYM ' + [name] + ' for ' + @DBName + '.dbo.'
     + [name] + CHAR(10) + CHAR(13)
FROM TRANS.db.sys.tables;

I expect the result of this query to generate a CREATE SYNONYM statement for each table in TRANSDB inside my REPORTDB (Table1, Table2, Table3). However, it only creates one statement and that is the last table that is returned in the result set from TRANSDB (Table3).

This correctly returns three CREATE SYNONYM statements for Table1, Table2, and Table3 using OPENQUERY:

SET @val = ''
SET @DBName = 'TRANSDB'
SELECT @val = @val + 'CREATE SYNONYM ' + [name] + ' for ' + @DBName + '.dbo.'
     + [name] + CHAR(10) + CHAR(13)
FROM OPENQUERY(TRANS, 'select [name] FROM db.sys.tables')

Since I do not wish to use OPENQUERY, how can I get the distributed query using four-part naming to return results correctly?
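One workaround that avoids OPENQUERY while keeping the variable concatenation reliable (a hedged sketch): pull the names into a local temp table over the linked server first, then build the string purely locally:

-- One distributed query to fetch the names locally...
SELECT [name] INTO #remote_tables FROM TRANS.db.sys.tables;

-- ...then the concatenation runs against the local temp table.
DECLARE @val NVARCHAR(MAX), @DBName SYSNAME;
SET @val = '';
SET @DBName = 'TRANSDB';
SELECT @val = @val + 'CREATE SYNONYM ' + [name] + ' for ' + @DBName + '.dbo.'
     + [name] + CHAR(10) + CHAR(13)
FROM #remote_tables;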

SELECT * FROM statement executes very slowly, InnoDB I/O read speed is low

Posted: 21 Mar 2013 05:47 PM PDT

I have a very simple query, select * from ap_statistic, running on my servers. The servers have the same hardware and software configuration (CPU: 8 cores, memory: 32 GB, OS: Red Hat 5.5, MySQL version: 5.1) and run the same applications. On server A the table ap_statistic has about 22,512,379 rows; on server B it has 41,438,751 rows. Of course the query runs faster on server A than on server B, but what is strange is that the query on server B is extremely slow: it takes more than 3 hours, whereas on server A it takes just 10 minutes.

I used some tools to monitor system status and found that when the query is running on server A, the system I/O read speed is about 20-30 MB/s, but on server B it is 2-3 MB/s. I've tried dropping the Linux cache and restarting the MySQL server, always with the same result. I also restored the DB from server B onto server A, and then the query on server A became very, very slow with a very low I/O read speed as well. I want to know why this happens.

The ap_statistic data on server A was generated by normal operation, while the data on server B was generated by a stored procedure. The table schema is:

CREATE TABLE `ap_statistic` (
  `ap_statisticId` BIGINT(20) UNSIGNED NOT NULL,
  `deviceId` INT(11) UNSIGNED NOT NULL DEFAULT '0',
  `macaddress` VARCHAR(100) DEFAULT NULL,
  `check_time` TIMESTAMP NOT NULL DEFAULT '0000-00-00 00:00:00',
  `ap_count` INT(10) UNSIGNED NOT NULL DEFAULT '0',
  `wlan` VARCHAR(64) DEFAULT NULL,
  `radio` VARCHAR(50) DEFAULT NULL,
  `bssid` VARCHAR(32) DEFAULT NULL,
  `zd_ap_name` VARCHAR(64) DEFAULT NULL,
  `channel` INT(2) DEFAULT NULL,
  `uplinkRSSI` INT(3) DEFAULT '0',
  `downlinkRSSI` INT(3) DEFAULT '0',
  `txBytes` BIGINT(20) DEFAULT '0',
  `rxBytes` BIGINT(20) DEFAULT '0',
  `txPkts` BIGINT(20) DEFAULT '0',
  `rxPkts` BIGINT(20) DEFAULT '0',
  `hops` INT(1) DEFAULT '0',
  `numDownlink` INT(3) DEFAULT '0',
  `distance` INT(4) DEFAULT '0',
  `phyerr` INT(11) DEFAULT '0',
  `max_num_clients` INT(3) DEFAULT '0',
  `max_mesh_downlinks` INT(1) DEFAULT '0',
  `airtime` INT(3) DEFAULT '0',
  `uptimePercentage` INT(3) DEFAULT '0',
  `total_num_clients` INT(3) DEFAULT '0',
  `tx_actual_throughput` BIGINT(20) DEFAULT '0',
  `rx_actual_throughput` BIGINT(20) DEFAULT '0',
  `tunnelMode` VARCHAR(32) DEFAULT NULL,
  `externalIp` VARCHAR(64) DEFAULT NULL,
  `externalPort` VARCHAR(32) DEFAULT NULL,
  `level` INT(1) DEFAULT '1',
  `essid` VARCHAR(64) DEFAULT NULL,
  `total_client_join` INT(11) DEFAULT '0',
  PRIMARY KEY (`ap_statisticId`),
  KEY `check_time` (`check_time`),
  KEY `macaddress` (`macaddress`),
  KEY `deviceId` (`deviceId`)
) ENGINE=INNODB DEFAULT CHARSET=utf8

The following are the table file info and some outputs from the monitoring tools.

Server B

  -rw-rw---- 1 mysql mysql 18568183808 Oct 11 14:52 ap_statistic.ibd      [root@localhost itms]# filefrag ./ap_statistic.ibd    ./ap_statistic.ibd: 164 extents found, perfection would be 159 extents          TABLE         Non_unique  Key_name    Seq_in_index  Column_name     COLLATION  Cardinality  Sub_part  Packed  NULL    Index_type  COMMENT      ------------  ----------  ----------  ------------  --------------  ---------  -----------  --------  ------  ------  ----------  -------      ap_statistic           0  PRIMARY                1  ap_statisticId  A             41438751    (NULL)  (NULL)          BTREE                    ap_statistic           1  check_time             1  check_time      A                10320    (NULL)  (NULL)          BTREE                    ap_statistic           1  macaddress             1  macaddress      A                   16    (NULL)  (NULL)  YES     BTREE                    ap_statistic           1  deviceId               1  deviceId        A                   16    (NULL)  (NULL)          BTREE            mysql>show status;            Variable_name   Value          Aborted_clients 0          Aborted_connects    0          Binlog_cache_disk_use   0          Binlog_cache_use    0          Bytes_received  1256          Bytes_sent  8844          Com_admin_commands  0          Com_assign_to_keycache  0          Com_alter_db    0          Com_alter_db_upgrade    0          Com_alter_event 0          Com_alter_function  0          Com_alter_procedure 0          Com_alter_server    0          Com_alter_table 0          Com_alter_tablespace    0          Com_analyze 0          Com_backup_table    0          Com_begin   0          Com_binlog  0          Com_call_procedure  0          Com_change_db   1          Com_change_master   0          Com_check   0          Com_checksum    0          Com_commit  0          Com_create_db   0          Com_create_event    0          Com_create_function 0          Com_create_index    0          Com_create_procedure    0          Com_create_server   0          Com_create_table    0          Com_create_trigger  0          Com_create_udf  0          Com_create_user 0          Com_create_view 0          Com_dealloc_sql 0          Com_delete  0          Com_delete_multi    0          Com_do  0          Com_drop_db 0          Com_drop_event  0          Com_drop_function   0          Com_drop_index  0          Com_drop_procedure  0          Com_drop_server 0          Com_drop_table  0          Com_drop_trigger    0          Com_drop_user   0          Com_drop_view   0          Com_empty_query 0          Com_execute_sql 0          Com_flush   0          Com_grant   0          Com_ha_close    0          Com_ha_open 0          Com_ha_read 0          Com_help    0          Com_insert  0          Com_insert_select   0          Com_install_plugin  0          Com_kill    0          Com_load    0          Com_load_master_data    0          Com_load_master_table   0          Com_lock_tables 0          Com_optimize    0          Com_preload_keys    0          Com_prepare_sql 0          Com_purge   0          Com_purge_before_date   0          Com_release_savepoint   0          Com_rename_table    0          Com_rename_user 0          Com_repair  0          Com_replace 0          Com_replace_select  0          Com_reset   0          Com_restore_table   0          Com_revoke  0          Com_revoke_all  0          Com_rollback    0          Com_rollback_to_savepoint   0          Com_savepoint   0          Com_select  1          Com_set_option  3   
Com_show_authors  0
Com_show_binlog_events  0
Com_show_binlogs  0
Com_show_charsets  0
Com_show_collations  0
Com_show_column_types  0
Com_show_contributors  0
Com_show_create_db  0
Com_show_create_event  0
Com_show_create_func  0
Com_show_create_proc  0
Com_show_create_table  1
Com_show_create_trigger  0
Com_show_databases  0
Com_show_engine_logs  0
Com_show_engine_mutex  0
Com_show_engine_status  0
Com_show_events  0
Com_show_errors  0
Com_show_fields  1
Com_show_function_status  0
Com_show_grants  0
Com_show_keys  1
Com_show_master_status  0
Com_show_new_master  0
Com_show_open_tables  0
Com_show_plugins  0
Com_show_privileges  0
Com_show_procedure_status  0
Com_show_processlist  0
Com_show_profile  0
Com_show_profiles  0
Com_show_slave_hosts  0
Com_show_slave_status  0
Com_show_status  21
Com_show_storage_engines  0
Com_show_table_status  0
Com_show_tables  0
Com_show_triggers  0
Com_show_variables  0
Com_show_warnings  0
Com_slave_start  0
Com_slave_stop  0
Com_stmt_close  0
Com_stmt_execute  0
Com_stmt_fetch  0
Com_stmt_prepare  0
Com_stmt_reprepare  0
Com_stmt_reset  0
Com_stmt_send_long_data  0
Com_truncate  0
Com_uninstall_plugin  0
Com_unlock_tables  0
Com_update  0
Com_update_multi  0
Com_xa_commit  0
Com_xa_end  0
Com_xa_prepare  0
Com_xa_recover  0
Com_xa_rollback  0
Com_xa_start  0
Compression  ON
Connections  323
Created_tmp_disk_tables  1
Created_tmp_files  5
Created_tmp_tables  2
Delayed_errors  0
Delayed_insert_threads  0
Delayed_writes  0
Flush_commands  1
Handler_commit  1
Handler_delete  0
Handler_discover  0
Handler_prepare  0
Handler_read_first  0
Handler_read_key  0
Handler_read_next  0
Handler_read_prev  0
Handler_read_rnd  0
Handler_read_rnd_next  39
Handler_rollback  0
Handler_savepoint  0
Handler_savepoint_rollback  0
Handler_update  0
Handler_write  37
Innodb_buffer_pool_pages_data  43392
Innodb_buffer_pool_pages_dirty  0
Innodb_buffer_pool_pages_flushed  43822
Innodb_buffer_pool_pages_free  637198
Innodb_buffer_pool_pages_misc  562
Innodb_buffer_pool_pages_total  681152
Innodb_buffer_pool_read_ahead_rnd  9
Innodb_buffer_pool_read_ahead_seq  27
Innodb_buffer_pool_read_requests  36489397
Innodb_buffer_pool_reads  27421
Innodb_buffer_pool_wait_free  0
Innodb_buffer_pool_write_requests  4165371
Innodb_data_fsyncs  5228
Innodb_data_pending_fsyncs  0
Innodb_data_pending_reads  1
Innodb_data_pending_writes  0
Innodb_data_read  626216960
Innodb_data_reads  36565
Innodb_data_writes  293947
Innodb_data_written  1792826880
Innodb_dblwr_pages_written  43822
Innodb_dblwr_writes  830
Innodb_log_waits  0
Innodb_log_write_requests  492588
Innodb_log_writes  268248
Innodb_os_log_fsyncs  2130
Innodb_os_log_pending_fsyncs  0
Innodb_os_log_pending_writes  0
Innodb_os_log_written  356559872
Innodb_page_size  16384
Innodb_pages_created  5304
Innodb_pages_read  38087
Innodb_pages_written  43822
Innodb_row_lock_current_waits  0
Innodb_row_lock_time  0
Innodb_row_lock_time_avg  0
Innodb_row_lock_time_max  0
Innodb_row_lock_waits  0
Innodb_rows_deleted  28637
Innodb_rows_inserted  306449
Innodb_rows_read  16579740
Innodb_rows_updated  887251
Key_blocks_not_flushed  0
Key_blocks_unused  212928
Key_blocks_used  1415
Key_read_requests  393323
Key_reads  16
Key_write_requests  102461
Key_writes  102439
Last_query_cost  9142769.199000
Max_used_connections  19
Not_flushed_delayed_rows  0
Open_files  24
Open_streams  0
Open_table_definitions  142
Open_tables  146
Opened_files  592
Opened_table_definitions  0
Opened_tables  0
Prepared_stmt_count  0
Qcache_free_blocks  0
Qcache_free_memory  0
Qcache_hits  0
Qcache_inserts  0
Qcache_lowmem_prunes  0
Qcache_not_cached  0
Qcache_queries_in_cache  0
Qcache_total_blocks  0
Queries  1578897
Questions  30
Rpl_status  NULL
Select_full_join  0
Select_full_range_join  0
Select_range  0
Select_range_check  0
Select_scan  2
Slave_open_temp_tables  0
Slave_retried_transactions  0
Slave_running  OFF
Slow_launch_threads  0
Slow_queries  0
Sort_merge_passes  0
Sort_range  0
Sort_rows  0
Sort_scan  0
Ssl_accept_renegotiates  0
Ssl_accepts  0
Ssl_callback_cache_hits  0
Ssl_cipher
Ssl_cipher_list
Ssl_client_connects  0
Ssl_connect_renegotiates  0
Ssl_ctx_verify_depth  0
Ssl_ctx_verify_mode  0
Ssl_default_timeout  0
Ssl_finished_accepts  0
Ssl_finished_connects  0
Ssl_session_cache_hits  0
Ssl_session_cache_misses  0
Ssl_session_cache_mode  NONE
Ssl_session_cache_overflows  0
Ssl_session_cache_size  0
Ssl_session_cache_timeouts  0
Ssl_sessions_reused  0
Ssl_used_session_cache_entries  0
Ssl_verify_depth  0
Ssl_verify_mode  0
Ssl_version
Table_locks_immediate  1549525
Table_locks_waited  0
Tc_log_max_pages_used  0
Tc_log_page_size  0
Tc_log_page_waits  0
Threads_cached  0
Threads_connected  17
Threads_created  322
Threads_running  2
Uptime  8093
Uptime_since_flush_status  8093

mysql> show variables;

Variable_name  Value
auto_increment_increment  1
auto_increment_offset  1
autocommit  ON
automatic_sp_privileges  ON
back_log  50
big_tables  OFF
binlog_cache_size  32768
binlog_direct_non_transactional_updates  OFF
binlog_format  STATEMENT
bulk_insert_buffer_size  8388608
character_set_client  utf8
character_set_connection  utf8
character_set_database  utf8
character_set_filesystem  binary
character_set_results  utf8
character_set_server  utf8
character_set_system  utf8
collation_connection  utf8_general_ci
collation_database  utf8_general_ci
collation_server  utf8_general_ci
completion_type  0
concurrent_insert  1
connect_timeout  10
date_format  %Y-%m-%d
datetime_format  %Y-%m-%d %H:%i:%s
default_week_format  0
delay_key_write  ON
delayed_insert_limit  100
delayed_insert_timeout  300
delayed_queue_size  1000
div_precision_increment  4
engine_condition_pushdown  ON
error_count  0
event_scheduler  OFF
expire_logs_days  0
flush  OFF
flush_time  0
foreign_key_checks  ON
ft_boolean_syntax  + -><()~*:""&|
ft_max_word_len  84
ft_min_word_len  4
ft_query_expansion_limit  20
ft_stopword_file  (built-in)
general_log  OFF
group_concat_max_len  1024
have_community_features  YES
have_compress  YES
have_crypt  YES
have_csv  YES
have_dynamic_loading  YES
have_geometry  YES
have_innodb  YES
have_ndbcluster  NO
have_openssl  DISABLED
have_partitioning  NO
have_query_cache  YES
have_rtree_keys  YES
have_ssl  DISABLED
have_symlink  YES
hostname  localhost.localdomain
identity  0
ignore_builtin_innodb  OFF
init_connect
init_file
init_slave
innodb_adaptive_hash_index  ON
innodb_additional_mem_pool_size  67108864
innodb_autoextend_increment  8
innodb_autoinc_lock_mode  1
innodb_buffer_pool_size  11159994368
innodb_checksums  ON
innodb_commit_concurrency  0
innodb_concurrency_tickets  500
innodb_data_file_path  ibdata1:10M:autoextend
innodb_data_home_dir
innodb_doublewrite  ON
innodb_fast_shutdown  1
innodb_file_io_threads  4
innodb_file_per_table  ON
innodb_flush_log_at_trx_commit  2
innodb_flush_method  O_DIRECT
innodb_force_recovery  0
innodb_lock_wait_timeout  120
innodb_locks_unsafe_for_binlog  ON
innodb_log_buffer_size  134217728
innodb_log_file_size  5242880
innodb_log_files_in_group  2
innodb_log_group_home_dir  ./
innodb_max_dirty_pages_pct  90
innodb_max_purge_lag  0
innodb_mirrored_log_groups  1
innodb_open_files  300
innodb_rollback_on_timeout  OFF
innodb_stats_on_metadata  ON
innodb_support_xa  ON
innodb_sync_spin_loops  20
innodb_table_locks  ON
innodb_thread_concurrency  8
innodb_thread_sleep_delay  10000
innodb_use_legacy_cardinality_algorithm  ON
insert_id  0
interactive_timeout  28800
join_buffer_size  268435456
keep_files_on_create  OFF
key_buffer_size  268435456
key_cache_age_threshold  300
key_cache_block_size  1024
key_cache_division_limit  100
large_files_support  ON
large_page_size  0
large_pages  OFF
last_insert_id  0
lc_time_names  en_US
license  Commercial
local_infile  ON
locked_in_memory  OFF
log  OFF
log_bin  OFF
log_bin_trust_function_creators  OFF
log_bin_trust_routine_creators  OFF
log_output  FILE
log_queries_not_using_indexes  OFF
log_slave_updates  OFF
log_slow_queries  OFF
log_warnings  1
long_query_time  10.000000
low_priority_updates  OFF
lower_case_file_system  OFF
lower_case_table_names  1
max_allowed_packet  134217728
max_binlog_cache_size  18446744073709547520
max_binlog_size  1073741824
max_connect_errors  10
max_connections  300
max_delayed_threads  20
max_error_count  64
max_heap_table_size  268435456
max_insert_delayed_threads  20
max_join_size  18446744073709551615
max_length_for_sort_data  1024
max_prepared_stmt_count  16382
max_relay_log_size  0
max_seeks_for_key  18446744073709551615
max_sort_length  1024
max_sp_recursion_depth  0
max_tmp_tables  32
max_user_connections  0
max_write_lock_count  18446744073709551615
min_examined_row_limit  0
multi_range_count  256
myisam_data_pointer_size  6
myisam_max_sort_file_size  9223372036853727232
myisam_mmap_size  18446744073709551615
myisam_recover_options  OFF
myisam_repair_threads  1
myisam_sort_buffer_size  8388608
myisam_stats_method  nulls_unequal
myisam_use_mmap  OFF
net_buffer_length  16384
net_read_timeout  30
net_retry_count  10
net_write_timeout  60
new  OFF
old  OFF
old_alter_table  OFF
old_passwords  OFF
open_files_limit  10240
optimizer_prune_level  1
optimizer_search_depth  62
optimizer_switch  index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on
port  3306
preload_buffer_size  32768
profiling  OFF
profiling_history_size  15
protocol_version  10
pseudo_thread_id  18
query_alloc_block_size  8192
query_cache_limit  1048576
query_cache_min_res_unit  4096
query_cache_size  0
query_cache_type  ON
query_cache_wlock_invalidate  OFF
query_prealloc_size  8192
rand_seed1
rand_seed2
range_alloc_block_size  4096
read_buffer_size  67108864
read_only  OFF
read_rnd_buffer_size  67108864
relay_log
relay_log_index
relay_log_info_file  relay-log.info
relay_log_purge  ON
relay_log_space_limit  0
report_host
report_password
report_port  3306
report_user
rpl_recovery_rank  0
secure_auth  OFF
secure_file_priv
server_id  0
skip_external_locking  ON
skip_name_resolve  ON
skip_networking  OFF
skip_show_database  OFF
slave_compressed_protocol  OFF
slave_exec_mode  STRICT
slave_net_timeout  3600
slave_skip_errors  OFF
slave_transaction_retries  10
slow_launch_time  2
slow_query_log  OFF
sort_buffer_size  16777216
sql_auto_is_null  ON
sql_big_selects  ON
sql_big_tables  OFF
sql_buffer_result  OFF
sql_log_bin  ON
sql_log_off  OFF
sql_log_update  ON
sql_low_priority_updates  OFF
sql_max_join_size  18446744073709551615
sql_mode
sql_notes  ON
sql_quote_show_create  ON
sql_safe_updates  OFF
sql_select_limit  18446744073709551615
sql_slave_skip_counter
sql_warnings  OFF
ssl_ca
ssl_capath
ssl_cert
ssl_cipher
ssl_key
storage_engine  MyISAM
sync_binlog  0
sync_frm  ON
system_time_zone  UTC
table_definition_cache  256
table_lock_wait_timeout  50
table_open_cache  512
table_type  MyISAM
thread_cache_size  0
thread_handling  one-thread-per-connection
thread_stack  262144
time_format  %H:%i:%s
time_zone  +08:00
timed_mutexes  OFF
timestamp  1349946061
tmp_table_size  1073741824
transaction_alloc_block_size  8192
transaction_prealloc_size  4096
tx_isolation  REPEATABLE-READ
unique_checks  ON
updatable_views_with_limit  YES
version  5.1.53-enterprise-commercial-pro
version_comment  MySQL Enterprise Server - Pro Edition (Commercial)
version_compile_machine  x86_64
version_compile_os  unknown-linux-gnu
wait_timeout  28800
warning_count  0

mysql> show innodb status\G

*************************** 1. row ***************************
  Type: InnoDB
  Name:
Status:
=====================================
121011 10:22:13 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 39 seconds
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 3806, signal count 3778
Mutex spin waits 0, rounds 282892, OS waits 2075
RW-shared spins 1969, OS waits 864; RW-excl spins 2336, OS waits 749
------------
TRANSACTIONS
------------
Trx id counter 0 5303968
Purge done for trx's n:o < 0 5303951 undo n:o < 0 0
History list length 1
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0 0, not started, process no 30336, OS thread id 1189509440
MySQL thread id 520, query id 1861594 localhost root
show innodb status
---TRANSACTION 0 5303967, not started, process no 30336, OS thread id 1188710720
MySQL thread id 526, query id 1861593 127.0.0.1 root
---TRANSACTION 0 5303962, not started, process no 30336, OS thread id 1186314560
MySQL thread id 519, query id 1861555 127.0.0.1 root
---TRANSACTION 0 5303952, not started, process no 30336, OS thread id 1188444480
MySQL thread id 515, query id 1861567 127.0.0.1 root
---TRANSACTION 0 5303948, not started, process no 30336, OS thread id 1187912000
MySQL thread id 516, query id 1861566 127.0.0.1 root
---TRANSACTION 0 5303937, not started, process no 30336, OS thread id 1190308160
MySQL thread id 511, query id 1861568 127.0.0.1 root
---TRANSACTION 0 0, not started, process no 30336, OS thread id 1090791744
MySQL thread id 18, query id 1596073 172.18.112.84 root
---TRANSACTION 0 5303959, ACTIVE 63 sec, process no 30336, OS thread id 1090525504 fetching rows, thread declared inside InnoDB 500
mysql tables in use 1, locked 0
MySQL thread id 17, query id 1861400 localhost root Sending data
select * from ap_statistic
Trx read view will not see trx with id >= 0 5303960, sees < 0 5303960
--------
FILE I/O
--------
I/O thread 0 state: waiting for i/o request (insert buffer thread)
I/O thread 1 state: waiting for i/o request (log thread)
I/O thread 2 state: waiting for i/o request (read thread)
I/O thread 3 state: waiting for i/o request (write thread)
Pending normal aio reads: 0, aio writes: 0,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
63521 OS file reads, 294656 OS file writes, 5641 OS fsyncs
1 pending preads, 0 pending pwrites
149.38 reads/s, 16384 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 318, seg size 320,
63593 inserts, 63593 merged recs, 9674 merges
Hash table size 22086161, node heap has 607 buffer(s)
0.08 hash searches/s, 0.26 non-hash searches/s
---
LOG
---
Log sequence number 15 2873617336
Log flushed up to   15 2873617336
Last checkpoint at  15 2873617336
0 pending log writes, 0 pending chkp writes
269102 log i/o's done, 0.00 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 12452785320; in additional pool allocated 15261440
Dictionary memory allocated 789024
Buffer pool size   681152
Free buffers       610013
Database pages     70532
Modified db pages  0
Pending reads 1
Pending writes: LRU 0, flush list 0, single page 0
Pages read 65043, created 5488, written 45924
149.38 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 888 / 1000
--------------
ROW OPERATIONS
--------------
1 queries inside InnoDB, 0 queries in queue
2 read views open inside InnoDB
Main thread process no. 30336, id 1185782080, state: waiting for server activity
Number of rows inserted 336555, updated 1112311, deleted 28681, read 29200669
0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 8258.58 reads/s
----------------------------
END OF INNODB MONITOR OUTPUT
============================

1 row in set, 1 warning (0.00 sec)

ERROR:
No query specified

iostat -dx 2

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda          0.00   2.50 141.50 11.50 4516.00  112.00  2258.00    56.00    30.25     0.95    6.23   5.70  87.25
sda1         0.00   0.00   0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda2         0.00   2.50 141.50 11.50 4516.00  112.00  2258.00    56.00    30.25     0.95    6.23   5.70  87.25
dm-0         0.00   0.00 141.50 14.00 4516.00  112.00  2258.00    56.00    29.76     0.97    6.24   5.62  87.35
dm-1         0.00   0.00   0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda          3.00   0.00 154.50  0.00 4932.00    0.00  2466.00     0.00    31.92     0.93    6.04   6.04  93.25
sda1         0.00   0.00   0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda2         3.00   0.00 154.50  0.00 4932.00    0.00  2466.00     0.00    31.92     0.93    6.04   6.04  93.25
dm-0         0.00   0.00 157.50  0.00 4932.00    0.00  2466.00     0.00    31.31     0.95    6.04   5.93  93.40
dm-1         0.00   0.00   0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda          3.00   1.50 150.50  1.50 4804.00   24.00  2402.00    12.00    31.76     0.94    6.15   6.14  93.40
sda1         0.00   0.00   0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda2         3.00   1.50 150.50  1.50 4804.00   24.00  2402.00    12.00    31.76     0.94    6.15   6.14  93.40
dm-0         0.00   0.00 153.50  3.00 4804.00   24.00  2402.00    12.00    30.85     0.95    6.08   5.97  93.50
dm-1         0.00   0.00   0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

vmstat 2

procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  1     16 27358488  18152 115500    0    0  2558     0 1193 8634 14  1 73 12  0
 1  1     16 27346840  18168 115500    0    0  2356    12 1174 9129 14  2 73 12  0
 2  1     16 27334320  18172 115504    0    0  2522     0 1184 8918 14  1 73 12  0
 1  1     16 27321812  18180 115496    0    0  2456    12 1183 7357 13  1 74 12  0
 1  1     16 27310132  18180 115504    0    0  2344    12 1174 6990 12  2 74 12  0
 1  1     16 27297608  18184 115508    0    0  2506     0 1182 6203 12  2 74 11  0
 1  1     16 27285444  18200 115504    0    0  2420    90 1187 9667 13  2 73 12  0
 1  1     16 27277640  18200 115508    0    0  2248     0 1165 8103 19  2 69 11  0
 2  1     16 27265380  18204 115504    0    0  2498     0 1179 5690 13  1 74 12  0
 4  1     16 27252972  18216 115508    0    0  2434    12 1178 6096 14  1 74 12  0
 1  1     16 27241032  18232 115496    0    0  2520     0 1181 9252 12  1 75 11  0
 2  1     16 27229136  18240 115508    0    0  2468    10 1178 7116 13  1 74 12  0
 1  0     16 27630612  18248 115508    0    0  1536    20 1121 4082 13  1 79  7  0

mpstat -P ALL 2

02:48:57 PM  CPU   %user   %nice %system %iowait    %irq   %soft   %idle    intr/s
02:48:59 PM  all   13.69    0.00    1.31   11.56    0.00    0.62   72.81   1190.95
02:48:59 PM    0   33.67    0.00    0.50    0.00    0.00    0.00   65.83   1006.03
02:48:59 PM    1    6.53    0.00    0.50   92.96    0.50    0.50    0.00    160.80
02:48:59 PM    2    1.01    0.00    0.50    0.00    0.00    0.00   98.49      0.00
02:48:59 PM    3    0.00    0.00    0.00    0.00    0.00    0.00  100.50      3.52
02:48:59 PM    4   35.68    0.00    1.01    0.00    0.00    1.01   62.81     13.57
02:48:59 PM    5    4.52    0.00    0.00    0.00    0.00    0.00   96.48      0.50
02:48:59 PM    6    3.52    0.00    0.00    0.00    0.00    0.00   96.98      0.50
02:48:59 PM    7   25.13    0.00    7.54    0.00    0.00    4.02   63.82      6.03

02:48:59 PM  CPU   %user   %nice %system %iowait    %irq   %soft   %idle    intr/s
02:49:01 PM  all   12.50    0.00    1.19   11.69    0.00    0.56   74.06   1177.11
02:49:01 PM    0   22.89    0.00    1.49    0.00    0.00    1.49   74.13    995.52
02:49:01 PM    1    5.97    0.00    0.50   92.54    0.00    0.50    0.00    159.70
02:49:01 PM    2    0.50    0.00    0.50    0.50    0.00    0.00   98.01      1.99
02:49:01 PM    3    0.00    0.00    0.00    0.00    0.00    0.00   99.50      2.49
02:49:01 PM    4   45.77    0.00    1.49    0.00    0.00    0.50   51.24     11.94
02:49:01 PM    5    0.00    0.00    0.00    0.00    0.00    0.00   99.50      0.50
02:49:01 PM    6    0.50    0.00    0.00    0.00    0.00    0.00   99.00      0.50
02:49:01 PM    7   23.38    0.00    5.47    0.00    0.00    1.99   68.16      4.48

02:49:01 PM  CPU   %user   %nice %system %iowait    %irq   %soft   %idle    intr/s
02:49:03 PM  all   13.05    0.00    1.12   11.62    0.00    0.50   73.70   1179.00
02:49:03 PM    0   43.50    0.00    0.50    0.00    0.00    0.00   56.00   1000.50
02:49:03 PM    1    6.50    0.00    1.00   93.00    0.00    0.50    0.00    157.00
02:49:03 PM    2    1.50    0.00    0.50    0.00    0.00    0.00   98.50      0.00
02:49:03 PM    3    0.00    0.00    0.00    0.00    0.00    0.00  100.00      2.50
02:49:03 PM    4   32.50    0.00    1.50    0.00    0.00    1.00   65.50     13.00
02:49:03 PM    5   11.00    0.00    4.00    0.00    0.00    1.50   83.50      0.50
02:49:03 PM    6    0.00    0.00    0.00    0.00    0.00    0.00  100.00      0.00
02:49:03 PM    7   10.50    0.00    2.00    0.00    0.00    1.00   87.00      5.50

What are the things that need to be considered while creating a tablespace?

Posted: 21 Mar 2013 11:39 AM PDT

I am creating temporary and permanent tablespaces on Oracle. The application I am building is transactional, so there will be traffic. I came across the concept of tablespaces in Oracle. What practical things should I consider while creating a tablespace? Should I create my own temporary tablespace, or should I use the default temp tablespace? Are there any resources I can study beyond the theory?

I am thinking of creating the tablespace as:

    CREATE TEMPORARY TABLESPACE OPTIMADATA_TEMP TEMPFILE
        '/opt/app/oracle/oradata/orcl/test01.dbf' SIZE 1024M REUSE AUTOEXTEND ON,
        '/opt/app/oracle/oradata/orcl/test02.dbf' SIZE 1024M REUSE AUTOEXTEND ON
    EXTENT MANAGEMENT LOCAL;

Should I create multiple files, or is a single file sufficient?

    CREATE TABLESPACE OPTIMADATA DATAFILE
        '/opt/app/oracle/oradata/orcl/test.dbf' SIZE 2000M AUTOEXTEND ON
    EXTENT MANAGEMENT LOCAL;

Should I always turn AUTOEXTEND ON?
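A quick way to sanity-check what already exists before adding your own files is to query the data dictionary; a minimal sketch, assuming you have DBA privileges:

    SELECT tablespace_name,
           file_name,
           bytes / 1024 / 1024 AS size_mb,
           autoextensible
    FROM   dba_temp_files;

DBA_DATA_FILES gives the same picture for permanent tablespaces, and DATABASE_PROPERTIES shows which temporary tablespace is currently the database default.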

How do I list all columns for a specified table?

Posted: 21 Mar 2013 01:51 PM PDT

I'm looking for a precise piece of information in a database about which I have no knowledge.

It's a third-party product, the vendor is slow to answer some questions, and I know the data is sitting inside that database, so I want to do a little reverse engineering.

Given one table, is it possible to get a list of the names of the columns of that table?

For example, in SQL Server it's possible to script a table out as a reusable CREATE statement that textually lists all the columns the table is composed of.
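In SQL Server, a sketch of two common approaches; 'YourTable' is a placeholder for the table you are investigating:

    -- metadata for every column of one table
    SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'YourTable'
    ORDER BY ORDINAL_POSITION;

    -- or let the built-in procedure dump columns, indexes and constraints
    EXEC sp_help 'YourTable';

INFORMATION_SCHEMA.COLUMNS is ANSI standard, so a near-identical query works on most other engines too.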

100 TeraBytes Capacity Database - Resources and Time Estimates

Posted: 21 Mar 2013 04:52 PM PDT

I am working on a 'back of the envelope' calculation for a 100TB reporting database setup. I am seeking out thoughts from the experts here. Proposed environment:

  1. Storage Capacity ~ 100TB
  2. Tables ~ 200, sizes ranging from 1GB to 5TB; the mean size is likely between 100GB and 200GB
  3. ETL - jobs may require joins between tables of tens of millions of rows, with join keys ranging from 10 bytes to 500 bytes; such joins should finish in under 2-5 minutes
  4. Live Selects - initially, only interested in select speeds; should support 500 selects/second. Updates/second will be a relatively much smaller number and can be ignored for this exercise.
  5. Needs 24x7 availability. Two independent DB servers should be available to serve select calls (with data replicated).

Questions:

  1. At present, I am looking at Oracle. How has your experience been with other commercial or open-source solutions for large databases?
  2. What hardware/OS combination have you seen work best? I am planning on Linux on Dell.
  3. Is network storage, such as NetApp, a must? What issues do you foresee with using commercial off-the-shelf disks?
  4. Once the hardware and OS are ready, how much time would you set aside to set up and configure the DB, storage, etc.?
  5. What team compositions worked best in the environments you have observed? I mean the various admins (OS admin, Oracle DBA?) required to manage and operate such a setup. How many of them might be needed to achieve 24x7 uptime?
  6. Any approximation/range on DB license and network storage costs?

I know I don't have all the environment details. I am not looking for exact figures; an approximation is sufficient. Though some of these questions might be best answered by managers, I am interested in the admins' perspective. I appreciate your input.

Database still displays Restoring after the restore process

Posted: 21 Mar 2013 08:42 PM PDT

I restored a database from a backup file (in Management Studio), and after the restore the database showed Restoring. I then deleted the files from disk (with the engine stopped) and tried the restore again, this time with success. In both cases I left the restore process on the default options.

I have no idea why; I created the backup file after a restart of the environment, without active connections.

My question: is it possible to somehow clear the Restoring status, and are there any tools for that?
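If the database was restored WITH NORECOVERY (or the restore never completed its recovery phase), it stays in the Restoring state until it is explicitly recovered. A minimal sketch, where YourDatabase is a placeholder for the stuck database:

    RESTORE DATABASE YourDatabase WITH RECOVERY;

This rolls back any uncommitted transactions and brings the database online; only run it once every backup in the chain you intend to apply has been restored.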

[SQL Server] NOT IN based on two fields

[SQL Server] NOT IN based on two fields


NOT IN based on two fields

Posted: 21 Mar 2013 10:56 AM PDT

Please help me to not use this silly concatenation method. I want to select all rows from the first table when the combination of two fields is not in the second. What's the accepted way to do this?

    DECLARE @Test1 TABLE (Field1 varchar(1), Field2 int)
    INSERT INTO @Test1
    SELECT 'A',1 UNION
    SELECT 'B',2 UNION
    SELECT 'C',3

    DECLARE @Test2 TABLE (Field1 varchar(1), Field2 int)
    INSERT INTO @Test2
    SELECT 'B',2 UNION
    SELECT 'C',3

    SELECT *
    FROM @Test1
    WHERE Field1 + CONVERT(varchar(1),Field2) NOT IN
        ( SELECT Field1 + CONVERT(varchar(1),Field2) FROM @Test2 )
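One common alternative is NOT EXISTS with a correlated subquery that compares both columns directly, which avoids the concatenation (and the false matches it can produce); a sketch against the table variables above:

    SELECT t1.*
    FROM @Test1 t1
    WHERE NOT EXISTS (
        SELECT 1
        FROM @Test2 t2
        WHERE t2.Field1 = t1.Field1
          AND t2.Field2 = t1.Field2
    );

EXCEPT is another set-based option when only the two columns are needed: SELECT Field1, Field2 FROM @Test1 EXCEPT SELECT Field1, Field2 FROM @Test2.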

Database email

Posted: 21 Mar 2013 03:07 AM PDT

I set up Database Mail and get this message:

    [260] Unable to start mail session (reason: System.TypeInitializationException: The type initializer for 'System.Data.SqlClient.SqlConnection' threw an exception. ---> System.TypeInitializationException: The type initializer for 'System.Data.SqlClient.SqlConnectionFactory' threw an exception. ---> System.TypeInitializationException: The type initializer for 'System.Data.SqlClient.SqlPerformanceCounters' threw an exception. ---> System.Configuration.ConfigurationErrorsException: Configuration system failed to initialize ---> System.Configuration.Confi)

[Articles] Zettabytes and Beyond

[Articles] Zettabytes and Beyond


Zettabytes and Beyond

Posted: 20 Mar 2013 11:00 PM PDT

The growth of data, and the sheer scale of the data we store and manage, is stunning. Steve Jones looks at the rates of growth these days.

[MS SQL Server] How many instances can we create?

[MS SQL Server] How many instances can we create?


How many instances can we create?

Posted: 20 Mar 2013 06:45 PM PDT

Hi all, I have a small doubt: how many instances can we create in a production environment, and how many databases can we create on each instance? As per my knowledge, we can create only one instance in a production environment, though we can create multiple databases. If we create multiple instances, will it affect performance at the OS or system level? Is this correct? If anything is wrong, please correct me.

[SQL 2012] Always ON Secondary database Read Only Connection.....Please help.

[SQL 2012] Always ON Secondary database Read Only Connection.....Please help.


Always ON Secondary database Read Only Connection.....Please help.

Posted: 21 Mar 2013 02:17 AM PDT

Dear All, I have configured SQL AlwaysOn with two servers and enabled SQL Server 2012 AlwaysOn Availability Groups read-only routing by using the queries below. But when I check the read-only connection using (sqlcmd -S X.X.X.X -E -d DatabaseName -K ReadOnly), the connection is still routed to the primary server only. Please help if there are any additional things to do.

    --Specify a read_only_routing_url
    ALTER AVAILABILITY GROUP AlwaysOnAGProd
    MODIFY REPLICA ON 'ALWAYSON-AG1'
    WITH( SECONDARY_ROLE ( READ_ONLY_ROUTING_URL = 'TCP://ALWAYSON-AG1.TESTDOMAIN.local:1433' ))

    ALTER AVAILABILITY GROUP AlwaysOnAGProd
    MODIFY REPLICA ON 'ALWAYSON-AG2'
    WITH( SECONDARY_ROLE ( READ_ONLY_ROUTING_URL = 'TCP://ALWAYSON-AG2.TESTDOMAIN.local:1433' ))

    --Specify a read-only routing list
    ALTER AVAILABILITY GROUP AlwaysOnAGProd
    MODIFY REPLICA ON 'ALWAYSON-AG1'
    WITH( PRIMARY_ROLE ( READ_ONLY_ROUTING_LIST = ('ALWAYSON-AG2') ))

    ALTER AVAILABILITY GROUP AlwaysOnAGProd
    MODIFY REPLICA ON 'ALWAYSON-AG2'
    WITH( PRIMARY_ROLE ( READ_ONLY_ROUTING_LIST = ('ALWAYSON-AG1') ))

Testing method:

    C:\> sqlcmd -S X.X.X.X -E -d DatabaseName -K ReadOnly
    1> SELECT @@SERVERNAME
    2> GO

I am getting the primary server's name!
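One detail worth checking: read-only routing only engages when the client connects through the availability group listener with read-only intent, not when it connects straight to a replica's IP address. A hedged test, assuming a listener named AGListenerName (hypothetical) has been created for the group:

    sqlcmd -S AGListenerName -d DatabaseName -E -K ReadOnly

With -K ReadOnly and the database specified, a connection made via the listener should be routed to the readable secondary according to the routing list.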

SQL server 2000 upgrade to SQL server 2012

Posted: 20 Mar 2013 04:21 PM PDT

While migrating from SQL Server 2000 databases to SQL Server 2012 databases, we need to follow one of the approaches below.

SQL Server 2000 >> SQL Server 2005 SP4 >> SQL Server 2012
• Clean approach to migrate; remediation of non-ANSI SQL 2000 code to ANSI-compliant code
• TCO of SQL Server 2005 is less than the TCO of SQL Server 2008
• The amount of remediation for data type deprecation and syntax changes is high for the SQL 2000 to SQL 2008 migration

SQL Server 2000 >> SQL Server 2008 R2 >> SQL Server 2012
• Remediation for SQL 2000 and SQL 2005 will be handled in one go
• TCO of SQL Server 2008 is higher than the TCO of SQL Server 2005
• Most of the remediation issues will be addressed at the intermediate migration

Kindly let me know your views about the best approach.

system FlushCache happening often

Posted: 24 Sep 2012 03:28 AM PDT

SQL 2012 Enterprise, OLTP environment. Experiencing the following several times per hour. Has anyone experienced this?

    FlushCache: cleaned up 513008 bufs with 448519 writes in 75641 ms (avoided 92621 new dirty bufs) for db 10:0
    average throughput: 52.99 MB/sec, I/O saturation: 104, context switches 5758
    last target outstanding: 80800, avgWriteLatency 0

The system is generating these FlushCache events and causing severe performance degradation.

An error occurred in Service Broker internal activation while trying to scan the user queue 'msdb.dbo.ExternalMailQueue'

Posted: 20 Mar 2013 10:10 PM PDT

What is this? How do I fix it? I get a ton of these and it slows down the server for 5 minutes.

    An error occurred in Service Broker internal activation while trying to scan the user queue 'msdb.dbo.ExternalMailQueue' for its status. Error: 1222, State: 51. Lock request time out period exceeded. This is an informational message only. No user action is required.

Video tutorials for MDS and DQS for 2012

Posted: 20 Mar 2013 04:59 PM PDT

Hi Team, can anyone point me to good video tutorials on using the features in MDS and DQS? I am fine with installing MDS. Thanks in advance.

Licensing for Evaluation before Purchase?

Posted: 20 Mar 2013 04:34 AM PDT

We are looking to create a new SQL box, and there is also a desire by management to migrate from SQL Server 2008 to 2012 when they do it. Does anybody know Microsoft's policy on evaluation before purchasing licenses (i.e. before they buy licensing for 2012 on the box, they want to make sure upgrading doesn't break their existing system)?

Data size error in SSAS 2012 Tabular Table Import Wizard for n/varchar(max)

Posted: 20 Mar 2013 11:06 AM PDT

Does anyone know the actual maximum string length values for columns of type varchar(max) or nvarchar(max) when using the Table Import Wizard in a tabular project?

I'm working my way through the Wrox book Professional SQL Server 2012 Analysis Services with MDX and DAX and trying my hand at tabular projects for the first time. A couple of the tables provided in the AdventureWorksDW relational database have nvarchar(max) fields. When attempting to add these tables via the Table Import Wizard, I receive an error. No error number or code is provided, just this:

[quote]The size of a data value in table '<table name plus long identifier string>' column '<column name>' was too large to fit in that column. The current operation was cancelled because another operation in the transaction failed.[/quote]

If I filter out the column from the import, the rest of the tables import fine, but the text didn't mention needing to exclude the columns.

I checked out the [url=http://msdn.microsoft.com/en-us/library/gg492146.aspx]Data Types Supported[/url] in BOL, and it contains the useful tidbit: [quote]You cannot import from a varchar(max) column that contains a string length of more than 131,072 characters.[/quote] "OK," I thought, "These fields are nvarchar(max), so that would probably max out at 65,536 characters since each character requires twice the bytes. There must be strings longer than that in my data."

However, when I ran both MAX(LEN(<field>)) and MAX(DATALENGTH(<field>)), one of the tables having the issue returned a maximum length for the field in question of only 34,568 characters and 69,136 bytes, well under the limit in BOL. Thinking it may be an issue with nvarchar(max) as opposed to varchar(max), I copied the data to a new table, moving the nvarchar(max) data to a varchar(max) field. Attempting to import from the varchar(max) field (now only 34,658 bytes) still generated the same error.

I haven't been able to locate any other information online about the error, about the actual maximums enforced, or about whether it may be impacted by available memory (since it all has to be stored in-memory for the VertiPaq engine). I'd much appreciate any insight anyone has to offer.

Upgrade SQL/SSRS 2008R2 to 2012 SP1

Posted: 20 Mar 2013 04:07 AM PDT

I'm testing upgrading SQL/SSRS from SQL 2008 R2 SP2 to SQL 2012 SP1 (11.0.3349). Most of my reports' data source is a Dynamics NAV DB where most of the number fields are decimal(38,20). I'm finding that when I have a zero value in a column on a report, the report loses formatting and throws XML exceptions when rendered to Excel. So a field formatted for currency becomes 0.00000000000000000 and renders as text in Excel 2010, but if there's a value in the cell then the formatting is fine.

Looking on the MS site, there is documentation that this is an Excel issue that was supposed to be fixed in a CU for Office 2010 Excel, but I didn't see it in the release notes for the CU. The error is "Excel found unreadable content...". There are no errors in the report itself, only in rendering; rendering to other formats is fine. I've found a workaround where I change the value in a report by using an if statement to make it equal to zero (IIF(value=0,0,value)), but I've got several hundred reports in my library and I wouldn't even know how many cells I'd have to change. I'm just looking to see if anyone else has run into this issue during their upgrade testing process.

I can repeat the error in a new report if I use this as a dataset:

    SELECT CAST(0 AS decimal(38,20)) d1, CAST(0 AS decimal(10,2)) d2

Export to Excel gets this error:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <recoveryLog xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main"><logFileName>error056200_05.xml</logFileName><summary>Errors were detected in file 'C:\Users\...ME...\AppData\Local\Microsoft\Windows\Temporary Internet Files\Content.IE5\P42V8D4D\_TestExcel.xlsx'</summary><repairedRecords summary="Following is a list of repairs:"><repairedRecord>Repaired Records: Cell information from /xl/worksheets/sheet1.xml part</repairedRecord></repairedRecords></recoveryLog>

The d1 column is now formatted as text 0.00000000000000000000 instead of as currency, and the d2 column is formatted correctly.

[T-SQL] OPENROWSET - SYNTAX ERROR NEAR @CMD

[T-SQL] OPENROWSET - SYNTAX ERROR NEAR @CMD


OPENROWSET - SYNTAX ERROR NEAR @CMD

Posted: 21 Mar 2013 01:57 AM PDT

    -- this works fine
    SELECT a.*
    FROM OPENROWSET('SQLNCLI', 'Server=sql01;Trusted_Connection=yes;',
                    'SELECT * from syte_APP.DBO.employee') AS a

    -- this code gives me an error
    DECLARE @Cmd VarChar(4000)
    SET @CMD = 'SELECT * from syte_APP.DBO.employee'
    SELECT a.*
    FROM OPENROWSET('SQLNCLI', 'Server=sql01;Trusted_Connection=yes;', @cmd) AS a

    -- Msg 102, Level 15, State 1, Line 4
    -- Incorrect syntax near '@cmd'
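OPENROWSET accepts only string literals for its arguments, not variables, which is why the second batch fails to parse. The usual workaround is to build the entire statement as a string and execute it; a sketch based on the posted query (it assumes @Cmd itself contains no single quotes):

    DECLARE @Cmd varchar(4000), @Sql varchar(4000);
    SET @Cmd = 'SELECT * from syte_APP.DBO.employee';
    SET @Sql = 'SELECT a.* FROM OPENROWSET(''SQLNCLI'', '
             + '''Server=sql01;Trusted_Connection=yes;'', '
             + '''' + @Cmd + ''') AS a';
    EXEC (@Sql);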

Conversion from varchar to strong data types

Posted: 20 Mar 2013 11:09 PM PDT

Hi all, there are a few databases I work with that have been designed so that varchar columns are used to store what actually displays on the front end as ints, decimals, varchars, datetimes, and checkboxes. I often have to write integrations with these databases, bringing data in, and I prefer to validate the data whilst loading from the staging tables. I have seen all sorts of values being passed into the staging tables that will load into the target database, because the columns are all varchars, but the values don't display on the front end because the app actively filters bad values out.

Poor design, I know; the designers have their reasons, and I don't really want to get into all that on this thread.

What I would like is for my validation scripts to warn up front of potentially invalid data types. My problem is that, for example, the ISNUMERIC() function returns 1 for the value ',1234', but a CONVERT(NUMERIC, ',1234') or CAST(',1234' AS NUMERIC) will fail with an "Error converting data type varchar to numeric" error. I've been trying to locate a set of reliable data type testing functions that will reliably determine whether a varchar can be converted to a given data type or not. Does anyone know of any?
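If the servers are on SQL Server 2012 or later, TRY_CONVERT (or TRY_CAST) does exactly this: it returns NULL instead of raising an error when the value cannot be converted. A minimal sketch; StagingTable and Amount are hypothetical names:

    SELECT TRY_CONVERT(numeric(18,4), ',1234');  -- NULL: not convertible
    SELECT TRY_CONVERT(numeric(18,4), '1234');   -- 1234.0000

    -- flag suspect staging rows without aborting the load
    SELECT *
    FROM StagingTable
    WHERE Amount IS NOT NULL
      AND TRY_CONVERT(numeric(18,4), Amount) IS NULL;

On 2008 and earlier you are stuck writing per-type checks, since ISNUMERIC also accepts currency symbols, commas, and strings like '1e4'.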

status on a given date or date range

Posted: 20 Mar 2013 07:18 AM PDT

I have a situation where we need to be able to query the status a claim was at, at any given time. For example, I have this table (2008 R2):

    CREATE TABLE ClaimStatus(
        ClaimID int NOT NULL,
        StatusDate datetime NOT NULL,
        StatusCode int NOT NULL
    )
    INSERT INTO ClaimStatus VALUES (7150,'2013-03-11 10:41:29.823',100)
    INSERT INTO ClaimStatus VALUES (7150,'2013-03-12 07:20:41.720',300)
    INSERT INTO ClaimStatus VALUES (7150,'2013-03-15 13:35:50.000',310)
    INSERT INTO ClaimStatus VALUES (7148,'2013-03-01 10:41:29.780',100)
    INSERT INTO ClaimStatus VALUES (7148,'2013-03-10 07:21:26.557',300)
    INSERT INTO ClaimStatus VALUES (7148,'2013-03-20 13:35:50.000',310)
    INSERT INTO ClaimStatus VALUES (7149,'2013-02-01 01:19:20.110',100)
    INSERT INTO ClaimStatus VALUES (7149,'2013-02-14 07:21:26.557',300)
    INSERT INTO ClaimStatus VALUES (7149,'2013-03-14 00:35:50.000',310)
    INSERT INTO ClaimStatus VALUES (7147,'2013-02-01 01:19:20.110',100)
    INSERT INTO ClaimStatus VALUES (7147,'2013-02-14 07:21:26.557',300)
    INSERT INTO ClaimStatus VALUES (7147,'2013-03-10 00:35:50.000',310)

Let's say I need a query that would tell me which claims were at a status of 300 on 3/14/2013. Some of these claims were, in fact, at a status of 300 on that day, but how do I query that since there is no entry for the specific date of 3/14/2013? (i.e. claim 7148 was changed to a status of 300 on 3/10 and was not changed to another status until 3/20, so I would need that claim pulled back in my query, as it would still have been at status 300 on 3/14.)
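One way to express "status as of a point in time" is to pick, per claim, the most recent status row on or before the date of interest, then filter on its code; a sketch, treating 3/14/2013 as midnight of that day:

    DECLARE @AsOf datetime = '2013-03-14';

    SELECT cs.ClaimID, cs.StatusCode, cs.StatusDate
    FROM ClaimStatus cs
    WHERE cs.StatusCode = 300
      AND cs.StatusDate = (SELECT MAX(cs2.StatusDate)
                           FROM ClaimStatus cs2
                           WHERE cs2.ClaimID = cs.ClaimID
                             AND cs2.StatusDate <= @AsOf);

Against the sample data this returns claims 7148, 7149 and 7150; move @AsOf to the end of the day if "on 3/14" should mean any time during that day.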

I want to update the 2nd column based on the first column

Posted: 20 Mar 2013 08:49 PM PDT

Here I have an id column, and based on it I want to derive one more column, Mid, with data as below:

    id
    100
    200
    300
    400
    500

Expected output:

    id   Mid
    100  NULL
    200  100
    300  200
    400  300
    500  400
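Without LAG (which arrived in SQL Server 2012), a ROW_NUMBER self-join gets the previous id; a sketch against a hypothetical table t(id int):

    WITH ordered AS (
        SELECT id,
               ROW_NUMBER() OVER (ORDER BY id) AS rn
        FROM t
    )
    SELECT cur.id,
           prev.id AS Mid
    FROM ordered cur
    LEFT JOIN ordered prev ON prev.rn = cur.rn - 1;

The LEFT JOIN leaves Mid NULL for the first row, and the same SELECT can drive an UPDATE if Mid is a real column to be populated.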

Performance Problem

Posted: 21 Mar 2013 12:25 AM PDT

Hi, I have been struggling with a procedure for a few days. Even though the table sizes are not very big, the procedure is taking more than 2 minutes to return its results. Most probably this is happening due to the nested views used inside it, which have a lot of left joins. I am not sure whether a view is actually run to some extent behind the scenes, because I just don't see this doing any good here. For example, one of the small tables is scanned in the execution plan and shows the correct actual number of rows, but then there is a Lazy Spool operator whose output becomes billions of actual rows; I have honestly not seen this before. Please give me some input by looking at the execution plan. I can share the table schemas, but I'm not sure if that is required. Regards, Chandan Jha

SQL Server Logon Trigger Problems

Posted: 20 Mar 2013 05:34 AM PDT

Hello, I have two business needs: to deny logon to any connection that is accessing SQL Server with a session that is not encrypted (we use SSL, and the Force Encryption option cannot be set due to various third-party app problems), and to capture certain audit data for each successful logon to SQL Server. I created two logon triggers to do this.

The problem is that the two triggers work fine and as expected when testing through SQL Server Management Studio. The check-for-encryption trigger fails with the infamous 'login fails due to trigger execution' error when using any .NET app (SSIS, SSRS, VB program, etc.). The audit trigger works fine from all our applications except one written in PHP.

I created a table that stores the dbid, dbname, an encryption-required indicator and an audit-required indicator. I use this to control whether the connections are denied and whether auditing of the database connection is required. We use SQL Server 2008 R2 SP2 (build 10.50.4000).

Example of the deny logon trigger:

    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TRIGGER [DenyUnencryptedConnection]
    ON ALL SERVER WITH EXECUTE AS 'DatabaseLogonAuditUser'
    FOR LOGON
    AS
    BEGIN
        DECLARE @LoginName VARCHAR(128), @DBName VARCHAR(128)
        SELECT @LoginName = EVENTDATA().value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(128)'),
               @DBName = ORIGINAL_DB_NAME()
        IF (SELECT COUNT(ad.DatabaseName)
            FROM DatabaseLogonAudit.dbo.AuditDatabase ad
            INNER JOIN master.sys.sysprocesses sp ON sp.dbid = ad.DatabaseId
            INNER JOIN master.sys.dm_exec_connections ec ON ec.session_id = sp.spid
            WHERE sp.loginame = @LoginName
              AND ad.DatabaseName = @DBName
              AND ad.DBLogonAuditRequired = 1
              AND ec.Encrypt_option <> 'TRUE') > 0
            ROLLBACK;
    END;
    GO
    SET ANSI_NULLS OFF
    GO
    SET QUOTED_IDENTIFIER OFF
    GO

Example of the audit trigger:

    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TRIGGER [CaptureLogonInfo]
    ON ALL SERVER WITH EXECUTE AS 'DatabaseLogonAuditUser'
    AFTER LOGON
    AS
    BEGIN
        DECLARE @SessionID INT, @LoginTime DATETIME, @LoginName VARCHAR(128)
        SELECT @SessionID = EVENTDATA().value('(/EVENT_INSTANCE/SPID)[1]', 'int'),
               @LoginTime = EVENTDATA().value('(/EVENT_INSTANCE/PostTime)[1]', 'nvarchar (128)'),
               @LoginName = EVENTDATA().value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(128)')
        IF (SELECT COUNT(AuditDatabaseIdn)
            FROM databaseLogonAudit.dbo.AuditDatabase
            WHERE DatabaseName = ORIGINAL_DB_NAME()
              AND DBLogonAuditRequired = 1) > 0
            INSERT INTO DatabaseLogonAudit.dbo.AuditLogin
                (ServerName, LoginTime, LoginName, DatabaseName, Session_id, HostName)
            VALUES
                (@@Servername, @LoginTime, @LoginName, ORIGINAL_DB_NAME(), @SessionID, HOST_NAME());
    END;
    GO
    SET ANSI_NULLS OFF
    GO
    SET QUOTED_IDENTIFIER OFF
    GO

Has anyone had unpredictable results using logon triggers, and are there any steps/processes that I may be missing in implementing these? Thanks.

index rebuild on truncated table

Posted: 20 Mar 2013 09:26 PM PDT

Hi, I've seen a couple of cases where huge staging tables containing clustered/non-clustered indexes are truncated and an index drop and rebuild happens, so it goes like: 1. drop index; 2. truncate table; 3. re-create index. My questions are:

1. Would a non-clustered/clustered index slow down the truncation (not deletion) of data from a huge table?
2. And most importantly, is there any advantage to rebuilding an index on a truncated table?

I think that while truncating a table, the indexes need not be touched at all. Please enlighten me.

System R derived in SQL Server?

Posted: 20 Mar 2013 06:21 PM PDT

All, I have recently read the articles below.

[url]https://www.simple-talk.com/sql/performance/join-reordering-and-bushy-plans/[/url]
[url]http://www.benjaminnevarez.com/2010/06/optimizing-join-orders/[/url]
[url]http://en.wikipedia.org/wiki/Query_optimizer[/url]

[quote]Most query optimizers determine join order via a dynamic programming algorithm pioneered by IBM's System R database project[citation needed].[/quote]

How about SQL Server, Sybase and Oracle?

[quote]Historically, System-R derived query optimizers would often only consider left-deep query plans, which first join two base tables together, then join the intermediate result with another base table, and so on. This heuristic reduces the number of plans that need to be considered (n! instead of 4^n)[/quote]

Is SQL Server a System-R derived query optimizer? If SQL Server is System-R derived, how is a "bushy plan" implemented in SQL Server? It seems to bypass the System-R properties.

[url]http://en.wikipedia.org/wiki/IBM_System_R[/url]

I am not that clear about System R. What exactly is the role of System R in a DB?

How to call a batch file to execute from an SP

Posted: 20 Mar 2013 04:15 AM PDT

Hi All, I need your assistance please; I am not very good with scripting. I have created a draft of an SP, and I need the syntax to make a call to a batch file (.bat) from within the SP. Once I have that, I can incorporate it in the code and begin testing. Can someone please provide a sample script? Thanks, SueTons.
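The usual mechanism is xp_cmdshell, which is disabled by default and has real security implications, so treat this as a sketch rather than a recommendation; 'C:\Jobs\MyJob.bat' is a placeholder path:

    -- one-time enablement (requires sysadmin)
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'xp_cmdshell', 1;
    RECONFIGURE;

    -- inside the stored procedure
    DECLARE @rc int;
    EXEC @rc = master.dbo.xp_cmdshell 'C:\Jobs\MyJob.bat';
    SELECT @rc AS return_code;  -- 0 generally indicates the command succeeded

If the batch file can be scheduled instead of called inline, a SQL Agent job step of type Operating system (CmdExec) avoids enabling xp_cmdshell at all.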

Optimization of dynamic SQL

Posted: 20 Mar 2013 08:19 PM PDT

Hi All, I have a stored procedure with many optional parameters. Following is one of the code blocks from that SP which is taking a long time to execute. If there is a more optimized way to perform the same task, please suggest it:

[code="sql"]
declare @City varchar(50) = 'aadorp',
        @Region varchar(50) = 'europe',
        @strProvince varchar(200) = '''Overijssel'', ''Utrecht''',
        @strCountry1 varchar(50) = '''netherlands''',
        @strCountry2 varchar(50) = '''''',
        @strCountry3 varchar(50) = '''''',
        @strCountry4 varchar(50) = '''''',
        @PtSql varchar(8000),
        @PtJoinString varchar(4000),
        @PtWhereString varchar(4000),
        @VacSearchLocLatitude numeric(14,10),
        @VacSearchLocLongitude numeric(14,10)

Create table #LatLong (Latitude numeric(14,10), Longitude numeric(14,10), CityID int)

IF LTRIM(rtrim(@City)) <> ''
BEGIN
    set @PtSql = 'insert into #LatLong Select CityLatitude, CityLongitude, ci.CityInternalID from GeoData.TB_City ci'
    set @PtJoinString = ' left join (select CityInternalID,CityName,CityTranslationStatusID from GeoData.TB_CityTranslation UNION select CityInternalID, CitySynonymName, CitySynonymStatusID from GeoData.TB_CitySynonym ) cits on ci.CityInternalId = cits.CityInternalID'
    set @PtWhereString = ' where ((CityDefaultName = '''+ @City +''' and ci.CityStatusID = 1) or (CityName = ''' + @City + ''' and cits.CityTranslationStatusID = 1)) '

    IF LTRIM(RTRIM(@Region)) <> ''
    BEGIN
        set @PtWhereString = @PtWhereString + ' and re.RegionDefaultName = ''' + @Region + ''' and re.RegionStatusID = 1'
        set @PtJoinString = @PtJoinString + ' inner join ( select RegionInternalID,RegionDefaultName,RegionStatusID from GeoData.TB_Region UNION select RegionInternalID,RegionName,RegionTranslationStatusID from GeoData.TB_RegionTranslation UNION select RegionInternalID, RegionSynonymName, RegionSynonymStatusID from GeoData.TB_RegionSynonym) re on ci.CityRegionID = re.RegionInternalID'
    END

    --If Province is true and country is false
    IF (LTRIM(RTRIM(@strCountry1)) <> '''''' OR LTRIM(RTRIM(@strCountry2)) <> '''''' OR LTRIM(RTRIM(@strCountry3)) <> '''''' OR LTRIM(RTRIM(@strCountry4)) <> '''''') and NOT (LTRIM(RTRIM(@strProvince)) <> '')
    BEGIN
        set @PtWhereString = @PtWhereString + ' and co.CountryDefaultName in (' + @strCountry1 +','+ @strCountry2 +','+ @strCountry3 +','+ @strCountry4 + ') and co.CountryStatusID = 1'
        set @PtJoinString = @PtJoinString + ' inner join ( select CountryInternalID,CountryDefaultName,CountryStatusID from GeoData.TB_Country UNION select CountryInternalID,CountryName,CountryTranslationStatusID from GeoData.TB_CountryTranslation UNION select CountryInternalID,CountrySynonymName,CountrySynonymStatusID from GeoData.TB_CountrySynonym) co on ci.CityCountryId = co.CountryInternalID'
    END

    --If Province is false and country is true
    IF (LTRIM(RTRIM(@strProvince)) <> '') and NOT (LTRIM(RTRIM(@strCountry1)) <> '''''' OR LTRIM(RTRIM(@strCountry2)) <> '''''' OR LTRIM(RTRIM(@strCountry3)) <> '''''' OR LTRIM(RTRIM(@strCountry4)) <> '''''')
    BEGIN
        set @PtWhereString = @PtWhereString + ' and pr.ProvinceDefaultName in (' + @strProvince + ') and pr.ProvinceStatusID = 1'
        set @PtJoinString = @PtJoinString + ' inner join ( select ProvinceInternalID,ProvinceDefaultName,ProvinceStatusID from GeoData.TB_province UNION select ProvinceInternalID,ProvinceName,ProvinceTranslationStatusID from GeoData.TB_provinceTranslation UNION select ProvinceInternalID,ProvinceSynonymName,ProvinceSynonymStatusID from GeoData.TB_ProvinceSynonym) pr on ci.CityProvinceID = pr.ProvinceInternalID'
    END

    --If Province is true and country is true
    IF (LTRIM(RTRIM(@strProvince)) <> '') and (LTRIM(RTRIM(@strCountry1)) <> '''''' OR LTRIM(RTRIM(@strCountry2)) <> '''''' OR LTRIM(RTRIM(@strCountry3)) <> '''''' OR LTRIM(RTRIM(@strCountry4)) <> '''''')
    BEGIN
        set @PtWhereString = @PtWhereString + ' and ((pr.ProvinceDefaultName in (' + @strProvince + ') and co.CountryDefaultName in (' + @strCountry1 + ') and pr.ProvinceLevel = 1) OR (pr.ProvinceDefaultName in (' + @strProvince + ') and co.CountryDefaultName in (' + @strCountry2 + ') and pr.ProvinceLevel = 2) OR (pr.ProvinceDefaultName in (' + @strProvince + ') and co.CountryDefaultName in (' + @strCountry3 + ') and pr.ProvinceLevel = 3) OR (pr.ProvinceDefaultName in (' + @strProvince + ') and co.CountryDefaultName in (' + @strCountry4 + ') and pr.ProvinceLevel = 4)) and pr.ProvinceStatusID = 1'
        set @PtJoinString = @PtJoinString + ' inner join ( select ProvinceInternalID,ProvinceDefaultName,ProvinceStatusID, ProvinceLevel from GeoData.TB_province UNION select ProvinceInternalID,ProvinceName,ProvinceTranslationStatusID, ProvinceLevel from GeoData.TB_provinceTranslation UNION select ProvinceInternalID,ProvinceSynonymName,ProvinceSynonymStatusID, ProvinceLevel from GeoData.TB_ProvinceSynonym) pr on pr.ProvinceInternalID in (ci.CityProvinceID_1, ci.CityProvinceID_2,ci.CityProvinceID_3,ci.CityProvinceID_4) inner join ( select CountryInternalID,CountryDefaultName,CountryStatusID from GeoData.TB_Country UNION select CountryInternalID,CountryName,CountryTranslationStatusID from GeoData.TB_CountryTranslation UNION select CountryInternalID,CountrySynonymName,CountrySynonymStatusID from GeoData.TB_CountrySynonym) co on ci.CityCountryId = co.CountryInternalID'
    END

    set @PtSql = @PtSql + @PtJoinString + @PtWhereString
    Print @PtSql
    --exec sp_executesql @PtSql, N'@City nvarchar(1000) OUTPUT, @Region nvarchar(1000) OUTPUT, @strCountry nvarchar(1000) OUTPUT, @strProvince nvarchar(1000) OUTPUT' , @City , @Region, @strCountry , @strProvince
    EXEC(@PtSql)
    select @VacSearchLocLatitude = Latitude, @VacSearchLocLongitude = Longitude from #LatLong
END
drop table #LatLong
[/code]

And following is the final SQL query formed (printed from the above code):

[code="sql"]
Select CityLatitude, CityLongitude, ci.CityInternalID
from GeoData.TB_City ci
left join (select CityInternalID,CityName,CityTranslationStatusID from GeoData.TB_CityTranslation UNION select CityInternalID, CitySynonymName, CitySynonymStatusID from GeoData.TB_CitySynonym ) cits on ci.CityInternalId = cits.CityInternalID
inner join ( select RegionInternalID,RegionDefaultName,RegionStatusID from GeoData.TB_Region UNION select RegionInternalID,RegionName,RegionTranslationStatusID from GeoData.TB_RegionTranslation UNION select RegionInternalID, RegionSynonymName, RegionSynonymStatusID from GeoData.TB_RegionSynonym) re on ci.CityRegionID = re.RegionInternalID
inner join ( select ProvinceInternalID,ProvinceDefaultName,ProvinceStatusID, ProvinceLevel from GeoData.TB_province UNION select ProvinceInternalID,ProvinceName,ProvinceTranslationStatusID, ProvinceLevel from GeoData.TB_provinceTranslation UNION select ProvinceInternalID,ProvinceSynonymName,ProvinceSynonymStatusID, ProvinceLevel from GeoData.TB_ProvinceSynonym) pr on pr.ProvinceInternalID in (ci.CityProvinceID_1, ci.CityProvinceID_2,ci.CityProvinceID_3,ci.CityProvinceID_4)
inner join ( select CountryInternalID,CountryDefaultName,CountryStatusID from GeoData.TB_Country UNION select CountryInternalID,CountryName,CountryTranslationStatusID from GeoData.TB_CountryTranslation UNION select CountryInternalID,CountrySynonymName,CountrySynonymStatusID from GeoData.TB_CountrySynonym) co on ci.CityCountryId = co.CountryInternalID
where ((CityDefaultName = 'aadorp' and ci.CityStatusID = 1) or (CityName = 'aadorp' and cits.CityTranslationStatusID = 1))
and re.RegionDefaultName = 'europe' and re.RegionStatusID = 1
and ((pr.ProvinceDefaultName in ('Overijssel', 'Utrecht') and co.CountryDefaultName in ('netherlands') and pr.ProvinceLevel = 1)
  OR (pr.ProvinceDefaultName in ('Overijssel', 'Utrecht') and co.CountryDefaultName in ('') and pr.ProvinceLevel = 2)
  OR (pr.ProvinceDefaultName in ('Overijssel', 'Utrecht') and co.CountryDefaultName in ('') and pr.ProvinceLevel = 3)
  OR (pr.ProvinceDefaultName in ('Overijssel', 'Utrecht') and co.CountryDefaultName in ('') and pr.ProvinceLevel = 4))
and pr.ProvinceStatusID = 1
[/code]

All parameters are optional except @City. If there is a more efficient way (I'm sure there is), please suggest it. Thank you.
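One alternative worth testing before tuning the dynamic string any further is the static "catch-all" pattern with OPTION (RECOMPILE), which lets the optimizer fold away the branches for parameters that are empty on each call; a simplified sketch using only the city filter from the posted query:

    SELECT ci.CityLatitude, ci.CityLongitude, ci.CityInternalID
    FROM GeoData.TB_City ci
    WHERE (@City = '' OR ci.CityDefaultName = @City)
      AND ci.CityStatusID = 1
    OPTION (RECOMPILE);

Separately, replacing the UNION operators inside the derived tables with UNION ALL (where duplicates are impossible or harmless) removes a sort/distinct per join input, which is often a large share of the cost in queries shaped like this.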

Bushy plan vs left-deep

Posted: 20 Mar 2013 07:09 PM PDT

I have read about bushy plans and left-deep plans in some articles. A bushy plan will consume more memory as it runs branches in parallel. Is it advisable to write a query that resolves to a bushy plan? Apart from CTEs, will the code below also use a bushy plan or not?

    select *
    from (select * from client a, security_account b where a.cli_id = b.cli_id) a,
         (select * from client a, security_account b where a.cli_id = b.cli_id) b
    where a.cli_id = b.cli_id

Under which circumstances do we have to write a query that uses a bushy plan?

How to report on historical movements within a changing hierarchy

Posted: 20 Mar 2013 10:33 AM PDT

I'm in an IT environment where we want to:

1. run queries which will show rates of consumption of entities that occasionally move between nodes of a hierarchy,
2. AND correlate the rates of consumption of those entities to the nodes at the same time.

At this time, a senior developer has developed a lookup table of hierarchy paths corresponding to entities, but the moment an entity's position in the hierarchy changes, the record of its old hierarchy path is lost. What approach would you use to preserve the relationship of an entity with a node path it used to belong to?

sp_addlinkedserver not able to execute with other command line

Posted: 20 Mar 2013 11:15 AM PDT

Hi there, I am fairly new to T-SQL and am trying to link to another server's database with sp_addlinkedserver. I am able to make the connection when executing only the sp_addlinkedserver call, and then pull the data from the remote server. But when I put everything together, sp_addlinkedserver stops working and keeps prompting me that the target server is not in sys.servers. sp_addlinkedserver also will not work if it is executed from a stored procedure.

Working:

[code="sql"]
EXECUTE sp_addlinkedserver
    @server = 'pHpnng',
    @srvproduct = '',
    @provider = 'SQLNCLI',
    @datasrc = 'pHpnng',
    @provstr = 'DRIVER=SQL Server;SERVER=pHpnng;UID=sa;PWD=Spm2009!;'
END
SELECT * FROM sys.servers WHERE data_source = 'pHpnng'
[/code]

Not working (the additional line causes everything to stop working, and 'pHpnng' does not appear in sys.servers at the end of the execution):

[code="sql"]
EXECUTE sp_addlinkedserver
    @server = 'pHpnng',
    @srvproduct = '',
    @provider = 'SQLNCLI',
    @datasrc = 'pHpnng',
    @provstr = 'DRIVER=SQL Server;SERVER=pHpnng;UID=sa;PWD=Spm2009!;'
END
SELECT * FROM [pHpnng].[OTestSystemDB].[dbo].[ListName]
[/code]

This only works if I run the first code block, then run the SELECT * FROM [pHpnng].[OTestSystemDB].[dbo].[ListName] separately.
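The likely cause: the whole batch is compiled before any statement runs, and a four-part name referencing a linked server that does not yet exist fails that compilation (the stray END in the posted batches also looks like a paste leftover). Deferring the query with dynamic SQL makes the name resolve only at execution time; a sketch, with the password elided:

    EXECUTE sp_addlinkedserver
        @server = 'pHpnng',
        @srvproduct = '',
        @provider = 'SQLNCLI',
        @datasrc = 'pHpnng',
        @provstr = 'DRIVER=SQL Server;SERVER=pHpnng;UID=sa;PWD=...;';

    -- the four-part name is resolved when EXEC runs, not when the batch compiles
    EXEC ('SELECT * FROM [pHpnng].[OTestSystemDB].[dbo].[ListName]');

Running the two statements as separate batches (a GO between them) achieves the same thing, which is why executing the SELECT separately already worked.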

Debugging stacked CTEs efficently

Posted: 20 Mar 2013 06:16 AM PDT

I'm curious whether anyone else has figured out a good system for performing analysis on a query that may have multiple stacked CTEs. While some problems can be resolved by considering the T-SQL and making logical evaluations, sometimes it's nice to be able to see the intermediate data so you can see what is really going on under the hood.

When I have such a query, I follow these steps:

1) add a /* and */ delimiter on the bottom part of the query to comment out all subsequent steps
2) comment out the line after the closing parenthesis of the last CTE to make the next CTE an outer statement
3) run the query, analyze the data
4) uncomment the comment from #2, move the /* to the next CTE and comment out the next line after the new closing parenthesis; repeat until we reach the final statement

This works, but I wondered if there was a more effective and a bit less time-consuming and error-prone approach that would permit data analysis of the individual steps. Thanks in advance!
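A small illustration of an alternative pattern, using two hypothetical CTEs over system views so it runs anywhere; only the final SELECT is swapped to inspect an earlier step:

    WITH step1 AS (
        SELECT object_id, name
        FROM sys.tables
    ),
    step2 AS (
        SELECT s1.name AS table_name, c.name AS column_name
        FROM step1 s1
        JOIN sys.columns c ON c.object_id = s1.object_id
    )
    --SELECT * FROM step2;  -- the real final statement, disabled while debugging
    SELECT * FROM step1;    -- temporarily select the intermediate CTE instead

Since unreferenced CTEs are legal, only the final SELECT needs editing, which is less error-prone than commenting out whole CTE bodies. Materializing suspect steps into temp tables is the other common approach when the intermediate data needs repeated inspection.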

Running Query on Multiple Database/Servers

Posted: 20 Mar 2013 09:20 AM PDT

Hi, I have a database table that contains all the server details. I have a query and want to run it against all servers available in the table.

[b][u]Sample query:[/u][/b]

[code="sql"]
SELECT DISTINCT Col1, Col2 FROM dbo.Test
[/code]

[b][u]Requirement[/u][/b]
1. Connect to one server.
2. Run the query against all the user databases and save the results into a table.

Please suggest the best way to do this from SQL Server. Thanks, Shuaib
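For the "all user databases on one server" part, the undocumented but widely used sp_MSforeachdb can drive the query; a sketch, assuming Col1 and Col2 are ints and the table is named Test in each database that has it:

    CREATE TABLE #Results (DatabaseName sysname, Col1 int, Col2 int);

    EXEC sp_MSforeachdb '
    IF DB_ID(''?'') > 4  -- skip master, tempdb, model, msdb
       AND EXISTS (SELECT 1 FROM [?].sys.tables WHERE name = ''Test'')
        INSERT INTO #Results (DatabaseName, Col1, Col2)
        SELECT DISTINCT ''?'', Col1, Col2 FROM [?].dbo.Test;';

    SELECT * FROM #Results;

For the multi-server part, registering the servers from the details table as a Central Management Server group (or looping over linked servers) are the usual options.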

Add variable number of rows into a table based on the values in another table (without cursors/while loops)

Posted: 20 Mar 2013 05:46 AM PDT

Below is a simplified version of my tables:

    set nocount on
    drop table #x
    create table #x (docid int, pages int)
    insert into #x values (1, 1)
    insert into #x values (2, 5)
    insert into #x values (3, 2)
    insert into #x values (4, 3)
    select * from #x;

    drop table #y
    create table #y (docid int, pagenumber int)
    insert into #y values (1, 1)
    insert into #y values (2, 1)
    insert into #y values (2, 2)
    insert into #y values (2, 3)
    insert into #y values (2, 4)
    insert into #y values (2, 5)
    insert into #y values (3, 1)
    insert into #y values (3, 2)
    insert into #y values (4, 1)
    insert into #y values (4, 2)
    insert into #y values (4, 3)
    select * from #y;
    set nocount off

So basically I have an input table #x with a docid and the total number of pages for that docid. How can I construct the output table #y that has n rows per #x.docid, where n is #x.pages?

I can do it with cursors or while loops etc. in a few different ways (either per docid or one insert/select per distinct #x.pages value). I am wondering whether there is a set-based T-SQL solution to this. Can CTEs somehow be used for this? I am ultimately writing code for SQL Server 2008 R2 Enterprise Edition. Any help is greatly appreciated! Thanks.
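A set-based version using an ad hoc numbers (tally) CTE; sys.all_objects is used only as a convenient row source and comfortably covers page counts in the thousands:

    WITH n AS (
        SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS num
        FROM sys.all_objects
    )
    INSERT INTO #y (docid, pagenumber)
    SELECT x.docid, n.num
    FROM #x AS x
    JOIN n ON n.num <= x.pages;

A permanent numbers table (or CROSS JOIN-ing sys.all_objects with itself for larger ranges) makes the same join pattern scale further.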
