Monday, May 20, 2013

[how to] Server permissions of an activation stored procedure of a Service Broker queue


Server permissions of an activation stored procedure of a Service Broker queue

Posted: 20 May 2013 07:29 PM PDT

I have a stored procedure that queries the sys.dm_exec_requests view. Inside the stored procedure the view returns only one row, while the stored procedure needs to see all of them. The MSDN article on the view says that what is returned depends on the user's permission:

If the user has VIEW SERVER STATE permission on the server, the user will see all executing sessions on the instance of SQL Server; otherwise, the user will see only the current session.

The stored procedure is actually an activation stored procedure of a broker queue:

CREATE QUEUE test_queue
    WITH
    STATUS = ON,
    RETENTION = OFF,
    ACTIVATION (
        STATUS = ON,
        PROCEDURE_NAME = test_procedure,
        MAX_QUEUE_READERS = 1,
        EXECUTE AS SELF ),
    POISON_MESSAGE_HANDLING (STATUS = ON)
    ON [PRIMARY]

When I read the MSDN article, I changed

EXECUTE AS SELF  

to

EXECUTE AS 'dbo'  

which did not make any difference. sys.dm_exec_requests would still return a single row.

Also I tried to do

EXECUTE AS OWNER  

Which made no difference either. As I understand it, a stored procedure's owner is whoever created it, which was me. And since I'm in the sysadmin role, that should work, but it does not. I'd appreciate any troubleshooting tips.

In particular, I would like to know how to list the server permissions of a given user, so I can check whether they include the VIEW SERVER STATE permission mentioned in the article.

The confusing part is that a user, as I understand it, exists at the database level, so it's unclear to me how it can hold a server-level permission. If you could clarify that too, it would be great.
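
For reference, one way to inspect server-level permissions is via the server catalog views, or by impersonating the login and calling fn_my_permissions. A sketch (the login name 'some_login' is a placeholder):

```sql
-- Server-level permissions belong to logins, not database users.
-- List them for a specific login:
SELECT pr.name, pe.permission_name, pe.state_desc
FROM sys.server_principals AS pr
JOIN sys.server_permissions AS pe
    ON pe.grantee_principal_id = pr.principal_id
WHERE pr.name = 'some_login';

-- Or impersonate the login and ask directly:
EXECUTE AS LOGIN = 'some_login';
SELECT * FROM fn_my_permissions(NULL, 'SERVER');
REVERT;
```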

Creating oracle database physical files from migrated DB

Posted: 20 May 2013 06:56 PM PDT

I migrated a database from SQL Server to Oracle 11g, and there don't seem to be any physical files created after the migration.

The client requires a copy of the database so they can clone it wherever they need. How do I create the files from the existing database? I come from SQL Server, where it's easy to get the files with a few clicks. Is there a similar feature in Oracle 11g? If not, are there any alternatives?
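
One common way to hand over a portable copy of an Oracle database is a Data Pump export; a minimal sketch, where the credentials, connect string, schema name, and directory object are all placeholders for your environment:

```shell
# Export one schema to a dump file the client can import elsewhere
# with the matching impdp command.
expdp system/password@ORCL schemas=MYSCHEMA \
    directory=DATA_PUMP_DIR dumpfile=myschema.dmp logfile=myschema.log
```

For a byte-for-byte clone of the physical files, an RMAN backup or a cold copy of the datafiles would be the alternative; which fits depends on how the client intends to restore.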

Why does fn_my_permissions return empty rowset? [closed]

Posted: 20 May 2013 07:34 PM PDT

If I run this query:

select * From fn_my_permissions(NULL, 'SERVER')  

This is what I'm getting:

entity_name   subentity_name  permission_name
------------- --------------- -------------------------------
server                        CONNECT SQL
server                        SHUTDOWN
server                        CREATE ENDPOINT
server                        CREATE ANY DATABASE
server                        CREATE AVAILABILITY GROUP
server                        ALTER ANY LOGIN
server                        ALTER ANY CREDENTIAL
server                        ALTER ANY ENDPOINT
server                        ALTER ANY LINKED SERVER
server                        ALTER ANY CONNECTION
server                        ALTER ANY DATABASE
server                        ALTER RESOURCES
server                        ALTER SETTINGS
server                        ALTER TRACE
server                        ALTER ANY AVAILABILITY GROUP
server                        ADMINISTER BULK OPERATIONS
server                        AUTHENTICATE SERVER
server                        EXTERNAL ACCESS ASSEMBLY
server                        VIEW ANY DATABASE
server                        VIEW ANY DEFINITION
server                        VIEW SERVER STATE
server                        CREATE DDL EVENT NOTIFICATION
server                        CREATE TRACE EVENT NOTIFICATION
server                        ALTER ANY EVENT NOTIFICATION
server                        ALTER SERVER STATE
server                        UNSAFE ASSEMBLY
server                        ALTER ANY SERVER AUDIT
server                        CREATE SERVER ROLE
server                        ALTER ANY SERVER ROLE
server                        ALTER ANY EVENT SESSION
server                        CONTROL SERVER

(31 row(s) affected)

However if I run this:

execute ('select * From fn_my_permissions(NULL, ''SERVER'')') AS USER = 'dbo'  

It returns an empty record set, and I would like to know why. I expect it to return the permissions of a database user called 'dbo'; I do not believe it has none.

MSDN: fn_my_permissions

Note that the dbo user exists in every database and usually has a principal_id of 1, as can be seen by running this query in a database:

select * from sys.database_principals  

I would like to know how I can get the server-level permissions listed above for an arbitrary user. This MSDN article says:

If the user has VIEW SERVER STATE permission on the server, the user will see all executing sessions on the instance of SQL Server; otherwise, the user will see only the current session.

So I would like to get all server permissions for an arbitrary user to determine whether they meet the article's criteria.
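
One plausible explanation worth testing: EXECUTE AS USER establishes a database-scoped security context, so server-scoped permission checks come back empty; impersonating a login keeps a server-level identity. A sketch (the login name is a placeholder):

```sql
-- Database-user impersonation: server-scoped checks return nothing.
EXECUTE AS USER = 'dbo';
SELECT * FROM fn_my_permissions(NULL, 'SERVER');
REVERT;

-- Login impersonation retains a server-level identity,
-- so server permissions can be evaluated.
EXECUTE AS LOGIN = 'some_login';
SELECT * FROM fn_my_permissions(NULL, 'SERVER');
REVERT;
```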

Background:

I have a stored procedure that queries sys.dm_exec_requests view. In the stored procedure only one row is returned, while the stored procedure needs to see all of them. The MSDN article on the view says that what is returned depends on the user permission as per above.

The stored procedure is actually an activation stored procedure of a broker queue:

CREATE QUEUE test_queue
    WITH
    STATUS = ON,
    RETENTION = OFF,
    ACTIVATION (
        STATUS = ON,
        PROCEDURE_NAME = test_procedure,
        MAX_QUEUE_READERS = 1,
        EXECUTE AS SELF ),
    POISON_MESSAGE_HANDLING (STATUS = ON)
    ON [PRIMARY]

When I read the MSDN article, I changed

EXECUTE AS SELF  

to

EXECUTE AS 'dbo'  

which did not make any difference; sys.dm_exec_requests would still return a single row. So naturally I decided to check whether the dbo user (which, as I understand it, is supposed to be a superuser) has the VIEW SERVER STATE permission. That is how this question was born.

Also I tried to do

EXECUTE AS OWNER  

Which made no difference either. As I understand it, a stored procedure's owner is whoever created it, which was me. And since I'm in the sysadmin role, that should work, but it does not. So I'd appreciate any troubleshooting tips.

Is this a good strategy for importing a large amount of data and decomposing as an ETL?

Posted: 20 May 2013 06:41 PM PDT

I have a set of five tables (a highly decomposed schema for an ETL, if I understand the nomenclature) that I'm going to load via bulk import, then run some inserts from those five tables into a slew of other tables, including inserts that rely only on the values I just inserted into the first tables.

I can do my inserts as an A, B, C process, where I insert into the first table, then insert into some table S where exists in A + T (being some other table that has preloaded "configuration data"), then inserting into Z where exists in B + U, etc.

Should I be trying to batch those inserts with a cursor (I know, stone the traitor), or should I just run the raw INSERT INTO scripts and let the server die a thousand heat deaths? Basically I'm worried about starving the server or causing it to collapse from too much index maintenance or something else related to the inserts.

Should I stage out the inserts as:

  1. Insert into one set of tables
  2. Insert into secondary tables based on the first tables
  3. Insert into tertiary tables, etc.

OR should I insert into all the tables where the data is needed, but do it via cursor, in a "for loop" style pattern of 100k rows at a time?

FWIW, this is a behavior I saw from the DBAs at my last job, so I figure that's "what I'm supposed to do" (the batch process via cursors) but maybe I don't understand enough about what they were doing (they were also live-loading into systems that already had data, and were loading new data afterwards).

Also bear in mind that I'm normally a C# dev, but I've got the most TSQL experience here and I'm trying to make the best process I can for raw-loading this data as opposed to our "current" method that is mostly webservice fetches and NHibernate save-commits.

Things I think are important to the question:

  1. There will be no other load on this server when I do this, I will have complete and total control of the box, and I'm the only user interested here (this is the initial data load before anyone else can do anything with the box)
  2. There are indexes on all the tables, where appropriate, and there are FKs and other predicates
  3. The entire db system is in use for slews of other clients right now, so I know the indexes are needed for operation, but if I should do something to disable those and re-enable them to make the inserts etc faster, I'm looking for those sorts of optimization hacks.
  4. I'm waiting on sample data but I expect the maximum records in a given table to be about a million rows, fairly wide, and that I'm going to insert into those other tables at most a million rows for the ancillary tables that can come secondary or tertiary.
  5. The hardware is "average".
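
A middle ground between one giant INSERT ... SELECT and a row-by-row cursor is set-based batching in a loop; a sketch (the table and column names are invented for illustration):

```sql
-- Insert in 100k-row chunks so each transaction, and the log growth
-- it causes, stays bounded; loop until no rows remain to copy.
DECLARE @batch int = 100000, @rows int = 1;

WHILE @rows > 0
BEGIN
    INSERT INTO dbo.TargetTable (Id, Payload)
    SELECT TOP (@batch) s.Id, s.Payload
    FROM dbo.StagingTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t WHERE t.Id = s.Id);

    SET @rows = @@ROWCOUNT;
END;
```

This keeps the set-based performance of plain inserts while avoiding one enormous transaction; the staged A, B, C ordering described above still applies between tables.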

Is it "legal" TSQL to CREATE TABLE #SomeTable and then DROP TABLE #SomeTable repeatedly in a script?

Posted: 20 May 2013 06:01 PM PDT

Is it "legal" TSQL to CREATE TABLE #SomeTable and then DROP TABLE #SomeTable repeatedly in a script?

I've got my code sort of segregated as "coherent blocks" that I can insert into a longer "configuration script" over and over, and one of the patterns I'm using is this:

CREATE TABLE #WidgetSetting
(
    WidgetID bigint not null,
    Name nvarchar(100) not null,
    Value nvarchar(max) not null,
    CreateDate datetime not null
)

INSERT VALUES

MERGE TABLES

DROP TABLE #WidgetSetting

But now SSMS is complaining that the object already exists by the next time the CREATE TABLE fires. What gives?

I think it's obvious I'm going to have to declare the table once at the beginning of the script, truncate instead of delete from, but it's frustrating, naturally, to not be able to just drop the table and use the same name again.
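
One pattern that usually sidesteps the parse-time complaint is a guarded drop plus batch separators, so each CREATE TABLE is compiled in its own batch; a sketch reusing the table from above:

```sql
-- Drop the temp table if a previous block left it behind,
-- then recreate it in a fresh batch.
IF OBJECT_ID('tempdb..#WidgetSetting') IS NOT NULL
    DROP TABLE #WidgetSetting;
GO

CREATE TABLE #WidgetSetting
(
    WidgetID   bigint        NOT NULL,
    Name       nvarchar(100) NOT NULL,
    Value      nvarchar(max) NOT NULL,
    CreateDate datetime      NOT NULL
);

-- ... INSERT / MERGE work here ...

DROP TABLE #WidgetSetting;
GO
```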

ERROR 13 (HY000) When Creating Database

Posted: 20 May 2013 06:17 PM PDT

I'm trying to create a database, and when I type in CREATE DATABASE epik; I get the error ERROR 13 (HY000): Can't get stat of './epik' (Errcode: 13). I am running Kubuntu 13.04 with MySQL 5.5.31.
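
Errcode 13 is a filesystem permission error: the mysqld process cannot stat its own data directory. A sketch of the usual fix, assuming the default Debian/Ubuntu data directory (adjust the path if yours differs):

```shell
# Restore ownership of the MySQL data directory to the mysql user,
# then restart the server so it re-reads the directory.
sudo chown -R mysql:mysql /var/lib/mysql
sudo service mysql restart
```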

Backup very large table

Posted: 20 May 2013 02:22 PM PDT

I'm deeply sorry if this is an amateurish question, but I have to update certain values of a large table (for the sake of example, call it 'Resource'; it has over 5M rows), and thus I have to make a backup before performing the changes. We don't have enough free DB space to store the fully backed-up table.

Which is the best way? Is there a way to do this by blocks? I mean something like: backing up the first 100K rows from the original table, updating those 100K rows in the original table, deleting those 100K rows from the backed-up table, backing up the following 100K rows from the original table, and proceeding analogously. Is this feasible?
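
The block-by-block approach is feasible; a sketch in generic SQL, where the table names, key ranges, and the update itself are invented for illustration:

```sql
-- 1. Back up one 100K-row slice.
INSERT INTO Resource_backup
SELECT * FROM Resource
WHERE id BETWEEN 1 AND 100000;

-- 2. Update that same slice in the original table.
UPDATE Resource
SET some_column = 'new value'
WHERE id BETWEEN 1 AND 100000;

-- 3. Once the slice is verified, free the backup space
--    before moving on to the next range.
DELETE FROM Resource_backup
WHERE id BETWEEN 1 AND 100000;
```

Note that depending on the engine, deleted rows may not return space to the filesystem immediately, so it's worth verifying free space actually recovers between slices.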

How to schedule PostgreSQL replication?

Posted: 20 May 2013 05:13 PM PDT

I was reading about PostgreSQL replication solutions, but even as I'm just starting to understand how they work, another doubt has arisen. I'll be using Postgres' built-in replication, but as far as I understand, every event is replicated as soon as it happens; for example, if I insert some data into the master, it is automatically replicated to the slave... Am I right?

Well, if that is so: I was searching for a way to schedule this replication as a passive backup. For example, every Monday night the database should be replicated, with all the data inserted into it that day, to a slave database on Amazon EC2 virtual servers. I was thinking of using cron as a first approach, but reading the Postgres documentation I gathered that, practically, you only configure the necessary files and Postgres does the rest for you...

So, how could I schedule replication in just desired moments?

P.S. The schedule may vary, so it needs to be completely generic. I'm using Ubuntu 12.04 and PostgreSQL 9.1 on both the master and the slave. Thanks.
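
If a point-in-time copy on a schedule is the real goal, a cron-driven base backup may fit better than always-on streaming replication; a sketch, where the host, destination path, and schedule are all placeholders:

```shell
# Crontab entry (e.g. in /etc/cron.d/): every Monday at 23:00,
# take a fresh base backup of the master into a dated directory.
# 0 23 * * 1  postgres  pg_basebackup -h master.example.com \
#                 -D /backups/$(date +\%F) -Fp -x
```

This trades the continuous synchronization of streaming replication for discrete, schedulable copies, which sounds closer to the "passive backup" described above.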

Do certain string patterns affect query performance on a VARCHAR column?

Posted: 20 May 2013 07:58 PM PDT

I am a beginner on MySQL Administration and performance problems. Now this is confusing me.

I have a VARCHAR column identity; the string in it consists of a class type and an id, like note:123 or user:4, identifying a row in one type of entity.

There are 5 class names for now.

Since the string could be constructed as either note:123 or 123:note (either form carries the same meaning), I am wondering whether 123:note will give better query performance for select * from table where identity='123:note' than the other form. I am thinking that 123:note has more variety at the beginning of the string, because the id is at the front.

Or it just doesn't matter?

UPDATE: the identity field is just a redundant field for querying; the type and id are actually stored in two other fields in the same table.

So:

1st: where type='note' and id=123 is definitely faster than where identity in ('note:123', 'user:456'). I think it's definitely faster, right?

2nd: if I have to keep this redundant column, which pattern, note:123 or 123:note, is faster when I do where identity in ('note:123', 'user:456'), given that the length of identity is 20?

And here is the create info; likeable_identity is the identity field I am talking about.

CREATE TABLE `likes` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `user_id` int(11) DEFAULT NULL,
  `created_at` int(11) NOT NULL,
  `likeable_id` int(11) DEFAULT NULL,
  `likeable_type` varchar(255) DEFAULT NULL,
  `likeable_identity` varchar(255) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `index_likes_on_user_id_and_likeable_id_and_likeable_type` (`user_id`,`likeable_id`,`likeable_type`),
  KEY `index_likes_on_user_id` (`user_id`)
) ENGINE=InnoDB AUTO_INCREMENT=8814 DEFAULT CHARSET=utf8;
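
For what it's worth, in the schema above neither string pattern can use an index at all, because likeable_identity is not indexed; a sketch of adding one (assuming lookups are always full-string equality, as in the examples):

```sql
-- For B-tree equality lookups, 'note:123' vs '123:note' makes no
-- practical difference; having an index on the column is what matters.
ALTER TABLE `likes`
    ADD KEY `index_likes_on_likeable_identity` (`likeable_identity`);
```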

Thanks.

ORDER BY optimization issue

Posted: 20 May 2013 02:17 PM PDT

I have a query:

SELECT
    user_details.id, user_details.first_name, user_details.last_name,
    user_accounts.name account_name, jtl0.account_id account_id,
    user_details.title, user_details.job_function,
    user_details.contact_data_status, user_details.assigned_user_id
FROM user_details
LEFT JOIN user_accounts_user_details jtl0
    ON user_details.id = jtl0.contact_id AND jtl0.deleted = 0
LEFT JOIN user_accounts user_accounts
    ON user_accounts.id = jtl0.account_id AND user_accounts.deleted = 0
WHERE user_details.deleted = 0
ORDER BY account_name ASC LIMIT 0,21;

Without the ORDER BY / LIMIT, the query returns immediately, but adding ORDER BY account_name ASC makes it take 5 minutes. I have tried several combinations of indexes on user_accounts, with no success.

Here is the EXPLAIN result of the query:

+----+-------------+---------------+--------+--------------------------------------------------------------+-----------------------+---------+-------------------------------+---------+----------------------------------------------+
| id | select_type | table         | type   | possible_keys                                                | key                   | key_len | ref                           | rows    | Extra                                        |
+----+-------------+---------------+--------+--------------------------------------------------------------+-----------------------+---------+-------------------------------+---------+----------------------------------------------+
|  1 | SIMPLE      | user_details  | ref    | idx_contacts_del_last,idx_cont_del_reports,idx_del_id_user   | idx_contacts_del_last | 2       | const                         | 1554822 | Using where; Using temporary; Using filesort |
|  1 | SIMPLE      | jtl0          | ref    | idx_contid_del_accid                                         | idx_contid_del_accid  | 113     | salesify_db.contacts.id,const |       1 | Using index                                  |
|  1 | SIMPLE      | user_accounts | eq_ref | PRIMARY,idx_accnt_id_del,idx_accnt_assigned_del,idx_oss_name | PRIMARY               | 108     | salesify_db.jtl0.account_id   |       1 |                                              |
+----+-------------+---------------+--------+--------------------------------------------------------------+-----------------------+---------+-------------------------------+---------+----------------------------------------------+

No of records in user_accounts: around 3,000,000
No of records in user_details: around 3,400,000
No of records in user_accounts_user_details: around 2,500,000

Index in user_accounts

PRIMARY KEY (`id`),
KEY `idx_accnt_id_del` (`id`,`deleted`),
KEY `idx_accnt_name_del` (`name`,`deleted`),
KEY `idx_accnt_assigned_del` (`deleted`,`assigned_user_id`),
KEY `idx_accnt_parent_id` (`parent_id`),
KEY `idx_oss_name` (`id`,`name`(15))
) ENGINE=InnoDB DEFAULT CHARSET=utf8

Index in user_details

PRIMARY KEY (`id`),
KEY `idx_cont_last_first` (`last_name`,`first_name`,`deleted`),
KEY `idx_contacts_del_last` (`deleted`,`last_name`),
KEY `idx_cont_del_reports` (`deleted`,`reports_to_id`,`last_name`),
KEY `idx_reports_to_id` (`reports_to_id`),
KEY `idx_del_id_user` (`deleted`,`id`,`assigned_user_id`),
KEY `idx_cont_assigned` (`assigned_user_id`),
KEY `primary_address_idx` (`primary_address_country`),
KEY `idx_oss_dt` (`date_entered`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

Index in user_accounts_user_details

PRIMARY KEY (`id`),
KEY `idx_account_contact` (`account_id`,`contact_id`),
KEY `idx_contid_del_accid` (`contact_id`,`deleted`,`account_id`)
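
One experiment worth trying: the filesort happens because ~1.5M user_details rows are joined before the sort and LIMIT are applied. If most detail rows do have an account, so that an inner join is an acceptable approximation for the first page (this does change the LEFT JOIN semantics, so treat it strictly as a sketch), the query can be driven from user_accounts in index order:

```sql
-- Reads accounts already ordered by name (via idx_accnt_name_del),
-- so the LIMIT can stop after a few matched rows instead of
-- sorting the whole joined result.
SELECT ud.id, ud.first_name, ud.last_name,
       ua.name AS account_name, jtl0.account_id,
       ud.title, ud.job_function,
       ud.contact_data_status, ud.assigned_user_id
FROM user_accounts ua
JOIN user_accounts_user_details jtl0
    ON jtl0.account_id = ua.id AND jtl0.deleted = 0
JOIN user_details ud
    ON ud.id = jtl0.contact_id AND ud.deleted = 0
WHERE ua.deleted = 0
ORDER BY ua.name ASC
LIMIT 0, 21;
```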

SQL agent job step's connection managers

Posted: 20 May 2013 08:56 PM PDT

Is there any way with T-SQL to list the connection managers that are used in an SQL Agent Job step?
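
There is no dedicated catalog of connection managers per job step, but as a starting point the step's command line (which names the package and any /CONNECTION overrides) can be pulled from msdb; a sketch:

```sql
-- List SSIS job steps and their full command text; connection
-- manager overrides, if any, appear as /CONNECTION arguments.
SELECT j.name AS job_name, s.step_name, s.command
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobsteps AS s
    ON s.job_id = j.job_id
WHERE s.subsystem = 'SSIS';
```

Connection managers defined inside the package itself would still need to be read from the .dtsx XML.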

Should we periodically 'refresh' the mirrored copy of a SQL Server database?

Posted: 20 May 2013 05:48 PM PDT

When you have SQL Server mirroring set up, do you trust that your mirror database will stay the same as the primary, even after months and months of mirroring?  I know we all check the status, and of course it says the database is synchronized...  but I worry that somehow the mirror might fall out of sync due to network glitches, software bugs etc. and the fault won't be detected.  Then one day we need the mirror and we're horrified to learn that it's corrupted.

One thing I could do is periodically refresh the mirror from the primary, since our database is not terribly large.  But perhaps I'm being overly paranoid.

I would like to hear from DBAs, either of SQL Server or any other platform, as to whether you share this concern about the integrity of mirror DBs, and if so, what you do to mitigate that concern.  Do you occasionally re-copy your primary DB over to the mirror?  Do you take other measures to ensure consistency between the two?  Thanks for any advice you can provide.
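
One concrete mitigation short of re-copying the whole database: on editions that allow database snapshots, you can periodically create a snapshot of the mirror and run an integrity check against it. A sketch (database name and file path are placeholders, and the logical file name must match the source database's):

```sql
-- Snapshot the mirror and integrity-check the snapshot;
-- the mirror itself is not directly readable.
CREATE DATABASE MyDb_Snap ON
    (NAME = MyDb_Data, FILENAME = 'C:\Snaps\MyDb_Snap.ss')
AS SNAPSHOT OF MyDb;

DBCC CHECKDB ('MyDb_Snap') WITH NO_INFOMSGS;

DROP DATABASE MyDb_Snap;
```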

Drop partition not freeing disk space

Posted: 20 May 2013 03:41 PM PDT

I am running MySql innodb with innodb_file_per_table.

As a daily script, I create a new partition for a set of tables and drop yesterday's partitions. I have noticed, via the df command, that disk space is not being freed. What else needs to be done to free up the disk space? I would like to avoid a system that runs out of disk space because of database disk usage, and I would also like to avoid having to shut down MySQL.

Edit: I have read that discarding the tablespace should delete the .ibd files that take up most of the space.

ALTER TABLE tbl_name DISCARD TABLESPACE;  

However there is a known bug using this command on partitioned tables.
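
A quick check worth running first: if the partition's .ibd files were unlinked but some process still holds them open, df will not show the space as freed until the file handles close. lsof can confirm whether that is what's happening:

```shell
# Files deleted from the filesystem but still held open by a process
# continue to consume disk space until the process closes them.
lsof | grep deleted
```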

Startup procedure for availability replica becoming primary

Posted: 20 May 2013 05:27 PM PDT

Is there any way how to create a startup T-SQL script/procedure for starting availability replica (when state has changed from secondary to primary)?

I want to disable SSIS jobs specific for related availability groups.
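
There is no built-in "on becoming primary" hook, but a role check that an Agent job (run on a schedule, or triggered by a failover alert) could use is sketched below; the availability group and job names are placeholders:

```sql
-- If the local replica is now PRIMARY for this availability group,
-- disable the SSIS job tied to the (now remote) workload.
IF EXISTS (
    SELECT 1
    FROM sys.dm_hadr_availability_replica_states AS rs
    JOIN sys.availability_groups AS ag
        ON ag.group_id = rs.group_id
    WHERE rs.is_local = 1
      AND rs.role_desc = 'PRIMARY'
      AND ag.name = 'MyAvailabilityGroup'
)
BEGIN
    EXEC msdb.dbo.sp_update_job @job_name = 'MySsisJob', @enabled = 0;
END;
```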

MySQL Hanging Completely when `ALTER TABLE... ENABLE KEYS`

Posted: 20 May 2013 09:19 PM PDT

I know very little about database administration but I have to deal with some very large tables on my site.

This server has 64GB of RAM and Intel Core i7-3820 (4 x 3600 MHz). Most of everything it does is MySQL. I use half MyISAM and half InnoDB tables.

I have a couple of tables in MyISAM with billions of rows. Every day a script disables keys, adds a few million more rows, then enables keys again. The ALTER TABLE ... ENABLE KEYS causes the server to basically stop for a couple of hours. No website that uses MySQL will load at all, even though they are not accessing the tables being altered.

Please also advise me on how to set up the my.cnf file to fix this issue and to optimize rebuilding these indexes as fast as possible. Someone told me to increase key_buffer_size, but I'm unsure whether this is good advice, as everyone seems to have a different opinion. Currently it looks like this:

[client]
port        = 3306
socket      = /var/lib/mysql/mysql.sock

[mysqld]
port = 3306
socket = /var/lib/mysql/mysql.sock
skip-external-locking
max_allowed_packet = 512M
table_open_cache = 1024
sort_buffer_size = 128M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 24G
thread_cache_size = 12
query_cache_size = 256M
thread_concurrency = 16
log-bin=mysql-bin
binlog_format=mixed
server-id   = 1
innodb_file_per_table = 1
table_cache = 1024
key_buffer = 256M
key_buffer_size = 12G
myisam_repair_threads = 4
big-tables
bind-address = 127.0.0.1
max_connections = 400
tmp_table_size = 4G
max_heap_table_size = 4G
log_bin = /backup/mysql-bin-logs/mysql-bin.log
expire_logs_days        = 10
max_binlog_size         = 100M
innodb_buffer_pool_size = 12G
local-infile=1
net_read_timeout = 1800
net_write_timeout = 1800

[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash
local-infile=1

[myisamchk]
key_buffer_size = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M
key_buffer = 256M

[mysqlhotcopy]
interactive-timeout

MySQL Version

innodb_version          5.5.30
protocol_version        10
version                 5.5.30-log
version_comment         MySQL Community Server (GPL) by Remi
version_compile_machine x86_64
version_compile_os      Linux

UPDATE

I've started a bounty. I changed some of the my.cnf settings (also updated in this post). Then, when I tried to rebuild the indexes on the large table, it started out with Repair with 8 threads (even though the number of repair threads is set to 4), and when I checked a few hours later the same command was on Repair with keycache, which is where it's sitting now. So somehow it degraded from the sort method down to the keycache method (I have no idea why!).

Please help me optimize this! It's supposed to run every day, but currently it takes several days just for ALTER TABLE ... ENABLE KEYS.

Here are some other variables I was asked for, which I do not understand but might help you help me:

+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| key_cache_block_size | 1024  |
+----------------------+-------+

+-----------------+-------------+
| Variable_name   | Value       |
+-----------------+-------------+
| key_buffer_size | 12884901888 |
+-----------------+-------------+

+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| Key_blocks_unused | 0     |
+-------------------+-------+

UPDATE 2 (5/21)

Thinking it would solve my problem, I changed my script to truncate the table entirely, reinsert all the rows, and add the indexes one at a time, instead of disabling keys, adding the new rows, and then enabling keys.

Unfortunately, it did not help, because the index creation still falls back to Repair with keycache.

Here is the result of SHOW CREATE TABLE research_storage1:

CREATE TABLE `research_storage1` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `word1` mediumint(8) unsigned NOT NULL,
  `word2` mediumint(8) unsigned NOT NULL,
  `origyear` smallint(5) unsigned NOT NULL,
  `cat` tinyint(3) unsigned NOT NULL,
  `pibn` int(10) unsigned NOT NULL,
  `page` smallint(5) unsigned NOT NULL,
  `pos` smallint(5) unsigned NOT NULL,
  `num` tinyint(3) unsigned NOT NULL,
  PRIMARY KEY (`id`),
  KEY `pibnpage` (`pibn`,`page`,`word2`,`word1`),
  KEY `word21pibn` (`word2`,`word1`,`pibn`,`num`),
  KEY `word12num` (`word1`,`word2`,`num`),
  KEY `cat1` (`cat`,`word1`),
  KEY `year1` (`origyear`,`word1`),
  KEY `catyear1` (`cat`,`origyear`,`word1`),
  KEY `pibn` (`pibn`,`word1`)
) ENGINE=MyISAM DEFAULT CHARSET=ascii COLLATE=ascii_bin
  DATA DIRECTORY='/storage/researchdb/' INDEX DIRECTORY='/storage/researchdb/';

I also ran this command: SELECT data_length/power(1024,3) dat, index_length/power(1024,3) ndx FROM information_schema.tables WHERE table_schema='dbname' AND table_name='tablename'; The complication is that I currently have two copies of this table: one that is truncated, and one that has all the data but no indexes (the former will be replaced with the latter once the indexes are complete)... the reason being that I am unable to build the damn indexes (hence the problem). Here is the info for the truncated table, then for the table with data but no indexes:

+------+------------------------+
| dat  | ndx                    |
+------+------------------------+
|    0 | 0.00000095367431640625 |
+------+------------------------+

+-------------------+--------------------+
| dat               | ndx                |
+-------------------+--------------------+
| 51.61232269741595 | 27.559160232543945 |
+-------------------+--------------------+

Please also note that the table will be 10x larger than this before all the data is received.

Index optimization

Posted: 20 May 2013 02:24 PM PDT

I'm building a 'tag' system for my blog posts. Currently my tags table has columns tagid MEDIUMINT NOT NULL AUTO_INCREMENT PRIMARY KEY and tag VARCHAR(30) NOT NULL UNIQUE. And I have another table, posttags, describing the many-to-many relation between my blog posts and tags, with columns postid and tagid and a dual PRIMARY KEY(postid, tagid).

Now, when I started learning MySQL, PRIMARY KEYs were all about unique content. But now I understand they are more of a performance optimization.

postid is obviously an auto increment int.

Most of my queries are like :

  • select all from posts join tags where postid = {postid},
  • select all from posts join tags where tag = {tag name}.

Is my current DB structure good enough, or are there any improvements I could make? Also, please suggest which fields I should make indexes, primary keys, or unique.

UPDATE: InnoDB is the engine I'm using. Also, I have not defined any foreign keys, as I don't see any benefit; my PHP side is capable of maintaining the integrity and consistency of the data.
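
For the two query shapes listed, the composite primary key (postid, tagid) already covers lookups by postid; the tag-to-posts direction benefits from the reverse column order as a secondary key. A sketch:

```sql
-- Covers "all posts for a tag" lookups (after resolving the tag
-- name to tagid via the UNIQUE index on tags.tag); the existing
-- PRIMARY KEY (postid, tagid) covers "all tags for a post".
ALTER TABLE posttags ADD KEY idx_tagid_postid (tagid, postid);
```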

Logging queries and other T-SQL

Posted: 20 May 2013 05:34 PM PDT

I would like to know if SQL Server 2008 R2 has a default logging scheme for SELECT statements (or any other T-SQL for that matter).

If yes, where can I see it? If not, how could I set it up?
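
For what it's worth, SQL Server does not log SELECT statements by default. One lightweight way to capture them on 2008 R2 is an Extended Events session writing to a file target; a sketch, where the session name and file path are placeholders:

```sql
-- Capture completed statements (including SELECTs) to a file.
CREATE EVENT SESSION capture_statements ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.sql_text, sqlserver.session_id))
ADD TARGET package0.asynchronous_file_target
    (SET filename = 'C:\Temp\capture_statements.xel');

ALTER EVENT SESSION capture_statements ON SERVER STATE = START;
-- ... later: ALTER EVENT SESSION ... STATE = STOP; and read the .xel files.
```

A server-side SQL Trace or SQL Server Audit are the other common options, with different overhead and retention trade-offs.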

Different queries result in big difference in performance

Posted: 20 May 2013 02:55 PM PDT

I have 2 similar functions, written in different ways, with a big difference in performance.

The PostgreSQL version: 9.2.1.

Function 1

create or replace function func_1()
returns text
as
$$
declare
    v_i integer;
    v_md5 varchar;
begin
    for v_i in 1..1000000 loop
        v_md5 := 'a' || v_i::character varying;
    end loop;

    return 'ok';
end;
$$
language plpgsql;

Function 2

create or replace function func_select()
returns text
as
$$
declare
    v_i integer;
    v_md5 varchar;
begin
    for v_i in 1..1000000 loop
        select 'a' || v_i::character varying into v_md5;
    end loop;
    return 'ok';
end;
$$
language plpgsql;

Function Timings

francs=> \timing
Timing is on.
francs=> select func_1();
 func_1
--------
 ok
(1 row)

Time: 1467.231 ms
francs=> select func_1();
 func_1
--------
 ok
(1 row)

Time: 1383.424 ms
francs=> select func_select();
 func_select
-------------
 ok
(1 row)

Time: 22176.600 ms
francs=> select func_select();
 func_select
-------------
 ok
(1 row)

Time: 23265.755 ms

From the above, function func_1() takes only about 1383 ms, but function func_select() takes about 23265 ms. Can anybody explain this?

Database schema design help needed

Posted: 20 May 2013 02:23 PM PDT

I am developing a PHP application expecting millions of records in both parent and child tables. My goal is an optimized database design that achieves high speed and performance.

This application will have 1M users, each user will have their own dashboard where they can create their own users/roles, pages and posts.

I am thinking about two possible solutions

  1. Use only 1 table for all users, pages and posts and they will have a foreign key of the owner
  2. Use separate tables for all users, like user1_pages, user1_posts, user1_users

I think with #1 queries will be slow, as all users share one table; the second doesn't seem like a perfect solution either, as the number of tables will grow enormously.

Please suggest the best possible solution, and share tips for designing an optimized MySQL schema.

Why are constraints applied in the database rather than the code?

Posted: 20 May 2013 04:35 PM PDT

Why are constraints applied in the database? Wouldn't it be more flexible to have them in code?

I'm reading a beginners book on implementing databases, so this is a beginner's question!

Let's say I design a database model, which has this entity model:

Person, Has subtypes: Employee, Student  Student's subtypes: Graduate, Undergraduate  Employee's subtypes: Teacher, Administrator  

A registered person in the system can only be a Student or an Employee; the Person entity requires uniqueness of the social security number (of which every person has exactly one, of course).

If one day the college decides that teachers (an Employee subtype) can also be Students, taking courses in their free time (it's very improbable, but I can't think of anything better right now), it's much harder to change the database design, which could have thousands of entries, than to change the logic in code that didn't allow a person to be registered both as a student and as an employee.

Why do we care about business rules in database design rather than in code?

Inner join using an array column

Posted: 20 May 2013 03:27 PM PDT

Having trouble indexing and executing a query in O(log n) time.

The query includes an INNER JOIN, an ORDER BY, and an equality operation. If I understand the laws of databases correctly, a query can be indexed and executed in O(log n) time (or thereabouts) if a non-equality operator is not used on more than one column. In this case, I believe the INNER JOIN counts as an equality operator, and the non-equality operator would be the ORDER BY portion of the query. This table has upwards of 10,000,000 rows and needs to handle several reads and writes per second.

Using PostgreSQL. This is what the table looks like. As you can see, the column 'Names' is a list property and it is the column that the INNER JOIN goes against:

Age  Names                       Date
34   ['carla', 'john', 'sam']    3/13/2011
26   ['json', 'cindy', 'joel']   3/13/2011
72   ['beth', 'amber', 'susie']  3/13/2011
14   ['john', 'jim', 'debie']    3/13/2011

This is the query that we are trying to do:

SELECT * FROM the_table WHERE Age==26 AND Names=='john' ORDER BY Date  

My background is with App Engine's Big Table, so I've used equality operators here to indicate that 'john' should be one of the names in the Names column. This would be an acceptable query in GAE's Big Table; it would execute in O(log n) time, as all Big Table queries are required to do. I am assuming there is a way to do this in PostgreSQL as well, since PostgreSQL accepts list data types as columns.

Is this possible to do in PostgreSQL?

If so, how should the index be set up (we can't figure out how to set up an index that takes into account the three properties)?
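For what it's worth, a hedged sketch of how this could look in PostgreSQL, assuming a native array column (`names text[]`) and illustrative table/index names: a GIN index supports the array-containment operator `@>`, and a separate b-tree index covers the equality and sort columns.

```sql
-- Sketch only: assumes the_table(age int, names text[], date date).
-- A GIN index accelerates the @> containment test on the array column.
CREATE INDEX idx_the_table_names ON the_table USING GIN (names);

-- A b-tree index covers the equality filter plus the sort column.
CREATE INDEX idx_the_table_age_date ON the_table (age, date);

-- Array membership is expressed with @> (contains), not =.
SELECT *
FROM the_table
WHERE age = 26
  AND names @> ARRAY['john']
ORDER BY date;
```

Whether the planner combines both indexes (e.g. via a bitmap scan) or picks one depends on the data distribution, so this is a starting point rather than a guaranteed O(log n) plan.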

Should I snowflake or duplicate it across my facts?

Posted: 20 May 2013 08:08 PM PDT

I'm building a data warehouse to be used by SSAS to create cubes on, and I'm debating between two possible schemas. In my warehouse, I've got two different fact tables that track daily changes in dollar values. Each entry in these fact tables has an underlying Sales Order and Line to which it relates. These SOs and Lines then have other related dimensions, such as customer, product, etc.; about 12 sub-dimensions total so far.

My question is if I should be rolling all these sub dimensions up directly into the fact tables, or if I should use a little snowflaking in my warehouse, and have them branching off the Sales Order and Lines dimension instead.

The first option obviously follows the star-schema model better. However, if changes are made, such as adding additional dimensions, it means more maintenance: the ETL basically has to be done for each fact table rather than just once on the SO dimension. As well, if a new fact that relates to Sales Orders is added, I'd have to go through the whole process again.

As this is my first DW/OLAP project, I'm not familiar on where the line should be drawn on snowflaking, and other people's thoughts would be highly appreciated.

Looking for performance improvements for this query

Posted: 20 May 2013 06:22 PM PDT

This query is taking a long time to execute. We are hoping for near-instant (0 sec) results, if possible.

Please help me with any of the following:

  • Rewrite the query
  • Suggest any indexes
  • Any other optimization that may be needed

Also, please help isolate the root cause.

Here is the Query And Explain Plan:

mysql> explain SELECT * FROM (`xgen_studentMLs`)
    LEFT JOIN `xgen_quizzes`
         ON xgen_studentMLs.quizid = xgen_quizzes.idnum
    WHERE
          `student` = 27126
      AND xgen_studentMLs.topic = 1829
      AND xgen_studentMLs.metatype = 'topic'
      AND (xgen_studentMLs.department='Nursing' OR xgen_studentMLs.department='Biology')
      AND (xgen_quizzes.book IS NULL OR xgen_quizzes.book=0)
    ORDER BY xgen_studentMLs.timestamp DESC LIMIT 100;

+----+-------------+-----------------+--------+---------------+---------+---------+------------------------------+------+-----------------------------+
| id | select_type | table           | type   | possible_keys | key     | key_len | ref                          | rows | Extra                       |
+----+-------------+-----------------+--------+---------------+---------+---------+------------------------------+------+-----------------------------+
|  1 | SIMPLE      | xgen_studentMLs | range  | student,mult  | mult    | 265     | NULL                         |   18 | Using where; Using filesort |
|  1 | SIMPLE      | xgen_quizzes    | eq_ref | PRIMARY       | PRIMARY | 4       | prepu.xgen_studentMLs.quizid |    1 | Using where                 |
+----+-------------+-----------------+--------+---------------+---------+---------+------------------------------+------+-----------------------------+

Here are table structures:

mysql> show create table xgen_quizzes\G
*************************** 1. row ***************************
       Table: xgen_quizzes
Create Table: CREATE TABLE `xgen_quizzes` (
  `idnum` int(11) NOT NULL AUTO_INCREMENT,
  `userid` int(11) NOT NULL DEFAULT '0',
  `timestarted` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `timefinished` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `questionlist` mediumtext NOT NULL,
  `topics` mediumtext NOT NULL,
  `totalnum` int(11) NOT NULL DEFAULT '0',
  `completednum` int(11) NOT NULL DEFAULT '0',
  `assignment` int(11) NOT NULL DEFAULT '0',
  `department` varchar(255) NOT NULL DEFAULT '',
  `book` int(11) NOT NULL DEFAULT '0',
  `cqs` mediumtext NOT NULL,
  `metatype` varchar(25) DEFAULT 'topic',
  PRIMARY KEY (`idnum`),
  KEY `userid` (`userid`),
  KEY `assignment` (`assignment`)
) ENGINE=InnoDB AUTO_INCREMENT=13547573 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

mysql> show create table xgen_studentMLs\G
*************************** 1. row ***************************
       Table: xgen_studentMLs
Create Table: CREATE TABLE `xgen_studentMLs` (
  `student` int(11) NOT NULL,
  `topic` int(11) NOT NULL,
  `ML` float NOT NULL,
  `MI` int(3) NOT NULL DEFAULT '0',
  `timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `department` varchar(255) NOT NULL,
  `metatype` varchar(25) DEFAULT 'topic',
  `quizid` int(11) NOT NULL DEFAULT '0',
  KEY `student` (`student`),
  KEY `mult` (`topic`,`department`,`student`,`timestamp`,`ML`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
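One hedged suggestion, untested against this data: the plan shows a filesort on `xgen_studentMLs`, and the WHERE clause is pure equality on `student`, `topic`, and `metatype` (the `department` OR is effectively an IN-list), so an index that leads with the equality columns and ends with the sort column may let MySQL both narrow the range and, depending on how the optimizer handles the `department` OR, reduce or avoid the sort. The index name below is illustrative.

```sql
-- Sketch: equality columns first, then the ORDER BY column.
ALTER TABLE xgen_studentMLs
  ADD INDEX idx_student_topic_meta_ts (student, topic, metatype, timestamp);
```

Comparing `EXPLAIN` output before and after would confirm whether the filesort disappears; if the `department` OR still forces a sort, the LIMIT 100 at least keeps the sorted set small once the equality columns are selective.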

Structuring data in MongoDB for a to-do list type application

Posted: 20 May 2013 07:08 PM PDT

This is a fairly high-level question, as I am in the initial stages of planning this application and am also very new to MongoDB and document-based databases in general. Disclaimer aside, I am building a to-do list type application whose two main types of data are users and items (aka tasks or to-dos), and my question is: what would be a good way to structure this data in MongoDB if I want it optimized more for read speed than for updates?

My initial thought, based on my limited knowledge of mongodb, is that since it allows embedded type data structures I could basically store it in a single collection like this:

{
    username: "joe",
    password: "...",
    items: [
        {
            text: "This is a task",
            completed: 1
        },
        ...
    ]
},
...

However, will this make it more difficult to update/remove existing items, since they are not themselves stored as separate documents and therefore lose things like auto-generated unique ObjectIds, etc.? Also, each user could potentially have a relatively large number of items associated with them (possibly thousands). Is embedding large lists in parent documents like that a bad idea?
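On the ObjectId concern: a minimal sketch in plain JavaScript of the embedded layout, assuming you assign each item its own id yourself (MongoDB only auto-generates `_id` for top-level documents, but nothing stops you from storing your own per-item ids); field names here are illustrative.

```javascript
// Sketch: embedded items with hand-assigned ids, so individual
// items can still be targeted for update or removal.
let nextId = 1;
const makeItem = (text) => ({ _id: nextId++, text, completed: 0 });

const user = {
  username: "joe",
  password: "...",
  items: [makeItem("This is a task"), makeItem("Another task")],
};

// Updating one embedded item: locate it by its id, then mutate.
// (In MongoDB proper this maps to an update filtered on the item id,
//  e.g. { "items._id": 1 }, with the positional operator:
//  { $set: { "items.$.completed": 1 } }.)
function completeItem(doc, itemId) {
  const item = doc.items.find((i) => i._id === itemId);
  if (item) item.completed = 1;
  return doc;
}

completeItem(user, 1);
console.log(user.items[0].completed); // 1
```

The trade-off sketched here is the usual one: embedding keeps reads to a single document fetch, at the cost of slightly more bookkeeping for per-item updates and MongoDB's per-document size limit bounding how many items one user can hold.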

Schema design: Use of association (aka: bridge/junction) table vs foreign key constraint with composite index containing a non-key field

Posted: 20 May 2013 09:08 PM PDT

This is an inventory database for IT assets. The models used are trimmed in order to focus on the problem at hand. Using SQL Server 2008. Thanks for taking the time to read and for any input you can provide.

My design includes a Device table which holds the various devices that can be entered into inventory. Each device has a boolean flag, CanNetwork, which states whether a device has network capability, e.g., for most computers CanNetwork = true, for hard drives CanNetwork = false; some printers will be true, and others will be false. You get the idea.

The CanNetwork field determines if network-related information is relevant when an inventory record is created.

Design 1

My first design adds an index on Device.DeviceID and Device.CanNetwork so the pair can be used in a foreign key constraint with the Asset table.

Schema #1: Foreign key constraint with index

The NetworkStatus table looks like this in this setup:

+----------------------------------------------------------------------+
| NetworkStatusID | NetworkStatus  | NetworkStatusDescription          |
|----------------------------------------------------------------------|
| 1               | Connected      | Device connected to network.      |
| 2               | Not Connected  | Device not connected to network.  |
| 3               | Do Not Connect | Do not connect device to network. |
+----------------------------------------------------------------------+

I put check constraints on the Asset table as follows to ensure a network status and network information can only be provided if the device is capable of connecting to a network.

-- Asset table check constraint: CK_CanNetwork
-- If can't network, then network fields are null
CanNetwork = 0
AND NetworkStatusID IS NULL
AND Hostname IS NULL
AND IPAddress IS NULL
AND MACAddress IS NULL
OR CanNetwork = 1

-- Asset table check constraint: CK_NetworkStatus
-- If network status is "Connected", then must provide
-- a hostname or ip address
NetworkStatusID = 1 AND Hostname IS NOT NULL
OR NetworkStatusID = 1 AND IPAddress IS NOT NULL
OR NetworkStatusID <> 1

The issue I have with this design is that I'm not sure whether the relationship between Asset and Device.DeviceID/Device.CanNetwork is a good or bad design decision. Is propagating a non-key field like CanNetwork to other tables bad design? I don't have enough experience with database design to make an informed decision.
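For reference, a minimal T-SQL sketch of how Design 1's constraint wiring might look; the constraint names are assumptions, not taken from the actual schema. The key point is that a composite foreign key can only reference a UNIQUE or PRIMARY KEY constraint, so (DeviceID, CanNetwork) must first be declared unique on Device:

```sql
-- Sketch: a UNIQUE constraint on (DeviceID, CanNetwork) lets Asset
-- reference both columns with a composite foreign key.
ALTER TABLE Device
    ADD CONSTRAINT UQ_Device_DeviceID_CanNetwork
    UNIQUE (DeviceID, CanNetwork);

ALTER TABLE Asset
    ADD CONSTRAINT FK_Asset_Device
    FOREIGN KEY (DeviceID, CanNetwork)
    REFERENCES Device (DeviceID, CanNetwork);
```

With this in place, an Asset row can never claim CanNetwork = 1 for a device whose Device row says otherwise, which is what makes the CHECK constraints above enforceable.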


Design 2

In this design I thought I would use a bridge/association/junction table to decide which network statuses are valid for a device. It looks like this:

Schema #2: Junction table DeviceNetworkStatus

The NetworkStatus table looks like this in this setup (notice the addition of the record with id #4, and the ForCanNetwork field, which specifies whether a status is for use with devices that can connect to a network):

+--------------------------------------------------------------------------------------+
| NetworkStatusID | NetworkStatus  | NetworkStatusDescription          | ForCanNetwork |
|--------------------------------------------------------------------------------------|
| 1               | Connected      | Device connected to network.      | True  (1)     |
| 2               | Not Connected  | Device not connected to network.  | True  (1)     |
| 3               | Do Not Connect | Do not connect device to network. | True  (1)     |
| 4               | Incapable      | Cannot connect to networks.       | False (0)     |
+--------------------------------------------------------------------------------------+

Due to this design's granularity, I could theoretically allow any mix of statuses for a device, but I wanted to control it, so I wrote some triggers to insert only the correct mix of statuses depending on whether the device is network capable. Triggers as follows:

-- NetworkStatus table on INSERT trigger
-- Adds a record for each device that
-- matches the capability of the network status
INSERT INTO DeviceNetworkStatus
SELECT i.NetworkStatusID, dev.DeviceID
FROM Device dev
CROSS JOIN inserted i
WHERE dev.CanNetwork = i.ForCanNetwork

-- Device table on INSERT trigger
-- Adds a record for each network status that
-- matches the capability of the new device
INSERT INTO DeviceNetworkStatus
SELECT ns.NetworkStatusID, i.DeviceID
FROM NetworkStatus ns
CROSS JOIN inserted i
WHERE ns.ForCanNetwork = i.CanNetwork

I used the following CHECK constraint on the Asset table:

-- Asset table check constraint: CK_NetworkStatus
-- If network status is "Connected", then must provide
-- a hostname or ip address
-- If network status is "Incapable", then network fields
-- must be null
NetworkStatusID = 1 AND Hostname IS NOT NULL
OR NetworkStatusID = 1 AND IPAddress IS NOT NULL
OR NetworkStatusID = 4 AND Hostname IS NULL
AND IPAddress IS NULL AND MACAddress IS NULL
OR NetworkStatusID <> 1 AND NetworkStatusID <> 4

This design eliminates the need to propagate CanNetwork across the tables. The issue I see with this design is that every device that has network capability will have records in DeviceNetworkStatus paired with NetworkStatus ids 1, 2 and 3, while devices that can't connect to a network will be paired only with NetworkStatus id 4. It seems like a lot of extra records that all mean the same thing: devices that can be networked can only use statuses 1, 2 and 3, and devices that can't network only use 4. This design seems to be more "relationally correct", but also smells a bit.

Update

The following update proposes variations on Design 1. I come across situations like this often, where there are many ways to achieve the same end result. I never know how to tell whether there are hidden problems with the designs, and I can't judge when to normalize or denormalize. Is one of these designs preferred over the others, and why?

Design 1.1

Schema #1.1: NetworkAsset subtype

-- NetworkAsset table check constraint: CK_CanNetwork
CanNetwork = 1

-- NetworkAsset table check constraint: CK_NetworkStatus
-- If network status is "Connected", then must provide
-- a hostname or ip address
NetworkStatusID = 1 AND Hostname IS NOT NULL
OR NetworkStatusID = 1 AND IPAddress IS NOT NULL
OR NetworkStatusID <> 1

Design 1.2

Schema #1.2: NetworkAsset and NetworkDevice subtype

For the record, this design seems a bit absurd even to me, but here is my thought process. In this one, the presence of a DeviceID in the NetworkDevice table is equivalent to saying CanNetwork = true in Design 1. The NetworkAsset.NetworkDeviceID column has a foreign key constraint and is used to ensure only networkable devices are entered. This can be done using a CHECK constraint (see below), and by making NetworkDeviceID a computed column equal to DeviceID.

-- NetworkAsset table check constraint: CK_IsNetworkDevice
NetworkDeviceID = DeviceID

Partition Fact Table

Posted: 20 May 2013 04:08 PM PDT

One of my fact tables has close to 25 million records in it, and when the Cognos team tries to run reports on top of it, it takes a long time. So I am thinking of partitioning the table. We store records in the fact table on a daily basis, and we do have the calendar-day key (id calaendarday dma, data type int) in the fact table. Is there a way I can partition this fact table based on id calaendarday dma?

Please help with the above, as it is impacting the Cognos reports.
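The post doesn't say which RDBMS hosts the warehouse, so purely as a hedged illustration, assuming SQL Server and an int calendar-day surrogate key in yyyymmdd form (all object names below are made up), range partitioning on that column could be sketched as:

```sql
-- Sketch only: one partition per month boundary; extend the value
-- list (or automate a sliding window) as new periods arrive.
CREATE PARTITION FUNCTION pf_calendarday (int)
    AS RANGE RIGHT FOR VALUES (20130101, 20130201, 20130301);

CREATE PARTITION SCHEME ps_calendarday
    AS PARTITION pf_calendarday ALL TO ([PRIMARY]);

-- Rebuilding the clustered index on the scheme partitions the fact table.
CREATE CLUSTERED INDEX cix_fact_calendarday
    ON FactTable (id_calendarday_dma)
    ON ps_calendarday (id_calendarday_dma);
```

If the platform is Oracle or another engine, the equivalent is interval/range partitioning on the same int key; the principle of aligning partitions with the calendar-day column carries over.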

Applying user-defined fields to arbitrary entities

Posted: 20 May 2013 02:38 PM PDT

Currently we have an old (rather crude) system that has user-defined fields, which are mapped against rows in arbitrary tables. This was an after-the-fact modification based on a customer request, and it wasn't really designed to scale well. Our system has around 60 different types of entities, which makes things even more complicated. Essentially the implementation looks like this:

USER_DEFINED_FIELDS:

UDF_ID         int
ENTITY_TYPE    int
UDF_NAME       nvarchar(64)
UDF_DATA_TYPE  int
UDF_FORMAT     nvarchar(16)
UDF_MASK       nvarchar(24)
UDF_FLAGS      int

UDF_VALUES_NUMBER:

UDF_ID         int
ENTITY_ID      int
VALUE          int
MODIFIED       datetime

UDF_VALUES_TEXT:

UDF_ID         int
ENTITY_ID      int
VALUE          nvarchar(255)
MODIFIED       datetime

etc...

This gets nice and fun when we generate our own ways to index compound primary keys, but that's another DailyWTF-worthy story.

Obviously this is pretty hideous, and leads to some spectacularly horrific queries being generated, but it's worked alright for now because we limit each entity to a maximum of 5 user-defined fields. As a quick disclaimer, I wasn't with the company when this design decision was made! ;)

Anyway, we're about to start a shiny new project and will inevitably need a better way of doing this, with no restrictions on the number of UDFs we can apply to entities, increased performance, and less horror in the generated query department. If we could run a NoSQL solution like Mongo or Redis I'd be happy and wouldn't need to bother you all, but sadly that's not an option. Instead, I need a way to do this from within SQL Server 2008 R2.

So far, I've come up with the following options:

  • Individual UDF table for each entity type, with identical structures.
    • Benefits: Queries are faster and simpler, solves the compound index problem, can search by UDF.
    • Downsides: Table versioning is more cumbersome, lots of extra tables, difficult to index across similar entity types (e.g. purchase orders and sales orders)
  • Binary blob UDF column in every entity's table.
    • Benefits: Really fast: no fetch overhead, no extra JOINs.
    • Downsides: Can't search by UDF, extra proprietary code required.
  • XML column in every entity's table.
    • Benefits: Fast like binary blobs, seems to be indexable.
    • Downsides: No idea how they work - looks complicated!

So, do any of these ideas have merit? Is there a better alternative?
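On the XML option specifically, since it "looks complicated": a hedged sketch of what an indexed XML UDF column might look like in SQL Server 2008 R2, assuming the table has a clustered primary key (a prerequisite for XML indexes); table, index, and attribute names below are invented for illustration.

```sql
-- Sketch: one XML column holds arbitrary UDFs per row.
ALTER TABLE PurchaseOrder ADD Udf xml NULL;

-- A primary XML index must exist before any secondary XML index.
CREATE PRIMARY XML INDEX PXML_PurchaseOrder_Udf
    ON PurchaseOrder (Udf);

-- A PATH secondary index helps queries filtering on a specific field.
CREATE XML INDEX SXML_PurchaseOrder_Udf_Path
    ON PurchaseOrder (Udf)
    USING XML INDEX PXML_PurchaseOrder_Udf FOR PATH;

-- Searching by UDF value via the exist() XQuery method:
SELECT *
FROM PurchaseOrder
WHERE Udf.exist('/udf[@name="WarrantyMonths" and @value="12"]') = 1;
```

So XML columns are searchable, unlike the binary-blob option, though the XQuery predicates and the storage cost of the primary XML index are the price of that flexibility.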

Select * from statement execute very slowly, innodb io read speed is low

Posted: 20 May 2013 06:08 PM PDT

I have a very simple query, "select * from ap_statistic", running on two servers. The servers have the same hardware and software configuration (CPU: 8 cores, memory: 32G, OS: Red Hat 5.5, MySQL version: 5.1) and run the same applications. On server A the table ap_statistic has about 22,512,379 rows; on server B it has 41,438,751. Of course the query runs faster on server A than on server B, but what is strange is that the query on server B is extremely slow: it takes more than 3 hours, whereas on server A it takes just 10 minutes.

I used some tools to monitor system status and found that while the query is running on server A, the system IO read speed is about 20~30 MB/s, but on server B it's only 2~3 MB/s. I've tried clearing the Linux cache and restarting the MySQL server, with the same result. I also tried restoring the DB from server B onto server A; then the query on server A became very slow as well, with a very low IO read speed. I want to know why this happens.

The ap_statistic table data on server A was generated by normal operation, while the table data on server B was generated by a stored procedure. The table schema is:

CREATE TABLE `ap_statistic` (
  `ap_statisticId` BIGINT(20) UNSIGNED NOT NULL,
  `deviceId` INT(11) UNSIGNED NOT NULL DEFAULT '0',
  `macaddress` VARCHAR(100) DEFAULT NULL,
  `check_time` TIMESTAMP NOT NULL DEFAULT '0000-00-00 00:00:00',
  `ap_count` INT(10) UNSIGNED NOT NULL DEFAULT '0',
  `wlan` VARCHAR(64) DEFAULT NULL,
  `radio` VARCHAR(50) DEFAULT NULL,
  `bssid` VARCHAR(32) DEFAULT NULL,
  `zd_ap_name` VARCHAR(64) DEFAULT NULL,
  `channel` INT(2) DEFAULT NULL,
  `uplinkRSSI` INT(3) DEFAULT '0',
  `downlinkRSSI` INT(3) DEFAULT '0',
  `txBytes` BIGINT(20) DEFAULT '0',
  `rxBytes` BIGINT(20) DEFAULT '0',
  `txPkts` BIGINT(20) DEFAULT '0',
  `rxPkts` BIGINT(20) DEFAULT '0',
  `hops` INT(1) DEFAULT '0',
  `numDownlink` INT(3) DEFAULT '0',
  `distance` INT(4) DEFAULT '0',
  `phyerr` INT(11) DEFAULT '0',
  `max_num_clients` INT(3) DEFAULT '0',
  `max_mesh_downlinks` INT(1) DEFAULT '0',
  `airtime` INT(3) DEFAULT '0',
  `uptimePercentage` INT(3) DEFAULT '0',
  `total_num_clients` INT(3) DEFAULT '0',
  `tx_actual_throughput` BIGINT(20) DEFAULT '0',
  `rx_actual_throughput` BIGINT(20) DEFAULT '0',
  `tunnelMode` VARCHAR(32) DEFAULT NULL,
  `externalIp` VARCHAR(64) DEFAULT NULL,
  `externalPort` VARCHAR(32) DEFAULT NULL,
  `level` INT(1) DEFAULT '1',
  `essid` VARCHAR(64) DEFAULT NULL,
  `total_client_join` INT(11) DEFAULT '0',
  PRIMARY KEY (`ap_statisticId`),
  KEY `check_time` (`check_time`),
  KEY `macaddress` (`macaddress`),
  KEY `deviceId` (`deviceId`)
) ENGINE=INNODB DEFAULT CHARSET=utf8

The following are the table file info and some outputs of the monitoring tools.

Server B

  -rw-rw---- 1 mysql mysql 18568183808 Oct 11 14:52 ap_statistic.ibd      [root@localhost itms]# filefrag ./ap_statistic.ibd    ./ap_statistic.ibd: 164 extents found, perfection would be 159 extents          TABLE         Non_unique  Key_name    Seq_in_index  Column_name     COLLATION  Cardinality  Sub_part  Packed  NULL    Index_type  COMMENT      ------------  ----------  ----------  ------------  --------------  ---------  -----------  --------  ------  ------  ----------  -------      ap_statistic           0  PRIMARY                1  ap_statisticId  A             41438751    (NULL)  (NULL)          BTREE                    ap_statistic           1  check_time             1  check_time      A                10320    (NULL)  (NULL)          BTREE                    ap_statistic           1  macaddress             1  macaddress      A                   16    (NULL)  (NULL)  YES     BTREE                    ap_statistic           1  deviceId               1  deviceId        A                   16    (NULL)  (NULL)          BTREE            mysql>show status;            Variable_name   Value          Aborted_clients 0          Aborted_connects    0          Binlog_cache_disk_use   0          Binlog_cache_use    0          Bytes_received  1256          Bytes_sent  8844          Com_admin_commands  0          Com_assign_to_keycache  0          Com_alter_db    0          Com_alter_db_upgrade    0          Com_alter_event 0          Com_alter_function  0          Com_alter_procedure 0          Com_alter_server    0          Com_alter_table 0          Com_alter_tablespace    0          Com_analyze 0          Com_backup_table    0          Com_begin   0          Com_binlog  0          Com_call_procedure  0          Com_change_db   1          Com_change_master   0          Com_check   0          Com_checksum    0          Com_commit  0          Com_create_db   0          Com_create_event    0          Com_create_function 0          Com_create_index    0          
Com_create_procedure    0          Com_create_server   0          Com_create_table    0          Com_create_trigger  0          Com_create_udf  0          Com_create_user 0          Com_create_view 0          Com_dealloc_sql 0          Com_delete  0          Com_delete_multi    0          Com_do  0          Com_drop_db 0          Com_drop_event  0          Com_drop_function   0          Com_drop_index  0          Com_drop_procedure  0          Com_drop_server 0          Com_drop_table  0          Com_drop_trigger    0          Com_drop_user   0          Com_drop_view   0          Com_empty_query 0          Com_execute_sql 0          Com_flush   0          Com_grant   0          Com_ha_close    0          Com_ha_open 0          Com_ha_read 0          Com_help    0          Com_insert  0          Com_insert_select   0          Com_install_plugin  0          Com_kill    0          Com_load    0          Com_load_master_data    0          Com_load_master_table   0          Com_lock_tables 0          Com_optimize    0          Com_preload_keys    0          Com_prepare_sql 0          Com_purge   0          Com_purge_before_date   0          Com_release_savepoint   0          Com_rename_table    0          Com_rename_user 0          Com_repair  0          Com_replace 0          Com_replace_select  0          Com_reset   0          Com_restore_table   0          Com_revoke  0          Com_revoke_all  0          Com_rollback    0          Com_rollback_to_savepoint   0          Com_savepoint   0          Com_select  1          Com_set_option  3          Com_show_authors    0          Com_show_binlog_events  0          Com_show_binlogs    0          Com_show_charsets   0          Com_show_collations 0          Com_show_column_types   0          Com_show_contributors   0          Com_show_create_db  0          Com_show_create_event   0          Com_show_create_func    0          Com_show_create_proc    0          Com_show_create_table   1          Com_show_create_trigger 0    
      Com_show_databases  0          Com_show_engine_logs    0          Com_show_engine_mutex   0          Com_show_engine_status  0          Com_show_events 0          Com_show_errors 0          Com_show_fields 1          Com_show_function_status    0          Com_show_grants 0          Com_show_keys   1          Com_show_master_status  0          Com_show_new_master 0          Com_show_open_tables    0          Com_show_plugins    0          Com_show_privileges 0          Com_show_procedure_status   0          Com_show_processlist    0          Com_show_profile    0          Com_show_profiles   0          Com_show_slave_hosts    0          Com_show_slave_status   0          Com_show_status 21          Com_show_storage_engines    0          Com_show_table_status   0          Com_show_tables 0          Com_show_triggers   0          Com_show_variables  0          Com_show_warnings   0          Com_slave_start 0          Com_slave_stop  0          Com_stmt_close  0          Com_stmt_execute    0          Com_stmt_fetch  0          Com_stmt_prepare    0          Com_stmt_reprepare  0          Com_stmt_reset  0          Com_stmt_send_long_data 0          Com_truncate    0          Com_uninstall_plugin    0          Com_unlock_tables   0          Com_update  0          Com_update_multi    0          Com_xa_commit   0          Com_xa_end  0          Com_xa_prepare  0          Com_xa_recover  0          Com_xa_rollback 0          Com_xa_start    0          Compression ON          Connections 323          Created_tmp_disk_tables 1          Created_tmp_files   5          Created_tmp_tables  2          Delayed_errors  0          Delayed_insert_threads  0          Delayed_writes  0          Flush_commands  1          Handler_commit  1          Handler_delete  0          Handler_discover    0          Handler_prepare 0          Handler_read_first  0          Handler_read_key    0          Handler_read_next   0          Handler_read_prev   0          Handler_read_rnd    0      
    Handler_read_rnd_next   39          Handler_rollback    0          Handler_savepoint   0          Handler_savepoint_rollback  0          Handler_update  0          Handler_write   37          Innodb_buffer_pool_pages_data   43392          Innodb_buffer_pool_pages_dirty  0          Innodb_buffer_pool_pages_flushed    43822          Innodb_buffer_pool_pages_free   637198          Innodb_buffer_pool_pages_misc   562          Innodb_buffer_pool_pages_total  681152          Innodb_buffer_pool_read_ahead_rnd   9          Innodb_buffer_pool_read_ahead_seq   27          Innodb_buffer_pool_read_requests    36489397          Innodb_buffer_pool_reads    27421          Innodb_buffer_pool_wait_free    0          Innodb_buffer_pool_write_requests   4165371          Innodb_data_fsyncs  5228          Innodb_data_pending_fsyncs  0          Innodb_data_pending_reads   1          Innodb_data_pending_writes  0          Innodb_data_read    626216960          Innodb_data_reads   36565          Innodb_data_writes  293947          Innodb_data_written 1792826880          Innodb_dblwr_pages_written  43822          Innodb_dblwr_writes 830          Innodb_log_waits    0          Innodb_log_write_requests   492588          Innodb_log_writes   268248          Innodb_os_log_fsyncs    2130          Innodb_os_log_pending_fsyncs    0          Innodb_os_log_pending_writes    0          Innodb_os_log_written   356559872          Innodb_page_size    16384          Innodb_pages_created    5304          Innodb_pages_read   38087          Innodb_pages_written    43822          Innodb_row_lock_current_waits   0          Innodb_row_lock_time    0          Innodb_row_lock_time_avg    0          Innodb_row_lock_time_max    0          Innodb_row_lock_waits   0          Innodb_rows_deleted 28637          Innodb_rows_inserted    306449          Innodb_rows_read    16579740          Innodb_rows_updated 887251          Key_blocks_not_flushed  0          Key_blocks_unused   212928          Key_blocks_used 1415 
         Key_read_requests   393323          Key_reads   16          Key_write_requests  102461          Key_writes  102439          Last_query_cost 9142769.199000          Max_used_connections    19          Not_flushed_delayed_rows    0          Open_files  24          Open_streams    0          Open_table_definitions  142          Open_tables 146          Opened_files    592          Opened_table_definitions    0          Opened_tables   0          Prepared_stmt_count 0          Qcache_free_blocks  0          Qcache_free_memory  0          Qcache_hits 0          Qcache_inserts  0          Qcache_lowmem_prunes    0          Qcache_not_cached   0          Qcache_queries_in_cache 0          Qcache_total_blocks 0          Queries 1578897          Questions   30          Rpl_status  NULL          Select_full_join    0          Select_full_range_join  0          Select_range    0          Select_range_check  0          Select_scan 2          Slave_open_temp_tables  0          Slave_retried_transactions  0          Slave_running   OFF          Slow_launch_threads 0          Slow_queries    0          Sort_merge_passes   0          Sort_range  0          Sort_rows   0          Sort_scan   0          Ssl_accept_renegotiates 0          Ssl_accepts 0          Ssl_callback_cache_hits 0          Ssl_cipher            Ssl_cipher_list           Ssl_client_connects 0          Ssl_connect_renegotiates    0          Ssl_ctx_verify_depth    0          Ssl_ctx_verify_mode 0          Ssl_default_timeout 0          Ssl_finished_accepts    0          Ssl_finished_connects   0          Ssl_session_cache_hits  0          Ssl_session_cache_misses    0          Ssl_session_cache_mode  NONE          Ssl_session_cache_overflows 0          Ssl_session_cache_size  0          Ssl_session_cache_timeouts  0          Ssl_sessions_reused 0          Ssl_used_session_cache_entries  0          Ssl_verify_depth    0          Ssl_verify_mode 0          Ssl_version           Table_locks_immediate   
    1549525
    Table_locks_waited            0
    Tc_log_max_pages_used         0
    Tc_log_page_size              0
    Tc_log_page_waits             0
    Threads_cached                0
    Threads_connected             17
    Threads_created               322
    Threads_running               2
    Uptime                        8093
    Uptime_since_flush_status     8093

mysql> show variables;

    Variable_name                 Value
    auto_increment_increment      1
    auto_increment_offset         1
    autocommit                    ON
    automatic_sp_privileges       ON
    back_log                      50
    big_tables                    OFF
    binlog_cache_size             32768
    binlog_direct_non_transactional_updates  OFF
    binlog_format                 STATEMENT
    bulk_insert_buffer_size       8388608
    character_set_client          utf8
    character_set_connection      utf8
    character_set_database        utf8
    character_set_filesystem      binary
    character_set_results         utf8
    character_set_server          utf8
    character_set_system          utf8
    collation_connection          utf8_general_ci
    collation_database            utf8_general_ci
    collation_server              utf8_general_ci
    completion_type               0
    concurrent_insert             1
    connect_timeout               10
    date_format                   %Y-%m-%d
    datetime_format               %Y-%m-%d %H:%i:%s
    default_week_format           0
    delay_key_write               ON
    delayed_insert_limit          100
    delayed_insert_timeout        300
    delayed_queue_size            1000
    div_precision_increment       4
    engine_condition_pushdown     ON
    error_count                   0
    event_scheduler               OFF
    expire_logs_days              0
    flush                         OFF
    flush_time                    0
    foreign_key_checks            ON
    ft_boolean_syntax             + -><()~*:""&|
    ft_max_word_len               84
    ft_min_word_len               4
    ft_query_expansion_limit      20
    ft_stopword_file              (built-in)
    general_log                   OFF
    group_concat_max_len          1024
    have_community_features       YES
    have_compress                 YES
    have_crypt                    YES
    have_csv                      YES
    have_dynamic_loading          YES
    have_geometry                 YES
    have_innodb                   YES
    have_ndbcluster               NO
    have_openssl                  DISABLED
    have_partitioning             NO
    have_query_cache              YES
    have_rtree_keys               YES
    have_ssl                      DISABLED
    have_symlink                  YES
    hostname                      localhost.localdomain
    identity                      0
    ignore_builtin_innodb         OFF
    init_connect
    init_file
    init_slave
    innodb_adaptive_hash_index    ON
    innodb_additional_mem_pool_size  67108864
    innodb_autoextend_increment   8
    innodb_autoinc_lock_mode      1
    innodb_buffer_pool_size       11159994368
    innodb_checksums              ON
    innodb_commit_concurrency     0
    innodb_concurrency_tickets    500
    innodb_data_file_path         ibdata1:10M:autoextend
    innodb_data_home_dir
    innodb_doublewrite            ON
    innodb_fast_shutdown          1
    innodb_file_io_threads        4
    innodb_file_per_table         ON
    innodb_flush_log_at_trx_commit  2
    innodb_flush_method           O_DIRECT
    innodb_force_recovery         0
    innodb_lock_wait_timeout      120
    innodb_locks_unsafe_for_binlog  ON
    innodb_log_buffer_size        134217728
    innodb_log_file_size          5242880
    innodb_log_files_in_group     2
    innodb_log_group_home_dir     ./
    innodb_max_dirty_pages_pct    90
    innodb_max_purge_lag          0
    innodb_mirrored_log_groups    1
    innodb_open_files             300
    innodb_rollback_on_timeout    OFF
    innodb_stats_on_metadata      ON
    innodb_support_xa             ON
    innodb_sync_spin_loops        20
    innodb_table_locks            ON
    innodb_thread_concurrency     8
    innodb_thread_sleep_delay     10000
    innodb_use_legacy_cardinality_algorithm  ON
    insert_id                     0
    interactive_timeout           28800
    join_buffer_size              268435456
    keep_files_on_create          OFF
    key_buffer_size               268435456
    key_cache_age_threshold       300
    key_cache_block_size          1024
    key_cache_division_limit      100
    large_files_support           ON
    large_page_size               0
    large_pages                   OFF
    last_insert_id                0
    lc_time_names                 en_US
    license                       Commercial
    local_infile                  ON
    locked_in_memory              OFF
    log                           OFF
    log_bin                       OFF
    log_bin_trust_function_creators  OFF
    log_bin_trust_routine_creators   OFF
    log_output                    FILE
    log_queries_not_using_indexes OFF
    log_slave_updates             OFF
    log_slow_queries              OFF
    log_warnings                  1
    long_query_time               10.000000
    low_priority_updates          OFF
    lower_case_file_system        OFF
    lower_case_table_names        1
    max_allowed_packet            134217728
    max_binlog_cache_size         18446744073709547520
    max_binlog_size               1073741824
    max_connect_errors            10
    max_connections               300
    max_delayed_threads           20
    max_error_count               64
    max_heap_table_size           268435456
    max_insert_delayed_threads    20
    max_join_size                 18446744073709551615
    max_length_for_sort_data      1024
    max_prepared_stmt_count       16382
    max_relay_log_size            0
    max_seeks_for_key             18446744073709551615
    max_sort_length               1024
    max_sp_recursion_depth        0
    max_tmp_tables                32
    max_user_connections          0
    max_write_lock_count          18446744073709551615
    min_examined_row_limit        0
    multi_range_count             256
    myisam_data_pointer_size      6
    myisam_max_sort_file_size     9223372036853727232
    myisam_mmap_size              18446744073709551615
    myisam_recover_options        OFF
    myisam_repair_threads         1
    myisam_sort_buffer_size       8388608
    myisam_stats_method           nulls_unequal
    myisam_use_mmap               OFF
    net_buffer_length             16384
    net_read_timeout              30
    net_retry_count               10
    net_write_timeout             60
    new                           OFF
    old                           OFF
    old_alter_table               OFF
    old_passwords                 OFF
    open_files_limit              10240
    optimizer_prune_level         1
    optimizer_search_depth        62
    optimizer_switch              index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on
    port                          3306
    preload_buffer_size           32768
    profiling                     OFF
    profiling_history_size        15
    protocol_version              10
    pseudo_thread_id              18
    query_alloc_block_size        8192
    query_cache_limit             1048576
    query_cache_min_res_unit      4096
    query_cache_size              0
    query_cache_type              ON
    query_cache_wlock_invalidate  OFF
    query_prealloc_size           8192
    rand_seed1
    rand_seed2
    range_alloc_block_size        4096
    read_buffer_size              67108864
    read_only                     OFF
    read_rnd_buffer_size          67108864
    relay_log
    relay_log_index
    relay_log_info_file           relay-log.info
    relay_log_purge               ON
    relay_log_space_limit         0
    report_host
    report_password
    report_port                   3306
    report_user
    rpl_recovery_rank             0
    secure_auth                   OFF
    secure_file_priv
    server_id                     0
    skip_external_locking         ON
    skip_name_resolve             ON
    skip_networking               OFF
    skip_show_database            OFF
    slave_compressed_protocol     OFF
    slave_exec_mode               STRICT
    slave_net_timeout             3600
    slave_skip_errors             OFF
    slave_transaction_retries     10
    slow_launch_time              2
    slow_query_log                OFF
    sort_buffer_size              16777216
    sql_auto_is_null              ON
    sql_big_selects               ON
    sql_big_tables                OFF
    sql_buffer_result             OFF
    sql_log_bin                   ON
    sql_log_off                   OFF
    sql_log_update                ON
    sql_low_priority_updates      OFF
    sql_max_join_size             18446744073709551615
    sql_mode
    sql_notes                     ON
    sql_quote_show_create         ON
    sql_safe_updates              OFF
    sql_select_limit              18446744073709551615
    sql_slave_skip_counter
    sql_warnings                  OFF
    ssl_ca
    ssl_capath
    ssl_cert
    ssl_cipher
    ssl_key
    storage_engine                MyISAM
    sync_binlog                   0
    sync_frm                      ON
    system_time_zone              UTC
    table_definition_cache        256
    table_lock_wait_timeout       50
    table_open_cache              512
    table_type                    MyISAM
    thread_cache_size             0
    thread_handling               one-thread-per-connection
    thread_stack                  262144
    time_format                   %H:%i:%s
    time_zone                     +08:00
    timed_mutexes                 OFF
    timestamp                     1349946061
    tmp_table_size                1073741824
    transaction_alloc_block_size  8192
    transaction_prealloc_size     4096
    tx_isolation                  REPEATABLE-READ
    unique_checks                 ON
    updatable_views_with_limit    YES
    version                       5.1.53-enterprise-commercial-pro
    version_comment               MySQL Enterprise Server - Pro Edition (Commercial)
    version_compile_machine       x86_64
    version_compile_os            unknown-linux-gnu
    wait_timeout                  28800
    warning_count                 0

mysql> show innodb status\G;

    *************************** 1. row ***************************
      Type: InnoDB
      Name:
    Status:
    =====================================
    121011 10:22:13 INNODB MONITOR OUTPUT
    =====================================
    Per second averages calculated from the last 39 seconds
    ----------
    SEMAPHORES
    ----------
    OS WAIT ARRAY INFO: reservation count 3806, signal count 3778
    Mutex spin waits 0, rounds 282892, OS waits 2075
    RW-shared spins 1969, OS waits 864; RW-excl spins 2336, OS waits 749
    ------------
    TRANSACTIONS
    ------------
    Trx id counter 0 5303968
    Purge done for trx's n:o < 0 5303951 undo n:o < 0 0
    History list length 1
    LIST OF TRANSACTIONS FOR EACH SESSION:
    ---TRANSACTION 0 0, not started, process no 30336, OS thread id 1189509440
    MySQL thread id 520, query id 1861594 localhost root
    show innodb status
    ---TRANSACTION 0 5303967, not started, process no 30336, OS thread id 1188710720
    MySQL thread id 526, query id 1861593 127.0.0.1 root
    ---TRANSACTION 0 5303962, not started, process no 30336, OS thread id 1186314560
    MySQL thread id 519, query id 1861555 127.0.0.1 root
    ---TRANSACTION 0 5303952, not started, process no 30336, OS thread id 1188444480
    MySQL thread id 515, query id 1861567 127.0.0.1 root
    ---TRANSACTION 0 5303948, not started, process no 30336, OS thread id 1187912000
    MySQL thread id 516, query id 1861566 127.0.0.1 root
    ---TRANSACTION 0 5303937, not started, process no 30336, OS thread id 1190308160
    MySQL thread id 511, query id 1861568 127.0.0.1 root
    ---TRANSACTION 0 0, not started, process no 30336, OS thread id 1090791744
    MySQL thread id 18, query id 1596073 172.18.112.84 root
    ---TRANSACTION 0 5303959, ACTIVE 63 sec, process no 30336, OS thread id 1090525504 fetching rows, thread declared inside InnoDB 500
    mysql tables in use 1, locked 0
    MySQL thread id 17, query id 1861400 localhost root Sending data
    select * from ap_statistic
    Trx read view will not see trx with id >= 0 5303960, sees < 0 5303960
    --------
    FILE I/O
    --------
    I/O thread 0 state: waiting for i/o request (insert buffer thread)
    I/O thread 1 state: waiting for i/o request (log thread)
    I/O thread 2 state: waiting for i/o request (read thread)
    I/O thread 3 state: waiting for i/o request (write thread)
    Pending normal aio reads: 0, aio writes: 0,
     ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
    Pending flushes (fsync) log: 0; buffer pool: 0
    63521 OS file reads, 294656 OS file writes, 5641 OS fsyncs
    1 pending preads, 0 pending pwrites
    149.38 reads/s, 16384 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
    -------------------------------------
    INSERT BUFFER AND ADAPTIVE HASH INDEX
    -------------------------------------
    Ibuf: size 1, free list len 318, seg size 320,
    63593 inserts, 63593 merged recs, 9674 merges
    Hash table size 22086161, node heap has 607 buffer(s)
    0.08 hash searches/s, 0.26 non-hash searches/s
    ---
    LOG
    ---
    Log sequence number 15 2873617336
    Log flushed up to   15 2873617336
    Last checkpoint at  15 2873617336
    0 pending log writes, 0 pending chkp writes
    269102 log i/o's done, 0.00 log i/o's/second
    ----------------------
    BUFFER POOL AND MEMORY
    ----------------------
    Total memory allocated 12452785320; in additional pool allocated 15261440
    Dictionary memory allocated 789024
    Buffer pool size   681152
    Free buffers       610013
    Database pages     70532
    Modified db pages  0
    Pending reads 1
    Pending writes: LRU 0, flush list 0, single page 0
    Pages read 65043, created 5488, written 45924
    149.38 reads/s, 0.00 creates/s, 0.00 writes/s
    Buffer pool hit rate 888 / 1000
    --------------
    ROW OPERATIONS
    --------------
    1 queries inside InnoDB, 0 queries in queue
    2 read views open inside InnoDB
    Main thread process no. 30336, id 1185782080, state: waiting for server activity
    Number of rows inserted 336555, updated 1112311, deleted 28681, read 29200669
    0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 8258.58 reads/s
    ----------------------------
    END OF INNODB MONITOR OUTPUT
    ============================

    1 row in set, 1 warning (0.00 sec)

    ERROR:
    No query specified

iostat -dx 2

    Device:    rrqm/s wrqm/s    r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
    sda          0.00   2.50 141.50 11.50 4516.00  112.00  2258.00    56.00    30.25     0.95    6.23   5.70  87.25
    sda1         0.00   0.00   0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
    sda2         0.00   2.50 141.50 11.50 4516.00  112.00  2258.00    56.00    30.25     0.95    6.23   5.70  87.25
    dm-0         0.00   0.00 141.50 14.00 4516.00  112.00  2258.00    56.00    29.76     0.97    6.24   5.62  87.35
    dm-1         0.00   0.00   0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

    Device:    rrqm/s wrqm/s    r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
    sda          3.00   0.00 154.50  0.00 4932.00    0.00  2466.00     0.00    31.92     0.93    6.04   6.04  93.25
    sda1         0.00   0.00   0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
    sda2         3.00   0.00 154.50  0.00 4932.00    0.00  2466.00     0.00    31.92     0.93    6.04   6.04  93.25
    dm-0         0.00   0.00 157.50  0.00 4932.00    0.00  2466.00     0.00    31.31     0.95    6.04   5.93  93.40
    dm-1         0.00   0.00   0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

    Device:    rrqm/s wrqm/s    r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
    sda          3.00   1.50 150.50  1.50 4804.00   24.00  2402.00    12.00    31.76     0.94    6.15   6.14  93.40
    sda1         0.00   0.00   0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
    sda2         3.00   1.50 150.50  1.50 4804.00   24.00  2402.00    12.00    31.76     0.94    6.15   6.14  93.40
    dm-0         0.00   0.00 153.50  3.00 4804.00   24.00  2402.00    12.00    30.85     0.95    6.08   5.97  93.50
    dm-1         0.00   0.00   0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

vmstat 2

    procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
     r  b   swpd   free     buff  cache   si   so    bi    bo   in   cs us sy id wa st
     1  1     16 27358488  18152 115500    0    0  2558     0 1193 8634 14  1 73 12  0
     1  1     16 27346840  18168 115500    0    0  2356    12 1174 9129 14  2 73 12  0
     2  1     16 27334320  18172 115504    0    0  2522     0 1184 8918 14  1 73 12  0
     1  1     16 27321812  18180 115496    0    0  2456    12 1183 7357 13  1 74 12  0
     1  1     16 27310132  18180 115504    0    0  2344    12 1174 6990 12  2 74 12  0
     1  1     16 27297608  18184 115508    0    0  2506     0 1182 6203 12  2 74 11  0
     1  1     16 27285444  18200 115504    0    0  2420    90 1187 9667 13  2 73 12  0
     1  1     16 27277640  18200 115508    0    0  2248     0 1165 8103 19  2 69 11  0
     2  1     16 27265380  18204 115504    0    0  2498     0 1179 5690 13  1 74 12  0
     4  1     16 27252972  18216 115508    0    0  2434    12 1178 6096 14  1 74 12  0
     1  1     16 27241032  18232 115496    0    0  2520     0 1181 9252 12  1 75 11  0
     2  1     16 27229136  18240 115508    0    0  2468    10 1178 7116 13  1 74 12  0
     1  0     16 27630612  18248 115508    0    0  1536    20 1121 4082 13  1 79  7  0

mpstat -P ALL 2

    02:48:57 PM  CPU   %user   %nice %system %iowait    %irq   %soft   %idle    intr/s
    02:48:59 PM  all   13.69    0.00    1.31   11.56    0.00    0.62   72.81   1190.95
    02:48:59 PM    0   33.67    0.00    0.50    0.00    0.00    0.00   65.83   1006.03
    02:48:59 PM    1    6.53    0.00    0.50   92.96    0.50    0.50    0.00    160.80
    02:48:59 PM    2    1.01    0.00    0.50    0.00    0.00    0.00   98.49      0.00
    02:48:59 PM    3    0.00    0.00    0.00    0.00    0.00    0.00  100.50      3.52
    02:48:59 PM    4   35.68    0.00    1.01    0.00    0.00    1.01   62.81     13.57
    02:48:59 PM    5    4.52    0.00    0.00    0.00    0.00    0.00   96.48      0.50
    02:48:59 PM    6    3.52    0.00    0.00    0.00    0.00    0.00   96.98      0.50
    02:48:59 PM    7   25.13    0.00    7.54    0.00    0.00    4.02   63.82      6.03

    02:48:59 PM  CPU   %user   %nice %system %iowait    %irq   %soft   %idle    intr/s
    02:49:01 PM  all   12.50    0.00    1.19   11.69    0.00    0.56   74.06   1177.11
    02:49:01 PM    0   22.89    0.00    1.49    0.00    0.00    1.49   74.13    995.52
    02:49:01 PM    1    5.97    0.00    0.50   92.54    0.00    0.50    0.00    159.70
    02:49:01 PM    2    0.50    0.00    0.50    0.50    0.00    0.00   98.01      1.99
    02:49:01 PM    3    0.00    0.00    0.00    0.00    0.00    0.00   99.50      2.49
    02:49:01 PM    4   45.77    0.00    1.49    0.00    0.00    0.50   51.24     11.94
    02:49:01 PM    5    0.00    0.00    0.00    0.00    0.00    0.00   99.50      0.50
    02:49:01 PM    6    0.50    0.00    0.00    0.00    0.00    0.00   99.00      0.50
    02:49:01 PM    7   23.38    0.00    5.47    0.00    0.00    1.99   68.16      4.48

    02:49:01 PM  CPU   %user   %nice %system %iowait    %irq   %soft   %idle    intr/s
    02:49:03 PM  all   13.05    0.00    1.12   11.62    0.00    0.50   73.70   1179.00
    02:49:03 PM    0   43.50    0.00    0.50    0.00    0.00    0.00   56.00   1000.50
    02:49:03 PM    1    6.50    0.00    1.00   93.00    0.00    0.50    0.00    157.00
    02:49:03 PM    2    1.50    0.00    0.50    0.00    0.00    0.00   98.50      0.00
    02:49:03 PM    3    0.00    0.00    0.00    0.00    0.00    0.00  100.00      2.50
    02:49:03 PM    4   32.50    0.00    1.50    0.00    0.00    1.00   65.50     13.00
    02:49:03 PM    5   11.00    0.00    4.00    0.00    0.00    1.50   83.50      0.50
    02:49:03 PM    6    0.00    0.00    0.00    0.00    0.00    0.00  100.00      0.00
    02:49:03 PM    7   10.50    0.00    2.00    0.00    0.00    1.00   87.00      5.50
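As a quick sanity check, the buffer-pool figures in the monitor output above are internally consistent; a minimal Python sketch (using only the numbers already shown, with 16 KiB InnoDB pages) illustrates how they relate:

```python
# Figures copied from the INNODB MONITOR OUTPUT above (pages are 16 KiB).
buffer_pool_pages = 681152   # "Buffer pool size"
free_buffers = 610013        # "Free buffers"
database_pages = 70532       # "Database pages"
node_heap_pages = 607        # adaptive hash index "node heap has 607 buffer(s)"

# The three categories account for the entire pool.
assert free_buffers + database_pages + node_heap_pages == buffer_pool_pages

# The pool size in bytes matches the configured innodb_buffer_pool_size.
assert buffer_pool_pages * 16 * 1024 == 11159994368  # about 10.4 GiB

# "Buffer pool hit rate 888 / 1000": roughly 11% of page requests go to disk.
miss_rate = 1 - 888 / 1000
print(f"pool free: {free_buffers / buffer_pool_pages:.0%}, miss rate: {miss_rate:.1%}")
```

With about 90% of the pool still free yet an 11% miss rate, the workload is reading pages faster than they are being cached, which is consistent with the sustained ~150 reads/s and high %iowait in the iostat and mpstat samples.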

What's the difference between a CTE and a Temp Table?

Posted: 20 May 2013 05:51 PM PDT

What is the difference between a Common Table Expression (CTE) and a temp table? And when should I use one over the other?

CTE

WITH cte (Column1, Column2, Column3)
AS
(
    SELECT Column1, Column2, Column3
    FROM SomeTable
)

SELECT * FROM cte

Temp Table

SELECT Column1, Column2, Column3
INTO #tmpTable
FROM SomeTable

SELECT * FROM #tmpTable
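For context, the main practical difference is scope: a CTE exists only for the single statement that immediately follows it, while a temp table persists for the session and can be indexed and reused. A hedged T-SQL sketch (the `IX_tmp_Column1` index name is made up for illustration):

    -- A CTE is visible only to the one statement that follows it:
    WITH cte (Column1) AS
    (
        SELECT Column1 FROM SomeTable
    )
    SELECT * FROM cte;
    -- SELECT * FROM cte;   -- would fail: the CTE is already out of scope

    -- A temp table lives until the session ends or it is dropped:
    SELECT Column1, Column2, Column3
    INTO #tmpTable
    FROM SomeTable;

    CREATE INDEX IX_tmp_Column1 ON #tmpTable (Column1);  -- not possible on a CTE

    SELECT * FROM #tmpTable;  -- can be referenced repeatedly
    SELECT * FROM #tmpTable;
    DROP TABLE #tmpTable;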
