Saturday, April 20, 2013

trying to execute SQL query on an oracle 10g database using java but getting ORA-00911 error

Posted: 20 Apr 2013 05:19 PM PDT

I am trying to execute a simple CREATE TABLE statement on an Oracle 10g database using Java. This is what I am doing:

Connection conn = // create connection, open connection.
// No errors/warnings are generated so I assume this succeeds.
Statement statement = conn.createStatement();
ResultSet rs = statement.executeQuery("CREATE TABLE t1 (c1 INTEGER NOT NULL, c2 VARCHAR(100), c3 INTEGER NULL, CONSTRAINT m_pk PRIMARY KEY (c1));");
...
// close connection. No errors again so I assume it succeeds.

When I execute the command directly on the command line (copying and pasting it into a terminal window), it succeeds. When I run the above Java code, this is the error I get:

java.sql.SQLSyntaxErrorException: ORA-00911: invalid character  

I don't understand how I could have an invalid character, since copying and pasting the statement into the terminal works perfectly fine.
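For comparison, a command-line client such as SQL*Plus treats the trailing semicolon as a client-side statement terminator and strips it before submitting, whereas JDBC passes the string to the server verbatim. What the terminal session actually submits is:

CREATE TABLE t1 (
    c1 INTEGER NOT NULL,
    c2 VARCHAR(100),
    c3 INTEGER NULL,
    CONSTRAINT m_pk PRIMARY KEY (c1)
)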

how to build table relationships in a firebird database?

Posted: 20 Apr 2013 05:16 PM PDT

I am a beginner in Delphi, and I have a Firebird database with two tables, masterlist and daily collection. I use Zeos 7.0.3 to access the Firebird database.

My masterlist contains the following columns:

╔══════╦══════╦═════════╦════════╗
║ name ║ date ║ balance ║ status ║
╚══════╩══════╩═════════╩════════╝

My daily collection contains the following columns:

╔══════╦══════╦═════════╦═════════╗
║ date ║ name ║ payment ║ balance ║
╚══════╩══════╩═════════╩═════════╝

I would like to build a relationship in which the balance from masterlist is copied to the balance column of daily collection, and when I update that column in daily collection, it also updates the content of masterlist.

I hope this is considered a good question; I have tried hard to make it useful.
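For illustration only, one common way to propagate changes between tables in Firebird is an AFTER UPDATE trigger. This is a hedged sketch that assumes the tables are named masterlist and daily_collection and that name uniquely identifies the master row:

SET TERM ^ ;

CREATE TRIGGER trg_sync_balance FOR daily_collection
ACTIVE AFTER UPDATE POSITION 0
AS
BEGIN
  /* Push the edited balance back to the matching masterlist row. */
  UPDATE masterlist
     SET balance = NEW.balance
   WHERE name = NEW.name;
END^

SET TERM ; ^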

parenthetic grouping of join clauses

Posted: 20 Apr 2013 03:41 PM PDT

What is the difference between these joins?

  1. a left join b left join c
  2. (a left join b) left join c
  3. a left join (b left join c)

Does the grouping only affect the order of the joins? Will the query engine decompose #1 into one of #2 or #3, i.e. pick an order for doing the joins?

I'm asking because H2 uses indexes on c for #1, but for #2 it does a table scan of c.

Here b is a simple many-to-many join table with columns (a_id, c_id).
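In standard SQL, joins without parentheses associate left to right, so #1 parses the same as #2. Spelled out with explicit ON clauses (the id column names are assumptions), the groupings look like this:

-- #1 / #2: a joins b, then that result joins c
SELECT *
FROM (a LEFT JOIN b ON b.a_id = a.id)
     LEFT JOIN c ON c.id = b.c_id;

-- #3: b joins c first, then a joins the nested result
SELECT *
FROM a LEFT JOIN (b LEFT JOIN c ON c.id = b.c_id)
     ON b.a_id = a.id;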

After moving a DB to a new disk, the log file isn't dumping data to the DB

Posted: 20 Apr 2013 04:22 PM PDT

I recently added a new disk to our SQL Server box and followed these steps from Microsoft to move the .mdf file to the new drive. I opted to leave the log file on the original disk for performance reasons.

When I tried to bring the database back online, I was met with a system permissions error (admin access was needed). I went into Windows Services and changed SQL Server to log on as the local system account. This appeared to resolve the problem.

However, two days later, the .mdf file is the same size as pre-transfer (I took it offline and only the timestamp changed). I checked the log, which is usually under 1GB, and found that it has now reached 30GB.

Am I missing something? Why hasn't SQL Server written any of this new data to the DB file itself?
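As a diagnostic aside (not part of the original question), SQL Server reports why a transaction log cannot be truncated; a hedged sketch, with the database name as a placeholder:

-- log_reuse_wait_desc explains what is pinning the log
-- (e.g. LOG_BACKUP under the FULL recovery model)
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'YourDatabase';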

Database design: nested tagging

Posted: 20 Apr 2013 06:23 PM PDT

I have a situation that after an initial thought ended in the following tables:

section:         id, name
section_tag:     id, name
section_tag_map: section_id, tag_id

item:         id, name
item_tag:     id, name
item_tag_map: item_id, tag_id

section_item_map: section_id, item_id

This describes the data well: items belong to sections. Sections and items each have multiple tags for their "predicates", but they are different kinds of tags (section tags are specific to sections, item tags to items).

I will need to do this kind of query:

  • Select sections with section_tag's (3, 4). Easy.
  • Select items with item_tag's (1, 2) that belong to a section with section_tag's (3, 4). It didn't take long to realize what a mess of queries this requires.

I'm planning to simplify the initial idea, setting a single tag table and through app logic make items "inherit" tags that are meant to be set for sections. So it will be more like this:

section: id, name
item:    id, name
tag:     id, type, name

section_item: section_id, item_id
section_tag_map: item_id, tag_id
item_tag_map: item_id, tag_id

The upside is that queries will be much simpler. The downside is that I'll duplicate data and have to take care of syncing item tags when the tags from a section change.

What I'm weighing is whether I should worry about having a monster query like this (first design):

SELECT i.*
FROM item AS i
INNER JOIN section_item_map AS sim ON sim.item_id = i.id
INNER JOIN (
    SELECT s.*
    FROM section AS s
    INNER JOIN section_tag_map AS stm ON stm.section_id = s.id
    WHERE stm.tag_id IN (3, 4)
    GROUP BY s.id
    HAVING COUNT(s.id) = 2
) AS s ON sim.section_id = s.id
INNER JOIN item_tag_map AS itm ON itm.item_id = i.id
WHERE itm.tag_id IN (1, 2)
GROUP BY i.id
HAVING COUNT(i.id) = 2

...or a much simpler one like this (second design):

SELECT i.*
FROM item AS i
INNER JOIN item_tag_map AS itm ON itm.item_id = i.id
WHERE itm.tag_id IN (1, 2, 3, 4)
GROUP BY i.id
HAVING COUNT(i.id) = 4

Am I on the right path, or am I missing something crucial here? The system is a single-user SQLite app with about 20k sections and 100k items.

Help with modeling this table

Posted: 20 Apr 2013 11:39 AM PDT

Thanks for reading my post...I'm new at setting up databases and am working on a simple practice app.

I have a table called PhoneNumbers, which holds a list of phone numbers (duh). I have two other tables that relate to PhoneNumbers: Employee and Company.

Each Employee can have multiple phone numbers, each Company can have multiple phone numbers.

Since my app needs to be able to distinguish Company phone numbers from Employee phone numbers, and to list each number per employee and per company, I was wondering about the best way to set up the PhoneNumbers table, keeping in mind that there may be more than two number types in the future.


One idea I had was:

Create two fields in the Phone number table called NumberTypeID and OwnerID and create another table called NumberType. NumberTypeID would hold the ID numbers of the Number Types created in the NumberType table. OwnerID would hold ID of the Company or Employee it belongs to.

PhoneNumbers
  - ID
  - PhoneNumber
  - NumberTypeID
  - OwnerID

NumberType
  - ID
  - Type

I'm thinking the NumberTypeID column will make querying all numbers for all companies/employees easier. I also think it makes things more flexible down the road for new number types, but perhaps it is overly complex? Maybe I don't need the NumberTypeID column at all, and querying all numbers for all companies/employees is best done with a more complex statement saved as a view? Or perhaps I'm missing something else altogether?
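To make the idea concrete, here is a hedged T-SQL-style sketch of the proposed tables (names from the question; note that OwnerID cannot carry a single declarative foreign key, because it may point at either Employee or Company):

CREATE TABLE NumberType (
    ID   int IDENTITY PRIMARY KEY,
    Type varchar(50) NOT NULL           -- e.g. 'Employee', 'Company'
);

CREATE TABLE PhoneNumbers (
    ID           int IDENTITY PRIMARY KEY,
    PhoneNumber  varchar(30) NOT NULL,
    NumberTypeID int NOT NULL REFERENCES NumberType(ID),
    OwnerID      int NOT NULL           -- Employee.ID or Company.ID; no single FK possible
);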


Again very new at this, so any help would be greatly appreciated.

Thanks

Unable to recover database

Posted: 20 Apr 2013 08:59 AM PDT

How can I resolve this issue? I am getting this error when trying to open my Oracle database:

ORA-16038: log 1 sequence# 59 cannot be archived
ORA-19809: limit exceeded for recovery files
ORA-00312: online log 1 thread 1: 'E:\ORACLE\PRODUCT\10.2.0\ORADATA\TESTDB\REDO01.LOG'

I even tried this, which I found here:

ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=2048576000 SCOPE=BOTH;  

Any help appreciated!
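For reference, these two standard Oracle 10g views show how full the flash recovery area is and what is consuming it (shown here as a diagnostic sketch):

SELECT * FROM V$RECOVERY_FILE_DEST;
SELECT * FROM V$FLASH_RECOVERY_AREA_USAGE;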

How to update column value in MS-Access by comparing two columns?

Posted: 20 Apr 2013 01:06 PM PDT

I have an Access 2000 table with 10 fields. I want to compare two columns and update a third.

To elaborate:

keyfield  cno     entno  uno
(Text)    (Text)  (int)  (int)
------------------------------------
1101      100     1      1*
1101      100     1      2*
1101      100     1      3*
1101      100     2      1*
1101      100     2      2*

These values to be updated for following condition:

If keyfield = 1101 and cno = 100 and entno = 1,
then uno should be 1, 2, 3.
If entno changes, then uno should start with 1 again.
If cno changes, then uno should start with 1.
And if keyfield changes, then uno should start with 1.
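For illustration, a correlated subquery can compute this kind of running number in Access SQL. This is only a hedged sketch: it assumes the table is named tbl and has an autonumber column ID that reflects the desired order, neither of which is given in the question:

SELECT t.keyfield, t.cno, t.entno,
       (SELECT COUNT(*)
          FROM tbl AS x
         WHERE x.keyfield = t.keyfield
           AND x.cno = t.cno
           AND x.entno = t.entno
           AND x.ID <= t.ID) AS uno
FROM tbl AS t;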

High Mysql Load , over 700% CPU

Posted: 20 Apr 2013 04:09 PM PDT

I have high MySQL load on a 64-bit Linux server with 24 GB RAM and an Intel(R) Core(TM) i7 CPU 950 @ 3.07GHz.

A lot of queries are stuck in the "Sending data" state.

Here is the MySQL process list:

+------+--------------+-----------+--------------+---------+------+----------------+---------------------------------------------------------------------------------------------+
| Id   | User         | Host      | db           | Command | Time | State          | Info                                                                                        |
+------+--------------+-----------+--------------+---------+------+----------------+---------------------------------------------------------------------------------------------+
| 1    | leechprotect | localhost | leechprotect | Sleep   | 507  |                |                                                                                             |
| 422  | face_book    | localhost | face_book    | Query   | 0    | Sending data   | SELECT * FROM likes WHERE userid='100002047302002' and pageid='113623891994626' Limit 1     |
| 440  | face_book    | localhost | face_book    | Query   | 0    | Sending data   | SELECT * FROM likes WHERE userid='100003610486105' and pageid='137067399778568' Limit 1     |
| 745  | face_book    | localhost | face_book    | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100000156154972' and pageid='259472294088694' Limit 1     |
| 813  | face_book    | localhost | face_book    | Query   | 1    | Sending data   | SELECT * FROM likes WHERE userid='100001079730269' and pageid='111612552199698' Limit 1     |
| 817  | face_book    | localhost | face_book    | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100003718366503' and pageid='105790599509795' Limit 1     |
| 888  | face_book    | localhost | face_book    | Query   | 0    | Sending data   | SELECT * FROM friends WHERE userid='100004673917946' and friendid='100004836366957' Limit 1 |
| 930  | face_book    | localhost | face_book    | Query   | 4    | Sending data   | SELECT * FROM likes WHERE userid='100001857826693' and pageid='379878825440539' Limit 1     |
| 940  | face_book    | localhost | face_book    | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100002525443288' and pageid='432454306781258' Limit 1     |
| 976  | face_book    | localhost | face_book    | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100001786746020' and pageid='266169045612' Limit 1        |
| 980  | face_book    | localhost | face_book    | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100000721604707' and pageid='188587591283392' Limit 1     |
| 999  | face_book    | localhost | face_book    | Query   | 1    | Sending data   | SELECT * FROM likes WHERE userid='100001661124181' and pageid='161323847303028' Limit 1     |
| 1033 | face_book    | localhost | face_book    | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100002273583367' and pageid='447287665321823' Limit 1     |
| 1064 | face_book    | localhost | face_book    | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100003902289965' and pageid='315361025251697' Limit 1     |
| 1100 | face_book    | localhost | face_book    | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100002720670629' and pageid='114370700612' Limit 1        |
| 1109 | face_book    | localhost | face_book    | Query   | 1    | Sending data   | SELECT * FROM friends WHERE userid='100002279885489' and friendid='100002385585461' Limit 1 |
| 1111 | face_book    | localhost | face_book    | Query   | 2    | Sending data   | SELECT * FROM friends WHERE userid='100001087100886' and friendid='100005456647732' Limit 1 |
| 1132 | face_book    | localhost | face_book    | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100005404566097' and pageid='225594034251253' Limit 1     |
| 1148 | face_book    | localhost | face_book    | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100005551654543' and pageid='104088939622341' Limit 1     |
| 1172 | face_book    | localhost | face_book    | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100000009185323' and pageid='110343285691930' Limit 1     |
| 1188 | face_book    | localhost | face_book    | Query   | 1    | Sending data   | SELECT * FROM likes WHERE userid='100003468150624' and pageid='182937471830173' Limit 1     |
| 1192 | face_book    | localhost | face_book    | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100000619411698' and pageid='527695070578211' Limit 1     |
| 1196 | face_book    | localhost | face_book    | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100002866966279' and pageid='244651575605946' Limit 1     |
| 1208 | face_book    | localhost | face_book    | Query   | 1    | Sending data   | SELECT * FROM friends WHERE userid='100001057034709' and friendid='1080136538' Limit 1      |
| 1230 | face_book    | localhost | face_book    | Query   | 1    | Sending data   | SELECT * FROM friends WHERE userid='100005246283878' and friendid='100002513789129' Limit 1 |
| 1240 | face_book    | localhost | face_book    | Query   | 2    | Sending data   | SELECT * FROM friends WHERE userid='100005028574119' and friendid='100001229276848' Limit 1 |
| 1241 | face_book    | localhost | face_book    | Query   | 2    | Sending data   | SELECT * FROM friends WHERE userid='1681467791' and friendid='1537753959' Limit 1           |
| 1242 | face_book    | localhost | face_book    | Query   | 0    | Sending data   | SELECT * FROM friends WHERE userid='100001845705855' and friendid='1668437534' Limit 1      |
| 1247 | face_book    | localhost | face_book    | Query   | 2    | Sending data   | SELECT * FROM friends WHERE userid='100003854517927' and friendid='100002322873932' Limit 1 |
| 1257 | face_book    | localhost | face_book    | Query   | 0    | Sorting result | SELECT lastpost,id FROM facesessions ORDER BY lastpost DESC                                 |
| 1276 | face_book    | localhost | face_book    | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100001579975664' and pageid='402312375509' Limit 1        |
| 1284 | face_book    | localhost | face_book    | Query   | 1    | Sending data   | SELECT * FROM likes WHERE userid='100001827038305' and pageid='254365179238' Limit 1        |
| 1291 | face_book    | localhost | face_book    | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='1587203387' and pageid='197678701083' Limit 1             |
| 1309 | face_book    | localhost | face_book    | Query   | 1    | Sending data   | SELECT * FROM friends WHERE userid='100002156769339' and friendid='100001641695726' Limit 1 |
| 1318 | face_book    | localhost | face_book    | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100002230633105' and pageid='343669279056732' Limit 1     |
| 1325 | face_book    | localhost | face_book    | Sleep   | 8    |                |                                                                                             |
| 1333 | face_book    | localhost | face_book    | Sleep   | 2    |                |                                                                                             |
| 1338 | face_book    | localhost | face_book    | Sleep   | 0    |                |                                                                                             |
| 1339 | root         | localhost |              | Query   | 0    |                | show processlist                                                                            |
+------+--------------+-----------+--------------+---------+------+----------------+---------------------------------------------------------------------------------------------+
Uptime: 508  Threads: 38  Questions: 65938  Slow queries: 0  Opens: 51156  Flush tables: 1  Open tables: 34  Queries per second avg: 129.799

And here is /etc/my.cnf:

[client]
socket=/var/lib/mysql/mysql.sock

[mysqld]
tmpdir=/mysqltmp

port = 3306
socket = /var/lib/mysql/mysql.sock
skip-networking
skip-external-locking
local-infile=0

back_log = 100
#skip-innodb
max_connections = 50
table_cache = 32
key_buffer_size = 12284M
myisam_sort_buffer_size = 512M
myisam_max_sort_file_size = 8192M
join_buffer_size = 512M
read_buffer_size = 512M
sort_buffer_size = 512M
read_rnd_buffer_size = 512M
table_definition_cache = 8000
table_open_cache = 8000
thread_cache_size = 4
wait_timeout = 360
interactive_timeout = 60
connect_timeout = 10
tmp_table_size = 256M
max_heap_table_size = 256M
max_allowed_packet = 512M
max_seeks_for_key = 1000
group_concat_max_len = 1024
max_length_for_sort_data = 1024
net_buffer_length = 16384
max_connect_errors = 100000
concurrent_insert = 2
bulk_insert_buffer_size = 8M
query_cache_limit = 1M
query_cache_size = 256M
query_cache_type = 1
query_prealloc_size = 262144
query_alloc_block_size = 65536
range_alloc_block_size = 4096
transaction_alloc_block_size = 8192
transaction_prealloc_size = 4096
default-storage-engine = MyISAM
max_write_lock_count = 8

innodb_open_files = 500
innodb_data_file_path = ibdata1:10M:autoextend
innodb_buffer_pool_size = 64M
innodb_additional_mem_pool_size = 32M

innodb_log_files_in_group = 2
innodb_log_file_size = 5M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2
innodb_thread_concurrency = 8

#slow_query_log=0
#long_query_time=1
#slow_query_log_file=/var/log/mysql/log-slow-queries.log

open_files_limit=50000

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
nice = -5
open-files-limit = 8192

[mysqldump]
quick
max_allowed_packet = 512M

[myisamchk]
key_buffer_size = 6400M
sort_buffer_size = 1M
read_buffer_size = 1M
write_buffer_size = 1M

[mysqlhotcopy]
interactive-timeout

I tried optimizing the tables and adjusting my.cnf based on mysqlreport, but it is still the same.

I don't use InnoDB.

MySQL version:

# mysql -V
mysql  Ver 14.14 Distrib 5.1.68, for unknown-linux-gnu (x86_64) using readline 5.1

mysql> SHOW CREATE TABLE friends\G

*************************** 1. row ***************************
       Table: friends
Create Table: CREATE TABLE `friends` (
  `id` int(100) unsigned NOT NULL AUTO_INCREMENT,
  `userid` mediumtext COLLATE latin1_general_ci,
  `friendid` mediumtext COLLATE latin1_general_ci,
  `name` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `dateline` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `lastsend` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `sendstatus` varchar(255) COLLATE latin1_general_ci NOT NULL,
  PRIMARY KEY (`id`),
  KEY `name` (`name`),
  KEY `lastsend` (`lastsend`),
  KEY `sendstatus` (`sendstatus`)
) ENGINE=MyISAM AUTO_INCREMENT=1079024 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci
1 row in set (0.00 sec)

mysql> SHOW CREATE TABLE likes\G

*************************** 1. row ***************************
       Table: likes
Create Table: CREATE TABLE `likes` (
  `id` int(100) unsigned NOT NULL AUTO_INCREMENT,
  `userid` mediumtext COLLATE latin1_general_ci,
  `pageid` mediumtext COLLATE latin1_general_ci,
  `name` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `link` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `dateline` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `lastsend` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `sendstatus` varchar(255) COLLATE latin1_general_ci NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=2008744 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci
1 row in set (0.00 sec)

mysql> SHOW CREATE TABLE facesessions\G

*************************** 1. row ***************************
       Table: facesessions
Create Table: CREATE TABLE `facesessions` (
  `id` int(100) unsigned NOT NULL AUTO_INCREMENT,
  `session_key` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `uid` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `expires` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `secret` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `access_token` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `sig` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `username` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `lastposttime` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `lastpost` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `nextsend` varchar(50) COLLATE latin1_general_ci DEFAULT NULL,
  `lastpoststatus` varchar(50) COLLATE latin1_general_ci DEFAULT NULL,
  `gender` varchar(20) COLLATE latin1_general_ci DEFAULT NULL,
  `birthday` varchar(20) COLLATE latin1_general_ci DEFAULT NULL,
  `location` varchar(50) COLLATE latin1_general_ci DEFAULT NULL,
  `imported` int(10) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `uid` (`uid`),
  KEY `access_token` (`access_token`),
  KEY `sig` (`sig`),
  KEY `username` (`username`),
  KEY `lastposttime` (`lastposttime`),
  KEY `lastpost` (`lastpost`),
  KEY `nextsend` (`nextsend`),
  KEY `lastpoststatus` (`lastpoststatus`)
) ENGINE=MyISAM AUTO_INCREMENT=16238 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci
1 row in set (0.00 sec)

mysql> SELECT SUM(index_length) FROM information_schema.tables WHERE engine='MyISAM';

+-------------------+
| SUM(index_length) |
+-------------------+
|         150684672 |
+-------------------+
1 row in set (0.01 sec)
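One observation worth illustrating: likes and friends have no index on the columns these hot queries filter on, and TEXT columns can only be indexed with an explicit prefix length. A hedged sketch (the prefix lengths are assumptions):

ALTER TABLE likes   ADD INDEX idx_user_page   (userid(20), pageid(20));
ALTER TABLE friends ADD INDEX idx_user_friend (userid(20), friendid(20));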

How do I specify a linked server in SQL Server 2008 R2 to a remote database over tcp/ip?

Posted: 20 Apr 2013 12:52 PM PDT

I have a remote database running at an IaaS provider, and I need to be able to execute joined queries from my local workstation. I am having trouble because, when I attempt to create the link, it tries to use named pipes for the connection.

I created a system DSN using the SQL Server Native Client 10 and fed it the IP, Database, User, and Password. Tested it and it was fine.

  1. Went to Linked Servers and Create New.
  2. Specified a name "MAIN"
  3. Selected SQL Server Native Client 10 in the combo box
  4. Entered SQL Server in Product Name Field
  5. Entered my DSN name in Data Source Field
  6. Entered SQLNCLI10 in Provider Name Field
  7. Entered my database name in the Catalog Field
  8. Went to the security options and specified my local to remote login pairing.

After clicking OK, I get an error that it can't find the server, and it shows that it is attempting to use named pipes.

What should I do to correct this?

Thanks
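For reference, a linked server can be pointed at TCP explicitly by embedding the protocol prefix in the data source; a hedged T-SQL sketch (address, port, and credentials are placeholders):

EXEC sp_addlinkedserver
    @server     = N'MAIN',
    @srvproduct = N'',
    @provider   = N'SQLNCLI10',
    @datasrc    = N'tcp:203.0.113.10,1433';

EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = N'MAIN',
    @useself     = 'FALSE',
    @locallogin  = NULL,
    @rmtuser     = N'remote_user',
    @rmtpassword = N'********';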

SQLCMD command is not able to insert accents

Posted: 20 Apr 2013 01:03 PM PDT

I am trying to run sqlcmd.exe in order to set up a new database from the command line. I am using SQL Server 2012 Express on 64-bit Windows 7.

Here's the command I use:

SQLCMD -S .\MSSQLSERVER08 -V 17 -E -i %~dp0\aqualogyDB.sql -o %~dp0\databaseCreationLog.log   

And here's a piece of the SQL creation script:

CREATE DATABASE aqualogy
    COLLATE Modern_Spanish_CI_AS
    WITH TRUSTWORTHY ON, DB_CHAINING ON;
GO
use aqualogy
GO
CREATE TABLE [dbo].[BaseLayers] (
    [Code] nchar(100) NOT NULL ,
    [Geometry] nvarchar(MAX) NOT NULL ,
    [IsActive] bit NOT NULL DEFAULT ((1))
)

EXEC sp_updateextendedproperty @name = N'MS_Description', @value = N'Capas de cartografía base de la aplicaicón. Consideramos en Galia Móvil la cartografía(...)'
    , @level0type = 'SCHEMA', @level0name = N'dbo'
    , @level1type = 'TABLE', @level1name = N'BaseLayers'

Note that some of the words contain accents; this is the table's description. The database is created with no problems, and COLLATE is understood by the script, as you can see in the attached screenshot. Despite this, accents are not shown properly when examining the table. [screenshot: Collation problem]

I'd really appreciate any help. Thank you very much.

How to insert binary data into a PostgreSQL BYTEA column using libpqxx?

Posted: 20 Apr 2013 05:15 PM PDT

I'm a newbie at SQL, and even greener when it comes to using the PostgreSQL C++ API libpqxx.

What I'd like to do is insert some binary data into a BYTEA column, but I find the Doxygen output is lacking in details. The fact that http://pqxx.org/ has been down for the past few days isn't helping me...

How would I go about inserting the contents of somefile.bin into a table with a BYTEA column?

What I have is along these lines:

pqxx::work work( conn );
work.exec( "CREATE TABLE test ( name varchar(20), data BYTEA )" );
work.exec( "INSERT INTO test( name, data ) VALUES ( 'foo', <insert filename.bin here> )" );
work.commit();

If it makes a difference, I'd like to use the new hex format for BYTEA available in PostgreSQL 9.1.
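For reference, the hex format mentioned above looks like this at the SQL level; the application still has to read somefile.bin and hex-encode its bytes into the literal (a minimal sketch, not the libpqxx answer itself):

-- '\x48656c6c6f' is the hex encoding of the bytes of "Hello"
INSERT INTO test (name, data)
VALUES ('foo', '\x48656c6c6f'::bytea);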

Using MongoDB and PostgreSQL together

Posted: 20 Apr 2013 08:54 AM PDT

My current project is essentially a run-of-the-mill document management system.

That said, there are some wrinkles (surprise, surprise). While some of the wrinkles are fairly specific to the project, I believe there are some general observations and questions that have come up which don't have a canonical answer (that I could find, anyway) and that are applicable to a wider problem domain. There's a lot here, and I'm not sure it's a good fit for the StackExchange Q&A format, but I think it is (a) an answerable question and (b) non-specific enough that it can benefit the community. Some of my considerations are specific to me, but I think the question could be of use to anyone faced with deciding between SQL, NoSQL, or both.

The background:

The web app we are building contains data that is clearly relational in nature as well as data that is document-oriented. We would like to have our cake and eat it too.

TL;DR: I think #5 below passes the smell test. Do you? Does anyone have experience with such an integration of SQL and NoSQL in a single application? I tried to list all the possible approaches to this class of problem below. Have I missed a promising alternative?

Complexities:

  • There are many different classes of documents. The requirements already call for dozens of different documents. This number will only ever go up. The best possible case would be one in which we could leverage a simple domain specific language, code generation and a flexible schema so that domain experts could handle the addition of new document classes without the intervention of DBAs or programmers. (Note: already aware we are living out Greenspun's Tenth Rule)
  • The integrity of previous successful writes is a central requirement of the project. The data will be business critical. Full ACID semantics on writes can be sacrificed provided that the things that do get successfully written stay written.
  • The documents are themselves complex. The prototype document in our specific case will require storage of 150+ distinct pieces of data per document instance. The pathological case could be an order of magnitude worse, but certainly not two.
  • A single class of documents is a moving target subject to updates at a later point in time.
  • We like the free stuff we get from Django when we hook it into a relational database. We would like to keep the freebies without having to jump back two Django versions to use the django-nonrel fork. Dumping the ORM entirely is preferable to downgrading to 1.3.

Essentially, it's a mishmash of relational data (your typical web app stuff like users, groups, etc., as well as document metadata that we'll need to be able to slice and dice with complex queries in realtime) and document data (e.g. the hundreds of fields which we have no interest in joining on or querying by - our only use case for the data will be for showing the single document into which it was entered).

I wanted to do a sanity check (if you check my posting history, I'm pretty explicit about the fact that I am not a DBA) on my preferred method as well as enumerate all of the options I've come across for others solving broadly similar problems involving both relational and non-relational data.

Proposed Solutions:

1. One table per document class

Each document class gets its own table, with columns for all metadata and data.

Advantages:

  • The standard SQL data model is in play.
  • Relational data is handled in the best possible way. We'll denormalize later if we need to.
  • Django's built-in admin interface is comfortable with introspecting these tables and the ORM can live happily with 100% of the data out of the box.

Disadvantages:

  • Maintenance nightmare. Dozens (hundreds?) of tables with (tens of?) thousands of columns.
  • Application-level logic responsible for deciding exactly which table to write to. Making the table name a parameter for a query stinks.
  • Basically all business logic changes will require schema changes.
  • Pathological cases might require striping data for single forms across multiple tables (see: What is the maximum number of columns in a PostgreSQL table?).
  • We would probably need to go find a real, honest-to-God DBA who would no doubt end up hating life and us.

2. EAV modeling

There is just a fields table. Entity-Attribute-Value modeling is already well understood. I've included it for completeness. I don't think any new project being started in 2013 would go with an EAV approach on purpose.

Advantages:

  • Easy to model.

Disadvantages:

  • More difficult to query.
  • DB layer no longer has a straight-forward representation for what constitutes one app-level object.
  • We would lose DB-level constraint checking.
  • Number of rows on one table will grow 100s-1000s of times faster. Likely future pain point, performance-wise.
  • Limited indexing possible.
  • DB schema is nonsensical as far as ORM is concerned. Batteries included web app stuff is preserved but custom data models are going to require custom queries.

3. Use PostgreSQL hstore or json fields

Either of these field types would do the trick for storing schemaless data within the context of a relational DB (a minimal sketch follows the lists below). The only reason I don't jump to this solution immediately is that it is relatively new (introduced in version 8.4, so not that new), I have zero previous exposure to it, and I am suspicious. It strikes me as wrong for precisely the same reasons I would feel uneasy throwing all my nice, easily normalized data into Mongo - even though Mongo can handle references between documents.

Advantages:

  • We get the benefits of the Django ORM and the built-in auth and session management.
  • Everything stays in one backend that we've previously used on other projects successfully.

Disadvantages:

  • No experience with this, personally.
  • It doesn't look like a very highly used feature. It looks like they get recommended quite a bit to people looking at NOSQL solutions but I don't see a lot of evidence that they are being chosen. This makes me think I must be missing something.
  • All values stored are strings. Lose DB-level constraint checking.
  • The data in the hstore will never be displayed to the user unless they specifically view a document, but the metadata stored in more standard columns will be. We will be beating that metadata up and I worry the rather large hstores we will be creating might come with performance drawbacks.
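The sketch promised above: a minimal hstore example, with table and field names that are assumptions, not part of the project:

CREATE EXTENSION hstore;

CREATE TABLE document (
    id       serial PRIMARY KEY,
    doc_type text NOT NULL,   -- ordinary, queryable metadata column
    fields   hstore           -- schemaless per-document values
);

INSERT INTO document (doc_type, fields)
VALUES ('intake_form', 'patient_name => "Anna", visit_count => "3"');

-- hstore values always come back as text
SELECT fields -> 'patient_name'
FROM document
WHERE doc_type = 'intake_form';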

4. Go full bore document-oriented

Make all the things documents (in the MongoDB sense). Create a single collection of type Document and call it a day. Bring all peripheral data (including data on user accounts, groups, etc) into mongo as well. This solution is obviously better than EAV modeling but it feels wrong to me for the same reason #3 felt wrong - they both feel like using your hammer as a screwdriver too.

Advantages:

  • No need to model data upfront. Have one collection with documents of type Document and call it a day.
  • Known good scaling characteristics, should the collection need to grow to encompass millions or even billions of documents.
  • JSON format (BSON) is intuitive for developers.
  • As I understand it (which is only vaguely at this point), by being paranoid with regard to write-concern level even a single instance can provide pretty strong data safety in the event of anything and everything up to a hard drive crash.

Disadvantages:

  • The ORM is out the window for Django trunk. Freebies that go out the window with it: the auth framework, the sessions framework, the admin interface, surely many other things.
  • Must either use mongo's referencing capabilities (which require multiple queries) or denormalize data. Not only do we lose freebies that we got from Django, we also lose freebies like JOINs we took for granted in PostgreSQL.
  • Data safety. When one reads about MongoDB, it seems there is always at least one person referring to how it will up and lose your data. They never cite a particular occurrence and it might all just be hogwash or just related to the old default fire and forget write-concern but it still worries me. We will of course be utilizing a fairly paranoid backup strategy in any case (if data is corrupted silently that could well be immaterial of course..).

5. PostgreSQL and MongoDB

Relational data goes in the relational database and document data goes in the document-oriented database. The documents table on the relational database contains all of the data we might need to index or slice and dice on as well as a MongoDB ObjectId which we would use when we needed to query for the actual values of the fields on the documents. We wouldn't be able to use the ORM or the built-in admin for the values of the documents themselves but that's not that big of a loss since the whole app is basically an admin interface for the documents and we would have likely had to customize that specific part of the ORM to an unacceptable degree to make it work just the way we need.

Advantages:

  • Each backend does only what it is good at.
  • References between models are preserved without requiring multiple queries.
  • We get to keep the batteries Django gave us as far as users, sessions, etc are concerned.
  • Only need one documents table no matter how many different classes of documents are created.
  • The less often queried document data is strongly separated from the far more often queried metadata.

Disadvantages:

  • Retrieving document data will require 2 sequential queries, first against the SQL DB and then against the MongoDB (though this is no worse than if the same data had been stored in Mongo and not denormalized)
  • Writing will no longer be atomic. A write against a single Mongo document is guaranteed to be atomic and PG obviously can make atomicity guarantees but ensuring atomicity of write across both will require application logic, no doubt with a performance and complexity penalty.
  • Two backends = two query languages = two different programs with dissimilar admin requirements = two databases vying for memory.

Should I snowflake or duplicate it across my facts?

Posted: 20 Apr 2013 08:04 PM PDT

I'm building up a data warehouse to be used by SSAS to create cubes on, and I'm debating between two possible schemas. In my warehouse, I've got two different fact tables that track daily changes in dollar values. Each entity in these fact tables has an underlying Sales Order and Line to which it relates. These SOs and Lines then have other related dimensions, such as customer, product, etc. About 12 sub-dimensions total so far.

My question is whether I should roll all these sub-dimensions up directly into the fact tables, or use a little snowflaking in my warehouse and have them branch off the Sales Order and Lines dimension instead.

The first option obviously follows a star-schema model better. However, if changes are made such as adding additional dimensions, it becomes more maintenance, basically having to do the ETL twice for each fact table, rather than just the once on the SO dimension. As well, if a new fact is added that relates to Sales Orders, I'd have to go through the whole process again.

As this is my first DW/OLAP project, I'm not familiar with where the line should be drawn on snowflaking, and other people's thoughts would be highly appreciated.

Landed as BI, but databases are a big WTF, what to do?

Posted: 20 Apr 2013 07:35 PM PDT

Maybe a duplicate, but I believe my case is a bit different. From one of the answers I got, this post on SQL Server Central also comes in handy, though it is not quite the same scenario: 9 Things to Do When You Inherit a Database.

A couple of weeks ago I started a new job. I'm supposed to be working as a BI analyst and SQL developer. But right from the first assignments I noticed that, in general, everything was taking a long time to execute. I asked the person guiding me through the first days, my supervisor you could say, and he told me that they know the databases are a mess. I asked if I could take a look and see what could be done, and got a yes as an answer.

So I began to dig in, using several really handy scripts.

What I've found is a big mess, as they told me. As an example, the blitzindex procedure returns almost 2000 rows with lots of duplicate indexes, NC indexes including all the columns from a table, lots of heap tables, really wide indexes, and more. As for backups, none has been done for several weeks; I asked about it, and the IT guys just copy the databases to a different server each night. A couple of databases are over 100 GB, and several others are close to that size too. Statistics are updated every day for every table. There are reports that take more than an hour to finish on not-so-big tables (just a couple of million rows). And so on.

As a test, I spent a couple of days tuning a few big tables and the procedures and queries that use them. I prepared a baseline using the profiler, then made a few changes and ran the test queries again. As expected, a report that was taking about 8 minutes now runs in around a minute, and a couple of other queries also now take less than half the time. All these changes were made on a test server; we still have a reporting server and a production server.

Taking into consideration that I'm supposed to be a BI and SQL developer with limited rights, new in the office, and not a DBA: what other actions do you recommend I take to approach this scenario? There is an appointed DBA, but he seems to be just an SQL developer doing some DBA tasks. There was a DBA, but I'm told he left around half a year ago. Should I forget about these issues? Or, as someone who is heavily using the database, should I point out the problems and propose solutions? Has anyone been in the same scenario?

Resume Logshipping after Primary database restored

Posted: 20 Apr 2013 10:02 AM PDT

We have a database (D1) as the primary database, with log shipping set up to a secondary database called LS_D1. What we do is take a backup of the D1 database and restore it to a D2 database, where we make schema and data changes for new release deployments.

What we want is to restore the D2 database over the D1 database and resume log shipping to the LS_D1 database without copying a backup file. Copying a backup file from the primary data center to the DR data center costs us too much money for every release, which is why we want to resume the existing log shipping.

When I tried to do this, the LSN numbers got out of sync. Is there any way to solve this problem? Or is there any alternative to log shipping that can provide us DR capability and avoid this problem?

Note: Any solution that requires only a one-time setup cost for log shipping would also be fine.

Thanks, Piyush Patel

Why does that query cause lock wait timeouts?

Posted: 20 Apr 2013 09:02 AM PDT

From time to time, I find a lot of these errors in my PHP error log:

MYSQL.1213: Deadlock found when trying to get lock; try restarting transaction

The problem persists for about 2 or 3 minutes. Thanks to stackoverflow, the reason was quite easy to find:

------------------------
LATEST DETECTED DEADLOCK
------------------------
130320 15:53:37
*** (1) TRANSACTION:
TRANSACTION 0 83395751, ACTIVE 35 sec, process no 26405, OS thread id 140507872417536 starting index read
mysql tables in use 3, locked 3
LOCK WAIT 3 lock struct(s), heap size 1216, 2 row lock(s)
MySQL thread id 1163191, query id 199629038 localhost sosci Updating
UPDATE `database`.`table` SET `invalidate`='2013-03-21 03:53:02' WHERE ((token='C7G8X3HABCDEFGH') AND (invalidate IS NULL)) AND (project=26118) LIMIT 1
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 65548 n bits 192 index `PRIMARY` of table `database`.`table` trx id 0 83395751 lock_mode X locks rec but not gap waiting
Record lock, heap no 4 PHYSICAL RECORD: n_fields 12; compact format; info bits 0
 0: len 4; hex 000c2591; asc   % ;; 1: len 6; hex 000004e36ace; asc     j ;; 2: len 7; hex 8000000a830110; asc        ;; 3: len 4; hex 80000001; asc     ;; 4: len 4; hex 80006606; asc   f ;; 5: len 1; hex 07; asc  ;; 6: len 16; hex 32455637363853485447444734584252; asc 2EV768SHTGDG4XBR;; 7: SQL NULL; 8: len 30; hex 3935363436362c656e672c616e6e612e63616d706f7265736940676d6169; asc 956466,eng,anna.camporesi@gmai;...(truncated); 9: SQL NULL; 10: len 8; hex 8000124ef477640e; asc    N wd ;; 11: len 8; hex 8000124ef495e88e; asc    N    ;;

*** (2) TRANSACTION:
TRANSACTION 0 83395676, ACTIVE 37 sec, process no 26405, OS thread id 140507856160512 fetching rows, thread declared inside InnoDB 451
mysql tables in use 1, locked 1
4 lock struct(s), heap size 1216, 53 row lock(s), undo log entries 1
MySQL thread id 1163198, query id 199628885 localhost sosci updating
DELETE FROM `database`.`table` WHERE ((action="limit") AND (info='login') AND (creation < DATE_SUB(NOW(), INTERVAL 10 MINUTE)))
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 0 page no 65548 n bits 192 index `PRIMARY` of table `database`.`table` trx id 0 83395676 lock_mode X
Record lock, heap no 4 PHYSICAL RECORD: n_fields 12; compact format; info bits 0
 0: len 4; hex 000c2591; asc   % ;; 1: len 6; hex 000004e36ace; asc     j ;; 2: len 7; hex 8000000a830110; asc        ;; 3: len 4; hex 80000001; asc     ;; 4: len 4; hex 80006606; asc   f ;; 5: len 1; hex 07; asc  ;; 6: len 16; hex 32455637363853485447444734584252; asc 2EV768SHTGDG4XBR;; 7: SQL NULL; 8: len 30; hex 3935363436362c656e672c616e6e612e63616d706f7265736940676d6169; asc 956466,eng,anna.camporesi@gmai;...(truncated); 9: SQL NULL; 10: len 8; hex 8000124ef477640e; asc    N wd ;; 11: len 8; hex 8000124ef495e88e; asc    N    ;;

*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 65548 n bits 192 index `PRIMARY` of table `database`.`table` trx id 0 83395676 lock_mode X waiting
Record lock, heap no 117 PHYSICAL RECORD: n_fields 12; compact format; info bits 0
 0: len 4; hex 000c31d2; asc   1 ;; 1: len 6; hex 000004f884fc; asc       ;; 2: len 7; hex 80000011040110; asc        ;; 3: len 4; hex 80000001; asc     ;; 4: SQL NULL; 5: len 1; hex 06; asc  ;; 6: SQL NULL; 7: len 15; hex 3133322e3139392e3132312e313632; asc 132.199.121.162;; 8: len 5; hex 6c6f67696e; asc login;; 9: len 1; hex 81; asc  ;; 10: len 8; hex 8000124ef49502aa; asc    N    ;; 11: SQL NULL;

*** WE ROLL BACK TRANSACTION (1)

What I do not understand is: why? The locked table is very small, only 61 entries (about 30 new and 30 deleted per day; the auto-incremented primary index is near 800,000). No column is especially large.

I use the InnoDB engine for this table (one key refers to another table with approx. 20,000 entries), and the problem occurs from time to time. RAM should not be an issue. The web server and MySQL server run on the same (virtual) machine, which usually does not suffer performance problems. Other transactions (there were thousands during the locked minutes) in large tables (1-2 million entries) did not cause any trouble.

Thanks for your hints!

Structuring data in MongoDB for a to-do list type application

Posted: 20 Apr 2013 06:08 PM PDT

This is a fairly high-level question, as I am in the initial stages of planning this application and am also very new to MongoDB and document-based databases in general. Disclaimer aside, I am building a to-do list type application whose two main types of data are users and items (aka tasks or to-dos), and my question is: what would be a good way to structure this data in MongoDB if I want it optimized more for read speed than for updates?

My initial thought, based on my limited knowledge of mongodb, is that since it allows embedded type data structures I could basically store it in a single collection like this:

{
    username: "joe",
    password: "...",
    items: [
        {
            text: "This is a task",
            completed: 1
        },
        ...
    ]
},
...

However, will this make it more difficult to update/remove existing items, since they are not themselves stored as separate documents and therefore lose things like auto-generated unique object IDs, etc.? Also, each user could potentially have a relatively large number of items associated with them (possibly thousands). Is embedding large lists in parent documents like that a bad idea?

Create Scheme that allows for fluidity between entities

Posted: 20 Apr 2013 11:03 AM PDT

We are a small rapidly evolving company and recently investigated switching databases from SQL Server to a graph database. We like the graph approach because of the fluidity and openness that it offered.

For instance, with a graph DB you can add or remove new relationships quite easily. However, in an RDBMS it is much more difficult.

Is it possible to have some or all of the fluidity in a RDBMS like SQL Server?
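One common way to approximate that fluidity in an RDBMS is a generic node/edge pair of tables, so new relationship types become rows rather than schema changes. A hedged sketch (table and column names are illustrative, not an established pattern in this project):

CREATE TABLE node (
    node_id   int IDENTITY PRIMARY KEY,
    node_type varchar(50)  NOT NULL,
    name      varchar(100) NOT NULL
);

CREATE TABLE edge (
    from_node int NOT NULL REFERENCES node(node_id),
    to_node   int NOT NULL REFERENCES node(node_id),
    edge_type varchar(50) NOT NULL,   -- new relationship kinds are just new values
    PRIMARY KEY (from_node, to_node, edge_type)
);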

Under what circumstances can a division expression reduce the number of rows in a query's result?

Posted: 20 Apr 2013 07:47 PM PDT

I'm experiencing a problem with a query that I've written.

Given a table with the schema:

create table somevals(division varchar(100),
                      subdivision varchar(100),
                      good float,
                      bad float);

I would like to find, for each unique (division, subdivision), the sum of good values, the sum of bad values, and the ratio of good to bad values. So, I wrote the query:

select division, subdivision, g, b, g / b as ratio
from (select division, subdivision, sum(good) as g, sum(bad) as b
      from somevals
      group by division, subdivision
     ) as temp

The problem is that this returns far fewer rows than I would expect. However, if I remove the division (g / b as ratio), the query returns the correct number of rows in the result (about three times as many). None of the columns contain NULL values. None of the bad values are less than or equal to zero, so sum(bad) will always be non-zero.

I am at a loss for how this could be happening. Under what circumstances can adding a division expression in the select statement reduce the number of rows in the result?

Schema design: Use of association (aka: bridge/junction) table vs foreign key constraint with composite index containing a non-key field

Posted: 20 Apr 2013 08:17 PM PDT

This is an inventory database for IT assets. The models used are trimmed in order to focus on the problem at hand. Using SQL Server 2008. Thanks for taking the time to read and for any input you can provide.

My design includes a Device table which holds the various devices that can be entered into inventory. Each device has a boolean flag, CanNetwork which states whether a device has network capability, e.g., for most computers CanNetwork = true, for hard drives CanNetwork = false; some printers will be true, and others will be false. You get the idea.

The CanNetwork field determines if network-related information is relevant when an inventory record is created.

Design 1

My first design puts an index on Device.DeviceID and Device.CanNetwork for use in a foreign key constraint with the Asset table.

Schema #1: Foreign key constraint with index

The NetworkStatus table looks like this in this setup:

+----------------------------------------------------------------------+
| NetworkStatusID | NetworkStatus  | NetworkStatusDescription          |
|----------------------------------------------------------------------|
| 1               | Connected      | Device connected to network.      |
| 2               | Not Connected  | Device not connected to network.  |
| 3               | Do Not Connect | Do not connect device to network. |
+----------------------------------------------------------------------+

I put check constraints on the Asset table as follows to ensure a network status and network information can only be provided if the device is capable of connecting to a network.

-- Asset table check constraint: CK_CanNetwork
-- If can't network, then network fields are null
CanNetwork = 0
AND NetworkStatusID IS NULL
AND Hostname IS NULL
AND IPAddress IS NULL
AND MACAddress IS NULL
OR CanNetwork = 1

-- Asset table check constraint: CK_NetworkStatus
-- If network status is "Connected", then must provide
-- a hostname or ip address
NetworkStatusID = 1 AND Hostname IS NOT NULL
OR NetworkStatusID = 1 AND IPAddress IS NOT NULL
OR NetworkStatusID <> 1

The issue I have with this design is that I'm not sure whether the relationship between Asset and Device.DeviceID/Device.CanNetwork is a good or bad design decision. Is propagating a non-key field like CanNetwork to other tables bad design? I don't have enough experience with database design to make an informed decision.


Design 2

In this design I thought I would use a bridge/association/junction table to decide which network statuses are valid for a device. It looks like this:

[diagram: Schema #2, with a DeviceNetworkStatus bridge table between Device and NetworkStatus]

The NetworkStatus table looks like this in this setup (notice the addition of the record with ID #4, and the ForCanNetwork field, which specifies whether a status is for use with devices that can connect to a network):

+--------------------------------------------------------------------------------------+
| NetworkStatusID | NetworkStatus  | NetworkStatusDescription          | ForCanNetwork |
|--------------------------------------------------------------------------------------|
| 1               | Connected      | Device connected to network.      | True  (1)     |
| 2               | Not Connected  | Device not connected to network.  | True  (1)     |
| 3               | Do Not Connect | Do not connect device to network. | True  (1)     |
| 4               | Incapable      | Cannot connect to networks.       | False (0)     |
+--------------------------------------------------------------------------------------+

Due to its granularity, this design could theoretically allow any mix of statuses for a device, but I wanted to control that, so I wrote triggers to insert only the correct mix of statuses depending on whether the device is network capable. The triggers are as follows:

-- NetworkStatus table on INSERT trigger
-- Adds a record for each device that
-- matches the capability of the network status
INSERT INTO DeviceNetworkStatus
SELECT i.NetworkStatusID, dev.DeviceID
FROM Device dev
CROSS JOIN inserted i
WHERE dev.CanNetwork = i.ForCanNetwork

-- Device table on INSERT trigger
-- Adds a record for each network status that
-- matches the capability of the new device
INSERT INTO DeviceNetworkStatus
SELECT ns.NetworkStatusID, i.DeviceID
FROM NetworkStatus ns
CROSS JOIN inserted i
WHERE ns.ForCanNetwork = i.CanNetwork

I used the following CHECK constraint on the Asset table:

-- Asset table check constraint: CK_NetworkStatus
-- If network status is "Connected", then must provide
-- a hostname or ip address
-- If network status is "Incapable", then network fields
-- must be null
NetworkStatusID = 1 AND Hostname IS NOT NULL
OR NetworkStatusID = 1 AND IPAddress IS NOT NULL
OR NetworkStatusID = 4 AND Hostname IS NULL
   AND IPAddress IS NULL AND MACAddress IS NULL
OR NetworkStatusID <> 1 AND NetworkStatusID <> 4

This design eliminates the need to propagate CanNetwork across the tables. The issue I see with this design is that every device that has network capability will have records in DeviceNetworkStatus paired with NetworkStatus ids 1, 2 and 3, while devices that can't connect to a network will be paired only with NetworkStatus id 4. It seems like a lot of extra records that all mean the same thing: devices that can be networked can only use statuses 1, 2 and 3, and devices that can't network only use 4. This design seems to be more "relationally correct", but also smells a bit.

Update

The following update proposes variations on Design 1. I come across situations like this often, where there are many ways to achieve the same end result. I never know how to tell if there are hidden problems with the designs, and I can't judge when to normalize or denormalize. Is one of these designs preferred over the other and why?

Design 1.1

Schema #1.1: NetworkAsset subtype

-- NetworkAsset table check constraint: CK_CanNetwork
CanNetwork = 1

-- NetworkAsset table check constraint: CK_NetworkStatus
-- If network status is "Connected", then must provide
-- a hostname or ip address
NetworkStatusID = 1 AND Hostname IS NOT NULL
OR NetworkStatusID = 1 AND IPAddress IS NOT NULL
OR NetworkStatusID <> 1

Design 1.2

Schema #1.2: NetworkAsset and NetworkDevice subtype

For the record, this design seems a bit absurd even to me, but this is my thought process. In this one, the presence of a DeviceID in the NetworkDevice table is equivalent to saying CanNetwork = true in Design 1. The NetworkAsset.NetworkDeviceID column has a foreign key constraint and is used to ensure only networkable devices are entered. This can be done using a CHECK constraint (see below) and by making NetworkDeviceID a computed column equal to DeviceID.

-- NetworkAsset table check constraint: CK_IsNetworkDevice
NetworkDeviceID = DeviceID

Partition Fact Table

Posted: 20 Apr 2013 04:03 PM PDT

One of my fact tables has close to 25 million records in it, and when the Cognos team tries to run reports on top of it, they take a lot of time. So I am thinking of partitioning the table. We store records in the fact table on a daily basis, and we have the id calaendarday dma column in the fact table with data type int. Is there a way I can partition this fact table based on id calaendarday dma?

Please help with the above; it is impacting the Cognos reports.
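For illustration, if this is SQL Server, a table is partitioned via a partition function and scheme on the key column; a hedged sketch assuming the column is an int in yyyymmdd form (names and boundary values are placeholders):

CREATE PARTITION FUNCTION pf_calendarday (int)
AS RANGE RIGHT FOR VALUES (20130101, 20130201, 20130301);

CREATE PARTITION SCHEME ps_calendarday
AS PARTITION pf_calendarday ALL TO ([PRIMARY]);

-- The fact table (or its clustered index) is then created/rebuilt on the scheme:
-- CREATE TABLE FactDaily ( ..., id_calendarday_dma int NOT NULL )
-- ON ps_calendarday (id_calendarday_dma);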

SQL as a Storage Definition Language (SDL)

Posted: 20 Apr 2013 08:39 PM PDT

In my Databases lecture, the teacher mentioned that SQL is not just a data definition language (DDL), data manipulation language (DML) and view definition language (VDL), but also a storage definition language (SDL). The last usage doesn't come with any further explanations or examples.

Searching the internet for extra information on using SQL as an SDL yields no relevant results. Could anyone give an example of using SQL as a storage definition language?

Edit:

Would a specification such as MySQL's SET storage_engine=MYISAM; count as a storage definition? It doesn't really relate to the data being stored, but rather, how it's stored.
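Along those lines, here are a couple of MySQL statements where the SQL describes physical storage rather than logical structure (a hedged illustration of the SDL idea, not taken from the lecture):

CREATE TABLE t (
    id   INT PRIMARY KEY,
    name VARCHAR(50)
) ENGINE = MyISAM;               -- storage clause: how the rows physically live

ALTER TABLE t ENGINE = InnoDB;   -- same logical table, different physical storage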

Does MySQL have problems with nested insert like with subqueries in where?

Posted: 20 Apr 2013 09:03 PM PDT

Query 1:

INSERT `personal`.`locations`
  SELECT DISTINCT `s`.*
    FROM `references` `t`
    JOIN `locations` `s` ON `first_id` = `s`.`id`
   WHERE `lat` >= 37.3
     AND `lat` <= 37.3
     AND `lng` >= -122.2
     AND `lng` <= -122.1
ON DUPLICATE KEY UPDATE
  `lat`  = `s`.`lat`,
  `lng`  = `s`.`lng`,
  `name` = `s`.`name`,
  `desr` = `s`.`desr`;

Query 2:

INSERT `personal`.`locations`
  SELECT DISTINCT `s`.*
    FROM `references` `t`
    JOIN `locations` `s` ON (`first_id` = `s`.`id` OR `second_id` = `s`.`id`)
   WHERE `lat` >= 37.3
     AND `lat` <= 37.3
     AND `lng` >= -122.2
     AND `lng` <= -122.1
ON DUPLICATE KEY UPDATE
  `lat`  = `s`.`lat`,
  `lng`  = `s`.`lng`,
  `name` = `s`.`name`,
  `desr` = `s`.`desr`;

The select in query 1 takes 0.008 seconds to select 4 million records.

The select in query 2 takes 1 second to select 300 thousand records.

Query 1 executes completely in 60 seconds.

Query 2 executes completely in 300 seconds.

The conclusion: MySQL seems to repeat the select for every insert, just as it does with subqueries in a WHERE clause.

Is there a solution?


Edit 1: Added new query

Query 3: a faster alternative, but still with the same issue

INSERT `personal`.`locations`
  SELECT DISTINCT `s`.*
    FROM `references` `t`
    JOIN `locations` `s` ON `first_id` = `s`.`id`
   WHERE `lat` >= 37.3
     AND `lat` <= 37.3
     AND `lng` >= -122.2
     AND `lng` <= -122.1
  UNION ALL
  SELECT DISTINCT `s`.*
    FROM `references` `t`
    JOIN `locations` `s` ON `second_id` = `s`.`id`
   WHERE `lat` >= 37.3
     AND `lat` <= 37.3
     AND `lng` >= -122.2
     AND `lng` <= -122.1
ON DUPLICATE KEY UPDATE
  `lat`  = `s`.`lat`,
  `lng`  = `s`.`lng`,
  `name` = `s`.`name`,
  `desr` = `s`.`desr`;

Slightly faster, despite the fact that it actually executes more updates; since the two selects run faster than the one they replace, it gains a bit of time.


Edit 2: Added table structure

CREATE TABLE IF NOT EXISTS `references` (
  `id`        bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `first_id`  bigint(20) unsigned NOT NULL DEFAULT '0',
  `second_id` bigint(20) unsigned NOT NULL DEFAULT '0',
  `name`      varchar(255) NOT NULL,
  `status`    enum('V','I','D') NOT NULL DEFAULT 'V',
  PRIMARY KEY (`id`),
  KEY `first_id`  (`first_id`),
  KEY `second_id` (`second_id`),
  KEY `status`    (`status`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

CREATE TABLE IF NOT EXISTS `locations` (
  `id`     bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `lat`    double NOT NULL DEFAULT '0',
  `lng`    double NOT NULL DEFAULT '0',
  `name`   varchar(40) NOT NULL DEFAULT '0',
  `desr`   varchar(254) NOT NULL DEFAULT '0',
  `status` enum('V','I','D') NOT NULL DEFAULT 'V',
  PRIMARY KEY (`id`),
  KEY `lat`    (`lat`),
  KEY `lng`    (`lng`),
  KEY `status` (`status`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

Each reference has to have at least one location associated with it, based on first_id. The second is optional, but it does exist for 70%-80% of the records. Each location can be associated with multiple references.
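If the select really is being re-evaluated per inserted row, one workaround sketch (reusing the table names from the post; VALUES() is MySQL's documented way to reference the incoming row in ON DUPLICATE KEY UPDATE) is to materialize the select once into a temporary table and insert from that:

CREATE TEMPORARY TABLE `tmp_locs` AS
  SELECT DISTINCT `s`.*
    FROM `references` `t`
    JOIN `locations` `s` ON (`first_id` = `s`.`id` OR `second_id` = `s`.`id`)
   WHERE `lat` >= 37.3 AND `lat` <= 37.3
     AND `lng` >= -122.2 AND `lng` <= -122.1;

-- The select above runs exactly once; the insert then reads local rows.
INSERT `personal`.`locations`
  SELECT * FROM `tmp_locs`
ON DUPLICATE KEY UPDATE
  `lat`  = VALUES(`lat`),
  `lng`  = VALUES(`lng`),
  `name` = VALUES(`name`),
  `desr` = VALUES(`desr`);

DROP TEMPORARY TABLE `tmp_locs`;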

MySQL Stalls and stops all queries

Posted: 20 Apr 2013 12:03 PM PDT

We occasionally have our servers stall and back up queries for 3-5 minutes; then everything returns to normal, but the stall locks up all users.

We built a new server with the binlogs and log files on a separate disk from the data, made the data volume RAID 10, and made the configuration changes suggested by Oracle. The problem is not as frequent, but it still happens.

We have a query that runs once per minute for each user (about 150 users usually). These run all day long very quickly, but they back up fast when this stall happens. We have analyzed our slow query log, added indexes, and fixed some slow queries, and we no longer see any of them causing this. We can go days without it now, but then, bam, it is back.

SHOW GLOBAL STATUS

Variable_name   Value  Aborted_clients 4407  Aborted_connects    1551  Binlog_cache_disk_use   184072  Binlog_cache_use    661894  Binlog_stmt_cache_disk_use  0  Binlog_stmt_cache_use   0  Bytes_received  19063442840  Bytes_sent  132853133182  Com_admin_commands  6634187  Com_assign_to_keycache  0  Com_alter_db    0  Com_alter_db_upgrade    0  Com_alter_event 0  Com_alter_function  0  Com_alter_procedure 0  Com_alter_server    0  Com_alter_table 8  Com_alter_tablespace    0  Com_analyze 0  Com_begin   704470  Com_binlog  0  Com_call_procedure  6722493  Com_change_db   6658277  Com_change_master   0  Com_check   0  Com_checksum    0  Com_commit  1430211  Com_create_db   0  Com_create_event    0  Com_create_function 0  Com_create_index    62  Com_create_procedure    2882  Com_create_server   0  Com_create_table    2  Com_create_trigger  1  Com_create_udf  0  Com_create_user 0  Com_create_view 0  Com_dealloc_sql 6112846  Com_delete  11113  Com_delete_multi    0  Com_do  0  Com_drop_db 0  Com_drop_event  0  Com_drop_function   0  Com_drop_index  9  Com_drop_procedure  2882  Com_drop_server 0  Com_drop_table  0  Com_drop_trigger    1  Com_drop_user   0  Com_drop_view   0  Com_empty_query 0  Com_execute_sql 6112973  Com_flush   0  Com_grant   1  Com_ha_close    0  Com_ha_open 0  Com_ha_read 0  Com_help    0  Com_insert  89369  Com_insert_select   1  Com_install_plugin  0  Com_kill    521  Com_load    0  Com_lock_tables 0  Com_optimize    0  Com_preload_keys    0  Com_prepare_sql 6112973  Com_purge   0  Com_purge_before_date   0  Com_release_savepoint   0  Com_rename_table    0  Com_rename_user 0  Com_repair  0  Com_replace 0  Com_replace_select  0  Com_reset   0  Com_resignal    0  Com_revoke  0  Com_revoke_all  0  Com_rollback    218  Com_rollback_to_savepoint   0  Com_savepoint   0  Com_select  43871825  Com_set_option  8940769  Com_signal  0  Com_show_authors    0  Com_show_binlog_events  0  Com_show_binlogs    0  Com_show_charsets   1145  Com_show_collations 24150  Com_show_contributors   0  Com_show_create_db  0  Com_show_create_event   0  Com_show_create_func    0  Com_show_create_proc    6170775  Com_show_create_table   0  Com_show_create_trigger 0  Com_show_databases  8  Com_show_engine_logs    0  Com_show_engine_mutex   174  Com_show_engine_status  174  Com_show_events 0  Com_show_errors 0  Com_show_fields 253582  Com_show_function_status    1  Com_show_grants 1  Com_show_keys   4359  Com_show_master_status  0  Com_show_open_tables    174  Com_show_plugins    88  Com_show_privileges 0  Com_show_procedure_status   1  Com_show_processlist    16374  Com_show_profile    0  Com_show_profiles   0  Com_show_relaylog_events    0  Com_show_slave_hosts    0  Com_show_slave_status   0  Com_show_status 23158  Com_show_storage_engines    0  Com_show_table_status   4356  Com_show_tables 85  Com_show_triggers   0  Com_show_variables  25102  Com_show_warnings   69  Com_slave_start 0  Com_slave_stop  0  Com_stmt_close  6112851  Com_stmt_execute    6112975  Com_stmt_fetch  0  Com_stmt_prepare    6112975  Com_stmt_reprepare  0  Com_stmt_reset  0  Com_stmt_send_long_data 0  Com_truncate    3  Com_uninstall_plugin    0  Com_unlock_tables   0  Com_update  665414  Com_update_multi    301  Com_xa_commit   0  Com_xa_end  0  Com_xa_prepare  0  Com_xa_recover  0  Com_xa_rollback 0  Com_xa_start    0  Compression OFF  Connections 28988  Created_tmp_disk_tables 997343  Created_tmp_files   3428  Created_tmp_tables  1103738  Delayed_errors  0  Delayed_insert_threads  0  Delayed_writes  0  Flush_commands  1  
Handler_commit  40551793  Handler_delete  21962  Handler_discover    0  Handler_prepare 1183972  Handler_read_first  2803361  Handler_read_key    1817223370  Handler_read_last   299  Handler_read_next   2192159478  Handler_read_prev   2918171  Handler_read_rnd    4979417  Handler_read_rnd_next   1025367957  Handler_rollback    3129  Handler_savepoint   0  Handler_savepoint_rollback  0  Handler_update  480507  Handler_write   69142091  Innodb_buffer_pool_pages_data   424551  Innodb_buffer_pool_pages_dirty  13  Innodb_buffer_pool_pages_flushed    1382087  Innodb_buffer_pool_pages_free   2185371  Innodb_buffer_pool_pages_misc   11518  Innodb_buffer_pool_pages_total  2621440  Innodb_buffer_pool_read_ahead_rnd   0  Innodb_buffer_pool_read_ahead   57049  Innodb_buffer_pool_read_ahead_evicted   0  Innodb_buffer_pool_read_requests    1032232686  Innodb_buffer_pool_reads    353952  Innodb_buffer_pool_wait_free    0  Innodb_buffer_pool_write_requests   4456721  Innodb_data_fsyncs  1218430  Innodb_data_pending_fsyncs  0  Innodb_data_pending_reads   0  Innodb_data_pending_writes  0  Innodb_data_read    2617249792  Innodb_data_reads   434696  Innodb_data_writes  2030236  Innodb_data_written 1855358976  Innodb_dblwr_pages_written  0  Innodb_dblwr_writes 0  Innodb_have_atomic_builtins ON  Innodb_log_waits    0  Innodb_log_write_requests   978061  Innodb_log_writes   624738  Innodb_os_log_fsyncs    647788  Innodb_os_log_pending_fsyncs    0  Innodb_os_log_pending_writes    0  Innodb_os_log_written   674286592  Innodb_page_size    16384  Innodb_pages_created    4042  Innodb_pages_read   421754  Innodb_pages_written    1382087  Innodb_row_lock_current_waits   0  Innodb_row_lock_time    265740  Innodb_row_lock_time_avg    20441  Innodb_row_lock_time_max    34256  Innodb_row_lock_waits   13  Innodb_rows_deleted 20577  Innodb_rows_inserted    486519  Innodb_rows_read    552198569  Innodb_rows_updated 252163  Innodb_truncated_status_writes  0  Key_blocks_not_flushed  0  Key_blocks_unused   201000  Key_blocks_used 13343  Key_read_requests   182851753  Key_reads   13242  Key_write_requests  8998  Key_writes  8990  Last_query_cost 0.000000  Max_used_connections    606  Not_flushed_delayed_rows    0  Open_files  177  Open_streams    0  Open_table_definitions  10000  Open_tables 10000  Opened_files    4008968  Opened_table_definitions    15849  Opened_tables   18859  Performance_schema_cond_classes_lost    0  Performance_schema_cond_instances_lost  0  Performance_schema_file_classes_lost    0  Performance_schema_file_handles_lost    0  Performance_schema_file_instances_lost  0  Performance_schema_locker_lost  0  Performance_schema_mutex_classes_lost   0  Performance_schema_mutex_instances_lost 0  Performance_schema_rwlock_classes_lost  0  Performance_schema_rwlock_instances_lost    0  Performance_schema_table_handles_lost   0  Performance_schema_table_instances_lost 0  Performance_schema_thread_classes_lost  0  Performance_schema_thread_instances_lost    0  Prepared_stmt_count 25  Qcache_free_blocks  0  Qcache_free_memory  0  Qcache_hits 0  Qcache_inserts  0  Qcache_lowmem_prunes    0  Qcache_not_cached   0  Qcache_queries_in_cache 0  Qcache_total_blocks 0  Queries 89665502  Questions   62813329  Rpl_status  AUTH_MASTER  Select_full_join    10321  Select_full_range_join  2428  Select_range    7770260  Select_range_check  0  Select_scan 2796956  Slave_heartbeat_period  0.000  Slave_open_temp_tables  0  Slave_received_heartbeats   0  Slave_retried_transactions  0  Slave_running   OFF  Slow_launch_threads 0  
Slow_queries    2382879  Sort_merge_passes   23  Sort_range  153943  Sort_rows   223361444  Sort_scan   255572  Ssl_accept_renegotiates 0  Ssl_accepts 0  Ssl_callback_cache_hits 0  Ssl_cipher    Ssl_cipher_list   Ssl_client_connects 0  Ssl_connect_renegotiates    0  Ssl_ctx_verify_depth    0  Ssl_ctx_verify_mode 0  Ssl_default_timeout 0  Ssl_finished_accepts    0  Ssl_finished_connects   0  Ssl_session_cache_hits  0  Ssl_session_cache_misses    0  Ssl_session_cache_mode  NONE  Ssl_session_cache_overflows 0  Ssl_session_cache_size  0  Ssl_session_cache_timeouts  0  Ssl_sessions_reused 0  Ssl_used_session_cache_entries  0  Ssl_verify_depth    0  Ssl_verify_mode 0  Ssl_version   Table_locks_immediate   47028873  Table_locks_waited  214  Tc_log_max_pages_used   0  Tc_log_page_size    0  Tc_log_page_waits   0  Threads_cached  261  Threads_connected   345  Threads_created 606  Threads_running 34  Uptime  309942  Uptime_since_flush_status   309942  

SHOW ENGINE INNODB STATUS

*************************** 1. row ***************************
  Type: InnoDB
  Name:
Status:
=====================================
121219 17:19:36 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 16 seconds

BACKGROUND THREAD

srv_master_thread loops: 239642 1_second, 239639 sleeps, 23598 10_second, 3673 background, 3673 flush
srv_master_thread log flush and writes: 240469

SEMAPHORES

OS WAIT ARRAY INFO: reservation count 78879, signal count 3065681
Mutex spin waits 702797, rounds 2282568, OS waits 49863
RW-shared spins 842193, rounds 2862837, OS waits 26976
RW-excl spins 434256, rounds 3777756, OS waits 1580
Spin rounds per wait: 3.25 mutex, 3.40 RW-shared, 8.70 RW-excl

FILE I/O

I/O thread 0 state: wait Windows aio (insert buffer thread)
I/O thread 1 state: wait Windows aio (log thread)
I/O thread 2 state: wait Windows aio (read thread)
I/O thread 3 state: wait Windows aio (read thread)
I/O thread 4 state: wait Windows aio (read thread)
I/O thread 5 state: wait Windows aio (read thread)
I/O thread 6 state: wait Windows aio (read thread)
I/O thread 7 state: wait Windows aio (read thread)
I/O thread 8 state: wait Windows aio (read thread)
I/O thread 9 state: wait Windows aio (read thread)
I/O thread 10 state: wait Windows aio (read thread)
I/O thread 11 state: wait Windows aio (read thread)
I/O thread 12 state: wait Windows aio (read thread)
I/O thread 13 state: wait Windows aio (read thread)
I/O thread 14 state: wait Windows aio (read thread)
I/O thread 15 state: wait Windows aio (read thread)
I/O thread 16 state: wait Windows aio (read thread)
I/O thread 17 state: wait Windows aio (read thread)
I/O thread 18 state: wait Windows aio (write thread)
I/O thread 19 state: wait Windows aio (write thread)
I/O thread 20 state: wait Windows aio (write thread)
I/O thread 21 state: wait Windows aio (write thread)
I/O thread 22 state: wait Windows aio (write thread)
I/O thread 23 state: wait Windows aio (write thread)
I/O thread 24 state: wait Windows aio (write thread)
I/O thread 25 state: wait Windows aio (write thread)
I/O thread 26 state: wait Windows aio (write thread)
I/O thread 27 state: wait Windows aio (write thread)
I/O thread 28 state: wait Windows aio (write thread)
I/O thread 29 state: wait Windows aio (write thread)
I/O thread 30 state: wait Windows aio (write thread)
I/O thread 31 state: wait Windows aio (write thread)
I/O thread 32 state: wait Windows aio (write thread)
I/O thread 33 state: wait Windows aio (write thread)
Pending normal aio reads: 0 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], aio writes: 0 [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
434696 OS file reads, 2030236 OS file writes, 1218430 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 19.19 writes/s, 10.50 fsyncs/s

INSERT BUFFER AND ADAPTIVE HASH INDEX

Ibuf: size 1, free list len 27, seg size 29, 11681 merges
merged operations:
 insert 15777, delete mark 12559, delete 16
discarded operations:
 insert 0, delete mark 0, delete 0
Hash table size 84999163, node heap has 11512 buffer(s)
26842.82 hash searches/s, 2386.60 non-hash searches/s

LOG

Log sequence number 15116880820
Log flushed up to   15116880820
Last checkpoint at  15116877234
0 pending log writes, 0 pending chkp writes
647781 log i/o's done, 5.56 log i/o's/second

BUFFER POOL AND MEMORY

Total memory allocated 43956305920; in additional pool allocated 0
Dictionary memory allocated 181322283
Buffer pool size   2621440
Free buffers       2185371
Database pages     424551
Old database pages 156874
Modified db pages  13
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 978, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 421754, created 4042, written 1382087
0.00 reads/s, 0.06 creates/s, 13.62 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 424551, unzip_LRU len: 0
I/O sum[0]:cur[0], unzip sum[0]:cur[0]

INDIVIDUAL BUFFER POOL INFO

---BUFFER POOL 0  Buffer pool size   327680  Free buffers       275235  Database pages     51011  Old database pages 18850  Modified db pages  2  Pending reads 0  Pending writes: LRU 0, flush list 0, single page 0  Pages made young 125, not young 0  0.00 youngs/s, 0.00 non-youngs/s  Pages read 50633, created 461, written 181321  0.00 reads/s, 0.00 creates/s, 1.69 writes/s  Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000  Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s  LRU len: 51011, unzip_LRU len: 0  I/O sum[0]:cur[0], unzip sum[0]:cur[0]  ---BUFFER POOL 1  Buffer pool size   327680  Free buffers       274121  Database pages     52133  Old database pages 19264  Modified db pages  0  Pending reads 0  Pending writes: LRU 0, flush list 0, single page 0  Pages made young 120, not young 0  0.00 youngs/s, 0.00 non-youngs/s  Pages read 51823, created 442, written 98744  0.00 reads/s, 0.00 creates/s, 0.31 writes/s  Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000  Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s  LRU len: 52133, unzip_LRU len: 0  I/O sum[0]:cur[0], unzip sum[0]:cur[0]  ---BUFFER POOL 2  Buffer pool size   327680  Free buffers       272641  Database pages     53580  Old database pages 19798  Modified db pages  1  Pending reads 0  Pending writes: LRU 0, flush list 0, single page 0  Pages made young 117, not young 0  0.00 youngs/s, 0.00 non-youngs/s  Pages read 53212, created 531, written 156145  0.00 reads/s, 0.00 creates/s, 1.94 writes/s  Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000  Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s  LRU len: 53580, unzip_LRU len: 0  I/O sum[0]:cur[0], unzip sum[0]:cur[0]  ---BUFFER POOL 3  Buffer pool size   327680  Free buffers       272101  Database pages     54139  Old database pages 20004  Modified db pages  1  Pending reads 0  Pending writes: LRU 0, flush list 0, single page 0  Pages made young 109, not young 0  0.00 youngs/s, 0.00 non-youngs/s  Pages read 53880, created 451, written 232510  0.00 reads/s, 0.00 creates/s, 1.94 writes/s  Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000  Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s  LRU len: 54139, unzip_LRU len: 0  I/O sum[0]:cur[0], unzip sum[0]:cur[0]  ---BUFFER POOL 4  Buffer pool size   327680  Free buffers       270671  Database pages     55579  Old database pages 20536  Modified db pages  4  Pending reads 0  Pending writes: LRU 0, flush list 0, single page 0  Pages made young 122, not young 0  0.00 youngs/s, 0.00 non-youngs/s  Pages read 55213, created 534, written 219942  0.00 reads/s, 0.00 creates/s, 2.25 writes/s  Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000  Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s  LRU len: 55579, unzip_LRU len: 0  I/O sum[0]:cur[0], unzip sum[0]:cur[0]  ---BUFFER POOL 5  Buffer pool size   327680  Free buffers       272982  Database pages     53257  Old database pages 19678  Modified db pages  1  Pending reads 0  Pending writes: LRU 0, flush list 0, single page 0  Pages made young 118, not young 0  0.00 youngs/s, 0.00 non-youngs/s  Pages read 52863, created 567, written 186229  0.00 reads/s, 0.06 creates/s, 2.06 writes/s  Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000  Pages read ahead 0.00/s, evicted without access 0.00/s, Random read 
ahead 0.00/s  LRU len: 53257, unzip_LRU len: 0  I/O sum[0]:cur[0], unzip sum[0]:cur[0]  ---BUFFER POOL 6  Buffer pool size   327680  Free buffers       273770  Database pages     52463  Old database pages 19386  Modified db pages  3  Pending reads 0  Pending writes: LRU 0, flush list 0, single page 0  Pages made young 117, not young 0  0.00 youngs/s, 0.00 non-youngs/s  Pages read 52078, created 577, written 181167  0.00 reads/s, 0.00 creates/s, 1.56 writes/s  Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000  Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s  LRU len: 52463, unzip_LRU len: 0  I/O sum[0]:cur[0], unzip sum[0]:cur[0]  ---BUFFER POOL 7  Buffer pool size   327680  Free buffers       273850  Database pages     52389  Old database pages 19358  Modified db pages  1  Pending reads 0  Pending writes: LRU 0, flush list 0, single page 0  Pages made young 150, not young 0  0.00 youngs/s, 0.00 non-youngs/s  Pages read 52052, created 479, written 126029  0.00 reads/s, 0.00 creates/s, 1.87 writes/s  Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000  Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s  LRU len: 52389, unzip_LRU len: 0  I/O sum[0]:cur[0], unzip sum[0]:cur[0]  

ROW OPERATIONS

0 queries inside InnoDB, 0 queries in queue
26 read views open inside InnoDB
Main thread id 4032, state: sleeping
Number of rows inserted 486519, updated 252163, deleted 20577, read 4847166839
1.37 inserts/s, 1.87 updates/s, 0.06 deletes/s, 54865.57 reads/s

SHOW ENGINE INNODB MUTEX

Type    Name    Status  InnoDB  ibuf0ibuf.c:534 os_waits=6  InnoDB  dict0dict.c:699 os_waits=53  InnoDB  log0log.c:775   os_waits=4  InnoDB  log0log.c:771   os_waits=11076  InnoDB  buf0buf.c:1166  os_waits=15  InnoDB  buf0buf.c:1166  os_waits=13  InnoDB  buf0buf.c:1166  os_waits=6  InnoDB  buf0buf.c:1166  os_waits=10  InnoDB  buf0buf.c:1166  os_waits=34  InnoDB  buf0buf.c:1166  os_waits=5  InnoDB  buf0buf.c:1166  os_waits=4  InnoDB  buf0buf.c:1166  os_waits=24  InnoDB  fil0fil.c:1610  os_waits=26671  InnoDB  srv0srv.c:1018  os_waits=11942  InnoDB  sync0sync.c:1527    os_waits=2  InnoDB  dict0dict.c:1726    os_waits=1  InnoDB  dict0dict.c:1726    os_waits=3  InnoDB  dict0dict.c:1726    os_waits=1  InnoDB  dict0dict.c:1726    os_waits=4  InnoDB  dict0dict.c:1726    os_waits=4  InnoDB  dict0dict.c:1726    os_waits=4  InnoDB  dict0dict.c:1726    os_waits=1  InnoDB  dict0dict.c:1726    os_waits=1  InnoDB  dict0dict.c:1726    os_waits=5  InnoDB  dict0dict.c:1726    os_waits=7  InnoDB  dict0dict.c:1726    os_waits=6  InnoDB  dict0dict.c:1726    os_waits=9  InnoDB  dict0dict.c:1726    os_waits=11  InnoDB  dict0dict.c:1726    os_waits=4  InnoDB  dict0dict.c:1726    os_waits=13  InnoDB  dict0dict.c:1726    os_waits=1  InnoDB  dict0dict.c:1726    os_waits=1  InnoDB  dict0dict.c:1726    os_waits=4  InnoDB  dict0dict.c:1726    os_waits=1  InnoDB  dict0dict.c:1726    os_waits=1  InnoDB  dict0dict.c:1726    os_waits=2  InnoDB  dict0dict.c:1726    os_waits=2  InnoDB  dict0dict.c:1726    os_waits=3  InnoDB  dict0dict.c:1726    os_waits=4  InnoDB  dict0dict.c:1726    os_waits=2  InnoDB  dict0dict.c:1726    os_waits=2  InnoDB  dict0dict.c:1726    os_waits=3  InnoDB  dict0dict.c:1726    os_waits=16  InnoDB  dict0dict.c:722 os_waits=14  InnoDB  dict0dict.c:722 os_waits=6  InnoDB  dict0dict.c:722 os_waits=7  InnoDB  dict0dict.c:722 os_waits=6  InnoDB  dict0dict.c:722 os_waits=22  InnoDB  dict0dict.c:722 os_waits=18  InnoDB  dict0dict.c:722 os_waits=10  InnoDB  dict0dict.c:722 os_waits=23  InnoDB  dict0dict.c:722 os_waits=18  InnoDB  dict0dict.c:722 os_waits=20  InnoDB  dict0dict.c:722 os_waits=15  InnoDB  dict0dict.c:722 os_waits=13  InnoDB  dict0dict.c:722 os_waits=5  InnoDB  dict0dict.c:722 os_waits=12  InnoDB  dict0dict.c:722 os_waits=13  InnoDB  dict0dict.c:722 os_waits=309  InnoDB  dict0dict.c:722 os_waits=6  InnoDB  dict0dict.c:722 os_waits=16  InnoDB  dict0dict.c:722 os_waits=15  InnoDB  dict0dict.c:722 os_waits=22  InnoDB  dict0dict.c:722 os_waits=33  InnoDB  dict0dict.c:722 os_waits=21  InnoDB  dict0dict.c:722 os_waits=10  InnoDB  dict0dict.c:722 os_waits=5  InnoDB  dict0dict.c:722 os_waits=20  InnoDB  dict0dict.c:722 os_waits=6  InnoDB  dict0dict.c:722 os_waits=12  InnoDB  dict0dict.c:722 os_waits=7  InnoDB  dict0dict.c:722 os_waits=35  InnoDB  dict0dict.c:722 os_waits=13  InnoDB  dict0dict.c:722 os_waits=3  InnoDB  dict0dict.c:722 os_waits=14  InnoDB  dict0dict.c:722 os_waits=9  InnoDB  dict0dict.c:722 os_waits=20  InnoDB  dict0dict.c:722 os_waits=10  InnoDB  dict0dict.c:722 os_waits=11  InnoDB  dict0dict.c:722 os_waits=11  InnoDB  dict0dict.c:722 os_waits=13  InnoDB  dict0dict.c:722 os_waits=6  InnoDB  dict0dict.c:722 os_waits=13  InnoDB  dict0dict.c:722 os_waits=11  InnoDB  dict0dict.c:722 os_waits=5  InnoDB  dict0dict.c:722 os_waits=14  InnoDB  dict0dict.c:722 os_waits=10  InnoDB  dict0dict.c:722 os_waits=28  InnoDB  dict0dict.c:722 os_waits=17  InnoDB  dict0dict.c:722 os_waits=10  InnoDB  dict0dict.c:722 os_waits=10  InnoDB  dict0dict.c:722 os_waits=13  InnoDB  dict0dict.c:722 os_waits=18  InnoDB 
 dict0dict.c:722 os_waits=7  InnoDB  dict0dict.c:722 os_waits=13  InnoDB  dict0dict.c:722 os_waits=7  InnoDB  dict0dict.c:722 os_waits=22  InnoDB  dict0dict.c:722 os_waits=8  InnoDB  dict0dict.c:722 os_waits=32  InnoDB  dict0dict.c:722 os_waits=18  InnoDB  dict0dict.c:722 os_waits=10  InnoDB  dict0dict.c:722 os_waits=16  InnoDB  dict0dict.c:722 os_waits=10  InnoDB  dict0dict.c:722 os_waits=7  InnoDB  dict0dict.c:722 os_waits=13  InnoDB  dict0dict.c:722 os_waits=22  InnoDB  dict0dict.c:722 os_waits=15  InnoDB  log0log.c:832   os_waits=23038  InnoDB  btr0sea.c:178   os_waits=2965  InnoDB  combined buf0buf.c:916  os_waits=1261  

Linked Server Query Results: OPENQUERY vs Distributed Transactions

Posted: 20 Apr 2013 07:03 PM PDT

I have two databases. They are on different servers. One is named REPORTDB and the other is named TRANSDB. TRANSDB is referenced as a linked server on REPORTDB named TRANS.

I execute a query like this from REPORTDB, using four-part naming and distributed transactions:

SET @val = ''
SET @DBName = 'TRANSDB'

SELECT @val = @val + 'CREATE SYNONYM ' + [name] + ' for ' + @DBName + '.dbo.'
            + [name] + CHAR(10) + CHAR(13)
FROM TRANS.db.sys.tables;

I expect this query to generate a CREATE SYNONYM statement for each table in TRANSDB inside my REPORTDB (Table1, Table2, Table3). However, it only creates one statement, for the last table returned in the result set from TRANSDB (Table3).

The following, using OPENQUERY, correctly returns three CREATE SYNONYM statements for Table1, Table2, and Table3:

SET @val = ''
SET @DBName = 'TRANSDB'

SELECT @val = @val + 'CREATE SYNONYM ' + [name] + ' for ' + @DBName + '.dbo.'
            + [name] + CHAR(10) + CHAR(13)
FROM OPENQUERY( TRANS, 'select [name] FROM db.sys.tables')

Since I do not wish to use OPENQUERY, how can I get the distributed query using four-part naming to return results correctly?
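One hedged workaround sketch that keeps four-part naming: pull the remote rows into a local temp table first, then run the variable concatenation against local data. (Strictly speaking, SELECT @var = @var + ... aggregation is undocumented behavior even locally, but it behaves far more predictably against a local table than across a distributed query.)

SELECT [name]
INTO   #trans_tables
FROM   TRANS.db.sys.tables;   -- four-part name, as in the post

DECLARE @val    varchar(max)
DECLARE @DBName varchar(128)
SET @val = ''
SET @DBName = 'TRANSDB'

SELECT @val = @val + 'CREATE SYNONYM ' + [name] + ' for ' + @DBName + '.dbo.'
            + [name] + CHAR(10) + CHAR(13)
FROM   #trans_tables;

PRINT @val;
DROP TABLE #trans_tables;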

Select * from statement execute very slowly, innodb io read speed is low

Posted: 20 Apr 2013 06:03 PM PDT

I have a very simple query, select * from ap_statistic, running on two servers. The servers have the same hardware and software configuration (CPU: 8 cores, RAM: 32 GB, OS: Red Hat 5.5, MySQL version: 5.1) and run the same applications. On server A the table ap_statistic has about 22,512,379 rows; on server B it has 41,438,751. Of course the query runs faster on server A than on server B, but what is strange is that on server B it is extremely slow: it takes more than 3 hours, whereas on server A it takes just 10 minutes.

I used some tools to monitor system status and found that while the query runs on server A, the system IO read speed is about 20-30 MB/s, but on server B it is only 2-3 MB/s. I have tried dropping the Linux cache and restarting the MySQL server, with the same result. I also restored the DB from server B onto server A, and then the query on server A became very slow too, with a very low IO read speed. I want to know why this happens.

The ap_statistic data on server A was generated by normal application activity, while the data on server B was generated by a stored procedure. The table schema is:

CREATE TABLE `ap_statistic` (
  `ap_statisticId`       BIGINT(20) UNSIGNED NOT NULL,
  `deviceId`             INT(11) UNSIGNED NOT NULL DEFAULT '0',
  `macaddress`           VARCHAR(100) DEFAULT NULL,
  `check_time`           TIMESTAMP NOT NULL DEFAULT '0000-00-00 00:00:00',
  `ap_count`             INT(10) UNSIGNED NOT NULL DEFAULT '0',
  `wlan`                 VARCHAR(64) DEFAULT NULL,
  `radio`                VARCHAR(50) DEFAULT NULL,
  `bssid`                VARCHAR(32) DEFAULT NULL,
  `zd_ap_name`           VARCHAR(64) DEFAULT NULL,
  `channel`              INT(2) DEFAULT NULL,
  `uplinkRSSI`           INT(3) DEFAULT '0',
  `downlinkRSSI`         INT(3) DEFAULT '0',
  `txBytes`              BIGINT(20) DEFAULT '0',
  `rxBytes`              BIGINT(20) DEFAULT '0',
  `txPkts`               BIGINT(20) DEFAULT '0',
  `rxPkts`               BIGINT(20) DEFAULT '0',
  `hops`                 INT(1) DEFAULT '0',
  `numDownlink`          INT(3) DEFAULT '0',
  `distance`             INT(4) DEFAULT '0',
  `phyerr`               INT(11) DEFAULT '0',
  `max_num_clients`      INT(3) DEFAULT '0',
  `max_mesh_downlinks`   INT(1) DEFAULT '0',
  `airtime`              INT(3) DEFAULT '0',
  `uptimePercentage`     INT(3) DEFAULT '0',
  `total_num_clients`    INT(3) DEFAULT '0',
  `tx_actual_throughput` BIGINT(20) DEFAULT '0',
  `rx_actual_throughput` BIGINT(20) DEFAULT '0',
  `tunnelMode`           VARCHAR(32) DEFAULT NULL,
  `externalIp`           VARCHAR(64) DEFAULT NULL,
  `externalPort`         VARCHAR(32) DEFAULT NULL,
  `level`                INT(1) DEFAULT '1',
  `essid`                VARCHAR(64) DEFAULT NULL,
  `total_client_join`    INT(11) DEFAULT '0',
  PRIMARY KEY (`ap_statisticId`),
  KEY `check_time` (`check_time`),
  KEY `macaddress` (`macaddress`),
  KEY `deviceId`   (`deviceId`)
) ENGINE=INNODB DEFAULT CHARSET=utf8

The following are the table file info and some outputs from the monitoring tools.

Server B

  -rw-rw---- 1 mysql mysql 18568183808 Oct 11 14:52 ap_statistic.ibd      [root@localhost itms]# filefrag ./ap_statistic.ibd    ./ap_statistic.ibd: 164 extents found, perfection would be 159 extents          TABLE         Non_unique  Key_name    Seq_in_index  Column_name     COLLATION  Cardinality  Sub_part  Packed  NULL    Index_type  COMMENT      ------------  ----------  ----------  ------------  --------------  ---------  -----------  --------  ------  ------  ----------  -------      ap_statistic           0  PRIMARY                1  ap_statisticId  A             41438751    (NULL)  (NULL)          BTREE                    ap_statistic           1  check_time             1  check_time      A                10320    (NULL)  (NULL)          BTREE                    ap_statistic           1  macaddress             1  macaddress      A                   16    (NULL)  (NULL)  YES     BTREE                    ap_statistic           1  deviceId               1  deviceId        A                   16    (NULL)  (NULL)          BTREE            mysql>show status;            Variable_name   Value          Aborted_clients 0          Aborted_connects    0          Binlog_cache_disk_use   0          Binlog_cache_use    0          Bytes_received  1256          Bytes_sent  8844          Com_admin_commands  0          Com_assign_to_keycache  0          Com_alter_db    0          Com_alter_db_upgrade    0          Com_alter_event 0          Com_alter_function  0          Com_alter_procedure 0          Com_alter_server    0          Com_alter_table 0          Com_alter_tablespace    0          Com_analyze 0          Com_backup_table    0          Com_begin   0          Com_binlog  0          Com_call_procedure  0          Com_change_db   1          Com_change_master   0          Com_check   0          Com_checksum    0          Com_commit  0          Com_create_db   0          Com_create_event    0          Com_create_function 0          Com_create_index    0          Com_create_procedure    0          Com_create_server   0          Com_create_table    0          Com_create_trigger  0          Com_create_udf  0          Com_create_user 0          Com_create_view 0          Com_dealloc_sql 0          Com_delete  0          Com_delete_multi    0          Com_do  0          Com_drop_db 0          Com_drop_event  0          Com_drop_function   0          Com_drop_index  0          Com_drop_procedure  0          Com_drop_server 0          Com_drop_table  0          Com_drop_trigger    0          Com_drop_user   0          Com_drop_view   0          Com_empty_query 0          Com_execute_sql 0          Com_flush   0          Com_grant   0          Com_ha_close    0          Com_ha_open 0          Com_ha_read 0          Com_help    0          Com_insert  0          Com_insert_select   0          Com_install_plugin  0          Com_kill    0          Com_load    0          Com_load_master_data    0          Com_load_master_table   0          Com_lock_tables 0          Com_optimize    0          Com_preload_keys    0          Com_prepare_sql 0          Com_purge   0          Com_purge_before_date   0          Com_release_savepoint   0          Com_rename_table    0          Com_rename_user 0          Com_repair  0          Com_replace 0          Com_replace_select  0          Com_reset   0          Com_restore_table   0          Com_revoke  0          Com_revoke_all  0          Com_rollback    0          Com_rollback_to_savepoint   0          Com_savepoint   0          Com_select  1          Com_set_option  3   
       Com_show_authors    0          Com_show_binlog_events  0          Com_show_binlogs    0          Com_show_charsets   0          Com_show_collations 0          Com_show_column_types   0          Com_show_contributors   0          Com_show_create_db  0          Com_show_create_event   0          Com_show_create_func    0          Com_show_create_proc    0          Com_show_create_table   1          Com_show_create_trigger 0          Com_show_databases  0          Com_show_engine_logs    0          Com_show_engine_mutex   0          Com_show_engine_status  0          Com_show_events 0          Com_show_errors 0          Com_show_fields 1          Com_show_function_status    0          Com_show_grants 0          Com_show_keys   1          Com_show_master_status  0          Com_show_new_master 0          Com_show_open_tables    0          Com_show_plugins    0          Com_show_privileges 0          Com_show_procedure_status   0          Com_show_processlist    0          Com_show_profile    0          Com_show_profiles   0          Com_show_slave_hosts    0          Com_show_slave_status   0          Com_show_status 21          Com_show_storage_engines    0          Com_show_table_status   0          Com_show_tables 0          Com_show_triggers   0          Com_show_variables  0          Com_show_warnings   0          Com_slave_start 0          Com_slave_stop  0          Com_stmt_close  0          Com_stmt_execute    0          Com_stmt_fetch  0          Com_stmt_prepare    0          Com_stmt_reprepare  0          Com_stmt_reset  0          Com_stmt_send_long_data 0          Com_truncate    0          Com_uninstall_plugin    0          Com_unlock_tables   0          Com_update  0          Com_update_multi    0          Com_xa_commit   0          Com_xa_end  0          Com_xa_prepare  0          Com_xa_recover  0          Com_xa_rollback 0          Com_xa_start    0          Compression ON          Connections 323          Created_tmp_disk_tables 1          Created_tmp_files   5          Created_tmp_tables  2          Delayed_errors  0          Delayed_insert_threads  0          Delayed_writes  0          Flush_commands  1          Handler_commit  1          Handler_delete  0          Handler_discover    0          Handler_prepare 0          Handler_read_first  0          Handler_read_key    0          Handler_read_next   0          Handler_read_prev   0          Handler_read_rnd    0          Handler_read_rnd_next   39          Handler_rollback    0          Handler_savepoint   0          Handler_savepoint_rollback  0          Handler_update  0          Handler_write   37          Innodb_buffer_pool_pages_data   43392          Innodb_buffer_pool_pages_dirty  0          Innodb_buffer_pool_pages_flushed    43822          Innodb_buffer_pool_pages_free   637198          Innodb_buffer_pool_pages_misc   562          Innodb_buffer_pool_pages_total  681152          Innodb_buffer_pool_read_ahead_rnd   9          Innodb_buffer_pool_read_ahead_seq   27          Innodb_buffer_pool_read_requests    36489397          Innodb_buffer_pool_reads    27421          Innodb_buffer_pool_wait_free    0          Innodb_buffer_pool_write_requests   4165371          Innodb_data_fsyncs  5228          Innodb_data_pending_fsyncs  0          Innodb_data_pending_reads   1          Innodb_data_pending_writes  0          Innodb_data_read    626216960          Innodb_data_reads   36565          Innodb_data_writes  293947          Innodb_data_written 1792826880          Innodb_dblwr_pages_written  43822          
Innodb_dblwr_writes 830          Innodb_log_waits    0          Innodb_log_write_requests   492588          Innodb_log_writes   268248          Innodb_os_log_fsyncs    2130          Innodb_os_log_pending_fsyncs    0          Innodb_os_log_pending_writes    0          Innodb_os_log_written   356559872          Innodb_page_size    16384          Innodb_pages_created    5304          Innodb_pages_read   38087          Innodb_pages_written    43822          Innodb_row_lock_current_waits   0          Innodb_row_lock_time    0          Innodb_row_lock_time_avg    0          Innodb_row_lock_time_max    0          Innodb_row_lock_waits   0          Innodb_rows_deleted 28637          Innodb_rows_inserted    306449          Innodb_rows_read    16579740          Innodb_rows_updated 887251          Key_blocks_not_flushed  0          Key_blocks_unused   212928          Key_blocks_used 1415          Key_read_requests   393323          Key_reads   16          Key_write_requests  102461          Key_writes  102439          Last_query_cost 9142769.199000          Max_used_connections    19          Not_flushed_delayed_rows    0          Open_files  24          Open_streams    0          Open_table_definitions  142          Open_tables 146          Opened_files    592          Opened_table_definitions    0          Opened_tables   0          Prepared_stmt_count 0          Qcache_free_blocks  0          Qcache_free_memory  0          Qcache_hits 0          Qcache_inserts  0          Qcache_lowmem_prunes    0          Qcache_not_cached   0          Qcache_queries_in_cache 0          Qcache_total_blocks 0          Queries 1578897          Questions   30          Rpl_status  NULL          Select_full_join    0          Select_full_range_join  0          Select_range    0          Select_range_check  0          Select_scan 2          Slave_open_temp_tables  0          Slave_retried_transactions  0          Slave_running   OFF          Slow_launch_threads 0          Slow_queries    0          Sort_merge_passes   0          Sort_range  0          Sort_rows   0          Sort_scan   0          Ssl_accept_renegotiates 0          Ssl_accepts 0          Ssl_callback_cache_hits 0          Ssl_cipher            Ssl_cipher_list           Ssl_client_connects 0          Ssl_connect_renegotiates    0          Ssl_ctx_verify_depth    0          Ssl_ctx_verify_mode 0          Ssl_default_timeout 0          Ssl_finished_accepts    0          Ssl_finished_connects   0          Ssl_session_cache_hits  0          Ssl_session_cache_misses    0          Ssl_session_cache_mode  NONE          Ssl_session_cache_overflows 0          Ssl_session_cache_size  0          Ssl_session_cache_timeouts  0          Ssl_sessions_reused 0          Ssl_used_session_cache_entries  0          Ssl_verify_depth    0          Ssl_verify_mode 0          Ssl_version           Table_locks_immediate   1549525          Table_locks_waited  0          Tc_log_max_pages_used   0          Tc_log_page_size    0          Tc_log_page_waits   0          Threads_cached  0          Threads_connected   17          Threads_created 322          Threads_running 2          Uptime  8093          Uptime_since_flush_status   8093            mysql>show variables;            Variable_name   Value          auto_increment_increment    1          auto_increment_offset   1          autocommit  ON          automatic_sp_privileges ON          back_log    50          big_tables  OFF          binlog_cache_size   32768          binlog_direct_non_transactional_updates OFF          
binlog_format   STATEMENT          bulk_insert_buffer_size 8388608          character_set_client    utf8          character_set_connection    utf8          character_set_database  utf8          character_set_filesystem    binary          character_set_results   utf8          character_set_server    utf8          character_set_system    utf8          collation_connection    utf8_general_ci          collation_database  utf8_general_ci          collation_server    utf8_general_ci          completion_type 0          concurrent_insert   1          connect_timeout 10          date_format %Y-%m-%d          datetime_format %Y-%m-%d %H:%i:%s          default_week_format 0          delay_key_write ON          delayed_insert_limit    100          delayed_insert_timeout  300          delayed_queue_size  1000          div_precision_increment 4          engine_condition_pushdown   ON          error_count 0          event_scheduler OFF          expire_logs_days    0          flush   OFF          flush_time  0          foreign_key_checks  ON          ft_boolean_syntax   + -><()~*:""&|          ft_max_word_len 84          ft_min_word_len 4          ft_query_expansion_limit    20          ft_stopword_file    (built-in)          general_log OFF          group_concat_max_len    1024          have_community_features YES          have_compress   YES          have_crypt  YES          have_csv    YES          have_dynamic_loading    YES          have_geometry   YES          have_innodb YES          have_ndbcluster NO          have_openssl    DISABLED          have_partitioning   NO          have_query_cache    YES          have_rtree_keys YES          have_ssl    DISABLED          have_symlink    YES          hostname    localhost.localdomain          identity    0          ignore_builtin_innodb   OFF          init_connect              init_file             init_slave            innodb_adaptive_hash_index  ON          innodb_additional_mem_pool_size 67108864          innodb_autoextend_increment 8          innodb_autoinc_lock_mode    1          innodb_buffer_pool_size 11159994368          innodb_checksums    ON          innodb_commit_concurrency   0          innodb_concurrency_tickets  500          innodb_data_file_path   ibdata1:10M:autoextend          innodb_data_home_dir              innodb_doublewrite  ON          innodb_fast_shutdown    1          innodb_file_io_threads  4          innodb_file_per_table   ON          innodb_flush_log_at_trx_commit  2          innodb_flush_method O_DIRECT          innodb_force_recovery   0          innodb_lock_wait_timeout    120          innodb_locks_unsafe_for_binlog  ON          innodb_log_buffer_size  134217728          innodb_log_file_size    5242880          innodb_log_files_in_group   2          innodb_log_group_home_dir   ./          innodb_max_dirty_pages_pct  90          innodb_max_purge_lag    0          innodb_mirrored_log_groups  1          innodb_open_files   300          innodb_rollback_on_timeout  OFF          innodb_stats_on_metadata    ON          innodb_support_xa   ON          innodb_sync_spin_loops  20          innodb_table_locks  ON          innodb_thread_concurrency   8          innodb_thread_sleep_delay   10000          innodb_use_legacy_cardinality_algorithm ON          insert_id   0          interactive_timeout 28800          join_buffer_size    268435456          keep_files_on_create    OFF          key_buffer_size 268435456          key_cache_age_threshold 300          key_cache_block_size    1024          key_cache_division_limit    100             
      large_files_support ON          large_page_size 0          large_pages OFF          last_insert_id  0          lc_time_names   en_US          license Commercial          local_infile    ON          locked_in_memory    OFF          log OFF          log_bin OFF          log_bin_trust_function_creators OFF          log_bin_trust_routine_creators  OFF                   log_output  FILE          log_queries_not_using_indexes   OFF          log_slave_updates   OFF          log_slow_queries    OFF          log_warnings    1          long_query_time 10.000000          low_priority_updates    OFF          lower_case_file_system  OFF          lower_case_table_names  1          max_allowed_packet  134217728          max_binlog_cache_size   18446744073709547520          max_binlog_size 1073741824          max_connect_errors  10          max_connections 300          max_delayed_threads 20          max_error_count 64          max_heap_table_size 268435456          max_insert_delayed_threads  20          max_join_size   18446744073709551615          max_length_for_sort_data    1024          max_prepared_stmt_count 16382          max_relay_log_size  0          max_seeks_for_key   18446744073709551615          max_sort_length 1024          max_sp_recursion_depth  0          max_tmp_tables  32          max_user_connections    0          max_write_lock_count    18446744073709551615          min_examined_row_limit  0          multi_range_count   256          myisam_data_pointer_size    6          myisam_max_sort_file_size   9223372036853727232          myisam_mmap_size    18446744073709551615          myisam_recover_options  OFF          myisam_repair_threads   1          myisam_sort_buffer_size 8388608          myisam_stats_method nulls_unequal          myisam_use_mmap OFF          net_buffer_length   16384          net_read_timeout    30          net_retry_count 10          net_write_timeout   60          new OFF          old OFF          old_alter_table OFF          old_passwords   OFF          open_files_limit    10240          optimizer_prune_level   1          optimizer_search_depth  62          optimizer_switch    index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on          port    3306          preload_buffer_size 32768          profiling   OFF          profiling_history_size  15          protocol_version    10          pseudo_thread_id    18          query_alloc_block_size  8192          query_cache_limit   1048576          query_cache_min_res_unit    4096          query_cache_size    0          query_cache_type    ON          query_cache_wlock_invalidate    OFF          query_prealloc_size 8192          rand_seed1            rand_seed2            range_alloc_block_size  4096          read_buffer_size    67108864          read_only   OFF          read_rnd_buffer_size    67108864          relay_log             relay_log_index           relay_log_info_file relay-log.info          relay_log_purge ON          relay_log_space_limit   0          report_host           report_password           report_port 3306          report_user           rpl_recovery_rank   0          secure_auth OFF          secure_file_priv              server_id   0          skip_external_locking   ON          skip_name_resolve   ON          skip_networking OFF          skip_show_database  OFF          slave_compressed_protocol   OFF          slave_exec_mode STRICT          slave_net_timeout   3600          slave_skip_errors   OFF          slave_transaction_retries   10          
slow_launch_time    2          slow_query_log  OFF          sort_buffer_size    16777216          sql_auto_is_null    ON          sql_big_selects ON          sql_big_tables  OFF          sql_buffer_result   OFF          sql_log_bin ON          sql_log_off OFF          sql_log_update  ON          sql_low_priority_updates    OFF          sql_max_join_size   18446744073709551615          sql_mode              sql_notes   ON          sql_quote_show_create   ON          sql_safe_updates    OFF          sql_select_limit    18446744073709551615          sql_slave_skip_counter            sql_warnings    OFF          ssl_ca            ssl_capath            ssl_cert              ssl_cipher            ssl_key           storage_engine  MyISAM          sync_binlog 0          sync_frm    ON          system_time_zone    UTC          table_definition_cache  256          table_lock_wait_timeout 50          table_open_cache    512          table_type  MyISAM          thread_cache_size   0          thread_handling one-thread-per-connection          thread_stack    262144          time_format %H:%i:%s          time_zone   +08:00          timed_mutexes   OFF          timestamp   1349946061          tmp_table_size  1073741824          transaction_alloc_block_size    8192          transaction_prealloc_size   4096          tx_isolation    REPEATABLE-READ          unique_checks   ON          updatable_views_with_limit  YES          version 5.1.53-enterprise-commercial-pro          version_comment MySQL Enterprise Server - Pro Edition (Commercial)          version_compile_machine x86_64          version_compile_os  unknown-linux-gnu          wait_timeout    28800          warning_count   0              mysql> show innodb status\G;          *************************** 1. row ***************************            Type: InnoDB            Name:          Status:          =====================================          121011 10:22:13 INNODB MONITOR OUTPUT          =====================================          Per second averages calculated from the last 39 seconds          ----------          SEMAPHORES          ----------          OS WAIT ARRAY INFO: reservation count 3806, signal count 3778          Mutex spin waits 0, rounds 282892, OS waits 2075          RW-shared spins 1969, OS waits 864; RW-excl spins 2336, OS waits 749          ------------          TRANSACTIONS          ------------          Trx id counter 0 5303968          Purge done for trx's n:o < 0 5303951 undo n:o < 0 0          History list length 1          LIST OF TRANSACTIONS FOR EACH SESSION:          ---TRANSACTION 0 0, not started, process no 30336, OS thread id 1189509440          MySQL thread id 520, query id 1861594 localhost root          show innodb status          ---TRANSACTION 0 5303967, not started, process no 30336, OS thread id 1188710720          MySQL thread id 526, query id 1861593 127.0.0.1 root          ---TRANSACTION 0 5303962, not started, process no 30336, OS thread id 1186314560          MySQL thread id 519, query id 1861555 127.0.0.1 root          ---TRANSACTION 0 5303952, not started, process no 30336, OS thread id 1188444480          MySQL thread id 515, query id 1861567 127.0.0.1 root          ---TRANSACTION 0 5303948, not started, process no 30336, OS thread id 1187912000          MySQL thread id 516, query id 1861566 127.0.0.1 root          ---TRANSACTION 0 5303937, not started, process no 30336, OS thread id 1190308160          MySQL thread id 511, query id 1861568 127.0.0.1 root          ---TRANSACTION 0 0, not started, 
process no 30336, OS thread id 1090791744          MySQL thread id 18, query id 1596073 172.18.112.84 root          ---TRANSACTION 0 5303959, ACTIVE 63 sec, process no 30336, OS thread id 1090525504 fetching rows, thread declared inside InnoDB 500          mysql tables in use 1, locked 0          MySQL thread id 17, query id 1861400 localhost root Sending data          select * from ap_statistic          Trx read view will not see trx with id >= 0 5303960, sees < 0 5303960          --------          FILE I/O          --------          I/O thread 0 state: waiting for i/o request (insert buffer thread)          I/O thread 1 state: waiting for i/o request (log thread)          I/O thread 2 state: waiting for i/o request (read thread)          I/O thread 3 state: waiting for i/o request (write thread)          Pending normal aio reads: 0, aio writes: 0,           ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0          Pending flushes (fsync) log: 0; buffer pool: 0          63521 OS file reads, 294656 OS file writes, 5641 OS fsyncs          1 pending preads, 0 pending pwrites          149.38 reads/s, 16384 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s          -------------------------------------          INSERT BUFFER AND ADAPTIVE HASH INDEX          -------------------------------------          Ibuf: size 1, free list len 318, seg size 320,          63593 inserts, 63593 merged recs, 9674 merges          Hash table size 22086161, node heap has 607 buffer(s)          0.08 hash searches/s, 0.26 non-hash searches/s          ---          LOG          ---          Log sequence number 15 2873617336          Log flushed up to   15 2873617336          Last checkpoint at  15 2873617336          0 pending log writes, 0 pending chkp writes          269102 log i/o's done, 0.00 log i/o's/second          ----------------------          BUFFER POOL AND MEMORY          ----------------------          Total memory allocated 12452785320; in additional pool allocated 15261440          Dictionary memory allocated 789024          Buffer pool size   681152          Free buffers       610013          Database pages     70532          Modified db pages  0          Pending reads 1          Pending writes: LRU 0, flush list 0, single page 0          Pages read 65043, created 5488, written 45924          149.38 reads/s, 0.00 creates/s, 0.00 writes/s          Buffer pool hit rate 888 / 1000          --------------          ROW OPERATIONS          --------------          1 queries inside InnoDB, 0 queries in queue          2 read views open inside InnoDB          Main thread process no. 
30336, id 1185782080, state: waiting for server activity          Number of rows inserted 336555, updated 1112311, deleted 28681, read 29200669          0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 8258.58 reads/s          ----------------------------          END OF INNODB MONITOR OUTPUT          ============================            1 row in set, 1 warning (0.00 sec)            ERROR:          No query specified                iostat -dx 2            Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util          sda          0.00   2.50 141.50 11.50 4516.00  112.00  2258.00    56.00    30.25     0.95    6.23   5.70  87.25          sda1         0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00          sda2         0.00   2.50 141.50 11.50 4516.00  112.00  2258.00    56.00    30.25     0.95    6.23   5.70  87.25          dm-0         0.00   0.00 141.50 14.00 4516.00  112.00  2258.00    56.00    29.76     0.97    6.24   5.62  87.35          dm-1         0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00            Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util          sda          3.00   0.00 154.50  0.00 4932.00    0.00  2466.00     0.00    31.92     0.93    6.04   6.04  93.25          sda1         0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00          sda2         3.00   0.00 154.50  0.00 4932.00    0.00  2466.00     0.00    31.92     0.93    6.04   6.04  93.25          dm-0         0.00   0.00 157.50  0.00 4932.00    0.00  2466.00     0.00    31.31     0.95    6.04   5.93  93.40          dm-1         0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00            Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util          sda          3.00   1.50 150.50  1.50 4804.00   24.00  2402.00    12.00    31.76     0.94    6.15   6.14  93.40          sda1         0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00          sda2         3.00   1.50 150.50  1.50 4804.00   24.00  2402.00    12.00    31.76     0.94    6.15   6.14  93.40          dm-0         0.00   0.00 153.50  3.00 4804.00   24.00  2402.00    12.00    30.85     0.95    6.08   5.97  93.50          dm-1         0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00              vmstat 2            procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------           r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st           1  1     16 27358488  18152 115500    0    0  2558     0 1193 8634 14  1 73 12  0           1  1     16 27346840  18168 115500    0    0  2356    12 1174 9129 14  2 73 12  0           2  1     16 27334320  18172 115504    0    0  2522     0 1184 8918 14  1 73 12  0           1  1     16 27321812  18180 115496    0    0  2456    12 1183 7357 13  1 74 12  0           1  1     16 27310132  18180 115504    0    0  2344    12 1174 6990 12  2 74 12  0           1  1     16 27297608  18184 115508    0    0  2506     0 1182 6203 12  2 74 11  0           1  1     16 27285444  18200 115504    0    0  2420    90 1187 9667 13  2 73 12  0           1  1     16 27277640  18200 115508    0    0  2248     0 1165 8103 19 
 2 69 11  0           2  1     16 27265380  18204 115504    0    0  2498     0 1179 5690 13  1 74 12  0           4  1     16 27252972  18216 115508    0    0  2434    12 1178 6096 14  1 74 12  0           1  1     16 27241032  18232 115496    0    0  2520     0 1181 9252 12  1 75 11  0           2  1     16 27229136  18240 115508    0    0  2468    10 1178 7116 13  1 74 12  0           1  0     16 27630612  18248 115508    0    0  1536    20 1121 4082 13  1 79  7  0              mpstat -P ALL 2              02:48:57 PM  CPU   %user   %nice %system %iowait    %irq   %soft   %idle    intr/s          02:48:59 PM  all   13.69    0.00    1.31   11.56    0.00    0.62   72.81   1190.95          02:48:59 PM    0   33.67    0.00    0.50    0.00    0.00    0.00   65.83   1006.03          02:48:59 PM    1    6.53    0.00    0.50   92.96    0.50    0.50    0.00    160.80          02:48:59 PM    2    1.01    0.00    0.50    0.00    0.00    0.00   98.49      0.00          02:48:59 PM    3    0.00    0.00    0.00    0.00    0.00    0.00  100.50      3.52          02:48:59 PM    4   35.68    0.00    1.01    0.00    0.00    1.01   62.81     13.57          02:48:59 PM    5    4.52    0.00    0.00    0.00    0.00    0.00   96.48      0.50          02:48:59 PM    6    3.52    0.00    0.00    0.00    0.00    0.00   96.98      0.50          02:48:59 PM    7   25.13    0.00    7.54    0.00    0.00    4.02   63.82      6.03            02:48:59 PM  CPU   %user   %nice %system %iowait    %irq   %soft   %idle    intr/s          02:49:01 PM  all   12.50    0.00    1.19   11.69    0.00    0.56   74.06   1177.11          02:49:01 PM    0   22.89    0.00    1.49    0.00    0.00    1.49   74.13    995.52          02:49:01 PM    1    5.97    0.00    0.50   92.54    0.00    0.50    0.00    159.70          02:49:01 PM    2    0.50    0.00    0.50    0.50    0.00    0.00   98.01      1.99          02:49:01 PM    3    0.00    0.00    0.00    0.00    0.00    0.00   99.50      2.49          02:49:01 PM    4   45.77    0.00    1.49    0.00    0.00    0.50   51.24     11.94          02:49:01 PM    5    0.00    0.00    0.00    0.00    0.00    0.00   99.50      0.50          02:49:01 PM    6    0.50    0.00    0.00    0.00    0.00    0.00   99.00      0.50          02:49:01 PM    7   23.38    0.00    5.47    0.00    0.00    1.99   68.16      4.48            02:49:01 PM  CPU   %user   %nice %system %iowait    %irq   %soft   %idle    intr/s          02:49:03 PM  all   13.05    0.00    1.12   11.62    0.00    0.50   73.70   1179.00          02:49:03 PM    0   43.50    0.00    0.50    0.00    0.00    0.00   56.00   1000.50          02:49:03 PM    1    6.50    0.00    1.00   93.00    0.00    0.50    0.00    157.00          02:49:03 PM    2    1.50    0.00    0.50    0.00    0.00    0.00   98.50      0.00          02:49:03 PM    3    0.00    0.00    0.00    0.00    0.00    0.00  100.00      2.50          02:49:03 PM    4   32.50    0.00    1.50    0.00    0.00    1.00   65.50     13.00          02:49:03 PM    5   11.00    0.00    4.00    0.00    0.00    1.50   83.50      0.50          02:49:03 PM    6    0.00    0.00    0.00    0.00    0.00    0.00  100.00      0.00          02:49:03 PM    7   10.50    0.00    2.00    0.00    0.00    1.00   87.00      5.50  
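Given that server B's data came from a stored procedure rather than normal inserts, one plausible suspect (an assumption, not a diagnosis taken from the post) is on-disk row ordering and fragmentation turning the full scan into small random reads. A quick check and a hedged remedy on MySQL 5.1 with innodb_file_per_table:

-- Compare allocated vs. free space inside the tablespace:
SHOW TABLE STATUS LIKE 'ap_statistic';   -- look at Data_length and Data_free

-- A "null" ALTER rebuilds the table in primary-key order, which can restore
-- large sequential reads for full scans (it locks the table and will take a
-- long time on 40M+ rows):
ALTER TABLE ap_statistic ENGINE=InnoDB;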
