Tuesday, April 9, 2013

[how to] Update SQL Server from MySQL


Update SQL Server from MySQL

Posted: 09 Apr 2013 02:36 PM PDT

We have a website that stores form data in a web-hosted MySQL database. Whenever form data is entered into the MySQL tables, I need to update all the necessary tables in our local SQL Server database.

I have seen several articles that describe how to update MySQL using SQL Server triggers but nothing that goes the other way.

If I create a linked server and linked tables in SQL Server, can I create a trigger on a MySQL table from SQL Server Management Studio?

SQL Server how to detect unused views and/or get usage stats

Posted: 09 Apr 2013 07:48 PM PDT

Is there a way to determine in SQL Server whether a view is no longer being used (without removing it)? Ideally I would like usage information for SQL Server 2000 as well as 2012, if that is possible.

I am upgrading databases and suspect that many of the views are no longer used. Some of the views will be difficult to compile on the new server, as they access multiple databases, some of which are not being moved to the new server.
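One hedged sketch of what "usage stats" could look like on the 2012 side: the plan-cache DMVs (SQL Server 2005+, so not available on the 2000 instance) can be searched for recent references to a view. The view name below is hypothetical, and the cache is cleared on restart, so absence here does not prove a view is unused.

```sql
-- Look for recent cached statements referencing a (hypothetical) view name.
-- Works on SQL Server 2005+ only; the plan cache is volatile, so this is
-- evidence of use, not proof of disuse.
SELECT TOP (20)
       qs.last_execution_time,
       st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE '%vSomeView%'
ORDER BY qs.last_execution_time DESC;
```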

Can I delegate SQL Agent rights without granting sysadmin rights?

Posted: 09 Apr 2013 01:14 PM PDT

I'd like to give a user the ability to administer SQL Agent jobs. According to the doc page, I can do this by granting the "appropriate" roles on msdb. I've granted all three SQLAgent% roles, but the user gets this error when (as a test) trying to rename a job:

Only members of sysadmin role are allowed to update or delete jobs owned by a different login.  (Microsoft SQL Server, Error: 14525)  

This question is similar, but no answer there addresses SQL Agent. Is the sysadmin role the only solution in SQL Server 2008?
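Since error 14525 is specifically about jobs owned by a *different* login, one workaround to test (a sketch, with a hypothetical job name and login) is reassigning job ownership to the delegated user via msdb's sp_update_job:

```sql
-- Hypothetical job name and login: once the user owns the job, the
-- SQLAgent% msdb roles may suffice for managing it.
EXEC msdb.dbo.sp_update_job
     @job_name         = N'NightlyETL',
     @owner_login_name = N'DOMAIN\AgentOperator';
```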

Optimizing queries on a range of timestamps (two columns)

Posted: 09 Apr 2013 06:37 PM PDT

I use PostgreSQL 9.1 on Ubuntu 12.04.

I need to select records inside a range of time: my table time_limits has two timestamp fields and one integer property. There are other info columns, but they are not used in any queries (they only store information), so I won't discuss them.

    create table time_limits (
        start_date_time timestamp,
        end_date_time   timestamp,
        id_phi          integer,
        primary key (start_date_time, end_date_time, id_phi)
    );

This table now contains something like 2,000,000 records.

But the time spent was enormous for a query like:

    select *
    from time_limits as t
    where t.id_phi = 0
      and t.start_date_time <= timestamp '2010-08-08 00:00:00'
      and t.end_date_time   >= timestamp '2010-08-08 00:05:00'

so I tried creating the inverse index:

    create index idx_inversed on time_limits (id_phi, start_date_time, end_date_time);

I have the impression that the times are better: accessing records in the middle of the table seems more reasonable (somewhere between 40 and 90 seconds).

But it is still several tens of seconds for values in the middle of the time range, and twice as long when targeting the end of the table (chronologically speaking).

So I have tried explain/analyze for the first time. You can see the result here on depesz.com.

    QUERY PLAN
    Bitmap Heap Scan on time_limits  (cost=4730.38..22465.32 rows=62682 width=36) (actual time=44.446..44.446 rows=0 loops=1)
      Recheck Cond: ((id_phi = 0) AND (start_date_time <= '2011-08-08 00:00:00'::timestamp without time zone) AND (end_date_time >= '2011-08-08 00:05:00'::timestamp without time zone))
      ->  Bitmap Index Scan on idx_time_limits_phi_start_end  (cost=0.00..4714.71 rows=62682 width=0) (actual time=44.437..44.437 rows=0 loops=1)
            Index Cond: ((id_phi = 0) AND (start_date_time <= '2011-08-08 00:00:00'::timestamp without time zone) AND (end_date_time >= '2011-08-08 00:05:00'::timestamp without time zone))
    Total runtime: 44.507 ms
    (5 rows)

What could I do to optimize the search? You can see that all the time is spent scanning the two timestamp columns once id_phi is set to 0. And I don't understand the big scan (60K rows!) on the timestamps: aren't they indexed (by both the primary key and the idx_inversed I created)?

Should I change from timestamp types to something else ?

I have read (really just a little) about GiST and GIN indexes; I only know they can be more efficient under certain conditions for custom types. Should I go this way to get an efficient time index?
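For what it's worth, the "record's interval contains the queried interval" condition can be expressed with range types and indexed with GiST. This is only a sketch: range types require PostgreSQL 9.2+ (the question mentions 9.1), the btree_gist extension is needed to mix the plain integer column into the GiST index, and the index name is made up.

```sql
-- Sketch, assuming PostgreSQL 9.2+ and the btree_gist extension.
CREATE EXTENSION btree_gist;

CREATE INDEX idx_tl_range ON time_limits
    USING gist (id_phi, tsrange(start_date_time, end_date_time, '[]'));

-- start <= A AND end >= B is equivalent to "row range contains [A,B]":
SELECT *
FROM   time_limits
WHERE  id_phi = 0
AND    tsrange(start_date_time, end_date_time, '[]')
       @> tsrange('2010-08-08 00:00:00', '2010-08-08 00:05:00', '[]');
```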

PGPool Frequent Queries

Posted: 09 Apr 2013 10:40 AM PDT

We recently set up a PostgreSQL cluster with PgPool in front of it. This is our first go at such a thing. We set up our QA environment to use the new PgPool, and we cranked up the logging on the PostgreSQL servers so we could point pgBadger at it and see how things are going.

After minimal use, and not quite 24 hours of logging, pgBadger showed me this report this morning:

[pgBadger screenshot: top-queries report]

That seems absurd to me. The first three top queries don't even appear to have anything to do with our application or our application user, yet they all accuse the application of instigating them.

We assume that PgPool is doing this in the background when the application connects, but even then ... there is no way our QA application instigated these 369K times. Admittedly, the time spent on these is negligible, all of 24 seconds over the course of a day.

However, we would like to grasp:

  • Why are we seeing this amount of traffic from pgPool?
  • Can we expect this to scale? Currently we have a single QA DB here with very light usage. What do we expect when we get 200 production databases running in this cluster?
  • Is there an easy way to prevent PostgreSQL from logging these types of queries from PgPool, so that we can more easily parse the queries that are truly instigated by our application?
  • Is there another question we should ask that we haven't?

mySql Relationships / Index - Checking for duplicates

Posted: 09 Apr 2013 01:32 PM PDT

I have this setup on innoDB:

TblUser        user details
TblConnection  user connections

TblConnection also has these indexes: (User, Connection), where User is a foreign key to TblUser.id, and (Connection, User), where Connection is also a foreign key to TblUser.id.

Now I want to make sure that when adding a new row I have complete uniqueness between column combinations. By this I mean I want (1,2) to be treated the same as (2,1), so that the duplicate INSERT is not made.
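One way to sketch order-insensitive uniqueness is to normalize the pair into (smaller, larger) columns and put a unique key on those. The version below assumes MySQL 5.7+ generated columns (not available in 2013-era MySQL); the added column names are made up, and on older versions the same values would have to be maintained by triggers or application code.

```sql
-- Sketch, assuming MySQL 5.7+: (1,2) and (2,1) both normalize to (1,2),
-- so the unique key rejects the second insert.
ALTER TABLE TblConnection
    ADD COLUMN pair_lo INT AS (LEAST(`User`, `Connection`)) STORED,
    ADD COLUMN pair_hi INT AS (GREATEST(`User`, `Connection`)) STORED,
    ADD UNIQUE KEY uq_pair (pair_lo, pair_hi);
```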

difference between update lock and exclusive lock

Posted: 09 Apr 2013 12:18 PM PDT

Are exclusive locks and update locks the same, or related to each other? They seem very similar. Can anyone explain?

Hash in SQL query

Posted: 09 Apr 2013 04:54 PM PDT

I saw a few SQL queries with ### wrapped around a column name in the WHERE clause, like

    select a from emp where a.###id### = 1

What is the significance of these hashes?

T-SQL Script for segmenting data into finer categories based on percentage measure in another table

Posted: 09 Apr 2013 09:21 AM PDT

I have a SQL Server 2008 R2 database with two tables. The first table - Shops - has a list of establishments and the number of shoppers it attracts segmented by criteria A (2 categories) and B (2 categories). The table looks like this

    SHOPID | A1B1 | A2B1 | A1B2 | A2B2
    ----------------------------------
       1   |  23  |  45  |  52  |  21
       2   |  25  |  96  |  20  |  11
       3   |  22  |  32  |  54  |  47

Now I want to further split this data into smaller segments by creating a segment C (3 categories). The data for this is stored in a table SEG3. In this table, for every segment A-B combination I have three values which correspond to the percentage of shoppers in segment A-B that belong to segment A-B-C.

    SegmentA | SegmentB | SUB1 | SUB2 | SUB3
    ----------------------------------------
        1    |    1     | 0.2  | 0.3  | 0.5
        2    |    1     | 0.1  | 0.5  | 0.7
        1    |    2     | 0.6  | 0.4  | 0.7
        2    |    2     | 0.8  | 1.0  | 0.5

I now want to generate a table SEGABC that has the counts of shoppers split into the 3 categorical segments. The table should look like:

    SHOPID | A1B1C1 | A1B1C2 | A1B1C3 | A1B2C1 | A1B2C2 | A1B2C3 | A2B1C1 | A2B1C2 | A2B1C3 | A2B2C1 | A2B2C2 | A2B2C3
    -------------------------------------------------------------------------------------------------------------------
       1
       2
       3

With the corresponding values filled in. For instance A1B1C1 for SHOPID=1 should be A1B1 for SHOPID 1 multiplied by the value in SEG3 where SegmentA=1 and SegmentB=1 and SUB1. That would be equal to 23 times 0.2 = 4.6.

Is there an elegant way to do this using T-SQL? Or is writing out all the combinations the only way out?
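Since both segment dimensions have fixed, small cardinality, one hedged sketch (using the table and column names from the question; the aliases and the SEGABC target are assumptions) is to join SEG3 once per A-B combination and compute every product in a single SELECT:

```sql
-- Sketch: one SEG3 alias per A-B combination; each output column is the
-- shop count times the matching percentage (e.g. 23 * 0.2 = 4.6 for A1B1C1).
SELECT s.SHOPID,
       s.A1B1 * c11.SUB1 AS A1B1C1,
       s.A1B1 * c11.SUB2 AS A1B1C2,
       s.A1B1 * c11.SUB3 AS A1B1C3,
       s.A1B2 * c12.SUB1 AS A1B2C1,
       s.A1B2 * c12.SUB2 AS A1B2C2,
       s.A1B2 * c12.SUB3 AS A1B2C3,
       s.A2B1 * c21.SUB1 AS A2B1C1,
       s.A2B1 * c21.SUB2 AS A2B1C2,
       s.A2B1 * c21.SUB3 AS A2B1C3,
       s.A2B2 * c22.SUB1 AS A2B2C1,
       s.A2B2 * c22.SUB2 AS A2B2C2,
       s.A2B2 * c22.SUB3 AS A2B2C3
INTO   SEGABC
FROM   Shops s
JOIN   SEG3 c11 ON c11.SegmentA = 1 AND c11.SegmentB = 1
JOIN   SEG3 c12 ON c12.SegmentA = 1 AND c12.SegmentB = 2
JOIN   SEG3 c21 ON c21.SegmentA = 2 AND c21.SegmentB = 1
JOIN   SEG3 c22 ON c22.SegmentA = 2 AND c22.SegmentB = 2;
```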

Mysql: order by field or select union?

Posted: 09 Apr 2013 08:58 AM PDT

As a newbie, I expected MySQL to return the rows of this query ordered by the order of their ids in the IN clause:

select * from test where id in (3,1,2)  

Unfortunately for me, that's not true. The results come in ascending id order.

I have read that the easiest way to get what I want is using order by field

select * from test where id in (3,1,2) order by field(id, 3,1,2)  

But I have also read that this is not very efficient for queries with lots of ids in the sort.

So, do you think that the query below would perform better?

    select * from test where id = 3
    union
    select * from test where id = 1
    union
    select * from test where id = 2

Microsoft SQL Server: how can I detach LocalDB to run on any computer?

Posted: 09 Apr 2013 05:30 PM PDT

I am working on a personal project and have implemented a database system using the Entity Framework code first tutorial on msdn, link below,

http://msdn.microsoft.com/en-gb/data/jj193542

I am running Visual Studio 2012, and the database has been created using Microsoft SQL Server and LocalDB. However, I wish to run my application on any computer, and this current setup will not allow me to do so.

I am a bit of a novice when it comes to databases, and would like some advice/resources that outline how I can change my database setup so that it is available on any computer, while also retaining the data currently in my database.

Any help would be greatly appreciated, Thanks in advance!

disallow SET command in a postgresql server

Posted: 09 Apr 2013 12:31 PM PDT

I'm planning to publish my PostgreSQL server to a few untrusted clients.

I don't want them to modify any runtime settings, like work_mem or anything else risky to my server. In general I assume the pg_catalog schema is public, but I don't want to allow updating pg_settings at all.

Is it possible?

Linked Server to SQL Server 2000 server from SQL Server 2012

Posted: 09 Apr 2013 10:02 AM PDT

I have a new SQL Server 2012 instance, and am migrating a database from SQL Server 2000. Some of the stored procedures in the DB I am moving reference another database back on that 2000 server. What is the best way to handle this?

Would a linked server help? I am having trouble creating a link for testing. Some people recommend using ODBC; others seem to use the SQL Server 10 client driver (SQLNCLI10). Using the SQL Server 10 client driver seems better to me; are there issues installing it on a SQL Server 2012 instance?
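For reference, a minimal linked-server definition with the Native Client 10 provider might look like the sketch below; the linked-server name and host are made up, and the login mapping shown is the simplest (pass-through) option.

```sql
-- Sketch: hypothetical names. Creates a linked server to the old 2000 box
-- using the SQL Server Native Client 10 OLE DB provider.
EXEC sp_addlinkedserver
     @server     = N'OLD2000',
     @srvproduct = N'',
     @provider   = N'SQLNCLI10',
     @datasrc    = N'old2000-host';

-- Map the current login through to the remote server.
EXEC sp_addlinkedsrvlogin
     @rmtsrvname = N'OLD2000',
     @useself    = N'True';
```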

Should I separate frequently updated columns?

Posted: 09 Apr 2013 07:06 PM PDT

I have a users table that contains users' information and a column named credits that is frequently updated.

  • On the index page I'm showing a list of users with basic user information, but I don't need credits.

  • On the details page I'm also showing credits with the user information.

Two main operations dominate the project: the SELECT on the index page and the UPDATE on the credits column. The credits column is not indexed, but since it's frequently updated, will it affect the indexed columns of the users table? If so, I think I should separate the credits column into its own table and join it only when required. By doing this, can I improve SELECT performance on the index page?

Also I'm using MySQL. What kind of table structure should I use for frequently updated columns?
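The vertical split being considered could be sketched like this (table and column names are assumptions, not from an existing schema):

```sql
-- Sketch: credits moves to its own narrow table, so updates touch only
-- this table; the wide users table stays read-mostly for the index page.
CREATE TABLE user_credits (
    user_id INT NOT NULL PRIMARY KEY,
    credits INT NOT NULL DEFAULT 0,
    FOREIGN KEY (user_id) REFERENCES users (id)
) ENGINE = InnoDB;

-- Details page joins the credits back in only when needed:
SELECT u.*, uc.credits
FROM   users u
JOIN   user_credits uc ON uc.user_id = u.id
WHERE  u.id = 42;
```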

Huge replication tables in a SQL Server DB: how to deal with them?

Posted: 09 Apr 2013 10:28 AM PDT

I have a SQL Server database that is configured to work with old software. I have no details about the software, but I know the DB uses replication.

The DB size increases dramatically. When I investigated some table sizes, I found these tables with these sizes:

MSmerge_bi_D89534BB1063408B942B62C6B98C1E51 | 22.731987 Gigs
MSmerge_bi_6C3E9C2ED4824946B65D575B7F84C225 | 16.62373352 Gigs
MSmerge_bi_D89D3BA8BD334415AF6D51F0561344F3 | 14.91197205 Gigs
MSmerge_bi_4BE323F1031445C78894AA625FB6BF50 | 3.80770874 Gigs
MSmerge_bi_4CDB8F1078C74B33820DA52F55DAA57A | 2.638702393 Gigs
MSmerge_bi_6BDBDB2589364C609E303B85D8C30155 | 2.328575134 Gigs
MSmerge_bi_8DF8751AAE234EF88DBE64E0D947DECD | 1.122215271 Gigs
MSmerge_bi_EE8A4730E51247E5A4719BBC0DAB304E | 1.069778442 Gigs
MSmerge_bi_2A397A3824BA455EAE4F6689623A46AC | 0.340202332 Gigs


Any suggestions on how to deal with these huge tables? Is it safe to delete them? (I'm not an expert in DB administration.)
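For context, MSmerge_* tables hold merge replication metadata, and they are normally trimmed by the built-in retention cleanup rather than by deleting rows by hand. A minimal sketch of invoking it (run in the published database; whether it helps depends on the publication's retention settings):

```sql
-- Sketch: ask merge replication to purge metadata past the retention period.
EXEC sp_mergemetadataretentioncleanup;
```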

Oracle schema import is not importing all the tables present in the schema dump file

Posted: 09 Apr 2013 08:26 AM PDT

I have exported an existing Oracle schema from another machine and then imported it on my local machine. The import was successful, but some tables that are present in the export dump file were not imported.

Here are the export and import commands I used.

    Export command:
    ---------------
    exp sreeni/sreeni@agentrics1:1524/ezmodc full=n file=SreeniTrunkDump.dmp log=SreeniTrunkDump_Export.log

    Import command:
    ---------------
    imp badri/badri@localhost:1521/xe file=SreeniTrunkDump.dmp log=BadriSchemaImport_Import.log full=y

We are using Oracle 10g EE.

What could be going wrong? Can you please suggest a solution to this issue?

When and why can this kind of deadlock occur?

Posted: 09 Apr 2013 09:19 AM PDT

------------------------
LATEST DETECTED DEADLOCK
------------------------
130409  0:40:58
*** (1) TRANSACTION:
TRANSACTION 3D61D41F, ACTIVE 3 sec inserting
mysql tables in use 1, locked 1
LOCK WAIT 43 lock struct(s), heap size 6960, 358 row lock(s), undo log entries 43
MySQL thread id 17241690, OS thread handle 0x7ffd3469a700, query id 860259163 localhost root update
INSERT INTO `notification` (`other_grouped_notifications_count`, `user_id`, `notifiable_type`, `action_item`, `action_id`, `created_at`, `status`, `updated_at`) VALUES (0, 4442, 'MATCH', 'MATCH', 224716, 1365448255, 1, 1365448255)
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 272207 n bits 1272 index `user_id` of table `notification` trx id 3D61D41F lock_mode X locks gap before rec insert intention waiting
Record lock, heap no 69 PHYSICAL RECORD: n_fields 2; compact format; info bits 0
 0: len 4; hex 8000115b; asc    [;;
 1: len 4; hex 0005e0bb; asc     ;;

*** (2) TRANSACTION:
TRANSACTION 3D61C472, ACTIVE 15 sec starting index read
mysql tables in use 1, locked 1
3 lock struct(s), heap size 1248, 2 row lock(s)
MySQL thread id 17266704, OS thread handle 0x7ffd34b01700, query id 860250374 localhost root Updating
UPDATE `notification` SET `status`=0 WHERE user_id = 4443 and status=1
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 0 page no 272207 n bits 1272 index `user_id` of table `notification` trx id 3D61C472 lock_mode X
Record lock, heap no 69 PHYSICAL RECORD: n_fields 2; compact format; info bits 0
 0: len 4; hex 8000115b; asc    [;;
 1: len 4; hex 0005e0bb; asc     ;;

*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 261029 n bits 248 index `PRIMARY` of table `notification` trx id 3D61C472 lock_mode X locks rec but not gap waiting
Record lock, heap no 161 PHYSICAL RECORD: n_fields 16; compact format; info bits 0
 0: len 4; hex 0005e0bb; asc     ;;
 1: len 6; hex 00000c75178f; asc    u  ;;
 2: len 7; hex 480007c00c1d10; asc H      ;;
 3: len 4; hex 8000115b; asc    [;;
 4: len 8; hex 5245474953544552; asc REGISTER;;
 5: SQL NULL;
 6: SQL NULL;
 7: SQL NULL;
 8: len 4; hex d117dd91; asc     ;;
 9: len 4; hex d117dd91; asc     ;;
 10: len 1; hex 80; asc  ;;
 11: SQL NULL;
 12: SQL NULL;
 13: SQL NULL;
 14: SQL NULL;
 15: len 4; hex 80000000; asc     ;;

*** WE ROLL BACK TRANSACTION (2)
------------
TRANSACTIONS
------------
Trx id counter 3DBAB14A
Purge done for trx's n:o < 3DBAAF69 undo n:o < 0
History list length 1911
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0, not started
MySQL thread id 17555159, OS thread handle 0x7ffd342f7700, query id 874625123 localhost root
show engine innodb status
---TRANSACTION 3DBAB148, not started
MySQL thread id 17557253, OS thread handle 0x7ffd34a9f700, query id 874625122 localhost root
---TRANSACTION 3DBAB0F0, not started
MySQL thread id 17557249, OS thread handle 0x7ffd34948700, query id 874624797 localhost root
---TRANSACTION 3DBAB0FF, not started
MySQL thread id 17546322, OS thread handle 0x7ffd343bb700, query id 874624821 localhost root
---TRANSACTION 3DBAB0FE, not started
MySQL thread id 17546288, OS thread handle 0x7ffd3441d700, query id 874624819 localhost root
---TRANSACTION 3DBAB0FB, not started
MySQL thread id 17546267, OS thread handle 0x7ffd34543700, query id 874624818 localhost root
---TRANSACTION 3DBAB0FA, not started
MySQL thread id 17546259, OS thread handle 0x7ffd34638700, query id 874624817 localhost root
---TRANSACTION 3DBAB01D, not started
MySQL thread id 17546253, OS thread handle 0x7ffbfce8f700, query id 874624182 localhost root
---TRANSACTION 3DBAB00A, not started
MySQL thread id 17546252, OS thread handle 0x7ffd34d4d700, query id 874624151 localhost root
---TRANSACTION 3DBAB101, not started
MySQL thread id 17546251, OS thread handle 0x7ffd34359700, query id 874624824 localhost root
---TRANSACTION 3DBAB100, not started
MySQL thread id 17546249, OS thread handle 0x7ffd34853700, query id 874624823 localhost root
---TRANSACTION 3DBAB102, not started
MySQL thread id 17546245, OS thread handle 0x7ffbfd079700, query id 874624825 localhost root
---TRANSACTION 0, not started
MySQL thread id 17509427, OS thread handle 0x7ffbfc0a8700, query id 872618032 localhost root
--------
FILE I/O
--------
I/O thread 0 state: waiting for i/o request (insert buffer thread)
I/O thread 1 state: waiting for i/o request (log thread)
I/O thread 2 state: waiting for i/o request (read thread)
I/O thread 3 state: waiting for i/o request (read thread)
I/O thread 4 state: waiting for i/o request (read thread)
I/O thread 5 state: waiting for i/o request (read thread)
I/O thread 6 state: waiting for i/o request (write thread)
I/O thread 7 state: waiting for i/o request (write thread)
I/O thread 8 state: waiting for i/o request (write thread)
I/O thread 9 state: waiting for i/o request (write thread)
Pending normal aio reads: 0 [0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0] ,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
1251611 OS file reads, 59363166 OS file writes, 43388223 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 37.85 writes/s, 35.43 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 6378, seg size 6380, 45209 merges
merged operations:
 insert 467359, delete mark 13737931, delete 888610
discarded operations:
 insert 0, delete mark 0, delete 0
Hash table size 8850487, node heap has 15036 buffer(s)
296481.25 hash searches/s, 4879.87 non-hash searches/s
---
LOG
---
Log sequence number 174590808256
Log flushed up to   174590808256
Last checkpoint at  174589633376
0 pending log writes, 0 pending chkp writes
42519161 log i/o's done, 35.28 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 4395630592; in additional pool allocated 0
Dictionary memory allocated 829252
Buffer pool size   262144
Free buffers       16
Database pages     247092
Old database pages 91191
Modified db pages  755
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 4120399, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 2422849, created 679606, written 29056740
0.00 reads/s, 0.00 creates/s, 2.86 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 247092, unzip_LRU len: 0
I/O sum[705]:cur[0], unzip sum[0]:cur[0]
--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
1 read views open inside InnoDB
Main thread process no. 30911, id 140720292824832, state: sleeping
Number of rows inserted 30042313, updated 60944284, deleted 28858062, read 2514515901102
11.36 inserts/s, 28.57 updates/s, 4.00 deletes/s, 436602.24 reads/s
----------------------------
END OF INNODB MONITOR OUTPUT
============================

The indexes on the notification table are:

    id      - primary key
    user_id - non-unique index

How to move folder of SQL Server LocalDB instances from the default location?

Posted: 09 Apr 2013 08:37 AM PDT

I get multiple errors with LocalDB (a startup error, can't create a DB from SQL Server Management Studio, and multiple issues when trying to restore a backup), and none of the fixes I found help. It seems to me that all the errors have their root in permissions not being set up correctly.

Now a simple DDL table-creation script works fine from SQL Server Management Studio and creates the database in the root of my user account's folder (as explained here). The Instances folder, however, seems problematic regarding permissions (in contrast to the account root). It is located at D:\Users\[My name]\AppData\Local\Microsoft\Microsoft SQL Server Local DB\Instances, even though when installing SQL Server Express with Advanced Services I specified the DATA folder in another location (where only standard SQL Server Express .mdf files are stored, not LocalDB instances). For that reason, and also to keep my DB files in a location among my projects, I'd like to move the Instances folder to another place. Is this possible? I haven't even found a corresponding registry entry.

I'm using SQL Server Express 2012 and also SSMS of the same version (latest updates installed).

Any help would be appreciated.

Edit: The Database Settings page KookieMonster mentioned previously threw an error for me, not being able to edit nor see any configuration there. I let Windows Update check for updates again and there were some updates for SQL Server Express that weren't installed (although I'm sure I selected everything to install before), so I installed them. This made it possible for me to go to the settings page.

I've tried to set up the LocalDB instance's root folder to somewhere else but I get an access denied error for every folder I tried, namely

Alter failed for Settings 'Microsoft.SqlServer.Management.Smo.ObjectKeyBase'. (Microsoft.SqlServer.Smo)

------------------------------ ADDITIONAL INFORMATION:

An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)


RegCreateKeyEx() returned error 5, 'Access is denied.' (Microsoft SQL Server, Error: 22002)

I get this even if the folder has Full control set for Everyone! Even more amusing, the only folder that works, the one set as the default (the user account's root), has no special permissions applied: just the ones my user has (like any other folder I tried to set). In effect this means that all my LocalDB databases' files are saved to my account's root folder.

Why is there such a huge performance difference in these two join statements?

Posted: 09 Apr 2013 01:42 PM PDT

I have a query which I first wrote as (the query has been simplified a bit):

    select ROW_NUMBER() OVER (ORDER BY c.CategoryTypeID), c.name, count(p.id)
    from category c
    inner join product p on p.indID = c.id or p.catid = c.id and p.deleted = 1 and p.statusid = 1
    group by c.name

This took 8 seconds.

If I change it to the following, it runs in under 1 second:

    select ROW_NUMBER() OVER (ORDER BY c.CategoryTypeID), c.name, count(p.id)
    from category c
    inner join product p on p.indID = c.id or p.catid = c.id
    where p.deleted = 0 and p.statusid = 1
    group by c.name

I'd like to understand why there's such a huge difference in performance. Not only is the performance worse, but the first query also seems to ignore my conditional statements: it counts records regardless of what p.deleted or p.statusid is.

What am I not understanding about how joins work? This is being run on SQL Server 2012.
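One detail worth noting when reading the first query: in standard SQL, AND binds tighter than OR, so the first ON clause is parsed with implicit parentheses, with the deleted/statusid filters attached only to the catid branch. Made explicit:

```sql
-- The first query's ON clause, with the implicit precedence written out:
-- rows matching on indID are accepted regardless of deleted/statusid.
select ROW_NUMBER() OVER (ORDER BY c.CategoryTypeID), c.name, count(p.id)
from category c
inner join product p
        on p.indID = c.id
        or (p.catid = c.id and p.deleted = 1 and p.statusid = 1)
group by c.name
```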

What must be in place to validate an XMLTYPE against a schema?

Posted: 09 Apr 2013 11:31 AM PDT

I have a procedure that generates an XMLTYPE and I want to validate it against a schema. The problem is that there seems to be a permissions issue running createSchemaBasedXML because when I run the procedure as AUTHID DEFINER it gives the error "ORA-31050: Access denied", but when I run it as AUTHID CURRENT_USER it actually returns a validation specific error (I'll deal with that separately). CURRENT_USER is not an acceptable solution.

My supposition is that CURRENT_USER works because the user has the XMLADMIN role. Granting the permissions the role includes does not resolve the issue, so it must be the role's ability to bypass the ACLs.

The thing is, querying RESOURCE_VIEW for the ACL that protects the resource shows that it is protected by /sys/acls/all_owner_acl.xml. DBMS_XDB.getPrivileges shows that the xsd has all the following permissions:

    <read-properties/>
    <read-contents/>
    <write-config/>
    <link/>
    <unlink/>
    <read-acl/>
    <write-acl-ref/>
    <update-acl/>
    <resolve/>
    <link-to/>
    <unlink-from/>
    <dav:lock/>
    <dav:unlock/>
    <dav:write-properties/>
    <dav:write-content/>
    <dav:execute/>
    <dav:take-ownership/>
    <dav:read-current-user-privilege-set/>

Using DBMS_XDB.getAclDocument shows a principal of dav:owner has all privileges, so that must not be enough to allow the owner of the schema to create schema based XML. With this thought in mind I created a block to run DBMS_XDB.createResource creating a new ACL. I can successfully create the ACL and from SQLDeveloper I can see that it exists in the location I created it in.

There are any number of places I could be going wrong in this process, so the core of what I am looking for is this:

What must be in place to validate an XMLTYPE against a schema?

=== Update 4/3/2013 ===
I can set the ACL for my xsd file to /sys/acls/all_all_acl.xml and back to /sys/acls/all_owner_acl.xml. As before, neither of these resolves the permissions issue. I also tried a createResource using the ACL definition copied from all_owner_acl.xml, placed in the same /sys/acls/ path as those files. This at least successfully sets the ACL. I have run it for all my XSDs and queried RESOURCE_VIEW to confirm that they are set to the new ACL. After all this, though, createSchemaBasedXML still gives an access denied error. Is my ACL correct, or might there be another issue?

    <acl xmlns="http://xmlns.oracle.com/xdb/acl.xsd"
         xmlns:dav="DAV:"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
                             http://xmlns.oracle.com/xdb/acl.xsd"
         shared="true">
      <ace>
        <grant>true</grant>
        <principal>MY_ORACLE_USER</principal>
        <privilege>
          <all/>
        </privilege>
      </ace>
    </acl>

=== Update 4/9/2013 ===
I have an anonymous block that can successfully validate XML based on a schema. This again points to a permissions issue that allows this to work when roles are enabled, but not when they are not available.

MySQL slave stuck in "Reading event from the relay log"?

Posted: 09 Apr 2013 08:23 AM PDT

  • 5.5.28-log MySQL Community Server (GPL) by Remi
  • binlog-format=MIXED

My problem is similar to this question.

    *************************** 2. row ***************************
         Id: 973415
       User: system user
       Host:
         db: NULL
    Command: Connect
       Time: 29062
      State: Reading event from the relay log
       Info: NULL

It looks like this bug, except that no one mentioned that version 5.5.28 is affected.

Here is the additional information:

mysql> show slave status\G

              Master_Log_File: mysql-bin.006413
          Read_Master_Log_Pos: 40371120
               Relay_Log_File: mysqld-relay-bin.011003
                Relay_Log_Pos: 226592090
        Relay_Master_Log_File: mysql-bin.006392
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 226591944
              Relay_Log_Space: 5708184440
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 29531

mysql> show engine innodb status;

    ---TRANSACTION 1709EE56, ACTIVE 29524 sec fetching rows
    mysql tables in use 8, locked 8
    94647 lock struct(s), heap size 14121400, 45755176 row lock(s), undo log entries 7547
    MySQL thread id 973415, OS thread handle 0x40d90940, query id 1773214543 Reading event from the relay log

mysqlbinlog --start-position=226591944 mysql-bin.006392

    #130401 15:18:23 server id 248  end_log_pos 226592078   Table_map: `reportingdb`.`v3_cam_ip` mapped to number 4973102
    #130401 15:18:23 server id 248  end_log_pos 226593102   Delete_rows: table id 4973102
    #130401 15:18:23 server id 248  end_log_pos 226594131   Delete_rows: table id 4973102
    #130401 15:18:23 server id 248  end_log_pos 226595169   Delete_rows: table id 4973102

The command that the user ran on the master:

DELETE FROM v3_cam_ip WHERE dt ='2013-03-15' LIMIT 100000  

mysql> show create table v3_cam_ip\G

    *************************** 1. row ***************************
           Table: v3_cam_ip
    Create Table: CREATE TABLE `v3_cam_ip` (
      `campaignid` mediumint(9) unsigned DEFAULT NULL,
      `ip` varchar(20) COLLATE latin1_bin NOT NULL DEFAULT '',
      `dt` date NOT NULL,
      KEY `ix_campaignid` (`campaignid`),
      KEY `ix_dt` (`dt`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_bin
    /*!50100 PARTITION BY RANGE (TO_DAYS(dt))
    (PARTITION p00 VALUES LESS THAN (0) ENGINE = InnoDB,
     PARTITION p01 VALUES LESS THAN (734653) ENGINE = InnoDB,
     PARTITION p02 VALUES LESS THAN (734745) ENGINE = InnoDB,
     PARTITION p05 VALUES LESS THAN (734965) ENGINE = InnoDB,
     PARTITION p06 VALUES LESS THAN (735160) ENGINE = InnoDB,
     PARTITION p07 VALUES LESS THAN (735210) ENGINE = InnoDB,
     PARTITION MERGER_201341 VALUES LESS THAN (735324) ENGINE = InnoDB,
     PARTITION pcurrent_201341 VALUES LESS THAN (735325) ENGINE = InnoDB) */

mysql> show keys from v3_cam_ip\G

    *************************** 1. row ***************************
            Table: v3_cam_ip
       Non_unique: 1
         Key_name: ix_campaignid
     Seq_in_index: 1
      Column_name: campaignid
        Collation: A
      Cardinality: 20
         Sub_part: NULL
           Packed: NULL
             Null: YES
       Index_type: BTREE
          Comment:
    Index_comment:
    *************************** 2. row ***************************
            Table: v3_cam_ip
       Non_unique: 1
         Key_name: ix_dt
     Seq_in_index: 1
      Column_name: dt
        Collation: A
      Cardinality: 20
         Sub_part: NULL
           Packed: NULL
             Null:
       Index_type: BTREE
          Comment:
    Index_comment:
    2 rows in set (0.95 sec)

What I have done on one of two Slaves:

  • stop slave; it hangs, so I have to press Ctrl-C
  • then I tried to restart MySQL (using the init script), but it failed
  • after that I did a kill -9 (silly?) and started it again
  • now the slave is started, but it doesn't get updates from the master

What should I do on the remaining slave?
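Before doing anything else on the remaining slave, its replication state can be inspected without stopping anything; a minimal sketch using standard commands:

```sql
-- Run on the remaining slave; read-only, does not stop replication.
SHOW SLAVE STATUS\G   -- check Slave_IO_Running / Slave_SQL_Running,
                      -- Seconds_Behind_Master and Last_SQL_Error
SHOW PROCESSLIST;     -- see what the replication SQL thread is executing
```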


gdb --batch --quiet -ex 'set pagination off' -ex 'thread apply all bt full' -ex 'quit' -p $(pidof mysqld)

#4  0x0000000000926b99 in ha_partition::handle_unordered_next(unsigned char*, bool) ()
No symbol table info available.
#5  0x0000000000733de9 in Rows_log_event::find_row(Relay_log_info const*) ()
No symbol table info available.
#6  0x0000000000734079 in Delete_rows_log_event::do_exec_row(Relay_log_info const*) ()
No symbol table info available.
#7  0x0000000000734a92 in Rows_log_event::do_apply_event(Relay_log_info const*) ()
No symbol table info available.
#8  0x000000000051774e in apply_event_and_update_pos(Log_event*, THD*, Relay_log_info*) ()

The full backtrace: http://fpaste.org/pXvT/

Foreign Key constraint on fixed value field - Ever appropriate?

Posted: 09 Apr 2013 08:17 AM PDT

I have a short (15 rows) look-up table that lists the valid values for several columns in the database.
I don't think it can be considered a One-True-Lookup-Table: it is about a single definite concept, but it may conceptually be partitioned into a few subgroups (3).
I now happen to have to add a column that actually needs to accept only the values from one of these subgroups.

The proper thing to do now would probably be to make one table for each subgroup and turn the original table into a simple list of IDs, from which the subgroup tables take their primary IDs.

It is, though, very unlikely that I will ever need to refer to the subgroups again; instead I will frequently use things for which all the values in the original table are valid.

So the partitioning option would make things more complex for the vast majority of the application just to support one case.

The only other option I know of is to add a Type column to the original lookup table and a fixed-value column to the single table that needs to refer to the subgroup, and use a two-column foreign key (on ID + Type) in this single case.

This is very ugly, and I'm not sure whether it is theoretically correct to use columns that are not the primary key as foreign key referents; but given the context, is it acceptable, or even the best way?
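A minimal sketch of the Type-column approach described above (all table and column names here are hypothetical); in MySQL/InnoDB the referenced column pair needs an index, so a UNIQUE key over (id, type) makes the reference well-defined:

```sql
-- Hypothetical names; the referenced pair gets a UNIQUE key so the FK is unambiguous.
CREATE TABLE lookup (
  id    INT NOT NULL,
  type  CHAR(1) NOT NULL,        -- subgroup discriminator, e.g. 'A', 'B', 'C'
  label VARCHAR(50) NOT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY uq_lookup_id_type (id, type)
) ENGINE=InnoDB;

CREATE TABLE referring_table (
  id          INT NOT NULL PRIMARY KEY,
  lookup_id   INT NOT NULL,
  lookup_type CHAR(1) NOT NULL DEFAULT 'A',  -- pinned to the one allowed subgroup
  CONSTRAINT fk_subgroup FOREIGN KEY (lookup_id, lookup_type)
    REFERENCES lookup (id, type)
) ENGINE=InnoDB;
```

One caveat: nothing above stops an application from writing a different value into lookup_type; MySQL 5.x parses but does not enforce CHECK constraints, so truly pinning the discriminator would need a trigger.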

Refactoring/normalization - but almost empty table

Posted: 09 Apr 2013 08:33 PM PDT

I normalized a legacy DB into this structure: [ER diagram not preserved]

But I'm not sure whether it is correctly normalized; I don't feel very comfortable with the almost-empty filters table.

The requirements

  • A group contains several items.
  • A filter defines how an item is "displayed".
  • Multiple terms define a filter.
  • Different groups may contain the same item, but with a different filter.
  • A filter can have no terms.

Not really important requirements

  • A filter might be used several times.

Btw. the diagram was created using http://cakeapp.com.

mysql better index?

Posted: 09 Apr 2013 02:01 PM PDT

Hi, I am not sure which column in my sales orders table should get the primary index. The table has an invoice date column (date datatype), with around 50 values added every day, that I need to search on; and an invoice number column (varchar), with around 40 values added every day, containing repeated values that look like a100, a100, a100, a101, a102, a102, etc. Which will be better for searching invoices: SELECT * FROM salesorders WHERE date ... or WHERE invoicenumber ...?
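As a sketch (assuming a table named salesorders with columns named invoicedate and invoicenumber — adjust to the real names), a secondary index can be added on each candidate column and EXPLAIN used to compare how the optimizer handles each predicate:

```sql
-- Hypothetical table/column names.
ALTER TABLE salesorders
  ADD INDEX ix_invoicedate (invoicedate),
  ADD INDEX ix_invoicenumber (invoicenumber);

-- Compare the access path (type, key, rows) for each predicate:
EXPLAIN SELECT * FROM salesorders WHERE invoicedate = '2013-04-09';
EXPLAIN SELECT * FROM salesorders WHERE invoicenumber = 'a100';
```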

What is the actual difference between innodb_autoinc_lock_mode 0,1,2

Posted: 09 Apr 2013 09:01 PM PDT

I would like to understand the differences between innodb_autoinc_lock_mode options 0,1 and 2 when parallel load data infiles are given.

With option "0", I see that it locks the entire table and processes the first transaction requested for its N records, say TX1. So when the next transaction, say TX2, is raised while TX1 is still being uploaded using "load data", it has to wait in the queue for TX1 to complete. It then takes max(AI_column)+1 from the table and uploads the next set of load data. In this case it doesn't jump the auto-increment numbers.

With option "1", I see the same behavior: it locks the entire table, TX2 waits in the queue for TX1 to complete, and the next load starts from max(AI_column)+1. But afterwards the counter ends up at max(AI_column)+some_creepy_jump.

With option "2", however, it doesn't lock the entire table. Instead, inserts from each process proceed simultaneously, interleaving rows from whichever request arrives, and all threads finish in roughly the same time (around 1.21 sec for 7 parallel threads using load data with 1,000,000 records each). In this case the transactions end up in mixed order, and at the end the counter sits at max(AI_column)+some_creepy_jump.

I'm using MySQL 5.1.61.

  • My question is: what is the use of option 1?
  • And why has it remained the default since MySQL 5.1.22?
  • Has anyone come across any disasters with option 2?

My project requires multiple processes running load data against one table. I have tested the above options and decided to go with option 2. Kindly correct me if my experiments are wrong.
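For reference, the mode is set in my.cnf; the variable is read-only at runtime, so a server restart is needed. A minimal fragment:

```ini
# my.cnf fragment (requires a server restart; the variable is not dynamic)
[mysqld]
innodb_autoinc_lock_mode = 2   # interleaved mode: no table-level AUTO-INC lock
# Mode 2 is only replication-safe with row-based binary logging:
binlog_format = ROW
```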

Increase in memory use when database server switches

Posted: 09 Apr 2013 12:01 PM PDT

I have an ASP.NET 4.0 application running on Window Server 2008 and a Oracle 11g database server running on Linux.

There are two database servers with the same virtual IP.

Recently, somehow, one database server went down and the other database server took over. Everything went fine on the database side, apart from a small problem with the Oracle listener.

But on my application server, memory use started increasing and went above 15 GB, almost 99% in use. Normally it uses 2 GB of memory.

I am using ADO.NET to connect to Oracle. My application closes its database connections properly.

I was not able to figure out what the problem is or what I should change in my application. Can anyone suggest what I should do?

Queries getting stuck on very simple COUNT queries

Posted: 09 Apr 2013 05:01 PM PDT

process list:

| 16004 | metrics | xxx:64616      | metrics | Query   | 29064 | Sending data | SELECT COUNT(*) FROM Plugin where LastUpdated >= '1356856200' |         0 |             0 |      1021 |
| 16019 | metrics | xxx:23506      | metrics | Query   | 29043 | Sending data | SELECT COUNT(*) FROM Plugin where LastUpdated >= '1356856200' |         0 |             0 |      1021 |
| 16102 | metrics | xxx:9117       | metrics | Query   | 28881 | Sending data | SELECT COUNT(*) FROM Plugin where LastUpdated >= '1356856200' |         0 |             0 |      1021 |
| 16149 | metrics | xxx:14772      | metrics | Query   | 28768 | Sending data | SELECT COUNT(*) FROM Plugin where LastUpdated >= '1356856200' |         0 |             0 |      1021 |
| 16384 | metrics | xxx:51572      | metrics | Query   | 27916 | Sending data | SELECT COUNT(*) FROM Plugin where LastUpdated >= '1356858000' |         0 |             0 |      1021 |
...
| 17890 | metrics | xxx:28643      | metrics | Query   |   709 | Sending data | SELECT COUNT(*) FROM Plugin where LastUpdated >= '1356885000' |         0 |             0 |        21 |
| 17924 | metrics | xxx:48797      | metrics | Query   |   681 | Sending data | SELECT COUNT(*) FROM Plugin where LastUpdated >= '1356885000' |         0 |             0 |        21 |
| 17929 | metrics | xxx:55624      | metrics | Query   |   671 | Sending data | SELECT COUNT(*) FROM Plugin where LastUpdated >= '1356885000' |         0 |             0 |        21 |
| 17938 | metrics | xxx:2632       | metrics | Query   |   654 | Sending data | SELECT COUNT(*) FROM Plugin where LastUpdated >= '1356885000' |         0 |             0 |        21 |

The table only has around 2500 rows, and only ~1000 need to be scanned. The table is InnoDB.

This started happening last night. I was not able to run REPAIR TABLE, but ANALYZE TABLE ran OK. I can't run any SELECTs manually. When it happened last night I had to kill the mysqld process and let it do crash recovery, but now it's doing it again.

UPDATEs on the table run without error (it is frequently updated); it just seems to be anything that SELECTs from it using COUNT(). I am able to SELECT * FROM Plugin and have every row returned, but even SELECT COUNT(ID) FROM Plugin freezes on "Sending data".

Explain:

mysql> explain SELECT COUNT(*) FROM Plugin where LastUpdated >= '1356856200';
+----+-------------+--------+-------+---------------+-------------+---------+------+------+--------------------------+
| id | select_type | table  | type  | possible_keys | key         | key_len | ref  | rows | Extra                    |
+----+-------------+--------+-------+---------------+-------------+---------+------+------+--------------------------+
|  1 | SIMPLE      | Plugin | range | LastUpdated   | LastUpdated | 5       | NULL | 1382 | Using where; Using index |
+----+-------------+--------+-------+---------------+-------------+---------+------+------+--------------------------+

The server is running Percona Server 5.5.27.

Any ideas on what is causing this?

Thanks a lot.

EDIT: innodb% variables and INNODB STATUS

Rolando: you say to increase innodb_io_capacity; could you please explain? I am using 2x 7200 rpm hard drives in a ZFS mirror. Why should I increase it to 2000? The MySQL website advises 100 for a single 7200 rpm drive (I have it at the default of 200).

Excerpt from INNODB STATUS. These lovely selects have been stuck for half a day now. These are the oldest queries.

---TRANSACTION 27986E9D1, ACTIVE 48622 sec fetching rows
mysql tables in use 1, locked 0
MySQL thread id 570263, OS thread handle 0xad7380000, query id 1217058950 lb1.xxxx xxx.ip.xxx metrics Sending data
SELECT COUNT(*) FROM Plugin where LastUpdated >= '1357635600'
Trx read view will not see trx with id >= 27986E9D3, sees < 278C78F08
---TRANSACTION 2797C7249, ACTIVE 49200 sec fetching rows
mysql tables in use 1, locked 0
MySQL thread id 570063, OS thread handle 0xad7380800, query id 1216141959 lb1.xxxx xxx.ip.xxx metrics Sending data
SELECT COUNT(*) FROM Plugin where LastUpdated >= '1357635600'
Trx read view will not see trx with id >= 2797C724A, sees < 278C78F08

Full status:

=====================================
130109 22:44:19 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 50 seconds
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 737366 1_second, 728615 sleeps, 73673 10_second, 921 background, 911 flush
srv_master_thread log flush and writes: 766639
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 75724985, signal count 103543991
--Thread 46560449536 has waited at row0sel.c line 3941 for 0.0000 seconds the semaphore:
S-lock on RW-latch at 0x83368d9b8 '&block->lock'
a writer (thread id 46560448512) has reserved it in mode  wait exclusive
number of readers 1, waiters flag 1, lock_word: ffffffffffffffff
Last time read locked in file btr0sea.c line 942
Last time write locked in file /usr/local/src/Percona-Server-5.5.27-rel28.1/storage/innobase/row/row0upd.c line 2194
--Thread 46560428032 has waited at row0sel.c line 2924 for 0.0000 seconds the semaphore:
S-lock on RW-latch at 0x8337b76f8 '&block->lock'
number of readers 1, waiters flag 0, lock_word: fffff
Last time read locked in file btr0sea.c line 942
Last time write locked in file /usr/local/src/Percona-Server-5.5.27-rel28.1/storage/innobase/row/row0upd.c line 2194
--Thread 46560448512 has waited at row0upd.c line 2194 for 0.0000 seconds the semaphore:
X-lock (wait_ex) on RW-latch at 0x83368d9b8 '&block->lock'
a writer (thread id 46560448512) has reserved it in mode  wait exclusive
number of readers 1, waiters flag 1, lock_word: ffffffffffffffff
Last time read locked in file btr0sea.c line 942
Last time write locked in file /usr/local/src/Percona-Server-5.5.27-rel28.1/storage/innobase/row/row0upd.c line 2194
--Thread 46560446464 has waited at row0sel.c line 2924 for 0.0000 seconds the semaphore:
S-lock on RW-latch at 0x83368d9b8 '&block->lock'
a writer (thread id 46560448512) has reserved it in mode  wait exclusive
number of readers 1, waiters flag 1, lock_word: ffffffffffffffff
Last time read locked in file btr0sea.c line 942
Last time write locked in file /usr/local/src/Percona-Server-5.5.27-rel28.1/storage/innobase/row/row0upd.c line 2194
Mutex spin waits 860007450, rounds 4788572135, OS waits 44460586
RW-shared spins 56868852, rounds 806806376, OS waits 12775136
RW-excl spins 115255109, rounds 878353038, OS waits 11254430
Spin rounds per wait: 5.57 mutex, 14.19 RW-shared, 7.62 RW-excl
--------
FILE I/O
--------
I/O thread 0 state: waiting for i/o request (insert buffer thread)
I/O thread 1 state: waiting for i/o request (log thread)
I/O thread 2 state: waiting for i/o request (read thread)
I/O thread 3 state: waiting for i/o request (read thread)
I/O thread 4 state: waiting for i/o request (read thread)
I/O thread 5 state: waiting for i/o request (read thread)
I/O thread 6 state: waiting for i/o request (read thread)
I/O thread 7 state: waiting for i/o request (read thread)
I/O thread 8 state: waiting for i/o request (read thread)
I/O thread 9 state: waiting for i/o request (read thread)
I/O thread 10 state: waiting for i/o request (write thread)
I/O thread 11 state: waiting for i/o request (write thread)
I/O thread 12 state: waiting for i/o request (write thread)
I/O thread 13 state: waiting for i/o request (write thread)
I/O thread 14 state: waiting for i/o request (write thread)
I/O thread 15 state: waiting for i/o request (write thread)
I/O thread 16 state: waiting for i/o request (write thread)
I/O thread 17 state: waiting for i/o request (write thread)
Pending normal aio reads: 0 [0, 0, 0, 0, 0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0, 0, 0, 0, 0] ,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
6880197 OS file reads, 409935916 OS file writes, 1108361 OS fsyncs
1 pending preads, 1 pending pwrites
15.60 reads/s, 16930 avg bytes/read, 190.92 writes/s, 2.34 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 824, free list len 518, seg size 1343, 364193 merges
merged operations:
 insert 589402, delete mark 221097, delete 9349
discarded operations:
 insert 0, delete mark 0, delete 0
Hash table size 21249841, node heap has 32061 buffer(s)
142440.39 hash searches/s, 8066.70 non-hash searches/s
---
LOG
---
Log sequence number 3642503658567
Log flushed up to   3642503574648
Last checkpoint at  3636441447521
Max checkpoint age    6957135914
Checkpoint age target 6739725417
Modified age          6062211046
Checkpoint age        6062211046
0 pending log writes, 0 pending chkp writes
398344529 log i/o's done, 170.86 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 11031019520; in additional pool allocated 0
Internal hash tables (constant factor + variable factor)
    Adaptive hash index 695290368       (169998728 + 525291640)
    Page hash           10625704 (buffer pool 0 only)
    Dictionary cache    42705560        (42501104 + 204456)
    File system         99832   (82672 + 17160)
    Lock system         26573920        (26563016 + 10904)
    Recovery system     0       (0 + 0)
Dictionary memory allocated 204456
Buffer pool size        655359
Buffer pool size, bytes 10737401856
Free buffers            96
Database pages          623202
Old database pages      230029
Modified db pages       307887
Pending reads 1
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 20888946, not young 0
183.06 youngs/s, 0.00 non-youngs/s
Pages read 6929404, created 2039787, written 15209639
16.10 reads/s, 11.04 creates/s, 23.24 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 12.42/s, Random read ahead 0.00/s
LRU len: 623202, unzip_LRU len: 0
I/O sum[750]:cur[6], unzip sum[0]:cur[0]
--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
20 read views open inside InnoDB
---OLDEST VIEW---
Normal read view
Read view low limit trx n:o 2797C724A
Read view up limit trx id 278C78F08
Read view low limit trx id 2797C724A
Read view individually stored trx ids:
Read view trx id 2797C7249
Read view trx id 2797C6206
Read view trx id 2797A2F63
Read view trx id 27976266E
Read view trx id 279761B1E
Read view trx id 278C78F08
-----------------
Main thread id 34397547520, state: sleeping
Number of rows inserted 42523215, updated 894466983, deleted 12775, read 9294801842
12.36 inserts/s, 907.78 updates/s, 0.00 deletes/s, 17273.29 reads/s
------------
TRANSACTIONS
------------
Trx id counter 27B056747
Purge done for trx's n:o < 278FA84BC undo n:o < 5D
History list length 15024237
LIST OF TRANSACTIONS FOR EACH SESSION:
[...]
---TRANSACTION 279D23E45, ACTIVE 42765 sec fetching rows
mysql tables in use 1, locked 0
MySQL thread id 572264, OS thread handle 0xb0420f400, query id 1225918036 lb1.xxxx xxx.ip.xxx metrics Sending data
SELECT COUNT(*) FROM Plugin where LastUpdated >= '1357641000'
Trx read view will not see trx with id >= 279D23E48, sees < 2797C7249
---TRANSACTION 279B9A787, ACTIVE 45181 sec fetching rows
mysql tables in use 1, locked 0
MySQL thread id 571429, OS thread handle 0xad737c400, query id 1222536287 lb1.xxxx xxx.ip.xxx metrics Sending data
SELECT COUNT(*) FROM Plugin where LastUpdated >= '1357639200'
Trx read view will not see trx with id >= 279B9A78D, sees < 2797C7249
---TRANSACTION 2799448B4, ACTIVE 47853 sec fetching rows
mysql tables in use 1, locked 0
MySQL thread id 570543, OS thread handle 0xadb3ea000, query id 1218229116 lb1.xxxx xxx.ip.xxx metrics Sending data
SELECT COUNT(*) FROM Plugin where LastUpdated >= '1357635600'
Trx read view will not see trx with id >= 2799448B5, sees < 278C78F08
---TRANSACTION 27986E9D1, ACTIVE 48622 sec fetching rows
mysql tables in use 1, locked 0
MySQL thread id 570263, OS thread handle 0xad7380000, query id 1217058950 lb1.xxxx xxx.ip.xxx metrics Sending data
SELECT COUNT(*) FROM Plugin where LastUpdated >= '1357635600'
Trx read view will not see trx with id >= 27986E9D3, sees < 278C78F08
---TRANSACTION 2797C7249, ACTIVE 49200 sec fetching rows
mysql tables in use 1, locked 0
MySQL thread id 570063, OS thread handle 0xad7380800, query id 1216141959 lb1.xxxx xxx.ip.xxx metrics Sending data
SELECT COUNT(*) FROM Plugin where LastUpdated >= '1357635600'
Trx read view will not see trx with id >= 2797C724A, sees < 278C78F08
----------------------------
END OF INNODB MONITOR OUTPUT
============================

innodb% variables

| innodb_adaptive_flushing                  | OFF                    |
| innodb_adaptive_flushing_method           | estimate               |
| innodb_adaptive_hash_index                | ON                     |
| innodb_adaptive_hash_index_partitions     | 1                      |
| innodb_additional_mem_pool_size           | 10485760               |
| innodb_autoextend_increment               | 8                      |
| innodb_autoinc_lock_mode                  | 1                      |
| innodb_blocking_buffer_pool_restore       | OFF                    |
| innodb_buffer_pool_instances              | 1                      |
| innodb_buffer_pool_restore_at_startup     | 0                      |
| innodb_buffer_pool_shm_checksum           | ON                     |
| innodb_buffer_pool_shm_key                | 0                      |
| innodb_buffer_pool_size                   | 10737418240            |
| innodb_change_buffering                   | all                    |
| innodb_checkpoint_age_target              | 0                      |
| innodb_checksums                          | ON                     |
| innodb_commit_concurrency                 | 0                      |
| innodb_concurrency_tickets                | 500                    |
| innodb_corrupt_table_action               | assert                 |
| innodb_data_file_path                     | ibdata1:10M:autoextend |
| innodb_data_home_dir                      |                        |
| innodb_dict_size_limit                    | 0                      |
| innodb_doublewrite                        | OFF                    |
| innodb_doublewrite_file                   |                        |
| innodb_fake_changes                       | OFF                    |
| innodb_fast_checksum                      | OFF                    |
| innodb_fast_shutdown                      | 1                      |
| innodb_file_format                        | Barracuda              |
| innodb_file_format_check                  | ON                     |
| innodb_file_format_max                    | Antelope               |
| innodb_file_per_table                     | ON                     |
| innodb_flush_log_at_trx_commit            | 2                      |
| innodb_flush_method                       | O_DIRECT               |
| innodb_flush_neighbor_pages               | none                   |
| innodb_force_load_corrupted               | OFF                    |
| innodb_force_recovery                     | 0                      |
| innodb_ibuf_accel_rate                    | 100                    |
| innodb_ibuf_active_contract               | 1                      |
| innodb_ibuf_max_size                      | 5368692736             |
| innodb_import_table_from_xtrabackup       | 0                      |
| innodb_io_capacity                        | 200                    |
| innodb_kill_idle_transaction              | 0                      |
| innodb_large_prefix                       | OFF                    |
| innodb_lazy_drop_table                    | 0                      |
| innodb_lock_wait_timeout                  | 50                     |
| innodb_locks_unsafe_for_binlog            | OFF                    |
| innodb_log_block_size                     | 512                    |
| innodb_log_buffer_size                    | 8388608                |
| innodb_log_file_size                      | 4294967296             |
| innodb_log_files_in_group                 | 2                      |
| innodb_log_group_home_dir                 | ./                     |
| innodb_max_dirty_pages_pct                | 75                     |
| innodb_max_purge_lag                      | 0                      |
| innodb_merge_sort_block_size              | 1048576                |
| innodb_mirrored_log_groups                | 1                      |
| innodb_old_blocks_pct                     | 37                     |
| innodb_old_blocks_time                    | 0                      |
| innodb_open_files                         | 300                    |
| innodb_page_size                          | 16384                  |
| innodb_purge_batch_size                   | 20                     |
| innodb_purge_threads                      | 1                      |
| innodb_random_read_ahead                  | OFF                    |
| innodb_read_ahead                         | none                   |
| innodb_read_ahead_threshold               | 56                     |
| innodb_read_io_threads                    | 8                      |
| innodb_recovery_stats                     | OFF                    |
| innodb_recovery_update_relay_log          | OFF                    |
| innodb_replication_delay                  | 0                      |
| innodb_rollback_on_timeout                | OFF                    |
| innodb_rollback_segments                  | 128                    |
| innodb_show_locks_held                    | 10                     |
| innodb_show_verbose_locks                 | 0                      |
| innodb_spin_wait_delay                    | 6                      |
| innodb_stats_auto_update                  | 1                      |
| innodb_stats_method                       | nulls_equal            |
| innodb_stats_on_metadata                  | ON                     |
| innodb_stats_sample_pages                 | 8                      |
| innodb_stats_update_need_lock             | 1                      |
| innodb_strict_mode                        | OFF                    |
| innodb_support_xa                         | OFF                    |
| innodb_sync_spin_loops                    | 30                     |
| innodb_table_locks                        | ON                     |
| innodb_thread_concurrency                 | 0                      |
| innodb_thread_concurrency_timer_based     | OFF                    |
| innodb_thread_sleep_delay                 | 10000                  |
| innodb_use_global_flush_log_at_trx_commit | ON                     |
| innodb_use_native_aio                     | OFF                    |
| innodb_use_sys_malloc                     | ON                     |
| innodb_use_sys_stats_table                | OFF                    |
| innodb_version                            | 1.1.8-27.0             |
| innodb_write_io_threads                   | 8                      |
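Given the transactions stuck for ~13 hours and the 15-million-entry history list in the status output above, one quick way to list old open transactions (the information_schema.INNODB_TRX table is available on Percona Server 5.5) is:

```sql
-- Long-lived read views hold purge back; range scans then crawl over
-- millions of old row versions, which matches the symptoms above.
SELECT trx_id, trx_state, trx_started,
       TIMESTAMPDIFF(SECOND, trx_started, NOW()) AS age_sec,
       trx_mysql_thread_id, trx_query
FROM   information_schema.INNODB_TRX
ORDER  BY trx_started;
```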

Alter table on live production databases

Posted: 09 Apr 2013 04:57 PM PDT

How do most "popular" database systems (MySQL, Postgres, ...) handle altering tables on live production databases (e.g. adding, deleting, or changing the type of columns)?

I know the correct way is to back everything up, schedule downtime, and then do the changes.

But... does any current database system support doing these things "on-line", without stopping anything? (Maybe just delaying the queries that reference a column that is being changed or deleted.)

And what happens when I just do an ALTER TABLE ... on a live running database? Does everything stop while it runs? Can data get corrupted? Etc.

Again, I'm mostly referring to Postgres or MySQL as these are what I encounter.

(And, yes, anytime I had to do this before, I did it "the right way": backing things up, scheduling downtime, etc. ...but I just want to know if it's possible to do this sort of thing "quick and dirty", or if there is any DB system that actually has support for "quick, live and dirty" schema changes.)


Someone just suggested Facebook's Online Schema Change for MySQL script (with a tutorial here and source here)... it seems like a nice way to automate the set of "hacky" ways to do it... has anyone ever used it in something resembling production?
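The trigger-based pattern these tools automate can be sketched roughly as follows (hypothetical table `t` with columns id and val; real tools also handle UPDATE/DELETE triggers, chunked backfill, throttling, and cleanup):

```sql
-- Rough sketch of the online-schema-change pattern (not production-ready):
CREATE TABLE t_new LIKE t;                        -- 1. shadow table
ALTER TABLE t_new ADD COLUMN extra INT NULL;      -- 2. apply the change there

-- 3. capture ongoing writes (similar triggers are needed for UPDATE/DELETE)
CREATE TRIGGER t_osc_ins AFTER INSERT ON t
FOR EACH ROW REPLACE INTO t_new (id, val) VALUES (NEW.id, NEW.val);

-- 4. backfill existing rows (real tools do this in small chunks), then swap
INSERT IGNORE INTO t_new (id, val) SELECT id, val FROM t;
RENAME TABLE t TO t_old, t_new TO t;              -- atomic swap
```

The upside is that reads and writes against t continue during the copy; the cost is roughly double the disk space and extra write load while the backfill runs.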
