Connecting to an 11g Standard Database (Local) Posted: 19 Jul 2013 07:18 PM PDT So I've installed 11g Standard on a VM; it's going to be connected to from other computers/users. I'm not really used to Oracle setup, so bear with me. Anyway, it installs successfully and everything, but I'm not really sure how to create my first database on it (using SQL Developer). I'm trying to connect with SQL Developer, but I'm not really sure of the parameters. Should I be able to connect at this stage? What do I enter for username/password, etc.? I've seen guides, but they are for Express Edition. This is on a Windows 8 64-bit VM, by the way. Thank you |
Speeding up MySQL InnoDB NOT IN (Subquery) Posted: 19 Jul 2013 12:35 PM PDT Good evening, I'm currently trying to improve the performance of some of my queries. As far as I know, statements like "IN" or "NOT IN" are fast even with a large number of values if the subquery selects an indexed field without conditions. SELECT * FROM table1 WHERE field1 NOT IN (SELECT index_field FROM table2) When conditions on table2 come into play, the query becomes really slow on a large amount of data. SELECT * FROM table1 WHERE field1 NOT IN ( SELECT index_field FROM table2 WHERE user_id = '2' ) I was thinking about using a LEFT JOIN, but when I need to do the filtering for the user_id, it also becomes slow. How can I solve this problem? Currently, I have no clue. The original query looked something like this SELECT i.* FROM stream_item si LEFT JOIN user__publisher_item ui ON ui.user_id = 2 && ui.item_id = si.item_id INNER JOIN item i ON i.id = si.item_id WHERE si.stream_id IN (5,7,8,9,19,24,29,42,43,44) && ui.id IS NULL || ( ui.`read` = 0 || ui.saved = 1 ) GROUP BY si.item_id ORDER BY si.`found` DESC LIMIT 0,10 id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE si index PRIMARY,stream_2_item,stream_id_found stream_2_item 4 \N 663236 Using temporary; Using filesort 1 SIMPLE ui eq_ref user_id_item_id,user_2_item,user_id_read_saved_hidden,user_id_saved,user_id_hidden user_id_item_id 8 const,si.item_id 1 Using where 1 SIMPLE i eq_ref PRIMARY PRIMARY 4 si.item_id 1 I have a table which represents the n:m relationship between streams and items. A unique item can be accessible via multiple streams. On top of this I have a state table for the user-dependent states of an item. Currently, the user-state entry is only generated if the item was read or saved for the first time; otherwise, there is no entry in the user_item table. In the beginning, this query was fine, but now it takes a lot of time. 
Even if I leave out the "INNER JOIN" it takes 7 seconds on the current tables with around 500k rows each. The table-structure in the background is as follows: - stream
- stream_item //n:m relation between stream and items
- item //unique items
- stream_group //user-defined groups of multiple streams
- stream_group_streams //n:m relation between stream_groups and streams
- user
- user_item //state-table that holds the user-dependent states of an item - if there is no state, no row exists for an item
I already asked this question on Stack Overflow but added my two edits with further information too late, so I didn't get an answer. Kind regards, Dominik. By the way, sorry for my bad English |
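One common rewrite worth noting here: a `NOT IN (subquery)` with a condition can be expressed as an anti-join, which lets MySQL apply the `user_id` filter inside the join rather than materializing the subquery per row. A sketch using the column names from the question (adjust to the real schema):

```sql
-- Anti-join rewrite of the filtered NOT IN: rows in table1 that have
-- no matching row in table2 for user 2. The user_id filter belongs in
-- the ON clause, not the WHERE clause, or the anti-join breaks.
SELECT t1.*
FROM table1 AS t1
LEFT JOIN table2 AS t2
       ON t2.index_field = t1.field1
      AND t2.user_id = 2
WHERE t2.index_field IS NULL;   -- keep only the non-matching rows

-- A composite index covers both the filter and the join lookup:
CREATE INDEX idx_user_item ON table2 (user_id, index_field);
```

The equivalent `NOT EXISTS (SELECT 1 FROM table2 WHERE index_field = t1.field1 AND user_id = 2)` form usually optimizes the same way on MySQL 5.x.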
Mysql federated engine table creation error Posted: 19 Jul 2013 11:56 AM PDT I am trying to create a federated table on my local machine which is supposed to connect to another machine on my local network. I have enabled the federated engine on both machines, and it shows up when I run show engines. Below is my syntax. CREATE TABLE fedTransfer( `fedID` INT NOT NULL AUTO_INCREMENT, `transferID` INT, `outletFromID` INT, `employeeToID` INT, `transferStatus` ENUM('y'), `transferToDate` DATE, `transferToTime` TIME, PRIMARY KEY (`fedID`) ) ENGINE=FEDERATED; COMMENT='mysql://root:passw1@192.168.2.103/mptest/fedTransfer'; Below is the error I get when I run this in sqlyog. Error Code: 1 Can't create/write to file 'server name: '' doesn't exist!' (Errcode: 1408685900) Execution Time : 0 sec Transfer Time : 0 sec Total Time : 0.047 sec --------------------------------------------------- Query: COMMENT='mysql://root:passw1@192.168.2.103/mptest/fedTransfer' Error Code: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'COMMENT='mysql://root:passw1@192.168.2.103/mptest/fedTransfer'' at line 1 Execution Time : 0 sec Transfer Time : 0 sec Total Time : 0 sec --------------------------------------------------- |
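The second error message already hints at the problem: the semicolon after `ENGINE=FEDERATED` ends the statement early, so `COMMENT='...'` is then parsed as a new, invalid statement. Also, from MySQL 5.0.13 onward the remote address goes in a `CONNECTION` clause rather than `COMMENT`. A sketch of the corrected DDL, assuming the same remote server:

```sql
CREATE TABLE fedTransfer (
  `fedID` INT NOT NULL AUTO_INCREMENT,
  `transferID` INT,
  `outletFromID` INT,
  `employeeToID` INT,
  `transferStatus` ENUM('y'),
  `transferToDate` DATE,
  `transferToTime` TIME,
  PRIMARY KEY (`fedID`)
) ENGINE=FEDERATED
  -- no semicolon before this clause; CONNECTION replaces COMMENT in 5.0.13+
  CONNECTION='mysql://root:passw1@192.168.2.103/mptest/fedTransfer';
```

The table `fedTransfer` must already exist with a matching definition on the remote server, since FEDERATED only proxies it.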
How to duplicate related records in MySQL Posted: 19 Jul 2013 09:57 AM PDT Tables: Payment with an auto-increment ID and a one-to-many relationship to Projects, such that one payment (say £10) has two records in Projects allocating the £10 between the projects. Now next month, this payment is received again and I would like to duplicate this payment and its project records, except for changing the dates. With one table you can INSERT INTO payments (date, amount) SELECT CURRENT_DATE date, amount FROM payments WHERE ...; Obviously, to re-create the child rows you need the new insert id from payments to use for the FK of the child rows. Is there a way to do this in (MySQL) SQL? I'm doing this for a whole set of matches, not just one. I'm looking for a more efficient way than reading it all out into -say- PHP and doing it in a loop that way. I've been asked for the CREATE TABLEs. They're a simple example, but here you go: CREATE TABLE `payments` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `amount` int(10) unsigned NOT NULL, `date` date NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; CREATE TABLE `projects` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `payment_id` int(10) unsigned NOT NULL, `project` varchar(20) NOT NULL, PRIMARY KEY (`id`), KEY `payment_id` (`payment_id`), CONSTRAINT `y_ibfk_1` FOREIGN KEY (`payment_id`) REFERENCES `payments` (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
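For a single payment, `LAST_INSERT_ID()` carries the new parent key into the child insert; a sketch against the tables above, with a hypothetical source payment id of 42 (the whole-set case additionally needs a mapping between old and new ids, e.g. via a "copied_from" column or a temporary mapping table):

```sql
-- 1. Copy the parent row with today's date.
INSERT INTO payments (amount, `date`)
SELECT amount, CURRENT_DATE
FROM payments
WHERE id = 42;                  -- hypothetical payment to duplicate

-- 2. LAST_INSERT_ID() is the auto-increment id just generated.
SET @new_id = LAST_INSERT_ID();

-- 3. Copy the child rows, pointing them at the new parent.
INSERT INTO projects (payment_id, project)
SELECT @new_id, project
FROM projects
WHERE payment_id = 42;
```

This stays entirely in SQL, so no round trip through PHP is needed per payment.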
Database schema for a variable-level hierarchy with variable types? Posted: 19 Jul 2013 12:08 PM PDT A client is asking for a database structure for a medical translation app that involves narrowing down a health issue and then asking specific questions about it. The tricky part of representing this in a database is that there may be anywhere from 2-5 hierarchical "filters" to narrow down the health issue, then a hierarchy of questions to ask the patient. The same filters and questions can be re-used at any level of the hierarchy. The questions must also be translated into several languages when dealing with a non-English-speaking patient. For example: -> Symptoms -->Cardiovascular -> Chest Pain "Have you experienced any chest pain recently?" "On a scale of one to ten, how severe is the pain?" "Is the onset of the pain sudden or gradual?" "Is the pain related to exertion?" "Do you ever suffer from shortness of breath?" -> Palpitations "Are you ever aware of your heartbeat? What is it like?" "Can you tap out the rhythm with your finger?" -> Discharge --> Chest Pain "We recommend the following lifestyle changes to improve your condition." "Avoid taking too much aspirin for pain." I can use a string hack to make this work, but I am looking for something that scales better and is searchable. Also, it will be maintained by a non-technical person in the future, so I'd prefer something simple to comprehend over something with better performance, like nested sets. (Sorry for the formatting; everything I tried kept auto-block-quoting stuff.) Thank you to anybody who can assist! |
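A plain adjacency list fits the stated constraints (variable depth, reuse, translations, maintainable by a non-technical person). A minimal sketch — all table and column names here are illustrative, not from the question:

```sql
-- Each node is either a filter level or a question; the link table
-- builds the variable-depth tree and lets one node appear under
-- several parents (reuse at any level).
CREATE TABLE node (
  id        INT AUTO_INCREMENT PRIMARY KEY,
  node_type ENUM('filter','question') NOT NULL,
  code      VARCHAR(64) NOT NULL            -- e.g. 'chest_pain'
);

CREATE TABLE node_link (
  parent_id  INT NOT NULL,
  child_id   INT NOT NULL,
  sort_order INT NOT NULL DEFAULT 0,        -- display order of siblings
  PRIMARY KEY (parent_id, child_id),
  FOREIGN KEY (parent_id) REFERENCES node (id),
  FOREIGN KEY (child_id)  REFERENCES node (id)
);

-- One row per language per node; searching is a plain indexed lookup.
CREATE TABLE node_text (
  node_id INT NOT NULL,
  lang    CHAR(2) NOT NULL,                 -- 'en', 'es', ...
  text    TEXT NOT NULL,
  PRIMARY KEY (node_id, lang),
  FOREIGN KEY (node_id) REFERENCES node (id)
);
```

Walking the tree is one query per level (SELECT children of the current node), which is easy to reason about; nested sets would make subtree reads faster but inserts and maintenance far harder for a non-technical maintainer.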
After update triggers that deal with multi-row updates Posted: 19 Jul 2013 02:10 PM PDT I am currently working on a database audit project based on triggers that are fired on update on specific tables. The triggers write the changes into a table; the information written is: table name, updated column, timestamp, user, old value and new value. The triggers work fine with single-row updates, but when it comes to multi-row updates, they do not work. My code is like this: IF (UPDATE(Priority)) BEGIN SET @UpdatedColumn = 'Priority' INSERT INTO dbo.AuditTable ( [TableName] , [Source] , [RecordId] , [User] , [TimeStamp] , [UpdatedColumn] , [OldValue] , [NewValue] ) SELECT N'BookingItem' , -- TableName - nvarchar(max) (SELECT CODE FROM TBL_LEG_SOURCE INNER JOIN INSERTED INS ON LEG_SOURCE_ID = INS.SourceId) , INS.Id , -- RecordId - bigint (SELECT USERNAME FROM INSERTED INNER JOIN TBL_USER ON ModifiedById = USER_ID) , -- User - nvarchar(max) GETDATE() , -- TimeStamp - datetime @UpdatedColumn , -- UpdatedColumn - nvarchar(max) DEL.Priority , -- OldValue - nvarchar(max) INS.Priority -- NewValue - nvarchar(max) FROM INSERTED INS INNER JOIN DELETED DEL ON INS.Id = DEL.Id WHERE ( (INS.Priority <> DEL.Priority) OR (INS.Priority IS NULL AND DEL.Priority IS NOT NULL) OR (INS.Priority IS NOT NULL AND DEL.Priority IS NULL) ) END Error message: Msg 512, Level 16, State 1, Procedure MyTrigger, Line 818 Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression. Any suggestions on how to fix my trigger in order to handle multi-row operations? |
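The Msg 512 comes from the two scalar subqueries in the SELECT list: each one joins against the whole INSERTED table, so with a multi-row update each returns one row per updated row. Rewriting the lookups as joins keeps the statement set-based. A sketch of the replacement SELECT (feeding the same `INSERT INTO dbo.AuditTable (...)` column list as above; table and column names are taken from the question):

```sql
SELECT  N'BookingItem'            AS TableName,
        src.CODE                  AS Source,
        INS.Id                    AS RecordId,
        u.USERNAME                AS [User],
        GETDATE()                 AS [TimeStamp],
        'Priority'                AS UpdatedColumn,
        DEL.Priority              AS OldValue,
        INS.Priority              AS NewValue
FROM    INSERTED INS
INNER JOIN DELETED DEL        ON DEL.Id = INS.Id
-- Per-row lookups become joins, so each audit row resolves its own
-- source and user instead of a subquery over all of INSERTED.
LEFT JOIN TBL_LEG_SOURCE src  ON src.LEG_SOURCE_ID = INS.SourceId
LEFT JOIN TBL_USER u          ON u.USER_ID = INS.ModifiedById
WHERE   INS.Priority <> DEL.Priority
   OR  (INS.Priority IS NULL     AND DEL.Priority IS NOT NULL)
   OR  (INS.Priority IS NOT NULL AND DEL.Priority IS NULL);
```

LEFT JOINs are used so an audit row is still written when a lookup value is missing; use INNER JOIN if those references are guaranteed.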
Multiple foreign keys with shared columns for weak entities Posted: 19 Jul 2013 02:22 PM PDT This is my first question on Stack Exchange. I have a parent table (P) with two children (CC and CD): CREATE TABLE P ( A CHAR(3), B CHAR(2), CONSTRAINT PK_P PRIMARY KEY(A, B) ) CREATE TABLE CC ( A CHAR(3), B CHAR(2), C CHAR(1), CONSTRAINT PK_CC PRIMARY KEY(A, B, C), CONSTRAINT FK_CC_P FOREIGN KEY(A, B) REFERENCES P(A, B) ) CREATE TABLE CD ( A CHAR(3), B CHAR(2), D CHAR(1), CONSTRAINT PK_CD PRIMARY KEY(A, B, D), CONSTRAINT FK_CD_P FOREIGN KEY(A, B) REFERENCES P(A, B) ) I want a table (T) with optional references to CC and CD: CREATE TABLE T ( TId INT AUTO_INCREMENT PRIMARY KEY, -- more fields CurrentA CHAR(3), CurrentB CHAR(2), CurrentC CHAR(1), CurrentD CHAR(1), CONSTRAINT FK_T_CurrentCC FOREIGN KEY(CurrentA, CurrentB, CurrentC) REFERENCES CC(A, B, C), CONSTRAINT FK_T_CurrentCD FOREIGN KEY(CurrentA, CurrentB, CurrentD) REFERENCES CD(A, B, D) ) So that if both records Current CC and Current CD are present, they must refer to the same parent P record. This is the first option, but I intuitively think it may create some problems which I'm not really sure about. I'm also considering a second option: CREATE TABLE TAlternative ( TId INT AUTO_INCREMENT PRIMARY KEY, -- more fields CurrentCCA CHAR(3), CurrentCCB CHAR(2), CurrentCCC CHAR(1), CurrentCDA CHAR(3), CurrentCDB CHAR(2), CurrentCDD CHAR(1), CONSTRAINT FK_TAlternative_CurrentCC FOREIGN KEY(CurrentCCA, CurrentCCB, CurrentCCC) REFERENCES CC(A, B, C), CONSTRAINT FK_TAlternative_CurrentCD FOREIGN KEY(CurrentCDA, CurrentCDB, CurrentCDD) REFERENCES CD(A, B, D), CONSTRAINT CHK_CurrentCC_CurrentDD_Same_P CHECK ( -- Parent not specified ( (CurrentCCA IS NULL AND CurrentCDA IS NULL) AND (CurrentCCB IS NULL AND CurrentCDB IS NULL) ) -- Same parent OR ( (CurrentCCA = CurrentCDA) AND (CurrentCCB = CurrentCDB) ) ) ) Which would be better? |
ORA-04031: unable to allocate 4708660 bytes of shared memory Posted: 19 Jul 2013 01:10 PM PDT I am running Oracle 11g R1 on my local development machine. I have been trying to run an anonymous block to test the FORALL statement in Oracle to insert 25,000 records. I get the following error: ORA-04031: unable to allocate 4708660 bytes of shared memory ("shared pool","DECLARE -- temp_rec temp%R...","PL/SQL SOURCE","PLD: Source Heap") 04031. 00000 - "unable to allocate %s bytes of shared memory (\"%s\",\"%s\",\"%s\",\"%s\")" *Cause: More shared memory is needed than was allocated in the shared pool. *Action: If the shared pool is out of memory, either use the dbms_shared_pool package to pin large packages, reduce your use of shared memory, or increase the amount of available shared memory by increasing the value of the INIT.ORA parameters "shared_pool_reserved_size" and "shared_pool_size". If the large pool is out of memory, increase the INIT.ORA parameter "large_pool_size". I tried looking here and here but wasn't able to resolve this issue. I have tried the following: ALTER SYSTEM FLUSH BUFFER_CACHE; ALTER SYSTEM FLUSH SHARED_POOL; ALTER SYSTEM SET cursor_sharing = 'SIMILAR' SCOPE=BOTH; Please help. EDIT The code I am using is as follows: DECLARE TYPE t_varchar IS TABLE OF varchar(512 char); biz_hierarchy t_varchar := t_varchar(); project_team t_varchar := t_varchar(); user_roles t_varchar := t_varchar(); username t_varchar := t_varchar(); sso t_varchar := t_varchar(); BEGIN SELECT * BULK COLLECT INTO biz_hierarchy, project_team, user_roles, username, sso FROM ... -- Query fetches 25000 records FORALL i IN biz_hierarchy.FIRST..biz_hierarchy.LAST INSERT INTO temp VALUES (biz_hierarchy(i), project_team(i), user_roles(i), username(i), sso(i)); END; / The error returned by Oracle does not mention the line number. If I try loading 5000 records, the code block runs successfully. Fails when I try with 25000. |
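Fetching all 25,000 wide rows into PL/SQL collections at once is what exhausts the memory here; BULK COLLECT with a LIMIT processes the data in bounded batches instead. A sketch assuming the same temp table (the source query is elided in the question and stays elided here):

```sql
DECLARE
  CURSOR c IS SELECT * FROM ...;   -- same source query as above (elided)
  TYPE t_rows IS TABLE OF c%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c;
  LOOP
    -- Fetch at most 1000 rows per iteration, so memory use is bounded
    -- by the batch size rather than the full result set.
    FETCH c BULK COLLECT INTO l_rows LIMIT 1000;
    EXIT WHEN l_rows.COUNT = 0;
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO temp VALUES l_rows(i);
  END LOOP;
  CLOSE c;
END;
/
```

This keeps the FORALL bulk-bind benefit while capping the collection at 1000 rows, which is why the 5000-row run succeeds and the 25,000-row run fails in the original block.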
Why MySQL having clause omitting zero Posted: 19 Jul 2013 03:48 PM PDT My fiddle is here. Workaround: My table structure is create table test (grIds int(1), amount int(1)); insert into test values (1,10), (1,20), (1,15), (2,0), (3,8), (3,25), (3,16), (4,0), (4,21); I have two select clauses select grIds, max(amount) from test group by grIds having max(amount); select grIds, max(amount) from test group by grIds; My question is: why does the select with having keep omitting the row with grIds = 2? |
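What happens here: MySQL evaluates `HAVING max(amount)` as a boolean, and for grIds = 2 the maximum is 0, which is false, so that group is filtered out (a NULL maximum would be filtered too). Written explicitly against the table above:

```sql
-- HAVING <expr> keeps groups where <expr> is true, i.e. non-zero
-- and non-NULL. These two are therefore equivalent:
SELECT grIds, MAX(amount)
FROM test
GROUP BY grIds
HAVING MAX(amount);          -- implicit boolean test

SELECT grIds, MAX(amount)
FROM test
GROUP BY grIds
HAVING MAX(amount) <> 0;     -- the same test, spelled out
```

To keep the grIds = 2 group, drop the HAVING clause entirely, as in the second query from the question.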
mysql died with stack trace Posted: 19 Jul 2013 10:03 AM PDT My mysql server died while running select queries. Status of every table is good. When I went to the error log I found this: Thread pointer: 0x2ab63e0f49e0 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... stack_bottom = 40f34098 thread_stack 0x40000 /usr/sbin/mysqld(my_print_stacktrace+0x35)[0x7d08b5] /usr/sbin/mysqld(handle_fatal_signal+0x4b4)[0x6a90e4] /lib64/libpthread.so.0[0x2b4d4c6a6be0] /usr/sbin/mysqld[0x904618] /usr/sbin/mysqld[0x904725] /usr/sbin/mysqld[0x90496c] /usr/sbin/mysqld[0x848b18] /usr/sbin/mysqld[0x848b7d] /usr/sbin/mysqld[0x84e581] /usr/sbin/mysqld[0x834878] /usr/sbin/mysqld[0x82a885] /usr/sbin/mysqld[0x7fb138] /usr/sbin/mysqld[0x7f3b39] /usr/sbin/mysqld[0x5c38c5] /usr/sbin/mysqld(_Z10sub_selectP4JOINP13st_join_tableb+0x66)[0x5bd396] /usr/sbin/mysqld[0x5c20d7] /usr/sbin/mysqld(_ZN4JOIN4execEv+0xca0)[0x5d7570] /usr/sbin/mysqld(_Z12mysql_selectP3THDPPP4ItemP10TABLE_LISTjR4ListIS1_ES2_jP8st_orderSB_S2_SB_yP13select_resultP18st_select_lex_unitP13st_select_lex+0x12c)[0x5d8d7c] /usr/sbin/mysqld(_Z13handle_selectP3THDP3LEXP13select_resultm+0x1cd)[0x5d982d] /usr/sbin/mysqld[0x591282] /usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x34b0)[0x599580] /usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x33b)[0x59b88b] /usr/sbin/mysqld[0x59c8a2] /usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1af2)[0x59ea62] /usr/sbin/mysqld(_Z10do_commandP3THD+0x167)[0x59f057] /usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x14f)[0x63af3f] /usr/sbin/mysqld(handle_one_connection+0x51)[0x63b121] /lib64/libpthread.so.0[0x2b4d4c69e77d] /lib64/libc.so.6(clone+0x6d)[0x2b4d4d9ef9ad] This is a percona xtradb cluster; version is mysql Ver 14.14 Distrib 5.5.31, for Linux (x86_64) using readline 5.1. |
Slave node can't create database? [on hold] Posted: 19 Jul 2013 10:05 AM PDT I found this question and I want to know whether a slave node can create a database that will be reflected back to the master. Edited: I also want to know whether it's true that a slave node can't create tables. |
Create spfile from pfile Posted: 19 Jul 2013 01:29 PM PDT I'm having a problem creating an spfile from a pfile. I think I don't have permission to write to the Oracle home. I'm using Windows 8 SQL> startup nomount pfile='initlittle2.ora.txt'; ORACLE instance started. Total System Global Area 644468736 bytes Fixed Size 1376520 bytes Variable Size 192941816 bytes Database Buffers 444596224 bytes Redo Buffers 5554176 bytes SQL> create spfile from pfile; create spfile from pfile * ERROR at line 1: ORA-01078: failure in processing system parameters LRM-00109: could not open parameter file 'A:\ORACLEBASE\PRODUCT\11.2.0\OLTP2\DATABASE\INITLITTLE2.ORA' Is there any workaround for my problem? |
The InnoDB log sequence number is in the future Posted: 19 Jul 2013 06:30 PM PDT Our site was getting intermittent database errors, and I asked my web host to check what was happening. After some inspection they found that the database had a few issues and tried repairing it. Finally I got the following message from them: I have attempted every repair on the InnoDB database and we are still getting the InnoDB log sequence number is in the future. At this point to get the ibdata and the iblogfile to match up once again we will need to restore the MySQL directory (which includes databases) from the backup we have on the server. The process should not take to long but there will be some downtime associated with a restore like this. If this is not the best time to restore the MySQL directory I can schedule this for a different time. Please let me know how you would like to proceed with this. Can someone tell me what would be the best way to address this issue? I really don't want to lose any data and want the DB to be repaired. PS: If you need more information, please let me know and I'll get it from our web hosts. I would really appreciate your help. |
how to verify mysql backup file Posted: 19 Jul 2013 05:30 PM PDT Is there any tool to verify a backup taken with mysqldump without manually restoring it? I have to take a full backup daily, so it is difficult for me to verify the backups by restoring them every day, as that takes time and also temporary disk space. Please suggest how I can verify my backups online |
Can I update an Access table with an Oracle table? Posted: 19 Jul 2013 03:29 PM PDT I have an Access database with a table with a unique key and price field among other fields. I have been told I have access to an Oracle database with those prices in it and the linking Key. Can I run an update query in Access to update those price fields with values in the Oracle table or do I need to import the Oracle table into Access first and then run the update query from that access table? I'm pretty well up on Access but have never had to link it with another SQL database system. |
Hierarchical Structure Posted: 19 Jul 2013 04:29 PM PDT Every single user has say, 3 of GROUP_A , 10 GROUP_B 's per GROUP_A , and 20 GROUP_C 's per GROUP_B . And each of the 20 GROUP_C 's involve lots of inserts/deletes... I'm not an expert, but I've done research but it's all theoretical at this point of course, and I don't have hands on experience with the implementation that's for sure. I think my options are something like 'adjacency lists' or 'nested sets'? Any guidance into the right direction would be very much appreciated! |
sql:space at the end of string Posted: 19 Jul 2013 12:57 PM PDT I have SQL code as shown below: declare @str nvarchar(max),@i int set @i=0 set @str='abc ' declare @tbl Table(a nvarchar(max)) insert @tbl select @str while (select a from @tbl)<>'' begin set @i=@i+1 set @str = substring(@str,2,len(@str)) update @tbl set a=@str select * from @tbl end Here @str has the value 'abc ' (there is a space at the end). When the above query is executed, it stops when only a space is present in 'a'. The output of this query is: bc c <here blank> For the above query, if I give the input @str as 'abcd' then the output will be bcd cd d <here blank> So in the first case, that is @str='abc ', I want to get output like bc c <here blank> <here blank> Right now the code is checking for a space, and because of that I am having a problem; I want it to consider the trailing space as well. The same problem exists in SQL with len(): both len('a') and len('a ') return 1. Can anyone help with my query so that it gives the desired output? |
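Two T-SQL behaviors combine here: string comparison ignores trailing spaces (so `' ' <> ''` is false), and `LEN()` ignores them too, while `DATALENGTH()` does not. A small sketch of the difference and the resulting fix for the loop condition:

```sql
DECLARE @s nvarchar(10);
SET @s = N'abc ';

SELECT LEN(@s)            AS len_chars,   -- 3: trailing space ignored
       DATALENGTH(@s) / 2 AS real_chars;  -- 4: nvarchar is 2 bytes/char

-- Loop condition that counts the trailing space, replacing
-- "while (select a from @tbl) <> ''":
-- WHILE (SELECT DATALENGTH(a) FROM @tbl) > 0
```

Using `DATALENGTH(...) > 0` (and `DATALENGTH(@str)/2` wherever a character count is needed) makes the loop run one extra iteration for 'abc ', producing the fourth blank line the question asks for.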
Will deleting data in master automatically delete data in slave? Posted: 19 Jul 2013 12:25 PM PDT I have data replication setup on MySQL. If I delete data on the master, will replication know data was deleted and automatically delete the same data on slave? |
Why does that query cause lock wait timeouts? Posted: 19 Jul 2013 12:29 PM PDT From time to time, I find a lot of these errors in my PHP error log: MYSQL.1213: Deadlock found when trying to get lock; try restarting transactionSQL The problem persists for about 2 or 3 minutes. Thanks to stackoverflow, the reason was quite easy to find: ------------------------ LATEST DETECTED DEADLOCK ------------------------ 130320 15:53:37 *** (1) TRANSACTION: TRANSACTION 0 83395751, ACTIVE 35 sec, process no 26405, OS thread id 140507872417536 starting index read mysql tables in use 3, locked 3 LOCK WAIT 3 lock struct(s), heap size 1216, 2 row lock(s) MySQL thread id 1163191, query id 199629038 localhost sosci Updating UPDATE `database`.`table` SET `invalidate`='2013-03-21 03:53:02' WHERE ((token='C7G8X3HABCDEFGH') AND (invalidate IS NULL)) AND (project=26118) LIMIT 1 *** (1) WAITING FOR THIS LOCK TO BE GRANTED: RECORD LOCKS space id 0 page no 65548 n bits 192 index `PRIMARY` of table `database`.`table` trx id 0 83395751 lock_mode X locks rec but not gap waiting Record lock, heap no 4 PHYSICAL RECORD: n_fields 12; compact format; info bits 0 0: len 4; hex 000c2591; asc % ;; 1: len 6; hex 000004e36ace; asc j ;; 2: len 7; hex 8000000a830110; asc ;; 3: len 4; hex 80000001; asc ;; 4: len 4; hex 80006606; asc f ;; 5: len 1; hex 07; asc ;; 6: len 16; hex 32455637363853485447444734584252; asc 2EV768SHTGDG4XBR;; 7: SQL NULL; 8: len 30; hex 3935363436362c656e672c616e6e612e63616d706f7265736940676d6169; asc 956466,eng,anna.camporesi@gmai;...(truncated); 9: SQL NULL; 10: len 8; hex 8000124ef477640e; asc N wd ;; 11: len 8; hex 8000124ef495e88e; asc N ;; *** (2) TRANSACTION: TRANSACTION 0 83395676, ACTIVE 37 sec, process no 26405, OS thread id 140507856160512 fetching rows, thread declared inside InnoDB 451 mysql tables in use 1, locked 1 4 lock struct(s), heap size 1216, 53 row lock(s), undo log entries 1 MySQL thread id 1163198, query id 199628885 localhost sosci updating DELETE FROM 
`database`.`table` WHERE ((action="limit") AND (info='login') AND (creation < DATE_SUB(NOW(), INTERVAL 10 MINUTE))) *** (2) HOLDS THE LOCK(S): RECORD LOCKS space id 0 page no 65548 n bits 192 index `PRIMARY` of table `database`.`table` trx id 0 83395676 lock_mode X Record lock, heap no 4 PHYSICAL RECORD: n_fields 12; compact format; info bits 0 0: len 4; hex 000c2591; asc % ;; 1: len 6; hex 000004e36ace; asc j ;; 2: len 7; hex 8000000a830110; asc ;; 3: len 4; hex 80000001; asc ;; 4: len 4; hex 80006606; asc f ;; 5: len 1; hex 07; asc ;; 6: len 16; hex 32455637363853485447444734584252; asc 2EV768SHTGDG4XBR;; 7: SQL NULL; 8: len 30; hex 3935363436362c656e672c616e6e612e63616d706f7265736940676d6169; asc 956466,eng,anna.camporesi@gmai;...(truncated); 9: SQL NULL; 10: len 8; hex 8000124ef477640e; asc N wd ;; 11: len 8; hex 8000124ef495e88e; asc N ;; *** (2) WAITING FOR THIS LOCK TO BE GRANTED: RECORD LOCKS space id 0 page no 65548 n bits 192 index `PRIMARY` of table `database`.`table` trx id 0 83395676 lock_mode X waiting Record lock, heap no 117 PHYSICAL RECORD: n_fields 12; compact format; info bits 0 0: len 4; hex 000c31d2; asc 1 ;; 1: len 6; hex 000004f884fc; asc ;; 2: len 7; hex 80000011040110; asc ;; 3: len 4; hex 80000001; asc ;; 4: SQL NULL; 5: len 1; hex 06; asc ;; 6: SQL NULL; 7: len 15; hex 3133322e3139392e3132312e313632; asc 132.199.121.162;; 8: len 5; hex 6c6f67696e; asc login;; 9: len 1; hex 81; asc ;; 10: len 8; hex 8000124ef49502aa; asc N ;; 11: SQL NULL; *** WE ROLL BACK TRANSACTION (1) What I do not understand is: Why? The locked table is very small, only 61 entries (about 30 new and 30 deleted per day, the auto-incremented primary index is near 800.000). No column is especially large. I use the InnoDB enging for this table (one key refers to another table with approx. 20.000 entries) and the problem occurs from time to time. 
RAM should not be an issue. The webserver and MySQL server run on the same (virtual) machine, which usually does not suffer performance problems. Other transactions (there were thousands during the locked minutes) on large tables (1-2 million entries) did not cause any trouble. Thanks for your hints! |
Database design for an E-commerce website Posted: 19 Jul 2013 12:19 PM PDT I am new to database design. I am designing a database for an e-commerce website with a lot of products to be updated, but when designing the product specification table I do not understand whether I should put all the specifications in one table or use different tables for different products. For example, consider the products mobile and book, each having unique specifications (such as color, size, cost, and model for a mobile; and title, ISBN, author, cost, year_of_publication, etc. for a book). If there were only a few products it would be possible for me to design this way, but when there are thousands of products, it takes a lot of time. Can anyone tell me how to design my database to manage this situation? |
Partition Fact Table Posted: 19 Jul 2013 08:30 PM PDT One of my fact tables has close to 25 million records in it, and when the Cognos team tries to run reports on top of it, it takes a lot of time. So I am thinking of partitioning the table. We store records in the fact table on a daily basis, and we do have the id calaendarday dma column in the fact table with data type int. Is there a way I can partition this fact table based on id calaendarday dma? Please help with the above, as it is impacting the Cognos reports |
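With an int calendar-day key, RANGE partitioning by that column is the usual approach. The question doesn't state the DBMS, so as one illustration, a MySQL-syntax sketch with hypothetical table and column names (assuming the day key is stored as e.g. 20130719):

```sql
-- Hypothetical fact table partitioned by the int day key. Note that
-- MySQL requires the partitioning column to be part of every unique
-- key (including the primary key) on the table.
ALTER TABLE fact_table
PARTITION BY RANGE (id_calendarday_dma) (
  PARTITION p201306 VALUES LESS THAN (20130701),
  PARTITION p201307 VALUES LESS THAN (20130801),
  PARTITION pmax    VALUES LESS THAN MAXVALUE
);
```

Partition pruning then lets a report that filters on a date range scan only the matching partitions instead of all 25 million rows; other engines (SQL Server partition functions/schemes, Oracle interval partitioning) express the same idea with different syntax.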
SQL Server 2005 not collecting logins in the log file Posted: 19 Jul 2013 01:00 PM PDT I am currently running SQL Server 2005 Standard Edition on a Windows 2003 server machine. I have gone to the properties of the server and confirmed that the Login Auditing is set to both failed and successful logins. For some reason though there is nothing in the logins for fails or successes in the SQL Server logs when I know there have been logins for both. I have searched out the reason for this and have not been able to come up with anything helpful, so I am hoping that someone here could give me a little direction. This is working on my other SQL Servers, so I know where to look for the results, but there are none there. |
Pivot rows into multiple columns Posted: 19 Jul 2013 10:06 AM PDT I have a SQL Server instance that has a linked server to an Oracle server. There is a table on the Oracle server called PersonOptions which contains the following data: ╔══════════╦══════════╗ ║ PersonID ║ OptionID ║ ╠══════════╬══════════╣ ║ 1 ║ A ║ ║ 1 ║ B ║ ║ 2 ║ C ║ ║ 3 ║ B ║ ║ 4 ║ A ║ ║ 4 ║ C ║ ╚══════════╩══════════╝ I need to pivot that data so the results are: ╔══════════╦═════════╦══════════╦══════════╗ ║ PersonID ║ OptionA ║ Option B ║ Option C ║ ╠══════════╬═════════╬══════════╬══════════╣ ║ 1 ║ 1 ║ 1 ║ ║ ║ 2 ║ ║ ║ 1 ║ ║ 3 ║ ║ 1 ║ ║ ║ 4 ║ 1 ║ ║ 1 ║ ╚══════════╩═════════╩══════════╩══════════╝ Any suggestions? |
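With a fixed set of options, conditional aggregation produces this shape without relying on PIVOT syntax differences between SQL Server and the Oracle linked server. A sketch over the table above:

```sql
-- One output row per person; MAX(CASE ...) yields 1 where the option
-- exists and NULL (shown blank) where it does not.
SELECT PersonID,
       MAX(CASE WHEN OptionID = 'A' THEN 1 END) AS OptionA,
       MAX(CASE WHEN OptionID = 'B' THEN 1 END) AS OptionB,
       MAX(CASE WHEN OptionID = 'C' THEN 1 END) AS OptionC
FROM PersonOptions
GROUP BY PersonID
ORDER BY PersonID;
```

If the set of OptionID values is open-ended, the column list has to be built dynamically (e.g. dynamic SQL on the SQL Server side), since SQL cannot emit a variable number of columns from a static query.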
SQL Server 2008 R2 corrupted after changing MAXDOP Posted: 19 Jul 2013 02:29 PM PDT My SQL Server 2008 R2 doesn't work after changing the MAXDOP parameter on the instances. I have 6 instances located on the same physical server, and I changed MAXDOP from 1 to 2 on 5 of them. After that, all 5 of these instances don't work. After a restart of the server the services don't start, and I just see the error The request failed or the service did not respond in a timely fashion. Consult the event log or other application error logs for details In Event Viewer I didn't find anything useful, and the error log wasn't updated when I tried to run an instance. I also noticed a strange thing: when I open the Configuration Manager and open the properties of an instance, I don't see the text of the parameter names on the Advanced tab. On the other tabs everything is fine. The screen is attached. Has anybody faced the same problem? I don't have any idea how to repair this... upd: By the way, I tried to start SQL Server manually from the command line with the -f or -m parameter, but nothing happens. I just see an empty error window |
Applying user-defined fields to arbitrary entities Posted: 19 Jul 2013 07:30 PM PDT Currently we have an old (rather crude) system that has user-defined fields, which are mapped against rows in arbitrary tables. This was an after-the-fact modification based on a customer request, and it wasn't really designed to scale well. Our system has around 60 different types of entities, which makes things even more complicated. Essentially the implementation looks like this: USER_DEFINED_FIELDS : UDF_ID int ENTITY_TYPE int UDF_NAME nvarchar(64) UDF_DATA_TYPE int UDF_FORMAT nvarchar(16) UDF_MASK nvarchar(24) UDF_FLAGS int UDF_VALUES_NUMBER : UDF_ID int ENTITY_ID int VALUE int MODIFIED datetime UDF_VALUES_TEXT : UDF_ID int ENTITY_ID int VALUE nvarchar(255) MODIFIED datetime etc... This gets nice and fun when we generate our own ways to index compound primary keys, but that's another DailyWTF-worthy story. Obviously this is pretty hideous, and leads to some spectacularly horrific queries being generated, but it's worked alright for now because we limit each entity to a maximum of 5 user-defined fields. As a quick disclaimer, I wasn't with the company when this design decision was made! ;) Anyway, we're about to start a shiny new project and will inevitably need a better way of doing this, with no restrictions on the number of UDFs we can apply to entities, increased performance, and less horror in the generated query department. If we could run a NoSQL solution like Mongo or Redis I'd be happy and wouldn't need to bother you all, but sadly that's not an option. Instead, I need a way to do this from within SQL Server 2008 R2. So far, I've come up with the following options: - Individual UDF table for each entity type, with identical structures.
- Benefits: Queries are faster and simpler, solves the compound index problem, can search by UDF.
- Downsides: Table versioning is more cumbersome, lots of extra tables, difficult to index across similar entity types (e.g. purchase orders and sales orders)
- Binary blob UDF column in every entity's table.
- Benefits: Really fast: no fetch overhead, no extra JOINs.
- Downsides: Can't search by UDF, extra proprietary code required.
- XML column in every entity's table.
- Benefits: Fast like binary blobs, seems to be indexable.
- Downsides: No idea how they work - looks complicated!
So, do any of these ideas have merit? Is there a better alternative? |
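For option 3, SQL Server 2008 R2 can query and index into an XML column directly, which addresses the "can't search by UDF" downside of blobs. A minimal sketch with illustrative names, not the project's actual schema:

```sql
-- Hypothetical entity table with an XML bag of user-defined fields,
-- e.g. <udf><field name="Color">Red</field></udf>.
CREATE TABLE SalesOrder (
  OrderId INT IDENTITY PRIMARY KEY,   -- clustered PK required for XML indexes
  Udf     XML NULL
);

-- Search/extract one UDF by name:
SELECT OrderId,
       Udf.value('(/udf/field[@name="Color"]/text())[1]',
                 'nvarchar(64)') AS Color
FROM SalesOrder
WHERE Udf.exist('/udf/field[@name="Color"]') = 1;

-- A primary XML index shreds the XML so exist()/value() predicates
-- no longer parse every document at query time:
CREATE PRIMARY XML INDEX IX_SalesOrder_Udf ON SalesOrder (Udf);
```

Secondary XML indexes (PATH/VALUE/PROPERTY) can further target the specific predicate shapes used; the trade-off versus per-entity UDF tables is index storage overhead and a steeper learning curve for whoever maintains the queries.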
Select * from statement execute very slowly, innodb io read speed is low Posted: 19 Jul 2013 09:30 PM PDT I have a very simple query " select * from ap_statistic " running on my servers. The servers have the same hardware and software configuration (CPU: 8 cores, memory: 32G, OS: Red Hat 5.5, MySQL version: 5.1) and run the same applications. On server A, the row count of the table ap_statistic is about 22512379; on server B, the row count is 41438751. Of course the query running on server A is faster than on server B, but what is strange is that the query on server B is extremely slow: it takes more than 3 hours, whereas on server A it takes just 10 minutes. I used some tools to monitor system status and found that when the query is running on server A, the system IO read speed is about 20~30M/s, but on server B it's 2~3M/s. I've tried dropping the Linux cache and restarting the MySQL server, with the same result. I also tried restoring the DB from server B to server A, and then the query on server A was also very slow, with a very low IO read speed. I want to know why this happens. The ap_statistic table data on server A was generated by normal running, and the table data on server B was generated by a stored procedure. 
The table schema is: CREATE TABLE `ap_statistic` ( `ap_statisticId` BIGINT(20) UNSIGNED NOT NULL, `deviceId` INT(11) UNSIGNED NOT NULL DEFAULT '0', `macaddress` VARCHAR(100) DEFAULT NULL, `check_time` TIMESTAMP NOT NULL DEFAULT '0000-00-00 00:00:00', `ap_count` INT(10) UNSIGNED NOT NULL DEFAULT '0', `wlan` VARCHAR(64) DEFAULT NULL, `radio` VARCHAR(50) DEFAULT NULL, `bssid` VARCHAR(32) DEFAULT NULL, `zd_ap_name` VARCHAR(64) DEFAULT NULL, `channel` INT(2) DEFAULT NULL, `uplinkRSSI` INT(3) DEFAULT '0', `downlinkRSSI` INT(3) DEFAULT '0', `txBytes` BIGINT(20) DEFAULT '0', `rxBytes` BIGINT(20) DEFAULT '0', `txPkts` BIGINT(20) DEFAULT '0', `rxPkts` BIGINT(20) DEFAULT '0', `hops` INT(1) DEFAULT '0', `numDownlink` INT(3) DEFAULT '0', `distance` INT(4) DEFAULT '0', `phyerr` INT(11) DEFAULT '0', `max_num_clients` INT(3) DEFAULT '0', `max_mesh_downlinks` INT(1) DEFAULT '0', `airtime` INT(3) DEFAULT '0', `uptimePercentage` INT(3) DEFAULT '0', `total_num_clients` INT(3) DEFAULT '0', `tx_actual_throughput` BIGINT(20) DEFAULT '0', `rx_actual_throughput` BIGINT(20) DEFAULT '0', `tunnelMode` VARCHAR(32) DEFAULT NULL, `externalIp` VARCHAR(64) DEFAULT NULL, `externalPort` VARCHAR(32) DEFAULT NULL, `level` INT(1) DEFAULT '1', `essid` VARCHAR(64) DEFAULT NULL, `total_client_join` INT(11) DEFAULT '0', PRIMARY KEY (`ap_statisticId`), KEY `check_time` (`check_time`), KEY `macaddress` (`macaddress`), KEY `deviceId` (`deviceId`) ) ENGINE=INNODB DEFAULT CHARSET=utf8 The following are the table file info and some output from the monitoring tools Server B -rw-rw---- 1 mysql mysql 18568183808 Oct 11 14:52 ap_statistic.ibd [root@localhost itms]# filefrag ./ap_statistic.ibd ./ap_statistic.ibd: 164 extents found, perfection would be 159 extents TABLE Non_unique Key_name Seq_in_index Column_name COLLATION Cardinality Sub_part Packed NULL Index_type COMMENT ------------ ---------- ---------- ------------ -------------- --------- ----------- -------- ------ ------ ---------- ------- ap_statistic 0 PRIMARY 1 
ap_statisticId A 41438751 (NULL) (NULL) BTREE ap_statistic 1 check_time 1 check_time A 10320 (NULL) (NULL) BTREE ap_statistic 1 macaddress 1 macaddress A 16 (NULL) (NULL) YES BTREE ap_statistic 1 deviceId 1 deviceId A 16 (NULL) (NULL) BTREE mysql>show status; Variable_name Value Aborted_clients 0 Aborted_connects 0 Binlog_cache_disk_use 0 Binlog_cache_use 0 Bytes_received 1256 Bytes_sent 8844 Com_admin_commands 0 Com_assign_to_keycache 0 Com_alter_db 0 Com_alter_db_upgrade 0 Com_alter_event 0 Com_alter_function 0 Com_alter_procedure 0 Com_alter_server 0 Com_alter_table 0 Com_alter_tablespace 0 Com_analyze 0 Com_backup_table 0 Com_begin 0 Com_binlog 0 Com_call_procedure 0 Com_change_db 1 Com_change_master 0 Com_check 0 Com_checksum 0 Com_commit 0 Com_create_db 0 Com_create_event 0 Com_create_function 0 Com_create_index 0 Com_create_procedure 0 Com_create_server 0 Com_create_table 0 Com_create_trigger 0 Com_create_udf 0 Com_create_user 0 Com_create_view 0 Com_dealloc_sql 0 Com_delete 0 Com_delete_multi 0 Com_do 0 Com_drop_db 0 Com_drop_event 0 Com_drop_function 0 Com_drop_index 0 Com_drop_procedure 0 Com_drop_server 0 Com_drop_table 0 Com_drop_trigger 0 Com_drop_user 0 Com_drop_view 0 Com_empty_query 0 Com_execute_sql 0 Com_flush 0 Com_grant 0 Com_ha_close 0 Com_ha_open 0 Com_ha_read 0 Com_help 0 Com_insert 0 Com_insert_select 0 Com_install_plugin 0 Com_kill 0 Com_load 0 Com_load_master_data 0 Com_load_master_table 0 Com_lock_tables 0 Com_optimize 0 Com_preload_keys 0 Com_prepare_sql 0 Com_purge 0 Com_purge_before_date 0 Com_release_savepoint 0 Com_rename_table 0 Com_rename_user 0 Com_repair 0 Com_replace 0 Com_replace_select 0 Com_reset 0 Com_restore_table 0 Com_revoke 0 Com_revoke_all 0 Com_rollback 0 Com_rollback_to_savepoint 0 Com_savepoint 0 Com_select 1 Com_set_option 3 Com_show_authors 0 Com_show_binlog_events 0 Com_show_binlogs 0 Com_show_charsets 0 Com_show_collations 0 Com_show_column_types 0 Com_show_contributors 0 Com_show_create_db 0 
Com_show_create_event 0 Com_show_create_func 0 Com_show_create_proc 0 Com_show_create_table 1 Com_show_create_trigger 0 Com_show_databases 0 Com_show_engine_logs 0 Com_show_engine_mutex 0 Com_show_engine_status 0 Com_show_events 0 Com_show_errors 0 Com_show_fields 1 Com_show_function_status 0 Com_show_grants 0 Com_show_keys 1 Com_show_master_status 0 Com_show_new_master 0 Com_show_open_tables 0 Com_show_plugins 0 Com_show_privileges 0 Com_show_procedure_status 0 Com_show_processlist 0 Com_show_profile 0 Com_show_profiles 0 Com_show_slave_hosts 0 Com_show_slave_status 0 Com_show_status 21 Com_show_storage_engines 0 Com_show_table_status 0 Com_show_tables 0 Com_show_triggers 0 Com_show_variables 0 Com_show_warnings 0 Com_slave_start 0 Com_slave_stop 0 Com_stmt_close 0 Com_stmt_execute 0 Com_stmt_fetch 0 Com_stmt_prepare 0 Com_stmt_reprepare 0 Com_stmt_reset 0 Com_stmt_send_long_data 0 Com_truncate 0 Com_uninstall_plugin 0 Com_unlock_tables 0 Com_update 0 Com_update_multi 0 Com_xa_commit 0 Com_xa_end 0 Com_xa_prepare 0 Com_xa_recover 0 Com_xa_rollback 0 Com_xa_start 0 Compression ON Connections 323 Created_tmp_disk_tables 1 Created_tmp_files 5 Created_tmp_tables 2 Delayed_errors 0 Delayed_insert_threads 0 Delayed_writes 0 Flush_commands 1 Handler_commit 1 Handler_delete 0 Handler_discover 0 Handler_prepare 0 Handler_read_first 0 Handler_read_key 0 Handler_read_next 0 Handler_read_prev 0 Handler_read_rnd 0 Handler_read_rnd_next 39 Handler_rollback 0 Handler_savepoint 0 Handler_savepoint_rollback 0 Handler_update 0 Handler_write 37 Innodb_buffer_pool_pages_data 43392 Innodb_buffer_pool_pages_dirty 0 Innodb_buffer_pool_pages_flushed 43822 Innodb_buffer_pool_pages_free 637198 Innodb_buffer_pool_pages_misc 562 Innodb_buffer_pool_pages_total 681152 Innodb_buffer_pool_read_ahead_rnd 9 Innodb_buffer_pool_read_ahead_seq 27 Innodb_buffer_pool_read_requests 36489397 Innodb_buffer_pool_reads 27421 Innodb_buffer_pool_wait_free 0 Innodb_buffer_pool_write_requests 4165371 
Innodb_data_fsyncs 5228 Innodb_data_pending_fsyncs 0 Innodb_data_pending_reads 1 Innodb_data_pending_writes 0 Innodb_data_read 626216960 Innodb_data_reads 36565 Innodb_data_writes 293947 Innodb_data_written 1792826880 Innodb_dblwr_pages_written 43822 Innodb_dblwr_writes 830 Innodb_log_waits 0 Innodb_log_write_requests 492588 Innodb_log_writes 268248 Innodb_os_log_fsyncs 2130 Innodb_os_log_pending_fsyncs 0 Innodb_os_log_pending_writes 0 Innodb_os_log_written 356559872 Innodb_page_size 16384 Innodb_pages_created 5304 Innodb_pages_read 38087 Innodb_pages_written 43822 Innodb_row_lock_current_waits 0 Innodb_row_lock_time 0 Innodb_row_lock_time_avg 0 Innodb_row_lock_time_max 0 Innodb_row_lock_waits 0 Innodb_rows_deleted 28637 Innodb_rows_inserted 306449 Innodb_rows_read 16579740 Innodb_rows_updated 887251 Key_blocks_not_flushed 0 Key_blocks_unused 212928 Key_blocks_used 1415 Key_read_requests 393323 Key_reads 16 Key_write_requests 102461 Key_writes 102439 Last_query_cost 9142769.199000 Max_used_connections 19 Not_flushed_delayed_rows 0 Open_files 24 Open_streams 0 Open_table_definitions 142 Open_tables 146 Opened_files 592 Opened_table_definitions 0 Opened_tables 0 Prepared_stmt_count 0 Qcache_free_blocks 0 Qcache_free_memory 0 Qcache_hits 0 Qcache_inserts 0 Qcache_lowmem_prunes 0 Qcache_not_cached 0 Qcache_queries_in_cache 0 Qcache_total_blocks 0 Queries 1578897 Questions 30 Rpl_status NULL Select_full_join 0 Select_full_range_join 0 Select_range 0 Select_range_check 0 Select_scan 2 Slave_open_temp_tables 0 Slave_retried_transactions 0 Slave_running OFF Slow_launch_threads 0 Slow_queries 0 Sort_merge_passes 0 Sort_range 0 Sort_rows 0 Sort_scan 0 Ssl_accept_renegotiates 0 Ssl_accepts 0 Ssl_callback_cache_hits 0 Ssl_cipher Ssl_cipher_list Ssl_client_connects 0 Ssl_connect_renegotiates 0 Ssl_ctx_verify_depth 0 Ssl_ctx_verify_mode 0 Ssl_default_timeout 0 Ssl_finished_accepts 0 Ssl_finished_connects 0 Ssl_session_cache_hits 0 Ssl_session_cache_misses 0 
Ssl_session_cache_mode NONE Ssl_session_cache_overflows 0 Ssl_session_cache_size 0 Ssl_session_cache_timeouts 0 Ssl_sessions_reused 0 Ssl_used_session_cache_entries 0 Ssl_verify_depth 0 Ssl_verify_mode 0 Ssl_version Table_locks_immediate 1549525 Table_locks_waited 0 Tc_log_max_pages_used 0 Tc_log_page_size 0 Tc_log_page_waits 0 Threads_cached 0 Threads_connected 17 Threads_created 322 Threads_running 2 Uptime 8093 Uptime_since_flush_status 8093 mysql>show variables; Variable_name Value auto_increment_increment 1 auto_increment_offset 1 autocommit ON automatic_sp_privileges ON back_log 50 big_tables OFF binlog_cache_size 32768 binlog_direct_non_transactional_updates OFF binlog_format STATEMENT bulk_insert_buffer_size 8388608 character_set_client utf8 character_set_connection utf8 character_set_database utf8 character_set_filesystem binary character_set_results utf8 character_set_server utf8 character_set_system utf8 collation_connection utf8_general_ci collation_database utf8_general_ci collation_server utf8_general_ci completion_type 0 concurrent_insert 1 connect_timeout 10 date_format %Y-%m-%d datetime_format %Y-%m-%d %H:%i:%s default_week_format 0 delay_key_write ON delayed_insert_limit 100 delayed_insert_timeout 300 delayed_queue_size 1000 div_precision_increment 4 engine_condition_pushdown ON error_count 0 event_scheduler OFF expire_logs_days 0 flush OFF flush_time 0 foreign_key_checks ON ft_boolean_syntax + -><()~*:""&| ft_max_word_len 84 ft_min_word_len 4 ft_query_expansion_limit 20 ft_stopword_file (built-in) general_log OFF group_concat_max_len 1024 have_community_features YES have_compress YES have_crypt YES have_csv YES have_dynamic_loading YES have_geometry YES have_innodb YES have_ndbcluster NO have_openssl DISABLED have_partitioning NO have_query_cache YES have_rtree_keys YES have_ssl DISABLED have_symlink YES hostname localhost.localdomain identity 0 ignore_builtin_innodb OFF init_connect init_file init_slave innodb_adaptive_hash_index ON 
innodb_additional_mem_pool_size 67108864 innodb_autoextend_increment 8 innodb_autoinc_lock_mode 1 innodb_buffer_pool_size 11159994368 innodb_checksums ON innodb_commit_concurrency 0 innodb_concurrency_tickets 500 innodb_data_file_path ibdata1:10M:autoextend innodb_data_home_dir innodb_doublewrite ON innodb_fast_shutdown 1 innodb_file_io_threads 4 innodb_file_per_table ON innodb_flush_log_at_trx_commit 2 innodb_flush_method O_DIRECT innodb_force_recovery 0 innodb_lock_wait_timeout 120 innodb_locks_unsafe_for_binlog ON innodb_log_buffer_size 134217728 innodb_log_file_size 5242880 innodb_log_files_in_group 2 innodb_log_group_home_dir ./ innodb_max_dirty_pages_pct 90 innodb_max_purge_lag 0 innodb_mirrored_log_groups 1 innodb_open_files 300 innodb_rollback_on_timeout OFF innodb_stats_on_metadata ON innodb_support_xa ON innodb_sync_spin_loops 20 innodb_table_locks ON innodb_thread_concurrency 8 innodb_thread_sleep_delay 10000 innodb_use_legacy_cardinality_algorithm ON insert_id 0 interactive_timeout 28800 join_buffer_size 268435456 keep_files_on_create OFF key_buffer_size 268435456 key_cache_age_threshold 300 key_cache_block_size 1024 key_cache_division_limit 100 large_files_support ON large_page_size 0 large_pages OFF last_insert_id 0 lc_time_names en_US license Commercial local_infile ON locked_in_memory OFF log OFF log_bin OFF log_bin_trust_function_creators OFF log_bin_trust_routine_creators OFF log_output FILE log_queries_not_using_indexes OFF log_slave_updates OFF log_slow_queries OFF log_warnings 1 long_query_time 10.000000 low_priority_updates OFF lower_case_file_system OFF lower_case_table_names 1 max_allowed_packet 134217728 max_binlog_cache_size 18446744073709547520 max_binlog_size 1073741824 max_connect_errors 10 max_connections 300 max_delayed_threads 20 max_error_count 64 max_heap_table_size 268435456 max_insert_delayed_threads 20 max_join_size 18446744073709551615 max_length_for_sort_data 1024 max_prepared_stmt_count 16382 max_relay_log_size 0 
max_seeks_for_key 18446744073709551615 max_sort_length 1024 max_sp_recursion_depth 0 max_tmp_tables 32 max_user_connections 0 max_write_lock_count 18446744073709551615 min_examined_row_limit 0 multi_range_count 256 myisam_data_pointer_size 6 myisam_max_sort_file_size 9223372036853727232 myisam_mmap_size 18446744073709551615 myisam_recover_options OFF myisam_repair_threads 1 myisam_sort_buffer_size 8388608 myisam_stats_method nulls_unequal myisam_use_mmap OFF net_buffer_length 16384 net_read_timeout 30 net_retry_count 10 net_write_timeout 60 new OFF old OFF old_alter_table OFF old_passwords OFF open_files_limit 10240 optimizer_prune_level 1 optimizer_search_depth 62 optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on port 3306 preload_buffer_size 32768 profiling OFF profiling_history_size 15 protocol_version 10 pseudo_thread_id 18 query_alloc_block_size 8192 query_cache_limit 1048576 query_cache_min_res_unit 4096 query_cache_size 0 query_cache_type ON query_cache_wlock_invalidate OFF query_prealloc_size 8192 rand_seed1 rand_seed2 range_alloc_block_size 4096 read_buffer_size 67108864 read_only OFF read_rnd_buffer_size 67108864 relay_log relay_log_index relay_log_info_file relay-log.info relay_log_purge ON relay_log_space_limit 0 report_host report_password report_port 3306 report_user rpl_recovery_rank 0 secure_auth OFF secure_file_priv server_id 0 skip_external_locking ON skip_name_resolve ON skip_networking OFF skip_show_database OFF slave_compressed_protocol OFF slave_exec_mode STRICT slave_net_timeout 3600 slave_skip_errors OFF slave_transaction_retries 10 slow_launch_time 2 slow_query_log OFF sort_buffer_size 16777216 sql_auto_is_null ON sql_big_selects ON sql_big_tables OFF sql_buffer_result OFF sql_log_bin ON sql_log_off OFF sql_log_update ON sql_low_priority_updates OFF sql_max_join_size 18446744073709551615 sql_mode sql_notes ON sql_quote_show_create ON sql_safe_updates OFF sql_select_limit 
18446744073709551615 sql_slave_skip_counter sql_warnings OFF ssl_ca ssl_capath ssl_cert ssl_cipher ssl_key storage_engine MyISAM sync_binlog 0 sync_frm ON system_time_zone UTC table_definition_cache 256 table_lock_wait_timeout 50 table_open_cache 512 table_type MyISAM thread_cache_size 0 thread_handling one-thread-per-connection thread_stack 262144 time_format %H:%i:%s time_zone +08:00 timed_mutexes OFF timestamp 1349946061 tmp_table_size 1073741824 transaction_alloc_block_size 8192 transaction_prealloc_size 4096 tx_isolation REPEATABLE-READ unique_checks ON updatable_views_with_limit YES version 5.1.53-enterprise-commercial-pro version_comment MySQL Enterprise Server - Pro Edition (Commercial) version_compile_machine x86_64 version_compile_os unknown-linux-gnu wait_timeout 28800 warning_count 0 mysql> show innodb status\G; *************************** 1. row *************************** Type: InnoDB Name: Status: ===================================== 121011 10:22:13 INNODB MONITOR OUTPUT ===================================== Per second averages calculated from the last 39 seconds ---------- SEMAPHORES ---------- OS WAIT ARRAY INFO: reservation count 3806, signal count 3778 Mutex spin waits 0, rounds 282892, OS waits 2075 RW-shared spins 1969, OS waits 864; RW-excl spins 2336, OS waits 749 ------------ TRANSACTIONS ------------ Trx id counter 0 5303968 Purge done for trx's n:o < 0 5303951 undo n:o < 0 0 History list length 1 LIST OF TRANSACTIONS FOR EACH SESSION: ---TRANSACTION 0 0, not started, process no 30336, OS thread id 1189509440 MySQL thread id 520, query id 1861594 localhost root show innodb status ---TRANSACTION 0 5303967, not started, process no 30336, OS thread id 1188710720 MySQL thread id 526, query id 1861593 127.0.0.1 root ---TRANSACTION 0 5303962, not started, process no 30336, OS thread id 1186314560 MySQL thread id 519, query id 1861555 127.0.0.1 root ---TRANSACTION 0 5303952, not started, process no 30336, OS thread id 1188444480 MySQL thread id 
515, query id 1861567 127.0.0.1 root ---TRANSACTION 0 5303948, not started, process no 30336, OS thread id 1187912000 MySQL thread id 516, query id 1861566 127.0.0.1 root ---TRANSACTION 0 5303937, not started, process no 30336, OS thread id 1190308160 MySQL thread id 511, query id 1861568 127.0.0.1 root ---TRANSACTION 0 0, not started, process no 30336, OS thread id 1090791744 MySQL thread id 18, query id 1596073 172.18.112.84 root ---TRANSACTION 0 5303959, ACTIVE 63 sec, process no 30336, OS thread id 1090525504 fetching rows, thread declared inside InnoDB 500 mysql tables in use 1, locked 0 MySQL thread id 17, query id 1861400 localhost root Sending data select * from ap_statistic Trx read view will not see trx with id >= 0 5303960, sees < 0 5303960 -------- FILE I/O -------- I/O thread 0 state: waiting for i/o request (insert buffer thread) I/O thread 1 state: waiting for i/o request (log thread) I/O thread 2 state: waiting for i/o request (read thread) I/O thread 3 state: waiting for i/o request (write thread) Pending normal aio reads: 0, aio writes: 0, ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0 Pending flushes (fsync) log: 0; buffer pool: 0 63521 OS file reads, 294656 OS file writes, 5641 OS fsyncs 1 pending preads, 0 pending pwrites 149.38 reads/s, 16384 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s ------------------------------------- INSERT BUFFER AND ADAPTIVE HASH INDEX ------------------------------------- Ibuf: size 1, free list len 318, seg size 320, 63593 inserts, 63593 merged recs, 9674 merges Hash table size 22086161, node heap has 607 buffer(s) 0.08 hash searches/s, 0.26 non-hash searches/s --- LOG --- Log sequence number 15 2873617336 Log flushed up to 15 2873617336 Last checkpoint at 15 2873617336 0 pending log writes, 0 pending chkp writes 269102 log i/o's done, 0.00 log i/o's/second ---------------------- BUFFER POOL AND MEMORY ---------------------- Total memory allocated 12452785320; in additional pool allocated 15261440 Dictionary memory 
allocated 789024 Buffer pool size 681152 Free buffers 610013 Database pages 70532 Modified db pages 0 Pending reads 1 Pending writes: LRU 0, flush list 0, single page 0 Pages read 65043, created 5488, written 45924 149.38 reads/s, 0.00 creates/s, 0.00 writes/s Buffer pool hit rate 888 / 1000 -------------- ROW OPERATIONS -------------- 1 queries inside InnoDB, 0 queries in queue 2 read views open inside InnoDB Main thread process no. 30336, id 1185782080, state: waiting for server activity Number of rows inserted 336555, updated 1112311, deleted 28681, read 29200669 0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 8258.58 reads/s ---------------------------- END OF INNODB MONITOR OUTPUT ============================ 1 row in set, 1 warning (0.00 sec) ERROR: No query specified iostat -dx 2 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 2.50 141.50 11.50 4516.00 112.00 2258.00 56.00 30.25 0.95 6.23 5.70 87.25 sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda2 0.00 2.50 141.50 11.50 4516.00 112.00 2258.00 56.00 30.25 0.95 6.23 5.70 87.25 dm-0 0.00 0.00 141.50 14.00 4516.00 112.00 2258.00 56.00 29.76 0.97 6.24 5.62 87.35 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 3.00 0.00 154.50 0.00 4932.00 0.00 2466.00 0.00 31.92 0.93 6.04 6.04 93.25 sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda2 3.00 0.00 154.50 0.00 4932.00 0.00 2466.00 0.00 31.92 0.93 6.04 6.04 93.25 dm-0 0.00 0.00 157.50 0.00 4932.00 0.00 2466.00 0.00 31.31 0.95 6.04 5.93 93.40 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 3.00 1.50 150.50 1.50 4804.00 24.00 2402.00 12.00 31.76 0.94 6.15 6.14 93.40 sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda2 
3.00 1.50 150.50 1.50 4804.00 24.00 2402.00 12.00 31.76 0.94 6.15 6.14 93.40 dm-0 0.00 0.00 153.50 3.00 4804.00 24.00 2402.00 12.00 30.85 0.95 6.08 5.97 93.50 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 vmstat 2 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------ r b swpd free buff cache si so bi bo in cs us sy id wa st 1 1 16 27358488 18152 115500 0 0 2558 0 1193 8634 14 1 73 12 0 1 1 16 27346840 18168 115500 0 0 2356 12 1174 9129 14 2 73 12 0 2 1 16 27334320 18172 115504 0 0 2522 0 1184 8918 14 1 73 12 0 1 1 16 27321812 18180 115496 0 0 2456 12 1183 7357 13 1 74 12 0 1 1 16 27310132 18180 115504 0 0 2344 12 1174 6990 12 2 74 12 0 1 1 16 27297608 18184 115508 0 0 2506 0 1182 6203 12 2 74 11 0 1 1 16 27285444 18200 115504 0 0 2420 90 1187 9667 13 2 73 12 0 1 1 16 27277640 18200 115508 0 0 2248 0 1165 8103 19 2 69 11 0 2 1 16 27265380 18204 115504 0 0 2498 0 1179 5690 13 1 74 12 0 4 1 16 27252972 18216 115508 0 0 2434 12 1178 6096 14 1 74 12 0 1 1 16 27241032 18232 115496 0 0 2520 0 1181 9252 12 1 75 11 0 2 1 16 27229136 18240 115508 0 0 2468 10 1178 7116 13 1 74 12 0 1 0 16 27630612 18248 115508 0 0 1536 20 1121 4082 13 1 79 7 0 mpstat -P ALL 2 02:48:57 PM CPU %user %nice %system %iowait %irq %soft %idle intr/s 02:48:59 PM all 13.69 0.00 1.31 11.56 0.00 0.62 72.81 1190.95 02:48:59 PM 0 33.67 0.00 0.50 0.00 0.00 0.00 65.83 1006.03 02:48:59 PM 1 6.53 0.00 0.50 92.96 0.50 0.50 0.00 160.80 02:48:59 PM 2 1.01 0.00 0.50 0.00 0.00 0.00 98.49 0.00 02:48:59 PM 3 0.00 0.00 0.00 0.00 0.00 0.00 100.50 3.52 02:48:59 PM 4 35.68 0.00 1.01 0.00 0.00 1.01 62.81 13.57 02:48:59 PM 5 4.52 0.00 0.00 0.00 0.00 0.00 96.48 0.50 02:48:59 PM 6 3.52 0.00 0.00 0.00 0.00 0.00 96.98 0.50 02:48:59 PM 7 25.13 0.00 7.54 0.00 0.00 4.02 63.82 6.03 02:48:59 PM CPU %user %nice %system %iowait %irq %soft %idle intr/s 02:49:01 PM all 12.50 0.00 1.19 11.69 0.00 0.56 74.06 1177.11 02:49:01 PM 0 22.89 0.00 1.49 0.00 0.00 1.49 74.13 995.52 02:49:01 PM 
1 5.97 0.00 0.50 92.54 0.00 0.50 0.00 159.70 02:49:01 PM 2 0.50 0.00 0.50 0.50 0.00 0.00 98.01 1.99 02:49:01 PM 3 0.00 0.00 0.00 0.00 0.00 0.00 99.50 2.49 02:49:01 PM 4 45.77 0.00 1.49 0.00 0.00 0.50 51.24 11.94 02:49:01 PM 5 0.00 0.00 0.00 0.00 0.00 0.00 99.50 0.50 02:49:01 PM 6 0.50 0.00 0.00 0.00 0.00 0.00 99.00 0.50 02:49:01 PM 7 23.38 0.00 5.47 0.00 0.00 1.99 68.16 4.48 02:49:01 PM CPU %user %nice %system %iowait %irq %soft %idle intr/s 02:49:03 PM all 13.05 0.00 1.12 11.62 0.00 0.50 73.70 1179.00 02:49:03 PM 0 43.50 0.00 0.50 0.00 0.00 0.00 56.00 1000.50 02:49:03 PM 1 6.50 0.00 1.00 93.00 0.00 0.50 0.00 157.00 02:49:03 PM 2 1.50 0.00 0.50 0.00 0.00 0.00 98.50 0.00 02:49:03 PM 3 0.00 0.00 0.00 0.00 0.00 0.00 100.00 2.50 02:49:03 PM 4 32.50 0.00 1.50 0.00 0.00 1.00 65.50 13.00 02:49:03 PM 5 11.00 0.00 4.00 0.00 0.00 1.50 83.50 0.50 02:49:03 PM 6 0.00 0.00 0.00 0.00 0.00 0.00 100.00 0.00 02:49:03 PM 7 10.50 0.00 2.00 0.00 0.00 1.00 87.00 5.50 |
Service Broker & AlwaysOn Availability Groups: Odd Transmission Queue Behavior Posted: 19 Jul 2013 10:07 AM PDT I have also posted this question on my blog: http://www.sqldiablo.com/2012/04/15/service-broker-alwayson-availability-groups-odd-transmission-queue-behavior/. I've been working on a project over the past several months that will utilize Service Broker and AlwaysOn Availability Groups to meet some of the HA and DR goals of the company I work for (more info: http://www.sqldiablo.com/service-broker-replication/ ). Just recently, I was able to implement the full solution in my development lab and point an instance of our website at it. While we were working out some kinks in our database and website to get the two working well with my Service Broker Replication project, I began noticing some odd behavior in Service Broker when it's used with AlwaysOn Availability Groups, and I wanted to blog about it in an attempt to see if anyone else has seen this issue and might have an idea how to address it. The Setup: I have a Hyper-V host running 6 Windows Server 2008 R2 VMs (BTDevSQLVM1-BTDevSQLVM6). The VMs are grouped into 2-node WSFCs with node and file share quorum. I've installed standalone SQL 2012 Developer Edition instances on each of the VMs, and created an Availability Group with a listener on each cluster (SBReplDistrib, SBRepl1, & SBRepl2). For the purpose of this blog post, I'll be focusing on the communication between SBRepl1 and SBReplDistrib. The illustration below shows the Service Broker objects for each side of the conversation: (I'm new and can't post images yet, so please see my blog at the URL above for the image) The Service Broker endpoints and routes are set up per this MSDN article. The SBRepl_Receive route in MSDB is for the local server's service (//SBReplDistrib/SBRepl on SBReplDistrib, and //SBRepl1/SBRepl on SBRepl1), and points to the local instance. 
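For context, the endpoint and local receive route described above might look roughly like the following sketch. The endpoint name and authentication options are assumptions on my part; only the route name, service names, and port appear in the post.

```sql
-- Hedged sketch of the Service Broker endpoint (endpoint name and
-- AUTHENTICATION option are assumptions, not given in the post).
CREATE ENDPOINT SBReplEndpoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = 4022)
    FOR SERVICE_BROKER (AUTHENTICATION = WINDOWS);
GO

-- The SBRepl_Receive route in msdb, pointing the local service
-- back at the local instance (shown here for SBRepl1).
USE msdb;
GO
CREATE ROUTE SBRepl_Receive
    WITH SERVICE_NAME = N'//SBRepl1/SBRepl',
         ADDRESS      = N'LOCAL';
```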
The SBRepl_Send route on SBRepl1 maps service //SBReplDistrib/SBRepl to TCP://SBReplDistrib:4022, and the SBRepl_Send_SBRepl1 route on SBReplDistrib is a similar mapping for the service on SBRepl1. The Expected Behavior: My understanding of how Service Broker handles message sending and receiving is as follows (this is pretty simplified; there is a lot more detail about this process in Klaus Aschenbrenner's book "Pro SQL Server 2008 Service Broker"): - The initiator app creates a message (in this case, well-formed XML).
- If there is an existing dialog conversation between the initiator service and the target service that is in the conversing status, the app can simply send the message on the existing conversation handle. Otherwise, the initiator app should begin a dialog conversation between the initiator service and the target service and send the message on that conversation handle.
- The message is placed in the sys.transmission_queue system table and Service Broker begins making attempts to deliver the message to the target service.
- Service Broker looks for an appropriate route and remote service binding and uses them to determine the address to connect to in order to deliver the message.
- Service Broker opens a connection to the target, authenticates, and delivers the message to the target service broker.
- The target Service Broker attempts to classify the message and determine what local service will handle the message (it uses route data in the msdb database for this).
- The target Service Broker delivers the message to the target service's queue.
- Once the message is successfully delivered to the target queue, the target Service Broker looks for route information back to the initiator and attempts to deliver an acknowledgement that the message was received.
- The initiator's Service Broker receives the acknowledgement and uses routing information in MSDB to determine what local service the acknowledgement is for.
- Upon successful routing of the acknowledgement to the initiating service, the message is then removed from the sys.transmission_queue system table.
- If the initiator does not receive an acknowledgement that the message was received, it will periodically retry delivering the message to the target. If the target has already received the message, it will simply drop any additional delivery retries and send acknowledgements for them.
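The initiator side of the steps above can be sketched in T-SQL along these lines. The contract and message type names here are assumptions; the post does not give them.

```sql
-- Step 1-3: begin a dialog from the initiator service to the target
-- service and send a message; it sits in sys.transmission_queue until
-- the acknowledgement in steps 8-10 arrives.
DECLARE @handle UNIQUEIDENTIFIER;

BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE [//SBRepl1/SBRepl]
    TO SERVICE N'//SBReplDistrib/SBRepl'
    ON CONTRACT [SBReplContract]         -- assumed contract name
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @handle
    MESSAGE TYPE [SBReplMessage]         -- assumed message type name
    (N'<SBRepl>example payload</SBRepl>');
```

An app reusing an existing conversation would skip BEGIN DIALOG and simply SEND on the saved conversation handle.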
The Odd Behavior: Step 11 is where I am seeing some very odd behavior with Service Broker and AlwaysOn. I see the message getting delivered to the target and processed successfully, and I also see the acknowledgement getting sent back to the initiator and received. However, the message remains in sys.transmission_queue as though no acknowledgement was received. To make things even more strange, Service Broker isn't attempting to resend the message like I would expect it to if the acknowledgement wasn't received. Instead, the message simply remains in the sys.transmission_queue, and as new messages are sent, they get delivered, acknowledged, and they too remain in the sys.transmission_queue. It seems to me like Service Broker is getting the acknowledgements and therefore stops trying to deliver the message, but doesn't remove it from the sys.transmission_queue for some reason. The transmission_status for these messages remains blank, which should indicate that Service Broker hasn't attempted to deliver them yet. I checked the retention setting on the service queue, and it is set to off, but that should only impact the service queue and not the sys.transmission_queue. I have also traced both sides of the conversation using SQL Profiler, and I am able to see the message getting sent and the acknowledgement being sent back to the initiator and getting received (see XML trace data at the end of this post). One odd thing did jump out at me in the traces, though. I noticed that both sides seemed to be a bit confused about the TCP connections, because messages are sent from the IP address of the node itself while the service routes and the messages themselves point to the name/IP of the AG listener. This confusion appears to be causing each side to close the existing connection between the two services and create a new one in order to deliver a message or acknowledgement. 
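The state described above can be inspected with the standard catalog views; nothing project-specific is assumed here.

```sql
-- Messages still awaiting acknowledgement on the initiator; a blank
-- transmission_status means no delivery attempt has failed yet.
SELECT conversation_handle, to_service_name, enqueue_time, transmission_status
FROM sys.transmission_queue;

-- Conversation state on either side (e.g. CONVERSING, DISCONNECTED_INBOUND).
SELECT conversation_handle, state_desc, far_service, is_initiator
FROM sys.conversation_endpoints;
```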
I'm not sure if this is normal, or if it has anything to do with why the acknowledgements aren't being handled correctly, but it was the only thing I could see that could possibly explain the odd behavior. The Plea for Help: At this time, I don't have a solution to this message retention issue other than to manually end the conversation with cleanup on both sides, and that's not really something I want to do. If you have any ideas as to why this might be happening or what I can do about it, please leave me a comment and let me know. If there is any additional information that you would like me to provide about my setup or about the issue, please let me know in the comments as well. I will post a follow-up to this post if/when I find a solution to this issue. The Trace Data: Please see my blog post (the URL is at the beginning of the question). |