[how to] Why is there such a huge performance difference in these two join statements?
- Why is there such a huge performance difference in these two join statements?
- How do I find an invalid utf8 character "somewhere" in my MySQL/trac database
- Adding slave to existing master-master config in mysql throws Foreign key constraint failed error
- Transform XPath map into XML document using relational data
- Database Install Issue
- Turning on TF610 in SSIS
- Should all queries where you expect a specific order, include an ORDER BY clause?
- How to JOIN two tables to get missing rows in the second table
- Create a constant in Postgresql [migrated]
- "Site Offline" MySQL server failing to start and stop
- How to recover data from corrupted SQL Server database?
- Idle connection plus schema-modifying query causing locked database
- ibdata1 grows exponentially when innodb_file_per_table is configured
- Bandwidth comparison between log shipping and transactional replication
- How to reinsert corrected rows from the conflict table?
- Automated SQL backup in a timely fashion, & clean up the database for the backed-up data
- MySQL slave stuck in "Reading event from the relay log"?
- Which database could handle storage of billions/trillions of records?
- Refactoring/normalization - but almost empty table
- Using MySQL InnoDB as an Archive
- ORA-16000 when trying to perform select on read only access ORACLE database
- MySQL Table not repairing
- Group By days interval (aging type)
- How to run a cold backup with Linux/tar without shutting down MySQL slave?
- MySQL InnoDB write operations are extremely slow
- PostgreSQL: Unable to run repmgr cloned database
- Which database is best for deeply embedded database or through C DLL?
- What can cause statistics to get out of line?
Why is there such a huge performance difference in these two join statements? Posted: 05 Apr 2013 06:00 PM PDT I have a query which I first wrote as (the query has been simplified a bit): … This took 8 seconds. If I change it to the following, it runs in under 1 second: … I'd like to understand why there's such a huge difference in performance. Not only is the performance worse, but the first query also seems to ignore my conditions: it counts records regardless of what p.deleted or p.statusid is. What am I not understanding about how joins work? This is being run on SQL Server 2012. |
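The original queries did not survive this digest, but the symptoms described (conditions apparently ignored, plus a large slowdown) match the classic difference between putting predicates in a LEFT JOIN's ON clause versus the WHERE clause. A sketch with hypothetical names (Customers, Posts); only p.deleted and p.statusid come from the question:

    -- Variant A: predicates in the ON clause of a LEFT JOIN only restrict
    -- which Posts rows match; every customer still comes back, so COUNT(*)
    -- appears to "ignore" the conditions.
    SELECT c.CustomerId, COUNT(*) AS Total
    FROM Customers AS c
    LEFT JOIN Posts AS p
           ON p.CustomerId = c.CustomerId
          AND p.deleted = 0
          AND p.statusid = 1
    GROUP BY c.CustomerId;

    -- Variant B: the same predicates in WHERE filter the joined result
    -- (effectively an INNER JOIN), giving different counts and often a
    -- very different execution plan.
    SELECT c.CustomerId, COUNT(*) AS Total
    FROM Customers AS c
    LEFT JOIN Posts AS p ON p.CustomerId = c.CustomerId
    WHERE p.deleted = 0
      AND p.statusid = 1
    GROUP BY c.CustomerId;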
How do I find an invalid utf8 character "somewhere" in my MySQL/trac database Posted: 05 Apr 2013 03:47 PM PDT I have an installation of trac, which uses MySQL. It has been upgraded so many times and moved between servers that chances are the MySQL character sets were not always set correctly or consistently over the years. Currently all of them are utf8. When attempting to upgrade the data using "trac-admin wiki update", I'm getting an error message that a byte ("UnicodeDecodeError: 'utf8' codec can't decode byte 0xa0 in position 4274: invalid start byte") is not valid Unicode. Unfortunately trac-admin gives me no hint where (table/row/column) to look for that byte sequence, or what I could do to fix it. My question is not about trac/trac-admin, however; it's about the database. How would you go about finding, "somewhere" in the database, the offending bytes, and replacing them with something that is at least valid utf8? I have attempted to mysqldump the database and re-import it, but MySQL gives no indication that anything might be wrong; the invalid bytes get re-imported. Ideas? |
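One SQL-only approach worth trying (a sketch, with hypothetical table/column names): round-trip each value through the binary character set. MySQL re-validates the bytes on the way back to utf8, altering values that contain invalid sequences, so a simple comparison flags the offending rows:

    -- Rows where the round trip changes the value contain bytes that are
    -- not valid utf8 (MySQL strips/truncates them during the conversion).
    SELECT id
    FROM wiki_page
    WHERE body <> CONVERT(CONVERT(body USING binary) USING utf8);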
Adding slave to existing master-master config in mysql throws Foreign key constraint failed error Posted: 05 Apr 2013 07:22 PM PDT We have two MySQL servers running in a master-master configuration. Now we have to add a slave to the existing configuration, but upon adding the third DB server and starting the slave on it, it throws a foreign key constraint error. We have tried taking a dump with mysqldump as well as with Percona XtraBackup. EDIT 1: mysqldump command: … We have also tried it this way (Percona XtraBackup): … In both cases, upon getting the foreign key error, we tried to dump and restore the individual tables referenced by the foreign keys manually from the master to the slave. Upon doing this, the replication starts and seems to work normally, with 0 seconds behind master, for a few minutes, after which another foreign key error shows up, stopping the replication. EDIT 2:
MySQL version 5.5.30. |
Transform XPath map into XML document using relational data Posted: 05 Apr 2013 04:37 PM PDT Background: Most modern databases have XML functions that can be used to extract data in an XML format. I want to avoid the task of manually calling XML functions to extract the data. This problem involves devising a generic solution to create XML documents based on mapping database tables (and …). Problem: An XPath map codifies associations between an XML document and relational data as follows: … Where a … Where an … Calling a function using the XPath map would produce the following XML document: … In this case, James Jameson does not have an account, and so the corresponding XML element (…) is empty. This is a difficult problem and a complete solution is not necessary. A solution that handles 80% of simple tables mapped to simple XML elements and attributes would suffice. Question: What algorithm would return an XML document based on such a generic XPath map? The algorithm must transform the structure defined by the XPath map into an XML document with the content from the relations defined in the XPath map. Are there any technologies, or open source implementations, that already perform such a task? Related Links: Links that are somewhat related to this idea. Articles and White Papers. Articles: …
Commercial Software. Similar solutions: … |
Database Install Issue Posted: 05 Apr 2013 02:40 PM PDT I was navigated over here from Stack Overflow; hopefully you can help me out. So I've got a dilemma. Our company has bought a volume license for Microsoft SQL Server 2012 Standard Edition, under the per-core license scheme. All of that went without any issues; we even installed it without any problems. The issue is coming from Microsoft, though. When I configured our Volume License Account they said: …
So from what I've gathered, a Microsoft SQL Server 2012 Standard Edition per-core license by default is configured as dual-core. So I understand that if I have a quad-core machine I have to purchase it twice. What I don't understand is why I would need to install it two times; I can't find any documentation on this. That process doesn't seem correct. Is there a way to test how many cores SQL Server is running on? Any assistance would be terrific. Things I've tried: …
Somehow I feel this issue shouldn't be nearly as complicated as it has become. Those attempts haven't brought me any closer to my goal; I get the notion that I should only have to install it once. But with what that Microsoft licensing technician said, I suddenly have a cloud of doubts over my head. Thanks again. Update: Within our Microsoft Volume License Account, this is all it provides with our download: …
Which I feel is adding to the confusion. We have it installed on the server, but I'm not entirely sure how to check that it is running on all cores. The receipt shows we've purchased multiple licenses. If I understand the answer posted, it should be built into our license file from the installation. Is there a way to check that? |
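To the "is there a way to test how many cores SQL is running on?" part: the DMVs below are standard, though whether they settle the licensing question is a separate matter.

    -- Logical CPUs the instance can see:
    SELECT cpu_count, hyperthread_ratio FROM sys.dm_os_sys_info;

    -- Schedulers actually usable by queries; VISIBLE OFFLINE ones are
    -- excluded, e.g. when the edition/license caps the core count:
    SELECT status, COUNT(*) AS schedulers
    FROM sys.dm_os_schedulers
    WHERE scheduler_id < 255
    GROUP BY status;

The SQL Server error log also prints the detected sockets/cores and any licensing cap at startup, which is usually the quickest check.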
Turning on TF610 in SSIS Posted: 05 Apr 2013 11:44 AM PDT I have a simple SSIS package that loads a dummy file containing 100,000 rows. Each row is around 4k long: one int column and one long text column. I'm trying to test TF610 when loading this data into a table with a clustered index. In my SSIS package, the Control Flow has an Execute SQL Task to enable TF610, then on success it goes to my Data Flow Task, which loads the flat file into the table. Both the Execute SQL Task and the OLE DB Destination use the same connection. If I start a profile while running the SSIS package and watch the commands, I can see DBCC TRACEON(610) executed, and then the INSERT BULK operations begin to fire. They are both using the same SPID, so I'm assuming it's the same session. When I check the log record length, though, the insert is NOT being minimally logged. If I enable TF610 globally and run the same SSIS package, the transaction is minimally logged. I must be doing something wrong when turning on TF610 in my SSIS package, but I can't figure out what... |
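For reference, the scoping behavior that usually explains this (the flag semantics are documented; the SSIS connection detail is the thing to verify):

    DBCC TRACEON(610);        -- session-scoped: affects only this connection
    DBCC TRACEON(610, -1);    -- global: affects every session
    DBCC TRACESTATUS(610);    -- shows whether the flag is on, and its scope

An SSIS connection manager does not guarantee one session: unless RetainSameConnection is set to True, the Execute SQL Task and the OLE DB Destination can each draw their own connection from the pool, so a session-scoped flag never applies to the session running the INSERT BULK.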
Should all queries where you expect a specific order, include an ORDER BY clause? Posted: 05 Apr 2013 12:01 PM PDT In order to better understand the SQL Server query processor, I've been thinking about how the ORDER BY clause works. It appears SQL Server will provide results in exactly the same order for any given set of data that remains the same and never, ever changes. Once you introduce any kind of uncertainty, such as developers changing something, the order of results can no longer be expected to be the same. Simply seeing the results in the same order each time you execute the query is not a guarantee. Try this: … The results: … As you see, adding a simple index on the fields selected in the query alters the order of the results. From here on, I'm adding an ORDER BY clause to every query where I expect a specific order. |
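A minimal T-SQL illustration of the point (hypothetical table), runnable end to end:

    CREATE TABLE dbo.OrderDemo (Id INT IDENTITY PRIMARY KEY, Name VARCHAR(50));
    INSERT INTO dbo.OrderDemo (Name) VALUES ('banana'), ('apple'), ('cherry');

    SELECT Name FROM dbo.OrderDemo;                 -- order is whatever the scan produces

    CREATE INDEX IX_OrderDemo_Name ON dbo.OrderDemo (Name);
    SELECT Name FROM dbo.OrderDemo;                 -- may now come back sorted by Name

    SELECT Name FROM dbo.OrderDemo ORDER BY Name;   -- the only guaranteed order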
How to JOIN two tables to get missing rows in the second table Posted: 05 Apr 2013 02:27 PM PDT In a simple voting system, to get the list of elections a user has voted in, the following JOIN is used: … But how do I get the list of elections a user has NOT voted in? |
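The standard pattern for "rows in A with no match in B" is an anti-join: a LEFT JOIN that keeps only the unmatched rows. A sketch with assumed names (elections, votes, user_id):

    SELECT e.id, e.title
    FROM elections AS e
    LEFT JOIN votes AS v
           ON v.election_id = e.id
          AND v.user_id = 123          -- the user in question
    WHERE v.election_id IS NULL;       -- keep only elections with no vote row

Note the user filter must live in the ON clause; moving it to WHERE would discard exactly the unmatched rows the query is trying to find.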
Create a constant in Postgresql [migrated] Posted: 05 Apr 2013 12:20 PM PDT Suppose that I have this query: … I would like to do this: … It gives me a syntax error on the first row; can you help me fix it? |
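PostgreSQL has no CREATE CONSTANT statement; two common substitutes, with assumed names and values:

    -- 1) A CTE that plays the role of a constant within one query:
    WITH const AS (SELECT 500 AS max_size)
    SELECT t.*
    FROM my_table AS t, const
    WHERE t.size < const.max_size;

    -- 2) An IMMUTABLE function, which the planner folds like a literal:
    CREATE FUNCTION max_size() RETURNS integer AS
    $$ SELECT 500 $$ LANGUAGE sql IMMUTABLE;

    SELECT * FROM my_table WHERE size < max_size();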
"Site Offline" MySQL server failing to start and stop Posted: 05 Apr 2013 04:55 PM PDT I'm hosting a few sites on linode and have encountered a strange problem. For the last two years the site has been running perfectly fine, and now randomly all the websites on the server go to the "site offline check the settings.php" page. No changes have been made to the website at all recently. when i try go do mysqld stop it says busy for a while then finally says failed, while doing start results in an instant fail. Many of the threads I googled suggest the hostname is off in the settings.php but since the website has been up for two years I don't think this can be the case. I haven't had to troubleshoot mySQL before, but if i remember correctly the log file I should be looking at is Hostname.err. In there, it has a large chain of errors concerning InnoDB. The one listed as fatal error says: InnoDB: Fatal error: cannot allocate memory for the buffer pool I would appreciate any suggestions, and if there are log files that would help let me know. Edit: Requested information. 1) When I examine the CNF, it appears that all lines involving innoDB are commented with #. This means that it has always been this way as I have not modified it. 2)mysql Ver 14.14 Distrib 5.5.14, for Linux (i686) using readline 5.1 3) It looks like there is a giant jump in the IO rate every time I try to start the database / before the websites die. How do i go about clearing those cache tables if they have become too unmanageable and is there any risk of losing anything? Third Edit (Now with datadir): How much RAM is on the VM ? 512MB How much space does datadir have ? Size: 20G Used: 19G Available 0 Use Percent: 100% What is the size of ibdata1 ( ls -lh /var/lib/mysql/ibdata1)? 114M FYI: If anyone looks here with the same problem in the future, My issue was the the binary log files had consumed my entire disk space. See here http://systembash.com/content/mysql-binary-log-file-size-huge/ |
How to recover data from corrupted SQL Server database? Posted: 05 Apr 2013 01:00 PM PDT We had several power outages, and the server rebooted a couple of times, which seems to have caused issues with one of the databases. We tried detaching and attaching the database again, but it looks like the database is corrupted and we're getting Msg 5172, Level 16, State 15, Line 1, claiming that the database header is invalid. Is there anything we can do to repair the database or extract data from the MDF file? We do have a backup, but it's about 2 weeks old and doesn't contain all the data. |
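If no cleaner backup exists, the usual last resort is an emergency-mode repair; a sketch, assuming the database is named YourDb. REPAIR_ALLOW_DATA_LOSS can discard pages, so copy the MDF/LDF files somewhere safe first, and note that with a damaged header the database may not reach EMERGENCY state at all:

    ALTER DATABASE YourDb SET EMERGENCY;
    ALTER DATABASE YourDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DBCC CHECKDB (YourDb, REPAIR_ALLOW_DATA_LOSS);
    ALTER DATABASE YourDb SET MULTI_USER;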
Idle connection plus schema-modifying query causing locked database Posted: 05 Apr 2013 05:26 PM PDT As part of our automated deployment process for a web app running on a LAMP stack, we drop all our triggers and stored procedures and recreate them from source control. It turns out there was a hidden danger to this approach that we hadn't thought about. A few days ago we managed to end up with the database for (the staging version of) our web app stuck in a horribly hung state after the following sequence of events:
What's interesting is that this scenario wasn't caused by any kind of deadlock; it was caused by a sleeping connection implicitly holding some kind of lock that prevented the schema-modifying query from completing. We've talked the problem over in the office, and there are a couple of hypothetical solutions we saw:
We're not sure if either of the first two solutions we considered is even possible in MySQL, though, or whether we're missing a better solution (we're developers, not DBAs, and this is outside our comfort zone). What would you recommend? |
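On the MySQL side, both ideas are expressible; a sketch (the connection id and trigger name are hypothetical):

    -- Bound how long DDL waits on metadata locks (MySQL 5.5+), so a deploy
    -- fails fast instead of queueing behind a sleeping connection:
    SET SESSION lock_wait_timeout = 30;
    DROP TRIGGER IF EXISTS my_trigger;

    -- Or find and kill the idle connection that holds the lock:
    SHOW FULL PROCESSLIST;
    KILL 12345;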
ibdata1 grows exponentially when innodb_file_per_table is configured Posted: 05 Apr 2013 12:20 PM PDT I have installed a MySQL cluster with InnoDB (innodb_file_per_table enabled subsequently), but since I switched to innodb_file_per_table, the file ibdata1 keeps growing (2 GB a month). Is my … |
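Worth noting: ibdata1 never shrinks, and tables created before the setting was enabled still live inside it. A sketch for checking the setting and moving a table out (names assumed):

    SHOW VARIABLES LIKE 'innodb_file_per_table';  -- confirm it is ON
    ALTER TABLE mydb.mytable ENGINE=InnoDB;       -- rebuild into its own .ibd file

Reclaiming space inside ibdata1 itself requires the full dump / delete ibdata1 / reimport procedure; the ALTER alone only stops further growth for that table.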
Bandwidth comparison between log shipping and transactional replication Posted: 05 Apr 2013 04:23 PM PDT Which technique uses more network bandwidth:
Can someone share any benchmarks for this? What would be the memory and I/O impact on the primary server with each technique? Thanks, Piyush Patel |
How to reinsert corrected rows from the conflict table? Posted: 05 Apr 2013 04:23 PM PDT I have bidirectional merge replication. I was getting constraint failures because the primary key was just an integer, so I changed the primary key to the old primary key plus a location identifier. The problem is how to reinsert the old rows from the conflict table (which I can correct manually in MSmerge_conflict_) into the publishers and subscribers. Can you help me, please? Sorry for any mistakes; I'm not a native English speaker. |
Automated SQL backup in a timely fashion, & clean up the database for the backed-up data Posted: 05 Apr 2013 12:32 PM PDT I need to back up a SQL Server database (historian) on a schedule, and then clean up the database by removing the backed-up data. I am using MS SQL 2008 (R2) on a Windows XP machine. The biggest issue is the very limited hard disk space: the database is limited to a maximum of 3 GB! In terms of overall performance the PC is really slow, and unfortunately I do not have the option to change that. So I could consider backing up overnight, when the data flow is expected to be lower. The intention is to back up the data every two weeks and have it stored in a dedicated directory (e.g. c:\ ), from which an operator can move the backup to another machine. Given the limited space, I could consider some housekeeping by removing the backed-up data. What is more important is the ability to merge the regular backups into an external database, so perhaps a typical SQL backup-and-restore routine could be an option. I would appreciate your kind advice regarding this matter. Thank you. |
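A minimal T-SQL sketch of the backup-then-purge cycle (the path, table, and column names are assumed; schedule it with SQL Server Agent, or with Windows Task Scheduler plus sqlcmd if this is an Express edition without Agent):

    BACKUP DATABASE historian
    TO DISK = 'C:\backups\historian_full.bak'
    WITH INIT, CHECKSUM;   -- INIT overwrites the previous file to save space

    -- After verifying the backup, purge old rows in small batches so the
    -- transaction log stays small on a 3 GB-capped database:
    DELETE TOP (10000) FROM dbo.history
    WHERE logged_at < DATEADD(DAY, -14, GETDATE());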
MySQL slave stuck in "Reading event from the relay log"? Posted: 05 Apr 2013 12:09 PM PDT
My problem is similar to this question. It looks like a bug, except that no one has mentioned that version 5.5.28 is affected. Here is the additional information: mysql> show slave status\G … mysql> show engine innodb status; … mysqlbinlog --start-position=226591944 mysql-bin.006392 … The command that the user ran on the master: mysql> show create table v3_cam_ip\G … mysql> show keys from v3_cam_ip\G … What I have done on one of the two slaves:
What should I do on the remaining slave? gdb --batch --quiet -ex 'set pagination off' -ex 'thread apply all bt full' -ex 'quit' -p $(pidof mysqld) The full backtrace: http://fpaste.org/pXvT/ |
Which database could handle storage of billions/trillions of records? Posted: 05 Apr 2013 08:30 PM PDT We are looking at developing a tool to capture and analyze netflow data, of which we gather tremendous amounts. Each day we capture about ~1.4 billion flow records, which would look like this in JSON format: … We would like to be able to do fast searches (less than 10 seconds) on the data set, most likely over narrow slices of time (10-30 minute intervals). We also want to index the majority of the data points so we can do searches on each of them quickly, and we would like to have an up-to-date view of the data when searches are executed. It would be great to stay in the open-source world, but we are not opposed to looking at proprietary solutions for this project. The idea is to keep approximately one month of data, which would be ~43.2 billion records. A rough estimate of about 480 bytes per record would equate to ~18.7 terabytes of data in a month, and maybe three times that with indexes. Eventually we would like to grow the capacity of this system to store trillions of records. We have (very basically) evaluated Couchbase, Cassandra, and MongoDB so far as possible candidates for this project, but each poses its own challenges. With Couchbase the indexing is done at intervals and not during insertion of the data, so the views are not up to date; Cassandra's secondary indexes are not very efficient at returning results, as they typically require scanning the entire cluster; and MongoDB looks promising but appears to be far more difficult to scale, as it is master/slave/sharded. Some other candidates we plan to evaluate are Elasticsearch, MySQL (not sure if this is even applicable), and a few column-oriented relational databases. Any suggestions or real-world experience would be appreciated. |
Refactoring/normalization - but almost empty table Posted: 05 Apr 2013 11:54 AM PDT I normalized a legacy DB into this structure: … But I'm not sure if it is correctly normalized. I don't feel very comfortable with the almost-empty table. The requirements:
Not really important requirements
Btw. the diagram was created using http://cakeapp.com. |
Using MySQL InnoDB as an Archive Posted: 05 Apr 2013 01:59 PM PDT My site has a main MySQL InnoDB table that it does most of its work on. New rows get inserted at a rate of 1 million per week, and rows older than a week get moved over to an archive table on a daily basis. These archived rows are processed once a week for things like finding trends. The archive table consequently grows by 1 million rows every week, and querying it can get really slow. Is MySQL suited to archiving data, or is my strategy badly flawed? Please advise; thank you! |
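MySQL can cope; the painful part is the flat, ever-growing table. One commonly used mitigation (a sketch with an assumed schema; note MySQL requires the partitioning column to be part of every unique key) is to range-partition the archive by date, so the weekly trend job scans only recent partitions and old ones can be dropped instantly:

    ALTER TABLE archive
    PARTITION BY RANGE (TO_DAYS(created_at)) (
        PARTITION p2013w14 VALUES LESS THAN (TO_DAYS('2013-04-08')),
        PARTITION p2013w15 VALUES LESS THAN (TO_DAYS('2013-04-15')),
        PARTITION pmax     VALUES LESS THAN MAXVALUE
    );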
ORA-16000 when trying to perform select on read only access ORACLE database Posted: 05 Apr 2013 12:59 PM PDT My application's SQL encounters ORA-16000 when trying to access a read-only Oracle database. This is the query that involves XMLTYPE (INTERFACE_CONTENT is a CLOB column): … I also do a lot of EXTRACTVALUE() calls on an XML-typed field. The SQL works perfectly if the database is not read-only (i.e., read-write). My question is: what is the issue here? Is it related to some missing privileges/grants? |
MySQL Table not repairing Posted: 05 Apr 2013 07:59 PM PDT Table info: … When I do mysqlcheck -r --all-databases, it gets hung on that table even if you let it sit all day. Is there another way to fix/repair/recover that table? Should I use myisamchk? I saw something like: … My config (on a 16 GB RAM box): … Could this have happened because of a crashed table from doing killall -9 mysqld because it would not shut down and restart? EDIT: Does this mean that it is now fixed? If so, how do I move it back? (This was done on a different server.) Is there a way to maybe bring MySQL down on the main server and run a command to fix all the files? |
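When mysqlcheck hangs, the usual fallbacks are running myisamchk offline against the .MYI file with mysqld stopped, or rebuilding the indexes from the table definition; a sketch with assumed names (take file-level copies of the .frm/.MYD/.MYI first):

    REPAIR TABLE mydb.mytable USE_FRM;  -- MyISAM only: rebuild indexes from the .frm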
Group By days interval (aging type) Posted: 05 Apr 2013 10:59 AM PDT I would like to have MySQL group by a days interval, for example grouping by every 20 days from the current day: 1-20, 21-40, 41-60 and so on, up to, say, 120 days. The user can choose the interval in days and the maximum number of days. |
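A sketch with assumed table/column names: integer-divide the age in days by the interval to form buckets, then label them. @interval and @max stand in for the user's choices:

    SET @interval := 20, @max := 120;

    SELECT CONCAT(b.bucket * @interval + 1, ' - ', (b.bucket + 1) * @interval) AS age_range,
           b.cnt
    FROM (
        SELECT FLOOR((DATEDIFF(CURDATE(), created_at) - 1) / @interval) AS bucket,
               COUNT(*) AS cnt
        FROM invoices
        WHERE DATEDIFF(CURDATE(), created_at) BETWEEN 1 AND @max
        GROUP BY bucket
    ) AS b
    ORDER BY b.bucket;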
How to run a cold backup with Linux/tar without shutting down MySQL slave? Posted: 05 Apr 2013 01:07 PM PDT I run the following before tar-ing up the data directory: … However, tar will sometimes complain that a file changed as it was being read. The slave machine is a cold standby, so there are no client processes running while tar is running. CentOS release 5.6, 64-bit; MySQL 5.1.49-log, source distribution. |
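For reference, a typical pre-tar sequence on a slave looks like the sketch below; the caveat is that FLUSH TABLES WITH READ LOCK does not stop InnoDB's background threads from touching ibdata1/ib_logfile*, which is exactly the kind of change tar notices. The only true cold backup is with mysqld shut down:

    STOP SLAVE;
    FLUSH TABLES WITH READ LOCK;
    -- run tar from another shell while this session keeps the lock open
    UNLOCK TABLES;
    START SLAVE;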
MySQL InnoDB write operations are extremely slow Posted: 05 Apr 2013 08:59 PM PDT I'm having serious performance problems with MySQL and the InnoDB engine. Even the simplest table makes write operations (creating the table, inserting, updating and deleting) horribly slow, as you can see in the following snippet: … I have been looking at htop, and the long waiting times are not because of abnormal CPU load; it's almost zero, and memory usage is also normal. If I create the same table using the MyISAM engine, it works normally. My my.cnf file contains this (if I remember right, I haven't changed anything from the default Debian configuration): … I have also tried restarting the server, but it doesn't solve anything. The slow query log doesn't give any extra information. |
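The usual suspect for exactly this symptom (near-zero CPU, MyISAM fine, InnoDB crawling) is the default innodb_flush_log_at_trx_commit = 1, which forces an fsync on every autocommitted statement; that is brutal on disks without a write cache. Two hedged experiments (test_table is a stand-in name):

    -- Flush the log roughly once per second instead of per commit
    -- (can lose up to ~1s of commits on a crash):
    SET GLOBAL innodb_flush_log_at_trx_commit = 2;

    -- Or batch the work so there is a single commit/fsync:
    START TRANSACTION;
    INSERT INTO test_table (val) VALUES (1), (2), (3);
    UPDATE test_table SET val = val + 1;
    COMMIT;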
PostgreSQL: Unable to run repmgr cloned database Posted: 05 Apr 2013 04:41 PM PDT I'm running tests with PostgreSQL hot standby with 1 master and exactly 1 slave, using the instructions in this guide: http://www.howtoforge.com/how-to-set-up-a-postgresql-9.0-hot-standby-streaming-replication-server-with-repmgr-on-opensuse-11.4 I'm using PostgreSQL 9.1, repmgr 1.1.0 and Ubuntu 10.04 LTS. I followed the steps up to step 6 in the guide, where I ran … on pgslave. Then I did a … on it, and the script (seemingly) finished successfully.
Any help on proceeding further is welcome. |
Which database is best for deeply embedded database or through C DLL? Posted: 05 Apr 2013 08:48 PM PDT I want a deeply embedded database. "Deeply embedded" means that the server is started by the application and closed by the application, with no TCP/IP and no open port. The main features under consideration are:
There are many options available, like MySQL, Oracle, and MS SQL Server. An open-source database would be great. |
What can cause statistics to get out of line? Posted: 05 Apr 2013 03:11 PM PDT I've just worked through a problem at a client's site which, it turned out, was caused by the statistics being wrong, which caused the optimizer to time out. Running … fixed the problem. What I am a bit confused about now is how the statistics got out of line in the first place. The database has both auto_create_stats and auto_update_stats switched on, so SQL Server should have kept the statistics up to date without any intervention. So why did it fail in this instance? This client had recently upgraded their database server. They handled it themselves, so I'm not exactly sure what procedure they went through, but I can't imagine it was anything more complicated than backing the database up on the old server and restoring it on the new one. Could this have caused the glitch somehow? |
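For the record, the standard post-upgrade remedies (object names assumed): a restore carries statistics over as-is, and sampled auto-updates can leave histograms skewed until a full scan rebuilds them:

    EXEC sp_updatestats;                            -- whole database, sampled
    UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;   -- one problem table, full scan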