[how to] Cannot create FOREIGN KEY for a table
- Cannot create FOREIGN KEY for a table
- "Alter table on a small table(1000 rows)" execute very slow on mysql slave db while within 1 second on master db?
- Plan for storing transactions in an easily retrievable manner, indefinitely
- Mysterious missing transactional replication tokens
- Recommended datatype fixed length in mysql
- Splitting a large SQL Server MDF file
- mysqldump: replace drop table in output with delete?
- Connecting to postgres from windows to virtual machine
- Where to install Oracle client software on SharePoint 2010 Server
- 1062 Duplicate entry but there are no duplicates?
- Bus time schedule database design
- Constraint on Composite Type
- MySQL Large DELETE with JOINs
- Group By primary key or DISTINCT increases query time over 1000x with limit
- Compare two Oracle database roles
- Restore to last transaction log restore
- Speeding up MySQL NOT IN (Subquery)
- Cannot connect to server. Too many symbolic links. Postgres Error
- Transaction Log maintenance when switching to Simple Recovery
- Filter on a window function without writing an outer SELECT statement
- How do I identify the remote db agent name to use in create_database_destination on Oracle 11gR2?
- How can the InnoDB ibdata1 file grow by 5X even with innodb_file_per_table set?
- Is there slowdown inserting into an InnoDB table that has no index set?
- Loading XML documents to Oracle 11g DB with control file
- MySQL - run SELECT statement on another server without defining the table structure
- consequences of using "innodb_flush_method = O_DIRECT" without having a battery backed write cache? or on a KVM guest?
- How do I grant a user account permission to create databases in PostgreSQL?
- How can I set a default session sql_mode for a given user?
- Tuning dedicated Percona Server with XtraDB for OLTP
- Duplicate column for faster queries?
Cannot create FOREIGN KEY for a table Posted: 22 Jul 2013 08:56 PM PDT I'm not a professional DBA, and I ran into a problem when attempting to create a FOREIGN KEY for a specific table with the following SQL script (I'm using MySQL): I'm sure no FOREIGN KEY named 'fk_answer_userId' already exists on the table mmy_answer, and the referenced table mmy_user_account does exist. The following is the error: Can anybody help me? Thanks.
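The original script is missing from the digest, but a minimal sketch of the statement involved looks like this (the column names userId and id are assumptions, not from the post); the most common cause of errors 1005/150 here is a type, signedness, or charset mismatch between the two columns, and SHOW ENGINE INNODB STATUS has a "LATEST FOREIGN KEY ERROR" section that names the exact reason:

    -- Hypothetical sketch; column names are assumed for illustration.
    -- Both columns must have identical types (and charset/collation for
    -- string columns), and the referenced column must be indexed.
    ALTER TABLE mmy_answer
      ADD CONSTRAINT fk_answer_userId
      FOREIGN KEY (userId) REFERENCES mmy_user_account (id);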
"Alter table on a small table (1000 rows)" executes very slowly on a MySQL slave db while finishing within 1 second on the master db? Posted: 22 Jul 2013 07:45 PM PDT I added a column to a very small table (1000 rows). It executed very fast on the master db and on one slave db (a backup slave), within 1 second, while on my main slave db (which serves almost all read requests) the ALTER statement still hadn't finished after 600+ seconds. I want to know what could make this happen. In the end I restarted that slave db.
Plan for storing transactions in an easily retrievable manner, indefinitely Posted: 22 Jul 2013 06:55 PM PDT I've got a large (many columns) MySQL database table (InnoDB) that receives a fairly high number of INSERTs (~500/day). Think of the records as financial transactions. Clients need to be able to view these records and/or retrieve a report containing details about the transactions through an online system (going back as far as possible). My question is this: is it feasible to let these records accumulate and simply SELECT them from the table when preparing a listing, even accounting for significant growth in volume? The traffic would be relatively low. We've thought of implementing a clever scheme of separating the table into years; however, that would add significantly to the complexity of the project. Assuming the table was carefully optimized and adequately indexed, would it still be fast enough at 1,000,000+ records per year? I hope this is neither too vague nor too specific to warrant an answer, and I'd like to thank those who take the time to consider responding.
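For reference, MySQL's native range partitioning can give the "separate table per year" effect without changing application queries; a sketch, assuming a hypothetical transactions table with a created date column (all names here are illustrative):

    -- Sketch only; table and column names are hypothetical.
    CREATE TABLE transactions (
        id      BIGINT NOT NULL AUTO_INCREMENT,
        created DATE   NOT NULL,
        amount  DECIMAL(12,2) NOT NULL,
        PRIMARY KEY (id, created)  -- the partition column must be in every unique key
    ) ENGINE=InnoDB
    PARTITION BY RANGE (YEAR(created)) (
        PARTITION p2013 VALUES LESS THAN (2014),
        PARTITION p2014 VALUES LESS THAN (2015),
        PARTITION pmax  VALUES LESS THAN MAXVALUE
    );

Queries that filter on created are pruned to the matching partitions automatically, so a well-indexed table at 1M rows/year stays very manageable either way.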
Mysterious missing transactional replication tokens Posted: 22 Jul 2013 02:29 PM PDT I am looking at implementing Transactional Replication subscriptions in an AlwaysOn Availability Group. I tried to use this as a guide. The reason I say "tried" is not in any way besmirching the author; in this case he is dealing with push subscriptions and we are doing pull. There was a spectacular flame-out trying to initialize from LSN ("The transactions required for synchronizing the subscription with the specified log sequence number (LSN) are unavailable at the Distributor. Specify a higher LSN."), perhaps because this is a pull subscription and it moved on, whereas in the example the push agent was disabled and then brought up on the now-primary? Following this I just re-initialized the subscription (it is not that big a publication). Anyhow, now Replication Monitor, which from experience can be an unreliable narrator up there with the greats of literature, tells me this subscription's performance is 'Excellent' and that it was last synchronized 1 minute ago as of this writing, yet when I try to insert tracer tokens to get a feel for latency, this subscription is nowhere to be seen. In other areas of Replication Monitor, and checking the SQL Server Agent jobs, there are no big oddities. What is going on? Is using pull as opposed to push subscriptions not advised in this scenario? I am somewhat befuddled at the moment and any enlightenment would be welcomed and appreciated.
Recommended datatype fixed length in mysql Posted: 22 Jul 2013 03:06 PM PDT What's the best data type for storing fixed-length data such as:
They don't change. I use the BINARY data type for that. Is this a good choice or not?
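The examples were lost from the digest, but for comparison, here is a sketch of the two usual options for a fixed-length value, using a 16-byte MD5 digest as an assumed example:

    -- Sketch; the MD5 example is an assumption, not from the original post.
    -- CHAR(32) stores the hex text form; BINARY(16) stores the raw bytes
    -- and halves the storage (insert with UNHEX(MD5('...'))).
    CREATE TABLE hashes_char   (h CHAR(32)   NOT NULL);
    CREATE TABLE hashes_binary (h BINARY(16) NOT NULL);

BINARY is a reasonable choice when the value really is opaque bytes; note that comparisons are byte-wise and shorter values are zero-padded to the declared length.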
Splitting a large SQL Server MDF file Posted: 22 Jul 2013 03:57 PM PDT I have a large (1.2 terabyte) SQL Server database that I need to migrate to a new server. Most of the database lives in a single 1.25 TB data file, and a little bit sits in a much more manageable 550 GB file (which is practically empty). Now, the tricky bit: the server to which I'm migrating only has three 700 GB volumes, meaning I need to somehow dissect this goliath into three roughly equal chunks. Most advice I've found involves creating 3 target files and running DBCC SHRINKFILE with EMPTYFILE on my main file to empty it into the targets, but that would take ages with a database this large. Is there a recommended method for splitting a database this big? I'm considering using the SQL Server Integration Services data export feature to dump the data into a clone database with the proper file structure, but I'm curious whether there's a better way.
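For reference, the EMPTYFILE approach mentioned above looks roughly like this (database, file, and path names are hypothetical):

    -- T-SQL sketch; names, paths, and sizes are hypothetical.
    ALTER DATABASE BigDb ADD FILE
        (NAME = BigDb_2, FILENAME = 'E:\Data\BigDb_2.ndf', SIZE = 400GB);
    ALTER DATABASE BigDb ADD FILE
        (NAME = BigDb_3, FILENAME = 'F:\Data\BigDb_3.ndf', SIZE = 400GB);
    -- Migrate extents off the original file (single-threaded and slow
    -- on a 1.25 TB file; the primary file can never be fully emptied
    -- because it holds system pages):
    DBCC SHRINKFILE (BigDb_1, EMPTYFILE);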
mysqldump: replace drop table in output with delete? Posted: 22 Jul 2013 01:37 PM PDT I am using this command, which emits a script that includes DROP TABLE statements. Since I need to keep the existing table definitions, is there a way to achieve this?
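Two common routes, sketched below (the dump/database names are placeholders): suppress the statements at dump time with mysqldump's own flags, or rewrite an existing dump with sed. All flags shown are real mysqldump options:

    # Shell sketch; file and database names are hypothetical.
    # --skip-add-drop-table omits DROP TABLE; --no-create-info omits
    # CREATE TABLE, leaving only the INSERT statements:
    mysqldump --skip-add-drop-table --no-create-info mydb > data.sql

    # Or post-process a normal dump, turning each DROP into a DELETE:
    sed 's/^DROP TABLE IF EXISTS \(`[^`]*`\);/DELETE FROM \1;/' dump.sql > dump_delete.sql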
Connecting to postgres from windows to virtual machine Posted: 22 Jul 2013 01:43 PM PDT I have the following setup:
The problem is that when I try to connect from the Windows side, the test connection fails.
In postgresql.conf I have set only these settings: What could be the reason the connection attempt fails? Maybe I missed some settings. EDIT1: on the Linux side the command executes successfully.
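The specific settings were lost from the digest, but remote connections to Postgres generally require both of the following, plus a server restart and an open port 5432 in the VM's firewall; the subnet below is an assumption matching a typical host-only VM network:

    # postgresql.conf -- listen on all interfaces, not just localhost:
    listen_addresses = '*'

    # pg_hba.conf -- allow the Windows host's subnet (adjust to your network):
    host    all    all    192.168.56.0/24    md5

The fact that a local connection works on the Linux side while the remote one fails usually points at exactly one of these two files or at the firewall.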
Where to install Oracle client software on SharePoint 2010 Server Posted: 22 Jul 2013 01:40 PM PDT We have our ASP.NET web application hosted on a SharePoint Server 2010. Our application uses an Oracle database to save and retrieve data. For our application to work on the SharePoint server we need to install the Oracle client software. Our SharePoint Server architecture consists of the following:
I am not the admin on the SharePoint server and am trying to figure out: if I have to install the Oracle client, on which server do I install it? Do I need to install it on all our servers or just the web front ends? Can someone please help me with this?
1062 Duplicate entry but there are no duplicates? Posted: 22 Jul 2013 12:12 PM PDT I keep getting this error: The problem is that there are no duplicate entries, which is why I'm confused when receiving that error. Is there any way to check, using InnoDB? I have tried exporting the table, reading the SQL code, and running a search for duplicates, but there weren't any.
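Worth noting: error 1062 fires when an incoming row collides with an existing one on a unique key, so the table itself never ends up containing the duplicate. A sketch of the usual check, with hypothetical table and column names (use the column named in the error message):

    -- Hypothetical names; checks what values already occupy the unique key.
    SELECT col, COUNT(*) AS cnt
    FROM mytable
    GROUP BY col
    HAVING COUNT(*) > 1;

    -- And to see whether the specific colliding value is already present:
    SELECT * FROM mytable WHERE col = 'the value from the 1062 message';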
Bus time schedule database design Posted: 22 Jul 2013 06:27 PM PDT Hi, I'm trying to figure out how to design a database for a bus time schedule. There are several lines, each line has two routes (forward and back), each route has several stations (stops), and each station may belong to several routes. The main goal of the database is to easily select the next arrival time at a specific bus station for a specific line. I googled and came up with the following design: lines (id, name, ...) routes (id, name, line_id, ...) stations (id, location) route_station (id, route_id, station_id) times (id, station_id, line_id, time_forward, time_back) So, for instance, in the first 7 rows, the line with line_id=100 stops at different times at the station with station_id=10. My questions are (see the DDL sketch after the postscript):
Thanks. P.S. I've taken a look at this design, but I think it is too much for my system. P.S.2: I've read that NoSQL databases may fit better for this kind of system, but my knowledge is limited to relational databases.
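For concreteness, here is the poster's design rendered as DDL with assumed types, plus the "next arrival" query the schema is meant to serve:

    -- Sketch of the design described above; types are assumptions.
    CREATE TABLE lines    (id INT PRIMARY KEY, name VARCHAR(100));
    CREATE TABLE routes   (id INT PRIMARY KEY, name VARCHAR(100),
                           line_id INT REFERENCES lines(id));
    CREATE TABLE stations (id INT PRIMARY KEY, location VARCHAR(100));
    CREATE TABLE route_station (
        id INT PRIMARY KEY,
        route_id   INT REFERENCES routes(id),
        station_id INT REFERENCES stations(id)
    );
    CREATE TABLE times (
        id INT PRIMARY KEY,
        station_id INT REFERENCES stations(id),
        line_id    INT REFERENCES lines(id),
        time_forward TIME,
        time_back    TIME
    );

    -- Next forward arrival at station 10 on line 100:
    SELECT MIN(time_forward)
    FROM times
    WHERE station_id = 10 AND line_id = 100
      AND time_forward > CURRENT_TIME;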
Constraint on Composite Type Posted: 22 Jul 2013 01:07 PM PDT How can I create a constraint on a sub-field of a composite type? Pseudocode: Is this possible in PostgreSQL 9.2? It seems not to be possible in 9.1, as mentioned here.
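A sketch of one way this can be expressed in PostgreSQL: a CHECK constraint can reach into a composite column as long as the column is parenthesized before the field selector (all names below are assumptions):

    -- PostgreSQL sketch; type, table, and field names are hypothetical.
    CREATE TYPE price AS (amount numeric, currency char(3));

    CREATE TABLE product (
        id   serial PRIMARY KEY,
        cost price CHECK ((cost).amount > 0)  -- parentheses required for field access
    );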
MySQL Large DELETE with JOINs Posted: 22 Jul 2013 05:13 PM PDT I have a large table. [Edit] I would also like to delete all related data from stats, photos, etc. All in all, I would like to delete associations from about 10 tables, some of which are second-degree. I would expect to delete the players with a query like so [updated]: But I'm running into the issue that this query, as it's a huge delete, is taking far too long and too many resources to complete [ever]. As I'm running the query with a JOIN, MySQL won't let me LIMIT the DELETE to break the query into chunks. Alternatively, I've tried deleting each row separately, then its associations. But the amount of parallelization necessary to complete these on, say, 10MM rows also throttles the database to 100% CPU (running m1.xlarge on Amazon RDS), causing complete downtime for what would be several days of queries. My question is: what is the best way to delete these old rows from the database without incurring significant downtime for my application? Are there settings that could help make this simple and effective? Please feel free to ask more questions about configuration, etc. as necessary to solve this problem. Thanks in advance for all of your help! [Edit] Schema Players Table Stats Table Photos Table
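One widely used pattern that sidesteps MySQL's no-LIMIT-on-multi-table-DELETE restriction: materialize the candidate ids once, then delete in small batches in a loop. A sketch with names guessed from the description (the selection criterion is a placeholder):

    -- Sketch; table/column names and the WHERE clause are assumptions.
    CREATE TABLE ids_to_delete (id BIGINT PRIMARY KEY);
    INSERT INTO ids_to_delete (id)
        SELECT p.id
        FROM players p
        JOIN stats s ON s.player_id = p.id
        WHERE s.last_seen < '2012-01-01';   -- hypothetical criterion

    -- Repeat this block until ids_to_delete is empty; small batches keep
    -- lock time and replica lag bounded, avoiding the all-or-nothing delete:
    CREATE TEMPORARY TABLE batch AS
        SELECT id FROM ids_to_delete LIMIT 1000;
    DELETE players       FROM players       JOIN batch USING (id);
    DELETE ids_to_delete FROM ids_to_delete JOIN batch USING (id);
    DROP TEMPORARY TABLE batch;

The same batch table can drive the deletes against stats, photos, and the other association tables before the players rows themselves are removed.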
Group By primary key or DISTINCT increases query time over 1000x with limit Posted: 22 Jul 2013 12:29 PM PDT Also see http://stackoverflow.com/questions/17741167/hibernate-jpa-improve-performance-of-distinct-query but I realized this is mainly a PostgreSQL issue. My application uses a 3rd party extension to PostgreSQL for searching chemical structures. This is in general slow. I cannot change the SQL directly, as the application uses Hibernate and a native query is not an option. I have a many-to-many relationship and the "link table" has an additional column. Basically I have a Mixture that consists of elements, and an element occurs in the mixture at a certain percentage. I use Spring Data JPA with QueryDSL, Hibernate, and PostgreSQL, and I have a query with 2 joins. I'm now searching all Mixtures that contain an element matching the given criteria. Because a mixture can have multiple elements that match the criteria, the query may return the same entity multiple times. I want to prevent that, hence DISTINCT or GROUP BY on the primary key. The query is also paged, meaning it uses limit and offset. The query runs perfectly fine without either DISTINCT or GROUP BY, but then I can get duplicate rows. If I add either GROUP BY or DISTINCT, the query is over 1000 times slower. Query with DISTINCT (note: SQL from Hibernate): EXPLAIN ANALYZE with DISTINCT: see also http://explain.depesz.com/s/ocC The long time is caused by searching the 3rd party index for chemical structure search. For some reason the whole index is searched. If the DISTINCT is removed, limit and offset are correctly applied to the 3rd party index and the query is fast: EXPLAIN ANALYZE: see also http://explain.depesz.com/s/gU2 Is there any way I can tune PostgreSQL to apply the 3rd party index only according to the limit and offset clause when using DISTINCT? EDIT: After further thinking about the issue I came to the conclusion that there is no solution that I can implement. With GROUP BY or DISTINCT the whole query obviously must be run regardless of the limit clause. And if the whole query is run, the 3rd party index must be used, and that takes time (without that index such a search would take minutes, not seconds). Now about statistics. Here is a quote from the supplier:
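The poster cannot change the generated SQL, but for reference: when the duplication comes only from the join, a semi-join (EXISTS) returns each mixture at most once without DISTINCT, which lets the planner honor LIMIT early. A sketch with assumed names; the structure-match predicate is a placeholder for the 3rd-party operator:

    -- Sketch; table/column names and the match operator are assumptions.
    SELECT m.*
    FROM mixture m
    WHERE EXISTS (
        SELECT 1
        FROM mixture_element me
        JOIN element e ON e.id = me.element_id
        WHERE me.mixture_id = m.id
          AND e.structure_matches = true   -- placeholder for the 3rd-party criteria
    )
    ORDER BY m.id
    LIMIT 5 OFFSET 0;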
Compare two Oracle database roles Posted: 22 Jul 2013 02:39 PM PDT I'm trying to re-engineer my corporation's user management and want to compare all the roles with each other to find duplicate ones. Doing it by hand is really tiring and I need to compare about 150 roles. I wrote the following script: It all works fine, but for some reason it doesn't display the table head. The real reason I'm asking: is there a more comfortable way to automatically compare all roles to one another? Some neat way to do all this? I'm quite new to PL/SQL and don't really know how to loop this thing up. Is there maybe even a way to display the corresponding columns of the tables?
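For a pairwise comparison in plain SQL, MINUS over the data-dictionary privilege views shows what one role has that the other lacks; a sketch (role names are placeholders, and each comparison should be run in both directions):

    -- Object privileges granted to ROLE_A but not ROLE_B:
    SELECT privilege, owner, table_name
      FROM dba_tab_privs WHERE grantee = 'ROLE_A'
    MINUS
    SELECT privilege, owner, table_name
      FROM dba_tab_privs WHERE grantee = 'ROLE_B';

    -- Same idea for system privileges:
    SELECT privilege FROM dba_sys_privs WHERE grantee = 'ROLE_A'
    MINUS
    SELECT privilege FROM dba_sys_privs WHERE grantee = 'ROLE_B';

Two roles are duplicates when both directions of the MINUS come back empty (checking dba_role_privs as well for nested role grants).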
Restore to last transaction log restore Posted: 22 Jul 2013 12:31 PM PDT We have a log shipping setup where we replicate from our primary database to a secondary. Now that database is, by way of log shipping, in standby and thus read-only. We set it up so we could run reports from it, but it seems the reporting software used needs to go through the application server, which needs to log onto the server - in this case, Microsoft Dynamics AX and Microsoft Dynamics ERP. Our plan is to have a tertiary database - this would also get log backups via log shipping, but in a different way - we would like to roll the database back to what it was at the last transaction log restore, then restore the latest log file(s). Is this possible, and how would we go about it? EDIT 2013/07/21: Allow me to rephrase my question. If I have a SQL database with full transaction logging, can I tell it to roll back all changes to a certain point, and if so, how may I accomplish this? Thanks
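For reference, what is described here is close to what the STANDBY restore option already provides: each log restore rolls uncommitted work back into an undo file, the database stays readable, and the undo is reapplied before the next log restore. A sketch with hypothetical names and paths:

    -- T-SQL sketch; database name and file paths are hypothetical.
    RESTORE DATABASE ReportDb FROM DISK = 'D:\bak\full.bak'
        WITH STANDBY = 'D:\bak\undo.dat';
    RESTORE LOG ReportDb FROM DISK = 'D:\bak\log1.trn'
        WITH STANDBY = 'D:\bak\undo.dat';
    -- Roll forward only to a chosen point in time:
    RESTORE LOG ReportDb FROM DISK = 'D:\bak\log2.trn'
        WITH STANDBY = 'D:\bak\undo.dat', STOPAT = '2013-07-21T12:00:00';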
Speeding up MySQL NOT IN (Subquery) Posted: 22 Jul 2013 04:25 PM PDT I'm currently trying to improve the performance of some of my queries. As far as I know, statements like "IN" or "NOT IN" are fast even with a large number of values if a subquery on an indexed field without conditions is used.
When it comes to using conditions on table2, the query becomes really slow on a large amount of data. I was thinking about using a LEFT JOIN, but when I need to do the filtering on the user_id, it also becomes slow. How can I solve this problem? Currently, I have no clue. Kind regards, Dominik //--- EDIT --> The original query looked something like this: I have a table which represents the n:m relationship between streams and items. A unique item can be accessible via multiple streams. On top of this I have a state table for user-dependent states of an item. Currently, the user-state entry is only generated when the item is read or saved for the first time. Otherwise, there is no entry in the user_item table. In the beginning this query was fine, but now it takes a lot of time. Even if I leave out the "INNER JOIN" it takes 7 seconds on the current tables with around 500k rows each. //--- Edit 2 --> The table structure in the background is as follows:
btw. sorry for my bad english |
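For reference, this NOT IN shape can usually be rewritten as an anti-join, which MySQL 5.x tends to execute much better when the probe side is indexed; a sketch with names guessed from the description above:

    -- Sketch; table/column names and literal values are assumptions.
    SELECT i.*
    FROM items i
    JOIN stream_items si ON si.item_id = i.id
    LEFT JOIN user_items ui
           ON ui.item_id = i.id
          AND ui.user_id = 42          -- hypothetical user
    WHERE si.stream_id = 7             -- hypothetical stream
      AND ui.item_id IS NULL;          -- anti-join: no read/saved state exists

The key to keeping the filtered LEFT JOIN fast is a composite index on user_items (user_id, item_id), so the per-row probe is a single index lookup.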
Cannot connect to server. Too many symbolic links. Postgres Error Posted: 22 Jul 2013 12:35 PM PDT I get this error when I try connecting to a PostgreSQL server: What can I do to fix this? I have Postgres installed on my Mac.
Transaction Log maintenance when switching to Simple Recovery Posted: 22 Jul 2013 02:07 PM PDT Background: I recently inherited 50+ SQL Servers with 450+ databases. The nightly backups are roughly 8TB and, needless to say, we're using more disk space than we'd like. All of the databases are set to FULL recovery and the transaction logs have never been backed up. I've gone through all of the SQL Servers and identified low-priority ones that only need a nightly backup and where a day of data loss is acceptable. Question: I'm switching a lot of low-priority databases to SIMPLE recovery. What maintenance should I do on the existing transaction logs afterwards?
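A sketch of the usual switch-and-reclaim sequence for one database (the database and logical log file names are hypothetical); once in SIMPLE recovery the log truncates itself at checkpoints, so the oversized file can be shrunk one time to a sensible working size:

    -- T-SQL sketch; names and the target size are hypothetical.
    ALTER DATABASE LowPriorityDb SET RECOVERY SIMPLE;
    -- Find the logical log file name:
    -- SELECT name FROM LowPriorityDb.sys.database_files WHERE type_desc = 'LOG';
    DBCC SHRINKFILE (LowPriorityDb_log, 512);  -- target size in MB

Shrinking once after the recovery-model change is reasonable here; repeated scheduled shrinks just cause the file to grow and re-fragment again.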
Filter on a window function without writing an outer SELECT statement Posted: 22 Jul 2013 01:35 PM PDT Since window functions cannot be included in the WHERE clause of the inner SELECT, is there another method that could be used to write this query without the outer SELECT statement? I'm using Oracle. Here is the sqlfiddle.
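For context: Oracle has no QUALIFY-style clause for filtering directly on a window function, so some nesting is unavoidable; the compact standard form is an inline view. A generic sketch with hypothetical table and column names:

    -- Sketch; emp/dept/sal are hypothetical, not from the poster's fiddle.
    SELECT *
    FROM (SELECT t.*,
                 ROW_NUMBER() OVER (PARTITION BY dept ORDER BY sal DESC) AS rn
          FROM emp t)
    WHERE rn = 1;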
How do I identify the remote db agent name to use in create_database_destination on Oracle 11gR2? Posted: 22 Jul 2013 07:35 PM PDT I am trying to set up DBMS_SCHEDULER in Oracle 11g to run a remote database job. I have a remote Oracle 11gR2 database on Unix and a local one on Windows. I read that you can install the Oracle scheduler agent from the 11g client install for machines that don't have Oracle installed, but this is not needed for running remote jobs if Oracle is present on both machines. With the remote agent installation, you run schagent and provide parameters to register the agent to the remote machine, but I can't find any instructions on the web about how to register remote agents when both machines have Oracle installed, or what to use as the agent name in this case. I have added an entry to tnsnames.ora for the remote DB and can tnsping it, etc. If I run the …
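For reference, the call shape is sketched below; the agent name is whatever the remote agent registered itself as, which can be looked up in the DBA_SCHEDULER_EXTERNAL_DESTS view. All names here are hypothetical:

    -- PL/SQL sketch; destination, agent, and TNS names are hypothetical.
    BEGIN
      DBMS_SCHEDULER.CREATE_DATABASE_DESTINATION(
        destination_name => 'remote_db_dest',
        agent            => 'REMOTE_HOST_AGENT',  -- as registered by schagent
        tns_name         => 'REMOTEDB');          -- entry from tnsnames.ora
    END;
    /
    -- Registered agents (and hence valid agent names):
    -- SELECT * FROM dba_scheduler_external_dests;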
How can the InnoDB ibdata1 file grow by 5X even with innodb_file_per_table set? Posted: 22 Jul 2013 06:35 PM PDT I have innodb_file_per_table set, and just today my ibdata1 file jumped from 59M to 323M after I made several changes to an 800M table to reduce it to about 600M. That particular table's .ibd file was reduced, but the server's ibdata1 file went crazy. Any ideas?
Is there slowdown inserting into an InnoDB table that has no index set? Posted: 22 Jul 2013 08:35 PM PDT I have an old application with lots of InnoDB tables that have no indexes at all, not even a primary ID or such. Those tables only contain a few thousand rows. Would it be faster to …
Loading XML documents to Oracle 11g DB with control file Posted: 22 Jul 2013 03:35 PM PDT I am using Oracle 11g XML database and trying to load XML documents into this DB with a control file and the … I want to use the Oracle function … Here is the date entry in the XML file: And here is the entire control file: I believe that I can execute the above control file with the … UPDATE: I successfully registered the schema, which contains the definition for the date string, along with 100 other schemas, with a script. Since this script is very large, I am posting only 2 registration portions of it: The 2nd registration above is the last in the script, and it creates the table STXP, into which I am trying to load about 800 XML files. Each XML file has a root element called stxp. This is the relevant definition of the date string: And this is how I am using the above definition: When I make the above element optional (for testing purposes) and remove the date string entry (mentioned near the top of this question) from my XML file, the XML file is loaded successfully into the Oracle XML database. When I put this entry back in the XML file (because it is required), Oracle rejects it. Because I let Oracle take care of populating the STXP table with data from the XML files, I am not sure if I can set a trigger to pre-process the date string from the input XML file before saving it in the database. I think there is a way to do it in the control file.
MySQL - run SELECT statement on another server without defining the table structure Posted: 22 Jul 2013 02:35 PM PDT In MySQL I can query information on another server using federated tables, as long as I've defined the same table structure locally. In MS SQL Server, however, I can run any SQL statement against a linked server. Is it possible to do the same thing in MySQL?
consequences of using "innodb_flush_method = O_DIRECT" without having a battery backed write cache? or on a KVM guest? Posted: 22 Jul 2013 05:35 PM PDT MySQL 5.5.29 InnoDB - 128GB RAM - 32 cores - RAID 10 SSD. Our server, which is a dedicated KVM guest on bare metal, is hosting our heavy read-write DB. Everything is file-per-table. innodb_buffer_pool is 96GB with 1GB x 2 log_file_size, and about 20 minutes of writes fill up those logs at peak time. How bad of a situation would it be if O_DIRECT (currently running on the default) was enabled during a high workload without a battery-backed write cache, and a total crash were to occur on the OS or parent host, or the power was cut? Does a battery-backed write cache make a difference if the server is a VM guest of the parent anyway?
How do I grant a user account permission to create databases in PostgreSQL? Posted: 22 Jul 2013 03:16 PM PDT Title says it all. I cannot figure out how to give a user account the ability to create and drop databases. Is there a way to do this with …
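For reference, this is controlled by the role's CREATEDB attribute rather than a GRANT; a sketch:

    -- PostgreSQL sketch; role names are hypothetical.
    ALTER ROLE myuser CREATEDB;           -- existing role
    CREATE ROLE newuser LOGIN CREATEDB;   -- new role
    -- Dropping a database additionally requires owning it (or superuser).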
How can I set a default session sql_mode for a given user? Posted: 22 Jul 2013 04:35 PM PDT I want to have a default session sql_mode that is different from the global sql_mode. I can set it manually, but I'd like to do it automatically. I thought I could add it to … Yet when I connect, the session still inherits the global sql_mode. I'm using MySQL 5.5.16.
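One common workaround is the server's init_connect option, which runs a statement for every new connection; a sketch (the sql_mode value is an example). Note that init_connect is skipped for users with the SUPER privilege, which is a frequent source of confusion when testing as root:

    -- At runtime (or the equivalent line under [mysqld] in my.cnf):
    SET GLOBAL init_connect = "SET sql_mode='STRICT_ALL_TABLES,NO_ENGINE_SUBSTITUTION'";

This applies to all non-SUPER users rather than one specific user, so it is an approximation of a true per-user default.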
Tuning dedicated Percona Server with XtraDB for OLTP Posted: 22 Jul 2013 02:34 PM PDT Hardware specs for an OLTP DB:
Software specs:
My priority is data integrity/protection, followed closely by performance. QUESTIONS:
Duplicate column for faster queries? Posted: 22 Jul 2013 03:22 PM PDT The title doesn't make too much sense, but I couldn't think of a better title for this problem. I have the following tables: Projects
Customers
Payments
When a user enters the system, he will have access to a certain project. Now I want to list all the payments for that project, and it should be pretty easy: My question is: would it not be better to add a column id_project to the payments table? That way the queries would be simpler and faster.
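For scale, a sketch of both forms with assumed column names; with indexes on the foreign keys, the normalized two-join lookup usually stays cheap, while the denormalized column trades that join for the burden of keeping the copy in sync:

    -- Normalized form (column names are assumptions):
    SELECT pay.*
    FROM payments pay
    JOIN customers c ON c.id = pay.id_customer
    WHERE c.id_project = 42;   -- hypothetical project id

    -- Denormalized alternative the poster is considering:
    -- ALTER TABLE payments ADD COLUMN id_project INT;  -- must be kept in sync
    SELECT * FROM payments WHERE id_project = 42;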