[how to] Two different ON DELETE paths, how do I need to change the schema?
- Two different ON DELETE paths, how do I need to change the schema?
- How do I fix the definer problem "The user specified as a definer ('fred'@'192.168.0.%') does not exist"
- How to access the number of rows accumulated by RETURNs in PL/pgSQL
- Monitoring real-time statistics on PostgreSQL
- Normalization/normal forms - May a field describe another field?
- mysql table not updating no errors
- Custom unique column constraint, only enforced if one column has a specific value
- MySQL export user with semi-colon ";" on the end
- How to migrate data from SQL database to MySQL including flat files?
- Unique index on 2 columns in mysql
- loading a csv file which is on local system into MySQL DB which is on remote server
- Please Review my Schema
- MySQL - What exactly does smallint(2) permit?
- Postgresql query speed
- mysql crash SIGSEGV under stress
- Optimal drive configuration for SQL Server 2008R2
- Tools and methodologies to keep two DBs aligned
- How to connect to a Database made by Oracle SQL Database?
- replicate-wild-ignore-table does not stop already logged queries on slave
- Backup plan for MySQL NDB cluster database, not InnoDB
- How do I display subtotals in cube browser in SSAS 2012 (SQL Server Data Tools)?
- How to optimize a log process in MySQL?
- Merge Replication SQL
- Migrating MySQL to new server
- Replicating data from Oracle to MySQL
- MySQL: Lock wait timeout exceeded
- MI Data Warehouse Advice
- Designing a database for a site that stores content from multiple services?
- Why does MyISAM support FULLTEXT Searching and InnoDB does not?
Two different ON DELETE paths, how do I need to change the schema? Posted: 23 Mar 2013 09:29 PM PDT

For a project in school I'm designing a website that sells items via auctions. Think eBay, but on the complexity scale of a school project. We went through the process of making an ER diagram and planning things out, but we still ran into a snag with the table creation statements in SQL. It's a problem I've never had before, and I know what the problem is now after a bit of digging: there is a table with two foreign keys to two different tables, and one of those two tables has a foreign key into the other. That's all fine and great; every foreign key has an ON DELETE action. A clearer example, in SQL, follows below.

So if those three statements went through (#3 won't) and someone deleted a row in A, there is the possibility that a row in C would want to be deleted and set null at the same time. While I respect the concurrency issue, shouldn't the deletion win? It isn't technically against the business logic for that to happen, but if it did I'd expect the row to be deleted. So I obviously need to restructure the schema to avoid this pitfall, but I can't really think of a way to do it, and I was hoping there was some magic keyword that would tell Derby to just delete the row and move on with life.

If I do need to restructure, the context is that table C is the auction. Each auction has exactly one feedback entry, which is B. It also has exactly one user who is selling the item (A). Feedback (B) entries keep track of the user (A) who places them; in this respect it is the buyer who leaves the feedback. Now it makes perfect sense to delete the auction (C) if the user (A) who is selling the item has their account deleted. However, it doesn't make sense for the auction to be deleted if a rogue user (A) leaves foul feedback (B) on an auction (C) and gets their account deleted. It also doesn't make sense to populate the foreign key from C to B until after the auction is closed and the buyer (A) can leave feedback, which is why it is set to NULL initially. Like I said before, it is technically possible for a user (A) to create an auction (C), buy the item from themselves, leave feedback (B), and then have their account deleted; in that case I'd still expect the row to be deleted.

I read through this question and the resolution was to normalize, which I've done already. In that question this was linked, and there they offered a solution to the multiple-cascade-path problem that I don't fully understand, but I get enough to believe it isn't a solution to this, because I don't actually want to delete along both paths. Another thing mentioned was triggers. I have a rough understanding of triggers at a high level but have never designed or coded one before.
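The three CREATE TABLE statements were stripped from this digest; the following is a minimal reconstruction matching the description, in Derby-flavored SQL. The table and column names are my own guesses, not the asker's:

    CREATE TABLE A (
        id INT PRIMARY KEY
    );

    CREATE TABLE B (
        id INT PRIMARY KEY,
        buyer_id INT NOT NULL REFERENCES A (id) ON DELETE CASCADE
    );

    -- Statement #3 fails: A now reaches C along two paths with different
    -- delete rules, directly (CASCADE) and through B (SET NULL), and Derby
    -- rejects such conflicting delete-connected relationships at CREATE time.
    CREATE TABLE C (
        id INT PRIMARY KEY,
        seller_id INT NOT NULL REFERENCES A (id) ON DELETE CASCADE,
        feedback_id INT REFERENCES B (id) ON DELETE SET NULL
    );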
How do I fix the definer problem "The user specified as a definer ('fred'@'192.168.0.%') does not exist" Posted: 23 Mar 2013 02:17 PM PDT

I dumped an existing database from my web site into a new MySQL setup on my laptop. Everything works except pages that use a view, where the error message above appears. There is only one view on the system. I assumed that this was caused because the user was not in the system, but looking in the user table of the mysql database, the user is listed correctly, with the password. The dump was from an earlier version of MySQL (5.0) into the latest one that comes with XAMPP 5.527, so at first I thought the latest version might be incompatible with the existing one. Looking on this forum, there are some existing answers covering this problem that refer to making changes to the database and then dumping it again. However, my online database works fine, so I dare not make changes to it in case I kill it; that is assuming I understood what changes I should make and how to make them. I have phpMyAdmin. Can I modify the existing dumped database, or do I have to change the online one and then export it? And exactly what do I need to do to modify it? The original database was a Microsoft Access one, converted to MySQL, and it has worked without error since 2003.
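For reference, the two standard fixes, as a sketch; the database, view, table, column, and password names below are placeholders, not anything from the original post:

    -- Option 1: create the missing definer account on the laptop
    CREATE USER 'fred'@'192.168.0.%' IDENTIFIED BY 'some_password';
    GRANT SELECT ON mydb.* TO 'fred'@'192.168.0.%';

    -- Option 2: recreate the one view under an account that does exist locally
    CREATE OR REPLACE
        DEFINER = 'root'@'localhost'
        SQL SECURITY DEFINER
        VIEW mydb.my_view AS
    SELECT id, name
    FROM mydb.some_table;

Option 2 is usually safer on a local copy, because it touches only the view definition and leaves the privilege tables alone.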
How to access the number of rows accumulated by RETURNs in PL/pgSQL Posted: 23 Mar 2013 04:17 PM PDT When performing RETURN NEXT in a PL/pgSQL function, is it possible afterwards to directly access the number of rows accumulated so far into the result set, which is returned when the function ends? Example (see the sketch below): the number of accumulated records should now be 44.
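The original example didn't survive the digest; a minimal sketch of the scenario, with a function name and loop bounds of my own choosing:

    CREATE OR REPLACE FUNCTION accumulate_demo()
    RETURNS SETOF integer AS
    $$
    DECLARE
        i integer;
    BEGIN
        FOR i IN 1..44 LOOP
            RETURN NEXT i;  -- appends one row to the function's result set
        END LOOP;
        -- 44 rows have been accumulated at this point, but PL/pgSQL exposes
        -- no built-in counter for them; keeping your own counter variable is
        -- the usual workaround.
        RETURN;
    END;
    $$ LANGUAGE plpgsql;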
Monitoring real-time statistics on PostgreSQL Posted: 23 Mar 2013 11:28 AM PDT

I need to monitor DB performance on a distributed system with multiple databases. Some of the database servers use pgbouncer to pool connections, and on those servers I can connect to pgbouncer and execute a SHOW STATS; command to retrieve certain statistics. I'm interested in the following columns:

- avg_req: average requests per second in the last stat period
- avg_recv: average bytes per second received from clients
- avg_sent: average bytes per second sent to clients
- avg_query: average query duration, in microseconds

Now my question is: is there a way of gathering this information without using pgbouncer, on a plain PostgreSQL server?
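A partial equivalent exists in the server's own statistics collector; a sketch (note these counters are cumulative since the last stats reset, so you have to sample twice and take the difference to derive per-second rates):

    SELECT datname,
           xact_commit + xact_rollback AS transactions,
           tup_returned,
           tup_fetched,
           blks_read,
           blks_hit
    FROM pg_stat_database
    WHERE datname = current_database();

For average query duration there is no direct counterpart in pg_stat_database; the pg_stat_statements extension, which tracks calls and total_time per statement, is the usual way to derive it.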
Normalization/normal forms - May a field describe another field? Posted: 23 Mar 2013 10:43 AM PDT Like this:
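The asker's example table was stripped from the digest; the following is a hypothetical stand-in in the same spirit, where one non-key field describes another non-key field:

    CREATE TABLE contact (
        contact_id INT PRIMARY KEY,
        name       VARCHAR(100),
        phone      VARCHAR(20),
        phone_type VARCHAR(10)  -- 'mobile' or 'landline': describes phone, not contact_id
    );

Here phone_type is determined by phone, and phone by contact_id, so phone_type depends on the key only transitively (contact_id → phone → phone_type), which is exactly what the 3NF rule quoted below forbids.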
I wonder if it violates this rule (because I don't fully understand the rule): "Every non-prime attribute of R is non-transitively dependent (i.e. directly dependent) on every superkey of R."
mysql table not updating no errors Posted: 23 Mar 2013 10:41 AM PDT I am using MySQL Workbench 5.2.44 CE, revision 9933. I have a table in which I am trying to update a few columns of some rows. I am not using queries, just the edit-table UI provided in Workbench. After I fill in the details and hit Apply, Workbench shows no errors, but it doesn't update the table either. The table uses the InnoDB engine. Can anyone please tell me what the issue might be? Thanks.
Custom unique column constraint, only enforced if one column has a specific value Posted: 23 Mar 2013 02:05 PM PDT Is it possible to have a custom unique column constraint as follows? Suppose I have two columns, and I want the first to be unique, but only among rows where the second holds a specific value.
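The asker's example was cut off in this digest, but the title describes the problem fully. On engines that support partial indexes this is a one-liner; a sketch in PostgreSQL, with placeholder table and column names:

    CREATE TABLE t (
        col_a integer,
        col_b text
    );

    -- col_a must be unique, but only among rows where col_b = 'active';
    -- rows with any other col_b value may repeat col_a freely
    CREATE UNIQUE INDEX t_col_a_active_uq
        ON t (col_a)
        WHERE col_b = 'active';

SQL Server offers the same feature under the name "filtered index". MySQL has no direct equivalent; common workarounds use triggers, or a helper column that is non-NULL only for the constrained value combined with a plain unique index (MySQL unique indexes ignore NULLs).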
MySQL export user with semi-colon ";" on the end Posted: 23 Mar 2013 08:23 AM PDT I am trying to set up automatic export of user accounts from one server, to be imported into another. I am running a command that dumps each account's GRANT statements. The problem is that these SQL statements don't end with ;, so I can't replay the file on another remote server with the mysql client. I know I could write a bash one-liner to wrap this and add a semi-colon, but I am hoping there is a way to get these statements with the semi-colon on the end, to keep this nice and simple and clear.
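The actual commands didn't survive the digest. For reference, a pure-SQL way to generate a statement list with terminators baked in, as a sketch:

    -- Emit one SHOW GRANTS statement per account, each already terminated;
    -- the result set can be saved to a file and replayed with the mysql client.
    SELECT CONCAT('SHOW GRANTS FOR ''', user, '''@''', host, ''';') AS stmt
    FROM mysql.user;

The output of SHOW GRANTS itself still lacks semicolons, which is why most scripts end up appending them client-side anyway (for example with sed 's/$/;/').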
How to migrate data from SQL database to MySQL including flat files? Posted: 23 Mar 2013 09:56 AM PDT A friend of mine had an ASP developer build an online application where users can also upload files: jpg, doc, pdf, and so on, and all these files are saved in the SQL Server database as flat files. Now my friend has asked me to redo everything in PHP, and he wants to migrate all the data he has in the SQL Server database to a MySQL database. I can do it; the text can be migrated easily, but can the files which are saved inside the SQL Server database be migrated to MySQL as flat files too? Is it possible?
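It is possible; MySQL stores binary file content in BLOB columns. A sketch of a receiving table, with placeholder names:

    CREATE TABLE uploaded_files (
        id        INT UNSIGNED NOT NULL AUTO_INCREMENT,
        filename  VARCHAR(255) NOT NULL,
        mime_type VARCHAR(100) NOT NULL,   -- e.g. 'application/pdf'
        contents  LONGBLOB     NOT NULL,   -- the raw file bytes, up to 4 GB
        PRIMARY KEY (id)
    ) ENGINE = InnoDB;

The migration script then reads each row's binary column out of the source database and inserts it here, typically through a parameterized INSERT so the bytes are never interpreted as text.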
Unique index on 2 columns in mysql Posted: 23 Mar 2013 05:55 AM PDT I have one table in MySQL, and I want to create a unique index on two of its columns so that duplicate pairs cannot be inserted. Rows I consider duplicates are still getting in. How do I prevent this?
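The original schema and example inserts were stripped from this digest; a sketch of the composite unique index, with placeholder names:

    -- One index spanning both columns: the pair (col1, col2) must be unique,
    -- though each column may still repeat on its own
    ALTER TABLE my_table
        ADD CONSTRAINT uq_col1_col2 UNIQUE (col1, col2);

One caveat worth checking: MySQL unique indexes permit any number of rows in which either indexed column is NULL, which is a common reason apparent duplicates still slip through; declaring both columns NOT NULL closes that hole.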
loading a csv file which is on local system into MySQL DB which is on remote server Posted: 23 Mar 2013 08:27 AM PDT Can we directly load a CSV file that is on the local system into a MySQL DB that is installed on a remote server? The LOAD DATA INFILE ... INTO TABLE command seems usable only for loading files that sit on the server's own system.
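Yes: the LOCAL keyword makes the mysql client read the file from the client machine and ship it to the remote server. A sketch (file path, table name, and CSV layout are placeholders), assuming local_infile is enabled on both client and server:

    LOAD DATA LOCAL INFILE '/path/to/file.csv'
    INTO TABLE my_table
    FIELDS TERMINATED BY ','
    OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    IGNORE 1 LINES;   -- skip the CSV header row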
Please Review my Schema Posted: 23 Mar 2013 02:48 PM PDT Anything obvious that could be improved upon? :) I am positive there are no direct errors; what I am looking for is errors in normalization and areas where I could simplify data storage. Please see my previous question for the context and motivation.
MySQL - What exactly does smallint(2) permit? Posted: 23 Mar 2013 08:05 AM PDT Sorry for the really simple question, but I can't figure it out. When I set up my schema, I set up my primary key to use a field of type smallint(2). We've broken the upper limit of what I expected that to hold, and now I'm seeing 3-digit numbers being added to this field, so I'm now speculating that the (2) doesn't limit the value the way I assumed. What values does a field such as smallint(2) actually permit? Thanks!
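A short demonstration of the answer: the number in parentheses is only a display width, used for padding when ZEROFILL is set, and it never limits the range. SMALLINT always stores -32768 to 32767, or 0 to 65535 when UNSIGNED. A sketch:

    CREATE TABLE width_demo (
        n SMALLINT(2) UNSIGNED ZEROFILL
    );

    INSERT INTO width_demo VALUES (5), (123), (65535);

    SELECT n FROM width_demo;
    -- 05      <- padded out to the display width of 2
    -- 123     <- wider values are shown in full, never truncated
    -- 65535   <- the real cap comes from SMALLINT UNSIGNED, not from (2)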
Postgresql query speed Posted: 23 Mar 2013 05:44 PM PDT This is my first time seriously attempting to get into the guts of the speed of a query on Postgres, and I was hoping for some explanation of exactly what is going on. I am using Postgres 9.1 on Windows x64 with 8 GB of RAM, and I've updated the relevant memory lines in my postgresql.conf. I've already gone ahead and run EXPLAIN (ANALYZE, BUFFERS) on my query, and the output can be seen, parsed, on depesz. I'm happy to supply any more information that is requested. Since this is my first time really trying to squeeze performance out of Postgres, I'm hoping to get more information on exactly what is happening in my query. It looks like the most time is spent in an index scan of the dose_hsts table, and I really want to know if I can make this faster. If it makes any difference, the dose_hsts table has 170976 rows in it, but it will be growing by a lot; how much of a difference does that make? Also, this query takes about 100 ms to run and returns only 5 rows. EDIT: here is the output with EXPLAIN (ANALYZE, VERBOSE) on depesz.
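The query itself was stripped from this digest; purely as a sketch of the diagnostic step being described (only the dose_hsts table name comes from the post, the predicate and ordering columns are invented):

    EXPLAIN (ANALYZE, BUFFERS, VERBOSE)
    SELECT *
    FROM dose_hsts
    WHERE patient_id = 42          -- hypothetical predicate
    ORDER BY dose_time DESC        -- hypothetical ordering
    LIMIT 5;

At 170,976 rows the table fits comfortably in 8 GB of RAM, so an index scan returning 5 rows in ~100 ms usually points at cold cache or random I/O rather than table size; growth starts to matter mainly once the working set no longer fits in memory.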
mysql crash SIGSEGV under stress Posted: 23 Mar 2013 03:45 PM PDT I am running a server written in C++ which uses the MySQL client library. When there are 200 or more simultaneous connections, I see MySQL crash with a SIGSEGV. How do I handle this?
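A couple of server-side checks worth running first, as a sketch; whether they apply depends on whether it is mysqld or the client-side library that is actually segfaulting:

    SHOW VARIABLES LIKE 'max_connections';          -- configured connection cap
    SHOW VARIABLES LIKE 'thread_stack';             -- per-thread stack size
    SHOW GLOBAL STATUS LIKE 'Max_used_connections'; -- high-water mark so far

If the crash is in the client process instead, note that the MySQL C client library requires mysql_library_init() before threads start and one MYSQL handle per thread (or external locking); sharing a handle across threads is a classic cause of SIGSEGV under load.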
Optimal drive configuration for SQL Server 2008R2 Posted: 23 Mar 2013 06:21 PM PDT I have a fairly busy database server running SQL Server 2008 R2 that has the following setup:
Assuming I can't add additional drives into this server, have I made the best use of the configuration I have available? Or should I consider another scheme here where logs are isolated from the data files, for example? Update: For those that requested further hardware details:
Tools and methodologies to keep two DBs aligned Posted: 23 Mar 2013 01:48 PM PDT Two DBs have schemas that represent the same semantic objects. The first one is the production DB (non-RDBMS, an in-house, in-memory DB implemented with a shitload of RAM). The other is Postgres. Once in a while the production DB is changed (schema upgrade). Question: what is the best practice for keeping DBs of different types aligned semantically?
How to connect to a Database made by Oracle SQL Database? Posted: 23 Mar 2013 10:48 AM PDT So I am fairly new at this, so if you could keep that in mind in your answers, it would be much appreciated. I installed Oracle SQL Database on my Windows PC. It came in two zip files. I installed it, and the online portion of it works fine; I can log in with username sys and password **. What I am trying to do is connect to this newly created database from another computer through SQL Developer. I have read that in order to do this, you need to change the hostname of the database from "localhost" to an IP address. How do I do that, and is there anything else I need to do to make this work? I also found this LINK. Is this something I should do? I do not have a domain, though. listener.ora tnsnames.ora
replicate-wild-ignore-table does not stop already logged queries on slave Posted: 23 Mar 2013 11:48 AM PDT We have a master DB where multiple queries were fired at a specific table. These queries were taking too long to execute on the slave; it was lagging behind, and as a result queries were queued and executing one by one. Later we identified that this table did not need to be replicated, so we added it to replicate-wild-ignore-table in the slave's my.cnf and restarted the slave. But the slave again picked up the remaining queries for that table. Ideally it should have skipped the queries for that table. We are not able to figure out why this is happening. Can somebody explain?
Backup plan for MySQL NDB cluster database, not InnoDB Posted: 23 Mar 2013 06:48 AM PDT I have a database which will grow to more than 250 GB. All data is in the NDB engine (2 data nodes), and no other MySQL engine is used for data storage. What backup plan would suit this setup?
Kind regards,
How do I display subtotals in cube browser in SSAS 2012 (SQL Server Data Tools)? Posted: 23 Mar 2013 04:48 PM PDT Apparently this feature was available in previous versions but has now been removed. What are some alternatives?
How to optimize a log process in MySQL? Posted: 23 Mar 2013 06:10 AM PDT In my project, I have about 100,000 users and can't control their behavior. Now, what I would like to do is log their activity in a certain task. Every activity is one record, which includes columns like user_id and some tag_ids. The problem I have is that these tasks can in some cases reach 1,000,000 per year per user, so if I stored all these activities in one table it would obviously become very big (= slow). What is best to do here? Create a single table per user (so I have 100,000 log tables), or put all these activities in one table? And what kind of engine should I use? One important thing to note: although I've simplified the situation a bit and it may not look like a normal log, users can also change values in these records (like tag_ids).
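One big, well-indexed InnoDB table is generally far more manageable than 100,000 per-user tables; a minimal sketch, with hypothetical names:

    CREATE TABLE activity_log (
        id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        user_id    INT UNSIGNED    NOT NULL,
        tag_id     INT UNSIGNED    NOT NULL,
        created_at DATETIME        NOT NULL,
        PRIMARY KEY (id),
        KEY idx_user_created (user_id, created_at)  -- serves per-user, time-ranged reads
    ) ENGINE = InnoDB;

InnoDB suits this workload better than MyISAM because the rows are updated in place (the changing tag_ids) and row-level locking keeps 100,000 writers from serializing on table locks.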
Merge Replication SQL Posted: 23 Mar 2013 03:48 PM PDT I am currently having a problem with the following: creating a merge replication is simple if the servers are on the same network, but I would like to know how I can do a merge replication if the servers are on separate networks. I am using SQL Server 2008 R2.
Migrating MySQL to new server Posted: 23 Mar 2013 12:48 PM PDT We have a MySQL 5.1 instance running on Windows Server 2008 R2 x64 (Server A), replicated to another MySQL 5.1 instance on Windows Server 2003 R2 (Server B). We have a new MySQL 5.5 instance on another Windows Server 2008 R2 x64 machine (Server C). I want to replace Server A with Server C. My plan is to set up Server C as another slave to Server A by 'cloning' Server B, like this: http://www.redips.net/mysql/add-new-slave/. After that I plan on doing this: http://blesseddlo.wordpress.com/2011/06/03/convert-mysql-slave-to-master/. After that I want to change Server B's master to Server C. Is this a viable approach? Are there caveats? May there be any compatibility issues when setting a MySQL 5.5 instance as slave to a MySQL 5.1 instance? I can't just stop everything and use
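Replicating from an older major version to a newer one (5.1 master, 5.5 slave) is the supported direction. The final repointing step would look roughly like this sketch, run on Server B once Server C has been promoted (host, credentials, and binlog coordinates are placeholders):

    STOP SLAVE;

    CHANGE MASTER TO
        MASTER_HOST = 'server-c.example.com',
        MASTER_USER = 'repl',
        MASTER_PASSWORD = 'repl_password',
        MASTER_LOG_FILE = 'mysql-bin.000001',   -- taken from SHOW MASTER STATUS on C
        MASTER_LOG_POS  = 4;

    START SLAVE;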
Replicating data from Oracle to MySQL Posted: 23 Mar 2013 09:07 PM PDT I work with a vendor that does data analytics, and they currently receive a replication stream from some of our databases using a product called GoldenGate (which is very expensive). GoldenGate has been great: it replicates transactions from the Tandem NSK source and can apply the changes into any supported database; they're using MySQL at the remote end. We're switching our billing system to Oracle, and while we could continue to use GoldenGate to move these logs, I'd like to see if there's another option. We initially chose GoldenGate because nothing else could get data out of the Tandem NSK, but now that we're moving to Oracle, there may be some more native (or at least simpler) choices. I've got nothing against it; like I said, it works great, but I'm hoping that two mainstream databases are easier to replicate between than the NSK. Are there any products or methods that would help get transactional data from an Oracle system into a MySQL database? I'm not sure if there's any way to do this kind of replication natively (I know we can do Oracle -> MSSQL using native replication, but I'm not aware of any way to target MySQL), or if anybody knows of a product that could facilitate this (and costs less than GoldenGate). Thanks for any suggestions!
MySQL: Lock wait timeout exceeded Posted: 23 Mar 2013 02:48 PM PDT I have a developer who has been trying to alter a large table (~60M rows). Via Liquibase/JDBC they're running an ALTER TABLE statement against it. Today, while it was running, I checked in on it periodically; everything looked normal, the query was running in state "copying to tmp table", I could see the temp table on disk getting larger and larger (still plenty of free space), and I could see other processes blocked waiting for this table to be unlocked. Finally, after about 4.5 hours, they got the "Lock wait timeout exceeded; try restarting transaction" error. This is actually the second time they've tried, and it seems to fail right about when I would expect it to complete. innodb_lock_wait_timeout is set to the default 50, so I can't see how the statement could run for so long before being affected by it. No errors logged, no deadlocks or other weirdness seen in SHOW ENGINE INNODB STATUS. Can anyone help me with other ideas? I'm fairly stumped on this one. Thanks
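A few server-side checks that narrow this down, as a sketch; note the error after 4.5 hours may equally be a timeout imposed on the JDBC side rather than by innodb_lock_wait_timeout itself:

    -- confirm what the session actually sees
    SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';

    -- raise it for the migration connection only (dynamic per-session in
    -- MySQL 5.5+ and in 5.1 with the InnoDB plugin)
    SET SESSION innodb_lock_wait_timeout = 86400;

    -- while the ALTER runs: see who is blocked on whom
    SHOW FULL PROCESSLIST;
    SHOW ENGINE INNODB STATUS;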
MI Data Warehouse Advice Posted: 23 Mar 2013 06:49 PM PDT I have recently started a new job, and part of my remit is to try to rescue the Management Information (MI) data warehouse. I use the term "data warehouse" very loosely here! The server setup is:
The disks are split into 3 drives:
These are the observations I have made regarding the database:
Importing data: the data is imported using batch files and osql. It is slow, clunky, and prone to failure (it has failed 4 times, and I have only been there for two and a half weeks). The logging is also poor. So apart from all that, it is perfect... I need to find a way to fight my way out of this mess, but I am not sure how to go about it. Ideally, I want to be able to:
The main issue at the moment is the performance. I have created a new filegroup on drive D: (where the log files are stored) and placed a few nonclustered indexes on there. I am being slightly cautious, as I don't want to increase the import times; these are taking too long as it is! I wanted to partition the larger tables, but partitioning is not included in Standard; it is an Enterprise feature. I realise that this is a pretty huge task, and I am not looking for a magic fix, but a little guidance on how to attack this would be a great help. EDIT: I should also point out that there is no test or dev environment for this either...
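On the partitioning point: the usual Standard-edition workaround is a partitioned view over per-range tables with CHECK constraints, which lets the optimizer skip tables outside the queried range. A sketch, with hypothetical names and ranges:

    CREATE TABLE dbo.sales_2012 (
        sale_date DATE NOT NULL
            CHECK (sale_date >= '20120101' AND sale_date < '20130101'),
        amount    DECIMAL(10,2) NOT NULL
    );

    CREATE TABLE dbo.sales_2013 (
        sale_date DATE NOT NULL
            CHECK (sale_date >= '20130101' AND sale_date < '20140101'),
        amount    DECIMAL(10,2) NOT NULL
    );
    GO

    CREATE VIEW dbo.sales AS
    SELECT sale_date, amount FROM dbo.sales_2012
    UNION ALL
    SELECT sale_date, amount FROM dbo.sales_2013;
    GO

Queries filtered on sale_date touch only the tables whose CHECK constraints can match, which buys much of the benefit of real partitioning without the Enterprise licence.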
Designing a database for a site that stores content from multiple services? Posted: 23 Mar 2013 03:48 AM PDT I'm building a site that implements David Allen's Getting Things Done, pulling in your email, your Facebook news feed, tweets from those you follow on Twitter, and, eventually, more services. The problem is that I'm not a DBA, and I'm not sure how to design the database so that, as I add features to the site, I won't have to artificially corrupt people's raw data just to store it (for example, I want to add the ability to pull RSS feeds sometime in the future, but I'm not sure how I'd do that without making a mess). I've put down my initial ideas using DBDesigner 4; below you'll find the diagram and the SQL. A few notes to help clarify things.
Can someone please point me in the right direction? I'd also be willing to look at using a NoSQL database if suggested. Thank you for your time and consideration. Here's the SQL create script, just in case anyone wants to see it.
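The diagram and create script didn't make it into this digest. As a sketch of the shape such designs often take (every name below is hypothetical): one row per pulled-in item, with the service's identifier and raw payload stored verbatim, so new services never force a schema change or corrupt existing data:

    CREATE TABLE items (
        id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        user_id     BIGINT UNSIGNED NOT NULL,
        service     VARCHAR(20)     NOT NULL,  -- 'email', 'facebook', 'twitter', later 'rss', ...
        external_id VARCHAR(255)    NOT NULL,  -- the service's own id, stored untouched
        raw_payload MEDIUMTEXT      NOT NULL,  -- the original data, uncorrupted
        fetched_at  DATETIME        NOT NULL,
        PRIMARY KEY (id),
        UNIQUE KEY uq_service_item (service, external_id)
    ) ENGINE = InnoDB;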
Why does MyISAM support FULLTEXT Searching and InnoDB does not? Posted: 23 Mar 2013 07:29 AM PDT I know the basic differences between the MyISAM and InnoDB engines. In MySQL 5.6 (currently in beta), InnoDB will support fulltext search. But I wonder why previous versions of InnoDB didn't support FTS. My guess is that it's because of the table storage format, or the way the data is indexed. Sorry for my ignorance.