[how to] "Cannot add or update a child row" when deleting a record?
- "Cannot add or update a child row" when deleting a record?
- Create Log Shipping Step Failed After ServerName Changed
- App for rapid prototyping of relational data structures
- Unicode data getting lost during or after insert from file
- CPU (core) underutilization SQL Server 2008 R2 Standard SP2
- SQL replication Agent permission
- Reason for using hexadecimal in NCHAR()?
- SQL Server Designers, Failed Saves, and Generated Scripts
- Historic hierarchy analysis via SSAS and Excel
- Query performance differs greatly between a development setup and production
- How to find parent rows that have identical sets of child rows?
- Managing the transaction log during restore
- How to calculate total ON hours and total OFF hours in a day of a motor using PHP and MySQL
- Understanding SIX lock in Microsoft SQL Server
- Problem adding a new node to a SQL Server 2012 Failover Cluster
- Eliminating duplicate records in data cleansing
- Statistical Analysis of Data that has to be done in an order?
- Clear schema from database without dropping it
- Are there any disadvantages to partitioning on financial year?
- SUPER privilege not defined for master user in Amazon MySQL RDS
- How to import a table's data into MySQL from SQL Server?
- Is it possible to pipe the result of a mysqldump straight to rsync as the source argument?
- MySQL Workbench sync keeps requesting the same changes
- SQL Developer: Setup debugger for PL/SQL
- Deleting Data From Multiple Tables
- Why does a MySQL (InnoDB) table get faster after OPTIMIZE TABLE, but then it doesn't work?
- Minimizing Indexed Reads with Complex Criteria
"Cannot add or update a child row" when deleting a record? Posted: 26 Jun 2013 05:22 PM PDT I have the two following tables: I have the following foreign key constraint on the survey_answers table: If I try to delete a record from survey_main that has child records in the
I understand what the error is saying, but shouldn't the fact that I have cascading deletes make it so this error would never be thrown? What am I missing here? |
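For reference, a minimal sketch of the cascading setup the question says is in place — the two table names come from the question, but the column names and everything else are assumptions:

```sql
-- a minimal sketch; only survey_main/survey_answers are from the question
CREATE TABLE survey_main (
    id INT NOT NULL PRIMARY KEY
) ENGINE=InnoDB;

CREATE TABLE survey_answers (
    id INT NOT NULL PRIMARY KEY,
    survey_id INT NOT NULL,
    CONSTRAINT fk_survey FOREIGN KEY (survey_id)
        REFERENCES survey_main (id)
        ON DELETE CASCADE   -- deleting a parent should remove its children
) ENGINE=InnoDB;
```

Note that MySQL's default referential action is RESTRICT; the sketch only illustrates the declared cascade, not an explanation of why the asker still sees the error.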
Create Log Shipping Step Failed After ServerName Changed Posted: 26 Jun 2013 05:30 PM PDT I was doing some tests on a server name change and encountered some errors. Here is the original setup: server name - ServerA, SQL Server default instance - ServerA. Changes: server name - ServerB. Before I changed the SQL Server default instance 'servername', … returned …
As expected. So I changed the SQL Server default instance 'servername' and restarted the SQL Server service. … returned …
So the question is: am I missing some steps in changing the servername, or is this a bug? Version - SQL Server 2012 SP1
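For context, the documented way to update SQL Server's stored instance name after a host rename (general background, not specific to the log-shipping failure above) looks like this:

```sql
-- standard rename procedure after the Windows host name changes
EXEC sp_dropserver 'ServerA';
EXEC sp_addserver 'ServerB', local;
-- restart the SQL Server service, then verify:
SELECT @@SERVERNAME;   -- should now report ServerB
```

Jobs, maintenance plans, and log shipping configurations that captured the old name are not updated by this and typically need to be recreated.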
App for rapid prototyping of relational data structures Posted: 26 Jun 2013 04:05 PM PDT What are some apps (especially web apps) that provide an extremely lightweight user interface for building, inserting test data into, and querying a relational data structure? The app should have some kind of "visual" interface (even if only a dropdown) for defining relationships between properties (columns, in the RDBMS world). The schema (if there is one), data, and any relationships should be exportable in a common format and convention (something based on JSON, maybe). An API for interacting with the database programmatically would be nice (REST and JSON, for example), but since I can't find anything that fits the above criteria, I'll settle for prototype-and-then-export functionality.
Unicode data getting lost during or after insert from file Posted: 26 Jun 2013 06:19 PM PDT I'm experiencing confusing behavior when bulk inserting data in SQL_Latin1_General_CP1_CI_AS on a Japanese server, and later selecting it. Extended characters like é are being converted to question marks, either during the SELECT or at some earlier point. This makes me think it's being converted to Unicode somewhere, but the file is Latin-1, the format file specifies SQL_Latin1_General_CP1_CI_AS, and the columns themselves are SQL_Latin1_General_CP1_CI_AS (verified in Properties). So I'm not sure where the problem is occurring. Is Management Studio silently converting the characters on SELECT? Here's the detailed setup: …
Right now I'm not even sure where to look. Maybe the issue is with the command, maybe with the table, maybe even with how I'm using SQL Server Management Studio. Can anybody suggest a way to narrow down the problem further? In the long run I actually want to convert the SELECTed data to Unicode, but the right way, so that accented characters are mapped to their Unicode equivalents.
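One common culprit in this situation is the code page applied at load time; a hedged sketch of forcing it explicitly during BULK INSERT (the file path, table name, and terminators are assumptions):

```sql
-- force the source file's code page so é and friends survive the load;
-- CODEPAGE = '1252' matches a Latin-1 file on a server whose default differs
BULK INSERT dbo.ImportTarget
FROM 'C:\data\latin1_file.txt'
WITH (
    CODEPAGE = '1252',
    FIELDTERMINATOR = '\t',
    ROWTERMINATOR = '\n'
);
```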
CPU (core) underutilization SQL Server 2008 R2 Standard SP2 Posted: 26 Jun 2013 02:53 PM PDT I have SQL Server 2008 R2 Standard running on a Hyper-V 2012 virtualized machine. The configuration is: 18 vCPUs, 22GB of RAM. SQL Server is running on a Win2k8 R2 VM, which uses 75% of the resources available on the physical machine (2x Xeon six-core @ 2.5GHz, 10x 300GB SAS 10K, 32GB RAM - Dell PowerEdge T620 - Hyper-V 2012 Server Core). Hyperthreading is on. In the VM's task manager I see 18 cores assigned, and when I run a CPU test, all 18 cores are maxed out at 99-100%. When I am running a CPU-intensive query, SQL Server uses only 1 core, or 5% of the CPU. The query takes almost an hour to run. When another user runs the same query at the same time, the server uses one more core, and the overall CPU usage goes to 10%. Why doesn't the server use all the available cores? Edit: when I set the VM to 4 vCPUs and run the same query, it utilizes all 4 cores evenly at about 25% overall CPU usage, but it still needs an hour to complete. A normal time would be 3-4 minutes.
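When a single query pegs only one core, the server-wide parallelism settings are a common first check; a sketch of inspecting them:

```sql
-- max degree of parallelism = 1 forces every query onto a single scheduler
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism';      -- run_value 0 = use all cores
EXEC sp_configure 'cost threshold for parallelism'; -- plans cheaper than this stay serial
```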
SQL replication Agent permission Posted: 26 Jun 2013 03:03 PM PDT
I have one SQL Server … from IP … How can I add a login for …
Reason for using hexadecimal in NCHAR()? Posted: 26 Jun 2013 02:14 PM PDT I found this in some source code today: … It looks like the error message string is adding a carriage return and line feed after the description. 0x0D is 13 (CR) and 0x0A is 10 (LF). Is there a reason to use hexadecimal instead of just integers? Normally what I've done is NCHAR(13) + NCHAR(10)…
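To see that the two spellings are interchangeable, a quick demonstration (pure T-SQL, nothing assumed beyond the built-in NCHAR function):

```sql
-- hexadecimal and decimal integer literals produce identical characters
SELECT NCHAR(0x0D) + NCHAR(0x0A) AS hex_crlf,
       NCHAR(13)   + NCHAR(10)   AS dec_crlf;
-- both columns contain the same CR+LF pair; the choice is purely stylistic
```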
SQL Server Designers, Failed Saves, and Generated Scripts Posted: 26 Jun 2013 01:10 PM PDT I am a big fan of the simple diagramming tool that comes with SSMS and use it frequently. When I save changes to the model, I have it configured to automatically generate the change scripts that go along with the save. I then save (and source control) the resulting change script. This works great and is an important piece of the process my team(s) use. What occasionally happens is that a save fails, and I still get the option to save my change script. I then fix the problem and save again (which results in another change script). I'm never clear what I need to do at this point to maintain a consistent set of change scripts. There seems to be overlap between the two scripts (the failed and the successful), but they are not identical. If I want to continue to use this feature, what should I be doing with the resulting script as soon as I get a failed save of the model?
Historic hierarchy analysis via SSAS and Excel Posted: 26 Jun 2013 02:36 PM PDT I am in the process of constructing an SSAS cube for a client and ran into the following issue: the client is doing organisational analysis and needs to be able to analyse all relevant measures based on the organisational structure as it was at a certain point in time. The setup is as follows: the organisational structure is a ragged hierarchy which is stored in a Type 2 fashion with all the relevant effective dates and states. The facts (measures) are linked based on a surrogate key. I have set all the relevant SCD types on the organisational structure dimension attribute type properties in SSAS. The question is: are there any articles or pointers that can assist in providing the ability, via Excel, for a user to specify the "date" of the organisational structure and have the structure as of that date reflected, while still being able to see and interact with all other information (both current and historic)? The functionality can be provided via SQL by grouping on the "business key" and filtering the organisational structure based on the given date, as sketched below. I have searched through the SSAS documentation and various articles but have thus far not been able to find a solution. Any help or pointers would be appreciated. Thanks in advance, Jacques Buitendag
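A hedged sketch of the relational approach the question describes — re-mapping facts to the dimension version that was effective on a chosen date via the business key. All table and column names are assumptions:

```sql
DECLARE @as_of DATE = '2013-01-01';
-- map every fact (linked to some historical version) to the version of its
-- organisation that was effective on @as_of, via the durable business key
SELECT asof.org_name, asof.parent_org, SUM(f.amount) AS total_amount
FROM fact_measures AS f
JOIN dim_organisation AS v            -- version the fact was loaded against
  ON f.org_sk = v.org_sk
JOIN dim_organisation AS asof         -- version effective on the chosen date
  ON asof.org_business_key = v.org_business_key
 AND @as_of >= asof.effective_from
 AND @as_of <  asof.effective_to      -- half-open Type 2 validity interval
GROUP BY asof.org_name, asof.parent_org;
```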
Query performance differs greatly between a development setup and production Posted: 26 Jun 2013 12:20 PM PDT I'll try to keep this question straightforward, though I am dealing with a big ball of mud. When I run my test query across linked servers both located locally (on a shared virtual host), the query is fast, at about 9 seconds. When I run the same query across linked servers (one local, one about 1,200 miles away) it is MUCH slower, at 5:23. I am trying to learn how to analyze an execution plan, but are there other probable causes for this sort of thing? Edit: Based on @Mat's comment, here is an example. DISCLAIMER: I do not vouch for the quality of this code. … Is this a "chatty" query? And also, I guess "chatty" would mean that it is round-tripping for each row on the INSERT?
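The asker's example was stripped from this digest; purely as a hypothetical illustration of the round-trip-per-row pattern the question asks about (server, database, and table names invented):

```sql
-- "chatty": a loop that crosses the linked server once per row,
-- paying the WAN latency on every iteration
DECLARE @id INT;
DECLARE c CURSOR FOR SELECT id FROM dbo.SourceRows;
OPEN c;
FETCH NEXT FROM c INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO RemoteServer.RemoteDb.dbo.Target (id)   -- one round trip per row
    SELECT id FROM dbo.SourceRows WHERE id = @id;
    FETCH NEXT FROM c INTO @id;
END
CLOSE c;
DEALLOCATE c;

-- versus a single set-based statement that crosses the link once:
-- INSERT INTO RemoteServer.RemoteDb.dbo.Target (id) SELECT id FROM dbo.SourceRows;
```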
How to find parent rows that have identical sets of child rows? Posted: 26 Jun 2013 03:46 PM PDT Suppose I have a structure like this: … What are some good ways of finding duplicate recipes? A duplicate recipe is defined as having the exact same set of ingredients and the same quantities for each ingredient. I've thought of using … Edit: There are 48K recipes and 200K ingredient rows.
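One common approach is to build a canonical "fingerprint" of each parent's child set and group on it; a MySQL-flavored sketch (the dialect and all table/column names are assumptions, since the question's schema was stripped from this digest):

```sql
-- fingerprint each recipe as its ordered ingredient:quantity list,
-- then report fingerprints shared by more than one recipe
SELECT fingerprint,
       GROUP_CONCAT(recipe_id) AS duplicate_recipe_ids
FROM (
    SELECT recipe_id,
           GROUP_CONCAT(CONCAT(ingredient_id, ':', quantity)
                        ORDER BY ingredient_id) AS fingerprint
    FROM recipe_ingredients
    GROUP BY recipe_id
) AS per_recipe
GROUP BY fingerprint
HAVING COUNT(*) > 1;
```

At 48K recipes / 200K ingredient rows this stays cheap, though GROUP_CONCAT's default 1024-byte limit (group_concat_max_len) may need raising for long ingredient lists.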
Managing the transaction log during restore Posted: 26 Jun 2013 11:24 AM PDT We are using SQL Server 2008 R2. I've got transaction log shipping set up between my servers, and everything is working just fine as far as my log backups being created, transferred, and restored. However, I noticed that the actual transaction log of my backup database, while in the "restoring" state, is very large. The database is about 200GB and its log is 146GB. That makes sense to me since the .bak file is 142GB, but maybe that is just a coincidence. The .bak file was restored with the NORECOVERY option so that the log backups could be restored as they are received by the server. It seems like that 146GB log doesn't need to be that large after the restore of the initial .bak file. Each of my log backups that gets restored is roughly 10GB in size, so I figure that a log file of around 15GB would suffice. I would really like that 130GB of space back. Is there any way to make the transaction log file smaller while the database is in the "restoring" state? Or would I just have to wait until a disaster scenario, when the database is actually in a usable state, to shrink the log file?
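For background: a restore recreates all files at the sizes recorded in the backup, and shrink commands require an online database, so the shrink can only happen after recovery. A sketch of that post-recovery step (the logical log file name and target size are assumptions):

```sql
-- only possible once the database has been brought online
-- (RESTORE DATABASE ... WITH RECOVERY ends the log shipping chain)
USE MyShippedDb;
DBCC SHRINKFILE (MyShippedDb_log, 15000);   -- target size in MB
```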
How to calculate total ON hours and total OFF hours in a day of a motor using PHP and MySQL Posted: 26 Jun 2013 06:44 PM PDT In my application I need to calculate the total ON hours and OFF hours in a day of the motors in one tank. Every time someone turns the motors ON/OFF, a row is stored in … While searching for the same thing on Google I found the following URL: calculating total login-logout time of a particular user in mysql. My rearranged query is: … but the SQL query given at that link is not working for me. The table in my database looks like: … Please help me. Thanks in advance
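The question's table definition was stripped from this digest, so here is a hedged sketch assuming a log table motor_log(motor_id, status, changed_at) where status alternates between 'ON' and 'OFF':

```sql
-- total ON hours per motor for one day: pair each ON event with the next
-- OFF event and sum the gaps (MySQL 5.x, so no window functions)
SELECT m_on.motor_id,
       SUM(TIMESTAMPDIFF(SECOND, m_on.changed_at,
           (SELECT MIN(m_off.changed_at)
            FROM motor_log AS m_off
            WHERE m_off.motor_id   = m_on.motor_id
              AND m_off.status     = 'OFF'
              AND m_off.changed_at > m_on.changed_at))) / 3600.0 AS on_hours
FROM motor_log AS m_on
WHERE m_on.status = 'ON'
  AND DATE(m_on.changed_at) = '2013-06-26'
GROUP BY m_on.motor_id;
```

OFF hours would be 24 minus on_hours only if the motor's state is known at the day's boundaries; otherwise the complementary query pairing OFF events with the next ON is needed.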
Understanding SIX lock in Microsoft SQL Server Posted: 26 Jun 2013 07:10 PM PDT Can somebody explain to me how a process can acquire … What does this mean, and how could that lock have been acquired? From what I got from http://msdn.microsoft.com/en-us/library/aa213039%28v=sql.80%29.aspx … For my case that would be … Another thing is that I expect several … EDIT: Deadlock xml: …
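For background: SIX is "Shared with Intent Exclusive" — the transaction holds a shared lock on the whole resource and intends to take exclusive locks on some rows or pages beneath it. A hedged demonstration of one way it arises (the table name is an assumption):

```sql
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
    -- S lock at table level, held to end of transaction under REPEATABLE READ
    SELECT COUNT(*) FROM dbo.Orders WITH (TABLOCK);
    -- the update needs IX at table level; S + IX converts the lock to SIX
    UPDATE dbo.Orders SET Status = 'X' WHERE OrderId = 1;
    -- inspect the session's locks
    SELECT resource_type, request_mode
    FROM sys.dm_tran_locks
    WHERE request_session_id = @@SPID;   -- shows SIX on the OBJECT resource
COMMIT;
```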
Problem adding a new node to a SQL Server 2012 Failover Cluster Posted: 26 Jun 2013 11:05 AM PDT
Eliminating duplicate records in data cleansing Posted: 26 Jun 2013 08:47 PM PDT I have a database full of records of people, with simple information like first name, last name, email, location, etc. I need to eliminate the duplicate records. As I've searched, the process is called "duplicate elimination" in data cleansing. Does anyone know a good open source tool to do that?
Statistical Analysis of Data that has to be done in an order? Posted: 26 Jun 2013 05:45 PM PDT Bear with me - this is the first time I've tried that in SQL Server; normally I have been doing it on the front end ;) I am implementing some analysis on time-coded data series. This is not super complicated stuff, but some of it requires some numbers we do not store in the database, which have to be calculated by aggregating the numbers with a specific algorithm IN ORDER. To give an example: …
This cannot be pre-calculated due to dynamic filtering - there are a number of filters that can be applied to the data. So far - in the past - I pulled the data into the application; now, for the standard stuff, I plan to try to keep it in SQL Server. My problem now is - I can see how that works (acceptably) in SQL Server: … But if I put that into a view and then filter out rows, the sum is still calculated from the beginning. And I need a view because I want (need) to map that standard analysis data into an ORM (so dynamic SQL is out). Anyone have an idea how to do that?
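A hedged sketch of the ordered running aggregate being described, using SUM() OVER with a frame (SQL Server 2012+; on older versions the same result needs a correlated subquery). All names are assumptions:

```sql
-- running total per series, computed in timestamp order
SELECT series_id,
       ts,
       value,
       SUM(value) OVER (PARTITION BY series_id
                        ORDER BY ts
                        ROWS UNBOUNDED PRECEDING) AS running_total
FROM dbo.TimeSeries;
```

If the filter must apply before the aggregation, an inline table-valued function taking the filter values as parameters behaves like a parameterized view and still maps into most ORMs, which may be a way around the view limitation described above.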
Clear schema from database without dropping it Posted: 26 Jun 2013 06:45 PM PDT I'm working on a school project where I have a SQL Server with a database for my team. I already imported a local database created with Entity Framework. Now the model has changed - table properties were added/deleted - and I want to update my full database. However, the teachers didn't give us CREATE DATABASE rights, so dropping the whole database isn't really an option. So my question is: is it possible to drop all the tables currently in the database and just import the newly created one without problems? Or do I really need to drop the whole database?
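Dropping every user table without touching the database itself is possible; a sketch that generates the DROP statements (foreign keys between the tables may force dropping the constraints first, or running the output in dependency order):

```sql
-- emit one DROP TABLE per user table; copy the output and run it
SELECT 'DROP TABLE ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ';'
FROM sys.tables AS t
JOIN sys.schemas AS s ON t.schema_id = s.schema_id;
```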
Are there any disadvantages to partitioning on financial year? Posted: 26 Jun 2013 11:39 AM PDT Our current setup has one table per financial year (May 1 - April 30). Each table has approx 1.5 million rows. We have about 8 years of data, and will obviously be adding each year. The majority of queries are within the financial year/one partition. Either … My plan is to have a range partition on an InnoDB table, e.g. … (a sketch of such a table follows below). This means that the PK has to become … Are there any significant disadvantages to partitioning compared to having an unpartitioned table? I know that means the PK is now length 12 and all further indexes will have it prepended. Does that make a difference? The table needs to work faster on reads than writes, and there are a fair few indexes on it.
We do sometimes need to query across all time or over "the last X months", but this is pretty rare. The main advantages of moving to a single table are eliminating the logic in the application that works out which table to insert/update/select from, and not needing to build unions in those situations where we need more than one table.
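A hedged sketch of what the RANGE-partitioned replacement might look like (the question's actual DDL was stripped from this digest; names and the key shape are assumptions). MySQL requires the partitioning column to be part of every unique key, which is why the PK grows:

```sql
CREATE TABLE ledger (
    id       INT UNSIGNED NOT NULL,
    fin_year SMALLINT NOT NULL,         -- e.g. 2013 = May 2013 - Apr 2014
    amount   DECIMAL(12,2) NOT NULL,
    PRIMARY KEY (id, fin_year)          -- fin_year must join the PK
) ENGINE=InnoDB
PARTITION BY RANGE (fin_year) (
    PARTITION p2012 VALUES LESS THAN (2013),
    PARTITION p2013 VALUES LESS THAN (2014),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);
```

Queries that include fin_year in the WHERE clause get partition pruning and touch a single partition, which matches the stated access pattern.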
SUPER privilege not defined for master user in Amazon MySQL RDS Posted: 26 Jun 2013 02:45 PM PDT I have created one medium instance on Amazon RDS in the Asia Pacific (Singapore) region. I have created my master user with a master password, and it is working/connecting fine with Workbench installed on my local PC. When I go to create a function on that instance, it shows me the following error: …
On my instance, the variable (log_bin_trust_function_creators) shows OFF. Now when I go to change that variable using … it gives me another error: …
I don't know how to solve this error. Can anybody help?
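RDS never grants SUPER to the master user, so SET GLOBAL is not available; the supported route is a DB parameter group attached to the instance. A hedged sketch with the AWS CLI (the group name is an assumption, and the same change can be made in the console):

```bash
# flip log_bin_trust_function_creators in the instance's parameter group
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mysql-params \
    --parameters "ParameterName=log_bin_trust_function_creators,ParameterValue=1,ApplyMethod=immediate"
```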
How to import a table's data into MySQL from SQL Server? Posted: 26 Jun 2013 04:45 PM PDT I am trying to export a table from SQL Server 2008 R2 to MySQL 5.5. For this I am using … This error may be occurring because the table in SQL Server has a column with data type … Please provide your expert answers. If it is not possible through …
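The tool and error were stripped from this digest, but one manual route that sidesteps most type-mapping issues is export-to-CSV plus LOAD DATA; a hedged sketch (paths, names, and terminators invented):

```sql
-- 1) dump the table from SQL Server in character mode (run at a Windows prompt):
--    bcp "SELECT * FROM MyDb.dbo.MyTable" queryout mytable.csv -c -t"," -S host -T
-- 2) load the file into a MySQL table created with matching columns:
LOAD DATA LOCAL INFILE 'mytable.csv'
INTO TABLE mytable
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\r\n';
```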
Is it possible to pipe the result of a mysqldump straight to rsync as the source argument? Posted: 26 Jun 2013 03:45 PM PDT Is it possible to pipe the result of a mysqldump straight to rsync as the source argument? Conceptually, I was thinking something like: … I've seen people pipe the result to mysql for their one-liner backup solution, but I was curious if it was possible with rsync. You know - 'cause rsync is magic :) Thanks for your time!
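rsync reads real files rather than stdin, so a straight pipe won't work; piping over ssh gets the same one-liner effect. A hedged sketch (host and paths are assumptions):

```bash
# stream the dump, compress in transit, land it as a file on the backup host
mysqldump --all-databases --single-transaction \
  | gzip \
  | ssh user@backup-host 'cat > /backups/dump-$(date +%F).sql.gz'
```

The trade-off versus rsync is that the whole dump is re-sent each time; rsync's delta transfer only helps once the dump exists as a file on the source side.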
MySQL Workbench sync keeps requesting the same changes Posted: 26 Jun 2013 08:45 PM PDT I am using MySQL Workbench, and when I try to "synchronize" it with my remote database, it keeps detecting changes to make. Specifically, the most recurrent ones are: …
I was compliant and executed all the queries given to me (and added the semicolon that they forgot). MySQL didn't complain and executed them. However, it didn't help; I can run it 20 times in a row and it will still ask for the same useless changes.
SQL Developer: Setup debugger for PL/SQL Posted: 26 Jun 2013 08:50 PM PDT I'm trying to debug PL/SQL remotely, but I can't - the database returns an error: … What should I do to fix this and start debugging? UPD: …
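The error text was stripped from this digest, but the usual prerequisites for SQL Developer's remote PL/SQL debugging are the debug grants; a hedged sketch (the username is an assumption, and on 11g an ACL opening the JDWP port may also be required):

```sql
-- run as a DBA; lets the user attach the debugger to their own session
GRANT DEBUG CONNECT SESSION TO scott;
-- needed to step into procedures owned by other schemas
GRANT DEBUG ANY PROCEDURE TO scott;
```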
Deleting Data From Multiple Tables Posted: 26 Jun 2013 01:25 PM PDT Suppose I have a table called UNIVERSITY containing university names: … Now these university IDs are (obviously) used in many tables within the database (named, e.g., Education) - suppose 10 tables. Q: Now what happens if I delete one university? A: The universityID field in the other tables becomes NULL. But I don't want that; rather, when I delete one university from the UNIVERSITY table, all rows referencing it in all 10 tables should be deleted too. What would be the shortest and easiest MySQL query for this operation? NOTE: I'm using PHP.
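Rather than one giant query, the usual answer is to declare the foreign keys with ON DELETE CASCADE so a single-row DELETE fans out automatically; a hedged sketch for one of the ten child tables (constraint and column names are assumptions, and the tables must be InnoDB):

```sql
ALTER TABLE education
    ADD CONSTRAINT fk_education_university
    FOREIGN KEY (universityID)
    REFERENCES university (universityID)
    ON DELETE CASCADE;

-- afterwards, one statement removes the university and its rows everywhere:
DELETE FROM university WHERE universityID = 3;
```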
Why does a MySQL (InnoDB) table get faster after OPTIMIZE TABLE, but then it doesn't work? Posted: 26 Jun 2013 01:37 PM PDT I have a Django web application that stores data in a MySQL InnoDB database. There is a particular page that is accessed a lot in the Django admin, and the query is taking a long time (~20 seconds). Since it's Django internals, the query cannot be changed. There are 3 tables, … A simple join-3-tables-together query. The … However, the query plan looks wrong and says it's not using any keys on B (but it is using the keys for the other joins). From reading lots of MySQL performance material, it should in theory be using indexes, since it's various const joins that can 'fall through'. It says it has to scan all ~1,200 rows of B. The weird thing is that I … So we ran that on the live system… and it didn't change anything (neither the speed nor the explain). I recreated my MySQL database (…). Why does this happen? How can I get MySQL to use the correct indexes and get back my 0.02s query time? This blog post (http://www.xaprb.com/blog/2010/02/07/how-often-should-you-use-optimize-table/) implies that …
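One detail worth knowing (a fact about InnoDB, not taken from the stripped question body): OPTIMIZE TABLE on InnoDB maps to a table rebuild followed by an index-statistics refresh, so when stale statistics alone are causing the bad plan, the cheaper ANALYZE TABLE may suffice - or an index hint can pin the plan. A sketch (the index, column, and join names are assumptions):

```sql
-- refresh InnoDB's sampled index statistics for the misbehaving table
ANALYZE TABLE B;

-- or pin the plan with an index hint:
SELECT A.id, B.name, C.value
FROM A
JOIN B FORCE INDEX (ix_b_a_id) ON B.a_id = A.id
JOIN C ON C.b_id = B.id;
```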
Minimizing Indexed Reads with Complex Criteria Posted: 26 Jun 2013 01:45 PM PDT I'm optimizing a Firebird 2.5 database of work tickets. They're stored in a table declared as such: … I generally want to find the first ticket that hasn't been processed and is in … My processing loop would be: …
Nothing too fancy. If I'm watching the database while this loop runs, I see the number of indexed reads climb for each iteration. The performance doesn't seem to degrade terribly, as far as I can tell, but the machine I'm testing on is pretty quick. However, I've received reports of performance degradation over time from some of my users. I've got an index on … -- Edits for comments -- In Firebird you limit row retrieval like: … So when I say "first", I'm just asking it for a limited record set where …
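For readers unfamiliar with the dialect, the row-limiting clause Firebird uses is FIRST/SKIP; a hedged sketch of the "first unprocessed ticket" fetch described above (table and column names assumed from the description):

```sql
-- fetch one candidate ticket in a deterministic order; an index covering
-- (processed, status, id) is what keeps the indexed-read count flat
SELECT FIRST 1 id
FROM tickets
WHERE processed = 0
  AND status = 'QUEUED'
ORDER BY id;
```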