Wednesday, June 26, 2013

[how to] "Cannot add or update a child row" when deleting a record?



"Cannot add or update a child row" when deleting a record?

Posted: 26 Jun 2013 05:22 PM PDT

I have the two following tables:

    survey_main
    -----------
    id
    name

    survey_answers
    --------------
    id
    survey_id

I have the following foreign key constraint on the survey_answers table:

CONSTRAINT `fk_answers_main_survey` FOREIGN KEY (`survey_id`) REFERENCES `survey_main` (`id`) ON DELETE CASCADE)  

If I try to delete a record from survey_main that has child records in the survey_answers table I get the following error:

ERROR 1452 (23000): Cannot add or update a child row: a foreign key constraint fails (surveyplanet.survey_answers, CONSTRAINT fk_answers_main_survey FOREIGN KEY (survey_id) REFERENCES survey_main (id) ON DELETE CASCADE)

I understand what the error is saying, but shouldn't the fact that I have cascading deletes make it so this error would never be thrown? What am I missing here?
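For reference, here is a minimal sketch of how ON DELETE CASCADE is expected to behave (the column types and the InnoDB engine choice are assumptions, not taken from the original tables); if a self-contained test like this works, the difference between it and the live tables is usually where the answer lies.

    -- Minimal cascade-delete sketch; types and ENGINE are assumptions.
    CREATE TABLE survey_main (
        id INT PRIMARY KEY,
        name VARCHAR(100)
    ) ENGINE=InnoDB;

    CREATE TABLE survey_answers (
        id INT PRIMARY KEY,
        survey_id INT,
        CONSTRAINT fk_answers_main_survey FOREIGN KEY (survey_id)
            REFERENCES survey_main (id) ON DELETE CASCADE
    ) ENGINE=InnoDB;

    INSERT INTO survey_main VALUES (1, 'Demo');
    INSERT INTO survey_answers VALUES (10, 1);

    DELETE FROM survey_main WHERE id = 1;   -- should also remove the child row
    SELECT COUNT(*) FROM survey_answers;    -- expected: 0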

Create Log Shipping Step Failed After ServerName Changed

Posted: 26 Jun 2013 05:30 PM PDT

I was doing some tests on a server name change and encountered some errors.

Here is the original setup: Server name - ServerA, SQL Server default instance - ServerA

Changes: Server name - ServerB

Before changing the SQL Server default instance 'servername', I ran:

SELECT @@SERVERNAME, SERVERPROPERTY('SERVERNAME')  

This returns ServerA, ServerB (since I have already changed the Windows server name).

exec sp_add_log_shipping_primary_database completed successfully. However, there was an error when running the log shipping backup job:

Error: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections.

As expected. So I changed the SQL Server default instance 'servername':

    EXEC sp_dropserver 'ServerA'
    EXEC sp_addserver 'ServerB', 'local'

Then I restarted the SQL Server service.

SELECT @@SERVERNAME, SERVERPROPERTY('SERVERNAME')  

This now returns ServerB, ServerB. OK, I deleted the previous log shipping backup job and ran sp_add_log_shipping_primary_database again, but received this error:

Msg 14234, Level 16, State 1, Procedure sp_verify_jobstep, Line 214 The specified '@server' is invalid (valid values are returned by sp_helpserver).

sp_helpserver returns ServerB under both the name and network_name columns, with id 0.

So the question is: am I missing some steps when changing the server name, or is this a bug?

Version - SQL Server 2012 SP1

App for rapid prototyping of relational data structures

Posted: 26 Jun 2013 04:05 PM PDT

What are some apps (especially web apps) that provide an Extremely Lightweight user interface for building, inserting test data into, and querying a relational data structure? The app should have some kind of "visual" interface (even if only a dropdown) for defining relationships between properties (columns in the RDBMS world).

The schema (if there is one), data, and any relationships should be exportable in a common format and convention (something based on JSON, maybe). An API for interacting with the database programmatically would be nice (REST and JSON, for example), but since I can't find anything that fits the above criteria, I'll settle for prototype-and-then-export functionality.

Unicode data getting lost during or after insert from file

Posted: 26 Jun 2013 06:19 PM PDT

I'm experiencing confusing behavior when bulk inserting data in SQL_Latin1_General_CP1_CI_AS, on a Japanese server, and later selecting it. Extended characters like é are being converted to question marks, either during the SELECT or at some earlier point.

This makes me think it's being converted to Unicode somewhere, but the file is Latin-1, the format file specifies SQL_Latin1_General_CP1_CI_AS, and the columns themselves are SQL_Latin1_General_CP1_CI_AS (verified in Properties). So I'm not sure where the problem is occurring. Is Management Studio silently converting the characters on SELECT?

Here's the detailed setup:

  1. Unzip the text file and load it with: MERGE INTO [table] AS [alias] USING (SELECT [columns] FROM OPENROWSET (BULK [datafile], FORMATFILE=[formatfile] ...

  2. The format file specifies the columns like: 2 SQLCHAR 0 0 "\"|" 1 MYCOL1 SQL_Latin1_General_CP1_CI_AS

  3. The table columns are VARCHAR and have SQL_Latin1_General_CP1_CI_AS collation. However, the database itself may have a different default collation (Japanese, Unicode, or Latin1_General_CI_AS).

  4. When I select in SQL Server Management Studio: SELECT mycol FROM mytable WHERE active='yes'; I get cells like: Associ? Same result for this: SELECT CONVERT(NVARCHAR, mycol) FROM mytable WHERE active='yes';

Right now I'm not even sure where to look. Maybe the issue is with the command, maybe with the table, maybe even with how I'm using SQL Server Management Studio. Can anybody provide a suggestion for narrowing down the problem further?

In the long run I actually want to convert the SELECT'ed data to Unicode, but the right way, so that accented characters are mapped to their Unicode equivalents.
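As a first narrowing-down step, one hedged sketch (column and table names taken from the question; the lengths are assumptions) is to look at the raw bytes actually stored: if é comes back as 0xE9 the data survived the load and the problem is in conversion or display, while 0x3F (a literal question mark) means it was already lost on the way in.

    -- Compare the stored bytes with a converted rendering (lengths are assumptions).
    SELECT mycol,
           CONVERT(VARBINARY(100), mycol) AS raw_bytes,
           CONVERT(NVARCHAR(100), mycol)  AS as_unicode
    FROM mytable
    WHERE active = 'yes';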

CPU(Core) underutilization SQL server 2008 R2 Standard SP2

Posted: 26 Jun 2013 02:53 PM PDT

I have SQL Server 2008 R2 Standard running on a Hyper-V 2012 virtualized machine. The configuration is: 18 vCPUs, 22 GB of RAM. SQL Server runs on a Windows Server 2008 R2 VM, which uses 75% of the resources available on the physical machine (2x Xeon six-core @ 2.5 GHz, 10x 300 GB SAS 10K, 32 GB RAM - Dell PowerEdge T620 - Hyper-V 2012 Server Core). Hyperthreading is on.

In the VM's Task Manager I see 18 cores assigned, and when I run a CPU test, all 18 cores are maxed out at 99-100%. When I run a highly intensive query, SQL Server uses only 1 core, or about 5% of the CPU. The query takes almost an hour to run. When another user runs the same query at the same time, the server uses one more core, and the overall CPU usage goes to 10%. Why doesn't the server use all the available cores?

When I run select scheduler_id, cpu_id, status, is_online from sys.dm_os_schedulers where status='VISIBLE ONLINE' it returns 18 cores.

Edit: when I set the VM to 4 vCPUs and run the same query, it utilizes all 4 cores evenly at about 25% overall CPU usage, but it still needs an hour to complete; a normal time should be 3-4 minutes.
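A hedged sketch of the instance-level parallelism settings that govern whether a single query can use more than one scheduler (shown here only for inspection; appropriate values depend on the workload):

    -- Inspect the parallelism-related instance settings.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism';      -- 0 means a query may use all schedulers
    EXEC sp_configure 'cost threshold for parallelism'; -- plans cheaper than this stay single-threaded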

SQL replication Agent permission

Posted: 26 Jun 2013 03:03 PM PDT

Error messages: The schema script 'teste2_2.sch' could not be propagated to the subscriber. (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147201001)

The process could not read file 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.VUZIQ_ENTP\MSSQL\ReplData\unc\ADSERVER$VUZIQ_ENTP_TESTE2_TESTE2\20130626155609\teste2_2.sch' due to OS error 3. (Source: MSSQL_REPL, Error number: MSSQL_REPL0) Get help:

I have one SQL Server at IP 199.168.***.*** working as the Publisher and another server at 64.37.**.** working as the Subscriber. The local subscriptions on 199.168.***.*** work fine, but the external Subscriber at 64.37.**.** can't load the file.

How can I add a login for the 64.37.**.** server so it has read/write permission on the 199.168.***.*** server?

Reason for using hexadecimal in NCHAR()?

Posted: 26 Jun 2013 02:14 PM PDT

I found this in some source code today:

    SELECT
        @Error = ERROR_NUMBER(),
        @ErrorMsg = N'An Error occured while populating the TABLE ' + @DestinationTableName +
                    N' with data. Error Code: ' + CAST(@Error AS nvarchar(20)) +
                    N', Error Description: "' + ERROR_MESSAGE() + N'".'
                    + NCHAR(0X0D) + NCHAR(0X0A) + NCHAR(0X0D) + NCHAR(0X0A),
        @ErrorSeverity = ERROR_SEVERITY(),
        @ErrorState = ERROR_STATE();

It looks like the error message string is appending two CR/LF line breaks after the description: 0x0D is 13 (carriage return) and 0x0A is 10 (line feed).

Is there a reason to use hexadecimal instead of just integers?

Normally what I've done is NCHAR(13) + NCHAR(10)...
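A quick sanity check that the two spellings produce the same characters (a hex literal is just another way of writing the same integer):

    SELECT NCHAR(0x0D) + NCHAR(0x0A) AS from_hex,
           NCHAR(13)   + NCHAR(10)   AS from_decimal,
           CASE WHEN NCHAR(0x0D) + NCHAR(0x0A) = NCHAR(13) + NCHAR(10)
                THEN 'same' ELSE 'different' END AS comparison;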

SQL Server Designers, Failed Saves, and Generated Scripts

Posted: 26 Jun 2013 01:10 PM PDT

I am a big fan of the simple diagramming tool that comes with SSMS, and use it frequently. When I save changes to the model, I have it configured to automatically generate the change scripts that go along with the save. I then save (and source control) the resulting change script. This works great and is an important piece of the process my teams use.

What occasionally happens is that a save fails, and I still get the option to save my change script. I then fix the problem and save again (which results in another change script).

I'm never clear what I need to do at this point to maintain a consistent set of change scripts. There seems to be overlap between the two scripts (the failed and the successful), but they are not identical.

If I want to continue to use this feature, what should I be doing with the resulting script as soon as I get a failed save of the model?

Historic hierarchy analysis via SSAS and Excel

Posted: 26 Jun 2013 02:36 PM PDT

I am in the process of constructing a SSAS cube for a client and ran into the following issue:

The client is doing organisational analysis and needs to be able to analyse all relevant measures based on the organisational structure as it was at a certain point in time.

The setup is as follows. The organisational structure is a ragged hierarchy which is stored in a Type 2 fashion with all the relevant effective dates and states. The facts (measures) are linked based on a surrogate key. I have set all the relevant SCD types on the organisational structure dimension attribute type properties in SSAS.

The question is: are there any articles or pointers that can assist in providing the ability, via Excel, for a user to specify the "date" of the organisational structure and have the structure as it was at that date reflected, while still being able to see and interact with all other information (both current and historic)?

The functionality can be provided via SQL by grouping on the "business key" and filtering the organisational structure based on the given date. I have searched through the documentation of SSAS and various articles but have thus far not been able to find a solution.

Any help or pointers would be appreciated.

Thanks in advance, Jacques Buitendag

Query performance differs greatly between a development setup and production

Posted: 26 Jun 2013 12:20 PM PDT

I'll try to keep this question straightforward, though I am dealing with a big ball of mud.

When I run my test query across linked servers that are both located locally (on a shared virtual host), the query is fast, at about 9 seconds.

When I run the same query across linked servers (one local, one about 1,200 miles away) it is MUCH slower, at 5 minutes 23 seconds.

I am trying to learn how to analyze an execution plan, but are there other probable causes for this sort of thing?

Edit: Based on @Mat's comment, here is an example.

DISCLAIMER: I do not vouch for the quality of this code.

    SELECT * INTO TempQTYOnHand FROM [RemoteServer].remoteDB.dbo.QuantityOnHand

    DELETE FROM [RemoteServer].remoteDB.dbo.QuantityOnHand

    INSERT INTO [RemoteServer].remoteDB.dbo.QuantityOnHand
    SELECT DISTINCT ProductId, Onhand
    FROM Inventory
    WHERE ProductID IN (SELECT ProductId FROM TempQTYOnHand)

    DROP TABLE TempQTYOnHand

Is this a "chatty" query? I guess "chatty" would mean that it round-trips to the remote server for each row on the INSERT?
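For illustration, a hedged sketch of one commonly suggested rewrite of the final INSERT (not guaranteed to be faster in this case; table and column names are taken from the query above): targeting the remote table through OPENQUERY hands the insert to the remote server as one set, rather than having the local server drive it row by row across the link.

    -- Insert through OPENQUERY so the remote side receives a single set-based operation.
    INSERT INTO OPENQUERY([RemoteServer],
        'SELECT ProductId, Onhand FROM remoteDB.dbo.QuantityOnHand')
    SELECT DISTINCT ProductId, Onhand
    FROM Inventory
    WHERE ProductID IN (SELECT ProductId FROM TempQTYOnHand);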

How to find parent rows that have identical sets of child rows?

Posted: 26 Jun 2013 03:46 PM PDT

Suppose I have structure like this:

    Recipes Table:
        RecipeID
        Name
        Description

    RecipeIngredients Table:
        RecipeID
        IngredientID
        Quantity
        UOM

What are some good ways for finding duplicate recipes? A duplicate recipe is defined as having the exact same set of ingredients and quantities for each ingredient.

I've thought of using FOR XML PATH to combine the ingredients into a single column. I haven't fully explored this but it should work if I make sure the ingredients/UOMs/quantities are sorted in the same sequence and have a proper separator. Are there better approaches?

Edit: There are 48K recipes and 200K ingredient rows.
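A hedged sketch of the FOR XML PATH idea mentioned above (the separator, the casts, and ordering by IngredientID are assumptions): build one canonical ingredient string per recipe, then group recipes by that string.

    WITH RecipeSignatures AS (
        SELECT r.RecipeID,
               (SELECT CAST(ri.IngredientID AS varchar(20)) + ':' +
                       CAST(ri.Quantity AS varchar(30)) + ':' + ri.UOM + '|'
                FROM RecipeIngredients ri
                WHERE ri.RecipeID = r.RecipeID
                ORDER BY ri.IngredientID
                FOR XML PATH('')) AS Signature
        FROM Recipes r
    )
    SELECT Signature, COUNT(*) AS DuplicateCount
    FROM RecipeSignatures
    GROUP BY Signature
    HAVING COUNT(*) > 1;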

Managing the transaction log during restore

Posted: 26 Jun 2013 11:24 AM PDT

We are using SQL Server 2008 R2.

I've got transaction log shipping set up between my servers and everything is working just fine as far as my log backups being created, transferred, and restored. However, I noticed that the actual transaction log of my backup database while in the "restoring" state is very large. The database is about 200GB and its log is 146GB. That makes sense to me since the .bak file is 142GB, but maybe that is just a coincidence. The .bak file was restored with the NO RECOVERY option so that the log backups could be restored as they are received by the server.

It seems like that 146GB log doesn't need to be that large after the restore of the initial .bak file. Each of my log backups that gets restored is roughly 10GB in size, so I figure that a log file of around 15GB would suffice. I would really like that 130GB of space back.

Is there any way to make the transaction log file smaller while the database is in the "restoring" state? Or would I just have to wait until a disaster scenario when the database is actually in a usable state to shrink the log file then?

How to calculate total ON hours and Total OFF hours in day of a motor using PHP and MySQL

Posted: 26 Jun 2013 06:44 PM PDT

In my application I need to calculate the total ON hours and OFF hours per day for the motors in one tank. Every time a user turns a motor ON or OFF, a row is stored in the motor_status_log table of my database.

While searching for the same thing on Google I found the following URL: calculating total login-logout time of a particular user in mysql

My rearranged query is:

"SELECT ilv.motor_id,SUM(UNIX_TIMESTAMP(offtime)-UNIX_TIMESTAMP(ontime)) FROM (SELECT a.motor_id, a.timestamp AS offtime,(SELECT MAX(b.timestamp) FROM motor_status_log b WHERE b.motor_id=a.motor_id AND b.timestamp<a.timestamp AND b.runningstatus='on') as ontime FROM motor_status_log a WHERE a.runningstatus='off' AND a.timestamp BETWEEN '2013-06-26 00:00:00' AND '2013-06-26 23:59:59' AND motor_id='$motor_id') ilv GROUP BY ilv.motor_id"  

but using the SQL query given in the above link, it is not working.

And table in my database is like:

    +--------+-------+---------+---------------------+---------------+
    |user_id |Tank_id|motor_id | timestamp           | runningstatus |
    +--------+-------+---------+---------------------+---------------+
    |   1    |   1   |    1    | 2013-06-26 09:53:29 | off           |
    |   1    |   1   |    1    | 2013-06-26 10:45:42 | on            |
    |   1    |   1   |    1    | 2013-06-26 10:55:29 | off           |
    |   1    |   1   |    1    | 2013-06-26 17:55:39 | on            |
    |   1    |   2   |    2    | 2013-06-26 17:56:39 | off           |
    |   1    |   2   |    2    | 2013-06-26 08:53:29 | off           |
    |   1    |   2   |    2    | 2013-06-26 10:45:42 | on            |
    |   1    |   2   |    2    | 2013-06-26 12:55:29 | off           |
    |   1    |   2   |    2    | 2013-06-26 14:55:39 | on            |
    |   1    |   3   |    3    | 2013-06-26 17:56:39 | off           |
    +--------+-------+---------+---------------------+---------------+

Please help me. Thanks in advance

Understanding SIX lock in Microsoft SQL-Server

Posted: 26 Jun 2013 07:10 PM PDT

Can somebody explain to me how a process can acquire a SIX lock on a page? In my deadlock graph XML file I see that a process running under the READ COMMITTED isolation level (executing a SELECT statement at the moment of the deadlock) holds a SIX lock on a page.

What does this mean and how could that lock have been acquired? From what I got from http://msdn.microsoft.com/en-us/library/aa213039%28v=sql.80%29.aspx, a SIX lock protects S locks on all resources and IX locks on some resources lower in the hierarchy.

In my case would that be IX locks on rows? Can an IX lock be placed on a row? (I guess not.) I am confused.

Another thing is that I expect several X locks on rows and no S locks at all (since the isolation level is READ COMMITTED). Why do I have the whole page locked with SIX if I only inserted several records in the previous statement?

EDIT: Deadlock xml:

<deadlock-list>   <deadlock victim="process4df94c8">    <process-list>     <process id="process4df94c8" taskpriority="0" logused="2968" waitresource="PAGE: 7:1:181357" waittime="3111" ownerId="41854656297" transactionname="user_transaction" lasttranstarted="2013-06-06T11:09:42.087" XDES="0x1d2434e80" lockMode="IX" schedulerid="6" kpid="3476" status="suspended" spid="52" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2013-06-06T11:09:42.183" lastbatchcompleted="2013-06-06T11:09:42.183" clientapp=".Net SqlClient Data Provider" hostname="CWCEINAW" hostpid="4260" loginname="cwcuser" isolationlevel="read committed (2)" xactid="41854656297" currentdb="7" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056">      <executionStack>       <frame procname="adhoc" line="1" stmtstart="508" stmtend="1358" sqlhandle="0x02000000876d6b08786774c344ce9ce8bd077ccce54751be">  INSERT [WP_CashCenter_StockTransactionLine] ([isverified], [isstockownerchanged], [dateupdated], [quantity], [value], [weight], [qualificationtype], [direction], [material_id], [stockcontainer_id], [stockownerid], [stocktransaction_id]) VALUES (@isverified, @isstockownerchanged, @dateupdated, @quantity, @value, @weight, @qualificationtype, @direction, @material_id, @stockcontainer_id, @stockownerid, @stocktransaction_id);     </frame>       <frame procname="unknown" line="1" sqlhandle="0x000000000000000000000000000000000000000000000000">  unknown     </frame>      </executionStack>      <inputbuf>  (@isverified bit,@isstockownerchanged bit,@dateupdated datetime,@quantity int,@value decimal(4,2),@weight decimal(8,8),@qualificationtype int,@direction int,@material_id nvarchar(3),@stockcontainer_id bigint,@stockownerid int,@stocktransaction_id bigint)INSERT [WP_CashCenter_StockTransactionLine] ([isverified], [isstockownerchanged], [dateupdated], [quantity], [value], [weight], [qualificationtype], [direction], [material_id], [stockcontainer_id], [stockownerid], [stocktransaction_id]) VALUES (@isverified, @isstockownerchanged, @dateupdated, @quantity, @value, @weight, @qualificationtype, @direction, @material_id, @stockcontainer_id, @stockownerid, @stocktransaction_id); SELECT scope_identity()    </inputbuf>     </process>     <process id="process5cd948" taskpriority="0" logused="24656" waitresource="KEY: 7:72057594277003264 (889d2c878f57)" waittime="3098" ownerId="41854656065" transactionname="user_transaction" lasttranstarted="2013-06-06T11:09:41.970" XDES="0x1253053c0" lockMode="S" schedulerid="3" kpid="4116" status="suspended" spid="59" sbid="0" ecid="0" priority="0" trancount="1" lastbatchstarted="2013-06-06T11:09:42.180" lastbatchcompleted="2013-06-06T11:09:42.177" lastattention="2013-06-03T13:13:45.090" clientapp=".Net SqlClient Data Provider" hostname="CWCEINAW" hostpid="4260" loginname="cwcuser" isolationlevel="read committed (2)" xactid="41854656065" currentdb="7" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056">      <executionStack>       <frame procname="adhoc" line="1" sqlhandle="0x020000009e156e12789f4ef811698c627d479b0240c2a7c1">  SELECT * FROM WP_CashCenter_StockTransaction                  WHERE [id] IN (                      SELECT DISTINCT [ST].[id]                      FROM WP_CashCenter_StockTransaction AS [ST]                          LEFT JOIN WP_CashCenter_StockTransactionLine AS [STL] ON ([STL].[StockTransaction_id] = [ST].[id])                      WHERE [ST].[Type] IN (1, 0, 10, 9)  AND ([STL].[Direction] IN (1, 0) OR [STL].[id] IS NULL)   AND 
[ST].[Status] IN (0, 1)  AND ([STL].[StockContainer_id] = 300000274211 OR [ST].[StockContainerID] = 300000274211))     </frame>      </executionStack>      <inputbuf>  SELECT * FROM WP_CashCenter_StockTransaction                  WHERE [id] IN (                      SELECT DISTINCT [ST].[id]                      FROM WP_CashCenter_StockTransaction AS [ST]                          LEFT JOIN WP_CashCenter_StockTransactionLine AS [STL] ON ([STL].[StockTransaction_id] = [ST].[id])                      WHERE [ST].[Type] IN (1, 0, 10, 9)  AND ([STL].[Direction] IN (1, 0) OR [STL].[id] IS NULL)   AND [ST].[Status] IN (0, 1)  AND ([STL].[StockContainer_id] = 300000274211 OR [ST].[StockContainerID] = 300000274211))    </inputbuf>     </process>    </process-list>    <resource-list>     <pagelock fileid="1" pageid="181357" dbid="7" objectname="Eindhoven_CWC.dbo.WP_CashCenter_StockTransactionLine" id="lock366ad3a80" mode="SIX" associatedObjectId="72057594277265408">      <owner-list>       <owner id="process5cd948" mode="SIX"/>      </owner-list>      <waiter-list>       <waiter id="process4df94c8" mode="IX" requestType="wait"/>      </waiter-list>     </pagelock>     <keylock hobtid="72057594277003264" dbid="7" objectname="Eindhoven_CWC.dbo.WP_CashCenter_StockTransaction" indexname="PK_WP_Inbound_StockTransaction" id="lockee362e00" mode="X" associatedObjectId="72057594277003264">      <owner-list>       <owner id="process4df94c8" mode="X"/>      </owner-list>      <waiter-list>       <waiter id="process5cd948" mode="S" requestType="wait"/>      </waiter-list>     </keylock>    </resource-list>   </deadlock>  </deadlock-list>  

Problem adding a new node to a SQL Server 2012 Failover Cluster

Posted: 26 Jun 2013 11:05 AM PDT

  • I tried to add a new node to a SQL Server 2012 Standard failover cluster instance that is already installed on another node of a Windows Server 2012 failover cluster.
  • I have an issue with a non-editable input field for the SQL Server Agent account name.
  • It could use the same AD account name as the SQL Server Database Engine, for example.
  • My account does not have AD admin privileges.
  • The access, account, and settings for SQL Server Agent on the existing node are accessible and settable.


Eliminating duplicate records in data cleansing

Posted: 26 Jun 2013 08:47 PM PDT

I have a database full of records of people with simple information like first name, last name, email, location, etc. I need to eliminate the duplicate records. From what I've found, the process is called "duplicate elimination" in data cleansing. Does anyone know a good open source tool to do that?

Statistical Analysis of Data that has to be done in an order?

Posted: 26 Jun 2013 05:45 PM PDT

Bear with me - this is the first time I've tried this in SQL Server; normally I have been doing it on the front end ;)

I am implementing some analysis on time-coded data series. This is not super complicated stuff, but some of it requires numbers we do not store in the database, which have to be calculated by aggregating the stored numbers with a specific algorithm IN ORDER.

To give an example:

  • I have a list of trades and I need to know the maximum loss we had in the account, so I need to aggregate the plus/minus and then take the most extreme negative and positive values.

This cannot be pre-calculated due to dynamic filtering - there are a number of filters that can be applied to the data.

So far (in the past) I pulled the data into the application; now, for the standard calculations, I plan to try to keep them in SQL Server.

My problem now is this - I can see how that works (acceptably) in SQL Server:

    [TradeBegin],
    Pnl,
    SUM(Pnl) OVER (ORDER BY [TradeBegin] ROWS UNBOUNDED PRECEDING) [Total]

But if I put that into a view and then filter out rows, the sum is still calculated from the beginning. And I need a view because I want (need) to map that standard analysis data into an ORM (so dynamic SQL is out). Does anyone have an idea how to do that?
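One hedged sketch of a common workaround (the table name, column names, and the filter parameter are assumptions): an inline table-valued function applies its parameters before the window function runs, and many ORMs can map a table-valued function much like a view.

    CREATE FUNCTION dbo.fn_TradeRunningPnl (@AccountId int)
    RETURNS TABLE
    AS
    RETURN
        SELECT [TradeBegin],
               Pnl,
               SUM(Pnl) OVER (ORDER BY [TradeBegin] ROWS UNBOUNDED PRECEDING) AS [Total]
        FROM dbo.Trades
        WHERE AccountId = @AccountId;   -- filtering happens before the running total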

Clear schema from database without dropping it

Posted: 26 Jun 2013 06:45 PM PDT

I'm working on a school project where I have a SQL Server with a database for my team.

I already imported a local database created with Entity Framework.

Now the model has changed, table properties were added/deleted and I want to update my full database.

However, the teachers didn't give us CREATE DATABASE rights, so dropping the whole database isn't really an option.

Now my question is: is it possible to drop all the tables currently in the database and just import the newly created ones without problems? Or do I really need to drop the whole database?
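A hedged sketch of one way to do this without dropping the database (it only generates the statements; foreign key constraints may force a particular drop order or need to be dropped first):

    -- Generate a DROP TABLE statement for every user table in the current database.
    SELECT 'DROP TABLE ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ';'
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id;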

Are there any disadvantages to partitioning on financial year?

Posted: 26 Jun 2013 11:39 AM PDT

Our current set up has one table per financial year (May 1- April 30). Each table has approx 1.5 million rows. We have about 8 years of data, and will obviously be adding each year.

The majority of queries are within the financial year/one partition: either select * from sales where datetime between '2013-05-01' and '2014-04-30 23:59:59' or some period within that year.

My plan is to have a range partition on an InnoDB table. e.g.

    PARTITION BY RANGE COLUMNS(datetime)
    (PARTITION p0 VALUES LESS THAN ('1999-05-01') ENGINE = InnoDB,
     PARTITION p1 VALUES LESS THAN ('2000-05-01') ENGINE = InnoDB,
     PARTITION p2 VALUES LESS THAN ('2001-05-01') ENGINE = InnoDB,
    ...

This means that the PK has to become PRIMARY KEY (index,datetime).

Are there any significant disadvantages to partitioning compared to having an unpartitioned table? I know that means the PK is now length 12 and all further indexes will have that prepended to it. Does that make a difference? The table needs to work faster on reads than writes, and there are a fair few indexes on it.

  • Is there any functional difference between partitioning on RANGE COLUMNS(datetime) and RANGE (TO_DAYS(datetime)) using MySQL 5.5+? (See the sketch below.)
  • Is partitioning on datetime OK? One alternative is adding a column for financial year and indexing/partitioning on that (and then adding another index on datetime for other queries).
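For comparison with the first bullet, a hedged sketch of the TO_DAYS() variant with the same boundaries (the table name sales is an assumption): RANGE COLUMNS keeps the date literals readable, while TO_DAYS() is the older style that partitions on an integer expression.

    ALTER TABLE sales
    PARTITION BY RANGE (TO_DAYS(`datetime`))
    (PARTITION p0 VALUES LESS THAN (TO_DAYS('1999-05-01')) ENGINE = InnoDB,
     PARTITION p1 VALUES LESS THAN (TO_DAYS('2000-05-01')) ENGINE = InnoDB,
     PARTITION p2 VALUES LESS THAN (TO_DAYS('2001-05-01')) ENGINE = InnoDB);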

We do sometimes need to query across all time or over "the last X months", but this is pretty rare. The main advantages of moving to a single table are eliminating the logic in the application that works out which table to insert into/update/select from, and not needing to build UNIONs in those situations where we need more than one table.

SUPER privilege not defined for master user in Amazon MySQL RDS

Posted: 26 Jun 2013 02:45 PM PDT

I have created one medium instance on Amazon RDS in the Asia Pacific (Singapore) region. I have created my master user with a master password, and it is working/connecting fine with Workbench installed on my local PC. When I try to create a function on that instance, it shows me the following error:

ERROR 1418: This function has none of DETERMINISTIC, NO SQL, or READS SQL DATA in its declaration and binary logging is enabled (you might want to use the less safe log_bin_trust_function_creators variable)

On my instance, the variable log_bin_trust_function_creators shows OFF. Now when I go to change that variable using

SET GLOBAL log_bin_trust_function_creators = 1;  

it gives me another error

Error Code: 1227. Access denied; you need (at least one of) the SUPER privilege(s) for this operation

I don't know how to solve this error.

Can anybody help???

How to import table's data in MySql from Sql Server?

Posted: 26 Jun 2013 04:45 PM PDT

I am trying to export a table from SQL Server 2008 R2 to MySQL 5.5. For this I am using the SQL Server Import and Export Wizard of SQL Server 2008, but it is giving an error.

This error may be occurring because the table in SQL Server has a column with data type image, while in the MySQL table that column's data type is LONGBLOB.

Please provide your expert answers. If it's not possible through the SQL Server Import and Export Wizard, then please suggest any other method for the data transfer.

Is it possible to pipe the result of a mysqldump straight to rsync as the source argument?

Posted: 26 Jun 2013 03:45 PM PDT

Is it possible to pipe the result of a mysqldump straight to rsync as the source argument?

Conceptually, I was thinking something like:

mysqldump -u root -p database_to_backup > db_backup_file.sql | sudo rsync -avz db_backup_file.sql myuser@mysite.com:/var/www/db_backup_file.sql  

I've seen people pipe the result to mysql for their one liner backup solution, but I was curious if it was possible with rsync. You know--- cause rsync is magic :)

Thanks for your time!

MySQL Workbench sync keeps requesting the same changes

Posted: 26 Jun 2013 08:45 PM PDT

I am using MySQL Workbench, and when I try to "synchronize" it with my remote database, it keeps detecting some changes to make.

Specifically, the most recurrent ones are:

  • Dropping foreign keys and re-creating the same ones again
  • Changing the collation

I was compliant and executed all the queries given to me (and added the semi-colon that they forgot). MySQL didn't complain and executed them.

However it didn't help; I can run it 20 times in a row and it will still ask for the same useless changes.

SQL developer: Setup debugger for plsql

Posted: 26 Jun 2013 08:50 PM PDT

I'm trying to debug PL/SQL remotely, but I can't - the database returns an error. What should I do to fix this and start debugging?

    ORA-30683: failure establishing connection to debugger
    ORA-12541: TNS:no listener
    ORA-06512: at "SYS.DBMS_DEBUG_JDWP", line 68
    ORA-06512: at line 1

Update: I'm trying to debug a procedure on a remote server. On my local computer I have only SQL Developer installed.

Deleting Data From Multiple Tables

Posted: 26 Jun 2013 01:25 PM PDT

Suppose I've a table called UNIVERSITY containing university names:

    universityID    universityNAME    isACTIVE
         7              GNDU              1
         6              PU                1
         5              PTU               1
         8              LPU               1

Now these university IDs have (obviously) been used in many tables within the database (named e.g. Education) - suppose 10 tables.

Q. Now what happens if I delete one university?

A. The universityID field in the other tables becomes NULL.

But I don't want this; rather, when I delete one university from the UNIVERSITY table, all rows referencing it in all 10 tables should be deleted as well.

What would be the shortest and easiest MySQL query for this operation?

NOTE: I'm using PHP.
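A hedged sketch of the usual approach in MySQL with InnoDB (the child table and column names below are illustrative assumptions, not from the actual schema): give every referencing table a foreign key with ON DELETE CASCADE, after which a single DELETE on UNIVERSITY is enough.

    -- Repeat for each of the 10 referencing tables; 'colleges' is a hypothetical example.
    ALTER TABLE colleges
      ADD CONSTRAINT fk_colleges_university
      FOREIGN KEY (universityID) REFERENCES UNIVERSITY (universityID)
      ON DELETE CASCADE;

    -- With the constraints in place, one statement removes the university and its child rows:
    DELETE FROM UNIVERSITY WHERE universityID = 7;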

Why does a MySQL (InnoDB) table get faster after OPTIMIZE TABLE, but then sometimes it doesn't?

Posted: 26 Jun 2013 01:37 PM PDT

I have a Django web application that stores data in a MySQL InnoDB database. There is a particular page that is accessed a lot on the django admin, and the query is taking a long time (~20 seconds). Since it's the Django internals, the query cannot be changed.

There are 3 tables, A, B, and C. The query looks like:

    SELECT *
    FROM A
        INNER JOIN B ON (A.b_id = B.foo)
        INNER JOIN C ON (B.foo = C.id)
    ORDER BY A.id DESC
    LIMIT 100

A simple three-table join.

The id fields are primary keys and have indexes. A.b_id, B.foo both have their own indexes.

However the query plan looks wrong and says it's not using any keys on B (but it is using the keys for the other joins). From reading lots of MySQL performance stuff it should in theory be using indexes, since it's various const joins that can 'fall through'. It says it has to scan all ~1,200 rows of B.

The weird thing is that I ran OPTIMIZE on each table on my local machine and re-ran the queries (with SQL_NO_CACHE) and it was much faster: 0.02 sec vs. the original 20 sec. EXPLAIN on the same query gave a different, and much more sensible, result, showing that it can use an index on each table and that it doesn't have to scan the whole lot. A co-worker ran OPTIMIZE on each table on a testing machine with approximately the same data (which was recently recreated from a loaded dump file) and it also showed a speed increase and a sensible EXPLAIN.

So we ran that on the live system… and it didn't change anything (neither the speed nor the EXPLAIN). I recreated my MySQL database (DROPped the database and reloaded it from a dump), and now OPTIMIZE doesn't change anything (i.e. ~20 sec run time, bad query plan).

Why does this happen? How can I get MySQL to use the correct indexes and get back my 0.02 s query time? This blog post ( http://www.xaprb.com/blog/2010/02/07/how-often-should-you-use-optimize-table/ ) implies that OPTIMIZE only really optimizes the primary key (note that b_id and foo are not primary indexes). How can I "rebuild the secondary indexes"? I tried doing ALTER TABLE A ENGINE=InnoDB (and the same for B and C) and there was no change.
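Two hedged things worth trying (the index name idx_b_foo is an assumption standing in for whatever index actually covers B.foo): refresh the sampled InnoDB index statistics, which is part of what OPTIMIZE does incidentally, and, if the optimizer still refuses the index, force it for this one query to confirm the fast plan is still available.

    -- Refresh index statistics on all three tables.
    ANALYZE TABLE A, B, C;

    -- Force the suspect index to verify the fast plan still exists.
    SELECT *
    FROM A
        INNER JOIN B FORCE INDEX (idx_b_foo) ON (A.b_id = B.foo)
        INNER JOIN C ON (B.foo = C.id)
    ORDER BY A.id DESC
    LIMIT 100;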

Minimizing Indexed Reads with Complex Criteria

Posted: 26 Jun 2013 01:45 PM PDT

I'm optimizing a Firebird 2.5 database of work tickets. They're stored in a table declared as such:

    CREATE TABLE TICKETS (
      TICKET_ID id PRIMARY KEY,
      JOB_ID id,
      ACTION_ID id,
      STATUS str256 DEFAULT 'Pending'
    );

I generally want to find the first ticket that hasn't been processed and is in Pending status.

My processing loop would be:

  1. Retrieve 1st Ticket where Pending
  2. Do work with Ticket.
  3. Update Ticket Status => Complete
  4. Repeat.

Nothing too fancy. If I'm watching the database while this loop runs, I see the number of indexed reads climb with each iteration. The performance doesn't seem to degrade terribly, as far as I can tell, but the machine I'm testing on is pretty quick. However, I've received reports of performance degradation over time from some of my users.

I've got an index on Status, but it still seems like it scans down the Ticket_Id column each iteration. It seems like I'm overlooking something, but I'm not sure what. Is the climbing number of indexed reads for something like this expected, or is the index misbehaving in some way?

-- Edits for comments --

In Firebird you limit row retrieval like:

    Select First 1
      Job_ID, Ticket_Id
    From
      Tickets
    Where
      Status = 'Pending'

So when I say "first", I'm just asking it for a limited record set where Status = 'Pending'.
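Two hedged checks that may help narrow this down (the index name below is an assumption, not from the original schema): ask Firebird which plan it actually chooses for the polling query, and compare against a composite index, a commonly suggested variant for this kind of status-polling pattern.

    -- In isql, show the plan the optimizer picks for the polling query.
    SET PLAN ON;

    SELECT FIRST 1 Job_ID, Ticket_Id
    FROM Tickets
    WHERE Status = 'Pending';

    -- A composite index (hypothetical name) lets the engine narrow candidates
    -- by STATUS and TICKET_ID together instead of STATUS alone.
    CREATE INDEX IDX_TICKETS_STATUS_ID ON TICKETS (STATUS, TICKET_ID);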
