Friday, April 19, 2013

[SQL Server 2008 issues] Deadlock

[SQL Server 2008 issues] Deadlock


Deadlock

Posted: 18 Apr 2013 11:56 AM PDT

One of the system in my company was not available on the 9th Apr. As the application team in my company check in the log.. they found this error 2013/04/09 15:39:05:391 ERRORservices.ib101_access.register.IBRegisterSecurityImageServices: Error details :org.hibernate.exception.SQLGrammarException: could not insert:[net.penril.ibss.core.hibernate.IbUser]Caused by:com.microsoft.sqlserver.jdbc.SQLServerException: Lock request time out period exceeded.2013/04/09 15:39:17:559 ERRORenterprise.services.ib101_access.IBGenericLoginServices: Error details :org.hibernate.exception.SQLGrammarException: could not update:[net.penril.ibss.core.hibernate.IbCustProfile#4737Caused by:com.microsoft.sqlserver.jdbc.SQLServerException: Lock request time out period exceeded. 2013/04/09 15:39:43:487 ERRORenterprise.services.ib101_access.IBGenericLoginServices: Error details :org.hibernate.exception.SQLGrammarException: could not update:[net.penril.ibss.core.hibernate.IbCustProfile#2951]Caused by:com.microsoft.sqlserver.jdbc.SQLServerException: Lock request time out period exceeded its suspected because of deadlock. But i have no idea where to check on this issue. Im very new in this. Please help.

Restoring file group error

Posted: 15 Jan 2013 09:11 PM PST

I have created the file group for my database.First i took backup of individual file group(mdf and ndf) then I tried to restore primary file group but i got error as File 'testnvt2' was not backed up in file 1 on device 'D:vtprimary.bak'. The file cannot be restored from this backup set.RESTORE DATABASE is terminating abnormally. (Microsoft SQL Server, Error: 3144)then i tried to restore only secondary file group, but again i got the error asFile 'regSQL_dat' was not backed up in file 1 on device 'D:vtndf.bak'. The file cannot be restored from this backup set.RESTORE DATABASE is terminating abnormally. (Microsoft SQL Server, Error: 3144)So why this error comes? How exactly to backup and restore only a single group or restore file groups?

How do I calculate totals for a distributed measure in SSAS?

Posted: 18 Apr 2013 06:07 PM PDT

I'm not sure if this is the right place to post my question but here goes.I have an OLAP cube consisting of quite a few tables.In this scenario there's only reason to look at one fact table with costs.This table constructed by a join in SQL with a table of distribution keys.The purpose is to split costs across Company, Cost Center, etc.The joining keys are Company, CostCenter, AccountGroup, DateFrom, and DateTo.As such, the number of rows can be blown up a multitude of times.If we look at a simple case with just one variable: Cost Distribution Distributed Cost Count DistrCalcCompany A 100 0.8 80 1 0.8Company B 200 0.2 40 1 0.2Total 300 1.0 120 2 0.5I need to show the Distribution figure in the cube.I do that by dividing Count with Distribution which you can see in the final column in the table above, DistrCalc.Here's the cube formulaIif( [Measures].[Count] = 0 Or [Measures].[] is NULL , NULL , [Measures].[Distribution] / [Measures].[Count])The Total of 0.5 for DistrCalc is clearly wrong. I should be 1.0 (or if not possible, NULL).Since we use Excel as the frontend tool we could easily switch off totals but totals are relevant for other measures.So how do I get around this problem?I'm thinking that some sort of Scope statement could it. But it has to span across all the variables from the SQL join.My level of expertise in MDX is unfortunately not very high, and Google hasn't provided me with a usable result.Thank you in advance!Hans Riis

Connect Two Different databases on two Different Data server Locations

Posted: 18 Apr 2013 04:22 PM PDT

Hi Friends, i had a requirement , which leads to work on two database on two different data servers. Now my Problem is how to connect one database to another database. i have to access the other database procedures and other operation like CRUD . Please give me some examples or Idea to approach Thanks in Advance

need help in creating a procedure which uses cursor and variables

Posted: 18 Apr 2013 03:08 PM PDT

have situation where i need to create a procedure that uses another table data and fetches each table row and passes them as variables into the following user defined function (dbo.Split)The values id,SID,DOB are getting from another table columns valuesSoure table hasID ColNAME 1 SID,DOB2 SID,DOB3 NAME,CITY,ZIP4 NAME,CITY,ZIPi need to use variables to fetch each row and pass them into the below functionin the below mentioned function SID and DOB whould be passed as a variables from the source table select id, SID, DOBinto table1from(select t.id,c.colname,c.colvaluefrom yt tcross apply dbo.split(t.name, t.text, ',') cwhere c.colname in ('SID', 'DOB')) srcpivot(max(colvalue)for colname in (SID, DOB)) piv;select *from table1Can some one please help me

Is it possible to create tables Dynamically using Cursors from another table with csv values

Posted: 18 Apr 2013 08:42 AM PDT

Have a Table with the CSV Values in the columns as belowID Name text 1 SID,DOB 123,12/01/1990 2 City,State,Zip NewYork,NewYork,01234 3 SID,DOB 456,12/21/1990 What is need to get is 2 tables in this scenario as out put with the corresponding valuesID SID DOB 1 123 12/01/1990 3 456 12/21/1990ID City State Zip2 NewYork NewYork 01234Is there any way of achieving it using a Cursor or any other method in SQL server?

Trace Flag 8048, 8015, or SUMA?

Posted: 14 Nov 2012 08:37 AM PST

Hola! Recently included SQL Server startup Trace Flag 8048 to resolve a serious spinlock contention issue in a SQL Server 2008 R2 system. Interested to hear from others who have found usage cases where performance value was delivered by trace flag 8048 (promote query memory grant strategy from per-NUMA node to per-core), trace flag 8015 (SQL Server ignores physical NUMA), or SUMA (interleaved sufficiently uniform memory access). Trace flag 8048http://blogs.msdn.com/b/psssql/archive/2011/09/01/sql-server-2008-2008-r2-on-newer-machines-with-more-than-8-cpus-presented-per-numa-node-may-need-trace-flag-8048.aspxTrace flag 8015http://blogs.msdn.com/b/psssql/archive/2010/04/02/how-it-works-soft-numa-i-o-completion-thread-lazy-writer-workers-and-memory-nodes.aspxSUMA, or interleaved memoryhttp://msdn.microsoft.com/en-us/library/ms178144(v=sql.105).aspx"If you have hardware NUMA, it may be configured to use interleaved memory instead of NUMA. In that case, Windows and therefore SQL Server will not recognize it as NUMA."Gory details of system workload, gathered metrics from troubled system, and gathered metrics from the system after intervention in posts to follow. Peace!tw: @sql_handle

Customer performance report

Posted: 18 Apr 2013 09:32 AM PDT

Hi All,Has anyone put together a performance recommendations report to send to clients?This would put forward recommendations beyond your control such as disk space/configuration, SQL Server version, RAM, etc...

Could this be possible - what to check

Posted: 18 Apr 2013 08:10 AM PDT

Hello everyone.Got a situation that I can hope I can explain.We have an application that at the moment is performing a lot faster from the application front end, than it is when the same SP is run from SSMS or SQLCMD.Is that possible ?The apps team tell me that the application function when completed does complete successfully (My first check , was does it really do what they think it does)when I extract a trace from a successful run , i see the SP and parameters being called.When I try to run that in SSMS it runs and has to be killed off.A successful run from application <> 3-5 seconds.SSMS killed off after 10 mins.What can I check and confirm or deny in this scenario.All seems very odd to me ?:w00t:cheers for any advice or tips.

1 to many relationship query

Posted: 18 Apr 2013 07:46 AM PDT

Hi all,I have a documents database that has 2 tables, the main table contains the documents general information and the other tracks changes to the documents.I need to know, for each document, the [b]oldest [/b]code status from the 2 table based on the InsertedDate.Note: Every time a document is renewed a new line is inserted in the main table... but the document name is the same. At the same time a new status is created in the 2nd table.Here's the example:--remove temp table if necessaryIF OBJECT_ID('tempdb..#temp_main') IS NOT NULL DROP TABLE #temp_mainIF OBJECT_ID('tempdb..#temp_status') IS NOT NULL DROP TABLE #temp_status --make temp_mainCREATE TABLE #temp_main ( Uniq_ID int NOT NULL, DocCode char(32) NOT NULL, EffectiveDate datetime NULL)--make temp_statusCREATE TABLE #temp_status ( Uniq_ID int NOT NULL, CodeStatus char(2) NOT NULL, InsertedDate datetime NULL)--poplulate temp tablesINSERT INTO #temp_main (Uniq_ID, DocCode, EffectiveDate) VALUES (65977,123456,'2012-07-19 00:00:00.000')INSERT INTO #temp_main (Uniq_ID, DocCode, EffectiveDate) VALUES (65977,123456,'2012-07-19 00:00:00.000')INSERT INTO #temp_main (Uniq_ID, DocCode, EffectiveDate) VALUES (65977,123456,'2012-07-19 00:00:00.000')INSERT INTO #temp_main (Uniq_ID, DocCode, EffectiveDate) VALUES (66065,123456,'2013-07-18 00:00:00.000')INSERT INTO #temp_status (Uniq_ID, CodeStatus, InsertedDate) VALUES (65977,'N','2012-07-19 18:23:44.637')INSERT INTO #temp_status (Uniq_ID, CodeStatus, InsertedDate) VALUES (65977,'E','2012-07-19 18:36:16.027')INSERT INTO #temp_status (Uniq_ID, CodeStatus, InsertedDate) VALUES (65977,'E','2012-07-25 15:44:47.553')INSERT INTO #temp_status (Uniq_ID, CodeStatus, InsertedDate) VALUES (66065,'R','2013-04-18 16:01:02.510')select a.Uniq_ID, a.DocCode, a.EffectiveDate, b.Uniq_ID, b.CodeStatus, b.InsertedDatefrom #temp_main as aleft join #temp_status as b on b.Uniq_ID = a.Uniq_IDgroup by a.Uniq_ID, a.DocCode, a.EffectiveDate, b.Uniq_ID, b.CodeStatus, b.InsertedDateUniq_ID DocCode EffectiveDate Uniq_ID CodeStatus InsertedDate----------- -------------------------------- ----------------------- ----------- ---------- -----------------------65977 123456 2012-07-19 00:00:00.000 65977 E 2012-07-19 18:36:16.02765977 123456 2012-07-19 00:00:00.000 65977 E 2012-07-25 15:44:47.55365977 123456 2012-07-19 00:00:00.000 65977 N 2012-07-19 18:23:44.63766065 123456 2013-07-18 00:00:00.000 66065 R 2013-04-18 16:01:02.510(4 row(s) affected)What I want to achieve is :Uniq_ID DocCode EffectiveDate Uniq_ID CodeStatus InsertedDate----------- -------------------------------- ----------------------- ----------- ---------- -----------------------65977 123456 2012-07-19 00:00:00.000 65977 N 2012-07-19 18:23:44.63766065 123456 2013-07-18 00:00:00.000 66065 R 2013-04-18 16:01:02.510Thank you all for your time!!

Help interpret SQLIO result

Posted: 18 Apr 2013 04:59 AM PDT

Im trying to either prove or disprove that the slow front-end (web ui) performance is, in some part, attributed to disk bottleneck. I've run SQLIO to gather some metrics, but they'll always be just numbers without context. For example:c:\Program Files (x86)\SQLIO>sqlio -kR -t4 -s120 -o64 -frandom -b64 -LS -Fparam.txt sqlio v1.5.SGusing system counter for latency timings, 3579545 counts per secondparameter file used: param.txt file e:\sqlio\testfile.dat with 2 threads (0-1) using mask 0x0 (0)2 threads reading for 120 secs from file e:\sqlio\testfile.dat using 64KB random IOs enabling multiple I/Os per thread with 64 outstandingusing specified size: 100 MB for file: e:\sqlio\testfile.datinitialization doneCUMULATIVE DATA:throughput metrics:IOs/sec: 1649.36MBs/sec: 103.08latency metrics:Min_Latency(ms): 4Avg_Latency(ms): 77Max_Latency(ms): 779How can I provide context to the above to state definitely that the disk subsystem is underperforming to support a multi client UI tool. All of the articles I've read online only seem to use SQLIO as a tuning tool (optimize within existing hardware scope), not as a tool to determine whether the architecture is suitable or not.Anyone have any ideas? Am I using the right tool? How can I either make this case or move onto something else?Thanks in advance.

Does microsoft have a tool similar to Red Gate

Posted: 18 Apr 2013 08:02 AM PDT

One of the guys here in the office thought they had one but has no clue of the name of it. Asked me to evaluate it but couldn't find anything. Thanks.

Naming Multi-Line Table-Valued Function Default Constraints

Posted: 18 Apr 2013 04:54 AM PDT

I want to be able to name the default constraints that I place on the table returned from a multi-line table-valued function, however SQL Server doesn't appear to support naming the constraint in this scenario. I want to be able to do this so that schema comparisions don't get false positives when comparing the names given to these default constraints. Even though you can't specify them, SQL Server does give names to these default constraints.I'd like to be able to do something like what's in bold below:[code="sql"]CREATE FUNCTION [dbo].[GetElementList] ( @color smallint)RETURNS @elementList TABLE ( PartName VARCHAR(50), Color VARCHAR(50), Active INT [b]CONSTRAINT DF_GetElementList_Active[/b] DEFAULT 0)ASBEGIN... [i]Body removed for brevity.[/i]...END[/code]Does anybody know a way to do this, or a way to acheive the same end result?

RESTORE DATABASE

Posted: 18 Apr 2013 06:31 AM PDT

Hello,What is the syntax on how to restore database on these requirement:I need to run restore sql job from server 1,but the bakup file is on server2 and database where I need to restore is on server2here is how I startedRESTORE DATABASE MyDatabase FROM DISK='E:\Backups\MyDatabase.bak'Can anyone help me?Thank you

How do I add a row to when a column is read only?

Posted: 18 Apr 2013 06:57 AM PDT

How do I add a row to when a column is read only? I think this issue I am having is because I am using an ID field (column) that is automatically generated and incremented whenever a new row is added in Microsoft SQl Server Management Studio, I cannot manually change an ID field and it appears a different (grayer) color. Somehow, once, after I entered all the other items in the row and hit "Enter" on my keyboard, I managed to have a row added and the ID field was automatically generated. The problem is that it did not seem to add it out of sequence. I do not know why it would be 159 and not 157[img]http://i67.photobucket.com/albums/h292/Athono/sql%20server/stuck_zps00d884e5.png[/img]I deleted the row and tried again. This time, it gave me an error:[img]http://i67.photobucket.com/albums/h292/Athono/sql%20server/stuck02_zpsf175086e.png[/img]When I try to edit the field by hand, it seems i am not allowed to:[img]http://i67.photobucket.com/albums/h292/Athono/sql%20server/stuck03_zpse022ed98.png[/img]Please advise.I am using SQL Server 2008 R2Here are the version numbers:Microsoft SQL Server Management Studio 10.50.2500.0Microsoft Analysis Services Client Tools 10.50.2500.0Microsoft Data Access Components (MDAC) 6.1.7601.17514Microsoft MSXML 3.0 4.0 5.0 6.0 Microsoft Internet Explorer 9.0.8112.16421Microsoft .NET Framework 2.0.50727.5466Operating System 6.1.7601

Help with a query to get substring

Posted: 18 Apr 2013 05:14 AM PDT

Hello,I have a column called EventText with a string such as 'Receiving batch [688_31142.TRN].' (without the ' marks)I need to get it to be '688-31142'I can accomplish that with this: update #temptesttableset EventText = Replace(EventText, 'Receiving batch ','')FROM #temptesttable update #temptesttableset EventText = Replace(EventText, '[','')FROM #temptesttableupdate #temptesttableset EventText = Replace(EventText, '.TRN].','')FROM #temptesttableupdate #temptesttableset EventText = Replace(EventText, '_','-')FROM #temptesttableBut there must be a much cleaner way... any suggestions?

index scan

Posted: 18 Apr 2013 05:25 AM PDT

I have a table that has a composite primary key studentID + Schoolyear, both of the two columns are int, But looking at the execution plan just by using a select * on the table, I see it is using index scan, why can it use index seek?Thanks,

sql datatype convertion error??

Posted: 18 Apr 2013 03:02 AM PDT

Table with Values[code]CREATE TABLE InitialData(recno int PRIMARY KEY, Dept Varchar(30), entry_name Varchar(50), entry varchar(500), orgID int)INSERT INTO InitialData Values(1, 'Marketing', 'Timesservedstartdate', '8/6/2012 12:00:00 AM', 1234),(2, 'Sales', 'Timesservedstartdate', '8/6/2012 12:00:00 AM', 1234),(3, 'Development', 'Reporting', 'Somevalue', 1234),(4, 'HumanResources', 'Reporting', '1', 1234),(5, 'Support', 'Reporting', '1', 1234);[/code]Trying to test the condition that i developed...i want to compare all the rows with the entry value '1'. Any values other than 1 should be equal to 0 is the condition. NOTE: entry is a varchar column[code] IF (SELECT COALESCE((SELECT dI.[entry] FROM dbo.InitialData AS dI WITH(NOLOCK) WHERE dI.Dept = 'Reporting' AND dI.entry_name = 'PledgeRequireBatch' AND dI.orgID = 1234 AND dI.[entry] <> 1), 0)) = 1 BEGIN Select * from InitialData END ELSE Select TOP 1 * FROM InitialData[/code]I am getting the else statement result always but that is not what i want1 Marketing Timesservedstartdate 8/6/2012 12:00:00 AM 1234

Assigning a fieldname with rollup

Posted: 16 Apr 2013 04:26 AM PDT

I have two tables that have racing data, and I need to add up the number of cancelled races and make it my aliased field name (in this case as you see the query below numCancelled, meaning number of races cancelled):(select a.eid, count(*) as numCancelled from(select ev4.eid,f.nvid, f.perfdate, f.racenumber from Finish f (nolock)join event ev4 (nolock) on ev4.eid = f.eid wheref.NoRace = 'true' and f.PerfDate = '2013-04-11') a group by a.eid with rollup)....the resultset here just lists all the eids (event ids) then the rollup total (in this case it is referred to a NULL in that eid column), but I need my numCancelled field to contain that grand total 7, as right now it comes back as 1 when I run this subquery in my stored procedure.??Z

Linked Server to MS Access

Posted: 18 Apr 2013 02:56 AM PDT

All,Well, this is kind of three folds...1. the problem2. the solution3. any alternative1. the problemI receive a number of errors when trying to test connection to a linked server.The linked server is created to an MS Access file located in the program files folder.When I log into SSMS using Windows Authen, I successfully tested the linked server connection.When I log into SSMS using SQL Server Authen, the connection test results in a lot of errors... one error after another.Cause - After about three days trying to figure this crap out, I realized that the MDB file was locked. This locking is due to an automatic download service that another software is running to update the mdb file. Once i stopped the service, i am able to successfully test the connection using SQL Server Authen.Question: Why does this locking affect only SQL Server Authen from connecting to the Linked server?2. The solutionSince the users front end does not require constant access to the MDB File, i am able to set a stored procedure to run automatically such that I can pull data from the mdb file into the sql server database while not having to stop the service.3. Is there any other solution to this problem?RegardsSurendra

Update on ID's

Posted: 18 Apr 2013 12:39 AM PDT

Hi Guys,Need a more efficient way to do this :[b]Old Fact Table[/b]FiscalID SID Value1 344 78.442 345 44.333 346 44.554 347 223.44Now this join to Dimensions on ID's[b]Dim_FiscalYear_Old [/b]ID FiscalName1 20082 20093 20104 2011[b]Dim_Scenario_Old [/b]ID Name344 Model1 345 Model2346 Model3347 Model4Now basically the fact table stores ID's from DIM tables, so I want to retain old values (Names will remain same but ID's have changed in the Dimension Tables)[b]Dim_FiscalYear_New [/b]ID FiscalName7 20088 20099 201010 2011[b]Dim_Scenario_New [/b]ID Name546 Model1 547 Model2548 Model3549 Model4[b]Old Fact Table Updated or New Fact Table[/b]FiscalID SID Value7 546 78.448 547 44.339 548 44.5510 549 223.44[b]Note : ID's are not auto increment by 1[/b]

SSIS Transfer SQL Server Object Task

Posted: 19 Jul 2012 08:05 AM PDT

Hello, all!Due to database corruption referenced in a previous post, I am attempting to transfer all objects and data out of one database into another using SSIS. So far, I have been able to troubleshoot many various errors, but I noticed something strange while trying to track this one down and it makes me wonder if I'm missing some setting or something. Here is the error...Execution failed with the following error: "ERROR : errorCode=-1073548784 description=Executing the query "GRANT EXECUTE ON [DMUSER].[InsertHFR_WarrantyCard]..." failed with the following error: "Cannot find the object 'InsertHFR_WarrantyCard', because it does not exist or you do not have permission.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.I assumed it has to do with permissions of the DMUSER user and/or the DMUSER schema or something along those lines, but when I compare the source db to the new objects in the destination db, I find that the stored proc in the source is '[b]DMUSER[/b].InsertHFR_WarrantyCard' and the one in the destination is now '[b]dbo[/b].InsertHFR_WarrantyCard'. It appears that this didn't happen to all of the stored procs in the DMUSER schema, just certain ones. Why in the world would it do that??! :hehe:~Any insite would be greatly appreciated!(OT): If anyone just *happens* to read this post that *happens* to be attending SQL Saturday in Cleveland on Aug, 18 or the Ohio North SQL Server User Group on Aug, 7th...look me up! (I'm a brand-spankin-new'b!)

SQL developer vs Enterprise editions

Posted: 18 Apr 2013 04:17 AM PDT

Hi SQL server 2008 R2.Can we use developer edition for all our development , testing and staging servers , and use enterprise edition on production server.what is the disadvantage , or where does the developer version falls short in terms of functionality.Using Enterprise version on all servers is expensive , am not sure if that by itself is recommended Note that all sql server functionality like DR, Encryption... will be implementedThanks

Plan Reuse - Unrelated tables in execution plan

Posted: 17 Apr 2013 10:42 PM PDT

Hi allI have been banging my head against a brick wall over this one. Hopefully somebody can help.One our users ran a simple query on a partitioned table but for some reason it was taking an age to execute. The table is accessed via a simple view in another database.[Code]SELECT TOP 1000 * FROM FactTable fld (NOLOCK)WHERE PartitionKey = 50230216[/Code]However, the execution plan shows tables that have not even been specified in the query which is the reason why it is taking so long. Obviously, the Optimizer is reusing an existing plan but this is not the best choice.I have tried ;-creating a proc and executing sp_compile against it-clearing the cache (DBCC FREEPROCCACHE)-specifying WITH RECOMPILE against the procThere is a composite non-clustered index on the base table which includes the PartitionKey (2nd col of index).Statistics update automatically.No matter what I try, it keeps producing this sub-optimal execution plan containing the two unrelated tables. Anybody know why it would do this especially given that the cache has been cleared ?ThanksPreet

SSIS and VS 2010

Posted: 27 Apr 2010 07:17 PM PDT

I got my fresh new copy of VS 2010 yesterday and immediately began playing. I want to build an SSIS package. Funny, there wasn't a template for that in the database section. I went to the Web. Dear me lots of people complaining that they couldn't find it either. At the tail-end of one long diatribe, there was a note, 'maybe they'll fix it with SQLServer2008 R2'.I went looking for R2. It had been released only a few days ago. I spent most of the evening downloading it and then more time installing it.I ran BIDS. It opened VS2008. :angry: :angry:MS in their wisdom (and displaying an immense lack of any communication between divisions) have set it in concrete - to build an SSIS 2008 package you have to use VS 2008. Never mind that they are two totally different beasts. Never mind that the VS development team have put together Add-Ons for a huge amount of stuff including old and new Frameworks and Databases.So I and I am assuming many others will not upgrade any system that uses SSIS to VS 2010, no matter what other benefits we are missing out on, because we can't run it all in the same IDE.Good one Microsoft.

Why such a delay in the commit?

Posted: 18 Apr 2013 01:52 AM PDT

I have a scheduled job (.Net, WinOS Task Scheduler) which completes shortly after it's called but the SQL work it should accomplish in real time does not happen until several hours later.The .Net scheduled job calls a stored procedure which updates 2 "top level" tables and outputs 3 data readers (that's what they are to the .Net program). Both updated tables have 2 update triggers each (one trigger inserts to a log table & one manages some computed fields in "quick search" tables). One of the fields updated in each table is a datetime field - set to getdate() within the sproc. Temp tables are used to identify the rows to be updated. Quick pseudo-code (showing just one update and select statement): ----declare @mydate dateimeset @mydate = getdate()declare @UpdateIDs table(MyID int)insert @UpdateIDsselect CaseIDfrom TopLevelTable1where ...begin transactionupdate TopLevelTable1set Flag1 = [new value], Flag1Date = @mydatewhere CaseID in (select MyID from @UpdateIDs)-- 2 triggers are fired in conjunction with this statementcommit transactionselect [several fields]from @UpdateIDs tinner join [several joins from temp table]----The problem: the .Net program runs at 11p each night and completes in about 20 seconds. It sends out an email for each row in the select and one to me regarding the ending status. All those emails happen at 11p. The job reports success.But the time stamp (the param @mydate & the field TopLevelTable1.Flag1Date) is 6am the next day. The trigger inserting to TopLevelTable1_Log has a field LogDate which is set to getdate() within the trigger and its value is also 6am.There is no other maintenance work being done at that time. Transaction logs (using sqllogship.exe) are being done and full database backup is done at 1:45am. That full backup has the unmodified rows relating to the this job.What could be happening?

Can full text search on a column in Multi Languages like En&Fr?

Posted: 18 Apr 2013 12:15 AM PDT

I have a table column contains English and French. My FTS always returns something unexpected. I am not sure because of the multi lingual or not. Can someone tell? Thanks

partitioning strategy for this scenario:

Posted: 17 Apr 2013 11:54 PM PDT

HIHere is the scenario:Data is loaded in Table "ABC" every week and size of 1 week data is around 10 GB. Table ABC is partitioned on WeekId Column.WeekID+Someother column are PK(clustered). Clustered index is also partitioned.Only 52 week data need to present in table ABC.There is archive table ARC_ABC with same structure and is also partitioned on Weekid.Clustered index is present with same structure.Every week, the oldest partition has to be moved out to archive table and a new week data has to be added to ABC table.Now what will be a better approach out of these two:1.Create 52 partitions, Take oldest partition out and merge old partitions. Split right side partition for new week data and add data.ISSUE:We'll need to keep making changes in partition function and scheme. These are database level objects. Will it be a good idea to keep them chaging?2.Create partition function and scheme for next 5 years (52*5 partitions) so that we don't have to touch these database objects for several years. We can then keep merging the oldest partitions and keep using the new ones week by week fo rnext 5 years.thanks

adding a prefix to a column in an SQL view

Posted: 17 Apr 2013 11:17 PM PDT

Hi all,Is it possible to add a prefix to a returned result in a view in sql.eg select productId from Table1returns id1id2id3I would like to put a set value(prefix) to the returned result eg www.john.com/returning the followingwww.john.com/id1www.john.com/id2www.john.com/id3is this possible ?thanks very muchJohn.

get overtime hours

Posted: 17 Apr 2013 08:18 PM PDT

i am have a problem to get overtime hoursthe tableEmployeeNo TimeIn TimeOut BreakHours WorkingHours001 06:20 19:00 1 8how query to get overtime hoursnote : datatype Timein, timeout, BreakHours,WorkingHours is decimal(6,3)OverTime = (Timein + timeout) - BreakHours - WorkingHours

Indexes with JOINs

Posted: 17 Apr 2013 10:13 PM PDT

Hi all, I have some knowledge about indexes but it's very fuzzy when the queries become more complex, where upon everything I read on the internet seems to contradict itself. For example, if we have the following two tables and two queries run as reports:People:PeopleIdFirstNameLastNameAgeAddressCountryProfession.... some other fields .....Employee:EmployeeIdFirstNameLastNameCompanyDepartmentEmploymentStatus... some other fields....Query 1:SELECT P.FirstName, P.LastName, P.Age, P.Address, E.Company, E.DepartmentFROM People P INNER JOIN Employees E ON (P.LastName = E.LastName AND P.FirstName = E.FirstName)Where E.EmploymentStatus = 'Employed' AND P.Profession is not nullQuery 2:SELECT P.FirstName, P.LastName, P.Age, P.Address, E.Company, E.DepartmentFROM People P INNER JOIN Employees E ON (P.LastName = E.LastName)Where E.EmploymentStatus = 'Employed'By the way, there are no primary keys or clustered indexes in this query for simplicity sake.For Query 1, I would write the following two non-clustered indexes for each table (please don't mind the syntax) in order to get an index seek:CREATE NONCLUSTERED INDEX INDEX_Q1_Employee ON Employee( LastName ASC, FirstName ASC, EmploymentStatus ASC)INCLUDE (Company, Department)CREATE NONCLUSTERED INDEX INDEX_Q1_People ON People( LastName ASC, FirstName ASC, Profession ASC)INCLUDE (Age, Address)Question 1: Are these indexes correct? From my knowledge, whatever is in the JOIN or WHERE clauses should get placed inside the index and what remains in the SELECT statement should get "covered" and placed with the include portion.Question 2: The two indexes above are written for Query 1. Can an index seek on both tables be used using these indexes for Query 2? Please note that unlike Query 1, Query 2 only joins on LastName and there is no "P.Profession is not null" in the WHERE clause. I'm curious if the two indexes created cover this query or if I need another set of indexes to get an index seek.Any help would be greatly appreciated.

[SQL Server] Using REPLACE in an UPDATE statement



Weekly SQL Server newsletter with articles, forum posts, and blog posts via email. Subscribers receive our white paper with performance tips for developers.



SQLTeam.com Articles via RSS


SQLTeam.com Weblog via RSS



.

sqlteam.com

[SQL Server] Handling SQL Server Errors



Weekly SQL Server newsletter with articles, forum posts, and blog posts via email. Subscribers receive our white paper with performance tips for developers.



SQLTeam.com Articles via RSS


SQLTeam.com Weblog via RSS



.

sqlteam.com

Thursday, April 18, 2013

[how to] MYSQL score by rank

[how to] MYSQL score by rank


MYSQL score by rank

Posted: 18 Apr 2013 07:52 PM PDT

I am using MYSQL to create a rating system to implement my database. What I want to do is to rate each attribute by its percentage. Here is the example database:

ID, value  1, 3  2, 5  3, 2  4, 5  

The output I want is:

ID, value, rank, score  1, 3, 2, 6.6  2, 5, 1, 10  3, 2, 3, 3.3  4, 5, 1, 10  

score's value will based on the rank so it becomes such as

10*(MAX(rank)-(rank))/(MAX(rank)-MIN(rank))  

I have done the rank query but stuck with transforming it into scores. Here is the query I got so far:

SELECT `ID`, `value`, FIND_IN_SET( `value`, (  SELECT GROUP_CONCAT(DISTINCT `value`   ORDER BY `value` DESC)  FROM table)   ) AS rank  FROM table;  

Thank you all guys :)

MySQL hogging memory

Posted: 18 Apr 2013 07:26 PM PDT

An installation of MySQL 5.6.10 on a virtualized Ubuntu 12.04 is exhibiting massive memory hogging:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND  14019 mysql     20   0 29.0g  17g 8600 S   54 76.7  20:42.64 mysqld  

Usually, I am able to free ~ 3 GB by issuing FLUSH TABLES. The tables used are nearly exclusively InnoDB, the innodb_buffer_pool_size has been set to 10 GB (after setting it to 16 GB quickly depleted the available physical memory and swapped out more than 18 GB of it).

While the system was swapping, I could observe rather high numbers for "swap out" counters (vmstat is showing ~1k pages/second during bursts) and hardly anything at all swapped back in (few dozens of pages per minute). I first suspected memory leakage but have not found anything supporting this hypothesis so far.

What means do I have to identify the possible causes for the apparently unlimited growth?

Equivalent of MRG_MYISAM in databases other than MySQL?

Posted: 18 Apr 2013 05:57 PM PDT

Does anyone know if other database have something equivalent to MRG_MYISAM (aka the MERGE table type/storage engine)?

I know about fragmenting, but this is not quite the same AFAIK. We're using MRG_MYISAM to avoid large amounts of duplicate data across customer specific databases, so MRG_MYISAM is perfect.

That said, I'd like to know if there are equivalent things in other DBs, particularly other open source DBs.

MySQL: logging queries which would execute without using indexes

Posted: 18 Apr 2013 05:24 PM PDT

I am trying to use log_queries_not_using_indexes = 1 to find queries which are not executing optimally on a MySQL server. However, I find the resulting log file of rather limited value. Apparently, queries are logged whenever the optimizer really decided not to use an index as a criterion in a WHERE clause. And not if they truly have no indexes matching the filtered columns.

So given a table with the following structure

CREATE TABLE `test` (      `id_test`   int(11) NOT NULL AUTO_INCREMENT,      `some_text`  varchar(255) DEFAULT NULL,      `some_more_text` text,   PRIMARY KEY (`id_test`)  ) ENGINE=InnoDB  DEFAULT CHARSET=latin1   

a query SELECT id_test from test where id_test != 69 would be logged to the slow log because of not using indexes (the optimizer has decided that a table scan is more efficient as not much could be won by using an index) but SELECT id_test from test where id_test = 69 would not.

I would expect the behavior of the latter query in the first case as well since the index is present. As is, it makes troubleshooting missing indexes rather tiresome. Ideas on how to approach this greatly appreciated.

mysql: need help to optimize my query/table

Posted: 18 Apr 2013 04:27 PM PDT

I'm wondering if someone could help me optimize my tables/query to speed up a query. It is currently running ridiculously slow. I think a well-thought out index could help me. Any help would be really appreciated

Tables URLS and TAGS mentioned below are 2 and 20 million rows respectively (will probably end up having 10x). A query like the one below already takes 10 seconds to run.

An Example: http://whatrethebest.com/php+tutorials

Tables

CREATE TABLE IF NOT EXISTS `TAGS` (  `hash` varchar(255) NOT NULL,  `tag` varchar(255) NOT NULL,  UNIQUE KEY `my_unique_key` (`hash`,`tag`),  KEY `tag` (`tag`)  ) ENGINE=InnoDB DEFAULT CHARSET=utf8;  

and

CREATE TABLE IF NOT EXISTS `URLS` (  `url` text NOT NULL,  `domain` text,  `title` text NOT NULL,  `description` text,  `numsaves` int(11) NOT NULL,  `firstsaved` varchar(256) DEFAULT NULL,  `md5` varchar(255) NOT NULL DEFAULT '',  PRIMARY KEY (`md5`),  UNIQUE KEY `md5` (`md5`),  KEY `numsaves` (`numsaves`)  ) ENGINE=InnoDB DEFAULT CHARSET=utf8;  

QUERY

SELECT urls.md5, urls.url, urls.title, urls.numsaves  FROM urls  JOIN tags ON urls.md5 = tags.hash  WHERE tags.tag  IN (  'php', 'tutorials'  )  GROUP BY urls.md5  HAVING COUNT( * ) =2  ORDER BY urls.numsaves DESC  LIMIT 20  

EXPLAIN

I'm not sure what this shows

id  select_type     table   type    possible_keys   key     key_len     ref     rows    Extra  1   SIMPLE  tags    range   my_unique_key,tag   tag     767     NULL    230946  Using where; Using index; Using temporary; Using filesort  1   SIMPLE  urls    eq_ref  PRIMARY,md5     PRIMARY     767     jcooper_whatrethebest_urls.tags.hash    1     

So I think the problem is:

certain tags like 'php have 34,000 entries, most of which only have under 5 saves. But in order to get the 20 most saved it is having to sort them all.Right?

I can't really create a 'numsaves' column in TAGS and index on that because that number will be changing up and down, and that wouldnt make sense. Is it possible to create a cross-table index between urls.numsaves and tags.tag? Or a third table to use in my query somehow? Would this solve my problem? I know almost nothing about indexing.

Any help would be really appreciated!

Edits: Trying Ypercube suggestions*

I tried making the index but not sure if it finished, is there any way to tell for sure? Here is that explain for your (very nice) query for php + tutorials

id  select_type table   type    possible_keys   key key_len ref rows       Extra  1   SIMPLE  t1  ref my_unique_key,tag_hash_UX   tag_hash_UX 767 const   64962   Using where; Using index; Using temporary; Using filesort  1   SIMPLE  t2  eq_ref  my_unique_key,tag_hash_UX   my_unique_key   1534    jcooper_whatrethebest_urls.t1.hash,const    1   Using where; Using index  1   SIMPLE  u   eq_ref  PRIMARY,md5 PRIMARY 767 jcooper_whatrethebest_urls.t2.hash  1   Using where  

When I run your query in php or phpmyadmin (I know, I know, gross, im new to this) it takes a long long time, but when I run it with explain in front it gives me the number of ROWS very quickly. What could this mean??

I will consider using a ID field. Its a good idea, Would it account for this much slowness though? I didn't think it was neccesary because the order of the rows doesnt matter and a lot will be deleted eventually and they only need to be unique on the hash of the URL .. but I could keep the hash for uniqueness and the other stuff is irrelevant

I'm trying to disable xp_Cmdshell and rpc_out and when i run the commands on query analyzer it shows its diabled

Posted: 18 Apr 2013 04:43 PM PDT

I'm trying to disable xp_Cmdshell and rpc_out and when i run the commands on query analyzer it shows its diabled

But after this i need to run a security scan report which provides me the following report that its not disabled can anyone help me

5 Microsoft SQL Server Database Link Crawling Command Execution

QID: 19824 Category: Database

CVE ID:

Vendor Reference

Bugtraq ID:

Service Modified: 02/20/2013

User Modified:

Edited: No PCI Vuln: Yes THREAT: Microsoft SQL Server is exposed to a remote command execution vulnerability. Affected Versions: Microsoft SQL Server 2005, 2008, 2008 R2, 2012 are affected. IMPACT: Successful exploitation could allow attackers to obtain sensitive information and execute arbitrary code. SOLUTION: There are no solutions available at this time. Workaround: Disable RPC_Out and xp_cmdshell for this issue. COMPLIANCE: Not Applicable EXPLOITABILITY: There is no exploitability information for this vulnerability. ASSOCIATED MALWARE: There is no malware information for this vulnerability. RESULTS: C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\Binn\sqlservr.exe Version is 2009.100.4000.0

Loading data in mysql using LOAD DATA INFILE, replication safe?

Posted: 18 Apr 2013 04:21 PM PDT

I am trying to load data into mysql database form CSV file. I found that we could use LOAD DATA INFILE command to do it. But as per the mysql documentation it is not replication safe. (See here)

Is there a better way to do it rather than to do it via application?

MySQL Tables Require Daily Repairs - Server, Table or HD?

Posted: 18 Apr 2013 11:39 AM PDT

I've been experiencing a weird issue with one of MySQL DBs. Every day, sometimes 2-3 times per day, I'll need to repair the tables. The MySQL DB has 25 tables with 5.6m rows in total.

The bigger ones are:

Table A - 599k rows / 867MB  Table B - 2.1m rows / 146MB  Table C - 2.2m rows / 520MB  

It seems table C needs to be repaired pretty frequently, Tables A & B not as much.

When the table needs to be repaired, I'm not seeing it being marked as crashed or in use. But through other tools, I can see the data is not what it should be.

When I do repair the table, I'll see a message similar to:

[table c] repair info Wrong bytesec:  54-55-102 at 368251940; Skipped  [table c] repair warning Number of rows changed from 2127934 to 2127931  

or

[table c] repair info Wrong bytesec:  171-30-101 at 341237312; Skipped  [table c] repair warning Number of rows changed from 1984585 to 1984582  

I've tried making adjustments in my.cnf but no difference.

The server is a cloud server running both MySQL and Apache. Plenty of space available on all HDs:

Filesystem            Size  Used Avail Use% Mounted on  /dev/xvda2             99G   14G   80G  15% /  tmpfs                 1.9G     0  1.9G   0% /dev/shm  /dev/xvda1             97M   49M   44M  53% /boot  /dev/xvdc1            296G   25G  257G   9% /data  

I'm not sure if this is a problem with the cloud HD, the server or the tables themselves. The problem didn't start happening until about 2 months ago and the size of the DB has only changed by 300-400MB until now.

Any idea what I should be looking at to verify where the problem might be?

Using MySQL v5.1.66 and MyISAM

Thanks in advance.

Best, Cent

Update "NULL" string to Actual NULL value

Posted: 18 Apr 2013 11:51 AM PDT

I have a table that contains NULL values but the problem is that some of the values are actually string "NULL" and not actual NULLS so when you trying something like

where date is null  

it will not return the fields that have the "NULL" string.

What I am needing to do is run an update of the whole table that will convert all string "NULLS" to the actual NULL value. The "NULL" strings happen throughout all columns of the table so it is not just 1 column that needs to be updated. I am not sure how to approach this scenario. I'm thinking I might need to use a loop since i have many columns but then again there might be a simple solution without having to use a loop. What would be the best way to resolve this issue?

Reinstall MySql but keep database tables and data

Posted: 18 Apr 2013 10:37 AM PDT

Please help!

There are server issues and MySql is no longer running on our server (Ubuntu). The service is not recognized and needs to be reinstalled. Unfortunately, the database has not been backed up for 48 hours and that is a lot of information.

How do I reinstall MySql AND keep all my database data? Please note - I can't access mysql at all. I can't use command line mysql nor phpmyadmin.

Thanks in advance and let me know if I am missing important details.

Need ideas about OPTIMIZE TABLE

Posted: 18 Apr 2013 10:38 AM PDT

Looking at a database with 10 tables and fairly active at by-the-hour changes. On the first of each month, I purge some rows from 3 tables to remove outdated material and keep the size down. All of these tables show highlighted (red) 'Overhead' in phpMyAdmin.

Given these conditions, should the tables be OPTIMIZED just after the purge. If not a good idea, why? (The purge occurs at lowest usage time of day.)

Let's say the tables and current (mid-month) Data & Index sizes are:

          Rows     Deleted     Data      Index    table1   17,000     7000      4.3 MiB    1.2 MiB    Holds +300/day Transaction Info  table2    6,000     1000     25.5 MiB    231 KiB    Holds User Schedule Info  table3    1,800       30      297 KiB     43 KiB    Holds User Info  

This question is in response to learning about indexes and efforts to reduce slow queries and learning about high counts for:

  1. Handler read rnd next
  2. Handler read prev
  3. Created tmp disk tables
  4. Opened tables

In reading articles about these items, it seems to be a 'learned' science and requires testing.

Thanks for responding.

Delete shared memory segments left over by an Oracle instance

Posted: 18 Apr 2013 11:12 AM PDT

We're using Oracle Enterprise 11rR2 running on Solaris.

How can I delete/remove allocated shared memory segments using ipcrm?

I'm getting this error:

 ORA-01041: internal error. hostdef extension doesn't exist  

Pull Subscription: process cannot read file due to OS error 5

Posted: 18 Apr 2013 04:37 PM PDT

I am trying to migrate a working pull subscription for transactional replication from one subscribing server to a new one. The subscribing server is at another site and is connected via a VPN tunnel. The serverName is resolvable via the hosts file.

I am trying to capture the existing configuration precisely, but clearly am missing something.

The error is The process could not read file '\\[server]\repldata\unc\[folder]\[folder]\[file].pre' due to OS error 5. I can RDP into the subscribing server with the distributor connection account and can access the file on the UNC share.

Everyone has permissions to the UNC share and we haven't seen this problem with other subscriptions.

The distribution process account is the SQL Server Agent, which I know is not best practice but matches the configuration of the existing working replication. I temporarily tried using a (local) Windows account

Again, we have tried to configure the subscribing server exactly as the working server. What are we missing? We never saw this error when setting up the previous subscription.

One note: the old subscription is still up and functioning, and uses the same accounts to connect to the distributor. I wonder if Access is Denied could be thrown due to a sharing conflict.

Reducing Log Impact During Re-Indexing

Posted: 18 Apr 2013 12:05 PM PDT

We use Ola's maintenance solution and its great.

Regardless of method for re-indexing a major friction point with IT is the amount of log generated during the weekly re-indexing process. For a 1TB DB upwards of 300 GB of log can be generated. This causes mirroring backlogs/delays and also cause Data Protection Manager to take a long time to sync up with its off-site DPM partner server (sometimes several Days!). As we approach a time where we will have a Second site on warm standby we know that this delay in having off-site backups available during the vulnerable period after Index maintenance could be the Achilles heal. We are considering a larger pipe between the sites for Avail Group but to generate less burst Log activity would be great.

To mitigate this we have done 2 things with only minimal impact. First we spread out the weekly re-indexing by introducing Delays, purposely slowing a 3 hour process to about 8 hours or so. Secondly "some" key tables are maintained by a process that runs hourly resulting in Just in Time re-indexing.

In a large and active OLTP DB with some LOB what are the rule of thumbs for re-indexing frequency, % of database affected, # indexes that should be rebuilt less frequently? Is weekly rebuild overkill?

Time to apply transaction logs: does it matter how many logs?

Posted: 18 Apr 2013 09:56 AM PDT

When restoring from a backup in SQL Server, the procedure is to restore the .bak file and then apply any .trn files since the last full backup.

Does it make a difference how many .trn files there are, if they cover the same transactions? I.e. is it faster or slower restore a 1-hour .trn file vs twelve 5-minute .trn logs?

SQL pre-login handshake connection failure

Posted: 18 Apr 2013 09:56 AM PDT

I'm having an intermittent problem with one of my old SQL 2000 servers. Every once in a while it stops accepting logins. I fix the problem by bouncing sqlservr.exe and then it starts working properly for a few days. For example, running the PowerShell:

$connectionstring = "Server=.;Integrated Security=SSPI;"  $sqlconnection = new-object 'System.Data.SqlClient.SqlConnection'  $sqlconnection.connectionstring = $connectionstring  $sqlconnection.open()  

Produces the error:

Exception calling "Open" with "0" argument(s): "A connection was successfully established with the server, but then an  error occurred during the pre-login handshake. (provider: TCP Provider, error: 0 - The specified network name is no lon  ger available.)"  

Similarly, if I change the connection string to:

$connectionstring = "Server=\\.\pipe\MSSQL`$SMS3000\sql\query;Integrated Security=SSPI;"  

I get a similar error, except the end of the error says: (provider: Named Pipes Provider, error: 0 - The pipe has been ended.)

Checking C:\Program Files\Microsoft SQL Server\80\Tools\Binn\SVRNETCN.exe. Both Named Pipes and TCP/IP are enabled and "Force protocol encryption" is unchecked. And as I mentioned, if I bounce sqlservr.exe then the above commands create a successful connection.

I've checked the certificate store and there are no expired certificates in there. Checking the Windows event logs and SQL Server logs I'm not finding anything remotely useful there.

Lastly, I've run some network traces. For a failed connection I show:

TLS:TLS Rec Layer-1 HandShake: Client Hello.  TCP:Flags=...A.R.., ScrPort=1433, DstPort=18721, PayloadLen=0, Seq=888695317, Ack=3640041213, Win=0 ...  

A successful connection looks like:

TLS:TLS Rec Layer-1 HandShake: Client Hello.  TLS:TLS Rec Layer-1 HandShake: Server Hello. Certificate.  

I've hunted around the registry to try to figure out which certificate SQL is using, but I haven't had any luck there yet. Any ideas on what to look for next?

Single slave - multiple master MySQL replication

Posted: 18 Apr 2013 05:10 PM PDT

I need to replicate different MySQL databases from multiple servers into a single slave server. How can this be done? is there a way to define multiple master hosts?

Is it possible in Oracle to trace SQL statements that result in errors?

Posted: 18 Apr 2013 11:14 AM PDT

We have Oracle 11g in production. Application system is still under active development. It will be very handy to get SQL statements which cause any error.

Does Oracle provide a standard function to trace and log these statements and additional (debug) info?

Do I need client certs for mysql ssl replication?

Posted: 18 Apr 2013 01:51 PM PDT

I'm setting up mysql replication using SSL, and have found two different guides.

The first one creates both client and server certs, while the second one only creates server certs.

I don't know enough about SSL to understand the implication of one option over the other. Should the slave be using the client certs or the server certs?

Custom sp_who/sp_whoUsers

Posted: 18 Apr 2013 03:02 PM PDT

I need to allow a client in a dev DW SQL 2K8R2 environment, to view and kill processes, but I do not want to grant VIEW SERVER STATE to this person (he's a former sql dba and is considered a potential internal threat).

When I run the following, it returns one row as if the user ran the sp themselves with their current permissions.

USE [master]  GO    SET ANSI_NULLS ON  GO  SET QUOTED_IDENTIFIER ON  GO    CREATE PROCEDURE [dbo].[usp_who] with execute as owner  AS  BEGIN      SET NOCOUNT ON;      exec master.dbo.sp_who;  END  

Changing the "with execute as" to "self" (I'm a sysadmin) returns the same results. I've also tried the below instead of calling sp_who, and it only returns one row.

select * from sysprocesses  

It seems that the context isn't switching, or persisting, throughout the execution of the procedure. And this is to say nothing of how I'm going to allow this person to "kill" processes.

Does anyone have a solution or some suggestions to this seemly unique problem?

Need to suppress rowcount headers when using \G

Posted: 18 Apr 2013 10:02 AM PDT

Is there a command to suppress the rowcount headers and asterisks when using '\G' to execute a SQL statement? I am executing mysql with the -s and --skip-column-name options, but these don't suppress the rowcounts.

How can I replicate some tables without transferring the entire log?

Posted: 18 Apr 2013 11:02 AM PDT

I have a mysql database that contains some tables with private information, and some tables with public information.

I would like to replicate only the tables containing public information from one database to another, making sure that NO confidential information ever gets stored on the slave.

I know I can use the replicate-do-table to specify that only some tables are replicated, but my understanding is that the entire bin log is transferred to the slave.

Is there a way to ensure that only the public information is transferred to the slave?

How to search whole MySQL database for a particular string

Posted: 18 Apr 2013 01:02 PM PDT

is it possible to search a whole database tables ( row and column) to find out a particular string.

I am having a Database named A with about 35 tables,i need to search for the string named "hello" and i dont know on which table this string is saved.Is it possible?

Using MySQL

i am a linux admin and i am not familiar with databases,it would be really helpful if u can explain the query also.

multivalued weak key in ER database modeling

Posted: 18 Apr 2013 12:02 PM PDT

I was wondering since i didnt find out any clarification for this. I want to store movies that exist in different formats (dvd, bluray etc) and the price for each format differs from each other as well as the quantity of each format, so i came up with this:

example

Is this correct from a design perspective? Does this implies redundancy? I dont understand how will this be stored in a table. Would it be better to do it like this :

enter image description here

Thanks in advance.

EDIT : I add some more descriptive information about what i want to store in this point of the design. I want to store information about sales. Each movie that exist in the company i need to store format, price and stock quantity. I will also need to store customer information with a unique id, name, surname, address, movies that he/she has already bought and his credit card number. Finally i will have a basket that temporary keeps items (lets suppose that other items exist apart from movies) that the customer wants to buy.

Microsoft Office Access database engine could not find the object 'tableName'

Posted: 18 Apr 2013 04:02 PM PDT

First a little background: I am using MS access to link to tables in an advantage database. I created a System DSN. In the past in Access I've created a new database, and using the exteranl data wizard, successfully linked to tables. Those databases and the linked tables are working fine.

Now I am trying to do the same thing, create a new access db, and link to this same DSN. I get as far as seeing the tables, but after making my selection, I get the error, " The Microsoft Office Access database engine could not find the object 'tableSelected'. Make sure the object exists and that you spell its name and the path name correctly.

I've tried creating another datasource (system and user) with no luck. Environment is Wn XP, Access 2007, Advantage DB 8.1

Foreign Key Constraint fails

Posted: 18 Apr 2013 07:47 PM PDT

I have the following tables:

// Base Scans  CREATE TABLE `basescans` (      `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,      `name` VARCHAR(100) NULL DEFAULT NULL,      `status_id` INT(10) UNSIGNED NULL DEFAULT NULL,      PRIMARY KEY (`id`),      CONSTRAINT `status_id_fk` FOREIGN KEY (`status_id`) REFERENCES `statuses` (`id`) ON UPDATE CASCADE ON DELETE SET NULL  )  COLLATE='utf8_general_ci'  ENGINE=InnoDB  ROW_FORMAT=COMPACT  AUTO_INCREMENT=29    // Statuses  CREATE TABLE `statuses` (      `id` INT(10) UNSIGNED NULL AUTO_INCREMENT,      `name` VARCHAR(100) NULL DEFAULT NULL,      PRIMARY KEY (`id`)  )  COLLATE='utf8_general_ci'  ENGINE=InnoDB  ROW_FORMAT=DEFAULT  AUTO_INCREMENT=4  

Trying to save the first table fails when I put in that foreign key constraint. Can't figure out why. Both of the columns referenced in the constraint have the same type, size, etc:

INT(10) UNSIGNED NULL  

They only have a difference default value. One has a default value of NULL, the other is AUTO_INCREMENT. I didn't think that made a difference for foreign key constraints but I could be wrong.

Both tables are InnoDB and UFT8. What am I missing here?

UPDATED: My specific error:

/* SQL Error (1452): Cannot add or update a child row: a foreign key constraint fails (`db`.<result 2 when explaining filename '#sql-31c2_22ac1e1'>, CONSTRAINT `status_id_fk` FOREIGN KEY (`status_id`) REFERENCES `statuses` (`id`) ON DELETE SET NULL ON UPDATE CASCADE) */  

SSIS Row Count: Getting a null variable error where there is clearly a selected variable

Posted: 18 Apr 2013 02:02 PM PDT

Validation error. Build Files Count VIE [245]: The variable "(null)" specified by VariableName property is not a valid variable. Need a valid variable name to write to.

From what I can tell, this error is thrown when a variable is not assigned to the VariableName property; however, I definitely have a variable assigned, as seen in the image below:

"Count VIE" Row Count Properties

I've deleted the Row Count component and remade it, but the error continues to show up. Here is a snapshot of the Data Flow in question:

"Build Files" Data Flow

I'm not sure if its inclusion in a Conditional Split may be causing this error, but none of the other Row Count components seem to be throwing this error.

How do I copy my SQL Azure database to a local SQL Server instance?

Posted: 18 Apr 2013 12:10 PM PDT

I have an OLTP database hosted on a SQL Azure instance. I want to pull a copy of the database down from the cloud so I can run some heavy extracts and OLAP-style queries against it without impacting the source database.

How do I pull a copy of the database down to a local SQL Server instance?

[SQL Server] How do I query dates

[SQL Server] How do I query dates


How do I query dates

Posted: 18 Apr 2013 02:43 AM PDT

Seems simple enuf? So I have a datetime field in a db which happens to be 04/10/2013I do a select * from orderswherecreate_date like '%/10/%'returns nothing. So clear not that simple. I have a URL with either a date &date=11/11/2011 or d=a&m=2&y=2012I need to do w where (day in date_created=url.d) or some such. Better still where date_created=url.dateThanks for your patience with something that has to be SO simple ... did google it and looked up a couple of reference books!

convert error???

Posted: 18 Apr 2013 03:01 AM PDT

Table with Values[code]CREATE TABLE InitialData(recno int PRIMARY KEY, Dept Varchar(30), entry_name Varchar(50), entry varchar(500), orgID int)INSERT INTO InitialData Values(1, 'Marketing', 'Timesservedstartdate', '8/6/2012 12:00:00 AM', 1234),(2, 'Sales', 'Timesservedstartdate', '8/6/2012 12:00:00 AM', 1234),(3, 'Development', 'Reporting', 'Somevalue', 1234),(4, 'HumanResources', 'Reporting', '1', 1234),(5, 'Support', 'Reporting', '1', 1234);[/code]Trying to test the condition that i developed...i want to compare all the rows with the entry value '1'. Any values other than 1 should be equal to 0 is the condition. NOTE: entry is a varchar column[code] IF (SELECT COALESCE((SELECT dI.[entry] FROM dbo.InitialData AS dI WITH(NOLOCK) WHERE dI.Dept = 'Reporting' AND dI.entry_name = 'PledgeRequireBatch' AND dI.orgID = 1234 AND dI.[entry] <> 1), 0)) = 1 BEGIN Select * from InitialData END ELSE Select TOP 1 * FROM InitialData[/code]I am getting the else statement result always but that is not what i want1 Marketing Timesservedstartdate 8/6/2012 12:00:00 AM 1234

Problem with joins

Posted: 17 Apr 2013 10:08 PM PDT

Hi,I need a help.i have 5 tables defined below:-these tables are used to record all data pertaining to emails.Emarketing_email_history : message_stub,acct_id,....Survey_email_history : message_stub,acct_id,....Event_email_history : message_stub,acct_id,....this record all data pertaining to email bounce.Email_history_log : message_stub....this is acct specific data.Account: acct_id,acct_namei want to get the count of all acct's that have bounced emails.my query below:-select top 500 a.acct_id, a.acct_num, a.acct_company, count(a.acct_id) from email_history_log ehl (nolock)join email_history ehe (nolock) on ehe.message_stub = ehl.message_stubjoin survey_email_history ehs (nolock) on ehs.message_stub = ehl.message_stubjoin emarketing_email_history ehm (nolock) on ehm.message_stub = ehl.message_stubjoin account a on a.acct_id = ehm.acct_id -----------????where ehl.created_date > (dateadd("day",-7,getdate()))GROUP BY a.acct_id, a.acct_num, a.acct_companyORDER BY count(a.acct_id) DESCbut some how this doesnt retrive records. though when I remove join condition( i e removing 2 joins from join 1 ,2 or 3) I get records.is there any way I can get my result by joining all 3 tables plus account table with event_history_log?PS i dont want union/union allhelp much appritiated.!!:hehe:Thanks!

Backup overwrite

Posted: 17 Apr 2013 08:00 PM PDT

Hello Masters,Is it possible to know when last backup was restored on specific database ? Is it possible to know if any database has been overwrite ?

Search This Blog