Friday, April 19, 2013

[SQL Server 2008 issues] Deadlock

Deadlock

Posted: 18 Apr 2013 11:56 AM PDT

One of the systems in my company was not available on the 9th of April. When the application team checked the log, they found these errors:

2013/04/09 15:39:05:391 ERROR services.ib101_access.register.IBRegisterSecurityImageServices: Error details: org.hibernate.exception.SQLGrammarException: could not insert: [net.penril.ibss.core.hibernate.IbUser] Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Lock request time out period exceeded.
2013/04/09 15:39:17:559 ERROR enterprise.services.ib101_access.IBGenericLoginServices: Error details: org.hibernate.exception.SQLGrammarException: could not update: [net.penril.ibss.core.hibernate.IbCustProfile#4737] Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Lock request time out period exceeded.
2013/04/09 15:39:43:487 ERROR enterprise.services.ib101_access.IBGenericLoginServices: Error details: org.hibernate.exception.SQLGrammarException: could not update: [net.penril.ibss.core.hibernate.IbCustProfile#2951] Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Lock request time out period exceeded.

It is suspected to be because of a deadlock, but I have no idea where to check on this issue. I'm very new to this. Please help.
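
These messages are lock time-outs (the client's lock_timeout expired while waiting) rather than a deadlock report - a true deadlock raises error 1205 and SQL Server kills one of the sessions itself - so the usual first step is to look for whatever was holding the locks. A minimal sketch of a blocking check to run while the problem is happening; the DMVs are standard, but nothing here is specific to this application:

[code="sql"]
-- Who is blocked, and which session is blocking them
SELECT  r.session_id,
        r.blocking_session_id,                 -- the session holding the lock
        r.wait_type,
        r.wait_time       AS wait_time_ms,
        t.text            AS running_sql
FROM    sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE   r.blocking_session_id <> 0;            -- only requests that are being blocked
[/code]

If genuine deadlocks are also suspected, the system_health Extended Events session (SQL Server 2008 and later) already captures deadlock graphs and can be queried after the fact.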

Restoring file group error

Posted: 15 Jan 2013 09:11 PM PST

I have created filegroups for my database. First I took a backup of each filegroup (mdf and ndf) individually, then I tried to restore the primary filegroup, but I got this error:

File 'testnvt2' was not backed up in file 1 on device 'D:\vtprimary.bak'. The file cannot be restored from this backup set. RESTORE DATABASE is terminating abnormally. (Microsoft SQL Server, Error: 3144)

Then I tried to restore only the secondary filegroup, but again I got an error:

File 'regSQL_dat' was not backed up in file 1 on device 'D:\vtndf.bak'. The file cannot be restored from this backup set. RESTORE DATABASE is terminating abnormally. (Microsoft SQL Server, Error: 3144)

Why does this error occur? How exactly do I back up and restore only a single filegroup?
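
For reference, a minimal sketch of the filegroup backup/restore syntax; the database, filegroup, and path names below are placeholders, and restoring a read-write filegroup also requires the full or bulk-logged recovery model plus log backups to bring it current:

[code="sql"]
-- Back up a single filegroup
BACKUP DATABASE MyDb
    FILEGROUP = 'SECONDARY'
    TO DISK = 'D:\Backups\MyDb_secondary.bak';

-- Restore just that filegroup, then roll forward with log backups
RESTORE DATABASE MyDb
    FILEGROUP = 'SECONDARY'
    FROM DISK = 'D:\Backups\MyDb_secondary.bak'
    WITH NORECOVERY;

RESTORE LOG MyDb
    FROM DISK = 'D:\Backups\MyDb_log.trn'
    WITH RECOVERY;
[/code]

Error 3144 generally means the file named in the RESTORE is not contained in that particular backup set, which fits taking separate per-filegroup backups and then pointing the restore at the wrong device.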

How do I calculate totals for a distributed measure in SSAS?

Posted: 18 Apr 2013 06:07 PM PDT

I'm not sure if this is the right place to post my question, but here goes. I have an OLAP cube consisting of quite a few tables. In this scenario there's only reason to look at one fact table with costs. This table is constructed by a join in SQL with a table of distribution keys. The purpose is to split costs across Company, Cost Center, etc. The joining keys are Company, CostCenter, AccountGroup, DateFrom, and DateTo. As such, the number of rows can be blown up a multitude of times. If we look at a simple case with just one variable:

           Cost   Distribution   Distributed Cost   Count   DistrCalc
Company A   100    0.8            80                 1       0.8
Company B   200    0.2            40                 1       0.2
Total       300    1.0            120                2       0.5

I need to show the Distribution figure in the cube. I do that by dividing Distribution by Count, which you can see in the final column in the table above, DistrCalc. Here's the cube formula:

Iif( [Measures].[Count] = 0 Or [Measures].[Count] IS NULL, NULL, [Measures].[Distribution] / [Measures].[Count] )

The total of 0.5 for DistrCalc is clearly wrong; it should be 1.0 (or, if that is not possible, NULL). Since we use Excel as the front-end tool we could easily switch off totals, but totals are relevant for other measures. So how do I get around this problem? I'm thinking that some sort of Scope statement could do it, but it has to span all the variables from the SQL join. My level of expertise in MDX is unfortunately not very high, and Google hasn't provided me with a usable result. Thank you in advance! Hans Riis

Connect Two Different databases on two Different Data server Locations

Posted: 18 Apr 2013 04:22 PM PDT

Hi friends, I have a requirement which leads to working with two databases on two different database servers. Now my problem is how to connect one database to the other database: I have to access the other database's procedures and other operations like CRUD. Please give me some examples or ideas on how to approach this. Thanks in advance.
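
The usual answer for cross-instance access is a linked server plus four-part names. A minimal sketch, where the server, database, and object names are placeholders rather than anything from the post:

[code="sql"]
-- Register the second instance as a linked server on the first one
EXEC sp_addlinkedserver
     @server     = N'SERVER2',          -- name you will use in queries
     @srvproduct = N'',                 -- empty string for a SQL Server target
     @provider   = N'SQLNCLI',
     @datasrc    = N'Server2Host\Instance';

-- Map logins (self-mapping shown; an explicit remote login is also possible)
EXEC sp_addlinkedsrvlogin
     @rmtsrvname = N'SERVER2',
     @useself    = N'TRUE';

-- Query and execute remote objects with four-part names
SELECT * FROM SERVER2.OtherDb.dbo.SomeTable;
EXEC SERVER2.OtherDb.dbo.SomeProcedure;   -- needs the linked server's 'rpc out' option enabled
[/code]

OPENQUERY(SERVER2, '...') is an alternative when you want the remote instance to run the whole statement itself.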

Need help in creating a procedure which uses a cursor and variables

Posted: 18 Apr 2013 03:08 PM PDT

I have a situation where I need to create a procedure that uses another table's data, fetches each row, and passes the values as variables into the following user-defined function (dbo.Split). The values ID, SID, DOB are taken from the other table's columns. The source table has:

ID  ColNAME
1   SID,DOB
2   SID,DOB
3   NAME,CITY,ZIP
4   NAME,CITY,ZIP

I need to use variables to fetch each row and pass them into the function below; in the query below, SID and DOB should be passed as variables taken from the source table.

select id, SID, DOB
into table1
from
(
    select t.id, c.colname, c.colvalue
    from yt t
    cross apply dbo.split(t.name, t.text, ',') c
    where c.colname in ('SID', 'DOB')
) src
pivot
(
    max(colvalue)
    for colname in (SID, DOB)
) piv;

select * from table1

Can someone please help me?
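
A minimal sketch of the cursor shell being asked about - it only fetches each source row into variables and leaves a placeholder where the dynamic PIVOT/dbo.Split statement would be built and executed; the source table name and variable names are assumptions:

[code="sql"]
DECLARE @ID      int,
        @ColList varchar(200);                 -- e.g. 'SID,DOB' or 'NAME,CITY,ZIP'

DECLARE src_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT ID, ColNAME
    FROM   dbo.SourceTable;                    -- hypothetical name for the ID/ColNAME table

OPEN src_cur;
FETCH NEXT FROM src_cur INTO @ID, @ColList;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- @ID and @ColList now hold one row's values; build the dynamic
    -- dbo.Split / PIVOT statement from them here and run it with sp_executesql
    PRINT CAST(@ID AS varchar(10)) + ': ' + @ColList;   -- placeholder for the real work

    FETCH NEXT FROM src_cur INTO @ID, @ColList;
END

CLOSE src_cur;
DEALLOCATE src_cur;
[/code]

Because the column list changes per row, the pivot itself has to be assembled as dynamic SQL (the IN (...) list cannot take a variable directly).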

Is it possible to create tables Dynamically using Cursors from another table with csv values

Posted: 18 Apr 2013 08:42 AM PDT

I have a table with CSV values in the columns, as below:

ID  Name            text
1   SID,DOB         123,12/01/1990
2   City,State,Zip  NewYork,NewYork,01234
3   SID,DOB         456,12/21/1990

What I need to get in this scenario is 2 tables as output with the corresponding values:

ID  SID  DOB
1   123  12/01/1990
3   456  12/21/1990

ID  City     State    Zip
2   NewYork  NewYork  01234

Is there any way of achieving this using a cursor or any other method in SQL Server?

Trace Flag 8048, 8015, or SUMA?

Posted: 14 Nov 2012 08:37 AM PST

Hola! Recently included SQL Server startup trace flag 8048 to resolve a serious spinlock contention issue in a SQL Server 2008 R2 system. Interested to hear from others who have found usage cases where performance value was delivered by trace flag 8048 (promote query memory grant strategy from per-NUMA-node to per-core), trace flag 8015 (SQL Server ignores physical NUMA), or SUMA (interleaved sufficiently uniform memory access).

Trace flag 8048: http://blogs.msdn.com/b/psssql/archive/2011/09/01/sql-server-2008-2008-r2-on-newer-machines-with-more-than-8-cpus-presented-per-numa-node-may-need-trace-flag-8048.aspx
Trace flag 8015: http://blogs.msdn.com/b/psssql/archive/2010/04/02/how-it-works-soft-numa-i-o-completion-thread-lazy-writer-workers-and-memory-nodes.aspx
SUMA, or interleaved memory: http://msdn.microsoft.com/en-us/library/ms178144(v=sql.105).aspx
"If you have hardware NUMA, it may be configured to use interleaved memory instead of NUMA. In that case, Windows and therefore SQL Server will not recognize it as NUMA."

Gory details of the system workload, metrics gathered from the troubled system, and metrics gathered from the system after the intervention in posts to follow. Peace! tw: @sql_handle
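
For anyone comparing notes: startup trace flags such as -T8048 are normally added through SQL Server Configuration Manager (startup parameters) and take effect after a service restart; a quick way to confirm what is currently active on an instance:

[code="sql"]
-- List every trace flag currently enabled globally on this instance
DBCC TRACESTATUS (-1);
[/code]

The NUMA layout SQL Server actually sees can be checked in sys.dm_os_nodes, which makes it easy to tell whether the hardware is presenting NUMA or interleaved (SUMA) memory.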

Customer performance report

Posted: 18 Apr 2013 09:32 AM PDT

Hi All, has anyone put together a performance recommendations report to send to clients? This would put forward recommendations on things beyond your control, such as disk space/configuration, SQL Server version, RAM, etc.

Could this be possible - what to check

Posted: 18 Apr 2013 08:10 AM PDT

Hello everyone. I've got a situation that I hope I can explain. We have an application that at the moment is performing a lot faster from the application front end than the same SP does when run from SSMS or SQLCMD. Is that possible? The apps team tell me that the application function, when completed, does complete successfully (my first check: does it really do what they think it does?). When I extract a trace from a successful run, I see the SP and parameters being called. When I try to run that in SSMS it runs and has to be killed off. A successful run from the application: 3-5 seconds. SSMS: killed off after 10 minutes. What can I check to confirm or deny in this scenario? All seems very odd to me. :w00t: Cheers for any advice or tips.
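
A common cause of "fast from the app, slow from SSMS" is that the two connections run with different SET options (SSMS turns SET ARITHABORT ON by default, most client drivers leave it OFF), so each gets its own cached plan and one of them suffers from parameter sniffing. A minimal sketch for checking this; the session ids are placeholders:

[code="sql"]
-- Compare the SET options of the application's session and your SSMS session
SELECT session_id, program_name, arithabort, ansi_nulls, quoted_identifier
FROM   sys.dm_exec_sessions
WHERE  session_id IN (53, 61);   -- replace with the actual session ids

-- Then try SET ARITHABORT OFF in the SSMS window before re-running the proc,
-- which often reproduces (or removes) the slow behaviour if this is the cause
[/code]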

1 to many relationship query

Posted: 18 Apr 2013 07:46 AM PDT

Hi all, I have a documents database that has 2 tables: the main table contains the documents' general information and the other tracks changes to the documents. I need to know, for each document, the [b]oldest[/b] code status from the 2nd table based on the InsertedDate. Note: every time a document is renewed a new line is inserted in the main table, but the document name is the same. At the same time a new status is created in the 2nd table. Here's the example:

--remove temp tables if necessary
IF OBJECT_ID('tempdb..#temp_main') IS NOT NULL DROP TABLE #temp_main
IF OBJECT_ID('tempdb..#temp_status') IS NOT NULL DROP TABLE #temp_status

--make temp_main
CREATE TABLE #temp_main (
    Uniq_ID int NOT NULL,
    DocCode char(32) NOT NULL,
    EffectiveDate datetime NULL
)

--make temp_status
CREATE TABLE #temp_status (
    Uniq_ID int NOT NULL,
    CodeStatus char(2) NOT NULL,
    InsertedDate datetime NULL
)

--populate temp tables
INSERT INTO #temp_main (Uniq_ID, DocCode, EffectiveDate) VALUES (65977,123456,'2012-07-19 00:00:00.000')
INSERT INTO #temp_main (Uniq_ID, DocCode, EffectiveDate) VALUES (65977,123456,'2012-07-19 00:00:00.000')
INSERT INTO #temp_main (Uniq_ID, DocCode, EffectiveDate) VALUES (65977,123456,'2012-07-19 00:00:00.000')
INSERT INTO #temp_main (Uniq_ID, DocCode, EffectiveDate) VALUES (66065,123456,'2013-07-18 00:00:00.000')
INSERT INTO #temp_status (Uniq_ID, CodeStatus, InsertedDate) VALUES (65977,'N','2012-07-19 18:23:44.637')
INSERT INTO #temp_status (Uniq_ID, CodeStatus, InsertedDate) VALUES (65977,'E','2012-07-19 18:36:16.027')
INSERT INTO #temp_status (Uniq_ID, CodeStatus, InsertedDate) VALUES (65977,'E','2012-07-25 15:44:47.553')
INSERT INTO #temp_status (Uniq_ID, CodeStatus, InsertedDate) VALUES (66065,'R','2013-04-18 16:01:02.510')

select a.Uniq_ID, a.DocCode, a.EffectiveDate, b.Uniq_ID, b.CodeStatus, b.InsertedDate
from #temp_main as a
left join #temp_status as b on b.Uniq_ID = a.Uniq_ID
group by a.Uniq_ID, a.DocCode, a.EffectiveDate, b.Uniq_ID, b.CodeStatus, b.InsertedDate

Uniq_ID     DocCode   EffectiveDate           Uniq_ID     CodeStatus InsertedDate
----------- --------- ----------------------- ----------- ---------- -----------------------
65977       123456    2012-07-19 00:00:00.000 65977       E          2012-07-19 18:36:16.027
65977       123456    2012-07-19 00:00:00.000 65977       E          2012-07-25 15:44:47.553
65977       123456    2012-07-19 00:00:00.000 65977       N          2012-07-19 18:23:44.637
66065       123456    2013-07-18 00:00:00.000 66065       R          2013-04-18 16:01:02.510

(4 row(s) affected)

What I want to achieve is:

Uniq_ID     DocCode   EffectiveDate           Uniq_ID     CodeStatus InsertedDate
----------- --------- ----------------------- ----------- ---------- -----------------------
65977       123456    2012-07-19 00:00:00.000 65977       N          2012-07-19 18:23:44.637
66065       123456    2013-07-18 00:00:00.000 66065       R          2013-04-18 16:01:02.510

Thank you all for your time!!
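
One possible approach against the temp tables above: number each document's status rows by InsertedDate and keep only the first. This is just a sketch; it takes the oldest status per Uniq_ID, which matches the sample output, but adjust the PARTITION BY if "oldest" should instead be evaluated per DocCode across renewals:

[code="sql"]
;WITH ranked AS
(
    SELECT  a.Uniq_ID, a.DocCode, a.EffectiveDate,
            b.Uniq_ID AS Status_Uniq_ID, b.CodeStatus, b.InsertedDate,
            ROW_NUMBER() OVER (PARTITION BY a.Uniq_ID
                               ORDER BY b.InsertedDate ASC) AS rn
    FROM    #temp_main AS a
    LEFT JOIN #temp_status AS b ON b.Uniq_ID = a.Uniq_ID
)
SELECT Uniq_ID, DocCode, EffectiveDate, Status_Uniq_ID, CodeStatus, InsertedDate
FROM   ranked
WHERE  rn = 1;
[/code]

The duplicate rows in #temp_main collapse naturally here because ROW_NUMBER hands out exactly one rn = 1 per Uniq_ID.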

Help interpret SQLIO result

Posted: 18 Apr 2013 04:59 AM PDT

I'm trying to either prove or disprove that the slow front-end (web UI) performance is, in some part, attributable to a disk bottleneck. I've run SQLIO to gather some metrics, but they'll always be just numbers without context. For example:

c:\Program Files (x86)\SQLIO>sqlio -kR -t4 -s120 -o64 -frandom -b64 -LS -Fparam.txt
sqlio v1.5.SG
using system counter for latency timings, 3579545 counts per second
parameter file used: param.txt
file e:\sqlio\testfile.dat with 2 threads (0-1) using mask 0x0 (0)
2 threads reading for 120 secs from file e:\sqlio\testfile.dat
using 64KB random IOs
enabling multiple I/Os per thread with 64 outstanding
using specified size: 100 MB for file: e:\sqlio\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 1649.36
MBs/sec: 103.08
latency metrics:
Min_Latency(ms): 4
Avg_Latency(ms): 77
Max_Latency(ms): 779

How can I provide context to the above to state definitively that the disk subsystem is underperforming for a multi-client UI tool? All of the articles I've read online only seem to use SQLIO as a tuning tool (optimize within existing hardware scope), not as a tool to determine whether the architecture is suitable or not. Anyone have any ideas? Am I using the right tool? How can I either make this case or move on to something else? Thanks in advance.

Does Microsoft have a tool similar to Red Gate?

Posted: 18 Apr 2013 08:02 AM PDT

One of the guys here in the office thought Microsoft had one, but has no clue what it is called. He asked me to evaluate it, but I couldn't find anything. Thanks.

Naming Multi-Line Table-Valued Function Default Constraints

Posted: 18 Apr 2013 04:54 AM PDT

I want to be able to name the default constraints that I place on the table returned from a multi-line table-valued function, however SQL Server doesn't appear to support naming the constraint in this scenario. I want to be able to do this so that schema comparisons don't get false positives when comparing the names given to these default constraints. Even though you can't specify them, SQL Server does give names to these default constraints. I'd like to be able to do something like what's in bold below:

[code="sql"]
CREATE FUNCTION [dbo].[GetElementList]
(
    @color smallint
)
RETURNS @elementList TABLE
(
    PartName VARCHAR(50),
    Color VARCHAR(50),
    Active INT [b]CONSTRAINT DF_GetElementList_Active[/b] DEFAULT 0
)
AS
BEGIN
    ...
    [i]Body removed for brevity.[/i]
    ...
END
[/code]

Does anybody know a way to do this, or a way to achieve the same end result?

RESTORE DATABASE

Posted: 18 Apr 2013 06:31 AM PDT

Hello, what is the syntax to restore a database with these requirements: I need to run the restore SQL job from server 1, but the backup file is on server 2 and the database I need to restore is on server 2. Here is how I started:

RESTORE DATABASE MyDatabase FROM DISK='E:\Backups\MyDatabase.bak'

Can anyone help me? Thank you
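
One thing to keep in mind: the FROM DISK path is read by the instance that executes the RESTORE, so if the restoring instance and the .bak file are not on the same machine you need a UNC path that the SQL Server service account can read. A minimal sketch (share, file, and logical file names are placeholders - run RESTORE FILELISTONLY against the backup to see the real logical names):

[code="sql"]
-- Run this on the instance that owns the database being restored (server 2),
-- e.g. by pointing the job step at that server
RESTORE DATABASE MyDatabase
FROM DISK = N'\\Server2\Backups\MyDatabase.bak'
WITH MOVE N'MyDatabase'     TO N'E:\Data\MyDatabase.mdf',      -- logical names are assumptions
     MOVE N'MyDatabase_log' TO N'E:\Logs\MyDatabase_log.ldf',
     REPLACE;
[/code]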

How do I add a row when a column is read only?

Posted: 18 Apr 2013 06:57 AM PDT

How do I add a row when a column is read only? I think the issue I am having is because I am using an ID field (column) that is automatically generated and incremented whenever a new row is added. In Microsoft SQL Server Management Studio I cannot manually change an ID field, and it appears in a different (grayer) color. Once, after I entered all the other items in the row and hit "Enter" on my keyboard, I managed to have a row added and the ID field was automatically generated. The problem is that it seems to have been added out of sequence; I do not know why it would be 159 and not 157.
[img]http://i67.photobucket.com/albums/h292/Athono/sql%20server/stuck_zps00d884e5.png[/img]
I deleted the row and tried again. This time, it gave me an error:
[img]http://i67.photobucket.com/albums/h292/Athono/sql%20server/stuck02_zpsf175086e.png[/img]
When I try to edit the field by hand, it seems I am not allowed to:
[img]http://i67.photobucket.com/albums/h292/Athono/sql%20server/stuck03_zpse022ed98.png[/img]
Please advise. I am using SQL Server 2008 R2. Here are the version numbers:
Microsoft SQL Server Management Studio 10.50.2500.0
Microsoft Analysis Services Client Tools 10.50.2500.0
Microsoft Data Access Components (MDAC) 6.1.7601.17514
Microsoft MSXML 3.0 4.0 5.0 6.0
Microsoft Internet Explorer 9.0.8112.16421
Microsoft .NET Framework 2.0.50727.5466
Operating System 6.1.7601
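
For context: the grayed-out column is an IDENTITY column, and gaps (159 instead of 157) are normal because failed or rolled-back inserts still consume values. If an explicit value genuinely has to be supplied, a hedged sketch using IDENTITY_INSERT - the table and column names here are placeholders, not the poster's actual schema:

[code="sql"]
-- Temporarily allow explicit values in the identity column of one table
SET IDENTITY_INSERT dbo.MyTable ON;

INSERT INTO dbo.MyTable (ID, SomeColumn)   -- the identity column must be named explicitly
VALUES (157, 'example value');

SET IDENTITY_INSERT dbo.MyTable OFF;
[/code]

Only one table per session can have IDENTITY_INSERT ON at a time, and it requires ownership or ALTER permission on the table.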

Help with a query to get substring

Posted: 18 Apr 2013 05:14 AM PDT

Hello, I have a column called EventText with a string such as 'Receiving batch [688_31142.TRN].' (without the ' marks). I need to get it to be '688-31142'. I can accomplish that with this:

update #temptesttable set EventText = Replace(EventText, 'Receiving batch ','') FROM #temptesttable
update #temptesttable set EventText = Replace(EventText, '[','') FROM #temptesttable
update #temptesttable set EventText = Replace(EventText, '.TRN].','') FROM #temptesttable
update #temptesttable set EventText = Replace(EventText, '_','-') FROM #temptesttable

But there must be a much cleaner way... any suggestions?
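
One possible single-pass alternative, assuming the value always has the shape 'Receiving batch [<digits>_<digits>.TRN].' (checked only against the example string in the post):

[code="sql"]
UPDATE #temptesttable
SET EventText = REPLACE(
        SUBSTRING(EventText,
                  CHARINDEX('[', EventText) + 1,                                   -- start just after the [
                  CHARINDEX('.TRN]', EventText) - CHARINDEX('[', EventText) - 1),  -- stop just before .TRN]
        '_', '-');

-- 'Receiving batch [688_31142.TRN].'  ->  '688-31142'
[/code]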

index scan

Posted: 18 Apr 2013 05:25 AM PDT

I have a table that has a composite primary key of StudentID + SchoolYear; both columns are int. But looking at the execution plan for just a SELECT * on the table, I see it is using an index scan. Why can't it use an index seek? Thanks,
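
A seek needs a predicate it can navigate the index with; a bare SELECT * has to touch every row, so a scan of the clustered index is the expected (and cheapest) plan. A quick illustration - the column names come from the post, the table name is a placeholder:

[code="sql"]
-- Reads the whole table: a clustered index scan is expected
SELECT * FROM dbo.StudentYears;

-- Filters on the leading key column(s) of the primary key: eligible for a seek
SELECT *
FROM   dbo.StudentYears
WHERE  StudentID = 12345 AND SchoolYear = 2013;
[/code]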

SQL datatype conversion error??

Posted: 18 Apr 2013 03:02 AM PDT

Table with values:

[code]
CREATE TABLE InitialData
(
    recno int PRIMARY KEY,
    Dept Varchar(30),
    entry_name Varchar(50),
    entry varchar(500),
    orgID int
)

INSERT INTO InitialData VALUES
(1, 'Marketing', 'Timesservedstartdate', '8/6/2012 12:00:00 AM', 1234),
(2, 'Sales', 'Timesservedstartdate', '8/6/2012 12:00:00 AM', 1234),
(3, 'Development', 'Reporting', 'Somevalue', 1234),
(4, 'HumanResources', 'Reporting', '1', 1234),
(5, 'Support', 'Reporting', '1', 1234);
[/code]

I am trying to test a condition that I developed: I want to compare all the rows with the entry value '1'; any value other than 1 should be treated as 0. NOTE: entry is a varchar column.

[code]
IF (SELECT COALESCE((SELECT dI.[entry]
                     FROM dbo.InitialData AS dI WITH(NOLOCK)
                     WHERE dI.Dept = 'Reporting'
                       AND dI.entry_name = 'PledgeRequireBatch'
                       AND dI.orgID = 1234
                       AND dI.[entry] <> 1), 0)) = 1
BEGIN
    Select * from InitialData
END
ELSE
    Select TOP 1 * FROM InitialData
[/code]

I always get the ELSE branch's result, but that is not what I want:
1 Marketing Timesservedstartdate 8/6/2012 12:00:00 AM 1234
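
A guess worth checking rather than a confirmed diagnosis: because entry is varchar, the predicate dI.[entry] <> 1 implicitly converts the column to int, which fails outright on values like '8/6/2012 12:00:00 AM' and in any case changes which rows qualify. Comparing against a string literal sidesteps the conversion, for example:

[code="sql"]
-- Compare varchar to varchar so rows such as '8/6/2012 12:00:00 AM' are never converted to int
SELECT dI.[entry]
FROM   dbo.InitialData AS dI
WHERE  dI.entry_name = 'Reporting'   -- note: in the sample data 'Reporting' appears under entry_name, not Dept
  AND  dI.orgID = 1234
  AND  dI.[entry] <> '1';
[/code]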

Assigning a fieldname with rollup

Posted: 16 Apr 2013 04:26 AM PDT

I have two tables that hold racing data, and I need to add up the number of cancelled races and return it under my aliased field name (in this case, as you see in the query below, numCancelled, meaning number of races cancelled):

(select a.eid, count(*) as numCancelled
 from (select ev4.eid, f.nvid, f.perfdate, f.racenumber
       from Finish f (nolock)
       join event ev4 (nolock) on ev4.eid = f.eid
       where f.NoRace = 'true' and f.PerfDate = '2013-04-11') a
 group by a.eid with rollup)

The result set here just lists all the eids (event ids) and then the rollup total (on the row where eid is NULL), but I need my numCancelled field to contain that grand total of 7; right now it comes back as 1 when I run this subquery in my stored procedure. ??Z
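
If the aim is simply to have the grand total available on every row (or instead of the per-event counts), a windowed aggregate may be simpler than ROLLUP. A sketch against the same derived table, using the same table and column names as the post:

[code="sql"]
SELECT a.eid,
       COUNT(*)              AS cancelledForEvent,   -- per-event count
       SUM(COUNT(*)) OVER () AS numCancelled         -- grand total repeated on every row
FROM ( SELECT ev4.eid
       FROM   Finish f (NOLOCK)
       JOIN   event ev4 (NOLOCK) ON ev4.eid = f.eid
       WHERE  f.NoRace = 'true' AND f.PerfDate = '2013-04-11' ) a
GROUP BY a.eid;
[/code]

If only the single total is wanted, a plain SELECT COUNT(*) against the derived table with no GROUP BY at all does it.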

Linked Server to MS Access

Posted: 18 Apr 2013 02:56 AM PDT

All, well, this is kind of three-fold... 1. the problem, 2. the solution, 3. any alternative.

1. The problem: I receive a number of errors when trying to test the connection to a linked server. The linked server is created against an MS Access file located in the Program Files folder. When I log into SSMS using Windows authentication, the linked server connection test succeeds. When I log into SSMS using SQL Server authentication, the connection test results in a lot of errors, one error after another. Cause: after about three days trying to figure this out, I realized that the MDB file was locked. The locking is due to an automatic download service that another piece of software runs to update the mdb file. Once I stopped the service, I was able to successfully test the connection using SQL Server authentication. Question: why does this locking affect only SQL Server authentication connecting to the linked server?

2. The solution: since the users' front end does not require constant access to the MDB file, I am able to schedule a stored procedure to run automatically so that I can pull data from the mdb file into the SQL Server database without having to stop the service.

3. Is there any other solution to this problem?

Regards, Surendra

Update on ID's

Posted: 18 Apr 2013 12:39 AM PDT

Hi guys, I need a more efficient way to do this:

[b]Old Fact Table[/b]
FiscalID  SID  Value
1         344  78.44
2         345  44.33
3         346  44.55
4         347  223.44

This joins to the dimensions on the IDs:

[b]Dim_FiscalYear_Old[/b]
ID  FiscalName
1   2008
2   2009
3   2010
4   2011

[b]Dim_Scenario_Old[/b]
ID   Name
344  Model1
345  Model2
346  Model3
347  Model4

Basically the fact table stores IDs from the DIM tables, and I want to retain the old values (the names remain the same but the IDs have changed in the dimension tables).

[b]Dim_FiscalYear_New[/b]
ID  FiscalName
7   2008
8   2009
9   2010
10  2011

[b]Dim_Scenario_New[/b]
ID   Name
546  Model1
547  Model2
548  Model3
549  Model4

[b]Old Fact Table Updated or New Fact Table[/b]
FiscalID  SID  Value
7         546  78.44
8         547  44.33
9         548  44.55
10        549  223.44

[b]Note: IDs are not auto-incremented by 1[/b]
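
A sketch of one way to remap the fact keys in a single set-based UPDATE, joining old and new dimension rows on the unchanged names; the fact table name is a placeholder and this assumes FiscalName and Name are unique within their dimensions:

[code="sql"]
UPDATE f
SET    f.FiscalID = fy_new.ID,
       f.SID      = sc_new.ID
FROM   dbo.FactTable          AS f                                              -- placeholder name
JOIN   dbo.Dim_FiscalYear_Old AS fy_old ON fy_old.ID         = f.FiscalID
JOIN   dbo.Dim_FiscalYear_New AS fy_new ON fy_new.FiscalName = fy_old.FiscalName
JOIN   dbo.Dim_Scenario_Old   AS sc_old ON sc_old.ID         = f.SID
JOIN   dbo.Dim_Scenario_New   AS sc_new ON sc_new.Name       = sc_old.Name;
[/code]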

SSIS Transfer SQL Server Object Task

Posted: 19 Jul 2012 08:05 AM PDT

Hello, all! Due to database corruption referenced in a previous post, I am attempting to transfer all objects and data out of one database into another using SSIS. So far, I have been able to troubleshoot many various errors, but I noticed something strange while trying to track this one down and it makes me wonder if I'm missing some setting or something. Here is the error...

Execution failed with the following error: "ERROR : errorCode=-1073548784 description=Executing the query "GRANT EXECUTE ON [DMUSER].[InsertHFR_WarrantyCard]..." failed with the following error: "Cannot find the object 'InsertHFR_WarrantyCard', because it does not exist or you do not have permission.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

I assumed it had to do with permissions of the DMUSER user and/or the DMUSER schema or something along those lines, but when I compare the source db to the new objects in the destination db, I find that the stored proc in the source is '[b]DMUSER[/b].InsertHFR_WarrantyCard' and the one in the destination is now '[b]dbo[/b].InsertHFR_WarrantyCard'. It appears that this didn't happen to all of the stored procs in the DMUSER schema, just certain ones. Why in the world would it do that??! :hehe: Any insight would be greatly appreciated!

(OT): If anyone just *happens* to read this post who *happens* to be attending SQL Saturday in Cleveland on Aug 18 or the Ohio North SQL Server User Group on Aug 7th... look me up! (I'm a brand-spankin-new'b!)

SQL developer vs Enterprise editions

Posted: 18 Apr 2013 04:17 AM PDT

Hi, this is about SQL Server 2008 R2. Can we use Developer Edition for all our development, testing, and staging servers, and use Enterprise Edition on the production server? What is the disadvantage, or where does the Developer edition fall short in terms of functionality? Using the Enterprise edition on all servers is expensive, and I am not sure that by itself is recommended. Note that all SQL Server functionality (DR, encryption, ...) will be implemented. Thanks

Plan Reuse - Unrelated tables in execution plan

Posted: 17 Apr 2013 10:42 PM PDT

Hi all, I have been banging my head against a brick wall over this one. Hopefully somebody can help. One of our users ran a simple query on a partitioned table, but for some reason it was taking an age to execute. The table is accessed via a simple view in another database.

[Code]
SELECT TOP 1000 *
FROM FactTable fld (NOLOCK)
WHERE PartitionKey = 50230216
[/Code]

However, the execution plan shows tables that have not even been specified in the query, which is the reason it is taking so long. Obviously the optimizer is reusing an existing plan, but this is not the best choice. I have tried:
- creating a proc and executing sp_recompile against it
- clearing the cache (DBCC FREEPROCCACHE)
- specifying WITH RECOMPILE against the proc

There is a composite non-clustered index on the base table which includes the PartitionKey (2nd column of the index). Statistics update automatically. No matter what I try, it keeps producing this sub-optimal execution plan containing the two unrelated tables. Anybody know why it would do this, especially given that the cache has been cleared? Thanks, Preet

SSIS and VS 2010

Posted: 27 Apr 2010 07:17 PM PDT

I got my fresh new copy of VS 2010 yesterday and immediately began playing. I wanted to build an SSIS package. Funny, there wasn't a template for that in the database section. I went to the web. Dear me, lots of people complaining that they couldn't find it either. At the tail end of one long diatribe there was a note: 'maybe they'll fix it with SQL Server 2008 R2'. I went looking for R2. It had been released only a few days earlier. I spent most of the evening downloading it and then more time installing it. I ran BIDS. It opened VS 2008. :angry: :angry: MS in their wisdom (and displaying an immense lack of any communication between divisions) have set it in concrete: to build an SSIS 2008 package you have to use VS 2008. Never mind that they are two totally different beasts. Never mind that the VS development team have put together add-ons for a huge amount of stuff, including old and new Frameworks and databases. So I, and I am assuming many others, will not upgrade any system that uses SSIS to VS 2010, no matter what other benefits we are missing out on, because we can't run it all in the same IDE. Good one, Microsoft.

Why such a delay in the commit?

Posted: 18 Apr 2013 01:52 AM PDT

I have a scheduled job (.Net, Windows Task Scheduler) which completes shortly after it is called, but the SQL work it should accomplish in real time does not happen until several hours later. The .Net scheduled job calls a stored procedure which updates 2 "top level" tables and outputs 3 data readers (that's what they are to the .Net program). Both updated tables have 2 update triggers each (one trigger inserts to a log table and one manages some computed fields in "quick search" tables). One of the fields updated in each table is a datetime field, set to getdate() within the sproc. Temp tables are used to identify the rows to be updated. Quick pseudo-code (showing just one update and select statement):

declare @mydate datetime
set @mydate = getdate()

declare @UpdateIDs table (MyID int)

insert @UpdateIDs
select CaseID
from TopLevelTable1
where ...

begin transaction

update TopLevelTable1
set Flag1 = [new value], Flag1Date = @mydate
where CaseID in (select MyID from @UpdateIDs)
-- 2 triggers are fired in conjunction with this statement

commit transaction

select [several fields]
from @UpdateIDs t
inner join [several joins from temp table]

The problem: the .Net program runs at 11pm each night and completes in about 20 seconds. It sends out an email for each row in the select and one to me regarding the ending status. All those emails happen at 11pm. The job reports success. But the time stamp (the param @mydate and the field TopLevelTable1.Flag1Date) is 6am the next day. The trigger inserting to TopLevelTable1_Log has a field LogDate which is set to getdate() within the trigger, and its value is also 6am. There is no other maintenance work being done at that time. Transaction logs (using sqllogship.exe) are being done and a full database backup is done at 1:45am. That full backup has the unmodified rows relating to this job. What could be happening?

Can full text search work on a column in multiple languages like En & Fr?

Posted: 18 Apr 2013 12:15 AM PDT

I have a table column that contains both English and French. My FTS always returns something unexpected. I am not sure whether this is because of the multilingual content or not. Can someone tell? Thanks
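
One thing that often explains odd full-text results with mixed-language text: word breaking and stemming are driven by the language assigned to the indexed column, and they can be overridden per query. A small sketch of the per-query override - the table and column names are placeholders:

[code="sql"]
-- Use the French word breaker for this query only (LCID 1036; 1033 is US English)
SELECT DocId, Body
FROM   dbo.Documents
WHERE  CONTAINS(Body, N'propriété', LANGUAGE 1036);
[/code]

If both languages genuinely live in one column, results for whichever language does not match the column's word-breaker language will tend to be the unexpected ones.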

partitioning strategy for this scenario:

Posted: 17 Apr 2013 11:54 PM PDT

Hi, here is the scenario: data is loaded into table ABC every week and the size of 1 week's data is around 10 GB. Table ABC is partitioned on the WeekId column. WeekID + some other column form the PK (clustered), and the clustered index is also partitioned. Only 52 weeks of data need to be present in table ABC. There is an archive table ARC_ABC with the same structure, also partitioned on WeekId, with a clustered index of the same structure. Every week, the oldest partition has to be moved out to the archive table and a new week of data has to be added to table ABC. Which of these two will be the better approach?

1. Create 52 partitions. Take the oldest partition out and merge old partitions; split the right-most partition for the new week's data and add the data. ISSUE: we'll need to keep making changes to the partition function and scheme. These are database-level objects - will it be a good idea to keep changing them?

2. Create the partition function and scheme for the next 5 years (52*5 partitions) so that we don't have to touch these database objects for several years. We can then keep merging the oldest partitions and keep using the new ones week by week for the next 5 years.

Thanks
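
For reference, the usual sliding-window step for option 1 looks roughly like this; the function/scheme names, filegroup, and WeekId boundary values are placeholders, and SWITCH needs the archive table's matching partition to be empty and on the same filegroup:

[code="sql"]
-- 1. Move the oldest week out of ABC into the archive (metadata-only when the tables are aligned)
ALTER TABLE dbo.ABC SWITCH PARTITION 1 TO dbo.ARC_ABC PARTITION 1;

-- 2. Remove the now-empty oldest boundary
ALTER PARTITION FUNCTION pfWeek() MERGE RANGE (201201);   -- oldest WeekId

-- 3. Nominate a filegroup and add a boundary for the incoming week
ALTER PARTITION SCHEME psWeek NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pfWeek() SPLIT RANGE (201313);   -- new WeekId
[/code]

Because the split only ever touches the empty rightmost range, the weekly maintenance stays a metadata operation in either approach; option 2 mainly trades DDL churn for a long tail of pre-created empty partitions.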

adding a prefix to a column in an SQL view

Posted: 17 Apr 2013 11:17 PM PDT

Hi all, is it possible to add a prefix to a returned result in a view in SQL? E.g.

select productId from Table1

returns

id1
id2
id3

I would like to prepend a set value (prefix) to the returned result, e.g. www.john.com/, returning the following:

www.john.com/id1
www.john.com/id2
www.john.com/id3

Is this possible? Thanks very much, John.
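
A sketch of the straightforward way: concatenate the literal in the view's select list. The view name is made up, and the CAST is only needed if productId is not already a character type:

[code="sql"]
CREATE VIEW dbo.ProductUrls
AS
SELECT  productId,
        'www.john.com/' + CAST(productId AS varchar(50)) AS ProductUrl
FROM    dbo.Table1;
[/code]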

get overtime hours

Posted: 17 Apr 2013 08:18 PM PDT

I am having a problem getting overtime hours. The table:

EmployeeNo  TimeIn  TimeOut  BreakHours  WorkingHours
001         06:20   19:00    1           8

What query will get the overtime hours? Note: the datatype of TimeIn, TimeOut, BreakHours, and WorkingHours is decimal(6,3).

OverTime = (Timein + timeout) - BreakHours - WorkingHours
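
Assuming the intent is elapsed time (TimeOut minus TimeIn, rather than the sum written above) less break and standard hours, and that the decimal(6,3) columns store hours as decimal fractions (06:20 stored as 6.333), a minimal sketch - the table name is a placeholder:

[code="sql"]
SELECT  EmployeeNo,
        (TimeOut - TimeIn) - BreakHours - WorkingHours AS OverTimeHours
FROM    dbo.EmployeeTimes;   -- hypothetical table name
-- Example row: (19.000 - 6.333) - 1 - 8 = 3.667 overtime hours
[/code]

If the columns actually store HH.MM style values (6.200 meaning 06:20), the minutes portion would need converting to a fraction of an hour before subtracting.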

Indexes with JOINs

Posted: 17 Apr 2013 10:13 PM PDT

Hi all, I have some knowledge about indexes, but it gets very fuzzy when the queries become more complex, whereupon everything I read on the internet seems to contradict itself. For example, say we have the following two tables and two queries run as reports:

People:
PeopleId
FirstName
LastName
Age
Address
Country
Profession
.... some other fields ....

Employee:
EmployeeId
FirstName
LastName
Company
Department
EmploymentStatus
... some other fields ...

Query 1:
SELECT P.FirstName, P.LastName, P.Age, P.Address, E.Company, E.Department
FROM People P
INNER JOIN Employees E ON (P.LastName = E.LastName AND P.FirstName = E.FirstName)
WHERE E.EmploymentStatus = 'Employed' AND P.Profession IS NOT NULL

Query 2:
SELECT P.FirstName, P.LastName, P.Age, P.Address, E.Company, E.Department
FROM People P
INNER JOIN Employees E ON (P.LastName = E.LastName)
WHERE E.EmploymentStatus = 'Employed'

By the way, there are no primary keys or clustered indexes in this example, for simplicity's sake. For Query 1, I would write the following two non-clustered indexes, one per table (please don't mind the syntax), in order to get an index seek:

CREATE NONCLUSTERED INDEX INDEX_Q1_Employee ON Employee
(
    LastName ASC,
    FirstName ASC,
    EmploymentStatus ASC
)
INCLUDE (Company, Department)

CREATE NONCLUSTERED INDEX INDEX_Q1_People ON People
(
    LastName ASC,
    FirstName ASC,
    Profession ASC
)
INCLUDE (Age, Address)

Question 1: Are these indexes correct? From my knowledge, whatever is in the JOIN or WHERE clauses should get placed inside the index key, and what remains in the SELECT statement should get "covered" and placed in the INCLUDE portion.

Question 2: The two indexes above are written for Query 1. Can an index seek on both tables be obtained using these indexes for Query 2? Please note that unlike Query 1, Query 2 only joins on LastName and there is no "P.Profession is not null" in the WHERE clause. I'm curious whether the two indexes created cover this query or whether I need another set of indexes to get an index seek.

Any help would be greatly appreciated.
