Wednesday, March 13, 2013

[T-SQL] Word wrapping query output



Word wrapping query output

Posted: 13 Mar 2013 05:51 AM PDT

This one is probably very simple, but I can't find the answer to save my life.

I have a query that I run in which the output has to be in text mode (not grid mode). One of the output columns is text that often is several hundred characters long. This of course makes my output extremely wide. I'd like to format this one column so that it's only 50 characters wide but the text is wrapped to several lines tall.

Is this possible?

Convert String to Time Format

Posted: 13 Mar 2013 04:26 AM PDT

Hello Everyone

I have been working most of the morning to try and figure out how I can convert a string to a time format. So far, I have not found anything that will allow this in SQL Server. So, I have come up with another way, unless someone has a nifty convert function in their toolbox.

I have a string that is taken from a file name, and I would like to store it in a time-format column in a table. I am already storing the date portion, which seems to convert just fine. This is my example of a string that I have:

[code="sql"]012345[/code]

I need to insert a colon after every 2 digits, just like a time would be formatted:

[code="sql"]01:23:45[/code]

I can easily convert this to a Time(0) format as long as the colons are in place. But if, for some odd reason, the string happens to come by as '12345', missing the leading zero, how can I handle that so that the time format is still correct? Either way, I need this to be able to be converted to a Time(0) format.

Thank You in advance for all your assistance, suggestions, and comments.

Andrew SQLDBA
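One possible approach, sketched under the assumption that the string is always 5 or 6 digits: re-pad with RIGHT so a missing leading zero is restored, then STUFF in the colons before converting.

[code="sql"]
DECLARE @s VARCHAR(6) = '12345';

-- Left-pad to 6 digits so a missing leading zero is restored
SET @s = RIGHT('000000' + @s, 6);

-- Insert a colon after positions 2 and 4, then convert to TIME(0)
SELECT CAST(STUFF(STUFF(@s, 3, 0, ':'), 6, 0, ':') AS TIME(0)) AS TimeValue;
[/code]

This returns 01:23:45 for '012345' and for '12345' alike, since the padding normalizes both inputs first.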

Recursive Update Trigger

Posted: 13 Mar 2013 03:32 AM PDT

I am trying to create an update trigger that updates an IsActive field depending on different criteria.

If DateField1 >= GETDATE() OR DateField2 >= GETDATE() AND Extension = 1, then IsActive is true.
If DateField1 < GETDATE() AND DateField2 < GETDATE() OR DateField2 IS NULL, then IsActive is false.

Right now I have this as two separate updates, but that makes the trigger recursive. Can I combine these using a CASE statement?
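A CASE expression can fold both rules into a single UPDATE, so the trigger only touches the table once. A sketch with assumed table, key, and column names; note the parentheses, since the AND/OR precedence in the stated rules is ambiguous and needs to be pinned down to match the real intent:

[code="sql"]
-- Hypothetical trigger body: one pass instead of two separate updates
UPDATE t
SET IsActive = CASE
        WHEN (t.DateField1 >= GETDATE() OR t.DateField2 >= GETDATE())
             AND t.Extension = 1 THEN 1
        ELSE 0
    END
FROM dbo.MyTable AS t
INNER JOIN inserted AS i ON i.PKColumn = t.PKColumn;
[/code]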

Split A String into 4 parts

Posted: 13 Mar 2013 01:44 AM PDT

Hello Everyone

Thank you all for the suggestions yesterday. I have written an SSIS package that now needs to read the file name of a text file to pump in. I need to read the file name, split it into pieces, and use the parts of the file name for some other things. My real issue is splitting the file name and getting the parts that I need. I cannot use the tally-table splitter; I just need plain SQL functions to split the file name.

This is the file name, which consists of TypeofData_Date_Time.FileExtension:

[code="sql"]Notes_20130204_003015.txt[/code]

I would like to split the string at each underscore. I do not care about the file extension or the underscores; all I need are the other three parts. The file name will change every day when the file is generated, and it will be generated multiple times per day, hence the time in the file name. I am in need of these three parts of the file:

Notes
20130204
003015

Thank You in advance for all your assistance, suggestions and comments

Andrew SQLDBA
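Since the format is fixed (two underscores, then a dot before the extension), plain CHARINDEX/SUBSTRING is enough; a sketch:

[code="sql"]
DECLARE @f VARCHAR(100) = 'Notes_20130204_003015.txt';

DECLARE @u1  INT = CHARINDEX('_', @f);             -- first underscore
DECLARE @u2  INT = CHARINDEX('_', @f, @u1 + 1);    -- second underscore
DECLARE @dot INT = CHARINDEX('.', @f);             -- extension separator

SELECT LEFT(@f, @u1 - 1)                      AS TypeOfData,  -- Notes
       SUBSTRING(@f, @u1 + 1, @u2 - @u1 - 1)  AS FileDate,    -- 20130204
       SUBSTRING(@f, @u2 + 1, @dot - @u2 - 1) AS FileTime;    -- 003015
[/code]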

Performance Problems converting data into columns

Posted: 12 Mar 2013 10:33 PM PDT

Hello, I am in need of some help with an SSIS package (SQL 2008). I am trying to extract data out of a Progress database and provide a reporting database for our users.

The area where I am having some trouble is some tables which have columns where data is stored split by a ';'. This data actually represents different fields in the application, so the end game is to have these in their own columns for people to select from.

Example of the table:

[code="plain"]
GroupNumber Group_id GroupName  AddressLine1       AddressLine2 AddressLine3 Postcode DataColumn
Group1      0001     TestGroup  Capital letter Ltd Test Street  Test Town    XX1 22A  00-00-00;112115;TestBank
GROUP2      0002     TestGroup2 Bad letter Ltd     Test2 Street Test2 Town   XX2 11B  11-00-55;522445;TestBank2
[/code]

The way the data needs to end up is to have the data column split out into:

[code="plain"]
Sortcode AccountNumber BankName
00-00-00 112115        TestBank
11-00-55 522445        TestBank2
[/code]

The issue I am having is the performance of the methods I have tried so far: the DataColumn has up to 200 'columns' within it. This is fine over a few records (<100), but when it runs across >40,000 rows I am starting to notice some serious performance problems; results are taking as long as 8 minutes. If I only split 10 out of 100 columns it performs much better over 40,000 rows than splitting them all.

Example data:

[code="sql"]
CREATE TABLE GroupTable (
    GroupNumber VARCHAR(6) NULL
    ,Group_ID INT
    ,GroupName VARCHAR(10) NULL
    ,AddressLine1 VARCHAR(50) NULL
    ,AddressLine2 VARCHAR(50) NULL
    ,AddressLine3 VARCHAR(50) NULL
    ,Postcode VARCHAR(8) NULL
    ,Datacolumn VARCHAR(300)
);

INSERT INTO GroupTable (GroupNumber,Group_ID,GroupName,AddressLine1,AddressLine2,AddressLine3,Postcode,datacolumn)
VALUES ('Group1',1,'testgroup','Capital letter Ltd','1 Test Street','Test Town','XX1 22A','00-00-00;112115;TestBank'),
       ('Group2',2,'testgroup2','Low letter Ltd','2 Test2 Street','Test2 Town','XX2 11B','11-00-55;522445;TestBank2'),
       ('Group1',1,'testgroup','Capital letter Ltd','1 Test Street','Test Town','XX1 22A','10-10-10;112115;TestBank'),
       ('Group2',2,'testgroup2','Low letter Ltd','2 Test2 Street','Test2 Town','XX2 11B','11-00-00;522445;TestBank2')
[/code]

I originally started using the following function (from an old post on these forums), and it was working ok until I got to pulling larger amounts of data (and adding in more columns to split):

[code="sql"]
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER FUNCTION [dbo].[FromProgressArray] (@array NVARCHAR(4000), @index INT)
RETURNS NVARCHAR(256)
AS
BEGIN
    IF @index = 0
        RETURN( LEFT(@array, CHARINDEX(';', @array) - 1) )

    DECLARE @counter INT
    SELECT @counter = 0
    WHILE @counter + 1 < @index
    BEGIN
        IF (CHARINDEX(';', @array) <> 0)
            SELECT @array = SUBSTRING(@array, CHARINDEX(';', @array) + 1, LEN(@array))
        ELSE
            SELECT @array = ''
        SELECT @counter = @counter + 1
    END
    IF CHARINDEX(';', @array) <> 0
        SELECT @array = LEFT(@array, CHARINDEX(';', @array) - 1)
    RETURN( @array )
END
[/code]

Example query:

[code="sql"]
SELECT GroupNumber
    ,Group_ID
    ,GroupName
    ,AddressLine1
    ,AddressLine2
    ,AddressLine3
    ,Postcode
    ,dbo.FromProgressArray(datacolumn,1) AS sortcode
    ,dbo.FromProgressArray(datacolumn,2) AS AccountNumber
    ,dbo.FromProgressArray(datacolumn,3) AS BankName
FROM dbo.GroupTable
[/code]

I have also tried the tally table method mentioned by Jeff Moden: http://www.sqlservercentral.com/articles/T-SQL/62867/. Although this performs 50% quicker, I have a problem here transposing the rows into columns. I was using the function DelimitedSplit8K. Example query:

[code="sql"]
SELECT GroupNumber
    ,GroupName
    ,AddressLine1
    ,AddressLine2
    ,AddressLine3
    ,Postcode
    ,MAX(CASE WHEN dsk.ItemNumber = 1 THEN dsk.Item END) AS SortCode
    ,MAX(CASE WHEN dsk.ItemNumber = 2 THEN dsk.Item END) AS AccountNumber
    ,MAX(CASE WHEN dsk.ItemNumber = 3 THEN dsk.Item END) AS BankName
FROM GroupTable
CROSS APPLY dbo.DelimitedSplit8K(datacolumn, ';') dsk
GROUP BY GroupNumber
    ,GroupName
    ,AddressLine1
    ,AddressLine2
    ,AddressLine3
    ,Postcode
[/code]

The problem here is that the group record can have multiple entries in the table, and the only way to distinguish the differences is once the datacolumn has been split out. This causes problems when using the above method, as the grouping messes it up. As you can see, only two results are shown, not the 4 that should be. If you add dsk.ItemNumber into the grouping, the CASE ceases to work. I will also point out that the datacolumn can have no data between the semicolons; in the event it finds one, it should add in a blank or NULL entry.

I have also looked into using the derived column feature, but am not sure about the best way of doing this and haven't found that much when using search engines (I may be using the wrong keywords though). Could someone tell me which way should be more efficient, using a function or a derived column in SSIS, and assist with getting me on the right road so to say?

Many Thanks

Converting Rows to columns

Posted: 12 Mar 2013 10:19 AM PDT

I want to convert rows to columns. For example, input rows of 2, 2, 0 should produce the output, as columns: 2 2 0

Insert Performance

Posted: 12 Mar 2013 01:49 PM PDT

Hi,

Which will perform faster/better when inserting a million rows to a table: individual inserts, or putting all the inserts in a transaction and then committing? Does it make a considerable difference?

Example:

[code="sql"]
insert into ...
insert into ...
insert into ...
insert into ...
insert into ...
[/code]

or

[code="sql"]
begin transaction
insert into ...
insert into ...
insert into ...
insert into ...
insert into ...
commit transaction;
[/code]

Thanks!

Very frustrating performance tuning!

Posted: 12 Mar 2013 09:54 AM PDT

So I have run into this most aggravating of issues :-D

Here is what I have: I am loading sales order data from staging data. I have this in production, and I have it in test. Our production server has well over 2x the resources of test (16 cores, 50gb memory, several RAID 5's, etc. vs 8 cores, 24gb memory, 1 RAID 5).

I have been having some performance issues with said load script in production, so I restored to test and began tuning. One thing I noticed right away is that test was running the same exact data and script 2x faster! So diving into the first script, which basically removes records from the destination that no longer exist in the source, I noticed in the execution plan that it is using a merge join instead of a hash join to join the 2 large tables (~13 million records). On the test system it is using the hash join. So just to test, I used the hash join option on the production server and it runs the same speed as the test server. I rebuilt all the indexes, created new statistics, and messed with maxdop and recompile options to no avail.

So my question is... what the heck am I missing? I have not come across such a conundrum in all my years as a DBA!

Advanced String Manipulation

Posted: 12 Mar 2013 10:12 AM PDT

I have a string that needs some serious help... I need to rewrite many strings with the aggregate value of everything in positions 1-12, on a per-UserID basis. Essentially, I need to GROUP BY UserID. The formatting needs to stay exactly the same fixed length.

Here's the breakdown of the string: the [Amount] is everything in positions 1-12; the [UserID] is the final 6 characters.

Thanks in advance...

[code="sql"]
IF OBJECT_ID('TempDB..#StrungOut','U') IS NOT NULL
    DROP TABLE [#StrungOut]

CREATE TABLE [dbo].[#StrungOut](
    [Value] varchar (255) NOT NULL)

INSERT INTO #StrungOut (Value)
SELECT 'D000000206408312335308055102 Joe Blow 1002220200000290357' UNION ALL
SELECT 'D00000248190001233930343216 Sue O''Malley 1004539500000290360' UNION ALL
SELECT 'D00031228690003213965347415 Sue O''Malley 1004539500000290360' UNION ALL
SELECT 'D00000022940010007560323433 Fred Greenface 1002403000000290361' UNION ALL
SELECT 'D000000063100013455348487150327 Bill Guy 9835627000000290363' UNION ALL
SELECT 'D000000063700043445238454350364 Bill Guy 1004695300000290363' UNION ALL
SELECT 'D00000020640010005467116730 billy buck 9836280000000290376' UNION ALL
SELECT 'D00000020640010001334112330 billy buck 9836280000000290376' UNION ALL
SELECT 'D00000350640010001356162670 billy buck 1236280000000290400' UNION ALL
SELECT 'D00000020640001602130103454 dave user 1004253500000290379' UNION ALL
SELECT 'D00001220440002304835606876 dave user 5004253500000290379'

SELECT * FROM #StrungOut
[/code]
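A sketch of one way to do it, under two assumptions: the leading 'D' in position 1 is a record-type marker (so the numeric amount lives in positions 2-12), and the per-UserID total still fits in 11 digits:

[code="sql"]
-- Sum positions 2-12 per UserID (the final 6 chars), then overwrite
-- positions 2-12 with the zero-padded group total, keeping the fixed length.
;WITH Totals AS (
    SELECT RIGHT(Value, 6) AS UserID,
           SUM(CAST(SUBSTRING(Value, 2, 11) AS BIGINT)) AS TotalAmount
    FROM #StrungOut
    GROUP BY RIGHT(Value, 6)
)
SELECT s.Value,
       STUFF(s.Value, 2, 11,
             RIGHT(REPLICATE('0', 11) + CAST(t.TotalAmount AS VARCHAR(11)), 11)) AS Rewritten
FROM #StrungOut AS s
JOIN Totals AS t ON RIGHT(s.Value, 6) = t.UserID;
[/code]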

How do I optimize a query with a text column?

Posted: 12 Mar 2013 02:14 AM PDT

I am looking for a way to optimize a SELECT query that includes a column with the data type text. I cannot change the table. I am open to any and all suggestions! Thanks!

HawkeyeDBA

Reporting on report execution over the last 12 months

Posted: 12 Mar 2013 11:02 AM PDT

Hi guys,

I'm looking to create a report that will detail how many times each SSRS report was executed over the last 12 months. I'm aware that by default the execution log only holds 2 months' worth of data, but we're going to be changing this to hold 12 months' worth.

My initial thought was to do some kind of pivot, but I'm not sure how to make this dynamic in terms of the dates. I'd like the report to look something like:

[code="plain"]
Report Name  Jan 2013  Feb 2013  Mar 2013
Report1      12        15        10
[/code]

... and so on. What would be the best way for me to do this?

Thanks in advance
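A possible starting point (a sketch; assumes the default ReportServer catalog database name): count executions per report per month from the ExecutionLog2 view, then pivot the months in the report layer itself or with dynamic SQL:

[code="sql"]
-- One row per report per month; the SSRS matrix (or a dynamic PIVOT)
-- can turn the Year/Month rows into columns.
SELECT ReportPath,
       YEAR(TimeStart)  AS [Year],
       MONTH(TimeStart) AS [Month],
       COUNT(*)         AS Executions
FROM ReportServer.dbo.ExecutionLog2
WHERE TimeStart >= DATEADD(MONTH, -12, GETDATE())
GROUP BY ReportPath, YEAR(TimeStart), MONTH(TimeStart)
ORDER BY ReportPath, [Year], [Month];
[/code]

Keeping the query unpivoted and letting the report matrix do the column spreading sidesteps the dynamic-date problem entirely.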

How to make triple-pass UPDATE single-pass?

Posted: 12 Mar 2013 04:34 PM PDT

I'm attempting to optimize some code that I was just handed, and I'm not exactly sure if what I want to do is possible. The way it's currently written, there are 3 temp tables that have just a few rows each; Header has 5 million rows and Details has 3 million rows.

I have included a stripped down version of the code as it is currently written: 3 updates to populate a "summary row", making three passes of multi-million row tables. I am trying to find a way to do this in a single pass. Expected output (sorry for any formatting glitches):

[code="plain"]
EmployeeID Code1PeriodToDate Code1YearToDate Code2PeriodToDate Code2YearToDate Code3PeriodToDate Code3YearToDate
1          3.00              3.00            4.00              4.00            5.00              5.00
[/code]

We are running SQL 2008 R2 Standard. Thanks in advance for any suggestions.

sqlnyc

[code="sql"]
CREATE TABLE CodeIDTable1 (
    CodeID INT NOT NULL,
    Code CHAR(1) NOT NULL)
CREATE TABLE CodeIDTable2 (
    CodeID INT NOT NULL,
    Code CHAR(1) NOT NULL)
CREATE TABLE CodeIDTable3 (
    CodeID INT NOT NULL,
    Code CHAR(1) NOT NULL)

INSERT CodeIDTable1 VALUES (1, 'A')
INSERT CodeIDTable1 VALUES (2, 'B')
INSERT CodeIDTable1 VALUES (3, 'C')
INSERT CodeIDTable2 VALUES (4, 'D')
INSERT CodeIDTable2 VALUES (5, 'E')
INSERT CodeIDTable2 VALUES (6, 'F')
INSERT CodeIDTable3 VALUES (7, 'G')
INSERT CodeIDTable3 VALUES (8, 'H')
INSERT CodeIDTable3 VALUES (9, 'I')

CREATE TABLE Details (
    HeaderID INT NOT NULL
    ,DetailID INT NOT NULL
    ,CodeID INT NOT NULL
    ,EmployeeID INT NOT NULL
    ,Date Datetime NOT NULL
    ,Amount DECIMAL (18,2) NOT NULL)

CREATE TABLE Header (
    HeaderID INT NOT NULL
    ,EmployeeID INT NOT NULL
    ,HeaderDate DATETIME NOT NULL)

INSERT Header VALUES (1, 1, '2013-01-01')
INSERT Header VALUES (2, 1, '2013-01-01')
INSERT Header VALUES (3, 1, '2013-01-01')

INSERT Details (HeaderID, DetailID, CodeID, EmployeeID, Date, Amount) VALUES (1, 1, 1, 1, '2013-01-01', 1.00)
INSERT Details (HeaderID, DetailID, CodeID, EmployeeID, Date, Amount) VALUES (1, 2, 1, 1, '2013-01-01', 1.00)
INSERT Details (HeaderID, DetailID, CodeID, EmployeeID, Date, Amount) VALUES (1, 3, 1, 1, '2013-01-01', 1.00)
INSERT Details (HeaderID, DetailID, CodeID, EmployeeID, Date, Amount) VALUES (2, 4, 5, 1, '2013-01-01', 1.00)
INSERT Details (HeaderID, DetailID, CodeID, EmployeeID, Date, Amount) VALUES (2, 5, 5, 1, '2013-01-01', 1.00)
INSERT Details (HeaderID, DetailID, CodeID, EmployeeID, Date, Amount) VALUES (2, 6, 5, 1, '2013-01-01', 1.00)
INSERT Details (HeaderID, DetailID, CodeID, EmployeeID, Date, Amount) VALUES (2, 7, 6, 1, '2013-01-01', 1.00)
INSERT Details (HeaderID, DetailID, CodeID, EmployeeID, Date, Amount) VALUES (3, 8, 7, 1, '2013-01-01', 1.00)
INSERT Details (HeaderID, DetailID, CodeID, EmployeeID, Date, Amount) VALUES (3, 9, 7, 1, '2013-01-01', 1.00)
INSERT Details (HeaderID, DetailID, CodeID, EmployeeID, Date, Amount) VALUES (3, 10, 7, 1, '2013-01-01', 1.00)
INSERT Details (HeaderID, DetailID, CodeID, EmployeeID, Date, Amount) VALUES (3, 11, 7, 1, '2013-01-01', 1.00)
INSERT Details (HeaderID, DetailID, CodeID, EmployeeID, Date, Amount) VALUES (3, 12, 7, 1, '2013-01-01', 1.00)

--drop TABLE EmployeeSummary
CREATE TABLE EmployeeSummary (
    EmployeeID INT NOT NULL
    ,Code1PeriodToDate DECIMAL(18,2) NULL
    ,Code1YearToDate DECIMAL(18,2) NULL
    ,Code2PeriodToDate DECIMAL(18,2) NULL
    ,Code2YearToDate DECIMAL(18,2) NULL
    ,Code3PeriodToDate DECIMAL(18,2) NULL
    ,Code3YearToDate DECIMAL(18,2) NULL)

--TRUNCATE TABLE EmployeeSummary
INSERT EmployeeSummary (EmployeeID) VALUES (1)

DECLARE @FirstDayOfYear DATETIME = '2013-01-01', @LastDayOfYear DATETIME = '2013-12-31'
DECLARE @ReportStartingDate DATETIME = '2013-01-01', @ReportEndingDate DATETIME = '2013-12-31'

UPDATE EmployeeSummary
SET Code1PeriodToDate = Summary.AmountPeriod
    ,Code1YearToDate = Summary.AmountYtd
FROM (
    SELECT Header.EmployeeID
        ,SUM(CASE WHEN Header.HeaderDate BETWEEN @ReportStartingDate AND @ReportEndingDate THEN Details.Amount ELSE 0 END) AS AmountPeriod
        ,SUM(CASE WHEN Header.HeaderDate <= @ReportEndingDate THEN Details.Amount ELSE 0 END) AS AmountYtd
    FROM Details
    INNER JOIN CodeIDTable1 ON CodeIDTable1.CodeID = Details.CodeID
    INNER JOIN Header ON Details.HeaderID = Header.HeaderID
    WHERE Header.HeaderDate BETWEEN @ReportStartingDate AND @ReportEndingDate
    GROUP BY Header.EmployeeID
) AS Summary
WHERE EmployeeSummary.EmployeeId = Summary.EmployeeID

UPDATE EmployeeSummary
SET Code2PeriodToDate = Summary.AmountPeriod
    ,Code2YearToDate = Summary.AmountYtd
FROM (
    SELECT Header.EmployeeID
        ,SUM(CASE WHEN Header.HeaderDate BETWEEN @ReportStartingDate AND @ReportEndingDate THEN Details.Amount ELSE 0 END) AS AmountPeriod
        ,SUM(CASE WHEN Header.HeaderDate <= @ReportEndingDate THEN Details.Amount ELSE 0 END) AS AmountYtd
    FROM Details
    INNER JOIN CodeIDTable2 ON CodeIDTable2.CodeID = Details.CodeID
    INNER JOIN Header ON Details.HeaderID = Header.HeaderID
    WHERE Header.HeaderDate BETWEEN @ReportStartingDate AND @ReportEndingDate
    GROUP BY Header.EmployeeID
) AS Summary
WHERE EmployeeSummary.EmployeeId = Summary.EmployeeID

UPDATE EmployeeSummary
SET Code3PeriodToDate = Summary.AmountPeriod
    ,Code3YearToDate = Summary.AmountYtd
FROM (
    SELECT Header.EmployeeID
        ,SUM(CASE WHEN Header.HeaderDate BETWEEN @ReportStartingDate AND @ReportEndingDate THEN Details.Amount ELSE 0 END) AS AmountPeriod
        ,SUM(CASE WHEN Header.HeaderDate <= @ReportEndingDate THEN Details.Amount ELSE 0 END) AS AmountYtd
    FROM Details
    INNER JOIN CodeIDTable3 ON CodeIDTable3.CodeID = Details.CodeID
    INNER JOIN Header ON Details.HeaderID = Header.HeaderID
    WHERE Header.HeaderDate BETWEEN @ReportStartingDate AND @ReportEndingDate
    GROUP BY Header.EmployeeID
) AS Summary
WHERE EmployeeSummary.EmployeeId = Summary.EmployeeID

SELECT * FROM EmployeeSummary
[/code]

FOR XML and trying to avoid nested cursors

Posted: 02 Mar 2013 05:03 PM PST

Alright, history first. I've got a bit of a mess I've inherited. Groups inheriting groups inheriting... you get the drift. Each of these groups has external attributes that also need to be considered, but I'm not at that point yet. Right now, I'm trying to get a collar on the group hierarchy itself. Antiques littered with obsoletes have caused a reality where the business is just creating new groups to avoid the pitfalls that any old group may bring up. This is primarily because the vendor app can't find a good way to display the hierarchy to the users. Avoiding that entire discussion, I'm trying to build something that I can personally use to troubleshoot with, and eventually put into a position where my business users can troubleshoot themselves.

So, what's the deal? I've got 'n'-level hierarchies in this mess with a theoretical lack of an upper bound. Realistically it's seven because the software will otherwise crap out, but I don't trust the historicals; there are more than likely forgotten hierarchies stuffed in there somewhere. Add to this that the hierarchy is NOT org-chart particular: a subgroup can belong to multiple parents. If anything this is similar to a Bill of Materials chart. IE: the wheels go on both cars and trucks, the wheels have multiple components, one of which is bolts, and bolts go all over the danged place. And you can't trust a particular component to always exist at a particular tier of the hierarchy.

My current intent is to build myself an XML document I can stuff into IE and collapse/expand/search nodes to figure out what all is affected when a particular group is selected. See, whenever you select one, it unrolls in the app to all underlying levels. So I need to be able to get a full node list of all the underlying pieces. That XML is killing me, because it pivots EVERYTHING to be usable as a node tree. I'm hoping you guys can give me a bit of a hand.

First, some sample data to work with:

[code="sql"]
IF OBJECT_ID('tempdb..#RecursiveGroupFinding') IS NOT NULL
    DROP TABLE #RecursiveGroupFinding
IF OBJECT_ID('tempdb..#GroupList') IS NOT NULL
    DROP TABLE #GroupList

CREATE TABLE #RecursiveGroupFinding (ParentID INT NOT NULL, ChildID INT NOT NULL)
CREATE TABLE #GroupList (GroupID INT IDENTITY(1,1) NOT NULL, GroupName VARCHAR(30) NOT NULL)

CREATE CLUSTERED INDEX idx_c_RecursiveGroupFinding ON #RecursiveGroupFinding (ParentID, ChildID)
CREATE CLUSTERED INDEX idx_c_GroupList ON #GroupList (GroupID)

INSERT INTO #GroupList
VALUES ('Parent1'),('Parent2'),('Child1'),('Child2'),('Child3'),('SubChild1'),('Subchild2'),('Subchild3'),('Icing')

INSERT INTO #RecursiveGroupFinding
VALUES (1, 3), (1, 4), (2, 3), (2, 5), (3, 6), (3, 7), (4, 8), (6, 9), (7, 9)
[/code]

Simple enough, you get this as a result list:

[code="sql"]
SELECT g.GroupName AS Parent, g2.GroupName AS Child
FROM #RecursiveGroupFinding AS gf
    JOIN #GroupList AS g ON gf.ParentID = g.GroupID
    JOIN #GroupList AS g2 ON gf.ChildID = g2.GroupID
[/code]

[code="plain"]
Parent1   Child1
Parent1   Child2
Parent2   Child1
Parent2   Child3
Child1    SubChild1
Child1    Subchild2
Child2    Subchild3
SubChild1 Icing
Subchild2 Icing
[/code]

Now, the code I've got so far illustrates a few of the uglies I'm arguing with:

[code="sql"]
SELECT RootLevel.GroupName, Tier1.GroupName AS T1GroupName, Tier2.GroupName AS T2GroupName,
    Tier3.GroupName AS T3GroupName, Tier4.GroupName AS T4GroupName
FROM (SELECT g.GroupID, g.GroupName
      FROM #GroupList AS g
          LEFT JOIN #RecursiveGroupFinding AS gf ON g.GroupID = gf.ChildID
      WHERE gf.ChildID IS NULL
     ) AS RootLevel
OUTER APPLY (SELECT g.GroupID, g.GroupName
             FROM #GroupList AS g
                 JOIN #RecursiveGroupFinding AS gf ON g.GroupID = gf.ChildID
             WHERE gf.ParentID = RootLevel.GroupID
            ) AS Tier1
OUTER APPLY (SELECT g.GroupID, g.GroupName
             FROM #GroupList AS g
                 JOIN #RecursiveGroupFinding AS gf ON g.GroupID = gf.ChildID
             WHERE gf.ParentID = Tier1.GroupID
            ) AS Tier2
OUTER APPLY (SELECT g.GroupID, g.GroupName
             FROM #GroupList AS g
                 JOIN #RecursiveGroupFinding AS gf ON g.GroupID = gf.ChildID
             WHERE gf.ParentID = Tier2.GroupID
            ) AS Tier3
OUTER APPLY (SELECT g.GroupID, g.GroupName
             FROM #GroupList AS g
                 JOIN #RecursiveGroupFinding AS gf ON g.GroupID = gf.ChildID
             WHERE gf.ParentID = Tier3.GroupID
            ) AS Tier4
FOR XML AUTO
[/code]

There are rCTE methods out there for BoM I can use, but they 'stack' the results. Pivoting ends up looking like this. I'm not AGAINST a pivot per se, but the part I can't seem to kick is the results that look like this:

[code="other"] [/code]

Please note all the extraneous tiers because of non-existent data but columns needing to exist.

"Craig, get to the question!!" Errr, yeah, sorry, rambling a bit. I was hoping as I typed this out the answer would come to me, but it hasn't. Here's what I'd like to do:

1) Turn the above node list into Parent1Child1 (etc...). Note this is impossible with named columns in the select list.
2) Make this n-tier recursion.
3) Remove extraneous tier levels.
4) Avoid my last recourse... recursive cursors.

The only solution I can see to this is nesting cursors via proc executions and passing a VARCHAR(MAX) around to build out the XML exactly as I want it. I'm rather open to suggestions on avoiding that... Also, if my google-fu has just failed me utterly (Bill of Materials XML Node list being one of my search patterns) please point me in the right direction with a simple lmgtfy. I can find plenty of VB/C# code to get the result I want, but I can't find anything at the proc level, and I'd rather not have to dump this entire tableset through the pipes to a front end for them to bubblesort (or whatever) the records together to build the hierarchy... I'm also not [i]that[/i] good at it and I don't have an app coder to spare.

sysjobsteps.last_run_outcome = 0 or is it?

Posted: 12 Mar 2013 07:40 AM PDT

I'm trying to create some code to quickly run on each server to get a listing of failed jobs currently on that particular server. The code so far is as follows:

[code="sql"]
SELECT DISTINCT (sj.name), sjs.last_run_outcome, sjh.run_status
FROM msdb.dbo.sysjobs AS SJ
INNER JOIN msdb.dbo.sysjobhistory AS SJH
    ON SJ.job_id = SJH.job_id
INNER JOIN msdb.dbo.sysjobsteps AS SJS
    ON sj.job_id = SJS.job_id
WHERE SJS.last_run_outcome = 0
  AND SJH.run_status = 0
[/code]

The code runs, but in my case on my test server I get 3 results back showing 0's in the [b]last_run_outcome[/b] column and the [b]run_status[/b] column. The thing is... when I look at the Job Activity Monitor, it shows [b]Succeeded[/b] under the [b]Last Run Outcome[/b] column for those 3 particular jobs. Am I doing something wrong? Querying the wrong table?

Thanks!
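One thing worth checking: sysjobhistory holds one row per step per historical run, so joining it without restricting to the latest run reports any job that has ever failed. A sketch that looks only at each job's most recent overall outcome (the step_id = 0 rows are the job-level result):

[code="sql"]
-- run_status 0 = Failed; instance_id increases with each run, so the MAX
-- per job is the most recent overall outcome.
SELECT sj.name, sjh.run_status
FROM msdb.dbo.sysjobs AS sj
JOIN msdb.dbo.sysjobhistory AS sjh
    ON sjh.job_id = sj.job_id
WHERE sjh.step_id = 0
  AND sjh.run_status = 0
  AND sjh.instance_id = (SELECT MAX(h.instance_id)
                         FROM msdb.dbo.sysjobhistory AS h
                         WHERE h.job_id = sj.job_id AND h.step_id = 0);
[/code]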

Trigger with computed column criteria

Posted: 12 Mar 2013 05:53 AM PDT

Would an AFTER UPDATE trigger work if the WHERE clause criteria contain a computed column? The computed column is persisted.

Split a String

Posted: 12 Mar 2013 01:57 AM PDT

Hello Everyone

I am working on some old data that should not be stored this way, but it is. My data is two values with pipe separators:

[code="sql"]
10.0||14.5
or
2||34
or
7.1||19
or
4||11.7
[/code]

I need to query this column and get each value separately. I have no idea how to go about this. There are always double pipes in the middle. But as you can see, there may or may not be decimal values in each. I think that is what is throwing me off, when perhaps it really may not matter about the actual values.

I appreciate any and all assistance, suggestions and comments

Andrew SQLDBA
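Since the '||' separator is fixed text, CHARINDEX finds it regardless of whether the values contain decimals; a sketch:

[code="sql"]
DECLARE @v VARCHAR(50) = '10.0||14.5';
DECLARE @pos INT = CHARINDEX('||', @v);

SELECT LEFT(@v, @pos - 1)          AS Value1,   -- 10.0
       SUBSTRING(@v, @pos + 2, 50) AS Value2;   -- 14.5
[/code]

Applied to a column, the same LEFT/SUBSTRING expressions work in a SELECT list, with CHARINDEX('||', ColumnName) inline in place of @pos.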

[Articles] Resetting DMVs



Resetting DMVs

Posted: 12 Mar 2013 11:00 PM PDT

Steve Jones asks why so much data in SQL Server is cleared when we restart an instance.

[SQL Server 2008 issues] How to query SSAS dimension properties



How to query SSAS dimension properties

Posted: 13 Mar 2013 03:55 AM PDT

Hello all,

Short version: I need to determine how, from within T-SQL, I can query an SSAS cube to find the source column name (KeyColumn) of an attribute within a dimension. I.e., my attribute is called "Product Color"; however, in the database that comes from a table called "Products" with a column name of "Custom1". I basically need to find the database column names of all the attributes in a specified dimension.

Long version: Our application leverages a somewhat generic, yet configurable, cube per client. Each of our clients is configured by the amount and type of data they send us. So, from a database perspective, within our customer and product dimensions we allow our ETL team to leverage any one of multiple generic/custom columns. Because said generic columns aren't named for their purpose (again, because they're generic so we can have a model database for all clients), the columns are then renamed within the cube, and the cube then uses those names for the application.

We're working on some custom reporting that hits the database and not the cube (I know), and with the above said, I have no way of knowing what to index because any given client can be using X to Y of these custom fields. Outside of dumping a blanket index on all columns, the next best bet would be to dynamically create an index based on the columns in use for that cube. We've accomplished this within the application using SQL AMO; however, I have a need to do this from within T-SQL. Actually, I could use AMO from PowerShell, which is what I'm going to start looking into shortly, but that will take longer. I was curious if this is possible from within T-SQL; furthermore, any suggestions on this topic (outside of fixing our database/cube) are most welcome!

Thanks

Extended Events

Posted: 13 Mar 2013 03:03 AM PDT

I only recently discovered Extended Events. Why? I don't know! I jumped in and learned how to use it, and then I let the rest of our SQL team in on the discovery. One of the team members quickly stated that it's worthless and too difficult to use and read, and that about the only thing it might be good for is locating long running queries. I thought it was easy to create scripts using 2012, and also easy to use SSMS to read the xel in table format.

Am I just dealing with a case of Debbie Downer here, or am I just too excited with the shiny toy? Perfmon seems to be the tool of choice for Debbie. Anyone else think that this is a great tool? What types of things do you prefer to use it for?

Thanks!!

invalid character in input file

Posted: 12 Mar 2013 12:04 PM PDT

Hi all, I'm being told I have to scrub invalid data in an input file. I have a 30-character description field with extended ASCII poop causing a failure in my SSIS package when trying to store this in a char column.

Is there a simple way using SQL to replace ASCII values < 32 and > 127 with a space ' '?

thanks in advance
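A sketch of a simple character-by-character scrub; a tally-table approach would scale better over large volumes, but this shows the idea:

[code="sql"]
-- Blanks out characters outside the printable ASCII range
-- (codes below 32 or above 127), keeping the string length intact.
CREATE FUNCTION dbo.CleanAscii (@s VARCHAR(100))
RETURNS VARCHAR(100)
AS
BEGIN
    DECLARE @i INT = 1;
    WHILE @i <= LEN(@s)
    BEGIN
        IF ASCII(SUBSTRING(@s, @i, 1)) < 32 OR ASCII(SUBSTRING(@s, @i, 1)) > 127
            SET @s = STUFF(@s, @i, 1, ' ');
        SET @i = @i + 1;
    END;
    RETURN @s;
END;
[/code]

Usage would be along the lines of SELECT dbo.CleanAscii(DescriptionColumn) FROM StagingTable, or the same logic could live in an SSIS script component before the destination.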

Recommended index causing query to run twice as long / HASH vs NESTED LOOP

Posted: 12 Mar 2013 10:13 AM PDT

Hey all,

Been working on some performance tuning and running into a wall at this point. I've taken some initial passes at rewriting the query in question and have reduced run time, without touching indexing, by more than half (from 40 to 16 seconds). When viewing the missing index recommendations in all three spots (DTA, DMVs, execution plan), SQL is telling me that I have an opportunity for a better index for this query. After review of said recommendation, it's pretty straightforward that it's a better index than the existing one (or at least I thought): the existing index that is used is twice as large and the order isn't as ideal, whereas the new index is much more focused on the query at hand. So it seems like a no-brainer to go ahead and add this index, yeah? I've done so and have experimented a multitude of ways; however, now that the optimizer wants to use this new index, run time is averaging around 33 seconds!?

Looking at the execution plans, there is a distinct change... with the new index, it has added an additional Nested Loop to the mix, vs. in the old there were just the two Hash Match (Aggregate)'s. I can't say I'm overly educated on all the differences; I tried adding JOIN hints, but specifying each type only resulted in an even longer query time. At this stage, I'm simply trying to better understand why my plan looks worse with a better index and why the nested loop was introduced into the mix.

Thank you!

Execution plans (top is pre index, bottom is post index):
[URL=http://imageshack.us/photo/my-images/831/execplan.png/][IMG]http://img831.imageshack.us/img831/3883/execplan.png[/IMG][/URL]

Query:
[code="sql"]
SELECT YEAR(i.[InvoiceDate]) AS [Year]
    ,MONTH(i.[InvoiceDate]) AS [TimePeriod]
    ,SUM(i.[Amount] * ISNULL(cur.[Rate],1)) AS [Revenue]
FROM dbo.[ftInvoices] i
JOIN dbo.[DimCurrency] cur ON i.[CurrencyID] = cur.[CurrencyID]
WHERE i.[Amount] > 0
AND i.[Quantity] > 0
AND i.[Cost] > 0
AND i.[InvoiceDate] BETWEEN '01/01/0001' AND '12/31/9999'
GROUP BY YEAR(i.[InvoiceDate]), MONTH(i.[InvoiceDate])
[/code]

Original Index:
[code="sql"]
CREATE INDEX [ix_InvoiceDate_Quantity_Amount] ON [dbo].[ftInvoices]
(
    [InvoiceDate] ASC,
    [Quantity] ASC,
    [Amount] ASC
)
INCLUDE ( [ContractPricingFlag], [Cost], [CurrencyID], [CustID], [DimFourId], [DimOneId], [DimThreeId], [DimTwoId], [ExchangeRate], [Exclude], [ExtendedCost], [InvoiceNum], [LineNum], [ListPrice], [ProdID], [UOMID])
[/code]

New Index:
[code="sql"]
CREATE NONCLUSTERED INDEX [IX__ftInvoices__CurrencyID__InvoiceDate__Quantity__Amount__Cost]
ON [dbo].[ftInvoices] ([CurrencyID],[InvoiceDate],[Quantity],[Amount],[Cost])
[/code]

Incorrect syntax error when using group by

Posted: 12 Mar 2013 06:54 AM PDT

I'm trying to do a join and then a group by and order by, but I'm getting an "Incorrect syntax near 'ds'" error. ds is the alias for the dataset which is the union of the 2 inner queries. Any help would be greatly appreciated:

[code="sql"]
-- Add the parameters for the stored procedure here
Declare @FromDt as date = '01-01-2011',
    @ThruDt as date = '03-11-2013',
    @Region as varchar(50) = 'Claims',
    @Queue as varchar(50) = 'Catch-All';

Declare @RegionTbl as table(RegionName varchar(50));
Declare @QueueTbl as table(QueueName varchar(50));

Insert @RegionTbl select Value from hpexprpt.dbo.split(@Region,',');
Insert @QueueTbl select Value from hpexprpt.dbo.split(@Queue,',');

(
select users.last_name + ',' + users.first_name as [User ID]
    , workflow_regions.name as Region
    , queues.name as [Queue]
    , work_item_statuses.name as [Status]
    , case when convert(varchar,work_items.creation_date,110) < @FromDt then 'From Backlog' else 'Current' end as [Class]
    , wf_jobs.wf_job_id as [JobID]
    , work_items.elapsed_time
    , CONVERT(varchar, DATEADD(ms, work_items.elapsed_time * 1000, 0), 114) as [Total Time]
    , DATEADD(SECOND,(work_items.elapsed_time * -1),work_items.completion_date) as [Start_Date]
    , work_items.completion_date
from hpexpprod.dbo.work_items
    join hpexpprod.dbo.queues on queues.queue_uid = work_items.queue_uid
    join hpexpprod.dbo.workflow_regions on workflow_regions.workflow_region_uid = work_items.workflow_region_uid
    join hpexpprod.dbo.work_item_statuses on work_item_statuses.work_item_status_uid = work_items.work_item_status_uid
    join HPEXPPROD.dbo.wf_jobs on wf_jobs.wf_job_uid = work_items.wf_job_uid
    join hpexpprod.dbo.actors on actors.actor_uid = work_items.actor_uid
    left join HPEXPPROD.dbo.users on users.actor_uid = actors.actor_uid
where workflow_regions.name in (select * from @RegionTbl)
    and queues.name in (select * from @QueueTbl)
    and (
        cast(isnull(work_items.completion_date,'') as date) between @FromDt and @ThruDt
        or cast(work_items.last_updated_date as date) between @FromDt and @ThruDt
    )
union
SELECT users.last_name + ', ' + users.first_name as [User ID]
    ,workflow_regions.name as Region
    ,btr.name as [Queue]
    ,'Break' as [Status]
    ,case when convert(varchar, btt.creation_date,110) < @FromDt then 'From Backlog' else 'Current' end as [Class]
    ,'' as [JobId]
    ,btt.elapsed_time
    ,CONVERT(varchar, DATEADD(ms,btt.elapsed_time * 1000, 0), 114) as [Total Time]
    ,DATEADD(SECOND,(btt.elapsed_time * -1),btt.creation_date) as [Start_Date]
    ,btt.creation_date completion_date
FROM HPEXPPROD.dbo.break_time_tracking btt
    join HPEXPPROD.dbo.break_time_reasons btr on btr.break_time_reason_uid = btt.break_time_reason_uid
    join hpexpprod.dbo.workflow_regions on workflow_regions.workflow_region_uid = btt.workflow_region_uid
    join HPEXPPROD.dbo.actors on actors.actor_uid = btt.actor_uid
    left join HPEXPPROD.dbo.users on users.actor_uid = actors.actor_uid
where workflow_regions.name in (select * from @RegionTbl)
    and cast(isnull(btt.creation_date,'') as date) between @FromDt and @ThruDt
) ds
group by ds.[USER ID]
order by 9 asc
[/code]
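The syntax error comes from the parenthesized UNION never being introduced as a derived table: it needs an outer SELECT ... FROM ( ... ) AS ds. Separately, every non-aggregated column in the outer select list must appear in the GROUP BY (or be aggregated), so grouping by [User ID] alone won't compile once the other columns are selected. A minimal self-contained example of the required shape:

[code="sql"]
-- The UNION sits inside a derived table; the outer query does the
-- grouping and ordering against the ds alias.
SELECT ds.[User ID], SUM(ds.elapsed_time) AS TotalElapsed
FROM (
    SELECT 'Smith, Al' AS [User ID], 10 AS elapsed_time
    UNION
    SELECT 'Jones, Bo', 20
) AS ds
GROUP BY ds.[User ID]
ORDER BY TotalElapsed ASC;
[/code]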

Executionlog2 table

Posted: 07 Mar 2013 08:13 PM PST

How long do the ExecutionLog2 table and the catalog table in the ReportServer database hold their data? Do they store all historical data permanently, or does SQL Server remove the data from both tables after some interval?

Temp table join example

Posted: 13 Mar 2013 01:59 AM PDT

Hi,

I'm looking for a basic example of a temporary table joined to an actual table. I'm guessing it should be something like:

[code="sql"]
Select Model
FROM CARS INNER JOIN #tempTable on CARS.Model = #tempTable.Model
[/code]
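That guess is essentially right; it just needs the temp table to exist first. A complete runnable sketch, assuming CARS exists with a Model column as in the post:

[code="sql"]
-- Create and populate the temp table, then join it like any other table
CREATE TABLE #tempTable (Model VARCHAR(50));
INSERT INTO #tempTable (Model) VALUES ('Corolla'), ('Civic');

SELECT c.Model
FROM CARS AS c
INNER JOIN #tempTable AS t
    ON c.Model = t.Model;

DROP TABLE #tempTable;
[/code]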

Linked Server Replication

Posted: 12 Mar 2013 07:44 PM PDT

Good Morning

I have been struggling with data synchronization between SQL Server and MySQL for ages. I have tried every possible scenario and cannot get it to work; I added triggers to SQL Server, with errors. I am now at the point of syncing data through replication, but have picked up an issue there too. I created the linked server and it works 100% (Server Options set up as Subscriber), but when I set up a new subscription I cannot see it in the list of databases. What am I missing?

Thanks

Bulk Insert & Temporary Table

Posted: 13 Mar 2013 12:42 AM PDT

Hi,

I have the following situation. I have a list of items that I am excluding from a query. So far I have been using a WHERE statement with NOT LIKE after NOT LIKE. The list keeps growing, so I thought: why not just create a temporary table based on a text file that contains the list of items I want to exclude? This was easy. The problem is that this temporary table contains just one column and I have nothing to join it to.

So I am hoping someone has some advice on how to best handle a list of items you want to exclude, other than having numerous NOT LIKE statements.
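No join column is actually needed: NOT EXISTS can correlate on the pattern match itself. A sketch, assuming the one-column temp table (#Exclusions, column Pattern, both names hypothetical) holds LIKE patterns bulk-inserted from the text file:

[code="sql"]
-- Rows survive only if they match none of the exclusion patterns;
-- dbo.SourceTable and ItemName are placeholder names.
SELECT s.*
FROM dbo.SourceTable AS s
WHERE NOT EXISTS (SELECT 1
                  FROM #Exclusions AS e
                  WHERE s.ItemName LIKE e.Pattern);
[/code]

Adding a new exclusion then becomes a row in the file rather than another NOT LIKE in the query.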

Possible to recover / retrieve a server-side trace definition?

Posted: 13 Mar 2013 12:18 AM PDT

I'm trying to find what is being traced by a server-side trace on a couple of SQL servers I'm managing. The person who created these traces left before I started, and there are no clear notes about this. The trace logs to a file; I figured that out with this bit of T-SQL:

[code="sql"]
SELECT t.id
    ,CASE t.status WHEN 0 THEN 'Stopped' ELSE 'Running' END AS status
    ,t.path
    ,t.max_size
    ,t.stop_time
    ,t.max_files
    ,t.is_rollover
    ,t.is_shutdown
    ,t.is_default
    ,t.file_position
    ,t.start_time
    ,t.last_event_time
    ,t.event_count
FROM sys.traces AS t
WHERE t.is_rowset = 0 ;
[/code]

The trace in question is saving to files on disk, and doesn't clean up (or get cleaned up) after itself, so I have to manually delete files to keep the disk from filling up... I don't (yet) want to stop the trace until I know what it's doing...

Thanks,
Jason A.
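To recover what the trace captures (not just where it writes), sys.fn_trace_geteventinfo can be joined to the trace metadata views; a sketch, where the trace id 2 is a placeholder taken from the sys.traces output above:

[code="sql"]
-- Lists the event/column pairs the running trace is collecting
SELECT e.name AS EventName,
       c.name AS ColumnName
FROM sys.fn_trace_geteventinfo(2) AS ti
JOIN sys.trace_events  AS e ON e.trace_event_id  = ti.eventid
JOIN sys.trace_columns AS c ON c.trace_column_id = ti.columnid
ORDER BY e.name, c.name;
[/code]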

Access issue

Posted: 12 Mar 2013 09:24 PM PDT

I have got access to the Production domain, but I don't have access to the Development domain, meaning I cannot create an AD group or service account in the Dev environment. I tried installing SQL Server on Dev servers. The SQL service accounts were created by someone who has access to the Dev domain, but when I tried installing SQL Server it errored with an 'Access denied' error. Is this because I don't have access to the Dev domain?

Thanks in advance!

T-SQL which cause Table or Index Scan

Posted: 12 Mar 2013 11:00 PM PDT

Trying to find a comprehensive list of T-SQL constructs that will cause a table or index scan, such as (1) SELECT * FROM xxx, (2) WHERE xyz LIKE '%abc%', etc.
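For illustration, using the post's own placeholder names: a leading wildcard defeats an index seek on xyz, while a trailing-only wildcard can still seek.

[code="sql"]
-- Hypothetical table dbo.xxx with an index on xyz:
SELECT *   FROM dbo.xxx WHERE xyz LIKE '%abc%'; -- non-sargable: scan
SELECT xyz FROM dbo.xxx WHERE xyz LIKE 'abc%';  -- sargable: seek possible
[/code]

The same pattern applies to functions wrapped around indexed columns in the WHERE clause, which also prevent seeks.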

Slow query - optimisation help required!

Posted: 12 Mar 2013 08:48 PM PDT

I have the following query:

[code="sql"]
INSERT INTO dbo.Load9_PotentialOverlaps_
    (Master_Id, Master_GUID, Master_SubmissionID, Duplicate_Id, duplicate_GUID, Duplicate_SubmissionID, MatchKeyType)
SELECT a.id, a.GUID, a.SubmissionID, b.id, b.GUID, b.SubmissionID, 6
FROM dbo.All_keys_ AS a
INNER JOIN dbo.Load9_keys_ AS b on a.idxMatchKey1 = b.idxMatchKey1
WHERE NOT EXISTS (SELECT 1 from dbo.Load9_PotentialOverlaps_ as c
                  WHERE c.duplicate_ID IN (a.id, b.id))
OPTION (MAXDOP 2);
[/code]

I've attached the estimated execution plan if anyone wants to look at it. The main tables used by the above query are as below.

Table sizes:
[b]dbo.All_keys_ = 88 million rows
dbo.Load9_keys_ = 750 thousand rows[/b]

Using sp_whoisactive, I can see regular wait info of the type: [b](20ms)PAGEIOLATCH_SH:MyDatabase:1(*)[/b]

Wait stats from this morning (SQL Server restarted last night around 9PM):
[img]http://s15.postimage.org/xvj5k2zob/waitstats.jpg[/img]
[img]http://s15.postimage.org/q3xu7s657/All_Keys.jpg[/img]
[img]http://s24.postimage.org/4oe7dbdut/Load9_keys.jpg[/img]

Not able to modify existing maintenance plan?

Posted: 12 Mar 2013 10:57 PM PDT

Hi,

The T-log backup failed, and no errors were written to the error log or the application logs. When I try to edit the existing maintenance plan, the following error displays:

"Microsoft SQL Server Management Studio is unable to load this document. Error loading from xml. No further detailed error information can be specified for this problem because no Events Object was passed where detailed error information can be stored."

Error:
SQL Server Scheduled Job 'Tlog_Backup.Subplan_1' (0xDFA7450D1A79F54291B8D3F5AA540950) - Status: Failed - Invoked on: 2013-03-13 15:52:00 - Message: The job failed. The Job was invoked by Schedule 14 (tlog). The last step to run was step 1 (Subplan_1).

Please suggest how to resolve this issue.

thanks
ananda

Login history

Posted: 12 Mar 2013 10:11 PM PDT

Hi Friends,

Let us consider that I have a login ABC and I have given it db_datareader access for the XYZ database. But now that particular login has db_owner permission. Is there any way to find out when user ABC got the db_owner permission?

Thanks in advance

How SQL query result insert in to table?

Posted: 12 Mar 2013 08:46 PM PDT

Hi Team,

Daily I am taking the size of the databases using EXEC sp_databases and updating an Excel file manually:

[b]DATABASE_NAME | DATABASE_SIZE | REMARKS[/b]

Now I want to create a job to execute the above query and store the results in a table. The new table structure is:

[b]DATE_OF_EXECUTION | DATABASE_NAME | DATABASE_SIZE | REMARKS[/b]

Please help me with the query.
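One way to do it (a sketch; the table name and column types are assumptions) is INSERT ... EXEC, which captures the procedure's result set directly, with a default filling in the execution date; the statement can then be scheduled as an Agent job step:

[code="sql"]
CREATE TABLE dbo.DatabaseSizeHistory (
    DATE_OF_EXECUTION DATE         NOT NULL DEFAULT (CONVERT(DATE, GETDATE())),
    DATABASE_NAME     SYSNAME      NOT NULL,
    DATABASE_SIZE     INT          NULL,   -- size in KB, as sp_databases reports it
    REMARKS           VARCHAR(254) NULL
);

-- The column list matches sp_databases' three-column result set
INSERT INTO dbo.DatabaseSizeHistory (DATABASE_NAME, DATABASE_SIZE, REMARKS)
EXEC sp_databases;
[/code]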

SQL Server Templates

Posted: 12 Mar 2013 08:15 PM PDT

My own created SQL Server templates have suddenly disappeared from Template Explorer in SSMS? Anyone know how I can find them? I tried looking in the directory but they're not there?!

C:\Users\XXX\AppData\Roaming\Microsoft\Microsoft SQL Server\100\Tools\Shell\Templates

Cross Join Trick

Posted: 12 Mar 2013 07:26 PM PDT

[code="sql"]
Create table Amount (
    YoA int,
    Currency char(3),
    Amt decimal (9,2))

Create table Master (
    YoA int,
    Currency char(3))

Insert into Amount
select 2008, 'CAD', 3400 UNION
select 2008, 'USD', 400 UNION
select 2009, 'CAD', 560 UNION
select 2010, 'USD', 6750

insert into Master
select 2008, 'CAD' UNION
select 2008, 'USD' UNION
select 2009, 'CAD' UNION
select 2009, 'USD' UNION
select 2010, 'CAD' UNION
select 2010, 'USD'
[/code]

Required output:

[code="plain"]
2008, CAD, 3400
2008, USD, 400
2009, CAD, 560
2009, USD, 0.00
2010, CAD, 0.00
2010, USD, 6750
[/code]
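Since Master already contains every Year/Currency pair, a LEFT JOIN from Master to Amount with ISNULL produces the missing rows as 0.00; no cross join is actually needed for this particular data:

[code="sql"]
SELECT m.YoA, m.Currency, ISNULL(a.Amt, 0.00) AS Amt
FROM Master AS m
LEFT JOIN Amount AS a
    ON a.YoA = m.YoA AND a.Currency = m.Currency
ORDER BY m.YoA, m.Currency;
[/code]

A CROSS JOIN of distinct years and currencies would only be required if Master did not already enumerate the combinations.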

When delimiter is part of the field - How to handle in SSIS

Posted: 12 Mar 2013 05:43 PM PDT

I have a CSV file with the field delimiter [b]COMMA (,)[/b]. The package works without any issues for a few source files, but fails when a field value in the file contains a comma.

Note: I have no control over the source file. Is there a way to handle it in SSIS?

EXEC Master.dbo.xp_DirTree

Posted: 12 Mar 2013 05:41 PM PDT

hello all.

I use this command for my purpose:

[code="sql"]EXEC Master.dbo.xp_DirTree 'D:\Reports', 1, 1[/code]

Now I want to get only the *.rpx files that exist in my folder (Reports). How do I do this?
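xp_DirTree itself cannot filter, but its output can be captured with INSERT ... EXEC and filtered afterwards; a sketch:

[code="sql"]
-- With both optional parameters set to 1, xp_DirTree returns
-- subdirectory, depth, and a file flag.
CREATE TABLE #DirTree (
    subdirectory NVARCHAR(512),
    depth        INT,
    isfile       BIT
);

INSERT INTO #DirTree (subdirectory, depth, isfile)
EXEC master.dbo.xp_DirTree 'D:\Reports', 1, 1;

SELECT subdirectory AS FileName
FROM #DirTree
WHERE isfile = 1
  AND subdirectory LIKE '%.rpx';

DROP TABLE #DirTree;
[/code]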

retrive data from path on another computer

Posted: 12 Mar 2013 04:06 PM PDT

hello all.

I have some .rpx files on a drive on the server, and I want to select the .rpx file names into SQL Server. How do I retrieve these file names with T-SQL?

Attempt to fetch logical page (1:440) in database 2 failed. It belongs to allocation unit 422212869292032 not to 6269010747738816512.

Posted: 05 Mar 2013 04:23 PM PST

hi all,

One of my sps is giving the following error sometimes, but not every time:

[b]Attempt to fetch logical page (1:440) in database 2 failed. It belongs to allocation unit 422212869292032 not to 6269010747738816512.[/b]

Can anyone help on this?

copying table with filegroups and partitions in DW server

Posted: 12 Mar 2013 10:41 AM PDT

DB Size - 4TB
Table Size - 160GB
Partitioned - Yes
FGs - 159
Row count - 205,363,396
Indexes - 1 CI, 4 NCI

Partition code:

[code="sql"]
USE [xx]
GO
/****** Object: PartitionScheme [PS_PRTN_ID_159] Script Date: 3/12/2013 6:36:32 PM ******/
CREATE PARTITION SCHEME [PS_PRTN_ID_159] AS PARTITION [PFN_PRTN_ID_159] TO (
[FG_PRTN_159_P1], [FG_PRTN_159_P2], [FG_PRTN_159_P3], [FG_PRTN_159_P4], [FG_PRTN_159_P5], [FG_PRTN_159_P6], [FG_PRTN_159_P7], [FG_PRTN_159_P8], [FG_PRTN_159_P9], [FG_PRTN_159_P10],
[FG_PRTN_159_P11], [FG_PRTN_159_P12], [FG_PRTN_159_P13], [FG_PRTN_159_P14], [FG_PRTN_159_P15], [FG_PRTN_159_P16], [FG_PRTN_159_P17], [FG_PRTN_159_P18], [FG_PRTN_159_P19], [FG_PRTN_159_P20],
[FG_PRTN_159_P21], [FG_PRTN_159_P22], [FG_PRTN_159_P23], [FG_PRTN_159_P24], [FG_PRTN_159_P25], [FG_PRTN_159_P26], [FG_PRTN_159_P27], [FG_PRTN_159_P28], [FG_PRTN_159_P29], [FG_PRTN_159_P30],
[FG_PRTN_159_P31], [FG_PRTN_159_P32], [FG_PRTN_159_P33], [FG_PRTN_159_P34], [FG_PRTN_159_P35], [FG_PRTN_159_P36], [FG_PRTN_159_P37], [FG_PRTN_159_P38], [FG_PRTN_159_P39], [FG_PRTN_159_P40],
[FG_PRTN_159_P41], [FG_PRTN_159_P42], [FG_PRTN_159_P43], [FG_PRTN_159_P44], [FG_PRTN_159_P45], [FG_PRTN_159_P46], [FG_PRTN_159_P47], [FG_PRTN_159_P48], [FG_PRTN_159_P49], [FG_PRTN_159_P50],
[FG_PRTN_159_P51], [FG_PRTN_159_P52], [FG_PRTN_159_P53], [FG_PRTN_159_P54], [FG_PRTN_159_P55], [FG_PRTN_159_P56], [FG_PRTN_159_P57], [FG_PRTN_159_P58], [FG_PRTN_159_P59], [FG_PRTN_159_P60],
[FG_PRTN_159_P61], [FG_PRTN_159_P62], [FG_PRTN_159_P63], [FG_PRTN_159_P64], [FG_PRTN_159_P65], [FG_PRTN_159_P66], [FG_PRTN_159_P67], [FG_PRTN_159_P68], [FG_PRTN_159_P69], [FG_PRTN_159_P70],
[FG_PRTN_159_P71], [FG_PRTN_159_P72], [FG_PRTN_159_P73], [FG_PRTN_159_P74], [FG_PRTN_159_P75], [FG_PRTN_159_P76], [FG_PRTN_159_P77], [FG_PRTN_159_P78], [FG_PRTN_159_P79], [FG_PRTN_159_P80],
[FG_PRTN_159_P81], [FG_PRTN_159_P82], [FG_PRTN_159_P83], [FG_PRTN_159_P84], [FG_PRTN_159_P85], [FG_PRTN_159_P86], [FG_PRTN_159_P87], [FG_PRTN_159_P88], [FG_PRTN_159_P89], [FG_PRTN_159_P90],
[FG_PRTN_159_P91], [FG_PRTN_159_P92], [FG_PRTN_159_P93], [FG_PRTN_159_P94], [FG_PRTN_159_P95], [FG_PRTN_159_P96], [FG_PRTN_159_P97], [FG_PRTN_159_P98], [FG_PRTN_159_P99], [FG_PRTN_159_P100],
[FG_PRTN_159_P101], [FG_PRTN_159_P102], [FG_PRTN_159_P103], [FG_PRTN_159_P104], [FG_PRTN_159_P105], [FG_PRTN_159_P106], [FG_PRTN_159_P107], [FG_PRTN_159_P108], [FG_PRTN_159_P109], [FG_PRTN_159_P110],
[FG_PRTN_159_P111], [FG_PRTN_159_P112], [FG_PRTN_159_P113], [FG_PRTN_159_P114], [FG_PRTN_159_P115], [FG_PRTN_159_P116], [FG_PRTN_159_P117], [FG_PRTN_159_P118], [FG_PRTN_159_P119], [FG_PRTN_159_P120],
[FG_PRTN_159_P121], [FG_PRTN_159_P122], [FG_PRTN_159_P123], [FG_PRTN_159_P124], [FG_PRTN_159_P125], [FG_PRTN_159_P126], [FG_PRTN_159_P127], [FG_PRTN_159_P128], [FG_PRTN_159_P129], [FG_PRTN_159_P130],
[FG_PRTN_159_P131], [FG_PRTN_159_P132], [FG_PRTN_159_P133], [FG_PRTN_159_P134], [FG_PRTN_159_P135], [FG_PRTN_159_P136], [FG_PRTN_159_P137], [FG_PRTN_159_P138], [FG_PRTN_159_P139], [FG_PRTN_159_P140],
[FG_PRTN_159_P141], [FG_PRTN_159_P142], [FG_PRTN_159_P143], [FG_PRTN_159_P144], [FG_PRTN_159_P145], [FG_PRTN_159_P146], [FG_PRTN_159_P147], [FG_PRTN_159_P148], [FG_PRTN_159_P149], [FG_PRTN_159_P150],
[FG_PRTN_159_P151], [FG_PRTN_159_P152], [FG_PRTN_159_P153], [FG_PRTN_159_P154], [FG_PRTN_159_P155], [FG_PRTN_159_P156], [FG_PRTN_159_P157], [FG_PRTN_159_P158], [FG_PRTN_159_P159])
GO
[/code]

This is a DW server and we are planning to add additional columns; we would like to test this in the test environment. The question is: what will be the best way to emulate production, i.e. have the partitions and FGs and populate the table? Copying the table structure and moving the data will be simple, but I am trying to find a less time-consuming way to move the partitions (if possible) and FGs (if possible).

Any input will be greatly appreciated

Read & Write Permissions on SQL Server Agent Service startup account

Posted: 18 Nov 2012 10:17 PM PST

How do I add read and write permissions on the Temp directory for the SQL Server Agent Service startup account?

Many Thanks
Stuart.

SQL AGENT IS NOT ABLE TO START....

Posted: 12 Mar 2013 03:31 AM PDT

Hi,

Can anyone help me? SQL Agent is not able to start. I am getting the error "The process terminated unexpectedly [0x8007042b]" on my development server. I installed one more instance, and the same issue occurs for that one as well. I am not able to change the SQL Agent service user in Configuration Manager.

query performance

Posted: 04 Mar 2013 08:53 AM PST

I have a table with 6 million rows. The table has around 200 columns. The data can be sliced and diced by the user in a number of ways. Are there any tips for improving query performance other than adding indexes? Currently some of the queries take around 40 seconds.

thanks
KK

SQL concatenate

Posted: 12 Mar 2013 05:45 AM PDT

I have a SQL question... Picture 3 tables in this relationship:

Table A - Column1 (PK), Column2 (Identity)
Table B - Column1 (PK), Column2 (Identity); sample data for the other columns is a_id, c_id
Table C - Column1 (PK), Column2 (Identity); sample data for the other column is integer information

A row of A can have 1 to many rows of B. Each row of B must refer to one row of C.

What I want to do is set up this result. Assume that A.identity = 1, B.c_id refers to C.identity = 1, and C.integer_information = 100. Assume that another row has A.identity = 2 and there are three rows of B referring to C.integer_information of 100, 200 and 300. I need this result:

[code="plain"]
a.Identity   Result of c.integer_information
1            100
2            100,200,300
[/code]

I have this much SQL:

[code="sql"]
Select A.identity
From A
Inner join B on B.a_id = a.identity
Inner join C on c.identity = b.c_id
[/code]

How do I concatenate the values above with no comma if there is 1 value, and commas between multiple values?
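A sketch of the common FOR XML PATH approach, using the post's table and column names ([identity], a_id, c_id, integer_information) as given; STUFF strips the leading comma, so a single value naturally comes out with no comma at all:

[code="sql"]
SELECT a.[identity],
       STUFF((SELECT ',' + CAST(c.integer_information AS VARCHAR(20))
              FROM B AS b
              JOIN C AS c ON c.[identity] = b.c_id
              WHERE b.a_id = a.[identity]
              FOR XML PATH('')), 1, 1, '') AS Result
FROM A AS a;
[/code]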

data staging (reading BAK files without restoring)

Posted: 12 Mar 2013 06:35 AM PDT

Hello,

Has anyone heard of or used a third-party tool to import data to a staging area from a BAK file without conducting a full DB restore? Red Gate had SQL Virtual Restore, which has been retired. Is there another way to get a table subset off a FULL backup without redesigning the source DB filegroups? I am an ETL staging developer with no control over the source systems and want to stage only a small set of my BAK file without a full restore.

Ideas?
John

Track SQL severity levels in SCOM?

Posted: 12 Mar 2013 05:50 AM PDT

Trying to set up alerts and/or email notifications in SCOM 2007 that are triggered by SQL Server (2005 & 08) severity levels 19-25 and errors 823-825. Several DB servers at my organization cannot send dbmail due to compliance issues, so I need another way to send alerts. Does anyone know if this can be accomplished in SCOM 2007? I can only find SCOM severity levels 0, 1, and 2 in FMSQLOPERATIONS.alert.vAlert. Maybe I can log these or pull these from the SQL boxes' Windows app logs using SCOM. Anybody have any experience with this?

SQL Server error - Cannot Open User Default Database error 4064

Posted: 12 Mar 2013 01:36 AM PDT

Any idea on this? The user is part of Active Directory and uses Windows authentication, and is still getting this error... any thoughts? master is the default DB.

From the SQL error log: sql error 18456, severity 14, state 16

thanks

sp_updatestats and the default sampling rate

Posted: 04 Mar 2013 08:51 AM PST

Using SQL Server 2008R2.

As my production database is approaching 500GB and can potentially grow up to 1TB in 2 years, the issue of sp_updatestats (run after the re-indexing part of maintenance) using the default sampling rate (which can potentially skew performance) bothers me. There was a discussion earlier at this link: http://www.sqlservercentral.com/Forums/Topic1310877-146-2.aspx, but I am still confused about using sp_updatestats.

Currently I am performing the following steps during weekend maintenance:

1. ALTER DATABASE MyDB SET RECOVERY Simple, AUTO_CREATE_STATISTICS OFF, AUTO_UPDATE_STATISTICS OFF

2. My index maintenance routine, based on the following criteria: currently I reindex clustered and non-clustered indexes when avg_page_space_used_in_percent < 75 and avg_fragmentation_in_percent > 10 and page_count > 500. Of those selected, if fragmentation <= 30, I reorganize those indexes; if fragmentation > 30, I rebuild them. So at the end of the reindex maintenance I have non-clustered and clustered indexes either rebuilt or reorganized.

3. Currently I am running update statistics on the whole database after the previous reindex step: [b]sp_updatestats[/b]. Since sp_updatestats updates statistics by using the default sampling rate, it can possibly deteriorate all my indexes after running the reindex routine.

[b]3A.[/b] So I was thinking about "... update statistics for all tables which are defragged and don't run update statistics for all tables which are REBUILD" (http://sqlserverpedia.com/wiki/Updating_Statistics). Here is my logic for performing routine [b]3A[/b]: select the indexes which were REORGANIZEd (no statistics update) during reindex maintenance, along with other indexes where statistics were not updated for, say, the last few weeks, and run the following against the indexes selected above:

UPDATE STATISTICS Table_Name (IndexName) WITH FULLSCAN, NORECOMPUTE

By running the above I will be able to update statistics without running sp_updatestats.

4. ALTER DATABASE MyDB SET RECOVERY Full, AUTO_CREATE_STATISTICS ON, AUTO_UPDATE_STATISTICS ON

Please let me know if you have any comments, suggestions, or recommendations on [b]step 3A[/b]. There was a suggestion earlier to run:

USE dbname;
EXEC sys.sp_MSforeachtable @command1 = N'UPDATE STATISTICS ? WITH FULLSCAN, COLUMNS;';

But unfortunately it takes way too long on my production database, given the time frame I have for the weekend maintenance.

Tuesday, March 12, 2013

[MS SQL Server] SSIS Job fails using Domain Account



SSIS Job fails using Domain Account

Posted: 12 Mar 2013 09:51 AM PDT

I get this error when I attempt to run the package using the domain account, SQLServerAgent:

[quote]Message
Executed as user: MyDomain\SQLServerAgent. Microsoft (R) SQL Server Execute Package Utility Version 10.50.2500.0 for 64-bit Copyright (C) Microsoft Corporation 2010. All rights reserved. Started: 01:29:42 PM Could not load package "\Dev\AS400\CYP_Staging_Load_Truncate_Nightly" because of error 0xC0014062. Description: The LoadFromSQLServer method has encountered OLE DB error code 0x80040E4D (Login failed for user 'MyDomain\SQLServerAgent'.). The SQL statement that was issued has failed. Source: Started: 01:29:42 PM Finished: 01:29:42 PM Elapsed: 0.046 seconds. The package could not be loaded. The step failed.[/quote]

The package is stored in MSDB. I added the account to sysadmin and made it a local admin. It works with my login. It is totally weird. I granted permissions on the Temp folder to the SQLServerAgent account, but no dice. Any ideas?

Thanks.

Monitoring error

Posted: 11 Mar 2013 07:47 AM PDT

I'm using the Red Gate Monitor tool to help me manage our SQL Server. I see these two messages very frequently: "Monitoring error (SQL Server data collection)" and "Monitoring error (host machine data collection)". I know there could be almost an infinite number of reasons for these errors, but I was wondering how best to find out why I'm getting notified about them. I looked at our existing backup jobs and scheduled jobs, as well as the logs, to see if I could find a reason why this is happening. Any other ideas, or a direction I should go to get more information on this, would be very appreciated.

Find SPID of Linked Server Call

Posted: 12 Mar 2013 07:14 AM PDT

Hi SSC,

I have a stored procedure on server A that makes a linked server call to server B... on server A I see the wait type is OLEDB, so I want to look at the SPID on server B... is there a way to identify the remote SPID other than connecting to the server, doing an sp_who2, looking for the client that made the connection, and then the username of the linked server?

Thanks!

Reporting Services Logs not being deleted

Posted: 08 Feb 2011 06:16 AM PST

We have a server running SQL 2008 SP1, Standard. We recently discovered that SSRS isn't cleaning up its log files. Checking in the ReportingServicesService.exe.config file and the service configuration confirms the service is configured to the default of 14 days, and the log file locations are also default. Does anyone have an idea why SSRS would keep over a year's worth of log files, or what could stop the cleanup? We aren't seeing anything fail in the environment, but the drive fills up.

Cheers
Leo

Big table

Posted: 11 Mar 2013 07:59 AM PDT

We have a lookup table, GeographicLookup, with a primary key called gisGeographicLookup. It now has 2 million records, and each year we add about 550,000 rows, ranging over years 2010-2014. Could this many records reduce performance or the speed of lookups? If we only use the most recent couple of years' data, should we archive the other records into another table? Any other recommendations? We are using SQL 2008 Standard edition.

DR/HA proposal

Posted: 11 Mar 2013 10:59 AM PDT

I need to write a DR/HA proposal. Any suggestions?

SSMS showing local DBs not remote DBs when editing remote maint package

Posted: 12 Mar 2013 12:11 AM PDT

Hi All

When I open an Integrity Check task from a server (SQL 2008 R2) with SSMS (SQL 2012) on my PC, the task shows local DBs in the database drop-down. Any ideas? :w00t:

Dedicated and shared connections in SQL

Posted: 12 Mar 2013 12:09 AM PDT

Is there a concept of a dedicated connection and a shared connection in SQL Server, similar to Oracle? If yes, how does it have to be configured? Can anybody explain this?

Thanks in advance

Replication from SQL 2008R2 to SQL 2012

Posted: 11 Mar 2013 05:41 PM PDT

Gurus, I have a weird replication issue. Can't figure out what the issue is here. Please help me with ideas/known bugs (if any) :w00t:

Publisher: SRVA - SQL 2008R2
Distributor: SRVB - SQL 2008R2
Subscribers: SRV1 - SQL 2008R2, SRV2 - SQL 2008R2, SRV3 - SQL 2008R2, SRV4 - SQL 2012
Number of publications: 3

Originally, replication was set up from backup. Later on, we had to add a couple of articles to Publication-3. We followed an article which said to switch "immediate_sync" and "allow_anonymous" to false and re-snap. This way, we take a snapshot of only the two articles we have to add.

Now, the issue: we had to add two more articles today, and when we followed the same procedure, the snapshot agent created drop and re-create scripts for the two articles; however, the CREATE TABLE scripts were not applied to SRV4. All the procs for repl insert, update, and delete were created on that instance; just the tables and their PKs weren't created. I fixed the issue by manually creating the table and PK, stopped and re-started the Distribution Agent, and everything started to work fine. :pinch:

I checked the permissions for the snapshot and distribution agents; everything is normal with regards to permissions. Can't find a root cause of the issue. Please help!! :crying:

Thanks
Jagan K

[SQL Server] Part Table or Procedure not updating




Part Table or Procedure not updating

Posted: 12 Mar 2013 10:09 AM PDT

Hi all,

I'm fairly new at all this but I know a little. I've recently taken over our database from the guy who designed it, and I've been tasked with adding a part name to our database. It is aspx format and I've managed to add the line item to the aspx file:

Wipers

I've also added Wipers to the table in SQL. That part seems OK. Now when I open a ticket for a repair, the part name Wipers is not shown against the part name field; the value 24 is shown instead, and I cannot work out why it is not picking up the value I added (Wipers). The code in the form is:

Part:

I'm not sure if it is hardcoded or a stored procedure, and I'm currently stuck with no hair left! I hope this makes sense. I'm sure it is something really simple, but I'm tearing my hair out here :O

Thanks

supplying a schema in queries, performance?

Posted: 12 Mar 2013 12:51 AM PDT

Hi

A random question: if you write queries and put the schema before objects, will it affect performance in any way, and if so, what type of metrics are we looking at? Even if they are tiny :-D

Regards
Gordon Beeming

I am looking to gain work experience here in London UK - Please HELP!!!

Posted: 11 Mar 2013 10:40 PM PDT

I have been a certified SQL Server DBA in both 2005 and 2008 since 2010, but have had no joy successfully getting a job as a SQL Server DBA, even at junior level. I am looking for such an opportunity. I am an analytical person who is adaptable to change and can hit the ground running. I come from a testing background, but have since decided to change my career. Can anyone please help me? :-) :-) :-)

SQL Cached Plans - MPA

Posted: 11 Mar 2013 06:09 PM PDT

Hi All

I have a question regarding SQL cached plans and the multi-page allocator. Based on what I understand, any request of more than 8KB of memory will go through the multi-page allocator. I have run the selects below and the results are a bit confusing:

[code="sql"]
SELECT COUNT (*)
FROM sys.dm_exec_cached_plans AS cp
where cp.size_in_bytes/1024 > 8

SELECT COUNT (*)
FROM sys.dm_exec_cached_plans AS cp
where cp.size_in_bytes/1024 <= 8
[/code]

For the first select, I have 86 plans; for the second select, I have 8 plans. Does this mean that 86 of my cached plans have been allocated through the multi-page allocator?

Thanks

How to find database,object,schema level permissions on a server

Posted: 11 Mar 2013 11:47 PM PDT

Hi... can anyone provide a solution please? It's very urgent. I need a T-SQL script for SQL Server 2005.

Requirements: I have 2 user accounts, and for both accounts I need to find all the permissions. Like...

• The 2 users are mapped to which databases on the server?
• What are the database names?
• What kind of permissions do they have on the mapped databases?
• For each database I need to find all the permissions: object level, schema level... etc.?

Thanks
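A sketch of a per-database query (the two user names are placeholders); it could be run in each mapped database, e.g. via sp_MSforeachdb, to list the explicit object-, schema-, and database-level grants:

[code="sql"]
-- Explicit permissions for the two accounts in the current database
SELECT DB_NAME()                         AS DatabaseName,
       dp.name                           AS UserName,
       perm.class_desc,                            -- DATABASE / SCHEMA / OBJECT_OR_COLUMN
       perm.permission_name,
       perm.state_desc,                            -- GRANT / DENY
       OBJECT_SCHEMA_NAME(perm.major_id) AS SchemaName,
       OBJECT_NAME(perm.major_id)        AS ObjectName
FROM sys.database_permissions AS perm
JOIN sys.database_principals AS dp
    ON dp.principal_id = perm.grantee_principal_id
WHERE dp.name IN (N'User1', N'User2');   -- the two accounts
[/code]

Role memberships (db_datareader, db_owner, and so on) are separate from explicit grants and would come from sys.database_role_members joined back to sys.database_principals.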
