Friday, May 17, 2013

[SQL 2012] How do you force the maxdop in the App plan to match the SSMS plan?



How do you force the maxdop in the App plan to match the SSMS plan?

Posted: 17 May 2013 12:31 AM PDT

From the SSMS plan:

[code="xml"]
<StmtSimple StatementSubTreeCost="329.661" StatementEstRows="9" StatementOptmLevel="FULL" QueryHash="0x8D96B6E9E59734B3" QueryPlanHash="0xB33E59170E9A855A">
  <QueryPlan DegreeOfParallelism="8" MemoryGrant="3263496" CachedPlanSize="280" CompileTime="160" CompileCPU="160" CompileMemory="4248">
[/code]

From the App plan:

[code="xml"]
<StmtSimple StatementSubTreeCost="6.74389" StatementEstRows="1" StatementOptmLevel="FULL" QueryHash="0x8D96B6E9E59734B3" QueryPlanHash="0x44E50B9406BB5E36" StatementOptmEarlyAbortReason="TimeOut">
  <QueryPlan DegreeOfParallelism="1" MemoryGrant="5120" CachedPlanSize="240" CompileTime="100" CompileCPU="100" CompileMemory="4048">
[/code]

Thanks
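If you cannot make the application compile the same parallel plan (note the app plan shows StatementOptmEarlyAbortReason="TimeOut"), one common approach is to attach a MAXDOP hint to the statement, either directly or via a plan guide when the application's SQL cannot be modified. A rough sketch only; the guide name is made up and @stmt must match the application's statement text exactly (your real query goes where the placeholder SELECT is):

[code="sql"]
-- Option 1: hint the query directly, if you can change it
-- SELECT ... FROM dbo.YourTable ... OPTION (MAXDOP 8);

-- Option 2: a plan guide for the application's exact statement text
EXEC sp_create_plan_guide
    @name = N'ForceParallelGuide',                -- placeholder name
    @stmt = N'SELECT ... FROM dbo.YourTable ...', -- must match the app statement exactly
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = NULL,
    @hints = N'OPTION (MAXDOP 8)';
[/code]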

SSRS 2012

Posted: 17 May 2013 01:07 AM PDT

Hello. I am currently referencing ReportExecution2005 in my web service application. This web service is used to print reports automatically. Currently, we have this application running on SQL Server 2005, and we are testing it with SQL Server 2012. I have found two issues, but have only solved one of them. The font in SSRS 2012 is much bigger than on 2005; I had to specify DPI 96 in the Device Info to print the report. The second issue occurs when the report prints: it only prints one page instead of all the pages. Any thoughts?
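For comparison, a DeviceInfo fragment for the IMAGE renderer that sets the DPI (as the poster describes) and the page range might look roughly like this; per the documented Image device information settings, StartPage = 0 is supposed to render all pages, so a StartPage/EndPage of 1 (or rendering a single page per Render call without looping) is one possible cause of the one-page symptom:

[code="xml"]
<DeviceInfo>
  <OutputFormat>EMF</OutputFormat>
  <DpiX>96</DpiX>
  <DpiY>96</DpiY>
  <StartPage>0</StartPage> <!-- 0 = render all pages -->
</DeviceInfo>
[/code]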

Updating Covering Indexes

Posted: 16 May 2013 08:12 PM PDT

Hi,

During some performance tuning and optimization I performed at a client, I discovered what appears to be strange behavior and I just wanted to ask the community for any comments. The behavior is related to the IOs generated when updating a table with covering indexes.

As a test, I set up the following scenario. Create a simple table with three columns:
1. A primary key (clustered).
2. A column to index on.
3. A column to include.

I create three of these tables:
1. A table with only a clustered IX on column 1.
2. A table with a clustered IX on column 1 and a non-clustered IX on column 2.
3. A table with a clustered IX on column 1 and a non-clustered IX on column 2 including column 3.

Finally, I run a simple update statement while outputting STATISTICS IO. The update statement updates column 3.

[u][b]I am expecting to see the following:[/b][/u]

[u]Update on table 1[/u]
A clustered index scan followed by a clustered index update.

[u]Update on table 2[/u]
A non-clustered index scan followed by a clustered index update. There are fewer pages in the non-clustered index and the update should not affect the non-clustered index in any way. I am expecting to see fewer reads for this operation.

[u]Update on table 3[/u]
A non-clustered index scan followed by a clustered index update and a non-clustered index update. There are fewer pages in the non-clustered index and the update will also affect the non-clustered index. I am expecting to see somewhere in the region of double the reads for this operation.

[u][b]I actually saw the following:[/b][/u]

[u]Update on table 1[/u]
A clustered index scan - 22 pages
Scan count 1, logical reads 24
elapsed time = 29 ms.

[u]Update on table 2[/u]
A non-clustered index scan - 18 pages
Scan count 1, logical reads 10341
elapsed time = 117 ms.
Unexpectedly, logical reads have increased massively.

[u]Update on table 3[/u]
A non-clustered index scan - 20 pages
Scan count 1, logical reads 30343
elapsed time = 5201 ms.
Now logical reads have increased three times over the second update.

Here is my test code:

[code="sql"]
-- Make sure tables don't exist
IF EXISTS (SELECT * FROM sys.tables WHERE name = 'ONLY_PK')
BEGIN
    DROP TABLE ONLY_PK;
END;
IF EXISTS (SELECT * FROM sys.tables WHERE name = 'NO_INCLUDE')
BEGIN
    DROP TABLE NO_INCLUDE;
END;
IF EXISTS (SELECT * FROM sys.tables WHERE name = 'ONE_INCLUDE')
BEGIN
    DROP TABLE ONE_INCLUDE;
END;

-- Create test tables
-- One table to contain only a PK
CREATE TABLE ONLY_PK (PK_COL INT NOT NULL PRIMARY KEY, IX_COL INT NOT NULL, INC_COL INT NOT NULL);
GO

-- One table to contain a PK and an NC index with no included column
CREATE TABLE NO_INCLUDE (PK_COL INT NOT NULL PRIMARY KEY, IX_COL INT NOT NULL, INC_COL INT NOT NULL);
GO
CREATE NONCLUSTERED INDEX IX_NO_INCLUDE ON NO_INCLUDE (IX_COL);
GO

-- One table to contain a PK and an NC index containing an included column
CREATE TABLE ONE_INCLUDE (PK_COL INT NOT NULL PRIMARY KEY, IX_COL INT NOT NULL, INC_COL INT NOT NULL);
GO
CREATE NONCLUSTERED INDEX IX_ONE_INCLUDE ON ONE_INCLUDE (IX_COL) INCLUDE (INC_COL);
GO

-- Load dummy data into our three test tables
WITH CTE AS
(
    SELECT TOP 5000 ROW_NUMBER() OVER (ORDER BY a.object_id) AS RowNo
    FROM sys.all_columns a
    CROSS JOIN sys.all_columns b
)
INSERT INTO ONLY_PK
SELECT RowNo, RowNo, RowNo
FROM CTE;

INSERT INTO NO_INCLUDE
SELECT * FROM ONLY_PK;

INSERT INTO ONE_INCLUDE
SELECT * FROM ONLY_PK;
GO

-- Have a look at the index page count
SELECT * FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('ONLY_PK'), NULL, NULL, 'DETAILED');     -- 22 pages for clustered index
SELECT * FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('NO_INCLUDE'), NULL, NULL, 'DETAILED');  -- 18 pages for nonclustered index
SELECT * FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('ONE_INCLUDE'), NULL, NULL, 'DETAILED'); -- 20 pages for nonclustered index

-- Switch on stats for the update test
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
SET STATISTICS XML ON;

UPDATE ONLY_PK
SET INC_COL = 0;
-- Clustered index scan - 22 pages
-- Scan count 1, logical reads 24
-- elapsed time = 29 ms.

UPDATE NO_INCLUDE
SET INC_COL = 0;
-- Non clustered index scan - 18 pages
-- Scan count 1, logical reads 10341
-- elapsed time = 117 ms.

UPDATE ONE_INCLUDE
SET INC_COL = 0;
-- Non clustered index scan - 20 pages
-- Scan count 1, logical reads 30343
-- elapsed time = 5201 ms.

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

-- This code will reset the tables.
UPDATE ONLY_PK
SET INC_COL = PK_COL;

UPDATE NO_INCLUDE
SET INC_COL = PK_COL;

UPDATE ONE_INCLUDE
SET INC_COL = PK_COL;
[/code]
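One diagnostic worth running against the test tables above is to force the updates to drive off the clustered index instead of the non-clustered one, then compare STATISTICS IO with the original runs. If the extra logical reads come from the non-clustered scan feeding per-row clustered-index lookups, forcing a clustered scan should bring the reads back in line. A sketch (INDEX(1) is the clustered index; this is a diagnostic, not a recommended fix):

[code="sql"]
-- Diagnostic only: force the scan side of the update onto the clustered index
SET STATISTICS IO ON;

UPDATE NO_INCLUDE WITH (INDEX(1))
SET INC_COL = 0;

UPDATE ONE_INCLUDE WITH (INDEX(1))
SET INC_COL = 0;

SET STATISTICS IO OFF;
[/code]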

Linked Server connecting to mirrored database

Posted: 16 May 2013 09:27 PM PDT

I have a cluster with 2 database instances. There is a linked server set up from one instance to the other. The database that the linked server connects to is mirrored to another server at a remote site. The instance with the linked server does not know anything about the remote server or mirrored database.

The link is working fine, but the remote/mirror instance is constantly reporting that the linked server is attempting to connect to it and failing with a password mismatch. I know the failed logins are from the linked server because the IP address is that of the node hosting it and the user is an account specifically set up for the link.

The only reason I can come up with is that some failover code in SQL Server is going wrong: working out that the database is mirrored, routing the linked server to the mirror database, failing, then correctly routing it to the primary database. As far as I know the link always works, but I worry that it may not perform optimally when connections are going astray like this.

Note: I am using clustering and mirroring but not the new "AlwaysOn" functionality.

Anyone have any ideas what is going on here?

P.S. I also mirror my ReportServer database, and the log shows that Reporting Services is repeatedly attempting to connect to the mirror. Again, everything is working fine, but the errors are worrying.
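One thing to inspect is how the linked server's data source is defined. When a client connects to a mirrored database, the partner name is cached and can be retried on reconnect, so making the failover partner explicit in the provider string at least makes the routing deliberate rather than discovered. A sketch only; MIRRORLINK, the server names, and MyDb are placeholders for your own names:

[code="sql"]
-- Recreate the linked server with an explicit failover partner in the
-- provider string, so the Native Client handles mirror redirection itself.
EXEC master.dbo.sp_addlinkedserver
    @server     = N'MIRRORLINK',
    @srvproduct = N'',
    @provider   = N'SQLNCLI',
    @provstr    = N'Server=PrincipalServer;FailoverPartner=MirrorServer',
    @catalog    = N'MyDb';
[/code]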

SQL 2012 MDS

Posted: 16 May 2013 05:54 AM PDT

OK, so I am convinced after reading lots of articles that MDM using SQL Server 2012 is a good thing!! Well done, MS marketing machine.... I have found lots of articles talking about concepts, and the registry pattern looks the job for me and what I am doing... tick.... Now how do I implement the registry pattern to do distributed queries to several Azure databases from my local server?? I can't find anything that gives me the how-to for this approach.... Please can someone point me in the right direction??

Steve.
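For the distributed-query side of a registry-style setup, one option from an on-premises instance is a linked server per Azure SQL Database (four-part names then work against each one). A sketch under assumptions; the server, database, login, and linked server names below are all placeholders, and you would repeat this per remote database referenced by the registry:

[code="sql"]
-- Linked server to one Azure SQL Database
EXEC sp_addlinkedserver
    @server     = N'AZURE_DB1',
    @srvproduct = N'',
    @provider   = N'SQLNCLI11',
    @datasrc    = N'myserver.database.windows.net',
    @catalog    = N'CustomerDb1';

-- SQL authentication credentials for the remote login
EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = N'AZURE_DB1',
    @useself     = 'FALSE',
    @rmtuser     = N'sqluser',
    @rmtpassword = N'********';

-- Then query with a four-part name:
-- SELECT * FROM AZURE_DB1.CustomerDb1.dbo.Customers;
[/code]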

SSIS Data Flow Task From SS2000 to SS2012

Posted: 16 May 2013 04:12 AM PDT

I am using Visual Studio 2008 version 9.0.30729.1 SP (BIDS) to create a package that will copy data from a SS2000 database table to a SS2012 database table. It works great when I run it in BIDS from my Windows XP machine. When I schedule it to run as a SQL Server Agent job on the SS2012 server, it fails with the following error message.

[i]Message
Executed as user: NT Service\SQLAgent$SQLPROD. Microsoft (R) SQL Server Execute Package Utility Version 11.0.2100.60 for 64-bit Copyright (C) Microsoft Corporation. All rights reserved. Started: 11:32:36 AM
Error: 2013-05-16 11:32:37.44 Code: 0xC0209303 Source: SendEmail Connection manager "sqlprod.AOC_Applications.sa" Description: The requested OLE DB provider SQLNCLI10.1 is not registered. If the 64-bit driver is not installed, run the package in 32-bit mode. Error code: 0x00000000. An OLE DB record is available. Source: "Microsoft OLE DB Service Components" Hresult: 0x80040154 Description: "Class not registered". End Error
Error: 2013-05-16 11:32:37.44 Code: 0xC020F42A Source: SendEmail Connection manager "sqlprod.AOC_Applications.sa" Description: Consider changing the PROVIDER in the connection string to SQLNCLI11 or visit http://www.microsoft.com/downloads to find and install support for SQLNCLI10.1. End Error
Error: 2013-05-16 11:32:37.44 Code: 0xC020801C Source: Data Flow Task OLE DB Source [1] Description: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "sqlprod.AOC_Applications.sa" failed with error code 0xC0209303. There may be error messages posted before this with more information on why the AcquireConnection method call failed. End Error
Error: 2013-05-16 11:32:37.44 Code: 0xC0047017 Source: Data Flow Task SSIS.Pipeline Description: OLE DB Source failed validation and returned error code 0xC020801C. End Error
Error: 2013-05-16 11:32:37.44 Code: 0xC004700C Source: Data Flow Task SSIS.Pipeline Description: One or more component failed validation. End Error
Error: 2013-05-16 11:32:37.44 Code: 0xC0024107 Source: Data Flow Task Description: There were errors during task validation. End Error
DTExec: The package execution returned DTSER_FAILURE (1). Started: 11:32:36 AM Finished: 11:32:37 AM Elapsed: 0.764 seconds. The package execution failed. The step failed.[/i]

After wading through all that text, I focused in on the following:

[b]Description: Consider changing the PROVIDER in the connection string to SQLNCLI11[/b]

Since my connection string is provided through a configuration file, I went in and changed SQLNCLI10.1 to SQLNCLI11 as suggested. And great news! I got a different error.

[i]Message
Executed as user: NT Service\SQLAgent$SQLPROD. Microsoft (R) SQL Server Execute Package Utility Version 11.0.2100.60 for 64-bit Copyright (C) Microsoft Corporation. All rights reserved. Started: 12:30:49 PM
Error: 2013-05-16 12:30:50.03 Code: 0xC020801E Source: SendEmail Log provider "{31503914-01FB-4A31-880A-C914E7B2A6E1}" Description: The connection manager "sqlprod.AOC_Applications.sa" is an incorrect type. The type required is "FILE". The type available to the component is "OLEDB". End Error
Error: 2013-05-16 12:30:50.06 Code: 0xC0202009 Source: SendEmail Connection manager "sqlprod.AOC_Applications.sa" Description: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft SQL Server Native Client 11.0" Hresult: 0x80004005 Description: "Client unable to establish connection". An OLE DB record is available. Source: "Microsoft SQL Server Native Client 11.0" Hresult: 0x80004005 Description: "SQL Server Native Client 11.0 does not support connections to SQL Server 2000 or earlier versions.". End Error
Error: 2013-05-16 12:30:50.06 Code: 0xC020801C Source: Data Flow Task OLE DB Source [1] Description: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "sqlprod.AOC_Applications.sa" failed with error code 0xC0202009. There may be error messages posted before this with more information on why the AcquireConnection method call failed. End Error
Error: 2013-05-16 12:30:50.06 Code: 0xC0047017 Source: Data Flow Task SSIS.Pipeline Description: OLE DB Source failed validation and returned error code 0xC020801C. End Error
Error: 2013-05-16 12:30:50.06 Code: 0xC004700C Source: Data Flow Task SSIS.Pipeline Description: One or more component failed validation. End Error
Error: 2013-05-16 12:30:50.06 Code: 0xC0024107 Source: Data Flow Task Description: There were errors during task validation. End Error
DTExec: The package execution returned DTSER_FAILURE (1). Started: 12:30:49 PM Finished: 12:30:50 PM Elapsed: 0.608 seconds. The package execution failed. The step failed.[/i]

Out of which I extracted the following:

[b][i]Description: "SQL Server Native Client 11.0 does not support connections to SQL Server 2000 or earlier versions.".[/i][/b]

So I googled that helpful tidbit and found this link: [url=http://www.biadmin.com/2012/06/sql-2012-upgrade-gotcha-sql-2000-not.html]http://www.biadmin.com/2012/06/sql-2012-upgrade-gotcha-sql-2000-not.html[/url]

In there, the author said the following:

[b][i]All you have to do is change the data source to use a .Net Provider (SqlClient Data Provider) instead of the Native OLE DB provider.[/i][/b]

From this, I'm guessing that I have to use a different SSIS connection manager. So I have some questions. First, is this a surmountable issue? Second, have I diagnosed the fix correctly? And if so, what connection manager type should I use?

Thanks, and my apologies if I'm in the wrong forum.

Mattie
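Following the advice quoted above, the switch would be to an ADO.NET connection manager (in BIDS: new connection of type ADO.NET, provider ".Net Providers\SqlClient Data Provider") paired with an ADO NET Source in the data flow in place of the OLE DB Source. The resulting connection string is roughly the following; server and database names here are placeholders, not the poster's actual values:

[code]
Data Source=OldSqlServer2000;Initial Catalog=SourceDb;Integrated Security=True;
[/code]

The configuration-file entry for the connection would then also need updating, since the SqlClient string has no Provider= keyword at all.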
