[T-SQL] Inline Valued Function query...
- Inline Valued Function query...
- Blocking SPID with Text Data?
- Change Excel formula to Case Statement
- Most efficient way to extract data for a historical data table
- Concatenating several row columns into one
- Question Regarding Store Procedure
- Ignore Error in T-SQL
Inline Valued Function query... Posted: 03 Oct 2013 12:05 AM PDT

Afternoon all,

Looking for a bit of advice on something I am struggling to explain. I have three functions, two of which are examples provided by Adam Machanic in a useful post he did on Scalar Functions vs Inline Functions. The functions use AdventureWorks and are as follows:

[b]FUNCTION 1 - Inline Function 1:[/b]
[code="sql"]
CREATE FUNCTION [dbo].[GetMaxProductQty_Inline_Test1]
(
    @ProductId INT
)
RETURNS TABLE
AS
RETURN
(
    SELECT MAX(sod.OrderQty) AS maxqty
    FROM Sales.SalesOrderDetail sod
    WHERE sod.ProductId = @ProductId
)
[/code]

[b]FUNCTION 2 - Inline Function 2:[/b]
[code="sql"]
CREATE FUNCTION GetMaxProductQty_Inline_Test2
(
    @ProductId INT
)
RETURNS @test TABLE (MaxQty INT)
AS
BEGIN
    DECLARE @MaxQty INT

    SELECT @MaxQty = (SELECT MAX(sod.OrderQty) AS maxqty
                      FROM Sales.SalesOrderDetail sod
                      WHERE sod.ProductId = @ProductId)

    INSERT INTO @test (MaxQty) VALUES (@MaxQty)

    RETURN
END
[/code]

[b]FUNCTION 3 - Scalar Function:[/b]
[code="sql"]
CREATE FUNCTION GetMaxProductQty_Scalar_Test1
(
    @ProductId INT
)
RETURNS INT
AS
BEGIN
    DECLARE @maxQty INT

    SELECT @maxQty = MAX(sod.OrderQty)
    FROM Sales.SalesOrderDetail sod
    WHERE sod.ProductId = @ProductId

    RETURN (@maxQty)
END
[/code]

I used this to give people an easy-to-understand example of the implications of a Scalar Function vs an Inline Function when they are both set up to return a similar result set. [b]Inline Function 1[/b] vs [b]Scalar Function 1[/b] highlighted what Adam described in his blog post: the Inline Function was 288 times quicker, did roughly 1,200 reads vs roughly 365,000 reads for the Scalar Function, consumed vastly less CPU, and in the process allowed an optimised execution plan to be generated detailing exactly what was happening, rather than the black-box approach of the Scalar Function.

However, I set about re-writing the Inline Function to try and utilise parameters to return a table, which spawned [b]Inline Function 2[/b]. This function, when used in exactly the same way as Inline Function 1, allowed SQL to generate a plan, but in reality it still did a similar number of reads to the Scalar Function. Only this time it was visible what was going on under the covers, and what I deem to be the culprit was a Nested Loop, which was cycling through the results in a row-by-row, cursor-like fashion much like the Scalar Function. It was just as bad for performance, only more visible through the execution plan.

What I am struggling to understand is why [b]Inline Function 1[/b] is so superior to [b]Inline Function 2[/b] when they both run inline with the query and they both allow SQL to generate a useful execution plan of what is going on behind the scenes. The obvious answer is the Nested Loop, but I am looking for more detail than this if possible. The SQL used to execute both these queries is as follows:

[code="sql"]
-- Inline Function 1 --
SELECT ProductId,
       (SELECT MaxQty FROM dbo.GetMaxProductQty_Inline_Test1(ProductId)) MaxQty
FROM Production.Product
ORDER BY ProductId

-- Inline Function 2 --
SELECT ProductId,
       (SELECT MaxQty FROM dbo.GetMaxProductQty_Inline_Test2(ProductId)) MaxQty
FROM Production.Product
ORDER BY ProductId
[/code]

Any advice on this would be massively appreciated.

Cheers
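For what it's worth, one quick way to see that the engine classifies the two "inline" functions differently is to check their object type in sys.objects: a function declared with RETURNS TABLE ... AS RETURN is a true inline table-valued function (type 'IF'), while one declared with RETURNS @table TABLE ... BEGIN ... END, like GetMaxProductQty_Inline_Test2, is a multi-statement table-valued function (type 'TF'). A small sketch (not from the original post):

[code="sql"]
-- Inline TVFs show up as type 'IF', multi-statement TVFs as 'TF',
-- and scalar functions as 'FN'.
SELECT name, type, type_desc
FROM sys.objects
WHERE name IN ('GetMaxProductQty_Inline_Test1',
               'GetMaxProductQty_Inline_Test2',
               'GetMaxProductQty_Scalar_Test1');
[/code]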
Blocking SPID with Text Data? Posted: 02 Oct 2013 11:17 PM PDT

Hi,

[code="sql"]
SELECT wt.session_id,
       ot.task_state,
       wt.wait_type,
       wt.wait_duration_ms,
       wt.blocking_session_id,
       wt.resource_description,
       es.[host_name],
       es.[program_name]
FROM sys.dm_os_waiting_tasks wt
INNER JOIN sys.dm_os_tasks ot ON ot.task_address = wt.waiting_task_address
INNER JOIN sys.dm_exec_sessions es ON es.session_id = wt.session_id
WHERE es.is_user_process = 1
[/code]

This script provides only the blocking SPID. Please can someone help: I want the text data (the query involved in the blocking) along with the SPID.

Thanks,
ananda
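A minimal sketch of one common approach (not from the original post): join the waiting tasks to sys.dm_exec_requests and use sys.dm_exec_sql_text to pull the statement text for the blocked request, and the blocking session's most recent batch via sys.dm_exec_connections.

[code="sql"]
-- Sketch only: returns the blocked statement text and, where available,
-- the blocking session's most recent batch text.
SELECT wt.session_id          AS blocked_spid,
       wt.blocking_session_id AS blocking_spid,
       wt.wait_type,
       wt.wait_duration_ms,
       blocked_text.text      AS blocked_sql,
       blocking_text.text     AS blocking_sql
FROM sys.dm_os_waiting_tasks wt
INNER JOIN sys.dm_exec_requests r
        ON r.session_id = wt.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS blocked_text
LEFT JOIN sys.dm_exec_connections c
       ON c.session_id = wt.blocking_session_id
OUTER APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS blocking_text
WHERE wt.blocking_session_id IS NOT NULL;
[/code]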
Change Excel formula to Case Statement Posted: 02 Oct 2013 05:34 PM PDT

I have this formula I use within Excel. I'm moving most of the work into SQL & can't think how I can replicate this formula...

IF(AND(A9=A8,L8="MoreThan1"),"MoreThan1",IF(A9=A8,IF(E9=E8,"OneStop","MoreThan1"),"OneStop"))

Any ideas? o_O
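A sketch of a direct CASE translation, using hypothetical column names (CurrA/CurrE for this row's A and E values, PrevA/PrevE for the previous row's values, and PrevResult for the previous row's computed output, i.e. Excel's L8). Note that because the formula refers to its own result on the previous row, in SQL you would typically need a self-join on a ROW_NUMBER (or a recursive CTE) to carry PrevResult forward:

[code="sql"]
-- Hypothetical column names; the logic mirrors the nested Excel IFs.
CASE
    WHEN CurrA = PrevA AND PrevResult = 'MoreThan1' THEN 'MoreThan1'  -- AND(A9=A8, L8="MoreThan1")
    WHEN CurrA = PrevA AND CurrE = PrevE            THEN 'OneStop'    -- A9=A8 and E9=E8
    WHEN CurrA = PrevA                              THEN 'MoreThan1'  -- A9=A8 but E9<>E8
    ELSE 'OneStop'                                                    -- A9<>A8
END
[/code]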
Most efficient way to extract data for a historical data table Posted: 02 Oct 2013 07:23 PM PDT

I have a 3rd normal form table which maintains history over time using a date row start and a date row end column. I need to extract data incrementally from this table, starting at 1900-01-01 and looping through each day. The easiest way is to provide the date and use it in a WHERE clause like this:

[code="sql"]WHERE '1900-01-01' BETWEEN [date row start] AND [date row end][/code]

incrementing the date by a day and then running the query again. However, this would obviously pick up the same records time and time again. This isn't necessarily an issue as I will be merging these 'incremental' records using a column. It will only be a one-off data load operation, but from a performance point of view I would like to think of other ways to access that data. I cannot change the structure of the source table and there are no other dates I can use.

Thanks
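One set-based alternative (a sketch only, with hypothetical table and column names, not from the original post): generate the list of dates once and join it to the history table on the validity range, so every day's snapshot is produced in a single pass instead of one query per day.

[code="sql"]
-- Sketch with hypothetical names: dbo.HistoryTable([date row start], [date row end], ...),
-- dates generated with a recursive CTE.
WITH Dates AS
(
    SELECT CAST('1900-01-01' AS DATE) AS SnapshotDate
    UNION ALL
    SELECT DATEADD(DAY, 1, SnapshotDate)
    FROM Dates
    WHERE SnapshotDate < CAST(GETDATE() AS DATE)
)
SELECT d.SnapshotDate, h.*
FROM Dates d
JOIN dbo.HistoryTable h
  ON d.SnapshotDate BETWEEN h.[date row start] AND h.[date row end]
OPTION (MAXRECURSION 0);   -- allow more than the default 100 recursions
[/code]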
Concatenating several row columns into one Posted: 02 Oct 2013 07:53 AM PDT

I've been tasked with creating a table that contains a customer# and all doctors within a certain proximity of that customer in ONE row. I'll explain: one customer may have more than one Dr. available within proximity, and ALSO one Dr. may have more than one office within proximity of that one customer, so the data in the table looks like this:

Cust# DocId DocName       DocProfile    DocCity    DocState
111   aaa   Dr.John Doe   Our prac...   Sunrise    FL
111   aaa   Dr.John Doe   Our prac...   Deerfield  FL
111   bbb   Dr.Jane Doe   It's with...  Tamarac    FL
222   ccc   Dr.Bill Doe   We've....     Encinitas  CA
222   ccc   Dr.Bill Doe   We've....     Encinitas  CA   (2 offices in same city)
222   ddd   Dr.Ray Doe    I'm glad...   Sherman    CA
222   ddd   Dr.Ray Doe    I'm glad...   Agoura     CA
222   eee   Dr.Ruth Doe   We're loc..   Studio     CA
222   fff   Dr.Brian Doe  We strive...  Palm Sp    CA

My table right now follows the structure and data above. My next step is to create a table with a single row for each cust#, with the doctor(s) column(s) concatenated into one column, i.e.

CustomerNo  Doctors in Area
111         John Doe Our prac… Sunrise FL; John Doe Our prac… Deerfield FL; Jane Doe It's with… Tamarac FL

This is where I'm stuck. What's the optimal way to go about this? ROW_NUMBER OVER? A WHILE statement? An IF THEN statement? Not really sure. Any light shed is appreciated.

Regards,
John
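A sketch of the common FOR XML PATH approach on SQL Server 2008; the table and column names below are hypothetical, matching the sample data:

[code="sql"]
-- Sketch, assuming a source table dbo.CustomerDoctors with the columns shown above.
SELECT cd.[Cust#] AS CustomerNo,
       STUFF((SELECT '; ' + d.DocName + ' ' + d.DocProfile + ' ' + d.DocCity + ' ' + d.DocState
              FROM dbo.CustomerDoctors d
              WHERE d.[Cust#] = cd.[Cust#]
              ORDER BY d.DocName, d.DocCity
              FOR XML PATH(''), TYPE).value('.', 'VARCHAR(MAX)'),
             1, 2, '') AS [Doctors in Area]
FROM dbo.CustomerDoctors cd
GROUP BY cd.[Cust#];
[/code]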
Question Regarding Store Procedure Posted: 02 Oct 2013 05:10 AM PDT

Hi Guys,

I have a quick question. I am using a stored procedure, and in it I have a few INSERT and UPDATE statements. I want to know what I should do so that if one INSERT or UPDATE statement fails, the SP gives me an error but nothing happens (no insert and no update is applied). Any advice would be greatly appreciated... Urgent please.

Thank You,
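A minimal sketch of the usual all-or-nothing pattern: wrap the statements in an explicit transaction inside TRY...CATCH. The procedure and table names below are placeholders, not from the original post:

[code="sql"]
-- Sketch: either every statement succeeds or none of the changes are applied.
CREATE PROCEDURE dbo.usp_Example_AllOrNothing   -- hypothetical name
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;          -- roll back automatically on most errors

    BEGIN TRY
        BEGIN TRANSACTION;

        INSERT INTO dbo.TableA (Col1) VALUES (1);    -- placeholder statements
        UPDATE dbo.TableB SET Col1 = 2 WHERE Id = 5;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0
            ROLLBACK TRANSACTION;

        -- Re-raise the original error message to the caller (SQL 2008 style).
        DECLARE @msg NVARCHAR(2048) = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);
    END CATCH
END
[/code]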
Ignore Error in T-SQL Posted: 24 Sep 2013 01:12 AM PDT

Hi,

How do we ignore an error in T-SQL? For example, the following code throws an error once @lCounter reaches 15 and then stops. The requirement is that it should carry on to 1000, even if there are errors in between.

[code="sql"]
DECLARE @lCounter SMALLINT
DECLARE @lError SMALLINT

SET @lCounter = 0

WHILE (@lCounter <= 1000)
BEGIN
    SELECT CONVERT(VARCHAR, GETDATE(), @lCounter)
    SET @lCounter = @lCounter + 1
END
[/code]

Thanks in advance
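One way to let the loop continue past invalid style numbers is to wrap the statement in TRY...CATCH inside the loop; a sketch of that approach:

[code="sql"]
DECLARE @lCounter SMALLINT
SET @lCounter = 0

WHILE (@lCounter <= 1000)
BEGIN
    BEGIN TRY
        SELECT CONVERT(VARCHAR, GETDATE(), @lCounter)
    END TRY
    BEGIN CATCH
        -- Swallow the error (optionally log it) and carry on with the next value.
        PRINT 'Style ' + CAST(@lCounter AS VARCHAR(10)) + ' failed: ' + ERROR_MESSAGE()
    END CATCH

    SET @lCounter = @lCounter + 1
END
[/code]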