
Thursday, March 21, 2013

[SQL Server 2008 issues] Data Conversion Error



Data Conversion Error

Posted: 20 Mar 2013 07:09 PM PDT

I have a staging table which holds all data as NVARCHAR. I am now loading from the staging table into specific tables, converting various fields to numeric, but I'm getting a conversion error. Is there an easy way of finding which row is causing the problem? I've searched the forum for this error but couldn't find anything that matches my specific needs.
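TRY_CONVERT only arrived in SQL Server 2012, so on 2008 a common sketch is to hunt for the offending rows with ISNUMERIC plus a pattern check (the staging table and column names here are hypothetical):

[code="sql"]-- Rows whose value will not convert to a numeric type.
-- ISNUMERIC is permissive (it accepts '$', 'e', etc.),
-- so a stricter pattern check helps for plain integers:
SELECT *
FROM dbo.Staging
WHERE ISNUMERIC(Amount) = 0
   OR Amount LIKE '%[^0-9.-]%';  -- anything besides digits, dot, minus[/code]

Running this against each column you convert should surface the rows that break the load.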

Trace Flag 8048, 8015, or SUMA?

Posted: 14 Nov 2012 08:37 AM PST

Hola! I recently added SQL Server startup trace flag 8048 to resolve a serious spinlock contention issue on a SQL Server 2008 R2 system. I'm interested to hear from others who have found cases where performance value was delivered by trace flag 8048 (promotes the query memory grant strategy from per-NUMA-node to per-core), trace flag 8015 (SQL Server ignores physical NUMA), or SUMA (sufficiently uniform memory access, i.e. interleaved memory).
Trace flag 8048: http://blogs.msdn.com/b/psssql/archive/2011/09/01/sql-server-2008-2008-r2-on-newer-machines-with-more-than-8-cpus-presented-per-numa-node-may-need-trace-flag-8048.aspx
Trace flag 8015: http://blogs.msdn.com/b/psssql/archive/2010/04/02/how-it-works-soft-numa-i-o-completion-thread-lazy-writer-workers-and-memory-nodes.aspx
SUMA, or interleaved memory: http://msdn.microsoft.com/en-us/library/ms178144(v=sql.105).aspx
"If you have hardware NUMA, it may be configured to use interleaved memory instead of NUMA. In that case, Windows and therefore SQL Server will not recognize it as NUMA."
Gory details of the system workload, metrics gathered from the troubled system, and metrics gathered after the intervention in posts to follow. Peace! tw: @sql_handle

result of adding non-clustered non-unique index vs clustered one

Posted: 20 Mar 2013 07:20 PM PDT

We're testing some rather large (at least for us) narrow tables that will be populated with between 200 and 500 million records. Any access to the table will be by a low-cardinality id (some 20-50 distinct values). Without the option of partitioning, we are testing some index scenarios. The table:
Id1 (high cardinality)
Datekey
Name
Value
Id2 (low cardinality; always used in WHERE clauses in queries)
When adding a non-unique non-clustered index, the index is only used when additional columns are included, and the index space is then larger than the table. When adding a non-unique clustered index, the index is always used (when Id2 is addressed) and the index space is minimal. Coming from a DB2 background and being used to bitmap indexes, I'm trying to understand SQL Server's approach. The clustered index seems ideal, but what is the catch?
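A sketch of the two options under discussion, with hypothetical table and index names; the main catch of a non-unique clustered index is the hidden uniquifier SQL Server adds to duplicate keys:

[code="sql"]-- Non-unique clustered index on the low-cardinality column.
-- SQL Server adds a hidden 4-byte uniquifier to rows with duplicate keys,
-- widening the key that every non-clustered index then inherits.
CREATE CLUSTERED INDEX IX_Fact_Id2 ON dbo.FactTable (Id2);

-- Non-clustered alternative: only used when it covers the query,
-- hence the included columns -- which is why it costs a second copy of the data.
CREATE NONCLUSTERED INDEX IX_Fact_Id2_Covering
ON dbo.FactTable (Id2)
INCLUDE (Id1, Datekey, Name, Value);[/code]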

KILLED/ROLLBACK STATE - SERVICE RESTART

Posted: 20 Mar 2013 06:57 PM PDT

Hi all, is there any fix other than a SQL service restart when a transaction is stuck in the KILLED/ROLLBACK state (after killing a blocked transaction) in sysprocesses? I had this situation once, and I unsuccessfully tried moving the database offline (courtesy: Google) to end the transaction. Any help will be appreciated.
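While a session sits in KILLED/ROLLBACK there is usually no fix other than letting the rollback finish (or restarting the service), but you can at least check its progress. A sketch, where the session id 52 is hypothetical:

[code="sql"]-- Reports estimated rollback completion percentage and time remaining
-- for a session that is already being killed.
KILL 52 WITH STATUSONLY;[/code]

If the estimate shows 0% progress for a long time, the rollback may be waiting on an external resource, which is one reason sessions appear stuck.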

XML Data Type as a parameter in a Stored Procedure

Posted: 20 Mar 2013 07:06 AM PDT

Hi, I have a table as follows:
[code="sql"]CREATE TABLE [dbo].[majikanAG_subMajikan_1](
    [idx] [int] IDENTITY(-2147483648,1) NOT NULL,
    [batch_Id] [uniqueidentifier] NOT NULL,
    [icNo (Baru)] [varchar](100) NULL,
    [icNo (Lama)] [varchar](100) NULL,
    [payerNme] [varchar](300) NULL,
    [zakatAmount] [decimal](10, 2) NULL,
    [subMajikan] [varchar](100) NULL,
    CONSTRAINT [PK__majikanA__51EFEBF8002AF460] PRIMARY KEY CLUSTERED ([idx] ASC, [batch_Id] ASC)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
          ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY][/code]
My stored procedure is as follows:
[code="sql"]CREATE PROCEDURE [dbo].[addAGSummary_SubMajikan_Process1]
    @agItem xml,
    @batch_Id uniqueidentifier output
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ

    BEGIN TRANSACTION
    BEGIN TRY
        SELECT @batch_Id = NEWID()

        INSERT INTO majikanAG_subMajikan_1
            (batch_Id, [icNo (Baru)], [icNo (Lama)], payerNme, zakatAmount, subMajikan)
        SELECT @batch_Id,
            a.b.value('icNo[1]','varchar(200)') as icNo,            --as input1
            a.b.value('icNoOld[1]','varchar(15)') as icNoOld,       --as input2
            upper(a.b.value('payerNme[1]','varchar(100)')) as payerNme, --as input3
            --a.b.value('amt[1]','decimal(10,2)') as amt,           --as input4
            a.b.value('amt[1]','varchar(100)') as amt,              --as input4
            a.b.value('subCd[1]','varchar(100)') as subCd           --as input5
        FROM @agItem.nodes('/data/ag') a(b)

        COMMIT TRANSACTION
    END TRY
    BEGIN CATCH
        -- Whoops, there was an error
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION
        -- Raise an error with the details of the exception
        DECLARE @ErrMsg nvarchar(4000), @ErrSeverity int
        SELECT @ErrMsg = ERROR_MESSAGE(), @ErrSeverity = ERROR_SEVERITY()
        RAISERROR(@ErrMsg, @ErrSeverity, 1)
    END CATCH
END[/code]
There are two scenarios:
1. If @agItem does not contain much data (1,000 records), the stored procedure runs well.
2. If @agItem contains a lot of data (10,000 records), the stored procedure does not process as expected.
Why is (2) happening? Is the XML data type not suitable for 10,000 records? Please help. I'm stuck.

Existing column-level encryption in SQL Server 2008

Posted: 18 Mar 2013 04:32 PM PDT

Hi, how do I set up column-level encryption in SQL Server 2008 in a production environment without dropping the existing column, and what methods are available for encrypting data in SQL Server 2008 Standard Edition?
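Standard Edition has no TDE, but cell-level encryption functions are available in all editions. A minimal sketch using a passphrase (table and column names are hypothetical; certificates with symmetric keys and EncryptByKey are the more robust production route) adds a varbinary column alongside the existing one, so nothing has to be dropped:

[code="sql"]-- Add an encrypted column next to the existing plain-text one.
ALTER TABLE dbo.Customers ADD CardNumber_Enc varbinary(256) NULL;
GO
-- Encrypt the existing values in place.
UPDATE dbo.Customers
SET CardNumber_Enc = EncryptByPassPhrase('my secret phrase', CardNumber);
GO
-- Read back: DecryptByPassPhrase returns varbinary, so cast to the original type.
SELECT CONVERT(varchar(100),
               DecryptByPassPhrase('my secret phrase', CardNumber_Enc))
FROM dbo.Customers;[/code]

Once the application reads the encrypted column, the plain-text column can be nulled out or dropped at leisure.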

venkatrao.m.cse@gmail.com

Posted: 20 Mar 2013 06:09 PM PDT

Hi, tempdb is full. How do I solve this? Please reply.

First letter upper case for each word in sql server.

Posted: 09 Dec 2011 06:40 AM PST

Hello all, I want to write a stored procedure or function that will return the first letter of each word in upper case and the rest of the letters in lower case. I already have a table named Facility with a facility name column (I have about 50 facility names like this and I want to update the table). Example: Facility Name: ABINGTON HEALTH LANSDALE HOSP, and I want the output Facility Name: Abington Health Lansdale Hosp. Thanks, Bhavesh
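There is no built-in proper-case function in T-SQL, but a small scalar UDF covers it. A sketch (the function name is made up), where the loop upper-cases any letter that follows a space:

[code="sql"]CREATE FUNCTION dbo.fn_ProperCase (@s varchar(4000))
RETURNS varchar(4000)
AS
BEGIN
    IF @s IS NULL OR @s = '' RETURN @s;
    DECLARE @i int = 2;
    -- Start from an all-lower-case copy with the first letter capitalised.
    DECLARE @out varchar(4000) = UPPER(LEFT(@s, 1)) + LOWER(SUBSTRING(@s, 2, 4000));
    WHILE @i <= LEN(@s)
    BEGIN
        -- Upper-case any character that immediately follows a space.
        IF SUBSTRING(@out, @i - 1, 1) = ' '
            SET @out = STUFF(@out, @i, 1, UPPER(SUBSTRING(@out, @i, 1)));
        SET @i = @i + 1;
    END
    RETURN @out;
END
GO
-- Example usage against the table from the post:
UPDATE Facility SET FacilityName = dbo.fn_ProperCase(FacilityName);[/code]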

Anyone help to script the linkedservers on weekly basis by automatically

Posted: 20 Mar 2013 05:51 PM PDT

Hi, can anyone help me script the linked servers automatically on a weekly basis, to a text or .sql file? Regards, Saran
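A rough sketch that generates sp_addlinkedserver calls from sys.servers; note that linked-server login passwords cannot be read back, so sp_addlinkedsrvlogin calls have to be maintained by hand. The output could be written to a .sql file weekly by a SQL Agent job running this through sqlcmd with an -o output file:

[code="sql"]SELECT 'EXEC sp_addlinkedserver @server = N''' + name
     + ''', @srvproduct = N''' + ISNULL(product, '')
     + ''', @provider = N''' + ISNULL(provider, '')
     + ''', @datasrc = N''' + ISNULL(data_source, '') + ''';'
FROM sys.servers
WHERE is_linked = 1;[/code]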

Help needed in SQL Server Replication

Posted: 20 Mar 2013 05:50 PM PDT

I have History tables to track changes, and they have no primary key. Also, some stored procedures in my application are encrypted and cannot be exposed to end users. When I try the SQL Server replication methods, none of them work for my case. Is there any way to achieve replication, or do I need to find an alternative method (please suggest one in that case)? Expecting your support at the earliest.

File Groups advantages

Posted: 20 Mar 2013 05:48 PM PDT

Hi, BOL gives a best practice that we should have a separate filegroup for user data. Is the following statement true: if there is any corruption while writing data to the PRIMARY filegroup's files, the database will not be available; we can keep the database available by adding a secondary filegroup and placing the user data in that group. Please clarify.

Cannot scan a remote server with SQL Server BPA 1.0

Posted: 20 Mar 2013 05:34 PM PDT

Hello gents, I am using Microsoft SQL Server 2008 R2 BPA 1.0, wrapped in Microsoft Baseline Configuration Analyzer 2.0, on my workstation, and I can perform a normal scan of my local SQL Server instance. However, when I try to connect to a remote server, it keeps reporting the following error:
Category: Prerequisite
Source: <servername>
Issue: 1. User is not a member of Administrators group on remote machine, OR 2. PowerShell Remoting is not enabled on remote server
Impact: Analysis cannot be performed
Resolution: 1. Add user as a member of Administrators group, OR 2. Run Enable-PSRemoting -f through a PowerShell command prompt using elevated privileges, AND 3. Run winrm set winrm/config/winrs `@`{MaxShellsPerUser=`"10`"`} through a PowerShell command prompt using elevated privileges
I've verified all three prerequisites listed above (my AD account is a local admin, and I executed the PS commands with elevated privileges) and also turned off Windows Firewall on the target server, but still have no luck at all. Do you have any other directions to point me in? Thanks in anticipation!
P.S. The target server is not clustered, just a standalone physical box; both my workstation and the server are in the same domain; my AD account has been explicitly added to the local Windows admin group and the sysadmin role on the server and its hosted SQL instance.

TSQL Optimization - CASTing and CURSOR

Posted: 20 Feb 2013 11:59 AM PST

Hi, does CASTing add overhead to a query? I need to optimize our database initialization, and while reviewing some of the SQL scripts I found several stored procedures using CAST(NULL AS VARBINARY(MAX)) to define a column. Casting a null value always results in a null value, so why do they still need the cast? This procedure is called millions of times (>7 million calls). Also, what is the performance overhead of declaring a cursor? They use one in a scalar function that is likewise called many times; the function compares the value of the previous record to the current one. The loop only passes over fewer than 10 records, but it reads from a table containing millions of records. Any comments or suggestions? Thanks.

SSRS 'Render' and TempDB Growth

Posted: 20 Mar 2013 05:36 AM PDT

I have a client that, beginning two days ago, has had an issue with TempDB growing extremely fast and running out of space. I launched SQL Profiler while it was happening and noticed that most of the sessions were SSRS reports. I looked at the SSRS log and noticed a report running at the same time as the TempDB warnings. The SSRS log showed the report as Item Action = 'Render'. What does this mean? The report itself is based on a SQL query that is based on views. The views are not complex; however, I noticed that one of the underlying tables could benefit from an additional index - I studied the query plan and determined that. If this were a report based on a stored procedure using a lot of temp tables and such, I would suspect that. But it's just this query. Any thoughts? Thanks. Joe

Performance issue

Posted: 19 Mar 2013 10:58 PM PDT

Hi, I have two tables. treatment_plan has 8 fields like txt_instruct_description_1, txt_instruct_description_2, and so on. Per the requirement, the values of fields txt_instruct_description_4 through txt_instruct_description_8 should be moved to another table, custom_plan, which has fields txt_additional_description_1 through txt_additional_description_8. So I want to move the values of 5 treatment_plan fields into the 8 custom_plan fields. I have written the following query, but it takes a long time to execute. Is there any way to improve the performance or change the code to execute faster? Please let me know.
-------------------------------------------------
SET NOCOUNT ON
IF EXISTS (SELECT 1 FROM SYSOBJECTS SO WHERE SO.NAME = 'custom_plan')
IF EXISTS (SELECT 1 FROM SYSOBJECTS SO WHERE SO.NAME = 'treatment_plan')
BEGIN
    declare @num_source_field int
    declare @num_destination_field int
    declare @source_field varchar(100)
    declare @destination_field varchar(100)
    declare @src_value varchar(75)
    declare @dest_value varchar(75)
    declare @strsql varchar(1000)
    SET @num_source_field = 4
    SET @num_destination_field = 1

    select *, id = IDENTITY(INT,1,1) into #temp from treatment_plan

    declare @mx int
    declare @mn int
    declare @encid varchar(45)
    select @mx = max(id), @mn = min(id) from #temp
    create table #tbl(col1 varchar(45))

    while (@mn <= @mx)
    BEGIN
        select @encid = enc_id from #temp where id = @mn
        SET @num_source_field = 4
        while (@num_source_field <= 8)
        BEGIN
            select @source_field = 'txt_instruct_description_' + cast(@num_source_field as varchar(2))
            SET @num_destination_field = 1
            while (@num_destination_field <= 8)
            BEGIN
                select @destination_field = 'txt_additional_description_' + cast(@num_destination_field as varchar(2))

                truncate table #tbl
                SET @strsql = 'insert into #tbl select ' + @source_field + ' from treatment_plan where enc_id=' + '''' + @encid + ''''
                --EXECUTE sp_executesql @strsql
                exec(@strsql)
                select @src_value = col1 from #tbl

                truncate table #tbl
                SET @strsql = 'insert into #tbl select ' + @destination_field + ' from custom_plan where enc_id=' + '''' + @encid + ''''
                --EXECUTE sp_executesql @strsql
                exec(@strsql)
                select @dest_value = col1 from #tbl

                if (@dest_value is null)
                begin
                    SET @strsql = 'update custom_plan SET ' + @destination_field + '=' + '''' + @src_value + '''' + ' where enc_id=' + '''' + @encid + ''''
                    --EXECUTE sp_executesql @strsql
                    exec(@strsql)
                    break
                end
                SET @num_destination_field = @num_destination_field + 1
            END
            SET @num_source_field = @num_source_field + 1
        END
        SET @mn = @mn + 1
    END
    drop table #tbl
    drop table #temp
END
Print '----------End----------'
SET NOCOUNT OFF
GO
--------------------------------------------------------
Thanks, Tony

SSRS 2008R2 showing counties (not just states)

Posted: 08 Mar 2013 01:53 PM PST

I have a hopefully simple question. SSRS 2008R2 can do maps by state just fine... is there an easy way to get City>County>State rollup data somewhere or get shapefiles for Counties in the US? I'm working on a database for someone that would be infinitely more useful if I could show like a heat map for sales by county across the country... there's a really cool map [url=http://www.mssqltips.com/sqlservertip/2552/creating-an-ssrs-map-report-with-data-pinpoints/]here[/url] that shows the county lines in it... and that's the part I want - the Counties. The granularity of the data I have is not too good, so county-level or so is about right.Is there an easy way to create a map like that? (like a color-coded map from election night, but a county-by-county instead of state-by-state?) If so, how? And where would I get the shapefiles for the counties?Thanks!Pieter

Same stored procs and different schemas = issues?

Posted: 20 Mar 2013 02:40 PM PDT

I have two stored procs which are identical, except they exist in 2 different schemas. Stored proc A in schema dbo references objects, views, and tables in the dbo schema (all of which are named properly using the schema prefix). Stored proc B in schema ABC references the same objects, views, and tables in the dbo schema (since it's the same as the other one). The only difference between these two is the schema name. (This was done solely for organizational purposes, and the original dbo sproc A was going to go away.) So, here's where it gets odd. I expected them both to run and perform the same way. Both were set up to run as a step in a SQL Agent job. Stored proc A, in the dbo schema, runs perfectly fine. Stored proc B does not consistently perform fine: it has issues, runs forever, won't finish, and ends up locking things up. I'm confused about why code running from two different schemas behaves differently, despite the proper naming conventions in both. As a fix, I had to comment out the step that references schema B and just let the step with the schema A stored proc run. Are there any ideas out there of what may be going on or what I might check? (And if I've left out any details that would help research this, do let me know.) Any help would be appreciated.

Websockets?

Posted: 13 Mar 2013 11:54 PM PDT

Does anyone have experience with SQL Server and websockets? I need my stored procedure to be able to send a message to clients through Node.js. Thanks.

Stored procedures and using the Exec statment on a dynamic string of SQL

Posted: 20 Mar 2013 05:51 AM PDT

Sometimes a stored procedure can get out of hand - if this, if that, etc. - and I saw a developer go over 9k lines. A lot of the code was reused: the fields, the joins, etc.; just the WHERE clause would change. So now the procedure just declares the fields and joins in two varchar variables, and the ifs at the top build the WHERE, in 8,950 fewer lines of code. Just by consolidating - I noticed she had made so many mistakes, because at that length it becomes almost unmanageable. It just ends up doing exec(@sql1 + @sql2 + @sql3). My question is about performance: is this now an uncompiled T-SQL statement, and less efficient than if all the code were "if"-ed out naturally? P.S. I also noticed a downside with exec: Reporting Services cannot pick up the fields, and you have to either define them manually or cheat by putting a plain SQL statement up top and then changing it on the back end.

SSIS Conditional Split not working when run on SQL 2008 R2 using SQL Server Agent

Posted: 20 Mar 2013 06:16 AM PDT

I'm having an odd problem after upgrading our SQL 2008 server to 2008 R2: An SSIS package containing a conditional split is not passing rows through its outputs according to the set conditions when executed via a SQL Server Agent job. The SSIS package operates normally when run from Visual Studio, and it runs normally when executed via Management Studio connected to the SSIS instance--it only has issues when run via the SQL Agent. Anyone experience this before or have ideas on what to check?Note that I did not yet upgrade the SSIS package, so it's still in the SQL 2008 format (not R2).Current SQL Server 2008 R2 version: 10.50.1600

Convert Row into Columns

Posted: 20 Mar 2013 07:32 AM PDT

Hi all, I have a table with data in this format:
ManagerID | EmployeeID
1001 | 9990
1001 | 9991
1002 | 9993
1002 | 9994
1003 | 9995
1003 | 9996
1003 | 9997
1003 | 9998
1003 | 9999
I would like to get the results as:
ManagerID | EmployeeID1 | EmployeeID2 | EmployeeID3
1001 | 9990 | 9991 | null
1002 | 9993 | 9994 | null
1003 | 9995 | 9996 | 9997
If you look closely, ManagerID 1003 has 5 EmployeeIDs, but I need only 3 of them, in ascending order. Thanks in advance.
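One way to pivot a fixed number of employees per manager is ROW_NUMBER plus conditional aggregation; a sketch assuming the source table is named dbo.ManagerEmployee:

[code="sql"]WITH numbered AS (
    SELECT ManagerID, EmployeeID,
           ROW_NUMBER() OVER (PARTITION BY ManagerID
                              ORDER BY EmployeeID) AS rn
    FROM dbo.ManagerEmployee
)
SELECT ManagerID,
       MAX(CASE WHEN rn = 1 THEN EmployeeID END) AS EmployeeID1,
       MAX(CASE WHEN rn = 2 THEN EmployeeID END) AS EmployeeID2,
       MAX(CASE WHEN rn = 3 THEN EmployeeID END) AS EmployeeID3
FROM numbered
GROUP BY ManagerID;  -- rows numbered 4+ simply never match a CASE branch[/code]

The ORDER BY in ROW_NUMBER gives the ascending-order behaviour, and anything past the third employee is dropped automatically.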

Top N makes query run faster

Posted: 17 Mar 2013 11:13 PM PDT

I have a select query which brings back 35k rows; it runs in approximately 5 seconds. I moved this to a new server, and it runs for around an hour before I give up and kill the process. If I put SELECT TOP 35000 ... at the start of the query, I can get it to run in just under 4 minutes. The query runs across two servers to fetch the data, using a left outer join to pull back everything from server 1 regardless of a match on server 2. Where do I start looking to find the bottleneck? I've attached the plan.

Checklist after Migrating Databases from 2005 to 2008 R2

Posted: 20 Mar 2013 03:48 AM PDT

I am migrating SQL databases from 2005 to 2008 R2 Enterprise Edition. There are only 5 databases, but they are all 500 GB or more. I would like to know what checks to make on the databases after they are moved to the new server. Great hardware on the new server. We have a very small maintenance window, and most of the time will go towards moving the databases, testing jobs, etc. I am considering running the following in this order, but am worried about the time it takes on such large databases and the performance impact:
DBCC UPDATEUSAGE
DBCC CHECKDB
REBUILD/REORGANIZE INDEXES
UPDATE STATISTICS (for indexes that were reorganized)
RECOMPILE all the procs
Can anyone please comment on whether we really need all of these? Thanks in advance.

Monitor SSIS Job

Posted: 20 Mar 2013 12:48 AM PDT

We have a SQL job that calls an SSIS package. This package used to take about 10 minutes to run; a while back it started taking 1.5 hours, and it still does. We were able to narrow it down to which task in the control flow is taking the longest. I'm very new to SSIS, but I've been asked to look into it, so I guess I'm wondering where to start. I would like to somehow monitor what goes on, but the job runs at 3:30 AM, so I never get to see it until it's done running. Can I create an audit to record events for me to look at later? Is there a way to tell if tables were blocking/locking one another after the job has already run? I can run the step again, but it runs very quickly by itself, so that's not helpful. Below is the step that takes a long time; it's just an Execute SQL task with a query:

--CREATE TABLE #FullList(
DECLARE @FullList table (
    StoreProductId INT,
    StoreId INT,
    ProductId INT,
    NewTaxRate NUMERIC (7,6),
    BottleDeposit MONEY)

--Insert values into #fullList
INSERT INTO @FullList (StoreProductId, StoreId, ProductId, NewTaxRate, BottleDeposit)
SELECT tblTaxLoad.StoreProductId, tblTaxLoad.fkStoreId, tblTaxLoad.fkProductId,
       tblTaxLoad.NewTaxRate, tblTaxLoad.BottleDeposit
FROM dbo.tblTaxLoad WITH (NOLOCK)
ORDER BY tblTaxLoad.StoreProductId

-------------------------------------------------------------------------------------------
-- Update Taxes / Deposits 100 at a time
declare @myIteration int
declare @myCounter int
set @myIteration = 0
set @myCounter = 0

-----------------------------------------------------------
-- Take 100 items at a time & put them in temp table #SubList
--
WHILE ( SELECT COUNT(*) FROM @FullList ) > 0
BEGIN
    SET @myCounter = ( SELECT COUNT(*) FROM @FullList )

    ---------------------------------------
    -- Get next 100 items
    --
    SELECT TOP 100 StoreProductId, StoreId, ProductId, NewTaxRate, BottleDeposit
    INTO #SubList
    FROM @FullList
    ORDER BY StoreProductId

    ---------------------------------------
    -- Update these items
    --
    -- begin tran
    UPDATE tblStore_Product
    SET InStoreSalesTaxRate = NewTaxRate,
        Deposit = BottleDeposit
    --select *
    FROM #SubList SubList
    INNER JOIN dbo.tblStore_Product WITH (NOLOCK)
        ON SubList.StoreProductId = tblStore_Product.[Id]
    --commit
    -- rollback

    ---------------------------------------
    -- Report to screen
    --
    set @myIteration = @myIteration + 1
    print 'Iteration ' + cast(@myIteration as varchar) + ': ' + cast(@myCounter as varchar) + ' left'

    ---------------------------------------
    -- Remove updated from #FullList table & LoadTable
    --
    DELETE FullList
    FROM @FullList FullList
    INNER JOIN #SubList ON FullList.StoreProductId = #SubList.StoreProductId

    DELETE tblTaxLoad
    FROM dbo.tblTaxLoad
    INNER JOIN #SubList ON tblTaxLoad.StoreProductId = #SubList.StoreProductId

    ---------------------------------------
    -- Drop temp table (to be remade again)
    --
    DROP TABLE #SubList

    WAITFOR DELAY '00:00:00.200' -- .2 second between each iteration
END

---------------------------------------
-- Drop temp table -- @FullList will simply go out of scope
--
--DROP TABLE #FullList

ADP - passing value from a form to a query

Posted: 20 Mar 2013 07:27 AM PDT

Hi all, I am trying to develop a small database for our company. Although I come from a computing background, I am not a database expert. After looking at some options I decided to try an Access ADP. By way of learning both Access and SQL Server, I started going through the tutorials on the accessallinone.com site - first as a regular Access database, then as an ADP. :-) I use Access 2010 and SQL Server 2008 Express. Well, I got stumped right off the bat. In the "09 – Select Queries (Criteria)" tutorial, a value from a text field on a form is passed to a query. It works fine in stand-alone Access, of course, but not in the ADP. Here is the query as seen in the ADP (the last line is my attempt to re-create what I see in the stand-alone version, as the query builder does not prompt for Forms):
SELECT StudentID, LastName, FirstName
FROM dbo.tblStudents
WHERE (LastName = Forms!FrmLastName!TxtLastName)
Any insights into how to achieve this will be greatly appreciated!

Change the Date Format

Posted: 20 Mar 2013 07:36 AM PDT

Good evening. My SQL Server database has a table containing the field Date_Start (datetime, NULL). The format of the data in the table is yyyy/mm/dd. I have tried to export this table to Excel through SSIS. Although I used the function Convert(Date, Date_Start, 103), the date format is still yyyy/mm/dd instead of dd/mm/yyyy in my Excel file. What can I do to obtain the correct date format dd/mm/yyyy in my Excel file after exporting the data from my table through SSIS? Thanks in advance.
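One likely cause: date and datetime values carry no display format at all, so the style argument (103) only takes effect when converting to a character type. A sketch using the column from the post (the table name is hypothetical):

[code="sql"]-- Converting to varchar makes style 103 (dd/mm/yyyy) stick;
-- CONVERT(date, ...) would discard the style entirely.
SELECT CONVERT(varchar(10), Date_Start, 103) AS Date_Start_ddmmyyyy
FROM dbo.MyTable;   -- e.g. 2013-03-20 becomes '20/03/2013'[/code]

The trade-off is that Excel then receives text rather than a date value, so any date arithmetic in the spreadsheet needs the cells re-typed.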

Rebuilt Index

Posted: 20 Mar 2013 05:40 AM PDT

Dear experts, our application is designed to support multiple SQL Server versions from 2000 to current. One of the things the application does is import data into the database on a scheduled basis; depending on the client, some imports may take a couple of days to complete (yes, no mistake). During the course of an import, indexes can become very fragmented and import performance degrades. To overcome this, we check index status periodically to determine whether indexes need to be rebuilt, and if so, fragmented indexes are rebuilt automatically. We first check sys.dm_db_index_physical_stats to determine which indexes need reindexing, and then use the following logic to defragment them:
[code="sql"]If SQL version = Enterprise
    ALTER INDEX [indexX] ON [tableY] REBUILD WITH (FILLFACTOR = 80, ONLINE = ON);
Else
    DBCC INDEXDEFRAG (0, 'tableY', 'indexX') WITH NO_INFOMSGS;
End[/code]
My question to you: do you think what we have is sufficient, or is there a way to avoid using DBCC? My understanding is that DBCC INDEXDEFRAG is an old method that may do things less efficiently. Any suggestion for improvement would be much appreciated.
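DBCC INDEXDEFRAG is indeed deprecated; its documented replacement since SQL Server 2005, available in every edition and always an online operation, is ALTER INDEX ... REORGANIZE. A sketch using the placeholder names from the post:

[code="sql"]-- Edition-independent replacement for DBCC INDEXDEFRAG;
-- REORGANIZE is always online and can be interrupted without losing progress.
ALTER INDEX [indexX] ON [tableY] REORGANIZE;[/code]

This keeps the Enterprise-only ONLINE REBUILD branch intact while retiring the DBCC call on the other editions.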

Restore msdb

Posted: 20 Mar 2013 07:14 AM PDT

In SQL 2008 R2, can the msdb database be restored from a backup? A record in the table sysdtspackages90 was removed, and instead of inserting the information back in, I thought it would be easier to do a restore.

How to interprete the unused space ?

Posted: 20 Mar 2013 01:35 AM PDT

In the properties of a user database, we can see the database size and the available space. We can shrink the database so that the available space is zero. However, if we execute sp_spaceused against this database, we see that the unused space is not zero. How should we interpret the unused space reported by the stored procedure? Is there any way to remove the unused space? Many thanks in advance for any input.
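The two numbers come from different levels, which a quick sketch makes visible (the table name is hypothetical):

[code="sql"]-- Database level: file size and space not yet allocated to any object.
EXEC sp_spaceused;

-- Object level: 'unused' here is space in extents already reserved for
-- the object but not yet filled with data or index pages.
EXEC sp_spaceused 'dbo.SomeTable';[/code]

Because that unused space already belongs to objects, shrinking the database files cannot release it; it is reclaimed only as the objects fill it, or by rebuilding the objects themselves.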

Server drop connection error

Posted: 16 Jul 2012 08:37 PM PDT

Error: 17886, Severity: 20, State: 1. The server will drop the connection, because the client driver has sent multiple requests while the session is in single-user mode. This error occurs when a client sends a request to reset the connection while there are batches still running in the session, or when the client sends a request while the session is resetting a connection. Please contact the client driver vendor. [Policy: MSSQL Logfile Template]
Any suggestions?

reduce resource waits on sql server without upgrading sql server

Posted: 20 Mar 2013 04:06 AM PDT

Hi experts, how do we reduce resource waits on SQL Server without upgrading the CPU? Thanks, Nihar

Restore multiple databases

Posted: 26 Oct 2012 01:11 PM PDT

Hi all, I have to take backups of 100+ databases from one server and restore them to another server. I did the backups using a job, and it went pretty smoothly. Now, my question is, can I restore all these DBs using a job as well? I have done restores using jobs before, but never at this volume. Does anyone have a good script for this type of restore? Any advice or suggestions would be much appreciated. Thanks, SueTons.
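A rough sketch that generates one RESTORE command per database from sys.databases; the backup share and the one-file-per-database naming convention are assumptions, and databases whose file paths differ between servers would also need MOVE clauses:

[code="sql"]SELECT 'RESTORE DATABASE [' + name + ']'
     + ' FROM DISK = N''\\backupshare\sql\' + name + '.bak'''
     + ' WITH RECOVERY, REPLACE;'
FROM sys.databases
WHERE database_id > 4;  -- skip the system databases[/code]

The generated script can then be reviewed and executed as a job step on the destination server.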

SQL architecture

Posted: 17 Mar 2013 05:27 PM PDT

Can anyone explain the SQL Server architecture?

Wednesday, March 20, 2013

[SQL Server 2008 issues] best optimized query for the requirement



best optimized query for the requirement

Posted: 19 Mar 2013 06:12 PM PDT

I have the query below and need the best way to optimize this requirement:
SELECT DISTINCT product_id FROM fact_sales
WHERE (agent_id=1 AND product_code IN (1))
AND product_id IN (SELECT DISTINCT product_id FROM fact_sales WHERE (agent_id=2 AND product_code IN (3))
AND product_id IN (SELECT DISTINCT product_id FROM fact_sales WHERE (agent_id=3 AND product_code IN (2))
AND product_id IN (SELECT DISTINCT product_id FROM fact_sales WHERE (agent_id=4 AND product_code IN (1)))))
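One common rewrite of this nested-IN pattern (relational division) is a single pass with GROUP BY/HAVING; this is a sketch assuming a product must match all four (agent_id, product_code) pairs, as the nesting implies:

[code="sql"]SELECT product_id
FROM fact_sales
WHERE (agent_id = 1 AND product_code = 1)
   OR (agent_id = 2 AND product_code = 3)
   OR (agent_id = 3 AND product_code = 2)
   OR (agent_id = 4 AND product_code = 1)
GROUP BY product_id
HAVING COUNT(DISTINCT agent_id) = 4;  -- all four conditions must be present[/code]

This scans fact_sales once instead of four times, and the GROUP BY replaces the DISTINCTs.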

Strange issue with one of the four Sharepoint databases that we Log ship to our DR site.

Posted: 18 Mar 2013 09:13 PM PDT

On the server that hosts our SharePoint databases, we are log shipping 4 of them to a SQL Server instance at our DR site. The issue we are seeing is that for one of the 4 databases, the first transaction log generated by the log shipping scheduled task after the full backup has been taken is the same size as the database itself (2.4 GB). Following this, all subsequent files generated are much smaller (200 MB). I have checked for open/long-running transactions, but none are reported. Of the 4 SharePoint databases, the one with the issue is the only one that has any SharePoint workflows configured; however, they are configured to run in response to actions rather than at a certain time of day. Both servers are SQL Server 2008 SP3 Standard 64-bit. Has anyone got any thoughts on what might be causing this behaviour?

Comparision of tables in two instances

Posted: 19 Mar 2013 07:12 PM PDT

I have two instances, A and B, each with a table called Stud_Info, but the records do not match. I want to retrieve the records that exist in the database on instance A but not in the database on instance B.
Instance: A, Database: Students, Table: Stud_Info
Instance: B, Database: Stud, Table: Stud_Info
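Assuming instance B is reachable from A as a linked server (the linked-server name [B] below is a placeholder), EXCEPT returns the rows present in A's table but missing from B's:

[code="sql"]-- Run on instance A; [B] is a hypothetical linked server pointing at instance B.
SELECT * FROM Students.dbo.Stud_Info
EXCEPT
SELECT * FROM [B].Stud.dbo.Stud_Info;[/code]

EXCEPT compares entire rows, so both tables must have compatible column lists; to compare on a key only, list just the key columns in both SELECTs.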

System.Data.SqlClient.SqlException (0x80131904): Timeout Expired

Posted: 19 Mar 2013 11:49 AM PDT

I apologize if this question has been asked before, but we're really stumped, and the answers I've seen posted don't really apply to our situation. We're getting the following error sporadically in a system that's run by SQL 2008 R2 and connects to other databases on remote computers using the same version of SQL. The error message is lengthy, but perhaps one of you will see something we don't:
Error in GetScheduleByPFTID Type PAST: System.Data.SqlClient.SqlException (0x80131904): Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning()
at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
at System.Data.SqlClient.SqlDataReader.ConsumeMetaData()
at System.Data.SqlClient.SqlDataReader.get_MetaData()
at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionString)
at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)
at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)
at System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior)
at System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior)
at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
at System.Data.Common.DbDataAdapter.Fill(DataSet dataset, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet)
at Scheduling.GetScheduleByPFTID(String pType, String pReason, Int32 pPFTID, Int32 pLastTxtNo, String pRSWChartNumber)

BULK INSERT Different Data Types in the SAME Columns

Posted: 19 Mar 2013 07:25 AM PDT

Issue: I have a small csv file uploaded through a web page by an end user. It has two INT columns. Sometimes, the user accidentally uploads the column headers - they are varchar. I am using BULK INSERT to get the rows into a temporary table. The bulk insert works fine when the column headers are NOT there. It silently fails when the column headers are there. I specifically changed the table definition to use NVARCHARs for the input when I discovered that the end users were uploading the column headers.

[code="sql"]SET @Command = N'CREATE TABLE ##temp_123 ([Staging_ID] NVARCHAR(512), [LeadID] NVARCHAR(512))'
EXECUTE sp_ExecuteSQL @Command

SET @Command = N'BULK INSERT ##temp_123 FROM ''' + @InputFileNameWithPath + N''' WITH ( FIELDTERMINATOR = '','', ROWTERMINATOR = '''')'
EXECUTE sp_ExecuteSQL @Command[/code]

I can see the temp table. When the input file has column headers, there is one row with both columns NULL. When there are no column headers, there are as many rows as in the user's file and they all have values. I'm stumped. Thanks in advance. - Seth
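One hedged approach (not from the original thread): BULK INSERT's FIRSTROW option cannot conditionally skip a header that is only sometimes present, so rather than making the load cope with it, load everything as text and discard any non-numeric header row afterwards. The column names below come from the post; the DELETE predicate is an assumption (ISNUMERIC is imperfect, but adequate for distinguishing INT data from varchar headers).

[code="sql"]-- Remove the header row, if one was uploaded, then convert.
DELETE FROM ##temp_123
WHERE ISNUMERIC([Staging_ID]) = 0
   OR ISNUMERIC([LeadID]) = 0;

SELECT CONVERT(INT, [Staging_ID]) AS Staging_ID,
       CONVERT(INT, [LeadID])     AS LeadID
FROM ##temp_123;[/code]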

Check Data While Inserting

Posted: 19 Mar 2013 06:24 PM PDT

Dear All, I have a UI like this, with an "Insert" button:

[code="plain"]LineNo ProdID Discount(%)
------ ------ -----------[/code]

I can insert a single line or multiple lines in this UI. While inserting the following data, it should check whether there is any value in the Discount column greater than 18. If so, it should not insert any data and should raise an error message.

[code="plain"]LineNo ProdID Discount(%)
------ ------ -----------
1      1000   0
2      1001   2
3      1002   19
4      1003   0[/code]

I coded it, but it always checks only LineNo 1. Please help me code a check for Discount values greater than 18 against all the LineNo rows. Regards, Akbar
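A minimal sketch of a set-based check (the table variable @Lines and target table dbo.OrderLines are hypothetical names, since the post does not show the actual code): validating the whole batch with EXISTS before inserting avoids the row-by-row logic that only ever tests the first line.

[code="sql"]DECLARE @Lines TABLE (LineNo INT, ProdID INT, Discount DECIMAL(5,2));

IF EXISTS (SELECT 1 FROM @Lines WHERE Discount > 18)
BEGIN
    RAISERROR('Discount may not exceed 18 percent.', 16, 1);
    RETURN;  -- nothing is inserted
END

INSERT INTO dbo.OrderLines (LineNo, ProdID, Discount)
SELECT LineNo, ProdID, Discount
FROM @Lines;[/code]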

Need help to read text file

Posted: 18 Mar 2013 11:49 PM PDT

Hi,My text file as attachment ~ AG-Persekutuan-Bln-09-2012.txtHow to read this file as follows,1. Taxpayer data starting in the second row2. The 1st TaxPayer read as follows,[code="sql"]SUBSTRING('02029105500000326272560816105987 Samat Bin Sani 0000015004363 ',21,6) + '-' + SUBSTRING('02029105500000326272560816105987 Samat Bin Sani 0000015004363 ',27,2) +'-' + SUBSTRING('02029105500000326272560816105987 Samat Bin Sani 0000015004363 ',29,4) as newIdx,SUBSTRING('02029105500000326272560816105987 Samat Bin Sani 0000015004363 ',52,40) as myName[/code]3. The 2nd TaxPayer read as follows,[code="sql"]SUBSTRING('02011105700000488437551130055335 Jalaludin Bin Jalil 0000025004363 ',21,6) + '-' + SUBSTRING('02011105700000488437551130055335 Jalaludin Bin Jalil 0000025004363 ',27,2) +'-' + SUBSTRING('02011105700000488437551130055335 Jalaludin Bin Jalil 0000025004363 ',29,4) as newIdx,SUBSTRING('02011105700000488437551130055335 Jalaludin Bin Jalil 0000025004363 ',52,40) as myName[/code]4. And the next reading as like above5. After all the content is read, the result as follow[code="sql"]560816-10-5987 SAMAT BIN SANI 551130-05-5335 JALALUDIN BIN JALIL 590826-05-5028 MUKARRAMAH BT IBRAHIM 550317-05-5242 FATIMAH BT ISMAIL 540301-04-5163 ABU MANSOR BIN HJ MOHD NOR [/code]It's possible SQL Server 2008 R2 read text file? If yes, I really need help. Please help me. I'm stuck
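Yes, SQL Server 2008 R2 can read a fixed-width text file. A hedged sketch (the file path is a stand-in for the attachment's location, and FIRSTROW = 2 reflects point 1 of the post, that taxpayer data starts on the second row): load each line whole into a single-column staging table, then apply the same SUBSTRING positions to every row.

[code="sql"]CREATE TABLE #raw (line VARCHAR(200));

BULK INSERT #raw
FROM 'C:\AG-Persekutuan-Bln-09-2012.txt'   -- assumed path to the attached file
WITH (ROWTERMINATOR = '\n', FIRSTROW = 2); -- taxpayer data starts on row 2

SELECT SUBSTRING(line, 21, 6) + '-' +
       SUBSTRING(line, 27, 2) + '-' +
       SUBSTRING(line, 29, 4) AS newIdx,
       SUBSTRING(line, 52, 40) AS myName
FROM #raw;[/code]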

Not able to find historical data using SQL Server Profiler in SQL Server 2008

Posted: 19 Mar 2013 05:10 PM PDT

Hello All, I would like to capture the script that a user ran yesterday between particular times, but somehow I am not able to get the desired information. Can anyone please tell me if I am missing anything? Template: Standard (no options beyond the default). LoginName: name of the user. StartTime: greater than 2013-03-18 06:00:00.000. EndTime: less than 2013-03-18 06:30:00.000. Please let me know if more information is required.
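One point worth checking, hedged since the post doesn't say whether a trace was already running: Profiler only captures events while a trace is active, so it cannot show yesterday's statements retroactively. The built-in default trace is the only thing likely to have historical data, and it can be queried like this:

[code="sql"]-- The default trace captures a limited event set (object/audit events);
-- it generally does NOT record the text of ad-hoc SELECTs, so TextData
-- may well be NULL for the statements you are after.
SELECT t.StartTime, t.LoginName, t.TextData
FROM sys.traces AS st
CROSS APPLY fn_trace_gettable(st.path, DEFAULT) AS t
WHERE st.is_default = 1
  AND t.StartTime BETWEEN '2013-03-18 06:00' AND '2013-03-18 06:30';[/code]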

Data not inserting

Posted: 16 Mar 2013 12:20 AM PDT

Hi, I have written a query to check the insertion, but it is not working and I get no error:

[code="sql"]Declare @valuebased varchar(13)='0000000000'
Declare @valueDiscount varchar(13)='0000000000'
Declare @ProductDiscount varchar(13)='0000000000'
Declare @IsBooklet BIT
Declare @vouchertypeid int, @voucherno varchar(13), @denomination int, @createddate datetime, @expirydate datetime, @voucherstatus int, @quantity int, @isbookletid bit

If @IsBooklet = 1
BEGIN
    DECLARE @count int
    SET @count = 1
    WHILE (@count < 10)
    BEGIN
        SET @voucherno = 'VVB'+CONVERT(varchar(10),MAX(@valuebased)+1)
        INSERT INTO GV_Voucher (VoucherTypeId,VoucherNo, Denomination, CreatedDate, ExpiryDate, VoucherStatusId, TransactionID, Quantity, IsBooklet)
        Values (1,@voucherno,10,GETDATE(),GETDATE(),1,'a1',12,0)
        SET @valuebased = @valuebased + 1
        SET @count = @count + 1
    END
END[/code]
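A hedged diagnosis, based only on the code shown: @IsBooklet is declared but never assigned, so it is NULL and "IF @IsBooklet = 1" is never true; nothing inserts and nothing errors. MAX() over a scalar variable is also not valid, and @valuebased is varchar, so the +1 arithmetic is fragile. A sketch with those points addressed:

[code="sql"]DECLARE @IsBooklet  BIT = 1,          -- initialise it (or pass it in)
        @valuebased INT = 0,          -- numeric, so +1 works as intended
        @count      INT = 1,
        @voucherno  VARCHAR(13);

WHILE @count < 10
BEGIN
    SET @valuebased = @valuebased + 1;
    SET @voucherno = 'VVB' + CONVERT(VARCHAR(10), @valuebased);

    IF @IsBooklet = 1
        INSERT INTO GV_Voucher (VoucherTypeId, VoucherNo, Denomination, CreatedDate,
                                ExpiryDate, VoucherStatusId, TransactionID, Quantity, IsBooklet)
        VALUES (1, @voucherno, 10, GETDATE(), GETDATE(), 1, 'a1', 12, 0);

    SET @count = @count + 1;
END[/code]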

Real time live stats

Posted: 19 Mar 2013 03:55 PM PDT

Icecube has now upgraded our statistics for live streaming. You can now even track how many people are viewing your live broadcast in real time. Among other things, you can see how many viewers there are at any one time, how many in total, how long they watch, where they are from, and more. Please contact us for a test.

SQL Script Help - Need to capture a single transaction from many possibilities

Posted: 19 Mar 2013 01:44 PM PDT

Here is the DDL:

[code="sql"]IF EXISTS ( SELECT * FROM sys.objects
            WHERE object_id = OBJECT_ID(N'[dbo].[XX_TEMP_AR_INSERT]') AND type IN ( N'U' ) )
    DROP TABLE [dbo].[XX_TEMP_AR_INSERT]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[XX_TEMP_AR_INSERT] (
    [SYS_ID]          [varchar](32) NULL,
    [PAYER_ID]        [varchar](15) NULL,
    [AMOUNT]          [decimal](12, 2) NULL,
    [BILLING_SITE_ID] [varchar](15) NULL,
    [TRANSFER_ID]     [varchar](32) NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
INSERT INTO [dbo].[XX_TEMP_AR_INSERT] ( [SYS_ID], [PAYER_ID], [AMOUNT], [BILLING_SITE_ID], [TRANSFER_ID] )
VALUES ( 'DA6773113801A85B803C0F3FAC1D1A8D', '0466', '-100.78', '400', '348C701138017AAD2203767EAC1B1238' ),
       ( 'FA6773113801D9A6833C0F24AC1D1A8D', 'SELF', '100.78', '400 ', '348C701138017AAD2203767EAC1B1238' ),
       ( '8411E299A994420681210E3AB07E8ABF', 'SELF', '-100.78', '400', '2F525D70E0B74634A82E26983E3EBCA9' ),
       ( '42FAE87AC9774132ADDA13570402AC1C', 'SELF', '100.78', '900', '2F525D70E0B74634A82E26983E3EBCA9' ),
       ( '59BBC4473A013DCEAD3009B4AC1D1A8C', '0466', '-400.49', '400', '8620BF473A01AFE3BE03AF49AC1B1339' ),
       ( '88BBC4473A012F73B0300949AC1D1A8C', 'SELF', '400.49', '400', '8620BF473A01AFE3BE03AF49AC1B1339' ),
       ( 'B84A37A6E93B4882B6289C016EC47868', 'SELF', '-400.49', '400', '9EEFA7DEE2CA4318B7B140148E111293' ),
       ( 'E49651E5715046269AF023332CB50C40', 'SELF', '400.49', '900', '9EEFA7DEE2CA4318B7B140148E111293' ),
       ( '758237E78D5A4458A4DD6B2BAD3F602D', 'SELF', '501.27', '901', '28F880EDC0D148E9B2C3393161CA509A' ),
       ( '916E1F1F238E4EA48D1C31525C846372', 'SELF', '-501.27', '900', '28F880EDC0D148E9B2C3393161CA509A' )[/code]

Here is the query that I'm working with:

[code="sql"]SELECT [JArd1].[PAYER_ID] AS TRANSFER_FROM,
       [JArd1].[AMOUNT],
       [JArd1].[SYS_ID],
       [JArd2].[PAYER_ID] AS TRANSFER_TO,
       [JArd2].[SYS_ID],
       [JArd2].[AMOUNT]
FROM [dbo].[XX_TEMP_AR_INSERT] AS Ard
JOIN ( SELECT [Ard2].[SYS_ID], [Ard2].[PAYER_ID], [Ard2].[TRANSFER_ID], [Ard2].[AMOUNT]
       FROM [dbo].[XX_TEMP_AR_INSERT] AS Ard2
       WHERE [Ard2].[AMOUNT] < 0.00 ) JArd1
  ON [Ard].[SYS_ID] = [JArd1].[SYS_ID]
JOIN ( SELECT [Ard3].[SYS_ID], [Ard3].[PAYER_ID], [Ard3].[TRANSFER_ID], [Ard3].[AMOUNT]
       FROM [dbo].[XX_TEMP_AR_INSERT] AS Ard3
       WHERE [Ard3].[AMOUNT] > 0.00 ) JArd2
  ON [JArd1].[TRANSFER_ID] = [JArd2].[TRANSFER_ID]
WHERE [JArd1].[PAYER_ID] <> [JArd2].[PAYER_ID][/code]

What I'm trying to capture is the last transaction to occur in XX_TEMP_AR_INSERT where the balance goes from payer "X" to payer "Y" and payer "X" <> payer "Y". When you run SELECT * FROM XX_TEMP_AR_INSERT, I want the query above to capture data from rows 5 and 6 only. Ignore rows 1 and 2 because they are the first occurrence of a transfer. These are the results that I want the query to return: 0466 -400.49 59BBC4473A013DCEAD3009B4AC1D1A8C SELF 88BBC4473A012F73B0300949AC1D1A8C 400.49
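One hedged way to keep only the latest cross-payer pair (not from the thread): rank the qualifying pairs with ROW_NUMBER and keep the top one. The DDL has no date or sequence column, so "last" is not strictly determined; SYS_ID is used below purely as a stand-in ordering column, and if the real table has an entry timestamp, order by that instead.

[code="sql"]WITH Pairs AS (
    SELECT neg.PAYER_ID AS TRANSFER_FROM, neg.AMOUNT AS FROM_AMOUNT, neg.SYS_ID AS FROM_SYS_ID,
           pos.PAYER_ID AS TRANSFER_TO,   pos.SYS_ID AS TO_SYS_ID,   pos.AMOUNT AS TO_AMOUNT,
           ROW_NUMBER() OVER (ORDER BY neg.SYS_ID DESC) AS rn   -- stand-in for a real ordering column
    FROM dbo.XX_TEMP_AR_INSERT AS neg
    JOIN dbo.XX_TEMP_AR_INSERT AS pos
      ON pos.TRANSFER_ID = neg.TRANSFER_ID
     AND neg.AMOUNT < 0 AND pos.AMOUNT > 0
    WHERE neg.PAYER_ID <> pos.PAYER_ID
)
SELECT TRANSFER_FROM, FROM_AMOUNT, FROM_SYS_ID, TRANSFER_TO, TO_SYS_ID, TO_AMOUNT
FROM Pairs
WHERE rn = 1;[/code]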

SSMS User Interface Tip to Freeze Panes.

Posted: 18 Mar 2013 08:39 PM PDT

Accidentally discovered this, and thought I should share it, as I don't think I've seen it mentioned before.In Management Studio, in a sql window, if you click on the little rectangular block at the top right of the window (below the X to close), and pull down, the current window splits into panes, allowing you to look at 2 parts of the same procedure simultaneously.I don't expect it will change your life, but you never know!

table naming conventions

Posted: 19 Mar 2013 09:38 AM PDT

We are creating a new database for maintaining addresses, and in the future it may grow to hold more. I have a question about the naming convention for database tables. We used to have a standard of giving tables a three-letter prefix to group them together functionally, like schSchoolProgram, schAssignment, appProcess, secUsers. Is this a good way to name tables? I find it is sometimes hard to refer to a table in conversation; I have to spell out the first three characters. For the new database, shall I just use plain names, like Address instead of addrAddress and StudentHomeAddress instead of addrStudentHomeAddress? Also, for domain tables, shall I just use City and County instead of domCity and domCounty? Thanks

sys.dm_io_virtual_file_stats - Number of Read and Writes

Posted: 19 Mar 2013 03:57 AM PDT

I looked up http://msdn.microsoft.com/en-us/library/ms190326.aspx for details on the sys.dm_io_virtual_file_stats DMV and see the following descriptions: num_of_reads - number of reads issued on the file; num_of_writes - number of writes made on this file. Is it OK to assume that the number of reads translates into the number of SELECT queries, and the number of writes into the number of INSERT queries? Thanks
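These counters are physical file I/Os, not statement counts: a single SELECT can issue many reads or none at all (when the pages are already in the buffer pool), and writes are largely checkpoint, lazy-writer, and log activity rather than one-per-INSERT. A quick way to see the per-file numbers, joining to sys.master_files for the file names:

[code="sql"]SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name                  AS file_name,
       vfs.num_of_reads,
       vfs.num_of_writes
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id;[/code]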

Transactional replication problem

Posted: 19 Mar 2013 07:49 AM PDT

Hi everyone, we encountered a strange error when trying to add a subscriber. The following steps were performed: 1. Publisher and distributor (two different machines) were created. 2. A publication was created. 3. A few (12) subscribers were successfully added to the publication. Some time later, with everything running successfully, we tried to add two or more subscribers and got this error: "Message: The replication agent encountered a failure. See the previous job step history message or Replication Monitor for more information. The step failed." Replication Monitor sends us to the job, and the job sends us back to Replication Monitor. Removing the newly added subscribers makes everything look good again. Any ideas on which settings we need to check? Thanks.

Doubt about Shrinking Process, Rebuilding Index and Fragmentation

Posted: 19 Mar 2013 06:25 AM PDT

Hi, I know that shrinking a database increases fragmentation, so logically shrinking the data files also results in fragmentation. But does shrinking the log file increase fragmentation too? I hope the answer is no. Second question: rebuilding an index reduces fragmentation but increases the size of the database. I have seen the log file grow, but does the data file size also increase? Again, I hope the answer is no. Please correct me on this.

Update YTD number with the sum of MTD numbers

Posted: 19 Mar 2013 06:57 AM PDT

I have a table with budget data. The month to date number has been loaded. I need to update the YTD value. So, the October YTD number would be the same as the MTD. I got that already. Tricky part is......... November's YTD has to be October's YTD plus November's MTD. December's YTD has to be November's YTD plus December's MTD and so on.Any thoughts?
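Since SQL Server 2008 does not support ORDER BY inside SUM() OVER(), a correlated subquery is one common way to build the running total. A sketch, where the table and column names (dbo.Budget, BudgetMonth, MTD, YTD) are assumptions since the post doesn't show the schema; if the fiscal year starts in October, also restrict the subquery to the same fiscal year.

[code="sql"]UPDATE b
SET YTD = ( SELECT SUM(b2.MTD)
            FROM dbo.Budget AS b2
            WHERE b2.BudgetMonth <= b.BudgetMonth )  -- all months up to and including this one
FROM dbo.Budget AS b;[/code]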

Substring with IF or CASE statement

Posted: 19 Mar 2013 06:02 AM PDT

Hi, I am trying to import into SQL Server 2008 R2 a list of strings as follows:

[code="plain"]=KET+N.207-13-F4001
=KET+DG014-13-F4011
=KET+RE002-36-MV009[/code]

I need to split the strings so they are inserted into separate fields. One of the substring statements is: SUBSTRING(xlCode; 15; 2) [if the second position is a number] and SUBSTRING(xlCode; 15; 1) [if the second position is anything else] AS GroupNo. My experience with T-SQL is just not enough to figure this one out. Can you tell me how the statement should be set up? Thanks in advance for any help. Maarten
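A CASE expression can pick the substring length by testing whether the character at position 16 (the second position of the two-character candidate) is a digit. A sketch, noting that T-SQL separates SUBSTRING arguments with commas, not semicolons; the table name is hypothetical.

[code="sql"]SELECT CASE WHEN SUBSTRING(xlCode, 16, 1) LIKE '[0-9]'
            THEN SUBSTRING(xlCode, 15, 2)
            ELSE SUBSTRING(xlCode, 15, 1)
       END AS GroupNo
FROM dbo.ImportedCodes;   -- hypothetical table holding the imported strings[/code]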

Can't get rid of Key Lookup in execution plan

Posted: 19 Mar 2013 04:42 AM PDT

I have a query that has a very costly Key Lookup step in its execution plan. It looks like this:[code="sql"]SELECT B.BenefitID, PW.WaiverReason, EW.DateWaived, EW.DateEntered, CBA.OrderID FROM EmployeeWaivers EW JOIN CompanyBenefitAssoc CBA ON CBA.BenefitID = EW.BenefitID AND CBA.CompanyID = YYYY LEFT JOIN PlanWaivers PW ON PW.WaiverID = EW.WaiverID LEFT JOIN Benefits B ON B.BenefitID = EW.BenefitID WHERE EW.EmployeeID = XXXX AND EW.Overwritten = 0 ORDER BY CBA.OrderID, PW.WaiverReason, EW.DateEntered[/code]The Key Lookup is on the EmployeeWaivers table. I created the following covering index:[code="sql"]CREATE NONCLUSTERED INDEX [IX_EmployeeWaivers_BenefitID_WaiverID_EmployeeID_Overwritten_incl_DateWaived_DateEntered] ON [dbo].[EmployeeWaivers] ( [BenefitID] ASC, [WaiverID] ASC, [EmployeeID] ASC, [Overwritten] ASC)INCLUDE ( [DateWaived],[DateEntered])[/code]I still have the Key Lookup as part of the execution plan. What did I miss?
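One hedged possibility, which the actual plan would confirm or refute: for that index to be seekable, the equality predicates (EmployeeID, Overwritten) should lead the key. As written, BenefitID and WaiverID come first, so the optimizer cannot seek on EmployeeID and may fall back to the clustered index plus a Key Lookup. BenefitID and WaiverID are only join/output columns in this query, so they can move to INCLUDE:

[code="sql"]CREATE NONCLUSTERED INDEX IX_EmployeeWaivers_EmployeeID_Overwritten
ON dbo.EmployeeWaivers (EmployeeID, Overwritten)
INCLUDE (BenefitID, WaiverID, DateWaived, DateEntered);[/code]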

Problem having restoring database backup file on secondary server

Posted: 19 Mar 2013 06:41 AM PDT

Hi, for the last two days at one of our client sites we have been trying to figure out why it is taking two days to restore database backup files on the secondary server. The restore process is still going on and is stuck at 98%, and there are no other processes on the secondary server. Using sp_who2 I could find only two lastwaittypes: BACKUPTHREAD and BACKUPIO. How should I proceed with the analysis? I mean, does the disk have some bottleneck problem? The database we are trying to restore has filegroups; does that have anything to do with it?

how to make a stored procedure return a value when there are no available values

Posted: 19 Mar 2013 01:04 AM PDT

Here is the script for a stored procedure that I have. I want the stored procedure to return a value of 1 if the result is NULL or if there are no values for it to return.

[code="sql"]Create PROCEDURE [dbo].[GetExtractSiteExtractFileBatchPreviousSuccessInd]
    @BatchStatusCd NVARCHAR(5),
    @ExtractSiteCd NVARCHAR(10),
    @ExtractAsOfDate DATETIME
AS
SELECT case when @ExtractAsOfDate = isnull(convert(varchar(10),ExtractAsOfDate,120),'1900-01-01') then 0
            when @ExtractAsOfDate <> isnull(convert(varchar(10),ExtractAsOfDate,120),'1900-01-01') then 1
            else 1
       end as ExtractSiteExtractFileBatchPreviousSuccessInd
FROM ExtractSiteExtractFileBatch
WHERE BatchStatusCd = @BatchStatusCd
  and ExtractSiteCd = @ExtractSiteCd
  AND ExtractAsOfDate = @ExtractAsOfDate
go[/code]

Here is the EXEC statement for the stored procedure; I want it to return a value of 1 if the value does not exist in the sample data.

[code="sql"]EXEC @return_value = [dbo].[GetExtractSiteExtractFileBatchPreviousSuccessInd]
    @BatchStatusCd = N'sb',
    @ExtractSiteCd = N'CEPOK',
    @ExtractAsOfDate = N'2013-03-18 00:00:00.000'
SELECT 'Return Value' = @return_value
GO[/code]

Below is the sample data:

[code="sql"]SELECT '[ExtractSiteExtractFileBatchID]','[ExtractSiteCd]','[ExtractAsOfDate]','[BatchStatusCd]','[InsertedDate]','[UpdatedDate]'
union all select '1442','CEPOK','2012-07-12 00:00:00.000','SB','2012-07-13 06:04:41.843','2012-07-13 07:30:29.660'
union all select '1443','CFTRT','2012-07-12 00:00:00.000','SB','2012-07-13 07:30:30.077','2012-07-13 08:29:40.433'
union all select '5472','CEPED','2012-07-12 00:00:00.000','SBR','2013-03-18 17:34:12.047','2013-03-18 17:34:12.047'[/code]
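One hedged observation, plus a sketch: the EXEC captures the procedure's RETURN code, not the column the SELECT emits, so the two are currently disconnected. Assigning the indicator to a variable and returning ISNULL(@Ind, 1) covers both the NULL case and the no-matching-row case in one place (the CASE logic below is taken from the post):

[code="sql"]-- replacement body for the procedure
DECLARE @Ind INT;

SELECT @Ind = CASE WHEN @ExtractAsOfDate = ISNULL(CONVERT(VARCHAR(10), ExtractAsOfDate, 120), '1900-01-01')
                   THEN 0 ELSE 1 END
FROM ExtractSiteExtractFileBatch
WHERE BatchStatusCd = @BatchStatusCd
  AND ExtractSiteCd = @ExtractSiteCd
  AND ExtractAsOfDate = @ExtractAsOfDate;

RETURN ISNULL(@Ind, 1);  -- 1 when the result is NULL or when no row matched[/code]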

divide the percentage equally for every user- sql server stored procedure

Posted: 19 Mar 2013 03:55 AM PDT

I have written a stored procedure. as of now it inserts 10 percent for every user as below. orderid processid uid ordervalue perwet(percent wieghtage)1 1 1 10000 101 1 2 10000 101 1 3 10000 10I want that if more than 1 users are involved in 1 process in same order it should divide the percent equallythat means it must insert like thisorderid processid uid ordervalue perwet(percent weightage)1 1 1 10000 3.33 1 1 2 10000 3.331 1 3 10000 3.33structure of tables CREATE TABLE [dbo].[temp_calpoints1]([orderid] [int] NULL,[processid] [int] NULL,[uid] [int] NULL,[ordervalue] [bigint] NULL,[perwet] [int] NULL) ON [PRIMARY]CREATE TABLE [dbo].[process]([processid] [int] NULL,[processdesc] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,[perwet] [int] NULL) ON [PRIMARY]set ANSI_NULLS ONset QUOTED_IDENTIFIER ONgoALTER PROCEDURE [dbo].[CalculatePointsAllorder]@fromDate varchar(10), @toDate varchar(10)ASBEGINprint @fromdate;print @todate; delete from temp_calpoints1; delete from temp_users;delete from temp_OrderMaster;insert into temp_usersSELECT uid FROM UserInfo where exists (select * fromOrderMaster where UserInfo.uid = OrderMaster.uid);insert into temp_OrderMasterselect * from OrderMaster where orderlogindate between @fromDate and @toDate DECLARE @t_orderid intDECLARE @t_processid intDECLARE @t_uid intDECLARE @t_ordervalue bigintDECLARE db_cursor CURSOR FOR SELECT orderid FROM temp_OrderMaster;--select orderid from OrderMaster where CONVERT(VARCHAR(10),orderlogindate,110) between --@fromDate and @toDate;--where orderlogindate --between @fromDate and @toDate;DECLARE db_cursor1 CURSOR FOR SELECT processid FROM process;--DECLARE db_cursor2 CURSOR FOR select uid from temp_users;DECLARE db_cursor2 CURSOR FOR select uid from userinfo;OPEN db_cursor FETCH NEXT FROM db_cursor INTO @t_orderid WHILE @@FETCH_STATUS = 0 BEGIN --print 'order ' -- PRINT @t_orderid;OPEN db_cursor1 FETCH NEXT FROM db_cursor1 INTO @t_processid WHILE @@FETCH_STATUS = 0 BEGIN --print 'process*******' 
--PRINT @t_processid;OPEN db_cursor2 FETCH NEXT FROM db_cursor2 INTO @t_uid WHILE @@FETCH_STATUS = 0 BEGIN--print '***'--print 'user' --print @t_uid--print @t_processid --print '***'if @t_processid = 1begin--print 'in processid ' --print @t_uid--print @t_processid --print '***'set @t_ordervalue = 0;--insert into temp_calpoints1 (ordervalue) values(select @t_ordervalue = ordervalue-- @t_uid = b.uid from temp_OrderMaster a,EnquiryMaster b wherea.EnquiryId = b.enquiryid and b.uid = @t_uid and a.orderid = @t_orderid --)if @t_ordervalue <> 0 insert into temp_calpoints1 (orderid,processid,uid,ordervalue) values(@t_orderid,@t_processid,@t_uid,@t_ordervalue)endif @t_processid = 2beginset @t_ordervalue = 0;--insert into temp_calpoints1 (ordervalue) values(select @t_ordervalue = ordervalue --@t_uid = b.uid from temp_OrderMaster a,Requirement b wherea.requirementid = b.RequirementID and b.uid = @t_uid and a.orderid = @t_orderid --)if @t_ordervalue <> 0 insert into temp_calpoints1 (orderid,processid,uid,ordervalue) values(@t_orderid,@t_processid,@t_uid,@t_ordervalue)endif @t_processid = 3beginset @t_ordervalue = 0;--insert into temp_calpoints1 (ordervalue) values(select @t_ordervalue = ordervalue --,@t_uid = b.uid from temp_OrderMaster a,Proposal b wherea.proposalid = b.proposalid and b.uid = @t_uid and a.orderid = @t_orderid --)if @t_ordervalue <> 0 insert into temp_calpoints1 (orderid,processid,uid,ordervalue) values(@t_orderid,@t_processid,@t_uid,@t_ordervalue)endif @t_processid = 4beginset @t_ordervalue = 0;--insert into temp_calpoints1 (ordervalue) values(select @t_ordervalue = ordervalue --,@t_uid = uid from temp_OrderMaster whereorderid = @t_orderid and uid = @t_uid--)if @t_ordervalue <> 0 insert into temp_calpoints1 (orderid,processid,uid,ordervalue) values(@t_orderid,@t_processid,@t_uid,@t_ordervalue)endif @t_processid = 5beginset @t_ordervalue = 0;--insert into temp_calpoints1 (ordervalue) values(select @t_ordervalue = ordervalue --,@t_uid = b.uid from 
temp_OrderMaster a,OrderVendor b where b.orderid = @t_orderid and b.uid = @t_uid--)if @t_ordervalue <> 0 insert into temp_calpoints1 (orderid,processid,uid,ordervalue) values(@t_orderid,@t_processid,@t_uid,@t_ordervalue)endif @t_processid = 6beginset @t_ordervalue = 0;--set @t_uid = 0;print 'in processid 6 'print @t_uid;PRINT @t_orderid;--insert into temp_calpoints1 (ordervalue) values(select @t_ordervalue = ordervalue -- ,@t_uid = b.uid from temp_OrderMaster a,CollectionFollowUp b where b.orderid = @t_orderid and b.uid = @t_uid--)print @t_ordervalue ;if @t_ordervalue <> 0 insert into temp_calpoints1 (orderid,processid,uid,ordervalue) values(@t_orderid,@t_processid,@t_uid,@t_ordervalue)endFETCH NEXT FROM db_cursor2 INTO @t_uidENDCLOSE db_cursor2FETCH NEXT FROM db_cursor1 INTO @t_processidENDCLOSE db_cursor1 FETCH NEXT FROM db_cursor INTO @t_orderid END CLOSE db_cursor update temp_calpoints1 set perwet = (select perwetfrom process where processid=temp_calpoints1.processid) END--update query set on count of group by claue from another table in sql--select count(*) from temp_calpoints1 group by processid,orderid--EXEC [CalculatePointsAllorder] @fromDate = '2012-09-10' ,@toDate = '2013-11-13'--select * from temp_calpoints1 order by orderid,processid,uid--select * from temp_OrderMaster
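A hedged set-based alternative to the per-row cursor update at the end (not from the thread): divide each process's weight by the number of users sharing the same order and process. Note that perwet is declared INT in both tables, so it must be widened before a value like 3.33 can be stored; the ALTER below is part of the suggestion, not the original schema.

[code="sql"]ALTER TABLE temp_calpoints1 ALTER COLUMN perwet DECIMAL(9,2);

UPDATE t
SET perwet = CONVERT(DECIMAL(9,2), p.perwet)
           / ( SELECT COUNT(*)
               FROM temp_calpoints1 AS t2
               WHERE t2.orderid   = t.orderid
                 AND t2.processid = t.processid )   -- users sharing this order+process
FROM temp_calpoints1 AS t
JOIN process AS p ON p.processid = t.processid;[/code]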

Bulk Insert via VBA - Operating system error 5

Posted: 19 Mar 2013 03:58 AM PDT

Hi, I hope this is the correct location for my question. I have written a simple piece of VBA to convert some data and save it as a CSV file, which is then bulk inserted into a database. I get the error 'Cannot bulk load because file "filename" could not be opened. Operating system error code 5 (failed to retrieve text for this error. Reason: 15105).' When I copy the SQL string into SQL Server Management Studio it completes correctly, and I have fully qualified the file name and location. I have seen similar issues elsewhere on the forum which people put down to permissions; given that Management Studio can complete the code without trouble, that seems odd to me. Any ideas?

SQL to divide one row by another row based on formula or rule.

Posted: 19 Mar 2013 03:35 AM PDT

Hi All, I have the table below and need to divide one row by another row based on a formula.

[code="plain"]Date      Name ID Result
3/18/2013 A    1  5
3/18/2013 B    2  16
3/18/2013 C    3  21
3/18/2013 D    4  11
3/18/2013 E    5  45
3/18/2013 F    6  22
3/18/2013 G    7  5
3/19/2013 A    1  2
3/19/2013 B    2  7
3/19/2013 C    3  15
3/19/2013 D    4  9
3/19/2013 E    5  19
3/19/2013 F    6  12
3/19/2013 G    7  3[/code]

Please suggest how to proceed to get the result below. The Formula column is used to get the Result(%) column.

[code="plain"]Formula Date      Result(%)
B/A     3/19/2013 0.285714286
D/C     3/19/2013 0.6
F/(E-G) 3/19/2013 0.75
B/A     3/18/2013 0.3125
D/C     3/19/2013 0.523809524
F/(E-G) 3/19/2013 0.55[/code]

Thanks in advance.
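One hedged approach: pivot each Name into a column per Date with MAX(CASE ...), then apply the formulas across the pivoted columns. The table name is hypothetical; multiplying by 1.0 forces decimal division so 2/7 does not truncate to 0.

[code="sql"]WITH P AS (
    SELECT [Date],
           MAX(CASE WHEN Name = 'A' THEN Result END) * 1.0 AS A,
           MAX(CASE WHEN Name = 'B' THEN Result END) * 1.0 AS B,
           MAX(CASE WHEN Name = 'C' THEN Result END) * 1.0 AS C,
           MAX(CASE WHEN Name = 'D' THEN Result END) * 1.0 AS D,
           MAX(CASE WHEN Name = 'E' THEN Result END) * 1.0 AS E,
           MAX(CASE WHEN Name = 'F' THEN Result END) * 1.0 AS F,
           MAX(CASE WHEN Name = 'G' THEN Result END) * 1.0 AS G
    FROM dbo.Results          -- hypothetical table name
    GROUP BY [Date]
)
SELECT 'B/A' AS Formula, [Date], B / A AS [Result(%)] FROM P
UNION ALL SELECT 'D/C',     [Date], D / C       FROM P
UNION ALL SELECT 'F/(E-G)', [Date], F / (E - G) FROM P;[/code]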

case statement in count?

Posted: 19 Mar 2013 03:55 AM PDT

Hi trying to do the followingSELECT Date, Year, CASE DATEPART(mm, date) WHEN 07 THEN 6 END AS JulyMaxhours, KindOfDay, Month,[b]COUNT(CASE DATEPART(mm, date) WHEN 07 )[/b]FROM Auxiliary.CalendarWHERE (KindOfDay = 'Weekday') AND (Year = 2013) AND (Month = 7)What I want in the case is the count of days = 07 even better is if I can multiply that count by 6 ThanksJoe
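A hedged rewrite of the count: COUNT ignores NULLs, so a CASE that yields 1 only for July rows gives a conditional count, and multiplying by 6 gives the max hours. Since the WHERE clause already filters on Month = 7, a plain COUNT(*) * 6 would work just as well here; the conditional form is shown because the question asked for it.

[code="sql"]SELECT [Year], [Month],
       COUNT(CASE WHEN DATEPART(mm, [Date]) = 7 THEN 1 END)     AS JulyWeekdays,
       COUNT(CASE WHEN DATEPART(mm, [Date]) = 7 THEN 1 END) * 6 AS JulyMaxHours
FROM Auxiliary.Calendar
WHERE KindOfDay = 'Weekday' AND [Year] = 2013 AND [Month] = 7
GROUP BY [Year], [Month];[/code]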

Combining Two Very Large Tables

Posted: 19 Mar 2013 02:54 AM PDT

I have two tables that each contain approximately 90 million records. The second table represents an "update" of the second table; however, no unique identifier exists within either population to identify which records in the new table are added or changed from the previous table. Each table contains the following fields:1. First Name2. Middle Name3. Last Name4. Social Security Number5. Birth Date7. ID (unique to the table, but not across tables)The SSN is unique in both tables; however, an SSN may be revised for a distinct record from one table to the next based on updated information, etc. that is received. We currently have the following indexes assigned to each table:1. Social Security Number (Clustered Index)2. First Name, Middle Name, Last Name, Birth Date, ID (Non-Clustered Indexes)I am trying to identify the most efficient way to:1. Combine the tables, and;2. De-duplicate records within the combined table based on a combination of different fields (e.g., same SSN; same first name, last name, birth date; etc.)We've accomplished #1 in the past by partitioning each table into chunks based on SSN and then unioning those chunks. For example: SELECT Social_Security_Number, Last_Name, First_Name, Middle_Name, Death_Date, Birth_DateINTO ##tbl_20110901_FullFile_00_04 FROM tbl_20110901_FullFileWHERE LEFT(Social_Security_Number,2) IN('00','01','02','03','04')SELECT Social_Security_Number, Last_Name, First_Name, Middle_Name, Death_Date, Birth_DateINTO ##tbl_20120301_FullFile_00_04 FROM tbl_20120301_FullFileWHERE LEFT(Social_Security_Number,2) IN('00','01','02','03','04')SELECT * INTO ##tbl_Combined_00_04 FROM ##tbl_20110901_FullFile_00_04 UNION SELECT * FROM ##tbl_20120301_FullFile_00_04Is there a better way to accomplish this?Thanks,James
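A hedged alternative to the manual SSN-range chunking: UNION ALL both tables with a source-precedence flag, then keep one row per SSN with ROW_NUMBER, preferring the newer table. This combines and SSN-deduplicates in one pass; deduplicating on the other key combinations (name + birth date, etc.) would repeat the pattern with a different PARTITION BY. The column list follows the post's enumerated fields (the sample code also selects a Death_Date not listed there, so adjust as needed), and the output table name is made up.

[code="sql"]WITH Combined AS (
    SELECT Social_Security_Number, Last_Name, First_Name, Middle_Name, Birth_Date,
           ROW_NUMBER() OVER (PARTITION BY Social_Security_Number
                              ORDER BY src DESC) AS rn       -- prefer the 2012 file
    FROM ( SELECT Social_Security_Number, Last_Name, First_Name, Middle_Name, Birth_Date, 1 AS src
           FROM tbl_20110901_FullFile
           UNION ALL
           SELECT Social_Security_Number, Last_Name, First_Name, Middle_Name, Birth_Date, 2
           FROM tbl_20120301_FullFile ) AS u
)
SELECT Social_Security_Number, Last_Name, First_Name, Middle_Name, Birth_Date
INTO dbo.tbl_Combined_Dedup   -- hypothetical output table
FROM Combined
WHERE rn = 1;[/code]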

cache plan

Posted: 19 Mar 2013 01:39 AM PDT

What is a query cache plan? Please list the DMVs used to troubleshoot poor query performance in SQL 2005/2008.
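Briefly: a cached plan is the compiled execution plan SQL Server keeps in the plan cache so it can be reused across executions. The usual starting DMVs/DMFs are sys.dm_exec_query_stats, sys.dm_exec_sql_text, and sys.dm_exec_query_plan, typically joined like this:

[code="sql"]SELECT TOP (20)
       qs.total_worker_time  / qs.execution_count AS avg_cpu,
       qs.total_elapsed_time / qs.execution_count AS avg_duration,
       qs.execution_count,
       st.text       AS sql_text,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)   AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY avg_cpu DESC;[/code]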

SSAS Count and not SUM

Posted: 19 Mar 2013 03:49 AM PDT

Hi, hopefully this will be an easy one for the SSAS heads out there. I am new to cubes. I have set up some dimensions and measures, and I am browsing my cube nicely. The problem is that it is summing the values and I would like a count instead. I can't see for the life of me where you change this to a count. Thanks in advance.

SP execution time.

Posted: 19 Mar 2013 02:14 AM PDT

How can I find out how long a particular SP takes to run? SueTons.
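Two common options, sketched below: SET STATISTICS TIME for a one-off run in SSMS, or the sys.dm_exec_procedure_stats DMV (2008+) for aggregate timings of procedures still in the plan cache. The procedure name is a placeholder.

[code="sql"]-- one-off timing in SSMS
SET STATISTICS TIME ON;
EXEC dbo.MyProc;          -- hypothetical procedure name
SET STATISTICS TIME OFF;

-- aggregate timings from the plan cache (times are in microseconds)
SELECT OBJECT_NAME(object_id, database_id)   AS proc_name,
       execution_count,
       total_elapsed_time / execution_count  AS avg_elapsed
FROM sys.dm_exec_procedure_stats
WHERE database_id = DB_ID();[/code]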
