Thursday, March 21, 2013

[SQL Server 2008 issues] Data Conversion Error

Data Conversion Error

Posted: 20 Mar 2013 07:09 PM PDT

I have a staging table which holds all data as NVARCHAR. I am now loading from the staging table to specific tables, converting various fields to numerics, but I'm getting a conversion error. Is there an easy way of finding which row is causing the problem? I've searched through the forum for this error but couldn't find anything that matches my specific needs.
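One common way to hunt for the offending rows on SQL Server 2008 (TRY_CONVERT only arrived in 2012) is to filter the staging data before converting. A minimal sketch, with a hypothetical table and column name:

```sql
-- Hypothetical names: dbo.StagingTable, Amount. Returns rows that would fail a
-- numeric conversion. ISNUMERIC alone is unreliable (it accepts '$', 'e', ','),
-- so a character-class LIKE catches the values it wrongly passes.
SELECT *
FROM   dbo.StagingTable
WHERE  ISNUMERIC(Amount) = 0          -- definitely not numeric
    OR Amount LIKE '%[^0-9.-]%';      -- contains a character outside digits/./-
```

Running this against each numeric target column usually surfaces the bad row(s) without aborting the whole load.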

Trace Flag 8048, 8015, or SUMA?

Posted: 14 Nov 2012 08:37 AM PST

Hola! Recently included SQL Server startup trace flag 8048 to resolve a serious spinlock contention issue in a SQL Server 2008 R2 system. Interested to hear from others who have found usage cases where performance value was delivered by trace flag 8048 (promote query memory grant strategy from per-NUMA node to per-core), trace flag 8015 (SQL Server ignores physical NUMA), or SUMA (interleaved sufficiently uniform memory access).

Trace flag 8048: http://blogs.msdn.com/b/psssql/archive/2011/09/01/sql-server-2008-2008-r2-on-newer-machines-with-more-than-8-cpus-presented-per-numa-node-may-need-trace-flag-8048.aspx

Trace flag 8015: http://blogs.msdn.com/b/psssql/archive/2010/04/02/how-it-works-soft-numa-i-o-completion-thread-lazy-writer-workers-and-memory-nodes.aspx

SUMA, or interleaved memory: http://msdn.microsoft.com/en-us/library/ms178144(v=sql.105).aspx ("If you have hardware NUMA, it may be configured to use interleaved memory instead of NUMA. In that case, Windows and therefore SQL Server will not recognize it as NUMA.")

Gory details of the system workload, metrics gathered from the troubled system, and metrics gathered after the intervention in posts to follow. Peace! tw: @sql_handle

result of adding non-clustered non-unique index vs clustered one

Posted: 20 Mar 2013 07:20 PM PDT

We're testing some rather large - at least for us - narrow tables that will be populated with between 200 and 500 million records. Any access to the table will be by addressing a low-cardinality id (some 20-50 distinct values). Without the option of partitioning, we are testing some index scenarios.

The table: Id1 (high cardinality), Datekey, Name, Value, Id2 (low cardinality; always used in WHERE clauses in queries).

When adding a non-unique non-clustered index, the index is only used when additional columns are included, and the index space is then larger than the table. When adding a non-unique clustered index, the index is always used (when Id2 is addressed) and index space is minimal. Coming from a DB2 background and being used to bitmap indexes, I'm trying to understand SQL Server's approach. The clustered index seems ideal, but what is the catch?
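For reference, the two options being compared could be sketched like this (table and index names hypothetical). One catch worth knowing: on a non-unique clustered index, SQL Server silently appends a hidden 4-byte uniquifier to rows that duplicate an existing key value, and you only get one clustered index per table.

```sql
-- Option 1: non-clustered index; only "covering" (INCLUDE) makes it usable for
-- these queries, which is why its size exceeds the base table.
CREATE NONCLUSTERED INDEX IX_BigNarrow_Id2_covering
    ON dbo.BigNarrow (Id2) INCLUDE (Id1, Datekey, Name, Value);

-- Option 2: clustered index on the low-cardinality column; the table itself is
-- ordered by Id2, so no extra copy of the data is needed.
CREATE CLUSTERED INDEX CIX_BigNarrow_Id2
    ON dbo.BigNarrow (Id2);
```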

KILLED/ROLLBACK STATE - SERVICE RESTART

Posted: 20 Mar 2013 06:57 PM PDT

Hi all, is there any fix other than a SQL service restart when a transaction is stuck in the KILLED/ROLLBACK state (after killing a blocked transaction) in sysprocesses? I had this situation once and unsuccessfully tried moving the database to offline mode (courtesy: Google) to end the transaction. Any help will be appreciated.
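While the spid sits in KILLED/ROLLBACK you can at least check its rollback progress (session id 57 is hypothetical here):

```sql
-- Reports estimated rollback completion percentage and time remaining for a
-- session already in KILLED/ROLLBACK. The rollback cannot be forced to finish
-- faster; a service restart only moves the same rollback into crash recovery.
KILL 57 WITH STATUSONLY;
```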

XML Data Type as a parameter in a Stored Procedure

Posted: 20 Mar 2013 07:06 AM PDT

Hi, I've a table as follows:

[code="sql"]CREATE TABLE [dbo].[majikanAG_subMajikan_1](
    [idx] [int] IDENTITY(-2147483648,1) NOT NULL,
    [batch_Id] [uniqueidentifier] NOT NULL,
    [icNo (Baru)] [varchar](100) NULL,
    [icNo (Lama)] [varchar](100) NULL,
    [payerNme] [varchar](300) NULL,
    [zakatAmount] [decimal](10, 2) NULL,
    [subMajikan] [varchar](100) NULL,
    CONSTRAINT [PK__majikanA__51EFEBF8002AF460] PRIMARY KEY CLUSTERED
    (
        [idx] ASC,
        [batch_Id] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
            ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY][/code]

My stored procedure is as follows:

[code="sql"]CREATE PROCEDURE [dbo].[addAGSummary_SubMajikan_Process1]
    @agItem xml,
    @batch_Id uniqueidentifier output
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    set transaction isolation level repeatable read

    Begin transaction
    Begin Try
        select @batch_Id = NEWID()

        insert into majikanAG_subMajikan_1
            (batch_Id, [icNo (Baru)], [icNo (Lama)], payerNme, zakatAmount, subMajikan)
        select @batch_Id,
            a.b.value('icNo[1]','varchar(200)') as icNo,                --as input1,
            a.b.value('icNoOld[1]','varchar(15)') as icNoOld,           --as input2,
            upper(a.b.value('payerNme[1]','varchar(100)')) as payerNme, --as input3,
            --a.b.value('amt[1]','decimal(10,2)') as amt,               --as input4,
            a.b.value('amt[1]','varchar(100)') as amt,                  --as input4,
            a.b.value('subCd[1]','varchar(100)') as subCd               --as input5,
        from @agItem.nodes('/data/ag') a(b)

        COMMIT transaction
    End Try
    Begin Catch
        -- Whoops, there was an error
        --IF @@TRANCOUNT > 0
        ROLLBACK transaction

        -- Raise an error with the details of the exception
        DECLARE @ErrMsg nvarchar(4000), @ErrSeverity int
        SELECT @ErrMsg = ERROR_MESSAGE(), @ErrSeverity = ERROR_SEVERITY()
        RAISERROR(@ErrMsg, @ErrSeverity, 1)
    End Catch
END[/code]

There are 2 scenarios:
1. If @agItem does not have much data (1,000 records), the stored procedure runs well.
2. If @agItem has a lot of data (10,000 records), the stored procedure cannot process as expected.

Why is (2) happening? Is the XML data type not suitable for 10,000 records? Please help, I'm stuck.
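One frequently cited tuning for large XML inputs (offered here as a sketch, not a guaranteed fix) is to add text() to the value() paths so SQL Server extracts the node text directly instead of serializing each element first; the difference grows with document size:

```sql
-- Same shredding as the procedure's SELECT, but with (node/text())[1] paths,
-- which typically produce a much cheaper plan on large XML variables.
select @batch_Id,
    a.b.value('(icNo/text())[1]',     'varchar(200)')          as icNo,
    a.b.value('(icNoOld/text())[1]',  'varchar(15)')           as icNoOld,
    upper(a.b.value('(payerNme/text())[1]', 'varchar(100)'))   as payerNme,
    a.b.value('(amt/text())[1]',      'varchar(100)')          as amt,
    a.b.value('(subCd/text())[1]',    'varchar(100)')          as subCd
from @agItem.nodes('/data/ag') a(b);
```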

existing column Level encryption in sql server2008

Posted: 18 Mar 2013 04:32 PM PDT

Hi, how do I set up column-level encryption in SQL Server 2008 in a production environment without dropping the existing column, and what methods are available for encrypting data in SQL Server 2008 Standard Edition?
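Standard Edition supports cell-level encryption (TDE requires Enterprise). A minimal sketch of encrypting an existing column in place; all object names here are hypothetical, and the encrypted values need a new varbinary column alongside the original one, since the old column's type cannot hold ciphertext:

```sql
-- One-time key setup (guard the password and back up the keys).
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Passphrase';
CREATE CERTIFICATE CardCert WITH SUBJECT = 'Card data protection';
CREATE SYMMETRIC KEY CardKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE CardCert;

-- Add an encrypted column next to the existing plain-text one.
ALTER TABLE dbo.Customers ADD CardNumberEnc varbinary(256);

OPEN SYMMETRIC KEY CardKey DECRYPTION BY CERTIFICATE CardCert;
UPDATE dbo.Customers
SET    CardNumberEnc = EncryptByKey(Key_GUID('CardKey'), CardNumber);
CLOSE SYMMETRIC KEY CardKey;
```

Reads then go through DecryptByKey; the original column can be emptied or dropped once the migration is verified.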

venkatrao.m.cse@gmail.com

Posted: 20 Mar 2013 06:09 PM PDT

Hi, tempdb is full. How do I solve this? Please reply.
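Before shrinking or restarting, it helps to see what kind of allocation is actually filling tempdb. A sketch using a documented DMV (run it in the tempdb database):

```sql
-- Breaks tempdb usage into user objects (temp tables/variables), internal
-- objects (sorts, hashes, spools), and the version store. Page counts * 8 = KB.
SELECT SUM(user_object_reserved_page_count)     * 8 AS user_objects_kb,
       SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
       SUM(version_store_reserved_page_count)   * 8 AS version_store_kb,
       SUM(unallocated_extent_page_count)       * 8 AS free_kb
FROM   sys.dm_db_file_space_usage;
```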

First letter upper case for each word in sql server.

Posted: 09 Dec 2011 06:40 AM PST

Hello all, I want to write a stored procedure or function that will return each word with its first letter in upper case and the rest of the letters in lower case. I already have a table Facility with a facility name (I have 50 facility names like this and I want to update the table). Example: FacilityName 'ABINGTON HEALTH LANSDALE HOSP' should become 'Abington Health Lansdale Hosp'. Thanks, Bhavesh
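A minimal sketch of such a function (the name dbo.ProperCase and the table/column names in the usage line are hypothetical): lower-case everything, then upper-case the first character and any character that follows a space.

```sql
CREATE FUNCTION dbo.ProperCase (@s varchar(4000))
RETURNS varchar(4000)
AS
BEGIN
    -- Start with "Xxxx..." form: first char upper, rest lower.
    DECLARE @i int = 2,
            @out varchar(4000) = UPPER(LEFT(@s, 1)) + LOWER(SUBSTRING(@s, 2, 4000));
    WHILE @i <= LEN(@out)
    BEGIN
        -- Upper-case any character that immediately follows a space.
        IF SUBSTRING(@out, @i - 1, 1) = ' '
            SET @out = STUFF(@out, @i, 1, UPPER(SUBSTRING(@out, @i, 1)));
        SET @i = @i + 1;
    END
    RETURN @out;
END
```

Then the table can be fixed in one statement, e.g. UPDATE dbo.Facility SET FacilityName = dbo.ProperCase(FacilityName);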

Anyone help to script the linkedservers on weekly basis by automatically

Posted: 20 Mar 2013 05:51 PM PDT

Hi, can anyone help with automatically scripting the linked servers on a weekly basis to a text or .sql file? Regards, Saran

Help needed in SQL Server Replication

Posted: 20 Mar 2013 05:50 PM PDT

I have History tables to track changes, and they do not have a primary key. Also, some stored procedures in my application are encrypted and cannot be exposed to end users. When I try the SQL Server replication methods, none of them works for my case. Is there any way to achieve replication, or do I need to find an alternative method (please advise me in that case)? Expecting your support at the earliest.

File Groups advantages

Posted: 20 Mar 2013 05:48 PM PDT

Hi, BOL gives a best practice that "we need to have a separate filegroup for the user data". Is the following statement true: if there is any corruption while writing data to the PRIMARY filegroup's files, the database will not be available to access, but we can keep the database available by adding a secondary filegroup and placing the user data in that group? Please clarify.
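As a sketch of the practice under discussion (database, filegroup, and path names are hypothetical), a secondary filegroup can be added and made the default so that new user objects land outside PRIMARY:

```sql
-- Add a secondary filegroup with one data file, then make it the default
-- filegroup for newly created user tables and indexes.
ALTER DATABASE MyDb ADD FILEGROUP UserData;
ALTER DATABASE MyDb ADD FILE
    (NAME = 'MyDb_UserData1', FILENAME = 'D:\Data\MyDb_UserData1.ndf')
    TO FILEGROUP UserData;
ALTER DATABASE MyDb MODIFY FILEGROUP UserData DEFAULT;
```

Note this does not retroactively move existing tables; those must be rebuilt onto the new filegroup.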

Cannot scan a remote server with SQL Server BPA 1.0

Posted: 20 Mar 2013 05:34 PM PDT

Hello gents, I am using Microsoft SQL Server 2008 R2 BPA 1.0, wrapped in Microsoft Baseline Configuration Analyzer 2.0, on my workstation, and can perform a normal scan of my local SQL Server instance. However, when I try to connect to a remote server, it keeps reporting the following error:

Category: Prerequisite
Source: <servername>
Issue: 1. User is not a member of Administrators group on remote machine, OR 2. Powershell Remoting is not Enabled on Remote Server
Impact: Analysis cannot be performed
Resolution: 1. Add user as a member of Administrators group, OR 2. Run Commands Enable-PSRemoting -f through PowerShell command prompt using elevated privileges, AND 3. Run Command winrm set winrm/config/winrs `@`{MaxShellsPerUser=`"10`"`} through PowerShell command prompt using elevated privileges

I've verified all the prerequisites listed above (my own AD account is a local admin; I executed the PS commands with elevated privileges) and also turned off Windows Firewall on the target server, but still have no luck at all. Do you have any other directions to point me in? Thanks in anticipation!

P.S. The target server is not clustered, just a standalone physical box; both my workstation and the server are in the same domain; my AD account has been explicitly added to the local Windows admin group and the sysadmin role on the server and its hosted SQL instance.

TSQL Optimization - CASTing and CURSOR

Posted: 20 Feb 2013 11:59 AM PST

Hi, does CASTing add overhead to a query? I need to optimize our database initialization, and while reviewing the SQL scripts I found several stored procedures using CAST(NULL AS VARBINARY(MAX)) when defining a column. If they cast a NULL value, the result is always NULL, so why do they still need to cast it? This sp is called over 7 million times.

Also, what is the performance overhead of declaring a cursor? They use one in a scalar function that is also called many times. The function compares the value of the previous record to the current one; the loop only passes over fewer than 10 records, but it reads from a table containing millions of records. Any comments or suggestions? Thanks.
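On the CAST(NULL AS ...) question: the cast is resolved at compile time and costs essentially nothing at runtime; its purpose is to give the column a data type, because an untyped NULL defaults to int in contexts like SELECT ... INTO or UNION. A small sketch (temp table name hypothetical):

```sql
-- Without the CAST, the payload column would be created as a nullable int;
-- with it, SELECT INTO creates a varbinary(max) column.
SELECT id = 1,
       payload = CAST(NULL AS VARBINARY(MAX))
INTO   #staging;

EXEC tempdb..sp_help '#staging';  -- shows payload as varbinary(max)
```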

SSRS 'Render' and TempDB Growth

Posted: 20 Mar 2013 05:36 AM PDT

I have a client that, beginning two days ago, is having an issue with TempDB growing extremely fast and running out of space. I launched SQL Profiler while it was happening and noticed that most of the sessions were SSRS reports. I looked at the SSRS log and noticed a report running at the same time as the TempDB warnings. The SSRS log showed the report with Item Action = 'Render'. What does this mean? The report itself is based on a SQL query over views. The views are not complex; however, I noticed that one of the underlying tables could benefit from an additional index (I studied the query plan and determined that). If this were a report based on a stored procedure using a lot of temp tables and such, I would suspect that, but it's just this query. Any thoughts? Thanks, Joe

Performance issue

Posted: 19 Mar 2013 10:58 PM PDT

Hi, I have two tables. The treatment_plan table has 8 fields like txt_instruct_Description_1, txt_instruct_Description_2, and so on. As per the requirement, the values of fields txt_instruct_Description_4 through txt_instruct_Description_8 should be moved to another table, custom_plan. The custom_plan table has fields txt_addition_description_1 through txt_addition_description_8. I want to move the values of those 5 treatment_plan fields into the 8 custom_plan fields. I have written the following query, but it is taking too long to execute. Is there any other way to improve the performance or change the code so it executes faster? Please let me know.

-------------------------------------------------
SET NOCOUNT ON
IF EXISTS (SELECT 1 FROM SYSOBJECTS SO WHERE SO.NAME = 'custom_plan')
IF EXISTS (SELECT 1 FROM SYSOBJECTS SO WHERE SO.NAME = 'treatment_plan')
BEGIN
    declare @num_source_field int
    declare @num_destination_field int
    declare @source_field varchar(100)
    declare @destination_field varchar(100)
    declare @src_value varchar(75)
    declare @dest_value varchar(75)
    declare @strsql varchar(1000)

    SET @num_source_field = 4
    SET @num_destination_field = 1

    select *, id = IDENTITY(INT,1,1) into #temp from treatment_plan

    Declare @mx int
    declare @mn int
    Declare @encid varchar(45)
    select @mx = max(id), @mn = min(id) from #temp

    create table #tbl (col1 varchar(45))

    while (@mn <= @mx)
    BEGIN
        select @encid = enc_id from #temp where id = @mn
        SET @num_source_field = 4
        while (@num_source_field <= 8)
        BEGIN
            select @source_field = 'txt_instruct_description_' + cast(@num_source_field as varchar(2))
            SET @num_destination_field = 1
            while (@num_destination_field <= 8)
            BEGIN
                select @destination_field = 'txt_additional_description_' + cast(@num_destination_field as varchar(2))

                truncate table #tbl
                SET @strsql = 'insert into #tbl select ' + @source_field + ' from treatment_plan where enc_id=' + '''' + @encid + ''''
                --EXECUTE sp_executesql @strsql
                exec(@strsql)
                select @src_value = col1 from #tbl

                truncate table #tbl
                SET @strsql = 'insert into #tbl select ' + @destination_field + ' from custom_plan where enc_id=' + '''' + @encid + ''''
                --EXECUTE sp_executesql @strsql
                exec(@strsql)
                select @dest_value = col1 from #tbl

                if (@dest_value is null)
                begin
                    SET @strsql = 'update custom_plan SET ' + @destination_field + '=' + '''' + @src_value + '''' + ' where enc_id=' + '''' + @encid + ''''
                    --EXECUTE sp_executesql @strsql
                    exec(@strsql)
                    break
                end
                SET @num_destination_field = @num_destination_field + 1
            END
            SET @num_source_field = @num_source_field + 1
        END
        SET @mn = @mn + 1
    END

    drop table #tbl
    drop table #temp
END
Print '----------End----------'
SET NOCOUNT OFF
GO
--------------------------------------------------------
Thanks, Tony
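A hedged sketch of the set-based direction this usually goes (keeping the original column names): one UPDATE per source/destination column pair replaces the per-row dynamic SQL. Note the semantics differ slightly from the loop above, which hunts for the first free destination slot per row, so the column pairing would need to be agreed up front:

```sql
-- One set-based statement updates every row at once instead of one enc_id at a
-- time; repeat (or generate dynamically) for each source/destination pair.
UPDATE cp
SET    cp.txt_additional_description_1 = tp.txt_instruct_description_4
FROM   custom_plan cp
JOIN   treatment_plan tp
    ON tp.enc_id = cp.enc_id
WHERE  cp.txt_additional_description_1 IS NULL;
```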

SSRS 2008R2 showing counties (not just states)

Posted: 08 Mar 2013 01:53 PM PST

I have a hopefully simple question. SSRS 2008 R2 can do maps by state just fine... is there an easy way to get City > County > State rollup data somewhere, or to get shapefiles for counties in the US? I'm working on a database for someone that would be infinitely more useful if I could show something like a heat map for sales by county across the country. There's a really cool map [url=http://www.mssqltips.com/sqlservertip/2552/creating-an-ssrs-map-report-with-data-pinpoints/]here[/url] that shows the county lines in it, and that's the part I want - the counties. The granularity of the data I have is not too good, so county level or so is about right.

Is there an easy way to create a map like that (like a color-coded map from election night, but county-by-county instead of state-by-state)? If so, how? And where would I get the shapefiles for the counties? Thanks! Pieter

Same stored procs and different schemas = issues?

Posted: 20 Mar 2013 02:40 PM PDT

I have two stored procs which are identical except that they exist in two different schemas. Stored proc A in schema dbo references objects, views, and tables in the dbo schema (all of which are named properly using the schema prefix). Stored proc B in schema ABC references the same objects, views, and tables in the dbo schema. The only difference between them is the schema name. (This was done solely for organizational purposes, and the original dbo sproc A was going to go away.)

So, here's where it gets odd. I expected them both to run and perform the same way. Both were set up to run as a step in a SQL Agent job. Stored proc A, in the dbo schema, runs perfectly fine. Stored proc B does not consistently perform fine: it has issues, runs forever, won't finish, and ends up locking things up. I'm confused about why code running from two different schemas operates differently, despite the proper naming conventions in both. As a fix, I had to comment out the step that references schema B and just let the step with the schema A stored proc run.

Are there any ideas out there of what may be going on or what I might check? (And if I've left out any details that would help research this, do let me know.) Any help would be appreciated.

Websockets?

Posted: 13 Mar 2013 11:54 PM PDT

Does anyone have experience with SQL Server and WebSockets? I need my stored procedure to be able to send a message to clients through Node.js. Thanks,

Stored procedures and using the Exec statment on a dynamic string of SQL

Posted: 20 Mar 2013 05:51 AM PDT

Sometimes an sp can get out of hand with if-this, if-that, etc.; I saw a developer go over 9k lines. A lot of the code was reused (the fields, the joins), and only the WHERE clause would change. So now the sp just declares the fields and joins in two varchar variables, and the ifs at the top build the WHERE, in 8,950 fewer lines of code. By consolidating (I noticed she had made so many mistakes, because at that length it becomes almost unmanageable) it just ends up doing:

exec (@sql1 + @sql2 + @sql3)

My question is about performance: is this now an uncompiled T-SQL statement, and less efficient than if all the code were written out in the ifs naturally?

P.S. I also noticed a downside with exec: Reporting Services cannot pick up the fields, and you have to either define them manually or cheat by putting a plain SQL statement up top and then changing it on the back end.
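On the compilation question: dynamic SQL built by concatenating values into the string gets a fresh ad-hoc plan whenever the text changes, while sp_executesql with real parameters lets SQL Server cache and reuse one plan. A small sketch (table and column names hypothetical):

```sql
-- Parameterized dynamic SQL: the statement text stays constant, so the plan
-- for it can be cached and reused across calls with different values.
DECLARE @sql nvarchar(max) =
    N'SELECT OrderId, Total FROM dbo.Orders WHERE CustomerId = @cust';
EXEC sp_executesql @sql, N'@cust int', @cust = 42;
```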

SSIS Conditional Split not working when run on SQL 2008 R2 using SQL Server Agent

Posted: 20 Mar 2013 06:16 AM PDT

I'm having an odd problem after upgrading our SQL 2008 server to 2008 R2: An SSIS package containing a conditional split is not passing rows through its outputs according to the set conditions when executed via a SQL Server Agent job. The SSIS package operates normally when run from Visual Studio, and it runs normally when executed via Management Studio connected to the SSIS instance--it only has issues when run via the SQL Agent. Anyone experience this before or have ideas on what to check?Note that I did not yet upgrade the SSIS package, so it's still in the SQL 2008 format (not R2).Current SQL Server 2008 R2 version: 10.50.1600

Convert Row into Columns

Posted: 20 Mar 2013 07:32 AM PDT

Hi All, I've a table that has data in this format:

ManagerID | EmployeeID
1001 | 9990
1001 | 9991
1002 | 9993
1002 | 9994
1003 | 9995
1003 | 9996
1003 | 9997
1003 | 9998
1003 | 9999

I would like to get the results as:

ManagerID | EmployeeID1 | EmployeeID2 | EmployeeID3
1001 | 9990 | 9991 | null
1002 | 9993 | 9994 | null
1003 | 9995 | 9996 | 9997

If you notice, ManagerID 1003 has 5 EmployeeIDs, but I need only 3 of them, in ascending order... Thanks in advance
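One common approach (sketched with a hypothetical table name): number each manager's employees with ROW_NUMBER, then fold the first three into columns with conditional aggregation.

```sql
-- Rank employees within each manager, then pivot ranks 1-3 into columns;
-- managers with fewer than 3 employees get NULL in the spare columns.
WITH ranked AS (
    SELECT ManagerID, EmployeeID,
           rn = ROW_NUMBER() OVER (PARTITION BY ManagerID ORDER BY EmployeeID)
    FROM   dbo.ManagerEmployee          -- table name hypothetical
)
SELECT ManagerID,
       EmployeeID1 = MAX(CASE WHEN rn = 1 THEN EmployeeID END),
       EmployeeID2 = MAX(CASE WHEN rn = 2 THEN EmployeeID END),
       EmployeeID3 = MAX(CASE WHEN rn = 3 THEN EmployeeID END)
FROM   ranked
WHERE  rn <= 3
GROUP BY ManagerID;
```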

Top N makes query run faster

Posted: 17 Mar 2013 11:13 PM PDT

I have a select query which brings back 35k rows; it runs in approximately 5 seconds. I moved it to a new server and it runs for around an hour before I give up and kill the process. If I put "select top 35000 ..." at the start of the query, I can get it to run in just under 4 minutes. The query runs across two servers to fetch the data, using a left outer join to pull back everything from server 1 regardless of a match on server 2. Where do I start looking to find the bottleneck? I've attached the plan.

Checklist after Migrating Databases from 2005 to 2008 R2

Posted: 20 Mar 2013 03:48 AM PDT

I am migrating SQL databases from 2005 to 2008 R2 Enterprise Edition. There are only 5 databases, but they are all 500 GB or more. I would like to know what checks to run on the dbs after they are moved to the new server (great hardware on the new server). We have a very small maintenance window, and most of the time will go towards moving the databases, testing jobs, etc. I am considering running the following, in this order, but am worried about how long it takes on such large databases and about the performance impact:

DBCC UPDATEUSAGE
DBCC CHECKDB
REBUILD/REORGANIZE INDEXES
UPDATE STATISTICS (for indexes that were reorganized)
RECOMPILE all the procs

Can anyone please provide expert comments on whether we really need all of these? Thanks in advance...

Monitor SSIS Job

Posted: 20 Mar 2013 12:48 AM PDT

We have a SQL job that calls an SSIS package. The package used to take about 10 minutes to run; a while back it started taking 1.5 hours and still does. We were able to narrow it down to which task in the control flow takes the longest. I'm very new to SSIS, but I have been asked to look into it, so I am wondering where to start. I would like to somehow monitor what goes on, but the job runs at 3:30 AM, so I never get to see it until it's done. Can I create an audit to record events for me to look at later? Is there a way to tell whether tables were blocking/locking one another after the job has already run? I can run the step again, but by itself it runs very quickly, so that's not much help. Below is the step that takes a long time; it's just an Execute SQL Task with a query:

--CREATE TABLE #FullList(
DECLARE @FullList table (
    StoreProductId INT,
    StoreId INT,
    ProductId INT,
    NewTaxRate NUMERIC(7,6),
    BottleDeposit MONEY)

--Insert values into #fullList
INSERT INTO @FullList (StoreProductId, StoreId, ProductId, NewTaxRate, BottleDeposit)
SELECT tblTaxLoad.StoreProductId, tblTaxLoad.fkStoreId, tblTaxLoad.fkProductId,
       tblTaxLoad.NewTaxRate, tblTaxLoad.BottleDeposit
FROM dbo.tblTaxLoad WITH (NOLOCK)
ORDER BY tblTaxLoad.StoreProductId

-----------------------------------------------------------
-- Update Taxes / Deposits 100 at a time
declare @myIteration int
declare @myCounter int
set @myIteration = 0
set @myCounter = 0

-----------------------------------------------------------
-- Take 100 items at a time & put them in temp table #SubList
--
WHILE ( SELECT COUNT(*) FROM @FullList ) > 0
BEGIN
    SET @myCounter = ( SELECT COUNT(*) FROM @FullList )

    ---------------------------------------
    -- Get next 100 items
    --
    SELECT TOP 100 StoreProductId, StoreId, ProductId, NewTaxRate, BottleDeposit
    INTO #SubList
    FROM @FullList
    ORDER BY StoreProductId

    ---------------------------------------
    -- Update these items
    --
    -- begin tran
    UPDATE tblStore_Product
    SET InStoreSalesTaxRate = NewTaxRate,
        Deposit = BottleDeposit
    --select *
    FROM #SubList SubList
    INNER JOIN dbo.tblStore_Product WITH (NOLOCK)
        ON SubList.StoreProductId = tblStore_Product.[Id]
    --commit
    -- rollback

    ---------------------------------------
    -- Report to screen
    --
    set @myIteration = @myIteration + 1
    print 'Iteration ' + cast(@myIteration as varchar) + ': ' + cast(@myCounter as varchar) + ' left'

    ---------------------------------------
    -- Remove updated from #FullList table & LoadTable
    --
    DELETE FullList
    FROM @FullList FullList
    INNER JOIN #SubList ON FullList.StoreProductId = #SubList.StoreProductId

    DELETE tblTaxLoad
    FROM dbo.tblTaxLoad
    INNER JOIN #SubList ON tblTaxLoad.StoreProductId = #SubList.StoreProductId

    ---------------------------------------
    -- Drop temp table (to be remade again)
    --
    DROP TABLE #SubList

    WAITFOR DELAY '00:00:00.200' -- .2 second between each iteration
END

----------------------------------------
-- Drop temp table
-- @FullList will simply go out of scope
--
--DROP TABLE #FullList

ADP - passing value from a form to a query

Posted: 20 Mar 2013 07:27 AM PDT

Hi all, I am trying to develop a small database for our company. Although I come from a computing background, I am not a database expert. After looking at some options I decided to try an Access ADP. By way of learning both Access and SQL Server I started going through the tutorials on the accessallinone.com site, first as a regular Access database, then as an ADP. :-) I use Access 2010 and SQL Server 2008 Express. Well, I got stumped right off the bat. In the "09 – Select Queries (Criteria)" tutorial, a value from a text field on a form is passed to a query. It works fine in stand-alone Access, of course, but not in the ADP. Here is the query as seen in the ADP (the last line is my attempt to re-create what I see in the stand-alone version, as the query builder does not prompt for the Forms):

SELECT StudentID, LastName, FirstName
FROM dbo.tblStudents
WHERE (LastName = Forms!FrmLastName!TxtLastName)

Any insights into how to achieve this will be greatly appreciated!

Change the Date Format

Posted: 20 Mar 2013 07:36 AM PDT

Good evening. My SQL Server database has a table which contains the field [b]Date_Start (Datetime, Null)[/b]. The format of the data in the table is yyyy/mm/dd. I have tried to export this table to Excel through SSIS. Although I have used the function [b]Convert(Date, Date_Start, 103)[/b], the date format is still yyyy/mm/dd instead of dd/mm/yyyy in my Excel file. What can I do to obtain the correct date format dd/mm/yyyy in my Excel file after exporting the data from my table through SSIS? Thanks in advance.
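One point worth noting: a DATETIME (or DATE) value carries no display format, so CONVERT to the date type changes nothing about how it renders. To force dd/mm/yyyy in the exported file, convert to a string in the source query. A sketch (table name hypothetical):

```sql
-- Style 103 is British/French dd/mm/yyyy; converting to varchar fixes the text
-- that SSIS writes, at the cost of Excel seeing a string rather than a date.
SELECT CONVERT(varchar(10), Date_Start, 103) AS Date_Start
FROM   dbo.MyTable;
```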

Rebuilt Index

Posted: 20 Mar 2013 05:40 AM PDT

Dear experts, our application is designed to support multiple SQL Server versions, from 2000 to current. One of the things the application does is import data to the db on a scheduled basis; depending on the client, some imports may take a couple of days to complete (yes, no mistake). During the course of an import, indexes may become very fragmented and import performance degrades. To overcome this, we check index status periodically to determine whether indexes need to be rebuilt, and if so we rebuild the fragmented indexes automatically. We first check sys.dm_db_index_physical_stats to determine which indexes need it, then use the following logic to defragment them:

[code="sql"]If SQL version = Enterprise
    ALTER INDEX [indexX] ON [tableY] REBUILD WITH (FILLFACTOR = 80, ONLINE = ON);
Else
    DBCC INDEXDEFRAG (0, 'tableY', 'indexX') WITH NO_INFOMSGS;
End[/code]

My question to you is: do you think what we have is sufficient, or is there a way to avoid using DBCC? My understanding is that DBCC is an old method that may do things less efficiently. Any suggestion to improve would be much appreciated.
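For what it's worth, on SQL Server 2005 and later the documented replacement for the deprecated DBCC INDEXDEFRAG is ALTER INDEX ... REORGANIZE, which is always an online operation on every edition; DBCC INDEXDEFRAG would then only be needed for the SQL Server 2000 code path. A sketch using the same placeholder names as above:

```sql
-- Edition-independent, always online; typical guidance is REORGANIZE for
-- moderate fragmentation and REBUILD for heavy fragmentation.
ALTER INDEX [indexX] ON [tableY] REORGANIZE;
```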

Restore msdb

Posted: 20 Mar 2013 07:14 AM PDT

In SQL 2008 R2, can the msdb database be restored from a backup? A record in the table sysdtspackages90 was removed, and instead of inserting the information back in, I thought it would be easier to do a restore.

How to interprete the unused space ?

Posted: 20 Mar 2013 01:35 AM PDT

In the properties of a user database we can see the database size and the available space, and we can shrink the database so that the available space is zero. However, if we execute sp_spaceused against this database, the unused space it reports is not zero. How should we interpret the unused space reported by the stored procedure? And is there any way to remove it? Many thanks in advance for any input.

Server drop connection error

Posted: 16 Jul 2012 08:37 PM PDT

Error: 17886, Severity: 20, State: 1. The server will drop the connection, because the client driver has sent multiple requests while the session is in single-user mode. This error occurs when a client sends a request to reset the connection while there are batches still running in the session, or when the client sends a request while the session is resetting a connection. Please contact the client driver vendor. [Policy: MSSQL Logfile Template]

Any suggestions?

reduce resource waits on sql server without upgrading sql server

Posted: 20 Mar 2013 04:06 AM PDT

Hi experts, how can I reduce resource waits on SQL Server without upgrading the CPU? Thanks, Nihar

Restore multiple databases

Posted: 26 Oct 2012 01:11 PM PDT

Hi All, I have to take backups of 100+ databases from one server and restore them to another server. I did the backups using a job and it went pretty smoothly. Now, my question is: can I do the restore of all these DBs using a job as well? I have done restores using jobs before, but not at this volume. Does anyone have a good script for this type of restore? Any advice or suggestions would be much appreciated. Thanks, SueTons.
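One common pattern is to generate the RESTORE statements from metadata and then run the output as a job step. A rough sketch, with loud assumptions: the backup files are named after the database, the logical file names equal the database name (with a _log suffix), and the paths below are placeholders, so the generated script should be reviewed before execution.

```sql
-- Run on the source server: emits one RESTORE DATABASE statement per user
-- database. Verify logical names with RESTORE FILELISTONLY before running them.
SELECT 'RESTORE DATABASE [' + d.name + '] '
     + 'FROM DISK = ''\\share\backups\' + d.name + '.bak'' '
     + 'WITH MOVE ''' + d.name + ''' TO ''D:\Data\' + d.name + '.mdf'', '
     + 'MOVE ''' + d.name + '_log'' TO ''E:\Log\' + d.name + '.ldf'', RECOVERY;'
FROM sys.databases d
WHERE d.database_id > 4;   -- skip the system databases
```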

SQL architecture

Posted: 17 Mar 2013 05:27 PM PDT

Can anyone explain the SQL Server architecture?

Wednesday, March 20, 2013

[how to] sql server sum and count


sql server sum and count

Posted: 20 Mar 2013 08:52 PM PDT

Sample data :

LINE_NO E_FIELD F_FIELD G_FIELD HSA_STATUS FAMILY CACHE_FAMILY Count Percentage
23053B  00000   00000   00000   S          SUMMIT WER          43    ??
23053B  00000   00000   00000   T          SUMMIT WER          144   ??
23053B  00000   00000   00684   T          SUMMIT WER          2     ??
23053B  00353   00418   00684   T          SUMMIT WER          1     ??
23053B  00353   00418   00763   T          SUMMIT WER          1     ??
23053B  00353   00418   01512   T          SUMMIT WER          1     ??
23053B  00353   00418   06797   T          SUMMIT WER          1     ??
23053B  00353   00418   30228   T          SUMMIT WER          1     ??
23053B  00353   00418   31935   T          SUMMIT WER          2     ??
23053B  05601   01402   00758   T          SUMMIT WER          1     ??
23053B  05601   01402   09091   T          SUMMIT WER          1     ??
23053B  05601   01402   65053   T          SUMMIT WER          1     ??

This is my query:

SELECT LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY,
       Count = SUM(TOTAL)
FROM (
    SELECT LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY,
           Count(LINE_NO) as Total
    FROM TX_HSA_SUMM
    WHERE MT_TIMESTAMP2 BETWEEN ('2013-03-07 10:10:00') AND ('2013-03-08 10:20:00')
    GROUP BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY
) as a
GROUP BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY, total
ORDER BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY, total

This is my sample data; I have already computed the Count column, as you can see in the header. My problem is that I need to sum the counts, and for each row get its percentage of that total. For example, the total for the records above is 199, and the first record's count is 43, so the calculation should be 43 / 199 * 100. How can I return the percentage? Please help me; I need this urgently.

Any suggestion on how to get the count first, then sum the counts, and finally divide each count by the sum and multiply by 100 to get the percentage? Thanks
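The grand total can be computed alongside the per-group counts with a windowed aggregate, which avoids the outer query entirely. A sketch against the poster's table:

```sql
-- SUM(COUNT(...)) OVER () adds up all group counts (199 in the sample data);
-- multiplying by 100.0 forces decimal arithmetic so 43/199 is not truncated.
SELECT LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY,
       [Count]    = COUNT(LINE_NO),
       Percentage = COUNT(LINE_NO) * 100.0 / SUM(COUNT(LINE_NO)) OVER ()
FROM   TX_HSA_SUMM
WHERE  MT_TIMESTAMP2 BETWEEN '2013-03-07 10:10:00' AND '2013-03-08 10:20:00'
GROUP BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY
ORDER BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY;
```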

sql server-percentage calculation

Posted: 20 Mar 2013 06:39 PM PDT

Sample data :

LINE_NO E_FIELD F_FIELD G_FIELD HSA_STATUS FAMILY CACHE_FAMILY Count
23053B  00000   00000   00000   S          SUMMIT WER          43
23053B  00000   00000   00000   T          SUMMIT WER          144
23053B  00000   00000   00684   T          SUMMIT WER          2
23053B  00353   00418   00684   T          SUMMIT WER          1
23053B  00353   00418   00763   T          SUMMIT WER          1
23053B  00353   00418   01512   T          SUMMIT WER          1
23053B  00353   00418   06797   T          SUMMIT WER          1
23053B  00353   00418   30228   T          SUMMIT WER          1
23053B  00353   00418   31935   T          SUMMIT WER          2
23053B  05601   01402   00758   T          SUMMIT WER          1
23053B  05601   01402   09091   T          SUMMIT WER          1
23053B  05601   01402   65053   T          SUMMIT WER          1

This is my query:

SELECT LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY,
       Count = SUM(TOTAL)
FROM (
    SELECT LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY,
           Count(LINE_NO) as Total
    FROM TX_HSA_SUMM
    WHERE MT_TIMESTAMP2 BETWEEN ('2013-03-07 10:10:00') AND ('2013-03-08 10:20:00')
    GROUP BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY
) as a
GROUP BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY, total
ORDER BY LINE_NO, E_FIELD, F_FIELD, G_FIELD, HSA_STATUS, FAMILY, CACHE_FAMILY, total

This is my sample data; I have already computed the Count column, as you can see in the header. My problem is that I need to sum the counts, and for each row get its percentage of that total. For example, the total for the records above is 199, and the first record's count is 43, so the calculation should be 43 / 199 * 100. How can I return the percentage? Please help me; I need this urgently.

Communication link failure for some queries to linked server

Posted: 20 Mar 2013 04:28 PM PDT

I am seeing the following error in SSMS (server has Windows Server 2008 and Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 installed) when running some queries against the linked server, in particular long running queries.

Simple selects from tables across the linked server work fine. This is a new issue that was noticed when SPs that have worked for years started failing.

I have run a Wireshark capture on the server, capturing for packets to port 1433 on the linked server host. At the tail of the capture, I see many (10) TCP Keep-Alives being issued (after a message regarding bad checksum) and then an RST packet. The RST packet is correlated with the error below being returned to the client.

There are other database servers on our network, where the linked server is configured identically, that don't exhibit this issue.

I have found some articles such as this and this. We are using the implicated Broadcom NICs. The Chimney Offload State setting is enabled on the server.

We will try disabling. Other thoughts on troubleshooting would be much appreciated.

OLE DB provider "SQLNCLI10" for linked server "myServer" returned message "Protocol error in TDS stream".
OLE DB provider "SQLNCLI10" for linked server "myServer" returned message "Communication link failure".
Msg 65535, Level 16, State 1, Line 0
Session Provider: Physical connection is not usable [xFFFFFFFF].
OLE DB provider "SQLNCLI10" for linked server "myServer" returned message "Communication link failure".
Msg 65535, Level 16, State 1, Line 0
Session Provider: Physical connection is not usable [xFFFFFFFF].
OLE DB provider "SQLNCLI10" for linked server "myServer" returned message "Communication link failure".
Msg 64, Level 16, State 1, Line 0
TCP Provider: The specified network name is no longer available.

When is data moved during an Oracle partition split?

Posted: 20 Mar 2013 06:31 PM PDT

I need to split an Oracle partition, and I'm confused about whether or not Oracle is going to physically relocate the data when I execute the split. Here's what I intend to do:

alter table SCHEMA.DATATABLE
    split partition MAXVAL_PART
    at (TO_DATE(' 2013-04-01 00:00:00',
                'SYYYY-MM-DD HH24:MI:SS',
                'NLS_CALENDAR=GREGORIAN'))
    into (partition PARTFOREXISTINGROWS
              tablespace EXISTINGMAXVALTABLESPACE,
          partition MAXVAL_PART
              tablespace NEWMAXVALTABLESPACE);

The problem is that my current MAXVAL_PART has 320 GB of data in it, and I don't want to physically move this data on the disk. If I cut off the partition at 2013-04-01, then there will be no data in the new partition, but I'm getting conflicting information about whether this will still necessitate a move of all the data. Ideally, Oracle sees that my new MAXVAL partition will be empty, defines it in the new table space, and I'm all done.

Will this command move the data that's already on the disk, or will it leave it in place and just create a new partition?

This article says Oracle will detect the empty partition and do a fast split: http://docs.oracle.com/cd/B19306_01/server.102/b14231/partiti.htm

But I see in a number of places that a split means all data gets moved to new storage. Which is it?

SQL Server 2008R2 DBCC CheckDB never finishing

Posted: 20 Mar 2013 02:20 PM PDT

I'm running SQL Server 2008 R2 on Windows Server 2008 R2, and up until recently we were having no problems. After recent Windows updates (no SQL Server updates), we cannot run step 1 of our maintenance plan, DBCC CHECKDB. It does not error out; it just runs indefinitely, the longest time so far being 36 hours. The DB is only about 500 MB, so I don't think size is an issue.

I was just wondering if anyone has seen this sort of issue in the past.

Why does that query cause lock wait timeouts?

Posted: 20 Mar 2013 06:24 PM PDT

From time to time, I find a lot of these errors in my PHP error log:

MYSQL.1213: Deadlock found when trying to get lock; try restarting transaction

The problem persists for about 2 or 3 minutes. Thanks to stackoverflow, the reason was quite easy to find:

------------------------
LATEST DETECTED DEADLOCK
------------------------
130320 15:53:37
*** (1) TRANSACTION:
TRANSACTION 0 83395751, ACTIVE 35 sec, process no 26405, OS thread id 140507872417536 starting index read
mysql tables in use 3, locked 3
LOCK WAIT 3 lock struct(s), heap size 1216, 2 row lock(s)
MySQL thread id 1163191, query id 199629038 localhost sosci Updating
UPDATE `database`.`table` SET `invalidate`='2013-03-21 03:53:02' WHERE ((token='C7G8X3H9ABCDEEFGT') AND (invalidate IS NULL)) AND (project=26118) LIMIT 1
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 65548 n bits 192 index `PRIMARY` of table `database`.`table` trx id 0 83395751 lock_mode X locks rec but not gap waiting
Record lock, heap no 4 PHYSICAL RECORD: n_fields 12; compact format; info bits 0
0: len 4; hex 000c2591; asc   % ;; 1: len 6; hex 000004e36ace; asc     j ;; 2: len 7; hex 8000000a830110; asc        ;; 3: len 4; hex 80000001; asc     ;; 4: len 4; hex 80006606; asc   f ;; 5: len 1; hex 07; asc  ;; 6: len 16; hex 32455637363853485447444734584252; asc 2EV768SHTGDG4XBR;; 7: SQL NULL; 8: len 30; hex 3935363436362c656e672c616e6e612e63616d706f7265736940676d6169; asc 956466,eng,anna.camporesi@gmai;...(truncated); 9: SQL NULL; 10: len 8; hex 8000124ef477640e; asc    N wd ;; 11: len 8; hex 8000124ef495e88e; asc    N    ;;

What I do not understand is: why? The locked table is very small, only 61 entries (about 30 new and 30 deleted per day; the auto-incremented primary key is near 800,000). No column is especially large.

I use the InnoDB engine for this table (one key refers to another table with approx. 20,000 entries), and the problem occurs from time to time. RAM should not be an issue. The webserver and MySQL server run on the same (virtual) machine, which usually does not suffer performance problems. Other transactions (there were thousands during the locked minutes) in large tables (1-2 million entries) did not cause any trouble.

Thanks for your hints!

PostgreSQL: Get single attribute of UDT in SELECT statement

Posted: 20 Mar 2013 03:16 PM PDT

I created a user-defined type in a PostgreSQL 9.2 database and am trying, in a SELECT statement, to get only one attribute of the UDT. However, I can't seem to get my code to work.

Creation of type and table:

CREATE TYPE ttp AS (f1 int, f2 int);
CREATE TABLE tbl (ctyp ttp);

The SELECT statement:

-- does not work
SELECT ctyp.f1 FROM tbl
SELECT ctyp(f1) FROM testtable

Both times I get an error. Is it possible to access a single attribute defined in a UDT in a SELECT?

Optimal drive configuration for SQL Server 2008R2

Posted: 20 Mar 2013 04:22 PM PDT

I have a fairly busy database server running SQL Server 2008 R2 that has the following setup:

  • SATA RAID 1 (2 Drives) - OS / Programs
  • SAS RAID 10 (4 Drives) - Sql Database Files (data and logs)
  • SAS RAID 1 (2 Drives) - TempDB (data and logs)

Assuming I can't add additional drives into this server, have I made the best use of the configuration I have available? Or should I consider another scheme here where logs are isolated from the data files, for example?

Streaming replication WAL files pg_xlog

Posted: 20 Mar 2013 05:40 PM PDT

We are using this awesome tool repmgr developed by 2nd Quadrant for streaming replication.

We set 1000 WALs to be archived. What I noticed, though, is that on the master we have about 600 WALs in the pg_xlog directory, whilst on the standby we have about 10 - yet replication is not lagging, and from the looks of the data it seems all the data is there, i.e. replication works.

I might be missing something, but I'd expect the same number of WALs to be on both master and standby?

Is there some magical process which is removing them?

Need help with syntax of Instr in Access 2000. Trying to find values 0.0 to 9.9 in a text string. [migrated]

Posted: 20 Mar 2013 01:08 PM PDT

I am an Access novice trying to use Instr to select numeric data from a text string.

InStr( [start], string_being_searched, string2, [compare] )
    [start] defaults to 1
    string_being_searched is [Source Key]
    string2 is a number, decimal and number between 0.0 and 9.9
    [compare] is optional and I do not think it applies here

My problem is with string2, which returns a 0 for every occurrence when I use

Expr1: InStr([Source Key],'#.#')

or, with double quotes:

Expr1: InStr([Source Key],"#.#")

If I specify a value such as 1.4 or 3.2, or even a text value such as QJX (not even a number), it returns the value specified, but I need it to select two numbers with a decimal point between them.

Please can you help as I am really stuck.

Thanks
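Part of the trouble is that InStr performs a literal substring search, so '#.#' is looked for as those three literal characters; # acts as a single-digit wildcard only in Access's Like operator (so a criterion along the lines of [Source Key] Like "*#.#*" is the usual way to test for digit-dot-digit). The distinction, illustrated in a Python sketch with a regex standing in for the pattern match:

```python
import re

# InStr-style literal search vs. a digit-pattern search.
samples = ["rate 3.2 applied", "code QJX", "#.# literal"]

# Literal search: '#.#' only matches the actual characters #.#
literal_hits = [s.find("#.#") + 1 for s in samples]  # 1-based like InStr; 0 = not found

# Pattern search: digit, dot, digit - what the poster actually wants.
pattern_hits = []
for s in samples:
    m = re.search(r"\d\.\d", s)
    pattern_hits.append(m.start() + 1 if m else 0)

print(literal_hits)
print(pattern_hits)
```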

When is it appropriate to use SQL Server Developer Edition?

Posted: 20 Mar 2013 04:27 PM PDT

Do you use SQL Server Developer Edition on server-class machines in DEV and STAGING environments?

I am working on a large project where (if it passes the proof of concept stage) we will have several large geographically distributed enterprise class database servers running SQL Server 2012 Enterprise Edition.

Our production environment will initially have 3 servers, our staging environment will have a minimum of 3 servers, and our development environment will have 1 server (hosting three instances).

I was under the impression that we would only need to acquire Enterprise licences for the actual production servers, and we could get by with developer edition in our developer and staging environments because they are not "production".

Other sources have told me that we need to have an enterprise licence on all of the above mentioned machines, and that the developer edition is only meant for a single developer on their workstation.

Since developer edition has all the sexy features of Enterprise, I can't really see the value of it on a workstation class machine, especially for developing and testing the type of high availability system we are building.

If we have to fork out Enterprise licences for a dev server, that will just about kill our proof of concept stage, thus killing the project. Forcing an enterprise licence on a staging environment will make management just want to skip staging altogether.

Modeling staff dimension in data warehouse

Posted: 20 Mar 2013 12:44 PM PDT

I need to design a data warehouse diagram. I've got a problem with the staff area. I have to store information about workers' details, like name, age, etc., and workers' job time and agreement details, like job_start, job_end, salary, etc.

At first, I tried to draw a separate dimension for each kind of data, but now I am wondering whether these dimensions should instead be connected to each other.

Avoiding repetition without creating a view

Posted: 20 Mar 2013 03:27 PM PDT

Suppose that I have a query Q1 and I need to run a query like the following:

Q1  union  select *  from (some query that uses Q1 outcome)  

I would like to do that:

  • Without creating any view
  • Without running twice the Q1 query.

How can I do that on PostgreSQL?
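What PostgreSQL offers for exactly this is a common table expression: a WITH clause names the Q1 result once, and (in PostgreSQL versions before 12, unconditionally) evaluates it only once, so there is no view and no second run of Q1. A sketch of the shape, demonstrated with Python's built-in sqlite3, which accepts the same WITH syntax (the table and the queries are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

# Q1 is written once as a CTE and referenced twice below.
query = """
WITH q1 AS (
    SELECT x FROM t WHERE x > 1          -- this is "Q1"
)
SELECT x FROM q1
UNION
SELECT x + 10 FROM q1                    -- "some query that uses Q1 outcome"
ORDER BY x
"""
print([row[0] for row in conn.execute(query)])
```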

How to restore deleted rows from a binary log?

Posted: 20 Mar 2013 01:06 PM PDT

Background

I am trying to recover a few thousand rows that were accidentally deleted from a MySQL database. Luckily replication is enabled, so in the binary logs I have a record of each INSERT, UPDATE and DELETE that was performed on the database since it was created.

Unfortunately, in a previous attempt to recover the lost rows, someone copied all of the INSERTs from the binlogs into the database. This created an even bigger mess by adding a bunch of duplicates and changing the ID value, which is set to auto_increment.

Remediation

I created a new recovery database based on a backup that was made a few weeks prior to the incident. I then found the exact point in the binary logs where the backup ended. I extracted all the INSERTs/UPDATEs/DELETEs from the binlog file and fed them into the new recovery database up until the point where the rows got deleted; this restored the database back to where it was before all the rows had been deleted.

Problem

Because of the first failed attempt to recover the lost rows, a lot of invalid data was added to the original database, along with new legitimate data. Because one of the columns is set to auto_increment, this creates a mismatch in the auto_increment column's values when trying to restore the database. I can't simply copy/paste all the missing legitimate binlog SQL statements, because the value of the auto_increment column has changed due to the invalid data that was inserted, which I don't want.

Example

An example of the binlog file:

INSERT INTO reservations (id, name, date) VALUES (null, "john doe", "2013-03-20");
INSERT INTO reservations (id, name, date) VALUES (null, "jane doe", "2013-03-21");
INSERT INTO reservations (id, name, date) VALUES (null, "garbage", "2013-03-21");
INSERT INTO reservations (id, name, date) VALUES (null, "garbage", "2013-03-22");
INSERT INTO reservations (id, name, date) VALUES (null, "mary jane", "2013-03-23");
UPDATE reservations SET name = "mary janett" WHERE id=5;

OLD database

id, name, date
1, john doe, 2013-03-20
2, jane doe, 2013-03-21
3, garbage, 2013-03-21
4, garbage, 2013-03-22
5, mary janett, 2013-03-23

NEW database

id, name, date
1, john doe, 2013-03-20
2, jane doe, 2013-03-21
3, mary jane, 2013-03-23

Now if I were to copy the binlog SQL statements into the recovery database, ignoring the garbage INSERTS which I don't want, the UPDATE statement would fail because the value of `id` would be different. Same goes for DELETE's and INSERT's which rely on the value of `id`.

Question

Is there an "easy" way to recover all the new legitimate data that was introduced to the database after the incident? The only solution I can think of at the moment is manually going through each INSERT/UPDATE/DELETE and adjusting the value of `id`.

I am hoping there is a better solution, because with thousands of records, this approach would take a long time. Any help is greatly appreciated!
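One way to avoid adjusting every id by hand is to replay only the legitimate statements while keeping a map from the old ids to the ids the recovery database actually assigns, rewriting the WHERE id=... references on the fly. A rough sketch of the bookkeeping in Python (sqlite3 stands in for MySQL, the "parser" is a toy, and a real binlog replay has many more statement shapes to handle):

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE reservations (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT, date TEXT)"
)

# Good statements from the binlog, paired with the ids they produced in the OLD database.
old_log = [
    (1, 'INSERT INTO reservations (id, name, date) VALUES (null, "john doe", "2013-03-20")'),
    (2, 'INSERT INTO reservations (id, name, date) VALUES (null, "jane doe", "2013-03-21")'),
    (5, 'INSERT INTO reservations (id, name, date) VALUES (null, "mary jane", "2013-03-23")'),
    (5, 'UPDATE reservations SET name = "mary janett" WHERE id=5'),
]

id_map = {}  # old id -> id assigned by the recovery database
for old_id, stmt in old_log:
    stmt = stmt.replace('"', "'")  # sqlite wants single-quoted strings
    if stmt.upper().startswith("INSERT"):
        cur = conn.execute(stmt)
        id_map[old_id] = cur.lastrowid
    else:
        # Rewrite the old id reference to the remapped one before replaying.
        stmt = re.sub(r"id=(\d+)", lambda m: f"id={id_map[int(m.group(1))]}", stmt)
        conn.execute(stmt)

print(sorted(conn.execute("SELECT id, name FROM reservations")))
```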

Database and query optimization

Posted: 20 Mar 2013 07:32 PM PDT

I have a database containing three tables: tbl_database (the main, primary table), tbl_cmdatabase, and tbl_blacklist. The three tables share the same structure but differ in the number of records they hold. This is the SQL structure, which is basically the same for all three:

CREATE TABLE IF NOT EXISTS `tbl_ndatabase` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `id_telefonica` int(11) NOT NULL,
  `number` varchar(11) COLLATE utf8_spanish2_ci NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_spanish2_ci AUTO_INCREMENT=1;

Table tbl_database contains ~194,074 records and should continue to grow substantially; table tbl_cmdatabase contains ~45,742 records and can likewise grow, though perhaps not as dramatically as tbl_database; and table tbl_blacklist has ~92,038 records. I am developing an application where I need to verify that a number I am about to insert into tbl_database is not in tbl_blacklist or tbl_cmdatabase, so for every insert I have to search those tables for the number (these queries are slow at this volume). Do I need to optimize the tables or change the structure? When I run these queries, the server tends to fall over, as the shared hosting does not support large queries. Can anyone help me with this issue? Any suggestions?

Edit: Added a file for test data
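Since each check is an equality lookup on `number`, the first thing to try is an index on that column in both lookup tables, so each existence test is an index seek instead of a full scan; the checks and the insert can then be combined into a single statement. A sketch of the pattern with Python's sqlite3 (in MySQL the index would be ALTER TABLE ... ADD INDEX (number), and the same INSERT ... SELECT ... WHERE NOT EXISTS shape applies):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
for t in ("tbl_database", "tbl_cmdatabase", "tbl_blacklist"):
    conn.execute(f"CREATE TABLE {t} (id INTEGER PRIMARY KEY, number TEXT)")
    # Index the lookup column so the NOT EXISTS checks are seeks, not scans.
    conn.execute(f"CREATE INDEX idx_{t}_number ON {t} (number)")

conn.execute("INSERT INTO tbl_blacklist (number) VALUES ('5550001')")

def insert_if_allowed(number):
    """Insert into tbl_database only if the number is in neither lookup table."""
    cur = conn.execute(
        """
        INSERT INTO tbl_database (number)
        SELECT :n
        WHERE NOT EXISTS (SELECT 1 FROM tbl_blacklist  WHERE number = :n)
          AND NOT EXISTS (SELECT 1 FROM tbl_cmdatabase WHERE number = :n)
        """,
        {"n": number},
    )
    return cur.rowcount  # 1 if inserted, 0 if blocked

print(insert_if_allowed("5550001"))  # blacklisted, so blocked
print(insert_if_allowed("5550002"))  # allowed
```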

Retrieving cached plans fails due to cached objects of database Mirror

Posted: 20 Mar 2013 07:58 PM PDT

I'm trying to retrieve cached execution plans for a specific database object, but this fails because of cached plans which belong to a mirrored database in MIRROR state.

The query below, without any additional WHERE clauses, fails with the following SQL error:

Msg 954, Level 14, State 1, Line 1 The database "DatabaseName" cannot be opened. It is acting as a mirror database.

SELECT *
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp

I suspect that the sys.dm_exec_query_plan function first tries to parse all the plan handles from the cache, but fails on cached objects for a mirrored database.

Does anyone know if there are any ways to get around this, T-SQL wise?

Of course, I could execute DBCC FREEPROCCACHE to clear the cache; however, I'm hoping for other solutions. I'm a bit amazed this function does not discard objects from mirrored databases while trying to parse the plans.

Fastest way to delete matching rows in MySQL?

Posted: 20 Mar 2013 12:16 PM PDT

I have more than 10 million records in a table. I want to delete the records where an email column contains .ac..

What is the fastest method to perform the task?
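A single DELETE over 10 million rows holds its locks and transaction overhead for the whole run, so a common approach is to delete in batches until no rows are affected (in MySQL, typically DELETE ... WHERE email LIKE '%.ac.%' LIMIT 10000 in a loop). A sketch of the batching loop with Python's sqlite3, using an id subquery in place of MySQL's DELETE ... LIMIT (the table and column names are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [(f"u{i}@example.ac.uk" if i % 3 == 0 else f"u{i}@example.com",)
     for i in range(10_000)],
)

BATCH = 1000
deleted = 0
while True:
    # Delete one small batch at a time so each transaction stays short.
    cur = conn.execute(
        "DELETE FROM users WHERE id IN ("
        "  SELECT id FROM users WHERE email LIKE '%.ac.%' LIMIT ?)",
        (BATCH,),
    )
    conn.commit()
    if cur.rowcount == 0:
        break
    deleted += cur.rowcount

print(deleted)
```

Note that a LIKE pattern with a leading wildcard cannot use an ordinary index, so each batch still scans; the batching buys shorter transactions and lock times, not a faster scan.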

help with simple sql queries

Posted: 20 Mar 2013 12:03 PM PDT

Could I have some help, please, with the following queries?

Consider the following schema:

Suppliers(sid, sname, saddress)
Parts(pid, pname, color)
Catalog(sid -> Suppliers, pid -> Parts, cost) -- sid and pid are foreign keys in this table

The primary keys are emphasized, and ->Rel indicates a foreign key relationship with the primary key of Rel.

  1. find the pids of parts that are supplied by at least two different suppliers
  2. find the sids of supplies who supply every red part
  3. find the sids of suppliers who supply every red part or every green part

Can somebody help me with those queries, please?
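For question 1, the usual shape is to group the catalog by part and keep the groups that have at least two distinct suppliers. A sketch of that query (illustrated with Python's sqlite3; the sample data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE catalog (sid INTEGER, pid INTEGER, cost REAL)")
conn.executemany(
    "INSERT INTO catalog VALUES (?, ?, ?)",
    [(1, 10, 5.0), (2, 10, 6.0), (1, 20, 7.0), (1, 20, 8.0)],  # pid 20 has one supplier
)

# Parts supplied by at least two different suppliers.
query = """
SELECT pid
FROM catalog
GROUP BY pid
HAVING COUNT(DISTINCT sid) >= 2
"""
print([row[0] for row in conn.execute(query)])
```

Questions 2 and 3 are relational-division problems, typically written with NOT EXISTS over the red (or green) parts; the same grouping idea does not apply directly there.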

Create log file only for my executed queries

Posted: 20 Mar 2013 03:24 PM PDT

I have been using the following to set up the log file:

log_statement='all',
log_collector='on',
log_destination='on',
log_filename='filename'

After that, for testing, I just executed a query and then viewed the log file in Tools > Server Status. Unfortunately, it has a lot of anonymous queries, including the query which I ran.

How can I filter out the queries which were executed along with my query (anonymous queries)? Is there any possibility to log the affected table information like:

  • which table got affected?
  • at what time was the query executed?
  • what fields got affected?

And how can I get these details using queries?
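For reference, the PostgreSQL parameter is named logging_collector (not log_collector), and log_destination takes a destination such as 'stderr' rather than 'on'. A typical statement-logging setup in postgresql.conf looks like the sketch below (the filename pattern and prefix are just examples); the %m, %u, and %d escapes in log_line_prefix record the timestamp, user, and database for each logged statement, which covers the "at what time" part of the question:

```
# postgresql.conf - statement logging (a sketch; adjust paths and patterns)
logging_collector = on                       # run the background log writer
log_destination = 'stderr'                   # collected into files by the collector
log_directory = 'pg_log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_statement = 'all'                        # log every executed statement
log_line_prefix = '%m [%p] %u@%d '           # timestamp, pid, user@database
```

PostgreSQL logs the statement text itself, not a per-column change list, so "which fields were affected" has to be read from the logged statement (or captured separately with triggers or an audit extension).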

Retrieve Log Detail

Posted: 20 Mar 2013 03:10 PM PDT

I have been using the following to set up the log file:

log_statement='all',
log_collector='on',
log_destination='on',
log_filename='filename'

After that, for testing, I just executed a query and then viewed the log file in Tools > Server Status. Unfortunately, it has a lot of anonymous queries, including the query which I ran.

How can I filter out the queries which were executed along with my query (anonymous queries)? Is there any possibility to log the affected table information like:

  • which table got affected?
  • at what time was the query executed?
  • what fields got affected?

And how can I get these details using queries?

PL/SQL trigger error ORA-00001 and ORA-06512

Posted: 20 Mar 2013 04:27 PM PDT

create or replace TRIGGER "SUP"
AFTER INSERT ON "EMP_REPORT"
REFERENCING OLD AS "OLD" NEW AS "NEW"
FOR EACH ROW
DECLARE
    miclaim_supervisor_count number;
    employee_company_code VARCHAR2(10);
    employee_businessunit number;
    projMgrs NUMBER;
    supId NUMBER;

    cursor projMgrsCursor is
        select b.BU_MEMBER_ID
        from BU_MEMBER b, EMP_SUB_DIV s
        where s.EMP_NO = :NEW.EMP_NO
          and s.SUB_DIVISION_CODE = '01' and s.DIV_CODE = '2'
          and b.BU_ID IN (select BU_ID from BU_MEMBER where BU_MEMBER_ID = :NEW.EMP_NO);
BEGIN
    delete from MICL_SUPERVISORS where EMP_NO = :NEW.EMP_NO and IS_OVVERRIDDEN = 0;

    select count(*) into miclaim_supervisor_count
    from MICL_SUPERVISORS
    where EMP_NO = :NEW.EMP_NO and IS_OVVERRIDDEN = 1;

    select COMPANY_CODE into employee_company_code
    from EMPLOYEE_MASTER
    where EMP_NO = :NEW.EMP_NO;

    projMgrs := 0;
    if (employee_company_code = 'F') then
        OPEN projMgrsCursor;
        LOOP
            select micl_sup_id_seq.nextval into supId from dual;
            FETCH projMgrsCursor INTO projMgrs;
            EXIT WHEN projMgrsCursor%NOTFOUND;
            insert into SUP VALUES ((supId), (SELECT SYSDATE FROM DUAL), :NEW.ENTRYADDEDBY_EMP_NO, 3000, 0, projMgrs, NULL, :NEW.EMP_NO);
        END LOOP;
        CLOSE projMgrsCursor;
    else
        if (miclaim_supervisor_count IS NULL or miclaim_supervisor_count < 1) then
            insert into SUP VALUES ((:NEW.ID), (SELECT SYSDATE FROM DUAL), :NEW.ENTRYADDEDBY_EMP_NO, 3000, 0, :NEW.SUP_EMP_NO, NULL, :NEW.EMP_NO);
        end if;
    end if;
END;

I created this trigger a week ago, with no compilation errors. But when I insert a record into "EMP_REPORT", I get the following errors:

INSERT INTO "EMP_REPORT" (ID, ADDEDDATE, ENTRYADDEDBY_EMP_NO, SUP_EMP_NO, EMP_NO) VALUES ('71', TO_TIMESTAMP('19-MAR-13 09.55.57.983000000 AM', 'DD-MON-RR HH.MI.SS.FF AM'), '81', '841', '5295')
ORA-00001: unique constraint (SYS_C0023329) violated
ORA-06512: at line 1

One error saving changes to table "EMP_REPORT":
Row 51: ORA-00001: unique constraint (SYS_C0023329) violated
ORA-06512: at line 1

I can't figure out where I went wrong. How can I fix this?

Please note that I can't remove the constraint, and it is a primary key.

Segmentation fault (core dumped) with self-complied mysql-5.1.9-beta.tar.gz

Posted: 20 Mar 2013 06:35 PM PDT

I am compiling the mysql-5.1.9-beta.tar.gz source code under Ubuntu 12.04. My steps are as follows:

tar -xzf mysql-5.1.9-beta.tar.gz
cd /home/gaoyu/mysql-5.1.9-beta
./configure --prefix=/home/gaoyu/mysql-5.1.9 --with-debug
sudo make
sudo make install
sudo cp /home/gaoyu/mysql-5.1.9-beta/support-files/my-medium.cnf /etc/my.cnf
cd /home/gaoyu/mysql-5.1.9
bin/mysql_install_db --user=mysql
sudo cp /home/gaoyu/mysql-5.1.9-beta/support-files/mysql.server /etc/init.d/mysql
sudo bin/mysqld_safe --user=mysql &

The bug is:

gaoyu@FlashGroup:~/mysql-5.1.9$ sudo bin/mysqld_safe --user=mysql &
[1] 26196
gaoyu@FlashGroup:~/mysql-5.1.9$ nohup: ignoring input and redirecting stderr to stdout
Starting mysqld daemon with databases from /home/gaoyu/mysql-5.1.9/var
^C

(here I wait for a very long time, so I must press Ctrl + C to stop it and continue)

gaoyu@FlashGroup:~/mysql-5.1.9$ bin/mysql
Segmentation fault (core dumped)

After I input sudo bin/mysqld_safe --user=mysql &, I must wait for a very long time. Why is that? Why do I get a segfault? Has anyone encountered a similar problem?

MySQL replication between VPS and shared host

Posted: 20 Mar 2013 12:18 PM PDT

I have a VPS where I have ssh access and shared host with CPanel. Both have MySQL.

I need to create a master/slave setup among the two server. As per this tutorial: How To Set Up Database Replication In MySQL, I will need to modify the my.conf file.

Is it possible to achieve this with the following permissions on both the servers?

  • VPS

    I have root access with ssh login where I can modify /etc/my.conf file.

  • Shared host

    Cpanel with phpmyadmin.

Slave SQL thread got hanged

Posted: 20 Mar 2013 12:30 PM PDT

We have a master-slave setup with ROW-based replication. We are seeing huge delays on the slave, even though there is no activity running on either the master or the slave.

When we looked in, we observed that the SQL thread appears to be hung. It has been in the "Reading event from the relay log" state for the last 3 hours or more.

baleaf:(none)> show processlist;
+--------+-------------+------+------+---------+-------+----------------------------------+------+
| Id     | User        | Host | db   | Command | Time  | State                            | Info |
+--------+-------------+------+------+---------+-------+----------------------------------+------+
| 217159 | system user |      | NULL | Connect | 1039  | Waiting for master to send event | NULL |
| 217160 | system user |      | NULL | Connect | 10045 | Reading event from the relay log | NULL |
+--------+-------------+------+------+---------+-------+----------------------------------+------+
4 rows in set (0.00 sec)

baleaf:blackarrow_dw> SHOW SLAVE STATUS \G
*************************** 1. row ***************************
               Slave_IO_State: Queueing master event to the relay log
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: binlog.001403
          Read_Master_Log_Pos: 95601911
               Relay_Log_File: relay-bin.000002
                Relay_Log_Pos: 12757514
        Relay_Master_Log_File: binlog.001403
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 32820355
              Relay_Log_Space: 75539220
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 7720
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
1 row in set (0.00 sec)

Could someone please have a look into this ASAP?

Partition of tablespace

Posted: 20 Mar 2013 07:45 PM PDT

My PostgreSQL server is running on a Windows server, and three tablespaces have been created. The PostgreSQL server is installed on the C: drive, and the tablespaces are on the D: and E: drives. So how do I take a backup and restore it? Is it similar to a normal backup? And when restoring on another machine, do I need to have the same drive setup, or can I restore everything to a single drive? From the tablespace folder, is it possible to retrieve the information for only one table, which is huge?

Restore a Database to a New database (T-SQL)

Posted: 20 Mar 2013 08:45 PM PDT

I have a backup of a database, say SourceData, which needs to be restored to another database with the same schema but a different name (SourceData_1) using T-SQL. I don't know why I get the following error, despite specifying the REPLACE option. For example, I used the links below:

Restore Database script

http://msdn.microsoft.com/en-us/library/ms190447(v=sql.105).aspx

Based on the posts I found on the internet, first I got the logical names of the backup file using script below:

RESTORE FILELISTONLY
FROM DISK = 'C:\Backups\SourceData_20121227.BAK'

Then used the logical names in the following script:

RESTORE DATABASE SourceData_1
FROM DISK = 'C:\DEV\SourceData_20121227.BAK'
WITH
    MOVE 'SourceDate' TO 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\SourceData_1.mdf',
    MOVE 'SourceDate_log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\SourceData_1_log.ldf',
    REPLACE
GO

Below is the error I'm getting:

MODIFY FILE encountered operating system error 112 (failed to retrieve text for this error. Reason: 15105) while attempting to expand the physical file 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\SourceData_1_log.ldf'.
RESTORE DATABASE is terminating abnormally.

I checked the disk space and can confirm it is fine. I assume the target database should exist before running the restore. Is that right?

Any suggestion how to fix this issue?

Thanks.

Character vs Integer primary keys

Posted: 20 Mar 2013 02:48 PM PDT

I'm designing a database with multiple lookup tables containing possible attributes of the main entities. I'm thinking of using a 4- or 5-character key to identify these lookup values, rather than an auto-incrementing integer, so that when I store these attribute IDs on the main tables I'll see meaningful values rather than just random numbers.

What are the performance implications of using a character field as a primary key rather than an integer?

I'm using MySQL if that matters.

[Edit]
These lookup tables have new records added infrequently. They are manually maintained, and the character-based keys are manually created as well. Here's an example:

CUISINES

ID     Description
-----  -----------
CHNSE  Chinese
ITALN  Italian
MXICN  Mexican
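The pattern being described, sketched with Python's sqlite3 (the restaurants table is a made-up "main" table for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# Lookup table with a short, human-readable character key.
conn.execute("CREATE TABLE cuisines (id CHAR(5) PRIMARY KEY, description TEXT)")
conn.executemany(
    "INSERT INTO cuisines VALUES (?, ?)",
    [("CHNSE", "Chinese"), ("ITALN", "Italian"), ("MXICN", "Mexican")],
)

# The main table stores the readable key directly.
conn.execute("""CREATE TABLE restaurants (
    name TEXT, cuisine_id CHAR(5) REFERENCES cuisines(id))""")
conn.execute("INSERT INTO restaurants VALUES ('Casa Pepe', 'MXICN')")

# The raw row is already meaningful without a join...
print(conn.execute("SELECT * FROM restaurants").fetchone())
# ...and the join still works as usual when the description is needed.
print(conn.execute("""SELECT r.name, c.description
                      FROM restaurants r JOIN cuisines c ON c.id = r.cuisine_id""").fetchone())
```

The trade-off to weigh is that the wider character key is duplicated into every referencing row and every index on that column, whereas an integer surrogate stays small; for small, rarely-changing lookup tables like these, the readability often wins.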

[SQL Server] Unable to find Server name on my local machine.

[SQL Server] Unable to find Server name on my local machine.


Unable to find Server name on my local machine.

Posted: 19 Mar 2013 10:39 PM PDT

Hi, I have SQL Server 2005 backward compatibility installed on my local machine. I can see it in the Programs & Features section as installed; the version is 8.05.1054. It does not show up under All Programs, and I am unable to find the server name on my system. I have admin rights to my system, and still, when I try to access the server name as localhost, it says I need at least read rights. Could someone help me?

Regards,
Renato.

Possible to use this in an 'SP_ExecuteSQL'?

Posted: 20 Mar 2013 12:44 AM PDT

Hi guys, I'm looking to use an sp_executesql call that would allow me to retrieve the following and add some additional spice to the query when needed. Doing it this way allows me to add complexities to it in the future. I have about 9 parameters in my query, but I only list one, since all the rest will follow suit. I'm looking to use a comma-delimited string that allows me to select multiple values in a parameter. However, I'm not sure how to add this into my query below:

declare @system_status varchar(30)
select @system_status = '12'
declare @sql nvarchar(4000)
select @sql = 'SELECT [system_status] FROM VW_Document_Main where 1=1 '
if @System_Status = '-1'
Begin
    select @sql = @sql + 'and system_status <> 20'
End
else if @system_status is not null and @system_status not in ('-1','0')
Begin
    select @sql = @sql + 'and ','+REPLACE(@system_Status,'','')+',' LIKE '%,'+system_Status+',%''

I run into several issues. Firstly, the addition to @sql must be in a string concatenation or else it won't recognize the column name "system_Status"; however, when it is in the string, I seem to be having issues with the commas. When I run the query without the @sql variable, it works fine:

declare @system_status varchar(30)
select @system_status = '9,20'
SELECT [system_status] FROM VW_Document_Main
WHERE 1=1
AND ','+REPLACE(@system_Status,'','')+',' LIKE '%,'+system_Status+',%'

I'm sure it just has something to do with some misplaced commas or a bad script. Any guesses as to how to attack this?
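The comma-wrapping trick itself (surround both the list and the value with commas, then substring-match) is sound; the fight is only with escaping quotes inside the dynamic-SQL string. The matching logic on its own, sketched with Python's sqlite3 (|| is the concatenation operator there; the sample data is assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER, system_status TEXT)")
conn.executemany("INSERT INTO docs VALUES (?, ?)", [(1, "9"), (2, "20"), (3, "12")])

status_list = "9,20"   # the comma-delimited parameter value

# ','+list+',' LIKE '%,'+value+',%' - the same comma-wrapping trick as the T-SQL.
query = """
SELECT id FROM docs
WHERE ',' || :lst || ',' LIKE '%,' || system_status || ',%'
ORDER BY id
"""
print([r[0] for r in conn.execute(query, {"lst": status_list})])
```

Passing the list as a real parameter (here :lst, in T-SQL a parameter handed to sp_executesql) also sidesteps most of the quote-escaping trouble that comes from pasting it into the string.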

how to insert datetime column into table

Posted: 20 Mar 2013 01:00 AM PDT

All, I'm having a strange issue trying to do the following, which I thought was going to be straightforward:

select top 5 machine.name, machine.model, machine.scantime 'INSERT INTO [SMS_000].[dbo].[BMCMachines] ([ComputerName],[MachineModel],[stime]) VALUES (' + '''' + machine.name + ''',' + '''' + machine.model + ''',' + convert(datetime,machine.scantime,112) + ''')'
from machine

When I run it, the query comes back with the following error:

Msg 241, Level 16, State 1, Line 1
Conversion failed when converting date and/or time from character string.

I can confirm that machine.scantime is a "DateTime" column.
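The Msg 241 here is consistent with the concatenation running in the wrong direction: convert(datetime, machine.scantime, 112) yields a datetime, and varchar + datetime makes SQL Server try to convert the string pieces to datetime, not the other way around. The usual fix is to convert the date to text, e.g. CONVERT(varchar(23), machine.scantime, 121), inside the concatenation (a hedged diagnosis, not a tested fix). The same principle in a Python sketch:

```python
from datetime import datetime

scantime = datetime(2013, 3, 20, 10, 30, 0)  # stand-in for machine.scantime

# Wrong direction: "adding" a datetime to a string fails, just as the T-SQL
# fails when varchar + datetime forces the varchar side into a datetime.
try:
    stmt = "VALUES ('" + scantime + "')"
except TypeError as exc:
    print("concatenation failed:", exc)

# Right direction: render the datetime as text first
# (rough T-SQL equivalent: CONVERT(varchar(23), machine.scantime, 121)).
stmt = "VALUES ('" + scantime.strftime("%Y-%m-%d %H:%M:%S") + "')"
print(stmt)
```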

powershell and ssis

Posted: 19 Mar 2013 09:40 PM PDT

Hi all, I'm trying to build a data collection of various SQL Servers. Currently, I do most of the data gathering by just running a DMV query in SSIS and loading the data into my collection DB. I'd like to be able to use PowerShell to run some queries and add that data to the collection DB as well, but I'm struggling at the moment. I can get PowerShell to run in SSIS (I made a simple create-folder script and ran it in SSIS), but I can't work out how to gather data with PoSh and then push it out to a DB. Thanks for any help.

[MS SQL Server] Replication configuration



Replication configuration

Posted: 20 Mar 2013 05:02 AM PDT

We have a requirement for a transactional replication setup: one publisher and three subscribers. How do I configure one publisher with three subscribers, and how do I schedule the data transfer for all the subscribers? Please, can anyone help me? Thanks, jerry
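Once the publication exists, each subscriber is just another subscription plus its own distribution agent schedule. A sketch, run at the publisher on the publication database (all names hypothetical); the pair of calls would be repeated once per subscriber (SUB1, SUB2, SUB3):

```sql
EXEC sp_addsubscription
    @publication = N'MyPublication',
    @subscriber = N'SUB1',
    @destination_db = N'MyDatabase',
    @subscription_type = N'Push';

-- The distribution agent schedule controls when data moves to this subscriber.
EXEC sp_addpushsubscription_agent
    @publication = N'MyPublication',
    @subscriber = N'SUB1',
    @subscriber_db = N'MyDatabase',
    @frequency_type = 4,             -- daily
    @frequency_subday = 4,           -- units of minutes
    @frequency_subday_interval = 10; -- every 10 minutes
```

Each subscriber can have a different schedule, since each gets its own distribution agent job.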

Capturing deadlock info

Posted: 19 Mar 2013 11:56 PM PDT

I am looking for advice on capturing deadlock info. Can somebody please address the following points:

1. Does enabling trace flags T1204 and T1222 have any impact on server performance, and do I need to restart the SQL Server service to do this? The boss has said a restart really isn't an option.
2. If I run Profiler, I understand it will have to be running at the time a deadlock occurs to capture the info. What are the events I need to capture? Is there any other way to capture the info - for example, is a server-side trace a better option?
3. Any links to good advice on troubleshooting deadlocks would be appreciated.

Many thanks, Steve :-)
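On point 1, for what it's worth: trace flags can be enabled globally without a service restart via DBCC TRACEON; a sketch:

```sql
-- Enable deadlock reporting to the error log for all sessions (no restart needed).
-- 1222 writes XML-style deadlock detail; 1204 writes the older node-based format.
DBCC TRACEON (1204, 1222, -1);  -- -1 = global

-- Confirm which trace flags are currently active.
DBCC TRACESTATUS (-1);
```

Flags enabled this way do not survive a restart; adding -T1204 / -T1222 as startup parameters would make them permanent.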

SSIS package failing while executing as Agent job

Posted: 20 Mar 2013 01:29 AM PDT

Hi, I've had some weird issues while running an SSIS package as an Agent job. I can run the SSIS package manually, but it fails when run as an Agent job. I thought it was a permissions issue with the Agent service account, so I RDPed into the server using the service account login (domain\agentserviceaccount) and ran the package, and it went fine. Any idea? Much appreciated :) Thanks

SQL Agent - SQLSTATE 21000 error

Posted: 10 Jan 2010 08:14 PM PST

I have found a weird error in my SQL Agent log which is produced about every 30 seconds. The message reads as follows:

[298] SQLServer Error: 512, Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression. [SQLSTATE 21000]

The strange thing is I cannot find the process that is causing this error. I have even disabled all SQL Agent jobs, restarted the service, and eventually restarted the server, but it still produces this error message every 30 seconds. Has anyone come across this before?

CLR Question

Posted: 19 Mar 2013 09:37 AM PDT

We have a production CLR process that queries Active Directory for user information and loads it into a table. We had two jobs that collided and ended up with blocking and hung jobs accessing the table. In an attempt to stop and restart the process I killed the job; however, the process would not roll back, as I believe the CLR thread still had a lock on the table. The process was in a rollback state and never moving (this is a small transaction with no more than 100 or so rows).

I know this is not an ideal situation, but I was curious for opinions on how to get rid of the lock on the table:

My understanding is that you can take the kpid from the sysprocesses table, map it to a process on the server, and kill it there, but you run a real risk of crashing SQL Server completely.

I was thinking the most likely way to clear the process would be to restart the SQL service; however, this would have a large impact on the other databases on the instance.

My thought, and what I was looking for feedback on, was switching the clr enabled option from 1 to 0 and then back to 1. My thinking was that this would unload the assemblies and kill the process, with minimal impact compared to restarting the entire service, since only a few of our databases use CLR assemblies.

Thanks for the feedback, Erich
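The toggle described above would look something like this (a sketch only; whether it actually clears a stuck rollback is exactly the open question):

```sql
-- 'clr enabled' is not an advanced option, so no 'show advanced options' change is needed.
EXEC sp_configure 'clr enabled', 0;
RECONFIGURE;

EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
```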

Log shipping and CDC

Posted: 19 Mar 2013 10:41 PM PDT

I come seeking the wisdom of the collective. I have searched to no avail trying to understand the implications of using CDC on a database that uses log shipping. Put another way, I want to understand whether CDC will affect log shipping and vice versa.

After much head scratching, I am assuming that the truncation point in the log will move if CDC is deployed. So instead of the last committed transaction, it will be the last committed transaction that has been harvested?

So if I log ship every 10 minutes, does this mean I need to get the CDC data out of my source system before that 10-minute cutoff? Will it make my log shipping out of date?

So many questions. Pointers to good articles gratefully received. Many thanks :w00t:
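One way to see whether CDC (or anything else) is currently holding the log truncation point is to check the log reuse wait; a sketch, with a hypothetical database name:

```sql
-- A value of REPLICATION here covers CDC as well, since the CDC capture job
-- uses the replication log reader to harvest changes.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MySourceDb';
```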

Doubts on mirroring

Posted: 19 Mar 2013 06:09 PM PDT

Dear all, I need some clarification on mirroring; please help me out.

1) After configuring mirroring, what happens if the principal fails?
   If a witness is present, failover happens automatically and the mirror comes online.
   If there is no witness, how do I perform the failover?

2) If the mirror server fails, there is no immediate issue for the principal; but if some time later the principal also becomes unavailable, how do I perform the failover?

Can anyone please help with this?
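For reference, a forced failover without a witness is run on the mirror server; a sketch (database name hypothetical):

```sql
-- On the mirror, when the principal is unavailable and there is no witness.
-- FORCE_SERVICE_ALLOW_DATA_LOSS brings the mirror online as the new principal,
-- but can lose transactions that never reached the mirror.
ALTER DATABASE MyDb SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;
```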

how to know the delta data volume processed every day in mb

Posted: 07 Mar 2013 05:57 PM PST

Hi, I wanted to know if there is a way to figure out how many bytes of data were processed while performing the delta on a table. I can easily get the delta count for every delta job run, but I'm not aware of a way to figure out how many bytes were processed based on the delta row counts or otherwise. We use Informatica for our ETLs, if that helps to get the delta data volume info. Please help and suggest all possible ways to get this info.
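One rough approach on the SQL Server side (an estimate only, not an exact ETL byte count): multiply the delta row count you already capture by the table's average record size. A sketch, with hypothetical database and table names:

```sql
DECLARE @delta_rows bigint = 50000;  -- the delta count you already capture

-- avg_record_size_in_bytes requires SAMPLED or DETAILED mode (LIMITED returns NULL).
SELECT @delta_rows * avg_record_size_in_bytes / 1048576.0 AS approx_delta_mb
FROM sys.dm_db_index_physical_stats(
         DB_ID(N'MyDb'), OBJECT_ID(N'dbo.MyTable'), NULL, NULL, 'SAMPLED')
WHERE index_id IN (0, 1);  -- heap or clustered index = the base data
```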

Log Shipping Compression

Posted: 19 Mar 2013 09:14 AM PDT

Hello, does anybody know if any version of SQL Server has the ability to compress log files before shipping? The plan was to log ship from the local data centre to a SQL instance in EC2 (Amazon cloud), but due to the high latency we get from here (NZ) to Amazon (Singapore), a large file takes an unrealistic time to copy; compression would therefore be a help. We are currently using SQL 2008 Standard. I think I have read that the SQL 2008 Enterprise editions have this feature, but I am unsure whether SQL 2008 R2 Standard or SQL 2012 Standard also have it, as I think normal backup compression was added to those editions. Thanks
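Where backup compression is available (Enterprise in 2008; Standard and up from 2008 R2, if memory serves), the log backups taken by log shipping honour the instance default, and a manual log backup can request it explicitly; a sketch, with a hypothetical database and path:

```sql
-- Make compression the instance-wide default so the log shipping backup job uses it.
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;

-- Or compress a single log backup explicitly.
BACKUP LOG MyDb TO DISK = N'\\share\MyDb.trn' WITH COMPRESSION;
```

Log shipping also exposes a per-configuration setting (the @backup_compression parameter of sp_add_log_shipping_primary_database), which overrides the instance default.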

Auditing a Table for Select Statement By Active Directory User

Posted: 19 Mar 2013 08:10 AM PDT

Hey guys, I'm attempting to set up auditing on a table in my database. Under the database audit specification properties GUI, I have:

Audit Action Type: SELECT
Object Class: OBJECT
Object Schema: dbo
Object Name: ImportantTable
Principal Name: OurNetworkDomain\Me

With this setup, I perform a SELECT * FROM ImportantTable, but the audit log does no logging. If I change the principal name to public or dbo, it correctly logs my activity, and the "Session Server Principal Name" and "Server Principal Name" columns even show that I am coming in as OurNetworkDomain\Me. I don't want to use dbo or public, as that would collect far too much information. Any ideas why this doesn't work when I specify the Active Directory user individually? Thanks! Andrew

[Articles] Stop, Smell, Smile



Stop, Smell, Smile

Posted: 19 Mar 2013 11:00 PM PDT

On this quiet Friday, Steve Jones skips a poll and talks about life. This editorial was originally published on Jun 6, 2008. It is being republished as Steve is on vacation.
