Wednesday, October 30, 2013

Advanced SQL Server 2008 Extended Events with Examples [SQL Server]





Read more

sqlteam.com



Testing with Profiler Custom Events and Database Snapshots [SQL Server]





Read more

sqlteam.com



Efficiently Reuse Gaps in an Identity Column [SQL Server]





Read more

sqlteam.com



Tuesday, October 29, 2013

Using REPLACE in an UPDATE statement [SQL Server]





Read more

sqlteam.com



Handling SQL Server Errors [SQL Server]





Read more

sqlteam.com



Using Dynamic SQL in Stored Procedures [SQL Server]





Read more

sqlteam.com



Thursday, October 24, 2013

Commented Issue: AdventureWorksDW2008R2 DimDate table has missing dates [13383] [SQL Server issues]






pavanbarnala wrote Feb 23, 2012 at 10:47 AM

Hello,
Has this been taken care of? If so, please share the latest data for DimDate. I'm working on a POC which requires dates between 2008 and 2010 as well. If it is not already done, can someone let me know if there is any way I can generate the data for those missing dates in that table?


petermyers wrote Feb 26, 2012 at 5:21 AM

The attached script will fix the problem. Use the stored procedure to extend to future dates if needed.



Read more
sqlserversamples.codeplex.com


Created Issue: SQL Server 2008 Developer edition sample database install error [18174] [SQL Server issues]




I'm trying to install the AdventureWorks sample database and I get an error.
I successfully downloaded the .mdf file; however, when I attach, add, and click OK from SSMS I get the error:

Msg 5172, Level 16, State 15, Line 1
The header for file 'C:\Program Files\Microsoft SQL Server\AdventureWorks2012_Data.mdf' is not a valid database file header. The FILE SIZE property is incorrect.







Read more
sqlserversamples.codeplex.com


Created Issue: AdventureWorksDW2012 Download file corrupt [19063] [SQL Server issues]





There seems to be an issue with the AdventureWorksDW2012 data file download. When I try to run this SQL statement:

CREATE DATABASE AdventureWorksDW2012 ON (FILENAME = 'E:\MSSQL11.SQL02\MSSQL\DATA\AdventureWorksDW2012_Data.mdf') FOR ATTACH_REBUILD_LOG;

this is the error I get:

Msg 5172, Level 16, State 15, Line 1
The header for file 'E:\MSSQL11.SQL02\MSSQL\DATA\AdventureWorksDW2012_Data.mdf' is not a valid database file header. The FILE SIZE property is incorrect.

Thanks, Dan


Read more
sqlserversamples.codeplex.com


Wednesday, October 16, 2013

[SQL Server 2008 issues] how do you ship logs through dedicated NIC?



how do you ship logs through dedicated NIC?

Posted: 15 Oct 2013 06:44 PM PDT

Hi All

I have read through the post on the below link: http://blogs.msdn.com/b/sqlserverfaq/archive/2010/03/31/step-by-step-guide-to-setup-a-dedicated-sql-database-mirroring-dbm-on-dedicated-nic-card.aspx

I am now wondering if I would do the same for log shipping to a DR server?

Active server with 2 NICs:
nic1 day-to-day functions 172.20.20.18
nic2 dedicated to log shipping 172.21.20.19

DR server, same setup:
nic1 day-to-day functions 172.20.20.21
nic2 dedicated to log shipping 172.21.20.30

Would this be done with T-SQL, or just the same as in the above link?

Recursion Help - CTE

Posted: 15 Oct 2013 06:17 PM PDT

Hi, I need help on how to calculate a moving index, based on values in another column. Here's my data:

[code="sql"]
CREATE TABLE #temp_RecursionHelp2
(
    MonthNumber int NULL,
    FInalMOM float NULL,
    IndexedPerc float NULL
)
GO
insert into #temp_RecursionHelp2 values
    (1, 1.30613547, 100),
    (2, 1.30613547, NULL),
    (3, 1.30613547, NULL),
    (4, -0.2058964, NULL),
    (5, -0.2058964, NULL),
    (6, -0.2058964, NULL),
    (7, -0.70990703, NULL),
    (8, -1.21391765, NULL),
    (9, 0.80212484, NULL),
    (10, -0.2058964, NULL),
    (11, -1.2139176, NULL),
    (12, -0.70990703, NULL);
[/code]

What I want to do is recursively update IndexedPerc using the IndexedPerc value from the previous month. So, for example, IndexedPerc for MonthNumber 2 should be:

MonthNumber 1's IndexedPerc + (MonthNumber 2's FInalMOM value + 100)/100

or:

100 + (1.30613547 + 100)/100

And then Month 3's IndexedPerc value is calculated from this:

MonthNumber 2's IndexedPerc + (MonthNumber 3's FInalMOM value + 100)/100

etc. I can do it with a loop but have a feeling a recursive CTE may help; I just can't get the syntax right. Any help greatly appreciated. Thanks - Jason
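
A minimal sketch of the kind of recursive CTE the poster is after, assuming MonthNumber values are contiguous and start at 1 (both true of the sample data):

[code="sql"]
;WITH idx AS
(
    -- anchor: month 1 keeps its seed value of 100
    SELECT MonthNumber, FInalMOM, CAST(100 AS float) AS IndexedPerc
    FROM #temp_RecursionHelp2
    WHERE MonthNumber = 1

    UNION ALL

    -- each month adds (FInalMOM + 100) / 100 to the previous month's index
    SELECT t.MonthNumber, t.FInalMOM, i.IndexedPerc + (t.FInalMOM + 100) / 100
    FROM #temp_RecursionHelp2 AS t
    INNER JOIN idx AS i ON t.MonthNumber = i.MonthNumber + 1
)
UPDATE t
SET t.IndexedPerc = i.IndexedPerc
FROM #temp_RecursionHelp2 AS t
INNER JOIN idx AS i ON t.MonthNumber = i.MonthNumber;
[/code]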

Function vs Direct query

Posted: 15 Oct 2013 05:18 PM PDT

Hello, I have a problem comparing a function vs. a direct query. There is a difference of about 15-30 ms, and I have to join this function with another table in SELECTs in several procedures. I updated statistics and rebuilt the indexes on the tables involved, tried OPTION (RECOMPILE) on the select, and added a primary key on the columns used in the ON clause, but it is still the same.

[quote]
UPDATE STATISTICS TB_CODE_WORK_TIMEDTL
UPDATE STATISTICS TB_CODE_WORK_TIME
UPDATE STATISTICS TB_WORK_CALENDAR
UPDATE STATISTICS TB_RCV_ALCDATA
[/quote]

The first optimization went from [b]CPU time = 85 ms, elapsed time = 85 ms to CPU time = 16 ms, elapsed time = 26 ms[/b] after adding WORK_DATE to the WHERE clause, but I still don't understand why I get this time at all when I select one row; if I run the statements directly, every query shows 0 ms.

[code="sql"]
ALTER FUNCTION [dbo].[FN_TABLE_GET_WORK_DATE]()
RETURNS @tbl table
(
    WORK_DATE VARCHAR(10),
    WORK_TIME_ID VARCHAR(10),
    SHIFT_ID VARCHAR(5),
    PLAN_STOP_SEC INT,
    PRE_WORK_DATE VARCHAR(10),
    PRE_WORK_TIME_ID VARCHAR(10),
    PRE_SHIFT_ID VARCHAR(5),
    PRE_PLAN_STOP_SEC INT,
    primary key (WORK_DATE, WORK_TIME_ID)
)
as
BEGIN
    -- TIME TOTAL SEARCH
    DECLARE @GET_TIME DATETIME
    DECLARE @GET_DATE DATETIME
    SET @GET_TIME = CONVERT(VARCHAR(8), GETDATE(), 108)
    SET @GET_DATE = CONVERT(VARCHAR(11), GETDATE(), 120) + @GET_TIME
    --SET @GET_TIME = '05:50'
    --SET @GET_DATE = '2011-05-30 ' + @GET_TIME

    DECLARE @WORK_DATE VARCHAR(10)
    DECLARE @WORK_DATE_PRE VARCHAR(10)
    DECLARE @WORK_TIME_ID VARCHAR(10)
    DECLARE @SHIFT_ID VARCHAR(5)
    DECLARE @PLAN_STOP_SEC INT
    DECLARE @PRE_WORK_DATE VARCHAR(10)
    DECLARE @PRE_WORK_TIME_ID VARCHAR(10)
    DECLARE @PRE_SHIFT_ID VARCHAR(5)
    DECLARE @PRE_PLAN_STOP_SEC INT
    DECLARE @TIME_SEQ INT
    DECLARE @TIME_DAY INT

    SELECT TOP 1 @GET_TIME = CASE WHEN @GET_TIME < END_TIME THEN DATEADD(DAY, 1, @GET_TIME) ELSE @GET_TIME END
    FROM TB_CODE_WORK_TIMEDTL
    WHERE TIME_DAY > 0
    ORDER BY TIME_SEQ DESC

    -- WORK TIME SEARCH
    SELECT @WORK_DATE = CONVERT(VARCHAR(8), DATEADD(DAY, -START_TIME_DAY, @GET_DATE), 112),
           @WORK_TIME_ID = WORK_TIME_ID,
           @SHIFT_ID = SHIFT_ID,
           @TIME_SEQ = TIME_SEQ,
           @PLAN_STOP_SEC = CASE WHEN IS_WORK = 'N'
                                 THEN DATEDIFF(SECOND, CAST(START_TIME AS DATETIME) + START_TIME_DAY,
                                      CONVERT(DATETIME, CONVERT(VARCHAR(8), @GET_TIME, 114)) + TIME_DAY)
                                 ELSE 0 END
    FROM TB_CODE_WORK_TIMEDTL
    WHERE WORK_TIME_ID = (SELECT WORK_TIME_ID FROM TB_WORK_CALENDAR
                          WHERE WORK_DATE = CONVERT(VARCHAR(8), DATEADD(DAY, -START_TIME_DAY, @GET_DATE), 112))
      AND @GET_TIME >= CAST(START_TIME AS DATETIME) + START_TIME_DAY
      AND @GET_TIME < CAST(END_TIME AS DATETIME) + END_TIME_DAY

    SET @WORK_DATE_PRE = (SELECT TOP 1 PROD_DATE FROM TB_RCV_ALCDATA WHERE PROD_DATE < @WORK_DATE ORDER BY TR_ID desc)

    -- SELECT @WORK_DATE, @SHIFT_ID, @TIME_SEQ, @WORK_DATE + RIGHT('000' + CONVERT(VARCHAR, @TIME_SEQ), 3)

    -- PRE WORK TIME SEARCH
    SELECT TOP 1 @PRE_WORK_DATE = WORK_DATE, @PRE_SHIFT_ID = SHIFT_ID, @PRE_WORK_TIME_ID = A.WORK_TIME_ID
    FROM (
        SELECT WC.WORK_DATE + RIGHT('000' + CONVERT(VARCHAR, MAX(WTD.TIME_SEQ)), 3) "WORK_DT",
               WC.WORK_DATE, WTD.WORK_TIME_ID, WTD.SHIFT_ID
        FROM TB_WORK_CALENDAR WC
        LEFT JOIN TB_CODE_WORK_TIMEDTL WTD ON WC.WORK_TIME_ID = WTD.WORK_TIME_ID
        WHERE WORK_DATE BETWEEN @WORK_DATE_PRE AND @WORK_DATE
        GROUP BY WC.WORK_DATE, WTD.WORK_TIME_ID, WTD.SHIFT_ID
    ) A
    WHERE A.WORK_DT < @WORK_DATE + RIGHT('000' + CONVERT(VARCHAR, @TIME_SEQ), 3)
    ORDER BY A.WORK_DT DESC

    SELECT @PLAN_STOP_SEC = @PLAN_STOP_SEC + (ISNULL(SUM(TIME_MINUTE), 0) * 60)
    FROM TB_CODE_WORK_TIMEDTL
    WHERE WORK_TIME_ID = @WORK_TIME_ID AND SHIFT_ID = @SHIFT_ID AND TIME_SEQ < @TIME_SEQ AND IS_WORK = 'N'

    SELECT @PRE_PLAN_STOP_SEC = ISNULL(SUM(TIME_MINUTE), 0) * 60
    FROM TB_CODE_WORK_TIMEDTL
    WHERE WORK_TIME_ID = @PRE_WORK_TIME_ID AND SHIFT_ID = @PRE_SHIFT_ID AND IS_WORK = 'N'

    INSERT INTO @tbl
    SELECT @WORK_DATE, @WORK_TIME_ID, @SHIFT_ID, @PLAN_STOP_SEC,
           @PRE_WORK_DATE, @PRE_WORK_TIME_ID, @PRE_SHIFT_ID, @PRE_PLAN_STOP_SEC

    RETURN;
END
[/code]

[b]Here is the result of STATISTICS IO and TIME with the direct query:[/b]

[quote]
SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
Table 'TB_CODE_WORK_TIMEDTL'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
Table 'TB_WORK_CALENDAR'. Scan count 4, logical reads 8, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TB_CODE_WORK_TIMEDTL'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
Table 'TB_RCV_ALCDATA'. Scan count 1, logical reads 89, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 1 ms.
Table 'TB_CODE_WORK_TIMEDTL'. Scan count 2, logical reads 4, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TB_WORK_CALENDAR'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
Table 'TB_CODE_WORK_TIMEDTL'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
Table 'TB_CODE_WORK_TIMEDTL'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
(1 row(s) affected)
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
[/quote]

Here is the result with the function:

[quote]
select * from FN_TABLE_GET_WORK_DATE() OPTION (RECOMPILE)
SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 0 ms.
(1 row(s) affected)
Table '#025493D5'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times: CPU time = 16 ms, elapsed time = 23 ms.
[/quote]

Does anybody know where the problem could be? Thanks a lot for any response.

inserted / deleted tables

Posted: 14 Oct 2013 09:59 PM PDT

From what I understand, the inserted/deleted tables never inherit the real table's PRIMARY KEY; does anyone know why? It makes standard inserted/deleted queries quite slow...

How to pass datetime parameter to SP dynamically

Posted: 13 Oct 2013 10:16 PM PDT

Hi,

In the query below I want to dynamically pass a parameter whose datatype is datetime, and I get the following error: Conversion failed when converting date and/or time from character string.

[code="sql"]
Declare @P_AsonDate as datetime = Getdate(),
        @Str_ToDo As varchar(1000)
Set @Str_ToDo = ''
Set @Str_ToDo = 'SpTable_2mail' + ' ' + @P_AsonDate
Print @Str_ToDo
EXEC (@Str_ToDo)
[/code]

Please help me; this is urgent for me.
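
The error comes from concatenating a datetime onto a varchar. A hedged sketch of the usual fix, parameterized dynamic SQL via sp_executesql (the @AsonDate parameter name on SpTable_2mail is an assumption):

[code="sql"]
DECLARE @P_AsonDate datetime = GETDATE();

-- pass the datetime as a typed parameter instead of concatenating it into the string
EXEC sp_executesql
    N'EXEC SpTable_2mail @AsonDate',
    N'@AsonDate datetime',
    @AsonDate = @P_AsonDate;
[/code]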

Is TRY-CATCH in SQL still a best practice?

Posted: 15 Oct 2013 06:11 AM PDT

I overheard our DBA today (2013-10-17) telling a junior developer to NEVER use a TRY-CATCH block in stored procedures. He told her that it is a very inefficient way of creating a transaction and should not be used, and that it was only still available for "backward compatibility." Instead, he suggested she put a BEGIN TRANSACTION and COMMIT or ROLLBACK TRANSACTION around anything that would do an insert or update.

To my knowledge, TRY-CATCH was introduced in SQL 2005. I've been using it regularly as a best practice since about 2007, and this is the first time I've ever heard anyone say it should not be used. Any opinions here? Is a TRY-CATCH block still a valid practice today?

(Part of the reason I'm obsessing over this is because he is pretty full of himself and also made the statement, "Trust me, I've been doing this for 10 years," to support his argument. That kind of statement drives me nuts.)

Thanks!

edit: took out the stupid and invalid "END TRANSACTION" statement. (LJ)
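
For reference, TRY-CATCH and explicit transactions are complementary rather than competing; a minimal sketch of the pattern still widely recommended (dbo.SomeTable and its columns are placeholders):

[code="sql"]
BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE dbo.SomeTable
    SET SomeColumn = 1
    WHERE SomeKey = 42;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    -- re-raise the error for the caller (RAISERROR, since THROW is SQL 2012+)
    DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
    RAISERROR(@msg, 16, 1);
END CATCH
[/code]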

Add column which is part of a clustered index to non clustered index

Posted: 15 Oct 2013 01:58 AM PDT

Would it make sense to include a column that is part of a clustered index in the definition of a nonclustered index, given that the clustered index key is added by default to a nonclustered index?

For example, I have a table Orders which has the following clustered index:

[code="sql"]
CREATE CLUSTERED INDEX CIX_Orders_OrderNo ON dbo.Orders(OrderNo)
[/code]

Would it make sense to do something like this?

[code="sql"]
CREATE NONCLUSTERED INDEX IX_OrderDateOrderNo ON dbo.Orders(OrderDate, OrderNo)
[/code]

SSRS - Column headings don't display on subsequent pages

Posted: 01 Dec 2010 04:44 AM PST

What property value do I need to set so that the header row I created displays on subsequent pages?

Connection to SQL Server > 5 sec after packet arrives at NIC

Posted: 15 Oct 2013 09:59 AM PDT

Hi there,

I'm at a loss and I'm hoping someone in the community can help me out. We have a PHP application using FreeTDS connecting to a SQL Server box. There is an MSSQL connect timeout of 5 seconds. Intermittently we get an error where PHP complains it can't connect to SQL Server.

We checked for blocking, above-average user connections, and batch requests per second, and nothing jumps out... so I set up a network trace using Wireshark. I found that the web server and SQL box handshake, then the web server sends a TDS packet; the box acknowledges the packet at the TCP level but then doesn't return the TDS packet (which I'm assuming is the initiation of the SQL Server level connection) within 5 seconds.

Given that the box sends an ACK at the NIC level, going upward the next place to check is the OS? Is it possible the OS is not passing the packet to the SQL Server instance quickly enough? Are there OS-level metrics I could look at? Maybe the OS is fine, though, so if there is a way from SQL Server to tell that a TDS packet has arrived that would be great too. Can SQL Profiler do this?

Any thoughts or suggestions would be appreciated.

OS: Windows Server 2008 SP1
SQL Server 2008 R2 Standard

Thanks!
AmarettoSlim

tsql for XML format the values as elements

Posted: 15 Oct 2013 07:28 AM PDT

I am trying to get the values in a column as XML elements. Is it possible to do this using FOR XML in SQL Server?

[code="sql"]
declare @XMLTest table ([Name] [nvarchar](50) NOT NULL)

INSERT @XMLTest ([Name]) VALUES (N'One¬d¦Uº')
INSERT @XMLTest ([Name]) VALUES (N'Two')
INSERT @XMLTest ([Name]) VALUES (N'Three')
[/code]

I would like to get the following on separate rows from the select query. This would help me escape the invalid characters in the values, so they can then be serialized to XML properly:

<One_x00AC_d_x00A6_U_x00BA_/>
<Two/>
<Three/>

Is it possible to get this return from the FOR XML query, so that the invalid characters are properly encoded when SQL Server generates the XML?

Adding notes or text to a SQL Login

Posted: 15 Oct 2013 07:58 AM PDT

Hi there,

Hopefully this isn't a stupid question, but here goes. Is there a simple way to add notes to a SQL login? From what I can see in the properties in SSMS there isn't. I sometimes come across legacy logins and can't remember who created them, what app they were created for, whether they are still required, etc. It would be nice to be able to add a little text blurb to logins for just such a purpose. Is there some simple way to do this I don't know about?

Thanks,
Tim.
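
SQL Server has no notes field on server-level logins, so a common workaround is a small DBA tracking table; a minimal sketch, with hypothetical table, database, and login names:

[code="sql"]
-- in a DBA/admin database (name is a placeholder)
CREATE TABLE dbo.LoginNotes
(
    LoginName sysname        NOT NULL PRIMARY KEY,
    Notes     nvarchar(4000) NULL,
    CreatedBy sysname        NOT NULL DEFAULT SUSER_SNAME(),
    CreatedOn datetime       NOT NULL DEFAULT GETDATE()
);

INSERT INTO dbo.LoginNotes (LoginName, Notes)
VALUES (N'legacy_app_login', N'Created 2009 for the invoicing app; owner unknown.');
[/code]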

Help on update ??

Posted: 15 Oct 2013 06:56 AM PDT

Hi

I'm trying to set up a job to run weekly that would add/update each PROVIDER_TO_TRUSTED_PROVIDER_COLLECTION.OID_LINK to 3 specific OIDs. The end goal (because I have a hard time explaining things) is that every provider should have 3 specific trusted providers.

[code="sql"]
SELECT PROVIDER_TO_TRUSTED_PROVIDER_COLLECTION.OID,
       PROVIDER_TO_TRUSTED_PROVIDER_COLLECTION.OID_LINK,
       Provider.LName
FROM dbo.Provider AS Provider
INNER JOIN dbo.PROVIDER_TO_TRUSTED_PROVIDER_COLLECTION AS PROVIDER_TO_TRUSTED_PROVIDER_COLLECTION
    ON Provider.OID = PROVIDER_TO_TRUSTED_PROVIDER_COLLECTION.OID
[/code]

Example:

Provider.OID ----- PROVIDER_TO_TRUSTED_PROVIDER_COLLECTION.OID_LINK
12 -------------- 001
12 -------------- 002
12 -------------- 003
13 -------------- 001
13 -------------- 002
13 -------------- 003

Thanks in advance
Joe

Query Help

Posted: 15 Oct 2013 06:05 AM PDT

Hello everyone

I need some help developing logic in my SQL script. In one file we have values as below:

[code="plain"]
Periods
X
X
X     X
X     X
X     X
[/code]

I need to display the actual position where each X is; see below:

[code="plain"]
Periods          Periods
X                3
X                3
X     X          3,8
X     X          5,9
X     X          3,10
[/code]

I tried the LEN function and can get the position when there is only one X, but there are some cases where we have 2 X's, so we need to display the first number, then a comma, and then the second number. Please help me do this.

Thanks
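
A hedged sketch of one way to get the first and second X positions with CHARINDEX, assuming the values live in a varchar column named Periods of a hypothetical table dbo.PeriodsFile:

[code="sql"]
SELECT Periods,
       -- first X position, then ",second position" only when a second X exists
       CAST(CHARINDEX('X', Periods) AS varchar(10))
       + CASE WHEN CHARINDEX('X', Periods, CHARINDEX('X', Periods) + 1) > 0
              THEN ',' + CAST(CHARINDEX('X', Periods, CHARINDEX('X', Periods) + 1) AS varchar(10))
              ELSE '' END AS Positions
FROM dbo.PeriodsFile;
[/code]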

select problem 3 table

Posted: 13 Oct 2013 09:46 PM PDT

Please, I have a problem with this query:

[code="sql"]
DECLARE @a1 table (p1 int, x nvarchar(20), date1 smalldatetime)
Insert Into @a1(p1,x,date1) Values (1,'yes','2013-10-01')
Insert Into @a1(p1,x,date1) Values (2,'yes','2013-10-02')
Insert Into @a1(p1,x,date1) Values (3,'yes','2013-10-03')

select p1 = case when p1 >= 1 and p1 <= 2 then 'my phone ' else 'thanks' end,
       count(x) num_p1
from @a1
where date1 >= '2013-10-01'
group by p1
[/code]

[code="plain"]
p1        num_p1
my phone  1
my phone  1
thanks    1
[/code]

[code="sql"]
DECLARE @a2 table (p2 int, z nvarchar(20), date2 smalldatetime)
Insert Into @a2(p2,z,date2) Values (1,'end','2013-10-01')
Insert Into @a2(p2,z,date2) Values (2,'end','2013-10-02')
Insert Into @a2(p2,z,date2) Values (3,'end','2013-10-03')

select p2 = case when p2 >= 1 and p2 <= 2 then 'my phone ' else 'thanks' end,
       count(z) num_p2
from @a2
where date2 >= '2013-10-01'
group by p2
[/code]

[code="plain"]
p2        num_p2
my phone  1
my phone  1
thanks    1
[/code]

[code="sql"]
DECLARE @a3 table (p3 int, x nvarchar(20), date3 smalldatetime)
Insert Into @a3(p3,x,date3) Values (1,'no','2013-10-01')
Insert Into @a3(p3,x,date3) Values (2,'no','2013-10-02')
Insert Into @a3(p3,x,date3) Values (3,'no','2013-10-03')
Insert Into @a3(p3,x,date3) Values (4,'no','2013-10-05')
Insert Into @a3(p3,x,date3) Values (5,'no','2013-10-06')

select p3 = case when p3 >= 1 and p3 <= 2 then 'my phone '
                 when p3 >= 3 and p3 <= 4 then 'my '
                 when p3 = 5 then 'your '
                 else 'thanks' end,
       count(x) num_p3
from @a3
where date3 >= '2013-10-01'
group by p3
[/code]

[code="plain"]
p3        num_p3
my phone  1
my phone  1
my        1
my        1
your      1
[/code]

How do I get this resulting table?

[code="plain"]
phone     num_p1  num_p2  num_p3
my phone  1       0       1
my phone  1       0       1
my        0       0       1
my        0       0       1
thanks    0       1       0
[/code]

upgrade from SQL Server 2008 to SQL Server 2008 R2 ?

Posted: 15 Oct 2013 05:48 AM PDT

Not sure about licensing: we have installed a SQL Server 2008 Standard Edition (64-bit) server with SP1. At this time we are not ready to go to SQL Server 2012, but this is supposed to be a development environment for a production server that is SQL Server 2008 R2 Enterprise Edition with SP2. Is it within licensing to "upgrade" (or would it just be applying an update?) to R2 SP2? If someone can clear up my confusion it will be appreciated.

Backup sessions got hung

Posted: 15 Oct 2013 05:41 AM PDT

Hi,

I'm having a strange issue. In production, the backup job got hung, and I noticed that the differential backup for one of the databases was the one hanging. I'm using Ola Hallengren's backup script (http://ola.hallengren.com/) to back up the databases with LiteSpeed. I have attached the text file with the sessions. Can you please advise what the reason could be, or how to troubleshoot this issue?

Database is in suspect mode

Posted: 15 Oct 2013 02:15 AM PDT

The database is in suspect mode due to full log space. We already have both last night's full backup and all transaction log backups. We requested extra space to be added to the server. What are the good steps to bring the database online after we get the extra space?
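
A hedged sketch of the usual sequence once the disk space is available (MyDB and the backup path are placeholders; if CHECKDB reports corruption, restoring from the existing backups is the safer route):

[code="sql"]
-- allow access to the suspect database for inspection
ALTER DATABASE MyDB SET EMERGENCY;

-- verify the database is physically consistent
DBCC CHECKDB (MyDB) WITH NO_INFOMSGS;

-- if CHECKDB is clean, bring it back online and free up the log
ALTER DATABASE MyDB SET ONLINE;
BACKUP LOG MyDB TO DISK = N'D:\Backup\MyDB_log.trn';
[/code]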

users - schemas - permissions

Posted: 15 Oct 2013 02:04 AM PDT

I would like to have a credential limited to read-only access to the tables in a database. I have created such a user with a default schema of 'mydbschema' and database role membership of 'db_datareader'. When connecting to the database with SSMS as that user, I only see one table, which has a schema of 'dbo'. The other tables (that the user needs to see) have a schema of 'mydbschema'. What do I need to do so that the user has read-only access to the tables with a schema of 'mydbschema'?
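
For what it's worth, membership in db_datareader already grants SELECT on tables in every schema, so the role membership itself is worth re-checking; alternatively, a schema-scoped grant, sketched here with a placeholder user name, gives exactly the access described:

[code="sql"]
-- read-only access to every table in the schema, current and future
GRANT SELECT ON SCHEMA::mydbschema TO myreadonlyuser;
[/code]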

After Restore 'dbo' of database does not recognize correct permissions for Login

Posted: 15 Oct 2013 03:39 AM PDT

Background: we are implementing SP3 for SQL Server 2008 and are required to provide a backout plan. To back out SP3 we uninstall SQL Server and re-install SQL Server 2008 base and SP1. After the database is restored, the vendor application's stored procedures no longer run correctly when executed as 'dbo'. Specifically, the sp runs DBCC LOGINFO and errors with "User 'dbo' does not have permission to run DBCC LOGINFO. [SQLSTATE 42000] (Error 50000)." The 'dbo' for the database is the 'sa' account on the instance. If I log in as 'sa' and run the command it has no issues; if the sp is executed in the very same query window it fails as above. Any help would be greatly appreciated.

Why does this varchar to integer fail?

Posted: 15 Oct 2013 03:04 AM PDT

Hi All,

Why does this conversion fail?

[code="sql"]
select convert(integer, '15.0271428571429')
[/code]

[code="plain"]
Msg 245, Level 16, State 1, Line 1
Conversion failed when converting the varchar value '15.0271428571429' to data type int.
[/code]
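
For context: a string has to look like a whole number to convert straight to int; going through a numeric type first is the usual workaround, sketched here:

[code="sql"]
-- fails: the string contains a decimal point
-- select convert(int, '15.0271428571429');

-- works: convert to a numeric type first, then to int (the fraction is truncated)
select convert(int, convert(float, '15.0271428571429'));
select convert(int, convert(decimal(19, 13), '15.0271428571429'));
[/code]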

Linked Servers

Posted: 14 Oct 2013 06:17 PM PDT

I have 64-bit SQL Server 2005 Standard Edition installed on my server, and I am trying to query an Oracle 11g database. I have a linked server configured, say LINK1, using the 'OLE DB Provider for Oracle'. The connection is working fine, and I get results for the query below:

[code="sql"]
select * from openquery(LINK1, 'select * from all_objects')
[/code]

BUT, when I try querying views on the Oracle 11g database using OPENQUERY, I get blank rows... I do not get any error, just blank rows. The same query returns records when executed directly on the Oracle end. Can anyone give me a solution so that I can see the view data while using OPENQUERY? Much appreciated...

Preventing All Updates and Deletes

Posted: 14 Oct 2013 07:26 AM PDT

I have a need to prevent all updates and deletes from occurring in a database; inserts and selects are OK. What are the recommendations for doing this? Can it be done via SQL Server Policy-Based Management? I need to restrict everybody, including sa accounts. Thanks for the help.

Steve
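
One hedged sketch: an AFTER trigger that rolls back any UPDATE or DELETE (dbo.SomeTable is a placeholder). It would have to be created on every table, and a sysadmin can always disable triggers, so this is a guard rail rather than a hard block against sa:

[code="sql"]
CREATE TRIGGER trg_SomeTable_block_ud
ON dbo.SomeTable
AFTER UPDATE, DELETE
AS
BEGIN
    RAISERROR('Updates and deletes are not allowed on this table.', 16, 1);
    ROLLBACK TRANSACTION;  -- undo the offending statement
END
[/code]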

Update statistics

Posted: 14 Oct 2013 07:40 AM PDT

Will maintenance plans (meaning DBCC CHECKDB, update statistics, and index rebuilds) increase the I/O workload?

Thanks

how to split the comma delimited string into rows without using LOOP, CTE, XML.

Posted: 08 Oct 2013 10:58 PM PDT

[code="sql"]
DECLARE @P NVARCHAR(MAX)
SELECT @P = 'MONDAY,SUN,FUN,D'
[/code]
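
A hedged sketch of the classic tally-table split, which uses neither a loop, a CTE, nor XML; master.dbo.spt_values only supplies numbers up to 2047, so longer strings would need a real numbers table:

[code="sql"]
DECLARE @P NVARCHAR(MAX)
SELECT @P = 'MONDAY,SUN,FUN,D'

SELECT SUBSTRING(@P, n.N, CHARINDEX(',', @P + ',', n.N) - n.N) AS Item
FROM (SELECT number + 1 FROM master.dbo.spt_values WHERE type = 'P') AS n(N)
WHERE n.N <= LEN(@P)
  AND SUBSTRING(',' + @P, n.N, 1) = ','   -- positions right after a delimiter (or the start)
ORDER BY n.N;
[/code]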

SQL Crash SqlDumpExceptionHandler: Process 8232 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION.

Posted: 14 Oct 2013 11:24 PM PDT

Our SQL Server 2008 R2 10.50.1600.1 x64 cluster running on Windows Server 2008 SP2 x64 Enterprise crashed this morning, quite unusually. Below is the stack trace; any ideas?

[code="plain"]
2013-10-15 03:54:52.31 Server Using 'dbghelp.dll' version '4.0.5'
2013-10-15 03:54:52.31 Server ***Stack Dump being sent to E:\EVI_ESSYSD1_043E\MSSQL10_50.ES\MSSQL\LOG\SQLDump0041.txt
2013-10-15 03:54:52.31 Server SqlDumpExceptionHandler: Process 8232 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process.
2013-10-15 03:54:52.31 Server * *******************************************************************************
2013-10-15 03:54:52.31 Server * BEGIN STACK DUMP:
2013-10-15 03:54:52.31 Server *   10/15/13 03:54:52 spid 8232
2013-10-15 03:54:52.31 Server *   Exception Address = 000000000132A5DD Module(sqlservr+000000000043A5DD)
2013-10-15 03:54:52.31 Server *   Exception Code    = c0000005 EXCEPTION_ACCESS_VIOLATION
2013-10-15 03:54:52.31 Server *   Access Violation occurred writing address 0000000000F935C0
2013-10-15 03:54:52.31 Server *   MODULE BASE END SIZE
2013-10-15 03:54:52.31 Server *   sqlservr 0000000000EF0000 0000000004A9DFFF 03bae000
[... roughly 90 further module entries and the register dump ...]
2013-10-15 03:54:52.32 Server * Short Stack Dump
2013-10-15 03:54:52.32 Server 000000000132A5DD Module(sqlservr+000000000043A5DD)
2013-10-15 03:54:52.32 Server 000000000132A513 Module(sqlservr+000000000043A513)
2013-10-15 03:54:52.32 Server 000000000132AFA2 Module(sqlservr+000000000043AFA2)
2013-10-15 03:54:52.32 Server 00000000012FF329 Module(sqlservr+000000000040F329)
2013-10-15 03:54:52.32 Server 0000000001301EEB Module(sqlservr+0000000000411EEB)
2013-10-15 03:54:52.32 Server 00000000012F69FD Module(sqlservr+00000000004069FD)
2013-10-15 03:54:52.32 Server 0000000000EFBBD8 Module(sqlservr+000000000000BBD8)
2013-10-15 03:54:52.32 Server 0000000000EFB8BA Module(sqlservr+000000000000B8BA)
2013-10-15 03:54:52.32 Server 0000000000EFB6FF Module(sqlservr+000000000000B6FF)
2013-10-15 03:54:52.32 Server 0000000001418FB6 Module(sqlservr+0000000000528FB6)
2013-10-15 03:54:52.32 Server 0000000001419175 Module(sqlservr+0000000000529175)
2013-10-15 03:54:52.32 Server 0000000001419839 Module(sqlservr+0000000000529839)
2013-10-15 03:54:52.33 Server 0000000001419502 Module(sqlservr+0000000000529502)
2013-10-15 03:54:52.33 Server 0000000074EB37D7 Module(MSVCR80+00000000000037D7)
2013-10-15 03:54:52.34 Server 0000000074EB3894 Module(MSVCR80+0000000000003894)
2013-10-15 03:54:52.35 Server 00000000773FB1BD Module(kernel32+000000000001B1BD)
2013-10-15 03:54:52.35 Server 0000000077606861 Module(ntdll+0000000000026861)
2013-10-15 03:54:52.35 Server Stack Signature for the dump is 0x00000001D48451F6
2013-10-15 03:54:53.14 Server External dump process return code 0x20000001. External dump process returned no errors.
[/code]

Please educate me: leaf_ghost_count

Posted: 15 Oct 2013 12:35 AM PDT

During the study of a production database I ran into a table which is used to cache some data.

[b]My questions:
1. Where has the space gone? (43 MB used for only 0.7 MB of information.)
2. What is leaf_ghost_count?
3. Can anybody explain the mechanism to me?
4. Advice/remarks/questions?[/b]

The table (Disk usage by table):
#records 31600
reserved (KB) 5456 [b]= 43 MB[/b]
Data (KB) 2392 [b]= 20 MB[/b]
Indexes (KB) 600 [b]= 4 MB[/b]
Unused (KB) 2464 [b]= 19 MB[/b]

The sum of all datalengths is: 717467 [b]= 0.7 MB[/b] (script: see below).

So 0.7 MB of information sits in 2.3 MB of data (OK), which is present in 5456 pages = 43 MB. For me this is out of proportion.

dbcc showcontig: Pages Scanned: 299 [b]= 2.3 MB[/b]

In sys.dm_db_index_operational_stats, I see 241610 leaf_ghost_count.

Thanks for educating me,
Ben

(PLE is over 10 hours, memory is over gigabytes, so there are no resource or performance problems.)

Table usage during office hours (rough estimates):
Inserts: 1 each second.
Search (clustered key): 2 times each second.
Deletes (in batches): all records which are older than 32 hours.

[code="sql"]
-- Table definitions
CREATE TABLE [dbo].[The_Table](
    [An_id] [dbo].[id_type] NOT NULL,        -- varchar(20)
    [account_nm] [dbo].[txt_type] NOT NULL,  -- varchar(200); on average 12.03277 long
    [settime] [smalldatetime] NOT NULL CONSTRAINT [DF_The_Table_settime] DEFAULT (getdate())
) ON [PRIMARY]

CREATE CLUSTERED INDEX [IX_The_Table] ON [dbo].[The_Table]
(
    [An_id] ASC,
    [account_nm] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF,
        DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
[/code]

-- sys.dm_db_index_operational_stats

[code="sql"]
SELECT object_name(object_id), *
FROM sys.dm_db_index_operational_stats(NULL, NULL, NULL, NULL)
ORDER BY object_name(object_id)
-- RESULT (row for The_Table):
-- The_Table 7 2057058364 1 1 568996 326853 0 241610 188 167 0 188 0 68 0 2415290 0 0 0 0 0 0 0 0 0 0 1080606722 31 748 16670243 0 0 249 33 99 77 158 4074 0 0 73 1014 0 0
[/code]

The most important (?) numbers from the above query:

[code="other"][font="Courier New"]
568490 leaf_insert_count
326853 leaf_delete_count
241610 leaf_ghost_count
000068 leaf_page_merge_count
2412984 range_scan_count
000000 singleton_lookup_count
1 078 881 537 row_lock_count
16 650 173 page_lock_count
[/font][/code]

The table:
29731 records
5456 reserved
2384 data
600 index
2472 Unused

[code="other"][font="Courier New"]
-- dbcc showcontig
TABLE level scan performed.
- Pages Scanned................................: 299
- Extents Scanned..............................: 77
- Extent Switches..............................: 293
- Avg. Pages per Extent........................: 3.9
- Scan Density [Best Count:Actual Count].......: 12.93% [38:294]
- Logical Scan Fragmentation ..................: 99.00%
- Extent Scan Fragmentation ...................: 97.40%
- Avg. Bytes Free per Page.....................: 3865.2
- Avg. Page Density (full).....................: 52.25%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
[/font][/code]

[code="sql"]
-- Datalength script: calculate the total amount of information in the table.
-- Calculate total number of bytes and average number of bytes for each column.
select count(*) from The_Table
-- 32074

select 1.0*datalength(An_id), 1.0*datalength(account_nm), 1.0*datalength(settime),
       1.0*datalength(An_id) + 1.0*datalength(account_nm) + 1.0*datalength(settime)
from The_Table
compute sum(1.0*datalength(An_id) + 1.0*datalength(account_nm) + 1.0*datalength(settime)),
        avg(1.0*datalength(An_id)), avg(1.0*datalength(account_nm)), avg(1.0*datalength(settime))
-- the computed result:
[b]-- 738432.0 7.000000 12.022759 4.000000[/b]
[/code]

SQL Server 2008 R2 slow query performance to Oracle 11g using Openquery

Posted: 01 May 2013 04:17 AM PDT

Gang,

I'm completely perplexed on this one. I am running a query against an Oracle 11g database. It looks something like this:

[code="sql"]
select * from openquery(oradb, 'select col1, col2, col3....col15 from ora.Table where rownum <= 100000')
[/code]

The query goes against a linked server using the Oracle 11.2 client download. The purpose of the query is to pull in the first 100,000 records from a particular table, about 15 MB worth of data. When I run the query on two of my SQL Servers, call them A1 and A2, both running 2008 R2, the query takes about 20 seconds. Here is where it gets weird.

When I run it on my new server, call it B1, also running 2008 R2, it takes EXACTLY 3 minutes and 36 seconds, every time. The query executes immediately but the data transfer is incredibly slow. We monitored the server and the network performance is only about 40,000 B/sec. If we run the query in SQL*Plus (the Oracle client), it runs in 20 seconds, just like on the other SQL Servers, and the network performance is well over 500,000 B/sec. That level of performance is the same on servers A1 and A2.

I'm leaning toward SQL Server throttling this query for some reason. My networking and Oracle teams looked at it and said Oracle and the network are not the issue; I agree, since the SQL*Plus query runs in 20 seconds.

[b]Has anyone else seen this on SQL Server? Are there some networking settings in SQL Server that would cause it to not saturate the network?[/b]

I'm completely lost on this one.

Dennis

Need to know the previous Report Type? Any Query or T-SQL?

Posted: 14 Oct 2013 10:25 PM PDT

Hi Friends,

I am looking for a query or procedure which should give the previous report type of every type 'R'. 'R' is called Re-Do, so I want to know the previous type which is related to each 'R'. I tried a query with a join condition like:

Table A join Table B on A.ID = B.ID - 1 and A.PID = B.PID

but the problem is that type 'R' may occur several times in a row, so the query I tried fails. For example, in the table below, IDs 9 to 14 have type 'R'; the result should be 'F', but with my query I get the same type 'R'. So friends, please give me your suggestions on this. If you are not clear about my question, please let me know.

[code="sql"]
Declare @Temp Table (PID int, ID int, ReportType char(1))

Insert Into @Temp
select 4, 1, 'P' union all
select 4, 2, 'P' union all
select 4, 3, 'P' union all
select 4, 4, 'P' union all
select 4, 5, 'R' union all
select 4, 6, 'F' union all
select 4, 7, 'F' union all
select 4, 8, 'R' union all
select 4, 9, 'R' union all
select 4, 10, 'R' union all
select 4, 11, 'R' union all
select 4, 12, 'R' union all
select 4, 13, 'R' union all
select 4, 14, 'R' union all
select 4, 15, 'F' union all
select 4, 16, 'R' union all
select 4, 17, 'R' union all
select 4, 18, 'P'

select * from @Temp
[/code]
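
A hedged sketch of one approach that handles runs of consecutive 'R' rows: for each 'R', use OUTER APPLY to look back for the most recent non-'R' row.

[code="sql"]
SELECT t.PID, t.ID, t.ReportType, prev.ReportType AS PreviousType
FROM @Temp AS t
OUTER APPLY (
    SELECT TOP 1 p.ReportType
    FROM @Temp AS p
    WHERE p.PID = t.PID
      AND p.ID < t.ID
      AND p.ReportType <> 'R'   -- skip over the whole run of R's
    ORDER BY p.ID DESC
) AS prev
WHERE t.ReportType = 'R';
[/code]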

Trigger on create table

Posted: 14 Oct 2013 11:40 PM PDT

Hi Friends,

Is there a way to capture CREATE TABLE statements on a particular database using triggers, and send a mail to a particular user ID? Thanks in advance.
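
A hedged sketch using a DDL trigger plus Database Mail, which must already be configured; the profile name and recipient are placeholders:

[code="sql"]
CREATE TRIGGER trg_capture_create_table
ON DATABASE
FOR CREATE_TABLE
AS
BEGIN
    -- pull the statement text out of the EVENTDATA XML
    DECLARE @cmd nvarchar(max) =
        EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)');

    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'DBA Mail Profile',
        @recipients   = N'dba@example.com',
        @subject      = N'Table created',
        @body         = @cmd;
END
[/code]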

SQL Server Jobs not being scheduled after restart

Posted: 20 Aug 2013 04:31 AM PDT

I failed over our cluster (SQL 2008 R2) and the SQL Agent did not start (AgentXPsDisabled)... I ran this script, which brings SQL Agent online, but ALL the jobs' "Next Run" is set to "Not scheduled":

[code="sql"]
EXEC sp_configure 'allow updates', 0
RECONFIGURE
go
-- then:
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'Agent XPs', 1;
GO
RECONFIGURE
GO
EXEC sp_configure 'allow updates', 0
RECONFIGURE
[/code]

I tried restarting SQL Agent from the SQL Server Configuration Manager, but to no avail. Thoughts?

Quick Question - PRIMARY filegroup is full

Posted: 14 Oct 2013 08:14 PM PDT

Hello,

I know the reason for this issue and how to go about fixing it. My question is: what is the behaviour of all the update/insert/delete(?) requests that cannot be processed? Will they be stored in the transaction log, or will the database in effect be in read-only mode once the space limit is reached?

Thanks in advance
samuel

Alter Column with user defined data type

Posted: 13 Oct 2013 05:41 PM PDT

Hi, I need to modify a datatype:

[code="sql"]
create Table mytable (i int, j datetime default getdate());

Alter Table mytable ALTER COLUMN j As dbo.myfunction1();
[/code]

[b]Msg 156, Level 15, State 1, Line 1
Incorrect syntax near the keyword 'As'.[/b]

Whereas if I create a table with the user-defined definition, I am able to create the table:

[code="sql"]
create Table mytable1 (i int, j As dbo.myfunction1());
[/code]
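
For what it's worth, ALTER COLUMN cannot turn an existing column into a computed column; the usual workaround is to drop and re-add it:

[code="sql"]
-- any default constraint on j must be dropped before the column can be
ALTER TABLE mytable DROP COLUMN j;
ALTER TABLE mytable ADD j AS dbo.myfunction1();
[/code]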

Data Loss during Replication in SQL Server 2008 R2

Posted: 14 Oct 2013 01:34 PM PDT

Hi,

We are replicating data from database dbx on source server A (SQL Server 2008 R2) to database dby on destination server B (SQL Server 2008 R2). Replication Monitor doesn't show any errors, but over a period of time we find that dby is out of sync with dbx. We replicate all tables from dbx to dby. From time to time we reboot destination server B, but not the source server A. I have the following questions:

1. Can someone tell me whether, if the destination server is rebooted, we lose any data generated during the reboot time from source to destination?
2. Is a reboot an issue for the source or destination server when the replication process is ON?

Thanks
Chandra

SSIS no longer writes data to file

Posted: 12 Jun 2012 06:23 AM PDT

I have a question about SSIS. I have an export package that we run monthly. It is simple: just an OLE DB Source feeding a Flat File Destination, with a SQL query in the OLE DB Source. This has been running fine for several months.

A change was requested to the SQL query to add another condition to the WHERE clause. In SSMS it runs fine and returns about 414K rows. In the editor box, the preview returns the 200 rows like it should. Running the package in debug returns all green boxes, no errors or warnings. But it shows that zero rows were written to the file, and the file is zero length. The Preview button in the Flat File Destination editor box returns zero rows as well. It just looks like no data is being passed from the query in the OLE DB Source to the Flat File Destination.

I've tried creating a new data flow with a different file name, and that returns zero rows as well, again with no errors or warnings.

What I did was open the existing OLE DB Source editor, select the SQL query, delete it, and paste in the new SQL query. After doing that, the package runs fine but no data is written to the flat file. The other 11 files (from the other data flow tasks) are populated and look OK. The query is the correct one, the SQL query columns all match output columns in the flat file, and the same operator is used, so there shouldn't be a permissions problem.

I am out of ideas about what has happened and don't know what else to look for. This is a SQL 2008 R2 Enterprise (production) server running: Microsoft SQL Server 2008 R2 (SP1) - 10.50.2500.0 (X64) Jun 17 2011 00:54:03 Copyright (c) Microsoft Corporation Data Center Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: )

Anyone care to give me a hand on what I can try next to fix this? It seemed like a simple process; it worked for another data flow change without complaints, but this one doesn't report an error and doesn't give me any data. My knowledge of how this works is pretty limited. We only have two monthly SSIS packages, run once a month; neither is complex, they just extract some data and write it to flat files for FTP to the requesting agencies.

Thanks!
Norman

help on creating entity

Posted: 11 Oct 2013 09:20 PM PDT

I have a stone, and this stone can be one of two types: a circle stone, which has this info: radius and center point (x, y); or a square stone, which has this info: 4 corner points (x, y). But the id in both of them, which is the primary key, is the same. How can I create this entity?
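
A hedged sketch of the standard supertype/subtype design this describes, with hypothetical table and column names:

[code="sql"]
-- supertype: one row per stone, whatever its shape
CREATE TABLE dbo.Stone
(
    StoneID   int NOT NULL PRIMARY KEY,
    StoneType char(1) NOT NULL CHECK (StoneType IN ('C', 'S'))  -- C = circle, S = square
);

-- subtype: circle-specific attributes, sharing the same key
CREATE TABLE dbo.CircleStone
(
    StoneID int NOT NULL PRIMARY KEY REFERENCES dbo.Stone(StoneID),
    CenterX float NOT NULL,
    CenterY float NOT NULL,
    Radius  float NOT NULL
);

-- subtype: square-specific attributes (four corner points)
CREATE TABLE dbo.SquareStone
(
    StoneID int NOT NULL PRIMARY KEY REFERENCES dbo.Stone(StoneID),
    X1 float NOT NULL, Y1 float NOT NULL,
    X2 float NOT NULL, Y2 float NOT NULL,
    X3 float NOT NULL, Y3 float NOT NULL,
    X4 float NOT NULL, Y4 float NOT NULL
);
[/code]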

sp_generate_inserts

Posted: 07 Oct 2013 11:45 PM PDT

I am still using sp_generate_inserts:

[code="sql"]
-- (Copyright © 2002 Narayana Vyas Kondreddi. All rights reserved.)
-- Purpose: To generate INSERT statements from existing data. These INSERTs can be
--          executed to regenerate the data at some other location. This procedure is
--          also useful to create a database setup, where you can script your data
--          along with your table definitions.
-- Written by: Narayana Vyas Kondreddi http://vyaskn.tripod.com
[/code]

This code has not changed for a number of years and is still functioning, but is there an alternative for this within the SQL Server product? I am aware of the 'Generate Scripts' database task.

Thanks for your time and attention,
Ben

Few tricky Questions about SQL Server 2008 that I came across...

Posted: 10 Jan 2012 06:31 AM PST

A few questions about SQL Server 2008 that I came across.

[b]Q: Which of the following allows for the truncation of a transaction log?[/b]
A. Transaction logs are truncated when you use SQL Profiler.
B. Transaction logs are truncated when you use the following syntax: truncate table A
C. Transaction logs are truncated when you backup your database.
D. You can not backup a transaction log.

[b]Q: Which of the following statements about unique constraints are TRUE?[/b]
A. You can have only one unique constraint applied to a table.
B. Unique constraints can be used to establish a link between two tables.
C. You can only have one column assigned to a unique constraint.
D. If a primary key was already assigned to a table, the unique constraint would identify a special column used for indexing.

[b]Q: Which of the following is an improved performance feature that uses bitmap filters for selecting minimal rows with queries?[/b]
A. Table partitions
B. Star Join
C. Where clause
D. Outer Join

[b]Q: Using a join in your select statement instead of a where clause is more efficient because the join pulls less data into your working memory.[/b]
A. True
B. False

[b]Q: Full text indexes can be backed up and restored with the rest of the database due to the...[/b]
A. ...new index organization system.
B. ...overall size of the index getting smaller.
C. ...index becoming a part of the transaction logs.
D. ...indexes no longer being stored outside of the database.

[b]Q: Which of the following statements, if used for creating a new login id, will result in an error?[/b]
A. CREATE LOGIN [HumanResources/JohnSmith] FROM WINDOWS
B. CREATE LOGIN AndrewWalters WITH PASSWORD '!^%&7', DEFAULT DATABASE = HR
C. CREATE LOGIN John WITH PASSWORD '*&!@*&'
D. CREATE LOGIN [HumanResources/Jim] FROM WINDOWS, DEFAULT DATABASE=CRM

[b]Q: Which of the following statements can be used to provide a sorted result set based on the CustomerName column?[/b]
A. SELECT ROW_NUMBER() AS RN, CustomerName from Customers order by CustomerName
B. SELECT ROW_NUMBER() OVER (ORDER BY CustomerName) AS RN, CustomerName FROM Customers
C. SELECT ROW_NUMBER() (ORDER BY CustomerName) AS RN, from Customers
D. SELECT ROW_NUMBER() AS RN, CustomerName from Customers

Index question...

Posted: 13 Oct 2013 09:40 PM PDT

Hi,

Running the missing index DMVs (with Glenn Berry's query) I get two missing indexes that are very alike.

One has:
Equality columns: [Documento], [NumDoc], [Modulo], [Serie]

The other:
Inequality columns: [Modulo]
Include columns: [Documento], [NumDoc], [Serie]

Will building an index on ([Modulo], [Documento], [NumDoc], [Serie]) satisfy both requests? I know that the first index would be used if only [Documento] is filtered, and my index won't be... But the DMVs record the "exact" needs for the indexes, right? If a query just needed [Documento] there would be a separate entry for that...

Thanks,
Pedro

Tuesday, October 15, 2013

Advanced SQL Server 2008 Extended Events with Examples [SQL Server]

Read more

sqlteam.com


How do I get hotel email addresses

Posted: 15 Oct 2013 09:07 PM PDT

I'm looking for hotel email addresses in Texas. I can't get the email addresses: I have looked them up, but I cannot get them to correspond with the hotels' physical addresses.

How to make SQL Server Agent job history less verbose?

Posted: 15 Oct 2013 08:04 PM PDT

So, I've created a SQL Server Agent job, according to the answer to my previous question:

[screenshots of the job step's General and Advanced pages]

running nightly (at 2:11) according to this schedule:

[screenshot of the Nightly Schedule]

There are 9 .bak files in 3 subdirectories of the d:\backup\ source folder; 3 old backup files are deleted and 3 new ones are created by another, preceding SQL Server Agent job.

The job described here copies and purges files, but the history of this Copy&Purge BAKs SQL Server Agent job shows 96 items:

[screenshot of the job history]

How to make it less verbose?

Eventvwr.msc doesn't contain any errors for the corresponding period of time (of the job run).
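One way to see where the 96 items come from is to look at the raw history table: msdb stores one row per step per run, plus a job-outcome row, so multi-step jobs multiply quickly. A sketch (the job name is taken from the question):

SELECT j.name, h.step_id, h.step_name, h.run_date, h.run_time, h.run_status
FROM   msdb.dbo.sysjobhistory AS h
JOIN   msdb.dbo.sysjobs       AS j ON j.job_id = h.job_id
WHERE  j.name = N'Copy&Purge BAKs'
ORDER BY h.run_date DESC, h.run_time DESC;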

Distributed Database using 3rd party services

Posted: 15 Oct 2013 07:33 PM PDT

Many websites offer ways to save information in the form of plain text. Some websites allow retrieval in structured formats such as JSONP which is easily parseable and available for the client without hitting the server (well, it hits the 3rd party server).

My question is why hasn't / has this been exploited yet? It seems to me that one could wire up something easily that uses a 3rd party's public facing JSONP api to store their information. Then whenever the client needs that information, it can be fetched easily through that 3rd party service.

After writing this, I feel like the weakness of the benefits makes this moot. The only thing I can come up with is saving disk space, but disk space is so cheap it almost doesn't matter.

What filter to use in SQL Profiler for socket write error

Posted: 15 Oct 2013 07:07 PM PDT

Can someone please help with an initial SQL Profiler filter setup, to narrow down the settings and reduce the noise, when I'm trying to find out the reason behind a socket write error?

Basically the problem is that certain integration software is dropping XML files along the way from one application to the end application, but some go through. The errors are in the log. After we restarted the SQL and app servers, no messages were dropped for the first two days.

I need to figure out the possible reason. We will probably have to run Profiler for 24 hours, because the errors appear only occasionally, so if there is too much data it becomes useless. Any help appreciated. Thank you.

jdbc:jtds:sqlserver://ASSET-DB-02;instance=test;DatabaseName=MonitorEngine'
I/O Error: Connection reset by peer: socket write error
java.sql.SQLException: I/O Error: Connection reset by peer: socket write error
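As an alternative to a broad Profiler trace, a server-side Extended Events session filtered to error events keeps the noise down over a 24-hour window. A minimal sketch, assuming SQL Server 2008 (the session name and target path are assumptions):

CREATE EVENT SESSION capture_errors ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION (sqlserver.client_hostname, sqlserver.sql_text)
    WHERE (severity >= 16))             -- real errors only, skip informational
ADD TARGET package0.asynchronous_file_target (
    SET filename = N'C:\Temp\capture_errors.xel');  -- assumed path
GO
ALTER EVENT SESSION capture_errors ON SERVER STATE = START;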

Subscription uploading is not happening every time

Posted: 15 Oct 2013 04:31 PM PDT

I have two SQL Servers (2008 R2) and merge replication between them, one being the publisher and the other the subscriber. I noticed that downloads are happening every minute, but uploads are happening every 10-15 minutes, and I wonder why. There is very little activity on the subscriber, on average one change per minute. Does the subscriber batch uploads and only send them when the batch reaches a certain size? What can I do to determine the cause of this behavior?

filling a table by triggering multiple tables on insert

Posted: 15 Oct 2013 04:39 PM PDT

I have a table m that contains the following fields:

id int(11) not null auto_increment, //PK
name varchar(100) null,
something_else varchar(100) default 'not important'

I need to fill up table m by using this trigger

CREATE TRIGGER some_name AFTER INSERT ON `x`
FOR EACH ROW
    INSERT INTO m (name) VALUES (NEW.name);

which I got from this dba question

but the problem is that each row in m should be filled using AFTER INSERT on multiple different tables, like x and y. So, how can this be done?

I hope that my question is clear enough as it is in my mind.

Update 1: table m has the following row:

id - name - something_else
1  - null - not important

on after insert for table x, table m row should become like:

id - name     - something_else
1  - newthing - not important

then, on after insert for table y, table m row should become like:

id - name     - something_else
1  - newthing - new value

Hence, the row in table m has been filled by multiple inserts on different tables and updated accordingly.
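To make the intent concrete, here is a sketch of the pair of triggers being described. The assumption that rows in m are matched by name is mine, since the question doesn't say how x and y rows relate:

-- AFTER INSERT on x creates the row in m
CREATE TRIGGER x_ai AFTER INSERT ON `x`
FOR EACH ROW
    INSERT INTO m (name) VALUES (NEW.name);

-- AFTER INSERT on y updates the same row (match-by-name is an assumption,
-- as is y having name and something_else columns)
CREATE TRIGGER y_ai AFTER INSERT ON `y`
FOR EACH ROW
    UPDATE m SET something_else = NEW.something_else
    WHERE name = NEW.name;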

How can Sql Server have an Index Allocation Map error on object ID 0, index ID -1

Posted: 15 Oct 2013 02:45 PM PDT

I received a Fatal error 823 due to an incorrect checksum in tempdb.mdf. This can be a hard drive error (and create corruption), so I ran DBCC CHECKDB on all my databases (on restores of last night's backups, at least, on a different server), and one of the databases returned the following issue:

Msg 2576, Level 16, State 1, Line 1
The Index Allocation Map (IAM) page (0:0) is pointed to by the previous pointer of IAM page (1:330) in object ID 0, index ID -1, partition ID 0, alloc unit ID 72057594044612608 (type Unknown), but it was not detected in the scan.

I have looked online about the error, and normally there is a non-zero object ID for the object with the issue. This is a very small database (60 MB), and since the app that uses it has been chugging along for at least a week (I went back to my oldest backup and the issue was there), the corruption is not a big issue in itself (it will still get addressed, and hopefully be ammo for better maintenance procedures).

On my backup, running DBCC CHECKDB('DBName', REPAIR_ALLOW_DATA_LOSS) fixes the issue with no apparent data loss, but since this requires single-user mode and downtime, I want to be sure it's the right thing and that I understand what the error message is saying before I risk possible data loss (and a late night).
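Before repairing, it may be worth trying to map the allocation unit from the message back to an object; with object ID 0 the lookup will likely come back empty, which itself suggests the pages are orphaned. A sketch (run in the affected database; the join covers in-row and row-overflow units):

SELECT au.allocation_unit_id, au.type_desc,
       p.object_id, OBJECT_NAME(p.object_id) AS object_name, p.index_id
FROM sys.allocation_units AS au
LEFT JOIN sys.partitions AS p ON p.partition_id = au.container_id
WHERE au.allocation_unit_id = 72057594044612608;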

The 823 read error is being addressed by a different team.

Join dynamically on tables in Postgres

Posted: 15 Oct 2013 12:22 PM PDT

SELECT id, audited_row, name
FROM audit_log_change AS C
JOIN audit_log_action AS A
    ON A.id = C.action_id
JOIN audit_log_table AS T
    ON A.audited_table_id = T.id

This produces a result like this

id   audited_row   name
---------------------------
41   6108          poolsystem
42   1108          pool
43   342           user

Values in the name column above are table names, and the corresponding values in audited_row are their ids. I am wondering if there is a way to join on the tables dynamically in Postgres? For example, JOIN poolsystem ON poolsystem.id = 6108, JOIN pool ON pool.id = 1108, JOIN user ON user.id = 342.

Thanks
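For completeness, a dynamic join usually ends up in PL/pgSQL, since plain SQL can't parameterize a table name. A sketch, assuming every audited table has an integer id and a text name column to fetch (format() needs PostgreSQL 9.1+):

CREATE OR REPLACE FUNCTION audited_name(tbl text, row_id int)
RETURNS text AS $$
DECLARE
    result text;
BEGIN
    -- %I quote the identifier; $1 carries the id as a parameter
    EXECUTE format('SELECT t.name FROM %I AS t WHERE t.id = $1', tbl)
    INTO result USING row_id;
    RETURN result;
END;
$$ LANGUAGE plpgsql;

-- usage sketch: SELECT id, audited_row, name, audited_name(name, audited_row) FROM ...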

Query gets slow when bind peeking is turned on

Posted: 15 Oct 2013 11:49 AM PDT

There is an old database that was upgraded from Oracle 10 to 11G R1. In Oracle 10 we had disabled bind peeking. Since Oracle 11 has adaptive cursor sharing, we want to turn bind peeking back on. When we did this we found that many queries went much much faster. Unfortunately, one critical query got very slow. As soon as we turn off bind peeking, the one query gets fast again but everything else goes back to being sluggish.

The question is: In Oracle 11, what would cause bind peeking to make a query slow? I thought adaptive cursor sharing was supposed to take care of the bad bind variable peeks problem.

When is it appropriate database design to use a separate schema?

Posted: 15 Oct 2013 03:26 PM PDT

I am developing a database which has data sourced from many different applications. In my first pass at design, I placed the staging tables each in a schema named for their source application.

As several of the source applications have similar data and similar table names, I use the schema name to differentiate the source application. The alternative I am considering would be using a single schema and including the source application in the table name.

I wanted to look into the design rules pertaining to when to use a different schema and the pros and cons of doing so and I could not find anything.

Is the schema purely for permissioning and security?

Does it make sense from an organizational point of view to create objects in separate schemas beyond what is required for application development or is this just needlessly adding complexity to queries?

Are there any other repercussions of this decision which I have neglected to consider?
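For concreteness, the two layouts under discussion differ only in where the source-application label lives. A minimal sketch with hypothetical names:

-- Layout 1: schema per source application
CREATE SCHEMA AppA AUTHORIZATION dbo;
GO
CREATE TABLE AppA.Customer (CustomerId int PRIMARY KEY);
GO
-- Layout 2: single schema, source application encoded in the table name
CREATE TABLE dbo.AppA_Customer (CustomerId int PRIMARY KEY);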

I have a table 'log' that doesn't have a FK, it just stands alone; is it better to use MyISAM or InnoDB?

Posted: 15 Oct 2013 01:29 PM PDT

Database: MySQL 5.1. Default engine for the database: InnoDB. A web app uses it.

MySQL - Autoincrement does not increment sequentally if last row was deleted

Posted: 15 Oct 2013 05:34 PM PDT

I have a table which contains an auto-incremented primary key id. If I delete the last row (highest id, for example id = 6) and insert a new row, the new id starts at 7. Which parameter do I have to change so that the primary key starts at 6?

CREATE TABLE animals (
    id MEDIUMINT NOT NULL AUTO_INCREMENT,
    name CHAR(30) NOT NULL,
    PRIMARY KEY (id)
) ENGINE=MyISAM;

INSERT INTO animals (name) VALUES
('dog'),('cat'),('penguin'),
('lax'),('whale'),('ostrich');

Result:
id name
1 dog
2 cat
3 penguin
4 lax
5 whale
6 ostrich

DELETE FROM animals WHERE id = 6;
INSERT INTO animals (name) VALUES ('x');

Result:
id name
1 dog
2 cat
3 penguin
4 lax
5 whale
7 x

Thanks for advice.
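For what it's worth, the counter itself can be reset after the delete. A sketch using the question's own table (this works here because 6 is above the current maximum id of 5):

DELETE FROM animals WHERE id = 6;
ALTER TABLE animals AUTO_INCREMENT = 6;  -- reset the counter manually
INSERT INTO animals (name) VALUES ('x'); -- now gets id 6, not 7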

What is a Scalable Storage Mechanism for large TTL data collections

Posted: 15 Oct 2013 05:49 PM PDT

We currently have a legacy web service that stores each XML request/response in SQL Server. The data only needs to persist for 3 days, after which it is considered expired. SQL Server is not good at deleting rows, since every delete forms part of the transaction log. The db currently grows at 6-10 GB per day, and this is going to increase. Only around 1% of the responses that are stored are ever recalled, so this is a very write-heavy application. Each request/response XML document can be up to 14 KB in size.

What storage mechanism would you choose for up to 50/100 GB of data per day?

I understand the solution is not sustainable, and I am really looking for a tactical fix, since we cannot easily change how all our clients query and re-query the data. We could look into a db that has native support for TTL (Riak, Postgres, etc.), or maybe a file/blob s3/azure storage solution is a better fit? The issue with a cloud blob storage solution could be lookup performance if we had to scan multiple buckets (since buckets have capacity limits), especially compared to the current SQL Server single-table lookup.

Open to ideas and suggestions.
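One tactical sketch for staying on SQL Server: partition the table by day so that expiry becomes a metadata operation (SWITCH, then TRUNCATE the staging table) rather than millions of logged single-row deletes. All names and boundary dates below are assumptions:

CREATE PARTITION FUNCTION pf_day (date)
    AS RANGE RIGHT FOR VALUES ('2013-10-14', '2013-10-15', '2013-10-16');
CREATE PARTITION SCHEME ps_day
    AS PARTITION pf_day ALL TO ([PRIMARY]);
CREATE TABLE dbo.XmlLog (
    LoggedOn date NOT NULL,
    Payload  xml  NULL
) ON ps_day (LoggedOn);

-- nightly, after adding tomorrow's boundary: age out the oldest day
-- (dbo.XmlLog_Stage must have an identical structure on the same filegroup)
-- ALTER TABLE dbo.XmlLog SWITCH PARTITION 1 TO dbo.XmlLog_Stage;
-- TRUNCATE TABLE dbo.XmlLog_Stage;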

Install Oracle 11gR2 driver for Node.js 0.10.20

Posted: 15 Oct 2013 08:57 PM PDT

I want to connect Oracle 11gR2 and Node.js 0.10.20. I use this package, but I don't understand this part of the installation process. Can you explain it to me?

# Replace /opt/instantclient_11_2/ with wherever you extracted the Basic Lite files to
echo '/opt/instantclient_11_2/' | sudo tee -a /etc/ld.so.conf.d/oracle_instant_client.conf
sudo ldconfig

How to get performance benefits from a view vs subquery?

Posted: 15 Oct 2013 11:39 AM PDT

I'd like to know how/if anyone has gained significant performance benefits from using views instead of subqueries.

I would prefer not to define a specific case, since I'm trying to establish good practice and rule of thumb in addition to case-by-case.

An example would be finding the last claim date in an insurance policy claim list, where you start the search from a sorted/filtered customer set and all claims are in their own table. I'm currently using a view and thinking about a subquery instead (see the sketch after the list below).

Things that might affect performance across cases:

  • Can views be used somehow to avoid a full scan where a subquery would need one?
  • Are there any limitations/caveats on ensuring that the best indexes are used when joining to a view?
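To make the comparison concrete, here is the insurance example in both forms, with hypothetical Customers/Claims tables. In most engines a non-indexed view is inlined into the query, so the two forms typically compile to the same plan:

CREATE VIEW dbo.LastClaim AS
    SELECT CustomerId, MAX(ClaimDate) AS LastClaimDate
    FROM dbo.Claims
    GROUP BY CustomerId;

-- view form
SELECT c.CustomerName, lc.LastClaimDate
FROM dbo.Customers AS c
JOIN dbo.LastClaim AS lc ON lc.CustomerId = c.CustomerId;

-- subquery (derived table) form
SELECT c.CustomerName, lc.LastClaimDate
FROM dbo.Customers AS c
JOIN (SELECT CustomerId, MAX(ClaimDate) AS LastClaimDate
      FROM dbo.Claims
      GROUP BY CustomerId) AS lc
  ON lc.CustomerId = c.CustomerId;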

Postgres functions vs prepared queries

Posted: 15 Oct 2013 06:45 PM PDT

I'm sure it's there but I've not been able to find a simple definitive answer to this in the docs or via Google:

In Postgres, are prepared queries and user defined functions equivalent as a mechanism for guarding against SQL injection? Are there particular advantages in one approach over the other? Thanks
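For reference, the two mechanisms side by side; both pass the value as a parameter rather than splicing it into the SQL text, which is what guards against injection (the users table is a made-up example):

-- prepared statement
PREPARE get_user (int) AS
    SELECT name FROM users WHERE id = $1;
EXECUTE get_user(42);

-- user-defined function
CREATE FUNCTION get_user_name(int) RETURNS text AS $$
    SELECT name FROM users WHERE id = $1;
$$ LANGUAGE sql;
SELECT get_user_name(42);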

Troubleshooting Mirroring Configuration

Posted: 15 Oct 2013 08:40 PM PDT

I am configuring database mirroring on SQL Server 2012. After configuration I am getting the following error when attempting to start the mirroring session:

The TCP server cannot be reached or does not exist. Check the network address name and that the ports for the local and remote endpoints are operational.

An exception occurred while executing a Transact-SQL statement.
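A first diagnostic sketch, run on each partner, to confirm the mirroring endpoint exists, is STARTED, and to note the port it listens on:

SELECT e.name, e.state_desc, e.role_desc, t.port
FROM sys.database_mirroring_endpoints AS e
JOIN sys.tcp_endpoints AS t ON t.endpoint_id = e.endpoint_id;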

multiple line text values in mysqldump text export file

Posted: 15 Oct 2013 10:27 AM PDT

I'm trying to export a 100+ million record table into a text file. My plan is to split the text file into small pieces by size or line count, then import them.

One text field has multiple lines, like blog post text. In the export file it comes out as multiple lines, but I want it to be 1 line per row, so I can process the file by lines.

I tried various fields-terminated-by, lines-terminated-by and fields-escaped-by parameters for the export, but nothing turned that multi-line text into a single, quoted, comma-separated line.

It does quote well when I export the data in SQL format, but I haven't succeeded in converting the newline characters in the text field to \n\r or \n, or whatever those characters are. Even if I escape them, they are still exported as new lines with the quote.
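One workaround sketch: escape the newlines yourself on the way out with SELECT ... INTO OUTFILE, then undo the escaping after import. The posts table, body column and output path are assumptions:

SELECT id,
       REPLACE(REPLACE(body, '\r', ''), '\n', '\\n') AS body_one_line
INTO OUTFILE '/tmp/posts.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM posts;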

Altering the location of Oracle-Suggested Backup

Posted: 15 Oct 2013 03:27 PM PDT

On one database, the Oracle-Suggested Backup scheduled from Enterprise Manager always ends up in the recovery area, despite RMAN configuration showing that device type disk format points elsewhere.

As far as I can see, the scheduled backup job is simply:

run {
  allocate channel oem_disk_backup device type disk;
  recover copy of database with tag 'ORA_OEM_LEVEL_0';
  backup incremental level 1 cumulative copies=1
    for recover of copy with tag 'ORA_OEM_LEVEL_0' database;
}

Asking RMAN to show all reveals that device type disk is indeed configured to store elsewhere:

CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/s01/backup/PROD11/PROD11_%U';

If I run the script manually, the backup set is placed at the above location; when the script is run from the job scheduler, the backup set goes to the RECO group on ASM.

Why might Oracle still choose to dump the backupset to the db_recovery_file_dest?

Ultimately, how can I change the backup destination?
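One thing that may be worth trying while investigating: pin the format on the channel itself, since the scheduled script's allocate channel carries no format clause and so falls back to whatever the job environment supplies. A sketch based on the script above:

run {
  allocate channel oem_disk_backup device type disk
      format '/s01/backup/PROD11/PROD11_%U';
  recover copy of database with tag 'ORA_OEM_LEVEL_0';
  backup incremental level 1 cumulative copies=1
      for recover of copy with tag 'ORA_OEM_LEVEL_0' database;
}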

MYSQL Timezone support

Posted: 15 Oct 2013 05:27 PM PDT

We have a shared hosting plan, and they say they do not provide MySQL timezone support on it. Can I create the timezone-related tables in our database and populate them with the required data (data from our local MySQL timezone tables)? How can I view the code/syntax of the MySQL CONVERT_TZ function?

Thanks, Arun
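For reference, CONVERT_TZ is a built-in native function (its source is only viewable in the server source code, not via SQL), but its two calling styles show why the time zone tables matter: named zones need the mysql.time_zone% tables populated, while numeric offsets always work:

SELECT CONVERT_TZ('2013-10-15 12:00:00', 'UTC', 'America/Chicago');  -- needs tz tables
SELECT CONVERT_TZ('2013-10-15 12:00:00', '+00:00', '-05:00');        -- works without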

replication breaks after upgrading master

Posted: 15 Oct 2013 01:27 PM PDT

I have a replication setup with master 5.1.30 and slave 5.5.16, and the replication is working well.

Now I have upgraded the MySQL master to 5.1.47.

As far as I know, we have to turn off binary logging with sql_log_bin=0 before running the mysql_upgrade program, in order to upgrade the replication setup as well,

but the problem here is that the binary log was not turned off while the mysql_upgrade program was running.

The reason, I found, is that in 5.1 sql_log_bin is a session variable, and mysql_upgrade runs in another session.

So how can I upgrade the replication setup along with the server, without any breakage of replication?

Any suggestions are really useful.

How I can copy from local file to remote DB in PostgreSQL?

Posted: 15 Oct 2013 09:27 PM PDT

I am a novice at psql and need some help. How can I load a local CSV into a remote DB?

I am using the following command

\COPY test(user_id, product_id, value)
    FROM '/Users/testuser/test.tsv' WITH DELIMITER '\t' CSV HEADER;

but this searches for the file on the remote DB server, while I need it to read from my local PC.
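One detail that may explain the behavior: psql meta-commands such as \copy must fit on a single line, so if the command is split as above, the continuation is sent to the server as a plain COPY, which reads server-side files. A one-line client-side sketch:

\copy test(user_id, product_id, value) from '/Users/testuser/test.tsv' with delimiter E'\t' csv header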

extproc env variables oracle 11g

Posted: 15 Oct 2013 09:27 AM PDT

I have oracle 11g with extproc separately configured in listener.ora.

Users report that some environment variables that should be exported are not set.

From where does extproc get its environment, besides the ENV entry in its definition in listener.ora? Does it come from the shell that started the listener? Why do variables included in ENV not appear?

How could I efficiently check which environment variables extproc has set?

Cascading Inserts in MySql

Posted: 15 Oct 2013 02:37 PM PDT

I have a users table that has a one-to-one relationship to a user_preferences table (primary/foreign key user_id). When a new row is added to the users table (new id added), is there a way to set up the relationship between the two tables so that a row with the new user id is also added to user_preferences?
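A sketch of the usual approach, a trigger on the parent table (the column names are assumed from the question):

CREATE TRIGGER users_ai
AFTER INSERT ON users
FOR EACH ROW
    INSERT INTO user_preferences (user_id) VALUES (NEW.id);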

Need to suppress rowcount headers when using \G

Posted: 15 Oct 2013 02:27 PM PDT

Is there a command to suppress the row-count headers and asterisks when using \G to execute a SQL statement? I am executing mysql with the -s and --skip-column-name options, but these don't suppress the row counts.

multivalued weak key in ER database modeling

Posted: 15 Oct 2013 04:27 PM PDT

I was wondering, since I didn't find any clarification for this: I want to store movies that exist in different formats (DVD, Blu-ray, etc.), where the price for each format differs and so does the quantity of each format, so I came up with this:

[ER diagram: first design]

Is this correct from a design perspective? Does this imply redundancy? I don't understand how this will be stored in a table. Would it be better to do it like this:

[ER diagram: alternative design]

Thanks in advance.

EDIT: I'm adding some more descriptive information about what I want to store at this point of the design. I want to store information about sales. For each movie that exists in the company I need to store format, price and stock quantity. I will also need to store customer information (a unique id, name, surname, address, the movies that he/she has already bought, and his/her credit card number). Finally, I will have a basket that temporarily keeps items (let's suppose that other items exist apart from movies) that the customer wants to buy.
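For illustration, here is one common relational rendering of a movie with per-format price and quantity, i.e. the weak entity keyed by its owner; all names are assumptions:

CREATE TABLE movie (
    movie_id INT PRIMARY KEY,
    title    VARCHAR(200) NOT NULL
);

CREATE TABLE movie_format (
    movie_id INT NOT NULL,
    format   VARCHAR(20) NOT NULL,      -- 'dvd', 'bluray', ...
    price    DECIMAL(6,2) NOT NULL,
    quantity INT NOT NULL,
    PRIMARY KEY (movie_id, format),     -- weak key: depends on the movie
    FOREIGN KEY (movie_id) REFERENCES movie(movie_id)
);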

Microsoft Office Access database engine could not find the object 'tableName'

Posted: 15 Oct 2013 08:39 PM PDT

First, a little background: I am using MS Access to link to tables in an Advantage database. I created a System DSN. In the past in Access I've created a new database and, using the external data wizard, successfully linked to tables. Those databases and the linked tables are working fine.

Now I am trying to do the same thing: create a new Access db and link to this same DSN. I get as far as seeing the tables, but after making my selection I get the error: "The Microsoft Office Access database engine could not find the object 'tableSelected'. Make sure the object exists and that you spell its name and the path name correctly."

I've tried creating another data source (system and user) with no luck. Environment is Win XP, Access 2007, Advantage DB 8.1.

MYSQL 5.5 Fail start Fedora 16

Posted: 15 Oct 2013 12:27 PM PDT

I installed mysql and mysql-server from the repos (MySQL version 5.5). Then I tried to start it, but got an error.

[root@server]# service mysqld start
Redirecting to /bin/systemctl start mysqld.service
Job failed. See system logs and 'systemctl status' for details.

Here is the log:

121118  2:41:38 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
121118  2:41:38 [Note] Plugin 'FEDERATED' is disabled.
121118  2:41:38 InnoDB: The InnoDB memory heap is disabled
121118  2:41:38 InnoDB: Mutexes and rw_locks use GCC atomic builtins
121118  2:41:38 InnoDB: Compressed tables use zlib 1.2.5
121118  2:41:38 InnoDB: Using Linux native AIO
/usr/libexec/mysqld: Can't create/write to file '/tmp/ibhsfQfU' (Errcode: 13)
121118  2:41:38  InnoDB: Error: unable to create temporary file; errno: 13
121118  2:41:38 [ERROR] Plugin 'InnoDB' init function returned error.
121118  2:41:38 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
121118  2:41:38 [ERROR] Unknown/unsupported storage engine: InnoDB
121118  2:41:38 [ERROR] Aborting

121118  2:41:38 [Note] /usr/libexec/mysqld: Shutdown complete

121118 02:41:38 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

Fresh installation, nothing changed prior to that, just ran yum update.

Here is the systemctl status trace

[root@linyansho /]# systemctl status mysqld.service
mysqld.service - MySQL database server
  Loaded: loaded (/lib/systemd/system/mysqld.service; disabled)
  Active: failed since Sun, 18 Nov 2012 02:45:19 +0300; 5min ago
  Process: 864 ExecStartPost=/usr/libexec/mysqld-wait-ready $MAINPID (code=exited, status=1/FAILURE)
  Process: 863 ExecStart=/usr/bin/mysqld_safe --basedir=/usr (code=exited, status=0/SUCCESS)
  Process: 842 ExecStartPre=/usr/libexec/mysqld-prepare-db-dir %n (code=exited, status=0/SUCCESS)
  CGroup: name=systemd:/system/mysqld.service

Sql Anywhere 11: Restoring incremental backup failure

Posted: 15 Oct 2013 11:27 AM PDT

We want to create remote incremental backups after a full backup. This will allow us to restore in the event of a failure and bring up another machine with as close to real time backups as possible with SQL Anywhere network servers.

We are doing a full backup as follows:

dbbackup -y -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\full

This makes a backup of the database and log files and can be restored as expected. For incremental backups I've tried both live and incremental transaction logs with a renaming scheme if there are multiple incremental backups:

dbbackup -y -t -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\inc

dbbackup -y -l -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\live

However, on restore I always receive an error when applying the transaction logs to the database:

10092: Unable to find table definition for table referenced in transaction log

The transaction log restore command is:

dbeng11 "c:\dbpath\dbname.db" -a "c:\backuppath\dbname.log"  

The error doesn't specify what table it can't find but this is a controlled test and no tables are being created or dropped. I insert a few rows then kick off an incremental backup before attempting to restore.

Does anyone know the correct way to do incremental backup and restore on Sql Anywhere 11?

UPDATE: Thinking it may be related to the complexity of the target database, I made a new blank database and network service. Then I added one table with two columns and inserted a few rows. I made a full backup, then inserted and deleted a few more rows and committed the transactions, then made an incremental backup. This also failed with the same error when attempting to apply the incremental backups of transaction logs after restoring the full backup.

Edit:

You can follow this link to see the same question with slightly more feedback on SA: http://sqlanywhere-forum.sybase.com/questions/4760/restoring-incrementallive-backup-failure
