Wednesday, May 15, 2013

[MS SQL Server] High Memory utilization alerts in SQL Server 2008

High Memory utilization alerts in SQL Server 2008

Posted: 15 May 2013 04:51 AM PDT

Hi, we are getting alerts from our monitoring tool, Idera SQL Diagnostic Manager (which is used as a monitoring and notification tool). We have been receiving these alerts for the last 4 days. I don't see anything unusual on the server, such as sessions using too much memory, and I'm not sure how to proceed or how to decide whether it's a SQL memory issue or an OS issue.

Environment details:
SQL Server 2008 SP1, Standard Edition, clustered with active/passive nodes
Windows Server 2008 Enterprise SP2
Memory: 12 GB
SQL memory config: Min 7168 MB, Max 8192 MB
Processors: 16

Please let me know if you need any further information.
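To tell SQL memory pressure from OS memory pressure, a common first step is to query the memory DMVs rather than rely on the monitoring tool's alert alone. A minimal sketch, using `sys.dm_os_process_memory` and `sys.dm_os_memory_clerks` (both available in SQL Server 2008):

```sql
-- Overall process memory use and low-memory signals from the OS
SELECT physical_memory_in_use_kb / 1024 AS physical_memory_in_use_mb,
       memory_utilization_percentage,
       process_physical_memory_low,    -- 1 = OS signalled physical memory pressure
       process_virtual_memory_low
FROM sys.dm_os_process_memory;

-- Top memory clerks, to see which component is actually consuming memory
SELECT TOP (10) type,
       SUM(single_pages_kb + multi_pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY memory_mb DESC;
```

If the low-memory flags are 0 and the clerks look normal, the alert threshold in the monitoring tool may simply be set below the configured max server memory of 8192 MB.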

Refresh tables from production to Dev

Posted: 15 May 2013 12:30 AM PDT

Hi, I have a requirement to refresh some tables in dev from prod, and we need to do this every week, so I need to automate it. I thought of using SSIS, but I've never done it. If anyone has, please let me know the process or your suggestions. Thanks
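SSIS is one option, but a weekly refresh can also be automated with a SQL Agent job that reloads the tables over a linked server. A sketch under assumed names (a linked server `PRODSRV`, databases `ProdDB`/`DevDB`, and an example table `dbo.Customers` — all hypothetical, adjust to your environment):

```sql
-- Hypothetical job step: truncate the dev copy and reload it from prod.
-- TRUNCATE requires no foreign keys referencing the table.
BEGIN TRY
    BEGIN TRANSACTION;
    TRUNCATE TABLE DevDB.dbo.Customers;
    INSERT INTO DevDB.dbo.Customers (CustomerID, CustomerName)
    SELECT CustomerID, CustomerName
    FROM PRODSRV.ProdDB.dbo.Customers;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    RAISERROR ('Dev refresh failed', 16, 1);  -- so the Agent job reports failure
END CATCH;
```

Schedule the job weekly; SSIS becomes more attractive once you need transformations, many tables, or restartability.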

ERD drawings for the tables not related

Posted: 15 May 2013 04:34 AM PDT

I am struggling to find the relationships between tables. There are no foreign key relationships on the tables and I need to work on the ERD project. What's the approach I need to take? I also don't know the table usage. Any suggestions?
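When no foreign keys are declared, one starting point is a heuristic query against the catalog views: columns in different tables that share a name and data type are candidate (undeclared) relationships to verify manually before drawing the ERD. A sketch, assuming key columns follow an `...ID` naming convention (an assumption, not guaranteed by the schema):

```sql
-- Candidate relationships: same column name and type in two different tables.
-- Review each pair by hand; this only suggests, it does not prove, a relationship.
SELECT  t1.name AS candidate_referencing_table,
        c1.name AS shared_column,
        t2.name AS candidate_referenced_table
FROM sys.columns c1
JOIN sys.tables  t1 ON t1.object_id = c1.object_id
JOIN sys.columns c2 ON c2.name = c1.name
                   AND c2.system_type_id = c1.system_type_id
JOIN sys.tables  t2 ON t2.object_id = c2.object_id
WHERE t1.object_id <> t2.object_id
  AND c1.name LIKE '%ID'   -- assumed naming convention for keys
ORDER BY c1.name, t1.name;
```

Running a trace or examining cached query plans for actual join patterns can then confirm which candidates are used in practice.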

SQL Profiler sp_trace_setfilter parameters

Posted: 15 May 2013 12:36 AM PDT

Hello, I am trying to exclude some users from being captured by a trace. If the trace ID I have is 2, I see that the syntax mentions the below, so would these be the parameters for each user?

[code="sql"]EXEC sp_trace_setfilter 2, 10, 0, 7, N'userhere';
-- EXEC sp_trace_setfilter
--     @trace_id = 2,
--     @columnid = 10,           -- app name column
--     @logicaloperator = 1,     -- logical "or"
--     @comparisonoperator = 0,  -- equals
--     @value = N'SQL Profiler';[/code]

Thanks
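For excluding users (rather than applications), a sketch of the usual pattern follows; the column IDs and operator codes here are from my reading of the `sp_trace_setfilter` documentation and are worth verifying against Books Online. Column 11 is LoginName, comparison operator 1 is "Not equal", and multiple "not equal" filters are chained with logical operator 0 (AND); the login names are placeholders:

```sql
-- Exclude two logins (hypothetical names) from trace ID 2.
-- "Not equal" filters on the same column must be ANDed, not ORed,
-- or the filter matches every row.
EXEC sp_trace_setfilter @traceid = 2, @columnid = 11,   -- 11 = LoginName
     @logical_operator = 0,                             -- 0 = AND
     @comparison_operator = 1,                          -- 1 = Not equal
     @value = N'user_one';
EXEC sp_trace_setfilter @traceid = 2, @columnid = 11,
     @logical_operator = 0, @comparison_operator = 1,
     @value = N'user_two';
```

The example in the question uses column 10 (ApplicationName) with NOT LIKE (operator 7), which is the same idea applied to the application name instead of the login.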

The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users. Ask the database administrator to check the lock and memory configuration for this instance, or to check for long-running transactions.

Posted: 14 May 2013 10:11 PM PDT

I am using MS SQL 2008 R2. I am getting the error below: "The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users. Ask the database administrator to check the lock and memory configuration for this instance, or to check for long-running transactions." RAM is 64 GB, the general properties of the SQL instance show 65536 MB (64 GB) under memory, and the min and max memory settings are at their defaults (0 and 2147483647) in SQL Server. Please help me resolve this issue. Thanks, Sandeep
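Two things commonly checked for this error are how much memory the lock manager is consuming and whether max server memory has been left uncapped. A sketch of both (the 57344 MB value is purely illustrative for a 64 GB box, not a recommendation):

```sql
-- How much memory is the lock manager holding?
SELECT type, SUM(single_pages_kb + multi_pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
WHERE type = 'OBJECTSTORE_LOCK_MANAGER'
GROUP BY type;

-- Confirm the 'locks' option is 0 (dynamic lock memory), then cap max server
-- memory so locks and the OS are not squeezed by the buffer pool.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'locks';
EXEC sp_configure 'max server memory (MB)', 57344;  -- illustrative value
RECONFIGURE;
```

It is also worth looking for a single transaction acquiring millions of row locks (the error's "long-running transactions" hint); batching such statements usually fixes it without any configuration change.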

Need advice of hardware configuration of SQL server in production environment

Posted: 14 May 2013 04:30 PM PDT

Dear All, we have a SQL Server which is used throughout the day and night, from within and outside the country. We plan to upgrade because of its low performance, since the data has grown huge. We have around 20 databases ranging from 100 MB to 100 GB, a few of which contain binary data. We are unable to do index maintenance, as the files tend to grow more than we expect. Could someone suggest a good hardware configuration for the server, in terms of disks for data and log files, RAM, and CPU? Thanks. KB

Send me some sample types of tickets

Posted: 14 May 2013 04:34 PM PDT

Generally, as a DBA, what types of tickets will we get? Please send some samples.

Resource Governor - how to see what workload group current connections are in?

Posted: 14 May 2013 11:29 PM PDT

We just enabled Resource Governor and are monitoring it via perfmon and the DMVs. Through the DMVs, I can see active counts for each workload group that we have defined. Is there a way to see which workload group current connections have been classified into? I haven't found an answer with the searching I did; maybe I wasn't wording it correctly. Appreciate any input. Thanks, John
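Current sessions can be mapped to their workload group through the `group_id` column on `sys.dm_exec_sessions` (present in SQL Server 2008+). A sketch:

```sql
-- Which workload group and resource pool has each current session
-- been classified into?
SELECT s.session_id,
       s.login_name,
       s.program_name,
       wg.name AS workload_group,
       rp.name AS resource_pool
FROM sys.dm_exec_sessions s
JOIN sys.dm_resource_governor_workload_groups wg
     ON wg.group_id = s.group_id
JOIN sys.dm_resource_governor_resource_pools rp
     ON rp.pool_id = wg.pool_id
WHERE s.is_user_process = 1
ORDER BY wg.name, s.session_id;
```

This is also a quick way to verify that the classifier function is routing logins where you expect.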

Can't Kill SPID “Transaction Rollback in Progress”

Posted: 26 Feb 2013 08:01 AM PST

I'm running into frequent blocking on a development server. I have been correcting the code, or making recommendations to prevent this, but new code keeps getting executed, and by the time I find out about it, it is too late. When I attempt to kill the SPID, the rollback is stuck at 0%. I get the following message when I retry killing the SPID: "transaction rollback in progress. Estimated rollback completion: 0%. Estimated time remaining: 0 seconds." KILL SPID WITH STATUSONLY does not work. I have had to restart the SQL Server service, or set the database OFFLINE, try to bring it back ONLINE, and force it to recover. This is making me nervous. Any suggestions?
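A rollback stuck at 0% is often waiting on something external (a lock, I/O, or a linked-server/OLE DB call that single-threaded rollback cannot interrupt), and `sys.dm_exec_requests` will show what. A sketch, with 53 as a hypothetical SPID:

```sql
-- Inspect what a stuck rollback is actually waiting on (SPID 53 is hypothetical).
SELECT r.session_id,
       r.command,            -- often shows KILLED/ROLLBACK
       r.status,
       r.percent_complete,
       r.wait_type,          -- the resource the rollback is blocked on
       r.wait_time,
       r.blocking_session_id
FROM sys.dm_exec_requests r
WHERE r.session_id = 53;
```

If the wait is on an external call, the rollback generally cannot finish until that call returns or the service restarts, which matches the behavior described above.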

reduce impact performance impact on the OLTP SQL database

Posted: 14 May 2013 05:42 PM PDT

Hi, our client occasionally needs to load very large transaction files into their OLTP SQL database during periods of heavy "live" API load. How would I structure these imports to reduce the performance impact on the OLTP SQL database? Thanks
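One widely used pattern is to bulk-load the file into a staging table, then trickle rows into the live table in small batches so that no single transaction holds locks for long. A sketch with illustrative table and file names (assumptions, not from the post):

```sql
-- Stage the file first; BATCHSIZE commits every 10k rows so the log
-- and locks stay small, and TABLOCK allows minimally logged loads.
BULK INSERT dbo.Staging_Transactions
FROM 'D:\imports\transactions.csv'          -- illustrative path
WITH (BATCHSIZE = 10000,
      FIELDTERMINATOR = ',',
      ROWTERMINATOR = '\n',
      TABLOCK);

-- Then drain staging into the live table in short transactions.
WHILE 1 = 1
BEGIN
    DELETE TOP (5000) s
    OUTPUT deleted.* INTO dbo.Transactions   -- column lists must match
    FROM dbo.Staging_Transactions s;
    IF @@ROWCOUNT = 0 BREAK;
END;
```

Running the drain loop off-peak, or throttling it with a short WAITFOR DELAY between batches, further reduces contention with the live API traffic.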

deploying changes to a live database

Posted: 14 May 2013 05:29 PM PDT

Hi, what are some major risks of deploying changes to a live database, and how would we mitigate them?
a. Stored procedure changes?
b. Indexing changes?
c. Schema changes?
Thanks

Restart Server to shrink TempDB?

Posted: 14 May 2013 02:13 PM PDT

So I had a bad query blow out the tempdb data file (tempdev, not the log) to 36 GB. It's normally a lot smaller. I'm looking at the best way to resolve the issue and get it back to its normal 1-2 GB size. Restarting the server seems to be one of the most recommended solutions, but I'm not sure if I understand correctly that it automatically fixes the size. If I go to the db files, it says the Initial Size (MB) is 35380. So if I restart SQL, won't it just go back to being 35380 if that's what the initial size is? Do I need to do something beforehand, like this?

[code="sql"]USE tempdb;
GO
DBCC SHRINKFILE (tempdev, 1000);  -- target size in MB
GO[/code]

Then restart after I run that? Or do I not bother and simply restart? Also, if I do have to run that code, do I have to stop the SQL service and put it into single-user mode? It's not something I have to do very often, so I'm a little unsure of the best way forward here. Any help is greatly appreciated.

optimize the schema definition

Posted: 14 May 2013 05:38 PM PDT

1. How would I optimize the following schema definition?

[code="sql"]CREATE TABLE dbo.Orders (
    RowID INT IDENTITY NOT NULL,
    OrderID INT PRIMARY KEY NONCLUSTERED,
    OrderType NVARCHAR(20) NOT NULL,
    OrderDateTime DATETIME NOT NULL
);
CREATE CLUSTERED INDEX ix_cls_Orders_RowID ON dbo.Orders (RowID);
CREATE NONCLUSTERED INDEX ix_Orders_OrderDateTime ON dbo.Orders (OrderDateTime);

CREATE TABLE dbo.LineItems (
    LineItemID INT IDENTITY PRIMARY KEY,
    OrderID INT NOT NULL FOREIGN KEY REFERENCES dbo.Orders (OrderID),
    LineItemType NVARCHAR(20) NOT NULL,
    RegisterID INT NOT NULL,
    LineItemCustomValue1 NVARCHAR(20) NULL,
    LineItemCustomValue2 NVARCHAR(20) NULL,
    LineItemCustomValue3 NVARCHAR(20) NULL,
    LineItemCustomValue4 NVARCHAR(20) NULL,
    LineItemCustomValue5 NVARCHAR(20) NULL,
    LineItemCustomValue6 NVARCHAR(20) NULL,
    LineItemCustomValue7 NVARCHAR(20) NULL,
    LineItemCustomValue8 NVARCHAR(20) NULL,
    LineItemCustomValue9 NVARCHAR(20) NULL,
    LineItemCustomValue10 NVARCHAR(20) NULL
);
CREATE NONCLUSTERED INDEX ix_LineItems_OrderID ON dbo.LineItems (OrderID);[/code]

2. The following query, which utilizes the schema defined above, performs poorly. How would I optimize it?

[code="sql"]SELECT o.OrderDateTime, o.OrderType, li.OrderID, li.RegisterID,
       li.LineItemCustomValue3, li.LineItemCustomValue4
FROM dbo.Orders o WITH (INDEX(ix_Order_OrderDateTime))
JOIN dbo.LineItems li ON o.OrderID = li.OrderID
WHERE li.LineItemCustomValue3 = 'CouponPurchase'
  AND o.OrderDateTime >= DATEADD(dd, -1, CURRENT_TIMESTAMP);[/code]

Clear SQL Plans -- DBCC FREESYSTEMCACHE('SQL Plans')

Posted: 14 May 2013 09:38 AM PDT

I know I have a large number of single-use plans being created on my SQL 2008 Standard server. I have enabled "optimize for ad hoc workloads", which has reduced the size of each plan, but the cache size still increases (just with more plans). I understand that SQL will manage the plan cache based on its own algorithm, but I am checking that I fully understand what the following article suggests. It is basically saying that if the SQL Plans part of the cache is greater than a set threshold, then clear the cache: DBCC FREESYSTEMCACHE('SQL Plans'). Am I correct in saying that doing this would free up more memory for other processes (if it was run every one or two days), and the only hit would be higher compilation times the next time each query was run? So it might be a good thing for me to do this on my prod server, if I have understood the article correctly!

http://www.sqlskills.com/blogs/kimberly/plan-cache-adhoc-workloads-and-clearing-the-single-use-plan-cache-bloat/

Out of interest, on an 8 GB 64-bit machine, the figures returned right now are (using Kimberly's logic):

TotalPhysicalMemory (MB): 8191.000
TotalConfiguredMemory (MB): 0.000
MaxMemoryAvailableToSQLServer (%): 0.000000000000000
MemoryInUseBySQLServer (MB): 7089.000
TotalSingleUsePlanCache (MB): 1695.375
TotalNumberOfSingleUsePlans: 99134
PercentOfConfiguredCacheWastedForSingleUsePlans (%): 23.916

thanks
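The quantity the article thresholds on can be measured directly before deciding to clear anything. A sketch against `sys.dm_exec_cached_plans`:

```sql
-- How much of the plan cache is tied up in single-use ad hoc plans?
SELECT COUNT(*) AS single_use_plans,
       SUM(CAST(size_in_bytes AS BIGINT)) / 1024 / 1024 AS single_use_mb
FROM sys.dm_exec_cached_plans
WHERE usecounts = 1
  AND objtype = 'Adhoc';

-- Only if that total is above your chosen threshold, clear the SQL plan cache:
-- DBCC FREESYSTEMCACHE('SQL Plans');
```

And yes, the cost of clearing is recompilation: every query, including frequently reused ones, pays a compile on its next execution, so running it during a quiet window is usually preferred.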
