Wednesday, May 1, 2013

[MS SQL Server] Can resource governor apply based on what DB is being accessed?

Can resource governor apply based on what DB is being accessed?

Posted: 01 May 2013 04:16 AM PDT

Sadly, I don't have a "play" environment at work anymore, and I forgot to putz with this myself at home, so I'm asking.

There is a user DB at my job which was recently moved to a SQL 2008 R2 system from SQL 2005. The end users report that since the move, the response time from the DB has gotten longer. The contact person stated that on the previous server, MAXDOP was set to 1, and requested we do that on the new server. To which I explained that, being a server-wide setting, this would impact everyone else with DBs on the server, likely adversely affecting everyone's performance. That was the last I heard from them.

Since then, I was poking around in Resource Governor, which is currently not set up on any of the SQL servers, and saw that it can apply a MAXDOP setting that overrides the server setting. So, if they come back pushing for MAXDOP, is it possible to create a classification function for Resource Governor so that any queries against one particular DB will be processed by a workload group which forces MAXDOP = 1?

I've tried Googling for this and not found a clear answer, so I'm turning to the forums.

Thanks!
Jason
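A minimal sketch of what this could look like, assuming a hypothetical database name (SalesDB) and made-up pool/group/function names. One caveat worth knowing: the classifier runs at login time, so ORIGINAL_DB_NAME() reflects the database named in the connection string, not databases the session touches later.

```sql
USE master;
GO
-- A pool and a workload group that caps parallelism for this one DB's sessions.
CREATE RESOURCE POOL poolMaxdop1;
GO
CREATE WORKLOAD GROUP wgMaxdop1
WITH (MAX_DOP = 1)
USING poolMaxdop1;
GO
-- Classifier: route logins whose connection string targets SalesDB
-- into the MAXDOP=1 group; everyone else stays in the default group.
CREATE FUNCTION dbo.fnClassifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF ORIGINAL_DB_NAME() = N'SalesDB'
        RETURN N'wgMaxdop1';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```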

Proper way to move data in large filegroup using DBCC SHRINKFILE emptyfile option

Posted: 30 Apr 2013 10:03 AM PDT

Hi,

I have a filegroup with one large data file that is 300GB, and I want to create 10 x 30GB files and spread the data across them for better performance. Here are the steps I took in my test environment:

1) Added 10 x 30GB files to the filegroup.
2) Used DBCC SHRINKFILE with the EMPTYFILE option to move the data evenly into the 10 new files (took approx 15 hrs).
3) Stopped/restarted SQL services so that I was able to shrink the original 300GB data file (now empty) down to 30GB to match the other files and reclaim the space.
4) The process generated 360GB of transaction log. Since this is my test environment I just shrank the logs back down, but in production we are using FULL recovery mode with 15-minute log backups, so I plan to let a log backup process and then shrink the logs back down to the normal 15GB size.
5) The process fragmented the indexes pretty badly, so I rebuilt all my indexes, which took about 8 hours.

I have never done this before and have been doing lots of reading online, but I want to make sure that I'm not missing something obvious in my process. Here is what DBCC SHOWFILESTATS shows:

Fileid  TotalExtents  UsedExtents
5       480000        479167
20      491520        405854
21      491520        406778
22      491520        407820
23      491520        408629
24      491520        408280
25      491520        408915
26      491520        407728
27      491520        407528
28      491520        407909
29      491520        392160

This was the FILESTATS before: 5 4 4705128 4253047

Any feedback/comments would be greatly appreciated.

Thanks,
Isabelle
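For reference, the steps above boil down to something like the following. This is a sketch only: the database, file, filegroup, and table names are hypothetical placeholders.

```sql
USE MyDB;
GO
-- 1) Add a new 30 GB file to the existing filegroup (repeat for each of the 10 files).
ALTER DATABASE MyDB
ADD FILE (NAME = N'MyDB_Data02',
          FILENAME = N'E:\Data\MyDB_Data02.ndf',
          SIZE = 30GB)
TO FILEGROUP [DataFG];
GO
-- 2) Push all pages out of the original file; SQL Server distributes the
--    moved extents across the other files in the same filegroup.
DBCC SHRINKFILE (N'MyDB_Data01', EMPTYFILE);
GO
-- 3) Shrink the (now nearly empty) original file to match its siblings.
--    Target size is specified in MB: 30720 MB = 30 GB.
DBCC SHRINKFILE (N'MyDB_Data01', 30720);
GO
-- 4) The page movement fragments indexes badly, so rebuild afterwards.
ALTER INDEX ALL ON dbo.MyLargeTable REBUILD;
GO
```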

MDW and upload schedules from different instances

Posted: 25 Mar 2013 11:08 AM PDT

I have recently set up MDW across our SQL estate. There are approximately 40 instances being monitored, all sending their data to a central sysutility_mdw database. I started to run into issues with uploads failing and timing out, so I staggered the upload schedules. Each set of upload jobs from an instance still kicked off at the same time (every 15 mins), but only ran once a day. However, when I now look at the reports, a "No Data Available" message appears despite the upload completing without errors.

I then staggered one set of uploads to start 2 minutes apart, uploading every 15 mins and every 17 mins, and this report is now displaying data again.

If the upload only happens once a day, where has the data gone? Each of these is running in cached mode, so I assumed all data for the last 24 hours would just be uploaded, but this is not the case?

Can anyone shed any light on this? In terms of locking issues on the sysutility_mdw database, what would be the best upload schedule? If I have 10 instances, should the upload times be staggered across instances so data is hitting the sysutility_mdw database at different times, or is it safe to just stagger the upload times between jobs on the one instance but upload the data at the same time from all instances? Or should each instance have its own MDW database?

Hope this makes sense?

Getting bizarre error while installing SQL 2008 SP2 on standalone server

Posted: 06 Oct 2010 07:02 AM PDT

While trying to upgrade a SQL 2008 R2 installation to SQL 2008 SP2, I'm getting the following error in the rule-checking phase of the installation:

[quote]Rule "Not clustered or the cluster service is up and online." failed. The machine is clustered, but the cluster is not online or cannot be accessed from one of its nodes. To continue, determine why the cluster is not online and rerun setup instead of rerunning the rule, since the rule can no longer detect a cluster environment correctly.[/quote]

In the rule-check dialog I see these 3 items, among others:

- Not clustered or the cluster service is up and online - Status: failed
- Cluster group owned by local computer - Status: passed
- Cluster Resource DLL Update Restart Check - Status: passed

If the server is not clustered, why do I have the last 2 items? It's weird...

how to reset SA password

Posted: 30 Apr 2013 07:44 AM PDT

I have a couple of SQL servers where nobody knows the SA password and nobody has sysadmin rights to reset it. There is no BUILTIN login either. How can I reset the SA password?
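One recovery path (a sketch, assuming a default instance named MSSQLSERVER and that you belong to the local Windows Administrators group): when SQL Server is started in single-user mode, members of the local Administrators group can connect with sysadmin rights even without an existing login, and from there reset sa. Run from an elevated command prompt; the instance is unavailable to users while this runs, and the example password is a placeholder.

```
:: Stop the instance, then restart it in single-user mode (/m).
net stop MSSQLSERVER
net start MSSQLSERVER /m

:: Connect with Windows authentication and reset/enable sa.
sqlcmd -E -Q "ALTER LOGIN sa WITH PASSWORD = N'NewStrongPassword!1'; ALTER LOGIN sa ENABLE;"

:: Return to normal multi-user operation.
net stop MSSQLSERVER
net start MSSQLSERVER
```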

SSIS and T-SQL's CHECKPOINT

Posted: 30 Apr 2013 06:29 AM PDT

Currently we have a rather elaborate backup job we run nightly on a very large reporting server. We do our backups, and for the databases we aren't backing up (due to them being replicated from a live server) we run a T-SQL CHECKPOINT on them. All in all, with every step, this is over 157 steps, and when we add a new database it's quite elaborate adding it in, since we try to keep things alphabetized.

I've been building an SSIS package so all I have to do is fix 1 line of code to add a new database. This part works great! I won't go into detail with this other than that it works very well.

I want to add the next step to this, which is a loop that will read T-SQL code and get a list of replicated databases that we run CHECKPOINT on. Now, to be clear, this is [b][i][u]NOT[/u][/i][/b] the checkpoint package within SSIS; this is the T-SQL command "CHECKPOINT". I've added an "Execute SQL Task" within a Sequence Container, which is housed within a ForEach Loop Container that reads from another "Execute SQL Task". The looping works fine. I have the T-SQL CHECKPOINT within my "Execute SQL Task", but I'm not 100% sure if this is working right.

So this leads me to several questions:

1. Will this work?
2. Is there a way to check whether my CHECKPOINT ran on my list of databases?
3. If this isn't the right way, how would I do it?

I hope this makes sense for everyone. Thank you for your time.
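As an alternative to driving the loop from SSIS, the whole thing could live in a single "Execute SQL Task". A sketch, with one loud assumption: the filter on which databases count as "replicated" (is_published here) may not match how your environment flags them, so adjust it.

```sql
-- Loop over the replicated databases and issue a T-SQL CHECKPOINT in each.
DECLARE @db sysname, @sql nvarchar(300);

DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
    SELECT name
    FROM sys.databases
    WHERE is_published = 1          -- assumed marker for "replicated" DBs
      AND state_desc = 'ONLINE';

OPEN dbs;
FETCH NEXT FROM dbs INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- CHECKPOINT acts on the current database, so switch context first.
    SET @sql = N'USE ' + QUOTENAME(@db) + N'; CHECKPOINT;';
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM dbs INTO @db;
END

CLOSE dbs;
DEALLOCATE dbs;
```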

How to relocate the msmdsrv.ini file for OLAP

Posted: 20 Feb 2013 11:16 PM PST

We recently purchased a high-performance SAN, and I'm in the process of migrating stuff off the old SAN drives onto new ones. Although I have moved the OLAP data onto one of the new SAN LUNs, I still have the Analysis Services configuration file located on an old drive, and I would like to move it. If it matters, the SQL Server is clustered along with Analysis Services. I'm hoping that this can be accomplished without needing to completely re-install it on both cluster nodes. Any help or advice is greatly appreciated.

Thanks, folks!
