Friday, August 9, 2013

[SQL Server 2008 issues] Read committed snapshot: Finding lock on table

Read committed snapshot: Finding lock on table

Posted: 08 Aug 2013 06:40 PM PDT

If an UPDATE command is run on a table and the commit is pending, another user cannot update the table. Is there a query to find the user or PC that is holding the lock on the table?
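One way to answer this (a sketch, not tested against the poster's server) is to join sys.dm_exec_requests, which exposes a blocking_session_id, to sys.dm_exec_sessions, which carries the blocking connection's login and host name:

```sql
-- Who is blocked, who is blocking, and from which login/machine.
-- Run in the affected database; returns nothing if no blocking exists.
SELECT r.session_id           AS blocked_spid,
       r.blocking_session_id  AS blocking_spid,
       s.login_name,                 -- the user holding the lock
       s.host_name,                  -- the PC the connection came from
       t.text                 AS blocking_sql
FROM sys.dm_exec_requests r
JOIN sys.dm_exec_sessions s
  ON s.session_id = r.blocking_session_id
CROSS APPLY sys.dm_exec_sql_text(
       (SELECT c.most_recent_sql_handle
        FROM sys.dm_exec_connections c
        WHERE c.session_id = r.blocking_session_id)) t
WHERE r.blocking_session_id <> 0;
```

sys.dm_tran_locks can be joined in the same way if you also need the specific object the lock request resolves to.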

The transaction log for database 'tempdb' is full. Msg 9002, Level 17, State 4, Line 2

Posted: 10 Feb 2011 07:15 PM PST

Hi All,

I found this error:

The transaction log for database 'tempdb' is full. Msg 9002, Level 17, State 4, Line 2. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases.

Can anybody please explain: if we encounter the above error, what are the steps to resolve it?

Thanks & Regards,
Deepak.A
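A first-pass triage for this error, following the hint in the message itself, might look like the sketch below (diagnostics only; what to do next depends on what the wait description says):

```sql
-- Step 1: see why tempdb's log cannot be reused
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'tempdb';

-- Step 2: find the oldest open transaction that may be pinning the log
DBCC OPENTRAN('tempdb');

-- Step 3: see whether user objects, internal objects, or the version
-- store are driving the space consumption
SELECT SUM(user_object_reserved_page_count)     AS user_pages,
       SUM(internal_object_reserved_page_count) AS internal_pages,
       SUM(version_store_reserved_page_count)   AS version_store_pages
FROM tempdb.sys.dm_db_file_space_usage;
```

A long-running open transaction (ACTIVE_TRANSACTION) is the most common culprit; killing it, or adding log space on another drive, usually clears the 9002 until the root cause is fixed.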

Calculating rows processed and updated

Posted: 08 Aug 2013 11:03 AM PDT

Hi Professionals,

I am running a procedure to calculate the total number of rows updated, but I also want the number of rows it hasn't updated. Could I be going wrong somewhere? Would I need to calculate a SELECT COUNT(*) from the table minus the total rows updated, or something like that? Anyway, here is my procedure:

[code]USE [TestData]
GO
/****** Object: StoredProcedure [dbo].[cleanseSMPN] Script Date: 08/09/2013 09:25:30 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER procedure [dbo].[cleanseSMPN]
as
begin
    declare @TotalRows int = 0, @numrows int;

    --Update the Software Manufacturer and the Product name
    begin
        truncate table myupdates

        UPDATE dbsource
        SET softwaremanufacturer = dbref.Amended_SW_Manufacturer,
            productname = dbref.Amended_Product_Name
        FROM dbo.newtable dbsource
        INNER JOIN
        (
            SELECT Raw_SW_Manufacturer, Amended_SW_Manufacturer,
                   Raw_Product_Version, Amended_Product_Version,
                   Raw_Product_Name, Amended_Product_Name
            FROM datalookuptable
            GROUP BY Raw_SW_Manufacturer, Amended_SW_Manufacturer,
                     Raw_Product_Version, Amended_Product_Version,
                     Raw_Product_Name, Amended_Product_Name
        ) dbref
            ON dbref.Raw_SW_Manufacturer = softwaremanufacturer
            --and dbref.Raw_Product_Version = dbsource.productversion
            and dbref.Raw_Product_Name = productname

        set @NumRows = @@ROWCOUNT;
        set @TotalRows = @TotalRows + 1;

        insert into myupdates(NumRows, TotalRows) values (@NumRows, @TotalRows)
    end
END[/code]
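The "count minus rows updated" idea the poster suggests does work; a sketch of that step, using the table and variable names from the procedure above (and assuming dbo.newtable is the full candidate set), would be:

```sql
DECLARE @NumRows int, @TotalRows int, @NotUpdated int;

-- ... run the UPDATE from the procedure here ...

SET @NumRows = @@ROWCOUNT;                        -- rows the UPDATE touched
SELECT @TotalRows = COUNT(*) FROM dbo.newtable;   -- all candidate rows
SET @NotUpdated = @TotalRows - @NumRows;          -- rows left untouched

-- record both figures (column names assumed from the procedure)
INSERT INTO myupdates (NumRows, TotalRows)
VALUES (@NumRows, @NotUpdated);
```

Note that @@ROWCOUNT must be captured immediately after the UPDATE, before any other statement (including the SELECT COUNT(*)) resets it.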

Replication pre-requisties

Posted: 08 Aug 2013 03:40 PM PDT

Hi All,

I am going to set up replication in an OLTP environment. I would like to know what, as a DBA, I should check before setting up replication. Last time, when I set up replication in such an environment, I hit a memory issue and SQL Server started generating memory dumps; I had to remove replication temporarily. Please let me know if there is a way to determine the memory required for replication.

Thanks,

Strange File issue > "CREATE FILE encountered operating system error 5(Access is denied.) while attempting to open or create the physical file"

Posted: 08 Aug 2013 02:26 AM PDT

Recently I have had to move and restructure my SQL farms and environment. I had detached a SolarWinds database called NetPerfMon that consisted of three data (mdf) files and one log (ldf) file. The database detached without issue. Today one of my engineers needed to retrieve some data, and when I went to restore it I found the files still there. That's great, I thought. Just re-attach. Then I received this error:

"CREATE FILE encountered operating system error 5(Access is denied.) while attempting to open or create the physical file 'd:\userdbs\pathToFile.mdf'"

I listed the files and database names from sys.databases and this database is DEFINITELY not loaded into SQL. When I looked at all the open file handles with Sysinternals "Handle.exe", it lists the NetPerfMon mdf files as being open by the SQL Server process. This server has been rebooted several times during the move since I dropped the original database.

Why (WHY!?!?) would SQL still have these files open? I already worked around the issue by restoring it somewhere else, but I'm dying of curiosity: why would SQL have these files open when the database is not currently attached to the system and hasn't been for several starts? Also, why, when attaching to an EXISTING file, would it be trying to create the data file as the error indicates?

Replication DB migration on new sql server 2008R2

Posted: 08 Aug 2013 07:49 AM PDT

Hello,

We are planning to migrate SQL 2005 and SQL 2008 R2 to a new Windows 2008 R2 server. We have one replicated DB to which we subscribe, and we don't have any control over the publisher. Could you please guide me on how we can handle it? We are planning to do a side-by-side installation using the backup-and-restore method, but I am worried about how to handle the replicated DB, which is also the biggest DB at more than 300 GB.

Thanks,

Messaging from SQL Server Agent to a web service???

Posted: 08 Aug 2013 07:29 AM PDT

Sorry if I didn't place this question in the right forum. Is it possible to send a SQL Server Agent message on a job's completion percentage to a web service? Service Broker? Not sure... anybody got a direction I can head in? Thank you in advance! :-P

Exports other than bcp/Data Flow tasks

Posted: 07 Aug 2013 11:31 PM PDT

I'm looking for file-transfer options other than bcp queryout and Data Flow tasks in SSIS packages. Are there other programmatic processes that are upgradable? I hear that bcp is not quite reliable, but it works for us, and it would take too long to create 114 individual Data Flow tasks in an SSIS package. Thanks.

Strange Job Slowdown 30 secs to 30 minutes

Posted: 08 Aug 2013 03:19 AM PDT

I have a job that runs 22 times a day. Four of those times, it is kicked off by another job.

We recently moved to a SAN (the EMC SAN is a VNX5300). That seemed great; all jobs sped up. Then we virtualized the machine to VMware 5.1. Now all jobs are fine except the one that runs 22 times a day. That one runs fine too, except when it is kicked off by another job at 3:30 AM. It runs fine the other three times it is kicked off by the other job; it only slows down at 3:30 AM.

Normally, the job takes between 30 seconds and 5 minutes depending on the volume of data. Before virtualization, the run at 3:30 AM was taking between 30 seconds and 1 minute. Now it is taking 28 minutes. Nothing else is running on the server at this time; everything else has finished around 3:00. Backups (EMC, third-party solution) start at 12:30 AM, finish at 3:28 AM, and don't slow down any of the other jobs that run between 12:30 and 3:30.

The step in the job that is taking longer is an SSIS package. The job moves data from one database to another on the same server, then does a query. The data from the query is fed to an SSIS loop which creates files. All the files that are created have a timestamp within a minute (i.e. all are 3:55). This leads me to believe the slowdown is in one of the first two queries. The first, larger query, which inserts data into another table, typically has 500-600 records at this hour. The indexes are all under 5% fragmentation.

I plan on enabling logging on the SSIS package tomorrow (today is a business-critical day, no changes allowed). Any ideas?

Generate "One Merged Email" from any query result ran against multiple instances in CMS (Central Management server)

Posted: 03 Aug 2013 05:01 PM PDT

Hello,

We manage our instances via CMS. As a daily process, we run a script against the PROD group under CMS to check last night's backups, job failures, and many other things. We then place the output of those query results into an Excel sheet, manually add colors to the Excel, and send the report to the group. It's been going on for a while. I like the fact that we use CMS for this info, but I would like to change this process now.

I have created a script that runs against any individual server and generates a good-looking HTML email with all the filtering we do in Excel. Now I am wondering: if I can get this script to run against the CMS server and generate one single email with all the data in it, then all our manual work on generating the Excel will be gone. It will save 90 minutes of our dedicated work every day.

The script generates output in temp tables and, from those temp tables, filters out the unwanted stuff and generates an email. The issue is, when I run it on the CMS server, it sends me individual mails from each individual server that is part of the CMS group, so I am getting 90+ emails. Is there any way to send only one merged email with all the data in it, rather than individual emails from all the servers, for any query run against CMS?

I am not sure if this is possible. In the SQL query options there is an option for multiple servers, where by default "merge" is set to true; that is how CMS merges all the results into a single result window. Is there any way to do this? Please advise. Thanks!

cook book

Posted: 07 Aug 2013 10:56 PM PDT

What is a cookbook in SQL, and how do I install it?

difference between sql2005 and sql2008 installation

Posted: 08 Aug 2013 02:12 AM PDT

Hi All,

Can someone tell me the difference between the installation of SQL Server 2005 and SQL Server 2008 R2?

Thanks,
Santosh

Use a Linked Server or move DB location to another Server

Posted: 07 Aug 2013 09:48 PM PDT

I have 2 SQL servers on the same domain:

ServerOLD: running Workgroup Edition 2008 R2 with AccOldDB (old accounts DB, now read only)
ServerNEW: running Standard Edition 2008 R2 with AccNewDB (new 3rd-party accounts system)

To maintain legacy reporting I kept AccOldDB on ServerOLD and added a Linked Server to ServerNEW. In this way I made views that make the new accounts system report data like the old system did. The benefit has been that no calling code/reports have had to change; in fact many people do not realise we have changed systems and that the 2 systems have very different schemas. An example of the views I've changed:

[code="sql"]create view AccOldDB..vCustomers
AS
SELECT tCust.name, tAdd.Add1, tAdd.Add2, tAdd.PostCode, ....
from ServerNEW.AccNewDB..tblCustomer as tCust
INNER JOIN ServerNEW.AccNewDB..tblAddress as tAdd
    ON tCust.CID = tAdd.CID
WHERE tAdd.AddressType = 'Main'[/code]

I'm considering moving AccOldDB from ServerOLD to ServerNEW, thinking it would be quicker. I realised that I don't understand what happens with queries across linked servers. I presume the work is done by ServerNEW and just the results are passed to ServerOLD. Would it benefit from moving AccOldDB to the new server, so both DBs are on the same server? I suppose an even better improvement would be to create the views directly in ServerNEW..AccNewDB, but the 3rd-party vendor frowns on this. Of course moving the DB location would involve some changes to hard-coded connection strings; however, many of the connection strings out there use global File DSNs.

I'd appreciate advice from people more experienced in linked servers than I am. Many thanks

Mind blowing parallelism issue

Posted: 08 Aug 2013 12:14 AM PDT

To be clear, I'm not posting this looking to have anyone tell me exactly how I should set MAXDOP or the cost threshold; I realize this is more voodoo than exact science. What I am seeking is any information or suggestions about a particular issue we had yesterday.

We have an application (OLTP) that has its own database server (SQL 2008 SP2; I know it needs an update, but downtime is hard to come by) and generally things run very well. However, calls started coming in with users saying they could open up "these" records but not "those" records, and the application was timing out while trying to look up the problematic records. The app has decent logging, so I was able to capture the statements it claimed were timeouts and run them manually. These statements came back in a snap, so I have no idea how a timeout could occur on them. I ran a trace of all app activity related to these lookups and all those statements ran fast as well. Of course, before I did all this I looked for the obvious, such as locking/blocking and anything else under the sun. As far as I could tell, the server had nothing going on that was detrimental to performance.

This server is also used by a piggy-back set of applications written in-house that add functionality to the 3rd-party app mentioned previously. There are times when they execute queries that cause parallelism, and over the last 8 months or so I have tweaked the MAXDOP setting trying to find a balance. When I set it low, the users of the 3rd-party app are happy but the add-on apps are not; if I switch it the other way, the situation reverses.

While we were having the issue with the lookups, I checked to see how much parallelism was going on, and it was nothing out of the ordinary. The server is a dual Xeon 4-core with hyper-threading on, so there are sixteen logical processors available, and I had MAXDOP set to eight. I had no reason to change this value based on anything I had seen, but I decided to take a chance and dropped it to four.

What happened? All of the lookup problems went away. I could understand this if all records had an issue, but it was only some of them. These records are all the same type as far as table structure, data types, etc., and the lookup method is the same for either the good or the problematic records. So the shot-in-the-dark change of MAXDOP resolved the issue, but why? Of course we were happy to get past the issue, but we do not like that we have absolutely no idea why this arbitrary change was so effective.
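For reference, the server-wide change described above is done through sp_configure; a per-query hint is one way to reconcile the two workloads without flipping the global setting back and forth (a sketch; the table name in the hint example is a placeholder):

```sql
-- Server-wide: the MAXDOP change that resolved the lookup timeouts
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;

-- Per-query alternative: let the in-house apps keep a higher degree
-- on their heavy statements while the server default stays low
-- SELECT ... FROM dbo.BigTable OPTION (MAXDOP 8);
```

Raising 'cost threshold for parallelism' (also via sp_configure) is another lever: it keeps cheap OLTP lookups serial while still allowing the expensive add-on queries to parallelise.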

Replication: add a new table with out a new snapshot

Posted: 07 Aug 2013 09:40 PM PDT

Hello,

I've got a database with 500 GB data and 200 GB index. I had to add an article (table) to the replication. I want to avoid the replication creating a new snapshot; we need about 7 hours to create the new index in the database. Is it possible to add an article without a new snapshot? I read something saying that only the difference (in my example: 1 table) is stored in the snapshot.

Publisher: SQL 2008 R2 SP2 / OS Windows Enterprise 2008, 48 GB RAM
Subscriber: SQL 2008 R2 SP2 / OS Windows Standard 2008, 48 GB RAM
Distribution server: SQL 2012 / OS Windows Enterprise, 24 GB RAM

Creating the snapshot needed about 30 minutes.
Delivering the snapshot needed about 6.5 hours.
Creating the index needed about 7 hours.

Thanks for your help,
Andreas
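What the poster read about is real: for transactional replication, if the publication is not set to immediate_sync, the Snapshot Agent generates a snapshot containing only the newly added article. A sketch (publication and table names are placeholders; test on a non-production copy first):

```sql
-- 1. Make sure the publication will not regenerate a full snapshot
EXEC sp_changepublication @publication = 'MyPub',
     @property = 'allow_anonymous', @value = 'false';
EXEC sp_changepublication @publication = 'MyPub',
     @property = 'immediate_sync', @value = 'false';

-- 2. Add the new article to the publication
EXEC sp_addarticle @publication = 'MyPub',
     @article = 'NewTable', @source_object = 'NewTable';

-- 3. Refresh the subscriptions, then start the Snapshot Agent;
--    it should produce a snapshot for the new article only
EXEC sp_refreshsubscriptions @publication = 'MyPub';
```

If immediate_sync was true when the publication was created, changing it and adding the article in this order is the key step; otherwise the agent re-snapshots everything.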

CDC records Updates as Insert and Delete

Posted: 06 Mar 2013 05:41 AM PST

I am experimenting with using CDC to perform auditing in one of our user databases. If I perform an INSERT or UPDATE statement in SQL Server Management Studio, the operations are recorded correctly as 2 (INSERT) and 3 (BEFORE UPDATE) and 4 (AFTER UPDATE) but when I use our VB.net application, which passes a table-valued parameter to a stored procedure which then performs an UPDATE statement, cdc records the operations as 1 (DELETE) and 2 (INSERT). Has anyone run into this before? How can I get the stored procedure execution to record the update correctly?Thanks.

How to update Flag based on Maximum Amount

Posted: 07 Aug 2013 10:00 PM PDT

[code]Create Table #Temp
(
    Lnno Varchar(15),
    Custcode varchar(10),
    Flag varchar(10),
    Amount Int,
    Amount_Flag varchar(1)
)

Insert Into #Temp
Values ('1','A1','Cust',1000,''),
       ('1','A2','CoAp',500,''),
       ('1','A3','CoAp',100,''),
       ('1','A4','CoAp',2000,''),
       ('2','B1','Cust',1000,''),
       ('2','B2','CoAp',1000,''),
       ('2','B3','CoAp',1000,''),
       ('2','B4','CoAp',1000,''),
       ('3','C1','Cust',0,''),
       ('3','C2','CoAp',1000,''),
       ('3','C3','CoAp',1000,''),
       ('3','C4','CoAp',5000,'')

Select * from #Temp[/code]

Hi,

I have this data with Lnno, Custcode, Flag and Amount. My requirement is to update Amount_Flag to 'Y' for the maximum Amount within each Lnno:

1) For Lnno = 1 there is one Cust and three CoAp rows, so Amount_Flag should be updated to 'Y' for Custcode 'A4', since it has the maximum amount.
2) For Lnno = 2 the Amount is the same for all rows, so Amount_Flag should be updated on the row with Flag = 'Cust'; that is, priority should be given to Cust.
3) For Lnno = 3 the Cust row has amount 0, but two CoAp rows have the same Amount (1000), so Amount_Flag should be updated on any one of those records.

Please help me. Thanks in advance!
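One possible approach (a sketch against the #Temp table above): rank the rows per Lnno by Amount descending, break ties in favour of Flag = 'Cust', and flag the top-ranked row. Updating through the CTE is a standard T-SQL pattern:

```sql
;WITH ranked AS
(
    SELECT *,
           ROW_NUMBER() OVER (
               PARTITION BY Lnno
               ORDER BY Amount DESC,                         -- highest amount wins
                        CASE WHEN Flag = 'Cust' THEN 0       -- ties go to Cust
                             ELSE 1 END
           ) AS rn
    FROM #Temp
)
UPDATE ranked
SET Amount_Flag = 'Y'
WHERE rn = 1;
```

For Lnno = 1 this flags A4 (highest amount); for Lnno = 2 the tie-breaker picks the Cust row B1; where several CoAp rows tie, ROW_NUMBER picks one of them arbitrarily, which matches requirement 3.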

Find and Replace in a UPDATE Statement

Posted: 08 Aug 2013 12:36 AM PDT

Hi SSC,

I have a text value in one column. Assume the text is: "in a delete, insert, or update trigger". In a single UPDATE statement I want to find multiple text strings and replace each with another text, e.g.:

1) find "delete" and replace it with "del"
2) find "insert" and replace it with "ins"
3) find "update" and replace it with "upd"

Any help please?
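Nesting REPLACE calls does all three substitutions in one statement (a sketch; the table and column names are placeholders for the poster's own):

```sql
-- Each inner REPLACE feeds its result to the next one out,
-- so all three substitutions happen in a single UPDATE.
UPDATE dbo.MyTable
SET TextCol = REPLACE(
                  REPLACE(
                      REPLACE(TextCol, 'delete', 'del'),
                  'insert', 'ins'),
              'update', 'upd');
```

One caveat: REPLACE matches substrings anywhere, so a word like "deleted" would also become "deld"; pad the search strings with spaces or use more careful matching if whole words matter.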

Update

Posted: 07 Aug 2013 11:23 PM PDT

update #InwardTransDetails set Branch='BP66' where Branch='NG26' and logindate='2013-08-07'

is not working. Why?

Attach MDF that wasn't cleanly shutdown

Posted: 06 Aug 2013 02:13 AM PDT

Hi experts, I'm trying to recreate a disaster recovery scenario and simulate a drive going down that hosts the database log (ldf) file. I want to see and document the steps I'd need to perform to recover the database from the mdf file (if at all possible). The steps I've performed so far are:

Create a new database 'Test'
mdf file in the default SQL location
ldf file on a different drive (which happens to be a removable pen drive)

I then remove the pen drive to simulate a drive going down. So now the Test database is inaccessible, right? What I need to know is:

A. Is it possible to recover this database from the remaining mdf file (assuming no full backup exists)?
B. If so, how?

Thanks in advance
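If the database happened to be cleanly shut down when the log disappeared, attaching the mdf alone and letting SQL Server rebuild the log can work (a sketch; the file path is a placeholder):

```sql
-- Attach from the mdf only and rebuild a new, empty log file.
-- Only succeeds if the database was cleanly shut down; a database
-- that lost its log mid-transaction will refuse to attach this way
-- and may need emergency mode plus
-- DBCC CHECKDB (..., REPAIR_ALLOW_DATA_LOSS) instead.
CREATE DATABASE Test
ON (FILENAME = N'C:\SQLData\Test.mdf')   -- placeholder path
FOR ATTACH_REBUILD_LOG;
```

For the scenario in the title (not cleanly shut down), expect the attach to fail; the emergency-mode route can lose in-flight transactions, which is exactly what makes this a good DR drill to document.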

Joins and Counts

Posted: 07 Aug 2013 09:23 PM PDT

Hi,

My SQL knowledge is basic to average, and I need some help counting rows in a join where 2 dates match:

[code]SELECT MT.[Organisation ], MT.[Assignment Number ], MT.[First Name ], MT.[Last Name ],
       MT.[Position Title ], MT.[Site Name ], MT.[Date of Expense ], MT.[Reason ],
       MT.[Expense Item ], MT.[From ], MT.[FromPostcode ], MT.[To ], MT.[ToPostcode ],
       MT.[Step Number ], MT.[Step Mileage ], MT.[Total Journey Mileage ],
       MT.[Total Journey Value ],
       S.[Assignment Number] as [Assignment No.], S.[CRS Identifier], S.[Org Cost Centre],
       SC.[Name], C.[Contact Date]
       --need to add in a new line something like below
       --,count(C.[Contact Date]) when C.[Contact Date] = MT.[Date of Expense ]
FROM [lchs_ref].[dbo].[Mileage_Table2] MT
join lchs__esr_dwh.dbo.dwh_201_staff_list S on S.[Assignment Number] = MT.[Assignment Number ]
join lchs_data_warehouse.dbo.dwh_091_s1_smartcard_list SC on SC.[Smartcard ID] = S.[CRS Identifier]
join nhs_costing.dbo.cost_contacts C on SC.[Name] = C.[DoneBy1][/code]

I don't know if what I am trying is possible, but any help would really be appreciated. Thanks
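One way to get the per-row count without disturbing the rest of the select list is an OUTER APPLY (a sketch using the table and column names from the query above, trimmed to a few columns for readability):

```sql
-- For each mileage row, count the contacts by the same person
-- whose Contact Date equals the Date of Expense.
SELECT MT.[Assignment Number ],
       MT.[Date of Expense ],
       SC.[Name],
       X.MatchingContacts
FROM [lchs_ref].[dbo].[Mileage_Table2] MT
JOIN lchs__esr_dwh.dbo.dwh_201_staff_list S
  ON S.[Assignment Number] = MT.[Assignment Number ]
JOIN lchs_data_warehouse.dbo.dwh_091_s1_smartcard_list SC
  ON SC.[Smartcard ID] = S.[CRS Identifier]
OUTER APPLY
(
    SELECT COUNT(*) AS MatchingContacts
    FROM nhs_costing.dbo.cost_contacts C
    WHERE C.[DoneBy1] = SC.[Name]
      AND C.[Contact Date] = MT.[Date of Expense ]
) X;
```

Unlike moving the date condition into the join, this keeps one output row per mileage record, with zero in MatchingContacts when no contact dates line up.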

Cannot open New SSIS Project in SQL Server 2008 R2

Posted: 02 Nov 2012 12:03 PM PDT

I installed SQL Server 2008 R2 and Visual Studio 2008. However, after several installations and un-installations, I could no longer use SSIS to create new projects. I cleaned out the registry with a registry cleaner from Wise. I deleted the SQL Server and Visual Studio folders as well.

When I create a new SSIS project, I immediately get an error as follows:

"Microsoft Visual Studio"
"An error prevented the view from loading."
"Additional information:"
"Specified Module cannot be Found. (Exception from HRESULT: 0x8007007E)"
"(System.Windows.Forms)"

How do I resolve this problem? What this means is that I cannot create any new SSIS projects.

Cecil
