Thursday, April 4, 2013

[how to] Merge Replication is Slow for Uploads

Merge Replication is Slow for Uploads

Posted: 04 Apr 2013 05:23 PM PDT

I have implemented merge replication with pull subscriptions in a production environment. Initially it worked fine, but nowadays it is too slow to upload any changes to the publisher, while subscribers download changes quickly. Most of the total synchronization time is spent uploading; very little is spent downloading. The subscribers are on a WAN. I need a solution to this upload problem.

Changed server name, now maintenance plan fails

Posted: 04 Apr 2013 06:17 PM PDT

I am using SQL Server 2008 R2. When I try to execute a maintenance plan I get the following error:

Message [298] SQLServer Error: 15404, Could not obtain information about Windows NT group/user 'WIN-6QFI9JAK804\sqladmin', error code 0x534. [SQLSTATE 42000] (ConnIsLoginSysAdmin)

I ran the following queries:

sp_dropserver 'WIN-OldName'
GO
sp_addserver 'VM-MSSQL-2008R2', local
GO

I then restarted the MSSQLSERVER service and the agent service.

When I run select @@servername, I do get VM-MSSQL-2008R2 returned; however, the original error persists when I try to execute the maintenance plan again.

Is there another step that I haven't performed yet?


I ran:

select * from msdb.dbo.sysssispackages  

and noticed that the only plan is called Backups. So I instead ran the query:

UPDATE msdb.dbo.sysssispackages
SET ownersid = SUSER_SID('VM-MSSQL-2008R2\sqladmin')
WHERE name = 'Backups'

I ran the following two selects to make sure that they are the same:

select SUSER_SID('VM-MSSQL-2008R2\sqladmin')
select ownersid from msdb.dbo.sysssispackages WHERE name = 'Backups'

And they both return the same value.

When I run the plan, I get the same error message as before.

(I have created a new backup plan, but I'd still like to figure out why this one isn't working, just for knowledge's sake.)

Sybase SQL - Truncation error occurred. Command has been aborted

Posted: 04 Apr 2013 03:40 PM PDT

I need help with the code below, as it fails with a truncation error:

Truncation error occurred. Command has been aborted.

create table monCacheQuality (
    ServerName sysname
    ,CollectionDateTime smalldatetime not null
    ,PhysicalWrites decimal(15, 0) not null
    ,PhysicalReads decimal(15, 0) not null
    ,LogicalReads decimal(15, 0) not null
    ,CacheQuality decimal(15, 0) not null
    ,CacheHitPct decimal(15,4) not null
    )

-- Main code starts here
declare @physical_read1 decimal(15, 0)
    ,@logical_read1 decimal(15, 0)
    ,@physical_write1 decimal(15, 0)
    ,@cache_search1 decimal (15,4)

declare @physical_read2 decimal(15, 0)
    ,@logical_read2 decimal(15, 0)
    ,@physical_write2 decimal(15, 0)
    ,@cache_search2 decimal (15,4)

while (1=1)
begin
    select @physical_write1 = PhysicalWrites
        ,@physical_read1 = PhysicalReads
        ,@logical_read1 = LogicalReads
        ,@cache_search1 = CacheSearches
    from master..monDataCache

    waitfor delay '00:00:20' -- Log every 1 min

    select @physical_write2 = PhysicalWrites
        ,@physical_read2 = PhysicalReads
        ,@logical_read2 = LogicalReads
        ,@cache_search2 = CacheSearches
    from master..monDataCache

    insert monCacheQuality
    select @@servername as ServerName
        ,getUTCdate()
        ,@physical_write2 - @physical_write1
        ,@physical_read2 - @physical_read1
        ,@logical_read2 - @logical_read1
        ,case
            when @physical_read2 - @physical_read1 = 0
                then -1
            else (@logical_read2 - @logical_read1) / (@physical_read2 - @physical_read1)
            end as CacheQuality
        ,100 - (((@physical_read2 - @physical_read1) / (@cache_search2 - @cache_search1)) * 100) as CacheHitPct
end

Deadlocks in Small Table

Posted: 04 Apr 2013 02:56 PM PDT

Our application is seeing deadlocks while inserting records into a table shortly after installation, while the table is small. These deadlocks eventually go away as the table fills up.

The application (.NET 4.0) spins up a number of threads for document processing. After processing each document, a thread inserts a new record into the database with a .NET-generated GUID as the clustered primary key. The insert is happening entirely within a transaction, and we are specifying ROWLOCK on the insert operation to try to minimize the impact of the operation (also happened without ROWLOCK).

Here is an example of the error we are seeing:

System.Data.SqlClient.SqlException (0x80131904): Transaction (Process ID 75) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

The application has a retry mechanism, and once the document table has a few hundred rows, we stop seeing this.

Our inquiries into these situations haven't been able to find any definitive cause. Our only leading theory right now is that there is some unintended lock escalation because of the non-sequential nature of GUIDs, but we're not sure. We're considering trying sequential GUIDs as a way to increase insert/indexing performance and maybe fix the problem indirectly, but switching GUID algorithms would be problematic for existing installations.

Why might we be seeing this very distinctive behavior, and how might we fix it?
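A quick way to see why random GUID keys behave differently from sequential ones in a clustered index is to count how many inserts land before the current maximum key, i.e. in the middle of the key order rather than appended at the end. This sketch only models the key ordering, not SQL Server's locking; the sequential-key format here is a made-up stand-in for NEWSEQUENTIALID-style keys:

```python
import uuid

def mid_index_inserts(keys):
    # Count inserts that land before the current maximum key, i.e. in the
    # middle of a clustered index instead of being appended at the end.
    mid = 0
    biggest = ""
    for k in keys:
        if k < biggest:
            mid += 1
        else:
            biggest = k
    return mid

# 1000 random v4 GUIDs vs. 1000 keys with a strictly increasing prefix.
random_guids = [uuid.uuid4().hex for _ in range(1000)]
sequential_guids = [f"{n:08x}" + uuid.uuid4().hex[8:] for n in range(1000)]
```

With random keys nearly every insert lands mid-index, so concurrent threads touch pages all over the tree; with sequential keys none do, and inserts concentrate on the last page. That ordering difference is what the sequential-GUID theory above is pointing at.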

Preferred Methods Of Indexing MS SQL Server Audit Data To Splunk

Posted: 04 Apr 2013 12:56 PM PDT

What is your preferred method for indexing MS SQL Server for Splunk? I am collecting audit data in various environments by various means (profiler, app logs, and extended events) but I hope to consolidate them all into extended events moving forward.

However, one or two environments have Splunk connected, and we want to ensure all security logs are collected and sent to the Splunk server. I have seen a few ways of doing this but I'm not sure which would be preferred or the 'best practice'. I haven't seen much in the Splunk community on this, since really it's up to us to get it into Splunk however we can, so I figured I'd ask here.

3 Requirements:

Make sure Splunk can index the darn thing. Prefer minimal user scripting to accomplish it, if possible. Basically make it a text or csv file.

Minimal risk of 'fudging' the audit logs before Splunk gets them. I want them written to Splunk, or to the text file that is indexed, as fast as possible. This is to reduce 'man in the middle' attacks where audit logs are fudged in between pulls and syncs. Also, we might be exposed to duplicate entries in certain scenarios.

Minimize DBA access. I understand the DBAs with sa rights for the enterprise will always be able to get around issues, but we could vastly minimize our access to it. I am thinking of a solution where security really manages this, because sometimes even DBAs go bad.


1- Collect profiler data and use the .NET trace classes to write a mini-app that exports to a text file for Splunk. I haven't done this before, but it seems like it would resolve 'having data read immediately'. It requires a server-side trace, but in 2012 you can start it up without having to enable server start-up stored procs, which would go against the security standard.


Positives: Profiler is easy and everyone knows how to use it inside and out, so it won't be dependent on me.

Easy .NET integration.


Negatives: Profiler sucks compared to extended events.

Will have to ensure it is always collecting data. In 2012 it's easy but in 2008 you will have issues if you don't use start up stored procs, and will have to get creative with jobs. Even then, you might miss some logging.

It is being deprecated.

2- Replace profiler with Extended Events.


Positives: Will always start up in 2008 and later without any special parameters or start-up stored procs.

Very light weight.

Preferred new method


Negatives: How the heck do you get the file saved in a text or csv format so Splunk can easily access it? I haven't seen any way to do that, and my Pluralsight sub ran out :/
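One option for the text/csv gap is a small post-processing script that flattens each event's XML into a CSV row Splunk can index. The sketch below is only illustrative: the XML fragment is a simplified stand-in for what the extended events file target produces, and the event and field names are assumptions, not captured output:

```python
import csv
import io
import xml.etree.ElementTree as ET

def event_to_row(event_xml):
    # Flatten one <event> fragment: event name, timestamp, and each
    # <data name="..."><value>...</value></data> become CSV columns.
    event = ET.fromstring(event_xml)
    row = {"event": event.get("name"), "timestamp": event.get("timestamp")}
    for data in event.iter("data"):
        row[data.get("name")] = data.findtext("value")
    return row

# Made-up sample fragment, shaped like a simplified extended event.
sample = """<event name="sql_statement_completed" timestamp="2013-04-04T12:00:00Z">
  <data name="statement"><value>SELECT 1</value></data>
  <data name="duration"><value>42</value></data>
</event>"""

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["event", "timestamp", "statement", "duration"])
writer.writeheader()
writer.writerow(event_to_row(sample))
```

A scheduled job could run this over newly closed event files and drop the CSV where a Splunk forwarder monitors it.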

3- Log the data to a locked down table and have splunk read that table. Put proper permissions in place where a dba/sec admin can modify the trigger/service broker app that logs it and alert secops/dbas if someone changes anything in that table. Problem is I have only found Splunk to be able to query MySQL and not MS SQL Server. Perhaps I could run a powershell script to continually query that table or get creative with a trigger that starts a powershell session (have never done that before) to reduce man in the middle editing the files.

Positives: Meets the requirements

Reduces man in the middle attack if we can have it update the file immediately.

Negatives: It's perhaps more complicated than it needs to be.

4- Use this beta product.


Positives: The security team will manage it, which is preferred since this is a security concern, and part of it IMO is protecting the data from DBAs as well.

Minimal overhead on the DBA team.

Negatives: It's beta.

No real documentation.

What will happen if I change the compatibility level of Distribution Database

Posted: 04 Apr 2013 01:56 PM PDT

I have implemented merge replication in a production environment. It was initially on SQL Server 2005 and was later upgraded in place to SQL Server 2008 R2. All databases' compatibility levels were then changed to 100, except for the distribution database, which is still at the SQL Server 2005 compatibility level.

What impact will happen if I change the distribution database compatibility to 100?

Can I deactivate log shipping jobs without raising errors?

Posted: 04 Apr 2013 01:06 PM PDT

I've set up log shipping from Server A to Server B. I then set it up from B to A when B was the primary.

I've reverted back to A being primary and disabled the backup, copy and restore jobs associated with B to A shipping. However, I have a failure of the LSAlert job on both A and B.

Is there a way to suppress these errors, or must I remove log shipping from B to A while A is primary?

My goal is to leave the log shipping configuration in place for DR, but have it not raise errors stating that databases have not been sync'd.


update column based on the sort order of another query

Posted: 04 Apr 2013 12:51 PM PDT

I'm trying to add arbitrarily ordered records to a database so that they can be sorted by the UI (or when I query the database). My problem is that I already have the list, and I need to add a default sorting based on alphabetical order. I'm thinking I should be able to do this with a subquery or coalesce, but I can't get it quite right. I'm doing this on MySQL, so I'm hoping it's possible to do it at the database level.
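One pattern for the default ordering is to set the sort column to each row's alphabetical rank with a correlated subquery. A minimal sketch using SQLite (table and column names are made up, and it assumes the names are unique); in MySQL the self-referencing subquery usually has to be wrapped in a derived table to avoid error 1093:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, sort_order INTEGER)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [("banana",), ("apple",), ("cherry",)])

# Default sort_order = 1-based alphabetical rank of name.
conn.execute("""
    UPDATE items
    SET sort_order = (SELECT COUNT(*) FROM items AS i2 WHERE i2.name <= items.name)
""")
rows = conn.execute("SELECT name, sort_order FROM items ORDER BY sort_order").fetchall()
```

After the update, new UI-driven reorderings can simply overwrite sort_order; the subquery is only needed to seed the initial values.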

Write performance of Postgresql 9.1 with read-only slave

Posted: 04 Apr 2013 11:12 AM PDT

I have a PostgreSQL 9.1 database hosted by Heroku. It currently has a read-only "follower". I need to truncate several large tables (over 100 GB) of data and reload them. Is the read-only follower going to add overhead to this operation? Should I drop the follower and recreate it after the process is complete?

Convert SQL Server UPDATE Statement for Use in Oracle

Posted: 04 Apr 2013 12:36 PM PDT

I cannot get this UPDATE statement to work in an Oracle environment. It was written for SQL Server.

I am looking for some guidance on how to convert it.

UPDATE SOB
SET COA = CASE
        WHEN (
            SELECT COUNT(*)
            FROM SOB
            WHERE ORD = T.ORD AND
                  SHP_KEY <= T.SHP_KEY AND
                  SHP_DTTM <= T.SHP_DTTM AND
                  SOB_KEY <= T.SOB_KEY
        ) > 1 THEN 0
        ELSE 1
    END
FROM SOB T
WHERE COA IS NULL
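Oracle does not support the UPDATE ... FROM syntax, so the usual conversion drops the alias T and moves the lookup into a correlated subquery against the target table itself. Here is a sketch of that pattern, trimmed to two columns, run on SQLite (which accepts the same correlated form); the Oracle version is in the comment and is an assumption about the intended semantics, not tested against Oracle:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sob (ord INT, shp_key INT, coa INT)")
conn.executemany("INSERT INTO sob (ord, shp_key) VALUES (?, ?)",
                 [(1, 10), (1, 20), (2, 10)])

# Rough Oracle equivalent of the correlated rewrite:
#   UPDATE sob t SET coa = CASE WHEN (
#       SELECT COUNT(*) FROM sob s
#       WHERE s.ord = t.ord AND s.shp_key <= t.shp_key) > 1
#       THEN 0 ELSE 1 END
#   WHERE t.coa IS NULL;
conn.execute("""
    UPDATE sob SET coa = CASE WHEN (
        SELECT COUNT(*) FROM sob s
        WHERE s.ord = sob.ord AND s.shp_key <= sob.shp_key) > 1
        THEN 0 ELSE 1 END
    WHERE coa IS NULL""")
flags = conn.execute(
    "SELECT ord, shp_key, coa FROM sob ORDER BY ord, shp_key").fetchall()
```

The first shipment per order gets COA = 1 and later ones get 0, matching the COUNT(*) > 1 test in the original statement.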

Gaps and islands: client solution vs T-SQL query

Posted: 04 Apr 2013 06:27 PM PDT

Can a T-SQL solution for gaps and islands run faster than a C# solution running on the client?

To be specific, let us provide some test data:

CREATE TABLE dbo.Numbers
  (
    n INT NOT NULL PRIMARY KEY
  ) ;
GO

INSERT INTO dbo.Numbers ( n )
VALUES ( 1 ) ;
GO
DECLARE @i INT ;
SET @i = 0 ;
WHILE @i < 21
  BEGIN
    INSERT INTO dbo.Numbers ( n )
    SELECT n + POWER(2, @i)
    FROM dbo.Numbers ;
    SET @i = @i + 1 ;
  END ;
GO

CREATE TABLE dbo.Tasks
  (
    StartedAt SMALLDATETIME NOT NULL ,
    FinishedAt SMALLDATETIME NOT NULL ,
    CONSTRAINT PK_Tasks PRIMARY KEY ( StartedAt, FinishedAt ) ,
    CONSTRAINT UNQ_Tasks UNIQUE ( FinishedAt, StartedAt )
  ) ;
GO

INSERT INTO dbo.Tasks ( StartedAt , FinishedAt )
SELECT DATEADD(MINUTE, n, '20100101') AS StartedAt ,
       DATEADD(MINUTE, n + 2, '20100101') AS FinishedAt
FROM   dbo.Numbers
WHERE  ( n < 500000 OR n > 500005 ) ;
GO

This first set of test data has exactly one gap:

SELECT StartedAt ,
       FinishedAt
FROM   dbo.Tasks
WHERE  StartedAt BETWEEN DATEADD(MINUTE, 499999, '20100101')
                 AND     DATEADD(MINUTE, 500006, '20100101')

The second set of test data has 2M-1 gaps, one between every two adjacent intervals:

TRUNCATE TABLE dbo.Tasks;
GO

INSERT INTO dbo.Tasks ( StartedAt , FinishedAt )
SELECT DATEADD(MINUTE, 3*n, '20100101') AS StartedAt ,
       DATEADD(MINUTE, 3*n + 2, '20100101') AS FinishedAt
FROM   dbo.Numbers
WHERE  ( n < 500000 OR n > 500005 ) ;
GO

Currently I am running 2008 R2, but 2012 solutions are very welcome. I have posted my C# solution as an answer.
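For reference, the client-side approach boils down to a single pass over the intervals sorted by start time. Here is a Python sketch of that sweep against the first test set, using minute offsets in place of the SMALLDATETIME values (a C# version would follow the same shape):

```python
# Minute offsets mirroring the first test set: task n covers [n, n+2),
# n in 1..2**21 except 500000..500005, leaving exactly one gap.
tasks = [(n, n + 2) for n in range(1, 2 ** 21 + 1) if n < 500000 or n > 500005]
tasks.sort()  # already sorted here, but required in general

# Single pass: a gap opens whenever the next task starts strictly after
# the furthest finish time seen so far.
gaps = []
covered_until = tasks[0][1]
for started, finished in tasks[1:]:
    if started > covered_until:
        gaps.append((covered_until, started))
    covered_until = max(covered_until, finished)
```

On this data the sweep finds the single gap from minute 500001 to minute 500006, matching the SELECT above; whether it beats a T-SQL solution is exactly the question, since the scan is O(n) but the rows must cross the wire first.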

PostgreSQL backup error

Posted: 04 Apr 2013 01:51 PM PDT

I am trying to backup my company PostgreSQL database using pgAdmin III, so I selected our company DB from the tree, then right clicked on it and selected 'Backup', selected the destination folder for the backup file then clicked OK. Then I got this error message:

C:/Program Files/PostgreSQL/9.0/bin\pg_dump.exe --host localhost --port 5432 --username "pgres1" --format tar --blobs --encoding UTF8 --verbose --file "E:\DB_Backup\DB_Backup_TEST.backup" \"CompanyDB_TEST\"
pg_dump: [archiver (db)] connection to database ""CompanyDB_TEST"" failed: FATAL:  database ""CompanyDB_TEST"" does not exist
pg_dump: *** aborted because of error

Process returned exit code 1.

So can someone please help me by telling me what I am doing wrong here?

I am 100% sure that CompanyDB_TEST does exist.

I am running PostgreSQL on Windows Server 2003.

How to check which tables in DB (MYSQL) updated in last 1 hour / last 1 minute?

Posted: 04 Apr 2013 11:04 AM PDT

I have to create an xls data feed for a website, and I would like to know which tables are affected when I do a manual entry from the CMS.

If I have a freshly installed database and I'm making the first entry in it using the CMS, I would like to know which tables got updated/appended in the last minute in that DB.

It is somewhat similar to this question

But in my case I don't know which tables to check. I could check each and every table in the DB using the solution posted in that question, but I have a gut feeling that there is a better way.

RID vs INCLUDE on a large field

Posted: 04 Apr 2013 11:20 AM PDT

I have a table that stores notes

create table tblNote(
    Id int identity(1,1),
    ParentId int,
    ParentType varchar(32),
    NoteType varchar(32),
    Note varchar(max),
    CreatedBy varchar(25),
    CreatedDate datetime,
    .
    .
    .
    <other metadata about the note>
)

I have done a lot of reading recently about how MS SQL Server handles indexes (2005 and forward).

I have a clustered index on ID

[ I have considered changing the clustered index to parentId, parentType since that is reasonably narrow and it is static. ]

The overwhelming percentage of queries against this table are going to be along the lines of

select NOTE, createdDate, createdBy
from tblNote
where parentId = 12 and parentType = 'RFQ'

The question I want to ask today (though any feedback is welcome) is this:

The NC index I could add is:

create index idx_nc_note_parent
on tblNote (
    parentId,
    parenttype
)
include (createdby, createdDate)

This would be useful in creating little lists of the notes where we might include who and when type info.

I am hesitant to include a varchar(max) field. It seems like it would really hurt how much of the index could be cached. (Is this reasonable or unreasonable?)

Assuming I don't include the NOTE field, a RID lookup will be necessary to actually fetch the note content when it is requested.

While I have read quite a bit about how expensive RID lookups are, it still has to be better to have this index than to do a table scan, right?

[Apologies for the code block; I have added the 4 spaces, but maybe I did it wrong?]
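As a small illustration of why leaving NOTE out of the index still pays off: SQLite has no INCLUDE clause, so the "included" columns below are just trailing key columns, but the effect for the who-and-when query is the same, the index alone answers it and the plan shows a covering index with no lookup back to the table. Column names follow the table above; this sketch does not model RID lookup costs:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tblNote (
        Id INTEGER PRIMARY KEY,
        ParentId INT, ParentType TEXT,
        Note TEXT, CreatedBy TEXT, CreatedDate TEXT)""")
# No INCLUDE in SQLite: the "included" columns become trailing key columns.
conn.execute("""
    CREATE INDEX idx_nc_note_parent
    ON tblNote (ParentId, ParentType, CreatedBy, CreatedDate)""")
# Query only the indexed columns (deliberately not Note).
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT CreatedDate, CreatedBy FROM tblNote
    WHERE ParentId = 12 AND ParentType = 'RFQ'""").fetchall()
```

Add Note to the select list and the covering property disappears, which is the trade-off the question is weighing: per-row lookups for the occasional full-note fetch versus a much fatter, less cacheable index.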

SSIS keeps force changing excel source string to float

Posted: 04 Apr 2013 02:51 PM PDT

There is a column in Excel that is supposed to be text, but SSIS sees numeric text there and automatically makes it double-precision float [DT_R8].

I can change it manually in the Output branch's External and Output columns, but the Error Output simply refuses to let me change the respective column.

Error at Extract Stations [Excel Source [1]]: The data type for "output "Excel Source Error Output" (10)" cannot be modified in the error "output column "Group No" (29)". Error at Extract Stations [Excel Source [1]]: Failed to set property "DataType" on "output column "Group No" (29)".

I have tried modifying the package XML. I tried IMEX=1 and TypeGuessRows=0, but none of that has fixed my problem. Is there any fix for this at all?

The Excel field to be imported into the SQL nvarchar field reads, for example:

295.3
296.33

but they are being written to the SQL table as

295.30000000000001
296.32999999999998

I put in dataviewers and the fields show

295.3
296.33

all the way through execution, which is correct, but when it hits the OLE DB destination it somehow converts it to

295.30000000000001
296.32999999999998

which is wrong.
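Those trailing digits are not the destination corrupting the value; they are the full 17-significant-digit rendering of the nearest IEEE 754 double, which is what DT_R8 stores. Python floats are the same 64-bit doubles, so a quick check reproduces it:

```python
# "295.3" has no exact binary representation; a double stores the nearest
# value, and printing 17 significant digits exposes the difference.
for text in ["295.3", "296.33"]:
    as_double = float(text)  # same 64-bit IEEE 754 double as DT_R8
    print(text, "->", format(as_double, ".17g"))
    # Round-tripping the 17-digit form recovers the identical double.
    assert float(format(as_double, ".17g")) == as_double
```

So the fix is to keep the column out of floating point entirely, e.g. force it to DT_WSTR from the Excel source onward (or convert/round explicitly before the destination), rather than trying to make the double print "nicely".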

Stored Procedure Create Table from Variable with Variable Constraint

Posted: 04 Apr 2013 12:04 PM PDT

I have managed to use a stored procedure to create a copy of a table with a variable name, but I am struggling to understand how to incorporate a constraint into the stored procedure.

The problem:

The constraint name must be a variable: when the procedure makes a copy of a table, it cannot reuse a PK name that already exists. I am getting syntax errors in these areas.

I am very new to SQL server... literally just started learning today!! So please explain in dummy terms.

Code so far below:

CREATE PROCEDURE procFinancialPeriodTable
(
    @TABLENAME as varchar(50)
)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @SQL varchar(2000)

    --SET @SQL = 'if exists (select * from dbo.sysobjects where id = object_id(N''[dbo].[' + @TABLENAME + ']'') and OBJECTPROPERTY(id, N''IsUserTable'') = 1) drop table [dbo].[' + @TABLENAME + ']'
    --EXEC(@SQL)

    -- The CONSTRAINT clause, including the variable name, must be built
    -- inside the dynamic SQL string:
    SELECT @SQL = 'CREATE TABLE [' + @TABLENAME + '] ('
        + '[ID] [int] IDENTITY(1,1) NOT NULL, [FinPeriodNr] [int] NOT NULL, [FinCurrentPeriod] [bit] NULL, [FinStart] [date] NULL, [FinEnd] [date] NULL, [FinStatusOpen] [bit] NOT NULL, [PeriodClosedTS] [smalldatetime] NULL, [PeriodClosedUID] [varchar](3) NULL, '
        + 'CONSTRAINT [PK_FinancialPeriod' + @TABLENAME + '_1] PRIMARY KEY CLUSTERED ([FinPeriodNr] ASC) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]'
        + ') ON [PRIMARY]'

    EXEC(@SQL)
END

The DELETE statement conflicted with the REFERENCE constraint

Posted: 04 Apr 2013 12:56 PM PDT

I'm trying to delete all users but getting the error:

Msg 547, Level 16, State 0, Line 1
The DELETE statement conflicted with the REFERENCE constraint "FK_M02ArticlePersons_M06Persons". The conflict occurred in database "", table "dbo.M02ArticlePersons", column 'M06PersonId'.
The statement has been terminated.

The query:

DELETE FROM [].[dbo].[M06Persons]
WHERE ID > '13'
GO

It seems I need to use ON DELETE CASCADE, but I'm stuck.
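ON DELETE CASCADE, declared on the foreign key, makes the referencing rows disappear along with the deleted parents instead of blocking the delete. A minimal sketch of the behavior using SQLite (table names shortened; in SQL Server the clause goes on the FK definition when it is created or recreated):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs per connection
conn.execute("CREATE TABLE persons (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE article_persons (
        article_id INTEGER,
        person_id INTEGER REFERENCES persons(id) ON DELETE CASCADE)""")
conn.execute("INSERT INTO persons VALUES (1), (2)")
conn.execute("INSERT INTO article_persons VALUES (10, 1), (11, 2)")

# With ON DELETE CASCADE, deleting parents silently removes their children.
conn.execute("DELETE FROM persons WHERE id > 1")
remaining = conn.execute("SELECT person_id FROM article_persons").fetchall()
```

Without the clause the same DELETE raises a foreign key violation, which is the SQLite analogue of Msg 547 above. The alternative, if cascading is too dangerous, is to delete from the child table first.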

Stress test MySQL with queries captured with general log in MySQL

Posted: 04 Apr 2013 12:45 PM PDT

Is there any tool that can do stress testing using the log file created by the MySQL general log? After a lot of searching on Google I found a few stress-testing tools, but they only run canned benchmarks. One option is Apache JMeter, but it does not create test plans from a MySQL log file, and creating a custom test plan for all the queries I have is too time consuming.
Or is there a tool that can at least create a .sql file from the MySQL log?
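Failing a ready-made tool, extracting a replayable .sql file from the general log is a short script. The sketch below assumes the default general-log file format, where each query entry looks like `<time>  <thread_id> Query<TAB><statement>`; the sample lines are made up:

```python
import re

# Matches the "Query" entries of MySQL's general log file format:
#   <time>  <thread_id> Query<TAB><statement>
QUERY_RE = re.compile(r"\s\d+\s+Query\t(?P<sql>.*)$")

def general_log_to_sql(log_lines):
    """Pull the SQL text out of general-log Query lines, one statement each."""
    statements = []
    for line in log_lines:
        m = QUERY_RE.search(line)
        if m:
            statements.append(m.group("sql") + ";")
    return statements

sample = [
    "130404 12:00:01\t   45 Query\tSELECT * FROM orders WHERE id = 7",
    "130404 12:00:02\t   45 Connect\tapp@localhost on shop",
    "130404 12:00:03\t   46 Query\tUPDATE orders SET total = 10 WHERE id = 7",
]
```

The resulting statements can be written to a .sql file and replayed with the mysql client or fed to a load tool; multi-line statements and per-connection interleaving would need more care than this sketch takes.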

SQL Server 2008 DB Performance on single disk

Posted: 04 Apr 2013 01:55 PM PDT

I have a database in SQL Server 2008 around 20 GB in size, and it is growing rapidly.

Unfortunately I cannot add multiple independent hard disks to increase IO performance.

If I put a large table in a separate filegroup, will that help performance on a single disk?

Or does anyone have other tips to improve performance?

Thank you

Slow SSRS Report in production

Posted: 04 Apr 2013 01:54 PM PDT

I have an SSRS report which gets its data by firing a series of stored procedures.

Now the report is timing out badly when run in production, yet when I pull down the prod database and restore it to development, the report runs fine.

I was thinking of setting up a SQL Server Profiler trace in production; hopefully that will tell me something, e.g. high disk I/O at the time the report is run.

What else should I be doing? Something with perfmon?

Postgresql querying trends

Posted: 04 Apr 2013 04:03 PM PDT

First, apologies if this is a duplicate; I am fairly new to SQL, so I'm not sure of the correct terminology to use in my searches.

So I have a database which records motor races, with the following simplified schema

race_table
==========
race_id PK
race_date timestamp
average_speed Decimal
max_speed Decimal

drivers
=======
driver_id PK
driver_name text
date_started timestamp

driver_races
============
driver_id FK
race_id FK

If each driver has ~1000 races spread over 2-3 years,

how would I go about querying the overall % change (positive or negative) in their average race speed for a given date range? For example:

% Change in first 6 months

Joe Smith - 5% increase
Andy James - 4% increase

% Change in first 12 months

Joe Smith - 8% increase
Lewis May - 6% increase

UPDATE: More detail on % Change

The % change could be calculated using linear regression (a least-squares fit would be suitable); the average change is effectively the y-difference on a line of best fit, where each point is a race, x is the race_date and y is the average_speed for that race.

Postgres's regr_slope will give the gradient of the line, which is effectively the same as the % change:

SELECT regr_slope(r.average_speed, EXTRACT(EPOCH FROM r.race_date)) as slope
    FROM race_table as r, driver_races as dr
    WHERE dr.race_id = r.race_id
    AND dr.driver_id = 1

This gives the figure I want, but I now need to apply it across all drivers, sorted by slope.
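In SQL the all-drivers version is the same query grouped per driver (roughly: join to drivers, GROUP BY dr.driver_id, ORDER BY slope DESC). For intuition, here is the same least-squares gradient that regr_slope computes, sketched in Python with made-up (epoch_seconds, average_speed) sample data:

```python
def slope(xs, ys):
    # Least-squares gradient: cov(x, y) / var(x), same as regr_slope(y, x).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Hypothetical sample: (epoch_seconds, average_speed) points per driver,
# one day (86400 s) apart.
races = {
    "Joe Smith": [(0, 100.0), (86400, 101.0), (172800, 102.0)],
    "Lewis May": [(0, 100.0), (86400, 99.0), (172800, 98.0)],
}
slopes = {driver: slope(*zip(*points)) for driver, points in races.items()}
ranked = sorted(slopes, key=slopes.get, reverse=True)
```

Note the slope is in speed units per second, not a percentage; dividing by each driver's starting (or mean) speed converts it to the % change figure in the examples above.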

Error 1044 Access denied to user

Posted: 04 Apr 2013 02:59 PM PDT

This is driving me crazy.

When I try to create a few tables from my Workbench model I get this error 1044.

I've been trying to find a solution but nothing works for me.

Curiously, when I run SELECT USER(),CURRENT_USER(); I get two versions of the same user: one is techboy@(an ip address) and the other is techboy@%.

Access is denied to both at one point or another.

The MySQL server is a remotely hosted server with the user permissions correctly set.

Multiple database servers for performance vs failover

Posted: 04 Apr 2013 12:59 PM PDT

If I have two database servers, and I am looking for maximum performance vs high-availability, what configuration would be best?

Assuming the architecture is two load-balanced web/app servers in front of two db servers, will I be able to have both db servers active with synced data, with web1 to db1, web2 to db2 setup? Is this active/active?

I'm also aware that the two db servers can have their own schema to manually 'split' the db needs of the app. In this case daily backups would be fine. We don't have 'mission critical data.'

If it matters, we have traffic around 3,000-7,000 simultaneous users.

SQL Server distribution database log file grows uncontrollably after full database backup

Posted: 04 Apr 2013 10:59 AM PDT

We have a merge replication environment that pushes to 8 subscribers. This is working fine. Our distribution database is set up in simple recovery mode. We have a maintenance plan that backs up all databases every day at 00:30. Once this process completes, the distribution log file grows over the next 30 minutes and absorbs all the remaining space on the hard drive (about 90 GB).

What then happens is that the distribution database shows as "Recovery Pending" and we cannot do anything till we restart the machine. After this I can shrink the log file down to 2MB.

I have no idea why this is happening. The log file is running at about 10MB during the day. The database size is sitting at 15GB.

Restore SQL Server database using Windows Powershell 3.0

Posted: 04 Apr 2013 07:59 PM PDT

I'm trying to restore a SQL Server database with a PowerShell script, but I'm having problems.

Here is the error I'm getting:

Exception calling "SqlRestore" with "1" argument(s): "Restore failed for Server 'WUSFK250042-OLU\SQLSERVER2008R2'. "
At line:48 char:1
+ $smoRestore.SqlRestore($server)

Here is my code:

#clear screen
cls

#load assemblies
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
#Need SmoExtended for backup
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") | Out-Null
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoEnum") | Out-Null

$backupFile = "C:\SafewayRT\SafewayRTFUll.bak"

#we will query the database name from the backup header later
$server = New-Object ("Microsoft.SqlServer.Management.Smo.Server") "WUSFK250042-OLU\SQLSERVER2008R2"
$backupDevice = New-Object("Microsoft.SqlServer.Management.Smo.BackupDeviceItem") ($backupFile, "File")
$smoRestore = new-object("Microsoft.SqlServer.Management.Smo.Restore")

#restore settings
$smoRestore.NoRecovery = $false;
$smoRestore.ReplaceDatabase = $true;
$smoRestore.Action = "Database"
$smoRestorePercentCompleteNotification = 10;
$smoRestore.Devices.Add($backupDevice)

#get database name from backup file
$smoRestoreDetails = $smoRestore.ReadFileList($server)

#display database name
"Database Name from Backup Header : " + $smoRestoreDetails.Rows[0]["Safeway_LogixRT"]

#give a new database name
$smoRestore.Database = $smoRestoreDetails.Rows[0]["Safeway_LogixRT"]

#specify new data and log files (mdf and ldf)
$smoRestoreFile = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")
$smoRestoreLog = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")

#the logical file names should be the logical filename stored in the backup media
$smoRestoreFile.LogicalFileName = $smoRestoreDetails.Rows[0]["Safeway_LogixRT"]
$smoRestoreFile.PhysicalFileName = $server.Information.MasterDBPath + "\" + $smoRestore.Database + "_Data.mdf"
$smoRestoreLog.LogicalFileName = $smoRestoreDetails.Rows[0]["Safeway_LogixRT"] + "_Log"
$smoRestoreLog.PhysicalFileName = $server.Information.MasterDBLogPath + "\" + $smoRestore.Database + "_Log.ldf"
$smoRestore.RelocateFiles.Add($smoRestoreFile)
$smoRestore.RelocateFiles.Add($smoRestoreLog)

#restore database
$smoRestore.SqlRestore($server)

mysqlbackup mysql enterprise utility issue

Posted: 04 Apr 2013 11:59 AM PDT

I recently took a backup using mysqlbackup.

While restoring it, I noticed that the files copied into the datadir are owned by root:root (user:group) instead of mysql:mysql.

Did something go wrong when taking the backup?

Cross Database transactions - Always on

Posted: 04 Apr 2013 01:59 PM PDT

Recently we were working on a POC to get AlwaysOn working and happened to see this article in BOL.

The article suggests there would be logical inconsistency even when we are dealing with synchronous mode, but will this actually be the case?

Consider, for example, databases A and B on which the transaction runs, where A is in high-safety mode and B is not mirrored. The log of A has to reach the mirror database before the principal commits, and eventually the two-phase commit (covering the transaction on B) succeeds. But the article suggests the log will not be transferred in the first place, which still results in a commit on B, and that seems contradictory. Please help me understand whether the statement in the article is true, and if it is, how it can be. :)

PS :Please let me know if I need to provide more information around this.

MySQL generic trigger to save identification data for later accessing of the changed row

Posted: 04 Apr 2013 05:59 PM PDT

I am pretty inexperienced with this.

I need a generic trigger, able to create and save in a fixed table some sort of identification data for a changed row from any table. The identification data would later be used to SELECT the changed row in the given table.

Can this be done without knowing the table structure beforehand?

The only idea I had, though it is far too inefficient in my opinion and also requires prior knowledge of the table column names, is to save a hash:

MD5(concat(NEW.field1, NEW.field2, NEW.field3, ...))  


and then use

SELECT * FROM changed_table WHERE hash = MD5(concat(field1, field2, field3, ...))

to identify the changed row in the table which triggered the change.

I would greatly appreciate any help or suggestions!
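The hashing idea itself, sketched in Python. One detail worth noting: plain concat makes ('ab', 'c') and ('a', 'bc') produce the same hash, so a separator between fields is needed; everything else here (field values, table shape) is made up for illustration:

```python
import hashlib

def row_hash(*fields):
    # Same idea as MD5(CONCAT_WS(...)): hash the joined column values.
    # The separator prevents ('ab', 'c') and ('a', 'bc') from colliding.
    joined = "\x1f".join("" if f is None else str(f) for f in fields)
    return hashlib.md5(joined.encode("utf-8")).hexdigest()

# Hypothetical rows: (id, name, updated_at).
rows = [(1, "alice", "2013-04-04"), (2, "bob", "2013-04-05")]
index = {row_hash(*r): r for r in rows}

changed = (2, "bob", "2013-04-05")
found = index[row_hash(*changed)]
```

It works, but it confirms the inefficiency concern: the lookup must rehash every row (no index helps unless the hash is stored per row), which is why storing the table name plus primary key value in the fixed table is the usual answer when any key is available.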

When should I use a unique constraint instead of a unique index?

Posted: 04 Apr 2013 01:58 PM PDT

When I want a column to have distinct values, I can either use a constraint

create table t1(
    id int primary key,
    code varchar(10) unique NULL
);
go

or I can use a unique index

create table t2(
    id int primary key,
    code varchar(10) NULL
);
go

create unique index I_t2 on t2(code);

Columns with unique constraints seem to be good candidates for unique indexes.

Are there any known reasons to use unique constraints and not to use unique indexes instead?
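At the storage level the two are the same mechanism: a unique constraint is enforced by a unique index under the hood. A small SQLite sketch showing that both forms reject duplicates identically (the constraint/index question above is about SQL Server, where the same equivalence holds for enforcement):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# t1 declares uniqueness as a column constraint, t2 via an explicit index.
conn.execute("CREATE TABLE t1 (id INTEGER PRIMARY KEY, code TEXT UNIQUE)")
conn.execute("CREATE TABLE t2 (id INTEGER PRIMARY KEY, code TEXT)")
conn.execute("CREATE UNIQUE INDEX i_t2 ON t2(code)")

def rejects_duplicate(table):
    # Insert the same code twice; the second insert must fail either way.
    conn.execute(f"INSERT INTO {table} (code) VALUES ('A')")
    try:
        conn.execute(f"INSERT INTO {table} (code) VALUES ('A')")
        return False
    except sqlite3.IntegrityError:
        return True
```

Since enforcement is identical, the choice usually comes down to intent (a constraint documents a rule of the data model and can be referenced by FKs) versus flexibility (an index can add included columns, filters, and other physical options).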

SQL Server 2005 Express in VMware causing very high CPU load

Posted: 04 Apr 2013 02:17 PM PDT

I'm having the problems described in KB937745 - very high CPU usage and the Application Log is reporting something like this:

The client was unable to reuse a session with SPID SPID, which had been reset for connection pooling.  

I've downloaded the hotfixes and I can't run them - I suspect it is because SQL Server 2005 Express Edition is not in the "Applies to" section of the KB:


The machine is running on an ESX 3.5 host running Windows XP (patched).

Result of:

SELECT  SERVERPROPERTY('productversion'), SERVERPROPERTY ('productlevel'), SERVERPROPERTY ('edition')  


9.00.3068.00    SP2	Express Edition  

Any ideas? I'm stumped as to why the CPU is so slammed. This is a vendor product, and the vendor hasn't seen this kind of problem across several installations.
