Thursday, March 7, 2013

[SQL Server 2008 issues] Populating fixed destination table with different source tables

Populating fixed destination table with different source tables

Posted: 06 Mar 2013 06:25 PM PST

I have a stored procedure which basically takes a table, performs some data manipulation, then inserts the data into a destination table. Sounds straightforward, but there's more to it than that. Let me explain, starting with the below image:

[img]http://s23.postimage.org/rvklmnvjv/idea.jpg[/img]

The destination table does not change, so it will always have these 5 columns. The source table, however, may have the structure as above or it may come in with fewer columns. My business rules for populating the destination table are:

Column 1 = Column A + Column B
Column 2 = Column B
Column 3 = Column A + Column C
Column 4 = Column A + Column B + Column D
Column 5 = Column D

I've solved this problem by using multiple IF statements (code below), but I'm wondering if others have different ways of doing this?

[code="sql"]
CREATE PROCEDURE [dbo].[GenerateKeys_DEV]
( -- Version 24
     @SOURCETABLE VARCHAR(50)
    ,@ID VARCHAR(50) = NULL
    -- Name Details
    ,@TITLE VARCHAR(50) = NULL
    ,@FULLCONTACTNAME VARCHAR(50) = NULL
    ,@INITIAL VARCHAR(50) = NULL
    ,@FORENAME VARCHAR(50) = NULL
    ,@MIDDLENAME VARCHAR(50) = NULL
    ,@SURNAME VARCHAR(50) = NULL
    -- Address Details
    ,@BUILDINGNAME VARCHAR(50) = NULL
    ,@ADDRESS1 VARCHAR(50) = NULL
    ,@ADDRESS2 VARCHAR(50) = NULL
    ,@ADDRESS3 VARCHAR(50) = NULL
    ,@ADDRESS4 VARCHAR(50) = NULL
    ,@ADDRESS5 VARCHAR(50) = NULL
    ,@TOWN VARCHAR(50) = NULL
    ,@COUNTY VARCHAR(50) = NULL
    ,@POSTCODE VARCHAR(50) = NULL
    ,@POSTCODEPREFIX VARCHAR(50) = NULL
    ,@POSTCODESUFFIX VARCHAR(50) = NULL
    ,@COUNTRY VARCHAR(50) = NULL
    -- Company Details
    ,@ORGANISATIONNAME VARCHAR(50) = NULL
    ,@COMPANYNAME2 VARCHAR(50) = NULL
    ,@COMPANYNAME3 VARCHAR(50) = NULL
    ,@COMPANYNAME4 VARCHAR(50) = NULL
    -- Other Details
    ,@EMAIL VARCHAR(50) = NULL
    ,@HOMETELEPHONE VARCHAR(50) = NULL
    ,@MOBILETELEPHONE VARCHAR(50) = NULL
    ,@FAX VARCHAR(50) = NULL
    ,@TELEPHONEAREACODE VARCHAR(50) = NULL
    ,@LANGUAGECODE VARCHAR(50) = NULL
    ,@WEBSITE VARCHAR(50) = NULL
    ,@CUSTOMERURN VARCHAR(50) = NULL
    ,@CUSTOMERURN2 VARCHAR(50) = NULL
)
AS
BEGIN
SET NOCOUNT ON

DECLARE @SQL NVARCHAR(MAX)
DECLARE @MSG NVARCHAR(500)

-- Used to generate the insert into the keys table
DECLARE @InsertIntoPart VARCHAR(4000) = ''
DECLARE @SelectPart VARCHAR(4000) = ''
DECLARE @FromPart VARCHAR(4000) = ''
DECLARE @CROSSAPPLY VARCHAR(200) = ''

-- Name details
DECLARE @mkTitleSELECTString VARCHAR(800) = ''''''
DECLARE @mkNameKeySELECTString VARCHAR(800) = ''''''
DECLARE @mkName1SELECTString VARCHAR(800) = ''''''
DECLARE @mkName2SELECTString VARCHAR(800) = ''''''
DECLARE @mkName3SELECTString VARCHAR(800) = ''''''
DECLARE @mkNormalisedName VARCHAR(800) = ''''''

-- Company specific keys
DECLARE @mkOrgNameKeyString VARCHAR(800) = ''''''
DECLARE @mkOrgNameSELECTString VARCHAR(800) = ''''''
DECLARE @mkOrgName1SELECTString VARCHAR(800) = ''''''
DECLARE @mkOrgName2SELECTString VARCHAR(800) = ''''''
DECLARE @mkOrgName3SELECTString VARCHAR(800) = ''''''
DECLARE @mkNormalisedOrganisation VARCHAR(800) = ''''''

-- Address specific details
DECLARE @mkPostInSELECTString NVARCHAR(800) = ''''''
DECLARE @mkPostOutSELECTString NVARCHAR(800) = ''''''
DECLARE @mkPhoneticStreetSELECTString NVARCHAR(800) = ''''''
DECLARE @mkPremiseSELECTString NVARCHAR(800) = ''''''
DECLARE @mkTownSELECTString NVARCHAR(800) = ''''''
DECLARE @mkAddressKeySELECTString NVARCHAR(800) = ''''''

-- Email and Telephone Numbers
DECLARE @mkEmailSELECTString NVARCHAR(800) = ''''''
DECLARE @mkTelephoneSELECTString NVARCHAR(800) = ''''''
DECLARE @mkMobileSELECTString NVARCHAR(800) = ''''''

-- Match Keys (this is to capture matches that are produced by the current merge routines)
DECLARE @MatchKeyType1 VARCHAR(MAX) = ''

-- NULL out any column-name parameter that doesn't exist in the source table
IF @TITLE IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @TITLE AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @TITLE = NULL
IF @FULLCONTACTNAME IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @FULLCONTACTNAME AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @FULLCONTACTNAME = NULL
IF @INITIAL IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @INITIAL AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @INITIAL = NULL
IF @FORENAME IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @FORENAME AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @FORENAME = NULL
IF @MIDDLENAME IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @MIDDLENAME AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @MIDDLENAME = NULL
IF @SURNAME IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @SURNAME AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @SURNAME = NULL
IF @BUILDINGNAME IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @BUILDINGNAME AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @BUILDINGNAME = NULL
IF @ADDRESS1 IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @ADDRESS1 AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @ADDRESS1 = NULL
IF @ADDRESS2 IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @ADDRESS2 AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @ADDRESS2 = NULL
IF @ADDRESS3 IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @ADDRESS3 AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @ADDRESS3 = NULL
IF @ADDRESS4 IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @ADDRESS4 AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @ADDRESS4 = NULL
IF @ADDRESS5 IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @ADDRESS5 AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @ADDRESS5 = NULL
IF @TOWN IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @TOWN AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @TOWN = NULL
IF @COUNTY IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @COUNTY AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @COUNTY = NULL
IF @POSTCODE IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @POSTCODE AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @POSTCODE = NULL
IF @POSTCODEPREFIX IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @POSTCODEPREFIX AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @POSTCODEPREFIX = NULL
IF @POSTCODESUFFIX IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @POSTCODESUFFIX AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @POSTCODESUFFIX = NULL
IF @COUNTRY IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @COUNTRY AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @COUNTRY = NULL
IF @ORGANISATIONNAME IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @ORGANISATIONNAME AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @ORGANISATIONNAME = NULL
IF @COMPANYNAME2 IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @COMPANYNAME2 AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @COMPANYNAME2 = NULL
IF @COMPANYNAME3 IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @COMPANYNAME3 AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @COMPANYNAME3 = NULL
IF @COMPANYNAME4 IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @COMPANYNAME4 AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @COMPANYNAME4 = NULL
IF @EMAIL IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @EMAIL AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @EMAIL = NULL
IF @HOMETELEPHONE IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @HOMETELEPHONE AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @HOMETELEPHONE = NULL
IF @MOBILETELEPHONE IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @MOBILETELEPHONE AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @MOBILETELEPHONE = NULL
IF @FAX IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @FAX AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @FAX = NULL
IF @TELEPHONEAREACODE IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @TELEPHONEAREACODE AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @TELEPHONEAREACODE = NULL
IF @LANGUAGECODE IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @LANGUAGECODE AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @LANGUAGECODE = NULL
IF @WEBSITE IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @WEBSITE AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @WEBSITE = NULL
IF @CUSTOMERURN IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @CUSTOMERURN AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @CUSTOMERURN = NULL
IF @CUSTOMERURN2 IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = @CUSTOMERURN2 AND OBJECT_ID = OBJECT_ID(@SOURCETABLE)) SET @CUSTOMERURN2 = NULL

-- Main sproc logic:
IF @SOURCETABLE IS NULL
BEGIN
    RAISERROR('No source table specified', 16, 1);
    RETURN
END

IF @SOURCETABLE IS NOT NULL AND NOT EXISTS (SELECT 1 FROM sys.tables WHERE name = @SOURCETABLE)
BEGIN
    RAISERROR('Source table doesn''t exist!', 16, 1);
    RETURN
END

-- If no ID column is supplied then we can't really perform any processing!
IF @ID IS NULL
BEGIN
    RAISERROR('No ID column specified!', 16, 1);
    RETURN
END

DECLARE @FullNameForProcessing NVARCHAR(255) = ''
DECLARE @FullAddressForProcessing NVARCHAR(1000) = ''
DECLARE @AddressStreetPart NVARCHAR(1000) = ''

-- If we already have a full contact name then we can just use this to generate our various name parts
IF @FULLCONTACTNAME IS NOT NULL AND EXISTS (SELECT 1 FROM sys.columns WHERE name = @FULLCONTACTNAME AND OBJECT_ID = OBJECT_ID(@SOURCETABLE))
BEGIN
    SET @FullNameForProcessing = 'ISNULL(' + @FULLCONTACTNAME + ', '''')'
END
ELSE
BEGIN
    SET @FullNameForProcessing = 'ISNULL(' + ISNULL(@INITIAL, '''''') + ', '''') + '' '' + ISNULL(' + ISNULL(@FORENAME, '''''') + ', '''') + '' '' + ISNULL(' + ISNULL(@MIDDLENAME, '''''') + ', '''') + '' '' + ISNULL(' + ISNULL(@SURNAME, '''''') + ', '''')'
END

-- mkNameKey = Phonetic Surname + First letter of Forename
SET @mkNameKeySELECTString = 'dbo.clrFn_DoubleMetaphone(dbo.clrFn_GetLastWord(' + @FullNameForProcessing + ')) + LEFT(dbo.clrFn_GetFirstWord(' + @FullNameForProcessing + '), 1)'
-- Phonetic Surname
SET @mkName1SELECTString = 'dbo.clrFn_DoubleMetaphone(dbo.clrFn_GetLastWord(' + @FullNameForProcessing + '))'
-- Phonetic Forename
SET @mkName2SELECTString = 'dbo.clrFn_DoubleMetaphone(dbo.clrFn_GetFirstWord(' + @FullNameForProcessing + '))'
-- Phonetic Middle name or Initial
SET @mkName3SELECTString = 'dbo.clrFn_DoubleMetaphone(dbo.clrFn_GetSecondWord(' + @FullNameForProcessing + '))'
-- Normalised name = Surname + Forename + Middle Name
SET @mkNormalisedName = 'UPPER(dbo.clrFn_GetLastWord(' + @FullNameForProcessing + ')) + '','' + UPPER(dbo.clrFn_GetFirstWord(' + @FullNameForProcessing + ')) + '','' + UPPER(dbo.clrFn_GetSecondWord(' + @FullNameForProcessing + '))'

IF @ORGANISATIONNAME IS NOT NULL AND EXISTS (SELECT 1 FROM sys.columns WHERE name = @ORGANISATIONNAME AND OBJECT_ID = OBJECT_ID(@SOURCETABLE))
BEGIN
    SET @mkOrgNameKeyString = 'dbo.clrFn_CompanyNameSplitter(' + @ORGANISATIONNAME + ', 1) + '' '' + dbo.clrFn_CompanyNameSplitter(' + @ORGANISATIONNAME + ', 2)'
    SET @mkNormalisedOrganisation = 'dbo.clrFn_CoNameMatch_String(' + @ORGANISATIONNAME + ')'
    SET @mkOrgName1SELECTString = 'dbo.clrFn_CompanyNameSplitter(' + @ORGANISATIONNAME + ', 1)'
    SET @mkOrgName2SELECTString = 'dbo.clrFn_CompanyNameSplitter(' + @ORGANISATIONNAME + ', 2)'
    SET @mkOrgName3SELECTString = 'dbo.clrFn_CompanyNameSplitter(' + @ORGANISATIONNAME + ', 3)'
END
ELSE
BEGIN
    SET @mkOrgNameKeyString = ''''''
    SET @mkNormalisedOrganisation = ''',,'''
    SET @mkOrgName1SELECTString = ''''''
    SET @mkOrgName2SELECTString = ''''''
    SET @mkOrgName3SELECTString = ''''''
END

IF @POSTCODE IS NOT NULL AND EXISTS (SELECT 1 FROM sys.columns WHERE name = @POSTCODE AND OBJECT_ID = OBJECT_ID(@SOURCETABLE))
BEGIN
    -- We need to know if we're dealing with UK or International postcodes
    IF @COUNTRY IS NOT NULL AND EXISTS (SELECT 1 FROM sys.columns WHERE name = @COUNTRY AND OBJECT_ID = OBJECT_ID(@SOURCETABLE))
    BEGIN
        SET @mkPostOutSELECTString = 'CASE WHEN ' + @COUNTRY + ' = ''UNITED KINGDOM'' THEN dbo.clrFn_SplitUKPostCode(' + @POSTCODE + ', 1) ELSE REPLACE(' + @COUNTRY + ' + ' + @POSTCODE + ', '' '', '''') END'
        SET @mkPostInSELECTString = 'CASE WHEN ' + @COUNTRY + ' = ''UNITED KINGDOM'' THEN dbo.clrFn_SplitUKPostCode(' + @POSTCODE + ', 2) ELSE REPLACE(' + @COUNTRY + ' + ' + @POSTCODE + ', '' '', '''') END'
    END
    ELSE
    BEGIN
        SET @mkPostOutSELECTString = 'dbo.clrFn_SplitUKPostCode(' + @POSTCODE + ', 1)'
        SET @mkPostInSELECTString = 'dbo.clrFn_SplitUKPostCode(' + @POSTCODE + ', 2)'
    END
END
ELSE
BEGIN
    SET @mkPostOutSELECTString = ''''''
    SET @mkPostInSELECTString = ''''''
END

-- Complete Address
SET @FullAddressForProcessing = 'LTRIM(RTRIM(REPLACE(REPLACE(REPLACE(ISNULL(' + ISNULL(@BUILDINGNAME, '''''') + ', '''') + '' '' + ISNULL(' + ISNULL(@ADDRESS1, '''''') + ', '''') + '' '' + ISNULL(' + ISNULL(@ADDRESS2, '''''') + ', '''') + '' '' + ISNULL(' + ISNULL(@ADDRESS3, '''''') + ', '''') + '' '' + ISNULL(' + ISNULL(@ADDRESS4, '''''') + ', '''') + '' '' + ISNULL(' + ISNULL(@ADDRESS5, '''''') + ', ''''), '' '',''<>''),''><'',''''),''<>'','' '')))'

-- For Street name we will assume that it's made up of BuildingName and Address1
SET @AddressStreetPart = 'LTRIM(RTRIM(REPLACE(REPLACE(REPLACE(ISNULL(' + ISNULL(@BUILDINGNAME, '''''') + ', '''') + '' '' + ISNULL(' + ISNULL(@ADDRESS1, '''''') + ', ''''), '' '',''<>''),''><'',''''),''<>'','' '')))'

SET @mkPhoneticStreetSELECTString = 'dbo.clrFn_DoubleMetaphone(dbo.clrFn_RemoveDigits(' + @AddressStreetPart + '))'
SET @mkPremiseSELECTString = 'dbo.clrFn_GetDigits(' + @FullAddressForProcessing + ')'
SET @mkAddressKeySELECTString = 'UPPER(' + @FullAddressForProcessing + ')'

IF @TOWN IS NOT NULL AND EXISTS (SELECT 1 FROM sys.columns WHERE name = @TOWN AND OBJECT_ID = OBJECT_ID(@SOURCETABLE))
BEGIN
    SET @mkTownSELECTString = 'dbo.clrFn_DoubleMetaphone(dbo.clrFn_RemoveDigits(' + @TOWN + '))'
END

IF @EMAIL IS NOT NULL AND EXISTS (SELECT 1 FROM sys.columns WHERE name = @EMAIL AND OBJECT_ID = OBJECT_ID(@SOURCETABLE))
BEGIN
    SET @mkEmailSELECTString = 'ISNULL(' + @EMAIL + ', '''')'
END

IF @HOMETELEPHONE IS NOT NULL AND EXISTS (SELECT 1 FROM sys.columns WHERE name = @HOMETELEPHONE AND OBJECT_ID = OBJECT_ID(@SOURCETABLE))
BEGIN
    SET @mkTelephoneSELECTString = 'ISNULL(' + @HOMETELEPHONE + ', '''')'
END

IF @MOBILETELEPHONE IS NOT NULL AND EXISTS (SELECT 1 FROM sys.columns WHERE name = @MOBILETELEPHONE AND OBJECT_ID = OBJECT_ID(@SOURCETABLE))
BEGIN
    SET @mkMobileSELECTString = 'ISNULL(' + @MOBILETELEPHONE + ', '''')'
END

IF @TITLE IS NOT NULL AND EXISTS (SELECT 1 FROM sys.columns WHERE name = @TITLE AND OBJECT_ID = OBJECT_ID(@SOURCETABLE))
BEGIN
    SET @mkTitleSELECTString = 'UPPER(ISNULL(' + @TITLE + ', ''''))'
END

SET @InsertIntoPart = 'INSERT INTO dbo.' + @SOURCETABLE + '_keys_ (ID, GUID, mkTitle, mkNameKey, mkAddressKey, mkName1, mkName2, mkName3, mkNormalizedName, mkOrganizationKey, mkNormalizedOrganization, mkOrgName1, mkOrgName2, mkorgName3, mkPostIn, mkPostOut, mkPhoneticStreet, mkPremise, mkPhoneticTown, mkEmailAddress, mkTelephoneNumber, mkMobileNumber)'

SET @SelectPart = 'SELECT ' + @ID + ', GUID' + ', ' + @mkTitleSELECTString + ', ' + @mkNameKeySELECTString + ', ' + @mkAddressKeySELECTString + ', ' + @mkName1SELECTString + ', ' + @mkName2SELECTString + ', ' + @mkName3SELECTString + ', ' + @mkNormalisedName + ', ' + @mkOrgNameKeyString + ', ' + @mkNormalisedOrganisation + ', ' + @mkOrgName1SELECTString + ', ' + @mkOrgName2SELECTString + ', ' + @mkOrgName3SELECTString + ', ' + @mkPostInSELECTString + ', ' + @mkPostOutSELECTString + ', ' + @mkPhoneticStreetSELECTString + ', ' + @mkPremiseSELECTString + ', ' + @mkTownSELECTString + ', ' + @mkEmailSELECTString + ', ' + @mkTelephoneSELECTString + ', ' + @mkMobileSELECTString

SET @FromPart = 'FROM dbo.' + @SOURCETABLE

EXEC (@InsertIntoPart + ' ' + @SelectPart + ' ' + @FromPart)

EXEC('DELETE FROM dbo.' + @SOURCETABLE + '_keys_ WHERE LEN(LTRIM(RTRIM(mkNameKey))) = 0 AND LEN(LTRIM(RTRIM(mkAddressKey))) = 0 AND LEN(LTRIM(RTRIM(mkPostOut))) = 0 AND LEN(LTRIM(RTRIM(mkPostIn))) = 0')

-- Probably not the most efficient way to do this but it will do for now:
-- Exact Match Key Type 1
EXEC('UPDATE a SET a.mkMatchKeyType1 = HASHBYTES(''SHA2_512'', UPPER(REPLACE(LEFT(dbo.clrFn_GetLastWord(' + @FullNameForProcessing + '), 5) + dbo.clrFn_GetFirstWord(' + @FullNameForProcessing + ') + ' + @mkPostOutSELECTString + ' + ' + @mkPostInSELECTString + ' + LEFT(dbo.clrFn_GetFirstWord(' + @mkNormalisedOrganisation + ') + dbo.clrFn_GetFirstWord(' + @mkNormalisedOrganisation + ') + dbo.clrFn_GetLastWord(' + @mkNormalisedOrganisation + '), 12) + LEFT(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(' + @ADDRESS1 + ', ''flat '', ''''), ''apartment '', ''''), ''apt '', ''''), '' ave'', ''''), '' street'', ''''), '' st'', ''''), '' road'', ''''), '' rd'', ''''), '' avenue'', ''''), '' lane'', ''''), ''-'', '' ''), ''the'', ''''), ''.'', ''''), '','', ''''), ''('', ''''), '')'', ''''), '''''''', ''''), ''&'', ''''), '' and '', '' ''), '' limited'', ''''), '' ltd'', ''''), '' plc'', ''''), '' crescent'', ''''), ''/'', '' ''), 12), '' '', '''')))
FROM dbo.' + @SOURCETABLE + '_keys_ AS a INNER JOIN dbo.' + @SOURCETABLE + ' AS b ON a.ID = b.ID
WHERE LEN(' + @FullNameForProcessing + ') > 0 AND LEN(' + @mkPostOutSELECTString + ') > 0 AND LEN(' + @mkPostInSELECTString + ') > 0 AND LEN(REPLACE(' + @mkNormalisedOrganisation + ', '','', '''')) > 0 AND LEN(' + @ADDRESS1 + ') > 0')

-- Exact Matching Key Type 2
EXEC('UPDATE a SET a.mkMatchKeyType2 = HASHBYTES(''SHA2_512'', UPPER(REPLACE(LEFT(dbo.clrFn_GetLastWord(' + @FullNameForProcessing + '), 5) + dbo.clrFn_GetFirstWord(' + @FullNameForProcessing + ') + ' + @mkPostOutSELECTString + ' + ' + @mkPostInSELECTString + ' + LEFT(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(' + @ADDRESS1 + ', ''flat '', ''''), ''apartment '', ''''), ''apt '', ''''), '' ave'', ''''), '' street'', ''''), '' st'', ''''), '' road'', ''''), '' rd'', ''''), '' avenue'', ''''), '' lane'', ''''), ''-'', '' ''), ''the'', ''''), ''.'', ''''), '','', ''''), ''('', ''''), '')'', ''''), '''''''', ''''), ''&'', ''''), '' and '', '' ''), '' limited'', ''''), '' ltd'', ''''), '' plc'', ''''), '' crescent'', ''''), ''/'', '' ''), 12), '' '', '''')))
FROM dbo.' + @SOURCETABLE + '_keys_ AS a INNER JOIN dbo.' + @SOURCETABLE + ' AS b ON a.ID = b.ID
WHERE LEN(' + @FullNameForProcessing + ') > 0 AND LEN(' + @mkPostOutSELECTString + ') > 0 AND LEN(' + @mkPostInSELECTString + ') > 0 AND LEN(' + @ADDRESS1 + ') > 0')

-- Exact Matching Key Type 3
EXEC('UPDATE a SET a.mkMatchKeyType3 = HASHBYTES(''SHA2_512'', UPPER(REPLACE(LEFT(dbo.clrFn_GetLastWord(' + @FullNameForProcessing + '), 5) + LEFT(dbo.clrFn_GetFirstWord(' + @FullNameForProcessing + '), 1) + ' + @mkTitleSELECTString + ' + ' + @mkPostOutSELECTString + ' + ' + @mkPostInSELECTString + ' + LEFT(dbo.clrFn_GetFirstWord(' + @mkNormalisedOrganisation + ') + dbo.clrFn_GetFirstWord(' + @mkNormalisedOrganisation + ') + dbo.clrFn_GetLastWord(' + @mkNormalisedOrganisation + '), 12) + LEFT(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(' + @ADDRESS1 + ', ''flat '', ''''), ''apartment '', ''''), ''apt '', ''''), '' ave'', ''''), '' street'', ''''), '' st'', ''''), '' road'', ''''), '' rd'', ''''), '' avenue'', ''''), '' lane'', ''''), ''-'', '' ''), ''the'', ''''), ''.'', ''''), '','', ''''), ''('', ''''), '')'', ''''), '''''''', ''''), ''&'', ''''), '' and '', '' ''), '' limited'', ''''), '' ltd'', ''''), '' plc'', ''''), '' crescent'', ''''), ''/'', '' ''), 12), '' '', '''')))
FROM dbo.' + @SOURCETABLE + '_keys_ AS a INNER JOIN dbo.' + @SOURCETABLE + ' AS b ON a.ID = b.ID
WHERE LEN(' + @FullNameForProcessing + ') > 0 AND LEN(' + @mkPostOutSELECTString + ') > 0 AND LEN(' + @mkPostInSELECTString + ') > 0 AND LEN(REPLACE(' + @mkNormalisedOrganisation + ', '','', '''')) > 0 AND LEN(' + @ADDRESS1 + ') > 0')

-- Exact Matching Key Type 4
EXEC('UPDATE a SET a.mkMatchKeyType4 = HASHBYTES(''SHA2_512'', UPPER(REPLACE(LEFT(dbo.clrFn_GetLastWord(' + @FullNameForProcessing + '), 6) + ' + @mkTitleSELECTString + ' + ' + @mkPostOutSELECTString + ' + ' + @mkPostInSELECTString + ' + LEFT(dbo.clrFn_GetFirstWord(' + @mkNormalisedOrganisation + ') + dbo.clrFn_GetFirstWord(' + @mkNormalisedOrganisation + ') + dbo.clrFn_GetLastWord(' + @mkNormalisedOrganisation + '), 12) + LEFT(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(' + @ADDRESS1 + ', ''flat '', ''''), ''apartment '', ''''), ''apt '', ''''), '' ave'', ''''), '' street'', ''''), '' st'', ''''), '' road'', ''''), '' rd'', ''''), '' avenue'', ''''), '' lane'', ''''), ''-'', '' ''), ''the'', ''''), ''.'', ''''), '','', ''''), ''('', ''''), '')'', ''''), '''''''', ''''), ''&'', ''''), '' and '', '' ''), '' limited'', ''''), '' ltd'', ''''), '' plc'', ''''), '' crescent'', ''''), ''/'', '' ''), 12), '' '', '''')))
FROM dbo.' + @SOURCETABLE + '_keys_ AS a INNER JOIN dbo.' + @SOURCETABLE + ' AS b ON a.ID = b.ID
WHERE LEN(' + @FullNameForProcessing + ') > 0 AND LEN(' + @mkPostOutSELECTString + ') > 0 AND LEN(' + @mkPostInSELECTString + ') > 0 AND LEN(REPLACE(' + @mkNormalisedOrganisation + ', '','', '''')) > 0 AND LEN(' + @ADDRESS1 + ') > 0 AND LEN(' + @mkTitleSELECTString + ') > 0')

-- Exact Match Key Type 5
EXEC('UPDATE a SET a.mkMatchKeyType5 = HASHBYTES(''SHA2_512'', UPPER(REPLACE(LEFT(dbo.clrFn_GetLastWord(' + @FullNameForProcessing + '), 5) + dbo.clrFn_GetFirstWord(' + @FullNameForProcessing + ') + ' + @mkPostOutSELECTString + ' + ' + @mkPostInSELECTString + ' + LEFT(dbo.clrFn_GetFirstWord(' + @mkNormalisedOrganisation + ') + dbo.clrFn_GetFirstWord(' + @mkNormalisedOrganisation + ') + dbo.clrFn_GetLastWord(' + @mkNormalisedOrganisation + '), 12), '' '', '''')))
FROM dbo.' + @SOURCETABLE + '_keys_ AS a INNER JOIN dbo.' + @SOURCETABLE + ' AS b ON a.ID = b.ID
WHERE LEN(' + @FullNameForProcessing + ') > 0 AND LEN(' + @mkPostOutSELECTString + ') > 0 AND LEN(' + @mkPostInSELECTString + ') > 0 AND LEN(REPLACE(' + @mkNormalisedOrganisation + ', '','', '''')) > 0 AND LEN(' + @mkTitleSELECTString + ') = 0')
END;
[/code]
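As one possible alternative to the long run of IF ... NOT EXISTS checks (an editorial sketch only, reusing the parameter names from the procedure above), the source table's column list could be captured once and each parameter validated against it:

[code="sql"]
-- Sketch: read sys.columns once, then NULL out any parameter naming a
-- column that the source table doesn't have. When a parameter is already
-- NULL the comparison finds no row and the SET is a harmless no-op.
DECLARE @cols TABLE (name SYSNAME);

INSERT INTO @cols (name)
SELECT name FROM sys.columns WHERE object_id = OBJECT_ID(@SOURCETABLE);

IF NOT EXISTS (SELECT 1 FROM @cols WHERE name = @TITLE)    SET @TITLE    = NULL;
IF NOT EXISTS (SELECT 1 FROM @cols WHERE name = @FORENAME) SET @FORENAME = NULL;
-- ...repeated for the remaining column-name parameters.
[/code]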

Query for extended properties

Posted: 06 Mar 2013 06:25 PM PST

Hi,

Can anyone help me with a query which will select all the stored procedures along with their extended properties?

Thanks a lot.
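A possible starting point (an editorial sketch, not from the thread): join sys.procedures to sys.extended_properties on the procedure's object_id, with minor_id = 0 for properties attached to the procedure itself rather than to a parameter:

[code="sql"]
-- Lists every stored procedure with any extended properties defined on it.
-- class = 1 covers objects and columns; minor_id = 0 means the property
-- belongs to the procedure itself.
SELECT  p.name   AS procedure_name,
        ep.name  AS property_name,
        ep.value AS property_value
FROM    sys.procedures AS p
LEFT JOIN sys.extended_properties AS ep
        ON  ep.major_id = p.object_id
        AND ep.minor_id = 0
        AND ep.class    = 1
ORDER BY p.name;
[/code]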

IN Vs INNER JOIN

Posted: 06 Mar 2013 05:21 PM PST

Which of these queries performs better?

1. SELECT 1 FROM table1 WHERE Id IN (SELECT Id FROM table2)
2. SELECT 1 FROM table1 t1 INNER JOIN table2 t2 ON t1.Id = t2.Id
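An editorial note on the comparison: the two forms are not always interchangeable. If table2.Id contains duplicates, the INNER JOIN returns a row per match, whereas IN returns each table1 row at most once. A third form worth benchmarking alongside them (same illustrative table names as the question) is the EXISTS semi-join; the optimizer frequently compiles IN and EXISTS to the same plan, so comparing actual execution plans is the reliable test:

[code="sql"]
-- Semantically safe semi-join form: returns each table1 row at most once
-- even when table2 holds duplicate Id values.
SELECT 1
FROM   table1 AS t1
WHERE  EXISTS (SELECT 1 FROM table2 AS t2 WHERE t2.Id = t1.Id);
[/code]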

sql server stored procedure logic problem

Posted: 06 Mar 2013 06:02 PM PST

Hi friends, I have a small doubt in SQL Server. Please tell me how to solve this issue. I have 2 tables, and I want to load the first table's records into the second table using a stored procedure. The structure is the same in both tables: columns id, name and sal. The first table contains 1000 records. Whenever we insert these records into the second table, record 501 fails. What I need is for the failed record to produce an error message while the remaining records still load. That means using a TRY...CATCH block in the stored procedure to handle the error: the TRY block applies the logic, the CATCH block captures the error message for the failing record (record 501), and the remaining records (502 to 1000) are still loaded into the second table. Please tell me how to implement this logic in a TRY...CATCH block.
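A minimal sketch of one way to do this, assuming both tables have columns (id, name, sal) and that you create the error log table yourself (dbo.Table1, dbo.Table2 and dbo.LoadErrors are invented names). A plain set-based INSERT...SELECT is normally preferable; the row-by-row loop exists only so that one bad row can fail, be logged, and not stop the rest:

[code="sql"]
CREATE PROCEDURE dbo.CopyRowsWithErrorLog
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @id INT;

    DECLARE rowCur CURSOR LOCAL FAST_FORWARD FOR
        SELECT id FROM dbo.Table1;

    OPEN rowCur;
    FETCH NEXT FROM rowCur INTO @id;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        BEGIN TRY
            -- Insert one row; if it fails only this row is affected.
            INSERT INTO dbo.Table2 (id, name, sal)
            SELECT id, name, sal FROM dbo.Table1 WHERE id = @id;
        END TRY
        BEGIN CATCH
            -- Log the failing row and carry on with the next one.
            INSERT INTO dbo.LoadErrors (id, error_message, error_time)
            VALUES (@id, ERROR_MESSAGE(), GETDATE());
        END CATCH;

        FETCH NEXT FROM rowCur INTO @id;
    END;

    CLOSE rowCur;
    DEALLOCATE rowCur;
END;
[/code]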

SQL 2008 DDL Auditing - A Full Self Installing/Updating Solution For Whole Server

Posted: 04 Jul 2010 10:46 PM PDT

[size="4"][b]Introduction[/b][/size]We will see a totally free DDL Auditing solution (and source code) for the whole server regardless of SQL Server version (2005/2008) or SQL Server edition (Enterprise/Standard/Workgroup/Web/Express) which is very easily and quickly installed and self maintaining. No need to worry about manually installing the solution into new databases whether they are created, restored or attached.DDL triggers enable us to audit DDL changes but there are a few missing events, design decisions and installation complications. This post explains and provides a full solution that includes auditing for database restores (there is no DDL event for this) and an incremental self install, which keeps the whole server audit configured for DDL auditing. There are also significant differences between SQL Server 2005 and SQL Server 2008. Much of this solution is the same between SQL 2005 and SQL 2008, but there is a different set of scripts available for SQL 2005. The SQL 2005 scripts provide alternative methods of overcoming the shortcomings of SQL 2005 DDL events and make it very similar to the capabilities of this SQL 2008 solution described here.All SQL 2008 and SQL 2005 scripts are provided in two zip attachments.In the scripts it is assumed that there exists a database called dbadata to hold the server level audit trail table dbadata.dbo.ServerAudit. Please create this database yourself first or globally replace all references to dbadata to another database of your choice.[size="4"][b]Overview[/b][/size]Starting with SQL Server 2005, DDL triggers have made it possible to audit changes made with DDL TSQL commands. This is distinct from auditing data changes (DML), which is not covered here. This capability allows us to see what objects and permissions have been added, deleted and changed e.g. changed stored procedure, new sysadmin role member, dropped index, new login, new table etc. In other words the work of a DBA or software release is now logged. It is atypical for these types of commands to be issued by the normal workings of applications but there are some exceptions.This document predominantly discusses the SQL 2008 DDL auditing solution. The SQL 2005 deviations are noted in place and in a dedicated section.A number of scripts have been written to automate various aspects of DDL auditing. These are noted throughout. Some are different between SQL 2005 and SQL 2008.There are some shortcomings of the raw DDL auditing capability e.g. in relation to database attach and restore. In both these scenarios auditing information could be lost if the attached/restored database does not already have DDL auditing configured as expected. Also restores are very likely to make the details held at the server level different to that at the database level with no auditing that the restore took place. This is obviously not a full audit trail.Both of these shortcomings have been addressed with coded solutions. Note that database creation is audited because the model database already has DDL auditing configured as expected and this is inherited by the new database. One known shortcoming remains and is minor - database detaches are not audited. There is no known way to audit this.[size="4"][b]Usage[/b][/size]All auditable events at the database level are logged to the table <database>.dbo.DatabaseAudit e.g. create user, add user to database role. 
You can see these very easily (latest first) with TSQL (see script list_all_events.sql) via:[font="Courier New"]select * from <database>.dbo.DatabaseAudit[/font]Of course you are free to enhance this TSQL to filter, sort, join etc.All auditable events at the server level are logged to the table dbadata.dbo.ServerAudit e.g. create login, add login to server role. You can see these very easily (latest first) with TSQL via:[font="Courier New"]select * from dbadata.dbo.ServerAudit[/font]Actually this will also give you all auditable events at the database level for all databases on the server too. This is because all auditable database level events are purposely audited twice. This means that the database level events are always kept with the database when backed up and restored etc but it is also easy to see what did happen to databases on the server which are no longer accessible because they have been detached, dropped, taken offline etc. Obviously restoring dbadata will bring back a different ServerAudit table but that is a very rare scenario.. A modicum of pragmatism excuses this shortcoming for the greater good of 99.99% useful auditing.All auditable events for all databases can be listed in one step using:[font="Courier New"]exec sp_MSForEachDB 'if ''?'' != ''tempdb'' select * from [?].dbo.DatabaseAudit'[/font][size="4"][b]High Level Design[/b][/size]The DDL Auditing Solution is comprised of several components descibed below. When you run one of the attached install scripts (see later) it will install these components:[b]server_audit[/b]SQL login used for all auditing processing. (Works OK on Windows Authenticated servers too.)[b]dbadata.dbo.ServerAudit[/b]Table to hold server level audit trail including a copy of each database level audit trail.[b]<database>.dbo.DatabaseAudit[/b]Table in each database to hold it own audit trail.[b]dbadata.dbo.ServerAuditTrigger[/b]DDL trigger which populates dbadata.dbo.ServerAudit. It also detects (SQL 2008 only) database attaches and runs the Setup DDL Audit job automatically in case the attached database does not have DDL auditing configured as expected. For SQL 2008 this trigger is fired for every server level and database level DDL event. For SQL 2005 this is only fired for server level DDL events.[b]dbadata.dbo.DatabaseAuditTrigger[/b]DDL trigger which populates dbadata.dbo.DatabaseAudit. For SQL 2005 this also inserts to dbadata.dbo.ServerAudit via INSERT trigger on dba.dbo.DatabaseAudit. [b]SQL 2008 Audit RESTORE DATABASE[/b]SQL Agent job which runs (in less than 1 second) every 1 minute to copy new restore database auditing information from msdb.dbo.restorehistory to dbadata.dbo.ServerAudit. If it finds that a database restore has happened but has not been audited it automatically runs the "Setup DDL Audit" job because there is a possibility that the restored database is not configured for DDL auditing as expected. [b]SQL 2005 Audit RESTORE/ATTACH DATABASE[/b]Audits database restores as for SQL 2008. Also audits database attaches. If it finds that a database restore or attach has happened but has not been audited it automatically runs the "Setup DDL Audit" job because there is a possibility that the restored/attached database is not configured for DDL auditing as expected.[b]Setup DDL Audit job[/b]SQL Agent job to incrementally install/fix DDL auditing where required. See installation section for more details. The job is used for multiple purposes and so started by several methods:1. During initial installation2. 
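The complete triggers ship in the attached zips; purely as an illustration of the general approach (not the attachments' exact schema — the audit table columns here are a simplification of my own), a database level DDL trigger built on EVENTDATA() looks roughly like this:

[code="sql"]
-- Minimal sketch of a database-scoped DDL trigger: shred EVENTDATA()
-- into an audit table. Column names are illustrative.
CREATE TRIGGER DatabaseAuditTriggerSketch
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @e XML = EVENTDATA();

    INSERT INTO dbo.DatabaseAudit (EventTime, EventType, LoginName, ObjectName, Tsql)
    VALUES (
        @e.value('(/EVENT_INSTANCE/PostTime)[1]', 'datetime'),
        @e.value('(/EVENT_INSTANCE/EventType)[1]', 'sysname'),
        @e.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname'),
        @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'sysname'),
        @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')
    );
END;
[/code]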
[size="4"][b]Installation[/b][/size]

The SQL 2005 install process is more complex than SQL 2008 – see the SQL 2005 Issues section.

The installation is one script split into two distinct parts – a one off install of SQL Agent jobs, and everything else. The installation script includes the code for the jobs and then calls the potentially newly created "Setup DDL Audit" job to complete the installation.

There are two scripts, install_ddl_auditing.sql and uninstall_ddl_auditing.sql. Both affect the server level and all databases in one invocation. The uninstall currently deletes all existing database level audit logs by dropping all <database>.dbo.DatabaseAudit tables. The dbadata.dbo.ServerAudit table is not dropped by any script, to avoid accidental deletion of the full server audit log. If you wish to drop dbadata.dbo.ServerAudit it must be explicitly dropped manually.

The install script is designed to be incremental so that it can be run periodically to install to new databases as required. For SQL 2005 there is also an element of fixing/avoiding known problems that can recur. The install process includes the creation of a SQL Agent job which will periodically run the install script.

Following initial installation the auditing should be tested. The following commands (also in script test_ddl_auditing.sql) will create and drop a test table in all databases. The operations should all appear in the audit tables (except, for SQL 2005, the model database at the server level).

[font="Courier New"]sp_MSForEachDB 'use [?]; print ''?''; create table aa_test_ddl_audit (col1 int)'
sp_MSForEachDB 'use [?]; print ''?''; drop table aa_test_ddl_audit'[/font]

The audit trails can be viewed with one or more of (also in script list_ddl_audit_trail.sql):

[font="Courier New"]-- Whole server
select * from dbadata..ServerAudit
-- Current database
select * from DatabaseAudit
-- All databases
exec sp_MSForEachDb 'if ''?'' != ''tempdb'' begin use ?; print ''?''; select * from DatabaseAudit end'[/font]

[size="4"][b]Stopping and Starting Auditing[/b][/size]

You may wish to stop auditing if it is causing a problem. If auditing fails it will cause the operation being audited to fail too. If you wish to stop and restart auditing, the easiest way is to disable the appropriate server level or database level triggers. This can be done independently using DISABLE TRIGGER commands. Three scripts have been written to automate this process for the whole server. You can run these scripts in any order and any number of times:

[b]status_ddl_auditing.sql[/b]
This script displays the current enabled/disabled status of the server level audit trigger and each of the database level audit triggers. You might want to run this before and/or after the next two scripts to see the status change. The output is self explanatory. See the attached script for TSQL. Combined SQL 2005/2008 pseudo code for the script is:

[font="Courier New"]Check existence of ServerAuditTrigger
For each database:
• Check existence of DatabaseAuditTrigger
• (SQL2005) Check existence of DatabaseAudit_i trigger
List enable/disable status of ServerAuditTrigger
For each database:
• List enable/disable status of DatabaseAuditTrigger
• (SQL2005) List enable/disable status of DatabaseAudit_i trigger[/font]

[b]disable_ddl_auditing.sql[/b]
This script disables the server level audit trigger and each of the database level audit triggers. The output is self explanatory. See the attached script for TSQL. Combined SQL 2005/2008 pseudo code for the script is:

[font="Courier New"]Disable ServerAuditTrigger
For each database:
• Disable DatabaseAuditTrigger
• (SQL2005) Disable DatabaseAudit_i trigger[/font]

[b]enable_ddl_auditing.sql[/b]
This script enables the server level audit trigger and each of the database level audit triggers. The output is self explanatory. See the attached script for TSQL. Combined SQL 2005/2008 pseudo code for the script is:

[font="Courier New"]Enable ServerAuditTrigger
For each database:
• Enable DatabaseAuditTrigger
• (SQL2005) Enable DatabaseAudit_i trigger[/font]

[size="4"][b]Permissions[/b][/size]

No permissions are required for any application service accounts, SQL service accounts or user groups. All permissions are assigned to the SQL login server_audit by the installation script. Note that the SQL login will work in an EXECUTE AS clause regardless of whether the server is configured to use Windows Authentication or SQL and Windows Authentication (aka mixed mode). It is not intended that anyone would ever log in to SQL via the server_audit account, but an initial password is set for it during installation. Post installation this should be set to a new strong password and recorded in the DBA passwords document. Note that the server_audit account has very little permission:
1. CONNECT permission to every database except tempdb
2. INSERT permission on the <database>.dbo.DatabaseAudit table
3. INSERT and SELECT permission on the dbadata.dbo.ServerAudit table. The SELECT permission is required to determine whether a RESTORE DATABASE event has already been audited or not.
4. Membership of the msdb SQLAgentOperatorRole so that the "Setup DDL Audit" job can be run expediently.

Typically only a SQL system administrator would need to look at the dbadata.dbo.ServerAudit table, so no permissions need to be added for other support groups, e.g. Application Support. Users wishing to access a <database>.dbo.DatabaseAudit table may need SELECT permission to be granted. This will be added on request. Often access to this table will already be possible via a role such as db_datareader or db_owner.

[size="4"][b]Known Issues (Mostly resolved by this solution)[/b][/size]

[b]@JobLogDir (Easily solved by installer)[/b]
The installation script sets a variable called [font="Courier New"]@JobLogDir[/font]. You will need to change the path to suit your environment.

[b]Detach Database (Not resolved)[/b]
A detach database operation is not audited at all because there is no DDL event for this.

[b]Different SID for server_audit login on restore database from different server (Solved)[/b]
If a database backup containing an enabled copy of the DatabaseAuditTrigger is restored to a different SQL Server instance, it will not be possible to run DDL commands against that database because the DDL auditing will fail.
This is because the SID for the server_audit user from the backup source server (see <database>.sys.sysusers.sid) will be different to the SID for the server_audit login on the new server (see master.sys.syslogins.sid). This can be easily corrected with the commands:

[font="Courier New"]use <database>
alter user server_audit with login = server_audit[/font]

This solution detects this problem and automatically resolves it.

[b]Attaching and Dropping Databases (Not really an issue)[/b]
Note that the CREATE DATABASE FOR ATTACH command and DROP DATABASE are audited at the server level only. This is not really an issue.

[b]Potential Temporary Audit Gap on Database Restore/Attach (Minor issue mostly solved)[/b]
There is a possibility that some events will not be audited for a minute or so when a database is restored or attached, if that database does not already have DDL auditing configured as expected. It will be automatically configured within 2 minutes of being restored or attached by one of:
• SQL 2008 – "Audit RESTORE DATABASE" job
• SQL 2008 – ServerAuditTrigger
• SQL 2005 – "Audit RESTORE/ATTACH DATABASE" job

[size="4"][b]SQL 2005 Issues (Mostly resolved by this solution)[/b][/size]

[b]SQL 2005 Has Fewer DDL Events (Unsolvable)[/b]
Far fewer events are audited in SQL 2005. There is nothing we can do about this; some auditing is better than no auditing. One minor side effect is that the install of the auditing tables and trigger is not audited as it is for the SQL 2008 install.

[b]SQL 2005 ddl_events tree is hard to use (Solved)[/b]
It is not possible to audit server level events from the most obvious place in the event group tree – ddl_events. There is a bug which disallows this. Instead it is necessary to list all child events of ddl_events. SQL 2008 does not have this bug, so all events are specified at once via ddl_events.

[b]Database Level Events Not Also Raised At Server Level (Solved)[/b]
No database level events are also raised at the server level as they are for SQL 2008. However, this has been implemented via a DML insert trigger on the DatabaseAudit tables. This does work, but the trigger DatabaseAudit_i attempts to write to dbadata.dbo.ServerAudit, i.e. to another database. Note that the security context is switched to server_audit when the trigger runs, via the EXECUTE AS clause. There are some problems relating to the use of the EXECUTE AS clause across databases:
1. All databases must have the trustworthy property set to ON. It is off by default.
2. It is not possible to set the model database trustworthy property, so any DDL changes to this database cannot be duplicated to dbadata.dbo.ServerAudit.
3. All databases' owner SIDs must be correct. They have often been wrong during development and testing of this DDL audit capability.

To address these issues the install/fix process (Setup DDL Audit job):
1. Sets the trustworthy property for all databases to ON when required and possible.
2. Sets the owner of all databases to sa – this SID is always the same across servers. This will be fine even if the sa account is disabled. It also creates a new database user for the previous dbo and adds it to the db_owner role, so that the previous dbo account can access the database with the same permissions as before.
3. The trigger DatabaseAudit_i checks the trustworthy property before attempting to insert to dbadata.dbo.ServerAudit. This is required because:
a. model cannot be trustworthy
b. the trustworthy property is always set to false when a database is created, restored or attached.

Note that the install process (SQL Agent job "Setup DDL Audit") is incremental and is automatically run periodically and when triggered. This will set the trustworthy property ON if any are set to OFF, as well as correcting the owner SID and installing the tables and triggers if required. With all these measures in place it should be very rare that an auditable event is not logged at the database and server level. The model database is very rarely changed anyway.

[size="4"][b]Conclusion[/b][/size]

We have seen a totally free DDL auditing solution (and source code) for the whole server, regardless of SQL Server version (2005/2008) or edition (Enterprise/Standard/Workgroup/Web/Express), which is very easily and quickly installed and is self maintaining. There is no need to worry about manually installing the solution into new databases, whether they are created, restored or attached. The audit trail is held at both the server level and the database level, so the database level audit trail will always travel with the database even if it is restored to a different server. Despite the shortcomings of SQL 2005, the solution is almost identical between SQL 2005 and SQL 2008 thanks to the carefully coded workarounds in the SQL 2005 version. Give it a try – I'm sure you will be pleased that you did.

SQL server version Requirements For installing Dotnetnuke

Posted: 06 Mar 2013 10:55 AM PST

What are the server requirements in order to install DotNetNuke? Can anyone kindly help me evaluate the technical requirements below for installing an application using DotNetNuke?

DotNetNuke System Requirements:
- Supported Operating Systems: Windows Server 2003, Windows 2008, Windows 7, Windows 2008 R2
- Web Server: Microsoft IIS 6.0, 7.0, 7.5
- Database Server: Microsoft SQL Server 2005, Microsoft SQL Server 2008, Microsoft SQL Server 2005 Express, Microsoft SQL Server 2008 Express
- .NET Framework: 3.5 SP1, 4.0

Please recommend a SQL Server version and .NET Framework environment for the installation. Many thanks,

Result of dynamic sql with parameteres into a variable

Posted: 06 Mar 2013 03:22 PM PST

Hello, I have a problem with dynamic SQL in a cursor whose result I want to assign to a variable.

First I had a problem when I wanted to use an int variable; there was a problem assigning varchar to int, which I solved with SET @WORK_SEQ1 = CAST(@WORK_SEQ as varchar(2)). Then I had a problem with the error "Must declare the scalar variable "@RESULT_COUNT"." I found something on the internet and tried executing @sqlStr like this:

[code="sql"]exec sp_executesql @query=@sqlStr, @params=N'@RESULT_COUNT INT OUTPUT', @RESULT_COUNT=@RESULT_COUNT OUTPUT
PRINT @RESULT_COUNT[/code]

But now I get this error:

Msg 214, Level 16, State 2, Procedure sp_executesql, Line 1
Procedure expects parameter '@statement' of type 'ntext/nchar/nvarchar'.

All the code is below:

[code="other"]SET @sqlStr = 'DECLARE myCursor CURSOR FOR
SELECT LINE_CODE, ORDER_DATE, COMMIT_NO, BODY_NO, STATION_ID, WORK_POS, WORK_QTY, WORK_SEQ, WORK_TYPE, ITEM_CODE, ALC_CODE, OPTION_VALUE, LIMITV_LOW, LIMITV_HIGH
FROM MCS_MESDB.dbo.TB_MASTER_' + @P_LINE_CODE + ' M (NOLOCK)
WHERE M.ORDER_DATE = ''' + @P_ORDER_DATE + ''' AND M.COMMIT_NO = ''' + @P_COMMIT_NO + ''' AND M.LINE_CODE = ''' + @P_LINE_CODE + ''' AND M.WORK_TYPE = ''N'' AND M.STATION_ID = ''' + @P_STATION_ID + ''' '
EXEC(@sqlStr);
OPEN myCursor
FETCH NEXT FROM myCursor INTO @LINE_CODE, @ORDER_DATE, @COMMIT_NO, @BODY_NO, @STATION_ID, @WORK_POS, @WORK_QTY, @WORK_SEQ, @WORK_TYPE, @ITEM_CODE, @ALC_CODE, @OPTION_VALUE, @LIMITV_LOW, @LIMITV_HIGH
WHILE @@FETCH_STATUS = 0
BEGIN
    DECLARE @RESULT_COUNT int
    DECLARE @WORK_SEQ1 varchar(2)
    SET @WORK_SEQ1 = CAST(@WORK_SEQ as varchar(2))
    -- PROBLEM IS HERE >>
    SET @sqlStr = N'SELECT @RESULT_COUNT=(SELECT CASE(COUNT(*)) WHEN 0 THEN 1 ELSE COUNT(*) END FROM MCS_MESDB.dbo.TB_RESULT_TOOL_ENG01 R WITH(NOLOCK) WHERE ORDER_DATE = ''' + @ORDER_DATE + ''' AND COMMIT_NO = ''' + @COMMIT_NO + ''' AND STATION_ID = ''' + @STATION_ID + ''' AND ITEM_CODE = ''' + @ITEM_CODE + ''' AND WORK_SEQ = ''' + @WORK_SEQ1 + ''' AND WORK_RESULT = ''OK'' )'
    exec sp_executesql @query=@sqlStr, @params=N'@RESULT_COUNT INT OUTPUT', @RESULT_COUNT=@RESULT_COUNT OUTPUT
    PRINT @RESULT_COUNT[/code]
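Msg 214 is usually caused by the statement argument being VARCHAR: sp_executesql only accepts an NVARCHAR/NCHAR statement. Declaring @sqlStr as NVARCHAR(MAX) (and passing the statement positionally, which sidesteps any doubt about the parameter's name) should clear it. A cut-down sketch with a trivial stand-in query:

[code="sql"]
-- @sqlStr must be N(VAR)CHAR for sp_executesql; the OUTPUT parameter
-- carries the result back to the outer scope.
DECLARE @sqlStr NVARCHAR(MAX);
DECLARE @RESULT_COUNT INT;

SET @sqlStr = N'SELECT @RESULT_COUNT = COUNT(*) FROM sys.objects;';

EXEC sp_executesql @sqlStr,
                   N'@RESULT_COUNT INT OUTPUT',
                   @RESULT_COUNT = @RESULT_COUNT OUTPUT;

PRINT @RESULT_COUNT;
[/code]

It would also be worth passing @ORDER_DATE and the other values in @params rather than concatenating them into the string; that avoids both the quoting gymnastics and SQL injection.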

Query to get rows if value contains result of another query

Posted: 04 Mar 2013 06:45 PM PST

Hi, I have been searching for a query that returns a row if a column value contains the result of another query. Something like: select * from table1 where col contains (select col1 from table2 where <condition>). It's just like using a subquery with IN, but I want "contains" instead. Is there any way I can do this? Thank you
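There is no CONTAINS operator usable this way outside full-text search, but a LIKE predicate in a semi-join expresses the intent. A sketch with the question's illustrative names (the t2.col2 filter is a stand-in for the <condition>):

[code="sql"]
SELECT t1.*
FROM table1 AS t1
WHERE EXISTS (SELECT 1
              FROM table2 AS t2
              WHERE t2.col2 = 'X'                      -- stand-in for <condition>
                AND t1.col LIKE '%' + t2.col1 + '%');  -- the "contains" test
[/code]

Note that the leading wildcard makes the predicate non-SARGable, so expect a scan on large tables.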

Synonyms and performance

Posted: 05 Mar 2013 11:37 PM PST

How does the use of synonyms affect performance? More specifically, are synonyms replaced with fully qualified names in execution plans? It would seem to me that any synonym has to be resolved to the actual object, so any query or procedure that doesn't have a cached plan is going to have an extra step in mapping the synonym to an actual object. I'm thinking there would be a very small performance hit in this, perhaps a slightly larger hit when the synonym references an object in a different database, and slightly larger still when it is on a different server. Do synonyms affect SARGability in any way?
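A quick experiment worth running (object names here are illustrative): create a synonym, query through it, and inspect the actual execution plan. In my experience the plan references the base object, which suggests the synonym is resolved during compilation rather than on every execution — but verify against your own plans:

[code="sql"]
CREATE SYNONYM dbo.OrdersSyn FOR dbo.Orders;

SELECT *
FROM dbo.OrdersSyn      -- the plan should reference dbo.Orders
WHERE OrderID = 42;     -- an indexed predicate remains SARGable

DROP SYNONYM dbo.OrdersSyn;
[/code]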

Just accepted a Sr. DBA job. Time for a gut check?

Posted: 03 Mar 2013 12:46 PM PST

Hi SQLSC!Buckle up, I apologize in advance for the long post. In many ways, typing it out is as much as my own sanity as it is for me asking for everyone's advice. I hope you take the time to read it and comment; I am need of advice.First off, I'm a new poster in the forums, but not to the forums or SQL Server central itself; I've been a lurker here for some time, and I have to say that I am routinely impressed with the knowledge level, maturity, and overall helpful attitude of all the members that ask questions and provide solutions here. Which is why I'm here, because I need some advice from the people who know the most about my "dilemma."I recently interviewed for, was offered, and accepted a senior database administrator role. And now I'm having a form of "buyer's remorse" so to speak, for two main reasons:1. I've been at my current employer for almost 10 years, and2. I am wondering if my skills are where they need to be for this role.Aside from the fact that I'm leaving a relatively stable and secure job for a new role at a new company where I'll be "the new guy" (which is stressful enough), I have a terrible fear of failure; I know that's a "me problem" and it's not confined to my job. I feel this way about a lot of things, and truthfully I should probably be medicated :w00t:Let me provide a little background:I started my career as an "accidental DBA" back in 2002. At the time, I was tasked with creating a local web page for people to post articles, announcements, and so forth for everyone in the building to see. I went to school to be a software developer, and I go swept up in the web application development craze while there. I started out coding pages in PHP with a MySQL back-end, and graduated to active server pages with Microsoft Access databases that I "inherited" when other people left the company. In 2004 I obtained my first SQL Server instance: a SQL server 2000 installation that I ended up migrating all my legacy application databases to.I had never really used a full DBMS before, so naturally I started out slow, learning the system and how it worked. I built tables and views, and I even messed around with DTS packages to automate some of my tasks. I learned how the transaction logs work (the hard way, when thew grew out of control) and how to manage them. I learned the value of backing up databases, and the value developing a solid disaster recovery plan (and making sure people knew how to execute it when I was not around).From there, I made the switch to ASP.NET, coding in C#. I recoded lots of applications, and around 2007 I took the opportunity to migrate to a brand new instance of SQL Server 2005. I started coding TSQL stored procedures and functions, and started the best practice of using them for the actual business logic and SQL code instead of embedding it in my applications. I got to work with SSIS, which I love, and continue to use for many things. And up until recently, I wrote many complex stored procedures with CTE's, temp variables and table variables, and the like. I've also followed all the best practices I've can in regards to "least required" access rights, creating application log ins, restricting users to certain schemas based on business needs, and even using "EXECUTE AS" when needed, both with local SQL server log ins and domain ID's. And today, I manage 3 severs (2 virtual 2008 instances and 1 physical 2005 instance) that my applications all use to store, read, and execute on data.So why leave? 
Well in my current role, I'm not really considered to be in "IT." I'm in a position where I'm part of the operations of the business because of my diverse skills; I get to work with IT, though, in securing new instances of SQL server hardware, VM's, or other developers in getting access to other systems and/or providing access to my own database servers. In many ways, I feel my development background gives me a proclivity to working with TSQL and developing complex queries. Coupled with the fact that I've been writing SQL for almost 10 years now, I feel I have a pretty solid handle on what makes good, quick-executing queries and what doesn't. And I love SQL server; working with data sets is something I truly enjoy. In many ways, I feel my development background gives me a proclivity to working with TSQL and developing complex queries. Coupled with the fact that I've been writing SQL for almost 10 years now, I feel I have a pretty solid handle on what makes good, quick-executing queries and what doesn't. And I love SQL server; working with data sets is something I truly enjoy.But I found myself at a cross-roads; while I'm viewed as a "go-to" person for solutions, I felt I've hit a ceiling. Since I wasn't IT, I knew I wasn't going to be able to learn, say, SQL Server 2012, or be able to attend classes for certification or training; it's just not in the budget for the area of the company I worked for. And in some ways, my job could clearly be viewed as "redundant" since it's an IT-based role in the non-IT division. It's been tried before, and there were many times I was told (in confidence) that my job had to be justified at the highest levels. Which is always good to hear...At any rate, I started looking. I felt I had two career paths: to continue as a developer, or to look for database administrator jobs. Again, being self taught, I felt my development skills are "behind the curve" a bit (seriously, does anyone want web forms anymore? Everything is MVC now), I decided to look for DBA roles. It didn't start out good; the first job I interviewed for (which was an ETL-type role) I interviewed sat me right down and had had me do some "live" testing by developing some SSIS packages in front of some of the other staff. I was a little flustered, and while I was able to complete the tasks, I think they felt I was a little under-prepared and passed on me. The second job I interviewed for was even worse; I never got to talk to the hiring manager. HR brought me in, and sat me in front of a computer terminal with some testing software. I bombed horribly, and was told that my skills were sub-par.So was starting to feel defeated; I studied up, and practiced at my current job, learning as much as I could. So when I went to my third interview, I wanted to be better prepared. But when I got to my third interview, there was no testing at all; First, there was a phone interview where they asked me some questions about what I did now. I told them basically my story above: that I was an application developer that had to work with SQL server. They wanted to hear more, so I gave them all the answers I could. They asked some questions that I felt I didn't have good answers to, and I told them so. The interview lasted a half an hour, and I hung up the phone thinking "that went well, but there are probably better qualified people."Imagine my surprise when, almost 10 minutes later, my phone rang. 
I was my recruiter; he told me that the hiring manager and current DBA were "blown away" with my experience and wanted me to come in for an interview asap. He told me that many people didn't even make it past the phone interview! So I was psyched, but nervous: what if I bombed another test? I shared my worry wit my recruiter; he told me just to do my best.I showed up to the in-person, and first met with the hiring manager in person. We talked for almost an hour, but it was more about the kind of person I was and what I liked. Why did I want the job? In effect, he was measuring up my personality, which, as a person who had hired some analysts in the past, I appreciated. From there, I met with the members of his team: server administrators, storage administrators, the security team, active directory administrators, exchange team... the meetings went great. I got to spend some time with the current DBA (who, I should mention, has been there two years and I am going to be working WITH, not replacing). Finally, I met with members of the development staff. This was the most technically demanding part of the interview; they grilled me on some SQL stuff ("Would you ever use a cursor?" "What's a left join?"), and and some application development stuff ("Ever worked with Hadoop?") and also about my current release management. I gave the best answers I could.When we were finished, I glanced at my watch, I was there about 3 hours! I was escorted back down to HR where I met with the HR senior manager. She talked more about the company and the benefits, and then she told me that she wanted to make me an offer right then and there. I was floored. I asked her if maybe she should get some feedback from the team to make sure, to which she replied "I wouldn't be making this offer if every single person you met with didn't come down and say 'hire this guy.'"I had a few days to think about it, and I decided to accept; the company seems great, it's growing, and the culture is very appealing. I'll have access to a formal training and certification budget to keep my skills sharp, and I get to get back to an pure IT role.Sounds great, right?Well, back to my dilemma: since accepting, I am having a pure panic attack. Do I really have the skills to do this? There are certain things in the environment that will be new to me, such as:1. The scale: there will be many servers to manage, much more than I've had to work with in the past.2. The size of the instances: While the company is MUCH smaller than where I was, they have massive amounts of data. My tables maybe held 10's of thousands of rows, but some of their production databases hold 100's of thousands of rows.3. They utilize a HA set up on a cluster; I don't have any experience with those.4. They have replication set up on a few of the production databases to create an instance for the BI and operations teams to query; I've never set up or changed replication schemes.Also, I've always been the person who's had to develop the application code as well as the TSQL code to support it, I've never really had to troubleshoot major performance issues; that's not to say that I don't understand query execution plans, but I haven't had to use database tuning adviser, or SQL Server profiler to really dig into a query issue. If something wasn't working I could pretty easily identify where an index was needed or a procedure that needed re-written. 
So I don't have a firm grasp on what the "best" way to troubleshoot performance issues is. But part of the reason they liked me for this role was my development experience; the current DBA is bombarded with questions from the various development teams asking him for help with their queries. The fact that I have development experience is why they selected me: to help ease the burden on the current DBA, and to suggest different ways of accomplishing tasks to the teams. They are undergoing a massive overhaul of many of their systems, trying to position the company's information systems for future growth. I'm not exaggerating when I say I haven't been sleeping well; I feel like I don't have all the skills I need for this job, and that I'll fail and regret giving up what I had. Ever since accepting, I've been reading all my books, revisiting the Stairway articles here, and reading and testing myself to make sure I'm ready. I start the job in a couple of weeks, and I want to make sure I'm ready. My absolute low point came last night: I had recently bought the Microsoft Press test prep books for two of the certifications. I read the books, studied a bit, and took the practice tests: I failed both of them. Cue panic. If you've read this far, I want to ask for your advice: do you think I accepted this role too hastily? Am I missing critical job skills? Are there areas you would suggest boning up on? I had an opportunity to speak to the current DBA, and he doesn't seem concerned about my skill set, but I don't want to be another person who burdens him with questions when I don't understand something. Or am I over-thinking it? Should I just have faith in my abilities and move forward? It all boils down to this: I want to come in and be successful. I've been good at my job in the past, and this will be a change for me. In six months (or less!) I don't want them to think they made a mistake in hiring me. I really feel the company I'm going to is a good one, with a very bright future, and that the role is a positive step for my career. Having typed all that out, what do you all think? Did you go through something similar? Any advice or comments are welcome, and I thank you all for your time and thought. I'll certainly be monitoring the thread, so if there are other things you want to know about me or my skills, feel free to ask!

Custom print?

Posted: 06 Mar 2013 12:14 AM PST

Has anyone built or seen a custom PRINT function for SQL Server? I'm running into the 4000-character limitation, and before I go ahead and build one of my own I was wondering if anyone already had one. Basically, the problem I'm running into is that my nvarchar(max) is way over 4000 characters and I need to print it in pieces, extracting 4000-character chunks from it each time. Thoughts?
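For illustration, here is a minimal sketch of the chunked approach I have in mind (the variable @s is hypothetical and stands in for the long string):

[code="sql"]DECLARE @s NVARCHAR(MAX) = N'...';  -- the long string to print (assumed)
DECLARE @pos INT = 1;
WHILE @pos <= LEN(@s)
BEGIN
    -- PRINT truncates nvarchar input at 4000 characters, so emit one chunk at a time
    PRINT SUBSTRING(@s, @pos, 4000);
    SET @pos += 4000;
END[/code]

One caveat with fixed-size chunks is that they can split a line of text mid-way; splitting on the last line break before the 4000-character mark would be friendlier.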

Granting Explicit View Definition Permissions on Stored Procedure to dbo

Posted: 06 Mar 2013 06:48 AM PST

The developers in our shop need to explicitly grant VIEW DEFINITION permissions to themselves on stored procedures they create in their development databases. They have dbo-level permissions in these databases, and although they can explicitly grant VIEW DEFINITION to other developers in the same database, they are unable to do so for themselves. When they attempt this it appears to succeed, but when they check the stored procedure afterwards the permission is not there for themselves. While this does not cause an issue in development, the intention is for these VIEW DEFINITION permissions to be carried forward to the test and production databases, where they only have datareader permissions. When these stored procedures are scripted out by the DBA to move to test and production, the VIEW DEFINITION permissions are not scripted out for the developer in question. Is there a way for a developer with dbo rights in a database to explicitly grant themselves VIEW DEFINITION permissions on a stored procedure they create as dbo?
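For reference, the grant that works for other principals looks like this (names hypothetical); the question is whether a developer acting as dbo can make the equivalent grant stick for themselves:

[code="sql"]GRANT VIEW DEFINITION ON OBJECT::dbo.MyProc TO [OtherDeveloper];[/code]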

Virtualization reviews

Posted: 20 Feb 2013 10:35 PM PST

I have to assess a server landscape: server reviews, virtualization reviews, OS/patch-level reviews. Can anyone help me out regarding virtualization reviews? What does it mean? Where and how do I start? What questions do I need to ask the client? Are there any documents or articles available on the internet?

SQL Server 2008 stops when dbo.Decrypt('') is used

Posted: 06 Mar 2013 03:46 PM PST

Hi there, we use Microsoft [b]SQL Server Enterprise Edition (64-bit), version 10.50.1600.1[/b] in our company. Whenever I use dbo.Decrypt() with a blank string parameter, which sometimes happens in SPs, the server instance [b]STOPS[/b] suddenly and needs to be restarted from Configuration Manager. The exact query that stops the server is '[b]Select dbo.Decrypt('')[/b]'. Can anyone shed some light on this problem? Is this a SQL Server bug or something specific to our setup?

Dropping or moving old tables

Posted: 06 Mar 2013 02:13 PM PST

How do I validate the last updated date for each of the tables? I want to check whether the tables are updated in real time. How do I check whether the tables to be moved or deleted have any dependencies? Please include any suggestions for how to drop these tables. The reason for moving or deleting these tables is that we are running out of space on that server.
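As a possible starting point (a sketch, not a definitive method): sys.dm_db_index_usage_stats records the last write to each table since the instance last restarted, so something like the following can approximate a "last updated" date:

[code="sql"]SELECT OBJECT_NAME(us.object_id) AS table_name,
       MAX(us.last_user_update) AS last_update
FROM sys.dm_db_index_usage_stats AS us
WHERE us.database_id = DB_ID()
GROUP BY us.object_id
ORDER BY last_update DESC;
-- Tables absent from this DMV have not been read or written since the last restart.[/code]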

Asking If I can Create a table for prices

Posted: 06 Mar 2013 02:52 PM PST

Hi, I want to ask if I can use just one table for both prices and cities. In the beginning I created just a Products table, but I have different prices by city (each city has a particular price). After that I added two tables, one for cities and one for prices, but I think I don't need the Cities table; I'm asking if I can create one Prices table with a city field. My database NOW contains: Table Products: ProductID, ProductCode, ProductName, CityID, SubCategoryID... Table Cities: CityID, PriceID, CityName. Table Prices: PriceID, Price. Table OrderDetails: OrderDetailsID, OrderID, FK ProductID, UnitPrice. Table Orders: OrderID, CustomerID, ...

Error 7391; Unable to Begin a Distributed Transaction

Posted: 05 Nov 2010 12:39 AM PDT

Hey all, I am trying to get a list of error messages on a server using the sp_readerrorlog stored procedure. I have a central server A and I am trying to get the information from server B.

exec [server B].model.sys.sp_readerrorlog :-)

The above statement works well and gets the data, but it throws a transaction error when I try to insert into a temp table on server A. Here is the code running on server A:

create table #ErrorLog (LogDate datetime, ProcessorInfo varchar(1000), Text varchar(5000))
insert into #ErrorLog
exec [Server B].model.sys.sp_readerrorlog

This throws the transaction error below:

OLE DB provider "SQLNCLI" for linked server "BHSF-BTR-W318" returned message "The partner transaction manager has disabled its support for remote/network transactions.".
Msg 7391, Level 16, State 2, Line 3
The operation could not be performed because OLE DB provider "SQLNCLI" for linked server "BHSF-BTR-W318" was unable to begin a distributed transaction.

Any suggestions? Thanks for the help
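For anyone comparing notes: one commonly suggested workaround (an assumption on my part that it applies to this setup) is to stop the linked server call from being promoted to a distributed transaction:

[code="sql"]EXEC sp_serveroption 'BHSF-BTR-W318', 'remote proc transaction promotion', 'false';[/code]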

Getting minimum of top n rows without using subquery

Posted: 26 Feb 2013 04:18 PM PST

Hi, we use SELECT min([date]) FROM table WHERE [date] IN (select top(100) [date] from table order by [date]) to select the minimum date from the top hundred rows. Is there another way to do the same task in a single query without using a subquery in IN? Thank you
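For comparison, the same result can be written with a derived table instead of IN; strictly speaking it is still a subquery, but it avoids the second scan of the table:

[code="sql"]SELECT MIN(t.[date]) AS min_date
FROM (SELECT TOP (100) [date]
      FROM [table]
      ORDER BY [date]) AS t;[/code]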

SQL Installation Issue

Posted: 06 Mar 2013 12:55 PM PST

Hi all, I tried to install SQL 2008 Enterprise evaluation edition on Windows 2008 R2 Datacenter edition. The SQL engine installation failed due to a Kerberos authentication issue, but the shared components and other prerequisites installed successfully. Later I uninstalled SQL from the Maintenance tab of the installation home page. [u]My issue is[/u]: I then tried installing SQL 2008 R2 Enterprise on the same box. The DB engine installed fine, but not the client components. While installing, at the database components step, it does not allow me to select the shared components; they show as selected by default, but they do not exist on the box. Can anyone please help me get out of this situation?

Insert help

Posted: 06 Mar 2013 09:51 AM PST

I am trying to insert records into the database as follows:

insert into lookuptable
select * from lookuptable1

which produces:

Msg 2627, Level 14, State 1, Line 1
Violation of UNIQUE KEY constraint 'uqLookuptable1cols1'. Cannot insert duplicate key in object 'dbo.lookuptable'.
The statement has been terminated.

Is there a way to handle this error so I can carry on inserting the rest of the records into the table?
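A minimal sketch of one way to skip the duplicates rather than trap the error, assuming the unique key is on a single column I'll call keycol (adjust for the real key columns):

[code="sql"]INSERT INTO lookuptable
SELECT s.*
FROM lookuptable1 AS s
WHERE NOT EXISTS (SELECT 1
                  FROM lookuptable AS t
                  WHERE t.keycol = s.keycol);  -- keycol is hypothetical[/code]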

Read a backup file to find the version

Posted: 06 Mar 2013 04:57 AM PST

Is it possible to read a backup file directly and determine the version of the server that created it? I've been Googling for a while but haven't been able to find anything. I did find a utility (SQL BAK) that can do it, but I need to include this capability in my app. Does anyone know how to read a backup file?
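If running T-SQL against the file is an option (as opposed to parsing it inside the app), the backup header already exposes the originating version; the path below is hypothetical:

[code="sql"]-- The DatabaseVersion and SoftwareVersionMajor/Minor/Build columns identify the source server
RESTORE HEADERONLY FROM DISK = N'C:\backups\mydb.bak';[/code]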

How to Check for Upcoming SQL Server Login (not Windows login) Password Expirations

Posted: 06 Mar 2013 03:15 AM PST

SQL Server 2008 has an "Enforce password expiration" checkbox on the Login Properties window that will expire a login after x number of days, but I don't see a "password last changed date" column in the sys.syslogins table. Can anyone tell me how SQL Server tracks upcoming SQL Server login (not Windows login) password expirations? Ideally, I would like to write a query that identifies the logins that will expire 30 days from now, so that we can take proactive measures to change the passwords without applications (like ColdFusion) having unexpected downtime due to expired passwords.
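A sketch of the kind of query I have in mind, using LOGINPROPERTY, which exposes the password-change data that sys.syslogins lacks:

[code="sql"]SELECT name,
       LOGINPROPERTY(name, 'PasswordLastSetTime') AS password_last_set,
       LOGINPROPERTY(name, 'DaysUntilExpiration') AS days_until_expiration
FROM sys.sql_logins
WHERE is_expiration_checked = 1
  AND CAST(LOGINPROPERTY(name, 'DaysUntilExpiration') AS INT) <= 30;[/code]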

SSRS custom assembly using old ActiveX dll

Posted: 05 Mar 2013 11:44 PM PST

Hi all, need some clues... Scenario: I have created a .NET assembly based on an old VB6 ActiveX library (.dll). Why? Because I'm working with an old system and we don't have the source code. I've modified the rssrvpolicy.config file the same way I did my RSPreviewPolicy.config. On the server that is hosting SSRS, I've registered the ActiveX DLL. In SSDT my test report runs fine and I get a value from the new .NET assembly. When I run it from the server I get "#Error" in the text boxes. Setup: SSRS 2012; ActiveX DLL registered on the server; code groups added for both the .NET assembly and the RCW wrapper with FullTrust (rssrvpolicy.config, RSPreviewPolicy.config); RCW wrapper created with tlbimp.exe; SecurityPermission(PermissionState.Unrestricted) asserted; assembly marked AllowPartiallyTrustedCallers. Question: why would it work in SSDT but not from SSRS Report Manager? It's having problems getting to the old registered ActiveX library... is there another permission I need to raise? Help is greatly appreciated...

Indexed views

Posted: 06 Mar 2013 05:52 AM PST

Hi all, I want to create indexed views, but I don't want those heavy clustered indexes on the tables the view is created over. Can I create unique non-clustered keys with a "not null" filter for the tables' keys instead of the clustered keys, and then create a clustered key on the view? Thanks in advance, IgorMi

CDC records Updates as Insert and Delete

Posted: 06 Mar 2013 05:41 AM PST

I am experimenting with using CDC to perform auditing in one of our user databases. If I perform an INSERT or UPDATE statement in SQL Server Management Studio, the operations are recorded correctly as 2 (INSERT) and 3 (BEFORE UPDATE) and 4 (AFTER UPDATE) but when I use our VB.net application, which passes a table-valued parameter to a stored procedure which then performs an UPDATE statement, cdc records the operations as 1 (DELETE) and 2 (INSERT). Has anyone run into this before? How can I get the stored procedure execution to record the update correctly?Thanks.

Design Builder Missing

Posted: 06 Mar 2013 02:25 AM PST

Can someone please help me figure out why my Designer tab is not showing up? When I open an existing .RDL it opens as XML data and gives me no option to design. What have I done, and how do I rectify the situation?

LiteSpeed impact on msdb database?

Posted: 05 Mar 2013 07:13 PM PST

We have a customer who wants to use LiteSpeed for the backup process on a few SQL Server 2008 EE machines. We have to deploy our product and we are considering system database sizing. What impact will LiteSpeed have on msdb (considering that msdb is used for backup and restore history, job definitions, ...)? More precisely, what does LiteSpeed store in msdb: what information, jobs, ...? Thanks

Repeating code over time period

Posted: 06 Mar 2013 12:40 AM PST

Hi guys, I have a code snippet that runs every month; it has to take a union of the current data and the previous 11 months' data and put it in Excel to prepare a report. I have to create a table with the current month as a suffix, and there are changes to be made to date variables. Any suggestion how I can avoid making changes to the code and generate reports based on the current month? (It should automatically create a table based on the current month and take a union with the previous 12 months' data only.) Your help is appreciated! Regards, Prasanna
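A rough sketch of the dynamic-SQL direction this could take (table and column names are hypothetical):

[code="sql"]DECLARE @suffix CHAR(6) = CONVERT(CHAR(6), GETDATE(), 112);  -- e.g. '201303'
DECLARE @sql NVARCHAR(MAX) =
    N'SELECT * INTO dbo.Report_' + @suffix + N'
      FROM dbo.CurrentData
      WHERE ReportDate >= DATEADD(MONTH, -12, GETDATE());';  -- last 12 months only
EXEC (@sql);[/code]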

SQL migration

Posted: 05 Mar 2013 11:01 PM PST

What's the best way to copy logins and users (including logins with passwords and their exact enabled/disabled status) from a source 2008 server to a destination 2008 server?

Copy a big table (250G) from one server to another

Posted: 04 Mar 2013 04:56 AM PST

Hi, I need to copy a big table (about 250 GB, basically used to store images) from one MSSQL Server to another. I don't need to transform the data, but I don't know if this command will work:

INSERT INTO Temp SELECT * FROM [SourceServerName].DBName.dbo.Temp

Thanks for your help
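The four-part INSERT ... SELECT should work over a linked server, but for 250 GB a batched loop is often suggested so the transaction log doesn't balloon. A sketch, assuming an ascending key column Id and a hypothetical column list:

[code="sql"]DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO Temp (Id, ImageData)  -- hypothetical columns
    SELECT TOP (10000) s.Id, s.ImageData
    FROM [SourceServerName].DBName.dbo.Temp AS s
    WHERE s.Id > (SELECT ISNULL(MAX(Id), 0) FROM Temp)  -- resume after the last copied row
    ORDER BY s.Id;
    SET @rows = @@ROWCOUNT;
END[/code]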

Wednesday, March 6, 2013

[how to] Minimising downtime implementing merge replication on live system

[how to] Minimising downtime implementing merge replication on live system


Minimising downtime implementing merge replication on live system

Posted: 06 Mar 2013 09:01 PM PST

Background

We have three Servers running SQL Server 2008 which all contain a Product database. These are currently kept in sync once per day by sending a copy of the MDF to the two Subscribing servers and detaching the existing DB and replacing it with the latest copy.

The two Subscribers contain a Customer database which is not present on the Publisher. The Customer database contains customer purchase information, so it needs to reference ProductIDs in the Product table.

We have implemented Merge Replication on a similar set up in our testing environment and are preparing to implement these changes on our live system.

The end result should look something like: http://puu.sh/2dl2z (Apologies for the linked image, I can't paste images due to a lack of reputation).

The Problem

In order to implement these changes, we will need to generate a snapshot and synchronise the data between the Publisher and the Subscribers. While this is occurring, any client application which relies on the data in the Product database will fail, and any Stored Procedures on the Customer database which make cross database joins will also fail.

We'd like to keep this downtime to a minimum or eliminate it completely.

We've had some crazy ideas thrown around which may or may not work (using synonyms to point a subscriber to another server's Product table while snapshot generation/synchronisation takes place) but they don't seem very elegant, if they'll work at all. I'm hoping someone else has had a similar issue and might have some insight :)

PostgreSQL user can not connect to server after changing password

Posted: 06 Mar 2013 08:42 PM PST

I've met this with 4 roles I created:
After changing password for a user in pgAdmin III using the GUI (1), that user can not log in any more.
pgAdmin III show error message:

An error has occurred:    Error connecting to the server: FATAL:  password authentication failed for user "sam"  FATAL:  password authentication failed for user "sam"  

My system: Postgresql 9.2 on Ubuntu 12.04

Is there any way to fix this? Thanks.

(1): login with account postgres, right click user in Login Roles, go to tab 'Definition' and enter password

mySQL Quickly find rows linked to another row

Posted: 06 Mar 2013 08:30 PM PST

For example, if I have a posts table with id and a likes table with id, post_id, user_id, how can I set up the database to organize the likes table by post_id so queries are faster? I can't make post_id the primary key because there is more than one like for each post.
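For what it's worth, the kind of setup I'm imagining is a composite index so that all the likes for one post sit together in the index (index name hypothetical):

ALTER TABLE likes ADD INDEX idx_post_user (post_id, user_id);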

Log Shipping - RESTORE WITH STANDBY - on SQL Server 2012 keeps breaking

Posted: 06 Mar 2013 08:38 PM PST

We are using log shipping and RESTORE WITH STANDBY on SQL Server 2012 in order to restore the database in read-only mode for reporting purposes. However, the log shipping setup keeps breaking after completing a restore of one or two log backups. Log-shipping only breaks when it is running as RESTORE WITH STANDBY; RESTORE WITH NORECOVERY does not cause any problems.

My only intuition about this is that the primary database is not that dynamic. Therefore, when there are no transactions, this causes issues with the RESTORE process, maybe?

Any ideas, known fixes?

I had it working for a few days by running a regular job that does heavy updates on two tables. When the job stopped running, the log shipping setup quickly failed, unable to process the .trn file. I reset log shipping and tried to see if it would keep running with just a small update, changing the value of one column of one record in a table; however, it still failed.

Thanks for all your responses.

Automatically create linked row in a different table

Posted: 06 Mar 2013 07:41 PM PST

What I mean is this: if I have, for example, a table user with id, name, prefId, and a second table preferences with id, etc., is it possible that when I add a row to user, it would automatically create a row in preferences and set that row's id as prefId? Or should I create the preferences row first and manually set its id? And is this good practice?
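One shape this could take, as a sketch only (I don't know whether it is good practice, which is part of the question), is a BEFORE INSERT trigger that creates the preferences row and points the new user row at it:

DELIMITER //
CREATE TRIGGER user_create_prefs
BEFORE INSERT ON `user`
FOR EACH ROW
BEGIN
    -- create the linked preferences row, then store its id on the new user row
    INSERT INTO preferences () VALUES ();
    SET NEW.prefId = LAST_INSERT_ID();
END//
DELIMITER ;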

Why does myisam mysql table indexes go out of date?

Posted: 06 Mar 2013 08:08 PM PST

I have a few MyISAM tables where only the auto-increment primary index stays up to date with the table's row count, but the other indexes do not. Any idea why this happens?

Ways to make this stored procedure run faster and optimaised in SQL SERVER for ASP.NET?

Posted: 06 Mar 2013 06:18 PM PST

I have a very complex stored procedure which runs very slowly and takes a long time to respond. I am using it to build a form, working with ASP.NET Entity Framework. I am wondering whether there are better ways of optimizing this to make it run faster (views, single selects at the code level with multiple queries, etc.).

here is the stored procedure ..

ALTER PROCEDURE [dbo].[_t_ORDER_SHEET_GoodsReceiving]    @COM_KEY int = -1,  @BS_KEY int = -1,  @StartDate VARCHAR(20) = '',  @EndDate VARCHAR(20) = '',  @status int = 0,  @cpokey int = -1,  @po varchar(50) = '',  @invno varchar(50) = '',  @dlvemail varchar(50) = ''  AS  BEGIN  IF @po <> ''  BEGIN  SSELECT     s.[OS_KEY] as OS_KEY,[OS_CPO_KEY],[OS_PO_NUMBER],[OS_ACCOUNT],[OS_DATE_APPROVE], [OS_DATE_SUBMIT], OS_COM_KEY, OS_BS_KEY, OS_USER_ORDER, OS_USERID_ORDER, OS_USER_APPROVE, OS_USERID_APPROVE,                 IsNull([OS_TOTAL_ITEMS],0) AS OS_TOTAL_ITEMS,[OS_RECEIVED_GOODS],[OS_SUPPLY_STORE], OS_DLV_EMAIL, OS_DLV_STREET, OS_DLV_SUBURB, OS_DLV_POSTCODE, OS_DLV_STATE, OS_DLV_CONTACT, OS_DLV_PHONE,                 [OS_RECEIVED_DATE],                  IsNull(RXQty,0) as RXQty, SentQty,RxDate, InvAmt, RXAmt, Extra,      d.[DSP_KEY],[DSP_PO_KEY],[DSP_BS_KEY],[DSP_SHIP],[DSP_DATE_SENT],[DSP_TIME_SENT],[DSP_SR_CANCELLED],[DSP_DATE_RETURNED],[DSP_DAYS_TO_DESP],[DSP_CARTONS],[DSP_TY_PACK_UNIT],[DSP_TY_RECEIVER],[DSP_BDLV_KEY],[DSP_CON_NOTE],[DSP_TY_CARRIER],[DSP_TOT_ITEMS]      ,[DSP_CCMS],[DSP_BACKORD],[DSP_INVOICE],[DSP_TY_JOB],[DSP_COMMENT],[DSP_TY_PICKER],[DSP_PACKER],DocumentDate as InvoiceDate, DlvRxPerson,                   (CASE WHEN ISNULL(r.RXQty, 0) = 0 THEN 'Not Received' WHEN r.RXQty = SentQty THEN 'Fully Received' ELSE 'Partially Received' END) AS RStatus,          (CASE          WHEN DSP_SHIP =0 THEN 'Std'            ELSE 'Backorder' END) as BOrders, t_BAY_WAREHOUSE.BWH_LOCATION          FROM         t_BAY_WAREHOUSE INNER JOIN                       t_BUSINESS ON dbo.t_BAY_WAREHOUSE.BWH_KEY = dbo.t_BUSINESS.BS_DEF_WAREHOUSE RIGHT OUTER JOIN                         t_ORDER_SHEET s inner join t_ORDER_LINE l on s.OS_KEY = l.OL_OS_KEY LEFT JOIN t_DESPATCH d ON s.OS_CPO_KEY = d.DSP_PO_KEY              left join (select  RXDocumentNumber, SUM(IsNull(RX_QTY,0)) as RXQty, MAX(RX_DATE) as RxDate, SUM(CONVERT(float, s.UnitPrice) * CONVERT(float, x.RX_QTY)) AS RXAmt from t_GOODSRX AS x INNER JOIN                                                     dbo.t_INVOICE_LINEITEMS AS s ON x.PKLineItems = s.PKLineItems GROUP BY RXDocumentNumber) r on r.RXDocumentNumber = d.DSP_INVOICE              inner join (select DocumentNumber, sum(CONVERT(int,Qty)) as SentQty , SUM(CONVERT(float, i.UnitPrice)* CONVERT(float, i.Qty)) AS InvAmt from t_INVOICE_LINEITEMS AS i where (UnitUsed = 'PCS' OR UnitUsed = 'Pair' OR UnitUsed='EA' or UnitUsed='PR' or UnitUsed='BX')  and ItemCode not like 'EMB%' and ItemCode not like 'SRT%' and ItemCode not like 'IND%' and ItemCode not like 'HAN%' and Qty  not like '%[a-z,A-Z]%' and Qty not like '%.%' GROUP BY DocumentNumber) i on i.DocumentNumber = d.DSP_INVOICE              left join (select DocumentNumber, SUM(CONVERT(float,UnitPrice)*CONVERT(float,Qty))as Extra from t_INVOICE_LINEITEMS  where   (ItemCode  like 'EMB%' or ItemCode like 'SRT%' or ItemCode like 'SEW%'or ItemCode like 'IND%' or ItemCode like 'HAN%') and UnitPrice not like '%[a-z,A-Z]%' and Qty  not like '%[a-z,A-Z]%' and Qty not like '%.%' GROUP BY DocumentNumber) e on e.DocumentNumber = d.DSP_INVOICE              inner join t_INVOICE_HEADER h on h.DocumentNumber = d.DSP_INVOICE              inner join (select OS_KEY, ISNULL(DlvREName, OS_DLV_EMAIL) as DlvRxPerson from t_ORDER_SHEET ss left join (select UserName, FirstName + ' ' + LastName as DlvREName from  t_USER_PROFILES p inner join t_USERS u on p.UserId = u.UserId) i on i.UserName = ss.OS_DLV_EMAIL) dd on dd.OS_KEY = s.OS_KEY  ON                         
dbo.t_BUSINESS.BS_KEY = s.OS_BS_KEY    WHERE    OS_COM_KEY = @COM_KEY             AND (OS_PO_NUMBER = @po OR @po = '')    GROUP BY s.[OS_KEY],[OS_CPO_KEY],[OS_PO_NUMBER],[OS_ACCOUNT],[OS_RECEIVED_GOODS],[OS_SUPPLY_STORE],  OS_TOTAL_ITEMS, OS_USER_ORDER, OS_USERID_ORDER, OS_USER_APPROVE, OS_USERID_APPROVE, OS_DLV_EMAIL, OS_DLV_STREET, OS_DLV_SUBURB, OS_DLV_POSTCODE, OS_DLV_STATE, OS_DLV_CONTACT, OS_DLV_PHONE, d.[DSP_KEY],[DSP_PO_KEY],[DSP_BS_KEY],[DSP_SHIP],[DSP_DATE_SENT],[DSP_TIME_SENT],[DSP_SR_CANCELLED],[DSP_DATE_RETURNED],[DSP_DAYS_TO_DESP],[DSP_CARTONS],[DSP_TY_PACK_UNIT],[DSP_TY_RECEIVER],[DSP_BDLV_KEY],[DSP_CON_NOTE],[DSP_TY_CARRIER],[DSP_TOT_ITEMS]      ,[DSP_CCMS],[DSP_BACKORD],[DSP_INVOICE],[DSP_TY_JOB],[DSP_COMMENT],[DSP_TY_PICKER],[DSP_PACKER],[OS_RECEIVED_DATE],SentQty, RxDate, InvAmt, RXAmt, Extra, [OS_DATE_APPROVE], [OS_DATE_SUBMIT], OS_COM_KEY, OS_BS_KEY, RXQty, DocumentDate, DlvRxPerson,BWH_LOCATION    ORDER BY OS_CPO_KEY DESC  END    ELSE  IF @cpokey <> -1  BEGIN  ------------------------------  -----------------------------  END    ELSE  if @invno <> ''  BEGIN  --------------------------------------  -----------------------  END  ELSE  IF @status <= 0  BEGIN  -------------------------------------  ------------------------------------  END    ELSE IF @status = 1  BEGIN  ----------------------------  -----------------------------  END    ELSE IF @status = 2  BEGIN  -----------------------------------  --------------------------  END    ELSE IF @status = 3  BEGIN  --------------------------------------------  -------------------------------------------  END    END  

Designing ERD1 and ERD2 with crows foot

Posted: 06 Mar 2013 04:01 PM PST

What would be the perfect ERD1 & ERD2 for the following case study? And what are the attributes of the entities?

Five Aside Football Independent League (FAFIL)

The Five Aside Football Independent League (FAFIL) is an amateur organisation which was set-up to organise and record the results of a number of five aside football clubs who during the football season play relatively friendly matches with each other. FAFIL has recently decided to record the details of each team and which other team they play against by using a relational database system and is calling on you as renowned experts in this field to solve their problems.

The organisation consists of a number of clubs which regularly take part in the prestigious challenge cup tournament, which consists of each club putting forward a team to compete for the honour of winning the challenge cup by scoring the most points when competing with the other clubs. Each club may have more than one team, but can only enter one team into this prestigious tournament.

Throughout the season FAFIL has to arrange and record the order in which each team will play against each other and the venue and date that the match will take place on, as well as arranging for a designated FAFIL referee to oversee the match. The venue will be held at one of the team's home ground and consequently be an away match for the other team. Each team must play all other teams twice, once at home and once away.

Therefore the first task which the database system must provide is to automatically create a fixtures list which will take as input the existing teams and produce a listing having each team playing all the others both home and away. The organisers can then arrange the matches required and record the referee and venue accordingly. Difficulties have arisen in the past when during the season it was discovered that either the referee or the venues were double booked on the same day.

Once the matches are arranged the results must be recorded as and when they come in. Points are awarded to the teams for each match they play according to the end result, two points for a win and one point for a draw and zero points if they lost or failed to show.

Once all the matches have been played it should be an easy job to calculate which team has the most points. Teams with equal points are ordered by the difference between goals scored and goals conceded. It is hoped that the database system will be able to record the points and allocate them to the correct team as the results are put into the system.

Due to some very aggressive players and the occasional hospital bill during the last season it has been decided to record and allocate penalty-points against players if they misbehave.

Accordingly, the FAFIL referees were sent instructions to allocate penalty-points, such as: 1 - bad language, 5 - rude gestures, 10 - stabbings, and these will be allocated against each player as the season progresses. It was further decided that any player who exceeds 10 penalty-points (e.g. stabs another player and then swears at them) must not be allowed to take part in any further matches.

It has also been decided to reward the best player with a highly prized book voucher. Accordingly each time a match is played, one player is chosen as the 'player of the match'. At the end of the season the player who was 'player of the match' the most is awarded the prize, if there are a number of players who have been nominated the same number of times, they must share the book voucher as FAFIL is not made of money.

In an aid to assist your design plans, FAFIL has supplied a typical list of questions which it believes should be easy for the new database to answer and hence save its organisers a lot of time and effort.

Typical Set of Questions.

  1. List the different types of penalty-points which can be awarded, giving the highest-scoring ones first.

  2. Provide an alphabetical list of all the teams and their players.

  3. List all the players' names and how many times they have been awarded 'player of the match', showing who is in the lead for the prized book voucher.

(Note this has to be calculated live by counting how often each player has been chosen as player of the match)

  4. Produce a list of all Teams playing all other Teams, indicating who is at home and who is away.

Efficient way to move rows across the tables?

Posted: 06 Mar 2013 03:51 PM PST

This is a somewhat long question, as I would like to explain all the details of the problem.

System Description

We have a queue of incoming messages from external system(s). Messages are immediately stored in, e.g., the INBOX table. A few worker threads fetch chunks of jobs from the table (first marking some messages with UPDATE, then SELECTing the marked messages). The workers do not process the messages; they dispatch them to different internal components (called 'processors'), depending on the message command.

Each message contains several text fields (longest is like 200 varchars), few ids and some timestamp(s) etc; 10-15 columns total.

Each internal component (i.e. processor) that processes messages works differently. Some process the message immediately; others trigger some long operation, even communicating via HTTP with other parts of the system. In other words, we cannot just process a message from the INBOX and then remove it. We must work with that message for a while (an async task).

Still, there are not too many processors in the system, up to 10.

Messages are all internal, i.e. it is not important for users to browse or paginate them. A user may require a list of relevant processed messages, but that's not a mission-critical feature, so it does not have to be fast. An invalid message may sometimes be deleted.

It's important to emphasize that the expected traffic might be quite high, and we don't want bottlenecks because of bad database design. The database is MySQL.

Decision

One of the decisions is not to have one big table for all messages with some flag column indicating the various message states. The idea is to have a table per processor, and to move messages around. For example, received messages will be stored in INBOX, then moved by the dispatcher to, e.g., the PROCESSOR_1 table, and finally moved to the ARCHIVE table. There should not be more than 2 such movements.

While in the processing state, we do allow flags for indicating processing-specific states, if any. In other words, a PROCESSOR_X table may track the state of its messages, since the number of currently processing messages will be significantly smaller.

The reason for this is not to use one BIG table for everything.

Question

Since we are moving messages around, I wonder how expensive this is with high volumes. Which of the following scenarios is better:

(A) have all the separate, similar tables, as explained, and move complete message rows: e.g. read the complete row from INBOX, write it to the PROCESSOR table (with some additional columns), delete it from INBOX.

or

(B) prevent physical movement of the content: have one big MESSAGES table that just stores the content (and still not the state). We would still have the other tables, as explained above, but they would contain just the message IDs plus the additional columns. Now, when a message is about to move, we physically move much less data, just IDs. The rest of the message remains in the MESSAGES table, unmodified, the whole time.

In other words, is there a penalty in an SQL join between one small and one huge table?

Thank you for your patience; I hope I was clear enough.
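For concreteness, the movement in option (A) would be a short transaction per message, something like this sketch (column names hypothetical):

START TRANSACTION;
  -- copy the message into the processor's table, then remove it from the inbox
  INSERT INTO PROCESSOR_1 (id, payload, received_at)
    SELECT id, payload, received_at FROM INBOX WHERE id = 42;
  DELETE FROM INBOX WHERE id = 42;
COMMIT;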

Open a Locked Text File with SQL Server

Posted: 06 Mar 2013 08:20 PM PST

When I try to

BULK INSERT table FROM 'c:\file.txt'  

I get

Msg 4861, Level 16, State 1, Line 1  Cannot bulk load because the file "c:\file.txt" could not be opened. Operating system error code 32(The process cannot access the file because it is being used by another process.).  

error since the file is a log file opened by another process.

However with C# I can open the file with System.IO.FileShare.ReadWrite as:

using (System.IO.FileStream fileStream = new System.IO.FileStream("c:\\file.txt",
    System.IO.FileMode.Open, System.IO.FileAccess.Read, System.IO.FileShare.ReadWrite))
{
    using (System.IO.StreamReader streamReader = new System.IO.StreamReader(fileStream))
    {
        file = streamReader.ReadToEnd();
    }
}

Is there a way to have that ReadWrite share functionality within SQL Server (bulk insert or any other)?

Thanks

Relational database for address model

Posted: 06 Mar 2013 07:40 PM PST

I want to design an "Address" model for all types of entities: user, business, etc.

I currently have two main model types: one is User and the other is Business. Each one has various address types, as below.

User
  1. Contact Address
  2. Billing Address

Business
  1. Contact Address
  2. something

So I created an Address model with an addresstype column, like this:

Address
  id
  addresstype
  user
  addressline1
  addressline2

Relationship

User --OneToMany--> Business
User --OneToMany--> Address (user column)

With the above relationships, the addresstype and user columns relate an address to a User, but a Business address has no relationship with Address at all.

So how do I design this efficiently?
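One possible shape, as a sketch only (columns hypothetical): keep one Address table with a nullable owner column per entity type, so each address row points at either a User or a Business:

CREATE TABLE Address (
  id           INT PRIMARY KEY,
  addresstype  VARCHAR(20) NOT NULL,  -- 'CONTACT', 'BILLING', ...
  user_id      INT NULL,              -- set when the address belongs to a User
  business_id  INT NULL,              -- set when it belongs to a Business
  addressline1 VARCHAR(100),
  addressline2 VARCHAR(100)
);

A CHECK constraint (or application logic) can enforce that exactly one of user_id and business_id is set.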

sql 2005 upgrade to sql 2005 service pack 4 (SP4) via command line

Posted: 06 Mar 2013 03:01 PM PST

I have been wracking my brain over upgrading SQL Server 2005 to SQL Server 2005 SP4. I have a script for installing new instances of SQL Server for all versions, and they all work wonderfully. The problem is, SQL Server 2012 can't upgrade SQL 2005 SP3 and below, so I have to install up to SQL 2005 SP4 AND THEN upgrade through SQL 2012, which is highly annoying.

I need to do this silently via the command line, but I cannot get it to work at all. Here are some samples I tried:

"SQLSP4.exe /qb /ACTION=Upgrade /INSTANCENAME="FOOBAR""

"SQLSP4.exe /quiet /instancename="FOOBAR""

among a couple of other variations. I could not find anything online at all. In fact, the one helpful thread ended by saying that Microsoft didn't have any support for this, but I'm not sure about that.

What can be done here? Does anyone have any experience in updating service packs via command line?

Useful topics on database theories [closed]

Posted: 06 Mar 2013 12:24 PM PST

Useful topics to write about on different Database Theories?

Hi, I want to know what useful topics I can research on database theories.

For example, I have already written up e-learning, and I based it on the four topics below:

  • How has e-learning affected education
  • What can we gain from e-learning
  • Benefits of e-learning
  • Limitations of e-learning

Now, for database theories, I am going to write it up in a synthesis matrix, but what topics should I talk about?

Here is a synthesis matrix example if you don't know what that is: http://writingcenter.fiu.edu/resources/synthesis-matrix-2.pdf Thanks

Using MySQL InnoDB as an Archive

Posted: 06 Mar 2013 12:37 PM PST

My site has a main MySQL InnoDB table that it does most of its work on. New rows get inserted at a rate of 1 million per week, and rows older than a week get moved to an archive table on a daily basis. These archived rows are processed once a week for things like finding trends.

This archive table consequently grows by 1 million rows every week, and querying it can get really slow. Is MySQL suited to archiving data, or is my strategy very flawed?

Please advise, thank you!

Sequential joining of tables in order

Posted: 06 Mar 2013 12:33 PM PST

Table albums has (among other fields) a field id.

Table photos has a column id, a field album which is a foreign key referring to an album's id, and some other fields (which are irrelevant to this question).

So an album contains several photos (photos in an album are ordered by photo id).

Now I introduce one more "level of indirection": Bunches. There may be several albums in one bunch.

For this I add fields bunch and seq INT UNSIGNED to the table albums. bunch is the ID of the bunch the album belongs to, and seq is the position of the album within the bunch.

Now the problem:

Suppose we are given a bunch ID.

I want to write a SELECT query which selects all photos from albums belonging to the given bunch, ordered first by the albums' seq and then by the photos' IDs within each album.
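For concreteness, the join I would expect this to need looks something like the following sketch (123 stands in for the given bunch ID):

SELECT p.*
FROM albums AS a
JOIN photos AS p ON p.album = a.id
WHERE a.bunch = 123
ORDER BY a.seq, p.id;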

TSQL DateTime DataType - fractional seconds vs milliseconds

Posted: 06 Mar 2013 12:15 PM PST

I want to get all records from a single day: everything between 2013-03-05 00:00:00.000 and 2013-03-05 23:59:59.999. We're getting overlap at the back end with things that happened at 2013-03-06 00:00:00.000.

So, in testing I run in SSMS:

select cast('2013-03-05 23:59:59.999' as datetime)  select cast('2013-03-05 23:59:59.996' as datetime)  select cast('2013-03-05 23:59:59.994' as datetime)  

And get:

2013-03-06 00:00:00.000  2013-03-05 23:59:59.997  2013-03-05 23:59:59.993  

The millisecond part comes back off by a little. In the case of the first line it really matters, because I don't want 2013-03-06 00:00:00.000; I want a millisecond before midnight.

The documentation states that T-SQL datetime is accurate to fractional seconds (values are rounded to increments of .000, .003, or .007), as opposed to .NET DateTime, which uses milliseconds. This would seem to be a syntax difference only, but if you really care about millisecond precision in T-SQL, are you forced to use datetime2 across the board? Casting from datetime2 to datetime still mucks things up by a couple of milliseconds.

In c# DateTime.Parse("03/5/2013 23:59:59.999").Millisecond still returns 999.

Am I interpreting this all right? If I care about millisecond precision, am I stuck with datetime2 only?

Normally that would be an easy fix, but in this case we're using a Microsoft system function that receives datetime as a parameter.
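For what it's worth, the workaround we are considering is a half-open range, which sidesteps the rounding entirely (table and column names are hypothetical):

SELECT *
FROM dbo.Events
WHERE EventTime >= '2013-03-05'
  AND EventTime <  '2013-03-06';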

SQL Server Database Hot Backups: Which Technique(s) to Use for Multi-Level Replication?

Posted: 06 Mar 2013 01:57 PM PST

I'm reading the many other posts here about mirroring vs. replication, but I'd still appreciate it if someone with some experience would weigh in.

My small development team needs to have local automatic synchronization of databases which are hosted remotely.

The network situation is a little complicated:

  • The actual databases (dev, test, prod) are being hosted remotely on VMs in a DMZ. Our team does not have direct access to the DMZ.
  • Instead we have VPN access to the host's internal network, and the firewall between the internal network and the DMZ currently only allows RDP to pass through.
  • Locally we have a development database server, and individual developers have local SQL Servers on their development laptops.

We have requested and received an always-on VM in the remote host internal network that we plan to use to monitor the health of our dev/test/prod servers as well as to maintain hot backups of our databases.

So we are looking at two levels of data synchronization:

  1. Constantly (but asynchronously) updated copies of dev/test/prod databases on our monitoring VM.
  2. Occasionally-updated copies of the databases on our local development database server and/or developer laptops.

We aren't looking to implement fail-over at this point, because even the "prod" environment is not critical (it's more of a pre-prod environment).

We also need this to be as simple as possible, and SQL Server 2012 is not an option. We are stuck with 2008.

Here's what I'm thinking:

  1. Implement database mirroring between our dev/test/prod database servers and our always-on monitoring VM in the host's network.
  2. Implement log shipping to occasionally (when on the VPN) synchronize our development database server and laptops with the DB copies on the always-on monitoring VM.

My questions are:

  1. Is this the best and/or simplest way to accomplish what we are trying to do? If not, what is better and why?
  2. Is it even possible to do log shipping off a mirrored database?
  3. What ports are required to be open in order to implement database mirroring (or log shipping if that's better) between dev/test/prod in the DMZ and our monitor VM in the internal network?

Thank you for your time.

Manually moving SQL instance to another node causes immediate database mirror failover

Posted: 06 Mar 2013 12:33 PM PST

I have a two-node SQL cluster (2008 R2).

Some of the databases within that SQL instance are mirrored to another server on a remote site, using the High safety with automatic failover. The mirroring connection timeout value for those databases is set to 90 seconds.

When I move SQL from one node in the cluster to another node, using the Failover Cluster Manager application's option of "Move this service or application to another node" the databases that are mirrored are instantly failing over to the mirror.

This is undesirable behaviour. My reason for setting the mirroring connection timeout value is that I only want to fail over to the database mirror if the cluster fails completely and there are no functioning nodes.

Is there any way to achieve this? It feels as though it should be possible, otherwise the concept of mixing clustering and automatic failover database mirroring would be unworkable as every node failover within the cluster would trigger a mirror failover.

Thanks.

Write differences between varchar and nvarchar

Posted: 06 Mar 2013 01:38 PM PST

Currently in our SQL Server 2012 database we're using varchar, and we'd like to change that to nvarchar. I've generated a script to do that.

My question is are there any differences in how SQL Server writes to varchar columns vs. nvarchar columns? We have a number of backend procedures that I'm concerned about.

Edit:
Not sure if this helps, but the columns don't have indexes, f/k, or constraints on them.

MS Access ER diagram

Posted: 06 Mar 2013 04:04 PM PST

I have a database schema for Microsoft Access. How can I convert the schema into an ER diagram? Is there any tool that can do this based on the schema's relations?

How to properly kill MySQL?

Posted: 06 Mar 2013 07:48 PM PST

I have CentOS 64bit with CPanel installed and I use

service mysql stop  

However, it keeps printing ..... for minutes and never stops. It used to be instant. Any idea why it does that and how to fix it? Right now I have to do killall -9 mysql, but is there a better way?

The server is also very very active.

mysqld_safe version different than mysqld?

Posted: 06 Mar 2013 12:05 PM PST

Is it a problem that mysqld_safe reports a different version than the mysqld server? I noticed this in my mysqld log during startup:

120420 19:30:06 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
120420 19:30:06  InnoDB: Initializing buffer pool, size = 8.0M
120420 19:30:06  InnoDB: Completed initialization of buffer pool
120420 19:30:06  InnoDB: Started; log sequence number 0 44233
120420 19:30:06 [Note] Event Scheduler: Loaded 0 events
120420 19:30:06 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.1.61'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution

Yet, when I show variables I get this

SHOW VARIABLES LIKE "%version%";

+-------------------------+-------------------------------------------+
| Variable_name           | Value                                     |
+-------------------------+-------------------------------------------+
| innodb_version          | 5.5.30                                    |
| protocol_version        | 10                                        |
| slave_type_conversions  |                                           |
| version                 | 5.5.30-cll                                |
| version_comment         | MySQL Community Server (GPL) by Atomicorp |
| version_compile_machine | x86_64                                    |
| version_compile_os      | Linux                                     |
+-------------------------+-------------------------------------------+
7 rows in set (0.00 sec)

Based on golimar's questions I ran a

ps aux

and I see this

mysql 633 31.4 33.5 11942788 5452172 ? SNl 08:18 101:31 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/host.reviewcritical.com.err --open-files-limit=4096 --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock --port=3306

root 32674 0.0 0.0 108292 1576 ? SN 08:18 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --socket=/var/lib/mysql/mysql.sock --pid-file=/var/run/mysqld/mysqld.pid --basedir=/usr --user=mysql

Tuning Postgres config for new Database Server

Posted: 06 Mar 2013 04:20 PM PST

I have finally gotten a purpose-built database machine for a project I'm working on.

The server is a 4-core Xeon with 64GB of RAM and a RAID 10 array of 10k RPM drives.

I have just moved the database over to the new machine; performance with the same code is worse than when it was running on a VM.

I'm looking for suggestions on what settings to adjust to what values.

Currently, I've upped shared_buffers to 60GB and the kernel settings needed to make that change.

temp_buffers is 32MB

work_mem is 5MB

I'm working on doing some stuff I'd like to get loaded in quickly so I have synchronous_commit set to off.

Can anyone point me in the right direction on how to improve the speed? It ran quite a bit faster on a slower machine with much less memory, whose drives were shared with the machines making calls to it, so I'm not really sure what the issues are.

Update 2013-03-06: Performance falls off a cliff shortly after a run starts. Not sure what to do. Settings:

shared_buffers = 12GB
temp_buffers = 32MB
work_mem = 5MB
maintenance_work_mem = 256MB
fsync = off
synchronous_commit = off
wal_buffers = 32MB
checkpoint_segments = 256
checkpoint_completion_target = .9
effective_cache_size 50GB
auto_vacuum = on
autovacuum_naptime = 1min

The task is a long script that takes data from a copied-in table and normalizes it into the database. So: occasional big reads to pick up 1000 or more rows, then lots of little reads to de-duplicate records and find IDs etc., then some inserts along the way as needed, and finally lots of inserts at the end. Then repeat.

Any suggestions? Or ideas about what's falling off? This is one of my slower queries; I'd love ideas on how to speed it up.

EXPLAIN (ANALYZE, BUFFERS)
select provider_id, count(list_alias.name_part_id)
from list_alias
where provider_id in (1,4,5,6,7,8)
and name_part_id in (5,7,8,3,111)
group by provider_id
order by count(list_alias.name_part_id) desc
limit(1)

The output.

"Limit  (cost=31.62..31.62 rows=1 width=8) (actual time=0.157..0.157 rows=0 loops=1)"  "  Buffers: shared hit=17 read=1"  "  ->  Sort  (cost=31.62..31.62 rows=1 width=8) (actual time=0.153..0.153 rows=0 loops=1)"  "        Sort Key: (count(name_part_id))"  "        Sort Method: quicksort  Memory: 25kB"  "        Buffers: shared hit=17 read=1"  "        ->  GroupAggregate  (cost=0.00..31.61 rows=1 width=8) (actual time=0.147..0.147 rows=0 loops=1)"  "              Buffers: shared hit=17 read=1"  "              ->  Index Scan using "list_alias provider_id" on list_alias  (cost=0.00..31.59 rows=1 width=8) (actual time=0.146..0.146 rows=0 loops=1)"  "                    Index Cond: (provider_id = ANY ('{1,4,5,6,7,8}'::integer[]))"  "                    Filter: (name_part_id = ANY ('{5,7,8,3,111}'::integer[]))"  "                    Buffers: shared hit=17 read=1"  "Total runtime: 0.238 ms"  

Global locking for multi-master Oracle GoldenGate replication

Posted: 06 Mar 2013 05:21 PM PST

This is a very complex scenario, but I figured a state-of-the-art challenge might interest some of the many high-end users of dba.se.

Problem

I'm working on an intercontinental data replication solution for a document production system, somewhat similar to a wiki, utilizing Oracle GoldenGate. The primary goals are to increase application performance and availability around the globe.

The solution must allow for simultaneous read/write access to the same pool of data from multiple locations, which means that we need some clever way of preventing or resolving conflicting updates without user interaction.

Focusing on collision prevention, we must allow an object (a document, an illustration, a set of metadata etc) to be locked globally, thus preventing multiple users from simultaneously editing the same object from different locations - ultimately causing a conflict.

Similarly an object must remain locked until any user's connected database have received the updated data for that object, less a user may start editing an old object without the latest updates.

Background

The application is somewhat latency sensitive, making access to a central data center slow from remote locations. Like many content-focused systems, the read/write ratio is on the order of 4 to 1, making it a good candidate for a distributed architecture. If well managed, the latter will also work towards ensuring availability during site or network outages.

I have used a somewhat unconventional multi-loop bi-directional replication topology. This keeps the complexity at a manageable level {2(n-1) ways}, adds resilience for site outages and allows for fairly simple addition or removal of sites. The slight drawback is that it may take up to 30 seconds for a transaction to be replicated between the most remote sites via the central master database.

A more conventional design with direct replication between all sites would cut that time in half, but would also significantly increase the complexity of the configuration {n(n-1) ways}.

With five locations that would mean a 20-way replication as opposed to the 8-way replication in my design.

This illustration shows my current test environment across data centers in Europe, Asia and North America. The production environment is expected to have additional locations.

Illustration of replication topology

All the databases are Oracle 11.2.0.3 with Oracle GoldenGate 11.2.1.

My thoughts so far

I've been thinking along the lines of doing locking by inserting a row into a "locking" table over a database link to the central database, while letting the unlock (update or delete of the previously mentioned row) be replicated along with the updated data.

On behalf of the user we must then check the availability of a lock in both the central and local database before acquiring the lock and opening the object for editing. When editing is completed, we must release the lock in the local database which will then replicate the changes and the release of the lock to all other locations via the central database.

However, queries over a high-latency database link can sometimes be very slow (tests show anywhere from 1.5 to 7 seconds for a single insert), and I'm not sure we can guarantee that the update or delete statement that removes a lock is the last statement to be replicated.

Calling a remote PL/SQL procedure to do the checking and locking will at least limit the operation to a single remote query, but seven seconds is still a very long time. Something like two seconds would be more acceptable. I'm hoping the database links can be optimized somehow.

There may also be additional issues, like trying to delete or update a row in the local locking table before that row has been successfully replicated from the central database.

On the bright side, with this kind of solution it should be relatively simple to let the application enter a read-only state if communication with the central database is disrupted, or to redirect clients if a data center becomes unavailable.

Has anyone done anything similar? What might be the best way to approach this?

Like I said initially, this is a rather complex solution, feel free to ask about anything left unclear or left out.

SSDT 2010 and SQL70590 errors

Posted: 06 Mar 2013 12:14 PM PST

I have Visual Studio 2010 SP1 with SSDT (with Dec 2012 update). I have created a new SQL Server project and imported objects from a database on a SQL Server 2008 R2 instance. The database has a compatibility mode of 80, can't change this at the time.

Anyway. When I do a build, which has to be clean in order for me to do a schema compare, it throws SQL70590 errors on variables, although SQL Server itself has no problem with them. Looking at them, it seems to be because of the case sensitivity of the variable names (e.g. declared as @VARIABLE but referenced as @Variable).

The database collation is set to Latin1_General_BIN; is this why the error is being thrown? I can't see why it treats this as an error, yet the procedure code runs without error on the database.

Is there a way to write intelligent queries against a merge table in MySQL that represents a bunch of tables partitioned by date?

Posted: 06 Mar 2013 01:33 PM PST

I have a set of MySQL tables that are partitioned by date, with a merge table representing all of them. For the query I am writing, the results are joined by date, so rows in table A_2012-12-05 will only join to rows in B_2012-12-05, and rows in A_2012-12-06 will only join to rows in B_2012-12-06, etc. Is there a way to write an intelligent query so that MySQL won't go looking for rows in B_2012-12-06 to join with rows in A_2012-12-05?


Edit by RolandoMySQLDBA 11:17 EDT

Please insert the output of these into this section:

They all look pretty much like this:

CREATE TABLE `mrg_31_session` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `phpsessionid` varchar(32) NOT NULL,
  `start_datetime` datetime NOT NULL,
  PRIMARY KEY (`id`),
  KEY `ix_phpsessionid` (`phpsessionid`),
  KEY `ix_startdatetime` (`start_datetime`)
) ENGINE=MRG_MyISAM DEFAULT CHARSET=latin1 PACK_KEYS=1 UNION=(`session_20121205`,`session_20121204`,`session_20121203`,`session_20121202`,`session_20121201`,`session_20121130`,`session_20121129`,`session_20121128`,`session_20121127`,`session_20121126`,`session_20121125`,`session_20121124`,`session_20121123`,`session_20121122`,`session_20121121`,`session_20121120`,`session_20121119`,`session_20121118`,`session_20121117`,`session_20121116`,`session_20121115`,`session_20121114`,`session_20121113`,`session_20121112`,`session_20121111`,`session_20121110`,`session_20121109`,`session_20121108`,`session_20121107`,`session_20121106`,`session_20121105`)

Mysql innoDB write operations are extremely slow

Posted: 06 Mar 2013 07:33 PM PST

I'm having serious performance problems with MySQL and the InnoDB engine. Even the simplest table makes write operations (creating the table, inserting, updating and deleting) horribly slow, as you can see in the following snippet.

mysql> CREATE TABLE `test` (`id` int(11) not null auto_increment,
    -> PRIMARY KEY(`id`)) ENGINE=InnoDB;
Query OK, 0 rows affected (4.61 sec)

mysql> insert into test values ();
Query OK, 1 row affected (1.92 sec)

mysql> insert into test values ();
Query OK, 1 row affected (0.88 sec)

mysql> insert into test values ();
Query OK, 1 row affected (1.10 sec)

mysql> insert into test values ();
Query OK, 1 row affected (6.27 sec)

mysql> select * from test;
+----+
| id |
+----+
|  1 |
|  2 |
|  3 |
|  4 |
+----+
4 rows in set (0.00 sec)

mysql> delete from test where id = 2;
Query OK, 1 row affected (0.28 sec)

mysql> delete from test where id = 3;
Query OK, 1 row affected (6.37 sec)

I have been looking at htop, and the long waits are not caused by abnormal CPU load; it's almost zero, and memory usage is also normal. If I create the same table using the MyISAM engine, it works normally. My my.cnf file contains this (if I remember right, I haven't changed anything from the default Debian configuration):

[client]
port        = 3306
socket      = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket      = /var/run/mysqld/mysqld.sock
nice        = 0

[mysqld]
user        = mysql
pid-file    = /var/run/mysqld/mysqld.pid
socket      = /var/run/mysqld/mysqld.sock
port        = 3306
basedir     = /usr
datadir     = /var/lib/mysql
tmpdir      = /tmp
language    = /usr/share/mysql/english
skip-external-locking
bind-address        = 127.0.0.1
key_buffer      = 40M
max_allowed_packet  = 16M
thread_stack        = 128K
thread_cache_size       = 8
myisam-recover         = BACKUP
max_connections        = 100
table_cache            = 64
thread_concurrency     = 10
query_cache_limit   = 1M
query_cache_size        = 40M
log_slow_queries    = /var/log/mysql/mysql-slow.log
long_query_time = 2
log-queries-not-using-indexes
expire_logs_days    = 10
max_binlog_size         = 100M

[mysqldump]
quick
quote-names
max_allowed_packet  = 16M

[isamchk]
key_buffer      = 16M

!includedir /etc/mysql/conf.d/

I have also tried to restart the server, but it doesn't solve anything.

The slow queries log doesn't give any extra information.
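
Worth noting: the my.cnf above sets no innodb_* options at all, so InnoDB is running entirely on defaults, and by default it flushes and fsyncs its log on every commit. Since each statement in the snippet is its own transaction and CPU/memory look normal, fsync latency is a plausible suspect. A diagnostic sketch, not a fix; whether this is the culprit on this particular machine is an assumption to test:

-- Show the current flush policy (default 1 = fsync the log at every commit).
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';

-- Diagnostic only: flush roughly once per second instead of per commit,
-- then rerun the inserts. A dramatic speedup points at fsync latency
-- (disk, controller, or virtualization layer). This risks losing up to
-- ~1 second of transactions on a crash, so revert after testing.
SET GLOBAL innodb_flush_log_at_trx_commit = 2;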

How large should MySQL's innodb_buffer_pool_size be?

Posted: 06 Mar 2013 05:38 PM PST

I have a busy database of solely InnoDB tables, about 5 GB in size. The database runs on a Debian server with SSD disks, and I've set max_connections = 800, which sometimes saturates and grinds the server to a halt. The average load is about 2.5K queries per second, so I need to optimize memory usage to make room for the maximum possible number of connections.

I've seen suggestions that innodb_buffer_pool_size should be up to 80% of total memory. On the other hand, I get this warning from the tuning-primer script (the per-thread figure below is per-connection buffers multiplied by max_connections = 800, roughly 90 MB per connection, which is how the configured maximum ends up near physical memory):

Max Memory Ever Allocated : 91.97 G
Configured Max Per-thread Buffers : 72.02 G
Configured Max Global Buffers : 19.86 G
Configured Max Memory Limit : 91.88 G
Physical Memory : 94.58 G

Here are my current innodb variables:

| innodb_adaptive_flushing         | ON                     |
| innodb_adaptive_hash_index       | ON                     |
| innodb_additional_mem_pool_size  | 20971520               |
| innodb_autoextend_increment      | 8                      |
| innodb_autoinc_lock_mode         | 1                      |
| innodb_buffer_pool_instances     | 1                      |
| innodb_buffer_pool_size          | 20971520000            |
| innodb_change_buffering          | all                    |
| innodb_checksums                 | ON                     |
| innodb_commit_concurrency        | 0                      |
| innodb_concurrency_tickets       | 500                    |
| innodb_data_file_path            | ibdata1:10M:autoextend |
| innodb_data_home_dir             |                        |
| innodb_doublewrite               | ON                     |
| innodb_fast_shutdown             | 1                      |
| innodb_file_format               | Antelope               |
| innodb_file_format_check         | ON                     |
| innodb_file_format_max           | Antelope               |
| innodb_file_per_table            | ON                     |
| innodb_flush_log_at_trx_commit   | 2                      |
| innodb_flush_method              | O_DIRECT               |
| innodb_force_load_corrupted      | OFF                    |
| innodb_force_recovery            | 0                      |
| innodb_io_capacity               | 200                    |
| innodb_large_prefix              | OFF                    |
| innodb_lock_wait_timeout         | 50                     |
| innodb_locks_unsafe_for_binlog   | OFF                    |
| innodb_log_buffer_size           | 4194304                |
| innodb_log_file_size             | 524288000              |
| innodb_log_files_in_group        | 2                      |
| innodb_log_group_home_dir        | ./                     |
| innodb_max_dirty_pages_pct       | 75                     |
| innodb_max_purge_lag             | 0                      |
| innodb_mirrored_log_groups       | 1                      |
| innodb_old_blocks_pct            | 37                     |
| innodb_old_blocks_time           | 0                      |
| innodb_open_files                | 300                    |
| innodb_purge_batch_size          | 20                     |
| innodb_purge_threads             | 0                      |
| innodb_random_read_ahead         | OFF                    |
| innodb_read_ahead_threshold      | 56                     |
| innodb_read_io_threads           | 4                      |
| innodb_replication_delay         | 0                      |
| innodb_rollback_on_timeout       | OFF                    |
| innodb_rollback_segments         | 128                    |
| innodb_spin_wait_delay           | 6                      |
| innodb_stats_method              | nulls_equal            |
| innodb_stats_on_metadata         | ON                     |
| innodb_stats_sample_pages        | 8                      |
| innodb_strict_mode               | OFF                    |
| innodb_support_xa                | ON                     |
| innodb_sync_spin_loops           | 30                     |
| innodb_table_locks               | ON                     |
| innodb_thread_concurrency        | 4                      |
| innodb_thread_sleep_delay        | 10000                  |
| innodb_use_native_aio            | ON                     |
| innodb_use_sys_malloc            | ON                     |
| innodb_version                   | 1.1.8                  |
| innodb_write_io_threads          | 4                      |

A side note that might be relevant: when I try to insert a large post (say over 10 KB) from Drupal (which sits on a separate web server) into the database, the insert takes forever and the page does not return correctly.

Given all this, I'm wondering what my innodb_buffer_pool_size should be for optimal performance. I'd appreciate suggestions for setting this and the other parameters optimally for this scenario.
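
For a rough starting point, one widely used rule of thumb sizes the pool from the total InnoDB data-plus-index footprint with some headroom, rather than from a percentage of RAM. A minimal sketch of that calculation (the ~60% padding factor is a convention, not a law):

-- Rule-of-thumb buffer pool size: total InnoDB data + index bytes,
-- padded by ~60%, rounded up to whole gigabytes.
SELECT CEILING(Total_InnoDB_Bytes * 1.6 / POWER(1024, 3)) AS RIBPS_GB
FROM (
    SELECT SUM(data_length + index_length) AS Total_InnoDB_Bytes
    FROM information_schema.tables
    WHERE engine = 'InnoDB'
) AS A;

For a 5 GB dataset this lands around 8 GB, well below the ~19.5 GB already configured, so the bigger win here may come from trimming per-thread buffers or max_connections rather than growing the pool further.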

NuoDB with Amazon S3

Posted: 06 Mar 2013 01:31 PM PST

I'm thinking about developing my new mobile and web app against NuoDB; I believe in their technology. What I care about most is backup and restore in case of data corruption, and read/write scalability.

From what I've read, you can use NuoDB with Amazon S3 as the archive node for storing the data. That looks very useful, because you aren't limited in how much data you can store in a single bucket. Furthermore, it's easy to back up the data: you can detach the archive node and you have a backup, which you can restore by attaching the archive node again and letting the other nodes sync with it.

Because I expect a lot of data (terabytes), this seems like a good solution: I won't need to export the data, compress it, and store it on EBS volume(s).

1) How efficient is S3 as an archive node with NuoDB for high read/write scenarios?

2) Is the approach I described above a good backup option to use with NuoDB?

3) Or would I be better off using MySQL load balancers? I've read that there are many solutions you can use on Amazon RDS to scale the DB horizontally without any code or DB modification (ScaleBase?).
