Wednesday, July 24, 2013

[SQL Server 2008 issues] DB Backup Script


DB Backup Script

Posted: 23 Jul 2013 04:48 PM PDT

I am using a very simple script to take a backup of a TEST database on a production server. The script is:

[code="sql"]use master
backup database bharti_test
to disk = N'E:\Reports'
with copy_only;[/code]

Now, this script runs successfully, but the backup file is not in the desired folder, i.e. E:\Reports. Need help, ASAP! :-)
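A hedged aside: BACKUP ... TO DISK expects a full file name, not a folder, so the statement above most likely writes a backup file literally named Reports at the root of E:\ rather than into an E:\Reports folder. A minimal fix, assuming the folder exists:

[code="sql"]USE master;
BACKUP DATABASE bharti_test
TO DISK = N'E:\Reports\bharti_test.bak'
WITH COPY_ONLY;[/code]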

grant access to all databases

Posted: 17 Nov 2010 06:04 PM PST

Hello experts, I have around 600 databases on my server, and a user needs SELECT access to all of them. Will I have to go into each database one by one, create that user, and give him the db_datareader role, or is there a shorter way to do it? Thanks in advance.
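A minimal sketch using the undocumented sp_MSforeachdb procedure; the login name ReportUser is hypothetical and is assumed to already exist at the server level:

[code="sql"]EXEC sp_MSforeachdb N'
USE [?];
IF DB_ID(N''?'') > 4  -- skip the four system databases
   AND NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = N''ReportUser'')
BEGIN
    CREATE USER [ReportUser] FOR LOGIN [ReportUser];
    EXEC sp_addrolemember N''db_datareader'', N''ReportUser'';
END';[/code]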

RAID 5 Vs RAID 10 Performance Questions

Posted: 23 Jul 2013 08:55 AM PDT

I'm guessing this could lead to a lot of theory and complex answers; however, I'm curious about your thoughts on RAID 5 versus RAID 10. Yes, I realize that RAID 10 is faster than RAID 5, but is it fast enough to justify the potential cost hit (from what I've researched, roughly $4k vs $10k)? We are currently running a large set of databases with a roughly equal distribution between reads and writes, leaning slightly towards writes. Figure into this about 1 million+ rows written per day. I am currently experiencing IO issues and have had some success by cleaning up indexes and performing other maintenance tasks. If I can prove that there will be a significant performance gain, it's possible we would consider switching to RAID 10. Any thoughts, experience, or good article references? Thanks.

debugging question about non-yielding scheduler issue

Posted: 20 Feb 2013 12:08 PM PST

Hi all, one of our SQL servers keeps crashing. I briefly analyzed the dump file and found this:

[code="plain"]Child-SP          RetAddr           Call Site
00000081`683fd0e8 000007fe`fda84bf6 ntdll!NtCreateFile+0xa
00000081`683fd0f0 00000000`76c718ed KERNELBASE!CreateFileW+0x2cd
*** ERROR: Symbol file could not be found. Defaulted to export symbols for sqlservr.exe -
00000081`683fd250 00000000`01f72617 kernel32!CreateFileWImplementation+0x7d
00000081`683fd2b0 00000000`01f7513f sqlservr!GetIUMSForMsxml+0x8055f3
00000081`683fd340 00000000`00c53aa0 sqlservr!GetIUMSForMsxml+0x80811b
00000081`683fd390 00000000`00c54a5a sqlservr+0xbd3aa0
00000081`683fd3e0 00000000`00c5547b sqlservr+0xbd4a5a
00000081`683fd420 00000000`00c550e1 sqlservr+0xbd547b
00000081`683fd4f0 00000000`01989cc5 sqlservr+0xbd50e1
00000081`683fd580 00000000`000c67a4 sqlservr!GetIUMSForMsxml+0x21cca1
00000081`683fd5d0 00000000`001027b4 sqlservr+0x467a4
00000081`683fd600 00000000`00146525 sqlservr+0x827b4
00000081`683fd660 00000000`001460db sqlservr+0xc6525
00000081`683fd690 00000000`00101e05 sqlservr+0xc60db
00000081`683fd730 00000000`001021d8 sqlservr+0x81e05
00000081`683fd760 00000000`0014cef5 sqlservr+0x821d8
00000081`683fd8f0 00000000`015b7c97 sqlservr+0xccef5
00000081`683fd990 00000000`015b7f0b sqlservr!_____SQL______Process______Available+0x713d47
00000081`683fda80 00000000`01527c45 sqlservr!_____SQL______Process______Available+0x713fbb
00000081`683fdab0 00000000`006eacba sqlservr!_____SQL______Process______Available+0x683cf5
00000081`683fea60 00000000`000ec3bc sqlservr+0x66acba
00000081`683febe0 00000000`000ed3ab sqlservr+0x6c3bc
00000081`683fed10 00000000`0012a4c2 sqlservr+0x6d3ab
00000081`683ff370 00000000`0008f280 sqlservr+0xaa4c2
00000081`683ff580 00000000`0008ee3e sqlservr+0xf280
00000081`683ff690 00000000`0008eb3b sqlservr+0xee3e
00000081`683ff6f0 00000000`001b5caa sqlservr+0xeb3b
00000081`683ff760 00000000`001b5e25 sqlservr+0x135caa
00000081`683ff840 00000000`001b567d sqlservr+0x135e25
00000081`683ff880 00000000`001b6586 sqlservr+0x13567d
00000081`683ff930 00000000`74b137d7 sqlservr+0x136586
00000081`683ff9d0 00000000`74b13894 msvcr80!_callthreadstartex+0x17
00000081`683ffa00 00000000`76c7652d msvcr80!_threadstartex+0x84
00000081`683ffa30 00000000`7736c521 kernel32!BaseThreadInitThunk+0xd
00000081`683ffa60 00000000`00000000 ntdll!RtlUserThreadStart+0x1[/code]

I couldn't find anything related to this anywhere, so I wonder if anyone knows about it. Can someone help me with this? Thanks much!!

Need procedure to delete .bak

Posted: 23 Jul 2013 05:03 AM PDT

Hi, I have scheduled backups in Express Edition 2008 using cmd and Task Scheduler. But I want an automated process to delete the old .bak files rather than doing it manually. Please suggest some steps or a procedure.
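One common sketch is to let forfiles purge anything older than N days; shown here wrapped in T-SQL via xp_cmdshell (assuming xp_cmdshell is enabled, and with the E:\Backups folder and 7-day retention as placeholders). The same forfiles line can go directly into the scheduled batch file instead:

[code="sql"]-- delete .bak files older than 7 days
EXEC master..xp_cmdshell 'forfiles /p "E:\Backups" /m *.bak /d -7 /c "cmd /c del @path"';[/code]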

checkpoint for committed transactions..

Posted: 23 Jul 2013 08:11 AM PDT

Hi, can somebody tell me whether a CHECKPOINT writes pages only from committed transactions? Please help me with this.

TempDB: need to resize Data File

Posted: 23 Jul 2013 05:59 AM PDT

Hello everyone. The data file for TempDB is 32 GB in size but only 13.5 MB of it is used, so I need to resize the data file to 1 GB. This is a production server, so please help me do this without restarting the server.

[code="plain"]name    filename            FileSizeMB SpaceUsedMB FreeSpaceMB
tempdev S:\Data\tempdb.mdf  32617.81   13.44       32604.38
templog S:\Data\templog.ldf 672.25     456.84      215.41[/code]

Thanks for your help.
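A minimal sketch, no restart required (the shrink may stop short of the target if active pages sit near the end of the file):

[code="sql"]USE tempdb;
DBCC SHRINKFILE (tempdev, 1024);  -- target size in MB[/code]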

Utilizing Fake Partitioning with a View

Posted: 23 Jul 2013 08:40 AM PDT

So you are running a non-Enterprise edition of MS SQL, meaning you cannot use true partitioned tables. You have a 1-billion-row sales fact table containing 5 years of data, called Sales_Table. You decide to break this table into 5 different ones, one for each year:

[code="plain"]Sales_Table_Y01
Sales_Table_Y02
Sales_Table_Y03
Sales_Table_Y04
Sales_Table_Y05[/code]

Under the assumption that each table is properly indexed, you now need to create one location for existing code to hit all the data, so we think of a view. You make a view called Sales_Table so past code is seamless. The question at hand is how to construct this view so it runs better than, or just as well as, the 1-billion-row table. Keep in mind that throwing UNION statements together as follows will not allow schemabinding to index the view:

[code="sql"]SELECT col1, col2 FROM Sales_Table_Y01
UNION ALL
SELECT col1, col2 FROM Sales_Table_Y02
UNION ALL
SELECT col1, col2 FROM Sales_Table_Y03
UNION ALL
SELECT col1, col2 FROM Sales_Table_Y04
UNION ALL
SELECT col1, col2 FROM Sales_Table_Y05[/code]

Thanks in advance.
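For reference, a local partitioned view works on any edition: give each member table a CHECK constraint on the partitioning column and the optimizer can skip member tables whose constraint excludes the query's predicate. A minimal sketch assuming a SaleYear column (only two members shown; the pattern repeats for Y03-Y05):

[code="sql"]CREATE TABLE dbo.Sales_Table_Y01 (
    SaleYear INT NOT NULL CHECK (SaleYear = 2001),
    col1 INT NOT NULL,
    col2 INT NULL,
    CONSTRAINT PK_Sales_Y01 PRIMARY KEY (SaleYear, col1)
);
CREATE TABLE dbo.Sales_Table_Y02 (
    SaleYear INT NOT NULL CHECK (SaleYear = 2002),
    col1 INT NOT NULL,
    col2 INT NULL,
    CONSTRAINT PK_Sales_Y02 PRIMARY KEY (SaleYear, col1)
);
GO
CREATE VIEW dbo.Sales_Table
AS
SELECT SaleYear, col1, col2 FROM dbo.Sales_Table_Y01
UNION ALL
SELECT SaleYear, col1, col2 FROM dbo.Sales_Table_Y02;
GO
-- a query with WHERE SaleYear = 2002 should only touch Sales_Table_Y02[/code]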

Looking for clean way to aggregate the same column multiple times in my query

Posted: 23 Jul 2013 08:00 AM PDT

Looking for tips on how best to approach the following query over 10 million rows: subquery selects, SUM, CTE, OVER with partitioning, etc. Is there a better way to do this in 2012 using windowing? My code base is 2008.

[code="sql"]SELECT
    [DateofScan],
    [DeviceType],
    COUNT(DISTINCT [PlugInID]) AS PlugIns,
    COUNT(DISTINCT CASE WHEN [Criticality] = 'Critical' THEN [PlugInID] END) AS CriticalPlugIns,  -- the Criticality column name is a placeholder
    COUNT(DISTINCT CASE WHEN [VulNotDate] IS NULL THEN [PlugInID] END) AS NoVulNotDate,           -- '1900-01-01'
    COUNT(DISTINCT CASE WHEN DATEDIFF(DAY, [VulNotDate], [DateofScan]) <= 90 THEN [PlugInID] END) AS Within90DaysOfScan,
    COUNT(DISTINCT CASE WHEN DATEDIFF(DAY, [VulNotDate], [DateofScan]) > 90 THEN [PlugInID] END)  AS Over90DaysOfScan
FROM [TCH_Metrix].[dbo].[VulMgmt]
GROUP BY DateofScan, DeviceType;[/code]

Indexes

Posted: 23 Jul 2013 08:35 AM PDT

Hi All,

I have a table [BusinessProcesses] with BpId as the primary/clustered key/index on it. I have the following index on column CorrespondingContractNumber:

[code="sql"]CREATE NONCLUSTERED INDEX [IBPCorrespondingContractNumber] ON [dbo].[BusinessProcesses]([CorrespondingContractNumber] ASC)[/code]

I execute:

[code="sql"]dbcc show_statistics('dbo.BusinessProcesses','IBPCorrespondingContractNumber')
/*
All density  Average Length  Columns
8.592541E-05 1.272107        CorrespondingContractNumber
1.788491E-08 5.272107        CorrespondingContractNumber, BPId
*/[/code]

I create the next index:

[code="sql"]CREATE NONCLUSTERED INDEX [IBPCorrespondingContractNumber2] ON [dbo].[BusinessProcesses]([CorrespondingContractNumber] ASC) INCLUDE(BpId)[/code]

I execute:

[code="sql"]dbcc show_statistics('dbo.BusinessProcesses','IBPCorrespondingContractNumber2')
/*
All density  Average Length  Columns
3.906723E-06 1.311324        CorrespondingContractNumber
*/[/code]

The second index is more selective. I execute this example query:

[code="sql"]select CorrespondingContractNumber from BusinessProcesses
where CorrespondingContractNumber is not null[/code]

and the second index is used by the query optimizer. Conclusion: the second index (IBPCorrespondingContractNumber2) is narrower and the optimizer simply uses it. Then I create:

[code="sql"]CREATE NONCLUSTERED INDEX [IBPCorrespondingContractNumber3] ON [dbo].[BusinessProcesses]([CorrespondingContractNumber] ASC, bpid asc)[/code]

[code="sql"]dbcc show_statistics('dbo.BusinessProcesses','IBPCorrespondingContractNumber3')
/*
All density  Average Length  Columns
3.906723E-06 1.311324        CorrespondingContractNumber
1.788491E-08 5.311324        CorrespondingContractNumber, BPId
*/[/code]

I execute the same example query again, and now the optimizer uses IBPCorrespondingContractNumber3, which is not narrower than IBPCorrespondingContractNumber2. I'd like to hear your thoughts on this. Thank you in advance, IgorMi

Stored procedure really slow... Suggestions?

Posted: 23 Jul 2013 06:28 AM PDT

[code="xml"]USE []GOALTER PROCEDURE [dbo].[NIC_ENTERPRISE_PatientActivityLogSearch] @lUser INT, @szFirst VARCHAR(20) = '', @szLast VARCHAR(40) = '', @szChartNum VARCHAR(10) = NULL, @SDate DATETIME = NULL, @EDate DATETIME = NULL, @szIdentifierValue VARCHAR(50) = '', @lIdentifierType AS INT = NULL, @nSex AS INT = NULL, @bSearchInactive AS BIT = 0, @szPhoneArea VARCHAR(10) = '', @szPhone1 VARCHAR(10) = '', @szPhone2 VARCHAR(10) = '', @DayDOB VARCHAR(10) = NULL, @MonthDOB VARCHAR(10) = NULL, @YearDOB VARCHAR(10) = NULL, @lEnterprise INTAS /** if there is any aduit record in DB for current Enterprise, the sp will check aduit, otherwise ignore the aduit check **/ if exists (select lid from MAP_AuditActionTypeTableToMappingTable WITH (NOLOCK) where luserAction=83 and lUserActionType=5 and lEnterprise=@lenterprise) update MAP_AuditActionTypeTableToMappingTable set bIsavailable=1 where luserAction=83 and lUserActionType=5 and lEnterprise=@lenterprise if exists (select lid from MAP_AuditActionTypeTableToMappingTable WITH (NOLOCK) where luserAction=84 and lUserActionType=5 and lEnterprise=@lenterprise) update MAP_AuditActionTypeTableToMappingTable set bIsavailable=1 where luserAction=84 and lUserActionType=5 and lEnterprise=@lenterpriseDECLARE @sSQLSelect NVARCHAR(4000)DECLARE @sSQLSearch NVARCHAR(4000) SET NOCOUNT ON SET TRANSACTION ISOLATION LEVEL READ COMMITTED SET @sSQLSelect = '' SET @sSQLSearch = '' IF @sDate IS NOT NULL AND @sDate <> '' AND @eDate IS NOT NULL AND @eDate <> '' BEGIN SET @sSQLSearch = @sSQLSearch + ' AND UALog.dDateOfAction BETWEEN ''' + CONVERT(VARCHAR, @SDate) + '''' + ' AND '+'''' + CONVERT(VARCHAR, @eDate ) + '''' END IF @szLast IS NOT NULL AND @szLast <> '' BEGIN SET @sSQLSearch = @sSQLSearch + ' AND p.szLast LIKE ''' + @szLast + '%''' END IF @szFirst IS NOT NULL AND @szFirst <> '' BEGIN SET @sSQLSearch = @sSQLSearch + ' AND p.szFirst LIKE ''' + @szFirst + '%''' END IF @szChartNum IS NOT NULL AND @szChartNum <> '' BEGIN SET @sSQLSearch = @sSQLSearch + ' AND p.szChartNum LIKE ''' + @szChartNum + '%''' END /* Gender check*/ IF @nSex IS NOT NULL BEGIN SET @sSQLSearch = @sSQLSearch + ' AND p.nSex = ' + CONVERT(VARCHAR, @nSex) END /* Check for date of birth */ IF @YearDOB IS NOT NULL AND @YearDOB <> '' AND @MonthDOB IS NOT NULL AND @MonthDOB <> '' AND @DayDOB IS NOT NULL AND @DayDOB <> '' BEGIN SET @sSQLSearch = @sSQLSearch + ' AND p.dDOB= ''' + @YearDOB + '-' + @MonthDOB + '-' + @DayDOB + ''' ' END ELSE BEGIN /* Year part of Date of Birth */ IF @YearDOB IS NOT NULL AND @YearDOB <> '' BEGIN SET @sSQLSearch = @sSQLSearch + ' AND Year(p.dDOB)= ''' + @YearDOB + '%'' ' END /* Month part of Date of Birth */ IF @MonthDOB IS NOT NULL AND @MonthDOB <> '' BEGIN SET @sSQLSearch = @sSQLSearch + ' AND Month(p.dDOB)= ''' + @MonthDOB + '%'' ' END /* Day part of Date of Birth */ IF @DayDOB IS NOT NULL AND @DayDOB <> '' BEGIN SET @sSQLSearch = @sSQLSearch + ' AND Day(p.dDOB)= ''' + @DayDOB + '%'' ' END END /* First part of Phone number */ IF @szPhoneArea IS NOT NULL AND @szPhoneArea <> '' BEGIN SET @sSQLSearch = @sSQLSearch + ' AND a.szPhoneArea LIKE ''' + @szPhoneArea + '%''' END /* Second part of Phone number */ IF @szPhone1 IS NOT NULL AND @szPhone1 <> '' BEGIN SET @sSQLSearch = @sSQLSearch + ' AND a.szPhone1 LIKE ''' + @szPhone1 + '%''' END /* Third part of Phone number */ IF @szPhone2 IS NOT NULL AND @szPhone2 <> '' BEGIN SET @sSQLSearch = @sSQLSearch + ' AND a.szPhone2 LIKE ''' + @szPhone2 + '%''' END IF EXISTS ( SELECT TOP 1 lid FROM MAP_AuditActionTypeTableToMappingTable WITH (NOLOCK) 
WHERE lEnterprise = @lEnterprise ) BEGIN --with audit record IF @lUser = 0 BEGIN IF @szIdentifierValue = '' BEGIN SET @sSQLSelect = @sSQLSelect + ' SELECT UALog.*, p.szFirst, p.szLast, p.szChartNum, o.szOfficeName, UA.szAction, UAT.szType + '' '' + UA.szAction AS szFullDescription, u.szLast AS szUserLast, u.szFirst AS szUserFirst, UALog.dDateOfAction AS dAction, UALC.szComment, mapAudit.bisavailable FROM UserActionLog UALog WITH (NOLOCK) INNER JOIN UserAction UA WITH (NOLOCK) ON UALog.lUserAction = UA.lID INNER JOIN UserActionType UAT WITH (NOLOCK) ON UAT.lid = UALog.lUserActionType INNER JOIN Map_UserToOffice map WITH (NOLOCK) ON UALog.lMap_UserToOffice = map.lID INNER JOIN Office o WITH (NOLOCK) ON map.lOffice = o.lID INNER JOIN [User] u WITH (NOLOCK) ON map.lUser = u.lID INNER JOIN Patient p WITH (NOLOCK) ON UALog.lPatient = p.lID INNER JOIN Address a WITH (NOLOCK) ON p.lAddress = a.lID LEFT JOIN Map_PatientToOffice mapPO WITH (NOLOCK) ON o.lID = mapPO.lOffice AND p.lID = mapPO.lPatient LEFT JOIN MAP_AuditActionTypeTableToMappingTable mapAudit WITH (NOLOCK) ON mapAudit.lUserAction = UALog.lUserAction --Added by Roger for Audit AND mapAudit.lEnterprise = ' + CAST( @lEnterprise AS VARCHAR) +' AND mapAudit.lUserActionType = UALog.lUserActionType ' + ' LEFT JOIN UserActionLogComment UALC WITH (NOLOCK) ON UALog.lid = UALC.lUserActionLog WHERE 1=1 ' + @sSQLSearch + ' AND o.lEnterprise = ' + CAST( @lEnterprise AS VARCHAR) + ' AND ISNULL(mapAudit.bIsavailable, 1) = 1 ' + ' ORDER BY dAction' exec sp_executesql @sSQLSelect END ELSE BEGIN SET @sSQLSelect = @sSQLSelect + ' SELECT UALog.*, p.szFirst, p.szLast, p.szChartNum, o.szOfficeName, UA.szAction, UAT.szType + '' '' + UA.szAction AS szFullDescription, UALog.dDateOfAction AS dAction, u.szLast AS szUserLast, u.szFirst AS szUserFirst, UALC.szComment, mapAudit.bIsavailable FROM UserActionLog UALog WITH (NOLOCK) INNER JOIN UserAction UA WITH (NOLOCK) ON UALog.lUserAction = UA.lID INNER JOIN UserActionType UAT WITH (NOLOCK) ON UAT.lid = UALog.lUserActionType INNER JOIN Map_UserToOffice map WITH (NOLOCK) ON UALog.lMap_UserToOffice = map.lID INNER JOIN Office o WITH (NOLOCK) ON map.lOffice = o.lID INNER JOIN [User] u WITH (NOLOCK) ON map.lUser = u.lID INNER JOIN Patient p WITH (NOLOCK) ON UALog.lPatient = p.lID LEFT JOIN Map_PatientToOffice mapPO WITH (NOLOCK) ON o.lID = mapPO.lOffice AND p.lID = mapPO.lPatient INNER JOIN Address a WITH (NOLOCK) ON p.lAddress = a.lID LEFT JOIN MAP_AuditActionTypeTableToMappingTable mapAudit WITH (NOLOCK) ON mapAudit.lUserAction = UALog.lUserAction --Added by Roger for Audit AND mapAudit.lEnterprise = ' + CAST( @lEnterprise as varchar ) +' AND mapAudit.lUserActionType = UALog.lUserActionType ' +' INNER JOIN UserIdentifier e WITH (NOLOCK) ON p.lid = e.lPatient AND e.lUserIdentifierType = ' + CAST (@lIdentifierType AS VARCHAR) + ' AND e.nStatus = 1 ' + ' AND e.sValue = ''' + @szIdentifierValue +'''' + ' LEFT JOIN UserActionLogComment UALC WITH (NOLOCK) ON UALog.lid = UALC.lUserActionLog WHERE 1=1 ' + @sSQLSearch + ' AND o.lEnterprise = ' + CAST( @lEnterprise AS VARCHAR) + ' AND ISNULL(mapAudit.bIsavailable, 1) = 1 ' + ' ORDER BY dAction' exec sp_executesql @sSQLSelect END END ELSE BEGIN IF @szIdentifierValue = '' BEGIN SET @sSQLSelect = @sSQLSelect + ' SELECT UALog.*, p.szFirst, p.szLast, p.szChartNum, o.szOfficeName, UA.szAction, UAT.szType + '' '' + UA.szAction AS szFullDescription, u.szLast AS szUserLast, u.szFirst AS szUserFirst, UALog.dDateOfAction AS dAction, UALC.szComment, mapAudit.bIsavailable FROM UserActionLog 
UALog WITH (NOLOCK) INNER JOIN UserAction UA WITH (NOLOCK) ON UALog.lUserAction = UA.lID INNER JOIN UserActionType UAT WITH (NOLOCK) ON UAT.lid = UALog.lUserActionType INNER JOIN Map_UserToOffice map WITH (NOLOCK) ON UALog.lMap_UserToOffice = map.lID INNER JOIN Office o WITH (NOLOCK) ON map.lOffice = o.lID INNER JOIN [User] u WITH (NOLOCK) ON map.lUser = u.lID INNER JOIN Patient p WITH (NOLOCK) ON UALog.lPatient = p.lID INNER JOIN Address a WITH (NOLOCK) ON p.lAddress = a.lID LEFT JOIN Map_PatientToOffice mapPO WITH (NOLOCK) ON o.lID = mapPO.lOffice AND p.lID = mapPO.lPatient LEFT JOIN MAP_AuditActionTypeTableToMappingTable mapAudit WITH (NOLOCK) ON mapAudit.lUserAction = UALog.lUserAction --Added by Roger for Audit AND mapAudit.lEnterprise = '+ CAST (@lEnterprise AS VARCHAR) + ' AND mapAudit.lUserActionType = UALog.lUserActionType ' SET @sSQLSelect = @sSQLSelect + ' LEFT JOIN UserActionLogComment UALC WITH (NOLOCK) ON UALog.lid = UALC.lUserActionLog ' SET @sSQLSelect = @sSQLSelect + ' WHERE map.lUser = '+ CAST( @lUser AS VARCHAR)+ @sSQLSearch + ' AND o.lEnterprise = ' + CAST( @lEnterprise AS VARCHAR) + ' AND ISNULL(mapAudit.bIsavailable, 1) = 1 ' + ' ORDER BY dAction' exec sp_executesql @sSQLSelect END ELSE BEGIN SET @sSQLSelect = @sSQLSelect + ' SELECT UALog.*, p.szFirst, p.szLast, p.szChartNum, o.szOfficeName, UA.szAction, UAT.szType + '' '' + UA.szAction AS szFullDescription, u.szLast AS szUserLast, u.szFirst AS szUserFirst, UALog.dDateOfAction AS dAction, UALC.szComment, mapAudit.bIsavailable FROM UserActionLog UALog WITH (NOLOCK) INNER JOIN UserAction UA WITH (NOLOCK) ON UALog.lUserAction = UA.lID INNER JOIN UserActionType UAT WITH (NOLOCK) ON UAT.lid = UALog.lUserActionType INNER JOIN Map_UserToOffice map WITH (NOLOCK) ON UALog.lMap_UserToOffice = map.lID INNER JOIN Office o WITH (NOLOCK) ON map.lOffice = o.lID INNER JOIN [User] u WITH (NOLOCK) ON map.lUser = u.lID INNER JOIN Patient p WITH (NOLOCK) ON UALog.lPatient = p.lID LEFT JOIN Map_PatientToOffice mapPO WITH (NOLOCK) ON o.lID = mapPO.lOffice AND p.lID = mapPO.lPatient INNER JOIN Address a WITH (NOLOCK) ON p.lAddress = a.lID INNER JOIN UserIdentifier e WITH (NOLOCK) ON p.lid = e.lPatient AND e.lUserIdentifierType = ' + CAST (@lIdentifierType AS VARCHAR) + ' AND e.nStatus = 1 ' + ' AND e.sValue = ''' + @szIdentifierValue +'''' +' LEFT JOIN MAP_AuditActionTypeTableToMappingTable mapAudit WITH (NOLOCK) ON mapAudit.lUserAction = UALog.lUserAction AND mapAudit.lEnterprise = ' + CAST( @lEnterprise AS VARCHAR) +' AND mapAudit.lUserActionType = UALog.lUserActionType ' SET @sSQLSelect = @sSQLSelect + ' LEFT JOIN UserActionLogComment UALC WITH (NOLOCK) ON UALog.lid = UALC.lUserActionLog ' SET @sSQLSelect = @sSQLSelect + + ' WHERE map.lUser = ' + CAST (@lUser AS VARCHAR) + @sSQLSearch + ' AND o.lEnterprise = ' + CAST( @lEnterprise AS VARCHAR) + ' AND ISNULL(mapAudit.bIsavailable, 1) = 1 ' + ' ORDER BY dAction' exec sp_executesql @sSQLSelect END END END --with audit record ELSE BEGIN --without audit record IF @lUser = 0 BEGIN IF @szIdentifierValue = '' BEGIN SET @sSQLSelect = @sSQLSelect + ' SELECT UALog.*, p.szFirst, p.szLast, p.szChartNum, o.szOfficeName, UA.szAction, UAT.szType + '' '' + UA.szAction AS szFullDescription, u.szLast AS szUserLast, u.szFirst AS szUserFirst, UALog.dDateOfAction AS dAction, UALC.szComment FROM UserActionLog UALog WITH (NOLOCK) INNER JOIN UserAction UA WITH (NOLOCK) ON UALog.lUserAction = UA.lID INNER JOIN UserActionType UAT WITH (NOLOCK) ON UAT.lid = UALog.lUserActionType INNER JOIN Map_UserToOffice map WITH 
(NOLOCK) ON UALog.lMap_UserToOffice = map.lID INNER JOIN Office o WITH (NOLOCK) ON map.lOffice = o.lID INNER JOIN [User] u WITH (NOLOCK) ON map.lUser = u.lID INNER JOIN Patient p WITH (NOLOCK) ON UALog.lPatient = p.lID INNER JOIN Address a WITH (NOLOCK) ON p.lAddress = a.lID LEFT JOIN Map_PatientToOffice mapPO WITH (NOLOCK) ON o.lID = mapPO.lOffice AND p.lID = mapPO.lPatient LEFT JOIN UserActionLogComment UALC WITH (NOLOCK) ON UALog.lid = UALC.lUserActionLog WHERE 1=1 ' + @sSQLSearch +' AND o.lEnterprise = ' + CAST( @lEnterprise AS VARCHAR) + ' ORDER BY dAction' exec sp_executesql @sSQLSelect END ELSE BEGIN PRINT 'SIX' SET @sSQLSelect = @sSQLSelect + ' SELECT UALog.*, p.szFirst, p.szLast, p.szChartNum, o.szOfficeName, UA.szAction, UAT.szType + '' '' + UA.szAction AS szFullDescription, UALog.dDateOfAction AS dAction, u.szLast AS szUserLast, u.szFirst AS szUserFirst, UALC.szComment FROM UserActionLog UALog WITH (NOLOCK) INNER JOIN UserAction UA WITH (NOLOCK) ON UALog.lUserAction = UA.lID INNER JOIN UserActionType UAT WITH (NOLOCK) ON UAT.lid = UALog.lUserActionType INNER JOIN Map_UserToOffice map WITH (NOLOCK) ON UALog.lMap_UserToOffice = map.lID INNER JOIN Office o WITH (NOLOCK) ON map.lOffice = o.lID INNER JOIN [User] u WITH (NOLOCK) ON map.lUser = u.lID INNER JOIN Patient p WITH (NOLOCK) ON UALog.lPatient = p.lID LEFT JOIN Map_PatientToOffice mapPO WITH (NOLOCK) ON o.lID = mapPO.lOffice AND p.lID = mapPO.lPatient INNER JOIN Address a WITH (NOLOCK) ON p.lAddress = a.lID INNER JOIN UserIdentifier e WITH (NOLOCK) ON p.lid = e.lPatient AND e.lUserIdentifierType = ' + CAST (@lIdentifierType AS VARCHAR) + ' AND e.nStatus = 1 ' + ' AND e.sValue = ''' + @szIdentifierValue +'''' + ' LEFT JOIN UserActionLogComment UALC WITH (NOLOCK) ON UALog.lid = UALC.lUserActionLog WHERE 1=1 ' + @sSQLSearch +' AND o.lEnterprise = ' + CAST(@lEnterprise AS VARCHAR) + ' ORDER BY dAction' exec sp_executesql @sSQLSelect END END ELSE BEGIN IF @szIdentifierValue = '' BEGIN PRINT 'SEVEN' SET @sSQLSelect = @sSQLSelect + ' SELECT UALog.*, p.szFirst, p.szLast, p.szChartNum, o.szOfficeName, UA.szAction, UAT.szType + '' '' + UA.szAction AS szFullDescription, UALog.dDateOfAction AS dAction, u.szLast AS szUserLast, u.szFirst AS szUserFirst, UALC.szComment FROM UserActionLog UALog WITH (NOLOCK) INNER JOIN UserAction UA WITH (NOLOCK) ON UALog.lUserAction = UA.lID INNER JOIN UserActionType UAT WITH (NOLOCK) ON UAT.lid = UALog.lUserActionType INNER JOIN Map_UserToOffice map WITH (NOLOCK) ON UALog.lMap_UserToOffice = map.lID INNER JOIN Office o WITH (NOLOCK) ON map.lOffice = o.lID INNER JOIN [User] u WITH (NOLOCK) ON map.lUser = u.lID INNER JOIN Patient p WITH (NOLOCK) ON UALog.lPatient = p.lID INNER JOIN Address a WITH (NOLOCK) ON p.lAddress = a.lID LEFT JOIN Map_PatientToOffice mapPO WITH (NOLOCK) ON o.lID = mapPO.lOffice AND p.lID = mapPO.lPatient LEFT JOIN UserActionLogComment UALC WITH (NOLOCK) ON UALog.lid = UALC.lUserActionLog WHERE map.lUser = '+ CAST( @lUser AS VARCHAR)+ @sSQLSearch +' AND o.lEnterprise = ' + CAST(@lEnterprise AS VARCHAR) + ' ORDER BY dAction' exec sp_executesql @sSQLSelect END ELSE BEGIN SET @sSQLSelect = @sSQLSelect + ' SELECT UALog.*, p.szFirst, p.szLast, p.szChartNum, o.szOfficeName, UA.szAction, UAT.szType + '' '' + UA.szAction AS szFullDescription, u.szLast AS szUserLast, u.szFirst AS szUserFirst, UALog.dDateOfAction AS dAction, UALC.szComment FROM UserActionLog UALog WITH (NOLOCK) INNER JOIN UserAction UA WITH (NOLOCK) ON UALog.lUserAction = UA.lID INNER JOIN UserActionType UAT WITH (NOLOCK) ON 
UAT.lid = UALog.lUserActionType INNER JOIN Map_UserToOffice map WITH (NOLOCK) ON UALog.lMap_UserToOffice = map.lID INNER JOIN Office o WITH (NOLOCK) ON map.lOffice = o.lID INNER JOIN [User] u WITH (NOLOCK) ON map.lUser = u.lID INNER JOIN Patient p WITH (NOLOCK) ON UALog.lPatient = p.lID LEFT JOIN Map_PatientToOffice mapPO WITH (NOLOCK) ON o.lID = mapPO.lOffice AND p.lID = mapPO.lPatient INNER JOIN Address a WITH (NOLOCK) ON p.lAddress = a.lID INNER JOIN UserIdentifier e WITH (NOLOCK) ON p.lid = e.lPatient AND e.lUserIdentifierType = ' + CAST ( @lIdentifierType AS VARCHAR ) +' AND e.nStatus = 1 ' + ' AND e.sValue = ''' + @szIdentifierValue +'''' + ' LEFT JOIN UserActionLogComment UALC WITH (NOLOCK) ON UALog.lid = UALC.lUserActionLog WHERE map.lUser = '+ CAST( @lUser AS VARCHAR)+ @sSQLSearch +' AND o.lEnterprise = ' + CAST(@lEnterprise AS VARCHAR) + ' ORDER BY dAction' exec sp_executesql @sSQLSelect END END END --without audit record[/code]
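A hedged aside: one common reason procedures built this way run slowly is that every distinct set of concatenated literal values compiles a brand-new plan (and invites SQL injection). Passing the values as parameters to sp_executesql lets plans be reused; a minimal sketch of the pattern, not the original query:

[code="sql"]DECLARE @sql NVARCHAR(4000) = N'
SELECT UALog.*
FROM UserActionLog UALog
WHERE UALog.dDateOfAction BETWEEN @pSDate AND @pEDate';

EXEC sp_executesql @sql,
     N'@pSDate DATETIME, @pEDate DATETIME',
     @pSDate = @SDate,
     @pEDate = @EDate;[/code]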

Page level compression - Indices/Partitions/Heap

Posted: 23 Jul 2013 04:49 AM PDT

I have read a lot of articles on compression, but I couldn't find a definitive answer to these questions:

i) If a table has no clustered index but has nonclustered indexes, do I need to compress the table (heap) first and then the NC indexes?

ii) If a table has a clustered index and nonclustered indexes, would compressing just the clustered index also compress the NC indexes?

iii) When compressing the clustered index, can this be done with the ONLINE = ON option?
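For reference: compressing the heap or clustered index does not touch the nonclustered indexes; each index carries its own compression setting. A minimal sketch assuming a table dbo.MyTable (ONLINE = ON is Enterprise-only, and on 2008 it is not allowed for indexes containing LOB columns):

[code="sql"]-- compresses only the heap or the clustered index
ALTER TABLE dbo.MyTable REBUILD WITH (DATA_COMPRESSION = PAGE);

-- nonclustered indexes must be rebuilt separately (ALL covers every index on the table)
ALTER INDEX ALL ON dbo.MyTable REBUILD WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);[/code]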

After Insert Trigger won't fire - Same Trigger exists on 35 other tables working perfectly - Why won't THIS one fire?

Posted: 23 Jul 2013 04:19 AM PDT

[b]AFTER INSERT TRIGGER WON'T FIRE[/b]

The same trigger exists on 35 other tables, working perfectly. Why won't this one fire?

[code="sql"]USE [UVAHS_Metadata]
GO
/****** Object: Trigger [mdl].[Item_After_Insert_Trgr] Script Date: 07/23/2013 13:14:41 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- Trigger: Item_After_Insert_Trgr
ALTER TRIGGER [mdl].[Item_After_Insert_Trgr] ON [mdl].[Item]
AFTER INSERT
AS
BEGIN
    UPDATE mdl.Item
    SET Item.Item_Updt_Dtm = GETDATE(),
        Item.Item_Add_User_PK = (SELECT UVAHS_User_PK FROM mdl.UVAHS_User
                                 WHERE (UVAHS_User.UVAHS_User_Id = LTRIM(RTRIM(REPLACE(SYSTEM_USER, 'HSCDOM\', ' '))))),
        Item.Item_Updt_User_PK = (SELECT UVAHS_User_PK FROM mdl.UVAHS_User
                                  WHERE (UVAHS_User.UVAHS_User_Id = LTRIM(RTRIM(REPLACE(SYSTEM_USER, 'HSCDOM\', ' ')))))
    FROM inserted i
    WHERE Item.Item_PK = i.Item_PK
END[/code]

Page Compression - Syntax

Posted: 23 Jul 2013 05:24 AM PDT

Is this syntax right for compression? I have a table which is partitioned and has a clustered index and two NC indexes. I want to enable page-level compression on all indexes and partitions. Is this syntax right? Thanks

[code="sql"]ALTER INDEX [PK_CluIndx] ON [dbo].[MyTable] REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = PAGE, PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, ONLINE = ON, SORT_IN_TEMPDB = OFF)
GO
ALTER INDEX [Nc_1] ON [dbo].[MyTable] REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = PAGE, PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, ONLINE = ON, SORT_IN_TEMPDB = OFF)

ALTER INDEX [NC_2] ON [dbo].[MyTable] REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = PAGE, PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, ONLINE = ON, SORT_IN_TEMPDB = OFF)[/code]

Viewing an xml plan as a graphical plan

Posted: 22 Jul 2013 09:22 PM PDT

I am having trouble viewing a query plan as a graphical plan. I am running 2008 R2 SP1 on the servers, and I am using SSMS version 10.50.2550.0 on my laptop. I have a query which returns the plans in the cache, and it outputs a link such as:

<ShowPlanXML xmlns="http://schemas.microsoft.........>

which I click on, and I should get a graphical representation in Management Studio, right? I was about to post that all I get is the raw XML when I click on these links; however, I have just clicked on one of them and it returned the graphical execution plan as expected. The query returned 500 rows (I had to stop it running as it was taking a long time), and up to now 99% of the links just return the raw XML and not the graphical plan. Why are they not all returning the graphical version of the plan? Can somebody explain why this is?

Errors while querying over VPN connection after moving to Windows 7 64-bit

Posted: 23 Jul 2013 04:02 AM PDT

I am having trouble executing queries on a new Windows 7 64-bit machine. The target database is accessed through a Cisco VPN tunnel. I have tried to use SSMS from the original install and from an Express installer. I also tried both 32-bit and 64-bit installs of SSMS. All result in the same errors. My errors are very similar to the ones found in this [url=http://connect.microsoft.com/SQLServer/feedback/details/361659/ssms-bad-client-side-errors]posting[/url], but I have the issues while connected to the VPN. I am not sure how to troubleshoot this, as the error changes. If I query a local database I don't always get the same error, but the errors seem to occur when the query returns many rows. I do have success when the queries are simple and affect very few rows. My local database is running 10.0.1600 and the remote database is running 10.0.4000, and this works fine from existing XP machines.

Snapshot Replication Error

Posted: 23 Jul 2013 02:52 AM PDT

Hey guys,

So I have set up replication from our production database to our reporting database so we are not hitting production for reporting. I set this up 2 months ago and everything was running fine until about 5 days ago, when it started failing every night. When I go to look at the job history, see the big red X, and look at the error, it gives me this:

[code="plain"]Message
Executed as user: TRH\mssql. String or binary data would be truncated. [SQLSTATE 22001] (Error 8152). The step failed.[/code]

For some reason I was under the impression it was failing during the distribution step of replication. I know what that error means, but I started digging. Yesterday I created a backup of the source and restored it onto reporting, to see if that helped the issue. I checked this morning and nope, same failure, same message. I also have the Redgate tool SQL Compare, which compares the schemas of the two databases. Once it finished, it showed no differences and 22723 identical objects!

When looking back at the error, the light bulb came on: this is failing on the snapshot portion of replication! Now it makes no sense to me why I would get that error when trying to create a snapshot. It's not pushing data anywhere, so I don't know why this error would be thrown. Looking at the job history and expanding today's date with the big red X, I noticed some of the lines had a blue curved arrow and some had the successful green play button. Looking at the blue curved arrow line, I noticed this:

[code="plain"]2013-07-23 07:17:43.55 [8%] The replication agent had encountered an exception.
2013-07-23 07:17:43.55 Source: Replication
2013-07-23 07:17:43.55 Exception Type: Microsoft.SqlServer.Replication.NativeSqlConnectionException
2013-07-23 07:17:43.55 Exception Message: Failed to read column data
2013-07-23 07:17:43.55 Message Code: 0
2013-07-23 07:17:43.55
2013-07-23 07:17:43.55 [8%] The replication agent had encountered an exception.
2013-07-23 07:17:43.55 Source: Replication
2013-07-23 07:17:43.55 Exception Type: Microsoft.SqlServer.Replication.NativeSqlConnectionException
2013-07-23 07:17:43.55 Exception Message: Failed to read column data
2013-07-23 07:17:43.55 Message Code: 0
2013-07-23 07:17:43.55[/code]

Let me say this again: there was not a red X on this line, just the blue curved arrow, and the line above it is a successful green one. Two lines above it is where I see the red X errors with "String or binary data would be truncated".

Details of the snapshot replication:
- Source to Reporting
- A bunch of articles - at least 10,000
- Security settings: run under the SQL Server Agent service account; connect to the publisher by impersonating the process account
- Runs every morning around 3 am

Can anyone give me any insight as to what might be going on here?

Query taking non-consistent time to execute

Posted: 22 Jul 2013 08:19 PM PDT

I have a query which sometimes takes a long time to execute. When I checked, it was taking 15 seconds, but it should run faster than that. When I checked the query again, it executed within 11 seconds. Then I tried removing the CAST used in the query, and it ran in just 8 seconds. But when I ran the original query (with CAST) again, it took less time. I checked by running the query on several computers; in some cases I get output in just 1 or 2 seconds. So the same query takes different times to execute on different PCs, and even on the same PC the time required is not consistent. I am not able to find out why this happens. Below is the query I used to test:

[code="sql"]SELECT [customer].[Customer name], [customer].[Sl_No], [customer].[Id]
FROM [company].dbo.[customer]
WHERE (Charindex('9000413237', CAST([company].dbo.[customer].[Phone no] AS VARCHAR(MAX))) > 0)[/code]

Here the customer table has about 30 columns. 'Customer name' is of datatype varchar(255), 'Sl_No' is int, 'Id' is int, and 'Phone no' is varchar(255).

differences

Posted: 22 Jul 2013 08:52 PM PDT

What are the differences between installing SQL Server 2005 and SQL Server 2008?

Conditional Join

Posted: 22 Jul 2013 08:28 PM PDT

Hi, I have 3 tables: TableA, TableB, TableC. I have to update data in TableA from TableB or TableC. If there is at least one row of data in TableB, I have to completely ignore TableC and do an INNER JOIN between TableA and TableB. If there is no data at all in TableB, then I have to join with TableC and update TableA. Please help.
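A minimal sketch of the branching logic; the join key id and payload column col are assumptions, since the real columns weren't posted:

[code="sql"]IF EXISTS (SELECT 1 FROM TableB)
BEGIN
    UPDATE a
    SET    a.col = b.col
    FROM   TableA a
    INNER JOIN TableB b ON b.id = a.id
END
ELSE
BEGIN
    UPDATE a
    SET    a.col = c.col
    FROM   TableA a
    INNER JOIN TableC c ON c.id = a.id
END[/code]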

exclude specific tables during the database restore

Posted: 22 Jul 2013 09:41 PM PDT

Hi All, is there a way to exclude specific tables during a SQL database restore? Thanks

view pdf in report

Posted: 22 Jul 2013 07:48 PM PDT

Hi, please assist. I have a table with PDF files, and I would like to display these files in a report. The table looks like this:

[code="plain"]Num Type FileSource
1   pdf  0x25044462D...........[/code]

I would like to view the PDF file in a report. I've tried the following:

1. Add a table, replace the textbox with a rectangle, and add an image. The image source is from the database and the MIME type is PNG. When I run the report I get the following error: "An error occurred during client rendering. Parameter is not valid".

2. Tried to create a hyperlink to open the PDF - Textbox Properties, Action, Go to URL, Select URL = FileSource. The hyperlink does not work.

Please HELP!!! :w00t:

Thank you, Monique

Tuesday, July 23, 2013

[SQL Server] First, Last row and other data from those rows


First, Last row and other data from those rows

Posted: 22 Jul 2013 06:39 PM PDT

Hi, I have the following data in tables:

[code="plain"]T1:
ID Dept Cat
1  WER  2
2  TTR  7[/code]

[code="plain"]T2:
ID Date       Type
1  2013-07-01 GA
2  2013-07-04 FS
2  2013-07-08 TR
2  2013-07-01 TT
1  2013-04-05 RT
1  2013-05-13 GG
2  2013-04-18 TT[/code]

I would like to query the first table and then join to the second to get the first and last transaction, plus some other data from the first and last transaction rows. I would like output similar to the following:

[code="plain"]ID Cat FirstDate  FirstType LastDate   LastType
1  2   2013-04-05 RT        2013-07-01 GA
2  7   2013-04-18 TT        2013-07-08 TR[/code]

I was trying to use MIN/MAX to query the first and last date (T2.Date), but then I can't extract the related first and last Type (T2.Type).
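One way that keeps the Type columns attached to their dates is ROW_NUMBER(); a sketch against the tables as posted:

[code="sql"]WITH ordered AS (
    SELECT ID, [Date], [Type],
           ROW_NUMBER() OVER (PARTITION BY ID ORDER BY [Date] ASC)  AS rn_first,
           ROW_NUMBER() OVER (PARTITION BY ID ORDER BY [Date] DESC) AS rn_last
    FROM T2
)
SELECT t1.ID, t1.Cat,
       f.[Date] AS FirstDate, f.[Type] AS FirstType,
       l.[Date] AS LastDate,  l.[Type] AS LastType
FROM T1 t1
INNER JOIN ordered f ON f.ID = t1.ID AND f.rn_first = 1
INNER JOIN ordered l ON l.ID = t1.ID AND l.rn_last = 1;[/code]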

Trying to convert varchar to datetime

Posted: 23 Jul 2013 08:31 AM PDT

I am trying to convert '20114', which is stored as varchar, to datetime, but I am getting the following error: Syntax error during explicit conversion of VARCHAR value '20114' to a DATETIME field. Can someone help me solve this?

Using a Temporary Table in a View in Order to Combine three Queries

Posted: 23 Jul 2013 08:45 AM PDT

Hi all. I am trying to create a view using the three queries below, and I get the error message [b]Views or functions are not allowed on temporary tables[/b]. Is there a way to do that, or is there a way to combine the three queries below so I don't have to use a temp table and can create a view? Thanks!

[b]--Query 1[/b]

[code="sql"]SELECT * INTO #MOVEMENTS
FROM [GW_DW].[dbo].[DimStatusHistory] d
WHERE TransferFromToProgram <> '' AND d.Status = 12;[/code]

[b]--Query 2[/b]

[code="sql"]SELECT DISTINCT n.[CLT_NBR], n.[CHILD_NAME], n.[ReasonKey], n.[ReasonDesc], n.[EFFECT_DT], n.[Status], n.[STATUS_DESC],
    n.[DESCRIPT], n.[TRAN_TYPE], n.[OTYPE], n.[Old_FID], n.[NTYPE], n.[New_FID], n.[TransferFromToProgram],
    [ECMS].dbo.[FN_PRIOR_EFFECT_DT_FOR_STATUS](n.[EFFECT_DT], n.[CLT_NBR], 12) AS PRIOR_EFFECT_DT,
    [ECMS].dbo.[FN_NEXT_EFFECT_DT_FOR_STATUS](n.[EFFECT_DT], n.[CLT_NBR], 12) AS FUTURE_EFFECT_DT,
    n.[TOTAL_DAYS]
INTO #NEW_MOVEMENTS
FROM [GW_DW].[dbo].[DimStatusHistory] n
LEFT OUTER JOIN #MOVEMENTS m ON n.CLT_NBR = m.CLT_NBR
WHERE n.TransferFromToProgram NOT IN ('', 'FBH - TFBH', 'TFBH - FBH')
  AND n.EFFECT_DT BETWEEN @from_dt AND @to_dt
  AND n.COUNTY = 'NYC'
  AND n.OTYPE NOT IN ('RTC', 'SLIP')
  AND n.NTYPE NOT IN ('RTC', 'SLIP')
  AND n.Status = 12;[/code]

[b]--Query 3[/b]

[code="sql"]SELECT n.[CLT_NBR], n.[CHILD_NAME], c.CIN, cl.DOB, c.Age, c.Sex, c.PlcSource, w.PLACED_DT AS APD, w.IPD,
    n.[ReasonKey], n.[ReasonDesc], n.[EFFECT_DT], n.[TransferFromToProgram],
    n.[OTYPE], n.[Old_FID], h1.Rsrc_Name AS Old_FP_Name, h1.Orig_Cert AS Old_FP_Orig_Cert,
    n.[NTYPE], n.[New_FID], h2.Rsrc_Name AS New_FP_Name, h2.Orig_Cert AS New_FP_Orig_Cert,
    n.PRIOR_EFFECT_DT, m.Old_FID AS PRIOR_Old_FID,
    n.FUTURE_EFFECT_DT, f.New_FID AS FUTURE_New_FID,
    c.STF_NBR, c.Planner, s.UNIT, s.SupervisorName, s.[SITE], s.DirectorName,
    CASE WHEN (n.NTYPE = 'KINS' AND n.OTYPE <> 'KINS') THEN 1
         WHEN n.ReasonKey IN ('TE', 'TJ') THEN 1
         WHEN ((n.New_FID = m.Old_FID) AND (n.EFFECT_DT < DATEADD(D, 22, n.PRIOR_EFFECT_DT))) THEN 1
         ELSE 0 END AS POS_SCORE,
    CASE WHEN (n.NTYPE = 'KINS' AND n.OTYPE <> 'KINS') THEN 0
         WHEN n.ReasonKey IN ('TE', 'TJ') THEN 0
         WHEN n.New_FID = m.Old_FID AND n.EFFECT_DT < DATEADD(D, 22, n.PRIOR_EFFECT_DT) THEN 0
         WHEN ((n.Old_FID = f.New_FID) AND (DATEADD(D, 22, n.EFFECT_DT) > n.FUTURE_EFFECT_DT)) THEN 0
         ELSE -1 END AS NEG_SCORE
FROM #NEW_MOVEMENTS n
LEFT OUTER JOIN #MOVEMENTS m ON n.CLT_NBR = m.CLT_NBR AND n.PRIOR_EFFECT_DT = m.EFFECT_DT
LEFT OUTER JOIN #MOVEMENTS f ON n.CLT_NBR = f.CLT_NBR AND n.FUTURE_EFFECT_DT = f.EFFECT_DT
LEFT OUTER JOIN GW_DW.dbo.DimClient c ON c.CLT_NBR = n.CLT_NBR
LEFT OUTER JOIN [ECMS].[dbo].[WFR_CLIENT] w ON w.CLT_NBR = n.CLT_NBR
LEFT OUTER JOIN [ECMS].[dbo].[CLIENT] cl ON cl.CLT_NBR = n.CLT_NBR
LEFT OUTER JOIN GW_DW.dbo.DimStaff s ON s.ECMS_Wrkr_ID = c.STF_NBR
LEFT OUTER JOIN [GW_DW].[dbo].[DimHome_FHD] h1 ON h1.Facility_ID = n.Old_FID
LEFT OUTER JOIN [GW_DW].[dbo].[DimHome_FHD] h2 ON h2.Facility_ID = n.New_FID
WHERE s.[SITE] <> 'CGS'
ORDER BY n.CHILD_NAME, n.EFFECT_DT DESC[/code]
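One route (hedged, since @from_dt/@to_dt cannot live inside a view, and a view cannot contain the final ORDER BY either) is to fold the temp tables into CTEs. A trimmed sketch of the pattern only, with the date filter hard-coded or moved into an inline table-valued function and the view name invented for illustration:

[code="sql"]CREATE VIEW dbo.vwMovementsReport
AS
WITH MOVEMENTS AS (
    SELECT d.*
    FROM [GW_DW].[dbo].[DimStatusHistory] d
    WHERE d.TransferFromToProgram <> '' AND d.Status = 12
),
NEW_MOVEMENTS AS (
    SELECT DISTINCT n.CLT_NBR, n.CHILD_NAME, n.EFFECT_DT   -- remaining columns as in Query 2
    FROM [GW_DW].[dbo].[DimStatusHistory] n
    LEFT OUTER JOIN MOVEMENTS m ON n.CLT_NBR = m.CLT_NBR
    WHERE n.Status = 12                                    -- plus the other Query 2 filters, date window hard-coded
)
SELECT *                                                   -- the Query 3 joins and CASE logic go here
FROM NEW_MOVEMENTS;[/code]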

Creating a view with Declare

Posted: 22 Jul 2013 04:29 PM PDT

I am trying to create a view with the query below. It should be a view, as data in the other tables will be updated and I want to refer to this often; or, as a last resort, schedule the creation of this table overnight somehow.

[code="sql"]DECLARE @@string1 varchar(20) = ' Number='
DECLARE @@string2 varchar(20) = ' And Id='

SELECT f.linkid, f.docname, f.title, dc.catname, t.template, r.extlink, f.libid,
       substring(extlink, charindex(@@string1, extlink) + len(@@string1),
                 charindex(@@string2, extlink) - len(@@string1) - charindex(@@string1, extlink)) as ID
from AL_TEMPLATE t
INNER JOIN AL_RELATION r ON t.tempID = r.tempid
INNER JOIN AL_RELATED_FILE rf ON r.relID = rf.relid
INNER JOIN al_file f ON rf.linkid = f.linkID
INNER JOIN al_filecats fc ON f.linkid = fc.linkid
INNER JOIN AL_DOCCAT dc ON fc.catid = dc.catID
where charindex(@@string2, extlink) - len(@@string1) - charindex(@@string1, extlink) > 0[/code]

Thanks
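Since a view body cannot contain DECLARE, one option is to inline the two literals; a sketch with a made-up view name:

[code="sql"]CREATE VIEW dbo.vwFileTemplateLinks
AS
SELECT f.linkid, f.docname, f.title, dc.catname, t.template, r.extlink, f.libid,
       substring(r.extlink, charindex(' Number=', r.extlink) + len(' Number='),
                 charindex(' And Id=', r.extlink) - len(' Number=') - charindex(' Number=', r.extlink)) as ID
FROM AL_TEMPLATE t
INNER JOIN AL_RELATION r ON t.tempID = r.tempid
INNER JOIN AL_RELATED_FILE rf ON r.relID = rf.relid
INNER JOIN al_file f ON rf.linkid = f.linkID
INNER JOIN al_filecats fc ON f.linkid = fc.linkid
INNER JOIN AL_DOCCAT dc ON fc.catid = dc.catID
WHERE charindex(' And Id=', r.extlink) - len(' Number=') - charindex(' Number=', r.extlink) > 0;[/code]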

[how to] table with item names as a composite key - joining with missing combinations


table with item names as a composite key - joining with missing combinations

Posted: 23 Jul 2013 08:59 PM PDT

I've got a table design with many levels of item, and each has a name that's inside a separate table with a composite key linking to a language table.

i.e. Country, City, Street, Building, Floor, Suite, each with a 1-to-many relation.

If a country has two languages (fr + en), the database will be polled for both, and a value is expected in both languages whether or not it exists.

I'd like to create a query (a view, ideally) that can combine all these rows. The problem I keep hitting is that I either get all variations of languages/names across each item, or I lose entries that do not have a value in that language, along with any subsequent (child) items.


language: [id, name]
country: [id]
countryName: [languageid, countryid, name]
city: [id, countryid]
cityName: [languageid, cityid, name]
building: [id, cityid]
buildingName: [languageid, buildingid, name]

etc...

What I have done is create a table that links all levels (country, city, building, etc.) to only the city name (which determines whether there is an alternate name).

Then I join the tables in a view and get all the names as well. I've added a line that supplies the alternate value, which is something like this: ISNULL(dbo.buildingNames.name, (SELECT TOP (1) name FROM dbo.buildingNames AS x WHERE (buildingId = dbo.buildings.id)))

It works OK, but it's getting a bit messy and slow. Using views, this requires each item to have its separate name view, which relies on recreating the hierarchy to check which languages are needed.
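The same fallback can also be written with OUTER APPLY rather than a correlated subquery per column; a sketch against the schema above (table names follow the post's dbo.buildingNames usage, and @languageid stands in for however the requested language is supplied):

SELECT b.id,
       ISNULL(bn.name, alt.name) AS name
FROM dbo.buildings b
LEFT JOIN dbo.buildingNames bn
       ON bn.buildingid = b.id AND bn.languageid = @languageid
OUTER APPLY (SELECT TOP (1) x.name
             FROM dbo.buildingNames x
             WHERE x.buildingid = b.id) alt;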

The other problem related to this is that I have a location table that can have multiple items used.

i.e. assuming that the location is restricted to being on a floor, or in a suite.

ballLocation: [ballid,floorid,suiteid]

If I have both filled out, I can use the suiteName view (suite with alternate names), which already has the floorName; but if I only provide the floorID, then I can't use the same view and have to use two separate ones.

Do I need to perform any tasks after Postgres goes into recovery mode?

Posted: 23 Jul 2013 06:51 PM PDT

For starters, I do not have any sort of automated failover in place.

After two scenarios, I'm unsure of the state of the database and any required actions, if any, to take:

  1. My master server disappears off the face of the planet and I touch my_trigger_file, converting my standby into a master server.
  2. My master server crashes, then restarts in recovery mode

My assumption is that the server in question in each of those scenarios will first be in recovery mode, will then finish recovering, and will finally be ready as a master server. In scenario 2 my assumption means things are back to normal, while in scenario 1 my assumption means the standby has no idea it was ever a standby and is now simply a master. Are these assumptions correct, or are there other actions that need to take place in either or both scenarios after recovery finishes (aside from creating a new slave in the case of scenario 1)?

Also, is there a notable technical difference between the state of a slave server after becoming a master vs a master server undergoing recovery after a crash, or as far as Postgres is concerned, are they both essentially master servers that just recovered from something?

Note: I'm using Postgres 9.2 with asynchronous streaming replication.

MSSQL Grouping with row value basis

Posted: 23 Jul 2013 06:00 PM PDT

I have a child table that is something like this:

[Mach Status Table]

| Mach Status | Start Time | Duration|
+----------------+-------------+-----------+
| 1 | 2012-04-30 6:30 | 20.00 |
| 1 | 2012-04-30 6:50 | 50.00 |
| 2 | 2012-04-30 7:40 | 10.00 |
| 2 | 2012-04-30 7:50 | 10.00 |
| 1 | 2012-04-30 8:00 | 60.00 |
| 1 | 2012-04-30 9:00 | 10.00 |

I would like to be able to get a result set like this - one record for each consecutive run of a status, with an end time. What would be a good way to achieve this in MS SQL?

| Mach Status | Start Time | End Time |
+-------------+-----------------+-------------------+
| 1 | 2012-04-30 6:30 | 2012-04-30 7:40 |
| 2 | 2012-04-30 7:40 | 2012-04-30 8:00 |
| 1 | 2012-04-30 8:00 | 2012-04-30 9:10 |
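This is a classic gaps-and-islands shape; one sketch using the ROW_NUMBER difference trick (the table name dbo.MachStatus is an assumption, and Duration is taken to be minutes):

WITH grouped AS (
    SELECT [Mach Status], [Start Time], [Duration],
           ROW_NUMBER() OVER (ORDER BY [Start Time])
         - ROW_NUMBER() OVER (PARTITION BY [Mach Status] ORDER BY [Start Time]) AS grp
    FROM dbo.MachStatus
)
SELECT [Mach Status],
       MIN([Start Time]) AS [Start Time],
       MAX(DATEADD(MINUTE, CAST([Duration] AS INT), [Start Time])) AS [End Time]
FROM grouped
GROUP BY [Mach Status], grp
ORDER BY MIN([Start Time]);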

Where in phpMyAdmin code can I change behavior of table statistics depending on database?

Posted: 23 Jul 2013 02:55 PM PDT

Specifically, I want to hack into the code and add some PHP code like this:

if ($database != 'information_schema') { // NEW if statement to exclude information_schema from table statistics
    // existing code that gets the table statistics for a db
}

And, of course, the reason for my question is obvious to anyone who has tried to work with the information_schema database in phpMyAdmin. Once the initial view is loaded, it's fine; most queries on information_schema tables are just as fast as any. But the initial view can take minutes, or tens of minutes, to load, because SELECT COUNT(*) (and perhaps all aggregate functions) takes a very long time on those tables for some reason.

Note that I have already added this setting under the [mysqld] section of my.cnf:

innodb_stats_on_metadata = 0  

As suggested in this stack overflow answer.

It helped a lot, but it still takes at least a few minutes to load the database page in phpMyAdmin.

Thanks for your help.

NOTE: I'm currently using command line to search through files but have not found the code snippet in question yet:

$ grep -lr "COUNT(*)" /usr/share/phpmyadmin

That's on an Ubuntu/php/mysql 5.5.28 testing server. phpMyAdmin version 3.4.10.1deb1

MySQL Memory Grows Over Time

Posted: 23 Jul 2013 05:58 PM PDT

I've got a server running a LAMP stack with 1GB RAM that is maxed out on RAM usage. What I'm observing is that directly after a reboot, it drops to 25% usage, and then slowly climbs over time so that by the end of the day, it's sitting at 100%. When it peaks out, mysql is using 940MB according to TOP.

I've tried lowering many of the settings to reduce caching and buffer sizes (not ideal I know, but just trying to get the memory usage under control first - then things can be redialed in)

Current /etc/my.cnf:

[mysqld]
# Basic settings
user = mysql
datadir = /var/lib/mysql
port = 3306
socket = /var/lib/mysql/mysql.sock

# Security settings
local-infile = 0
symbolic-links = 0

# Memory and cache settings
query_cache_type = 1
query_cache_size = 4M
thread_cache_size = 4
table_cache = 256
tmp_table_size = 8M
max_heap_table_size = 8M
join_buffer_size = 1M
key_buffer_size = 1M
max_connections = 10
wait_timeout = 300

# Innodb settings
innodb_buffer_pool_size = 16M
innodb_additional_mem_pool_size = 1M
innodb_log_buffer_size = 1M
innodb_thread_concurrency = 2

[mysqld_safe]
# Basic safe settings
log-error = /var/log/mysqld.log
pid-file = /var/run/mysqld/mysqld.pid

MySQLTuner.pl Output:

# ./mysqltuner.pl

>>  MySQLTuner 1.2.0 - Major Hayden <major@mhtx.net>
>>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
>>  Run with '--help' for additional options and output filtering

-------- General Statistics --------------------------------------------------
[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 5.5.30
[OK] Operating on 64-bit architecture

-------- Storage Engine Statistics -------------------------------------------
[--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
[--] Data in MyISAM tables: 51K (Tables: 109)
[--] Data in InnoDB tables: 15M (Tables: 387)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
[!!] Total fragmented tables: 387

-------- Security Recommendations  -------------------------------------------
[OK] All database users have passwords assigned

-------- Performance Metrics -------------------------------------------------
[--] Up for: 2d 15h 14m 29s (1M q [4.411 qps], 71K conn, TX: 2B, RX: 78M)
[--] Reads / Writes: 62% / 38%
[--] Total buffers: 31.0M global + 3.6M per thread (10 max threads)
[OK] Maximum possible memory usage: 67.2M (6% of installed RAM)
[OK] Slow queries: 0% (0/1M)
[OK] Highest usage of available connections: 80% (8/10)
[OK] Key buffer size / total MyISAM indexes: 1.0M/450.0K
[!!] Key buffer hit rate: 90.7% (935 cached / 87 reads)
[OK] Query cache efficiency: 82.7% (201K cached / 243K selects)
[!!] Query cache prunes per day: 11934
[OK] Sorts requiring temporary tables: 0% (0 temp sorts / 15K sorts)
[OK] Temporary tables created on disk: 9% (17K on disk / 194K total)
[OK] Thread cache hit rate: 99% (20 created / 71K connections)
[!!] Table cache hit rate: 6% (256 open / 3K opened)
[OK] Open file limit used: 9% (102/1K)
[OK] Table locks acquired immediately: 100% (81K immediate / 81K locks)
[OK] InnoDB data size / buffer pool: 15.7M/16.0M

-------- Recommendations -----------------------------------------------------
General recommendations:
> Run OPTIMIZE TABLE to defragment tables for better performance
> Enable the slow query log to troubleshoot bad queries
> Increase table_cache gradually to avoid file descriptor limits
Variables to adjust:
> query_cache_size (> 4M)
> table_cache (> 256)

What I don't understand is that MySQLTuner says "Maximum possible memory usage: 67.2M (6% of installed RAM)", yet as indicated earlier, MySQL is using 940MB of RAM.

Any ideas on how to resolve this?

Use Arc Map 10.1

Posted: 23 Jul 2013 02:24 PM PDT

I have a dataset without zip codes. Can I geocode addresses without a zip code using ArcMap 10.1? While geocoding addresses I get the error "cannot connect to the server".

How to speed up a mysql restore from large dump file

Posted: 23 Jul 2013 02:16 PM PDT

I have a 350MB mysql dump file that takes over 3 minutes to load, and I'd like to speed it up. I've tried using --opt with mysqldump (shaved off 30 seconds). The best I can achieve is 42 seconds faster. Are there any other techniques to speed up a MySQL restore from a dump file?
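One commonly suggested approach (a sketch, assuming an InnoDB-heavy dump loaded through the mysql client; the dump path is a placeholder) is to drop per-statement overhead for the duration of the import:

-- in the mysql client session doing the import
SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;
SOURCE /path/to/dump.sql;
COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;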

Deleting the MySQL 'root' user on purpose

Posted: 23 Jul 2013 07:19 PM PDT

I created a new MySQL user with all the same privileges as the current 'root' user. For security reasons I thought, why not create another user for this, so people at least won't know the username of my superuser.
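For reference, a sketch of that step (the user name and password are placeholders):

CREATE USER 'admin'@'localhost' IDENTIFIED BY 'strong_password_here';
GRANT ALL PRIVILEGES ON *.* TO 'admin'@'localhost' WITH GRANT OPTION;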

I then dropped the root user.

Immediately my DB started throwing connection refused errors for all of my other users.

I quickly recreated the original 'root' user and everything magically started connecting again.

My question is then, is it possible to delete the root user in a MySQL database? And if so how?

Thanks.

MongoDB performance limits of mongo

Posted: 23 Jul 2013 01:26 PM PDT

There are 100 data files, each with 60 fields and over 4 million records. A perl program inserts the records or updates them based on a user-defined _id field. There is also a History collection that stores all values ever written for three fields. A replica set with two servers and an arbiter has been set up. Initially the files were loading into MongoDB at 45 minutes per file; after around 20 files the speed dropped considerably, and it is now 20 hours per file. The servers have started slowing down badly; even the logout command is no longer quick.

I have built hashed indexes on the _id field, and for the History collection I have built indexes on the id and date fields. The collections currently hold 4 million records in the actual data collection and around 100 million in the History collection. The servers have 17 GB of RAM each, of which only around 3.5 GB is used according to mongostat's res column. However, since the data records have to be inserted sequentially by date, I cannot exploit parallelism either.

Have the limits of Mongo been reached for this specific scenario? Is this slowdown to be expected? I have fsynced manually every now and then to ensure files are being written to disk. Are there other diagnostics I can run to better explain the situation? Is there a solution to this?

Thanks

PHP MySQL library version 5.0.51a differs from your MySQL server version 5.1.61

Posted: 23 Jul 2013 07:19 PM PDT

After asking my last question about $id=$_GET['id']; and security issues, I tried a PDO connection to my server. The following message appeared: ERROR: could not find driver.

I connected to phpMyAdmin 2.11.1.2 and had this message appear: Your PHP MySQL library version 5.0.51a differs from your MySQL server version 5.1.61. This may cause unpredictable behavior.

Is there a connection? Are PDO drivers required on the server? Would MySQLi work?

Bob

Insert Speeds for large batches

Posted: 23 Jul 2013 01:51 PM PDT

In my application, my INSERTs seem to be taking a major chunk of the time. I have a large number of objects in memory (~40-50,000) which I want to insert into a table.

Let's take a sample table:

CREATE TABLE bill (
  id BIGINT(20) PRIMARY KEY,
  amount INT(11) DEFAULT 0,
  bill_date DATETIME DEFAULT NOW(),
  INDEX (bill_date)
) ENGINE=InnoDB

Taking 3 rows as my batch size, the following are the approaches I could think of for inserting.

Approach 1 - construct and fire 3 raw inserts

INSERT INTO bill (amount, bill_date) VALUES (10, '2012-01-01 00:00:00');
INSERT INTO bill (amount, bill_date) VALUES (20, '2012-01-02 00:00:00');
INSERT INTO bill (amount, bill_date) VALUES (40, '2013-02-05 00:00:00');

Approach 2 - clubbing the values into 1 query

INSERT INTO bill (amount, bill_date) VALUES
(10, '2012-01-01 00:00:00'),
(20, '2012-01-02 00:00:00'),
(40, '2013-02-05 00:00:00');

Approach 3 - fire this query 1 time passing 6 parameters

INSERT INTO bill (amount, bill_date) VALUES
(?, ?), (?, ?), (?, ?);

Approach 4 - Fire this prepared query 3 times changing the 2 parameters each time

INSERT INTO bill (amount, bill_date) VALUES (?, ?);  

Any other approaches are welcome.

My question is

What is the quickest way to make multiple inserts in a table?

I have read this link on MySQL insert speed and this guide to JDBC programming, but I'm not able to come to a conclusion.

My case -

Currently my table has ~20 columns, most of which are numbers, with a couple of varchar(60) columns and 1 text column. MySQL version 5.5, running on InnoDB, with 1 index on the integer primary key. All the queries run in a transaction.

I construct my queries from Java and use Spring JDBC to run the queries.

I am currently following Approach 3. It is taking around 10 seconds for 20,000 inserts to an empty table, not including the time it takes to construct the query.

To keep things in perspective, it's taking 100-200 millis to fetch the data from the table.

Is there something I am missing? How do I make the inserts faster?

I need help understanding Postgres's archive cleanup functionality

Posted: 23 Jul 2013 08:32 PM PDT

I'm reading http://www.postgresql.org/docs/9.2/static/pgarchivecleanup.html and to me it seems like it will remove any and all WAL segments from my slave server that aren't required to perform recovery upon the slave server crashing (so basically everything except the last 1 or few segments, depending on how far behind my slave is running). I'd like to be able to perform emergency point in time recovery, in the unfortunate case of data corruption or accidental deletion, if possible. To do this, am I left with the following choices:

  1. Keeping all WAL segments since the last base backup was created (in my case, this is the one that the slave server was created from)
  2. Making base backups all the time or doing something with PGBarman, etc.?
  3. Am I misunderstanding something completely?

If option 1 is used, I presume the pg archive cleanup command is rendered useless to me. If option 2 is used, is it true that I should use the cleanup command as intended (deleting all the extra WAL segments immediately after they're committed)? Is there something more that I'm missing (see option 3)?

Note: I'm using Postgres 9.2.4

Proper MySQL database maintenance

Posted: 23 Jul 2013 01:39 PM PDT

I hope this isn't too broad of a question. I'm certain it'll be able to help me and any future dba noobs that stumble upon it.

I'm a systems admin that got put into the role of DBA (because I helped the CEO with his Outlook, so I can obviously manage our databases!). It's not that big or busy a database server: a mysqldump is about 6GB in size, and it's taken us 5 years to get it that big. Munin reports that we're averaging 40-60 queries a second at our peak hours.

My boss paid for me to take the Oracle University systems admin course, but having gone through it, it simply explains the different parts of the mysql server, what things do and how they do it. But it's not that in-depth and you definitely don't come out of that course a DBA.

So as the current DBA, what should I do to make sure everything is running smooth? Are there daily maintenance tasks I can perform? Are there certain metrics I should be checking? Or to put it another way, as DBAs, what do YOU do on a daily basis to keep everything in good shape?

If it'll help tailor the answer a little bit, here are some specifics. We have 171 tables, all but 3 are innodb, the others are myisam. We have Master/Slave replication set up between our primary datacenter and our disaster recovery site, using RBR. Version is 5.5.28.

What can I do?

mysql: how to copy database and required dependencies

Posted: 23 Jul 2013 08:55 PM PDT

I need to set up an automated copy of a single MySQL database (not the whole DBMS). If I use mysqldump to dump and recreate a DB, the copy may be unusable because the user accounts also need to be copied. Triggers in the source DB, for instance, may be copied, but there's no guarantee that they will execute correctly in the new DB, because the original triggers were designed to run as some particular user.

A couple of questions:

  • how can I automate copying of the user accounts and permissions?

  • is there a way, when copying a db to check whether all of its dependencies are met? (E.g. are all of the databases referred to by the stored procedures present?)

I understand that I could replicate the whole DBMS, but this is overkill (and often not practical for hourly regressions).
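For the first bullet, the grants live in the mysql system schema, so one approach is to script them out alongside the dump (the account name below is a placeholder):

  -- List the accounts, then capture each account's grants:
  SELECT user, host FROM mysql.user;
  SHOW GRANTS FOR 'app_user'@'localhost';   -- replay this output on the target

Tools such as Percona's pt-show-grants automate exactly this.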

Inline edit SQL Server database rows from Visual Studio

Posted: 23 Jul 2013 02:23 PM PDT

I'm pretty sure Microsoft has pulled one of the most useful features for performing quick edits on a SQL Server database within the Visual Studio IDE. It seems to have only affected SQL 2012 instances, but from the Server Explorer I can no longer right-click on a table, choose "Show Table Data", pop open the SQL pane, query the data, and then perform inline edits on the results (as if I were modifying a spreadsheet).

[screenshot: Show Table Data]

This means I now need to go into SSMS to make these kinds of quick updates. Does anybody know of a plugin I can use with VS 2012 to bring back this functionality? It seems odd to me that Microsoft has two different development trajectories with SSDT and SSMS. Are they designed to serve different purposes? Can SSMS be integrated into the Visual Studio IDE? I'd rather have a unified development environment if possible.

Any thoughts on a workaround for this problem would be much appreciated.

EDIT

I know some purists would quiver at the thought of treating a database table like a spreadsheet, but semantically they are not a world apart, and this is supported in SSMS. I'm in favour of relying on GUI-based approaches where I can to speed up routine tasks; why some would balk at this, I have no idea.

Issue with SQL Server Aliases on a multi-instance installation

Posted: 23 Jul 2013 04:33 PM PDT

I have a 64-bit installation of SQL 2008 R2 SP2 on Server 2008 R2, with five instances of SQL.

I can get the aliases set up to work fine on the server (i.e. if I remote onto the server, it works without problems), but from the client machine there is no joy.

Errors:

SQL Pipes Error 40, Error 53.

Or, if I specify port 1433: TCP Error 0, 11004.

If I specify port 63769 (which is mentioned as a port that SQL is listening on): TCP Error 0, 11004.

If I connect using servername\instancename, I have no problems.

I've set up the alias in both 32-bit and 64-bit on the server. I've left the ports as dynamic in the TCP settings, because if I define the port as 1433 for each instance, none of them will start. This is weird, as it's an approach I've used successfully on other servers.

There is no internal firewall, but some ports may, or may not, be shut off...

Any ideas about what to try next?

Thanks!
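For reference, one way to confirm which port an instance is actually listening on, from a session on the server itself:

  SELECT local_net_address, local_tcp_port
  FROM sys.dm_exec_connections
  WHERE session_id = @@SPID;   -- local_tcp_port is NULL over shared memory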

Do database engines update when the updated value is the same? [on hold]

Posted: 23 Jul 2013 11:53 AM PDT

Suppose I am running a multiple-column update on one row. Does the database engine physically write new data to the database if some of the columns contain the same values as already existed in the database? If so, how can it be avoided?

I'm about to implement a job which will run an update on a large table; most of the values will be the same, but it still has to recalculate them all. I am wondering whether the update will rewrite each column even when there is no need to, because the storage media will degrade faster.
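One way to sidestep the question for a single column, sketched with hypothetical table/column names:

  -- Only touch rows whose value actually differs (NULLs need extra care):
  UPDATE t
  SET    col = @new_value
  WHERE  id  = @id
    AND  col <> @new_value;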

Dealing with data stored as arrays in a MySQL DB

Posted: 23 Jul 2013 07:15 PM PDT

I know storing arrays in a DB field is wrong, and I would never do it myself; however, a third-party plugin my company is using stores data in an array, and I was wondering if you could help me try to deal with it.

It basically seems to link 2 tables and add a view count. Here is an example of the data:

a:4:{i:4;i:196;i:26;i:27;i:5;i:155;i:34;i:4;}

So I think this means there are 4 entries in the array, each with 2 attributes. The first of each pair (4, 26, 5, 34) are "store codes"; the second (196, 27, 155, 4) are numbers of plays. God knows why they are stored like this, as there is already another table that links the video with the stores, and they could've just stuck another column there for the view count.

Anyway, what I want to do is order by view count for a given store id within that array. Do you think this is possible, and does anyone have any ideas how to do it (see the sketch below)? If storing data like this is a standard, do you know the name for it, as I could probably take it from there?

Thanks!
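The format looks like PHP's serialize() output (a:4:{...} is an array of four key/value pairs), which may be the name being asked about. Extracting one store's count in pure SQL is fragile but possible as a sketch; table/column names are hypothetical, and it assumes the pattern 'i:26;i:' only ever matches a key, never a value/key boundary:

  -- Play count for store 26, sorted descending:
  SELECT video_id,
         CAST(SUBSTRING_INDEX(SUBSTRING_INDEX(play_data, 'i:26;i:', -1), ';', 1)
              AS UNSIGNED) AS plays
  FROM   video_stats
  WHERE  play_data LIKE '%i:26;i:%'
  ORDER BY plays DESC;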

Security : SQL Server Object Access From One Schema to another schema

Posted: 23 Jul 2013 05:37 PM PDT

Scenario: I have denied permissions on table1 for USER1: DENY SELECT, INSERT, UPDATE, DELETE ON OBJECT::dbo.table1 TO USER1

Instead, I have allowed access to a view (vw_table1) which selects from table1 and filters some data using a WHERE clause. The idea is that users won't be able to see the entire data of the table. To support insert, update and delete, I have created INSTEAD OF triggers.

The above scenario works fine when Table1 and VW_table1 are in the same schema.

Now I have a requirement to move only the view to a different schema (sec.vw_table1). After moving the view, when I do "select * from sec.vw_table1", it gives an access denied error when the view tries to access dbo.table1.

How can we achieve this in SQL Server? I don't want to give USER1 access to dbo.Table1 directly, but access should work when the table is reached through sec.vw_table1.

Any help would be highly appreciated!!
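For what it's worth, the same-schema case most likely works because of ownership chaining: dbo owns both the view and the table, so the DENY on the table is never checked. A hedged sketch of restoring that chain after the move is to give the sec schema the same owner:

  -- Same owner on both schemas re-establishes the ownership chain:
  ALTER AUTHORIZATION ON SCHEMA::sec TO dbo;
  GRANT SELECT ON OBJECT::sec.vw_table1 TO USER1;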

One to one mapping vs many to one mapping on data base design

Posted: 23 Jul 2013 07:00 PM PDT

I'm currently doing call detail record (CDR) data analysis as my project. During database design, I separated the date (a table containing id, day, month, year) and the time (a table containing id, second, minute, hour, etc.) from the call table (containing called number, duration, time_id, date_id, etc.), since millions of call transactions occur in a day. Now I'm confused about how to map date and time to the call table, i.e., whether to use one-to-one mapping or many-to-one mapping. One-to-one mapping can be done easily, but the data are redundant; I have no clear idea about many-to-one mapping. What I have in mind for many-to-one mapping is: first extract the date and check whether it exists; if it exists, get its id, otherwise insert the date and then get its id (see the sketch after the edit below). (Background: I have CDR data with fields card_no, service_key, calling_no, called_no, answer_time, clear_time, duration in a tab-separated text file, and have to build a data mart as in the figure below.) [figure: proposed data mart schema]

I am not good at describing problems, so if any information is required, please comment below.

EDIT

On cdr table

  1. Card_no is the mobile number of the subscriber: if the subscriber is dialing another number, it equals the calling number, and if the subscriber is receiving the call, card_no equals the called number. In the call table it is repeated.
  2. sql_time_stamp means answer_time; and since duration is already in the database, why do I have to store clear_time as well? (If there is any reason, let me know.)

On Subscriber table

The subscriber table contains dob, which is static, so I created another table containing age (which may change), just to separate the static part from the dynamic part; gender is also included, to reduce multi-table joins, as the analysis just involves age (group) and gender.
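The lookup pattern mentioned above, as a minimal sketch assuming a date dimension with a unique key on (year, month, day):

  -- Insert the date only if it's new, then fetch its surrogate id:
  INSERT IGNORE INTO dim_date (year, month, day) VALUES (2013, 7, 23);
  SELECT id FROM dim_date WHERE year = 2013 AND month = 7 AND day = 23;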

Negative number of rows inserted

Posted: 23 Jul 2013 04:28 PM PDT

I just ran a rather simple insert statement (that I think processed a lot of rows), and I got the message: "-1,311,750,183 rows inserted". Should I be concerned about the fact that it's negative?
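One plausible reading (an assumption, not a diagnosis): the client renders the row count as a signed 32-bit integer, so anything above 2^31 - 1 wraps negative. If that is what happened here, the true count would be 2^32 - 1,311,750,183 = 2,983,217,113 rows.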

Analyse MySQL General Query Log in Real-time?

Posted: 23 Jul 2013 04:37 PM PDT

We want to use mysql general query log to do real-time monitoring and auditing.

Currently our approach is:

  • set general_log=on;
  • sleep 15m;
  • set general_log=off;
  • scp & rm xxx.log;
  • set general_log=on;...

But the main problem is that turning the general log on/off causes a spike of slow queries.

I also thought of another approach: turn on the general log, tail -f it and ship the output, and periodically truncate the logfile (with "> xxx.log" or "cat /dev/null > xxx.log").

I'm wondering whether that's practical.

If only MySQL provided some built-in general-log message queue...
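A variant of the truncate idea that avoids toggling the log at all, sketched on the assumption that FLUSH GENERAL LOGS (available from MySQL 5.5.3) behaves as documented:

  -- Leave general_log = ON the whole time.
  -- At the OS level: mv xxx.log xxx.log.1
  FLUSH GENERAL LOGS;   -- mysqld reopens xxx.log; ship and delete xxx.log.1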

Innodb table is taking randomly long time to execute the insert query and in show processlist showing as queryend/updating

Posted: 23 Jul 2013 05:37 PM PDT

An InnoDB table randomly takes a long time to execute insert queries, and SHOW PROCESSLIST shows the state as "query end"/"updating". This happens for similar queries on the same table; the queries get stuck for five to fifteen minutes, and then suddenly the whole queued-up list is processed.

I cannot understand what suddenly happens sometimes...

The same queries, on the same MySQL version and configuration on a different server, have no problem...

Here are the InnoDB configuration variables:

  +---------------------------------+----------------------------------+
  | Variable_name                   | Value                            |
  +---------------------------------+----------------------------------+
  | innodb_adaptive_flushing        | ON                               |
  | innodb_adaptive_hash_index      | ON                               |
  | innodb_additional_mem_pool_size | 1073741824                       |
  | innodb_autoextend_increment     | 8                                |
  | innodb_autoinc_lock_mode        | 1                                |
  | innodb_buffer_pool_instances    | 1                                |
  | innodb_buffer_pool_size         | 8589934592                       |
  | innodb_change_buffering         | all                              |
  | innodb_checksums                | ON                               |
  | innodb_commit_concurrency       | 0                                |
  | innodb_concurrency_tickets      | 500                              |
  | innodb_data_file_path           | ibdata1:4G;ibdata2:4G:autoextend |
  | innodb_data_home_dir            | /var/lib/mysql                   |
  | innodb_doublewrite              | ON                               |
  | innodb_fast_shutdown            | 1                                |
  | innodb_file_format              | Antelope                         |
  | innodb_file_format_check        | ON                               |
  | innodb_file_format_max          | Antelope                         |
  | innodb_file_per_table           | ON                               |
  | innodb_flush_log_at_trx_commit  | 2                                |
  | innodb_flush_method             |                                  |
  | innodb_force_load_corrupted     | OFF                              |
  | innodb_force_recovery           | 0                                |
  | innodb_io_capacity              | 400                              |
  | innodb_large_prefix             | OFF                              |
  | innodb_lock_wait_timeout        | 50                               |
  | innodb_locks_unsafe_for_binlog  | ON                               |
  | innodb_log_buffer_size          | 33554432                         |
  | innodb_log_file_size            | 536870912                        |
  | innodb_log_files_in_group       | 2                                |
  | innodb_log_group_home_dir       | ./                               |
  | innodb_max_dirty_pages_pct      | 75                               |
  | innodb_max_purge_lag            | 0                                |
  | innodb_mirrored_log_groups      | 1                                |
  | innodb_old_blocks_pct           | 37                               |
  | innodb_old_blocks_time          | 0                                |
  | innodb_open_files               | 300                              |
  | innodb_print_all_deadlocks      | ON                               |
  | innodb_purge_batch_size         | 20                               |
  | innodb_purge_threads            | 0                                |
  | innodb_random_read_ahead        | OFF                              |
  | innodb_read_ahead_threshold     | 56                               |
  | innodb_read_io_threads          | 10                               |
  | innodb_replication_delay        | 0                                |
  | innodb_rollback_on_timeout      | OFF                              |
  | innodb_rollback_segments        | 128                              |
  | innodb_spin_wait_delay          | 6                                |
  | innodb_stats_method             | nulls_equal                      |
  | innodb_stats_on_metadata        | ON                               |
  | innodb_stats_sample_pages       | 8                                |
  | innodb_strict_mode              | OFF                              |
  | innodb_support_xa               | ON                               |
  | innodb_sync_spin_loops          | 30                               |
  | innodb_table_locks              | ON                               |
  | innodb_thread_concurrency       | 0                                |
  | innodb_thread_sleep_delay       | 10000                            |
  | innodb_use_native_aio           | ON                               |
  | innodb_use_sys_malloc           | ON                               |
  | innodb_version                  | 5.5.30                           |
  | innodb_write_io_threads         | 10                               |
  +---------------------------------+----------------------------------+

Can someone suggest why and what exactly happens?
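For anyone reproducing this, two diagnostics commonly captured while a stall is in progress (a sketch of where to look, not a fix):

  SHOW ENGINE INNODB STATUS\G    -- check the LOG and FILE I/O sections
  SELECT id, time, state, info
  FROM   information_schema.processlist
  WHERE  state IN ('query end', 'updating');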

Merging two Access tables into one

Posted: 23 Jul 2013 03:37 PM PDT

I have mostly identical tables; however, there are slight differences, like missing columns in the older ones. I want to merge 5 tables into one. Is there any easy way to do this?
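One common route is a single append query per source table, padding the missing columns with NULL; a sketch with hypothetical table/column names:

  INSERT INTO MergedTable (ColA, ColB, ColC)
  SELECT ColA, ColB, ColC FROM Table2013
  UNION ALL
  SELECT ColA, ColB, NULL FROM Table2009;   -- older table lacks ColC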

Why disabling a clustered index makes the table inaccessible?

Posted: 23 Jul 2013 02:47 PM PDT

When an index is disabled, the definition remains in the system catalog but is no longer used.
SQL Server does not maintain the index (as data in the table changes), and the index cannot be used to satisfy queries.
If a Clustered Index is disabled, the entire table becomes inaccessible.

The question is:
why isn't it possible to access the data directly from the table, discarding the B-tree (most likely by scanning the table row by row)? Wouldn't that be more appropriate than the data not being accessible at all?
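A minimal repro of the behaviour, with hypothetical names:

  ALTER INDEX PK_Orders ON dbo.Orders DISABLE;
  SELECT TOP (1) * FROM dbo.Orders;               -- fails: no plan can be produced
  ALTER INDEX PK_Orders ON dbo.Orders REBUILD;    -- rebuilding is how you re-enable it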

DB2 db2fm proccess

Posted: 23 Jul 2013 06:37 PM PDT

The server has been up for 365 days; however, I see some weird repeated processes.

Are these normal?

ps -fea | grep db2fm

  db2inst1  643284  229516  29   May 25      - 212564:06 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
  db2inst1  671770  229516  56   May 14      - 227447:02 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
  db2inst1  757794 1237058   0   Apr 19  pts/7  0:00 /bin/sh /home/db2inst1/sqllib/bin/db2cc
  db2inst1  774232  229516  30   Sep 25      - 94218:54 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
  db2inst1  962750  229516  30   Jul 18      - 145256:01 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
  db2inst1  999450  229516  29   Aug 17      - 117710:27 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
  db2inst1 1179898  229516  58   Nov 02      - 75788:49 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S

ps -fea | grep db2agent

  db2inst1  409770  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
  db2inst1  450750  778412   0   Apr 18      -  0:03 db2agent (idle) 0
  db2inst1  618688  680100   0   Apr 19      -  0:00 db2agent (idle) 0
  db2inst1  651440  680100   0   Nov 17      -  0:20 db2agent (DATABASEA) 0
  db2inst1  655508  491676   0   Apr 19      -  0:04 db2agent (idle) 0
  db2inst1  684038  680100   0   Mar 23      -  0:03 db2agent (DATABASEA) 0
  db2inst1  790706  491676   0   Apr 19      -  0:00 db2agent (idle) 0
  db2inst1  880672  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
  db2inst1  913438  778412   0   Nov 16      -  0:20 db2agent (idle) 0
  db2inst1  946182  491676   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
  db2inst1  991312  778412   0   Apr 17      -  0:16 db2agent (idle) 0
  db2inst1 1077466  491676   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
  db2inst1 1134726  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
  db2inst1 1142964  491676   0   Apr 19      -  0:00 db2agent (idle) 0
  db2inst1 1233112  491676   0   Apr 19      -  0:00 db2agent (idle) 0
  db2inst1 1261748  778412   0   Jun 15      -  0:18 db2agent (idle) 0
  db2inst1 1384678  778412   0   Mar 23      -  0:27 db2agent (idle) 0
  db2inst1 1404936  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
  db2inst1 1421368  778412   0   Mar 22      -  0:04 db2agent (idle) 0
  db2inst1 1445936  491676   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
  db2inst1 1482864  491676   0   Jun 16      -  0:31 db2agent (idle) 0
  db2inst1 1503440  778412   0   Jun 15      -  0:56 db2agent (idle) 0
  db2inst1 1519842  778412   0   Mar 23      -  0:00 db2agent (DATABASEA) 0
  db2inst1 1531946  680100   0   Apr 19      -  0:00 db2agent (idle) 0
  db2inst1 1572884  680100   0   Apr 19      -  0:00 db2agent (idle) 0

Other info

  oslevel -g
  Fileset                                 Actual Level        Maintenance Level
  -----------------------------------------------------------------------------
  bos.rte                                 5.3.0.40            5.3.0.0

  db2fm -s -S
  Gcf module 'fault monitor' is NOT operable
  Gcf module '/home/db2inst1/sqllib/lib/libdb2gcf.a' state is AVAILABLE

  uptime
  02:14PM   up 365 days,  12:51,  6 users,  load average: 6.69, 6.89, 6.97

  db2level
  DB21085I  Instance "db2inst1" uses "64" bits and DB2 code release "SQL08020"
  with level identifier "03010106".
  Informational tokens are "DB2 v8.1.1.64", "s040812", "U498350", and FixPak "7"

How to snapshot or version a relational database when data changes?

Posted: 23 Jul 2013 02:37 PM PDT

My system receives data feeds. Each data feed will end up creating inserts and/or updates to most tables in the (relational) database.

I need to capture the snapshot of what the entire database looked like after each data feed is received. Basically I need a way to version the database each time a data feed is run through the system.

Note: by capturing a snapshot, I don't mean literally taking a snapshot of the database, but rather writing history records or some such mechanism, so that I can query the database across "versions" to see what changed between versions (among other use cases).

Do known data model designs exist that can capture a snapshot of a database version like this?
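One pattern that seems to fit is a feed-versioned history table: each feed run gets a row in a feed table, and every data table keeps one row version per feed that touched it. A hedged sketch with hypothetical names:

  CREATE TABLE feed (
      feed_id    INT PRIMARY KEY,
      loaded_at  TIMESTAMP NOT NULL
  );

  CREATE TABLE customer_history (
      customer_id INT NOT NULL,
      feed_id     INT NOT NULL REFERENCES feed (feed_id),
      name        VARCHAR(100),
      PRIMARY KEY (customer_id, feed_id)    -- one row version per feed
  );

  -- "What did customer 42 look like as of feed 7?"
  SELECT *
  FROM   customer_history
  WHERE  customer_id = 42 AND feed_id <= 7
  ORDER BY feed_id DESC
  LIMIT 1;                                  -- LIMIT syntax varies by DBMS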

Rent weekly cost database design

Posted: 23 Jul 2013 01:37 PM PDT

I have a database with a table BUILDING, each row holding details about one building. Another table, BUILDING_UNIT, contains rows with details about a single building unit and refers via a foreign key to the corresponding BUILDING.ID. BUILDING_UNIT also refers to a table CATEGORY, which tells whether the unit is of category A, B, C, or D, again with a foreign key pointing to CATEGORY.ID.

Now, the final cost of renting a building unit depends on its building, its category, the number of days it is rented, and the specific period of the year. We only rent them weekly, so I might as well use weeks only; however, I'd like it to be as flexible as possible in the future.

I cannot settle on a table design that represents this situation.

Do I have to use a table with coefficients for each day of the year, then a table with coefficients for A, B, C, D, then a table with coefficients for each building, and then somehow calculate a result?

Is there some standard and recognized implementation for problems of this type?

Thank you

EDIT: Note that the solution should abstract from the formula for calculating the cost, which might change in the future. However, I might be asked to make a specific week of the year for building unit X inside building Y cost $300, while the week after costs $600. Generally, building units inside the same building in the same week cost the same, but that might change in the future too, so I'd like to handle all the specific cases from the start.
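One design that keeps the formula out of the schema, sketched with hypothetical names: a weekly rate table where more specific rows (per unit) override more general ones (per building and category):

  CREATE TABLE WEEKLY_RATE (
      ID          INT PRIMARY KEY,
      BUILDING_ID INT NULL,      -- NULL = applies to any building
      CATEGORY_ID INT NULL,      -- NULL = applies to any category
      UNIT_ID     INT NULL,      -- set only for per-unit overrides
      WEEK_START  DATE NOT NULL,
      COST        DECIMAL(10,2) NOT NULL
  );

The lookup picks the most specific matching row for a given unit and week, which covers both the general case and the $300/$600 exceptions.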

How to add 'root' MySQL user back on MAMP?

Posted: 23 Jul 2013 12:37 PM PDT

In phpMyAdmin, I removed the 'root' user by mistake. I was also logged in as 'root'. How can I add the 'root' user back on MAMP?
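A commonly suggested recovery path (hedged; details vary by MAMP version, and MAMP's default root password is 'root'): stop MySQL, start mysqld manually with --skip-grant-tables, then recreate the account:

  -- With mysqld running under --skip-grant-tables:
  FLUSH PRIVILEGES;    -- load the grant tables so CREATE USER/GRANT work
  CREATE USER 'root'@'localhost' IDENTIFIED BY 'root';
  GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
  FLUSH PRIVILEGES;

Then restart MySQL normally.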

Escaping T-SQL Keywords

Posted: 23 Jul 2013 01:22 PM PDT

$sql = "SELECT Kill FROM tbl_pvporderview";  

The problem is that I end up with: Incorrect syntax near the keyword 'Kill'.

Because KILL is a T-SQL command... is there any way to bypass this?

I can't change the column name because it's used by the software a lot, and I can't change the software that's using the database.

So it simply fails if I use sqlserv to select data from that column; '' or "" won't help.

The complete statement would be:

$sql = "SELECT serial,Kill FROM tbl_pvporderview WHERE Kill > (?) ORDER BY Kill DESC ";  
