Optimizing complex join predicates Posted: 21 Aug 2013 09:00 PM PDT Table Definitions CREATE TABLE [dbo].[Pin_Mtg] ( [MtgId] bigint NULL, [CntyCd] char(5) NOT NULL, [BatchDt] int NOT NULL, [BatchSeq] int NOT NULL, [MtgSeq] tinyint NOT NULL, [PclId] varchar(45) NULL, [PclSeqNbr] tinyint NULL, [ChronoNbr] varchar(14) NOT NULL, [PrimaryCatcd] varchar(2) NOT NULL, [Deedcattyp] varchar(3) NOT NULL, [DocTyp] char(2) NULL, [SaleDt] int NULL, [RecordingDt] int NULL, [DocYyDt] smallint NULL, [DocNbr] varchar(12) NULL, [RecordingBook] varchar(6) NULL, [RecordingPage] varchar(6) NULL, [MtgDt] int NULL, [MtgrecordingDt] int NULL, [Mtgamt] numeric(13, 2) NULL, [MtgDocYyDt] smallint NULL, [MtgDocNbr] varchar(12) NULL, [MtgRecordingBook] varchar(6) NULL, [MtgRecordingPage] varchar(6) NULL) GO ALTER TABLE [dbo].[Pin_Mtg] ADD CONSTRAINT [PK__Pin_Mtg__0BB1A1BC0D1732A2] PRIMARY KEY CLUSTERED ([CntyCd], [BatchDt], [BatchSeq], [MtgSeq]) GO CREATE TABLE [dbo].[Pin_Mtg_Suppl] ( [Rowid] bigint IDENTITY NOT NULL, [MtgId] bigint NULL, [CntyCd] char(5) NOT NULL, [BatchDt] int NOT NULL, [BatchSeq] int NOT NULL, [MtgSeq] tinyint NOT NULL, [PclId] varchar(45) NULL, [PclSeqNbr] tinyint NULL, [PrimaryCatcd] varchar(2) NULL, [ChronoNbr] varchar(14) NOT NULL, [Deedcattyp] varchar(3) NOT NULL, [DocTyp] char(2) NULL, [SaleDt] int NULL, [RecordingDt] int NULL, [DocYyDt] smallint NULL, [DocNbr] varchar(12) NULL, [RecordingBook] varchar(6) NULL, [RecordingPage] varchar(6) NULL, [MtgDt] int NULL, [MtgrecordingDt] int NULL, [Mtgamt] numeric(13, 2) NULL, [OrigDocDt] int NULL, [OrigDocNbr] varchar(12) NULL, [OrigRecordingBook] varchar(6) NULL, [OrigRecordingPage] varchar(6) NULL, [MtgDocYyDt] smallint NULL, [MtgDocNbr] varchar(12) NULL, [MtgRecordingBook] varchar(6) NULL, [MtgRecordingPage] varchar(6) NULL) GO ALTER TABLE [dbo].[Pin_Mtg_Suppl] ADD CONSTRAINT [PK__Pin_Mtg___0BB1A1BCA10F7BC8] PRIMARY KEY CLUSTERED ([CntyCd], [BatchDt], [BatchSeq], [MtgSeq]) Query UPDATE s SET MtgId = m.MtgId FROM dbo.Pin_Mtg m INNER JOIN dbo.Pin_Mtg_Suppl s ON s.CntyCd = m.CntyCd WHERE s.Mtgid IS NULL --Assignments + Releases + Notice of Defaults AND (((s.PrimaryCatCd IN ('G', 'H', 'F') OR s.DocTyp IN ('CD', 'RD')) AND ((s.MtgRecordingBook <> '' AND s.MtgRecordingPage <> '' AND s.MtgRecordingBook = m.MtgRecordingBook AND s.MtgRecordingPage = m.MtgRecordingPage) OR (s.MtgDocYYDt <> 0 AND s.MtgDocNbr <> '' AND s.MtgDocYYDt = m.MtgDocYYDt AND s.MtgDocNbr = m.MtgDocNbr) OR (s.OrigDocDt <> 0 AND s.OrigDocNbr <> '' AND s.OrigDocDt / 10000 = m.MtgDocYYDt AND s.OrigDocNbr = m.mtgdocnbr) OR (s.OrigRecordingBook <> '' AND s.OrigRecordingPage <> '' AND s.OrigRecordingBook = m.MtgRecordingBook AND s.OrigRecordingPage = m.MtgRecordingPage))) --Mechanic liens OR (s.PrimaryCatCd = 'I' AND ((s.OrigDocDt <> 0 AND s.OrigDocNbr <> '' AND s.OrigDocDt / 10000 = m.DocYYDt AND s.OrigDocNbr = m.docnbr) OR (s.OrigRecordingBook <> '' AND s.OrigRecordingPage <> '' AND s.OrigRecordingBook = m.RecordingBook AND s.OrigRecordingPage = m.RecordingPage))) -- Modifications OR (s.DocTyp IN ('MO', 'CD', 'RD') AND ((s.OrigDocDt <> 0 AND s.OrigDocNbr <> '' AND s.OrigDocDt / 10000 = m.MtgDocYYDt AND s.OrigDocNbr = m.MtgDocNbr) OR (s.OrigRecordingBook <> '' AND s.OrigRecordingPage <> '' AND s.OrigRecordingBook = m.MtgRecordingBook AND s.OrigRecordingPage = m.MtgRecordingPage)))) OPTION (RECOMPILE) Execution Plan I have been thinking on how to efficiently perform this join. Should I add many filtered indexes for every predicate and split the query into many pieces or...? 
The data in production will have over 600 million records. We can change the whole table design (still working on how to do this efficiently) at this point. The table statistics and execution plan are available in a zip file (SkyDrive). |
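One way to test the "split it into many pieces" idea is sketched below for just the first predicate group (assignments/releases matched on recording book and page). The filtered index is an assumption on my part, not something from the schema above; each remaining OR branch would get its own UPDATE and a similarly narrow index, so every statement can seek instead of scanning 600 million rows.

    CREATE NONCLUSTERED INDEX IX_Suppl_BookPage
        ON dbo.Pin_Mtg_Suppl (CntyCd, MtgRecordingBook, MtgRecordingPage)
        INCLUDE (MtgId, PrimaryCatCd, DocTyp)
        WHERE MtgId IS NULL AND MtgRecordingBook <> '' AND MtgRecordingPage <> '';

    UPDATE s
    SET    MtgId = m.MtgId
    FROM   dbo.Pin_Mtg AS m
    JOIN   dbo.Pin_Mtg_Suppl AS s
           ON  s.CntyCd = m.CntyCd
           AND s.MtgRecordingBook = m.MtgRecordingBook
           AND s.MtgRecordingPage = m.MtgRecordingPage
    WHERE  s.MtgId IS NULL
      AND  s.MtgRecordingBook <> ''
      AND  s.MtgRecordingPage <> ''
      AND (s.PrimaryCatCd IN ('G', 'H', 'F') OR s.DocTyp IN ('CD', 'RD'));

A matching index on dbo.Pin_Mtg (CntyCd, MtgRecordingBook, MtgRecordingPage) INCLUDE (MtgId) would serve the other side of the join.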
AWE memory usage growing with SQL Server 2012 Posted: 21 Aug 2013 05:33 PM PDT I've got a problem with my SQL Server 2012 installation on my Windows 2008 R2 x64 box. (heads up: I have no idea what I'm talking about...I'm in over my head, since I have no real DBA experience/training) Because my memory seemed to be rapidly disappearing (according to the Task Manager graph), but I couldn't see any programs using it, I downloaded RAMMap to get a better picture of what's going on. Turns out all of my memory was being chewed up by AWE. My memory gets eaten up by SQL Server fairly quickly, up to the maximum that I allow the instance to have (currently 3GB). When I try to research this, however, everyone seems to say that AWE isn't enabled for Sql Server 2012 for 64 bit systems, since it isn't needed; the expanded memory that AWE would enable is directly accessible. But the system still will happily chew up memory in AWE until it reaches its limit. Not sure what other information would be useful to help diagnose this...or even if I'm totally misinterpreting something...I'm at the limits of my current knowledge about this now. Ask any questions in the comments, and I'll try to update the question. (Please don't assume a lot of knowledge on my part...while I am tech savvy in general, I am absolutely not a DBA by any stretch.) |
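A hedged starting point for checking this from inside SQL Server: RAMMap's "AWE" bucket normally corresponds to buffer pool pages the engine has locked in memory (the Lock Pages in Memory privilege), so growth up to the configured cap is usually expected behaviour rather than a leak. These queries show how much is locked and what the cap is:

    SELECT physical_memory_in_use_kb,
           locked_page_allocations_kb,        -- non-zero means Lock Pages in Memory is in effect
           memory_utilization_percentage
    FROM   sys.dm_os_process_memory;

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)';  -- the 3 GB cap mentioned above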
SSIS 2008 Connection Managers (Project Specific) and Corresponding Configurations Posted: 21 Aug 2013 12:26 PM PDT These are the requirements that I want to complete: a. I want the package to only use connection managers that are project/not package bound. b. I also want these connections to be bound to the ssis config so that I can migrate these packages and they will connect to the correct environment seamlessly c. I want to make sure this will work for pre-validation runs of the package so that it will validate its meta data against the db in the config rather than the hardcoded values in the package So, my questions/concerns are: How could I make my connection managers Project and NOT package specific? I want to take these project specific connection managers and relate them to SSIS configurations Is there a way to have both SSIS configuration (Config Tables) and store those values in an Environment Variable? Any help is well appreciated. I know project bound connection managers are available in SSIS 2012, but i'm looking for an equivalent method in SSIS 2008 |
MySQL Function Error Posted: 21 Aug 2013 08:54 PM PDT In an inherited database I have two tables of current interest, records and logins . login tells me the location of the user and where and when his login occured (in and out times included). The fun part is that there is no correlating mark between the login session and the applications ran. To solve the issue I'm creating a new table login_apps and am trying to use a function to combine the two tables. So far I have: drop function if exists ltop; delimiter $$ create function ltop(id int, u varchar(10), s datetime, e datetime) returns text language sql begin declare pid varchar(40); declare pname varchar(63); declare my_return varchar(100); declare cur1 cursor for (select usageProgramID from records where usageUser=u and usageWhen>=s and usageWhen<=e and usageProgramID!=""); open cur1; read_loop: loop fetch cur1 into pid; set pname = (select programName from Programs where programID=pid); insert into login_apps(`sid`, `programName`) values(id, pname); end loop read_loop; close cur1; return "finished"; end;$$ delimiter ; It enters into the system without issue and the insert is working as expected on a test. The issue I'm coming up with is that when it's run as: select ltop("265548", "user", "2013-02-21 13:54:27", "2013-02-21 14:32:18"); I get the error: ERROR 1329 (02000): No data - zero rows fetched, selected, or processed Shouldn't the data be returned be "finished"? Secondly, I'm not really sure what would be considered the best way to run this on every value of logins any pointers for going that route? |
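Error 1329 is raised because the cursor loop has no exit condition: once the last row has been fetched, the next FETCH raises the "no data" condition, and with no handler declared the function aborts with that error before it can return "finished". A minimal corrected sketch (only the done flag, the CONTINUE HANDLER, and the LEAVE are new; everything else is from the original):

    drop function if exists ltop;
    delimiter $$
    create function ltop(id int, u varchar(10), s datetime, e datetime)
    returns text
    language sql
    begin
        declare pid varchar(40);
        declare pname varchar(63);
        declare done int default 0;
        declare cur1 cursor for
            select usageProgramID from records
            where usageUser = u and usageWhen >= s and usageWhen <= e and usageProgramID != "";
        declare continue handler for not found set done = 1;  -- fires when the cursor is exhausted

        open cur1;
        read_loop: loop
            fetch cur1 into pid;
            if done = 1 then
                leave read_loop;   -- exit cleanly instead of raising error 1329
            end if;
            set pname = (select programName from Programs where programID = pid);
            insert into login_apps(`sid`, `programName`) values(id, pname);
        end loop read_loop;
        close cur1;
        return "finished";
    end;$$
    delimiter ;

For running this across every row of logins, a set-based INSERT ... SELECT that joins logins, records, and Programs on user and time range would likely be simpler and faster than calling the function once per login row, though that depends on details of the logins table not shown here.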
One statement is not update in the commit of mysql Posted: 21 Aug 2013 10:22 AM PDT We have been using lots of transaction based statement. Suddenly while doing some audit on the data we found something funny out of many query one of it is not updated accordingly. Below is the query log. The one does not show up the results is 130116 18:07:32 276 Query Update tblProduct Set tblProduct.productTotalStock = tblProduct.productTotalStock -7, tblProduct.productBooked = tblProduct.productBooked -7, tblProduct.productTotalAmount = tblProduct.productTotalAmount -44.03 Where tblProduct.productID=7921 279 Query Select tblID.lastValue From tblID Where tblID.tableName='tblTransfer' 279 Quit 276 Query Update tblID Set tblID.lastValue = lastValue + 1 Where tblID.tableName='tblTransfer' 276 Query Select tblID.lastValue From tblID Where tblID.tableName='tblTransferDetails' 276 Query Update tblID Set tblID.lastValue = lastValue + 1 Where tblID.tableName='tblTransferDetails' 276 Query Select tblProduct.productTotalStock, tblProduct.productTotalAmount, tblProduct.productBooked, tblProduct.productPrice From tblProduct Where tblProduct.productID=6495 276 Query Select tblStock.stockID, tblStock.stockBalance, tblStock.stockBooked From tblStock Where tblStock.stockBalance > 0 And tblStock.productID=6495 Order By tblStock.stockID 276 Query Update tblStock Set tblStock.stockBooked = tblStock.stockBooked +1, tblStock.stockBalance = tblStock.stockBalance -1 Where tblStock.stockID=19058 276 Query Update tblStock Set tblStock.stockBooked = tblStock.stockBooked +7, tblStock.stockBalance = tblStock.stockBalance -7 Where tblStock.stockID=19275 276 Query Insert into tblTransferDetails Set transferDetailsID = 9299, transferID=3491, outletID = 12, stockID = 19058, productID= 6495, productType = 'Accessory', stockQuantity = 8, stockSIQ = '8', costPrice = 16.16, transferPrice = 16.16 130116 18:07:26 276 Query Update tblProduct Set tblProduct.productTotalStock = tblProduct.productTotalStock -8, tblProduct.productBooked = tblProduct.productBooked -8, tblProduct.productTotalAmount = tblProduct.productTotalAmount -129.28 Where tblProduct.productID=6495 276 Query Select tblProduct.productTotalStock, tblProduct.productTotalAmount, tblProduct.productBooked, tblProduct.productPrice From tblProduct Where tblProduct.productID=6495 276 Query Insert into tblTransaction Set transactionID = 3491, transactionDetailsID=9299, stockID = 19058, productID= 6495, outletFromID = 12, outletToID = 4, totalAmountBefore = 695.01, totalAmountAfter = 565.73, totalQuantityBefore = 43, totalQuantityAfter = 35, averageCostBefore = 16.16, averageCostAfter = 16.16, quantity = 8, costPrice = 16.16, transactionPrice = 16.16, transactionEmployeeID = 68, transactionDate='2013:01:16', transactionTime='18:07:26', transactionType = 'to' 276 Query Select tblID.lastValue From tblID Where tblID.tableName='tblTransferDetails' 276 Query Update tblID Set tblID.lastValue = lastValue + 1 Where tblID.tableName='tblTransferDetails' 276 Query Select tblProduct.productTotalStock, tblProduct.productTotalAmount, tblProduct.productBooked, tblProduct.productPrice From tblProduct Where tblProduct.productID=7901 276 Query Select tblStock.stockID, tblStock.stockBalance, tblStock.stockBooked From tblStock Where tblStock.stockBalance > 0 And tblStock.productID=7901 Order By tblStock.stockID 276 Query Update tblStock Set tblStock.stockBooked = tblStock.stockBooked +4, tblStock.stockBalance = tblStock.stockBalance -4 Where tblStock.stockID=19495 276 Query Insert into tblTransferDetails Set 
transferDetailsID = 9300, transferID=3491, outletID = 12, stockID = 19495, productID= 7901, productType = 'Accessory', stockQuantity = 4, stockSIQ = '4', costPrice = 21, transferPrice = 21 276 Query Update tblProduct Set tblProduct.productTotalStock = tblProduct.productTotalStock -4, tblProduct.productBooked = tblProduct.productBooked -4, tblProduct.productTotalAmount = tblProduct.productTotalAmount -84 Where tblProduct.productID=7901 276 Query Select tblProduct.productTotalStock, tblProduct.productTotalAmount, tblProduct.productBooked, tblProduct.productPrice From tblProduct Where tblProduct.productID=7901 276 Query Insert into tblTransaction Set transactionID = 3491, transactionDetailsID=9300, stockID = 19495, productID= 7901, outletFromID = 12, outletToID = 4, totalAmountBefore = 168, totalAmountAfter = 84, totalQuantityBefore = 8, totalQuantityAfter = 4, averageCostBefore = 21, averageCostAfter = 21, quantity = 4, costPrice = 21, transactionPrice = 21, transactionEmployeeID = 68, transactionDate='2013:01:16', transactionTime='18:07:26', transactionType = 'to' 276 Query Select tblID.lastValue From tblID Where tblID.tableName='tblTransferDetails' 276 Query Update tblID Set tblID.lastValue = lastValue + 1 Where tblID.tableName='tblTransferDetails' 276 Query Select tblProduct.productTotalStock, tblProduct.productTotalAmount, tblProduct.productBooked, tblProduct.productPrice From tblProduct Where tblProduct.productID=6498 276 Query Select tblStock.stockID, tblStock.stockBalance, tblStock.stockBooked From tblStock Where tblStock.stockBalance > 0 And tblStock.productID=6498 Order By tblStock.stockID 276 Query Update tblStock Set tblStock.stockBooked = tblStock.stockBooked +6, tblStock.stockBalance = tblStock.stockBalance -6 Where tblStock.stockID=19297 276 Query Update tblStock Set tblStock.stockBooked = tblStock.stockBooked +2, tblStock.stockBalance = tblStock.stockBalance -2 Where tblStock.stockID=19494 276 Query Insert into tblTransferDetails Set transferDetailsID = 9301, transferID=3491, outletID = 12, stockID = 19297, productID= 6498, productType = 'Accessory', stockQuantity = 8, stockSIQ = '8', costPrice = 7.56, transferPrice = 7.56 276 Query Update tblProduct Set tblProduct.productTotalStock = tblProduct.productTotalStock -8, tblProduct.productBooked = tblProduct.productBooked -8, tblProduct.productTotalAmount = tblProduct.productTotalAmount -60.48 Where tblProduct.productID=6498 276 Query Select tblProduct.productTotalStock, tblProduct.productTotalAmount, tblProduct.productBooked, tblProduct.productPrice From tblProduct Where tblProduct.productID=6498 276 Query Insert into tblTransaction Set transactionID = 3491, transactionDetailsID=9301, stockID = 19297, productID= 6498, outletFromID = 12, outletToID = 4, totalAmountBefore = 377.97, totalAmountAfter = 317.49, totalQuantityBefore = 50, totalQuantityAfter = 42, averageCostBefore = 7.56, averageCostAfter = 7.56, quantity = 8, costPrice = 7.56, transactionPrice = 7.56, transactionEmployeeID = 68, transactionDate='2013:01:16', transactionTime='18:07:26', transactionType = 'to' 276 Query Select tblID.lastValue From tblID Where tblID.tableName='tblTransferDetails' 276 Query Update tblID Set tblID.lastValue = lastValue + 1 Where tblID.tableName='tblTransferDetails' 276 Query Select tblProduct.productTotalStock, tblProduct.productTotalAmount, tblProduct.productBooked, tblProduct.productPrice From tblProduct Where tblProduct.productID=7791 276 Query Select tblStock.stockID, tblStock.stockBalance, tblStock.stockBooked From tblStock Where 
tblStock.stockBalance > 0 And tblStock.productID=7791 Order By tblStock.stockID 276 Query Update tblStock Set tblStock.stockBooked = tblStock.stockBooked +4, tblStock.stockBalance = tblStock.stockBalance -4 Where tblStock.stockID=19138 276 Query Insert into tblTransferDetails Set transferDetailsID = 9302, transferID=3491, outletID = 12, stockID = 19138, productID= 7791, productType = 'Accessory', stockQuantity = 4, stockSIQ = '4', costPrice = 9.13, transferPrice = 9.13 130116 18:07:27 276 Query Update tblProduct Set tblProduct.productTotalStock = tblProduct.productTotalStock -4, tblProduct.productBooked = tblProduct.productBooked -4, tblProduct.productTotalAmount = tblProduct.productTotalAmount -36.52 Where tblProduct.productID=7791 276 Query Select tblProduct.productTotalStock, tblProduct.productTotalAmount, tblProduct.productBooked, tblProduct.productPrice From tblProduct Where tblProduct.productID=7791 276 Query Insert into tblTransaction Set transactionID = 3491, transactionDetailsID=9302, stockID = 19138, productID= 7791, outletFromID = 12, outletToID = 4, totalAmountBefore = 301.28, totalAmountAfter = 264.76, totalQuantityBefore = 33, totalQuantityAfter = 29, averageCostBefore = 9.13, averageCostAfter = 9.13, quantity = 4, costPrice = 9.13, transactionPrice = 9.13, transactionEmployeeID = 68, transactionDate='2013:01:16', transactionTime='18:07:27', transactionType = 'to' 276 Query Select tblID.lastValue From tblID Where tblID.tableName='tblTransferDetails' 276 Query Update tblID Set tblID.lastValue = lastValue + 1 Where tblID.tableName='tblTransferDetails' 276 Query Select tblProduct.productTotalStock, tblProduct.productTotalAmount, tblProduct.productBooked, tblProduct.productPrice From tblProduct Where tblProduct.productID=7923 276 Query Select tblStock.stockID, tblStock.stockBalance, tblStock.stockBooked From tblStock Where tblStock.stockBalance > 0 And tblStock.productID=7923 Order By tblStock.stockID 276 Query Update tblStock Set tblStock.stockBooked = tblStock.stockBooked +4, tblStock.stockBalance = tblStock.stockBalance -4 Where tblStock.stockID=19504 276 Query Insert into tblTransferDetails Set transferDetailsID = 9303, transferID=3491, outletID = 12, stockID = 19504, productID= 7923, productType = 'Accessory', stockQuantity = 4, stockSIQ = '4', costPrice = 7, transferPrice = 7 276 Query Update tblProduct Set tblProduct.productTotalStock = tblProduct.productTotalStock -4, tblProduct.productBooked = tblProduct.productBooked -4, tblProduct.productTotalAmount = tblProduct.productTotalAmount -28 Where tblProduct.productID=7923 276 Query Select tblProduct.productTotalStock, tblProduct.productTotalAmount, tblProduct.productBooked, tblProduct.productPrice From tblProduct Where tblProduct.productID=7923 276 Query Insert into tblTransaction Set transactionID = 3491, transactionDetailsID=9303, stockID = 19504, productID= 7923, outletFromID = 12, outletToID = 4, totalAmountBefore = 77, totalAmountAfter = 49, totalQuantityBefore = 11, totalQuantityAfter = 7, averageCostBefore = 7, averageCostAfter = 7, quantity = 4, costPrice = 7, transactionPrice = 7, transactionEmployeeID = 68, transactionDate='2013:01:16', transactionTime='18:07:27', transactionType = 'to' 276 Query Select tblID.lastValue From tblID Where tblID.tableName='tblTransferDetails' 276 Query Update tblID Set tblID.lastValue = lastValue + 1 Where tblID.tableName='tblTransferDetails' 276 Query Select tblProduct.productTotalStock, tblProduct.productTotalAmount, tblProduct.productBooked, tblProduct.productPrice From tblProduct 
Where tblProduct.productID=7922 276 Query Select tblStock.stockID, tblStock.stockBalance, tblStock.stockBooked From tblStock Where tblStock.stockBalance > 0 And tblStock.productID=7922 Order By tblStock.stockID 276 Query Update tblStock Set tblStock.stockBooked = tblStock.stockBooked +6, tblStock.stockBalance = tblStock.stockBalance -6 Where tblStock.stockID=19506 276 Query Insert into tblTransferDetails Set transferDetailsID = 9304, transferID=3491, outletID = 12, stockID = 19506, productID= 7922, productType = 'Accessory', stockQuantity = 6, stockSIQ = '6', costPrice = 5, transferPrice = 5 130116 18:07:28 276 Query Update tblProduct Set tblProduct.productTotalStock = tblProduct.productTotalStock -6, tblProduct.productBooked = tblProduct.productBooked -6, tblProduct.productTotalAmount = tblProduct.productTotalAmount -30 Where tblProduct.productID=7922 276 Query Select tblProduct.productTotalStock, tblProduct.productTotalAmount, tblProduct.productBooked, tblProduct.productPrice From tblProduct Where tblProduct.productID=7922 276 Query Insert into tblTransaction Set transactionID = 3491, transactionDetailsID=9304, stockID = 19506, productID= 7922, outletFromID = 12, outletToID = 4, totalAmountBefore = 60, totalAmountAfter = 30, totalQuantityBefore = 12, totalQuantityAfter = 6, averageCostBefore = 5, averageCostAfter = 5, quantity = 6, costPrice = 5, transactionPrice = 5, transactionEmployeeID = 68, transactionDate='2013:01:16', transactionTime='18:07:28', transactionType = 'to' 276 Query Select tblID.lastValue From tblID Where tblID.tableName='tblTransferDetails' 276 Query Update tblID Set tblID.lastValue = lastValue + 1 Where tblID.tableName='tblTransferDetails' 276 Query Select tblProduct.productTotalStock, tblProduct.productTotalAmount, tblProduct.productBooked, tblProduct.productPrice From tblProduct Where tblProduct.productID=7921 276 Query Select tblStock.stockID, tblStock.stockBalance, tblStock.stockBooked From tblStock Where tblStock.stockBalance > 0 And tblStock.productID=7921 Order By tblStock.stockID 276 Query Update tblStock Set tblStock.stockBooked = tblStock.stockBooked +7, tblStock.stockBalance = tblStock.stockBalance -7 Where tblStock.stockID=19505 276 Query Insert into tblTransferDetails Set transferDetailsID = 9305, transferID=3491, outletID = 12, stockID = 19505, productID= 7921, productType = 'Accessory', stockQuantity = 7, stockSIQ = '7', costPrice = 6.29, transferPrice = 6.29 130116 18:07:29 276 Query Update tblProduct Set tblProduct.productTotalStock = tblProduct.productTotalStock -7, tblProduct.productBooked = tblProduct.productBooked -7, tblProduct.productTotalAmount = tblProduct.productTotalAmount -44.03 Where tblProduct.productID=7921 276 Query Select tblProduct.productTotalStock, tblProduct.productTotalAmount, tblProduct.productBooked, tblProduct.productPrice From tblProduct Where tblProduct.productID=7921 276 Query Insert into tblTransaction Set transactionID = 3491, transactionDetailsID=9305, stockID = 19505, productID= 7921, outletFromID = 12, outletToID = 4, totalAmountBefore = 176.12, totalAmountAfter = 132.09, totalQuantityBefore = 28, totalQuantityAfter = 21, averageCostBefore = 6.29, averageCostAfter = 6.29, quantity = 7, costPrice = 6.29, transactionPrice = 6.29, transactionEmployeeID = 68, transactionDate='2013:01:16', transactionTime='18:07:29', transactionType = 'to' 276 Query Select tblID.lastValue From tblID Where tblID.tableName='tblTransferDetails' 276 Query Update tblID Set tblID.lastValue = lastValue + 1 Where 
tblID.tableName='tblTransferDetails' 276 Query Select tblProduct.productTotalStock, tblProduct.productTotalAmount, tblProduct.productBooked, tblProduct.productPrice From tblProduct Where tblProduct.productID=7926 276 Query Select tblStock.stockID, tblStock.stockBalance, tblStock.stockBooked From tblStock Where tblStock.stockBalance > 0 And tblStock.productID=7926 Order By tblStock.stockID 276 Query Update tblStock Set tblStock.stockBooked = tblStock.stockBooked +5, tblStock.stockBalance = tblStock.stockBalance -5 Where tblStock.stockID=19503 276 Query Insert into tblTransferDetails Set transferDetailsID = 9306, transferID=3491, outletID = 12, stockID = 19503, productID= 7926, productType = 'Accessory', stockQuantity = 5, stockSIQ = '5', costPrice = 9, transferPrice = 9 130116 18:07:30 276 Query Update tblProduct Set tblProduct.productTotalStock = tblProduct.productTotalStock -5, tblProduct.productBooked = tblProduct.productBooked -5, tblProduct.productTotalAmount = tblProduct.productTotalAmount -45 Where tblProduct.productID=7926 276 Query Select tblProduct.productTotalStock, tblProduct.productTotalAmount, tblProduct.productBooked, tblProduct.productPrice From tblProduct Where tblProduct.productID=7926 130116 18:07:31 276 Query Insert into tblTransaction Set transactionID = 3491, transactionDetailsID=9306, stockID = 19503, productID= 7926, outletFromID = 12, outletToID = 4, totalAmountBefore = 135, totalAmountAfter = 90, totalQuantityBefore = 15, totalQuantityAfter = 10, averageCostBefore = 9, averageCostAfter = 9, quantity = 5, costPrice = 9, transactionPrice = 9, transactionEmployeeID = 68, transactionDate='2013:01:16', transactionTime='18:07:31', transactionType = 'to' 276 Query Select tblID.lastValue From tblID Where tblID.tableName='tblTransferDetails' 276 Query Update tblID Set tblID.lastValue = lastValue + 1 Where tblID.tableName='tblTransferDetails' 276 Query Select tblProduct.productTotalStock, tblProduct.productTotalAmount, tblProduct.productBooked, tblProduct.productPrice From tblProduct Where tblProduct.productID=7921 276 Query Select tblStock.stockID, tblStock.stockBalance, tblStock.stockBooked From tblStock Where tblStock.stockBalance > 0 And tblStock.productID=7921 Order By tblStock.stockID 276 Query Update tblStock Set tblStock.stockBooked = tblStock.stockBooked +7, tblStock.stockBalance = tblStock.stockBalance -7 Where tblStock.stockID=19505 276 Query Insert into tblTransferDetails Set transferDetailsID = 9307, transferID=3491, outletID = 12, stockID = 19505, productID= 7921, productType = 'Accessory', stockQuantity = 7, stockSIQ = '7', costPrice = 6.29, transferPrice = 6.29 130116 18:07:32 276 Query Update tblProduct Set tblProduct.productTotalStock = tblProduct.productTotalStock -7, tblProduct.productBooked = tblProduct.productBooked -7, tblProduct.productTotalAmount = tblProduct.productTotalAmount -44.03 Where tblProduct.productID=7921 276 Query Select tblProduct.productTotalStock, tblProduct.productTotalAmount, tblProduct.productBooked, tblProduct.productPrice From tblProduct Where tblProduct.productID=7921 276 Query Insert into tblTransaction Set transactionID = 3491, transactionDetailsID=9307, stockID = 19505, productID= 7921, outletFromID = 12, outletToID = 4, totalAmountBefore = 132.09, totalAmountAfter = 88.06, totalQuantityBefore = 21, totalQuantityAfter = 14, averageCostBefore = 6.29, averageCostAfter = 6.29, quantity = 7, costPrice = 6.29, transactionPrice = 6.29, transactionEmployeeID = 68, transactionDate='2013:01:16', transactionTime='18:07:32', 
transactionType = 'to' 276 Query Delete From tblTempTransfer 276 Query COMMIT 276 Quit |
How can you tell when query plan reuse is causing performance problems? Posted: 21 Aug 2013 10:33 AM PDT When investigating a performance issue, a workmate suggested running DBCC FREEPROCCACHE to clear the plan cache. He came to this conclusion after noticing that performance improved after a reboot. This feels like a rather quick and dirty approach - is there a more conclusive, definitive way of ascertaining whether clearing the plan cache is the best approach? Making a mistake with this decision could itself cause a performance issue. Apologies if this question is too vague :) Thanks! |
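A more targeted approach than flushing the whole cache: inspect the cached plans first, and if one plan is the problem, evict just that plan by its handle.

    SELECT TOP (20)
           qs.plan_handle,
           qs.execution_count,
           qs.total_worker_time  / qs.execution_count AS avg_cpu,
           qs.total_elapsed_time / qs.execution_count AS avg_duration,
           SUBSTRING(st.text, 1, 200)                 AS query_start
    FROM   sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY avg_duration DESC;

    -- Evict a single suspect plan (paste the plan_handle from the result above)
    -- instead of clearing the entire cache:
    -- DBCC FREEPROCCACHE (0x06000500...);   -- handle value is a placeholder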
Is it necessary to drop foreign keys before adding a null column to a table and then re-add them? Posted: 21 Aug 2013 07:46 AM PDT I have generated some SQL Server (2008) upgrade scripts using SqlCompare. The primary objective is to add a nullable column. However, the generated script wants to drop foreign keys in a couple of other tables, and the primary key of the target table, before adding the column. It then re-adds them. Is this necessary? What is the reasoning behind doing it? |
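For reference, adding a plain nullable column on its own needs neither the primary key nor the referencing foreign keys to be touched; in SQL Server it is a metadata-only change. The table and column names below are placeholders:

    ALTER TABLE dbo.SomeTable
        ADD NewColumn int NULL;

So the drop/re-create steps are a choice made by the comparison tool (often to rebuild the table or preserve column order), not something SQL Server itself requires.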
'Cleanse' a SQL Server database file created with Instant File Initialization enabled? Posted: 21 Aug 2013 01:55 PM PDT I have Instant File Initialization enabled on our SQL Server, so the 'empty' part of the database file is not zeroed before being allocated. My understanding is that this means that the file could contain 'deleted' data. So now I want to send a copy of a database (probably a backup file) outside of the company. But there's all that potentially sensitive 'deleted' data sitting around inside the file. Now I would like to zero the unused portion of the file. Is that possible? I imagine I could create a new database and copy everything over, or perhaps restore a copy of the database to another server without Instant File Initialization enabled and then be aggressive with a ShrinkFile command to remove most or all of the unused portion of the database file, but is there a less manual and time consuming method? Ideally a command to tell SQL to zero the file as it would have done if Instant File Initialization was not enabled. |
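One hedged, less manual option to look at: SQL Server 2008 ships sp_clean_db_free_space, which scrubs residual row data ("ghost" remnants) from free space on allocated pages. Whether it also covers regions claimed via instant file initialization but never written is worth verifying against a test restore before trusting it for this purpose:

    EXEC sys.sp_clean_db_free_space @dbname = N'YourDatabase';   -- database name is a placeholder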
DBCC CHECKDB ERROR Msg 8992 Posted: 21 Aug 2013 07:07 AM PDT I got the following error when I ran DBCC CHECKDB against the database. Msg 8992, Level 16, State 1, Line 2 Check Catalog Msg 3853, State 1: Attribute (referenced_major_id=645577338,referenced_minor_id=7) of row (class=0,object_id=821577965,column_id=0,referenced_major_id=645577338,referenced_minor_id=7) in sys.sql_dependencies does not have a matching row (object_id=645577338,column_id=7) in sys.columns. Msg 8992, Level 16, State 1, Line 2 Check Catalog Msg 3853, State 1: Attribute (referenced_major_id=757577737,referenced_minor_id=7) of row (class=0,object_id=821577965,column_id=0,referenced_major_id=757577737,referenced_minor_id=7) in sys.sql_dependencies does not have a matching row (object_id=757577737,column_id=7) in sys.columns. What steps can be taken to resolve the issue without affecting the users? |
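The stale rows live in sys.sql_dependencies, and the usual first step is to work out which object holds them and then refresh or recreate it so the dependency metadata is rebuilt. A hedged sketch using the IDs from the messages above:

    SELECT OBJECT_NAME(821577965) AS referencing_object,   -- the object whose dependency rows are stale
           OBJECT_NAME(645577338) AS referenced_object_1,
           OBJECT_NAME(757577737) AS referenced_object_2;

    -- If the referencing object turns out to be a view, procedure, or function:
    -- EXEC sys.sp_refreshsqlmodule N'dbo.TheReferencingObject';  -- name is a placeholder

Refreshing or re-creating the module is an online operation, so it should not affect users beyond a brief recompile.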
Database size - MDF too large? Posted: 21 Aug 2013 07:35 AM PDT I'm maintaining a SQL Server 2005 database which hosts approximately 2.9 TB of data (2 x 1.45 TB - I have a RAW schema and an ANALYSIS schema, so basically two copies of the ingested data). The recovery model is SIMPLE and the .ldf is at 6 GB. For whatever reason, the .mdf is 7.5 TB. There are only maybe 2-3 additional columns in the ANALYSIS tables and not many NVARCHAR(MAX) columns, which (as far as I understand - please correct me if I'm wrong) could be causing additional space allocation. That's after shrinking the database just now - it was at ~9 TB prior to that. Any thoughts? And please let me know if you have additional questions - I'm very new to database administration and optimization efforts (I usually don't do this side of the job :)). Many thanks! Andrija |
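Before shrinking further, it helps to see how much of the 7.5 TB file is actually allocated versus reserved but unused:

    SELECT name,
           size / 128                             AS file_size_mb,
           FILEPROPERTY(name, 'SpaceUsed') / 128  AS space_used_mb
    FROM   sys.database_files;

    EXEC sp_spaceused;   -- database-level reserved vs. unallocated totals

If space used is far below the file size, the extra is simply unallocated free space inside the file; if it is close, the space is tied up in the tables and indexes themselves (low fill factors, heaps with forwarded records, or leftover space from large deletes are worth checking next).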
Use surrogate or natural PK if a row is uniquely identified by two or more columns? Posted: 21 Aug 2013 12:31 PM PDT I am having a hard time choosing between natural and surrogate PK for my database table. The database is for a ranking system for an RTS game. Here is more or less what I would do if I were to go the natural route. Player table - PlayerID Int AI NN UQ - PlayerName Varchar PK - ServerName Varchar PK - Registered date, level, hero etc misc columns Rank table - PlayerName FK references player - ServerName FK references player - RankID Int AI NN UQ - PlayerRank Int NN UQ The thing is, each row in the player table is uniquely identified by the pair of PlayerName and ServerName. I thought using a surrogate key in this case is not really appropriate but I would like to hear suggestions on this. |
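For comparison, a hedged sketch of the surrogate-key variant (column sizes are guesses): PlayerID becomes the primary key that Rank references, while the natural pair stays enforced through a unique constraint, so no integrity is lost and the foreign key shrinks to a single integer.

    CREATE TABLE Player (
        PlayerID    INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        PlayerName  VARCHAR(50) NOT NULL,
        ServerName  VARCHAR(50) NOT NULL,
        Registered  DATE,
        UNIQUE KEY uq_player (PlayerName, ServerName)   -- natural key still enforced
    );

    CREATE TABLE `Rank` (
        RankID      INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        PlayerID    INT NOT NULL,
        PlayerRank  INT NOT NULL UNIQUE,
        FOREIGN KEY (PlayerID) REFERENCES Player (PlayerID)
    );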
View with fallback (performance/optimization question) Posted: 21 Aug 2013 09:13 AM PDT I have a table with tariffs for stuff; the tariffs table is not important in this scenario, the "tariff values" are. In this Demonstration SQL Fiddle the tariff_plan is the FK to the tariffplans table (not included in the example). The tariff for each "thing" is the tariff_type (simplified to a simple char for demonstration purposes). I have, for example, a default tariffplan (key = default ); this is the tariff that goes for each customer unless another value is defined for the same tariff_type for that customer. A customer is assigned a tariffplan (key = plan_x in my example). If have tariffs defined for items a , b , c and d in the default plan. In plan_x I define "override" values for a and c . So, what I do is I select the default plan (alias p below for primary) and left-join the "override" plan (plan_x ) to it (alias s below for secondary): select * from tariff_values as p left outer join tariff_values s on (p.tariff_type = s.tariff_type) and (s.tariff_plan = 'plan_x') where (p.tariff_plan = 'default') This results, as expected, in: id tariff_plan tariff_type tariff_foo tariff_bar id tariff_plan tariff_type tariff_foo tariff_bar ---- ----------- ----------- ---------- ---------- ---- ----------- ----------- ---------- ---------- 1 default a 0.10 0.20 5 plan_x a 0.09 0.19 2 default b 0.55 0.66 NULL NULL NULL NULL NULL 3 default c 1.99 2.99 6 plan_x c 0.99 1.99 4 default d 9.99 6.33 NULL NULL NULL NULL NULL Because I want to abstract this away I want to put this into a table valued function so I can create a "dynamic view": select * from dbo.get_tariffplan_for('plan_x', default); This should result in a "virtual table" (or "dynamic view") similar to the tariff_values table, thus: not having two tariff_foo's and two tariff_bar's and let the application decide which one to use. And so, I resort to ISNULL and Case when... constructs to "override" the default values: select p.tariff_type, ISNULL(s.tariff_foo, p.tariff_foo) as tariff_foo, ISNULL(s.tariff_bar, p.tariff_bar) as tariff_bar, ISNULL(s.tariff_plan, p.tariff_plan) as tariff_plan, CASE WHEN s.id IS NULL THEN 1 ELSE 0 END as isfallback from tariff_values as p left outer join tariff_values s on (p.tariff_type = s.tariff_type) and (s.tariff_plan = 'plan_x') where (p.tariff_plan = 'default') This results in: tariff_type tariff_foo tariff_bar tariff_plan isfallback ----------- ---------- ---------- ----------- ----------- a 0.09 0.19 plan_x 0 b 0.55 0.66 default 1 c 0.99 1.99 plan_x 0 d 9.99 6.33 default 1 All I need to do now is stuff this query into a TVF: CREATE FUNCTION get_tariffplan_for ( @customerplan as varchar(50), @defaultplan as varchar(50) = 'default' ) RETURNS TABLE AS RETURN ( select p.tariff_type, ISNULL(s.tariff_foo, p.tariff_foo) as tariff_foo, ISNULL(s.tariff_bar, p.tariff_bar) as tariff_bar, ISNULL(s.tariff_plan, p.tariff_plan) as tariff_plan, CASE WHEN s.id IS NULL THEN 1 ELSE 0 END as isfallback from tariff_values as p left outer join tariff_values s on (p.tariff_type = s.tariff_type) and (s.tariff_plan = @customerplan) where (p.tariff_plan = @defaultplan) ); And there we have it. We can call our function ("dynamic view") as intended (and also use it in selects/joins etc.) 
select * from dbo.get_tariffplan_for('plan_x', default); --or: select * from foo inner join dbo.get_tariffplan_for('plan_x', default) bar on foo.tariff_type = bar.tariff_type Now my first question is: I have a feeling all these ISNULL (or COALESCE ) and/or CASE WHEN ... stunts seem to complicate things unnecessarily and something tells me this can be done more efficiently. However, I can't come up with a better and/or more efficient alternative. So I'm hoping someone here has some ideas on how to improve this. My second question is: What if I had a product (tariff_type q for example) that I sold exclusively to some customer; the tariff wouldn't be in the default tariff-plan so I'd have to add another select to the above (with a union) to get all exclusive tariffs for that customer in the resultset. That would result in a query like this: select p.tariff_type, ISNULL(s.tariff_foo, p.tariff_foo) as tariff_foo, ISNULL(s.tariff_bar, p.tariff_bar) as tariff_bar, ISNULL(s.tariff_plan, p.tariff_plan) as tariff_plan, CASE WHEN s.id IS NULL THEN 1 ELSE 0 END as isfallback, 0 as isexclusive from tariff_values as p left outer join tariff_values s on (p.tariff_type = s.tariff_type) and (s.tariff_plan = @customerplan) where (p.tariff_plan = @defaultplan) UNION --Exclusive values select p.tariff_type, p.tariff_foo, p.tariff_bar, p.tariff_plan, 0 as isfallback, 1 as isexclusive from tariff_values p left outer join tariff_values s on (p.tariff_type = s.tariff_type) AND (s.tariff_plan = 'default') where p.tariff_plan = 'plan_x' and s.id is null (Demonstrated in this SQL fiddle) In the above example I use another left join with s.id is null (but that could be rewritten in other ways (using count , exists , not in , having ...whatever)) to retrieve the customer-exclusive tariffs. But maybe there's a better way instead of the union? |
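On the first question, one hedged alternative that avoids both the ISNULL ladder and the UNION: pull the rows for both plans, rank them per tariff_type so the customer plan wins over the default, and keep only the winner. It assumes at most one row per (tariff_plan, tariff_type), which the data above appears to satisfy, and it naturally includes customer-exclusive tariff types as well.

    SELECT tariff_type, tariff_foo, tariff_bar, tariff_plan,
           CASE WHEN tariff_plan = @defaultplan THEN 1 ELSE 0 END AS isfallback
    FROM (
        SELECT tv.*,
               ROW_NUMBER() OVER (
                   PARTITION BY tv.tariff_type
                   ORDER BY CASE WHEN tv.tariff_plan = @customerplan THEN 0 ELSE 1 END
               ) AS rn
        FROM tariff_values AS tv
        WHERE tv.tariff_plan IN (@customerplan, @defaultplan)
    ) AS ranked
    WHERE rn = 1;

Whether this performs better than the LEFT JOIN version depends on the indexes on tariff_values, so it is worth comparing execution plans rather than assuming.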
mongo replication not happening Posted: 21 Aug 2013 07:26 AM PDT I setup master/slave replication of mongodb on EC2. But I see not replication happening. When I do "show dbs" on master, it shows all dbs expected. But when I do the same on replica, it does not show me any db. Please help me troubleshoot. rs.config() { "_id" : "ittw", "version" : 1, "members" : [ { "_id" : 0, "host" : "ip-10-304-48-93:27017" } ] } rs.config() { "_id" : "ittw", "version" : 2, "members" : [ { "_id" : 0, "host" : "domU-17-31-19-16-88-5F:27017" }, { "_id" : 1, "host" : "ec2-50-321-52-908.compute-1.amazonaws.com:27017" } ] } rs.status() // replica { "set" : "ittw", "date" : ISODate("2013-08-12T06:55:57Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "$ip:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 356039, "optime" : Timestamp(1375934685, 1), "optimeDate" : ISODate("2013-08-08T04:04:45Z"), "self" : true } ], "ok" : 1 } rs.status() //Master { "set" : "ittw", "date" : ISODate("2013-08-12T06:57:19Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "PRI_IP:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 356543, "optime" : Timestamp(1376289725, 1), "optimeDate" : ISODate("2013-08-12T06:42:05Z"), "self" : true }, { "_id" : 1, "name" : "REP_IP:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 355869, "optime" : Timestamp(1375934685, 1), "optimeDate" : ISODate("2013-08-08T04:04:45Z"), "lastHeartbeat" : ISODate("2013-08-12T06:57:17Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 1 } ], "ok" : 1 } |
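Both rs.status() outputs show a member in state PRIMARY with its own one-sided view of the set, which usually means each mongod was initiated as its own replica set instead of the second node joining the first. A hedged way to rebuild the pairing (the hostname is copied from the output above):

    // On the node that should become the secondary: stop mongod, remove (or move
    // aside) its data files including the "local" database, restart it with the
    // same --replSet name, then on the intended primary:
    rs.add("ec2-50-321-52-908.compute-1.amazonaws.com:27017")
    rs.status()   // the new member should pass through STARTUP2/RECOVERING to SECONDARY

Also note that members resolve each other by the hostnames in the replica set config, so the names used must be reachable from both EC2 instances.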
Cannot connect to server. Too many symbolic links. Postgres Error Posted: 21 Aug 2013 01:12 PM PDT I get this error when I try connecting to a PostgreSQL server: psql: could not connect to server: Too many levels of symbolic links Is the server running locally and accepting connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"? What can I do to fix this? I have Postgres installed on my Mac. |
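A hedged way to narrow it down: the "too many levels of symbolic links" message normally means the socket path itself (or a directory leading to it) is a symlink that loops back on itself, which can happen on OS X when /var/pgsql_socket points at a location that points back again. Inspecting the chain, and connecting over TCP as a workaround, looks like this (the database name is a placeholder):

    ls -l /var/pgsql_socket
    ls -l /var/pgsql_socket/.s.PGSQL.5432

    # bypass the Unix socket entirely while you fix the link:
    psql -h localhost -p 5432 yourdb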
Filter on a window function without writing an outer SELECT statement Posted: 21 Aug 2013 04:12 PM PDT Since window functions cannot be included in the WHERE clause of the inner SELECT, is there another method that could be used to write this query without the outer SELECT statement? I'm using Oracle. Here is the sqlfiddle. SELECT MERCHANTID, WAREHOUSEID, PRODUCTCODE FROM ( SELECT 0 , WMP.MERCHANTID , WMP.WAREHOUSEID , WMP.PRODUCTCODE , RANK() OVER (PARTITION BY ML.MASTERMERCHANTID, WMP.PRODUCTCODE ORDER BY ML.PREFERENCEORDER ASC NULLS LAST) MERCH_RANK , RANK() OVER (PARTITION BY WMP.MERCHANTID, WMP.PRODUCTCODE ORDER BY WM.PREFERENCEORDER ASC NULLS LAST) WARE_RANK FROM MW_WAREHOUSEMERCHANTPRODUCT WMP LEFT OUTER JOIN MW_WAREHOUSEMERCHANT WM ON 0=0 AND WMP.MERCHANTID = WM.MERCHANTID AND WMP.WAREHOUSEID = WM.WAREHOUSEID LEFT OUTER JOIN MW_MERCHANTLINK ML ON 0=0 AND WMP.MERCHANTID = ML.LINKEDMERCHANTID LEFT OUTER JOIN MW_WAREHOUSEMERCHANTPRODUCT MASTER ON 0=0 AND ML.MASTERMERCHANTID = MASTER.MERCHANTID AND WMP.PRODUCTCODE = MASTER.PRODUCTCODE WHERE 0=0 AND WMP.STOCKLEVEL > 0 AND NVL(MASTER.STOCKLEVEL, 0) <= 0 ) WHERE 0=0 AND MERCH_RANK = 1 AND WARE_RANK = 1 ; |
Errors while creating multiple mysql-5.5 instances Posted: 21 Aug 2013 09:11 AM PDT I have installed 3rd mysql instance on my testing server. 2 instances already running without any issues. When I installed 3rd instance by mysql-5.5.30 zip source, it installed successfully but when I tried to restart 3rd instance of mysql it says, MySQL server PID file could not be found! [FAILED] Starting MySQL........................................................ The server quit without updating PID file. 1st instance running on 3305 BASEDIR: /usr/local/mysql Configuration File: /etc/my.cnf Socket: /tmp/mysql.stock 2nd instance running on 3306 BASEDIR: /backup/mysql-cbt Configuration File: /backup/mysql-cbt/my.cnf Socket: /backup/mysql-cbt/mysql.stock 3rd instance running on 3307 BASEDIR: /home/mysql-5/ Configuration File: /home/mysql-5/my.cnf Socket: /home/mysql-5/mysql.stock Error Log is as follows. 130513 11:22:23 mysqld_safe Starting mysqld daemon with databases from /backup/mysql-cbt/data<br> 130513 11:22:23 InnoDB: The InnoDB memory heap is disabled<br> 130513 11:22:23 InnoDB: Mutexes and rw_locks use GCC atomic builtins<br> 130513 11:22:23 InnoDB: Compressed tables use zlib 1.2.3<br> 130513 11:22:23 InnoDB: Using Linux native AIO<br> 130513 11:22:23 InnoDB: Initializing buffer pool, size = 128.0M<br> 130513 11:22:23 InnoDB: Completed initialization of buffer pool<br> 130513 11:22:23 InnoDB: highest supported file format is Barracuda.<br> 130513 11:22:23 InnoDB: Waiting for the background threads to start<br> 130513 11:22:24 InnoDB: 5.5.30 started; log sequence number 1595675<br> 130513 11:22:24 [ERROR] /backup/mysql-cbt/bin/mysqld: unknown option '--safe- show- database'<br> 130513 11:22:24 [ERROR] Aborting<br> 130513 11:22:24 InnoDB: Starting shutdown...<br> 130513 11:22:25 InnoDB: Shutdown completed; log sequence number 1595675<br> 130513 11:22:25 [Note] /backup/mysql-cbt/bin/mysqld: Shutdown complete<br> 130513 11:22:25 mysqld_safe mysqld from pid file /backup/mysql-cbt/cbt-instance.pid ended Still unable to figure out this error. How can I start the 3rd instance? Installation Here is the story from beginning. I have installed mysql via source: - extracted source mysql-5.5.30-linux2.6-x86_64.tar.gz to directory, then recursively change permission of mysql directory to user & group mysql.
- in scripts directory of mysql I run this command as root:
[root@server /backup/mysql-cbt/scripts]# ./mysql_install_db --basedir=/backup/mysql-cbt/ --datadir=/backup/mysql-cbt/data --defaults-file=/backup/mysql-cbt/my.cnf --user=mysql** Installing MySQL system tables... OK Filling help tables... OK To start mysqld at boot time you have to copy support-files/mysql.server to the right place for your system PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER ! To do so, start the server, then issue the following commands: /backup/mysql-cbt//bin/mysqladmin -u root password 'new-password' /backup/mysql-cbt//bin/mysqladmin -u root -h 69.16.196.128 password 'new-password' Alternatively you can run: /backup/mysql-cbt//bin/mysql_secure_installation which will also give you the option of removing the test databases and anonymous user created by default. This is strongly recommended for production servers. See the manual for more instructions. You can start the MySQL daemon with: cd /backup/mysql-cbt/ ; /backup/mysql-cbt//bin/mysqld_safe & You can test the MySQL daemon with mysql-test-run.pl cd /backup/mysql-cbt//mysql-test ; perl mysql-test-run.pl Please report any problems with the /backup/mysql-cbt//scripts/mysqlbug script! When I restart this instance it gives error of updating pid and exit. Which step is missing? My my.cnf file (/backup/mysql-cbt/my.cnf ): [mysqld] federated basedir = /backup/mysql-cbt datadir = /backup/mysql-cbt/data log-error = /backup/mysql-cbt/cbt-instance.err pid-file = /backup/mysql-cbt/cbt-instance.pid #socket = /tmp/mysql.sock socket = /backup/mysql-cbt/mysql.sock port = 3310 user = mysql tmpdir = /backup/mysql-cbt/tmpdir /etc/init.d/mysql_cbt # If you want to affect other MySQL variables, you should make your changes # in the /etc/my.cnf, ~/.my.cnf or other MySQL configuration files. # If you change base dir, you must also change datadir. These may get # overwritten by settings in the MySQL configuration files. basedir=/backup/mysql-cbt datadir=/backup/mysql-cbt/data # Default value, in seconds, afterwhich the script should timeout waiting # for server start. # Value here is overriden by value in my.cnf. # 0 means don't wait at all # Negative numbers mean to wait indefinitely service_startup_timeout=900 # Lock directory for RedHat / SuSE. lockdir='/var/lock/subsys' lock_file_path="$lockdir/mysql" # The following variables are only set for letting mysql.server find things. # Set some defaults mysqld_pid_file_path=/backup/mysql-cbt/mysql-cbt-instance.pid if test -z "$basedir" then basedir=/usr/local/mysql-cbt bindir=/usr/local/mysql-cbt/bin if test -z "$datadir" then datadir=/backup/mysql-cbt/data fi sbindir=/backup/mysql-cbt/bin libexecdir=/backup/mysql-cbt/bin else bindir="$basedir/bin" if test -z "$datadir" then datadir="$basedir/data" fi sbindir="$basedir/sbin" libexecdir="$basedir/libexec" fi |
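A hedged reading of the error log above: mysqld aborts on the option '--safe-show-database', which MySQL 5.5 no longer accepts, so one of the option files the new instance reads still contains it (an instance started without an explicit --defaults-file will also read /etc/my.cnf). Finding and removing that line, then starting the instance so it reads only its own file, would look roughly like this:

    grep -n "safe-show-database" /etc/my.cnf /backup/mysql-cbt/my.cnf /home/mysql-5/my.cnf
    # delete the offending line from whichever file contains it, then:
    /backup/mysql-cbt/bin/mysqld_safe --defaults-file=/backup/mysql-cbt/my.cnf &

(--defaults-file must be the first option passed to mysqld_safe; adjust the paths for whichever instance you are starting.)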
How do I identify the remote db agent name to use in create_database_destination on Oracle 11gR2? Posted: 21 Aug 2013 08:12 PM PDT I am trying to setup DBMS_SCHEDULER in Oracle 11g to run a remote database job. I have a remote Oracle 11g R2 database on unix and a local one on Windows. I read that you can install the oracle scheduler agent from the 11g client install for machines that don't have Oracle installed but this is not needed for running remote jobs if Oracle is present on both machines. With the remote agent installation, you run schagent and provide parameters to register the agent to the remote machine but I cant find any instructions on the web regarding how to register remote agents when both machines have Oracle installed or what to use as the agent name in this case. I have added an entry to tnsnames.ora for the remote DB and can tnsping, etc. If I run the dbms_scheduler.create_database_destination procedure, it requires an agent name but where can I find this for the database or how can I check that it is running on Unix or Windows? |
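A hedged place to look, assuming these 11gR2 dictionary views exist in your release: they list the agents and destinations the local database already knows about, and the name shown there is what CREATE_DATABASE_DESTINATION expects.

    SELECT * FROM dba_scheduler_external_dests;  -- remote agents registered with this database
    SELECT * FROM dba_scheduler_db_dests;        -- database destinations defined on top of them

If both views come back empty, the remote host's scheduler agent has not registered with this database yet, even though Oracle is installed there.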
How can I search the full text of a stored procedure for a value? Posted: 21 Aug 2013 07:31 AM PDT I use the following script to search the text of all stored procedures when I want to find specific values. SELECT ROUTINE_NAME, ROUTINE_TYPE FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_DEFINITION LIKE @searchText ORDER BY ROUTINE_NAME I recently discovered that ROUTINE_DEFINITION stops after 4000 characters, so some procedures were not getting returned when they should have been. How can I query the full text of a stored procedure for a value? |
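sys.sql_modules stores the complete module text as nvarchar(max), so nothing is truncated at 4000 characters:

    SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
           OBJECT_NAME(m.object_id)        AS routine_name,
           o.type_desc
    FROM   sys.sql_modules AS m
    JOIN   sys.objects     AS o ON o.object_id = m.object_id
    WHERE  m.definition LIKE @searchText
    ORDER  BY routine_name;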
Loading XML documents to Oracle 11g DB with control file Posted: 21 Aug 2013 05:12 PM PDT I am using Oracle 11g XML database and trying to load XML documents to this DB with a control file and the sqlldr utility. All these XML files have an element that contains a date string with time stamp (and the letter T in the middle of it). Oracle rejects this date string because of T in it and thus the XML file is not loaded to the DB. I want to use the Oracle function TO_TIMESTAMP_TZ on the date string during the data load, but I do not know how to do it. That's where I need help. If there is any other way to import the XML (with date string with timestamp), I will try that also. Here is the date entry in XML file: <ns3:EntryDateTime cls="U">2013-04-20T21:02:52.468-04:00</ns3:EntryDateTime> And here is entire code the control file: load data infile 'filelist.dat' append into table STXP xmltype(XMLDATA) ( filename filler char(120), XMLDATA lobfile(filename) terminated by eof ) I believe that I can execute the above control file with the sqlldr utility on SQL*Plus command line also, but not sure about this option. If this is possible, I guess I can ALTER SESSION (to somehow format date string) on command line before executing the control file. The filelist.dat mentioned above contains entries for input XML file, with one line listing one XML file. The above date entry is required in each XML file. Each XML file has about 50 different elements, some required and some optional. I would greatly appreciate your help. UPDATE: I successfully registered the schema, which contains definition for the date string, and 100 other schema, with a script. Since this script is very large, I am posting only 2 registration portions of it: DECLARE SCHEMAURL VARCHAR2( 100 ); SCHEMADOC VARCHAR2( 100 ); BEGIN SCHEMAURL := 'http://www.some.org/stxp/DataTypes.xsd'; SCHEMADOC := 'DataTypes.xsd'; DBMS_XMLSCHEMA.registerSchema( SCHEMAURL, BFILENAME( 'XSD_DIR', SCHEMADOC ), LOCAL => TRUE, -- local GENTYPES => TRUE, -- generate object types GENBEAN => FALSE, -- no java beans GENTABLES => TRUE, -- generate object tables OWNER => USER ); SCHEMAURL := 'http://www.some.org/stxp/STXP.xsd'; SCHEMADOC := 'STXP.xsd'; DBMS_XMLSCHEMA.registerSchema( SCHEMAURL, BFILENAME( 'XSD_DIR', SCHEMADOC ), LOCAL => TRUE, -- local GENTYPES => TRUE, -- generate object types GENBEAN => FALSE, -- no java beans GENTABLES => TRUE, -- generate object tables OWNER => USER ); END; / The 2nd registration above is the last in the script, and this creates the table STXP, in which I am trying to load about 800 XML files. Each XML file has a root element called stxp. This is the relevant definition of date string: <xsd:simpleType name="DT" xdb:SQLType="TIMESTAMP WITH TIME ZONE"> <xsd:restriction base="xsd:dateTime"/> </xsd:simpleType> And this is how I am using the above definition: <element name="EntryDateTime" type="oth:DT"/> When I make the above element optional (for testing purpose) and remove the date string entry (mentioned near the top of this question) from my XML file, the XML file is loaded successfully to Oracle XML database. When I put this entry back to XML file (because it is required), Oracle rejects it. Because I let Oracle take care of population of STXP table with data from XML files, I am not sure if I can set a trigger to pre-process the date string from the input XML file before saving it in database. i think there is a way to do it in the control file. |
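One hedged idea along the ALTER SESSION line mentioned above: an xsd:dateTime value carries the literal T and a zone offset, so telling the session (or, for a standalone sqlldr run, the client environment) what shape to expect may let the value bind as TIMESTAMP WITH TIME ZONE without editing the XML. The format masks below are my assumption of what matches 2013-04-20T21:02:52.468-04:00:

    ALTER SESSION SET NLS_TIMESTAMP_TZ_FORMAT = 'YYYY-MM-DD"T"HH24:MI:SS.FFTZH:TZM';
    ALTER SESSION SET NLS_TIMESTAMP_FORMAT    = 'YYYY-MM-DD"T"HH24:MI:SS.FF';

For sqlldr invoked from the shell, the equivalent would be exporting NLS_TIMESTAMP_TZ_FORMAT as an environment variable before running the load.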
Enabling/disabling/changing Oracle auditing without a shutdown? Posted: 21 Aug 2013 08:11 AM PDT I have a large database that needs auditing on a very detailed level (every select, update, insert, and delete, along with the actual text of the statement) for about half the users. I know how to do this (here is a related question for anyone interested), but I also realize we cannot do this for any extended amount of time because of how much quickly we would be collective massive amounts of data. So while there is a scheduled downtime coming up that we can implement the auditing, to change it to fine tune it (as management changes the request of what data they desire) or to disable it once we have enough data would require us having to take the database down to disable this. While this wouldn't be too horrible to do if we were able to schedule a short downtime late at night, it would be really nice if this could be avoided altogether, but every reference I've seen so far requires the database to be brought down and back up. So, my question (which I believe to be general enough for the purposes of this site, even though the back story is specific) is if there is a way to enable/disable/change auditing without shutting down the database. Edit: Oracle version 11r2. As for AUD$ vs. FGA, I'm not sure what FGA is, but AUD is the table which will hold the data, so I am assuming that one. |
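For what it's worth, only the AUDIT_TRAIL initialization parameter itself needs the bounce; once it is set, the scope of standard auditing is changed online with AUDIT / NOAUDIT, and fine-grained auditing (DBMS_FGA, which also captures the statement text you need) can be added and dropped without any downtime. Object and policy names below are placeholders:

    AUDIT SELECT, INSERT, UPDATE, DELETE ON app_schema.orders BY ACCESS;
    NOAUDIT SELECT ON app_schema.orders;

    BEGIN
      DBMS_FGA.ADD_POLICY(
        object_schema   => 'APP_SCHEMA',
        object_name     => 'ORDERS',
        policy_name     => 'ORDERS_DML_AUDIT',
        statement_types => 'SELECT,INSERT,UPDATE,DELETE');
      -- later: DBMS_FGA.DROP_POLICY('APP_SCHEMA', 'ORDERS', 'ORDERS_DML_AUDIT');
    END;
    /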
How to migrate data from SQL database to MySQL including flat files? Posted: 21 Aug 2013 07:11 AM PDT A friend of mine asked an ASP developer to build an online application where users can also upload files (jpg, doc, pdf), and all of these files are saved in the SQL Server database as flat files. Now my friend has asked me to redo everything in PHP, and he wants to migrate all the data he has in the SQL Server database to a MySQL database. I can do it: the text can be migrated easily, but can the files that are saved in the SQL Server database also be migrated to MySQL as flat files? Is it possible? |
MySQL - run SELECT statement on another server without defining the table structure Posted: 21 Aug 2013 03:12 PM PDT In MySQL I can query information on another server using federated tables, as long as I've defined the same table structure locally. In MS SQL Server, however, I can run any SQL statement against a linked server. Is it possible to do the same thing in MySQL? |
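As far as I know there is no MySQL equivalent of OPENQUERY against a linked server; a FEDERATED table always needs the structure declared locally. CREATE SERVER at least centralizes the connection details so each federated table only repeats the column list. Names below are placeholders:

    CREATE SERVER remote_link
      FOREIGN DATA WRAPPER mysql
      OPTIONS (HOST 'remote-host', DATABASE 'remote_db', USER 'app', PASSWORD 'secret', PORT 3306);

    CREATE TABLE remote_orders (      -- must still mirror the remote table's structure
        id    INT PRIMARY KEY,
        total DECIMAL(10,2)
    ) ENGINE=FEDERATED CONNECTION='remote_link/orders';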
consequences of using "innodb_flush_method = O_DIRECT" without having a battery backed write cache? or on a KVM guest? Posted: 21 Aug 2013 06:12 PM PDT Mysql 5.5.29 Innodb- 128GB Ram - 32 cores - Raid 10 SSD. Our server which is a dedicated KVM guest on a 'baremetal' is hosting our heavy read-write DB server. Everything is file-per-table. innodb_Buffer_pool is 96GB with 1GBx2 log_file_size with about 20 minutes of writes to fill up those logs at peak time. How bad of a situation would it be if O_DIRECT (currently running on the default) was enabled during a high work load without a battery backed write cache and a total crash were to occur on the OS, parent host or the power was cut? Does a battery backed write cache make a difference if the server is a vm guest of the parent anyway? . |
DB2 to require password each time Posted: 21 Aug 2013 11:12 AM PDT I am using db2inst1 to connect to a database in DB2 which I have installed on my machine. Therefore, db2inst1 user does not require username/password authentication (borrows them from the OS). I would like to change that, and force every time a connection is initiated a username/password to be requested. More specifically, this is how the authentication configuration looks like: db2 get dbm cfg|grep -i auth GSS Plugin for Local Authorization (LOCAL_GSSPLUGIN) = Server Connection Authentication (SRVCON_AUTH) = NOT_SPECIFIED Database manager authentication (AUTHENTICATION) = CLIENT Alternate authentication (ALTERNATE_AUTH_ENC) = NOT_SPECIFIED Cataloging allowed without authority (CATALOG_NOAUTH) = NO Trusted client authentication (TRUST_CLNTAUTH) = SERVER Bypass federated authentication (FED_NOAUTH) = NO db2 connect to dbName Database Connection Information Database server = DB2/LINUXX8664 10.1.0 SQL authorization ID = DB2INST1 Local database alias = DBNAME db2 connect to dbName user db2inst1 using password SQL1639N The database server was unable to perform authentication because security-related database manager files on the server do not have the required operating system permissions. SQLSTATE=08001 I have played with some authentication combinations for "AUTHENTICATION" and "TRUST_CLNTAUTH" without much luck. |
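A hedged sketch of the configuration change itself: with AUTHENTICATION set to CLIENT (as shown above), the server trusts the operating-system login, so switching it to SERVER is what makes every connection supply a user ID and password. Run as the instance owner and restart the instance afterwards:

    db2 update dbm cfg using AUTHENTICATION SERVER
    db2stop
    db2start
    db2 connect to dbName user db2inst1 using password

The SQL1639N error is a separate issue: it usually indicates that the instance's security binaries (for example ~/sqllib/security/db2ckpw) have lost their root-owned setuid permissions, which is commonly repaired by re-running the instance update as root; treat that as a pointer to verify rather than a definitive fix.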
How can I set a default session sql_mode for a given user? Posted: 21 Aug 2013 07:12 PM PDT I want to have a default session sql_mode that is different from the global sql_mode. I can set it manually, but I'd like to do it automatically. I thought I could add it to .my.cnf , but that doesn't work. I tried adding it to the [mysql] group in .my.cnf , and it does not throw an error. Yet, when I connect my session sql_mode, the session still inherits the global sql_mode. I'm using MySQL 5.5.16. |
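A hedged workaround: the [mysql] group in .my.cnf has no effect because sql_mode is a server variable, not a client option, but init_connect runs for every new connection from a non-SUPER account, so it can set a session sql_mode that differs from the global one (the modes shown are placeholders):

    SET GLOBAL init_connect = "SET sql_mode='STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION'";

    -- to persist it across restarts, put the same setting in my.cnf under [mysqld]:
    --   init_connect="SET sql_mode='STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION'"

The caveat is that init_connect is skipped for accounts with the SUPER privilege, and it applies to all non-SUPER users rather than one specific user.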
oracle streams apply: how to get a reason why LCR message was not applied Posted: 21 Aug 2013 10:11 AM PDT I've set up bidirectional oracle streams replication (11gR1) using identical scripts on both machines (DB1 and DB2). Although changes from DB1 are being applied to DB2, changes from DB2 to DB1 aren't. I have only one rule for capture processes that checks for apply tag to prevent cyclic propagation, and have no rules for apply processes. LCRs from DB2 are dequeued at DB1 by apply reader process (update LCRs are among dequeued messages for sure, because when I issue 50 inserts at DB2, at DB1 dequeued messages counter increases by 50), but aren't processed by apply coordinator and servers : Apply Apply Apply Apply Name Queue Tag Status --------------- --------------- ----- ------------------------------ FROM_DB2_APP FROM_DB2_APP_Q 02 ENABLED Apply Messages Last Name State Dequeued SCN --------------- ------------------ ---------- ---------- FROM_DB2_APP DEQUEUE MESSAGES 102 1118751 Apply Trans Trans Trans Trans Trans Trans Name State Appld Assign Rcvd Ignrd Rollbk Error --------------- --------------- ------- ------- ------- ------- ------- ------- FROM_DB2_APP IDLE 0 0 0 0 0 0 Apply Server Messages Name ID Applied State --------------- ------ --------- ------------------------------ FROM_DB2_APP 1 0 IDLE As far as I understand, in that case LCRs can be silently ignored (without throwing apply error) only if SCN of LCR is lesser that instantiation SCN for a table, but instantiation SCN is 1114348 (< 1118751): Source Object Object Instantiation Database Owner Name SCN ------------ ------------ ------------ ------------- DB2 DUMMYUSR DUMMYTBL 1114348 Oracle provides means to deal with errors, but how to check why message was not applied if there was no error? |
SQL Server 2008 Setup Error 0x80070490 Posted: 21 Aug 2013 12:12 PM PDT I am trying to install SQL Server 2008 x64 on Windows 2008 R2 and keep getting the following error: SQL Server Setup has encountered the following error: Element not found. (Exception from HRESULT: 0x80070490) I have applied all required patches and there are no other instances of SQL Server on the machine. Any clues as to what the cause might be? Thanks. |
Replicating data from Oracle to MySQL Posted: 21 Aug 2013 02:12 PM PDT I work with a vendor that does data analytics, and they currently receive a replication stream from some of our databases using a product called Goldengate (which is very expensive). Goldengate has been great - it replicates transactions from the Tandem-NSK source and can apply the changes into any supported database - they're using MySQL at the remote end. We're switching our billing system to Oracle, and while we could continue to use Goldengate to move these logs, I'd like to see if there's another option. We initially chose Goldengate because nothing else could get data out of the Tandem NSK, but now that we're moving to Oracle, there may be some more native (or at least simpler) choices. I've got nothing against them - like I said, it works great - but I'm hoping that two mainstrem databases are easier to do replication between than the NSK. Are there any products of methods that would help get transactional data from an Oracle system into an MySQL database? I'm not sure if there's any way to do this kind of replication natively (I know we can do Oracle -> MSSQL using native replication, but not any way to target MySQL that I'm aware of), or if anybody knows of a product that could facilitate this (and costs less than Goldengate). Thanks for any suggestions! |