Friday, May 31, 2013

[T-SQL] Import excel custom formatting cell to sql server 2008

Import excel custom formatting cell to sql server 2008

Posted: 30 May 2013 11:05 PM PDT

I have an Excel sheet with several data columns. One column holds efficiency values like this:

    Effiency
    =======
    70%
    80%
    50%

The cell format of that column is [<0]"";0%. When I import the data from Excel into SQL Server, this column comes through as blank values. I am using OPENROWSET:

    SELECT Effiency
    FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
         'Excel 12.0;Database=E:\BI\Excel_Source_File\CS_Source_File\Camp201201231.xlsx',
         'SELECT * FROM [Detailed_report$B7:AP39]')

I also tried the IMEX=1 option and OPENDATASOURCE; no change, same issue. Is there any way to transfer the data from Excel to SQL Server with the cell formatting ignored during the transfer? Urgent, and thanks in advance. Regards, Naseer Ahmed
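
One thing worth double-checking here (a sketch, not a guaranteed fix): with the ACE provider, IMEX=1 only takes effect inside the extended-properties string, next to "Excel 12.0", and it makes mixed-format columns come through as text instead of NULL. The ACE engine's TypeGuessRows registry setting also influences how the column's type is guessed.

    SELECT Effiency
    FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
         'Excel 12.0;HDR=YES;IMEX=1;Database=E:\BI\Excel_Source_File\CS_Source_File\Camp201201231.xlsx',
         'SELECT * FROM [Detailed_report$B7:AP39]');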

Add New column and Update column in same script!

Posted: 30 May 2013 09:57 AM PDT

Hi all, I have a script like the following:

    IF COLUMNPROPERTY(OBJECT_ID('test2.dbo.Fldr'), 'ColumnName', 'old_pk') IS NULL
    BEGIN
        ALTER TABLE Table2 ADD old_pk INT
    END;

    UPDATE T2
    SET T2.ID = T1.ID,
        T2.old_pk = T1.[fldr_id]
    FROM Table1 T1
    INNER JOIN Table2 T2 ON T1.[ID] = T2.[ID]

When I execute it I get this error:

    Msg 207, Level 16, State 1, Line 39
    Invalid column name 'old_pk'.

Everything works fine when I run the ALTER and the UPDATE separately, but I need to do this in the same script in one go. How do I solve this?
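
One common fix, sketched below: the batch is compiled before the ALTER executes, and a missing column on an existing table fails at compile time, so push the UPDATE into a child batch with EXEC (or split the script with GO). Note that COLUMNPROPERTY takes the column name as its second argument and a property such as 'AllowsNull' as its third; object names here are taken loosely from the post.

    IF COLUMNPROPERTY(OBJECT_ID('test2.dbo.Table2'), 'old_pk', 'AllowsNull') IS NULL
        ALTER TABLE Table2 ADD old_pk INT;

    EXEC('
    UPDATE T2
    SET T2.ID = T1.ID,
        T2.old_pk = T1.[fldr_id]
    FROM Table1 T1
    INNER JOIN Table2 T2 ON T1.[ID] = T2.[ID];
    ');  -- compiled only when it runs, after the column exists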

Merge between two tables in two different servers

Posted: 30 May 2013 06:51 AM PDT

Hi friends, I have the following issue. I have a table called customer in the Cust database on SQL Server 2008 R2, which is my destination table, and another table also called customer (same name and same fields) on an Oracle server, which is my source table. I have read-only access to the Oracle table through a linked server in SQL Server 2008 R2. Now I am confused about how to do the incremental ETL (insert, update and delete) between the source and destination tables, either via a stored procedure or an SSIS package. Please help me.
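
A rough sketch of the T-SQL side, assuming a linked server named ORA_LNK and illustrative column names (customer_id, name, city): stage the Oracle rows locally through OPENQUERY, then let MERGE apply the inserts, updates and deletes in one pass. Read-only access on the Oracle side is enough.

    SELECT *
    INTO   #src
    FROM   OPENQUERY(ORA_LNK, 'SELECT customer_id, name, city FROM customer');

    MERGE Cust.dbo.customer AS tgt
    USING #src AS src
          ON tgt.customer_id = src.customer_id
    WHEN MATCHED THEN
         UPDATE SET tgt.name = src.name, tgt.city = src.city
    WHEN NOT MATCHED BY TARGET THEN
         INSERT (customer_id, name, city) VALUES (src.customer_id, src.name, src.city)
    WHEN NOT MATCHED BY SOURCE THEN
         DELETE;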

Summing Invoice Amounts ONCE and MULTIPLE Invoice Payments in same T-SQL Statement

Posted: 30 May 2013 05:11 AM PDT

A master table contains a series of invoices, and a detail table holds, for any invoice, one or more payments; an invoice may also have no payments. Obviously, if an invoice has multiple payments, the LEFT OUTER JOIN between the master and detail tables produces multiple rows for the same invoice, so the sum of the amounts counts the same invoice amount several times. And, of course, two distinct invoices may have the same amount.

I've been racking my brain to write a SINGLE T-SQL statement that simultaneously sums each invoice amount once and sums all the payments, broken down by currency. Any ideas?

This is the required result:

    /* GrandTotal Desired is the correct value, GrandTotal Obtained is wrong

    Currency  GrandTotal  GrandTotal  Payments
              Desired     Obtained
    EUR       15346.00    21349.00    800.80
    USD        6134.00     7135.00    667.3334
    */

Here are the table creation and fill scripts, and the obviously failed LEFT OUTER JOIN:

    BEGIN TRAN

    CREATE TABLE #INV
    (
        InvNo int NOT NULL IDENTITY,
        Amount money,
        Currency CHAR(3)
    )

    CREATE TABLE #Pay
    (
        PayNo int NOT NULL IDENTITY(100,1),
        InvNo int NOT NULL,
        Paid money
    )

    INSERT INTO #INV (Amount, Currency)
    SELECT 1001, 'USD' UNION
    SELECT 1011, 'USD' UNION
    SELECT 1111, 'USD' UNION
    SELECT 2001, 'EUR' UNION
    SELECT 2011, 'EUR' UNION
    SELECT 2111, 'EUR' UNION
    SELECT 3001, 'EUR' UNION
    SELECT 3011, 'USD' UNION
    SELECT 3111, 'EUR' UNION ALL
    SELECT 3111, 'EUR'

    INSERT INTO #Pay (InvNo, Paid) SELECT TOP 1 InvNo, CONVERT(money, Amount/3.00) FROM #INV
    INSERT INTO #Pay (InvNo, Paid) SELECT TOP 1 InvNo, CONVERT(money, Amount/3.00) FROM #INV
    INSERT INTO #Pay (InvNo, Paid) SELECT TOP 1 InvNo+3, CONVERT(money, Amount/5.00) FROM #INV
    INSERT INTO #Pay (InvNo, Paid) SELECT TOP 1 InvNo+3, CONVERT(money, Amount/5.00) FROM #INV
    INSERT INTO #Pay (InvNo, Paid) SELECT TOP 1 InvNo+3, CONVERT(money, Amount/5.00) FROM #INV
    INSERT INTO #Pay (InvNo, Paid) SELECT TOP 1 InvNo+3, CONVERT(money, Amount/5.00) FROM #INV

    SELECT * FROM #INV
    SELECT * FROM #Pay
    SELECT * FROM #INV LEFT OUTER JOIN #Pay ON #Pay.InvNo = #INV.InvNo

    SELECT #INV.Currency, SUM(#INV.Amount) AS GrandTotal
    FROM #INV
    GROUP BY #INV.Currency

    SELECT #INV.Currency, SUM(#Pay.Paid) AS Payments
    FROM #Pay INNER JOIN #INV ON #Pay.InvNo = #INV.InvNo
    GROUP BY #INV.Currency

    SELECT #INV.Currency, SUM(#INV.Amount) AS GrandTotal, SUM(#Pay.Paid) AS Payments
    FROM #INV LEFT OUTER JOIN #Pay ON #Pay.InvNo = #INV.InvNo
    GROUP BY #INV.Currency

    ROLLBACK TRAN
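
The usual fix is to collapse the payments to one row per invoice before joining, so each invoice amount is counted exactly once. A sketch against the #INV/#Pay tables above:

    SELECT I.Currency,
           SUM(I.Amount)          AS GrandTotal,
           SUM(ISNULL(P.Paid, 0)) AS Payments
    FROM #INV I
    LEFT OUTER JOIN (SELECT InvNo, SUM(Paid) AS Paid
                     FROM #Pay
                     GROUP BY InvNo) AS P
           ON P.InvNo = I.InvNo
    GROUP BY I.Currency;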

[SQL Server 2008 issues] Storing XML data

Storing XML data

Posted: 30 May 2013 07:18 PM PDT

I have XML data in this format:

    <?xml version="1.0" encoding="UTF-8"?>
    <ns0:GVRedReq xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
                  xmlns:ns0='http://fks.com/gv/redemption'
                  xsi:schemaLocation='http://fks.com/gv/redemption GVRedReq.xsd'>
      <ns0:StrCode>1234</ns0:StrCode>
      <ns0:TrmnlId>02</ns0:TrmnlId>
      <ns0:TrnsNo>0123456789</ns0:TrnsNo>
      <ns0:Date>03-05-2013</ns0:Date>
      <ns0:Time>11:30</ns0:Time>
      <ns0:GVDtls>
        <ns0:GVNo>VVD0000021</ns0:GVNo>
        <ns0:GVNo>VVD0000022</ns0:GVNo>
        <ns0:GVNo>BVD0003213</ns0:GVNo>
      </ns0:GVDtls>
    </ns0:GVRedReq>

I want to store it in my database table, whose structure is:

    CREATE TABLE GV_redeemRequest
    (
        RedeemRequestID int PRIMARY KEY IDENTITY(1,1),
        StoreCode char(4),
        TerminalID int,
        TransactionNO int,
        Date date,
        Time char(5),
        GVDetails varchar(max)
    )

How can I achieve this?
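
A sketch of one way to do this with the xml type's value()/query() methods and the ns0 namespace; the date conversion assumes the Date element really arrives as dd-mm-yyyy (style 105):

    DECLARE @x xml = N'...';  -- the request document shown above

    WITH XMLNAMESPACES ('http://fks.com/gv/redemption' AS ns0)
    INSERT INTO GV_redeemRequest (StoreCode, TerminalID, TransactionNO, [Date], [Time], GVDetails)
    SELECT @x.value('(/ns0:GVRedReq/ns0:StrCode)[1]', 'char(4)'),
           @x.value('(/ns0:GVRedReq/ns0:TrmnlId)[1]', 'int'),
           @x.value('(/ns0:GVRedReq/ns0:TrnsNo)[1]',  'int'),
           CONVERT(date, @x.value('(/ns0:GVRedReq/ns0:Date)[1]', 'char(10)'), 105),
           @x.value('(/ns0:GVRedReq/ns0:Time)[1]',    'char(5)'),
           CONVERT(varchar(max), @x.query('/ns0:GVRedReq/ns0:GVDtls'));  -- keep the voucher list as an XML fragment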

Creating views on multiple tables

Posted: 29 May 2013 10:32 PM PDT

Hi, is it possible to create views using multiple tables from multiple databases? I have 4 servers connected remotely, and the same database resides on all of them. I want to create a view over all the databases on all the servers, so that the data is available to users regardless of which server they connect to. Your valuable guidelines and suggestions are welcome. Thanks in advance, BILAL AHMAD
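
If the four servers are linked servers of whichever instance the user hits, one option is a view that unions the same table across all of them with four-part names (server, database and table names below are hypothetical). Note this only gives a combined read-only window over the data; it is not high availability, and every query pays the remote-query cost:

    CREATE VIEW dbo.vw_AllServers
    AS
    SELECT 'SRV1' AS SourceServer, * FROM SRV1.MyDb.dbo.MyTable
    UNION ALL
    SELECT 'SRV2', * FROM SRV2.MyDb.dbo.MyTable
    UNION ALL
    SELECT 'SRV3', * FROM SRV3.MyDb.dbo.MyTable
    UNION ALL
    SELECT 'SRV4', * FROM SRV4.MyDb.dbo.MyTable;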

Facing issue looping through SQL to XML

Posted: 30 May 2013 06:56 PM PDT

I have the following query, which is supposed to create XML for each employee:

    select E.FirstName + ' ' + E.LastName as 'emp/@Name',
           P.PositionName as 'emp/@Position',
           D.DepartmentName as 'emp/@Department',
           ( select distinct B.EventName as 'event/@ID',
                    ( Select (CAST(case when AAA.BookingID = AB.BookingID
                                        then ( select CONVERT(varchar(10), max(BF.FromDate), 103) as DateCompetencyAchieved)
                                        else '1 May 1900'
                                   end as Varchar ))) as 'event/@AttendanceDate'
             from #ReportMaster X, Booking B, MU_Attendee_Bookings AAA
             where B.BookingID = AAA.BookingID
             order by B.EventName
             for xml PATH(''), TYPE ) as 'emp'
    ...

The output is as follows:

    <r caption="Site of Person Involved" fieldName="SitePI">
      <emp Name="name1" Position="Manager" Department="Admin">
        <event ID="99hh" AttendanceDate="20/05/2013" />
        <event ID="MS Excel" AttendanceDate="xx" />
        <event ID="MS Office" AttendanceDate="xx" />
      </emp>
      <emp Name="Name2" Position="Supplier" Department="Accounts">
        <event ID="99hh" AttendanceDate="xx" />
        <event ID="MS Excel" AttendanceDate="10/05/2013" />
        <event ID="MS Office" AttendanceDate="xx" />
      </emp>
      <emp Name="Name2" Position="Supplier" Department="Accounts">
        <event ID="99hh" AttendanceDate="xx" />
        <event ID="MS Excel" AttendanceDate="xx" />
        <event ID="MS Office" AttendanceDate="01/08/2013" />
      </emp>
    </r>

What I need is the following: I do not want Name2 to repeat; the two outputs for Name2 should be combined into one emp element.

    <r caption="Site of Person Involved" fieldName="SitePI">
      <emp Name="name1" Position="Manager" Department="Admin">
        <event ID="99hh" AttendanceDate="20/05/2013" />
        <event ID="MS Excel" AttendanceDate="xx" />
        <event ID="MS Office" AttendanceDate="xx" />
      </emp>
      <emp Name="Name2" Position="Supplier" Department="Accounts">
        <event ID="99hh" AttendanceDate="xx" />
        <event ID="MS Excel" AttendanceDate="10/05/2013" />
        <event ID="MS Office" AttendanceDate="01/08/2013" />
      </emp>
    </r>

Can anyone please help? Am I making a mistake in looping through the event dates for each employee?

Passing one value with a @parameter but returning two customers

Posted: 30 May 2013 06:54 PM PDT

Hi, I have a problem that I have struggled with for a couple of hours now without being able to solve it. I have a report that returns funding for a particular customer. The customer name is passed to the underlying procedure as a parameter, @customer. So far everything is fine, but here comes the tricky part. In my customer database there have been some errors in the registration, so one of my customers has been registered under two different names, let's say customer A and customer B. In my procedure I have a condition saying customerName = @customer. What I want is to check the customer parameter, and if it is A or B, the condition should return the data for both customers, not only A or B. One way would be to create two @customer parameters in the report and let the condition be customerName = @customerA OR customerName = @customerB, but I would rather not use that approach.
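
One low-impact pattern is a small alias table that maps every registered spelling to a canonical name, so the report keeps its single @customer parameter. A sketch with hypothetical table names (CustomerAlias, Funding):

    CREATE TABLE dbo.CustomerAlias
    (
        RegisteredName nvarchar(100) NOT NULL PRIMARY KEY,
        CanonicalName  nvarchar(100) NOT NULL
    );
    INSERT dbo.CustomerAlias VALUES (N'Customer A', N'Customer A'),
                                    (N'Customer B', N'Customer A');

    -- in the procedure: both the row's name and @customer resolve to a canonical name
    SELECT f.*
    FROM Funding f
    JOIN dbo.CustomerAlias a ON a.RegisteredName = f.customerName
    JOIN dbo.CustomerAlias p ON p.RegisteredName = @customer
    WHERE a.CanonicalName = p.CanonicalName;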

Create a Leveling of Intermediaries

Posted: 30 May 2013 04:07 PM PDT

Hi, I have a table called Intermediary, which has columns like:

    CREATE TABLE Intermediary
    (
        IntermediaryPK INT,
        IntermediaryID NVARCHAR(20),
        IntermediaryName NVARCHAR(200),
        IntermediaryTypeID NVARCHAR(1),
        ParentIntermediaryID NVARCHAR(20),
        IntermediaryTypePK TINYINT,
        LevelID TINYINT,
        ParentIntermediaryPK INT,
        GrandParentIntermediaryPK INT,
        GrandParentIntermediaryID NVARCHAR(20)
    )

The basic idea is to use only the ParentIntermediaryID column to create a leveling of intermediaries. For example, if someone does not have any children, he will be level 3. The algorithm is:

1. Divide the intermediaries into levels: 1 = grandparent (has grandchildren), 2 = parent (has children), 3 = leaf (no children).
2. For each level 3 intermediary:
   a. Set the level 2 parent by the parent ID given.
   b. Set the level 1 parent by the parent's parent ID given.
3. For each level 2 intermediary:
   a. Set the level 2 parent as itself.
   b. Set the level 1 parent by the parent ID given.
4. For each level 1 intermediary, set the level 1 parent as itself.

So I have to write a SQL script to accomplish this algorithm. Please help me out.
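
A sketch of step 1 as a single set-based UPDATE on the Intermediary table above: no children means level 3, children but no grandchildren means level 2, grandchildren means level 1. Steps 2-4 would be similar UPDATEs joining through ParentIntermediaryID.

    UPDATE i
    SET LevelID = CASE
            WHEN NOT EXISTS (SELECT 1 FROM Intermediary c
                             WHERE c.ParentIntermediaryID = i.IntermediaryID) THEN 3
            WHEN EXISTS (SELECT 1
                         FROM Intermediary c
                         JOIN Intermediary g ON g.ParentIntermediaryID = c.IntermediaryID
                         WHERE c.ParentIntermediaryID = i.IntermediaryID) THEN 1
            ELSE 2
        END
    FROM Intermediary AS i;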

need script

Posted: 30 May 2013 12:46 AM PDT

I need a script to list all columns that have a NOT NULL constraint on them. The script I have is:

    USE [R]
    GO
    SELECT t.name AS table_name,
           SCHEMA_NAME(schema_id) AS schema_name,
           c.name AS column_name
    FROM sys.tables AS t
    INNER JOIN sys.columns c ON t.OBJECT_ID = c.OBJECT_ID
    WHERE c.name is not null
    ORDER BY schema_name, table_name;

But it also returns the nullable columns. Can anyone help?
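
The WHERE clause above only filters out columns whose name is NULL, and no column name is ever NULL. Nullability is exposed in sys.columns.is_nullable; a sketch:

    USE [R]
    GO
    SELECT SCHEMA_NAME(t.schema_id) AS schema_name,
           t.name AS table_name,
           c.name AS column_name
    FROM sys.tables AS t
    INNER JOIN sys.columns c ON t.object_id = c.object_id
    WHERE c.is_nullable = 0          -- declared NOT NULL
    ORDER BY schema_name, table_name;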

Need some help

Posted: 30 May 2013 03:53 PM PDT

Hi friends, I recently faced an interview and need answers for the questions I am not comfortable delivering:

1. What is the difference between SQL Server 2008 and 2008 R2?
2. What is virtual memory?
3. Why do we need MSDTC?
4. What transactions are involved in MSDTC?
5. What is ITIL?
6. What is Change Management?
7. What is Incident Management?

SQL server Query

Posted: 30 May 2013 05:42 PM PDT

Hi, please tell me what query recasting is, with an example. Regards, Anshuman Saini

Rounding issue

Posted: 30 May 2013 01:21 PM PDT

Hi everyone, I have a problem with rounding. There is a table:

    CREATE TABLE [dbo].[Fin3PaymentDisb](
        [PaymentID] [bigint] NOT NULL,
        [Currency] [char](3) NOT NULL,
        [Amount] [money] NOT NULL,
        [Rate] [float] NOT NULL,
        [CurrencyPay] [char](3) NULL,
        [RatePay] [float] NOT NULL,
        [AmountPay] AS ([Amount]*[Rate]),
        CONSTRAINT [PK_Fin3PaymentDisb] PRIMARY KEY CLUSTERED

with the values:

    AMOUNT = 3875.0000
    RATE = 0.17836
    AMOUNTPAY = 691.145

And this query returns the record:

    SELECT P.*
    FROM Fin3Payment P
    INNER JOIN (
        SELECT PaymentID, SUM(AmountPay) AS Amount
        FROM Fin3PaymentDisb
        GROUP BY PaymentID
    ) AS L ON L.PaymentID = P.PaymentID
         AND ROUND(ROUND(L.Amount, 2) - ROUND(P.Amount - P.Offset - P.SetDisc, 2), 2) <> 0

where PaymentAmount = 691, Offset = -0.15 and SetDisc = 0, i.e. 691.145 is rounded to 691.14. But in the query

    SELECT ROUND(691.145, 2)

the result is 691.15. What could be the problem here? It runs on MS SQL 2005 Express SP2. Thanks.
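
The computed column [AmountPay] is money * float, which is typed as float, while the literal 691.145 is typed as decimal, and the two round differently: SQL Server rounds decimals half away from zero, but the double nearest to 691.145 sits a hair below it, so it rounds down. A worked comparison (the last line's exact digits depend on the float-to-decimal conversion, but it shows the stored value is not exactly 691.145):

    SELECT ROUND(691.145, 2);                 -- decimal literal -> 691.150
    SELECT ROUND(CAST(691.145 AS float), 2);  -- float           -> 691.14
    SELECT CAST(CAST(691.145 AS float) AS decimal(30,20));  -- ~691.1449999999999...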

How to calculate space needed to modify a column in a table?

Posted: 30 May 2013 01:39 PM PDT

I have a situation that's driving me INSANE... The vendors of our financial processing system asked us to widen a column in a table from char(15) to char(19) to allow extra room in a field needed by the web application. Typically this type of change wouldn't be such a big deal; however, this table has about 90 million rows and a data footprint of about 214 GB.

Now, unless I'm missing something (which is very possible, as I'm no expert at anything), issuing an ALTER COLUMN like this via SSMS involves creating a new "temp" table, transferring the data from the existing table to the new one, creating all non-clustered indexes on the new table, dropping the old table and renaming the new. Consider the command:

    ALTER TABLE MyProblemChild
    ALTER COLUMN [FleetCardVehicleNo] char(19) NULL
    GO

I did this on a testing server with attached RAID storage; it took about 1.5 hours with the log file pre-sized at 230 GB - not too bad. When I attempted the same process on our production server (which has the data, index and log files on a SAN, each on its own LUN), the process ran for over 5 hours and grew the log file to over 490 GB... and only stopped there because the LUN ran out of space, at which point I had to kill/rollback the entire thing!

How can I calculate how much log space will be needed to complete the operation successfully?
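
Sizing this precisely is hard: the size-of-data ALTER is one fully logged transaction over all 90 million rows, plus space reserved for rollback, which is why production blew past what the test run suggested. A sketch of one way around the single giant transaction, assuming the application can tolerate a staged swap (batch size illustrative; run log backups between batches in full recovery):

    -- adding a NULLable column is a metadata-only change
    ALTER TABLE MyProblemChild ADD FleetCardVehicleNo_new char(19) NULL;
    GO
    -- backfill in small, individually committed batches to keep the log flat
    WHILE 1 = 1
    BEGIN
        UPDATE TOP (50000) MyProblemChild
        SET FleetCardVehicleNo_new = FleetCardVehicleNo
        WHERE FleetCardVehicleNo_new IS NULL
          AND FleetCardVehicleNo IS NOT NULL;
        IF @@ROWCOUNT = 0 BREAK;
    END
    -- then, in a short window: drop the old column and rename the new one
    -- EXEC sp_rename 'MyProblemChild.FleetCardVehicleNo_new', 'FleetCardVehicleNo', 'COLUMN';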

Need MDX query

Posted: 30 May 2013 01:53 PM PDT

Hi all, I need an MDX query to display the first three letters of the month name. Please give any example queries.

Strange BETWEEN behavior... anyone have an explanation for this?

Posted: 30 May 2013 08:13 AM PDT

I wrote some code that will search an entire directory of .trn files (log ships, in my case). It uses RESTORE HEADERONLY, inserts the results into a temp table, and then uses BETWEEN with the FirstLSN and LastLSN columns to find the LSN you are looking for. I'll post the code in its entirety below in case someone finds it useful, but this is the query that actually does the search using BETWEEN, more or less:

QUERY:

    select convert(NUMERIC(38), FirstLSN) as 'FirstLSN',
           convert(NUMERIC(38), lastlsn) as 'lastlsn'
    from #filelist
    where 17017000006794800011 BETWEEN FirstLSN AND LastLSN

RESULT:

    FirstLSN             lastlsn
    17017000005762601000 17017000006794799000   (WRONG)
    17017000006794799000 17017000007787600000   (RIGHT)

The #filelist table (which holds the results of each HEADERONLY call) defines FirstLSN and LastLSN as FLOAT and the other fields as varchar. The correct result is returned, but also an additional record that seems erroneous. In the full version below I use a variable called @SearchLSN, also of type FLOAT; this is necessary for the BETWEEN, as it only compares like types. Anyone have any idea why BETWEEN would return two results, with one being incorrect? The wrong record, by the way, is always the record immediately before the correct record in the table.

Here is the code in its entirety in case it's useful for someone:

    /*
    This script will search for a particular LSN number within a directory of .trn files. The log file must use .trn for the
    extension. It will also return ALL the information for every file if you enter a 0 (zero) for the search lsn.
    Written by kenneth.gore@gmailDOTcom
    Use as you wish, modify as you wish..................Need more? Email me :)
    */
    SET QUOTED_IDENTIFIER OFF
    GO
    SET ANSI_NULLS ON
    GO
    set nocount on

    declare @restoreFromDir varchar(255) = '\\Place\Your\directory\path\here\without\backslash' --Holds the location of the .trn files
    declare @SearchLSN Float = 17017000006794800011 --Holds the lsn you are searching for. A value of 0 to display ALL the information from every file.
    declare @DisplayasWeGo bit = 1 -- Determines whether information for each file will be displayed as it's found or JUST the file that you are searching for at the end of the query.

    if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#dirList'))
        DROP TABLE #dirList;
    if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#filelist'))
        DROP TABLE #filelist;

    declare @filename varchar(100), @cmd varchar(500)

    create table #dirList (filename varchar(255))
    create table #filelist (
        BackupName varchar(255), BackupDescription varchar(255), BackupType varchar(255),
        ExpirationDate varchar(255), Compressed varchar(255), Position varchar(255),
        DeviceType varchar(255), UserName varchar(255), ServerName varchar(255),
        DatabaseName varchar(255), DatabaseVersion varchar(255), DatabaseCreationDate varchar(255),
        BackupSize varchar(255), FirstLSN Float, LastLSN Float, CheckpointLSN Float,
        DatabaseBackupLSN Float, BackupStartDate varchar(255), BackupFinishDate varchar(255),
        SortOrder varchar(255), [CodePage] varchar(255), UnicodeLocaleId varchar(255),
        UnicodeComparisonStyle varchar(255), CompatibilityLevel varchar(255), SoftwareVendorId varchar(255),
        SoftwareVersionMajor varchar(255), SoftwareVersionMinor varchar(255), SoftwareVersionBuild varchar(255),
        MachineName varchar(255), Flags varchar(255), BindingID varchar(255), RecoveryForkID varchar(255),
        Collation varchar(255), FamilyGUID varchar(255), HasBulkLoggedData varchar(255), IsSnapshot varchar(255),
        IsReadOnly varchar(255), IsSingleUser varchar(255), HasBackupChecksums varchar(255), IsDamaged varchar(255),
        BeginsLogChain varchar(255), HasIncompleteMetaData varchar(255), IsForceOffline varchar(255),
        IsCopyOnly varchar(255), FirstRecoveryForkID varchar(255), ForkPointLSN varchar(255),
        RecoveryModel varchar(255), DifferentialBaseLSN varchar(255), DifferentialBaseGUID varchar(255),
        BackupTypeDescription varchar(255), BackupSetGUID varchar(255), CompressedBackupSize varchar(255)
    )

    select @cmd = 'dir /b /on "' + @restoreFromDir + '"'
    insert #dirList exec master..xp_cmdshell @cmd
    --select * from #dirList where filename like '%.trn%' --order by filename

    declare BakFile_csr cursor for select * from #dirList where filename like '%.trn%' --order by filename
    --select * from #dirList where filename like '%.trn%'
    open BakFile_csr
    fetch BakFile_csr into @filename
    while @@fetch_status = 0
    begin
        select @cmd = "RESTORE HEADERONLY FROM DISK = '" + @restoreFromDir + "\" + @filename + "'"
        insert #filelist exec ( @cmd )
        if @DisplayasWeGo = 1 exec ( @cmd )
        PRINT ''
        PRINT 'Getting information from ' + @filename
        fetch BakFile_csr into @filename
    end -- BakFile_csr loop
    close BakFile_csr
    deallocate BakFile_csr

    if @SearchLSN = 0
        select BackupName, BackupDescription, BackupType, ExpirationDate, Compressed, Position, DeviceType,
               UserName, ServerName, DatabaseName, DatabaseVersion, DatabaseCreationDate, BackupSize,
               convert(NUMERIC(38), FirstLSN) as 'FirstLSN', convert(NUMERIC(38), lastlsn) as 'lastlsn',
               convert(NUMERIC(38), CheckpointLSN) as 'CheckpointLSN',
               convert(NUMERIC(38), DatabaseBackupLSN) as 'DatabaseBackupLSN',
               BackupStartDate, BackupFinishDate, SortOrder, [CodePage], UnicodeLocaleId,
               UnicodeComparisonStyle, CompatibilityLevel, SoftwareVendorId, SoftwareVersionMajor,
               SoftwareVersionMinor, SoftwareVersionBuild, MachineName, Flags, BindingID, RecoveryForkID,
               Collation, FamilyGUID, HasBulkLoggedData, IsSnapshot, IsReadOnly, IsSingleUser,
               HasBackupChecksums, IsDamaged, BeginsLogChain, HasIncompleteMetaData, IsForceOffline,
               IsCopyOnly, FirstRecoveryForkID, ForkPointLSN, RecoveryModel, DifferentialBaseLSN,
               DifferentialBaseGUID, BackupTypeDescription, BackupSetGUID, CompressedBackupSize
        from #filelist
    Else
        select 'The LSN searched for was found in the following transaction log backup.' as "Search Results",
               BackupName, BackupDescription, BackupType, ExpirationDate, Compressed, Position, DeviceType,
               UserName, ServerName, DatabaseName, DatabaseVersion, DatabaseCreationDate, BackupSize,
               convert(NUMERIC(38), FirstLSN) as 'FirstLSN', convert(NUMERIC(38), lastlsn) as 'lastlsn',
               convert(NUMERIC(38), CheckpointLSN) as 'CheckpointLSN',
               convert(NUMERIC(38), DatabaseBackupLSN) as 'DatabaseBackupLSN',
               BackupStartDate, BackupFinishDate, SortOrder, [CodePage], UnicodeLocaleId,
               UnicodeComparisonStyle, CompatibilityLevel, SoftwareVendorId, SoftwareVersionMajor,
               SoftwareVersionMinor, SoftwareVersionBuild, MachineName, Flags, BindingID, RecoveryForkID,
               Collation, FamilyGUID, HasBulkLoggedData, IsSnapshot, IsReadOnly, IsSingleUser,
               HasBackupChecksums, IsDamaged, BeginsLogChain, HasIncompleteMetaData, IsForceOffline,
               IsCopyOnly, FirstRecoveryForkID, ForkPointLSN, RecoveryModel, DifferentialBaseLSN,
               DifferentialBaseGUID, BackupTypeDescription, BackupSetGUID, CompressedBackupSize
        from #filelist
        where @SearchLSN BETWEEN FirstLSN AND LastLSN

    drop table #dirList
    drop table #filelist
    return
    GO
    SET QUOTED_IDENTIFIER OFF
    GO
    SET ANSI_NULLS ON
    GO
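
The likely culprit is the FLOAT typing of the LSN columns: a float carries only about 15-16 significant decimal digits, while these LSNs have 20, so the low-order digits are lost. A searched LSN can then compare equal to the boundary where one file's LastLSN meets the next file's FirstLSN, and since BETWEEN is inclusive, both rows match. A sketch of the effect, and of the usual fix, which is to store LSNs as NUMERIC(25,0):

    -- round-tripping a 20-digit LSN through float loses the trailing digits
    SELECT CAST(CAST(17017000006794800011 AS float) AS numeric(25,0)) AS via_float,
           CAST(17017000006794800011 AS numeric(25,0)) AS exact;
    -- doubles near 1.7e18 are spaced about 256 apart, so via_float comes back rounded

    -- fix: keep full precision end to end
    -- create table #filelist ( ... FirstLSN numeric(25,0), LastLSN numeric(25,0), ... )
    -- declare @SearchLSN numeric(25,0) = 17017000006794800011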

Help With Loading XML File into Varbinary Field

Posted: 30 May 2013 08:48 AM PDT

Hello, I have a table with a varbinary(max) column that typically contains an XML file or a zipped XML file. I have a requirement to replace that field's value with a new XML file. Can someone help me with the script? I used the OPENROWSET function to insert the XML file as a new record, but couldn't use that function for an UPDATE.
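
OPENROWSET(BULK ..., SINGLE_BLOB) works in an UPDATE the same way it does in an INSERT, as a scalar subquery. A sketch with hypothetical table, column and path names:

    UPDATE dbo.MyDocs
    SET DocBody = (SELECT BulkColumn
                   FROM OPENROWSET(BULK N'C:\files\new.xml', SINGLE_BLOB) AS b)
    WHERE DocID = 42;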

VIEWS

Posted: 30 May 2013 04:57 AM PDT

I would like to display a number of views (50) in one report, and in that same report display which views have not been accessed, along with their fields.

Help with next date in SQL query

Posted: 17 May 2013 10:04 PM PDT

What I need is to find out, for a customer call I have selected, when that customer next called in - the next date and time of any purchase; it doesn't need to be the same. It's like using MIN and MAX, but that doesn't work on this query because it selects the same date and time for both.

    SELECT ID, Name, CallDateTime, Num, RC
    FROM History
    WHERE (CallDateTime > CONVERT(DATETIME, '2013-05-01 00:00:00', 102))
      AND (Outcome = 'Called')

As you can see in the query above, all the data is in one overall table called History, which records all the purchases. So I want to know: if a customer called in after 01/05/2013 with an outcome of 'Called', what was his next purchase date? Some customers might not have come back, so that can stay blank. The query currently returns:

    ID       Name   CallDateTime         Num  RC
    3936803  Name1  01/05/2013 11:16:27   84  Called
    5211387  Name2  01/05/2013 12:14:21  604  Called
    5185689  Name3  01/05/2013 12:15:28  298  Called
    4811923  Name4  01/05/2013 12:29:36  170  Called

but I also want it to show the below - two more columns with the next call date and time after the first, whatever the RC code of that next visit, and how many days it took from the first date to the next:

    ID       Name   CallDateTime       Num  RC      Next CallDateTime  Total Number Of Days
    3936803  Name1  01/05/2013 11:16    84  Called  04/05/2013 11:16   3
    5211387  Name2  01/05/2013 12:14   604  Called  04/05/2013 12:14   3
    5185689  Name3  01/05/2013 12:15   298  Called  04/05/2013 12:15   3
    4811923  Name4  01/05/2013 12:29   170  Called  04/05/2013 12:29   3

I'm not sure how to add those two columns to the query above.
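
A sketch using OUTER APPLY to fetch, for each 'Called' row, the next History row for the same ID regardless of its RC code, and DATEDIFF for the day count (column names from the post):

    SELECT h.ID, h.Name, h.CallDateTime, h.Num, h.RC,
           nxt.CallDateTime AS NextCallDateTime,
           DATEDIFF(day, h.CallDateTime, nxt.CallDateTime) AS TotalNumberOfDays
    FROM History AS h
    OUTER APPLY (SELECT TOP (1) n.CallDateTime
                 FROM History AS n
                 WHERE n.ID = h.ID
                   AND n.CallDateTime > h.CallDateTime
                 ORDER BY n.CallDateTime) AS nxt
    WHERE h.CallDateTime > CONVERT(DATETIME, '2013-05-01 00:00:00', 102)
      AND h.Outcome = 'Called';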

Servers Consolidation and Linked Servers

Posted: 28 May 2013 07:55 AM PDT

Hi, we are planning to consolidate 5 servers into a 2-node active/active cluster. Some of the servers have linked servers between them. After the consolidation, I guess linked servers between instances inside the same server/cluster won't be necessary anymore, but I can't figure out what I could substitute for them so that the instances keep communicating. Any ideas? Thanks, Dok

Service Master and Database keys

Posted: 30 May 2013 04:16 AM PDT

I've inherited some SQL Servers (2008, 2008 R2). How can I tell whether there is a service master key or any database master keys?
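
The key metadata lives in sys.symmetric_keys: the service master key shows up in master, and each database that has a database master key has a row named ##MS_DatabaseMasterKey##. A sketch:

    -- service master key (instance level)
    SELECT name, create_date
    FROM master.sys.symmetric_keys
    WHERE name = '##MS_ServiceMasterKey##';

    -- database master key (run in each database of interest)
    SELECT name, create_date
    FROM sys.symmetric_keys
    WHERE name = '##MS_DatabaseMasterKey##';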

Export table with Geography data type

Posted: 30 May 2013 03:17 AM PDT

Hi all, as the title suggests, I need to export the data from one table to another database in a completely different domain, but I am having problems because the table has a geography data type. The export wizard complains about data type conversions. Does anyone know of a better way of doing this? Thanks
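
One workable path is to move the column as well-known text and rebuild the geography value on the target, which sidesteps the wizard's type-mapping complaints. A sketch with hypothetical table/column names:

    -- source: export the shape as WKT plus its SRID
    SELECT Id, GeoCol.STAsText() AS Wkt, GeoCol.STSrid AS Srid
    FROM dbo.Locations;

    -- target: rebuild the geography value from the staged text
    INSERT dbo.Locations (Id, GeoCol)
    SELECT Id, geography::STGeomFromText(Wkt, Srid)
    FROM dbo.Locations_Staging;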

No formatting when importing into Excel using connection

Posted: 30 May 2013 07:13 AM PDT

Hey, I've searched around for this and found a topic, but I think they were trying to do something a bit different: http://www.sqlservercentral.com/Forums/Topic976362-391-1.aspx?Highlight=no+format+excel+connection They were trying to have the formatting done on the fly, while I'd like to have my formatting preset in Excel and have the connection not change column lengths, etc., when it imports, since I want to put some Excel-calculation-driven tables below my connection table that have field names of different lengths. Does anyone know of a simple solution for this? I thought I'd seen it done before in a file I inherited from someone at a previous employer, but haven't had any luck finding a solution recently. Thanks in advance.

SQL Consulting Help

Posted: 23 May 2013 11:41 PM PDT

My company is in need of a SQL Server consultant. We need someone to come in and evaluate our SQL Server configuration, look at our databases, our SSRS configuration, our indexes (and lack of them), etc. When all is said and done, we would like them to leave us with a plan for monitoring performance and an action plan for maintaining it. I am asking for recommendations, as I do not know where to look for this type of work. I'm sure there are thousands of companies I could Google, but I'm hoping someone here can help me cut through the chaff. We'd want to call around and talk to references who can vouch for the company or individual as well. Does anyone have any recommendations? Thank you. Joel

Backup problem - log can't back up in time before next log backup

Posted: 30 May 2013 04:54 AM PDT

Hello, our SAN admin takes care of the SQL backups, ran into a problem, and asked me for advice. He is doing a log backup every 3 hours. 50 GB of changes are recorded in the log within those 3 hours, and the backup now takes longer than 3 hours, causing the next log backup to fail. With log backups failing, the log keeps growing and the drive fills up. I'm not sure what type of activity is being done or whether it can be changed. What are the options to avoid this? Does the RPO need to be changed? Thanks, Dave

Question on SARGability

Posted: 30 May 2013 05:55 AM PDT

I have a reporting project where users can filter on multiple field parameters. A filter can be a specific value, or 'ANY' to skip filtering on that particular column. In a nutshell, my T-SQL is simplified below; in reality there are multiple joins involved and the filter fields aren't necessarily in the same tables. My question is whether the following T-SQL is SARGable, or whether there's a preferred best practice that might work better in this case. Some of these queries live in fairly long stored procedures, so dynamic SQL isn't necessarily a good option. Thanks in advance for your thoughts, Andre Ranieri

    SELECT somefielda, somefieldb, somefieldc, somefieldd
    FROM SomeTable
    WHERE somefielda = (CASE WHEN @filtera = 'ANY' THEN somefielda ELSE @filtera END)
      AND somefieldb = (CASE WHEN @filterb = 'ANY' THEN somefieldb ELSE @filterb END)
      AND somefieldc = (CASE WHEN @filterc = 'ANY' THEN somefieldc ELSE @filterc END)
      AND somefieldd = (CASE WHEN @filterd = 'ANY' THEN somefieldd ELSE @filterd END)
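
The CASE pattern compares each column to an expression built from the column itself, so it generally cannot use a seek. One common alternative is the OR form below with OPTION (RECOMPILE), which lets the optimizer discard the 'ANY' branches for each execution (at the cost of a compile per run); dynamic SQL is the other standard answer. A sketch:

    SELECT somefielda, somefieldb, somefieldc, somefieldd
    FROM SomeTable
    WHERE (@filtera = 'ANY' OR somefielda = @filtera)
      AND (@filterb = 'ANY' OR somefieldb = @filterb)
      AND (@filterc = 'ANY' OR somefieldc = @filterc)
      AND (@filterd = 'ANY' OR somefieldd = @filterd)
    OPTION (RECOMPILE);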

find dates where over x # of rows were created?

Posted: 30 May 2013 06:18 AM PDT

Hello - I need to query a table to find out dates on which more than a certain number of records were created. The CreatedOn/ModifiedOn columns have a datetime data type. This is just an ad hoc query so performance is not important for this query. Thanks!
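
A sketch, with a hypothetical table name and a threshold of 1000: group on the date part of CreatedOn and filter with HAVING.

    SELECT CAST(CreatedOn AS date) AS CreatedDate,
           COUNT(*) AS RowsCreated
    FROM dbo.MyTable
    GROUP BY CAST(CreatedOn AS date)
    HAVING COUNT(*) > 1000
    ORDER BY CreatedDate;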

NOLOCK hint.

Posted: 28 May 2013 03:08 AM PDT

We have a high-activity transactional database and have put NOLOCK hints on all SELECT statements. For some time now we have been experiencing timeouts in spite of the NOLOCK hints. I thought that with NOLOCK we should not get timeouts? The same queries run fine from SSMS and we don't see an optimization issue; the problem occurs when WCF makes hundreds of hits per minute. (It is OK for us to have dirty reads because of the NOLOCK hints, but we can't have timeouts.)

using dbcc show statistics

Posted: 30 May 2013 12:51 AM PDT

Is it possible to identify candidate keys using DBCC SHOW_STATISTICS, and if so, how would I do it? I am married to a download process that moves everything all the time, because the business stewards maintain that there is no way to tell one record from another, so they must extract the entire system every night. This flies in the face of reality to me, because if there is no way to tell what is unique, how can their application work correctly? But the fact is that their application is a 35-year-old file-based system with no database; application logic does everything, i.e., there is no database back end minding the store. Can this command help me identify which combinations of columns are candidates for uniqueness, so that we do not have to download and reload the same zillion records every night? Thanks.
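
DBCC SHOW_STATISTICS can hint at this through the density vector: "All density" is 1/(number of distinct values), so a density equal to 1/rowcount suggests the leading columns form a candidate key. A sketch (table, statistic and column names hypothetical), plus the brute-force check that needs no statistics:

    DBCC SHOW_STATISTICS ('dbo.BigTable', 'IX_BigTable_Cols') WITH DENSITY_VECTOR;

    -- direct check: does a column combination have as many distinct values as rows?
    SELECT (SELECT COUNT(*) FROM dbo.BigTable) AS total_rows,
           (SELECT COUNT(*) FROM (SELECT DISTINCT ColA, ColB FROM dbo.BigTable) AS d) AS distinct_combos;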

Scalar Valued functions

Posted: 30 May 2013 02:59 AM PDT

I have a stored proc that calls a scalar-valued function. When I run the function as SELECT dbo.f_ScalarFunction(@Param1, @Param2, @Param3) and check Include Actual Execution Plan, I see that it does a CONSTANT SCAN. Yet the underlying queries inside the function are well optimized; they do index seeks when run in isolation. Does a scalar-valued function not expose the query plans of its underlying SQL queries? Do they always show a CONSTANT SCAN?

Data Compression - CPU Impact

Posted: 30 May 2013 02:46 AM PDT

I have a SQL 2008 R2 Enterprise server that handles a strictly reporting based workload. The database has grown very quickly, and I am looking for ways to extend the lifespan of my current storage. I have identified some tables that would be good candidates for row and/or page compression, however I am curious about the actual CPU impact of doing so. My current server is somewhat CPU bound in that a good number of reports are able to fully utilize several CPU cores on the server, however the server clearly has more cores than are required for its workload. This brings me to my question about the CPU utilization impact of page/row compression.Are the CPU resources required to compress/decompress data as it is read from disk handled in separate CPU thread(s) from the threads that are performing the actual query workload? By that I mean since I clearly have additional CPU cores available to do work, could I expect that these "unused" cores would pick up the slack of decompressing the data so that the CPU resources needed to perform the query logic would still run at peak efficiency?Thanks.

BCP Failure

Posted: 30 May 2013 12:33 AM PDT

I am using bcp to create a text file. It was working fine but stopped working. Here is the bcp command:

    exec master..xp_cmdshell 'bcp ''select clientmatter from mytemp order by acctid'' queryout e:\sql_backup\test.txt -c, -T'

The text file is not being created. When I manually place the file in the target directory, the email will not work with @file_attachments='e:\sql_backup\test.txt'. This is the error message:

    Executed as user: NT AUTHORITY\NETWORK SERVICE. Attachment file e:\sql_backup\test.txt is invalid.
    [SQLSTATE 42000] (Error 22051). The step failed.

When I remove @file_attachments='e:\sql_backup\test.txt', the email works. Thanks in advance. Mike

Policy Management, Log Backups

Posted: 30 May 2013 02:06 AM PDT

Hello, I have created a policy that flags a violation when a database does not have a log backup within a 15-minute window. It works; however, it also alerts me about databases that are not in the maintenance plans and therefore should not be backed up. Does anyone know how to exclude these? I have a function that checks for databases that are accessible, in full recovery mode, and not system databases. I use this condition with the policy, but it does not let me pick specific databases, and I can't find a facet that checks whether a database is in a maintenance plan. And if you are still with me by the end of this: the email we get is not very descriptive. Is there a way to mention which databases triggered the violation in the email? Thanks for any help.

Linked servers migration

Posted: 30 May 2013 01:58 AM PDT

I have to migrate linked servers from SQL Server 2005 to SQL Server 2008 R2. Could somebody please tell me how to do this? Should I script them out?

How to build/configure Reporting server in sql server 2008

Posted: 29 May 2013 08:36 PM PDT

Dear all,
1. We have SQL Server 2008 on a 2-node cluster (active/passive).
2. Due to a performance problem we are going to separate the reports from the database.
3. I want to create a reporting server; as I am new to this, please let me know the step-by-step process to create one.
Thanks in advance. Regards,

The log for database is not available; Database was shutdown due to error 9001 in routine 'XdesRMFull::Commit'.

Posted: 04 May 2011 07:39 AM PDT

Testing and production databases have lost their connection to the transaction log two days in a row now. Today my prod database was marked suspect after the issue - SCARY. The other databases did not lose their connection, possibly because there was no activity at that moment. No errors in the SQL log, only Windows. Server resources were not necessarily being hammered. I will be scouring the web, but wanted to reach out to all of you as well. See info and errors below. Plenty of available drive space on the log, db and tempdb partitions. 144 GB RAM. SQL Server 2008 SP1, Enterprise (64-bit). OS: Windows Server 2008 R2 Enterprise.

Windows application logs:

    error 1 - LogWriter: Operating system error 21 (The device is not ready.) encountered.
    error 2 - The log for database (testing) is not available. Check the event log for related error messages.
              Resolve any errors and restart the database.
    info 3  - Database was shutdown due to error 9001 in routine 'XdesRMFull::Commit'. Restart for
              non-snapshot databases will be attempted after all connections to the database are aborted.

Two seconds later the prod database goes down:

    error 4 - The log for database is not available. Check the event log for related error messages.
              Resolve any errors and restart the database.
    error 5 - During undoing of a logged operation in database, an error occurred at log record ID
              (86400:39070:17). Typically, the specific failure is logged previously as an error in the
              Windows Event Log service. Restore the database or file from a backup, or repair the database.
    error 6 - fcb::close-flush: Operating system error (null) encountered.
    error 7 - An error occurred during recovery, preventing the database (PRODUCTION) from restarting.
              Diagnose the recovery errors and fix them, or restore from a known good backup. If errors
              are not corrected or expected, contact Technical Support.
    info 8  - CHECKDB for database finished without errors on 2011-03-14 12:12:41.503 (local time).
              This is an informational message only; no user action is required.

Delete takes too much time

Posted: 29 May 2013 10:08 PM PDT

Hi, I am trying to delete some old data to speed up a SELECT statement, but the delete takes too much time:

    delete from Table_name where CAST(ReportTime as date) < CAST((GETDATE()-30) as date)

The table has about 100,000 records to delete, and it takes more than 15 minutes. Thanks in advance.
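
Two things usually help here: make the predicate sargable by comparing the column directly to a precomputed cutoff (CASTing ReportTime on every row defeats any index on it), and delete in batches so each transaction stays small. A sketch with the names from the post (batch size illustrative):

    DECLARE @cutoff date = CAST(GETDATE() - 30 AS date);

    WHILE 1 = 1
    BEGIN
        DELETE TOP (10000)
        FROM Table_name
        WHERE ReportTime < @cutoff;   -- an index on ReportTime can now be used

        IF @@ROWCOUNT = 0 BREAK;
    END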

Thursday, May 30, 2013

[how to] Oracle datafile extension .dat or .dbf

Oracle datafile extension .dat or .dbf

Posted: 30 May 2013 09:08 PM PDT

I have seen these 2 extensions used for datafiles, .dat and .dbf while creating and/or altering a tablespace. I'm not sure what the difference between the 2 extensions is, or if .dat is incorrect.

Here are 2 examples from Oracle Database SQL Reference 10g Release 2 (10.2)

.dat

CREATE TABLESPACE tbs_01
    DATAFILE 'tbs_f2.dat' SIZE 40M
    ONLINE;

.dbf

CREATE TABLESPACE tbs_03
    DATAFILE 'tbs_f03.dbf' SIZE 20M
    LOGGING;

How to setup SQL active/active cluster to achieve Blue / Green instance switching?

Posted: 30 May 2013 08:00 PM PDT

I am wondering if anyone has ever used a multi-instance cluster (nee 'Active/Active') to achieve blue/green (or A/B) deployment scenarios, and what the best way of configuring it is (using SQL 2012 / Windows 2008 R2)?

To be specific, the scenario I want to achieve is to be able to switch between which cluster instance is being connected to by clients without either the clients or the SQL instances knowing (I stress I'm not talking about node failover here). I'm envisaging that the best way to achieve this is something like:

  • Setup 2 node cluster, each of which has InstanceA and InstanceB instances
  • Configure both InstanceA and InstanceB to listen as if they were the default instance on their cluster address
  • Use DNS to switch which virtual address clients actually connect to.

This should hopefully enable me to do the following:

  • Deploy database to instance A, and have clients connect to it via DNS alias as if default instance
  • Deploy new version of database to instance B
  • Vet new version of database (connecting explicitly to cluster\InstanceB)
  • Redirect DNS alias to point to instance B's cluster name
  • Clients now connect to InstanceB without realising anything's changed
  • Both instances can still failover to the other node in a true outage

Joining the dots, it seems like this should be possible:

... but I've never seen a full example. Has anyone done it? Will what's proposed above work? What have I missed?

Linked Server Login Timeout but SqlCmd works

Posted: 30 May 2013 08:48 PM PDT

I've got a SQL 2005 SP4 server that connects to a 2008 SP3 instance via linked servers using the SQL Server server type. Every once in a while, one of those linked servers will start throwing login timeouts. To rule out firewalls, I can RDP to the server and run sqlcmd and get in just fine, even making sure to use the same login. I'm thinking that SQL has somehow cached something that prevents it finding the right address. The remote servername is defined in that machine's host file. So far, only a reboot fixes the issue.

*Edit: linked server is setup using remote sql login.

Any ideas?

Sync two Oracle production server databases

Posted: 30 May 2013 05:54 PM PDT

I have an Oracle database that runs a 6-hour batch job every day. This process slows down performance during that 6-hour window.

Is there any method by which I could build another server that runs the batch job and then, once it is done, syncs the data to the production server? (The time taken must be shorter than 6 hours.)

Please advise.

Thanks, Shawn

All four data nodes in MySQL Cluster in same node group

Posted: 30 May 2013 04:47 PM PDT

I am testing MySQL Cluster 7.2. I have two servers, mysql1 and mysql2. I want the management server and two data nodes to run on mysql1 and two other data nodes to run on mysql2. My config.ini file looks like this:

    [ndb_mgmd]
    hostname=mysql1
    datadir=/var/mysql-cluster/ndb_data
    NodeId=1

    [ndbd default]
    noofreplicas=2
    datadir=/var/mysql-cluster/ndb_data
    DataMemory=8M

    [ndbd]
    hostname=mysql1
    NodeId=3

    [ndbd]
    hostname=mysql2
    NodeId=4

    [ndbd]
    hostname=mysql1
    NodeId=13

    [ndbd]
    hostname=mysql2
    NodeId=14

    [mysqld]
    NodeId=50

With this configuration I would expect, as per the MySQL Cluster documentation, that the two data nodes 3 and 4 would be in nodegroup 0, while the two data nodes 13 and 14 would be in nodegroup 1.

However, when I start everything up and show the nodes, I see this:

    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     4 node(s)
    id=3    @192.168.0.42  (mysql-5.5.30 ndb-7.2.12, starting, Nodegroup: 0, Master)
    id=4    @192.168.0.43  (mysql-5.5.30 ndb-7.2.12, starting, Nodegroup: 0)
    id=13   @192.168.0.42  (mysql-5.5.30 ndb-7.2.12, starting, Nodegroup: 0)
    id=14   @192.168.0.43  (mysql-5.5.30 ndb-7.2.12, starting, Nodegroup: 0)

    [ndb_mgmd(MGM)] 1 node(s)
    id=1    @192.168.0.42  (mysql-5.5.30 ndb-7.2.12)

    [mysqld(API)]   1 node(s)
    id=50 (not connected, accepting connect from any host)

Everything seems to be in nodegroup 0! What do I have to do to get 3 and 4 in one group and 13 and 14 in another?

PostgreSQL 9.2.4 (Windows 7) - Service won't start, “could not load pg_hba.conf”

Posted: 30 May 2013 03:56 PM PDT

I am trying to get Postgres 9.2.4 to run as a service on Windows 7. After installing Postgres, the service was running fine. However, after setting Postgres up as a server for another program, the service stopped running. When I try to start the service now, I get a message saying:

"The postgresql-x64-9.2 - PostgreSQL Server 9.2 service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs."

When I try running the program that should use the database server, I get this error:

"A problem was encountered while attempting to log into or create the production database. Details: Could not connect to the server; Could not connect to remote socket. The application must now close"

I have also encountered this error once while opening the same program:

"A problem was encountered while attempting to log into or create the production database. Details: FATAL: could not load pg_hba.conf The application must now close."

I have tried running the service logged on as a local system account as well as my own account (in the Postgres service properties) to no avail. I also tried restarting my computer. After a lot of troubleshooting online, I learned that a good thing to check is the pg_log file. Here are the contents of the latest pg_log entry:

    2013-05-29 14:59:45 MDT LOG:  database system was interrupted; last known up at 2013-05-29 14:58:01 MDT
    2013-05-29 14:59:45 MDT LOG:  database system was not properly shut down; automatic recovery in progress
    2013-05-29 14:59:45 MDT LOG:  record with zero length at 0/175BB98
    2013-05-29 14:59:45 MDT LOG:  redo is not required
    2013-05-29 14:59:45 MDT LOG:  database system is ready to accept connections
    2013-05-29 14:59:45 MDT LOG:  autovacuum launcher started
    2013-05-29 15:07:00 MDT LOG:  local connections are not supported by this build
    2013-05-29 15:07:00 MDT CONTEXT:  line 1 of configuration file "C:/PostgreSQL/data/pg_hba.conf"
    2013-05-29 15:07:00 MDT FATAL:  could not load pg_hba.conf
    2013-05-29 15:07:00 MDT LOG:  local connections are not supported by this build
    2013-05-29 15:07:00 MDT CONTEXT:  line 1 of configuration file "C:/PostgreSQL/data/pg_hba.conf"
    2013-05-29 15:07:00 MDT FATAL:  could not load pg_hba.conf
    2013-05-29 15:09:03 MDT LOG:  received fast shutdown request
    2013-05-29 15:09:03 MDT LOG:  aborting any active transactions
    2013-05-29 15:09:03 MDT LOG:  autovacuum launcher shutting down
    2013-05-29 15:09:03 MDT LOG:  shutting down
    2013-05-29 15:09:03 MDT LOG:  database system is shut down

It seems to be having issues with the pg_hba.conf file, which looks like this:

    local all all trust
    host all all 127.0.0.1 255.255.255.255 trust
    host all all 0.0.0.0 0.0.0.0 trust

As per many suggestions online, I tried editing the top line to a number of different alternatives (host all all trust / host all 127.0.0.1/32 trust / host all 192.168.0.100/24 trust , etc.). This made sense to me, as the log file was saying that local connections are unsupported by postgres and was also pointing to that line. However, none of my changes had any effect. I tried restarting my computer after every change but nothing made any difference.

When I searched for examples of what a pg_hba.conf file normally looks like, the examples looked slightly different from my file. I noticed that in the PostgreSQL program file, in addition to pg_hba.conf, there was also a "20130529-150444-old-pg_hba.conf" file which looked a lot more like the examples I was finding online. This file has several lines of comments before these last few lines :

    # TYPE  DATABASE        USER            ADDRESS                 METHOD
    # IPv4 local connections:
    host    all             all             127.0.0.1/32            md5
    # IPv6 local connections:
    host    all             all             ::1/128                 md5
    # Allow replication connections from localhost, by a user with the
    # replication privilege.
    #host    replication     postgres        127.0.0.1/32            md5
    #host    replication     postgres        ::1/128                 md5

I was hoping that this was the original pg_hba.conf file and that if I replaced the new file with the contents of the old one, postgres would start working again. No such luck. I have been hoping for more error files to be logged in pg_log to see if the previously stated error had disappeared or changed to something else, but no more files have been logged.

I have been troubleshooting online for a few days now and nothing I've found has worked. Sorry for having such a long question, but I wanted to be thorough and include all relevant information. I would appreciate it if anyone could shed some light on this problem or offer suggestions.

Different dates Oracle 11g with TOAD

Posted: 30 May 2013 03:25 PM PDT

I have the following queries:

    SELECT to_date(to_char(to_date('01-FEB-1949'))) FROM DUAL;
    /* this returns 2/1/2049 */

    SELECT to_date(to_char(to_date('01-FEB-1949'),'dd-MON-yyyy')) FROM DUAL;
    /* this returns 2/1/1949 */

Why does the first one return the year 2049 instead of 1949?

By Googling I have found that I can "force" the client date format to be the one desired by changing this key in the registry:

    KEY_OraClient11g_home1
      NLS_DATE_FORMAT : YYYY/MM/DD

Thanks in advance!
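
What is likely happening, assuming the common default NLS_DATE_FORMAT of DD-MON-RR: the inner to_char collapses the year to two digits, and the outer to_date's RR rule then maps 49 to 2049 (RR reads 00-49 as the current century). A sketch making the intermediate step visible; output shown assumes that default format:

    -- under NLS_DATE_FORMAT = 'DD-MON-RR'
    SELECT to_char(to_date('01-FEB-1949')) FROM DUAL;   -- '01-FEB-49': the century is gone
    SELECT to_date('01-FEB-49') FROM DUAL;              -- RR rule: 49 -> 2049

    -- supplying an explicit four-digit mask at every step keeps the century
    SELECT to_date(to_char(to_date('01-FEB-1949','DD-MON-YYYY'),'DD-MON-YYYY'),'DD-MON-YYYY') FROM DUAL;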

How should I arrange a database replication for my site?

Posted: 30 May 2013 05:46 PM PDT

Here is my problem. I have a busy Drupal site struggling under high load. After applying all caches, I see that the database is the bottleneck. I have two servers to handle the site, A and B, on the same rack/subnet. Server A is the frontend web server and sends all database queries to server B; currently there is no database set up on A. The database on B is MariaDB 10. CPU-wise, server A is much less powerful than B, but has the same amount of RAM. The load on server A is very low (< 0.5); the load on server B is not (> 5). The reads/writes ratio is currently 92% / 8%.

So my questions are:

- Is there any benefit in defining a master/slave database on these two servers?

- If master/slave is the way to go, how do you arrange the servers? (Which server should be the master? Which one should be the frontend?)

Why do these queries show up in the slow-query log? Interpreting EXPLAIN

Posted: 30 May 2013 08:26 PM PDT

I'm having a hard time interpreting the EXPLAIN results of these queries. They both end up in the slow-query log, but the execution time is ~0.0050 ms and the final result set is always under 100 rows. What's wrong here? Is my second "improved" version any better? Any suggestions?

  mysql> # Original    mysql> EXPLAIN SELECT SQL_NO_CACHE relSrc, relDst, 1 as relType, relTypeDesc, 0 as fracQty, '24794' as source FROM productsRelationships1        -> LEFT JOIN productsRelationshipsDesc on 1=relTypeID        -> WHERE relDst='24794' OR relSrc='24794'        -> UNION ALL        -> SELECT relSrc, relDst, 2 as relType, relTypeDesc, 0 as fracQty, '24794' as source FROM productsRelationships2        -> LEFT JOIN productsRelationshipsDesc on 2=relTypeID        -> WHERE relDst='24794' OR relSrc='24794'        -> UNION ALL        -> SELECT relSrc, relDst, 3 as relType, relTypeDesc, 0 as fracQty, '24794' as source FROM productsRelationships3        -> LEFT JOIN productsRelationshipsDesc on 3=relTypeID        -> WHERE relDst='24794' OR relSrc='24794'        -> UNION ALL        -> SELECT relSrc, relDst, 5 as relType, relTypeDesc, 0 as fracQty, '24794' as source FROM productsRelationships5        -> LEFT JOIN productsRelationshipsDesc on 5=relTypeID        -> WHERE relDst='24794' OR relSrc='24794'        -> UNION ALL        -> SELECT relSrc, relDst, 6 as relType, relTypeDesc, fracQty, '24794' as source      FROM productsRelationships6        -> LEFT JOIN productsRelationshipsDesc on 6=relTypeID        -> WHERE relDst='24794' OR relSrc='24794'        -> UNION ALL        -> SELECT relSrc, relDst, 7 as relType, relTypeDesc, 0 as fracQty, '24794' as source FROM productsRelationships7        -> LEFT JOIN productsRelationshipsDesc on 7=relTypeID        -> WHERE relDst='24794' OR relSrc='24794'        -> ORDER BY relType, relSrc, RelDst;    +----+--------------+---------------------------+-------------+---------------------+---------------+---------+-------+-------+-----------------------------------------+    | id | select_type  | table                     | type        | possible_keys       | key           | key_len | ref   | rows  | Extra                                   |    +----+--------------+---------------------------+-------------+---------------------+---------------+---------+-------+-------+-----------------------------------------+    |  1 | PRIMARY      | productsRelationships1    | index       | PRIMARY,src-1       | src-1         | 2       | NULL  |   663 | Using where; Using index                |    |  1 | PRIMARY      | productsRelationshipsDesc | ref         | relTypeID           | relTypeID     | 1       | const |     1 | Using index                             |    |  2 | UNION        | productsRelationships2    | index       | src-dst-2           | src-dst-2     | 4       | NULL  | 13126 | Using where; Using index                |    |  2 | UNION        | productsRelationshipsDesc | ref         | relTypeID           | relTypeID     | 1       | const |     1 | Using index                             |    |  3 | UNION        | productsRelationships3    | index       | PRIMARY             | PRIMARY       | 4       | NULL  | 11459 | Using where; Using index                |    |  3 | UNION        | productsRelationshipsDesc | ref         | relTypeID           | relTypeID     | 1       | const |     1 | Using index                             |    |  4 | UNION        | productsRelationships5    | index       | PRIMARY,src-5       | src-5         | 2       | NULL  |   369 | Using where; Using index                |    |  4 | UNION        | productsRelationshipsDesc | ref         | relTypeID           | relTypeID     | 1       | const |     1 | Using index                             |    |  5 | UNION        | productsRelationships6    | index_merge | PRIMARY,src-6,dst-6 
    | dst-6,PRIMARY | 2,2     | NULL  |     2 | Using union(dst-6,PRIMARY); Using where |
    |  5 | UNION        | productsRelationshipsDesc | ref         | relTypeID           | relTypeID     | 1       | const |     1 | Using index                             |
    |  6 | UNION        | productsRelationships7    | index       | PRIMARY,src-7       | src-7         | 2       | NULL  |     1 | Using where; Using index                |
    |  6 | UNION        | productsRelationshipsDesc | ref         | relTypeID           | relTypeID     | 1       | const |     1 | Using index                             |
    | NULL | UNION RESULT | <union1,2,3,4,5,6>        | ALL         | NULL                | NULL          | NULL    | NULL  |  NULL | Using filesort                          |
    +----+--------------+---------------------------+-------------+---------------------+---------------+---------+-------+-------+-----------------------------------------+
    13 rows in set (0.00 sec)

    mysql>
    mysql>
    mysql> # Improved?
    mysql> EXPLAIN SELECT SQL_NO_CACHE relSrc, relDst, relType, fracQty, source, relTypeDesc FROM (
        -> SELECT relSrc, relDst, 1 as relType, 0 as fracQty, '24794' as source FROM productsRelationships1
        -> WHERE relDst='24794' OR relSrc='24794'
        -> UNION ALL
        -> SELECT relSrc, relDst, 2 as relType, 0 as fracQty, '24794' as source FROM productsRelationships2
        -> WHERE relDst='24794' OR relSrc='24794'
        -> UNION ALL
        -> SELECT relSrc, relDst, 3 as relType, 0 as fracQty, '24794' as source FROM productsRelationships3
        -> WHERE relDst='24794' OR relSrc='24794'
        -> UNION ALL
        -> SELECT relSrc, relDst, 5 as relType, 0 as fracQty, '24794' as source FROM productsRelationships5
        -> WHERE relDst='24794' OR relSrc='24794'
        -> UNION ALL
        -> SELECT relSrc, relDst, 6 as relType,      fracQty, '24794' as source FROM productsRelationships6
        -> WHERE relDst='24794' OR relSrc='24794'
        -> UNION ALL
        -> SELECT relSrc, relDst, 7 as relType, 0 as fracQty, '24794' as source FROM productsRelationships7
        -> WHERE relDst='24794' OR relSrc='24794'
        -> ) AS rels
        -> LEFT JOIN productsRelationshipsDesc ON relType=relTypeID
        -> ORDER BY relType, relSrc, RelDst;
    +----+--------------+---------------------------+-------------+---------------------+---------------+---------+--------------+-------+------------------------------------------------------+
    | id | select_type  | table                     | type        | possible_keys       | key           | key_len | ref          | rows  | Extra                                                |
    +----+--------------+---------------------------+-------------+---------------------+---------------+---------+--------------+-------+------------------------------------------------------+
    |  1 | PRIMARY      | <derived2>                | ALL         | NULL                | NULL          | NULL    | NULL         |    38 | Using filesort                                       |
    |  1 | PRIMARY      | productsRelationshipsDesc | ref         | relTypeID           | relTypeID     | 1       | rels.relType |     1 | Using index                                          |
    |  2 | DERIVED      | productsRelationships1    | index       | PRIMARY,src-1       | src-1         | 2       | NULL         |   663 | Using where; Using index                             |
    |  3 | UNION        | productsRelationships2    | index       | src-dst-2           | src-dst-2     | 4       | NULL         | 13126 | Using where; Using index                             |
    |  4 | UNION        | productsRelationships3    | index       | PRIMARY             | PRIMARY       | 4       | NULL         | 11459 | Using where; Using index                             |
    |  5 | UNION        | productsRelationships5    | index       | PRIMARY,src-5       | src-5         | 2       | NULL         |   369 | Using where; Using index                             |
    |  6 | UNION        | productsRelationships6    | index_merge | PRIMARY,src-6,dst-6 | dst-6,PRIMARY | 2,2     | NULL         |     2 | Using union(dst-6,PRIMARY); Using where; Using index |
    |  7 | UNION        | productsRelationships7    | index       | PRIMARY,src-7       | src-7         | 2       | NULL         |     1 | Using where; Using index                             |
    | NULL | UNION RESULT | <union2,3,4,5,6,7>        | ALL         | NULL                | NULL          | NULL    | NULL         |  NULL |                                                      |
    +----+--------------+---------------------------+-------------+---------------------+---------------+---------+--------------+-------+------------------------------------------------------+
    9 rows in set (0.00 sec)

These are the table definitions:

    mysql> SHOW CREATE TABLE productsRelationships1\G
    *************************** 1. row ***************************
           Table: productsRelationships1
    Create Table: CREATE TABLE `productsRelationships1` (
      `relSrc` smallint(5) unsigned NOT NULL,
      `relDst` smallint(5) unsigned NOT NULL,
      PRIMARY KEY (`relSrc`,`relDst`),
      UNIQUE KEY `src-1` (`relSrc`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8
    1 row in set (0.00 sec)

    mysql> SHOW CREATE TABLE productsRelationships2\G
    *************************** 1. row ***************************
           Table: productsRelationships2
    Create Table: CREATE TABLE `productsRelationships2` (
      `relSrc` smallint(5) unsigned NOT NULL,
      `relDst` smallint(5) unsigned NOT NULL,
      KEY `src-dst-2` (`relSrc`,`relDst`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8
    1 row in set (0.00 sec)

    mysql> SHOW CREATE TABLE productsRelationships3\G
    *************************** 1. row ***************************
           Table: productsRelationships3
    Create Table: CREATE TABLE `productsRelationships3` (
      `relSrc` smallint(5) unsigned NOT NULL,
      `relDst` smallint(5) unsigned NOT NULL,
      PRIMARY KEY (`relSrc`,`relDst`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8
    1 row in set (0.00 sec)

    mysql> SHOW CREATE TABLE productsRelationships5\G
    *************************** 1. row ***************************
           Table: productsRelationships5
    Create Table: CREATE TABLE `productsRelationships5` (
      `relSrc` smallint(5) unsigned NOT NULL,
      `relDst` smallint(5) unsigned NOT NULL,
      PRIMARY KEY (`relSrc`,`relDst`),
      UNIQUE KEY `src-5` (`relSrc`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8
    1 row in set (0.00 sec)

    mysql> SHOW CREATE TABLE productsRelationships6\G
    *************************** 1. row ***************************
           Table: productsRelationships6
    Create Table: CREATE TABLE `productsRelationships6` (
      `relSrc` smallint(5) unsigned NOT NULL,
      `relType` tinyint(2) unsigned NOT NULL DEFAULT '6',
      `fracQty` int(2) unsigned NOT NULL,
      `relDst` smallint(5) unsigned NOT NULL,
      PRIMARY KEY (`relSrc`,`relDst`),
      UNIQUE KEY `src-6` (`relSrc`),
      UNIQUE KEY `dst-6` (`relDst`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8
    1 row in set (0.00 sec)

    mysql> SHOW CREATE TABLE productsRelationships7\G
    *************************** 1. row ***************************
           Table: productsRelationships7
    Create Table: CREATE TABLE `productsRelationships7` (
      `relSrc` smallint(5) unsigned NOT NULL,
      `relDst` smallint(5) unsigned NOT NULL,
      PRIMARY KEY (`relSrc`,`relDst`),
      UNIQUE KEY `src-7` (`relSrc`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8
    1 row in set (0.00 sec)

    mysql> SHOW CREATE TABLE productsRelationshipsDesc\G
    *************************** 1. row ***************************
           Table: productsRelationshipsDesc
    Create Table: CREATE TABLE `productsRelationshipsDesc` (
      `relTypeID` tinyint(2) unsigned NOT NULL AUTO_INCREMENT,
      `relTypeDesc` varchar(100) NOT NULL,
      UNIQUE KEY `relTypeID` (`relTypeID`,`relTypeDesc`)
    ) ENGINE=InnoDB AUTO_INCREMENT=8 DEFAULT CHARSET=utf8
    1 row in set (0.00 sec)

And this is the amount of data in every table.

    mysql> SELECT COUNT(1) FROM productsRelationships1\G
    *************************** 1. row ***************************
    COUNT(1): 663
    1 row in set (0.00 sec)

    mysql> SELECT COUNT(1) FROM productsRelationships2\G
    *************************** 1. row ***************************
    COUNT(1): 263
    1 row in set (0.00 sec)

    mysql> SELECT COUNT(1) FROM productsRelationships3\G
    *************************** 1. row ***************************
    COUNT(1): 8551
    1 row in set (0.01 sec)

    mysql> SELECT COUNT(1) FROM productsRelationships5\G
    *************************** 1. row ***************************
    COUNT(1): 369
    1 row in set (0.00 sec)

    mysql> SELECT COUNT(1) FROM productsRelationships6\G
    *************************** 1. row ***************************
    COUNT(1): 80
    1 row in set (0.00 sec)

    mysql> SELECT COUNT(1) FROM productsRelationships7\G
    *************************** 1. row ***************************
    COUNT(1): 0
    1 row in set (0.00 sec)

    mysql> SELECT COUNT(1) FROM productsRelationshipsDesc\G
    *************************** 1. row ***************************
    COUNT(1): 7
    1 row in set (0.00 sec)

What are reasonable options for an in-memory multi-core database?

Posted: 30 May 2013 01:03 PM PDT

I'm going to preface this by pointing out that I only need somewhat persistent data. The purpose of this database platform would be to support statistical analysis in R. I usually build my tables from csv files I get from clients and query those tables to build flat files to dump into R. I can either import a .csv type file or run a query from R. So, essentially I'm performing a lot of inner and outer joins on the entire data set to get the resulting output I need. To date, my databases haven't exceeded 5-10GB. I may have projects in the near future that will be larger, but I don't see anything that would exceed memory. I need maximum speed for a little while.

To admit a little guilt - I would be happy with SQLite if it supported full joins (without getting too hacky) and if it had good multi-core support. I like its simplicity - it just doesn't perform well enough. Or I'm too ignorant.

Options I have explored are:

  • PostgreSQL in a ramdisk - unsure if a ramdisk would actually be necessary but I've seen a lot of info on the topic.

  • Using MySQL MEMORY tables - I haven't looked to see whether other databases have a similar feature; I'm sure they do. (A minimal sketch follows this list.)

  • McObject's eXtremeDB - It doesn't quite seem like a good fit for me. It's designed to be an embedded DB.

  • VoltDB - I was excited about this option until I read that they don't quite have outer joins and self joins working. Their SQL seems a little too limited.
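On the MEMORY-table option above, here is a minimal sketch with hypothetical table and column names. MEMORY tables keep all rows in RAM (contents vanish on restart, which fits the "somewhat persistent" requirement), are capped by max_heap_table_size, and do not support BLOB/TEXT columns.

    -- Minimal sketch, hypothetical names: an in-RAM copy of an
    -- on-disk staging table, to be queried from R.
    CREATE TABLE obs_mem (
        obs_id INT UNSIGNED NOT NULL PRIMARY KEY,
        x      DOUBLE NOT NULL,
        y      DOUBLE NOT NULL
    ) ENGINE=MEMORY;

    INSERT INTO obs_mem (obs_id, x, y)
    SELECT obs_id, x, y
    FROM   obs_staging;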

I'm switching from my laptop (running Ubuntu), which frequently overheats, to an Amazon EC2 instance which I can scale up as much as I need - thus the need for good multi-core support. I'll likely build my tables in an on-demand instance and do the heavy querying in spot instances. My laptop has already conditioned me for periodic shut-downs, so I'm not too worried about that. I've already built an instance with R and have been having fun playing with AWS on other projects over the last few months.

I'm not beholden to any specific database platform; however, I have reached a point of information paralysis. Reasonable solutions and things to consider will be very helpful. I'm not looking for a step-by-step how-to - that's what Google and the rest of Stack Exchange are for. I've also been avoiding Amazon's RDS service for this. I'm not exactly sure why - probably so I can use spot instances.

I'm also open to the idea that I'm looking at my problem all wrong. Should I abandon SQL altogether?

Oracle pivot on a column with delimited data

Posted: 30 May 2013 12:59 PM PDT

My data is like:

    keycol,col1,col2,col3
    1,a;b;c,some data,some other data
    2,x,some data,some other data
    3,y;z,some data,some other data

Where a column is delimited in the source system with semicolons.

And I want to pivot it to:

    1,a,some data,some other data
    1,b,some data,some other data
    1,c,some data,some other data
    2,x,some data,some other data
    3,y,some data,some other data
    3,z,some data,some other data

I found a technique here, but I can't quite get it to work:

    CREATE TABLE yt
        (keycol int, col1 varchar2(5), col2 varchar2(9), col3 varchar2(15))
    ;

    INSERT ALL
        INTO yt (keycol, col1, col2, col3)
            VALUES (1, 'a;b;c', 'some data', 'some other data')
    SELECT * FROM dual
    ;

    INSERT ALL
        INTO yt (keycol, col1, col2, col3)
            VALUES (2, 'x', 'some data', 'some other data')
    SELECT * FROM dual
    ;

    INSERT ALL
        INTO yt (keycol, col1, col2, col3)
            VALUES (3, 'y;z', 'some data', 'some other data')
    SELECT * FROM dual
    ;

I thought I could include the keycol in the CONNECT BY this way to get parallel recursive chains, but I guess it doesn't work like that. I'm pretty sure I've done this with recursive CTEs in SQL Server.

    SELECT keycol
      ,trim(regexp_substr(col1, '[^;]+', 1, level)) col1
      ,col2
      ,col3
    FROM yt t
    CONNECT BY keycol = PRIOR keycol AND instr(col1, ';', 1, level - 1) > 0

http://sqlfiddle.com/#!4/3d378

FWIW, I'm on Oracle 10g.
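For reference, here is a sketch of the idiom that usually makes this work on 10g - untested here, so treat it as an assumption. The PRIOR dbms_random.value predicate is the standard trick that keeps each CONNECT BY chain inside a single row (otherwise Oracle happily connects rows with different keycol values to each other), and LEVEL is bounded by the number of delimited items rather than by INSTR, which fails at occurrence 0.

    -- Hedged sketch (10g-compatible): split col1 per row, one output
    -- row per semicolon-delimited item.
    SELECT keycol,
           TRIM(REGEXP_SUBSTR(col1, '[^;]+', 1, LEVEL)) AS col1,
           col2,
           col3
    FROM   yt
    CONNECT BY LEVEL <= LENGTH(col1) - LENGTH(REPLACE(col1, ';')) + 1
           AND keycol = PRIOR keycol
           AND PRIOR dbms_random.value IS NOT NULL
    ORDER BY keycol, col1;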

Autogrow. Primary vs additional data files

Posted: 30 May 2013 12:18 PM PDT

My databases all use autogrow, which grows the primary MDF file by a percentage. But one of the databases, from a third-party application, grows by adding additional .NDF files.

Where is this option set? When I look at the autogrow settings, there are options to grow or not, by percentage or by x MB, and for limited or unlimited growth. But I see nothing that tells it whether to grow the primary MDF or to grow by adding additional NDFs.

And, is there a way to combine these NDF files back into the primary MDF?

Thanks!

RZ
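On the second question, assuming the extra .NDF files live in the same filegroup as the MDF (the names below are hypothetical), the usual approach is to empty each secondary file into the remaining files of the filegroup and then drop it:

    -- Hedged sketch: EMPTYFILE migrates all pages out of the named file
    -- into the other files of the same filegroup; once empty, the file
    -- can be removed from the database.
    USE MyDb;
    DBCC SHRINKFILE (N'MyDb_Data2', EMPTYFILE);
    ALTER DATABASE MyDb REMOVE FILE MyDb_Data2;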

SSIS How can I write to a Specific Cell in an Excel Sheet

Posted: 30 May 2013 12:59 PM PDT

I am trying to complete what I thought would be a very simple task, but after hours of looking through various articles and attempting different methods, I still have not been able to write to a specific cell using SSIS.

All I am trying to do is write "DOB" into cell D2 of an Excel sheet.

I tried using a SQL command in the Execute SQL Task component to UPDATE the single cell but kept getting error messages. Below is the code I tried; SSIS came back with an error saying it was expecting at least one parameter...

update [Sheet1$D2:D2] SET F1='DOB'  

I also tried

INSERT INTO [Sheet1$D2:D2] VALUES ('DOB')   

but got the following error message:

This table contains cells that are outside the range of cells defined in this spreadsheet.

I tried a few different C# and VB scripts but none of them did the trick. Any ideas or suggestions?

I tried modifying the script in the article below to accomplish my task, but was unsuccessful:

http://bidn.com/blogs/KeithHyer/bidn-blog/2475/updating-a-single-excel-cell-using-ssis

I'm thinking there's got to be an easier way.
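One variant worth trying, offered as an assumption rather than a verified fix: with HDR=YES (the provider default) the first row of the range [Sheet1$D2:D2] is consumed as a column header, so F1 is unknown and gets treated as a parameter - which matches the error above. With HDR=NO in the connection's extended properties (e.g. "Excel 12.0;HDR=NO"), the range's single column is exposed as F1 and a plain UPDATE may work:

    -- Hedged sketch: requires an Excel OLE DB connection whose extended
    -- properties include HDR=NO, so D2 is data rather than a header row.
    UPDATE [Sheet1$D2:D2] SET F1 = 'DOB'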

How much data can SQL Server full text indexing handle?

Posted: 30 May 2013 06:30 PM PDT

I realize that the question is vague and it depends on hardware and our needs.

We currently have 5 million rows of data, a total of 5GB of data which we want to index using full text indexing. Our data increases quite rapidly and it's not unreasonable to assume that in a few years it will be closer to a billion rows and a TB of data.

The index is searchable by web site users, and they expect responses within a second or two.

Is it reasonable to assume that this data set will be indexable using SQL Server 2012 full text indexing? Is it common to do full text indexing of this amount of data? And is there any good reading on the subject, for example from others' experience?

PK as ROWGUIDCOL or use a separate rowguid column?

Posted: 30 May 2013 03:02 PM PDT

There's a long-winded debate going on here so I'd like to hear other opinions.

I have many tables with uniqueidentifier clustered PK. Whether this is a good idea is out of scope here (and it's not going to change anytime soon).

Now, the database has to be merge published and the DEVs are advocating the use of a separate rowguid column instead of marking the existing PK as the ROWGUIDCOL.

Basically, they say that the application should never bring into its domain something that is used by replication only (it's only "DBA stuff" for them).

From a performance standpoint, I see no reason why I should add a new column to do something I could do with an existing one. Moreover, since it's only "DBA stuff", why not let the DBA choose?

I kind of understand the DEVs' point, but I still disagree.

Thoughts?

EDIT: I just want to add that I'm in the minority in this debate and the DEVs questioning my position are people I respect and trust. This is the reason why I resorted to asking for opinions.
I might also be missing something and could have misunderstood their point.
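For reference, marking the existing PK as the ROWGUIDCOL is a one-liner; a minimal sketch with hypothetical names:

    -- Minimal sketch: flag the existing uniqueidentifier PK column as
    -- the ROWGUIDCOL instead of adding a new column for replication.
    ALTER TABLE dbo.Orders
        ALTER COLUMN OrderID ADD ROWGUIDCOL;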

How can I achieve a unique constraint with two fields?

Posted: 30 May 2013 02:32 PM PDT

I have a table with, for example, Name and IsDeleted fields. I want to add a constraint so that at most one row per Name value can have IsDeleted = false. There can be many duplicate Name values, but all of them must have IsDeleted = true.

How would I write this check constraint?
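Assuming SQL Server 2008 or later (names below are hypothetical), a filtered unique index expresses this directly - a check constraint cannot see other rows, but a filtered unique index can enforce "at most one non-deleted row per Name":

    -- Minimal sketch: uniqueness applies only to rows WHERE IsDeleted = 0,
    -- so any number of deleted duplicates may coexist.
    CREATE UNIQUE NONCLUSTERED INDEX UQ_MyTable_Name_NotDeleted
        ON dbo.MyTable (Name)
        WHERE IsDeleted = 0;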

memory used by Locks

Posted: 30 May 2013 05:08 PM PDT

I am kind of curious: on a SQL Server 2012 Enterprise Edition instance with 128 GB of RAM, the database is 370 GB and growing, and the amount of memory used by locks (the OBJECTSTORE_LOCK_MANAGER memory clerk) shows 7,466,016 KB. I can also confirm that with the perf counter: select * from sys.dm_os_performance_counters where counter_name = 'Lock Memory (KB)'

However, when I run the query

select count(*) from sys.dm_tran_locks  

it shows only 16 locks. So what is using over 7 GB of lock memory? Is there a way to find out?

Does that mean that once memory for locks has been allocated, SQL Server does not deallocate it? In the past hour I have not seen the lock count exceed 500, yet the lock memory stays the same.

EDIT: Max Server Memory is 106 GB. We do not use lock pages in memory, and I do not see any memory pressure or any errors in the error log in the past 12 hours. The Available MBytes counter shows more than 15 GB of available memory.

EDIT 2: Activity Monitor consistently shows 0 waiting tasks, so there is obviously no blocking.

Considering that a SQL Server lock takes about 100 bytes of memory, 7 GB is a lot, and I am trying to find out what is using it.

EDIT 3: I ran the server dashboard report for top transactions by lock count and it says "currently no locking transactions are running on the system". However, lock memory still shows as stated above. The DB is busiest during overnight hours.
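One way to dig a little further, sketched as a diagnostic under the assumption that the clerk type string matches: break the lock manager's reservation down per memory node.

    -- Hedged diagnostic sketch: show the lock manager's memory clerk
    -- per memory node (SQL Server 2012 exposes pages_kb on this DMV).
    SELECT [type], name, memory_node_id, pages_kb
    FROM   sys.dm_os_memory_clerks
    WHERE  [type] = N'OBJECTSTORE_LOCK_MANAGER'
    ORDER BY pages_kb DESC;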

Postgres won't shut down due to WAL archiving

Posted: 30 May 2013 07:03 PM PDT

I told Postgres to shut down using the init.d scripts (Linux) over 18 hours ago.

I can still see the processes running:

    -bash-3.2$ ps -fe | grep postg
    postgres  2299  3265  0 16:06 pts/5    00:00:00 ps -fe
    postgres  2300  3265  0 16:06 pts/5    00:00:00 grep postg
    root      3263 10185  0 May23 pts/5    00:00:00 su - postgres
    postgres  3265  3263  0 May23 pts/5    00:00:01 -bash
    root      5985 13676  0 May20 pts/3    00:00:00 su - postgres
    postgres  5987  5985  0 May20 pts/3    00:00:01 -bash
    postgres 14266     1  0 May23 ?        00:06:34 /usr/pgsql-9.1/bin/postmaster -p 5432 -D /var/lib/pgsql/9.1/data
    postgres 14268 14266  0 May23 ?        00:01:51 postgres: logger process
    postgres 14270 14266  0 May23 ?        00:01:30 postgres: writer process
    postgres 14281 14266  0 May23 ?        00:00:09 postgres: archiver process   last was 000000010000028F000000A3
    postgres 14282 14266  0 May23 ?        00:03:07 postgres: stats collector process
    postgres 14283 14266  0 May23 ?        00:56:49 postgres: wal sender process postgres 10.40.227.238(12032) streaming 28F/A4000650
    postgres 14306 14266  9 May28 ?        04:01:55 postgres: opera_man om 10.40.227.146(44745) SELECT

On the standby server (running normally) I see that:

    $ ps -fe | grep postg
    cluser   20724  7090  0 09:54 pts/0    00:00:00 psql -U postgres report
    postgres 20726 21475  0 09:54 ?        00:01:12 postgres: postgres report [local] idle
    postgres 21475     1  0 Apr24 ?        00:00:03 /usr/pgsql-9.1/bin/postmaster -p 5432 -D /var/lib/pgsql/9.1/data
    postgres 21477 21475  0 Apr24 ?        00:00:01 postgres: logger process
    postgres 21478 21475  0 Apr24 ?        05:34:10 postgres: startup process   recovering 000000010000028F000000A4
    postgres 21485 21475  0 Apr24 ?        00:07:16 postgres: writer process
    postgres 21486 21475  0 Apr24 ?        00:00:18 postgres: stats collector process
    postgres 24091 21475  0 May23 ?        00:46:49 postgres: wal receiver process   streaming 28F/A40006E0
    cluser   32136 30224  0 16:09 pts/16   00:00:00 grep postg

The log shows 'FATAL: the database system is shutting down'. What could be the reason for this, and how do I get it running again?

Algorithm for finding the longest prefix

Posted: 30 May 2013 03:28 PM PDT

I have two tables.

The first is a table with prefixes:

    code name price
    343  ek1   10
    3435 nt     4
    3432 ek2    2

The second is call records with phone numbers:

    number        time
    834353212     10
    834321242     20
    834312345     30

I need to write a script that finds the longest matching prefix from prefixes for each record and writes all this data to a third table, like this:

    number        code   ....
    834353212     3435
    834321242     3432
    834312345     343

For the number 834353212 we must trim the leading '8' and then find the longest code from the prefix table; here it is 3435.
We must always drop the first '8', and the prefix must match at the beginning of the number.

I solved this task a long time ago in a very bad way: a terrible Perl script that ran a lot of queries for each record. The script:

  1. Takes a number from the calls table and, in a loop, takes substrings from length(number) down to 1 => $prefix

  2. Runs the query: select count(*) from prefixes where code like '$prefix'

  3. If count > 0, takes the first prefix and writes it into the table

The first problem is the query count - it is call_records * length(number). The second problem is the LIKE expressions; I am afraid those are slow.

I tried to solve the second problem by:

    CREATE EXTENSION pg_trgm;
    CREATE INDEX prefix_idx ON prefix USING gist (code gist_trgm_ops);

That sped up each query but did not solve the problem in general.

I have 20k prefixes and 170k numbers now, and my old solution is too slow. It looks like I need a new solution without loops.

At most one query per call record, or something like that.
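A set-based sketch, assuming PostgreSQL (the pg_trgm attempt suggests it) and hypothetical table names calls and prefixes: join every call to every matching prefix on the number with the leading '8' dropped, then keep only the longest match per number with DISTINCT ON.

    -- Hedged sketch: one query for all records. Assumes number is stored
    -- as text. Numbers with no matching prefix are dropped by the inner
    -- join; use a LEFT JOIN to keep them instead.
    SELECT DISTINCT ON (c.number)
           c.number,
           p.code,
           c.time
    FROM   calls c
    JOIN   prefixes p
           ON substring(c.number FROM 2) LIKE p.code || '%'
    ORDER  BY c.number, length(p.code) DESC;

This is still a brute-force join over 20k x 170k rows, but it removes the per-record round trips, which is usually where the old script lost its time.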

I can't start the MySQL 5.6 server due to a "TIMESTAMP with implicit DEFAULT value is deprecated" error

Posted: 30 May 2013 03:17 PM PDT

OK, here is my story: I went to the mysql.com site and downloaded mysql-5.6.11-winx64.zip into C:\, then unzipped it. Then I went to the bin folder and started the server using this command:

C:\mysql-5.6.11-winx64\bin\mysqld --standalone --local-infile=1  

Everything was ok as the Mysql server started smoothly. I then stopped server using this command:

C:\mysql-5.6.11-winx64\bin\mysqladmin -u root shutdown  

The server shut down properly.

I started and stopped the MySQL server this way a few times without any problem.

However, yesterday I started the MySQL server, and at the end of the day I turned off my PC while the server was still running (i.e., I did not shut down MySQL with "bin\mysqladmin -u root shutdown" before turning off the PC).

Also, when my PC was turned off, Windows 7 had started downloading update packages from the internet, so the Windows 7 configuration could have changed.

But today I could not start the MySQL server using the above command; there is an error:

    [Warning] TIMESTAMP with implicit DEFAULT value is deprecated.
    Please use --explicit_defaults_for_timestamp server option (see documentation for more details).

I searched the internet, and some people said that I have to add this line to the my.cnf file:

explicit_defaults_for_timestamp = TRUE  

However, there is no my.cnf file in MySQL 5.6; there are a lot of .cnf files, but with different names:

    mysql-5.6.11-winx64\data\auto.cnf
    mysql-5.6.11-winx64\mysql-test\include\default_client.cnf
    mysql-5.6.11-winx64\mysql-test\include\default_my.cnf
    mysql-5.6.11-winx64\mysql-test\include\default_mysqld.cnf
    mysql-5.6.11-winx64\mysql-test\include\default_mysqld_autosize.cnf
    mysql-5.6.11-winx64\mysql-test\include\default_mysqld_ndbd.cnf

I tried adding explicit_defaults_for_timestamp = TRUE to all of the above .cnf files, but it did not help; I still couldn't start MySQL.

I don't want to reinstall because I created a big DB in the current MySQL server.

So how do I fix it?

Note: the first time I ran the MySQL server, Windows 7 popped up a message (I can't remember exactly) along the lines of "do you allow ... Firewall". Do you think that is causing the issue, since Windows 7 updated its configuration and may somehow have reset the firewall so the MySQL server couldn't start?
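One thing to check, offered as an assumption: the 5.6 Windows zip ships without a my.ini (the files under mysql-test\ belong to the test suite and are not read by the server), and mysqld on Windows does read BASEDIR\my.ini. Also, the quoted line is only a warning; the actual startup failure should appear further down in the .err file under the data directory. A minimal sketch of a my.ini to create in the assumed basedir:

    # Hedged sketch: save as C:\mysql-5.6.11-winx64\my.ini (path assumed).
    # The same effect can be had on the command line with
    # --explicit_defaults_for_timestamp=TRUE.
    [mysqld]
    explicit_defaults_for_timestamp = TRUE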

How to set SQL Server index pages per fragment?

Posted: 30 May 2013 02:50 PM PDT

I have SQL Server 2008, and a number of databases. I have discovered that one of my table's indexes is extremely fragmented (How I know: http://msdn.microsoft.com/en-us/library/ms189858.aspx)

The avg_fragmentation_in_percent is a whopping 97% or more and the fragment count is in the thousands. Very interestingly, the avg_fragment_size_in_pages is just slightly more than 1. I can defrag it, which helps, but I'd like to prevent this from happening in the first place.

Why is the pages per fragment so low? I have FILEGROWTH = 128MB for the DB as a whole, but this particular table is the most active - so is there a way to tell SQL Server to allocate more growth to this table or index so that the pages-per-fragment is higher?
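There is no per-table growth setting, but two knobs are relevant, sketched below with hypothetical names: rebuilding rewrites the index into large contiguous allocations, and a fill factor leaves free space on each page so later inserts cause fewer page splits (page splits are what push avg_fragment_size_in_pages back toward 1).

    -- Hedged maintenance sketch: rebuild the fragmented index and leave
    -- 20% free space per page to absorb future inserts.
    ALTER INDEX IX_Busy ON dbo.BusyTable
        REBUILD WITH (FILLFACTOR = 80, SORT_IN_TEMPDB = ON);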

pgAdmin3 can't connect properly to Postgres 9.2

Posted: 30 May 2013 02:48 PM PDT

I have installed pgAdmin3 version 1.10.2; however, when it connects to a PostgreSQL 9.2 server, it says:

Warning:

This version of pgAdmin has only been tested with PostgreSQL version 8.4 and below and may not function correctly with this server. Please upgrade pgAdmin.

Then it will throw this error:

An error has occurred:

ERROR: column "datconfig" does not exist
LINE 1: ...b.dattablespace AS spcoid, spcname, datallowconn, datconfig,...

etc.

Also, I am rather stuck with this version of pgAdmin3 because I am on Ubuntu Lucid. Is there a workaround to make pgAdmin3 work properly while keeping my current version?

Relation to original tables or to existing linking table

Posted: 30 May 2013 11:27 AM PDT

In my database I have a table with different settings for my app. Each setting is in relation to a guest (table guests) and an event (table events). So basically each guest has specific settings for each event he is linked to.

Every guest that has settings is already linked to the events for other reasons, so there is an existing event_guest table with the necessary links.

So I'm not exactly sure about how I should link the settings table with the others.

Option 1

I link the settings with the table event_guest which links guests and events.


Option 2

I link the settings with the "original" tables guests and events.


Spontaneously, I would go with option 1, but I'm a little confused about it...

My concern with option 1 is that if I have a lot of deep relations, maybe even another table after settings, I will need more complex SQL queries to get, for example, data from settings, guests and events. Could this become a performance issue?

Which is the better solution and what are its advantages and disadvantages?
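A sketch of option 1 with hypothetical key and column names: giving event_guest a surrogate key lets settings reference exactly one guest/event link, so a setting can never exist for a pair that is not actually linked - an invariant option 2 cannot enforce.

    -- Generic SQL sketch; adjust types and keys to the actual schema.
    CREATE TABLE event_guest (
        event_guest_id INT NOT NULL PRIMARY KEY,
        event_id       INT NOT NULL REFERENCES events (event_id),
        guest_id       INT NOT NULL REFERENCES guests (guest_id),
        UNIQUE (event_id, guest_id)
    );

    CREATE TABLE settings (
        setting_id     INT NOT NULL PRIMARY KEY,
        event_guest_id INT NOT NULL REFERENCES event_guest (event_guest_id),
        name           VARCHAR(100) NOT NULL,
        value          VARCHAR(255) NOT NULL
    );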

OK to put temp tablespace on volatile storage or to omit it from backups? (Postgresql)

Posted: 30 May 2013 07:14 PM PDT

I would intuit that it's fine, but I just want to make sure there are no gotchas from a recovery point of view:

If I were to lose my temp tablespace upon system crash, would this prevent proper crash recovery?

Also, if I were to omit the temp tablespace from the base backup, would that prevent proper backup recovery?

mysql optimize table crash

Posted: 30 May 2013 02:14 PM PDT

When I run OPTIMIZE TABLE `table` on a MyISAM table of about 300 MB, the table ends up marked as crashed and must be repaired. What could cause this problem? The same thing happens on other tables over 300 MB.
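A hedged first step before re-running the optimize: verify the table, since OPTIMIZE on MyISAM rebuilds the whole table, and pre-existing corruption (or a tmpdir too small for the rebuild's temporary files) commonly surfaces as a crash mid-rebuild.

    -- Hedged diagnostic sketch for the MyISAM table in question.
    CHECK TABLE `table` EXTENDED;
    REPAIR TABLE `table`;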

Is it possible to have extra tables in a Slave with MySQL Replication

Posted: 30 May 2013 01:14 PM PDT

As my title mention I have a Master and a Slave database.

Master if for operations data and my slave mainly for reporting stuff.

The issue is that I need to create extra tables on the reporting side that can't be on the master, but with the way my replication is currently set up (the simplest setup mentioned in the official docs), this breaks replication.

How could I add tables on the slave without the master caring about it? Is it even possible?
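It is possible; one common approach, sketched under assumptions (the database name and table-name pattern below are hypothetical): slave-only tables work as long as no replicated statement touches their names, and a replication filter on the slave makes that explicit.

    # Hedged sketch for the slave's my.cnf: the SQL thread skips replicated
    # statements against matching tables, so they can exist on the slave only.
    # Note: with statement-based replication, cross-database statements can
    # bypass this filter.
    [mysqld]
    replicate-wild-ignore-table = reporting.rpt_%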

Replication issue - CREATE SELECT alternative?

Posted: 30 May 2013 03:14 PM PDT

I have a MySQL 5.1 slave for our BI team.

They need to run CREATE ... SELECT statements with big SELECT queries (several million rows).

As CREATE ... SELECT is DDL, if replication attempts to update rows in the same tables the SELECT reads from, replication is blocked until the CREATE ... SELECT finishes and releases its locks.

Do you know a good non-blocking alternative to those CREATE ... SELECT statements?

I thought of SELECT ... INTO OUTFILE followed by LOAD DATA INFILE, but they would fill up our disks, as BI guys like to do... :)

Max.
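One alternative, sketched with hypothetical tables: keep the DDL instantaneous by creating the target empty (WHERE 0 = 1 copies only the SELECT's structure), then do the long-running part as plain DML, optionally chunked by key range. This reduces, but under statement-based replication may not fully eliminate, the lock contention.

    -- Hedged sketch: the CREATE finishes immediately; the INSERT ... SELECT
    -- is ordinary DML and can be split into key-range batches if needed.
    CREATE TABLE bi_result AS
    SELECT o.id, o.total, c.region
    FROM   orders o JOIN customers c ON c.id = o.customer_id
    WHERE  0 = 1;

    INSERT INTO bi_result
    SELECT o.id, o.total, c.region
    FROM   orders o JOIN customers c ON c.id = o.customer_id;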

query processor ran out of internal resources and could not produce a query plan

Posted: 30 May 2013 07:16 PM PDT

This is showing up in the logs several times a night. How do I find the query causing the issue? SQL Server 2008 R2 SP1.

Thank you
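The error in question is number 8623, so one option - a sketch assuming SQL Server 2008 R2's Extended Events syntax, with hypothetical session and file names (on 2012+ the predicate field is error_number rather than error) - is to capture the offending statement whenever that error is raised:

    -- Hedged sketch: trap error 8623 together with its statement text.
    CREATE EVENT SESSION catch_8623 ON SERVER
    ADD EVENT sqlserver.error_reported (
        ACTION (sqlserver.sql_text, sqlserver.session_id)
        WHERE (error = 8623)
    )
    ADD TARGET package0.asynchronous_file_target (
        SET filename     = N'C:\Temp\catch_8623.xel',
            metadatafile = N'C:\Temp\catch_8623.xem'
    );

    ALTER EVENT SESSION catch_8623 ON SERVER STATE = START;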

Can I monitor the progress of importing a large .sql file in sqlite3 using zenity --progress?

Posted: 30 May 2013 04:14 PM PDT

I'm trying to monitor the progress of a sqlite3 command importing a large .sql file into a database using zenity --progress.

I've tried the following, which imports the file; however, no progress is shown:

sqlite3 DATABASE < import_file.sql | zenity --progress --percentage=0 --auto-close  

I know I need to provide zenity a numeric source for the progress, but don't really know how to obtain the number.

Can anyone help me?
