Saturday, March 23, 2013

[SQL Server 2008 issues] Check Data While Inserting


Check Data While Inserting

Posted: 19 Mar 2013 06:24 PM PDT

Dear All,

I have a UI with columns LineNo, ProdID, Discount(%) and an "Insert" button. I can insert a single line or multiple lines through this UI. While inserting the following data, it should check whether any row has a value in the Discount column greater than 18. If so, it must not insert any rows and must raise an error message.

[code="plain"]
LineNo ProdID Discount(%)
------ ------ -----------
1      1000   0
2      1001   2
3      1002   19
4      1003   0
[/code]

I coded this, but it always checks only LineNo 1. Please help me write the check so that it tests the Discount column against all the LineNo values.

Regards,
Akbar
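A minimal set-based sketch of that check, assuming the incoming rows land in a table variable named @Lines and a target table named dbo.OrderLines (both hypothetical); testing the whole set with EXISTS avoids the row-by-row logic that only ever sees LineNo 1:

[code="sql"]
DECLARE @Lines TABLE (LineNo int, ProdID int, Discount decimal(5,2));
-- ...@Lines is populated from the UI here...

IF EXISTS (SELECT 1 FROM @Lines WHERE Discount > 18)
BEGIN
    RAISERROR('Discount greater than 18%% is not allowed; nothing was inserted.', 16, 1);
    RETURN;
END

INSERT INTO dbo.OrderLines (LineNo, ProdID, Discount)  -- hypothetical target table
SELECT LineNo, ProdID, Discount
FROM @Lines;
[/code]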

Best approach to have mssql 2008 mirror certain fields to MySQL on Web

Posted: 22 Mar 2013 08:33 AM PDT

We have an on-site MSSQL server. We want to create a website, probably using LAMP and designed with Joomla. We want to link or mirror certain fields from our on-site DB to the DB on the web host, so that clients can go to the web and see their status, tracking, etc. Should I create a new table in MSSQL called "web", then mirror that to the web host? I would like some suggestions as to what may be the best approach.

Database Restore Causes Database User To Become Inoperable

Posted: 22 Mar 2013 06:23 AM PDT

Frequently, we will send out or receive a backup from our clients. We all have the same login and database user. When we restore the DB, we have to drop the database user and re-create it each time, because our ASP.NET application user is not able to log into the database. It is able to log into the server, but not our database. As our client base increases, this small annoyance could really become a problem. Any ideas on how to restore or back up the database in a way that keeps the database user valid and operable would be greatly appreciated. Thanks, Jim
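This is the classic orphaned-user symptom: the restored database carries the user with the client's SID, which no longer matches the same-named login on the receiving server. A hedged sketch of the usual post-restore repair, assuming both the login and the user are named AppUser (hypothetical):

[code="sql"]
-- List database users whose SID has no matching server login
EXEC sp_change_users_login 'Report';

-- Re-map the database user to the local login of the same name
ALTER USER AppUser WITH LOGIN = AppUser;
[/code]

Creating the login with an explicit, identical SID on every server (CREATE LOGIN ... WITH SID = ...) would avoid the problem entirely.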

How to set query editor tab to display "Executing" in SSMS?

Posted: 22 Mar 2013 07:46 AM PDT

In my SSMS, tab headers don't show that a query is currently executing. It is especially difficult when I have multiple tabs open. How do I set it up to display this? My version is 2008 R2 x64 EE. I tried Tools/Options but could find nothing for this. Thanks.

Should I backup more than the database(s) on a server?

Posted: 22 Mar 2013 06:10 AM PDT

I'm a rookie when it comes to backing up a server. I've been able to successfully back up and restore database(s) on SQL Server 2008. But I'm wondering if I should be backing up more than just the database(s). What I mean is: what if something happens to the server, or someone breaks an SSIS package (.dtsx), or breaks a SQL Agent job, or I need to recover yesterday's version of some T-SQL code? How would I be able to restore these to an older version? Should I be backing these items up to another location off the server, or am I overthinking things?

I've read where some forum posters say "I'll be putting that in my toolbox". Where do you put that, and how do you take your toolbox from one job to the next? Is there a script or proc I could use as a guide? Thanks.
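Worth noting while weighing this up: SQL Agent jobs, and SSIS packages deployed to the server rather than to disk, live in the msdb system database, so including the system databases in the backup routine covers them. A sketch, with hypothetical paths:

[code="sql"]
BACKUP DATABASE msdb   TO DISK = N'D:\Backups\msdb.bak'   WITH INIT, CHECKSUM;
BACKUP DATABASE master TO DISK = N'D:\Backups\master.bak' WITH INIT, CHECKSUM;
[/code]

File-system .dtsx files and T-SQL scripts are better kept in source control, which also answers the portable-toolbox question.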

concatenate rows using for xml path and new line feed carriage

Posted: 21 Mar 2013 10:31 PM PDT

Good day guys, I have this code:

[code="sql"]
Create TABLE #tempCityState (State Varchar(5), City Varchar(50))

Insert Into #tempCityState
Select 'CO', 'Denver' Union
Select 'CO', 'Teluride' Union
Select 'CO', 'Vail' Union
Select 'CO', 'Aspen' Union
Select 'CA', 'Los Anggeles' Union
Select 'CA', 'Hanford' Union
Select 'CA', 'Fremont' Union
Select 'AK', 'Wynne' Union
Select 'AK', 'Nashville'

Select Distinct State,
       (Select Stuff((Select ',' + City
                      From #tempCityState
                      Where State = t.State
                      FOR Xml Path('')),1,1,'')) AS Cities
From #tempCityState t

-- or tentatively
--Select Distinct State,
--       (Select Substring((Select ',' + City
--                          From #tempCityState
--                          Where State = t.State
--                          FOR Xml Path('')),2,200000)) AS Cities
--From #tempCityState t

Drop table #tempCityState
[/code]

It gives these results:

[code="plain"]
State Cities
----- ------
AK    Nashville,Wynne
CA    Fremont,Hanford,Los Anggeles
CO    Aspen,Denver,Teluride,Vail
[/code]

But I want output like this (3 rows exactly, with a carriage return/line feed after each item): http://www.flickr.com/photos/14261289@N02/8580154328/

Thanks in advance for your replies.
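A sketch of one common way to get a line break between items: use CHAR(13) + CHAR(10) as the separator, and add the TYPE directive plus .value() so the control characters are not entitized by FOR XML (note that SSMS only shows the breaks in results-to-text mode, not in the grid):

[code="sql"]
Select Distinct State,
       Stuff((Select CHAR(13) + CHAR(10) + City
              From #tempCityState
              Where State = t.State
              FOR Xml Path(''), TYPE).value('.','varchar(max)'), 1, 2, '') AS Cities
From #tempCityState t
[/code]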

XML Data Type as a parameter in a Stored Procedure

Posted: 20 Mar 2013 07:06 AM PDT

Hi, I have a table as follows:

[code="sql"]
CREATE TABLE [dbo].[majikanAG_subMajikan_1](
    [idx] [int] IDENTITY(-2147483648,1) NOT NULL,
    [batch_Id] [uniqueidentifier] NOT NULL,
    [icNo (Baru)] [varchar](100) NULL,
    [icNo (Lama)] [varchar](100) NULL,
    [payerNme] [varchar](300) NULL,
    [zakatAmount] [decimal](10, 2) NULL,
    [subMajikan] [varchar](100) NULL,
 CONSTRAINT [PK__majikanA__51EFEBF8002AF460] PRIMARY KEY CLUSTERED
(
    [idx] ASC,
    [batch_Id] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
[/code]

My stored procedure is as follows:

[code="sql"]
CREATE PROCEDURE [dbo].[addAGSummary_SubMajikan_Process1]
@agItem xml,
@batch_Id uniqueidentifier output
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    set transaction isolation level repeatable read

    Begin transaction
    Begin Try
        select @batch_Id = NEWID()

        insert into majikanAG_subMajikan_1
            (batch_Id, [icNo (Baru)], [icNo (Lama)], payerNme, zakatAmount, subMajikan)
        select @batch_Id,
            a.b.value('icNo[1]','varchar(200)') as icNo,                  -- input 1
            a.b.value('icNoOld[1]','varchar(15)') as icNoOld,             -- input 2
            upper(a.b.value('payerNme[1]','varchar(100)')) as payerNme,   -- input 3
            --a.b.value('amt[1]','decimal(10,2)') as amt,                 -- input 4
            a.b.value('amt[1]','varchar(100)') as amt,                    -- input 4
            a.b.value('subCd[1]','varchar(100)') as subCd                 -- input 5
        from @agItem.nodes('/data/ag') a(b)

        COMMIT transaction
    End Try
    Begin Catch
        -- Whoops, there was an error
        IF @@TRANCOUNT > 0
            ROLLBACK transaction

        -- Raise an error with the details of the exception
        DECLARE @ErrMsg nvarchar(4000), @ErrSeverity int
        SELECT @ErrMsg = ERROR_MESSAGE(), @ErrSeverity = ERROR_SEVERITY()
        RAISERROR(@ErrMsg, @ErrSeverity, 1)
    End Catch
END
[/code]

There are 2 scenarios:
1. If @agItem does not have much data (1,000 records), the stored procedure runs well.
2. If @agItem has a lot of data (10,000 records), the stored procedure cannot process as expected.

Why is scenario 2 happening? Is the XML data type not suitable for 10,000 records? Please help, I'm stuck.
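A hedged suggestion for the large-document case: shredding with value('icNo[1]', ...) forces a relatively expensive evaluation per node, and the cost grows sharply with document size. Wrapping each path in text() and keeping the singleton indicator is a well-known way to cut that cost; whether it fully explains the 10,000-record slowdown is an assumption:

[code="sql"]
select @batch_Id,
    a.b.value('(icNo/text())[1]',    'varchar(200)') as icNo,
    a.b.value('(icNoOld/text())[1]', 'varchar(15)')  as icNoOld,
    upper(a.b.value('(payerNme/text())[1]', 'varchar(100)')) as payerNme,
    a.b.value('(amt/text())[1]',     'varchar(100)') as amt,
    a.b.value('(subCd/text())[1]',   'varchar(100)') as subCd
from @agItem.nodes('/data/ag') a(b)
[/code]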

Help with stored procedure

Posted: 22 Mar 2013 03:45 AM PDT

Hi guys, can anyone help me?

1. I need to write an SP to accept comma-separated values and return them as a list of string values.
2. I need to write an SP to find occurrences of a character in a string.
3. I need to write an SP to find the number of words in a sentence.
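A minimal sketch for item 1, assuming SQL Server 2008 (no built-in splitter): walk the string with CHARINDEX and return one row per value; the procedure name is hypothetical. Items 2 and 3 fall out of the LEN/REPLACE arithmetic shown under the "Count dots" question further down.

[code="sql"]
CREATE PROCEDURE dbo.SplitCsv @list varchar(8000)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @pos int, @item varchar(8000);
    DECLARE @result TABLE (item varchar(8000));

    WHILE LEN(@list) > 0
    BEGIN
        SET @pos = CHARINDEX(',', @list);
        IF @pos = 0
            SELECT @item = @list, @list = '';
        ELSE
            SELECT @item = LEFT(@list, @pos - 1),
                   @list = SUBSTRING(@list, @pos + 1, LEN(@list));
        INSERT INTO @result (item) VALUES (LTRIM(RTRIM(@item)));
    END

    SELECT item FROM @result;  -- one row per comma-separated value
END
[/code]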

Column reference

Posted: 21 Mar 2013 08:48 PM PDT

Hi, I have a table GV_Voucher which has data like (VoucherId, VoucherNo, TransactionID):

[code="plain"]
1, VVB0001, TRN001
2, VVB0002, TRN001
3, VVB003,  TRN002
4, VVB0004, TRN002
[/code]

I have created another table, GV_ReceivedOffice, in which I need the columns VoucherNo and TransactionID. I created a unique constraint on the VoucherNo column in the GV_Voucher table so that I can reference it from GV_ReceivedOffice. But how do I use the TransactionID column of GV_Voucher in GV_ReceivedOffice? Is it true that without creating a constraint we can't reference a column in another table? Please suggest.
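For context: a foreign key can only reference columns covered by a primary key, a unique constraint, or a unique index on the parent table. A sketch of one way to reference both columns together, assuming the (VoucherNo, TransactionID) pair is unique in GV_Voucher:

[code="sql"]
ALTER TABLE GV_Voucher
    ADD CONSTRAINT UQ_GV_Voucher_VoucherNo_TranID UNIQUE (VoucherNo, TransactionID);

ALTER TABLE GV_ReceivedOffice
    ADD CONSTRAINT FK_GV_ReceivedOffice_GV_Voucher
    FOREIGN KEY (VoucherNo, TransactionID)
    REFERENCES GV_Voucher (VoucherNo, TransactionID);
[/code]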

Comparing all columns in table

Posted: 22 Mar 2013 02:46 AM PDT

Hello. I need help to solve one problem that I somehow caused... I have a table that has nearly 100 columns, and I have a "copy" of this table with 2 extra columns, to maintain a kind of history. This history table is filled by a trigger on the main table, which adds a copy of the original row before the change, plus the kind of change made (delete, update), and the date of the change.

Now I'm trying to get the history for one particular ID (the PK of the main table). How can I create a temp table with every row of the history table, but where each line shows only the differences from the previous line? For example: I have a table with columns from A to Z, and the original values for the rows were a to z. The first row in the history table has all the values, but column A changed from a to 10. In the second row, column T changed to 99... The result I'm trying to get is something like:

Row 1: only the 10 in the A column, and the other columns set to NULL.
Row 2: only the 99 in the T column, and the other columns set to NULL.
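A minimal sketch of one approach, using hypothetical table and column names: number the history rows per ID, join each row to its predecessor, and let NULLIF blank out any column that did not change (NULLIF returns NULL when its two arguments are equal). One caveat: NULLIF cannot show a change from a value to NULL.

[code="sql"]
WITH h AS (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY ID ORDER BY ChangeDate) AS rn
    FROM dbo.MainTable_History       -- hypothetical history table
    WHERE ID = @ID
)
SELECT cur.ChangeDate,
       NULLIF(cur.A, prev.A) AS A,
       NULLIF(cur.T, prev.T) AS T
       -- ...repeat for the remaining columns...
FROM h AS cur
LEFT JOIN h AS prev
       ON prev.rn = cur.rn - 1;
[/code]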

Checklist after Migrating Databases from 2005 to 2008 R2

Posted: 20 Mar 2013 03:48 AM PDT

I am migrating SQL databases from 2005 to 2008 R2 Enterprise Edition. There are only 5 databases, but they are all 500 GB or more. I would like to know what checks to make on the DBs after they are moved to the new server (great hardware on the new server). We have a very small maintenance window, and most of the time will go towards moving databases, testing jobs, etc. I am considering running the following, in this order, but I'm worried about the time it takes on such large databases and about the performance impact:

1. DBCC UPDATEUSAGE
2. DBCC CHECKDB
3. REBUILD/REORGANIZE indexes
4. UPDATE STATISTICS (for indexes that were reorganized)
5. Recompile all the procs.

Can anyone please provide expert comments on whether we really need all of these? Thanks in advance...
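For reference, a sketch of what that sequence looks like, with MyDb and the object names standing in as hypothetical placeholders; steps 3 to 5 would normally be driven per index or per module rather than wholesale:

[code="sql"]
USE MyDb;
DBCC UPDATEUSAGE (MyDb);                        -- 1: fix page/row counts carried over from 2005
DBCC CHECKDB (MyDb) WITH NO_INFOMSGS;           -- 2: integrity check
ALTER INDEX ALL ON dbo.SomeLargeTable REBUILD;  -- 3: per table
UPDATE STATISTICS dbo.SomeLargeTable;           -- 4: only needed where REORGANIZE was used
EXEC sp_recompile N'dbo.SomeProc';              -- 5: per module
[/code]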

Curious: performance difference when using function in a where clause.

Posted: 21 Mar 2013 11:12 PM PDT

Hello all, this is just something I noticed and am wondering about (I think this is an unexpected effect); there is no problem to solve. I get a performance difference when I use a function in a WHERE clause. See the code; the performance numbers are below it. Sorry, I was not able to isolate the problem and give definitions and data to reproduce this. Please read the code.

[code="sql"]
select * from client_order
where
-- ABS(ClientNr) = 12345
ClientNr = 12345
and startdttm = '2012-03-16'

--(1 row(s) affected)
--Table 'Encounter'. Scan count 30, logical reads 62, physical reads 61, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
-- SQL Server Execution Times:
--   CPU time = 0 ms, elapsed time = 765 ms.
-- Estimated execution plan: Estimated Subtree Cost: 0.106408
-- Actual execution plan: Estimated Subtree Cost: 0.106408

select * from client_order
where
ABS(ClientNr) = 12345
-- ClientNr = 12345
and startdttm = '2012-03-16'

--(1 row(s) affected)
--Table 'client_order'. Scan count 1, logical reads 4, physical reads 4, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
-- SQL Server Execution Times:
--   CPU time = 0 ms, elapsed time = 49 ms.
-- Estimated execution plan: Estimated Subtree Cost: 0.0183334
-- Actual execution plan: Estimated Subtree Cost: 0.0183334
[/code]

Thanks for your time and attention, ben brugman

create sql str with special condition

Posted: 22 Mar 2013 04:05 AM PDT

Hello, I have 2 tables:

Table a (code, price, check)
Table b (code, percent)

I want to create a SQL string like the one below:

[code="sql"]
Select a.code, a.price, (price * percent) as new_field
From a, b
Where a.code = b.code
[/code]

And this condition must be considered in the SQL string:

If (check = 1) { New_field = price * percent }
Else { New_field = price * percent * 8 }

Now, how can I put the above condition in the SQL string? Thanks very much.
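A minimal sketch using a CASE expression; check and percent are bracketed because they clash with keywords:

[code="sql"]
Select a.code, a.price,
       a.price * b.[percent] * CASE WHEN a.[check] = 1 THEN 1 ELSE 8 END AS new_field
From a
Join b On a.code = b.code
[/code]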

Count dots '.'

Posted: 22 Mar 2013 02:52 AM PDT

Hi geniuses, how do I count the number of '.' (dots) in a column value?

[code="plain"]
EconSearch
D.01.01.01
D.01.01.02
D.01.01.03
D.01.01.03.00
[/code]

And if there are 3 '.' -> Level 1, if there are 4 '.' -> Level 2, and so on up to Level 5. Thanks people! Regards
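The standard counting trick is to compare the string length with and without the dots; a sketch, with the table name as a placeholder:

[code="sql"]
SELECT EconSearch,
       LEN(EconSearch) - LEN(REPLACE(EconSearch, '.', '')) AS DotCount,
       LEN(EconSearch) - LEN(REPLACE(EconSearch, '.', '')) - 2 AS [Level]
FROM dbo.EconTable;  -- hypothetical table name
[/code]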

Omnibackup.exe backup got failed while trying through this setup.

Posted: 22 Mar 2013 03:36 AM PDT

In SQL Server, to take a backup of various databases from various servers, we are using the OmniBackup setup to execute SQL database backups, not SQL Agent jobs. This is the script we are using (if there is any error in this script, please let me know):

OmniBackup.exe -b D:\GenetecBackups -s (local)\OMI -d VideoSQL -MAX_BACKUPS 12

We are getting an error while executing this setup: BACKUP DATABASE failed with an application error. Error number in the log: Error: 17836, Severity: 20, State: 17. We get this error when trying to back up the Security SQL databases (all servers). I have read about this and am not sure what size we should make this; any recommendations would be great. Kindly reply with a good solution. Waiting for valuable replies. Thanks in advance, friends.

Integration Director access

Posted: 22 Mar 2013 01:05 AM PDT

I am a DBA at a medium-sized company. We recently hired a new Director for project integration. The director requested sysadmin privileges on all SQL Servers in the domain. I just do not want to get into an argument by saying NO to this request, so I'm wondering how you folks manage these kinds of situations. In most cases what a director might need is read access. I know it might differ from company to company, but I just want to hear from experts how to handle these situations. Thanks in advance, Sarwan.

Moving Windows (A/D) Domains

Posted: 22 Mar 2013 03:09 AM PDT

I have SQL Servers that must be moved from one Windows domain to another. Moving the servers hasn't been an issue yet; most have gone smoothly. Now we need to move the user accounts. Basically, I need to either rename the domain account or copy it to the new domain account. So the old domain user OldDomain\user1 needs to become NEWDomain\user1 with all the same database access. I would have thought there would be a simple way to do this, but it looks like I need to script out the access and recreate the account. Does anyone have any scripts or advice on how to do this? I need to move 20+ servers and hundreds of accounts... Thanks!
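A hedged sketch of the per-login move, assuming the database users should simply follow the login; server-level role memberships and permissions still need scripting separately:

[code="sql"]
CREATE LOGIN [NEWDomain\user1] FROM WINDOWS;

USE SomeDb;  -- repeat per database (hypothetical name)
ALTER USER [OldDomain\user1]
    WITH NAME = [NEWDomain\user1], LOGIN = [NEWDomain\user1];
[/code]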

IDENTITY value becoming too big

Posted: 21 Mar 2013 10:59 PM PDT

I have a table that gets a fresh set of data from our source each day. This means I DELETE almost all the records each day and repopulate the table with new data. I do not TRUNCATE the table because, according to business rules, not all the records can be deleted, so my script is:

DELETE FROM Table1 WHERE .....

This deletes almost all the records each day. Being a DELETE, this means the primary key column, which is an IDENTITY, does not get reseeded; the value just keeps getting bigger and bigger. I am running into a scenario where the INT data type on this IDENTITY column is going to become too small. My ID column is at 1,500,254,112, and the maximum for an INT is 2,147,483,647. Before I just go and change the data type to a BIGINT, I was hoping someone could give me a better solution to keep the ID value in check. Any suggestions?
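A hedged sketch of the reseed route; this only buys anything if the surviving rows all sit low in the range, and it relies on the primary key to catch any collision with kept rows:

[code="sql"]
-- After the daily DELETE, restart numbering just above the highest surviving ID
DECLARE @maxId int;
SELECT @maxId = ISNULL(MAX(ID), 0) FROM dbo.Table1;
DBCC CHECKIDENT ('dbo.Table1', RESEED, @maxId);
[/code]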

Backup task failing

Posted: 22 Mar 2013 01:30 AM PDT

Hi, I have a backup task that isn't working; it used to, but a couple of days ago it stopped. I created a new version of the task and it has also reported a failure. All of the .bak files have been created by the task and I can recover from them, so I have no idea why it is reporting a failure. Please see the notes below; not much to go on I know, they are usually more helpful than this:

Date: 22/03/2013 12:57:06
Log: Job History (Non-ODM Backups.Backup Databases)
Step ID: 1
Server: SQL
Job Name: Non-ODM Backups.Backup Databases
Step Name: Backup Databases
Duration: 00:37:42
Sql Severity: 0
Sql Message ID: 0
Operator Emailed:
Operator Net sent:
Operator Paged:
Retries Attempted: 0

Message:
Executed as user: NEFTEXOIL\SQLAgent. :30:02.41 Code: 0xC002F210 Source: Back Up Database Task Execute SQL Task Description: Executing the query "BACKUP DATABASE [Team] TO DISK = N'K:\SQL_Backups..." failed with the following error: "A nonrecoverable I/O error occurred on file "K:\SQL_Backups\Team_Backups\Team_backup_2013_03_22_125713_7338597.bak:" 112(failed to retrieve text for this error. Reason: 15100). BACKUP DATABASE is terminating abnormally.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. End Error
Progress: 2013-03-22 13:34:34.23 Source: Back Up Database Task Executing query "BACKUP DATABASE [Team_Archive] TO DISK = N'K:\SQL...".: 50% complete End Progress
Progress: 2013-03-22 13:34:46.19 Source: Back Up Database Task Executing query "declare @backupSetId as int select @backupSetId =...".: 100% complete End Progress
Error: 2013-03-22 13:34:46.71 Code: 0xC002F210 Source: Back Up Database Task Execute SQL Task Description: Executing the query "BACKUP DATABASE [webmap] TO DISK = N'K:\SQL_Backu..." failed with the following error: "A nonrecoverable I/O error occurred on file "K:\SQL_Backups\Team_Backups\webmap_backup_2013_03_22_125713_7418605.bak:" 112(failed to retrieve text for this error. Reason: 15105). BACKUP DATABASE is terminating abnormally.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. End Error
DTExec: The package execution returned DTSER_FAILURE (1). Started: 12:57:07 Finished: 13:34:47 Elapsed: 2260.64 seconds. The package execution failed. The step failed.

Many thanks for your help, Oliver

2008 SP2 Recompile

Posted: 22 Mar 2013 02:51 AM PDT

Just read this from Gail Shaw: http://sqlinthewild.co.za/index.php/2009/03/19/catch-all-queries/

'No, you'll still get the non-optimal plan. In 2005 the optimiser is required to create plans that are safe for reuse, even if they won't be reused (because of recompile). Only in 2008 SP2 onwards does the recompile work properly – plans optimal for that particular parameter set. Gail said this on March 24th, 2011 at 23:08'

How does it not work properly in SQL 2008 SP1?

Time dimension Best Way

Posted: 22 Mar 2013 02:59 AM PDT

I currently have two tables, one with information in it and one with time and timeID. They are linked using timeID as a foreign key in the info table. How would I go about making a Time Dimension for this setup?

Remove "SQL Server 2005 (64-bit)" after in-place upgrade to 2008 R2?

Posted: 04 Mar 2013 03:53 AM PST

Hi all, when upgrading SQL, we normally either detach/reattach or backup/restore. However, we have just been handed a server that had an "in-place" upgrade performed from SQL 2005 to 2008 R2. This seems to have left SQL 2005 in the Programs list. In other words, if I go to Add/Remove Programs, I see all the usual entries for 2008 (such as "[b]Microsoft SQL Server 2008 (64-bit)[/b]", "[b]Microsoft SQL Server 2008 Native Client[/b]", "[b]Microsoft SQL Server 2008 Browser[/b]", etc.). However, there are also two entries for 2005:

[b]Microsoft SQL Server 2005 (64-bit)[/b]
[b]Microsoft SQL Server 2005 Backward compatibility[/b]

Note that, in the services list, I [b][i]only[/i][/b] see services for the expected 2008 R2 instance, and the users have had no problems connecting to and using the upgraded SQL DBs. So, [b][u]is it safe to remove the two SQL 2005 items listed in Add/Remove Programs[/u][/b]? I ask because the files are apparently throwing up red flags when we run our patch/hotfix scans. Thanks!

Search entire database (including procedures, views, etc.) for a string

Posted: 22 Mar 2013 01:28 AM PDT

I've read several articles I found online suggesting various ways to do this, but I thought I would check here. Basically, I need to find every instance of 'COMM' in one of our databases. Any help is appreciated! EDIT: I'm running 2008 R2, in case that helps.
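A minimal sketch covering the code objects (procedures, views, functions, triggers); searching table data for the string is a separate, much heavier exercise:

[code="sql"]
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id)        AS object_name,
       o.type_desc
FROM sys.sql_modules AS m
JOIN sys.objects     AS o ON o.object_id = m.object_id
WHERE m.definition LIKE '%COMM%';
[/code]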

Controlling Replication from Sybase via SQL Server

Posted: 19 Mar 2013 12:19 AM PDT

Hi all, without me having to look too deeply into this, can somebody give me a quick Yes/No on the possibility of a SQL 2008 R2 box controlling the replication of data FROM a Sybase ASE 15 database server (on Linux)? SQL 2008 R2 supports: publishing data from Oracle to SQL Server, and publishing data from SQL Server to non-SQL Server Subscribers. So therefore a big NO? Or an undocumented possibility? Cheers, Shark

SQL C# CLR TVF issues

Posted: 22 Mar 2013 12:21 AM PDT

This problem is really starting to annoy me now! I created a SQL C# CLR TVF which accepts around 30 parameters. The function was running like a charm, processing around 250,000 records in 9 seconds, which is amazing considering the amount of string manipulation performed by the CLR function. But for some weird reason, and without changing the code of the function or the data set, it started taking 80-90 seconds to run the same code on the same data set. I even restarted the SQL instance to see if it would go back to the initial speed, but to no avail. The query looks like this:

[code="sql"]
SELECT top 10000
    g.ID, g.GUID, g.mkTitle, g.mkNameKey, g.mkAddressKey,
    g.mkName1, g.mkName2, g.mkName3, g.mkNormalisedName,
    g.mkOrganisationKey, g.mkNormalizedOrganization,
    g.mkOrgName1, g.mkOrgName2, g.mkorgName3,
    g.mkPostIn, g.mkPostOut, g.mkPhoneticStreet, g.mkPremise,
    g.mkPhoneticTown, g.mkEmailAddress, g.mkTelephoneNumber,
    g.mkMobileNumber, 1
FROM dbo.Load1XX
CROSS APPLY [dbo].[GenerateKeys](
    ID, GUID, ISNULL('COLNOTPROV', ''), ISNULL(FullContactName, ''),
    ISNULL('COLNOTPROV', ''), ISNULL('COLNOTPROV', ''), ISNULL('COLNOTPROV', ''),
    ISNULL('COLNOTPROV', ''), ISNULL('COLNOTPROV', ''),
    ISNULL(Address1, ''), ISNULL(Address2, ''), ISNULL('COLNOTPROV', ''),
    ISNULL('COLNOTPROV', ''), ISNULL('COLNOTPROV', ''),
    ISNULL(Town, ''), ISNULL(County, ''), ISNULL(Postcode, ''),
    ISNULL('COLNOTPROV', ''), ISNULL('COLNOTPROV', ''), ISNULL(Country, ''),
    ISNULL('COLNOTPROV', ''), ISNULL('COLNOTPROV', ''), ISNULL('COLNOTPROV', ''),
    ISNULL('COLNOTPROV', ''), ISNULL(Email, ''), ISNULL(HomeTelephone, ''),
    ISNULL('COLNOTPROV', ''), ISNULL('COLNOTPROV', ''), ISNULL('COLNOTPROV', ''),
    ISNULL('COLNOTPROV', ''), ISNULL('COLNOTPROV', ''), ISNULL(CustomerURN, ''),
    ISNULL('COLNOTPROV', '')) as g
[/code]

The actual execution plan looks like this, so it appears the nested loops join is taking the longest time. Is there any way round this?

[img]http://s1.postimg.org/awhfrhmkf/friday1.jpg[/img]
[img]http://s13.postimg.org/zb54ox5mv/friday2.jpg[/img]

Row to Column (Pivot)

Posted: 21 Mar 2013 09:28 PM PDT

[code="sql"]SELECT S.UniversityNo ,S.Name ,U.SSubjectCode ,U.IsPassFROM SStudentUnivPerformamce U INNER JOIN SStudent S oN U.SStudentID = S.StudentID INNER JOIN SSubject SU ON U.SSubjectID = SU.SSubjectIDWHERE U.SBatchID = 4 GROUP BY S.UniversityNo ,S.Name ,U.SSubjectID ,U.SSubjectCode ,SU.Descriptions ,U.IsPassORDER BY S.UniversityNo,U.SSubjectID[/code] I want the Output Like This-------------------------------------------------------------------------UniversityNo Name Sub1 Sub2 Sub3 ......-------------------------------------------------------------------------123 Raja P F P ....124 Kumar P F P ....125 Ravi P F P ....126 Raji P F P ....

Creating reference

Posted: 21 Mar 2013 07:50 PM PDT

Hi, I am creating a table GV_ReceivedOffice with this structure:

[code="sql"]
Create table GV_ReceivedOffice
(
    ReceivedOfficeID int primary key identity(1,1),
    VoucherNo varchar(15),
    TransactionID varchar(10),
    UserID int,
    ReceivedDate datetime
)
[/code]

The VoucherNo column in this table should reference another table, GV_Voucher, which has a VoucherNo column. VoucherNo in GV_Voucher doesn't have any constraint on it, so if I try to reference it like this:

CONSTRAINT FK_GV_ReceivedOffice_VoucherNo FOREIGN KEY (VoucherNo) REFERENCES GV_Voucher(VoucherNo)

it is not correct. How do I do this?

SSRS 2008 Running Value Function

Posted: 14 Jul 2010 11:10 AM PDT

Hi, I am trying to calculate a running total by order using SSRS 2008. When I use the following code in my SSRS report I get an error. I am using a simple expression to calculate the running total (field RTTL) by order. The calculated field is RTTL = RunningValue(Field!Orderamt.value, Sum, Nothing), which gets an error. When I use the dataset scope, = RunningValue(Field!Orderamt.value, "Dataset1"), I get the same error. When I use the group name, I get the same error. Can somebody provide details on how to resolve this issue in SSRS 2008? Thanks, A S
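One thing worth checking, based on the expressions as posted: the fields collection is Fields! (plural), and RunningValue takes the aggregate function as its second argument, so the dataset-scoped form would be =RunningValue(Fields!Orderamt.Value, Sum, "Dataset1"). Also note that a calculated field in the dataset cannot contain aggregate functions such as RunningValue; the expression has to live in a textbox in the report body.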

Is there ready function for a column to break up into two ?

Posted: 21 Mar 2013 08:31 PM PDT

Hi, I have a table named LG_XXX_. It has a string column "Definition_". I want to locate the first space character in this column and split the value there: the left side becomes the "NAME" column, the right side becomes the "LASTNAME" column.

For example, the current value:

[code="plain"]
Definition_
===========
ERSOY AYDIN
[/code]

Desired result:

[code="plain"]
NAME   LASTNAME
=====  ========
ERSOY  AYDIN
[/code]

Is there a ready-made function to break a column up into two? Best regards
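There is no single built-in split function in SQL Server 2008, but CHARINDEX plus LEFT/SUBSTRING covers it; a sketch that also guards against values containing no space:

[code="sql"]
SELECT Definition_,
       CASE WHEN CHARINDEX(' ', Definition_) > 0
            THEN LEFT(Definition_, CHARINDEX(' ', Definition_) - 1)
            ELSE Definition_ END AS [NAME],
       CASE WHEN CHARINDEX(' ', Definition_) > 0
            THEN SUBSTRING(Definition_, CHARINDEX(' ', Definition_) + 1, LEN(Definition_))
            ELSE '' END AS [LASTNAME]
FROM LG_XXX_;
[/code]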

Order by CASE

Posted: 21 Mar 2013 08:54 PM PDT

Hi geniuses, need some help here.

[code="sql"]
Select r.Title, r.impact, r.probability
FROM DB_Tabs_Risk r
Order by r.impact desc
[/code]

[code="plain"]
Title  impact probability
RiskA  10     0,1
RiskB  10     0,5
RiskC  9      0,8
RiskD  5      0,5
RiskE  5      0,3
[/code]

I need to order this based on something like:

When r.impact >= 7 AND r.probability >= .7 (on top)
When r.impact <= 3 AND r.probability <= .3 (bottom)
Else (middle)

How do I T-SQL it? Thanks in advance.
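A CASE expression can sit directly in the ORDER BY; a minimal sketch:

[code="sql"]
Select r.Title, r.impact, r.probability
FROM DB_Tabs_Risk r
Order by CASE
             WHEN r.impact >= 7 AND r.probability >= 0.7 THEN 1  -- top
             WHEN r.impact <= 3 AND r.probability <= 0.3 THEN 3  -- bottom
             ELSE 2                                              -- middle
         END,
         r.impact desc
[/code]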

Masking a password in a stored procedure input

Posted: 21 Mar 2013 08:41 PM PDT

Hi folks, I'm making a stored procedure that receives many parameters. One of them is a user password. When I execute the stored procedure using Management Studio, I write the password in a text box field. Is it possible to mask this input? Any comment would be appreciated. Kind regards.

Friday, March 22, 2013

[SQL Server] Need help with Select statement!!


Need help with Select statement!!

Posted: 22 Mar 2013 07:22 AM PDT

My final project is due tonight at midnight and I can't finish up these SELECT statements:

a. Find all those customers who have not purchased anything from Niles Video Inc.
b. Get the total number of DVDs and video tapes sold per genre.
c. Get the average number of DVDs per purchase.

Here's the ER model: [IMG]http://i45.tinypic.com/34qx021.png[/IMG]
followed by the relational model: [IMG]http://i47.tinypic.com/154gjzn.png[/IMG]

Please let me know if my ER model or relational model is wrong.

Customers last order date

Posted: 21 Mar 2013 05:35 PM PDT

Hi all, I'm having a bit of trouble with the following query. I get the results I need with the exception of the "last_order_date" field... it is empty. What I'm trying to do is this: pull the records for the day's orders and also return the most recent previous order for each customer (if they have one). This is what I have; your help is greatly appreciated:

[code="sql"]
select
    o.idOrderCustom, o.orderDate, o.total, o.idCustomer,
    c.name, c.lastName, c.customerCompany, c.email, c.city, c.stateCode,
    c.phone, last_order_date
from orders o
LEFT JOIN customers c ON o.idCustomer = c.idCustomer
LEFT JOIN
(
    select top 1 orders.orderDate as last_order_date, orders.idCustomer
    from orders
    where orders.orderDate < '3/21/2013'
      and orders.orderStatus = 4
    order by orders.orderDate desc
) l ON o.idCustomer = l.idCustomer
where o.orderDate = '3/21/2013'
[/code]

Thanks, Rick
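For context on why the field comes back empty: the TOP 1 derived table is evaluated once for the whole query, not once per customer, so it matches at most one customer. A sketch of the per-customer version using OUTER APPLY, which lets the inner query reference o.idCustomer:

[code="sql"]
select
    o.idOrderCustom, o.orderDate, o.total, o.idCustomer,
    c.name, c.lastName, c.customerCompany, c.email, c.city, c.stateCode,
    c.phone, l.last_order_date
from orders o
LEFT JOIN customers c ON o.idCustomer = c.idCustomer
OUTER APPLY
(
    select top 1 prev.orderDate as last_order_date
    from orders prev
    where prev.idCustomer = o.idCustomer
      and prev.orderDate < '3/21/2013'
      and prev.orderStatus = 4
    order by prev.orderDate desc
) l
where o.orderDate = '3/21/2013'
[/code]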

[how to] Special syntax sth.name() in the CREATE FUNCTION statement, what does it mean?


Special syntax sth.name() in the CREATE FUNCTION statement, what does it mean?

Posted: 22 Mar 2013 08:26 PM PDT

In this SO question about stored procedures in plpgsql, the stored procedures look like sth.name(). I don't know what the prefix sth means.

For example:

create or replace function something.function_name()
  returns setof record as
$$
  -- code
$$ language plpgsql;

Looking in this book, in "Part IV: Programming with PostgreSQL", I have found no mention of this type of creation declaration with a function name in two parts.

In the PostgreSQL documentation, in the CREATE FUNCTION section, the only similar thing is when they deal with argtype or rettype, which can be written in the form table_name.column_name%TYPE. But that's not related to the name of the function.

So, what is this syntax related to?
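For what it's worth, a sketch built on the reading that the prefix is simply a schema qualification, i.e. the function is created inside a named schema rather than in public:

CREATE SCHEMA something;

CREATE OR REPLACE FUNCTION something.function_name()
  RETURNS setof record AS
$$
BEGIN
  RETURN;  -- placeholder body
END;
$$ LANGUAGE plpgsql;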

Linux 32b MySQL periodically crashing with signal 11 and OOM errors

Posted: 22 Mar 2013 08:33 PM PDT

I have a MySQL DB with mostly InnoDB tables that I'm fairly certain isn't tuned properly, based on what's happening, but I don't have enough know-how to pinpoint what to change. The server is an 8 GB, 32-bit Linux system with the following in my.cnf:

[mysql]
port                           = 3306
socket                         = /var/run/mysqld/mysqld.sock

[mysqld]
user                           = mysql
default_storage_engine         = InnoDB
socket                         = /var/run/mysqld/mysqld.sock
pid_file                       = /var/run/mysqld/mysqld.pid
key_buffer_size                = 64M
myisam_recover                 = FORCE,BACKUP
max_allowed_packet             = 16M
max_connect_errors             = 1000000
datadir                        = /var/lib/mysql/
tmpdir                         = /tmp
tmp_table_size                 = 64M
max_heap_table_size            = 64M
query_cache_type               = 0
query_cache_size               = 0
max_connections                = 200
thread_cache_size              = 50
open_files_limit               = 65535
table_definition_cache         = 8192
table_open_cache               = 8192
innodb_flush_method            = O_DIRECT
innodb_log_files_in_group      = 2
innodb_log_file_size           = 128M
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table          = 1
innodb_buffer_pool_size        = 2G
log_error                      = /var/log/mysql/mysql-error.log
log_queries_not_using_indexes  = 0
slow_query_log_file            = /var/log/mysql/mysql-slow.log
slow_query_log                 = 1
long_query_time                = 2
general_log                    = 0
general_log_file               = /var/log/mysql/general.log

[isamchk]
key_buffer                     = 16M

[mysqldump]
quick
quote-names
max_allowed_packet             = 16M

Currently, here are the non-zero global status stats:

Aborted_clients 28
Aborted_connects 667
Bytes_received 283596894
Bytes_sent 3709581404
Com_admin_commands 24456
Com_change_db 10267
Com_delete 167
Com_insert 1355
Com_kill 1
Com_select 1018481
Com_set_option 19563
Com_show_processlist 74
Com_show_status 30
Com_show_table_status 1
Com_show_tables 22
Com_show_variables 5
Com_update 2208
Connections 11157
Created_tmp_disk_tables 5131
Created_tmp_files 6
Created_tmp_tables 11044
Flush_commands 1
Handler_commit 1019009
Handler_delete 160
Handler_read_first 29551
Handler_read_key 3051320
Handler_read_last 3
Handler_read_next 5038745
Handler_read_prev 251210
Handler_read_rnd 685831
Handler_read_rnd_next 22756239
Handler_rollback 38
Handler_update 1166988
Handler_write 557183
Innodb_adaptive_hash_cells 8850419
Innodb_adaptive_hash_heap_buffers 1630
Innodb_adaptive_hash_hash_searches 4990439
Innodb_adaptive_hash_non_hash_searches 4315600
Innodb_background_log_sync 4145
Innodb_buffer_pool_pages_data 129440
Innodb_buffer_pool_pages_dirty 4
Innodb_buffer_pool_pages_flushed 9952
Innodb_buffer_pool_pages_LRU_flushed 237
Innodb_buffer_pool_pages_made_young 273289
Innodb_buffer_pool_pages_misc 1631
Innodb_buffer_pool_pages_old 47761
Innodb_buffer_pool_pages_total 131071
Innodb_buffer_pool_read_ahead 607
Innodb_buffer_pool_read_ahead_evicted 1325
Innodb_buffer_pool_read_requests 35806735
Innodb_buffer_pool_reads 373297
Innodb_buffer_pool_write_requests 30891
Innodb_checkpoint_age 365
Innodb_checkpoint_max_age 216721613
Innodb_checkpoint_target_age 209949063
Innodb_data_fsyncs 5575
Innodb_data_read 1834913792
Innodb_data_reads 401613
Innodb_data_writes 17424
Innodb_data_written 332080128
Innodb_dblwr_pages_written 9952
Innodb_dblwr_writes 431
Innodb_dict_tables 27606
Innodb_history_list_length 1979
Innodb_ibuf_free_list 9
Innodb_ibuf_merged_delete_marks 13
Innodb_ibuf_merged_deletes 3
Innodb_ibuf_merged_inserts 201
Innodb_ibuf_merges 144
Innodb_ibuf_segment_size 11
Innodb_ibuf_size 1
Innodb_log_write_requests 5819
Innodb_log_writes 6591
Innodb_lsn_current 77982531271
Innodb_lsn_flushed 77982531271
Innodb_lsn_last_checkpoint 77982530906
Innodb_master_thread_1_second_loops 4131
Innodb_master_thread_10_second_loops 411
Innodb_master_thread_background_loops 15
Innodb_master_thread_main_flush_loops 15
Innodb_master_thread_sleeps 4130
Innodb_max_trx_id 576268483
Innodb_mem_adaptive_hash 62128140
Innodb_mem_dictionary 109012014
Innodb_mem_total 2179465216
Innodb_mutex_os_waits 779
Innodb_mutex_spin_rounds 36022
Innodb_mutex_spin_waits 5369
Innodb_oldest_view_low_limit_trx_id 576268482
Innodb_os_log_fsyncs 3498
Innodb_os_log_written 5761024
Innodb_page_size 16384
Innodb_pages_created 94
Innodb_pages_read 374004
Innodb_pages_written 9952
Innodb_purge_trx_id 576267172
Innodb_rows_deleted 160
Innodb_rows_inserted 1323
Innodb_rows_read 28554644
Innodb_rows_updated 2078
Innodb_s_lock_os_waits 1278
Innodb_s_lock_spin_rounds 40952
Innodb_s_lock_spin_waits 2563
Innodb_x_lock_os_waits 132
Innodb_x_lock_spin_rounds 4893
Innodb_x_lock_spin_waits 176
Key_blocks_unused 57983
Key_blocks_used 10
Key_read_requests 6724
Key_reads 7
Key_write_requests 1441
Max_used_connections 25
Open_files 11
Open_table_definitions 8192
Open_tables 8192
Opened_files 138868
Opened_table_definitions 117810
Opened_tables 126475
Queries 1062631
Questions 1062631
Select_full_join 1211
Select_range 17271
Select_scan 27709
Slow_queries 4
Sort_range 215835
Sort_rows 723648
Sort_scan 20726
Table_locks_immediate 1055875
Threads_cached 21
Threads_connected 4
Threads_created 25
Threads_running 1
Uptime 4148
Uptime_since_flush_status 4148

And finally, ulimit -a:

ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 64613
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 64613
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Does anything pop out that is clearly mis-configured?

Correct use of VOLATILE COST (and ROWS) indications in Postgresql stored procedure

Posted: 22 Mar 2013 08:21 PM PDT

While looking at several examples of pl/python and pl/pgsql, I have seen many - but not all - using a VOLATILE COST declaration, i.e.:

CREATE OR REPLACE FUNCTION my_function()
  RETURNS setof record AS
$BODY$
-- code
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;

Searching for more information about VOLATILE COST, I have discovered (at first sight) that roughly 90% of web examples use VOLATILE COST 100, and some use VOLATILE COST 1 (for ROWS it's 1000).

As I understand it, this indication helps the query planner decide how to set priorities in short-circuit boolean operations.

Is it premature optimization if I start giving an estimated COST or ROWS for each of my stored procedures? Should I only do it when I want to optimize a certain query? Is there an art to choosing a good value for COST?

I know about the EXPLAIN command, which I have not learnt yet. Is this command helpful for estimating COST?

PostgreSQL CREATE EXTENSION file not found error on Mac OS

Posted: 22 Mar 2013 08:02 PM PDT

On PostgreSQL 9.2.3 when trying this:

CREATE EXTENSION btree_gist;  

I get this error:

ERROR: could not open extension control file "/opt/local/share/postgresql92/extension/btree_gist.control": No such file or directory

Running SELECT pg_available_extensions(); shows that it is indeed not available.

Unable to run union in parallel on SQL Server

Posted: 22 Mar 2013 03:40 PM PDT

I am unable to get this fairly simple query to parallelize the union operation:

select va.ObjectId, 0 as IsFlag
from Oav.ValueArray va
where va.PropertyId = @pPropertyId
  and va.value in (select value from #MatchValues)
group by va.ObjectId
having count(distinct va.Value) = (select count(*) from #MatchValues)

union all

select distinct codv.ObjectId, 1 as IsFlag
from Pub.OtherTable codv
where PropertyId = 2551
  and Id in (select value from #Ids)
  and Flag = @pFlag
  and Value in (select value from #MatchValues)
group by codv.ObjectId
having count(distinct codv.Value) = (select count(*) from #MatchValues)

Running with MAXDOP 1 gives an expected 0.8s (0.5 + 0.3). I was hoping that increasing MAXDOP to 2 would optimize for the biggest gain by using one processor for each side, but that is not the case. MAXDOP 0 on a lightly loaded 12-CPU machine (all cores at ~4%) only results in parallel execution about 10% of the time.

Is there a way to weight the hints so that parallelization at the union point is the most important? Does the syntax support separate MAXDOP's for each side?

I have tried (concat/hash/merge union) with little change.

#MatchValues is usually a small table (~10 rows).

Generate script to automate renaming of default constraints

Posted: 22 Mar 2013 03:03 PM PDT

Background: Some of our default column constraints were generated without explicit names, so we get fun names that vary from server to server like: DF__User__TimeZoneIn__5C4D869D

I would prefer to have them all manageable with a consistent naming like DF_Users_TimeZoneInfo so that we can ensure that the appropriate constraints exist on future target tables (like in RedGate compare, or even just visually)

I've got a script that mostly works for what I want:

select 'sp_rename N''[' + s.name + '].[' + d.name + ']'', N''[DF_' + t.name + '_' + c.name + ']'', ''OBJECT'';'
from sys.tables t
join sys.default_constraints d
    on d.parent_object_id = t.object_id
join sys.columns c
    on c.object_id = t.object_id
    and c.column_id = d.parent_column_id
join sys.schemas s
    on t.schema_id = s.schema_id
WHERE d.NAME like 'DF[_][_]%'

But this just gives me a resultset, and not something I can actually pass into an exec or whatever.

How can I make this so I can just execute those sp_rename scripts without having to resort to copying out all the returned elements and pasting them into a new query window and running them again? Trying to save as many keystrokes as possible so I can correct this in many environments.
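A sketch of the usual concatenate-and-execute pattern built on the same query; note the new name is passed without brackets, since sp_rename would otherwise make them part of the constraint name:

DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql +
    'EXEC sp_rename N''[' + s.name + '].[' + d.name + ']'', ' +
    'N''DF_' + t.name + '_' + c.name + ''', ''OBJECT'';' + CHAR(10)
FROM sys.tables t
JOIN sys.default_constraints d ON d.parent_object_id = t.object_id
JOIN sys.columns c ON c.object_id = t.object_id
                  AND c.column_id = d.parent_column_id
JOIN sys.schemas s ON t.schema_id = s.schema_id
WHERE d.name LIKE 'DF[_][_]%';

PRINT @sql;              -- review first
EXEC sp_executesql @sql;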


Limit connection rights on SQL Server

Posted: 22 Mar 2013 01:55 PM PDT

I have an app to deploy in production that uses 'honor system' security. That is, all users connect to the DB using a SQL user/password credential, and the app manages permissions itself. The latter part doesn't bother me as much as the fact that the connection object contains embedded credentials and can be copied around freely. I'm trying to find some way to limit connections to a more limited set of clients. I can create firewall rules to limit by IP, of course. Is there any way to 'prequalify' SQL logins, either by machine account or by domain membership?
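One hedged possibility for the machine angle is a server-level logon trigger; note that HOST_NAME() is supplied by the client and can be spoofed, so this is a speed bump rather than real security. The login and machine names below are hypothetical:

CREATE TRIGGER limit_app_login
ON ALL SERVER WITH EXECUTE AS 'sa'
FOR LOGON
AS
BEGIN
    IF ORIGINAL_LOGIN() = 'app_user'
       AND HOST_NAME() NOT IN ('APPSRV01', 'APPSRV02')
        ROLLBACK;  -- refuses the connection
END;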

php cannot talk to mysql from apache [closed]

Posted: 22 Mar 2013 12:46 PM PDT

I have loaded a new CentOS system on which I have installed PHP 5.4.13, Apache 2.2.24, and MySQL 5.6.10. I get an error from mysqli_connect() that it cannot connect, and a 500 error is returned. However, I can connect from the same program run from the command line without error.

SELECT every second row without auto_increment

Posted: 22 Mar 2013 02:17 PM PDT

How can I select every second row in MySQL, without using any data of the table or an auto_increment column? Something like:

SELECT EVERY SECOND ROW FROM `table`;  

Thank you!
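Worth noting that without an ORDER BY a table has no defined row order, so "every second row" is only meaningful relative to some ordering. A sketch that numbers the rows on the fly with a user variable and keeps the even-numbered ones:

SELECT t.*
FROM (
    SELECT `table`.*, @rownum := @rownum + 1 AS rn
    FROM `table`, (SELECT @rownum := 0) init
) t
WHERE t.rn % 2 = 0;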

Modeling a ticket system based on a set of disparate kinds of tickets?

Posted: 22 Mar 2013 11:42 AM PDT

I'm working on a project that allows for the creation of "support tickets." These are not confined to something that needs fixing, however, so in a way they could more accurately be called "jobs."

For the frontend of the project, the tickets are to be created out of predefined "templates." I've listed three examples below:

  1. Removing a discontinued product(s) from our online storefronts. This template would require information on which vendor the discontinued product(s) belong to, which product(s) is/are discontinued, and the reasoning for discontinuing them.
  2. Resolving product upload errors. Information would need to be provided on the upload's batch id, the number of errors needing fixing, and which vendor the errors came from.
  3. Fixing a workstation. Information on the workstation number, what the problem is, and the urgency are fields that would be required.

Of course, each kind of ticket would share common attributes like the name of the ticket (a short summary of the issue/assignment), the issuer's user_id, the deadline, whether or not it has been resolved, etc.

I am trying to model this system relationally, but I am not sure if it's appropriate. The biggest struggle I'm having is how I can model and relate the different "kinds" of tickets. This seems perfect for some kind of inheritance, but should this be stored in a relational database?

It has felt natural to try some sort of EAV model, but I've heard that this should be avoided like the plague (I don't know whether or not this is an accurate evaluation).

Here is a diagram of my current attempt:


I've included the rest of the relations in the image for context.

The accompanying (generic) SQL code as generated by SQLEditor:

CREATE TABLE actions (
  action_id INTEGER NOT NULL AUTO_INCREMENT UNIQUE,
  name VARCHAR NOT NULL,
  PRIMARY KEY (action_id)
);

CREATE TABLE departments (
  department_id INTEGER NOT NULL AUTO_INCREMENT UNIQUE,
  name VARCHAR NOT NULL UNIQUE,
  PRIMARY KEY (department_id)
);

CREATE TABLE entities (
  entity_id INTEGER NOT NULL AUTO_INCREMENT UNIQUE,
  entity_name VARCHAR NOT NULL UNIQUE,
  PRIMARY KEY (entity_id)
);

CREATE TABLE activity_types (
  activity_type_id INTEGER NOT NULL AUTO_INCREMENT UNIQUE,
  entity_id INTEGER NOT NULL,
  name VARCHAR NOT NULL,
  PRIMARY KEY (activity_type_id)
);

CREATE TABLE objectives (
  objective_id INTEGER NOT NULL AUTO_INCREMENT UNIQUE,
  name VARCHAR NOT NULL,
  PRIMARY KEY (objective_id)
);

CREATE TABLE statuses (
  status_id INTEGER NOT NULL AUTO_INCREMENT UNIQUE,
  name VARCHAR,
  PRIMARY KEY (status_id)
);

CREATE TABLE notifications (
  notification_id INTEGER NOT NULL AUTO_INCREMENT UNIQUE,
  target_id INTEGER NOT NULL,
  active INTEGER NOT NULL,
  action_id INTEGER,
  closed INTEGER NOT NULL,
  activity_id INTEGER NOT NULL,
  PRIMARY KEY (notification_id)
);

CREATE TABLE ticket_keys (
  ticket_key_id INTEGER NOT NULL AUTO_INCREMENT UNIQUE,
  name VARCHAR NOT NULL UNIQUE,
  PRIMARY KEY (ticket_key_id)
);

CREATE TABLE tasks (
  task_id INTEGER NOT NULL AUTO_INCREMENT UNIQUE,
  ticket_id INTEGER NOT NULL,
  name VARCHAR NOT NULL,
  resolved INTEGER NOT NULL,
  PRIMARY KEY (task_id)
);

CREATE TABLE ticket_vals (
  task_val_id INTEGER NOT NULL AUTO_INCREMENT UNIQUE,
  ticket_key_id INTEGER NOT NULL UNIQUE,
  ticket_id INTEGER NOT NULL,
  val VARCHAR NOT NULL,
  PRIMARY KEY (task_val_id)
);

CREATE TABLE users (
  user_id INTEGER NOT NULL AUTO_INCREMENT UNIQUE,
  username VARCHAR NOT NULL UNIQUE,
  department_id INTEGER NOT NULL,
  first_name VARCHAR NOT NULL,
  last_name VARCHAR NOT NULL,
  hash VARCHAR NOT NULL,
  salt VARCHAR NOT NULL UNIQUE,
  PRIMARY KEY (user_id)
);

CREATE TABLE comments (
  comment_id INTEGER NOT NULL AUTO_INCREMENT UNIQUE,
  ticket_id INTEGER NOT NULL,
  commenter_user_id INTEGER NOT NULL,
  comment VARCHAR NOT NULL,
  PRIMARY KEY (comment_id)
);

CREATE TABLE targets (
  target_id INTEGER NOT NULL AUTO_INCREMENT UNIQUE,
  ticket_id INTEGER,
  task_id INTEGER,
  objective_id INTEGER,
  user_id INTEGER,
  department_id INTEGER,
  PRIMARY KEY (target_id)
);

CREATE TABLE sessions (
  session_id INTEGER NOT NULL AUTO_INCREMENT UNIQUE,
  user_id INTEGER NOT NULL,
  time_in VARCHAR NOT NULL UNIQUE,
  time_out VARCHAR UNIQUE,
  duration INTEGER,
  PRIMARY KEY (session_id)
);

CREATE TABLE tickets (
  ticket_id INTEGER NOT NULL AUTO_INCREMENT UNIQUE,
  date_created DATE NOT NULL,
  name VARCHAR NOT NULL UNIQUE,
  issuer_user_id INTEGER NOT NULL,
  deadline VARCHAR NOT NULL,
  resolved INTEGER NOT NULL,
  recurring INTEGER NOT NULL,
  recur_interval VARCHAR,
  objective_id INTEGER,
  status_id INTEGER NOT NULL,
  PRIMARY KEY (ticket_id)
);

CREATE TABLE activities (
  activity_id INTEGER NOT NULL AUTO_INCREMENT UNIQUE,
  user_id INTEGER,
  activity_type_id INTEGER NOT NULL,
  source_id INTEGER,
  PRIMARY KEY (activity_id)
);

ALTER TABLE activity_types ADD FOREIGN KEY (entity_id) REFERENCES entities (entity_id);
ALTER TABLE notifications ADD FOREIGN KEY (target_id) REFERENCES targets (target_id);
ALTER TABLE notifications ADD FOREIGN KEY (action_id) REFERENCES actions (action_id);
ALTER TABLE notifications ADD FOREIGN KEY (activity_id) REFERENCES activities (activity_id);
ALTER TABLE ticket_keys ADD FOREIGN KEY (ticket_key_id) REFERENCES ticket_vals (ticket_key_id);
ALTER TABLE tasks ADD FOREIGN KEY (ticket_id) REFERENCES tickets (ticket_id);
ALTER TABLE ticket_vals ADD FOREIGN KEY (ticket_key_id) REFERENCES ticket_keys (ticket_key_id);
ALTER TABLE ticket_vals ADD FOREIGN KEY (ticket_id) REFERENCES tickets (ticket_id);
ALTER TABLE users ADD FOREIGN KEY (department_id) REFERENCES departments (department_id);
ALTER TABLE comments ADD FOREIGN KEY (ticket_id) REFERENCES tickets (ticket_id);
ALTER TABLE comments ADD FOREIGN KEY (commenter_user_id) REFERENCES users (user_id);
ALTER TABLE targets ADD FOREIGN KEY (ticket_id) REFERENCES tickets (ticket_id);
ALTER TABLE targets ADD FOREIGN KEY (task_id) REFERENCES tasks (task_id);
ALTER TABLE targets ADD FOREIGN KEY (objective_id) REFERENCES objectives (objective_id);
ALTER TABLE targets ADD FOREIGN KEY (user_id) REFERENCES users (user_id);
ALTER TABLE targets ADD FOREIGN KEY (department_id) REFERENCES departments (department_id);
ALTER TABLE sessions ADD FOREIGN KEY (user_id) REFERENCES users (user_id);
ALTER TABLE tickets ADD FOREIGN KEY (issuer_user_id) REFERENCES users (user_id);
ALTER TABLE tickets ADD FOREIGN KEY (objective_id) REFERENCES objectives (objective_id);
ALTER TABLE tickets ADD FOREIGN KEY (status_id) REFERENCES statuses (status_id);
ALTER TABLE activities ADD FOREIGN KEY (user_id) REFERENCES users (user_id);
ALTER TABLE activities ADD FOREIGN KEY (activity_type_id) REFERENCES activity_types (activity_type_id);

So the idea was to store all of the possible attributes in ticket_keys and relate the relevant attributes to a ticket through ticket_vals. But this seems to be more of a workaround than a solution: there is no concept of a "template" with this model. Perhaps that is not even necessary?

One of the reasons I felt it would be necessary to include the concept of a template is privileges: only certain users can see certain templates. For instance, only admin can open a ticket for uploading products from a new vendor. But it feels like it'd be a pain given my current setup. Maybe I could just add a privilege_required column to ticket_keys, but again it does not seem quite right.

I would appreciate any advice regarding my naive attempt and whether or not the relational model is a good fit for this project. Is a document store more appropriate?

Can the OUTPUT clause create a table?

Posted: 22 Mar 2013 11:49 AM PDT

I'm doing an update like this:

UPDATE dbo.Table1  SET BirthDate = b.BirthDate  FROM Table1 a  JOIN Table2 b      ON a.ID = b.ID  

And I want to use the OUTPUT clause to back up my changes.

UPDATE dbo.Table1  SET BirthDate = b.BirthDate  OUTPUT       inserted.Id, inserted.BirthDate AS New_BirthDate,       deleted.BirthDate AS Old_BirthDate      INTO OutputTable  FROM Table1 a  JOIN Table2 b      ON a.ID = b.ID  

What I want to know is if there is a way for the OUTPUT clause to create the table OutputTable or do I have to make sure it already exists before running the statement?
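For reference, a sketch of the pre-creation route; unlike SELECT ... INTO, the OUTPUT ... INTO form requires the target table to exist before the statement runs, with compatible column types (the date types below are assumptions):

CREATE TABLE OutputTable
(
    Id            int,
    New_BirthDate date,
    Old_BirthDate date
);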

Why can't we write ddl statement directly into the PL/SQL block

Posted: 22 Mar 2013 01:51 PM PDT

Why can't we write DDL statements directly in a PL/SQL block? For example, when I write:

CREATE OR REPLACE PROCEDURE test IS
BEGIN
    truncate table table_name; -- error
END test;
/

But,

CREATE OR REPLACE PROCEDURE test IS
BEGIN
    execute immediate 'truncate table table_name'; -- works fine
END test;
/

Why does the second one execute successfully?

sql replication conflicts after migration from 2000 to 2008

Posted: 22 Mar 2013 10:32 AM PDT

I got a suggestion over at Stack Overflow to post here... grateful for any and all help.

Please bear with me; I think this might take a while to explain. For many years now my company has hosted a solution for a client involving a complex web application with a smaller mobile solution, consisting of IIS 6 for the web app, SQL 2000 on its own server, and a Visual Studio 2005 Pocket PC app replicating with SQL via merge replication. This whole time the mobile solution has been very solid and did not require many updates, so we have replicated with sscesa20.dll the entire time.

We recently migrated this entire solution as follow:

  • Web Box - New Win Server 2008 R2 running IIS 7.5
  • SQL Box - New Win Server 2008 R2 running SQL 2008
  • Mobile app - small updates converted to Visual Studio 2008 and Windows Mobile 5

The new web box received the 64 bit version of SQL Server Compact 3.5 tools and we now call sqlcesa35.dll from the mobile device to perform merge replication.

The basic idea of the entire process is that mobile field users get assigned safety inspections to perform on buildings. When a facility in the system needs an inspection, an inspection record is created via the web app in the DB. A status flag is set such that HOST_NAME() is utilized to make sure only records for a given inspector with this particular status will show up on their mobile device. The user can synch multiple times in the field, sending their data up to the SQL Server/web app and receiving more inspections down or other updates such as look-up table data... typical merge replication here, and it has been working great for years. Once the field user changes the status of the inspection, it will travel from the mobile device to the SQL DB and be removed from their iPaq. The inspection has additional workflow on the web app from here on out.

Now on to the problem. We migrated everything, publishing the exact same subset of tables with the same joins/filters. The settings on the publication, as far as I can tell, are the same. However, when a user gets a new inspection down to the handheld for the very first time, enters data, then synchronizes back to the DB, every record has a conflict. Since we have default conflict resolution, the publisher wins and the data collected in the field is lost. The inspection now looks blank, just as it did when it first came down to the mobile device. If the user synchs again, with or without changes on the mobile (subscriber), all is well. Any future changes from the mobile device are intact.

It is as if the web/DB data is newer than the handheld data. I am 100% sure it is not. I have looked at table triggers, web app logic, etc. We were very careful not to include any application changes to the DB/web app/mobile app with respect to data manipulation during this migration.

Here is a summary of the order of operations: new record created in the DB >> mobile user receives data >> mobile user updates data >> synchronizes - data is lost. Conflicts show up for all data lost.

From here on out any additional mobile changes are captured. Merge replication works in both directions flawlessly.

Thanks for taking the time to read; please help. I am stuck after 3 days.

Materialized view log not updating

Posted: 22 Mar 2013 12:26 PM PDT

Under what conditions would a materialized view log not update?

I must be missing something. I've got a 10gR2 database. I can create the following MV logs:

CREATE MATERIALIZED VIEW LOG ON linked_t1
  WITH SEQUENCE, ROWID, PRIMARY KEY INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON t1
  WITH SEQUENCE, ROWID, PRIMARY KEY INCLUDING NEW VALUES;

Then if I do an insert:

INSERT INTO linked_t1 (id, link_id, link_analysis_id) VALUES ('11111111','22222222','0000000001');  

I see my insert in the MV Log:

SQL> select * from MLOG$_LINKED_T1 t;

LINK_ID  M_ROW$$            SEQUENCE$$ SNAPTIME$$  DMLTYPE$$ OLD_NEW$$ CHANGE_VECTOR$$
-------- ------------------ ---------- ----------- --------- --------- ---------------
11111111 AAHvaaAAHAABvDsACR         11 1/1/4000    I         N         FE

Not really sure what to make of the 1/1/4000 snaptime.

As soon as I create a materialized view:

CREATE MATERIALIZED VIEW mv$sub2 (c1, c2, m1, m2, m3, m4, m5)
USING INDEX TABLESPACE idx
REFRESH FAST WITH ROWID ON COMMIT
DISABLE QUERY REWRITE AS
SELECT la.rowid c1, ar.rowid c2, ar.analysis_id m1, ar.id m2, ar.valid m3, la.analysis_id m4, la.id m5
FROM linked_t1 la, t1 ar
WHERE ar.analysis_id = la.analysis_id;

The materialized view log on LINKED_T1 is emptied, and will no longer track any DML! Why is that? I have not made any changes to T1 for this test.

I should mention that this MV was suggested by DBMS_ADVISOR.TUNE_MVIEW.

SQL Server Sandbox

Posted: 22 Mar 2013 02:34 PM PDT

I'm attempting to set up a sandbox for our report developers to do their work in. My current plan is to "reset" the database every evening, but I'm not sure how to go about doing so. What I mean by reset is that I want to essentially drop any user tables, views, stored procedures, etc. from all but one database on the server. I suppose another option would be to drop and recreate the database as well, but I'm pretty sure that would mean re-granting access to all of the appropriate AD groups/people too.

I really don't know what would be the best way to go about doing this so I'm hoping some of you will be able to provide some good ideas/suggestions. Thanks.

For clarity, we essentially want to do this with our database: http://try.discourse.org/t/this-site-is-a-sandbox-it-is-reset-every-day/57. The only difference is that we don't want to recreate our users every day.

Version: SQL Server 2008
Edition: Developer & Enterprise
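One hedged option on Developer/Enterprise edition is a database snapshot taken while the sandbox is in its clean state; the nightly "reset" is then a single restore that preserves users and permissions. Names and paths below are hypothetical, and the snapshot needs one file entry per data file of the source database:

-- Take the snapshot once, after setting the database up
CREATE DATABASE Sandbox_Clean ON
    (NAME = 'Sandbox_Data', FILENAME = 'D:\Snap\Sandbox_Clean.ss')
AS SNAPSHOT OF Sandbox;

-- Nightly job: revert everything to the clean state
RESTORE DATABASE Sandbox FROM DATABASE_SNAPSHOT = 'Sandbox_Clean';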

Backup Access Database Daily/Weekly

Posted: 22 Mar 2013 08:36 PM PDT

We have a Microsoft Access database split into back end/front end. What I am looking for is a way to automatically back up these files on a daily or weekly basis. What is the best way to go about this? We don't want to have to worry about backups for the database, just know that they occur automatically on a schedule.

Thanks.

PostgreSQL EXCLUDE USING error: Data type integer has no default operator class

Posted: 22 Mar 2013 07:48 PM PDT

In PostgreSQL 9.2.3 I am trying to create this simplified table:

CREATE TABLE test (
    user_id INTEGER,
    startend TSTZRANGE,
    EXCLUDE USING gist (user_id WITH =, startend WITH &&)
);

But I get this error:

ERROR:  data type integer has no default operator class for access method "gist"
HINT:  You must specify an operator class for the index or define
       a default operator class for the data type.

The PostgreSQL docs use this example, which does not work for me (http://www.postgresql.org/docs/9.2/static/rangetypes.html):

CREATE TABLE room_reservation (
    room text,
    during tsrange,
    EXCLUDE USING gist (room WITH =, during WITH &&)
);

ERROR:  data type integer has no default operator class for access method "gist"
HINT:  You must specify an operator class for the index or define
       a default operator class for the data type.

And this one, which does not work for me either (http://www.postgresql.org/docs/9.2/static/btree-gist.html):

CREATE TABLE zoo (
    cage   INTEGER,
    animal TEXT,
    EXCLUDE USING gist (cage WITH =, animal WITH <>)
);

ERROR:  data type text has no default operator class for access method "gist"
HINT:  You must specify an operator class for the index or define
       a default operator class for the data type.

I am able to create this table without any problem:

CREATE TABLE test (
    user_id INTEGER,
    startend TSTZRANGE,
    EXCLUDE USING gist (startend WITH &&)
);

and this:

CREATE TABLE test (
    user_id INTEGER,
    startend TSTZRANGE,
    EXCLUDE USING btree (user_id WITH =)
);

I've spent quite a bit of time searching for how to make this work, or why it won't. Any ideas?
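For what it's worth, the usual cause of this error on 9.2 is that the btree_gist extension, which supplies the GiST operator classes for plain scalar types such as integer and text, is not installed. A sketch of the likely fix:

-- Install the extension once per database (requires the contrib packages),
-- after which the mixed-type exclusion constraint should be accepted.
CREATE EXTENSION btree_gist;

CREATE TABLE test (
    user_id  INTEGER,
    startend TSTZRANGE,
    EXCLUDE USING gist (user_id WITH =, startend WITH &&)
);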

sql replication conflicts after migration from 2000 to 2008

Posted: 22 Mar 2013 02:12 PM PDT

Please bear with me, I think this might take a while to explain. For many years now my company has hosted a solution for a client: a complex web application with a smaller mobile component, consisting of IIS 6 for the web app, SQL Server 2000 on its own server, and a Visual Studio 2005 Pocket PC app replicating with SQL Server via merge replication. The mobile solution has been very solid and did not require many updates, so we have replicated with sscesa20.dll the entire time.

We recently migrated this entire solution as follow:

  • Web Box - New Win Server 2008 R2 running IIS 7.5
  • SQL Box - New Win Server 2008 R2 running SQL 2008
  • Mobile app - small updates, converted to Visual Studio 2008 and Windows Mobile 5

The new web box received the 64 bit version of SQL Server Compact 3.5 tools and we now call sqlcesa35.dll from the mobile device to perform merge replication.

The basic idea of the entire process is that mobile field users get assigned safety inspections to perform on buildings. When a facility in the system needs an inspection, an inspection record is created in the DB via the web app. A status flag is set, and HOST_NAME() is used in the filter so that only records for a given inspector with that particular status show up on their mobile device. The user can synch multiple times in the field, sending their data up to the SQL Server/web app and receiving more inspections or other updates (such as lookup table data) down - typical merge replication, and it has worked well for years. Once the field user changes the status of the inspection, it travels from the mobile device to the SQL DB and is removed from their iPaq. The inspection has additional workflow on the web app from there on out.

Now on to the problem. We migrated everything, publishing the exact same subset of tables with the same joins/filters. As far as I can tell, the publication settings are the same. However, when a user gets a new inspection down to the handheld for the very first time, enters data, then synchronizes back to the DB, every record has a conflict. Since we have default conflict resolution, the publisher wins and the data collected in the field is lost. The inspection now looks blank, just as it did when it first came down to the mobile device. If the user synchs again, with or without changes on the mobile (subscriber), all is well. Any future changes from the mobile device are intact.

It is as if the web/DB data is newer than the handheld data. I am 100% sure it is not. I have looked at table triggers, web app logic, etc. We were very careful not to include any application changes to the DB/web app/mobile app with respect to data manipulation during this migration.

Here is a summary of the order of operations: new record created in the DB >> mobile user receives data >> mobile user updates data >> synchronizes - data is lost. Conflicts show up for all of the lost data.

From here on out any additional mobile changes are captured. Merge replication works in both directions flawlessly.

Thanks for taking the time to read - please help. I am stuck after 3 days.
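For anyone digging into this, a diagnostic sketch against the publication database (the publication name is a placeholder) to see which articles logged conflicts and to inspect the row-level detail:

-- List articles in the publication that have logged conflicts.
EXEC sp_helpmergearticleconflicts @publication = N'FieldInspections';

-- SQL Server 2008 keeps row-level conflict detail here, including
-- the conflict type and the reason text for each losing row.
SELECT TOP (50) *
FROM dbo.MSmerge_conflicts_info;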

IntelliSense alternative using SSMS 2012 in a contained database as a contained user?

Posted: 22 Mar 2013 11:59 AM PDT

According to this answer about contained database disadvantages:

If you connect to a contained database as a contained user, SSMS will not fully support IntelliSense. You'll get basic underlining for syntax errors, but no auto-complete lists or tooltips and all the fun stuff. I filed a bug about this issue, and it remains open.

So is there any way to get IntelliSense-style functionality? Can anyone vouch for a product that will work in this scenario?

Does the SQL Server Backup Database Command also backup views from a Database?

Posted: 22 Mar 2013 11:28 AM PDT

I have a backup job (SQL Server Agent job) which iterates through all of our SQL Server databases and invokes the BACKUP DATABASE command to create a .bak file on a different server. This other server has its own job iterating through all of the .baks and restoring them (with the RESTORE DATABASE command).

EDIT: The server running the RESTORE job restores the .baks into its own SQL Server instance.

The jobs report success in the SQL Agent job history, so I'm wondering whether these jobs should also back up and restore the views in the databases (they don't seem to have done so), or whether this is an indication that the scripts aren't quite working.
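For reference, BACKUP DATABASE is a page-level copy of the entire database, so views, stored procedures, and all other schema objects are always included; if views are missing after the restore, something else is wrong. A minimal sketch of the round trip (names and paths are placeholders):

BACKUP DATABASE SomeDb
    TO DISK = N'D:\Backups\SomeDb.bak'
    WITH INIT;

-- On the other server:
RESTORE DATABASE SomeDb
    FROM DISK = N'D:\Backups\SomeDb.bak'
    WITH REPLACE;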

postgres backup / restore: restored database much smaller?

Posted: 22 Mar 2013 09:02 PM PDT

I am worried that my restored database is very different from the original:

#check size of postgres database
postgres@db1:/tmp$ psql -c "select pg_size_pretty(pg_database_size('test_db'));"
 pg_size_pretty
----------------
 2105 MB
(1 row)

#backup database
postgres@db1:/tmp$ pg_dump -Fc test_db > test_db_Fc.dump

#rename postgres database (i.e. park it nearby)
postgres@db1:/tmp$ psql -c "alter database test_db rename to test_db_20130322;"
ALTER DATABASE

#restore test_db
postgres@db1:/tmp$ pg_restore -Fc -C -d postgres test_db_Fc.dump

#check size of restored postgres database
postgres@db1:/tmp$ psql -c "select pg_size_pretty(pg_database_size('test_db'));"
 pg_size_pretty
----------------
 257 MB
(1 row)

The original DB is many times larger than the restored one. What is going on here? As far as I can tell, the website that test_db serves is still working perfectly well after a restore, but I need to know what's up before I use a backup in a live context.

FYI, if I run vacuumdb on each database there seems to be no change in database size.

[Addendum, added later] In the tradition of RTFM I have gone hunting in the PostgreSQL manual. Here are some more pieces to the puzzle:

#is the autovacuum switched on?
postgres@db1:/tmp$ psql -c "SHOW autovacuum;"
 autovacuum
------------
 on
(1 row)

#the "track counts" parameter is needed by autovacuum, which uses
#statistics from database activity to know where to do its job
postgres@db1:/tmp$ psql -c "SHOW track_counts;"
 track_counts
--------------
 on
(1 row)

#is there an autovacuum daemon resident in memory?
postgres@db1:/tmp$ ps -ef | grep 'autovacuum'
postgres  1261  1021  0 Jan23 ?        00:08:27 postgres: autovacuum launcher process
postgres 18347 18149  0 00:33 pts/0    00:00:00 grep autovacuum

#what's been happening on the live server?
postgres@LIVEdb1:/tmp$ psql -c "SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze from pg_stat_all_tables;"
#result is a list of 65 tables (out of about 300), all empty (no dates at all)

Gratefully following up on @craig-ringer's advice to look into VACUUM FULL I turned to PostgreSQL documentation, (paraphrased) "... In the worst case where your disk is nearly full, VACUUM FULL may be the only workable alternative. ... (but) the usual goal of routine vacuuming is to avoid needing VACUUM FULL. The autovacuum daemon attempts to do standard VACUUMs often enough to maintain steady-state usage of disk space ..."

I will follow @chris-travers's advice and map out the number of rows in the tables of each version of the database. I think in my case it'd be fair to say that VACUUM FULL would relieve the pressure on disk space and make original_vs_restored look better, but there's still this red flag of excessive bloat. I think autovacuum isn't doing anything, and that's a worry! Thanks for the guidance so far, it's fascinating.

Primary Key efficiency

Posted: 22 Mar 2013 02:55 PM PDT

If I need a table to hold Point of Sale transactions, and am told I need to store:

Country ID
Store Number
POS Terminal Number
Transaction Date
Item Code
Teller ID
Another Field
More Fields

Now, in this case, the uniqueness would be:

Country ID, Store Number, POS Terminal Number, Transaction Date, Item Code

I am always unsure whether it's best to have an identity column as the primary key - in this case, maybe TransactionID INT NOT NULL PRIMARY KEY - and then a unique constraint across the unique fields.

Or, should a primary key be created across all the unique fields?

The benefit of the TransactionId, I guess, would be joins: to join back to a transaction, you just use the single field. I can't see the benefit of the PK across a number of fields (saving the space of an extra column?).
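For illustration, the surrogate-key variant being described would look something like this (a sketch; the column types are assumptions):

CREATE TABLE dbo.PosTransaction (
    TransactionId     INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    CountryId         INT         NOT NULL,
    StoreNumber       INT         NOT NULL,
    PosTerminalNumber INT         NOT NULL,
    TransactionDate   DATETIME    NOT NULL,
    ItemCode          VARCHAR(20) NOT NULL,
    TellerId          INT         NOT NULL,
    -- The natural key is still enforced, just not used for joins.
    CONSTRAINT UQ_PosTransaction_Natural UNIQUE
        (CountryId, StoreNumber, PosTerminalNumber, TransactionDate, ItemCode)
);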

SQL Server split mdf into multiple files

Posted: 22 Mar 2013 02:05 PM PDT

I have a database called example.mdf with a total size of 1 GB which suffers from performance issues. I checked the allocated hardware and it is higher than required, and I double-checked the design and everything looks normal. But when I look at the data files in their physical location (C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\example.mdf), I find that the database is split into multiple files: example_1.mdf, example_2.mdf, example_3.mdf, up to example_7.mdf.

I have another database file on the same SQL Server that has the same issue.

Why does this happen? Does this affect performance? How can I prevent or stop SQL Server from splitting my .mdf files? Moreover, I need to combine the already-split files back together.
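SQL Server never splits an .mdf on its own; extra files must have been added to the filegroup at some point. A diagnostic sketch to see what the database actually consists of, plus the usual (hedged) way to merge a secondary data file back - the logical file name below is a placeholder:

USE example;

-- What files does the database really have?
SELECT name, physical_name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- To merge a secondary data file back: empty its contents into the
-- remaining files, then drop it.
DBCC SHRINKFILE (example_2, EMPTYFILE);
ALTER DATABASE example REMOVE FILE example_2;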

Even when user has 'bulkadmin' role, query says user does not have role - SQL Server 2012

Posted: 22 Mar 2013 11:02 AM PDT

I am facing a weird issue with SQL Server 2012.

I have a user (say RS\sqluser) which has the bulkadmin role.

Now when I run this query:

Select IS_SRVROLEMEMBER('bulkadmin', 'RS\sqluser')

the output is 0. But when I execute this query:

sp_helpsrvrolemember 'bulkadmin'

I can see RS\sqluser present in the list.

Can someone please help me understand why the outputs differ? Is it a bug in SQL Server 2012? (It can't really be a bug, as the same query with a different user works fine.)
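A cross-check sketch that reads the raw role-membership catalog directly (the same data sp_helpsrvrolemember reports), which can help establish whether the membership row really exists for that login:

SELECT r.name AS role_name, m.name AS member_name, m.type_desc
FROM sys.server_role_members AS rm
JOIN sys.server_principals AS r ON r.principal_id = rm.role_principal_id
JOIN sys.server_principals AS m ON m.principal_id = rm.member_principal_id
WHERE r.name = N'bulkadmin';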

Table Size Analysis on SQL Server 2000

Posted: 22 Mar 2013 05:02 PM PDT

Our SQL Server 2000 database .mdf file is 27 GB, which seems much larger than is plausible. Using the following query we tried to analyse table sizes:

select cast(object_name(id) as varchar(50)) AS name,
    sum(CASE WHEN indid < 2 THEN rows END) AS rows,
    sum(reserved) * 8 AS reserved,
    sum(dpages) * 8 AS data,
    sum(used - dpages) * 8 AS index_size,
    sum(reserved - used) * 8 AS unused
from sysindexes with (nolock)
where indid in (0, 1, 255) and id > 100
group by id with rollup
order by sum(reserved) * 8 desc

The results were as follows:

Name       Rows       Reserved   Data      Index_Size  Unused
NULL       15274279   26645456   5674592   17361464    3609400
BigTable   875966     16789712   471096    13349816    2968800
  1. How can we find out which objects are causing this massive NULL space usage?
  2. It seems that approx 26GB are "reserved" for NULL, 16GB for BigTable - is this basically a waste of space or are real records involved?
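Worth noting when reading that output: the NULL row is produced by GROUP BY id WITH ROLLUP, so it is the grand total across all tables rather than a separate object, and the figures are in KB (sum(reserved) * 8). A per-table cross-check that works on SQL Server 2000:

-- Reports rows, reserved, data, index and unused space for one table.
EXEC sp_spaceused 'BigTable';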

MySQL General log not working on Mac OS X

Posted: 22 Mar 2013 12:47 PM PDT

I want to log all queries to the database running on my MacBook Pro (Mac OS X 10.8.1). I created a file /etc/my.cnf with the following content:

[mysqld]
general_log=1
general_log_file=/Users/wdb/mysql.log

I restarted MySQL and tried some queries on the mysql command line, but nothing gets logged. The file is not getting created. I also created the file myself afterwards to test with touch, but the file remains empty.

When I check, MySQL did read my settings:

mysql> show variables like '%general_log%';
+------------------+----------------------+
| Variable_name    | Value                |
+------------------+----------------------+
| general_log      | ON                   |
| general_log_file | /Users/wdb/mysql.log |
+------------------+----------------------+
2 rows in set (0.01 sec)

Is there anything else I need to do? I suppose any query I type on the command line should get logged, right?

I am using MySQL 5.5.24
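One thing that may be worth checking (a guess, since the variables above look right): on 5.5 the log_output variable controls whether the general log goes to a file or to the mysql.general_log table. A sketch:

-- Where is MySQL sending the general log?
SHOW VARIABLES LIKE 'log_output';

-- Force file output and re-enable logging at runtime:
SET GLOBAL log_output = 'FILE';
SET GLOBAL general_log = 'ON';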

Handling growing number of Tenants in Multi-tenant Database Architecture

Posted: 22 Mar 2013 07:34 PM PDT

Handling a modest number of customers (tenants) in a common server with separate databases for each tenant's instance of the application is relatively straightforward and is normally the correct way to do this. Currently I am looking at the architecture for an application where each tenant has their own database instance.

However, the problem is that this application will have a large number of tenants (5,000-10,000) with a substantial number of users, perhaps 2,000 for a single tenant. We will need to support growing the system by several tenants every week.

In addition, all tenants and their users will be presented with a common login process (i.e. each tenant cannot have their own URL). To do this, I need a centralised login process and a means to dynamically add databases to the system and register users.

  • How could the registration and database creation process be automated robustly?

  • Is the process of creating and registering tenants' databases on the system likely to cause performance or locking issues? If you think this could be an issue, can anyone suggest ways to mitigate it?

  • How can I manage central authentication in a way where user credentials will be associated with a particular tenant's database but the user can log in through a common page (i.e. all through the same login URL, but their home application will be on some specific tenant's database)? The tenants will have to be able to maintain their own logins and permissions, but a central login system must be aware of these. Can anyone suggest a way to do this? (See the sketch after this list.)

  • If I need to 'scale out' by adding multiple database servers, can anyone suggest what issues I might have to deal with in managing user identities across servers (impersonation etc.) and some way to mitigate those issues?
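On the central-authentication point above, one common pattern (a sketch only, with assumed names, not the poster's design) is a small routing database that maps each login to its home tenant database; the common login page authenticates against it and then directs the session to the right tenant:

CREATE TABLE dbo.Tenant (
    TenantId     INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    TenantName   NVARCHAR(200) NOT NULL UNIQUE,
    ServerName   SYSNAME NOT NULL,
    DatabaseName SYSNAME NOT NULL
);

CREATE TABLE dbo.AppUser (
    LoginName NVARCHAR(256) NOT NULL PRIMARY KEY,
    TenantId  INT NOT NULL REFERENCES dbo.Tenant (TenantId)
);

-- Login flow: look up the tenant for a user, then connect to that database.
SELECT t.ServerName, t.DatabaseName
FROM dbo.AppUser AS u
JOIN dbo.Tenant  AS t ON t.TenantId = u.TenantId
WHERE u.LoginName = N'someone@example.com';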

MySQL Replication Troubleshooting

Posted: 22 Mar 2013 03:23 PM PDT

Here's what I did. On the master, change /etc/my.cnf:

[mysqld]
server-id=1
log-bin=mysql-bin
innodb_flush_log_at_trx_commit=1
sync_binlog=1

Save and restart mysql, then log in.

mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY '123';
mysql> FLUSH TABLES WITH READ LOCK;
-- start copying the data from the master (MyISAM) over to the slave
mysql> SHOW MASTER STATUS;

On the slave, change /etc/my.cnf.

[mysqld]
server-id=2
log-bin=mysql-bin
innodb_flush_log_at_trx_commit=1
sync_binlog=1

Save and restart mysql. Create a database on the slave (a snapshot of the master) and move the data from master to slave.

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO
    MASTER_HOST='server2',
    MASTER_USER='repl',
    MASTER_PASSWORD='password',
    MASTER_LOG_FILE='xxxx',
    MASTER_LOG_POS=xxx;

On the master:

mysql> UNLOCK TABLES;

On the slave:

mysql> START SLAVE;
mysql> SHOW SLAVE STATUS \G

My problem is: I forgot to record the output of the command below:

mysql> SHOW MASTER STATUS;

So when I start the slave, the error is:

Got fatal error 1236 from master when reading data from binary log:
'Could not find first log file name in binary log index file'

What does the error mean, and how can I solve it?
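Error 1236 means the slave asked the master for a binary log file that is not listed in the master's binlog index, which is consistent with having lost the SHOW MASTER STATUS output and guessing the coordinates. The usual recovery, as a sketch (the file name and position below are placeholders and must come from a fresh SHOW MASTER STATUS taken while the copied data is consistent):

-- On the master (while the tables are still locked / the copy is consistent):
SHOW MASTER STATUS;

-- On the slave, re-point replication at the real coordinates:
STOP SLAVE;
CHANGE MASTER TO
    MASTER_LOG_FILE = 'mysql-bin.000003',
    MASTER_LOG_POS  = 107;
START SLAVE;
SHOW SLAVE STATUS \G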

SQL Server 2008 - Cross database performance on same physical machine and server instance

Posted: 22 Mar 2013 12:38 PM PDT

Is there any performance hit when doing a select across another DB on the same physical machine? So I have 2 databases on the same physical machine running within the same SQL 2008 instance.

For instance, in SomStoreProc on_this_db I run:

SELECT someFields FROM the_other_db.dbo.someTable

So far from what I have read on the internet, most people seem to indicate NO.
