Tuesday, March 26, 2013

[SQL Server 2008 issues] Enabling Trace Flag T1118

Enabling Trace Flag T1118

Posted: 25 Mar 2013 04:17 PM PDT

Hi, we are using SQL Server 2008 R2 Express. We heard that enabling T1118 is good for tempdb contention, but somewhere it said it is not much help from SQL Server 2005 onwards. We are thinking about enabling it. Are there any disadvantages to enabling T1118? Thank you

calculating period of time

Posted: 25 Mar 2013 06:37 PM PDT

hi, and here comes my next question :-) I need to calculate the sum of quantity by period of days, from the beginning of time until today, without writing a sum for each period of days. I know I need to use SUM and a date difference to calculate the quantity, but how do I tell the table: give me the first 5 days, then the next 5, etc.? I saw something called an interval (which of course I have no clue how to use), or maybe I should build a loop (which of course I don't know how to do in SQL either :w00t:). What would you recommend? :-)
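A minimal set-based sketch (no loop needed): integer-dividing the day difference from a fixed start date puts every row into a 5-day bucket that GROUP BY can use. The table and column names (dbo.Sales, SaleDate, Quantity) are hypothetical.

[code="sql"]
DECLARE @start date = '2013-01-01';  -- the "beginning of time" for your data

SELECT  DATEDIFF(day, @start, SaleDate) / 5 AS BucketNo,  -- 0 = days 0-4, 1 = days 5-9, ...
        SUM(Quantity)                       AS TotalQty
FROM    dbo.Sales
GROUP BY DATEDIFF(day, @start, SaleDate) / 5
ORDER BY BucketNo;
[/code]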

calculating time difference

Posted: 24 Mar 2013 08:20 PM PDT

hi, I have a table:

ID  UserID  Purpose  DateCreated
1   500     login    2013-03-24 14:39:43.273
2   501     login    2013-03-24 14:39:43.277
3   502     login    2013-03-24 14:39:43.277
4   503     login    2013-03-24 14:39:43.277
5   500     logout   2013-03-24 14:44:43.280
6   501     logout   2013-03-24 14:44:43.280

I need to calculate the total time spent for each user id. The result needs to look something like:

UserID  TotalTime
500     25
501     25
502     25
503     25

I've been trying a bunch of things, and I think I need to make cases for the login/logout, but it is not working... This is what I wrote; it is wrong on so many levels :w00t:

select userid, sum(datediff(mi, loginn, logoutn)) as TotalTime,
       case when Purpose like '%login%' then DateCreated else 0 end as [loginn],
       case when Purpose like '%logout%' then DateCreated else 0 end as [logoutn]
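A hedged sketch of one common fix: rather than CASE columns in the select list, aggregate the login and logout timestamps per user first and take the difference. This assumes exactly one login and one logout row per user in the period.

[code="sql"]
SELECT  UserID,
        DATEDIFF(minute,
                 MIN(CASE WHEN Purpose = 'login'  THEN DateCreated END),
                 MAX(CASE WHEN Purpose = 'logout' THEN DateCreated END)) AS TotalTime
FROM    dbo.UserLog   -- hypothetical table name
GROUP BY UserID;
[/code]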

how to copy all the scripts one sql server database server to another sql server database

Posted: 25 Mar 2013 05:06 PM PDT

Dear all, hope you are doing well. I am moving all my tables from the dailyreps database on server 10.101.__.17 to another server, 10.101.--.23. I copied all the tables successfully. Now I have multiple scripts in my source database, and I want to move all of them into my destination database. Can you please tell me how this is possible all at once, other than manually copying and pasting them one by one?
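One sketch of a starting point, assuming "scripts" means stored procedures, views, functions, and triggers: their definitions can be read in bulk from sys.sql_modules (encrypted modules excluded) and replayed on the destination. The SSMS Generate Scripts wizard covers the same ground interactively.

[code="sql"]
SELECT  o.type_desc, o.name, m.definition
FROM    sys.sql_modules AS m
JOIN    sys.objects     AS o ON o.object_id = m.object_id
ORDER BY o.type_desc, o.name;
[/code]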

Inserting child between descendants using HierarchyID datratype

Posted: 25 Mar 2013 06:10 PM PDT

Hi, we are using SQL Server 2008 R2 Express. We are trying to use the HierarchyID datatype to represent an organizational structure. We have to insert a child under a given parent. I found this query:

[code="sql"]
declare @root hierarchyid, @last_id hierarchyid, @id hierarchyid

select @root = orgnode
from [adventureworks].HumanResources.EmployeeDemo
where employeeid = 1

select @last_id = MAX(orgnode)
from [adventureworks].HumanResources.EmployeeDemo
where orgnode.GetAncestor(1) = @root

select @id = @root.GetDescendant(@last_id, NULL)

insert into [adventureworks].HumanResources.EmployeeDemo
values (@id, 12, 'seh', 'Application Developer', '1905-02-12');
[/code]

But this inserts the node at the end of a particular level, and I sometimes want to insert a node between other nodes at that level. How can I achieve that?
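GetDescendant takes two sibling nodes and returns a key that sorts between them, so a sketch of the in-between case looks like this (the sibling-lookup predicates are assumptions to adapt to your data):

[code="sql"]
declare @root hierarchyid, @left hierarchyid, @right hierarchyid

select @root = orgnode
from [adventureworks].HumanResources.EmployeeDemo
where employeeid = 1

-- pick the two existing children the new node should go between
select @left  = orgnode from [adventureworks].HumanResources.EmployeeDemo where employeeid = 5
select @right = orgnode from [adventureworks].HumanResources.EmployeeDemo where employeeid = 6

insert into [adventureworks].HumanResources.EmployeeDemo
values (@root.GetDescendant(@left, @right), 13, 'seh2', 'Tester', '1905-02-12');
[/code]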

SQL Server 2008 Transaction log backup issue...

Posted: 25 Mar 2013 05:49 PM PDT

Hello SQL Masters, greetings! I would just like to ask about transaction log backups: while the t-log is being backed up, can incoming transactions still write to the log? Thank you and best regards, dev1.bohol@gmail.com

Creation of auto Increment row

Posted: 25 Mar 2013 05:42 PM PDT

Hi, I am using SQL Server 2008 R2 Express. I want a column [b]id[/b] with an auto-incrementing value in my table, but I don't want to use the built-in auto-increment property, because I want to be able to delete a row and then insert a row in the middle, etc., and the built-in property would restrict those operations. So I want to increment the value manually while inserting a row. I thought I could do it like this:

[code="sql"]
declare @root int

select @root = max(id)
from [AdventureWorks].dbo.seh_test

insert into AdventureWorks.dbo.seh_test
values (@root + 1, 'seh')
[/code]

Is there any way to do the same thing in a single query? Or any optimizations for my query? Thank you
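A single-statement sketch: fold the MAX into the INSERT itself (the column list is assumed). ISNULL handles the empty-table case, and TABLOCKX is one blunt way to keep two concurrent inserts from reading the same MAX; without some such guard this pattern can produce duplicate ids under concurrency.

[code="sql"]
insert into AdventureWorks.dbo.seh_test (id, name)
select isnull(max(id), 0) + 1, 'seh'
from AdventureWorks.dbo.seh_test with (tablockx, holdlock);
[/code]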

select 200 column out of 250 column

Posted: 25 Mar 2013 04:57 PM PDT

If my table consists of 250 columns and I want to select 200 of them, is it possible to write the query without listing all 200 column names in the SELECT statement?
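There is no SELECT-everything-except syntax in T-SQL, but a sketch like this builds the column list for you from the catalog, so you only have to name the 50 columns to leave out (the table and excluded column names are placeholders):

[code="sql"]
SELECT STUFF((
    SELECT ', ' + QUOTENAME(name)
    FROM sys.columns
    WHERE object_id = OBJECT_ID('dbo.MyWideTable')
      AND name NOT IN ('SkipCol1', 'SkipCol2')   -- the unwanted columns
    ORDER BY column_id
    FOR XML PATH('')), 1, 2, '') AS ColumnList;  -- paste the result into your query
[/code]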

certificate error

Posted: 25 Mar 2013 05:32 PM PDT

I am unable to open the Reporting Services URL; I get the error "There is a problem with this website's security certificate". Please suggest how to resolve this. My OS is Windows Server 2008 R2.

Assistance with this query

Posted: 25 Mar 2013 10:01 AM PDT

I'm a bit new to SQL, and I'm attempting my first real project. Currently, my setup is: SQL Server 2008 R2 with a linked SQL Server 2000. A couple of points that I'm having trouble comprehending:

[li]Each record in the "employee" table has its own ACTIONDATE, which allows more than one record for the same person, so getting one result per person is proving difficult. EMPLOYEEIDs are unique to the user, so how would I word the query to grab the most recent ACTIONDATE for each EMPLOYEEID?[/li]
[li]Multiple joins are causing my queries to run for over 3 minutes. Each table has 100k records, minimum. How would I optimize a query with multiple joins on big tables? Or is there a better way to grab this data?[/li]

I'm attempting to build a hierarchy from a SQL Server that houses our workforce software and place it into my SQL Server 2008 R2 instance, to be utilized by another application. I want to grab all active employees (ENDDATE IS NULL) with the most recent ACTIONDATE for each active EMPLOYEEID. The query I'm attempting to run is:

[code="sql"]
SELECT DISTINCT employee.EMPLOYEEID, employee.FIRSTNAME, employee.LASTNAME,
       employee.STARTDATE, employee.EMAILADDRESS, org.ORGANIZATIONID, job.JOBTITLEID
FROM linkedServer.linkedDB.dbo.AUEMPLOYEE as employee
RIGHT JOIN linkedServer.linkedDB.dbo.AUEMPLOYEEORGANIZATION as org
    ON employee.employeeid = org.employeeid
RIGHT JOIN linkedServer.linkedDB.dbo.AUEMPLOYEEJOBTITLE as job
    ON employee.employeeid = job.employeeid
WHERE employee.ENDDATE IS NULL
[/code]
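For the "most recent ACTIONDATE per EMPLOYEEID" part, a common sketch is ROW_NUMBER over each employee's rows (this assumes the employee table has ACTIONDATE as described); pulling the remote tables into local staging tables first also tends to help far more than tuning a multi-table linked-server join:

[code="sql"]
WITH latest AS (
    SELECT e.*,
           ROW_NUMBER() OVER (PARTITION BY e.EMPLOYEEID
                              ORDER BY e.ACTIONDATE DESC) AS rn
    FROM linkedServer.linkedDB.dbo.AUEMPLOYEE AS e
    WHERE e.ENDDATE IS NULL
)
SELECT *
FROM latest
WHERE rn = 1;   -- one row per employee: the most recent ACTIONDATE
[/code]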

Import .txt file into SQL Table - Not working!

Posted: 25 Mar 2013 01:37 AM PDT

Hi there, I would really appreciate it if someone could help me. At the moment part of our customer information is held in Lotus Approach and part in SQL. I have exported the Approach information to Excel and am trying to import this into SQL. I have also saved the Excel sheet as a .txt file. My issue is that three of the Approach fields are free-text boxes with lots of carriage returns. I have tried the following query to enter the information into SQL:

BULK INSERT livedev.dbo.CarriageReturns
FROM 'C:\Users\janitor\Desktop\TestData\Test.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\r', FIRSTROW = 2)

This works fine on the simple fields, but the large text fields are carrying over to the next row and creating lots of errors. I have read lots of forums but just can't seem to find anything that works. I should have 437 records imported into SQL, but looking at a hex editor, my data shows as having 1124 lines because of the carriage returns. It must be something to do with the ROWTERMINATOR; the import doesn't know where the row ends, but I can't find a solution for this! Thanks, Caroline
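One sketch of a workaround: since the embedded carriage returns are indistinguishable from the real row ends, re-export the file with a row terminator that cannot occur inside the text fields, and point BULK INSERT at that instead (the '|~|' marker below is just an example):

[code="sql"]
BULK INSERT livedev.dbo.CarriageReturns
FROM 'C:\Users\janitor\Desktop\TestData\Test.txt'
WITH (FIELDTERMINATOR = '\t',
      ROWTERMINATOR  = '|~|\r\n',   -- unique end-of-record marker added at export time
      FIRSTROW = 2);
[/code]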

Trace/Log Transaction Rollback

Posted: 25 Mar 2013 08:38 AM PDT

Hello, I am looking for a good way (if there is one) to log the event of a transaction rollback. I'm on SQL 2008 R2 Standard. Situation: a transaction is opened from a .NET application, the request times out, and the transaction is rolled back by the application. No errors happen on the SQL side, so I have nothing to log in the CATCH block. Is there any way to log the rollback event itself (like we do for deadlocks, for example), with or without details (e.g. ID, time)? Thanks, Greg

Propagate view changes.

Posted: 25 Mar 2013 11:20 AM PDT

For security reasons, we have the exact same view (with the same name) in different schemas, one per client. Now, if we have 100 clients, a change to one view, or a new view added to one schema, needs to be propagated to all the other schemas. I was wondering about a convenient yet easy approach to make a change to one view and propagate it to all the other schemas. Thanks.
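A sketch of the usual approach: keep one master definition and push it out with dynamic SQL over the client schemas. Everything here (the schema-name pattern, the view name, the view body) is a placeholder to adapt:

[code="sql"]
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'EXEC(''IF OBJECT_ID(''''' + s.name + N'.MyView'''') IS NOT NULL DROP VIEW '
             + s.name + N'.MyView''); '
             + N'EXEC(''CREATE VIEW ' + s.name + N'.MyView AS SELECT col1, col2 FROM '
             + s.name + N'.MyTable'');'
FROM sys.schemas AS s
WHERE s.name LIKE 'client[_]%';   -- however your client schemas are named

EXEC (@sql);   -- each EXEC('...') runs as its own batch, as CREATE VIEW requires
[/code]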

Trace - to Capture all or most queries

Posted: 18 Mar 2013 01:49 AM PDT

A. Are the black box trace and the default trace the same thing? B. Is it possible to capture all queries in the black box/default trace, if doing so does not degrade SQL Server performance?

Lock only one row in update (lock a row)

Posted: 24 Mar 2013 09:40 PM PDT

Hello!!! The question is (if it is possible without any workaround, like adding a state column): how can I achieve this? I understand that I can't lock a row with a SELECT statement, but can I achieve the following:

--TRANS 1: while a transaction is executing an UPDATE statement modifying one row, lock that row. Nobody can select this row while the transaction is running.
--TRANS 2: while TRANS 1 is running, I can update other rows (not the row in TRANS 1).

[b]--example 1[/b]
--TRANS 1 => update table set field = 1 where id = 1 (LOCK THIS ROW, ONLY THIS ROW!!!)
--TRANS 2 => select * from table where id = 1 =====> MEANWHILE TRANS 1 IS RUNNING, HAS TO FAIL BECAUSE TRANS 1 is updating...

[b]--example 2[/b]
--TRANS 1 => update table set field = 1 where id = 1
--TRANS 2 => select * from table where id = 2 =====> OK!

[b]--example 3[/b]
--TRANS 1 => update table set field = 1 where id = 1
--TRANS 2 => update table set field = 1 where id = 2 =====> OK! No one is locking this row!!!

* I disabled lock escalation on the table (LOCK_ESCALATION = DISABLE)
* There is no clustered index on the table.

Hope you understand; I tried a lot of query hints, but none gave the results I expected (rowlock, xlock, updlock). When I use hints I test the lock results with:

use master
SELECT * from sys.dm_tran_locks where request_session_id in (55, 60) order by request_session_id

and I see a PAGE lock for the resource, even though transactions 55 and 60 each touch different rows. For a more specific example, here is the last thing I tried.

TRANS 55
[quote]begin transaction
update reque_pa with (rowlock) set MINIMO = 2 WHERE CEREQ = 1 --with (rowlock,updlock)
WAITFOR DELAY '00:00:10';
commit transaction[/quote]

The WAITFOR is to simulate a longer transaction, to force the lock.

TRANS 60
[quote]begin transaction
SELECT * FROM Reque_pa with (rowlock) WHERE CEREQ = 2
commit transaction[/quote]

Hope you can help me, thanks in advance!!!
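A sketch of the likely root cause: with no index on CEREQ, both statements must scan the whole heap, touching (and locking) every row or page, so they collide even on "different" rows. Giving each statement a way to seek lets it lock only its own key:

[code="sql"]
-- Assumes CEREQ is selective enough to seek on; adjust uniqueness to your data.
CREATE CLUSTERED INDEX IX_reque_pa_CEREQ ON dbo.reque_pa (CEREQ);
[/code]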

Database Diagram printing problems

Posted: 29 Oct 2010 03:08 AM PDT

Greetings all; I have seen this issue for a number of releases now, and I have not been able to resolve it. I will admit I may be missing something here. When I design a database I typically create subject-area database diagrams, something I've done for quite some time and which is essential when designing complex data structures. The problem I have is printing the diagrams. I spend a fair amount of time making sure my tables are lined up and not overlapping page boundaries; however, when I go to print, my entire ER diagram will shift, causing tables to overlap pages. This is so annoying! 1) Has anyone else experienced this? 2) Does anyone have a resolution for it? 3) Is there something I'm doing wrong, or is this a known issue? Thanks in advance. Kurt

Validating trace file events

Posted: 08 Mar 2013 05:09 AM PST

Hello, if I have a trace running and populating some *.trc files, my question is: how can I validate that I am indeed tracing with the correct sp_trace_setevent events and filters I have set? Are there dynamic tables where I can fetch information on what is being traced? Thank you! JR
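Yes, the trace catalog is queryable. A sketch that lists the running traces and the events/columns and filters set on one of them (the trace id 2 below is just an example; take the real id from sys.traces):

[code="sql"]
SELECT * FROM sys.traces;                          -- running traces, file paths, status
SELECT * FROM sys.fn_trace_geteventinfo(2);        -- event id / column id pairs being traced
SELECT * FROM sys.fn_trace_getfilterinfo(2);       -- filters set via sp_trace_setfilter
[/code]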

How to add more partitions to existed table ?

Posted: 31 Dec 2011 02:54 AM PST

I have a table that contains transaction records, with the ID column as the primary key. I partition on the ID column; each partition holds 1 million records.

CREATE PARTITION FUNCTION [pfTBLTRANS_ID](int)
AS RANGE LEFT FOR VALUES (1000000, 2000000, 3000000, 4000000, 5000000, 6000000, 7000000, 8000000, 9000000, 10000000)

CREATE PARTITION SCHEME [psTBLTRANS_ID]
AS PARTITION [pfTBLTRANS_ID] TO ([GROUP01], [GROUP02], [GROUP03], [GROUP04], [GROUP05], [GROUP06], [GROUP07], [GROUP08], [GROUP09], [GROUP10], [GROUP11])

But now I have more records, with IDs greater than 11,000,000. So how can I add more partitions to this table? Thanks
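A sketch of the standard pattern: name the filegroup the next boundary should land on, then split the function; repeat once per new boundary (this assumes filegroups GROUP12, GROUP13, ... already exist):

[code="sql"]
ALTER PARTITION SCHEME [psTBLTRANS_ID] NEXT USED [GROUP12];
ALTER PARTITION FUNCTION [pfTBLTRANS_ID]() SPLIT RANGE (11000000);

ALTER PARTITION SCHEME [psTBLTRANS_ID] NEXT USED [GROUP13];
ALTER PARTITION FUNCTION [pfTBLTRANS_ID]() SPLIT RANGE (12000000);
[/code]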

Concatenation help

Posted: 25 Mar 2013 05:48 AM PDT

Hi all, I'm having a brain fart. What is the best way to achieve the results below?

name                          Lab
MICROSOFT VISUAL STUDIO .NET  101
MICROSOFT VISUAL STUDIO .NET  118
MICROSOFT VISUAL STUDIO .NET  256
MICROSOFT VISUAL STUDIO .NET  367
MICROSOFT VISUAL STUDIO .NET  51
MICROSOFT VISUAL STUDIO .NET  901
PYTHON                        256
PYTHON                        376
PYTHON                        256

name                          lab(s)
MICROSOFT VISUAL STUDIO .NET  101, 118, 256, 367, 51, 901
PYTHON                        256
PYTHON                        376, 256
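On SQL Server 2008 the usual sketch for a comma-separated list is FOR XML PATH plus STUFF (the source table dbo.Labs and its columns are assumed from the sample):

[code="sql"]
SELECT  t.name,
        STUFF((SELECT ', ' + CAST(t2.Lab AS varchar(10))
               FROM dbo.Labs AS t2
               WHERE t2.name = t.name
               FOR XML PATH('')), 1, 2, '') AS [lab(s)]
FROM    dbo.Labs AS t
GROUP BY t.name;
[/code]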

Urgent - Reporting services migration and upgrade to 2008 R2

Posted: 25 Mar 2013 01:03 AM PDT

Hello, we have a task to move a Reporting Services 2005 instance to new servers running the 2008 R2 version. This is what I have done so far:

1. Since the web and database servers are different, I first installed SQL Server on the database server.
2. Installed Reporting Services on the web server.
3. Restored ReportServer and ReportServerTempDB to the new 2008 R2 database server.
4. Configured Reporting Services and selected the restored ReportServer and ReportServerTempDB on the new server.

I can access the URLs and reports now. This is my first time touching Reporting Services. Our Reporting Services guy on the team is gone, and we have a very tight deadline to deliver test, stage and prod environments in the next 3 weeks. We have over 250 reports. From the information collected, I believe there is no custom configuration. Our development team has given us an estimate of 2 man-months offshore and 80 hours onsite to convert all reports to 2008 R2. First of all, we don't have that kind of time, and second, I am trying to find out what is left to do now and whether it is worth paying so much money to development. I searched the forums and articles, but I am not clear on when to conclude that the migration is done. Do I need to open every report in BIDS and save it in order to upgrade it? If yes, is there any script or utility available to do that? Thanks, Nikita

Importing 00:00 into Table

Posted: 25 Mar 2013 03:18 AM PDT

So I've created a query that adds times to get a certain calculation. The end result is a time like 09:26, which means 9 minutes and 26 seconds. I can easily use the query directly in the report, but I want to dump the results into a table for historical reasons. However, I'm running into an issue: when I attempt to import the results into a table, the import wants to change my 09:26 to an actual date and time, which I don't want. What data type should the column be so that I'm allowed to import the 09:26 as it is? Thanks!
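Two common options, as a sketch (table and column names hypothetical): keep the display string as-is, or store raw seconds, which aggregates more easily. A date/time type is a poor fit because 09:26 here is a duration, not a time of day.

[code="sql"]
CREATE TABLE dbo.ReportHistory (
    RunDate   date       NOT NULL,
    Duration  varchar(5) NOT NULL,  -- '09:26' stored exactly as produced
    DurationS int        NOT NULL   -- 566 = 9*60 + 26, easier to SUM/AVG later
);
[/code]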

SSMS Performing Slow over remote connection

Posted: 25 Mar 2013 05:19 AM PDT

Hi mates, one of our instances is hosted on a cluster node, with 128 GB of RAM and 80 processors allocated to it. Last week we had an issue with tempdb being full, and we increased the drive space over the weekend. Now, whenever we connect to this instance from another server through remote SSMS, refreshing objects (including connecting to the instance) is very, very slow: it takes more than 10 seconds. Initially this was not the case. We even hear from end users that the application is getting gradually slower. I checked the network, fragmentation, and the settings on this instance, and all looks good. Any suggestions, please? Thanks

deploying a cube

Posted: 25 Mar 2013 03:48 AM PDT

Hi, what are my options for deploying a cube? Is there an interface users can use to browse the cube as I can in BIDS? I know Excel is an option, but it's not quite as user friendly as the cube browser. I am new to SSAS, BTW, and am just looking at my options. Thanks in advance

Trigger For External Server Insertion

Posted: 25 Mar 2013 03:35 AM PDT

Hi all, please, I need an answer to this question. I have two servers, one local and one hosted. I normally upload data to the online one using the data export wizard, but the local database is now connected to an application that adds data to it almost every second. This is what I want to do: 1. write an INSERT trigger that will insert the data into the hosted server (database) any time data is added to the local database table (pardon my English). 2. Is it possible to upload data from a local database to an online database through a trigger? A sample script would guide me through. Thanks, Tim
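A sketch of the trigger shape, assuming a linked server named HOSTED pointing at the online database (all names hypothetical). Be aware that every local insert will then wait on the network round trip, so a local queue table plus a scheduled push, or replication, is usually safer:

[code="sql"]
CREATE TRIGGER trg_PushToHosted ON dbo.LocalTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO HOSTED.OnlineDb.dbo.RemoteTable (col1, col2)
    SELECT col1, col2
    FROM inserted;   -- the rows just added locally
END;
[/code]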

Documenting RDL files

Posted: 25 Mar 2013 03:34 AM PDT

I'm working in SSRS 2008 R2 and I have several projects, which in turn have several report definitions held within them. Each RDL may have many datasets, tablix and matrix definitions detailed within it. My problem is documentation! I need to document all the datasets, tablix/matrix names, stored procedures, etc. Is there an easy way of extracting this information and creating either an Excel workbook using VBA, or writing it back into a SQL 2008 table that I can then use SSRS to report on? I have read a similar post that suggests using the ReportServer database to get this information, but if an RDL hasn't been deployed yet it won't be in that database. Any help would be much appreciated, as it will save me going through over 150 RDLs manually and writing down the information. Thanks
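Since RDL files are plain XML, one sketch is to load each file and shred it with XQuery; the namespace below is an assumption for 2008-format RDLs, so match it to the xmlns your files actually declare:

[code="sql"]
DECLARE @rdl xml;
SELECT @rdl = CAST(BulkColumn AS xml)
FROM OPENROWSET(BULK 'C:\Reports\MyReport.rdl', SINGLE_BLOB) AS src;  -- hypothetical path

;WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/reporting/2008/01/reportdefinition')
SELECT ds.value('@Name', 'varchar(200)')                  AS DataSetName,
       ds.value('(Query/CommandText)[1]', 'varchar(max)') AS CommandText
FROM   @rdl.nodes('/Report/DataSets/DataSet') AS x(ds);
[/code]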

SQL 2005 - Get Deadlock information

Posted: 25 Mar 2013 02:56 AM PDT

In SQL 2005, I use a DMV to get the number of deadlocks per second. How can I get those deadlocks' details? I haven't enabled traces or run Profiler. These deadlocks have already happened, and I suspect we could still get the necessary details.
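For deadlocks that have already happened with no trace running, there is generally nothing to recover on SQL 2005; for the next ones, a sketch of the usual preparation is to write deadlock graphs to the error log:

[code="sql"]
DBCC TRACEON (1222, -1);   -- detailed deadlock report in the SQL Server error log
-- DBCC TRACEON (1204, -1);  -- older, terser format, if preferred
[/code]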

Increase values with update statement

Posted: 25 Mar 2013 03:18 AM PDT

Hi everyone, I hope somebody can help. I have a table in SQL with the following columns: FinancialYear (varchar), Site (varchar), ProductRange (varchar), ProductName (varchar), PlannedSales (decimal). I want to increase the PlannedSales in the table for each of the sites, but by a different amount: for the North site I want to increase PlannedSales by 0.05%, and for the South site by 0.19%. All the other sites should keep their current PlannedSales totals. The increase needs to be spread across all product ranges and actual products, i.e. every product should increase its PlannedSales by 0.05% in the North site. Thanks in advance. BO
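A minimal sketch, assuming the table is named dbo.SalesPlan, the Site values are literally 'North' and 'South', and the percentages are meant literally (0.05% = a multiplier of 1.0005):

[code="sql"]
UPDATE dbo.SalesPlan
SET PlannedSales = PlannedSales *
        CASE Site
            WHEN 'North' THEN 1.0005   -- +0.05%
            WHEN 'South' THEN 1.0019   -- +0.19%
            ELSE 1                     -- all other sites unchanged
        END;
[/code]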

SQL Server Install Location

Posted: 25 Mar 2013 02:46 AM PDT

Can the SQL Server binaries (2005, 2008, 2012) be installed in a location other than the C: drive? How can I specify the location to install SQL Server on a drive of my choosing?

Truncate table

Posted: 21 Mar 2013 01:35 AM PDT

When we truncate a table, do the indexes defined on it still exist? And can we check whether they exist or not?
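For what it's worth: TRUNCATE TABLE empties the pages but leaves the index definitions in place, and the catalog shows them either way. A quick check (table name is a placeholder):

[code="sql"]
SELECT name, type_desc
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.MyTable');
[/code]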

Deadlock on update command

Posted: 18 Mar 2013 10:33 PM PDT

I have the following deadlock happening quite frequently, 3-4 times a day. The deadlock seems to involve two things, one being an index on the status column:

keylock hobtid=72057605790367744 dbid=9 objectname=dbname.dbo.orderha indexname=IX_status id=lock2189e7200 mode=S associatedObjectId=72057605790367744

the other the primary clustered key:

keylock hobtid=72057602492792832 dbid=9 objectname=dbname.dbo.orderha indexname=PK_orderha id=lock1399f2200 mode=X associatedObjectId=72057602492792832

DEADLOCK INFO:

Node:1 KEY: 9:72057602492792832 (7900b77d3449) CleanCnt:2 Mode:X Flags: 0x1
Grant List 1:
Owner:0x00000001A34042C0 Mode: X Flg:0x40 Ref:1 Life:02000000 SPID:67 ECID:0 XactLockInfo: 0x00000002643C19B0
SPID: 67 ECID: 0 Statement Type: UPDATE Line #: 1
Input Buf: Language Event: [b]update orderha set status=2 where sessionid='7560129' and orderha=1[/b]
Requested by:
ResType:LockOwner Stype:'OR'Xdes:0x00000001A183B780 Mode: S SPID:64 BatchID:0 ECID:0 TaskProxy:(0x00000000D7EAC538) Value:0x118db7c0 Cost:(0/0)

Node:2 KEY: 9:72057605790367744 (7a00de2866cc) CleanCnt:2 Mode:S Flags: 0x1
Grant List 0:
Owner:0x00000002E14CBCC0 Mode: S Flg:0x40 Ref:0 Life:00000001 SPID:64 ECID:0 XactLockInfo: 0x00000001A183B7C0
SPID: 64 ECID: 0 Statement Type: SELECT Line #: 1
Input Buf: Language Event: [b]select rsn from orderha where sessionid='7558101' and status < 3[/b]
Requested by:
ResType:LockOwner Stype:'OR'Xdes:0x00000002643C1970 Mode: X SPID:67 BatchID:0 ECID:0 TaskProxy:(0x0000000281984538) Value:0x42de2bc0 Cost:(0/456)

Victim Resource Owner:
ResType:LockOwner Stype:'OR'Xdes:0x00000001A183B780 Mode: S SPID:64 BatchID:0 ECID:0 TaskProxy:(0x00000000D7EAC538) Value:0x118db7c0 Cost:(0/0)

Should I use a lock hint, such as UPDLOCK or HOLDLOCK, to force a block rather than encounter a deadlock?

Assistance in converting INT into date.

Posted: 25 Mar 2013 12:16 AM PDT

I have a query with two date fields that are int, e.g. 20050307. I need to convert them to date fields, e.g. 03/07/2005, and I cannot seem to get them to convert. Can anyone out there offer me some assistance, please? The fields are fact.StartDtID and fact.EndDtID. See the query below, and thank you in advance.

SELECT dimUser.EmpFullName1 [User Name],
       dimUser.MgrEmpFullName1 AS Manager,
       dimUser.PrimaryJobName AS [Primary Job],
       dimUser.EmpNo AS [User Number],
       CONVERT(VARCHAR(10), fact.StartDtID, 101) [Start Date],
       CONVERT(VARCHAR(10), fact.EndDtID, 101) [Completion Date],
       dimUser.PrimaryOrgCode,
       dimUser.SecondaryOrg1 AS [Secondary Org 1],
       dimUser.SecondaryOrg2 AS [Secondary Org 2],
       dimUser.PrimaryOrgName AS Org,
       fact.EndDtID,
       dimUser.EmpStat AS [Emp Status],
       CASE WHEN fact.completionStatusID = '-1' THEN 'Not Satisfied' ELSE 'Satisfied' END AS Status
FROM factAttempt AS fact
INNER JOIN dimActivity AS A ON A.ID = fact.ActivityID
INNER JOIN dimUser ON dimUser.ID = fact.UserID
INNER JOIN dimDate AS DD ON DD.DateID = fact.EndDtID
WHERE (dimUser.EmpStat = 'active') AND (a.code = 'A2102') AND (fact.StartDtID >= '20130101')
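A sketch of the int-to-date conversion: go through an 8-character string first (style 112 = yyyymmdd), then format with style 101 for mm/dd/yyyy. Applied to one of the columns:

[code="sql"]
SELECT CONVERT(varchar(10),
               CONVERT(datetime, CONVERT(varchar(8), fact.StartDtID), 112),
               101) AS [Start Date]   -- 20050307 -> 03/07/2005
FROM factAttempt AS fact;
[/code]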

Is SQL2K8 SSMS slower than SQL2K5 SSMS?

Posted: 24 Mar 2013 10:57 PM PDT

We have a DB [MYDB] on a SQL2K5 box, and we are migrating the DB to a SQL2K8 box, hence we restored the backup from the 2K5 box to 2K8. I have a question about the following query:

set statistics time on
SELECT TOP 1000 * FROM [MYDB].[my].[MY_TABLE]
set statistics time off

In 2K5 SSMS it takes around 3000 ms, and in 2K8 SSMS it takes 9000 ms. Now, if I execute the query from 2K5 SSMS against both DBs, then execution against the 2K8 DB is less than 3000 ms; and if I execute the query from 2K8 SSMS against both DBs, then execution against the 2K5 DB is more than 9000 ms. What could be the reason? Is there any feature I need to disable in SSMS 2K8?

BCP

Posted: 21 Mar 2013 07:16 AM PDT

I have a stored procedure that exports data from a table into a .csv file. There are several different record sets that get exported; they are all comma delimited with double quotes as a text qualifier. All of my files export, and you can double-click to open the .csv file in Excel and everything is fine, except for one. The interesting thing about this record set is that some of the data has a trademark symbol.

BCP:

DECLARE @bcp varchar(1000) = 'bcp "SELECT * from dbo.tempExportData" queryout "' + @filePath + @FileOutputName + '" -T -w';
EXEC @Result = xp_cmdshell @bcp;

In the process I am getting the data:

DECLARE @ColumnList varchar(5000)

select @ColumnList = coalesce(@ColumnList + ',', '') + quotename(cast(Name as varchar(50)), '"')
from tempdb.sys.columns
where object_id = object_id('tempdb..#tempData')

insert into dbo.tempExportData (outputData) values (@ColumnList)

insert into dbo.tempExportData (outputData)
select quotename(isnull(FieldA, ''), '"') + ',' + quotename(isnull(FieldB, ''), '"')
from #tempData

CSV:

FieldA,"FieldB"
ACME®,"some more data","even more data"
Big Company,"still more data"

All of the data ends up in column A of the spreadsheet, and FieldA is not text qualified. If you open the .csv in Notepad, TextPad, etc., all of the columns are text qualified. I know that I can open Excel and use the import wizard to successfully import the data, but I am wondering why Excel does not handle the .csv correctly when you just open it. Let me know if you need more information.

Is DBCC CHECKDB an appropriate weekly check for a 365x24x7 availability db?

Posted: 24 Mar 2013 10:08 PM PDT

We have a db that has a mission critical website sitting on top of it. We check it weekly with a DBCC CHECKDB command, which takes 1 to 2.5 minutes to run. Are web users likely to see issues during that time? If so, is there a better way to do a weekly health check on it?
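Since SQL 2005, CHECKDB reads from an internal database snapshot, so it doesn't block users; the exposure is the extra I/O and CPU during the run. If even that proves disruptive, a lighter sketch (database name is a placeholder):

[code="sql"]
DBCC CHECKDB (N'MyDatabase') WITH PHYSICAL_ONLY;  -- faster, page-level checks only
-- keep a full CHECKDB on a less frequent schedule, or run it against a restored copy
[/code]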

Validate a date held in a text field.

Posted: 24 Mar 2013 09:22 PM PDT

Our systems administrator has taken it upon themselves to use a free-text field to house a date value. This is all well and good if users enter the date in the right format, which of course they do not. I need to come up with some way of listing, via a SQL query, all dates in said field that do not appear in the following format. Acceptable format = dd/mm/yyyy (i.e. 13/03/2013). Any suggestions extremely welcome. Thanks
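A minimal sketch, assuming the column is dbo.MyTable.DateText (hypothetical names): a LIKE pattern catches the shape, and ISDATE (with DATEFORMAT set to dmy) catches impossible dates that still match the shape.

[code="sql"]
SET DATEFORMAT dmy;   -- so ISDATE interprets dd/mm/yyyy correctly

SELECT DateText
FROM dbo.MyTable
WHERE DateText NOT LIKE '[0-3][0-9]/[0-1][0-9]/[1-2][0-9][0-9][0-9]'
   OR ISDATE(DateText) = 0;   -- e.g. 31/02/2013 matches the shape but isn't a date
[/code]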

Monday, March 25, 2013

[how to] Speed efficient query for membership first joined, latest category from membership table (min, max)

Speed efficient query for membership first joined, latest category from membership table (min, max)

Posted: 25 Mar 2013 06:57 PM PDT

I have the following table representing membership information:

    CREATE TABLE IF NOT EXISTS `membership` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `organisation_id` int(11) NOT NULL,
      `membership_subcategory_id` int(11) NOT NULL,
      `start` datetime DEFAULT NULL,
      `end` datetime DEFAULT NULL,
      `amount` decimal(9,2) DEFAULT NULL,
      `amount_paid` decimal(9,2) DEFAULT NULL,
      `notes` mediumtext,
      `order_id` int(11) DEFAULT NULL,
      `payment_type` varchar(20) NOT NULL,
      `active` tinyint(4) NOT NULL DEFAULT '1',
      `cancelled` tinyint(4) NOT NULL DEFAULT '0',
      `cancelled_date` datetime DEFAULT NULL,
      `cancellation_reason` mediumtext,
      `certificate_sent` date DEFAULT NULL,
      `welcome_email_sent` date DEFAULT NULL,
      PRIMARY KEY (`id`),
      UNIQUE KEY `id` (`id`),
      UNIQUE KEY `order_id_2` (`order_id`,`start`,`end`,`organisation_id`),
      KEY `membership_subcategory_id_idx` (`membership_subcategory_id`),
      KEY `organisation_id_idx` (`organisation_id`),
      KEY `order_id` (`order_id`)
    )
  • organisation_id is a member
  • the membership year goes from 1 Jul to 30 Jun; start records when the membership in each year began. This may be anywhere in the first year, but after that it's always 1 Jul unless a year is skipped
  • membership_subcategory_id is an industry category the membership applies to for each year.

I need an efficient query to get the date joined and latest membership category.

I've tried this query, but I get "Invalid use of group function" as an error

    SELECT m.organisation_id, m2.membership_subcategory_id, MIN(m.start)
    FROM membership m
    INNER JOIN membership m2 ON m.organisation_id = m2.organisation_id
    WHERE MAX(m.start) = m2.start
    GROUP BY m.organisation_id, m2.membership_subcategory_id
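A hedged sketch of one working shape: aggregates can't appear in WHERE, so compute each organisation's first and latest start in a derived table, then join back to pick up the category of the latest row:

    SELECT agg.organisation_id,
           m.membership_subcategory_id,
           agg.first_joined
    FROM (SELECT organisation_id,
                 MIN(`start`) AS first_joined,
                 MAX(`start`) AS latest_start
          FROM membership
          GROUP BY organisation_id) AS agg
    INNER JOIN membership m
            ON m.organisation_id = agg.organisation_id
           AND m.`start` = agg.latest_start;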

For sample mysql runtime, is this performant or should I have concerns?

Posted: 25 Mar 2013 06:33 PM PDT

For sample mysql runtime, is this performant or should I have concerns? If so, what are the key concerns? Please note that persistent connection is set to TRUE for my application, which I believe impacts "Aborted". Please advise. What steps can I take to rectify the issue.

MySQL my.cnf won't take any effect

Posted: 25 Mar 2013 08:32 PM PDT

I'm experiencing issues using MySQL on a Linux server.

I set some timeout and character-set options in my.cnf, but they don't take effect.
My goal is to set all character-set defaults to utf8 and to set wait_timeout and/or interactive_timeout to 30 seconds.

My environment

Linux CentOS 5.x
MySQL Server 5.6
MySQL client

I installed both MySQL Server and client as root using RPM. I start MySQL server with

    mysqld -u root

I stop MySQL server with

service mysql stop  

The reason I use mysqld -u root is that service mysql start often fails like this:

    [root@kserver145-208 ~]# service mysql stop
    Shutting down MySQL....                                    [  OK  ]
    [root@kserver145-208 ~]# service mysql start
    Starting MySQL...The server quit without updating PID file (/var/lib/mysql/kserver145-208.pid). [FAILED]

Here is what mysql says from command line.

    select @@session.wait_timeout, @@global.wait_timeout;
    +------------------------+-----------------------+
    | @@session.wait_timeout | @@global.wait_timeout |
    +------------------------+-----------------------+
    |                  28800 |                 28800 |
    +------------------------+-----------------------+
    1 row in set (0.00 sec)

    select @@session.character_set_database, @@global.character_set_database;
    +----------------------------------+---------------------------------+
    | @@session.character_set_database | @@global.character_set_database |
    +----------------------------------+---------------------------------+
    | latin1                           | latin1                          |
    +----------------------------------+---------------------------------+
    1 row in set (0.00 sec)

But I previously set my.cnf as below. Sorry, it's a bit long; I could have cut just the parts that count, but I don't know what might be wrong with the way I set options in this file, so I'll show everything here.

    # For advice on how to change settings please see
    # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html

    [mysqld]

    # Remove leading # and set to the amount of RAM for the most important data
    # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
    # innodb_buffer_pool_size = 128M

    # Remove leading # to turn on a very important data integrity option: logging
    # changes to the binary log between backups.
    # log_bin

    # These are commonly set, remove the # and set as required.
    # basedir = .....
    # datadir = .....
    # port = .....
    # server_id = .....
    # socket = .....

    # Remove leading # to set options mainly useful for reporting servers.
    # The server defaults are faster for transactions and fast SELECTs.
    # Adjust sizes as needed, experiment to find the optimal values.
    # join_buffer_size = 128M
    # sort_buffer_size = 2M
    # read_rnd_buffer_size = 2M

    sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
    wait_timeout=30
    interactive_timeout=30
    character-set-server=utf8
    collation-server=utf8_general_ci
    character-set-database=utf8

    #init_connect=SET collation_connection =utf8_general_ci
    #init_connect=SET NAMES utf8
    #init_connect=SET character_set_database = utf8

    [client]
    character-set-database=utf8
    character-set-server=utf8
    wait_timeout=30
    interactive_timeout=30

    [mysqldump]
    #default-character-set=utf8

    [mysql]
    wait_timeout=30
    interactive_timeout=30
    character-set-database=utf8
    character-set-server=utf8

I've been banging my head against this wall for over 72 hours. What have I done wrong?
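A sketch of a likely fix: the error log below shows the server aborting on "unknown variable 'character-set-database=utf8'", which is not a settable server variable, and the [client]/[mysql] sections don't accept wait_timeout or the *-server variables either. A character-set section that should at least parse (adjust to taste):

    [mysqld]
    character-set-server=utf8
    collation-server=utf8_general_ci
    wait_timeout=30
    interactive_timeout=30

    [client]
    default-character-set=utf8

    [mysql]
    default-character-set=utf8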

PS Here is what my kserver145-208.err says. Hope it helps understanding what's wrong.

    130326 09:51:47 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
    2013-03-26 09:51:47 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
    2013-03-26 09:51:47 13587 [Note] Plugin 'FEDERATED' is disabled.
    ^G/usr/sbin/mysqld: Can't find file: './mysql/plugin.frm' (errno: 13 - Permission denied)
    2013-03-26 09:51:47 13587 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
    2013-03-26 09:51:47 13587 [Note] InnoDB: The InnoDB memory heap is disabled
    2013-03-26 09:51:47 13587 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
    2013-03-26 09:51:47 13587 [Note] InnoDB: Compressed tables use zlib 1.2.3
    2013-03-26 09:51:47 13587 [Note] InnoDB: CPU does not support crc32 instructions
    2013-03-26 09:51:47 13587 [Note] InnoDB: Using Linux native AIO
    2013-03-26 09:51:47 13587 [Note] InnoDB: Initializing buffer pool, size = 128.0M
    2013-03-26 09:51:47 13587 [Note] InnoDB: Completed initialization of buffer pool
    2013-03-26 09:51:47 13587 [Note] InnoDB: Highest supported file format is Barracuda.
    2013-03-26 09:51:47 2ac0d44d9590  InnoDB: Operating system error number 13 in a file operation.
    InnoDB: The error means mysqld does not have the access rights to
    InnoDB: the directory.
    2013-03-26 09:51:47 13587 [ERROR] InnoDB: Could not find a valid tablespace file for 'yoursmart/S_MEM_POINT'. See http://dev.mysql.com/doc/refman/5.6/en/innodb-troubleshooting-datadict.html for how to resolve the issue.
    2013-03-26 09:51:47 13587 [ERROR] InnoDB: Tablespace open failed for '"yoursmart"."S_MEM_POINT"', ignored.
    2013-03-26 09:51:47 13587 [Note] InnoDB: 128 rollback segment(s) are active.
    2013-03-26 09:51:47 13587 [Note] InnoDB: Waiting for purge to start
    2013-03-26 09:51:47 13587 [Note] InnoDB: 1.2.10 started; log sequence number 55877336
    2013-03-26 09:51:47 13587 [ERROR] /usr/sbin/mysqld: unknown variable 'character-set-database=utf8'
    2013-03-26 09:51:47 13587 [ERROR] Aborting

    2013-03-26 09:51:47 13587 [Note] Binlog end
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'partition'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'BLACKHOLE'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'ARCHIVE'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_FT_DELETED'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_FT_INSERTED'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_METRICS'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_CMPMEM'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_CMP_RESET'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_CMP'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_LOCKS'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_TRX'
    2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'InnoDB'
    2013-03-26 09:51:47 13587 [Note] InnoDB: FTS optimize thread exiting.
    2013-03-26 09:51:47 13587 [Note] InnoDB: Starting shutdown...
    2013-03-26 09:51:49 13587 [Note] InnoDB: Shutdown completed; log sequence number 55877346
    2013-03-26 09:51:49 13587 [Note] Shutting down plugin 'MRG_MYISAM'
    2013-03-26 09:51:49 13587 [Note] Shutting down plugin 'MEMORY'
    2013-03-26 09:51:49 13587 [Note] Shutting down plugin 'CSV'
    2013-03-26 09:51:49 13587 [Note] Shutting down plugin 'MyISAM'
    2013-03-26 09:51:49 13587 [Note] Shutting down plugin 'sha256_password'
    2013-03-26 09:51:49 13587 [Note] Shutting down plugin 'mysql_old_password'
    2013-03-26 09:51:49 13587 [Note] Shutting down plugin 'mysql_native_password'
    2013-03-26 09:51:49 13587 [Note] Shutting down plugin 'binlog'
    2013-03-26 09:51:49 13587 [Note] /usr/sbin/mysqld: Shutdown complete

    130326 09:51:49 mysqld_safe mysqld from pid file /var/lib/mysql/kserver145-208.pid ended

EDIT: I changed my.cnf to replace character-set-database with character_set_database, along with the other variables in my.cnf, as Phill suggested. However, kserver***.err still says:

2013-03-26 10:14:32 14300 [ERROR] /usr/sbin/mysqld: unknown variable 'character_set_database=utf8'  

Also

    mysql> select @@session.wait_timeout, @@global.wait_timeout;
    +------------------------+-----------------------+
    | @@session.wait_timeout | @@global.wait_timeout |
    +------------------------+-----------------------+
    |                  28800 |                 28800 |
    +------------------------+-----------------------+
    1 row in set (0.00 sec)

    mysql> select @@session.character_set_database, @@global.character_set_database;
    +----------------------------------+---------------------------------+
    | @@session.character_set_database | @@global.character_set_database |
    +----------------------------------+---------------------------------+
    | latin1                           | latin1                          |
    +----------------------------------+---------------------------------+
    1 row in set (0.01 sec)

EDIT2

I have modified my.cnf as rolaldo suggested. kserver***.err no longer shows those errors, so it's more promising.
But when I issue this command:

    mysql> show create database yoursmart;
    +-----------+--------------------------------------------------------------------+
    | Database  | Create Database                                                    |
    +-----------+--------------------------------------------------------------------+
    | yoursmart | CREATE DATABASE `yoursmart` /*!40100 DEFAULT CHARACTER SET utf8 */ |
    +-----------+--------------------------------------------------------------------+
    1 row in set (0.00 sec)

    mysql> select @@session.character_set_server, @@global.character_set_server;
    +--------------------------------+-------------------------------+
    | @@session.character_set_server | @@global.character_set_server |
    +--------------------------------+-------------------------------+
    | latin1                         | latin1                        |
    +--------------------------------+-------------------------------+
    1 row in set (0.00 sec)

It still says character_set_server is latin1. I don't understand... maybe this is normal?
The database contains Korean characters, and I'm not able to search with Korean words.

Concatenation of setof type or setof record

Posted: 25 Mar 2013 05:47 PM PDT

I use Postgresql 9.1 with Ubuntu 12.04.

In a PL/pgSQL function I try to concatenate the setof rows returned from another function.

the type pair_id_value in question is created with create type pair_id_value as (id bigint, value integer);

the function that returns elementary setof pair_id_value (those that will be concatenated later) is this one:

    create or replace function compute_pair_id_value(id bigint, value integer)
        returns setof pair_id_value
    as $$
        listResults = []
        for x in range(0, value+1):
            listResults.append({ "id": id, "value": x})
        return listResults
    $$
    language plpython3u;

This straightforward plpython code should be fine; for example, the query select * from compute_pair_id_value(1712437, 2); nicely returns:

         id     | value
    ------------+-------
        1712437 |     0
        1712437 |     1
        1712437 |     2
    (3 rows)

This Python function is fairly simple for now; it is there for this example and, above all, for my proof of concept. It will be more complex in the near future.


The problem arises when I try to concatenate all the result sets from multiple ids.

    create or replace function compute_all_pair_id_value(id_obj bigint)
        returns setof pair_id_value as $$
    declare
        pair pair_id_value;
    begin
        for pair in (select compute_pair_id_value(t.id, t.obj_value)
                     from my_obj as t where t.id = id_obj)
        loop
            return next pair;
        end loop;
        return;
    end; $$ language plpgsql;

I receive the error invalid input syntax for integer: "(1712437,0)", as if the value is no longer seen as a pair_id_value with two columns but as a single tuple (1712437,0).

So I changed the output type of the function from setof pair_id_value to setof record... and if I execute this similar concatenation function:

    create or replace function compute_all_pair_id_value(id_obj bigint)
        returns setof record as $$
    declare
        pair record;
    begin
        for pair in (select compute_pair_id_value(t.id, t.obj_value)
                     from my_obj as t where t.id = id_obj)
        loop
            return next pair;
        end loop;
        return;
    end; $$ language plpgsql;

I get the error: a column definition list is required for functions returning "record"

Trying to follow the answer to this SO question, I tried defining the column definition list in the select this way: select compute_pair_id_value(t.id, t.obj_value) as f(id bigint, value integer). The complete code is here:

    create or replace function compute_all_pair_id_value(id_obj bigint)
        returns setof record as $$
    declare
        pair record;
    begin
        for pair in (select compute_pair_id_value(t.id, t.obj_value) as f(id bigint, value integer)
                     from my_obj as t where t.id = id_obj)
        loop
            return next pair;
        end loop;
        return;
    end; $$ language plpgsql;

But when launching the SQL script, psql refuses to create the function: syntax error at or near "(" select compute_pair_id_value(t.id, t.obj_value) as f(id bigint, value integer) ..., pointing at the f(.

Any idea how to do it properly?

Should I consider creating a temporary table to do the job?
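A hedged sketch of what usually resolves this: expand the composite value in the select list with (...).*, so each result row has the two columns of pair_id_value instead of one tuple. (Caveat: on 9.1 this syntax may evaluate the function once per output column.)

    create or replace function compute_all_pair_id_value(id_obj bigint)
        returns setof pair_id_value as $$
    begin
        return query
            select (compute_pair_id_value(t.id, t.obj_value)).*
            from my_obj as t
            where t.id = id_obj;
    end; $$ language plpgsql;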

Why Am I Getting Deadlock - Please Read Trace

Posted: 25 Mar 2013 07:03 PM PDT

Can anyone help me? I'm getting a deadlock on this trace. I'm using SymmetricDS, and I can't understand why I'm getting a deadlock on a certain table.

I'm using SQL Server 2008 R2, and read_committed_snapshot is turned on. The table has one clustered index, and it is on node_id.

I don't know why I'm getting deadlock, can anyone explain the trace file?

waiter id=process1f53674c8 mode=U requestType=wait
waiter-list
owner id=process2200f5948 mode=U
owner-list
keylock hobtid=72057594175946752 dbid=19 objectname=tester-prd-tester-main.dbo.sym_node indexname=PK__sym_node__5F19EF1676C4B313 id=lock1a3f14e80 mode=U associatedObjectId=72057594175946752
waiter id=process2200f5948 mode=U requestType=wait
waiter-list
owner id=process1f53674c8 mode=X
owner-list
keylock hobtid=72057594175946752 dbid=19 objectname=tester-prd-tester-main.dbo.sym_node indexname=PK__sym_node__5F19EF1676C4B313 id=lock250b1cb00 mode=X associatedObjectId=72057594175946752
resource-list
(@P0 nvarchar(4000)@P1 nvarchar(4000)@P2 nvarchar(4000)@P3 int@P4 nvarchar(4000)@P5 nvarchar(4000)@P6 nvarchar(4000)@P7 nvarchar(4000)@P8 nvarchar(4000)@P9 datetime@P10 nvarchar(4000)@P11 int@P12 int@P13 nvarchar(4000)@P14 nvarchar(4000))update dbo.sym_node set "node_id" = @P0 "node_group_id" = @P1 "external_id" = @P2 "sync_enabled" = @P3 "sync_url" = @P4 "schema_version" = @P5 "symmetric_version" = @P6 "database_type" = @P7 "database_version" = @P8 "heartbeat_time" = @P9 "timezone_offset" = @P10 "batch_to_send_count" = @P11 "batch_in_error_count" = @P12 "created_at_node_id" = @P13 where "node_id" = @P14
inputbuf
update dbo.sym_node set "node_id" = @P0 "node_group_id" = @P1 "external_id" = @P2 "sync_enabled" = @P3 "sync_url" = @P4 "schema_version" = @P5 "symmetric_version" = @P6 "database_type" = @P7 "database_version" = @P8 "heartbeat_time" = @P9 "timezone_offset" = @P10 "batch_to_send_count" = @P11 "batch_in_error_count" = @P12 "created_at_node_id" = @P13 where "node_id" = @P14
frame procname=adhoc line=1 stmtstart=504 sqlhandle=0x020000007430040fc90acec8b230d13e24f8efc99977f395
executionStack
process id=process1f53674c8 taskpriority=0 logused=1488 waitresource=KEY: 19:72057594175946752 (fb992e63ce7c) waittime=4927 ownerId=5484183322 transactionname=implicit_transaction lasttranstarted=2012-11-28T19:47:10.653 XDES=0xb08a83b0 lockMode=U schedulerid=11 kpid=7984 status=suspended spid=3683 sbid=0 ecid=0 priority=0 trancount=2 lastbatchstarted=2012-11-28T19:47:11.647 lastbatchcompleted=2012-11-28T19:47:11.647 clientapp=jTDS hostname=clienttesterAPP hostpid=123 loginname=tester-PRD-tester-MAIN isolationlevel=read committed (2) xactid=5484183322 currentdb=19 lockTimeout=4294967295 clientoption1=671088672 clientoption2=128058
(@P0 nvarchar(4000)@P1 nvarchar(4000)@P2 nvarchar(4000)@P3 int@P4 nvarchar(4000)@P5 nvarchar(4000)@P6 nvarchar(4000)@P7 nvarchar(4000)@P8 nvarchar(4000)@P9 datetime@P10 nvarchar(4000)@P11 int@P12 int@P13 nvarchar(4000)@P14 nvarchar(4000))update dbo.sym_node set "node_id" = @P0 "node_group_id" = @P1 "external_id" = @P2 "sync_enabled" = @P3 "sync_url" = @P4 "schema_version" = @P5 "symmetric_version" = @P6 "database_type" = @P7 "database_version" = @P8 "heartbeat_time" = @P9 "timezone_offset" = @P10 "batch_to_send_count" = @P11 "batch_in_error_count" = @P12 "created_at_node_id" = @P13 where "node_id" = @P14
inputbuf
update dbo.sym_node set "node_id" = @P0 "node_group_id" = @P1 "external_id" = @P2 "sync_enabled" = @P3 "sync_url" = @P4 "schema_version" = @P5 "symmetric_version" = @P6 "database_type" = @P7 "database_version" = @P8 "heartbeat_time" = @P9 "timezone_offset" = @P10 "batch_to_send_count" = @P11 "batch_in_error_count" = @P12 "created_at_node_id" = @P13 where "node_id" = @P14
frame procname=adhoc line=1 stmtstart=504 sqlhandle=0x020000007430040fc90acec8b230d13e24f8efc99977f395
executionStack
process id=process2200f5948 taskpriority=0 logused=488 waitresource=KEY: 19:72057594175946752 (303c5a46f094) waittime=4972 ownerId=5484183543 transactionname=implicit_transaction lasttranstarted=2012-11-28T19:47:11.230 XDES=0x283e0b950 lockMode=U schedulerid=22 kpid=6320 status=suspended spid=3581 sbid=0 ecid=0 priority=0 trancount=2 lastbatchstarted=2012-11-28T19:47:11.543 lastbatchcompleted=2012-11-28T19:47:11.543 clientapp=jTDS hostname=clienttesterAPP hostpid=123 loginname=tester-PRD-tester-MAIN isolationlevel=read committed (2) xactid=5484183543 currentdb=19 lockTimeout=4294967295 clientoption1=671088672 clientoption2=128058
process-list
deadlock victim=process2200f5948
deadlock-list
ResType:LockOwner Stype:'OR'Xdes:0x0000000283E0B950 Mode: U SPID:3581 BatchID:0 ECID:0 TaskProxy:(0x00000000EE40C538) Value:0xdc87ce80 Cost:(0/488)
Victim Resource Owner:
Log Viewer could not read information for this log entry. Cause: Data is Null. This method or property cannot be called on Null values.. Content:
ResType:LockOwner Stype:'OR'Xdes:0x00000000B08A83B0 Mode: U SPID:3683 BatchID:0 ECID:0 TaskProxy:(0x00000002E1E62538) Value:0x5ae0b9c0 Cost:(0/1488)
Requested by:
Input Buf: Language Event: (@P0 nvarchar(4000)@P1 nvarchar(4000)@P2 nvarchar(4000)@P3 int@P4 nvarchar(4000)@P5 nvarchar(4000)@P6 nvarchar(4000)@P7 nvarchar(4000)@P8 nvarchar(4000)@P9 datetime@P10 nvarchar(4000)@P11 int@P12 int@P13 nvarchar(4000)@P14 nvarchar(4000))upd
SPID: 3581 ECID: 0 Statement Type: UPDATE Line #: 1
Owner:0x000000026872AC80 Mode: U        Flg:0x40 Ref:0 Life:00000001 SPID:3581 ECID:0 XactLockInfo: 0x0000000283E0B990
Grant List 2:
KEY: 19:72057594175946752 (fb992e63ce7c) CleanCnt:2 Mode:U Flags: 0x1
Node:2
Log Viewer could not read information for this log entry. Cause: Data is Null. This method or property cannot be called on Null values.. Content:
ResType:LockOwner Stype:'OR'Xdes:0x0000000283E0B950 Mode: U SPID:3581 BatchID:0 ECID:0 TaskProxy:(0x00000000EE40C538) Value:0xdc87ce80 Cost:(0/488)
Requested by:
Input Buf: Language Event: (@P0 nvarchar(4000)@P1 nvarchar(4000)@P2 nvarchar(4000)@P3 int@P4 nvarchar(4000)@P5 nvarchar(4000)@P6 nvarchar(4000)@P7 nvarchar(4000)@P8 nvarchar(4000)@P9 datetime@P10 nvarchar(4000)@P11 int@P12 int@P13 nvarchar(4000)@P14 nvarchar(4000))upd
SPID: 3683 ECID: 0 Statement Type: UPDATE Line #: 1
Owner:0x000000025A793580 Mode: X        Flg:0x40 Ref:0 Life:02000000 SPID:3683 ECID:0 XactLockInfo: 0x00000000B08A83F0
Grant List 1:
KEY: 19:72057594175946752 (303c5a46f094) CleanCnt:2 Mode:X Flags: 0x1
Node:1
Log Viewer could not read information for this log entry. Cause: Data is Null. This method or property cannot be called on Null values.. Content:
Wait-for graph
Deadlock encountered .... Printing deadlock information

Oracle 11g: performance improvements of inserts

Posted: 25 Mar 2013 06:33 PM PDT

I have a table of 500 million rows (and growing).

I did the following to improve performance of inserts:

On database side:

  • dropped all indexes and constraints
  • disabled logging

On application side:

  • switched from JPA managed entities to native insert queries, added APPEND Oracle hint to the query
  • tried to commit in batches per 1k/2k/3k of rows
  • tried to write in parallel (multiple threads, thread count = to core count on server) to one table

This gave me about 300 rows per second

Additionally tried:

  • write in parallel in batches to multiple tables (to group then back results using UNION)

This gave me about 1k rows per second, but only on empty tables. When I filled the tables with dummy data (200 million rows each), insert speed dropped to 250-300 per second.

Could anyone suggest what else I can do to speed up inserts? Basically, I want to understand what the bottleneck is (or could be) first.

UPD: the table is partitioned by insert date and has about 60 columns, most of them VARCHAR2(2000 BYTE).
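One detail worth checking, as a hedged aside: the APPEND hint only direct-paths INSERT ... SELECT statements and is ignored for INSERT ... VALUES; on 11gR2 the analogous hint for batched single-row inserts is APPEND_VALUES. Note that direct-path inserts take an exclusive lock on the table (or partition), so they conflict with writing from multiple threads to the same segment.

    INSERT /*+ APPEND_VALUES */ INTO big_table (col1, col2)  -- hypothetical names
    VALUES (:1, :2);   -- bound and executed as a JDBC batch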

Landed as BI, but databases are a big WTF, what to do?

Posted: 25 Mar 2013 01:00 PM PDT

Maybe a duplicate, but I believe my case is a bit different. One of the answers pointed me to this post on SQL Server Central, which also comes in handy but is not quite the same scenario: 9 Things to Do When You Inherit a Database

A couple of weeks ago I started a new job. I'm supposed to be working as a BI analyst and SQL developer, but right from the first assignments I noticed that, in general, everything was taking long to execute. I asked the person guiding me in the first days (my supervisor, you could say), and he told me they know the databases are a mess. I asked if I could take a look and see what could be done, and got a yes as an answer.

So I began to dig in, using several really handy scripts, for example:

What I found is a big mess, as they told me. As an example, the blitzindex procedure returns almost 2000 rows, with lots of duplicate indexes, NC indexes including all the columns of a table, lots of heap tables, really wide indexes, and more. As for backups, none have been taken for several weeks; I asked about it, and the IT guys just copy the databases each night to a different server. A couple of databases are over 100 GB, and several others are close to that size. Statistics are updated every day for every table. There are reports that take more than an hour to finish on not-so-big tables (just a couple of million rows). And so on.

As a test, I spent a couple of days tuning a couple of big tables and the different procedures and queries that use them. I prepared a baseline using Profiler, then made a few changes and ran the test queries again. As expected, a report that was taking about 8 minutes now runs in around a minute, and a couple of other queries also now take less than half the time. All these changes were made on a test server; we still have a reporting server and a production server.

Taking into consideration that I'm supposed to be a BI and SQL developer with limited rights, new to the office, and not a DBA: what other actions do you recommend I take to approach this scenario? There is an appointed DBA, but he seems to be just a SQL developer doing some DBA tasks. There was a DBA, but they told me he left around half a year ago. Should I forget about these issues? Or, as someone heavily using the database, should I point out the problems and propose solutions? Has anyone been in the same scenario?

Postgres continuous backups - should WAL numbers be sequential after pg_start_backup call?

Posted: 25 Mar 2013 05:32 PM PDT

I'm using PostgreSQL WAL archiving and pg_start_backup as outlined in the Continuous Archiving section of the Postgres manual to backup a large database. Twice a month we do a full backup and every other night we just copy the WALs over to our backup space so if there's ever an issue we just do a restore of the last full backup + the WALs that have been created since then.

I've noticed that the WALs are sequential except where pg_start_backup is called (or technically it may be when calling pg_stop_backup, I'm not sure which), at which point the numbering skips a few. Is this normal behaviour? I know calling pg_stop_backup moves to the next WAL segment, but I'd expect that to be the next number incrementally, or am I missing something?

For example on the 14th of March we had the following WALs after a 1st of March full backup: 000000010000008500000090 000000010000008500000091 000000010000008500000092 000000010000008500000093

After the backup it then went to: 000000010000008500000096 000000010000008500000096.00000020.backup

After this normal WAL creation resumed: 000000010000008500000097 000000010000008500000098 and so on

So should there be a 94 or 95 in this case?

I'm in the process of doing a test restore but as that involves copying 60GB (compressed) of data between two servers in different continents, I'd prefer to know sooner rather than later if I'm going to have an issue!

Can't select database for transaction log backup in maintenance plan

Posted: 25 Mar 2013 07:17 PM PDT

I am running into a problem while trying to save a transaction log database backup maintenance task:

  1. Add the 'Back Up Database Task' to the maintenance plan.
  2. Edit the task.
  3. Select backup type: 'Transaction Log'.
  4. Select any database running with Full Recovery Model.
  5. Close the task.
  6. Re-open the task. Notice that the database I selected is not checked anymore.

I can select all databases, all user databases, or specific databases; none of these selections ever saves. Every time I re-open the task, the field has reverted to the < Select one or more > option.

I've connected as user sa and I've connected with Windows Authentication, with the same result. Also, I've double-checked that the databases I'm trying to back up are in the Full recovery model.

I'm running SQL Server 2005 SP2 on Windows 2003 R2 SP2.


EDIT: This happens even if the database had a full backup run against it.

SQL Server 2012, restore database

Posted: 25 Mar 2013 09:21 PM PDT

I am using MS SQL Management Studio 2012 to restore a database from a .bak file. (That file is from a backup of the database on the server.)

In my new database, the triggers are missing. Why did I lose the triggers?

MS SQL Server not importing data after expanding Partition Schema and Function

Posted: 25 Mar 2013 07:42 PM PDT

Context: I use SSIS to import data into MS SQL Server. A regular import process failed, saying that the particular filegroup I was trying to import data into was already full and couldn't allocate more space. So I created new filegroups and associated new files with them; however, the import still tried to import into the same already-full files instead of the new ones I created. At this point I deleted the filegroups and files without emptying them, which left me unable to create new files with the same names as the deleted files. Initially this was the problem, but with the help of MSSQLSupport I was soon able to fix it, and then the rest of the problem I describe below came up.

Problem:

  • Cannot import data into a table (with 4B rows, 2300 Files 30+ FileGroups) in a DB with (50+ tables, 10+ TB) database using SSIS import package (this package can import data to other database without any modifications)
  • Database is partitioned into Files and FileGroups using a Partition Schema and a Partition Function
  • Partition is based on an Integer field in the data that corresponds to a Week Number
  • Initial Partition function range and the Schema was defined up until the last week of Dec 2012
  • After creating an additional Partition function range and Schema mapping, plus new Files and FileGroups, the data still will not import

I would provide additional information if I knew where to start. Any pointers or places to look will be much appreciated. I have also contacted a few support services for SQL Server and have not gotten any answers.
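For reference, the usual sequence for extending a partition layout is sketched below; the function, scheme, filegroup, and boundary names are placeholders rather than the actual objects involved:

-- Point the partition scheme at the newly created filegroup
ALTER PARTITION SCHEME ps_weekly NEXT USED [FG_2013_W01];
-- Add a new boundary (week number) to the partition function
ALTER PARTITION FUNCTION pf_weekly() SPLIT RANGE (201301);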

How can I count the total number of documents indexed by my Oracle Text index?

Posted: 25 Mar 2013 03:47 PM PDT

I know I can use ctx_query.count_hits to count the number of documents matching a query, and similarly I can use

count(*) where CONTAINS(...)  

But these are only options if I actually have a query. Those functions throw an exception if I pass an empty string as the query (thanks, Oracle...). So:

Is there a way to count the total number of documents indexed by my Oracle Text index?
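One rough approximation, assuming every non-NULL value of the indexed column is indexed (the Oracle Text default), is simply to count those rows; the table and column names below are placeholders:

SELECT COUNT(*)
FROM my_docs
WHERE doc_text IS NOT NULL;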

Foreign Key - One Child Table to Multiple Parent Tables

Posted: 25 Mar 2013 10:27 AM PDT

Maybe I'm thinking of this problem in the wrong way, but we maintain an AccessControl table that is a child to several parent tables (e.g. CompanyItems, SystemFiles, etc.). I'm trying to make sure we don't create orphaned AccessControl recs, but I'm not sure how to properly set up this relationship. Any tips?

AccessControl
    ItemId
    AccessId (for user access)
    ItemType
    OwnerCd (1 for owner, 0 for read-only access)

CompanyItems
    CompanyId
    ItemId
    ItemType
    ItemName

SystemFiles
    FileId
    FileName
    FileType

Sample Data:

AccessControl: ItemId=1, AccessId=1 (UserId), ItemType=0 (for a CompanyItems rec), OwnerCd=1 (owner)
AccessControl: ItemId=1, AccessId=2 (different UserId), ItemType=0, OwnerCd=0 (read-only access)
AccessControl: ItemId=10, AccessId=1, ItemType=1 (for SystemFiles entry), OwnerCd=0
CompanyItems: CompanyId={whatever company}, ItemId=1, ItemType=0, ItemName='Test Item'
SystemFiles: FileId=10, FileName='Test File', FileType='pdf', etc.

If my CompanyItems is deleted, the two corresponding AccessControl recs should also be deleted. If my SystemFiles rec is deleted, then its one corresponding AccessControl rec should be deleted.
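One commonly suggested pattern, sketched below under the assumption that ItemId and FileId are the parents' keys, gives AccessControl one nullable FK column per parent plus a CHECK that exactly one is set; this is an illustration, not the only possible design:

ALTER TABLE AccessControl ADD CompanyItemId INT NULL, SystemFileId INT NULL;

ALTER TABLE AccessControl ADD
    CONSTRAINT FK_AC_CompanyItems FOREIGN KEY (CompanyItemId)
        REFERENCES CompanyItems (ItemId) ON DELETE CASCADE,
    CONSTRAINT FK_AC_SystemFiles FOREIGN KEY (SystemFileId)
        REFERENCES SystemFiles (FileId) ON DELETE CASCADE,
    -- exactly one parent reference must be set
    CONSTRAINT CK_AC_OneParent CHECK (
        (CompanyItemId IS NOT NULL AND SystemFileId IS NULL)
     OR (CompanyItemId IS NULL AND SystemFileId IS NOT NULL));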

InnoDB best index practises

Posted: 25 Mar 2013 08:58 AM PDT

I have an InnoDB table with around ~3.7m rows in it. Here are my current indexes:

+-------------+------------+---------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+
| Table       | Non_unique | Key_name      | Seq_in_index | Column_name   | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------------+------------+---------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+
| hotel_avail |          0 | PRIMARY       |            1 | id            | A         |     3720035 |     NULL | NULL   |      | BTREE      |         |
| hotel_avail |          0 | PRIMARY       |            2 | nights        | A         |     3720035 |     NULL | NULL   |      | BTREE      |         |
| hotel_avail |          1 | hotelname     |            1 | hotelname     | A         |        5914 |     NULL | NULL   |      | BTREE      |         |
| hotel_avail |          1 | destair       |            1 | destair       | A         |         347 |     NULL | NULL   |      | BTREE      |         |
| hotel_avail |          1 | boardbasis    |            1 | boardbasis    | A         |        2436 |     NULL | NULL   | YES  | BTREE      |         |
| hotel_avail |          1 | iso           |            1 | iso           | A         |         347 |     NULL | NULL   | YES  | BTREE      |         |
| hotel_avail |          1 | cheapestprice |            1 | cheapestprice | A         |      372003 |     NULL | NULL   |      | BTREE      |         |
| hotel_avail |          1 | dates         |            1 | checkindate   | A         |        6262 |     NULL | NULL   | YES  | BTREE      |         |
| hotel_avail |          1 | dates         |            2 | checkoutdate  | A         |        6262 |     NULL | NULL   | YES  | BTREE      |         |
| hotel_avail |          1 | engineid      |            1 | engineid      | A         |         347 |     NULL | NULL   |      | BTREE      |         |
| hotel_avail |          1 | itemcode      |            1 | itemcode      | A         |       28182 |     NULL | NULL   |      | BTREE      |         |
+-------------+------------+---------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+

Does it look like overkill? Can I merge some of these indexes? I've seen that in InnoDB you can create a PRIMARY KEY spanning several fields.

I'm always selecting on several of these fields, filtering the results using WHERE, and JOINing on a few of them as well.
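If some of those fields always appear together in the WHERE clause, one option is a composite index replacing several single-column ones; the sketch below assumes a query shape (filter on destair and checkindate, then price) that may not match the real workload:

ALTER TABLE hotel_avail
    ADD INDEX idx_dest_checkin_price (destair, checkindate, cheapestprice);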

Foreign Key constraint on fixed value field - Ever appropriate?

Posted: 25 Mar 2013 08:55 AM PDT

I have a short (15 rows) look-up table that lists the valid values for several columns in the database.
I don't think it can be considered a One-True-Lookup-Table; it is about a single definite concept, but it may conceptually be partitioned into some subgroups (3).
I now happen to need to add a column that must accept only the values from one of these subgroups.

The proper thing to do now would probably be to make one table for each subgroup and turn the original table into a simple list of IDs, from which the subgroup tables take their primary IDs.

It is, though, very unlikely that I will ever need to refer to the subgroups again; I will instead frequently use things for which all the values in the original table are valid.

So the partitioning option would make things more complex for the vast majority of the application just to support one case.

The only other option I know of is to add a Type column to the original lookup table and a fixed-value column to the single table that needs to refer to the subgroup, and use a two-column foreign key (to ID + Type) in this single case.

This is very ugly, and I'm not sure it is theoretically correct to use columns that are not the primary key as foreign key referents, but given the context, is it acceptable, or even the best way?
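For what it's worth, the two-column pattern described above can be sketched as follows, with all names hypothetical; the UNIQUE (id, type) constraint is what makes the composite foreign key legal even though id alone is the primary key (note that some engines, e.g. MySQL before 8.0, parse but ignore CHECK):

CREATE TABLE lookup (
    id   INT     NOT NULL PRIMARY KEY,
    type CHAR(1) NOT NULL,          -- subgroup discriminator
    UNIQUE (id, type)               -- composite FK target
);

CREATE TABLE referring_table (
    id          INT     NOT NULL PRIMARY KEY,
    lookup_id   INT     NOT NULL,
    lookup_type CHAR(1) NOT NULL DEFAULT 'A',
    CHECK (lookup_type = 'A'),      -- pins the reference to one subgroup
    FOREIGN KEY (lookup_id, lookup_type) REFERENCES lookup (id, type)
);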

Error on creating database in SSMS

Posted: 25 Mar 2013 08:19 AM PDT

I am using SQL Server Management Studio 2008. I am connected to localhost and I want to create a database. When I click on New Database, type the name, and click OK, I get the message:

Index was outside the bounds of the array (Microsoft.SQLServer.Smo).
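A possible workaround, assuming the SMO error is confined to the dialog itself, is to create the database from a query window instead; the name is a placeholder:

CREATE DATABASE MyNewDatabase;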

Replicating databases for local and cloud instances

Posted: 25 Mar 2013 07:50 AM PDT

Forgive me if this is an inappropriate place for this architectural question but I'm not sure where it best fits in the StackExchange family.

I have been tasked with developing an online/live auction system that operates at multiple locations simultaneously as well as online. These auctions are fast-paced, selling thousands of items in 3-4 hours, with roughly 3 "auction blocks" at each location. The point is, many people connect to this at once. Normally I'd begin designing this "in the cloud", but I've been told it MUST operate even if internet access is lost (and the internet access is of low quality to begin with).

Initially I thought it might be best to simply write two pieces of software, one running locally and one running in the cloud, with an API bridge between them. The unfortunate part is that the database would have to live locally and be served from the location, and I don't particularly care for that, especially considering the bandwidth limitations. Additionally, the scope requires that each location be able to access another location's auction items.

Lately I've been thinking a replication approach may work, where the database is replicated across all locations and the local "client" operates against the cloud until latency hits a predefined threshold or the internet connection dies completely, at which point it switches to the local copy of the database (it's OK if the online auction gets shut off in a bad-connection scenario; it would be the first casualty).

Does anyone have any best practices or experience in dealing with large amounts of data, changing rapidly with tons of connections, that must be available both on a local network and online (in the cloud)? What works best in these scenarios?

EDIT

Would MySQL or MongoDB replication replicate fast enough to be used in this scenario? Perhaps with the primary database in the cloud and several fail-over replicas (one at each location)?
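For what it's worth, classic MySQL asynchronous replication is configured on each local replica roughly as below; the host and credentials are placeholders, and whether its lag is acceptable here depends entirely on the link quality:

CHANGE MASTER TO
    MASTER_HOST = 'primary.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = '...',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 4;
START SLAVE;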

How to write this self join based on three columns

Posted: 25 Mar 2013 06:24 PM PDT

Hello there, I have the following table:

------------------------------------------
| id | language | parentid | no_daughter |
------------------------------------------
| 1  |    1     |    0     |      2      |
| 1  |    2     |    0     |      2      |
| 2  |    1     |    1     |      1      |
| 2  |    2     |    1     |      1      |
| 3  |    1     |    1     |      0      |
| 3  |    2     |    1     |      0      |
| 4  |    1     |    2     |      0      |
| 4  |    2     |    2     |      0      |
| 5  |    1     |    2     |      0      |
| 5  |    2     |    2     |      1      |
| 5  |    1     |    4     |      1      |
| 5  |    2     |    4     |      1      |
------------------------------------------

Scenario

Every record has more than one row in the table, with different language ids. parentid tells which record is the parent of this record. The no_daughter column tells, for each record, how many children that record has. This means that, ideally, if no_daughter is 2 for id = 1, then 1 should appear as the parentid of 2 records in the same table. But if a record exists more than once with respect to language, it is still counted as one record.

My Problem

I need to find the records where the no_daughter value is not correct. That is, if no_daughter is 2, there must be two records whose parentid is that id. In the above case the record with id = 1 is valid, but the record with id = 2 is not, because its no_daughter = 1 while its actual number of daughters is 2. The same is the case with id = 4.

Can anybody tell me how I can find these faulty records?

Help received so far

SELECT DISTINCT id
FROM tbl_info t
INNER JOIN
    (SELECT parentid, COUNT(DISTINCT id) AS childs
     FROM tbl_info
     GROUP BY parentid) AS parentchildrelation
    ON t.id = parentchildrelation.parentid
    AND t.no_daughters != parentchildrelation.childs

This query returns the ids that have been used as a parentid somewhere in the table but have wrong no_daughter values. It does not return ids that have a value in the no_daughter column but have never been used as a parentid anywhere in the table. For example, id = 5 has no_daughter = 1 but is not used as a parentid in the table, so it is also a faulty record; the above query does not capture such records.
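A LEFT JOIN variant of that query, sketched below, treats ids that never appear as a parentid as having 0 children, which should also catch records like id = 5:

SELECT DISTINCT t.id
FROM tbl_info AS t
LEFT JOIN
    (SELECT parentid, COUNT(DISTINCT id) AS childs
     FROM tbl_info
     GROUP BY parentid) AS pcr
    ON t.id = pcr.parentid
WHERE t.no_daughter <> COALESCE(pcr.childs, 0);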

Any help will be much appreciated.

Optimization of a select statement

Posted: 25 Mar 2013 06:30 PM PDT

I'm using MySQL and have a table user_data like this:

user_id     int(10) unsigned
reg_date    int(10) unsigned
carrier     char(1)

reg_date is the Unix timestamp of the registration date, and carrier is the carrier type, whose possible values can ONLY be 'D', 'A' or 'V'.

I need to write a SQL statement that returns the number of registered users per carrier for each day from 2013/01/01 to 2013/01/31. The desired result would look like:

2013/01/01   D   10
2013/01/01   A   31
2013/01/01   V   24
2013/01/02   D    9
2013/01/02   A   23
2013/01/02   V   14
....
2013/01/31   D   11
2013/01/31   A   34
2013/01/31   V   22

Can anyone help me with this question? I'm required to give the BEST answer, which means I can add an index if necessary, but I need to keep the query efficient.

This is what I have right now:

select FLOOR((FROM_UNIXTIME(reg_date)) / 1000000) as reg_day, carrier, count(user_id) as user_count
from user_data
where reg_date >= UNIX_TIMESTAMP('2013-01-01 00:00:00') and reg_date < UNIX_TIMESTAMP('2013-02-01 00:00:00')
group by reg_day, carrier;

Thanks!
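One possible variant, sketched below, groups on DATE(FROM_UNIXTIME(...)) for readability and adds a covering index so the range scan and grouping can be served from the index; the index name is hypothetical:

ALTER TABLE user_data
    ADD INDEX idx_regdate_carrier (reg_date, carrier, user_id);

SELECT DATE(FROM_UNIXTIME(reg_date)) AS reg_day,
       carrier,
       COUNT(*) AS user_count
FROM user_data
WHERE reg_date >= UNIX_TIMESTAMP('2013-01-01 00:00:00')
  AND reg_date <  UNIX_TIMESTAMP('2013-02-01 00:00:00')
GROUP BY reg_day, carrier;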

The question has been moved here; please find all the updates at the link.

why would command text not execute at all?

Posted: 25 Mar 2013 03:43 PM PDT

In my report I have:

SELECT column1,
       column2,
       'poop'
FROM mytable

I am using SQL Server Profiler to see exactly what statement is being sent.

I have set only two filters:

  1. databaseName
  2. (a second filter, shown only in a screenshot)

Yet after running the report, no statement gets intercepted.

I suspect that, because I am a beginner at SSRS, I am missing something crucial here.

For what reason would the command text not be executed at all?

I did follow this question to make sure that I am using SQL Profiler correctly, and indeed I am: http://stackoverflow.com/questions/9107383/sql-server-profiler-capture-calls-to-your-databases-stored-procs-during-ssrs

Another important bit of information: although the chart in the report shows no data, I actually do see data when I run the command text from SSMS!
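As a cross-check that the report executes the dataset at all, the report server keeps its own execution log; a sketch, assuming a default ReportServer database:

SELECT TOP 10 *
FROM ReportServer.dbo.ExecutionLog2
ORDER BY TimeStart DESC;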

SQL Server split mdf into multiple files

Posted: 25 Mar 2013 01:53 PM PDT

I have a database called example.mdf, with a total size of 1 GB, which suffers from performance issues. I checked the allocated hardware and it is higher than required, and I double-checked the design and everything looks normal. But when I look at the .mdf files in their physical location (C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\example.mdf), I find that the database is split into multiple files: example_1.mdf, example_2.mdf, example_3.mdf, up to example_7.mdf.

I have another database file on the same SQL Server that has the same issue.

Why does this happen? Does it affect performance? How can I prevent or stop SQL Server from splitting my .mdf files? Moreover, I need to combine the already-split files back together.
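A quick way to see exactly which files make up the database is to query the catalog; size is stored in 8 KB pages:

SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM sys.master_files
WHERE database_id = DB_ID('example');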

Rent weekly cost database design

Posted: 25 Mar 2013 11:01 AM PDT

I have a database which contains a table BUILDING, where each row holds details about a building. Another table, BUILDING_UNIT, contains rows with details about a single building unit, which refers with a foreign key to the owning BUILDING.ID. The BUILDING_UNIT table also refers to a table CATEGORY, which tells whether the BUILDING_UNIT is of category A, B, C or D, again with a foreign key pointing to CATEGORY.ID.

Now, the final cost of renting a building unit depends on its building, its category, the number of days it is rented, and the specific period of the year. We only rent weekly, so I might as well use weeks only; however, I'd like it to be as flexible as possible for the future.

I cannot settle on a table design that can represent this situation.

Do I have to use a table with coefficients for each day of the year, then a table with coefficients for A, B, C, D, then a table with coefficients for each building, and then somehow calculate a result?

Is there some standard and recognized implementation for problems of this type?

Thank you

EDIT: Note that the solution should abstract away the formula for calculating the cost, which might change in the future. However, I might be asked to make a specific week of the year for building unit X inside building Y cost $300 while the week after costs $600. Generally, building units inside the same building cost the same in the same week, but that might change in the future too, so I'd like to handle all the specific cases already.
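One possible shape for this, sketched with entirely hypothetical names, is a single rate table in which NULL means "any", so that per-unit, per-week overrides coexist with broad defaults and the pricing code just picks the most specific matching row:

CREATE TABLE RATE (
    ID           INT PRIMARY KEY,
    BUILDING_ID  INT NULL REFERENCES BUILDING (ID),       -- NULL = any building
    CATEGORY_ID  INT NULL REFERENCES CATEGORY (ID),       -- NULL = any category
    UNIT_ID      INT NULL REFERENCES BUILDING_UNIT (ID),  -- NULL = any unit
    START_DATE   DATE NOT NULL,
    END_DATE     DATE NOT NULL,
    WEEKLY_PRICE DECIMAL(10,2) NOT NULL
);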

Proper procedure for migrating a MySQL database to another Debian machine?

Posted: 25 Mar 2013 08:51 AM PDT

I have one server running an older Debian version with MySQL 5.x and a newer Debian server, also running MySQL.

I've created a backup of all databases on the first server like so:

mysqldump -uuser -ppass --all-databases > dump.sql  

On the other server, I did a:

mysql -uuser -ppass < dump.sql  

At first, everything seemed great: I could browse my databases in phpMyAdmin. But as soon as I tried logging in again, it failed. It turns out my root password had been overwritten with the one from the older database.

I wanted to reset it, but in order to do so I would have needed to start mysqld_safe, which I couldn't, because the password for the debian-sys-maint user had been overwritten in the database as well. Just when I thought all hell had broken loose, I somehow reset both the root and debian-sys-maint passwords to the original values of the new server, and I managed to revert to a clean state.

Since I obviously don't want to go down that road again, here's the question(s):

  • Was I right with my approach of using a complete --all-databases dump?
  • Was there something I needed to do before loading that dump to prevent this disaster from happening? Or even before creating the dump?

If I'm going about this the wrong way:

  • What is the proper procedure for migrating all databases and their users to another server?

Note that I'm not that experienced with MySQL or server administration, so I might be missing something obvious. All the tutorials and how-tos I've found never mention anything like this and just talk about importing the complete dump.
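One commonly suggested alternative, sketched with placeholder database names, is to dump only the application databases so that the target's mysql system database, and with it the root and debian-sys-maint accounts, is never touched:

mysqldump -uuser -ppass --databases app_db1 app_db2 > dump.sql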

How to add 'root' MySQL user back on MAMP?

Posted: 25 Mar 2013 11:51 AM PDT

In phpMyAdmin, I removed the 'root' user by mistake while logged in as 'root'. How can I add the 'root' user back on MAMP?
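If another privileged account is available, or mysqld is started with --skip-grant-tables, root can be recreated roughly as below; 'root' is also MAMP's usual default password, but treat that as an assumption:

CREATE USER 'root'@'localhost' IDENTIFIED BY 'root';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
FLUSH PRIVILEGES;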

Database Link from Oracle 10g on Windows over to MySQL database

Posted: 25 Mar 2013 10:57 AM PDT

Didn't see any answers on this subject so hoping for a miracle! :-)

I've got an Oracle 10g database running on Windows Server 2003 and I'd like to create a database link to an external MySQL database which is running in the Amazon Web Service RDS cloud.

We may have the option of upgrading the Oracle version to 11g if that would help?

Has anyone done anything like this before?

Thanks for your help! Josh
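For reference, once a heterogeneous ODBC gateway (hsodbc on 10g, dg4odbc on 11g) is configured with a listener and TNS entry, the final step is an ordinary database link; every name below is a placeholder:

CREATE DATABASE LINK mysql_link
    CONNECT TO "mysql_user" IDENTIFIED BY "mysql_pass"
    USING 'mysql_dsn';

SELECT * FROM "some_table"@mysql_link;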

Slow insert with MySQL full-text index

Posted: 25 Mar 2013 07:51 AM PDT

I use a full-text index in a MySQL table, and each insert into this table takes about 3 seconds. It seems that MySQL rebuilds (part of) the full-text index after each insert/update. Is this right?

How can I get better INSERT performance? Is there perhaps an option that controls when MySQL rebuilds the full-text index?
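For bulk loads, one common pattern, sketched with hypothetical table and index names, is to drop the FULLTEXT index, insert everything, and rebuild the index once at the end:

ALTER TABLE articles DROP INDEX ft_body;
-- ... bulk INSERTs here ...
ALTER TABLE articles ADD FULLTEXT INDEX ft_body (body);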

SQL Server equivalent to functionality of Oracle RAC?

Posted: 25 Mar 2013 10:38 AM PDT

I did some Googling and couldn't find an answer to this question more recent than a few years ago, so I thought I'd ask. Oracle's RAC feature offers load-balancing for both read and write transactions, as well as scale-out and high-availability without downtime (at least, as I understand it - we're about to deploy our first databases that use RAC, so we'll see how it goes).

Is there any SQL Server feature set (or third-party component you could install on top) that delivers equivalent functionality? We've always used Windows clustering, where a failover event causes about 20-30 seconds of SQL downtime - always tolerable, but not ideal. Now, with AlwaysOn in SQL 2012, SQL Server shrinks that to about 15 seconds and adds the concept of read-only secondary databases, but write transactions are still funnelled through a single connection point (much improved, since many transactions only read, but still not really load balancing), and in the case of a node failure or the need to patch, there's still downtime.

I suppose it's just more curiosity - I feel like this is the only area where SQL Server falls behind Oracle (at least among the features I've personally seen used). I wanted to see if there are any options out there to close that gap, and possibly improve our own SQL Server deployment, while we wait for Microsoft's equivalent feature to be added - maybe in SQL 2014/2015?

Howto use Windows Authentication with SQL Server 2008 Express on a workgroup network?

Posted: 25 Mar 2013 08:29 PM PDT

I have two computers running SQL Server 2008 Express: c01 and c02. I set up both for remote connections using Windows authentication. It worked fine for c02 but not for c01.

This is the error message I'm getting:

Cannot connect to ACAMP001\SQLEXPRESS.


ADDITIONAL INFORMATION:

Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=18452&LinkId=20476

I don't know if I'm missing something; here is what I did:

  1. Enabled the TCP/IP protocol for the client from SQL Server Configuration Manager.
  2. Modified Windows firewall exceptions for the respective ports.
  3. Started the SQL Browser service as a local service.
  4. Added the Windows user to this group: "SQLServerMSSQLUser$c01$SQLEXPRESS".
  5. From Management Studio, added "SQLServerMSSQLUser$c01$SQLEXPRESS" to the SQLEXPRESS instance's logins under the Security folder, and granted it sysadmin permissions.
  6. Restarted c01\SQLEXPRESS.
  7. Restarted the SQL Browser service.
  8. There is no domain here; it's only a workgroup.

Any help is appreciated. Thank you.
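In a workgroup, Windows authentication generally requires identical local accounts (same user name and password) on both machines; one client-side trick worth trying, with "user" as a placeholder account that must exist on c01, is starting SSMS under explicit remote credentials:

rem "user" must exist as a local account on c01 with a matching password
runas /netonly /user:c01\user "ssms.exe"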

[MS SQL Server] Mostly accessed objects in database

[MS SQL Server] Mostly accessed objects in database


Mostly accessed objects in database

Posted: 25 Mar 2013 04:24 AM PDT

Is there a way or a query to find out which tables and views have been accessed most, or used most heavily, over the past year? Thanks,
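One hedged starting point is the index-usage DMV sketched below; note that its counters reset at every service restart, so it only covers "the past year" if the instance has been up that long:

SELECT OBJECT_NAME(s.object_id) AS object_name,
       SUM(s.user_seeks + s.user_scans + s.user_lookups) AS reads,
       SUM(s.user_updates) AS writes
FROM sys.dm_db_index_usage_stats AS s
WHERE s.database_id = DB_ID()
GROUP BY s.object_id
ORDER BY reads DESC;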

Cannot backup - Old killed backup still running

Posted: 24 Mar 2013 11:57 PM PDT

Hi, I have one DB on a multi-DB server that people cannot connect to. I'm not familiar with the DB. I THINK the problem was that a reindex or backup was ongoing when the server was restarted. If I try to back up the DB now, I get this error:

System.Data.SqlClient.SqlError: Backup, file manipulation operations (such as ALTER DATABASE ADD FILE) and encryption changes on a database must be serialized. Reissue the statement after the current backup or file manipulation operation is completed. (Microsoft.SqlServer.Smo)

I can find 2 processes which have been killed: KILLED/ROLLBACK, status=SUSPENDED. Both have a wait time (ms) of a couple of days.

kill 85 with statusonly
SPID 85: transaction rollback in progress. Estimated rollback completion: 0%. Estimated time remaining: 0 seconds.

How can I remove these so I can get the DB back up and working? Would a SQL restart work? Thanks
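The rollback progress of killed sessions can also be watched through a DMV; a minimal sketch:

SELECT session_id, command, status, percent_complete, estimated_completion_time
FROM sys.dm_exec_requests
WHERE command = 'KILLED/ROLLBACK';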

Why should we have multiple FileGroups instead of one?

Posted: 24 Mar 2013 11:50 PM PDT

I just happened to visit a good forum site on SQL Server. There was an article there saying that we should have more than one FileGroup because it is good practice, but the author failed to mention why we should not have a single FileGroup. Could anybody help me understand this?
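For context, filegroups are the unit for placing objects on different disks and for piecemeal backup/restore; a minimal sketch with hypothetical names:

ALTER DATABASE MyDb ADD FILEGROUP FG_Data2;
ALTER DATABASE MyDb ADD FILE
    (NAME = 'MyDb_Data2', FILENAME = 'D:\Data\MyDb_Data2.ndf')
    TO FILEGROUP FG_Data2;
-- New tables can then be placed on the new filegroup:
CREATE TABLE dbo.BigTable (Id INT PRIMARY KEY) ON FG_Data2;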

[Articles] Car Data

[Articles] Car Data


Car Data

Posted: 24 Mar 2013 11:00 PM PDT

Today's automobiles might collect more data than you think. All that data gives us opportunities to find new and interesting ways to use it in software.
