Saturday, June 22, 2013

[SQL Server 2008 issues] Big Performance Problem with Merge Statement

[SQL Server 2008 issues] Big Performance Problem with Merge Statement


Big Performance Problem with Merge Statement

Posted: 20 Jun 2013 07:12 AM PDT

Problem Summary: Merge Statement takes several times longer to execute than equivalent Update, Insert and Delete as separate statements. Why?Details: I have a relatively large table (about 35,000,000 records, approximately 13 GB uncompressed and 4 GB with page compression - including indexes). A MERGE statement pretty consistently takes two or three minutes to perform an update, insert and delete. At one extreme, updating 82 (yes 82) records took 1 minute, 45 seconds. At the other extreme, updating 100,000 records took about five minutes.When I changed the MERGE to the equivalent separate UPDATE, INSERT & DELETE statements (embedded in an explicit transaction) the entire update took only 17 seconds. The query plans for the separate UPDATE, INSERT & DELETE statements look [i]very[/i] similar to the query plan for the combined MERGE. However, all the row count estimates for the MERGE statement are way off.Obviously, I am going to use the separate UPDATE, INSERT & DELETE statements. :-) However, I would like to understand what is happening here. I've read some forum posts that talk about various bugs in the MERGE operation. Has anyone run into this particular problem before? Does anyone know why this might happen?The actual query plans for the four statements ( combined MERGE and the separate UPDATE, INSERT & DELETE ) are attached. SQL Code to create the source and target tables and the actual queries themselves are below. I've also included the statistics created by my test run. Nothing else was running on the server when I ran the test.Please let me know if you need any further information, and [i][b]thank you[/b][/i] very much for your help! 
:-)Server Configuration:SQL Server 2008 R2 SP1, Enterprise Edition3 x Quad-Core Xeon ProcessorMax Degree of Parallelism = 8148 GB RAMSQL Code:Target Table:[code="sql"]USE TPS;IF OBJECT_ID('dbo.ParticipantResponse') IS NOT NULL DROP TABLE dbo.ParticipantResponse;CREATE TABLE dbo.ParticipantResponse( RegistrationID INT NOT NULL ,ParticipantID INT NULL ,ResponseDate SMALLDATETIME NULL ,CallNumber INT NULL ,ElementCategory NVARCHAR(80) NULL ,ElementNameID INT NULL ,ElementName NVARCHAR(80) NULL ,Response VARCHAR(3000) NULL ,SourceTable VARCHAR(30) NOT NULL ,SourceTableID INT NOT NULL ,SpecialistName VARCHAR(80) NULL ,ID BIGINT NOT NULL ,TransferKey INT NOT NULL ,CONSTRAINT [PK_ParticipantResponse] PRIMARY KEY ( SourceTableID ,ID ) )WITH ( DATA_COMPRESSION = PAGE ); /* Index IX_PersonElement*/CREATE NONCLUSTERED INDEX IX_PersonElement ON dbo.ParticipantResponse ( RegistrationID ASC, ParticipantID ASC, ElementName ASC ) WITH ( MAXDOP = 1, SORT_IN_TEMPDB = ON, ONLINE = OFF, DATA_COMPRESSION = PAGE );/*Index IX_ParticipantResponse*/CREATE NONCLUSTERED INDEX IX_ParticipantResponse ON dbo.ParticipantResponse ( RegistrationID ASC, ParticipantID ASC, ResponseDate ASC, ElementName ASC ) WITH ( MAXDOP = 1, SORT_IN_TEMPDB = ON, ONLINE = OFF, DATA_COMPRESSION = PAGE ); /*Index IX_ParticipantID*/CREATE NONCLUSTERED INDEX IX_ParticipantID ON dbo.ParticipantResponse ( ParticipantID ASC ) WITH ( MAXDOP = 1, SORT_IN_TEMPDB = ON, ONLINE = OFF, DATA_COMPRESSION = PAGE );/*Index IX_ParticipantCall*/CREATE NONCLUSTERED INDEX IX_ParticipantCall ON dbo.ParticipantResponse ( ParticipantID ASC, CallNumber ASC ) WITH ( MAXDOP = 1, SORT_IN_TEMPDB = ON, ONLINE = OFF, DATA_COMPRESSION = PAGE );/*Index IX_ElementParticipant*/CREATE NONCLUSTERED INDEX IX_ElementParticipant ON dbo.ParticipantResponse ( ElementName ASC, ParticipantID ASC ) WITH ( MAXDOP = 1, SORT_IN_TEMPDB = ON, ONLINE = OFF, DATA_COMPRESSION = PAGE );[/code]Source Table:[code="sql"] IF OBJECT_ID('ETL.ParticipantResponseBuild') IS NOT 
NULL DROP TABLE ETL.ParticipantResponseBuild; CREATE TABLE ETL.ParticipantResponseBuild ( RegistrationID INT NOT NULL ,ParticipantID INT NULL ,ResponseDate SMALLDATETIME NULL ,CallNumber INT NULL ,ElementCategory NVARCHAR(80) NULL ,ElementNameID INT NULL ,ElementName NVARCHAR(80) NULL ,Response VARCHAR(3000) NULL ,SourceTable VARCHAR(30) NOT NULL ,SourceTableID INT NOT NULL ,SpecialistName VARCHAR(80) NULL ,ID BIGINT NOT NULL ,TransferKey INT NOT NULL ); ALTER TABLE ETL.ParticipantResponseBuild ADD PRIMARY KEY ( SourceTableID, ID ); CREATE INDEX [IX_TransferKey] ON ETL.ParticipantResponseBuild ( TransferKey ) WITH ( MAXDOP = 1, SORT_IN_TEMPDB = ON ); CREATE INDEX [IX_ParticipantID] ON ETL.ParticipantResponseBuild ( ParticipantID ) WITH ( MAXDOP = 1, SORT_IN_TEMPDB = ON );[/code]Combined MERGE Statement:[code="sql"]USE TPS;DECLARE @LastKeyCompleted INT = 476161;SET STATISTICS IO,TIME ON; MERGE INTO TPS.dbo.ParticipantResponse WITH (HOLDLOCK) AS Production USING TPS.ETL.ParticipantResponseBuild AS Build ON Production.SourceTableID = Build.SourceTableID AND Production.ID = Build.ID WHEN MATCHED AND Build.TransferKey > @LastKeyCompleted THEN UPDATE SET RegistrationID = Build.RegistrationID ,ParticipantID = Build.ParticipantID ,ResponseDate = Build.ResponseDate ,CallNumber = Build.CallNumber ,ElementCategory = Build.ElementCategory ,ElementNameID = Build.ElementNameID ,ElementName = Build.ElementName ,Response = Build.Response ,SourceTable = Build.SourceTable ,SpecialistName = Build.SpecialistName ,TransferKey = Build.TransferKey WHEN NOT MATCHED BY TARGET THEN INSERT ( RegistrationID ,ParticipantID ,ResponseDate ,CallNumber ,ElementCategory ,ElementNameID ,ElementName ,Response ,SourceTable ,SourceTableID ,SpecialistName ,ID ,TransferKey ) VALUES ( Build.RegistrationID ,Build.ParticipantID ,Build.ResponseDate ,Build.CallNumber ,Build.ElementCategory ,Build.ElementNameID ,Build.ElementName ,Build.Response ,Build.SourceTable ,Build.SourceTableID ,Build.SpecialistName 
,Build.ID ,Build.TransferKey ) WHEN NOT MATCHED BY SOURCE AND Production.ParticipantID IN ( SELECT ParticipantID FROM ETL.ParticipantResponseBuild ) THEN DELETE;[/code]Statistics for MERGE Statement:[code="plain"]SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 0 ms. SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.SQL Server parse and compile time: CPU time = 16 ms, elapsed time = 83 ms. SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms. SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms. SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.Table 'ParticipantResponse'. Scan count 1, logical reads 162202, physical reads 0, read-ahead reads 137853, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.Table 'Worktable'. Scan count 5, logical reads 41074, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.Table 'ParticipantResponseBuild'. Scan count 35964835, logical reads 72281824, physical reads 0, read-ahead reads 1, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.(9731 row(s) affected)(1 row(s) affected) SQL Server Execution Times: CPU time = 138981 ms, elapsed time = 143911 ms.SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 0 ms. 
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.[/code]Separate UPDATE, INSERT & DELETE Statements:[code="sql"]USE TPS;DECLARE @LastKeyCompleted INT = 476161;SET STATISTICS IO,TIME ON; BEGIN TRANSACTION UPDATE dbo.ParticipantResponse SET RegistrationID = Build.RegistrationID ,ParticipantID = Build.ParticipantID ,ResponseDate = Build.ResponseDate ,CallNumber = Build.CallNumber ,ElementCategory = Build.ElementCategory ,ElementNameID = Build.ElementNameID ,ElementName = Build.ElementName ,Response = Build.Response ,SourceTable = Build.SourceTable ,SpecialistName = Build.SpecialistName ,TransferKey = Build.TransferKey FROM dbo.ParticipantResponse AS Production JOIN ETL.ParticipantResponseBuild AS Build ON Production.SourceTableID = Build.SourceTableID AND Production.ID = Build.ID WHERE Build.TransferKey > @LastKeyCompleted; INSERT dbo.ParticipantResponse ( RegistrationID ,ParticipantID ,ResponseDate ,CallNumber ,ElementCategory ,ElementNameID ,ElementName ,Response ,SourceTable ,SourceTableID ,SpecialistName ,ID ,TransferKey ) SELECT Build.RegistrationID ,Build.ParticipantID ,Build.ResponseDate ,Build.CallNumber ,Build.ElementCategory ,Build.ElementNameID ,Build.ElementName ,Build.Response ,Build.SourceTable ,Build.SourceTableID ,Build.SpecialistName ,Build.ID ,Build.TransferKey FROM dbo.ParticipantResponse AS Production RIGHT JOIN ETL.ParticipantResponseBuild AS Build ON Production.SourceTableID = Build.SourceTableID AND Production.ID = Build.ID WHERE Production.SourceTableID IS NULL; DELETE dbo.ParticipantResponse FROM dbo.ParticipantResponse AS Production LEFT JOIN ETL.ParticipantResponseBuild AS Build ON Production.SourceTableID = Build.SourceTableID AND Production.ID = Build.ID WHERE Build.SourceTableID IS NULL AND Production.ParticipantID IN ( SELECT ParticipantID FROM ETL.ParticipantResponseBuild ); COMMIT TRANSACTION[/code]Statistics for Separate UPDATE, INSERT & DELETE Statements:[code="plain"]SQL Server parse and compile time: CPU time = 0 
ms, elapsed time = 0 ms. SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.SQL Server parse and compile time: CPU time = 77 ms, elapsed time = 77 ms. SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms. SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms. SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms. SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.Table 'ParticipantResponse'. Scan count 0, logical reads 39541, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.Table 'Worktable'. Scan count 5, logical reads 41074, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.Table 'ParticipantResponseBuild'. Scan count 1, logical reads 794, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.(9731 row(s) affected)(1 row(s) affected) SQL Server Execution Times: CPU time = 483 ms, elapsed time = 483 ms.Table 'ParticipantResponseBuild'. Scan count 9, logical reads 873, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.Table 'ParticipantResponse'. Scan count 9, logical reads 53986, physical reads 0, read-ahead reads 15, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.Table 'Worktable'. Scan count 13, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.(0 row(s) affected)(1 row(s) affected) SQL Server Execution Times: CPU time = 15290 ms, elapsed time = 1955 ms.Table 'ParticipantResponse'. Scan count 2313, logical reads 7497, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.Table 'Worktable'. Scan count 5, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.Table 'ParticipantResponseBuild'. 
Scan count 2, logical reads 332, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.(0 row(s) affected)(1 row(s) affected) SQL Server Execution Times: CPU time = 93 ms, elapsed time = 102 ms. SQL Server Execution Times: CPU time = 0 ms, elapsed time = 87 ms.SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 0 ms. SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.[/code]

xml queries with exist and contains giving unexpected results

Posted: 21 Jun 2013 05:59 PM PDT

I have a query as shown below.[code="sql"]DECLARE @tbl TABLE ( id INT, col XML ) INSERT INTO @tbl VALUES (1,'<Root> <Row> <User>xyz</User> <Rowid>1</Rowid> </Row> <Maxrowid>1</Maxrowid></Root>'),(2,'<Root> <Row> <User>xyz</User> <Rowid>1</Rowid> </Row> <Row> <User>mnj</User> <Rowid>2</Rowid> </Row> <Maxrowid>2</Maxrowid></Root>'),(3,'<Root> <Row> <User>abs</User> <Rowid>1</Rowid> </Row> <Row> <User>xra</User> <Rowid>2</Rowid> </Row> <Maxrowid>2</Maxrowid></Root>') --table before---SELECT *FROM @tbl t1------------------DECLARE @id varchar SELECT @id = 'xyz' SELECT * FROM @tbl t1 WHERE col.exist('//*/text()[contains(.,sql:variable("@id"))]') = 1 [/code]I am expecting this query to output rows which has 'xyz' in xml column 'col' anywhere. But it returns all rows.(when i checked it returns all rows having 'x' in xml column. So please tell me why does it happen?where i am going wrong?Also please correct it.Also i have another problem where below query for the same above xml does not return any output.I expect it to return xml column having 'xyz' anywhere in it.So please have a look at it also and correct my errors.[code="sql"]DECLARE @id varchar SELECT @id = 'xyz' SELECT * FROM @tbl t1 WHERE t1.col.exist('//*[text()=sql:variable("@id")]')=1 [/code]

Delete data and Transaction Logs

Posted: 20 Jun 2013 09:01 AM PDT

I have already Bulk inserted 9 million records from one table into another using SSIS; my problem now is how to delete these 8M records from the original table without blowing up my transaction logs? This table is heavily used 24-7, and honestly, I don't know what 'pitfalls' might occur if I change the recovery from Full to Simple, and back again after the delete. Do the transaction logs pickup where they ended? Don't they realize somethings missing :w00t:?I found this query, where the writer said it was more efficient, but I'm just not experienced enough to know. From everything I've read, I'm assuming there is no way to bypass the transaction logs all together?DELETE FROM myProductionTable WHERE EXISTS(SELECT * FROM myNewArchiveTable as arcWHERE arc.id = myProductionTable.id)Any advice would be great!! Thanks!!

Change the system date to future date in desired database

Posted: 21 Jun 2013 10:01 AM PDT

How to change the system date in only one database?When an application accesses this database, it should show the future date. But the rest of the database should show system date and when I'm done I can set it backIs this possible in sql server. I know it's possible in Oracle.

Need Help putting a Join to this Query

Posted: 21 Jun 2013 08:26 AM PDT

Hi,I am badly stuck in making the query which I guess needs a couple of joins.I have a list of visitors whose IPs I have recorded. Now for a specific report, I need to see which country each visitor has come from.I divided this task into two part, first getting all the IPs of the user who have uniquely logged in and the second part is to search two tables with country/IP information (from ip2nation) and to get the country with the provided IP.[b]First Part - getting all the IPs of the user who have uniquely logged in------------------------------------------------------------------------[/b][quote] SELECT a.uid, a.hostname, a.timestamp, COUNT(*) AS times FROM login_activity a GROUP BY a.hostname ORDER BY times desc[/quote]This gave me IPs (hostname) of all the past-logged in users. Works fine.[b]Second Part - Get country from the two tables (both have thousands of records) by inputting IP-------------------------------------------------------------[/b][quote] SELECT c.country FROM ip2nationCountries c, ip2nation i WHERE i.ip < INET_ATON( "157.191.122.36" ) AND c.code = i.country ORDER BY i.ip DESC LIMIT 0 , 1[/quote]This works great too.Now, for the real problem. Joining these two queries, to get the country (instead of an IP) from all the logged in users. 
This is what I wrote:- [quote] SELECT a.uid, a.hostname, a.timestamp, c.country, COUNT(*) AS times FROM login_activity a, ip2nationCountries c, ip2nation i WHERE i.ip < INET_ATON(a.hostname) AND c.code = i.country GROUP BY a.hostname ORDER BY times desc;[/quote]This has two problems:-- it takes a lot of time to load.- it gives wrong data (showing the visits in thousands for each row).- Basically, it shows all the data wrong.Could you help me in making this SQL?Just in case, I am putting in Structure and Sample Data of all three Tables.[b]Structure/Data of the Tables is :------------------------------[/b][b]ip2nation[/b] (has a lot of data)---(Structure) [quote] CREATE TABLE ip2nation ( ip int(11) unsigned NOT NULL default '0', country char(2) NOT NULL default '', KEY ip (ip) );[/quote](Data) [quote] INSERT INTO ip2nation (ip, country) VALUES(0, 'us'); INSERT INTO ip2nation (ip, country) VALUES(687865856, 'za'); INSERT INTO ip2nation (ip, country) VALUES(689963008, 'eg'); INSERT INTO ip2nation (ip, country) VALUES(691011584, 'za'); INSERT INTO ip2nation (ip, country) VALUES(691617792, 'zw'); INSERT INTO ip2nation (ip, country) VALUES(691621888, 'lr'); INSERT INTO ip2nation (ip, country) VALUES(691625984, 'ke'); INSERT INTO ip2nation (ip, country) VALUES(691630080, 'za'); INSERT INTO ip2nation (ip, country) VALUES(691631104, 'gh'); INSERT INTO ip2nation (ip, country) VALUES(691632128, 'ng'); INSERT INTO ip2nation (ip, country) VALUES(691633152, 'zw'); INSERT INTO ip2nation (ip, country) VALUES(691634176, 'za'); INSERT INTO ip2nation (ip, country) VALUES(691650560, 'gh'); INSERT INTO ip2nation (ip, country) VALUES(691666944, 'ng'); INSERT INTO ip2nation (ip, country) VALUES(691732480, 'tz'); INSERT INTO ip2nation (ip, country) VALUES(691798016, 'zm'); INSERT INTO ip2nation (ip, country) VALUES(691863552, 'za'); INSERT INTO ip2nation (ip, country) VALUES(691994624, 'zm'); INSERT INTO ip2nation (ip, country) VALUES(692011008, 'za'); INSERT INTO ip2nation (ip, country) 
VALUES(692027392, 'mg'); INSERT INTO ip2nation (ip, country) VALUES(692035584, 'ao'); INSERT INTO ip2nation (ip, country) VALUES(692043776, 'na'); INSERT INTO ip2nation (ip, country) VALUES(692060160, 'eg'); INSERT INTO ip2nation (ip, country) VALUES(692191232, 'ci'); INSERT INTO ip2nation (ip, country) VALUES(692207616, 'za'); INSERT INTO ip2nation (ip, country) VALUES(692240384, 'gh'); INSERT INTO ip2nation (ip, country) VALUES(692256768, 'sd');[/quote][b]ip2nationCountries (has a lot of data)[/b]---(Structure) [quote] CREATE TABLE ip2nationCountries ( code varchar(4) NOT NULL default '', iso_code_2 varchar(2) NOT NULL default '', iso_code_3 varchar(3) default '', iso_country varchar(255) NOT NULL default '', country varchar(255) NOT NULL default '', lat float NOT NULL default '0', lon float NOT NULL default '0', PRIMARY KEY (code), KEY code (code) );[/quote](Data) [quote]INSERT INTO ip2nationCountries (code, iso_code_2, iso_code_3, iso_country, country, lat, lon) VALUES('ad', 'AN', 'AND', 'Andorra', 'Andorra', 42.3, 1.3); INSERT INTO ip2nationCountries (code, iso_code_2, iso_code_3, iso_country, country, lat, lon) VALUES('ae', 'AR', 'ARE', 'United Arab Emirates', 'United Arab Emirates', 24, 54); INSERT INTO ip2nationCountries (code, iso_code_2, iso_code_3, iso_country, country, lat, lon) VALUES('af', 'AF', 'AFG', 'Afghanistan', 'Afghanistan', 33, 65); INSERT INTO ip2nationCountries (code, iso_code_2, iso_code_3, iso_country, country, lat, lon) VALUES('ag', 'AT', 'ATG', 'Antigua and Barbuda', 'Antigua and Barbuda', 17.03, -61.48); INSERT INTO ip2nationCountries (code, iso_code_2, iso_code_3, iso_country, country, lat, lon) VALUES('ai', 'AI', 'AIA', 'Anguilla', 'Anguilla', 18.15, -63.1); INSERT INTO ip2nationCountries (code, iso_code_2, iso_code_3, iso_country, country, lat, lon) VALUES('al', 'AL', 'ALB', 'Albania', 'Albania', 41, 20); INSERT INTO ip2nationCountries (code, iso_code_2, iso_code_3, iso_country, country, lat, lon) VALUES('am', 'AR', 'ARM', 'Armenia', 
'Armenia', 40, 45); INSERT INTO ip2nationCountries (code, iso_code_2, iso_code_3, iso_country, country, lat, lon) VALUES('an', 'AN', 'ANT', 'Netherlands Antilles', 'Netherlands Antilles', 12.15, -68.45); INSERT INTO ip2nationCountries (code, iso_code_2, iso_code_3, iso_country, country, lat, lon) VALUES('ao', 'AG', 'AGO', 'Angola', 'Angola', -12.3, 18.3); INSERT INTO ip2nationCountries (code, iso_code_2, iso_code_3, iso_country, country, lat, lon) VALUES('aq', 'AT', 'ATA', 'Antarctica', 'Antarctica', -90, 0); INSERT INTO ip2nationCountries (code, iso_code_2, iso_code_3, iso_country, country, lat, lon) VALUES('ar', 'AR', 'ARG', 'Argentina', 'Argentina', -34, -64); INSERT INTO ip2nationCountries (code, iso_code_2, iso_code_3, iso_country, country, lat, lon) VALUES('as', 'AS', 'ASM', 'American Samoa', 'American Samoa', -14.2, -170); INSERT INTO ip2nationCountries (code, iso_code_2, iso_code_3, iso_country, country, lat, lon) VALUES('at', 'AU', 'AUT', 'Austria', 'Austria', 47.2, 13.2); INSERT INTO ip2nationCountries (code, iso_code_2, iso_code_3, iso_country, country, lat, lon) VALUES('au', 'AU', 'AUS', 'Australia', 'Australia', -27, 133); INSERT INTO ip2nationCountries (code, iso_code_2, iso_code_3, iso_country, country, lat, lon) VALUES('aw', 'AB', 'ABW', 'Aruba', 'Aruba', 12.3, -69.58);[/quote][b]login_activity[/b]---(Structure) [quote]CREATE TABLE IF NOT EXISTS `mslop_login_activity` ( `aid` int(10) unsigned NOT NULL AUTO_INCREMENT COMMENT 'The primary identifier for an activity (session).', `uid` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'The mslop_users.uid corresponding to a session, or 0 for anonymous user.', `host_user_agent` varchar(256) NOT NULL DEFAULT '' COMMENT '$_SERVER["HOST_USER_AGENT"] string. 
This can be used with get_browser() in PHP.', `hostname` varchar(128) NOT NULL DEFAULT '' COMMENT 'The IP address that was used for this session.', `timestamp` int(11) NOT NULL DEFAULT '0' COMMENT 'The UNIX timestamp when the session was started.', PRIMARY KEY (`aid`), KEY `aid` (`aid`), KEY `uid` (`uid`), KEY `timestamp` (`timestamp`) );[/quote](Data) [quote]INSERT INTO `mslop_login_activity` (`aid`, `uid`, `host_user_agent`, `hostname`, `timestamp`) VALUES (1, 3, 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:19.0) Gecko/20100101 Firefox/19.0', '172.24.1.143', 1363038356), (2, 3, 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:19.0) Gecko/20100101 Firefox/19.0', '172.24.1.143', 1363038374), (3, 3, 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.57 Safari/537.17', '172.24.1.143', 1363193841), (4, 3, 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.57 Safari/537.17', '172.24.1.143', 1363194789), (5, 3, 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.57 Safari/537.17', '172.24.1.143', 1363197889), (6, 3, 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2) AppleWebKit/536.26.17 (KHTML, like Gecko) Version/6.0.2 Safari/536.26.17', '172.24.1.143', 1363207361), (7, 35, 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:19.0) Gecko/20100101 Firefox/19.0', '172.24.1.143', 1363301612), (8, 35, 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:19.0) Gecko/20100101 Firefox/19.0', '172.24.1.143', 1363301751), (9, 1, 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:19.0) Gecko/20100101 Firefox/19.0', '172.24.1.143', 1363364574), (10, 1, 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.57 Safari/537.17', '172.24.1.143', 1363374517), (11, 1, 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.57 Safari/537.17', 
'172.24.1.143', 1363377701), (12, 3, 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.57 Safari/537.17', '172.24.1.143', 1363714792), (13, 3, 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.57 Safari/537.17', '172.24.1.143', 1363714911), (14, 3, 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.57 Safari/537.17', '172.24.1.143', 1363714929), (15, 3, 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:19.0) Gecko/20100101 Firefox/19.0', '172.24.1.143', 1363715946), (16, 3, 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/536.28.10 (KHTML, like Gecko) Version/6.0.3 Safari/536.28.10', '172.24.1.161', 1363791080), (17, 4, 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/536.28.10 (KHTML, like Gecko) Version/6.0.3 Safari/536.28.10', '172.24.1.161', 1363791124), (18, 1, 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/536.28.10 (KHTML, like Gecko) Version/6.0.3 Safari/536.28.10', '172.24.1.161', 1363791144), (19, 3, 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_1) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.152 Safari/537.22', '172.24.1.143', 1363791365), (20, 64, 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_1) AppleWebKit/537.22 (KHTML, like Gecko) Chrome/25.0.1364.152 Safari/537.22', '172.24.1.143', 1363791650);[/quote]Thanks.

SQL Server T-SQL Boot Camp 3-5 days anywhere in the United States

Posted: 21 Jun 2013 08:54 AM PDT

I have a team of coders that I need to train how to code better in SQL 2005 and 2008. Is there a Microsoft approved partner that anyone knows about that I can send 2-4 people to train them on coding? I don't want to send them to DBA courses, because they are simply coders. I just want to beef up their coding knowledge so we can improve performance, technique, and efficiency.

dead lock

Posted: 21 Jun 2013 12:14 AM PDT

how dead lock terminates in sql ?

record stored in log

Posted: 21 Jun 2013 12:13 AM PDT

when we delete the record it will store in log file , how the transaction is stored in log file ?

OS + ldf or mdf + ldf sharing on RAID?

Posted: 21 Jun 2013 05:10 AM PDT

Hi,At work today I noticed that most of our SQL Server environments have the same setup:2 disks in RAID 14 disks in RAID 5The database config also follows this format: OS s/w & backups on the RAID 1db s/w, mdf & ldf all on the RAID 5Wouldn't it be better for performance if the OS & logs were on the RAID 1 instead of the logs being on the same disks as the datafiles? I guess the split was decided to ensure the backup were on a separate RAID collection but was just wondering~ it's not as if these servers are ever pushed much

An issue with my Execution plan

Posted: 03 Mar 2013 09:27 PM PST

I have a view. I made some changes (included few CASEs in SELECT list for a business requirement)i did a relative cost check between these old and new script with estimated and actual execution plans.it was 50-50%. so, i conclude that i didnt disturb my query performance.But, the perfo. went bad.so, where am i wrong.Help me pls.Appreciating your helps always.

a delema that migrating a SQL Server 2005 database with huge tables.

Posted: 21 Jun 2013 06:13 AM PDT

I run into problems that migrating a SQL Server 2005 database. The database size is 1 TB and there are two huge tables.Those tables have over 2 billions records. The steps that I prefer to take are:1. Archive those big table*************************************************************The SQL Server 2005 are running in two node clusters (SAN) and there are BI packagesThose packages are configured with connection string, etc pointed to old Cluster.I installed Microsoft SQL Server 2008 R2 with Service Pack 2 in the new three nodes cluster.However, the BI packages configured, etc pointed to old SQL Cluster nodes.2. I plan to create SQL Alias.http://www.mssqltips.com/sqlservertip/1620/how-to-setup-and-use-a-sql-server-alias/ Would somone help on the best practice to implement?Tank you.Edwin

How to exclude rows that have a specific column blank

Posted: 21 Jun 2013 03:34 AM PDT

I am looking for a case statement that will only show me rows that Have the Column First name filled in, if the first name column is blank then I do not want to see the entire row...I hope I expained this correctcase when First.Name is null then ???

How to prevent SQL Injection Attack from SQL server side

Posted: 20 Jun 2013 05:05 AM PDT

Hi Everyone,Is it possible to stop SQL Injection Attack at SQL server level?I have gone through some posts and articles that suggest all the checks at application level so that only authentic data can be entered into database.My client has a travel portal and facing SQL injection attack. My knowledge is limited in this topic. Please can anyone help and let me know in case we can do something at SQL server level so that it can be stopped.An early response would be highly appreciated.thanks in advance.

confused With View , Function and Store Procedure

Posted: 03 Mar 2013 06:08 PM PST

i get confused where to use function , store procedure and view.plz help me;-)

Nested if"S

Posted: 20 Jun 2013 09:39 PM PDT

HiI figured I'd put this out there before I leave for work...looking to see if I can do a nested if like this:if value1 = '1' and value2 = '2' then 'A' if value1 = '1' and value9 = '2' then 'B' else if value99 = 'l' and value9 = 'x' then 'C' end as? What I'm trying to do is find values between 2-3 fields then assign a code.also one supersede the other , so if first and second if conditions are met i want the first if valueThanks In AdvanceJoe

Performance Dashboard

Posted: 04 Apr 2011 12:59 AM PDT

Good morning,I have encountered an error when clicking on the "System CPU Utilization" graph. The error says:A data source instance has not been supplied for the data source 'CHECK_DEPENDENCIES'I have altered the stored proc, usp_Main_GetSessionInfo. To resolve an earlier date overflow error, I changed the idle_connection_time from:sum(convert(bigint, datediff(ms, login_time, getdate()))) - sum(convert(bigint, s.total_elapsed_time)) as idle_connection_timetosum(convert(bigint, CAST ( DATEDIFF ( minute, login_time, getdate()) AS BIGINT)*60000 + DATEDIFF ( millisecond, DATEADD ( minute,DATEDIFF ( minute, login_time, getdate() ), login_time ),getdate() ))) - sum(convert(bigint, s.total_elapsed_time)) as idle_connection_timeAnd now I have the CHECK_DEPENDENCIES error. Does anyone know how to resolve this?

Could you help me to extract any relevant information from this errorr dump?

Posted: 21 Jun 2013 02:17 AM PDT

One of our sql servers crashed 2 times in last 2 weeks and I noticed similar error logs. The problem is that I'm not able to extract anything useful from it. I wonder if you can guide me. Please don't hesitate in requesting any extra information you may need.ThanksCurrent time is 07:25:02 06/21/13. ===================================================================== BugCheck Dump ===================================================================== This file is generated by Microsoft SQL Server version 10.50.1600.1 upon detection of fatal unexpected error. Please return this file, the query or program that produced the bugcheck, the database and the error log, and any other pertinent information with a Service Request. Computer type is Intel(R) Xeon(R) CPU E5620 @ 2.40GHz. Bios Version is INTEL - 6040000 PhoenixBIOS 4.0 Release 6.0 8 X64 level 8664, 14 Mhz processor (s). Windows NT 6.1 Build 7601 CSD Service Pack 1. Memory MemoryLoad = 52% Total Physical = 16383 MB Available Physical = 7858 MB Total Page File = 32765 MB Available Page File = 23720 MB Total Virtual = 8388607 MB Available Virtual = 8371713 MB **Dump thread - spid = 0, EC = 0x0000000000000000 ***Stack Dump being sent to C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\LOG\SQLDump0001.txt * ******************************************************************************* * * BEGIN STACK DUMP: * 06/21/13 07:25:02 spid 1564 * * ex_terminator - Last chance exception handling * * * MODULE BASE END SIZE * sqlservr 0000000000CF0000 000000000489DFFF 03bae000 * ntdll 0000000077850000 00000000779F8FFF 001a9000 * kernel32 0000000077730000 000000007784EFFF 0011f000 * KERNELBASE 000007FEFD940000 000007FEFD9AAFFF 0006b000 * ADVAPI32 000007FEFE620000 000007FEFE6FAFFF 000db000 * msvcrt 000007FEFDB70000 000007FEFDC0EFFF 0009f000 * sechost 000007FEFDE70000 000007FEFDE8EFFF 0001f000 * RPCRT4 000007FEFDC60000 000007FEFDD8CFFF 0012d000 * MSVCR80 0000000075350000 0000000075418FFF 000c9000 * MSVCP80 
0000000074FD0000 00000000750D8FFF 00109000 * sqlos 0000000074F10000 0000000074F16FFF 00007000 * Secur32 000007FEFD200000 000007FEFD20AFFF 0000b000 * SSPICLI 000007FEFD490000 000007FEFD4B4FFF 00025000 * pdh 000007FEF6E70000 000007FEF6EBDFFF 0004e000 * SHLWAPI 000007FEFE980000 000007FEFE9F0FFF 00071000 * GDI32 000007FEFEC10000 000007FEFEC76FFF 00067000 * USER32 0000000077630000 0000000077729FFF 000fa000 * LPK 000007FEFE320000 000007FEFE32DFFF 0000e000 * USP10 000007FEFEC80000 000007FEFED48FFF 000c9000 * USERENV 000007FEFC960000 000007FEFC97DFFF 0001e000 * profapi 000007FEFD630000 000007FEFD63EFFF 0000f000 * WINMM 000007FEF6E30000 000007FEF6E6AFFF 0003b000 * IPHLPAPI 000007FEFB0E0000 000007FEFB106FFF 00027000 * NSI 000007FEFE310000 000007FEFE317FFF 00008000 * WINNSI 000007FEFB110000 000007FEFB11AFFF 0000b000 * opends60 0000000074F00000 0000000074F07FFF 00008000 * NETAPI32 000007FEFAE40000 000007FEFAE55FFF 00016000 * netutils 000007FEFCAF0000 000007FEFCAFBFFF 0000c000 * srvcli 000007FEFD0D0000 000007FEFD0F2FFF 00023000 * wkscli 000007FEFAE20000 000007FEFAE34FFF 00015000 * LOGONCLI 000007FEFCCB0000 000007FEFCCDFFFF 00030000 * SAMCLI 000007FEFA110000 000007FEFA123FFF 00014000 * BatchParser 0000000074ED0000 0000000074EFCFFF 0002d000 * IMM32 000007FEFDC10000 000007FEFDC3DFFF 0002e000 * MSCTF 000007FEFDA60000 000007FEFDB68FFF 00109000 * psapi 0000000077A10000 0000000077A16FFF 00007000 * instapi10 0000000074F40000 0000000074F4CFFF 0000d000 * cscapi 000007FEF6E20000 000007FEF6E2EFFF 0000f000 * sqlevn70 0000000074CC0000 0000000074EC0FFF 00201000 * CRYPTSP 000007FEFCEC0000 000007FEFCED6FFF 00017000 * rsaenh 000007FEFCBC0000 000007FEFCC06FFF 00047000 * CRYPTBASE 000007FEFD4C0000 000007FEFD4CEFFF 0000f000 * BROWCLI 000007FEF63B0000 000007FEF63C1FFF 00012000 * AUTHZ 000007FEFD090000 000007FEFD0BEFFF 0002f000 * MSCOREE 000007FEFA060000 000007FEFA0CEFFF 0006f000 * mscoreei 000007FEF9F00000 000007FEF9F8FFFF 00090000 * ole32 000007FEFEA00000 000007FEFEC02FFF 00203000 * credssp 
000007FEFCAB0000 000007FEFCAB9FFF 0000a000 * msv1_0 000007FEFCDF0000 000007FEFCE40FFF 00051000 * cryptdll 000007FEFD1E0000 000007FEFD1F3FFF 00014000 * kerberos 000007FEFCEE0000 000007FEFCF93FFF 000b4000 * MSASN1 000007FEFD6D0000 000007FEFD6DEFFF 0000f000 * schannel 000007FEFCC50000 000007FEFCCA5FFF 00056000 * CRYPT32 000007FEFD6E0000 000007FEFD849FFF 0016a000 * security 0000000074CB0000 0000000074CB2FFF 00003000 * WS2_32 000007FEFE700000 000007FEFE74CFFF 0004d000 * SHELL32 000007FEFEDD0000 000007FEFFB57FFF 00d88000 * OLEAUT32 000007FEFDD90000 000007FEFDE66FFF 000d7000 * ftimport 0000000060000000 0000000060024FFF 00025000 * MSFTE 0000000049980000 0000000049D2DFFF 003ae000 * VERSION 000007FEFC770000 000007FEFC77BFFF 0000c000 * dbghelp 0000000073DA0000 0000000073EFDFFF 0015e000 * WINTRUST 000007FEFDA00000 000007FEFDA38FFF 00039000 * ncrypt 000007FEFD040000 000007FEFD08CFFF 0004d000 * bcrypt 000007FEFD010000 000007FEFD031FFF 00022000 * mswsock 000007FEFCE60000 000007FEFCEB4FFF 00055000 * wship6 000007FEFCE50000 000007FEFCE56FFF 00007000 * wshtcpip 000007FEFC840000 000007FEFC846FFF 00007000 * ntdsapi 000007FEFA880000 000007FEFA8A6FFF 00027000 * DNSAPI 000007FEFCCE0000 000007FEFCD3AFFF 0005b000 * rasadhlp 000007FEFA920000 000007FEFA927FFF 00008000 * fwpuclnt 000007FEFAFA0000 000007FEFAFF2FFF 00053000 * bcryptprimitives 000007FEFCB00000 000007FEFCB4BFFF 0004c000 * ntmarta 000007FEFC740000 000007FEFC76CFFF 0002d000 * WLDAP32 000007FEFE5C0000 000007FEFE611FFF 00052000 * SAMLIB 000007FEFA040000 000007FEFA05CFFF 0001d000 * CLBCatQ 000007FEFDE90000 000007FEFDF28FFF 00099000 * sqlncli10 0000000073750000 0000000073A07FFF 002b8000 * COMCTL32 000007FEF83B0000 000007FEF844FFFF 000a0000 * COMDLG32 000007FEFE110000 000007FEFE1A6FFF 00097000 * SQLNCLIR10 00000000722A0000 00000000722D6FFF 00037000 * netbios 000007FEEDE00000 000007FEEDE09FFF 0000a000 * xpsqlbot 0000000072290000 0000000072297FFF 00008000 * xpstar 0000000072200000 0000000072287FFF 00088000 * SQLSCM 0000000073740000 
000000007374DFFF 0000e000 * ODBC32 000007FEF0C70000 000007FEF0D20FFF 000b1000 * ATL80 0000000075300000 000000007531FFFF 00020000 * odbcint 00000000736C0000 00000000736F7FFF 00038000 * clusapi 000007FEF0C20000 000007FEF0C6FFFF 00050000 * resutils 000007FEF1DC0000 000007FEF1DD8FFF 00019000 * xpstar 00000000721D0000 00000000721F4FFF 00025000 * xplog70 00000000721C0000 00000000721CFFFF 00010000 * xplog70 00000000721B0000 00000000721B1FFF 00002000 * dbghelp 000000006D090000 000000006D1EDFFF 0015e000 * * P1Home: 0044004C00510053: * P2Home: 0000000000518284: 0122002500210023 0024002900230027 0026002D0025002B 002800310027002F 002A003500290033 002C0039002B0037 * P3Home: 0000000000519CF0: 0000000009994C00 0000000019473730 0000000000000000 0000000000000000 0000000000000000 00000000195BC360 * P4Home: 0000000000000000: * P5Home: 0052004500AE00AC: * P6Home: 00000000194B4E98: 0050005C003A0043 00720067006F0072 00460020006D0061 00730065006C0069 00630069004D005C 006F0073006F0072 * ContextFlags: 000000000010000F: * MxCsr: 0000000000001F80: * SegCs: 0000000000000033: * SegDs: 000000000000002B: * SegEs: 000000000000002B: * SegFs: 0000000000000053: * SegGs: 000000000000002B: * SegSs: 000000000000002B: * EFlags: 0000000000000202: * Rax: 000000007EC2CD2C: * Rcx: 00000000095ABDA0: 0044004C00510053 0000000000518284 0000000000519CF0 0000000000000000 0052004500AE00AC 00000000194B4E98 * Rdx: 0000000000000000: * Rbx: 0000000000000000: * Rsp: 00000000095AC3B0: 0000000000000000 0000000000000000 00000000095AC5E0 000007FFFFF92000 00000000000042AC 0000000000000000 * Rbp: 0000000003A7887C: 0000000719930522 0000000202D888A4 0000000F02D88954 0000016002D888DC 0000000000000000 00000000FFFFFFFF * Rsi: 0000000000000000: * Rdi: 00000000095AC5E0: 00000000194B4E90 0000000000000000 0000000000000000 0000000000000000 0000000000000000 0000000000000000 * R8: 0000000000000000: * R9: 0000000000000000: * R10: 0000000000000000: * R11: 00000000095ACBC0: 0044004C00510053 0000000000518284 0000000000519CF0 
0000000000000000 0052004500AE00AC 00000000194B4E98 * R12: 0000000000000001: * R13: 0000000000000000: * R14: 00000000031DDF00: 0074005F00780065 0069006D00720065 006F00740061006E 0020002D00200072 007400730061004C 0061006800630020 * R15: 000000000000002F: * Rip: 000007FEFD949E5D: C3000000C8C48148 9090909090909090 08245C8948909090 D98B4820EC834857 00002AE8FA8B098B 2301840FC0854800 * ******************************************************************************* * ------------------------------------------------------------------------------- * Short Stack Dump 000007FEFD949E5D Module(KERNELBASE+0000000000009E5D) 0000000002D9981D Module(sqlservr+00000000020A981D) 0000000002D9DD3A Module(sqlservr+00000000020ADD3A) 0000000002D9D89D Module(sqlservr+00000000020AD89D) 00000000031DDDC2 Module(sqlservr+00000000024EDDC2) 00000000031DD5AB Module(sqlservr+00000000024ED5AB) 000000007538ACF0 Module(MSVCR80+000000000003ACF0) 0000000075389E0B Module(MSVCR80+0000000000039E0B) 000000007538A62B Module(MSVCR80+000000000003A62B) 000000007538A86B Module(MSVCR80+000000000003A86B) 000000007538ABE7 Module(MSVCR80+000000000003ABE7) 0000000077879D0D Module(ntdll+0000000000029D0D) 00000000778691AF Module(ntdll+00000000000191AF) 00000000778A1278 Module(ntdll+0000000000051278) 0000000000CF6F5A Module(sqlservr+0000000000006F5A) 0000000000CF6FB9 Module(sqlservr+0000000000006FB9) 0000000000CF72D7 Module(sqlservr+00000000000072D7) 000000000112A5C8 Module(sqlservr+000000000043A5C8) 000000000148B3CE Module(sqlservr+000000000079B3CE) 000000000115D83D Module(sqlservr+000000000046D83D) 000000000115DC99 Module(sqlservr+000000000046DC99) 000000000112AD10 Module(sqlservr+000000000043AD10) 000000000112ABB1 Module(sqlservr+000000000043ABB1) 0000000001486BDA Module(sqlservr+0000000000796BDA) 0000000000D3DC11 Module(sqlservr+000000000004DC11) 0000000000CFBBD8 Module(sqlservr+000000000000BBD8) 0000000000CFB8BA Module(sqlservr+000000000000B8BA) 0000000000CFB6FF Module(sqlservr+000000000000B6FF) 0000000001218FB6 
Module(sqlservr+0000000000528FB6) 0000000001219175 Module(sqlservr+0000000000529175) 0000000001219839 Module(sqlservr+0000000000529839) 0000000001219502 Module(sqlservr+0000000000529502) 00000000753537D7 Module(MSVCR80+00000000000037D7) 0000000075353894 Module(MSVCR80+0000000000003894) 000000007774652D Module(kernel32+000000000001652D) 000000007787C521 Module(ntdll+000000000002C521)

How to convert a binary file to a text file in SSIS or in SQL Server

Posted: 21 Jun 2013 12:49 AM PDT

How can I convert a binary file to a text file in an SSIS package? I then want to load it into a table.

Generate a table with all possibilities.

Posted: 21 Jun 2013 12:23 AM PDT

Hello All,

I need to generate a table with 6 fields. The possible values in all fields are -1, 0, a, b, c, d, e, f.

A field can contain a minus 1.
A field can contain a zero.
A field can contain the next value not yet used (a, b, c, d, e, f).
A field can contain a value already used (a, b, c, d, e).

So the first field can contain -1, 0 or an a. The second field can contain -1, 0, a or a b (b only if the first field is an a).

So for a row:
0 a -1 b a x (the x can be a -1, 0, a, b or c)

Not possible:
a b a c f x (f is used before d)

With these rules I want to generate all possible rows. I have spent some time on this, but have not managed it yet. I started off by building the table with all combinations and deleting combinations which were not allowed. At the moment I am thinking of doing something with strings (the first a should be before the first b, etc.).

I haven't solved the problem yet. Any handy and elegant methods to do this?

Thanks for your attention,
Ben Brugman
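One way to attack rules like these is a recursive CTE that builds the strings character by character while tracking how many distinct letters have been used so far: each step may append '-' (standing in for -1 here), '0', any letter already used, or the single next unused letter. This is only a sketch under that encoding, not a definitive solution:

[code="sql"]WITH Candidates AS
(
    -- anchor: the empty string, no letters used yet
    SELECT CAST('' AS VARCHAR(6)) AS s, 0 AS used, 0 AS len
    UNION ALL
    SELECT CAST(c.s + x.ch AS VARCHAR(6)),
           -- appending the next unused letter bumps the "used" count
           CASE WHEN x.ord = c.used + 1 THEN x.ord ELSE c.used END,
           c.len + 1
    FROM Candidates AS c
    JOIN (
        SELECT '-' AS ch, 0 AS ord UNION ALL SELECT '0', 0
        UNION ALL SELECT 'a', 1 UNION ALL SELECT 'b', 2
        UNION ALL SELECT 'c', 3 UNION ALL SELECT 'd', 4
        UNION ALL SELECT 'e', 5 UNION ALL SELECT 'f', 6
    ) AS x
      ON x.ord <= c.used + 1   -- '-'/'0' always legal; letters only up to the next unused one
    WHERE c.len < 6
)
SELECT s
FROM Candidates
WHERE len = 6;[/code]

Because the join condition enforces the "next unused letter" rule at every position, no invalid rows are ever generated, so there is nothing to delete afterwards.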

SSIS Scheduled Job Completes in 2 Seconds

Posted: 20 Jun 2013 09:44 PM PDT

I have a long-running SSIS package that I have so far run manually via "Execute Package Utility". Now I have created a job to schedule and run it. If I start the job, instead of the package running, it completes in 1 second as successful. I thought maybe I had commented out the command, but it looks fine:

[code="sql"]exec xp_cmdshell 'dtexec /DTS "\MSDB\Website_FTP_Import" /SERVER MyServer /CHECKPOINTING OFF /REPORTING V '[/code]

I run other SSIS packages this way, so this is strange. Maybe it's a permission thing somehow, but there are no errors. Any ideas?
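An instant "success" usually means the dtexec command itself failed before the package ever started (xp_cmdshell returning is all the job step checks). One way to see what dtexec actually did is to capture its console output; a sketch using the same command line as above:

[code="sql"]DECLARE @out TABLE (line NVARCHAR(4000));

INSERT INTO @out (line)
EXEC xp_cmdshell 'dtexec /DTS "\MSDB\Website_FTP_Import" /SERVER MyServer /CHECKPOINTING OFF /REPORTING V';

-- dtexec's output (including any logon, permission, or path errors) lands here
SELECT line FROM @out WHERE line IS NOT NULL;[/code]

Running that in SSMS under the SQL Agent service account's context (or as the job step itself) should surface the error the job is swallowing.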

DMVs

Posted: 21 Jun 2013 01:40 AM PDT

What are the most useful DMVs a DBA needs in daily life? In particular, which DMV can be used to check for blocking, other than sp_who2?
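As a starting point for the blocking question, sys.dm_exec_requests exposes a blocking_session_id column; joined to sys.dm_exec_sql_text it shows who is blocked, by whom, and on which statement. A minimal sketch:

[code="sql"]SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS blocked_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;[/code]

Other everyday DMVs worth knowing: sys.dm_exec_sessions, sys.dm_os_wait_stats, sys.dm_db_index_usage_stats and sys.dm_db_index_physical_stats.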

Need help with a SQL/OLAP Query (average of a particular weekday over the year)

Posted: 21 Jun 2013 12:33 AM PDT

Hi all,

I'm struggling with the following query: I'm trying to get the average sale from each Saturday in the year 2005 and, at the end, the average from all Saturdays in 2005.

This is how it's supposed to look:

OrderDate AverageSale
01.01.2005 1857,12
08.01.2005 1754,25
10.01.2005 1539,86
... ...
... ...
17.12.2005 1754,57
24.12.2005 2076,16
30.12.2005 1829,79
ALL 1947,64

I already have this, which shows me each Saturday with its average sale... except the last line:

SELECT OrderDate,
       AVG(CAST((ProductPrice * OrderQuantity) AS MONEY)) AS AverageSale
FROM TEST3
WHERE DATENAME(WEEKDAY, OrderDate) = 'Saturday'
  AND YEAR(OrderDate) = 2005
GROUP BY (OrderDate)

Does someone know how to get the last line with the average sale from all Saturdays in 2005? Perhaps with an OLAP function?

Thanks in advance for any advice!!
cheers anna
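GROUP BY ROLLUP adds exactly that grand-total row, and GROUPING() lets you label it 'ALL' instead of showing NULL. A sketch against the table from the post:

[code="sql"]SELECT CASE WHEN GROUPING(OrderDate) = 1
            THEN 'ALL'
            ELSE CONVERT(VARCHAR(10), OrderDate, 104)   -- dd.mm.yyyy
       END AS OrderDate,
       AVG(CAST(ProductPrice * OrderQuantity AS MONEY)) AS AverageSale
FROM TEST3
WHERE DATENAME(WEEKDAY, OrderDate) = 'Saturday'
  AND YEAR(OrderDate) = 2005
GROUP BY ROLLUP (OrderDate);[/code]

Note that the ALL row averages the underlying order rows, not the per-Saturday averages; the two are the same only if every Saturday has the same number of orders. If you want the average of the daily averages instead, average the grouped result in an outer query.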

How to remove an unknown special character?

Posted: 19 Jun 2013 10:39 PM PDT

Hi,

I am extracting data from a table to a .csv file in SSIS. The data in one of the fields has been copied from what looks like Microsoft Word and contains bullet points; some (not all) of these are causing the package to fail. I am unable to identify exactly what sort of bullet point it is, as it loses any recognizable properties when copied, so I cannot simply REPLACE it in the SQL query.

Any ideas how I can select JUST the alphanumeric text from the field, excluding ALL special characters? (Instead of multiple REPLACE statements - even that won't work, though, as I don't know what the character is to map it to its ASCII value.)

Thanks.
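Rather than replacing known characters one by one, you can strip everything that is not alphanumeric with a small scalar function built around PATINDEX, which finds the first character outside an allowed set. This is a sketch (the function name is made up, and the pattern keeps letters, digits and spaces - widen it if you need punctuation):

[code="sql"]CREATE FUNCTION dbo.StripNonAlphanumeric (@s VARCHAR(MAX))
RETURNS VARCHAR(MAX)
AS
BEGIN
    -- position of the first character that is NOT a letter, digit or space
    DECLARE @pos INT = PATINDEX('%[^a-zA-Z0-9 ]%', @s);
    WHILE @pos > 0
    BEGIN
        SET @s = STUFF(@s, @pos, 1, '');                -- delete it
        SET @pos = PATINDEX('%[^a-zA-Z0-9 ]%', @s);     -- find the next one
    END;
    RETURN @s;
END;[/code]

Scalar functions are slow over large row counts, but for a one-off extract this is usually acceptable; for ongoing loads a Script Component in the data flow doing the same filtering would scale better.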

Unable to Replay Default Profiler Trace

Posted: 19 Jun 2013 09:20 PM PDT

Hello Everyone,

I am facing an issue where the default trace file (.trc), when opened with Profiler on the same server, doesn't allow me to replay it, and I get the error below. I tried to find out on Google whether this is a known problem and what the resolution would be, but to no avail. Can someone please share if there is something I am doing wrong, or if this is the default behaviour? My server is SQL Server 2008 R2.

Dynamic query validation before executing

Posted: 20 Jun 2013 06:14 PM PDT

How do we validate a string query before executing it?

E.g. I have a table dbo.Test with columns A int, B int and C int. I have three VARCHAR variables: @VChrTotalString, @VChrString and @VChrFilterCondition (the filter condition can have multiple conditions).

@VChrString = 'SELECT * FROM dbo.Test WHERE '
@VChrFilterCondition = ' C = 5'
@VChrTotalString = @VChrString + @VChrFilterCondition

We do EXEC sp_executesql @VChrTotalString. This runs fine and is perfectly legitimate.

Suppose @VChrFilterCondition = 'C=5 AND B IN (8,7,5,6) OR A1 NOT IN (4,2)'. Here A1 is not a valid column in the table. If we execute this query we will get an error saying "Invalid column A1". I want to show a custom message instead, saying "Invalid Filter condition", but for that I need something back from EXEC sp_executesql @VChrTotalString on which I can base the message.

If we use SET PARSEONLY ON, then this query parses successfully even though A1 is invalid: SELECT * FROM dbo.Test WHERE A1 = 2 (PARSEONLY checks only syntax, not column names).

Is there any way of capturing the error before executing the query? Can you please help me resolve this issue?
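Since PARSEONLY checks only syntax, name-resolution errors surface only when the statement is compiled. One workable sketch is to wrap the dynamic execution in TRY/CATCH and translate the binding errors into the custom message (error 207 is "invalid column name"; the variable names follow the post):

[code="sql"]BEGIN TRY
    EXEC sp_executesql @VChrTotalString;
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() IN (207, 4104)   -- invalid column / identifier could not be bound
        RAISERROR('Invalid Filter condition', 16, 1);
    ELSE
    BEGIN
        DECLARE @msg NVARCHAR(2048) = ERROR_MESSAGE();
        RAISERROR('Query failed: %s', 16, 1, @msg);
    END
END CATCH;[/code]

If you must validate without actually running the statement, one option on 2008 is to execute it inside the TRY block with SET FMTONLY ON, which compiles and returns only metadata - but test that carefully, as FMTONLY has quirks and was later deprecated.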

Bulk Insert format file but input file not always fixed

Posted: 20 Jun 2013 05:18 AM PDT

I am trying to do a bulk insert using a non-XML format file. When my input file exactly matches my format file, the bulk insert works fine. I have data that is older and was not written in the exact same manner as the newer input data. In the case of this older data, the last field may either not be there at all or be shorter than the allotted characters. This causes an "unexpected end of file" error. Here is an example of what I mean:

Format file:

8.0
3
1 sqlchar 0 5 "" 1 recid ""
2 sqlchar 0 5 "" 2 testfield ""
3 sqlchar 0 0 "/r/n" 0 skipfield ""

The data that works is:

1 test1
2 test2

The data that does not work is:

1 test
2 tes

Another one that does not work (the first field is 5 characters with nothing for the second field):

1
2

Without creating a format file for each case, is there a way to have the bulk insert work with the variable length in the last field?

SQL QUERY REQUIRED

Posted: 19 Jun 2013 07:50 PM PDT

Dear all,

I have the following tables:

TABLE : A
ENTERPRISE_MAIN_CODE ENTERPRISE_SUB_CODE ENTERPRISE_DESCRIPTION
1 0 A
2 0 B
3 0 C
4 0 D
5 0 E
6 0 F
7 0 G
8 0 H
9 0 I

TABLE : B
ID_NO ENTERPRISE_MAIN_CODE ENTERPRISE_SUB_CODE
1 1 1
2 1 2
3 7 1
4 4 1
5 4 1
6 6 1
7 6 5

TABLE : C
ID_NO ENTERPRISE_ID_NO ENTERPRISE_DATE ENTERPRISE_AMT
1 1 20130109 1000
1 2 20130315 2000
2 3 20120120 3000
2 4 20120305 4000
3 5 20141111 5000
4 6 20130115 6000
5 7 20130320 7000

The user inputs only a month and year: for example 201303. The date format is yyyymmdd.

Output details: the output has 4 parts, which are explained below.
1. The user should see the count and the amount for the selected year and month (201303, i.e. March 2013 data).
2. The same month of the previous year (count and amount) (201203, i.e. March 2012 data).
3. From the first month of the selected year through the selected month (count and amount) (20130101 - 20130330).
4. From the first month of the previous year through the same month of the previous year (count and amount) (i.e. 20120101 - 20120330).

The expected output is as follows:

ENTERPRISE_DESCRIPTION count amt count amt count amt count amt
A 1 2000 1 4000 2 3000 2 7000
B 0 0 0 0 0 0 0 0
C 0 0 0 0 0 0 0 0
D 1 7000 0 0 2 13000 0 0
E 0 0 0 0 0 0 0 0
F 0 0 0 0 0 0 0 0
G 0 0 0 0 0 0 0 0
H 0 0 0 0 0 0 0 0
I 0 0 0 0 0 0 0 0

Thanks
Nick
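A common shape for this kind of four-period report is conditional aggregation: LEFT JOIN out from the description table so every row appears, then count/sum each period with CASE. This is only a sketch - it assumes ENTERPRISE_DATE is a yyyymmdd string, that C joins to B on ENTERPRISE_ID_NO = ID_NO (verify the join keys against your real schema), and @p is the user's input:

[code="sql"]DECLARE @p CHAR(6) = '201303';
DECLARE @prev CHAR(6) = CAST(CAST(LEFT(@p, 4) AS INT) - 1 AS CHAR(4)) + RIGHT(@p, 2);

SELECT a.ENTERPRISE_DESCRIPTION,
       COUNT(CASE WHEN LEFT(c.ENTERPRISE_DATE, 6) = @p THEN 1 END)                       AS cnt_cur,
       ISNULL(SUM(CASE WHEN LEFT(c.ENTERPRISE_DATE, 6) = @p THEN c.ENTERPRISE_AMT END), 0)   AS amt_cur,
       COUNT(CASE WHEN LEFT(c.ENTERPRISE_DATE, 6) = @prev THEN 1 END)                    AS cnt_prev,
       ISNULL(SUM(CASE WHEN LEFT(c.ENTERPRISE_DATE, 6) = @prev THEN c.ENTERPRISE_AMT END), 0) AS amt_prev,
       COUNT(CASE WHEN LEFT(c.ENTERPRISE_DATE, 6) BETWEEN LEFT(@p, 4) + '01' AND @p THEN 1 END) AS cnt_ytd,
       ISNULL(SUM(CASE WHEN LEFT(c.ENTERPRISE_DATE, 6) BETWEEN LEFT(@p, 4) + '01' AND @p THEN c.ENTERPRISE_AMT END), 0) AS amt_ytd,
       COUNT(CASE WHEN LEFT(c.ENTERPRISE_DATE, 6) BETWEEN LEFT(@prev, 4) + '01' AND @prev THEN 1 END) AS cnt_prev_ytd,
       ISNULL(SUM(CASE WHEN LEFT(c.ENTERPRISE_DATE, 6) BETWEEN LEFT(@prev, 4) + '01' AND @prev THEN c.ENTERPRISE_AMT END), 0) AS amt_prev_ytd
FROM A AS a
LEFT JOIN B AS b ON b.ENTERPRISE_MAIN_CODE = a.ENTERPRISE_MAIN_CODE
LEFT JOIN C AS c ON c.ENTERPRISE_ID_NO = b.ID_NO
GROUP BY a.ENTERPRISE_DESCRIPTION;[/code]

The LEFT JOINs are what produce the all-zero rows for descriptions with no activity.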

Need Help:Error log

Posted: 19 Jun 2013 11:10 PM PDT

Hi all,

When I checked my error log folder it had taken 100 GB of space and my disk is out of space. When I checked the log folder location I found many SQLDump*.mdmp files. I am not sure what's going on. I deleted all those files and started SQL Server, but I can see they are still generating mdmp files and landing in the log folder.

data migration from ms access database to ms sql server 2012

Posted: 20 Jun 2013 10:07 PM PDT

I have a client who is currently using a Microsoft Access database system to store data. The system has a maximum storage capacity of 2 GB; it is currently at 1.4 GB and is estimated to reach 1.6 GB in about 6 months' time.

I would like to know whether MS SQL Server 2012 has a UI (user interface) part that can be used to develop the front-end/user interface of the tool. Does it have the ability to convert the current UI, which was developed in CLARION?

Please assist.

Index statistics update

Posted: 20 Jun 2013 06:25 PM PDT

Hi,

We are using SQL Server 2008 R2 Express. We have a clustered index with an incremental key on almost all of our tables. We noticed our queries are executing slowly, so we need to optimize how queries run on this server. We defragment indexes with high fragmentation monthly. The fill factor for indexes is set to 100. I monitored the Access Methods\Page Splits/sec counter and the average value is around 1.

Is there any need to modify the fill factor setting? Auto create statistics and auto update statistics are set to true. Is there any need to update statistics manually? If so, how often do I need to update statistics? Are there other reasons for the queries slowing down?

Thank you
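On the statistics question: auto-update only fires after a sizeable fraction of a table's rows change (roughly 20% on 2008 R2), so statistics on large tables can go stale in between. If a manual refresh helps your plans, a sketch would be (the table name is illustrative):

[code="sql"]-- refresh statistics for one table with a full scan
UPDATE STATISTICS dbo.MyLargeTable WITH FULLSCAN;

-- or refresh everything the server considers out of date
EXEC sp_updatestats;[/code]

A common starting point is a weekly job, tightened or loosened depending on whether query plans actually improve afterwards.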

Move currentdate+1 to field when null?

Posted: 20 Jun 2013 05:25 AM PDT

Hi

I have a where clause:

table.AdminDate <= @startdate AND table.DischargeDate >= @enddate

But I want to use the current date + 1 in place of DischargeDate if DischargeDate is null.

Thanks
Joe
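ISNULL (or COALESCE) substitutes the fallback value inline; with DATEADD supplying "current date + 1" this becomes:

[code="sql"]WHERE table.AdminDate <= @startdate
  AND ISNULL(table.DischargeDate, DATEADD(DAY, 1, GETDATE())) >= @enddate[/code]

One caveat: wrapping the column in a function can prevent an index seek on DischargeDate. If that matters, the sargable equivalent is "(table.DischargeDate >= @enddate OR (table.DischargeDate IS NULL AND DATEADD(DAY, 1, GETDATE()) >= @enddate))".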

Search for string in xml column

Posted: 20 Jun 2013 04:26 PM PDT

I have a table with a column named 'xmlmycolumn' which is of datatype xml. I want to search for a string anywhere in that XML. To do this I am casting the column as varchar and then using CHARINDEX, as shown below:

SELECT *
FROM mytable
WHERE CHARINDEX('abc', CAST([xmlmycolumn] AS VARCHAR(MAX))) > 0

Is there any method, other than casting the column as varchar, that would increase the performance of the query?
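The xml type's exist() method with an XQuery contains() can search text nodes without serializing the whole document, and it can be supported by an XML index; whether it is actually faster depends on the data. A sketch that searches element text only (attribute values would need a second predicate):

[code="sql"]SELECT *
FROM mytable
WHERE xmlmycolumn.exist('//*/text()[contains(., "abc")]') = 1;[/code]

Note that XQuery contains() is case-sensitive, unlike a CHARINDEX against a case-insensitive collation, so results may differ from the cast-to-varchar approach.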

Connect with different AD credentials - Alias set up

Posted: 20 Jun 2013 06:32 PM PDT

Hello, everyone

Could someone tell me how to set up an alias on my Windows 7 desktop that will allow me to connect to a SQL Server in a different domain using a different AD login? I can't for the life of me remember where to set this up in Windows 7.

Thanks all

COLLATION issue

Posted: 20 Jun 2013 04:53 PM PDT

Hi all,

I have created a database on server1 with the default collation. Now I want to restore that database on another server, server2, which has a different collation, and because of this I am getting collation errors in my scripts. Is there any solution for this?
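A restored database keeps its own collation, so the usual conflict on server2 is between its columns and tempdb (or other databases) when strings are compared across them. Two common fixes, sketched below with illustrative names: add an explicit COLLATE to the offending comparison, or change the database's default collation (which affects only newly created columns, not existing ones):

[code="sql"]-- 1) resolve an individual comparison against a temp table
SELECT *
FROM dbo.MyTable AS a
JOIN #temp AS b
  ON a.Name = b.Name COLLATE DATABASE_DEFAULT;

-- 2) change the database default used for new columns
ALTER DATABASE MyDb COLLATE SQL_Latin1_General_CP1_CI_AS;[/code]

To rebase existing columns you would have to ALTER each character column (or rebuild the database and reload the data), which is a much bigger job.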

Friday, June 21, 2013

[how to] Why aren't my first few Postgres WAL segments being archived?

[how to] Why aren't my first few Postgres WAL segments being archived?


Why aren't my first few Postgres WAL segments being archived?

Posted: 21 Jun 2013 09:03 PM PDT

The pg_xlog directory for my cluster (on master) looks like this:

    -rw------- 1 postgres postgres  16M Jun 21 21:42 000000010000000000000001
    -rw------- 1 postgres postgres  16M Jun 21 22:42 000000010000000000000002
    -rw------- 1 postgres postgres  16M Jun 21 23:42 000000010000000000000003
    -rw------- 1 postgres postgres  16M Jun 22 00:42 000000010000000000000004
    -rw------- 1 postgres postgres  16M Jun 22 01:42 000000010000000000000005
    -rw------- 1 postgres postgres  16M Jun 22 01:49 000000010000000000000006
    -rw------- 1 postgres postgres  16M Jun 22 01:55 000000010000000000000007
    -rw------- 1 postgres postgres  16M Jun 22 02:05 000000010000000000000008
    -rw------- 1 postgres postgres  16M Jun 22 02:30 000000010000000000000009
    -rw------- 1 postgres postgres  16M Jun 22 02:50 00000001000000000000000A

The pg_xlog/archive_status directory looks like this:

    -rw------- 1 postgres postgres 0 Jun 22 01:49 000000010000000000000006.done
    -rw------- 1 postgres postgres 0 Jun 22 01:55 000000010000000000000007.done
    -rw------- 1 postgres postgres 0 Jun 22 02:05 000000010000000000000008.done
    -rw------- 1 postgres postgres 0 Jun 22 02:30 000000010000000000000009.done

When I first turned on archiving, 006 was the current segment, but why weren't 001-005 archived anyway? Last time I set up archiving on an existing cluster (which was yesterday - I've been experimenting), all log segments were archived when I turned archiving on, despite the pg_xlog directory containing more than 50 previous log segments.

Relevant settings:

    wal_level = hot_standby
    archive_mode = on
    wal_keep_segments = 2048  # I know, right :)
    archive_timeout = 3600
    archive_command = 'rsync -aq --delay-updates --ignore-existing %p postgres@db-slave:/mnt/pgsql/'

Note: I did change all of these settings (including wal_level, which was minimal previously) when I turned on archiving earlier. Could it be that my WAL level wasn't sufficient to warrant archiving, or that there is a mismatch between the prior level and the current level?

Cannot start sdows.ear module on WebLogic 12c

Posted: 21 Jun 2013 07:52 PM PDT

I've setup a CentOS 6.3 x64 box and OpenJDK 1.6, and installed Oracle 11g2 and WebLogic 12c. I'm trying to deploy a WFS to the box, as exploded EAR.

Consulted the Oracle hands on lab document, the 2nd lab here, I've tried to deploy the sdows.ear. But have got weird results. The deployment seems OK, console web pages reports no error, and shows all successful messages. However, all I see in the WFS URL are errors:

    <ogc:ServiceExceptionReport version="1.2.0" xsi:schemaLocation="http://www.opengis.net/ogc http://localhost:8888/examples/servlet/xsds/OGC-exception.xsd">
        <ogc:ServiceException code="WFS-1042">Exception during processing request</ogc:ServiceException>
    </ogc:ServiceExceptionReport>

When I check the back end server output I see:

........  <WS data binding error>Ignoring element declaration {http://svrproxy.ws.spatial.oracle/}makeSpatialRequestElement because there is no entry for its type in the JAXRPC mapping file.  <WS data binding error>Ignoring element declaration {http://svrproxy.ws.spatial.oracle/}makeSpatialRequestResponseElement because there is no entry for its type in the JAXRPC mapping file.  [oracle.spatial.ws.svrproxy.SdoWsService, INFO] Ws Configure file is /oracle/app/oracle/11.2.0/dbhome_1/md/jlib/sdowsDD.ear/sdows.warWEB-INF/conf/WSConfig.xml  [oracle.spatial.ws.WSProperties, ERROR] error parsing config file: /oracle/app/oracle/11.2.0/dbhome_1/md/jlib/sdowsDD.ear/sdows.warWEB-INF/conf/WSConfig.xml (No such file or directory)  [oracle.spatial.ws.WSProperties, ERROR] error closing reader: null  [oracle.spatial.ws.WSProperties, ERROR] Oracle Spatial WS Server could not set up configuration  parameters due to exception: null  ........  

I've also tried to deploy the module directly using the sdows.ear file, but similar error occurs:

........  <WS data binding error>Ignoring element declaration {http://svrproxy.ws.spatial.oracle/}makeSpatialRequestElement because there is no entry for its type in the JAXRPC mapping file.  <WS data binding error>Ignoring element declaration {http://svrproxy.ws.spatial.oracle/}makeSpatialRequestResponseElement because there is no entry for its type in the JAXRPC mapping file.  [oracle.spatial.ws.svrproxy.SdoWsService, INFO] Ws Configure file is nullWEB-INF/conf/WSConfig.xml  [oracle.spatial.ws.svrproxy.SdoWsService, FATAL] java.lang.NullPointerException  ........  

What have I done wrong?

In Job Step > Properties, only see two tabs: Package and Configuration

Posted: 21 Jun 2013 04:34 PM PDT

I am trying to set a job step that executes a package to use the 32-bit runtime, as it uses the SQL 10 driver against a SQL 2000 database. The job is running on SQL Server 2012. I see all kinds of examples of how to do this, where the job step properties page has 9 tabs (Set Values, Verification, Command Line, General, Configurations, Command Files, Data Sources, Execution Options and Logging). Execution Options has a checkbox for 32-bit mode.

When I look at my job step, logged in to the server as an Admin, running SSMS as administrator, all I see are two tabs: Package and Configuration. I deployed the packages using the Project Deployment method. How do I get those other tabs to show up?

Update existing SQL Server 2008 table via Flat File using SSIS

Posted: 21 Jun 2013 01:50 PM PDT

I am trying to update existing records or add any new ones to a SQL Server table using a flat file. Also I need it work as an SSIS package. I don't get an option to do a source query like I do for Excel, etc. when setting up the import task and even when I was trying to use Excel my options seemed limited to simple selects for the source query.

I really need to be able to run a full script that checks the first column (which is my key), updates existing records, then adds any new ones. Maybe I am taking the wrong approach here altogether, but I thought there was a way to do this with SSIS. Can someone point me in the right direction? If it were two tables and I didn't have to use SSIS I would just write the sql and be done with it. It's really the bridge with using flat files and SSIS (automating the import) that I'm looking for here.

Just to give a quick background of what I ultimately am trying to accomplish: We do exports of our data into flat files and those will go to a ftp server. We then have the customer import from those files into table representations of them so they can report against them.
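A common SSIS pattern for this is a two-step package: a Data Flow that loads the flat file into a staging table, followed by an Execute SQL Task that upserts into the real table. The upsert itself is then ordinary T-SQL; a sketch, with placeholder table and column names and the first column as the key:

    MERGE dbo.Target AS t
    USING dbo.Staging AS s
        ON t.KeyCol = s.KeyCol
    WHEN MATCHED THEN
        UPDATE SET t.Col1 = s.Col1, t.Col2 = s.Col2
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (KeyCol, Col1, Col2) VALUES (s.KeyCol, s.Col1, s.Col2);

Truncate the staging table at the start of each run; the same package can then be pointed at each incoming file in a Foreach Loop over the FTP drop folder.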

How to gracefully handle a source database restore from backup in a data warehouse

Posted: 21 Jun 2013 01:13 PM PDT

We are facing a challenging situation with our data warehouse source databases.

Frequently these source databases will be restored from backups. There is a high turnover of data entry staff who use the system, and they make many mistakes, so the business will just restore from a backup and start over.

But at this point, the data in the data warehouse will have already been processed and needs to be corrected. There could be thousands of rows of fact data which is no longer valid. Is there an appropriate design pattern to handle this scenario?

For example, would you need a way to rebuild the data warehouse from scratch? Would you attempt to restore a backup of the data warehouse and then build ETLs to synchronize? Would you delete data from your fact tables and then re-insert?

Mitigating MySQL replication delay in single master/multiple slave setup

Posted: 21 Jun 2013 01:10 PM PDT

For a particular web application that requires high availability I am thinking of scaling out by setting-up MySQL replication with one "write" master and multiple "read" slaves (as explained here http://dev.mysql.com/doc/refman/5.1/en/replication-solutions-scaleout.html).

The only issue that I have is how to mitigate the replication delay between the master and the slaves. For example, let's say that the user posts an item (writing to the master), and very quickly thereafter wants to view his posted items (reading from one of the slaves). What efficient solution can I put in place to make sure that a read operation from one of the slaves will always have all the content of any previously completed write operation to the master?
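One classic approach is read-your-writes via MASTER_POS_WAIT(): after the write, record the master's binlog coordinates, and have the read path block on the slave until it has replayed past that point. A sketch - the file name and position are placeholders taken from SHOW MASTER STATUS:

    -- on the master, immediately after the user's write:
    SHOW MASTER STATUS;   -- note the File and Position columns

    -- on the slave, before serving that user's read (wait up to 2 seconds):
    SELECT MASTER_POS_WAIT('mysql-bin.000042', 1234567, 2);

    -- a return value >= 0 means the slave has caught up;
    -- NULL or -1 means time out and fall back to reading from the master

A simpler alternative many applications use is session pinning: route a user's reads to the master for a short window (a few seconds) after any write they make, and to the slaves otherwise.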

Is my MySQL server overloaded? [closed]

Posted: 21 Jun 2013 12:26 PM PDT

What factors can help to determine whether a MySQL server is overloaded or not?

Is it about the number of connected users, the CPU, etc.?

Thanks.

How to calculate sigma > 1 in MySQL

Posted: 21 Jun 2013 09:47 AM PDT

I have a big set of data and am already calculating AVG and STD. But as STD is only 1 sigma, and 31% of all data points fall outside avg +/- std, I want to know if there is a good way to compute bands with sigma > 1. Sigma 2 or 3 would be enough.
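No separate function is needed: a k-sigma band is just AVG ± k * STDDEV. A sketch for k = 2 (the table and column names are made up):

    SELECT AVG(val) - 2 * STDDEV_SAMP(val) AS lower_2sigma,
           AVG(val) + 2 * STDDEV_SAMP(val) AS upper_2sigma
    FROM measurements;

    -- points outside the 2-sigma band
    SELECT m.*
    FROM measurements AS m
    CROSS JOIN (SELECT AVG(val) AS mu, STDDEV_SAMP(val) AS sd
                FROM measurements) AS s
    WHERE m.val NOT BETWEEN s.mu - 2 * s.sd AND s.mu + 2 * s.sd;

For a normal distribution roughly 5% of points fall outside 2 sigma and 0.3% outside 3 sigma; just change the multiplier for k = 3.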

Thanks

Changing the representation of NULLs in pg_dump plaintext output

Posted: 21 Jun 2013 06:44 PM PDT

I'm trying to convert a large-ish Postgres DB (500 GB) to SQL Server 2012. After investigating a few third-party tools and being disappointed in features, performance, or both, I started pursuing a simple pg_dump/bulk import solution. Things looked promising until I realized that pg_dump represents NULLs in plaintext as "\N", which causes the bulk insert to vomit in cases of type mismatch. Even if I were to automate the pg_dump process to produce a single file per table, some of the individual tables involved are very large (20-50 GB), and performing a comprehensive find-replace using even fast file-editing options in Linux, or a Perl script, adds too much overhead to the time required for the import/export.

I'm hoping there's a way to modify the NULL representation in the pg_dump output that I'm not aware of, or failing that, to get some recommendations for alternative approaches to this process in terms of tools or strategies. Thanks in advance for your help.
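For what it's worth, pg_dump itself doesn't expose the NULL string, but COPY does, and it can be driven per table. A sketch that writes CSV with empty-string NULLs (the path and table name are placeholders), which BULK INSERT/bcp on the SQL Server side handles more gracefully than \N:

    COPY my_table TO '/tmp/my_table.csv'
        WITH (FORMAT csv, NULL '', HEADER false);

    -- or client-side, without needing server file access:
    -- psql -c "\copy my_table to 'my_table.csv' with (format csv, null '')"

The trade-off is that an empty-string NULL becomes indistinguishable from a genuinely empty text value, so pick a NULL marker (e.g. NULL 'NULL') that cannot occur in your data if that distinction matters.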

Formatting T-SQL in SSMS 2012

Posted: 21 Jun 2013 03:56 PM PDT

According to this Microsoft document:

http://msdn.microsoft.com/en-us/library/ms174205.aspx

I am supposed to be able to use ctrl+K then ctrl+D to format my SQL documents in SQL Server Management Studio 2012 but when I use that combo I get the error:

The key combination (Ctrl+K, Ctrl+D) is bound to command (Format Document) which is not currently available.

I am trying to make modifications to an existing SQL document that has no formatting to it at all which makes it extremely difficult to read. Does anyone know how to make the Format Document command available so I can have SQL format this code for me?

Query for master in Postgres replication

Posted: 21 Jun 2013 12:40 PM PDT

I'm trying to find out if there is a way to query for the master, from a slave PostgreSQL server that has been set up with server replication.

From a slave, I can:

SELECT pg_is_in_recovery()  

And this will give me a 't' result if I'm on a slave and an 'f' result on the master, which is step one.

Next, I'd like to run a query that gives me some information about the master that it's replicating. Preferably an IP address or hostname.

For the record, I can query the master with:

SELECT * from pg_stat_replication  

And this will give me information about slaves. I am hoping that there is a reciprocal method for querying a slave.

Is this possible? If so, how?

Replication Options with data subsets

Posted: 21 Jun 2013 09:59 AM PDT

We have an application that makes of of a SQL Server 2012 (ServerA) database. Some data that we need to display, though, comes from another database, but it's SQL Server 2008 (ServerSOURCE). The database is also on a separate server.

The plan is to create a replication database on the same server as ServerA, called ServerREP, and replicate data from ServerSOURCE into ServerREP. But we only want certain tables from ServerSOURCE. It's a small subset of the tables. ServerSOURCE has hundreds of tables. We only need around 10.

Additionally, the data in the tables could be reduced. For example, say we have a Person table, which has a PersonTypeId. We only want rows where PersonTypeId = x.

Can this be achieved with replication? Can we get subsets of data from a subset of tables from the source, into our new replicated database? Would it be something like creating a VIEW on the source, and replicating the results of that view, as a table in the replicated database?
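Transactional replication supports both kinds of filtering natively: you publish only the articles (tables) you need, and a horizontal row filter can be attached per article, so no view is required. A sketch of the row-filter part - the publication, article and filter names are made up, and in practice this is usually configured through the New Publication wizard's Filter Table Rows page:

    EXEC sp_articlefilter
        @publication   = N'MyPublication',
        @article       = N'Person',
        @filter_name   = N'FLTR_Person_Type',
        @filter_clause = N'PersonTypeId = 1';

Vertical filtering (publishing only some columns) is also available per article if the 10 tables carry columns you don't need on ServerREP.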

Issues starting Postgres server [closed]

Posted: 21 Jun 2013 09:59 AM PDT

I'm trying to go through the book "7 Databases in 7 Weeks" and I'm completely stuck at starting a Postgres server.

My current issue is that when I run pg_ctl start I get the following error:

    LOG:  database system was shut down at 2013-06-12 17:18:24 PDT
    LOG:  invalid magic number 0000 in log file 0, segment 1, offset 0
    LOG:  invalid primary checkpoint record
    LOG:  invalid secondary checkpoint record
    PANIC:  could not locate a valid checkpoint record
    LOG:  startup process (PID 1454) was terminated by signal 6: Abort trap
    LOG:  aborting startup due to startup process failure

I have no idea how to go about fixing this. I'm running PostgreSQL 9.2.4 on Mountain Lion 10.8.3.
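A "could not locate a valid checkpoint record" PANIC usually means the WAL is corrupt or missing. If there is nothing worth saving (a fresh install while working through the book), the simplest fix is to re-run `initdb` on an empty data directory; otherwise `pg_resetxlog` can force the server past the bad checkpoint, at the cost of possible data loss. A sketch, assuming a Homebrew-style data directory (the path is an assumption — use your own `PGDATA`):

```shell
# Back up the data directory first -- pg_resetxlog can lose transactions.
pg_ctl -D /usr/local/var/postgres stop -m immediate
pg_resetxlog /usr/local/var/postgres
pg_ctl -D /usr/local/var/postgres start
```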

Copy a SQL Server database from one server to another on demand

Posted: 21 Jun 2013 08:29 PM PDT

I have two servers, Prod and Test, both running SQL Server 2008 RTM.

Is it possible to have a PowerShell or VB script that could copy a database from one server to another?

The copy should of course include all contents of the database, with the exception of privileges so I don't lock myself out from editing the destination.

The destination database should be cleared/reset or completely overwritten, so both source and destination are identical.

Additionally, the connection to the source should be read-only and (if possible) only able to initiate the copy process without actually having access to the data.

I am slightly familiar with PowerShell, so if this only means connecting and starting a task it should be doable. Or do I have to look for advanced solutions?

Thank you.
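One hedged PowerShell sketch using the `Invoke-Sqlcmd` cmdlet that ships with the 2008 client tools: back up on the source and restore `WITH REPLACE` on the destination, which overwrites it completely. Server, database, and share names are placeholders; re-creating users/permissions on the destination afterwards is up to you:

```powershell
# Sketch only: assumes both servers can reach \\share and that the
# account running this has backup rights on Prod and restore rights on Test.
Invoke-Sqlcmd -ServerInstance "Prod" -Query `
    "BACKUP DATABASE [MyDb] TO DISK = N'\\share\MyDb.bak' WITH INIT"
Invoke-Sqlcmd -ServerInstance "Test" -Query `
    "RESTORE DATABASE [MyDb] FROM DISK = N'\\share\MyDb.bak' WITH REPLACE"
```

Taking a backup only needs membership in db_backupoperator on the source, which goes some way toward the "read-only" requirement; a trigger that can start the copy with no data access at all would need something like a proxy job on the source server.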

Migration to databases Domain Users do not have access

Posted: 21 Jun 2013 12:34 PM PDT

I migrated databases to new servers; however, the applications that previously used the databases are failing to load. I have changed the connections, etc. The jobs also seem to be failing. I have a domain account who is the job owner. However, when I try to execute the job under my user name I get the following error:

Executed as user: NT AUTHORITY\SYSTEM. Login failed for user..... [SQLSTATE 28000] (Error 18456).

Is this related to the domain users not having appropriate read and write access to the database? Also, how would I give all domain users permission to execute stored procedures?
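The `NT AUTHORITY\SYSTEM` failure suggests the account the job runs under has no login on the new server, which is common after migrating databases without migrating logins. For granting all domain users execute rights, a hedged T-SQL sketch (domain and database names are placeholders):

```sql
-- Create a login/user for the built-in "Domain Users" group, then grant
-- EXECUTE at database scope, which covers every stored procedure in it.
CREATE LOGIN [MYDOMAIN\Domain Users] FROM WINDOWS;

USE MyDatabase;   -- hypothetical database name
CREATE USER [MYDOMAIN\Domain Users] FOR LOGIN [MYDOMAIN\Domain Users];
GRANT EXECUTE TO [MYDOMAIN\Domain Users];
```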

Creating a global temp table in MySQL

Posted: 21 Jun 2013 10:34 AM PDT

I am working on a MySQL stored procedure.

I need to create a temp table and I want to access this temp table whenever I execute this stored procedure.

I know we can't access the temp table in another request in MySQL.

Is there a way to create a temp table globally, or how can I access the temp table across multiple requests?
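MySQL has no global temporary tables, so the usual workaround is an ordinary permanent table that every invocation of the procedure shares, with rows tagged by `CONNECTION_ID()` so sessions do not trample each other. A sketch (table and column names are made up):

```sql
CREATE TABLE IF NOT EXISTS proc_scratch (
    session_id BIGINT UNSIGNED NOT NULL,   -- filled from CONNECTION_ID()
    payload    VARCHAR(255),
    KEY idx_session (session_id)
) ENGINE = InnoDB;

-- Inside the procedure: work against your own session's rows only.
-- INSERT INTO proc_scratch (session_id, payload) VALUES (CONNECTION_ID(), ...);
-- DELETE FROM proc_scratch WHERE session_id = CONNECTION_ID();  -- cleanup
```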

MongoDB problems recovering a member of the replica set

Posted: 21 Jun 2013 07:35 PM PDT

I have a sharded database with 2 replica sets (RS1 and RS2) each one of the RSs with 2 servers. I had a problem yesterday with one member of the RS2, the mongod instance crashed throwing an error. After that I tried to recover the member making it sync with the other member of the replica set (it took a long time to finish the sync) and then I'm getting the same error again:

Tue May  7 12:37:57.023 [rsSync]   Fatal Assertion 16233  0xdcf361 0xd8f0d3 0xc03b0f 0xc21811 0xc218ad 0xc21b7c 0xe17cb9 0x7f57205f2851 0x7f571f99811d   /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xdcf361]   /usr/bin/mongod(_ZN5mongo13fassertFailedEi+0xa3) [0xd8f0d3]   /usr/bin/mongod(_ZN5mongo11ReplSetImpl17syncDoInitialSyncEv+0x6f) [0xc03b0f]   /usr/bin/mongod(_ZN5mongo11ReplSetImpl11_syncThreadEv+0x71) [0xc21811]   /usr/bin/mongod(_ZN5mongo11ReplSetImpl10syncThreadEv+0x2d) [0xc218ad]   /usr/bin/mongod(_ZN5mongo15startSyncThreadEv+0x6c) [0xc21b7c]   /usr/bin/mongod() [0xe17cb9]   /lib64/libpthread.so.0(+0x7851) [0x7f57205f2851]   /lib64/libc.so.6(clone+0x6d) [0x7f571f99811d]  Tue May  7 12:37:57.155 [rsSync]    ***aborting after fassert() failure      Tue May  7 12:37:57.155 Got signal: 6 (Aborted).    Tue May  7 12:37:57.159 Backtrace:  0xdcf361 0x6cf729 0x7f571f8e2920 0x7f571f8e28a5 0x7f571f8e4085 0xd8f10e 0xc03b0f 0xc21811 0xc218ad 0xc21b7c 0xe17cb9 0x7f57205f2851 0x7f571f99811d   /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xdcf361]   /usr/bin/mongod(_ZN5mongo10abruptQuitEi+0x399) [0x6cf729]   /lib64/libc.so.6(+0x32920) [0x7f571f8e2920]   /lib64/libc.so.6(gsignal+0x35) [0x7f571f8e28a5]   /lib64/libc.so.6(abort+0x175) [0x7f571f8e4085]   /usr/bin/mongod(_ZN5mongo13fassertFailedEi+0xde) [0xd8f10e]   /usr/bin/mongod(_ZN5mongo11ReplSetImpl17syncDoInitialSyncEv+0x6f) [0xc03b0f]   /usr/bin/mongod(_ZN5mongo11ReplSetImpl11_syncThreadEv+0x71) [0xc21811]   /usr/bin/mongod(_ZN5mongo11ReplSetImpl10syncThreadEv+0x2d) [0xc218ad]   /usr/bin/mongod(_ZN5mongo15startSyncThreadEv+0x6c) [0xc21b7c]   /usr/bin/mongod() [0xe17cb9]   /lib64/libpthread.so.0(+0x7851) [0x7f57205f2851]   /lib64/libc.so.6(clone+0x6d) [0x7f571f99811d]  

Any idea of why this may be happening? How can I make this server sync and work? My last surviving server is now running as secondary, is there a way to make it primary for a while to get the data out of it?

Thanks in advance!
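To answer the last part: with only one healthy member left, the set has no voting majority, so the survivor stays secondary. You can force it to become primary by reconfiguring the set down to just that member. A sketch from the mongo shell on the surviving node (the member index is an assumption — check `rs.conf()` first):

```javascript
// Run in the mongo shell on the surviving member.
cfg = rs.conf()
cfg.members = [cfg.members[0]]   // keep only the healthy member
rs.reconfig(cfg, {force: true})  // force is required without a majority
```

The repeating fatal assertion during initial sync is worth searching the MongoDB issue tracker for your exact version; resyncing from a corrupt source will keep failing the same way.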

MySQL server crashed.

Posted: 21 Jun 2013 03:35 PM PDT

Help! I managed to crash MySQL last night. I am on a Mac using the native version that came with Mountain Lion. I was upgrading from 5.5 to 5.6. I have followed instructions in this forum to delete the installation, but trying to re-install 5.5 says that there is a newer version and won't install. Trying to install 5.6 fails. I found this error in the console:

4/21/13 10:16:56.000 PM kernel[0]: exec of /Volumes/mysql-5.6.11-osx10.7-x86/mysql-5.6.11  osx10.7-x86.pkg/Contents/Resources/preflight denied since it was quarantined by TextEdit    and created without user consent, qtn-flags was 0x00000006  

Help me please ?? I am stuck and in a world of hurt and despair.
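The console message itself points at the cause: OS X Gatekeeper quarantined the installer's scripts. Clearing the quarantine attribute on the mounted package usually lets the install proceed (path taken from the error message; adjust to match yours):

```shell
xattr -r -d com.apple.quarantine /Volumes/mysql-5.6.11-osx10.7-x86
```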

SQL Server replication conflicts after migration from 2000 to 2008

Posted: 21 Jun 2013 06:35 PM PDT

I got a suggestion over at Stack Overflow to post here... grateful for any and all help.

Please bear with me, I think this might take a while to explain. For many years now my company has hosted a solution for a client involving a complex web application with a smaller mobile solution: IIS 6 for the web app, SQL Server 2000 on its own server, and a Visual Studio 2005 Pocket PC app replicating with SQL Server via merge replication. This whole time the mobile solution has been very solid and did not require many updates, so we have replicated with sscesa20.dll the entire time.

We recently migrated this entire solution as follow:

  • Web Box - New Win Server 2008 R2 running IIS 7.5
  • SQL Server Box - New Win Server 2008 R2 running SQL Server 2008
  • Mobile app - small updates converted to Visual Studio 2008 and Windows for Mobile 5

The new web box received the 64 bit version of SQL Server Compact 3.5 tools and we now call sqlcesa35.dll from the mobile device to perform merge replication.

The basic idea of the entire process is that mobile field users get assigned safety inspections to perform on buildings. When a facility in the system needs an inspection an inspection record is created via the web app in the DB. A status flag is set such that the HOST_NAME() is utilized to make sure only records for a given inspector with this particular status will let them show up on their mobile device. The user can synch multiple times in the field sending their data up to the SQL Server/web app and receive more inspections down or other updates such as look up table data...typical merge replication here and has been working great for years. Once the field user changes the status of the inspection, it will travel from mobile device to SQL Server database and be removed from their iPaq. The inspection has additional work flow on the web app from here on out.

Now on to the problem. We migrated everything, publishing the exact same subset of tables with the same joins/filters. As far as I can tell, the publication settings are the same. However, when a user gets a new inspection down to the handheld for the very first time, enters data, then synchronizes back to the database, every row has a conflict. Since we have default conflict resolution, the publisher wins and the data collected in the field is lost. The inspection now looks blank just as it did when it first came down to the mobile device. If the user syncs again, with or without changes on the mobile (subscriber), all is well. Any future changes from the mobile device are intact.

It is as if the web/db data is newer than the handheld data. I am 100% sure it is not. I have looked at table triggers, web app logic, etc. We were very careful not to include any application changes to the DB/web app/mobile app with respect to data manipulation during this migration.

Here is a summary of the order of operation:

New row created in the database >> Mobile user receives data >> mobile user updates data >> synchronizes - data is lost. Conflicts show up for all data lost.

From here on out any additional mobile changes are captured. Merge replication works in both directions flawlessly.

Thanks for taking the time to read please help. I am stuck after 3 days.

How to connect to a Database made by Oracle SQL Database?

Posted: 21 Jun 2013 05:35 PM PDT

So I am fairly new at this, so if you could keep that in mind in your answers, it would be much appreciated. I installed Oracle SQL Database on my Windows PC. It came in two zip files. I installed it and the online portion of it works fine. I can login with Username: sys and Password: **. What I am trying to do is connect to this newly created database on another computer through SQL Developer. I have read that in order to do this, you need to change the hostname of the Database from "localhost" to an IP Address. How do I do that and is there anything else I need to do to make this work?

I also found this LINK. Is this something I should do? I do not have a Domain though.

listener.ora

# listener.ora Network Configuration File:    C:\app\hyderz\product\11.2.0\dbhome_2\network\admin\listener.ora  # Generated by Oracle configuration tools.    SID_LIST_LISTENER =  (SID_LIST =   (SID_DESC =     (SID_NAME = CLRExtProc)     (ORACLE_HOME = C:\app\hyderz\product\11.2.0\dbhome_2)     (PROGRAM = extproc)     (ENVS = "EXTPROC_DLLS=ONLY:C:\app\hyderz\product\11.2.0\dbhome_2\bin\oraclr11.dll")   )  )    LISTENER =  (DESCRIPTION_LIST =    (DESCRIPTION =      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1522))      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1522))    )  )    ADR_BASE_LISTENER = C:\app\hyderz  

tnsnames.ora

# tnsnames.ora Network Configuration File:   C:\app\hyderz\product\11.2.0\dbhome_2\network\admin\tnsnames.ora  # Generated by Oracle configuration tools.    LISTENER_ORCL =    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1522))      ORACLR_CONNECTION_DATA =  (DESCRIPTION =  (ADDRESS_LIST =    (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1522))  )  (CONNECT_DATA =    (SID = CLRExtProc)    (PRESENTATION = RO)  )  )    ORCL =  (DESCRIPTION =  (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1522))  (CONNECT_DATA =    (SERVER = DEDICATED)    (SERVICE_NAME = orcl)  )  )  
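The only change needed on the server side is replacing `localhost` with the machine's real hostname or IP address in both files, then restarting the listener. A sketch of the edited `listener.ora` entry (192.168.1.10 is a placeholder for your server's address):

```
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1522))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.10)(PORT = 1522))
    )
  )
```

After editing (and making the matching change in `tnsnames.ora`), run `lsnrctl stop` and `lsnrctl start`, open port 1522 in the Windows firewall, and point SQL Developer on the other computer at that host/port with service name `orcl`.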

Backup plan for MySQL NDB cluster databse not innoDB

Posted: 21 Jun 2013 01:34 PM PDT

I have a Database which will grow more than 250GB all data is in NDB engine(2 datanodes) and no other mysql engine used for data store.

  • What are the best approaches or plans for backing up a MySQL Cluster NDB database?

  • Does MySQL Enterprise Backup (part of MySQL Cluster CGE) or any other tool back up NDB?

  • Can a beginner like me do these NDB backups easily, or does it need specialised skills?

Kind regards,
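NDB has its own online backup built into the management client — MySQL Enterprise Backup is for InnoDB tables and does not back up NDB. A minimal sketch:

```shell
# Take an online backup of all data nodes via the management node:
ndb_mgm -e "START BACKUP"
# Each data node writes to its BACKUP/BACKUP-<id> directory;
# restore with ndb_restore. Schema and privileges stored outside NDB
# can be dumped separately with mysqldump --no-data if needed.
```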

Easiest Way to Move 100 Databases

Posted: 21 Jun 2013 09:10 PM PDT

I need to move about 150 databases from one server to another server.

  1. SQL Server 2008 Web Edition
  2. SQL Server 2012 Preview (in a different datacenter - East Coast Azure)

I was planning on moving them one at a time using RedGate Packager, however this will take a while.

Is there a faster and easier way?
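A hedged PowerShell sketch of the usual bulk approach — loop over `sys.databases` and back each one up, then restore each file on the other side (server and share names are placeholders). Whether the SQL Server 2012 preview in Azure will accept 2008 backups is a separate thing to verify first on one database:

```powershell
# Back up every user database on the source to a file share.
$dbs = Invoke-Sqlcmd -ServerInstance "SourceServer" `
    -Query "SELECT name FROM sys.databases WHERE database_id > 4"
foreach ($db in $dbs) {
    $q = "BACKUP DATABASE [$($db.name)] TO DISK = N'\\share\$($db.name).bak' WITH INIT"
    Invoke-Sqlcmd -ServerInstance "SourceServer" -Query $q
}
```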

How to optimize a log process in MySQL?

Posted: 21 Jun 2013 02:34 PM PDT

In my project, I have about 100.000 users and can't control their behavior. Now, what I would like to do is log their activity in a certain task. Every activity, is one record which includes columns like user_id and some tag_id's.

The problem I have, is that these tasks in some cases can go up to 1.000.000 per year per user. So if I would store all these activities in one table. that would obviously become very big (=slow).

What is best to do here? Create a single table per user (so I have 100.000 log tables) or put all these activities in one table? And what kind of engine should I use?

One important thing to note: although I simplified the situation a bit, this is not a plain append-only log; users can also change values in these tables (like the tag_id's).

MySQL: Lock wait timeout exceeded

Posted: 21 Jun 2013 08:35 PM PDT

I have a developer that has been trying to alter a large table (~60M rows). Via Liquibase/JDBC they're running

ALTER TABLE foo DROP FOREIGN KEY fk_foo_1;  

Today while it was running I checked in on it periodically; everything looked normal, the query was running, in state "copying to tmp table", I could see the temp table on disk getting larger and larger (still plenty of free space). I could see other processes blocked waiting for this table to be unlocked. Finally after about 4.5 hours, they got the "lock wait timeout exceeded; try restarting transaction" error. This is actually the 2nd time they've tried, and it seems to fail right about when I would expect it to complete.

innodb_lock_wait_timeout is set to the default 50, I can't imagine it would run for so long to be affected by this. No errors logged, no deadlocks or other weirdness seen in 'show engine innodb status'. Can anyone help me with other ideas? I'm fairly stumped on this one.

thanks
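Since `innodb_lock_wait_timeout` is a per-session variable, the session running the ALTER can raise it for itself without touching the server default. A sketch (28800 seconds = 8 hours is an arbitrary headroom value):

```sql
SET SESSION innodb_lock_wait_timeout = 28800;
ALTER TABLE foo DROP FOREIGN KEY fk_foo_1;
```

One plausible explanation for the failure arriving right at the expected finish: the table copy completes, but the final swap has to wait on the locks held by those blocked readers, and that wait is what times out. Running the ALTER from a plain mysql client during a quiet window avoids both the JDBC layer and the contention.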

postgresql: how do I dump and restore roles for a cluster?

Posted: 21 Jun 2013 12:14 PM PDT

Where are roles stored in a cluster, and how do I dump them?

I did a pg_dump of a db and then loaded it into a different cluster, but I get a lot of these errors:

psql:mydump.sql:3621: ERROR:  role "myrole" does not exist   

So apparently the dump of my db does not include roles. I tried dumping the 'postgres' db, but I don't see the roles there either.

Do I need to use pg_dumpall --roles-only ?

Postgresql versions 8.4.8 and 9.1.4 OS: Ubuntu 11.04 Natty
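Yes — roles live in the cluster-wide catalogs (`pg_authid`), not inside any single database, which is why neither your db dump nor the `postgres` database contains them. `pg_dumpall --roles-only` is exactly the tool:

```shell
# On the old cluster: dump just the roles (cluster-wide objects).
pg_dumpall --roles-only -f roles.sql
# On the new cluster: replay them before restoring the database dump.
psql -d postgres -f roles.sql
```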

Converting RTF in a text column to plain text in bulk

Posted: 21 Jun 2013 05:29 PM PDT

I have a legacy system with about 10 million rows in a table. In that table there is a column of type text, most of them are standard text but about 500 thousand rows have RTF markup in them. I need to convert the RTF formatted text in to plain text.

My current method is I have a C# program that loads the query in to a DataTable using a SqlDataAdapter and uses the winforms RichTextBox control to do the conversion.

void bw_DoWork(object sender, DoWorkEventArgs e)  {      count = 0;        rtbRTFToPlain = new RichTextBox();        using (SqlDataAdapter ada = new SqlDataAdapter("select note_guid, notes from client_notes", Globals.SQLConnectionString))      using(SqlCommandBuilder cmb = new SqlCommandBuilder(ada))      {          DataTable dt = new DataTable();          ada.UpdateCommand = cmb.GetUpdateCommand();            ada.Fill(dt);            int reportEvery = dt.Rows.Count / 100;          if (reportEvery == 0)              reportEvery = 1;          foreach (DataRow row in dt.Rows)          {              if (count % reportEvery == 0)                  bw.ReportProgress(count / reportEvery);                try              {                  if (((string)row["notes"]).TrimStart().StartsWith("{") == true)                  {                      rtbRTFToPlain.Rtf = (string)row["notes"];                      row["notes"] = rtbRTFToPlain.Text;                  }              }              catch              {              }                count++;            }          bw.ReportProgress(100);            this.Invoke(new Action(() =>               {                  this.ControlBox = false;                  this.Text = "Updating database please wait";              }));          ada.Update(dt);      }  }  

This is working great for small tables, however this is the first time I had to run it on a table with such a large data-set (some of the rtf files can be several megabytes in size with embedded pictures) and I am getting OutOfMemory errors with my C# program.

I know I can chunk my query down in to a smaller batches, but I wanted to see if there is a better way that I was missing to strip off RTF formatting.

Should I just do the same thing as my current solution but only query out data smaller chunks at a time, or is there a better way to do this?
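Stripping RTF reliably pretty much requires an RTF engine, so chunking the existing RichTextBox approach is reasonable. Rather than re-reading everything, keyset paging on the primary key keeps each batch cheap. A hedged T-SQL sketch of the per-batch query (assumes `note_guid` is a key you can order by):

```sql
-- Fetch the next batch after the last key processed; the C# loop keeps
-- @last_guid from the previous batch and calls ada.Update() per chunk.
SELECT TOP (1000) note_guid, notes
FROM client_notes
WHERE note_guid > @last_guid
ORDER BY note_guid;
```

Calling `rtbRTFToPlain.Clear()` between rows also helps, since the RichTextBox keeps undo history that grows with each multi-megabyte document.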

Designing a database for a site that stores content from multiple services?

Posted: 21 Jun 2013 04:35 PM PDT

I'm building a site that implements David Allen's Getting Things Done that pulls in your email, Facebook newsfeed, tweets from those you follow on Twitter, and more services are planned. The problem is that I'm not a DBA, and I'm not sure how to design the database so that as I add features to the site, I won't have to artificially corrupt people's raw data for the purposes of storing it (for example, I want to add the ability to get RSS feeds sometime in the future, but I'm not sure how I'd do that without making a mess).

I've put down my initial ideas using DBDesigner 4, below, you'll find the diagram and the SQL.

A few notes to help clarify things.

  • The Accounts table is for storing authentication tokens and such for facebook, twitter, and such.
  • The messages table is incomplete.
  • The password fields in emailconfiguration and users are encrypted, users with a one-way hash, emailconfiguration with a two-way.
  • I'm using a MySQL database using the InnoDB storage engine on Amazon RDS.
  • Each project may have one context associated with it.
  • Each message may have a project and context, but it's not required.
  • The imap, smtp, and pop3 tables exist to remove duplication within email configuration.
  • queries to this database are generated by Korma, a clojure library.

Can someone please point me in the right direction? I'd also be willing to look at using a NoSQL database if suggested. Thank you for your time and consideration.

site database schema

Here's the SQL create script just in case anyone wants to see it.

CREATE TABLE Pop3 (    domain VARCHAR NOT NULL,    host VARCHAR NULL,    port INTEGER UNSIGNED NULL,    ssl BOOL NULL,    PRIMARY KEY(domain)  )  TYPE=InnoDB;    CREATE TABLE Imap (    domain VARCHAR NOT NULL,    Host VARCHAR NULL,    port INTEGER UNSIGNED NULL,    ssl BOOL NULL,    PRIMARY KEY(domain)  )  TYPE=InnoDB;    CREATE TABLE users (    Username VARCHAR NOT NULL AUTO_INCREMENT,    email VARCHAR NULL,    password_2 VARCHAR NULL,    activation VARCHAR NULL,    is_active BOOL NULL,    PRIMARY KEY(Username)  )  TYPE=InnoDB;    CREATE TABLE smtp (    domain VARCHAR NOT NULL,    host VARCHAR NULL,    port INTEGER UNSIGNED NULL,    ssl BOOL NULL,    PRIMARY KEY(domain)  )  TYPE=InnoDB;    CREATE TABLE projects (    id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,    users_Username VARCHAR NOT NULL,    name VARCHAR NULL,    description TEXT NULL,    context INTEGER UNSIGNED NULL,    PRIMARY KEY(id, users_Username),    INDEX projects_FKIndex1(users_Username),    FOREIGN KEY(users_Username)      REFERENCES users(Username)        ON DELETE NO ACTION        ON UPDATE NO ACTION  )  TYPE=InnoDB;    -- ------------------------------------------------------------  -- This is the table where access info for facebook, twitter, and others is stored.  
-- ------------------------------------------------------------    CREATE TABLE Accountsi (    id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,    users_Username VARCHAR NOT NULL,    owner INTEGER UNSIGNED NULL,    service VARCHAR NULL,    username VARCHAR NULL,    send INTEGER UNSIGNED NULL,    receive INTEGER UNSIGNED NULL,    info TEXT NULL,    PRIMARY KEY(id, users_Username),    INDEX Accountsi_FKIndex1(users_Username),    FOREIGN KEY(users_Username)      REFERENCES users(Username)        ON DELETE NO ACTION        ON UPDATE NO ACTION  )  TYPE=InnoDB;    CREATE TABLE EmailConfiguration (    id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,    users_Username VARCHAR NOT NULL,    owner INTEGER UNSIGNED NOT NULL,    address VARCHAR NULL,    psswd VARCHAR BINARY NULL,    domain VARCHAR NULL,    PRIMARY KEY(id, users_Username),    INDEX EmailConfiguration_FKIndex1(users_Username),    FOREIGN KEY(users_Username)      REFERENCES users(Username)        ON DELETE NO ACTION        ON UPDATE NO ACTION  )  TYPE=InnoDB;    CREATE TABLE Messages (    id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,    users_Username VARCHAR NOT NULL,    message_id VARCHAR NULL,    user_id VARCHAR NULL,    account INTEGER UNSIGNED NULL,    service VARCHAR NULL,    project INTEGER UNSIGNED NOT NULL,    context INTEGER UNSIGNED NOT NULL,    PRIMARY KEY(id, users_Username),    INDEX Messages_FKIndex1(users_Username),    FOREIGN KEY(users_Username)      REFERENCES users(Username)        ON DELETE NO ACTION        ON UPDATE NO ACTION  )  TYPE=InnoDB;    CREATE TABLE context (    id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,    projects_id INTEGER UNSIGNED NOT NULL,    projects_users_Username VARCHAR NOT NULL,    users_Username VARCHAR NOT NULL,    name VARCHAR NULL,    description TEXT NULL,    PRIMARY KEY(id, projects_id, projects_users_Username, users_Username),    INDEX context_FKIndex1(projects_id, projects_users_Username),    INDEX context_FKIndex2(users_Username),    FOREIGN KEY(projects_id, 
projects_users_Username)      REFERENCES projects(id, users_Username)        ON DELETE NO ACTION        ON UPDATE NO ACTION,    FOREIGN KEY(users_Username)      REFERENCES users(Username)        ON DELETE NO ACTION        ON UPDATE NO ACTION  )  TYPE=InnoDB;  

How to free up disk space? which logs/directories to clean? [ORACLE 11g]

Posted: 21 Jun 2013 11:34 AM PDT

I want to free up disk space on my Linux machine. I've drilled down the space usage and found that the following directories have a big size:

/u01/app/11.2.0/grid/cv/log  /u01/app/11.2.0/grid/log/diag/tnslsnr/r1n1/listener_scan2/alert (Contains xml files)  /u01/app/11.2.0/grid/rdbms/audit(Contains .aud files)  /home/oracle/oradiag_oracle/diag/clients/user_oracle/host_XXXXXXXXXX/alert(Contains xml files)  /u01/app/oracle/diag/rdbms/crimesys/crimesys1/alert (Contains xml files)  

Can I delete contents from these directories? Note: I mean contents and not directories

Thanks!
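Yes, the contents are diagnostic history and are safe to remove once you no longer need them for troubleshooting or (for the .aud files) audit retention. For the ADR-managed directories, `adrci` can purge by age instead of hand-deleting. A sketch (age is in minutes; 43200 = 30 days is an arbitrary retention, and you may need `set home` for each ADR home listed):

```shell
adrci exec="show homes"                       # list the ADR homes first
adrci exec="purge -age 43200 -type alert"     # old alert XML files
adrci exec="purge -age 43200 -type trace"
# The .aud files under rdbms/audit are not ADR-managed; archive or
# delete them with find/rm once your audit retention policy allows.
```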

[SQL Server] Stored procedures

[SQL Server] Stored procedures


Stored procedures

Posted: 21 Jun 2013 01:47 AM PDT

I'm writing a stored procedure to update an employee in the employee table. Each employee in the table has 17 fields. The data is coming from a web form where 3 fields are used to identify the employee. If any other fields contain data, the employee needs to be updated, but only for those fields. In my .net page, I concatenate together the SQL string based upon which form fields contain data, i.e.:

sql = "UPDATE notitiae.dbo.tblpeople SET"
If Request.Form("title") <> "" Then
    sql = sql & " notitiae.dbo.tblPeople.Title = @title"
End If
...

How do I duplicate this in my stored procedure? Do I create a local variable like @SQL and concatenate the string the same way? How then would I tell the procedure to process the string in the variable @SQL?
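Two common approaches: build the string in a variable and run it with `sp_executesql`, or skip dynamic SQL entirely by passing NULL for untouched fields and letting `ISNULL` keep the current value. A hedged sketch of the second, simpler pattern (parameter and key-column names are illustrative — substitute your 3 identifying fields):

```sql
CREATE PROCEDURE dbo.UpdateEmployee
    @EmpKey1 VARCHAR(50),        -- the 3 fields that identify the employee
    @EmpKey2 VARCHAR(50),
    @EmpKey3 VARCHAR(50),
    @Title   VARCHAR(50) = NULL  -- NULL means "leave unchanged"
AS
BEGIN
    UPDATE notitiae.dbo.tblPeople
    SET Title = ISNULL(@Title, Title)   -- repeat for the other fields
    WHERE Key1 = @EmpKey1 AND Key2 = @EmpKey2 AND Key3 = @EmpKey3;
END
```

If you do build a string, `EXEC sp_executesql @SQL` is how you "process" the variable — but concatenating raw form values into it invites SQL injection, so keep the values parameterized either way.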

Changing the Time of a field under a certain condition

Posted: 21 Jun 2013 04:31 AM PDT

This stemmed from my previous post yesterday but is a completely different question. What I need to do: I need the DateEnd time of the first row in each ID group set to 0, and the DateStart time of the last row in the group set to 0, whenever the DateStart time is greater than the DateEnd time. The end result is so that I can find out how many hours are in the AM and PM. The number of IDs can vary but will always be different. Any help or suggestions would be appreciated.

[code="sql"]
--===== If the test table already exists, drop it
IF OBJECT_ID('TempDB..#mytable','U') IS NOT NULL DROP TABLE #mytable
IF OBJECT_ID('TempDB..#outputtable','U') IS NOT NULL DROP TABLE #outputtable

-------- Create test table ----------
CREATE TABLE #mytable
(
    DateStart DATETIME,
    DateEnd   DATETIME,
    ID        int
)

--------- Sample Data -------
INSERT INTO #mytable (DateStart, DateEnd, ID)
SELECT '2013-05-25 17:00:00.000', '2013-05-25 07:15:00.000', 879 UNION ALL
SELECT '2013-05-26 17:00:00.000', '2013-05-26 07:15:00.000', 879 UNION ALL
SELECT '2013-05-27 17:00:00.000', '2013-05-27 07:15:00.000', 879 UNION ALL
SELECT '2013-05-31 17:00:00.000', '2013-05-31 07:15:00.000', 880 UNION ALL
SELECT '2013-06-01 17:00:00.000', '2013-06-01 07:15:00.000', 880 UNION ALL
SELECT '2013-06-02 17:00:00.000', '2013-06-02 07:15:00.000', 880 UNION ALL
SELECT '2013-03-15 08:00:00.000', '2013-03-15 17:00:00.000', 1266 UNION ALL
SELECT '2013-03-16 08:00:00.000', '2013-03-16 17:00:00.000', 1266 UNION ALL
SELECT '2013-03-17 08:00:00.000', '2013-03-17 17:00:00.000', 1266

-------- Create desired output table ----------
CREATE TABLE #outputtable
(
    DateStart DATETIME,
    DateEnd   DATETIME,
    ID        int
)

--------- Output Data -------
INSERT INTO #outputtable (DateStart, DateEnd, ID)
SELECT '2013-05-25 17:00:00.000', '2013-05-25 00:00:00.000', 879 UNION ALL  -- the DateEnd time has changed to 0
SELECT '2013-05-26 17:00:00.000', '2013-05-26 07:15:00.000', 879 UNION ALL
SELECT '2013-05-27 00:00:00.000', '2013-05-27 07:15:00.000', 879 UNION ALL  -- the DateStart time has changed to 0
SELECT '2013-05-31 17:00:00.000', '2013-05-31 00:00:00.000', 880 UNION ALL  -- the DateEnd time has changed to 0
SELECT '2013-06-01 17:00:00.000', '2013-06-01 07:15:00.000', 880 UNION ALL
SELECT '2013-06-02 00:00:00.000', '2013-06-02 07:15:00.000', 880 UNION ALL  -- the DateStart time has changed to 0
SELECT '2013-03-15 08:00:00.000', '2013-03-15 17:00:00.000', 1266 UNION ALL -- this group is fine because the DateStart time is less than the DateEnd time
SELECT '2013-03-16 08:00:00.000', '2013-03-16 17:00:00.000', 1266 UNION ALL
SELECT '2013-03-17 08:00:00.000', '2013-03-17 17:00:00.000', 1266

SELECT * FROM #mytable
SELECT * FROM #outputtable

/** What I need to do: I need the DateEnd time of the first row of each ID group
    set to 0, and the DateStart time of the last row of the group set to 0,
    if the DateStart time is greater than the DateEnd time **/
[/code]
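One possible approach (a sketch, SQL Server 2008+, not tested against every edge case): number the rows in each ID group from both ends with ROW_NUMBER(), then zero the time portion of the first row's DateEnd and the last row's DateStart whenever DateStart's time is greater than DateEnd's time:

```sql
;WITH numbered AS (
    SELECT DateStart, DateEnd, ID,
           ROW_NUMBER() OVER (PARTITION BY ID ORDER BY DateStart)      AS rn_first,
           ROW_NUMBER() OVER (PARTITION BY ID ORDER BY DateStart DESC) AS rn_last
    FROM #mytable
)
SELECT
    CASE WHEN rn_last = 1 AND CAST(DateStart AS TIME) > CAST(DateEnd AS TIME)
         THEN CAST(CAST(DateStart AS DATE) AS DATETIME) ELSE DateStart END AS DateStart,
    CASE WHEN rn_first = 1 AND CAST(DateStart AS TIME) > CAST(DateEnd AS TIME)
         THEN CAST(CAST(DateEnd AS DATE) AS DATETIME) ELSE DateEnd END     AS DateEnd,
    ID
FROM numbered
ORDER BY ID, DateStart;
```

The double CAST (datetime → date → datetime) is just a 2008-compatible way of setting the time to midnight.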

Can some take a look at my code and tell me if it is structured right.

Posted: 21 Jun 2013 03:00 AM PDT

I am having trouble where it starts with:

AND jomast.fcompany LIKE 'AT&T*', 'Centerpoint*', 'Cingular*', 'Dynis*', 'Global*', 'Sabre*', 'US*', 'Verizon*'

and down. SQL Server does not like the LIKE statement; I have tried everything. The bottom half is to go into the database and find all job BOMs that match AT&T, Centerpoint, etc., and if there is a BOM, it is to look at the fparent name: if it ends with '-10', AVG the fquantity used and place it in a new column named 'Casting-10', and so on down the line.

SELECT jomast.fpartno AS 'Bom Part No',
       jomast.fdescript AS 'Bom Desc',
       jomast.fmeasure AS 'U/M',
       inmast.fstdcost AS 'Unit Cost',
       inmast.flocate1 AS 'Bin Loc',
       inmast.fytdiss AS 'YTD Issued'
FROM m2mdata01.dbo.inboms inboms,
     m2mdata01.dbo.inmast inmast,
     m2mdata01.dbo.jomast jomast
WHERE inboms.identity_column = jomast.identity_column
  AND inmast.identity_column = inboms.identity_column
  AND inmast.identity_column = jomast.identity_column
  AND jomast.fcompany LIKE 'AT&T*', 'Centerpoint*', 'Cingular*', 'Dynis*', 'Global*', 'Sabre*', 'US*', 'Verizon*'
  AND inboms.fparent LIKE '%-10' (AVG(jomast.fquantity)) AS 'Casting-10'
  AND inboms.fparent LIKE '%-20' (AVG(jomast.fquantity)) AS 'Carpentry-20'
  AND inboms.fparent LIKE '%-30' (AVG(jomast.fquantity)) AS 'Conduit-30'
  AND inboms.fparent LIKE '%-40' (AVG(jomast.fquantity)) AS 'Electrical-40'
  AND inboms.fparent LIKE '%-50' (AVG(jomast.fquantity)) AS 'Mechanical-50'
  AND inboms.fparent LIKE '%-51' (AVG(jomast.fquantity)) AS 'Doors-51'
  AND inboms.fparent LIKE '%-75' (AVG(jomast.fquantity)) AS 'Generators-75'
GROUP BY Bom Part No
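There are two separate problems here: T-SQL LIKE takes a single pattern (with % as the wildcard, not *), so a comma-separated list after LIKE is a syntax error and must become OR'd predicates; and the per-suffix averages cannot live in the WHERE clause — they are conditional aggregates in the SELECT list. A hedged sketch of the corrected shape (abbreviated to two company patterns and two suffixes; extend both lists the same way):

```sql
SELECT jomast.fpartno AS [Bom Part No],
       AVG(CASE WHEN inboms.fparent LIKE '%-10' THEN jomast.fquantity END) AS [Casting-10],
       AVG(CASE WHEN inboms.fparent LIKE '%-20' THEN jomast.fquantity END) AS [Carpentry-20]
FROM m2mdata01.dbo.inboms inboms
JOIN m2mdata01.dbo.jomast jomast ON inboms.identity_column = jomast.identity_column
JOIN m2mdata01.dbo.inmast inmast ON inmast.identity_column = inboms.identity_column
WHERE (jomast.fcompany LIKE 'AT&T%' OR jomast.fcompany LIKE 'Centerpoint%')  -- etc.
GROUP BY jomast.fpartno;
```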

SQL Locks

Posted: 20 Jun 2013 10:58 PM PDT

Hello Masters, how can we count the total number of locks in SQL Server 2005/2008? Is there a query or option to check it?
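On 2005/2008 the lock information lives in the `sys.dm_tran_locks` DMV:

```sql
-- Total locks currently granted or waiting, instance-wide:
SELECT COUNT(*) AS total_locks FROM sys.dm_tran_locks;

-- Broken down by resource type and mode:
SELECT resource_type, request_mode, COUNT(*) AS cnt
FROM sys.dm_tran_locks
GROUP BY resource_type, request_mode
ORDER BY cnt DESC;
```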

Reporting Services: Please help

Posted: 20 Jun 2013 06:35 PM PDT

Hi, I am using SSRS. I have a report that runs hourly every day. I need to reduce the hours so it runs from 12 AM to 6 PM and does not run on weekends. How do I go about doing that?

[Articles] A Release from Data

[Articles] A Release from Data


A Release from Data

Posted: 20 Jun 2013 11:00 PM PDT

What do you do when work is too frustrating? Steve Jones talks about finding a way to release stress and frustration.

Search This Blog