Wednesday, June 5, 2013

[SQL Server 2008 issues] Combining SQL versions on Cluster

Combining SQL versions on Cluster

Posted: 04 Jun 2013 07:07 PM PDT

Hi all, is it possible to combine SQL Server 2008 R2 and SQL Server 2012 on the same cluster? We are starting a SharePoint 2013 project and the developers want SQL Server 2012, but the cluster is currently 2008 R2 and we don't want to upgrade the existing instances to 2012. I believe it should be possible, since all binaries are located in different directories. The only concern is the shared features, which have to be upgraded, but I believe they are backward compatible. I hope someone here has experience with this or can offer some insight. Thanks / Bo

Import Data replaces NULL with 0

Posted: 04 Jun 2013 06:56 PM PDT

I am using SQL Server 2008 R2 Express and running into a problem when I use the Import Data feature in SSMS to copy data from one database to another. In the wizard I chose the "Enable identity insert" option so that identity values are copied, and they import correctly. But when I compared the source and destination databases afterwards, I found differences: all NULL values in the table had been replaced by 0. How can I avoid this? Is there an option in the Import Data wizard to copy the values exactly as they are? Please help.

How to get a list of replicated articles ?

Posted: 04 Jun 2013 01:49 AM PDT

I need to programmatically get a list of all replicated articles for a given database. I tried SELECT * FROM sys.dm_repl_articles, but it returned only 2 of the 8 articles. In the Replication Properties / Articles GUI I see 8 tables checked as articles. What other sys.* view or sp_* procedure can I use? Thanks
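For transactional/snapshot publications, the article list also lives in the publication database itself; a sketch (the publication name is a placeholder):

[code="sql"]
-- run in the published database
SELECT p.name AS publication, a.name AS article, a.dest_table
FROM dbo.syspublications p
JOIN dbo.sysarticles     a ON a.pubid = p.pubid;

-- or, per publication
EXEC sp_helparticle @publication = N'MyPublication';
[/code]

For merge replication the equivalent tables are sysmergepublications and sysmergearticles.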

Performace issue while updating records and trigger on table

Posted: 04 Jun 2013 03:57 PM PDT

Hi All, I am having a performance issue when updating records in a SQL Server table. I created the trigger below to update related tables when the status in this table is updated/changed.

[code="sql"]
USE [EMEA_SERVICEMAX]
GO
/****** Object: Trigger [Servicemax].[ERP_ACCOUNT_DATE_UPDATE] Script Date: 06/05/2013 10:19:56 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER [Servicemax].[ERP_ACCOUNT_DATE_UPDATE]
ON [EMEA_SERVICEMAX].[Servicemax].[ERP_ACCOUNT_STG]
FOR UPDATE
AS
BEGIN
    DECLARE @nOldValue varchar(20),
            @nNewValue varchar(20),
            @record_dt date,
            @CUST_NUM varchar(20),
            @SOURCE_SERVER varchar(15),
            @COMPANY_NO int

    SELECT @nOldValue     = b.CUSTOMER_STATUS,
           @nNewValue     = a.CUSTOMER_STATUS,
           @record_dt     = a.record_insert_dt,
           @CUST_NUM      = a.customer_number,
           @SOURCE_SERVER = a.SOURCE_SERVER,
           @COMPANY_NO    = a.COMPANY_NO
    FROM inserted a, deleted b

    IF @nNewValue = @nOldValue
        return
    else
    BEGIN
        update EMEA_SERVICEMAX.Servicemax.SALES_ORDERS_STG
        set record_insert_dt = @record_dt
        where customer_number = @CUST_NUM
          and SOURCE_SERVER = @SOURCE_SERVER
          and COMPANY_NO = @COMPANY_NO

        update EMEA_SERVICEMAX.Servicemax.INSTALLATION_STG
        set record_insert_dt = @record_dt
        where LOCATION_CUSTOMER_NO = @CUST_NUM
          and SOURCE_SERVER = @SOURCE_SERVER
          and COMPANY_NO = @COMPANY_NO

        update EMEA_SERVICEMAX.Servicemax.ERP_ACCOUNT_PROJECTS_STG
        set record_insert_dt = @record_dt
        where customer_number = @CUST_NUM
          and SOURCE_SERVER = @SOURCE_SERVER
          and COMPANY_NO = @COMPANY_NO

        update EMEA_SERVICEMAX.Servicemax.SERVICE_CONTRACTS_STG
        set record_insert_dt = @record_dt
        where customer_number = @CUST_NUM
          and SOURCE_SERVER = @SOURCE_SERVER
          and COMPANY_NO = @COMPANY_NO

        update EMEA_SERVICEMAX.Servicemax.SERVICE_ORDER_STG
        set record_insert_dt = @record_dt
        where customer_number = @CUST_NUM
          and SOURCE_SERVER = @SOURCE_SERVER
          and COMPANY_NO = @COMPANY_NO
    END
END
GO
[/code]

Any help on this would be appreciated. Thanks, Nilesh
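As written, the trigger reads inserted/deleted into scalar variables, so it only behaves correctly for single-row updates and drives every related-table update from those variables. A set-based sketch for one of the target tables (column names are taken from the posted trigger; it assumes customer_number/SOURCE_SERVER/COMPANY_NO identify the staging rows):

[code="sql"]
UPDATE s
SET    s.record_insert_dt = i.record_insert_dt
FROM   EMEA_SERVICEMAX.Servicemax.SALES_ORDERS_STG AS s
JOIN   inserted AS i ON  s.customer_number = i.customer_number
                     AND s.SOURCE_SERVER   = i.SOURCE_SERVER
                     AND s.COMPANY_NO      = i.COMPANY_NO
JOIN   deleted  AS d ON  d.customer_number = i.customer_number
                     AND d.SOURCE_SERVER   = i.SOURCE_SERVER
                     AND d.COMPANY_NO      = i.COMPANY_NO
WHERE  i.CUSTOMER_STATUS <> d.CUSTOMER_STATUS;  -- only rows whose status actually changed
[/code]

The same pattern would repeat for the other four staging tables.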

Using the same partition function and scheme for multiple tables

Posted: 04 Jun 2013 02:58 PM PDT

I have a database where most of the tables have an integer column, say "SetId", denoting a batch of data. When a new SetId is created, all of these tables receive data for that SetId. SetId governs data retention and purging, and it is also part of the filtering in many user queries against these tables. I am thinking of partitioning all of these tables on the SetId column, which I am fairly sure would benefit performance and data maintenance. My question is: should I use a common partition function and partition scheme to partition all the tables on SetId, or is it more advisable to use a separate partition function and scheme for each table?
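For reference, a single function and scheme can be shared by any number of tables; a minimal sketch (boundary values and table names are placeholders):

[code="sql"]
CREATE PARTITION FUNCTION pf_SetId (int)
AS RANGE RIGHT FOR VALUES (1, 2, 3, 4);

CREATE PARTITION SCHEME ps_SetId
AS PARTITION pf_SetId ALL TO ([PRIMARY]);

CREATE TABLE dbo.FactA (SetId int NOT NULL, Amount money NULL) ON ps_SetId (SetId);
CREATE TABLE dbo.FactB (SetId int NOT NULL, Qty    int   NULL) ON ps_SetId (SetId);
[/code]

Note that tables sharing one function are all affected together by a SPLIT/MERGE of that function, whereas separate functions let each table's boundaries evolve independently.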

Query Help

Posted: 04 Jun 2013 07:29 AM PDT

Hello, I need some help developing a query.

[code="sql"]
CREATE TABLE #TEMP
(
    cStudentID varchar(09),
    iSchoolCode int,
    cGradeCode char(02),
    dtEntryDate datetime,
    dtWithdrawDate datetime,
    iSchoolYearCode int
)
INSERT INTO #TEMP VALUES ('001223436',301,'10','2004-09-02 00:00:00.000','2005-06-16 00:00:00.000',2003)
INSERT INTO #TEMP VALUES ('001223436',301,'11','2005-09-01 00:00:00.000','2006-06-15 00:00:00.000',2004)
INSERT INTO #TEMP VALUES ('001223436',301,'12','2006-08-31 00:00:00.000','2007-06-12 00:00:00.000',2005)
INSERT INTO #TEMP VALUES ('001223436',301,'10','2007-08-30 00:00:00.000','2008-06-11 00:00:00.000',2006)
INSERT INTO #TEMP VALUES ('001223436',301,'11','2008-08-28 00:00:00.000','2009-06-12 00:00:00.000',2007)
INSERT INTO #TEMP VALUES ('001223436',301,'12','2009-09-03 00:00:00.000','2010-06-16 00:00:00.000',2008)
INSERT INTO #TEMP VALUES ('001223436',301,'12','2010-09-02 00:00:00.000','2011-06-13 00:00:00.000',2009)
INSERT INTO #TEMP VALUES ('001223436',301,'12','2011-06-14 00:00:00.000','2011-06-15 00:00:00.000',2010)
INSERT INTO #TEMP VALUES ('001223436',301,'99','2011-09-01 00:00:00.000','2012-06-13 00:00:00.000',2011)
[/code]

If a student's GradeCode is repeated in a later school year, we need to set a flag for it, so the expected result is:

[code="plain"]
StudentID  iSchoolYearCode  IsRepeat
001223436  2003             0
001223436  2004             0
001223436  2005             0
001223436  2006             1
001223436  2007             1
001223436  2008             1
001223436  2009             1
001223436  2010             1
001223436  2011             0
[/code]

Please help me develop this. If anyone can do it without a CTE, that would be greatly appreciated, because I need to use it inside some logic in my script. Thanks
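One possible approach without a CTE, assuming "repeat" means the same grade code appears in any earlier school year for the same student (this reproduces the expected output above):

[code="sql"]
SELECT t.cStudentID AS StudentID,
       t.iSchoolYearCode,
       CASE WHEN EXISTS (SELECT 1
                         FROM #TEMP p
                         WHERE p.cStudentID      = t.cStudentID
                           AND p.cGradeCode      = t.cGradeCode
                           AND p.iSchoolYearCode < t.iSchoolYearCode)
            THEN 1 ELSE 0 END AS IsRepeat
FROM #TEMP t
ORDER BY t.cStudentID, t.iSchoolYearCode;
[/code]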

How to view individual records?

Posted: 04 Jun 2013 09:22 AM PDT

As a newbie, I have a question which I hope isn't too stupid. In SSMS, I know I can view information about a database via Object Explorer | Database Engine instance | database (right-click) | Properties. But isn't there a way to view individual records in the database? Otherwise, how do I even know what fields are in the individual records? Without knowing that much, how can I practice writing queries?
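For reference, a couple of starting points (the table name is a placeholder): right-click a table in Object Explorer and choose "Select Top 1000 Rows", or run something like this in a query window:

[code="sql"]
SELECT TOP (100) * FROM dbo.YourTable;   -- look at some sample rows
EXEC sp_help 'dbo.YourTable';            -- list columns, data types, indexes
[/code]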

How to calculate space needed to modify a column in a table?

Posted: 30 May 2013 01:39 PM PDT

I have a situation that's driving me insane. The vendors of our financial processing system asked us to widen a column in a table from char(15) to char(19) to allow extra room for a field needed by the web application. Typically this type of change wouldn't be a big deal; however, this table has about 90 million rows and a data footprint of about 214GB. Now, unless I'm missing something (which is very possible, as I'm no expert at anything), issuing an ALTER COLUMN like this via SSMS involves creating a new "temp" table, transferring the data from the existing table to the new table, creating all non-clustered indexes on the new table, dropping the old table, and renaming the new one. Consider the command:

[code="sql"]
ALTER TABLE MyProblemChild
ALTER COLUMN [FleetCardVehicleNo] char (19) NULL
GO
[/code]

I ran this on a testing server with attached RAID storage; it took about 1.5 hours and the log file was pre-sized at 230GB, which is not too bad. When I attempted to run the same process on our production server (which has the data, index, and log files on a SAN, each residing on their own respective LUNs), the process ran for over 5 hours and grew the log file to over 490GB... and it only stopped there because the LUN ran out of space, at which point I had to kill/rollback the entire thing! How can I calculate how much log file space will be needed to complete the operation successfully?
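One practical way to measure it is to rerun the ALTER on the test copy and watch how much log the open transaction uses and reserves while it runs; a sketch using sys.dm_tran_database_transactions (the database name is a placeholder):

[code="sql"]
SELECT st.session_id,
       dt.database_transaction_log_bytes_used     / 1048576.0 AS log_mb_used,
       dt.database_transaction_log_bytes_reserved / 1048576.0 AS log_mb_reserved
FROM sys.dm_tran_database_transactions dt
JOIN sys.dm_tran_session_transactions  st ON st.transaction_id = dt.transaction_id
WHERE dt.database_id = DB_ID('MyTestDb');
[/code]

Adding the used and reserved figures at the peak of the test run gives a rough floor for what the production log has to be able to accommodate.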

Create FirstName and LastName to Replace Existing FirstName and LastName

Posted: 04 Jun 2013 10:50 AM PDT

Hello everyone, I hope you are all having a very nice day. I want to change all the FirstName and LastName values in a table, mostly for demo security: creating fake names from the existing names. The names are stored in two separate columns and I need approximately 25,000 rows. There are names in the columns currently; I would just like to change them so that no record shows an actual person. I want to use real-looking names, not anything like FirstName1, FirstName2, etc. So Frank Smith would be changed to, say, Joe Jackson. I have already changed all the dates and numeric values; those were simple. I greatly appreciate any and all assistance, suggestions and comments. Thanks, Andrew SQLDBA
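A sketch of one way to do it: shuffle the existing first and last names independently so every row ends up with a real-looking but recombined name (the table and column names here are hypothetical):

[code="sql"]
;WITH f AS (SELECT FirstName, ROW_NUMBER() OVER (ORDER BY NEWID())  AS rn FROM dbo.Person),
      l AS (SELECT LastName,  ROW_NUMBER() OVER (ORDER BY NEWID())  AS rn FROM dbo.Person),
      p AS (SELECT PersonID,  ROW_NUMBER() OVER (ORDER BY PersonID) AS rn FROM dbo.Person)
UPDATE tgt
SET    FirstName = f.FirstName,
       LastName  = l.LastName
FROM   dbo.Person AS tgt
JOIN   p ON p.PersonID = tgt.PersonID
JOIN   f ON f.rn = p.rn
JOIN   l ON l.rn = p.rn;
[/code]

Because first and last names are shuffled separately, original pairings are broken up and no invented placeholder names are needed.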

SSIS and Varbinary Column

Posted: 04 Jun 2013 11:37 AM PDT

Hi all, I hope someone can help here. I have an SSIS package that just reads a row and inserts it into a table on another server. In the designer, with a hard-coded row id, the package works as expected. But in the actual environment the package is executed through a stored procedure, and when it is executed that way the package completes with no errors but the varbinary column is NULL; all other column data is inserted. The source is a SQL select:

[code="sql"]
SELECT Org_ID, pDate, SheetNumber, Title, Contents, TestGroup,
       PhotoNumberFact, PhotoNumberFactBlob, isDeleted
FROM dbo.fact_sheet
WHERE (update_id = ?)
[/code]

The destination is a straight column mapping to the destination table. The stored proc uses sp_add_job, sp_add_jobserver, sp_add_jobstep, sp_update_job and sp_start_job. Does anyone have an idea as to why this is happening? Ken

How to write "Deleted" into a status column of the destination table when a record is found to be deleted, while using a stored procedure with MERGE

Posted: 04 Jun 2013 06:34 AM PDT

Hi Lowel, SSCChampion, SSCcrazy eights and other friends.I am still finding problem in regards of my early posts.My issue is: [b]When I use the following stored procedure , to merge data between source and destination , I also need to add fix value/string in the one column called STATUS in the destination.This value/string is "Deleted", I need to add this values when merge query finds that particular record should be deleted via `WHEN NOT MATCHED BY THE SOURCE`` sql command.I actully don`t want to delete that records but just want to add delete word in that record`s column which is status.I need your guidance.[/b].thanks. my procudre is on http://www.sqlservercentral.com/articles/EDW/77100/or CREATE PROCEDURE [TEST_SCHEMA].[generate_merge] @SrcDB SYSNAME, --Name of the Source database @SrcSchema SYSNAME, --Name of the Source schema @SrcTable SYSNAME, --Name of the Source table @TgtDB SYSNAME, --Name of the Target database @TgtSchema SYSNAME, --Name of the Target schema @TgtTable SYSNAME, --Name of the Target table @predicate SYSNAME = null, --Comma-delimited list (between single qutoes '') of items to match when automatic predicate is not possible. [Does not remove leading/trailing spaces since column names can have leading/trailing spaces] @match_qual SYSNAME = null, --Pass in additional qualifier logic for the 'WHEN MATCHED THEN' portion of the MERGE statment @not_match_qual SYSNAME = null, --Pass in additional qualifier logic for the 'WHEN NOT MATCHED THEN' portion of the MERGE statment @debug SMALLINT = null --Pass in 1 to kick out just the MERGE statement text without executing itASBEGIN DECLARE @merge_sql NVARCHAR(MAX); --overall dynamic sql statement for the merge DECLARE @columns_sql NVARCHAR(MAX); --the dynamic sql to generate the list of columns used in the update, insert, and insert-values portion of the merge dynamic sql DECLARE @pred_sql NVARCHAR(MAX); --the dynamic sql to generate the predicate/matching-statement of the merge dynamic sql (populates @pred) DECLARE @pk_sql NVARCHAR(MAX); --the dynamic sql to populate the @pk table variable that holds the primary keys of the target table DECLARE @updt NVARCHAR(MAX); --contains the comma-seperated columns used in the UPDATE portion of the merge dynamic sql (populated by @columns_sql) DECLARE @insert NVARCHAR(MAX); --contains the comma-seperated columns used in the INSERT portion of the merge dynamic sql (populated by @insert_sql) DECLARE @vals NVARCHAR(MAX); --contains the comma-seperated columns used in the VALUES portion of the merge dynamic sql (populated by @vals_sql) DECLARE @pred NVARCHAR(MAX); --contains the predicate/matching-statement of the merge dynamic sql (populated by @pred_sql) DECLARE @pred_param NVARCHAR(MAX) = @predicate; --populated by @predicate. 
used in the dynamic generation of the predicate statment of the merge DECLARE @pred_item NVARCHAR(MAX); --used as a placeholder of each individual item contained within the explicitley passed in predicate DECLARE @done_ind SMALLINT = 0; --used in the dynamic generation of the predicate statment of the merge DECLARE @dsql_param NVARCHAR(500); --contains the necessary parameters for the dynamic sql execution /************************************************************************************************ * Generate the dynamic sql (@columns_sql) statement that will * * populate the @columns temp table with the columns that will be used in the merge dynamic sql * * The @columns table will contain columns that exist in both the source and target * * tables that have the same data types. * ************************************************************************************************/ --Create the temporary table to collect all the columns shared --between both the Source and Target tables. DECLARE @columns TABLE ( table_catalog VARCHAR(100) NULL, table_schema VARCHAR(100) NULL, table_name VARCHAR(100) NULL, column_name VARCHAR(100) NULL, data_type VARCHAR(100) NULL, character_maximum_length INT NULL, numeric_precision INT NULL, src_column_path VARCHAR(100) NULL, tgt_column_path VARCHAR(100) NULL ) /************************************************************************************************ * Generate the dynamic sql (@columns_sql) statement that will * * populate the @columns temp table with the columns that will be used in the merge dynamic sql * * The @columns table will contain columns that exist in both the source and target * * tables that have the same data types. * ************************************************************************************************/ set @columns_sql = 'SELECT tgt.table_catalog, tgt.table_schema, tgt.table_name, tgt.column_name, tgt.data_type, tgt.character_maximum_length, tgt.numeric_precision, (src.table_catalog+''.''+src.table_schema+''.''+src.table_name+''.''+src.column_name) AS src_column_path, (tgt.table_catalog+''.''+tgt.table_schema+''.''+tgt.table_name+''.''+tgt.column_name) AS tgt_column_path FROM ' + @TgtDB + '.information_schema.columns tgt with(nolock) INNER JOIN ' + @SrcDB + '.information_schema.columns src with(nolock) ON tgt.column_name = src.column_name AND tgt.data_type = src.data_type AND (tgt.character_maximum_length IS NULL OR tgt.character_maximum_length >= src.character_maximum_length) AND (tgt.numeric_precision IS NULL OR tgt.numeric_precision >= src.numeric_precision) WHERE tgt.table_catalog = ''' + @TgtDB + ''' AND tgt.table_schema = ''' + @TgtSchema + ''' AND tgt.table_name = ''' + @TgtTable + ''' AND src.table_catalog = ''' + @SrcDB + ''' AND src.table_schema = ''' + @SrcSchema + ''' AND src.table_name = ''' + @SrcTable + ''' ORDER BY tgt.ordinal_position' --execute the @columns_sql dynamic sql and populate @columns table with the data INSERT INTO @columns exec sp_executesql @columns_sql /************************************************************************************** * Create the temporary table to collect all the primary key columns * * These primary key columns will be filtered out of the update portion of the merge * * We do not want to update any portion of clustered index for performance * **************************************************************************************/ DECLARE @pk TABLE ( column_name VARCHAR(100) NULL ); set @pk_sql = 'SELECT ' + 'ccu.column_name ' + 'FROM ' + @TgtDB + 
'.INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc_tgt with(nolock) ' + 'INNER JOIN ' + @TgtDB +'.INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE ccu with(nolock) ' + 'ON tc_tgt.CONSTRAINT_NAME = ccu.Constraint_name ' + 'AND tc_tgt.table_schema = ccu.table_schema ' + 'AND tc_tgt.table_name = ccu.table_name ' + 'WHERE ' + 'tc_tgt.CONSTRAINT_TYPE = ''Primary Key'' ' + 'and tc_tgt.table_catalog = ''' + @TgtDB + ''' ' + 'and tc_tgt.table_name = ''' + @TgtTable + ''' ' + 'and tc_tgt.table_schema = ''' + @TgtSchema + ''' ' INSERT INTO @pk exec sp_executesql @pk_sql /**************************************************************************************** * This generates the matching statement (aka Predicate) statement of the Merge. * * If a predicate is explicitly passed in, use that to generate the matching statement. * * Else execute the @pred_sql statement to decide what to match on and generate the * * matching statement automatically. * ****************************************************************************************/ IF @pred_param is not null -- If a comma-separated list of predicate match items were passed in via @predicate BEGIN -- These next two SET statements do basic clean-up on the comma-separated list of predicate items (@pred_param) -- This basic clean-up only removes leading/trailing commas that are passed in. -- This DOES NOT remove leading/trailing spaces since column names can have leading/trailing spaces (as stupid as that sounds) -- If the user passed in a predicate that begins with a comma, strip it out SET @pred_param = case when SUBSTRING(ltrim(@pred_param),1,1) = ',' then SUBSTRING(@pred_param,(charindex(',',@pred_param)+1),LEN(@pred_param)) else @pred_param end -- If the user passed in a predicate that ends with a comma, strip it out SET @pred_param = case when SUBSTRING(rtrim(@pred_param),LEN(@pred_param),1) = ',' then SUBSTRING(@pred_param,1,LEN(@pred_param)-1) else @pred_param end -- End clean-up of (@pred_param) -- loop through the comma-seperated predicate that was passed in via the paramater and construct the predicate statement WHILE (@done_ind = 0) BEGIN set @pred_item = case when charindex(',',@pred_param) > 0 then SUBSTRING(@pred_param,1,(charindex(',',@pred_param)-1)) else @pred_param end set @pred_param = SUBSTRING(@pred_param,(charindex(',',@pred_param)+1),LEN(@pred_param)) set @pred = case when @pred IS NULL then (coalesce(@pred,'') + 'src.[' + @pred_item + '] = ' + 'tgt.[' + @pred_item + ']') else (coalesce(@pred,'') + ' and ' + 'src.[' + @pred_item + '] = ' + 'tgt.[' + @pred_item + ']') end set @done_ind = case when @pred_param = @pred_item then 1 else 0 end END END ELSE -- If an explicite list of predicate match items was NOT passed in then automatically construct the predicate -- match statement based on the primary keys of the Source and Target tables BEGIN set @pred_sql = ' SELECT @predsqlout = COALESCE(@predsqlout+'' and '','''')+' + '(''''+''src.''+column_name+'' = tgt.''+ccu.column_name)' + ' FROM ' + @TgtDB + '.INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc_tgt with(nolock) ' + ' INNER JOIN ' + @TgtDB +'.INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE ccu with(nolock) ' + ' ON tc_tgt.CONSTRAINT_NAME = ccu.Constraint_name' + ' AND tc_tgt.table_schema = ccu.table_schema' + ' AND tc_tgt.table_name = ccu.table_name' + ' WHERE' + ' tc_tgt.CONSTRAINT_TYPE = ''Primary Key''' + ' and tc_tgt.table_catalog = ''' + @TgtDB + '''' + ' and tc_tgt.table_name = ''' + @TgtTable + '''' + ' and tc_tgt.table_schema = ''' + @TgtSchema + '''' set @dsql_param = ' @predsqlout 
nvarchar(max) OUTPUT' EXEC sp_executesql @pred_sql, @dsql_param, @predsqlout = @pred OUTPUT; END /************************************************************************* * A Merge statement contains 3 seperate lists of column names * * 1) List of columns used for Update Statement * * 2) List of columns used for Insert Statement * * 3) List of columns used for Values portion of the Insert Statement * **************************************************************************/ --1) List of columns used for Update Statement --Populate @updt with the list of columns that will be used to construct the Update Statment portion of the Merge set @updt = CAST((SELECT ',tgt.[' + column_name + '] = src.[' + column_name + ']' FROM @columns c where c.column_name != 'meta_orignl_load_dts' --we do not want the original time the row was created to be overwritten and not exists (select 'x' from @pk p where p.column_name = c.column_name) --we do not want the primary key columns updated for performace FOR XML PATH('')) AS NVARCHAR(MAX) ) --2) List of columns used for Insert Statement --Populate @insert with the list of columns that will be used to construct the Insert Statment portion of the Merge set @insert = CAST((SELECT ',' + '[' + column_name + ']' FROM @columns FOR XML PATH('')) AS NVARCHAR(MAX) ) --3) List of columns used for Insert-Values Statement --Populate @vals with the list of columns that will be used to construct the Insert-Values Statment portion of the Merge set @vals = CAST((SELECT ',src.' + '[' + column_name + ']' FROM @columns FOR XML PATH('')) AS NVARCHAR(MAX) ) /************************************************************************************* * Generate the final Merge statement using the following... * * -The parameters (@TgtDB, @TgtSchema, @TgtTable, @SrcDB, @SrcSchema, @SrcTable) * * -The predicate matching statement (@pred) * * -The update column list (@updt) * * -The insert column list (@insert) * * -The insert-value column list (@vals) * * -Filter out Primary Key from the update (updating primary key essentially * * turns the update into an insert and you lose all efficiency benefits) * *************************************************************************************/ SET @merge_sql = (' MERGE into ' + @TgtDB + '.' + @TgtSchema + '.' + @TgtTable + ' tgt ' + ' using ' + @SrcDB + '.' + @SrcSchema + '.' + @SrcTable + ' src ' + ' on ' + @pred + ' when matched ' + coalesce(@match_qual,'') + ' then update ' + ' set ' + SUBSTRING(@updt, 2, LEN(@updt)) + ' when not matched ' + coalesce(@not_match_qual,'') + ' then insert (' + SUBSTRING(@insert, 2, LEN(@insert)) + ')' + ' values ( ' + SUBSTRING(@vals, 2, LEN(@vals)) + ')' + ' when not matched by source then delete;' --ADDED THIS LINE TO DELTE ROWS OFF OF TARGET TABLE THAT DO NOT EXIST IN SOURCE ); --Either execute the final Merge statement to merge the staging table into production --Or kick out the actual merge statement text if debug is turned on (@debug=1) IF @debug = 1 BEGIN -- If debug is turned on simply select the text of merge statement and return that select @merge_sql; END ELSE BEGIN -- If debug is not turned on then execute the merge statement EXEC sp_executesql @merge_sql; END END;
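For the question itself: MERGE supports an UPDATE action in the WHEN NOT MATCHED BY SOURCE branch, so in the procedure above the line that appends the delete clause could instead append something like the following (a sketch, assuming the target table has a STATUS column):

[code="sql"]
+ ' when not matched by source then update set tgt.[STATUS] = ''Deleted'';'
[/code]

That keeps the row in the target and just stamps it as deleted instead of removing it.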

How To Prevent New Line/Carriage Return In Query Results Within SSMS

Posted: 04 Jun 2013 03:53 AM PDT

I realize there is probably a term or common phrase to describe this (something better than "new line / carriage return") but I don't know what it is. The problem is that if the source data includes CHAR(10) and CHAR(13) in a VARCHAR field, then when I include that field in a query whose results go to the grid, the contents of that field that come after the CHAR(10) and CHAR(13) are placed in column 1 of the next row of the grid. I know that CHAR(10) and CHAR(13) are both within the VARCHAR field because I've searched for and found them, and I can manually strip them from the results via the REPLACE() function, but I swear this did not use to be an issue for me with SSMS query results. I'm also certain that CHAR(10) and CHAR(13) have always been stored in various VARCHAR columns of various tables of the database I work with and in the past have never caused this wrapping effect in query results. So why in the world are the CHAR(10) and CHAR(13) instances causing my query results to go to new rows, and in the first column, instead of either not being a factor at all or wrapping to a new line within the same row and column?
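For reference, a workaround while investigating is to strip the control characters in the SELECT itself (table and column names are placeholders):

[code="sql"]
SELECT REPLACE(REPLACE(MyColumn, CHAR(13), ' '), CHAR(10), ' ') AS MyColumn
FROM dbo.MyTable;
[/code]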

Best way to find certain records.

Posted: 17 May 2013 08:48 AM PDT

I have to find certain data in a table based on a date range and an activity. If a record is 60 days old for a certain activity, does not fall within 90 or 120 days for the other activities, and has nothing less than 60 days old as a create date, we should delete it. The script to create the table and data is below. From the given example we should delete only client id 5.

[code="sql"]
Create table TestPurgeClient
(
    ClientId int,
    ActivityId int,
    CreateDate Datetime
)
--select * from TestPurgeClient

--All
insert into TestPurgeClient values(1,72, '2013-03-18 00:00:00.000')
insert into TestPurgeClient values(1,33, '2013-02-16 00:00:00.000')
insert into TestPurgeClient values(1,34,'2013-01-17 00:00:00.000')

-- 60 day + Current
insert into TestPurgeClient values(2,72, '2013-03-18 00:00:00.000')
insert into TestPurgeClient values(2,72,GETDATE())

--60 + 90 day
insert into TestPurgeClient values(3,72, '2013-03-18 00:00:00.000')
insert into TestPurgeClient values(3,33, '2013-02-16 00:00:00.000')

--All + current
insert into TestPurgeClient values(4,72, '2013-03-18 00:00:00.000')
insert into TestPurgeClient values(4,33, '2013-02-16 00:00:00.000')
insert into TestPurgeClient values(4,34,'2013-01-17 00:00:00.000')
insert into TestPurgeClient values(4,34,GETDATE())

--60 Day Only
insert into TestPurgeClient values(5,72, '2013-03-18 00:00:00.000')
[/code]

Clustered index creating performance issues

Posted: 04 Jun 2013 07:55 AM PDT

Hi everyone. I'm going through my index usage and finding some cases where the clustered indexes I created might be doing more harm than good. The four listed here are the worst performers. I think the data is bulk loaded, and as a result I may be hurting performance by sorting the data with clustered indexes. These numbers represent a typical workload, start to finish:

Table 1 clustered index: 34 total reads, 472,498 total writes
Table 2 clustered index: 1,578 total reads, 862,368 total writes
Table 3 clustered index: 0 total reads, 1,759,849 total writes
Table 4 clustered index: 3,551,723 total reads, 5,467,635 total writes

Is there a chance that converting these back to heaps and selectively creating nonclustered indexes might be a performance boost? Thanks, Howard
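For reference, a sketch of the kind of query that produces read/write counts like those above, so others can compare (the counters reset when the instance restarts):

[code="sql"]
SELECT OBJECT_NAME(s.[object_id])                    AS table_name,
       i.name                                        AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups  AS total_reads,
       s.user_updates                                AS total_writes
FROM sys.dm_db_index_usage_stats s
JOIN sys.indexes i ON i.[object_id] = s.[object_id] AND i.index_id = s.index_id
WHERE s.database_id = DB_ID();
[/code]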

Database growth

Posted: 04 Jun 2013 06:30 AM PDT

I have the same database on two different servers. On one server the data (mdf) file is 200GB, and on the other it is 160GB. The 200GB database uses the simple recovery model; the 160GB one is in full recovery with a log backup taken every 2 hours. I am wondering why there is so much difference in size when the data is essentially the same. I remember that at one point a team member did an index rebuild on the server with the 200GB database; I am not sure whether that has anything to do with its growth. What can I do to bring the size of the 200GB database down?
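A first step is to check how much of the 200GB file is actually free space left behind by the rebuild; a sketch (run in the database in question):

[code="sql"]
SELECT name,
       size/128                                                    AS size_mb,
       size/128 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128 AS free_mb
FROM sys.database_files;
[/code]

If most of the difference is free space inside the file, it can be reclaimed with a one-off shrink, though shrinking fragments indexes and the space may simply be reused by the next rebuild.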

what are a given user's connection properties?

Posted: 04 Jun 2013 06:47 AM PDT

I'm having trouble with my keyword searching today; hopefully someone can help. I'm trying to find out what a given SPID's connection properties are, something like what DBCC USEROPTIONS returns, but for a given SPID. Specifically, what is the isolation level of a given SPID? Thanks in advance!
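For reference, sys.dm_exec_sessions exposes the isolation level per session; a sketch:

[code="sql"]
SELECT session_id, transaction_isolation_level  -- 1=ReadUncommitted, 2=ReadCommitted,
FROM sys.dm_exec_sessions                       -- 3=RepeatableRead, 4=Serializable, 5=Snapshot
WHERE session_id = 53;                          -- the SPID of interest
[/code]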

Debugging SQL server Stored Procedures

Posted: 19 Oct 2010 07:09 PM PDT

Hi everyone, can anyone advise me what permissions, other than SYSADMIN, can be granted to a user to debug SQL stored procedures?

How to delete the old post in this forum?

Posted: 04 Jun 2013 05:54 AM PDT

How can I delete my previous posts, for which I have already received answers from members of this forum?

Allowing users to directly query data mart

Posted: 04 Jun 2013 03:55 AM PDT

I am looking for resources to help evaluate options for allowing users to query our data mart directly (T-SQL). These are not experienced users, and we can't risk having them block existing processes or use too many server resources. This is a large data mart. If someone can point me to resources or articles addressing this, I'd appreciate it. I'm aware of the options, but not the pros/cons of each for this scenario. Additional hardware is not an option, but new instances/databases are.

Specs:
SQL Server 2008 R2 active/passive 2-node cluster
40TB database
2TB largest fact table

Thanks. -Brian

Creating a role and assigning users

Posted: 04 Jun 2013 02:31 AM PDT

An employee has been asked to create a user-defined role named ReadOnlyXYZ and then create 90 individual login accounts with read-only access to a specific database. Can anyone help with automating this? Is there an easy process for it?
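A sketch of the repetitive part (the domain, database, and account names are placeholders; the 90 CREATE LOGIN/USER blocks could be generated from a list with dynamic SQL):

[code="sql"]
USE SpecificDatabase;
CREATE ROLE ReadOnlyXYZ;
EXEC sp_addrolemember N'db_datareader', N'ReadOnlyXYZ';   -- or GRANT SELECT on specific schemas

-- repeat for each of the 90 accounts
CREATE LOGIN [MYDOMAIN\User01] FROM WINDOWS;
CREATE USER  [MYDOMAIN\User01] FOR LOGIN [MYDOMAIN\User01];
EXEC sp_addrolemember N'ReadOnlyXYZ', N'MYDOMAIN\User01';
[/code]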

What are the options for splitting/reducing large mdf files?

Posted: 04 Jun 2013 12:37 AM PDT

I have a database with two data files. One file is 750GB and the other is 112GB. There are performance issues with backing up this database, and it will continue to grow larger. What are the options for managing large data files? Is there some mechanism for splitting them up? Any input is appreciated!
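Secondary data files and filegroups are the usual mechanism for this; a sketch (names, path, and sizes are placeholders):

[code="sql"]
ALTER DATABASE MyBigDb ADD FILEGROUP Secondary_FG;

ALTER DATABASE MyBigDb
ADD FILE (NAME = N'MyBigDb_Data2',
          FILENAME = N'E:\Data\MyBigDb_Data2.ndf',
          SIZE = 50GB, FILEGROWTH = 5GB)
TO FILEGROUP Secondary_FG;
[/code]

New or rebuilt objects can then be placed on the new filegroup, and filegroup-level backups become an option for splitting up the backup work.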

Go from SQL 2008 R2 standard edition to Developer?

Posted: 04 Apr 2011 08:36 AM PDT

Is it possible to 'downgrade' SQL 2008 R2 from standard edition to Developer edition? I'm working on developing a long-term SQL Server infrastructure for my company. We're evaluating using the Enterprise edition, but my bet at this point is that we'll go with Standard at least short-term. If, at some point in the future we decide to go to Enterprise, we'd want to (A) upgrade our production Standard installs to Enterprise, and (B) change our non-production installs to Developer to reduce licensing costs. (I'd be OK with the production environments having slightly less capabilities, since Developer is now equivalent to Data Center)From what I've seen ([url]http://msdn.microsoft.com/en-us/library/ms143393.aspx[/url]), it doesn't look like what I'd like to do is possible. Hoping someone knows otherwise. I'd like to avoid having to install SQL Server from scratch if we changed editions. Thanks....Brian Kukowski

Task in SSIS package is not executed at all

Posted: 04 Jun 2013 02:48 AM PDT

Hi, there is a for-each-loop container with a simple script task that sets a variable, followed by two data-flow tasks. Between the script task and the data-flow tasks there are precedence constraints which check the value of the variable. That all works properly, but there is another task after the data-flow tasks which is not executed at all. Thanks in advance for any hint. Best regards, Matze

Help with next date in SQL query

Posted: 17 May 2013 10:04 PM PDT

What I need is to find out when a customer next called in after the date I have selected, i.e. the next date and time of any purchase (the outcome does not need to be the same). It's like using MIN and MAX, but that does not work in this query because it selects the same date and time for both.

[code="sql"]
SELECT ID, Name, CallDateTime, Num, RC
FROM History
WHERE (CallDateTime > CONVERT(DATETIME, '2013-05-01 00:00:00', 102))
  AND (Outcome = 'Called')
[/code]

As you can see in the query above, all the data is in one overall table called History, which records all the purchases. So I want to know: if a customer called in after 2013-05-01 with an outcome of 'Called', what was their next purchase date? Some customers might not have come back, so that can stay blank. The query currently returns this:

[code="plain"]
ID      Name  CallDateTime        Num RC
3936803 Name1 01/05/2013 11:16:27 84  Called
5211387 Name2 01/05/2013 12:14:21 604 Called
5185689 Name3 01/05/2013 12:15:28 298 Called
4811923 Name4 01/05/2013 12:29:36 170 Called
[/code]

but I also want it to show the following, adding the next call date and time and how many days it took from the first date to the next date:

[code="plain"]
ID      Name  CallDateTime     Num RC     Next CallDateTime Total Number Of Days
3936803 Name1 01/05/2013 11:16 84  Called 04/05/2013 11:16  3
5211387 Name2 01/05/2013 12:14 604 Called 04/05/2013 12:14  3
5185689 Name3 01/05/2013 12:15 298 Called 04/05/2013 12:15  3
4811923 Name4 01/05/2013 12:29 170 Called 04/05/2013 12:29  3
[/code]

This is the query I have at the moment, but I am not sure how to add the two extra columns: the next call-in date and time after the first call. I want it to show the next date and time regardless of the RC code of the next visit.
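A sketch of one way to add the two columns, assuming "next" means the earliest History row for the same ID after the current call:

[code="sql"]
SELECT h.ID, h.Name, h.CallDateTime, h.Num, h.RC,
       n.NextCallDateTime,
       DATEDIFF(DAY, h.CallDateTime, n.NextCallDateTime) AS TotalNumberOfDays
FROM History h
OUTER APPLY (SELECT MIN(h2.CallDateTime) AS NextCallDateTime
             FROM History h2
             WHERE h2.ID = h.ID
               AND h2.CallDateTime > h.CallDateTime) n
WHERE h.CallDateTime > CONVERT(DATETIME, '2013-05-01 00:00:00', 102)
  AND h.Outcome = 'Called';
[/code]

Customers with no later purchase come back with NULLs in the two new columns.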

Who Dropped Database

Posted: 03 Jun 2013 11:57 PM PDT

Hi team, how can I find out who dropped a database in SQL Server? Please help; it's a bit urgent.
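If the default trace is still running (it is enabled by default), the drop is usually recorded there; a sketch:

[code="sql"]
DECLARE @trace nvarchar(260);
SELECT @trace = [path] FROM sys.traces WHERE is_default = 1;

SELECT StartTime, LoginName, HostName, ApplicationName, DatabaseName
FROM fn_trace_gettable(@trace, DEFAULT)
WHERE EventClass = 47          -- Object:Deleted
ORDER BY StartTime DESC;
[/code]

The same information drives the "Schema Changes History" standard report in SSMS. Note that the default trace files roll over, so older drops may already have aged out.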

List of queries using MAXDOP

Posted: 29 May 2013 09:47 AM PDT

Gurus, we currently configure our SQL Servers with MAXDOP = 4. Our configuration: 4 CPUs, each with 8 cores, for 16 logical processors. Can you please help me determine whether it is possible to find which queries are using parallelism (MAXDOP) during query processing? Thanks, Jagan K
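Cached plans can be checked for parallelism; a rough sketch (the worker-time filter is only a heuristic: queries that burn more CPU time than elapsed time are usually running in parallel, and the plan XML can confirm it):

[code="sql"]
SELECT TOP (50)
       qs.total_worker_time, qs.total_elapsed_time, qs.execution_count,
       SUBSTRING(st.text, 1, 200) AS query_text,
       qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE qs.total_worker_time > qs.total_elapsed_time
ORDER BY qs.total_worker_time DESC;
[/code]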

Application not Working

Posted: 04 Jun 2013 12:57 AM PDT

We have an application which uses a stored procedure. A user created the SP, but the application fails to pick it up:

ALTER PROCEDURE [INDIADNS\Name.sirname].[EMPLOYEE_SP]

Note: INDIADNS is the domain name. The strange part is that this issue never happened before; when the same user created SPs previously, the application picked them up and executed them fine. While debugging today I found that the problem occurs because of the domain name and user name prefixed to the SP name. Please suggest.

differences

Posted: 04 Jun 2013 12:52 AM PDT

What are the differences between 32-bit and 64-bit SQL Server?

Database Virtualisation Tool

Posted: 03 Jun 2013 11:52 PM PDT

Hi, I would like to know if anyone has come across or used the Delphix database virtualization tool? Thanks

SQL Server 2000 to SQL Server 2008 Migration and Performance

Posted: 03 Jun 2013 11:45 PM PDT

Hi, I am new to SQL Server DBA work, and my client recently moved his databases from SQL Server 2000 to SQL Server 2008 via detach and attach. We have run UPDATE STATISTICS, but we have not found any improvement in report generation. The reports are generated from dynamic queries, and for now I am looking at how to improve performance from the SQL Server side, not in the code (the code part will come later). What settings should I look at on SQL Server 2008?
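A few standard post-attach steps for a 2000-to-2008 move, as a sketch (the database name is a placeholder; whether they help depends on the workload):

[code="sql"]
ALTER DATABASE MyDb SET COMPATIBILITY_LEVEL = 100;  -- attached 2000 databases stay at level 80
USE MyDb;
EXEC sp_updatestats;                                 -- or targeted UPDATE STATISTICS ... WITH FULLSCAN
DBCC UPDATEUSAGE ('MyDb');                           -- corrects page/row counts carried over from 2000
DBCC CHECKDB ('MyDb') WITH NO_INFOMSGS;
[/code]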

SSIS Package execution progress

Posted: 04 Jun 2013 12:10 AM PDT

Hi, we recently upgraded our servers to 2012, and when running SSIS packages from the command line we now get execution-progress output for each and every data flow task, which is pretty long. Before the upgrade we never got these execution-progress messages. Is there a way to suppress the execution-progress listing for each data flow task? Thank you!
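dtexec has a /Reporting switch that controls what gets written to the console; a sketch, assuming the package is run from a file (E = report errors only, N = no reporting):

[code="plain"]
dtexec /F "C:\Packages\MyPackage.dtsx" /Reporting E
[/code]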

SSRS - reports rendering incorrectly

Posted: 02 Jun 2013 05:16 PM PDT

Hi all, I am currently having a problem with SSRS rendering reports incorrectly. Basically, reports do not display correctly in Chrome, Firefox, Safari, etc.: the IFRAME's height defaults to a few hundred pixels, so you only see the top two inches of the report. There are articles which suggest changes to the ReportViewer.aspx file found under the SSRS directory. I am not comfortable making changes to SQL system files, but if it has to be done, then so be it. Other than changing this file, are there any other options I can look at? Thanks all for your input!

Tuesday, June 4, 2013

[how to] Defining a two-way link

Defining a two-way link

Posted: 04 Jun 2013 05:28 PM PDT

I have a users table, and I want to define a "friends" relationship between two arbitrary users.

Up until now, I've used two different methods for this:

  1. The friends table contains user1 and user2. Searching for users involves a query that looks like
    ... WHERE @userid IN (`user1`,`user2`), which is not terribly efficient
  2. The friends table contains from and to fields. Initiating a friend request creates a row in that direction, and if it is accepted then a second row is inserted in the opposite direction. There is additionally a status column that indicates that this has happened, making the search something like:
    ... WHERE `user1`=@userid AND `status`=1

I'm not particularly satisfied with either of these solutions. The first one feels messy with that IN usage, and the second seems bloated having two rows to define a single link.

So that's why I'm here. What would you suggest for such a link? Note that I don't need any more information saved with it, I just need two user IDs associated with each other, and preferably some kind of status like ENUM('pending','accepted','blocked'), but that's optional depending on what the best design for this is.
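One common alternative is to store each pair exactly once, in a canonical order (lower id first); a sketch in MySQL, where the ordering rule has to be enforced by the application or a trigger since CHECK constraints are not enforced in 5.x:

CREATE TABLE friends (
  user_lo INT UNSIGNED NOT NULL,
  user_hi INT UNSIGNED NOT NULL,
  status  ENUM('pending','accepted','blocked') NOT NULL DEFAULT 'pending',
  PRIMARY KEY (user_lo, user_hi),
  KEY idx_user_hi (user_hi)
);

-- all friends of a given user
SELECT user_hi AS friend_id, status FROM friends WHERE user_lo = @userid
UNION ALL
SELECT user_lo AS friend_id, status FROM friends WHERE user_hi = @userid;

Each pair is stored once, and both halves of the UNION can use an index, which avoids the IN (`user1`,`user2`) scan of the first approach and the doubled rows of the second.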

Is it worth to separate columns into multiple tables for one-to-one relational table

Posted: 04 Jun 2013 05:14 PM PDT

I need to make a database structure decision: whether to separate one-to-one related columns into multiple tables linked by a single relationship id, or to put all the columns into one table.

The number of columns would be around 45 and I need to sort data on different columns on different query (one sort per query).

I will be using MyISAM storage engine.

Furthermore, there will be millions of rows in the table(s).

how to rebuild / reinstall ssrs (reportserver, reportservertempdb) databases?

Posted: 04 Jun 2013 04:30 PM PDT

Our server crashed. We got it back up and running; however, the databases mentioned above have been corrupted.

Is there a programmatic / automatic way of rebuilding or reinstalling the SSRS databases?

If not:

  1. Since we can still SELECT * from the databases, how do I grab the permissions and roles on the different reports?
  2. How do we retrieve subscription information? (See the query sketch below.)
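A sketch of queries against the old ReportServer catalog (the schema is undocumented and varies by version, so treat the table and column names as assumptions to verify):

SELECT c.[Path], u.UserName, r.RoleName
FROM   dbo.PolicyUserRole pur
JOIN   dbo.[Catalog] c ON c.PolicyID = pur.PolicyID
JOIN   dbo.Users     u ON u.UserID   = pur.UserID
JOIN   dbo.Roles     r ON r.RoleID   = pur.RoleID
ORDER BY c.[Path];

SELECT * FROM dbo.Subscriptions;   -- subscription definitions, including delivery settings

Note that a rebuilt ReportServer database also needs the original encryption key backup in order to read stored credentials and subscription delivery settings.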

"Row not found at subscriber" with a row filter

Posted: 04 Jun 2013 03:53 PM PDT

I had a production issue today where delivery of a handful of update statements failed at the subscriber with "row not found". What's odd about it is that I have a horizontal filter set up on the article in question such that the rows in question shouldn't have been at the subscriber. What's especially odd is that there were many other rows within the same transaction that also qualified for exclusion via the filter that didn't trigger the same error. I got past it by setting the distribution agent to ignore errors. Does anyone have any idea what happened and how I can keep it from happening in the future?

Select unique value whereas the time is highest in the most optimal way

Posted: 04 Jun 2013 03:24 PM PDT

Given a simple table with a text and a time field, I want to select X unique values of the text field, each taken from the row that contains the highest value of time.

INSERT INTO `test` (`id`, `text`, `time`) VALUES
  (1, 'test1', 1),
  (2, 'test1', 3),
  (3, 'test1', 2),
  (4, 'test2', 1),
  (5, 'test2', 100),
  (6, 'test2', 20)

The query that meets most of my requirements is:

SELECT a.*
FROM test a
INNER JOIN (
    SELECT `text`, MAX(`time`) AS `time`
    FROM test
    WHERE `text` LIKE "te%"
    GROUP BY `text`
) b
    ON a.`text` = b.`text`
   AND a.`time` = b.`time`
ORDER BY a.`text` ASC
LIMIT 0,50

For small tables, this works perfectly. However, on my table (300k+ rows) it makes MySQL crash, due to the subquery.

Is it possible to optimize this query? If it cannot be optimized, would it be possible to select the last inserted unique values for text? (The id and time are theoretically uncorrelated, though in 99% of cases they correlate: the higher the id, the higher the time.)

Thank you
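A composite index plus a LEFT JOIN anti-join is one alternative sketch that avoids materializing the grouped subquery (note that ties on time would return more than one row per text value):

ALTER TABLE test ADD INDEX idx_text_time (`text`, `time`);

SELECT a.*
FROM test a
LEFT JOIN test b
       ON b.`text` = a.`text`
      AND b.`time` > a.`time`
WHERE a.`text` LIKE 'te%'
  AND b.id IS NULL
ORDER BY a.`text` ASC
LIMIT 0, 50;

The same (`text`, `time`) index should also help the original GROUP BY/MAX form.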

How to restrict row explosion in join? - Distinct or union?

Posted: 04 Jun 2013 02:10 PM PDT

Here are the cardinalities of my tables:

24    in     (T0)
24    in     (T1)
24    in     (T2)
576   in join(T0,T1)
576   in join(T1,T2)
13824 in join(T0,T1,T2)

I am joining in cross-product, which as you can see outputs |T_0|*...*|T_n-1| rows of data.

The join conditions are in a WHERE clause, and LIMIT and OFFSET values cap the size of the output at 24 rows.

There are no constraints on the tables, as this is an OLAP data-set. I can however impose some uniqueness constraints. Each table is joined on the same attribute.

How do I reduce the size of the output to only show the distinct results, rather than the full cross-product?

How to run a SELECT query within while loop in PHP?

Posted: 04 Jun 2013 03:47 PM PDT

Within a SELECT query via PHP while loop, I need to run a mysql SELECT query as

$result1 = $mysqli->query("SELECT * FROM table1");

while ( $row = $result->fetch_assoc() ) {

    if ( $row['X'] == 'X' ) {
        $result2 = $mysqli->query("SELECT * FROM table2");
        .....
    }

}

but this does not work. I cannot JOIN the tables, as the if statement is complicated to perform SELECT from different tables.

If I update a column record in a table, will indexes that do NOT have this column in it be affected?

Posted: 04 Jun 2013 02:16 PM PDT

In terms of performance if I have a table like so:

CREATE TABLE [TESTDATA].[TableA](
    [Col1] [nchar](5) NOT NULL,
    [Col2] [nchar](2) NULL,
    [Col3] [float] NULL
 CONSTRAINT [TableA_PK] PRIMARY KEY CLUSTERED
(
    [Col1] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]

And then create a non-clustered index like this:

CREATE NONCLUSTERED INDEX [_idx_TableA]
ON [TESTDATA].[TableA] ([Col2])
WITH (SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF) ON [PRIMARY]
GO

If I perform an update to the table, only changing Col3 does the DB need to touch the index _idx_TableA?

Just curious how far performance touches all indexes?

Using wm_concat to concatenate rows, but in an order determined by another column

Posted: 04 Jun 2013 01:10 PM PDT

Let's say I have 3 columns: p_id, description, order_by. I am trying to do the following:

I would like to concatenate the description for all like p_id values. So we are talking a group by p_id sort of thing. But then, I want the description to be concatenated in the order of the order_by column (which is an integer). So my ideal query (not-working) would look like

select p_id, wm_concat(description)
from my_table
where p_id = 12345
group by p_id
order by order_by asc

How do I concatenate rows in this fashion?
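On Oracle 11gR2 and later, the documented replacement for wm_concat supports exactly this ordering requirement; a sketch:

SELECT p_id,
       LISTAGG(description, ',') WITHIN GROUP (ORDER BY order_by) AS descriptions
FROM   my_table
WHERE  p_id = 12345
GROUP BY p_id;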

Firebird database performance after server upgrade/restart

Posted: 04 Jun 2013 12:02 PM PDT

I have a 350 GB database (more than 40M records, plus 0 to 1000 BLOBs per record in another table). After upgrading Firebird to version 2.1.5 (mainly because of a filesystem cache issue) the database became terribly slow, for both insertion and fetching. How do I restore performance? I tried running some queries to force caching, which was somewhat helpful, and I currently have gbak running, but what's the "proper" way?

SQL Agent embedded PowerShell script in CmdExec step fails with import-module sqlps

Posted: 04 Jun 2013 12:14 PM PDT

SQL Server 2008R2 PowerShell 2.1

I am trying to create a SQL Agent job that dynamically backs up all non-corrupted SSAS databases on an instance without the use of SSIS. In my SQL Agent job, when I create a CmdExec step and point to a PowerShell script file (.ps1) like this:

powershell.exe "c:\MyPSFile.ps1"   

the job executes successfully (or at least gets far enough to only encounter logic or other syntax issues).

This approach won't work for a final solution, because there is a requirement to keep the PowerShell script internal to SQL. So I have a different CmdExec step that embeds the PowerShell script like so:

powershell.exe "import-module sqlps –DisableNameChecking    $server_name = "localhost"  $backup_location = "C:\BackupsGoHere"    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | out-null  $server = New-Object Microsoft.AnalysisServices.Server  $server.connect($server_name)    # Generate an array of databases to be backed up  foreach ($database in ($server.get_Databases() | Where {$_.EstimatedSize -gt 0 -and $_.Cubes.Count -gt 0})) {      $directory_path = $backup_location + "\" + $database.Name      if (!(Test-Path -Path $directory_path)) {          New-Item $directory_path -type directory | out-null      }      [string] $timestamp = date      $timestamp = $timestamp.Replace(':','').Replace('/','-').Replace(' ','-')      $database.Backup("$directory_path\$database-$timestamp.abf")  }  $server.disconnect()"  

However, when executed with the embedded script, the job errors out quickly with the following response:

The specified module 'sqlps' was not loaded because no valid module file was found in any module directory.

Why can't I reference the module from an embedded script, but doing so in a ps1 file works just fine?

Database replication using wamp?

Posted: 04 Jun 2013 02:05 PM PDT

I have created a POS system for our corporation, in the HQ we have a wamp server with the main database, we also have more than 25 branches across the country.

I will set up a WAMP server at each branch, so I can access its database directly when putting WAMP online.

I want to make MySQL replication with all branches, so every query on any branch will affect the main database on HQ.

I tried to test it, but could not find anyone explaining how to do it using WAMP on different PCs.
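For reference, the replication setup itself is independent of WAMP: it is configured in each server's my.ini plus a few statements. A sketch (host, account, and log coordinates are placeholders); note that standard MySQL replication flows one way, from a master to its slaves, so having every branch's writes land in the HQ database needs a different design (e.g. the branch applications writing directly to HQ, or a multi-master scheme):

-- master my.ini:      server-id=1  log-bin=mysql-bin
-- each slave my.ini:  a unique server-id (2, 3, ...)

-- on the master
CREATE USER 'repl'@'%' IDENTIFIED BY 'secret';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- on each slave (file/position taken from SHOW MASTER STATUS on the master)
CHANGE MASTER TO
  MASTER_HOST='hq-server',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;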

Can I add a unique constraint that ignores existing violations?

Posted: 04 Jun 2013 06:45 PM PDT

I have a table which currently has duplicate values in a column.

I cannot remove these erroneous duplicates but I would like to prevent additional non-unique values from being added.

Can I create a UNIQUE constraint that doesn't check existing rows for compliance?

I have tried using NOCHECK but was unsuccessful.

In this case I have a table which ties licensing information to "CompanyName"

EDIT: Having multiple rows with the same "CompanyName" is bad data, but we can't remove or update those duplicates at this time. One approach is to have the INSERTs use a stored procedure which will fail for duplicates... If it was possible to have SQL check the uniqueness on its own, that would be preferable.

This data is queried by company name. For the few existing duplicates this will mean that multiple rows are returned and displayed... While this is wrong, it's acceptable in our use case. The goal is to prevent it in the future. It seems to me from the comments that I have to do this logic in the stored procedures.
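On SQL Server 2008 and later, one sketch is a filtered unique index that simply excludes the known duplicate values, so new duplicates are rejected while the legacy rows are left alone (the table, column, and company names are placeholders):

CREATE UNIQUE NONCLUSTERED INDEX UQ_Licensing_CompanyName
ON dbo.Licensing (CompanyName)
WHERE CompanyName <> N'Duplicated Co A'
  AND CompanyName <> N'Duplicated Co B';   -- list every existing duplicate value

The filter has to enumerate the existing duplicates explicitly (filtered index predicates only allow simple comparisons), so the stored-procedure route remains the fallback if that list is unmanageable.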

Postgresql constrains on FK

Posted: 04 Jun 2013 12:23 PM PDT

I am trying to design a (part of a) database which has to accomplish the following:

  • There is a students table, containing a bunch of students.
  • There are educations in the database.
  • Each student can have 0..n educations.
  • There are x different types of educations, in which x is small (<10) and known in advance.
  • The different types of education all have the same type of data associated with it (location, name etc.)
  • Some more data is associated with a student-education connection (e.g. grades, start date etc.).
  • The type of data which is associated with a student-education connection depends on the type of education (e.g. a masters degree has a specialization, a course does not).

I try to create a good database design to represent this data, however there are quite a few difficulties. A design I came up with is as followed:

  • Student table, which contains student data
  • Education table, which contains data of educations. There is a type column to specify the type.
  • Student_Education junction table, which links students with educations.
  • x <type>Education tables which will contain data associated with student-education connections (thus a FK to Student_Education is always present). Each education type will have its own table.

However, there is a problem with this design: a <type>Education row should only be allowed to reference a Student_Education connection when the education type matches. E.g. a MasterEducation row can only reference a row in the Student_Education table that references a row in the Education table with type == master.

Would it be possible to add a constraint which can check exactly that?

If not, what other options are available?
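One common workaround is to carry the education type into the junction table with a composite foreign key, and pin the type with a CHECK in each per-type table; a sketch (the names are illustrative):

CREATE TABLE student (
    id serial PRIMARY KEY
);

CREATE TABLE education (
    id   serial PRIMARY KEY,
    type text NOT NULL,
    name text,
    UNIQUE (id, type)
);

CREATE TABLE student_education (
    id             serial PRIMARY KEY,
    student_id     int  NOT NULL REFERENCES student (id),
    education_id   int  NOT NULL,
    education_type text NOT NULL,
    FOREIGN KEY (education_id, education_type) REFERENCES education (id, type),
    UNIQUE (id, education_type)
);

CREATE TABLE master_education (
    student_education_id int  PRIMARY KEY,
    education_type       text NOT NULL DEFAULT 'master' CHECK (education_type = 'master'),
    specialization       text,
    FOREIGN KEY (student_education_id, education_type)
        REFERENCES student_education (id, education_type)
);

Because master_education can only store education_type = 'master', and its composite FK must match an (id, education_type) pair in student_education, a MasterEducation row cannot end up attached to, say, a course.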

How can I reset a mysql table auto-increment to 1 in phpMyAdmin?

Posted: 04 Jun 2013 01:01 PM PDT

I know that in MySQL at the command line I can reset a table's auto-increment field to 1 with this:

ALTER TABLE tablename AUTO_INCREMENT = 1  

I am curious if there is a way to do this from within phpMyAdmin. Something like a check box to reset the auto-increment or something else along those lines?

Not that there is anything wrong with the command line approach. More one of those curiosity things I keep thinking on... Thanks in advance!

Convert Oracle database to Derby

Posted: 04 Jun 2013 08:10 PM PDT

I need to migrate an existing Oracle Database into a Derby one. I want to know if there's a tool, a script or another way to do that work.

As far as I can see from the database information in SQL Developer, it is not using any of the interesting Oracle features, except sequences and indexes.

Thanks!

Index on foreign key makes query extremely slow

Posted: 04 Jun 2013 01:40 PM PDT

We are recently experiencing a tremendous query slowdown with spilled over temp tablespace. A specific query causes this problem.

The queried table (table3) has an indexed PK, three FKs with indexes, and a compound unique constraint on the three FKs. The offending query looks like this:

SELECT ...
FROM table1 t1, table2 t2, table3 t3
WHERE t1.abs_id = ?
  AND t3.vgs_id = t1.vgs_id
  AND t3.ai_id > ?
  AND t2.id = t1.t2_id
  AND t2.status = 2
  AND t2.felddimension = 0
  ...

Only an instance restart solved the issue. Even killing connections did not help.

After further investigation of the FKs and the indexes, it turned out that the index on the t3.ai_id column causes the severe drop in performance. After disabling it, the unique constraint served the query extremely fast.

The problematic part is AND t3.ai_id > ? (range scan). Unique scan does not cause any trouble.

Now the question is, how can an index cause such a slowdown and moreover, how can I investigate the cause? It simply doesn't add up for me.

For comparison: normally around 10 s; with the slowdown, more than 2 minutes or never returning.

How should I best handle a rapidly growing database?

Posted: 04 Jun 2013 03:31 PM PDT

I have a database that I need to maintain.

Sadly, the setup and use of that database I can't change, much (thanks to some internal politics).

It's running on SQL Server 2008r2.

It's only been live for 5 days and has grown from 20GB to upwards of 120GB in that time (essentially most of the data gets deleted and then re-imported, but as I say, I can't control that side of things).

I would love to run nightly jobs to shrink the database and reorganise the indexes, but I know that's a long way from best practices and could lead to more problems than I've already got!

QUESTIONS

  • What's the best way to handle a database that's rapidly increasing in size?
  • Should I be looking at moving the file group around to keep the physical size on disk down?
  • Is there any way to stop the server running out of space within a month?

Why would increase in innodb_buffer_pool_size slow down MySQL?

Posted: 04 Jun 2013 01:27 PM PDT

5.1.68-cll - MySQL Community Server on CentOS

The system has 32GB of RAM.

I increased innodb_buffer_pool_size from 10240M to 15360M (10GB -> 15GB).

Time taken for a series of identical operations increased from 720 to 822 seconds (14% increase).

This was the result of only a single test at each setting. But 4 previous tests performed a few months ago resulted in times between 726 and 740s.

I just tried running it again with 8GB, and the time taken was 719s.

Why would more memory result in a slower process?

EDIT: More details on process

The process that I'm testing involves emptying some tables and rebuilding them from data from existing tables. I'm not sure if it's using SELECT INSERT or if it's SELECTing the data, then using PHP to create long INSERT statements. If that matters then I can find out.

There are no schema definition changes being made.

Here is the output of numactl --hardware while the server is relatively idle:

root@server [~]# numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 32740 MB
node 0 free: 6216 MB
node distances:
node   0
  0:  10

And free -m

root@server [~]# free -m
             total       used       free     shared    buffers     cached
Mem:         32081      25864       6216          0       2591      12791
-/+ buffers/cache:      10482      21599
Swap:        15994         16      15977

Edit by RolandoMySQLDBA

Please run this query

SELECT
    InnoDBSpace / POWER(1024,1) InnoDB_KB,
    InnoDBSpace / POWER(1024,2) InnoDB_MB,
    InnoDBSpace / POWER(1024,3) InnoDB_GB
FROM
(
    SELECT SUM(data_length+index_length) InnoDBSpace
    FROM information_schema.tables
    WHERE ENGINE='InnoDB'
) A;

RESULT:

InnoDB_KB  InnoDB_MB   InnoDB_GB
8413536    8216.34375  8.02377319335938

and this one

SHOW GLOBAL STATUS LIKE 'innodb_buffer_pool_pages%';  

RESULT:

Innodb_buffer_pool_pages_data     410035
Innodb_buffer_pool_pages_dirty    204
Innodb_buffer_pool_pages_flushed  826954
Innodb_buffer_pool_pages_free     99231
Innodb_buffer_pool_pages_misc     15022
Innodb_buffer_pool_pages_total    524288

How to import a text file with '|' delimited data to PostgreSQL database?

Posted: 04 Jun 2013 09:10 PM PDT

I have a text file with |-delimited data that I want to import into a table in a PostgreSQL database. pgAdmin III only exports CSV files. I converted the file to a CSV file using MS Excel, but I was still unsuccessful in importing the data into the PostgreSQL database.

It says an error has occurred: extra data after last expected column. CONTEXT: COPY, line 1.

What am I doing wrong here?

Column header:
KNUMBER,APPLICANT,CONTACT,STREET1,STREET2,CITY,STATE,ZIP,DEVICENAME,DATERECEIVED,DECISIONDATE,DECISION,REVIEWADVISECOMM,PRODUCTCODE,STATEORSUMM,CLASSADVISECOMM,SSPINDICATOR,TYPE,THIRDPARTY,EXPEDITEDREVIEW,,
Sample data:
K000001,BOSTON SCIENTIFIC SCIMED/ INC.,RON BENNETT,5905 NATHAN LN.,,MINNEAPOLIS,MN,55442,WALLGRAFT TRACHEOBRONCHIAL ENDOPROSTHESIS AND UNISTEP DELIVERY SYSTEM,1/3/00,6/5/00,SE,SU,JCT,Summary,SU,,Traditional,N,N,, K000002,USA INSTRUMENTS/ INC.,RONY THOMAS,1515 DANNER DR.,,AURORA,OH,44202,MAGNA 5000 PHASED ARRAY CTL SPINE COIL,1/3/00,2/23/00,SE,RA,MOS,Summary,RA,,Traditional,N,N,,

Table Schema:

CREATE TABLE medicaldevice1 (
    medical_device_id serial NOT NULL,
    k_number character varying(8),
    applicant character varying(150) NOT NULL,
    contact character varying(50),
    street1 character varying(80),
    street2 character varying(40),
    city character varying(50),
    state character varying(8),
    zip character varying(16),
    device_name character varying(500) NOT NULL,
    date_received character varying(8),
    decision_date character varying(8),
    decision character varying(2),
    review_advise_comm character varying(2),
    product_code character varying(3),
    state_or_summary character varying(16),
    class_advise_comm character varying(2),
    ssp_indicator character varying(25),
    third_party character varying(2),
    expedited_review character varying(4),
    CONSTRAINT medical_device_id_pk PRIMARY KEY (medical_device_id)
)
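For reference, COPY can read the |-delimited file directly, but it requires exactly one file field per target column; the sample header has a TYPE field plus two trailing empty fields with no counterpart in medicaldevice1, which is what produces the "extra data after last expected column" error. A sketch (PostgreSQL 9.x syntax; staging_device is a hypothetical table whose columns match the file layout exactly):

COPY staging_device
FROM '/path/to/devices.txt'
WITH (FORMAT csv, DELIMITER '|', HEADER true);

INSERT INTO medicaldevice1 (k_number, applicant, contact, street1, street2, city, state, zip,
                            device_name, date_received, decision_date, decision, review_advise_comm,
                            product_code, state_or_summary, class_advise_comm, ssp_indicator,
                            third_party, expedited_review)
SELECT knumber, applicant, contact, street1, street2, city, state, zip,
       devicename, datereceived, decisiondate, decision, reviewadvisecomm,
       productcode, stateorsumm, classadvisecomm, sspindicator,
       thirdparty, expeditedreview
FROM staging_device;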

How to recover/restore corrupted Innodb data files?

Posted: 04 Jun 2013 02:38 PM PDT

A while ago, my Windows 7 system on which a MySQL Server 5.5.31 was running crashed and corrupted the InnoDB database. The weekly backup that's available does not cover all the tables that were created in the meantime, therefore I would endeavor to recover as much as possible from the data. Right after the crash, I copied the whole data folder of MySQL to an external drive. I would like use this as the starting point for my rescue attempts.

In the following I'll describe the steps of my (not yet convincing) rescue attempt so far, and I would be thankful for any comments or guidance on how to improve it:

  1. I've now done a fresh install of MySQL Server 5.5.31 on another PC
  2. I stop the MySQL service with "net stop MySQL" at the command prompt.
  3. I already figured that I need to adjust the size of the innodb log file in the my.ini file as it deviates (256 MB) from the default value (19MB).
  4. In the my.ini, I also set innodb_force_recovery=6
  5. In the data folder of the fresh installation, I overwrite the ibdata1, ib_logfile0 and ib_logfile1 files with those recovered from the crashed machine. I also copy the relevant database (UPDATE: and the mysql) folders into here (NOT the standard mysql, test and performance folders).
  6. I start the MySQL service with "net start MySQL".
  7. I go into MySQL Workbench, open my server instance, go to Data Export, basically leave the default settings, and have every table of my databases exported as an individual dump file. I also set stored procedures to be dumped. Otherwise I do not change the default settings there.
  8. I start the dump process; it makes its way through 43 out of 195 tables. Of these 43,
    • some cannot be recovered yielding an error "mysqldump: Got error: 1146: Table '...whatever...' doesn't exist when doing LOCK TABLES",
    • but many can. I assume that when the dump does not yield any error, the table's data is non-corrupted.
      Then, after the 44th, all the other table dumps fail as it is reported that the server cannot be connected to anymore:
      "mysqldump: Got error: 2003: Can't connect to MySQL server on 'localhost' (10061) when trying to connect
      Operation failed with exitcode 2
      "
      These errors then go on for all the remaining tables from the 44th to the 195th.
      For the 44th table itself, the error is the following: "mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table ...table 44... at row: 57". So it seems that for this table corruption is or begins at row 57.

Now to my questions:

  • Why is the connection breaking down given that innodb_force_recovery is set to 6?
  • How to proceed? I worked out which table was the 44th (where the connection was lost) and can try to resume the process from the 45th table. But isn't there a better way to do it? (See the sketch after this list.)
  • Once the data has been copied and the server restarts well, should I just try a dump of each table or what alternatives are there?
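
A rough per-table dump loop along these lines keeps one corrupt table from aborting the whole export; a sketch only, with the database name and credentials as placeholders:

#!/bin/sh
# Dump each table to its own file; record failures instead of stopping.
DB=mydb
for T in $(mysql -N -uroot -p"$PASS" -e "SHOW TABLES FROM $DB"); do
  mysqldump -uroot -p"$PASS" --single-transaction --skip-lock-tables "$DB" "$T" \
    > "dump_${T}.sql" || echo "FAILED: $T" >> failed_tables.txt
done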

Thanks.


UPDATE: Additional notes for my later reference
- When re-creating stored routines backed up using SHOW CREATE PROCEDURE ... and SHOW CREATE FUNCTION ..., they must be re-imported with a switched client delimiter: DELIMITER // (create procedure code of procedure 1) // (create procedure code of procedure 2) // DELIMITER ; (see the sketch below).
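
A minimal sketch of that re-import pattern for the mysql client (procedure names and bodies are placeholders):

DELIMITER //
CREATE PROCEDURE proc_one()
BEGIN
    SELECT 1;
END//
CREATE PROCEDURE proc_two()
BEGIN
    SELECT 2;
END//
DELIMITER ;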

Mysql reliable with 1000 new entries / minute?

Posted: 04 Jun 2013 06:47 PM PDT

I have been developing an application that in the WORST case writes 1000 entries each minute into a database for over a year...

I wanted to use MySQL as the DB, but I have read that it becomes unreliable when writing at high data rates. Is this true? Are 1000 entries per minute considered a high amount of data? What would count as a high amount? Would corruption mean that I lose a single entry, or the whole table?

Thanks
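
For scale, 1000 rows per minute is only about 17 inserts per second, which is a light load for InnoDB; grouping rows into multi-row INSERTs (or one transaction) reduces per-statement overhead further. A sketch with a hypothetical table:

-- Hypothetical table; a multi-row INSERT writes several entries in one statement
-- and, with autocommit on, one commit.
CREATE TABLE IF NOT EXISTS sensor_log (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    recorded_at DATETIME NOT NULL,
    reading DOUBLE NOT NULL
) ENGINE=InnoDB;

INSERT INTO sensor_log (recorded_at, reading) VALUES
    (NOW(), 1.2),
    (NOW(), 3.4),
    (NOW(), 5.6);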

TokuDB not much faster than MySQL

Posted: 04 Jun 2013 07:55 PM PDT

I have converted a MySQL database with 80,000,000 rows to TokuDB.

Now when I run:

 select count(id) from xxx where active=1  

it still takes about 90% of the time that the same query takes on the original MySQL table.

What do I have to further optimize so that it runs faster?


The table definition:

CREATE TABLE `adsDelivered` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `uid` varchar(40) NOT NULL,
  `_adsDelivered` bigint(20) NOT NULL DEFAULT '0',
  `_campaign` bigint(20) NOT NULL DEFAULT '0',
  `_ad` bigint(20) NOT NULL DEFAULT '0',
  `session` varchar(44) NOT NULL,
  `referer` text NOT NULL,
  `refererDomain` varchar(256) NOT NULL,
  `pageTime` int(11) NOT NULL DEFAULT '0',
  `pageVisibleTime` int(11) NOT NULL DEFAULT '0',
  `browser` varchar(256) NOT NULL,
  `ip` varchar(15) NOT NULL,
  `clicks` int(11) NOT NULL DEFAULT '0',
  `clickTimeLast` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  `tag` varchar(256) NOT NULL,
  `countryShort` varchar(2) NOT NULL,
  `timeCreated` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `timeUpdated` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uid` (`uid`),
  KEY `_campaign` (`_campaign`),
  KEY `_ad` (`_ad`),
  KEY `_adsDelivered` (`_adsDelivered`),
  KEY `session` (`session`),
  KEY `tag` (`tag`),
  KEY `ip` (`ip`),
  KEY `countryShort` (`countryShort`),
  KEY `refererDomain` (`refererDomain`)
) ENGINE=TokuDB AUTO_INCREMENT=7420143 DEFAULT CHARSET=utf8;
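
Note that the posted definition has no active column and therefore no index covering the WHERE active=1 predicate. Assuming the table actually queried does have such a column, a key on it would let the count be served from the index instead of a full scan; a sketch, not verified against the real schema:

-- Secondary indexes implicitly carry the primary key, so an index on (active)
-- also covers COUNT(id). Look for "Using index" in the Extra column of EXPLAIN.
ALTER TABLE xxx ADD KEY idx_active (active);

EXPLAIN SELECT COUNT(id) FROM xxx WHERE active = 1;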

How database administrators can see my requests to SQL Server?

Posted: 04 Jun 2013 06:36 PM PDT

I'm a SQL Server 2008 user. I have access to some tables. I need to request a few columns from a table, as I usually do, but I need to do it once every 5 seconds (for example), and the system administrators shouldn't see (or notice) my activity.

The result of the request is a table with approximately 100 rows. My query contains only a SELECT with a WHERE clause on an indexed column (it is light and executes very fast).

As far as I know, SELECT operations don't write to the transaction log. If I only read from the database, where does SQL Server keep any record of my SELECT statements? Can an administrator see my SELECT queries?

C2 audit, as I can see in properties, is disabled.

Are there any other ways to see my activity?

Thanks.
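
For what it's worth, an administrator does not need the transaction log to see reads: currently executing statements and recently cached query texts are visible through DMVs. A minimal sketch of what a DBA with VIEW SERVER STATE could run:

-- Currently executing requests and their SQL text
SELECT r.session_id, r.start_time, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t;

-- Recently executed (cached) statements
SELECT TOP (20) qs.last_execution_time, t.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS t
ORDER BY qs.last_execution_time DESC;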

How do I check if a constraint exists on Firebird?

Posted: 04 Jun 2013 09:19 PM PDT

I'm about to publish a script which will update a lot of Firebird databases all at once. Some will not have this constraint, so I would like to check for the existence of a constraint before I try to drop it.

ALTER TABLE PROCESS_CATEGORY DROP CONSTRAINT INTEG_669;  
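
One way to guard the drop, sketched for Firebird 2.5 and not tested on every version, is to check RDB$RELATION_CONSTRAINTS inside an EXECUTE BLOCK and only then issue the DDL via EXECUTE STATEMENT:

SET TERM ^ ;
EXECUTE BLOCK AS
BEGIN
  IF (EXISTS(SELECT 1 FROM RDB$RELATION_CONSTRAINTS
             WHERE RDB$RELATION_NAME = 'PROCESS_CATEGORY'
               AND RDB$CONSTRAINT_NAME = 'INTEG_669')) THEN
    EXECUTE STATEMENT 'ALTER TABLE PROCESS_CATEGORY DROP CONSTRAINT INTEG_669';
END^
SET TERM ; ^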

ORA-16000 when trying to perform select on read only access ORACLE database

Posted: 04 Jun 2013 01:23 PM PDT

My application's SQL encounters ORA-16000 when trying to access a read-only Oracle database:

ORA-16000: database open for read-only access
ORA-06512: at "SYS.XMLTYPE", line 272
ORA-06512: at line 1
### The error may exist in com/xyz/report/dao/Transaction001Mapper.xml

This is the query that involves XMLTYPE; INTERFACE_CONTENT is a CLOB column:

SELECT CONTENTS FROM ERRCODES
WHERE CODE = (
    SELECT xmltype(INTERFACE_CONTENT).extract(
               '/Message/PaymentStatus/ps:FIToFIPmtStsRpt/ps:TxInfAndSts/ps:StsRsnInf/ps:Rsn/ps:Prtry/text()',
               'xmlns="urn:abcde" xmlns:head="urn:iso:std:iso:20022:tech:xsd:head.001.001.01" xmlns:ps="urn:iso:std:iso:20022:tech:xsd:pacs.002.001.03"'
           ).getstringval() APP_CODE
    FROM MESSAGE_EXTERNAL_INTERACTION MEI
    WHERE MEI.MSG_TYPE='Pacs_002'
      AND MEI.MID='MY_ID')

I also make a lot of EXTRACTVALUE() calls on an XMLTYPE column.

The SQL works perfectly when the database is not read-only (i.e. opened read-write).

My question: what is the issue here? Is it related to some missing privileges/grants?
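
Independently of the XMLTYPE call itself, it may help to confirm how the database is opened; a quick check:

-- Shows READ ONLY or READ WRITE for the current database
SELECT open_mode FROM v$database;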

How to run a cold backup with Linux/tar without shutting down MySQL slave?

Posted: 04 Jun 2013 03:23 PM PDT

I run the following before tar-ing up the data directory:

STOP SLAVE;
FLUSH TABLES WITH READ LOCK;
FLUSH LOGS;

However, tar will sometimes complain that the ibdata* and ib_logfile* files changed while it was reading them. What am I missing?

The slave machine is a cold standby, so there are no client processes running while tar is running.

CentOS release 5.6 64bits, MySQL 5.1.49-log source distribution.
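
One likely gap: FLUSH TABLES WITH READ LOCK only holds while the session that issued it stays connected, and even then InnoDB background threads may still write to ibdata* and ib_logfile*, so a guaranteed-consistent cold copy really requires stopping mysqld (or using a filesystem snapshot or xtrabackup). A sketch of at least keeping the lock held for the duration of tar, with credentials and paths as placeholders:

#!/bin/sh
# Hold the global read lock in a background client session while tar runs;
# the lock is released as soon as that session ends.
mysql -uroot -p"$MYSQL_PWD" -e "STOP SLAVE; FLUSH TABLES WITH READ LOCK; FLUSH LOGS; SELECT SLEEP(3600);" &
LOCK_PID=$!
sleep 10                        # crude wait for the lock to be acquired
tar czf /backup/mysql-data.tgz /var/lib/mysql
kill "$LOCK_PID"                # ends the session and releases the lock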

SQL to read XML from file into PostgreSQL database

Posted: 04 Jun 2013 07:45 PM PDT

How can I write SQL to read an XML file into a PostgreSQL XML value?

PostgreSQL has a native XML data type with the XMLPARSE function to parse a text string to that type. It also has ways to read data from the filesystem; the COPY statement, among others.

But I don't see a way to write native PostgreSQL SQL statements to read the content from a filesystem entry and use that to populate an XML value. How can I do this?
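
One approach, sketched under the assumption of superuser access and a file readable by the server process (the table docs is hypothetical), is to combine pg_read_file() with XMLPARSE:

-- pg_read_file() reads a server-side file as text; on older versions the path
-- must be relative to the data directory.
CREATE TABLE docs (body xml);

INSERT INTO docs (body)
SELECT XMLPARSE(DOCUMENT pg_read_file('mydoc.xml'));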
