Tuesday, September 3, 2013

[how to] How to view table data in the newest 6.0 MySQL Workbench

How to view table data in the newest 6.0 MySQL Workbench

Posted: 03 Sep 2013 08:47 PM PDT

I can't figure out how to view table data in the new Workbench version without firing queries. I recall it was possible in the previous version via a context-menu item, something like "View Table Data".

Now in 6.0 I can't see this option anymore. I do see "Search Table Data", but that opens a window where I have to input some mysterious query whose pattern I fail to recognize (see the second screenshot; the Start Search button is disabled).

[screenshot 1]

[screenshot 2]

It is not a big problem for me to fire queries, but it is still quite annoying, especially if you don't see SQL on a daily basis (you use an ORM) and just want to take a quick look at a table.
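In the meantime, the old context-menu action is equivalent to a one-liner like this (the database and table names are placeholders):

```sql
-- Equivalent of the old "View table data" action; names are illustrative.
SELECT * FROM mydb.mytable LIMIT 1000;
```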

Oracle update - 1, :-)

My apologies to all db evangelists.

coercibility and collations

Posted: 03 Sep 2013 08:47 PM PDT

I have a situation similar to the one described here: http://forums.mysql.com/read.php?103,63845,63845. Basically:

If I run the query in this SP directly, it executes fine:

However, if I execute the stored procedure, I get an "Illegal mix of collations" error:

It is, however, not clear how to get rid of it.

From http://dev.mysql.com/doc/refman/5.6/en/charset-collation-expressions.html:

If both sides have the same coercibility, then:

If both sides are Unicode, or both sides are not Unicode, it is an error.  

which seems to be my particular case. My error message:

ERROR 1267 (HY000): Illegal mix of collations (utf8_unicode_ci,IMPLICIT) and (utf8_general_ci,IMPLICIT) for operation '='  

Oh, and it used to work fine, but I toyed with the db (I can't remember exactly what I did), and this is the outcome now.

Version: 5.1.67-0ubuntu0.11.10.1-log. The issue is observed from both the mysql client and a custom Java client.

Thanks
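One way out (a sketch, not necessarily the right fix for this server; the table and column names are made up) is to force a single collation, either per comparison or permanently:

```sql
-- Per-query: force one side of the comparison to a known collation.
SELECT *
FROM tableA a
JOIN tableB b
  ON a.name COLLATE utf8_unicode_ci = b.name;

-- Permanently: convert one table so both sides share a collation.
ALTER TABLE tableB CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
```

The permanent conversion is usually preferable here, since the error appeared after the two tables (or the procedure's connection collation) drifted apart.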

What is a "leaf expression"?

Posted: 03 Sep 2013 07:29 PM PDT

What is a "leaf expression"?

Partition-clause expressions are limited in the following ways:
  • Your partition-clause must calculate a single non-null value for each row. You can reference multiple columns, but each row must return a single value.
  • You can specify leaf expressions, functions, and operators in the partition clause expression.
  • All leaf expressions in the partition clause must be either constants or columns of the table.

Source https://my.vertica.com/docs/6.1.x/HTML/index.htm#18125.htm
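Reading that passage, a "leaf expression" is a leaf node of the expression tree: a bare column or constant, rather than a function or operator applied to something. As a made-up illustration (not taken from the Vertica docs), in the partition clause below `order_date` is a leaf expression (a column), while `EXTRACT(...)` is a function applied to that leaf:

```sql
-- Hypothetical example: the leaf expression is the column order_date;
-- EXTRACT(...) is a function wrapped around it.
CREATE TABLE sales (
    sale_id    INT NOT NULL,
    order_date DATE NOT NULL,
    amount     NUMERIC(10,2)
)
PARTITION BY EXTRACT(year FROM order_date);
```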

Best way to migrate a database involved in bidirectional replication

Posted: 03 Sep 2013 08:12 PM PDT

I have two SQL 2005 servers that are to be migrated to SQL 2008 R2 servers. The original pair of servers contains a database that is involved in bidirectional transactional replication between them. In other words, SQL2005ServerA.DatabaseA and SQL2005ServerB.DatabaseA are involved in bidirectional transactional replication.

Now I have to move these databases to two new servers, call them SQL2008ServerA and SQL2008ServerB.

Is this pretty straight-forward? After disconnecting the application server, do I just backup and restore to the new servers, and then recreate the replication without snapshots?
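In outline that is the usual approach; a sketch of the steps (the publication name is hypothetical, and using "replication support only" to avoid snapshots is an assumption about the intended setup):

```sql
-- On SQL2005ServerA, after quiescing the application:
BACKUP DATABASE DatabaseA TO DISK = N'\\share\DatabaseA.bak';

-- On SQL2008ServerA:
RESTORE DATABASE DatabaseA FROM DISK = N'\\share\DatabaseA.bak' WITH RECOVERY;

-- Recreate the publication on each new server, then add each subscription with
-- "replication support only" so no snapshot needs to be generated:
EXEC sp_addsubscription
     @publication    = N'PubDatabaseA',   -- hypothetical name
     @subscriber     = N'SQL2008ServerB',
     @destination_db = N'DatabaseA',
     @sync_type      = N'replication support only';
```

The key requirement for skipping the snapshot is that both sides are restored from backups taken after activity stopped, so the data is already identical.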

Can I design this database better? [on hold]

Posted: 03 Sep 2013 07:35 PM PDT

I have the following database design:

enter image description here

Script to reproduce:

/*
DROP TABLE [dbo].[feedback]
DROP TABLE [dbo].[action]
DROP TABLE [dbo].[download]
DROP TABLE [dbo].[user]
DROP TABLE [dbo].[company]
DROP TABLE [dbo].[resource]
DROP TABLE [dbo].[task]
DROP TABLE [dbo].[subcategory]
DROP TABLE [dbo].[category]
*/

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO

CREATE TABLE [dbo].[user] (
    [UserId] [int] IDENTITY(1, 1) NOT NULL,
    [CompanyID] [int] NOT NULL,
    [DepartmentID] [int] NOT NULL,
    [EmailAddress] [varchar](150) NOT NULL,
    [Title] [varchar](50) NULL,
    [FirstName] [varchar](100) NOT NULL,
    [MiddleName] [varchar](100) NULL,
    [LastName] [varchar](100) NULL,
    [DateOfBirth] [date] NULL,
    [ResidentialAddress] [varchar](500) NULL,
    [ResidentialSuburb] [varchar](150) NULL,
    [ResidentialPostCode] [varchar](20) NULL,
    [ResidentialState] [varchar](150) NULL,
    [ResidentialCountry] [varchar](100) NULL,
    [PostalAddress] [varchar](500) NULL,
    [PostalSuburb] [varchar](150) NULL,
    [PostalPostCode] [varchar](20) NULL,
    [PostalState] [varchar](150) NULL,
    [PostalCountry] [varchar](100) NULL,
    [Phone] [varchar](50) NULL,
    [Mobile] [varchar](50) NULL,
    [BrowserDetails] [varchar](500) NULL,
    [IsActive] [bit] NOT NULL,
    [LastLogin] [datetime] NULL,
    [Created] [datetime] NOT NULL,
    [CreatedBy] [varchar](50) NOT NULL,
    [LastModified] [datetime] NULL,
    [LastModifiedBy] [varchar](50) NULL,
    CONSTRAINT [PK_users] PRIMARY KEY CLUSTERED ([UserId] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[company] (
    [CompanyID] [int] IDENTITY(1, 1) NOT NULL,
    [CompanyName] [varchar](150) NULL,
    [DepartmentID] [int] NOT NULL,
    [DepartmentName] [varchar](150) NULL,
    [ResidentialAddress] [varchar](500) NULL,
    [ResidentialSuburb] [varchar](150) NULL,
    [ResidentialPostCode] [varchar](20) NULL,
    [ResidentialState] [varchar](150) NULL,
    [ResidentialCountry] [varchar](100) NULL,
    [PostalAddress] [varchar](500) NULL,
    [PostalSuburb] [varchar](150) NULL,
    [PostalPostCode] [varchar](20) NULL,
    [PostalState] [varchar](150) NULL,
    [PostalCountry] [varchar](100) NULL,
    [Phone] [varchar](50) NULL,
    [IsActive] [bit] NOT NULL,
    [Created] [datetime] NOT NULL,
    [CreatedBy] [varchar](50) NOT NULL,
    [LastModified] [datetime] NULL,
    [LastModifiedBy] [varchar](50) NULL,
    CONSTRAINT [PK_department] PRIMARY KEY CLUSTERED ([DepartmentID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [company]
ADD CONSTRAINT CompanyID_DepartmentID_UQ UNIQUE (CompanyID, DepartmentID);
GO

ALTER TABLE [user]
ADD CONSTRAINT Company_User_FK FOREIGN KEY (CompanyID, DepartmentID)
    REFERENCES company (CompanyID, DepartmentID);
GO

CREATE TABLE [dbo].[resource] (
    [ResourceID] [int] IDENTITY(1, 1) NOT NULL,
    [ResourceName] [varchar](1000) NOT NULL,
    [ResourceUrl] [varchar](1000) NOT NULL,
    [IsActive] [bit] NOT NULL,
    [Created] [datetime] NOT NULL,
    [CreatedBy] [varchar](50) NOT NULL,
    [LastModified] [datetime] NULL,
    [LastModifiedBy] [varchar](50) NULL,
    CONSTRAINT [PK_resource] PRIMARY KEY CLUSTERED ([ResourceID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[download] (
    [DownloadID] [int] IDENTITY(1, 1) NOT NULL,
    [ResourceID] [int] NOT NULL,
    [UserID] [int] NOT NULL,
    [DownloadDate] [datetime] NOT NULL,
    [ActionID] [int] NULL,
    [Created] [datetime] NOT NULL,
    [CreatedBy] [varchar](50) NOT NULL,
    [LastModified] [datetime] NULL,
    [LastModifiedBy] [varchar](50) NULL,
    CONSTRAINT [PK_download] PRIMARY KEY CLUSTERED ([DownloadID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[download] WITH CHECK
ADD CONSTRAINT [FK_download_resource] FOREIGN KEY([ResourceID])
    REFERENCES [dbo].[resource] ([ResourceID])
GO

ALTER TABLE [dbo].[download] WITH CHECK
ADD CONSTRAINT [FK_download_userid] FOREIGN KEY([UserID])
    REFERENCES [dbo].[user] ([UserID])
GO

CREATE TABLE [dbo].[category] (
    [CategoryID] [int] IDENTITY(1, 1) NOT NULL,
    [CategoryTitle] [varchar](200) NOT NULL,
    [WordpressPostID] [varchar](50) NULL,
    [IsActive] [bit] NOT NULL,
    [Created] [datetime] NOT NULL,
    [CreatedBy] [varchar](50) NOT NULL,
    [LastModified] [datetime] NULL,
    [LastModifiedBy] [varchar](50) NULL,
    CONSTRAINT [PK_category] PRIMARY KEY CLUSTERED ([CategoryID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[subcategory] (
    [SubCategoryID] [int] IDENTITY(1, 1) NOT NULL,
    [SubCategoryTitle] [varchar](200) NOT NULL,
    [CategoryID] [int] NOT NULL,
    [WordpressPostID] [varchar](50) NULL,
    [IsActive] [bit] NOT NULL,
    [Created] [datetime] NOT NULL,
    [CreatedBy] [varchar](50) NOT NULL,
    [LastModified] [datetime] NULL,
    [LastModifiedBy] [varchar](50) NULL,
    CONSTRAINT [PK_subcategory] PRIMARY KEY CLUSTERED ([SubCategoryID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[subcategory] WITH CHECK
ADD CONSTRAINT [FK_subcategory_categoryid] FOREIGN KEY([CategoryID])
    REFERENCES [dbo].[category] ([CategoryID])
GO

CREATE TABLE [dbo].[task] (
    [TaskID] [int] IDENTITY(1, 1) NOT NULL,
    [TaskTitle] [varchar](200) NOT NULL,
    [SubCategoryID] [int] NOT NULL,
    [IsActive] [bit] NOT NULL,
    [Created] [datetime] NOT NULL,
    [CreatedBy] [varchar](50) NOT NULL,
    [LastModified] [datetime] NULL,
    [LastModifiedBy] [varchar](50) NULL,
    CONSTRAINT [PK_task] PRIMARY KEY CLUSTERED ([TaskID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[task] WITH CHECK
ADD CONSTRAINT [FK_task_SubCategoryID] FOREIGN KEY([SubCategoryID])
    REFERENCES [dbo].[subcategory] ([SubCategoryID])
GO

CREATE TABLE [dbo].[action] (
    [ActionID] [int] IDENTITY(1, 1) NOT NULL,
    [ActionTitle] [varchar](250) NULL,
    [TaskID] [int] NOT NULL,
    [UserID] [int] NOT NULL,
    [StartDate] [date] NULL,
    [EndDate] [date] NULL,
    [OnGoing] [bit] NULL,
    [Completed] [bit] NULL,
    [CompletedDate] [datetime] NULL,
    [IsActive] [bit] NOT NULL,
    [Created] [datetime] NOT NULL,
    [CreatedBy] [varchar](50) NOT NULL,
    [LastModified] [datetime] NULL,
    [LastModifiedBy] [varchar](50) NULL,
    CONSTRAINT [PK_action] PRIMARY KEY CLUSTERED ([ActionID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[action] WITH CHECK
ADD CONSTRAINT [FK_action_TaskID] FOREIGN KEY([TaskID])
    REFERENCES [dbo].[task] ([TaskID])
GO

ALTER TABLE [dbo].[action] WITH CHECK
ADD CONSTRAINT [FK_action_UserID] FOREIGN KEY([UserID])
    REFERENCES [dbo].[user] ([UserID])
GO

CREATE TABLE [dbo].[feedback] (
    [FeedbackID] [int] NOT NULL,
    [FeedbackDate] [datetime] NOT NULL,
    [ActionID] [int] NULL,
    [CategoryID] [int] NOT NULL,
    [UserID] [int] NOT NULL,
    [Feedback] [varchar](MAX) NULL,
    CONSTRAINT [PK_feedback] PRIMARY KEY CLUSTERED ([FeedbackID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

ALTER TABLE [dbo].[feedback] WITH CHECK
ADD CONSTRAINT [FK_feedback_UserID] FOREIGN KEY([UserID])
    REFERENCES [dbo].[user] ([UserID])
GO

ALTER TABLE [dbo].[feedback] WITH CHECK
ADD CONSTRAINT [FK_feedback_CategoryID] FOREIGN KEY([CategoryID])
    REFERENCES [dbo].[category] ([CategoryID])
GO

SET ANSI_PADDING OFF
GO

Database Intent: The intent of the database is that a user will complete quizzes (actions) in various sub-categories and have the ability to provide feedback on each action (as well as track the resources the user downloads as they proceed).

Usage Information: It's expected that there will be an average of 43 actions per subcategory and five subcategories in total (to begin with). Our user base is ~30,000 users, of which we expect only 5,000 or so to actually use this system.

Is there a better approach to this design? What holes are there in my design that I should look to resolve?

Return all possible combinations for each group

Posted: 03 Sep 2013 03:17 PM PDT

I have a table, shown here, that lists each student's ID and all the classes that student took:

+----+--------+
| ID | Class  |
+----+--------+
| 01 |   A    |
| 01 |   B    |
| 01 |   C    |
| 02 |   A    |
| 02 |   C    |
| 02 |   D    |
| 03 |   A    |
| 03 |   B    |
+----+--------+

And I would like to convert this table to the one below in order to see all possible combinations of classes for each student.

+----+---------+---------+
| ID | Class 1 | Class 2 |
+----+---------+---------+
| 01 | A       | B       |
| 01 | A       | C       |
| 01 | B       | C       |
| 02 | A       | C       |
| 02 | A       | D       |
| 02 | C       | D       |
| 03 | A       | B       |
+----+---------+---------+

Does anyone know how to get this?
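A self-join that pairs each class row with the later classes of the same student yields exactly that shape (assuming the table is called student_classes):

```sql
SELECT a.ID,
       a.Class AS `Class 1`,
       b.Class AS `Class 2`
FROM student_classes a
JOIN student_classes b
  ON a.ID = b.ID
 AND a.Class < b.Class      -- "<" keeps each unordered pair exactly once
ORDER BY a.ID, a.Class, b.Class;
```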

Conditional Counting

Posted: 03 Sep 2013 03:11 PM PDT

I'm analyzing a database of login records for a university, and I'm trying to write a query that will give me an idea of how many users are using the labs for short bursts versus a more extended stay.

I have the query below, which tells me how many times each user has used a lab for less than or equal to 10 minutes. What I want is a result that tells me this along with their total number of logins.

select username, count(*) as "<= 10 min"
from logins
where duration <= 600
group by username
order by username asc
limit 10;

Example desired output:

+----------+----------+----------+
| username | <=10 Min |  total   |
+----------+----------+----------+
| user1    |        4 |        7 |
| user2    |       11 |       22 |
| user3    |        1 |        3 |
| user4    |        4 |        8 |
+----------+----------+----------+
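One way to get both counts in a single pass (a sketch assuming MySQL, which the LIMIT syntax suggests) is a conditional sum next to the plain COUNT:

```sql
SELECT username,
       SUM(duration <= 600) AS `<= 10 min`,  -- boolean expressions sum as 0/1 in MySQL
       COUNT(*)             AS total
FROM logins
GROUP BY username
ORDER BY username ASC
LIMIT 10;
```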

SSMS "New Query" Behavior

Posted: 03 Sep 2013 01:53 PM PDT

On my desktop and on most other instances of SQL Server 2008 R2 and 2012, the default behavior when you right-click a database in SSMS's Object Explorer and choose "New Query" is that a new query window opens with its database context set to the database you clicked. If you click the New Query button from that query window, it opens a new query against the same database. However, on some other machines, including my boss's, the new query opens but the context is the default database (master by default), and clicking New Query likewise opens a query in the default database. What is the option to make his instance act like mine?

If I connect to his database from my SSMS, it still behaves like it does on his machine (incorrectly).

Change collation of one database! [duplicate]

Posted: 03 Sep 2013 01:27 PM PDT

This question already has an answer here:

I'm new to this forum and I saw Peter's problem. I have the same problem: I want to change the collation of one database in SQL Server 2008 from SQL_Latin1_General_CP1_CI_AS to Latin1_General_CI_AS.

Kin, do you have some scripts that can help me do it easily, or do I have to configure something else?

Thank you!

Simple update query balloons tempdb

Posted: 03 Sep 2013 02:20 PM PDT

I am trying to do a bulk update of a column from a CSV file; however, when I do the actual update operation, my tempdb balloons to over 20 GB in size (overtaking the entire SSD that tempdb is running on).

I load the CSV file using a bulk copy into a temporary table whose schema matches the destination table, and then I run the query:

UPDATE [Med_Rec_Entries]
SET [Comments] = [Tmp_Med_Rec_Entries].[Comments]
FROM [Tmp_Med_Rec_Entries]
WHERE [Tmp_Med_Rec_Entries].[ID] = [Med_Rec_Entries].[ID]

And that is where tempdb blows up in size. Both the source and destination tables have 1,770,373 rows (there is a 1:1 relationship on ID for the update).

I tried running the Database Engine Tuning Advisor, and it had me create a new index, but that did not help. The database I am inserting into is just a middle step in a data migration; no queries other than CSV imports like this one and a bulk export at the end will be run against it, so it has no other indexes or keys.

Here is the create table statement for Med_Rec_Entries

CREATE TABLE [dbo].[Med_Rec_Entries](
    [ID] [nvarchar](255) NULL,
    [Patient_ID] [nvarchar](80) NULL,
    [Med_Rec_Code] [nvarchar](20) NULL,
    [Tx_Date_Performed] [nvarchar](254) NULL,
    [Comments] [nvarchar](max) NULL,
    [Staff_Code] [nvarchar](21) NULL,
    [Quantity] [real] NULL,
    [Hide_This_Entry] [bit] NULL,
    [Exclude_From_Printed_History] [bit] NULL,
    [Image_ID] [int] NULL,
    [Remote_Status] [nvarchar](2) NULL,
    [Special_Flag] [nvarchar](20) NULL,
    [Tx_Time_Performed] [nvarchar](254) NULL,
    [Tx_Date_Recorded] [nvarchar](254) NULL,
    [Appended_Comments] [nvarchar](max) NULL,
    [Tx_Time_Recorded] [nvarchar](254) NULL,
    [Entry_Is_Locked] [bit] NULL,
    [Create_User] [nvarchar](21) NULL,
    [Audit_DTS] [nvarchar](20) NULL,
    [Audit_Actor] [nvarchar](21) NULL,
    [vQPixAreax_] [varbinary](max) NULL,
    [Route] [nvarchar](80) NULL,
    [Units] [nvarchar](20) NULL,
    [CFR_Reason_Code] [nvarchar](40) NULL,
    [rpl_Key] [nvarchar](14) NULL,
    [rpl_DateTime] [nvarchar](14) NULL,
    [Medical_Description] [nvarchar](max) NULL,
    [caseID] [int] NULL,
    [Sign_Off_Target] [nvarchar](21) NULL,
    [Sign_Off_Date] [nvarchar](254) NULL,
    [Location_Code] [nvarchar](50) NULL,
    [ACUP_ID] [nvarchar](15) NULL,
    [ExcludeFromWeb] [bit] NULL,
    [SecondarySort] [int] NULL,
    [AutosaveState] [nvarchar](20) NULL,
    [_CommentsAndFindingsText] [nvarchar](max) NULL,
    [NewFormat] [bit] NULL,
    [Temp] [nvarchar](80) NULL,
    [Colour] [nvarchar](20) NULL,
    [PrimaryKey] [nvarchar](200) NULL,
    [Sign_Off_Time] [nvarchar](254) NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

-- From the Database Engine Tuning Advisor
CREATE NONCLUSTERED INDEX [_dta_index_Med_Rec_Entries_13_933578364__K1] ON [dbo].[Med_Rec_Entries]
(
    [ID] ASC
) WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF) ON [PRIMARY]

Here is the create table statement for Tmp_Med_Rec_Entries

CREATE TABLE [dbo].[Tmp_Med_Rec_Entries](
    [ID] [nvarchar](255) NULL,
    [Comments] [nvarchar](max) NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

What can I do to stop tempdb from growing too large?
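One common mitigation, sketched here without any guarantee that it fits this exact workload, is to run the update in batches so that each statement's sort and logging work stays bounded:

```sql
-- Repeat until no rows qualify; 50000 is an arbitrary batch size.
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (50000) m
    SET    m.Comments = t.Comments
    FROM   Med_Rec_Entries m
    JOIN   Tmp_Med_Rec_Entries t ON t.ID = m.ID
    -- only touch rows whose value actually changes, so the loop terminates:
    WHERE  m.Comments <> t.Comments
        OR (m.Comments IS NULL AND t.Comments IS NOT NULL)
        OR (m.Comments IS NOT NULL AND t.Comments IS NULL);
    SET @rows = @@ROWCOUNT;
END
```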

One or several tables for attributes

Posted: 03 Sep 2013 12:37 PM PDT

I just spent some time discussing two alternative DB designs with a coworker, and I'm not convinced. Neither of us is strictly a DBA, so we may be missing something.

The overall goal is to make it possible to attach open-ended free-text attributes to each of three (possibly four) entities.

Let's call the business entities Device, Location and Part; they have relationships between themselves.

Design A: Create DeviceAttribute, LocationAttribute and PartAttribute tables, each of which has an ID, Reference ID (FK to respective table), Name, Value and Type.

Design B: Create an Attribute table (with ID, Name, Value and Type columns) and three reference tables, each holding a reference from one of the entity tables to one of the Attribute table's IDs.

The concern is mostly about performance:

- Will 3 separate xxxAttribute tables perform better when querying only for one entity's data, such as "give me Device X and all its attributes", or do both designs perform the same?
- Will the single Attribute table (Design B) perform better when querying for entities with a given attribute name/value, such as "give me all entities that have Attribute.Name='GPS'", or is it equivalent to querying a view that combines the 3 tables of Design A?
- In the case of Design B: does an update involving one entity table (Location, Device, Part) cause a lock that blocks attribute queries for the other entity tables?

The system may have tens of thousands of Parts, thousands of Devices and hundreds of Locations, and may have to process on the order of tens to a few hundred queries per second.
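For concreteness, Design B might look like the following sketch (column types, and the existence of a Device(DeviceID) key for the reference to point at, are assumptions):

```sql
CREATE TABLE Attribute (
    ID    int IDENTITY(1,1) PRIMARY KEY,
    Name  varchar(100)  NOT NULL,
    Value varchar(4000) NULL,
    Type  varchar(50)   NULL
);

-- One narrow link table per entity (DeviceAttribute shown; LocationAttribute
-- and PartAttribute would be analogous):
CREATE TABLE DeviceAttribute (
    DeviceID    int NOT NULL REFERENCES Device(DeviceID),
    AttributeID int NOT NULL REFERENCES Attribute(ID),
    PRIMARY KEY (DeviceID, AttributeID)
);

-- "give me all devices that have Attribute.Name = 'GPS'":
SELECT da.DeviceID
FROM DeviceAttribute da
JOIN Attribute a ON a.ID = da.AttributeID
WHERE a.Name = 'GPS';
```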

Running an UPDATE on 3 million records

Posted: 03 Sep 2013 11:49 AM PDT

I have two actors in this transaction. One is a table, "update_assets": a set of records with data that is up-to-date and new. The second is a table, "application_assets", which is used by an application and needs to be updated. The problem is that the UPDATE is taking far too long; it has now been running for 4 hours. The transaction is running on a VM with 3 AMD cores allocated to it and 8 GB of RAM. The application is not running in the background.

Tables:

update_assets: ~3.3 million records, 18 columns, 1 index on amsid

application_assets: ~7 million records, 5 columns, 1 index on id

Query:

UPDATE application_assets AS ams
SET    number = amu.num,
       letter = amu.letter
FROM   update_assets amu
WHERE  ams.id = amu.amsid
  AND  ams.company_id = '4';

Explain output:

Update on application_assets ams  (cost=219382.14..747965.42 rows=3303562 width=85)
  ->  Hash Join  (cost=219382.14..747965.42 rows=3303562 width=85)
        Hash Cond: ((ams.id)::text = (amu.amsid)::text)
        ->  Seq Scan on application_assets ams  (cost=0.00..244642.25 rows=7883425 width=63)
        ->  Hash  (cost=145825.62..145825.62 rows=3303562 width=55)
              ->  Seq Scan on update_assets amu  (cost=0.00..145825.62 rows=3303562 width=55)

If I removed all of the columns on the update_assets table except the ones pertinent to this transaction, would that speed it up? I noticed that in the Hash Cond portion of the plan it does a ::text casting operation on each of the columns. If the columns were already of the text data type, would the data be updated more quickly? Does PostgreSQL always cast data to text for these kinds of joins? What else can I do to expedite this process?
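Independent of the casting question, one change that often shrinks such updates (a sketch; it helps only to the extent that many rows already hold the correct values) is to skip no-op row rewrites:

```sql
UPDATE application_assets AS ams
SET    number = amu.num,
       letter = amu.letter
FROM   update_assets amu
WHERE  ams.id = amu.amsid
  AND  ams.company_id = '4'
  -- do not rewrite rows that already carry the new values:
  AND  (ams.number IS DISTINCT FROM amu.num
     OR ams.letter IS DISTINCT FROM amu.letter);
```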

Optimising a query of an SSRS dataset

Posted: 03 Sep 2013 11:47 AM PDT

Consider the query given below:

SELECT * FROM (
    (SELECT c.cust_id,
            c.username,
            REPLACE(qs.seg_type_ref_key_02, 'N/A', 'Non Vip') casino_group,
            REPLACE(qs.seg_type_ref_key_03, 'N/A', 'Non Vip') bingo_group,
            REPLACE(qs.seg_type_ref_key_04, 'N/A', 'Non Vip') games_group,
            REPLACE(qs.seg_type_ref_key_12, 'N/A', 'Non Vip') poker_group,
            REPLACE(qs.seg_type_ref_key_01, 'N/A', 'Non ViP') sportsbook_group,
            c.country, c.contactable, c.email, c.dob, c.[status],
            c.first_name, c.last_name, c.[master], c.[language], c.gender
     FROM warehouse.dbo.dim_customer c (nolock)
     INNER JOIN warehouse.dbo.dim_segmentationcodehistory sc (nolock)
         ON sc.cust_id = c.cust_id
     INNER JOIN warehouse.dbo.q_dim_segments qs (nolock)
         ON sc.seg_code_ref_key = qs.seg_code_ref_key
     WHERE sc.active = 1
       AND qs.seg_type_ref_key_04 <> 'N/A'
       AND c.active = 1
       AND (qs.seg_type_ref_key_02 <> 'Prospect' AND qs.seg_type_ref_key_03 <> 'Prospect'
        AND qs.seg_type_ref_key_04 <> 'Prospect' AND qs.seg_type_ref_key_12 <> 'Prospect'
        AND qs.seg_type_ref_key_01 <> 'Prospect')) A
    LEFT JOIN
    (SELECT c.cust_id cust_dup,
            SUM(fc.turnover) AS Turnover_GBP,
            SUM(fc.grosswin) AS GrossWin_GBP,
            SUM(fc.chip_purch_amount_gbp) AS chip_purch_amount_gbp
     FROM warehouse.dbo.fact_games fc (nolock)
     INNER JOIN warehouse.dbo.dim_date d (nolock)
         ON d.date_key = fc.date_ref_key
     INNER JOIN warehouse.dbo.dim_customer c (nolock)
         ON c.cust_ref_key = fc.cust_ref_key
     INNER JOIN warehouse.dbo.dim_gamesgame gg (nolock)
         ON gg.games_game_ref_key = fc.game_ref_key
     WHERE d.[date] BETWEEN GETDATE()-10 AND GETDATE()-9
       AND gg.Game_Group_Description <> 'Bingo'
     GROUP BY c.cust_id) B
    ON A.cust_id = B.cust_dup)

This query takes a little more than an hour. However, I require that this completes in as little time as possible.

Below is the level to which I have been able to optimize it:

IF OBJECT_ID('tempdb..#temp_shash_A') IS NOT NULL
    DROP TABLE #temp_shash_A

IF OBJECT_ID('tempdb..#temp_shash_B') IS NOT NULL
    DROP TABLE #temp_shash_B

-- A
SELECT c.cust_id,
       c.username,
       REPLACE(qs.seg_type_ref_key_02, 'N/A', 'Non Vip') casino_group,
       REPLACE(qs.seg_type_ref_key_03, 'N/A', 'Non Vip') bingo_group,
       REPLACE(qs.seg_type_ref_key_04, 'N/A', 'Non Vip') games_group,
       REPLACE(qs.seg_type_ref_key_12, 'N/A', 'Non Vip') poker_group,
       REPLACE(qs.seg_type_ref_key_01, 'N/A', 'Non ViP') sportsbook_group,
       c.country, c.contactable, c.email, c.dob, c.[status],
       c.first_name, c.last_name, c.[master], c.[language], c.gender
INTO #temp_shash_A
FROM warehouse.dbo.dim_customer c (nolock)
INNER JOIN warehouse.dbo.dim_segmentationcodehistory sc (nolock)
    ON sc.cust_id = c.cust_id
INNER JOIN warehouse.dbo.q_dim_segments qs (nolock)
    ON sc.seg_code_ref_key = qs.seg_code_ref_key
WHERE sc.active = 1
  AND qs.seg_type_ref_key_04 <> 'N/A'
  AND c.active = 1
  AND (qs.seg_type_ref_key_02 <> 'Prospect' AND qs.seg_type_ref_key_03 <> 'Prospect'
   AND qs.seg_type_ref_key_04 <> 'Prospect' AND qs.seg_type_ref_key_12 <> 'Prospect'
   AND qs.seg_type_ref_key_01 <> 'Prospect')

create clustered index S_1 on #temp_shash_A (cust_id)

-- B
SELECT c.cust_id cust_dup,
       SUM(fc.turnover) AS Turnover_GBP,
       SUM(fc.grosswin) AS GrossWin_GBP,
       SUM(fc.chip_purch_amount_gbp) AS chip_purch_amount_gbp
INTO #temp_shash_B
FROM warehouse.dbo.fact_games fc (nolock)
INNER JOIN warehouse.dbo.dim_date d (nolock)
    ON d.date_key = fc.date_ref_key
INNER JOIN warehouse.dbo.dim_customer c (nolock)
    ON c.cust_ref_key = fc.cust_ref_key
INNER JOIN warehouse.dbo.dim_gamesgame gg (nolock)
    ON gg.games_game_ref_key = fc.game_ref_key
WHERE d.[date] BETWEEN GETDATE()-10 AND GETDATE()-9
  AND gg.Game_Group_Description <> 'Bingo'
GROUP BY c.cust_id

create clustered index S_2 on #temp_shash_B (cust_dup)

SELECT * FROM #temp_shash_A A
LEFT JOIN #temp_shash_B B
    ON A.cust_id = B.cust_dup

This took around 5-6 minutes when first run. However, it took around 35 minutes when run today. Can anyone suggest a way to optimize this further? Any help is appreciated.

PS: I'm working on a SQL Server 2008 R2 database. The query is a dataset query for an SSRS report.

Oracle specifying join order hint

Posted: 03 Sep 2013 11:41 AM PDT

Oracle two-node RAC, 11g R2.

I have a query (this is a simplified version, but still exhibits the same issue as the larger query) that looks like this:

SELECT t1.trip_id,
       t1.origin,
       t2.sighting_date
FROM trips t1
JOIN trip_details t2 ON t1.trip_id = t2.trip_id
WHERE
  -- don't get the information if the trip *only* belongs to customerid 99
  99 <> ANY (SELECT customerid FROM customer_trip ct WHERE ct.trip_id = t1.trip_id);
  • trip has trip level information - trip_id, origin, destination, etc
  • trip_details has sightings for the trips - trip_id, sighting_date, sighting_location, etc
  • customer_trip has which trips belong to which customers - some trips may be shared between customers and some may not, so the basic structure is just customer_id, trip_id

The problem is that the optimizer keeps joining trips and trip_details first, and only then limiting by the 99 <> ANY ... predicate. That predicate greatly reduces the rows from trips, and would reduce the rows looked up in trip_details (which is very, very large: 100x trips). So, ideally, I'd like it to join trips and the customer_trip subquery first, and only then go to trip_details.

But I can't figure out whether there's a way to specify a join order so that it runs in that order. The LEADING hint doesn't seem to have a way to reference the subquery.

Stats are all up to date.
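One avenue worth trying (hedged: whether the optimizer honors it depends on the version and on how the ANY subquery gets transformed) is to name the subquery's query block with QB_NAME and then reference its table from LEADING in the outer query:

```sql
SELECT /*+ LEADING(t1 ct@cust_sub) */
       t1.trip_id,
       t1.origin,
       t2.sighting_date
FROM trips t1
JOIN trip_details t2 ON t1.trip_id = t2.trip_id
WHERE 99 <> ANY (SELECT /*+ QB_NAME(cust_sub) */ customerid
                 FROM customer_trip ct
                 WHERE ct.trip_id = t1.trip_id);
```

The `ct@cust_sub` syntax refers to the customer_trip table after the subquery is unnested into the main query block; if the subquery is not unnested, the hint has nothing to order and is ignored.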

Table schema definition such that duplicate inserts go into another table

Posted: 03 Sep 2013 10:14 AM PDT

I have a table with a column SP varchar(10) NOT NULL. I want that column to always be unique, so I created a unique index on it. My table schema is as follows:

CREATE TABLE IF NOT EXISTS `tblspmaster` (
  `CSN` bigint(20) NOT NULL AUTO_INCREMENT,
  `SP` varchar(10) NOT NULL,
  `FileImportedDate` date NOT NULL,
  `AMZFileName` varchar(50) NOT NULL,
  `CasperBatch` varchar(50) NOT NULL,
  `BatchProcessedDate` date NOT NULL,
  `ExpiryDate` date NOT NULL,
  `Region` varchar(50) NOT NULL,
  `FCCity` varchar(50) NOT NULL,
  `VendorID` int(11) NOT NULL,
  `LocationID` int(11) NOT NULL,
  PRIMARY KEY (`CSN`),
  UNIQUE KEY `SP` (`SP`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=10000000000 ;

Now I want that if anybody tries to insert a duplicate record, that record should be inserted into a secondary table named tblDuplicate.

I have gone through this question http://stackoverflow.com/questions/812437/mysql-ignore-insert-error-duplicate-entry but I am not sure whether, instead of

INSERT INTO tbl VALUES (1,200) ON DUPLICATE KEY UPDATE value=200;  

I can insert the duplicate row into another table.

What changes need to be made to the main table schema or the index column?

Note: Data will be inserted by importing Excel or CSV files.
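Since the data arrives via file imports anyway, one robust pattern (a sketch; tblstaging is a hypothetical staging table with the same columns minus CSN, and tblDuplicate is assumed to have a matching layout) is to load into staging first, copy the duplicates aside, then insert the remainder:

```sql
-- 1. Bulk-load the CSV into tblstaging (no unique key on SP there).

-- 2. Divert rows whose SP already exists in the master table:
INSERT INTO tblDuplicate (SP, FileImportedDate, AMZFileName, CasperBatch,
    BatchProcessedDate, ExpiryDate, Region, FCCity, VendorID, LocationID)
SELECT s.SP, s.FileImportedDate, s.AMZFileName, s.CasperBatch,
       s.BatchProcessedDate, s.ExpiryDate, s.Region, s.FCCity, s.VendorID, s.LocationID
FROM tblstaging s
JOIN tblspmaster m ON m.SP = s.SP;

-- 3. Insert the rest; INSERT IGNORE also skips duplicates within the file itself:
INSERT IGNORE INTO tblspmaster (SP, FileImportedDate, AMZFileName, CasperBatch,
    BatchProcessedDate, ExpiryDate, Region, FCCity, VendorID, LocationID)
SELECT s.SP, s.FileImportedDate, s.AMZFileName, s.CasperBatch,
       s.BatchProcessedDate, s.ExpiryDate, s.Region, s.FCCity, s.VendorID, s.LocationID
FROM tblstaging s;
```

This avoids trigger-based diversion, which is awkward in MySQL because aborting the offending insert from a trigger also rolls back anything the trigger wrote.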

Create a secondary read-only copy of DB in a different server

Posted: 03 Sep 2013 02:08 PM PDT

We have users that query our prod database (mostly ODBC through MS Access) to create some custom reports. We want to copy our data to a second server (replication / log shipping / ???) so the load on the prod DB is lower. Many of the tables in the prod DB do not have primary keys, and the server is SQL Server 2005. Our target is SQL Server 2012 Standard (though we can downgrade the destination server if we have to).

How could we accomplish that? We've tried replication, which failed because of the lack of primary keys. We also tried log shipping, but the secondary DB cannot be brought online because of the difference in SQL Server versions.

Thanks, Jim

Combining multiple left join for one single line

Posted: 03 Sep 2013 11:58 AM PDT

We have a query as below.

SELECT tblSData.header, tblEAlert.eMessage
FROM tblSData
LEFT JOIN tblEAlert ON tblSData.sDataID = tblEAlert.sDataID
WHERE tblSData.aID = ".$aID."
ORDER BY tblSData.dateTimer ASC

Structure for the tables are as below:

CREATE TABLE IF NOT EXISTS `tblSData` (
  `sDataID` int(11) NOT NULL AUTO_INCREMENT,
  `header` varchar(255) NOT NULL,
  `aID` int(5) NOT NULL,
  `gFInID` int(5) NOT NULL,
  `gFOutID` int(5) NOT NULL,
  `gFAInID` int(5) NOT NULL,
  `gFAOutID` int(5) NOT NULL,
  `eAlertID` int(5) NOT NULL,
  `dateTimer` datetime NOT NULL,
  PRIMARY KEY (`sDataID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;

CREATE TABLE IF NOT EXISTS `tblEAlert` (
  `eAlertID` int(11) NOT NULL AUTO_INCREMENT,
  `sDataID` int(5) NOT NULL,
  `eID` int(5) NOT NULL,
  `aID` int(5) NOT NULL,
  `eDateTime` datetime NOT NULL,
  `eInsertDateTime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `eMessage` varchar(255) NOT NULL,
  PRIMARY KEY (`eAlertID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;

Sample data for tblSData:

1,"A1",1122,100,102,1,2,1,2013-07-13 15:30:19
2,"A3",1122,104,103,3,4,3,2013-07-13 15:45:19
3,"A4",1122,105,108,5,6,7,2013-07-13 15:55:19

Sample data for tblEAlert:

1,1,1,1122,2013-07-13 15:30:19,2013-07-13 15:30:19,"Alert 1"
2,1,2,1122,2013-07-13 15:30:19,2013-07-13 15:30:19,"Alert 2"
3,2,2,1122,2013-07-13 15:45:19,2013-07-13 15:45:19,"Alert 3"
4,2,3,1122,2013-07-13 15:45:19,2013-07-13 15:45:19,"Alert 4"
5,2,4,1122,2013-07-13 15:45:19,2013-07-13 15:45:19,"Alert 5"
6,2,5,1122,2013-07-13 15:45:19,2013-07-13 15:45:19,"Alert 6"

The sample output is:

"A1","Alert 1"
"A1","Alert 2"
"A3","Alert 3"
"A3","Alert 4"
"A3","Alert 5"
"A3","Alert 6"

The problem now is how to show all the alert messages for a given header (e.g. A1) on one line, and likewise for the other headers. At the moment each alert message appears on a separate line.
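One common approach (a sketch, untested against this schema) is MySQL's GROUP_CONCAT aggregate, which collapses the joined alert messages into a single comma-separated value per tblSData row:

```sql
SELECT s.header,
       GROUP_CONCAT(e.eMessage ORDER BY e.eAlertID SEPARATOR ', ') AS eMessages
FROM tblSData s
LEFT JOIN tblEAlert e ON s.sDataID = e.sDataID
WHERE s.aID = 1122           -- the $aID value from the question's sample data
GROUP BY s.sDataID
ORDER BY s.dateTimer ASC;
```

Note that GROUP_CONCAT truncates its result at `group_concat_max_len` (1024 bytes by default), so that variable may need raising if many alerts attach to one row.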

Oracle, Create table with inline constraint declarations using a different tablespace

Posted: 03 Sep 2013 10:59 AM PDT

I have two questions about inline constraints declarations for Oracle tables:

  1. Is it a bad practice? If so, why?

  2. How could one declare a different tablespace for the primary key and its index, like it's done when using the out-of-line declaration? Something like

      create table THIS_TABLE (
          id number,
          constraint THIS_TABLE_PK primary key (id)
              using index tablespace INDEX_TABLESPACE
      ) tablespace DATA_TABLESPACE;

How to know what version of mysql client is compatible with what version of the server?

Posted: 03 Sep 2013 08:46 AM PDT

Are there backward- or forward-compatibility tables from the client and/or server point of view?

For a given version of the server, how to know which clients are compatible, and conversely?

db2 equivalent for oracle drop user cascade

Posted: 03 Sep 2013 07:44 AM PDT

This question is related to creating and initialising test and development databases for an application.

On Oracle, we have

  • an OS user (on Linux) called oracle,
  • an admin Oracle user called sys or system,
  • a fixed, predefined Oracle user that is the application object owner; let's call it appowner,
  • an Oracle user that is the application user, i.e. one with limited privileges on the objects owned by appowner; let's call it appuser.

Whenever there is a need to initialise the database, i.e. to start from scratch, we first connect as the sys or system Oracle user and then issue this command:

DROP USER appowner CASCADE;  

and then we go about recreating the user and the objects it owns from scratch. We also grant some privileges to appuser on the objects created by appowner.

The application always logs in as appuser rather than appowner unless there are administrative tasks to be performed on the database.

Now we are porting this application to db2, and this is where we are flummoxed.

For starters, DB2 creates these OS users, which are also DB2 users:

dasusr1
db2inst1
db2fenc1

How do these three users map to sys/system, appowner and appuser?

I believe dasusr1 is the rough equivalent of sys/system, db2inst1 is the rough equivalent of appowner, and db2fenc1 is the rough equivalent of appuser

(please correct me if I'm wrong, and I fully appreciate that the mapping will not be exact)

That being the case, if I have to remove all objects owned by db2inst1, do I login as dasusr1 and drop the user db2inst1?

There isn't really a one to one mapping between the oracle users and db2 users, because

  • db2inst1 can create multiple databases, whereas appuser is mapped to one database in Oracle
  • within a DB2 database there can be multiple schemas, whereas in Oracle one user maps to one schema

So it's a little confusing. It would help if someone with experience porting applications from Oracle to DB2 could shed some light on the equivalence between DB2 and Oracle users, which would finally lead to the DB2 equivalent of Oracle's DROP USER CASCADE.
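For what it's worth, the closest DB2 analogue to dropping all of a user's objects is usually dropping the corresponding schema. A hedged sketch using the SYSPROC.ADMIN_DROP_SCHEMA procedure (the schema and error-table names below are hypothetical):

```sql
-- Drop every object in schema APPOWNER, then the schema itself.
-- Any errors are recorded in a newly created table ERRS.ERRTAB;
-- the last two arguments are INOUT parameters naming that table.
CALL SYSPROC.ADMIN_DROP_SCHEMA('APPOWNER', NULL, 'ERRS', 'ERRTAB');
```

Unlike Oracle's DROP USER ... CASCADE, this removes a schema rather than a user, which fits DB2's model where authentication is delegated to the OS and a user is not synonymous with a schema.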

Install SP 3 on SQL Server 2008, where merge replication is running

Posted: 03 Sep 2013 08:02 AM PDT

I have a SQL Server 2008 instance and I need to install Service Pack 3 on it.

On my server several databases are replicated, and mobile devices sync data via merge replication.

Can problems occur if I install SP3, especially with the mobile devices (where SQL CE 3.5 is running)?

I can't update all of the mobile devices.

Get missed transactions from slave

Posted: 03 Sep 2013 11:24 AM PDT

I'm trying to write a disaster recovery plan. I have one-way replication with 1 master (A) and 3 slaves (B, C, D):

        A
       /|\
      / | \
     B  C  D

Let's say my master crashes and I lose all data on that server. I still have 3 slaves, and I'll promote the most up-to-date one to be my new master. Say slave B is up to date and C and D are not; at the point I promote B to master, my slaves show the following for SHOW SLAVE STATUS\G:

slaveB> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
      Relay_Master_Log_File: master-bin.000002
        Exec_Master_Log_Pos: 1476884

slaveC> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
      Relay_Master_Log_File: master-bin.000002
        Exec_Master_Log_Pos: 78684

slaveD> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
      Relay_Master_Log_File: master-bin.000001
        Exec_Master_Log_Pos: 746848

In order to promote B to master, I want to apply all missing transactions on C and D, so that before B becomes the new master and starts receiving queries from applications, all slaves are in a consistent state. How can I find the transactions that C and D missed in the binary logs from B? (I have the --log-slave-updates option enabled on all servers.)

For legacy reasons, the environment uses MySQL 5.0.
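On pre-GTID MySQL such as 5.0, the usual outline is to locate, in B's own binary logs, the event corresponding to where C (or D) stopped relative to the old master, then repoint the lagging slave at B from that position. A rough sketch; the host, file name, and position below are hypothetical, and the matching step is manual inspection:

```sql
-- On B (the candidate master): see its current coordinates and walk its
-- binary log to find the event matching C's last-executed master position.
SHOW MASTER STATUS;
SHOW BINLOG EVENTS IN 'slaveB-bin.000005' FROM 4 LIMIT 100;

-- On C (and likewise D), once the matching event in B's binlog is found:
STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST     = 'slaveB.example.com',  -- hypothetical host name
  MASTER_LOG_FILE = 'slaveB-bin.000005',   -- hypothetical file
  MASTER_LOG_POS  = 746848;                -- hypothetical position
START SLAVE;
```

The awkward part is that, even with --log-slave-updates, B's binlog coordinates do not equal the old master's, so the position has to be found by matching event contents rather than by copying Exec_Master_Log_Pos across.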

Moving from SQL 2005 [SQL_Latin1_General_CP1_CI_AS] to 2008 - will I lose any features by using 'backwards compatibility'

Posted: 03 Sep 2013 11:00 AM PDT

We're moving from SQL Server 2005 (instance and DB use the collation SQL_Latin1_General_CP1_CI_AS) to SQL Server 2008 (which defaults to Latin1_General_CI_AS).

I completed a SQL 2008 R2 installation using the default Latin1_General_CI_AS collation, with the restored database still on SQL_Latin1_General_CP1_CI_AS. The expected problems occurred: the #temp tables were in Latin1_General_CI_AS while the db was in SQL_Latin1_General_CP1_CI_AS. This is where I am now, so I need advice on the pitfalls, please.

On installation of SQL 2008 R2, I have the option to use 'SQL Collation, used for backwards compatibility', where I can select the same collation as the 2005 database: SQL_Latin1_General_CP1_CI_AS.

  1. This would let me avoid problems with #temp tables, but are there pitfalls?

  2. Would I lose any functionality or features of any kind by not using a "current" collation of SQL 2008?

  3. What about when we move (e.g. in 2 years ) from 2008 to SQL 2012? Will I have problems then?
  4. Would I at some point be forced to go to Latin1_General_CI_AS?

  5. I read that some DBAs script out the rows of entire databases and then run the insert scripts against a database with the new collation. I'm very scared and wary of this; would you recommend doing it?
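For the #temp-table comparisons specifically, one widely used workaround if the collations stay mismatched (a sketch; the table and column names are made up) is to force a collation at the point of comparison:

```sql
-- Hypothetical example: a #temp column inherits tempdb's collation, so a
-- join against a user-database column raises a collation-conflict error
-- unless one side is coerced to the database's collation.
SELECT t.Name
FROM #t AS t
JOIN dbo.Customers AS c
  ON t.Name = c.Name COLLATE DATABASE_DEFAULT;
```

Alternatively, declaring the #temp column with `COLLATE DATABASE_DEFAULT` at creation time avoids sprinkling the clause through every join.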

Calculating the median value in a MySQL table with CodeIgniter

Posted: 03 Sep 2013 05:18 PM PDT

This is my model file in CodeIgniter. I need to calculate the median of this variable: budget_breakdown.breakdown_grants. How can I do this calculation?

function reports_details1() {
    $this->db->select('budget_breakdown.breakdown_grants');
    //survey_respondent_info.state,survey_respondent_info.survey_id,budgets.budget_id,
    $this->db->from('survey_respondent_info');
    $this->db->join('budgets',
                    'budgets.survey_id=survey_respondent_info.survey_id', 'left');
    $this->db->join('budget_breakdown',
                    'budgets.budget_id=budget_breakdown.budget_id', 'left');
    $this->db->where('budgets.budget_option_id', 2);
    $query1 = $this->db->get();
    $result = $query1->result();
    return $result;
}
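MySQL has no built-in MEDIAN() aggregate, so one option is to compute it in SQL with user variables and run that through `$this->db->query()`. A sketch, assuming the table and column names from the model above; note it relies on MySQL's (common but not formally guaranteed) user-variable evaluation order:

```sql
SELECT AVG(t.g) AS median
FROM (
  SELECT @rownum := @rownum + 1 AS rn,       -- rank each value in sorted order
         bb.breakdown_grants AS g
  FROM budget_breakdown bb
  JOIN budgets b ON b.budget_id = bb.budget_id
  JOIN (SELECT @rownum := 0) init            -- initialise the counter inline
  WHERE b.budget_option_id = 2
  ORDER BY bb.breakdown_grants
) t
-- the middle row (odd count) or the two middle rows (even count):
WHERE t.rn IN (FLOOR((@rownum + 1) / 2), FLOOR((@rownum + 2) / 2));
```

The simpler alternative is to fetch the values with the existing model method, sort the array in PHP, and average the middle element(s) there.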

This is just to demonstrate a display bug

Posted: 03 Sep 2013 01:17 PM PDT

...and now has been updated to demonstrate a comment thingy on SO meta.

Unless I asked you to, please do not act on this question, even if it appears in the suggested aardvark edit review queue or the close votes review queue.

Pgpool, Postgresql and Apache tuning (1000 concurrent spatial queries)

Posted: 03 Sep 2013 09:18 AM PDT

I'm trying to configure a load-balancing system. I have a Python script, invoked through mod_wsgi on Apache, that generates a query and executes it on pgpool: request -> wsgi python -> pgpool -> postgresql. Pgpool is configured as a load balancer using 4 servers, each with 24 GB RAM and a 350 GB SSD. Our DB is about 150 GB and a query takes about 2 seconds. These are the configurations:

Pgpool

  • num_init_children 500
  • max_pool 2
  • child_life_time 300 seconds

Apache (mpm_prefork)

  • StartServers 100
  • MinSpareServers 20
  • MaxSpareServers 45
  • ServerLimit 2000
  • MaxClients 100
  • MaxRequestsPerChild 1000

PostgreSQL

  • max_connections = 1000
  • shared_buffers = 6GB
  • work_mem = 4GB

It doesn't seem to work. When I try to submit more than 150 concurrent queries, I get this error from the Python script, although the pgpool log file doesn't show any errors:

OperationalError: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.

Any ideas?

What are database statistics, and how can I benefit from them?

Posted: 03 Sep 2013 01:31 PM PDT

I've heard mention of statistics that SQL Server keeps by default. What are they tracking, and how can I use this information to improve my database?
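As a starting point for exploring them (the object names below are placeholders), SQL Server exposes statistics metadata through sys.stats, DBCC SHOW_STATISTICS, and UPDATE STATISTICS:

```sql
-- List the statistics objects on a table and when each was last updated
SELECT s.name,
       s.auto_created,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.MyTable');   -- placeholder table name

-- Show the histogram and density information behind one statistic
DBCC SHOW_STATISTICS ('dbo.MyTable', 'IX_MyTable_SomeColumn');

-- Rebuild statistics by hand (auto-update normally handles this)
UPDATE STATISTICS dbo.MyTable;
```

The query optimizer uses these histograms and density vectors to estimate row counts, so stale statistics often show up as bad cardinality estimates and poor plans.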
