
Tuesday, September 3, 2013

[how to] How to view table data in the newest 6.0 MySQL Workbench



How to view table data in the newest 6.0 MySQL Workbench

Posted: 03 Sep 2013 08:47 PM PDT

I can't find a way to view table data in the new Workbench version without writing queries. I recall it was clearly possible in the previous version via a context menu option, something like "View Table Data".

Now in 6.0 I can't see this option anymore. I do see "Search Table Data", which opens a window where I have to input some mysterious query whose pattern I can't figure out (see the second screenshot - the Start Search button is disabled).

(screenshots omitted)

It is not a big problem for me to fire queries, but it is still quite annoying, especially if you don't work with SQL on a daily basis (using an ORM instead) and just want to take a quick look at a table.

Oracle update - 1, :-)

My apologies to all db evangelists.

coercibility and collations

Posted: 03 Sep 2013 08:47 PM PDT

I have a similar situation as described here - http://forums.mysql.com/read.php?103,63845,63845; basically

If I run the query in this SP directly, it executes fine:

However, if I execute the stored procedure, I get an "Illegal mix of collations" error:

It is, however, not clear how to get rid of it.

From http://dev.mysql.com/doc/refman/5.6/en/charset-collation-expressions.html:

If both sides have the same coercibility, then:

If both sides are Unicode, or both sides are not Unicode, it is an error.  

which seems to be my particular case. My error message:

ERROR 1267 (HY000): Illegal mix of collations (utf8_unicode_ci,IMPLICIT) and (utf8_general_ci,IMPLICIT) for operation '='  

Oh, and it used to work fine, but I toyed with the db and can't remember what I did; this is the outcome now :S

Version: 5.1.67-0ubuntu0.11.10.1-log. The issue is observed from both the mysql client and another custom Java client.
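For illustration, the usual workaround (a sketch only; the table and column names are hypothetical) is to force one side of the comparison to a known collation with an explicit COLLATE clause, or to change the column's collation outright:

SELECT *
FROM t1
JOIN t2 ON t1.name = t2.name COLLATE utf8_unicode_ci;

-- or permanently align the column itself:
ALTER TABLE t2 MODIFY name VARCHAR(100)
    CHARACTER SET utf8 COLLATE utf8_unicode_ci;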

Thanks

What is a "leaf expression"?

Posted: 03 Sep 2013 07:29 PM PDT

What is a "leaf expression"?

Partition-clause expressions are limited in the following ways:

  • Your partition-clause must calculate a single non-null value for each row. You can reference multiple columns, but each row must return a single value.
  • You can specify leaf expressions, functions, and operators in the partition clause expression.
  • All leaf expressions in the partition clause must be either constants or columns of the table.

Source https://my.vertica.com/docs/6.1.x/HTML/index.htm#18125.htm

Best way to migrate a database involved in bidirectional replication

Posted: 03 Sep 2013 08:12 PM PDT

I have two SQL2005 servers that are to be migrated to SQL2008R2 servers. The original pair of servers contains a database that is involved in bidirectional transactional replication between them. In other words, SQL2005ServerA.DatabaseA and SQL2005ServerB.DatabaseA are involved in bidirectional transactional replication.

Now I have to move these databases to two new servers, call them SQL2008ServerA and SQL2008ServerB.

Is this pretty straight-forward? After disconnecting the application server, do I just backup and restore to the new servers, and then recreate the replication without snapshots?

Can I design this database better? [on hold]

Posted: 03 Sep 2013 07:35 PM PDT

I have the following database design:

(schema diagram omitted; the script below reproduces it)

Script to reproduce:

/*
DROP TABLE [dbo].[feedback]
DROP TABLE [dbo].[action]
DROP TABLE [dbo].[download]
DROP TABLE [dbo].[user]
DROP TABLE [dbo].[company]
DROP TABLE [dbo].[resource]
DROP TABLE [dbo].[task]
DROP TABLE [dbo].[subcategory]
DROP TABLE [dbo].[category]
*/

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO

CREATE TABLE [dbo].[user] (
    [UserId] [int] IDENTITY(1, 1) NOT NULL,
    [CompanyID] [int] NOT NULL,
    [DepartmentID] [int] NOT NULL,
    [EmailAddress] [varchar](150) NOT NULL,
    [Title] [varchar](50) NULL,
    [FirstName] [varchar](100) NOT NULL,
    [MiddleName] [varchar](100) NULL,
    [LastName] [varchar](100) NULL,
    [DateOfBirth] [date] NULL,
    [ResidentialAddress] [varchar](500) NULL,
    [ResidentialSuburb] [varchar](150) NULL,
    [ResidentialPostCode] [varchar](20) NULL,
    [ResidentialState] [varchar](150) NULL,
    [ResidentialCountry] [varchar](100) NULL,
    [PostalAddress] [varchar](500) NULL,
    [PostalSuburb] [varchar](150) NULL,
    [PostalPostCode] [varchar](20) NULL,
    [PostalState] [varchar](150) NULL,
    [PostalCountry] [varchar](100) NULL,
    [Phone] [varchar](50) NULL,
    [Mobile] [varchar](50) NULL,
    [BrowserDetails] [varchar](500) NULL,
    [IsActive] [bit] NOT NULL,
    [LastLogin] [datetime] NULL,
    [Created] [datetime] NOT NULL,
    [CreatedBy] [varchar](50) NOT NULL,
    [LastModified] [datetime] NULL,
    [LastModifiedBy] [varchar](50) NULL,
    CONSTRAINT [PK_users] PRIMARY KEY CLUSTERED ([UserId] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[company] (
    [CompanyID] [int] IDENTITY(1, 1) NOT NULL,
    [CompanyName] [varchar](150) NULL,
    [DepartmentID] [int] NOT NULL,
    [DepartmentName] [varchar](150) NULL,
    [ResidentialAddress] [varchar](500) NULL,
    [ResidentialSuburb] [varchar](150) NULL,
    [ResidentialPostCode] [varchar](20) NULL,
    [ResidentialState] [varchar](150) NULL,
    [ResidentialCountry] [varchar](100) NULL,
    [PostalAddress] [varchar](500) NULL,
    [PostalSuburb] [varchar](150) NULL,
    [PostalPostCode] [varchar](20) NULL,
    [PostalState] [varchar](150) NULL,
    [PostalCountry] [varchar](100) NULL,
    [Phone] [varchar](50) NULL,
    [IsActive] [bit] NOT NULL,
    [Created] [datetime] NOT NULL,
    [CreatedBy] [varchar](50) NOT NULL,
    [LastModified] [datetime] NULL,
    [LastModifiedBy] [varchar](50) NULL,
    CONSTRAINT [PK_department] PRIMARY KEY CLUSTERED ([DepartmentID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [company]
ADD CONSTRAINT CompanyID_DepartmentID_UQ UNIQUE (CompanyID, DepartmentID);
GO

ALTER TABLE [user]
ADD CONSTRAINT Company_User_FK FOREIGN KEY (CompanyID, DepartmentID)
REFERENCES company (CompanyID, DepartmentID);
GO

CREATE TABLE [dbo].[resource] (
    [ResourceID] [int] IDENTITY(1, 1) NOT NULL,
    [ResourceName] [varchar](1000) NOT NULL,
    [ResourceUrl] [varchar](1000) NOT NULL,
    [IsActive] [bit] NOT NULL,
    [Created] [datetime] NOT NULL,
    [CreatedBy] [varchar](50) NOT NULL,
    [LastModified] [datetime] NULL,
    [LastModifiedBy] [varchar](50) NULL,
    CONSTRAINT [PK_resource] PRIMARY KEY CLUSTERED ([ResourceID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[download] (
    [DownloadID] [int] IDENTITY(1, 1) NOT NULL,
    [ResourceID] [int] NOT NULL,
    [UserID] [int] NOT NULL,
    [DownloadDate] [datetime] NOT NULL,
    [ActionID] [int] NULL,
    [Created] [datetime] NOT NULL,
    [CreatedBy] [varchar](50) NOT NULL,
    [LastModified] [datetime] NULL,
    [LastModifiedBy] [varchar](50) NULL,
    CONSTRAINT [PK_download] PRIMARY KEY CLUSTERED ([DownloadID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[download] WITH CHECK
ADD CONSTRAINT [FK_download_resource] FOREIGN KEY([ResourceID])
REFERENCES [dbo].[resource] ([ResourceID])
GO

ALTER TABLE [dbo].[download] WITH CHECK
ADD CONSTRAINT [FK_download_userid] FOREIGN KEY([UserID])
REFERENCES [dbo].[user] ([UserID])
GO

CREATE TABLE [dbo].[category] (
    [CategoryID] [int] IDENTITY(1, 1) NOT NULL,
    [CategoryTitle] [varchar](200) NOT NULL,
    [WordpressPostID] [varchar](50) NULL,
    [IsActive] [bit] NOT NULL,
    [Created] [datetime] NOT NULL,
    [CreatedBy] [varchar](50) NOT NULL,
    [LastModified] [datetime] NULL,
    [LastModifiedBy] [varchar](50) NULL,
    CONSTRAINT [PK_category] PRIMARY KEY CLUSTERED ([CategoryID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[subcategory] (
    [SubCategoryID] [int] IDENTITY(1, 1) NOT NULL,
    [SubCategoryTitle] [varchar](200) NOT NULL,
    [CategoryID] [int] NOT NULL,
    [WordpressPostID] [varchar](50) NULL,
    [IsActive] [bit] NOT NULL,
    [Created] [datetime] NOT NULL,
    [CreatedBy] [varchar](50) NOT NULL,
    [LastModified] [datetime] NULL,
    [LastModifiedBy] [varchar](50) NULL,
    CONSTRAINT [PK_subcategory] PRIMARY KEY CLUSTERED ([SubCategoryID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[subcategory] WITH CHECK
ADD CONSTRAINT [FK_subcategory_categoryid] FOREIGN KEY([CategoryID])
REFERENCES [dbo].[category] ([CategoryID])
GO

CREATE TABLE [dbo].[task] (
    [TaskID] [int] IDENTITY(1, 1) NOT NULL,
    [TaskTitle] [varchar](200) NOT NULL,
    [SubCategoryID] [int] NOT NULL,
    [IsActive] [bit] NOT NULL,
    [Created] [datetime] NOT NULL,
    [CreatedBy] [varchar](50) NOT NULL,
    [LastModified] [datetime] NULL,
    [LastModifiedBy] [varchar](50) NULL,
    CONSTRAINT [PK_task] PRIMARY KEY CLUSTERED ([TaskID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[task] WITH CHECK
ADD CONSTRAINT [FK_task_SubCategoryID] FOREIGN KEY([SubCategoryID])
REFERENCES [dbo].[subcategory] ([SubCategoryID])
GO

CREATE TABLE [dbo].[action] (
    [ActionID] [int] IDENTITY(1, 1) NOT NULL,
    [ActionTitle] [varchar](250) NULL,
    [TaskID] [int] NOT NULL,
    [UserID] [int] NOT NULL,
    [StartDate] [date] NULL,
    [EndDate] [date] NULL,
    [OnGoing] [bit] NULL,
    [Completed] [bit] NULL,
    [CompletedDate] [datetime] NULL,
    [IsActive] [bit] NOT NULL,
    [Created] [datetime] NOT NULL,
    [CreatedBy] [varchar](50) NOT NULL,
    [LastModified] [datetime] NULL,
    [LastModifiedBy] [varchar](50) NULL,
    CONSTRAINT [PK_action] PRIMARY KEY CLUSTERED ([ActionID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[action] WITH CHECK
ADD CONSTRAINT [FK_action_TaskID] FOREIGN KEY([TaskID])
REFERENCES [dbo].[task] ([TaskID])
GO

ALTER TABLE [dbo].[action] WITH CHECK
ADD CONSTRAINT [FK_action_UserID] FOREIGN KEY([UserID])
REFERENCES [dbo].[user] ([UserID])
GO

CREATE TABLE [dbo].[feedback] (
    [FeedbackID] [int] NOT NULL,
    [FeedbackDate] [datetime] NOT NULL,
    [ActionID] [int] NULL,
    [CategoryID] [int] NOT NULL,
    [UserID] [int] NOT NULL,
    [Feedback] [varchar](MAX) NULL,
    CONSTRAINT [PK_feedback] PRIMARY KEY CLUSTERED ([FeedbackID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

ALTER TABLE [dbo].[feedback] WITH CHECK
ADD CONSTRAINT [FK_feedback_UserID] FOREIGN KEY([UserID])
REFERENCES [dbo].[user] ([UserID])
GO

ALTER TABLE [dbo].[feedback] WITH CHECK
ADD CONSTRAINT [FK_feedback_CategoryID] FOREIGN KEY([CategoryID])
REFERENCES [dbo].[category] ([CategoryID])
GO

SET ANSI_PADDING OFF
GO

Database intent: a user will complete quizzes (actions) in various sub-categories and have the ability to provide feedback on each action (the resources the user downloads as they proceed are also tracked).

Usage information: it's expected that there will be an average of 43 actions per subcategory and five subcategories in total (to begin with). Our user base is ~30,000 users, of which we expect only 5,000 or so to actually use this system.

Is there a better approach to this design? What holes do I have in my design that I should look to resolve?

Return all possible combinations for each group

Posted: 03 Sep 2013 03:17 PM PDT

I have a table, shown below, that lists each student ID and all the classes that student took:

+----+-------+
| ID | Class |
+----+-------+
| 01 |   A   |
| 01 |   B   |
| 01 |   C   |
| 02 |   A   |
| 02 |   C   |
| 02 |   D   |
| 03 |   A   |
| 03 |   B   |
+----+-------+

And I would like to convert this table to the one below in order to see all possible combinations of classes for each student.

+----+---------+---------+
| ID | Class 1 | Class 2 |
+----+---------+---------+
| 01 | A       | B       |
| 01 | A       | C       |
| 01 | B       | C       |
| 02 | A       | C       |
| 02 | A       | D       |
| 02 | C       | D       |
| 03 | A       | B       |
+----+---------+---------+

Does anyone know how to get this?
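For what it's worth, a minimal sketch of one approach (assuming MySQL and a hypothetical table name student_classes): a self-join that pairs each class with every lexically later class for the same student.

SELECT a.ID,
       a.Class AS `Class 1`,
       b.Class AS `Class 2`
FROM   student_classes a
JOIN   student_classes b
  ON   b.ID = a.ID
 AND   b.Class > a.Class        -- avoids self-pairs and mirrored duplicates
ORDER BY a.ID, a.Class, b.Class;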

Conditional Counting

Posted: 03 Sep 2013 03:11 PM PDT

I'm analyzing a database of login records for a university, and I'm trying to write a query that will give me an idea of how many users are using the labs for short bursts versus a more extended stay.

I have the query below, which tells me how many times each user has used a lab for 10 minutes or less. What I want is a result that shows both this and their total number of logins.

select username, count(*) as "<= 10 min"
from logins
where duration <= "600"
group by username
order by username asc
limit 10;

Example desired output:

+----------+----------+-------+
| username | <=10 Min | total |
+----------+----------+-------+
| user1    |        4 |     7 |
| user2    |       11 |    22 |
| user3    |        1 |     3 |
| user4    |        4 |     8 |
+----------+----------+-------+
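One common shape for this (a sketch, assuming MySQL, where a boolean expression evaluates to 0/1 inside SUM):

SELECT username,
       SUM(duration <= 600) AS `<= 10 min`,   -- counts only the short sessions
       COUNT(*)             AS total
FROM   logins
GROUP  BY username
ORDER  BY username ASC
LIMIT  10;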

SSMS "New Query" Behavior

Posted: 03 Sep 2013 01:53 PM PDT

On my desktop, and on most other instances of SQL Server 2008 R2 and 2012, the default behavior when you right-click a database in SSMS's Object Explorer and choose "New Query" is that a new query window opens with its database context set to the database you clicked. Clicking the New Query button from that query window opens another query window against the same database. However, on some other machines, including my boss's, the new query opens with the context set to the default database (master by default), and clicking New Query likewise opens a query in the default database. What is the option to make his instance act like mine?

If I connect to his database from my ssms, it still behaves like it does on his machine (incorrectly).

Change collation of one database! [duplicate]

Posted: 03 Sep 2013 01:27 PM PDT

This question already has an answer here:

I'm new to this forum and I saw Peter's problem. I have the same problem: I want to change the collation of one database in SQL Server 2008 from SQL_Latin1_General_CP1_CI_AS to Latin1_General_CI_AS.

Kin, do you have some scripts that can help me do it easily, or do I have to configure something else?
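For reference, the basic statement is a one-liner, though it only changes the database default; existing columns keep their old collation and must be altered individually (database name below is hypothetical):

-- requires exclusive access to the database
ALTER DATABASE MyDatabase COLLATE Latin1_General_CI_AS;
-- existing char/varchar columns retain SQL_Latin1_General_CP1_CI_AS
-- until each is changed with ALTER TABLE ... ALTER COLUMN ... COLLATE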

Thank you!

Simple update query balloons tempdb

Posted: 03 Sep 2013 02:20 PM PDT

I am trying to do a bulk update of a column from a csv file, but when I run the actual update operation my tempdb balloons to over 20 GB, exceeding the entire size of the SSD that tempdb runs on.

I load the csv file using bulk copy into a temporary table whose schema matches the destination table, and then run this query:

UPDATE [Med_Rec_Entries]
SET    [Comments] = [Tmp_Med_Rec_Entries].[Comments]
FROM   [Tmp_Med_Rec_Entries]
WHERE  [Tmp_Med_Rec_Entries].[ID] = [Med_Rec_Entries].[ID]

And that is where tempdb blows up in size. Both the source and destination tables have 1,770,373 rows (the update is a 1-to-1 relationship on ID).

I tried running the Database Tuning Advisor and it had me create a new index, but that did not help. The database I am inserting into is just the middle step in a data migration; no queries other than csv imports like this one and a bulk export at the end will run against it, so it has no indexes or keys of its own.

Here is the create table statement for Med_Rec_Entries

CREATE TABLE [dbo].[Med_Rec_Entries](
    [ID] [nvarchar](255) NULL,
    [Patient_ID] [nvarchar](80) NULL,
    [Med_Rec_Code] [nvarchar](20) NULL,
    [Tx_Date_Performed] [nvarchar](254) NULL,
    [Comments] [nvarchar](max) NULL,
    [Staff_Code] [nvarchar](21) NULL,
    [Quantity] [real] NULL,
    [Hide_This_Entry] [bit] NULL,
    [Exclude_From_Printed_History] [bit] NULL,
    [Image_ID] [int] NULL,
    [Remote_Status] [nvarchar](2) NULL,
    [Special_Flag] [nvarchar](20) NULL,
    [Tx_Time_Performed] [nvarchar](254) NULL,
    [Tx_Date_Recorded] [nvarchar](254) NULL,
    [Appended_Comments] [nvarchar](max) NULL,
    [Tx_Time_Recorded] [nvarchar](254) NULL,
    [Entry_Is_Locked] [bit] NULL,
    [Create_User] [nvarchar](21) NULL,
    [Audit_DTS] [nvarchar](20) NULL,
    [Audit_Actor] [nvarchar](21) NULL,
    [vQPixAreax_] [varbinary](max) NULL,
    [Route] [nvarchar](80) NULL,
    [Units] [nvarchar](20) NULL,
    [CFR_Reason_Code] [nvarchar](40) NULL,
    [rpl_Key] [nvarchar](14) NULL,
    [rpl_DateTime] [nvarchar](14) NULL,
    [Medical_Description] [nvarchar](max) NULL,
    [caseID] [int] NULL,
    [Sign_Off_Target] [nvarchar](21) NULL,
    [Sign_Off_Date] [nvarchar](254) NULL,
    [Location_Code] [nvarchar](50) NULL,
    [ACUP_ID] [nvarchar](15) NULL,
    [ExcludeFromWeb] [bit] NULL,
    [SecondarySort] [int] NULL,
    [AutosaveState] [nvarchar](20) NULL,
    [_CommentsAndFindingsText] [nvarchar](max) NULL,
    [NewFormat] [bit] NULL,
    [Temp] [nvarchar](80) NULL,
    [Colour] [nvarchar](20) NULL,
    [PrimaryKey] [nvarchar](200) NULL,
    [Sign_Off_Time] [nvarchar](254) NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

--From the database engine tuning advisor
CREATE NONCLUSTERED INDEX [_dta_index_Med_Rec_Entries_13_933578364__K1]
ON [dbo].[Med_Rec_Entries] ([ID] ASC)
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF) ON [PRIMARY]

Here is the create table statement for Tmp_Med_Rec_Entries

CREATE TABLE [dbo].[Tmp_Med_Rec_Entries](
    [ID] [nvarchar](255) NULL,
    [Comments] [nvarchar](max) NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

What can I do to stop tempdb from growing too large?
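One commonly suggested mitigation (a sketch, not a verified fix for this particular case) is to break the update into batches, so each transaction - and therefore each hash/sort spill into tempdb - stays small:

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (50000) m
    SET    m.Comments = t.Comments
    FROM   dbo.Med_Rec_Entries AS m
    JOIN   dbo.Tmp_Med_Rec_Entries AS t ON t.ID = m.ID
    WHERE  ISNULL(m.Comments, N'') <> ISNULL(t.Comments, N'');  -- skip rows already done
    SET @rows = @@ROWCOUNT;
END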

One or several tables for attributes

Posted: 03 Sep 2013 12:37 PM PDT

I just spent some time discussing two alternative DB designs with a coworker and I'm not convinced. Neither of us is strictly DBA so we may be missing something.

The overall goal is to create the ability to attach open-ended free-text attributes for each of three (possibly 4) entities.

Let's call the business entities Device, Location and Part; they have relationships between themselves.

Design A: Create DeviceAttribute, LocationAttribute and PartAttribute tables, each of which has an ID, Reference ID (FK to respective table), Name, Value and Type.

Design B: Create Attribute table with (ID, Name,Value and Type columns) and three reference tables - each holding a reference from one of the entity tables to one of the Attribute tables' IDs.

The concern is mostly about performance:
- will 3 separate xxxAttribute tables perform better when querying only for an Entity's data, such as "give me Device X and all its attributes", or do both designs perform the same? - will 1 Attribute table (Design B) perform better when querying for entities with given attribute name/value, such as "give me all entities that have Attribute.Name='GPS'", or is it equivalent to querying a view that combines the 3 tables of Design A? - in case of Design B: does update on an entity table (Location, Device, Part) cause a lock on querying in other entity tables?

The system may have tens of thousands of Parts, thousands of Devices, and hundreds of Locations, and may have to process on the order of tens to a few hundred queries per second.
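To make the two options concrete, a rough DDL sketch (engine-agnostic; all names hypothetical), showing Design A for one entity and Design B's shared table plus one reference table:

-- Design A: one attribute table per entity
CREATE TABLE DeviceAttribute (
    ID       int PRIMARY KEY,
    DeviceID int NOT NULL REFERENCES Device(ID),
    Name     varchar(100) NOT NULL,
    Value    varchar(1000),
    Type     varchar(20)
);

-- Design B: one shared attribute table, plus a reference table per entity
CREATE TABLE Attribute (
    ID    int PRIMARY KEY,
    Name  varchar(100) NOT NULL,
    Value varchar(1000),
    Type  varchar(20)
);

CREATE TABLE DeviceAttributeRef (
    DeviceID    int NOT NULL REFERENCES Device(ID),
    AttributeID int NOT NULL REFERENCES Attribute(ID),
    PRIMARY KEY (DeviceID, AttributeID)
);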

Running a UPDATE on 3 million records

Posted: 03 Sep 2013 11:49 AM PDT

There are two actors in this transaction. One is the table update_assets, a set of records with data that is up to date and new. The other is the table application_assets, which is used by an application and needs to be updated. The problem is that the update is taking far too long; it has now been running for 4 hours. The transaction runs on a VM with 3 AMD cores allocated to it and 8 GB of RAM. The application is not running in the background.

Tables:

update_assets: ~3.3 million records, 18 columns, 1 index on amsid

application_assets: ~7 million records, 5 columns, 1 index on id

Query:

UPDATE application_assets AS ams
SET    number = amu.num,
       letter = amu.letter
FROM   update_assets amu
WHERE  ams.id = amu.amsid
  AND  ams.company_id = '4';

Explain output:

Update on application_assets ams  (cost=219382.14..747965.42 rows=3303562 width=85)
  ->  Hash Join  (cost=219382.14..747965.42 rows=3303562 width=85)
        Hash Cond: ((ams.id)::text = (amu.amsid)::text)
        ->  Seq Scan on application_assets ams  (cost=0.00..244642.25 rows=7883425 width=63)
        ->  Hash  (cost=145825.62..145825.62 rows=3303562 width=55)
              ->  Seq Scan on update_assets amu  (cost=0.00..145825.62 rows=3303562 width=55)

If I remove all of the columns from the update_assets table except those pertinent to this transaction, will that speed it up? I noticed that the Hash Cond portion of the plan applies a :: casting operation to each of the columns. If the columns were already of the text data type, would the update run quicker? Does PostgreSQL always convert data to text for these kinds of joins? What else can I do to expedite this process?
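One thing that often helps bulk updates in PostgreSQL (a sketch, not a guaranteed fix here) is skipping rows whose values already match, so unchanged rows produce no dead tuples or WAL traffic:

UPDATE application_assets AS ams
SET    number = amu.num,
       letter = amu.letter
FROM   update_assets amu
WHERE  ams.id = amu.amsid
  AND  ams.company_id = '4'
  AND  (ams.number IS DISTINCT FROM amu.num
        OR ams.letter IS DISTINCT FROM amu.letter);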

Optimising a query of an SSRS dataset

Posted: 03 Sep 2013 11:47 AM PDT

Consider the query given below:

SELECT *
FROM ((SELECT c.cust_id,
              c.username,
              REPLACE(qs.seg_type_ref_key_02, 'N/A', 'Non Vip') casino_group,
              REPLACE(qs.seg_type_ref_key_03, 'N/A', 'Non Vip') bingo_group,
              REPLACE(qs.seg_type_ref_key_04, 'N/A', 'Non Vip') games_group,
              REPLACE(qs.seg_type_ref_key_12, 'N/A', 'Non Vip') poker_group,
              REPLACE(qs.seg_type_ref_key_01, 'N/A', 'Non ViP') sportsbook_group,
              c.country, c.contactable, c.email, c.dob, c.[status],
              c.first_name, c.last_name, c.[master], c.[language], c.gender
       FROM warehouse.dbo.dim_customer c (nolock)
       INNER JOIN warehouse.dbo.dim_segmentationcodehistory sc (nolock)
           ON sc.cust_id = c.cust_id
       INNER JOIN warehouse.dbo.q_dim_segments qs (nolock)
           ON sc.seg_code_ref_key = qs.seg_code_ref_key
       WHERE sc.active = 1
         AND qs.seg_type_ref_key_04 <> 'N/A'
         AND c.active = 1
         AND (qs.seg_type_ref_key_02 <> 'Prospect' AND qs.seg_type_ref_key_03 <> 'Prospect' AND
              qs.seg_type_ref_key_04 <> 'Prospect' AND qs.seg_type_ref_key_12 <> 'Prospect' AND
              qs.seg_type_ref_key_01 <> 'Prospect')) A
      LEFT JOIN
      (SELECT c.cust_id cust_dup,
              SUM(fc.turnover) AS Turnover_GBP,
              SUM(fc.grosswin) AS GrossWin_GBP,
              SUM(fc.chip_purch_amount_gbp) AS chip_purch_amount_gbp
       FROM warehouse.dbo.fact_games fc (nolock)
       INNER JOIN warehouse.dbo.dim_date d (nolock)
           ON d.date_key = fc.date_ref_key
       INNER JOIN warehouse.dbo.dim_customer c (nolock)
           ON c.cust_ref_key = fc.cust_ref_key
       INNER JOIN warehouse.dbo.dim_gamesgame gg (nolock)
           ON gg.games_game_ref_key = fc.game_ref_key
       WHERE d.[date] BETWEEN GETDATE() - 10 AND GETDATE() - 9
         AND gg.Game_Group_Description <> 'Bingo'
       GROUP BY c.cust_id) B
      ON A.cust_id = B.cust_dup)

This query takes a little more than an hour; I need it to complete in as little time as possible.

Below is the level to which I have been able to optimize it:

IF OBJECT_ID('tempdb..#temp_shash_A') IS NOT NULL
    DROP TABLE #temp_shash_A

IF OBJECT_ID('tempdb..#temp_shash_B') IS NOT NULL
    DROP TABLE #temp_shash_B

-- A
SELECT c.cust_id,
       c.username,
       REPLACE(qs.seg_type_ref_key_02, 'N/A', 'Non Vip') casino_group,
       REPLACE(qs.seg_type_ref_key_03, 'N/A', 'Non Vip') bingo_group,
       REPLACE(qs.seg_type_ref_key_04, 'N/A', 'Non Vip') games_group,
       REPLACE(qs.seg_type_ref_key_12, 'N/A', 'Non Vip') poker_group,
       REPLACE(qs.seg_type_ref_key_01, 'N/A', 'Non ViP') sportsbook_group,
       c.country, c.contactable, c.email, c.dob, c.[status],
       c.first_name, c.last_name, c.[master], c.[language], c.gender
INTO #temp_shash_A
FROM warehouse.dbo.dim_customer c (nolock)
INNER JOIN warehouse.dbo.dim_segmentationcodehistory sc (nolock)
    ON sc.cust_id = c.cust_id
INNER JOIN warehouse.dbo.q_dim_segments qs (nolock)
    ON sc.seg_code_ref_key = qs.seg_code_ref_key
WHERE sc.active = 1
  AND qs.seg_type_ref_key_04 <> 'N/A'
  AND c.active = 1
  AND (qs.seg_type_ref_key_02 <> 'Prospect' AND qs.seg_type_ref_key_03 <> 'Prospect' AND
       qs.seg_type_ref_key_04 <> 'Prospect' AND qs.seg_type_ref_key_12 <> 'Prospect' AND
       qs.seg_type_ref_key_01 <> 'Prospect')

create clustered index S_1 on #temp_shash_A (cust_id)

-- B
SELECT c.cust_id cust_dup,
       SUM(fc.turnover) AS Turnover_GBP,
       SUM(fc.grosswin) AS GrossWin_GBP,
       SUM(fc.chip_purch_amount_gbp) AS chip_purch_amount_gbp
INTO #temp_shash_B
FROM warehouse.dbo.fact_games fc (nolock)
INNER JOIN warehouse.dbo.dim_date d (nolock)
    ON d.date_key = fc.date_ref_key
INNER JOIN warehouse.dbo.dim_customer c (nolock)
    ON c.cust_ref_key = fc.cust_ref_key
INNER JOIN warehouse.dbo.dim_gamesgame gg (nolock)
    ON gg.games_game_ref_key = fc.game_ref_key
WHERE d.[date] BETWEEN GETDATE() - 10 AND GETDATE() - 9
  AND gg.Game_Group_Description <> 'Bingo'
GROUP BY c.cust_id

create clustered index S_2 on #temp_shash_B (cust_dup)

SELECT * FROM #temp_shash_A A
LEFT JOIN #temp_shash_B B
    ON A.cust_id = B.cust_dup

This took around 5-6 minutes when run initially. However, it took around 35 minutes when run today. Can anyone suggest a way for me to optimize this? Any help appreciated.

PS: I'm working on SQL Server 2008 R2 DB. The query is a dataset query for an SSRS report.

Oracle specifying join order hint

Posted: 03 Sep 2013 11:41 AM PDT

Oracle 2 node RAC 11G R2

I have a query (this is a simplified version, but still exhibits the same issue as the larger query) that looks like this:

SELECT t1.trip_id,
       t1.origin,
       t2.sighting_date
FROM trips t1
JOIN trip_details t2 ON t1.trip_id = t2.trip_id
WHERE
  --don't get the information if the trip *only* belongs to customerid 99
  99 <> any(select customerid from customer_trip ct where ct.trip_id = t1.trip_id);
  • trip has trip level information - trip_id, origin, destination, etc
  • trip_details has sightings for the trips - trip_id, sighting_date, sighting_location, etc
  • customer_trip has which trips belong to which customers - some trips may be shared between customers and some may not, so the basic structure is just customer_id, trip_id

The problem is that it keeps joining trips and trip_details first, and then limiting by the 99 <> ANY (...) predicate. That predicate greatly reduces the rows from trips, and would reduce the rows looked up in trip_details (which is very, very large - 100x trips). So, ideally, I'd like it to join trips and the customer_trip subquery first, and then trip_details.

But I can't figure out if there's a way to specify a join order such that it does it in that order. The LEADING hint doesn't seem to have a way to reference the subquery.

Stats are all up to date.
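One avenue to try (untested here, and only effective if the optimizer unnests the subquery into a join): name the subquery's query block with QB_NAME and reference its alias from a global LEADING hint:

SELECT /*+ LEADING(t1 ct@sq t2) */
       t1.trip_id, t1.origin, t2.sighting_date
FROM   trips t1
JOIN   trip_details t2 ON t1.trip_id = t2.trip_id
WHERE  99 <> any(SELECT /*+ QB_NAME(sq) */ customerid
                 FROM customer_trip ct
                 WHERE ct.trip_id = t1.trip_id);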

Table schema that inserts duplicates into another table

Posted: 03 Sep 2013 10:14 AM PDT

I have a table with a column SP varchar(10) NOT NULL. I want that column always to be unique, so I created a unique index on it. My table schema is as follows:

CREATE TABLE IF NOT EXISTS `tblspmaster` (
  `CSN` bigint(20) NOT NULL AUTO_INCREMENT,
  `SP` varchar(10) NOT NULL,
  `FileImportedDate` date NOT NULL,
  `AMZFileName` varchar(50) NOT NULL,
  `CasperBatch` varchar(50) NOT NULL,
  `BatchProcessedDate` date NOT NULL,
  `ExpiryDate` date NOT NULL,
  `Region` varchar(50) NOT NULL,
  `FCCity` varchar(50) NOT NULL,
  `VendorID` int(11) NOT NULL,
  `LocationID` int(11) NOT NULL,
  PRIMARY KEY (`CSN`),
  UNIQUE KEY `SP` (`SP`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=10000000000 ;

Now I want that, if anybody tries to insert a duplicate record, that record should be inserted into a secondary table named tblDuplicate.

I have gone through this question http://stackoverflow.com/questions/812437/mysql-ignore-insert-error-duplicate-entry but I am not sure whether, instead of

INSERT INTO tbl VALUES (1,200) ON DUPLICATE KEY UPDATE value=200;  

I can insert the duplicate row into another table?

What changes need to be made to the main table schema or the index column?

Note : Data will be inserted by importing excel or csv files.
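MySQL has no built-in "divert duplicates to another table" option, so one sketch (procedure name and duplicate-table layout are hypothetical, and only the SP column is shown; the remaining NOT NULL columns would need to be passed and inserted in both branches) is to route each row through a stored procedure:

DELIMITER $$
CREATE PROCEDURE insert_sp_row(IN p_sp VARCHAR(10))
BEGIN
    IF EXISTS (SELECT 1 FROM tblspmaster WHERE SP = p_sp) THEN
        INSERT INTO tblDuplicate (SP) VALUES (p_sp);  -- duplicate goes aside
    ELSE
        INSERT INTO tblspmaster (SP) VALUES (p_sp);   -- other columns omitted
    END IF;
END$$
DELIMITER ;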

Create a secondary read-only copy of DB in a different server

Posted: 03 Sep 2013 02:08 PM PDT

We have users that query our prod database (mostly ODBC through MS-Access) to create some custom reports. We want to copy our data to a second server (replication / log shipping / ???) so the load on the prod DB is reduced. Many of the tables in the prod DB do not have primary keys, and the source is SQL Server 2005. Our target DB is SQL Server 2012 Standard (though we can downgrade the destination server if we have to).

How could we accomplish that? We've tried replication and failed because of the lack of primary keys. We also tried log-shipping but the second DB cannot be brought online because of the difference in SQL versions.

Thanks, Jim

Combining multiple left join for one single line

Posted: 03 Sep 2013 11:58 AM PDT

We have a query as below.

SELECT tblSData.header, tblEAlert.eMessage
FROM tblSData
LEFT JOIN tblEAlert ON tblSData.sDataID = tblEAlert.sDataID
WHERE tblSData.aID = ".$aID."
ORDER BY tblSData.dateTimer ASC

Structure for the tables are as below:

CREATE TABLE IF NOT EXISTS `tblSData` (
  `sDataID` int(11) NOT NULL AUTO_INCREMENT,
  `header` varchar(255) NOT NULL,
  `aID` int(5) NOT NULL,
  `gFInID` int(5) NOT NULL,
  `gFOutID` int(5) NOT NULL,
  `gFAInID` int(5) NOT NULL,
  `gFAOutID` int(5) NOT NULL,
  `eAlertID` int(5) NOT NULL,
  `dateTimer` datetime NOT NULL,
  PRIMARY KEY (`sDataID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;

CREATE TABLE IF NOT EXISTS `tblEAlert` (
  `eAlertID` int(11) NOT NULL AUTO_INCREMENT,
  `sDataID` int(5) NOT NULL,
  `eID` int(5) NOT NULL,
  `aID` int(5) NOT NULL,
  `eDateTime` datetime NOT NULL,
  `eInsertDateTime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `eMessage` varchar(255) NOT NULL,
  PRIMARY KEY (`eAlertID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;

Sample data for tblSData:

1,"A1",1122,100,102,1,2,1,2013-07-13 15:30:19
2,"A3",1122,104,103,3,4,3,2013-07-13 15:45:19
3,"A4",1122,105,108,5,6,7,2013-07-13 15:55:19

Sample data for tblEAlert:

1,1,1,1122,2013-07-13 15:30:19,2013-07-13 15:30:19,"Alert 1"
2,1,2,1122,2013-07-13 15:30:19,2013-07-13 15:30:19,"Alert 2"
3,2,2,1122,2013-07-13 15:45:19,2013-07-13 15:45:19,"Alert 3"
4,2,3,1122,2013-07-13 15:45:19,2013-07-13 15:45:19,"Alert 4"
5,2,4,1122,2013-07-13 15:45:19,2013-07-13 15:45:19,"Alert 5"
6,2,5,1122,2013-07-13 15:45:19,2013-07-13 15:45:19,"Alert 6"

Sample output:

"A1","Alert 1"  "A1","Alert 2"  "A2","Alert 3"  "A2","Alert 4"  "A2","Alert 5"  "A2","Alert 6"  

The problem now is how to show, for example for A1, all the alert messages on one line, and the same for A2, etc. Right now each alert message is on a separate line.
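The standard MySQL tool for this is GROUP_CONCAT; a sketch against the tables above (the literal 1122 stands in for $aID):

SELECT s.header,
       GROUP_CONCAT(e.eMessage ORDER BY e.eAlertID SEPARATOR ', ') AS messages
FROM   tblSData s
LEFT JOIN tblEAlert e ON e.sDataID = s.sDataID
WHERE  s.aID = 1122
GROUP BY s.sDataID, s.header
ORDER BY s.dateTimer ASC;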

Oracle, Create table with inline constraint declarations using a different tablespace

Posted: 03 Sep 2013 10:59 AM PDT

I have two questions about inline constraints declarations for Oracle tables:

  1. Is it a bad practice? If so, why?

  2. How could one declare a different tablespace for the primary key and index, as is done with an out-of-line declaration? Something like:

create table THIS_TABLE (
    id number,
    constraint THIS_TABLE_PK (id) tablespace INDEX_TABLESPACE
) tablespace DATA_TABLESPACE;
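For question 2, Oracle does accept a USING INDEX clause on an inline constraint; a sketch (tablespace names taken from the question):

CREATE TABLE this_table (
    id NUMBER
       CONSTRAINT this_table_pk PRIMARY KEY
       USING INDEX TABLESPACE index_tablespace
) TABLESPACE data_tablespace;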

How to know what version of mysql client is compatible with what version of the server?

Posted: 03 Sep 2013 08:46 AM PDT

Are there backward- or forward-compatibility tables from the client and/or server point of view?

For a given version of the server, how do I know which clients are compatible, and conversely?

db2 equivalent for oracle drop user cascade

Posted: 03 Sep 2013 07:44 AM PDT

This question is related to creating and initialising test and development databases for an application.

On oracle, we have

  • an os user (on linux) called oracle,
  • an admin oracle user called sys or system
  • a fixed predefined oracle user that is the application object owner, let's call it appowner
  • an oracle user that is the application user, ie has limited previleges on the objects owned by appowner, let's call it appuser

Whenever there is a need to initialise the database, i.e. to start from scratch, we first connect to Oracle as the sys or system user and then issue this command:

DROP USER appowner CASCADE;  

and then we go about recreating the user and the objects it owns from scratch. We also grant some privileges to appuser on the objects created by appowner.

The application always logs in as appuser rather than appowner unless there are any administrative tasks to be performed on the database.

Now we are porting this application to db2, and this is where we are flummoxed.

For starters, db2 creates these os users, which are also db2 users:

dasusr1
db2inst1
db2fenc1

How do these three users map to sys/system, appowner and appuser?

I believe dasusr1 is the rough equivalent of sys/system, db2inst1 is the rough equivalent of appowner, and db2fenc1 is the rough equivalent of appuser

(please correct me if I'm wrong, and I fully appreciate that the mapping will not be exact)

That being the case, if I have to remove all objects owned by db2inst1, do I login as dasusr1 and drop the user db2inst1?

There isn't really a one to one mapping between the oracle users and db2 users, because

  • db2inst1 can create multiple databases whereas appuser is mapped to one database in oracle
  • within a db2 database, there can be multiple schemas, whereas in oracle, one user maps to one schema

So it's a little confusing. It would help if someone with experience in porting applications from oracle to db2 could throw some light on the equivalence between the db2 and oracle users, which would finally lead to finding out the db2 equivalent for oracle drop user cascade.
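For comparison, the closest DB2 building block I know of is dropping a schema with all its objects, e.g. via the ADMIN_DROP_SCHEMA procedure (a sketch; the error schema/table names below are arbitrary):

-- drops all objects in schema APPOWNER; any errors are logged to the named table
CALL SYSPROC.ADMIN_DROP_SCHEMA('APPOWNER', NULL, 'ERRSCHEMA', 'ERRTABLE');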

Install SP 3 on SQL Server 2008, where merge replication is running

Posted: 03 Sep 2013 08:02 AM PDT

I have a SQL Server 2008 instance and need to install Service Pack 3 on it.

On my server several databases are replicated, and mobile devices sync data via merge replication.

Can problems occur if I install SP3, especially with my mobile devices (which run SQL CE 3.5)?

I can't update all of the mobile devices.

Get missed transactions from slave

Posted: 03 Sep 2013 11:24 AM PDT

I'm trying to write a disaster recovery plan. I have one-way replication with 1 master (A) and 3 slaves (B, C, D):

    A
   /|\
  / | \
 B  C  D

Let's say my master crashes and I lose all data on that server. I still have 3 slaves, and I'll promote the most up-to-date one to be my new master. Say slave B is up to date while C and D are not; at the point I promote B to master, my slaves show the following SHOW SLAVE STATUS\G results:

slaveB> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
      Relay_Master_Log_File: master-bin.000002
        Exec_Master_Log_Pos: 1476884

slaveC> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
      Relay_Master_Log_File: master-bin.000002
        Exec_Master_Log_Pos: 78684

slaveD> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
      Relay_Master_Log_File: master-bin.000001
        Exec_Master_Log_Pos: 746848

In order to promote B to master, I want to apply all missing transactions on C and D, so that before B becomes the new master and starts to receive queries from applications, all slaves are in a consistent state. How can I find the transactions C and D are missing in the binary logs on B (I have the --log-slave-updates option enabled on all servers)?

For legacy reasons, the environment runs MySQL 5.0.

Moving from SQL 2005 [SQL_Latin1_General_CP1_CI_AS] to 2008 - will I lose any features by using 'backwards compatibility'

Posted: 03 Sep 2013 11:00 AM PDT

We're moving from SQL 2005 [Instance and DB have collation of SQL_Latin1_General_CP1_CI_AS] to SQL 2008 [which defaults to Latin1_General_CI_AS].

I completed a SQL 2008 R2 installation using the default Latin1_General_CI_AS collation, with the restored database still on SQL_Latin1_General_CP1_CI_AS. The expected problems occurred - the #temp tables were in Latin1_General_CI_AS while the db was in SQL_Latin1_General_CP1_CI_AS - and this is where I am now. I need advice on the pitfalls, please.
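For reference, the usual stopgap for the #temp-table mismatch while a collation decision is pending is an explicit COLLATE on the comparison (a sketch; table and column names are hypothetical):

SELECT t.Name
FROM   #temp AS t
JOIN   dbo.RealTable AS r
  ON   t.Name COLLATE DATABASE_DEFAULT = r.Name;  -- coerce the tempdb side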

On installation of SQL 2008 R2, I have the option on installation to use 'SQL Collation, used for backwards compatibility' where I have the option to select the same collation as the 2005 database : SQL_Latin1_General_CP1_CI_AS.

  1. This will allow me to not have problems with #temp tables, but are there pitfalls?

  2. Would I lose any functionality or features of any kind by not using a "current" collation of SQL 2008?

  3. What about when we move (e.g. in 2 years ) from 2008 to SQL 2012? Will I have problems then?
  4. Would I at some point be forced to go to Latin1_General_CI_AS?

  5. I read that some DBAs script out all the rows of entire databases and then run the insert scripts into a database with the new collation - I'm very scared and wary of this. Would you recommend doing it?

Calculating the median value in a MySQL table with CodeIgniter

Posted: 03 Sep 2013 05:18 PM PDT

This is my model file in CodeIgniter. I need to calculate the median of the column budget_breakdown.breakdown_grants. How can I do this calculation?

function reports_details1() {
    $this->db->select('budget_breakdown.breakdown_grants');
    //survey_respondent_info.state,survey_respondent_info.survey_id,budgets.budget_id,
    $this->db->from('survey_respondent_info');
    $this->db->join('budgets',
                    'budgets.survey_id=survey_respondent_info.survey_id', 'left');
    $this->db->join('budget_breakdown',
                    'budgets.budget_id=budget_breakdown.budget_id', 'left');
    $this->db->where('budgets.budget_option_id', 2);
    $query1 = $this->db->get();
    $result = $query1->result();
    return $result;
}
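CodeIgniter's query builder has no median helper, so one sketch is to compute it in raw SQL (MySQL 5.x, no window functions; the joins and filters from the model above are omitted for brevity) and run it via $this->db->query():

SELECT AVG(t.breakdown_grants) AS median
FROM (
    SELECT b.breakdown_grants,
           @rn := @rn + 1 AS rn           -- running row number in sorted order
    FROM   budget_breakdown b
    JOIN   (SELECT @rn := 0) init
    ORDER  BY b.breakdown_grants
) t,
(SELECT COUNT(*) AS c FROM budget_breakdown) n
WHERE t.rn IN (FLOOR((n.c + 1) / 2), CEILING((n.c + 1) / 2));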

This is just to demonstrate a display bug

Posted: 03 Sep 2013 01:17 PM PDT

...and now has been updated to demonstrate a comment thingy on SO meta.

Unless I asked you to, please do not act on this question, even if it appears in the suggested aardvark edit review queue or the close votes review queue.

Pgpool, Postgresql and Apache tuning (1000 concurrent spatial queries)

Posted: 03 Sep 2013 09:18 AM PDT

I'm trying to configure a load-balancing system. I have a Python script, invoked through mod_wsgi on Apache, that generates a query and executes it on pgpool: request -> wsgi python -> pgpool -> postgresql. Pgpool is configured as a load balancer using 4 servers with 24GB RAM and a 350GB SSD each. Our db is about 150GB and a query takes about 2 seconds. These are the configurations:

Pgpool

  • num_init_children 500
  • max_pool 2
  • child_life_time 300 seconds

Apache (mpm_prefork)

  • StartServers 100
  • MinSpareServers 20
  • MaxSpareServers 45
  • ServerLimit 2000
  • MaxClients 100
  • MaxRequestsPerChild 1000

PostgreSQL

  • max_connections = 1000
  • shared_buffers = 6GB
  • work_mem = 4GB

It doesn't seem to work: when I try to submit more than 150 concurrent queries, I get this error from the Python script, although the pgpool log file doesn't show any errors:

OperationalError: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.

Any ideas?

What are database statistics, and how can I benefit from them?

Posted: 03 Sep 2013 01:31 PM PDT

I've heard mention of statistics that SQL Server keeps by default. What are they tracking, and how can I use this information to improve my database?
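As a starting point for exploring them (a sketch; the table and index names are placeholders):

-- list the statistics SQL Server keeps on a table, and when they were last updated
SELECT s.name, s.auto_created, STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM   sys.stats AS s
WHERE  s.object_id = OBJECT_ID('dbo.SomeTable');

-- inspect the histogram and density information of one statistic
DBCC SHOW_STATISTICS ('dbo.SomeTable', 'IX_SomeIndex');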

Friday, May 3, 2013

[how to] Transfer data from multiple different servers to single database server



Transfer data from multiple different servers to single database server

Posted: 03 May 2013 05:28 PM PDT

I am trying to transfer a particular table from multiple different servers (30) into one server. I do not want to create 30 different OLE DB sources and merge them. Is there a simple way to resolve this issue?

Thanks!

DR - Moving databases from Active/Active cluster to Standalone server with LUN sync

Posted: 03 May 2013 07:27 PM PDT

We have an active/active cluster on LUNs (with one named instance on each node and a total of 100 DBs).

For DR purposes, the LUNs will be replicated (with the same drive letters) and attached to a standalone server with the same SQL Server configuration but a different Windows Server edition (Enterprise vs Standard, though I don't think that should be an issue), and the two same named instances pre-installed.

I proposed the attach/detach solution, with a script that auto-generates the attach/detach statements to simplify the task, plus other scripts to copy logins, jobs, and so on.

Another DBA said a better option would be to rename the master and msdb databases of the pre-installed named instances on the standalone server with an _old suffix, and to attach the master/msdb databases from each cluster node instance to the corresponding named instance on the standalone server.

He said it works and had tested it.

However, I've never heard of such a solution, nor can I find any documentation of it.

I would like to hear some other opinions.

Thank you, Kukah

Ubuntu / MySQL - How to choose from the various downloads?

Posted: 03 May 2013 05:02 PM PDT

I have Ubuntu 12.04.2 LTS. I would like to install MySQL 5.5.29.

So I go to the MySQL 5.5 archive page
http://downloads.mysql.com/archives.php?p=mysql-5.5&o=other and scroll down to section 5.5.29

Here I see the following options:


There appear to be duplicates here, so I hover the mouse over the links.

mysql-5.5.29-linux2.6-x86_64.tar.gz
mysql-5.5.29-linux2.6-i686.tar.gz
mysql-5.5.29-debian6.0-x86_64.deb
mysql-5.5.29-debian6.0-i686.deb
mysql-5.5.29.tar.gz
mysql-5.5.29.zip

My operating system, Ubuntu 12.04.2, is 64 bit, so that rules out #2 and #4. I assume the .zip is for Windows, so that rules out #6.

That leaves #1, #3 & #5.

Please explain how I should choose from these remaining options. Perhaps Ubuntu is considered similar to Debian, so that's what I should choose? Or should I select the generic download?

How to create Database in MS SQL 2012 from lot of XML and XSD

Posted: 03 May 2013 05:55 PM PDT

I'm new to programming for MS SQL 2012. I have a few thousand XML files and several tens of XSDs, which have dependencies among themselves and come from one huge database.

I would like to create tables from the XSD definitions and populate them from the XML data. I'm working with Microsoft SQL Server 2012 and Visual Studio 2012.

Can anyone help me with how to do that?

I tried many ways - for example, I tried to create an XML Schema Collection from an XSD in T-SQL (CREATE XML SCHEMA COLLECTION - http://msdn.microsoft.com/en-us/library/ms176009.aspx) but did not find how to do this from a file.
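For the "from a file" part specifically, one sketch that works in T-SQL is bulk-loading the XSD as a blob first (the path and collection name below are hypothetical):

DECLARE @xsd xml;

SELECT @xsd = CAST(BulkColumn AS xml)
FROM   OPENROWSET(BULK 'C:\schemas\example.xsd', SINGLE_BLOB) AS src;

CREATE XML SCHEMA COLLECTION dbo.ExampleSchemaCollection AS @xsd;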

So I tried VS 2012, but the advice and guidance I found is only for VS 2010 (for example http://msdn.microsoft.com/en-us/library/dd193276(v=vs.100).aspx), and VS 2012 seems to be different.

I'm starting to get desperate, so I'd appreciate any help or advice on how to solve this problem, and on whether it is necessary to first create the XSD schemas and then load the XML.

Thanks a lot!

Select first row (grouping) + add aggregate function

Posted: 03 May 2013 03:03 PM PDT

First have a look at this question on StackOverflow.

I'm looking to accomplish the same task, except I also need to add an aggregate function (PostGIS's ST_Union) to my query.

How can I combine the answer using DISTINCT with an aggregation...

I've tried:

SELECT DISTINCT ON (name, zonedistrict_id)
       ST_Union(geom) as geom, gid, name, zonedistrict_id, zonestyle_id, longname
FROM zones
ORDER BY name, zonedistrict_id, zonestyle_id;

Which screams: column "zones.gid" must appear in the GROUP BY clause or be used in an aggregate function

Which is strange, because if I remove ST_Union(geom) as geom, the query works - but then it lacks the unioned geometry.
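Since ST_Union is an aggregate, one sketch is to switch from DISTINCT ON to GROUP BY and collapse the leftover columns with aggregates (min() is chosen arbitrarily here to stand in for "first row"):

SELECT name,
       zonedistrict_id,
       ST_Union(geom)    AS geom,
       min(gid)          AS gid,
       min(zonestyle_id) AS zonestyle_id,
       min(longname)     AS longname
FROM   zones
GROUP  BY name, zonedistrict_id
ORDER  BY name, zonedistrict_id;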

MySQL - Index on a query with order

Posted: 03 May 2013 01:29 PM PDT

I have the following table with >1M rows:

CREATE TABLE `wishlist_place` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `wishlist_id` int(11) DEFAULT NULL,
  `place_id` int(11) DEFAULT NULL,
  `city_id` bigint(20) DEFAULT NULL,
  `created_by_id` int(11) DEFAULT NULL,
  `created_at` datetime DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `wishlist_place_unique` (`wishlist_id`,`place_id`),
  KEY `IDX_20B83B35FB8E54CD` (`wishlist_id`),
  KEY `IDX_20B83B358BAC62AF` (`city_id`),
  KEY `new_places_on_wishlist` (`wishlist_id`,`place_id`,`id`)
) ENGINE=InnoDB AUTO_INCREMENT=3079689 DEFAULT CHARSET=utf8;

And I'm doing the following query to get the newest places added to that wishlist:

SELECT place_id
FROM wishlist_place as wp
WHERE wishlist_id = 113
ORDER BY wp.id desc
LIMIT 0, 9

I thought the index 'new_places_on_wishlist' would work for this query, but it uses the index 'wishlist_place_unique' instead:

id  select_type  table  type  possible_keys                                                      key                    key_len  ref    rows  Extra
1   SIMPLE       wp     ref   wishlist_place_unique,IDX_20B83B35FB8E54CD,new_places_on_wishlist  wishlist_place_unique  5        const  28    Using where; Using index; Using filesort

UPDATE

I've noticed that the index that works best is:

KEY `new_places_on_wishlist` (`wishlist_id`,`id`,`place_id`)  

but I still have to use the USE INDEX statement:

SELECT place_id
FROM wishlist_place as wp
USE INDEX (new_places_on_wishlist)
WHERE wishlist_id = 113
ORDER BY wp.id desc
LIMIT 0, 9

Why does MySQL pick the UNIQUE index first? Is the right column order for an index: fields in the WHERE, then fields in the ORDER BY, then fields in the SELECT?

--

And another related question: if I do a JOIN with the city table to get the wishlists of that city, is the index IDX_20B83B358BAC62AF the best one, or would (city_id, wishlist_id) be better?

Does MySQL 5.6 support fan-in replication?

Posted: 03 May 2013 03:01 PM PDT

I've known for quite a while that Continuent's Tungsten Replicator supports fan-in replication (example).

By fan-in replication I mean combining multiple datasources into a single instance (like a datawarehouse).

I recently came across this MySQL deck that seems to suggest that this could be done with MySQL 5.6. Specifically slide/page 35 says "Arbitrary Topology" under Topologies:.

Can someone confirm that fan-in replication is possible with MySQL 5.6?

Has anyone setup this topology in 5.6?

Thanks!

Oracle - Best way to export 30 million rows to csv

Posted: 03 May 2013 11:30 AM PDT

I have a query that hits multiple tables in an Oracle db (11g). I don't have access to the db machine itself; I use SQL Developer or sqlplus to connect.

There are about 31 million rows and 7 columns, and I want to dump it all into a csv.

Looking to learn about the various/best options for this case.
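One low-tech baseline that needs no server access is SQL*Plus spooling (a sketch; the column and table names are placeholders):

SET TERMOUT OFF FEEDBACK OFF HEADING OFF PAGESIZE 0 LINESIZE 32767 TRIMSPOOL ON
SPOOL /tmp/export.csv
SELECT col1 || ',' || col2 || ',' || col3 FROM big_table;
SPOOL OFF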

Different Result When Insert Data Contain Quote on Localhost and Online MySQL Hosting

Posted: 03 May 2013 06:25 PM PDT

I installed xampp on localhost and insert data containing quotes using this php code:

$content     = htmlentities($_POST['content']);
$content_esc = mysql_real_escape_string($content);
$save = mysql_query("INSERT INTO tbl_post (title,content,date,publish) values('$title_esc','$content_esc','$date','$publish')");

The result on localhost website is

this is 'example data'  

The row on database filled with this

this is &quot;example data&quot;  

But using the same code to insert data the result from my hosting provider is different

this is \'example data\'  

The row on database filled with this

this is \&quot;example data\&quot;  

I test my website on localhost: I insert a post containing a quote and it shows on the page correctly. Then I was surprised when, after uploading to the online hosting and inserting a post, the post page shows '\' before the quote.

For now I replace the '\' with nothing. How can I insert the quote so it shows the same result on the local MySQL database and the online MySQL database? What is wrong - my php code or my MySQL database?

Can I use Try_Cast to give me a query result column with different datatypes?

Posted: 03 May 2013 11:53 AM PDT

I have a column that is varchar(max). It contains mostly valid XML. But sometimes it has invalid xml (that I still need).

I am trying to make a view for this table that allows the value to display as XML (with the nice link feature that comes with SSMS) when the data is valid XML, but still shows the string when it is invalid XML.

Like this:

SELECT TRY_CAST([RunLog] AS xml) as [Log], -- works
       RunLog,                             -- Also Works

       -- This is the one I want, but it tries to parse all of them
       -- (even the ones that fail the cast)
       CASE
            WHEN TRY_CAST(RunLog AS xml) IS NULL
                 THEN RunLog
                 ELSE CAST(RunLog AS xml)
       END as CombinedLog
FROM   LoggingTable

I think that this is a lost cause, because SQL Server will not allow a result to have more than one datatype. But I thought I would ask the experts just to be sure.

How can I copy a user while preserving all privs?

Posted: 03 May 2013 08:48 AM PDT

We're changing domains. I'd like to create a new account for myself, NewDomain\Bob which has the same su-level privileges as OldDomain\Bob, to minimize the chance of disruption when IT flips the switch.

I can easily assign the new login the same database roles. However, I'd like to associate the new login with the same users. Sadly, it seems that a user can have only one login. So what are the alternatives?

I could create the new login with a temporary user, reassign ownership of schemas, drop the old dbo user, and finally rename the new user. However, this would strip some privileges from the old login. In practice this may not matter, but I don't like the idea of cutting off any of my own privs before the new account is available.

The sp_change_users_login sproc looked promising to repoint the dbo user to the new login, but it has some serious limits - like not being usable with Windows logins.

Is there a workaround, or am I perhaps trying to fix the wrong problem?
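One possibility that avoids sp_change_users_login, and does work with Windows logins (a sketch; user, login, and database names are hypothetical): remap an existing user directly, and transfer database ownership for the dbo case.

-- point an existing database user at the new login
ALTER USER [Bob] WITH LOGIN = [NewDomain\Bob];

-- dbo cannot be remapped this way; change the database owner instead
ALTER AUTHORIZATION ON DATABASE::[SomeDb] TO [NewDomain\Bob];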

What's the meaning of the Collation Code (blank, 90, 100, 110) in SQL Server 2012

Posted: 03 May 2013 07:57 AM PDT

During the installation of SQL Server 2012, on the Collation tab, I can choose between French_CI_AS and French_100_CI_AS.

What do they mean, and which one is newer? When there's no code, is it newer or older than the 100?

MySQL InnoDB does not return the last row on select with DESC order

Posted: 03 May 2013 09:09 AM PDT

We have a table scheme like following:

CREATE TABLE IF NOT EXISTS `offers` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `campaign_id` int(11) NOT NULL,
  `user_id` int(11) NOT NULL,
  `price` double NOT NULL,
  `ip` varchar(15) NOT NULL,
  `cdate` int(11) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `campaign_id` (`campaign_id`,`price`)
) ENGINE=InnoDB DEFAULT CHARSET=latin5 AUTO_INCREMENT=190514 ;

On each new offer by the user we check if the last order is given by the same user with:

"select user_id from offers where campaign_id='".$campaign['id']."' order by id desc limit 1"  

If the user_id is the same, we block the new offer to protect the user from accidental double clicks.

If there is no problem with the offer, we insert it with:

"insert into offers(campaign_id,user_id,price,ip,cdate) values (".$campaign['id'].",".$user['id'].",'".$price."','".$_SERVER['REMOTE_ADDR']."',".time().")"  

But the problem is that the select only returns the last inserted row after about 1 second, which means a user can insert multiple offers by clicking the button too fast.

We use 5.5.30-30.2-log Percona Server as our database server. Following is our my.cnf file:

[mysqld]
datadir                         = /var/lib/mysql
tmpdir                          = /var/lib/mysqltmp
socket                          = /var/lib/mysql/mysql.sock
skip-external-locking           = 1
skip-name-resolve
open-files-limit                = 40000
max_heap_table_size             = 64M
tmp_table_size                  = 64M
log-error                       = /var/log/mysqld.log
thread-cache-size               = 50
table-cache                     = 4096
table-open-cache                = 4096
table-definition-cache          = 512
query-cache-size                = 0
query-cache-limit               = 16M
query-cache-type                = 0
sort-buffer-size                = 1M
read-buffer-size                = 1M
read-rnd-buffer-size            = 8M
join-buffer-size                = 1M
tmp-table-size                  = 64M
max-heap-table-size             = 64M
back-log                        = 100
max-connections                 = 10000
max-connect-errors              = 10000
max-allowed-packet              = 256M
interactive-timeout             = 360
wait-timeout                    = 360
innodb                          = FORCE
key-buffer-size                 = 32M
myisam-sort-buffer-size         = 4M
innodb-buffer-pool-size         = 60G
innodb-log-file-size            = 256M
innodb_log_files_in_group       = 2
innodb-log-buffer-size          = 4M
innodb-file-per-table           = 1
innodb-thread-concurrency       = 8
innodb-flush-log-at-trx-commit  = 2
server-id                       = 1
slow-query-log                  = 1
slow-query-log-file             = /var/lib/mysqllogs/slow-log
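Independent of why the read lags, the race itself can be closed with a locking read in the same transaction as the insert (a sketch with placeholder values; two concurrent submissions then serialize on the row lock instead of both passing the check):

START TRANSACTION;

SELECT user_id FROM offers
WHERE  campaign_id = 123
ORDER  BY id DESC LIMIT 1
FOR UPDATE;                 -- blocks a competing check until commit

-- application logic: reject if user_id matches; otherwise:
INSERT INTO offers (campaign_id, user_id, price, ip, cdate)
VALUES (123, 456, 9.99, '192.0.2.1', UNIX_TIMESTAMP());

COMMIT;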

How would I make an online exam application? [closed]

Posted: 03 May 2013 04:55 AM PDT

  1. On user login, an exam is scheduled with 30 randomly selected questions for each user.
  2. The exam has a timer of 30 to 50 minutes.
  3. Each question has 4 options, of which exactly 1 is correct.
  4. All questions and answers come from the database.

So I want to know what the database structure for this should be.
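
To make the question concrete, here is the kind of minimal schema I have sketched so far (every table and column name is a placeholder, not a vetted design):

CREATE TABLE users (
  user_id INT AUTO_INCREMENT PRIMARY KEY,
  name    VARCHAR(100) NOT NULL
);

CREATE TABLE questions (
  question_id INT AUTO_INCREMENT PRIMARY KEY,
  body        TEXT NOT NULL
);

-- The 4 options per question; exactly one row per question has is_correct = 1
CREATE TABLE choices (
  choice_id   INT AUTO_INCREMENT PRIMARY KEY,
  question_id INT NOT NULL,
  body        VARCHAR(255) NOT NULL,
  is_correct  TINYINT(1) NOT NULL DEFAULT 0,
  FOREIGN KEY (question_id) REFERENCES questions(question_id)
);

-- One row per scheduled exam, plus the 30 questions drawn for it
CREATE TABLE exams (
  exam_id    INT AUTO_INCREMENT PRIMARY KEY,
  user_id    INT NOT NULL,
  started_at DATETIME NOT NULL,
  FOREIGN KEY (user_id) REFERENCES users(user_id)
);

CREATE TABLE exam_questions (
  exam_id     INT NOT NULL,
  question_id INT NOT NULL,
  chosen_id   INT NULL,  -- the option the user picked, if any
  PRIMARY KEY (exam_id, question_id),
  FOREIGN KEY (exam_id) REFERENCES exams(exam_id),
  FOREIGN KEY (question_id) REFERENCES questions(question_id),
  FOREIGN KEY (chosen_id) REFERENCES choices(choice_id)
);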

Working Linked Server Query fails in sp_send_dbmail

Posted: 03 May 2013 10:40 AM PDT

Take the following example:

EXEC msdb.dbo.sp_send_dbmail
    @recipients = 'me@whatever.co.uk',
    @query = 'SELECT TOP 10 * FROM LINKEDSERVERA.DATABASE.dbo.TABLE',
    @attach_query_result_as_file = N'True',
    @query_attachment_filename = 'test.txt',
    @subject = 'test',
    @body = 'test',
    @body_format = 'HTML';

This gives the following error (even when run under a Windows credential that has sysadmin privileges on both servers):

Msg 22050, Level 16, State 1, Line 0
Error formatting query, probably invalid parameters
Msg 14661, Level 16, State 1, Procedure sp_send_dbmail, Line 504
Query execution failed: OLE DB provider "SQLNCLI10" for linked server "LINKEDSERVERA" returned message "Login timeout expired".
OLE DB provider "SQLNCLI10" for linked server "LINKEDSERVERA" returned message "A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.".
HResult 0xFFFF, Level 16, State 1
SQL Server Network Interfaces: Error getting enabled protocols list from registry [xFFFFFFFF].

Now, the linked server in question is configured as follows (note the use of @useself & no mappings):

EXEC master.dbo.sp_addlinkedserver @server = N'LINKEDSERVERA', @srvproduct = N'SQL Server'
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname = N'LINKEDSERVERA', @useself = N'True', @locallogin = NULL, @rmtuser = NULL, @rmtpassword = NULL
EXEC master.dbo.sp_serveroption @server = N'LINKEDSERVERA', @optname = N'collation compatible', @optvalue = N'false'
EXEC master.dbo.sp_serveroption @server = N'LINKEDSERVERA', @optname = N'data access', @optvalue = N'true'
EXEC master.dbo.sp_serveroption @server = N'LINKEDSERVERA', @optname = N'dist', @optvalue = N'false'
EXEC master.dbo.sp_serveroption @server = N'LINKEDSERVERA', @optname = N'pub', @optvalue = N'false'
EXEC master.dbo.sp_serveroption @server = N'LINKEDSERVERA', @optname = N'rpc', @optvalue = N'true'
EXEC master.dbo.sp_serveroption @server = N'LINKEDSERVERA', @optname = N'rpc out', @optvalue = N'true'
EXEC master.dbo.sp_serveroption @server = N'LINKEDSERVERA', @optname = N'sub', @optvalue = N'false'
EXEC master.dbo.sp_serveroption @server = N'LINKEDSERVERA', @optname = N'connect timeout', @optvalue = N'0'
EXEC master.dbo.sp_serveroption @server = N'LINKEDSERVERA', @optname = N'collation name', @optvalue = NULL
EXEC master.dbo.sp_serveroption @server = N'LINKEDSERVERA', @optname = N'lazy schema validation', @optvalue = N'false'
EXEC master.dbo.sp_serveroption @server = N'LINKEDSERVERA', @optname = N'query timeout', @optvalue = N'0'
EXEC master.dbo.sp_serveroption @server = N'LINKEDSERVERA', @optname = N'use remote collation', @optvalue = N'true'
EXEC master.dbo.sp_serveroption @server = N'LINKEDSERVERA', @optname = N'remote proc transaction promotion', @optvalue = N'true'

Here is where it gets funky: if I run the following in SSMS manually (i.e. not within sp_send_dbmail), the query runs fine!

SELECT TOP 10 * FROM LINKEDSERVERA.DATABASE.dbo.TABLE  

So the linked server IS working; it's just sp_send_dbmail that's having trouble with it.

Next I assumed it was a problem with the credential used by the SQL Server services (including the Agent); both run as a DOMAIN\SQL service account. If I log in to SSMS as that service account and run the query manually, it again works, so the service account definitely has permission to run the query over the linked server.

I have double-checked the protocols enabled in the native client configuration (on both servers); TCP is definitely enabled on both, and as I say, the query works fine when not run within sp_send_dbmail.

To confirm sp_send_dbmail was executing the query as the service account, I ran the following (note the use of SYSTEM_USER):

EXEC msdb.dbo.sp_send_dbmail
    @recipients = 'me@whatever.co.uk',
    @query = 'SELECT SYSTEM_USER',
    @subject = 'test',
    @body = 'test'

This returned the DOMAIN\SQL credential.

The last part of the error mentions something about the registry, but I don't understand why that would only affect sp_send_dbmail (nor am I 100% certain what needs checking in the registry). The only article I could find was for SQL 2005, and the key location seems to be different in SQL 2008 R2.

My question: how do I get this linked server query to work in sp_send_dbmail when it works fine outside sp_send_dbmail?

PS: articles like this work around the issue by providing different credentials on the linked server. That isn't an option here, as this linked server is used by hundreds of other cross-server queries, and they are all working fine.


Same Parameter in MySQL Stored Procedure

Posted: 03 May 2013 08:56 AM PDT

How do I get the dynamic query in a stored procedure to be cached? Right now my stored procedure looks like this:

CREATE PROCEDURE usp_MyProcedure (
  IN UserID INT,
  ....
)
BEGIN
  SET @sqlQuery = CONCAT("SELECT Name FROM Users WHERE UserID > ", UserID,
                         " AND UserID IN ( SELECT UserID FROM OtherTable WHERE UserID = ", UserID,
                         " ) ORDER BY Name");

  PREPARE stmt FROM @sqlQuery;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
END;

But this kind of query does not get cached, so every call pays the full cost of processing the query.

Next I tried another method:

CREATE PROCEDURE usp_MyProcedure (
  IN UserID INT,
  ....
)
BEGIN
  SET @UserID = UserID;

  SET @sqlQuery = "SELECT Name FROM Users WHERE UserID > ? AND UserID IN ( SELECT UserID FROM OtherTable WHERE UserID = ? ) ORDER BY Name";

  PREPARE stmt FROM @sqlQuery;
  EXECUTE stmt USING @UserID, @UserID; -- here I pass the same variable twice
  DEALLOCATE PREPARE stmt;
END;

In this case I have to pass the same variable (@UserID) twice because it appears twice in the query. That becomes very tedious in long or complex queries, so how do I avoid it?

Another method I tried is the following:

CREATE PROCEDURE usp_MyProcedure (
  IN UserID INT,
  ....
)
BEGIN
  SET @UserID = UserID;

  SET @sqlQuery = "SELECT Name FROM Users WHERE UserID > @UserID AND UserID IN ( SELECT UserID FROM OtherTable WHERE UserID = @UserID ) ORDER BY Name";

  PREPARE stmt FROM @sqlQuery;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
END;

But again the query does not get cached, so execution time is long. Moreover, @UserID is a session-scoped user variable, so it can clash with a variable of the same name in another stored procedure: I call a second procedure from within this one, and that procedure uses the same variable name.

So, what is the best solution here?

Thanks.

Changing PostgreSQL port using command line

Posted: 03 May 2013 10:05 AM PDT

My PostgreSQL default port is 5432, and I want to change it from the command line (on Windows 7).

I saw this solution before: http://stackoverflow.com/questions/187438/want-to-change-pgsql-port

I tried running the following command, but it failed:

postgres --p 5431  

Could someone tell me how to change the port from the command line?
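
From what I understand (not verified on Windows 7), the flag takes a single dash, and the data directory has to be supplied too; the path below is a placeholder:

postgres -p 5431 -D "C:\path\to\data"

Alternatively, port = 5431 can be set in postgresql.conf before restarting the service.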

Real-Time Merge of two SQL Server databases on different servers

Posted: 03 May 2013 06:26 AM PDT

One of our customer requirements is to merge two databases on two different machines in a real-time context. This means we have to build some sync policy (or use SQL Server tools to manage it). But I have some doubts:

  • How can I be sure that my databases are fully merged? We are in a real-time environment with n requests per second, both web and local; it's impossible to propagate table updates and merges at that speed.

  • Is this a good architecture compared with a master/slave topology (with a single database)? I'm fairly sure syncing the two databases with each other will cost too much, since our tables store a lot of data in terms of rows and information.

  • Can anyone suggest a good solution for building the best possible architecture?

Thanks!

Is it possible to back up a SQL database if its status is not online?

Posted: 03 May 2013 04:56 AM PDT

We have a SQL job that backs up every database on the server each night (excluding tempdb) whose state is 0 (ONLINE), but for some reason it didn't back up msdb; at that time, msdb's state was something other than 0. So my question is: is it possible to back up a database whose status is not ONLINE (i.e. RESTORING, RECOVERING, RECOVERY_PENDING, SUSPECT, or EMERGENCY)?
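
For reference, a quick way to see which databases the job would skip at any given moment (a sketch using sys.databases):

-- Databases whose state is not 0 (ONLINE) right now
SELECT name, state, state_desc
FROM sys.databases
WHERE state <> 0;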

Windows Server 2008 R2, Active Directory, and Oracle Database Server

Posted: 03 May 2013 05:03 AM PDT

I have a group of clients, a Windows Server 2008 R2 machine (with Active Directory installed on it), and an Oracle database server.

I want to know how to set up the configuration shown in the diagram below.

How do I configure the clients to reach the database server through Active Directory?

[network diagram omitted]

Update

If Active Directory is of no use here (as the comments suggest), is there another way to route clients to the DB server through a proxy?

What are the different ways to keep track of active and archived data?

Posted: 03 May 2013 07:05 PM PDT

I'm looking for different ways to keep track of both active and archived data so I can weigh their pros and cons.

The system: I have a computer with a database on it. The database has several tables; one contains a list of users who may use the computer, and several others hold auditing data (user 1 did this, user 2 did that, etc.). This database is a slave of a master database, where a Content Management System is used to, say, add a new user and view reports on which user did what.

Example: As stated above, I have a table (let's call it users) that keeps track of all the users allowed to use the computer. As time goes by, users will be added and removed. The problem is that the audit tables track a user ID, so if a user is removed I lose their information because the rows can no longer be joined. One idea I had was to use MySQL triggers: when a user is added, an insert trigger fires and copies the data to an 'archived' user table (let's call it users_archive). That way the computer can use users to determine whether a user has permission, and reports can use users_archive.
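
Roughly, the trigger idea would look like this (a sketch; the column list is made up):

-- Archive table with the same structure as the live table
CREATE TABLE users_archive LIKE users;

DELIMITER //
CREATE TRIGGER trg_users_archive
AFTER INSERT ON users
FOR EACH ROW
BEGIN
  -- Copy the new user into the archive so audit joins keep working
  INSERT INTO users_archive (id, name)
  VALUES (NEW.id, NEW.name);
END//
DELIMITER ;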

This seems like the simplest way to do it, but a Google search hasn't turned up any alternatives to compare it against.

Database stuck in restoring and snapshot unavailable

Posted: 03 May 2013 11:05 AM PDT

I tried to restore my database from a snapshot. The last couple of times this took around a minute to complete. When I did it today, it hadn't completed after around 30 minutes and the spid was in a suspended state. I stopped the query, and now my database is stuck in the RESTORING state and my snapshot is unavailable. Am I screwed?

USE master;
RESTORE DATABASE QA FROM DATABASE_SNAPSHOT = 'QA_Snap_Testing';
GO

Database design - do I need another table?

Posted: 03 May 2013 08:05 PM PDT

I am trying to make a database that follows a form the company uses. When a client walks in, a member of staff fills in the form and the data is recorded. The paper form is 10 pages long. The first time a client is seen, the entire form is filled in and the client gets a clientID.

I have split the form into sections that make sense, like accommodation and employment. I know I can link these tables together with the clientID. That's the simple part.

When a client returns, the form comes out again, but this time only certain parts are filled in, whatever the client's needs are. Most of the records don't need updating, but a new record needs inserting. What would be the best way around this?

So at the moment I have, for example, a table called client with an id and a name, another table called accommodation with clientid and address, and another table called employment with clientid and employer.

But how do I go about it when a client comes in to let us know he has a new employer? I can't update the current record, as it is still needed, but I would need to add a new record for the client.

Would this mean I have to add a lookup table for each of my current tables?
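
For example, something like this history-style table is what I am picturing (column names invented for illustration):

CREATE TABLE employment (
  employment_id INT AUTO_INCREMENT PRIMARY KEY,
  clientid      INT NOT NULL,
  employer      VARCHAR(100) NOT NULL,
  recorded_on   DATE NOT NULL,
  FOREIGN KEY (clientid) REFERENCES client(id)
);

-- The current employer is then simply the newest row per client:
SELECT employer
FROM employment
WHERE clientid = 42
ORDER BY recorded_on DESC
LIMIT 1;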

event scheduler not called

Posted: 03 May 2013 02:05 PM PDT

I created an event, which looks like this:

mysql> SHOW CREATE EVENT event1\G
*************************** 1. row ***************************
               Event: event1
            sql_mode:
           time_zone: SYSTEM
        Create Event: CREATE DEFINER=`root`@`localhost` EVENT `event1`
                      ON SCHEDULE EVERY 1 MONTH STARTS '2013-02-02 00:00:00'
                      ON COMPLETION NOT PRESERVE ENABLE
                      DO BEGIN
                        update samp set col1 = col1 + 1;
                      END
character_set_client: utf8
collation_connection: utf8_general_ci
  Database Collation: latin1_swedish_ci
1 row in set (0.00 sec)

The event did not run on the 1st of the month, so I ran SHOW PROCESSLIST to check whether the scheduler is running; it showed the following output:

mysql> SHOW PROCESSLIST;
+-----+-----------------+-----------+------+---------+------+-----------------------------+------+
| Id  | User            | Host      | db   | Command | Time | State                       | Info |
+-----+-----------------+-----------+------+---------+------+-----------------------------+------+
| 136 | event_scheduler | localhost | NULL | Daemon  | 1855 | Waiting for next activation | NULL |
+-----+-----------------+-----------+------+---------+------+-----------------------------+------+

So does NULL in the db column mean that no database is assigned to it?
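
For completeness, two follow-up checks that should help narrow this down (a sketch):

-- Is the scheduler itself enabled?
SHOW VARIABLES LIKE 'event_scheduler';

-- Status, last execution, and start time of the event
SELECT EVENT_NAME, STATUS, LAST_EXECUTED, STARTS
FROM INFORMATION_SCHEMA.EVENTS
WHERE EVENT_NAME = 'event1';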

Please help me to solve it.

How can I improve my table design for different types of an entity?

Posted: 03 May 2013 06:05 PM PDT

Consider an accounting system as an example. I have an entity called Client. Clients can be of different types, with different fields applicable to each type. I am considering creating a separate table for each type of Client, each holding the fields applicable to that type, plus one master table that references all of them and holds the fields common to all types.

Currently, I have come up with the following design:

[design diagram omitted]

But I don't think my design is efficient enough (or even correct and free of errors). What would you suggest? Also, in case it matters, I am planning to use MariaDB.
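
In outline, what I am describing is a supertype/subtype layout; a minimal sketch with invented fields:

-- Master table: fields common to every client type
CREATE TABLE client (
  client_id   INT AUTO_INCREMENT PRIMARY KEY,
  client_type ENUM('person', 'company') NOT NULL,
  name        VARCHAR(100) NOT NULL
);

-- One subtype table per client type, sharing the master key
CREATE TABLE client_person (
  client_id     INT PRIMARY KEY,
  date_of_birth DATE,
  FOREIGN KEY (client_id) REFERENCES client(client_id)
);

CREATE TABLE client_company (
  client_id       INT PRIMARY KEY,
  registration_no VARCHAR(30),
  FOREIGN KEY (client_id) REFERENCES client(client_id)
);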

FETCH API_CURSOR causing open transaction in tempDB

Posted: 03 May 2013 03:05 PM PDT

A SELECT statement run from Oracle 11gR1 against SQL Server 2005 through a Gateway leaves an open transaction in tempdb. The user is a datareader in database XYZ. I'm not sure why the open transaction is in tempdb when the statement being run is a SELECT.
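
For reference, a sketch of the DMV query that surfaces such open transactions and the sessions holding them:

SELECT st.session_id,
       at.transaction_id,
       at.name,
       at.transaction_begin_time
FROM sys.dm_tran_session_transactions AS st
JOIN sys.dm_tran_active_transactions AS at
  ON at.transaction_id = st.transaction_id;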

Has anyone seen this issue before?

Thanks in advance, Sekhar

What's the best way to backup DB files via network on Linux and Solaris machines?

Posted: 03 May 2013 08:17 AM PDT

I wonder what the best way is to back up files over the network.

I have Solaris machines with Oracle 10g on a Veritas cluster. The machines are connected to EMC storage.

The /data/oracle directory is mounted on the EMC storage.

What I want is to back up the /data/oracle directory (70 GB) to a backup machine over the network, including soft-linked files. Reliability of the copy is very important.

I have looked around and found a few ways to do this.

For example, the first option is to use rsync:

rsync -WavH --progress /data/oracle $backup_server_ip:/Backup_dir

The second option is transferring the files with tar and ssh, handling compression on both sides of the pipe:

cd /directory_that_we_want_to_backup
tar cpf - . | bzip2 -c | \
  ssh $backup_server_ip "cd /Backup_dir && bzip2 -d | tar xpf -"

and so on.

I need advice on which of the options above is more reliable, and perhaps on what other good options there are for this.

Cannot find MySQL Server Config Wizard for MySQL 5.6 build

Posted: 03 May 2013 01:05 PM PDT

Not sure if this is the right Stack Exchange site for this but...

I am trying to reconfigure my MySQL instance but cannot find the config wizard. I looked here: http://dev.mysql.com/doc/refman/5.0/en/mysql-config-wizard-starting.html

As one user pointed out, the config .exe file is not included in versions above 5.5. Does anyone know how to reconfigure the MySQL server instance?

What causes InnoDB to write 100% more pages while slowing down answering queries?

Posted: 03 May 2013 06:05 AM PDT

I have a somewhat big database server: 4 processors, 32 cores, 288 GB RAM, 2 Ethernet cards bonded together, 2 independent RAID controller cards with 1 GB cache each, and 24 2.5" disks: 8 SAS 15K RPM in RAID 10 for the data partition, 3 SSDs in RAID 5 for the journal partition, and 2 SAS 15K RPM in RAID 1 for the operating system. The data partition has its own RAID controller, and the OS and journal share the other card.

I'm running Ubuntu 12.04.1 LTS and MySQL 5.1.56 with InnoDB plugin 12.7 from Percona on top of that.

To the Data Partition MySQL writes indexes and data files; to the Journal partition it writes binary logs, slow and error logs, and innodb journal files.

In the last few weeks I've been observing some weird behaviour: MySQL slows down answering queries, and at the same time the "InnoDB Buffer Pool Pages Written" metric doubles. The InnoDB log buffer also fills to 5 times its usual level for the same workload. I don't see any change in mutex or lock-granting activity on the graphs.

This is my current configuration:

basedir=/usr/
datadir=/var/lib/mysql/data
tmpdir=/var/lib/mysql/tmp
server-id=1
socket=/var/run/mysqld/mysqld.sock
port=3306
user=mysql
pid-file=/var/run/mysqld/mysqld.pid
language=/usr/share/mysql/english
default-storage-engine=InnoDB
read_only
max_heap_table_size=256M
tmp_table_size=256M
table_cache=1024
thread_cache_size=64
thread_stack=1024K
max_allowed_packet=16M
max_connections=255
max_user_connections=250
skip-external-locking
skip-slave-start
master-info-file=/var/lib/mysql/relay/master.info
relay-log=/var/lib/mysql/relay/relay-bin
relay-log-index=/var/lib/mysql/relay/relay-bin.index
relay-log-info-file=/var/lib/mysql/relay/relay-log.info
log-slave-updates=1
expire_logs_days=1
sync_binlog=1
max_binlog_size=1G
binlog-format=MIXED
log-bin=/var/lib/mysql/binary/mysqld-binlog
log-bin-index=/var/lib/mysql/binary/mysqld-binlog.index
log-warnings=2
log-error=/var/lib/mysql/mysqld-err.log
slow-query-log
log_slow_slave_statements=1
log_slow_timestamp_every=1
slow_query_log_microseconds_timestamp=1
log_slow_verbosity=full
long-query-time=0.05
slow-query-log-file=/var/lib/mysql/mysqld-slow.log
innodb_adaptive_flushing=1
innodb_additional_mem_pool_size=20M
innodb_buffer_pool_size=16G
innodb_data_file_path=ibdata1:20M:autoextend
innodb_data_home_dir=/var/lib/mysql/data
innodb_doublewrite_file=/var/lib/mysql/journal/ib_doublewrite
innodb_fast_shutdown=0
innodb_file_per_table
innodb_flush_log_at_trx_commit=1
innodb_flush_method=O_DIRECT
innodb_io_capacity=1500
innodb_log_group_home_dir=/var/lib/mysql/journal/
innodb_max_dirty_pages_pct=75
innodb_open_files=1024
innodb_rollback_on_timeout
innodb_thread_concurrency=20
query_cache_size=0
query_cache_type=0
key-buffer-size=200M
server-id=233111
sql-mode=NO_AUTO_CREATE_USER
max_connections=850
max_user_connections=800
read-only
table-open-cache=1300
log-error=/var/lib/mysql/log/dbserver-err.log
slow-query-log-file=/var/lib/mysql/log/dbserver-slow.log
relay-log=/var/lib/mysql/relay/dbserver-relay-bin
relay-log-index=/var/lib/mysql/relay/dbserver-relay-bin.index
relay-log-info-file=/var/lib/mysql/relay/dbserver-relay-log.info
log-bin=/var/lib/mysql/binary/dbserver-mysqld-binlog
log-bin-index=/var/lib/mysql/binary/dbserver-mysqld-binlog.index
relay_log_purge=0
innodb_buffer_pool_size=240G
innodb_log_buffer_size=2G
innodb_log_file_size=4G
large-pages

And this is my current filesystem organization:

SSD RAID 5, Controller #0
/var/lib/mysql/journal -> /srv/mysql/ssd/journal
/var/lib/mysql/log     -> /srv/mysql/ssd/log
/var/lib/mysql/relay   -> /srv/mysql/ssd/relay
/var/lib/mysql/tmp     -> /srv/mysql/ssd/tmp

SAS RAID 10, Controller #1
/var/lib/mysql/backup  -> /srv/mysql/sas/backup
/var/lib/mysql/binary  -> /srv/mysql/ssd/binary
/var/lib/mysql/data    -> /srv/mysql/sas/data

Can you please help me understand what is going on with my database server? Why is it slowing down while doubling InnoDB buffer pool write activity?

Get the rank of a user in a score table

Posted: 03 May 2013 09:43 AM PDT

I have a very simple MySQL table where I save high scores. It looks like this:

Id     Name     Score  

So far so good. The question is: how do I get a user's rank? Given a user's Name or Id, I want to find his rank when all rows are ordered descending by Score.

An Example

Id  Name   Score
1   Ida    100
2   Boo    58
3   Lala   88
4   Bash   102
5   Assem  99

In this case, Assem's rank would be 3, because he has the 3rd-highest score.

The query should return one row, which contains (only) the required Rank.
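
For illustration, a minimal sketch of the kind of query I am after (assuming the table is called highscores):

-- Standard competition rank: 1 + the number of rows with a higher score
SELECT COUNT(*) + 1 AS Rank
FROM highscores
WHERE Score > (SELECT Score FROM highscores WHERE Name = 'Assem');

With the sample data above this returns 3 for Assem, since exactly two rows (Bash and Ida) have a higher Score.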
