Monday, March 4, 2013

[how to] Query is taking a long time to execute (around 2 mins)

Query is taking a long time to execute (around 2 mins)

Posted: 04 Mar 2013 08:48 PM PST

My query is taking a long time to execute, around 2 minutes. Can anyone help me rewrite this query? Please find the explain plan and table structure below.

Query:

select count(*) as col_0_0_
from t_keyword tkeyword0_,
     t_tag_hierarchy_rel ttaghierar1_,
     t_tag_hierarchy_keyword_pattern ttaghierar2_
where tkeyword0_.own_domain_id = 263
  and tkeyword0_.id = ttaghierar1_.resource_id
  and ttaghierar1_.hierarchy_id = 18
  and ttaghierar2_.id = ttaghierar1_.pattern_id;

Explain Plan:

+----+-------------+--------------+--------+----------------------------------------------------------+---------------------+---------+----------------------------------+--------+-------------+
| id | select_type | table        | type   | possible_keys                                            | key                 | key_len | ref                              | rows   | Extra       |
+----+-------------+--------------+--------+----------------------------------------------------------+---------------------+---------+----------------------------------+--------+-------------+
|  1 | SIMPLE      | ttaghierar1_ | ref    | hierarcyHierarchyId,domain_id_hierarchy_id_resource_type | hierarcyHierarchyId | 4       | const                            | 111456 |             |
|  1 | SIMPLE      | ttaghierar2_ | eq_ref | PRIMARY                                                  | PRIMARY             | 4       | actonia.ttaghierar1_.pattern_id  |      1 | Using index |
|  1 | SIMPLE      | tkeyword0_   | ref    | id,own_domain_id                                         | id                  | 4       | actonia.ttaghierar1_.resource_id |      1 | Using where |
+----+-------------+--------------+--------+----------------------------------------------------------+---------------------+---------+----------------------------------+--------+-------------+
3 rows in set (1.62 sec)

Table Structure:

mysql> show create table t_keyword\G
*************************** 1. row ***************************
       Table: t_keyword
Create Table: CREATE TABLE `t_keyword` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `keyword_name` varchar(255) DEFAULT NULL,
  `keyword_value` varchar(255) DEFAULT NULL,
  `type` int(11) DEFAULT NULL,
  `description` varchar(2000) DEFAULT NULL,
  `own_domain_id` int(11) DEFAULT NULL,
  `rank_check` int(11) DEFAULT NULL,
  `rank1` int(11) DEFAULT NULL COMMENT 'yesterday rank value',
  `rank2` int(11) DEFAULT NULL COMMENT 'the day before yesterday rank value',
  `rank3` int(11) DEFAULT NULL COMMENT 'special date rank for overstock.com',
  `yesterday_entrances` int(11) DEFAULT NULL COMMENT 'yesterday entrances',
  `week_entrances` int(11) DEFAULT NULL COMMENT '7 days entrances',
  `current_ctr` float(16,4) DEFAULT NULL COMMENT 'Current CTR',
  `monthly_search_volume` int(11) DEFAULT NULL COMMENT 'Most Recent Month search volume',
  `avg_monthly_search_volume` int(11) DEFAULT NULL COMMENT 'avg_monthly_search_volume',
  `traffic_increase` int(11) DEFAULT NULL COMMENT 'Traffic Increase',
  `rank_improvement` int(11) DEFAULT NULL COMMENT 'Rank Improvement',
  `rank_update_date` date DEFAULT NULL COMMENT 'rank be updated for Special Date',
  `top_rank_targeturl_id` int(11) DEFAULT NULL,
  `frequency` int(10) DEFAULT '1' COMMENT '1: daily, 2: weekly, 3: monthly',
  `score` float DEFAULT NULL,
  `create_date` datetime DEFAULT NULL,
  `bing_rank1` int(10) DEFAULT NULL,
  `bing_rank2` int(10) DEFAULT NULL,
  `yesterday_bing_entrances` int(11) DEFAULT NULL,
  `bing_rank_improvement` int(11) DEFAULT NULL,
  KEY `id` (`id`),
  KEY `keyword_name` (`keyword_name`),
  KEY `own_domain_id` (`own_domain_id`,`rank_check`),
  KEY `rank_check` (`rank_check`)
) ENGINE=InnoDB AUTO_INCREMENT=720988063 DEFAULT CHARSET=utf8
/*!50100 PARTITION BY RANGE (`rank_check`)
(PARTITION p0 VALUES LESS THAN (0) ENGINE = InnoDB,
 PARTITION p1 VALUES LESS THAN (1) ENGINE = InnoDB,
 PARTITION p2 VALUES LESS THAN (2) ENGINE = InnoDB,
 PARTITION pEOW VALUES LESS THAN MAXVALUE ENGINE = InnoDB) */
1 row in set (0.03 sec)

mysql> show create table t_tag_hierarchy_rel\G
*************************** 1. row ***************************
       Table: t_tag_hierarchy_rel
Create Table: CREATE TABLE `t_tag_hierarchy_rel` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `domain_id` int(10) NOT NULL DEFAULT '0',
  `hierarchy_id` int(10) NOT NULL DEFAULT '0',
  `resource_id` int(10) NOT NULL DEFAULT '0',
  `resource_type` int(10) NOT NULL DEFAULT '0',
  `pattern_id` int(10) NOT NULL,
  `top_hierarchy_id` int(11) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `domain_id_resource_id_resource_type_top_hierarchy_id` (`domain_id`,`resource_id`,`resource_type`,`top_hierarchy_id`),
  KEY `hierarcyHierarchyId` (`hierarchy_id`),
  KEY `domain_id_hierarchy_id_resource_type` (`resource_id`,`hierarchy_id`)
) ENGINE=MyISAM AUTO_INCREMENT=126564587 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

mysql> show create table t_tag_hierarchy_keyword_pattern\G
*************************** 1. row ***************************
       Table: t_tag_hierarchy_keyword_pattern
Create Table: CREATE TABLE `t_tag_hierarchy_keyword_pattern` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `pattern` varchar(500) NOT NULL DEFAULT '0',
  `hierarchy_id` int(10) NOT NULL DEFAULT '0',
  `match_level` int(10) NOT NULL DEFAULT '0',
  `create_date` datetime DEFAULT NULL,
  `flg` int(10) DEFAULT NULL COMMENT '1: add keyword into hierarchy is done.',
  `is_regular_expression` int(2) DEFAULT NULL COMMENT '0 = string ,1 = java regular expression,2 = javascript regular expression',
  `is_case_sensitive` int(2) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `hierarchy_id` (`hierarchy_id`)
) ENGINE=MyISAM AUTO_INCREMENT=2293 DEFAULT CHARSET=utf8
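One direction worth testing, judging from the plan rather than verified against the data: the first step reads ~111k rows of t_tag_hierarchy_rel through the single-column hierarcyHierarchyId index, then does a row lookup per match in t_keyword. Composite indexes that cover every column the join touches can keep both steps inside the indexes (the index names below are placeholders):

ALTER TABLE t_tag_hierarchy_rel
    ADD INDEX hierarchy_resource_pattern (hierarchy_id, resource_id, pattern_id);

ALTER TABLE t_keyword
    ADD INDEX id_own_domain (id, own_domain_id);

If this helps, EXPLAIN should show "Using index" on the ttaghierar1_ and tkeyword0_ steps, avoiding ~111k random row reads.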

ORA-00990 error while trying to grant privileges: Why is that?

Posted: 04 Mar 2013 07:05 PM PST

After creating a user named suhail, I am trying to grant that user some privileges. But I am getting an error saying missing or invalid privilege. Why is that?

C:\Windows\system32>SQLPLUS / AS SYSDBA

SQL*Plus: Release 11.2.0.2.0 Production on Tue Mar 5 08:11:35 2013

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production

SQL> CREATE USER suhail IDENTIFIED BY InnocentDesire02;

User created.

SQL> GRANT CREATE TABLE,ALTER TABLE,INSERT TABLE,SELECT TABLE,DELETE TABLE,DROP VIEW,CREATE ROLE,CREATE SESSION,ALTER SESSION TO suhail;
GRANT CREATE TABLE,ALTER TABLE,INSERT TABLE,SELECT TABLE,DELETE TABLE,DROP VIEW,CREATE ROLE,CREATE SESSION,ALTER SESSION TO suhail
                   *
ERROR at line 1:
ORA-00990: missing or invalid privilege
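The failure is most likely the privilege names themselves: ALTER TABLE, INSERT TABLE, SELECT TABLE, DELETE TABLE and DROP VIEW are not valid Oracle system privileges. INSERT, SELECT and DELETE are object privileges granted per object (the system-wide forms are the ANY variants, e.g. SELECT ANY TABLE). A form Oracle should accept (a sketch; hr.employees is a placeholder object):

GRANT CREATE TABLE, CREATE VIEW, CREATE ROLE, CREATE SESSION, ALTER SESSION TO suhail;

-- object privileges are granted per object instead:
GRANT SELECT, INSERT, DELETE ON hr.employees TO suhail;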

SQL Server 2008 R2 not starting after moving model database

Posted: 04 Mar 2013 07:01 PM PST

SQL Server 2008 R2 is not starting after moving the model database. Any ideas? Error: 945, Severity: 14, State: 2. SQL 2008 R2 will not start with trace flags -f -T3608 or in single-user mode (-m). I need the right startup parameters so I can move model back.
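For reference, the usual sequence for a failed model move, sketched with the default service and logical file names and placeholder paths (verify all three against the instance): start the instance with -f -T3608 so only master is recovered, repoint model via sqlcmd, then restart normally.

net start MSSQLSERVER /f /T3608
sqlcmd -E -Q "ALTER DATABASE model MODIFY FILE (NAME = modeldev, FILENAME = 'D:\SQLData\model.mdf');"
sqlcmd -E -Q "ALTER DATABASE model MODIFY FILE (NAME = modellog, FILENAME = 'D:\SQLData\modellog.ldf');"
net stop MSSQLSERVER
net start MSSQLSERVER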

How can I get multiple rows instead of columns after self-joins?

Posted: 04 Mar 2013 05:03 PM PST

This is my situation (the real table isn't about people but about paths):

[Table: People]
id | name | type    | related_to
----------------------------
 1 |      |         |  0
 2 | Nico | Friend  |  1
 3 | Mika | Wife    |  2
 4 | Olaf | Child   |  3
 5 | Pete | Friend  |  1
 6 | Duke | Friend  |  1

So my query is as follows:

SELECT * FROM `people` base
 JOIN `people` related1 ON base.id = related1.related_to
 JOIN `people` related2 ON related1.id = related2.related_to
 JOIN `people` related3 ON related2.id = related3.related_to
 WHERE related1.name = 'Nico'
  AND  (related2.name = 'Mika' OR related2.type != 'FRIEND')
  AND  (related3.name = 'Duke' OR related3.type != 'FRIEND');

My problem:
I'm getting a lot of columns back now, but I just want a row per relation.

  1. Is the only solution to separate each join into its own query and use UNION to get one result (see the sketch after this list)? Or is there a faster/cleaner/better way?

  2. This query can be fired a lot; is there a solution to make a view for it? Keep in mind that this query is created automatically, so there may be not 3 relations but anywhere from 1 up to a maximum of 8.

  3. Is there a better term for self-join? My Google results were really poor while I searched for a solution.
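For question 1, a UNION-based sketch built directly on the sample query, one branch per relation level, so each related person comes back as its own row:

SELECT related1.*
  FROM `people` base
  JOIN `people` related1 ON base.id = related1.related_to
 WHERE related1.name = 'Nico'
UNION ALL
SELECT related2.*
  FROM `people` base
  JOIN `people` related1 ON base.id = related1.related_to
  JOIN `people` related2 ON related1.id = related2.related_to
 WHERE related1.name = 'Nico'
   AND (related2.name = 'Mika' OR related2.type != 'FRIEND');

The third (and any further) level repeats the same shape with one more join, which also makes the statement easy to generate mechanically for 1 to 8 relations.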

Thanks so much for even reading this despite my bad grammar! You rock!

Greetings Guido

Change Data Capture and the __$update_mask binary

Posted: 04 Mar 2013 04:12 PM PST

We're using CDC to capture changes made to a production table. The changed rows are being exported out to a data warehouse (informatica). I know that the __$update_mask column stores what columns were updated in a varbinary form. I also know that I can use a variety of CDC functions to find out from that mask what those columns were.

My question is this. Can anyone define for me the logic behind that mask so that we can identify the columns that were changed over in the warehouse? Since we're processing outside of the server we don't have easy access to those MSSQL CDC functions. I would rather just break down the mask myself in code.

I found an article by Pinal Dave indicating that it was a simple bitmap but in the sample masks I extracted I see many more digits than just bit values and no simple logic is jumping out at me as to how this field is built logically.

In short, I'd like to identify changed columns by hand from the __$update_mask field.
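For what it's worth, here is a sketch of the decoding I believe fn_cdc_is_bit_set performs, so the same arithmetic can be ported to code outside the server. The assumption (worth verifying against a known change): column ordinal N in cdc.captured_columns maps to bit (N - 1), counted from the least significant bit of the LAST byte of the mask, so the mask reads right to left.

-- @mask and @ordinal are example values, not from the question
DECLARE @mask varbinary(128) = 0x0D;
DECLARE @ordinal int = 3;

SELECT CASE
         WHEN SUBSTRING(@mask, DATALENGTH(@mask) - (@ordinal - 1) / 8, 1)
              & POWER(2, (@ordinal - 1) % 8) > 0
         THEN 1 ELSE 0
       END AS column_was_changed;

In client-side code: take the last byte for ordinals 1-8, the next-to-last byte for 9-16, and so on, testing bit ((ordinal - 1) mod 8) in each.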

Why do I need to use a sub query to filter down a grouped select?

Posted: 04 Mar 2013 02:52 PM PST

If I do this --

SELECT dv.Name
      ,MAX(hb.[DateEntered]) as DE
FROM
    [Devices] as dv
    INNER JOIN
    [Heartbeats] as hb ON hb.DeviceID = dv.ID
WHERE DE < '2013-03-04'
GROUP BY dv.Name

I get this error --

Msg 207, Level 16, State 1, Line 17 Invalid column name 'DE'.

If I do this --

SELECT Name, DE FROM (
    SELECT dv.Name
          ,MAX(hb.[DateEntered]) as DE
    FROM
        [Devices] as dv
        INNER JOIN
        [Heartbeats] as hb ON hb.DeviceID = dv.ID
    GROUP BY dv.Name
) as tmp WHERE tmp.DE < '2013-03-04'

it works as expected.

Can someone explain why I need to nest my main query as a subquery to limit my data set?

Also, is there maybe a better way to achieve the goal here? Retrieve all records from one table, and the single "top" related record ordered by [DateEntered] descending?
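The short answer is evaluation order: WHERE is evaluated before GROUP BY and before the SELECT list, so the alias DE doesn't exist yet when the WHERE clause runs. HAVING is evaluated after grouping, so it can filter on the aggregate without a subquery:

SELECT dv.Name
      ,MAX(hb.[DateEntered]) as DE
FROM [Devices] as dv
     INNER JOIN
     [Heartbeats] as hb ON hb.DeviceID = dv.ID
GROUP BY dv.Name
HAVING MAX(hb.[DateEntered]) < '2013-03-04'

For the second goal, each record plus its single newest related record, one common shape is ROW_NUMBER (a sketch against the same tables):

SELECT Name, DateEntered
FROM (SELECT dv.Name, hb.DateEntered,
             ROW_NUMBER() OVER (PARTITION BY dv.ID
                                ORDER BY hb.DateEntered DESC) AS rn
      FROM [Devices] dv
      JOIN [Heartbeats] hb ON hb.DeviceID = dv.ID) t
WHERE rn = 1;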

Tuning Postgres config for new Database Server

Posted: 04 Mar 2013 06:31 PM PST

I have finally gotten a purpose-built database machine for a project I'm working on.

The server is a 4-core Xeon with 64GB of RAM and a RAID 10 of 10krpm drives.

I've just moved the database over to the new machine, and I'm actually seeing worse performance with the same code than I did when it was running on a VM.

I'm looking for suggestions on which settings to adjust, and to what values.

Currently, I've upped shared_buffers to 60GB and made the kernel settings needed for that change.

temp_buffers is 32MB

work_mem is 5MB

I'm doing some loading that I'd like to finish quickly, so I have synchronous_commit set to off.

Can anyone point me in the right direction on how to crank the speed up? It ran quite a bit faster on a slower machine with much less memory, sharing drives with the machines that were making calls to it, so I'm not really sure what the issues are.

Thanks,
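A first thing to check, offered as a rule of thumb rather than a verified fix: shared_buffers at 60GB on a 64GB box leaves almost nothing for the OS page cache or per-connection memory, and commonly performs worse than a much smaller setting. A starting sketch for this class of hardware (every value here is an assumption to benchmark, not a prescription):

# postgresql.conf sketch for 64GB RAM, to be benchmarked
shared_buffers = 16GB          # ~25% of RAM; 60GB starves the OS page cache
effective_cache_size = 48GB    # planner hint only, not an allocation
work_mem = 32MB                # per sort/hash per connection; raise carefully
maintenance_work_mem = 1GB     # speeds index builds and vacuum
checkpoint_segments = 32       # fewer, larger checkpoints during bulk loads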

How to display Oracle Enterprise Manager 11g (OEM) alerts and reports in Nagios?

Posted: 04 Mar 2013 12:57 PM PST

I am working on a project that requires an unobtrusive integration of OEM data into a Nagios front-end. I know there are ways to use SNMP traps to do this, but I am not completely familiar with how SNMP traps work. I came across the following:

http://snmptt.sourceforge.net/

http://sourceforge.net/projects/nagtrap/

If anyone has experience displaying OEM source alert and monitoring data in Nagios (or another tool) via SNMP or API, please provide some examples and resources.

Thanks!

Are "create user" and "grant connect" functionally equivalent?

Posted: 04 Mar 2013 11:50 AM PST

I don't have a problem per se, but I've noticed that if I grant a database permission to a principal that doesn't exist in the database, the permission shows up in sys.database_permissions but that the user cannot connect to the database (as expected). If I then grant connect to the user, everything is fine. Which made me wonder if create user and grant connect were functionally equivalent. That is, is there anything additional that create user does that grant connect doesn't?

SQL Server Database Mail - Why can't I run an execute against xp_sysmail_activate?

Posted: 04 Mar 2013 01:11 PM PST

SQL Server 2008 R2 Standard x64

I can't send e-mail from this SQL Server (out of MANY that are configured pretty much identically AFAIK). I have created the mail profile and account. However, any time I try to send an e-mail with sp_send_dbmail (with my account, which is in the sysadmin role) or Database Mail->"Send Test E-Mail", nothing happens. Nothing goes into sysmail_log. The SQL Server error below is generated:

The activated proc '[dbo].[sp_sysmail_activate]' running on queue 'msdb.dbo.ExternalMailQueue' output the following:
'The EXECUTE permission was denied on the object 'xp_sysmail_activate', database 'mssqlsystemresource', schema 'sys'.'

Details

  • msdb.dbo.sysmail_mailitems has a record for each mail
  • The database mail log is empty (msdb.dbo.sysmail_log)
  • msdb.dbo.sysmail_sentitems is empty
  • I have run sysmail_stop_sp and sysmail_start_sp
  • "is_broker_enabled" is set to "1" in msdb

How is the execute permission denied when I am part of the sysadmin role? Any suggestions on what could be happening?

Can I query data_space names when not in context of the database for multiple databases?

Posted: 04 Mar 2013 10:31 AM PST

I have a data set pulled from sys.master_files. The data set looks something like this:

database_id filename         physical_name data_space_id
----------- ---------------- ------------- -------------
6           PRIMARY_06       some location 1
6           primary_09       some location 1
21          PRIMARY_01       some location 1
26          PRIMARY_01       some location 1
52          PRIMARY_01       some location 1
75          PRIMARY_01       some location 1

I would like to join in the FILEGROUP name to this dataset. It should just be a matter of joining on database_id and data_space_id, but I can't seem to find a function or system view analogous to sys.master_files for filegroups/dataspaces.

The brute force method would be to create this list as a temporary lookup using sp_MSForEachDB, but I would like to know if there's a better way.
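A sketch of that brute-force method, in case it is the only route (sp_MSForEachDB is undocumented, and '?' is its database-name placeholder):

CREATE TABLE #ds (database_id int, data_space_id int, name sysname);

EXEC sp_MSForEachDB
    'USE [?]; INSERT INTO #ds SELECT DB_ID(), data_space_id, name FROM sys.data_spaces;';

SELECT mf.database_id, mf.name AS [filename], mf.physical_name, ds.name AS filegroup_name
FROM sys.master_files AS mf
JOIN #ds AS ds
  ON ds.database_id = mf.database_id
 AND ds.data_space_id = mf.data_space_id;

As far as I know there is no server-wide catalog of data spaces analogous to sys.master_files, so some per-database iteration like this is hard to avoid.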

Moving to AWS RDS (InnoDB vs MyISAM)

Posted: 04 Mar 2013 01:09 PM PST

I am in the process of moving my servers from stand alone hosting to AWS using RDS (MySQL) for database usage.

One of the first things RDS states is that in order to use the backup and snapshot features of RDS, the tables must be using InnoDB.

My site has always run on MyISAM, probably mainly because 10+ years ago it seemed like the right choice.

Because of this, my database structure is quite simple and quite separate; there are no join queries on the site, so every select is only getting data from one table.

Some of the tables have quite a few indexed fields. The site requires frequent inserts, regular updates and many reads.

I don't use full-text searching, and the majority of queries revolve around either the UserID or the table's auto-increment ID.

So, my question is: what should I expect if I were to move to InnoDB? I am not that worried about the transaction side of it, and for that purpose MyISAM is OK. I am more concerned about backup. Previously I had a cron job that would automate the backup, but of course this came at the price of lockouts while the backup was in process.

My understanding of InnoDB is that because it does row-level locking, this would not be an issue.

But what about performance? I have read so many reviews and bench tests, but I don't know how they relate to my situation.

Can I move over to InnoDB without worrying about setting up foreign keys etc.? Can I keep things as simple as they are now, or would that negatively affect the system if using InnoDB?

I realize that InnoDB has become the default option in MySQL these days, which makes me think that all DBs should be running on it anyway.

What I don't want is to have my database fall over because I didn't set up InnoDB properly.

Is it even worth moving my tables over to it, given it's been running just fine on MyISAM for years?

Or is it a case of: this is good cause to change?
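On the backup point specifically, a sketch of the two relevant commands (table, user and database names are placeholders): the conversion is one statement per table, and once tables are InnoDB, mysqldump can take a consistent dump without locking writers out.

-- convert a table in place
ALTER TABLE my_table ENGINE=InnoDB;

# consistent, non-blocking dump; --single-transaction assumes InnoDB tables
mysqldump --single-transaction -u username -p db_name > backup.sql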

Question about Oracle's interpretation of "host cpu utilization" in v$sysmetric view

Posted: 04 Mar 2013 10:21 AM PST

I am studying the v$sysmetric view from Oracle and trying to understand Oracle's interpretation of "Host CPU Utilization".

The metric_unit column describes it as % busy/(idle + busy), but what is busy? Is that the sum of usr + sys time? I tried comparing the query results from v$sysmetric against output from mpstat, but they didn't seem to match.
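For reference, the query I would start from when comparing against mpstat (a sketch; I believe the exact metric name is 'Host CPU Utilization (%)', but verify it against your version):

SELECT metric_name, value, metric_unit, begin_time, end_time
FROM   v$sysmetric
WHERE  metric_name = 'Host CPU Utilization (%)'
ORDER  BY begin_time;

Note that v$sysmetric reports over fixed intervals (a long and a short one), so an mpstat sample only lines up if taken over the matching window.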

How many queries per second is my Postgres executing?

Posted: 04 Mar 2013 05:29 PM PST

How can I tell how many queries per second my Postgres database is executing?
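Postgres doesn't expose a direct statements-per-second counter, but pg_stat_database keeps cumulative transaction counts, so sampling twice and dividing by the interval gives transactions per second, a reasonable proxy (one transaction may contain several statements):

SELECT sum(xact_commit + xact_rollback) AS txn_count
FROM pg_stat_database;
-- run again N seconds later; (second_count - first_count) / N = txn/sec

For true per-statement counts, the usual route is log_min_duration_statement = 0 plus a log analyzer such as pgBadger.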

How to copy MySQL Database from Linux to Windows?

Posted: 04 Mar 2013 07:12 PM PST

I would like to copy a MySQL database from Linux to Windows. From a little research I found this URL http://www.cyberciti.biz/tips/howto-copy-mysql-database-remote-server.html where the author says we can copy a MySQL database using the following command in Linux.

Linux to Linux

mysqldump -u username -p 'password' db-name | ssh user@remote.box.com mysql -u username -p 'password' db-name  

Is there a command for copying MySQL database from Linux to Windows?
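One way that avoids piping over SSH entirely (a sketch; user names, paths and the database name are placeholders): dump to a file on Linux, copy the file across by any means (scp, pscp, a shared folder), and load it with the Windows mysql client.

# on the Linux server
mysqldump -u username -p db-name > db-name.sql

REM on the Windows machine, after copying the file over
mysql -u username -p db-name < C:\temp\db-name.sql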

Help with tricky update statement

Posted: 04 Mar 2013 07:39 PM PST

I've tried writing this update statement every possible way I can think of, but I either wind up producing invalid results or run into a syntax barrier.

I have two table variables:

DECLARE
    @Measurements TABLE(Precidence int,           -- the relative scale of a measurement
                        Measurement varchar(max),  -- Grams, Kilograms, Liters, etc
                        MeasurementType varchar(max)); -- Weight, Volume, etc

Example:

Precidence | Measurement | MeasurementType
     2     |     G       |   Weight
     1     |     KG      |   Weight
     1     |     GAL     |   Volume
     2     |     L       |   Volume
     3     |     ML      |   Volume

DECLARE
    @Items TABLE(ItemType varchar(max),
                 Quantity float,
                 Measurement varchar(max),
                 ToMeasurement varchar(max));

The @Items table can contain multiple measurements for the same ItemType. For each ItemType I need to identify the largest measurement, taking into account incompatible measurement types and update the ToMeasurement. The ultimate goal is to convert the Quantity of each ItemType to the largest measurement present in the @Items table so items of the same ItemType can be summed. I've already written conversion function and the sum operation.

Given the following table input:

ItemType | Quantity | Measurement | ToMeasurement
Widget   |    1     |    G        |    NULL
Widget   |    1     |    KG       |    NULL
Widget   |    1     |    ML       |    NULL
Widget   |    1     |    L        |    NULL

Should be updated to:

ItemType | Quantity | Measurement | ToMeasurement
Widget   |    1     |    G        |    KG
Widget   |    1     |    KG       |    KG
Widget   |    1     |    ML       |    L
Widget   |    1     |    L        |    L

I've rewritten the update multiple times and each time I come up short. At one point I had subqueries going four levels deep. I felt like I was getting close but it was getting so complex I couldn't see the forest for the trees. My most recent attempt is simpler but once again produces incorrect results:

UPDATE A
SET A.ToMeasurement = E.Measurement
FROM @Items A
     JOIN(SELECT C.ItemType,
                 D.Measurement
          FROM @Measurements B
               JOIN @Items C
               ON C.Measurement = B.Measurement
               JOIN @Measurements D
               ON D.MeasurementType = B.MeasurementType)E
     ON E.ItemType = A.ItemType;

I know I'm not even using the Precidence column, which is one of the reasons it is failing to produce the results I'm looking for.

Notes

Here's a query that produces the result I'm looking for (I think) but I'm still not sure how to turn it into an update statement:

SELECT A.ItemType,
       A.Quantity,
       A.Measurement,
       (SELECT TOP 1 M.Measurement FROM @Measurements M
        JOIN @Items C ON C.Measurement = M.Measurement
        WHERE M.MeasurementType = B.MeasurementType
        AND  C.ItemType = A.ItemType
        ORDER BY Precidence)ToMeasurement
FROM @Items A
     JOIN @Measurements B
     ON A.Measurement = B.Measurement;
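If that SELECT returns the right rows, it can be folded into an UPDATE almost verbatim using the T-SQL UPDATE ... FROM form, so the correlated subquery still sees the outer aliases (a sketch under the same assumptions as the SELECT above):

UPDATE A
SET A.ToMeasurement =
    (SELECT TOP 1 M.Measurement
     FROM @Measurements M
          JOIN @Items C ON C.Measurement = M.Measurement
     WHERE M.MeasurementType = B.MeasurementType
       AND C.ItemType = A.ItemType
     ORDER BY M.Precidence)
FROM @Items A
     JOIN @Measurements B
     ON A.Measurement = B.Measurement;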

FETCH API_CURSOR causing open transaction in tempDB

Posted: 04 Mar 2013 01:14 PM PST

A select statement run from Oracle 11gR1 to SQL Server 2005 using Gateway leaves an open transaction in tempdb. The user is a datareader in Database XYZ. Not sure why the open tran is in tempDB when the statement run is SELECT.

Has anyone seen this issue before?

Thanks in advance, sekhar

How to manage dynamic tables in Visual Studio 2010 Sql Server Database Projects

Posted: 04 Mar 2013 12:04 PM PST

I am migrating a common set of databases into a VS 2010 SQL Server DB project so we can keep it in source control, deploy common changes, etc. The databases are for the most part the same, except for a couple of schema changes based on the current environment. The schema changes generally consist of altering how computed columns are calculated, or how stored procedures and functions calculate things (converting euros to USD for monetary calculations when the environment is in the EU would be an example).

Currently the database has a set of stored procedures that make a few DDL changes based on data in the database. I need to add some DDL changes based on the environment, irrespective of data.

My question is: can I (or should I) manage the DDL differences in the DB project post-deploy script via IF clauses and ALTER statements, or should I migrate all of the changes to one-time-use stored procedures?

MDW performance_counter_instances table running out of identity values

Posted: 04 Mar 2013 12:53 PM PST

My MDW database's performance_counter_instances table has run out of identity values.

Has anyone ever encountered this?

Is it safe to change the data type of the identity field performance_counter_id from INT to BIGINT without breaking anything?

Bonus Question - What is the impact of this field running out of identity values?

I can't find anything useful about this issue online.

Thank you!

A database in MySQL is a schema in Oracle: I do not understand this statement

Posted: 04 Mar 2013 10:51 AM PST

Please help me understand this statement:

A database in MySQL is a schema in Oracle.

I have just started to use Oracle, and I find it different from other RDBMS software I have used, like MSSQL, MySQL and Derby.

For example, when I try to create a database with create database ghazals, it throws an error:

ERROR at line 1:
ORA-01501: CREATE DATABASE failed
ORA-01100: database already mounted

Also, commands like show databases do not work here.
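In practice, the MySQL habit of CREATE DATABASE maps onto creating a user in Oracle, since each user owns a schema of the same name. A minimal sketch (the password and tablespace are placeholders):

CREATE USER ghazals IDENTIFIED BY some_password;
GRANT CREATE SESSION, CREATE TABLE TO ghazals;
ALTER USER ghazals QUOTA UNLIMITED ON users;

-- the rough analogue of SHOW DATABASES:
SELECT username FROM all_users;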

SQL Server primary key create clustered index

Posted: 04 Mar 2013 11:17 AM PST

I've inherited a database where no primary keys were defined on the tables. There are also no clustered indexes assigned to the tables.

If I perform an ALTER TABLE to assign a primary key, will this result in SQL Server also creating a clustered index? If so, should I expect slowness on the database due to IO from the data being repositioned on the hard disk?
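For reference, both options spelled out (table and column names are placeholders). By default the primary key becomes the clustered index when the table has none, and building it rewrites the table, so expect IO while it runs; schedule it off-hours on large tables.

-- clustered by default when no clustered index exists
ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable PRIMARY KEY CLUSTERED (id);

-- or keep the table a heap and make the key nonclustered
ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable PRIMARY KEY NONCLUSTERED (id);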

Efficiently retrieve big amount of data based on date range

Posted: 04 Mar 2013 11:18 AM PST

Table EVENTS

TimeStamp  |O.Name   |Location  |Source     |
2008-11-12 |Pretorian|Európe    |England    |
2009-12-24 |Lex Rosa |Európe    |Italy      |
2010-01-01 |Nasdaq   |USA       |Whasington |
2010-02-12 |Plank    |Australia |Western Aus|
2010-03-11 |Pretorian|Európe    |England    |
2011-05-01 |Pretorian|Európe    |England    |

The above table is a representation of the table where different events get recorded. Events are created at short intervals, one every minute, so the table containing the data is really big.

I have a query to select the rows that match criteria taken from a different table in a different database. This is what the 2nd table looks like.

Table TAGS

ID| Location| Source| Tagname
01| EU      | ENG   | Pretorian
02| EU      | IT    | Lex Rosa
03| USA     | WA    | Nasdaq
04| AUS     | WA    | Plank

The idea is that whenever I choose a date range e.g 2009-01-01 and 2011-06-06 I should get these rows:

2010-03-11 |Pretorian|Európe    |England    |
2011-05-01 |Pretorian|Európe    |England    |

For this I have the following query, but the execution time is really slow:

DECLARE @MaxTS Datetime

SELECT @MaxTS = MAX(TimeStamp)
FROM LinkedServer.EVENTS
WHERE O.Name IN
(SELECT Tagname COLLATE DATABASE_DEFAULT FROM MyDatabase.TAGS
 WHERE LOCATION = 'EU' AND SOURCE = 'ENG')

IF @MaxTS > '2010-01-01'
BEGIN
  SELECT * FROM LinkedServer.EVENTS
  WHERE O.Name IN
                (SELECT Tagname COLLATE DATABASE_DEFAULT FROM MyDatabase.TAGS
                 WHERE LOCATION = 'EU' AND SOURCE = 'ENG')
END
ELSE
  PRINT 'Date out of range'

In this query I first look up the max date for the entry where location is EU and Source is ENG, and after I look up all the dates till the selected date if this is smaller than the MaxDate. Even though I get the results I am looking for, I am more curious if there is any way I could make this perform better. If you have any suggestions please let me know. Thank you!

EDIT: stored procedure used

ALTER PROCEDURE [dbo].[MyProc]
@StartDate  NVARCHAR(19),
@EndDate    NVARCHAR(19)
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;

-- Insert statements for procedure here
DECLARE @TSQL nvarchar(4000);
DECLARE @MaxDateTS nvarchar(1500);

    SELECT @MaxDateTS = MAX(TimeStamp)
    FROM LinkedServer.Events
    WHERE O.Name IN
    (SELECT Tagname COLLATE DATABASE_DEFAULT FROM MyDatabase.TAGS
     WHERE LOCATION = 'EU' AND SOURCE = 'ENG');

IF @MaxDateTS > @StartDate
BEGIN
     SELECT * FROM LinkedServer.Events
     WHERE O.Name IN
     (SELECT Tagname COLLATE DATABASE_DEFAULT FROM MyDatabase.TAGS
      WHERE LOCATION = 'EU' AND SOURCE = 'ENG')
     AND TIMESTAMP >= @StartDate AND TIMESTAMP < @EndDate
END
ELSE
PRINT 'Date out of range'

SET @TSQL = '(' + @MaxDateTS + ') ORDER BY TimeStamp';

PRINT(@TSQL);

EXEC (@TSQL);
END
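Two things worth testing, hedged since the real schema and linked-server setup aren't shown. First, push the date range into the MAX lookup as well, and add a composite index on the remote EVENTS table matching the name filter plus the range (column names taken from the sample data; to be run on the remote server):

CREATE INDEX IX_Events_Name_TimeStamp ON EVENTS (Name, TimeStamp);

Second, with four-part linked-server names, SQL Server may pull the whole remote table locally before filtering; wrapping the remote part in OPENQUERY with the filter inside the pass-through text keeps the filtering on the remote side.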

Consolidating indexes

Posted: 04 Mar 2013 05:09 PM PST

I have one big table that is used to generate a business intelligence cube. Currently it has around 40M rows and 55 columns. A lot of the cube dimensions are generated by running 'select distinct' on a column of the big table.

Currently I have one index for each 'select distinct' and 16 indexes in total. Is it better to merge some of those indexes? My thinking is that the cube needs to process all rows anyway, and the indexes speed up the process because it is much smaller than the entire table, and if I put two or three columns in one index the index size will not grow so much. Maybe the good candidates are columns that have low specificity.

Am I going in the right direction? Can the query make use of the second column (or third, fourth, ...) of the index?
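On the last question: a B-tree index can only be seeked on its leftmost column(s); a later column on its own can at best be satisfied by scanning the whole (smaller) index. A sketch with hypothetical columns:

-- composite index on two dimension columns
CREATE INDEX ix_cube ON BigFactTable (ColA, ColB);

SELECT DISTINCT ColA FROM BigFactTable;        -- served efficiently by ix_cube
SELECT DISTINCT ColA, ColB FROM BigFactTable;  -- also covered by ix_cube
SELECT DISTINCT ColB FROM BigFactTable;        -- at best a full scan of ix_cube

So merging makes sense mainly for columns that are queried together, or where a full scan of the merged index is still acceptably cheap.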

Why can't RDBMSs cluster the way NoSQL does?

Posted: 04 Mar 2013 06:39 PM PST

One of the big pluses for NoSQL DBMSs is that they can cluster more easily. Supposedly, with NoSQL you can create hundreds of cheap machines that store different pieces of data and query it all at once.

My question is this: why can't relational DBMSs like MySQL or SQL Server do the same? Is it that the vendors just haven't figured out a technical way to do this with their existing products, or is there some issue with the relational model that prevents it from being feasible? What is so great about the NoSQL way of storing and accessing data (key/value, documents, etc.) that makes clustering easier, if this is true at all?

Database Is In "In Recovery" Mode

Posted: 04 Mar 2013 02:52 PM PST

I have a database that has entered "Recovery Mode". It was working fine, but ended up in this status a few days ago.

I have been told that a cause can be the partition holding the data files running out of space. However, in this case the drive has plenty of space. The log files for the database are also of a reasonable size (300-450 MB).

I found this script on Google that is meant to return the amount of time a database has left before recovery is complete. It doesn't give me any results. We do not have a backup of the database, and we really need its data back.

DECLARE @DBName VARCHAR(64) = 'databasename'

DECLARE @ErrorLog AS TABLE([LogDate] CHAR(24), [ProcessInfo] VARCHAR(64), [TEXT] VARCHAR(MAX))

INSERT INTO @ErrorLog
EXEC sys.xp_readerrorlog 0, 1, 'Recovery of database', @DBName

SELECT TOP 5
 [LogDate]
,SUBSTRING([TEXT], CHARINDEX(') is ', [TEXT]) + 4, CHARINDEX(' complete (', [TEXT]) - CHARINDEX(') is ', [TEXT]) - 4) AS PercentComplete
,CAST(SUBSTRING([TEXT], CHARINDEX('approximately', [TEXT]) + 13, CHARINDEX(' seconds remain', [TEXT]) - CHARINDEX('approximately', [TEXT]) - 13) AS FLOAT)/60.0 AS MinutesRemaining
,CAST(SUBSTRING([TEXT], CHARINDEX('approximately', [TEXT]) + 13, CHARINDEX(' seconds remain', [TEXT]) - CHARINDEX('approximately', [TEXT]) - 13) AS FLOAT)/60.0/60.0 AS HoursRemaining
,[TEXT]
FROM @ErrorLog ORDER BY [LogDate] DESC

MySQL Benchmark on FreeBSD and Ubuntu

Posted: 04 Mar 2013 12:32 PM PST

I'd like to benchmark two DB servers; one is running Ubuntu, the other FreeBSD. Unfortunately, we are unable to install SysBench on the FreeBSD OS. Could anyone recommend any alternatives? Or should I look into creating some intense queries based on the schema?

How do I ensure consistent MySQL backups?

Posted: 04 Mar 2013 08:32 PM PST

How can we ensure MySQL backups are consistent when they are taken as full backups as well as incremental backups using binlogs?

My current approach: I record the COUNT(*) from a few random tables after issuing FLUSH LOGS on the database, then collect the binlogs generated up to that rotation, making sure the tables are not altered while running the SELECT COUNT(*). I then restore the backup plus binary logs on a different database server and cross-check the COUNT(*) of those random tables between the production database and the restored database.

Any other thoughts? Keep in mind that hits from application to tables can be paused only for a few minutes.
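A sketch of the standard dump-side half of this (assumes the tables are InnoDB; with MyISAM, --single-transaction gives no consistency guarantee):

mysqldump --single-transaction --master-data=2 --all-databases > full.sql

--master-data=2 writes the binlog file and position the dump corresponds to as a comment at the top of the file, so incremental binlogs can be applied from a known point without manual FLUSH LOGS bookkeeping. For verification, a checksum tool such as pt-table-checksum is more thorough than spot COUNT(*) comparisons.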

SHOW SLAVE HOSTS shows incorrect output

Posted: 04 Mar 2013 01:32 PM PST

I had used the --report-host option on one of my slaves so that I could see it under SHOW SLAVE HOSTS. Now that server is no longer in use; I have uninstalled it as well. Unfortunately, I still see the details of that server under SHOW SLAVE HOSTS (when this is executed on the uninstalled server's master), which is incorrect.

What changes should I make such that SHOW SLAVE HOSTS gives me the correct output?

CREATE INDEX for long columns

Posted: 04 Mar 2013 08:52 AM PST

I am looking for an alternative way to create an index on a long column:

create table line
(
    field_key   integer not null,
    value       varchar(4000),
    ...
);

create index key_value_idx on line (field_key, value);

This results in DB2 SQL Error: SQLCODE=-614, SQLSTATE=54008. The documentation says: "The sum of the stored lengths of the specified columns must not be greater than 1024." For such cases, MySQL has the syntax:

create index key_value_idx on line (field_key, value(1000));  

and HSQLDB just works without any limitations.

What is the analogue for DB2?
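I don't know of a direct prefix-index syntax in DB2, but one workaround sketch is to index a generated prefix column (assumes a DB2 LUW version with generated columns; untested):

alter table line add column value_prefix varchar(1000)
    generated always as (substr(value, 1, 1000));

create index key_value_idx on line (field_key, value_prefix);

Queries would then need to filter on value_prefix (optionally re-checking value) for the index to be used.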
