Monday, March 4, 2013

[how to] Query is taking a long time to execute (around 2 mins)



Query is taking a long time to execute (around 2 mins)

Posted: 04 Mar 2013 08:48 PM PST

My query is taking a long time to execute (around 2 minutes). Can anyone help me rewrite this query? Please find the explain plan and table structures below.

Query:

select count(*) as col_0_0_
from t_keyword tkeyword0_,
     t_tag_hierarchy_rel ttaghierar1_,
     t_tag_hierarchy_keyword_pattern ttaghierar2_
where tkeyword0_.own_domain_id = 263
  and tkeyword0_.id = ttaghierar1_.resource_id
  and ttaghierar1_.hierarchy_id = 18
  and ttaghierar2_.id = ttaghierar1_.pattern_id;

Explain Plan:

+----+-------------+--------------+--------+----------------------------------------------------------+---------------------+---------+----------------------------------+--------+-------------+
| id | select_type | table        | type   | possible_keys                                            | key                 | key_len | ref                              | rows   | Extra       |
+----+-------------+--------------+--------+----------------------------------------------------------+---------------------+---------+----------------------------------+--------+-------------+
|  1 | SIMPLE      | ttaghierar1_ | ref    | hierarcyHierarchyId,domain_id_hierarchy_id_resource_type | hierarcyHierarchyId | 4       | const                            | 111456 |             |
|  1 | SIMPLE      | ttaghierar2_ | eq_ref | PRIMARY                                                  | PRIMARY             | 4       | actonia.ttaghierar1_.pattern_id  |      1 | Using index |
|  1 | SIMPLE      | tkeyword0_   | ref    | id,own_domain_id                                         | id                  | 4       | actonia.ttaghierar1_.resource_id |      1 | Using where |
+----+-------------+--------------+--------+----------------------------------------------------------+---------------------+---------+----------------------------------+--------+-------------+
3 rows in set (1.62 sec)

Table Structure:

mysql> show create table t_keyword\G
*************************** 1. row ***************************
       Table: t_keyword
Create Table: CREATE TABLE `t_keyword` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `keyword_name` varchar(255) DEFAULT NULL,
  `keyword_value` varchar(255) DEFAULT NULL,
  `type` int(11) DEFAULT NULL,
  `description` varchar(2000) DEFAULT NULL,
  `own_domain_id` int(11) DEFAULT NULL,
  `rank_check` int(11) DEFAULT NULL,
  `rank1` int(11) DEFAULT NULL COMMENT 'yesterday rank value',
  `rank2` int(11) DEFAULT NULL COMMENT 'the day before yesterday rank value',
  `rank3` int(11) DEFAULT NULL COMMENT 'special date rank for overstock.com',
  `yesterday_entrances` int(11) DEFAULT NULL COMMENT 'yesterday entrances',
  `week_entrances` int(11) DEFAULT NULL COMMENT '7 days entrances',
  `current_ctr` float(16,4) DEFAULT NULL COMMENT 'Current CTR',
  `monthly_search_volume` int(11) DEFAULT NULL COMMENT 'Most Recent Month search volume',
  `avg_monthly_search_volume` int(11) DEFAULT NULL COMMENT 'avg_monthly_search_volume',
  `traffic_increase` int(11) DEFAULT NULL COMMENT 'Traffic Increase',
  `rank_improvement` int(11) DEFAULT NULL COMMENT 'Rank Improvement',
  `rank_update_date` date DEFAULT NULL COMMENT 'rank be updated for Special Date',
  `top_rank_targeturl_id` int(11) DEFAULT NULL,
  `frequency` int(10) DEFAULT '1' COMMENT '1: daily, 2: weekly, 3: monthly',
  `score` float DEFAULT NULL,
  `create_date` datetime DEFAULT NULL,
  `bing_rank1` int(10) DEFAULT NULL,
  `bing_rank2` int(10) DEFAULT NULL,
  `yesterday_bing_entrances` int(11) DEFAULT NULL,
  `bing_rank_improvement` int(11) DEFAULT NULL,
  KEY `id` (`id`),
  KEY `keyword_name` (`keyword_name`),
  KEY `own_domain_id` (`own_domain_id`,`rank_check`),
  KEY `rank_check` (`rank_check`)
) ENGINE=InnoDB AUTO_INCREMENT=720988063 DEFAULT CHARSET=utf8
/*!50100 PARTITION BY RANGE (`rank_check`)
(PARTITION p0 VALUES LESS THAN (0) ENGINE = InnoDB,
 PARTITION p1 VALUES LESS THAN (1) ENGINE = InnoDB,
 PARTITION p2 VALUES LESS THAN (2) ENGINE = InnoDB,
 PARTITION pEOW VALUES LESS THAN MAXVALUE ENGINE = InnoDB) */
1 row in set (0.03 sec)

mysql> show create table t_tag_hierarchy_rel\G
*************************** 1. row ***************************
       Table: t_tag_hierarchy_rel
Create Table: CREATE TABLE `t_tag_hierarchy_rel` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `domain_id` int(10) NOT NULL DEFAULT '0',
  `hierarchy_id` int(10) NOT NULL DEFAULT '0',
  `resource_id` int(10) NOT NULL DEFAULT '0',
  `resource_type` int(10) NOT NULL DEFAULT '0',
  `pattern_id` int(10) NOT NULL,
  `top_hierarchy_id` int(11) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `domain_id_resource_id_resource_type_top_hierarchy_id` (`domain_id`,`resource_id`,`resource_type`,`top_hierarchy_id`),
  KEY `hierarcyHierarchyId` (`hierarchy_id`),
  KEY `domain_id_hierarchy_id_resource_type` (`resource_id`,`hierarchy_id`)
) ENGINE=MyISAM AUTO_INCREMENT=126564587 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

mysql> show create table t_tag_hierarchy_keyword_pattern\G
*************************** 1. row ***************************
       Table: t_tag_hierarchy_keyword_pattern
Create Table: CREATE TABLE `t_tag_hierarchy_keyword_pattern` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `pattern` varchar(500) NOT NULL DEFAULT '0',
  `hierarchy_id` int(10) NOT NULL DEFAULT '0',
  `match_level` int(10) NOT NULL DEFAULT '0',
  `create_date` datetime DEFAULT NULL,
  `flg` int(10) DEFAULT NULL COMMENT '1: add keyword into hierarchy is done.',
  `is_regular_expression` int(2) DEFAULT NULL COMMENT '0 = string ,1 = java regular expression,2 = javascript regular expression',
  `is_case_sensitive` int(2) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `hierarchy_id` (`hierarchy_id`)
) ENGINE=MyISAM AUTO_INCREMENT=2293 DEFAULT CHARSET=utf8

ORA-00990 error while trying to grant privileges: Why is that?

Posted: 04 Mar 2013 07:05 PM PST

After creating a user named suhail, I am trying to grant that user some privileges. But I am getting an error saying missing or invalid privilege. Why is that?

C:\Windows\system32>SQLPLUS / AS SYSDBA

SQL*Plus: Release 11.2.0.2.0 Production on Tue Mar 5 08:11:35 2013

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production

SQL> CREATE USER suhail IDENTIFIED BY InnocentDesire02;

User created.

SQL> GRANT CREATE TABLE,ALTER TABLE,INSERT TABLE,SELECT TABLE,DELETE TABLE,DROP VIEW,CREATE ROLE,CREATE SESSION,ALTER SESSION TO suhail;
GRANT CREATE TABLE,ALTER TABLE,INSERT TABLE,SELECT TABLE,DELETE TABLE,DROP VIEW,CREATE ROLE,CREATE SESSION,ALTER SESSION TO suhail
                  *
ERROR at line 1:
ORA-00990: missing or invalid privilege

SQL Server 2008 R2 not starting after moving model database

Posted: 04 Mar 2013 07:01 PM PST

SQL Server 2008 R2 is not starting after moving the model database. Error: 945, Severity: 14, State: 2. It will not start with trace flags -f -T3608 or in single-user mode (-m). I need the startup parameters so I can connect and move model back. Any ideas?

How can I get multiple rows instead of columns after self-joins?

Posted: 04 Mar 2013 05:03 PM PST

This is my situation (the real table isn't about people but about paths):

[Table: People]
id | name | type    | related_to
--------------------------------
 1 |      |         |  0
 2 | Nico | Friend  |  1
 3 | Mika | Wife    |  2
 4 | Olaf | Child   |  3
 5 | Pete | Friend  |  1
 6 | Duke | Friend  |  1

So my query is as follows:

SELECT * FROM `people` base
JOIN `people` related1 ON base.id = related1.related_to
JOIN `people` related2 ON related1.id = related2.related_to
JOIN `people` related3 ON related2.id = related3.related_to
WHERE related1.name = 'Nico'
  AND (related2.name = 'Mika' OR related2.type != 'FRIEND')
  AND (related3.name = 'Duke' OR related3.type != 'FRIEND');

My problem:
I'm getting a lot of columns back now, but I just want one row per related person.

  1. Is the only solution to separate each join into its own query and use UNION to get one result? Or is there a faster/cleaner/better way?

  2. This query can be fired a lot; is there a way to make a view for it? Keep in mind that this query is created automatically, so there may be not 3 relations but anywhere from 1 up to a maximum of 8.

  3. Is there a better term for self-join? My Google searches really didn't turn up much while I was looking for a solution.

Thanks a lot for even reading this despite my bad grammar! You rock!

Greetings Guido

Change Data Capture and the __$update_mask binary

Posted: 04 Mar 2013 04:12 PM PST

We're using CDC to capture changes made to a production table. The changed rows are being exported to a data warehouse (Informatica). I know that the __$update_mask column stores which columns were updated, in varbinary form. I also know that I can use a variety of CDC functions to find out from that mask what those columns were.

My question is this: can anyone explain the logic behind that mask so that we can identify the changed columns over in the warehouse? Since we're processing outside of the server, we don't have easy access to those MSSQL CDC functions. I would rather just break down the mask myself in code.

I found an article by Pinal Dave indicating that it is a simple bitmap, but in the sample masks I extracted I see many more digits than just bit values, and no simple logic jumps out at me as to how this field is built.

In short, I'd like to identify changed columns by hand from the __$update_mask field.
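For what it's worth, the decoding can be reproduced by hand. A minimal sketch, assuming the common interpretation that when the varbinary is read as a big-endian integer, column ordinal 1 maps to the least significant bit, ordinal 2 to the next bit, and so on (worth verifying against sys.fn_cdc_is_bit_set on a known change before trusting it; the ordinal-to-name mapping would come from cdc.captured_columns):

```python
def changed_columns(update_mask: bytes, column_ordinals: dict[int, str]) -> list[str]:
    """Return names of columns whose bit is set in a CDC __$update_mask.

    Assumption: ordinal 1 is the least significant bit when the varbinary
    is interpreted as one big-endian integer.
    """
    mask = int.from_bytes(update_mask, byteorder="big")
    return [name for ordinal, name in sorted(column_ordinals.items())
            if mask & (1 << (ordinal - 1))]

# Example: mask 0x0A = binary 1010 -> ordinals 2 and 4 were updated.
cols = {1: "id", 2: "name", 3: "price", 4: "modified_at"}
print(changed_columns(b"\x0a", cols))  # ['name', 'modified_at']
```

Multi-byte masks (tables with more than 8 captured columns) fall out of the same big-endian interpretation.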

Why do I need to use a sub query to filter down a grouped select?

Posted: 04 Mar 2013 02:52 PM PST

If I do this --

SELECT dv.Name
      ,MAX(hb.[DateEntered]) as DE
FROM
    [Devices] as dv
    INNER JOIN
    [Heartbeats] as hb ON hb.DeviceID = dv.ID
WHERE DE < '2013-03-04'
GROUP BY dv.Name

I get this error --

Msg 207, Level 16, State 1, Line 17 Invalid column name 'DE'.

If I do this --

SELECT Name, DE FROM (
    SELECT dv.Name
          ,MAX(hb.[DateEntered]) as DE
    FROM
        [Devices] as dv
        INNER JOIN
        [Heartbeats] as hb ON hb.DeviceID = dv.ID
    GROUP BY dv.Name
) as tmp WHERE tmp.DE < '2013-03-04'

it works as expected.

Can someone explain why I need to nest my main query as a subquery to limit my data set?

Also, is there maybe a better way to achieve the goal here? Retrieve all records from one table, and the single "top" related record ordered by [DateEntered] descending?
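The underlying mechanism is SQL's logical evaluation order: WHERE is processed before the SELECT list, so a column alias defined in SELECT does not exist yet when WHERE runs. HAVING, by contrast, is evaluated after GROUP BY, so the aggregate can be filtered without a subquery (in SQL Server you must repeat the aggregate expression rather than the alias). A small sketch against an illustrative in-memory schema, using SQLite via Python only so it is self-contained; the evaluation-order rule is the same in SQL Server:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Devices (ID INTEGER, Name TEXT);
    CREATE TABLE Heartbeats (DeviceID INTEGER, DateEntered TEXT);
    INSERT INTO Devices VALUES (1, 'dev-a'), (2, 'dev-b');
    INSERT INTO Heartbeats VALUES
        (1, '2013-02-01'), (1, '2013-03-10'),  -- dev-a: max date after the cutoff
        (2, '2013-01-15'), (2, '2013-02-20');  -- dev-b: max date before the cutoff
""")

# HAVING is evaluated after GROUP BY, so the aggregate can be filtered
# directly; no subquery (and no alias) is needed.
rows = conn.execute("""
    SELECT dv.Name, MAX(hb.DateEntered) AS DE
    FROM Devices dv JOIN Heartbeats hb ON hb.DeviceID = dv.ID
    GROUP BY dv.Name
    HAVING MAX(hb.DateEntered) < '2013-03-04'
""").fetchall()
print(rows)  # [('dev-b', '2013-02-20')]
```

For the "single top related record" part, a common SQL Server pattern is ROW_NUMBER() OVER (PARTITION BY dv.ID ORDER BY hb.DateEntered DESC) filtered to row number 1.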

Tuning Postgres config for new Database Server

Posted: 04 Mar 2013 06:31 PM PST

I have finally gotten a purpose-built database machine for a project I'm working on.

The server is a 4-core Xeon with 64 GB of RAM and a RAID 10 of 10k RPM drives.

I've just moved the database over to the new machine, and I'm actually seeing worse performance with the same code than I did when this machine was running on a VM.

I'm looking for suggestions on which settings to adjust, and to what values.

Currently, I've upped shared_buffers to 60GB and made the kernel changes needed for that.

temp_buffers is 32MB

work_mem is 5MB

I'm working on some loads I'd like to get in quickly, so I have synchronous_commit set to off.

Can anyone point me in the right direction as to how to crank the speed up? I had it running quite a bit faster on a slower machine with much less memory, sharing drives with the machines that were making calls to it, so I'm not really sure what the issue is.
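For reference, a sketch of commonly cited community starting points for a dedicated 64 GB Postgres box of that era. The values are illustrative assumptions to tune from, not measured recommendations; note in particular that shared_buffers is usually set to roughly 25% of RAM rather than most of it, since Postgres also relies heavily on the OS page cache:

```
# Illustrative starting points for a dedicated 64 GB server; tune and measure.
shared_buffers = 16GB            # ~25% of RAM is the usual starting point
effective_cache_size = 48GB      # planner hint: memory available for caching
work_mem = 64MB                  # per sort/hash operation, per backend
maintenance_work_mem = 1GB       # index builds, VACUUM
wal_buffers = 16MB
checkpoint_segments = 32         # pre-9.5 setting; spreads out checkpoint I/O
```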

Thanks,

How to display Oracle Enterprise Manager 11g (OEM) alerts and reports in Nagios?

Posted: 04 Mar 2013 12:57 PM PST

I am working on a project that requires an unobtrusive integration of OEM data into a Nagios front-end. I know there are ways to use SNMP traps to do this, but I am not completely familiar with how SNMP traps work. I came across the following:

http://snmptt.sourceforge.net/

http://sourceforge.net/projects/nagtrap/

If anyone has experience displaying OEM source alert and monitoring data in Nagios (or another tool) via SNMP or API, please provide some examples and resources.

Thanks!

Are "create user" and "grant connect" functionally equivalent?

Posted: 04 Mar 2013 11:50 AM PST

I don't have a problem per se, but I've noticed that if I grant a database permission to a principal that doesn't exist in the database, the permission shows up in sys.database_permissions but that the user cannot connect to the database (as expected). If I then grant connect to the user, everything is fine. Which made me wonder if create user and grant connect were functionally equivalent. That is, is there anything additional that create user does that grant connect doesn't?

SQL Server Database Mail - Why can't I run an execute against xp_sysmail_activate?

Posted: 04 Mar 2013 01:11 PM PST

SQL Server 2008 R2 Standard x64

I can't send e-mail from this SQL Server (one out of MANY that are configured pretty much identically, AFAIK). I have created the mail profile and account. However, any time I try to send an e-mail with sp_send_dbmail (with my account, which is in the sysadmin role) or via Database Mail->"Send Test E-Mail", nothing happens. Nothing goes into sysmail_log. The SQL Server error below is generated:

The activated proc '[dbo].[sp_sysmail_activate]' running on queue 'msdb.dbo.ExternalMailQueue' output the following:
'The EXECUTE permission was denied on the object 'xp_sysmail_activate', database 'mssqlsystemresource', schema 'sys'.'

Details

  • msdb.dbo.sysmail_mailitems has a record for each mail
  • The database mail log is empty (msdb.dbo.sysmail_log)
  • msdb.dbo.sysmail_sentitems is empty
  • I have run sysmail_stop_sp and sysmail_start_sp
  • "is_broker_enabled" is set to "1" in msdb

How is the execute permission denied when I am part of the sysadmin role? Any suggestions on what could be happening?

Can I query data_space names when not in context of the database for multiple databases?

Posted: 04 Mar 2013 10:31 AM PST

I have a data set pulled from sys.master_files. The data set looks something like this:

database_id filename         physical_name data_space_id
----------- ---------------- ------------- -------------
6           PRIMARY_06       some location 1
6           primary_09       some location 1
21          PRIMARY_01       some location 1
26          PRIMARY_01       some location 1
52          PRIMARY_01       some location 1
75          PRIMARY_01       some location 1

I would like to join in the FILEGROUP name to this dataset. It should just be a matter of joining on database_id and data_space_id, but I can't seem to find a function or system view analogous to sys.master_files for filegroups/dataspaces.

The brute force method would be to create this list as a temporary lookup using sp_MSForEachDB, but I would like to know if there's a better way.

Moving to AWS RDS (InnoDB vs MyISAM)

Posted: 04 Mar 2013 01:09 PM PST

I am in the process of moving my servers from stand alone hosting to AWS using RDS (MySQL) for database usage.

One of the first things RDS states is that in order to use their backup, and snapshot features of RDS the tables must be using InnoDB.

My site has always run on MyISAM, probably mainly because 10+ years ago it seemed like the right choice.

Because of this, my database structure is quite simple and quite separate: there are no join queries on the site, so every select only gets data from one table.

Some of the tables have quite a few indexed fields. The site requires frequent inserts, regular updates, and many reads.

I don't use full-text searching; the majority of queries revolve around either the UserID or the table's auto-increment ID.

So, my question is: what should I expect if I move to InnoDB? I am not that worried about the transaction side of it; for that, MyISAM is OK. I am more concerned about backups. Previously I had a cron job that automated the backup, but of course this came at the price of lockouts while the backup was in progress.

My understanding of InnoDB is that because it does row-level locking, this would not be an issue.

But what about performance? I have read so many reviews and benchmark tests, but I don't know how they relate to my situation.

Can I move over to InnoDB without worrying about setting up foreign keys, etc.? Can I keep things as simple as they are now, or would that negatively affect the system under InnoDB?

I realize that InnoDB has become the default engine in MySQL these days, which makes me think that all databases should be running on it anyway.

What I don't want is to have my database fall over because I didn't set up InnoDB properly.

Is it even worth moving my tables over, given the site has been running just fine on MyISAM for years?

Or is this good cause to change?

Question about Oracle's interpretation of "host cpu utilization" in v$sysmetric view

Posted: 04 Mar 2013 10:21 AM PST

I am studying the v$sysmetric view in Oracle and trying to understand Oracle's interpretation of "Host CPU Utilization".

The metric_unit says % busy/(idle + busy), but what is busy? Is it the sum of usr + sys time? I tried comparing the query results from v$sysmetric against the output of mpstat, but they didn't seem to match.

How many queries per second is my Postgres executing?

Posted: 04 Mar 2013 05:29 PM PST

How can I tell how many queries per second my Postgres database is executing?

How to copy MySQL Database from Linux to Windows?

Posted: 04 Mar 2013 07:12 PM PST

I would like to copy a MySQL database from Linux to Windows. From a little research I found this URL, http://www.cyberciti.biz/tips/howto-copy-mysql-database-remote-server.html, where the author says we can copy a MySQL database using the following command on Linux:

Linux to Linux

mysqldump -u username -p 'password' db-name | ssh user@remote.box.com mysql -u username -p 'password' db-name  

Is there a command for copying MySQL database from Linux to Windows?

Help with tricky update statement

Posted: 04 Mar 2013 07:39 PM PST

I've tried writing this update statement every possible way I can think of, but I either wind up producing invalid results or run into a syntax barrier.

I have two table variables:

DECLARE
    @Measurements TABLE(Precidence int,                -- the relative scale of a measurement
                        Measurement varchar(max),      -- Grams, Kilograms, Liters, etc
                        MeasurementType varchar(max)); -- Weight, Volume, etc

Example:

Precidence | Measurement | MeasurementType
     2     |     G       |   Weight
     1     |     KG      |   Weight
     1     |     GAL     |   Volume
     2     |     L       |   Volume
     3     |     ML      |   Volume

DECLARE
    @Items TABLE(ItemType varchar(max),
                 Quantity float,
                 Measurement varchar(max),
                 ToMeasurement varchar(max));

The @Items table can contain multiple measurements for the same ItemType. For each ItemType I need to identify the largest measurement, taking into account incompatible measurement types, and update the ToMeasurement column. The ultimate goal is to convert the Quantity of each ItemType to the largest measurement present in the @Items table so that items of the same ItemType can be summed. I've already written the conversion function and the sum operation.

Given the following table input:

ItemType | Quantity | Measurement | ToMeasurement
Widget   |    1     |    G        |    NULL
Widget   |    1     |    KG       |    NULL
Widget   |    1     |    ML       |    NULL
Widget   |    1     |    L        |    NULL

Should be updated to:

ItemType | Quantity | Measurement | ToMeasurement
Widget   |    1     |    G        |    KG
Widget   |    1     |    KG       |    KG
Widget   |    1     |    ML       |    L
Widget   |    1     |    L        |    L

I've rewritten the update multiple times and each time I come up short. At one point I had subqueries going four levels deep. I felt like I was getting close but it was getting so complex I couldn't see the forest for the trees. My most recent attempt is simpler but once again produces incorrect results:

UPDATE A
SET A.ToMeasurement = E.Measurement
FROM @Items A
     JOIN(SELECT C.ItemType,
                 D.Measurement
          FROM @Measurements B
               JOIN @Items C
               ON C.Measurement = B.Measurement
               JOIN @Measurements D
               ON D.MeasurementType = B.MeasurementType)E
     ON E.ItemType = A.ItemType;

I know I'm not even using the Precidence column, which is one of the reasons it is failing to produce the results I'm looking for.

Notes

Here's a query that produces the result I'm looking for (I think), but I'm still not sure how to turn it into an update statement:

SELECT A.ItemType,
       A.Quantity,
       A.Measurement,
       (SELECT TOP 1 M.Measurement FROM @Measurements M
        JOIN @Items C ON C.Measurement = M.Measurement
        WHERE M.MeasurementType = B.MeasurementType
          AND C.ItemType = A.ItemType
        ORDER BY Precidence) ToMeasurement
FROM @Items A
     JOIN @Measurements B
     ON A.Measurement = B.Measurement;

FETCH API_CURSOR causing open transaction in tempDB

Posted: 04 Mar 2013 01:14 PM PST

A SELECT statement run from Oracle 11gR1 against SQL Server 2005 using Gateway leaves an open transaction in tempdb. The user is a datareader in database XYZ. I'm not sure why the open transaction is in tempdb when the statement run is a SELECT.

Has anyone seen this issue before?

Thanks in advance, sekhar

How to manage dynamic tables in Visual Studio 2010 Sql Server Database Projects

Posted: 04 Mar 2013 12:04 PM PST

I am migrating a common set of databases into a VS 2010 SQL Server DB project so we can keep them in source control, deploy common changes, etc.
The databases are for the most part the same, except for a couple of schema changes based on the current environment.
The schema changes generally consist of altering how computed columns are calculated, or how stored procedures and functions calculate things (converting euros to USD for monetary calculations when the environment is in the EU, for example).

Currently the database has a set of stored procedures that make a few DDL changes based on data in the database. I need to add some DDL changes based on the environment, irrespective of data.

My question is: can I (or should I) manage the DDL differences in the DB project's post-deploy script via IF clauses and ALTER statements, or should I migrate all of the changes to one-time-use stored procedures?

MDW performance_counter_instances table running out of identity values

Posted: 04 Mar 2013 12:53 PM PST

My MDW database's performance_counter_instances table has run out of identity values.

Has anyone ever encountered this?

Is it safe to change the data type of the identity field performance_counter_id from INT to BIGINT without breaking anything?

Bonus Question - What is the impact of this field running out of identity values?

I can't find anything useful about this issue online.

Thank you!

A database in MySQL is a schema in Oracle : I do not understand this statement

Posted: 04 Mar 2013 10:51 AM PST

Please help me understand this statement:

A database in MySQL is a schema in Oracle.

I have just started to use Oracle, and I find it different from other RDBMS software I have used, like MSSQL, MySQL, and Derby.

For example, when I try to create a database with create database ghazals, it throws an error:

ERROR at line 1:
ORA-01501: CREATE DATABASE failed
ORA-01100: database already mounted

Also, commands like show databases do not work here.

SQL Server primary key create clustered index

Posted: 04 Mar 2013 11:17 AM PST

I've inherited a database where no primary keys were defined on the tables. There are also no clustered indexes assigned to the tables.

If I perform an ALTER TABLE to assign a primary key, will SQL Server also create a clustered index? If so, should I expect slowness on the database due to I/O from the data being repositioned on disk?

Efficiently retrieve a large amount of data based on a date range

Posted: 04 Mar 2013 11:18 AM PST

Table EVENTS

TimeStamp  |O.Name   |Location  |Source     |
2008-11-12 |Pretorian|Európe    |England    |
2009-12-24 |Lex Rosa |Európe    |Italy      |
2010-01-01 |Nasdaq   |USA       |Whasington |
2010-02-12 |Plank    |Australia |Western Aus|
2010-03-11 |Pretorian|Európe    |England    |
2011-05-01 |Pretorian|Európe    |England    |

The above table is a representation of the table where different events get recorded. Events are created at a fast rate (every minute there is a new event), so the table containing the data is really big.

I have a query to select the rows that match some criteria taken from a different table in a different database. This is what the 2nd table looks like.

Table TAGS

ID| Location| Source| Tagname
01| EU      | ENG   | Pretorian
02| EU      | IT    | Lex Rosa
03| USA     | WA    | Nasdaq
04| AUS     | WA    | Plank

The idea is that whenever I choose a date range e.g 2009-01-01 and 2011-06-06 I should get these rows:

2010-03-11 |Pretorian|Európe    |England    |
2011-05-01 |Pretorian|Európe    |England    |

For this I have the following query, but the execution time is really slow:

DECLARE @MaxTS Datetime

SELECT @MaxTS = MAX(TimeStamp)
FROM LinkedServer.EVENTS
WHERE O.Name IN
    (SELECT Tagname COLLATE DATABASE_DEFAULT FROM MyDatabase.TAGS
     WHERE LOCATION = 'EU' AND SOURCE = 'ENG')

IF @MaxTS > '2010-01-01'
BEGIN
    SELECT * FROM LinkedServer.EVENTS
    WHERE O.Name IN
        (SELECT Tagname COLLATE DATABASE_DEFAULT FROM MyDatabase.TAGS
         WHERE LOCATION = 'EU' AND SOURCE = 'ENG')
END
ELSE
    PRINT 'Date out of range'

In this query I first look up the max date for the entries where Location is EU and Source is ENG, and then I look up all the dates up to the selected date, if that date is smaller than the max date. Even though I get the results I am looking for, I am curious whether there is any way to make this perform better. If you have any suggestions, please let me know. Thank you!

EDIT: stored procedure used

ALTER PROCEDURE [dbo].[MyProc]
    @StartDate NVARCHAR(19),
    @EndDate   NVARCHAR(19)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Insert statements for procedure here
    DECLARE @TSQL nvarchar(4000);
    DECLARE @MaxDateTS nvarchar(1500);

    SELECT @MaxDateTS = MAX(TimeStamp)
    FROM LinkedServer.Events
    WHERE O.Name IN
        (SELECT Tagname COLLATE DATABASE_DEFAULT FROM MyDatabase.TAGS
         WHERE LOCATION = 'EU' AND SOURCE = 'ENG');

    IF @MaxDateTS > @StartDate
    BEGIN
        SELECT * FROM LinkedServer.Events
        WHERE O.Name IN
            (SELECT Tagname COLLATE DATABASE_DEFAULT FROM MyDatabase.TAGS
             WHERE LOCATION = 'EU' AND SOURCE = 'ENG')
          AND TIMESTAMP >= @StartDate AND TIMESTAMP < @EndDate
    END
    ELSE
        PRINT 'Date out of range'

    SET @TSQL = '(' + @MaxDateTS + ') ORDER BY TimeStamp';

    PRINT(@TSQL);

    EXEC (@TSQL);
END

Consolidating indexes

Posted: 04 Mar 2013 05:09 PM PST

I have one big table that is used to generate business intelligence cube. Currently it has around 40M rows and 55 columns. A lot of the cube dimensions are generated by running 'select distinct' on a column of the big table.

Currently I have one index for each 'select distinct', 16 indexes in total. Is it better to merge some of those indexes? My thinking is that the cube needs to process all rows anyway, and the indexes speed up the process because they are much smaller than the entire table; if I put two or three columns in one index, the index size will not grow that much. Maybe the good candidates are columns with low specificity.

Am I going in the right direction? Can the query make use of the second column (or third, fourth, ...) of the index?

Why can't RDBMSs cluster the way NoSQL does?

Posted: 04 Mar 2013 06:39 PM PST

One of the big pluses of NoSQL DBMSs is that they can cluster more easily. Supposedly with NoSQL you can create hundreds of cheap machines that store different pieces of data and query it all at once.

My question is this: why can't relational DBMSs like MySQL or SQL Server do the same? Is it that the vendors just haven't figured out a technical way to do this with their existing products, or is there some issue with the relational model that prevents it from being feasible? What is so great about the NoSQL way of storing and accessing data (key/value, documents, etc.) that makes clustering easier, if this is true at all?

Database is stuck "In Recovery"

Posted: 04 Mar 2013 02:52 PM PST

I have a database that has entered "Recovery Mode". It was working fine, but ended up in this status a few days ago.

I have been told that one cause can be the partition holding the data files running out of space. However, in this case the drive has plenty of space, and the log files for the database are also a reasonable size (300-450 MB).

I found this script on Google that is meant to return the amount of time the database has left before recovery is complete, but it doesn't give me any results. We do not have a backup of the database, and we really need its data back.

DECLARE @DBName VARCHAR(64) = 'databasename'

DECLARE @ErrorLog AS TABLE([LogDate] CHAR(24), [ProcessInfo] VARCHAR(64), [TEXT] VARCHAR(MAX))

INSERT INTO @ErrorLog
EXEC sys.xp_readerrorlog 0, 1, 'Recovery of database', @DBName

SELECT TOP 5
 [LogDate]
,SUBSTRING([TEXT], CHARINDEX(') is ', [TEXT]) + 4, CHARINDEX(' complete (', [TEXT]) - CHARINDEX(') is ', [TEXT]) - 4) AS PercentComplete
,CAST(SUBSTRING([TEXT], CHARINDEX('approximately', [TEXT]) + 13, CHARINDEX(' seconds remain', [TEXT]) - CHARINDEX('approximately', [TEXT]) - 13) AS FLOAT)/60.0 AS MinutesRemaining
,CAST(SUBSTRING([TEXT], CHARINDEX('approximately', [TEXT]) + 13, CHARINDEX(' seconds remain', [TEXT]) - CHARINDEX('approximately', [TEXT]) - 13) AS FLOAT)/60.0/60.0 AS HoursRemaining
,[TEXT]
FROM @ErrorLog ORDER BY [LogDate] DESC

MySQL Benchmark on FreeBSD and Ubuntu

Posted: 04 Mar 2013 12:32 PM PST

I'd like to benchmark two DB servers: one is running Ubuntu, the other FreeBSD. Unfortunately we are unable to install SysBench on the FreeBSD box. Could anyone recommend any alternatives? Or should I look into creating some intense queries based on the schema?

How do I ensure consistent MySQL backups?

Posted: 04 Mar 2013 08:32 PM PST

How can we ensure MySQL backups are consistent when they are taken as full backups as well as incremental backups using binlogs?

Currently, I record the COUNT(*) from a few random tables after issuing FLUSH LOGS on the database. I then take the binlogs generated up to the flushed binary file, making sure the tables are not altered while doing SELECT COUNT(*). I then restore the binary logs on a different database server and cross-check COUNT(*) on those random tables between the production database and the restored database.

Any other thoughts? Keep in mind that hits from application to tables can be paused only for a few minutes.

SHOW SLAVE HOSTS shows incorrect output

Posted: 04 Mar 2013 01:32 PM PST

I had used the --report-host option on one of my slaves so that I could see it under SHOW SLAVE HOSTS. That server is no longer in use, and I have uninstalled it as well. Unfortunately, I still see the details of that server under SHOW SLAVE HOSTS (when this is executed on that server's former master), which is incorrect.

What changes should I make such that SHOW SLAVE HOSTS gives me the correct output?

CREATE INDEX for long columns

Posted: 04 Mar 2013 08:52 AM PST

I am looking for an alternative way to create an index on a long column:

create table line  (      field_key   integer not null,      value       varchar(4000),      ...  );    create index key_value_idx on line (field_key, value);  

fails with DB2 SQL Error: SQLCODE=-614, SQLSTATE=54008. The documentation says: "The sum of the stored lengths of the specified columns must not be greater than 1024." For such cases MySQL has a prefix-index syntax:

create index key_value_idx on line (field_key, value(1000));  

and HSQLDB just works without any limitations.

What is the analogue for DB2?
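DB2 has no prefix-index syntax, but one commonly suggested workaround is to index a generated column holding a prefix of the value. A sketch only: the 200-character prefix length and the column name value_prefix are assumptions, and adding a generated column may require a SET INTEGRITY step depending on the DB2 version:

```sql
-- index a shorter generated prefix instead of the full 4000-byte column
ALTER TABLE line ADD COLUMN value_prefix VARCHAR(200)
    GENERATED ALWAYS AS (SUBSTR(value, 1, 200));

CREATE INDEX key_value_idx ON line (field_key, value_prefix);
```

Queries then need to filter on the prefix column (optionally with a residual predicate on the full value) for the index to be used.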

[SQL Server] How to split query result into separate columns

[SQL Server] How to split query result into separate columns


How to split query result into separate columns

Posted: 04 Mar 2013 12:45 AM PST

I have one table that contains all my data. I can group it into a result as below (the weight is the sum per day):

Time                Plant   Weight
01/03/2013 10:00    AP1     1.2
02/03/2013 11:00    AP1     2.3
03/03/2013 12:00    AP1     1.1
01/03/2013 10:00    AP2     1.4
02/03/2013 10:00    AP2     1.1
03/03/2013 10:00    AP2     1.1

How do I go about splitting the sum(weight) up into separate columns based on 'Plant', so the query result looks like this:

Time                AP1    AP2
01/03/2013 10:00    1.2    1.4
02/03/2013 11:00    2.3    1.1
03/03/2013 12:00    1.1    1.1

Basically I want to group the results into columns rather than one long result. Thanks
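One common approach is conditional aggregation. A minimal sketch, assuming a table name (Readings) and a hard-coded plant list:

```sql
-- pivot Plant into columns by summing conditionally per day
SELECT CAST([Time] AS date) AS [Day],
       SUM(CASE WHEN Plant = 'AP1' THEN [Weight] END) AS AP1,
       SUM(CASE WHEN Plant = 'AP2' THEN [Weight] END) AS AP2
FROM Readings
GROUP BY CAST([Time] AS date)
ORDER BY [Day];
```

T-SQL's PIVOT operator can produce the same shape, but the CASE form makes the per-plant sums explicit.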

UNION statement and duplicate rows issue

Posted: 04 Mar 2013 05:09 AM PST

I'm guessing I'm misunderstanding how to use UNION or its results. But here's what I have and the results:

SELECT a.PLANT_NO AS PlantNumber
      ,SUM(a.APPLIED_VOL) AS AppliedVolume
FROM [EdwStaging].[RAW].[FactSettleFee_TIPS_QRMTIPS_QPOST_SETTLE_FEE] a
GROUP BY a.PLANT_NO
UNION
SELECT CASE WHEN a.plant_no = 'ALL' THEN e.[PlantNumber] ELSE a.plant_no END AS ResolvedPlantNumber
      ,SUM(TRANS_VOL) AS AppliedVolume
FROM [EdwStaging].[RAW].[FactSettleFee_TIPS_QRMTIPS_QPOST_RPTS_INVOICE_DTL] a
LEFT OUTER JOIN [EdwStaging].[PSTG].[ContractToPlantMapping] e
    ON a.ctr_no = e.[ContractNumber]
    AND a.PROD_DT BETWEEN e.[PlantEffectiveFromDate] AND e.[PlantEffectiveToDate]
    AND a.PROD_DT BETWEEN e.[ContractEffectiveFromDate] AND e.[ContractEffectiveToDate]
    AND a.plant_no = 'ALL'
GROUP BY CASE WHEN a.plant_no = 'ALL' THEN e.[PlantNumber] ELSE a.plant_no END

Example result:

PlantNumber --- AppliedVolume
002 --- 324334.00
002 --- 215943448.81
043 --- 3513440.00
043 --- 407522014.65

Is there a way for me to have the plant numbers grouped together so there aren't duplicates? Such as:

PlantNumber --- AppliedVolume
002 --- 216,267,782.81
043 --- 411,035,454.65
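One way to collapse the duplicates is to treat the UNION as a derived table and aggregate over it. A sketch only (the two SELECTs are abbreviated; UNION ALL is used so the outer SUM sees every row):

```sql
SELECT u.PlantNumber,
       SUM(u.AppliedVolume) AS AppliedVolume
FROM (
    -- first SELECT ... GROUP BY a.PLANT_NO
    -- UNION ALL
    -- second SELECT ... GROUP BY CASE ... END
) AS u
GROUP BY u.PlantNumber
ORDER BY u.PlantNumber;
```

UNION on its own only removes rows that are identical in every column; it does not re-aggregate rows that share a plant number but differ in volume.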

Problem with a query - wrong update results

Posted: 04 Mar 2013 05:34 AM PST

Hello experts,

I have two tables. One of the tables holds the data, and I need to know which user input it. That table has a column named owner_m; for some reason my data import is bringing in the id_user_numbers stored in the tblUser table instead of the user names.

I need to update my data table with the users' names, and I have the following query for that:

update A
set final_owner = tblUser.tName
FROM A INNER JOIN
tblUser ON cast(A._owner_m as int) = cast(tblUser.aUserID as int)

owner_m is a varchar and the user ID is an int; that's why I'm using the cast option. But for some reason the query is giving me the same user name for all the rows, as follows:

owner_m  tname
19       Carol
20       Carol
28       Carol

instead of:

19       Carol
20       Rolando
28       Estefania

I'm assuming it's because of the data types, but I already handled that with the cast function. I don't know what else to do, please help!

Linked Server Anonymous Logon Error

Posted: 04 Mar 2013 04:54 AM PST

Hi,

So we are changing the SQL service account. The account we are changing to is a domain user; the previous one was a domain admin. The account is inside a group, and the group has been added to the local administrators group on each server using a group policy. The group has been given logon rights to the servers and the sysadmin server role.

When I changed the SQL service account to the new account, it seemed to go fine. A few days later, developers started complaining about linked servers being disconnected. So I logged in through SSMS to a server that had a linked server pointing to the server where I changed the account. When I test the connection to that linked server, I get: "Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. (Microsoft SQL Server, Error: 18456)."

Are there specific permissions that a domain account needs in order to be used for linked servers? We temporarily resolved the issue by changing the logon under the linked server properties to the sa account, but the SQL service still runs as the newly changed account. I have seen some documentation, but it's pretty hard to follow; I'm pretty new to the more complex areas of SQL.

SQL Server 2005 doesn't connect on Windows XP SP3

Posted: 17 Dec 2012 12:53 PM PST

I have installed SQL Server 2005 on Windows XP SP3. The installation completed successfully, but:

a) The server didn't connect and gave this error message:

TITLE: Connect to Server
------------------------------
Cannot connect to localhost.
------------------------------
ADDITIONAL INFORMATION:
An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 2)

I tried again with the computer name and got the same message.

b) The service is not listed in the Windows services list (there should be a service named MSSQLSERVER).

c) Version information for SQL Server 2005:

Microsoft SQL Server Management Studio 9.00.1399.00
Microsoft Analysis Services Client Tools 2005.090.1399.00
Microsoft Data Access Components (MDAC) 2000.085.1132.00 (xpsp.080413-0852)
Microsoft MSXML 2.6 3.0 5.0 6.0
Microsoft Internet Explorer 6.0.2900.5512
Microsoft .NET Framework 2.0.50727.42
Operating System 5.1.2600

Can anyone help me get SQL Server 2005 running on Windows XP SP3?

Case Statement - DateDiff and DatePart

Posted: 03 Mar 2013 06:14 PM PST

I have been asked to come up with a CASE statement that works out an age from a date of birth. I know there are easier ways to do it, but it's about writing the actual statement, which I have done, but I have got a bit confused. Can you help?

Declare @DOB Datetime
Declare @Today Datetime
Set @DOB = '01 November 1971'
Set @Today = GETDATE()

SELECT Datediff(Year,@DOB,@Today) AS Years,
    Datepart(Month,@DOB) As DOB_Month,
    Datepart(Day, @DOB) as DOB_Day,
    DatePart(Month, @Today) As Current_Month,
    Datepart(Day,@Today) AS Current_Day
CASE WHEN DatePart(Month, @Today) < Datepart(Month,@DOB) THEN Datediff(Year,@DOB,@Today) THEN -1 ELSE
DatePart(Month, @Today) = Datepart(Month,@DOB) THEN Datepart(Day,@Today) < Datepart(Day, @DOB) THEN Datediff(Year,@DOB,@Today) THEN -1 ELSE END
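For reference, a hedged sketch of the calculation the statement seems to be aiming at: subtract one year from the raw year difference when the birthday has not yet occurred this year:

```sql
-- sketch: age from date of birth, adjusted for birthdays not yet reached
DECLARE @DOB datetime, @Today datetime
SET @DOB = '01 November 1971'
SET @Today = GETDATE()

SELECT DATEDIFF(YEAR, @DOB, @Today)
     - CASE WHEN MONTH(@Today) < MONTH(@DOB)
              OR (MONTH(@Today) = MONTH(@DOB) AND DAY(@Today) < DAY(@DOB))
            THEN 1 ELSE 0
       END AS Age;
```

Each CASE branch needs exactly one THEN per WHEN, and the whole expression must sit inside the SELECT list as one column.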

[MS SQL Server] resource governor with multiple cpu on server

[MS SQL Server] resource governor with multiple cpu on server


resource governor with multiple cpu on server

Posted: 04 Mar 2013 04:01 AM PST

Is it possible to use Resource Governor when there is a larger number of CPUs, i.e. in a multiprocessor environment, and how can it be configured?

SSMS open sessions

Posted: 04 Mar 2013 03:56 AM PST

This past Sunday we found quite a few errors in the SQL error log; they lasted about 8 hours: "The login is from an untrusted domain and cannot be used with Windows authentication. [CLIENT 10.50...]"

We traced it to the client IP address, and I asked the user if she had tried to log in over the weekend, but she hadn't. She did say that on Monday morning Windows asked her to change her password; we have a policy that passwords must be changed every 3 months. She also had some SSMS query windows open that had been left open since the previous Friday.

So we suspect that SSMS on her workstation kept trying to connect to the server, and since her password had expired, the log shows Windows authentication errors.

My question is: when SSMS is open on a client machine, even if she was not working in it the whole weekend, why does it keep trying to connect to the server? Is SSMS supposed to work that way?

Thanks

Database Backup Issue

Posted: 18 Aug 2011 02:39 AM PDT

Hello All,

I'm running SQL 2008 R2. I have an intermittent issue with the backup for one database. Some days the backup completes successfully with no problem, and other days, sometimes 3 or 4 in a row, the backup fails with the following error:

Executed as user: NT AUTHORITY\SYSTEM. Cannot open backup device 'F:\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\SMS.BAK'. Operating system error 32 (The process cannot access the file because it is being used by another process.). [SQLSTATE 42000] (Error 3201) BACKUP DATABASE is terminating abnormally. [SQLSTATE 42000] (Error 3013). The step failed.

The command used to back up the database is:

BACKUP DATABASE [SMS]
TO DISK = N'F:\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\SMS.BAK'
WITH NOFORMAT, INIT, NAME = N'SMS- Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO

Is there any way to determine what process is preventing the backup from occurring? Or are there any suggestions on how I can ensure that the backup device isn't causing an issue with the backup?

Let me know if you have any questions. Thanks in advance for your help.
Ronnie

SQL Server doesn't accept remote connections.

Posted: 03 Mar 2013 06:50 AM PST

Hello,

I am trying to use my SQL Server from my Perl application, which is hosted on an external hosting server. When I was developing the code, I ran it on a computer on the same network as the server and everything worked fine. But when I uploaded the code to the hosting service (replacing the local IP with the public IP of the server), it started failing in the part where it connects to the database, saying that the server wasn't found.

Maxwell.

P.S. I did forward port 1433, and I even set the server as the DMZ host (which puts that computer in front of the firewall).

resource governor calculation for min and max utilization

Posted: 04 Mar 2013 12:17 AM PST

Can anyone explain how to calculate the resources for the different resource pools? I have gone through BOL but I'm not really getting the understanding. Also, what does the term "shared %" mean in Resource Governor?

Can not connect to SQLSERVER

Posted: 28 Dec 2010 02:19 PM PST

Hi, I've installed SQL Server 2008 R2 on my system, and after that I updated the 'sa' account using the script:

USE [master]
GO
ALTER LOGIN [sa] WITH PASSWORD=N'blah' MUST_CHANGE
GO

After restarting the services, I tried to log in as the 'sa' user. Unfortunately it is giving me this error:

A connection was successfully established with the server, but then an error occurred during the login process. (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.) (.Net SqlClient Data Provider)
Server Name: XYZ\SQLEXPRESS
Error Number: 233
Severity: 20
State: 0

I've enabled named pipes in SQL Server Configuration Manager. Could anyone please tell me how I can fix this issue?

Thanks & Regards,
Anilkumar M.S

Attach Database Error after moving MDF

Posted: 03 Mar 2013 04:31 AM PST

I needed to move the MDF to another location, so I detached the database and dragged the MDF to a new folder. Thinking the copy had finished, I attempted to attach it, but got an error because the MDF was still in use. No surprise. After the copy finished, I tried to attach again, but now I get the error:

"Unable to open the physical file "F:\log\search_log.LDF". Operating system error 5: "Access is denied". SQL Error 5120.

I did not make any changes to the LDF, so I'm not quite sure why that is throwing the error. Any suggestions?

[Articles] Does Connect Work?

[Articles] Does Connect Work?


Does Connect Work?

Posted: 03 Mar 2013 11:00 PM PST

The Connect system at Microsoft allows users to submit bugs and suggestions for products. Does this work? Steve Jones isn't so sure.

[SQL 2012] Disk configuration

[SQL 2012] Disk configuration


Disk configuration

Posted: 03 Mar 2013 11:05 PM PST

Hi,

I'm getting a new server this week and have some say in the disk configuration. Basically I've got 2 600GB SAS drives configured in RAID 1 for the OS (Windows Server 2008 R2 64-bit Std). I've then got 12 7.2k SAS disks for the data/logs, giving 6TB of storage.

My question is this: is there any performance advantage to be had by separating logs from data files? This is going to be a data warehousing server. Ignoring the OS partition, because there's nothing I can really do there, is it more sensible to have:

a) 4 disks in a RAID 10 config for logs and then 8 disks in RAID 10 for data?
b) 12 disks in RAID 10, with data and logs on the same drives?

With regards to tempdb, what would you do? Put it on the faster OS disks, or split it among the 4/8 or 12 disk configs?

Thanks for your help!
Simon

SQL Server Express and SQL Server Agent?

Posted: 03 Mar 2013 07:33 AM PST

I just installed SQL Server Express with Advanced Services but can't manage to get the SQL Server Agent service started. "Features Supported by the Editions of SQL Server 2012" (http://msdn.microsoft.com/en-us/library/cc645993.aspx) seems to indicate that SQL Server Agent is not supported in the Express Edition. However, SQL Server Agent seems to be installed. Is there any way to get it working? I find it slightly silly that it is installed with SQL Server Express if it's not meant to work.

[T-SQL] MaxDOP Query Hint

[T-SQL] MaxDOP Query Hint


MaxDOP Query Hint

Posted: 04 Mar 2013 12:04 AM PST

Is it possible to pass a variable for MAXDOP as a query hint? My query below works when I set a constant for MAXDOP, but as soon as I use a variable, I get a syntax error.

declare @MAXDoParallelism as int
set @MAXDoParallelism = 1

select object_id as ObjectID,
    index_id as IndexID,
    partition_number as PartitionNum,
    avg_fragmentation_in_percent as Frag,
    page_count as Pages,
    alloc_unit_type_desc
into #Fragmentation
from sys.dm_db_index_physical_stats (db_id('sysutility_mdw'), null, null, null, 'sampled')
where avg_fragmentation_in_percent >= 1
    and avg_fragmentation_in_percent <= 100
    and index_id > 0
    and Page_Count >= 0
    and Page_Count <= 1000000000000000000000000
    and avg_fragmentation_in_percent <> 0
    and avg_fragmentation_in_percent is not null
order by avg_fragmentation_in_percent desc
option (maxdop @MAXDoParallelism)
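The MAXDOP hint only accepts a literal, so the usual workaround is to build the statement with dynamic SQL. A sketch only, with the query body abbreviated:

```sql
-- sketch: concatenate the DOP value into the hint, then execute
DECLARE @maxdop int, @sql nvarchar(max);
SET @maxdop = 1;
SET @sql = N'SELECT ... FROM sys.dm_db_index_physical_stats(...) '
         + N'OPTION (MAXDOP ' + CAST(@maxdop AS nvarchar(10)) + N');';
EXEC sp_executesql @sql;
```

Note that sp_executesql parameters cannot be used inside hints either; the value has to be concatenated into the string.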

Time - Adding minutes and seconds

Posted: 27 Jan 2010 03:06 AM PST

I have 2 columns with time datatypes:

select top 5 [Connect Time2], [totalTime2] from dbo.verizonDetails

Connect Time2      totalTime2
08:05:44.0000000   00:13:00.0000000
08:05:57.0000000   00:01:00.0000000
09:07:42.0000000   00:03:00.0000000
09:07:46.0000000   00:09:00.0000000
09:08:08.0000000   00:01:00.0000000

I want to add the time from totalTime2 to the column [Connect Time2]. The desired result looks like this:

Connect Time2      totalTime2         endTime
08:05:44.0000000   00:13:00.0000000   08:18:44.000000

I'm trying to run the following:

select top 5 [Connect Time2], [totalTime2], ([Connect Time2] + [totalTime2]) as endTime
from dbo.verizonDetails

but I get the error: Operand data type time is invalid for add operator.
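Since two time values cannot be added directly, one workaround is to convert the elapsed time to seconds and use DATEADD. A sketch against the table shown above:

```sql
-- convert totalTime2 to elapsed seconds, then add them to the start time
SELECT [Connect Time2],
       [totalTime2],
       DATEADD(SECOND,
               DATEDIFF(SECOND, CAST('00:00:00' AS time), [totalTime2]),
               [Connect Time2]) AS endTime
FROM dbo.verizonDetails;
```

The inner DATEDIFF measures totalTime2 from midnight, which turns a duration-as-time into a plain integer that DATEADD accepts.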

Query Help

Posted: 03 Mar 2013 04:37 PM PST

below is the code

DROP TABLE Class,Amount

create table Class(code int, CName varchar(10))
create table Amount(code int, Currency char(3), Amount float)

insert into Class
select 1 , 'ASD' UNION
select 2 , 'SSS' UNION
select 3 , 'BBB' UNION
select 4 , 'EEE' UNION
select 5 , 'MMM'

insert into Amount
select 3, 'CAD',230.00 UNION
select 5, 'CAD',440.00 UNION
select 1, 'CAD',666.00 UNION
select 5, 'USD',40.00 UNION
select 2, 'USD',66.00

select * from Class
select * from Amount

Desired output:

Code  Currency  Amount
ASD   CAD       666
SSS   CAD       NA
BBB   CAD       230
EEE   CAD       NA
MMM   CAD       440
ASD   USD       NA
SSS   USD       66
BBB   USD       NA
EEE   USD       NA
MMM   USD       40
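One way to produce that shape is to cross join every class with every distinct currency, then outer join the amounts. A sketch against the DDL above (the 'NA' substitution means the amount column comes back as text):

```sql
-- every (class, currency) pair, with 'NA' where no amount exists
SELECT c.CName AS Code,
       cur.Currency,
       COALESCE(CAST(a.Amount AS varchar(20)), 'NA') AS Amount
FROM Class c
CROSS JOIN (SELECT DISTINCT Currency FROM Amount) AS cur
LEFT JOIN Amount a
       ON a.code = c.code
      AND a.Currency = cur.Currency
ORDER BY cur.Currency, c.code;
```

The cross join supplies the missing (class, currency) combinations; the LEFT JOIN leaves their amount NULL, which COALESCE turns into 'NA'.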

Query optimisation

Posted: 03 Mar 2013 07:39 PM PST

Hi

Can anyone help me improve a query I have? I have a table with a list of phone numbers:

Table1
Telephone
01212234567
01212234568
01212234569
01212234544
01212234517
01212234527
01212234537
01212234547
01212234557

I am given 5 phone numbers and I need to check if each one exists in Table1. At the moment I am doing 5 individual queries:

set @existsTel1 = 0
select @existsTel1 = 1 from Table1 where Telephone=@telephone1

I would like to do this in one query. Can anybody help at all?

thanks
Simon
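The five probes can be collapsed into a single IN list; a sketch (variable names follow the post):

```sql
-- one pass over Table1 returns whichever of the five numbers exist
SELECT Telephone
FROM Table1
WHERE Telephone IN (@telephone1, @telephone2, @telephone3,
                    @telephone4, @telephone5);
```

Any number absent from the result set does not exist in Table1; an index on Telephone keeps this a handful of seeks.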

DB Normalisation Help required

Posted: 03 Mar 2013 02:18 PM PST

I have the data as below. I need to normalise the table with this data. Can someone help me with how well I can normalise this?

Table A (Master):
ComputerName
OS
OSVersion
Manufacturer

Table B:
SoftwareName
SoftwareVersion
SoftwareLiscencekey
Installedversion

I have attached the data in an attachment.

Requirements:
1. Both tables have duplicate values.
2. One software product will be used on multiple computers, and one computer will use multiple software products.

Please normalise these tables and provide me a solution to proceed further.
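The many-to-many relationship in requirement 2 is usually modelled with a junction table. A minimal sketch; all key names and column types here are assumptions:

```sql
CREATE TABLE Computer (
    ComputerID   int IDENTITY PRIMARY KEY,
    ComputerName varchar(100),
    OS           varchar(50),
    OSVersion    varchar(50),
    Manufacturer varchar(50)
);

CREATE TABLE Software (
    SoftwareID      int IDENTITY PRIMARY KEY,
    SoftwareName    varchar(100),
    SoftwareVersion varchar(50)
);

-- junction table: one row per (computer, software) installation
CREATE TABLE ComputerSoftware (
    ComputerID       int REFERENCES Computer(ComputerID),
    SoftwareID       int REFERENCES Software(SoftwareID),
    LicenceKey       varchar(100),
    InstalledVersion varchar(50),
    PRIMARY KEY (ComputerID, SoftwareID)
);
```

Attributes that belong to a specific installation (licence key, installed version) live on the junction table rather than on Software itself.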

Group by help

Posted: 03 Mar 2013 04:53 PM PST

Hi all

This seems like a simple thing to be able to do, but I am really struggling with it. Some DDL:

create table table1 (name nvarchar(50), purchase_date datetime, product nvarchar(50), quantity int)

insert into table1
values
('John','2013-03-02 14:53:47.000','Car','2'),
('Mike','2013-03-02 15:04:17.000','Car','3'),
('John','2013-03-02 15:04:29.000','Car','5')

How can I get the following select statement to work?

select name, MAX(purchase_date), product, quantity
from table1
group by name

What I need returned is a distinct list of the names with the max purchase_date, including the product and the quantity for that specific max purchase date. Is this possible with a group by statement?

Thanks
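A plain GROUP BY cannot carry the non-grouped columns along, but ROW_NUMBER can. A sketch against the DDL above:

```sql
-- keep the whole row belonging to each name's latest purchase
WITH ranked AS (
    SELECT name, purchase_date, product, quantity,
           ROW_NUMBER() OVER (PARTITION BY name
                              ORDER BY purchase_date DESC) AS rn
    FROM table1
)
SELECT name, purchase_date, product, quantity
FROM ranked
WHERE rn = 1;
```

Ranking within each name and filtering to rn = 1 returns one row per name with all its columns intact, which GROUP BY alone cannot express.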

cursor

Posted: 03 Mar 2013 04:49 AM PST

Hello,

Following is my requirement:

SLN  Id  StartDate        EndDate          Duration       TimeElapsed
1    1   1/1/2012 10.12   1/1/2012 10.13   0day 00:01:00  0-days 00:01:00
1    1   1/1/2012 10.14   1/1/2012 10.15   0day 00:01:00  0-days 00:03:00
1    1   1/1/2012 10.15   1/2/2012 10.16   1day 00:01:00  1-days 00:04:00 (total - counter reset)
2    2   1/1/2012 10.12   1/1/2012 10.13   0day 00:01:00  0-days 00:01:00
2    2   1/1/2012 10.14   1/1/2012 10.15   0day 00:01:00  0-days 00:03:00
2    2   1/1/2012 10.15   1/2/2012 10.16   1day 00:01:00  1-days 00:04:00 (total - counter reset)

I need to calculate Duration and TimeElapsed. Can somebody please give me a sample query for this requirement? I have all these values in my temp table, so I could either use a cursor, or call a function and use a cursor inside it, or is there another option? Achieving this in C# is quite a cake walk. Is using a cursor the only option, or do we have more options on the db side?

Many thanks
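A cursor is not the only option; on SQL Server 2008 a correlated subquery can compute the running elapsed time set-based. A sketch only (the temp table name #work and the minute granularity are assumptions):

```sql
-- per-row duration plus a running total that restarts for each Id
SELECT t.SLN, t.Id, t.StartDate, t.EndDate,
       DATEDIFF(MINUTE, t.StartDate, t.EndDate) AS DurationMinutes,
       (SELECT SUM(DATEDIFF(MINUTE, t2.StartDate, t2.EndDate))
        FROM #work t2
        WHERE t2.Id = t.Id
          AND t2.StartDate <= t.StartDate) AS ElapsedMinutes
FROM #work t
ORDER BY t.Id, t.StartDate;
```

Partitioning the subquery on Id gives the "counter reset" per Id; on SQL Server 2012+ the same thing is cheaper with SUM(...) OVER (PARTITION BY Id ORDER BY StartDate).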

Grouping sets

Posted: 20 Feb 2013 03:39 AM PST

Hi All,

How do I use GROUPING SETS in a query having more than 32 columns? Please help out.
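For reference, the basic shape of the construct (the Sales table and its columns are assumptions): each parenthesised set produces one grouping level in a single result set.

```sql
-- one result set containing subtotals by region, by product, and a grand total
SELECT region, product, SUM(amount) AS total
FROM Sales
GROUP BY GROUPING SETS ((region), (product), ());
```

When the column count runs up against the documented limit, the usual workaround is to split the query into several GROUPING SETS queries combined with UNION ALL.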

Hierarchy example - Ken Henderson's book - not working

Posted: 23 Feb 2013 10:16 AM PST

I'm working on the hierarchy lesson in Ken Henderson's The Guru's Guide to Transact-SQL. He wants to show two things. First, how to write a query that populates a temporary table (#org_chart) with all possible chartdepth levels detected in the staff table. Secondly, how to write a query that shows all the boss/subordinate relationships regardless of the levels by which they are removed from each other. But I am having trouble with the first part, the query for building the #org_chart table. I've written it below, verbatim from the book. It is syntactically correct, but when I run it, it adds no subsequent rows to the #org_chart table. Will someone tell me what is missing? Run in this order, the DDL, query 1, and query 2 should produce a table with 40+ rows:

--DDL for creating and populating staff table
create table staff
(employee int primary key,
 employee_name varchar(10),
 supervisor int null references staff (employee))

insert staff values
(1, 'groucho', 1),
(2, 'chico', 1),
(3, 'harpo', 2),
(4, 'zeppo', 2),
(5, 'moe', 1),
(6, 'larry', 5),
(7, 'curly', 5),
(8, 'shemp', 5),
(9, 'joe', 8),
(10, 'curly joe', 9);

--query #1
--establishes first 10 rows of table, with first level of chart depth
select chartdepth=1, employee=o2.employee, supervisor=o1.employee
into #org_chart
from staff o1 inner join staff o2 on (o1.employee=o1.supervisor)

--query #2 does not work for me
--it is supposed to populate #org_chart with remaining levels of chartdepth.
--A while loop repeats the insert as many times as necessary to process all levels in staff table
while (@@rowcount > 0)
begin
    insert #org_chart (chartdepth, employee, supervisor)
    select distinct o1.chartdepth+1, o2.employee, o1.supervisor
    from #org_chart o1 join #org_chart o2 on (o1.employee=o2.supervisor)
    where o1.chartdepth=(select max(chartdepth) from #org_chart)
    and o1.supervisor<>o1.employee
end

--if following the execution of query #1 you run the below query about 4-5 times, it accomplishes what query #2 is supposed to be able to
--accomplish with a While loop
insert into #org_chart
select distinct o1.chartdepth+1, o2.employee, o1.supervisor
from #org_chart o1 join #org_chart o2 on (o1.employee=o2.supervisor)
where o1.chartdepth=(select max(chartdepth) from #org_chart)

Why is query #2 not working? Thanks.
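Not an answer to why the book's loop stalls, but for comparison, the same transitive closure can be produced on SQL Server 2005+ with a recursive CTE. A sketch against the staff DDL above:

```sql
-- all boss/subordinate pairs at every depth, without a WHILE loop
WITH org AS (
    -- anchor: direct reports (skip the self-supervising root row)
    SELECT 1 AS chartdepth, employee, supervisor
    FROM staff
    WHERE employee <> supervisor
    UNION ALL
    -- recurse: walk each chain one level further down
    SELECT o.chartdepth + 1, s.employee, o.supervisor
    FROM org o
    JOIN staff s ON s.supervisor = o.employee
)
SELECT * FROM org;
```

Comparing the CTE's output row-for-row against what query #2 should produce is also a handy way to debug the loop version.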

[SQL Server 2008 issues] confused With View , Function and Store Procedure

[SQL Server 2008 issues] confused With View , Function and Store Procedure


confused With View , Function and Store Procedure

Posted: 03 Mar 2013 06:08 PM PST

I get confused about where to use a function, a stored procedure, and a view. What is the definition of each, and what are the differences between them?

Identifying performance issues using Profiler trace

Posted: 03 Mar 2013 06:00 PM PST

Hi All,

I have been told that I will be given a production Profiler trace to figure out the performance issues during the application's peak time. Assume the required properties were chosen while producing the trace, and SQL Server 2008 R2 is the version. I am aware of some tuning methodologies once I identify a badly performing query (adding indexes, rewriting logic). However, I need to start from the trace and narrow down to the badly performing queries. Please help me with a formal way to progress once I receive the trace. Any steps/docs would help me kick-start, please.

Thank you all.
Regards,
Suresh
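A server-side trace file can be loaded into a table and sorted by cost, which is a common first pass. A sketch; the file path is an assumption, and Duration is reported in microseconds in SQL Server 2005 and later:

```sql
-- rank completed statements in the trace by elapsed time
SELECT TOP 20
       TextData,
       Duration / 1000 AS DurationMs,
       CPU, Reads, Writes, StartTime
FROM sys.fn_trace_gettable(N'C:\traces\peak_trace.trc', DEFAULT)
WHERE EventClass IN (10, 12)   -- RPC:Completed, SQL:BatchCompleted
ORDER BY Duration DESC;
```

Re-sorting the same result by Reads or CPU surfaces different offenders; grouping similar TextData together then shows which statements dominate in aggregate rather than per execution.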

Just accepted a Sr. DBA job. Time for a gut check?

Posted: 03 Mar 2013 12:46 PM PST

Hi SQLSC!

Buckle up, I apologize in advance for the long post. In many ways, typing it out is as much for my own sanity as it is for me asking for everyone's advice. I hope you take the time to read it and comment; I am in need of advice.

First off, I'm a new poster in the forums, but not new to the forums or SQL Server Central itself; I've been a lurker here for some time, and I have to say that I am routinely impressed with the knowledge level, maturity, and overall helpful attitude of all the members that ask questions and provide solutions here. Which is why I'm here, because I need some advice from the people who know the most about my "dilemma."

I recently interviewed for, was offered, and accepted a senior database administrator role. And now I'm having a form of "buyer's remorse," so to speak, for two main reasons:

1. I've been at my current employer for almost 10 years, and
2. I am wondering if my skills are where they need to be for this role.

Aside from the fact that I'm leaving a relatively stable and secure job for a new role at a new company where I'll be "the new guy" (which is stressful enough), I have a terrible fear of failure; I know that's a "me problem" and it's not confined to my job. I feel this way about a lot of things, and truthfully I should probably be medicated :w00t:

Let me provide a little background: I started my career as an "accidental DBA" back in 2002. At the time, I was tasked with creating a local web page for people to post articles, announcements, and so forth for everyone in the building to see. I went to school to be a software developer, and I got swept up in the web application development craze while there. I started out coding pages in PHP with a MySQL back-end, and graduated to active server pages with Microsoft Access databases that I "inherited" when other people left the company.
In 2004 I obtained my first SQL Server instance: a SQL Server 2000 installation that I ended up migrating all my legacy application databases to. I had never really used a full DBMS before, so naturally I started out slow, learning the system and how it worked. I built tables and views, and I even messed around with DTS packages to automate some of my tasks. I learned how the transaction logs work (the hard way, when they grew out of control) and how to manage them. I learned the value of backing up databases, and the value of developing a solid disaster recovery plan (and making sure people knew how to execute it when I was not around).

From there, I made the switch to ASP.NET, coding in C#. I recoded lots of applications, and around 2007 I took the opportunity to migrate to a brand new instance of SQL Server 2005. I started coding T-SQL stored procedures and functions, and started the best practice of using them for the actual business logic and SQL code instead of embedding it in my applications. I got to work with SSIS, which I love, and continue to use for many things. And up until recently, I wrote many complex stored procedures with CTEs, temp variables and table variables, and the like. I've also followed all the best practices I can in regards to "least required" access rights, creating application logins, restricting users to certain schemas based on business needs, and even using "EXECUTE AS" when needed, both with local SQL Server logins and domain IDs. And today, I manage 3 servers (2 virtual 2008 instances and 1 physical 2005 instance) that my applications all use to store, read, and execute on data.

So why leave? Well, in my current role, I'm not really considered to be in "IT."
I'm in a position where I'm part of the operations of the business because of my diverse skills; I get to work with IT, though, in securing new instances of SQL Server hardware, VMs, or other developers in getting access to other systems and/or providing access to my own database servers. In many ways, I feel my development background gives me a proclivity to working with T-SQL and developing complex queries. Coupled with the fact that I've been writing SQL for almost 10 years now, I feel I have a pretty solid handle on what makes good, quick-executing queries and what doesn't. And I love SQL Server; working with data sets is something I truly enjoy.

But I found myself at a crossroads; while I'm viewed as a "go-to" person for solutions, I felt I'd hit a ceiling. Since I wasn't IT, I knew I wasn't going to be able to learn, say, SQL Server 2012, or be able to attend classes for certification or training; it's just not in the budget for the area of the company I worked for. And in some ways, my job could clearly be viewed as "redundant" since it's an IT-based role in the non-IT division. It's been tried before, and there were many times I was told (in confidence) that my job had to be justified at the highest levels. Which is always good to hear...

At any rate, I started looking. I felt I had two career paths: to continue as a developer, or to look for database administrator jobs. Being self-taught, I felt my development skills are "behind the curve" a bit (seriously, does anyone want web forms anymore? Everything is MVC now), so I decided to look for DBA roles.
It didn't start out well; the first job I interviewed for (which was an ETL-type role) sat me right down and had me do some "live" testing by developing some SSIS packages in front of some of the other staff. I was a little flustered, and while I was able to complete the tasks, I think they felt I was a little under-prepared, and they passed on me. The second job I interviewed for was even worse; I never got to talk to the hiring manager. HR brought me in and sat me in front of a computer terminal with some testing software. I bombed horribly, and was told that my skills were sub-par.

So I was starting to feel defeated; I studied up and practiced at my current job, learning as much as I could, so that when I went to my third interview, I would be better prepared. But when I got to my third interview, there was no testing at all. First, there was a phone interview where they asked me some questions about what I did now. I told them basically my story above: that I was an application developer that had to work with SQL Server. They wanted to hear more, so I gave them all the answers I could. They asked some questions that I felt I didn't have good answers to, and I told them so. The interview lasted half an hour, and I hung up the phone thinking "that went well, but there are probably better qualified people."

Imagine my surprise when, almost 10 minutes later, my phone rang. It was my recruiter; he told me that the hiring manager and current DBA were "blown away" by my experience and wanted me to come in for an interview asap. He told me that many people didn't even make it past the phone interview! So I was psyched, but nervous: what if I bombed another test? I shared my worry with my recruiter; he told me just to do my best.

I showed up to the in-person, and first met with the hiring manager. We talked for almost an hour, but it was more about the kind of person I was and what I liked. Why did I want the job?
In effect, he was measuring up my personality, which, as a person who had hired some analysts in the past, I appreciated. From there, I met with the members of his team: server administrators, storage administrators, the security team, active directory administrators, exchange team... the meetings went great. I got to spend some time with the current DBA (who, I should mention, has been there two years and I am going to be working WITH, not replacing). Finally, I met with members of the development staff. This was the most technically demanding part of the interview; they grilled me on some SQL stuff ("Would you ever use a cursor?" "What's a left join?"), and and some application development stuff ("Ever worked with Hadoop?") and also about my current release management. I gave the best answers I could.When we were finished, I glanced at my watch, I was there about 3 hours! I was escorted back down to HR where I met with the HR senior manager. She talked more about the company and the benefits, and then she told me that she wanted to make me an offer right then and there. I was floored. I asked her if maybe she should get some feedback from the team to make sure, to which she replied "I wouldn't be making this offer if every single person you met with didn't come down and say 'hire this guy.'"I had a few days to think about it, and I decided to accept; the company seems great, it's growing, and the culture is very appealing. I'll have access to a formal training and certification budget to keep my skills sharp, and I get to get back to an pure IT role.Sounds great, right?Well, back to my dilemma: since accepting, I am having a pure panic attack. Do I really have the skills to do this? There are certain things in the environment that will be new to me, such as:1. The scale: there will be many servers to manage, much more than I've had to work with in the past.2. 
The size of the instances: While the company is MUCH smaller than where I was, they have massive amounts of data. My tables maybe held 10's of thousands of rows, but some of their production databases hold 100's of thousands of rows.
3. They use an HA setup on a cluster; I don't have any experience with those.
4. They have replication set up on a few of the production databases to create an instance for the BI and operations teams to query; I've never set up or changed replication schemes.

Also, I've always been the person who had to develop the application code as well as the T-SQL code to support it; I've never really had to troubleshoot major performance issues. That's not to say I don't understand query execution plans, but I haven't had to use the Database Engine Tuning Advisor or SQL Server Profiler to really dig into a query issue. If something wasn't working, I could pretty easily identify where an index was needed or a procedure that needed to be rewritten. So I don't have a firm grasp on the "best" way to troubleshoot performance issues.

But part of the reason they liked me for this role was my development experience; the current DBA is bombarded with questions from the various development teams asking him for help with their queries. The fact that I have development experience is why they selected me: to help ease the burden on the current DBA, and to help suggest different ways of accomplishing tasks to the teams. They are undergoing a massive overhaul of many of their systems, trying to position the company's information systems for future growth.

I'm not exaggerating when I say I haven't been sleeping well; I feel like I don't have all the skills I need for this job, and that I'll fail and regret giving up what I had. Ever since accepting, I've been reading all my books, revisiting the Stairway articles here, and reading and testing myself to make sure I'm ready. I start the job in a couple of weeks, and I want to make sure I'm ready.
My absolute low point came last night: I had recently bought the Microsoft Press test prep books for two of the certifications. I read the books, studied a bit, and took the practice tests: I failed both of them. Cue panic.

If you've read this far, I want to ask for your advice: do you think I accepted this role too hastily? Am I missing critical job skills? Are there areas you would suggest I bone up on? I had an opportunity to speak to the current DBA, and he doesn't seem concerned about my skill set, but I don't want to be another person who burdens him with questions when I don't understand something. Or am I over-thinking it? Should I just have faith in my abilities and move forward?

It all boils down to this: I want to come in and be successful. I've been good at my job in the past, and this will be a change for me. In six months (or less!) I don't want them to think they made a mistake in hiring me. I really feel the company I'm going to is a good one, with a very bright future, and that the role is a positive step for my career.

Having typed all that out, what do you all think? Did you go through something similar? Any advice or comments are welcome, and I thank you all for your time and thought. I'll certainly be monitoring the thread, so if there's anything else you want to know about me or my skills, feel free to ask!

Use parameter value in .csv output filename

Posted: 03 Mar 2013 05:02 AM PST

Table_1

[code="sql"]
ESTAB_ID  ESTAB_CODE  FORENAME  SURNAME
1         9009001     John      Jones
1         9009001     Mike      Smith
1         9009001     Mary      Yates
2         9009005     Ann       Ennis
2         9009005     Kelly     Homes
3         9009014     Harry     Brand
3         9009014     James     Casey
3         9009014     Chris     Balls
[/code]

Table_1 contains a list of people belonging to different establishments. The ESTAB_ID is a unique sequential identifier. The ESTAB_CODE is an integer value and not sequential. I have written the following to extract subsets of the table (by ESTAB_CODE) and output them as .csv files.

[code="sql"]
DECLARE @ESTAB_CODE VARCHAR(7)
SET @ESTAB_CODE = (SELECT ESTAB_CODE FROM TABLE_1 WHERE ESTAB_ID = 1 GROUP BY ESTAB_CODE)

SELECT [ESTAB_CODE], [first_name], [last_name]
FROM TABLE_1
WHERE ESTAB_CODE = @ESTAB_CODE

:OUT C:\FOLDER\9009001.csv
[/code]

I set SQLCMD mode to 'ON'. I manually change the ESTAB_ID number in the SET line to produce the different subset tables; I am happy to do that at this stage. I get the correct output in the correct destination folder in .csv format.

[code="sql"]
ESTAB_CODE  FORENAME  SURNAME
9009001     John      Jones
9009001     Mike      Smith
9009001     Mary      Yates
[/code]

My problem is the filename for the .csv file. I want to use the ESTAB_CODE number for the filename, but I have to input it manually. Is there any way this filename can be set automatically to the current ESTAB_CODE, using the @ESTAB_CODE parameter value, when I execute the code? I am very new to SQL, having spent many years in a SAS environment, so I tend to expect SQL to work in a similar way, which is causing me problems! Any assistance would be much appreciated.
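One common workaround (a sketch only, not tested against this setup; the database name MyDb and server name MyServer are assumptions, and xp_cmdshell must be enabled) is to skip SQLCMD's :OUT and instead build a bcp command string that includes the variable, then run it with xp_cmdshell:

[code="sql"]
DECLARE @ESTAB_CODE VARCHAR(7), @cmd VARCHAR(1000)
SET @ESTAB_CODE = (SELECT ESTAB_CODE FROM TABLE_1 WHERE ESTAB_ID = 1 GROUP BY ESTAB_CODE)

-- Build a bcp command whose output filename comes from the variable:
-- -c = character mode, -t, = comma field terminator, -T = trusted connection
SET @cmd = 'bcp "SELECT ESTAB_CODE, FORENAME, SURNAME FROM MyDb.dbo.TABLE_1 '
         + 'WHERE ESTAB_CODE = ' + @ESTAB_CODE + '" queryout '
         + 'C:\FOLDER\' + @ESTAB_CODE + '.csv -c -t, -T -S MyServer'

EXEC master..xp_cmdshell @cmd
[/code]

Because the filename is just part of a string here, looping over all ESTAB_CODE values and exporting one file per code becomes straightforward.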

Query in SSRS

Posted: 27 Feb 2013 07:51 PM PST

Hi all, I have a requirement where an SSRS report displays only certain data in a column, based on a filter on a date range. Instead, I need that particular column to show all of its data whether or not a date range is selected. How can we achieve this in an SSRS report?

how to get Mac Address in SSRS

Posted: 27 Feb 2013 02:45 PM PST

Hi, I have a requirement to capture a user's MAC address using SSRS; is that possible? My requirement: capture and store the MAC address of each user so that a user cannot make a double entry of comments. There is no login restriction (no user ID and password); users just access the page and leave comments. If the MAC address is not an option, is there another way to identify users uniquely? Notes: a user should not be able to make a double entry, and no user is given a user ID and password. Thanks!

Use parameter value in .csv output filename

Posted: 03 Mar 2013 04:32 AM PST

Hit the send button in error; see my later post with the same title.

fetch tables most often queried

Posted: 26 Feb 2013 12:24 PM PST

Is there a way to determine which tables of a database are being queried most often, sorted in descending order?
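For SQL Server, one common approach (a sketch; note that these DMV counters reset when the instance restarts, so the numbers only cover activity since the last restart) is to aggregate read counts from sys.dm_db_index_usage_stats:

[code="sql"]
-- User tables in the current database, ordered by reads since the
-- instance last started (seeks + scans + lookups across all indexes)
SELECT OBJECT_NAME(s.[object_id]) AS table_name,
       SUM(s.user_seeks + s.user_scans + s.user_lookups) AS total_reads
FROM sys.dm_db_index_usage_stats AS s
WHERE s.database_id = DB_ID()
  AND OBJECTPROPERTY(s.[object_id], 'IsUserTable') = 1
GROUP BY s.[object_id]
ORDER BY total_reads DESC;
[/code]

Tables that have never been read since the restart will not appear at all, since the DMV only contains rows for indexes that have actually been used.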

Not seeing savings in sparse columns

Posted: 02 Mar 2013 10:23 PM PST

I've run a test comparing a table with a few sparse columns to a table with no sparse columns, and I am seeing zero space saving. I have two tables, both storing address info mainly in varchar columns. Both tables allow NULLs; one has the SPARSE property set on its columns. I insert rows of default values into each (the default values are NULL). Sparse columns store NULLs differently, so I believe I should see a space saving, but on running sp_spaceused I don't see any. Any ideas on what I am doing wrong, or where my understanding is incorrect?

[code="sql"]
Create Table SparseColTest_NonSparse
(
    AddressID int identity(1,1) not null,
    AddressLine1 varchar(500) null,
    AddressLine2 varchar(500) null,
    AddressLine3 varchar(500) null,
    PostalCode varchar(20) null,
    Country varchar(50)
)

Create Table SparseColTest_Sparse
(
    AddressID int identity(1,1) not null,
    AddressLine1 varchar(500) sparse null,
    AddressLine2 varchar(500) sparse null,
    AddressLine3 varchar(500) sparse null,
    PostalCode varchar(20) sparse null,
    Country varchar(50)
)

declare @i int
set @i = 0
while (@i <= 100000)
BEGIN
    insert into SparseColTest_NonSparse default values
    insert into SparseColTest_Sparse default values
    set @i = @i + 1
END

exec sp_spaceUsed 'SparseColTest_NonSparse'
exec sp_spaceUsed 'SparseColTest_Sparse'

/*
name                     rows    reserved  data     index_size  unused
-----------------------  ------  --------  -------  ----------  ------
SparseColTest_NonSparse  210003  2888 KB   2840 KB  8 KB        40 KB

name                     rows    reserved  data     index_size  unused
-----------------------  ------  --------  -------  ----------  ------
SparseColTest_Sparse     210003  2888 KB   2840 KB  8 KB        40 KB

NOTE - even with 210k rows, the sparse and non-sparse tables are
identical in size.
*/
[/code]
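One detail worth checking here (my understanding of the row format, offered as a suggestion rather than a definitive answer): a NULL in a variable-length varchar column already occupies essentially no data space in the row, so marking varchar columns SPARSE yields little or nothing. The saving shows up with fixed-length types, where a non-sparse NULL still occupies the column's full width. A variant of the test using int columns (hypothetical table names below) should make the difference visible in sp_spaceused:

[code="sql"]
-- A NULL non-sparse int still occupies its 4 bytes in the row,
-- while a NULL sparse int occupies none, so after the same
-- default-values inserts the sparse table should report less data space.
Create Table SparseColTest_FixedNonSparse
(
    ID int identity(1,1) not null,
    Val1 int null,
    Val2 int null,
    Val3 int null
)

Create Table SparseColTest_FixedSparse
(
    ID int identity(1,1) not null,
    Val1 int sparse null,
    Val2 int sparse null,
    Val3 int sparse null
)
[/code]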

partition_id VS. hobt_id

Posted: 02 Mar 2013 09:44 PM PST

Hello! There are 2 columns in the [url=http://msdn.microsoft.com/en-us/library/ms175012.aspx]sys.partitions[/url] view - [b]partition_id[/b] and [b]hobt_id[/b]. In my experience, for any given row of this view [b]partition_id[/b] always equals [b]hobt_id[/b]; I have never seen two different values. That seems reasonable, because (simplifying a complex thing) we can say: "a partition is a hobt, a hobt is a partition". But at the same time, the article about another view - [url=http://msdn.microsoft.com/en-us/library/ms189792(v=sql.105).aspx]sys.allocation_units[/url] - tells us:

[quote]If type = 1 or 3, container_id = [b]sys.partitions.hobt_id[/b].
If type is 2, then container_id = [b]sys.partitions.partition_id[/b].[/quote]

So, can these 2 values ever be NOT equal? As I said, I have never seen it in practice. Can you please give an example of a table (and a database, if needed) for which [b]partition_id[/b] and [b]hobt_id[/b] will NOT be equal?
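As a quick way to hunt for counterexamples on any instance (a sketch; it only inspects the current database, so it would need to be run per database), one can scan sys.partitions directly:

[code="sql"]
-- Lists any partitions whose hobt_id differs from partition_id;
-- on most instances this is expected to return no rows.
SELECT OBJECT_NAME(p.[object_id]) AS table_name,
       p.index_id,
       p.partition_id,
       p.hobt_id
FROM sys.partitions AS p
WHERE p.partition_id <> p.hobt_id;
[/code]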
