Tuesday, June 18, 2013

[how to] Deploying database changes in high availability environment



Deploying database changes in high availability environment

Posted: 18 Jun 2013 07:22 PM PDT

We are currently looking at our deploy process, which consists of an application tier and data tier for a web solution.

We'd like to develop a deployment strategy that does not require downtime for our sites.

From an application point of view this is simple enough - we have a load balanced environment, and can turn servers on and off as we deploy the code. However we are not really sure how to approach the database side of things, which at the moment is a bit of a single point of failure.

The database is used for both reads and writes, and it is not really feasible (as far as we can see) to turn either off without bringing sites offline. So we need to keep the database in sync with whichever version of the application is currently servicing requests. We also need to allow enough time for the database deploys to be made, without causing any problems.

Is this a solved problem in DBA circles? Does anyone have any tips, stories or resources that might help us approach this problem? At the moment we are not really sure what our first step in this process should be.
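A widely used pattern for this is the "expand/contract" (parallel change) migration: every schema change is split into backward-compatible steps so that both the old and the new application version can run against the same database during the rollout. A generic sketch, with an illustrative table and columns:

```sql
-- Expand: add the new column alongside the old one; the old app keeps working.
ALTER TABLE customers ADD COLUMN full_name VARCHAR(200) NULL;

-- Backfill while both versions run (the new app writes both columns).
UPDATE customers SET full_name = name WHERE full_name IS NULL;

-- Contract: drop the old column only after every app server runs the new version.
ALTER TABLE customers DROP COLUMN name;
```

The key discipline is that no single deploy ever requires the application and the database to change at the same instant.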

Postgres ODBC driver versions

Posted: 18 Jun 2013 06:44 PM PDT

On the Postgres ODBC driver page the name for each file is of the form:

psqlodbc_09_01_0100-x64.zip  psqlodbc_09_01_0100.zip  psqlodbc_09_01_0100-1.zip  psqlodbc_09_01_0200-x64.zip  psqlodbc_09_01_0200.zip  psqlodbc_09_01_0200-1.zip  psqlodbc_09_02_0100-x64.zip  psqlodbc_09_02_0100.zip  

I'm trying to make sense of the file names, and haven't been able to find any documentation. The 09_01 is clearly the major/minor version, and the x64 is pretty clear. What do the middle term and suffix, as in 0100-1, mean?

Do I need to use -f to fix insufficient max server memory settings?

Posted: 18 Jun 2013 07:07 PM PDT

SQL Server Version:

  SELECT @@VERSION;
  /*
  Microsoft SQL Server 2012 - 11.0.2100.60 (X64)
  Developer Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1)
  */

Issue:

So, I was playing around with max server memory and accidentally set it to 128MB, and now I cannot open a connection to SQL Server. When I tried to open a session and fix the mistake, I got this wonderful pop-up.

[screenshot: error pop-up]

In order to save myself, I decided to add the -f parameter to SQL Server and bounce the service.

See more about that here:

Server Memory Server Configuration Options

and here:

Database Engine Service Startup Options

[screenshot: SQL Server startup parameters]

Once the service was re-started, I tried once again to connect. I got the following message:

[screenshot: connection error message]

This was expected, according to the information I had about the -f flag. I was then able to open a connection using the Dedicated Admin Connection (DAC) and run sp_configure to alter the max memory setting. Here is what I saw in sp_configure before I made the change; it appears that the running value was again infinity (or close enough).

[screenshot: sp_configure output]

I ran this update to increase the max memory to 3GB:

  USE master;
  GO
  sp_configure 'max server memory (MB)', 3072;
  GO
  RECONFIGURE;
  GO

And then I removed the -f parameter and restarted SQL Server. At that point, the server became accessible in multi-user mode and the max memory value was 3GB. This is not the most elegant solution I have ever encountered, but it worked, so I decided to share it here.

Does someone have a quicker and cleaner solution?
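For what it's worth, the restarts can often be skipped entirely: the DAC reserves its own memory, so connecting through it (e.g. with `sqlcmd -S ServerName -A`) and running sp_configure usually works even when normal connections fail. A sketch of the commands, once connected over the DAC; note that `max server memory (MB)` is an advanced option:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 3072;
RECONFIGURE;
```

Since RECONFIGURE applies the running value immediately, no service restart or -f flag should be needed in this path.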

mysql: commands like create/grant/drop always return "0 rows affected" if OK

Posted: 18 Jun 2013 07:39 PM PDT

I was wondering why some special commands in MySQL, like CREATE USER, GRANT privileges, and DROP USER, return "0 rows affected" when they succeed, which is not true: rows in the mysql database are in fact affected.

I know these are not standard queries, but if we run delete from user where ... on the mysql database, we will see "N rows affected".

So the count is obtainable in a fairly easy way; otherwise the response "Query OK (0.00 sec)" would be much more accurate.

Or am I missing something? I verified the above on a few MySQL versions, 5.1-5.5.

How can I get my linked server working using Windows authentication?

Posted: 18 Jun 2013 02:29 PM PDT

I'm trying to get a linked server to ServerA created on another server, ServerB, using "Be made using the login's current security context" in a domain environment. I read that I'd need SPNs created for the service accounts that run SQL Server on each of the servers in order to enable Kerberos. I've done that, and both now show the authentication scheme to be Kerberos; however, I'm still facing the error:

"Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'".  

In Active Directory, I can see that the service account for ServerB is trusted for delegation to MSSQLSvc, but I noticed that the service account for ServerA does not yet have "trust this user for delegation" enabled. Does the target server also need to have that option enabled? Is anything else necessary to be able to use the current Windows login to use a linked server?

MySQL multiple rows with same id meeting two different conditions at same time

Posted: 18 Jun 2013 05:17 PM PDT

I am using the OTRS helpdesk ticket management system. When I execute the following query:

SELECT
    `ticket`.`id`,
    `ticket_history`.`ticket_id`,
    `ticket_history`.`id`,
    `ticket_history`.`name`,
    `ticket_history`.`create_time`
FROM `ticket_history`
INNER JOIN `ticket` ON `ticket_history`.`ticket_id` = `ticket`.`id`
WHERE `ticket_history`.`name` LIKE '%Raw% %new%'
   OR `ticket_history`.`name` LIKE '%Close'
ORDER BY `ticket_history`.`ticket_id`, `ticket_history`.`id` ASC

I get the following output:

+----+-----------+-----+-------------------------------------------+---------------------+
| id | ticket_id | id  |                   name                    |     create_time     |
+----+-----------+-----+-------------------------------------------+---------------------+
|  1 |         1 |  79 | %%Close                                   | 2013-06-10 11:50:33 |
|  2 |         2 |   2 | %%2013060810000011%%Raw%%3 normal%%new%%2 | 2013-06-08 21:59:02 |
|  3 |         3 |   5 | %%2013060810000021%%Raw%%3 normal%%new%%3 | 2013-06-08 21:59:03 |
|  3 |         3 |  22 | %%Close                                   | 2013-06-08 22:10:41 |
|  3 |         3 |  82 | %%Close                                   | 2013-06-10 11:50:49 |
|  4 |         4 |  88 | %%Close                                   | 2013-06-10 11:51:32 |
|  5 |         5 |  64 | %%2013060910000019%%Raw%%3 normal%%new%%5 | 2013-06-09 17:12:09 |
|  5 |         5 |  85 | %%Close                                   | 2013-06-10 11:51:10 |
|  6 |         6 |  92 | %%2013061010000016%%Raw%%3 normal%%new%%6 | 2013-06-10 12:00:24 |
|  7 |         7 |  95 | %%2013061010000025%%Raw%%3 normal%%new%%7 | 2013-06-10 13:05:05 |
|  8 |         8 |  98 | %%2013061110000014%%Raw%%3 normal%%new%%8 | 2013-06-11 19:05:06 |
|  8 |         8 | 109 | %%Close                                   | 2013-06-17 23:57:35 |
|  9 |         9 | 163 | %%2013061810000011%%Raw%%3 normal%%new%%9 | 2013-06-18 02:05:06 |
+----+-----------+-----+-------------------------------------------+---------------------+

I need to modify the above query so that it lists only the rows of each ticket_id that has "%Raw% %new%" in name and, at the same time, also has a row with "%Close" in it.

In other words, the three rows for ticket_id 3, the two rows for ticket_id 5, and the two rows for ticket_id 8 should be displayed from the above output.
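A common way to express this "both patterns exist for the same ticket" requirement is a pair of EXISTS subqueries. A sketch against the tables above (adjust the LIKE patterns as needed):

```sql
SELECT th.ticket_id, th.id, th.name, th.create_time
FROM ticket_history AS th
WHERE (th.name LIKE '%Raw% %new%' OR th.name LIKE '%Close')
  AND EXISTS (SELECT 1 FROM ticket_history AS x
              WHERE x.ticket_id = th.ticket_id AND x.name LIKE '%Raw% %new%')
  AND EXISTS (SELECT 1 FROM ticket_history AS y
              WHERE y.ticket_id = th.ticket_id AND y.name LIKE '%Close')
ORDER BY th.ticket_id, th.id;
```

Only ticket_ids satisfying both subqueries (3, 5, and 8 in the sample output) survive the filter.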

What can I add to a server to make SQL restores faster?

Posted: 18 Jun 2013 02:54 PM PDT

I have a 2.8TB SQL database (mostly data files, some 400GB of log files) that currently takes around 9 hours to restore. This database is used for testing purposes and must be deleted and restored from a backup between each run, to make sure we're always starting from the same point.

The server currently has 12 cores and 92GB of RAM, with the database on a RAID 5 disk subsystem. What areas usually cause bottlenecks for SQL restore processes? Is it the disk, the memory, or the CPU?
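While investigating, it can help to watch where a running restore spends its time; sys.dm_exec_requests reports progress and the current wait type for backup/restore commands (a sketch; available on SQL Server 2005 and later):

```sql
SELECT r.session_id, r.command, r.percent_complete,
       r.wait_type, r.wait_time
FROM sys.dm_exec_requests AS r
WHERE r.command IN ('RESTORE DATABASE', 'BACKUP DATABASE');
```

If the dominant waits are I/O-related, the RAID 5 write penalty is a likely suspect; also note that instant file initialization, where enabled, only ever skips zeroing of data files, never log files.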

How to write a query to find all tables in a db that have a specific column name

Posted: 18 Jun 2013 05:08 PM PDT

I've got a database with about 100 tables and I need to build a join query to get specific data from two of them. I know one but not the other. Basically I need something like:

select <tables> from <database> where exists table.column name;

How can I do this?
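On most engines this metadata is exposed through INFORMATION_SCHEMA, so a portable version of the pseudo-query above looks like this (works on MySQL, SQL Server, and PostgreSQL; substitute the column name you know):

```sql
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'your_column_name'
ORDER BY TABLE_SCHEMA, TABLE_NAME;
```

The result is every table (and schema) in the current database that contains that column, which gives you the second table for the join.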

Trying to find the last time that a value has changed

Posted: 18 Jun 2013 06:09 PM PDT

I have a table that has an ID, a value, and a date. There are many IDs, Values, and dates in this table.

Records are inserted into this table periodically. The ID will always stay the same but occasionally the value will change.

How can I write a query that will give me the ID plus the most recent time the value has changed? Note: the value will always increase.

From this sample data:

  Create Table Taco
  (
      Taco_ID    int,
      Taco_value int,
      Taco_date  datetime
  )

  Insert INTO Taco
  Values (1, 1, '2012-07-01 00:00:01'),
         (1, 1, '2012-07-01 00:00:02'),
         (1, 1, '2012-07-01 00:00:03'),
         (1, 1, '2012-07-01 00:00:04'),
         (1, 2, '2012-07-01 00:00:05'),
         (1, 2, '2012-07-01 00:00:06'),
         (1, 2, '2012-07-01 00:00:07'),
         (1, 2, '2012-07-01 00:00:08')

The result should be:

Taco_ID      Taco_date
1            2012-07-01 00:00:05

(Because 00:05 was the last time Taco_Value changed.)
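Since the value only ever increases, "the last time the value changed" is the earliest date at which the current (maximum) value appears. One way to express that, sketched against the sample table:

```sql
SELECT t.Taco_ID, MIN(t.Taco_date) AS Taco_date
FROM Taco AS t
JOIN (
    SELECT Taco_ID, MAX(Taco_value) AS max_value
    FROM Taco
    GROUP BY Taco_ID
) AS m
  ON m.Taco_ID = t.Taco_ID
 AND m.max_value = t.Taco_value
GROUP BY t.Taco_ID;
```

On the sample data this yields Taco_ID 1 with 2012-07-01 00:00:05, the first row carrying the latest value.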

Create primary keys based upon higher-level entity in access?

Posted: 18 Jun 2013 01:45 PM PDT

Is it possible to have access create primary keys for items by appending a string to the primary key one level above it?

I am trying to create an inventory management database for my company's server ops team, to keep track of things such as switches, servers, UPSs, etc. I have things organized to separate our different locations, the different rooms in those locations, the different racks in those rooms, and finally a 1-1 relationship between an occupied rack U and a device.

The primary key for a location would be the simple three-letter designation that we use as a site code. The primary key for a room would be that site code plus "-###", where "###" is the displayed room number. This is to simplify the attributes for the room and lower levels (I originally had an auto-generated number for the primary key, with a separate attribute for the display number). This would continue down to the rack-U and/or device entities.

For those, I'm looking to let the admins use a form, select the correct Location, Room, and Rack#, then enter a rack U and serial number/asset tag for the device. The rack-U primary key would then need to generate itself as the primary key value of its associated rack plus the U number.

To simplify, I would be manually inputting each entry down to rackU, and primary keys would look like:

[Loc] = ABC
[Room] = [Loc]-###
[Rack] = [Room]-A/1 (some racks are lettered, some are numbered)

Then, when entering a new device, user would select Location, Room, Rack, and input a rackU number. I need some way to auto-generate the rackU primary key to be the linked Rack primary key plus the given U number.

It's one of those things where I'm fairly certain I could make it work if I were writing this database from scratch in Python or PHP, or if I were using an actual MySQL database, but I'm completely lost as to how to handle this in Access.

Cannot run pgAdmin from the terminal: No such file libwx_gtk2u

Posted: 18 Jun 2013 02:04 PM PDT

(Probably irrelevant) I'm using Puppet to setup/manage a class of computers that are equipped with PostgreSQL and PostGIS, and this implies that each computer goes through an unattended install:

PostgreSQL

command => '/bin/mount -o remount exec /tmp              ;
            /bin/mkdir -m 0755 -p /opt/PostgreSQL/9.1    ;
            /tmp/setup/postgres-9.1.4.run                \
              --mode unattended                          \
              --superaccount postgres                    \
              --superpassword sUP34_se<uR3^P4sSw0rd      \
              --datadir /opt/PostgreSQL/9.1/data         \
              --locale 1                                 ;
            /bin/mount -o remount /tmp                   ;
            /bin/touch $installflag',

I should mention that, even though the install succeeds (exit code 0, at least), PostgreSQL does not appear under Applications.

PostGIS

command => '/bin/mount -o remount exec /tmp      ;
            /tmp/setup/postgis-1.5.bin           \
              --mode unattended                  \
              --pgport 5432                      \
              --pguser postgres                  \
              --pgpassword sUP34_se<uR3^P4sSw0rd ;
            /bin/mount -o remount /tmp',

When I now attempt to run /opt/PostgreSQL/9.1/bin/pgAdmin3 --version, I receive the following error (formatting added):

./pgAdmin3: error while loading shared libraries:
            libwx_gtk2u_stc-2.8.so.0:
            cannot open shared object file:
            No such file or directory

Running locate libwx_gtk2u gives me the path /opt/PostgreSQL/9.1/pgAdmin3/lib as its location, so, following the advice of a forum post, I altered LD_LIBRARY_PATH, previously empty, to be

LD_LIBRARY_PATH=/opt/PostgreSQL/9.1/pgAdmin3/lib  

to no effect. I'm out of both ideas and pages of Google search results.

I am running Red Hat Enterprise Linux 5 x86_64.

pgAdmin to display multiple result sets

Posted: 18 Jun 2013 01:58 PM PDT

With MS SQL Query Browser I can run multiple queries at the same time, and each result set will display in its own window.

Does PostgreSQL/pgAdmin have this functionality? Plugin perhaps?

How do I maintain more audit history?

Posted: 18 Jun 2013 11:23 AM PDT

I created a SQL Server audit and when I view the audit logs I only see today's audit. I am saving it to a file and I want to store audit for one year. Is there a way to store database and server audit for one year? Does SQL Server store audits for only a day?

CREATE SERVER AUDIT [Audit-20130618-142022]
TO FILE
(
    FILEPATH = N'C:\Documents\Audit\Audit',
    MAXSIZE = 0 MB,
    MAX_ROLLOVER_FILES = 2147483647,
    RESERVE_DISK_SPACE = OFF
)
WITH
(
    QUEUE_DELAY = 1000,
    ON_FAILURE = CONTINUE
);
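With MAXSIZE = 0 and unlimited rollover files, the .sqlaudit files themselves are kept on disk until deleted; the log viewer simply shows one file at a time. Older records can be read across every file in the target directory with sys.fn_get_audit_file (a sketch using the path from the definition above; adjust the wildcard to match your actual file names):

```sql
SELECT event_time, action_id, session_server_principal_name, statement
FROM sys.fn_get_audit_file(N'C:\Documents\Audit\Audit\*', DEFAULT, DEFAULT)
ORDER BY event_time;
```

So retention for a year is mostly a matter of not deleting (or of archiving) the audit files, rather than anything SQL Server discards on its own.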

Query mail logs for xp_sendmail

Posted: 18 Jun 2013 10:29 AM PDT

Using SQL Server 2008, I can query sysmail_log and other tables to see the results of sql emails.

Is anything remotely similar available for querying email logs in SQL Server 2000?

CSV File Into MYSQL Merging - LOAD DATA on Duplicate Key?

Posted: 18 Jun 2013 11:53 AM PDT

Hey DBA Stack Exchange,

My question is about MySQL. Right now I have a small-to-medium-sized CSV with 25k lines that I need to import into MySQL for a client. The problem is that only 4,411 rows have unique keys; the rest share a primary key with an earlier row and are therefore skipped. I can't use REPLACE INTO, because all the columns are present in the CSV; I need to do (in MongoDB terms) an upsert.

Basically, my question is: what is the best method to update the existing columns, ON DUPLICATE KEY style, during a bulk CSV load?

The database is MySQL.

Table1 {
    userID        VARCHAR(50) PK,
    UnixTimestamp VARCHAR(100),
    likesCars     TINYINT(1),
    likesTrucks   TINYINT(1),
    HasMorgage    TINYINT(1),
    ... 1000 more
}

Data:

2323232,1111111,1,0,0,0,1...
2323232,2333222,0,0,0,1,0... (needs merge)
1121121,222222,0,1,1,0,0...

goes on and on haha.

I was thinking that, since they are mainly tinyints, something like this might work:

Load Data InFile CSv... (userId, @unixTimestamp, @likesCars, @likesTrucks...)
  SET unixTimestamp = @unixTimestamp,
      likesCars   = @likesCars OR likesCars,
      likesTrucks = @likesTrucks OR likesTrucks

Am I on the right track here?
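Not quite: LOAD DATA INFILE has no ON DUPLICATE KEY clause (only IGNORE or REPLACE). A common workaround is to load into a staging table and then merge with INSERT ... ON DUPLICATE KEY UPDATE. A sketch, using a few of the columns from the example table above:

```sql
CREATE TABLE Table1_staging LIKE Table1;

LOAD DATA INFILE '/path/to/file.csv'
INTO TABLE Table1_staging
FIELDS TERMINATED BY ',';

INSERT INTO Table1 (userID, UnixTimestamp, likesCars, likesTrucks, HasMorgage)
SELECT userID, UnixTimestamp, likesCars, likesTrucks, HasMorgage
FROM Table1_staging
ON DUPLICATE KEY UPDATE
    UnixTimestamp = VALUES(UnixTimestamp),
    likesCars     = likesCars   OR VALUES(likesCars),
    likesTrucks   = likesTrucks OR VALUES(likesTrucks),
    HasMorgage    = HasMorgage  OR VALUES(HasMorgage);

DROP TABLE Table1_staging;
```

The OR in the UPDATE clause preserves a 1 that either the existing row or the incoming row carries, which matches the "merge" semantics described above.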

Nonclustered index is faster than clustered index?

Posted: 18 Jun 2013 07:23 PM PDT

Both tables have the same structure and 19,972 rows each. To practice indexing, I created the two tables with the same structure and created

clustered index on persontb(BusinessEntityID)  

and

nonclustered index on Persontb_NC(BusinessEntityId)  

and table structure

BusinessEntityID int
FirstName        varchar(100)
LastName         varchar(100)

-- Nonclustered key on BusinessEntityId takes 38%
SELECT BusinessEntityId
FROM Persontb_NC
WHERE businessentityid BETWEEN 400 AND 4000

-- Clustered key on BusinessEntityId takes 62%
SELECT BusinessEntityId
FROM persontb
WHERE businessentityid BETWEEN 400 AND 4000

[screenshot: execution plan showing relative query costs]

Why does the clustered index take 62% and the nonclustered one only 38%?

Central stored procedure to execute in calling database context

Posted: 18 Jun 2013 09:21 AM PDT

I am working on a customized maintenance solution using the sys.dm_db_index_physical_stats view. I currently have it being referenced from a stored procedure. Now when that stored procedure runs on one of my databases, it does what I want it to do and pulls down a listing of all records regarding any database. When I place it on a different database it pulls down a listing of all records relating to only that DB.

For example (code at bottom):

  • Query run against Database 6 shows [requested] information for databases 1-10.
  • Query run against Database 3 shows [requested] information for only database 3.

The reason I want this procedure specifically on database three is because I'd prefer to keep all maintenance objects within the same database. I'd like to have this job sit in the maintenance database and work as if it were in that application database.

Code:

ALTER PROCEDURE [dbo].[GetFragStats]
     @databaseName  NVARCHAR(64) = NULL
    ,@tableName     NVARCHAR(64) = NULL
    ,@indexID       INT          = NULL
    ,@partNumber    INT          = NULL
    ,@Mode          NVARCHAR(64) = 'DETAILED'
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @databaseID INT, @tableID INT

    IF @databaseName IS NOT NULL
        AND @databaseName NOT IN ('tempdb','ReportServerTempDB')
    BEGIN
        SET @databaseID = DB_ID(@databaseName)
    END

    IF @tableName IS NOT NULL
    BEGIN
        SET @tableID = OBJECT_ID(@tableName)
    END

    SELECT D.name AS DatabaseName,
        T.name AS TableName,
        I.name AS IndexName,
        S.index_id AS IndexID,
        S.avg_fragmentation_in_percent AS PercentFragment,
        S.fragment_count AS TotalFrags,
        S.avg_fragment_size_in_pages AS PagesPerFrag,
        S.page_count AS NumPages,
        S.index_type_desc AS IndexType
    FROM sys.dm_db_index_physical_stats(@databaseID, @tableID,
            @indexID, @partNumber, @Mode) AS S
    JOIN sys.databases AS D ON S.database_id = D.database_id
    JOIN sys.tables AS T ON S.object_id = T.object_id
    JOIN sys.indexes AS I ON S.object_id = I.object_id
                         AND S.index_id = I.index_id
    WHERE S.avg_fragmentation_in_percent > 10
    ORDER BY DatabaseName, TableName, IndexName, PercentFragment DESC
END
GO

How to add column to big table in MySQL

Posted: 18 Jun 2013 10:45 AM PDT

I'm a PHP developer, so don't be too strict. I have a big table, about a 5.5GB dump. Our PM decided to add a new column to it for a new feature. The table is InnoDB. What I have tried:

  1. ALTER TABLE in a screen session, with the table locked. It took ~30 hours and got nowhere, so I stopped it. The first time I made a mistake because I didn't end all transactions, but the second time there was no multi-lock; the status was "copy to tmp table".

  2. Since I also need to apply partitioning to this table, we decided to make a dump, rename the table, and create a new table with the same name and the new structure. But the dump is a strict copy (at least I didn't find another option), so I added the new column to the dump with sed and ran it. Then some strange errors appeared; I believe they were caused by the charset. The table is utf-8, and the file became us-ascii after sed, so I got errors (unknown command '\'') on 30% of the data. So this is also a bad way.

What other options are there to accomplish this with reasonable performance? (I could do it with a PHP script, but that would take ages.) What would the performance of INSERT ... SELECT be in this case?

Thanks for any advice.
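One common approach is exactly the INSERT ... SELECT mentioned above: build the new table offline, bulk-copy, then swap with an atomic rename. A sketch, with illustrative table and column names:

```sql
-- New structure: old columns plus the new one (add partitioning here too, if desired).
CREATE TABLE big_table_new LIKE big_table;
ALTER TABLE big_table_new ADD COLUMN new_col INT NULL;

-- Bulk copy; the new column takes its default.
INSERT INTO big_table_new (col1, col2)
SELECT col1, col2 FROM big_table;

-- Atomic swap; drop big_table_old once satisfied.
RENAME TABLE big_table TO big_table_old,
             big_table_new TO big_table;
```

Percona Toolkit's pt-online-schema-change automates essentially this pattern (adding triggers to catch writes that arrive during the copy), which avoids locking the table for the whole duration.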

Create a trigger to update table data on another Server's database

Posted: 18 Jun 2013 06:36 PM PDT

I am creating a trigger in MySQL and I need a little help.

I have 2 websites, 2 databases (same name) on 2 different web servers, S1 & S2.

These databases have the same tables names.

I want both the user data on both the websites to be the same.

So if one user registers on S1, then that user registration information should be passed to S2.

If a user registration information is updated on S1, the same information should be updated on S2.

And the same applies for S2.

How can I create a trigger so that every time there is an insert / update / delete in the database on S1, the user table on S2 also gets updated automatically?

And likewise, every time there is an insert / update / delete in the database on S2, the user table on S1 also gets updated automatically?

Is this possible? Could you provide some examples?
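A MySQL trigger cannot open a connection to another server by itself. What it can do is write to a local table that uses the FEDERATED storage engine, which transparently points at the table on the other server. A sketch with illustrative names (the FEDERATED engine must be enabled on the local server, and this does not handle S2 being unreachable):

```sql
-- On S1: a local alias for the user table that physically lives on S2.
CREATE TABLE user_s2 (
    id   INT PRIMARY KEY,
    name VARCHAR(100)
) ENGINE=FEDERATED
  CONNECTION='mysql://repl_user:password@s2-host:3306/mydb/user';

-- On S1: mirror inserts into S2.
DELIMITER //
CREATE TRIGGER user_after_insert
AFTER INSERT ON user
FOR EACH ROW
BEGIN
    INSERT INTO user_s2 (id, name) VALUES (NEW.id, NEW.name);
END//
DELIMITER ;
```

For two-way synchronization, though, MySQL's built-in replication (e.g. a master-master setup) is usually a safer route than symmetric triggers, which can loop or fail silently when one side is down.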

How to properly clean Merge Replication Setup

Posted: 18 Jun 2013 12:46 PM PDT

What is the proper way to completely clean Merge Replication setup ?

I have tried deleting the subscriptions, then deleting the publication, then choosing "Disable Publishing & Distribution" and deleting the distribution database, and then setting up the replication again.

Unfortunately, after that I discovered that the dbo.msmergesubscriptions table still has old records about subscriptions from two years ago!

Is there any official documentation about how to properly & totally clean Merge Replication setup?

PostgreSQL LDAP authentication

Posted: 18 Jun 2013 09:15 AM PDT

We are setting up a PostgreSQL 9.1 database at work. There is no classified data in the database, but we want to know who is doing what. We are using LDAP authentication on the network, so we would like to use that for the database as well, but according to the documentation, all users still need to be defined as database users.

We could of course do a "dump" of users to the database, but we would prefer if it could be possible to define users in the database as soon as they have authenticated in LDAP. This would not be too difficult using a wrapper script, but is it somehow possible to do this directly? (Most users log in using psql.)

Tool to export data with all relational data?

Posted: 18 Jun 2013 04:32 PM PDT

Is there a tool to export selected rows from a table together with all the related data stored in other tables linked by the relational design?

The purpose is to ease migrations of bits of data between servers for adhoc migrations. I am looking specifically for a tool for MySQL InnoDB with defined foreign keys.

mysql: need help to optimize my query/table

Posted: 18 Jun 2013 09:32 AM PDT

I'm wondering if someone could help me optimize my tables/query to speed things up. It is currently running ridiculously slowly. I think a well-thought-out index could help me. Any help would be really appreciated.

The tables URLS and TAGS mentioned below have 2 and 20 million rows respectively (and will probably end up with 10x that). A query like the one below already takes 10 seconds to run.

An Example: http://whatrethebest.com/php+tutorials

Tables

CREATE TABLE IF NOT EXISTS `TAGS` (
  `hash` varchar(255) NOT NULL,
  `tag` varchar(255) NOT NULL,
  UNIQUE KEY `my_unique_key` (`hash`,`tag`),
  KEY `tag` (`tag`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

and

CREATE TABLE IF NOT EXISTS `URLS` (
  `url` text NOT NULL,
  `domain` text,
  `title` text NOT NULL,
  `description` text,
  `numsaves` int(11) NOT NULL,
  `firstsaved` varchar(256) DEFAULT NULL,
  `md5` varchar(255) NOT NULL DEFAULT '',
  PRIMARY KEY (`md5`),
  UNIQUE KEY `md5` (`md5`),
  KEY `numsaves` (`numsaves`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

QUERY

SELECT urls.md5, urls.url, urls.title, urls.numsaves
FROM urls
JOIN tags ON urls.md5 = tags.hash
WHERE tags.tag IN ('php', 'tutorials')
GROUP BY urls.md5
HAVING COUNT(*) = 2
ORDER BY urls.numsaves DESC
LIMIT 20

EXPLAIN

I'm not sure what this shows

id  select_type  table  type    possible_keys      key      key_len  ref                                   rows    Extra
1   SIMPLE       tags   range   my_unique_key,tag  tag      767      NULL                                  230946  Using where; Using index; Using temporary; Using filesort
1   SIMPLE       urls   eq_ref  PRIMARY,md5        PRIMARY  767      jcooper_whatrethebest_urls.tags.hash  1

So I think the problem is:

Certain tags, like 'php', have 34,000 entries, most of which have under 5 saves. But in order to get the 20 most saved, it has to sort them all. Right?

I can't really create a numsaves column in TAGS and index on that, because that number will be changing up and down, and it wouldn't make sense. Is it possible to create a cross-table index between urls.numsaves and tags.tag? Or a third table to use in my query somehow? Would this solve my problem? I know almost nothing about indexing.

Any help would be really appreciated!


EDITS BELOW

RESPONSE TO YperCube:

Thank you, Thank you, your suggestions have sped up my queries by a factor of 10-20X . This is an immense improvement. I can't thank you enough.

I'm posting my current queries and tables with execution times in case you or anyone else has any more optimization suggestions. I am worried that as my table grows I may not be able to keep my search times under 3 seconds, which would be a killer.

New Query Example 1

SELECT u.id, u.url, u.title, u.numsaves
FROM urls AS u
JOIN tags AS t1 ON t1.url_id = u.id AND t1.tag = 'programming'
JOIN tags AS t2 ON t2.url_id = u.id AND t2.tag = 'language'
ORDER BY u.numsaves DESC
LIMIT 20

Showing rows 20 - 19 (20 total, Query took 0.2009 sec)

id  select_type  table  type    possible_keys           key      key_len  ref                                         rows   Extra
1   SIMPLE       t2     ref     tag_id                  tag_id   767      const                                       53820  Using where; Using index; Using temporary; Using filesort
1   SIMPLE       t1     ref     tag_id                  tag_id   772      const,jcooper_whatrethebest_urls.t2.url_id  1      Using where; Using index
1   SIMPLE       u      eq_ref  PRIMARY,id_numsaves_IX  PRIMARY  4        jcooper_whatrethebest_urls.t2.url_id        1

New Query Example 2 (seems to be slower)

SELECT u.id, u.url, u.title, u.numsaves
FROM urls AS u
JOIN (
    SELECT ui.id, ui.numsaves
    FROM urls AS ui
    JOIN tags AS t1 ON t1.url_id = ui.id AND t1.tag = 'programming'
    JOIN tags AS t2 ON t2.url_id = ui.id AND t2.tag = 'language'
    ORDER BY ui.numsaves DESC
    LIMIT 20
) AS ulim ON ulim.id = u.id
ORDER BY ulim.numsaves DESC;

Showing rows 0 - 29 (2,794,577 total, Query took 0.4633 sec)

id  select_type  table       type    possible_keys           key      key_len  ref                                   rows   Extra
1   PRIMARY      <derived2>  ALL     NULL                    NULL     NULL     NULL                                  20     Using filesort
1   PRIMARY      u           eq_ref  PRIMARY,id_numsaves_IX  PRIMARY  4        ulim.id                               1
2   DERIVED      t2          ref     tag_id                  tag_id   767                                            53820  Using where; Using index; Using temporary; Using filesort
2   DERIVED      t1          ref     tag_id                  tag_id   772      jcooper_whatrethebest_urls.t2.url_id  1      Using where; Using index
2   DERIVED      ui          eq_ref  PRIMARY,id_numsaves_IX  PRIMARY  4        jcooper_whatrethebest_urls.t2.url_id  1

Using Query Example on a Single Tag (slower by a lot)

SELECT u.id, u.url, u.title, u.numsaves
FROM urls AS u
JOIN tags AS t1 ON t1.url_id = u.id AND t1.tag = 'programming'
ORDER BY u.numsaves DESC
LIMIT 20

Showing rows 20 - 19 (20 total, Query took 3.7395 sec)

id  select_type  table  type    possible_keys           key      key_len  ref                                   rows    Extra
1   SIMPLE       t1     ref     tag_id                  tag_id   767      const                                 200576  Using where; Using index; Using temporary; Using filesort
1   SIMPLE       u      eq_ref  PRIMARY,id_numsaves_IX  PRIMARY  4        jcooper_whatrethebest_urls.t1.url_id  1

I'm not sure why this one is so much slower?

Do you have any ideas of a query to optimize for querying a single tag?

My Current Tables

CREATE TABLE `urls` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `url` text NOT NULL,
  `domain` text,
  `title` text NOT NULL,
  `description` text,
  `numsaves` int(11) NOT NULL,
  `firstsaved` varchar(256) DEFAULT NULL,
  `md5` varchar(255) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`),
  UNIQUE KEY `md5` (`md5`),
  KEY `id_numsaves_IX` (`id`,`numsaves`)
) ENGINE=InnoDB AUTO_INCREMENT=2958560 DEFAULT CHARSET=utf8

CREATE TABLE `tags` (
  `url_id` int(11) DEFAULT NULL,
  `hash` varchar(255) NOT NULL,
  `tag` varchar(255) NOT NULL,
  UNIQUE KEY `tag_id` (`tag`,`url_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

Thank you again

Replication on MySQL server

Posted: 18 Jun 2013 02:32 PM PDT

I had to stop the slave server to test something. After I started the server again, there is a problem with replication on the MySQL server.

On the problematic server

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Connecting to master
                  Master_Host: servera
                  Master_User: replica
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: servera-bin.000024
          Read_Master_Log_Pos: 808459481
               Relay_Log_File: serverb-relay-bin.000071
                Relay_Log_Pos: 4
        Relay_Master_Log_File: servera-bin.000024
             Slave_IO_Running: No
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 808459481
              Relay_Log_Space: 106
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 1129
                Last_IO_Error: error connecting to master 'replica@servera:3306' - retry-time: 60  retries: 86400
               Last_SQL_Errno: 0
               Last_SQL_Error:

on the problematic server:

SELECT user, host FROM mysql.user WHERE Repl_slave_priv = 'Y';
+---------+-----------+
| user    | host      |
+---------+-----------+
| root    | localhost |
| root    | serverb   |
| root    | 127.0.0.1 |
| replica | servera   |
| replica | serverb   |
+---------+-----------+

on the main server:

SELECT user, host FROM mysql.user WHERE Repl_slave_priv = 'Y';
+---------+-----------+
| user    | host      |
+---------+-----------+
| root    | localhost |
| root    | servera   |
| root    | 127.0.0.1 |
| replica | servera   |
| replica | serverb   |
+---------+-----------+

According to what I've read, I need to execute the following command on the main server:

mysql> FLUSH HOSTS;   

What will happen then? If there is an application connected to it, will it be disconnected as well?

Merge Replication identity field issues

Posted: 18 Jun 2013 01:32 PM PDT

One of our clients is using our software with merge replication of a database on a SQL Server 2008 R2 machine. There are two production environments in separate geographical locations, only one of which is live at any one time: one live, one on standby. Only the live database is updated by the applications. Every couple of months they fail over between the datacentres, and the standby environment becomes the live one. There is an instance of SQL Server 2008 in each datacentre, and merge replication is used to keep them in sync. This was all working fine until the beginning of the year, when we started getting replication errors on some lTID columns, in various tables, that have the Identity property set.

The errors were like this one:

The insert failed. It conflicted with an identity range check constraint in database 'GateMain', replicated table 'dbo.tGateCalcsLog', column 'lTID'. If the identity column is automatically managed by replication, update the range as follows: for the Publisher, execute sp_adjustpublisheridentityrange; for the Subscriber, run the Distribution Agent or the Merge Agent.

Then after the last failover we noticed we had an issue with the lTID values in one specific table. Our application relies on the lTID value always having incremented in order, such that the highest lTID value is always the newest entry in the table. We've found that, due to how the identity ranges are being managed by replication, when the system is failed over the now-live database server may have a range of lTID values that are lower than those already present in the table. Is there a way to manage this in merge replication so we can guarantee that the next identity value allocated to the lTID column is greater than any lTID currently in the table? Or do we need to use a different type of replication, or possibly mirroring?
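
As a sketch only, following the guidance embedded in the error message itself (the procedure name comes from the error text; the table name is the one in the example error):

```sql
-- On the publisher, refresh the identity range for the affected article:
EXEC sys.sp_adjustpublisheridentityrange
     @table_name = N'tGateCalcsLog';

-- Inspect the current identity seed on either side:
DBCC CHECKIDENT ('dbo.tGateCalcsLog', NORESEED);
```

Note that merge replication hands out identity ranges per replica by design, so a strict global ordering of lTID across failovers is not something it guarantees; a separate "last modified" datetime or rowversion column is the usual workaround for newest-row detection.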

MySQL Slaves lag behind master

Posted: 18 Jun 2013 03:32 PM PDT

I have one master and four slaves. Sometimes all my slaves lag behind the master. I have implemented the heartbeat for monitoring replication lag. Now I am trying to find why the slaves are lagging behind the master.

I saw the slow queries (for today) on the master and I found that the slowest query (DML) was taking 138 seconds. But the slaves were lagging about 1400 seconds and there were also no slow queries on the slaves for DML (update, delete, insert, etc.).

Points to be taken into consideration:

  1. All tables are InnoDB.
  2. 68 GB of RAM (Master as well as slaves).
  3. Data size about 1 TB.
  4. The master and slaves have been running for a long time.

What may be the reason for lag?

MySQL replication between VPS and shared host

Posted: 18 Jun 2013 05:32 PM PDT

I have a VPS where I have ssh access and shared host with CPanel. Both have MySQL.

I need to create a master/slave setup between the two servers. As per this tutorial: How To Set Up Database Replication In MySQL, I will need to modify the my.cnf file.

Is it possible to achieve this with the following permissions on both the servers?

  • VPS

    I have root access with ssh login, where I can modify the /etc/my.cnf file.

  • Shared host

    Cpanel with phpmyadmin.
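
For reference, a minimal sketch of what each side needs (hostnames and credentials are hypothetical). The master side requires binary logging and a replication account, which your VPS root access covers; the sticking point is the slave side, which also needs a server-id in its config and the SUPER privilege for CHANGE MASTER, neither of which phpMyAdmin on shared hosting typically provides.

```sql
-- /etc/my.cnf on the VPS (master):
--   [mysqld]
--   server-id = 1
--   log-bin   = mysql-bin

-- On the master, create a replication account (hypothetical names):
CREATE USER 'repl'@'%' IDENTIFIED BY 'secret';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On the slave (requires config access and SUPER):
CHANGE MASTER TO
    MASTER_HOST = 'vps.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 4;
START SLAVE;
```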

Partition of tablespace

Posted: 18 Jun 2013 08:32 PM PDT

My Postgres server is running on a Windows machine and three tablespaces are created. The PostgreSQL server is installed on the C: drive and the tablespaces are on the D: and E: drives. So how do I take a backup and restore it? Is it similar to a normal backup? And when restoring on another machine, do I need to have the same drive setup, or can I restore it to a single drive? From the tablespace folder, is it possible to retrieve the data for only one table, which is huge?
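
A sketch, assuming a logical backup is acceptable: pg_dump/pg_restore do not depend on the tablespace layout, so the restore machine does not need matching D: and E: drives, and a single large table can be dumped on its own. Database and table names below are placeholders. (Pulling one table directly out of the tablespace folder's files is not practical; the on-disk format is not self-describing.)

```shell
# Logical backup of the whole database (tablespace-independent):
pg_dump -U postgres -Fc -f mydb.dump mydb

# Restore on another machine; --no-tablespaces puts everything
# in the default tablespace, so a single drive is fine:
pg_restore -U postgres -d mydb --no-tablespaces mydb.dump

# Dump just the one huge table:
pg_dump -U postgres -Fc -t big_table -f big_table.dump mydb
```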

How do I make a date set in Analysis Services Tabular Model

Posted: 18 Jun 2013 10:51 AM PDT

In SQL Server 2012 Analysis Services Tabular, how do I make a Rolling 12 Month set so a user can select it?

[MS SQL Server] BACKUP LOG cannot be performed because there is no current database backup.

[MS SQL Server] BACKUP LOG cannot be performed because there is no current database backup.


BACKUP LOG cannot be performed because there is no current database backup.

Posted: 18 Jun 2013 03:46 AM PDT

I have a vendor's production MicroTel server. It has 3 databases, all in bulk-logged recovery mode. I set up a maintenance plan to do a full backup at the weekend, a differential every day, and a transaction log backup every 3 hours. I got a failure error: "BACKUP LOG cannot be performed because there is no current database backup." I do see it has a full backup and a differential before the transaction log backup, so why does it say there is no current backup? I can also verify it has a full backup by restoring the database from SSMS; I see it lists the most recent full backup and differential backup. Thanks,
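
For reference, a log backup can only run once a full backup has started the chain after the database entered full or bulk-logged recovery; if a database was recently switched out of simple recovery, the old full backup does not count. A minimal sketch of the required order, with hypothetical names and paths:

```sql
-- A full backup must complete before the first log backup can run:
BACKUP DATABASE MicroTel TO DISK = N'X:\Backups\MicroTel_full.bak';

-- Only then is this valid:
BACKUP LOG MicroTel TO DISK = N'X:\Backups\MicroTel_log.trn';
```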

Cannot access full database

Posted: 18 Jun 2013 03:52 AM PDT

I've got a database with a tran log that expanded and then filled up the disk. Unfortunately, I can't even execute "use database" without an error (cannot be opened due to insufficient disk). If you can't "use" the database, then you can't alter the thing to set it offline then online to fix the problem. And if you can't "use" the database, then you can't truncate the log via:

    dbcc shrinkfile (databaseName_Log, 1) -- shrink truncated log file to 1 meg

Sometimes your sys admin can expand the disk the log is on. If that is not possible, how can I bring this database back online? There's always a restore from the latest backup, but I'm wondering if there is any other viable solution.
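
One option worth trying (a sketch, not verified against this exact failure mode): ALTER DATABASE is issued from master rather than from inside the stuck database, so you may be able to add a second log file on a drive that still has free space, then shrink the original once the database opens. The database name and path below are hypothetical.

```sql
USE master;

-- Give the engine somewhere to write log records so the
-- database can open again:
ALTER DATABASE StuckDb
    ADD LOG FILE (NAME = StuckDb_log2,
                  FILENAME = 'E:\SQLLogs\StuckDb_log2.ldf',
                  SIZE = 512MB);
```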

Restoring DB to multiple files

Posted: 18 Jun 2013 02:05 AM PDT

Is it possible to restore a database's logical file to multiple physical files? The following produces an "expected file or id" error message.

    RESTORE DATABASE TestDB
    FROM DISK = 'Z:\SQLServerBackups\AdventureWorks2012.bak'
    WITH MOVE 'AdventureWorks2012_Data' TO 'C:\MySQLServer\testdb.mdf',
         'C:\MySQLServer\testdb2.ndf',
         MOVE 'AdventureWorks2012_Log' TO 'C:\MySQLServer\testdb.ldf';
    GO
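
Each MOVE clause maps exactly one logical file to one physical file, which is why the extra path triggers the syntax error. A sketch of the valid form, followed by adding a second data file after the restore if multiple files are the goal:

```sql
RESTORE DATABASE TestDB
FROM DISK = 'Z:\SQLServerBackups\AdventureWorks2012.bak'
WITH MOVE 'AdventureWorks2012_Data' TO 'C:\MySQLServer\testdb.mdf',
     MOVE 'AdventureWorks2012_Log'  TO 'C:\MySQLServer\testdb.ldf';
GO

-- To end up with multiple data files, add one after the restore:
ALTER DATABASE TestDB
ADD FILE (NAME = TestDB_Data2,
          FILENAME = 'C:\MySQLServer\testdb2.ndf');
```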

Stop particular event logging in SQL Server errorlog

Posted: 17 Jun 2013 05:19 PM PDT

Hi, I need to stop a particular event from being written to the SQL Server error log: "Error: 1105, Severity: 17, State: 2. Could not allocate space for object 'dbo.XXX'.'XXXXX' in database 'XXXXX' because the 'PRIMARY' filegroup is full". Could you please let me know the trace flag number I need to use to stop this warning being written to the SQL Server error log?

page life expectancy

Posted: 17 Jun 2013 08:53 AM PDT

I'd like to get an idea of what is normal when it comes to page life expectancy. I was reading that on an average server the PLE is about 300, or 5 minutes. Some of my database servers have PLEs of 2244, 74252 and 6707. Should I worry about these high numbers? I'm using the query below to pull this information.

    SELECT [object_name], [counter_name], [cntr_value]
    FROM sys.dm_os_performance_counters
    WHERE [object_name] LIKE '%Manager%'
      AND [counter_name] = 'Page life expectancy'

SQL Server Agent jobs running under SA

Posted: 18 Jun 2013 12:49 AM PDT

I have an issue with some of my jobs. They are owned by the same domain account that the SQL Server Agent uses. When they run as scheduled, everything appears fine. When they are run manually, we occasionally get feedback that they are running under the SA account.

I'm scratching my head on this one, trying to figure out why they run under the proper account when scheduled but sometimes (not always) run under SA when manually executed by a member of the DBA team. Books Online is not helpful, and my google-fu does not seem to come up with articles explaining the authentication routines of SQL Agent jobs. Does anyone have links they can point me to that might explain this issue?

EDIT: More information. The agent is using an account assigned to the SysAdmin role (I don't control this) and there are no proxy accounts being used on any of the job steps. We are getting the information from a login watch job that looks at master.dbo.sysprocesses and gets the login information from that table. So what process is Microsoft using to load the sysprocesses table? I'm wondering if that might hold a clue as to why we're seeing SA as the account when a job is manually run.

Tracking Job Info from AutoEmail

Posted: 10 Jan 2013 10:48 PM PST

I'm trying to track down the job_id for a job that sent an auto-email alert to us this morning. The information I have is:[quote]The host SERVERNAME has logged in with the functional login 'sa' using SQLAgent - TSQL JobStep (Job 0x3FF647C915B5604EAAAEBB0385D81250 : Step 1).[/quote]The number doesn't equate to a UNIQUEIDENTIFIER or any Job_ID in sysjobs. Does anyone have advice on how I can make the connection?
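
In my experience, the hex token in these messages is the raw bytes of the job's uniqueidentifier, so it can be converted directly rather than matched as a string:

```sql
SELECT name, job_id
FROM msdb.dbo.sysjobs
WHERE job_id = CONVERT(uniqueidentifier,
                       0x3FF647C915B5604EAAAEBB0385D81250);
```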

How do you deal with Sql Service Pack Upgrades....?

Posted: 17 Jun 2013 03:40 PM PDT

Years ago we installed a SQL Server service pack on a non-clustered server and the results were usually fine. Now they want a fallback plan in case the service pack install fails. This is a good idea, but I'm not sure what comprises a good fallback plan for SQL. (Uninstall the service pack -- never done that before. We can start by backing up all databases, but restoring them, including the system databases, will be time consuming. Maybe we should image the entire server to trim the restoration time.)

Any thoughts you have on this matter will be appreciated.

TIA, Barkingdog

P.S. I presume a SQL service pack can possibly update\modify both system and user databases.

The database principal owns objects in the database and cannot be dropped

Posted: 02 May 2013 05:57 PM PDT

Hi guys, recently we got this error: "The database principal owns objects in the database and cannot be dropped." The database version is SQL Server 2008. I have checked that the user doesn't own a schema, procedure, function, table, view and so on. My question is: how do I check the user's objects?
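
Two catalog queries that usually surface the blocking ownership; checking both objects and schemas matters, since an owned schema alone is enough to block the drop. Replace the principal name with the user in question:

```sql
-- Objects owned directly by the principal:
SELECT o.name, o.type_desc
FROM sys.objects AS o
JOIN sys.database_principals AS p
  ON o.principal_id = p.principal_id
WHERE p.name = N'the_user';

-- Schemas owned by the principal:
SELECT s.name AS owned_schema
FROM sys.schemas AS s
JOIN sys.database_principals AS p
  ON s.principal_id = p.principal_id
WHERE p.name = N'the_user';
```

Transferring ownership with ALTER AUTHORIZATION on anything these return should then allow the drop.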

[Articles] The Best Programmers

[Articles] The Best Programmers


The Best Programmers

Posted: 17 Jun 2013 11:00 PM PDT

How do you keep the best programmers? Steve Jones has a few thoughts that might cause you to rethink how you manage your programming stars.

[SQL 2012] SOS DB In-Recovery

[SQL 2012] SOS DB In-Recovery


SOS DB In-Recovery

Posted: 18 Jun 2013 01:58 AM PDT

I am running SQL Server 2012 on a Win 7 machine. Yesterday, due to an extended power failure and other issues, the machine froze. Now, when I rebooted the machine, the database is in In-Recovery mode. Please help. What do I do?

Cleaning the cache for a Query.

Posted: 17 Jun 2013 10:35 PM PDT

Dear All, while testing a query, when we run it a second time it uses the cache, so it performs better than the previous run. [b]Do we have any way to clean the cache of a particular query without affecting the cache of other queries?[/b]
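
One approach (SQL Server 2008 and later): find the plan_handle for the specific statement and pass it to DBCC FREEPROCCACHE, which evicts only that plan and leaves the rest of the cache alone. The LIKE pattern and the handle value below are placeholders.

```sql
-- Find the cached plan for the query under test:
SELECT cp.plan_handle, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%your query text here%';

-- Evict just that plan (handle value is illustrative):
-- DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000);
```

Note this clears only the plan cache, not the buffer pool, so data pages the query touched will still be in memory.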

Indexing with Ifilters

Posted: 18 Jun 2013 01:13 AM PDT

I've scoured all the posts on this and followed all the advice and examples, and still can't seem to get SQL to index PDFs or Office documents, so I figure I must be missing something really basic! I have set up a database table for the documents and checked the various filters are installed and enabled (see code below). I know that full text is installed and working, as if I upload a text document via a webpage it indexes fine and a CONTAINSTABLE picks up the indexed words. If I do the same with a PDF or Word doc then there are no errors, and the fulltext properties say that the document has been added, but no index terms appear (using SELECT display_term, column_id, document_count FROM sys.dm_fts_index_keywords(DB_ID('test'), OBJECT_ID('documents'))). Any help at all greatly appreciated as I'm losing marbles over this! Ta, Jeff

    /* code so far */
    /* not sure of the order some of these statements should appear in but
       have tried various permutations.. clearly not the right one! */
    CREATE TABLE [dbo].[Documents](
        [ID] INT IDENTITY(1000000,1),
        [Extension] [VARCHAR] (10) NOT NULL,
        [Content] [VARBINARY] (MAX) NOT NULL,
        [FileSize] [INT] NOT NULL,
        [FileName] [NVARCHAR] (500) NOT NULL,
        [Stamp] [TIMESTAMP] NOT NULL
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[Documents] WITH NOCHECK
        ADD CONSTRAINT [PK_Documents] PRIMARY KEY CLUSTERED ([ID])
    GO
    EXEC sp_fulltext_service 'load_os_resources', 1
    EXEC sp_fulltext_service 'verify_signature', 0
    EXEC sp_fulltext_service 'update_languages'
    RECONFIGURE WITH OVERRIDE
    CREATE FULLTEXT CATALOG testcatalog
    GO
    CREATE FULLTEXT INDEX ON [dbo].[Documents](
        content TYPE COLUMN extension LANGUAGE 1033
    ) KEY INDEX pk_documents ON testcatalog;
    GO
    IF (SELECT DATABASEPROPERTY(DB_NAME(), N'IsFullTextEnabled')) <> 1
        EXEC sp_fulltext_database N'enable'
    GO
    IF NOT EXISTS (SELECT * FROM dbo.sysfulltextcatalogs WHERE name = N'Documents')
    BEGIN
        SELECT 'Creating new FT Catalogue'
        EXEC sp_fulltext_catalog N'Documents', N'create'
    END
    GO
    EXEC sp_fulltext_table N'[dbo].[Documents]', N'activate'
    GO
    /* check adobe filter installed
    EXEC sp_help_fulltext_system_components 'filter' -- pdf and doc filters show up, paths correct!!
    SELECT * FROM sys.fulltext_document_types
    */
    EXEC sp_fulltext_service 'restart_all_fdhosts' -- tried out of desperation - no luck!

AlwaysOn secondary log size

Posted: 17 Jun 2013 09:20 AM PDT

Hi, hoping someone with some experience can help me decide what to do with the log of an AlwaysOn secondary database.

We are backing up the primary database, including its log file, but the active read-only secondary, which is used only for reporting, has a log that keeps on growing. I can't set the secondary to simple recovery, so I can't shrink the log. I don't want to do log backups on the secondary as we are already backing up the primary. I noticed this issue when the log for the secondary filled the entire drive.

Is there a solution to this?
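
To my understanding, log backups taken on any availability replica mark the log as reusable on all replicas, so the primary's log backups should be allowing truncation on the secondary too; if the secondary's log still grows, something else is holding it (for example, redo lagging behind). A first diagnostic step, run on the secondary, with a hypothetical database name:

```sql
-- What is preventing log reuse on this replica?
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourAGDatabase';
```

If the wait reason is clear (e.g. LOG_BACKUP or AVAILABILITY_REPLICA), that narrows down whether the backup chain or the AG itself is the culprit.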

SQL2012 crashes

Posted: 17 Jun 2013 09:55 PM PDT

Hi, we have a new MS Server 2012 / MS SQL Server 2012 Enterprise Edition, running under VMware in a datacenter. Currently we are migrating our customers from the production 2008R2 server. We came across the following: the server crashed with this event:

[quote]
Faulting application name: sqlservr.exe, version: 2011.110.3128.0, time stamp: 0x50deadad
Faulting module name: myodbc5.dll, version: 5.1.12.0, time stamp: 0x51017fd2
Exception code: 0xc0000005
Fault offset: 0x0000000000027ad3
Faulting process id: 0x398
Faulting application start time: 0x01ce56239a757bee
Faulting application path: C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Binn\sqlservr.exe
Faulting module path: C:\Program Files\MySQL\Connector ODBC 5.1\myodbc5.dll
Report Id: cb0ad250-d802-11e2-93fc-005056971d36
Faulting package full name:
Faulting package-relative application ID:
[/quote]

We use the MyODBC5.dll to connect to a number of MySQL databases. We've been doing this for the past few years on our 2008R2 server and a crash never happened. We can even reproduce the crash by connecting to a MySQL database. Obvious question: what can we do to prevent SQL 2012 from crashing? Thanks, Raymond

error configuration of report server on SSMS

Posted: 17 Jun 2013 08:53 PM PDT

Hello all, I found some trouble when I want to set, in the report server configuration in SSMS:

ReportManagerEnabled to ----> TRUE
and
WebServiceAndHTTPAccessEnabled to ----> TRUE

I get this message: "this operation could not be completed (Winmgmt)". Any help please?

[T-SQL] Query to delete the records with top 3 marks from a student table

[T-SQL] Query to delete the records with top 3 marks from a student table


Query to delete the records with top 3 marks from a student table

Posted: 17 Jun 2013 02:20 PM PDT

Hi, please help me to find a query to delete the records with the top 3 marks from a student table.
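
A common sketch in T-SQL, assuming a hypothetical table Student with a Marks column: rank the rows in an updatable CTE and delete from it, which removes the rows from the underlying table.

```sql
-- Assumed schema: Student(StudentID, Marks)
;WITH Ranked AS (
    SELECT Marks,
           ROW_NUMBER() OVER (ORDER BY Marks DESC) AS rn
    FROM Student
)
DELETE FROM Ranked
WHERE rn <= 3;
```

If ties should all be deleted (everyone sharing the top 3 distinct marks), swap ROW_NUMBER for DENSE_RANK.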

Searching on Binary data type columns

Posted: 18 Jun 2013 01:08 AM PDT

Hey guys, we have this system that is not built for reporting, but we are having to write reports for it anyway...

Problem: all of the following tables are related, but the IDs are stored in 4 different ways:

Table1 - stored as a varchar. Example: D19377322B06194DB6A9B34667D506A6
Table2 - stored as a binary. Example: 0xD19377322B06194DB6A9B34667D506A6
Table3 - stored as an nvarchar. Example: {E867D837-EB0C-4EDB-B56E-D7083C181E52}
Table4 - stored as a uniqueidentifier. Example: E867D837-EB0C-4EDB-B56E-D7083C181E52

I have to get to Table4 to retrieve a column, starting from Table1. I can do this manually, and here is how:

1.) Get the ID from Table1 - D19377322B06194DB6A9B34667D506A6
2.) Manually add 0x to that ID - 0xD19377322B06194DB6A9B34667D506A6
3.) Using that new ID, hard-coded into the WHERE clause, I select a separate id from that table: {E867D837-EB0C-4EDB-B56E-D7083C181E52}
4.) Take the brackets off of that ID - E867D837-EB0C-4EDB-B56E-D7083C181E52
5.) Use that new ID to retrieve the column I need

So doing this manually, no problem! The problem is there are thousands of these and I can't do it manually one by one! I get stuck on step 3, in the WHERE clause: when specifying a binary, you don't use quotes, but I can't figure out how to get around that. I have tried setting the value in a variable and in the WHERE clause I just say WHERE ID = @num, but that doesn't work!

So my question is: how can I take a varchar value and make it binary so I can search on it? Any confusion as to what I am asking, let me know! Thanks
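
In T-SQL (SQL Server 2008 and later), CONVERT with style 2 parses a hex string without a 0x prefix straight into varbinary, so step 2 can happen in code instead of by hand. Table and column names below are placeholders:

```sql
DECLARE @id varchar(32) = 'D19377322B06194DB6A9B34667D506A6';

-- Style 2 treats the string as raw hex (no 0x prefix needed):
DECLARE @bin varbinary(16) = CONVERT(varbinary(16), @id, 2);

SELECT t2.OtherId            -- hypothetical column
FROM Table2 AS t2
WHERE t2.ID = @bin;
```

With that conversion inline, the whole chain can be expressed as joins rather than one lookup at a time.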

smart first name matching in TSQL

Posted: 17 Jun 2013 08:08 PM PDT

Hello experts, I'm looking for a script (actually for data) for smart first name conversion. I.e., if a user enters William or Billy then the script should return Bill; if a user enters Alexander, Aleks or Sasha then the script should return Alex, etc. Actually, I can write the SQL to do it, but I can't find a list of all possible first names. Does anybody have one? Much thanks, Alex.

combining multiple rows into one

Posted: 17 Jun 2013 09:16 PM PDT

Hello all, I have a table that stores pictures for my users, but now I need to get multiple pictures into one row. Hereunder you'll find my start code:

[code="sql"]
CREATE TABLE pictures(userid int, picture varchar(100))
INSERT INTO pictures (userid, picture) VALUES (1, 'picture1.gif')
INSERT INTO pictures (userid, picture) VALUES (1, 'picture2.gif')
INSERT INTO pictures (userid, picture) VALUES (1, 'picture3.gif')
INSERT INTO pictures (userid, picture) VALUES (1, 'picture4.gif')
INSERT INTO pictures (userid, picture) VALUES (1, 'picture5.gif')
SELECT userid, picture FROM pictures
DROP TABLE pictures
[/code]

Of course it is simple to get five records (always five) with pictures as mentioned above, but now I need a result that shows me one record like this:

userid, pic1, pic2, pic3, pic4, pic5
1, picture1, picture2, picture3, picture4, picture5

I know this is possible, but unfortunately I'm still struggling. Can someone help me please? Thanks a lot! Mike
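
One way to do this against the table above (a sketch that relies on the count being fixed at five): number each user's pictures with ROW_NUMBER, then fold them into columns with MAX(CASE ...):

```sql
SELECT userid,
       MAX(CASE WHEN rn = 1 THEN picture END) AS pic1,
       MAX(CASE WHEN rn = 2 THEN picture END) AS pic2,
       MAX(CASE WHEN rn = 3 THEN picture END) AS pic3,
       MAX(CASE WHEN rn = 4 THEN picture END) AS pic4,
       MAX(CASE WHEN rn = 5 THEN picture END) AS pic5
FROM (SELECT userid, picture,
             ROW_NUMBER() OVER (PARTITION BY userid
                                ORDER BY picture) AS rn
      FROM pictures) AS p
GROUP BY userid;
```

The ORDER BY inside ROW_NUMBER decides which picture lands in which column; swap it for an insert timestamp if the table has one.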

If/Then in table valued functions

Posted: 17 Jun 2013 06:50 AM PDT

Hello All, can I use conditional logic in a function that returns a table?

    CREATE FUNCTION dbo.TestFunction (@param1 int)
    RETURNS TABLE
    AS RETURN
    SELECT
        1 as column1,
        2 as column2

This works:

    SELECT * FROM dbo.TestFunction(1)

Really I would like to do something like the below, but I am getting the error "Incorrect syntax near the keyword 'IF'.":

    IF @param1 = 1
    BEGIN
        SELECT 1 as column1, 2 as column2
    END
    ELSE
    BEGIN
        SELECT 3 as column1, 4 as column2
    END
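
Inline table-valued functions are limited to a single SELECT, which is why IF is rejected. One workaround is a multi-statement TVF, sketched below; the other is folding the condition into CASE expressions inside the single SELECT, which usually optimizes better.

```sql
CREATE FUNCTION dbo.TestFunction (@param1 int)
RETURNS @result TABLE (column1 int, column2 int)
AS
BEGIN
    IF @param1 = 1
    BEGIN
        INSERT INTO @result VALUES (1, 2);
    END
    ELSE
    BEGIN
        INSERT INTO @result VALUES (3, 4);
    END
    RETURN;
END
```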

Function return request

Posted: 17 Jun 2013 09:16 AM PDT

Hello, I hopefully can explain myself. First, the code :)

[code="sql"]
-- Declare the return variable here
DECLARE @sql int,
        @tablename varchar(max) = 'POHeader',
        @Results int

set @sql = 'select max(PROGRESS_RECID) from Epicor_SGI.dbo.' + @tablename

-- Add the T-SQL statements to compute the return value here
exec @sql
set @Results = @sql
print @results

-- Return the result of the function
RETURN @Results
[/code]

So what I would like is, from the @sql which runs a SQL script held in a varchar, to return the result of the MAX into an int, so I can return it from a function and use it in another database for auditing purposes... sadly it's not converting... How can I accomplish this? Again, sorry if it doesn't make sense; not sure if you need a sample database, let me know, as I can easily just put out a simple 2-column table. Thanks in advance.
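
A sketch of the usual pattern: declare the SQL string as nvarchar rather than int, and capture the scalar through sp_executesql's OUTPUT parameter. One caveat: dynamic SQL is not allowed inside a function, so this would have to live in a stored procedure instead.

```sql
DECLARE @tablename sysname = N'POHeader',
        @Results int;

DECLARE @sql nvarchar(max) =
    N'SELECT @r = MAX(PROGRESS_RECID) FROM Epicor_SGI.dbo.'
    + QUOTENAME(@tablename);

-- Run the string and pull the scalar back out:
EXEC sys.sp_executesql @sql,
                       N'@r int OUTPUT',
                       @r = @Results OUTPUT;

PRINT @Results;
```

QUOTENAME also guards the table name against injection, which matters once @tablename comes from outside.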

options for a column

Posted: 17 Jun 2013 03:56 AM PDT

We have a view that uses a concatenated string for street (prefix + streetname + suffix). Now, since we are using Entity Framework on the front end, we would like to use it as a table. We found it may be better to add an extra computed column to the table. Another option is an insert/update trigger that populates the street column by concatenating the other columns. Which is the better option? Thanks
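
A sketch of the computed-column option, with assumed table and column names; PERSISTED stores the value on disk so Entity Framework can map it like an ordinary read-only column, with no trigger to maintain:

```sql
-- Assumed table and column names:
ALTER TABLE dbo.Address
ADD FullStreet AS (prefix + ' ' + streetname + ' ' + suffix) PERSISTED;
```

One design note: a computed column can never drift out of sync with its source columns, which is the main argument for it over a trigger; the trade-off is that the application cannot write to it directly.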

WHERE NOT EXISTS() causes query to hang

Posted: 17 Jun 2013 02:41 AM PDT

Hi Friends, I have an unusual problem that I've been unable to find any info on, so I'm hoping someone can give me a clue. :crazy:

I have VBA code that builds dynamic queries that are sent in pass-through queries to SQL Server. This has been working fine. Now, a client reported that this process hangs. On stepping through the code, what I found is that each subquery with a NOT EXISTS() is causing the query to hang. When I rewrite the query to use a LEFT OUTER JOIN, the query runs. I believe something on the server has changed since these queries ran before.

I don't want to rewrite all of my SQL just for the sake of this one installation. Is there some setting on SQL Server, or an update they may have run, that could cause EXISTS or subqueries to not work? Below is only one example of how I rewrote a query to make it run. Then the process went ahead until it hit another NOT EXISTS subquery.

Here is one query with NOT EXISTS() that hangs:

-- Using NOT EXISTS()
[font="Courier New"]
INSERT INTO tForecast ( ExportDate, Dept, Class, Item, Vendor, WeekEndDate, DPCI, BaseSales, EventSales, TotalSales, BaseOrder, EventOrder, TotalOrder, EventDescrip, CollabSales, CollabOrders, EventType, UPC, ReasonCode, FSI, PromotionalDisplay, PicturedItem, Lift, Shipment_Import, Shipment_Domestic, Orders_Total, Orders_Import )
SELECT X.ExportDate, X.Dept, X.Class, X.Item, X.Vendor, X.WeekEndDate, X.DPCI, X.BaseSales, X.EventSales, X.TotalSales, X.BaseOrder, X.EventOrder, X.TotalOrder, X.EventDescrip, X.CollabSales, X.CollabOrders, X.EventType, X.UPC, X.ReasonCode, X.F_S_I, X.Promotional_Display, X.Pictured_Item, X.Lift, X.Shipment_Import, X.Shipment_Domestic, X.Orders_Total, X.Orders_Import
FROM (
    SELECT '5/25/2013' AS ExportDate, tForecast.Dept, tForecast.Class, tForecast.Item, tForecast.Vendor, tForecast.WeekEndDate, tForecast.DPCI, tForecast.BaseSales, tForecast.EventSales, tForecast.TotalSales, tForecast.BaseOrder, tForecast.EventOrder, tForecast.TotalOrder, tForecast.EventDescrip, tForecast.CollabSales, tForecast.CollabOrders, tForecast.EventType, tForecast.UPC, tForecast.ReasonCode, IsNull([FSI],0) AS F_S_I, IsNull([PromotionalDisplay],0) AS Promotional_Display, IsNull([PicturedItem],0) AS Pictured_Item, tForecast.Lift, tForecast.Shipment_Import, tForecast.Shipment_Domestic, tForecast.Orders_Total, tForecast.Orders_Import
    FROM tForecast
    INNER JOIN (SELECT DISTINCT DeptID, ClassID, VendorID FROM AVP_FORECAST_HOLDING_usersName) AS XQ2
      ON (tForecast.Class = XQ2.ClassID) AND (tForecast.Dept = XQ2.DeptID) AND (tForecast.Vendor = XQ2.VendorID)
    WHERE (((tForecast.WeekEndDate) > '5/18/2013') AND ((tForecast.ExportDate) = '5/18/2013'))
) X
WHERE NOT EXISTS (SELECT DPCI FROM tForecast WHERE X.DPCI = tForecast.DPCI AND tForecast.ExportDate = '5/25/2013')
[/font]

-- Here is the same query rewritten to use LEFT OUTER JOIN that runs:
[font="Courier New"]
INSERT INTO tForecast ( ExportDate, Dept, Class, Item, Vendor, WeekEndDate, DPCI, BaseSales, EventSales, TotalSales, BaseOrder, EventOrder, TotalOrder, EventDescrip, CollabSales, CollabOrders, EventType, UPC, ReasonCode, FSI, PromotionalDisplay, PicturedItem, Lift, Shipment_Import, Shipment_Domestic, Orders_Total, Orders_Import )
SELECT X.ExportDate, X.Dept, X.Class, X.Item, X.Vendor, X.WeekEndDate, X.DPCI, X.BaseSales, X.EventSales, X.TotalSales, X.BaseOrder, X.EventOrder, X.TotalOrder, X.EventDescrip, X.CollabSales, X.CollabOrders, X.EventType, X.UPC, X.ReasonCode, X.F_S_I, X.Promotional_Display, X.Pictured_Item, X.Lift, X.Shipment_Import, X.Shipment_Domestic, X.Orders_Total, X.Orders_Import
FROM (
    SELECT '5/25/2013' AS ExportDate, tForecast.Dept, tForecast.Class, tForecast.Item, tForecast.Vendor, tForecast.WeekEndDate, tForecast.DPCI, tForecast.BaseSales, tForecast.EventSales, tForecast.TotalSales, tForecast.BaseOrder, tForecast.EventOrder, tForecast.TotalOrder, tForecast.EventDescrip, tForecast.CollabSales, tForecast.CollabOrders, tForecast.EventType, tForecast.UPC, tForecast.ReasonCode, IsNull([FSI],0) AS F_S_I, IsNull([PromotionalDisplay],0) AS Promotional_Display, IsNull([PicturedItem],0) AS Pictured_Item, tForecast.Lift, tForecast.Shipment_Import, tForecast.Shipment_Domestic, tForecast.Orders_Total, tForecast.Orders_Import
    FROM tForecast
    INNER JOIN (SELECT DISTINCT DeptID, ClassID, VendorID FROM AVP_FORECAST_HOLDING_usersName) AS XQ2
      ON (tForecast.Class = XQ2.ClassID) AND (tForecast.Dept = XQ2.DeptID) AND (tForecast.Vendor = XQ2.VendorID)
    WHERE (((tForecast.WeekEndDate) > '5/18/2013') AND ((tForecast.ExportDate) = '5/18/2013'))
) X
LEFT OUTER JOIN (SELECT DISTINCT DPCI, ExportDate FROM tForecast WHERE tForecast.ExportDate = '5/25/2013') Y
  ON X.DPCI = Y.DPCI AND X.ExportDate = Y.ExportDate
WHERE Y.DPCI IS NULL
[/font]

Arithmetic overflow error

Posted: 17 Jun 2013 03:23 AM PDT

I have a select statement, but get an error: "Arithmetic overflow error converting float to data type numeric."

    SELECT CAST(SUM(C.PeriodsAbsent) AS numeric(4,1)) PeriodsAbsent
    FROM calendar c

I see the distinct PeriodsAbsent values are 0, 1, 2, 3, ... up to 14. How can I fix this? Thanks
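
A likely explanation: numeric(4,1) tops out at 999.9, so even though each PeriodsAbsent value is small, the SUM across all rows can exceed the target precision. Widening the precision is the usual fix:

```sql
-- numeric(10,1) allows totals up to 999,999,999.9:
SELECT CAST(SUM(C.PeriodsAbsent) AS numeric(10,1)) AS PeriodsAbsent
FROM calendar AS C;
```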

data type change

Posted: 17 Jun 2013 04:57 AM PDT

What are the things to be considered when a data type is changed in a table? How do you do the impact analysis?
