Thursday, March 7, 2013

[how to] SQL Max() revision number



SQL Max() revision number

Posted: 07 Mar 2013 06:49 PM PST

This SQL returns a table with everything unique, which is what I want. However, if there are two (or more) event numbers, I need it to return the line with the highest rev_num for that eventnum.

I have read about the max() function, but I'm not having any success with it.

select distinct a.eventnum, a.num_cust, a.out_minutes, d.completion_remarks, d.weather_code, rev_num
from mv_outage_duration a
    inner join mv_outhist_transformers b
        on a.restore_operation_id = b.phase_restored_operation_id
    inner join mv_aeven d
        on d.restore_operation_id = b.PHASE_RESTORED_OPERATION_ID
    inner join mv_we_outage_premise e
        on e.xfmr = b.xfmr_name
where e.we_premise = '995184'

Here's what the output looks like:

(screenshot of query output)
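A common fix for this kind of "latest revision per key" requirement is a window function (a sketch, assuming the backend supports ROW_NUMBER(); the join logic is copied from the query above):

SELECT eventnum, num_cust, out_minutes, completion_remarks, weather_code, rev_num
FROM (
    SELECT a.eventnum, a.num_cust, a.out_minutes, d.completion_remarks, d.weather_code, rev_num,
           ROW_NUMBER() OVER (PARTITION BY a.eventnum ORDER BY rev_num DESC) AS rn
    FROM mv_outage_duration a
        INNER JOIN mv_outhist_transformers b
            ON a.restore_operation_id = b.phase_restored_operation_id
        INNER JOIN mv_aeven d
            ON d.restore_operation_id = b.phase_restored_operation_id
        INNER JOIN mv_we_outage_premise e
            ON e.xfmr = b.xfmr_name
    WHERE e.we_premise = '995184'
) t
WHERE rn = 1;  -- rn = 1 is the row with the highest rev_num per eventnum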

Apply in a CSV file to multiple databases

Posted: 07 Mar 2013 03:08 PM PST

Recently, I have been working with a client who has asked us to provide their dataset for use in a mobile application we are making for them. The requirements for this project include the ability to update the local dataset on the mobile devices with any changes our client delivers to us. The local databases are stored in SQLite, per the mobile platform standard.

The Problems:

  1. The Dataset: Our clients can only deliver to us an entire dump of their database in CSV format. We must be able to apply only the changes between the new CSV and our existing databases without causing duplicate entries or stale data.
  2. The Delivery: Only the changes to the dataset may be sent to the mobile databases. Resending of the entire database is too expensive.

Approach:

Currently, the solution we are looking into involves using one of our backend MySQL servers to maintain a copy of the database stored in the mobile SQLite databases. When a new dataset arrives, we were planning to build a new database with the new data and then use the Linux diff and patch commands to update the server-side data. However, with this approach, knowing what or how to update in the mobile phones' local databases gives us some issues. I am currently unaware of any way to use the patch or diff output from MySQL in either the Android or iOS environments.

Ideal Approach:

Preferably, when we receive the entirely new dataset, we would like to be able to detect just the changes in the new CSV against the current state of the database. Then, with this list of changes, we would generate INSERT/UPDATE/DELETE statements that could apply each of the changes to both the MySQL and SQLite databases. We would store a list of those INSERT/UPDATE/DELETE statements along with the date updated, so that upon request each of the phones can be updated to the newest data.

However, I am also currently unaware of a way to detect changes from a new CSV to an existing database which makes constructing INSERT/UPDATE/DELETE statements difficult.
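One way to detect the changes in SQL itself (a minimal sketch in MySQL syntax, assuming the CSV is first bulk-loaded into a staging table and that both tables share a primary key id; all table and column names here are hypothetical):

-- New rows: in staging but not in live.
INSERT INTO live (id, col1, col2)
SELECT s.id, s.col1, s.col2
FROM staging s
LEFT JOIN live l ON l.id = s.id
WHERE l.id IS NULL;

-- Changed rows: same key, different payload (<=> is NULL-safe equals).
UPDATE live l
JOIN staging s ON s.id = l.id
SET l.col1 = s.col1, l.col2 = s.col2
WHERE NOT (l.col1 <=> s.col1 AND l.col2 <=> s.col2);

-- Deleted rows: in live but missing from staging.
DELETE l FROM live l
LEFT JOIN staging s ON s.id = l.id
WHERE s.id IS NULL;

Logging each generated statement with a timestamp would give the replay list the phones request.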

The Questions:

  1. What is the best way to get the changes between the CSV file and our current database implementation in a portable and meaningful format?
  2. What is the preferred method for cascading such changes from our server side database to the phone's local databases?

I am currently seeking any advice for either improvements to our design or proposed implementation. I feel like I can't be the first person to try and reflect changes to a CSV in a database or synchronize changes in a server side database to a local offline database.

Thank you everyone in advance for your help. I look forward to hearing alternative approaches to this problem.

Database Market Share [closed]

Posted: 07 Mar 2013 02:44 PM PST

Does anyone have any knowledge or references as to who the latest database market share players are?

Basically, I'm looking to see how the following stack up in terms of support in the industry: Oracle, IBM DB2, Microsoft SQL Server, MySQL, PostgreSQL, and Sybase ASE.

I've seen data points from Gartner studies come out but was unable to view them on their site. I did find an old data point from 2008. Is there a better source for this information elsewhere?

Is there a way that I can export a list of all table triggers on a server?

Posted: 07 Mar 2013 01:50 PM PST

I have two SQL Server instances that I manage. One is a SQL Server 2000 instance, and the other is 2005.

Somewhere on these servers, I recall setting up a couple of table triggers that were executed under certain conditions. I need to look up these triggers as a point of reference for a new project; however, for the life of me, I cannot find them.

Is there some crafty SQL statement that I can execute that can iterate over all my databases and, subsequently, iterate over each table and output any triggers associated with the tables?
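One option is the undocumented (but long-standing) sp_MSforeachdb procedure; the sysobjects catalog table works on both SQL Server 2000 and 2005, so a sketch like this should run on both instances:

EXEC sp_MSforeachdb '
    USE [?];
    SELECT ''?''                     AS database_name,
           OBJECT_NAME(o.parent_obj) AS table_name,
           o.name                    AS trigger_name
    FROM sysobjects o
    WHERE o.type = ''TR'';
';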

Which database is designed for storing and incrementing values?

Posted: 07 Mar 2013 12:02 PM PST

We're trying to build a stats server that stores visits for each document on a section for a specific date, so we need a table that has the following parameters:

  • documentID
  • sectionID
  • date
  • visits

The plan is then to increment the visits field.

Now, we tried to do this using MongoDB (albeit a very simple instance of it) and it turns out this is too write intensive for it. Is there another database system that's better suited for this task? Or should we just start growing the MongoDB infrastructure that we're using?
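For what it's worth, on a relational store the increment itself is a single upsert (a sketch in MySQL syntax, assuming a unique key over (documentID, sectionID, date); the literal values stand in for real request parameters):

INSERT INTO visits (documentID, sectionID, date, visits)
VALUES (42, 7, '2013-03-07', 1)
ON DUPLICATE KEY UPDATE visits = visits + 1;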

How can I calculate time spent at work in a single column?

Posted: 07 Mar 2013 06:42 PM PST

I would like to calculate the amount of time spent working from rows of data that contain a single date/time field.

The first occurrence on a given day can be considered the start time, with the next occurrence being the end time, alternating back and forth on any given day. So if I start at 9:00am and go on lunch at 12:00pm, return from lunch at 12:30pm, then leave for the day at 5:00pm, my data would look like:

2013-03-07 09:00:00
2013-03-07 12:00:00
2013-03-07 12:30:00
2013-03-07 17:00:00

Here is a further sample (in my locale we use . as the date separator):

22. 2. 2013 9:09:00
22. 2. 2013 13:28:09
25. 2. 2013 9:43:16
25. 2. 2013 15:15:19
25. 2. 2013 19:30:00
25. 2. 2013 21:30:00
26. 2. 2013 8:27:54
26. 2. 2013 16:31:51
27. 2. 2013 8:52:59
27. 2. 2013 15:08:35
28. 2. 2013 8:37:54
28. 2. 2013 16:37:08
28. 2. 2013 18:30:00
28. 2. 2013 20:30:00

I would like to know the total time spent at work for any given day, and would also like to be able to tally the time spent over, for instance, any given week or month.
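One way to pair the rows (a sketch in SQL Server 2008 syntax, since the DBMS isn't stated; assumes a table punches(ts datetime) holding the raw timestamps, with odd-numbered rows per day being clock-ins):

WITH ordered AS (
    SELECT ts,
           ROW_NUMBER() OVER (PARTITION BY CAST(ts AS DATE) ORDER BY ts) AS rn
    FROM punches
),
paired AS (
    SELECT i.ts AS time_in, o.ts AS time_out
    FROM ordered i
    JOIN ordered o
        ON CAST(i.ts AS DATE) = CAST(o.ts AS DATE)
       AND o.rn = i.rn + 1
    WHERE i.rn % 2 = 1   -- odd rows are "in", even rows are "out"
)
SELECT CAST(time_in AS DATE) AS work_day,
       SUM(DATEDIFF(MINUTE, time_in, time_out)) AS minutes_worked
FROM paired
GROUP BY CAST(time_in AS DATE);

Weekly or monthly tallies are then a matter of grouping the daily totals on DATEPART.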

Logging only mySQL logins on heavily used system

Posted: 07 Mar 2013 06:23 PM PST

I have an audit requirement where they want all database access (logins) logged. We have a very large distributed system of hundreds of MySQL servers (shards, slaves, slaves of slaves) that serve a thousand queries per second. As a result, I can't just turn on general query logging as recommended in Audit logins on MySQL database; it would be multiple GB/day per instance and kill our disk IO.

I've looked and see two options for discretionary logging with filtering: an Oracle plugin and a McAfee plugin. The Oracle one requires a paid MySQL Enterprise subscription, and with hundreds of DB servers I'm not really prepared to start doing that today. The McAfee one is brand new, and I'm worried about stability on a large-scale system like ours.

Any other good solutions to log only logins to our MySQL databases? It doesn't have to be integral to MySQL; we'd consider UNIX-level port-sniffing shenanigans if they'd scale.
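One low-cost possibility (an assumption, not something benchmarked at this scale): the init_connect variable runs a statement on every new connection, so it can write a login record. The big caveat is that init_connect does not fire for users with the SUPER privilege, so privileged logins would go unlogged:

CREATE TABLE audit.connection_log (
    connected_at  DATETIME NOT NULL,
    user_host     VARCHAR(255) NOT NULL,
    connection_id INT UNSIGNED NOT NULL
);

SET GLOBAL init_connect =
    'INSERT INTO audit.connection_log
     VALUES (NOW(), CURRENT_USER(), CONNECTION_ID());';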

Defining the Relationship - Some Basic Help [closed]

Posted: 07 Mar 2013 09:58 AM PST

I thought I had this figured out but I am starting to doubt myself on something that should be simple. I have a table of Group, and a table for Activity. Each Group has one Activity. Different groups can have the same Activity.

Right now I have the foreign key in the Group table connected to the primary key in the Activity table. I started to doubt myself when I started entering data into the Access database table Group.

When I got to the foreign key field to enter the data for Activity ID in the Group table, I did not get the (+) symbol to allow a drop down choice.

Should the table Group have a foreign key for Activity table (the way I have it now), or should the Activity table have a foreign key for Group ID?
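For reference, the design you have now is the conventional one: many groups point at one activity, so the foreign key belongs in Group. A minimal sketch in generic SQL (names are assumptions):

CREATE TABLE Activity (
    ActivityID   INTEGER PRIMARY KEY,
    ActivityName TEXT
);

CREATE TABLE [Group] (    -- Group is a reserved word, hence the brackets
    GroupID    INTEGER PRIMARY KEY,
    ActivityID INTEGER REFERENCES Activity (ActivityID)  -- many groups, one activity
);

The missing drop-down in Access usually just means no lookup has been defined for the field, not that the relationship itself is wrong.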

I am having a rough week, with stomach flu, and I am just not on my game.

Thank you.

Does this query make sense?

Posted: 07 Mar 2013 09:16 AM PST

I recently migrated a database from Sybase ASE to Microsoft SQL 2008r2. One category of queries that was broken in transit is:

SELECT person_id
FROM leaveperiods
WHERE group_id = 146
GROUP BY person_id
HAVING start_date = max(start_date)

According to Sybase, this is a valid query. But Microsoft SQL is stricter, so I get this error:

Column 'leaveperiods.start_date' is invalid in the HAVING clause
because it is not contained in either an aggregate function or the GROUP BY clause.

While looking at this query to try and correct it, I realized that it does not make sense to me. Isn't this exactly the same as the below?

SELECT DISTINCT person_id
FROM leaveperiods
WHERE group_id = 146

Or maybe I should write like this?

SELECT DISTINCT person_id
FROM leaveperiods
WHERE group_id = 146
  AND start_date IS NOT NULL

There are several queries like this in one application and I need to fix them without breaking anything else. Please help.
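For reference, Sybase's loose HAVING effectively keeps the rows whose start_date equals the group maximum. A version of that semantic that satisfies SQL Server's stricter rules might look like this sketch (whether it, or the plain DISTINCT, is the right fix depends on what each query was meant to do):

SELECT person_id
FROM leaveperiods lp
WHERE group_id = 146
  AND start_date = (SELECT MAX(start_date)
                    FROM leaveperiods
                    WHERE group_id = 146
                      AND person_id = lp.person_id);

Note one subtle difference from DISTINCT: a person whose start_date values are all NULL is dropped by this form, which is also what your third query does.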

Integer comparison in MySQL. does it cause lots of overhead?

Posted: 07 Mar 2013 09:46 AM PST

I have an 8-digit number in my DB. I want to run a select on the table in MySQL (InnoDB) and get the list of all numbers that are equal to or smaller than my number. Then I sort them from larger to smaller.

I store the number as an Integer. Let's say my number is: 12040670

and after the select I get this list:

10340070
12005608
... etc

Then I sort them from large to small.

If my table contains 1,000,000 records and I run such a comparison, do you think it causes a lot of load on the DB server? How about indexing the column? Does indexing help lower the overhead when it comes to integer comparison?
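For scale, a sketch of what helps (assuming a table numbers(num INT); names are hypothetical):

ALTER TABLE numbers ADD INDEX idx_num (num);

EXPLAIN
SELECT num
FROM numbers
WHERE num <= 12040670
ORDER BY num DESC;

With the index, this is a range scan that can also satisfy the descending sort by reading the index backwards, so no filesort is needed; without it, the server scans and sorts all 1,000,000 rows. The integer comparison itself is cheap either way.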

Logging and saving user activity

Posted: 07 Mar 2013 02:44 PM PST

Database: Oracle 11gR2. OS: Windows 2008 Server R2.

I'm not a trained DBA; I just have to run things on an Oracle database temporarily, which is used to perform OLAP processes.

I need to automate user activity logging (about 30 users) and save this data (as a text file) if possible.

What I need to log:

SQL_TEXT,
PARSING_SCHEMA_NAME,
FIRST_LOAD_TIME,
DISK_READS,
ROWS_PROCESSED,
ELAPSED_TIME,
SERVICE,
MODULE,
IP_ADDRESS

It would be great to automatically save these logs on an hourly basis.

I've done some research on RMAN, but it seems complicated for me at this stage.
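Most of those columns live in the V$SQL view, so the core query is plain SQL (a sketch; it assumes SELECT privilege on the V$ views, and note that IP_ADDRESS is not in V$SQL - it would have to come from V$SESSION or a logon trigger):

SELECT sql_text,
       parsing_schema_name,
       first_load_time,
       disk_reads,
       rows_processed,
       elapsed_time,
       module
FROM   v$sql
WHERE  parsing_schema_name NOT IN ('SYS', 'SYSTEM');

Scheduling an hourly dump of this to a text file is a job for DBMS_SCHEDULER (or a Windows scheduled task running sqlplus with SPOOL), not RMAN - RMAN is for backups.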

Thanks.

Linked server error not caught by TRY-CATCH

Posted: 07 Mar 2013 09:04 AM PST

I am setting up a job to loop through a list of linked servers and execute a specific query against each one. I am trying to execute the query inside a TRY-CATCH block so if there's a problem with one particular server I can log it but then carry on with the other servers.

The query I'm executing inside the loop looks something like this:

BEGIN TRY
    SELECT *
    FROM OPENQUERY([server1], 'SELECT 1 AS c;');
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER(), ERROR_MESSAGE();
END CATCH;

PRINT 'We got past the Catch block!';

If there is a problem connecting to the server the code just fails immediately and doesn't transfer to the CATCH block. If the server connects but there is an error in the actual query, e.g. divide by zero, then this is caught as expected by the CATCH block.

For example, I created a linked server to a name that I know doesn't exist. When executing the above I just get:

OLE DB provider "SQLNCLI" for linked server "nonserver" returned message
    "Login timeout expired".
OLE DB provider "SQLNCLI" for linked server "nonserver" returned message
    "An error has occurred while establishing a connection to the server.
    When connecting to SQL Server 2005, this failure may be caused by the
    fact that under the default settings SQL Server does not allow remote
    connections.".
Msg 53, Level 16, State 1, Line 0
Named Pipes Provider: Could not open a connection to SQL Server [53].

I've read BOL on TRY-CATCH and know that it won't catch level 20+ errors that break the connection, but that doesn't seem to be the case here (this is only level 16).

Does anyone know why these errors aren't caught correctly?
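The usual workaround (a commonly cited one; verify on your build) is to push the remote query into a child batch with dynamic SQL. The connection failure then happens when the child batch compiles, which the outer CATCH block can see:

BEGIN TRY
    EXEC sys.sp_executesql
        N'SELECT * FROM OPENQUERY([server1], ''SELECT 1 AS c;'');';
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER(), ERROR_MESSAGE();
END CATCH;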

Key Lookup and Full-text index

Posted: 07 Mar 2013 03:58 PM PST

I have the following query:

SELECT T2.Title
FROM TitleTable T1
    INNER JOIN TitleTable T2 ON T2.FKID1 = T1.FKID1
WHERE T1.FKID2 = @ID_PARAM1
    AND T2.FKID2 = @ID_PARAM2
    AND CONTAINS(T1.Title, '"' + @SINGLE_WORD_PARAM + '"')

Title is of type NVARCHAR(100). All the IDs are INT.

My Execution Plan is such that it is using the Key Lookup (Clustered) operation 80+% of the time:

  • Predicate: TitleTable.FKID2 AS T2.FKID2 = @ID_PARAM2
  • Object: TitleTable T2

Each of the mentioned columns is in its own non-clustered index, Title is additionally in the full-text index, and there's a clustered index on the primary key as well. Is there a way to include the full-text index in a covering index, or something to that effect?
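You can't fold the full-text index itself into a covering index, but the key lookup described above can often be removed by covering the residual predicate columns in the nonclustered index the join uses (a sketch; the right key order depends on the actual plan):

CREATE NONCLUSTERED INDEX IX_TitleTable_FKID1_FKID2
    ON TitleTable (FKID1, FKID2)
    INCLUDE (Title);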

Large time difference between two almost identical simple queries....why?

Posted: 07 Mar 2013 10:37 AM PST

I'm trying to understand why there is so much difference in execution time and CPU usage between two simple queries that differ only in a computation. The queries are the following:

SELECT m.Metadata, COUNT(*) AS Count
FROM TWG10MinData twg, Metadata m
WHERE twg.metadata = m.Metadata AND (m.Metadata % 1000) = 100
GROUP BY m.Metadata
ORDER BY m.Metadata

Some statistics for this query are: 67000 logical reads, 3000 ms CPU time, 800 ms time elapsed.

SELECT Metadata, COUNT(*) AS Count
FROM TWG10MinData twg
WHERE Metadata IN (1100,2100,3100,4100)
GROUP BY Metadata
ORDER BY Metadata

Some statistics for this query are: 20 logical reads, 0 ms CPU time, 0 ms time elapsed.

Table "twg" has a nonclustered index on "Metadata" field (its primary key fields don't appear on the query). Table "Metadata", has its field "Metadata" as primary key and therefore it has a clustered index on it. As you can see, the only difference is specifying the concrete values that, at the moment, result from this computation "(Metadata % 1000) = 100". Shouldn't SQL engine compute first the valid values for Metadata and then apply the filter? Is this so much time consuming to cause such a big difference in their performance numbers?

Thanks in advance!!

upgrade mysql 5.1 to 5.5 on Ubuntu 11.10

Posted: 07 Mar 2013 05:37 PM PST

I currently have MySQL Server 5.1 installed via apt-get on my production Ubuntu 11.10 server:

root@Ubuntu-1110-oneiric-64-minimal$ dpkg --get-selections | grep sql-server
mysql-server                    install
mysql-server-5.1                install
mysql-server-core-5.1           install

I would like to upgrade this to 5.6, but the mysql docs seem to suggest upgrading to 5.5 first, and from there to 5.6.

I've seen various lengthy guides describing how to upgrade from 5.1 to 5.5, but they all seem to describe upgrading by installing the tarball rather than using the package manager. Is there a simpler way to upgrade using the package manager if the current version was installed using apt-get?

Obviously I want my existing configuration and databases to be retained after the upgrade and I will be sure to backup my databases using mysqldump before performing the upgrade.
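For the record, the package-manager route is short if your release's repositories carry the 5.5 packages (Ubuntu 12.04's do; on 11.10 you may need an updated repository - an assumption to verify). A sketch, after the mysqldump backup mentioned above:

sudo apt-get update
sudo apt-get install mysql-server-5.5
sudo mysql_upgrade -u root -p

apt keeps /etc/mysql/my.cnf and the data directory in place, and mysql_upgrade fixes up the system tables.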

MySQL information_schema doesn't update

Posted: 07 Mar 2013 06:22 PM PST

I have a database, say abc, on a MySQL server. It has only one table, named test. test uses the InnoDB engine, and I've set innodb_file_per_table to true.

After I run the query delete from abc.test, I want to calculate the database size of abc. Here is the query I use:

SELECT
    table_schema "name",
    SUM(IF(engine = "MyISAM",
           data_length + index_length - data_free,
           data_length + index_length)) "size"
FROM information_schema.TABLES
WHERE table_schema LIKE "abc";

The strange thing is that the database size doesn't decrease at all, even though the data in test is gone.

I've done this kind of test many times, this strange behavior happens sometimes.

I'm using Percona MySQL Server 5.5.29-rel29.4.

Can anybody tell me what is wrong?

Update:

Actually, I use another thread to check the database size periodically.
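Two things are worth knowing here (both general InnoDB behavior, offered as a likely explanation rather than a diagnosis): DELETE frees pages inside the .ibd file but never shrinks the file itself, and the information_schema size figures are lazily updated estimates. A sketch of both fixes:

-- Rebuild the table; with innodb_file_per_table this recreates the
-- .ibd file at its compacted size.
OPTIMIZE TABLE abc.test;

-- Refresh the statistics the size query above reads.
ANALYZE TABLE abc.test;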

How do I add a database in IBM DB2

Posted: 07 Mar 2013 09:29 AM PST

I'm supposed to migrate a MySQL database into IBM DB2, but I thought I'd take baby steps first before I do something like that. So for now, I'd just like to add a database.

I'll admit that I have no experience at all with IBM DB2. Here's what I did so far:

  • I've installed the software, IBM Data Server Client Version 9.7 Fix Pack 7. Went without a hitch, although I must say, it appears simpler than either MySQL or MS SQL.
  • I've run Control Center under General Administration Tools. I had to run it as administrator for it to work.
  • I read somewhere that the flow is System -> Instance -> Database, so I created a system with the following details:

    • System Name: RNOLD-PC
    • Host Name: RNOLD-PC
    • Node Name: RNOLDNOD
    • OS: NT
    • Protocol: TCP/IP

    Hitting the OK button with those details produces no error. For reference, the Computer Name of the laptop I'm using is RNOLD-PC.

  • I created an Instance under that with the following details:

    • Instance Name: testInstance
    • Instance Node Name: RNOLDNOD
    • OS: NT
    • Protocol: TCP/IP
    • Host Name: RNOLD-PC
    • Service Name: RNOLD-PC
    • (Everything else is left blank)

    Clicking the Discover button near Instance Name gets me an SQL4404N error ("The DB2 Administration Server does not exist"), while clicking the Retrieve button near Port Number (which I left blank) gets me "The Service Name RNOLD-PC was not found." If I try to add a database under that Instance, it gives me the same error.

Can I have some help on this? Where do I go from here, and how do I create a database?

EDIT: I added a Port Number (12345), which then allowed me to create a database, albeit with an error:

[IBM][CLI Driver] SQL30081N A communication error has been detected. Communication protocol being used: "TCP/IP". Communication API being used: "SOCKETS".

EDIT: Additional information: using Windows 7 32-bit.
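For comparison, creating a database from the DB2 command line processor is a one-liner (a sketch; note it assumes a DB2 server edition is installed locally - a Data Server Client alone cannot host databases, which may be the underlying problem here):

db2 CREATE DATABASE TESTDB
db2 CONNECT TO TESTDB
db2 "SELECT * FROM SYSIBM.SYSDUMMY1"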

What is the maximum no of rows in clustered index on datetime column

Posted: 07 Mar 2013 03:55 PM PST

I would like to know the maximum number of rows a table can hold with a (non-unique) clustered index on a datetime column, in SQL Server 2008 R2.

Thanks,

Can I configure MySQL's typecasting to consider 0 != 'foo'?

Posted: 07 Mar 2013 05:40 PM PST

This article points out the following security problem with MySQL:

USERS
id | email            | password_reset_token
1  | one@example.com  | QTlXww)uV!Hdg3U6aGwKV2FPvAqzVgPx
2  | two@example.com  | CZor5t7WbX#LTeqiG3v@6f3@z#)BfK*n

Here we have nice, random tokens that a user must prove they have in order to reset their account. But the user manages to submit a reset token of 0, so we run this query:

SELECT * FROM `users` WHERE `email` = 'one@example.com' AND `password_reset_token` = 0  

MySQL converts the token VARCHAR to an INT in order to make the comparison. It considers a VARCHAR that does not start with a number to equal 0. Therefore, an attacker can match any non-numeric string and take over the account.

Even if the reset token starts with some numbers, the situation is not much better. MySQL ignores all characters beyond the initial numbers when making this comparison, so it considers 12blahblah3blah4 to be equal to 12, which makes guessing exponentially easier.

Can I configure MySQL not to do this kind of typecasting? If, for example, it cast INT to VARCHAR instead of vice versa, this attack would not work.
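The behavior is easy to demonstrate from any MySQL client (a sketch):

SELECT 'QTlXww)uV!Hdg3U6aGwKV2FPvAqzVgPx' = 0;  -- returns 1: string casts to 0
SELECT '12blahblah3blah4' = 12;                 -- returns 1: leading digits win
SELECT '0' = 'abc';                             -- returns 0: string comparison

As the last line shows, keeping both operands as strings avoids the problem, which is why binding the token as '0' rather than 0 is safe.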

Note

If the query is run with '0' instead of 0, this doesn't work. The article discusses this vulnerability in terms of Ruby on Rails' acceptance of XML, where a type=integer property convinced Rails to send an actual integer in the query.

That bug has been patched in Rails; it now converts all request parameters to strings, so it would never build a query with an integer. But I still think MySQL should be configurable to prevent this.

Select MaxDate where hh:mm:ss makes the difference

Posted: 07 Mar 2013 11:12 AM PST

How do I select the maximum date from a table when the hh:mm:ss part matters?

My query is the following; it returns the latest date, but if there are different times recorded for it, the date is returned twice. What I want is the absolute max date.

SELECT * FROM [MyDatabase].[dbo].[DogTags] t1
JOIN OPENQUERY(LinkedServer, 'SELECT [TimeStamp]
    ,[Name]
    ,[Owner]
    ,[Address]
    FROM PETS
    WHERE TimeStamp IN (SELECT MAX(TimeStamp) FROM [LinkedServer].[PETS]
    GROUP BY [Name])') t2
ON t1.TAGS collate DATABASE_DEFAULT = t2.Name collate DATABASE_DEFAULT

This query returns the following table:

TAGS|TimeStamp              | Name  |Owner | Address    |
ROLF|2010-03-03 16:59:01.190| ROLF  |OLSEN | Juniper St.|
ROLF|2010-03-03 19:40:01.182| ROLF  |OLSEN | Juniper St.|
BLAD|2012-05-01 14:35:11.182| BLAD  |JOHN  | Maple St.  |

The records I want is:

TAGS|TimeStamp              | Name  |Owner | Address    |
ROLF|2010-03-03 19:40:01.182| ROLF  |OLSEN | Juniper St.|
BLAD|2012-05-01 14:35:11.182| BLAD  |JOHN  | Maple St.  |

Does anyone know how to filter the records down to the max time?
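A sketch of the usual fix: rank the rows per Name and keep only the latest one (SQL Server 2005+ syntax; if PETS lives on the linked server, this would go inside the OPENQUERY text):

SELECT TimeStamp, Name, Owner, Address
FROM (
    SELECT TimeStamp, Name, Owner, Address,
           ROW_NUMBER() OVER (PARTITION BY Name ORDER BY TimeStamp DESC) AS rn
    FROM PETS
) t
WHERE rn = 1;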

Thanks for the help.

Change Data Capture and the __$update_mask binary

Posted: 07 Mar 2013 10:56 AM PST

We're using CDC to capture changes made to a production table. The changed rows are being exported out to a data warehouse (informatica). I know that the __$update_mask column stores what columns were updated in a varbinary form. I also know that I can use a variety of CDC functions to find out from that mask what those columns were.

My question is this: can anyone define for me the logic behind that mask, so that we can identify the changed columns over in the warehouse? Since we're processing outside of the server, we don't have easy access to those MSSQL CDC functions. I would rather just break down the mask myself in code. Performance of the CDC functions on the SQL end is problematic for this solution.

In short, I'd like to identify changed columns by hand from the __$update_mask field.
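As I understand the layout (verify against your version before relying on it): each captured column's column_ordinal in cdc.captured_columns maps to one bit of the mask, with ordinal 1 in the least significant bit of the last byte of the varbinary. That makes the by-hand test:

-- Hypothetical decode of ordinal @ord against mask @mask:
DECLARE @mask VARBINARY(128) = 0x0A, @ord INT = 2;
SELECT CASE WHEN SUBSTRING(@mask, DATALENGTH(@mask) - (@ord - 1) / 8, 1)
                 & POWER(2, (@ord - 1) % 8) > 0
            THEN 'changed' ELSE 'unchanged' END;

The same byte/bit arithmetic ports directly to the warehouse side once the mask bytes are exported.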

Update:

As an alternative, sending a human-readable list of changed columns over to the warehouse was also acceptable. We found this could be done with performance far greater than our original approach.

The CLR answer to this question below meets this alternative and includes details of interpreting the mask for future visitors. However the accepted answer using XML PATH is the fastest yet for the same final result.

How to succesfully run a batch file in an SQL Agent job?

Posted: 07 Mar 2013 03:28 PM PST

I have a SQL Agent job which generates a specific report as a PDF file, then copies the PDF to a network directory and deletes the PDF file from the source directory.

The SQL job consists of 2 steps:

  1. Generate the report
  2. Copy the report to the network location

For step 2 I made a bat file which handles the copying and removal of the PDF file.

The bat-file is as follows:

set source_dir=C:\Reports\Energie\Uur
set dest_dir=\\KWS2-WEB-SERVER\Share\Reports\Uur

C:\Windows\System32\Robocopy.exe %source_dir% %dest_dir% /MOV /Z

However, when I run the job, it hangs on the second step. The status just stays on "Executing".

This is the line which I stated in the step (location of the bat-file to execute):

cmd.exe /c "C:\Reports\rapport_uur_copy.bat"  

My job-settings are as follows:

Step 1

Type: Operating system (CmdExec)

On Success: Go to the next step

On Failure: Quit the job reporting failure

Step 2

Type: Operating system (CmdExec)

On Success: Quit the job reporting success

On Failure: Quit the job reporting failure

Some facts:

  • I have read/write permissions on the network directory
  • I run the steps as the Administrator-account (the logged in user, default)
  • Step 1 succeeds
  • I run Windows Server 2008 R2 Standard
  • I have SQL Server 2008 R2
  • When I run the script from the cmd manually, it works (under Administrator account).
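Two things worth checking, offered as assumptions based on those symptoms: the job step runs under the SQL Server Agent service account, not the interactive Administrator session that works manually, so that account needs rights on the share; and Robocopy returns exit codes 1-7 for successful copies, which CmdExec reads as failure. A sketch of the bat file with the exit code remapped:

set source_dir=C:\Reports\Energie\Uur
set dest_dir=\\KWS2-WEB-SERVER\Share\Reports\Uur

C:\Windows\System32\Robocopy.exe %source_dir% %dest_dir% /MOV
rem Robocopy: 0-7 = success variants, 8+ = real errors
if %ERRORLEVEL% LSS 8 exit /b 0
exit /b 1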

MySQL partitioned tables?

Posted: 07 Mar 2013 11:33 AM PST

I have a database that supports a web application with several large tables. I'm wondering if partitioned tables will help speed up certain queries. Each of these tables has a column called client_id. Data for each client_id is independent from every other client_id. In other words, web queries will always contain a where clause with a single client_id. I'm thinking this may be a good column on which to partition my large tables.

After reading up on partitioned tables, I'm still a little unsure as to how best to partition. For example, a typical table may have 50 million rows distributed more or less evenly across 35 client_ids. We add new client_ids periodically but in the short term the number of client_ids is relatively fixed.

I was thinking something along these lines:

CREATE TABLE foo (
    id INT NOT NULL PRIMARY KEY,
    ... more column defs here...
    client_id int
)
PARTITION BY KEY(client_id)
PARTITIONS 35;

My question: is this an optimal strategy for partitioning these types of tables? My tests indicate a considerable speedup over indexing on client_id, but can I do better with some other form of partitioning (i.e. hash or range)?
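An alternative worth testing is explicit LIST partitioning, which pins each client_id to a known partition and makes pruning trivial to verify with EXPLAIN PARTITIONS. A sketch - note that MySQL requires the partitioning column in every unique key, so the primary key has to become (id, client_id):

CREATE TABLE foo (
    id        INT NOT NULL,
    client_id INT NOT NULL,
    PRIMARY KEY (id, client_id)
)
PARTITION BY LIST (client_id) (
    PARTITION p1 VALUES IN (1),
    PARTITION p2 VALUES IN (2)
    -- one partition per client_id; new clients need ALTER TABLE ... ADD PARTITION
);

The same unique-key rule applies to the PARTITION BY KEY example above, so check that the PK there includes client_id as well.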

Best way to defrag/compact a database for archival purposes

Posted: 07 Mar 2013 01:10 PM PST

We've got an SQL Server instance that's used for email archiving (courtesy of a 3rd party archiving package). Every so often, the software is rolled over to a new empty database. We've done this quarterly in the past, but we're looking to do it monthly now. The amount of data being archived is about 15 - 20 GB per month, and the bulk of the data resides in only a handful of tables (usually 2 - 4).

Once we roll over to a new database, the old one becomes used on a strictly read-only basis. What I'd like to do is optimize it into a nice, tight data file, with all the tables/indexes contiguous and having a very high fill factor, and not much empty space at the end of the data file. Also, we're using Standard Edition on this server, with all the limitations that implies (otherwise I'd be using data compression already).

A few possibilities I can think of:

  1. REBUILD/REORGANIZE indexes, DBCC SHRINKFILE (Okay, this isn't a sensible option, since DBCC SHRINKFILE will fragment the piss out of anything it touches, but I'm including it for completeness.)
  2. Create a new database with auto-stats off. Script and recreate all tables from the source database. Use bcp to export/import the data into the new database, in cluster-key order. Script and recreate all indexes. Recalculate all statistics with full scan.
  3. Create a new database with auto-stats off. Script and recreate all tables from the source database. Use SSIS or T-SQL to transfer data to the new database. Script and recreate all indexes. Recalculate all statistics with full scan.

The final step in every case would be setting the database to read-only mode.

What other good/better options are there for doing this? My concern is moving the data over in such a way to preserve a high fill factor, and in a logically contiguous fashion.

Edit:

I should mention that about 75% of the data seems to be stored in image (LOB) columns.
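For the rebuild-based options, a sketch of the closing steps (object names are hypothetical). One caveat given the LOB-heavy data: rebuilds may leave LOB pages uncompacted - REORGANIZE WITH (LOB_COMPACTION = ON) is the operation that explicitly targets them, so it's worth testing here given the 75% figure:

ALTER INDEX ALL ON dbo.ArchivedMessages REBUILD WITH (FILLFACTOR = 100);
ALTER INDEX ALL ON dbo.ArchivedMessages REORGANIZE WITH (LOB_COMPACTION = ON);
ALTER DATABASE MailArchive SET READ_ONLY;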

Using MySQL EXPLAIN to analyze the performance of a function

Posted: 07 Mar 2013 02:41 PM PST

You can precede a SELECT statement in MySQL with EXPLAIN, but how would you explain a function? I have a function that does a lot of work and I would like to analyze the performance of the function as a whole.
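EXPLAIN only analyzes individual statements, so the queries inside the function have to be examined one by one. For a rough whole-call measurement, profiling works (a sketch; my_function is a hypothetical name):

SET profiling = 1;
SELECT my_function(42);
SHOW PROFILES;                -- total duration of the call
SHOW PROFILE FOR QUERY 1;     -- stage-level breakdown

Beyond that, extracting the function's individual SELECTs and running EXPLAIN on each with representative parameters is the reliable route.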

How to scale MySQL transaction wrapped queries?

Posted: 07 Mar 2013 10:33 AM PST

So I'm working on an app that wraps many of its SQL queries in a transaction, and I'm trying to think of some ways to scale the DB layer up...

Up until now we've been throwing hardware at the problem, slowly upgrading our Rackspace cloud servers...but we're reaching the upper echelons of their cloud server offerings.

I'm thinking the standard R/W split won't work for us because the transaction gets stuck to a single server eliminating the benefits of the R/W split. (Is that right?)

I've looked at dbShards and ScaleBase both of which look like interesting solutions to the problem, but as a cash strapped startup we'd prefer to use something OSS or devote some man hours creating a sharding strategy instead.

So I suppose the question boils down to, are Galera/MySQL Cluster and sharding my only two options for scaling this setup?

EDIT

To answer some more questions...

We manage a great deal of financial and inventory related data, for this reason we wrap most of our SQL traffic in a transaction so it's all or nothing.

We have ~100 tables in our DB with ~30 of them getting a great deal of use. The most commonly used tables have anywhere from 15MM to 35MM rows.

Our transactions span across 20-30 tables with joins between ~10 of them. The queried tables are rather large and we have made some inroads with respect to query optimization and proper indexing, but no one is a fully fledged DBA in house....closest is me haha

A lot of our DB traffic is machine generated, we do a lot of syncing with third party data sources. As such, most of the SQL queries are wrapped in a transaction to ensure data consistency when performing all the calculations and syncing with the third parties (and to allow for a rollback if the link to the third party gets disrupted)

Based off of our cacti monitoring it looks like a great deal of our SQL traffic is reads (~85%R to ~15%W)

Just as a note, we haven't hit too much of a bottleneck with our current cloud server but I'm doing my due diligence ahead of time to make sure we don't run into the wall without a plan.

MySQL user defined rollback procedure

Posted: 07 Mar 2013 12:33 PM PST

I'm attempting to write my own mini-rollback procedure. I have a table that tracks any updates or deletes to another table using a trigger. I am attempting to make it possible to restore one or more of these tracked changes through the use of a procedure. However, I'm receiving a syntax error with the following:

-- UNDO_CHANGES_PROCEDURE - This copies the values of the work log track table back into the relevant record in the work log table if an accidental edit or deletion is made.
DROP PROCEDURE IF EXISTS UNDO_CHANGES_PROCEDURE;

DELIMITER $$

CREATE PROCEDURE UNDO_CHANGES_PROCEDURE(ID INT(6))
BEGIN
    DECLARE var_trig CHAR(8);

    SET var_trig = (SELECT TRIGGER_TYPE FROM WORK_LOG_TRACK
                    WHERE WORK_LOG_TRACK.WORK_LOG_EDIT_NUMBER = ID);

    IF var_trig = 'Update' THEN
        UPDATE WORK_LOG
        SET ENTRY_NUMBER = WORK_LOG_TRACK.ENTRY_NUMBER_FK,
            EMPLOYEE_ID_FK = WORK_LOG_TRACK.EMPLOYEE_ID_FK,
            WORK_ORDER_NUMBER_FK = WORK_LOG_TRACK.WORK_ORDER_NUMBER_FK,
            ENTRY_TIME = WORK_LOG_TRACK.ENTRY_TIME,
            WORK_DONE = WORK_LOG_TRACK.WORK_DONE
        WHERE WORK_LOG_TRACK.WORK_LOG_EDIT_NUMBER = ID
          AND WORK_LOG.ENTRY_NUMBER = WORK_LOG_TRACK.ENTRY_NUMBER_FK;
    ELSE
        INSERT INTO WORK_LOG(ENTRY_NUMBER, EMPLOYEE_ID_FK, WORK_ORDER_NUMBER_FK, ENTRY_TIME, WORK_DONE)
        VALUES (WORK_LOG_TRACK.ENTRY_NUMBER_FK, WORK_LOG_TRACK.EMPLOYEE_ID_FK,
                WORK_LOG_TRACK.WORK_ORDER_NUMBER_FK, WORK_LOG_TRACK.ENTRY_TIME,
                WORK_LOG_TRACK.WORK_DONE)
        WHERE WORK_LOG_TRACK.WORK_LOG_EDIT_NUMBER = ID;
    END IF;
END;
$$

DELIMITER ;

The syntax error relates to my UPDATE statement; any help or suggestions would be appreciated.
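For what it's worth, the two offending statements rewritten (a sketch against the same schema): the UPDATE needs a join to WORK_LOG_TRACK rather than bare references to it, and INSERT ... VALUES cannot take a WHERE clause - that form must be INSERT ... SELECT:

IF var_trig = 'Update' THEN
    UPDATE WORK_LOG wl
    JOIN WORK_LOG_TRACK t
        ON t.WORK_LOG_EDIT_NUMBER = ID
       AND wl.ENTRY_NUMBER = t.ENTRY_NUMBER_FK
    SET wl.EMPLOYEE_ID_FK       = t.EMPLOYEE_ID_FK,
        wl.WORK_ORDER_NUMBER_FK = t.WORK_ORDER_NUMBER_FK,
        wl.ENTRY_TIME           = t.ENTRY_TIME,
        wl.WORK_DONE            = t.WORK_DONE;
ELSE
    INSERT INTO WORK_LOG (ENTRY_NUMBER, EMPLOYEE_ID_FK, WORK_ORDER_NUMBER_FK, ENTRY_TIME, WORK_DONE)
    SELECT t.ENTRY_NUMBER_FK, t.EMPLOYEE_ID_FK, t.WORK_ORDER_NUMBER_FK, t.ENTRY_TIME, t.WORK_DONE
    FROM WORK_LOG_TRACK t
    WHERE t.WORK_LOG_EDIT_NUMBER = ID;
END IF;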

How do I generate a tnsnames.ora file to connect to an Oracle Database?

Posted: 07 Mar 2013 08:33 PM PST

I have installed Oracle 11g, and I can connect as sysman to the Oracle database, but there is no tnsnames.ora file that I can find.

Do I need to generate the tnsnames.ora file myself? If so, where do I place it? If not, how does Oracle generate it for me? If I do need to generate it, what is the appropriate syntax for the file?
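Typically, yes - the installer may leave you to create it (the Net Configuration Assistant can also generate one). It lives in $ORACLE_HOME/network/admin (%ORACLE_HOME%\network\admin on Windows). A minimal sketch, with placeholder values to adjust:

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )

After saving it, tnsping ORCL is a quick way to confirm the entry resolves.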

How to drop multiple tables with common prefix in one query?

Posted: 07 Mar 2013 12:11 PM PST

I'm using Microsoft SQL Server 2008. My question is: how do I drop multiple tables with a common prefix in one query?

The table names are something like:

LG_001_01_STLINE,
LG_001_02_STFICHE
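There's no wildcard DROP, but the statement list is easy to generate and execute in one batch (a sketch for SQL Server 2008; inspect @sql before running it against anything that matters):

DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql += N'DROP TABLE ' + QUOTENAME(SCHEMA_NAME(schema_id))
             + N'.' + QUOTENAME(name) + N';' + NCHAR(10)
FROM sys.tables
WHERE name LIKE N'LG\_001%' ESCAPE N'\';   -- underscore is a LIKE wildcard, hence ESCAPE

EXEC sp_executesql @sql;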

[SQL Server] Creating a reference table.



Creating a reference table.

Posted: 07 Mar 2013 04:08 AM PST

I am very new to SQL Server. I have a single-table database; by this I mean I have created only one table. I have columns that I think would do well holding their data in its own table, linked back with a PK-to-FK relationship. Also, is there anything special I have to do to query the primary table so that it gives me the data that is in the new table?

Thank you
Brian
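Nothing special is needed beyond a join. A sketch with hypothetical names, assuming the repeated column moves to a lookup table whose primary key is stored in the main table as a foreign key:

SELECT m.*, l.CategoryName
FROM dbo.MainTable m
INNER JOIN dbo.Category l
    ON l.CategoryID = m.CategoryID;   -- FK in MainTable -> PK in Category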

looping thru views

Posted: 07 Mar 2013 06:29 AM PST

I have a database with a large number of views. I would like to be able to loop through the views and run a select on the first record of each one. Any advice or suggestions would be greatly appreciated.

Charlie
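A sketch of one way to do it (SQL Server syntax; assumes every view is selectable - views with broken base objects will raise errors worth trapping):

DECLARE @name SYSNAME, @sql NVARCHAR(MAX);

DECLARE v CURSOR FAST_FORWARD FOR
    SELECT QUOTENAME(SCHEMA_NAME(schema_id)) + N'.' + QUOTENAME(name)
    FROM sys.views;

OPEN v;
FETCH NEXT FROM v INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'SELECT TOP (1) * FROM ' + @name + N';';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM v INTO @name;
END
CLOSE v;
DEALLOCATE v;

Note that "first record" is only meaningful with an ORDER BY; TOP (1) without one returns an arbitrary row.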

Alpha character in SPID? Disk I\O Bottleneck Troubleshooting.

Posted: 07 Mar 2013 03:42 AM PST

Hi.

OS is Server 2003 R2 Enterprise Service Pack 2. SQL is Microsoft SQL Server 2005 - 9.00.3353.00.

I was tasked with troubleshooting the following repeating error:

Source: spid4s
Message: SQL Server has encountered 30 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [C:\Program Files\Microsoft SQL Server\MSSQL.2\MSSQL\DATA\DataBase_data.mdf] in database [DataBase] (8). The OS file handle is 0x00000960. The offset of the latest long I/O is: 0x000000d80d8000

However, I don't know how to troubleshoot "spid4s". What is the alpha character 's' doing in the spid? The spid is just the session ID, right? Session IDs only have numbers. There is a session 4, but that's been asleep while the error has happened. It's happened multiple times while I've watched, blamed on 'SPID4', which I can't even investigate. Can anyone shed some light for me?

Using variables in an IF statement?

Posted: 07 Mar 2013 12:00 AM PST

Hi guys! I'm attempting to write a query that uses two variables. One specifies an ID to pull back, and that seems to be working fine. The second variable lets the user specify a TransType column value: 'BUY', 'SELL' or 'ALL'. Obviously if the user specifies 'BUY' I'd like only the 'BUY' rows from that column to be returned, and vice versa. I've attempted to solve this by creating temp tables; it however is not functional. Any suggestions?

BEGIN
DECLARE @idnum varchar
SELECT @idnum = 1
DECLARE @TC varchar(50)
SELECT @TC = 'BUY'

SELECT B.GroupDetail, C.SecID, C.TransCode, C.Quantity, B.GroupId, C.PortfolioID
FROM ERTutTransactions C
INNER JOIN FinalGroupDetail B
    ON C.PortfolioID = B.PortfolioID
    AND B.GroupId = @idnum
WHERE TransCode = @TC

IF @TC = 'all'
BEGIN
    SELECT B.GroupDetail, C.SecID, C.TransCode, C.Quantity, B.GroupId, C.PortfolioID
    FROM ERTutTransactions C
    INNER JOIN FinalGroupDetail B
        ON C.PortfolioID = B.PortfolioID
        AND B.GroupId = @idnum
END
END
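The single-query version of this is usually a catch-all predicate rather than temp tables (a sketch; OPTION (RECOMPILE) mitigates the plan-quality cost of such predicates):

SELECT B.GroupDetail, C.SecID, C.TransCode, C.Quantity, B.GroupId, C.PortfolioID
FROM ERTutTransactions C
INNER JOIN FinalGroupDetail B
    ON C.PortfolioID = B.PortfolioID
   AND B.GroupId = @idnum
WHERE (C.TransCode = @TC OR @TC = 'ALL')
OPTION (RECOMPILE);

Also note the declaration above: @idnum varchar with no length defaults to varchar(1), which is worth fixing regardless.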

Import records in MS Access table to native SQL Server 2008 table

Posted: 06 Mar 2013 08:18 PM PST

Hi SQL Server experts,

I am trying to use the SQL Server Import and Export Wizard to perform this simple task. The Access DB is on the same server, so I think it isn't a network problem. Everything proceeds smoothly up to the final stage, which is preceded by the following:

Click Finish to perform the following actions:
Source Location: E:\RISC\Access Imports\RefDiagnosis_20130306.mdb
Source Provider: Microsoft.Jet.OLEDB.4.0
Destination Location: <server name>
Destination Provider: SQLNCLI10
Copy rows from `Diagnosis_Load` to [dbo].[Diagnosis_Load]
The new target table will be created.
The package will not be saved.
The package will be run immediately.
Provider mapping file: C:\Program Files (x86)\Microsoft SQL Server\100\DTS\MappingFiles\JetToMSSql9.xml

Progress was as follows:
Initialising data flow task - success
Initialising connections - success
Setting SQL command - success
Setting source connection - Error. Text in error message:
Could not connect source component.
Error 0xc020801: Source - Diagnosis_Load[1]: SSIS error code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "SourceConnectionOLEDB" failed with error code 0xC0202009. There may be error messages posted before this with more information on why the AcquireConnection method call failed.
Additional information: Exception from HRESULT: 0xC020801C (Microsoft.SqlServer.DTSPipelineWrap)

What actually has gone wrong, and what do I do about it? The error message makes heavy weather of telling me.

PS. Something like this should just work IMHO. I feel embarrassed for SQL Server - how hard can it be for two mature MS technologies to talk to one another?

Yours hopefully
Mark Dalley

Query hangs on table variable

Posted: 06 Mar 2013 02:07 PM PST

I have a query that consistently runs fast and returns the expected output, but when I try to insert the output into a table variable it just runs indefinitely, and I can't find the problem in the code. I can't really post the query or the output (this is a data mining task with healthcare data and there are strict privacy rules) but I will describe the situation as best as I can.

The query that runs fine takes these steps:

  1. Declares datetime variables for StartDate and EndDate.
  2. Declares a table variable (an ID-to-ID crosswalk) with 2 fields and inserts 691,969 records with a select query.
  3. Declares another table variable (a simple list of codes) with 1 field and inserts 465 records directly.
  4. Finally, there is a union select that pulls 3 fields each from 3 different tables, each inner joined to the crosswalk table variable, where Date is between StartDate and EndDate and the code is in the code-list table variable.

This query returns 53,463 records in about 50 seconds.

When I try to insert this output into another table variable, it doesn't throw an error, it just runs - I have let it go over 26 hours before just to see what would happen - it just keeps executing...

Am I pushing my luck with all the table variables? I'm stumped.

[MS SQL Server] #Files and FILEGROUPS



#Files and FILEGROUPS

Posted: 06 Mar 2013 07:49 PM PST

Hi,

In an 8-core machine with 3 RAID 10 arrays (data files, log files and tempdb), what's the appropriate number of files or filegroups for a database with lots of users and IO? I saw a SQLPass presentation where it said 1 file per 4 cores.

If a database has 2 files, and since the files are on the same RAID set, is it worth also creating 2 FILEGROUPS (one for each data file)? Or is the purpose of FILEGROUPS to put "sensible" data on one and not-so-sensible data on another, so that the first one gets backed up more frequently?

Thanks,
Pedro

Starting up in Single user mode

Posted: 19 Jan 2010 01:15 AM PST

I am testing a disaster recovery process but am unable to restart any 2008 systems in single-user mode so that I can restore the master DB. The SQL Server instance is stopped when I issue the command. I have tried this on three systems so far with no luck - please help.

D:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Binn>sqlservr.exe -m
2010-01-19 11:09:35.82 Server Error: 17058, Severity: 16, State: 1.
2010-01-19 11:09:35.82 Server initerrlog: Could not open error log file ''. Operating system error = 3 (The system cannot find the path specified.).

(the same pair of messages then repeats every 0.3 seconds, from 11:09:36.12 through 11:09:38.54)
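A couple of alternatives that usually sidestep the blank error-log path (sketches; the default instance name is an assumption):

rem Start the service itself in single-user mode:
net start MSSQLSERVER /m

rem Or, when running sqlservr.exe directly, pass the error log location
rem explicitly so initerrlog has a valid path:
sqlservr.exe -m -s MSSQLSERVER -e "D:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG"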

Install SQL server from command line and configuration files

Posted: 07 Mar 2013 05:07 AM PST

I would like to try using the command line and a configuration file to install SQL Server. I read that I should put the configuration file in the root of the installation media. We have a shared folder on the network with all the installation files, and I don't have write permission on that folder. I don't want to copy the installation files to my local machine either, so what other options do I have?

I tried adding a command line to change the directory to the installation files' location at the start, but since it is a shared directory I got an error: CMD does not support UNC paths as current directories.

Any recommendations? Thanks
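Two small things may unblock this (sketches; paths are assumptions): pushd maps a UNC path to a temporary drive letter, and setup.exe's /ConfigurationFile switch accepts a full path, so the .ini can live anywhere you do have write access:

pushd \\fileserver\sqlmedia
setup.exe /ConfigurationFile=C:\Temp\ConfigurationFile.ini
popd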

build a small sql server

Posted: 07 Mar 2013 01:38 AM PST

Hello,

I am trying to build a SQL Server machine. It should be good enough for a small business and run MS SQL Server 2008 R2. Can anyone recommend brands, processors, motherboards, etc.? A cost of $1,000 or less would be better.

Thanks,

Duties of a DBA

Posted: 06 Mar 2013 04:51 AM PST

Could someone please brief me about the daily duties of a production support SQL Server DBA?

Data Archival/Purging - through table partition switch IN/OUT

Posted: 06 Mar 2013 03:49 PM PST

Hi All,

I have a daily-partitioned table 'TBL' and I hold data for 30 days. I archive data to a TBLHistory table by switching out the minimum partition and switching it in to TBLHistory. The main table 'TBL' and 'TBLHistory' reside in the same database; as a result, the size of the database keeps increasing. My manager asked me to create a HistoryDB database and move 'TBLHistory' to the history database.

Can I move 'TBLHistory' to HistoryDB and still perform switch out and switch in? Note: both DBs reside on the same instance of the server.

(MainDB) TBL --> switch out --> switch in --> (HistoryDB) TBLHistory?

Please guide me on how I can best achieve this.

Thanks
Vineet Dubey
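One constraint to plan around (worth confirming against Books Online for your version): ALTER TABLE ... SWITCH requires the source and target on the same filegroup, so a metadata-only switch into a different database isn't possible. The cross-database hop becomes a copy, as in this sketch:

-- Same database: instant, metadata-only.
ALTER TABLE dbo.TBL SWITCH PARTITION 1 TO dbo.TBL_Staging;

-- Cross-database: a physical copy is unavoidable.
INSERT INTO HistoryDB.dbo.TBLHistory
SELECT * FROM dbo.TBL_Staging;

TRUNCATE TABLE dbo.TBL_Staging;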

[Articles] Productivity and Accountability



Productivity and Accountability

Posted: 06 Mar 2013 11:00 PM PST

The recent decision by Yahoo to end telecommuting elicits some comments from Steve Jones on the topic of remote work.

[SQL 2012] SQL Azure database maintenance.



SQL Azure database maintenance.

Posted: 06 Mar 2013 07:32 PM PST

Hi Experts,

What kinds of database maintenance activities can we perform in SQL Azure? Can you please share some documentation about database maintenance in SQL Azure? I'm basically looking to automate database backups and index rebuilds in SQL Azure.

Regards,
Eswin

[T-SQL] Counter inside SELECT



Counter inside SELECT

Posted: 07 Mar 2013 12:10 AM PST

Hi, I have the following scenario:

DECLARE @TEST1 TABLE (C_NAME varchar(10), C_CODE VARCHAR(5))
INSERT INTO @TEST1(C_NAME,C_CODE) VALUES('John', 'CL1')
INSERT INTO @TEST1(C_NAME,C_CODE) VALUES('Jake', 'CL2')
INSERT INTO @TEST1(C_NAME,C_CODE) VALUES('Joe', 'CL3')
INSERT INTO @TEST1(C_NAME,C_CODE) VALUES('Jane', 'CL4')

DECLARE @TEST2 TABLE (P_NAME varchar(10), P_CODE VARCHAR(5))
INSERT INTO @TEST2(P_NAME,P_CODE) VALUES('Ray', 'PL91')
INSERT INTO @TEST2(P_NAME,P_CODE) VALUES('James', 'PL92')
INSERT INTO @TEST2(P_NAME,P_CODE) VALUES('Fred', 'PL93')
INSERT INTO @TEST2(P_NAME,P_CODE) VALUES('Mac', 'PL94')

DECLARE @x INT
SELECT @x = MAX(CONVERT(INT,SUBSTRING(C_CODE,3,2))) FROM @TEST1

INSERT INTO @TEST1(C_NAME, C_CODE)
(SELECT P_NAME, C_CODE = 'CL' + CONVERT(VARCHAR(5),@x+1) FROM @TEST2)

I need the new rows added to @TEST1 with C_CODE incrementing. What I have in @TEST1 after running the above code is:

C_NAME C_CODE
----------------------
John CL1
Jake CL2
Joe CL3
Jane CL4
Ray CL5
James CL5
Fred CL5
Mac CL5

But I need it to be:

C_NAME C_CODE
----------------------
John CL1
Jake CL2
Joe CL3
Jane CL4
Ray CL5
James CL6
Fred CL7
Mac CL8

Thanks in advance for your help.
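A sketch of the usual fix: replace the fixed @x+1 with @x plus ROW_NUMBER(), so each inserted row gets the next code:

INSERT INTO @TEST1 (C_NAME, C_CODE)
SELECT P_NAME,
       'CL' + CONVERT(VARCHAR(5), @x + ROW_NUMBER() OVER (ORDER BY P_CODE))
FROM @TEST2;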

sql server stored procedure logic problem

Posted: 06 Mar 2013 06:02 PM PST

Hi friends, I have a small doubt in SQL Server; please tell me how to solve this issue.

I have 2 tables, and I want to load the first table's records into the second table using a stored procedure. The structure is the same in both tables: columns id, name and sal. The first table contains 1000 records. Whenever we insert these records into the second table, record 501 fails.

Whenever we load the records, the failed record must produce an error message while the remaining records still load. That means using a TRY...CATCH method in the stored procedure to handle the error: the TRY block applies the insert logic, and the CATCH block captures the error message for record 501; once the error is caught, the remaining records (502 to 1000) are loaded into the second table.

Please tell me how to implement this logic in a TRY...CATCH block.
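A sketch of the row-by-row pattern this describes (column names from the post; the error-log table is an assumption):

DECLARE @id INT = (SELECT MIN(id) FROM Table1);

WHILE @id IS NOT NULL
BEGIN
    BEGIN TRY
        INSERT INTO Table2 (id, name, sal)
        SELECT id, name, sal FROM Table1 WHERE id = @id;
    END TRY
    BEGIN CATCH
        -- The failing row (e.g. record 501) is logged; the loop continues.
        INSERT INTO LoadErrors (id, error_message)
        VALUES (@id, ERROR_MESSAGE());
    END CATCH;
    SET @id = (SELECT MIN(id) FROM Table1 WHERE id > @id);
END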

Cursor replacement help

Posted: 06 Mar 2013 08:01 PM PST

Morning,

Thank you for your help in advance. I have built a cursor to run through a list and, with the use of table functions, build up a set of data. Whilst I am unable to send everything through, I can at least try to give you an idea of what this looks like:

CREATE TABLE [dbo].[#TSC_ExportedColumnIds](
    [ID] [int] NOT NULL,
    [ColumnType] [int] NOT NULL
) ON [PRIMARY]
GO
INSERT [dbo].[#TSC_ExportedColumnIds] ([ID], [ColumnType]) VALUES (1, 1)
GO
INSERT [dbo].[#TSC_ExportedColumnIds] ([ID], [ColumnType]) VALUES (2, 1)
GO
INSERT [dbo].[#TSC_ExportedColumnIds] ([ID], [ColumnType]) VALUES (3, 1)
GO
INSERT [dbo].[#TSC_ExportedColumnIds] ([ID], [ColumnType]) VALUES (4, 1)
GO
INSERT [dbo].[#TSC_ExportedColumnIds] ([ID], [ColumnType]) VALUES (5, 1)
GO
INSERT [dbo].[#TSC_ExportedColumnIds] ([ID], [ColumnType]) VALUES (6, 1)
GO
INSERT [dbo].[#TSC_ExportedColumnIds] ([ID], [ColumnType]) VALUES (7, 3)
GO
INSERT [dbo].[#TSC_ExportedColumnIds] ([ID], [ColumnType]) VALUES (8, 3)
GO
INSERT [dbo].[#TSC_ExportedColumnIds] ([ID], [ColumnType]) VALUES (9, 3)
GO
INSERT [dbo].[#TSC_ExportedColumnIds] ([ID], [ColumnType]) VALUES (10, 1)
GO
INSERT [dbo].[#TSC_ExportedColumnIds] ([ID], [ColumnType]) VALUES (11, 1)

DECLARE @TEMP_EXPORT TABLE (
    [A] [varchar](500) NULL,
    [B] [nvarchar](500) NULL,
    [C] [nvarchar](50) NULL,
    [D] [nvarchar](50) NULL,
    [E] [nvarchar](1000) NULL,
    [F] [varchar](1000) NULL,
    [G] [varchar](500) NULL
)

DECLARE @COLUMN_ID INT
DECLARE @getID CURSOR
SET @getID = CURSOR FOR
    SELECT ID
    FROM dbo.#TSC_ExportedColumnIds

OPEN @getID
FETCH NEXT FROM @getID INTO @COLUMN_ID
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @COLUMN_ID
    DECLARE @COLUMN_TYPE INT
    SELECT @COLUMN_TYPE = TSC_COLUMNS.ColumnType
    FROM TSC_COLUMNS
    WHERE TSC_COLUMNS.ID = @COLUMN_ID

    IF @COLUMN_TYPE = 1
    BEGIN
        INSERT INTO @TEMP_EXPORT SELECT NULL, NULL, NULL, NULL, NULL, NULL, NULL
        --INSERT INTO @TEMP_EXPORT SELECT * FROM [dbo].[udf_TSC_qry_ExportBlockColumnParam1](@COLUMN_ID)
        --INSERT INTO @TEMP_EXPORT SELECT * FROM [dbo].[udf_TSC_qry_ExportBlockColumnParam2](@COLUMN_ID,4)
    END
    IF @COLUMN_TYPE = 2
    BEGIN
        INSERT INTO @TEMP_EXPORT SELECT NULL, NULL, NULL, NULL, NULL, NULL, NULL
        --INSERT INTO @TEMP_EXPORT SELECT * FROM [dbo].[udf_TSC_qry_ExportSequenceColumnParam1](@COLUMN_ID)
        --INSERT INTO @TEMP_EXPORT SELECT * FROM [dbo].[udf_TSC_qry_ExportSequenceColumnParam2](@COLUMN_ID,4)
    END
    IF @COLUMN_TYPE = 3
    BEGIN
        INSERT INTO @TEMP_EXPORT SELECT NULL, NULL, NULL, NULL, NULL, NULL, NULL
        --INSERT INTO @TEMP_EXPORT SELECT A,B,C,D,E,F,G FROM [dbo].[udf_TSC_qry_ExportEventColumnParam0](@COLUMN_ID,4)
    END

    FETCH NEXT FROM @getID INTO @COLUMN_ID
END
CLOSE @getID
DEALLOCATE @getID

SELECT * FROM @TEMP_EXPORT

I have commented out the functions but kept them in to give you an idea of what is being created.

My question is: would it be possible to turn this into a set operation? At the moment, on real data, this query runs through a list of about 2500 rows and creates a table with around 100,000 rows in about a minute.

Many thanks for your thoughts,
Oliver
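If the three UDFs are inline table-valued functions, the whole loop can collapse into one CROSS APPLY per column type (a sketch; multi-statement UDFs would still run once per row, so the gain depends on how they're defined):

INSERT INTO @TEMP_EXPORT
SELECT f.*
FROM dbo.#TSC_ExportedColumnIds c
CROSS APPLY [dbo].[udf_TSC_qry_ExportBlockColumnParam2](c.ID, 4) f
WHERE c.ColumnType = 1;

-- ...and similarly one INSERT ... CROSS APPLY for ColumnType 2 and 3.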

Performing strange conditional COUNT

Posted: 06 Mar 2013 10:12 AM PST

Hey guys, can someone show me how to go about solving this problem? I thought it would be pretty straightforward but it turns out it's not. I made some strides but have been stuck on the last portion. Here is what I have so far. I have a table with the following data inside:

Create table #m_TestTable (Name varchar(20), DateRecorded datetime)
-- Insert sample data
Insert into #m_TestTable Values ('John', dateadd(day,1,GetDate()))
Insert into #m_TestTable Values ('John', dateadd(day,2,GetDate()))
Insert into #m_TestTable Values ('John', dateadd(day,3,GetDate()))
Insert into #m_TestTable Values ('John', dateadd(day,6,GetDate()))
Insert into #m_TestTable Values ('John', dateadd(day,7,GetDate()))
Insert into #m_TestTable Values ('John', dateadd(day,8,GetDate()))
Insert into #m_TestTable Values ('John', dateadd(day,12,GetDate()))
Insert into #m_TestTable Values ('John', dateadd(day,13,GetDate()))

The idea is that once the difference between the previous day and the current day is greater than or equal to 1, it should be counted as 1. I use the following query to get the difference in date between the current row and the previous row:

With tblDifference as
(
    Select Row_Number() OVER (Order by DateRecorded) as RowNumber,
           Name, DateRecorded
    from #m_TestTable
)
Select Cur.Name,
       Cur.DateRecorded as CurrentDay,
       Prv.DateRecorded as PreviousDay,
       Datediff(Day, Prv.DateRecorded, Cur.DateRecorded) as DateDifference
Into #temp1
From tblDifference Cur
Left Outer Join tblDifference Prv
    On Cur.RowNumber = Prv.RowNumber + 1
Order by Cur.DateRecorded
--select * from #temp1

Here is where I get stuck. Based on the DateDifference column, I need to perform a count. The DateDifference column data appears in the following order:

NULL, 1, 1, 3, 1, 1, 4, 1

So the count for John should be 3, broken down like this: NULL, 1, 1 should count as ONE; 3, 1, 1 should count as ONE; and 4, 1 should count as ONE.

But if the data was like this:

Create table #m_TestTable1 (Name varchar(20), DateRecorded datetime)
-- Insert sample data
Insert into #m_TestTable1 Values ('John', dateadd(day,1,GetDate()))
Insert into #m_TestTable1 Values ('John', dateadd(day,2,GetDate()))
Insert into #m_TestTable1 Values ('John', dateadd(day,3,GetDate()))
Insert into #m_TestTable1 Values ('John', dateadd(day,6,GetDate()))
Insert into #m_TestTable1 Values ('John', dateadd(day,7,GetDate()))
Insert into #m_TestTable1 Values ('John', dateadd(day,8,GetDate()))
Insert into #m_TestTable1 Values ('John', dateadd(day,9,GetDate()))
Insert into #m_TestTable1 Values ('John', dateadd(day,10,GetDate()))

then, running the same row-numbering query against #m_TestTable1 (into #temp2), the DateDifference column comes out in the following order:

NULL, 1, 1, 3, 1, 1, 1, 1

In this case the count for John would be only 2: NULL, 1, 1 would count as ONE; 3, 1, 1, 1, 1 would count as ONE.

In summary: if there are 1's after a number, they count as ONE together with it. If the number is anything other than 1 and is followed by 1's, the run counts as ONE. If the number is anything other than 1 and is followed by non-1 numbers, they count as one each.

To illustrate this last point, if the data was like this:

Create table #m_TestTable2 (Name varchar(20), DateRecorded datetime)
-- Insert sample data
Insert into #m_TestTable2 Values ('John', dateadd(day,1,GetDate()))
Insert into #m_TestTable2 Values ('John', dateadd(day,4,GetDate()))
Insert into #m_TestTable2 Values ('John', dateadd(day,7,GetDate()))
Insert into #m_TestTable2 Values ('John', dateadd(day,8,GetDate()))
Insert into #m_TestTable2 Values ('John', dateadd(day,9,GetDate()))
Insert into #m_TestTable2 Values ('John', dateadd(day,10,GetDate()))
Insert into #m_TestTable2 Values ('John', dateadd(day,16,GetDate()))
Insert into #m_TestTable2 Values ('John', dateadd(day,17,GetDate()))

the DateDifference column (same query again, into #temp3) appears in the following order:

NULL, 3, 3, 1, 1, 1, 6, 1

Then the count for John would be 3: NULL, 3 will count as ONE; 3, 1, 1, 1 will count as ONE; and 6, 1 will count as ONE.

I hope this makes sense. Thanks for your time. I know this is a strange one.
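If the intended rule is the standard gaps-and-islands one - a row whose gap from the previous row exceeds 1 day starts a new group - then counting those boundary rows gives the answer, and it matches the first two examples (the third groups the leading NULL differently, so the exact rule may need pinning down). A sketch using the DateDifference column already built above:

SELECT Name,
       SUM(CASE WHEN DateDifference IS NULL OR DateDifference > 1
                THEN 1 ELSE 0 END) AS GroupCount
FROM #temp1
GROUP BY Name;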

ORDER BY = Bubble Sort ? Quick Sort ? Insertion Sort ?

Posted: 06 Mar 2013 06:37 PM PST

Hi All,

When we sort data using ORDER BY, which sorting algorithm does the SQL optimizer use?

1) Bubble Sort
2) Quick Sort
3) Merge Sort
4) Heap Sort
5) Insertion Sort

Inputs are welcome!

How to Convert Semi colon Separated Values into Column

Posted: 06 Mar 2013 02:29 PM PST

I am working on an integration project where I receive three string parameters: ItemCode, ItemName and Amount. The values will be in this format:

ItemCode parameter values:
T1;T2;T3;

ItemName parameter values:
Pencil Box;Eraser;Mouse Pad;

Amount parameter values:
1900;2000;8900;

Now I would like to have a procedure which receives these parameters and a query that converts them into columns and records. I am not a SQL Server guy, so somebody proposed a solution, but it is limited to two records only, while my requirement is as many records as there are semicolon-separated strings. The defined structure is that all parameters will have an equal number of values.

Here is the solution somebody gave to me:

INSERT INTO t(cod, name)
VALUES('T1;T2;T3;T4;T5;', 'Pencil Box;Eraser;Board Marker;Trimmer;Ball Point;');

SELECT CAST('<r>'+REPLACE(cod,';','</r><r>')+'</r>' AS XML).query('/r[1]').value('.','varchar(256)') cod,
       CAST('<r>'+REPLACE(name,';','</r><r>')+'</r>' AS XML).query('/r[1]').value('.','varchar(256)') name
INTO #tmpTable FROM t

INSERT INTO #tmpTable
SELECT CAST('<r>'+REPLACE(cod,';','</r><r>')+'</r>' AS XML).query('/r[2]').value('.','varchar(256)') cod,
       CAST('<r>'+REPLACE(name,';','</r><r>')+'</r>' AS XML).query('/r[2]').value('.','varchar(256)') name
FROM t;

SELECT * FROM #tmpTable

Here is the helper script to create the required table, so it won't waste your valuable time:

CREATE TABLE [dbo].[t](
    [cod] [varchar](350) NULL,
    [name] [varchar](300) NULL
) ON [PRIMARY]
GO
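A sketch that removes the two-row limit (SQL Server 2008, so no STRING_SPLIT): shred each list into numbered rows with XML nodes() and join on position. ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) is the usual shortcut for document order here, though that ordering is not formally guaranteed:

DECLARE @cod  VARCHAR(MAX) = 'T1;T2;T3';
DECLARE @name VARCHAR(MAX) = 'Pencil Box;Eraser;Mouse Pad';

WITH c AS (
    SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS pos,
           x.value('.', 'varchar(256)') AS cod
    FROM (SELECT CAST('<r>' + REPLACE(@cod, ';', '</r><r>') + '</r>' AS XML) v) t
    CROSS APPLY t.v.nodes('/r') n(x)
), nm AS (
    SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS pos,
           x.value('.', 'varchar(256)') AS name
    FROM (SELECT CAST('<r>' + REPLACE(@name, ';', '</r><r>') + '</r>' AS XML) v) t
    CROSS APPLY t.v.nodes('/r') n(x)
)
SELECT c.cod, nm.name
FROM c
JOIN nm ON nm.pos = c.pos;

(Trailing semicolons would add one empty element per list; either trim them first or filter out blank values.)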

remove duplicates IF matches

Posted: 06 Mar 2013 01:46 PM PST

So I have the following statement:

SELECT ca.Caseid, [weeknum], [monthName], Response1_Num, Response2_Num, Response3_Num, Response4_Num, comments, [userlogin], [supervisor], [customer id], comm_ID
FROM HMD hm
JOIN Correspondences_All ca ON hm.[caseid] = ca.caseid
WHERE ca.supervisor = 'Iamsomeone'
  AND [comments] NOT IN ('null','')

It returns a bunch of duplicates in the comments. This is expected, but what I want to filter out is just this: I need the query to return only one unique set of the comments UNLESS the userlogin is different. Here is a small subset of the output to give you an idea of what I mean:

CaseID:      Comments:              Userlogin
1243546457   Great help             UserA
3123234353   AWESOME                UserB
2131314242   Support was terrible   UserC
2131314242   Support was terrible   UserC
2131314242   Support was terrible   UserC
2131314242   Support was terrible   UserA   <-- note this row
3453453453   Could have been better UserB

Note the marked row; this is the variable I need to consider. Anyone able to help me with this one?
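Since those three columns are what define "duplicate" here, DISTINCT over just that trio does it (a sketch; it collapses the repeated UserC rows but keeps the UserA row, as in the marked example):

SELECT DISTINCT ca.Caseid, comments, [userlogin]
FROM HMD hm
JOIN Correspondences_All ca ON hm.[caseid] = ca.caseid
WHERE ca.supervisor = 'Iamsomeone'
  AND [comments] NOT IN ('null', '');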

Delete large data from live table

Posted: 06 Mar 2013 06:48 AM PST

I have to delete data from a live table (DB size: terabytes) without affecting the end users; the system is becoming slow. The tables have indexes and identity columns. (When I delete, I don't want to affect the identity columns - how can we do that?) It is taking a lot of time to delete the data. How can I do this? Any steps to follow?

Thanks
Komal
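The standard pattern is batched deletes, so each chunk commits quickly and never holds long locks (a sketch; the batch size and WHERE predicate are assumptions). Note that DELETE, unlike TRUNCATE, never resets or otherwise disturbs identity values:

WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM dbo.BigTable
    WHERE CreatedDate < '20120101';
    IF @@ROWCOUNT = 0 BREAK;
END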

Is there a name for this concept?

Posted: 06 Mar 2013 06:05 AM PST

Hi, I am wondering if there is a standard industry name or term for this concept, aside from "horrible", "bad idea", or "if you do this I (dba) will eat your first born."

I remember learning this lesson on my own several years ago, but was recently reminded of this lazy way of getting things done in a stored procedure, because a consultant hired to help our reporting team submitted a stored procedure to me that contains a where clause like this. A simple mockup of the scenario would look like this:

CREATE PROCEDURE GetSomeData (@Parameter1 int)
AS
BEGIN
    SELECT Field1, Field2, Field3
    FROM SomeTable
    WHERE (Field4 = @Parameter1 OR @Parameter1 = -1)
END

The part I am asking about is the where clause, where a dummy value is passed in to the stored procedure that essentially means "return everything." Of course, this makes the query impossible to tune, as it will always do an index scan. I've already advised him that I will not be committing his stored procedure to the database, and suggested several other ways they can achieve the desired result.

Anyhow, is there a term used to describe this kind of where clause? Inquiring minds want to know!

Thanks!
