Saturday, June 15, 2013

[SQL Server 2008 issues] Urgent: find which columns are updated

Urgent: find which columns are updated

Posted: 14 Jun 2013 04:18 PM PDT

I want to find which columns are updated in a row of a table, and what their new and previous values are. I know a trigger can work here, with use of COLUMNS_UPDATED(), but I want an exact T-SQL script, and any other alternatives for this process. Thanks a ton in advance. :-)
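A rough sketch of the trigger approach, for a hypothetical table dbo.MyTable with key PK and column Col1 (these names are placeholders, not from the original post): the inserted and deleted pseudo-tables carry the new and old values, and UPDATE()/COLUMNS_UPDATED() tell you which columns were touched.

[code="sql"]
CREATE TRIGGER trg_MyTable_Audit ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- UPDATE(Col1) is true when Col1 appeared in the SET list;
    -- COLUMNS_UPDATED() exposes the same information as a bitmask.
    IF UPDATE(Col1)
        SELECT i.PK,
               d.Col1 AS OldValue,
               i.Col1 AS NewValue
        FROM inserted i
        JOIN deleted d ON d.PK = i.PK
        WHERE ISNULL(d.Col1, '') <> ISNULL(i.Col1, '');
        -- in practice you would INSERT this result into an audit table
END
[/code]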

SQL Server 2008R2 Cluster SAN

Posted: 14 Jun 2013 01:03 PM PDT

Hi, I would like to know if it's possible to install SQL Server binaries on SAN drives (as we only have one local drive reserved for the OS). Thank you, Kukah

Make the load incremental

Posted: 14 Jun 2013 02:34 AM PDT

I have a view that pulls the data from several large tables, and then I load it into another table. Currently the load takes about 40 minutes as it is a full load. I would like to make it incremental, but I couldn't find a set of keys that would make the records unique. Any suggestions on how to make it incremental?
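One possible sketch when no natural key exists (object names here are hypothetical): compare whole rows with EXCEPT and insert only the rows the target does not already contain. Note this only handles newly appearing rows, not updates or deletes.

[code="sql"]
-- dbo.SourceView and dbo.TargetTable are placeholder names
INSERT INTO dbo.TargetTable
SELECT *
FROM dbo.SourceView
EXCEPT
SELECT *
FROM dbo.TargetTable;
[/code]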

using parameters in SSRS 2008R2

Posted: 14 Jun 2013 10:00 AM PDT

Hello, I am new to SSRS. In my report, I have the following parameters created: @startDate, @endDate, @ReportType. Under @ReportType, I have two items: New Enrollment and Total Enrollment. If the user selects "New Enrollment", I would like to do something like enrollment_date >= @startDate and enrollment_date <= @endDate; if the user selects "Total Enrollment", I would like to do something like (enrollment_date <= @endDate and exit_dt >= @startDate) OR (enrollment_date <= @endDate and exit_date is NULL). I think I need to use an expression to do that, but I am not sure how. Any help is appreciated. Thanks, Frank
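One way, assuming the dataset is a SQL query and using the column names as written in the post (the post mixes exit_dt and exit_date; exit_dt is used here), is to fold the parameter into the WHERE clause rather than an SSRS expression:

[code="sql"]
WHERE (@ReportType = 'New Enrollment'
       AND enrollment_date >= @startDate
       AND enrollment_date <= @endDate)
   OR (@ReportType = 'Total Enrollment'
       AND enrollment_date <= @endDate
       AND (exit_dt >= @startDate OR exit_dt IS NULL))
[/code]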

Declaring one user variable inside the other

Posted: 14 Jun 2013 08:42 AM PDT

Hey Gurus, I have one string variable that defines the transaction type as "Revenue" or "Finance". I have another variable that I am using to execute the appropriate stored procedure based on whether the transaction type is revenue or finance. I want to either use a Script Task for this, or be able to do it in my new variable. Can someone please help me with the appropriate code for either of those? Thanks in advance!

Difference between re-initialize and re-create merge replication?

Posted: 14 Jun 2013 03:27 AM PDT

Hi, I had an issue with merge replication between 2 servers, so I re-initialized the subscription by generating a new snapshot. But it took 13 hours to finish the initial load on the subscriber. Usually I delete the replication and re-create the merge replication, which takes 2-3 hours. I checked the connection speed between the 2 servers; it was fast enough. Can anyone explain the difference? Why does re-initialize take more time than re-create?

Appending a character in Last_Name with a condition.

Posted: 14 Jun 2013 01:03 AM PDT

I am new to SQL Server. I have a scenario of deleting middle names in a table. There are two tables in my database that control patient names: PT_RED has a field ALS_FLG; PT_NAM has fields PT_LST_NAM and PT_MID_NAM. The two tables share a primary key, INTNL_PT_NUM. Here is what I need: 1. If ALS_FLG is "Y" in the PT_RED table, append "$" to PT_LST_NAM in the PT_NAM table for the same INTNL_PT_NUM. 2. Remove PT_MID_NAM in the PT_NAM table. Thanks for your help.
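A possible sketch using the table and column names from the post (untested against the real schema):

[code="sql"]
-- 1. Append '$' to the last name where the alias flag is set
UPDATE n
SET n.PT_LST_NAM = n.PT_LST_NAM + '$'
FROM PT_NAM n
JOIN PT_RED r ON r.INTNL_PT_NUM = n.INTNL_PT_NUM
WHERE r.ALS_FLG = 'Y';

-- 2. Remove the middle names
UPDATE PT_NAM
SET PT_MID_NAM = NULL;
[/code]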

***Restore multiple sql databases***

Posted: 14 Jun 2013 03:08 AM PDT

I have about 100 DBs with mdf, ldf, and ndf files that I need to restore to SQL 2008 R2 on a different instance. I have all the backups in their respective folders. Is there a script out there that will restore all of them in one swoop?? Thank you in advance, as always! God Bless
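A minimal sketch of a scripted restore loop (database names and backup paths are placeholders; add MOVE clauses if the file paths differ on the new instance):

[code="sql"]
DECLARE @db sysname, @sql nvarchar(max);

DECLARE dbs CURSOR FOR
    SELECT name FROM (VALUES ('DB1'), ('DB2'), ('DB3')) AS v(name);

OPEN dbs;
FETCH NEXT FROM dbs INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'RESTORE DATABASE [' + @db + N'] '
             + N'FROM DISK = N''D:\Backups\' + @db + N'\' + @db + N'.bak'' '
             + N'WITH REPLACE;';
    EXEC (@sql);
    FETCH NEXT FROM dbs INTO @db;
END
CLOSE dbs;
DEALLOCATE dbs;
[/code]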

SQL 2012 and AG's

Posted: 14 Jun 2013 02:40 AM PDT

Recent post from Brent Ozar. Is what he is saying correct? In SQL 2012, if your primary drops offline or the cluster loses quorum, the readable replica databases drop offline. (This is hilarious because it's right when you really want to be able to query the secondaries.) No way to control it – it's totally automatic. In SQL 14, the secondaries remain online and readable when the primaries aren't available. Also, are secondaries always just readable copies and therefore unable to be used in a DR plan?

Performance DIES when REMOVING Where Clause

Posted: 14 Jun 2013 02:29 AM PDT

I've never seen anything like it. I have a simple common table expression that returns 7 rows of data. When you run Select * From cte, it takes 25 seconds to return the data. When you run Select * From cte WHERE X=1, it takes 1 second. When you use anything but "=", e.g. Select * from cte WHERE X > 0, it takes 25 seconds. I've got an index on that field. The only thing I can add is that the field is type Numeric(4,1). Thanks in advance.

Find out if replication failed.

Posted: 14 Jun 2013 04:31 AM PDT

Is there a way to find out through T-SQL (perhaps using DMVs) whether replication failed within a given time frame? Thanks.
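One place to look (assuming the default distribution database name) is the replication error table on the distributor, which can be filtered by time:

[code="sql"]
SELECT time, error_code, error_text
FROM distribution.dbo.MSrepl_errors
WHERE time BETWEEN '2013-06-14 00:00' AND '2013-06-14 23:59'
ORDER BY time;
[/code]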

SQL Server Version

Posted: 13 Jun 2013 11:16 PM PDT

I want to see how fast the migration to new versions is.

How to get results from an SP into a table.

Posted: 09 Jun 2013 11:13 PM PDT

Dear reader, below is an example of getting data from a stored procedure into a table. Is there a more generic or better method to do this (generating a result table, for example, which works for 'any' SP)? sp_spaceused is used as an example. Thanks in advance, ben

[code="sql"]
--
-- 20130610
-- ben brugman
--
-- EXAMPLE:
-- two databases
-- three tables
-- get the space used (6 rows) into a table.
--
-- Question:
-- How do I get the info of a stored procedure in a table.
--

--
-- Create a result table.
--
select
    CONVERT(varchar(30), '') as name,
    CONVERT(varchar(30), '') as rows,
    CONVERT(varchar(30), '') as reserved,
    CONVERT(varchar(30), '') as data,
    CONVERT(varchar(30), '') as index_size,
    CONVERT(varchar(30), '') as unused
into ##A where 1 = 1

-- second result table with an extra column
select CONVERT(varchar(30), 'Database1') as DB_name, * Into ##B from ##A

--
-- Fill the first result table.
--
insert into ##A EXEC sp_spaceused 'table_name1'
insert into ##A EXEC sp_spaceused 'table_name2'
insert into ##A EXEC sp_spaceused 'table_name3'

-- Fill the second result table with extra info.
insert into ##B select CONVERT(varchar(30), 'Database1') as DB_name, * from ##A

-- Repeat process for a second database.
Use database2
delete ##A
insert into ##A EXEC sp_spaceused 'table_name1'
insert into ##A EXEC sp_spaceused 'table_name2'
insert into ##A EXEC sp_spaceused 'table_name3'
insert into ##B select CONVERT(varchar(30), 'Database2') as DB_name, * from ##A

--
-- Show all results.
--
select * from ##B

-- Clear up
drop table ##A
drop table ##B
[/code]

Help with Pivot Queries

Posted: 14 Jun 2013 03:43 AM PDT

I am trying to create a PIVOT table that produces the sum of balances that newly became exactly 1, 10, 21, 31, 61, 91, 121, 151, 181, or 211 days past due each day since December 1, 2012. The data I need is entirely contained in a historical data warehouse table called ACCT_MASTER_HISTORY. Here are my queries...

[code="sql"]
select ASOFDATE, DAYSPD, COUNT(account) as 'Acct', sum(ttl_bal - disc) as 'Balance', sum(ttl_bal) as 'TotalBal'
into #temp
from dbo.ACCT_MASTER_HISTORY
where DAYSPD in (1, 10, 21, 31, 61, 91, 121, 151, 181, 211)
    and TTL_BAL > 0
    and ASOFDATE > 1130531
    and CLASS_CD not between '90' and '99'
group by ASOFDATE, DAYSPD
order by asofdate, DAYSPD

select DAYSPD as Bucket, [1],[10],[21],[31],[61],[91],[121],[151],[181],[211]
from (select asofdate, totalbal from #temp) as SourceTable
Pivot (asofdate, totalbal for dayspd in ([1],[10],[21],[31],[61],[91],[121],[151],[181],[211])) as PivotTable;
[/code]

I am new to this type of query and usually would dump the raw data to Excel and do the pivot there. Any help would be greatly appreciated. Thanks all!
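For reference, PIVOT needs an aggregate over the value column and spreads on the bucket column, and the bucket column must be included in the source query. A corrected sketch that yields one row per ASOFDATE and one column per past-due bucket might look like this (untested against the real table):

[code="sql"]
select asofdate, [1],[10],[21],[31],[61],[91],[121],[151],[181],[211]
from (select asofdate, dayspd, totalbal from #temp) as SourceTable
pivot (sum(totalbal) for dayspd in ([1],[10],[21],[31],[61],[91],[121],[151],[181],[211])) as PivotTable
order by asofdate;
[/code]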

SQL Server 2008 Database Mail - Bulk Mail Problem

Posted: 14 Jun 2013 03:39 AM PDT

I set up a job to bulk mail about 3,200 of our customers using Database Mail. The SMTP mail is being relayed by one of our Exchange 2010 servers. The first night I ran the job, 600+ emails were sent; the remainder had a sent_status of "failed." I modified the job to run again, checking whether the customer's email address was in the recipients in msdb..sysmail_sentitems so they would not be sent again. The second night 600+ more went out, and the same thing happened on the third night. We received a complaint from one customer today that she received 3 emails. Looking at msdb..sysmail_allitems, SQL Server logged the first 2 attempts as failures, but they actually were sent. Most failures are getting logged as mail server errors with this message: "Exception Message: Cannot send mails to mail server. (The operation has timed out.)" Does anyone have any ideas on how to better determine whether the mail went out, or how to get fewer failures from Exchange?
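One way to see Database Mail's own record of each failure is to join the mail items to the Database Mail event log:

[code="sql"]
SELECT ai.mailitem_id, ai.recipients, ai.sent_status, ai.sent_date, el.description
FROM msdb.dbo.sysmail_allitems ai
LEFT JOIN msdb.dbo.sysmail_event_log el
    ON el.mailitem_id = ai.mailitem_id
WHERE ai.sent_status <> 'sent'
ORDER BY ai.mailitem_id;
[/code]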

Concurrency With #Temp Tables in Sprocs

Posted: 14 Jun 2013 02:43 AM PDT

Afternoon all. I have a quick question. I am in the process of re-writing some sprocs, and rather than use UDFs or table variables I am using #temp tables. In essence they follow a similar format:

[code="sql"]
CREATE PROCEDURE AAS
AS
BEGIN
    IF OBJECT_ID('tempdb..#Temp') IS NOT NULL
        DROP TABLE #Temp

    CREATE TABLE #Temp (A INT)

    INSERT INTO #Temp
    -- < Do some bits >

    DROP TABLE #Temp
END
[/code]

My question is: could the temp table cause any concurrency issues, either from other sprocs creating a temp table or from another user calling the sproc at the same time? I'm thinking not, but I want to make sure :ermm: Thanks, Andy

sqlMonitor alert:sql server instance unreachable

Posted: 12 Jun 2013 03:10 AM PDT

Hi everyone. On our monitoring tool SQL Monitor we got an alert called 'SQL Server instance unreachable' which lasted for about 2 minutes. I have checked the CPU queue length for this period and it spikes at around a value of 17, and the counter 'Machine: Processor time' peaks at 100% and stays there for about 1 minute. How can I find out what caused this? Having the data is good, but I need a way of identifying the query that may have caused it.

Stored proc takes a long time to run

Posted: 14 Jun 2013 12:34 AM PDT

My problem is that I have a stored proc that takes too long to run. I have linked servers configured, and when I run the proc from server2 it takes over 20 seconds and does a 'REMOTE SCAN' in the execution plan. The stored proc is:

[code="sql"]
CREATE PROCEDURE dbo.SP_MyProc
    @UserID uniqueidentifier,
    @ServiceID int
AS
BEGIN
    UPDATE TABLEA
    SET Comment = @ServiceID
    WHERE UserID = @UserID
END
[/code]

When I run this code as a simple inline update statement on both servers, such as:

[code="sql"]
UPDATE [server2].DatabaseA.TableA
SET Comment = '237'
WHERE UserID = '6FE30BDA-A38B-4D0D-B10F-DC0D324E7883'
[/code]

it runs in under 1 second, but when I call the SP it takes over 20 seconds, and this is causing timeouts. Can anybody advise why the simple inline UPDATE works fine but calling the SP does not?

Truncate Table Load and Rename Always Available

Posted: 12 Jun 2013 08:38 PM PDT

Hi guys, I'm creating an SSIS package to run hourly, but I need the tables to always be available. I was going to load the data into a Load table, then use sp_rename to rename it to the existing table, minimizing the downtime on the table. I have a few concerns regarding indexes and primary key constraints: should I be dropping and recreating them prior to using sp_rename? Any thoughts or alternatives would be much appreciated. Thanks, Rich

BULK INSERT with Data Containing Single Quotes

Posted: 14 Jun 2013 01:40 AM PDT

Let's say I have a data file with last names:

[code="plain"]
SMITH
JONES
O'HARA   (Note that this one has a single quote!)
CARTER
[/code]

that I want to insert into a table:

[code="sql"]
create table last_names
(
    name varchar(MAX)
)
[/code]

I can write a stored procedure:

[code="sql"]
create procedure sp_insert_last_name (@last_name varchar(MAX))
as
begin
    insert into last_names(name) values (@last_name)
end
[/code]

And some higher level code (maybe C, C#, whatever):

[code="plain"]
while (not end-of-data-file)
    last_name = read(data-file)
    last_name.replace("'","''",globally) // globally escape single quotes
    call sp_insert_last_name(last_name)
end while
[/code]

This works because I coded to escape the single quote. What if I wanted to use BULK INSERT to do this? Will the single quote be handled correctly, or will the last name O'HARA cause an error?
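For what it's worth, BULK INSERT reads the file directly rather than building T-SQL statements from the data, so single quotes in the data are just bytes and need no escaping. A minimal sketch (the file path is hypothetical):

[code="sql"]
BULK INSERT dbo.last_names
FROM 'C:\data\last_names.txt'
WITH (ROWTERMINATOR = '\n');
[/code]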

?? on xp_cmdshell not writing to file

Posted: 13 Jun 2013 11:40 PM PDT

Hi, the script below will not write to the file. I did a similar test on a more basic query and it worked. I'm assuming it's a " somewhere, but I can't find it... any ideas would be appreciated. Thanks, Joe

declare @sql varchar(8000)
set @sql = 'bcp "select CAD19 as ContractNumber, CAD65 as ClientID,
    client.LName + ", " + CLIENT.FName as Name,
    convert(varchar, client.BDate, 101) as DOB,
    CAD300 as HSTSentDate, CAD302 as MemberStartDate,
    CAD18 as IntakeNotCompletedID, CAD301 as IntakeNotEnrDate,
    CAD754 as IntakeOtherReason
    FROM test.dbo.USER_DEFINED_DATA
    INNER JOIN test.dbo.ASSESSMENT ON test.dbo.USER_DEFINED_DATA.ASSESSMENT_MONIKER = test.dbo.ASSESSMENT.OID
    INNER JOIN test.dbo.Client ON test.dbo.USER_DEFINED_DATA.ATTACHED_TO_OID = test.dbo.Client.OID
    WHERE (test.dbo.ASSESSMENT.Code = "CH1")
    AND (test.dbo.USER_DEFINED_DATA.CAD750 = "Final")
    AND (test.dbo.USER_DEFINED_DATA.Expdate IS NULL)
    and CAD700 = "Processed"
    FOR XML PATH("REFERRAL"), root("CLUB"), type" queryout "c:\temp\'
    + REPLACE(CONVERT(VARCHAR, GETDATE(), 101), '/', '') + 'Clubhouse.xml' + '" -c -T -SUCDB01 -Usa -Ppas'
exec master..xp_cmdshell @sql

Repeatable statements that cause need for DBCC checktable repair_rebuild

Posted: 13 Jun 2013 11:09 PM PDT

I have a process that imports some data and massages it by running standard CRUD statements. I then call the following to make sure that my DB is happy. I do this because the process migrates a ton of data.

[code]
declare @t varchar(250)

select table_name INTO #tables
from information_schema.tables
where TABLE_TYPE = 'Base Table'

while EXISTS (select table_name from #tables)
BEGIN
    select top 1 @t = table_name from #tables
    print('Reindexing ' + @t)
    DBCC DBReindex (@t, '', 80)
    delete #tables where TABLE_NAME = @t
END

-- Update all statistics
print('Updating statistics')
EXEC sp_updatestats

drop table #tables
[/code]

I then massage the data some more and run the reindex statement again. I have a certain data set that, after running the statement the second time, shows the following:

[code]
DBCC results for 'sys.sysidxstats'.
Msg 2511, Level 16, State 1, Line 1
Table error: Object ID 54, index ID 2, partition ID 562949956960256, alloc unit ID 562949956960256 (type In-row data). Keys out of order on page (1:3757), slots 10 and 11.
There are 901 rows in 33 pages for object "sys.sysidxstats".
[/code]

This is repeatable. It clears up after I run DBCC checktable ('sys.sysidxstats', REPAIR_REBUILD). Oddly, with a different (even larger) set of data and the same steps, the issue does not appear. That makes me think it's something in the source data that's causing it. My questions: Am I causing this by calling reindex twice? Is something in my data causing this? Could my indexes be bad and be getting screwed up by my CRUD statements? One concern is that I plan on performing the reindexing statement in production once a week. Should I? Any help would be greatly appreciated. Thanks, ST

Insert is taking a long time.

Posted: 13 Jun 2013 10:35 PM PDT

Hi team, inserting one record is taking 8 to 10 seconds. Indexes and stats are all good, and I've rebuilt the indexes as well. The same table in another environment takes a fraction of a second. Any suggestions please...

Backup set expire

Posted: 13 Jun 2013 11:10 PM PDT

Hi all, can anyone explain what this means: "Backup set expire date"? What is meant by expiring here? Will the backup still be there but unusable, or what?

Error Log

Posted: 13 Jun 2013 08:46 PM PDT

Hi team, in my database, insertions into the error log exceed 150,000 records per day. When I want to retrieve the error log for a particular date (records from 1 week before) by giving a timestamp, it takes more than 45 minutes to retrieve the records. Is there any easy way to retrieve them faster, or to export a particular date's error log entries to a .sql, .txt, or .xls file? Please, I need your help.

Alias using sp_addlinkedserver?

Posted: 05 Apr 2010 02:52 AM PDT

I have several sprocs that need to use a linked server. The linked server will change between dev, test and prod (at least), so I would love to have an alias for the linked server. The following shows how to do that using GUI tools: [url]http://alexpinsker.blogspot.com/2007/08/how-to-give-alias-to-sql-linked-server.html[/url] I would much prefer to be able to script it, since the linking will be done from multiple sites. Possible? SQL Server version dependent? Thanks.
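One scriptable approach is to give the linked server a fixed logical name everywhere and point it at a different data source per environment (the names below are placeholders):

[code="sql"]
EXEC master.dbo.sp_addlinkedserver
    @server     = N'ReportingLink',   -- the alias the sprocs reference
    @srvproduct = N'',
    @provider   = N'SQLNCLI',
    @datasrc    = N'DEVSQL01';        -- actual server; differs in dev/test/prod

EXEC master.dbo.sp_addlinkedserverlogin
    @rmtsrvname = N'ReportingLink',
    @useself    = N'True';
[/code]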

Friday, June 14, 2013

[how to] the value of undo_sql in flashback_transaction_query view is always null

the value of undo_sql in flashback_transaction_query view is always null

Posted: 14 Jun 2013 09:07 PM PDT

so I have a table defined as follow:

    create table test.test as (select * from dual);

and ran the following update statements:

    update test.test set dummy = '1' where dummy = 'X';
    commit;

I want to find the undo_sql for the transaction above, so I ran the following statement to find the transaction id

    SQL> select dummy, versions_xid, versions_startscn, versions_endscn
         from test.test versions between scn minvalue and maxvalue;

    D VERSIONS_XID     VERSIONS_STARTSCN VERSIONS_ENDSCN
    - ---------------- ----------------- ---------------
    1 0A000500B00A0000           2802291
    X                                            2802291

and ran the following statement to find the undo_sql statement

    SQL> select undo_sql from flashback_transaction_query
         where xid = (select versions_xid from test.test versions
                      between scn minvalue and maxvalue
                      where versions_xid is not null);

    UNDO_SQL
    --------------------------------------------------------------------------------

as you can see, I got nothing, if I run the following statement, I also get nothing.

    SQL> select undo_sql from flashback_transaction_query where undo_sql is not null;

    no row selected

It looks like undo_sql is always null.

Is there anything I didn't configure? Here are the values from v$system_parameter:

    SQL> select name, value from v$system_parameter where name like '%undo%';

    NAME                 VALUE
    ----------------     ----------------
    undo_management      AUTO
    undo_tablespace      UNDOTBS1
    undo_retention       18000
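For what it's worth, UNDO_SQL in FLASHBACK_TRANSACTION_QUERY is, as far as I know, only populated for transactions that ran after minimal supplemental logging was enabled, so that may be the missing piece:

    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

(then re-run the update and query the view again; transactions from before the change will still show NULL).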

Does SQL Server check disk space before allocating a new page?

Posted: 14 Jun 2013 06:27 PM PDT

I have a bet with my old boss. I bet her that SQL Server, when allocating a new extent, always allocates from the buffer pool and never checks to see whether there is some place on disk where the allocation could be stored. Essentially, she contends that SQL Server should check for available space on the LUN before allocating a page. This seems wrong, since I could place my storage on the moon, which would cause some serious latency. I believe she really expects SQL Server to always bring in a page from disk first and then perform the DML task(s).

Here is my "proof" that she is wrong. If you disagree with my "proof", then please most definitely respond with a better one!

Let us create a trivial database and table. The database's recovery model will be set to SIMPLE and AUTO_CREATE_STATISTICS will be shut off, in order to minimize log record bloat.

Before we begin, let me divulge the version of SQL Server that I am using.

    SELECT @@VERSION;
    -------------------------------------------------------------------------------------
    Microsoft SQL Server 2012 - 11.0.2100.60 (X64)
    Developer Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1)

Now, the code...

    USE master;
    GO

    IF DATABASEPROPERTYEX(N'PageAllocDemo', N'Version') > 0
    BEGIN
        ALTER DATABASE PageAllocDemo SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        DROP DATABASE PageAllocDemo;
    END;
    GO

    CREATE DATABASE PageAllocDemo;
    GO

    USE PageAllocDemo;
    GO
    SET NOCOUNT ON;
    GO

    -- Set the database to SIMPLE and turn off log generating crapola
    ALTER DATABASE PageAllocDemo SET RECOVERY SIMPLE;
    GO
    ALTER DATABASE PageAllocDemo SET AUTO_CREATE_STATISTICS OFF;
    GO

    CREATE TABLE dbo.X
    (
        c1 INT IDENTITY (1,1)
    ) ON [PRIMARY];
    GO

Now, let us check how many pages were allocated? I suspect zero, since we have only created a "logical table", in our case an empty heap.

    -- How many pages are allocated to our table?
    DBCC IND (PageAllocDemo, X, -1);
    GO

Now, clear the log.

    -- Clear the log
    CHECKPOINT;
    GO

What is currently in the log?

    -- What is in the log right now?
    SELECT * FROM fn_dblog(NULL, NULL);
    GO

    /*
    ---------------------------------------
    -- Operation -------------- Context ---
    ---------------------------------------
    LOP_BEGIN_CKPT      LCX_NULL
    LOP_XACT_CKPT       LCX_BOOT_PAGE_CKPT
    LOP_END_CKPT        LCX_NULL
    */

This is expected, since we are in the SIMPLE recovery model. We will now create an explicit transaction that will insert one and only one record into our table; but before we do that, let us open Process Monitor and filter for our MDF and LDF files as well as the PID of the SQL Server process.


Begin the transaction:

    BEGIN TRAN
        INSERT INTO dbo.X DEFAULT VALUES;
    GO

Process Monitor shows two writes to the transaction log file.

Let us check the log records.

    -- What is in the log right now?
    SELECT * FROM fn_dblog(NULL, NULL);

    /*
    I omitted all the log records for PFS, GAM, SGAM, etc.
    ---------------------------------------------------------------
    -- Operation -------------- Context ------- Transaction ID  ---
    ---------------------------------------------------------------
    LOP_BEGIN_XACT      LCX_NULL                0000:0000030e
    LOP_BEGIN_XACT      LCX_NULL                0000:0000030f
    LOP_FORMAT_PAGE     LCX_HEAP                0000:0000030f
    LOP_COMMIT_XACT     LCX_NULL                0000:0000030f
    LOP_INSERT_ROWS     LCX_HEAP                0000:0000030e
    LOP_COMMIT_XACT     LCX_NULL                0000:0000030e
    */

I omit the bit map and PFS allocations and we can see that a page is allocated and one row is inserted as one would anticipate.

How many pages are allocated to our heap?

    -- How many pages are allocated to our table?
    DBCC IND (PageAllocDemo, X, -1);
    GO

    /*
    One IAM page and one data page and nothing else
    ---------------------------------
    PageFID PagePID     IAMFID IAMPID
    ------- ----------- ------ ------
    1       264         NULL   NULL
    1       231         1      264
    */

This is as anticipated. We have one IAM page and one data page. Now, our penultimate action is committing the transaction. I expect a 512B log block flush to occur at this point.

  COMMIT TRAN;  


Let us finish the "proof" with a checkpoint operation. So far, nothing has been written to the data file, only to the log file.

    CHECKPOINT;
    GO


Cool, the data pages flushed to disk as expected.

My conclusion, from the Process Monitor evidence, is that SQL Server allocates the page in memory, adds the record in memory, and flushes the page to disk, all without checking anything at the storage level.

Does anyone object to this hypothesis? If yes, why?

Contact system database design

Posted: 14 Jun 2013 03:31 PM PDT

I'm designing a database for contacts and I'm looking for some feedback on the current design.

What i want:

  • A contact can be a person or an organization.
  • A contact can have many groups.
  • A contact can have many emails and addresses
  • A contact can have one organization (employed)
  • A organization can have many contacts (employees)
  • A group can have many contacts.

What i have:

  • Contact(Id, PersonId (FK), OrganizationId (FK), EmployerId (FK, self reference))
  • Person(Id, FirstName, LastName, Birthday)
  • Organization(Id, Name, Founded)
  • Group(Id, Name)
  • ContactGroup(Id, ContactId (FK), GroupId (FK))
  • Email(Id, ContactId (FK), Name)
  • Address(Id, ContactId (FK), Street, City, ZipCode, Country)

What bothers me most is that the contact entity references either one person or one organization, plus a self reference (EmployerId) for when an organization has employees (which are also contact entities) and vice versa.

enter image description here

Is this a problem, or do you see a better way?

How to select a row on one distinct attribute?

Posted: 14 Jun 2013 02:16 PM PDT

I want to select * from A A but only where A.attr1 is distinct. How can I do this?
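One common approach (assuming a database with window functions, and using the names from the question) is to keep one arbitrary row per attr1 value with ROW_NUMBER():

    select *
    from (
        select a.*,
               row_number() over (partition by attr1 order by attr1) as rn
        from A as a
    ) as t
    where rn = 1;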

Errors keeping stored procedures through restore from backup

Posted: 14 Jun 2013 06:22 PM PDT

We are running SQL Server 2008 R2 on two identical servers. One, called LIVE, is our production environment. The other, called QA, is our dev environment. In order to check backup integrity and run integrity checks, and also so that our dev environment has reasonably fresh data, we restore LIVE's database backups on QA daily and then run integrity checks.

It all works well but for one issue: We need QA to maintain its own versions of all the stored procedures. We develop new SPs and modify existing SPs on QA, and test them there. Only after they have been tested do we push them over to LIVE. In order to retain QA's versions on QA, what I have been doing is scripting the CREATE PROCEDURE commands, which are saved to files. After the databases are restored, all the procedures are dropped. The CREATE PROCEDURE scripts are then executed, recreating all the databases' SPs as they were on QA prior to the restore from backup. For a time this worked well, although the occasional disappearance of an SP raised eyebrows. However, this week the process has not worked: several SPs disappeared each day, escalating to 20 disappearances yesterday and (ugh) over 100 this morning.

A look through the job history log indicates the nature of the problem:

Msg 102, Level 15, State 1, Server QA, Procedure pln_tradepaper, Line 196
Incorrect syntax near 'GO'.

Msg 102, Level 15, State 1, Server QA, Procedure pln_tradepaper, Line 230
Incorrect syntax near 'END'.

Msg 102, Level 15, State 1, Server QA, Procedure pln_tradepaper_type, Line 187
Incorrect syntax near 'GO'.

Msg 2714, Level 16, State 1, Server QA, Procedure pln_tradepaper_type, Line 199
There is already an object named '#temp1' in the database.

Msg 137, Level 15, State 2, Server QA, Procedure pln_tradepaper_type, Line 201
Must declare the scalar variable "@group".

Msg 102, Level 15, State 1, Server QA, Procedure pln_tradepaper_type, Line 211
Incorrect syntax near 'END'.

Msg 102, Level 15, State 1, Server QA, Procedure msg_upsert_asset, Line 46
Incorrect syntax near 'GO'.

Msg 137, Level 15, State 2, Server QA, Procedure msg_upsert_asset, Line 64
Must declare the scalar variable "@client_id".

And so forth, up to the limit for the log's message. I have run the command that generates the CREATE script for this particular database, and there are no syntax issues in it of any kind. Starting tomorrow and until this is resolved, I will be retaining the scripts as they are generated; unfortunately, they have been deleted after running up to this point, so I cannot rule out the possibility that the script did have syntax errors.

The command that generates the CREATE script is this:

bcp "SELECT definition + char(13) + 'GO' FROM db1.sys.sql_modules s INNER JOIN db1.sys.procedures p ON [s].[object_id] = [p].[object_id]" queryout "D:\scripts\SP_db1.sql" -S qa -U "sa" -P "password1" -t -w  

I have only two ideas as to what might be the issue. My leading suspicion is that bcp has a character limit on the fields, which causes db1.sys.sql_modules.definition to be truncated. This would explain why the errors started with Incorrect syntax near 'GO' (the previous module's definition was truncated before END, so we had a GO without a BEGIN being closed off; the other errors cascaded from there). MAX(LEN(definition)) for this database is 15134. However, the data type for db1.sys.sql_modules.definition is nvarchar(max), which as I read the documentation has no default limit.

The only other idea is that somehow I am not calling bcp correctly. Since I'm only selecting a single field, I turned the default \t field separator off by using -t with no argument. The default row separator \n should be fine. Perhaps I should not be specifying Unicode with -w, or something else is missing.

If it makes any difference, the CREATE scripts are executed with

sqlcmd -S QA -U "sa" -P "password1" -d db1 -i "D:\scripts\SP_db1.sql"  

I would appreciate any ideas anyone could provide as to why this issue is occurring. I am not opposed to a different approach to persisting SPs across--I haven't been able to shake the feeling that this is somewhat of a Rube-Goldberg-like solution.

How to group by when querying duplicate records?

Posted: 14 Jun 2013 12:51 PM PDT

I have a table with duplicates and I want to present it to the stakeholders so they can help me determine the source of duplication. Currently I can print 1 row for each duplicate entry, but I want to show all attributes for each. Here's the query I'm using:

select *, count(*), max(id) from foos group by name having count(*) > 1 order by name;  

I'm not sure how to alter the query to get each record where another record shares a name. Thanks for any guidance.
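One sketch that returns every full row whose name appears more than once (table and column names taken from the query above):

    select f.*
    from foos f
    join (
        select name
        from foos
        group by name
        having count(*) > 1
    ) d on d.name = f.name
    order by f.name, f.id;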

[Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified

Posted: 14 Jun 2013 12:20 PM PDT

I am trying to open a program for the first time on Windows XP Pro that uses PostgreSQL 9. I'm getting an error message that says :

A problem was encountered while trying to log into or create the production database. Details: [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified

In my ODBC manager, I have a list of User DSN's and System DSN's. I tried installing a postgres odbc driver to see if that would help, but it didn't.

There is a connect.dat file in the program file with a line saying "OLE DB Provider = MSDASQL". Changing this entry alters the error message I get to "Provider cannot be found, it may not be properly installed".

I don't know what provider name to insert to get this to work properly. I have done extensive research on this error to no avail. Any suggestions would be greatly appreciated.

How do these snapshot isolation level configurations interact on a SQL Server 2005 instance?

Posted: 14 Jun 2013 01:49 PM PDT

I started troubleshooting a tempdb issue we are having on SQL Server 2005 Enterprise Edition. A developer is getting an error that tempdb is out of space. Technically, the error is:

Transaction aborted when accessing versioned row in table 'dbo.inserted' in database 'dbname'. Requested versioned row was not found. Your tempdb is probably out of space. Please refer to BOL on how to configure tempdb for versioning

I looked at the database configuration in sys.databases and found the following settings:

    snapshot_isolation_state: 0
    snapshot_isolation_state_desc: OFF
    is_read_committed_snapshot_on: 1

I looked up what this meant in BOL and the salient information is as follows:

snapshot_isolation_state

State of snapshot-isolation transactions being allowed, as set by the ALLOW_SNAPSHOT_ISOLATION option:

0 = Snapshot isolation state is OFF (default). Snapshot isolation is disallowed.

1 = Snapshot isolation state ON. Snapshot isolation is allowed.

2 = Snapshot isolation state is in transition to OFF state. All transactions have their modifications versioned. Cannot start new transactions using snapshot isolation. The database remains in the transition to OFF state until all transactions that were active when ALTER DATABASE was run can be completed.

3 = Snapshot isolation state is in transition to ON state. New transactions have their modifications versioned. Transactions cannot use snapshot isolation until the snapshot isolation state becomes 1 (ON). The database remains in the transition to ON state until all update transactions that were active when ALTER DATABASE was run can be completed.

snapshot_isolation_state_desc

Description of state of snapshot-isolation transactions being allowed, as set by the ALLOW_SNAPSHOT_ISOLATION option:

  • OFF
  • ON
  • IN_TRANSITION_TO_ON
  • IN_TRANSITION_TO_OFF

is_read_committed_snapshot_on

1 = READ_COMMITTED_SNAPSHOT option is ON. Read operations under the read-committed isolation level are based on snapshot scans and do not acquire locks.

0 = READ_COMMITTED_SNAPSHOT option is OFF (default). Read operations under the read-committed isolation level use share locks.

If I'm understanding this correctly, snapshot_isolation_state is OFF, so snapshot isolation is disallowed. However, is_read_committed_snapshot_on = 1 indicates that the database is using snapshot-based reads. How is this possible if snapshot isolation is disallowed?
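For reference, the two options are independent and can be checked side by side; READ_COMMITTED_SNAPSHOT also uses the tempdb version store even when ALLOW_SNAPSHOT_ISOLATION is off. A quick sketch (substitute your database name):

```sql
-- The two row-versioning options are set independently
SELECT name,
       snapshot_isolation_state_desc,   -- ALLOW_SNAPSHOT_ISOLATION setting
       is_read_committed_snapshot_on    -- READ_COMMITTED_SNAPSHOT setting
FROM sys.databases
WHERE name = N'dbname';
```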

related column contents

Posted: 14 Jun 2013 11:18 AM PDT

I've designed a MySQL database made of several independent/unrelated tables to use in my office. Different tables hold information about clients (names, addresses, etc.), visits to the office (dates, reason, etc.) or other information like comments (related to a specific visit). When a known client enters the office I can retrieve all the relevant data, insert the new visit data and possible comments about the visit. I need to store visits in a specific order (1st visit first, etc.). Sometimes I forget to store a visit and I have to enter the data later. The order of the visits gets messed up and I manually correct it (with UPDATE commands) by changing the visits' auto-incrementing primary key number (which represents the order). This way I get the right order of visits, but comments (related to the changed primary key) can no longer be retrieved. Is there any way to make sure that changing the primary key would make the necessary changes to the rest of the database, so that everything would work as before the changes? Do relational 1:many or other implementations accomplish that? And how? Please note that I'm an amateur, self-taught "database designer".
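One way to get this behaviour is with InnoDB foreign keys declared with ON UPDATE CASCADE, so changing a visit's key automatically propagates to the comments that reference it. A sketch with assumed table and column names (visits(id) as the parent, comments(visit_id) referencing it):

```sql
-- Requires InnoDB tables; MyISAM does not enforce foreign keys
ALTER TABLE comments
  ADD CONSTRAINT fk_comments_visit
  FOREIGN KEY (visit_id) REFERENCES visits (id)
  ON UPDATE CASCADE    -- changing visits.id also updates comments.visit_id
  ON DELETE CASCADE;   -- deleting a visit removes its comments
```

That said, a separate sortable column (e.g. the visit date) is usually a cleaner way to express order than renumbering an auto-increment primary key.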

Constraint based rules engine

Posted: 14 Jun 2013 09:54 AM PDT

I am evaluating software systems which configure products that tout constraint based versus rules based engines. Can someone explain (with maybe an example or two) to a non-expert what the difference is? My guess is that a constraint based engine is rules defining relationships between objects, and rules based engines are more like the WHERE clause of a SQL language query.

How to drop inheritance in Postgresql

Posted: 14 Jun 2013 06:10 PM PDT

I am new to PostgreSQL. The situation I have is that someone created a child table inheriting from a parent table, then dropped the child table. But the parent table still has the down-arrow sign on it. I checked and there's no other link/relation on the parent table. Is this really an issue? I was told that the parent table is still in the 'being inherited' status and causing a performance issue. How can I resolve this by removing the 'being inherited' status from the parent table?
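If a stale inheritance link really does remain, it can be inspected and severed explicitly. A sketch with placeholder table names:

```sql
-- List any children still attached to the parent table
SELECT inhrelid::regclass AS child
FROM pg_inherits
WHERE inhparent = 'parent_table'::regclass;

-- Detach a child from the parent without dropping it
ALTER TABLE child_table NO INHERIT parent_table;
```

If pg_inherits returns no rows for the parent, the inheritance is already gone and the arrow is likely just a stale display in the client tool.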

Transaction Log maintenance when switching to Simple Recovery

Posted: 14 Jun 2013 08:56 AM PDT

Background:

I recently inherited 50+ SQL Servers with 450+ databases. The nightly backups are roughly 8 TB and, needless to say, we're using more disk space than we'd like. All of the databases are set to FULL recovery and the transaction logs have never been backed up. I've gone through all of the SQL Servers and identified low-priority ones that only need a nightly backup and where a day of data loss is acceptable.

Question:

I'm switching a lot of low-priority databases from FULL to SIMPLE recovery mode. Will the existing transaction logs be truncated (when checkpoints occur)? Some of the existing transaction logs are 50-100 GB; what is the best approach to determining what I should shrink them down to going forward? I obviously don't want to keep them that large. Or will they shrink on their own over time (I don't think they will)?
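The switch and a subsequent one-time shrink would look roughly like this (hypothetical database and logical log file names; pick a target size based on the largest expected transaction or index rebuild rather than shrinking to near zero):

```sql
ALTER DATABASE LowPriorityDb SET RECOVERY SIMPLE;

USE LowPriorityDb;
CHECKPOINT;  -- in SIMPLE recovery, a checkpoint marks inactive log as reusable

-- One-time shrink of the oversized log file (target size in MB, here 4 GB)
DBCC SHRINKFILE (LowPriorityDb_log, 4096);
```

Note that truncation only marks log space as reusable; the physical file never shrinks on its own, which is why the explicit DBCC SHRINKFILE is needed once.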

Stored procedures vs. inline SQL

Posted: 14 Jun 2013 10:58 AM PDT

I know stored procedures are more efficient through the execution path than inline SQL in applications. However, when pressed, I'm not super knowledgeable about why.

I'd like to know the technical reasoning for this (in a way that I can explain it to someone later).

Can anyone help me formulate a good answer?

How to find memory usage by table?

Posted: 14 Jun 2013 10:05 AM PDT

I want to discover how much memory is being used by each table. Is this information available in DMVs or elsewhere?
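One common approach on SQL Server is to count buffer pool pages per object via sys.dm_os_buffer_descriptors. A sketch for the current database (each page is 8 KB; the container_id-to-hobt_id join covers in-row and row-overflow data, while LOB allocation units map via partition_id instead):

```sql
SELECT OBJECT_NAME(p.object_id) AS table_name,
       COUNT(*) * 8 / 1024     AS buffer_mb   -- 8 KB pages -> MB
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
  ON bd.allocation_unit_id = au.allocation_unit_id
JOIN sys.partitions AS p
  ON au.container_id = p.hobt_id              -- in-row / row-overflow units
WHERE bd.database_id = DB_ID()
GROUP BY p.object_id
ORDER BY buffer_mb DESC;
```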

Does the size of the primary key contribute to table size?

Posted: 14 Jun 2013 07:46 PM PDT

I have a table (InnoDB) with data length around 36G and index length 23G.

It had a composite primary key across three columns.

As an initial attempt to reduce the size of this table, I dropped the primary key (wasn't actually needed anyway). Yet, the data length and index length are the same.

Is this to be expected?
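A point worth checking: InnoDB does not return the space until the table is rebuilt, and after dropping the explicit primary key it still maintains a hidden clustered index on an internal 6-byte row id, so the savings can be smaller than expected. A sketch of forcing the rebuild and refreshing the figures:

```sql
-- Rebuilds the table and its indexes (for InnoDB this maps to ALTER TABLE)
OPTIMIZE TABLE mytable;

-- Then re-check the (estimated) data and index sizes
SHOW TABLE STATUS LIKE 'mytable';
```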

Weird SQL error

Posted: 14 Jun 2013 09:15 AM PDT

I am seeing the error log entries below on my SQL Server. I'm not sure if they are related to trace flag 3604. I also see something to do with a memory dump. Please advise.

15:15:53.81 spid594   m_pageId = (1:1172094)    m_headerVersion = 1       m_type = 2
15:15:53.81 spid594   m_typeFlagBits = 0x0      m_level = 0               m_flagBits = 0x0
15:15:53.81 spid594   m_objId = 954538534       m_indexId = 6             m_prevPage = (1:1102845)
15:15:53.81 spid594   m_nextPage = (1:1172095)  pminlen = 21              m_slotCnt = 212
15:15:53.81 spid594   m_freeCnt = 2584          m_freeData = 5880         m_reservedCnt = 0
15:15:53.81 spid594   m_lsn = (119582:10616:47) m_xactReserved = 0        m_xdesId = (0:753243376)
15:15:53.81 spid594   m_ghostRecCnt = 0         m_tornBits = -2147483127

Allocation Status
-----------------
15:15:53.81 spid594   GAM (1:1022464) = ALLOCATED
15:15:53.81 spid594   SGAM (1:1022465) = NOT ALLOCATED
15:15:53.81 spid594   PFS (1:1164672) = 0x40 ALLOCATED   0_PCT_FULL       DIFF (1:1022470) = CHANGED
15:15:53.81 spid594   ML (1:1022471) = NOT MIN_LOGGED

DATA:
-----
Memory Dump @0x3CFF0000
-----------------------
15:15:53.81 spid594   3CFF0000:  01020000 00000600 fdd31000 01001500 7fe21100  ....................
15:15:53.81 spid594   3CFF0014:  0100d400 261ae538 180af816 7ee21100 01000000  ....&..8....~.......
15:15:53.81 spid594   3CFF0028:  1ed30100 78290000 2f000000 f094e52c 00000000  ....x)../......,....
15:15:53.81 spid594   3CFF003C:  09020080 00000000 00000000 00000000 00000000  ....................
15:15:53.81 spid594   3CFF0050:  00000000 00000000 00000000 00000000 1a2c0d00  .................,..
15:15:53.83 spid594   3CFF0064:  009c0300 00e30300 0097f79a 10000000 00040000  ....................

Auto-increment key and Foreign Key as composite for relational database

Posted: 14 Jun 2013 02:22 PM PDT

I do not have any experience in databases other than what I have from just playing with workbench and trying to understand how I need to setup this database for a project. I have been searching the internet and I have become increasingly confused as to how I will do what I want to do. Below is what I am trying to accomplish.

I have a need for a MySQL database that will hold patient information. Patient information will be name, id (unique), and so on as well as all of the tests and samples for each test.

  • Each patient can have multiple tests
  • Each test can have multiple samples
  • If I delete a patient then all test and samples are deleted
  • If I delete a test then all samples will be deleted
  • If I delete all samples of one test OR all tests of one patient then the test OR patient should not be deleted.

So far I understand a relational database is what I need. I also understand that I can use foreign keys to link the patient id's for each table as well as the test id for each table. I have also come to the conclusion that I would like data integrity so that I don't have to manually manage deleting and updating the rows. This would also not allow for orphaned rows. I believe this can be done with a composite key but this is where I start getting confused on how to set it up.

I would expect it to work like this (with each line being an insert statement):

Patient Table:

|-------------------|
|Patient ID  |Name  |
|-------------------|
|12345       |ANG   |
|54321       |JUE   |
|-------------------|

Tests Table:

|----------------------|
|Test ID  |Patient ID  |
|----------------------|
|1        |12345       |
|2        |12345       |
|3        |12345       |
|1        |54321       |
|2        |54321       |
|4        |12345       |
|----------------------|

Sample Table:

|----------------------------------|
|Sample ID  |Test ID  |Patient ID  |
|----------------------------------|
|1          |1        |12345       |
|2          |1        |12345       |
|3          |1        |12345       |
|1          |2        |12345       |
|1          |1        |54321       |
|1          |2        |54321       |
|2          |2        |54321       |
|2          |2        |12345       |
|----------------------------------|

Can this be done easily? I also know that this can be accomplished with one or two triggers, but I understood that not to handle deletions and the like. Is there another way to do this that is easier to implement? Also, data being written to and read from this database is handled by my LabVIEW program.
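The cascading behaviour described above can be expressed with composite primary keys plus composite foreign keys declared ON DELETE CASCADE, roughly like this (InnoDB is required for enforced foreign keys; names and types are illustrative):

```sql
CREATE TABLE patients (
  patient_id INT NOT NULL,
  name       VARCHAR(100),
  PRIMARY KEY (patient_id)
) ENGINE=InnoDB;

CREATE TABLE tests (
  patient_id INT NOT NULL,
  test_id    INT NOT NULL,
  PRIMARY KEY (patient_id, test_id),
  FOREIGN KEY (patient_id) REFERENCES patients (patient_id)
    ON DELETE CASCADE              -- deleting a patient deletes their tests
) ENGINE=InnoDB;

CREATE TABLE samples (
  patient_id INT NOT NULL,
  test_id    INT NOT NULL,
  sample_id  INT NOT NULL,
  PRIMARY KEY (patient_id, test_id, sample_id),
  FOREIGN KEY (patient_id, test_id) REFERENCES tests (patient_id, test_id)
    ON DELETE CASCADE              -- deleting a test deletes its samples
) ENGINE=InnoDB;
```

Deleting all samples of a test (or all tests of a patient) is just a DELETE on the child table, so the parent row is untouched, which matches the last requirement.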

Automated query log

Posted: 14 Jun 2013 09:03 AM PDT

I would like to ask for some opinions, recommendations or guidelines on how to create an automatic logging system for the executed queries on a MySQL database.

Some details:

  • it's a PHP + MySQL project
  • multiple users can use the database, based upon a user table and an authentication system made in PHP (they share 1 MySQL user)
  • so far some query logging is implemented, but it is not very efficient because every single logging procedure is hard-coded in PHP
  • I am looking for a way to create a general procedure to which I can pass the user id, and the rest is done automatically
  • I need to store the previous value, new value, column name, table name, user name, query type and timestamp

I am familiar with TRIGGERS and STORED PROCEDURES, but I can't really figure out if that's the way, and if yes, how (I don't need code, just the pure logic, although a usable minimalistic example would be welcome).
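Triggers plus a user-defined session variable are one workable pattern here: since all PHP users share one MySQL account, the PHP layer sets the application user id on the connection, and an AFTER UPDATE trigger records old/new values. A sketch with assumed table and column names (one trigger per audited table, one IF per audited column):

```sql
-- PHP sets this once per request on the shared connection:
--   SET @app_user_id = 42;

CREATE TABLE query_audit (
  id          INT AUTO_INCREMENT PRIMARY KEY,
  user_id     INT,
  table_name  VARCHAR(64),
  column_name VARCHAR(64),
  old_value   TEXT,
  new_value   TEXT,
  query_type  VARCHAR(10),
  changed_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

DELIMITER //
CREATE TRIGGER clients_audit_upd AFTER UPDATE ON clients
FOR EACH ROW
BEGIN
  -- <=> is NULL-safe equality, so NULL -> value changes are also caught
  IF NOT (OLD.name <=> NEW.name) THEN
    INSERT INTO query_audit (user_id, table_name, column_name,
                             old_value, new_value, query_type)
    VALUES (@app_user_id, 'clients', 'name', OLD.name, NEW.name, 'UPDATE');
  END IF;
END//
DELIMITER ;
```

The trade-off is one trigger per table (and INSERT/DELETE variants), but the PHP side shrinks to a single SET statement per request.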

Thank you.

Edit: Wow, is this question that hard ?

MySQL HDD activity with empty process list

Posted: 14 Jun 2013 11:29 AM PDT

What does this mean, and how can I find out what MySQL is doing? I use InnoDB and my database is huge. This happens when I disconnect my app.

Thanx.

Why don't databases create their own indexes automatically?

Posted: 14 Jun 2013 01:03 PM PDT

I would have thought that databases would know enough about what they encounter often and be able to respond to the demands they're placed under that they could decide to add indexes to highly requested data.

Is using SUM() twice suboptimal?

Posted: 14 Jun 2013 01:05 PM PDT

I know I have to write SUM twice if I wish to use it in a HAVING clause (or else use a derived table):

SELECT id,
       SUM(hours) AS totalhours
FROM mytable
GROUP BY id
HAVING SUM(hours) > 50;

My question now is whether or not this is suboptimal. As a programmer, this query looks like the DB will calculate the sum twice. Is that so, or should I rely on optimizations the DB engine will do for me?

Update: an explain of a comparable query:

postgres=> explain select sum(counttodo) from orderline group by orderlineid having sum(counttodo) > 100;
                            QUERY PLAN
--------------------------------------------------------------------
 HashAggregate  (cost=1.31..1.54 rows=18 width=8)
   Filter: (sum(counttodo) > 100)
   ->  Seq Scan on orderline  (cost=0.00..1.18 rows=18 width=8)
(3 rows)

Oracle RAC 11g R2 DB silent install: parameter needed for system class

Posted: 14 Jun 2013 08:28 PM PDT

I'm installing Oracle RAC DB 11gR2 using response file and silent mode. The installation is failing with the error message:

[INS-35421] This options installs a single instance database only
You have chosen to perform a Desktop class install on a cluster. This options will not install oracle RAC

Which parameter is needed for a server-class install? I tried changing the value for oracle.install.db.config.starterdb.control from DB_CONTROL to GRID_CONTROL, but I still get the same warning.

Getting "BadImageFormatException" when attempting to connect through Visual Studio to Oracle

Posted: 14 Jun 2013 02:28 PM PDT

I get the following message: "Attempt to load Oracle client libraries threw BadImageFormatException. This problem will occur when running in 64 bit mode with the 32 bit Oracle client components installed." when I attempt to create a connection from both Visual Studio 2008 and 2012 to Oracle from my Windows 8 64bit workstation.

I'm not running a web app when this message happens; I'm only trying to establish an Oracle connection with Visual Studio. Any solutions posted on the internet that I have seen that resolve this issue for others don't seem to apply to my circumstances.

  • I did not have this issue when I was using Windows 7 64-bit.
  • I have set the Active Solution Platform to both Any PC and x86 when I have a solution open, but I still get the same message with either setting.
  • I have the latest Oracle 32 bit Client since 64 bit Oracle client won't work with 32 bit Visual Studio.
  • At this point, I am only trying to connect to Oracle from Visual Studio and not run a web application. So IIS settings have nothing to do with this issue.

My environment is

  • Windows 8 64bit
  • Visual Studio 2012 and 2008
  • Oracle 32 bit Client. But also tried 64 Bit Client.
  • Oracle 11.2.0.3 database.
  • Active Solution Platform was tried with both Any PC and x86.

Any suggestions would be appreciated.

SHOW TABLE STATUS very slow on InnoDB

Posted: 14 Jun 2013 12:28 PM PDT

Recently we've been migrating from MyISAM to InnoDB, and I understand that MyISAM stores meta information against each table to track row counts and the like. The database is about 1.8 GB with 1+ million records across 400 or so tables.

The problem comes when software like phpMyAdmin runs "SHOW TABLE STATUS FROM dbase;", which can take up to 50 seconds to execute. Is there a way to optimise this (MySQL or MariaDB)?
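A frequent culprit for slow SHOW TABLE STATUS on InnoDB is the innodb_stats_on_metadata option: when it is ON, metadata queries re-sample index statistics for every table they touch. A sketch of checking and disabling it (available in MySQL 5.1+ and MariaDB; statistics are then refreshed only by ANALYZE TABLE and background updates):

```sql
-- Check the current setting
SHOW VARIABLES LIKE 'innodb_stats_on_metadata';

-- Stop recomputing InnoDB statistics on metadata queries
SET GLOBAL innodb_stats_on_metadata = 0;
```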

Thanks!

Difference between database vs user vs schema

Posted: 14 Jun 2013 03:19 PM PDT

I'm really confused with the terms database, user and schema. Can anyone explain how they are different from each other (if they are)?

If they are same then, what are the similarities between them? How do we use them? And how do we create them?

Designing Simple Schema for Disaggregation of Demand Forecast

Posted: 14 Jun 2013 01:28 PM PDT

I am doing a simple database design task as a training exercise where I have to come up with a basic schema design for the following case:

I have a parent-child hierarchy of products (example, Raw Material > Work in Progress > End Product).

  • Orders are placed at each level.
  • Number of orders shall be viewable in weekly buckets for the next 6 months.
  • Demand forecast can be done for each product level.
  • Demand forecast for any week within next 6 months can be done today.
  • Demand forecast is done for weekly buckets, for the next 6 months.

Demand forecast is usually done at a higher level in the hierarchy (Raw Material or Work in Progress level). It has to be disaggregated to a lower level (End Product).

There are 2 ways in which demand forecast can be disaggregated from a higher level to lower level:

  1. User specifies percentage distribution for end product. Say, there's a forecast of 1000 for Work In Progress.. and user says I want 40% for End Product 1 and 60% for End Product 2 in bucket 10.. Then for 10th week (Sunday to Saturday) from now, forecast value for End Product 1 would be 400 and, for End Product 2 would be 600.
  2. User says, just disaggregate according to orders placed against end products in Bucket 5, and orders in bucket 5 for End Product 1 and 2 are 200 and 800 respectively, then forecast value for EP1 would be ((200/1000) * 100)% and for EP2 would be ((800/1000) * 100)% of forecast for 'Work in Progress'.

Forecast shall be viewable in weekly buckets for the next 6 months and the ideal format should be:

product name | bucket number | week start date | week end date | forecast value | created_on  

PRODUCT_HIERARCHY table could look like this:

id  |   name                |   parent_id
__________________________________________
1   |   raw material        |   (null)
2   |   work in progress    |   1
3   |   end product 1       |   2
4   |   end product 2       |   2

ORDERS table might look like this:

id | prod_id | order_date | delivery_date | delivered_date  

where prod_id is a foreign key referencing the id column of the PRODUCT_HIERARCHY table.

How do I store the forecast? What would be a good basic schema for such a requirement?


My idea for selecting orders into 26 weekly buckets is:

SELECT
    COUNT(*) TOTAL_ORDERS,
    WIDTH_BUCKET(
        delivery_date,
        SYSDATE,
        ADD_MONTHS(SYSDATE, 6),
        TO_NUMBER( TO_CHAR(SYSDATE,'DD-MON-YYYY') - TO_CHAR(ADD_MONTHS(SYSDATE, 6),'DD-MON-YYYY') ) / 7
    ) BUCKET_NO
FROM
    orders_table
WHERE
    delivery_date BETWEEN SYSDATE AND ADD_MONTHS(SYSDATE, 6);

But this will give weekly buckets starting from today irrespective of the day of the week. How can I convert them to Sunday-to-Saturday weeks in Oracle?
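On the Sunday-to-Saturday question: since TRUNC(date, 'IW') snaps to the ISO Monday regardless of NLS settings, shifting by one day gives Sunday-based week boundaries. A sketch:

```sql
-- TRUNC(d + 1, 'IW') is the Monday of the week containing d+1;
-- subtracting 1 yields the Sunday on or before d.
SELECT TRUNC(delivery_date + 1, 'IW') - 1 AS week_start,  -- Sunday
       TRUNC(delivery_date + 1, 'IW') + 5 AS week_end     -- Saturday
FROM   orders_table;
```

Grouping on that week_start expression (instead of WIDTH_BUCKET from SYSDATE) aligns each bucket to calendar weeks.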

Please help designing this database structure.

(will be using Oracle 11g)

SQL Server BPA 2008R2

Posted: 14 Jun 2013 11:28 AM PDT

I have been using SQL Server BPA to get good information from Microsoft. I was using the 2005 version, in which I used to export the results in CSV format, but recently I got two new servers with SQL Server 2008 R2 installed. I know I can't run BPA 2005 on these, so I chose the R2 version, but it doesn't have an option to save the report in CSV format, only XML. I have tried using Excel to convert it into CSV, but with no luck: even though it displays the details, I can't narrow down the results. Any idea how to convert the results to CSV format?

mysql duplicate entry error 1062 when restoring backup

Posted: 14 Jun 2013 09:28 PM PDT

Sorry, I have seen similar threads, but I still couldn't find one addressing my issue; plus, I needed some more info on this.

Requirement: To create an exact replica 'db4' of an existing DB 'db3'.

Procedure followed:

  • mysqldump -uuser -ppass db3 > db3.sql (size is 6G)
  • mysql -uuser -ppass db4 < db3.sql (db4 was a newly created blank database)

The 2nd step throws in the error:

ERROR 1062 (23000) at line 5524: Duplicate entry '600806' for key 1"  

I ran the 2nd step again with --force. The restore completed but with 2 additional similar errors:

ERROR 1062 (23000) at line 6309: Duplicate entry '187694' for key 1      ERROR 1062 (23000) at line 6572: Duplicate entry '1567400' for key 1  

On completion, when I queried certain tables of the db4 database, I found missing records.

Question:

  1. Does this indicate a corrupted/problematic db3 database?

  2. How to proceed to create a 'consistent/working' replica (db4) of db3?

  3. If (2) fails, how to possibly troubleshoot and find the reason behind why it occurs?
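As a first troubleshooting step for (1) and (3), checking the suspect tables in db3 for index corruption and genuinely duplicated key values is reasonable. A sketch (the table and key column come from the dump lines reported in the errors; names here are placeholders):

```sql
-- Verify index/data integrity on the suspect table in db3
CHECK TABLE suspect_table EXTENDED;

-- Look for key values that really are duplicated in the data
SELECT id, COUNT(*) AS dupes
FROM   suspect_table
GROUP  BY id
HAVING COUNT(*) > 1;
```

If CHECK TABLE reports errors while the SELECT finds no duplicates, the unique index in db3 is likely corrupt and a repair/rebuild before re-dumping is the way forward.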

Thanks,

MySQL optimization - year column grouping - using temporary table, filesort

Posted: 14 Jun 2013 03:28 PM PDT

I have a transactions table with 600,000 records, and I need to list counts for the dashboard on a financial-year basis. The table uses MyISAM. I tried adding an index on the transaction date (tran_date). Even though the query uses the index, it creates a temporary table, which takes more time because of the temporary table and the filesort. Is there any way to optimize the query to improve the query time?

SELECT COUNT(*) AS cnt,
       CASE WHEN MONTH(tran_date) >= 3
            THEN CONCAT(YEAR(tran_date), '-', YEAR(tran_date) + 1)
            ELSE CONCAT(YEAR(tran_date) - 1, '-', YEAR(tran_date))
       END AS financial_year
FROM `transactions1`
WHERE tran_date >= '2010-06-01'
GROUP BY financial_year

Showing rows 0 - 4 (5 total, Query took 1.2095 sec)
id  select_type  table          type   possible_keys      key        key_len  ref   rows    Extra
1   SIMPLE       transactions1  range  PRIMARY,tran_date  tran_date  8        NULL  346485  Using where; Using index; Using temporary; Using filesort
Keyname       Type   Unique  Packed  Field          Cardinality  Collation
PRIMARY       BTREE  Yes     No      tran_date      205720       A
                                     tran_ID        617162       A
coupon_No     BTREE  No      No      coupon_No      617162       A
account_type  BTREE  No      No      account_type   3            A
prodCode      BTREE  No      No      prodCode       430          A
                                     tran_date      308581       A
tran_date     BTREE  No      No      tran_date      205720       A
cust_ID       BTREE  No      No      cust_ID        3265         A
                                     tran_date      308581       A
                                     account_type   308581       A
                                     points_earned  617162       A

Update :

Tried adding partitioning, which is not much more helpful than the non-partitioned table. Does replication help in this case for reading this table? There will be more grouping based on the dates (using the date functions) when reading the data.

Edit:

I altered the query and reduced the query execution time. The query I used is,

SELECT SUM(count)
FROM (
    SELECT COUNT(*) AS count,
           CASE WHEN MONTH(tran_date) >= 3
                THEN CONCAT(YEAR(tran_date), '-', YEAR(tran_date) + 1)
                ELSE CONCAT(YEAR(tran_date) - 1, '-', YEAR(tran_date))
           END AS format_date
    FROM transactions1
    GROUP BY tran_date
) AS s
GROUP BY format_date

Showing rows 0 - 4 (5 total, Query took 0.5636 sec)
id  select_type  table          type   possible_keys  key        key_len  ref   rows    Extra
1   PRIMARY      <derived2>     ALL    NULL           NULL       NULL     NULL  229676  Using temporary; Using filesort
2   DERIVED      transactions1  index  NULL           tran_date  8        NULL  617162  Using index

But when using

SELECT COUNT(*) AS count,
       CASE WHEN MONTH(tran_date) >= 3
            THEN CONCAT(YEAR(tran_date), '-', YEAR(tran_date) + 1)
            ELSE CONCAT(YEAR(tran_date) - 1, '-', YEAR(tran_date))
       END AS format_date
FROM transactions1
GROUP BY tran_date

Showing rows 0 - 29 (229,676 total, Query took 0.0006 sec)

takes less time without the SUM(count) over the derived table. Is there any other way to get the sum without using the subquery in MySQL, or can the subquery be optimized to use the index?
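One alternative that avoids the second grouping pass is WITH ROLLUP, which appends super-aggregate total rows to the grouped result in a single pass. A sketch against the same table:

```sql
SELECT CASE WHEN MONTH(tran_date) >= 3
            THEN CONCAT(YEAR(tran_date), '-', YEAR(tran_date) + 1)
            ELSE CONCAT(YEAR(tran_date) - 1, '-', YEAR(tran_date))
       END AS format_date,
       COUNT(*) AS cnt
FROM   transactions1
GROUP  BY format_date WITH ROLLUP;
-- The final row, where format_date IS NULL, carries the grand total
```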

Copy Database option not found in SQL Server Management Studio 2008 R2

Posted: 14 Jun 2013 12:50 PM PDT

I am unable to see the Copy Database option in SQL Server Management Studio 2008 R2.

Can anyone tell me what's going on, please?

[SQL Server] SSIS FTP Task - Adding Date Criteria

[SQL Server] SSIS FTP Task - Adding Date Criteria


SSIS FTP Task - Adding Date Criteria

Posted: 14 Jun 2013 04:09 AM PDT

Hi all. I'm looking for a way to transfer files from an FTP site where the last modified date on the FTP is yesterday, i.e. GETDATE() - 1. In other words, I don't need to transfer everything on the FTP, just what was posted the day before. Not sure if that can be done with expressions. I'm not much familiar with programming, but I'm still getting my way around SSIS 2008. Any help will be appreciated. Thanks

image insertion into database

Posted: 13 Jun 2013 06:40 PM PDT

How do I add and retrieve an image to and from a database table using OPENROWSET (please explain)? Are there any other ways of doing image insertion and retrieval? Thanks and Regards
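On SQL Server 2005 and later, OPENROWSET(BULK ... SINGLE_BLOB) reads a file as a single varbinary(max) value, which makes inserts straightforward. A sketch (table, column and path names are illustrative):

```sql
-- Insert an image file into a varbinary(max) column
INSERT INTO dbo.Images (ImageName, ImageData)
SELECT 'photo.jpg', img.BulkColumn
FROM OPENROWSET(BULK N'C:\images\photo.jpg', SINGLE_BLOB) AS img;

-- Retrieve it (the client application then writes the bytes to a file)
SELECT ImageData FROM dbo.Images WHERE ImageName = 'photo.jpg';
```

Alternatives include parameterized inserts from client code (e.g. ADO.NET with a byte array) and, on SQL Server 2008+, FILESTREAM storage for large images.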

Buffer cache hit ratio down..

Posted: 13 Jun 2013 10:35 PM PDT

Hello Masters, I have SQL Server 2005 SP3 on my server, and since yesterday I have been getting an alert that "Buffer cache hit ratio is less than 10%". I monitored it for 2-3 hours and also checked memory, which is more than sufficient (32 GB). AWE is also enabled. I checked the error logs and event logs and haven't seen any errors either. What factors drive the buffer cache hit ratio down? How can I bring it back up to its normal level? Please advise. BR, Jitendra
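To rule out a monitoring artifact, the ratio can be computed directly from the performance counter DMV; this counter is a ratio of two raw values, so both must be read together. A sketch:

```sql
SELECT CAST(a.cntr_value AS FLOAT) / NULLIF(b.cntr_value, 0) * 100.0
         AS buffer_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
  ON a.[object_name] = b.[object_name]
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base';
```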

[SQL Server Data Warehousing] Query Statistics History Report Duration and Execution Time taken


Query Statistics History Report Duration and Execution Time taken



Hi everyone


I have about 4 jobs calling the same stored procedure, but each run executes different IDs. When Job1 runs it may take 45 minutes to complete; Job2 can take 1 hour 20 minutes. They basically overlap with each other but touch different IDs that need to be executed.


Questions 1


The stored procedure creates a bunch of #temp tables, and I believe that each time a job runs it goes through the same code and creates separate #temp tables. For example, Job1 runs and creates #temp1_1, Job2 runs and creates #temp_2, Job3 runs and creates #temp_3 in memory.


Can you please confirm whether my understanding is right?
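Regarding Question 1: each session does get its own private copy of a #temp table; SQL Server disambiguates them internally with a unique suffix in tempdb, which can be observed directly. A sketch (run after the jobs have created their #temp tables):

```sql
-- Each session's #temp table appears in tempdb with a distinct
-- padded numeric suffix, so concurrent jobs never collide on the name
SELECT name, create_date
FROM tempdb.sys.tables
WHERE name LIKE '#temp%';
```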


Question 2


When I check the Query Statistics History report, I see several queries from the stored procedure showing as running for hours, but the jobs complete in less than an hour. I say "hours" on the assumption that the time shown there is in seconds. First of all: are the total duration and execution-per-minute figures in milliseconds or seconds?


So how is it possible that a query shows 6 hours of running time when none of the jobs runs for 6 hours straight?


Thanks much


-Sarah





social.technet.microsoft.com/Forums

[SQL Server Data Warehousing] Best way to change the datatype of Column of Table


Best way to change the datatype of Column of Table



Are you allowed any sort of outage on this table? If so, then consider creating a new table with the correct data type, bulk loading the data into it, then dropping the old table and renaming the new one. I don't think 5M records is a significant enough volume to warrant any more complex a solution.


You can also do an ALTER TABLE/ALTER COLUMN query, as long as the data types you are converting from and to are compatible, and the data is compatible.  That's assuming the column is not part of the primary key or part of an index or constraint. 
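As a sketch of that in-place route (hypothetical table and column names; a widening conversion like this is compatible and avoids the rebuild):

```sql
-- e.g. widening Amount from DECIMAL(9,2) to DECIMAL(18,2)
ALTER TABLE dbo.Orders
  ALTER COLUMN Amount DECIMAL(18, 2) NOT NULL;
```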




