Tuesday, July 30, 2013

[SQL Server 2008 issues] SSIS transferring unprintable characters between systems


SSIS transferring unprintable characters between systems

Posted: 29 Jul 2013 07:46 AM PDT

We are using the SSIS Transfer SQL Server Objects Task to move data from one SQL Server system to another. A few of the records on the source system have unprintable characters in them. Yes, cleaning that up would be a goal, but since that involves application changes, our current objective is to have the data transferred so it matches exactly between the systems.

It appears that on these records the unprintable characters are not transferring to the target system. Then when something like Data Dude (or whatever they call it now) is used to compare the systems, these records appear as mismatches.

Is there a way, using SSIS, to get the data to copy from the source to the target exactly as it exists?

TIA, Doug
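
One way to see which rows are affected before and after the transfer: a sketch (table and column names are hypothetical) that flags values containing ASCII control characters, the usual "unprintable" suspects.

[code="sql"]
-- Hypothetical names: replace dbo.SourceTable/SomeColumn with the real ones.
-- A binary collation makes the [x-y] range match raw character codes 1-31.
SELECT *
FROM dbo.SourceTable
WHERE SomeColumn LIKE '%[' + CHAR(1) + '-' + CHAR(31) + ']%' COLLATE Latin1_General_BIN;
[/code]

Running this against both source and target should show whether the characters were stripped in transit or merely display differently.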

Analytical function for my Q

Posted: 29 Jul 2013 09:13 AM PDT

Hi,

Is there any analytical function in SQL Server to achieve results like the ones below? I did it with 2 temp tables, but there might be something already there that looks nicer. I also need this grouped by DateIn, displaying the % for each delDays category.

Thanks all,
M

[code]
create table #log (shipID int, datein datetime, delDays int)

INSERT INTO #LOG (shipid, datein, delDays)
select 1001, '2012-01-06', 9 union all
select 1002, '2012-01-06', 11 union all
select 1002, '2012-01-06', 11 union all
select 1003, '2012-01-06', 11 union all
select 1004, '2012-01-06', 11 union all
select 1005, '2012-01-06', 11 union all
select 1006, '2012-01-06', 11 union all
select 1007, '2012-01-06', 11 union all
select 1008, '2012-01-06', 11 union all
select 1009, '2012-01-06', 11 union all
select 1010, '2012-01-06', 22 union all
select 2001, '2012-02-06', 15 union all
select 2002, '2012-02-06', 16 union all
select 2003, '2012-02-06', 33 union all
select 2004, '2012-02-06', 22

-- Needed result:
DateIn     | <20days | <30days | >30days |
-----------|---------|---------|---------|
2012-01-06 | 90%     | 10%     |         |
2012-02-06 | 50%     | 25%     | 25%     |
[/code]
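
A sketch of one way to do this without temp tables, using conditional aggregation over the sample data above (assuming "<30days" means 20-29 days; integer division keeps whole percentages):

[code="sql"]
SELECT datein,
       CONVERT(varchar(12), 100 * SUM(CASE WHEN delDays < 20 THEN 1 ELSE 0 END) / COUNT(*)) + '%' AS [<20days],
       CONVERT(varchar(12), 100 * SUM(CASE WHEN delDays >= 20 AND delDays < 30 THEN 1 ELSE 0 END) / COUNT(*)) + '%' AS [<30days],
       CONVERT(varchar(12), 100 * SUM(CASE WHEN delDays >= 30 THEN 1 ELSE 0 END) / COUNT(*)) + '%' AS [>30days]
FROM   #log
GROUP BY datein;
[/code]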

Shrink database file with EMPTYFILE option

Posted: 29 Jul 2013 04:13 PM PDT

I want to know whether we can shrink a database file with the EMPTYFILE option while the database is currently accepting data, or whether we need to do something else first.
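
For reference, the EMPTYFILE form moves all data from the named file into the other files of the same filegroup, and it is an online operation, so the database keeps accepting data while it runs; new allocations to the file being emptied are blocked once it starts. A sketch with hypothetical names:

[code="sql"]
USE MyDatabase;  -- hypothetical database and logical file names
DBCC SHRINKFILE (N'MyDataFile2', EMPTYFILE);
ALTER DATABASE MyDatabase REMOVE FILE MyDataFile2;  -- once empty, the file can be removed
[/code]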

Identifying the SPID

Posted: 28 Jul 2013 10:11 PM PDT

I would like to find the SPID of a long-running job with a query. Can anyone send me the query?
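
A sketch using the execution DMVs; Agent job sessions are recognizable by their program_name:

[code="sql"]
-- Longest-running current requests, with the SPID and executing text.
SELECT TOP (10)
       r.session_id AS spid,
       r.start_time,
       DATEDIFF(second, r.start_time, GETDATE()) AS running_seconds,
       s.program_name,   -- Agent jobs show as 'SQLAgent - TSQL JobStep (...)'
       t.text
FROM sys.dm_exec_requests r
JOIN sys.dm_exec_sessions s ON s.session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE s.is_user_process = 1
ORDER BY r.start_time;
[/code]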

Handy and Simple tool for SSMS 2008 R2???

Posted: 30 Sep 2012 10:04 PM PDT

Hello,

I was wondering why SSMS does not have a standard built-in Quick Math feature (for example: sum, avg, max, min). I have been searching on Google but couldn't find any tool which adds this capability to SSMS. Do you guys know any tool, or tips on how to add such an extension?

Br, Eidjaz

See my attachment for an example.

New Quick Search Addin for SQL Server Management Studio 2008/2012

Posted: 29 Jul 2013 03:05 PM PDT

I have built a free add-in for SQL Server Management Studio 2008/2012: [url=http://sql-hunting-dog.com/]sql-hunting-dog.com[/url]. It is an instant search on tables/procedures etc., with quick access to the most-used actions (select from table, design table, see stored procedure body, etc.). It is a completely free product and very easy to use; you can see it here: [url=http://sql-hunting-dog.com/how/][img]http://sql-hunting-dog.com/img/how/image005.png[/img][/url]

As a .NET and SQL developer I use it myself every day, but I want to share it with the world so it will help other developers in their everyday tasks. Just wanted to share it with the SQL community. Again, any feedback will be appreciated.

Regards, Alex

Question about user and login

Posted: 28 Jul 2013 08:11 PM PDT

Hello,

I created a new login, and a new database user mapped to an existing login. Is the next step to log in to the database with the created database username and the login's password?

Regards
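
For reference, the usual sequence is: create the login at server level, create a user in the database mapped to that login, then connect with the login name and its password. A minimal sketch with placeholder names:

[code="sql"]
CREATE LOGIN MyLogin WITH PASSWORD = 'StrongP@ssw0rd';  -- server level
USE MyDatabase;
CREATE USER MyUser FOR LOGIN MyLogin;  -- database level, mapped to the login
-- You connect as MyLogin; the mapped user controls what you may do in the database.
[/code]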

Incorrect length being returned for FLOAT datatype field

Posted: 29 Jul 2013 01:16 PM PDT

Hello, is somebody able to point out where I am going wrong?

We have a table with a float column which holds the current balance. Initially the data looks like 45.56 or 3.56, but something is updating the balance, making it end up like 45.56000000000087 or 3.56000000000003. I am trying to determine whether this is an application error or a FLOAT error/misuse.

So to troubleshoot, I am trying to identify the long values, but as you can see from the example below, the LEN is not what I would expect.

DROP TABLE #test
CREATE TABLE #test
(
    float1 FLOAT, float2 FLOAT, float3 FLOAT, float4 FLOAT, float5 FLOAT
)
INSERT INTO #test (float1, float2, float3, float4, float5)
VALUES (6.7893001777, 6.78, 6.7893001, 6.78931777, 6.7893178)

SELECT float1, LEN(float1), float2, LEN(float2), float3, LEN(float3),
       float4, LEN(float4), float5, LEN(float5)
FROM #test
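
The catch here is that LEN(float1) measures the string produced by the implicit float-to-varchar conversion, which keeps only about six digits, so the long values are invisible to it. CONVERT with style 2 shows the full 16-digit form instead; a sketch against the #test table above:

[code="sql"]
SELECT float1,
       CAST(float1 AS varchar(30))          AS implicit_text,   -- rounded to ~6 digits
       CONVERT(varchar(30), float1, 2)      AS full_precision,  -- scientific, 16 digits
       LEN(CONVERT(varchar(30), float1, 2)) AS full_len
FROM #test;
[/code]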

A way to shred an XML Deadlock report for easier reading...

Posted: 26 Jun 2013 02:46 AM PDT

I'm not taking credit for this, I just took some code WayneS posted a couple years back here: [url=http://www.sqlservercentral.com/Forums/Topic955603-360-1.aspx]http://www.sqlservercentral.com/Forums/Topic955603-360-1.aspx[/url] and updated/modified it some to work with the XML that SQL 2008 spits out when you query the Extended Events...

[code="sql"]
declare @deadlock xml
set @deadlock = 'put your deadlock graph here'

select
    [PagelockObject]  = @deadlock.value('/deadlock[1]/resource-list[1]/pagelock[1]/@objectname', 'varchar(200)'),
    [DeadlockObject]  = @deadlock.value('/deadlock[1]/resource-list[1]/objectlock[1]/@objectname', 'varchar(200)'),
    [KeyLockObject]   = @deadlock.value('/deadlock[1]/resource-list[1]/keylock[1]/@objectname', 'varchar(200)'),
    [KeyLockIndex]    = @deadlock.value('/deadlock[1]/resource-list[1]/keylock[1]/@indexname', 'varchar(200)'),
    [Victim]          = case when Deadlock.Process.value('@id', 'varchar(50)') = @deadlock.value('/deadlock[1]/victim-list[1]/victimProcess[1]/@id', 'varchar(50)') then 1 else 0 end,
    [ProcessID]       = Deadlock.Process.value('@id', 'varchar(50)'),
    [Procedure]       = Deadlock.Process.value('executionStack[1]/frame[1]/@procname[1]', 'varchar(200)'),
    [LockMode]        = Deadlock.Process.value('@lockMode', 'char(5)'),
    [Code]            = Deadlock.Process.value('executionStack[1]/frame[1]', 'varchar(1000)'),
    --[ClientApp]     = Deadlock.Process.value('@clientapp', 'varchar(100)'),
    [HostName]        = Deadlock.Process.value('@hostname', 'varchar(20)'),
    [LoginName]       = Deadlock.Process.value('@loginname', 'varchar(20)'),
    [TransactionTime] = Deadlock.Process.value('@lasttranstarted', 'datetime'),
    [BatchTime]       = Deadlock.Process.value('@lastbatchstarted', 'datetime'),
    [InputBuffer]     = Deadlock.Process.value('inputbuf[1]', 'varchar(1000)')
from @deadlock.nodes('/deadlock/process-list/process') as Deadlock(Process)
[/code]

Just had to change some little things: SQL no longer wraps the XML in "<deadlock-list>" blocks, and the victim information is now a couple more layers deep. And yes, I'm working with a dev right now trying to troubleshoot deadlocks in an application...

Thanks WayneS!

Index Fill Factor

Posted: 28 Jul 2013 11:25 PM PDT

We have a database that isn't performing as it should. A while back I had a database which grew quite rapidly, and I was advised to change the index fill factor from the default 0 to 80; for mainly static databases 0 is fine, but it should be altered for databases with constant deletes and inserts.

Is this advice correct? Will it help with performance? And are there any adverse effects which could occur as a result of changing it to 80?
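
For reference, a sketch of setting an explicit fill factor during a rebuild (the table name is a placeholder); note that fill factors of 0 and 100 are equivalent, and a lower value trades more pages (and therefore more reads) for fewer page splits on insert/update-heavy tables:

[code="sql"]
ALTER INDEX ALL ON dbo.MyTable
REBUILD WITH (FILLFACTOR = 80);  -- leave 20% free space per leaf page
[/code]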

Email to Mutiple users with different subject i.e Subject variable

Posted: 28 Jul 2013 10:36 PM PDT

Hi,

I'm stuck on a weekly report which I have to send to every user about their access. I want to send it through Database Mail. I have a table like this:

REQID  firstname  lastname  accessenddate  Mail ID
1234   Yunus      Parvez    8-Aug-13       best_yunus@yahoo.com
12345  ABC        xyz       9-Aug-13       best_yunus@live.com

Now I want to send mail to these users with an edited subject line and body. For example:

Subject line: RequestID-ReqID accessenddate name

Body of mail:
Hi <first name>,
We have received a notification informing us that your access is due to expire on <accessenddate>.
Kindly use this as the subject line for the renewal access request: Renew: <Old access - Request#ReqID>

Please help me get this report done quickly. Responses appreciated.

Regards,
Yunus
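
A sketch of one way to do this, assuming a table dbo.AccessReport with the columns shown and an existing Database Mail profile called 'MyProfile' (both names are placeholders): cursor over the rows and call sp_send_dbmail once per user.

[code="sql"]
DECLARE @reqid varchar(20), @fname varchar(50), @enddate datetime, @mail varchar(128),
        @subject nvarchar(255), @body nvarchar(max);

DECLARE c CURSOR LOCAL FAST_FORWARD FOR
    SELECT REQID, firstname, accessenddate, MailID FROM dbo.AccessReport;
OPEN c;
FETCH NEXT FROM c INTO @reqid, @fname, @enddate, @mail;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @subject = 'RequestID-' + @reqid + ' access ends ' + CONVERT(varchar(11), @enddate, 106);
    SET @body = 'Hi ' + @fname + ',' + CHAR(13) + CHAR(10)
              + 'We have received a notification informing us that your access is due to expire on '
              + CONVERT(varchar(11), @enddate, 106) + '.' + CHAR(13) + CHAR(10)
              + 'Kindly use this as the subject line for the renewal request: Renew: <Old access - Request#' + @reqid + '>';

    EXEC msdb.dbo.sp_send_dbmail
         @profile_name = 'MyProfile',
         @recipients   = @mail,
         @subject      = @subject,
         @body         = @body;

    FETCH NEXT FROM c INTO @reqid, @fname, @enddate, @mail;
END
CLOSE c;
DEALLOCATE c;
[/code]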

SSIS Import - Delimited File with Fields > 8k Characters

Posted: 29 Jul 2013 05:46 AM PDT

Hello all,

I'm attempting to import a text-delimited file with a few fields that have greater than 8k characters. I'm running into this error on import: "Text was truncated or one or more characters had no match in the target code page." So, how do I get around this? I've attempted to import those columns as text, varchar(8000), and varchar(max) with no luck so far. I'm hoping that I don't have to split those fields into multiple rows if they are larger than 8k characters.

Dynamic SQL Unpivoting and re - Pivoting seems overly complicated...

Posted: 29 Jul 2013 02:42 AM PDT

Hi All,

It is fairly common where we work to have a questionnaire output in the form of two tables: one that contains a set of questions and another that contains a matching set of answers, along with a UserId and Timestamp. In order to display these nicely, it is useful for us to display the two as one table, where the questions become the column names and the answers become the values, all in one row per questionnaire. I have written some code which does this fine (see the example below, where the output tables correspond to #Questions and #Answers), but it seems to me that this is a lot of code, a lot of dynamic SQL, and generally pretty involved-looking.

Is there a simpler way to do the same thing? Do you have any ideas of things that MIGHT make it all simpler and (maybe) quicker?

Cheers,
Mark

[code="sql"]
CREATE TABLE #Questions
(
    s01q01 VARCHAR(50)
    ,s01q02 VARCHAR(50)
    ,s02q01 VARCHAR(50)
    ,s02q02 VARCHAR(50)
    ,s02q03 VARCHAR(50)
)

CREATE TABLE #Answers
(
    CompletingUser VARCHAR(50)
    ,CompletingTime VARCHAR(50)
    ,s01q01 VARCHAR(50)
    ,s01q02 VARCHAR(50)
    ,s02q01 VARCHAR(50)
    ,s02q02 VARCHAR(50)
    ,s02q03 VARCHAR(50)
)

INSERT #Questions
SELECT 'Question One','Question Two','Question One B','Question Two B','Question Three B'

INSERT #Answers
SELECT 'Darth Vader',GETDATE(),'definitiely 1','There are Two','I started a new section','£22.00',NULL

---------------------------------------------------------------------------------------------

-- Enter an empty string so that this is no longer NULL, which means we can easily add to it
DECLARE @ColumnList VARCHAR(MAX) = ''
DECLARE @SQL VARCHAR(MAX)

CREATE TABLE #AnswerList (Label VARCHAR(10), Wording VARCHAR(MAX))
CREATE TABLE #QuestionList (Label VARCHAR(10), Wording VARCHAR(MAX))

-- The column list here is our input list of columns to pivot about
SELECT @ColumnList = @ColumnList + ',' + COLUMN_NAME
FROM tempdb.INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME LIKE '#Questions%'

-- lose the extraneous comma from our concatenated list of columns
SELECT @ColumnList = SUBSTRING(@ColumnList,2,LEN(@ColumnList))

-- make our dynamic SQL with the column list in it, so any changes will be incorporated into our query
SET @SQL =
-- First make the list of answers, with the labels in one column and the wording in the other
'INSERT #AnswerList
SELECT Label, Answer
FROM (SELECT CompletingUser, CompletingTime,' + @ColumnList + ' FROM #Answers) AS PVT
UNPIVOT (Answer FOR Label IN (' + @ColumnList + ')) AS UnPvt ' +
-- Now make a similar list of questions, with the same values in the label column as the answers (this is always true)
'INSERT #QuestionList
SELECT Label, Question
FROM (SELECT ' + @ColumnList + ' FROM #Questions) AS PVT
UNPIVOT (Question FOR Label IN (' + @ColumnList + ')) AS UnPvt'

EXEC (@SQL)

-- Make a new column list - this is now the output column list, taken as the labels of all the questions
SET @ColumnList = ''

-- Put it all nicely in square brackets
SELECT @ColumnList = @ColumnList + ',[' + Wording + ']'
FROM #QuestionList

-- Lose the comma off the front
SET @ColumnList = SUBSTRING(@ColumnList,2,LEN(@ColumnList))

-- Now get the one-off info from the Answers and all the dynamic questions and answers
SET @SQL =
'SELECT (SELECT CompletingUser FROM #Answers) AS CompletingUser
       ,(SELECT CompletingTime FROM #Answers) AS CompletingTime
       ,*
FROM (SELECT Q.Wording AS Question, A.Wording AS Answer
      FROM #QuestionList AS Q' +
-- Left outer join as some questions may be unanswered, so having NULL values
'     LEFT OUTER JOIN #AnswerList AS A ON Q.Label = A.Label) AS QA ' +
-- it doesn't know that we have only one row, so we have to put a MAX in, but it makes no difference because we DO have only one row
'PIVOT (MAX([Answer]) FOR Question IN (' + @ColumnList + ')) as Pvt'

EXEC (@SQL)

DROP TABLE #Answers
DROP TABLE #Questions
DROP TABLE #AnswerList
DROP TABLE #QuestionList
[/code]

All suggestions most welcome!

Alerts for Database Mirroring

Posted: 15 Jul 2013 01:30 AM PDT

HiI'm trying to get SQL Server to alert me via email when the Primary instance loses connection with Mirrored instance (i.e. mirroring becomes suspended/inactive)So far, the methods I've found require the Service Broker to be enabled on the Mirrored DB, unfortunately, this cannot be enabled in this scenario.Is there any other simple way that I can simply set MSSQL to notify me (via email) when Mirroring isn't active?Thanks.
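
One Service-Broker-free option is simple polling: a SQL Agent job that runs every few minutes, checks sys.database_mirroring, and mails when anything is not synchronized. A sketch (profile and recipient are placeholders):

[code="sql"]
IF EXISTS (SELECT 1 FROM sys.database_mirroring
           WHERE mirroring_guid IS NOT NULL
             AND mirroring_state_desc <> 'SYNCHRONIZED')
BEGIN
    DECLARE @msg nvarchar(max);
    SELECT @msg = COALESCE(@msg + CHAR(13) + CHAR(10), '')
                + DB_NAME(database_id) + ': ' + mirroring_state_desc
    FROM sys.database_mirroring
    WHERE mirroring_guid IS NOT NULL
      AND mirroring_state_desc <> 'SYNCHRONIZED';

    EXEC msdb.dbo.sp_send_dbmail
         @profile_name = 'MyProfile',
         @recipients   = 'dba@example.com',
         @subject      = 'Database mirroring is not synchronized',
         @body         = @msg;
END
[/code]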

Update table values if corresponding values change in another table

Posted: 23 Jul 2013 10:03 PM PDT

Hi,

I am very new to SQL and don't really know how to phrase my question. There are 2 tables linked through a primary key, and if the values in one table change, the corresponding values in the other table should be changed and reflected accordingly. Does someone know what logic I need to apply for this to work? Do I have to create a primary key-foreign key relationship and then create a trigger on the other table, on which the values need to be updated? The values in the table will be changed through a webpage. Any ideas would be appreciated.

Thanks.

How to update data in a table having primary key - foreign key link with another table

Posted: 29 Jul 2013 12:27 AM PDT

Hello,

Could someone please tell me how data can be updated (refreshed) in one table when the data is refreshed in another table and both tables are linked through a primary key - foreign key relationship? I just need to know what logic to apply to update the values in one table when the same value has been refreshed in the other.

Thanks.
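
If only the key value itself needs to propagate, a cascading foreign key does it without a trigger. A minimal sketch:

[code="sql"]
CREATE TABLE dbo.Parent
(
    ParentCode varchar(10) NOT NULL PRIMARY KEY
);

CREATE TABLE dbo.Child
(
    ChildID    int IDENTITY(1,1) PRIMARY KEY,
    ParentCode varchar(10) NOT NULL
        REFERENCES dbo.Parent (ParentCode) ON UPDATE CASCADE
);

-- Changing the parent key updates every matching child row automatically:
UPDATE dbo.Parent SET ParentCode = 'NEW' WHERE ParentCode = 'OLD';
[/code]

Non-key columns are not covered by the cascade; those need a trigger or, better, a join at query time.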

How can I avoid Eager Spool on my Update Statement which consumes 56% of total cost

Posted: 28 Jul 2013 11:58 PM PDT

Hi Guys,

I found that my update query uses an eager spool, and the (implicit) temp table costs about 56% of the update statement. Kindly advise how to overcome this. Here is the query:

--table structure:
create table mytable (tableid nchar(25), element1 int, element2 smallint, col4 nvarchar(255));

Merge [dbo].[mytable] as tr  -- has about 10 million records
using #mytable as tmp        -- carries the data to be updated, about 2500 rows
on (tr.tableid = tmp.tableid and tr.element1 = tmp.element1 and tr.element2 = tmp.element2)
when matched then
    update set tr.tableid = tmp.tableid, tr.element1 = tmp.element1, tr.element2 = tmp.element2,
               tr.col4 = tmp.col4, tr.modifiedby = @userid, tr.modifieddate = getdate()
when not matched then
    insert (tableid, element1, element2, col4, createdby, createddate)
    values (tmp.tableid, tmp.element1, tmp.element2, tmp.col4, @userid, getdate());

The attached showplanxml_mytable.txt file may help. Looking forward to hearing from you soon, guys.

Thanks,
Prabhu

Using like over charindex to search for pattern

Posted: 28 Jul 2013 09:19 PM PDT

We were using CHARINDEX to search for a string, as shown below:

[code="sql"]
SELECT *
FROM tablename
WHERE (Charindex('47a%$.abc', CAST(columnname AS VARCHAR(MAX))) > 0)
[/code]

I am planning to replace CHARINDEX with LIKE, as I only need to know whether the pattern exists, not its position. So I want to know whether there is any disadvantage in using LIKE over CHARINDEX for my case. I use CHARINDEX in several places, so it needs to work for all SQL data types and all characters, including special characters such as % (as LIKE already uses %pattern%). So please tell me: is there any disadvantage in using LIKE over CHARINDEX?
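
One behavioural difference to watch: CHARINDEX treats % and _ literally, while LIKE treats them as wildcards, so patterns containing them must be escaped. A sketch against the query above:

[code="sql"]
SELECT *
FROM tablename
WHERE CAST(columnname AS VARCHAR(MAX)) LIKE '%47a!%$.abc%' ESCAPE '!'
-- '!%' is a literal percent sign; the outer %...% keep the
-- "found anywhere" behaviour of CHARINDEX(...) > 0.
[/code]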

Email Stop - SQL 2k8?

Posted: 28 Jul 2013 08:07 PM PDT

Hi,

Email alerts stopped because of the error message below; what could the reason be?

The mail could not be sent to the recipients because of the mail server failure. (Sending Mail using Account 1 (2013-07-27T11:42:13). Exception Message: Cannot send mails to mail server. (Mailbox unavailable. The server response was: 4.7.1 Client host rejected: cannot find your hostname

Monday, July 29, 2013

[how to] sql server 2008 R2 Cannot connect to my local server


sql server 2008 R2 Cannot connect to my local server

Posted: 29 Jul 2013 09:23 PM PDT

When I connect to my local server, I get this message:

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 2)

This is my login information:

Server type:    Database Engine
Server name:    .
Authentication: SQL Server Authentication
Login:          sa
Password:       123

How can I fix it? Thanks in advance :)

MySQL slave doesn't reconnect to master after master is rebooted

Posted: 29 Jul 2013 04:38 PM PDT

I've got a new MySQL replication setup with one master and one slave.

In all of the following scenarios, the slave automatically reconnects to the master and resumes replication where it left off:

  • mysql service is restarted on the slave host
  • Slave host is rebooted
  • mysql service is restarted on the master host (reconnection happens after the default 60-second slave I/O reconnect timeout)

However, if I try rebooting the master host, the mysql service on the slave host neither detects that the master went down nor resumes replication when the master becomes available again. In this scenario:

  • I see no entries in the mysql error log on the slave host
  • Running show slave hosts; on master (once it comes back up) shows no hosts connected

UPDATE: Looking more closely at the mysql error logs on the master, I suspect that on reboot the master is not completing a clean shutdown. When stopping the service I see these log entries:

2013-07-29 17:43:45 5281 [Note] /usr/sbin/mysqld: Normal shutdown
2013-07-29 17:43:45 5281 [Note] Giving 1 client threads a chance to die gracefully
[snip]
2013-07-29 17:43:48 5281 [Note] InnoDB: Shutdown completed; log sequence number 146430781066
[snip]
2013-07-29 17:43:48 5281 [Note] Shutting down plugin 'binlog'
2013-07-29 17:43:48 5281 [Note] /usr/sbin/mysqld: Shutdown complete

130729 17:43:48 mysqld_safe mysqld from pid file /opt/mysql/run/mysqld.pid ended

...while after rebooting, the mysql error log on master ends with:

2013-07-29 17:46:58 6441 [Note] InnoDB: Starting shutdown...  

...and is missing the Shutdown complete message.

Importing a large column (8000 chars) from Excel using SSIS

Posted: 29 Jul 2013 02:56 PM PDT

I'm trying to set up a regular import of an excel spreadsheet that we get from a vendor. I'm using SQL 2008 R2 SSIS to import it into a table. The problem connection manager is an OLE DB connection to the spreadsheet. The spreadsheet is Excel 2005. The database connection manager is using Native OLE DB\SQL Native Client.

The problem I'm getting is that the OLE DB Source keeps setting a couple of the Excel columns to DT_WSTR with a 255 length. I have 2 columns, however, that are 4000 and 8000 characters respectively. I've gone into the Advanced Editor/Input and Output Properties tab for the OLE DB Source and changed the DataType/Length in the External Columns list and the Output Columns list. Unfortunately, when I leave the Advanced Editor an error appears in the error list.

Validation error. Data Flow Task: Data Flow Task: The output column "ColumnName" (226) on the error output has properties that do not match the properties of its corresponding data source column.

When I click on the Source again I get an option to "fix" the errors automatically. I then select "yes" and "ok". The error is now gone but when I go back to the External Columns the datatype/length settings are back to the original DT_WSTR/255.

I could manually change the spreadsheet into a delimited file to avoid the problem but would rather not add a manual step into the process. Does anyone know of a way to get an Excel source to allow for a long column?

How to avoid timeouts during big DML operations

Posted: 29 Jul 2013 02:07 PM PDT

I have a big insert script I need to run. It's about 55,000 records and 160 columns. The script is already created and I can't create it again.

The problem I have is that this runs for about 4 hours or so, and during that time the system that uses this database gets really slow and times out a lot.

I would not care if my INSERT is slower but it shouldn't impact other users.

I was thinking of doing batches of, say, 500 rows with WAITFOR in between, but was wondering if there is a better option for doing this.
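
A sketch of that batching pattern, assuming the rows can first be loaded into a staging table (all names are hypothetical): move a fixed number of rows per iteration and pause between batches so other sessions can get in.

DECLARE @batch int = 500, @rows int = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.TargetTable (id, col1 /* , ... */)
    SELECT TOP (@batch) s.id, s.col1 /* , ... */
    FROM dbo.Staging s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable t WHERE t.id = s.id);

    SET @rows = @@ROWCOUNT;
    WAITFOR DELAY '00:00:02';   -- give other work room between batches
END

Each small transaction releases its locks quickly, which is what stops the long blocking that causes the timeouts.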

Government border control database solution [on hold]

Posted: 29 Jul 2013 02:02 PM PDT

I am researching border control IT solutions. Do you recommend any big database administration and security companies?

I'd really appreciate it.

Tablix group totals not displaying consistently

Posted: 29 Jul 2013 08:47 PM PDT

I admit I'm pretty new to Reporting Services, but it has been fairly elegant and straightforward up until now.

I have a report with multiple parameters that returns a dataset that is then grouped into three groups. The row groups for the report are laid out as:

Group 1
  Group 2
    Group 3
      (Details)

To each group I've added a totals row that subtotal two columns for the group.

On running the report with a relatively small result set everything appears to work fine, but as the result set gets larger the totals columns for various groups stop appearing. A set of group totals may show up at the end of the first grouping and then not again until the end of the report.

The behavior is consistent for a given query (i.e. the vanishing totals always appear, or not appear, in the same place). I've tried reworking the query the report is based on, I've gone so far as to rebuild the report in a new project, and the issue doesn't go away.

I haven't been able to find any documentation on this issue and I'm out of ideas on how to fix.

Any suggestions (even if it's RTFM)?

Need to understand parallel query execution error

Posted: 29 Jul 2013 06:02 PM PDT

Today we experienced a degradation in performance on our production SQL Server. During the time this occurred we logged several "The query processor could not start the necessary thread resources for parallel query execution" errors. The reading that I've done suggests that this has to do with how many CPUs to use when executing a complex query. However, when I checked during the outage, our CPU utilization was only at 7%. Is there something else this could be referring to that I haven't come across yet? Is this a likely culprit of the performance degradation, or am I chasing a red herring?

My sp_configure values for this are as follows:

name                            minimum  maximum  config_value  run_value
cost threshold for parallelism  0        32767    5             5

how to check memory utilization

Posted: 29 Jul 2013 12:02 PM PDT

Is there a way to check how much of the memory reserved by SQL Server is actually being used by SQL Server?

Is there a way to look at this information in the past, i.e. how much memory was being used an hour ago?
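
For the current numbers, the memory DMVs answer this directly (SQL Server 2008 and later); there is no built-in history, but scheduling the same query into a logging table gives you one. A sketch:

SELECT physical_memory_in_use_kb / 1024 AS physical_mb_in_use,
       memory_utilization_percentage
FROM sys.dm_os_process_memory;

SELECT counter_name, cntr_value / 1024 AS mb
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');

Total vs. Target shows how much of its configured target SQL Server has actually committed.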

Cannot rename the table because it is published for replication

Posted: 29 Jul 2013 10:19 AM PDT

I need to rename some tables in my DB - SQL Server 2005, with this:

EXEC sp_rename 'dbo.TableName', 'Tmp_TableName', 'OBJECT'  

but I can't do this - I get this exception -

Cannot rename the table because it is published for replication

This DB doesn't have replication; it was removed 5 years ago (we had just tried replication out and removed it after a week). So I suppose there must be a flag set somewhere.

Please help me remove this flag somehow.
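
The usual cleanup for a leftover replication flag is sp_removedbreplication, which drops any remaining replication objects and clears the marker; resetting the publish option targets the flag alone. A sketch (run against your database name):

EXEC master.dbo.sp_removedbreplication N'MyDatabase';

EXEC master.dbo.sp_replicationdboption
     @dbname = N'MyDatabase', @optname = N'publish', @value = N'false';

If the second call complains that the database is not published, the first call alone usually suffices.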

Is the Service Broker good for publishing data between servers?

Posted: 29 Jul 2013 09:43 AM PDT

I'm trying to find a better solution when publishing data to a remote table.

So we have Server A and Server B. Server A holds a master table which should be replicated to Server B.

Currently, we have this pretty chaotic stored procedure that runs every minute or so. It locates any changes and inserts them into a temporary table on Server B. Once loaded, it inserts new records or updates existing records from this temp table (the table is hit constantly and we want to limit any form of lock).

We don't have a DBA currently so I'm trying my best (I'm just a web developer) to figure out better solutions that could scale in the future.

Note: We have a couple dozen stored procedures that are like this.

always on availability group with different speed disks

Posted: 29 Jul 2013 09:49 AM PDT

I'm looking to set up SQL Server 2012 installation with an Always On Availability Group, where the 'passive' replica will be hosted at another site on the WAN and using synchronous data commit - the idea being that we will have a hot standby with no loss of data in the event of a failure at our primary site.

One potential problem that I foresee is that the secondary site has slower storage than our primary site. I don't care about that in the event of a failure; we can live with slow speeds for a period of time until the primary site is restored. My worry is that, because we are using synchronous commit, the slower disk speed at the secondary site will affect performance at the primary site during normal operation.

Is this a valid concern, or is it likely that the slower speed will be offset by, for example, the disk not having much read activity in comparison to the primary site?

Cannot remove unused filegroups

Posted: 29 Jul 2013 09:52 AM PDT

I wanted to remove some unused filegroups/files in a SQL Server Database but am stuck because SQL Server thinks the filegroups/files are still in use.

Some background:

  • We had some partioned tables that we converted back to non-partitioned ones
  • All partition-functions and -schemes were deleted
  • I queried the dm views for unused filegroups like this:

    SELECT *
    FROM sys.filegroups fg
    LEFT OUTER JOIN sysfilegroups sfg ON fg.name = sfg.groupname
    LEFT OUTER JOIN sysfiles f ON sfg.groupid = f.groupid
    LEFT OUTER JOIN sys.indexes i ON fg.data_space_id = i.data_space_id
    WHERE i.object_id IS NULL

    This got me a list of filegroups/files I tried to remove from the database. But some of them could not be removed. Example error message:

    The filegroup 'FG_XXXX' cannot be removed because it is not empty.  
  • I was not able to get the connection to tables/indexes for these filegroups via sys.data_spaces and sys.indexes:

    SELECT *
    FROM sys.data_spaces ds
    INNER JOIN sys.indexes i ON ds.data_space_id = i.data_space_id
    WHERE ds.name = 'FG_XXXX'
  • I am aware that a filegroup cannot be removed if it is referenced in some partition scheme.
    But that cannot be the reason here, as I deleted all partition schemes/functions in the DB.

Any hints on what I could do to get rid of the filegroups?

Full text query slow on first run

Posted: 29 Jul 2013 05:58 PM PDT

I have a SQL Server database which is 65 GB in size with 6 million rows. We store essays in tables; the average size of an essay is 450 words. When I run a query for the first time, it takes 3-10 seconds to show the results. On running the same query again, it completes in less than 1 second (very fast).

Please let me know how to optimize this so queries return in less than 1 second every time.

Query example:

SELECT top 1 * FROM tblContent WHERE CONTAINS(pagecontent,'"increasing the likelihood"')  

Here is detail of server(VPS):

  • OS: Windows Server 2012
  • SQL Server 2012 Enterprise (Trial)
  • Processor: set to 0
  • Memory: set to 0
  • RAM: 4GB
  • Processor: QUAD 4Ghz

Splitting different parts of a string

Posted: 29 Jul 2013 08:00 AM PDT

I have a SQL Server database with a column containing backslash (\)-delimited strings. An example of this string would be:

category_path
=============
RootCategory\Middle Category\Child Category\Child of child category
RootCategory\Middle Category\Other Child Category\

There are a lot of category paths like this. I would like to parse all of them into an other table:

category_name            parent_path
====================================
RootCategory             null
Middle Category          RootCategory
Child Category           RootCategory\Middle Category
Child of child category  RootCategory\Middle Category\Child Category
Other Child Category     RootCategory\Middle Category

I have left out the duplicates here, but the result could contain duplicates (I will need to cursor over the rows of this table later, and I can call distinct there).

There are a lot of examples around the net where a string is simply split (here, for example). I was thinking about writing a query where I split the string, cursor over the results and accumulate the parent_path in each step, but that seems to be very suboptimal.

Is there a solution where I wouldn't have to declare so many cursors for a simple string?
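
A cursor-free sketch: a recursive CTE that peels one path segment off per step, carrying the accumulated parent path along (assuming a source table #paths with a category_path column as above):

WITH split AS
(
    SELECT CAST(NULL AS varchar(400)) AS parent_path,
           CAST(LEFT(category_path, CHARINDEX('\', category_path + '\') - 1) AS varchar(400)) AS category_name,
           CAST(STUFF(category_path, 1, CHARINDEX('\', category_path + '\'), '') AS varchar(400)) AS rest
    FROM #paths
    UNION ALL
    SELECT CAST(COALESCE(parent_path + '\', '') + category_name AS varchar(400)),
           CAST(LEFT(rest, CHARINDEX('\', rest + '\') - 1) AS varchar(400)),
           CAST(STUFF(rest, 1, CHARINDEX('\', rest + '\'), '') AS varchar(400))
    FROM split
    WHERE rest <> ''
)
SELECT DISTINCT category_name, parent_path
FROM split
WHERE category_name <> '';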

Mysqldump option for crash recovery? [on hold]

Posted: 29 Jul 2013 07:52 AM PDT

Question

Is there any command in mysqldump that can be used for crash recovery?

Background

I got this question in an interview, where the interviewer said that there is an option in mysqldump which will help us rebuild the system after a crash.

I went through the documentation but could not find any such option. Please let me know if such an option exists. I did not pass the interview.

A Tool to Manage Tracking Query Performance Improvements [duplicate]

Posted: 29 Jul 2013 10:20 AM PDT


I'm building a BI system on SQL Server 2012. I have a test set of data with some tens of millions of rows.

Currently many functions and stored procedures are unacceptably slow. I'm looking for a tool I can use to automatically compare query times from previous executions as I run and re-run these queries and make performance improvements.

I want to tinker, measure, compare execution times, repeat.

For example, I want to be able to see a report showing what query times were when run on 7/31/2013. Then after I make some changes, I want to kick off the tool again on say, 8/1/2013. I'd like to see side-by-side the difference in query times between all of the historical executions.

I know I can track query times manually with SET STATISTICS TIME ON. I've also seen stored procedures others have written to manually track the performance of one or two queries. But the process is very manual.

I'm looking for a tool into which I can enter the 30 functions and stored procedures I want to test (or give it a trace), and then it will do the work of kicking off the queries 5 times (or replaying the trace), record the average time each query took to execute, and compare those times to previous executions.

Regarding Similar Questions

I've looked at:

  1. Testing stored procedure scalability - My question is not about stress testing. (Although the RML tools mentioned are VERY close to what I'm looking for. If all else fails, I will use the RML tools.)
  2. How to profile stored procedures - I know how to profile a stored procedure and use tools like SQL Profiler. Profiler won't give me comparisons to previous executions.

The RML tools are very close to what I want, as one can load traces into multiple DBs and view the results side by side.

But is there any tool that automates: replaying sql statements and viewing the performance diff of different iterations side-by-side? (The major limitation with the RML tools is that the trace from each iteration has to be loaded into a different database and the side-by-side comparison is only possible by opening two instances of Reporter and switching back and forth between the two of them. Ten different databases and ten different windows for ten different executions seems unwieldy...)

In Microsoft SQL Server 2008, syntax generates the error "The Parallel Data Warehouse (PDW) features are not enabled."

Posted: 29 Jul 2013 05:45 PM PDT

I have the following virtual column generated from an aggregate over a sorted partition,

MIN(picture_id) OVER ( PARTITION BY [360_set] ORDER BY picture_id ASC )  

However, when I execute that, I get the following.

Msg 11305, Level 15, State 10, Line 12  The Parallel Data Warehouse (PDW) features are not enabled.  

This is where it gets interesting though, without a sort order on the partition, it works:

MIN(picture_id) OVER ( PARTITION BY [360_set] )  

And, further, ROW_NUMBER(), a window function (not an aggregate function), works with an explicit order on the partition.

ROW_NUMBER() OVER ( PARTITION BY [360_set] ORDER BY picture_id ASC )  

How come the desired statement doesn't work? Where is this documented? Version information was requested; this is what I see in Help → About.

Microsoft SQL Server Management Studio     10.0.5512.0
Microsoft Analysis Services Client Tools   10.0.5500.0
Microsoft Data Access Components (MDAC)    6.1.7601.17514
Microsoft MSXML                            3.0 6.0
Microsoft Internet Explorer                9.10.9200.16635
Microsoft .NET Framework                   2.0.50727.5472
Operating System                           6.1.7601

The result from SELECT @@VERSION is Microsoft SQL Server 2008 (SP3) - 10.0.5512.0 (X64) Aug 22 2012 19:25:47 Copyright (c) 1988-2008 Microsoft Corporation Standard Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (VM)

SQL server level permissions

Posted: 29 Jul 2013 08:32 AM PDT

I am aware of the Microsoft-provided code to transfer logins between SQL Server instances; however, this only transfers the account and the password.

What if a particular account has various roles and permissions assigned to it at the server level? Is there an equivalent piece of code to script those permissions also?

Thanks

How can I execute a stored procedure whose name is in a variable?

Posted: 29 Jul 2013 01:51 PM PDT

Given a variable that contains a stored procedure name:

DECLARE @stored_procedure_name varchar(512);
SET @stored_procedure_name = 'some_stored_procedure_name';

How can I execute the stored procedure (without passing in any arguments)?
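
Two forms that work; the first is the documented EXECUTE @module_name_var syntax, the second treats the variable's contents as a dynamic batch:

DECLARE @stored_procedure_name sysname = N'some_stored_procedure_name';

EXEC @stored_procedure_name;      -- runs the procedure named in the variable (no parentheses)
EXEC (@stored_procedure_name);    -- executes the string as a batch, same net effect here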

When creating remote BLOB store is "RBSFilestreamFile" always the name of the file to be added to the FILEGROUP?

Posted: 29 Jul 2013 05:44 PM PDT

When creating a remote BLOB store in SQL Server (2008 R2) is "RBSFilestreamFile" always the name of the file when adding it to the FILEGROUP like in this query (this is the name I've seen used in every example I've found online, but I need to know for sure)?

ADD FILE (name = RBSFilestreamFile, filename = 'c:\Blobstore')
    TO FILEGROUP RBSFilestreamProvider

I'm asking because I'm working on an application for restoring SharePoint content databases and need to know if I can hardcode this string into the application.

I know you can create file groups and files with any name you want, but specifically for setting up RBS with SharePoint using SQL 2008's built in FILESTREAM provider are these names an expected convention? i.e. Will it work if I name my FILEGROUP and/or FILE something else?

Restore exceeds licensed limit of 10240 MB per database. All I really need is the DB structure

Posted: 29 Jul 2013 12:10 PM PDT

I received a .bak file from a customer that I need to restore, but the restore exceeds my SQL Server 2008 R2 edition's licensed limit of 10240 MB per database. All I really need is the DB structure. Is there a way to restore anyway, or to restore just the structure?

Database migration, how to

Posted: 29 Jul 2013 12:03 PM PDT

I have two databases, DB1 and DB2. Both databases contain the two tables mo_sms and mt_sms. This is the structure of those tables:

CREATE TABLE IF NOT EXISTS `mo_sms` (
  `id_MO` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `sms_proceso` char(1) NOT NULL COMMENT 's=SMS c=CHAT d=Descargas',
  `msisdn` varchar(20) NOT NULL,
  `texto_sms` varchar(160) DEFAULT NULL,
  `brand_id` int(10) unsigned NOT NULL,
  `fecha_sms` datetime NOT NULL,
  `comando_id` int(10) unsigned NOT NULL DEFAULT '0',
  `alias_desc` varchar(25) DEFAULT NULL,
  `shortcode_id` int(10) unsigned NOT NULL,
  `precio` float(11,2) unsigned DEFAULT '0.00' COMMENT 'Precio del MO',
  `id_user` int(10) unsigned NOT NULL,
  `state` char(1) NOT NULL DEFAULT '0' COMMENT '0=Por procesar 1=Procesado',
  `tipo_sms` tinyint(4) NOT NULL DEFAULT '0' COMMENT '0=Normal, <>0 dependera del tipopredeterminado',
  `anio_sms` smallint(4) unsigned NOT NULL DEFAULT '0',
  `mes_sms` smallint(2) unsigned zerofill NOT NULL DEFAULT '00',
  PRIMARY KEY (`id_MO`),
  KEY `ix_carrier` (`brand_id`),
  KEY `ix_fecha_sms` (`fecha_sms`),
  KEY `ix_fecha_carrier_keyword` (`fecha_sms`,`brand_id`,`alias_desc`),
  KEY `ix_msisdn` (`msisdn`),
  KEY `ix_sms_proceso` (`sms_proceso`),
  KEY `ix_sms_proceso_state` (`sms_proceso`,`state`),
  KEY `ix_id_user` (`id_user`),
  KEY `ix_fecha_sms_user` (`fecha_sms`,`id_user`),
  KEY `ix_varios` (`anio_sms`,`mes_sms`,`comando_id`,`shortcode_id`,`brand_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='Tabla de MO' AUTO_INCREMENT=82972;

CREATE TABLE IF NOT EXISTS `mt_sms` (
  `id_MT` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `sms_proceso` char(1) NOT NULL DEFAULT 'c' COMMENT 's=SMS c=CHAT d=Descargas',
  `msisdn` varchar(20) NOT NULL,
  `texto_sms` varchar(160) DEFAULT NULL,
  `brand_id` int(10) unsigned NOT NULL,
  `fecha_sms` datetime NOT NULL,
  `comando_id` int(10) unsigned NOT NULL DEFAULT '0',
  `alias_desc` varchar(25) DEFAULT NULL,
  `shortcode_id` int(10) unsigned NOT NULL,
  `id_user` int(10) unsigned NOT NULL,
  `tipo_sms` tinyint(4) unsigned NOT NULL DEFAULT '0' COMMENT '0=Normal, <>0 dependera del tipopredeterminado',
  `id_MO` int(11) unsigned NOT NULL,
  `state` char(1) DEFAULT '0' COMMENT '0=Por Procesar 1=Procesado',
  `anio_sms` smallint(4) unsigned NOT NULL DEFAULT '0',
  `mes_sms` smallint(2) unsigned zerofill NOT NULL DEFAULT '00',
  PRIMARY KEY (`id_MT`),
  KEY `ix_carrier` (`brand_id`),
  KEY `ix_fecha_sms` (`fecha_sms`),
  KEY `ix_fecha_carrier_keyword` (`fecha_sms`,`brand_id`,`alias_desc`),
  KEY `ix_msisdn` (`msisdn`),
  KEY `ix_sms_proceso` (`sms_proceso`),
  KEY `ix_id_user` (`id_user`),
  KEY `ix_fecha_sms_user` (`fecha_sms`,`id_user`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='Tabla de MT' AUTO_INCREMENT=93853;

I have some values in DB2 that should be migrated (inserted) into DB1. My problem is that I don't know how to get the records from the DB2.mo_sms table, together with their related rows in DB2.mt_sms, and then insert them into DB1.mo_sms and DB1.mt_sms. For example, I'll get all the records from mo_sms with fecha_sms LIKE '%2013-04-19%'; but if I then insert those records into DB1.mo_sms, new IDs will be generated and the integrity will be lost. Should I do this programmatically, or is there a way to do it using just SQL queries? I'm using MySQL as the DBMS.

SQL Server Replication: "ALTER TABLE ALTER COLUMN" is not propagated to subscribers

Posted: 29 Jul 2013 08:44 AM PDT

We are running SQL Server 2008 R2 SP1 as publisher & distributor, and SQL Server 2005 SP3 as subscriber. The replication of schema changes is activated, and the replication has been running for years, including frequent schema changes (new column, new constraints, etc).

The following instruction was sent on the publisher:

use myDatabase
alter table Tbl_TypeLignePaye
      alter column natureTypeLignePaye nvarchar(12)
go

where field natureTypeLignePaye was originally nvarchar(3) null

The query ran without errors on the main database. The result is the following:

  1. The field natureTypeLignePaye still appears as nvarchar(3) in the object browser
  2. The column properties screen indicates a nvarchar type with a length of 12
  3. The change was not propagated to the subscribers

Any idea on what is going on with this database?

(Screenshots: the publisher's object browser window vs. the property window showing incoherent data; the field type and length.)

Is it possible to have extra tables in a Slave with MySQL Replication

Posted: 29 Jul 2013 02:44 PM PDT

As my title mentions, I have a master and a slave database.

The master is for operational data and my slave is mainly for reporting.

The issue is that I need to create extra tables for reporting that can't be on the master, but with the way my replication is currently set up (the simplest one mentioned by the official docs), this breaks replication.

How could I add tables on the slave without the master caring about them? Is it even possible?
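
A sketch for the slave's my.cnf, assuming the reporting-only tables can live in their own schema (here called reporting, a placeholder name): replication is told to ignore that schema entirely, so the master never needs to know about it.

[mysqld]
replicate-wild-ignore-table = reporting.%

After a restart of the slave, tables created under reporting exist only there and are never touched by replicated events.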

Replication issue - CREATE SELECT alternative?

Posted: 29 Jul 2013 04:44 PM PDT

I have a MySQL 5.1 slave for our BI team.

They need to run some CREATE ... SELECT statements with big SELECT queries (several million rows).

As CREATE ... SELECT is DDL, if replication attempts to update rows in the same tables the SELECT statement is reading, replication is blocked until the CREATE ... SELECT releases them.

Do you know a good non-blocking alternative to those CREATE ... SELECT statements?

I thought of a SELECT ... INTO OUTFILE followed by LOAD DATA INFILE, but that would fill up our disks, as BI guys like to do... :)

Max.

Which text index should I create for the XOOPS engine to achieve better search results?

Posted: 29 Jul 2013 10:44 AM PDT

In one of our projects we use the XOOPS engine to manage content. In the MySQL slow query log, most of the queries are like the following:

SELECT p.uid, f.forum_id, p.topic_id, p.poster_name, p.post_time, f.forum_name, p.post_id, p.subject
FROM xps33_bb_posts p, xps33_bb_posts_text pt, xps33_bb_forums f
WHERE p.post_id = pt.post_id AND p.approved = 1 AND p.forum_id = f.forum_id
  AND f.forum_id IN (1,4,61,7,9,17,20,45,35,44,38,39,43,53,54,55,56,57,58,60,14,29,40,26,18,41,33,24,32,59,25)
  AND ((p.subject LIKE '%rivi%' OR pt.post_text LIKE '%orvi%') AND (p.subject LIKE '%care%' OR pt.post_text LIKE '%gor%'))
ORDER BY p.post_time DESC LIMIT 0, 5;

I can't change the queries, as that would involve changing the engine, which is not an option at the moment. But I can help the engine search faster. As I understand it, since the tables use the MyISAM engine I can create full-text indexes, which should make searching faster; am I right?

So, in general, which indexes should I create to stop the following queries from running for a long time?

+----+-------------+-------+--------+---------------------+---------+---------+--------------------+--------+-----------------------------+
| id | select_type | table | type   | possible_keys       | key     | key_len | ref                | rows   | Extra                       |
+----+-------------+-------+--------+---------------------+---------+---------+--------------------+--------+-----------------------------+
|  1 | SIMPLE      | p     | ALL    | PRIMARY,forumid_uid | NULL    | NULL    | NULL               | 144090 | Using where; Using filesort |
|  1 | SIMPLE      | f     | eq_ref | PRIMARY             | PRIMARY | 4       | diginew.p.forum_id |      1 | Using where                 |
|  1 | SIMPLE      | pt    | eq_ref | PRIMARY             | PRIMARY | 4       | diginew.p.post_id  |      1 | Using where                 |
+----+-------------+-------+--------+---------------------+---------+---------+--------------------+--------+-----------------------------+
3 rows in set (0.00 sec)

How to drop a DB2 instance when the instance owner was removed

Posted: 29 Jul 2013 11:44 AM PDT

This is a real sticky situation. I was handed a machine (running AIX 7.1), and my first task was to re-install the DB2 server on it. But someone before me had conveniently removed an instance owner account, and probably recreated it. Now, the problem is this:

1) When I try to uninstall DB2, it says the instance is active and has to be dropped first.

2) When I try to drop this instance, DB2 says there is no such instance.

I am quite new to DB2 administration and not sure how to proceed here. Any help is appreciated.

Thanks

Ensure correct username when using pg_restore

Posted: 29 Jul 2013 09:44 AM PDT

I have just installed Postgres 9.1.6 on a local Ubuntu server. Now I'm trying to restore a database dump from a database on Heroku. The local database is set up like this:

sudo -u postgres psql -c "create user app_user with password 'pass';"
sudo -u postgres psql -c "create database app_production owner app_user;"

Now, when I try to restore the dump I use the following command:

pg_restore --verbose --schema=public --no-acl --no-owner --jobs=8 --exit-on-error --username=app_user --dbname=app_production /tmp/app_production.dump  

Now in psql, using \l to see ownership, I get the following:

                                 List of databases
      Name       |   Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------------+-----------+----------+-------------+-------------+-----------------------
 app_production  | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres        | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0       | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                 |           |          |             |             | postgres=CTc/postgres
 template1       | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                 |           |          |             |             | postgres=CTc/postgres

As you can see, the ownership of the app_production database has now changed from app_user to postgres. I would have expected the owner of the app_production database to still be app_user, so what am I doing wrong?

BTW, the --schema=public flag was added because I was getting a weird error:

"Could not execute query: ERROR: must be owner of extension plpgsql"

Another thing is that the owner of the dump is the user the database had on Heroku, which would be something like 'jebf473b73bv73v749b7'.

Splitting Tables in MySQL. Good practice?

Posted: 29 Jul 2013 06:25 PM PDT

I have started working on an existing project and the previous developer had split up a table into 10 separate tables with identical schemas but different data.

The tables look like this:

[tableName_0]
[tableName_1]
[tableName_2]
[tableName_3]
[tableName_4]
[tableName_5]
[tableName_6]
[tableName_7]
[tableName_8]
[tableName_9]

The primary key is an integer id field. The application uses a hash algorithm (id mod 10) to know which table to access when doing lookups. For example, id = 10 would map to [tableName_0].

Combined, the tables have probably 100,000 rows and the growth rate is relatively low.

So, my question is whether this is a viable solution, or even whether it's good practice in any situation. My instinct is to push to have them combined, as that will make things easier as far as UNIONs etc. go. The main downside is changing all the application code, and whether it is even worth it in the long run.

How to execute SQL against all DBs on a Server

Posted: 29 Jul 2013 11:46 AM PDT

I have some standard SQL that I run against multiple databases on a single server to help me diagnose problems:

select
    so.name,
    so.type,
    MAX(case when sc.text like '%remote%' then '' ELSE 'N' END) AS Relevant,
    @@ServerName as Server,
    DB_Name() as DBName
from
    sysobjects so with (nolock)
    join syscomments sc with (nolock) on so.id = sc.id
where (sc.text like '%emote%')
group by so.name, so.type
order by so.type, so.name

How can I execute this against all databases on a single server? (besides manually connecting to one at a time and executing)
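
One common approach is the undocumented but long-standing sp_MSforeachdb, which substitutes each database name for the ? placeholder:

EXEC sp_MSforeachdb N'
USE [?];
SELECT  so.name, so.type,
        MAX(case when sc.text like ''%remote%'' then '''' else ''N'' end) AS Relevant,
        @@ServerName AS Server, DB_NAME() AS DBName
FROM    sysobjects so with (nolock)
        JOIN syscomments sc with (nolock) ON so.id = sc.id
WHERE   sc.text like ''%emote%''
GROUP BY so.name, so.type
ORDER BY so.type, so.name;';

Being undocumented, it is best treated as a convenience for ad-hoc diagnostics rather than production code.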

[SQL Server] % of participants that are retained


% of participants that are retained

Posted: 29 Jul 2013 09:19 AM PDT

I am a school teacher very new to MS SQL Server. Everyone has been suggesting I try this site out, so here goes! I am trying to write queries to test different types of outcome measures for participation in an academic program. There are several different ways to calculate this outcome measurement that I would like to try. The outcome I am trying to calculate is: what is the % of participants that are retained during six months of the program? I am testing different ways to define "participant" and different time ranges. There are 4 queries I am trying to produce. Unfortunately, I have to use four different tables: [b]Attendance, Status, Deenrolled, Inactive[/b]. I have included sample data from each below.

[b]Queries[/b]
1. A participant is defined as everyone that attended a class at least twice a week for 6 months (181 days total), starting July 1st 2012 and ending June 30th 2013, i.e. the length of the fiscal year. If a participant is deenrolled or inactive they are dropped.
2. A participant is defined as everyone that attended a class at least twice a week for 6 months (181 days total) starting January 1st 2013. If a participant is deenrolled or becomes inactive they are dropped.
3. A participant is defined as everyone that attended a class at least twice a week from January 1st 2013 until today.
4. A participant is defined by a student's enrollment start date until they are deenrolled or become inactive.

Participants (numerator) / all students who were served (denominator). The 4 query outputs I am looking for are different versions of this example:

[b]Participants[/b]  [b]Served[/b]  [b]Percent_Served[/b]
75                100         75%

I have been messing around with different versions of the queries below. As you can see, I am very new to SQL Server. :)

[code="sql"]
SELECT Count(distinct ID) as Count,
       Count(DATEADD(dd, -181, DATEADD(wk, DATEDIFF(wk, 0, Date), 0)) > 2) as Participants
FROM Attendance
where Attendence_date between '07/01/2012' and '06/30/2013'
  and ID not in (Select ID from Inactive)
   or ID not in (select ID from Deenrolled)
GROUP BY ID
[/code]

[code="sql"]
SELECT Count(distinct ID) as Count,
       Count(DATEADD(dd, -181, DATEADD(wk, DATEDIFF(wk, 0, Date), 0)) - Enrolled_Date) as Participants
FROM Attendance
where Attendence_date between '07/01/2012' and '06/30/2013'
  and ID not in (Select ID from Inactive)
   or ID not in (select ID from Deenrolled)
GROUP BY ID
[/code]

Any programming assistance for these queries is greatly appreciated. Below are the sample/example datasets. Attendence_date is the date a student participated in one class.

[code="sql"]
CREATE TABLE Attendance (ID int, Attendence_date datetime)
INSERT INTO Attendance (ID, Attendence_date) VALUES
(4504498, '7/1/2012'), (4504498, '7/2/2012'), (4504498, '7/3/2012'), (4504498, '7/4/2012'),
(4504498, '7/5/2012'), (4504498, '7/8/2012'), (4504498, '7/9/2012'), (4504498, '7/10/2012'),
(4504498, '7/11/2012'), (4504498, '7/12/2012'), (4504498, '7/1/2012'), (4504498, '7/2/2012'),
(4504498, '7/3/2012'), (4504498, '7/4/2012'), (4504498, '7/5/2012'), (4504498, '7/8/2012'),
(4504498, '7/9/2012'), (4504498, '7/10/2012'), (4504498, '7/11/2012'), (4504498, '7/12/2012'),
(9201052, '7/15/2012'), (9201052, '7/16/2012'), (9201052, '7/17/2012'), (9201052, '7/17/2012'),
(9201052, '7/18/2012'), (7949745, '7/17/2012'), (7949745, '7/18/2012'), (7949745, '7/23/2012'),
(7949745, '7/23/2012'), (7949745, '7/24/2012'), (7949745, '7/26/2012'), (7949745, '7/26/2012'),
(7949745, '8/8/2012'), (7949745, '8/8/2012'), (7949745, '11/5/2012'), (7949745, '11/5/2012'),
(7949745, '11/5/2012'), (7949745, '11/6/2012'), (7949745, '11/6/2012'), (7949745, '11/6/2012'),
(7949745, '11/7/2012'), (7949745, '11/7/2012'), (7949745, '11/7/2012')
[/code]

[code="sql"]
CREATE TABLE Status (ID int, Intake_Date datetime, Engaged_Date datetime, Enrolled_Date datetime)
INSERT INTO Status (ID, Intake_Date, Engaged_Date, Enrolled_Date) VALUES
(7949745, '3/7/2012', '7/17/2012', '3/8/2012'),
(4504498, '2/21/2013', '3/5/2013', '3/22/2013'),
(1486279, '4/18/2013', '5/7/2013', '5/20/2013'),
(9201052, '5/15/2012', '7/13/2012', '5/15/2012'),
(1722390, '3/5/2012', '8/27/2012', '3/8/2012'),
(7735695, '9/7/2012', '9/7/2012', '9/28/2012'),
(9261549, '3/7/2012', '7/24/2012', '3/8/2012'),
(3857008, '3/15/2013', '3/18/2013', '4/3/2013'),
(8502583, '3/14/2013', '4/15/2013', '5/3/2013'),
(1209774, '4/19/2012', '1/1/2012', '4/24/2012')
[/code]

[code="sql"]
CREATE TABLE Deenrolled (ID int, Deenrolled_Date datetime)
INSERT INTO Deenrolled (ID, Deenrolled_Date) VALUES
(7949745, '2/4/2013'), (5485272, '07/08/2013'), (8955628, '01/10/2013'), (5123221, '7/8/2013'),
(5774753, '7/18/2013'), (3005451, '2/18/2013'), (7518818, '05/29/2013'), (9656985, '6/20/2013'),
(2438101, '7/17/2013'), (1437052, '7/25/2013'), (9133874, '4/25/2013'), (7007375, '6/19/2013'),
(3178181, '5/24/2013')
[/code]

[code="sql"]
CREATE TABLE Inactive (ID int, Effect_Date datetime)
INSERT INTO Inactive (ID, Effect_Date) VALUES
(1209774, '10/12/2012'), (5419494, '10/12/2012'), (4853049, '10/9/2012'), (1453678, '5/23/2013'),
(1111554, '7/16/2012'), (5564128, '2/15/2013'), (1769234, '7/16/2012')
[/code]

PRIMARY KEY VS UNIQUE KEY*

Posted: 29 Jul 2013 08:57 AM PDT

Hi everyone,

Does anybody know any cases/examples where we need to use a UNIQUE KEY instead of a PRIMARY KEY? I understand that a UNIQUE KEY can allow one NULL value in the data, but I don't know the reason for that. Why do we need that extra NULL value? And when do we need it?

Thanks!
Kaz
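
A common case is a natural key that may be unknown for some rows: the surrogate PRIMARY KEY can never be NULL, while a UNIQUE constraint on the natural key still prevents duplicates but tolerates the missing value. A minimal sketch:

[code="sql"]
CREATE TABLE Employee
(
    EmployeeID int IDENTITY(1,1) PRIMARY KEY,  -- surrogate key, never NULL
    NationalID char(9) NULL UNIQUE             -- real-world key, may be unknown
);

INSERT INTO Employee (NationalID) VALUES ('123456789');
INSERT INTO Employee (NationalID) VALUES (NULL);  -- allowed once by the UNIQUE constraint
[/code]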

Eager Spool

Posted: 13 Apr 2013 08:45 PM PDT

Dear All,

I am deleting rows from a huge table, in batches of 1000 rows. This table has a clustered index and many nonclustered indexes. The execution plan shows the delete on the clustered index filling up an eager spool, and this eager spool table is then used for the deletes on the other indexes. The cost of filling the eager spool shows as zero %, but each of the other indexes shows 8%. Because of the eager spool, the tempdb log file is also growing, and the performance of the query is slow. Any suggestions to improve the performance?

[Articles] How to understand NoSQL Databases


How to understand NoSQL Databases

Posted: 28 Jul 2013 11:00 PM PDT

Today we have a guest editorial from Phil Factor that looks at NoSQL databases.

[MS SQL Server] Guidance with shrink and rebuild/reorganise indexes


Guidance with shrink and rebuild/reorganise indexes

Posted: 28 Jul 2013 06:56 PM PDT

Hi,

I'm not really very familiar with database shrinking per se. I know how to do it, but there are some things I'm not understanding, so I'm after some explanation as to what's going on behind the scenes to cause what I'm seeing.

The server and DBs I'm working on are all development. Shrink is evil and the code was written by Satan himself, I know, and I would never attempt any of this in a production environment.

First of all, a bit of background information: the server I'm working on hosts several dev databases, one of which was huge: a 75GB data file with one table containing half a billion rows. This was causing the server to run out of disc space, so some action was required. I've managed to get this down to about 100 million rows by keeping only the last 14 days' worth of records. All the DBs are also using the simple recovery model.

After reducing the size of this particular DB I shrunk it to about 15GB and set up a job to delete any records older than 14 days, to prevent it from getting ridiculously large again. Obviously this has caused massive fragmentation (not that it matters particularly on this DB), so as an opportunity to learn I decided to rebuild the indexes.

From what I've read online, rebuilding should compact the pages and reclaim disc space. However, as I was running the rebuild (using a task in SSIS) I noticed the data file was actually growing in size.

So my question (after going round the houses a bit) is: what's going on here? Why is the DB growing while the indexes are rebuilding? Will the DB grow in size permanently, or is this just temporary growth while indexes are dropped and recreated? I've read a number of articles about it but none of them seem to acknowledge this, so I'm left a little confused.

Thanks in advance.

Alert - tempdb free space?

Posted: 28 Jul 2013 04:54 PM PDT

Hi,

I received an alert that tempdb free space has fallen below the threshold, as shown below. What I want to know: the drive holding tempdb has plenty of free space available, and if the tempdb data file fills up it will automatically be extended and allocated more space, provided autogrowth is enabled on the database. So we don't want to take action while free space remains on the disk; only if the data file has fully occupied the disk and no free space is available will we consider taking action.

DATE/TIME: 7/29/2013 11:11:47 AM
DESCRIPTION: The SQL Server performance counter 'Free Space in tempdb (KB)' (instance 'N/A') of object 'SQLServer:Transactions' is now below the threshold of 200.00 (the current value is 192.00).

Thanks,
ananda
