Tuesday, May 21, 2013

[SQL Server 2008 issues] Long running query

Long running query

Posted: 20 May 2013 05:47 PM PDT

Hello All, I have a requirement where I have to identify any long-running query under a particular login; if the query runs for more than 15 minutes, the process should be terminated (killed) and the query communicated to the respective group. How can I achieve this? Your help would be appreciated.
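A minimal sketch of one approach: run something like the following from a SQL Agent job every minute or so. The login name 'ReportUser' and the 15-minute threshold are placeholders; test carefully before killing sessions in production, and wire the captured text into sp_send_dbmail (or similar) for the notification step.

[code="sql"]
DECLARE @spid INT, @kill NVARCHAR(50), @query NVARCHAR(MAX);

-- Find one request under the login that has been running over 15 minutes
SELECT TOP (1)
       @spid  = r.session_id,
       @query = t.text
FROM   sys.dm_exec_requests r
JOIN   sys.dm_exec_sessions s ON s.session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE  s.login_name = 'ReportUser'                     -- placeholder login
  AND  r.start_time < DATEADD(MINUTE, -15, GETDATE());

IF @spid IS NOT NULL
BEGIN
    SET @kill = N'KILL ' + CAST(@spid AS NVARCHAR(10)); -- KILL needs dynamic SQL
    EXEC (@kill);
    -- @query now holds the offending statement text; email it to the group here
END
[/code]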

How to restore database from network place...

Posted: 20 May 2013 06:06 PM PDT

Hi. My requirement is restoring a database from a network location. When I try the restore, the network drive is not displayed in the browse dialog. How do I restore a DB from a network share? Please give me a reply. Regards, Shiva
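The browse dialog only lists local drives, but a UNC path can be typed straight into a RESTORE statement. A sketch (server, share, database and file names are placeholders; the SQL Server service account, not your own login, must have read permission on the share):

[code="sql"]
RESTORE DATABASE MyDatabase
FROM DISK = N'\\FileServer\Backups\MyDatabase.bak'
WITH MOVE 'MyDatabase'     TO 'D:\Data\MyDatabase.mdf',
     MOVE 'MyDatabase_log' TO 'E:\Logs\MyDatabase_log.ldf',
     RECOVERY;
[/code]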

Running updates in SQL

Posted: 20 May 2013 06:30 PM PDT

Dear Team, kindly advise how to create a query in SQL Server for a running update. Question: the 1st row's Total becomes the 2nd row's Value1, which is used to calculate the 2nd row's total; that total then becomes the 3rd row's Value1, and so on for all rows. How do I write the query for this? Note: the link between rows is Id / PreviousRowId. [b]Refer to the attachment.[/b] Thanks, Chandrahasan S

How to write a select query that shows upcoming birthdays

Posted: 20 May 2013 07:30 AM PDT

This should be an easy challenge. SELECT date_of_birth FROM person gives values in YYYYMMDD format, like 19700101. I want a list of people whose birthdays fall within the next 30 days from today. Can someone construct the SQL for me, please?
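A sketch, assuming date_of_birth is stored as a CHAR(8) string in YYYYMMDD form (cast it to CHAR(8) first if it is an int). It maps each birthday onto the current year and the next year (to handle the year-end wrap) and keeps the ones falling in the next 30 days; 29 February birthdays would need extra handling in non-leap years.

[code="sql"]
SELECT p.*
FROM   person p
CROSS APPLY (SELECT SUBSTRING(p.date_of_birth, 5, 4) AS mmdd) x
WHERE  CONVERT(DATETIME, CAST(YEAR(GETDATE())     AS CHAR(4)) + x.mmdd, 112)
           BETWEEN CAST(GETDATE() AS DATE) AND DATEADD(DAY, 30, GETDATE())
   OR  CONVERT(DATETIME, CAST(YEAR(GETDATE()) + 1 AS CHAR(4)) + x.mmdd, 112)
           BETWEEN CAST(GETDATE() AS DATE) AND DATEADD(DAY, 30, GETDATE());
[/code]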

Select records from a table that belong to the first clientname when ordered in ascending order of clientname

Posted: 20 May 2013 06:37 AM PDT

Using SQL Server Management Studio 2005. I need to develop a query/stored procedure with which I can retrieve all records pertaining to the first clientname after they are ordered in ascending order of clientname. E.g. [img]\\dgsnas2\home\agandra\Desktop\Capture.png[/img] In the above table, I want to retrieve all the rows pertaining to clientname 'A123'. But if in future a new clientname, say 'A001', is added, then my query should retrieve records pertaining to 'A001' and not 'A123'. Appreciate any help! Thanks!
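Because the "first" name must be computed at run time, a subquery with MIN does the job; a newly added 'A001' then automatically takes over from 'A123'. A sketch (the table name dbo.Clients is a placeholder):

[code="sql"]
SELECT *
FROM   dbo.Clients
WHERE  clientname = (SELECT MIN(clientname) FROM dbo.Clients);
[/code]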

Query implementation help.

Posted: 20 May 2013 09:04 AM PDT

I have about 50 fields (client attributes) that need to be shown on a report. A client may or may not want to see all 50 fields. Through the web (I am thinking), they will select some fields and the query needs to run on those fields only. Those 50 fields may come from several tables. Also, clients need to be able to rename the fields as they wish, and another client may want the same field named differently; I hope I am making sense here. What would be the best way to go about this?
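One common pattern for this kind of requirement is a mapping table holding each client's chosen fields and display names, plus a wide view joining the source tables, with the SELECT list built dynamically. A very rough sketch; all table and column names here (ClientReportField, ClientAttributeView, etc.) are illustrative only:

[code="sql"]
DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- Build "SourceCol AS [Client's Name], ..." from the mapping table
SELECT @cols = COALESCE(@cols + ', ', '')
             + QUOTENAME(SourceColumn) + ' AS ' + QUOTENAME(DisplayName)
FROM   dbo.ClientReportField
WHERE  ClientID = 42                      -- the requesting client
ORDER  BY DisplayOrder;

SET @sql = N'SELECT ' + @cols + N' FROM dbo.ClientAttributeView;';
EXEC sp_executesql @sql;
[/code]

(Variable concatenation with ORDER BY is a widely used shortcut but not formally guaranteed; FOR XML PATH is the safer way to build the list.)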

Running DTS packages on 2008 R2

Posted: 12 May 2013 07:54 PM PDT

Hope someone can help me on this one. We have a 2008 R2 cluster which failed over recently; a number of jobs that run DTS packages failed because the DTS components were not installed. I went through the install of the DTS components and the 2005 Backwards Compatibility pack. At this point I still couldn't open a package on the server. I read a few articles showing that, on occasion, binary files need to be manually copied into the Program Files directory where SQL Server is installed (http://msdn.microsoft.com/en-us/library/ms143755%28v=sql.105%29.aspx). After this was done I could open the packages, but the jobs still failed.

The code in the job step is set to run as a CmdExec step with a direct call to the DTS package on the server, e.g. dtsrun /S "Server" /E /N "Package". I tried running this via xp_cmdshell and got an error that the dtsrun component could not be found. Unfortunately I couldn't test adding the full path of the DTSRun location to the xp_cmdshell command, but I'm guessing that's what the error is down to.

A bit more research found that there can be issues with the order of paths in a machine's environment variables: http://blogs.msdn.com/b/ramoji/archive/2008/10/16/how-to-open-dts-packages-in-sql-server-2008.aspx. I made the necessary changes, but the job still fails.

One last thing: I compared the locations of the DLLs and RLLs (which have to be manually moved, as directed in the first link) between the node where the DTS packages run and the node where they don't, and there were some discrepancies (strangely enough, on the node where the packages work, they're not in the folder directed in the first link). I made the necessary changes and now both nodes are identical in terms of file locations. The jobs still failed. Can anyone else suggest something? Thanks

TimeStamp without milliseconds!!!

Posted: 20 May 2013 08:05 AM PDT

Hey, I'm trying to add a timestamp column using a Derived Column transformation, but I don't want the milliseconds. I tried the expression mentioned above, but I'm unable to ignore the failure; it keeps going back to "fail component". Can you help me with the expression? Thanks,

Kindly advise: how to create a query in SQL Server for the issue below

Posted: 20 May 2013 01:49 PM PDT

Dear Team, kindly advise how to create a query in SQL Server for the issue below. I have a table with these details:

Id  Name  Value1  Value2  Total(Value1+Value2)  PreviousRowId
1   A     10      0       10                    0
2   B             5                             1
3   C             6                             2
4   D             7                             3
5   E             8                             4
6   F             5                             5

Question: the 1st row's Total becomes the 2nd row's Value1, which is used to calculate the 2nd row's total; that total then becomes the 3rd row's Value1, and so on for all rows. How do I write the query for this? Note: the link between rows is Id / PreviousRowId. I need output like this:

Id  Name  Value1  Value2  Total(Value1+Value2)  PreviousRowId
1   A     10      0       10                    0
2   B     10      5       15                    1
3   C     15      6       21                    2
4   D     21      7       28                    3
5   E     28      8       36                    4
6   F     36      5       41                    5

Thanks, Chandrahasan S
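One way to sketch the requested carry-forward is a recursive CTE: each row's Value1 is taken from the previous row's Total, following the PreviousRowId link. Column names follow the post; dbo.MyTable is a placeholder.

[code="sql"]
;WITH Chain AS
(
    -- Anchor: the first row in the chain
    SELECT Id, Name, Value1, Value2,
           Value1 + Value2 AS Total,
           PreviousRowId
    FROM   dbo.MyTable
    WHERE  PreviousRowId = 0

    UNION ALL

    -- Each next row inherits the previous row's Total as its Value1
    SELECT t.Id, t.Name,
           c.Total            AS Value1,
           t.Value2,
           c.Total + t.Value2 AS Total,
           t.PreviousRowId
    FROM   dbo.MyTable t
    JOIN   Chain c ON t.PreviousRowId = c.Id
)
SELECT * FROM Chain ORDER BY Id;
[/code]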

Data Modeling - Ensuring Uniqueness over a group of rows

Posted: 20 May 2013 01:26 PM PDT

What are the different ways or best practices to design this kind of data model? Let's say there are 50 different color choices. Any number of colors can be assigned to a group, but no two groups can contain the exact same set of colors. For example, if Group A has {Red, Blue, Green}, then no other group can have exactly those three colors; other groups can have a subset such as {Red, Blue} or {Blue, Green}, or more colors in addition to those, like {Red, Blue, Green, Yellow}, {Red, Blue, Green, Purple} or {Red, Blue, White}, but NOT {Red, Blue, Green}.

These are some of the ways I can think of:
1. Use a procedure to insert rows and include logic in the proc to check uniqueness and prevent duplicates.
2. Or use a trigger in a similar way.
3. Or use multiple columns, but this isn't really a robust solution; first off, the number of columns is somewhat predefined and the DDL has to change if new columns have to be added; secondly, storing a given color in a given column doesn't really help in ensuring uniqueness, since ColA=Green and ColB=Red is technically different from ColA=Red and ColB=Green even though they are the same from a human perspective.
4. Or have a parent-child table combination, where the parent table has one row with a column storing a concatenated string over a predefined sort order (something like Blue|Green|Red), and the child table has the actual colors in multiple rows, each tied back to the appropriate parent row; this helps, but the child table still needs some kind of logic to ensure uniqueness on itself.
5. Or have an indexed view on top of the table (this is kind of the inverse of No. 4). The table here has multiple rows, one per color, and the indexed view has one row with a column storing the concatenated string, which then participates in the unique index.

Are there any other ways to do this?

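A rough sketch of option 4 above: the child table holds one row per (group, color), a trigger rebuilds a canonical sorted color list on the parent, and a unique constraint rejects duplicate sets. All names are illustrative, FOR XML PATH is the 2008-era concatenation trick, and empty groups (NULL ColorKey) would need extra handling, since a UNIQUE constraint allows only one NULL.

[code="sql"]
CREATE TABLE dbo.ColorGroup
(
    GroupID  INT IDENTITY PRIMARY KEY,
    ColorKey NVARCHAR(450) NULL UNIQUE   -- canonical 'Blue|Green|Red'
);

CREATE TABLE dbo.ColorGroupMember
(
    GroupID INT NOT NULL REFERENCES dbo.ColorGroup,
    Color   NVARCHAR(30) NOT NULL,
    PRIMARY KEY (GroupID, Color)
);
GO
-- Rebuild the canonical key whenever a group's membership changes;
-- the UNIQUE constraint then raises an error on a duplicate set.
CREATE TRIGGER trg_ColorGroupMember
ON dbo.ColorGroupMember AFTER INSERT, UPDATE, DELETE
AS
UPDATE g
SET    ColorKey = STUFF((SELECT '|' + m.Color
                         FROM dbo.ColorGroupMember m
                         WHERE m.GroupID = g.GroupID
                         ORDER BY m.Color
                         FOR XML PATH('')), 1, 1, '')
FROM   dbo.ColorGroup g
WHERE  g.GroupID IN (SELECT GroupID FROM inserted
                     UNION SELECT GroupID FROM deleted);
[/code]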

performance in sql server

Posted: 19 May 2013 09:36 PM PDT

Hi everyone, I have 3 tables as below:

Table Attachment (IDAttachment, NameAttachment):
1 | A
2 | B

Table ActionPerformedAttachment (IDActionPerformedAttachment, IDActionPerformed, IDAttachment):
1 | A | 1
2 | A | 1
3 | A | 1
4 | B | 2
5 | B | 2

Table ActionPerformed (IDActionPerformed, IDProject):
A | P1
B | P2

Now I have IDProject = P1, and I want to delete the related data in the Attachment and ActionPerformedAttachment tables. I can do it in two ways. Case 1: use a temp table for the delete. Case 2: use a cursor for the delete. Please help me: which is the better approach? Thank you very much.
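Arguably neither a temp table nor a cursor is needed: two set-based DELETEs can handle it, with the OUTPUT clause capturing which attachments were unlinked before their rows are removed. A sketch using the column names from the post (an INT key for IDAttachment is an assumption):

[code="sql"]
DECLARE @gone TABLE (IDAttachment INT);

-- Remove the link rows for project P1, remembering the attachment IDs
DELETE apa
OUTPUT deleted.IDAttachment INTO @gone
FROM   ActionPerformedAttachment apa
JOIN   ActionPerformed ap ON ap.IDActionPerformed = apa.IDActionPerformed
WHERE  ap.IDProject = 'P1';

-- Then remove the attachments themselves
DELETE a
FROM   Attachment a
JOIN   @gone g ON g.IDAttachment = a.IDAttachment;
[/code]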

Making data in a column unique (that's not a PK)

Posted: 20 May 2013 03:57 AM PDT

This may be an elementary question, but if I don't ask, I won't learn. What's the best way to ensure that data entered into a certain column is unique and not already in the table, even though the column is NOT the PK? For example: SalesmanID, SSN, FirstName, LastName, etc. SalesmanID is the PK, but I do want SSN to be unique as well. Is there something in the table design I can do to ensure this, or do I need to resort to a trigger or stored proc? Thanks in advance for offering help.
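No trigger or procedure is needed: a UNIQUE constraint (or a unique index) enforces this declaratively. A sketch, assuming the table is dbo.Salesman; note that a plain UNIQUE constraint allows only one NULL, so if many rows may lack an SSN, a filtered unique index (SQL Server 2008+) is the usual workaround.

[code="sql"]
ALTER TABLE dbo.Salesman
ADD CONSTRAINT UQ_Salesman_SSN UNIQUE (SSN);

-- Or, if multiple NULL SSNs must be allowed:
-- CREATE UNIQUE NONCLUSTERED INDEX UX_Salesman_SSN
--     ON dbo.Salesman (SSN) WHERE SSN IS NOT NULL;
[/code]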

Package Execution result is blank!!!

Posted: 20 May 2013 08:38 AM PDT

Hi, I created a package and everything was good; the package executed well. But today, suddenly, when I execute the same package nothing happens: there is no execution process, and when I check the execution result there is nothing in it. It does not return any error; it stops when I stop it, but no action takes place. How should I troubleshoot it? Thanks,

SSIS Derived Column Transformation Date Time Expression

Posted: 29 Apr 2012 08:29 PM PDT

I'm using the Derived Column transformation and I'd like to create a column with the current date/time with .000 milliseconds. Can anyone help with the expression I need to perform this, please? Many thanks in advance. Damion
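One commonly suggested sketch (not the only way, and worth testing against your package's locale) is to cast the current datetime to a string, keep only the first 19 characters (through the seconds), and cast back, which leaves the milliseconds at .000:

[code="plain"]
(DT_DBTIMESTAMP)SUBSTRING((DT_WSTR,30)GETDATE(), 1, 19)
[/code]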

How to call JAVA class file in sql trigger

Posted: 20 May 2013 12:47 AM PDT

Hi Team, I want a Java batch file to be executed from a trigger. Is it possible to call a Java class file in a SQL trigger? Please suggest.

Proper way to write a sproc

Posted: 19 May 2013 07:21 PM PDT

Hi, I am an experienced SQL developer and have written many stored procedures, but I am worried that I am still quite a beginner when setting them up. I start my sproc and then get straight into writing the code that performs the tasks the sproc needs to do: declaring the variables, queries, loops, whatever. I look at some other procs that people have written (and sorry, I don't have any examples) and there are all sorts of other commands that come first. Should I really be doing things before I start the nitty gritty? Thanks in advance
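The "other commands that come first" are usually session options and error handling rather than anything mandatory. A common (not universal) preamble sketch:

[code="sql"]
CREATE PROCEDURE dbo.usp_Example
    @CustomerID INT
AS
BEGIN
    SET NOCOUNT ON;        -- suppress "n rows affected" chatter
    SET XACT_ABORT ON;     -- roll back the transaction on any error

    BEGIN TRY
        BEGIN TRANSACTION;
        -- ... the nitty gritty goes here ...
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        -- Re-raise for the caller (RAISERROR on 2008; THROW on 2012+)
        DECLARE @msg NVARCHAR(2048) = ERROR_MESSAGE();
        RAISERROR (@msg, 16, 1);
    END CATCH
END
[/code]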

Estimating the size of a non-clustered index

Posted: 20 May 2013 04:53 AM PDT

Does anyone have a query (or some other way) to estimate the size of any potential non-clustered index before it's created?
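Microsoft publishes an exact sizing worksheet, but a back-of-envelope sketch is to multiply the candidate key columns' maximum byte lengths (plus some per-row overhead) by the table's row count. Table and column names below are placeholders, and this only gives an order of magnitude:

[code="sql"]
SELECT  (SUM(c.max_length) + 12)                 -- ~12 bytes/row overhead
        * (SELECT SUM(p.rows)
           FROM   sys.partitions p
           WHERE  p.object_id = OBJECT_ID('dbo.MyTable')
             AND  p.index_id IN (0, 1))          -- heap or clustered index
        / 1024.0 / 1024.0 AS EstimatedMB
FROM    sys.columns c
WHERE   c.object_id = OBJECT_ID('dbo.MyTable')
  AND   c.name IN ('Col1', 'Col2');              -- proposed key columns
[/code]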

My CPUs are all at 100%. How can I find out what is running *right now*?

Posted: 20 May 2013 04:27 AM PDT

There are a number of nice queries (below) that show you what is using CPU time on your server. But I believe they pull this from the DMVs, so they show you ALL this info since your SQL instance last started. Mine has been running for over 6 months, but [b]right now[/b] our CPUs are getting slammed, so these reports show me what has been using CPU over that whole time period. Is there any way to limit these reports to only show DMV data from, say, the last 24 hours, or even 1 hour? Or is there some other way to find out what is banging on my CPUs right now?

[code="sql"]
-- Identify the most costly queries by CPU
SELECT TOP 20
    qs.sql_handle,
    qs.execution_count,
    qs.total_worker_time AS Total_CPU,
    total_CPU_inSeconds =                  -- converted from microseconds
        qs.total_worker_time / 1000000,
    average_CPU_inSeconds =                -- converted from microseconds
        (qs.total_worker_time / 1000000) / qs.execution_count,
    qs.total_elapsed_time,
    total_elapsed_time_inSeconds =         -- converted from microseconds
        qs.total_elapsed_time / 1000000,
    st.text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC

-- Find queries that take the most CPU overall
SELECT TOP 50
    ObjectName = OBJECT_SCHEMA_NAME(qt.objectid, qt.dbid) + '.'
               + OBJECT_NAME(qt.objectid, qt.dbid),
    TextData = qt.text,
    DiskReads = qs.total_physical_reads,   -- the worst reads, disk reads
    MemoryReads = qs.total_logical_reads,  -- logical reads are memory reads
    Executions = qs.execution_count,
    TotalCPUTime = qs.total_worker_time,
    AverageCPUTime = qs.total_worker_time / qs.execution_count,
    DiskWaitAndCPUTime = qs.total_elapsed_time,
    MemoryWrites = qs.max_logical_writes,
    DateCached = qs.creation_time,
    DatabaseName = DB_NAME(qt.dbid),
    LastExecutionTime = qs.last_execution_time
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
ORDER BY qs.total_worker_time DESC

-- Find queries that have the highest average CPU usage
SELECT TOP 50
    ObjectName = OBJECT_SCHEMA_NAME(qt.objectid, qt.dbid) + '.'
               + OBJECT_NAME(qt.objectid, qt.dbid),
    TextData = qt.text,
    DiskReads = qs.total_physical_reads,   -- the worst reads, disk reads
    MemoryReads = qs.total_logical_reads,  -- logical reads are memory reads
    Executions = qs.execution_count,
    TotalCPUTime = qs.total_worker_time,
    AverageCPUTime = qs.total_worker_time / qs.execution_count,
    DiskWaitAndCPUTime = qs.total_elapsed_time,
    MemoryWrites = qs.max_logical_writes,
    DateCached = qs.creation_time,
    DatabaseName = DB_NAME(qt.dbid),
    LastExecutionTime = qs.last_execution_time
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
ORDER BY qs.total_worker_time / qs.execution_count DESC
[/code]
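For what is running at this moment (as opposed to since the last restart), sys.dm_exec_requests is the usual starting point: it only lists requests that are executing right now, with the CPU accumulated by the current statement. A sketch; for "the last hour", another common technique is to snapshot sys.dm_exec_query_stats twice and diff total_worker_time between the snapshots.

[code="sql"]
SELECT  r.session_id,
        r.status,
        r.cpu_time,                    -- ms used by this request so far
        r.total_elapsed_time,
        r.wait_type,
        DB_NAME(r.database_id) AS database_name,
        t.text                 AS running_sql
FROM    sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE   r.session_id <> @@SPID         -- exclude this query itself
ORDER BY r.cpu_time DESC;
[/code]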

join only if the joining column exists

Posted: 20 May 2013 04:53 AM PDT

[code="plain"]
IF EXISTS (SELECT 1 FROM sys.columns
           WHERE name = 'CID' AND object_id = OBJECT_ID('Benin'))
BEGIN
    DELETE dm
    FROM [ABC].[dbo].[Benin] dm
    JOIN #TempDim T ON dm.CID = T.TempDimID
END
[/code]
The table Benin does not have the column CID, so the batch fails in validation/parsing before the query can run. I want the DELETE to run only if the joining column exists. Any workaround?
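Deferred name resolution covers missing tables but not missing columns, so the usual workaround is to wrap the DELETE in dynamic SQL: the inner batch is only compiled when EXEC runs, i.e. after the IF EXISTS check has passed. The temp table #TempDim remains visible inside the EXEC because it runs on the same session. A sketch:

[code="sql"]
IF EXISTS (SELECT 1 FROM sys.columns
           WHERE name = 'CID'
             AND object_id = OBJECT_ID('ABC.dbo.Benin'))
BEGIN
    EXEC (N'DELETE dm
            FROM [ABC].[dbo].[Benin] dm
            JOIN #TempDim T ON dm.CID = T.TempDimID');
END
[/code]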

Heterogeneous queries require the ANSI_NULLS and ANSI_WARNINGS options to be set for the connection

Posted: 20 May 2013 06:42 AM PDT

I have a front-end application that has worked for years. The issue began when one of the databases it pulls from was moved to another server; that is the only difference. We created a linked server connection to the new server. Now some people can successfully pull data from within the application, but others cannot. They receive the error below whenever they try to pull data: "[Microsoft][ODBC SQL Server Driver][SQL Server]Heterogeneous queries require the ANSI_NULLS and ANSI_WARNINGS options to be set for the connection. This ensures consistent query semantics. Enable these options and then reissue your query." Why does the program continue to work for some people and not others?
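The likely reason it differs per person is that these are connection-level options: different client machines or driver/DSN configurations can connect with different defaults. A sketch of the fix on the SQL side (the linked-server query shown is a placeholder); alternatively, correct the setting in the ODBC driver or application connection string so all users connect the same way:

[code="sql"]
SET ANSI_NULLS ON;
SET ANSI_WARNINGS ON;
SELECT TOP (1) * FROM LinkedServer.SomeDb.dbo.SomeTable;
[/code]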

Open Transactions

Posted: 20 May 2013 06:25 AM PDT

Is it possible to prevent a user from opening transactions?

Changing a user defined data type definition

Posted: 22 Jul 2012 10:24 AM PDT

I recently had cause to change the definition of a user-defined data type, and was, I suppose, intrigued by the challenges of the process and how little information I could locate on the internet that fully catered for making such a change in a controlled and reliable manner.

We have in our production databases a little over 1000 tables, some 8000 stored procedures, 500 functions and nearly 250 user-defined types, with at present about 30 production databases in existence. The general solution to the problem of needing to redefine a UDT was a lot of manual "drop this, copy data to temporary columns, re-create everything" and so on. It seemed to me that a manual process was very open to catastrophic failure and one I would definitely not use in a production environment; surely if one could do the job manually, one could also handle it in a more automated manner?

The particular UDT we needed to change from nvarchar(20) to nvarchar(40) was used in about 40 columns across 30 tables, some of them the most heavily populated and queried tables in the database. These tables are supplemented by no fewer than 300 indexes, constraints and foreign keys; manually dropping all those objects, and the 1000-odd stored procedures that referenced columns of the specified type, across 30 production databases (not to mention 60 or so test databases) was a task to which I was not prepared to dedicate my time. I don't get paid for the mundane and repetitive, after all. Too hard, can't be done, my colleagues said... challenge accepted.

Attached are three scripts I created to accomplish the task: one main processing procedure and two functions which are used to build the SQL to drop and re-create an index/constraint/key. Bear in mind that these scripts were designed and tested in my environment, and there may be differences or situations in other environments which I did not encounter and thus have not catered for. I accept that validation could be improved, primarily checking compatibility between the current type definition and the new definition to ensure you're not changing an nvarchar(20) to a tinyint, for example. Given that it was created to solve an internal problem and I didn't actually intend to publish the code, it's not the cleanest thing I've written, but I'm happy with it nonetheless.

These scripts have been used in SQL Server 2008, SQL Server 2008 R2 and SQL Server 2012 environments with success across the board: all development, test and production databases were successfully updated. They were designed primarily for 2008, so they don't cater for columnstore indexes and other new features of 2012, but we're only in the test phase of 2012 so we don't presently make use of any new features.

What I'm looking for at this stage is some feedback from the experienced amongst us on the procedure, pros and cons, information about setups in which it would not be practical, or other suggestions on better handling of the process. I am presently adapting a copy of this process to solve the problem of changing database collation. Using the guts of the process to drop/re-create objects with the new collation has so far been successful in test environments, but I've yet to deploy it in a production setting, though I hope to do so shortly once my tests have been thoroughly evaluated, at which time I may also consider posting the code for that.

As a side note, use of this script exposed a significant number of invalid stored procedures sitting in our databases: they were no longer in use and did not recompile due to changes in the database schema that were not compatible with the current procedure definition. After getting over my disbelief at how little maintenance had been done on our databases in the past, I purged nearly 500 stored procedures and UDFs from each of our databases.

[b]************************************************************************************************
DISCLAIMER:
I wrote this procedure specifically for my own development, test and production environments. If anyone intends on making use of this in your own environment, I strongly suggest thorough testing in a test environment prior to production usage. I cannot and will not make any guarantees as to the suitability or reliability of the process in environments other than my own, and I strongly recommend that only experienced DBAs make use of the process, and only after gaining an understanding of how it works.
************************************************************************************************[/b]

Import Excel File to SQL Server

Posted: 15 May 2013 02:02 AM PDT

I am having trouble importing an Excel file into SQL Server. Excel file:

ColumnA
--------
b2
2a
2a
2aa

When I import it, it comes in as:

ColumnA
--------
b
2a
2a
2aa

The number "2" just doesn't come through. I tried datatype mapping as varchar(512), and if I change the datatype to float then only the numeric values come and the rest of the data comes through blank. I need all the data. Can you please tell me which datatype I should use? Thanks in advance.
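The usual culprit here is not the SQL datatype but the Excel driver: it samples the first few rows of a column to guess its type, and values that don't match the guess come back mangled or NULL. A commonly suggested workaround (worth testing for your file) is forcing the column to text with IMEX=1 in the connection's extended properties:

[code="plain"]
Excel 8.0;HDR=YES;IMEX=1
[/code]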

Antivirus on a SQL Server VM

Posted: 15 May 2013 11:57 PM PDT

Hi guys, my VM admin wants to run AV at the host and not at the guest level. What do I need to ask the VM admin to make sure he is adhering to SQL Server best practices? Cheers

Clustered index usage

Posted: 20 May 2013 04:41 AM PDT

Hi everyone. I have a query that shows me reads and writes for indexes. Looking at the indexes with the worst read/write ratio, I see this one:

type_desc  Table Name  Index Name  Total Reads  Total Writes  Difference
CLUSTERED  TABLE1      PK_INDEX1   964251       4281563       -3317312

It seems like the index is being written to much more than it is read from, but as it is a clustered index, isn't it normal that every insert or update has to write to the table (i.e. the clustered index) itself? When looking at read/write ratios, are non-clustered indexes of more interest for tuning? Thanks, Howard

Database suspended-Need help

Posted: 22 Sep 2010 08:19 PM PDT

Hello, why does a database get into a suspended state? How can we solve this? What problems cause this error? Could you please give some guidance on this? Thanks, ARP

Security Problems after SSRS Domain Migration

Posted: 14 Mar 2013 09:41 PM PDT

Morning all, I recently carried out an SSRS migration from one domain to another, using the backup and restore method. All reports are generating fine and all access for existing users appears to be fine. However, there is one (fairly significant) problem: some users appear to have been duplicated (it seems there is a problem with new SIDs being generated/mapped for each affected domain user), and this has left me unable to edit the security at the site levels/folders where this is a problem. I can neither add, amend nor delete users. I was advised to delete the first instance of a user to correct this, but it seems this has to be ruled out, because even the top-level folder contains duplicate users, and an error occurs when I attempt to delete the user via the site security.

The original error was: "Response is not well-formed XML". The error when I now try to delete a user in one of the affected folders is: "role assignment is not valid. The role assignment is either empty or it specifies a user or group name that is already used in an existing role assignment for the current item. (rsInvalidPolicyDefinition)".

I tried amending the ownership and modifiedby entries in the database for one of the specific users to myself, and then deleted the user and the policies assigned to him, but unfortunately that never resolved the problem (the user still remains in place with security entries on the site, even after being deleted within the database). Has anyone else encountered this problem, and is there a way of resolving it that doesn't mean having to back up the original DB with the affected users stripped out and restore that? A lot of further configuration work and report imports have occurred since this was put in place. Thanks in advance for any responses.

SQL Server Security Issue

Posted: 20 May 2013 12:12 AM PDT

Hello all, there are accounts that were created well before my time whose passwords the users (developers) know. These accounts are being used to elevate access and perform tasks that they should not be doing. Is there a way to create an alert that will notify me when user access is elevated or changed? Thanks in advance to all who respond; any input is greatly appreciated.

International Characters (Korean) displayed as Junk data after migration to PB12.5.

Posted: 20 May 2013 12:03 AM PDT

Dear Gurus, we are facing the following issue; the details are below. Your kind help and support is needed.

[b]Details:[/b] [i]We have migrated the application from PowerBuilder 9.0.3, which connects to MS SQL Server 2005, to PowerBuilder 12.5, which connects to MS SQL Server 2008 R2 with collation SQL_Latin1_General_CP1_CI_AS.[/i] [i]In PB 9.0.3 with MS SQL Server 2005 the application displays and saves the international characters (Korean) correctly, but after migration the application displays the same international characters (Korean) as junk characters.[/i]

We are using the SNC SQL Native Client interface with the following parameters:

[b]SQLCA.DBMS = "SNC SQL Native Client(OLE DB)"
SQLCA.LogPass = <Mypassword>
SQLCA.ServerName = "ServerName"
SQLCA.LogId = "UserID"
SQLCA.AutoCommit = False
SQLCA.DBParm = "Database=DBName,Provider='SQLNCLI10'"[/b]

Now, when we connect to the server, the international characters are displayed as junk, as shown below:

¼­¿ï½Ã¼­¿ï½Ã

instead of the Korean text, as shown below:

서울시서울시

Then we added the AutoTranslate parameter:

[b]SQLCA.DBMS = "SNC SQL Native Client(OLE DB)"
SQLCA.LogPass = <Mypassword>
SQLCA.ServerName = "ServerName"
SQLCA.LogId = "UserID"
SQLCA.AutoCommit = False
SQLCA.DBParm = "Database='Databasename',Provider='SQLNCLI10',ProviderString='AutoTranslate=NO'"[/b]

After applying the AutoTranslate keyword the issue still persists and the application shows the international characters (Korean) as junk characters. We also tried the same AutoTranslate keyword with the PB 12.5 and ODBC combination, with the following DBParm:

[b]SQLCA.DBMS = "ODBC"
SQLCA.AutoCommit = False
SQLCA.DBParm = "ConnectString='DSN=test1; UID=UserID; PWD=<MyPassword>; Autotranslate=no'"[/b]

[u]When tested in the same environment of PB 12.5 and MS SQL Server 2008, the application displays the international characters (Korean) correctly, as shown below: 서울시서울시[/u]

Apparently the AutoTranslate property works perfectly with ODBC but not with OLE DB. Although the SQL Native Client OLE DB DBParm accepts the AutoTranslate property, the international characters (Korean) still show as junk. Kindly help if we are missing any configuration, or provide any workaround to achieve the correct behaviour for international characters (Korean) with PB 12.5 and MS SQL Server 2008 R2 using SQL Native Client OLE DB.

Thanks
Adeel Imtiaz
