Thursday, March 14, 2013

[SQL Server 2008 issues] new to calling procedures or scripts

new to calling procedures or scripts

Posted: 13 Mar 2013 10:54 AM PDT

What is the best way to go about this? I know I can just add all the code in one script, but I would rather break it up, so I have two .sql files: one called createtable.sql and one called cleantheversion.sql. Here are the scripts.

createtable.sql:

if exists(select 1 from sys.tables where object_id = object_id('dbo.BigTable'))
    drop table BigTable; --Drop SOURCE

--Create a table
CREATE TABLE BigTable
(
    software_name_raw VARCHAR(255),
    software_version_raw VARCHAR(255)
)
GO

--Insert records into Source table
INSERT INTO BigTable
VALUES
    ('Microsoft Office 2003','11.0.2.1'),
    ('Microsoft 2003 Office','11.0.3.1'),
    ('Microsoft Office 2003 Professional','12.1.2.1'),
    ('Sun Microsystems','11.0.0.1')
GO

SELECT * FROM BigTable as Source_TABLE
GO

Can I call the other .sql file somehow at this point, e.g. call cleantheversion.sql?

cleantheversion.sql:

BEGIN TRANSACTION Inner3;
GO
update dbo.BigTable
set BigTable.Software_Version_Raw =
    case when CHARINDEX('.', Software_Version_Raw, 0) > 0
            then substring(Software_Version_Raw, 0, CHARINDEX('.', Software_Version_Raw, 0)) + '.x'
         when isnumeric(Software_Version_Raw) = 1
            then Software_Version_Raw + '.x'
         else Software_Version_Raw
    end
go
COMMIT TRANSACTION Inner3;
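One way to do this, as a sketch: when the scripts are run through sqlcmd (or SSMS in SQLCMD Mode), the :r command includes another script file inline, so a small wrapper script can run both files in order.

-- master.sql (hypothetical wrapper); run e.g. with: sqlcmd -S <server> -d <database> -i master.sql
-- requires SQLCMD Mode if executed from SSMS
:r .\createtable.sql
:r .\cleantheversion.sql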

How can I get these values with a match field that has different file extensions.

Posted: 13 Mar 2013 05:57 PM PDT

Hello, and thank you for the help. I am trying to write a select statement where I can view 3 columns, joining on a filename. Unfortunately, the values in both filenames match but their file extensions do not. Here is an example:

6024500036687.pst.txt = 6024500036687.pst.msg.htm

As you see, the first piece of the filename matches but the file extensions do not. Any suggestion will help. I am fairly green and would greatly appreciate the help.
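A rough sketch of one approach, using hypothetical table and column names: derive a join key by keeping only the part of each filename before the first '.', so both sides reduce to the same base value (6024500036687 in the example above).

SELECT a.FileName, b.FileName, a.Col1, a.Col2, a.Col3
FROM dbo.TableA AS a
JOIN dbo.TableB AS b
  ON LEFT(a.FileName, CHARINDEX('.', a.FileName + '.') - 1)
   = LEFT(b.FileName, CHARINDEX('.', b.FileName + '.') - 1);
-- the '.' appended inside CHARINDEX guards against names with no extension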

update still difficult??

Posted: 13 Mar 2013 03:05 PM PDT

I have a table which has three columns:

[code]
create table employees (
    ID int NOT NULL PRIMARY KEY,
    NAME varchar(10) NULL,
    EXTENDEDID int NOT NULL IDENTITY(1,1)
)

INSERT INTO employees (ID, NAME, EXTENDEDID) VALUES
    (1, 'Sean', 1),
    (3, 'Jason', 2),
    (2, 'Gail', 3),
    (5, 'Jeff', 4)
[/code]

I have to identify those rows where ID is not equal to EXTENDEDID, so I did:

[quote]Select * from employees where ID <> EXTENDEDID[/quote]

Now I have to make the two columns equal. I know that EXTENDEDID is an identity column and I can't change it, so I am trying to make the columns equal by comparing with EXTENDEDID. The required output should be:

[quote]
ID  NAME   EXTENDEDID
1   Sean   1
2   Jason  2
3   Gail   3
4   Jeff   4
[/quote]

I used the following statement(s) to try to get that output:

[quote]
Update employees set ID = (select EXTENDEDID from employees where ID <> EXTENDEDID) where ID <> EXTENDEDID
update employees set ID = EXTENDEDID
Update employees SET ID = EXTENDEDID WHERE employees.ID <> employees.EXTENDEDID
[/quote]

All of them fail. My next requirement is to print all those records that have been updated. Any suggestions are appreciated :)
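A sketch of one possibility: a single set-based UPDATE with an OUTPUT clause, which also returns ("prints") the rows that were changed.

UPDATE dbo.employees
SET    ID = EXTENDEDID
OUTPUT inserted.ID, inserted.NAME, inserted.EXTENDEDID
WHERE  ID <> EXTENDEDID;

Because primary key uniqueness is checked at the end of the statement, swapping values such as 3 to 2 and 2 to 3 within one UPDATE does not violate the constraint.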

Reg Backup Strategy

Posted: 13 Mar 2013 04:07 PM PDT

Is it best practice to take backups of system databases and user-created databases separately, through different maintenance plans and jobs, or should we take them together?

Why is "instead of delete" trigger not fired by delete inside "after update" trigger

Posted: 13 Mar 2013 02:40 PM PDT

One of my tables has both an "after update" and an "instead of delete" trigger. When I perform a delete within the after update trigger, the instead of delete trigger is not fired. Although I don't actually want the instead of delete trigger to fire in this particular case, I'm troubled by the fact that I haven't seen anything in the documentation that indicates it shouldn't. If I understand correctly, the recursion is indirect, and hence should not be prevented by RECURSIVE_TRIGGERS being disabled (the setting in my database).

The only thing I found in the docs that seemed close to explaining what I'm seeing is the following from the SQL Server 2008 R2 documentation:

[i]If an INSTEAD OF trigger defined on a table executes a statement against the table that would ordinarily fire the INSTEAD OF trigger again, the trigger is not called recursively. Instead, the statement is processed as if the table had no INSTEAD OF trigger and starts the chain of constraint operations and AFTER trigger executions. For example, if a trigger is defined as an INSTEAD OF INSERT trigger for a table, and the trigger executes an INSERT statement on the same table, the INSERT statement executed by the INSTEAD OF trigger does not call the trigger again. The INSERT executed by the trigger starts the process of performing constraint actions and firing any AFTER INSERT triggers defined for the table.[/i]

This statement, however, applies only to the case in which the statement triggering the recursive call to the instead of delete trigger is itself within the instead of delete trigger. In my case, the delete statement in question is in an after update trigger, so the statement from the docs seems not to apply.

I noticed that at least one other user has been puzzled by this behavior: http://www.sqlservercentral.com/Forums/Topic1387252-23-1.aspx - still no response to his post...

Does anyone know what's going on here? I could explicitly disable triggers before executing the delete statement within the after update trigger, but I don't want to do this unnecessarily...

Thanks,
Brett S.
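A minimal repro sketch of the setup described, with hypothetical object names (an AFTER UPDATE trigger that deletes from its own table, plus an INSTEAD OF DELETE trigger, RECURSIVE_TRIGGERS left at its default of OFF):

CREATE TABLE dbo.Widgets (WidgetID int PRIMARY KEY, Status varchar(10));
GO
CREATE TRIGGER trg_Widgets_InsteadOfDelete ON dbo.Widgets
INSTEAD OF DELETE
AS
    PRINT 'instead of delete fired';
GO
CREATE TRIGGER trg_Widgets_AfterUpdate ON dbo.Widgets
AFTER UPDATE
AS
    DELETE w
    FROM dbo.Widgets AS w
    JOIN inserted AS i ON i.WidgetID = w.WidgetID
    WHERE i.Status = 'retired';   -- does this DELETE fire the INSTEAD OF DELETE trigger?
GO
INSERT dbo.Widgets VALUES (1, 'active');
UPDATE dbo.Widgets SET Status = 'retired' WHERE WidgetID = 1;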

"Truncation may occur due to inserting data from data flow column...." in SSIS 2008

Posted: 13 Mar 2013 05:18 PM PDT

Hi All, I am getting that error in SSIS 2008 even though I changed the column length in the flat file source in the Advanced Editor. I guess it is not being saved, so it keeps giving the same error. Could anybody tell me how that metadata property can be refreshed, or how to solve this issue? Regards, Ravindra

How to add below complex conditions in SQL script..

Posted: 13 Mar 2013 04:29 PM PDT

Hi All, below is my query:

DECLARE @FromDate DATETIME
DECLARE @EndDate DATETIME
SET @FromDate = '2013-01-01 00:00:00.000'
SET @EndDate = '2013-02-13 00:00:00.000'

SELECT DISTINCT
    year(sd.FKDAT) As YEARWISE_DATA,
    so.vkbur As MARKET,
    so.bezei As NAME,
    sd.kunrg As PARTY,
    cm.NAME1 As PARTY_NAME,
    --za.FGCODE As ITEM,
    --za.FGDESC As ITEM_DESCRIPTION,
    --za.EANNUM As USACODE,
    im.MATNR As ITEM,
    im.MAKTX As ITEM_DESCRIPTION,
    im.EAN11 As USACODE,
    SUM(sd.FKIMG) As QUANTITY,
    SUM(vb.NETWR) As VALUE_IN_FC,
    SUM(sd.NTGEW) As WEIGHT
    -- (SUM(vb.KWMENG) - SUM(sd.FKIMG)) As PENDING_QUANTITY,
    -- (SUM(vb.NETWR) - SUM(sd.NETWR)) As PENDING_VALUE,
    -- (SUM(vb.NTGEW) - SUM(sd.NTGEW)) As PENDING_WEIGHT
FROM sales_office so
LEFT JOIN SALES_DATA sd ON so.VKBUR = sd.VKBUR
INNER JOIN Item_Mas im ON sd.MATNR = im.MATNR
INNER JOIN Cust_Mas cm ON sd.KUNRG = cm.KUNNR
INNER JOIN VBAP vb ON sd.AUBEL = vb.VBELN AND sd.AUPOS = vb.POSNR and sd.MATNR = vb.MATNR
WHERE sd.FKDAT >= @FromDate AND sd.FKDAT <= @EndDate AND im.EAN11 != ' '
GROUP BY YEAR(sd.FKDAT), so.vkbur, so.bezei, sd.kunrg, cm.NAME1,
    --za.FGCODE As ITEM,
    --za.FGDESC As ITEM_DESCRIPTION,
    --za.EANNUM As USACODE,
    im.MATNR, im.MAKTX, im.EAN11
ORDER BY 1, so.VKBUR, sd.kunrg, im.EAN11 ASC

For the PENDING_QUANTITY, PENDING_VALUE and PENDING_WEIGHT calculations I have to apply two conditions. I check the sales_data table with the condition (sd.FKDAT >= @FromDate AND sd.FKDAT <= @EndDate AND sd.KUNRG = 'HA127' AND sd.MATNR = 'FG151765162081222').

[b]Condition 1[/b]: If I find no entries in the result set (0 rows affected), then I want this result set: QUANTITY = 0, VALUE_IN_FC = 0, WEIGHT = 0, PENDING_QUANTITY = SUM(vb.KWMENG), PENDING_VALUE = SUM(vb.NETWR) and PENDING_WEIGHT = SUM(vb.NTGEW) from the VBAP table, with one more filter applied only to the pending-attribute calculation, i.e. (vb.AUDAT >= @FromDate AND vb.AUDAT <= @EndDate AND vb.KUNNR = 'HA127' AND vb.MATNR = 'FG151765162081222'), along with the other select-query output.

[b]Condition 2[/b]: If I do find entries in sales_data based on the parameters above, then QUANTITY = SUM(sd.FKIMG), VALUE_IN_FC = SUM(vb.NETWR), WEIGHT = SUM(sd.NTGEW), and PENDING_QUANTITY = 0, PENDING_VALUE = 0, PENDING_WEIGHT = 0.

My concern is how to add these conditions to the above select query and to the inner join between sales_data and VBAP.
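A rough sketch of the general shape only (not the full query): drive the pending calculation from VBAP with a LEFT JOIN to the sales rows, and let CASE decide which side supplies each measure, so the "no matching sales rows" case (Condition 1) and the "rows found" case (Condition 2) fall out of one statement. It reuses @FromDate/@EndDate declared above and only the tables/columns named in the question.

SELECT  vb.KUNNR,
        vb.MATNR,
        SUM(CASE WHEN sd.AUBEL IS NOT NULL THEN sd.FKIMG  ELSE 0 END) AS QUANTITY,
        SUM(CASE WHEN sd.AUBEL IS NOT NULL THEN sd.NTGEW  ELSE 0 END) AS WEIGHT,
        SUM(CASE WHEN sd.AUBEL IS NULL     THEN vb.KWMENG ELSE 0 END) AS PENDING_QUANTITY,
        SUM(CASE WHEN sd.AUBEL IS NULL     THEN vb.NETWR  ELSE 0 END) AS PENDING_VALUE,
        SUM(CASE WHEN sd.AUBEL IS NULL     THEN vb.NTGEW  ELSE 0 END) AS PENDING_WEIGHT
FROM    VBAP AS vb
LEFT JOIN SALES_DATA AS sd
       ON  sd.AUBEL = vb.VBELN
       AND sd.AUPOS = vb.POSNR
       AND sd.MATNR = vb.MATNR
       AND sd.FKDAT >= @FromDate AND sd.FKDAT <= @EndDate
WHERE   vb.AUDAT >= @FromDate AND vb.AUDAT <= @EndDate
GROUP BY vb.KUNNR, vb.MATNR;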

Errors for sql server connection in asp server

Posted: 13 Mar 2013 04:03 PM PDT

On an ASP server we are getting an error for the SQL Server connection like the one below:

System.InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
   at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
   at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
   at System.Data.SqlClient.SqlConnection.Open()
   at AutoRenReg.ConnectToSQLDB()

Also, after ExecuteScalar, ExecuteReader and ExecuteNonQuery I am getting an error like:

System.Data.SqlClient.SqlException: A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)
   at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
   at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
   at System.Data.SqlClient.TdsParserStateObject.WriteSni()
   at System.Data.SqlClient.TdsParserStateObject.WritePacket(Byte flushMode)
   at System.Data.SqlClient.TdsParserStateObject.ExecuteFlush()
   at System.Data.SqlClient.TdsParser.TdsExecuteSQLBatch(String text, Int32 timeout, SqlNotificationRequest notificationRequest, TdsParserStateObject stateObj)
   at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
   at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
   at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)
   at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)
   at System.Data.SqlClient.SqlCommand.ExecuteReader()
   at WinMsg.Page_Load(Object sender, EventArgs e)

Neither error is seen all the time, but they happen sometimes. What may be the reason for these errors, and how can we solve them?

need assistance with query

Posted: 13 Mar 2013 03:42 PM PDT

Hello all! How could I return the top ten highest values in a column? Would it be somewhere along the lines of Select Top(10) StudentScore from dbo.Employees? The output would be along the lines of this: 100, 99, 66, 54, 33, etc... Does anyone know how? Thanks
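A sketch of the usual form: TOP on its own returns an arbitrary ten rows, so an ORDER BY is needed to get the ten highest values.

SELECT TOP (10) StudentScore
FROM dbo.Employees
ORDER BY StudentScore DESC;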

How to Get The Most Filled Records from Similar Records

Posted: 13 Mar 2013 09:29 AM PDT

How to get the most-filled records from similar records: I have a table called "Locations" and it contains 4 columns: "Neighborhood, City, Governorate, Country".

[b]
Neighborhood, City,  Governorate, Country
,             ,      ,            Egypt
,             ,      Alexandria,  Egypt
,             East,  Alexandria,  Egypt
,             West,  Alexandria,  Egypt
MoharemBek,   West,  Alexandria,  Egypt
MoharemBek,   North, Alexandria,  Egypt
,             ,      Cairo,       Egypt
,             ,      ,            Morocco
[/b]

Here the first record is filled only with the Country value and nothing else. The next record contains 2 filled columns, Governorate and Country, and the Country value is the same as the first record, so here the winner is the second record and we shall ignore the first record. The same with the third record: it contains 3 filled columns, City, Governorate and Country, the Governorate is the same as the second record and the Country the same as the first and second, so here the winner is the third record and we shall ignore the first 2 records.

The output I wish to get:

[b]
,           East,  Alexandria, Egypt
MoharemBek, West,  Alexandria, Egypt
MoharemBek, North, Alexandria, Egypt
,           ,      Cairo,      Egypt
,           ,      ,           Morocco
[/b]
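A sketch of one way to express "keep a row only if no more-filled row agrees with it on every value it does have", assuming missing values are stored as empty strings (swap the '' tests for IS NULL checks if they are NULLs):

SELECT l.Neighborhood, l.City, l.Governorate, l.Country
FROM dbo.Locations AS l
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.Locations AS m
    WHERE m.Country = l.Country
      AND (l.Governorate  = '' OR m.Governorate  = l.Governorate)
      AND (l.City         = '' OR m.City         = l.City)
      AND (l.Neighborhood = '' OR m.Neighborhood = l.Neighborhood)
      AND (  CASE WHEN m.Neighborhood <> '' THEN 1 ELSE 0 END
           + CASE WHEN m.City         <> '' THEN 1 ELSE 0 END
           + CASE WHEN m.Governorate  <> '' THEN 1 ELSE 0 END)
        > (  CASE WHEN l.Neighborhood <> '' THEN 1 ELSE 0 END
           + CASE WHEN l.City         <> '' THEN 1 ELSE 0 END
           + CASE WHEN l.Governorate  <> '' THEN 1 ELSE 0 END)
);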

ProcessAddress times out every time - result set size, time of day seem irrelevant

Posted: 13 Mar 2013 02:16 PM PDT

I am using the ProcessAddress geocoding function found [url=http://yaddressudf.codeplex.com/]here[/url] on CodePlex, but no matter what, it always times out on me. Is this just an issue with Yahoo's geocoding, or am I doing something wrong? Here's my stored procedure:

BEGIN
    UPDATE [Account_Master_2013]
    SET [County] = pa.County,
        [Long] = pa.Longitude,
        [Lat] = pa.Latitude
    FROM [Account_Master_2013]
    CROSS APPLY ProcessAddress([Account_Master_2013].[Address], [Account_Master_2013].CITY + ', ' + [Account_Master_2013].[STATE], null) pa
    WHERE [Account_Master_2013].[state] = @StateRegion
      AND [Account_Master_2013].County IS NULL
      AND ACM IN (SELECT TOP (@Top_N_Records) ACM FROM [Account_Master_2013] WHERE lat = 0 OR lat IS NULL);
END

At first I tried limiting the records processed to a really small set, but that didn't seem to matter. Nor did the time of day that I ran the stored procedure (because I was wondering if the server were just busy). The code compiles fine, and runs, but times out after about two minutes, even if I set @Top_N_Records = 5.

What am I doing wrong? Or should I use something else to do geocoding? I am pretty sure the syntax for the update is right - Vedran Kesegic sorted that part out a long time ago... now it just plain fails though.

Where do I start to try to fix this? Thanks! Pieter

Fuzzy Matching Barcodes

Posted: 13 Mar 2013 06:01 AM PDT

I have a list of 1 million+ products with numeric barcode values, stored in a table as VARCHAR(50) and varying in length. I'm trying to find a way to do a fuzzy search query when a user enters a barcode to find a product. Let's say I have a barcode table like:

Barcodes
----------
98425103
98425104
99425103
99564312

If the user entered something like 98425102, I would expect it to return 98425103. However, if the user entered 97425103, I'd expect both 98425103 and 99425103 to be returned. I'm basically looking for the closest matches to the entered barcode, while factoring in misspellings/mistypes as well. Would I implement this via a RegEx CLR function, or is there a way to do this through the standard LIKE operator?
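A rough sketch of a third option (neither RegEx nor LIKE): rank stored barcodes by how many digit positions differ from the user's input - a crude Hamming distance. It assumes a hypothetical dbo.Barcodes(Barcode) table and compares only codes of the same length as the input.

DECLARE @input varchar(50) = '97425103';

SELECT TOP (5)
       b.Barcode,
       SUM(CASE WHEN SUBSTRING(b.Barcode, n.n, 1) <> SUBSTRING(@input, n.n, 1)
                THEN 1 ELSE 0 END) AS Mismatches
FROM dbo.Barcodes AS b
CROSS APPLY (SELECT TOP (LEN(@input))
                    ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
             FROM sys.all_objects) AS n
WHERE LEN(b.Barcode) = LEN(@input)
GROUP BY b.Barcode
ORDER BY Mismatches;

For handling insertions and deletions as well as substitutions, a Levenshtein distance function (CLR or T-SQL) would be the usual next step.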

how smart is the SS2008R2 query optimizer?

Posted: 13 Mar 2013 05:56 AM PDT

Hello - How smart is the SS2008R2 query optimizer? For example, consider the following query:

select top 10 *
from ActivityPointer
where Subject like '%new registrant%'
  and OwningUser = 'F9841D46-5904-E011-949A-005056BC3615'

A guid search is going to be faster than a text search, and a LIKE search ('%...%') makes text searches even slower. So is SS2008R2 smart enough to optimize this query before it executes it? For example, will SS2008R2 determine on its own that the OwningUser guid search provides better performance than the text LIKE search, and therefore apply the guid search first in order to reduce the number of rows that the LIKE search is applied to?

Diagnosing SQL response time issues

Posted: 13 Mar 2013 05:46 AM PDT

We moved our server racks in our colo facility over the weekend, upgraded to a new firewall, and installed new switches. I've got one SQL server (SQL 2008 R2 SP2, Windows 2008 R2 Enterprise) in that facility. Before the move, it was behaving just fine. After the move, I'm seeing blocking and deadlocks on a regular basis. Note that this SQL server is very underutilized - far more hardware than necessary for the load it has.

The interesting part of this is that if I compare SQL server stats from before the move and after, the only stat that's changed is that wait times have started to creep up. But I can't spot any good reason for that on the server itself.

What we *are* seeing is data getting *to* the SQL server fine, but a huge delay in the data getting back from the server to the calling application. My favorite test case: I fire off a very small update from one application (it updates four records in a table) against a SQL server that isn't in the colo facility, and response time is the same as prior to the move. If I point that same application at the SQL server in the colo facility and send the exact same update, it takes 42+ seconds to finish. But you don't see that 42+ seconds on the SQL server. And if the application times out, the data is still getting written to the server.

We've run hardware diagnostics on the SQL server - nothing reporting an issue. We've swapped out network cables, and have tested changing switches and even moving back to the old switches that were in place prior to the move. No change in behavior. I'm seeing far more ASYNC_NETWORK_IO waits than I did prior to the move, but nowhere near enough to account for the sheer lag we're experiencing. I've checked baseline SQL Server configuration from before and after - no changes to the config. We did deploy quite a few Windows updates - this server hadn't been offline for maintenance in several months.

My network guys are telling me they don't see anything unusual, and my SQL folks and I aren't finding anything pointing to the SQL server directly - everything I'm seeing points to dropped connections over the network. I'm perfectly willing to hire a consultant to come in and look at the situation - but I'm not even sure what type of consultant to call at this point, a network one or a SQL one. Any suggestions for where to hunt next would be appreciated. Short of physically removing the server from the colo facility and racking it in another office, I'm kind of out of things to test.

Wednesday, March 13, 2013

[SQL Server] Creating my first UD

Creating my first UD

Posted: 13 Mar 2013 10:04 AM PDT

I really like SQL Server, but one of the things that drives me crazy is formatting columns. The numbers in the databases that I pull from are usually numeric, and I really need commas to help me read them. I found the code below that will convert the numeric values into numbers with commas. Since I use this all of the time, I would like to create a UDF. I have never done this before, but have found many examples. This is what I would like the UDF to do:

REPLACE(CONVERT(varchar(20), (CAST(SUM(@numtocomma) AS money)), 1), '.00', '')

So far I have this:

CREATE FUNCTION commainsert ( @numtocomma numeric(38,6))
RETURNS money
AS
BEGIN
    declare @monval money
    set @monval = REPLACE(CONVERT(varchar(20), (CAST(SUM(@numtocomma) AS money)), 1), '.00', '')
    return @monval
END

select dbo.commainsert(10201.21) as Number

yields this:

Number
10201.21

Would someone please tell me where I went wrong? Thanks
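A sketch of one likely fix: because the function RETURNS money, the comma formatting added by CONVERT(..., 1) is thrown away when the string is converted back to money on return; declaring the return type as varchar keeps the formatted string (the SUM() around a scalar variable can also be dropped).

CREATE FUNCTION dbo.commainsert (@numtocomma numeric(38,6))
RETURNS varchar(20)
AS
BEGIN
    RETURN REPLACE(CONVERT(varchar(20), CAST(@numtocomma AS money), 1), '.00', '');
END
GO

SELECT dbo.commainsert(10201.21) AS Number;   -- 10,201.21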

outer reference error with group by

Posted: 13 Mar 2013 07:18 AM PDT

I get the following error: "Each GROUP BY expression must contain at least one column that is not an outer reference" when I try to run this query:

INSERT INTO progsumm_dba.acc_task_sums
    (fiscal_year, plan_option, goal, objective_num, task_code, account_code, person_cost, added_cost, fte_prcnt)
SELECT :fa_year, s.plan_option, s.goal, s.objective_num, s.task_code, l.account_code,
       sum(s.person_cost), sum(s.added_cost), sum(ISNULL(s.fte_prcnt,0))
FROM progsumm_dba.fund_task_sums s, account_funds_lookup l
WHERE s.fiscal_year = :fa_year
  AND s.plan_option = :fa_plan
  AND s.fiscal_year = l.fiscal_year
  AND s.plan_option = l.plan_option
  AND s.fund_code = l.fund_code
GROUP BY :fa_year, s.plan_option, s.goal, s.objective_num, s.task_code, l.account_code

I have no idea what is meant by 'outer reference'. Can anyone clear this up for me?
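A minimal illustration sketch, using a hypothetical dbo.Payments table: a GROUP BY expression that contains no column (here a variable, playing the role of the :fa_year host variable above) raises this error, and the usual fix is to drop that expression from the GROUP BY while keeping it in the SELECT list.

DECLARE @y int = 2013;

-- raises: Each GROUP BY expression must contain at least one column
-- that is not an outer reference
SELECT @y, Vendor, SUM(Amount)
FROM dbo.Payments
GROUP BY @y, Vendor;

-- works: the variable can still appear in the SELECT list
SELECT @y, Vendor, SUM(Amount)
FROM dbo.Payments
GROUP BY Vendor;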

Microsoft Access 2010 & SQL Server 2008 Express

Posted: 20 Jan 2011 12:34 AM PST

What's the best way for clients to connect to a SQL Server? Should all clients have their own separate Access data project, or can one be shared on a network drive amongst all the clients that need access to SQL Server? Also, how can I share the reports created within the Access data project?

SQL Query - Update varchar ID's...confused.com

Posted: 13 Mar 2013 02:20 AM PDT

Hi, so I have been replacing an old database of ours with a new one - it has all gone relatively well. The new database structure etc. is far better, far more efficient and, well, awesome. Unfortunately, something I couldn't change was the ID structure (the users wanted the same data presented to them, just a lot better). My problem: I have had to rename tables/queries/stored procedures/triggers to follow a new convention, which is fine, but unfortunately I would "really" like to change one table's IDs.

Sample IDs:

[code="sql"]
AG001/1/P1
AG002/1/P1
AG003/1/P1
AG004/1/P1
AG006/1/P1
AG007/1/PO1
AG008/1/P01
AG009/1/P01
AG010/1/P01
AL001/1/P1
AL002/1/P1
AL003/1/P1
CC013/1/P01
[/code]

As you can see, they are a pain in the... It basically mirrors three tiers (each tier is a section between the slashes). I would like to change the IDs in the final tier to the following:

[code="sql"]
AG001/1/L1
AG002/1/L1
AG003/1/L1
AG004/1/L1
AG006/1/L1
AG007/1/LO1
AG008/1/L01
AG009/1/L01
AG010/1/L01
AL001/1/L1
AL002/1/L1
AL003/1/L1
CC013/1/L01
[/code]

I find this quite difficult to explain: the first XX000 is the 1st tier, the /1/ is the second tier, and the X01 or X1 or X001 is the final tier (the final tier is the messiest due to human error). Unfortunately there are too many records to edit manually. I have tried looking into using CHARINDEX/REPLACE/SUBSTRING, which are helpful - the following gives me the final tier, which I then need to edit - but how can I go about editing it to the new format?

Used in a cursor:

[code="sql"]
RIGHT(@ID, CHARINDEX('/', REVERSE(ID)) - 1)
[/code]

I have also thought about simply recreating the final-tier IDs from scratch, so they all follow Lnnn - this would be neater, but again I unfortunately don't know how to do this using SQL. Now, I hold the 2nd tier's ID in another column:

[code="sql"]
AG001/1
AG002/1
AG003/1
AG004/1
AG006/1
AG007/1
AG008/1
AG009/1
AG010/1
AL001/1
AL002/1
AL003/1
CC013/1
[/code]

Perfect results:

[code]
AG001/1/L001
AG001/2/L001
AG001/2/L002
AG001/2/L003
CE001/1/L001
CE002/1/L001
[/code]
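A sketch of the "recreate from scratch" idea, without a cursor, using hypothetical names dbo.Samples(ID, Tier2ID): number the rows within each tier-2 group and rebuild the final tier as L###.

;WITH n AS (
    SELECT ID, Tier2ID,
           ROW_NUMBER() OVER (PARTITION BY Tier2ID ORDER BY ID) AS rn
    FROM dbo.Samples
)
UPDATE n
SET ID = Tier2ID + '/L' + RIGHT('000' + CAST(rn AS varchar(3)), 3);

This would produce values like AG001/1/L001, AG001/2/L001, AG001/2/L002, matching the "perfect results" above.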

Dead lock not terminated

Posted: 13 Mar 2013 01:04 AM PDT

Dear All, I have read that SQL Server takes care of deadlocks on its own. But when I run the following query through SSMS in 2 different query sessions, it keeps on going:

USE AdventureWorks
BEGIN TRAN
UPDATE Person.Address SET AddressLine1 = 'New address' WHERE AddressID = 25
WAITFOR DELAY '0:0:10'
SELECT * FROM Person.Address WHERE AddressID = 20

Am I doing something wrong? I am using SQL Server 2008 R2. Regards

Sum values in a select statement

Posted: 13 Mar 2013 12:15 AM PDT

I know this is a simple question. However, I'm not sure where to start. I'm new to SQL and would appreciate any help offered. How would I sum the following in example 1 and have it display as shown in example 2?

--Example 1
Expense_Id  Expense  Price  Expense_By
1           Coffee   5.00   Jim
2           Coffee   3.00   Jim
3           Coffee   3.00   Mike
4           GAS      3.00   Jim
5           GAS      3.00   Jim
6           GAS      3.00   Mike
7           GAS      3.00   Mike

--Example 2
Expense_By  Expense  Total_Price
Jim         GAS      6.00
Mike        GAS      6.00
Jim         Coffee   8.00
Mike        Coffee   3.00
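A sketch, assuming the rows live in a hypothetical table called dbo.Expenses: group by the person and the expense type, and sum the price.

SELECT Expense_By, Expense, SUM(Price) AS Total_Price
FROM dbo.Expenses
GROUP BY Expense_By, Expense
ORDER BY Expense DESC, Expense_By;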

[how to] Completely uninstall the Oracle server on Linux

Completely uninstall the Oracle server on Linux

Posted: 13 Mar 2013 08:29 PM PDT

I have searched Google a lot for how to uninstall the Oracle server on Linux, but I can't figure out any correct solution. I am hoping to reinstall the server, because a lot of errors occur when I try to start it up. Does anyone know how to completely remove all data belonging to the Oracle server?

Postgres Automated Install

Posted: 13 Mar 2013 05:55 PM PDT

Is it possible to automatically create a database after a postgres one click install (from Enterprise DB)?

How to become oracle developer

Posted: 13 Mar 2013 04:09 PM PDT

In the future I want to become an Oracle developer, and I have some basic questions for those who are, or used to be, Oracle developers:

  • I got a 2:2 in my ICT degree. Do I need a masters in order to become an Oracle developer, or is it not necessary? Or am I better off following the Oracle and PL/SQL for Dummies books, working through them to build the knowledge, and mentioning in my CV that I have gone through them?

  • I will obviously need Oracle at home in future, but it is expensive. Do businesses give you a license of the software for home use, or are you expected to purchase Oracle for home yourself?

  • I obviously need experience to become an Oracle developer. I did a little bit of Oracle at undergraduate level in my final year, where I simply followed a portfolio doing tasks such as queries, inserting data, triggers, and a little bit of XML. Is it a lot different in the workplace? Are there companies willing to take on learning developers on a voluntary basis?

Thanks

Database research ideas [closed]

Posted: 13 Mar 2013 04:00 PM PDT

I want to take part in a Master by Research degree at university, and I am really interested in database programming, especially in either MySQL or Oracle. I have just finished my final undergraduate year, and my strongest subject was Databases, where I got an A. The assignment involved following a portfolio where I normalized data and then, using Oracle, followed steps for inserting data, updating data, triggers, queries, etc.

Now, I normally like to make things, but in a Master by Research I need to research a topic and then create a product and write a thesis based on my research.

My question is: for a person like me, who has only a little bit of experience using Oracle and has used MySQL during my undergraduate years, can anybody recommend research topics I could get into that are suitable for me and that I can handle? I am expected to write a research proposal but do not know what counts as suitable research for databases. Should I combine a database with a web application, for example, or concentrate solely on the database?

Thanks

How to properly configure PostgreSQL RAM usage?

Posted: 13 Mar 2013 03:08 PM PDT

Using PostgreSQL, we have several hundred concurrent processes doing regular database access. We have our data spread out through several database servers, each of which houses a connection pool with PgBouncer. The problem is that RAM usage (via top - not sure if this is the best way to determine that) tends to climb to nearly or at 100%, on all servers. I am pretty sure this is bad.

I have tried out several configurations of pgbouncer / postgres, and eventually (after a few minutes of my system running) the RAM usage goes up to this point.

My questions are:

  1. Should I set up my connection pooler on a different server than the database? It seems that so many open connections on the same server could be causing this.
  2. In general, what are good guidelines for RAM usage on Postgres? I really don't know how to tell if the server is behaving well / as expected or badly.

Thanks very much! Let me know if more information is needed and I'll try to post ASAP.

How to properly configure pgBouncer transaction level pooling?

Posted: 13 Mar 2013 03:04 PM PDT

What is a good way to determine the number of available transaction connections when using pgBouncer to pool database connections? Also, how should I determine the number of max_connections and max_transaction_locks on postgresql.conf?

Our application has over 200 concurrent processes, each of which regularly reads and writes to the database. Using session-level, I would expect the number of pooled connections to be around the number of concurrent processes working on the DB at a given time, but I don't really know how to translate that to transaction-level pooling.

Thanks very much! Let me know if more information is necessary and I will gladly oblige.

Consistent Database Export of Oracle Database

Posted: 13 Mar 2013 04:14 PM PDT

I'm quite confused and I need some clarification!

I do full export of database with Export Pump (for replication/duplication).

what usualy I do is:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP RESTRICT

then run the export, so I get a consistent export (DMP file).

Now, can I instead export using only the parameter CONSISTENT=Y and/or FLASHBACK_SCN=SOME_NUMBER? What are the differences (if any), and when should each be used?

I hope someone can explain this to me in simple language!

Can anyone recommend a best practice?

How to speed up SQL Server RIGHT JOIN on 100MM+ Records - use TVF?

Posted: 13 Mar 2013 04:19 PM PDT

  1. I use a table insert to UNION a CTE of converting transactions to visits (converting or not) and then summarize the visit data (no details, just pages visited & txns completed) to get funnel clickthrough & overall conversion; after this, RIGHT JOIN to the VisitData table

    • The transaction table has 500k relevant rows or less for every run of this process, but
    • The VisitData table has 3MM or more rows per day, so if I run this weekly it ends up in a 25MM-row batch.
  2. To get overall visit info for converting and non-converting visits into one summary table, as mentioned above, I have to:

    1. RIGHT JOIN the reasonably-sized Transactions CTE_T, 500k of ~2MM Txn table rows, to
    2. the Ridicularge Visits table V, ~90MM rows per partition
      • Since the Visits table is so huge, it is partitioned by date, and although it has indices for the keys I join on, SQL Server still does a table scan to attach the rest of the augmenting data (e.g. visitor location etc.) from the visit table V to the transactions info from CTE_T.

This takes a very long time, between 3-7 hours depending on how wide the timespan is (the Visit table is partitioned by month @ 90MM rows per partition), but never less than 2 hours JUST TO SCAN THE TABLE (!) before any query/join logic is applied or any other operation performed.

  • My question is this: is it possible to write a table-valued function that will return (@UserID,VisitData1,VisitData2,etc), allowing SQL Server to spool (I'm told) this in the background while allowing other things to occur in the meantime?
  • If the answer is YES, have you experienced significant (or even just noticeable) gains in performance using this method?

Join Calendar Table

Posted: 13 Mar 2013 02:43 PM PDT

I have a data table and a calendar table. I do a join on these tables to get sequential dates even if there is no data for a given day.

My problem now is that there are several id's, and I need a count for each one per day. The problem is that I get a 'null' value for the id field if there is no data for that date. Also, my query below only works if there is one id in the table; if I add a second id, the empty rows are not shown because they exist for the other id.

My sample tables are:

    --Datatable
    CREATE TABLE [dbo].[datatable](
        [the_id] int,
        [the_date] date,
        [the_count] int NULL
    ) ON [PRIMARY]

    INSERT INTO datatable (the_id, the_date, the_count) VALUES
    (1, dateadd(d, -5, getdate()), 37),
    (1, dateadd(d, -5, getdate()), 30),
    (1, dateadd(d, -5, getdate()), 70),
    (1, dateadd(d, -4, getdate()), 8),
    (1, dateadd(d, -4, getdate()), 9),
    (1, dateadd(d, -2, getdate()), 19),
    (1, dateadd(d, -2, getdate()), 3),
    (1, dateadd(d, -1, getdate()), 20)

    INSERT INTO datatable (the_id, the_date, the_count) VALUES
    (2, dateadd(d, -5, getdate()), 27),
    (2, dateadd(d, -5, getdate()), 17),
    (2, dateadd(d, -5, getdate()), 37),
    (2, dateadd(d, -3, getdate()), 8),
    (2, dateadd(d, -3, getdate()), 89),
    (2, dateadd(d, -2, getdate()), 19),
    (2, dateadd(d, -2, getdate()), 9),
    (2, dateadd(d, -1, getdate()), 20),
    (2, dateadd(d, -1, getdate()), 2)

    --Calendartable
    CREATE TABLE [dbo].[calendartable](
        [the_day] date
    )

    DECLARE @StartDate date
    DECLARE @EndDate date
    SET @StartDate = DATEADD(d, -10, GETDATE())
    SET @EndDate = DATEADD(d, 10, GETDATE())

    WHILE @StartDate <= @EndDate
    BEGIN
        INSERT INTO [calendartable] (the_day)
        SELECT @StartDate
        SET @StartDate = DATEADD(dd, 1, @StartDate)
    END

My query:

    --Query
    DECLARE @mindate date
    DECLARE @maxdate date
    SELECT @mindate = MIN(CONVERT(date, the_date)),
           @maxdate = MAX(CONVERT(date, the_date))
    FROM datatable

    SELECT dt.the_id, isnull(dt.the_date, ct.the_day),
           (SELECT SUM(the_count) WHERE the_id = dt.the_id and the_date = dt.the_date) as the_sum_count
    FROM calendartable AS ct
    LEFT JOIN datatable AS dt
        ON dt.the_date = ct.the_day
        AND ct.the_day BETWEEN @mindate AND @maxdate
    WHERE ct.the_day BETWEEN @mindate AND @maxdate
    GROUP BY dt.the_id, dt.the_date, ct.the_day
    ORDER BY dt.the_id, dt.the_date ASC

This query shows the rows with no data on a given day, but I also need the id to which that date belongs. And the query shows no empty rows once there is a second id in the datatable.

Please help!

EDIT:

To be more explicit which output I want:

    ID  the_date    the_count
    1   2013-03-08  137
    1   2013-03-09  17
    1   2013-03-10  null  <--- this is missing in the above query
    1   2013-03-11  22
    1   2013-03-12  20
    2   2013-03-08  81
    2   2013-03-09  null  <--- this is also missing
    2   2013-03-10  97
    2   2013-03-11  28
    2   2013-03-12  22

I hope my question is clearer now!
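A sketch of one common fix: cross join the calendar to the distinct ids first, so every (id, day) pair exists, then left join the data onto that.

DECLARE @mindate date, @maxdate date;
SELECT @mindate = MIN(the_date), @maxdate = MAX(the_date) FROM datatable;

SELECT i.the_id, ct.the_day, SUM(dt.the_count) AS the_sum_count
FROM calendartable AS ct
CROSS JOIN (SELECT DISTINCT the_id FROM datatable) AS i
LEFT JOIN datatable AS dt
       ON dt.the_id = i.the_id
      AND dt.the_date = ct.the_day
WHERE ct.the_day BETWEEN @mindate AND @maxdate
GROUP BY i.the_id, ct.the_day
ORDER BY i.the_id, ct.the_day;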

1286 - Unknown storage engine 'InnoDB'

Posted: 13 Mar 2013 01:38 PM PDT

I am trying to use roundcube and it recently just broke. I don't know if this is due to a MySQL update that happened recently or not but in phpMyAdmin I get the following error if I try and view a table:

1286 - Unknown storage engine 'InnoDB'  

and

mysql> SHOW ENGINES;
+--------------------+---------+----------------------------------------------------------------+--------------+------+------------+
| Engine             | Support | Comment                                                        | Transactions | XA   | Savepoints |
+--------------------+---------+----------------------------------------------------------------+--------------+------+------------+
| MRG_MYISAM         | YES     | Collection of identical MyISAM tables                          | NO           | NO   | NO         |
| CSV                | YES     | CSV storage engine                                             | NO           | NO   | NO         |
| MyISAM             | DEFAULT | MyISAM storage engine                                          | NO           | NO   | NO         |
| BLACKHOLE          | YES     | /dev/null storage engine (anything you write to it disappears) | NO           | NO   | NO         |
| FEDERATED          | NO      | Federated MySQL storage engine                                 | NULL         | NULL | NULL       |
| PERFORMANCE_SCHEMA | YES     | Performance Schema                                             | NO           | NO   | NO         |
| ARCHIVE            | YES     | Archive storage engine                                         | NO           | NO   | NO         |
| MEMORY             | YES     | Hash based, stored in memory, useful for temporary tables      | NO           | NO   | NO         |
+--------------------+---------+----------------------------------------------------------------+--------------+------+------------+
8 rows in set (0.00 sec)

and

[mysqld]
default-storage-engine=MyISAM
local-infile=0
symbolic-links=0
skip-networking
max_connections = 500
max_user_connections = 20
key_buffer = 512M
myisam_sort_buffer_size = 64M
join_buffer_size = 64M
read_buffer_size = 12M
sort_buffer_size = 12M
read_rnd_buffer_size = 12M
table_cache = 2048
thread_cache_size = 16K
wait_timeout = 30
connect_timeout = 15
tmp_table_size = 64M
max_heap_table_size = 64M
max_allowed_packet = 64M
max_connect_errors = 10
query_cache_limit = 1M
query_cache_size = 64M
query_cache_type = 1
low_priority_updates=1
concurrent_insert=ALWAYS
log-error=/var/log/mysql/error.log
tmpdir=/home/mysqltmp
myisam_repair_threads=4

[mysqld_safe]
open_files_limit = 8192
log-error=/var/log/mysql/error.log

[mysqldump]
quick
max_allowed_packet = 512M

[myisamchk]
key_buffer = 64M
sort_buffer = 64M
read_buffer = 16M
write_buffer = 16M

Ideas as to how to fix? It used to work just fine.

Why is SQL running the same query longer from another connection?

Posted: 13 Mar 2013 03:29 PM PDT

Here is the issue overview: why does my stored procedure run faster when executed locally vs. remotely?

Don't jump to any conclusions just yet; let me explain what I mean...

Here is the setup:
A Windows 2008R2 (correction: 2003) application server executes a stored procedure that performs some action (what it's doing is not really important at this point). This stored procedure is executed on the SQL server over a TCP/IP connection to the database server. The DB server is physically located right next to the application server, and they are connected to each other via 1GB NICs to a 1GB switch. The DB server is running SQL 2005 SP2 Enterprise Edition, and has 16GB of memory and several vLUNs striped across 48 15k drives in an HP EVA FC-connected SAN. From all indicators thus far, there are no I/O, memory, or CPU constraints or limits being hit. Trace flag 1118 is on and TempDB is split across 8 files on their own vLUN. Data and TLogs also have their own vLUNs.

So, here is what I am seeing:
Using SQLCMD on the database server, with SQL Profiler running from the same DB server, I can execute the stored procedure and I see that the execution starts immediately and completes with a duration of about 2,100ms, with an IO of about 1200.

Using SQLCMD on the application server, with SQL Profiler running from the DB server, I can execute the same exact stored procedure, with the SAME exact parameters, and I see that the execution starts immediately and completes with a duration of about 110,000ms, with an IO of about 1200.

The query results in 1 row, with 4 columns [INT, INT, VARCHAR(50), VARCHAR(100)]

ASIDE: (I know the query is a train wreck; this is a regulated system and I cannot change it on a live production server, so please don't make any suggestions about doing so. The next version has been rewritten to be better.)

From everything we can see, there is no reason that we should be seeing differences like this, but what is happening is that the .NET application calling this query from the application server is timing out waiting for the response.

We have checked locking and blocking, WAIT states, query plans, IO contention, CPU contention, MEM contention, NETWORK saturation/utilization, performed index rebuilds on all indexes, updated all stats, and a handful of other items, but haven't come up with anything that points to why this is happening.

Please ask more questions if you have any, and make recommendations as you come up with them; depending on the impact (remember this is a production environment) we will try them and respond back.

-Thanks! -Chris

Can't add field that begins with an underscore in Crystal Reports

Posted: 13 Mar 2013 12:28 PM PDT

There's a field in a FoxPro table that I'm trying to pull in a Crystal Report (v10), and it begins with an _ and it doesn't appear in Field Explorer->Fields->Database Fields->->_fieldName

Are fields that start with _ invisible to Crystal Reports?

Troubleshooting Database Mail issues and Service Account permissions in SQL Server

Posted: 13 Mar 2013 02:08 PM PDT

I am having difficulties troubleshooting a Database Mail issue. When I try to stop and start the service using sysmail_start_sp and sysmail_stop_sp, I see a critical error in the Administrative Logs on the server.

The relevant exception appears to be:

Message: Access to the path 'C:\Windows\Microsoft.NET\Framework64\v2.0.50727\Config\machine.config' is denied.

I am not finding good documentation on which Windows account is used for Database Mail. I thought it would be the SQLAgent service. I have tried changing the permissions of this user to no avail.

Any guidance is appreciated.

Handling concurrent access to a key table without deadlocks in SQL Server

Posted: 13 Mar 2013 12:57 PM PDT

I have a table that is used by a legacy application as a substitute for IDENTITY fields in various other tables.

Each row in the table stores the last used ID LastID for the field named in IDName.

Occasionally the stored proc gets a deadlock - I believe I've built an appropriate error handler; however I'm interested to see if this methodology works as I think it does, or if I'm barking up the wrong tree here.

I'm fairly certain there should be a way to access this table without any deadlocks at all.

The database itself is configured with READ_COMMITTED_SNAPSHOT = 1.

First, here is the table:

CREATE TABLE [dbo].[tblIDs](
    [IDListID] [int] NOT NULL CONSTRAINT PK_tblIDs PRIMARY KEY CLUSTERED IDENTITY(1,1),
    [IDName] [nvarchar](255) NULL,
    [LastID] [int] NULL,
);

And the nonclustered index on the IDName field:

CREATE NONCLUSTERED INDEX [IX_tblIDs_IDName] ON [dbo].[tblIDs]
(
    [IDName] ASC
)
WITH (
    PAD_INDEX = OFF
    , STATISTICS_NORECOMPUTE = OFF
    , SORT_IN_TEMPDB = OFF
    , DROP_EXISTING = OFF
    , ONLINE = OFF
    , ALLOW_ROW_LOCKS = ON
    , ALLOW_PAGE_LOCKS = ON
    , FILLFACTOR = 80
);

GO

Some sample data:

INSERT INTO tblIDs (IDName, LastID) VALUES ('SomeTestID', 1);
INSERT INTO tblIDs (IDName, LastID) VALUES ('SomeOtherTestID', 1);
GO

The stored procedure used to update the values stored in the table, and return the next ID:

CREATE PROCEDURE [dbo].[GetNextID](
    @IDName nvarchar(255)
)
AS
BEGIN
    /*
        Description:    Increments and returns the LastID value from tblIDs
                        for a given IDName
        Author:         Max Vernon
        Date:           2012-07-19
    */
    DECLARE @Retry int;
    DECLARE @EN int, @ES int, @ET int;
    SET @Retry = 5;
    DECLARE @NewID int;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SET NOCOUNT ON;
    WHILE @Retry > 0
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION;
            SET @NewID = COALESCE((SELECT LastID FROM tblIDs WHERE IDName = @IDName),0)+1;
            IF (SELECT COUNT(IDName) FROM tblIDs WHERE IDName = @IDName) = 0
                INSERT INTO tblIDs (IDName, LastID) VALUES (@IDName, @NewID)
            ELSE
                UPDATE tblIDs SET LastID = @NewID WHERE IDName = @IDName;
            COMMIT TRANSACTION;
            SET @Retry = -2; /* no need to retry since the operation completed */
        END TRY
        BEGIN CATCH
            IF (ERROR_NUMBER() = 1205) /* DEADLOCK */
                SET @Retry = @Retry - 1;
            ELSE
            BEGIN
                SET @Retry = -1;
                SET @EN = ERROR_NUMBER();
                SET @ES = ERROR_SEVERITY();
                SET @ET = ERROR_STATE()
                RAISERROR (@EN,@ES,@ET);
            END
            ROLLBACK TRANSACTION;
        END CATCH
    END
    IF @Retry = 0 /* must have deadlock'd 5 times. */
    BEGIN
        SET @EN = 1205;
        SET @ES = 13;
        SET @ET = 1
        RAISERROR (@EN,@ES,@ET);
    END
    ELSE
        SELECT @NewID AS NewID;
END
GO

Sample executions of the stored proc:

EXEC GetNextID 'SomeTestID';

NewID
2

EXEC GetNextID 'SomeTestID';

NewID
3

EXEC GetNextID 'SomeOtherTestID';

NewID
2

EDIT:

I've added a new index, since the existing index IX_tblIDs_Name is not being used by the SP; I assume the query processor is using the clustered index since it needs the value stored in LastID. Anyway, this index IS used by the actual execution plan:

CREATE NONCLUSTERED INDEX IX_tblIDs_IDName_LastID ON dbo.tblIDs
(
    IDName ASC
)
INCLUDE
(
    LastID
)
WITH (FILLFACTOR = 100, ONLINE=ON, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);

EDIT #2:

I've taken the advice that @AaronBertrand gave and modified it slightly. The general idea here is to refine the statement to eliminate unnecessary locking, and overall to make the SP more efficient.

The code below replaces the code above from BEGIN TRANSACTION to END TRANSACTION:

BEGIN TRANSACTION;
SET @NewID = COALESCE((SELECT LastID FROM dbo.tblIDs WHERE IDName = @IDName), 0) + 1;

IF @NewID = 1
    INSERT INTO tblIDs (IDName, LastID) VALUES (@IDName, @NewID);
ELSE
    UPDATE dbo.tblIDs SET LastID = @NewID WHERE IDName = @IDName;

COMMIT TRANSACTION;

Since our code never adds a record to this table with 0 in LastID we can make the assumption that if @NewID is 1 then the intention is append a new ID to the list, else we are updating an existing row in the list.

SQL Server cluster questions please

Posted: 13 Mar 2013 02:13 PM PDT

We have SQL Server 2008 R2 cluster active/active/active (a, b, c) and three instances run on this environment.

We did a failover test on one of the instances:

  1. Manual failover from node c to b. After 3 minutes, the application cannot connect to SQL Server.
  2. Back to node c we are OK and application is happy
  3. Manual failover from node c to a. After 3 minutes, the application cannot connect to SQL Server.
  4. Back to node c we are OK and application is happy

Please give me some help. Thanks

oracle null value in quotes - yet another newbie question

Posted: 13 Mar 2013 01:34 PM PDT

I have a where clause like

"and uc.completion_status in ('[P....]') "   

Here ('[P....]') is the user input, taken from a dropdown list in the user interface, and the dropdown list consists of three options: Y, N or NotStarted.

NotStarted is defined with nvl(uc.completion_status, 'NotStarted'), so NotStarted is actually NULL, and NULL in quotes does not work. How can this problem be handled?

thanks in advance.
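A sketch (Oracle), assuming the alias uc comes from a hypothetical user_courses table and the dropdown value arrives in a bind variable :status: applying the same NVL() in the WHERE clause makes the NULL rows compare as the literal 'NotStarted'.

select *
  from user_courses uc              -- hypothetical table name
 where nvl(uc.completion_status, 'NotStarted') in (:status);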

Can I move up rows in the memory itself?

Posted: 13 Mar 2013 08:55 PM PDT

Let's say I have the following table:

id         col2     col3
1          1        1
2          2        2
3          3        3
4          4        4

And I'm trying to move rows to different id as a set, which will result :

id         col2     col3
1          2        2
2          3        3
3          4        4
4          null     null

And I don't mean in the select, but actually transferring rows up the table.

Is there a simple way to do that?

The idea is to enter a number of rows (or a range of id's), which will create some kind of "padding" for the rows and move them backwards in storage.

Similar to the option of maintaining a gapless id in case you remove rows, I'm trying to achieve the same but for the data itself. So in the above example I'm trying to "delete" 1 row.

Is this possible with some kind of built-in option, or do I have to write a query that implements the logic?

Alternative tools to export Oracle database to SQL Server?

Posted: 13 Mar 2013 02:41 PM PDT

I've got an Oracle database that I need to export (schema and data) to SQL Server.

I am trying the Microsoft SQL Server Migration Assistant for Oracle, but it is horribly slow, grossly inefficient and very un-user-friendly, e.g. I was having problems connecting to the SQL Server DB during data migration - but it still spent ~5 minutes preparing all the data before attempting a connection to SQL Server, then when it failed, the 5 minutes of preparatory work were wasted.

Right now, I'm just trying to connect to another Oracle DB using this tool, I left it overnight and came back this morning, and it's still stuck on 19% of "Loading objects..." And this is on a machine with a good 18GB RAM, of which maybe 8.5 GB currently in use. Task Manager shows me that SSMAforOracle.exe is using 0 CPU, 0 PF Delta, and no change whatsoever in memory usage. In other words: frozen stiff. Absolutely intolerable.

Are there any other tools out there that can migrate an Oracle DB to SQL Server a little more efficiently?

unique constraint violated (ORA-00001) in concurrent insert

Posted: 13 Mar 2013 08:46 PM PDT

I have a procedure that is called from concurrent transactions:

//Some actions here

INSERT INTO table1
    (table1_id, table1_val1, table1_val2, table1_val3)
VALUES
    (gettablenewid('TABLE1'), val1, val2, val3);

INSERT INTO table1
    (table1_id, table1_val1, table1_val2, table1_val3)
VALUES
    (gettablenewid('TABLE1'), val1, val2, val3);

INSERT INTO table1
    (table1_id, table1_val1, table1_val2, table1_val3)
VALUES
    (gettablenewid('TABLE1'), val1, val2, val3);

//some other actions

Function gettablenewid code (id_table stores PKs for each table):

create or replace
function GetTableNewId(tablename in varchar2)
  return number is
  PRAGMA AUTONOMOUS_TRANSACTION;
  Result number;
  cursor c1 is SELECT ig.id_value+1 id_new
                 FROM id_table ig
                WHERE ig.table_identifier = tablename
                FOR UPDATE of ig.id_value;
begin
  for c1_rec in c1 loop
    UPDATE id_table ig
       SET ig.id_value = c1_rec.id_new
     WHERE current of c1;
    Result := c1_rec.id_new;
  end loop;
  commit;

  return(Result);
end GetTableNewId;

Occasionally an insert statement fails with ORA-00001 on table1_id, and I can't understand why it happens.
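A sketch of the usual alternative (not the original code's approach): let an Oracle sequence hand out the ids, which is safe under concurrency without the SELECT ... FOR UPDATE bookkeeping in id_table.

CREATE SEQUENCE table1_seq START WITH 1 INCREMENT BY 1;

INSERT INTO table1 (table1_id, table1_val1, table1_val2, table1_val3)
VALUES (table1_seq.NEXTVAL, val1, val2, val3);   -- val1..val3 as in the question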

SQL Server deadlock too long [closed]

Posted: 13 Mar 2013 01:34 PM PDT

I have a really weird problem with a deadlock between a business process and a monitoring job. The two were involved in a deadlock that lasted around 5.5 hours, and I don't know why SQL Server took so long to identify it. Here is the fragment of the errorlog file:

Process ID 103 was killed by hostname Server1, host process ID 9908.

deadlock-list (from the SQL Server error log, 2013-03-11 07:38:11.33, spid16s)
 deadlock victim=process3c1dc18
  process-list
   process id=process3c1dc18 taskpriority=0 logused=0 waitresource=OBJECT: 10:2073058421:2 waittime=19049468 ownerId=484658244 transactionname=WstrObjnameI4I4 lasttranstarted=2013-03-11T02:20:37.643 XDES=0x2a2ac81c8 lockMode=Sch-S schedulerid=3 kpid=11148 status=suspended spid=62 sbid=0 ecid=0 priority=0 transcount=1 lastbatchstarted=2013-03-11T02:20:02.900 lastbatchcompleted=2013-03-11T02:20:02.897 clientapp=SQLAgent - TSQL JobStep (Job 0x05BCA8DF1ECB76448CAB7B7FBBDCF12C : Step 1) hostname=CHQSQL10 hostpid=3180 loginname=Domain\SQLUser isolationlevel=read committed (2) xactid=484655352 currentdb=10 lockTimeout=4294967295 clientoption1=673185824 clientoption2=128056
    executionStack
     frame procname=adhoc line=4 stmtstart=78 sqlhandle=0x020000002e6f9032eba1a214586a5ff0b388fbf56e7c1535
       SELECT 'Mar 11 2013  2:20:02:900AM',
           DB_NAME() DBName,
           OBJECT_NAME(p.OBJECT_ID) BlockedObjectName,
           tl.resource_type,
           tl.request_mode,
           cast(wt.wait_duration_ms / 60000.00 as decimal(10,2)) as [LockDuration],
           tl.request_session_id,
           se1.host_name as [request_Host],
           se1.program_name as [request_Program],
           se1.login_name as [request_Login],
           h1.TEXT AS RequestingText,
           wt.blocking_session_id,
           se2.host_name as [blocking_Host],
           se2.program_name as [blocking_Program],
           se2.login_name as [blocking_Login],
           h2.TEXT AS BlockingTest
           FROM sys.dm_tran_locks AS tl
           INNER JOIN sys.databases db ON db.database_id = tl.resource_database_id
           INNER JOIN sys.dm_os_waiting_tasks AS wt ON tl.lock_owner_address = wt.resource_address
           INNER JOIN sys.partitions AS p ON p.hobt_id = tl.resource_associated_entity_id
           INNER JOIN sys.dm_exec_connections ec1 ON ec1.session_id = tl.request_session_id
           INNER JOIN sys.dm_exec_connections ec2 ON ec2.se
     frame procname=mssqlsystemresource.sys.sp_executesql line=1 sqlhandle=0x0400ff7fbe80662601000000000000000000000000000000
       sp_executesql
     frame procname=DBATools.dbo.sp_foreachdb line=72 stmtstart=5630 stmtend=5694 sqlhandle=0x03001e0020c96c7eba4db70070a100000100000000000000
       EXEC sp_executesql @sql;
     frame procname=DBATools.dbo.usp_CheckBlockingSessions line=62 stmtstart=5850 stmtend=6840 sqlhandle=0x03001e00ae80847cbc31b80070a100000100000000000000
       INSERT INTO dbo.BlockingSessions
           (GatherTime
           ,DBName
           ,BlockedObjectName
           ,resource_type
           ,request_mode
           ,LockDuration
           ,request_session_id
           ,request_Host
           ,request_Program
           ,request_Login
           ,RequestingText
           ,blocking_session_id
           ,blocking_Host
           ,blocking_Program
           ,blocking_Login
           ,BlockingText)
         EXEC dbo.sp_foreachdb @command=@v_SQLCommand, @user_only=1, @compatibility_level=90
         --EXEC sp_MSForEachDB @v_SQLCommand
     frame procname=adhoc line=1 sqlhandle=0x01001e006da0d429b0e25a9e040000000000000000000000
       EXEC dbo.usp_CheckBlockingSessions
    inputbuf
       EXEC dbo.usp_CheckBlockingSessions
   process id=process3c769b8 taskpriority=0 logused=1088 waitresource=KEY: 10:327680 (00001df3833b) waittime=4656 ownerId=484617865 transactionname=TRUNCATE TABLE lasttranstarted=2013-03-11T02:05:07.180 XDES=0x4c89b55b0 lockMode=X schedulerid=12 kpid=13200 status=suspended spid=64 sbid=0 ecid=0 priority=0 transcount=1 lastbatchstarted=2013-03-11T02:05:07.180 lastbatchcompleted=2013-03-11T02:05:07.163 clientapp=Microsoft SQL Server hostname=Server1 hostpid=9248 loginname=Domain\SQLUser isolationlevel=read committed (2) xactid=484617865 currentdb=10 lockTimeout=4294967295 clientoption1=671088672 clientoption2=128056
    executionStack
     frame procname=adhoc line=1 stmtend=58 sqlhandle=0x01000a0037917905903bd7c8010000000000000000000000
       truncate table dbo.Table1
    inputbuf
       truncate table dbo.Table1  truncate table dbo.Table2  truncate table dbo.Table3  truncate table dbo.Table4  truncate table d
  resource-list
   objectlock lockPartition=2 objid=2073058421 subresource=FULL dbid=10 objectname=DBStore.dbo.Table1 id=lock4a2309200 mode=Sch-M associatedObjectId=2073058421
    owner-list
     owner id=process3c769b8 mode=Sch-M
    waiter-list
     waiter id=process3c1dc18 mode=Sch-S requestType=wait
   keylock hobtid=327680 dbid=10 objectname=UKStoreReporting.sys.sysrowsets indexname=clust id=lock4ad68f980 mode=U associatedObjectId=327680
    owner-list
     owner id=process3c1dc18 mode=S
    waiter-list
     waiter id=process3c769b8 mode=X requestType=convert

I really don't know in the first place why the monitoring process acquires a lock on a user table when the query does not mention it. Another question: if system tables are involved in a deadlock, does SQL Server ignore it?

Here is the full first query:

SELECT DB_NAME() DBName,
       OBJECT_NAME(p.OBJECT_ID) BlockedObjectName,
       tl.resource_type,
       tl.request_mode,
       cast(wt.wait_duration_ms / 60000.00 as decimal(10,2)) as [LockDuration],
       tl.request_session_id,
       se1.host_name as [request_Host],
       se1.program_name as [request_Program],
       se1.login_name as [request_Login],
       h1.TEXT AS RequestingText,
       wt.blocking_session_id,
       se2.host_name as [blocking_Host],
       se2.program_name as [blocking_Program],
       se2.login_name as [blocking_Login],
       h2.TEXT AS BlockingTest
FROM sys.dm_tran_locks AS tl
INNER JOIN sys.databases db ON db.database_id = tl.resource_database_id
INNER JOIN sys.dm_os_waiting_tasks AS wt ON tl.lock_owner_address = wt.resource_address
INNER JOIN sys.partitions AS p ON p.hobt_id = tl.resource_associated_entity_id
INNER JOIN sys.dm_exec_connections ec1 ON ec1.session_id = tl.request_session_id
INNER JOIN sys.dm_exec_connections ec2 ON ec2.session_id = wt.blocking_session_id
INNER JOIN sys.dm_exec_sessions se1 ON ec1.session_id = se1.session_id
INNER JOIN sys.dm_exec_sessions se2 ON ec2.session_id = se2.session_id
CROSS APPLY sys.dm_exec_sql_text(ec1.most_recent_sql_handle) AS h1
CROSS APPLY sys.dm_exec_sql_text(ec2.most_recent_sql_handle) AS h2
WHERE db.database_id = DB_ID()
AND cast(wt.wait_duration_ms / 60000.00 as decimal(10,2)) >= 5

The TRUNCATE is one of a series of truncates, all in a single transaction, because it is a task in an SSIS package.

Finally, how could I avoid this problem? Thanks
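One low-risk mitigation, sketched below under the assumption that the collection code lives in dbo.usp_CheckBlockingSessions (the procedure name taken from the deadlock graph above), is to make the monitoring session the cheapest thing to sacrifice: lower its deadlock priority, cap how long it waits on schema locks, and read under READ UNCOMMITTED so it does not sit on shared key locks against sysrowsets while the truncates run. This is a sketch, not a drop-in fix:

-- Sketch only: the real procedure body is elided.
ALTER PROCEDURE dbo.usp_CheckBlockingSessions
AS
BEGIN
    SET NOCOUNT ON;
    SET DEADLOCK_PRIORITY LOW;                         -- if a deadlock occurs, this session is chosen as the victim
    SET LOCK_TIMEOUT 5000;                             -- stop waiting on Sch-M locks after 5 seconds (error 1222)
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;  -- avoid holding S key locks on system metadata

    -- ... existing logic: INSERT INTO dbo.BlockingSessions (...)
    --     EXEC dbo.sp_foreachdb @command = @v_SQLCommand, @user_only = 1, @compatibility_level = 90;
END;

With the lock timeout in place the collector fails fast with error 1222 instead of waiting behind the Sch-M lock, so wrapping the insert in TRY/CATCH (or simply letting the Agent job retry on its next schedule) keeps it from blocking or deadlocking with the overnight truncates.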

Named Pipe Provider Error code 40

Posted: 13 Mar 2013 02:50 PM PDT

I have literally tried everything, from enabling named pipes to adding exceptions for the ports in the firewall, to everything possible in the Surface Area Configuration. I can connect to the SQL instance (using TCP and Named Pipes) with SQL Server Management Studio. But sqlcmd throws an error:

Login timeout expired  

Help!

MySQL auto increment problem with deleting rows / archive table

Posted: 13 Mar 2013 05:39 PM PDT

A hosted server is running "maintenance" each weekend. I am not privy to the details.

In a database on this server there is a MyISAM table. This table never holds more than 1000 rows and usually much less. It is MyISAM so that the auto increment does not reset (and with so few rows it really doesn't matter). Rows are regularly deleted from this table and moved to an archive table (1M rows).

The problem is that lately the auto increment has "rolled back" slightly after each maintenance.

Is there any easy way to verify the auto increment of the insert table by reading the max id from both the insert and the archive table?

I'd rather not verify before each insert unless that is the only solution.

Here are the basic table layouts:

CREATE TABLE x (
    xid int(10) unsigned NOT NULL AUTO_INCREMENT,
    //snip
    PRIMARY KEY (xid)
) ENGINE=MyISAM AUTO_INCREMENT=124 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

CREATE TABLE xhistory (
    xid int(10) unsigned NOT NULL DEFAULT '0',
    //snip
    PRIMARY KEY (xid)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

A far-from-perfect workaround (this was somewhat urgent; I had to manually update over 100 rows):

select xid from xhistory where x=?  

Check if the just-inserted row in x exists in history. If it does:

select greatest(max(x.xid),max(xhistory.xid)) as newval from x,xhistory  

Find a new id.

INSERT INTO x SELECT * FROM x AS iv WHERE iv.xid=? ON DUPLICATE KEY UPDATE xid=?  

And update our row with this id.
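If the only goal is to push the AUTO_INCREMENT counter back above the highest id in either table after the weekend maintenance, a scheduled statement along these lines may be enough. This is a sketch only: the table and column names come from the layouts above, and since ALTER TABLE cannot take an expression for AUTO_INCREMENT, the value is baked in via a prepared statement:

-- Recompute the next id from both the live and archive tables.
SET @next_id = (
    SELECT GREATEST(
        COALESCE((SELECT MAX(xid) FROM x), 0),
        COALESCE((SELECT MAX(xid) FROM xhistory), 0)
    ) + 1
);

-- Bump x's counter back up if maintenance rolled it back.
SET @sql = CONCAT('ALTER TABLE x AUTO_INCREMENT = ', @next_id);
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

As far as I recall, MyISAM resets a too-low AUTO_INCREMENT value to the current maximum plus one, so running this after every maintenance window should be harmless.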

Cannot generate reports from SQL Management Data Warehouse

Posted: 13 Mar 2013 02:53 PM PDT

I'm running SQL Server 2008 R2, have installed the MDW on one server, and have a Data Collector collecting and uploading the server activity, query results, and disk activity data to the MDW. When I select any of the reports from the MDW via Data Collection > Reports > Management Data Warehouse, I receive the error:

Exception has been thrown by the target of an invocation -> Object reference not set to an instance of an object.

This occurs for all 3 reports, even after I've waited some time and data has been uploaded from the data collector. I do not have SSRS running, but I read that it isn't necessary.

Any suggestions?

How to prevent SQL Server service account from registering / deregistering SPN?

Posted: 13 Mar 2013 12:48 PM PDT

The service account is a domain account. It is not a domain admin (nor is it a member of any group that is a domain admin). It has neither "write servicePrincipalName" nor "Write public information" permission (nor is it a member of a group with these permissions). Yet it is still able to register / deregister its SPN upon startup and shutdown.

What permission am I missing?

Bulk insert into SQL Server from VMWare guest using distributed switch

Posted: 13 Mar 2013 02:42 PM PDT

This is most likely not a SQL Server issue, but the setup seems to be affecting only BULK INSERTs to SQL Servers.

We have recently moved VM Hardware and all the guests that were moved had their virtual switches changed from standard to distributed.

I then started receiving

A fatal error occurred while reading the input stream from the network. The session will be terminated (input error: 64, output error: 0)

on two SQL Servers during BULK INSERT operations. One of the SQL Servers was a VM with the new configuration and the other was a physical server. Both BULK INSERT operations originated from a VM with the new configuration. The BULK INSERTs would not fail every time; it was very random when they would.

When we changed the virtual switch back to a standard switch instead of a distributed switch, the issue went away.

I am looking more for an explanation of why it doesn't work with a distributed switch than for a resolution. My guess would be that the BULK INSERT operation is serial and, with a distributed switch, the packets are being routed through different hosts, some of which may be busier than others, and are arriving at the destination server beyond some latency threshold. (Note: there is nothing in the Windows event log at the times of the errors on either the source or destination server.)

Empty LONGTEXT or NULL?

Posted: 13 Mar 2013 06:39 PM PDT

In my MySQL DB I have one field called html_contents that contains all the html to be shown in a webpage. Obviously the html could be huge, and certainly bigger than 64KB, so I decided to use LONGTEXT rather than VARCHAR.

When the field is not set or is left empty, would you say it is better (i.e., more efficient for the DB) to set it to NULL or to an empty string?

I read this: "When to use NULL and when to use an empty string?", but it talks about empty strings in general (probably small strings, not LONGTEXT).

I was wondering if it is a different story with LONGTEXT: whether using NULL saves a lot of space or execution time compared to leaving empty LONGTEXT values around.
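For what it's worth, one common pattern, sketched here with illustrative names (the table pages and column html_contents are stand-ins for the real schema), is to declare the column as nullable and normalize empty strings to NULL on write, so "no content" is only ever represented one way:

-- Illustrative table; names are placeholders.
CREATE TABLE pages (
    page_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    html_contents LONGTEXT NULL,
    PRIMARY KEY (page_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- NULLIF turns an empty string into NULL, so '' never reaches the column.
INSERT INTO pages (html_contents) VALUES (NULLIF(@html, ''));

-- Reads then need only one test for "no content".
SELECT page_id FROM pages WHERE html_contents IS NULL;

Whichever convention is chosen, using it consistently matters more than the storage difference, which for a NULL versus an empty LONGTEXT is tiny either way.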

[MS SQL Server] Grant permissions to database

[MS SQL Server] Grant permissions to database


Grant permissions to database

Posted: 13 Mar 2013 05:09 AM PDT

We have 5 small in-house developed applications whose databases are hosted on a SQL Server. For each application we have an execute account that is used by the application to access the back-end SQL databases, so I create the logins in SQL Server. Now to the user security part: shall I simply grant db_owner to these five accounts on their respective application databases? They may need to read, write, and also execute stored procedures in the databases, so I think db_owner should be simple and fine. Thanks
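db_owner will certainly work, but it also hands out rights the applications do not need (altering schema, dropping objects, changing permissions). A narrower grant that covers read, write, and execute is sketched below; AppDb and AppExecAccount are placeholder names for one of the databases and its execute account:

USE AppDb;  -- placeholder database name
GO
-- Map the login to a database user and grant only what the application needs.
CREATE USER AppExecAccount FOR LOGIN AppExecAccount;
EXEC sp_addrolemember 'db_datareader', 'AppExecAccount';   -- SELECT on tables and views
EXEC sp_addrolemember 'db_datawriter', 'AppExecAccount';   -- INSERT, UPDATE, DELETE
GRANT EXECUTE TO AppExecAccount;                           -- stored procedures and scalar functions
GO

(On SQL Server 2012 and later, ALTER ROLE ... ADD MEMBER is the non-deprecated form of sp_addrolemember.)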

Cannot connect after change Service Account

Posted: 13 Mar 2013 04:42 AM PDT

I have a 2008 R2 SQL Server that is not clustered. I changed the SQL service account username and password. All SQL services start up, but I am unable to connect using Windows Authentication. I can connect using a SQL account with sysadmin rights. Admin rights are granted to a domain security group (of which I am a member). Is there a step I'm missing? I have changed the service account and passwords on other servers without any errors.

How to find cause of Replication delay?

Posted: 12 Mar 2013 10:59 PM PDT

Hi everyone, I just have a quick question about finding the cause of a delay I saw with one of my subscriptions today. I noticed that data was not being published to the subscription when I tried accessing a report, and began looking for the issue. I did the usual checks with Replication Monitor, checked the connection between both servers, looked at the job history, checked the sync history, etc., but I found no errors or warnings. The only thing I could see was that the sync status said the subscription was waiting on a reply from the server, but that's all the info I could find. Any help on where to look to find the cause of this delay would be really helpful, and as a side note, should I apply alerts to notify me if there is a delay again? Thanks in advance for all your help. Kindest Regards, Craig. Specs: Windows Server 2008, SQL Server 2008 R2
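If this is transactional replication, tracer tokens are one quick way to see whether the latency sits between publisher and distributor or between distributor and subscriber. A sketch, run in the publication database on the publisher; the publication name is a placeholder:

DECLARE @tokenID int;

-- Drop a token into the log and remember its id.
EXEC sys.sp_posttracertoken
     @publication     = N'MyPublication',
     @tracer_token_id = @tokenID OUTPUT;

-- Give it time to flow through, then see where the time was spent.
WAITFOR DELAY '00:00:30';
EXEC sys.sp_helptracertokenhistory
     @publication = N'MyPublication',
     @tracer_id   = @tokenID;

For the side question about alerting, Replication Monitor can configure the predefined replication alerts (including the transactional replication latency warning), which would notify you the next time the subscription falls behind.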

[SQL 2012] table partitioning

[SQL 2012] table partitioning


table partitioning

Posted: 13 Mar 2013 12:37 AM PDT

I am planning to partition a table by region. In my region table there are 18 regions. Do I need 18 partitions? Could anyone suggest the best approach for this scenario?
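If the partitioning really is by region, note that SQL Server only offers range partitioning, so the usual approach is one boundary value per RegionID; 17 boundaries give 18 partitions. Whether 18 partitions is worth it depends on data volume and on whether queries and maintenance (index rebuilds, switch-in/switch-out) actually align to region. A sketch with illustrative names, assuming an integer RegionID column and a single filegroup:

-- 17 boundary values => 18 partitions; with RANGE LEFT, RegionID 1 lands in
-- partition 1, ..., RegionID 17 in partition 17, and RegionID 18 (and anything
-- higher) in partition 18.
CREATE PARTITION FUNCTION pf_Region (int)
AS RANGE LEFT FOR VALUES (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17);

-- All partitions on PRIMARY for simplicity; spread them over filegroups if needed.
CREATE PARTITION SCHEME ps_Region
AS PARTITION pf_Region ALL TO ([PRIMARY]);

-- Example table partitioned on RegionID.
CREATE TABLE dbo.SalesByRegion
(
    SalesID  bigint NOT NULL,
    RegionID int    NOT NULL,
    SaleDate date   NOT NULL,
    Amount   money  NOT NULL,
    CONSTRAINT PK_SalesByRegion PRIMARY KEY CLUSTERED (RegionID, SalesID)
) ON ps_Region (RegionID);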

Linked Server SQL 2000 - SQL 2012 Issues

Posted: 10 Jul 2012 11:44 AM PDT

We installed SQL 2012 recently. A linked server between SQL 2012 and SQL 2008 using SQLNCLI11 is working fine; however, we have issues creating a linked server between SQL 2000 and SQL 2012:

[b]Msg 8522, Level 16, State 3, Line 1
Microsoft Distributed Transaction Coordinator (MS DTC) has stopped this transaction.[/b]

I am not sure why I am getting this error when the linked server with SQL 2008 is doing fine and there is no MS DTC error. A linked server from the same SQL 2000 server to a SQL 2008 server is working fine as well, so I don't think there is an issue with configuring MSDTC. Thanks a lot for your help.
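Two things may be worth checking, neither of which is guaranteed to be the cause here. First, SQL Server Native Client 11.0 (SQLNCLI11) does not support connections to SQL Server 2000, so the 2000-side linked server generally has to use an older provider. Second, if the remote call is being promoted to a distributed transaction unnecessarily, disabling promotion on the linked server sometimes avoids the MS DTC error; the server name below is a placeholder:

-- Stop remote proc calls over this linked server from being promoted
-- to a distributed (MS DTC) transaction.
EXEC sp_serveroption
     @server   = 'SQL2000SRV',
     @optname  = 'remote proc transaction promotion',
     @optvalue = 'false';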

SQL Query

Posted: 13 Mar 2013 02:58 AM PDT

Hopefully I am posting this in the right area; if not, I am sorry. Don't mind the $$$$, ####, and **** (masked names). I am trying to get the last 60 days of records from today, and I am having issues writing the query that returns that. The RDBMS it is tied to is a SQL/Oracle database. Here is my query, which currently works; I would just like to see the last 60 days, if anyone can possibly help!!!

SELECT INFORMENT.PRODUCT_OFFER_PURCHASE.*******_#######,
       INFORMENT.INVOLVED_PARTY.INVOLVED_PARTY_ID,
       To_Char(INFORMENT.PRODUCT_OFFER_PURCHASE.DATE_ADDED, 'YYYYMMDD'),
       INFORMENT.DEPOSIT_$$$$$$$.BAL_LEDGER_CURRENT,
       INFORMENT.PRODUCT_OFFER_PURCHASE.INTEREST_RATE,
       To_Char(INFORMENT.PRODUCT_OFFER_PURCHASE.DATE_OPEN, 'YYYYMMDD'),
       To_Char(INFORMENT.PRODUCT_OFFER_PURCHASE.DATE_CLOSE, 'YYYYMMDD'),
       INFORMENT.PRODUCT_OFFER_PURCHASE.*******_STATUS_CODE,
       INFORMENT.PRODUCT_OFFER_PURCHASE.CLIENT_DEFINED
FROM   INFORMENT.PRODUCT_OFFER_PURCHASE, INFORMENT.INVOLVED_PARTY, INFORMENT.DEPOSIT_$$$$$$$$$
WHERE  INFORMENT.PRODUCT_OFFER_PURCHASE.INVOLVED_PARTY_ID_PRIMARY = INFORMENT.INVOLVED_PARTY.INVOLVED_PARTY_ID
AND    INFORMENT.PRODUCT_OFFER_PURCHASE.******** = INFORMENT.INVOLVED_PARTY.*********
AND    INFORMENT.PRODUCT_OFFER_PURCHASE.******** = INFORMENT.DEPOSIT_$$$$$$$.********
AND    INFORMENT.PRODUCT_OFFER_PURCHASE.APPL_ID = INFORMENT.DEPOSIT_$$$$$$$.APPL_ID
AND    INFORMENT.PRODUCT_OFFER_PURCHASE.*******_######## = INFORMENT.DEPOSIT_$$$$$$$$.******_#######
AND    INFORMENT.PRODUCT_OFFER_PURCHASE.********* = '020'
AND    (INFORMENT.PRODUCT_OFFER_PURCHASE.APPL_ID = 'SAV' OR INFORMENT.PRODUCT_OFFER_PURCHASE.APPL_ID = 'DDA')
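Assuming DATE_ADDED is the column that should drive the 60-day window (a guess from the column list; substitute DATE_OPEN if that is the intended date), one extra predicate at the end of the WHERE clause should be all that is needed:

-- Oracle: only rows added in the last 60 days
AND INFORMENT.PRODUCT_OFFER_PURCHASE.DATE_ADDED >= TRUNC(SYSDATE) - 60

-- SQL Server equivalent
AND INFORMENT.PRODUCT_OFFER_PURCHASE.DATE_ADDED >= DATEADD(DAY, -60, CAST(GETDATE() AS date))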

SSAS/MDX Unit Testing

Posted: 12 Mar 2013 08:31 PM PDT

Hi All, I've been doing a bit of searching around for ways to unit test cubes and the results seem to be limited. What we're trying to do is make sure that any releases we make to live don't alter the existing numbers unless we are aware that this should be the case; obviously we'd then make a change to the tests. I've been toying with the idea of creating a framework using ADOMD.NET, but this appears to be limited to single-result-set numbers, e.g.

[code="sql"]Select [Measures].[Total Sales Value] On 0
From [My Cube]
Where [Date].[Year - Month - Day].[Date].&[20130313][/code]

Does anyone have any ideas? In the meantime I'll continue searching around for ADOMD.NET stuff and post back any results I find. Cheers, Jim.
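One way to get past single-cell checks is to have each test run an MDX query that returns a small grid and compare the whole cellset to a stored baseline (ADOMD.NET's ExecuteCellSet returns the full grid rather than one value). The sketch below reuses the cube, measure, and hierarchy names from the example above, so treat them as assumptions about the real cube:

[code="sql"]-- A small grid (one measure across a week of dates) that a test harness
-- could serialize and diff against a saved baseline.
SELECT
    { [Measures].[Total Sales Value] } ON COLUMNS,
    { [Date].[Year - Month - Day].[Date].&[20130307]
      : [Date].[Year - Month - Day].[Date].&[20130313] } ON ROWS
FROM [My Cube][/code]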

We have some issues on configuring IIS 7 with SQLSERVER 2012

Posted: 12 Mar 2013 01:32 PM PDT

Can anyone help us with configuring IIS 7 with SQL Server 2012? Regards, Ram
