Wednesday, March 6, 2013

[SQL Server 2008 issues] Attempt to fetch logical page (1:440) in database 2 failed. It belongs to allocation unit 422212869292032 not to 6269010747738816512.



Attempt to fetch logical page (1:440) in database 2 failed. It belongs to allocation unit 422212869292032 not to 6269010747738816512.

Posted: 05 Mar 2013 04:23 PM PST

Hi all, one of my stored procedures sometimes (not every time) gives the following error:

[b]Attempt to fetch logical page (1:440) in database 2 failed. It belongs to allocation unit 422212869292032 not to 6269010747738816512.[/b]

Can anyone help with this?

Concatenate with a smallint variable

Posted: 05 Mar 2013 04:18 PM PST

Hi All, I could not format a column in the query below:

[code="sql"]
declare @table table (mMonth smallint);
insert into @table values (1),(9),(11),(12);

select mMonth,
       LEN(mMonth) AS Length,
       case LEN(mMonth)
            when 1 then ('0' + CAST(mMonth AS VARCHAR(6)))
            else mMonth
       end AS formattedMMonth
from @table;
[/code]

The [i][b]case LEN(mMonth) when 1 then ('0'+CAST(mMonth AS VARCHAR(6))) else mMonth end[/b][/i] expression does not give the correct result.
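A likely explanation: every branch of a CASE expression is converted to the data type with the highest precedence, and smallint outranks varchar, so the '0'-prefixed string from the THEN branch is converted back to a number. A sketch of a fix using the post's own table variable:

```sql
-- All branches of a CASE are converted to the highest-precedence type;
-- smallint outranks varchar, so '01' silently becomes 1 again.
-- Keeping everything as a string (RIGHT over a padded cast) avoids that.
DECLARE @table TABLE (mMonth smallint);
INSERT INTO @table VALUES (1), (9), (11), (12);

SELECT mMonth,
       RIGHT('0' + CAST(mMonth AS varchar(2)), 2) AS formattedMMonth
FROM @table;
-- returns 01, 09, 11, 12
```

The same effect can be had by casting the ELSE branch to varchar as well, so both CASE branches share a string type.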

SQL Server for Content Censorship

Posted: 05 Mar 2013 06:08 PM PST

Hi Champs, we have a slightly unusual requirement for a SharePoint environment where the backend is SQL Server. Users of SharePoint's social features should not be able to use offensive words in comments, newsfeeds, conversations, replies, etc.; the system must not allow "offensive words". SharePoint cannot do this automatically; we can only do it manually through a content approval process, but that is not practical for a larger environment. Since SharePoint stores its data in SQL Server, I was thinking something could be done on the SQL side: detect offensive words at runtime (or when the user posts/saves) against a master table (an offensive-word dictionary), and if offensive content is found, trigger an alert for the admin. Please let me know, champs, if this approach is possible somehow. Thanks. Regards, Inder.
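If the data sat in ordinary tables, the dictionary match itself would be simple; the sketch below uses hypothetical dbo.Comments and dbo.OffensiveWords tables. Note, though, that querying or modifying SharePoint content databases directly is generally unsupported by Microsoft, so treat this purely as an illustration of the matching idea, not a recommended SharePoint technique:

```sql
-- Hypothetical tables: dbo.Comments(CommentId, CommentText) and
-- dbo.OffensiveWords(Word). Lists every comment containing a dictionary
-- word; an Admin alert job could run this on a schedule.
SELECT c.CommentId, c.CommentText, w.Word
FROM dbo.Comments AS c
JOIN dbo.OffensiveWords AS w
  ON c.CommentText LIKE '%' + w.Word + '%';
```

A supported alternative would be to intercept posts at the SharePoint API/event-receiver layer rather than in SQL.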

Formatting issue in SQL

Posted: 04 Mar 2013 10:43 PM PST

Hello all, I have seen something strange in my SQL Server 2008 Enterprise Edition. Whenever I run sp_helptext for any procedure, some random lines come out split across two lines, and this risks breaking the code. For example, if I have code like this:

[code="sql"]
create procedure Temp_Procedure
    @p1 varchar(20)
as
begin
    select *
    from dbo.Table1
    where Col1 = @p1
      and Col2 = 'Temporary'
end
[/code]

sp_helptext shows it with some of those lines wrapped onto two lines. It looks harmless in this small example, but where procedures run to thousands of lines it becomes a headache to reformat again and again. I don't know if there is an environment variable setting or something; I have checked many options but nothing worked.
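One workaround, for anyone hitting the same thing: sp_helptext returns the definition in lines of at most 255 characters, which is where the extra breaks come from. Retrieving the definition directly avoids the wrapping (the procedure name here is the one from the post):

```sql
-- OBJECT_DEFINITION returns the module's source as a single nvarchar(max)
-- value with its original line breaks intact. View it with
-- "Results to Text" in SSMS rather than the grid.
SELECT OBJECT_DEFINITION(OBJECT_ID(N'dbo.Temp_Procedure')) AS definition;
```

Scripting the object from Object Explorer (Script Procedure As > CREATE To) also preserves the original formatting.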

Auto Update Statistics Asynchronously option

Posted: 05 Mar 2013 05:48 PM PST

Hi, does the option "Auto Update Statistics Asynchronously" apply to UPDATE, DELETE and INSERT statements, or only to SELECT? I mean: if I enable this option, will statistics be updated after UPDATE, DELETE and INSERT? I also have a nightly job that updates statistics with a full scan.
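For reference, both settings are per-database options; the asynchronous flag only changes *when* the automatic update happens (in the background, rather than blocking the query that triggered it), and AUTO_UPDATE_STATISTICS itself must be on for the asynchronous option to have any effect. The database name below is a placeholder:

```sql
-- [YourDatabase] is a placeholder. Statistics are invalidated by data
-- changes (INSERT/UPDATE/DELETE) and refreshed when a query next needs
-- them; ASYNC only moves that refresh off the querying session.
ALTER DATABASE [YourDatabase] SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE [YourDatabase] SET AUTO_UPDATE_STATISTICS_ASYNC ON;
```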

Execution Plan - Key Lookup

Posted: 05 Mar 2013 09:56 AM PST

I run the following query on the AdventureWorks database:

[code="sql"]
SELECT ContactID, LastName, Phone
FROM Person.Contact
WHERE EmailAddress LIKE 'sab%'
[/code]

In the execution plan, why does the "Key Lookup" operator appear below the "Index Seek" operator and not after it?

Mirrored DB log file growth is High

Posted: 05 Mar 2013 03:45 PM PST

Hi all, I have mirroring set up on a SQL Server 2008 R2 server, and everything is working fine. But the mirrored database's LDF file growth is very high. How can I restrict the LDF file growth? With thanks, Bharath.
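One point worth checking, since mirroring requires the FULL recovery model: the transaction log only truncates after a log backup, so the usual first step is to schedule frequent log backups on the principal. A minimal sketch (database name and backup path are placeholders):

```sql
-- FULL recovery (mandatory for mirroring) means the log keeps growing
-- until a log backup releases the inactive portion. Schedule this
-- frequently (e.g. every 15-30 minutes) via a SQL Agent job.
BACKUP LOG [MirroredDB]
TO DISK = N'X:\Backups\MirroredDB_log.trn';
```

If the log has already ballooned, a DBCC SHRINKFILE on the log file after a backup can reclaim the space once.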

SSIS not producing information from my COMPUTE BY clause when exporting to Excel

Posted: 05 Mar 2013 12:50 PM PST

When I run a script that uses COMPUTE BY in SSMS, I get the desired results. When I try to create an SSIS package to export to Excel, it completely ignores the COMPUTE BY clause. I need to subtotal by a certain group, but I don't need to sum any of the other columns.

I've been reading about GROUP BY (ROLLUP), but I can't get that to work: it wants me to add every single column name into the GROUP BY clause, which of course ruins how I want the data to look when it's exported to Excel.

Here's a sample of how I'd like the data to look once it gets to Excel: [img]http://oi50.tinypic.com/2a9pgrn.jpg[/img]

I want a subtotal on post_amt at every change in admit_svc_code. Like I said, this works just fine with COMPUTE BY within SSMS, but that line is ignored when I export to Excel. I know how to make this work if I export the raw data into Excel and format it there, but I'd like the data to already look like that from the SSIS export.
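For what it's worth, SQL Server 2008's GROUPING SETS can produce detail rows plus a subtotal per admit_svc_code in one result set, without grouping by every output column. The table name and the detail column below are assumptions, not from the post:

```sql
-- dbo.Postings and patient_id are assumed names for illustration.
-- Two grouping sets: detail rows, and one subtotal row per
-- admit_svc_code (detail columns come back NULL on subtotal rows).
SELECT admit_svc_code,
       patient_id,
       SUM(post_amt) AS post_amt
FROM dbo.Postings
GROUP BY GROUPING SETS
(
    (admit_svc_code, patient_id),   -- detail level
    (admit_svc_code)                -- subtotal per admit_svc_code
)
ORDER BY admit_svc_code, GROUPING(patient_id);
```

The GROUPING() function can also label the subtotal rows so the SSIS destination (or Excel) can style them differently.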

Transactions

Posted: 05 Mar 2013 01:56 PM PST

Is it possible to update two databases, with two different SqlConnections, under the same transaction? Thanks.
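If the question is about doing it purely in T-SQL rather than in ADO.NET, the server-side equivalent is a distributed transaction spanning linked servers (MSDTC must be running on the machines involved). Server, database, table, and column names here are placeholders:

```sql
-- Both updates commit or roll back together; MSDTC coordinates the
-- two servers. Linked servers Server1/Server2 are placeholders.
BEGIN DISTRIBUTED TRANSACTION;
    UPDATE Server1.Db1.dbo.T1 SET Col = 1 WHERE Id = 42;
    UPDATE Server2.Db2.dbo.T2 SET Col = 1 WHERE Id = 42;
COMMIT TRANSACTION;
```

On the .NET side the analogous mechanism is enlisting both connections in one ambient transaction, which likewise escalates to MSDTC when two servers are involved.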

Disable AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS for all databases

Posted: 05 Mar 2013 02:18 PM PST

How can I disable AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS for all databases in a single step?
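There is no single built-in switch, but a short script can generate the ALTER DATABASE statements for every user database. A sketch (review the PRINT output before executing anything):

```sql
-- Builds ALTER DATABASE statements for each user database.
-- database_id > 4 skips master, tempdb, model and msdb.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + N'ALTER DATABASE ' + QUOTENAME(name)
    + N' SET AUTO_CREATE_STATISTICS OFF; '
    + N'ALTER DATABASE ' + QUOTENAME(name)
    + N' SET AUTO_UPDATE_STATISTICS OFF;' + CHAR(13)
FROM sys.databases
WHERE database_id > 4;

PRINT @sql;      -- inspect first
-- EXEC (@sql);  -- then uncomment to run
```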

xml query datetime datatype problem while inserting to xml

Posted: 01 Mar 2013 06:57 PM PST

I have a query like:

[code="sql"]
UPDATE a
SET oldRemarks1.modify('insert <Value_Py>{sql:column("b.collection_date")}</Value_Py>
                        after (/Root/Row[3]/Item)[1]')
FROM b
WHERE a.[Date] = '31-May-2004'
  and a.sl_no = b.regkey_slno
[/code]

Here collection_date is of datetime data type, so when it is inserted into the XML it comes out like 2012-03-05T00:00:00.000 in the XML tag. But I want to insert just "2012-03-05" in the XML tag. How can I use CAST, or any other method, so that it inserts properly into the XML?
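One possible workaround: sql:column() only accepts a plain column reference, not an expression, so the conversion can be done in a CROSS APPLY first and the converted column referenced by its alias. A sketch based on the post's query, untested against the real schema:

```sql
-- CONVERT(varchar(10), ..., 120) yields 'yyyy-mm-dd'; the CROSS APPLY
-- gives the converted value a column name that sql:column() can see.
-- Table/column names follow the post.
UPDATE a
SET oldRemarks1.modify('
    insert <Value_Py>{sql:column("d.collection_date")}</Value_Py>
    after (/Root/Row[3]/Item)[1]')
FROM a
JOIN b ON a.sl_no = b.regkey_slno
CROSS APPLY (SELECT CONVERT(varchar(10), b.collection_date, 120)
                 AS collection_date) AS d
WHERE a.[Date] = '31-May-2004';
```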

Publish SSRS 2008 in LAN

Posted: 27 Feb 2013 03:51 PM PST

How do I publish SSRS for internal use only? I have a project which needs to be circulated only to my team members, using my hostname. What are the steps to deploy it? Right now I can reach it with http://localhost/Reports and http://myhost-name/Reports, but if I try [b]http://myhost-name/Reports[/b] from another machine, I am unable to get the details. Thanks

RANK function used over a partition - unexpected result

Posted: 05 Mar 2013 11:33 AM PST

Hi, I have the following table / data:

[code="other"]
id   Title     Director
===================================
1    movi      dir1
2    mo        dir2
3    movie333  dir333
4    movie     dir4
5    movie     dir4
6    movie     dir4
[/code]

If I run the statement:

[code="sql"]
select id, title, director,
       RANK() over (partition by title order by (select 0)) as RNK
from Movies;
[/code]

I get the following result:

[code="other"]
id   Title     Director   RNK
===========================================
2    mo        dir2       1
1    movi      dir1       1
4    movie     dir4       1
5    movie     dir4       1
6    movie     dir4       1
3    movie333  dir333     1
[/code]

I would expect the RNK values for id = 4, 5, 6 to be 1, 2, 3. Why are the values for id = 4, 5, 6 identical? Thank you for your help.
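The short answer is that RANK() assigns the same rank to rows that tie on the ORDER BY expression, and ORDER BY (SELECT 0) makes every row in a partition a tie, hence rank 1 everywhere. If distinct 1, 2, 3 numbering is wanted even with an arbitrary order, ROW_NUMBER() does that:

```sql
-- ROW_NUMBER() never produces ties: rows sharing a title get 1, 2, 3
-- in an arbitrary (but deterministic-per-execution) order.
SELECT id, title, director,
       ROW_NUMBER() OVER (PARTITION BY title ORDER BY (SELECT 0)) AS RNK
FROM Movies;
```

For the sample data this gives id 4, 5, 6 the values 1, 2, 3 (in some order) within the "movie" partition.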

Issue with a Connection to a Named Instance in Home Lab

Posted: 05 Mar 2013 10:15 AM PST

I installed a single named instance of SQL Server in my lab, and I am having trouble connecting from a remote machine, although local connections to the instance work fine. The server hosting SQL Server is a brand new VMware Workstation virtual machine that does not have any other SQL Server versions or instances installed.

On the database server, I used SQL Server Configuration Manager to change the TCP port to static 1433, and I opened the firewall with the following commands:

[code="plain"]
netsh advfirewall firewall add rule name = SQLPort dir = in protocol = tcp action = allow localport = 1433 remoteip = localsubnet profile = DOMAIN
netsh advfirewall firewall add rule name = SQLPort_1434 dir = out protocol = UDP action = allow localport = 1434
[/code]

PortQry.exe shows 1433 open when I run it from the remote machine that I am using to connect to SQL Server.

Scenario 1: Surprisingly, this command (which does not specify the instance name) works on the remote machine:

[code="plain"]
sqlcmd.exe -S sql_netbios_name -U sa -P password -Q "select getdate()"
[/code]

Scenario 2: This command (which specifies the instance name) gets an error (which I am attempting to fix with this posting) on the remote machine:

[code="plain"]
sqlcmd.exe -S sql_netbios_name\instance_name -U sa -P password -Q "select getdate()"

HResult 0xFFFFFFFF, Level 16, State 1
SQL Server Network Interfaces: Error Locating Server/Instance Specified [xFFFFFFFF].
Sqlcmd: Error: Microsoft SQL Server Native Client 10.0 : A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.
Sqlcmd: Error: Microsoft SQL Server Native Client 10.0 : Login timeout expired.
[/code]

Scenario 3: But this command (which specifies the instance name *AND* the port number, with the connection string encapsulated in double quotes) works on the remote machine:

[code="plain"]
sqlcmd.exe -S "sql_netbios_name\instance_name,1433" -U sa -P password -Q "select getdate()"
[/code]

Any ideas why Scenario 2 fails? This has to be simple. I just don't see it.
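A detail that may be relevant: instance-name resolution is performed by the SQL Server Browser service, which listens on *inbound* UDP 1434 on the server, while the firewall rule in the post is outbound. That would explain why only the name-plus-port form works. Once connected, one way to sanity-check from the client is to ask the server which port the session actually landed on:

```sql
-- Run on the remote client after a successful connection: shows the
-- transport and TCP port the current session is using, confirming
-- whether the static-port configuration took effect.
SELECT net_transport, local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```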

If I compress a primary key, am I compressing the whole table? --> P-key with DATA_COMPRESSION

Posted: 05 Mar 2013 06:56 AM PST

Can someone explain how data compression works at the index level? If I compress a primary key, am I compressing the whole table [since the data pages are leaf pages]?
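For concreteness: compression is declared per index (and per partition), so rebuilding the clustered primary key compresses the table's data pages, while each nonclustered index keeps its own independent setting. Object names below are placeholders:

```sql
-- The clustered index IS the table's data pages, so compressing the
-- clustered PK compresses the table. Nonclustered indexes are separate
-- structures and must be compressed individually if desired.
ALTER INDEX PK_MyTable ON dbo.MyTable
    REBUILD WITH (DATA_COMPRESSION = PAGE);

ALTER INDEX IX_MyTable_Col2 ON dbo.MyTable
    REBUILD WITH (DATA_COMPRESSION = ROW);   -- independent of the PK setting
```

If the primary key is enforced by a *nonclustered* index on a heap, compressing it compresses only that index, not the heap.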

Joining to Multiple rows which should really be one row

Posted: 05 Mar 2013 08:32 AM PST

Hi, I'm looking for some help or advice if possible; I've been searching this all night with no luck, probably not using the correct search terms. I am loading mainframe data into SQL Server 2008. There is a master record table which I need to join to a notes table. The master record has name, address, job title, etc., and the notes table contains a note. What makes this a bit unusual is that, because of file-space considerations on the mainframe, the notes field was split over multiple rows. So you have something like this:

[code="other"]
Master Table
ID, Name,    Address,      Job Desc
1,  "Bob",   "New York",   "Sports Reporter"
2,  "Chris", "Washington", "Doctor"

Notes Table
ID, EmpID, Note
1,  2,     "Chris has been"
2,  2,     "a doctor in the"
3,  2,     "USA for 5 years."
[/code]

I want a select statement to output rows for reporting purposes (using SSRS) which would say something like this:

[code="other"]
2, "Chris", "Washington", "Doctor", "Chris has been a doctor in the USA for 5 years."
[/code]

Can anyone point me in the right direction? I hope I have explained it clearly. Thanks
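On SQL Server 2008 the usual trick for this is FOR XML PATH('') string concatenation in a correlated subquery (there is no STRING_AGG yet). A sketch using the post's assumed table and column names:

```sql
-- MasterTable/NotesTable and their columns follow the sample data above.
-- The subquery concatenates each employee's note fragments in ID order;
-- STUFF strips the leading separator space.
SELECT m.ID, m.Name, m.Address, m.JobDesc,
       STUFF((SELECT ' ' + n.Note
              FROM NotesTable AS n
              WHERE n.EmpID = m.ID
              ORDER BY n.ID
              FOR XML PATH(''), TYPE).value('.', 'varchar(max)'),
             1, 1, '') AS FullNote
FROM MasterTable AS m;
```

The TYPE/.value() combination keeps characters like & and < from being XML-entitized in the concatenated note.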

convert 2000 db to 2008 r2 db

Posted: 05 Mar 2013 03:06 AM PST

I am upgrading from 2000 to 2008 R2: back up the 2000 database, restore it to 2008 R2, and create all application users with the same roles and rights as on 2000. When I point the application to the new 2008 R2 database, it says my application user does not have EXECUTE rights on one stored procedure. But when I point back to the 2000 database, everything is OK and the application runs as usual, even though I found that my application user does not have EXECUTE rights on that stored procedure on 2000 either. It seems I am missing something. Is something granted by default in 2000 that I need to grant manually in 2008 R2? Or is there something about rights or roles I did not pay attention to when upgrading from 2000 to 2008 R2? I hope someone can give me some advice. Thanks a lot
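For the immediate fix, EXECUTE can be granted on the one procedure, or at the schema level so future procedures are covered too. Names below are placeholders:

```sql
-- app_user and dbo.MyProc are placeholders for the real principal
-- and procedure names.
GRANT EXECUTE ON dbo.MyProc TO app_user;    -- just this procedure

GRANT EXECUTE ON SCHEMA::dbo TO app_user;   -- every procedure in dbo,
                                            -- including ones created later
```

Comparing the output of sp_helprotect (or sys.database_permissions on 2008 R2) between the two servers can show which grants the restore did not carry over.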

Displaying Data From 3 SQL Tables On A Form

Posted: 05 Mar 2013 03:23 AM PST

I have a program of my own, for my own use, written in Excel VBA that I want to rewrite in VB2010 Express using SQL CE. It currently displays a form as per the attached picture. What is the best way of replicating something similar? I know I can create individual Buttons and TextBoxes for each row, but I am wondering if there is a better way. I tried using DataGridView, but [b](a)[/b] it's pretty ugly and [b](b)[/b] it doesn't let me use more than one table! I don't mind using a load of individual Buttons and TextBoxes, but thought I would ask people who might know better before doing so ;-)

create procedure which contains special characters from .sql file with sql powershell

Posted: 05 Mar 2013 07:22 AM PST

Hello, I have a .sql file which is actually the body of a stored procedure. The procedure contains characters like [b]'é, à, è...'[/b]. When I run the command [b]invoke-sqlcmd -server "servername" -database "databasename" -inputfile "filename.sql"[/b] against SQL Server, the procedure is successfully created, but in place of the characters listed above I get a [b]'?'[/b]. As a workaround I found something like:

[code="plain"]
PS SQLSERVER:\> $server = "servername"
PS SQLSERVER:\> $database = "databasename"
PS SQLSERVER:\> $query = get-content -path "path_to_file..."
PS SQLSERVER:\> $connection = new-object System.Data.SqlClient.SQLConnection
PS SQLSERVER:\> $connection.ConnectionString = "Server={0};Database={1};Integrated Security=True" -f $server,$database
PS SQLSERVER:\> $command = new-object System.Data.SQLClient.SQLCommand($query, $connection)
PS SQLSERVER:\> $connection.Open()
PS SQLSERVER:\> $command.ExecuteScalar()
PS SQLSERVER:\> $connection.Close()
[/code]

Now the troublesome characters are all in place, but the generated procedure is one looooong line. Is there a way to make this right with SQLPS? Is there an option (like sqlcmd -u) for Unicode files? Thank you!

Data Tier Application - Copies and Deletes existing Security??

Posted: 05 Mar 2013 07:10 AM PST

I know there is little written about the data-tier architecture, and I have read the white papers, etc., but it seems strange that an application designed for moving SQL projects from development to production gives you no way of stopping it from dropping the existing security on the Prod environment. Let me expand. Steps:

1/ Have working devt/test environments configured with devt/test configurations (data tier registered)
2/ Have a working Prod environment configured with a prod configuration (data tier registered)
3/ Have a release candidate on the devt machine
4/ Export the data tier at the devt RC (which contains all the devt security; you can exclude the security from the created package)
5/ Upgrade the prod environment...

Wait! Even though I removed the devt security, the comparison of the schemas has told SQL to drop all the current prod users/logins/schemas that were not in the devt environment (which is basically the prod security). :w00t: Doh!

How can you remove the comparison of the security? Add the production security to the built dacpac (lame, and it has further complications re SQL logins needing to be re-enabled, and what about Windows logins if on differing domains)? How were they thinking this would work in a production deploy? If you have an idea, or can see I am missing something, please let me know.

fetch tables most often queried

Posted: 26 Feb 2013 12:24 PM PST

Is there a way to determine which tables of a database are being queried most often, sorted in descending order?
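One approximate approach: sys.dm_db_index_usage_stats counts seeks, scans and lookups per index since the last service restart, so summing those per table gives a rough "most queried" ranking for the current database:

```sql
-- Counters reset when the SQL Server service restarts, so this reflects
-- activity since then. Run in the database of interest.
SELECT OBJECT_SCHEMA_NAME(s.object_id) AS schema_name,
       OBJECT_NAME(s.object_id)        AS table_name,
       SUM(s.user_seeks + s.user_scans + s.user_lookups) AS read_count
FROM sys.dm_db_index_usage_stats AS s
WHERE s.database_id = DB_ID()
GROUP BY s.object_id
ORDER BY read_count DESC;
```

Tables that have never been touched since the restart won't appear at all, which is worth remembering when interpreting the list.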

What is the purpose of "Register as Data-tier Application" ?

Posted: 26 Sep 2010 10:18 PM PDT

Since I can use "Extract Data-tier Application" to generate a dacpac file, what is the purpose of "Register as Data-tier Application"? Does anyone know the reason for it?

T-Sql Logical Query

Posted: 05 Mar 2013 04:55 AM PST

Hi all - I was presented with a request today which has me completely stumped. Here it is: given an Employees table with the columns ID, FirstName, LastName, HireDate, and TerminationDate, how would you calculate the longest period in days that the company has gone without hiring or firing anyone? Any help would be greatly appreciated. Thanks, RJ
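One way to sketch it on SQL Server 2008 (no LAG yet): treat every hire date and termination date as an "event", pair each event with the next one by row number, and take the widest gap. The table name is an assumption, and gaps before the first or after the last event are ignored:

```sql
-- dbo.Employees is assumed. UNION (not UNION ALL) dedupes dates that
-- appear as both a hire and a termination.
WITH Events AS
(
    SELECT HireDate AS EventDate
    FROM dbo.Employees WHERE HireDate IS NOT NULL
    UNION
    SELECT TerminationDate
    FROM dbo.Employees WHERE TerminationDate IS NOT NULL
),
Numbered AS
(
    SELECT EventDate,
           ROW_NUMBER() OVER (ORDER BY EventDate) AS rn
    FROM Events
)
SELECT MAX(DATEDIFF(day, a.EventDate, b.EventDate)) AS LongestQuietDays
FROM Numbered AS a
JOIN Numbered AS b ON b.rn = a.rn + 1;
```

Whether the endpoints themselves count as "quiet" days (off-by-one) depends on how the requester defines the period.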

performance

Posted: 05 Mar 2013 03:04 AM PST

As part of an environment refresh I am restoring a production backup to a test server. Both prod and test are in the same domain, and I am using Quest LiteSpeed to restore (from the target server I am selecting the prod backup file over the network). So my question is: could any performance issues arise on production? Network slowness, or any other issues? Thanks.

Alter User Defined Data Type

Posted: 05 Mar 2013 04:23 AM PST

I'm trying to alter a column in a table that corresponds to a UDDT; this column has a primary key and is referenced by foreign keys in other tables. What should the approach be to accomplish this task? So many objects have dependencies on this UDDT.

Execution completed with errors

Posted: 05 Mar 2013 02:19 AM PST

We have defined a cursor and a transaction which let us process data record by record. The desired behavior is: if there is a failure in an iteration, it should roll back the transaction, log the error in a custom table, and proceed to the next record in the queue; finally the stored procedure should indicate the execution was successful. In SQL Server 2008 this works exactly as described, whereas in SQL Server 2000 the execution also completes but raises an error message at the end of the stored procedure if an iteration failed. Our external application fires a stored procedure in SQL Server 2000 which contains similar logic, and it fails if there was an error in an iteration. We want the same behavior in SQL Server 2000 as in SQL Server 2008.

[u][b]SQL Server 2000 T-SQL code:[/b][/u]

[code="sql"]
DECLARE @Flag AS INT
DECLARE @ErrorCode AS INT

DECLARE CSR_TEST CURSOR FOR
    SELECT 1 AS Flag UNION SELECT 2 AS Flag

OPEN CSR_TEST
FETCH NEXT FROM CSR_TEST INTO @Flag
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRANSACTION

    IF @Flag = 1
    BEGIN
        SELECT 1 / 0
        SELECT @ErrorCode = @@Error
        IF @ErrorCode <> 0 GOTO LogError
    END

    IF @Flag = 2
    BEGIN
        SELECT 1 / 1
        SELECT @ErrorCode = @@Error
        IF @ErrorCode <> 0 GOTO LogError
    END

    COMMIT TRANSACTION
    PRINT 'COMMIT TRANSACTION'
    GOTO ProcEnd

LogError:
    ROLLBACK TRANSACTION
    PRINT 'ROLLBACK TRANSACTION'
    --Error logging in custom table

ProcEnd:
    FETCH NEXT FROM CSR_TEST INTO @Flag
END

CLOSE CSR_TEST
DEALLOCATE CSR_TEST
[/code]

[u][b]SQL Server 2008 T-SQL code:[/b][/u]

[code="sql"]
DECLARE @Flag AS INT
DECLARE @ErrorCode AS INT

DECLARE CSR_TEST CURSOR FOR
    SELECT 1 AS Flag UNION SELECT 2 AS Flag

OPEN CSR_TEST
FETCH NEXT FROM CSR_TEST INTO @Flag
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION

        IF @Flag = 1
        BEGIN
            SELECT 1 / 0
            SELECT @ErrorCode = @@Error
            IF @ErrorCode <> 0 RAISERROR (8134,0,0)
        END

        IF @Flag = 2
        BEGIN
            SELECT 1 / 1
            SELECT @ErrorCode = @@Error
            IF @ErrorCode <> 0 RAISERROR (8134,0,0)
        END

        COMMIT TRANSACTION
        PRINT 'COMMIT TRANSACTION'
    END TRY
    BEGIN CATCH
        ROLLBACK TRANSACTION
        PRINT 'ROLLBACK TRANSACTION'
        --Error logging in custom table
    END CATCH

    FETCH NEXT FROM CSR_TEST INTO @Flag
END

CLOSE CSR_TEST
DEALLOCATE CSR_TEST
[/code]

Copying from SQLSMS grid, pasting in Excel, the word "NULL" is pasted!

Posted: 05 Mar 2013 01:58 AM PST

When I copy/paste grid results from SQL Server Management Studio into an Excel file, the WORD "NULL" (the actual letters "N" "U" "L" "L") is pasted into the cell for all NULL values. Dear geniuses at Microsoft: a NULL value does not equal the string "NULL"! Those are two completely different things. Making users do a find/replace after pasting is ridiculous. Making us wrap every single field in our SELECT with ISNULL(Field, '') is ridiculous. What if a user has a SELECT * which contains hundreds of columns that they want to copy/paste into Excel? By now, I should not be surprised when a Microsoft product does something silly like this, but for the life of me, I cannot [i]believe[/i] this is the native behavior of SQL Server Management Studio. Please, somebody, tell me there is an option or setting in SSMS that I am missing to prevent this very silly behavior.

Transactional Replication - Report last synchronisation date

Posted: 04 Mar 2013 08:38 PM PST

Hi all, apologies if this has been asked before; I tried searching and came up with nothing. I'm using transactional replication to push a publication out to several other machines. These machines are used for reporting, and there's a requirement to show on those reports how old the data is; the idea is to show something like "Last updated 05-Mar-2013 11:29:00". My question is whether it's possible to query this information from the subscriber server/database alone. I'm aware that there are various means of doing it from the publisher, but these machines are to be used for disaster recovery reports and as such we can't rely on being able to access the publisher server. I've considered: a) putting triggers on the tables in the subscriber DB to update a "last updated" table on each operation; b) customising the insert/delete/update SPs for each table to update a "last updated" table. But I was hoping to avoid either of those, simply to keep the complexity of the replication solution to a minimum. No amount of googling has come up with anything, but I could just be using the wrong search terms! Does anyone have any clever ideas for how to get this information? Thanks in advance
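One possibility worth verifying: on the subscriber, the distribution agent maintains a MSreplication_subscriptions table inside the subscription database, and its [time] column has in practice served as a last-synchronisation indicator. Whether it is precise enough for the report is worth testing against your own topology before relying on it:

```sql
-- Run in the subscription database on the subscriber. The [time] column
-- reflects distribution-agent activity; confirm its behaviour in your
-- environment (e.g. whether it advances when no changes were delivered).
SELECT publisher, publisher_db, publication,
       [time] AS last_sync_time
FROM dbo.MSreplication_subscriptions;
```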

Row level locking

Posted: 05 Mar 2013 01:32 AM PST

How can I find whether row-level locking is enabled or disabled for the tables? Thanks
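Row-level locking is a per-index setting, visible in sys.indexes. A query along these lines lists it for user tables:

```sql
-- allow_row_locks / allow_page_locks are 1 when the engine may take
-- locks at that granularity on the index.
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name,
       i.allow_row_locks,
       i.allow_page_locks
FROM sys.indexes AS i
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
ORDER BY table_name, index_name;
```

The settings are changed with ALTER INDEX ... SET (ALLOW_ROW_LOCKS = ON | OFF).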

Virtualization - yet again ...

Posted: 05 Mar 2013 01:56 AM PST

I spent some time searching this forum and the interwebs and haven't found a lot of definitive insight/agreement on virtualization and best practices. So here we go with another VM thread. I'm seeing issues with virtualization that seem obvious on the surface, but maybe I'm naive and too old school.

DB 101 says, in general, separate MDF and LDF files onto separate disk drives. Does this still apply in a virtual environment? I have to think yes. Which means I have to have the sysadmins that host our environment make sure that log and data files are separate on the physical hardware, not just in the virtual layer. They resist divulging such things.

Similar issue with CPUs. A virtual server really doesn't know what it's running on. It could be sharing those 16 cores with 4 other instances and not even know it. So it seems reasonable that the actual CPUs should be explicitly allocated to specific SQL instances, though I suppose this matters more for high-volume instances that are expected to have constant high CPU utilization.

I recently ran into a technique for dealing with tempdb contention that recommended placing the tempdb files across several disks depending on the number of CPUs, since each CPU can handle a tempdb thread separately during I/O. Which means knowing the real CPU resources available, not the virtual, and the real configuration of the disk arrays.

Memory allocation figures in as well. A virtual instance can be assigned a maximum amount of memory that it may never actually get if the physical host has already doled it out. Rebooting an instance would also free up memory on the physical hardware, and another virtual instance might grab that, no? Which would leave the rebooted instance starved for memory even if it is "assigned" a specific amount through the virtualization.

So, in general, it seems to me that if you are virtualizing SQL Servers, you have to drill down through the virtual layer and make sure the physical resources are properly allocated to the instances.

Clustering gets interesting if the nodes on a cluster happen to share some physical resource. If that resource fails, you have the pleasure of multiple nodes on your cluster dropping off.

This makes me wonder if virtualization is even worth the effort in some cases. The idea behind server consolidation is to make sure your hardware is fully utilized. A high-availability, high-volume system seems less of a candidate for virtualization than, say, a number of low-volume instances for the departmental databases that seem to proliferate in many organizations.

Sorry for the rambling. I just don't see a lot of definitive answers to some pretty serious issues around business continuity, performance, etc. when it comes to virtualizing servers. It seems to have become "the thing to do", without enough thought given to the needs of the entire database environment. The best practice should be to examine the information system, then decide whether virtualization meets the need or not - not simply plop your junk on a virtual environment because some white paper says it will save you money. Thoughts?

Unexplained query plan differences between two near identical queries

Posted: 05 Mar 2013 01:25 AM PST

All: I have noticed some behavior that I can't explain, with SQL similar to the following. It is reproducible on multiple machines in our enterprise, so it isn't machine specific. Basically, by moving the INSERT statement that puts data into a temp table outside of the dynamic SQL vs. inside the dynamic SQL, we see completely different query plans and performance, which seems ... strange. It seems like a scope issue, but I'm curious whether somebody has a more specific explanation of what we are seeing. My example:

[code="sql"]
CREATE TABLE #table (myFoo varchar(10))
DECLARE @sql NVARCHAR(MAX) = ' INSERT INTO #table SELECT foo FROM bar'
EXEC (@sql)
[/code]

Results in:

[code="plain"]
SQL Server Execution Times: CPU time = 858 ms, elapsed time = 862 ms.
[/code]

Versus:

[code="sql"]
CREATE TABLE #table (myFoo varchar(10))
DECLARE @sql NVARCHAR(MAX) = ' SELECT foo FROM bar'
INSERT INTO #table
EXEC (@sql)
[/code]

Results in:

[code="plain"]
SQL Server Execution Times: CPU time = 47 ms, elapsed time = 51 ms.
[/code]

and a much better, different plan. Can somebody please explain what is happening? Thanks,

Query comes to a crawl until executed using the WITH RECOMPILE option

Posted: 05 Mar 2013 12:37 AM PST

We have a stored procedure that will run just fine for a couple of hours, but then somewhere during that time its execution plan seems to get confused and executing it comes to a crawl. When we execute the stored procedure using the WITH RECOMPILE option, the query runs in what we consider a normal amount of time again, and will usually do so for a while, but eventually it slows to a crawl again. Currently, a simple fix for us is to leave the WITH RECOMPILE option in the stored procedure. However, we'd like to get a better understanding and reach an actual resolution if possible, rather than relying on this workaround. Any insight is greatly appreciated; thanks in advance!
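The classic culprit for this pattern is parameter sniffing: the cached plan fits the parameter values seen at first compile, not the values used later. If the slowdown is isolated to particular statements, statement-level hints are a narrower alternative to a procedure-wide WITH RECOMPILE. Table and parameter names below are placeholders:

```sql
-- Option 1: recompile only the problem statement, so the rest of the
-- procedure keeps its cached plan.
SELECT *
FROM dbo.Orders
WHERE CustomerId = @CustomerId
OPTION (RECOMPILE);

-- Option 2: build one plan for an "average" value instead of whatever
-- parameter happened to trigger the first compile.
SELECT *
FROM dbo.Orders
WHERE CustomerId = @CustomerId
OPTION (OPTIMIZE FOR (@CustomerId UNKNOWN));
```

Comparing the actual plans of a fast and a slow execution (estimated vs. actual row counts) usually confirms whether sniffing is the cause.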
