Thursday, September 26, 2013

[SQL Server 2008 issues] Commenting



Commenting

Posted: 25 Sep 2013 06:30 PM PDT

How much detail should be used in comments, and do you assume a level of understanding in the reader? One comment I saw once was "If you need comments to understand this - you shouldn't be reading it", which is extreme in one direction. On the other hand, do you really need a comment for

[code="sql"]
INSERT INTO table_a
SELECT b.col1, c.col1, b.col2, c.col2
FROM tableb b
INNER JOIN tablec c ON ...
[/code]

The reason for asking: I am writing documentation for a few inherited databases. To me the current level of commenting is fine; it gives an overview of what each stored procedure does, and anything that falls outside that overview is commented.
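
As a concrete illustration of that level of commenting, a hypothetical version of the snippet above might look like the sketch below. The procedure name, column names, join key and the comments themselves are invented for the example; the point is the overview header plus a note only where something is not obvious.

[code="sql"]
/**********************************************************************
  dbo.usp_LoadTableA
  Overview: copies the current rows from tableb/tablec into table_a.
  Called by the nightly load job; assumes tablec has already been refreshed.
**********************************************************************/
INSERT INTO table_a (col1, col2, col3, col4)
SELECT b.col1, c.col1, b.col2, c.col2
FROM tableb AS b
INNER JOIN tablec AS c
        ON c.some_id = b.some_id;  -- joining on some_id rather than the natural key is deliberate; see overview
[/code]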

Sql Agent

Posted: 25 Sep 2013 01:53 AM PDT

Hi guys,

Can someone please point me in the correct direction? I've been searching in vain for a number of days. I'm monitoring SQL Agent jobs with a mixture of SCOM and notifier alerts driven by a job, but I'm encountering problems where a few jobs complete successfully even though errors occurred within their steps. I've queried various Agent history tables, but no errors are reported. How can I extract the step-level information from a scheduled job that reports success?

Cheers
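
For reference, a minimal sketch of the kind of query that can surface failed steps inside jobs whose overall outcome was reported as success. The step_id = 0 row in msdb history is the job outcome; the step rows sit above it.

[code="sql"]
-- Steps that did not succeed, regardless of the overall job outcome
SELECT j.name AS job_name,
       h.step_id,
       h.step_name,
       h.run_date,
       h.run_time,
       h.run_status,     -- 0 = failed, 1 = succeeded, 2 = retry, 3 = canceled
       h.message
FROM msdb.dbo.sysjobhistory AS h
INNER JOIN msdb.dbo.sysjobs AS j
        ON j.job_id = h.job_id
WHERE h.step_id > 0          -- individual steps only, not the job-outcome row
  AND h.run_status <> 1
ORDER BY h.run_date DESC, h.run_time DESC;
[/code]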

Shrink the database only up to the initial size which is set

Posted: 02 Sep 2013 05:21 PM PDT

I have some databases that were created with the default initial size, and I want to reset the initial size on them, which I did using SSMS. But now if I shrink a database, it shrinks beyond the initial size I have set, going all the way down to the initial size it had when the database was created. That is not what I want; I was expecting it to shrink only to the initial size I set. Please suggest how this can be done.
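
One option is to shrink to an explicit target instead of relying on the size recorded at creation. A minimal sketch, where the database and logical file names are placeholders and the target is in MB:

[code="sql"]
USE MyDatabase;
GO
-- Shrink the data file down to an explicit target of 2048 MB,
-- rather than letting the shrink fall back to the size recorded when the database was created.
DBCC SHRINKFILE (N'MyDatabase_Data', 2048);
[/code]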

SQL With Browser Disabled

Posted: 25 Sep 2013 12:07 AM PDT

I am sure I have had this working before, but how can I connect to SQL Server remotely using the instance name (with the instance name not hidden) while the Browser service is disabled? It works with Servername,port but not with Server\instance. With the Browser enabled I can connect with Server\instance. Everything I read on the internet says the Browser must be enabled for this, but I am sure I've done it with the Browser disabled!

2008 SQL replication

Posted: 25 Sep 2013 02:48 AM PDT

I have a SQL 2008 server I need to replicate to a reporting server on a nightly basis. I had replication working originally but had to break it for a software upgrade. The DB has grown from 40 GB to 150 GB (a lot of imports). When I tried to re-enable snapshot replication, the publisher side takes around 40 minutes, but the subscriber side I let run for over 12 hours and it never completed. Currently I am doing a manual backup from the publisher and then a manual restore to the subscriber; with compression the backup is 28 GB and transfers to the subscriber in less than 5 minutes. Would it be better to use transactional replication? Is there a way to have SQL only update the tables that changed? When all of our imports are done I expect the DB to be approximately 250 GB.

String Split

Posted: 17 Sep 2013 09:29 PM PDT

I want to write a query that produces a result like the one below. The general rule is that whenever a letter immediately follows a number, the string should be split into separate words at that point.

Input
-----
Ørstedsvej 7B
Volvo 25D104ND Nokia

Result
------
Ørstedsvej 7 B
Volvo 25 D104 ND Nokia

What is the simplest way to do this?
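
One possible approach, sketched below, is a small scalar function that injects a space at every digit-to-letter boundary using PATINDEX and STUFF. This assumes a case-insensitive collation; adjust the pattern if other alphabets need to count as letters.

[code="sql"]
CREATE FUNCTION dbo.SplitOnDigitLetter (@s NVARCHAR(4000))
RETURNS NVARCHAR(4000)
AS
BEGIN
    -- Find a digit immediately followed by a letter and insert a space between them,
    -- repeating until no such pair remains.
    DECLARE @pos INT = PATINDEX(N'%[0-9][A-Z]%', @s);
    WHILE @pos > 0
    BEGIN
        SET @s = STUFF(@s, @pos + 1, 0, N' ');
        SET @pos = PATINDEX(N'%[0-9][A-Z]%', @s);
    END;
    RETURN @s;
END;
GO

SELECT dbo.SplitOnDigitLetter(N'Ørstedsvej 7B');          -- Ørstedsvej 7 B
SELECT dbo.SplitOnDigitLetter(N'Volvo 25D104ND Nokia');   -- Volvo 25 D104 ND Nokia
[/code]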

Importing from an improperly formatted Excel file

Posted: 25 Sep 2013 06:39 AM PDT

Any guidance would be appreciated. Despite my protestations, I have been tasked with importing from an Excel file. The file is not what I would call a proper data file: the data starts in cell D10 and ends in cell Kx, where x depends on the number of data rows in the file. What are my options here? My first thought would be to use SSIS to dump, say, cells A1 through K10000 (assuming I would never have more than 10000 rows) into a staging table and parse the data out from there. Has anyone been down this road...?
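
If SSIS feels heavy for this, one alternative sketch is to read just the used range with OPENROWSET and the ACE provider into a staging table. This assumes the provider is installed, ad hoc distributed queries are enabled, and the sheet name, file path and range below are made up for illustration.

[code="sql"]
SELECT *
INTO dbo.excel_staging      -- raw dump; parse the real columns out afterwards
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;Database=C:\import\vendor.xlsx;HDR=NO',
                'SELECT * FROM [Sheet1$D10:K10000]');
[/code]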

Multi instance SQL 2008 R2 upgrade

Posted: 25 Sep 2013 01:59 PM PDT

I have 3 SQL 2008 R2 SP1 instances on a server and I want to upgrade only 1 instance to SP2. I use an automated installation command in a batch file, as below, which cannot be changed now:

R2SP2setup.exe /IAcceptSQLServerLicenseTerms="TRUE" /quiet /allinstances

For the other 2 instances that I don't want to patch with SP2, if I keep their SQL services stopped and disabled, will it work and upgrade only the one I require? Please let me know if there is any other option. I cannot change the /allinstances parameter in the above command as it is already in production.
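
For reference only, since the batch file cannot be changed: the service pack installer also accepts a per-instance switch in place of /allinstances, along the lines of the sketch below. The instance name is a placeholder, and the switch should be verified against the SP2 setup documentation before use.

[code="plain"]
R2SP2setup.exe /IAcceptSQLServerLicenseTerms="TRUE" /quiet /InstanceName=INST1
[/code]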

Merging different clients' data onto the same box

Posted: 25 Sep 2013 07:22 AM PDT

Hi. Our current infrastructure is set up so that we have 20 different companies using the service we provide, each with their own server running SQL Server 2008 R2. Each database server has the same main 3 databases installed, all with the same table structure across the estate. Some of the servers are well under-utilised, so we are looking at consolidating, say, 2 clients onto 1 server, but we need to keep them totally separate for security/data protection. I am thinking the best way to do this is by installing multiple instances on each server: instance A for company A, and instance B for company B. Can anybody who has done this before, or knows about it, please advise me of the pros and cons of doing it and any gotchas? We use transactional replication as well, if that adds anything to your reasoning.

Replication Latency Issues. Comprehending data in MSdistribution_History

Posted: 25 Sep 2013 05:57 AM PDT

Hello,

Can anyone please help us understand the MSdistribution_history table better? Here is our problem and what we understand so far. Randomly during the night we get alerted on replication latency issues, but by the time we open our computers and remote in, the latency issues are gone. We are trying to understand MSdistribution_history for that particular publication and we are observing something odd in it.

Our latency window was 3:34 AM - 3:39 AM. We see a bump in delivered_transactions and the associated delivered_commands, and obviously there is (and will be) current delivery latency for that many transactions. However, as you can see in the attached spreadsheet, there is a bump in the number of transactions at 3:19 AM (which was not the culprit for the alert; our latency threshold is 5 minutes), and the alerts started coming in at 3:34 AM. From 3:34 AM until 3:50 AM we got alerted for latency, yet for that window the attached table shows a high latency value but no transactions/commands associated with it.

So my question is: if the sync was happening fine until 3:24 AM (given the bump in the number of transactions/commands), why are we getting alerts from 3:34 AM until 3:50 AM? I hope I explained it clearly.

Note: the *Rev columns are derived columns; I converted the cumulative readings into per-session readings.

Please shed some light at your earliest convenience.

Thanks
Jagan K
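
In case it helps frame the timings, a minimal sketch of a query pulling the raw history rows for that window. The agent_id value is a placeholder; it would be looked up in distribution.dbo.MSdistribution_agents for the publication in question.

[code="sql"]
SELECT h.[time],
       h.delivered_transactions,
       h.delivered_commands,
       h.delivery_latency,      -- latency recorded for the session
       h.comments
FROM distribution.dbo.MSdistribution_history AS h
WHERE h.agent_id = 42            -- placeholder distribution agent id
  AND h.[time] BETWEEN '2013-09-25 03:00' AND '2013-09-25 04:00'
ORDER BY h.[time];
[/code]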

Select TOP 1 Runs forever

Posted: 25 Sep 2013 05:37 AM PDT

I have 2 identical tables, on 2 separate databases, from which we recently deleted about 15M rows. After the delete I rebuilt the indexes on both tables with ALTER INDEX ... REBUILD. Both tables still show fragmentation, from the following query:

[code="sql"]
DECLARE @db_id SMALLINT;
DECLARE @object_id INT;

SET @db_id = DB_ID(N'<DBNAME>');
SET @object_id = OBJECT_ID(N'<TABLENAME>');

IF @object_id IS NULL
BEGIN;
    PRINT N'Invalid object';
END;
ELSE
BEGIN;
    SELECT *
    FROM sys.dm_db_index_physical_stats(@db_id, @object_id, 0, NULL, 'DETAILED');
END;
GO
[/code]

Results for the two tables:

[code]
                                  Table 1             Table 2
database_id                       20                  17
object_id                         1285579618          1285579618
index_id                          0                   0
partition_number                  1                   1
index_type_desc                   HEAP                HEAP
alloc_unit_type_desc              IN_ROW_DATA         IN_ROW_DATA
index_depth                       1                   1
index_level                       0                   0
avg_fragmentation_in_percent      13.9545707492285    16.138381869376
fragment_count                    41736               41280
avg_fragment_size_in_pages        53.1668343875791    45.3369670542636
page_count                        2218971             1871510
avg_page_space_used_in_percent    0.136384976525819   0.820434890042006
record_count                      4540                24858
ghost_record_count                0                   0
version_ghost_record_count        0                   0
min_record_size_in_bytes          4901                4902
max_record_size_in_bytes          4957                4971
avg_record_size_in_bytes          4917.541            4920.613
forwarded_record_count            0                   0
[/code]

The query:

[code="sql"]
SELECT TOP 1 * FROM <TABLE>
[/code]

runs for almost 15 minutes against Table 1 and 3 seconds against Table 2. I can fix the issue if I drop Table 1 and reload it, but can anyone point me somewhere else to figure out why this is happening? Both tables have approx. 370 columns. I realize you can't reproduce the data; I'm just looking for general ideas.

Thanks
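
One thing worth noting from the DMV output: both tables are heaps (index_id = 0), and ALTER INDEX ... REBUILD does not compact a heap, so the near-empty pages left behind by the mass delete stay allocated. On SQL 2008 and later a heap can be rebuilt directly; a minimal sketch, with the table name as a placeholder:

[code="sql"]
-- Rebuilds the heap itself, deallocating the mostly-empty pages left by the mass delete.
ALTER TABLE dbo.BigTable REBUILD;
[/code]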

format decimal in modify() XML method

Posted: 25 Sep 2013 04:26 AM PDT

Hi,

In my XML file the value is being generated as 0, but it should be generated as 0.00. I'm using this:

declare @mydecimal decimal(12,2) = 0.00;
...
SET @myXml.modify('insert <tag1 value1=''{sql:variable("@mydecimal")}''/>');
...
<tag1 value1="0">    -- as it is showing
<tag1 value1="0.00"> -- as it should be!

I've tried CONVERT, but it did not work.

Thanks,
JoseAcJR
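
A common workaround is to serialize the decimal to a string first, so the trailing zeros survive the insert. A minimal sketch, assuming the target node is (/root)[1]:

[code="sql"]
DECLARE @mydecimal decimal(12,2) = 0.00;
DECLARE @mydecimalstr varchar(20) = CONVERT(varchar(20), @mydecimal);  -- '0.00'
DECLARE @myXml xml = N'<root/>';

-- Insert the pre-formatted string instead of the decimal variable itself.
SET @myXml.modify('insert <tag1 value1=''{sql:variable("@mydecimalstr")}''/> into (/root)[1]');

SELECT @myXml;  -- <root><tag1 value1="0.00"/></root>
[/code]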

Error message when changing location of share drive in SSIS package

Posted: 18 Sep 2013 03:56 AM PDT

Hi all,

At work we have a job that backs up 3 tables to a share drive using an SSIS package that delivers the data to a flat file. This week I had to change the location of the share drive in the package; after making the change the package no longer works and the following message appears (see below). Does anyone know how to solve this one?

Executed as user: (Database Name)\SYSTEM. Microsoft (R) SQL Server Execute Package Utility Version 10.50.4279.0 for 64-bit Copyright (C) Microsoft Corporation 2010. All rights reserved. Started: 9:46:26 AM
Error: 2013-09-18 09:46:27.42 Code: 0xC020200E Source: Data Flow Task Flat File Destination [111] Description: Cannot open the datafile "ShareLocation\Filename.txt". End Error
Error: 2013-09-18 09:46:27.42 Code: 0xC004701A Source: Data Flow Task SSIS.Pipeline Description: component "Flat File Destination" (111) failed the pre-execute phase and returned error code 0xC020200E. End Error
Error: 2013-09-18 09:46:27.42 Code: 0xC020200E Source: Data Flow Task Flat File Destination [16] Description: Cannot open the datafile "ShareLocation\Filename.txt". End Error
Error: 2013-09-18 09:46:27.42 Code: 0xC004701A Source: Data Flow Task SSIS.Pipeline Description: component "Flat File Destination" (16) failed the pre-execute phase and returned error code 0xC020200E. End Error
Error: 2013-09-18 09:46:27.44 Code: 0xC020200E Source: Data Flow Task Flat File Destination [166] Description: Cannot open the datafile "ShareLocation\Filename.txt". End Error
Error: 2013-09-18 09:46:27.44 Code: 0xC004701A Source: Data Flow Task SSIS.Pipeline Description: component "Flat File Destination" (166) failed the pre-execute phase and returned error code 0xC020200E. End Error
DTExec: The package execution returned DTSER_FAILURE (1). Started: 9:46:26 AM Finished: 9:46:27 AM Elapsed: 0.672 seconds. The package execution failed. The step failed.,00:00:01,0,0,,,,0
09/18/2013 09:00:00,Aurum - Backup to share,Error,0,(Database Name),Aurum - Backup to share,(Job outcome),,The job failed. The Job was invoked by Schedule 14 (daily). The last step to run was step 1 (*****).,00:00:00,0,0,,,,0
09/18/2013 09:00:00,Aurum - Backup to share,Error,1,(Database Name),Aurum - Backup to share,Aurum Share Backup,,

Best regards,
Daniel

Mapping inserted / source identity values

Posted: 14 Sep 2013 05:16 AM PDT

Hi,

I have an interesting problem that I haven't been able to find a good solution for. I want to insert rows from a source table into a destination table and record the mapping between the source and destination rows. The problem is that I cannot match the non-identity values against each other, because there is no uniqueness among them; in the real world these are actually address logging tables.

Here is the sample script:

[code="sql"]
CREATE TABLE [source] (i INT IDENTITY PRIMARY KEY, some_value VARCHAR(30));
CREATE TABLE [destination] (i INT IDENTITY PRIMARY KEY, some_value VARCHAR(30));
CREATE TABLE [mapping] (i_old INT, i_new INT); -- i_old is the source.i value, i_new is the inserted destination.i value

-- Import some sample data...
INSERT INTO [source] (some_value)
SELECT TOP 30 name
FROM sysobjects;

-- Destination has some data already
INSERT INTO [destination] (some_value)
SELECT TOP 30 name
FROM sysobjects;

-- Here, I want to transfer everything from source into destination,
-- but be able to keep a mapping between the two tables.
-- I tried to use the OUTPUT clause, but I cannot refer to columns outside of the ones being inserted:
INSERT INTO [destination] (some_value)
--OUTPUT inserted.i, s.i INTO [mapping] (i_new, i_old) -- s.i doesn't work!!
SELECT some_value
FROM [source] s;
[/code]

I cannot change the table definitions, and I'd rather not mess with SET IDENTITY_INSERT ON either. Does someone have an idea?
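
One approach that avoids IDENTITY_INSERT: MERGE allows the OUTPUT clause to reference source columns, unlike a plain INSERT ... SELECT. A sketch against the sample tables above:

[code="sql"]
-- The ON predicate is deliberately false so every source row is treated as unmatched
-- and inserted, while OUTPUT can see both the source key and the new identity value.
MERGE [destination] AS d
USING [source] AS s
    ON 1 = 0
WHEN NOT MATCHED THEN
    INSERT (some_value) VALUES (s.some_value)
OUTPUT s.i, inserted.i INTO [mapping] (i_old, i_new);
[/code]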

Best way to mass delete partitioned data?

Posted: 25 Sep 2013 04:26 AM PDT

We have a > 40 TB data warehouse that once resided on a single SQL 2008 Enterprise instance and was "mostly" migrated to a three-node SQL 2012 Enterprise failover cluster. The largest tables are partitioned, and we use a relatively simple partitioning key on a datetime2 column, which is also the clustering key. Only the 2012 and 2013 data are "live" on the new failover cluster; the 2012 and older data also still resides on the single-instance 2008 server. SAN storage is getting skinny and we need to delete the 2012 data on the single instance and reclaim the "double dipped" space.

So my question is: what is the best approach to mass delete a large quantity of partitioned data? My target is only the 2012 data, which should reside on very specific filegroups. We used only 1 file per filegroup, each with a boring but descriptive logical name like '20120901File'.

Can this be as simple as updating the partition scheme and dropping the 2012 filegroups? Or if I do this will I be causing some insidious corruption? Do I need to take a more conservative approach and do batch row deletions in something like a WHILE loop (to enforce, say, 1000-row batches) while also switching to the bulk-logged recovery model to avoid trashing the transaction log? Is there a way to get clever with partition switching? I have done sliding windows in the past, but this isn't quite what I'm after.

Ballpark estimates are deleting on the order of tens of billions of rows and 10 - 20 TB of data, so you can see the appeal of being able to quickly drop the floor out from under a bunch of data by simply nuking an entire filegroup. I'm just not sure whether that is feasible and won't have other undesirable consequences. As a last thought, I believe our setup has aligned indexes.

Cheers,
Jared
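
You cannot drop a filegroup that still holds data, but switching each 2012 partition out to an empty staging table on the same filegroup and then dropping that table is a metadata-only operation. A rough sketch with hypothetical table, function and boundary names, assuming the staging table has an identical definition, matching aligned indexes, and sits on the same filegroup as the source partition:

[code="sql"]
-- Switch one 2012 partition out of the fact table into the empty staging table.
ALTER TABLE dbo.FactBig
    SWITCH PARTITION $PARTITION.pfDate('20120901') TO dbo.FactBig_Staging;

-- Dropping (or truncating) the staging table removes those rows almost instantly.
DROP TABLE dbo.FactBig_Staging;

-- Remove the now-empty boundary so the partition function/scheme no longer need it;
-- once nothing references the 2012 filegroup, its file and filegroup can be dropped.
ALTER PARTITION FUNCTION pfDate() MERGE RANGE ('20120901');
[/code]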

Bulk Insert - CSV with and without quotes

Posted: 25 Sep 2013 03:39 AM PDT

Hello

I need to bulk insert a .csv file into a SQL table. The problem is that within the .csv file some fields are quoted and some have commas within the field values. Does anyone have a solution or a workaround for how I can handle this?

The .csv file is as follows:

[code="plain"]
"1 High Street","London","United Kingdom"
"1,High Street","London","United Kingdom"
"1,High Street",,United Kingdom
[/code]

The SQL code is as follows:

[code="plain"]
BULK INSERT test FROM 'c:\test\test.csv' WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')
select * from test
[/code]

You should be able to see all the issues I am experiencing. I tried importing with the quotes specified as part of the terminators, but because not all fields are quoted it imported incorrectly again. Any ideas?

Thanks
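
One workaround on 2008, where BULK INSERT has no notion of an optional text qualifier: load each line whole into a single-column staging table and strip the quotes/split the fields in T-SQL afterwards. A minimal sketch, with the staging table name made up:

[code="sql"]
CREATE TABLE dbo.csv_staging (raw_line NVARCHAR(4000));

-- The default field terminator (tab) does not appear in this data,
-- so each whole line lands in the single column.
BULK INSERT dbo.csv_staging
FROM 'c:\test\test.csv'
WITH (ROWTERMINATOR = '\n');

SELECT raw_line FROM dbo.csv_staging;   -- parse the quoted fields out from here
[/code]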

SSIS Use a conditional statement on a file name?

Posted: 25 Sep 2013 02:26 AM PDT

I have a file that comes from a vendor who can't seem to spell. The file has a prefix of D, E, or P. 95% of the time they put the 'D' (or the others) at the beginning of the file name, so in my ForEach Loop Collection Files box I have d*.txt, but every now and then some muckity-muck puts the prefix on the end instead and the gates of heck come crashing down. I want to test the file name somehow, or maybe put an -OR- in that Files box. Any ideas?

Bueller? Thanks in advance.

Crusty.

Point in time restore not from the latest .BAK file

Posted: 24 Sep 2013 11:03 PM PDT

Let's say I need to restore a db from FULL BACKUP A through TRANS BACKUP Z. I have a script that restores the full backup and includes the .trn files up to backup Z. This works fine and can also be done via the GUI. Take this scenario as an example; this is my issue:

FULL Backup A
Trans Backup a
Trans Backup b
Trans Backup c
Trans Backup d
FULL Backup B - corruption found
Trans Backup e
Trans Backup f
Trans Backup g
Trans Backup h
FULL Backup C - corruption found
Trans Backup i
Trans Backup j
Trans Backup k
Trans Backup Z

I need to restore up to TRANS BACKUP Z; however, my script now breaks at the time FULL Backup B was created, which breaks my restore sequence. How do I get around this and get it to restore from FULL BACKUP A through TRANS BACKUP Z? The script I am using does have all the relevant .trn files listed to be loaded, and they are in the correct order. Why does the full backup break the chain? This is not a live issue; I am preparing for unfixable db corruption, to eliminate a lengthy downtime.
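
As long as the log backup chain itself is unbroken, the later full backups can simply be skipped: restore FULL Backup A WITH NORECOVERY and then apply every .trn in sequence through Trans Backup Z. A minimal sketch with made-up file names:

[code="sql"]
RESTORE DATABASE MyDb
    FROM DISK = N'D:\Backups\Full_A.bak'
    WITH NORECOVERY, REPLACE;

RESTORE LOG MyDb FROM DISK = N'D:\Backups\Trans_a.trn' WITH NORECOVERY;
-- ...apply b, c, d, e, f, g, h, i, j, k in order, all WITH NORECOVERY...
RESTORE LOG MyDb FROM DISK = N'D:\Backups\Trans_Z.trn' WITH RECOVERY;
[/code]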

Need Tool to FTP Files to Secured SFTP Site

Posted: 25 Sep 2013 01:54 AM PDT

I'm looking for a third-party tool that I can execute from my SSIS package to transfer files to a secured SFTP site. Because I will have many files to send from several folders, it needs to be able to accept a parameter containing the file path each time it is called. I have WinSCP, but I'm wondering if there is a better tool, one that I can pass a file name into. I appreciate any ideas.

Thanks, Jack

Replication-Distribution Database already exists problem

Posted: 25 Sep 2013 12:32 AM PDT

Hi All,

When I complete the replication wizard it tries to run through, but the following error message appears:

[i]Cannot create file 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Data\distribution.MDF' because it already exists. Change the file path or the file name, and retry the operation. CREATE DATABASE failed. Some file names listed could not be created. Check related errors. Changed database context to 'master'. (Microsoft SQL Server, Error: 5170)[/i]

I have checked in the system databases folder and the distribution database isn't there. However, the mdf and ldf files do reside in C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA.

Question 1: Can I delete these two files?
Question 2: Will the replication wizard create a new mdf and ldf?

Thanks in advance guys!
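
Before deleting anything from disk, a quick sanity check that those files are not attached to any database on the instance; a minimal sketch:

[code="sql"]
SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_name,
       physical_name
FROM sys.master_files
WHERE physical_name LIKE '%distribution%';
[/code]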

CTE

Posted: 24 Sep 2013 09:19 PM PDT

Is it possible to call a stored procedure inside a CTE?
