Monday, April 29, 2013

[SQL Server 2008 issues] How to track table schema changes

How to track table schema changes

Posted: 28 Apr 2013 05:38 PM PDT

Hi, I have a development database that several developers use. I need to track new tables and modifications to existing tables, such as adding new columns or indexes, or changing existing ones. Is there any way to track these changes? I am not concerned with data changes. Thanks...
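A database-scoped DDL trigger is one common way to capture exactly this kind of change. A minimal sketch follows; the audit table and trigger names are illustrative, not from the original post:

```sql
-- Audit table to hold captured DDL events (illustrative names)
CREATE TABLE dbo.DDL_Audit
(
    EventTime   DATETIME      NOT NULL DEFAULT GETDATE(),
    LoginName   SYSNAME       NOT NULL,
    EventType   NVARCHAR(100) NULL,
    ObjectName  NVARCHAR(256) NULL,
    TSQLCommand NVARCHAR(MAX) NULL
);
GO

-- Fires on CREATE/ALTER/DROP of tables and indexes in this database
CREATE TRIGGER trg_Track_Schema_Changes
ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE,
    CREATE_INDEX, ALTER_INDEX, DROP_INDEX
AS
BEGIN
    DECLARE @e XML = EVENTDATA();
    INSERT INTO dbo.DDL_Audit (LoginName, EventType, ObjectName, TSQLCommand)
    VALUES (
        ORIGINAL_LOGIN(),
        @e.value('(/EVENT_INSTANCE/EventType)[1]',                'NVARCHAR(100)'),
        @e.value('(/EVENT_INSTANCE/ObjectName)[1]',               'NVARCHAR(256)'),
        @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]',  'NVARCHAR(MAX)')
    );
END;
```

Since the poster only cares about schema changes, not data changes, a DDL trigger like this adds negligible overhead and records who ran which statement and when.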

Log Shipping - Delete Log files

Posted: 28 Apr 2013 06:21 PM PDT

Hi, I have set up log shipping on SQL Server 2008 R2 and it's working fine. The copy job copies log files from D:\LOG to the D:\COPYLOG folder, and I have configured it to delete copied files after 72 hours. Files are being deleted from the D:\LOG folder, but not from D:\COPYLOG. Do I need to keep all the log files in this folder? How do I delete the log files from the COPYLOG folder after they have been applied? Regards, Suresh

Inserts are slow

Posted: 28 Apr 2013 03:12 AM PDT

I have a stored procedure that reads transactions one by one and then creates a few inserts per transaction. The total number of inserts is usually about 15,000, which seems like a very small number to me. The inserts are created in a loop as I process and check each row from the source system. One source-system transaction may need several inserts created in the destination table, and determining this is a complicated process. What I'm struggling with is that these inserts are taking forever, and the status under sp_who quickly goes from runnable to suspended. I'm actually gathering all the inserts into a temp table and then doing the real insert at the end with a SELECT INTO. The table has about 100 columns, and yes, they are all needed for research reasons. Any help or suggestions are appreciated.
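For the final staged insert described above, one pattern worth checking is a single set-based `INSERT ... SELECT` with a `TABLOCK` hint, which in the simple or bulk-logged recovery model can qualify for minimal logging. This is only a sketch; the table and column names are illustrative, not from the original post:

```sql
-- Illustrative names: #Staging holds the rows built up in the loop,
-- dbo.Destination is the wide (100-column) target table.
INSERT INTO dbo.Destination WITH (TABLOCK)   -- enables minimal logging in some cases
       (Col1, Col2, Col3)
SELECT Col1, Col2, Col3
FROM   #Staging;
```

If the slowdown is in the row-by-row loop itself rather than the final insert, the bigger win is usually rewriting the per-row logic as set-based operations where possible.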

Best way to do same.

Posted: 28 Apr 2013 04:14 PM PDT

[code="sql"]
CREATE TABLE main (fileNo INTEGER)
INSERT INTO main VALUES (1)
INSERT INTO main VALUES (2)
INSERT INTO main VALUES (3)
INSERT INTO main VALUES (4)
INSERT INTO main VALUES (5)
INSERT INTO main VALUES (6)

CREATE TABLE doc (fileNo INTEGER, code CHAR(1))
INSERT INTO doc VALUES (1, 'I')
INSERT INTO doc VALUES (1, 'A')
INSERT INTO doc VALUES (2, 'A')
INSERT INTO doc VALUES (3, 'I')
INSERT INTO doc VALUES (3, 'A')
INSERT INTO doc VALUES (6, 'A')

SELECT main.fileNo
FROM main
WHERE fileNo NOT IN (SELECT fileNo FROM doc WHERE code = 'I')
[/code]
I need the records for which no corresponding record with code = 'I' is found in the doc table, so the output here is 2, 4, 5, 6. Is there any better way to do the same? Thanks
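One alternative worth considering, sketched against the tables above, is `NOT EXISTS`. It produces the same result here and, unlike `NOT IN`, behaves predictably if `doc.fileNo` ever contains NULLs (a NULL in the `NOT IN` subquery would make the whole query return no rows):

```sql
SELECT m.fileNo
FROM   main AS m
WHERE  NOT EXISTS (SELECT 1
                   FROM   doc AS d
                   WHERE  d.fileNo = m.fileNo
                     AND  d.code   = 'I');
-- Returns fileNo 2, 4, 5, 6 for the sample data above
```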

Rebuild or DBREINDEX

Posted: 27 Apr 2013 11:06 PM PDT

Hi, I have a table with a clustered index and 6 nonclustered indexes. The table is 400 GB and the logical fragmentation of the indexes is above 90%. We have data modifications 24 hours a day, every day of the week, but activity is lower at night. I want to rebuild the indexes and set a fill factor on the indexes whose keys are not ascending.
1. Should I use REBUILD or DBCC DBREINDEX?
2. Is it better to use the online or offline option (because of blocking), and is there anything I should consider if I use an online rebuild?
3. In what order should the clustered and nonclustered indexes be rebuilt? Which should be done first?
4. What would happen if the rebuild or DBREINDEX is cancelled partway through?
I don't want users' requests to hit a lot of timeouts. What is the best scenario for doing this task?
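On SQL Server 2008 the current syntax for this is `ALTER INDEX ... REBUILD` (DBCC DBREINDEX is deprecated). A minimal sketch, assuming illustrative table and index names:

```sql
-- Rebuild one index online with a fill factor; ONLINE = ON requires
-- Enterprise Edition on SQL Server 2008
ALTER INDEX IX_BigTable_SomeKey ON dbo.BigTable
REBUILD WITH (ONLINE = ON, FILLFACTOR = 90, SORT_IN_TEMPDB = ON);

-- Or rebuild every index on the table in one statement
ALTER INDEX ALL ON dbo.BigTable
REBUILD WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);
```

Note that on 2008, online rebuilds are not available for indexes containing LOB columns, and `SORT_IN_TEMPDB` shifts the sort workload into tempdb, which needs enough free space for a 400 GB table.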

Using MERGE tsql for FACT loading

Posted: 28 Apr 2013 03:56 AM PDT

Hi All, I'm using the T-SQL MERGE statement below for loading data into one of the fact tables in the DWH database. The initial full load will be around 30 million records; going forward, each delta load will be fewer than 50K records. I'm dealing with an accumulating-snapshot type of fact table where there will always be inserts: if it's a new record, insert it into the table; if it's an existing record but with different fact values, create a new record with a different load date. To handle this requirement I created a temp table in the same database as the target fact table, with the same structure, and then used the MERGE query below.
[code="sql"]
INSERT INTO dbo.[Fct_Prod_Mthly_Actuals]
    ([Well_Skey], [DateKey], [Cost_Center], [Wghtd_WI_Key], [Item_Cd],
     [Prd_Lease_Vent_Cd], [Rev_Lease_Vent_Cd], [PHA_Flg], Days_Prodcd,
     Days_Injctd, [Pressure_Base], [Metric_Desc], [Metric_Vol],
     [Load_Date], [Update_Date])
SELECT [Well_Skey], [DateKey], [Cost_Center], [Wghtd_WI_Key], [Item_Cd],
       [Prd_Lease_Vent_Cd], [Rev_Lease_Vent_Cd], [PHA_Flg], Days_Prodcd,
       Days_Injctd, [Pressure_Base], [Metric_Desc], [Metric_Vol],
       CAST(CONVERT(VARCHAR(8), GETDATE(), 112) AS DATE) AS [LOAD_DATE],
       [Update_Date]
FROM
(
    MERGE dbo.[Fct_Prod_Mthly_Actuals] AS DST
    USING dbo.[Fct_Prod_Mthly_Actuals_Temp] AS SRC
        ON (    SRC.[Well_Skey]    = DST.[Well_Skey]
            AND SRC.[DateKey]      = DST.[DateKey]
            AND SRC.[Wghtd_WI_Key] = DST.[Wghtd_WI_Key]
            AND SRC.[Item_Cd]      = DST.[Item_Cd]
            AND SRC.[Metric_Desc]  = DST.[Metric_Desc])
    WHEN NOT MATCHED THEN
        INSERT ([Well_Skey], [DateKey], [Cost_Center], [Wghtd_WI_Key],
                [Item_Cd], [Prd_Lease_Vent_Cd], [Rev_Lease_Vent_Cd],
                [PHA_Flg], Days_Prodcd, Days_Injctd, [Pressure_Base],
                [Metric_Desc], [Metric_Vol], [Load_Date], [Update_Date])
        VALUES (SRC.[Well_Skey], SRC.[DateKey], SRC.[Cost_Center],
                SRC.[Wghtd_WI_Key], SRC.[Item_Cd], SRC.[Prd_Lease_Vent_Cd],
                SRC.[Rev_Lease_Vent_Cd], SRC.[PHA_Flg], SRC.Days_Prodcd,
                SRC.Days_Injctd, SRC.[Pressure_Base], SRC.[Metric_Desc],
                SRC.[Metric_Vol], SRC.[Load_Date], SRC.[Update_Date])
    WHEN MATCHED AND
        (   ISNULL(DST.[Metric_Vol], '')      <> ISNULL(SRC.[Metric_Vol], '')
         OR ISNULL(DST.Pressure_Base, '')     <> ISNULL(SRC.Pressure_Base, '')
         OR ISNULL(DST.Days_Injctd, '')       <> ISNULL(SRC.Days_Injctd, '')
         OR ISNULL(DST.Days_Prodcd, '')       <> ISNULL(SRC.Days_Prodcd, '')
         OR ISNULL(DST.Rev_Lease_Vent_Cd, '') <> ISNULL(SRC.Rev_Lease_Vent_Cd, '')
         OR ISNULL(DST.Prd_Lease_Vent_Cd, '') <> ISNULL(SRC.Prd_Lease_Vent_Cd, '')
         OR ISNULL(DST.[PHA_Flg], '')         <> ISNULL(SRC.[PHA_Flg], ''))
    THEN UPDATE SET DST.[LOAD_DATE] = DST.[LOAD_DATE]
    OUTPUT SRC.[Well_Skey], SRC.[DateKey], SRC.[Cost_Center],
           SRC.[Wghtd_WI_Key], SRC.[Item_Cd], SRC.[Prd_Lease_Vent_Cd],
           SRC.[Rev_Lease_Vent_Cd], SRC.[PHA_Flg], SRC.Days_Prodcd,
           SRC.Days_Injctd, SRC.[Pressure_Base], SRC.[Metric_Desc],
           SRC.[Metric_Vol], SRC.[Update_Date], $Action AS MergeAction
) AS MRG
WHERE MRG.MergeAction = 'UPDATE';
[/code]
So my questions are:
1) Can the above query handle such a large volume of data?
2) Do I need to create any indexes on either the target fact table or the temp table, or consider partitioning the fact table?
Please suggest/advise. Thanks in advance, Sam
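On question 2, MERGE performance usually hinges on the join predicate columns being indexed on both sides so the optimizer can avoid full scans. A hedged sketch, using the column names from the query above but with illustrative index names:

```sql
-- Support the MERGE join predicate on the target fact table
CREATE NONCLUSTERED INDEX IX_Fct_Prod_Mthly_Actuals_MergeKeys
ON dbo.Fct_Prod_Mthly_Actuals
    (Well_Skey, DateKey, Wghtd_WI_Key, Item_Cd, Metric_Desc);

-- Mirror the same key order on the staging side so the optimizer
-- can choose a merge join between the two sorted inputs
CREATE NONCLUSTERED INDEX IX_Fct_Prod_Mthly_Actuals_Temp_MergeKeys
ON dbo.Fct_Prod_Mthly_Actuals_Temp
    (Well_Skey, DateKey, Wghtd_WI_Key, Item_Cd, Metric_Desc);
```

For the 30-million-row initial load, it may be faster to drop the target-side nonclustered indexes, load, and recreate them afterwards; for the sub-50K delta loads, keeping the indexes in place is usually fine.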

Resizing the log file in Simple recovery mode

Posted: 27 Apr 2013 08:08 PM PDT

Hi - A brief description of the problem and how it came to be... We have a nightly data load into a reporting database that is set to the simple recovery model. This nightly job has suddenly increased in duration from 20 minutes to over 1 hour. I am getting alerts from SQL Monitor (Red Gate) stating that I have a high number of VLFs in the databases. I noticed the transaction log file was set at 1 MB with autogrow enabled. I had no idea how big to set the transaction log, so I set it to 10 GB. I watched the job run and kept checking the log size, and 500 MB seemed to be the maximum size it grew to. So now I need to resize it. Will resizing it clear any fragmentation, and should I resize it to, say, 500 MB? Will this clear the VLFs?
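The usual remedy for excessive VLFs is to shrink the log file and then regrow it to the target size in a single step, so it is rebuilt with a small number of large VLFs rather than the thousands created by repeated 1 MB autogrowths. A minimal sketch; the database and logical log file names are illustrative:

```sql
USE ReportingDB;   -- illustrative database name
GO
-- One row per VLF is returned, so the row count shows how bad it is
DBCC LOGINFO;

-- Shrink the log as far as possible, then grow it back in one
-- operation; 2008 carves the new space into a handful of VLFs
DBCC SHRINKFILE (ReportingDB_log, 1);   -- illustrative logical file name
ALTER DATABASE ReportingDB
MODIFY FILE (NAME = ReportingDB_log, SIZE = 500MB);
```

Sizing the log slightly above the observed 500 MB peak (say 1 GB) leaves headroom so the nightly job does not trigger autogrow mid-run.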
