Tuesday, July 16, 2013

[T-SQL] CALCULATE 2 years back data

CALCULATE 2 years back data

Posted: 15 Jul 2013 07:58 AM PDT

How do I calculate a date 2 years back? Suppose today is 07/15/2013; 2 years back is 07/15/2011. How do I calculate the exact date?
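A minimal sketch of the usual approach, using DATEADD with a negative offset (GETDATE() stands in for "today"):

    -- Subtract 2 years from the current date
    SELECT DATEADD(year, -2, GETDATE()) AS TwoYearsBack;
    -- e.g. run on 07/15/2013 this returns 07/15/2011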

The conversion of a nvarchar data type to a datetime data type resulted in an out-of-range value

Posted: 15 Jul 2013 11:43 PM PDT

I'm using this query to get a value, but it gives me an error in Visual Studio: "The conversion of a nvarchar data type to a datetime data type resulted in an out-of-range value". The query I'm using in Visual Studio:

    select h.branch_n, type_n, count(incident_id) as x
    from incident as a
    join sectn_dept as c on a.sectn_dept_id = c.sectn_dept_id
    join inc_cat on a.inc_cat_id = inc_cat.inc_cat_id
    join cat_proces_mngt on a.inc_cat_id = cat_proces_mngt.inc_cat_id
    join time on a.date_logged1 = time.pk_date
    join sectn as g on c.sectn_id = g.sectn_id
    join branch as h on g.branch_id = h.branch_id
    where exists (
        select f.branch_n, count(*) as y
        from incident as b
        join sectn_dept as d on b.sectn_dept_id = d.sectn_dept_id
        join inc_cat on b.inc_cat_id = inc_cat.inc_cat_id
        join cat_proces_mngt on b.inc_cat_id = cat_proces_mngt.inc_cat_id
        join sectn as e on d.sectn_id = e.sectn_id
        join branch as f on e.branch_id = f.branch_id
        where c.sectn_n = d.sectn_n
          and date_logged1 >= @week
          and date_logged1 < dateadd(week, 1, @week1)
          and location_id = 2
          and type_id in (1, 2, 3)
          and week >= @week
        group by f.branch_n
        having count(*) > 10 and count(*) <= 30
    )
    and date_logged1 >= @week
    and date_logged1 < dateadd(week, 1, @week1)
    and location_id = 2
    and type_id in (1, 2, 3)
    and week >= @week
    group by h.branch_n, type_n
    order by h.branch_n, type_n desc

When I run it in Microsoft SQL Server Management Studio it works fine and I get results like this:

    Branch_n  type_n     x
    Cook      ST Change  6
    Cook      Question   7
    Cook      Incident   5
    BV        ST Change  7
    BV        Question   5
    BV        Incident   12
    G         ST Change  7
    G         Question   3
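When a query works in SSMS but raises this error from application code, a common cause is that @week and @week1 arrive as nvarchar strings that SQL Server cannot parse as dates under the connection's language settings. A hedged sketch of one fix, typing the parameters as datetime with an explicit conversion style; only @week/@week1 come from the query above, and the literal date and style are illustrative assumptions:

    -- Declare/convert the parameters as datetime explicitly instead of
    -- relying on implicit nvarchar-to-datetime conversion.
    declare @week datetime, @week1 datetime;
    set @week  = convert(datetime, N'15/07/2013', 103); -- style 103 = dd/mm/yyyy
    set @week1 = @week;
    select @week as week_start, dateadd(week, 1, @week1) as week_end;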

insert or update according to the field value

Posted: 15 Jul 2013 06:59 PM PDT

Hi. I'm trying to write a stored procedure. First, if the record exists, it should look at the record time: if the record time is null, it updates the record time, else it updates the updated time. I wrote code like this:

    set @isexist = (select count(*)
                    from GNL_OgretmenDevamsizlikGirisLog
                    where OgretmenID = @ogretmenID
                      and DersYiliID = @dersyiliID
                      and SinifID = @sinifID
                      and DersID = @dersID
                      and convert(date, GirisTarihi) = convert(date, getdate()))
    if (@isexist > 0)
    begin
    end

I want to write the update code inside the IF block, but I could not work it out. How can I check the record time and the updated time, and update only one of them? Thanks.
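A minimal sketch of the conditional update described, using CASE so a single UPDATE touches only the appropriate column. The column names RecordTime and UpdatedTime are assumptions based on the question; substitute the real ones:

    if (@isexist > 0)
    begin
        update GNL_OgretmenDevamsizlikGirisLog
        set RecordTime  = case when RecordTime is null
                               then getdate() else RecordTime end,  -- first touch today
            UpdatedTime = case when RecordTime is null
                               then UpdatedTime else getdate() end  -- later touches
        where OgretmenID = @ogretmenID
          and DersYiliID = @dersyiliID
          and SinifID = @sinifID
          and DersID = @dersID
          and convert(date, GirisTarihi) = convert(date, getdate());
    end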

Performance tuning a large UPDATE statement -- HELP

Posted: 15 Jul 2013 02:50 AM PDT

I have a 1.2 TB database in which I am updating a couple of columns across 11 tables, e.g. T1 (F_Name, L_Name, Address), T2 (User_Id, Email_Address), etc. All the tables have different columns. I have written a WHILE loop that produces the UPDATE statement for each table and then executes it. The UPDATE statement looks like this:

    USE DBNAME;
    UPDATE T1
    SET T1.F_Name  = dbo.fData(F_Name),
        T1.L_Name  = dbo.fData(L_Name),
        T1.Address = dbo.fData(Address);

fData is a function that takes the field and obfuscates the data so it can be used in our DEV environment without sensitive production data reaching developers' hands. I am not concerned about table locks in this instance, as the process runs on my personal test box where the database sits.

The issue: I have run this statement on T1 (~1.2 million rows) and it takes about 2-3 minutes. Running it on T6 (~250 million rows, similar column types) is taking over 12 hours; extrapolating from the smaller table, it should take closer to 2.5 hours, not 8+. I'm wondering what performance enhancements I can make to this process.

Things I'm considering: setting the database to SIMPLE recovery for logging purposes; completing the updates in batches (unneeded for locking purposes, but will SQL Server retrieving and updating smaller sets be faster than the whole table at once?); updating based on table indexes; disabling table indexes; setting the transaction isolation level... any other thoughts? Or can you expound on how each of these would help a LARGE table update, disregarding any blocking needs?

Test box specs: dual quad-core processors, 96 GB RAM. SQL version: SQL Server 2008 R2.
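One of the options listed, batching, can be sketched as a key-range loop so each transaction stays small and, under SIMPLE recovery, log space can be reused between batches. A hedged sketch under stated assumptions: T2 and its columns come from the post, while the batch size and the choice of User_Id as a contiguous clustered key are illustrative:

    -- Batch the update over the key so each transaction stays small.
    DECLARE @minId bigint, @maxId bigint, @batch int = 50000;
    SELECT @minId = MIN(User_Id), @maxId = MAX(User_Id) FROM T2;

    WHILE @minId <= @maxId
    BEGIN
        UPDATE T2
        SET Email_Address = dbo.fData(Email_Address)
        WHERE User_Id >= @minId AND User_Id < @minId + @batch;

        SET @minId = @minId + @batch;
        CHECKPOINT; -- under SIMPLE recovery, lets log space be reused between batches
    END

Walking a key range avoids re-reading rows that were already updated, which a plain UPDATE TOP (n) loop would otherwise need a flag column or tracking table for.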

Forecasting Year Numbers

Posted: 15 Jul 2013 01:40 AM PDT

Hi all. Apologies if this is posted in the wrong section, but I didn't really know where it belonged. I have a set of data (snippet of the matrix below):

    Period 1
    Asset Category    No Of Jobs    Sum Of Spend    Av Of Spend
                              36      £13,972.49        £388.12
    A                       1044     £266,798.16        £255.55
    B                        911     £187,905.17        £206.26
    C                        185      £44,863.45        £242.51
    D                        195     £106,987.62        £548.65
    New Equipment             25      £61,087.37      £2,443.49
    Planned Only              41      £24,576.29        £599.42
    Total                   2437     £706,190.55

I actually have 8 periods' worth of info. What I want to be able to do is forecast the remaining 4 periods based on the 8 periods I do have. Ideally I would take asset category "A" and say, for argument's sake, I have 1000 jobs and £10,000 spend, an average of £10 per job. I want to apply that average spend to the average number of jobs: 1000 jobs divided by 8 periods = 125 jobs per period, so I want periods 9-12 to show as 125 × £10. Does that make any sense? I could do it easily in Excel, but how can I do it in SSRS?

Thanks, Carl.
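The question targets SSRS, but the forecast arithmetic can also live in the dataset query that feeds the report. A hedged sketch in T-SQL; the table and column names (JobSpend, Period, AssetCategory, NoOfJobs, SumOfSpend) are assumptions for illustration:

    -- Average jobs per period times average spend per job
    -- = forecast spend per period for periods 9-12.
    SELECT AssetCategory,
           SUM(NoOfJobs) / 8.0                    AS AvgJobsPerPeriod,
           SUM(SumOfSpend) / SUM(NoOfJobs)        AS AvgSpendPerJob,
           (SUM(NoOfJobs) / 8.0)
             * (SUM(SumOfSpend) / SUM(NoOfJobs))  AS ForecastSpendPerPeriod
    FROM JobSpend
    WHERE Period BETWEEN 1 AND 8
    GROUP BY AssetCategory;

Using the poster's example figures, category "A" with 1000 jobs and £10,000 over 8 periods gives 125 jobs × £10 = £1,250 per period for periods 9-12. In SSRS, the same expression can be written as a calculated field or a textbox expression over the 8 known periods.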
