Wednesday, March 13, 2013

[SQL Server 2008 issues] How to query SSAS dimension properties

How to query SSAS dimension properties

Posted: 13 Mar 2013 03:55 AM PDT

Hello all,

Short version: I need to determine how, from within T-SQL, I can query an SSAS cube to find the source column name (KeyColumn) of an attribute within a dimension. For example, my attribute is called "Product Color"; however, in the database it comes from a table called "Products" with a column named "Custom1". I basically need to find the database column names of all the attributes in a specified dimension.

Long version: Our application leverages a somewhat generic, yet configurable, cube per client. Each of our clients is configured by the amount and type of data they send us. So from a database perspective, within our customer and product dimensions, we allow our ETL team to leverage any one of multiple generic/custom columns. Because said generic columns aren't named for their purpose (again, they're generic so we can have one model database for all clients), the columns are renamed within the cube, and the cube then uses those names for the application. We're working on some custom reporting that hits the database and not the cube (I know), and given the above, I have no way of knowing what to index, because any given client can be using X to Y of these custom fields. Outside of dumping a blanket index on all columns, the next best bet would be to dynamically create an index based on the columns in use for that cube. We've accomplished this within the application using SQL AMO; however, I now need to do this from within T-SQL. Actually, I could use AMO from PowerShell, which is what I'm going to start looking into shortly, but that will take longer. I was curious whether this is possible from within T-SQL; furthermore, any suggestions on this topic (outside of fixing our database/cube) are most welcome!

Thanks
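For what it's worth, if a linked server to the SSAS instance is available, the $SYSTEM DMVs can at least be queried from T-SQL. Note they expose attribute and hierarchy names but, as far as I know, not the KeyColumn-to-source-column binding, which is why AMO or an XMLA DISCOVER is usually needed for that last step. A sketch (the linked server name SSAS_LINKED and the [Product] dimension are assumptions):

```sql
-- Sketch only: assumes a linked server named SSAS_LINKED created with the
-- MSOLAP provider against the cube's database. This lists attribute
-- hierarchies for one dimension; it does NOT surface the KeyColumn binding.
SELECT *
FROM OPENQUERY(SSAS_LINKED,
    'SELECT [CATALOG_NAME], [CUBE_NAME],
            [DIMENSION_UNIQUE_NAME], [HIERARCHY_NAME]
     FROM $SYSTEM.MDSCHEMA_HIERARCHIES
     WHERE [DIMENSION_UNIQUE_NAME] = ''[Product]''');
```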

Extended Events

Posted: 13 Mar 2013 03:03 AM PDT

I only recently discovered Extended Events. Why? I don't know! I jumped in, learned how to use it, and then let the rest of our SQL team in on the discovery. One of the team members quickly stated that it's worthless and too difficult to use and read, and that about the only thing it might be good for is locating long-running queries. I thought it was easy to create scripts using 2012, and also easy to use SSMS to read the .xel files in table format. Am I just dealing with a case of Debbie Downer here, or am I just too excited about the shiny new toy? Perfmon seems to be the tool of choice for Debbie. Does anyone else think this is a great tool? What types of things do you prefer to use it for? Thanks!!

invalid character in input file

Posted: 12 Mar 2013 12:04 PM PDT

Hi all, I'm being told I have to scrub invalid data in an input file. I have a 30-character description field with extended-ASCII junk causing a failure in my SSIS package when trying to store it in a char column. Is there a simple way using SQL to replace ASCII values < 32 and > 127 with a space (' ')? Thanks in advance.
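One common T-SQL approach is a PATINDEX loop with a binary collation, replacing anything outside the printable ASCII range 32-126 with a space (this also replaces DEL, character 127). A sketch against a variable; in practice you would wrap the loop around an UPDATE of the staging column:

```sql
-- Replace every character outside the printable ASCII range (space..tilde,
-- i.e. 32..126) with a space. The binary collation makes PATINDEX compare
-- raw byte values instead of collation equivalents.
DECLARE @s varchar(30) = 'Bad' + CHAR(9) + 'data' + CHAR(200) + 'here';
DECLARE @p int = PATINDEX('%[^ -~]%', @s COLLATE Latin1_General_BIN);

WHILE @p > 0
BEGIN
    SET @s = STUFF(@s, @p, 1, ' ');  -- overwrite the offending character
    SET @p = PATINDEX('%[^ -~]%', @s COLLATE Latin1_General_BIN);
END

SELECT @s;  -- 'Bad data here' with the control/extended bytes now spaces
```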

Recommended index causing query to run twice as long / HASH vs NESTED LOOP

Posted: 12 Mar 2013 10:13 AM PDT

Hey all, I've been working on some performance tuning and am running into a wall at this point. I've taken some initial passes at rewriting the query in question and, without touching indexing, have reduced run time by more than half (from 40 to 16 seconds). When viewing the missing-index recommendations in all three spots (DTA, DMVs, execution plan), SQL is telling me that I have an opportunity for a better index for this query. After reviewing said recommendation, it seemed pretty straightforward that it's a better index than the existing one (or at least I thought so). The existing index that is used is twice as large and its column order isn't as ideal, whereas the new index is much more focused on the query at hand. So it seems like a no-brainer to go ahead and add this index, yeah? I've done so and have experimented in a multitude of ways; however, now that the optimizer wants to use this new index, run time is averaging around 33 seconds!? Looking at the execution plans, there is a distinct change: with the new index, an additional Nested Loop has been added to the mix, whereas in the old plan there were just the two Hash Match (Aggregate) operators.
I can't say I'm overly educated on all the differences; however, adding JOIN hints and specifying each type only resulted in even longer query times. At this stage, I'm simply trying to better understand why my plan looks worse with a better index, and why the nested loop was introduced into the mix. Thank you!

Execution plans (top is pre-index, bottom is post-index): [URL=http://imageshack.us/photo/my-images/831/execplan.png/][IMG]http://img831.imageshack.us/img831/3883/execplan.png[/IMG][/URL]

Query:
[code="sql"]
SELECT YEAR(i.[InvoiceDate]) AS [Year]
      ,MONTH(i.[InvoiceDate]) AS [TimePeriod]
      ,SUM(i.[Amount] * ISNULL(cur.[Rate], 1)) AS [Revenue]
FROM dbo.[ftInvoices] i
JOIN dbo.[DimCurrency] cur ON i.[CurrencyID] = cur.[CurrencyID]
WHERE i.[Amount] > 0
  AND i.[Quantity] > 0
  AND i.[Cost] > 0
  AND i.[InvoiceDate] BETWEEN '01/01/0001' AND '12/31/9999'
GROUP BY YEAR(i.[InvoiceDate]), MONTH(i.[InvoiceDate])
[/code]

Original Index:
[code="sql"]
CREATE INDEX [ix_InvoiceDate_Quantity_Amount] ON [dbo].[ftInvoices]
(
    [InvoiceDate] ASC,
    [Quantity] ASC,
    [Amount] ASC
)
INCLUDE ([ContractPricingFlag], [Cost], [CurrencyID], [CustID], [DimFourId], [DimOneId], [DimThreeId], [DimTwoId], [ExchangeRate], [Exclude], [ExtendedCost], [InvoiceNum], [LineNum], [ListPrice], [ProdID], [UOMID])
[/code]

New Index:
[code="sql"]
CREATE NONCLUSTERED INDEX [IX__ftInvoices__CurrencyID__InvoiceDate__Quantity__Amount__Cost]
ON [dbo].[ftInvoices] ([CurrencyID], [InvoiceDate], [Quantity], [Amount], [Cost])
[/code]

Incorrect syntax error when using group by

Posted: 12 Mar 2013 06:54 AM PDT

I'm trying to do a join and then a group by and order by, but I'm getting an "Incorrect syntax near 'ds'" error. ds is the alias for the dataset which is the union of the 2 inner queries. Any help would be greatly appreciated:

-- Add the parameters for the stored procedure here
Declare @FromDt as date = '01-01-2011',
        @ThruDt as date = '03-11-2013',
        @Region as varchar(50) = 'Claims',
        @Queue as varchar(50) = 'Catch-All';

Declare @RegionTbl as table(RegionName varchar(50));
Declare @QueueTbl as table(QueueName varchar(50));

Insert @RegionTbl select Value from hpexprpt.dbo.split(@Region,',');
Insert @QueueTbl select Value from hpexprpt.dbo.split(@Queue,',');

(
select users.last_name + ',' + users.first_name as [User ID]
     , workflow_regions.name as Region
     , queues.name as [Queue]
     , work_item_statuses.name as [Status]
     , case when convert(varchar,work_items.creation_date,110) < @FromDt then 'From Backlog' else 'Current' end as [Class]
     , wf_jobs.wf_job_id as [JobID]
     , work_items.elapsed_time
     , CONVERT(varchar, DATEADD(ms, work_items.elapsed_time * 1000, 0), 114) as [Total Time]
     , DATEADD(SECOND,(work_items.elapsed_time * -1),work_items.completion_date) as [Start_Date]
     , work_items.completion_date
from hpexpprod.dbo.work_items
     join hpexpprod.dbo.queues on queues.queue_uid = work_items.queue_uid
     join hpexpprod.dbo.workflow_regions on workflow_regions.workflow_region_uid = work_items.workflow_region_uid
     join hpexpprod.dbo.work_item_statuses on work_item_statuses.work_item_status_uid = work_items.work_item_status_uid
     join HPEXPPROD.dbo.wf_jobs on wf_jobs.wf_job_uid = work_items.wf_job_uid
     join hpexpprod.dbo.actors on actors.actor_uid = work_items.actor_uid
     left join HPEXPPROD.dbo.users on users.actor_uid = actors.actor_uid
where workflow_regions.name in (select * from @RegionTbl)
  and queues.name in (select * from @QueueTbl)
  and (
        cast(isnull(work_items.completion_date,'') as date) between @FromDt and @ThruDt
        or cast(work_items.last_updated_date as date) between @FromDt and @ThruDt
      )
union
SELECT users.last_name + ', ' + users.first_name as [User ID]
     , workflow_regions.name as Region
     , btr.name as [Queue]
     , 'Break' as [Status]
     , case when convert(varchar, btt.creation_date,110) < @FromDt then 'From Backlog' else 'Current' end as [Class]
     , '' as [JobId]
     , btt.elapsed_time
     , CONVERT(varchar, DATEADD(ms,btt.elapsed_time * 1000, 0), 114) as [Total Time]
     , DATEADD(SECOND,(btt.elapsed_time * -1),btt.creation_date) as [Start_Date]
     , btt.creation_date completion_date
FROM HPEXPPROD.dbo.break_time_tracking btt
     join HPEXPPROD.dbo.break_time_reasons btr on btr.break_time_reason_uid = btt.break_time_reason_uid
     join hpexpprod.dbo.workflow_regions on workflow_regions.workflow_region_uid = btt.workflow_region_uid
     join HPEXPPROD.dbo.actors on actors.actor_uid = btt.actor_uid
     left join HPEXPPROD.dbo.users on users.actor_uid = actors.actor_uid
where workflow_regions.name in (select * from @RegionTbl)
  and cast(isnull(btt.creation_date,'') as date) between @FromDt and @ThruDt
) ds
group by ds.[USER ID]
order by 9 asc

ExecutionLog2 table

Posted: 07 Mar 2013 08:13 PM PST

How long do the ExecutionLog2 table and the catalog table in the ReportServer database hold their data? Do they store all historical data indefinitely, or does SQL Server remove the data from both tables after some interval?
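For reference, ExecutionLog2 is a view over dbo.ExecutionLogStorage, and Reporting Services trims it on a schedule rather than keeping it forever: by default only the last 60 days are kept. The retention is controlled by the ExecutionLogDaysKept setting (a value of -1 keeps rows indefinitely), which can be inspected like this:

```sql
-- Check how many days of execution log history Reporting Services retains.
-- Default is 60; rows older than this are purged by a nightly cleanup task.
USE ReportServer;

SELECT Name, Value
FROM dbo.ConfigurationInfo
WHERE Name = 'ExecutionLogDaysKept';
```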

Temp table join example

Posted: 13 Mar 2013 01:59 AM PDT

Hi, I'm looking for a basic example of a temporary table joined to an actual table. I'm guessing it should be like:

Select Model
FROM CARS INNER JOIN #tempTable on CARS.Model = #tempTable.Model
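That guess is essentially right; here is a minimal self-contained version of the pattern (the CARS table and the Model values are illustrative):

```sql
-- Build a temp table, join it to a permanent table, then clean up.
CREATE TABLE #tempTable (Model varchar(50));

INSERT INTO #tempTable (Model)
VALUES ('Civic'), ('Corolla');

-- Only rows whose Model appears in the temp table come back.
SELECT c.Model
FROM dbo.CARS AS c
INNER JOIN #tempTable AS t
    ON c.Model = t.Model;

DROP TABLE #tempTable;
```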

Linked Server Replication

Posted: 12 Mar 2013 07:44 PM PDT

Good morning. I have been struggling with data synchronization between SQL Server and MySQL for ages; I have tried every possible scenario and cannot get it to work. Adding triggers to SQL Server produced errors. I am now at the point of syncing the data through replication, but have picked up an issue there as well. I created the linked server and it works 100% (the server options are set up as Subscriber), but when I set up a new subscription I cannot see it in the list of databases. What am I missing? Thanks

Bulk Insert & Temporary Table

Posted: 13 Mar 2013 12:42 AM PDT

Hi, I have the following situation: I have a list of items that I am excluding from a query. So far I have been using a WHERE clause with NOT LIKE after NOT LIKE. The list keeps growing, so I thought, why not just create a temporary table based on a text file that contains the list of items I want to exclude? This was easy. The problem is that this temporary table contains just one column and I have nothing to join it to. So I am hoping someone has advice on how best to handle a list of items you want to exclude, other than having numerous NOT LIKE statements.
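One possible approach: you don't need an equi-join column at all; an anti-join on LIKE lets every row of the single-column temp table act as a pattern. A sketch, assuming each line of the text file is one pattern to exclude (the file path and target table names are illustrative):

```sql
-- Load the exclusion list (one pattern per line) into a temp table.
CREATE TABLE #Exclude (Pattern varchar(100));

BULK INSERT #Exclude
FROM 'C:\data\exclusions.txt'      -- hypothetical path
WITH (ROWTERMINATOR = '\n');

-- Keep only rows whose Description matches none of the patterns.
SELECT t.*
FROM dbo.MyTable AS t              -- hypothetical target table
WHERE NOT EXISTS (SELECT 1
                  FROM #Exclude AS e
                  WHERE t.Description LIKE '%' + e.Pattern + '%');
```

This replaces the growing chain of NOT LIKEs with a single NOT EXISTS, and the list can grow without any query changes.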

Possible to recover / retrieve a server-side trace definition?

Posted: 13 Mar 2013 12:18 AM PDT

I'm trying to find what is being traced by a server-side trace on a couple of SQL Servers I'm managing. The person who created these traces left before I started, and there are no clear notes about this. The trace logs to a file (I figured that out with this bit of T-SQL):

[code="sql"]
SELECT t.id
      ,CASE t.status WHEN 0 THEN 'Stopped' ELSE 'Running' END AS status
      ,t.path
      ,t.max_size
      ,t.stop_time
      ,t.max_files
      ,t.is_rollover
      ,t.is_shutdown
      ,t.is_default
      ,t.file_position
      ,t.start_time
      ,t.last_event_time
      ,t.event_count
FROM sys.traces AS t
WHERE t.is_rowset = 0;
[/code]

The trace in question is saving to files on disk and doesn't clean up (or get cleaned up) after itself, so I have to manually delete files to keep the disk from filling up... I don't want to stop the trace (yet) until I know what it's doing... Thanks, Jason A.
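For completeness, the definition can be recovered without stopping the trace: the events and columns it captures can be read back from the trace id with fn_trace_geteventinfo, joined to the catalog views for readable names, e.g.:

```sql
-- Reconstruct a server-side trace definition from its id (from sys.traces).
DECLARE @TraceID int = 2;   -- substitute the id found in sys.traces

SELECT e.name AS event_name,
       c.name AS column_name
FROM fn_trace_geteventinfo(@TraceID) AS i
JOIN sys.trace_events  AS e ON e.trace_event_id  = i.eventid
JOIN sys.trace_columns AS c ON c.trace_column_id = i.columnid
ORDER BY e.name, c.name;

-- Any filters defined on the trace:
SELECT * FROM fn_trace_getfilterinfo(@TraceID);
```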

Access issue

Posted: 12 Mar 2013 09:24 PM PDT

I have access to the Production domain, but I don't have access to the Development domain, meaning I cannot create an AD group or service account in the Dev environment. I tried installing SQL Server on the Dev servers. The SQL service accounts were created by someone who does have access to the Dev domain, but when I tried installing SQL Server it errored out with an 'Access denied' error. Is this because I don't have access to the Dev domain? Thanks in advance!

T-SQL which cause Table or Index Scan

Posted: 12 Mar 2013 11:00 PM PDT

I'm trying to find a comprehensive list of T-SQL constructs that will cause a table scan, such as:
1. SELECT * FROM xxx
2. WHERE xyz LIKE '%abc%'
etc., etc.

Slow query - optimisation help required!

Posted: 12 Mar 2013 08:48 PM PDT

I have the following query:

[code="sql"]
INSERT INTO dbo.Load9_PotentialOverlaps_
    (Master_Id, Master_GUID, Master_SubmissionID, Duplicate_Id, duplicate_GUID, Duplicate_SubmissionID, MatchKeyType)
SELECT a.id, a.GUID, a.SubmissionID, b.id, b.GUID, b.SubmissionID, 6
FROM dbo.All_keys_ AS a
INNER JOIN dbo.Load9_keys_ AS b ON a.idxMatchKey1 = b.idxMatchKey1
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.Load9_PotentialOverlaps_ AS c
                  WHERE c.duplicate_ID IN (a.id, b.id))
OPTION (MAXDOP 2);
[/code]

I've attached the estimated execution plan if anyone wants to look at it. The main tables used by the above query are as follows:

Table sizes:
[b]dbo.All_keys_ = 88 million rows
dbo.Load9_keys_ = 750 thousand rows[/b]

Using sp_whoisactive, I can see regular wait info of the type: [b](20ms)PAGEIOLATCH_SH:MyDatabase:1(*)[/b]

Wait stats from this morning (SQL Server was restarted last night around 9 PM):
[img]http://s15.postimage.org/xvj5k2zob/waitstats.jpg[/img]
[img]http://s15.postimage.org/q3xu7s657/All_Keys.jpg[/img]
[img]http://s24.postimage.org/4oe7dbdut/Load9_keys.jpg[/img]

Not able to modify existing maintenance plan?

Posted: 12 Mar 2013 10:57 PM PDT

Hi, a T-log backup failed, and no errors were written to the error log or the application log. When I try to edit the existing maintenance plan, the following error is displayed:

"Microsoft SQL Server Management Studio is unable to load this document. Error loading from XML. No further detailed error information can be specified for this problem because no Events object was passed where detailed error information can be stored."

Error: SQL Server Scheduled Job 'Tlog_Backup.Subplan_1' (0xDFA7450D1A79F54291B8D3F5AA540950) - Status: Failed - Invoked on: 2013-03-13 15:52:00 - Message: The job failed. The Job was invoked by Schedule 14 (tlog). The last step to run was step 1 (Subplan_1).

Please suggest how I can resolve this issue. Thanks, ananda

Login history

Posted: 12 Mar 2013 10:11 PM PDT

Hi friends, consider this: I have a login ABC, and I gave it db_datareader access to the XYZ database. But now that particular login has db_owner permission. Is there any way to find out when user ABC got the db_owner permission? Thanks in advance.
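If the change happened recently enough, one place to look is the default trace, which records the "Audit Add Member to DB Role" event; note the default trace rolls over, so older changes may already be gone. A sketch:

```sql
-- Look for role-membership changes captured by the default trace.
DECLARE @path nvarchar(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

SELECT t.StartTime,
       t.LoginName,        -- who made the change
       t.TargetUserName,   -- the user added to the role
       t.RoleName,
       t.DatabaseName
FROM fn_trace_gettable(@path, DEFAULT) AS t
JOIN sys.trace_events AS e ON e.trace_event_id = t.EventClass
WHERE e.name = 'Audit Add Member to DB Role Event'
ORDER BY t.StartTime;
```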

How SQL query result insert in to table?

Posted: 12 Mar 2013 08:46 PM PDT

Hi team, daily I take the size of the databases using "EXEC sp_databases" and update an Excel file manually:

[b]DATABASE_NAME | DATABASE_SIZE | REMARKS[/b]

Now I want to create a job to execute the above query and store the results in a table. The new table structure is:

[b]DATE_OF_EXECUTION | DATABASE_NAME | DATABASE_SIZE | REMARKS[/b]

Please help me with the query.
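One way to do this (the history table name is illustrative): sp_databases returns exactly three columns, so INSERT ... EXEC can capture them directly while a column default stamps the execution date:

```sql
-- History table; DATE_OF_EXECUTION fills itself via the default.
CREATE TABLE dbo.DatabaseSizeHistory
(
    DATE_OF_EXECUTION datetime     NOT NULL DEFAULT (GETDATE()),
    DATABASE_NAME     sysname      NOT NULL,
    DATABASE_SIZE     int          NULL,   -- reported in KB
    REMARKS           varchar(254) NULL
);

-- This statement is the whole body of the scheduled job step.
INSERT INTO dbo.DatabaseSizeHistory (DATABASE_NAME, DATABASE_SIZE, REMARKS)
EXEC sp_databases;
```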

SQL Server Templates

Posted: 12 Mar 2013 08:15 PM PDT

My own created SQL Server templates have suddenly disappeared from Template Explorer in SSMS. Does anyone know how I can find them? I tried looking in the directory, but they're not there:

C:\Users\XXX\AppData\Roaming\Microsoft\Microsoft SQL Server\100\Tools\Shell\Templates

Cross Join Trick

Posted: 12 Mar 2013 07:26 PM PDT

Create table Amount
(
    YoA int,
    Currency char(3),
    Amt decimal(9,2)
)

Create table Master
(
    YoA int,
    Currency char(3)
)

Insert into Amount
select 2008, 'CAD', 3400 UNION
select 2008, 'USD', 400  UNION
select 2009, 'CAD', 560  UNION
select 2010, 'USD', 6750

Insert into Master
select 2008, 'CAD' UNION
select 2008, 'USD' UNION
select 2009, 'CAD' UNION
select 2009, 'USD' UNION
select 2010, 'CAD' UNION
select 2010, 'USD'

Required output:
2008, CAD, 3400
2008, USD, 400
2009, CAD, 560
2009, USD, 0.00
2010, CAD, 0.00
2010, USD, 6750
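One way to produce the required output: since Master already contains every year/currency pair, a cross join isn't actually needed; a LEFT JOIN from Master with ISNULL to fill the gaps suffices:

```sql
-- Master supplies the complete year/currency grid; missing amounts become 0.
SELECT m.YoA,
       m.Currency,
       ISNULL(a.Amt, 0.00) AS Amt
FROM Master AS m
LEFT JOIN Amount AS a
       ON a.YoA = m.YoA
      AND a.Currency = m.Currency
ORDER BY m.YoA, m.Currency;
```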

When delimiter is part of the field - How to handle in SSIS

Posted: 12 Mar 2013 05:43 PM PDT

I have a CSV file with [b]COMMA (,)[/b] as the field delimiter. The package works without any issues for most source files, but fails when a field value in the file itself contains a comma. Note: I have no control over the source file. Is there a way to handle this in SSIS?

EXEC Master.dbo.xp_DirTree

Posted: 12 Mar 2013 05:41 PM PDT

Hello all. I use this command for my purposes:

EXEC Master.dbo.xp_DirTree 'D:\Reports', 1, 1

Now I want to list only the *.rpx files that exist in my folder (Reports). How do I do this?
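A sketch of one way: with the third parameter set to 1, xp_DirTree also returns files, so its output can be captured into a temp table and filtered on extension (the third result column indicates whether the entry is a file):

```sql
-- Capture xp_DirTree output (subdirectory, depth, file flag) and filter.
CREATE TABLE #dir
(
    subdirectory nvarchar(260),
    depth        int,
    isfile       bit
);

INSERT INTO #dir (subdirectory, depth, isfile)
EXEC Master.dbo.xp_DirTree 'D:\Reports', 1, 1;

SELECT subdirectory AS rpx_file
FROM #dir
WHERE isfile = 1
  AND subdirectory LIKE '%.rpx';

DROP TABLE #dir;
```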

retrive data from path on another computer

Posted: 12 Mar 2013 04:06 PM PDT

Hello all. I have some .rpx files on a drive on a server, and I want to select the .rpx file names into SQL Server. How do I retrieve these file names with T-SQL?

Attempt to fetch logical page (1:440) in database 2 failed. It belongs to allocation unit 422212869292032 not to 6269010747738816512.

Posted: 05 Mar 2013 04:23 PM PST

Hi all, one of my stored procedures gives the following error sometimes (not every time):

[b]Attempt to fetch logical page (1:440) in database 2 failed. It belongs to allocation unit 422212869292032 not to 6269010747738816512.[/b]

Can anyone help with this?

copying table with filegroups and partitions in DW server

Posted: 12 Mar 2013 10:41 AM PDT

DB Size - 4tbTable Size - 160gbPartitioned - YesFGs - 159row count - 205,363,396indexes - 1CI, 4NCIPartitioned code:[code="sql"]USE [xx]GO/****** Object: PartitionScheme [PS_PRTN_ID_159] Script Date: 3/12/2013 6:36:32 PM ******/CREATE PARTITION SCHEME [PS_PRTN_ID_159] AS PARTITION [PFN_PRTN_ID_159] TO ([FG_PRTN_159_P1], [FG_PRTN_159_P2], [FG_PRTN_159_P3], [FG_PRTN_159_P4], [FG_PRTN_159_P5], [FG_PRTN_159_P6], [FG_PRTN_159_P7], [FG_PRTN_159_P8], [FG_PRTN_159_P9], [FG_PRTN_159_P10], [FG_PRTN_159_P11], [FG_PRTN_159_P12], [FG_PRTN_159_P13], [FG_PRTN_159_P14], [FG_PRTN_159_P15], [FG_PRTN_159_P16], [FG_PRTN_159_P17], [FG_PRTN_159_P18], [FG_PRTN_159_P19], [FG_PRTN_159_P20], [FG_PRTN_159_P21], [FG_PRTN_159_P22], [FG_PRTN_159_P23], [FG_PRTN_159_P24], [FG_PRTN_159_P25], [FG_PRTN_159_P26], [FG_PRTN_159_P27], [FG_PRTN_159_P28], [FG_PRTN_159_P29], [FG_PRTN_159_P30], [FG_PRTN_159_P31], [FG_PRTN_159_P32], [FG_PRTN_159_P33], [FG_PRTN_159_P34], [FG_PRTN_159_P35], [FG_PRTN_159_P36], [FG_PRTN_159_P37], [FG_PRTN_159_P38], [FG_PRTN_159_P39], [FG_PRTN_159_P40], [FG_PRTN_159_P41], [FG_PRTN_159_P42], [FG_PRTN_159_P43], [FG_PRTN_159_P44], [FG_PRTN_159_P45], [FG_PRTN_159_P46], [FG_PRTN_159_P47], [FG_PRTN_159_P48], [FG_PRTN_159_P49], [FG_PRTN_159_P50], [FG_PRTN_159_P51], [FG_PRTN_159_P52], [FG_PRTN_159_P53], [FG_PRTN_159_P54], [FG_PRTN_159_P55], [FG_PRTN_159_P56], [FG_PRTN_159_P57], [FG_PRTN_159_P58], [FG_PRTN_159_P59], [FG_PRTN_159_P60], [FG_PRTN_159_P61], [FG_PRTN_159_P62], [FG_PRTN_159_P63], [FG_PRTN_159_P64], [FG_PRTN_159_P65], [FG_PRTN_159_P66], [FG_PRTN_159_P67], [FG_PRTN_159_P68], [FG_PRTN_159_P69], [FG_PRTN_159_P70], [FG_PRTN_159_P71], [FG_PRTN_159_P72], [FG_PRTN_159_P73], [FG_PRTN_159_P74], [FG_PRTN_159_P75], [FG_PRTN_159_P76], [FG_PRTN_159_P77], [FG_PRTN_159_P78], [FG_PRTN_159_P79], [FG_PRTN_159_P80], [FG_PRTN_159_P81], [FG_PRTN_159_P82], [FG_PRTN_159_P83], [FG_PRTN_159_P84], [FG_PRTN_159_P85], [FG_PRTN_159_P86], [FG_PRTN_159_P87], [FG_PRTN_159_P88], [FG_PRTN_159_P89], 
[FG_PRTN_159_P90], [FG_PRTN_159_P91], [FG_PRTN_159_P92], [FG_PRTN_159_P93], [FG_PRTN_159_P94], [FG_PRTN_159_P95], [FG_PRTN_159_P96], [FG_PRTN_159_P97], [FG_PRTN_159_P98], [FG_PRTN_159_P99],
[FG_PRTN_159_P100], [FG_PRTN_159_P101], [FG_PRTN_159_P102], [FG_PRTN_159_P103], [FG_PRTN_159_P104], [FG_PRTN_159_P105], [FG_PRTN_159_P106], [FG_PRTN_159_P107], [FG_PRTN_159_P108], [FG_PRTN_159_P109],
[FG_PRTN_159_P110], [FG_PRTN_159_P111], [FG_PRTN_159_P112], [FG_PRTN_159_P113], [FG_PRTN_159_P114], [FG_PRTN_159_P115], [FG_PRTN_159_P116], [FG_PRTN_159_P117], [FG_PRTN_159_P118], [FG_PRTN_159_P119],
[FG_PRTN_159_P120], [FG_PRTN_159_P121], [FG_PRTN_159_P122], [FG_PRTN_159_P123], [FG_PRTN_159_P124], [FG_PRTN_159_P125], [FG_PRTN_159_P126], [FG_PRTN_159_P127], [FG_PRTN_159_P128], [FG_PRTN_159_P129],
[FG_PRTN_159_P130], [FG_PRTN_159_P131], [FG_PRTN_159_P132], [FG_PRTN_159_P133], [FG_PRTN_159_P134], [FG_PRTN_159_P135], [FG_PRTN_159_P136], [FG_PRTN_159_P137], [FG_PRTN_159_P138], [FG_PRTN_159_P139],
[FG_PRTN_159_P140], [FG_PRTN_159_P141], [FG_PRTN_159_P142], [FG_PRTN_159_P143], [FG_PRTN_159_P144], [FG_PRTN_159_P145], [FG_PRTN_159_P146], [FG_PRTN_159_P147], [FG_PRTN_159_P148], [FG_PRTN_159_P149],
[FG_PRTN_159_P150], [FG_PRTN_159_P151], [FG_PRTN_159_P152], [FG_PRTN_159_P153], [FG_PRTN_159_P154], [FG_PRTN_159_P155], [FG_PRTN_159_P156], [FG_PRTN_159_P157], [FG_PRTN_159_P158], [FG_PRTN_159_P159])
GO
[/code]

This is a DW server and we are planning to add additional columns, which we would like to test in the test environment. The question is: what is the best way to emulate production, i.e. have the partitions and filegroups in place and populate the table? Copying the table structure and moving the data will be simple, but I am trying to find a less time-consuming way to move the partitions (if possible) and filegroups (if possible). Any input will be greatly appreciated.

Read & Write Permissions on SQL Server Agent Service startup account

Posted: 18 Nov 2012 10:17 PM PST

How do I add read and write permissions on the Temp directory for the SQL Server Agent Service startup account? Many thanks, Stuart.

SQL AGENT IS NOT ABLE TO START....

Posted: 12 Mar 2013 03:31 AM PDT

Hi, can anyone help me? The SQL Agent is not able to start. I'm getting the error "The process terminated unexpectedly [0x8007042b]" on my development server. I installed one more instance, and the same issue occurs for it as well. I am also not able to change the SQL Agent service user in Configuration Manager.

query performance

Posted: 04 Mar 2013 08:53 AM PST

I have a table with 6 million rows. The table has around 200 columns. The data can be sliced and diced by the user in a number of ways. Are there any tips for improving query performance other than adding indexes? Currently some of the queries take around 40 seconds. Thanks, KK

SQL concatenate

Posted: 12 Mar 2013 05:45 AM PDT

I have a SQL question... Picture 3 tables in this relationship:

Table A – Column1 (PK), Column2 (Identity)
Table B – Column1 (PK), Column2 (Identity); sample data for column 2 is a_id, c_id
Table C – Column1 (PK), Column2 (Identity); sample data for column 2 is integer information

A row of A can have 1 to many rows of B. Each row of B must refer to one row of C.

What I want to do is set up this result. Assume that A.identity = 1, that B.c_id refers to C.identity = 1, and that C.integer_information = 100. Assume that another row has A.identity = 2 and there are three rows of B referring to C.integer_information values of 100, 200 and 300. I need this result:

a.Identity | Result of c.integer_information
1 | 100
2 | 100,200,300

I have this much SQL:

Select A.identity
From A
Inner join B on B.a_id = a.identity
Inner join C on c.identity = b.c_id

How do I concatenate the values above with no comma if there is 1 value and commas between multiple values?
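On SQL Server 2008 the usual trick is a correlated FOR XML PATH('') subquery, with STUFF removing the leading comma; a single value then comes out with no comma at all. A sketch using the names described above (identity, a_id, c_id, integer_information are taken from the post; the varchar length is an assumption):

```sql
-- Build the comma-separated list per row of A.
SELECT a.[identity],
       STUFF((SELECT ',' + CAST(c.integer_information AS varchar(20))
              FROM B AS b
              JOIN C AS c ON c.[identity] = b.c_id
              WHERE b.a_id = a.[identity]
              FOR XML PATH('')), 1, 1, '') AS [Result]
FROM A AS a;
```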

data staging (reading BAK files without restoring)

Posted: 12 Mar 2013 06:35 AM PDT

Hello, has anyone heard of or used a third-party tool to import data into a staging area from a BAK file without conducting a full DB restore? Red Gate had SQL Virtual Restore, which has been retired. Is there another way to get a table subset off a FULL backup without redesigning the source DB filegroups? I am an ETL staging developer with no control over the source systems and want to stage only a small part of my BAK file without a full restore. Ideas? John

Track SQL severity levels in SCOM?

Posted: 12 Mar 2013 05:50 AM PDT

I'm trying to set up alerts and/or email notifications in SCOM 2007 that are triggered by SQL Server (2005 & 08) severity levels 19-25 and errors 823-825. Several DB servers at my organization cannot send Database Mail due to compliance issues, so I need another way to send alerts. Does anyone know if this can be accomplished in SCOM 2007? I can only find SCOM severity levels 0, 1, and 2 in FMSQLOPERATIONS.alert.vAlert. Maybe I can log these or pull them from the SQL boxes' Windows application logs using SCOM. Does anybody have any experience with this?

SQL Server error - Cannot Open User Default Database error 4064

Posted: 12 Mar 2013 01:36 AM PDT

Any idea on this? The user is part of Active Directory and uses Windows authentication, yet still gets this error... any thoughts? master is the default DB. From the SQL error log: error 18456, severity 14, state 16. Thanks.

sp_updatestats and the default sampling rate

Posted: 04 Mar 2013 08:51 AM PST

Using SQL Server 2008 R2. As my production database approaches 500GB, and can potentially grow to 1TB within 2 years, the issue of sp_updatestats (run after the re-indexing part of the maintenance step) using the default sampling rate, which can potentially skew performance, bothers me. There was a discussion about this earlier at this link: http://www.sqlservercentral.com/Forums/Topic1310877-146-2.aspx, but I am still confused about using sp_updatestats.

Currently I am performing the following steps during weekend maintenance:

1. ALTER DATABASE MyDB SET RECOVERY SIMPLE, AUTO_CREATE_STATISTICS OFF, AUTO_UPDATE_STATISTICS OFF

2. My index maintenance routine is based on the following criteria: I reindex clustered and non-clustered indexes when avg_page_space_used_in_percent < 75 and avg_fragmentation_in_percent > 10 and page_count > 500. Of those selected, if fragmentation <= 30, I reorganize those indexes; if fragmentation > 30, I rebuild them. So at the end of the reindex maintenance, all non-clustered and clustered indexes have been either rebuilt or reorganized.

3. Currently I run the statistics update on the whole database after the previous reindex step: [b]sp_updatestats[/b]. Since sp_updatestats updates statistics using the default sampling rate, it can possibly degrade the statistics on all my indexes right after the reindex routine has run.

[b]3A.[/b] So I was thinking about "... update statistics for all tables which are defragged and don't run update statistics for all tables which are REBUILD" (http://sqlserverpedia.com/wiki/Updating_Statistics). Here is my logic for performing routine [b]3A[/b]: select the indexes which were REORGANIZEd (no statistics update) during the reindex maintenance, along with any other indexes whose statistics have not been updated for the last, say, few weeks, and run the following against the indexes selected above:

UPDATE STATISTICS Table_Name (IndexName) WITH FULLSCAN, NORECOMPUTE

By running the above I would be able to update statistics without running sp_updatestats.

4. ALTER DATABASE MyDB SET RECOVERY FULL, AUTO_CREATE_STATISTICS ON, AUTO_UPDATE_STATISTICS ON

Please let me know if you have any comments, suggestions, or recommendations on [b]step 3A[/b]. There was an earlier suggestion to run:

USE dbname;
EXEC sys.sp_MSforeachtable @command1 = N'UPDATE STATISTICS ? WITH FULLSCAN, COLUMNS;';

But unfortunately that takes far too long on my production database, given the time frame I have for weekend maintenance.
