[SQL Server 2008 issues] Identifying connecting linked servers
- Identifying connecting linked servers
- DBCC checkdb lifecycle
- Integration Services (BIDS)
- SSAS dimension filter
- SSRS, how to logically combine related reports
- Count number of leading characters
- tempdb data files - move/initial size
- Looking for ways to speed up complex query with frequent changes to underlying data
- Can you create an index on a system table?
- dm_exec_query_stats vs. dm_exec_procedure_stats
- Shrink Database File and Fragmentation?
- Boss keeps creating views as a quick fix
- Log Shipping: Log file restore time increased (???)
- Investigate mail service notifications on DB server
- dbcc checkdb('DB-NAME', REPAIR_ALLOW_DATA_LOSS) didn't fix the issue
- Assign values: conditional case when
- Can anyone give a reply for this: The TCP/IP connection to the host INA2, port 1433 has failed
Identifying connecting linked servers Posted: 09 Sep 2013 07:11 PM PDT Hello, I think the answer is 'no', but does anyone know of a way to identify, from a particular instance, the linked servers that connect to it? I want to change a login password on my instance, and I think there are other instances on my LAN which have linked server connections to mine using this login; if I change the password they will fail. If I can identify them ahead of my change, I can take action to stop them failing (put in an alias). I wasn't sure if there was a log or metadata recording connections which I could query to gain this knowledge. It may be possible to use Registered Servers and run a common query over all the instances I already know about, but I neither know the syntax for that type of query nor all the instances on my LAN. Failing any prior testing, I'll have to resort to making the change and picking up failures as they occur, but that doesn't seem very professional. Any thoughts?
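By default there is no ready-made history of incoming connections to query, but incoming linked server calls show up as ordinary sessions while they are active, so sampling the session DMVs for that login can at least reveal which hosts are using it. A rough sketch, where 'LinkedLogin' is a placeholder for the login whose password is changing:

[code="sql"]
-- Rough sketch: which hosts currently have sessions open under this login.
-- 'LinkedLogin' is a placeholder for the login you plan to change.
SELECT s.login_name,
       s.host_name,          -- machine the connection came from
       s.program_name,       -- linked server calls usually show a provider/driver name here
       c.client_net_address,
       s.login_time
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_connections AS c
    ON c.session_id = s.session_id
WHERE s.login_name = 'LinkedLogin'
ORDER BY s.login_time DESC;
[/code]

This only shows connections open at the moment you look, so it would need to be sampled over time (or supplemented with a server-side trace or logon trigger) to catch intermittent callers.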
DBCC checkdb lifecycle Posted: 25 Aug 2013 07:06 PM PDT I first ran DBCC CHECKDB on an instance and it reported around 2000 consistency errors. I ran REPAIR_REBUILD and it didn't fix any errors. Then I ran DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS and it fixed around 1900 consistency errors. I switched the database into multi-user mode again and am running CHECKDB again. Is there a way I can fix the remaining 100-odd errors?
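For reference, a minimal sketch of the repair cycle being described, with 'YourDb' as a placeholder database name; REPAIR_ALLOW_DATA_LOSS needs single-user mode and can discard data, so a backup beforehand is assumed.

[code="sql"]
-- Minimal sketch of the repair cycle ('YourDb' is a placeholder).
-- REPAIR_ALLOW_DATA_LOSS can discard data, so take a backup first.
ALTER DATABASE YourDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

DBCC CHECKDB ('YourDb', REPAIR_ALLOW_DATA_LOSS);

ALTER DATABASE YourDb SET MULTI_USER;

-- Re-check and list every remaining error rather than the default summary.
DBCC CHECKDB ('YourDb') WITH NO_INFOMSGS, ALL_ERRORMSGS;
[/code]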
Integration Services (BIDS) Posted: 09 Sep 2013 08:33 AM PDT I have a flat file whose rows and columns need to be reversed (transposed). Can BIDS handle that? Thanks, F
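SSIS does ship Pivot and Unpivot transformations, but if the flat file is first loaded into a staging table the same transposition can be sketched in T-SQL. Everything below (dbo.FlatFileStage, its column names, the two RowId values) is an assumption for illustration only.

[code="sql"]
-- Hedged sketch: transpose a staged flat file (rows become columns, columns become rows).
-- dbo.FlatFileStage and its columns are invented; all value columns are assumed varchar.
SELECT ColName,
       [1] AS Row1,
       [2] AS Row2
FROM (
    SELECT RowId, ColName, ColValue
    FROM dbo.FlatFileStage
    UNPIVOT (ColValue FOR ColName IN (Col1, Col2, Col3)) AS u
) AS src
PIVOT (MAX(ColValue) FOR RowId IN ([1], [2])) AS p;
[/code]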
SSAS dimension filter Posted: 09 Sep 2013 04:18 PM PDT Hi all, I am working on an SSAS project and am stuck on the following issue. The cube currently has ProductCategory, Product, SalesOrder and Time dimensions, and two fact measures: sum of products and total sales. The Product dimension has one attribute for user experience, i.e. how many months of user experience the product has. The schema is: ProductId, ProductName, ProductCatId (used for the hierarchy), monthofexp with values like 0, 1, 2, 3, 6, ... 24, etc. I need to prepare a report along the following lines: up to 6 months' experience = sum (total number of products); 6 to 12 months' experience = sum (total number of products); ...; 24 to 36 months' experience = sum (total number of products); and so on. I'd really appreciate any help finding a way to do this.
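One possible approach (a sketch, not the only way): add a banding column, for example as a named calculation in the data source view, and expose it as a Product attribute. The table name dbo.Product and the exact band edges below are assumptions based on the post.

[code="sql"]
-- Hedged sketch: derive an experience band from monthofexp so it can be used as a
-- dimension attribute. dbo.Product and the band edges are assumptions.
SELECT ProductId,
       ProductName,
       ProductCatId,
       monthofexp,
       CASE
           WHEN monthofexp <= 6  THEN '0-6 months'
           WHEN monthofexp <= 12 THEN '6-12 months'
           WHEN monthofexp <= 24 THEN '12-24 months'
           ELSE '24-36 months'
       END AS ExperienceBand
FROM dbo.Product;
[/code]

With ExperienceBand as a Product attribute, a product count measure sliced by that attribute gives the "up to 6 months = N products" style rows the report asks for.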
SSRS, how to logically combine related reports Posted: 09 Sep 2013 01:58 PM PDT Hi, I have to produce four reports from pretty much the same source, just with different values and ranges; they are all related but have different columns. I'm thinking about how I can combine them into one .rdl. Can I use a map to create four pages, with a separate data source, title and tablix for each page? Is that possible, or is there something else I can use? I don't want to deal with four separate rdl-setup-deploy-VSS cycles, etc. Thanks, Mario
Count number of leading characters Posted: 09 Sep 2013 02:12 AM PDT Hello all. I need a query that will give me the number of leading spaces in a string. For instance, in this string [code="sql"]' =10 02=5608= '[/code] I need to know how many spaces are at the front of it. Since there are also spaces in the middle and at the end of the string, I cannot simply count all the spaces. Thank you for your help.
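A common trick, shown as a sketch: LEN() ignores trailing spaces in both terms, so subtracting the length of the LTRIM'd value leaves just the leading run. The sample value below is made up, with three leading spaces.

[code="sql"]
-- Sketch: count only the leading spaces. LEN() ignores trailing spaces in both terms,
-- so the difference is exactly the number of leading spaces. @s is a made-up sample.
DECLARE @s varchar(50) = '   =10 02=5608=   ';
SELECT LEN(@s) - LEN(LTRIM(@s)) AS LeadingSpaceCount;   -- returns 3 for this sample
[/code]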
tempdb data files - move/initial size Posted: 09 Sep 2013 07:43 AM PDT I discovered that we have a couple of data files assigned to our tempdb. The primary data file (tempdev.mdf) is on the D: drive, but there is a secondary data file (tempdev_2.ndf) on C:. The server locked up over the weekend and I suspect that the secondary data file grew to exceed the available hard drive space. My plan is to move the secondary file to the D: drive, where there is more space. I see the best practices recommend that the data files be the same size and that we should have one data file per CPU core. Currently the primary data file is set to an initial size of 3 GB with autogrowth of 10%; the secondary file is set to an initial size of 3 MB with autogrowth of 1 MB. I think 3 GB is a reasonable size for tempdb based on the current load, but I'm wondering if I should split it up into multiple data files with smaller initial sizes. What should I be checking to determine the number of files? The system is running SQL Server 2008 R2 with 8 cores and 32 GB RAM.
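For reference, a hedged sketch of checking and then moving/resizing the secondary file. The logical file name and the D:\ path below are assumptions (take the real ones from the first query), and the move only takes effect after the SQL Server service is restarted.

[code="sql"]
-- Check the current tempdb files, sizes and growth settings first.
SELECT name, physical_name, size * 8 / 1024 AS size_mb, growth, is_percent_growth
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');

-- Hedged sketch: repoint the secondary file to D: and give it a fixed size and growth.
-- 'tempdev_2' and the D:\ path are assumptions; takes effect on the next restart.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev_2,
             FILENAME = 'D:\SQLData\tempdev_2.ndf',
             SIZE = 3GB,
             FILEGROWTH = 512MB);
[/code]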
Looking for ways to speed up complex query with frequent changes to underlying data Posted: 09 Sep 2013 04:50 AM PDT I've been asked to optimize a query in use in a CRM system. The query generates a list of contacts for telesales agents to browse, generated from an underlying master contacts list of approx 750k records. Contacts are assigned to Sources, and Sources are assigned to Campaigns. The basic query gets all Contacts assigned to a Campaign. As Contacts can be assigned to multiple Sources, and a Campaign may also have multiple Sources, it's common to have many duplicate contacts for a single campaign. There are also several different types of exclusion list which remove contacts based on email address, telephone number, number of previous contacts, etc. These are matched against the generated list using WHERE xxx NOT IN clauses. A simplified version of the query is here: https://gist.github.com/andybellenie/6498720 (there are also search filters on a lot of the columns, but I've not included those in the gist). The requirements are:
1. Exclusion lists are updated every minute, so the results need to be accurate
2. Each request needs to fetch both an accurate count and the first 100 records with pagination
3. Many agents access the query
4. There are approx 10 filter fields which apply to columns not included in the gist
So, what I'm thinking is to avoid getting the DB to do all of this work on each request, and instead create a separate table to store a de-normalised result set and simply update that every time one of the underlying tables or exclusion lists changes. I can write triggers to handle all of the updating directly in the DB and keep it well away from the application itself. I've not used this approach before, so I wanted to get some thoughts from you guys before getting too buried in it. Thanks!
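A minimal sketch of the trigger-maintained, denormalised result table idea described in the post; every object name here (CampaignContactCache, EmailExclusions, the Email column) is invented for illustration.

[code="sql"]
-- Hedged sketch of the trigger-maintained result table; all object names are invented.
CREATE TABLE dbo.CampaignContactCache
(
    CampaignId int          NOT NULL,
    ContactId  int          NOT NULL,
    Email      varchar(255) NULL,
    CONSTRAINT PK_CampaignContactCache PRIMARY KEY (CampaignId, ContactId)
);
GO

-- When an address is added to a (hypothetical) email exclusion list,
-- drop the matching rows from the cache so agents never see them.
CREATE TRIGGER trg_EmailExclusions_Sync
ON dbo.EmailExclusions
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    DELETE c
    FROM dbo.CampaignContactCache AS c
    INNER JOIN inserted AS i
        ON i.Email = c.Email;
END;
GO
[/code]

Comparable triggers (or a scheduled refresh) would be needed on the contact, source and campaign tables, and the cache would want indexes that match the count-plus-first-100 pagination pattern.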
Can you create an index on a system table? Posted: 09 Sep 2013 03:16 AM PDT I've been looking into Change Data Capture and it doesn't even look like a Primary Key gets created for the system table that's written to. Can you add an index to system tables? I've never even had to try this before, but now I'm curious.
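For what it's worth, CDC change tables are created in the cdc schema rather than among the true system tables, so you can at least inspect what indexing CDC already put on them. A hedged sketch, where 'cdc.dbo_MyTable_CT' is a placeholder for your capture instance's change table:

[code="sql"]
-- Hedged sketch: list the indexes that exist on a CDC change table.
-- 'cdc.dbo_MyTable_CT' is a placeholder name.
SELECT i.name          AS index_name,
       i.type_desc,
       i.is_unique,
       c.name          AS column_name,
       ic.key_ordinal
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
    ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
    ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('cdc.dbo_MyTable_CT')
ORDER BY i.index_id, ic.key_ordinal;
[/code]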
dm_exec_query_stats vs. dm_exec_procedure_stats Posted: 08 Jan 2013 02:21 AM PST Hi. I ran several queries this morning to gather a baseline of performance data on a system. dm_exec_query_stats has 840k records; dm_exec_procedure_stats has about 500. Does this mean that all statements in procedure_stats are 'echoed' in query_stats? My intention is to gather stats related to procs and queries [dynamic sql]. Suggestions? I wish to take a baseline to compare to stats obtained later. Links, thoughts, suggestions appreciated.
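The counts differ because sys.dm_exec_procedure_stats has one row per cached procedure plan, while sys.dm_exec_query_stats has one row per cached statement, including ad hoc and dynamic SQL, so statements inside procedures do appear in both. A sketch of a simple baseline capture (top consumers by CPU; adjust the ORDER BY to taste):

[code="sql"]
-- Sketch of a simple baseline: top procedures by CPU, plus the statement-level view
-- with its text. Capture it now, capture it again later, and diff the two snapshots.
SELECT TOP (50)
       DB_NAME(ps.database_id)                   AS db_name,
       OBJECT_NAME(ps.object_id, ps.database_id) AS proc_name,
       ps.execution_count,
       ps.total_worker_time,
       ps.total_logical_reads
FROM sys.dm_exec_procedure_stats AS ps
ORDER BY ps.total_worker_time DESC;

SELECT TOP (50)
       qs.execution_count,
       qs.total_worker_time,
       qs.total_logical_reads,
       SUBSTRING(st.text, 1, 200) AS statement_start
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
[/code]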
Shrink Database File and Fragmentation? Posted: 09 Sep 2013 01:58 AM PDT I have an 800 GB data file that I want to shrink by adding another filegroup/data file and moving 300 GB of tables/indexes into the new file. After moving the objects, I planned on shrinking the 800 GB file in increments. Are there any negative consequences to shrinking 300 GB of space? My ultimate goal is to stop the file from growing any larger and to speed up the nightly backups, but I don't want to cause any performance issues from shrinking the data file, as it currently runs perfectly for users. Can someone explain what to expect from shrinking such a large amount and whether fragmentation will occur?
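Shrinking moves pages from the end of the file toward the front with no regard for index order, so a large shrink will normally leave indexes heavily fragmented and they will likely need rebuilding afterwards (which can regrow the file somewhat). A sketch of shrinking in increments and then measuring the damage; the logical file name 'MyData' is a placeholder:

[code="sql"]
-- Hedged sketch: shrink in modest increments rather than one huge operation.
-- 'MyData' is a placeholder logical file name; the target size is in MB.
DBCC SHRINKFILE (N'MyData', 750000);   -- repeat, lowering the target step by step

-- Afterwards, see which indexes the shrink fragmented (and probably need rebuilding).
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;
[/code]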
Boss keeps creating views as a quick fix Posted: 09 Sep 2013 12:16 AM PDT Good morning everyone. This is my first post in the forums, but I find the emails very informative. I have an issue and am wondering what others have done to resolve it. My manager is an ex Classic ASP/Access developer. My team of programmer analysts has converted everything to ASP.NET/Silverlight with a SQL Server back end. We're in the healthcare field, so things are constantly changing and submission deadlines are always around the corner. My manager still has an Access mindset with limited T-SQL knowledge. He creates views on top of views in order to get what he needs done (the Access mindset: run a make-table query to use in another query). He says it's a temporary fix to get what he needs, but since he doesn't go back to clean them up, they become permanent. We've tried to get him to stop and just ask us to create what he needs, but that doesn't work. Any advice would be great!
Log Shipping: Log file restore time increased (???) Posted: 09 Sep 2013 02:16 AM PDT We have log shipping set up for our primary CIS database. Everything has been humming along fine, with the restore of each log file into the secondary database completing in 80-90 seconds. For some reason that I've yet to determine, these log file restores are now consuming 860-870 seconds, which has caused a bit of a backlog. The issue does not appear to be related to the size of the log file. A lot of the time seems to be spent in the 'redo' phase of the restore: 2013-09-09 08:25:40.90 spid64 Recovery completed for database cisprod (database ID 6) in 866 second(s) (analysis 9 ms, redo 865900 ms, undo 42 ms.) This is an informational message only. No user action is required. Has anybody seen anything like this before? Thanks, D
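One way to rule log backup size in or out is to compare recent restore times on the secondary with the size of each restored backup recorded in msdb. A sketch; the database name comes from the log message above:

[code="sql"]
-- Hedged sketch: recent log restores on the secondary with the size of each backup,
-- to check whether the slow restores line up with unusually large log backups.
SELECT TOP (50)
       rh.destination_database_name,
       rh.restore_date,
       rh.restore_type,                      -- 'L' = log restore
       bs.backup_size / 1048576.0 AS backup_size_mb,
       bs.backup_start_date,
       bs.backup_finish_date
FROM msdb.dbo.restorehistory AS rh
JOIN msdb.dbo.backupset      AS bs
    ON bs.backup_set_id = rh.backup_set_id
WHERE rh.destination_database_name = 'cisprod'
ORDER BY rh.restore_date DESC;
[/code]

If the backup sizes are steady, a long redo phase usually points at the amount of change being rolled forward (index maintenance, bulk loads) or at slower I/O on the secondary.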
Investigate mail service notifications on DB server Posted: 09 Sep 2013 12:27 AM PDT Hi friends, I am looking to investigate mail notifications coming from our SQL Servers. I want to identify which servers are using mail services, and I also want to migrate them from one environment to another. Does anyone have any ideas? Replies are welcome. Cheers
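If "mail services" here means Database Mail, msdb on each instance records the configured accounts and recent activity, so something like the sketch below (run per server) can show which instances actually use mail before the migration. The objects referenced are the standard msdb Database Mail tables.

[code="sql"]
-- Hedged sketch: Database Mail accounts and their SMTP servers on this instance.
SELECT a.name       AS account_name,
       a.email_address,
       s.servername AS smtp_server,
       s.port
FROM msdb.dbo.sysmail_account AS a
JOIN msdb.dbo.sysmail_server  AS s
    ON s.account_id = a.account_id;

-- Recent mail actually sent, as evidence that the instance really uses mail.
SELECT TOP (20) sent_date, recipients, subject
FROM msdb.dbo.sysmail_sentitems
ORDER BY sent_date DESC;
[/code]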
dbcc checkdb('DB-NAME', REPAIR_ALLOW_DATA_LOSS) didn't fix the issue Posted: 08 Sep 2013 08:20 PM PDT I ran dbcc checkdb('DB-NAME', REPAIR_ALLOW_DATA_LOSS) and it didn't fix the inconsistencies. Now I may need to restore from the previous full backup. But my question is: will the backup file be free of these inconsistency errors?
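If the corruption already existed when the full backup was taken, the backup will contain it too; the only way to be sure is to restore a copy elsewhere and run CHECKDB against it. A hedged sketch, where every path and name is a placeholder:

[code="sql"]
-- Hedged sketch: all names and paths are placeholders.
-- 1) Quick sanity check of the backup file itself.
RESTORE VERIFYONLY FROM DISK = N'D:\Backups\DB-NAME_full.bak';

-- 2) Find the logical file names to use in the MOVE clauses.
RESTORE FILELISTONLY FROM DISK = N'D:\Backups\DB-NAME_full.bak';

-- 3) Restore a copy under a different name and check it: only a clean DBCC CHECKDB
--    on the restored copy proves the backup does not contain the same corruption.
RESTORE DATABASE [DB-NAME_verify]
FROM DISK = N'D:\Backups\DB-NAME_full.bak'
WITH MOVE 'DB-NAME'     TO N'D:\SQLData\DB-NAME_verify.mdf',
     MOVE 'DB-NAME_log' TO N'D:\SQLData\DB-NAME_verify_log.ldf';
GO
DBCC CHECKDB ('DB-NAME_verify') WITH NO_INFOMSGS, ALL_ERRORMSGS;
[/code]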
Assign values: conditional case when Posted: 09 Mar 2013 01:39 PM PST My data are arranged like this:

[code="sql"]
IF OBJECT_ID('TempDB..#Table1') IS NOT NULL DROP TABLE #Table1
--===== Create the test table
CREATE TABLE #Table1 (Name1 char(5), Name2 char(5), Total int)

INSERT INTO #Table1 (Name1, Name2, Total)
SELECT 'X1','Y1',8    UNION ALL
SELECT 'X2','Y2',38   UNION ALL
SELECT 'X3','Y3',2    UNION ALL
SELECT 'X4','Y4',29   UNION ALL
SELECT 'X4','Y5',18   UNION ALL
SELECT 'X4','Y6',7    UNION ALL
SELECT 'X4','Y7',10   UNION ALL
SELECT 'X5','Y8',4    UNION ALL
SELECT 'X5','Y9',80   UNION ALL
SELECT 'X5','Y10',32  UNION ALL
SELECT 'X5','Y11',93  UNION ALL
SELECT 'X6','Y12',54  UNION ALL
SELECT 'X6','Y13',22  UNION ALL
SELECT 'X6','Y14',68  UNION ALL
SELECT 'X7','Y15',6   UNION ALL
SELECT 'X7','Y16',9   UNION ALL
SELECT 'X7','Y17',100 UNION ALL
SELECT 'X8','Y18',3

SELECT * FROM #Table1
[/code]

I am trying to assign values (0 or 1) in columns D through G based on the following logic. The first two CASE WHEN conditions work; the last two are not working and assign wrong values. The desired outcome, for example for Name1 = X7, is:

Name1  Name2  Total  Name1>100&Name2<25  Name1<100&Name2<25  Max of Name1>100&Name2<25  Max of Name1<100&Name2<25
X7     Y15    6      1                   0                   1                          0
X7     Y16    9      1                   0                   0                          0
X7     Y17    100    0                   0                   0                          0

[code="sql"]
SELECT name1, name2, total,
       CASE WHEN (SUM(total) OVER (PARTITION BY name1) > 100) AND total < 25
            THEN 1 ELSE 0 END AS [Name1>100&Name2<25],          -- works
       CASE WHEN (SUM(total) OVER (PARTITION BY name1) < 100) AND total < 25
            THEN 1 ELSE 0 END AS [Name1<100&Name2<25],          -- works
       CASE WHEN (SUM(total) OVER (PARTITION BY name1) > 100)
                 AND (MIN(total) OVER (PARTITION BY name1) < 25)
            THEN 1 ELSE 0 END AS [Max of Name1>100&Name2<25],   -- not working
       CASE WHEN (SUM(total) OVER (PARTITION BY name1) < 100)
                 AND (MIN(total) OVER (PARTITION BY name1) < 25)
            THEN 1 ELSE 0 END AS [Max of Name1<100&Name2<25]    -- not working
FROM #Table1
GROUP BY name1, total, name2
ORDER BY name1, name2, total
[/code]

Any suggestions? Thank you for your help, Helal
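A hedged reading of the desired output: the last two columns should flag only the row holding the group's minimum Total (when it is under 25), not every row of a qualifying group, so the window MIN needs comparing against the current row's Total. A possible rewrite, assuming the #Table1 setup above has been run:

[code="sql"]
-- Hedged rewrite: flag only the row whose Total equals the group's minimum (and is under 25),
-- which matches the sample output shown for X7. Uses the #Table1 created above.
SELECT name1,
       name2,
       total,
       CASE WHEN SUM(total) OVER (PARTITION BY name1) > 100 AND total < 25
            THEN 1 ELSE 0 END AS [Name1>100&Name2<25],
       CASE WHEN SUM(total) OVER (PARTITION BY name1) < 100 AND total < 25
            THEN 1 ELSE 0 END AS [Name1<100&Name2<25],
       CASE WHEN SUM(total) OVER (PARTITION BY name1) > 100
                 AND total = MIN(total) OVER (PARTITION BY name1)
                 AND total < 25
            THEN 1 ELSE 0 END AS [Max of Name1>100&Name2<25],
       CASE WHEN SUM(total) OVER (PARTITION BY name1) < 100
                 AND total = MIN(total) OVER (PARTITION BY name1)
                 AND total < 25
            THEN 1 ELSE 0 END AS [Max of Name1<100&Name2<25]
FROM #Table1
ORDER BY name1, name2, total;
[/code]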
Can anyone give a reply for this: The TCP/IP connection to the host INA2, port 1433 has failed Posted: 08 Sep 2013 08:16 PM PDT Hi all, I need help. I'm getting the error below in the application server log: 2013-09-06 14:08:02,429 ERROR [cke-bss-error] (ajp-10.229.201.57-8009-4) Exception occured in DBConnection getConnection() com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host INA2, port 1433 has failed. Error: "Address already in use: connect. Verify the connection properties, check that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port, and that no firewall is blocking TCP connections to the port."
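The "Address already in use: connect" part of that error usually points at the client side (the application server running out of outbound ephemeral ports) rather than SQL Server refusing connections, but it is still worth confirming that the instance really is listening on TCP 1433. A quick check from the server itself:

[code="sql"]
-- Run on the SQL Server instance over a TCP connection: shows the protocol and port
-- of the current session. If net_transport is TCP and local_tcp_port is 1433, the
-- listener side is fine and the problem is more likely client-side port exhaustion.
SELECT net_transport,
       protocol_type,
       local_net_address,
       local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
[/code]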