[SQL Server 2008 issues] Project Plan
- Project Plan
- Getting table size greater than zero even when row count is zero?
- Log Shipping using Virtual Servers
- Create table of unique alphanumeric numbers to use as SKUs
- Clustered Indexes on Identity columns
- Service Broker takes long time to insert into a table
- SQL server startup parameters
- Same query, two users, different performance
- Insert Temp Table help.
- Calculating two-tailed Student's t-distribution
- Execute a Job When Replication Ends
- sql 2008 r2 cluster installation failed
- Update Script to handle databases where Inactive
- T-SQL query help
- Trimming column values in place (without temporary tables, etc)
- sqlcmd and XML ON
- Merge_Replication_Tables_Not_In_Sync(MSmerge_tombstone)
- Reporting Services
- Comparison
- SQL 2008 R2 Replication with database/table collation change
Project Plan Posted: 04 Sep 2013 07:04 PM PDT I need a project plan for a SQL Server 2008 database server, just a general overview. We have a number of SQL Servers; what would the project plan look like? Please suggest.
Getting table size greater than zero even when row count is zero? Posted: 04 Sep 2013 05:37 PM PDT Hi, I was checking table sizes and row counts and found a heap table with a row count of 0 that still has space allocated. Can anyone explain what the reason could be? Below is the query I used on my database:

SELECT t.NAME AS TableName,
       p.rows AS RowCounts,
       SUM(a.total_pages) * 8 AS TotalSpaceKB,
       SUM(a.used_pages) * 8 AS UsedSpaceKB,
       (SUM(a.total_pages) - SUM(a.used_pages)) * 8 AS UnusedSpaceKB
FROM sys.tables t
INNER JOIN sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN sys.allocation_units a ON p.partition_id = a.container_id
WHERE t.NAME NOT LIKE 'dt%'
  -- AND t.is_ms_shipped = 0
  AND i.OBJECT_ID > 255
GROUP BY t.Name, p.Rows
ORDER BY t.Name
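A common cause: deletes against a heap only deallocate empty pages when the delete takes a table lock, so an emptied heap can keep its pages. A sketch for inspecting and reclaiming that space (the table name dbo.MyHeap is a placeholder; index_id 0 means the heap itself):

[code="sql"]
-- Show the pages still allocated to the heap, even with record_count = 0:
SELECT alloc_unit_type_desc, page_count, record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyHeap'), 0, NULL, 'DETAILED');

-- On SQL Server 2008+, rebuilding the heap deallocates the empty pages:
ALTER TABLE dbo.MyHeap REBUILD;
[/code]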
Log Shipping using Virtual Servers Posted: 03 Sep 2013 09:13 PM PDT Good morning all, I'm looking at implementing log shipping between two virtual servers running VMware and SQL Server 2008 R2 (Standard), both attached to a SAN. Does anyone know if I need to do anything different regarding permission levels for the SQL Server service and Agent accounts (i.e. do I need to grant permissions on the physical host server/SAN as well as the virtual servers)? Thanks in advance for any help offered. M
Create table of unique alphanumeric numbers to use as SKUs Posted: 04 Sep 2013 12:16 PM PDT Hello all, I am new to posting in the forum, though I do have some past experience with SQL, mostly in v2000/v2005. What I need is a table of unique alphanumeric values that I can use as SKU numbers for printing simple bar code labels for our products (I do not need barcoding help, as my label software just needs the number). I will be populating in blocks of 15,000 SKUs to start, and will need to remove any duplicates if they happen to exist after the initial population. The alpha part will come from a separate table of values, like so:

Table = SkuCategories
Columns = (t_id IDENTITY, skuCategory_Name nvarchar(MAX), skuCategory_KeyCode char(10))
Values = (t_id, 'SomeCategory Product', 'SP')

What I need is the ability to run a statement, cursor, or whatever method, to populate another table with a set number of records, each with a unique SKU number built from the 'skuCategory_KeyCode' value 'SP', where "S" is inserted at the beginning and "P" appended at the end of a unique 9-digit number. A sample of the value I am looking for: sku = "S123412345P". The SKU list table needs the following columns: (id IDENTITY, sku VARCHAR, datecreated (TimeStamp), isUsed BIT default 0). Thank you, and go easy on me as this is my first post!
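One possible set-based approach (a sketch, not the only way: the starting offset 123400000 and the t_id = 1 lookup are assumptions; the tally-table trick just generates sequential row numbers, which are unique by construction):

[code="sql"]
CREATE TABLE SkuList (
    id          INT IDENTITY(1,1) PRIMARY KEY,
    sku         VARCHAR(11) NOT NULL UNIQUE,   -- UNIQUE constraint rejects duplicates up front
    datecreated DATETIME NOT NULL DEFAULT GETDATE(),  -- DATETIME, not TIMESTAMP (rowversion)
    isUsed      BIT NOT NULL DEFAULT 0
);

DECLARE @key CHAR(10);
SELECT @key = skuCategory_KeyCode FROM SkuCategories WHERE t_id = 1;  -- 'SP'

;WITH Numbers AS (
    -- 15,000 sequential numbers from a cross join of system views
    SELECT TOP (15000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_objects a CROSS JOIN sys.all_objects b
)
INSERT INTO SkuList (sku)
SELECT LEFT(@key, 1)                                              -- 'S'
     + RIGHT('000000000' + CAST(123400000 + n AS VARCHAR(9)), 9)  -- unique 9-digit body
     + SUBSTRING(@key, 2, 1)                                      -- 'P'
FROM Numbers;
[/code]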
Clustered Indexes on Identity columns Posted: 04 Sep 2013 07:27 AM PDT Hello, I had a quick question for everyone... I have often wondered why clustered indexes are usually placed on PK Identity(1,1) columns (1,2,3,4,5,6,7,8...) instead of on columns where they would improve performance more. A clustered index on a date-range column would be much more efficient than a clustered index on a basic "ID" column, yet I have seen this time and time again: usually non-clustered indexes are placed on date ranges instead of clustered indexes. For the most part, I presume this is due to the fact that a clustered index is expected to be on a unique column and most datetime columns are not unique. Most articles I have read lead you in the direction of placing indexes on JOINs, WHERE clauses, and sometimes ORDER BY, but a clustered index on a join column would be so much faster than a non-clustered one. What is the best practice? Thanks for your time. Dave
Service Broker takes long time to insert into a table Posted: 04 Sep 2013 08:01 AM PDT I am new to Service Broker. One of our databases uses Service Broker on SQL Server 2008 R2. A stored procedure makes use of Service Broker, and the procedure is called nearly 100 times. The problem is that the procedure execution completes very quickly, in 13 seconds, but I can see inserts into the main table continuing for a very long time: nearly 6 hours after the procedure execution completes. Can someone shed some light on this? Should I tune the query, change the number of send/receive queue readers, or tune the table?
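Since the sends complete quickly but the inserts trickle in afterwards, the backlog is likely on the receiving side: the activation procedure may be processing one message at a time with a single reader. A sketch for investigating (the queue name dbo.TargetQueue is a placeholder):

[code="sql"]
-- How many messages are still waiting to be processed?
SELECT COUNT(*) AS pending_messages
FROM dbo.TargetQueue WITH (NOLOCK);

-- What activation settings does each queue have?
SELECT name, activation_procedure, max_readers, is_activation_enabled
FROM sys.service_queues;

-- If max_readers is 1, allowing parallel activation may help
-- (assuming the activation procedure is safe to run concurrently):
ALTER QUEUE dbo.TargetQueue WITH ACTIVATION (MAX_QUEUE_READERS = 5);
[/code]

Receiving messages in batches inside the activation procedure (RECEIVE TOP (n) into a table variable) rather than one at a time is the other common fix.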
SQL server startup parameters Posted: 04 Sep 2013 05:05 AM PDT Hi, I'm trying to find the startup parameters for SQL Server 2008 and 2005 using T-SQL. For SQL 2008 R2 I was able to get the information using the script below:

SELECT * FROM sys.dm_server_registry

Please let me know if there is any T-SQL script to find the startup parameters on the older versions; I would like to avoid the GUI. Thanks in advance.
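On 2005 and 2008 (pre-R2), where sys.dm_server_registry does not exist, the startup parameters can be read from the registry values SQLArg0, SQLArg1, ... A sketch using the undocumented (but instance-aware) xp_instance_regread:

[code="sql"]
DECLARE @n INT, @key NVARCHAR(20), @arg NVARCHAR(260);
SET @n = 0;
WHILE 1 = 1
BEGIN
    SET @key = N'SQLArg' + CAST(@n AS NVARCHAR(10));
    SET @arg = NULL;
    EXEC master..xp_instance_regread
         N'HKEY_LOCAL_MACHINE',
         N'Software\Microsoft\MSSQLServer\MSSQLServer\Parameters',
         @key,
         @arg OUTPUT;
    IF @arg IS NULL BREAK;   -- no more SQLArgN values
    PRINT @arg;              -- e.g. -dC:\...\master.mdf, -e..., -l..., plus any trace flags
    SET @n = @n + 1;
END
[/code]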
Same query, two users, different performance Posted: 04 Sep 2013 06:14 AM PDT Hey Gurus, I have a query that performs differently depending on the user executing it. This was first brought to my attention because an Excel spreadsheet was taking a long time to load data (Excel 2010, SQL Server 2008 R2). The query was awful; I rewrote it and got better performance. But when the user ran it from Excel, performance was still bad. After a bit of head scratching I saw that the connection from Excel had a username/password specified, whereas when I ran it from SSMS I was logging in with my username via Windows integrated security (and I am dbo). So I tried a few things:

1. In two windows in the same SSMS, run the query simultaneously: one as me, the other using "EXECUTE AS LOGIN =".
2. Start up two instances of SSMS: connect in one as me, in the other as the specified login.

In every case, when I run the query as me it takes about 7 1/2 minutes; when I run it as the specified user, it takes 12+ minutes (today's measurements: on other days it has run for me in as little as 2 minutes, but these times have been consistent today). I am always executing the exact same statement (SELECT * FROM a table-valued function) with the same parameters; the only difference is "who" is logged in. Why would two different user logins have different performance profiles? What should I be looking at to track this down? Thanks in advance. Tom
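One thing worth ruling out: sessions with different SET options (notably ARITHABORT, which SSMS turns ON by default while many client libraries leave it OFF) get separately compiled plans for the same statement, which can differ wildly in speed. A sketch for comparing the two connections while both are active:

[code="sql"]
-- Compare plan-affecting SET options across the two sessions:
SELECT session_id, login_name, program_name,
       arithabort, ansi_nulls, quoted_identifier
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;
[/code]

If the options differ, re-running the SSMS test after SET ARITHABORT OFF in that window is a quick way to confirm whether the slow plan reproduces under your own login too.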
Insert Temp Table help. Posted: 04 Sep 2013 06:54 AM PDT I have a query that "Mickey" on here helped me with; I made some changes and it works wonderfully for what we need. I now want to associate a Zip Code with my CPT codes, just as I have given them a Geography lat/long. My brain must be drained, or I just can't think straight, because I can't figure it out, and I bet it's super simple :crazy: The Geography query below works like a charm; I would like to duplicate it with the following modification:

CREATE TABLE #TempLocation (Code Varchar(5) PRIMARY KEY, Location Varchar)
INSERT INTO #TempLocation
VALUES ('OR001', ------ this is where I'm losing it; I want to assign this a Zip Code like I have assigned Geography points below)

--- The one below this comment works very well
CREATE TABLE #TempLocation (Code Varchar(5) PRIMARY KEY, Location Geography)
INSERT INTO #TempLocation VALUES ('OR001', Geography::Point(35.136195,-106.658707, 4326))
INSERT INTO #TempLocation VALUES ('OR002', Geography::Point(35.083956,-106.688232, 4326))
INSERT INTO #TempLocation VALUES ('OR003', Geography::Point(35.105305,-106.5193182, 4326))
INSERT INTO #TempLocation VALUES ('CS001', Geography::Point(35.132264,-106.712952, 4326))
INSERT INTO #TempLocation VALUES ('CS002', Geography::Point(35.132260,-106.712951, 4326))
INSERT INTO #TempLocation VALUES ('CS003', Geography::Point(35.132262,-106.712951, 4326))
SELECT tl.Location
      ,[Visit: Visit Type]
      ,[Visit: Visit ID]
      ,[Procedure: Code]
      ,[Date: Service date of the Charge]
      ,[Insurance: Charge Primary Ins. Plan]
      ,[Patient: Age]
      ,[Patient: First Name]
      ,[Patient: Last Name]
      ,[Patient: Patient ID]
      ,[Patient: Patient Name]
      ,[Patient: Preferred Provider]
      ,[Patient: Sex]
      ,[Patient: Zip Code]
      ,[Patient: Date Of Birth]
      ,[Patient: Address Line 1]
      ,[Patient: Middle Initial]
      ,[Patient: Phone Primary]
      ,[Patient: Race]
      ,[Patient: SSN]
      ,[Patient: State]
      ,[Procedure: Description]
      ,[Provider: Billable]
      ,[Provider: Rendering]
FROM [PrimeRecord130010].[dbo].[vCHCCharges] v
JOIN #TempLocation tl ON v.[Procedure: Code] = tl.Code
--Where [Date: Service date of the Charge] between '01/01/13' and '06/01/13'
GO
DROP TABLE #TempLocation
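One likely stumbling block in the attempt above: a bare Varchar with no length defaults to VARCHAR(1), so a 5-digit ZIP code would be truncated to one character. A sketch of the ZIP variant (the ZIP codes shown are made-up placeholders):

[code="sql"]
CREATE TABLE #TempZip (
    Code    VARCHAR(5) PRIMARY KEY,
    ZipCode VARCHAR(10)    -- length matters: plain VARCHAR means VARCHAR(1)
);
INSERT INTO #TempZip VALUES ('OR001', '87102');
INSERT INTO #TempZip VALUES ('OR002', '87105');
INSERT INTO #TempZip VALUES ('CS001', '87120');

-- Same join shape as the Geography version:
SELECT tz.ZipCode, v.[Procedure: Code]
FROM [PrimeRecord130010].[dbo].[vCHCCharges] v
JOIN #TempZip tz ON v.[Procedure: Code] = tz.Code;

DROP TABLE #TempZip;
[/code]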
Calculating two-tailed Student's t-distribution Posted: 04 Sep 2013 01:58 AM PDT I would like to calculate the two-tailed Student's t-distribution from within T-SQL. Excel does this with its TINV() function: I want to be able to pass it the degrees of freedom, specify the probability level, and get back the t-distribution value. Is there some way of either calling the Excel function or recreating it in T-SQL? TIA, Jon
Execute a Job When Replication Ends Posted: 04 Sep 2013 06:29 AM PDT Greetings, I need to know whether I can execute a job when a replication cycle ends, but automatically, meaning without having to set a schedule for the job. Before somebody asks WHY: I need this because I replicate temporary tables, and afterwards I need to put those rows into a production table, but I must insert into the production table only when the data is ready. I'll be waiting for suggestions. Thanks.
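One common pattern: add a final step to the replication agent's own SQL Agent job (or fire it from an alert on the agent's success) that starts the follow-up job, so it runs exactly when the distribution cycle finishes. A sketch, where the job name is a placeholder:

[code="sql"]
-- Added as the last step of the replication agent job, or called from an alert:
EXEC msdb.dbo.sp_start_job @job_name = N'LoadProductionTables';
[/code]

sp_start_job returns as soon as the job is queued, so the loading job itself should verify the staged data is complete before inserting.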
sql 2008 r2 cluster installation failed Posted: 04 Sep 2013 02:27 AM PDT While installing a SQL 2008 R2 cluster, it failed with the error below: "Microsoft SQL Server 2008 R2 cluster setup attempted to perform an unauthorized operation."

TITLE: Microsoft SQL Server 2008 R2 Setup
The following error has occurred: Attempted to perform an unauthorized operation.
Click 'Retry' to retry the failed action, or click 'Cancel' to cancel this action and continue setup.

After clicking Cancel, the "Database Engine" and "Replication" features failed and the remaining features installed successfully.

Log file:
Overall summary:
  Final result: SQL Server installation failed. To continue, investigate the reason for the failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup.
  Exit code (Decimal): -2068119551
  Exit facility code: 1211
  Exit error code: 1
  Exit message: SQL Server installation failed. To continue, investigate the reason for the failure, correct the problem, uninstall SQL Server, and then rerun SQL Server Setup.
  Start time: 2013-01-25 10:59:08
  End time: 2013-01-25 11:13:30
  Requested action: Install
  Log with failure: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130125_105747\Detail.txt
  Exception help link: http%3a%2f%2fgo.microsoft.com%2ffwlink%3fLinkId%3d20476%26ProdName%3dMicrosoft%2bSQL%2bServer%26EvtSrc%3dsetup.rll%26EvtID%3d50000%26ProdVer%3d10.50.1600.1%26EvtType%3d0x6121810A%400x92D13C14

Please suggest.
Update Script to handle databases where Inactive Posted: 03 Sep 2013 08:08 PM PDT Hi, I am looking for a script that applies updates while handling databases that are inactive: if a database is offline it should be treated as inactive, and when we apply new enhancements it needs to come back to active. If anyone can help it will be appreciated. Fella @@@Attitude Builds The Trust@@@
T-SQL query help Posted: 04 Sep 2013 01:15 AM PDT Can anybody help with the following query? Basically it returns 100% when all records are populated and 0% when only some are populated!

[code="sql"]
select coll,
       count(case when coll is not null and value is not null then 1 end) CntCol,
       count(case when coll is not null and value is not null then 1 end)
           / count(coll) * 100.0 Percentage
from (
    select 'AltPhone' coll, AltPhone value from ABC_User_CustomProfile
    union all
    select 'lastname' coll, lastname value from ABC_User_CustomProfile
    union all
    select 'staffid' coll, staffid value from ABC_User_CustomProfile
) src
group by coll
[/code]
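The all-or-nothing result comes from integer division: COUNT() returns INT, and the expression is evaluated left to right, so count(...) / count(coll) is computed first and truncates to 0 or 1 before the * 100.0 ever applies. Multiplying by 100.0 before dividing forces decimal arithmetic. A sketch of the fix:

[code="sql"]
select coll,
       count(case when value is not null then 1 end) AS CntCol,
       count(case when value is not null then 1 end) * 100.0
           / count(*) AS Percentage   -- 100.0 first => decimal division
from (
    select 'AltPhone' coll, AltPhone value from ABC_User_CustomProfile
    union all
    select 'lastname' coll, lastname value from ABC_User_CustomProfile
    union all
    select 'staffid' coll, staffid value from ABC_User_CustomProfile
) src
group by coll
[/code]

(The `coll is not null` tests can be dropped, since coll is a literal and never null.)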
Trimming column values in place (without temporary tables, etc) Posted: 04 Sep 2013 01:11 AM PDT I have four tables containing the regular chars and varchars. Unfortunately, no one bothered to trim the values of white space before inserting them into the columns. Is there a way to LTRIM(RTRIM()) in place without resorting to temporary tables or variables? I guess I could use a cursor, but there has to be a better way. Thanks! John
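A plain UPDATE rewrites the values in place, no temp table or cursor needed; repeated once per table/column it covers all four tables. A sketch (table and column names are placeholders):

[code="sql"]
UPDATE dbo.Table1
SET    Col1 = LTRIM(RTRIM(Col1))
-- Equality comparison ignores trailing spaces ('a ' = 'a'), so DATALENGTH is
-- used to touch only rows that actually changed:
WHERE  DATALENGTH(Col1) <> DATALENGTH(LTRIM(RTRIM(Col1)))
   OR  Col1 LIKE ' %';   -- catch leading spaces too
[/code]

Note that for fixed-length CHAR columns the engine blank-pads the stored value back to full width, so trimming only has a lasting effect on VARCHAR columns (or after converting the column to VARCHAR).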
sqlcmd and XML ON Posted: 04 Sep 2013 12:47 AM PDT I've been given a script similar to the following. It is saved to a network share and called from a SQL job using sqlcmd. I am not at all familiar with sqlcmd, this :XML ON, or the :OUT command. What I ultimately need to do is dynamically change the file name to append the date. Here is a modified version of the script:

USE [DB1]
GO
:XML ON
:OUT \\fileserv1\customer\XML\getCompScores.xml
SELECT loginname
      ,substring(convert(VARCHAR, grade), 1, 4) AS 'Grade'
      ,date_submitted
FROM OrganizationLogin
 AND convert(VARCHAR, date_submitted, 101) = CONVERT(DATE, DATEADD(DD,-1,GETDATE()))
FOR XML raw, root('Accounts'), elements
GO
:OUT stdout
:XML OFF

As I mentioned, I just need to be able to dynamically change getCompScores.xml to getCompScores_20130903.xml. Any and all help will be greatly appreciated.
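sqlcmd supports scripting variables: the caller passes a value with -v and the script references it as $(name), which is substituted even inside :OUT paths. A sketch (server name and script path are placeholders; only the :OUT line in the script changes):

[code="sql"]
-- In the script, replace the hard-coded file name:
:OUT \\fileserv1\customer\XML\getCompScores_$(RunDate).xml

-- And have the job step build the date when invoking sqlcmd, e.g. from CmdExec:
-- sqlcmd -S MyServer -i \\fileserv1\scripts\getCompScores.sql -v RunDate="20130903"
[/code]

Alternatively, :setvar RunDate "20130903" at the top of the script sets the value without touching the command line, at the cost of editing the script each day.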
Merge_Replication_Tables_Not_In_Sync (MSmerge_tombstone) Posted: 03 Sep 2013 11:05 PM PDT We have merge replication configured in our environment (1 publisher, 3 subscribers), and we know that by default merge replication is bi-directional. It is working fine for inserts: if a row is inserted at the publisher it gets applied at the subscriber, and vice versa. The real issue is deletes: if a row is deleted at the publisher it gets deleted at the subscribers, but if a row is deleted at any of the 3 subscribers it does not get deleted at the publisher. When a row is deleted at the publisher, it gets recorded in MSmerge_tombstone (publisher); but at the subscriber, the corresponding row (rowguid) is not recorded in MSmerge_tombstone (subscriber). I think that might be the issue, even though the merge delete trigger exists on these tables (subscriber). Please help me out, as this is a production issue. Publisher (2008 R2 SP1 Enterprise), Subscribers (2008 R2 SP1 Standard)
Reporting Services Posted: 03 Sep 2013 07:48 PM PDT Two questions about Reporting Services:

1) Is it possible to rename the ReportServer database? I just tried to change it as part of a migration, but Reporting Services was still referencing objects with the old database name. I guess all the objects would also need to be changed if we were going to do this, or is there an easier way?

2) Changing the service account for Reporting Services: I have inherited some servers which use the Network Service account for Reporting Services, and I prefer using AD accounts for all SQL services. Where would I need to add this account to enable Reporting Services to start up?
Comparison Posted: 03 Sep 2013 09:10 PM PDT Hi, I have a table with a column named proposaldata containing values such as 123456 and 542213, and another table whose proposaldata column contains 000123456 and 000542213. I need to compare these columns across the two tables, but 123456 and 000123456 should be treated as the same value. What should I do?
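Two common approaches, assuming the columns are character data (table names here are placeholders): pad both sides to a common width, or strip the leading zeros by casting to a number. A sketch:

[code="sql"]
-- Pad both sides to 9 digits before comparing:
SELECT a.proposaldata, b.proposaldata
FROM   TableA a
JOIN   TableB b
  ON   RIGHT('000000000' + a.proposaldata, 9) = RIGHT('000000000' + b.proposaldata, 9);

-- Or, if every value is strictly numeric:
--   ON CAST(a.proposaldata AS BIGINT) = CAST(b.proposaldata AS BIGINT)
[/code]

Note that an expression on both join columns prevents index seeks; if this runs often, storing a normalized (padded or numeric) copy of the column is worth considering.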
SQL 2008 R2 Replication with database/table collation change Posted: 03 Sep 2013 09:44 PM PDT I have a transactional replication setup where the publication database has a collation of Arabic_CS_AS; however, the subscriber database was set up with the standard SQL_Latin1_General_CP1_CI_AS collation. The effect is that certain columns in the subscriber database now contain garbage characters. Is there a way to change the subscriber database collation to match the publication?
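The cleanest route is usually to create the subscriber database with the publisher's collation before adding the subscription, then reinitialize. A sketch (SubscriberDB is a placeholder; note that ALTER DATABASE ... COLLATE changes only the database default: columns that already exist keep their old collation, which is why reinitializing the subscription afterwards matters):

[code="sql"]
-- New subscriber database, matching the publisher:
CREATE DATABASE SubscriberDB COLLATE Arabic_CS_AS;

-- Or, for an existing subscriber database (requires exclusive access):
ALTER DATABASE SubscriberDB COLLATE Arabic_CS_AS;
[/code]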
You are subscribed to email updates from SQLServerCentral / SQL Server 2008 - General / SQL Server 2008.