Friday, May 24, 2013

[SQL Server 2008 issues] How to allocate the Amount to other columns accordingly


How to allocate the Amount to other columns accordingly

Posted: 23 May 2013 06:22 PM PDT

Hi, I have a requirement. Following is the structure of my table:

[code="sql"]
Create Table #Test
(
    Valuedt Datetime,
    TotalDue_Amount Numeric(12,2),
    TotalRecieved_Amount Numeric(12,2),
    Maintaince_Due Numeric(12,2),
    InsDue Numeric(12,2),
    Stationary_Due Numeric(12,2),
    Travelling_Exp_Due Numeric(12,2),
    Maintaince_Collection Numeric(12,2),
    Ins_Collection Numeric(12,2),
    Stationary_Collection Numeric(12,2),
    Travelling_Exp_Collection Numeric(12,2)
)

Insert Into #Test Values (GETDATE(), 2000, 1000, 500, 200, 100, 50, 0, 0, 0, 0)
Insert Into #Test Values (DateAdd(dd, 1, GETDATE()), 5000, 4000, 1000, 500, 500, 100, 0, 0, 0, 0)
[/code]

TotalDue_Amount is the sum of all the other due columns, and I also have TotalRecieved_Amount. My requirement is to allocate TotalRecieved_Amount to Maintaince_Collection, Ins_Collection, Stationary_Collection and Travelling_Exp_Collection according to the corresponding due amounts, in the following order: Maintaince_Due, InsDue, Stationary_Due, Travelling_Exp_Due. Kindly help with this requirement, and tell me if any more information is needed. Thanks in advance!
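
A minimal sketch of one way to do this kind of waterfall allocation, in case it helps: each CROSS APPLY caps the allocation at the amount still remaining, working through the dues in the priority order given above. This is only an illustration against the #Test table, not a tested solution.

[code="sql"]
-- Allocate TotalRecieved_Amount to the collection columns in priority order:
-- Maintaince_Due, then InsDue, then Stationary_Due, then Travelling_Exp_Due.
UPDATE t
SET Maintaince_Collection     = a.m,
    Ins_Collection            = b.i,
    Stationary_Collection     = c.s,
    Travelling_Exp_Collection = d.tr
FROM #Test AS t
CROSS APPLY (SELECT m = CASE WHEN t.TotalRecieved_Amount < t.Maintaince_Due
                             THEN t.TotalRecieved_Amount ELSE t.Maintaince_Due END) AS a
CROSS APPLY (SELECT i = CASE WHEN t.TotalRecieved_Amount - a.m < t.InsDue
                             THEN t.TotalRecieved_Amount - a.m ELSE t.InsDue END) AS b
CROSS APPLY (SELECT s = CASE WHEN t.TotalRecieved_Amount - a.m - b.i < t.Stationary_Due
                             THEN t.TotalRecieved_Amount - a.m - b.i ELSE t.Stationary_Due END) AS c
CROSS APPLY (SELECT tr = CASE WHEN t.TotalRecieved_Amount - a.m - b.i - c.s < t.Travelling_Exp_Due
                              THEN t.TotalRecieved_Amount - a.m - b.i - c.s ELSE t.Travelling_Exp_Due END) AS d;
[/code]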

Sybase odbc driver

Posted: 23 May 2013 06:30 PM PDT

Hi, does anyone know where I can download the ODBC driver for Sybase? I need to create a linked server to Sybase.
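
Assuming the vendor's ODBC driver is installed and a system DSN has been created for it, a hedged sketch of the linked-server side might look like the following (the linked server name, DSN name and credentials are all placeholders):

[code="sql"]
-- Create a linked server over an ODBC DSN (hypothetically named 'SybaseDSN') via MSDASQL.
EXEC sp_addlinkedserver
     @server     = N'SYBASE_LINK',   -- assumed linked server name
     @srvproduct = N'Sybase',
     @provider   = N'MSDASQL',
     @datasrc    = N'SybaseDSN';     -- assumed ODBC system DSN

-- Map a remote login (placeholder credentials).
EXEC sp_addlinkedsrvlogin
     @rmtsrvname  = N'SYBASE_LINK',
     @useself     = 'FALSE',
     @rmtuser     = N'sybase_user',
     @rmtpassword = N'change_me';
[/code]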

MAXDOP ON INDEXED VIEW

Posted: 23 May 2013 05:59 AM PDT

I have an indexed view and I'm trying to create a UNIQUE CLUSTERED index on it with a MAXDOP = 64 setting, but the SQL Server engine does not use this option. Is there a problem here, or any tip?

[code="sql"]
CREATE UNIQUE CLUSTERED INDEX XXXX ON [dbo].XXXX_1
(
    [Column1] ASC,
    [Column2] ASC,
    [Column3] ASC,
    [Column4] ASC,
    [Column5] ASC
)
WITH (MAXDOP = 64, PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
      IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
      ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
[/code]
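
A couple of hedged things worth ruling out rather than a diagnosis: parallel index operations are an Enterprise-edition feature, and the engine can never use more workers than there are visible schedulers (it may also choose fewer at runtime if resources are tight), so MAXDOP = 64 only acts as an upper bound.

[code="sql"]
SELECT SERVERPROPERTY('Edition') AS edition;   -- parallel index builds need Enterprise

SELECT cpu_count, scheduler_count
FROM   sys.dm_os_sys_info;                     -- hard upper bound on usable workers
[/code]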

Log shipping environment - High IOPS and reads on secondary servers log disk

Posted: 23 May 2013 05:05 PM PDT

Hi there!

Background: we're running log shipping between two servers, and the secondary server restores the transaction logs in standby mode to support queries against the databases for reporting purposes.

Going through our perfmon logs and disk metrics, we were surprised to see that the log disk on the secondary server (where we only have the .ldf files for the standby databases) had pretty high IOPS, with a read/write ratio of about 95:5. I would expect the disk holding the data files to have the higher IOPS, with relatively high reads (the user queries, etc.) and writes (restoring the transaction logs as part of the log shipping job), but not really the log disk.

How does this work? When SQL Server reads the transaction log file of a secondary database, is that during the restores, to keep track of where the next transaction log should be applied? Can anyone please explain? I feel there's a piece missing in my log shipping puzzle. :-)

Thank you very much!

Sincerely,
Gord
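
A hedged way to confirm where the reads are actually landing, per file, for one of the standby databases (replace 'MyStandbyDb' with a real database name):

[code="sql"]
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.type_desc,                    -- ROWS vs LOG
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_bytes_read,
       vfs.num_of_writes,
       vfs.num_of_bytes_written
FROM sys.dm_io_virtual_file_stats(DB_ID('MyStandbyDb'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON  mf.database_id = vfs.database_id
  AND mf.file_id     = vfs.file_id;
[/code]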

Calculate average sale value for the result of the sql query

Posted: 23 May 2013 11:25 AM PDT

Can anyone please tell me how to calculate the average value and select it as a column? I am trying to add AVG(TotalValue) AS AvgSales at the top of the query, but it throws this error: Column 'SalesData.SaleDate' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
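
Since the original query isn't shown, here is only a hedged sketch with assumed table/column names taken from the error message: a windowed aggregate sidesteps the GROUP BY restriction because AVG() OVER () is evaluated per row rather than per group.

[code="sql"]
SELECT  SaleDate,
        TotalValue,
        AVG(TotalValue) OVER () AS AvgSales   -- same overall average repeated on every row
FROM    SalesData;
[/code]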

Suspended process in msdb database

Posted: 05 Dec 2012 04:00 PM PST

Hello gents,

I currently have a suspended spid 53 in msdb that has been in suspended status for over two hours. Seen from sp_who2, the command it is running is DELETE, and the login behind the process is "NT AUTHORITY\SYSTEM" from program name "Microsoft Windows Operating System". In addition, the last wait type is BROKER_RECEIVE_WAITFOR. I ran DBCC OPENTRAN in msdb and it returns:

Oldest active transaction:
spid: 53
UID: -1
Name: receiving msg
Start time: <the time i ran "dbcc opentran">

Any ideas where else I should look? Thanks in advance!
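
One hedged next step: BROKER_RECEIVE_WAITFOR generally just means a WAITFOR (RECEIVE ...) is parked on a Service Broker queue (Database Mail and event notifications in msdb behave this way), so a session can sit "suspended" for hours without anything being wrong. The statement text and wait details for the session should make it clear either way.

[code="sql"]
SELECT r.session_id, r.status, r.command, r.wait_type, r.wait_time,
       t.text AS sql_text
FROM   sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE  r.session_id = 53;
[/code]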

Distribution Server list

Posted: 23 May 2013 12:17 PM PDT

Is there any catalog on the subscription server that lists all of the distribution servers? E.g. we have subscription server REP2A, and distribution servers RED01 and RED05 connect to this subscriber. I don't want to use MSreplication_subscriptions and have to go through all the databases to get this info. I am looking for just one catalog that lists all the distribution servers for me.
-Kinda

Setting up a report within SQL

Posted: 23 May 2013 03:34 AM PDT

I have a SQL statement ready to go, but I am looking for a way to get SSMS to run the report weekly at midnight and write the output of that report to a specific file. Basically, instead of me running reports and distributing them, I want to create the report once, have it run automatically on a set schedule, and let the users who need it access the output file. I am not sure what this process is called, so I am not sure what to search for at this point. Any help would be greatly appreciated.
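
The usual name for this is a scheduled SQL Server Agent job (a Reporting Services subscription is the other common route). A hedged sketch, with all names, paths and the query as placeholders, of a weekly job whose single step exports the query output to a file with sqlcmd:

[code="sql"]
USE msdb;

EXEC sp_add_job      @job_name = N'Weekly report export';

EXEC sp_add_jobstep  @job_name  = N'Weekly report export',
                     @step_name = N'Export to file',
                     @subsystem = N'CmdExec',
                     @command   = N'sqlcmd -S MyServer -d MyDatabase -E -Q "SELECT * FROM dbo.MyReportView" -o "\\FileShare\Reports\weekly_report.txt" -s "," -W';

EXEC sp_add_jobschedule @job_name = N'Weekly report export',
                        @name     = N'Weekly at midnight',
                        @freq_type = 8,               -- weekly
                        @freq_interval = 2,           -- day bitmask: 2 = Monday
                        @freq_recurrence_factor = 1,
                        @active_start_time = 000000;  -- 00:00:00

EXEC sp_add_jobserver   @job_name = N'Weekly report export';
[/code]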

do we even need perfmon anymore?

Posted: 23 May 2013 11:01 AM PDT

Are all the perfmon counters available in DMVs in 2005 and 2008? And are they influenced by being in a virtual machine, or are the storage, memory and CPU counters unreliable there and better harvested from the VM monitor?

Thanks a lot,
drew
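
For what it's worth, a hedged illustration: the SQL Server-specific perfmon counters are surfaced in sys.dm_os_performance_counters, but the OS-level disk, memory and CPU counters are not, which is one place perfmon (or the hypervisor's own monitoring, on a VM) still earns its keep.

[code="sql"]
SELECT object_name, counter_name, instance_name, cntr_value, cntr_type
FROM   sys.dm_os_performance_counters
WHERE  counter_name IN (N'Page life expectancy', N'Batch Requests/sec');
[/code]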

Help with parsing xml to get an element value

Posted: 23 May 2013 08:53 AM PDT

I have a table with a column Col1 of data type varchar(max). Col1 contains strings in XML format such as the one below:

[code="xml"]
<?xml version="1.0" encoding="utf-16" standalone="yes"?>
<Schedule xmlns="http://schemas.microsoft.com/sqlserver/reporting/2010/03/01/ReportServer">
  <StartDateTime>2013-03-25T16:05:26</StartDateTime>
  <Timezone>CDT</Timezone>
  <WeeklyRecurrence>
    <WeeksInterval>2</WeeksInterval>
    <DaysOfWeek>
      <Sunday>true</Sunday>
      <Monday>false</Monday>
      <Tuesday>true</Tuesday>
      <Wednesday>false</Wednesday>
      <Thursday>false</Thursday>
      <Friday>false</Friday>
      <Saturday>false</Saturday>
    </DaysOfWeek>
  </WeeklyRecurrence>
</Schedule>
[/code]

I need to extract the Timezone value (i.e. CDT). Can you help? Thank you!
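
A hedged sketch, assuming the table is called dbo.MyTable: cast the varchar to xml (stripping the utf-16 encoding declaration first, since that declaration can make the cast fail with an "unable to switch the encoding" error) and read the value with the document's namespace declared.

[code="sql"]
;WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/reporting/2010/03/01/ReportServer')
SELECT x.sched.value('(/Schedule/Timezone)[1]', 'varchar(50)') AS Timezone
FROM   dbo.MyTable AS t
CROSS APPLY (SELECT CAST(REPLACE(t.Col1, 'encoding="utf-16"', '') AS xml)) AS x(sched);
[/code]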

Table Partitioning

Posted: 23 May 2013 09:29 AM PDT

I have a table with 5 million rows and I want to see if I could use partitioning. What's happening is that AccountNumber is being recycled in our system. I have a process that loads new accounts into this table, but if an account number already exists (created a few years back), the process won't load the new account, which is not what we want. I am thinking of partitioning this table by year. But if I partition by year, how do I specify not to look at the previous years' partitions during the new account load, so that the account still gets loaded? Can this be achieved through partitioning?
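
Whether partitioning alone solves the duplicate-check problem is debatable (the load's existence check will only ignore older years if the partitioning column is part of that check or of the unique index), but for reference, a hedged sketch of yearly RANGE RIGHT partitioning with placeholder names:

[code="sql"]
CREATE PARTITION FUNCTION pfAccountYear (datetime)
AS RANGE RIGHT FOR VALUES ('2011-01-01', '2012-01-01', '2013-01-01');

CREATE PARTITION SCHEME psAccountYear
AS PARTITION pfAccountYear ALL TO ([PRIMARY]);
[/code]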

Microsoft.SqlServer.Replication.ComErrorException; cannot change properties to "Any CPU"

Posted: 23 May 2013 09:08 AM PDT

Hello,

As the title states, I am having issues with the MergeSynchronizationAgent class in C#. When I try to create my sync agent using the following call:

syncAgent = subscription.SynchronizationAgent;

where

subscription = new MergePullSubscription();

I receive a Microsoft.SqlServer.Replication.ComErrorException, "Class not registered". I've read solutions to this problem, and the most common one is to set my platform to "Any CPU". There is a problem with choosing that solution, however: doing so breaks other functionality in our tool that is even more critical than the checkout functionality. So the problem boils down to this: we are developing a 32-bit application and the SQL Server is 64-bit, and I have to set the build properties of the project to "x86" to ensure the more important functionality of the tool stays intact. Are there any suggestions that don't involve setting the build properties to "Any CPU", i.e. more on the SQL Server side? I am quite new to all of this SQL Server business, so I apologize for any information that may be missing or unclear. Thanks in advance.

Re-partitioning a large table

Posted: 23 May 2013 05:48 AM PDT

Our current partitioning strategy does not restrict growth within a single partition, where we already see 576,114,055 records and growing. Operations on that specific partition are taking an unacceptable amount of time. We'd like to re-partition the table into something more manageable, but the problem is that the amount of locking it is going to take to merge all partitions into one, switch the data out to a non-partitioned table and then re-partition the new table is unpredictably long (we can't simulate production load). It looks like this is the case with just 3-4 partitions; the others are in the 3-5 million row range. Any suggestions on how to address those few partitions without re-partitioning the entire table? Or, if we do have to re-partition, is there a better way than the one proposed here?

Thanks
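
One hedged alternative to rebuilding the whole table, assuming the oversized partition can simply be split into smaller ranges: a SPLIT RANGE only touches the partition being split, so the other partitions are left alone. It is still a large, fully logged data movement for 576 million rows and would need a maintenance window; all names below are placeholders.

[code="sql"]
ALTER PARTITION SCHEME psExisting NEXT USED [FG_New];             -- filegroup for the new range
ALTER PARTITION FUNCTION pfExisting() SPLIT RANGE ('2013-01-01'); -- boundary inside the big partition
[/code]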

Writing a case statement once including it in several queries

Posted: 23 May 2013 07:08 AM PDT

I used to use Focus before migrating to SQL Server, and I miss some of the commands that are in Focus but not in SQL. One statement Focus had was called "include": I could include a query with a complex case statement into different queries as needed. I could copy the code into each query, but the problem is that if the code changes, I then have to change it in every query. How can I do this in SQL Server?

Thanks,
Carroll
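
The closest equivalents in SQL Server are usually a view, an inline table-valued function, or (for expressions) a scalar function, so the CASE logic lives in one object that every query references. A hedged sketch with made-up names:

[code="sql"]
CREATE FUNCTION dbo.fnCategorize (@Amount numeric(12,2))
RETURNS TABLE
AS
RETURN
(
    SELECT Category = CASE
                          WHEN @Amount >= 1000 THEN 'Large'
                          WHEN @Amount >= 100  THEN 'Medium'
                          ELSE 'Small'
                      END
);
GO

-- Reused from any query that needs the logic:
SELECT o.OrderID, c.Category
FROM   dbo.Orders AS o
CROSS APPLY dbo.fnCategorize(o.Amount) AS c;
[/code]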

Import of huge XML file

Posted: 21 May 2013 06:01 PM PDT

Hi all!

I have an XML file of 44 GB (not MB, it's really GB), delivered by the Danish customs authorities. My problem is simple: how do I import such a beast? I have seen a limit of 2.1 GB mentioned everywhere.

Best regards,
Edvard Korsbæk

Linked Server Query Hung in Killed/Rollback

Posted: 23 May 2013 06:33 AM PDT

I have a SQL 2008 R2 production database where a SQL Agent job kicks off some stored procedures. One of the procedures updates tables on the development database over a linked server. The problem happens when the job on production is manually killed: the command running on the dev database receives the kill signal, but it just hangs and never closes out, ultimately consuming all 12 GB of memory allocated to the instance. Restarting the SQL service usually fails and I end up rebooting the dev box. Can someone explain why the linked-server query would hang like this? It keeps happening.
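
Not a fix, but a hedged way to see what the killed session is actually doing: WITH STATUSONLY reports estimated rollback progress, and sys.dm_exec_requests shows whether it is stuck waiting on the linked-server provider (replace 53 with the real session id).

[code="sql"]
KILL 53 WITH STATUSONLY;   -- progress estimate only; does not kill the session again

SELECT session_id, status, command, wait_type, percent_complete
FROM   sys.dm_exec_requests
WHERE  session_id = 53;
[/code]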

Patch compliance report

Posted: 23 May 2013 06:33 AM PDT

Hello everyone,

I want to set up a new process at work whereby each of our SQL Server instances (2000 up to 2012) is checked for patch compliance on a monthly basis. It'd be nice to automate this and have a pretty report produced and emailed out, showing the build/patch version of each instance and how out of date they are. Any other information, such as the number of available patches for each server, would be a bonus.

Is anyone already doing this and, if so, how? I've seen WSUS in action but didn't think much of the reporting side of things. I've heard that SolarWinds Patch Manager is quite good, but it comes at a cost. Maybe even PowerShell could be used, although there must be some manual intervention when checking for the latest SQL Server patches (i.e. manually updating the script each time a new patch is released by Microsoft).

Anyway, your thoughts would be appreciated.

Thanks,
Innerise
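
As a hedged building block rather than a full solution: each instance can report its own build details, and collecting this centrally (a PowerShell loop, a central management server, or linked servers) gives the raw data that the "how out of date" comparison would run against.

[code="sql"]
SELECT SERVERPROPERTY('MachineName')    AS machine_name,
       SERVERPROPERTY('InstanceName')   AS instance_name,
       SERVERPROPERTY('ProductVersion') AS product_version,  -- build number
       SERVERPROPERTY('ProductLevel')   AS product_level,    -- RTM / SPn
       SERVERPROPERTY('Edition')        AS edition;
[/code]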

Problem patching to 2008 SP3

Posted: 06 Mar 2012 02:47 AM PST

I have a 2008 Enterprise 32-bit instance running on a Windows Server 2003 Enterprise server, and I'm an administrator on the server. When I try to run SQLServer2008SP3-KB2546951-x86-ENU.exe, it uncompresses to another drive under a random directory name. Then a DOS window (setup.exe) briefly flashes on the screen and the directory is deleted again. HELP! :w00t:

indexed views, aggregations and poor cardinality estimations

Posted: 21 May 2013 03:19 AM PDT

I've got a large sales table (66 million rows) that I need to query in an aggregated way to return a bestseller list for specific dates and stores. I think I'm running into an issue relating to statistics and the distribution of values, but I'm not sure.

Here is the structure of the tables involved in the query.

tblSales:

[code="sql"]
[date_id] [int] NOT NULL,
[store_id] [int] NOT NULL,
[product_id] [bigint] NOT NULL,
[volume_sold] [int] NOT NULL
[/code]

date_id is an int representing a specific date, store_id is the id of the store that made the sale, product_id is the id of the product that sold, and volume_sold is the volume sold of a specific product on a specific date in a specific store. Pretty straightforward.

tblDates:

[code="sql"]
date_id int,
calendar_date datetime,
period int
[/code]

calendar_date is the actual date the date_id represents; period is an int used to group the date_ids into weeks (week 1 = period 1, week 2 = period 2, etc.).

[dbo].tblStores, [dbo].[tblStores_To_Aggregates] and [dbo].[tblAggregates] are used to assign groups of stores to aggregates that can then be queried on (e.g. west coast aggregate, small stores aggregate, ALL aggregate, etc.).

To speed up the query I've created an indexed view on the data like so:

[code="sql"]
select
    tblDates.period,
    product_id,
    [tblAggregates].aggregate_id,
    SUM(volume_sold) as vol_sold,
    COUNT_BIG(*) AS FREQUENCY
from [dbo].[tblSales] sales
INNER JOIN [dbo].tblStores ON sales.store_id = tblStores.store_id
INNER JOIN [dbo].[tblStores_To_Aggregates] ON tblStores.store_id = [tblStores_To_Aggregates].store_id
INNER JOIN [dbo].[tblAggregates] ON [tblStores_To_Aggregates].aggregate_id = [tblAggregates].aggregate_id
INNER JOIN [dbo].tblDates ON tblDates.date_id = sales.date_id
group by
    [tblAggregates].aggregate_id,
    tblDates.period,
    product_id
[/code]

and then a clustered index on the view in the form:

[code="sql"]
[aggregate_id] ASC,
[period] ASC,
[product_id] ASC
[/code]

And here is the query:

[code="sql"]
select top 1000
    sales.product_id,
    SUM(volume_sold) as vol_sold,
    RANK() OVER (ORDER BY SUM(volume_sold) DESC) as product_rank
FROM [dbo].[tblSales] sales
INNER JOIN [dbo].tblStores ON sales.store_id = tblStores.store_id
INNER JOIN [dbo].[tblStores_To_Aggregates] ON tblStores.store_id = [tblStores_To_Aggregates].store_id
INNER JOIN [dbo].[tblAggregates] ON [tblStores_To_Aggregates].aggregate_id = [tblAggregates].aggregate_id
INNER JOIN [dbo].tblDates ON tblDates.date_id = sales.date_id
WHERE period >= 460 and period <= 470
  and [tblAggregates].aggregate_id = 1
group by product_id
order by product_rank asc
[/code]

The issue comes when varying the aggregate_id. An aggregate can contain many stores or only a few: aggregate_id 1 has 1878 stores, aggregate_id 2 has 93. When querying on id 2 the query returns in under a second, while id 1 takes 30+ seconds. Looking at the query plan, the issue seems to be caused by a poor estimate of the number of rows that will come back. The estimate for 2 is close, but for 1 it's way off, which causes a spill to tempdb and slows down the query.

My guess is that the problem is related to the statistics (they are up to date, by the way) giving poor cardinality estimates. I'm kind of at a loss as to what to do next. Any advice or suggestions would be appreciated.
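
Two hedged experiments, not definitive fixes. First, fullscan statistics on the fact table, since sampled statistics on a skewed column such as aggregate_id can badly misestimate the popular value 1. Second, query the indexed view directly with NOEXPAND (the view name below, dbo.vwSalesAgg, is assumed); that guarantees the pre-aggregated rows are read and gives the optimizer the view's own statistics to estimate from.

[code="sql"]
UPDATE STATISTICS dbo.tblSales WITH FULLSCAN;

SELECT TOP 1000
       product_id,
       SUM(vol_sold) AS vol_sold,
       RANK() OVER (ORDER BY SUM(vol_sold) DESC) AS product_rank
FROM   dbo.vwSalesAgg WITH (NOEXPAND)
WHERE  period >= 460 AND period <= 470
  AND  aggregate_id = 1
GROUP BY product_id
ORDER BY product_rank ASC;
[/code]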

Several Jobs or One Job with several steps?

Posted: 23 May 2013 12:33 AM PDT

We have the need to run several jobs daily: jobs that update custom tables, create reports, etc. What is the best practice? To have several jobs (one for each stored procedure), or to group the stored procedures into job steps and have fewer jobs that run for longer because they have several steps?

Ambiguous SQL server

Posted: 23 May 2013 12:25 AM PDT

Hi,

In a couple of select statements (out of nine) I had to qualify the column with the table name. Yet this seems odd, in that surely all the select statements should behave the same. Any ideas?
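
A hedged guess at what is going on, with a minimal made-up example (it assumes both hypothetical tables have an ID column): a column only has to be qualified when the same column name exists in more than one table referenced by that particular statement, so two similar-looking SELECTs can behave differently if their FROM clauses differ.

[code="sql"]
-- Fails to compile with "Ambiguous column name 'ID'" because both tables have an ID column:
SELECT  ID
FROM    dbo.TableA AS a
JOIN    dbo.TableB AS b ON b.ParentID = a.ID;

-- Qualifying the column resolves it:
SELECT  a.ID
FROM    dbo.TableA AS a
JOIN    dbo.TableB AS b ON b.ParentID = a.ID;
[/code]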

Moving from SQL 2012 Enterprise to Standard - possible?

Posted: 23 May 2013 12:57 AM PDT

Is there any way to downgrade a SQL Server 2012 Enterprise instance to Standard edition? I guess it's not as easy as switching the keys, is it? We have a SQL Server 2012 Enterprise instance we set up, but we've found that Standard edition would suffice and the Enterprise license could be better used elsewhere. I hoped a full reinstall wouldn't be needed, but I'm afraid it may be. We're not using any of the Enterprise features on this server.

Thanks.
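
Whatever the downgrade path turns out to be, a hedged pre-check worth running in each database: this DMV lists any Enterprise-only features that are persisted in the database (compression, partitioning, etc.) and would stop it being restored or attached to a Standard edition instance.

[code="sql"]
SELECT feature_name, feature_id
FROM   sys.dm_db_persisted_sku_features;   -- run in each user database; empty result = nothing blocking
[/code]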

MS-SQL instance failing due tempdb space issues

Posted: 22 Apr 2013 02:49 AM PDT

Hi,

I should clarify that my instance is cluster-aware and running SQL 2012, but I'm posting here because I believe this is more a generic SQL issue than version specific.

That said, I patched my SQL 2012 failover instance a few days ago, and when failing over to the other node I noticed that the SQL resource did not come online on its own. As a matter of fact, I struggled over a 5 to 10 minute period to bring it online. I tested taking the whole SQL group in the cluster offline, moving it to the other node and bringing the resources online one by one, starting with the disk resources. That test went fine.

I checked the SQL logs and this is what I found:

[code="plain"]
"Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive and then restart SQL Server. Check for additional errors in the event log that may indicate why the tempdb files could not be initialized."
[/code]

Could it be that a lack of tempdb space was not allowing the SQL instance to come online? And if that's the case, how do I troubleshoot it? I know tempdb internal usage goes up and down. I temporarily scheduled a job that keeps track of tempdb internal usage; it runs every minute and saves the data to a table, but I barely see anything above 800 MB total, which is extremely low. I will leave it running for several days, though, just to get a better picture and more accurate results.

Total allocated space for tempdb is 150 GB, spread over six files (not set to auto-grow). The tempdb log was 10 GB (which looks too small); it is now 50 GB, limited to 80 GB. Total data (all databases hosted there) is about 1.3 TB, plus 100 GB of t-log files. I have started to believe the issue was not the tempdb data files but the t-log file, though I am not so sure about it. I need to be sure I am using the right size before doing another planned failover. If 50 GB is still too small, I may have to ask our SAN admin to expand that LUN.

Any ideas, suggestions or opinions?
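
A hedged way to capture both halves of the question, i.e. how the tempdb files are actually sized and limited, and how full the tempdb log gets (DBCC SQLPERF reports every database; the tempdb row is the one of interest):

[code="sql"]
SELECT name, type_desc, physical_name,
       size * 8 / 1024 AS size_mb,
       CASE max_size WHEN -1 THEN -1 ELSE max_size * 8 / 1024 END AS max_size_mb,
       growth, is_percent_growth
FROM   sys.master_files
WHERE  database_id = DB_ID('tempdb');

DBCC SQLPERF (LOGSPACE);
[/code]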

recommended configuration

Posted: 22 May 2013 10:45 PM PDT

I need to build three servers to run SSRS (dev, QA and prod). The servers will only be used for SSRS; the data is coming from another data source. What is the recommended hardware configuration for SQL Servers just running SSRS? The OS will be Windows Server 2008 R2 with SQL Server 2008 R2.

How to send mail from DataBase mail in sql server 2008

Posted: 22 May 2013 10:27 PM PDT

Hi, I am using SQL Server 2008. I had tried using the Gmail SMTP server and I was able to send mail, but I want to configure Database Mail using my official mail server. I used the SMTP server that is configured in our Outlook, but I am getting the following error:

The mail could not be sent to the recipients because of the mail server failure. (Sending Mail using Account 5 (2013-05-23T16:31:15). Exception Message: Cannot send mails to mail server. (Failure sending mail.). )

There are two different servers for incoming and outgoing mail, with two different port numbers, so I used the details of the outgoing server. Please help me; I want to get this configured.
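
A hedged first diagnostic step: Database Mail logs the underlying SMTP error in msdb, and that message is usually more specific than the generic "mail server failure" (authentication, TLS/SSL settings and blocked ports are common culprits with corporate mail servers).

[code="sql"]
SELECT TOP (20) log_date, event_type, description
FROM   msdb.dbo.sysmail_event_log
ORDER BY log_date DESC;
[/code]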

BULK INSERT failing.

Posted: 02 Feb 2012 07:45 PM PST

Hello,

I have a situation where I am trying to do a BULK INSERT of a file that's on a network share. It's failing with:

Cannot bulk load because the file "\\Server\Folder\File" could not be opened. Operating system error code 5 (failed to retrieve text for this error. Reason: 15105).

What I think is odd, though, is that xp_FileExist confirms that the file exists and that the SQL Server can see it. The remote folder has permissions for "Everyone". Is there an internal setting in SQL Server that has been set to prevent me doing this? It's a new server, so this has never been done on it before, but it worked fine in pre-production on a different server.

Hope you can help.
J
