[SQL Server 2008 issues] best optimized query for the requirement
- best optimized query for the requirement
- Strange issue with one of the four Sharepoint databases that we Log ship to our DR site.
- Comparison of tables in two instances
- System.Data.SqlClient.SqlException (0x80131904): Timeout Expired
- BULK INSERT Different Data Types in the SAME Columns
- Check Data While Inserting
- Need help to read text file
- Not able to find historical data using SQL Server Profiler in SQL Server 2008
- Data not inserting
- Real time live stats
- SQL Script Help - Need to capture a single transaction from many possibilities
- SSMS User Interface Tip to Freeze Panes.
- table naming conventions
- sys.dm_io_virtual_file_stats - Number of Read and Writes
- Transactional replication problem
- Doubt about Shrinking process, Rebuilding index and fragmentation
- Update YTD number with the sum of MTD numbers
- Substring with IF or CASE statement
- Can't get rid of Key Lookup in execution plan
- Problem restoring database backup file on secondary server
- how to make a stored procedure return a value when there are no available values
- divide the percentage equally for every user - SQL Server stored procedure
- Bulk Insert via VBA - Operating system error 5
- SQL to divide one row by another row based on formula or rule.
- case statement in count?
- Combining Two Very Large Tables
- cache plan
- SSAS Count and not SUM
- SP execution time.
best optimized query for the requirement
Posted: 19 Mar 2013 06:12 PM PDT
I have the query below and need the best way to optimize it for this requirement:
[code="sql"]SELECT DISTINCT product_id
FROM fact_sales
WHERE (agent_id = 1 AND product_code IN (1))
  AND product_id IN (SELECT DISTINCT product_id
                     FROM fact_sales
                     WHERE (agent_id = 2 AND product_code IN (3))
                       AND product_id IN (SELECT DISTINCT product_id
                                          FROM fact_sales
                                          WHERE (agent_id = 3 AND product_code IN (2))
                                            AND product_id IN (SELECT DISTINCT product_id
                                                               FROM fact_sales
                                                               WHERE (agent_id = 4 AND product_code IN (1)))))[/code]
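A common alternative to this kind of nested IN chain is relational division with GROUP BY/HAVING. Below is a sketch against the fact_sales table from the post (not tested against the real data); it returns products that have at least one matching row for every one of the four agent/product_code pairs:
[code="sql"]SELECT product_id
FROM fact_sales
WHERE (agent_id = 1 AND product_code = 1)
   OR (agent_id = 2 AND product_code = 3)
   OR (agent_id = 3 AND product_code = 2)
   OR (agent_id = 4 AND product_code = 1)
GROUP BY product_id
HAVING COUNT(DISTINCT agent_id) = 4;[/code]
Because each OR branch pins a different agent_id, requiring four distinct agent_id values per product_id is equivalent to requiring a match on all four conditions.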
Strange issue with one of the four Sharepoint databases that we Log ship to our DR site.
Posted: 18 Mar 2013 09:13 PM PDT
On the server that hosts our Sharepoint databases we are log shipping 4 of them to a SQL Server instance at our DR site. The issue we are seeing is that, for one of the 4 databases, the first transaction log backup generated by the log shipping scheduled task after the full backup has been taken is the same size as the database itself (2.4 GB). Following this, all subsequent files generated are much smaller (200 MB). I have checked for open / long-running transactions but none are reported. Out of the 4 Sharepoint databases, the one with the issue is the only one that has any Sharepoint Workflows configured; however, they are configured to run in response to actions rather than at a certain time of day. Both servers are SQL Server 2008 SP3 Standard 64-bit. Has anyone got any thoughts on what might be causing this behaviour?
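As a starting point for this kind of investigation (generic diagnostics, not something from the original post), these built-in commands show whether anything is preventing log truncation or holding the log open at the time the oversized backup is produced:
[code="sql"]-- what, if anything, is currently preventing log truncation in each database
SELECT name, log_reuse_wait_desc FROM sys.databases;

-- oldest active transaction in the suspect database (run in that database)
DBCC OPENTRAN;

-- current log size and percentage used for every database
DBCC SQLPERF(LOGSPACE);[/code]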
Comparison of tables in two instances
Posted: 19 Mar 2013 07:12 PM PDT
I have two instances, A and B, each with a table called Stud_Info, but the records do not match. I want to retrieve the records that exist in the database on instance A but not in the database on instance B.
Instance: A, Database: Students, Table: Stud_Info
Instance: B, Database: Stud, Table: Stud_Info
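One way to do this is a linked server from instance A to instance B combined with EXCEPT. This is a sketch only: InstanceB stands in for the actual network name of instance B, and the SELECT lists should be made explicit if the two tables differ in column order or data types:
[code="sql"]-- run on instance A; create a linked server pointing at instance B (one-time setup)
EXEC sp_addlinkedserver @server = N'InstanceB', @srvproduct = N'SQL Server';

-- rows present in A's table but missing from B's table
SELECT * FROM Students.dbo.Stud_Info
EXCEPT
SELECT * FROM InstanceB.Stud.dbo.Stud_Info;[/code]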
System.Data.SqlClient.SqlException (0x80131904): Timeout Expired
Posted: 19 Mar 2013 11:49 AM PDT
I apologize if this question has been asked before, but we're really stumped and the answers I've seen posted don't really apply to our situation. We're getting the following error sporadically in a system that's being run by SQL 2008 R2 and connects to other databases on remote computers using the same version of SQL. The error message may be lengthy, but perhaps one of you will see something we don't:
Error in GetScheduleByPFTID Type PAST: System.Data.SqlClient.SqlException (0x80131904): Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning()
at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
at System.Data.SqlClient.SqlDataReader.ConsumeMetaData()
at System.Data.SqlClient.SqlDataReader.get_MetaData()
at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionString)
at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)
at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)
at System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior)
at System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior)
at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
at System.Data.Common.DbDataAdapter.Fill(DataSet dataset, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet)
at Scheduling.GetScheduleByPFTID(String pType, String pReason, Int32 pPFTID, Int32 pLastTxtNo, String pRSWChartNumber)
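The stack trace only shows that the DataAdapter.Fill call exceeded the client command timeout (30 seconds by default for SqlCommand); it does not say what the server was doing. A generic diagnostic such as the sketch below, run on the remote server while the problem is occurring, can show whether the call was blocked or simply long-running:
[code="sql"]SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,
       r.blocking_session_id,
       r.total_elapsed_time,
       t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
ORDER BY r.total_elapsed_time DESC;[/code]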
BULK INSERT Different Data Types in the SAME Columns
Posted: 19 Mar 2013 07:25 AM PDT
Issue: I have a small csv file uploaded through a web page by an end-user. It has two INT columns. Sometimes, the user accidentally uploads the column headers - they are varchar. I am using BULK INSERT to get the rows into a temporary table. The bulk insert works fine when the column headers are NOT there. It silently fails when the column headers are there. I specifically changed the table definition to use NVARCHARs for the input when I discovered that the end-users were uploading the column headers.
[code="sql"]SET @Command = N'CREATE TABLE ##temp_123 ([Staging_ID] NVARCHAR(512), [LeadID] NVARCHAR(512))'
EXECUTE sp_ExecuteSQL @Command

SET @Command = N'BULK INSERT ##temp_123 FROM ''' + @InputFileNameWithPath + N''' WITH ( FIELDTERMINATOR = '','', ROWTERMINATOR = '''')'
EXECUTE sp_ExecuteSQL @Command[/code]
I can see the temp table. When the input file has column headers, there is one row with both columns NULL. When there are no column headers, there are as many rows as in the user's file and they all have values.
I'm stumped. Thanks in advance.
- Seth
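One possibility (offered as a sketch, not a confirmed fix): if the uploaded file sometimes carries a header, FIRSTROW = 2 skips it - but note it skips the first line unconditionally, so a file without a header would lose a data row. An alternative is to keep loading everything as NVARCHAR, as the poster already does, and discard the non-numeric row afterwards:
[code="sql"]-- variant of the poster's dynamic SQL with a newline row terminator and the header skipped
SET @Command = N'BULK INSERT ##temp_123 FROM ''' + @InputFileNameWithPath
             + N''' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'', FIRSTROW = 2)';
EXECUTE sp_executesql @Command;

-- or: keep loading every row and delete a header row afterwards
DELETE FROM ##temp_123
WHERE ISNUMERIC(Staging_ID) = 0 OR ISNUMERIC(LeadID) = 0;[/code]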
Check Data While Inserting
Posted: 19 Mar 2013 06:24 PM PDT
Dear All, I have a UI with columns LineNo, ProdID, Discount(%) and an "Insert" button. I can insert a single line or multiple lines in this UI. While inserting the following data, it should check whether there is any value in the Discount column greater than 18. If so, it should not insert any data and should raise an error message.
[code="plain"]LineNo ProdID Discount(%)
------ ------ -------
1      1000   0
2      1001   2
3      1002   19
4      1003   0[/code]
I coded it, but it always checks only LineNo 1. Please help me with code that checks for a Discount value greater than 18 across all the LineNo rows.
Regards, Akbar
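A minimal sketch of an all-or-nothing check. The table variable @Lines and the target table dbo.OrderLines are hypothetical names standing in for however the UI passes the rows to the database:
[code="sql"]DECLARE @Lines TABLE (LineNo int, ProdID int, Discount int);
INSERT INTO @Lines VALUES (1, 1000, 0), (2, 1001, 2), (3, 1002, 19), (4, 1003, 0);

IF EXISTS (SELECT 1 FROM @Lines WHERE Discount > 18)
    RAISERROR('Discount greater than 18%% is not allowed; nothing was inserted.', 16, 1);
ELSE
    INSERT INTO dbo.OrderLines (LineNo, ProdID, Discount)
    SELECT LineNo, ProdID, Discount
    FROM @Lines;[/code]
Because the EXISTS test runs over the whole batch before any INSERT, a single bad line blocks the entire set rather than just line 1.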
Need help to read text file
Posted: 18 Mar 2013 11:49 PM PDT
Hi, my text file is attached as AG-Persekutuan-Bln-09-2012.txt. How can I read this file as follows?
1. Taxpayer data starts in the second row.
2. The 1st taxpayer is read as follows:
[code="sql"]SUBSTRING('02029105500000326272560816105987 Samat Bin Sani 0000015004363 ',21,6) + '-' + SUBSTRING('02029105500000326272560816105987 Samat Bin Sani 0000015004363 ',27,2) + '-' + SUBSTRING('02029105500000326272560816105987 Samat Bin Sani 0000015004363 ',29,4) as newIdx,
SUBSTRING('02029105500000326272560816105987 Samat Bin Sani 0000015004363 ',52,40) as myName[/code]
3. The 2nd taxpayer is read as follows:
[code="sql"]SUBSTRING('02011105700000488437551130055335 Jalaludin Bin Jalil 0000025004363 ',21,6) + '-' + SUBSTRING('02011105700000488437551130055335 Jalaludin Bin Jalil 0000025004363 ',27,2) + '-' + SUBSTRING('02011105700000488437551130055335 Jalaludin Bin Jalil 0000025004363 ',29,4) as newIdx,
SUBSTRING('02011105700000488437551130055335 Jalaludin Bin Jalil 0000025004363 ',52,40) as myName[/code]
4. Each following row is read the same way.
5. After all the content is read, the result is as follows:
[code="sql"]560816-10-5987 SAMAT BIN SANI
551130-05-5335 JALALUDIN BIN JALIL
590826-05-5028 MUKARRAMAH BT IBRAHIM
550317-05-5242 FATIMAH BT ISMAIL
540301-04-5163 ABU MANSOR BIN HJ MOHD NOR[/code]
Is it possible for SQL Server 2008 R2 to read a text file? If yes, I really need help. Please help me, I'm stuck.
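One way to do this entirely in T-SQL is to BULK INSERT each whole line into a one-column staging table and then apply the same SUBSTRING logic to that column. This is a sketch: the file path is made up, FIRSTROW = 2 skips the header row mentioned in point 1, and it assumes the fixed-width records contain no tab characters (the substring positions are taken from the post and must match the real layout):
[code="sql"]CREATE TABLE #raw (line varchar(200));

BULK INSERT #raw
FROM 'C:\import\AG-Persekutuan-Bln-09-2012.txt'   -- hypothetical path to the attached file
WITH (ROWTERMINATOR = '\n', FIRSTROW = 2);

SELECT SUBSTRING(line, 21, 6) + '-' + SUBSTRING(line, 27, 2) + '-' + SUBSTRING(line, 29, 4) AS newIdx,
       SUBSTRING(line, 52, 40) AS myName
FROM #raw;[/code]
Because the default FIELDTERMINATOR (a tab) does not appear in the file, each record lands unchanged in the single line column.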
Not able to find historical data using SQL Server Profiler in SQL Server 2008
Posted: 19 Mar 2013 05:10 PM PDT
Hello All, I would like to capture the script that was run by a user yesterday between particular times, but somehow I am not able to get the desired information. Can anyone please tell me if I am missing anything?
Template: Standard (no option with Default)
LoginName: Name of user
StartTime: Greater than > 2013-03-18 06:00:00.000
EndTime: Less than < 2013-03-18 06:30:00.000
Please let me know if more information is required.
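Worth noting as general background (not from the post): Profiler can only show events captured by a trace that was already running at the time, so a trace started today cannot reveal yesterday's statements. The default trace is always running, but it records only a limited set of events (DDL, configuration changes, warnings), not ordinary query text. A sketch of reading it for the window in question:
[code="sql"]SELECT t.StartTime, t.LoginName, t.ObjectName, t.TextData
FROM sys.traces AS st
CROSS APPLY sys.fn_trace_gettable(st.path, DEFAULT) AS t
WHERE st.is_default = 1
  AND t.StartTime >= '2013-03-18 06:00:00'
  AND t.StartTime <  '2013-03-18 06:30:00'
ORDER BY t.StartTime;[/code]
If ad-hoc statements need to be available historically, a server-side trace (or, in later versions, an Extended Events session) has to be set up in advance.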
Data not inserting
Posted: 16 Mar 2013 12:20 AM PDT
Hi, I have written a query to check the insertion, but it's not working and I'm not getting any error either:
[code="sql"]Declare @valuebased varchar(13)='0000000000'
Declare @valueDiscount varchar(13)='0000000000'
Declare @ProductDiscount varchar(13)='0000000000'
Declare @IsBooklet BIT
Declare @vouchertypeid int, @voucherno varchar(13), @denomination int, @createddate datetime, @expirydate datetime, @voucherstatus int, @quantity int, @isbookletid bit
If @IsBooklet = 1
BEGIN
    DECLARE @count int
    SET @count = 1
    WHILE (@count < 10)
    BEGIN
        SET @voucherno = 'VVB'+CONVERT(varchar(10),MAX(@valuebased)+1)
        INSERT INTO GV_Voucher (VoucherTypeId,VoucherNo, Denomination, CreatedDate, ExpiryDate, VoucherStatusId, TransactionID, Quantity, IsBooklet)
        Values (1,@voucherno,10,GETDATE(),GETDATE(),1,'a1',12,0)
        SET @valuebased = @valuebased + 1
        SET @count = @count + 1
    END
END[/code]
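Two things stand out (offered as a guess, since the rest of the batch is not shown): @IsBooklet is declared but never assigned, so it is NULL and the IF block never executes, and MAX() cannot be applied to a scalar variable in a SET statement. A sketch of how the loop might look with those issues removed; the zero-padded voucher-number format is an assumption:
[code="sql"]DECLARE @IsBooklet bit = 1,          -- was never set in the original, so the IF never ran
        @valuebased int = 0,
        @voucherno varchar(13),
        @count int = 1;

IF @IsBooklet = 1
BEGIN
    WHILE @count < 10
    BEGIN
        SET @valuebased = @valuebased + 1;
        SET @voucherno = 'VVB' + RIGHT('0000000000' + CONVERT(varchar(10), @valuebased), 10);

        INSERT INTO GV_Voucher (VoucherTypeId, VoucherNo, Denomination, CreatedDate, ExpiryDate,
                                VoucherStatusId, TransactionID, Quantity, IsBooklet)
        VALUES (1, @voucherno, 10, GETDATE(), GETDATE(), 1, 'a1', 12, 0);

        SET @count = @count + 1;
    END
END[/code]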
Real time live stats
Posted: 19 Mar 2013 03:55 PM PDT
Icecube has now upgraded our statistics on live streaming. Now you can even track how many people are viewing your live broadcast in real time. You can, among other things, see how many viewers there are at any one time, how many in total, how long they watch, where they are from, and more. Please contact us for a test.
SQL Script Help - Need to capture a single transaction from many possibilities
Posted: 19 Mar 2013 01:44 PM PDT
Here is the DDL:
[code="sql"]IF EXISTS ( SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[XX_TEMP_AR_INSERT]') AND type IN ( N'U' ) )
    DROP TABLE [dbo].[XX_TEMP_AR_INSERT]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[XX_TEMP_AR_INSERT] (
    [SYS_ID] [varchar](32) NULL,
    [PAYER_ID] [varchar](15) NULL,
    [AMOUNT] [decimal](12, 2) NULL,
    [BILLING_SITE_ID] [varchar](15) NULL,
    [TRANSFER_ID] [varchar](32) NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
INSERT INTO [dbo].[XX_TEMP_AR_INSERT] ([SYS_ID], [PAYER_ID], [AMOUNT], [BILLING_SITE_ID], [TRANSFER_ID]) VALUES ('DA6773113801A85B803C0F3FAC1D1A8D', '0466', '-100.78', '400', '348C701138017AAD2203767EAC1B1238')
INSERT INTO [dbo].[XX_TEMP_AR_INSERT] ([SYS_ID], [PAYER_ID], [AMOUNT], [BILLING_SITE_ID], [TRANSFER_ID]) VALUES ('FA6773113801D9A6833C0F24AC1D1A8D', 'SELF', '100.78', '400 ', '348C701138017AAD2203767EAC1B1238')
INSERT INTO [dbo].[XX_TEMP_AR_INSERT] ([SYS_ID], [PAYER_ID], [AMOUNT], [BILLING_SITE_ID], [TRANSFER_ID]) VALUES ('8411E299A994420681210E3AB07E8ABF', 'SELF', '-100.78', '400', '2F525D70E0B74634A82E26983E3EBCA9')
INSERT INTO [dbo].[XX_TEMP_AR_INSERT] ([SYS_ID], [PAYER_ID], [AMOUNT], [BILLING_SITE_ID], [TRANSFER_ID]) VALUES ('42FAE87AC9774132ADDA13570402AC1C', 'SELF', '100.78', '900', '2F525D70E0B74634A82E26983E3EBCA9')
INSERT INTO [dbo].[XX_TEMP_AR_INSERT] ([SYS_ID], [PAYER_ID], [AMOUNT], [BILLING_SITE_ID], [TRANSFER_ID]) VALUES ('59BBC4473A013DCEAD3009B4AC1D1A8C', '0466', '-400.49', '400', '8620BF473A01AFE3BE03AF49AC1B1339')
INSERT INTO [dbo].[XX_TEMP_AR_INSERT] ([SYS_ID], [PAYER_ID], [AMOUNT], [BILLING_SITE_ID], [TRANSFER_ID]) VALUES ('88BBC4473A012F73B0300949AC1D1A8C', 'SELF', '400.49', '400', '8620BF473A01AFE3BE03AF49AC1B1339')
INSERT INTO [dbo].[XX_TEMP_AR_INSERT] ([SYS_ID], [PAYER_ID], [AMOUNT], [BILLING_SITE_ID], [TRANSFER_ID]) VALUES ('B84A37A6E93B4882B6289C016EC47868', 'SELF', '-400.49', '400', '9EEFA7DEE2CA4318B7B140148E111293')
INSERT INTO [dbo].[XX_TEMP_AR_INSERT] ([SYS_ID], [PAYER_ID], [AMOUNT], [BILLING_SITE_ID], [TRANSFER_ID]) VALUES ('E49651E5715046269AF023332CB50C40', 'SELF', '400.49', '900', '9EEFA7DEE2CA4318B7B140148E111293')
INSERT INTO [dbo].[XX_TEMP_AR_INSERT] ([SYS_ID], [PAYER_ID], [AMOUNT], [BILLING_SITE_ID], [TRANSFER_ID]) VALUES ('758237E78D5A4458A4DD6B2BAD3F602D', 'SELF', '501.27', '901', '28F880EDC0D148E9B2C3393161CA509A')
INSERT INTO [dbo].[XX_TEMP_AR_INSERT] ([SYS_ID], [PAYER_ID], [AMOUNT], [BILLING_SITE_ID], [TRANSFER_ID]) VALUES ('916E1F1F238E4EA48D1C31525C846372', 'SELF', '-501.27', '900', '28F880EDC0D148E9B2C3393161CA509A')[/code]
Here is the query that I'm working with:
[code="sql"]SELECT [JArd1].[PAYER_ID] AS TRANSFER_FROM,
       [JArd1].[AMOUNT],
       [JArd1].[SYS_ID],
       [JArd2].[PAYER_ID] AS TRANSFER_TO,
       [JArd2].[SYS_ID],
       [JArd2].[AMOUNT]
FROM [dbo].[XX_TEMP_AR_INSERT] AS Ard
JOIN ( SELECT [Ard2].[SYS_ID], [Ard2].[PAYER_ID], [Ard2].[TRANSFER_ID], [Ard2].[AMOUNT]
       FROM [dbo].[XX_TEMP_AR_INSERT] AS Ard2
       WHERE [Ard2].[AMOUNT] < 0.00 ) JArd1
    ON [Ard].[SYS_ID] = [JArd1].[SYS_ID]
JOIN ( SELECT [Ard3].[SYS_ID], [Ard3].[PAYER_ID], [Ard3].[TRANSFER_ID], [Ard3].[AMOUNT]
       FROM [dbo].[XX_TEMP_AR_INSERT] AS Ard3
       WHERE [Ard3].[AMOUNT] > 0.00 ) JArd2
    ON [JArd1].[TRANSFER_ID] = [JArd2].[TRANSFER_ID]
WHERE [JArd1].[PAYER_ID] <> [JArd2].[PAYER_ID][/code]
What I'm trying to capture is the last transaction to occur in XX_TEMP_AR_INSERT where the balance goes from payer "X" to payer "Y" and payer "X" <> payer "Y". When you run SELECT * FROM XX_TEMP_AR_INSERT, I want the query above to capture data from rows 5 & 6 only. Ignore rows 1 & 2 because they are the first occurrence of a transfer. These are the results that I want the query to return:
0466 -400.49 59BBC4473A013DCEAD3009B4AC1D1A8C SELF 88BBC4473A012F73B0300949AC1D1A8C 400.49
SSMS User Interface Tip to Freeze Panes.
Posted: 18 Mar 2013 08:39 PM PDT
Accidentally discovered this, and thought I should share it, as I don't think I've seen it mentioned before. In Management Studio, in a SQL window, if you click on the little rectangular block at the top right of the window (below the X to close) and pull down, the current window splits into panes, allowing you to look at 2 parts of the same procedure simultaneously. I don't expect it will change your life, but you never know!
table naming conventions
Posted: 19 Mar 2013 09:38 AM PDT
We are creating a new database maintaining addresses, and in the future it may grow to hold more. I have a question about the naming convention for database tables. We used to have a standard of giving a table a 3-letter prefix to group tables together functionally, like schSchoolProgram, schAssignment, appProcess, secUsers. Is this a good way to name tables? I find it is sometimes hard to mention a table - I have to spell out the first 3 characters when I talk to someone about it. For the new database, when creating tables, shall I just use plain names, like Address instead of addrAddress, and StudentHomeAddress instead of addrStudentHomeAddress? Also, for domain tables, shall I just use City and County instead of domCity and domCounty? Thanks
sys.dm_io_virtual_file_stats - Number of Read and Writes
Posted: 19 Mar 2013 03:57 AM PDT
I looked up http://msdn.microsoft.com/en-us/library/ms190326.aspx for details on sys.dm_io_virtual_file_stats and see the following descriptions:
num_of_reads - Number of reads issued on the file.
num_of_writes - Number of writes made on this file.
Is it OK to assume that the number of reads translates into the number of SELECT queries, and the number of writes translates into the number of INSERT queries? Thanks
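For context (general SQL Server behaviour, not from the post): these counters track physical I/O operations against each file, not statements. A SELECT answered entirely from the buffer cache issues no reads, and writes include checkpoints, lazy writes and log flushes as well as the effects of INSERT/UPDATE/DELETE. A sketch of a per-file breakdown:
[code="sql"]SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name                  AS logical_file_name,
       mf.type_desc             AS file_type,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.num_of_bytes_read,
       vfs.num_of_bytes_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY vfs.num_of_reads DESC;[/code]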
Transactional replication problem
Posted: 19 Mar 2013 07:49 AM PDT
Hi everyone, we encountered a strange error when we were trying to add a subscriber. The following steps were performed:
1. Publisher and distributor (2 different machines) were created
2. Publication was created
3. A few (12) subscribers were successfully added to the publication
Some time later, with everything running successfully, we tried to add 2 or more subscribers and got an error:
Message: The replication agent encountered a failure. See the previous job step history message or Replication Monitor for more information. The step failed.
Checking Replication Monitor just sends us to the job, and the job sends us back to Replication Monitor. Removing the newly added subscribers makes everything look good again. Any ideas on which settings we need to check? Thanks.
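When the job history and Replication Monitor point at each other, the distribution database often holds the underlying agent error. A sketch, assuming the distribution database uses the default name "distribution":
[code="sql"]SELECT TOP (20) time, error_code, error_text
FROM distribution.dbo.MSrepl_errors
ORDER BY time DESC;[/code]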
Doubt about Shrinking process, Rebuilding index and fragmentation
Posted: 19 Mar 2013 06:25 AM PDT
Hi, I know that shrinking a database increases fragmentation, so logically shrinking the data files would also result in fragmentation, but does shrinking the log file increase fragmentation too? I hope the answer is NO.
Second question: rebuilding an index reduces fragmentation but increases the size of the database. I have seen an increase in the size of the log file, but does the data file size increase as well? Again I hope the answer is No. Please correct me on this.
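For background (general behaviour, not a ruling on the post's exact scenario): index fragmentation lives in the data files, so shrinking the log does not fragment indexes, and an index rebuild builds a new copy of the index, so it needs free space in the data file and generates log, which is why both files can grow. Fragmentation before and after can be measured with a sketch like this:
[code="sql"]SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.index_type_desc,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;[/code]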
Update YTD number with the sum of MTD numbers
Posted: 19 Mar 2013 06:57 AM PDT
I have a table with budget data. The month-to-date number has been loaded, and I need to update the YTD value. The October YTD number would be the same as the MTD; I've got that already. The tricky part is that November's YTD has to be October's YTD plus November's MTD, December's YTD has to be November's YTD plus December's MTD, and so on. Any thoughts?
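On SQL Server 2008 there is no windowed SUM ... OVER (ORDER BY ...), so one common pattern is a correlated running total. This is a sketch only: dbo.Budget and its columns are invented names, and FiscalMonth is assumed to be a number where October = 1 so the running sum restarts each fiscal year:
[code="sql"]UPDATE b
SET b.YTD = (SELECT SUM(b2.MTD)
             FROM dbo.Budget AS b2
             WHERE b2.FiscalYear  = b.FiscalYear
               AND b2.FiscalMonth <= b.FiscalMonth)
FROM dbo.Budget AS b;[/code]
On a large table this correlated subquery scans repeatedly, so an ordered loop, or the windowed SUM available in later SQL Server versions, would scale better.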
Substring with IF or CASE statement
Posted: 19 Mar 2013 06:02 AM PDT
Hi, I am trying to import into SQL Server 2008 R2 a list of strings such as:
=KET+N.207-13-F4001
=KET+DG014-13-F4011
=KET+RE002-36-MV009
I need to split the strings so they are inserted into separate fields. One of the substring statements is:
SUBSTRING(xlCode; 15; 2) [if the second position is a number, and]
SUBSTRING(xlCode; 15; 1) [if the second position is anything else]
AS GroupNo
My experience with T-SQL is just not enough to figure this one out. Can you tell me how the statement should be set up? Thanks in advance for any help.
Maarten
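A sketch of how that condition can be written with CASE (T-SQL uses commas rather than semicolons in SUBSTRING). The source table dbo.ImportedCodes is a made-up name, and "second position" is read here as character 16 of xlCode, i.e. the second character of the two-character slice starting at position 15:
[code="sql"]SELECT xlCode,
       CASE WHEN SUBSTRING(xlCode, 16, 1) LIKE '[0-9]'
            THEN SUBSTRING(xlCode, 15, 2)
            ELSE SUBSTRING(xlCode, 15, 1)
       END AS GroupNo
FROM dbo.ImportedCodes;[/code]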
Can't get rid of Key Lookup in execution plan
Posted: 19 Mar 2013 04:42 AM PDT
I have a query that has a very costly Key Lookup step in its execution plan. It looks like this:
[code="sql"]SELECT B.BenefitID, PW.WaiverReason, EW.DateWaived, EW.DateEntered, CBA.OrderID
FROM EmployeeWaivers EW
JOIN CompanyBenefitAssoc CBA ON CBA.BenefitID = EW.BenefitID AND CBA.CompanyID = YYYY
LEFT JOIN PlanWaivers PW ON PW.WaiverID = EW.WaiverID
LEFT JOIN Benefits B ON B.BenefitID = EW.BenefitID
WHERE EW.EmployeeID = XXXX
  AND EW.Overwritten = 0
ORDER BY CBA.OrderID, PW.WaiverReason, EW.DateEntered[/code]
The Key Lookup is on the EmployeeWaivers table. I created the following covering index:
[code="sql"]CREATE NONCLUSTERED INDEX [IX_EmployeeWaivers_BenefitID_WaiverID_EmployeeID_Overwritten_incl_DateWaived_DateEntered]
ON [dbo].[EmployeeWaivers]
(
    [BenefitID] ASC,
    [WaiverID] ASC,
    [EmployeeID] ASC,
    [Overwritten] ASC
)
INCLUDE ( [DateWaived], [DateEntered])[/code]
I still have the Key Lookup as part of the execution plan. What did I miss?
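One thing to try (a suggestion, not a verified fix for this plan): with BenefitID and WaiverID leading the key, the predicates on EmployeeID and Overwritten cannot be used for a seek, so the optimizer may still prefer another index plus a lookup. The Key Lookup operator's output list in the plan shows exactly which columns are being fetched; an index keyed for the WHERE clause, with the remaining referenced columns included, often removes it:
[code="sql"]CREATE NONCLUSTERED INDEX IX_EmployeeWaivers_EmployeeID_Overwritten
ON dbo.EmployeeWaivers (EmployeeID, Overwritten)
INCLUDE (BenefitID, WaiverID, DateWaived, DateEntered);[/code]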
Problem restoring database backup file on secondary server
Posted: 19 Mar 2013 06:41 AM PDT
Hi, for the last two days at one of our client sites we have been trying to figure out why it is taking two days to restore database backup files on the secondary server. The restore process is still going on and is stuck at 98%, and there are no other processes running on the secondary server. Using sp_who2 I could find only two LastWaitTypes: BACKUPTHREAD and BACKUPIO. How should I proceed with the analysis? I mean, does the disk have some bottleneck problem? The database we are trying to restore has filegroups; does that have anything to do with it?
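A sketch for watching the restore's own progress estimate and current wait from the DMVs (generic monitoring, not specific to the post):
[code="sql"]SELECT r.session_id,
       r.command,
       r.percent_complete,
       r.wait_type,
       r.wait_time,
       DATEADD(SECOND, r.estimated_completion_time / 1000, GETDATE()) AS estimated_finish_time
FROM sys.dm_exec_requests AS r
WHERE r.command IN ('RESTORE DATABASE', 'BACKUP DATABASE');[/code]
If percent_complete sits at 98% while the wait stays on BACKUPIO/BACKUPTHREAD, the time is usually going into writing out the remaining data or into the recovery phase, so disk throughput on the destination drives is the first thing to check.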
how to make a stored procedure return a value when there are no available values
Posted: 19 Mar 2013 01:04 AM PDT
Here is the script for a stored procedure that I have. I want the stored procedure to return a value of 1 if the result is NULL or if there are no values for it to return.
[code="sql"]Create PROCEDURE [dbo].[GetExtractSiteExtractFileBatchPreviousSuccessInd]
    @BatchStatusCd NVARCHAR(5)
   ,@ExtractSiteCd NVARCHAR (10)
   ,@ExtractAsOfDate DATETIME
AS
SELECT case when @ExtractAsOfDate = isnull(convert(varchar(10),ExtractAsOfDate,120),'1900-01-01') then 0
            when @ExtractAsOfDate <> isnull(convert(varchar(10),ExtractAsOfDate,120),'1900-01-01') then 1
            else 1
       end as ExtractSiteExtractFileBatchPreviousSuccessInd
FROM ExtractSiteExtractFileBatch
WHERE BatchStatusCd = @BatchStatusCd
  and ExtractSiteCd = @ExtractSiteCd
  AND ExtractAsOfDate = @ExtractAsOfDate
go[/code]
Here is the exec statement for the stored procedure; I want it to return a value of 1 because the value does not exist in the sample data.
[code="sql"]EXEC @return_value = [dbo].[GetExtractSiteExtractFileBatchPreviousSuccessInd]
    @BatchStatusCd = N'sb',
    @ExtractSiteCd = N'CEPOK',
    @ExtractAsOfDate = N'2013-03-18 00:00:00.000'
SELECT 'Return Value' = @return_value
GO[/code]
Below is the sample data:
[code="sql"]SELECT '[ExtractSiteExtractFileBatchID]','[ExtractSiteCd]','[ExtractAsOfDate]','[BatchStatusCd]','[InsertedDate]','[UpdatedDate]'
union all select '1442','CEPOK','2012-07-12 00:00:00.000','SB','2012-07-13 06:04:41.843','2012-07-13 07:30:29.660'
union all select '1443','CFTRT','2012-07-12 00:00:00.000','SB','2012-07-13 07:30:30.077','2012-07-13 08:29:40.433'
union all select '5472','CEPED','2012-07-12 00:00:00.000','SBR','2013-03-18 17:34:12.047','2013-03-18 17:34:12.047'[/code]
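A sketch of one way to get the 1 back when the WHERE clause matches nothing, keeping the poster's logic otherwise intact. Note that if the caller only reads @return_value, the procedure would also need a RETURN statement, because a result set and a return value are different things:
[code="sql"]IF NOT EXISTS (SELECT 1
               FROM ExtractSiteExtractFileBatch
               WHERE BatchStatusCd = @BatchStatusCd
                 AND ExtractSiteCd = @ExtractSiteCd
                 AND ExtractAsOfDate = @ExtractAsOfDate)
    SELECT 1 AS ExtractSiteExtractFileBatchPreviousSuccessInd;
ELSE
    SELECT CASE WHEN @ExtractAsOfDate = ExtractAsOfDate THEN 0 ELSE 1 END
               AS ExtractSiteExtractFileBatchPreviousSuccessInd
    FROM ExtractSiteExtractFileBatch
    WHERE BatchStatusCd = @BatchStatusCd
      AND ExtractSiteCd = @ExtractSiteCd
      AND ExtractAsOfDate = @ExtractAsOfDate;[/code]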
divide the percentage equally for every user - SQL Server stored procedure
Posted: 19 Mar 2013 03:55 AM PDT
I have written a stored procedure. As of now it inserts 10 percent for every user, as below:
[code="plain"]orderid processid uid ordervalue perwet(percent weightage)
1       1         1   10000      10
1       1         2   10000      10
1       1         3   10000      10[/code]
I want that if more than 1 user is involved in 1 process on the same order, it should divide the percent equally. That means it must insert like this:
[code="plain"]orderid processid uid ordervalue perwet(percent weightage)
1       1         1   10000      3.33
1       1         2   10000      3.33
1       1         3   10000      3.33[/code]
Structure of tables:
[code="sql"]CREATE TABLE [dbo].[temp_calpoints1](
    [orderid] [int] NULL,
    [processid] [int] NULL,
    [uid] [int] NULL,
    [ordervalue] [bigint] NULL,
    [perwet] [int] NULL
) ON [PRIMARY]

CREATE TABLE [dbo].[process](
    [processid] [int] NULL,
    [processdesc] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [perwet] [int] NULL
) ON [PRIMARY][/code]
The stored procedure:
[code="sql"]set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
go
ALTER PROCEDURE [dbo].[CalculatePointsAllorder]
@fromDate varchar(10), @toDate varchar(10)
AS
BEGIN
print @fromdate;
print @todate;
delete from temp_calpoints1;
delete from temp_users;
delete from temp_OrderMaster;
insert into temp_users
SELECT uid FROM UserInfo where exists (select * from OrderMaster where UserInfo.uid = OrderMaster.uid);
insert into temp_OrderMaster
select * from OrderMaster where orderlogindate between @fromDate and @toDate

DECLARE @t_orderid int
DECLARE @t_processid int
DECLARE @t_uid int
DECLARE @t_ordervalue bigint

DECLARE db_cursor CURSOR FOR SELECT orderid FROM temp_OrderMaster;
--select orderid from OrderMaster where CONVERT(VARCHAR(10),orderlogindate,110) between
--@fromDate and @toDate;
--where orderlogindate
--between @fromDate and @toDate;
DECLARE db_cursor1 CURSOR FOR SELECT processid FROM process;
--DECLARE db_cursor2 CURSOR FOR select uid from temp_users;
DECLARE db_cursor2 CURSOR FOR select uid from userinfo;

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @t_orderid
WHILE @@FETCH_STATUS = 0
BEGIN
    --print 'order '
    -- PRINT @t_orderid;
    OPEN db_cursor1
    FETCH NEXT FROM db_cursor1 INTO @t_processid
    WHILE @@FETCH_STATUS = 0
    BEGIN
        --print 'process*******'
        --PRINT @t_processid;
        OPEN db_cursor2
        FETCH NEXT FROM db_cursor2 INTO @t_uid
        WHILE @@FETCH_STATUS = 0
        BEGIN
            --print '***'
            --print 'user'
            --print @t_uid
            --print @t_processid
            --print '***'
            if @t_processid = 1
            begin
                --print 'in processid '
                --print @t_uid
                --print @t_processid
                --print '***'
                set @t_ordervalue = 0;
                --insert into temp_calpoints1 (ordervalue) values(
                select @t_ordervalue = ordervalue
                -- @t_uid = b.uid
                from temp_OrderMaster a,EnquiryMaster b where
                a.EnquiryId = b.enquiryid and b.uid = @t_uid and a.orderid = @t_orderid
                --)
                if @t_ordervalue <> 0
                    insert into temp_calpoints1 (orderid,processid,uid,ordervalue) values(@t_orderid,@t_processid,@t_uid,@t_ordervalue)
            end
            if @t_processid = 2
            begin
                set @t_ordervalue = 0;
                --insert into temp_calpoints1 (ordervalue) values(
                select @t_ordervalue = ordervalue
                --@t_uid = b.uid
                from temp_OrderMaster a,Requirement b where
                a.requirementid = b.RequirementID and b.uid = @t_uid and a.orderid = @t_orderid
                --)
                if @t_ordervalue <> 0
                    insert into temp_calpoints1 (orderid,processid,uid,ordervalue) values(@t_orderid,@t_processid,@t_uid,@t_ordervalue)
            end
            if @t_processid = 3
            begin
                set @t_ordervalue = 0;
                --insert into temp_calpoints1 (ordervalue) values(
                select @t_ordervalue = ordervalue
                --,@t_uid = b.uid
                from temp_OrderMaster a,Proposal b where
                a.proposalid = b.proposalid and b.uid = @t_uid and a.orderid = @t_orderid
                --)
                if @t_ordervalue <> 0
                    insert into temp_calpoints1 (orderid,processid,uid,ordervalue) values(@t_orderid,@t_processid,@t_uid,@t_ordervalue)
            end
            if @t_processid = 4
            begin
                set @t_ordervalue = 0;
                --insert into temp_calpoints1 (ordervalue) values(
                select @t_ordervalue = ordervalue
                --,@t_uid = uid
                from temp_OrderMaster where
                orderid = @t_orderid and uid = @t_uid
                --)
                if @t_ordervalue <> 0
                    insert into temp_calpoints1 (orderid,processid,uid,ordervalue) values(@t_orderid,@t_processid,@t_uid,@t_ordervalue)
            end
            if @t_processid = 5
            begin
                set @t_ordervalue = 0;
                --insert into temp_calpoints1 (ordervalue) values(
                select @t_ordervalue = ordervalue
                --,@t_uid = b.uid
                from temp_OrderMaster a,OrderVendor b where b.orderid = @t_orderid and b.uid = @t_uid
                --)
                if @t_ordervalue <> 0
                    insert into temp_calpoints1 (orderid,processid,uid,ordervalue) values(@t_orderid,@t_processid,@t_uid,@t_ordervalue)
            end
            if @t_processid = 6
            begin
                set @t_ordervalue = 0;
                --set @t_uid = 0;
                print 'in processid 6 '
                print @t_uid;
                PRINT @t_orderid;
                --insert into temp_calpoints1 (ordervalue) values(
                select @t_ordervalue = ordervalue
                -- ,@t_uid = b.uid
                from temp_OrderMaster a,CollectionFollowUp b where b.orderid = @t_orderid and b.uid = @t_uid
                --)
                print @t_ordervalue;
                if @t_ordervalue <> 0
                    insert into temp_calpoints1 (orderid,processid,uid,ordervalue) values(@t_orderid,@t_processid,@t_uid,@t_ordervalue)
            end
            FETCH NEXT FROM db_cursor2 INTO @t_uid
        END
        CLOSE db_cursor2
        FETCH NEXT FROM db_cursor1 INTO @t_processid
    END
    CLOSE db_cursor1
    FETCH NEXT FROM db_cursor INTO @t_orderid
END
CLOSE db_cursor

update temp_calpoints1 set perwet = (select perwet from process where processid = temp_calpoints1.processid)
END

--update query set on count of group by clause from another table in sql
--select count(*) from temp_calpoints1 group by processid,orderid
--EXEC [CalculatePointsAllorder] @fromDate = '2012-09-10' ,@toDate = '2013-11-13'
--select * from temp_calpoints1 order by orderid,processid,uid
--select * from temp_OrderMaster[/code]
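A sketch of the even split itself, as a set-based step that could replace or follow the final UPDATE in the procedure. The window COUNT works on SQL Server 2008; note that temp_calpoints1.perwet is declared int, so it would need to become decimal to hold values like 3.33:
[code="sql"];WITH t AS (
    SELECT orderid, processid, uid, perwet,
           COUNT(*) OVER (PARTITION BY orderid, processid) AS users_in_process
    FROM temp_calpoints1
)
UPDATE t
SET t.perwet = p.perwet * 1.0 / t.users_in_process
FROM t
JOIN process AS p
  ON p.processid = t.processid;[/code]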
Bulk Insert via VBA - Operating system error 5
Posted: 19 Mar 2013 03:58 AM PDT
Hi, I hope this is the correct location for my question. I have written a simple piece of VBA to convert some data and save it as a csv file, which is then to be bulk inserted into a database. I get the error 'Cannot bulk load because file "filename" could not be opened. Operating system error code 5 (failed to retrieve text for this error. Reason: 15105).' When I copy the SQL string into SQL Studio it completes correctly, and I have fully qualified the file name and location. I have seen similar issues elsewhere on the forum which people put down to permissions; given that the studio can complete the code without trouble, that seems odd to me. Any ideas?
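Worth remembering in this situation (general behaviour, not a diagnosis of the poster's setup): BULK INSERT opens the file from the SQL Server machine under the service account, or under the caller's delegated credentials with a Windows connection, so a path that works from one client session can be unreadable when the command arrives via another login or connection type. A quick way to test whether the server itself can see the file is the undocumented but long-standing xp_fileexist procedure; the path below is a placeholder:
[code="sql"]-- returns 1 in the "File Exists" column when the SQL Server service can reach the file
EXEC master.dbo.xp_fileexist 'C:\exports\myfile.csv';[/code]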
SQL to divide one row by another row based on formula or rule.
Posted: 19 Mar 2013 03:35 AM PDT
Hi All, I have the table below and need to divide one row by another row based on a formula.
[code="plain"]Date      Name ID Result
3/18/2013 A    1  5
3/18/2013 B    2  16
3/18/2013 C    3  21
3/18/2013 D    4  11
3/18/2013 E    5  45
3/18/2013 F    6  22
3/18/2013 G    7  5
3/19/2013 A    1  2
3/19/2013 B    2  7
3/19/2013 C    3  15
3/19/2013 D    4  9
3/19/2013 E    5  19
3/19/2013 F    6  12
3/19/2013 G    7  3[/code]
Please suggest how to proceed to get the result below. The Formula column is used to get the Result(%) column.
[code="plain"]Formula  Date      Result(%)
B/A      3/19/2013 0.285714286
D/C      3/19/2013 0.6
F/(E-G)  3/19/2013 0.75
B/A      3/18/2013 0.3125
D/C      3/19/2013 0.523809524
F/(E-G)  3/19/2013 0.55[/code]
Thanks in advance.
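A sketch of one way to approach it: pivot the Result values into one column per Name for each Date, then express each formula as a row. The table name dbo.Results is an assumption, and the numerators/denominators below follow the formula labels as written; the expected figures in the post (for example 0.2857 = 2/7) suggest the ratio may actually be wanted the other way around, so flip the expressions if needed:
[code="sql"];WITH p AS (
    SELECT [Date],
           MAX(CASE WHEN Name = 'A' THEN Result END) AS A,
           MAX(CASE WHEN Name = 'B' THEN Result END) AS B,
           MAX(CASE WHEN Name = 'C' THEN Result END) AS C,
           MAX(CASE WHEN Name = 'D' THEN Result END) AS D,
           MAX(CASE WHEN Name = 'E' THEN Result END) AS E,
           MAX(CASE WHEN Name = 'F' THEN Result END) AS F,
           MAX(CASE WHEN Name = 'G' THEN Result END) AS G
    FROM dbo.Results
    GROUP BY [Date]
)
SELECT Formula = 'B/A',    [Date], [Result(%)] = B * 1.0 / A        FROM p
UNION ALL
SELECT 'D/C',              [Date], D * 1.0 / C                      FROM p
UNION ALL
SELECT 'F/(E-G)',          [Date], F * 1.0 / (E - G)                FROM p;[/code]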
case statement in count?
Posted: 19 Mar 2013 03:55 AM PDT
Hi, I'm trying to do the following:
SELECT Date, Year,
CASE DATEPART(mm, date) WHEN 07 THEN 6 END AS JulyMaxhours,
KindOfDay, Month,
[b]COUNT(CASE DATEPART(mm, date) WHEN 07 )[/b]
FROM Auxiliary.Calendar
WHERE (KindOfDay = 'Weekday') AND (Year = 2013) AND (Month = 7)
What I want in the CASE is the count of days where the month = 07; even better would be if I can multiply that count by 6.
Thanks, Joe
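A sketch of the CASE-inside-COUNT pattern against the poster's Auxiliary.Calendar table: COUNT ignores NULLs, so the CASE only has to produce a non-NULL value for the rows that should be counted, and the result can then be multiplied by 6:
[code="sql"]SELECT COUNT(CASE WHEN DATEPART(mm, [Date]) = 7 THEN 1 END)     AS JulyWeekdays,
       COUNT(CASE WHEN DATEPART(mm, [Date]) = 7 THEN 1 END) * 6 AS JulyMaxHours
FROM Auxiliary.Calendar
WHERE KindOfDay = 'Weekday'
  AND [Year] = 2013;[/code]
Since the WHERE clause already restricts rows to weekdays in 2013, keeping Month out of it lets the CASE do the July filtering; with Month = 7 in the WHERE clause, a plain COUNT(*) would do the same job.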
Combining Two Very Large Tables
Posted: 19 Mar 2013 02:54 AM PDT
I have two tables that each contain approximately 90 million records. The second table represents an "update" of the first table; however, no unique identifier exists within either population to identify which records in the new table are added or changed from the previous table. Each table contains the following fields:
1. First Name
2. Middle Name
3. Last Name
4. Social Security Number
5. Birth Date
6. Death Date
7. ID (unique to the table, but not across tables)
The SSN is unique in both tables; however, an SSN may be revised for a distinct record from one table to the next based on updated information, etc. that is received. We currently have the following indexes assigned to each table:
1. Social Security Number (Clustered Index)
2. First Name, Middle Name, Last Name, Birth Date, ID (Non-Clustered Indexes)
I am trying to identify the most efficient way to:
1. Combine the tables, and;
2. De-duplicate records within the combined table based on a combination of different fields (e.g., same SSN; same first name, last name, birth date; etc.)
We've accomplished #1 in the past by partitioning each table into chunks based on SSN and then unioning those chunks. For example:
[code="sql"]SELECT Social_Security_Number, Last_Name, First_Name, Middle_Name, Death_Date, Birth_Date
INTO ##tbl_20110901_FullFile_00_04
FROM tbl_20110901_FullFile
WHERE LEFT(Social_Security_Number,2) IN('00','01','02','03','04')

SELECT Social_Security_Number, Last_Name, First_Name, Middle_Name, Death_Date, Birth_Date
INTO ##tbl_20120301_FullFile_00_04
FROM tbl_20120301_FullFile
WHERE LEFT(Social_Security_Number,2) IN('00','01','02','03','04')

SELECT * INTO ##tbl_Combined_00_04 FROM ##tbl_20110901_FullFile_00_04
UNION
SELECT * FROM ##tbl_20120301_FullFile_00_04[/code]
Is there a better way to accomplish this?
Thanks, James
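A sketch of doing the combine and the SSN-level de-duplication in one pass, without pre-splitting by SSN range. The ORDER BY src DESC keeps the row from the newer table when an SSN exists in both; the table and column names are copied from the post, and any further de-duplication rules (same name plus birth date, and so on) would need their own PARTITION BY:
[code="sql"];WITH combined AS (
    SELECT Social_Security_Number, Last_Name, First_Name, Middle_Name, Death_Date, Birth_Date,
           1 AS src
    FROM tbl_20110901_FullFile
    UNION ALL
    SELECT Social_Security_Number, Last_Name, First_Name, Middle_Name, Death_Date, Birth_Date,
           2 AS src
    FROM tbl_20120301_FullFile
), ranked AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY Social_Security_Number ORDER BY src DESC) AS rn
    FROM combined
)
SELECT Social_Security_Number, Last_Name, First_Name, Middle_Name, Death_Date, Birth_Date
INTO dbo.tbl_Combined
FROM ranked
WHERE rn = 1;[/code]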
cache plan
Posted: 19 Mar 2013 01:39 AM PDT
What is a query cache plan, and which DMVs can be used to troubleshoot poor performance of a query in SQL 2005/2008?
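A sketch of the usual starting point: sys.dm_exec_query_stats holds per-plan execution statistics for everything in the plan cache, and it can be joined to the statement text and the cached plan. Other DMVs commonly used alongside it are sys.dm_exec_requests, sys.dm_os_wait_stats and sys.dm_db_index_usage_stats.
[code="sql"]SELECT TOP (10)
       qs.execution_count,
       qs.total_worker_time / qs.execution_count  AS avg_cpu_microseconds,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microseconds,
       st.text       AS statement_text,
       qp.query_plan AS cached_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY avg_cpu_microseconds DESC;[/code]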
SSAS Count and not SUM
Posted: 19 Mar 2013 03:49 AM PDT
Hi, hopefully this will be an easy one for the SSAS heads out there. I am new to cubes; I have set up some dimensions and measures and I am browsing my cube nicely. The problem is that it is summing the values and I would like a count instead. I can't see for the life of me where you change this to a count. Thanks in advance.
SP execution time.
Posted: 19 Mar 2013 02:14 AM PDT
How can I find out how long a particular SP takes to run?
SueTons.
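Two common options, shown as a sketch: SET STATISTICS TIME for a single manual run, and sys.dm_exec_procedure_stats (available from SQL Server 2008) for timings of cached procedures across all executions. The procedure name in the first part is hypothetical, and the elapsed-time columns are reported in microseconds:
[code="sql"]-- timing a single run in SSMS
SET STATISTICS TIME ON;
EXEC dbo.MyProcedure;   -- hypothetical procedure name
SET STATISTICS TIME OFF;

-- aggregate timings for procedures currently in the plan cache
SELECT DB_NAME(ps.database_id)                    AS database_name,
       OBJECT_NAME(ps.object_id, ps.database_id)  AS procedure_name,
       ps.execution_count,
       ps.total_elapsed_time / ps.execution_count AS avg_elapsed_microseconds,
       ps.last_elapsed_time                       AS last_elapsed_microseconds
FROM sys.dm_exec_procedure_stats AS ps
ORDER BY avg_elapsed_microseconds DESC;[/code]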