Tuesday, June 4, 2013

[SQL Server 2008 issues] Join Problem

Join Problem

Posted: 02 Jun 2013 09:27 PM PDT

my table is like this for attendance

[code="sql"]create table attendance(
CID int,
BID int,
date datetime,
eid int,
timein datetime,
timeout datetime
)[/code]

and my shift table is like this

[code="sql"]CREATE TABLE SHIFT(
CompanyID int,
BranchID int,
ShiftID int,
ShiftName varchar(50),
LongName varchar(50),
SType varchar(50),
TimeIn datetime,
TimeOut datetime,
LTime datetime,
HDTime datetime,
Night int,
TotalTime datetime
)[/code]

and this is my data for the shift table:

CompanyID: 1
BranchID: 1
ShiftID: 1
ShiftName: G
LongName: GENERAL SHIFT
SType: null
TimeIn: 1/23/2013 9:00:00 AM
TimeOut: 1/23/2013 6:00:00 PM
LTime: 1/23/2013 9:16:00 AM
HDTime: 1/23/2013 2:00:00 PM
Night: 0
TotalTime: 6/3/2013 9:00:00 AM

my query is like this

[code="sql"]select distinct Date, [Time in], [Time out],
CONVERT(varchar(8), DATEADD(ss, DATEDIFF(ss, [Time in], [Time out]), 0), 108) AS SpendTime,
CAST(ABS(540 - DATEDIFF(MINUTE, [Time in], [Time out])) / 60 as varchar) + ' hours : ' + -- Hours ( + ':' separator )
CAST(ABS(540 - DATEDIFF(MINUTE, [Time in], [Time out])) % 60 as varchar) + ' minutes' as excesstime,
CASE WHEN DATEDIFF(MINUTE, [Time in], [Time out]) >= 540 THEN 'Excess' ELSE 'Short' END ExcessShort
FROM trans t
Left outer join shift s on t.bid = s.bid and t.cid = s.cid
where employeecode = 26446[/code]

I want to replace the hard-coded 540 with the TotalTime column, because shift length is not always 9 hours -- some shifts are 8 hours and some are 10. Please help me out.
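A possible approach, sketched: since TotalTime is a datetime whose time-of-day portion encodes the shift length (9:00 AM = 9 hours = 540 minutes in the sample row), the minutes per shift can be derived and used wherever 540 appears. This assumes every SHIFT row stores its length that way; the CROSS APPLY alias is just for readability.

[code="sql"]select distinct Date, [Time in], [Time out],
       CONVERT(varchar(8), DATEADD(ss, DATEDIFF(ss, [Time in], [Time out]), 0), 108) AS SpendTime,
       CAST(ABS(x.ShiftMinutes - DATEDIFF(MINUTE, [Time in], [Time out])) / 60 as varchar) + ' hours : ' +
       CAST(ABS(x.ShiftMinutes - DATEDIFF(MINUTE, [Time in], [Time out])) % 60 as varchar) + ' minutes' as excesstime,
       CASE WHEN DATEDIFF(MINUTE, [Time in], [Time out]) >= x.ShiftMinutes THEN 'Excess' ELSE 'Short' END ExcessShort
FROM trans t
LEFT OUTER JOIN shift s ON t.bid = s.bid AND t.cid = s.cid
CROSS APPLY (SELECT DATEDIFF(MINUTE,
                             DATEADD(DAY, DATEDIFF(DAY, 0, s.TotalTime), 0), -- midnight of that day
                             s.TotalTime) AS ShiftMinutes) x                 -- minutes past midnight = shift length
WHERE employeecode = 26446[/code]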

system databases corruption

Posted: 03 Jun 2013 05:58 PM PDT

Hi all, if the system databases got corrupted and we don't have backups of the system databases, how do we handle this situation?
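Without backups, the documented recovery path is rebuilding the system databases from the installation media and then recreating logins, jobs, and server configuration by hand. A minimal sketch of the rebuild command (instance name, account, and password are placeholders):

[code]:: run from the SQL Server 2008 setup media
setup.exe /QUIET /ACTION=REBUILDDATABASE /INSTANCENAME=MSSQLSERVER /SQLSYSADMINACCOUNTS=MYDOMAIN\DBAGroup /SAPWD=StrongP@ssw0rd[/code]

User databases survive the rebuild on disk but must be re-attached afterwards, which is why taking regular master/msdb backups is the real fix.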

SID and PID

Posted: 03 Jun 2013 04:22 PM PDT

Hi, can anyone explain what SID and PID are, and what they are used for?
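Assuming SID here means the security identifier SQL Server stores for each login, and PID means the server process/session id (usually called SPID), a quick way to see both:

[code="sql"]-- SID: the security identifier tied to each login
SELECT name, sid FROM sys.server_principals WHERE type IN ('S','U','G');

-- SPID: the id of the current session/process
SELECT @@SPID AS my_session_id;
EXEC sp_who2;  -- lists all sessions with their SPIDs[/code]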

Find the error column and the row

Posted: 03 Jun 2013 05:58 PM PDT

Hi All, I am facing a problem with finding the exact row and column where an error occurred.

Scenario:
1. TableA has 300+ columns, all varchar datatype.
2. TableB has the exact same number of columns as TableA, but different datatypes.

TableB is to be populated from TableA. When I run a single INSERT statement, converting accordingly, it errors out with a data conversion error, and I am having trouble finding the exact record and column the error description refers to.

I have figured out a way to do it: add an identity column to TableB; after the rollback, the identity value is still incremented, so the next insert tells me the point of failure. But is there any way to do this without altering the table?

Here is sample code to work it out.

[code="sql"]create table varchartest(a varchar(30), b varchar(30))
create table varchartest_1(a varchar(30), b int)

insert into varchartest values ('xyz','1'),('abc','2'),('bcd','3'),('efg','4'),('erer','5'),('rerer','x'),('rerer','7')

-- alter table varchartest_1 add c int identity
-- insert into varchartest_1(a,b) select top 1 a,b from varchartest

select * from varchartest_1[/code]

Thanks
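On SQL Server 2008 there is no TRY_CONVERT, but for a numeric target column the offending source rows can usually be located directly with ISNUMERIC -- imperfect (it accepts things like '1e4' or '$1'), but a reasonable first pass:

[code="sql"]-- rows in the source whose column b will not convert cleanly to int
SELECT *
FROM varchartest
WHERE ISNUMERIC(b) = 0
   OR b LIKE '%[^0-9-]%';  -- belt and braces: anything besides digits and minus[/code]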

What are the uses of environment variables? Please explain with an example.

Posted: 03 Jun 2013 05:23 PM PDT

Hi, I am new to SSIS, but my package uses an Environment variable. Please explain it with some examples. Thanks, bal85
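In SSIS, an "Environment variable" package configuration maps a Windows environment variable onto a package property, so the same package can point at different servers on different machines. A hedged sketch -- the variable name and connection manager name below are made up:

[code]:: 1) define the variable on each machine (name is an example)
setx SSIS_SourceConn "Data Source=PRODSQL01;Initial Catalog=Staging;Integrated Security=SSPI;"

:: 2) in BIDS: SSIS > Package Configurations > add a configuration of type
::    "Environment variable", pick SSIS_SourceConn, and target
::    \Package.Connections[Source].Properties[ConnectionString]
:: at run time the package reads its connection string from the variable[/code]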

Database mail sending fails with SP_Send_DBmail.

Posted: 03 Jun 2013 05:27 PM PDT

Hi, I am stuck on the below problem.

I have two servers, ServerA and ServerB. On ServerA I have linked ServerB with:

[code="sql"]sp_addlinkedserver @server = 'serverB', @provider = 'SQLNCLI', @datasrc = 'ServerB', @srvproduct = ''[/code]

Now I am querying on ServerA and retrieving data from ServerB's table. The query is:

[code="sql"]EXEC msdb.dbo.sp_send_dbmail
    @recipients = 'myname@myDomain.com',
    @subject = 'MySubject',
    @body = 'This Is Test.',
    @body_format = 'Text',
    @profile_name = 'Profile1',
    @query = 'set nocount on
SELECT * FROM [ServerB].[Database].dbo.myTable AS MT
WHERE DATEADD(DAY,0,DATEDIFF(DAY,0,Date_Col)) = DATEADD(DAY,0,DATEDIFF(DAY,0,GETDATE()))',
    @execute_query_database = 'master',
    @query_result_separator = ' ',
    @attach_query_result_as_file = 1,
    @query_attachment_filename = 'myFile.csv'[/code]

I have created a proc on ServerA's master database and call this proc from a job. The job throws this error:

"Executed as user: ServerA\sqlAgent. Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. [SQLSTATE 28000] (Error 18452). The step failed."

Can anybody help in solving this? It would be a great help.

Thanks and Regards, Jignesh
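The failure is on the linked-server hop: the job runs under the Agent service account, and that Windows identity cannot be forwarded on to ServerB (the classic double-hop problem). One common workaround, sketched under the assumption that a SQL login exists on ServerB, is to map local logins to an explicit remote login:

[code="sql"]-- sketch: map every local login to a SQL login that exists on ServerB
-- ('ReportReader' and its password are placeholders)
EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = 'serverB',
    @useself     = 'FALSE',
    @locallogin  = NULL,
    @rmtuser     = 'ReportReader',
    @rmtpassword = '********';[/code]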

configuring alerts

Posted: 03 Jun 2013 05:12 PM PDT

I would like to configure alerts on all the SQL Servers for the below:

1. disk space on the server
2. log space used by all the databases
3. alert when a job fails
4. server shutdown/startup

Replies are appreciated.
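For item 2, SQL Agent alerts can fire on performance-counter conditions; a sketch (the operator name is a placeholder, and on a named instance the counter prefix becomes MSSQL$InstanceName instead of SQLServer):

[code="sql"]-- alert when any database's log is more than 80% full
EXEC msdb.dbo.sp_add_alert
    @name = N'Log > 80 pct',
    @performance_condition = N'SQLServer:Databases|Percent Log Used|_Total|>|80';

EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Log > 80 pct',
    @operator_name = N'DBA Team',   -- placeholder operator
    @notification_method = 1;       -- 1 = email[/code]

Job-failure notifications (item 3) are set per job on the job's Notifications page; disk space and shutdown/startup are usually better handled by OS-level monitoring.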

how to assign additional SQL Server administrators

Posted: 03 Jun 2013 12:26 AM PDT

When I ran SQLEXPRADV_x86_ENU.exe (under Windows' Administrator user account) to install SQL Server 2008 Express with Advanced Services, at the Database Engine Configuration step, under "Specify SQL Server administrators", I clicked "Add Current User" to assign NULL\Administrator as a SQL Server administrator. How can I now add my other Windows user accounts (both administrator-level and reduced-privilege accounts, if possible) as SQL Server administrators?
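Additional administrators are just logins in the sysadmin server role, so no reinstall is needed; a sketch with placeholder account names:

[code="sql"]-- create a login for the Windows account and add it to sysadmin
CREATE LOGIN [MYPC\OtherUser] FROM WINDOWS;
EXEC sp_addsrvrolemember @loginame = N'MYPC\OtherUser', @rolename = N'sysadmin';

-- for reduced-privilege accounts, grant narrower rights instead, e.g.
CREATE LOGIN [MYPC\LimitedUser] FROM WINDOWS;
EXEC sp_addsrvrolemember @loginame = N'MYPC\LimitedUser', @rolename = N'dbcreator';[/code]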

Why is CDC returning Insert/Delete when I UPDATE a row in a table?

Posted: 15 Mar 2013 06:11 AM PDT

This is my second post asking this question. Hopefully someone will respond this time. I am experimenting with using CDC to track user changes in our application database. So far I've done the following:

[code="sql"]-- ENABLE CDC ON DV_WRP_TEST
USE dv_wrp_test
GO
EXEC sys.sp_cdc_enable_db
GO

-- ENABLE CDC TRACKING ON THE AVA TABLE IN DV_WRP_TEST
USE dv_wrp_test
GO
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name = N'AVA',
    @role_name = NULL
GO

-- UPDATE A ROW IN THE AVA TABLE FROM SSMS
UPDATE AVA SET AvaDesc = 'Test3' WHERE AvaKey = 119

-- GET CDC RESULTS FOR CHANGES TO AVA TABLE
USE dv_wrp_test
GO
SELECT * FROM cdc.dbo_AVA_CT
GO

--RESULTS SHOW OPERATION 3 (BEFORE UPDATE) AND 4 (AFTER UPDATE) CORRECTLY
--__$start_lsn __$end_lsn __$seqval __$operation __$update_mask AvaKey AvaDesc AvaArrKey AvaSAPAppellationID
--0x0031E84F000000740008 NULL 0x0031E84F000000740002 3 0x02 119 Test2 6 NULL
--0x0031E84F000000740008 NULL 0x0031E84F000000740002 4 0x02 119 Test3 6 NULL[/code]

The results shown above are what I expect to see. My problem occurs when I use our application to update the same column in the same table. The vb.net application passes a table-valued parameter to a stored procedure which updates the table. Below is the creation script for the stored proc:

[code="sql"]SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
if exists (select * from sysobjects where id = object_id('dbo.spdv_AVAUpdate') and sysstat & 0xf = 4)
    drop procedure dbo.spdv_AVAUpdate
GO
CREATE PROCEDURE [dbo].[spdv_AVAUpdate]
    @AVA AS tvpAVA READONLY -- table valued parameter
AS
DECLARE @ErrLogID AS INTEGER
BEGIN TRY
    SET NOCOUNT ON
    BEGIN TRANSACTION

    UPDATE AVA
    SET AVA.AvaDesc = TVP.AvaDesc,
        AVA.AvaArrKey = TVP.AvaArrKey
    FROM @AVA TVP
    INNER JOIN AVA AVA ON (AVA.AvaKey = TVP.AvaKey)

    -- Commit the transaction
    COMMIT TRANSACTION
    -- Return '' for success
    SELECT '' AS ErrMessage
END TRY
BEGIN CATCH
    -- Roll back any active or uncommittable transactions
    IF XACT_STATE() <> 0
    BEGIN
        ROLLBACK TRANSACTION
    END
    -- Log the error into the ErrorLog table
    EXECUTE dbo.uspLogError @ErrLogID = @ErrLogID OUTPUT
    -- Retrieve logged error information.
    SELECT dbo.ErrorLog.* FROM dbo.ErrorLog WHERE ErrLogID = @ErrLogID
END CATCH
GO
GRANT EXECUTE on dbo.spdv_AVAUpdate TO public
GO[/code]

When I look at the results of CDC, instead of operations 3 and 4, I see 1 (DELETE) and 2 (INSERT) for the change that was initiated from the stored procedure:

[code="sql"]-- GET CDC RESULTS FOR CHANGES TO AVA TABLE
USE dv_wrp_test
GO
SELECT * FROM cdc.dbo_AVA_CT
GO

-- RESULTS SHOW OPERATION 1 (DELETE) AND 2 (INSERT) INSTEAD OF 3 AND 4
--__$start_lsn __$end_lsn __$seqval __$operation __$update_mask AvaKey AvaDesc AvaArrKey AvaSAPAppellationID
--0x0031E84F000000740008 NULL 0x0031E84F000000740002 3 0x02 119 Test2 6 NULL
--0x0031E84F000000740008 NULL 0x0031E84F000000740002 4 0x02 119 Test3 6 NULL
--0x0031E84F00000098000A NULL 0x0031E84F000000980003 1 0x0F 119 Test3 6 NULL
--0x0031E84F00000098000A NULL 0x0031E84F000000980004 2 0x0F 119 Test4 6 NULL[/code]

Does anyone know why this might be happening, and if so, what can be done to correct it? Also, is there any way to get the user id associated with the CDC?

Thanks,
Gina
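Whatever produces the delete/insert pair here (updates that the engine executes as split "deferred" updates are recorded this way in the change table), the changes can still be read through the generated table-valued function rather than the _CT table directly; a sketch:

[code="sql"]-- read the capture instance through the generated TVF
DECLARE @from_lsn binary(10), @to_lsn binary(10);
SELECT @from_lsn = sys.fn_cdc_get_min_lsn('dbo_AVA'),
       @to_lsn   = sys.fn_cdc_get_max_lsn();

SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_AVA(@from_lsn, @to_lsn, N'all update old');[/code]

On the second question: CDC records no user name at all; capturing one generally means adding something like a ModifiedBy column that the application (or a trigger using SUSER_SNAME()) populates.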

HOW TO EXECUTE 2-3 SQL STATEMENTS ONE AFTER ANOTHER BASED ON THE RESULT OF THE PREVIOUS QUERY?

Posted: 03 Jun 2013 07:12 AM PDT

Hi, how can I run 2-3 SQL statements in one go, where each is based on the previous query's result? For example: the first statement copies the data from a linked server into a new table on the local server; the second statement copies a few fields from the table just created into another table on the local server; and the third statement applies a MERGE to this table, treating it as the source for another destination table. Now, how can I achieve this in one go -- via a stored procedure, a cursor, or something else? Please help. Thanks. Pratik.
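A stored procedure is the usual container: statements inside a procedure already run one after another, and TRY/CATCH plus a transaction stops the chain on failure. A sketch with made-up object names:

[code="sql"]CREATE PROCEDURE dbo.usp_LoadAndMerge  -- all object names below are placeholders
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRANSACTION;

        -- 1) pull from the linked server into a local staging table
        SELECT * INTO dbo.Stage_Orders
        FROM [LINKEDSRV].[RemoteDb].dbo.Orders;

        -- 2) copy a few fields onward
        INSERT INTO dbo.Orders_Slim (OrderID, CustomerID, OrderDate)
        SELECT OrderID, CustomerID, OrderDate FROM dbo.Stage_Orders;

        -- 3) merge into the destination
        MERGE dbo.Orders_Final AS tgt
        USING dbo.Orders_Slim AS src ON tgt.OrderID = src.OrderID
        WHEN MATCHED THEN UPDATE SET OrderDate = src.OrderDate
        WHEN NOT MATCHED THEN INSERT (OrderID, CustomerID, OrderDate)
             VALUES (src.OrderID, src.CustomerID, src.OrderDate);

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);   -- rethrow for the caller/job history
    END CATCH
END[/code]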

Is there a way to read execution statistics for stored procedures that have "with recompile"

Posted: 03 Jun 2013 02:26 AM PDT

Is there a way to read execution statistics for stored procedures that have "with recompile"? I've tried the following with no joy:

[code="sql"]USE tempdb
go
CREATE PROCEDURE usp_testwithoutrecompile
AS
SELECT TOP 1 * FROM sys.objects
go

CREATE PROCEDURE usp_testwithrecompile
WITH RECOMPILE
AS
SELECT TOP 1 * FROM sys.objects
go

EXEC usp_testwithoutrecompile;
EXEC usp_testwithrecompile;

SELECT OBJECT_NAME(object_id), *
FROM sys.dm_exec_procedure_stats
WHERE OBJECT_NAME(object_id) = 'usp_testwithoutrecompile';

SELECT OBJECT_NAME(object_id), *
FROM sys.dm_exec_procedure_stats
WHERE OBJECT_NAME(object_id) = 'usp_testwithrecompile';

DROP PROCEDURE usp_testwithoutrecompile;
DROP PROCEDURE usp_testwithrecompile;[/code]

Or maybe you know a better way for me to work. I am using the execution stats [max_elapsed_time] and ([Total_elapsed_time]/[Execution_count]) to compare sp performance between our prod & QA servers. Now we have a number of procs that use WITH RECOMPILE and I cannot judge the impact of the change. (I'm using SQL 2008 R2 Enterprise 64 bit.)

Many thanks in advance
Ian
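WITH RECOMPILE procedures never leave a cached plan, so they never surface in sys.dm_exec_procedure_stats; the timings have to be captured as they execute instead. A hedged Extended Events sketch (2008 R2 supports XE; session and filter values are examples):

[code="sql"]-- capture completion of procedures in tempdb (database_id 2 in the repro above)
CREATE EVENT SESSION proc_timings ON SERVER
ADD EVENT sqlserver.module_end(
    ACTION (sqlserver.sql_text)
    WHERE (sqlserver.database_id = 2))
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION proc_timings ON SERVER STATE = START;

-- run the procs, then read the ring buffer (duration is in the event payload):
SELECT CAST(t.target_data AS xml) AS captured
FROM sys.dm_xe_sessions s
JOIN sys.dm_xe_session_targets t ON s.address = t.event_session_address
WHERE s.name = 'proc_timings';[/code]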

SQL Server - Stored Procedure

Posted: 03 Jun 2013 06:06 AM PDT

Can we use CAST when passing parameters in an EXEC statement? I am getting an error when using CAST to pass the parameters. Example:

EXEC dbo.abc CAST(0 as bit), null

NOTE: I need to pass just two parameters. Thanks in advance.
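No -- an EXEC parameter list only accepts constants and variables, not expressions or function calls, so assign the value to a variable first:

[code="sql"]DECLARE @p1 bit = CAST(0 AS bit);  -- or simply: DECLARE @p1 bit = 0;
EXEC dbo.abc @p1, NULL;[/code]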

SSIS package execution: remote vs. local

Posted: 02 Jun 2013 07:56 PM PDT

Hi all. Hope I'm posting in the right forum. We currently schedule SSIS packages using the Windows Task Scheduler, but we want to use the SQL Agent for this. The most important reasons are:

- Easier maintenance of jobs by the DBA instead of the system administrators who administer scheduled tasks
- Easier monitoring of jobs via job history
- Easier to define flows in SQL Agent job steps for dependent packages: first execute package X and then Y, etc.

In our configuration we have 2 separate servers:

- DB server. Purpose: only for serving data (not in the DMZ for security reasons; this is a strict security policy which we cannot change)
- JOB server. Purpose: application server. Contains scheduled tasks and serves SSIS packages (in the DMZ)

[b]Problem[/b]
The SSIS service is installed on the job server but not on the database node. First I tested scheduling an Agent job on the DB server which calls a package hosted by the JOB server. This works, but the SSIS package is actually run on the DB server instead of remotely on the JOB server. This is a problem because the DB server is not in the DMZ and cannot access external sites, so our packages cannot run correctly. Workarounds for remote scheduling via sp_start_job are complex and have several additional problems.

[b]Solution?[/b]
A solution is just to stick with the old scheduling and run the packages directly via dtexec. However, we then lose all of the SQL Agent job advantages. Another solution is to install another SQL instance on the JOB server itself, just for scheduling the SSIS packages. This is in my opinion the best solution, but it will cost us an extra SQL license! It also violates the "no SQL instances in the DMZ" policy, although because we don't store any data (besides the system databases) there, the risk is reduced.

[b]Advice?[/b]
I really need some good advice from the experts on this ;)
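If an instance does end up on the JOB server, the DB server's Agent can still drive the flow by starting jobs remotely; a sketch assuming a linked server named JOBSRV pointing at that instance:

[code="sql"]-- from the DB server: kick off a job defined on the JOB server's instance
EXEC [JOBSRV].msdb.dbo.sp_start_job @job_name = N'Run Package X';
-- note sp_start_job is asynchronous: it returns as soon as the job starts,
-- so a following step must poll msdb.dbo.sysjobactivity if it has to wait[/code]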

Calling Batch file using SQL Script

Posted: 03 Jun 2013 12:03 AM PDT

Hi Team, I am calling a batch file from a SQL script, passing parameters to the batch file, and trying to print the parameters in the batch file:

[b]declare @bat varchar(200)
set @bat = 'E:\Test_batch\Test.bat' + ' ' + 'UPDATE' + ' ' + 'DELETE'
exec master..xp_cmdshell @bat[/b]

The parameters are printing, but it throws the below error message:

[b]'UPDATE' is not recognized as an internal or external command, operable program or batch file.
'DELETE' is not recognized as an internal or external command, operable program or batch file.[/b]

Please help me.
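That error comes from cmd.exe trying to execute the parameter values as commands, which usually means the batch file uses %1 and %2 on lines of their own rather than as arguments. A sketch of a Test.bat that just prints them, plus quoting the path defensively on the SQL side:

[code]:: Test.bat -- echo the two arguments instead of executing them
@echo off
echo first parameter  : %1
echo second parameter : %2[/code]

[code="sql"]declare @bat varchar(200)
set @bat = '"E:\Test_batch\Test.bat" UPDATE DELETE'
exec master..xp_cmdshell @bat[/code]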

How to find SQL Server account name?

Posted: 03 Jun 2013 04:57 AM PDT

Hi, I need to find the account name that the SQL Server instance is installed and running under. Is it possible? Thanks
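Assuming you mean the service account: on SQL Server 2008 R2 SP1 and later there is a DMV for it, and on older builds the service's ObjectName can be read from the registry:

[code="sql"]-- 2008 R2 SP1 and later
SELECT servicename, service_account FROM sys.dm_server_services;

-- older builds: read the service definition from the registry (default instance)
EXEC master.dbo.xp_regread
    N'HKEY_LOCAL_MACHINE',
    N'SYSTEM\CurrentControlSet\Services\MSSQLSERVER',
    N'ObjectName';[/code]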

issues with setting up Publisher in SQL2008

Posted: 30 May 2013 11:51 PM PDT

I have a two-server configuration where the distribution db is on one server and the publisher is on a second server. I have set up the distribution db just fine (SQL2008R2 SP2), and when I configure the Publisher on SQL2008 SP3 it works, but when the replication starts it gets a failed login. The login is right and the password is correct; I don't understand why this thinks the password is incorrect. In the publication I chose the option "Run under SQL Server Agent Service Account" so I didn't have to supply the password.

Message
Replication-Replication Snapshot Subsystem: agent failed. The replication agent had encountered an exception.
Source: Replication
Exception Type: Microsoft.SqlServer.Replication.ConnectionFailureException
Exception Message: Login failed for user
Message Code: 18456
Replication-Replication Snapshot Subsystem: agent failed. Unable to start execution of step 2 (reason: Error authenticating proxy system error: Logon failure: unknown user name or bad password.). The step failed.

Facing an issue looping through SQL to XML

Posted: 30 May 2013 06:56 PM PDT

We are trying to display XML output using the following query but are facing an issue. We have created a temp table from various tables in the database and are trying to create XML output from it.

[code="sql"]CREATE TABLE #crossTable11(
    EMPID INT,
    SN INT,
    EmpName VARCHAR(150),
    EventName VARCHAR(150),
    BookingDate SMALLDATETIME,
    PositionName VARCHAR(150),
    DepartmentName VARCHAR(150)
)

INSERT INTO #crossTable11 (EMPID, SN, EmpName, EventName, BookingDate, PositionName, DepartmentName)
VALUES (1, 6, 'N1', '99hh', '2013-05-20 00:00:00', 'Manager', 'Admin')
INSERT INTO #crossTable11 (EMPID, SN, EmpName, EventName, BookingDate, PositionName, DepartmentName)
VALUES (2, 6, 'N1', 'MS Excel', '', '', '')
INSERT INTO #crossTable11 (EMPID, SN, EmpName, EventName, BookingDate, PositionName, DepartmentName)
VALUES (13, 6, 'N1', 'MS Office', '', '', '')
INSERT INTO #crossTable11 (EMPID, SN, EmpName, EventName, BookingDate, PositionName, DepartmentName)
VALUES (4, 2, 'Pkar', '99hh', '2013-05-20 00:00:00', 'Manager', 'Admin')
INSERT INTO #crossTable11 (EMPID, SN, EmpName, EventName, BookingDate, PositionName, DepartmentName)
VALUES (5, 2, 'Pkar', 'MS Excel', '2013-05-10 00:00:00', 'Supplier', 'Accounts')
INSERT INTO #crossTable11 (EMPID, SN, EmpName, EventName, BookingDate, PositionName, DepartmentName)
VALUES (6, 2, 'Pkar', 'MS Office', '2013-08-01 00:00:00', 'Supplier', 'Accounts')

SELECT 'Site of Person Involved' AS '@caption',
       'SitePI' AS '@fieldName',
       (SELECT EmpName AS 'emp/@ID', PositionName AS 'emp/@Position', DepartmentName AS 'emp/@Department',
               (SELECT EventName AS 'event/@ID', BookingDate AS 'event/@Date'
                FROM #crossTable11 L
                WHERE L.EMPID = p.EMPID
                ORDER BY EventName
                FOR XML PATH('event'), TYPE)
        FROM #crossTable11 p
        GROUP BY p.SN, p.EmpName, p.PositionName, p.DepartmentName, p.EMPID
        FOR XML PATH(''), TYPE)
FOR XML PATH('r'), TYPE[/code]

The output is as follows, which shows the duplicate names:

<r caption="Site of Person Involved" fieldName="SitePI">
  <emp ID="Pkar" Position="" Department="" />
  <event>
    <event ID="MS Office" Date="1900-01-01T00:00:00" />
  </event>
  <emp ID="Pkar" Position="Supervisor" Department="Admin" />
  <event>
    <event ID="99hh" Date="2013-05-20T00:00:00" />
  </event>
  <emp ID="Pkar" Position="Supervisor" Department="Admin" />
  <event>
    <event ID="MS Excel" Date="2013-05-10T00:00:00" />
  </event>
  <emp ID="N1" Position="" Department="" />
  <event>
    <event ID="99hh" Date="2013-05-20T00:00:00" />
  </event>
  <emp ID="N1" Position="" Department="" />
  <event>
    <event ID="MS Office" Date="1900-01-01T00:00:00" />
  </event>
  <emp ID="N1" Position="Manager" Department="Account" />
  <event>
    <event ID="MS Excel" Date="1900-01-01T00:00:00" />
  </event>
</r>

Required output: we would like to avoid the duplicates and combine the output for them.

<r caption="Site of Person Involved" fieldName="SitePI">
  <emp ID="Pkar" Position="Supervisor" Department="Admin" />
  <event>
    <event ID="99hh" Date="2013-05-20T00:00:00" />
    <event ID="MS Office" Date="1900-01-01T00:00:00" />
    <event ID="MS Excel" Date="2013-05-10T00:00:00" />
  </event>
  <emp ID="N1" Position="Manager" Department="Account" />
  <event>
    <event ID="99hh" Date="2013-05-20T00:00:00" />
    <event ID="MS Office" Date="1900-01-01T00:00:00" />
    <event ID="MS Excel" Date="1900-01-01T00:00:00" />
  </event>
</r>
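The duplicates come from grouping by PositionName and DepartmentName, which differ between rows of the same person (some are blank). Grouping by the person only, picking the maximum value of the varying columns, and correlating the inner query on SN rather than the per-row EMPID gives one emp element per person -- a sketch (which non-blank value MAX picks may need adjusting to the real rule; the sample's desired Position values don't fully match its data):

[code="sql"]SELECT 'Site of Person Involved' AS '@caption',
       'SitePI' AS '@fieldName',
       (SELECT p.EmpName AS 'emp/@ID',
               MAX(p.PositionName)   AS 'emp/@Position',   -- folds the blanks away
               MAX(p.DepartmentName) AS 'emp/@Department',
               (SELECT EventName AS 'event/@ID', BookingDate AS 'event/@Date'
                FROM #crossTable11 L
                WHERE L.SN = p.SN              -- correlate on the person, not the row id
                ORDER BY EventName
                FOR XML PATH('event'), TYPE)
        FROM #crossTable11 p
        GROUP BY p.SN, p.EmpName
        FOR XML PATH(''), TYPE)
FOR XML PATH('r'), TYPE[/code]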

sp_updatestats job - very slow

Posted: 03 Jun 2013 03:33 AM PDT

Hello, sp_updatestats on one of my DBs took over 2 hours. The job runs daily at 5:30am and usually takes no longer than 45 seconds. Any suggestion is greatly appreciated. THX

creating logon & logoff triggers in MSSQL

Posted: 03 Jun 2013 01:02 AM PDT

We have several databases that we track user logons for; until now all these databases resided in Oracle. For Oracle I created a sys trigger that logs [i]os user, database user, program used[/i], etc., as well as logon time & logoff time, and writes to a table I called "audlog"; then I query the table every month.

This is what I created in Oracle:

[code="sql"]CREATE TABLE audlog.session_audit (
  user_name  VARCHAR2(30),
  log_date   DATE,
  session_id VARCHAR2(30),
  ip_addr    VARCHAR2(30),
  hostname   VARCHAR2(30),
  auth_type  VARCHAR2(30),
  os_user    VARCHAR2(30),
  event      VARCHAR2(8),
  program    VARCHAR2(30)
) tablespace users;

Create Or Replace Trigger Trg_Logon_Info After Logon On Database
Declare
  Program Varchar2(30);
Begin
  Select program Into Program From v$session Where sid = (Select Max(Sid) From V_$Mystat);
  Insert Into Audlog.Session_Audit Values (
    user,
    sysdate,
    Sys_Context ('USERENV', 'SESSIONID'),
    Sys_Context ('USERENV', 'IP_ADDRESS'),
    Sys_Context ('USERENV', 'HOST'),
    Sys_Context ('USERENV', 'AUTHENTICATION_TYPE'),
    Sys_Context ('USERENV', 'OS_USER'),
    'LOG ON', -- Event
    Program
  );
End;
/

Create Or Replace Trigger Trg_Logoff_Info Before Logoff On Database
Declare
  Program Varchar2(30);
Begin
  Select Program Into Program From V$Session Where Sid = (Select Max(Sid) From V_$Mystat);
  Insert Into Audlog.Session_Audit Values (
    user,
    sysdate,
    Sys_Context ('USERENV', 'SESSIONID'),
    Sys_Context ('USERENV', 'IP_ADDRESS'),
    Sys_Context ('USERENV', 'HOST'),
    Sys_Context ('USERENV', 'AUTHENTICATION_TYPE'),
    Sys_Context ('USERENV', 'OS_USER'),
    'LOG OFF', -- Event
    Program
  );
End;
/[/code]

I would like to create something in MSSQL that basically shows the same information, so I can keep the reports consistent looking. Can I duplicate this trigger in MSSQL?
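SQL Server has server-level LOGON triggers but no logoff trigger, so the logoff half has no direct equivalent (the built-in Audit/trace features cover session end events instead). A hedged sketch of the logon half, assuming an audit database named AuditDb:

[code="sql"]USE AuditDb;
CREATE TABLE dbo.session_audit (
    user_name  sysname,
    log_date   datetime,
    session_id int,
    hostname   nvarchar(128),
    program    nvarchar(128),
    event      varchar(8)
);
GO
CREATE TRIGGER trg_logon_info ON ALL SERVER
WITH EXECUTE AS 'sa'   -- needs rights to write the audit row
FOR LOGON
AS
BEGIN
    INSERT INTO AuditDb.dbo.session_audit
    SELECT ORIGINAL_LOGIN(), GETDATE(), @@SPID,
           EVENTDATA().value('(/EVENT_INSTANCE/ClientHost)[1]', 'nvarchar(128)'),
           APP_NAME(), 'LOG ON';
END;[/code]

One caution: a faulty logon trigger can lock everyone out, so test it carefully and keep the Dedicated Administrator Connection in mind as the escape hatch.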

Bulk insert Error

Posted: 03 Jun 2013 12:52 AM PDT

Hello All, I am currently trying to insert data from a file placed on a remote server, using the BULK INSERT command, but I get the error "Operating system error code 5 (Access is denied)" when I run it. The login already has bulkadmin privileges, and I am able to access the file on the remote server from my source server. I am able to perform the import when I use the import/export wizard. Why is it so?

Trying to use case when to pivot data

Posted: 03 Jun 2013 12:11 AM PDT

I'm trying to pivot this data but I can't seem to get rid of the "extra zeros". Here's my query:

[code="sql"]SELECT provider, month_name,
       case when WeekNumberOfMonth = '1' then ArrivedVisits else 0 end as 'Week1',
       case when WeekNumberOfMonth = '2' then ArrivedVisits else 0 end as 'Week2',
       case when WeekNumberOfMonth = '3' then ArrivedVisits else 0 end as 'Week3',
       case when WeekNumberOfMonth = '4' then ArrivedVisits else 0 end as 'Week4',
       case when WeekNumberOfMonth = '5' then ArrivedVisits else 0 end as 'Week5'
from VISITS
group by provider, fiscal_month, month_name, WeekNumberOfMonth, ArrivedVisits
order by provider, fiscal_month[/code]

[code="sql"]create table VISITS(
provider varchar(100),
fiscal_month int,
month_name varchar(25),
WeekNumberOfMonth int,
ArrivedVisits int
)[/code]

[code="sql"]-- note: the posted sample values only cover four columns, so ArrivedVisits is omitted here
insert into VISITS(provider, fiscal_month, month_name, WeekNumberOfMonth) values
('ABELES M.D.,DEBORAH',1,'October',1),('ABELES M.D.,DEBORAH',1,'October',2),('ABELES M.D.,DEBORAH',1,'October',3),('ABELES M.D.,DEBORAH',1,'October',4),('ABELES M.D.,DEBORAH',1,'October',5),
('ABELES M.D.,DEBORAH',2,'November',1),('ABELES M.D.,DEBORAH',2,'November',2),('ABELES M.D.,DEBORAH',2,'November',3),('ABELES M.D.,DEBORAH',2,'November',4),('ABELES M.D.,DEBORAH',2,'November',5),
('ABELES M.D.,DEBORAH',3,'December',1),('ABELES M.D.,DEBORAH',3,'December',2),('ABELES M.D.,DEBORAH',3,'December',3),('ABELES M.D.,DEBORAH',3,'December',4),('ABELES M.D.,DEBORAH',3,'December',5),('ABELES M.D.,DEBORAH',3,'December',6),
('ABELES M.D.,DEBORAH',4,'January',1),('ABELES M.D.,DEBORAH',4,'January',2),('ABELES M.D.,DEBORAH',4,'January',3),('ABELES M.D.,DEBORAH',4,'January',4),('ABELES M.D.,DEBORAH',4,'January',5),
('ABELES M.D.,DEBORAH',5,'February',1),('ABELES M.D.,DEBORAH',5,'February',2),('ABELES M.D.,DEBORAH',5,'February',3),('ABELES M.D.,DEBORAH',5,'February',4),('ABELES M.D.,DEBORAH',5,'February',5),
('ABELES M.D.,DEBORAH',6,'March',1),('ABELES M.D.,DEBORAH',6,'March',2),('ABELES M.D.,DEBORAH',6,'March',3),('ABELES M.D.,DEBORAH',6,'March',4),('ABELES M.D.,DEBORAH',6,'March',5),('ABELES M.D.,DEBORAH',6,'March',6),
('ABELES M.D.,DEBORAH',7,'April',1),('ABELES M.D.,DEBORAH',7,'April',2),('ABELES M.D.,DEBORAH',7,'April',3),('ABELES M.D.,DEBORAH',7,'April',4),('ABELES M.D.,DEBORAH',7,'April',5)

insert into VISITS(provider, fiscal_month, month_name, WeekNumberOfMonth) values
('ABESAMIS M.D.,AILEEN A.',1,'October',1),('ABESAMIS M.D.,AILEEN A.',1,'October',2),('ABESAMIS M.D.,AILEEN A.',1,'October',3),('ABESAMIS M.D.,AILEEN A.',1,'October',4),('ABESAMIS M.D.,AILEEN A.',1,'October',5),
('ABESAMIS M.D.,AILEEN A.',2,'November',1),('ABESAMIS M.D.,AILEEN A.',2,'November',2),('ABESAMIS M.D.,AILEEN A.',2,'November',3),('ABESAMIS M.D.,AILEEN A.',2,'November',4),('ABESAMIS M.D.,AILEEN A.',2,'November',5),
('ABESAMIS M.D.,AILEEN A.',3,'December',1),('ABESAMIS M.D.,AILEEN A.',3,'December',2),('ABESAMIS M.D.,AILEEN A.',3,'December',3),('ABESAMIS M.D.,AILEEN A.',3,'December',4),('ABESAMIS M.D.,AILEEN A.',3,'December',5),('ABESAMIS M.D.,AILEEN A.',3,'December',6),
('ABESAMIS M.D.,AILEEN A.',4,'January',1),('ABESAMIS M.D.,AILEEN A.',4,'January',2),('ABESAMIS M.D.,AILEEN A.',4,'January',3),('ABESAMIS M.D.,AILEEN A.',4,'January',4),('ABESAMIS M.D.,AILEEN A.',4,'January',5),
('ABESAMIS M.D.,AILEEN A.',5,'February',1),('ABESAMIS M.D.,AILEEN A.',5,'February',2),('ABESAMIS M.D.,AILEEN A.',5,'February',3),('ABESAMIS M.D.,AILEEN A.',5,'February',4),('ABESAMIS M.D.,AILEEN A.',5,'February',5),
('ABESAMIS M.D.,AILEEN A.',6,'March',1),('ABESAMIS M.D.,AILEEN A.',6,'March',2),('ABESAMIS M.D.,AILEEN A.',6,'March',3),('ABESAMIS M.D.,AILEEN A.',6,'March',4),('ABESAMIS M.D.,AILEEN A.',6,'March',5),('ABESAMIS M.D.,AILEEN A.',6,'March',6),
('ABESAMIS M.D.,AILEEN A.',7,'April',1),('ABESAMIS M.D.,AILEEN A.',7,'April',2),('ABESAMIS M.D.,AILEEN A.',7,'April',3),('ABESAMIS M.D.,AILEEN A.',7,'April',4),('ABESAMIS M.D.,AILEEN A.',7,'April',5)[/code]
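The "extra zeros" disappear once the CASE expressions are aggregated: SUM collapses the five week-rows of each month into one row, and WeekNumberOfMonth/ArrivedVisits must then leave the GROUP BY. A sketch:

[code="sql"]SELECT provider, month_name,
       SUM(case when WeekNumberOfMonth = 1 then ArrivedVisits else 0 end) as Week1,
       SUM(case when WeekNumberOfMonth = 2 then ArrivedVisits else 0 end) as Week2,
       SUM(case when WeekNumberOfMonth = 3 then ArrivedVisits else 0 end) as Week3,
       SUM(case when WeekNumberOfMonth = 4 then ArrivedVisits else 0 end) as Week4,
       SUM(case when WeekNumberOfMonth = 5 then ArrivedVisits else 0 end) as Week5
FROM VISITS
GROUP BY provider, fiscal_month, month_name
ORDER BY provider, fiscal_month;[/code]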

Process For Data Migration

Posted: 02 Jun 2013 10:17 PM PDT

Hi All, I have a requirement to migrate a database, with the migrated version having column name changes that must be reflected in the entire code base. I am basically performing a translation of an existing database. Now, I could set these up as SSIS packages or individual scripts and search all the sprocs/views/functions etc., but I hate re-inventing the wheel and was wondering if anyone knew of any software or other process that could aid this? I have the DBA toolbelt from Redgate but I cannot see anything that will help (so far, apart from SQL Search helping to identify the locations of the references). Thanks in advance, Elliot

2 transaction log files -- which one is in use?

Posted: 02 Jun 2013 09:23 PM PDT

Hi, today we discovered a database that has 2 log files (reason unknown; they're on the same disk). This database is in SIMPLE mode. I'm wondering which view will identify which of the two is active. I've looked at the ones that seemed most likely from the autocomplete but haven't been able to find it (sys.fn_dblog provided nothing, while sys.dm_audit_actions knows that the action was logged but doesn't say where). sys.dm_io_virtual_file_stats gives the number of reads/writes (duration unknown; assume since startup) but it doesn't have a date field. The idea would be to delete the currently inactive log file before the database switches to it again. Mike
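The log is used round-robin across the virtual log files of both physical files, so "which file is active" changes constantly; the usual route is to empty the second file and remove it rather than catch it idle. A sketch, with the logical file name as a placeholder:

[code="sql"]USE [YourDb];
DBCC LOGINFO;   -- Status = 2 marks active VLFs; FileId shows which physical file each sits in
DBCC SHRINKFILE (N'YourDb_log2');   -- shrink so no active VLFs remain in the second file
ALTER DATABASE [YourDb] REMOVE FILE [YourDb_log2];[/code]

If REMOVE FILE complains the file is not empty, issue a CHECKPOINT (the database is in SIMPLE mode) and retry.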

Locale issues when importing flat file data (UK/US date problem)

Posted: 03 Jun 2013 12:32 AM PDT

I've raised this issue before, but I've now been allowed to spend time on site to review the problem further. Unfortunately, the servers and the desktops are on different domains from mine (and each other, I believe), quite locked down, and administered by another organisation.

We have a server set up with SQL Server 2008 R2 installed on it. The location and all the regional settings on the server are set to United Kingdom. I'm connecting to the server via Win 7 desktops with SSMS installed (SQL 2008 R2). The location and the regional settings on the desktops are also set to United Kingdom.

If I import flat-file pipe-delimited data via SSIS into a database on the server from the desktop, it thinks the locale is United States and screws the dates up. Whilst I can create a package and view the properties, change and save it, I cannot run the package due to permissions, nor will the relevant permissions be given. Plus the nature of the data being provided means consistency is very irregular.

If I run a file in on the server itself, it correctly loads it up as United Kingdom and the dates are correct. It also correctly imports the data if I log in via another server. I've created a new instance on the desktops, ensured that everything is set for UK, and tried to load the data there. This also gives me the US date problem.

In all the tests we have carried out, it does seem that the desktops are at fault. I've checked with the sysadmins and they say nothing is being blocked (or shown in the logs as being blocked). Unfortunately, they are not open to the problem, or to there being a problem for them to fix. Whilst I can log into and load the data via the server directly, this is not really acceptable and, because of the nature of the project, may not be allowed due to the risks involved.

Has anyone come across this issue before, and if so, what was the solution? Is this a problem with ports, and if so which ones? Or is this a problem with GP? Many thanks

option (maxdop 1)

Posted: 09 Aug 2010 08:27 PM PDT

I've noticed some blocking sessions caused by the following SQL, but it's nowhere to be found in my SPs and EXEs. Anyone know what this is? Is it stats/indexes being rebuilt?

insert [dbo].[CLIENT] select * from [dbo].[CLIENT] option (maxdop 1)

help with Arithmetic overflow error; Calculating Lunar Phases.

Posted: 02 Jun 2013 11:12 PM PDT

OK, a few years ago I was poking around with calculating lunar phases, which I adapted from a math web site, and with a bit of help from the community we came up with a better ITVF version in this thread:

[b][url=http://www.sqlservercentral.com/Forums/Topic899188-391-1.aspx#bm899574]mis-appying a cross apply table value function?[/url][/b]

So today I decided to apply that ITVF against a larger tally table than before: basically 1101 years, from 1900-01-01 to 3000-01-01. The ITVF returns an error when it starts calculating against any date greater than [b]2808-07-13 00:00:00.000[/b], and I sure as heck can't see the forest through the trees and spot where the calculation is hitting a data type limit:

[quote][color="#FF0000"]Msg 8115, Level 16, State 8, Line 4
Arithmetic overflow error converting numeric to data type numeric.[/color][/quote]

Here's a test harness that allows you to recreate the issue; can you help me figure out where I'm overflowing?

[code]--this TallyCalendar table goes from year 0 (1900-01-01) to year 3000 (3000-01-01)
--calculation crashes on dates greater than '2808-07-11 00:00:00.000'
--initialize our vars
With TallyC AS (
    SELECT convert(datetime, RW) AS TheDate
    FROM (
        SELECT TOP ( datediff(dd, 0, dateadd(year, 1101, 0)) )
               ROW_NUMBER() OVER (ORDER BY sc1.id) - 1 AS RW
        FROM Master.dbo.SysColumns sc1
        CROSS JOIN Master.dbo.SysColumns sc2
    ) X),
cteDtSplit AS (
    SELECT TheDate,
           YEAR(TheDate) AS TheYear,
           MONTH(TheDate) AS TheMonth,
           DAY(TheDate) AS TheDay
    FROM TallyC),
cteDates AS (
    SELECT TheDate,
           TheYear - FLOOR( ( 12 - TheMonth ) / 10 ) AS yy,
           CASE WHEN (TheMonth + 9) >= 12 THEN (TheMonth + 9) - 12 ELSE TheMonth + 9 END AS mm,
           TheDay AS dd
    FROM cteDtSplit),
ctePre AS (
    SELECT TheDate, dd,
           FLOOR( 365.25 * ( yy + 4712 ) ) AS k1,
           FLOOR( 30.6 * mm + 0.5 ) AS k2,
           FLOOR( FLOOR( ( yy / 100 ) + 49 ) * 0.75 ) - 38 AS k3
    FROM cteDates),
cteAdj AS (
    SELECT TheDate,
           CASE WHEN (k1 + k2 + dd + 59) > 2299160 THEN (k1 + k2 + dd + 59) - k3 ELSE k1 + k2 + dd + 59 END AS jd -- % for dates in Julian calendar
    FROM ctePre),
cteFin AS (
    SELECT TheDate,
           ((( jd - 2451550.1 ) / 29.530588853) - CAST((FLOOR( ( jd - 2451550.1 ) / 29.530588853 )) AS DECIMAL(20,16))) * 29.53 AS AG
    FROM cteAdj)
SELECT TheDate,
       CASE WHEN ag < 1.84566 THEN 'New Moon'
            WHEN ag < 5.53699 THEN 'Waxing crescent'
            WHEN ag < 9.22831 THEN 'First quarter'
            WHEN ag < 12.91963 THEN 'Waxing near full moon' -- the web calls this "Gibbous", WTH is that?
            WHEN ag < 16.61096 THEN 'Full Moon '
            WHEN ag < 20.30228 THEN 'Waning near full moon' -- the web calls this "Gibbous", WTH is that?
            WHEN ag < 23.99361 THEN 'Last quarter'
            WHEN ag < 27.68493 THEN 'Waning crescent'
            ELSE 'New Moon' END AS Phase
FROM cteFin[/code]
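The limit looks like the DECIMAL(20,16) cast in cteFin: it leaves only 4 digits left of the decimal point (max 9999.99...), and the lunation count (jd - 2451550.1) / 29.530588853 crosses 10000 right around mid-2808 ((2746854 - 2451550.1) / 29.530588853 ≈ 10000). Widening the cast should fix it:

[code]-- cteFin with a wider target type: DECIMAL(24,16) allows 8 digits before the point
cteFin AS (
    SELECT TheDate,
           ((( jd - 2451550.1 ) / 29.530588853)
             - CAST(FLOOR( ( jd - 2451550.1 ) / 29.530588853 ) AS DECIMAL(24,16))) * 29.53 AS AG
    FROM cteAdj)[/code]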

DROP PK on highly called big table

Posted: 02 Jun 2013 08:05 PM PDT

Hi, I have a table with 80 million records with a PK on an int column. A service selects data from this table multiple times per second in the live environment. I have to change the data type of this PK column from int to bigint, so I need to drop the PK constraint. But when I run the ALTER TABLE statement to drop it, it gets blocked by the SELECT statements (even though they have NOLOCK in them), so I think it will be a never-ending wait. Is there any other, better way to achieve this? Can I put some lock on the table while altering it so that the SELECT statements just wait for their turn and don't block the ALTER statement? Thanks
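Even NOLOCK readers take schema-stability locks, which conflict with the schema-modification lock the ALTER needs, so the ALTER queues behind every running SELECT. On 2008 there is no low-priority-wait option; two common approaches, sketched with placeholder names:

[code="sql"]-- 1) bound the wait and retry in a loop or maintenance window
SET LOCK_TIMEOUT 5000;   -- give up after 5 s instead of waiting indefinitely
ALTER TABLE dbo.BigTable DROP CONSTRAINT PK_BigTable;

-- 2) build a shadow copy with the bigint key and swap names in one short outage
SELECT CAST(id AS bigint) AS id, col1, col2 INTO dbo.BigTable_new FROM dbo.BigTable;
ALTER TABLE dbo.BigTable_new ADD CONSTRAINT PK_BigTable_new PRIMARY KEY (id);
EXEC sp_rename 'dbo.BigTable', 'BigTable_old';
EXEC sp_rename 'dbo.BigTable_new', 'BigTable';[/code]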

Query Tuning

Posted: 02 Jun 2013 08:40 PM PDT

Hi all, how do you do query tuning? Are there any standard methods for it? How do you find a query's cost, and what is its importance in tuning?
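A common starting point is measuring each candidate query's I/O and CPU and reading its execution plan:

[code="sql"]SET STATISTICS IO ON;    -- logical/physical reads per table, in the Messages tab
SET STATISTICS TIME ON;  -- CPU and elapsed time

-- run the query under test here

-- "Include Actual Execution Plan" (Ctrl+M in SSMS) then shows the per-operator
-- cost percentages the optimizer assigned, which is what "query cost" refers to[/code]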

tempdb keeps growing

Posted: 02 Jun 2013 09:08 PM PDT

I have configured my tempdb as follows:8 data files with an initial size of 14GB each. This was based on 2 weeks of monitoring the DB and seeing to what size it grows.I have set the autogrow to 10%.The thing is, the data files have now grown to 22 GB each - making the overall size just over 183 GB. The tempdb is on its own drive and the capacity of this drive is 200 GB.I am afraid that I might be running out of space....I know restarting the SQL service will shrink the files back to its initial size of 14 GB each, but this is a production server and a restart is not possible.Any possible solutions?Will the tempdb not release unused space?
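tempdb never gives space back on its own; you can check how much of those files is actually allocated right now, and shrink the files while the instance is running (results vary if internal objects are pinning pages):

[code="sql"]-- how much of tempdb is actually in use?
SELECT SUM(unallocated_extent_page_count) * 8 / 1024 AS free_mb,
       SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_mb,
       SUM(user_object_reserved_page_count) * 8 / 1024 AS user_mb
FROM tempdb.sys.dm_db_file_space_usage;

USE tempdb;
DBCC SHRINKFILE (tempdev, 14336);   -- per-file target in MB; repeat for each data file name[/code]

Setting autogrow to a fixed MB amount rather than 10% also keeps the eight files evenly sized as they grow.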

Need to be alerted to disallowed email addresses being entered - Admin cleaning up after App programmers! :-)

Posted: 21 May 2013 06:06 PM PDT

Hi all

I think I may be on the right track in considering a scheduled job. I need to capture and correct (probably by a simple delete) instances where application users enter a disallowed email address like '*@specificdomain.com' into the database. We're supporting a SQL 2008 R2 server that is used by a third-party application developer. They, in their wisdom, have created a nasty side effect that I won't bore anyone with here. We are unable to alter any stored procedures etc. in the DB as they're encrypted, so instead of being able to stop the problem occurring at source, we need to cure the issue after the event.

I want to be able to generate a job that will scan 2 of the DB's tables for any occurrence of '*@specificdomain.com'; the offending rows would be identified thus:

[code="sql"]SELECT FIRST_NAME, LAST_NAME, EMAIL
FROM Candidates
WHERE EMAIL LIKE '%@specificdomain.com%'

SELECT FIRST_NAME, LAST_NAME, EMAIL
FROM CONTACTS
WHERE EMAIL LIKE '%@specificdomain.com%'[/code]

Can a job be created and scheduled to check the DB and either replace or just delete the disallowed email addresses?

Thanks in anticipation of someone enlightening me!
S
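Yes -- put the corrective statements into a T-SQL job step and schedule it. A sketch that blanks the addresses rather than deleting whole rows (swap in the DELETEs if that is really wanted):

[code="sql"]-- job step body: neutralise disallowed addresses
UPDATE Candidates SET EMAIL = NULL WHERE EMAIL LIKE '%@specificdomain.com%';
UPDATE CONTACTS   SET EMAIL = NULL WHERE EMAIL LIKE '%@specificdomain.com%';
-- or, to remove the rows entirely:
-- DELETE FROM Candidates WHERE EMAIL LIKE '%@specificdomain.com%';
-- DELETE FROM CONTACTS   WHERE EMAIL LIKE '%@specificdomain.com%';[/code]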

SQL Server only using 4Gb of 8Gb RAM

Posted: 29 May 2013 08:14 PM PDT

Hi all, I'm slightly befuddled and was wondering if anyone could shed some light on why my test box running a SQL Server 2008 R2 instance is only using 4Gb of the available 8Gb RAM? I've checked the properties of the instance: memory shows as 4095 MB on the 'General' node, but on the 'Memory' node the 'Maximum server memory' is set to 8192 MB. Start > Computer > Properties shows 8.00 Gb RAM, and I'm using Windows Server Standard SP2 (32-bit). Is it because the OS is set to use the other 4Gb, and if so can I change it to 2Gb for the OS and 6Gb for SQL? I feel I'm missing something blindingly obvious; could someone put me out of my misery please? :-) Thanks. M
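On 32-bit Windows a single process can normally address at most 4 GB, so the cap is expected rather than a misconfiguration. If the OS edition can actually see more than 4 GB of physical RAM, 32-bit SQL Server can reach it through AWE:

[code="sql"]-- 32-bit only: let SQL Server use memory above the 4 GB line via AWE
-- (the service account also needs the "Lock pages in memory" right)
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;           RECONFIGURE;
-- then cap it, e.g. 6 GB:
EXEC sp_configure 'max server memory (MB)', 6144; RECONFIGURE;[/code]

Note that some 32-bit Windows Server Standard editions max out at 4 GB of physical memory, in which case no SQL setting will get past it.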

Risks of not upgrading from SQL 2000, SQL 2005

Posted: 28 May 2013 02:36 AM PDT

Hello - I am inheriting 5 servers to administer from a jack-of-all-trades DBA/developer in another department. 2 of the instances are SQL 2000 Standard, and we are worried that SQL 2000 is out of support. The jack-of-all-trades DBA/developer will still be involved but has no desire to upgrade the SQL 2000 instances (DTS packages would need to be rewritten and he is understaffed; plus, "it's not broke so don't fix it"). I was told by my boss that we need to inform them of the risks of running an out-of-support version. I had a conversation with the developer and he said "if the servers are behind a firewall, then there are no security risks". Can anyone offer an educated response to the firewall statement? Any links or insights would be appreciated. Thanks, Dave

Monday, June 3, 2013

[how to] Why is the padding on months in Oracle 9 characters?

Why is the padding on months in Oracle 9 characters?

Posted: 03 Jun 2013 09:00 PM PDT

I ran two simple sql statements

select length(to_char(sysdate, 'fmMonth')) from dual;
select length(to_char(sysdate, 'Month')) from dual;

The length of the first statement was 4 (It's June) and the length of the second statement was 9. Why does it pad 5 extra characters unless you specifically ask it not to?

The only reason I could come up with was that the month with the most characters is September (with nls_date_language set to 'English'), which has 9 characters.

Second question: if you changed your nls_date_language variable, would the amount of padding also change?
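That guess matches the documented behavior: 'Month' is blank-padded to the longest month name of the current date language, and the 'fm' (fill mode) modifier suppresses the padding, so the width does follow NLS_DATE_LANGUAGE. A quick demonstration (the commented output is what I would expect, not verified on every version):

select '[' || to_char(date '2013-06-03', 'Month') || ']' as padded from dual;
-- [June     ]  (padded to 9, the length of 'September')
alter session set nls_date_language = 'French';
select '[' || to_char(date '2013-06-03', 'Month') || ']' as padded from dual;
-- [Juin     ]  (padded to the longest French month name)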

How to merge data sets without including redundant rows?

Posted: 03 Jun 2013 07:49 PM PDT

I think this must be a fairly common thing to do, but I haven't been able to find an example. It basically involves merging data from two data sets that involve dates/times (or more generally any sequential values) and removing redundant records. By redundant I mean records that can be implied by other records.

Here's the scenario I'm trying to solve. We have a table of historical prices for items. New prices are imported periodically into the table. The import file can contain dates from any time, we don't know that they are always "after" the data in the history table.

To avoid data bloat I only want to insert new records if they give new information. So if a new record can be inferred from an existing one I don't want to insert it. Conversely if an existing record can be inferred from a new one I want to replace it. Record B can be inferred by record A if the price is the same and there are no other changes between the date of record A and the date of record B (if the price was $1 yesterday and is $1 today and there are no changes between yesterday and today, then we can infer that the price today is $1 using yesterday's price).

Some examples might help, here are the two problem scenarios:

  1. An incoming record can be inferred from an existing record, so the incoming record is redundant. E.g.

    old record: 2013-04-23 1.00

    new record: 2013-04-24 1.00 <-- this is implied by the existing record, don't insert it

  2. An existing record can be inferred from a new record, so the existing record is redundant. E.g.

    new record: 2013-04-23 1.00

    old record: 2013-04-24 1.00 <-- this is implied by the new record, delete it

  3. This just shows an example of when a value is not redundant. E.g.

    old record: 2013-04-23 1.00

    old record: 2013-04-24 1.20

    new record: 2013-04-25 1.00 <-- not redundant, the price changed since it was last 1.00

There is a more detailed example here http://sqlfiddle.com/#!3/c61a8/1

Currently I'm leaning towards a multi-step approach of:

  1. Delete from incoming where there is an existing record that has the same price with an earlier date (scenario 1 above).
  2. Delete from existing where there is an incoming record that has the same price with an earlier date (scenario 2 above).
  3. Insert the remaining incoming records.

There must be a nicer way, maybe using MERGE, but it's doing my head in trying to work out how to do it.

How do I efficiently "merge" the existing and incoming records? Thanks
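For step 1 alone, "redundant incoming row" can be phrased as "the prevailing existing price at that date is already the same"; a sketch assuming tables price_history(item_id, price_date, price) and incoming with the same shape:

-- step 1: drop incoming rows whose price equals the prevailing existing price
DELETE i
FROM incoming AS i
WHERE i.price = (SELECT TOP (1) h.price
                 FROM price_history AS h
                 WHERE h.item_id = i.item_id
                   AND h.price_date <= i.price_date
                 ORDER BY h.price_date DESC);

Step 2 is the mirror image with the roles swapped, and incoming rows that make each other redundant need the same treatment before the final insert; MERGE alone can't easily express the "no intervening change" condition, which is why the multi-step approach tends to win here.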

TempDB MDF file huge after update script ran

Posted: 03 Jun 2013 06:28 PM PDT

On the weekend we had a huge deployment run that affected a large chunk of our system. It was all run in one massive transaction (probably the wrong thing to do). As a result our TempDB DatabaseFile has grown from ~5GB to ~30GB.

This is causing some space issues on the drive that it is located on. The server is Windows 2003 and we are running SQL 2005.

This was a one off event and so we don't want the TempDB file to stay that large. I have tried running DBCC SHRINKDATABASE and DBCC SHRINKFILE commands however it says:

Page 1:571344 could not be moved because it is a work table page.

I saw this question TempDB data files don't shrink well on SQL 2008 and was wondering if I need to free all system cache before I can shrink the database down. Obviously I don't want to do that on a live database if I can help it.

Is there any other way that I can shrink TempDB's MDF file?
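The "work table page" usually belongs to cached work tables, and clearing those caches often lets the shrink proceed; it briefly costs performance, but far less than a restart. A sketch (the logical file name tempdev is the default; check sys.master_files for yours):

DBCC FREESYSTEMCACHE ('ALL');      -- flush cached temp objects, then retry:
USE tempdb;
DBCC SHRINKFILE (tempdev, 5120);   -- target size in MB

As a fallback, ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 5GB) resets the size that will be used at the next service restart.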

MYSQL - Using simple inventory number convention as Primary Key

Posted: 03 Jun 2013 05:29 PM PDT

I have an inventory with three numbered groups like so:

Group 1: 0-999
Group 2: e0-e999
Group 3: v0-v999

Can I simply make a one-column table to store this information and make that column the primary index (needed?), or should I make it two columns?

example:

code (primary)
000
016
760
433
e221
e011
v999
v003

vs.

code_id (primary)   code
1                   000
2                   016
3                   760
4                   433
5                   e221
6                   e011
7                   v999
8                   v003
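For reference, a minimal sketch of the one-column version (length 5 covers 'e999' and 'v999'; the engine is whatever you use elsewhere):

CREATE TABLE inventory_code (
  code VARCHAR(5) NOT NULL,
  PRIMARY KEY (code)
) ENGINE = InnoDB;

A separate auto-increment code_id mainly pays off if other tables will reference these rows and you want a compact integer foreign key; for a standalone lookup list, the code itself works fine as the primary key.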

Oracle 11g http listener configuration

Posted: 03 Jun 2013 07:06 PM PDT

Could someone please point me to how to configure the Oracle DB to be able to display PL/SQL Server Pages? I have successfully created and mapped the DAD with DBMS_EPG.

Here is the listener:

# listener.ora Network Configuration File: D:\app\Ja\product\11.2.0\dbhome_1\NETWORK\ADMIN\listener.ora
# Generated by Oracle configuration tools.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 8080))
    )
  )

ADR_BASE_LISTENER = D:\app\Ja

When I want to enter

localhost:8080/my_dad/home

I get the error: No data received.

How do I solve this?
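The embedded PL/SQL gateway is served by the XML DB protocol server, not by a plain TCP entry in listener.ora, so the 8080 DESCRIPTION block isn't what publishes it; instead register the HTTP port with XDB:

-- remove the 8080 entry from listener.ora, then:
EXEC DBMS_XDB.SETHTTPPORT(8080);
-- verify:
SELECT DBMS_XDB.GETHTTPPORT FROM DUAL;
-- lsnrctl status should now show an HTTP presentation endpoint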

What is a good way to Merge 2 range sets into a single view

Posted: 03 Jun 2013 03:31 PM PDT

The real purpose of this has to do with effectivity dates and contains additional keys and fields, but I simplified it for the sake of, well, simplicity.

Sample:

test_Widget Table
theFirst   theLast    Letter
1          5          A
7          10         (null)
11         15         C

test_Default Table
theFirst   theLast    Letter
1          8          T
9          12         U
13         20         V

Expected Output:
theFirst   theLast    Letter
1          5          A
7          8          T
9          10         U
11         15         C

Here's what I'm trying to do:

Use the Widget and Default tables to craft a query that chooses, for every range of numbers, the proper Letter.

Use Widget.Letter for all Letters, but in the case of a NULL in Widget use the Default.Letter. Only include ranges that are inclusive to the Widget table

I have working code that does it, but I'm wondering if there's something a bit more elegant, built-in, etc to handle this.

My code:

select distinct
    case when w.letter is null then greatest(w.thefirst, d.thefirst) else w.thefirst end as tfirst,
    case when w.letter is null then least(w.thelast, d.thelast) else w.thelast end as tlast,
    case when w.letter is null then d.letter else w.letter end as letter
from test_widget w
inner join test_default d
    on  w.thefirst <= d.thelast
    and w.thelast >= d.thefirst

Connection to local SQL Server 2012 can be established from SSMS 2008 but not from SSMS 2012

Posted: 03 Jun 2013 02:48 PM PDT

I have two local SQL Server instances running on my local machine. The first is SQL Server 2008 R2 Enterprise Edition (named MSSQLSERVER) and the 2nd is SQL Server 2012 Business Intelligence Edition.

My problem is with SSMS 2012 which can connect to distant servers but not the local 2012 instance; I can however connect to this instance from SSMS 2008.

The error message I get when trying to login is

Login Failed. The login is from an untrusted domain and cannot be used with Windows Authentication. (Microsoft SQL Server, Error: 18452)

I must point out that I don't have the necessary privileges to access SQL Server Configuration Manager (blocked by group policy).

Any help would be appreciated.

Problem with sorting matrix groups in SSRS

Posted: 03 Jun 2013 02:24 PM PDT

I've hit the following issue in SQL Server 2005 Reporting Services

This is a slightly odd and complicated situation: I have a table containing various groups and within each group is also embedded a matrix. The rows of each matrix are sorted by a different field to their group expression. So far, so good.

The problem arises in the table header: it needs to mirror exactly the format of each group to give a "Grand Total" type functionality. I've copied exactly the matrix from the table's groups, but it refuses to sort the data correctly. I've tried changing the sort direction - even removing the sort altogether - but it makes no difference to the order in which the data is displayed.

Since I originally hit this issue, I've found a workaround involving simply grouping by the "sort" column - this works since it was only a calculated field created to sort the matrix groups.

Would still be interested to know why this is an issue for a matrix in the table header but not the group header.

Any ideas?

Create database role to allow read access to all tables; write access to some

Posted: 03 Jun 2013 02:07 PM PDT

We have a database with over 1000 tables. I need to create a role that allows read only on all tables, as well as write access to two specific tables.

I was messing around with database roles, but whenever I went to add the tables, I had to hand select all 1000... is there a better way to do this?
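Roles can be nested inside the fixed db_datareader role, so there is no need to touch the 1000 tables; only the two writable ones get explicit grants. A sketch (names are placeholders):

CREATE ROLE read_plus_two;
EXEC sp_addrolemember N'db_datareader', N'read_plus_two';   -- read on every table
GRANT INSERT, UPDATE, DELETE ON dbo.TableA TO read_plus_two;
GRANT INSERT, UPDATE, DELETE ON dbo.TableB TO read_plus_two;
EXEC sp_addrolemember N'read_plus_two', N'SomeUser';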

Import 4 million rows

Posted: 03 Jun 2013 01:22 PM PDT

Hi, I need to import about 4 million records into my DB. The file is a CSV. I have tried phpMyAdmin and it times out. Then I tried RazorSQL and it's way too slow. Any suggestions for handling this many inserts?
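LOAD DATA INFILE is MySQL's bulk-load path and is usually orders of magnitude faster than row-by-row INSERTs; a sketch (path and column layout assumed to match the table):

LOAD DATA LOCAL INFILE '/path/to/file.csv'
INTO TABLE mytable
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;   -- skip the header row, if there is one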

Getting Random Sample from large database MYSQL (No Auto Inc Field)

Posted: 03 Jun 2013 02:52 PM PDT

You were so helpful with my last question, I thought I'd throw another one at you that's stumping me.

Basically I've got a table; the requirement from the company was that it was all supposed to be 1 table, so I've got myself 1000 columns and 100 million rows (... life sucks).

Anyway, they want me to generate a 500,000-row random sample of their data. The problem is my PK is a VARCHAR(50) of their ids and has no increment system to it.

My rig is not overly powerful, so an ORDER BY RAND() would be very ill advised!

Currently I am working on a stored procedure to maybe make a temp table then export it to csv?

It's not currently done, but if I am going the wrong direction let me know!

DELIMITER $$
CREATE PROCEDURE randomResults()
BEGIN
    -- declarations must come in this order: variables, cursor, handler
    DECLARE nth INTEGER DEFAULT 0;
    DECLARE hits INTEGER DEFAULT 0;
    DECLARE done BOOLEAN DEFAULT FALSE;
    DECLARE v_id VARCHAR(50);
    DECLARE results CURSOR FOR SELECT staffhashID FROM massiveTable;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

    CREATE TABLE tmp LIKE massiveTable;

    -- open
    OPEN results;
    -- fetch
    WHILE hits <= 500000 AND NOT done DO
        FETCH results INTO v_id;    -- MySQL cursors fetch into variables
        IF (nth MOD 5 = 0) THEN     -- keep every 5th row
            INSERT INTO tmp SELECT * FROM massiveTable WHERE staffhashID = v_id;
            SET hits = hits + 1;
        END IF;
        SET nth = nth + 1;
    END WHILE;
    -- end fetch

    -- close
    CLOSE results;
END$$
DELIMITER ;

I am sure you guys have a better way of doing this, was just planning to get every 5th record and put it into a tmp table then export it...

My Table:

CREATE TABLE massiveTable (
  staffhashID VARCHAR(50) PRIMARY KEY,
  email VARCHAR(100),
  marketCat1 TINYINT(1),
  ...  -- 1000 of them
) ENGINE = MYISAM;

an example of the PK: 433kfdgsd3gfsd232
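If a statistically "every-nth-ish" sample is acceptable, a sketch that skips the cursor entirely: filtering each row with RAND() avoids the full sort that makes ORDER BY RAND() so expensive, at the cost of an approximate rather than exact sample size.

-- ~0.5% of 100M rows ≈ 500k kept; one sequential scan, no sorting
CREATE TABLE tmp LIKE massiveTable;
INSERT INTO tmp SELECT * FROM massiveTable WHERE RAND() < 0.005;

-- export, if a csv is the end goal (path is a placeholder):
SELECT * FROM tmp
INTO OUTFILE '/tmp/sample.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';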

Is there a way to find the least recently used tables in a schema?

Posted: 03 Jun 2013 03:21 PM PDT

Is there a way to find the least recently used tables in a MySQL schema? Besides going into data directories? I was hoping there was a metadata or status trick-- but Update_Time in STATUS and INFORMATION_SCHEMA is always NULL.

MySql one time event never runs?

Posted: 03 Jun 2013 03:17 PM PDT

Please have a look at below events

1) CREATE EVENT Test1
   ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 SECOND
   ON COMPLETION PRESERVE ENABLE
   DO ...

2) CREATE EVENT Test2
   ON SCHEDULE EVERY 20 SECOND STARTS CURRENT_TIMESTAMP
   ON COMPLETION PRESERVE ENABLE
   DO ...

I expect event Test1 to run one time after 20 seconds but it never runs. Event Test2 is working fine.

Any idea? Thanks.

Optimize command issuing "'View is not base table" and "Corrupt" error messages

Posted: 03 Jun 2013 01:23 PM PDT

I have no experience with MySQL. My boss ran an optimize against a MySQL database. Against a bunch of views we got the message

"tablexyz" is not BASE TABLE

and the next message is

Corrupt

It seems to be only against views, we didn't get any of these error messages against the base tables.

Does this look like an incorrect error message, or do we have issues with our tables?

How to find Oracle home information on Unix?

Posted: 03 Jun 2013 08:16 PM PDT

Need help finding Oracle home path corresponding to a database instance in RAC environment. I am aware of few of the ways to achieve the same. Listing them below to avoid the same answers.

  1. /etc/oratab This file is not mandatory and hence may not have all instance information.

  2. Parsing contents of Listener.ora In RAC environment, the listener.ora can be located at non default location.

  3. use TNS_ADMIN to find Listener.ora location and parse the file.

  4. ORACLE_HOME env variable May not be set always.

  5. ps -ef | grep tns to get the home path from service name. Gives path for currently running listener

  6. select "SYSMAN"."MGMT$TARGET_COMPONENTS"."HOME_LOCATION"
    from "SYSMAN"."MGMT$TARGET_COMPONENTS"
    where "SYSMAN"."MGMT$TARGET_COMPONENTS"."TARGET_NAME" = <Database SID>

    The schema sysman can be dropped after first time login to oracle.

  7. SELECT NVL(SUBSTR(FILE_SPEC, 1, INSTR(FILE_SPEC, '\', -1, 2) -1) , SUBSTR(FILE_SPEC, 1, INSTR(FILE_SPEC, '/', -1, 2) -1)) FOLDER
    FROM DBA_LIBRARIES
    WHERE LIBRARY_NAME = 'DBMS_SUMADV_LIB';

    So if a DBA changes the Oracle home (and hence the location of libqsmashr.so) after installing Oracle, the path retrieved from the above query would be invalid.

  8. . oraenv: works only for 11g.

I am trying to find a generic way that works for all Oracle versions and does not depend on anything that may be missing from a DBA's environment.

Do you have any way other than listed above to do the same?
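One further candidate I have come across, as a hedged sketch (DBMS_SYSTEM is undocumented and needs EXECUTE granted by SYS, so I am not sure it counts as dependable): ask the server process for its own environment.

-- Sketch: read ORACLE_HOME from the instance's own environment
SET SERVEROUTPUT ON
DECLARE
    v_home VARCHAR2(200);
BEGIN
    SYS.DBMS_SYSTEM.GET_ENV('ORACLE_HOME', v_home);
    DBMS_OUTPUT.PUT_LINE('ORACLE_HOME = ' || v_home);
END;
/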

Many Thanks in advance.

Storing dynamic data in NoSQL

Posted: 03 Jun 2013 06:46 PM PDT

I have a scenario where I need to store unstructured data, but the rest of my data is structured and relational. An example of the type of unstructured data is explained below:

User Type 1:
    How do you blah blah : 5 fields

User Type 2:
    How do you blah blah : 3 fields

User Type 3:
    How do you blah blah : 7 fields

All 3 types are asked the same question, "How do you blah blah", but each user type answers it using a different number of fields. And there can be many different user types.

For the relational data I'm using MySQL, but I'm a little confused about how to store this unstructured data:

  1. Serialize as JSON and store in MySQL (see the sketch after this list)
  2. Use NoSQL
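For option 1, a minimal sketch of what this could look like (every name here is made up):

-- Keep the relational keys; serialize the variable answer fields as a JSON string
CREATE TABLE answers (
    answer_id   INT AUTO_INCREMENT PRIMARY KEY,
    user_id     INT NOT NULL,
    question_id INT NOT NULL,
    fields_json TEXT NOT NULL  -- e.g. '{"f1": "...", "f2": "...", "f3": "..."}'
);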

My requirements are high reads, average updates, average inserts & no deletes. No JOINs needed. I need guaranteed writes & high availability. If I were to choose a NoSQL store, it'd be an AP type according to the CAP theorem. I won't be hitting millions of records anytime soon.

I also plan to provide a text search over this data in the future, but it doesn't need to be a real-time search, so I can always index the data with Lucene periodically. Of course, document-based NoSQL implementations do provide this out of the box. But I have read in a few places advice against storing JSON data in MySQL, while adding a NoSQL layer could be overkill.

What should I do, and if you do advise going with a NoSQL DB, which one should I choose?

PostgreSQL 9.2.4 (Windows 7) - Service won't start, “could not load pg_hba.conf”

Posted: 03 Jun 2013 02:28 PM PDT

I am trying to get Postgres 9.2.4 to run as a service on Windows 7. After installing postgres, the service was running fine. However, after setting postgres up as a server for another program, the service stopped running. When I try to start the service now, I get a message saying :

"The postgresql-x64-9.2 - PostgreSQL Server 9.2 service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs."

When I try running the program that should use the database server, I get this error :

"A problem was encountered while attempting to log into or create the production database. Details: Could not connect to the server; Could not connect to remote socket. The application must now close"

I have also encountered this error once while opening the same program :

"A problem was encountered while attempting to log into or create the production database. Details: FATAL: could not load pg_hba.conf The application must now close."

I have tried running the service logged on as a local system account as well as my own account (In the postgres service properties) to no avail. I also tried restarting my computer. After a lot of troubleshooting online, I learned that a good thing to check is the pg_log file. Here are the contents of the latest pg_log entry :

2013-05-29 14:59:45 MDT LOG:  database system was interrupted; last known up at 2013-05-29 14:58:01 MDT
2013-05-29 14:59:45 MDT LOG:  database system was not properly shut down; automatic recovery in progress
2013-05-29 14:59:45 MDT LOG:  record with zero length at 0/175BB98
2013-05-29 14:59:45 MDT LOG:  redo is not required
2013-05-29 14:59:45 MDT LOG:  database system is ready to accept connections
2013-05-29 14:59:45 MDT LOG:  autovacuum launcher started
2013-05-29 15:07:00 MDT LOG:  local connections are not supported by this build
2013-05-29 15:07:00 MDT CONTEXT:  line 1 of configuration file "C:/PostgreSQL/data/pg_hba.conf"
2013-05-29 15:07:00 MDT FATAL:  could not load pg_hba.conf
2013-05-29 15:07:00 MDT LOG:  local connections are not supported by this build
2013-05-29 15:07:00 MDT CONTEXT:  line 1 of configuration file "C:/PostgreSQL/data/pg_hba.conf"
2013-05-29 15:07:00 MDT FATAL:  could not load pg_hba.conf
2013-05-29 15:09:03 MDT LOG:  received fast shutdown request
2013-05-29 15:09:03 MDT LOG:  aborting any active transactions
2013-05-29 15:09:03 MDT LOG:  autovacuum launcher shutting down
2013-05-29 15:09:03 MDT LOG:  shutting down
2013-05-29 15:09:03 MDT LOG:  database system is shut down

It seems to be having issues with the pg_hba.conf file, which looks like this :

local all all trust
host  all all 127.0.0.1 255.255.255.255 trust
host  all all 0.0.0.0 0.0.0.0 trust

As per many suggestions online, I tried editing the top line to a number of different alternatives (host all all trust / host all 127.0.0.1/32 trust / host all 192.168.0.100/24 trust , etc.). This made sense to me, as the log file was saying that local connections are unsupported by postgres and was also pointing to that line. However, none of my changes had any effect. I tried restarting my computer after every change but nothing made any difference.

When I searched for examples of what a pg_hba.conf file normally looks like, the examples looked slightly different from my file. I noticed that in the PostgreSQL program file, in addition to pg_hba.conf, there was also a "20130529-150444-old-pg_hba.conf" file which looked a lot more like the examples I was finding online. This file has several lines of comments before these last few lines :

# TYPE  DATABASE        USER            ADDRESS                 METHOD
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
# IPv6 local connections:
host    all             all             ::1/128                 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
#host   replication     postgres        127.0.0.1/32            md5
#host   replication     postgres        ::1/128                 md5

I was hoping that this was the original pg_hba.conf file and that if I replaced the new file with the contents of the old one, postgres would start working again. No such luck. I have been hoping for more error files to be logged in pg_log to see if the previously stated error had disappeared or changed to something else, but no more files have been logged.

I have been troubleshooting online for a few days now and nothing I've found has worked. Sorry for having such a long question, but I wanted to be thorough and include all relevant information. I would appreciate it if anyone could shed some light on this problem or offer suggestions.

Relation to original tables or to existing linking table

Posted: 03 Jun 2013 05:22 PM PDT

In my database I have a table with different settings for my app. Each setting is in relation to a guest (table guests) and an event (table events). So basically each guest has specific settings for each event he is linked to.

Every guest that has settings is already linked to the events for other reasons, so there is an existing event_guest table with the necessary links.

So I'm not exactly sure about how I should link the settings table with the others.

Option 1

I link the settings with the table event_guest which links guests and events.

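To make option 1 concrete, a sketch of the schema I have in mind (column names are assumed):

-- settings reference the linking row, so a setting can only exist for a
-- guest/event pair that is actually linked
CREATE TABLE event_guest (
    event_id INT NOT NULL,
    guest_id INT NOT NULL,
    PRIMARY KEY (event_id, guest_id)
);

CREATE TABLE settings (
    setting_id INT AUTO_INCREMENT PRIMARY KEY,
    event_id   INT NOT NULL,
    guest_id   INT NOT NULL,
    name       VARCHAR(50) NOT NULL,
    value      VARCHAR(255),
    FOREIGN KEY (event_id, guest_id) REFERENCES event_guest (event_id, guest_id)
);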

Option 2

I link the settings with the "original" tables guests and events.


Spontaneously I would go with option 1, but I'm a little bit confused about it...

My concern with option 1 is that if I have a lot of deep relations, maybe even another table hanging off settings, I need more complex SQL queries to get, for example, data from settings, guests, and events. Could this become a performance issue?

Which is the better solution and what are its advantages and disadvantages?

PostgreSQL backup error

Posted: 03 Jun 2013 09:23 PM PDT

I am trying to back up my company's PostgreSQL database using pgAdmin III: I selected our company DB from the tree, right-clicked on it and selected 'Backup', selected the destination folder for the backup file, then clicked OK. Then I got this error message:

C:/Program Files/PostgreSQL/9.0/bin\pg_dump.exe --host localhost --port 5432
    --username "pgres1" --format tar --blobs --encoding UTF8 --verbose
    --file "E:\DB_Backup\DB_Backup_TEST.backup" \"CompanyDB_TEST\"

pg_dump: [archiver (db)] connection to database ""CompanyDB_TEST"" failed:
    FATAL: database ""CompanyDB_TEST"" does not exist
pg_dump: *** aborted because of error

Process returned exit code 1.

So can someone please help me by telling me what I am doing wrong here?

I am 100% sure that CompanyDB_TEST does exist.

I am running the PostgreSQL under Windows Server 2003.
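One detail that stands out (an observation, not a confirmed fix): the error reports the name as ""CompanyDB_TEST"", with doubled quotes, which suggests the escaped \" quotes in the generated command became part of the database name pg_dump looked for. A call with the name passed plainly would look like this:

pg_dump.exe --host localhost --port 5432 --username "pgres1"
    --format tar --blobs --encoding UTF8 --verbose
    --file "E:\DB_Backup\DB_Backup_TEST.backup" CompanyDB_TEST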

Slow SSRS Report in production

Posted: 03 Jun 2013 03:23 PM PDT

I have an SSRS report which gets its data by firing a series of stored procedures.

Now the report is timing out big time when run in production, yet when I pull down the prod database and restore it to development, the report runs fine.

I was thinking of setting up a SQL Server Profiler trace in production in the hope that it will tell me something, e.g. high disk I/O at the time the report is run.

What else should I be doing? Something with perfmon?
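While waiting on a trace, a quick look at the plan cache might narrow things down (a sketch; it needs VIEW SERVER STATE, and prod-only slowness often turns out to be parameter sniffing or stale statistics rather than hardware):

-- Most expensive cached statements by total elapsed time
SELECT TOP (10)
       qs.total_elapsed_time / 1000 AS total_elapsed_ms,
       qs.execution_count,
       SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                 (CASE qs.statement_end_offset
                      WHEN -1 THEN DATALENGTH(st.text)
                      ELSE qs.statement_end_offset
                  END - qs.statement_start_offset) / 2 + 1) AS stmt
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;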

Create Oracle ASM Disks Without Installing Oracle

Posted: 03 Jun 2013 03:21 PM PDT

I have been asked to set up workstations (VMs) for an instructor's Oracle 11g R2 Workshop. I am finding the documentation downloaded from the Oracle Academy site extremely underwhelming. I am supposed to configure the environment with a set of ASM disks as loopback (LO) devices that the students will then use to create an ASM disk group when they install Oracle Grid, and as disks for the Fast Recovery Area.

My understanding is that the students will be installing Oracle Grid and Oracle Database and then creating a database, and that I do not need to have Oracle installed beforehand. However, the documentation makes use of the oracleasm command, which comes with the Oracle installation.

Is there a way to set this up without installing Oracle? Has anyone ever set up or taught these workshops? I tried downloading the Oracle ASMLib tool here, but it says it needs oracleasm as a dependency.

Error 1044 Access denied to user

Posted: 03 Jun 2013 04:23 PM PDT

This is driving me crazy.

When I try to create a few tables from my Workbench model I get this error 1044.

I've been trying to find a solution but nothing works for me.

Curiously when I run SELECT USER(),CURRENT_USER(); I get two versions of the same user. One is techboy@(an ip address) and the other is techboy@%.

Access is denied to both at one point or another.

The MySQL server is a remotely hosted server with the user permissions correctly set.
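A check that may help pin down which of the two identities is short on privileges (a sketch; the host value is hypothetical):

-- CURRENT_USER() shows which account actually authenticated;
-- compare the grants for both candidates
SHOW GRANTS FOR 'techboy'@'%';
SHOW GRANTS FOR 'techboy'@'203.0.113.10';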

Multiple database servers for performance vs failover

Posted: 03 Jun 2013 05:23 PM PDT

If I have two database servers, and I am looking for maximum performance vs high-availability, what configuration would be best?

Assuming the architecture is two load-balanced web/app servers in front of two db servers, will I be able to have both db servers active with synced data, set up as web1 to db1 and web2 to db2? Is this active/active?

I'm also aware that the two db servers can have their own schema to manually 'split' the db needs of the app. In this case daily backups would be fine. We don't have 'mission critical data.'

If it matters, we have traffic around 3,000-7,000 simultaneous users.

Cross Database transactions - Always on

Posted: 03 Jun 2013 08:23 PM PDT

Recently we have been working on a POC to get AlwaysOn working, and we happened to see this article in BOL:

http://technet.microsoft.com/en-us/library/ms366279.aspx

This article suggests that there would be logical inconsistency even when we are dealing with synchronous mode, but will this actually be the case?

Consider, for example, databases A and B on which the transaction runs, where A is in high-safety mode and B is not mirrored. The log of A has to reach the mirror database before the principal commits, so eventually the two-phase commit (including the part of the transaction on B) succeeds; yet the article suggests the log will not be transferred in the first place while the commit on B still results, which seems contradictory. Please help me understand whether the statement in the article is true. If yes, how can it be? :)

PS: Please let me know if I need to provide more information around this.

Maatkit shows the MySQL replication error in one table, but won't fix it

Posted: 03 Jun 2013 03:06 PM PDT

We use MySQL 5.1 with a primary/secondary replication setup. I use mk-table-checksum from Maatkit to generate checksums on the master and perform consistency checks on the replica.

mk-table-checksum is used to generate the checksums, and is pretty simple:

% mk-table-checksum localhost \
      --quiet \
      --replicate=test.checksum --empty-replicate-table --create-replicate-table \
      --chunk-size=500000 --sleep-coef=0.5 \
      --ignore-tables=mysql.general_log_backup,mysql.general_log,mysql.slow_log,mysql.help_topic,mysql.tables_priv

These checksums are then replicated to the Replica (slave) server, where we compare the checksums.

Around a month ago we had a server failure. We fixed the server, started up the database and resumed replication. Everything seems to be working fine-- Slave_IO_Running and Slave_SQL_Running are both set to "Yes", everything appears to be getting replicated successfully except for the error mentioned below, etc.

There is one problem, however. When I check the table using the following recommended method on the Replica, I see a consistency problem.

# mysql --execute "SELECT db, tbl, chunk, this_cnt-master_cnt AS cnt_diff, \
>         this_crc <> master_crc OR ISNULL(master_crc) <> ISNULL(this_crc) AS crc_diff \
>         FROM test.checksum \
>         WHERE master_cnt <> this_cnt OR master_crc <> this_crc OR ISNULL(master_crc) <> ISNULL(this_crc);"
+--------+------------------+-------+----------+----------+
| db     | tbl              | chunk | cnt_diff | crc_diff |
+--------+------------------+-------+----------+----------+
| plants | trees_properties |    40 |        0 |        1 |
+--------+------------------+-------+----------+----------+

This problem still occurs even after forcing another sync with mk-table-sync on the master. I will probably need to replace the corrupt data by hand, but I am unclear what the actual problem is. What is a 'chunk'?

My guess is that only a handful of rows are different, and I can probably replace those rows by hand. But how can I list which rows are missing?
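For listing the differing rows without changing anything, mk-table-sync's print mode should work (a sketch; host names are placeholders):

# Print, but do not execute, the statements that would bring the replica's
# copy of plants.trees_properties back in sync with the master
mk-table-sync --print --sync-to-master h=replica-host,D=plants,t=trees_properties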

I realize that Maatkit has been deprecated in favor of Percona Tools. I plan to upgrade some day. But for now, "if it ain't broke, don't fix it."

MySQL generic trigger to save identification data for later accessing of the changed row

Posted: 03 Jun 2013 07:23 PM PDT

I am pretty inexperienced with this.

I need a generic trigger that creates and saves, in a fixed table, some sort of identification data for a changed row in a generic (any) table. The identification data would later be used to SELECT the changed row in the given table.

Can this be done without previously knowing the table structure?

The only idea I had (way too inefficient in my opinion, and it also requires previous knowledge of the table's column names) is to save a hash:

MD5(concat(NEW.field1, NEW.field2, NEW.field3, ...))  

then

SELECT * FROM changed_table WHERE hash = MD5(concat(field1, field2, field3, ...))

to identify the changed row in the table which triggered the change.
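For what it's worth, a per-table version of this idea would look something like the sketch below (all names are hypothetical, and MySQL triggers cannot be written generically for any table, so one trigger per table is still required):

-- Log table plus an example AFTER UPDATE trigger implementing the hash idea
CREATE TABLE change_log (
    table_name VARCHAR(64) NOT NULL,
    row_hash   CHAR(32)    NOT NULL,
    changed_at DATETIME    NOT NULL
);

DELIMITER $$
CREATE TRIGGER changed_table_au AFTER UPDATE ON changed_table
FOR EACH ROW
BEGIN
    INSERT INTO change_log (table_name, row_hash, changed_at)
    VALUES ('changed_table',
            MD5(CONCAT_WS('|', NEW.field1, NEW.field2, NEW.field3)),
            NOW());
END$$
DELIMITER ;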

I would greatly appreciate any help or suggestions!

TempDB data files don't shrink well on SQL 2008

Posted: 03 Jun 2013 06:23 PM PDT

So ideally you want to pre-size your TempDB data and log files appropriately so that this isn't a concern, but sometimes a rogue developer runs a crazy huge query on a production server during work hours, causing the TempDB data files to blow up huge.

If TempDB was literally the only thing on the drive, then I could probably just leave it like that, but on some servers I have several SQL instances that all share the same TempDB drive.

So how can I shrink these TempDB data files without restarting the instance?

I normally try:

DBCC SHRINKFILE (name = 'tempdev', size = 5000)  

This worked fairly consistently in SQL 2000 and 2005 (assuming the actual tempdb activity had tapered off), but seems to not work very often in 2008 (on my current server, it only worked on 1 of 4 data files, the others continue to remain 2-3x larger).
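One workaround that sometimes helps on 2008 (hedged: it flushes the plan and system caches, so expect a brief performance dip) is to clear the caches that can pin tempdb pages, then retry the shrink:

-- Clear caches that may hold tempdb allocations, then shrink again
DBCC FREESYSTEMCACHE ('ALL');
DBCC FREEPROCCACHE;
DBCC SHRINKFILE (N'tempdev', 5000);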

How to do MySQL User Accounting

Posted: 03 Jun 2013 03:10 PM PDT

I can't seem to find any information on this anywhere. I found a question here called Monitor MySQL activity per database? but that's not quite the answer either.

I'd like to be able to log the following to a system log somewhere, preferably via syslog:

  • User (not master/slave) login
  • User logout
  • User connect time
  • User IP

Something similar to last and lastb for MySQL. Can this be done? Is anything already written and in the Ubuntu and Red Hat Enterprise Linux repositories? Can MaatKit do this?
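One low-tech option I know of (a sketch; it logs connects only, not disconnects or connect time, and init_connect does not fire for users with SUPER):

-- Classic init_connect trick: record each new connection in a table.
-- Note: connecting users need INSERT on this table, or their login will fail.
CREATE TABLE mysql.connection_log (
    connect_time  DATETIME,
    user_host     VARCHAR(255),
    connection_id INT
);

SET GLOBAL init_connect =
  'INSERT INTO mysql.connection_log VALUES (NOW(), CURRENT_USER(), CONNECTION_ID());';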

Thanks!

Is there a way to measure replication lag in MySQL with a resolution less than 1 second?

Posted: 03 Jun 2013 03:07 PM PDT

Is there a way by which you can measure replication lag in MySQL with a resolution of less than 1 second?

That is, can replication lag be measured on the microsecond or millisecond level?
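The usual approach (a sketch of the heartbeat idea that mk-heartbeat/pt-heartbeat automate): have an external job on the master write a sub-second wall-clock timestamp, since NOW() in older MySQL has only 1-second resolution, and compare it on the replica.

-- On the master (the ts value is written by an external job every ~100 ms):
CREATE TABLE heartbeat (
    id INT PRIMARY KEY,
    ts VARCHAR(26) NOT NULL  -- e.g. '2013-06-03 15:07:00.123456', written by the job
);

-- On the replica: lag = local wall clock minus the last replicated ts value.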

[SQL Server] query to search

[SQL Server] query to search


query to search

Posted: 02 Jun 2013 08:17 PM PDT

I have a table with fields and data types like:

companycode [nchar(4)], actcode [nchar(12)], vounum [nchar(14)], trnam [decimal(18,6)]

I would like to build a query that will bring back almost 900 rows, something like:

select comcod, actcode, vounum, trnam
where comcod = 3305
and vounum in ('BC201203000073','BC201204000001','BC201204000002','BC201205000001','BC201206000001','BC201206000002')

Here I would like to put 900 (nine hundred) vounum values and run the query. The problem is the data entry: it is time-consuming to edit every value, wrapping it in single quotes and appending a comma. Is there any way to paste the vounum list without putting single quotes around each value and appending a comma for every vounum, and still run the query successfully?
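One pattern that avoids hand-quoting every value (a sketch; the source table name is made up): paste the raw list into a temp table once, then join to it.

-- Load the pasted values, then join instead of a 900-item IN list
CREATE TABLE #vlist (vounum nchar(14) PRIMARY KEY);

INSERT INTO #vlist (vounum) VALUES
('BC201203000073'),
('BC201204000001');
-- ...paste the remaining values; a spreadsheet or an editor column edit can add
-- the quote/comma wrapping to the whole pasted block in one step

SELECT t.comcod, t.actcode, t.vounum, t.trnam
FROM dbo.YourTable AS t      -- hypothetical table name
JOIN #vlist AS v ON v.vounum = t.vounum
WHERE t.comcod = 3305;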

How can an AFTER INSERT trigger send mail to Outlook? Example please.

Posted: 29 Jan 2013 11:37 AM PST

I want to create an AFTER INSERT trigger on a table that sends mail to Outlook. Could someone show me an example trigger, please? Thank you.
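SQL Server cannot drive Outlook directly, but Database Mail can deliver a message to any mailbox, including one read in Outlook. A minimal sketch, assuming a configured Database Mail profile named 'MailProfile' and a table named dbo.Orders (both hypothetical); note that sending mail synchronously inside a trigger is usually discouraged on busy tables:

-- Sketch: AFTER INSERT trigger that emails via Database Mail
CREATE TRIGGER trg_Orders_Notify ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @body NVARCHAR(MAX) =
        N'New rows inserted: ' + CAST((SELECT COUNT(*) FROM inserted) AS NVARCHAR(10));
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'MailProfile',
        @recipients   = N'someone@example.com',
        @subject      = N'Insert notification',
        @body         = @body;
END;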

INSERTING RECORDS WITH TRIGGER

Posted: 03 Jun 2013 03:54 AM PDT

INSERT INTO ACC_JVDTL_TRN (FINYRTRN_ID, VRDT, LEDGER_ID, AMOUNT)
VALUES ((select i.FINYRTRN_ID from inserted i),
        (select i.BILLDATE from inserted i),
        (select i.BILL_LEDG_ID from inserted i),
        (select i.BILLAMOUNT from inserted i))

The above portion is part of a trigger. All of it is working fine except AMOUNT, which always inserts 0.00 instead of the actual amount. Please help. Thanks in advance, PRASAD SJ
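Separately from the AMOUNT issue: scalar subqueries against inserted fail or misbehave when more than one row is inserted at a time. A set-based form (a sketch using the same column names) handles any number of rows:

-- One INSERT...SELECT per trigger firing, whatever the row count
INSERT INTO ACC_JVDTL_TRN (FINYRTRN_ID, VRDT, LEDGER_ID, AMOUNT)
SELECT i.FINYRTRN_ID, i.BILLDATE, i.BILL_LEDG_ID, i.BILLAMOUNT
FROM inserted AS i;

If AMOUNT is still 0.00 in this form, the source column BILLAMOUNT itself is worth checking at the time the trigger fires.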

Loading .csv File

Posted: 03 Jun 2013 05:19 AM PDT

Hi, I was trying to load a .csv file into my database. It is a comma-delimited file, and for one of the columns there is a comma in between the data, like "Texas,Houston". Can someone help me get rid of the comma in between? The package I created recognizes the value after the comma as a new column, but it should not be like that. Can any of you guys help me with this? I was getting the error in the flat file source itself. I thought of using a Derived Column, but the package fails at the source point itself.
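If the field is wrapped in quotes in the source file, setting the flat file connection manager's text qualifier to a double quote keeps the embedded comma from splitting the column. A hypothetical sample row of the kind that then parses cleanly:

id,location,amount
1,"Texas,Houston",100.00

If the source file is not quoted at all, the flat file source alone has no reliable way to tell the embedded comma from a delimiter.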

Calculating Sales History for Months/Years

Posted: 03 Jun 2013 01:42 AM PDT

I have a table with TrnDate, TrnYear, TrnMonth, StockCode, InvoiceQty. TrnYear and TrnMonth are the financial periods and don't always tie back to the TrnDate month/year; for example, 31/05/2013 might fall into period 6/2013. What I am after is a script that will calculate historic InvoiceQty for: StockCode, MonthToDate, LastMonth, Previous6Months, Previous12Months, Previous18Months, Previous24Months, Previous36Months. The hard bit is when you want to go back 6 months and the year changes... hopefully this makes sense?
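One way to make the year rollover painless (a sketch; the table name is assumed) is to collapse (TrnYear, TrnMonth) into a single running period number, so "6 periods back" becomes plain subtraction:

-- PeriodNo = TrnYear * 12 + TrnMonth survives year boundaries
;WITH p AS (
    SELECT StockCode, InvoiceQty, TrnYear * 12 + TrnMonth AS PeriodNo
    FROM dbo.SalesHistory          -- hypothetical table name
), cur AS (
    SELECT MAX(PeriodNo) AS CurP FROM p
)
SELECT p.StockCode,
       SUM(CASE WHEN p.PeriodNo = c.CurP     THEN p.InvoiceQty ELSE 0 END) AS MonthToDate,
       SUM(CASE WHEN p.PeriodNo = c.CurP - 1 THEN p.InvoiceQty ELSE 0 END) AS LastMonth,
       SUM(CASE WHEN p.PeriodNo > c.CurP - 6 THEN p.InvoiceQty ELSE 0 END) AS Previous6Months
FROM p CROSS JOIN cur AS c
GROUP BY p.StockCode;

The remaining buckets (12/18/24/36 months) follow the same pattern with different offsets.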

SQL: Excel file has bigger size using OPENQUERY to update it

Posted: 02 Jun 2013 09:23 PM PDT

Maybe the solution is easy, but I can't find it, so I'm writing here for some help. We have this SQL function:

CREATE FUNCTION [dbo].[updateExcel]
(
    -- Add the parameters for the function here
    @cell VARCHAR(4),
    @description VARCHAR(200)
)
RETURNS BIT
AS
BEGIN
    DECLARE @sql NVARCHAR(1000)
    SET @sql = 'UPDATE openquery(LinkedServer2ExcelFile, ''SELECT * FROM [Sheet1$' + @cell + ':' + @cell + ']'') set F1 = ''' + @description + ''''
    --PRINT @sql
    EXEC sp_executesql @sql
    RETURN 0
END

which we use to update an Excel file:

EXEC @Result = updateExcel 'somecell', 'somevalue'

The problem is that after this update the Excel file has a bigger size. But when we open it and save it again, the file's size gets back to normal. I hope to find some answers here. Thanks!

Get Value from another table

Posted: 02 Jun 2013 08:43 PM PDT

I have two tables, let's say Table1 and Table2, as below.

Table1:

Col1 Col2 Col3
1    _A   11
2    _B   12
3    _C   12
4    _A   11

Table2:

ID  Val
_A  A
_B  B
_C  C
11  AA
12  BB
13  CC

I need an output from Table1 and Table2 as below:

1 A AA
2 B BB
3 C BB
4 A AA

I am very new to SQL Server. Can you please help me in getting this output? Thank you in advance.
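If I read the sample data correctly, Table2 is a single lookup used by both Col2 and Col3, so joining it twice should produce the desired output. A sketch (if Col3 is an int, it may need a cast to match a varchar ID column):

-- Join the lookup table once per column being translated
SELECT t1.Col1,
       m2.Val AS Col2Val,
       m3.Val AS Col3Val
FROM Table1 AS t1
JOIN Table2 AS m2 ON m2.ID = t1.Col2
JOIN Table2 AS m3 ON m3.ID = t1.Col3;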

[MS SQL Server] Production Database in Suspect Mode...

[MS SQL Server] Production Database in Suspect Mode...


Production Database in Suspect Mode...

Posted: 03 Jun 2013 05:26 AM PDT

Production database in Suspect mode... Luckily we switched the nodes and are working on the other one, but we are trying to find the root cause. Here are the logs; can anyone help me with them?

05/30/2013 04:03:48,spid251,Unknown,The log for database 'XXXXXXX' is not available. Check the event log for related error messages. Resolve any errors and restart the database.
05/30/2013 04:03:48,spid251,Unknown,Error: 9001, Severity: 21, State: 1.
05/30/2013 04:03:48,spid254,Unknown,The log for database 'XXXXXX' is not available. Check the event log for related error messages. Resolve any errors and restart the database.
05/30/2013 04:03:48,spid254,Unknown,Error: 9001, Severity: 21, State: 1.
05/30/2013 04:03:48,spid244,Unknown,The log for database 'XXXXXXX' is not available. Check the event log for related error messages. Resolve any errors and restart the database.
05/30/2013 04:03:48,spid244,Unknown,Error: 9001, Severity: 21, State: 4.
05/30/2013 04:03:48,spid5s,Unknown,LogWriter: Operating system error 170(The requested resource is in use.) encountered.
05/30/2013 04:03:48,spid5s,Unknown,Error: 17053, Severity: 16, State: 1.
05/30/2013 04:03:48,spid5s,Unknown,Write error during log flush.
05/30/2013 04:03:48,spid5s,Unknown,LogWriter: Operating system error 170(The requested resource is in use.) encountered.
05/30/2013 04:03:48,spid5s,Unknown,Error: 17053, Severity: 16, State: 1.
05/30/2013 00:00:11,spid22s,Unknown,This instance of SQL Server has been using a process ID of 2508 since 5/29/2013 3:56:31 PM (local) 5/29/2013 7:56:31 PM (UTC). This is an informational message only; no user action is required.

How to Restore a DB from one instance to Another

Posted: 03 Jun 2013 05:05 AM PDT

Hello everyone, I need help... I have two instances, instance A and instance B. I need to restore a backup that is stored on instance A to instance B. How can I use the RESTORE DATABASE command? Thank you.
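A minimal sketch, assuming a backup of a database called MyDB taken on instance A (all names and paths are made up); running RESTORE FILELISTONLY first shows the logical file names to use in the MOVE clauses:

-- On instance B: inspect the backup, then restore it, relocating the files
RESTORE FILELISTONLY FROM DISK = N'D:\Backups\MyDB.bak';

RESTORE DATABASE MyDB
FROM DISK = N'D:\Backups\MyDB.bak'
WITH MOVE N'MyDB_Data' TO N'D:\Data\MyDB.mdf',
     MOVE N'MyDB_Log'  TO N'E:\Logs\MyDB_log.ldf',
     STATS = 10;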

Backups failing after full transaction log

Posted: 02 Jun 2013 04:12 PM PDT

The sequence of events went like this. The transaction log disk ran out of space. I was able to add space to the drive, and I then manually increased the size of the database log file that had run out of space through SSMS. That was successful; the file grew on the previously full disk.

I tried to run my transaction log backup jobs, but backup files were never written, so I canceled the job. I had previously canceled a scheduled transaction log backup job that had been running since the disk filled up.

I then tried to run a manual backup on a small database but am getting the error below:

Msg 3023, Level 16, State 2, Line 11
Backup, file manipulation operations (such as ALTER DATABASE ADD FILE) and encryption changes on a database must be serialized. Reissue the statement after the current backup or file manipulation operation is completed.

My server log file was huge, crashing SSMS and any text editor, so I ran sp_cycle_errorlog and created a new log. All I see now is:

BACKUP failed to complete the command BACKUP DATABASE dba. Check the backup application log for detailed messages.

Any suggestions?
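Error 3023 usually means an earlier backup or file operation is still active, possibly the canceled job still rolling back. A quick check (a sketch):

-- Look for a backup/restore still running or rolling back
SELECT session_id, command, status, percent_complete, wait_type
FROM sys.dm_exec_requests
WHERE command LIKE 'BACKUP%' OR command LIKE 'RESTORE%';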

Questions on Index Rebuilding (not reorganize)

Posted: 02 Jun 2013 11:07 PM PDT

Hi guys, I need some information on index rebuilding (sort_in_tempdb = off).

1. When rebuilding indexes (those > 30% fragmented), should we expect to see an increase in the user transaction log size (LDF), or an increase in the database size (MDF) instead?

2. Is there any way to calculate how much space is needed for rebuilding the indexes?

3. Let's say I need to rebuild 3 indexes, each about 5 GB. After rebuilding the first index, will the temporary space used for rebuilding it be reused for the 2nd index, or do we first need to perform a transaction log backup before the space can be reused?

Thanks!
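For sizing, a rough rule of thumb is that a rebuild needs about the size of the index again in free space (plus log space in the FULL recovery model). A sketch for measuring the indexes first:

-- Approximate per-index size in MB for the current database
SELECT i.name AS index_name,
       SUM(ps.used_page_count) * 8 / 1024 AS size_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE i.name IS NOT NULL
GROUP BY i.name
ORDER BY size_mb DESC;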

shrinking tempdb log file

Posted: 03 Jun 2013 12:12 AM PDT

Last week our tempdb log file grew in size and alerts started pouring in, as we had only a little space left on the drive. Now we have free space inside the log file. Can I release that space to the OS? If the file is in use and we issue a truncate, will we have consistency issues? Can we do it if free space is available?
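Shrinking releases only unused space, so with free space inside the file it is generally safe (a sketch; 'templog' is the default logical name, and the target below is 1024 MB):

USE tempdb;
-- Release unused log space back to the OS
DBCC SHRINKFILE (N'templog', 1024);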

[Articles] Relying on Algorithms

[Articles] Relying on Algorithms


Relying on Algorithms

Posted: 02 Jun 2013 11:00 PM PDT

Today Steve Jones looks at the way software affects the world, and why we might be worried about how things could change in the future.
