Thursday, May 2, 2013

[SQL Server 2008 issues] Outer apply random value


Outer apply random value

Posted: 01 May 2013 07:15 PM PDT

Hi, I'm having problems getting a random value with OUTER APPLY and NEWID(). Here's the DDL: [url=http://sqlfiddle.com/#!6/47f38/2]SQLFiddle[/url]

The wanted result would be something like this:

Volvo Red
Ford White
BMW Red

Thanks for your time
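For what it's worth, a minimal sketch of the usual workaround (the table and column names below are hypothetical, not the poster's SQLFiddle schema): if the APPLY subquery never references the outer row, the optimizer can evaluate it once for all rows, so the same "random" value repeats; adding a correlation to the outer table forces a fresh NEWID() sort per row.

[code]
-- Hypothetical tables: Cars(CarName), Colors(ColorName)
SELECT c.CarName, x.ColorName
FROM Cars AS c
OUTER APPLY (
    SELECT TOP (1) col.ColorName
    FROM Colors AS col
    WHERE c.CarName = c.CarName   -- correlates the subquery to the outer row
    ORDER BY NEWID()
) AS x;
[/code]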

Login audit

Posted: 28 Apr 2013 08:27 PM PDT

Hi,

I've been asked to produce a report on who has accessed a particular database on a SQL Server we have. The system (a third-party product with no internal auditing) uses SQL authentication. Logging on the server is set to record failed and successful login attempts. I can read the log files using xp_readerrorlog, but that only gives the time and name of the user, not the database.

Anyone got any ideas on how I can do this report?

Thanks.
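As a sketch of the half the poster already has working (xp_readerrorlog is undocumented, so treat the parameters - log number, log type, search string - as an assumption), the login events can at least be captured into a table for reporting:

[code]
CREATE TABLE #LoginEvents (LogDate datetime, ProcessInfo nvarchar(64), LogText nvarchar(max));

INSERT INTO #LoginEvents
EXEC xp_readerrorlog 0, 1, N'Login succeeded';  -- current error log, filtered on the message text

SELECT LogDate, LogText FROM #LoginEvents ORDER BY LogDate;
[/code]

The database name still isn't in these messages, which is the poster's real gap.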

Migration from SQL server 2005 to SQL server 2008

Posted: 01 May 2013 05:53 PM PDT

I am at my client's location and they have assigned me a migration task from SQL Server 2005 to SQL Server 2008. I should first do it on the test server, and I have never done this before. Could someone please help me with how and where to start? What is the procedure for migration?
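For reference, a minimal sketch of the most common route (backup on 2005, restore on 2008, then raise the compatibility level); the database name, paths, and logical file names are placeholders:

[code]
-- On the SQL Server 2005 instance
BACKUP DATABASE MyDb TO DISK = N'C:\Backups\MyDb.bak' WITH INIT;

-- On the SQL Server 2008 instance
RESTORE DATABASE MyDb FROM DISK = N'C:\Backups\MyDb.bak'
WITH MOVE 'MyDb'     TO N'D:\Data\MyDb.mdf',      -- logical names assumed
     MOVE 'MyDb_log' TO N'E:\Logs\MyDb_log.ldf';

ALTER DATABASE MyDb SET COMPATIBILITY_LEVEL = 100;  -- 100 = SQL Server 2008
[/code]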

Tracking running queries

Posted: 30 Apr 2013 08:31 PM PDT

Hi all

In one of the daily emails (quite a while ago), there was a method for getting server performance and tracking what queries were running, etc. I can't find the link any more, so if anyone has it, could they send me a copy please?
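Not the article asked for, but a minimal DMV sketch of the usual starting point for seeing what is running right now:

[code]
SELECT r.session_id, r.status, r.wait_type, r.cpu_time, r.logical_reads,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;  -- exclude this monitoring query itself
[/code]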

Error while taking backup for a user not having sysadmin permission

Posted: 24 Apr 2013 12:52 AM PDT

I am accessing my SQL Server 2008 R2 Express server using SSMS. I have about 5 logins. A particular login has all server roles other than sysadmin. It is a SQL login (not Windows authentication). In user roles it has all permissions for almost all databases; for some databases it has only datareader.

Now when I try to take a backup of a database using SSMS, and try to select the backup destination by clicking the button, I get an error like:

"C:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\Backup
Cannot access the specified path or file on the server. Verify that you have the necessary security privileges and that the path or file exists. If you know that the service account can access a specific file, type in the full path for the file in the File Name control in the Locate dialog box."

Even though I have db_owner and all those permissions, I am getting this error. But if I type the path manually in the file name field, it takes the backup without any error. So why does it happen? I want this user to take backups without these error messages (but I can't give sysadmin to the user). How can I solve it? (I have already seen articles suggesting a T-SQL query or sqlcmd for the backup, but I want it to work using SSMS itself.)

[url=http://social.msdn.microsoft.com/Forums/en-US/sqltools/thread/a0ada2bb-f66e-4af1-a1d7-3057bd5a7c53]Also see my detailed discussion of this problem on the MSDN forum[/url]
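For comparison, the manual-path workaround the poster describes corresponds to a plain BACKUP statement, which needs only db_owner or db_backupoperator in the database (plus a folder the service account can write to), not sysadmin; the path and database name are placeholders:

[code]
BACKUP DATABASE MyDb
TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\Backup\MyDb.bak'
WITH INIT;
[/code]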

Question about MAX_QUEUE_READERS in Service Broker.

Posted: 01 May 2013 07:52 AM PDT

Guys,

I have a question about the MAX_QUEUE_READERS option in Service Broker. In a high-transaction environment where inserts/updates/deletes are very heavy, would anything be affected by having a MAX_QUEUE_READERS value higher than 1? What happens when an INSERT statement is issued and then a DELETE is immediately issued for the same record: because MAX_QUEUE_READERS is 2 or 3, could the DELETE get processed before the INSERT (parallel processing)? Or is my understanding of MAX_QUEUE_READERS not correct?

I have a production issue where the queue sometimes backs up because of high user activity, and we have MAX_QUEUE_READERS set to 1. I am thinking of increasing it to maybe 5, but I would like to know that we won't end up with orphan records due to the multi-threaded parallel processing created by MAX_QUEUE_READERS.

Thanks.
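For reference, a minimal sketch of the change being weighed (queue and procedure names are placeholders); MAX_QUEUE_READERS caps how many instances of the activation procedure run concurrently:

[code]
ALTER QUEUE dbo.MyTargetQueue
WITH ACTIVATION (
    STATUS = ON,
    PROCEDURE_NAME = dbo.usp_ProcessQueue,  -- placeholder activation proc
    MAX_QUEUE_READERS = 5,                  -- up to 5 concurrent readers
    EXECUTE AS OWNER
);
[/code]

Whether messages for the same business record can interleave across readers depends on how they are grouped into conversations, so the orphan-record concern is worth testing rather than assuming.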

Percent to Total - Customer Sales Penetration

Posted: 01 May 2013 01:49 AM PDT

This is related to a previous post, but I didn't have all the information. Now I'm including the selects to see if I can find a solution, or whether I need to go in a completely different direction. This also relates to SSRS, which is what these queries are built for in this report. I'm hoping to come to some solution, so all suggestions are appreciated! = )

I have a somewhat simple query (bear with me) to which management has asked me to add a Customer Penetration field for their report. My problem is not the formula, but how to work with the query that I have now. I don't know if I can paste the entire SQL, but I'm thinking there must be some sort of partition/group by function to get the total sales into the main query. I can do the formula in the report once I have the data. I can run the two queries separately, but my other problem is that I need to be able to join the two to be sure it's counting the same customers, not just total sales. One query gives customer and product sales based on several parameters entered. The other query is essentially the same and provides just total customer sales by Branch and Business Unit. There are too many rows to work with in the main query, so that may be part of the problem as well. Any help would be appreciated.

This is the main query that I started with:

[code]
--DECLARE @TransDate date = '3/1/2013'
--DECLARE @CompanyID nvarchar(4) = '003'
--DECLARE @BU nvarchar(20) = 'BROADLINE'
--DECLARE @PC nvarchar(20) = 'ALL'
--DECLARE @Family nvarchar(40) = '06 Fzn Groceries And Other'
--DECLARE @ProdLine nvarchar(40) = '59 Fzn Breads & Doughs'
--DECLARE @ProdGrp nvarchar(40) = 'ALL'
SELECT
    InvoiceDate as MonthDate,
    Branch,
    BusinessUnit,
    CustID as CustNbr,
    CustName,
    Fam_ID as ProdFamily,
    Lin_ID as ProdLine,
    Grp_ID as ProdGroup,
    Prod_ID as ProdNbr,
    NetSales
FROM
(
    SELECT
        CONVERT(DATETIME, DATEADD(mm, DATEDIFF(mm, 0, salestrans.LDayDay), 0), 101) AS InvoiceDate,
        salestrans.Branch,
        salestrans.BusinessUnit,
        salestrans.CustID,
        customer.CustName,
        Product.Fam_ID,
        Product.Lin_ID,
        Product.Grp_ID,
        Product.Prod_ID,
        SUM(ISNULL(CASE WHEN salestrans.pctype IN ('S','A') THEN isnull(salestrans.extprice, 0) ELSE 0 END, 0)) as NetSales
    FROM bi.salestrans salestrans
    left outer join bi.star2sales star2sales
        on (salestrans.Branch = star2sales.Branch and salestrans.InvNo = star2sales.Invno and salestrans.InvDtl = star2sales.Invdtl)
    inner join abcp.Customer customer
        on (salestrans.Branch = customer.CompanyID and salestrans.CustID = customer.CustomerID)
    left join bi.Product
        on (salestrans.ProdWhs = product.Prod_Whs and salestrans.ProdID = Product.Prod_ID)
    WHERE salestrans.LDayDay BETWEEN dateadd(month, datediff(month, 0, @TransDate) - 0, 0)
                                 AND dateadd(ms, -3, DATEADD(mm, DATEDIFF(month, 0, @TransDate) + 1, 0))
        AND Customer.[Effective Date] <= @TransDate
        AND (Customer.EndDate > @TransDate OR Customer.EndDate IS NULL)
        AND Customer.[Excluded?] = 'NO'
        AND (Customer.CompanyID IN (@CompanyID) --, @CompanyID2)
             OR 'ALL' IN (@CompanyID) ) --, @CompanyID2))
        and Product.Prod_Whs in ('008','003','009','020')
        and salestrans.PcID not in ('65','95')
    GROUP BY
        CONVERT(DATETIME, DATEADD(mm, DATEDIFF(mm, 0, salestrans.LDayDay), 0), 101),
        salestrans.Branch,
        salestrans.BusinessUnit,
        salestrans.CustID,
        customer.CustName,
        Product.Fam_ID,
        Product.Lin_ID,
        Product.Grp_ID,
        Product.Prod_ID
) as CustVendProdGrpSales
WHERE (PCID IN (@PC) OR 'ALL' IN (@PC))
    AND (BusinessUnit IN (@BU) OR 'ALL' IN (@BU))
    AND (Fam_ID IN (@Family) or 'ALL' IN (@Family))
    AND (Lin_ID IN (@ProdLine) OR 'ALL' IN (@ProdLine))
    AND (Grp_ID IN (@ProdGrp) OR 'ALL' IN (@ProdGrp));
[/code]

And then I tried to add this column from the query below:

[code]
,case when Row_Number() over (partition by salestrans.Branch, salestrans.CustId, BusinessUnit,
                              Product.Fam_ID, Product.Lin_ID, Product.Grp_ID --,Product.Prod_ID
                              order by Product.Grp_ID) = 1
      then sum(isnull(TotalCMNetSales,0)) else 0 end as Totalsalestest
[/code]

This is the total customer sales query. I've tried doing an inner subselect several ways, and I was able to get some results the first time, but it was not summing the customer sales correctly. Then I tried a few other things, but it was "timing out", using up too much memory.

[code]
( SELECT
    InvoiceDate as MonthDate,
    Branch,
    BusinessUnit,
    CustID,
    SUM(TotalCMNetSales) as TotalCMNetSales
  FROM
  ( SELECT
        CONVERT(DATETIME, DATEADD(mm, DATEDIFF(mm, 0, salestrans.LDayDay), 0), 101) AS InvoiceDate,
        salestrans.Branch,
        salestrans.BusinessUnit,
        salestrans.CustID
        --,salestrans.ProdID
        --,Product.Fam_ID
        --,Product.Lin_ID
        --,Product.Grp_ID
        ,SUM(ISNULL(CASE WHEN DATEADD(m, DATEDIFF(m, 0, salestrans.LDayDay), 0) = dateadd(month, datediff(month, 0, @TransDate) - 0, 0)
                              and salestrans.pctype IN ('S','A')
                         THEN isnull(salestrans.extprice, 0) ELSE 0 END, 0)) as TotalCMNetSales
    FROM bi.salestrans salestrans
    inner join abcp.Customer customer
        on (salestrans.Branch = customer.CompanyID and salestrans.CustID = customer.CustomerID)
    left join bi.Product
        on (salestrans.ProdWhs = product.Prod_Whs and salestrans.ProdID = Product.Prod_ID)
    WHERE salestrans.LDayDay BETWEEN dateadd(month, datediff(month, 0, @TransDate) - 0, 0)
                                 AND dateadd(ms, -3, DATEADD(mm, DATEDIFF(month, 0, @TransDate) + 1, 0))
        AND Customer.[Effective Date] <= @TransDate
        AND (Customer.EndDate > @TransDate OR Customer.EndDate IS NULL)
        AND Customer.[Excluded?] = 'NO'
        AND (Customer.CompanyID IN (@CompanyID) --, @CompanyID2)
             OR 'ALL' IN (@CompanyID) ) --, @CompanyID2))
        and Product.Prod_Whs in ('008','003','009','020')
        and salestrans.PcID not in ('65','95')
    GROUP BY
        CONVERT(DATETIME, DATEADD(mm, DATEDIFF(mm, 0, salestrans.LDayDay), 0), 101),
        salestrans.Branch,
        salestrans.BusinessUnit,
        salestrans.CustID
        --,salestrans.ProdID
        --,customer.CustName
        --,Product.Fam_ID
        --,Product.Lin_ID
        --,Product.Grp_ID
  ) as CustSales
  WHERE (BusinessUnit IN (@BU) OR 'ALL' IN (@BU))
    --AND (Fam_ID NOT IN (@Family) or 'ALL' IN (@Family))
    --AND (Lin_ID NOT IN (@ProdLine) OR 'ALL' IN (@ProdLine))
    --AND (Grp_ID NOT IN (@ProdGrp) OR 'ALL' IN (@ProdGrp))
  GROUP BY
    InvoiceDate,
    Branch,
    BusinessUnit,
    CustID
    --,Fam_ID
    --,Lin_ID
    --,Grp_ID
) as TotalCustSales
[/code]
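Since the poster suspects "some sort of partition/group by function", here is a self-contained sketch of a windowed SUM, the construct that puts a per-customer total on every detail row so penetration can be computed in SSRS (toy data, not the poster's schema):

[code]
DECLARE @Sales TABLE (Branch char(3), CustID int, ProdID int, NetSales money);
INSERT INTO @Sales VALUES ('003', 1, 10, 100), ('003', 1, 11, 50), ('003', 2, 10, 200);

SELECT Branch, CustID, ProdID, NetSales,
       SUM(NetSales) OVER (PARTITION BY Branch, CustID) AS TotalCustSales,
       NetSales / SUM(NetSales) OVER (PARTITION BY Branch, CustID) AS Penetration
FROM @Sales;
[/code]

In the main query above, the same OVER clause could be added to the outer SELECT, partitioned by MonthDate, Branch, BusinessUnit, and CustNbr; whether that total matches "the same customers" under all the parameter filters is exactly the subtlety the poster raises.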

SQL Server 2008 cluster node going down unexpectedly

Posted: 01 May 2013 04:17 AM PDT

Last night our primary SQL Server node went down and failed over to the secondary node. I was actually on the server at the moment, having just launched a trace to troubleshoot a particular query, when suddenly I lost all connectivity to SQL Server.

Our setup: Microsoft SQL Server 2008 R2 (SP1) - 10.50.2796.0 (X64), 2-node active/passive cluster.

Here is what I found in the administrative log:

[sqsrvres] CheckQueryProcessorAlive: sqlexecdirect failed
[sqsrvres] printODBCError: sqlstate = HYT00; native error = 0; message = [Microsoft][SQL Server Native Client 10.0]Query timeout expired
[sqsrvres] CheckQueryProcessorAlive: sqlexecdirect failed
[sqsrvres] printODBCError: sqlstate = 08S01; native error = 0; message = [Microsoft][SQL Server Native Client 10.0]The connection is no longer usable because the server failed to respond to a command cancellation for a previously executed statement in a timely manner. Possible causes include application deadlocks or the server being overloaded. Open a new connection and re-try the operation.

SQL Server and SQL Server Agent are running under designated network accounts; SQL Server Browser is running under a local account. We never had this issue in the 2 years we've been using the server.

The SQL Server error log did not reveal much. The very last event in the error log before the node went down is:

2013-04-30 20:06:48.970 spid133 SQL Trace ID 2 was started by login "sa".

Thank you for your help

SQL Server 2008 R2 slow query performance to Oracle 11g using Openquery

Posted: 01 May 2013 04:17 AM PDT

Gang,

I'm completely perplexed on this one. I am running a query against an Oracle 11g database. It looks something like this:

select * from openquery(oradb, 'select col1, col2, col3....col15 from ora.Table where rownum <= 100000')

The query goes through a linked server using the Oracle 11.2 client download. Its purpose is to pull in the first 100,000 records from a particular table, about 15 MB worth of data. When I run the query on two of my SQL Servers - let's call them A1 and A2, both running 2008 R2 - it takes about 20 seconds. Here is where it gets weird: when I run it on my new server - call it B1, also running 2008 R2 - it takes EXACTLY 3 minutes and 36 seconds, every time. The query executes immediately, but the data transfer is incredibly slow.

We monitored the server and the network throughput is only about 40,000 B/sec. If we run the query in SQL*Plus (Oracle client), it runs in 20 seconds, just like the other SQL Servers, and the network throughput is well over 500,000 B/sec. That level of performance is the same on servers A1 and A2. I'm leaning toward SQL Server throttling this query for some reason. My networking and Oracle teams looked at it and said Oracle and the network are not the issue - I agree, since the SQL*Plus query runs in 20 seconds.

[b]Has anyone else seen this on SQL Server? Are there some networking settings in SQL Server that would cause it to not max out the network?[/b]

I'm completely lost on this one.

Dennis

"SET STATISTICS TIME" and "SET STATISTICS IO" questions

Posted: 29 Apr 2013 04:56 PM PDT

1) As I understood from here [url=http://www.sqlservercentral.com/articles/Performance+Tuning/measuringperformance/1323/][/url], "elapsed time" is the total time of the statement's execution - but how, then, can it be less than CPU time, which it should include?

[quote]DBCC execution completed. If DBCC printed error messages, contact your system administrator. SQL Server Execution Times: CPU time = 16 ms, elapsed time = 5 ms.[/quote]

2) I receive this message for a statement executed right after DBCC DROPCLEANBUFFERS. Why 193 and 203? Where does it read the remaining 10 pages from, when I've just cleaned the cache?

[quote]Table 'InsertTest'. Scan count 1, logical reads 203, physical reads 0, read-ahead reads 193, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.[/quote]

?? on Job Steps and how they work

Posted: 01 May 2013 04:33 AM PDT

Hi,

I'm trying to set up a job with 2 steps.

Step 1 runs and emails a report via SSRS; I just created a subscription in the front end and copied the step out of the job it created. The report sets a flag on a field with an 'x' to know which records to report on.

Step 2 is where I get lost. Step 2 updates the reported records with a 'y'. The problem is that when the report is emailed, the report is blank. I assume the report doesn't get emailed until step 2 is completed? If so, is there a way around it? Maybe 2 jobs, with job 2 dependent on job 1 being successful?

Here are my steps:

STEP 1: exec [ReportServer].dbo.AddEvent @EventType='TimedSubscription', @EventData='89882d84-9810-4cdf-9e13-d74035f3ddcd'
STEP 2: exec dbo.xmL_test_UPDATE

Thanks
Joe
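One hedged idea, since AddEvent only queues the subscription and returns before the report is actually generated and mailed: make step 2 wait until the subscription records a new run. Subscriptions.LastRunTime is a standard ReportServer catalog column, but this polling loop is a sketch, not something tested:

[code]
DECLARE @start datetime = GETDATE();
WHILE NOT EXISTS (SELECT 1 FROM ReportServer.dbo.Subscriptions
                  WHERE SubscriptionID = '89882d84-9810-4cdf-9e13-d74035f3ddcd'
                    AND LastRunTime >= @start)
    WAITFOR DELAY '00:00:10';  -- poll every 10 seconds

EXEC dbo.xmL_test_UPDATE;
[/code]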

SQL Server 2008 installation problem ("must specify valid information for parsing the string")

Posted: 30 Apr 2013 11:15 PM PDT

Hello,

When I am trying to install SQL Server 2008 using an ISO file, it shows an error message: "must specify valid information for parsing the string", and it blocks me from continuing. Any idea? Thanks in advance

TFS Upgrade from 2005 to 2010

Posted: 01 May 2013 02:18 AM PDT

Hi, recently I have been assigned to upgrade our TFS 2005 to TFS 2010. Here is what we are looking to accomplish:

1. Upgrade TFS to 2010
2. Move all the projects under one team project.

Our current environment has 20 different projects, and our approach is to put all those projects under one TEAM project. Can anybody shed light on the technical aspects of how to put the current projects into one team project? What kind of issues might we face, and if someone has already done this successfully, it would be helpful to mimic that in our approach. If anyone needs more technical detail on our current TFS 2005 environment, please feel free to ask.

Create SSIS package in SQL SERVER 2012 for SQL SERVER 2008

Posted: 06 Jun 2012 01:40 AM PDT

Is it possible to develop SSIS packages in SQL Server 2012 for deployment to SQL Server 2008?

Lazy Writes / sec

Posted: 30 Apr 2013 11:31 PM PDT

On one of my servers I checked Lazy Writes/sec and I get a very, very high value of 2635802. Everywhere I read that this should be < 20. Is this a cumulative value since the last server start, so that I should sample it at intervals and take the difference, or what? This server has been online, never restarted, for 18 months now, meaning that everything works just fine; but this is a counter I had never looked at, and I'm wondering if this is some kind of memory problem. BTW, it's SQL Server 2008.

Thanks
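The interval-sampling idea the poster describes, as a minimal sketch against sys.dm_os_performance_counters (despite the "/sec" name, the counter is exposed there as a cumulative ticker since startup, so a real rate needs two samples and a difference):

[code]
DECLARE @v1 bigint, @v2 bigint;

SELECT @v1 = cntr_value FROM sys.dm_os_performance_counters
WHERE counter_name = 'Lazy writes/sec' AND object_name LIKE '%Buffer Manager%';

WAITFOR DELAY '00:00:10';

SELECT @v2 = cntr_value FROM sys.dm_os_performance_counters
WHERE counter_name = 'Lazy writes/sec' AND object_name LIKE '%Buffer Manager%';

SELECT (@v2 - @v1) / 10.0 AS LazyWritesPerSec;  -- averaged over the 10-second window
[/code]

For what it's worth, 2,635,802 over 18 months of uptime averages out to well under 1 per second, which supports the cumulative reading.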

Wednesday, May 1, 2013

[SQL Server Data Warehousing] Data warehouse


We have constructed the data warehouse in SQL Server, creating dimensions and fact tables. It is not a relational database (we have not added primary keys or surrogate keys) in the data warehouse.


Our functional area is Terminals & Ports, so is this the right approach, to have a non-relational database, or not?


Should we go with a normalization approach or a dimensional one?




social.technet.microsoft.com/Forums

[SQL Server] Sql Server Stored Procedure variable declaration


Sql Server Stored Procedure variable declaration

Posted: 01 May 2013 12:02 AM PDT

I want to search data with different criteria: one is by "date", another is by itemid, and so on. In the last part of the procedure I used an if/else statement, and I'm passing my variable "option" to select my search criteria. If I search by "date", it simply means I am not sending an "itemid" value from my application to the procedure. I'd like some advice on how to get what I want.

Option 1: search by date
Option 2: search by itemid
and so on.

[code]
PROCEDURE [dbo].[PROC_GET_SOH_REPORTOptions]
(
    @ItemName VARCHAR(200) = '%',
    @To Date,
    @From Date,
    @ItemIdd int,
    @Option Int
)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from interfering with SELECT statements.
    SET NOCOUNT ON;

    CREATE TABLE #tempSOH
    (
        ItemID INT,
        ItemCode VARCHAR(50),
        ItemName VARCHAR(200),
        ItemLocation VARCHAR(50),
        created_date date,
        SOH NUMERIC(18,2)
    )

    INSERT INTO #tempSOH (ItemID, ItemCode, ItemName, ItemLocation, created_date)
    SELECT Item_Master.Item_ID, Item_Master.Item_Code, Item_Master.Item_Name,
           Item_Location_Master.Rack_Name, Item_Master.Created_Date
    FROM Item_Master
    INNER JOIN Item_Location_Master ON Item_Master.Rack_ID = Item_Location_Master.Rack_ID
    WHERE Item_Master.Item_Name LIKE +'%'+ @ItemName +'%'

    UPDATE #tempSOH
    SET SOH = ISNULL((SELECT SUM(Stock_Quantity) FROM Stock_Header
                      WHERE Stock_Type NOT IN ('Stock Out') AND Item_ID = #tempSOH.ItemID), 0)

    UPDATE #tempSOH
    SET SOH = SOH - ISNULL((SELECT SUM(Stock_Quantity) FROM Stock_Header
                            WHERE Stock_Type = 'Stock Out' AND Item_ID = #tempSOH.ItemID), 0)

    if (@option)='1'
        SELECT ItemID, ItemName, ItemLocation, SOH, created_date
        FROM #tempSOH
        where created_date >= @From and created_date <= @to
        ORDER BY ItemName
    else if (@Option ='2')
        SELECT ItemID, ItemName, ItemLocation, SOH, created_date
        FROM #tempSOH
        where ItemID >= @ItemIdd
        ORDER BY ItemName

    DROP TABLE #tempSOH
[/code]

Stored procedure to transfer Applications into Loans - "mismatching number of BEGIN and COMMIT statements" error

Posted: 30 Apr 2013 11:11 PM PDT

I had a problem when transferring an active application into a loan table, where I have to calculate the monthly payment. When I try to execute the stored procedure I get this error:

Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 0, current count = 1

Here is the code used in the procedure. When I executed it, it said it committed successfully, so I don't know why I get this message. I have tried several things I've seen on Google but keep getting this annoying message. I will be very thankful if someone can help me with this.

[code]
ALTER Procedure [dbo].[Transfer_ApprovedApp_intoLoans]
    @FirstPaymentDate as Date
AS
Begin
    Begin Transaction

    DECLARE @RowCount As Int;
    DECLARE @ApplicationID as char(5);
    DECLARE @LoanTypeID as char(5);
    DECLARE @LoanAmount As Numeric(8,4);
    DECLARE @LoanRate As Numeric(12,8);
    DECLARE @LoanYears As Int;
    DECLARE @NumberOfMonths As Int = 12;   /**Number of months**/
    DECLARE @MonthlyRate As Numeric(12,8); /**To use to convert the annual rate to a monthly rate**/
    DECLARE @NumberOfPeriods As Int;       /**Months multiplied by the number of years**/
    DECLARE @MonthlyPayment As Numeric(8,4);
    DECLARE @RateinPercent AS Integer = 100
    DECLARE @Status AS Char(1)

    /**Check if application is active and approved**/
    SELECT * FROM Applications
    WHERE ApplicationID = @ApplicationID AND Status = 'A'

    /**Check if Loan exists**/
    SELECT @RowCount = Count(*) FROM Applications WHERE ApplicationID = @ApplicationID
    IF @RowCount = 0
    BEGIN
        PRINT 'Application ' + @ApplicationID + ' is not approved or active';
        Return;
    END;
    Else

    /**Declare cursor**/
    Declare AppTransferLoanCursor CURSOR for
        Select ApplicationId, LoanTypeId, LoanAmount, LoanRate, LoanYears, [Status]
        From Applications
        Where Status = 'A' and Transferred = 'N';

    /** Open Cursor **/
    Open AppTransferLoanCursor
    Fetch Next from AppTransferLoanCursor
        Into @ApplicationID, @LoanTypeID, @LoanAmount, @LoanRate, @LoanYears, @Status

    While @@FETCH_STATUS = 0
    Begin
        /**Calculate monthly payment**/
        Set @MonthlyRate = (@LoanRate/@RateinPercent)/@NumberOfMonths
        Set @NumberOfPeriods = @LoanYears * @NumberOfMonths
        Set @MonthlyPayment = @LoanAmount/(power(1+@LoanRate,@NumberOfPeriods)-1)*(@LoanRate*power(1+@LoanRate,@NumberOfPeriods))

        /*Mark the application as transferred**/
        Update Applications Set Transferred = 'Y' Where ApplicationId = @ApplicationId

        /**Transfer values into Loans**/
        Insert into Loans(LoanType, LoanAmount, LoanRate, LoanYears, MonthlyPayment, Status)
        Values(@LoanTypeID, @LoanAmount, @LoanRate, @LoanYears, @MonthlyPayment, @Status)

        /**To fetch next row**/
        Fetch Next from AppTransferLoanCursor
            Into @ApplicationID, @LoanTypeID, @LoanAmount, @LoanRate, @LoanYears, @Status
    End

    Commit Transaction
    Close ApplicationTransferCursor
    DEALLOCATE ApplicationTransferCursor
End
[/code]

Query for non-existent entries

Posted: 30 Apr 2013 08:07 PM PDT

Hi,

I'd like to build a query that produces the missing entries in a scores-per-week table. Say that you have a table with the following structure:

[code="other"]
ID | Name | noWeek | Score
 1   John    1        2
 2   John    2        3
 3   John    3        1
 4   John    5        7
..................
[/code]

Let's say the maximum week number is 5 for all possible people in this table. What I'd like is a query that lists the missing entries, something like this: - John 4 -

I've got the number of weeks (1-2...5) in a lookup table, if that helps. So I guess we're looking at some kind of CROSS JOIN, or Cartesian product, or the like. Any ideas of how this could be done?

Thanks,
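A minimal sketch of the CROSS JOIN idea the poster names, assuming the scores table is called Scores and the lookup of week numbers is Weeks(noWeek): build every person/week pair, then keep the pairs with no matching score row.

[code]
SELECT p.Name, w.noWeek
FROM (SELECT DISTINCT Name FROM Scores) AS p
CROSS JOIN Weeks AS w
LEFT JOIN Scores AS s ON s.Name = p.Name AND s.noWeek = w.noWeek
WHERE s.ID IS NULL;  -- no score recorded for that person/week
[/code]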

Exclude rows with blanks in a particular field

Posted: 30 Apr 2013 07:33 PM PDT

I know this is a popular one and I've Googled it, but I haven't found anything that has worked yet, so apologies if this is incredibly obvious. I'm trying to amend a view to exclude records that are either null or blank in a field called CURRENT_POINT:

[code="other"]...and CURRENT_POINT is not null and DATALENGTH(CURRENT_POINT) > 0[/code]

The nulls are fine, but I'm still getting the blanks. I hoped DATALENGTH would resolve that, but it hasn't. As I said, I'm sorry if it's obvious. Can anybody advise me please?
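One hedged guess at why DATALENGTH lets the blanks through: if the "blank" values are actually one or more spaces, DATALENGTH counts them (it does not trim). A self-contained check of a variant that treats whitespace-only values as empty; only 'A1' survives:

[code]
SELECT v.CURRENT_POINT
FROM (VALUES ('A1'), (NULL), (''), ('   ')) AS v(CURRENT_POINT)
WHERE NULLIF(LTRIM(RTRIM(v.CURRENT_POINT)), '') IS NOT NULL;
[/code]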

[how to] How to get Greenplum table definition?


How to get Greenplum table definition?

Posted: 01 May 2013 08:24 PM PDT

There are already tables created in the schema, but I don't know how to get the detailed information on how they were created - the configuration used, the orientation, etc.

How do I get this information in Greenplum?

How do I selectively purge MySQL table rows using a list of column values?

Posted: 01 May 2013 07:44 PM PDT

I am brand new to databases and working with MySQL. I have a large MySQL table with the following columns: Id, Link, Timestamp.

And I have a list of links. I want to keep only the rows whose link column matches an entry in this list.

Any pointers on how I accomplish this?
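A minimal sketch of one common approach (my_table and its columns follow the question; keep_links and the sample values are placeholders): load the keep-list into a table, then delete every row that does not join to it.

    -- Load the list of links to keep
    CREATE TEMPORARY TABLE keep_links (link VARCHAR(255) PRIMARY KEY);
    INSERT INTO keep_links VALUES ('http://example.com/a'), ('http://example.com/b');

    -- Purge rows whose Link is not in the keep-list
    DELETE t
    FROM my_table AS t
    LEFT JOIN keep_links AS k ON k.link = t.Link
    WHERE k.link IS NULL;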

Change URL of Erwin data model HTML export hyperlink of www.ca.com to something different

Posted: 01 May 2013 03:17 PM PDT

The website URL of http://www.ca.com appears in every fact group (subject area) of our Erwin file. How can we make a global change to this URL in Erwin, so we don't have to update each subject area data model?

    <HTML>
    <HEAD>
    <TITLE>
    CA Report Template Builder - Atomic Physical Document
    </TITLE>
    </HEAD>
    <BODY BGCOLOR=#FFFF99 TEXT=#000000 LINK=#000000 VLINK=#000000 ALINK=#990000>
    <P><A HREF=http://www.ca.com TARGET=REPWINDOW><IMG src="company.jpeg"></A><P><P>&nbsp<P><P>&nbsp<P>
    <P><TABLE>
    <B><FONT FACE=Arial SIZE=2 COLOR=#0033FF>Report Components:</FONT></B>
    <P><TABLE>
    <TR><TD ALIGN = LEFT>
    <FONT FACE=Arial SIZE=2 COLOR=#0033FF>1. 'Picture' section
    </FONT></TD></TR>
    <TR><TD ALIGN = LEFT>
    <A HREF = 1Picturesection0.htm TARGET=REPWINDOW><FONT FACE=Arial SIZE=2>DM Case / Display1
    </FONT></TD></TR>
    </TABLE>
    </BODY>
    </HTML>

SQL Server:: Auto Generate Scripts to create table, indexes etc

Posted: 01 May 2013 04:31 PM PDT

In SSMS, we can right-click a DB/table/index etc. and select the 'Script As' option to generate DROP and CREATE statements. Is there a way to automate this script generation and drop the scripts to a temp location, rather than manually right-clicking each object and generating them?

Thanks,

Trying to write a system function in SQL Server 2008 R2

Posted: 01 May 2013 05:56 PM PDT

I am trying to write a simple system function so that the dbo. prefix is NOT required to access the function. I have found an example of this for SQL 2000, but when I run

    EXEC SP_CONFIGURE 'ALLOW UPDATES', 1
    RECONFIGURE WITH OVERRIDE
    go

    create function system_function_schema.trim
        (@input varchar(max))
        returns varchar(max)
    as
    begin
        declare @result varchar(max)

        select @result = rtrim(ltrim(@input))

        return @result
    end
    go

    EXEC SP_CONFIGURE 'ALLOW UPDATES', 0
    RECONFIGURE WITH OVERRIDE

This is the error thrown.

    The specified schema name "system_function_schema" either does not exist or you
    do not have permission to use it. Severity 16

The expected usage would be

select trim('   padded value   ')  

Does anyone know the approach I should be taking, or do I HAVE to create it as a regular user-defined function that must be called with the dbo. prefix, like:

select dbo.trim('   padded value   ')  

Watch output of MySQL query over time

Posted: 01 May 2013 12:08 PM PDT

I'd like to run a query every few seconds and show the output in a console window. From bash, using watch:

watch "mysql -A -pXXXXX -h host -D dbname -u username --vertical -e \"SELECT rowID FROM IDs LIMIT 1\""  

With the output:

    *************************** 1. row ***************************
    rowID: 68434810

I'd like to not disconnect and reconnect every 2 seconds though. Is there a better way?

Can I find out the version of MySQL from the data files?

Posted: 01 May 2013 02:10 PM PDT

I have a very odd situation going on here. I had a linux box running ubuntu 8.10 and MySQL server with a Wordpress installation on it hosting an internal blog for our company. The machine that this was running on has crapped out, but I was able to put the drive into an enclosure and gain access to the files and recover the MySQL data folder. We are mostly a Windows company and I have very limited linux experience. Before this machine crashed we were experimenting with moving this blog to a Windows Virtual Machine running PHP for IIS; and we have an older copy of the Wordpress database up and running on MySQL 5.1. I am currently trying to retrieve the latest data from the linux MySQL data folder and import it into the current database but I am having problems.

Problems so far:

  1. I do not know what version of MySQL was installed on the linux box
  2. I see ib_logfile0, ib_logfile1, and ibdata1 in the linux MySQL data folder, but am not sure if it was truly running InnoDB or not, as I also have .frm, .myd, and .myi files in the wordpress folder

If I can figure out the version of MySQL on the linux box, I might be able to create a virtual machine running linux and MySQL so I can copy the data folder and then do an export dump - unless someone else knows an easier way to get the data into my current database.

Any help or guidance on this issue would be greatly appreciated.

Thanks, Nathan

How to avoid lagging when enabling log-slave-updates?

Posted: 01 May 2013 02:49 PM PDT

Master:

Slave:

Both are running 5.5.28.

For incremental backup purposes, I need to enable the log-slave-updates option. But Seconds_Behind_Master keeps increasing immediately after restarting MySQL. A simple statement takes about 2 minutes to be applied on the slave at quiet times. At busy times, it seems the slave cannot catch up with the master at all, so I have to comment the option out and restart MySQL.

Is there any way to improve this?

How to create a procedure through terminal

Posted: 01 May 2013 11:34 AM PDT

In the mysql console we use the delimiter command to change the delimiter, which is useful for defining procedures. I understand the delimiter command is a client-side command, only for the mysql client.

But I am using a MySQL client which does not have the delimiter command, like dbslayer. On this kind of client, how can I define procedures?

For now, consider:

    create procedure test_pro()
    begin
    select 'hello pro';
    end

I tried the following :

mysql -u root -pmypass  test < proc_file  

where proc_file contains the above procedure;

but that is giving me the following error:

ERROR 1064 (42000) at line 1: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 3  

I even tried the following

    create procedure test_pro()
    begin
    select 'hello pro';
    end;

(added the trailing semicolon) but am getting the same error .

I am experiencing the same with dbslayer; if I can define the above procedure through the terminal, I think I should be able to do it through dbslayer.
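For the mysql < proc_file case specifically, a hedged note: the error usually means the client split the file at the first ; inside the procedure body. The DELIMITER directive works in batch mode too, so a file written like this typically loads (whether dbslayer honors DELIMITER is a separate question):

    DELIMITER //
    CREATE PROCEDURE test_pro()
    BEGIN
      SELECT 'hello pro';
    END//
    DELIMITER ;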

Foreign Key to Multicolumn Primary Key?

Posted: 01 May 2013 11:50 AM PDT

I have three connected tables in my design. The basic logic is: when a category is deleted, also delete the records in the category_product table which have the deleted category's id in the cat_id column. But when I try to create the tables with the query below, I get this error: "there is no unique constraint matching given keys for referenced table 'category_product'". I don't know what is wrong with this design or logic.

    CREATE TABLE IF NOT EXISTS category_product (
        cat_id integer NOT NULL,
        product_id integer NOT NULL,
        CONSTRAINT PK_CATEGORY_PRODUCT_ID PRIMARY KEY (cat_id, product_id)
    );

    CREATE TABLE IF NOT EXISTS categories (
        id serial,
        name text NOT NULL,
        is_active boolean NOT NULL DEFAULT true,
        CONSTRAINT PK_CATEGORIES_ID PRIMARY KEY (id),
        CONSTRAINT FK_PRODUCTS FOREIGN KEY (id) REFERENCES category_product(cat_id) ON UPDATE NO ACTION ON DELETE CASCADE
    );

    CREATE TABLE IF NOT EXISTS products (
        id serial,
        name text NOT NULL,
        description text NULL,
        photo text NULL,
        price money NOT NULL,
        is_in_menu boolean NOT NULL DEFAULT true,
        is_from_kitchen boolean NOT NULL DEFAULT true,
        is_deleted boolean NOT NULL DEFAULT false,
        CONSTRAINT PK_PRODUCTS_ID PRIMARY KEY (id),
        CONSTRAINT FK_CATEGORY_PRODUCT FOREIGN KEY (id) REFERENCES category_product(product_id) ON UPDATE NO ACTION ON DELETE CASCADE
    );

So what kind of structure do I need to achieve what I am trying to do?
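A hedged sketch of the usual shape for this design: the foreign keys live on the junction table and point at the parents, not the other way around. With categories and products created first (without their FOREIGN KEY clauses), ON DELETE CASCADE on cat_id makes a category delete remove its category_product rows automatically:

    CREATE TABLE IF NOT EXISTS category_product (
        cat_id integer NOT NULL REFERENCES categories(id) ON DELETE CASCADE,
        product_id integer NOT NULL REFERENCES products(id) ON DELETE CASCADE,
        CONSTRAINT PK_CATEGORY_PRODUCT_ID PRIMARY KEY (cat_id, product_id)
    );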

Oracle Express intermittently stops responding, requiring a OS reboot to fix

Posted: 01 May 2013 12:32 PM PDT

I've been having an issue with Oracle XE the past 2 days and it's really been making me pull my hair out.

There seems to be no rhyme nor reason why this happens, or when it does, but it happened 6 times yesterday and so far twice this morning. I'm developing a .NET app, and sometimes when I start the app, refresh the page, or do whatever, I get an exception that a connection could not be made to the database. When I then try to connect with sqlplus, I just get:

    C:\>sqlplus user/password@localhost

    SQL*Plus: Release 11.2.0.2.0 Beta on Wed May 1 11:08:53 2013

    Copyright (c) 1982, 2010, Oracle.  All rights reserved.

It then just sits there forever. No timeout, no error, no nothing. I can't connect with any other database program either, the whole server is just unresponsive.

If I go into the Services manager, I see that the services are indeed running so nothing has crashed:

[screenshot: the Oracle services shown as running in the Services manager]

I've tried restarting OracleServiceXE and OracleXETNSListener. Both will shut down and restart, however I still cannot make a connection to the server.

The only thing that fixes this problem is a complete Windows reboot. However, this of course takes 5-10 minutes and is really starting to get old.

There does not appear to be anything related to Oracle around this time in the Windows Event Viewer. I'm happy to add any other information that would help debug. Any ideas?

Update:

I did notice a few strange errors in listener.log. These errors started at the exact same time the crashes started happening:

    30-APR-2013 08:50:42 * 12531
    TNS-12531: TNS:cannot allocate memory
    Tue Apr 30 08:52:28 2013
    30-APR-2013 08:52:28 * 12531
    TNS-12531: TNS:cannot allocate memory
    Tue Apr 30 08:52:58 2013
    30-APR-2013 08:52:58 * 12531
    TNS-12531: TNS:cannot allocate memory

Last 1,000 lines of alert_XE.log are here.

Why do some DBAs avoid the use of null? [duplicate]

Posted: 01 May 2013 11:11 AM PDT

I've noticed that some DBAs avoid using NULL for things like integer values, instead preferring to use NOT NULL DEFAULT 0.

Is there any conventional reason for this?

Challenge: split 500 database records into two smaller datasets, randomly

Posted: 01 May 2013 02:55 PM PDT

I have a database with 500 records. I want to split these records 75% / 25%, *randomly*, in order to use the different datasets for training and testing machine learning algorithms. Does anyone know how to do that using an SQL query? P.S. I use MySQL.

In addition, I found this: select ... from ... where ... ORDER BY rand() LIMIT 150 - but this gives me only the first dataset.

Thanks in advance
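A minimal sketch, assuming the table is called records and has a primary key id: materialize a random 75% sample, then take the complement as the test set.

    -- 375 of the 500 rows, chosen at random, become the training set
    CREATE TABLE training AS
    SELECT * FROM records ORDER BY RAND() LIMIT 375;

    -- everything not picked above becomes the test set
    CREATE TABLE testing AS
    SELECT r.*
    FROM records AS r
    LEFT JOIN training AS t ON t.id = r.id
    WHERE t.id IS NULL;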

Postgres - Schema information within Trigger?

Posted: 01 May 2013 11:00 AM PDT

Background : (Multi-tenant design;separate schema per customer, all table structures identical;Data-warehouse / non-OLTP system here; Postgres 9.2.x).

Reviewing patterns for the following problem: I have a matview that needs to be updated if an underlying condition is altered (in this case, an inclusion BOOLEAN).

Was considering implementing a TRIGGER, on update, which would then either execute the matview creation UDF or possibly call 'NOTIFY' to be handled by some listening event, perhaps at a later point in time as the immediate matview update isn't necessarily required.

Workflow :

    UPDATE made to some underlying table.
    TRIGGER is executed.
        (Obtain schema information here).
        EXECUTE matview creation
        -or-
        NOTIFY command for queue / processing at a later time.

First question, am I on the right track here or are there other, more efficient / effective patterns I should review?

Given the nature of my multi-tenant design, is it possible to obtain the current schema from the underlying table from which the trigger event was called as I'll need that information to pass into my matview creation UDF.

Thanks for any help!
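On the schema question, a hedged sketch: PL/pgSQL trigger functions expose the implicit TG_TABLE_SCHEMA variable, which carries the schema of the table that fired the trigger and so fits the schema-per-tenant design; the function and channel names below are placeholders.

    CREATE OR REPLACE FUNCTION queue_matview_refresh() RETURNS trigger AS $$
    BEGIN
        -- TG_TABLE_SCHEMA identifies the tenant schema that fired the trigger
        PERFORM pg_notify('matview_refresh', TG_TABLE_SCHEMA);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;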

Force SQL Server to run query conditions as written?

Posted: 01 May 2013 08:11 PM PDT

I'm using SQL Server 2008 R2 and I have this pseudo query (SP):

    select ...
    from ...
    WHERE @LinkMode IS NULL
          AND (myColumn IN (...very long-running query...))
          ...
          ...

The problem is that the query still takes a very long time to execute - even if I execute the SP with @LinkMode = 2.

As you noticed, the long, time-consuming query should be executed only if @LinkMode is null (which is not the case here; in my case @LinkMode = 2!).

However , If I change it to :

    select ...
    from ...
    WHERE 1=2
          AND (myColumn IN (...very long-running query...))
          ...
          ...

The SP does run fast.

I've heard before that the optimizer can sometimes reorder the criteria.

So I ask :

  • Even if the optimizer chooses a different route, what can be faster than checking for null? I mean, I think that checking a == null is much faster than running the other long query...

  • How can I force SQL Server to run the query as I've written it (in the same order)?
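A hedged note with a sketch: T-SQL does not guarantee left-to-right short-circuit evaluation in a WHERE clause, and a single cached plan has to work for every @LinkMode, so the usual workaround is to branch explicitly (or add OPTION (RECOMPILE) so the parameter value is known at compile time). A self-contained stand-in:

    DECLARE @LinkMode int = 2;

    IF @LinkMode IS NULL
        SELECT o.name FROM sys.objects AS o
        WHERE o.object_id IN (SELECT object_id FROM sys.columns);  -- stand-in for the expensive subquery
    ELSE
        SELECT o.name FROM sys.objects AS o
        WHERE o.object_id = @LinkMode;  -- cheap branch; the expensive subquery is not in this statement's plan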

Disable trigger still fired

Posted: 01 May 2013 01:05 PM PDT

I've just run the following t-sql to disable a trigger (with a deploy script wrapper around osql):

disable trigger dbo.trExampleTableInsert ON dbo.ExampleTable

In SSMS I could see the trigger was disabled (downward-pointing red arrow). When I inserted some rows into the table, the trigger still fired (I have an audit table that logs when the trigger fires).

I enabled and disabled the trigger this time using SSMS UI. When I inserted rows the trigger didn't fire.

What does the UI do differently from the T-SQL command? Did I need to issue a commit?
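A small sketch for telling the scenarios apart (for example, the DISABLE having run in a different database from the insert): check the recorded state directly.

    SELECT name, is_disabled
    FROM sys.triggers
    WHERE name = 'trExampleTableInsert';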

Is there a better way than the ones listed within to join data in an excel spreadsheet to a table in a SQL Server database?

Posted: 01 May 2013 01:47 PM PDT

Let's say I'm given a list of names and email addresses in an Excel spreadsheet. I am asked to find all customers in the database whose name and email are in that list.

I currently know of two ways to achieve this...

  • Import the data into a new table (to be removed later) and then use this table to get the data.
  • Paste the names and addresses into a SQL query.

Is there a 'better' way to do this kind of thing, i.e. one that doesn't require me to place the data in the database before using it, and doesn't require me to create a monster query that has all the data in a 'where x in (data)' part?
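One hedged alternative that avoids both options: query the spreadsheet in place with OPENROWSET. This assumes the ACE OLE DB provider is installed and 'Ad Hoc Distributed Queries' is enabled; the path, sheet, and column names are placeholders.

    SELECT c.*
    FROM Customers AS c
    JOIN OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                    'Excel 12.0;Database=C:\temp\people.xlsx;HDR=YES',
                    'SELECT * FROM [Sheet1$]') AS x
      ON x.Name = c.Name AND x.Email = c.Email;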

SQL Server: Sync Indexes between two tables within Same database

Posted: 01 May 2013 01:26 PM PDT

I need to automate syncing only the indexes between two tables (Primary and Stage_Table) within the same database. I tried using the SSIS SQL Server Objects task, but it looks like it only works when syncing between two different instances. The indexes on the primary table keep getting changed frequently, and I need the empty stage table to have all the updated indexes when I do the partition switch.

Thanks,

Teradata performance statistics

Posted: 01 May 2013 10:37 AM PDT

I need some statistics regarding the performance of Teradata, something like: with given hardware, given nodes, and given AMPs, a table which has thousands of rows can be read in t seconds. I need this information to analyze the feasibility of Teradata.

MySQL Read Speed and Partitioning on Separate Drives

Posted: 01 May 2013 10:41 AM PDT

Introduction

A simple project, but with a heavy read load (~90% reads) over an ever-growing table of ~20 million rows, where I have to boost performance and ensure low response times for read queries. The table was partitioned on a fixed field (21 partitions, avg 1 million rows/partition) like this:

    CREATE TABLE `my_table` (
        `id` int(10) unsigned NOT NULL,
        `fieldA` char(40) NOT NULL,
        `fieldB` char(40) NOT NULL,
        `fieldC` char(32) DEFAULT NULL,
        -- some other fields
        PRIMARY KEY (`id`,`fieldC`)
        -- some other index
    ) ENGINE=MyISAM
    PARTITION BY LIST COLUMNS(`fieldC`) (
        partition pNULLorEMPTY VALUES IN(NULL,''),
        partition pValueA VALUES IN('valueA'),
        partition pValueB VALUES IN('valueB'),
        partition pValueC VALUES IN('valueC')
        -- other partitions
    );

The root of the queries is always "WHERE fieldC = ? ... AND ...", so access to a single partition is always guaranteed.

This table can only be MyISAM, so I have installed MySQL Server 5.6.11:

Prior to MySQL 5.6.6, a SELECT from a partitioned MyISAM table caused a lock on the entire table; in MySQL 5.6.6 and later, only those partitions actually containing rows that satisfy the SELECT statement's WHERE condition are locked. This has the effect of increasing the speed and efficiency of concurrent operations on partitioned MyISAM tables.

The setup consists of a Master/Slave topology on identical HW (I can distribute reads across Master and Slave).

Possible solution

I have three choices for the hardware architecture (6 slots for 3.5'' disks):

Solution #1

  • 1xraid1 for OS, /tmp and binlog

  • 1xraid1 partition for mysql data (ex: /diskA)

  • 1xraid1 additional partition for mysql data (ex: /diskB)

Solution #2

  • 1xraid1 OS, /tmp and binlog

  • 1xraid10 mysql data

Solution #3 (2 disks wasted on tmp!)

  • 1xraid1 OS and binlog

  • 1xraid1 partition for mysql data

  • 1xraid1 /tmp

The first choice can help me spread the I/O over two disks using:

    CREATE TABLE `my_table` (
        `id` int(10) unsigned NOT NULL,
        `fieldA` char(40) NOT NULL,
        `fieldB` char(40) NOT NULL,
        `fieldC` char(32) DEFAULT NULL,
        -- some other fields
        PRIMARY KEY (`id`,`fieldC`)
        -- some other index
    ) ENGINE=MyISAM
    PARTITION BY LIST COLUMNS(`fieldC`) (
        partition pNULLorEMPTY VALUES IN(NULL,'')
            DATA DIRECTORY  = '/diskA/customer'
            INDEX DIRECTORY = '/diskA/customer',
        partition pValueA VALUES IN('valueA')
            DATA DIRECTORY  = '/diskB/customer'
            INDEX DIRECTORY = '/diskB/customer',
        partition pValueB VALUES IN('valueB')
            DATA DIRECTORY  = '/diskA/customer'
            INDEX DIRECTORY = '/diskA/customer',
        partition pValueC VALUES IN('valueC')
            DATA DIRECTORY  = '/diskB/customer'
            INDEX DIRECTORY = '/diskB/customer'
        -- other partitions
    );

The second choice improves the fault-tolerance level; but consider that I have a Master/Slave configuration, so at any moment I can promote the Slave to Master.

What kind of architecture do you advise?

Are there problems that at the moment I do not see?

Any other kind of suggestions would be greatly appreciated. :)

Thanks in advance

crx

Is it ever a good idea to denormalize for integrity?

Posted: 01 May 2013 01:18 PM PDT

I'm using Postgres 9.2 to develop a quiz app, in which I present the user with a series of problems and record their answers.

These problems can take a variety of forms - they might be multiple choice (What's 2 + 2? A: 2. B: 3. C: 4), or they might require the user to calculate an answer of their own, in which case I need to constrain their input to be something like '440' or '1/2' or '.333'. Some of the problems might prompt the user to type in an essay. And, of course, I may need to add more types of problems later.

The tables I'm envisioning, in a simplified form, are something like this:

    CREATE TABLE problems
    (
      problem_id serial NOT NULL PRIMARY KEY,
      problem_type text NOT NULL, -- Refers to a lookup table
      answer_letter text, -- If not null, refers to the correct answer in the answers table below.
      response text -- If not null, represents a correct answer to be input, like '0.4'
    );

    CREATE TABLE answers
    (
      problem_id integer, -- Foreign key
      answer_letter text,
      content text,

      CONSTRAINT answers_pkey PRIMARY KEY (problem_id, answer_letter)
    )

    CREATE TABLE questions
    (
      user_id integer,
      created_at timestamptz,
      problem_id integer, -- Foreign key
      answer_letter text,
      response text,

      CONSTRAINT questions_pkey PRIMARY KEY (user_id, created_at)
    );

So, the problems table would have a variety of constraints to ensure that:

  • When problem_type is 'multiple_choice', answer_letter must not be null and response must be null.
  • When problem_type is 'user_input', answer_letter must be null and response must not be null. Response must also consist of only a few characters.
  • When problem_type is 'essay', both answer_letter and response must be null, since I can't really have a correct answer for an essay question.

This is clean enough, and constrains the problems table just fine. I might use an enum instead of a lookup table for problem_type, since all of its possible values will already be baked into the schema anyway.
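As an aside, the three rules above can be captured in a single CHECK constraint on problems; a minimal sketch using the lookup values already shown:

    ALTER TABLE problems ADD CONSTRAINT problems_shape_chk CHECK (
        (problem_type = 'multiple_choice' AND answer_letter IS NOT NULL AND response IS NULL)
     OR (problem_type = 'user_input'      AND answer_letter IS NULL     AND response IS NOT NULL)
     OR (problem_type = 'essay'           AND answer_letter IS NULL     AND response IS NULL)
    );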

My difficulty is, how to constrain the questions table? Its constraints will be very similar (I don't want an answer_letter supplied for a question that references an essay problem, and so on). I can think of a few options:

  1. Create a unique index on problems (problem_id, problem_type), add a problem_type field to questions and include it in the reference to problems, then use it in check constraints in a way similar to problems. This is the way I'm leaning right now, since it seems cleanest overall, but then I'm denormalizing to achieve the proper constraints, which feels wrong to me.
  2. Create three problem tables, one for each type, and constrain them separately. Do the same with three question tables. This feels like the pure relational way to me, which is typically what I'd like to go for, but it also feels way too complex. I don't want to have to deal with unioning three tables (or more, later on) to get a user's question history.
  3. Go with #2, but use Postgres' inheritance support to try to keep the logic simple. But since you can't point a foreign key at a hierarchy of tables, again it's not a very clean solution, it's a solution that would again need to be hacked around.
  4. Use triggers to ensure that questions data fits the corresponding problem. Maybe it's because I don't have much experience with triggers, but I'm concerned that cramming that sort of imperative logic into the DB will eventually become unmanageable.
  5. Forget the question constraints, handle it in app logic, and hope for the best. You can't constrain everything all the time. Of course, I don't really like this idea either.

I feel like there's a problem with my approach to modeling that's leading me to these difficulties, especially since I'm running into a couple of cases very similar to this elsewhere in my schema (this one was just the easiest to describe). Maybe it's just a domain that's difficult to model, but I feel like there must be a better way, and I'm just not normalizing correctly.

Help? Thanks!

Bitmask Flags with Lookup Tables Clarification

Posted: 01 May 2013 07:05 PM PDT

I've received a dataset from an outside source which contains several bitmask fields stored as varchars. They range from 3 bits up to 21 bits long. I need to be able to run SELECT queries based on these fields using AND or OR logic.

Using a calculated field, where I just convert the bits into an integer value, I can easily find rows that match an AND query, by using a simple WHERE rowvalue = requestvalue, but the OR logic would require using bitwise & in order to find matching records.

Given that I would need to work with several of these columns and select from hundreds of millions of records, I feel that there would be a huge performance hit when doing bitwise & operations to filter my SELECT results.

I came across this answer from searching and it looked like it may fit my needs, but I need some clarification on how it is implemented.

Is this as simple as creating a lookup table that has all possible search conditions?

Example for 3 bits using (a & b) (Edit: Wrong bitwise op)

    001,001
    001,011
    001,101
    001,111
    010,010
    010,011
    010,110
    011,011
    011,111
    etc

The author mentions that it's counter-intuitive initially, but I can't help but feel I'm interpreting the solution incorrectly, as this would give me a single lookup table with likely billions of rows.

Any clarifications on the answer I linked above or other suggestions that would preserve the existing database are appreciated.

Edit: A more concrete example using small data.

Four flags, HasHouse,HasCar,HasCat,HasDog, 0000 is has none, 1111 is has all.

Any number of flags, from all to none, can be flipped, and results must be filtered where selection matches all (Using exact value comparison) or at least 1 (Using bitwise &).

Adding a single calculated column for each bitmask is ok, but adding a column for each bit for more than 100 bits, coupled with how to insert/update the data is why I'm trying to find alternative solutions.
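For reference, a self-contained sketch of the two filters under discussion, using the four-flag example (flags packed into an int, HasHouse as the high bit):

    DECLARE @people TABLE (name varchar(20), flags int);
    INSERT INTO @people VALUES ('alice', 15), ('bob', 9), ('carol', 4);  -- 1111, 1001, 0100

    DECLARE @mask int = 12;  -- 1100 = HasHouse + HasCar

    SELECT name FROM @people WHERE flags & @mask = @mask;  -- AND: has every flag in the mask
    SELECT name FROM @people WHERE flags & @mask <> 0;     -- OR: has at least one flag in the mask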

SQL Server 2012 catalog.executions to sysjobhistory - any way to join them?

Posted: 01 May 2013 01:05 PM PDT

I have exhausted my resources and can't find a foolproof way to join the ssisdb.catalog tables to the jobs that run them. I'm trying to write some custom sprocs to monitor my execution times and rows written from the catalog tables, and it would be greatly beneficial to be able to tie them together with the calling job.

BIT columns all "1" after a phpMyAdmin export/import

Posted: 01 May 2013 03:05 PM PDT

I have to import data from a MySQL database using phpMyAdmin because that's the only interface my hosting provider supports.

I have exported my database from my local machine using phpMyAdmin. After that I imported the script file to my host. All of the data in the columns of BIT type was changed to '1'. The hosting database version is 5.5.29.

Is it a phpMyAdmin problem, or MySQL version problem? How can I fix this?

SQL Server replication subscriptions marked as inactive

Posted: 01 May 2013 10:05 AM PDT

Is there any way to force SQL Server NOT to mark subscriptions as inactive, ever?

It happens sporadically when there are connection issues and I don't want to have to reinitialize the subscription every time.

Note, I'm not talking about the subscriptions being marked as expired...just as inactive.

Thank you.

Error trying to run SSIS Packages in SQL Server 2008 SQL Server Agent

Posted: 01 May 2013 02:48 PM PDT

I am trying to run a number of SQL Server 2008 stored procedures that are wrapped in SSIS packages consisting mainly of Execute SQL Tasks. They run fine in Visual Studio 2008 after deploying to a folder on the SQL Server machine. The problem is when I created a job in SQL Server Agent and tried to run the job. That is when I get the error:

Executing the query "EXEC ? = dbo.usp_Clear_All_JDAWG_Tables" failed with the following error: "The EXECUTE permission was denied on the object 'usp_Clear_All_JDAWG_Tables', database 'JDAWGD', schema 'dbo'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. End Error DTExec: The package execution returned DTSER_FAILURE (1).

'usp_Clear_All_JDAWG_Tables' is the first of many procedures this job is trying to run. The stored procs themselves do not take parameters, but in the SQL Task I have commands like this to run the stored proc: EXEC ? = dbo.usp_Load_JDAWG_Tables_From_Persons_All_Agencies. Is there a better/different way to run a stored procedure in the SQL Task?

As a side note, I was able to create and successfully run an Agent job that directly executed the stored procedures that the packages run.

Thanks for any assistance.
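Since the error is a plain EXECUTE denial for whatever account the package runs under from Agent, a hedged sketch of the narrowest fix (the account name is a placeholder):

    USE JDAWGD;
    GRANT EXECUTE ON dbo.usp_Clear_All_JDAWG_Tables TO [DOMAIN\AgentServiceAccount];
    -- or, to cover all the procedures at once:
    -- GRANT EXECUTE ON SCHEMA::dbo TO [DOMAIN\AgentServiceAccount];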

Login failure when running a SSIS package from a SQL Server job

Posted: 01 May 2013 10:05 AM PDT

I have a SSIS package that migrates data from a Sybase database to SQL Server. The connection to the source database has been made through an ODBC connection.

If I run the package directly on my machine, it runs successfully. However, if I run it through a SQL Server job I have created on my local machine (where the destination database is), I get a login failure error:

    An OLE DB record is available.
    Source: "Microsoft SQL Server Native Client 10.0"
    Hresult: 0x80004005
    Description: "Login failed for user 'xxx\user1'.".
    An OLE DB record is available.
    Source: "Microsoft SQL Server Native Client 10.0"
    Hresult: 0x80004005
    Description: "Cannot open database "MySybaseDatabase" requested by the login.

I know that the job runs as the SQL Server Agent service account, which is in fact xxx\user1 and is different from the user that has been set up in the ODBC connection to access the source database, but I don't know why the ODBC connection login doesn't take precedence and get used rather than the agent service account.

I read some posts on different sites that talked about setting up a proxy, etc., but I don't know if that is relevant to my issue.

The connection to the source database has been set up via ODBC, but as the step was failing, I added the user ID and password to it to force SQL Server to use the user ID I want rather than the SQL Server Agent user, but it didn't help. [screenshot: the ODBC connection settings] Do you know how we can fix this login issue?

Thank you.

error in Execute SQL Task with parameter

Posted: 01 May 2013 10:06 AM PDT

I have an Execute SQL Task object in the error handlers section of an SSIS package. What it does is insert a record into a table using an OLE DB connection. One value in the insert statement should be sourced from a user variable.

When I try to pass the query, I get the following error. I searched the internet a lot and tried different things, but still get the error message:

"The query failed to parse. Parameter Information cannot be derived from SQL Statements. Set parameter information before preparing command."

What I have set to the SQLStatement is:

INSERT INTO ErrorLog(ErrorDateTime, ErrorDescription) VALUES(GETDATE(),?)  

What I have in the Parameter Mapping section of the task: [screenshot: the Parameter Mapping tab]

The SQLSourceType is "Direct input" and as there is no result to this query the ResultSet has been set to "None".

The ErrorDescription column in the target table is varchar(max).

Any idea how fix this issue?

Thanks in advance.

Microsoft Visual Studio is Busy

Posted: 01 May 2013 10:08 AM PDT

I am working in Visual Studio 2008 on an SSIS package. It is really frustrating that I get the "Microsoft Visual Studio is Busy" message quite often. It happens almost every time I try to open a data source, destination, etc.

From what I can see, it seems VS is trying to save items. Is there any way to avoid this auto-save? Generally, how can we get rid of this message and the interruptions?

If I SET ARRAYSIZE 1 in SQL*Plus, the records are still fetched in pairs. Why?

Posted: 01 May 2013 06:39 PM PDT

I thought about writing a simple tail -f-like utility to "trace" the progress of some figures within the database:

    create or replace function tail_f return varchar2_tab pipelined as
        n number;
    begin

        loop
            exit when ...

            select count(*) into n from ... where ...;

            pipe row(sysdate || ' n= ' || n);
            dbms_lock.sleep(60);
        end loop;

        return;
    end tail_f;

And then I'd like to select * from table(tail_f) in SQL*Plus.

In order to fetch the rows one by one, I SET ARRAYSIZE 1. Yet, the records (except the first one) are fetched in pairs.

Is there an explanation for this and how can I get the records as soon as one is piped?

[SQL Server] Using REPLACE in an UPDATE statement










sqlteam.com
