Wednesday, May 1, 2013

[how to] How to get Greenplum table definition?

How to get Greenplum table definition?

Posted: 01 May 2013 08:24 PM PDT

There are already tables created in the schema, but I don't know how to get the detailed information about how they were created: the storage options used, the orientation, and so on.

How do I get this information in Greenplum?
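For reference, a rough sketch of one way to pull this, assuming Greenplum 4.x-era catalogs and a placeholder table name (my_table):

-- \d+ my_table in psql, or "pg_dump --schema-only -t my_table", prints the full DDL.
-- The storage details can also be read straight from the catalogs:
SELECT c.relname,
       c.relstorage,       -- 'h' = heap, 'a' = append-only (row), 'c' = append-only (column)
       a.columnstore,      -- orientation flag for append-only tables
       a.compresstype,
       a.compresslevel,
       a.blocksize
FROM pg_class c
LEFT JOIN pg_appendonly a ON a.relid = c.oid
WHERE c.relname = 'my_table';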

How do I selectively purge MySQL table rows using a list of column values?

Posted: 01 May 2013 07:44 PM PDT

I'm brand new to databases and working with MySQL. I have a large MySQL table with the following columns: Id, Link, Timestamp.

And I have a list of links. I want to keep only the rows whose Link column matches an entry in this list.

Any pointers on how to accomplish this?
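As a minimal sketch (my_table and the staging table keep_links are hypothetical names), one way is to load the list into its own table and delete everything that doesn't match:

-- staging table holding the links to keep
CREATE TABLE keep_links (
    link VARCHAR(255) NOT NULL,
    PRIMARY KEY (link)
);
-- load the list here, e.g. with INSERT ... VALUES or LOAD DATA INFILE

-- delete every row whose Link is not in the list
DELETE t
FROM my_table AS t
LEFT JOIN keep_links AS k ON k.link = t.Link
WHERE k.link IS NULL;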

Change URL of Erwin data model HTML export hyperlink of www.ca.com to something different

Posted: 01 May 2013 03:17 PM PDT

The website URL of http://www.ca.com appears in every fact group (subject area) of our Erwin file. How can we make a global change to this URL in Erwin, so we don't have to update each subject area data model?

<HTML>
<HEAD>
<TITLE> CA Report Template Builder - Atomic Physical Document </TITLE>
</HEAD>
<BODY BGCOLOR=#FFFF99 TEXT=#000000 LINK=#000000 VLINK=#000000 ALINK=#990000>
<P><A HREF=http://www.ca.com TARGET=REPWINDOW><IMG src="company.jpeg"></A><P><P>&nbsp<P><P>&nbsp<P>
<P><TABLE>
<B><FONT FACE=Arial SIZE=2 COLOR=#0033FF>Report Components:</FONT></B>
<P><TABLE>
<TR><TD ALIGN = LEFT>
<FONT FACE=Arial SIZE=2 COLOR=#0033FF>1. 'Picture' section </FONT></TD></TR>
<TR><TD ALIGN = LEFT>
<A HREF = 1Picturesection0.htm TARGET=REPWINDOW><FONT FACE=Arial SIZE=2>DM Case / Display1 </FONT></TD></TR>
</TABLE>
</BODY>
</HTML>

SQL Server:: Auto Generate Scripts to create table, indexes etc

Posted: 01 May 2013 04:31 PM PDT

In SSMS, we can right-click a DB/table/index etc., select the 'Script As' option, and generate DROP and CREATE statements. Is there a way to automate this script generation and drop the scripts into a temp location, rather than manually right-clicking each object and generating them?

Thanks,

Trying to write a system function in SQL Server 2008 R2

Posted: 01 May 2013 05:56 PM PDT

I am trying to write a simple system function so that the dbo. prefix is NOT required to access the function. I have found an example of this for SQL 2000, but when I run:

EXEC SP_CONFIGURE 'ALLOW UPDATES', 1
RECONFIGURE WITH OVERRIDE
go

create function system_function_schema.trim
    (@input varchar(max))
    returns varchar(max)
as
begin
    declare @result varchar(max)

    select @result = rtrim(ltrim(@input))

    return @result
end
go

EXEC SP_CONFIGURE 'ALLOW UPDATES', 0
RECONFIGURE WITH OVERRIDE

This is the error thrown.

The specified schema name "system_function_schema" either does not exist or you   do not have permission to use it. Severity 16  

The expected usage would be

select trim('   padded value   ')  

Does anyone know the approach I should be taking, or do I HAVE to create it as a regular user-defined function that must be called with the dbo. prefix, like:

select dbo.trim('   padded value   ')  

Watch output of MySQL query over time

Posted: 01 May 2013 12:08 PM PDT

I'd like to run a query every few seconds and show the output in a console window. From bash, using watch:

watch "mysql -A -pXXXXX -h host -D dbname -u username --vertical -e \"SELECT rowID FROM IDs LIMIT 1\""  

With the output:

*************************** 1. row ***************************
rowID: 68434810

I'd like to not disconnect and reconnect every 2 seconds though. Is there a better way?
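One way to avoid reconnecting is to do the polling inside MySQL itself, from a single connection; a rough sketch (the procedure name is made up, and how promptly your client flushes each result set may vary):

DELIMITER //
CREATE PROCEDURE watch_rowid(IN p_iterations INT, IN p_sleep_seconds INT)
BEGIN
  DECLARE i INT DEFAULT 0;
  WHILE i < p_iterations DO
    SELECT NOW() AS checked_at, rowID FROM IDs LIMIT 1;
    DO SLEEP(p_sleep_seconds);
    SET i = i + 1;
  END WHILE;
END//
DELIMITER ;

-- poll 100 times, 2 seconds apart, over one connection
CALL watch_rowid(100, 2);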

Can I find out what version of MySQL from the data files?

Posted: 01 May 2013 02:10 PM PDT

I have a very odd situation going on here. I had a linux box running ubuntu 8.10 and MySQL server with a Wordpress installation on it hosting an internal blog for our company. The machine that this was running on has crapped out, but I was able to put the drive into an enclosure and gain access to the files and recover the MySQL data folder. We are mostly a Windows company and I have very limited linux experience. Before this machine crashed we were experimenting with moving this blog to a Windows Virtual Machine running PHP for IIS; and we have an older copy of the Wordpress database up and running on MySQL 5.1. I am currently trying to retrieve the latest data from the linux MySQL data folder and import it into the current database but I am having problems.

Problems so far:

  1. I do not know what version of MySQL was installed on the linux box.
  2. I see ib_logfile0, ib_logfile1, and ibdata1 in the linux MySQL data folder, but am not sure if it was truly running InnoDB or not, as I also have .frm, .myd, and .myi files in the wordpress folder.

If I can figure out the version of the linux box MySQL I might be able to create a virtual machine to run linux and MySQL so I can copy the data folder and then do an export dump; unless someone else knows of an easier way to get the data into my current database.

Any help or guidance on this issue would be greatly appreciated.

Thanks, Nathan

How to avoid lagging when enabling log-slave-updates?

Posted: 01 May 2013 02:49 PM PDT

Master:

Slave:

Both are running 5.5.28.

For incremental backup purposes, I need to enable the log-slave-updates option. But Seconds_Behind_Master keeps increasing immediately after restarting MySQL. A simple statement takes about 2 minutes to be applied on the slave during quiet periods. At busy times, it seems that the slave cannot catch up with the master, so I have to comment the option out and restart MySQL.

Is there any way to improve this?
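If the extra binary logging is what pushes the slave over the edge, one commonly tried (durability-trading) experiment is to relax flushing on the slave; a sketch, assuming InnoDB tables and MySQL 5.5:

-- on the slave: fewer fsyncs per transaction while log-slave-updates is enabled
SET GLOBAL sync_binlog = 0;                      -- let the OS decide when to flush the binlog
SET GLOBAL innodb_flush_log_at_trx_commit = 2;   -- flush the redo log about once per second

-- then keep an eye on Seconds_Behind_Master
SHOW SLAVE STATUS\G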

How to create a procedure through terminal

Posted: 01 May 2013 11:34 AM PDT

In the mysql console we use the delimiter command to change the statement delimiter, which is useful when defining procedures. I understand that delimiter is a client-side command, specific to the mysql client.

But I am using a client that does not have the delimiter command, such as dbslayer. On this kind of client, how can I define procedures?

for now consider:

create procedure test_pro()
begin
  select 'hello pro';
end

I tried the following :

mysql -u root -pmypass  test < proc_file  

where proc_file contains the above procedure;

but that is giving me the following error:

ERROR 1064 (42000) at line 1: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 3  

I even tried the following:

create procedure test_pro()
begin
  select 'hello pro';
end;

(added the trailing semicolon) but I am getting the same error.

I am experiencing the same thing with dbslayer; if I can define the above procedure through the terminal, I think I should be able to do it through dbslayer as well.
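For the plain mysql client, the usual fix is to put the DELIMITER directive inside proc_file itself, so the client doesn't split the statement at the ';' inside the body; for clients that have no DELIMITER command at all, a single-statement body needs no BEGIN ... END and therefore no inner ';'. A sketch of both:

-- proc_file, for the mysql client:
DELIMITER //
CREATE PROCEDURE test_pro()
BEGIN
  SELECT 'hello pro';
END//
DELIMITER ;

-- alternative for clients without DELIMITER: a one-statement body works as-is
CREATE PROCEDURE test_pro() SELECT 'hello pro';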

Foreign Key to Multicolumn Primary Key?

Posted: 01 May 2013 11:50 AM PDT

I have three connected tables in my design. The basic logic is: when a category is deleted, also delete the records in the category_product table that have the deleted category's id in their cat_id column. But when I try to create the tables with the query below, I get this error: "there is no unique constraint matching given keys for referenced table 'category_product'". I don't know what is wrong with this design or logic.

CREATE TABLE IF NOT EXISTS category_product (
    cat_id integer NOT NULL,
    product_id integer NOT NULL,
    CONSTRAINT PK_CATEGORY_PRODUCT_ID PRIMARY KEY (cat_id, product_id)
);

CREATE TABLE IF NOT EXISTS categories (
    id serial,
    name text NOT NULL,
    is_active boolean NOT NULL DEFAULT true,
    CONSTRAINT PK_CATEGORIES_ID PRIMARY KEY (id),
    CONSTRAINT FK_PRODUCTS FOREIGN KEY (id) REFERENCES category_product(cat_id) ON UPDATE NO ACTION ON DELETE CASCADE
);

CREATE TABLE IF NOT EXISTS products (
    id serial,
    name text NOT NULL,
    description text NULL,
    photo text NULL,
    price money NOT NULL,
    is_in_menu boolean NOT NULL DEFAULT true,
    is_from_kitchen boolean NOT NULL DEFAULT true,
    is_deleted boolean NOT NULL DEFAULT false,
    CONSTRAINT PK_PRODUCTS_ID PRIMARY KEY (id),
    CONSTRAINT FK_CATEGORY_PRODUCT FOREIGN KEY (id) REFERENCES category_product(product_id) ON UPDATE NO ACTION ON DELETE CASCADE
);

So what kind of structure do I need to achieve what I am trying to do?
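A sketch of the more conventional layout: the junction table references the two parent tables (rather than the parents referencing the junction table), and ON DELETE CASCADE on those references removes the category_product rows when a category is deleted. Column names follow the question; the trimmed-down products columns are just for brevity:

CREATE TABLE IF NOT EXISTS categories (
    id serial,
    name text NOT NULL,
    is_active boolean NOT NULL DEFAULT true,
    CONSTRAINT pk_categories_id PRIMARY KEY (id)
);

CREATE TABLE IF NOT EXISTS products (
    id serial,
    name text NOT NULL,
    price money NOT NULL,
    CONSTRAINT pk_products_id PRIMARY KEY (id)
);

CREATE TABLE IF NOT EXISTS category_product (
    cat_id     integer NOT NULL REFERENCES categories(id) ON UPDATE NO ACTION ON DELETE CASCADE,
    product_id integer NOT NULL REFERENCES products(id)   ON UPDATE NO ACTION ON DELETE CASCADE,
    CONSTRAINT pk_category_product_id PRIMARY KEY (cat_id, product_id)
);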

Oracle Express intermittently stops responding, requiring a OS reboot to fix

Posted: 01 May 2013 12:32 PM PDT

I've been having an issue with Oracle XE the past 2 days and it's really been making me pull my hair out.

There seems to be no rhyme nor reason why this happens, or when it does, but it happened 6 times yesterday and so far twice this morning. I'm developing a .NET app, and sometimes when I start the app, refresh the page, or do whatever, I get an exception that a connection could not be made to the database. When I then try to connect with sqlplus, I just get:

C:\>sqlplus user/password@localhost

SQL*Plus: Release 11.2.0.2.0 Beta on Wed May 1 11:08:53 2013

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

It then just sits there forever. No timeout, no error, no nothing. I can't connect with any other database program either, the whole server is just unresponsive.

If I go into the Services manager, I see that the services are indeed running so nothing has crashed:

(screenshot of the Services manager showing the Oracle services running)

I've tried restarting OracleServiceXE and OracleXETNSListener. Both will shut down and restart, however I still cannot make a connection to the server.

The only thing that fixes this problem is a complete Windows reboot. However, this of course takes 5-10 minutes and is really starting to get old.

There does not appear to be anything related to Oracle around this time in the Windows Event Viewer. I'm happy to add any other information that would help debug. Any ideas?

Update:

I did notice a few strange errors in listener.log. These errors start at the exact same time the crashes started happening:

30-APR-2013 08:50:42 * 12531
TNS-12531: TNS:cannot allocate memory
Tue Apr 30 08:52:28 2013
30-APR-2013 08:52:28 * 12531
TNS-12531: TNS:cannot allocate memory
Tue Apr 30 08:52:58 2013
30-APR-2013 08:52:58 * 12531
TNS-12531: TNS:cannot allocate memory

Last 1,000 lines of alert_XE.log are here.

Why do some DBAs avoid the use of null? [duplicate]

Posted: 01 May 2013 11:11 AM PDT


I've noticed that some DBAs avoid using NULL for things like integer values, instead preferring to use NOT NULL DEFAULT 0.

Is there any conventional reason for this?
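A small illustration of the behaviour that usually motivates it (t and qty are hypothetical names): NULL comparisons are three-valued, so NULL rows silently drop out of filters and aggregates.

SELECT COUNT(*)   FROM t;                 -- counts every row
SELECT COUNT(qty) FROM t;                 -- skips rows where qty IS NULL
SELECT COUNT(*)   FROM t WHERE qty <> 5;  -- also skips NULL rows: NULL <> 5 is unknown, not true
SELECT SUM(qty)   FROM t WHERE 1 = 0;     -- returns NULL, not 0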

Challenge: split 500 database records into two smaller datasets, randomly

Posted: 01 May 2013 02:55 PM PDT

I have a database table with 500 records. I want to split these records 75% / 25%, randomly, in order to use the two datasets for training and testing machine learning algorithms. Does anyone know how to do that using an SQL query? PS: I use MySQL.

In addition, I found this: select ... from ... where ... ORDER BY rand() LIMIT 150, but this gives me only the first dataset.
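One sketch of a full split in MySQL, assuming the table is called mydata and that tagging rows with a new column is acceptable:

ALTER TABLE mydata ADD COLUMN dataset VARCHAR(8);

UPDATE mydata SET dataset = 'test';    -- everything starts in the 25% bucket
UPDATE mydata SET dataset = 'train'    -- then a random 75% moves to the training bucket
ORDER BY RAND()
LIMIT 375;                             -- 75% of 500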

Thanks in advance

Postgres - Schema information within Trigger?

Posted: 01 May 2013 11:00 AM PDT

Background: multi-tenant design; separate schema per customer, all table structures identical; data-warehouse / non-OLTP system here; Postgres 9.2.x.

Reviewing patterns for the following problem: I have a matview that needs to be updated if an underlying condition is altered (in this case, an inclusion BOOLEAN).

I was considering implementing a TRIGGER on update, which would then either execute the matview creation UDF or possibly call NOTIFY to be handled by some listening process, perhaps at a later point in time, as the immediate matview update isn't necessarily required.

Workflow :

UPDATE made to some underlying table.
TRIGGER is executed.
    (Obtain schema information here.)
    EXECUTE matview creation
    -or-
    NOTIFY command for queue / processing at a later time.

First question, am I on the right track here or are there other, more efficient / effective patterns I should review?

Given the nature of my multi-tenant design, is it possible to obtain the current schema of the underlying table from which the trigger event was fired? I'll need that information to pass into my matview creation UDF.
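For what it's worth, PL/pgSQL trigger functions do expose the firing table's schema as TG_TABLE_SCHEMA, so a sketch along these lines should cover the NOTIFY variant (the function, channel, column and table names are made up):

CREATE OR REPLACE FUNCTION queue_matview_refresh() RETURNS trigger AS $$
BEGIN
    -- TG_TABLE_SCHEMA names the tenant schema whose table fired the trigger
    PERFORM pg_notify('matview_refresh', TG_TABLE_SCHEMA);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_inclusion_changed
AFTER UPDATE OF inclusion ON some_table   -- created once per tenant schema
FOR EACH ROW EXECUTE PROCEDURE queue_matview_refresh();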

Thanks for any help!

Force SQL Server to run query conditions as written?

Posted: 01 May 2013 08:11 PM PDT

I'm using SQL Server 2008 R2 and I have this pseudo-query (SP):

select ...
from ...
WHERE @LinkMode IS NULL
      AND (myColumn IN (...very long-running query...))
      ...
      ...

The problem is that the query still takes a very long time to execute, even if I execute the SP with @LinkMode = 2.

As you noticed, the time-consuming query should be executed only if @LinkMode is null (which is not the case here; in my case @LinkMode = 2!).

However, if I change it to:

select ...
from ...
WHERE 1 = 2
      AND (myColumn IN (...very long-running query...))
      ...
      ...

The SP does run fast.

I've heard before that the optimizer can sometimes reorder the criteria.

So I ask:

  • Even if the optimizer chooses a different route, what can be faster than checking whether a value is null? I mean, I think that checking a == null is much faster than running the other long query...

  • How can I force SQL Server to run the query as I've written it (in the same order)? One common workaround is sketched below.
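One common workaround, in the same pseudo-query style as above, is to move the decision out of the WHERE clause so each branch gets its own plan (adding OPTION (RECOMPILE) to the original statement is another route):

IF @LinkMode IS NULL
BEGIN
    select ...
    from ...
    WHERE (myColumn IN (...very long-running query...))
          ...;
END
ELSE
BEGIN
    select ...
    from ...
    WHERE ...;   -- the cheap conditions only
END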

Disable trigger still fired

Posted: 01 May 2013 01:05 PM PDT

I've just run the following T-SQL to disable a trigger (with a deploy script wrapper around osql):

disable trigger dbo.trExampleTableInsert ON dbo.ExampleTable

In SSMS I could see the trigger was disabled (downward pointing red arrow). When I inserted some rows into the table, the trigger still fired (I have an audit table that logs when the trigger fires).

I enabled and disabled the trigger this time using SSMS UI. When I inserted rows the trigger didn't fire.

What does the UI do differently over the T-SQL command? Did I need to issue a commit?
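To rule out a stale SSMS display, the catalog view reports the authoritative state:

SELECT t.name, t.is_disabled          -- 1 = disabled, 0 = enabled
FROM sys.triggers AS t
WHERE t.name = 'trExampleTableInsert';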

Is there a better way than the ones listed within to join data in an Excel spreadsheet to a table in a SQL Server database?

Posted: 01 May 2013 01:47 PM PDT

Let's say I'm given a list of names and email addresses in an Excel spreadsheet. I am asked to find all customers in the database whose name and email are in that list.

I currently know of two ways to achieve this...

  • Import the data into a new table (to be removed later) and then use this table to get the data.
  • Paste the names and addresses into a SQL query.

Is there a 'better' way to do this kind of thing, i.e. one that doesn't require me to place the data in the database before using it, and doesn't require me to create a monster query that has all the data in a 'where x in (data)' part?
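A third option, sketched here on the assumption that the ACE OLE DB provider is installed and ad hoc distributed queries are enabled, is to query the spreadsheet in place with OPENROWSET (the file path, sheet name, and Customers columns are made up):

SELECT c.*
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;Database=C:\temp\people.xlsx;HDR=YES',
                'SELECT * FROM [Sheet1$]') AS x
JOIN dbo.Customers AS c
    ON  c.Name  = x.Name
    AND c.Email = x.Email;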

SQL Server: Sync Indexes between two tables within Same database

Posted: 01 May 2013 01:26 PM PDT

I need to automate syncing only the indexes between two tables (Primary and Stage_Table) within the same database. I tried using the SSIS Transfer SQL Server Objects Task, but it looks like it only works when syncing between two different instances. Indexes on the Primary table keep getting changed frequently, and I need the empty Stage_Table to have all the updated indexes when I do the partition switch.
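As a starting point for automating it, the index metadata can be compared directly in T-SQL; a rough sketch, assuming the two table names below and that matching indexes share the same name on both tables:

-- indexes present on the primary table but missing (by name) on the stage table
SELECT i.name, i.type_desc, i.is_unique
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID('dbo.Primary_Table')
  AND i.name IS NOT NULL
  AND NOT EXISTS (SELECT 1
                  FROM sys.indexes AS s
                  WHERE s.object_id = OBJECT_ID('dbo.Stage_Table')
                    AND s.name = i.name);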

Thanks,

Teradata performance statistics

Posted: 01 May 2013 10:37 AM PDT

I need some statistics regarding the performance of Teradata: something like, with given hardware, a given number of nodes, and a given number of AMPs, a table with thousands of rows can be read in 't' seconds. I need this information to analyze the feasibility of Teradata.

MySQL Read Speed and Partitioning on Separate Drives

Posted: 01 May 2013 10:41 AM PDT

Introduction

A simple project, but with a heavy read load (~90% reads) over a table (~20 million rows, ever-growing) where I have to boost performance and ensure low response times for read queries. The table is partitioned on a fixed field (21 partitions, avg 1 million rows/partition) like this:

CREATE TABLE `my_table` (
  `id` int(10) unsigned NOT NULL,
  `fieldA` char(40) NOT NULL,
  `fieldB` char(40) NOT NULL,
  `fieldC` char(32) DEFAULT NULL,
  -- some other fields
  PRIMARY KEY (`id`,`fieldC`)
  -- some other index
) ENGINE=MyISAM
PARTITION BY LIST COLUMNS(`fieldC`) (
  partition pNULLorEMPTY VALUES IN(NULL,''),
  partition pValueA VALUES IN('valueA'),
  partition pValueB VALUES IN('valueB'),
  partition pValueC VALUES IN('valueC')
  -- other partitions
);

The root of the queries is always "WHERE fieldC = ? ... AND ...", so access to a single partition is always guaranteed.

This table can only be MyISAM, so I have installed MySQL Server 5.6.11:

Prior to MySQL 5.6.6, a SELECT from a partitioned MyISAM table caused a lock on the entire table; in MySQL 5.6.6 and later, only those partitions actually containing rows that satisfy the SELECT statement's WHERE condition are locked. This has the effect of increasing the speed and efficiency of concurrent operations on partitioned MyISAM tables.

The setup consists of a Master/Slave topology with the same HW (I can distribute reads across Master and Slave).

Possible solution

I have three choices for the hardware architecture (6 disk slots, 3.5''):

Solution #1

  • 1xraid1 for OS, /tmp and binlog

  • 1xraid1 partition for mysql data (ex: /diskA)

  • 1xraid1 additional partition for mysql data (ex: /diskB)

Solution #2

  • 1xraid1 OS, /tmp and binlog

  • 1xraid10 mysql data

Solution #3 (2 disks wasted on tmp!)

  • 1xraid1 OS and binlog

  • 1xraid1 partition for mysql data

  • 1xraid1 /tmp

The first choice can help me spread the I/O over two disks using:

CREATE TABLE `my_table` (
  `id` int(10) unsigned NOT NULL,
  `fieldA` char(40) NOT NULL,
  `fieldB` char(40) NOT NULL,
  `fieldC` char(32) DEFAULT NULL,
  -- some other fields
  PRIMARY KEY (`id`,`fieldC`)
  -- some other index
) ENGINE=MyISAM
PARTITION BY LIST COLUMNS(`fieldC`) (
  partition pNULLorEMPTY VALUES IN(NULL,'')
    DATA DIRECTORY = '/diskA/customer'
    INDEX DIRECTORY = '/diskA/customer',
  partition pValueA VALUES IN('valueA')
    DATA DIRECTORY = '/diskB/customer'
    INDEX DIRECTORY = '/diskB/customer',
  partition pValueB VALUES IN('valueB')
    DATA DIRECTORY = '/diskA/customer'
    INDEX DIRECTORY = '/diskA/customer',
  partition pValueC VALUES IN('valueC')
    DATA DIRECTORY = '/diskB/customer'
    INDEX DIRECTORY = '/diskB/customer'
  -- other partitions
);

The second choice improves the fault tolerance, but consider that I have a Master/Slave configuration, so at any moment I can promote the Slave to Master.

What kind of architecture do you advise?

Are there problems that I do not see at the moment?

Any other kind of suggestions would be greatly appreciated. :)

Thanks in advance

crx

Is it ever a good idea to denormalize for integrity?

Posted: 01 May 2013 01:18 PM PDT

I'm using Postgres 9.2 to develop a quiz app, in which I present the user with a series of problems and record their answers.

These problems can take a variety of forms - they might be multiple choice (What's 2 + 2? A: 2. B: 3. C: 4), or they might require the user to calculate an answer of their own, in which case I need to constrain their input to be something like '440' or '1/2' or '.333'. Some of the problems might prompt the user to type in an essay. And, of course, I may need to add more types of problems later.

The tables I'm envisioning, in a simplified form, are something like this:

CREATE TABLE problems
(
  problem_id serial NOT NULL PRIMARY KEY,
  problem_type text NOT NULL, -- Refers to a lookup table
  answer_letter text, -- If not null, refers to the correct answer in the answers table below.
  response text -- If not null, represents a correct answer to be input, like '0.4'
);

CREATE TABLE answers
(
  problem_id integer, -- Foreign key
  answer_letter text,
  content text,

  CONSTRAINT answers_pkey PRIMARY KEY (problem_id, answer_letter)
);

CREATE TABLE questions
(
  user_id integer,
  created_at timestamptz,
  problem_id integer, -- Foreign key
  answer_letter text,
  response text,

  CONSTRAINT questions_pkey PRIMARY KEY (user_id, created_at)
);

So, the problems table would have a variety of constraints to ensure that:

  • When problem_type is 'multiple_choice', answer_letter must not be null and response must be null.
  • When problem_type is 'user_input', answer_letter must be null and response must not be null. Response must also consist of only a few characters.
  • When problem_type is 'essay', both answer_letter and response must be null, since I can't really have a correct answer for an essay question.

This is clean enough, and constrains the problems table just fine. I might use an enum instead of a lookup table for problem_type, since all of its possible values will already be baked into the schema anyway.

My difficulty is, how to constrain the questions table? Its constraints will be very similar (I don't want an answer_letter supplied for a question that references an essay problem, and so on). I can think of a few options:

  1. Create a unique index on problems (problem_id, problem_type), add a problem_type field to questions and include it in the reference to problems, then use it in check constraints in a way similar to problems (a rough sketch of this appears after the list). This is the way I'm leaning right now, since it seems cleanest overall, but then I'm denormalizing to achieve the proper constraints, which feels wrong to me.
  2. Create three problem tables, one for each type, and constrain them separately. Do the same with three question tables. This feels like the pure relational way to me, which is typically what I'd like to go for, but it also feels way too complex. I don't want to have to deal with unioning three tables (or more, later on) to get a user's question history.
  3. Go with #2, but use Postgres' inheritance support to try to keep the logic simple. But since you can't point a foreign key at a hierarchy of tables, again it's not a very clean solution, it's a solution that would again need to be hacked around.
  4. Use triggers to ensure that questions data fits the corresponding problem. Maybe it's because I don't have much experience with triggers, but I'm concerned that cramming that sort of imperative logic into the DB will eventually become unmanageable.
  5. Forget the question constraints, handle it in app logic, and hope for the best. You can't constrain everything all the time. Of course, I don't really like this idea either.
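The sketch of option 1, for reference (constraint names are made up, and only the essay rule is shown as a CHECK):

ALTER TABLE problems
  ADD CONSTRAINT problems_id_type_key UNIQUE (problem_id, problem_type);

CREATE TABLE questions
(
  user_id integer,
  created_at timestamptz,
  problem_id integer,
  problem_type text,   -- duplicated from problems so it can be checked here
  answer_letter text,
  response text,

  CONSTRAINT questions_pkey PRIMARY KEY (user_id, created_at),
  CONSTRAINT questions_problem_fkey
    FOREIGN KEY (problem_id, problem_type)
    REFERENCES problems (problem_id, problem_type),
  CONSTRAINT questions_essay_check
    CHECK (problem_type <> 'essay' OR (answer_letter IS NULL AND response IS NULL))
);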

I feel like there's a problem with my approach to modeling that's leading me to these difficulties, especially since I'm running into a couple of cases very similar to this elsewhere in my schema (this one was just the easiest to describe). Maybe it's just a domain that's difficult to model, but I feel like there must be a better way, and I'm just not normalizing correctly.

Help? Thanks!

Bitmask Flags with Lookup Tables Clarification

Posted: 01 May 2013 07:05 PM PDT

I've received a dataset from an outside source which contains several bitmask fields as varchars. They range from 3 to 21 bits in length. I need to be able to run SELECT queries based on these fields using AND or OR logic.

Using a calculated field, where I just convert the bits into an integer value, I can easily find rows that match an AND query, by using a simple WHERE rowvalue = requestvalue, but the OR logic would require using bitwise & in order to find matching records.

Given that I would need to work with several of these columns and select from hundreds of millions of records, I feel that there would be a huge performance hit when doing bitwise & operations to filter my SELECT results.

I came across this answer from searching and it looked like it may fit my needs, but I need some clarification on how it is implemented.

Is this as simple as creating a lookup table that has all possible search conditions?

Example for 3 bits using (a & b) (Edit: Wrong bitwise op)

001,001
001,011
001,101
001,111
010,010
010,011
010,110
011,011
011,111
etc

The author mentions that it's counter-intuitive initially, but I can't help but feel I'm interpreting the solution incorrectly, as this would give me a single lookup table with likely billions of rows.

Any clarifications on the answer I linked above or other suggestions that would preserve the existing database are appreciated.

Edit: A more concrete example using small data.

Four flags: HasHouse, HasCar, HasCat, HasDog. 0000 means has none, 1111 means has all.

Any number of flags, from all to none, can be flipped, and results must be filtered where selection matches all (Using exact value comparison) or at least 1 (Using bitwise &).

Adding a single calculated column for each bitmask is ok, but adding a column for each bit for more than 100 bits, coupled with how to insert/update the data is why I'm trying to find alternative solutions.
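For what it's worth, the two filters described above look roughly like this against a single calculated integer column (flags and @wanted are hypothetical names):

-- "matches all of the requested flags": exact equality, or a superset test with &
SELECT * FROM records WHERE flags = @wanted;
SELECT * FROM records WHERE flags & @wanted = @wanted;

-- "matches at least one requested flag": bitwise AND
SELECT * FROM records WHERE flags & @wanted > 0;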

SQL Server 2012 catalog.executions to sysjobhistory - any way to join them?

Posted: 01 May 2013 01:05 PM PDT

I have exhausted my resources and can't find a foolproof way to join the ssisdb.catalog tables to the jobs that run them. Trying to write some custom sprocs to monitor my execution times and rows written from the catalog tables, and it would be greatly beneficial to be able to tie them together with the calling job.

BIT columns all "1" after a phpMyAdmin export/import

Posted: 01 May 2013 03:05 PM PDT

I have to import data from a MySQL database using phpMyAdmin because that's the only interface my hosting provider supports.

I exported my database from my local machine using phpMyAdmin. After that I imported the script file to my host. All the data in the columns of BIT type has been changed to '1'. The hosting database version is 5.5.29.

Is it a phpMyAdmin problem, or MySQL version problem? How can I fix this?

SQL Server replication subscriptions marked as inactive

Posted: 01 May 2013 10:05 AM PDT

Is there any way to force SQL Server NOT to mark subscriptions as inactive, ever?

It happens sporadically when there are connection issues and I don't want to have to reinitialize the subscription every time.

Note, I'm not talking about the subscriptions being marked as expired...just as inactive.

Thank you.

Error trying to run SSIS Packages in SQL Server 2008 SQL Server Agent

Posted: 01 May 2013 02:48 PM PDT

I am trying to run a number of SQL Server 2008 stored procedures that are wrapped in SSIS packages consisting mainly of Execute SQL Tasks. They run fine in Visual Studio 2008 after deploying to a folder on the SQL Server server. The problem is when I created a job in SQL Server Agent and tried to run it. That is when I get the error:

Executing the query "EXEC ? = dbo.usp_Clear_All_JDAWG_Tables" failed with the following error: "The EXECUTE permission was denied on the object 'usp_Clear_All_JDAWG_Tables', database 'JDAWGD', schema 'dbo'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. End Error. DTExec: The package execution returned DTSER_FAILURE (1).

'usp_Clear_All_JDAWG_Tables' is the name of the first of many packages this job is trying to run. The stored procs themselves do not use parameters, but in the SQL Task I have commands like this to run the stored proc: EXEC ? = dbo.usp_Load_JDAWG_Tables_From_Persons_All_Agencies. Is there a better/different way to run a stored procedure in the SQL Task?
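Since the error is a missing EXECUTE permission for whatever account the Agent job step runs under, the direct fix is a grant along these lines (the login/user name is a placeholder):

USE JDAWGD;
GO
GRANT EXECUTE ON OBJECT::dbo.usp_Clear_All_JDAWG_Tables TO [DOMAIN\SqlAgentServiceAccount];
GO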

As a side note, I was able to create and successfully run an Agent job that directly executed the stored procedures that the packages run.

Thanks for any assistance.

Login failure when running a SSIS package from a SQL Server job

Posted: 01 May 2013 10:05 AM PDT

I have a SSIS package that migrates data from a Sybase database to SQL Server. The connection to the source database has been made through an ODBC connection.

If I run the package directly on my machine, it runs successfully; however, if I run it through a SQL Server job I have created on my local machine (where the destination database is), I get a login failure error:

An OLE DB record is available.
Source: "Microsoft SQL Server Native Client 10.0"
Hresult: 0x80004005
Description: "Login failed for user 'xxx\user1'.".

An OLE DB record is available.
Source: "Microsoft SQL Server Native Client 10.0"
Hresult: 0x80004005
Description: "Cannot open database "MySybaseDatabase" requested by the login.

I know that the job runs as the 'SQL Server Agent Service Account', which is in fact xxx\user1 and is different from the user that has been set up in the ODBC connection to access the source database, but I don't know why the ODBC connection login does not take precedence and get used instead of the agent service account.

I read some posts on different sites that talked about setting up a proxy, etc., but I don't know if that is relevant to my issue.

The connection to the source database has been set up via ODBC, but as the step was failing, I added the userId and password to it to force SQL Server to use the userId I want rather than the SQL Server Agent user, but it didn't help. (screenshot) Do you know how we can fix this login issue?

Thank you.

error in Execute SQL Task with parameter

Posted: 01 May 2013 10:06 AM PDT

I have an Execute SQL Task object in the Error Handlers section of a SSIS package. What it does is insert a record into a table using an OLE DB connection. One value in the insert statement should be sourced from a user variable.

When I try to run the task, I get the following error. I have searched the internet a lot and tried different things, but I still get the error message:

"The query failed to parse. Parameter Information cannot be derived from SQL Statements. Set parameter information before preparing command."

What I have set to the SQLStatement is:

INSERT INTO ErrorLog(ErrorDateTime, ErrorDescription) VALUES(GETDATE(),?)  

What I have in the Parameter Mapping section of the Task: (screenshot)

The SQLSourceType is "Direct input" and as there is no result to this query the ResultSet has been set to "None".

The ErrorDescription column in the target table is varchar(max).
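One workaround that is often suggested for this parse error with OLE DB connections, offered here only as a sketch, is to assign the ? parameter to a T-SQL variable first so the provider doesn't have to derive parameter metadata for the INSERT itself:

DECLARE @ErrorDescription nvarchar(max) = ?;

INSERT INTO ErrorLog (ErrorDateTime, ErrorDescription)
VALUES (GETDATE(), @ErrorDescription);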

Any idea how to fix this issue?

Thanks in advance.

Microsoft Visual Studio is Busy

Posted: 01 May 2013 10:08 AM PDT

I am working in Visual Studio 2008 on a SSIS package. It is really frustrating that I get the "Microsoft Visual Studio is Busy" message quite often. It happens almost every time I try to open a data source, destination, etc.

From what I can see, it seems VS tries to save items. Is there any way to avoid this auto-save? Generally, how can we get rid of this message and the interruptions?

If I SET ARRAYSIZE 1 in SQL*Plus, the records are still fetched in pairs. Why?

Posted: 01 May 2013 06:39 PM PDT

I thought about writing a simple tail -f like utility to "trace" the progress of some figures within the database:

create or replace function tail_f return varchar2_tab pipelined as
    n number;
begin
    loop
      exit when ...

      select count(*) into n from ... where ...;

      pipe row(sysdate || ' n= ' || n);

      dbms_lock.sleep(60);
    end loop;

    return;
end tail_f;

And then I'd like to select * from table(tail_f) in SQL*Plus.

In order to fetch the rows one by one, I SET ARRAYSIZE 1. Yet, the records (except the first one) are fetched in pairs.

Is there an explanation for this and how can I get the records as soon as one is piped?
