Sunday, March 3, 2013

[how to] Ubuntu 12, POSTGRES 9.2, POSTGIS 2.0



Ubuntu 12, POSTGRES 9.2, POSTGIS 2.0

Posted: 03 Mar 2013 08:38 PM PST

At the current moment, March 4th 2013, can PostGIS 2.0 be installed with PostgreSQL 9.2?

I checked their website and, to my understanding, it is not possible...

http://trac.osgeo.org/postgis/wiki/UsersWikiInstall#PostGIS2.0  

I hope that's not the case. Can anyone point me to instructions for installing PostGIS 2.0 on PostgreSQL 9.2 on Ubuntu?
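
For what it's worth, my understanding is that once PostGIS 2.0 is built or packaged against 9.2, enabling it in a particular database goes through the extension mechanism; a minimal sketch of that last step (run in the target database as a superuser):

    -- enable PostGIS 2.0 in the current database and confirm the version
    CREATE EXTENSION postgis;
    SELECT PostGIS_Version();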

Thanks

Unable to restore dump file in mysql5.5

Posted: 03 Mar 2013 08:16 PM PST

I'm trying to restore a MySQL 5.5 dump file from the mysql command prompt using the command given below:

mysql> -u -p <

But it is giving the below error:

ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '-u ro ot -p root123 mydb< "D:\mysql_dump\backup_2013_01_mydb.sql"' at line 1

Please help. I've tried it on both WinXP and Win7. Please also explain how to do the same using MySQL Workbench.
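
From what I can tell so far, the redirection form shown in the error (mysql -u ... mydb < "D:\...") is OS shell syntax and has to be run from the Windows command prompt, not at the mysql> prompt. From inside the client, the equivalent appears to be the SOURCE command; a sketch using the path from the error message above (forward slashes avoid backslash-escaping issues):

    -- at the mysql> prompt, after selecting the target schema with: USE mydb;
    SOURCE D:/mysql_dump/backup_2013_01_mydb.sql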

Thank You

Database describing databases and references to tables

Posted: 03 Mar 2013 05:23 PM PST

Context

The company I work for makes a number of different software products, each with its own database. Since they all deal with related things (accounting), a good amount of data is theoretically the same among several of them, and if the user owns more than one there are options to import the appropriate data from the others.

In general each database is "structured" differently; most tables use absolute positions and numbered fields (reflecting the software's data entry forms), and some are much worse.
So the importing feature is not trivial.
At the moment the references to the locations of the data in the other products are hard-coded in the source code, but in order to facilitate both the importing and a possible future unification I'm making a database of them.

I had the feeling that other systems such as RDF could be more suited to the task, but I had to do it with an Access database. I think the matter is problematic for any SQL database, though.

Database structure

The database I made is divided into data definitions and, under certain conditions, associations of a location (in some cases a cell at an absolute position, in others a field) in one of the products' databases with a datum or class of data.

I decided to make a table in the database for each general type of data, with fields indicating its properties so as to identify an exact, specific datum (or a class of data if some field is left null), except for properties common to all data, such as the period to which it refers. I could have just made a table listing each possible exact datum, but it is a lot more convenient to be able to select based on the properties. The grouping of data is, subjectively, at the most general reasonable level.

I will use code to operate on the various products' databases; I don't need to make cross-database SQL queries.


Question

The issue now is that a certain location in a target database may be associated with any one of the types of data, and thus with any one of the data definition tables.

Of course, in terms of information I'd just need a field identifying the table and one for the record ID, but as far as I can tell, neither normal SQL queries nor constraints can target different tables based on the current contents of a cell. SQL queries could probably still be written with a conditional join for every possible table, which is not very convenient, but I could actually do without them and use only code. There seems to be no way to make foreign key constraints, on the other hand; I could also do without them, but I'd prefer not to.

I have figured out different alternatives to face the problem, each with serious disadvantages. I'd like you to tell me if you know of any better method, or which one you think would be the least bad:

  1. The first is what I just described: one field identifying the table and one the key.

  2. I might make one foreign key field for each table. SQL queries would still be very complex, but I would have referential integrity. The big problem, though, is that I have a limit of 255 fields, and it is very likely that, if not now then at some point, there will be more than 255 types of data (and thus tables) that need to be referred to. I know I could make a child table with the same fields at that point, but the 255-field limit also applies to views, so I would then need to change the queries and the code, or add a field now ahead of time, establish a convention for the child table name, and account for that in the code. I would like to allow the users to insert new kinds of data and make the associations, so any solution should not require manual adjustment of the database and the code that refers to it at a future time.

  3. One table for all the data types, with a lot of empty cells. This is actually impossible; I would need far more than 255 fields, unless of course I use the child table approach here as well.

  4. Going back to the first option, with the single field indicating the table, I figured I could enforce referential integrity in one table by inserting a "table in the middle" listing all the possible specific data, with one record for every record in each table; the absence of referential integrity would be "pushed" to this table, but it would be a little easier, since I'd only have to be careful when inserting a new datum, not every time I need to reference it (see the sketch after this list). I will probably have 5 references to each datum on average, though, so it is only a relative improvement.

  5. Finally, since every option has its drawbacks, why not go the whole hog and just make a single lookup table that is easier to manage, and let the code do all the checks?
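
A rough Access SQL sketch of option 4's "table in the middle", just to make it concrete (all names are hypothetical):

    -- one row here for every datum in any data-definition table;
    -- locations then reference DatumKey with an ordinary foreign key
    CREATE TABLE DatumRegistry (
        DatumKey     COUNTER PRIMARY KEY,
        SourceTable  TEXT(64),
        SourceRecord LONG
    );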


By the way, how can it be that it's not possible to refer to tables in records? Wouldn't it just take adding a TableRef field type?


I hope I've made myself clear. Thank you for your help.


Note: this was previously posted at http://stackoverflow.com/questions/15192645/database-describing-databases-and-references-to-tables

implementing DAC and MAC in Oracle

Posted: 03 Mar 2013 04:49 PM PST

I want an example that might help me implement the dynamic modes of an access-mode model:

Access modes are:

  • use (1) - Allows the subject to use the object without modifying the object
  • read (2) - Allows the subject to read the contents of the object
  • update (3) - Allows the subject to modify the contents of the object
  • create (4) - Allows the subject to add instances to the object
  • delete (4) - Allows the subject to remove instances of the object

Dynamic access modes are:

  • grant (1) - Allows the subject to grant any static access mode to any other subject
  • revoke (1) - Allows the subject to revoke a granted static access mode from a subject
  • delegate (2) - Allows the subject to grant the grant privilege to other subjects
  • abrogate (2) - Allows the subject to grant the revoke privilege to other subjects

I was thinking about adopting the Linux implementation of DAC and MAC. Can you think of a simple way?
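
For the dynamic modes, a hedged sketch of how grant/revoke/delegate might map onto standard Oracle object grants (the table and user names are made up; abrogate has no direct equivalent here):

    -- grant and revoke static modes (hypothetical table t and user alice)
    GRANT SELECT, UPDATE ON t TO alice;
    REVOKE UPDATE ON t FROM alice;

    -- delegate: WITH GRANT OPTION lets alice pass the privilege on to other subjects
    GRANT SELECT ON t TO alice WITH GRANT OPTION;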

Versioning data in Databases

Posted: 03 Mar 2013 06:52 PM PST

Is there any ready-made mechanism in Oracle, SQL Server, PostgreSQL, or an O/R framework (C#) which can version information in a database? By versioning information I mean, for example: I have a table Persons(id, version, isEnable, enableFrom, enableTo, name, surname), and any change to the column name or surname creates a new record with an incremented version and adjusted isEnable, enableFrom and enableTo.
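
To make the intent concrete, a minimal, dialect-agnostic sketch of the pattern described above, assuming the primary key is (id, version); the names and values are placeholders:

    -- "update" a person's name by closing the current version and inserting a new one
    UPDATE Persons
    SET isEnable = 0, enableTo = CURRENT_TIMESTAMP
    WHERE id = 42 AND isEnable = 1;

    INSERT INTO Persons (id, version, isEnable, enableFrom, enableTo, name, surname)
    SELECT id, version + 1, 1, CURRENT_TIMESTAMP, NULL, 'NewName', surname
    FROM Persons
    WHERE id = 42
      AND version = (SELECT MAX(version) FROM Persons WHERE id = 42);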

mysqldump: flush-privileges option

Posted: 03 Mar 2013 04:50 PM PST

I'd like to get some clarification about the --flush-privileges option for mysqldump.

Here's the description of the option from the MySQL docs:

--flush-privileges
    Send a FLUSH PRIVILEGES statement to the server after dumping the mysql database. This option should be used any time the dump contains the mysql database and any other database that depends on the data in the mysql database for proper restoration.

When it says that it sends the flush statement after dumping the database, I read that to mean that the data, schema, etc. are dumped into the backup file and then the flush statement is sent to the database that was just dumped (after it was dumped).

I was wondering what dumping the data, etc. did that required the privileges to be flushed, so I started searching for an explanation to be sure when and why to use it.

As I read various answers, it occurred to me that it would make sense if the flush statement were included in the dump file itself. Then, after the contents of the file were loaded into a database, the flush privileges statement would run to update the settings once the new info had been imported.

So, how does it work?
A) Flush the source database after dumping the data to a file? If so, why is this necessary?
B) Flush the destination database after importing the contents of the dump file?
C) Something other than the possibilities I've described?

How does MySQL determine the 'index first key' and 'index last key' with indexes?

Posted: 03 Mar 2013 07:44 PM PST

In other words, how does MySQL find the result range with indexes on the first step when it's executing a select query?

Adding an ID column to a join table (many-to-many) so that it can be targeted

Posted: 03 Mar 2013 08:23 AM PST

I'm building a web application for a company that sells cars. They buy leads (personal details of people interested in buying cars) and try to turn them into customers. The company has a number of sales persons, and they want each lead to be distributed to 5 of their salespersons randomly.

http://i.stack.imgur.com/12k35.png

However, every sales person must have his own individual copy of the lead. This is because sales persons work on a commission basis, so they need to be able to:

  • edit/add a lead's details without the other sales persons seeing it
  • manage tasks for each lead without the other sales persons seeing those tasks

The only option I see for this is to add an id (PRIMARY KEY) column to the salesperson_has_lead table, and change its salesperson_id and lead_id columns to foreign key columns.

http://i.stack.imgur.com/UPb27.png

However, I always read that it's wrong to add an ID (PRIMARY KEY) column to a many-to-many join table. Does anyone see any flaws in the solution pictured above, or know of a better option? Any help will be very much appreciated!
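
For reference, a hedged sketch of the proposed join table in MySQL (the referenced table names salesperson and lead are assumptions based on the diagrams; the unique key is optional):

    -- surrogate id plus foreign keys; the unique key still prevents duplicate assignments
    CREATE TABLE salesperson_has_lead (
        id             INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        salesperson_id INT UNSIGNED NOT NULL,
        lead_id        INT UNSIGNED NOT NULL,
        UNIQUE KEY uq_sp_lead (salesperson_id, lead_id),
        FOREIGN KEY (salesperson_id) REFERENCES salesperson (id),
        FOREIGN KEY (lead_id)        REFERENCES lead (id)
    );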

How to grant permission for an administrator account in MS SQL Server 2012?

Posted: 03 Mar 2013 07:09 PM PST

I installed MS SQL Server 2012 on my laptop. I am trying to run a script to create a simple database.

SQL server version: 11.0.3128

SSMS starting parameters:

server type: database engine
server name: BOGLENETWORK\SQLEXPRESS
Authentication: Windows Authentication
user name: boglenetwork\admin121

I did some online searching and it seems like the account I used doesn't have the proper permissions to write to the folder in which the .mdf and .ldf files should be held. I would like to know how to check the account permissions and make the necessary modifications so the files can be created in the destination folder. Thank you for your advice and help!

I found a forum post on a similar issue. I received the following message:

Msg 5133, Level 16, State 1, Line 1
Directory lookup for the file "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLEXPRESS\MSSQL\DATA\saledat.mdf" failed with the operating system error 3(The system cannot find the path specified.).
Msg 1802, Level 16, State 1, Line 1
CREATE DATABASE failed. Some file names listed could not be created. Check related errors.

Here is the code I am using to create the database:

USE master;
GO
CREATE DATABASE sales
ON
( NAME = sales_dat,
    FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLEXPRESS\MSSQL\DATA\saledat.mdf',
    SIZE = 10,
    MAXSIZE = 50,
    FILEGROWTH = 5 )
LOG ON
( NAME = sales_log,
    FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLEXPRESS\MSSQL\DATA\salelog.ldf',
    SIZE = 5MB,
    MAXSIZE = 25MB,
    FILEGROWTH = 5MB ) ;
GO
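
One thing that may be worth checking, offered as a hedged aside: operating system error 3 means the directory itself was not found, and the FILENAME above uses an instance folder named MSSQL11.MSSQLEXPRESS, which may not match the folder the SQLEXPRESS instance actually created. A quick way to see where the instance really keeps its data files, and therefore which path to use in FILENAME:

    -- where do this instance's existing data files live?
    SELECT physical_name
    FROM sys.master_files
    WHERE database_id = DB_ID('master');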

Unable to configure Informatica sources with ODBC 64 bit drivers

Posted: 03 Mar 2013 06:31 AM PST

I have an Oracle database installed on my local system (64 bit) and am trying to use Informatica as an ETL tool.

My problem is that the database itself is working fine (I am able to connect to it via SQL*Plus and run queries), and I was able to configure a domain for Informatica, but I am not able to import sources from my database because I cannot configure a System DSN for it.

The entry that's supposed to show up (Oracle in Oradb11g_home1) doesn't appear. However, I was able to find it when I configured it through my system's ODBC options; it just isn't available in the Informatica ODBC options.

Any idea how to configure this?

Problem with a surrogate key?

Posted: 03 Mar 2013 07:55 AM PST

I am working on an application for student management. For the students table I used a surrogate key (an auto-increment id). The problem I face with the surrogate key is that when the user wants to show information about a student, he must enter the id, but the user only knows the student's name (and I can't use the name as a primary key because it's not unique). How can I solve this problem?
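
A sketch of the usual pattern, just to make the question concrete (column names other than id and name are made up): search by the non-unique name first, then let the user pick the row whose id is used from then on:

    -- list candidate students matching the entered name; the user then selects one id
    SELECT id, name, date_of_birth, class
    FROM students
    WHERE name LIKE 'Smith%'
    ORDER BY name, id;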

Process sessions foreach minute

Posted: 03 Mar 2013 12:54 AM PST

I have a table that stores sessions. The columns are: ID, Start (Timestamp) and Stop (Timestamp, can be NULL).

It's for online radio. A listener connects to the stream, and a row is created. Once they stop listening, Stop is updated to the current time.

Currently, it's about 80K rows per day.

I'd like to process the sessions once a day by determining the maximum number of listeners for each minute, and push that data into a different table: Timestamp, ListenerCount (UNSIGNED SMALLINT).

I'm not super concerned with speed, just with working out how I can process this data within MySQL. I'm assuming I'd use a stored procedure (not much experience there), loop over each minute between MIN(Start) and MAX(Stop), and do a COUNT() of the sessions that overlap each minute (x-minute:00 to x-minute:59).

Does this sound viable? Is there a better way?
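
To make the idea concrete, a rough sketch of the per-minute count I have in mind (the table name and timestamps are placeholders; a procedure would loop this over every minute between MIN(Start) and MAX(Stop) and insert the results into the summary table):

    -- sessions that overlap the minute starting at 12:34:00
    SELECT '2013-03-03 12:34:00' AS MinuteStart,
           COUNT(*)              AS ListenerCount
    FROM sessions
    WHERE Start < '2013-03-03 12:35:00'
      AND (Stop IS NULL OR Stop >= '2013-03-03 12:34:00');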

Thanks for your time.

What queries is Oracle executing, how frequently, and time-taken?

Posted: 03 Mar 2013 06:53 AM PST

I have a (Java) web-application, that of course executes many queries (selects, inserts, updates, deletes) to the Oracle Database.

I want to find out the following for last 7 days:

    SQLQuery            Times-Executed   Average-Resp-Time   Average-Rows-Returned
    ===============================================================================
    SELECT whatever     981              330ms               1201
    UPDATE whatever     45               99ms                0

Is this something Oracle can give me? If so, how can I get it? If not Oracle, should I look at the JDBC driver?
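
One possible source for numbers like these, offered as a hedged sketch rather than a definitive answer, is the cursor cache (V$SQL); note it only covers statements still cached, not a guaranteed 7-day history, which would normally come from AWR or Statspack:

    -- per-statement counters from the shared pool; elapsed_time is in microseconds
    SELECT sql_text,
           executions,
           ROUND(elapsed_time / NULLIF(executions, 0) / 1000) AS avg_elapsed_ms,
           ROUND(rows_processed / NULLIF(executions, 0))      AS avg_rows
    FROM v$sql
    ORDER BY executions DESC;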

Need logic to collate data

Posted: 03 Mar 2013 04:11 AM PST

I have a table in the below structure.

phone        bigint
answer_time  datetime

I need to group all phone numbers, show the total number of attempts for each, and break that down into the number of times answered and the number of times not answered. The condition is answer_time = '0000-00-00 00:00:00' for not answered, and answer_time > '0000-00-00 00:00:00' for answered. Any number of calls may have been made to a particular number. Any suggestions or approaches would be helpful; I'm terribly stuck on this.

The result should be :

+------------+----------+--------------+----------+
| phone      | count(*) | NOT_ANSWERED | ANSWERED |
+------------+----------+--------------+----------+
| 7260513    |        2 |            1 |        1 |
+------------+----------+--------------+----------+

In this, at the first attempt the call is answered at the second it is not. The attempts may range max to 16 times.

FOR MAX

mysql> select phone, answer_time from cdr_test;
+------------+---------------------+
| phone      | answer_time         |
+------------+---------------------+
|   31389966 | 0000-00-00 00:00:00 |
|   31389966 | 0000-00-00 00:00:00 |
|   31389966 | 0000-00-00 00:00:00 |
|   31389966 | 0000-00-00 00:00:00 |
|   31389966 | 0000-00-00 00:00:00 |
|   31389966 | 0000-00-00 00:00:00 |
|   31389966 | 0000-00-00 00:00:00 |
|   31389966 | 0000-00-00 00:00:00 |
|   31389966 | 0000-00-00 00:00:00 |
|   31389966 | 0000-00-00 00:00:00 |
|   31389966 | 0000-00-00 00:00:00 |
|   31389966 | 0000-00-00 00:00:00 |
|   31389966 | 0000-00-00 00:00:00 |
+------------+---------------------+
13 rows in set (0.00 sec)

mysql> select phone, Count(*),
       Count(CASE WHEN Time(answer_time) = 0 THEN 1 end) AS NOT_ANSWERED,
       Count(CASE WHEN Time(answer_time) > 0 THEN 1 end) AS ANSWERED
       FROM cdr_test
       GROUP BY phone;
+------------+----------+--------------+----------+
| phone      | Count(*) | NOT_ANSWERED | ANSWERED |
+------------+----------+--------------+----------+
|   31389966 |       13 |            0 |        0 |
+------------+----------+--------------+----------+
1 row in set (0.00 sec)
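
For comparison, a hedged sketch of the same aggregation written against the zero-datetime condition exactly as stated above (rather than the Time() comparison used in the attempt):

    -- SUM over a boolean expression counts the rows where it is true
    SELECT phone,
           COUNT(*)                                 AS attempts,
           SUM(answer_time = '0000-00-00 00:00:00') AS NOT_ANSWERED,
           SUM(answer_time > '0000-00-00 00:00:00') AS ANSWERED
    FROM cdr_test
    GROUP BY phone;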

"custom archiver out of memory" error when restoring large DB using pg_restore

Posted: 03 Mar 2013 09:34 AM PST

I'm trying to restore a local copy of a 30GB DB (with only one table) dumped using the Heroku wrappers (described here) with pg_restore. The Heroku DB is running 9.1.7, and I have had the same result locally using 9.1.5 and 9.1.7 under OS X 10.8.

Using the command:

pg_restore --clean --no-acl --no-owner -d db_name -U username -h localhost -v db.dump  

I get the following output on the command line ("cleaning" commands omitted) after 20 minutes and seeing the DB size grow to about 30 GB:

pg_restore: creating SCHEMA public
pg_restore: creating COMMENT SCHEMA public
pg_restore: creating EXTENSION plpgsql
pg_restore: creating COMMENT EXTENSION plpgsql
pg_restore: creating TABLE video_data
pg_restore: creating SEQUENCE video_data_id_seq
pg_restore: creating SEQUENCE OWNED BY video_data_id_seq
pg_restore: executing SEQUENCE SET video_data_id_seq
pg_restore: creating DEFAULT id
pg_restore: restoring data for table "video_data"
pg_restore(11938) malloc: *** mmap(size=18446744071605936128) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
pg_restore: [custom archiver] out of memory
pg_restore: *** aborted because of error

Turning up log level to DEBUG1 reveals the error:

...(many of these checkpoints hints)...
LOG:  checkpoints are occurring too frequently (15 seconds apart)
HINT:  Consider increasing the configuration parameter "checkpoint_segments".
ERROR:  extra data after last expected column
CONTEXT:  COPY video_data, line 463110542: "3599794500:59:01497 797942  2012    23422-08-96 2400892 08.794219   08 0617 2342118769-08 72    115 2046225 1..."
STATEMENT:  COPY video_data (id, video_id, views, favorites, comments, created_at, likes, dislikes) FROM stdin;

I have tried increasing SHMMAX and SHMALL to 2GB, checkpoint_segments to 30, as well as upgrading PG to match patchlevels. A search reveals a small handful of people with this problem several years back, and no resolution.

Any recommendations?

SQL Server - Separating data, log, and TempDB files on a SAN

Posted: 03 Mar 2013 02:30 PM PST

I have a SQL Server connected to a SAN. Our new storage vendor recommends that all LUNs span the entire disk array, which is RAID5. I would normally request 3 separate LUNs (data, log, and TempDB) from the SAN administrator, but given the new vendor's recommendation, is there any point in creating separate LUNs? Or would I see the same performance if everything was in one LUN since it would span all disks anyway?

Great article here, but doesn't quite address my exact situation: http://www.brentozar.com/archive/2008/08/sql-server-on-a-san-dedicated-or-shared-drives/

The login already has an account under a different user name

Posted: 03 Mar 2013 06:56 AM PST

When I execute this SQL:

USE ASPState
GO
IF NOT EXISTS(SELECT * FROM sys.sysusers WHERE NAME = 'R2Server\AAOUser')
CREATE USER [R2Server\AAOUser] FOR LOGIN [R2Server\AAOUser];
GO

I get the following error:

The login already has an account under a different user name.

How do I know what this different user name is for my login account?
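
Since login-to-user mapping is done by SID, one way to find the existing user is to join the database principals to the server principals on sid; a sketch (run inside the ASPState database, with the login name taken from the script above):

    -- which database user is already mapped to this login?
    SELECT dp.name AS database_user_name, sp.name AS login_name
    FROM sys.database_principals AS dp
    JOIN sys.server_principals  AS sp ON dp.sid = sp.sid
    WHERE sp.name = 'R2Server\AAOUser';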

compare the same table

Posted: 03 Mar 2013 09:31 AM PST

I am facing an issue with the following query. When I execute it, it takes a very long time. I broke the query into two parts and compared the results with a shell script, but is there any way to do it with a single query?

Any suggestion welcome.

select distinct substring(mobile_num,3,12)
from mobile
where status = 'INACTIVE'
  and date(unsub_date) >= DATE(CURDATE() - INTERVAL 90 DAY)
  and mobile_num not in (select distinct mobile_num from mobile where status = 'ACTIVE')
order by updtm;

mobile_num    varchar(12)
keyword       varchar(45)
sub_date      datetime
unsub_date    datetime
circle_name   varchar(45)
type          varchar(45)
status        varchar(45)
operator      varchar(45)
act_mode      varchar(45)
deact_mode    varchar(45)
id            bigint(20)
updtm         timestamp
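
One rewrite that is often suggested for this shape of query, given here only as a hedged sketch over the same table and columns: replace the NOT IN subquery with an anti-join, which MySQL frequently handles better on large tables:

    -- inactive numbers that have no ACTIVE row at all
    select distinct substring(i.mobile_num, 3, 12)
    from mobile i
    left join mobile a
           on a.mobile_num = i.mobile_num
          and a.status = 'ACTIVE'
    where i.status = 'INACTIVE'
      and date(i.unsub_date) >= DATE(CURDATE() - INTERVAL 90 DAY)
      and a.mobile_num is null
    order by i.updtm;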

Database Restoration Issue

Posted: 03 Mar 2013 06:13 AM PST

I am new with postgresql.

When I attempt to restore a .sql.gz database dump with psql, the restore does not complete when the database is too large (32 GB, for example).

Because of this, I end up with empty tables again. How can I resolve this issue?

psql -U postgres -d dbname -f <filename>  

SQL Server Select Count READ_COMMITTED_SNAPSHOT QUESTION

Posted: 03 Mar 2013 02:31 PM PST

I seem to be getting a lot of deadlocks when doing select count(*) on a particular table. I have already changed all the required parameters and made them use row-level locking only.

I've also changed the database to use READ_COMMITTED_SNAPSHOT isolation.

However, it seems that running a select count(*) where column = ? on the table still triggers deadlocks or locks on the table.

Am I correct that the select count(*) should only be accessing intermediate rows? It doesn't seem that way, though, and I'm still encountering deadlocks. Proper indexing would probably help.

The question is: does SQL Server 2008 R2 place a shared lock on the table during select count(*) even when READ_COMMITTED_SNAPSHOT is set to ON?
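
As a hedged aside, one quick sanity check is to confirm the setting actually took effect on the database in question:

    -- 1 means read committed snapshot isolation is enabled for this database
    SELECT name, is_read_committed_snapshot_on
    FROM sys.databases
    WHERE name = DB_NAME();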

Thanks

Viewing MySQL Account Resource Limits

Posted: 03 Mar 2013 04:31 PM PST

Is there any way of viewing the remaining resources allocated to an account? I set up an account that's allowed 7,200 queries an hour. At any point, could I then run a query to find out how many queries it is still allowed?

MySQL must be storing this information somewhere, as FLUSH USER_RESOURCES; will reset the counters. However, I tried a few variants such as SHOW USER_RESOURCES and they don't seem to display anything. I've also hunted around the information_schema and mysql tables.

Is it just not possible to retrieve that information?
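
For what it's worth, the configured limits (though not the live remaining counters) are at least visible in the grant tables; a sketch with a placeholder account name:

    -- per-account limits as configured, not the in-memory usage counters
    SELECT user, host, max_questions, max_updates, max_connections, max_user_connections
    FROM mysql.user
    WHERE user = 'limited_user';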

sql server database sharding - what to do with common data / non sharded data

Posted: 03 Mar 2013 03:31 PM PST

We have a very large, enterprise-level database. As part of our business model, all web users hit our web servers at the same time each month, which in turn hammers our SQL box. The traffic is very heavy and continues to grow as the company grows. SQL proc optimization has been performed and the hardware has already been scaled up to a very high level.

We are looking to shard the database now to ensure that we can handle company growth and future loads.

We have decided what particular data should be sharded. It is a subset of our database which is highly utilized.

However, my question is regarding the non-sharded data which is common/universal. Examples of data like this might be an Inventory table, or possibly an Employee table, a user table, etc.

I see two options to handle this common/universal data:

1) Design 1 - Place the common/universal data in an external database. All writes will occur here. This data will then be replicated down to each shard, allowing each shard to read this data and inner join to it in T-SQL procs.

2) Design 2 - Give each shard its own copy of all common/universal data. Let each shard write locally to these tables and use SQL merge replication to update/sync this data on all other shards.

concerns about design #1

1) Transactional issues: if you have a situation in which you must write or update data in a shard and then write/update a common/universal table in one stored proc, for instance, you will no longer be able to do this easily. The data now exists on separate SQL instances and databases. You may need to involve MS DTS to see if you can wrap these writes into a transaction, since they are in separate databases. Performance is a concern here, and possible rewrites may be involved for procs that write to both sharded and common data.

2) A loss of referential integrity. It's not possible to enforce cross-database referential integrity.

3) Recoding large areas of the system so that it knows to write common data to the new universal database but read common data from the shards.

4) Increased database trips. As in #1 above, when you run into a situation in which you must update both sharded data and common data, you are going to make multiple round trips to accomplish this, since the data is now in separate databases. There is some network latency here, but I am not as worried about this issue as about the above three.

concerns about design #2

In design #2, each shard gets its own copy of all common/universal data. This means that all code that joins to or updates common data continues to work/run just like it does today. There is very little recoding/rewriting needed from the development team. However, this design completely depends on merge replication to keep the data in sync across all shards. The DBAs are highly skilled and are very concerned that merge replication may not be able to handle this, and that should merge replication fail, recovery from the failure is not great and could impact us very negatively.

I am curious to know if anyone has gone with design option #2. I am also curious to know if I am overlooking a 3rd or 4th design option that I do not see.

Thank you in advance.

set the clock on mysql

Posted: 03 Mar 2013 07:31 AM PST

I would like to know where MySQL gets its current time. I am updating a table using:

update `table1` set `cola` = 1, `colb` =2 , `updated` = NOW();  

The time is 2 hours early. The time on the server is completely different, so I'm a little confused...

I tried

 SELECT @@global.time_zone, @@session.time_zone;  

but I get SYSTEM as the result.
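
A hedged sketch of what can be checked or overridden at the session level; SYSTEM means MySQL is inheriting the operating system's time zone, so NOW() follows whatever that is set to:

    -- compare what the session thinks the time is, then force an explicit offset if needed
    SELECT @@global.time_zone, @@session.time_zone, NOW(), UTC_TIMESTAMP();
    SET time_zone = '+02:00';   -- placeholder offset; named zones require the time zone tables to be loaded
    SELECT NOW();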

Oracle server connection reset by peer socket write error

Posted: 03 Mar 2013 02:31 AM PST

Full disclosure, I'm not a database admin, but if anyone is able to help me, it would be you guys.

Our Oracle 11g database server stopped working; I'm not aware of anything that could have caused it, it was out of the blue. When I try to connect via SQL Developer I get either 'connection reset by peer: socket write error' or 'Input/Output error: connection reset'.

I've tried restarting it, checked whether the Windows services are running, and went through every item in the configuration that I could find, but nothing helped. It seems that the listener is unable to start (I've tried adding a new listener with identical settings and also got the socket write error).

Since I've made no changes to the configuration, I'm perplexed as to what the issue could be. Does anyone have any ideas about what could be causing it? And if not, can someone post a link to something like SQL Server's "Repair Installation", which I couldn't find?

Any help is appreciated.

EDIT: I was asked for the following:

Output for the lsnrctl status:

    LSNRCTL for 32-bit Windows: Version 11.2.0.1.0 - Production on 03-JUL-2012 13:30:53

    Copyright (c) 1991, 2010, Oracle.  All rights reserved.

    Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
    TNS-12571: TNS:packet writer failure
     TNS-12560: TNS:protocol adapter error
      TNS-00530: Protocol adapter error
       32-bit Windows Error: 54: Unknown error

And according to the lsnrctl start, the listener is already running

How to break down query into sub-queries for simplicity?

Posted: 03 Mar 2013 11:32 AM PST

I'm trying to learn Relational Algebra using PRODUCT to join tables.

Could someone show me how to convert the following SQL statement into relational algebra, using PRODUCT for the multiple tables and breaking the query down step by step?

i.e. something like:

 R1 -> PROJECT Emp_ID (Employee);
 R2 -> PROJECT Income ( SELECT Employee ID = R1 (Salary));

This is because I'm finding it hard to understand how the statements are broken down and then put back together in one RA query using PRODUCT. Here is the SQL code:

SELECT name
FROM depart d, course c, lesson l
WHERE d.dname = l.dname
AND c.cnr = l.cnr
AND c.cname = 'Course'
AND d.city = 'London';

SQL Server to Oracle Export - Data conversion error

Posted: 03 Mar 2013 10:31 AM PST

I have a rather weird problem with exporting a bunch of tables from SQL Server 2005 to Oracle database. I have SQL Agent job that deletes Oracle tables, recreates them and then exports the data using linked server and OPENQUERY. The entire script is here.

My problem is that the job fails while exporting data for the last table, PSRPT_TEAM_ROLES_PSFT. The error message in the log files says the job failed due to a data conversion error: Msg 8114, Sev 16, State 5, Line 4: Error converting data type varchar to numeric. [SQLSTATE 42000]. I have identified the column that is likely causing this error: BO_ID, which is DECIMAL(31,0) at the source and exported as NUMERIC(31,0) at the destination. When I alter the destination column to NVARCHAR2 and export, the job doesn't fail, but the Oracle guys told me that it significantly increased the size of the table. The weird part is that when I run the export query manually, even with the datatype NUMERIC(31,0) at the destination, I see no error. So I guess my questions are:

  • Am I missing something obvious?
  • Why does this table export fail when run as a SQL agent job and work perfectly when run manually?

Inserting and updating and selecting at thousands of times per second

Posted: 03 Mar 2013 03:59 PM PST

I have a table that gets inserted, updated and selected from thousands of times within one second. I am having deadlock issues though.

  1. The database has 2-5 simultaneous 1000+ row inserts using Linq to Sql.
  2. 40 times per second, there is also a select statement from that table, and if a condition is true (95% of the time it is), an update happens with code similar to this:

    -- parameter and variable types were missing or invalid in the original; the ones below are assumptions
    create procedure AccessFile (@code varchar(50), @admin int) AS

    declare @id int, @accessed datetime, @file varchar(255)

    select @id = Id, @accessed = accessed, @file = [file] from files where code = @code

    IF @admin <> 0 IF @accessed is null begin

    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

    update files set accessed = getdate() where id = @id

    SET TRANSACTION ISOLATION LEVEL READ COMMITTED

    end

    select @id as Id, @file as [File]

It seems as though it is the updates conflicting with the inserts that are causing the deadlocks.

The example is a 1-to-1 match with the actual stored procedure; the only difference is the names. Points 1 and 2 above hold regardless of the stored procedures' names.
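
One variation that is sometimes suggested for this read-then-update pattern, offered purely as a hedged sketch and not as the original procedure: take an update lock on the initial SELECT so the later UPDATE does not have to convert a shared lock, which is a common deadlock shape:

    -- inside the procedure, replace the plain SELECT with a hinted one
    select @id = Id, @accessed = accessed, @file = [file]
    from files with (updlock, rowlock)
    where code = @code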
