Wednesday, March 27, 2013

[how to] Providing development access to the database [duplicate]

Providing development access to the database [duplicate]

Posted: 27 Mar 2013 06:56 PM PDT

This question is an exact duplicate of:

I am beginning a project that will involve inserts and updates to a SQL Server database from a website. I maintain the SQL Server database and am providing access to it.

My plan is to create a user with read/write access and provide the outsourced developers with a connection string that gives them access to views that I create. Please let me know if this falls outside normal development expectations or if anyone has recommendations.

Providing access to the SQL server database for outside developers

Posted: 27 Mar 2013 08:52 PM PDT

I am beginning a project that will involve inserts and updates to a SQL Server database from a website. I maintain the SQL Server database and am providing access to it.

My plan is to create a user with read/write access and provide the outsourced developers with a connection string that gives them access to views that I create. Please let me know if this falls outside normal development expectations or if anyone has recommendations.
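
For illustration only, the kind of scoped access I have in mind would look roughly like this (the login, password, and view names are placeholders, not the real ones):

-- a minimal sketch: one SQL login limited to a handful of views
CREATE LOGIN vendor_dev WITH PASSWORD = 'StrongPasswordHere!';
CREATE USER vendor_dev FOR LOGIN vendor_dev;

-- expose only the views the outside developers need
GRANT SELECT, INSERT, UPDATE ON dbo.vw_Orders TO vendor_dev;
GRANT SELECT ON dbo.vw_Customers TO vendor_dev;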

Thank you

No network ACL, yet my user can successfully connect to external server

Posted: 27 Mar 2013 05:57 PM PDT

This one's had both me and our dba scratching our heads.

We have an Oracle 11.1.0.7.0 instance, DEV, with a user who can successfully connect to our ldap server, e.g.:

DECLARE
  l_session DBMS_LDAP.session;
BEGIN
  l_session := DBMS_LDAP.init('ldap.host.bla.bla', 389);
END;
/

PL/SQL procedure successfully completed

We have another Oracle instance, 11.2.0.3.0 called UAT, with the same user, same set up, but when they run the above code, it raises an exception from DBMS_LDAP:

ORA-24247: network access denied by access control list (ACL)  

In both instances, if I run the following queries, no rows are returned:

select * from dba_network_acls;
select * from dba_network_acl_privileges;

To fix this problem in UAT, we've created a suitable network ACL using DBMS_NETWORK_ACL_ADMIN.create_acl. Now, UAT works fine.
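
For reference, the ACL setup we used was along these lines (the ACL file name, principal, and host are placeholders rather than the exact values):

BEGIN
  DBMS_NETWORK_ACL_ADMIN.create_acl (
    acl         => 'ldap_acl.xml',
    description => 'Allow LDAP lookups',
    principal   => 'APP_USER',
    is_grant    => TRUE,
    privilege   => 'connect');

  DBMS_NETWORK_ACL_ADMIN.assign_acl (
    acl        => 'ldap_acl.xml',
    host       => 'ldap.host.bla.bla',
    lower_port => 389,
    upper_port => 389);

  COMMIT;
END;
/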

What's bothering us is that this worked fine in DEV all along, with apparently no network ACLs created. Our only theory is a bug in the version in DEV (11.1.0.7.0) but I hesitate to just leap on that as a conclusion.

We will soon be upgrading DEV to 11.2 to be consistent with the other environments, but before then, is there anything else I can check to see what difference there is between the two environments that might cause this behaviour?

(Background for those unfamiliar with ACLs: http://www.oracle-base.com/articles/11g/fine-grained-access-to-network-services-11gr1.php)

Oracle: Non key-preserved table should be

Posted: 27 Mar 2013 05:03 PM PDT

I'm getting "ORA-01779: cannot modify a column which maps to a non key-preserved table" when I try to update a join. I've searched around the site and found a lot of advice on what key-preserved means and why it is necessary... but as near as I can tell I'm complying with that advice, and still getting the error.

I have two tables:

PG_LABLOCATION has, among other things, the columns:
  "LABLOCID" NUMBER,
  "DNSNAME" VARCHAR2(200 BYTE)

LABLOCID is the primary key; DNSNAME has a unique constraint.

PG_MACHINE has, among other things, the columns:
  "MACHINEID" NUMBER,
  "LABLOCID" NUMBER,
  "IN_USE" NUMBER(1,0) DEFAULT 0,
  "UPDATE_TIME" TIMESTAMP (6) DEFAULT '01-JAN-1970'

MACHINEID is the primary key; LABLOCID is a foreign key into LABLOCID in PG_LABLOCATION (its primary key).

The update I'm running is:

update
  (select mac.in_use, mac.update_time
     from pg_machine mac
    inner join pg_lablocation loc
       on mac.lablocid = loc.lablocid
    where loc.dnsname = 'value'
      and '02-JAN-2013' > mac.update_time
  )
set in_use = 1 - MOD( 101, 2 ), update_time = '02-JAN-2013';

I'm only updating values in one table (PG_MACHINE) and the join column in the other table is the primary key, which should make it key-preserved by my reading. I'm concerned that the where clause is causing the problem, but I tried removing the filter on mac.update_time and got the same error, and loc.dnsname has a unique constraint.

What's even odder is that we have, like many folks, a dev and a prod environment. We did a complete schema and data migration from prod to dev. I've looked them both over and they have identical indexes and constraints. The query works in dev, but generates the above error in prod.

So two questions:

1) Can you see what's wrong with my query?

2) Can you suggest what might be different between my dev and prod environment (e.g. server settings) that could cause this error in one but not the other?
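
For what it's worth, one way to sidestep the key-preserved requirement entirely is to rewrite the statement as a MERGE; a rough sketch against the same tables (not tested in either environment) would be:

MERGE INTO pg_machine mac
USING (SELECT lablocid FROM pg_lablocation WHERE dnsname = 'value') loc
ON (mac.lablocid = loc.lablocid)
WHEN MATCHED THEN UPDATE
  SET mac.in_use      = 1 - MOD(101, 2),
      mac.update_time = DATE '2013-01-02'
  WHERE mac.update_time < DATE '2013-01-02';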

MySQL inserts/deletes slowing down after a while after Start/Reboot

Posted: 27 Mar 2013 05:58 PM PDT

The setup here is: OS: Ubuntu 12.04 (1 core and 1.75 GB of RAM), MySQL version: 5.5.

There are about 8 to 9 tables in the database. On initially starting MySQL the queries are fast, at around 0.05 - 0.10 seconds; however, after some time the INSERTs and DELETEs slow down to around 1 - 2 seconds, while the SELECTs are still around 0.05 - 0.10 seconds.

After rebooting the system the situation reverts to how it was at the start: everything is fast, but then the INSERTs and DELETEs slow down tremendously after a while.

Note: I'm just doing simple inserts and deletes on very few records, and the database starts with empty tables in the beginning.

Any insight?

Edit: Create Table of 1 of the Tables

CREATE TABLE `FeedUps` (
  `post_id` int(10) unsigned DEFAULT NULL,
  `liker` int(10) unsigned DEFAULT NULL,
  UNIQUE KEY `post_id` (`post_id`,`liker`),
  KEY `liker` (`liker`),
  CONSTRAINT `FeedUps_ibfk_1` FOREIGN KEY (`post_id`)
      REFERENCES `Feed` (`post_id`) ON DELETE CASCADE ON UPDATE CASCADE,
  CONSTRAINT `FeedUps_ibfk_2` FOREIGN KEY (`liker`)
      REFERENCES `Users` (`user_id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=latin1

SQL Server 2012, rebuild not lowering avg fragmentation

Posted: 27 Mar 2013 05:03 PM PDT

I have a script identifying what indices to rebuild.

select
    'alter index ' + name + ' on ' + @dbname + '.dbo.' + OBJECT_NAME(a.object_id) + ' rebuild;'
from sys.dm_db_index_physical_stats (DB_ID(@dbname), NULL, NULL, NULL, NULL) AS a
inner join sys.indexes AS b ON a.object_id = b.object_id AND a.index_id = b.index_id
where avg_fragmentation_in_percent > 30

which generates, for example:

alter index FooIndex on FooDb.dbo.FooTable rebuild;  

But after I execute the alter statement the index still has a high fragmentation value, and if I execute it again it doesn't get any lower (around 50%). Any input on what could be wrong would be highly appreciated.

UPDATED: I manually increased the size of the DB which managed to lower the fragmentation on some of the indices, but still not all. Still have a couple around 50%.
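
For diagnosis, it may help to surface page counts alongside fragmentation, since very small indexes often won't report low fragmentation no matter how many times they are rebuilt; a variant of the script (a sketch against the same DMV) could be:

SELECT OBJECT_NAME(a.object_id) AS table_name,
       b.name                   AS index_name,
       a.page_count,
       a.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(@dbname), NULL, NULL, NULL, NULL) AS a
INNER JOIN sys.indexes AS b
    ON a.object_id = b.object_id AND a.index_id = b.index_id
WHERE a.avg_fragmentation_in_percent > 30
ORDER BY a.page_count;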

//Daniel

Newly discovered SQL Server Express has all databases inaccessible/offline

Posted: 27 Mar 2013 03:39 PM PDT

We have several remote sites at my company, and today one of the employees came to me and said their SQL reporting tool stopped working. Up until this point I did not even know this server, database, or reporting service existed!

I have RDP access to the server and can open SQL Server Management Studio 2008 R2; the instance appears to be SQL Server Express. Currently I can log in to the instance with my domain admin account, but every database gives me the following message when I try to expand it:

The webpage they go to for the reports gives them the following error:

• An error has occurred during report processing. (rsProcessingAborted)
  o Cannot create a connection to data source 'DNPDataSource'. (rsErrorOpeningConnection)
    - For more information about this error navigate to the report server on the local server machine, or enable remote errors

When I try to view the SQL Server Error log I get this error:

(screenshot of the error not reproduced here)

I do not have the sa password. I'm not a DBA, but I need to try to figure this one out. Can anyone point me in a direction to start troubleshooting this? I'm completely lost.


Here is the ERRORLOG:

2013-03-27 13:14:24.34 Server      Microsoft SQL Server 2008 R2 (SP1) - 10.50.2550.0 (X64)       Jun 11 2012 16:41:53       Copyright (c) Microsoft Corporation      Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (Hypervisor)    2013-03-27 13:14:24.34 Server      (c) Microsoft Corporation.  2013-03-27 13:14:24.34 Server      All rights reserved.  2013-03-27 13:14:24.34 Server      Server process ID is 9040.  2013-03-27 13:14:24.34 Server      System Manufacturer: 'Intel Corporation', System Model: 'S5520UR'.  2013-03-27 13:14:24.34 Server      Authentication mode is MIXED.  2013-03-27 13:14:24.34 Server      Logging SQL Server messages in file 'c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\Log\ERRORLOG'.  2013-03-27 13:14:24.34 Server      This instance of SQL Server last reported using a process ID of 2428 at 3/27/2013 1:14:02 PM (local) 3/27/2013 7:14:02 PM (UTC). This is an informational message only; no user action is required.  2013-03-27 13:14:24.34 Server      Registry startup parameters:        -d c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\master.mdf       -e c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\Log\ERRORLOG       -l c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\mastlog.ldf  2013-03-27 13:14:24.37 Server      SQL Server is starting at normal priority base (=7). This is an informational message only. No user action is required.  2013-03-27 13:14:24.37 Server      Detected 16 CPUs. This is an informational message; no user action is required.  2013-03-27 13:14:24.51 Server      Using dynamic lock allocation.  Initial allocation of 2500 Lock blocks and 5000 Lock Owner blocks per node.  This is an informational message only.  No user action is required.  2013-03-27 13:14:24.51 Server      Lock partitioning is enabled.  This is an informational message only. No user action is required.  2013-03-27 13:14:24.56 Server      Node configuration: node 0: CPU mask: 0x00000000000000ff:0 Active CPU mask: 0x00000000000000ff:0. This message provides a description of the NUMA configuration for this computer. This is an informational message only. No user action is required.  2013-03-27 13:14:24.62 spid7s      Starting up database 'master'.  2013-03-27 13:14:24.69 spid7s      2 transactions rolled forward in database 'master' (1). This is an informational message only. No user action is required.  2013-03-27 13:14:24.69 spid7s      0 transactions rolled back in database 'master' (1). This is an informational message only. No user action is required.  2013-03-27 13:14:24.69 spid7s      Recovery is writing a checkpoint in database 'master' (1). This is an informational message only. No user action is required.  2013-03-27 13:14:24.78 spid7s      FILESTREAM: effective level = 0, configured level = 0, file system access share name = 'SQLEXPRESS'.  2013-03-27 13:14:24.83 spid7s      SQL Trace ID 1 was started by login "sa".  2013-03-27 13:14:24.85 spid7s      Starting up database 'mssqlsystemresource'.  2013-03-27 13:14:24.87 spid7s      The resource database build version is 10.50.2500. This is an informational message only. No user action is required.  2013-03-27 13:14:25.09 spid10s     Starting up database 'model'.  2013-03-27 13:14:25.09 spid7s      Server name is 'WCCKEMAPP\SQLEXPRESS'. This is an informational message only. No user action is required.  
2013-03-27 13:14:25.21 spid10s     The tail of the log for database model is being rewritten to match the new sector size of 4096 bytes.  2560 bytes at offset 99840 in file c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\modellog.ldf will be written.  2013-03-27 13:14:25.31 spid10s     Clearing tempdb database.  2013-03-27 13:14:25.32 spid13s     A new instance of the full-text filter daemon host process has been successfully started.  2013-03-27 13:14:25.37 spid7s      Starting up database 'msdb'.  2013-03-27 13:14:25.40 Server      A self-generated certificate was successfully loaded for encryption.  2013-03-27 13:14:25.40 Server      Server is listening on [ 'any' <ipv6> 54547].  2013-03-27 13:14:25.40 Server      Server is listening on [ 'any' <ipv4> 54547].  2013-03-27 13:14:25.40 Server      Server local connection provider is ready to accept connection on [ \\.\pipe\SQLLocal\SQLEXPRESS ].  2013-03-27 13:14:25.40 Server      Server named pipe provider is ready to accept connection on [ \\.\pipe\MSSQL$SQLEXPRESS\sql\query ].  2013-03-27 13:14:25.40 Server      Dedicated administrator connection support was not started because it is disabled on this edition of SQL Server. If you want to use a dedicated administrator connection, restart SQL Server using the trace flag 7806. This is an informational message only. No user action is required.  2013-03-27 13:14:25.56 spid10s     Starting up database 'tempdb'.  2013-03-27 13:14:25.60 spid13s     The Service Broker protocol transport is disabled or not configured.  2013-03-27 13:14:25.60 spid13s     The Database Mirroring protocol transport is disabled or not configured.  2013-03-27 13:14:25.61 spid13s     Service Broker manager has started.  2013-03-27 13:14:25.77 spid7s      The tail of the log for database msdb is being rewritten to match the new sector size of 4096 bytes.  2048 bytes at offset 12007424 in file c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\MSDBLog.ldf will be written.  2013-03-27 13:14:25.84 spid7s      Recovery is complete. This is an informational message only. No user action is required.  2013-03-27 13:14:25.90 Logon       Error: 17187, Severity: 16, State: 1.  2013-03-27 13:14:25.90 Logon       SQL Server is not ready to accept new client connections. Wait a few minutes before trying again. If you have access to the error log, look for the informational message that indicates that SQL Server is ready before trying to connect again.  [CLIENT: ::1]  2013-03-27 13:14:25.90 Logon       Error: 17187, Severity: 16, State: 1.  2013-03-27 13:14:25.90 Logon       SQL Server is not ready to accept new client connections. Wait a few minutes before trying again. If you have access to the error log, look for the informational message that indicates that SQL Server is ready before trying to connect again.  [CLIENT: 172.17.0.210]  2013-03-27 13:14:26.48 Server      The SQL Server Network Interface library successfully registered the Service Principal Name (SPN) [ MSSQLSvc/WCCKemAPP.WCC.LOCAL:SQLEXPRESS ] for the SQL Server service.   2013-03-27 13:14:26.48 Server      The SQL Server Network Interface library successfully registered the Service Principal Name (SPN) [ MSSQLSvc/WCCKemAPP.WCC.LOCAL:54547 ] for the SQL Server service.   2013-03-27 13:14:26.48 Server      SQL Server is now ready for client connections. This is an informational message; no user action is required.  2013-03-27 13:14:31.04 spid51      Starting up database 'ReportServer'.  
2013-03-27 13:14:31.37 spid51      Starting up database 'ReportServerTempDB'.  2013-03-27 13:14:31.76 spid51      Starting up database 'ReportServerTempDB'.  2013-03-27 13:14:32.07 spid51      Starting up database 'ReportServerTempDB'.  2013-03-27 13:14:32.24 Logon       Error: 18456, Severity: 14, State: 5.  2013-03-27 13:14:32.24 Logon       Login failed for user 'drily'. Reason: Could not find a login matching the name provided. [CLIENT: 172.17.0.210]  2013-03-27 13:15:12.28 spid55      Starting up database 'DNP'.  2013-03-27 13:15:13.75 spid55      Starting up database 'DSS'.  2013-03-27 13:19:36.62 spid57      Starting up database 'ReportServerTempDB'.  2013-03-27 13:25:31.18 spid53      Starting up database 'ReportServer$SQLExpress'.  2013-03-27 13:25:36.38 spid53      Starting up database 'DSSDL'.  2013-03-27 13:25:38.89 spid53      Starting up database 'DSSUSERDIR'.  2013-03-27 13:25:41.26 spid53      Starting up database 'permissionsAudit'.  2013-03-27 13:25:45.00 spid53      Starting up database 'PMKemmererProduction'.  2013-03-27 13:25:48.05 spid53      Starting up database 'PMKemmererProductionTEST'.  2013-03-27 13:26:01.57 spid54      Attempting to load library 'xpstar.dll' into memory. This is an informational message only. No user action is required.  2013-03-27 13:26:01.58 spid54      Using 'xpstar.dll' version '2009.100.1600' to execute extended stored procedure 'xp_instance_regread'. This is an informational message only; no user action is required.  2013-03-27 13:26:52.10 Logon       Error: 18456, Severity: 14, State: 38.  2013-03-27 13:26:52.10 Logon       Login failed for user 'WCC\baadmin'. Reason: Failed to open the explicitly specified database. [CLIENT: <local machine>]  2013-03-27 13:26:53.37 spid59      Starting up database 'DSSDL'.  2013-03-27 13:26:53.60 spid59      Starting up database 'DSSUSERDIR'.  2013-03-27 13:26:53.92 spid59      Starting up database 'QuietDose'.  2013-03-27 13:26:54.16 spid59      Starting up database 'ReportServer$SQLExpress'.  2013-03-27 13:26:54.36 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.  2013-03-27 13:26:54.66 spid59      Starting up database 'ReportServerTempDB'.  2013-03-27 13:26:54.89 spid59      Starting up database 'STX'.  2013-03-27 13:26:55.57 spid59      Starting up database 'Test'.  2013-03-27 13:26:55.76 spid59      Starting up database 'DSSDL'.  2013-03-27 13:26:55.91 spid59      Starting up database 'DSSUSERDIR'.  2013-03-27 13:26:56.08 spid59      Starting up database 'ReportServer$SQLExpress'.  2013-03-27 13:26:56.31 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.  2013-03-27 13:26:56.52 spid59      Starting up database 'ReportServerTempDB'.  2013-03-27 13:26:56.68 spid59      Starting up database 'STX'.  2013-03-27 13:26:57.24 spid59      Starting up database 'DSSDL'.  2013-03-27 13:26:57.28 spid59      Starting up database 'DSSUSERDIR'.  2013-03-27 13:26:57.45 spid59      Starting up database 'ReportServer$SQLExpress'.  2013-03-27 13:26:57.55 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.  2013-03-27 13:26:57.74 spid59      Starting up database 'ReportServerTempDB'.  2013-03-27 13:26:57.83 spid59      Starting up database 'STX'.  2013-03-27 13:29:36.55 spid54      Starting up database 'ReportServerTempDB'.  2013-03-27 13:39:36.57 spid56      Starting up database 'ReportServerTempDB'.  2013-03-27 13:41:59.55 Logon       Error: 18456, Severity: 14, State: 8.  2013-03-27 13:41:59.55 Logon       Login failed for user 'sa'. 
Reason: Password did not match that for the login provided. [CLIENT: <local machine>]  2013-03-27 13:44:07.70 Logon       Error: 18456, Severity: 14, State: 8.  2013-03-27 13:44:07.70 Logon       Login failed for user 'sa'. Reason: Password did not match that for the login provided. [CLIENT: <local machine>]  2013-03-27 13:49:36.57 spid53      Starting up database 'ReportServerTempDB'.  2013-03-27 13:59:36.57 spid54      Starting up database 'ReportServerTempDB'.  2013-03-27 14:09:36.56 spid53      Starting up database 'ReportServerTempDB'.  2013-03-27 14:15:50.50 spid54      Starting up database 'DSSDL'.  2013-03-27 14:15:50.75 spid54      Starting up database 'DSSUSERDIR'.  2013-03-27 14:15:51.92 spid54      Starting up database 'ReportServer$SQLExpress'.  2013-03-27 14:15:52.25 spid54      Starting up database 'ReportServer$SQLEXPRESSTempDB'.  2013-03-27 14:15:52.51 spid54      Starting up database 'ReportServerTempDB'.  2013-03-27 14:15:52.70 spid54      Starting up database 'STX'.  2013-03-27 14:18:02.83 spid51      Starting up database 'ReportServer$SQLExpress'.  2013-03-27 14:18:46.58 Logon       Error: 18456, Severity: 14, State: 38.  2013-03-27 14:18:46.58 Logon       Login failed for user 'WCC\baadmin'. Reason: Failed to open the explicitly specified database. [CLIENT: <local machine>]  2013-03-27 14:18:47.49 spid59      Starting up database 'DSSDL'.  2013-03-27 14:18:47.70 spid59      Starting up database 'DSSUSERDIR'.  2013-03-27 14:18:47.92 spid59      Starting up database 'ReportServer$SQLExpress'.  2013-03-27 14:18:48.04 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.  2013-03-27 14:18:48.33 spid59      Starting up database 'ReportServerTempDB'.  2013-03-27 14:18:48.53 spid59      Starting up database 'STX'.  2013-03-27 14:18:49.12 spid59      Starting up database 'DSSDL'.  2013-03-27 14:18:49.33 spid59      Starting up database 'DSSUSERDIR'.  2013-03-27 14:18:49.44 spid59      Starting up database 'ReportServer$SQLExpress'.  2013-03-27 14:18:49.60 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.  2013-03-27 14:18:49.84 spid59      Starting up database 'ReportServerTempDB'.  2013-03-27 14:18:49.98 spid59      Starting up database 'STX'.  2013-03-27 14:18:50.28 spid59      Starting up database 'DSSDL'.  2013-03-27 14:18:50.39 spid59      Starting up database 'DSSUSERDIR'.  2013-03-27 14:18:50.48 spid59      Starting up database 'ReportServer$SQLExpress'.  2013-03-27 14:18:50.53 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.  2013-03-27 14:18:50.66 spid59      Starting up database 'ReportServerTempDB'.  2013-03-27 14:18:50.73 spid59      Starting up database 'STX'.  2013-03-27 14:19:36.54 spid59      Starting up database 'ReportServerTempDB'.  2013-03-27 14:19:36.93 spid59      Starting up database 'ReportServerTempDB'.  2013-03-27 14:29:36.55 spid53      Starting up database 'ReportServerTempDB'.  2013-03-27 14:39:36.57 spid54      Starting up database 'ReportServerTempDB'.  2013-03-27 14:49:36.57 spid53      Starting up database 'ReportServerTempDB'.  2013-03-27 14:59:36.58 spid54      Starting up database 'ReportServerTempDB'.  2013-03-27 15:09:36.60 spid53      Starting up database 'ReportServerTempDB'.  

I can provide older logs too if it will be beneficial. I'm reading those now from here.

Results from the query select name, state_desc, is_in_standby, is_cleanly_shutdown from sys.databases; were attached as a screenshot (not reproduced here).


Image for Kin's request (screenshot not reproduced here)

What is the algorithmic complexity of a PosgtreSQL greater than or lesser than index query (compared to equality)?

Posted: 27 Mar 2013 02:10 PM PDT

Assuming there is an index idx_int on an int_col column, what is the algorithmic complexity of a query like this?

SELECT id FROM table
    WHERE table.int_col > 1;

I'm specifically interested in knowing whether the query is significantly less efficient than an equality clause (which, if I understand correctly, is O(log N)). I am pretty sure both of them can use a B-tree, so I would expect them to be about the same in terms of complexity / efficiency.
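
As a side note, the planner's actual choices for both forms can be inspected directly with EXPLAIN ANALYZE (my_table and int_col here are just stand-in names for a table with the index described above):

EXPLAIN ANALYZE SELECT id FROM my_table WHERE int_col = 1;
EXPLAIN ANALYZE SELECT id FROM my_table WHERE int_col > 1;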

Thanks very much!

how to get image from mysql on jsp using struts 2 [closed]

Posted: 27 Mar 2013 01:21 PM PDT

my class .... extends ....
{
    // I retrieve the image from the DB, but I want to display it on a JSP using Struts 2
    Blob image = null;

    Statement st8 = con.createStatement();
    ResultSet rs8 = st8.executeQuery("Select image from pic where CID=1111;");
    while (rs8.next())
    {
        image = (Blob) rs8.getBlob("image");
    }

but it does not display on the JSP when I use the param value.

Using MongoDB and PostgreSQL together

Posted: 27 Mar 2013 01:14 PM PDT

My current project is essentially a run of the mill document management system.

That said, there are some wrinkles (surprise, surprise). While some of the wrinkles are fairly specific to the project, I believe there are some general observations and questions that have come up which don't have a canonical answer (that I could find, anyway) and that are applicable to a wider problem domain. There's a lot here and I'm not sure it's a good fit for the StackExchange Q&A format, but I think it is a) an answerable question and b) non-specific enough that it can benefit the community. Some of my considerations are specific to me but I think the question could be of use to anyone faced with deciding on SQL vs NoSQL vs both.

The background:

The web app we are building contains data that is clearly relational in nature as well as data that is document-oriented. We would like to have our cake and eat it too.

TL;DR: I think #5 below passes the smell test. Do you? Does anyone have experience with such an integration of SQL and NoSQL in a single application? I tried to list all the possible approaches to this class of problem below. Have I missed a promising alternative?

Complexities:

  • There are many different classes of documents. The requirements already call for dozens of different documents. This number will only ever go up. The best possible case would be one in which we could leverage a simple domain specific language, code generation and a flexible schema so that domain experts could handle the addition of new document classes without the intervention of DBAs or programmers. (Note: already aware we are living out Greenspun's Tenth Rule)
  • The integrity of previous successful writes is a central requirement of the project. The data will be business critical. Full ACID semantics on writes can be sacrificed provided that the things that do get successfully written stay written.
  • The documents are themselves complex. The prototype document in our specific case will require storage of 150+ distinct pieces of data per document instance. The pathological case could be an order of magnitude worse, but certainly not two.
  • A single class of documents is a moving target subject to updates at a later point in time.
  • We like the free stuff we get from Django when we hook it into a relational database. We would like to keep the freebies without having to jump back two Django versions to use the django-nonrel fork. Dumping the ORM entirely is preferable to downgrading to 1.3.

Essentially, it's a mishmash of relational data (your typical web app stuff like users, groups, etc., as well as document metadata that we'll need to be able to slice and dice with complex queries in realtime) and document data (e.g. the hundreds of fields which we have no interest in joining on or querying by - our only use case for the data will be for showing the single document into which it was entered).

I wanted to do a sanity check (if you check my posting history, I'm pretty explicit about the fact that I am not a DBA) on my preferred method as well as enumerate all of the options I've come across for others solving broadly similar problems involving both relational and non-relational data.

Proposed Solutions:

1. One table per document class

Each document class gets its own table, with columns for all metadata and data.

Advantages:

  • The standard SQL data model is in play.
  • Relational data is handled in the best possible way. We'll denormalize later if we need to.
  • Django's built-in admin interface is comfortable with introspecting these tables, and the ORM can live happily with 100% of the data out of the box.

Disadvantages:

  • Maintenance nightmare. Dozens (hundreds?) of tables with (tens of?) thousands of columns.
  • Application-level logic responsible for deciding exactly which table to write to. Making the table name a parameter for a query stinks.
  • Basically all business logic changes will require schema changes.
  • Pathological cases might require striping data for single forms across multiple tables (see: What is the maximum number of columns in a PostgreSQL table?).
  • We would probably need to go find a real, honest-to-God DBA who would no doubt end up hating life and us.

2. EAV modeling

There is just a fields table. Entity-Attribute-Value modeling is already well understood. I've included it for completeness. I don't think any new project being started in 2013 would go with an EAV approach on purpose.

Advantages:

  • Easy to model.

Disadvantages:

  • More difficult to query.
  • DB layer no longer has a straight-forward representation for what constitutes one app-level object.
  • We would lose DB-level constraint checking.
  • Number of rows on one table will grow 100s-1000s of times faster. Likely future pain point, performance-wise.
  • Limited indexing possible.
  • DB schema is nonsensical as far as ORM is concerned. Batteries included web app stuff is preserved but custom data models are going to require custom queries.

3. Use PostgreSQL hstore or json fields

Either of these field types would do the trick for storing schemaless data within the context of a relational DB. The only reason I don't jump to this solution immediately is that it is relatively new (introduced in version 8.4, so not that new), I have zero previous exposure to it, and I am suspicious. It strikes me as wrong for precisely the same reasons I would feel uneasy throwing all my nice, easily normalized data into Mongo - even though Mongo can handle references between documents. (A small sketch of this approach follows the advantage/disadvantage lists below.)

Advantages:

  • We get the benefits of the Django ORM and the built-in auth and session management.
  • Everything stays in one backend that we've previously used on other projects successfully.

Disadvantages:

  • No experience with this, personally.
  • It doesn't look like a very highly used feature. It looks like they get recommended quite a bit to people looking at NOSQL solutions but I don't see a lot of evidence that they are being chosen. This makes me think I must be missing something.
  • All values stored are strings. Lose DB-level constraint checking.
  • The data in the hstore will never be displayed to the user unless they specifically view a document, but the metadata stored in more standard columns will be. We will be beating that metadata up and I worry the rather large hstores we will be creating might come with performance drawbacks.
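
A minimal sketch of what option #3 could look like, assuming PostgreSQL 9.1+ for CREATE EXTENSION (the table and field names are made up):

CREATE EXTENSION IF NOT EXISTS hstore;

CREATE TABLE document (
    id         serial PRIMARY KEY,
    doc_class  text NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now(),
    fields     hstore NOT NULL DEFAULT ''
);

INSERT INTO document (doc_class, fields)
VALUES ('intake_form', 'patient_name => "Jane Doe", room => "214"');

-- metadata columns are queried normally; individual field values come back as text
SELECT fields -> 'patient_name' FROM document WHERE doc_class = 'intake_form';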

4. Go full bore document-oriented

Make all the things documents (in the MongoDB sense). Create a single collection of type Document and call it a day. Bring all peripheral data (including data on user accounts, groups, etc) into mongo as well. This solution is obviously better than EAV modeling but it feels wrong to me for the same reason #3 felt wrong - they both feel like using your hammer as a screwdriver too.

Advantages:

  • No need to model data upfront. Have one collection with documents of type Document and call it a day.
  • Known good scaling characteristics, should the collection need to grow to encompass millions or even billions of documents.
  • JSON format (BSON) is intuitive for developers.
  • As I understand it (which is only vaguely at this point), by being paranoid with regard to write-concern level even a single instance can provide pretty strong data safety in the event of anything and everything up to a hard drive crash.

Disadvantages:

  • The ORM is out the window for Django trunk. Freebies that go out the window with it: the auth framework, the sessions framework, the admin interface, surely many other things.
  • Must either use mongo's referencing capabilities (which require multiple queries) or denormalize data. Not only do we lose freebies that we got from Django, we also lose freebies like JOINs we took for granted in PostgreSQL.
  • Data safety. When one reads about MongoDB, it seems there is always at least one person referring to how it will up and lose your data. They never cite a particular occurrence and it might all just be hogwash or just related to the old default fire and forget write-concern but it still worries me. We will of course be utilizing a fairly paranoid backup strategy in any case (if data is corrupted silently that could well be immaterial of course..).

5. PostgreSQL and MongoDB

Relational data goes in the relational database and document data goes in the document-oriented database. The documents table on the relational database contains all of the data we might need to index or slice and dice on as well as a MongoDB ObjectId which we would use when we needed to query for the actual values of the fields on the documents. We wouldn't be able to use the ORM or the built-in admin for the values of the documents themselves but that's not that big of a loss since the whole app is basically an admin interface for the documents and we would have likely had to customize that specific part of the ORM to an unacceptable degree to make it work just the way we need.

Advantages:

  • Each backend does only what it is good at.
  • References between models are preserved without requiring multiple queries.
  • We get to keep the batteries Django gave us as far as users, sessions, etc are concerned.
  • Only need one documents table no matter how many different classes of documents are created.
  • The less often queried document data is strongly separated from the far more often queried metadata.

Disadvantages:

  • Retrieving document data will require 2 sequential queries, first against the SQL DB and then against the MongoDB (though this is no worse than if the same data had been stored in Mongo and not denormalized)
  • Writing will no longer be atomic. A write against a single Mongo document is guaranteed to be atomic and PG obviously can make atomicity guarantees but ensuring atomicity of write across both will require application logic, no doubt with a performance and complexity penalty.
  • Two backends = two query languages = two different programs with dissimilar admin requirements = two databases vying for memory.

SSIS File System Task Copy File Could not find part of the path error

Posted: 27 Mar 2013 01:50 PM PDT

I have created an SSIS package that uses a File System Task (copy) that copies a file from one folder to another. The package runs fine when I run it from my local computer, but when I import the package into our SQL Server and create a job to run the package, the package fails and I see the following error in the log:

Started:  3:20:01 PM
Error: 2013-03-27 15:20:01.70
   Code: 0xC002F304
   Source: copy template File System Task
   Description: An error occurred with the following error message:
   "Could not find a part of the path
   'P:\IT\PROVIDERS\PROVIDERS AUDIT\File Template\QnxtProviderAuditList_yyyymmdd.xlsx'."
End Error
DTExec: The package execution returned DTSER_FAILURE (1).
Started:  3:20:01 PM
Finished: 3:20:01 PM
Elapsed:  0.452 seconds.
The package execution failed.
The step failed.

The file path in the error is the file that needs to be copied (sourceconnection).

I can copy the file path directly from the error and go to it (via windows explorer) on the physical SQL Server machine so I do not understand why the package is saying it could not find the Path.

Does anyone have any idea why this package would run fine on my local machine but fails with the above error when run from the SQL Server? P:\ is a shared drive which the SQL Server has access to so I doubt it's an access/privileges issue.

How does PostgreSQL physically order new records on disk (after a cluster on primary key)?

Posted: 27 Mar 2013 01:42 PM PDT

I need to know how PostgreSQL orders records on disk. In this case, I would like to take advantage of index combination as stated in the docs, which as I understand it uses bitmaps to get matching rows and returns them according to their physical location. The table in question has been clustered by its primary key.

As I understand it, PostgreSQL does not automatically continue doing clustering after a clustering is finished (although it does remember that it clustered according to a certain index). Now, since this is the primary key, I wonder if the physical storage order would be according to that (which if true I would like to use to our advantage for a specific query).

In summary, how does PostgreSQL order its new records, especially after clustering?
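
For what it's worth, one crude way to see how closely the physical order still tracks the clustered index after new inserts is the correlation statistic (the table and index names below are hypothetical):

CLUSTER my_table USING my_table_pkey;  -- one-off physical reorder
ANALYZE my_table;

-- correlation close to 1.0 means heap order still roughly follows that column
SELECT attname, correlation
FROM pg_stats
WHERE tablename = 'my_table';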

Thanks very much!

MySQL - Normalization over meaningless data - Compound Primary Keys or Surrogate ID field

Posted: 27 Mar 2013 03:05 PM PDT

Please forgive my title; if you have a suggestion for it, feel free to comment.

I have a database:

DROP TABLE IF EXISTS `books`;

CREATE TABLE `books` (
  `isbn` VARCHAR(255) NOT NULL,
  `title` VARCHAR(255) NULL DEFAULT NULL,
  PRIMARY KEY (`isbn`)
) COMMENT 'Books used at this school';

DROP TABLE IF EXISTS `classes`;

CREATE TABLE `classes` (
  `class_id` INT(10) NOT NULL AUTO_INCREMENT,
  `teacher_id` SMALLINT(5) NULL DEFAULT NULL,
  PRIMARY KEY (`class_id`)
) COMMENT 'Classes at the school';

DROP TABLE IF EXISTS `b_c`;

CREATE TABLE `b_c` (
  `isbn` VARCHAR(255) NOT NULL,
  `class_id` INT(10) NOT NULL,
  PRIMARY KEY (`isbn`)
) COMMENT 'Books to classes';

ALTER TABLE `b_c` ADD FOREIGN KEY (isbn) REFERENCES `books` (`isbn`) ON UPDATE CASCADE;
ALTER TABLE `b_c` ADD FOREIGN KEY (class_id) REFERENCES `classes` (`class_id`) ON UPDATE CASCADE;

The issue I'm having is that I would like to normalize data as much as possible (I don't want multiple entries for the same relationship to be entered into the table b_c), but I would like to only store what data is absolutely pertinent.

My first idea to deal with this is to just create a compound primary key for the b_c table consisting of the fields isbn and class_id which would solve the issue of having duplicate relationships in the table, however, I have heard strong opinions on having a unique identifier for every row in a table like this. The justification for having a unique identifier for every row seems to be that it's useful to be able to specify a specific row, though I don't see a situation in which this would become useful. Can someone offer an example?
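
For comparison, the compound-key variant of the junction table would be something like this (a sketch, not necessarily the final design):

CREATE TABLE `b_c` (
  `isbn`     VARCHAR(255) NOT NULL,
  `class_id` INT(10)      NOT NULL,
  PRIMARY KEY (`isbn`, `class_id`),
  FOREIGN KEY (`isbn`)     REFERENCES `books` (`isbn`)       ON UPDATE CASCADE,
  FOREIGN KEY (`class_id`) REFERENCES `classes` (`class_id`) ON UPDATE CASCADE
) COMMENT 'Books to classes';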

Another criticism I've heard is that using compound PKs in this way can make JOINs extremely taxing. Can someone comment on the performance of these two different methods?

The question boils down to: "Is it worth it to add an id field to the b_c table, or is the use of compound PKs enough to properly represent the relationship between the books and classes tables?"

If you have any other comments about the design not directly pertaining to the question, I would love to hear them, and thank you in advance for your help.

Record with variable number of columns?

Posted: 27 Mar 2013 12:35 PM PDT

I am trying to model a cable mapping for devices. Each device has multiple cards, and each of those cards has multiple ports. Since both the number of cards and ports vary, I am confused on how to establish the correct normalized form and relationships that would address one device record having an arbitrary number of cards and those cards having an arbitrary number of ports. Any thoughts?
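
One possible normalized shape, sketched in generic SQL with made-up names: a row per device, a row per card pointing at its device, and a row per port pointing at its card, so both counts can vary freely.

CREATE TABLE device (
    device_id INT PRIMARY KEY,
    name      VARCHAR(100) NOT NULL
);

CREATE TABLE card (
    card_id   INT PRIMARY KEY,
    device_id INT NOT NULL REFERENCES device (device_id),
    slot_no   INT NOT NULL
);

CREATE TABLE port (
    port_id  INT PRIMARY KEY,
    card_id  INT NOT NULL REFERENCES card (card_id),
    port_no  INT NOT NULL
);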

SQL Server 2008 R2 SP1 - Tempdb “Available free space” showing negative value [duplicate]

Posted: 27 Mar 2013 11:56 AM PDT

Please help as this issue is driving me nuts...

Tempdb has four data files & one log file. They are all located on the same dedicated drive.

When I right click on tempdb in SQL Server Management Studio (SSMS) to view Properties --> General...I'm seeing "Size" as ~65000 MB with "Space Available" as ~64000 MB. However, when I try to shrink the files via the GUI, I see that the "currently allocated space" hasn't changed & is still showing the initial size.

This is also where the "Available free space" is shown as a negative number. The only workaround I have found so far is to restart SQL Server to flush out tempdb, but I can't keep doing this.

Has anyone ever come across this & know the root cause/permanent solution for it please?

Thanks in advance!

SQL Server 2008 R2 SP1 - Tempdb "Available free space" showing negative value

Posted: 27 Mar 2013 05:21 PM PDT

Please help as this issue is driving me nuts...

Tempdb has four data files & one log file. They are all located on the same dedicated drive.

When I right click on tempdb in SQL Server Management Studio (SSMS) to view Properties --> General...I'm seeing "Size" as ~65000 MB with "Space Available" as ~64000 MB. However, when I try to shrink the files via the GUI, I see that the "currently allocated space" hasn't changed & is still showing the initial size.

This is also where the "Available free space" is shown as a negative number. The only workaround I have found so far is to restart SQL Server to flush out tempdb, but I can't keep doing this.
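
One thing that may be worth capturing before the next restart is what the files themselves report, since it does not always match the GUI figures (a sketch):

USE tempdb;
SELECT name,
       size / 128.0                                     AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0          AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS free_mb
FROM sys.database_files;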

Has anyone ever come across this & know the root cause/permanent solution for it please?

Thanks in advance!

How this simple Stored Procedure ended up in Deadlock

Posted: 27 Mar 2013 01:07 PM PDT

Thanks to all; this forum is really helpful.

After a few minutes of the system running we get "Try restarting the transaction." and it ends up in a deadlock.

We use MySQL 5.5.28, and the Java code calling this stored procedure is in a synchronized block. However, we are still getting this deadlock. When I ran show engine innodb status; I saw the following query in the deadlock:

INSERT INTO Site_Msg_Tag (Site_Msg_Tag_Id, Unit_Id, Log_Timestamp, Interval_Id, Digest_Matched)
VALUES (NULL, siteId, logTimestamp, IntervalId, digestMatched);

Please refer below for entire Stored Procedure and the table it's updating:

CREATE PROCEDURE sp_CreateCtrMsgTagIfNotExist (
    siteId INT,
    logTimestamp DATETIME,
    IntervalId INT,
    digestMatched TINYINT(1),
    OUT returnpk INT
)
BEGIN
    SET returnpk = 0;
    START TRANSACTION;

    SELECT Site_Msg_Tag_Id INTO returnPK
      FROM Site_Msg_Tag
     WHERE Unit_Id = siteId
       AND Log_Timestamp = logTimestamp
       AND Interval_Id = IntervalId;

    IF (returnpk IS NULL OR returnpk = 0) THEN /*1*/

        INSERT INTO Site_Msg_Tag (Site_Msg_Tag_Id, Unit_Id, Log_Timestamp, Interval_Id, Digest_Matched)
        VALUES (NULL, siteId, logTimestamp, IntervalId, digestMatched);

        SELECT LAST_INSERT_ID() INTO returnpk;

    END IF; /*1*/

    COMMIT;

END |

DELIMITER ;

Table :

CREATE TABLE `Site_Msg_Tag` (
  `Site_Msg_Tag_Id` int(11) NOT NULL AUTO_INCREMENT,
  `Unit_Id` int(11) NOT NULL,
  `Log_Timestamp` datetime NOT NULL,
  `Interval_Id` int(11) NOT NULL,
  `Digest_Matched` tinyint(1) DEFAULT '0',
  PRIMARY KEY (`Site_Msg_Tag_Id`),
  UNIQUE KEY `ix_Site_Msg_Tag_Ts_Unit` (`Unit_Id`,`Log_Timestamp`,`Interval_Id`),
  UNIQUE KEY `ix_Site_Msg_Tag_Unit_Ts` (`Log_Timestamp`,`Unit_Id`,`Interval_Id`),
  KEY `FK_Site_Msg_Tag_Interval_Id` (`Interval_Id`),
  CONSTRAINT `FK_Site_Msg_Tag_Interval_Id` FOREIGN KEY (`Interval_Id`) REFERENCES `Interval` (`Interval_Id`),
  CONSTRAINT `FK_Site_Msg_Tag_Unit_Id` FOREIGN KEY (`Unit_Id`) REFERENCES `Unit` (`Unit_Id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

I saw in this thread Will these two queries result in a deadlock if executed in sequence? that they solved the issue by writing the query in a more efficient way. Any suggestions? Thanks in advance.
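
One pattern that might be worth trying, given the existing unique key on (Unit_Id, Log_Timestamp, Interval_Id), is collapsing the SELECT-then-INSERT into a single statement so two sessions have no window to race in (a sketch, not tested against this schema):

INSERT INTO Site_Msg_Tag (Unit_Id, Log_Timestamp, Interval_Id, Digest_Matched)
VALUES (siteId, logTimestamp, IntervalId, digestMatched)
ON DUPLICATE KEY UPDATE Site_Msg_Tag_Id = LAST_INSERT_ID(Site_Msg_Tag_Id);

SELECT LAST_INSERT_ID() INTO returnpk;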

ISO Week vs SQL Server Week

Posted: 27 Mar 2013 03:11 PM PDT

Okay, so I have a report that does a this-week vs. last-week comparison, and our customer noticed that their data was "funky". Upon further investigation we found it was not doing weeks correctly according to the ISO standard. I ran this script as a test case.

SET DATEFIRST 1

SELECT DATEPART(WEEK, '3/26/13')
     , DATEPART(WEEK, '3/27/12')
     , DATEPART(WEEK, '3/20/12')
     , DATEPART(WEEK, '1/2/12')

SELECT DATEPART(ISO_WEEK, '3/26/13')
     , DATEPART(ISO_WEEK, '3/27/12')
     , DATEPART(ISO_WEEK, '3/20/12')
     , DATEPART(ISO_WEEK, '1/2/12')

When run I got these results.

(results screenshot not reproduced here)

I thought this was peculiar, and so I did some more digging and found that SQL Server's WEEK counts the week containing January 1st as the first week of the year, whereas ISO_WEEK counts the week containing the first Thursday of January as the first week of the year.

The question then ends up being twofold. Question 1: why is this? Question 2: is there any way to change this so I'm not having to modify all of my code to use ISO_WEEK everywhere?

MySQL subqueries that use range based on values of main queries don't use indices properly

Posted: 27 Mar 2013 06:20 PM PDT

I think I've isolated a problem that has been affecting many of my queries lately. And would like some help to figure out a solution for this.

OK, so my finding is that a query which normally runs very fast, touching only a couple of rows, can use indexes improperly when it appears as a subquery whose range is based on values from the main query.

Let's take an example:

DROP TEMPORARY TABLE IF EXISTS Person;
DROP TEMPORARY TABLE IF EXISTS CofeeBreaks;

CREATE TEMPORARY TABLE IF NOT EXISTS Person
(
    `person_id` INT(11) AUTO_INCREMENT,
    `age` INT,
    `lastCofee` DATETIME,
    KEY `idkey` (`person_id`) USING BTREE,
    KEY `datekey` (`lastCofee`) USING BTREE
) ENGINE = MEMORY;

CREATE TEMPORARY TABLE IF NOT EXISTS CofeeBreaks
(
    `id` INT(11) AUTO_INCREMENT,
    `cofeeBreakStart` DATETIME,
    `cofeeBreakEnd` DATETIME,
    KEY `brekIdKey`(`id`) USING BTREE
) ENGINE = MEMORY;

INSERT INTO Person (age, lastCofee) VALUES (24, '2013-03-27 14:45:34');
INSERT INTO Person (age, lastCofee) VALUES (34, '2013-03-27 14:46:38');
INSERT INTO Person (age, lastCofee) VALUES (26, '2013-03-27 15:25:24');
INSERT INTO Person (age, lastCofee) VALUES (28, '2013-03-27 16:33:54');
INSERT INTO Person (age, lastCofee) VALUES (46, '2013-03-27 17:11:03');

INSERT INTO CofeeBreaks (cofeeBreakStart, cofeeBreakEnd) VALUES ('2013-03-27 15:11:03', '2013-03-27 17:25:24');
INSERT INTO CofeeBreaks (cofeeBreakStart, cofeeBreakEnd) VALUES ('2013-03-27 14:45:34', '2013-03-27 15:25:24');
INSERT INTO CofeeBreaks (cofeeBreakStart, cofeeBreakEnd) VALUES ('2013-03-27 17:11:03', '2013-03-27 17:11:03');

SELECT * FROM Person WHERE lastCofee BETWEEN '2013-03-27 15:11:03' AND '2013-03-27 17:11:03';

SELECT
    *,
    (SELECT AVG(Person.age)
       FROM Person
      WHERE Person.lastCofee BETWEEN CofeeBreaks.cofeeBreakStart AND CofeeBreaks.cofeeBreakEnd) AS averageAge
FROM CofeeBreaks;

So the EXPLAIN results for the first SELECT are as follows:

1   SIMPLE  Person  range   datekey datekey 9       1   Using where  

But the second query doesn't use the index properly in the subquery and analyses more rows than necessary:

id  select_type         table        type  possible_keys  key  key_len  ref  rows
1   PRIMARY             CofeeBreaks  ALL                                      3
2   DEPENDENT SUBQUERY  Person       ALL   datekey                            5

As we can see, the subquery needs to analyse all rows in the Person table, even though none of the CofeeBreaks ranges surrounds all of the 5 persons.

The way I've been fixing the performance issues in a very busy database is by doing a select into a temporary table and then looping over the rows of that table and updating it with the aggregates that I need. This is obviously ugly code and shouldn't be built that way. But I really haven't found a way to optimize queries with this kind of subquery, and I've tried a lot of possible approaches without success.
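
For comparison, the dependent subquery can also be written as a join plus GROUP BY; whether the optimizer handles the range condition on lastCofee any better this way is version-dependent, so treat it as a sketch to EXPLAIN rather than a fix:

SELECT cb.id, cb.cofeeBreakStart, cb.cofeeBreakEnd, AVG(p.age) AS averageAge
FROM CofeeBreaks cb
LEFT JOIN Person p
  ON p.lastCofee BETWEEN cb.cofeeBreakStart AND cb.cofeeBreakEnd
GROUP BY cb.id, cb.cofeeBreakStart, cb.cofeeBreakEnd;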

Thanks in advance for any inputs.

psql.exe crash when issuing \connect

Posted: 27 Mar 2013 10:46 AM PDT

I'm trying to use psql 8.4.16 from a Windows 7 32-bit PC, to connect to a local database.

When I open a command line console I do "psql.exe -U myuser " to open the interactive console.

The next command is "\connect mydb" and it just crashes, showing the typical Windows message "psql - the PostgreSQL interactive terminal has stopped working", and gives me two options: find a solution online or close the program.

Any hint?.

Edit: I've found the problem, I was mixing psql.exe with some postgresql dlls from previous versions.

Expand sparse table with self outer join on distinct values

Posted: 27 Mar 2013 06:03 PM PDT

I have this:

g1       | g2        | x
---------+-----------+-----
New York | Monday    | 210
New York | Tuesday   | 258
Chicago  | Monday    | 30
Chicago  | Wednesday | 25
LA       | Thursday  | 40

But I want this:

g1       | g2        | x
---------+-----------+-----
New York | Monday    | 210
New York | Tuesday   | 258
New York | Wednesday | NULL
New York | Thursday  | NULL
Chicago  | Monday    | 30
Chicago  | Tuesday   | NULL
Chicago  | Wednesday | 25
Chicago  | Thursday  | NULL
LA       | Monday    | NULL
LA       | Tuesday   | NULL
LA       | Wednesday | NULL
LA       | Thursday  | 40

In other words, I want a record for every combination of DISTINCT g1 and DISTINCT g2. I should note that g1 and g2 are arbitrary groupings (not always location- or date-related). How do I fill in the table with these NULL values based on the cross product?
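
One common way to get every (g1, g2) combination is to cross join the distinct values of each column and then left join the original table back on; a sketch, with my_table standing in for the real table name:

SELECT g1s.g1, g2s.g2, t.x
FROM       (SELECT DISTINCT g1 FROM my_table) AS g1s
CROSS JOIN (SELECT DISTINCT g2 FROM my_table) AS g2s
LEFT JOIN  my_table AS t
       ON  t.g1 = g1s.g1
      AND  t.g2 = g2s.g2;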

SYSDBA password in Oracle DB

Posted: 27 Mar 2013 11:56 AM PDT

I am connecting to Oracle DB using sysdba:

sqlplus / as sysdba  

What is the default password for the SYSDBA user? I searched the net and found that the default password is change_on_install, but it is not working for me.

I am also interested to know whether Oracle maintains a file or a DB table where it stores the passwords.

SQL Server 2008 Error for Outlook BCM [closed]

Posted: 27 Mar 2013 11:00 AM PDT

I have Outlook Business Contact Manager installed and I had no issues until yesterday. I tried to add a new contact this morning; however, I received the following error:

BCM Error severity levels greater than 18 can only be specified by members of the
sysadmin role, using the WITH LOG option. Uncommittable transaction is detected at
the end of the batch. The transaction is rolled back.

Can anyone help me solve this issue? I have SQL server 2008 installed.

Creating a copy of production database for reporting purposes

Posted: 27 Mar 2013 06:49 PM PDT

We previously ran Crystal Reports straight off our production database. This caused a direct impact on the users using the database. We moved to a new (dedicated) server and a new database and ditched Crystal Reports in favor of SSRS. We also have a dedicated reporting server to which our production database is copied twice a day. We run queries off the reporting server so as not to slow down the production environment.

Is this the best solution to our problem? The backup restorations kick everyone off the reporting server whenever they occur, and management is asking that we step up this process (backup restoration) to once every two hours. However, even just twice a day gives us issues. For example, as I write this, the database is stuck in "Restoring..." and has been for at least 12 hours. So in the meantime, until the DBAs come in (I only write reports), I thought I'd ask: is this the best possible setup we've got going on?

SSIS Package Data Flow Task RollBack on Error

Posted: 27 Mar 2013 07:50 PM PDT

I tried running a package and it failed, but I noticed in the output box (in Visual Studio) that it had managed to write 2000+ rows to a table. I was wondering whether an SSIS package would roll back the transactions in my Data Flow Task if the overall package failed. I found this thread

http://www.sqlservercentral.com/Forums/Topic998096-391-1.aspx#bm998104

which states that in order for the package to rollback DFT changes:

Set the TransactionOption property of the package to "Required"
Set the TransactionOption property of the DFT to "Supported"

I had both the package and DFT TransactionOption properties set to "Supported", so I am guessing that the rows written to the table were not rolled back... Do you all agree that this is the best way to make sure table changes are rolled back in the event the package fails? Will this also roll back File System Tasks, such as file moves on the local file system?

MySQL, Installation process, my.ini

Posted: 27 Mar 2013 04:57 PM PDT

Recently I decided to install Apache + PHP + MySQL separately from each other instead of using one of the packages such as WAMP, XAMPP, Zend Server, etc. So far I have installed the Apache server with the PHP engine. Now, for the MySQL part, I would be happy to get some answers to the questions below:

  • After I unzip the ZIP file downloaded from the MySQL website I have my-default.ini. If I am not wrong, it's basically a template: I should copy the file, rename the copy to my.ini, and put it back in the folder where my-default.ini is.
  • Just for general knowledge, what configuration can I change so that, for example, I can rename my.ini to anotherName.ini and the MySQL server will look for the file named anotherName.ini? (See the example after this list.)
  • Also, I am looking for a good guide about my.ini configuration.
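
On the second point, as far as I know the server will not hunt for an arbitrary file name, but it can be pointed at one explicitly when it is started; the path below is just an example:

mysqld --defaults-file="C:\mysql\anotherName.ini"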

Thank you all and have a nice day.

Convert rows to columns in SSIS. Unpivot?

Posted: 27 Mar 2013 07:59 PM PDT

If I have data like this

"","",WAREHOUSE ID,1,2,3,4,5...  "","",WAREHOUSE CODE,BOS,HOU,LAX,NYC,CHI...  "","",WAREHOUSE NAME,BOSTON,HOUSTON,LOS ANGELES,NEW YORK CITY,CHICAGO...  

But want to transform it like this

WAREHOUSE ID, WAREHOUSE CODE, WAREHOUSE NAME
1,BOS,BOSTON
2,HOU,HOUSTON
3,LAX,LOS ANGELES
...

If, as above, [Column 2] is the column that contains what should be the header row, and Columns 3 through 40 contain what should be row values, which columns get selected and set to pass through for the UNPIVOT to perform the desired transposition?

Or is UNPIVOT not even what I'm looking for?

Import batches of rows from Excel in parallel

Posted: 27 Mar 2013 08:02 PM PDT

We receive an Excel file weekly currently containing about 250k rows. The file grows by about 300 rows each week.

In SSIS I've hooked up an Excel source to an OLE DB destination to import the file but it takes a long time.

I've looked at using the SQL Command to specify how to select part of the sheet, but I can't see how to turn this into a parallel operation.

Is it possible in SSIS to split the Excel file (by rows) and import it in parallel?

How do I list all databases and tables with psql command line tool?

Posted: 27 Mar 2013 11:32 AM PDT

I am trying to learn PostgreSQL administration and have started learning the psql command line tool.

When I have logged in with psql --username=postgres, how do I list all databases and tables?

I have tried with \d, d and dS+ but nothing is listed. I have created two databases and a few tables with pgAdmin III, so they should be listed.
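
For reference, the psql meta-commands I believe are relevant here (what each does is given in parentheses):

\l       (list all databases)
\c mydb  (connect to the database mydb)
\dt      (list the tables in the current database)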
