Wednesday, March 27, 2013

[how to] Providing development access to the database [duplicate]


Providing development access to the database [duplicate]

Posted: 27 Mar 2013 06:56 PM PDT

This question is an exact duplicate of:

I am beginning a project that will involve inserts and updates to a SQL Server database from a website. I maintain the SQL Server database and am providing access to it.

My plan is to create a user with read/write access and provide the outsourced developers with a connection string that has access to views I create. Please let me know if this is outside normal development expectations, or if anyone has recommendations.

Providing access to the SQL server database for outside developers

Posted: 27 Mar 2013 08:52 PM PDT

I am beginning a project that will involve inserts and updates to a SQL Server database from a website. I maintain the SQL Server database and am providing access to it.

My plan is to create a user with read/write access and provide the outsourced developers with a connection string that has access to views I create. Please let me know if this is outside normal development expectations, or if anyone has recommendations.

Thank you

No network ACL, yet my user can successfully connect to external server

Posted: 27 Mar 2013 05:57 PM PDT

This one's had both me and our dba scratching our heads.

We have an Oracle 11.1.0.7.0 instance, DEV, with a user who can successfully connect to our ldap server, e.g.:

DECLARE
  l_session DBMS_LDAP.session;
BEGIN
  l_session := DBMS_LDAP.init('ldap.host.bla.bla', 389);
END;
/

PL/SQL procedure successfully completed

We have another Oracle instance, 11.2.0.3.0 called UAT, with the same user, same set up, but when they run the above code, it raises an exception from DBMS_LDAP:

ORA-24247: network access denied by access control list (ACL)  

In both instances, if I run the following queries, no rows are returned:

select * from dba_network_acls;

select * from dba_network_acl_privileges;

To fix this problem in UAT, we've created a suitable network ACL using DBMS_NETWORK_ACL_ADMIN.create_acl. Now, UAT works fine.
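For anyone hitting the same error, the ACL we created was along these lines (a sketch only; the ACL file name, principal, and port here are placeholders rather than our exact values):

```sql
BEGIN
  -- Create an ACL granting the connect privilege to the application user
  DBMS_NETWORK_ACL_ADMIN.create_acl(
    acl         => 'ldap_acl.xml',
    description => 'Allow LDAP access',
    principal   => 'APP_USER',
    is_grant    => TRUE,
    privilege   => 'connect');

  -- Bind the ACL to the LDAP host and port
  DBMS_NETWORK_ACL_ADMIN.assign_acl(
    acl        => 'ldap_acl.xml',
    host       => 'ldap.host.bla.bla',
    lower_port => 389,
    upper_port => 389);

  COMMIT;
END;
/
```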

What's bothering us is that this worked fine in DEV all along, with apparently no network ACLs created. Our only theory is a bug in the version in DEV (11.1.0.7.0) but I hesitate to just leap on that as a conclusion.

We will soon be upgrading DEV to 11.2 to be consistent with the other environments, but before then, is there anything else I can check to see what difference there is between the two environments that might cause this behaviour?

(Background for those unfamiliar with ACLs: http://www.oracle-base.com/articles/11g/fine-grained-access-to-network-services-11gr1.php)

Oracle: Non key-preserved table should be

Posted: 27 Mar 2013 05:03 PM PDT

I'm getting "ORA-01779: cannot modify a column which maps to a non key-preserved table" when I try to update a join. I've searched around the site and found a lot of advice on what key-preserved means and why it is necessary... but as near as I can tell I'm complying with that advice, and still getting the error.

I have two tables:

PG_LABLOCATION has, among other things, the columns:

  "LABLOCID" NUMBER,
  "DNSNAME" VARCHAR2(200 BYTE)

LABLOCID is the primary key; DNSNAME has a unique constraint.

PG_MACHINE has, among other things, the columns:

  "MACHINEID" NUMBER,
  "LABLOCID" NUMBER,
  "IN_USE" NUMBER(1,0) DEFAULT 0,
  "UPDATE_TIME" TIMESTAMP (6) DEFAULT '01-JAN-1970'

MACHINEID is the primary key; LABLOCID is a foreign key referencing PG_LABLOCATION.LABLOCID (its primary key).

The update I'm running is:

update
  (select mac.in_use, mac.update_time
     from pg_machine mac
     inner join pg_lablocation loc
       on mac.lablocid = loc.lablocid
    where loc.dnsname = 'value'
      and '02-JAN-2013' > mac.update_time)
set in_use = 1 - MOD(101, 2),
    update_time = '02-JAN-2013';

I'm only updating values in one table (PG_MACHINE) and the join column in the other table is the primary key, which should make it key-preserved by my reading. I'm concerned that the where clause is causing the problem, but I tried removing the filter on mac.update_time and got the same error, and loc.dnsname has a unique constraint.
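In case it is useful to others, one workaround (separate from the question of why the two environments behave differently) is to rewrite the statement as a MERGE, which is not subject to the key-preserved restriction. A sketch, keeping the date literals as in the original:

```sql
MERGE INTO pg_machine mac
USING (SELECT lablocid
         FROM pg_lablocation
        WHERE dnsname = 'value') loc
   ON (mac.lablocid = loc.lablocid)
 WHEN MATCHED THEN UPDATE
  SET mac.in_use      = 1 - MOD(101, 2),
      mac.update_time = '02-JAN-2013'
  WHERE '02-JAN-2013' > mac.update_time;
```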

What's even odder is that we have, like many folks, a dev and a prod environment. We did a complete schema and data migration from prod to dev. I've looked them both over and they have identical indexes and constraints. The query works in dev, but generates the above error in prod.

So two questions:

1) Can you see what's wrong with my query?

2) Can you suggest what might be different between my dev and prod environment (e.g. server settings) that could cause this error in one but not the other?

MySQL inserts/deletes slowing down after a while after Start/Reboot

Posted: 27 Mar 2013 05:58 PM PDT

The setup here is: OS: Ubuntu 12.04 (1 core and 1.75 GB of RAM); MySQL version: 5.5.

There are about 8 to 9 tables in the database. On initially starting MySQL the queries are fast, at around 0.05-0.10 seconds; however, after some time the INSERTs and DELETEs slow down to around 1-2 seconds, while the SELECTs stay around 0.05-0.10 seconds.

Then, after rebooting the system, everything reverts to how it was at the start: fast at first, with the INSERTs and DELETEs again slowing down tremendously after a while.

Note: I'm just doing simple inserts and deletes on very few records, and the database starts with empty tables in the beginning.

Any insight?

Edit: Create Table of 1 of the Tables

CREATE TABLE `FeedUps` (
  `post_id` int(10) unsigned DEFAULT NULL,
  `liker` int(10) unsigned DEFAULT NULL,
  UNIQUE KEY `post_id` (`post_id`,`liker`),
  KEY `liker` (`liker`),
  CONSTRAINT `FeedUps_ibfk_1` FOREIGN KEY (`post_id`)
    REFERENCES `Feed` (`post_id`) ON DELETE CASCADE ON UPDATE CASCADE,
  CONSTRAINT `FeedUps_ibfk_2` FOREIGN KEY (`liker`)
    REFERENCES `Users` (`user_id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=latin1
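A few InnoDB settings are worth inspecting here (this is only a hypothesis on my part; the pattern of fast SELECTs with slow writes often points at flush behaviour or an undersized buffer pool, but it could equally be I/O contention on the host):

```sql
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
SHOW VARIABLES LIKE 'innodb_log_file_size';

-- Look at the BUFFER POOL AND MEMORY and LOG sections for pressure
SHOW ENGINE INNODB STATUS;
```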

SQL Server 2012, rebuild not lowering avg fragmentation

Posted: 27 Mar 2013 05:03 PM PDT

I have a script identifying what indices to rebuild.

select
    'alter index ' + name + ' on ' + @dbname + '.dbo.' + OBJECT_NAME(a.object_id) + ' rebuild;'
from sys.dm_db_index_physical_stats (DB_ID(@dbname), NULL, NULL, NULL, NULL) AS a
inner join sys.indexes AS b
    on a.object_id = b.object_id and a.index_id = b.index_id
where avg_fragmentation_in_percent > 30

which generates, for example:

alter index FooIndex on FooDb.dbo.FooTable rebuild;  

But after I execute the alter statement the index still has a high fragmentation value, and if I execute it again it doesn't get any lower (around 50%). Any input on what could be wrong would be highly appreciated.
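One adjustment worth trying: filter the script on page count as well, since fragmentation figures on very small indexes (a few pages) often cannot be lowered by a rebuild at all and are generally safe to ignore. A sketch of the adjusted query; the 1000-page threshold is a common rule of thumb, not a hard rule:

```sql
select
    'alter index ' + b.name + ' on ' + @dbname + '.dbo.' + OBJECT_NAME(a.object_id) + ' rebuild;'
from sys.dm_db_index_physical_stats (DB_ID(@dbname), NULL, NULL, NULL, NULL) AS a
inner join sys.indexes AS b
    on a.object_id = b.object_id and a.index_id = b.index_id
where a.avg_fragmentation_in_percent > 30
  and a.page_count > 1000  -- small indexes rarely defragment meaningfully
```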

UPDATED: I manually increased the size of the DB which managed to lower the fragmentation on some of the indices, but still not all. Still have a couple around 50%.

//Daniel

Newly discovered SQL Server Express has all databases inaccessible/offline

Posted: 27 Mar 2013 03:39 PM PDT

We have several remote sites at my company, and today one of the employees came to me and said their SQL reporting tool had stopped working. Up until this point I did not even know this server, database, or reporting service existed!

I have RDP access to the server and can open SQL Server Management Studio 2008 R2 and the databases appear to be SQL Server Express. Currently I can login to the instance with my domain admin account but every database gives me the following message when I try to expand it:

(screenshot of the error message)

The webpage they go to for the reports gives them the following error:

•   An error has occurred during report processing. (rsProcessingAborted)
    o   Cannot create a connection to data source 'DNPDataSource'. (rsErrorOpeningConnection)
        •   For more information about this error navigate to the report server on the local server machine, or enable remote errors

When I try to view the SQL Server Error log I get this error:

(screenshot of the error)

I do not have the sa password. I'm not a DBA but need to try and figure this one out, can anyone point me in a direction to start troubleshooting this? I'm completely lost.
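One standard recovery path, in case it helps (a sketch; it assumes you can restart the service and that WCC\baadmin, the account appearing in the error log, is the one you want to elevate): start the instance in single-user mode with the -m startup flag, connect from a local Administrator session, and grant yourself sysadmin:

```sql
-- Run while the instance is started with -m (single-user mode),
-- connected as a member of the local Administrators group.
-- Substitute your own login for the one taken from the error log.
EXEC sp_addsrvrolemember 'WCC\baadmin', 'sysadmin';

-- Optionally reset sa while you are there:
ALTER LOGIN sa WITH PASSWORD = 'NewStrongPassword!1';
```

After restarting normally, you can then investigate the offline databases with full permissions.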


Here is the ERRORLOG:

2013-03-27 13:14:24.34 Server      Microsoft SQL Server 2008 R2 (SP1) - 10.50.2550.0 (X64)  Jun 11 2012 16:41:53  Copyright (c) Microsoft Corporation  Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (Hypervisor)
2013-03-27 13:14:24.34 Server      (c) Microsoft Corporation.
2013-03-27 13:14:24.34 Server      All rights reserved.
2013-03-27 13:14:24.34 Server      Server process ID is 9040.
2013-03-27 13:14:24.34 Server      System Manufacturer: 'Intel Corporation', System Model: 'S5520UR'.
2013-03-27 13:14:24.34 Server      Authentication mode is MIXED.
2013-03-27 13:14:24.34 Server      Logging SQL Server messages in file 'c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\Log\ERRORLOG'.
2013-03-27 13:14:24.34 Server      This instance of SQL Server last reported using a process ID of 2428 at 3/27/2013 1:14:02 PM (local) 3/27/2013 7:14:02 PM (UTC). This is an informational message only; no user action is required.
2013-03-27 13:14:24.34 Server      Registry startup parameters:
     -d c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\master.mdf
     -e c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\Log\ERRORLOG
     -l c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\mastlog.ldf
2013-03-27 13:14:24.37 Server      SQL Server is starting at normal priority base (=7). This is an informational message only. No user action is required.
2013-03-27 13:14:24.37 Server      Detected 16 CPUs. This is an informational message; no user action is required.
2013-03-27 13:14:24.51 Server      Using dynamic lock allocation. Initial allocation of 2500 Lock blocks and 5000 Lock Owner blocks per node. This is an informational message only. No user action is required.
2013-03-27 13:14:24.51 Server      Lock partitioning is enabled. This is an informational message only. No user action is required.
2013-03-27 13:14:24.56 Server      Node configuration: node 0: CPU mask: 0x00000000000000ff:0 Active CPU mask: 0x00000000000000ff:0. This message provides a description of the NUMA configuration for this computer. This is an informational message only. No user action is required.
2013-03-27 13:14:24.62 spid7s      Starting up database 'master'.
2013-03-27 13:14:24.69 spid7s      2 transactions rolled forward in database 'master' (1). This is an informational message only. No user action is required.
2013-03-27 13:14:24.69 spid7s      0 transactions rolled back in database 'master' (1). This is an informational message only. No user action is required.
2013-03-27 13:14:24.69 spid7s      Recovery is writing a checkpoint in database 'master' (1). This is an informational message only. No user action is required.
2013-03-27 13:14:24.78 spid7s      FILESTREAM: effective level = 0, configured level = 0, file system access share name = 'SQLEXPRESS'.
2013-03-27 13:14:24.83 spid7s      SQL Trace ID 1 was started by login "sa".
2013-03-27 13:14:24.85 spid7s      Starting up database 'mssqlsystemresource'.
2013-03-27 13:14:24.87 spid7s      The resource database build version is 10.50.2500. This is an informational message only. No user action is required.
2013-03-27 13:14:25.09 spid10s     Starting up database 'model'.
2013-03-27 13:14:25.09 spid7s      Server name is 'WCCKEMAPP\SQLEXPRESS'. This is an informational message only. No user action is required.
2013-03-27 13:14:25.21 spid10s     The tail of the log for database model is being rewritten to match the new sector size of 4096 bytes. 2560 bytes at offset 99840 in file c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\modellog.ldf will be written.
2013-03-27 13:14:25.31 spid10s     Clearing tempdb database.
2013-03-27 13:14:25.32 spid13s     A new instance of the full-text filter daemon host process has been successfully started.
2013-03-27 13:14:25.37 spid7s      Starting up database 'msdb'.
2013-03-27 13:14:25.40 Server      A self-generated certificate was successfully loaded for encryption.
2013-03-27 13:14:25.40 Server      Server is listening on [ 'any' <ipv6> 54547].
2013-03-27 13:14:25.40 Server      Server is listening on [ 'any' <ipv4> 54547].
2013-03-27 13:14:25.40 Server      Server local connection provider is ready to accept connection on [ \\.\pipe\SQLLocal\SQLEXPRESS ].
2013-03-27 13:14:25.40 Server      Server named pipe provider is ready to accept connection on [ \\.\pipe\MSSQL$SQLEXPRESS\sql\query ].
2013-03-27 13:14:25.40 Server      Dedicated administrator connection support was not started because it is disabled on this edition of SQL Server. If you want to use a dedicated administrator connection, restart SQL Server using the trace flag 7806. This is an informational message only. No user action is required.
2013-03-27 13:14:25.56 spid10s     Starting up database 'tempdb'.
2013-03-27 13:14:25.60 spid13s     The Service Broker protocol transport is disabled or not configured.
2013-03-27 13:14:25.60 spid13s     The Database Mirroring protocol transport is disabled or not configured.
2013-03-27 13:14:25.61 spid13s     Service Broker manager has started.
2013-03-27 13:14:25.77 spid7s      The tail of the log for database msdb is being rewritten to match the new sector size of 4096 bytes. 2048 bytes at offset 12007424 in file c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\MSDBLog.ldf will be written.
2013-03-27 13:14:25.84 spid7s      Recovery is complete. This is an informational message only. No user action is required.
2013-03-27 13:14:25.90 Logon       Error: 17187, Severity: 16, State: 1.
2013-03-27 13:14:25.90 Logon       SQL Server is not ready to accept new client connections. Wait a few minutes before trying again. If you have access to the error log, look for the informational message that indicates that SQL Server is ready before trying to connect again. [CLIENT: ::1]
2013-03-27 13:14:25.90 Logon       Error: 17187, Severity: 16, State: 1.
2013-03-27 13:14:25.90 Logon       SQL Server is not ready to accept new client connections. Wait a few minutes before trying again. If you have access to the error log, look for the informational message that indicates that SQL Server is ready before trying to connect again. [CLIENT: 172.17.0.210]
2013-03-27 13:14:26.48 Server      The SQL Server Network Interface library successfully registered the Service Principal Name (SPN) [ MSSQLSvc/WCCKemAPP.WCC.LOCAL:SQLEXPRESS ] for the SQL Server service.
2013-03-27 13:14:26.48 Server      The SQL Server Network Interface library successfully registered the Service Principal Name (SPN) [ MSSQLSvc/WCCKemAPP.WCC.LOCAL:54547 ] for the SQL Server service.
2013-03-27 13:14:26.48 Server      SQL Server is now ready for client connections. This is an informational message; no user action is required.
2013-03-27 13:14:31.04 spid51      Starting up database 'ReportServer'.
2013-03-27 13:14:31.37 spid51      Starting up database 'ReportServerTempDB'.
2013-03-27 13:14:31.76 spid51      Starting up database 'ReportServerTempDB'.
2013-03-27 13:14:32.07 spid51      Starting up database 'ReportServerTempDB'.
2013-03-27 13:14:32.24 Logon       Error: 18456, Severity: 14, State: 5.
2013-03-27 13:14:32.24 Logon       Login failed for user 'drily'. Reason: Could not find a login matching the name provided. [CLIENT: 172.17.0.210]
2013-03-27 13:15:12.28 spid55      Starting up database 'DNP'.
2013-03-27 13:15:13.75 spid55      Starting up database 'DSS'.
2013-03-27 13:19:36.62 spid57      Starting up database 'ReportServerTempDB'.
2013-03-27 13:25:31.18 spid53      Starting up database 'ReportServer$SQLExpress'.
2013-03-27 13:25:36.38 spid53      Starting up database 'DSSDL'.
2013-03-27 13:25:38.89 spid53      Starting up database 'DSSUSERDIR'.
2013-03-27 13:25:41.26 spid53      Starting up database 'permissionsAudit'.
2013-03-27 13:25:45.00 spid53      Starting up database 'PMKemmererProduction'.
2013-03-27 13:25:48.05 spid53      Starting up database 'PMKemmererProductionTEST'.
2013-03-27 13:26:01.57 spid54      Attempting to load library 'xpstar.dll' into memory. This is an informational message only. No user action is required.
2013-03-27 13:26:01.58 spid54      Using 'xpstar.dll' version '2009.100.1600' to execute extended stored procedure 'xp_instance_regread'. This is an informational message only; no user action is required.
2013-03-27 13:26:52.10 Logon       Error: 18456, Severity: 14, State: 38.
2013-03-27 13:26:52.10 Logon       Login failed for user 'WCC\baadmin'. Reason: Failed to open the explicitly specified database. [CLIENT: <local machine>]
2013-03-27 13:26:53.37 spid59      Starting up database 'DSSDL'.
2013-03-27 13:26:53.60 spid59      Starting up database 'DSSUSERDIR'.
2013-03-27 13:26:53.92 spid59      Starting up database 'QuietDose'.
2013-03-27 13:26:54.16 spid59      Starting up database 'ReportServer$SQLExpress'.
2013-03-27 13:26:54.36 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.
2013-03-27 13:26:54.66 spid59      Starting up database 'ReportServerTempDB'.
2013-03-27 13:26:54.89 spid59      Starting up database 'STX'.
2013-03-27 13:26:55.57 spid59      Starting up database 'Test'.
2013-03-27 13:26:55.76 spid59      Starting up database 'DSSDL'.
2013-03-27 13:26:55.91 spid59      Starting up database 'DSSUSERDIR'.
2013-03-27 13:26:56.08 spid59      Starting up database 'ReportServer$SQLExpress'.
2013-03-27 13:26:56.31 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.
2013-03-27 13:26:56.52 spid59      Starting up database 'ReportServerTempDB'.
2013-03-27 13:26:56.68 spid59      Starting up database 'STX'.
2013-03-27 13:26:57.24 spid59      Starting up database 'DSSDL'.
2013-03-27 13:26:57.28 spid59      Starting up database 'DSSUSERDIR'.
2013-03-27 13:26:57.45 spid59      Starting up database 'ReportServer$SQLExpress'.
2013-03-27 13:26:57.55 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.
2013-03-27 13:26:57.74 spid59      Starting up database 'ReportServerTempDB'.
2013-03-27 13:26:57.83 spid59      Starting up database 'STX'.
2013-03-27 13:29:36.55 spid54      Starting up database 'ReportServerTempDB'.
2013-03-27 13:39:36.57 spid56      Starting up database 'ReportServerTempDB'.
2013-03-27 13:41:59.55 Logon       Error: 18456, Severity: 14, State: 8.
2013-03-27 13:41:59.55 Logon       Login failed for user 'sa'. Reason: Password did not match that for the login provided. [CLIENT: <local machine>]
2013-03-27 13:44:07.70 Logon       Error: 18456, Severity: 14, State: 8.
2013-03-27 13:44:07.70 Logon       Login failed for user 'sa'. Reason: Password did not match that for the login provided. [CLIENT: <local machine>]
2013-03-27 13:49:36.57 spid53      Starting up database 'ReportServerTempDB'.
2013-03-27 13:59:36.57 spid54      Starting up database 'ReportServerTempDB'.
2013-03-27 14:09:36.56 spid53      Starting up database 'ReportServerTempDB'.
2013-03-27 14:15:50.50 spid54      Starting up database 'DSSDL'.
2013-03-27 14:15:50.75 spid54      Starting up database 'DSSUSERDIR'.
2013-03-27 14:15:51.92 spid54      Starting up database 'ReportServer$SQLExpress'.
2013-03-27 14:15:52.25 spid54      Starting up database 'ReportServer$SQLEXPRESSTempDB'.
2013-03-27 14:15:52.51 spid54      Starting up database 'ReportServerTempDB'.
2013-03-27 14:15:52.70 spid54      Starting up database 'STX'.
2013-03-27 14:18:02.83 spid51      Starting up database 'ReportServer$SQLExpress'.
2013-03-27 14:18:46.58 Logon       Error: 18456, Severity: 14, State: 38.
2013-03-27 14:18:46.58 Logon       Login failed for user 'WCC\baadmin'. Reason: Failed to open the explicitly specified database. [CLIENT: <local machine>]
2013-03-27 14:18:47.49 spid59      Starting up database 'DSSDL'.
2013-03-27 14:18:47.70 spid59      Starting up database 'DSSUSERDIR'.
2013-03-27 14:18:47.92 spid59      Starting up database 'ReportServer$SQLExpress'.
2013-03-27 14:18:48.04 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.
2013-03-27 14:18:48.33 spid59      Starting up database 'ReportServerTempDB'.
2013-03-27 14:18:48.53 spid59      Starting up database 'STX'.
2013-03-27 14:18:49.12 spid59      Starting up database 'DSSDL'.
2013-03-27 14:18:49.33 spid59      Starting up database 'DSSUSERDIR'.
2013-03-27 14:18:49.44 spid59      Starting up database 'ReportServer$SQLExpress'.
2013-03-27 14:18:49.60 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.
2013-03-27 14:18:49.84 spid59      Starting up database 'ReportServerTempDB'.
2013-03-27 14:18:49.98 spid59      Starting up database 'STX'.
2013-03-27 14:18:50.28 spid59      Starting up database 'DSSDL'.
2013-03-27 14:18:50.39 spid59      Starting up database 'DSSUSERDIR'.
2013-03-27 14:18:50.48 spid59      Starting up database 'ReportServer$SQLExpress'.
2013-03-27 14:18:50.53 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.
2013-03-27 14:18:50.66 spid59      Starting up database 'ReportServerTempDB'.
2013-03-27 14:18:50.73 spid59      Starting up database 'STX'.
2013-03-27 14:19:36.54 spid59      Starting up database 'ReportServerTempDB'.
2013-03-27 14:19:36.93 spid59      Starting up database 'ReportServerTempDB'.
2013-03-27 14:29:36.55 spid53      Starting up database 'ReportServerTempDB'.
2013-03-27 14:39:36.57 spid54      Starting up database 'ReportServerTempDB'.
2013-03-27 14:49:36.57 spid53      Starting up database 'ReportServerTempDB'.
2013-03-27 14:59:36.58 spid54      Starting up database 'ReportServerTempDB'.
2013-03-27 15:09:36.60 spid53      Starting up database 'ReportServerTempDB'.

I can provide older logs too if it will be beneficial. I'm reading those now from here.

Results from query select name, state_desc, is_in_standby, is_cleanly_shutdown from sys.databases;

(screenshot of the query results)


Image for Kin's request

(screenshot)

What is the algorithmic complexity of a PostgreSQL greater-than or less-than index query (compared to equality)?

Posted: 27 Mar 2013 02:10 PM PDT

Assuming there is an index idx_int on an int_col column, what is the algorithmic complexity of a query like this?

SELECT id
  FROM table
 WHERE table.int_col > 1;

I'm specifically interested in knowing whether the query is significantly less efficient than an equality clause (which, if I understand correctly, is O(log N)). I am pretty sure both can use a B-tree, so I would expect them to be about the same in terms of complexity and efficiency.
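One way to check this empirically (a sketch; it assumes the planner actually chooses idx_int) is to compare the plans. Both should show a scan of the same B-tree index, with the range query additionally walking the M matching leaf entries, i.e. roughly O(log N + M) versus O(log N) for a unique equality lookup:

```sql
EXPLAIN ANALYZE SELECT id FROM table WHERE int_col = 1;
EXPLAIN ANALYZE SELECT id FROM table WHERE int_col > 1;
```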

Thanks very much!

How to get an image from MySQL on a JSP using Struts 2 [closed]

Posted: 27 Mar 2013 01:21 PM PDT

my class .... extends ....

{

// I retrieve the image from the DB but want to show it on the JSP using Struts 2

Blob image = null;

Statement st8 = con.createStatement();
ResultSet rs8 = st8.executeQuery("Select image from pic where CID=1111;");
while (rs8.next())
{
    image = (Blob) rs8.getBlob("image");
}

but it does not display on the JSP using the param value.

Using MongoDB and PostgreSQL together

Posted: 27 Mar 2013 01:14 PM PDT

My current project is essentially a run-of-the-mill document management system.

That said, there are some wrinkles (surprise, surprise). While some of the wrinkles are fairly specific to the project, I believe there are some general observations and questions that have come up which don't have a canonical answer (that I could find, anyway) and that are applicable to a wider problem domain. There's a lot here and I'm not sure it's a good fit for the StackExchange Q&A format, but I think it is a) an answerable question and b) non-specific enough that it can benefit the community. Some of my considerations are specific to me, but I think the question could be of use to anyone faced with deciding on SQL vs NoSQL vs both.

The background:

The web app we are building contains data that is clearly relational in nature as well as data that is document-oriented. We would like to have our cake and eat it too.

TL;DR: I think #5 below passes the smell test. Do you? Does anyone have experience with such an integration of SQL and NOSQL in a single application? I tried to list all the possible approaches to this class of problem below. Have I missed a promising alternative?

Complexities:

  • There are many different classes of documents. The requirements already call for dozens of different documents. This number will only ever go up. The best possible case would be one in which we could leverage a simple domain specific language, code generation and a flexible schema so that domain experts could handle the addition of new document classes without the intervention of DBAs or programmers. (Note: already aware we are living out Greenspun's Tenth Rule)
  • The integrity of previous successful writes is a central requirement of the project. The data will be business critical. Full ACID semantics on writes can be sacrificed provided that the things that do get successfully written stay written.
  • The documents are themselves complex. The prototype document in our specific case will require storage of 150+ distinct pieces of data per document instance. The pathological case could be an order of magnitude worse, but certainly not two.
  • A single class of documents is a moving target subject to updates at a later point in time.
  • We like the free stuff we get from Django when we hook it into a relational database. We would like to keep the freebies without having to jump back two Django versions to use the django-nonrel fork. Dumping the ORM entirely is preferable to downgrading to 1.3.

Essentially, it's a mishmash of relational data (your typical web app stuff like users, groups, etc., as well as document metadata that we'll need to be able to slice and dice with complex queries in realtime) and document data (e.g. the hundreds of fields which we have no interest in joining on or querying by - our only use case for the data will be for showing the single document into which it was entered).

I wanted to do a sanity check (if you check my posting history, I'm pretty explicit about the fact that I am not a DBA) on my preferred method as well as enumerate all of the options I've come across for others solving broadly similar problems involving both relational and non-relational data.

Proposed Solutions:

1. One table per document class

Each document class gets its own table, with columns for all metadata and data.

Advantages:

  • The standard SQL data model is in play.
  • Relational data is handled in the best possible way. We'll denormalize later if we need to.
  • Django's built-in admin interface is comfortable with introspecting these tables, and the ORM can live happily with 100% of the data out of the box.

Disadvantages:

  • Maintenance nightmare. Dozens (hundreds?) of tables with (tens of?) thousands of columns.
  • Application-level logic responsible for deciding exactly which table to write to. Making the table name a parameter for a query stinks.
  • Basically all business logic changes will require schema changes.
  • Pathological cases might require striping data for single forms across multiple tables (see: What is the maximum number of columns in a PostgreSQL table?).
  • We would probably need to go find a real, honest-to-God DBA who would no doubt end up hating life and us.

2. EAV modeling

There is just a fields table. Entity-Attribute-Value modeling is already well understood. I've included it for completeness. I don't think any new project being started in 2013 would go with an EAV approach on purpose.

Advantages:

  • Easy to model.

Disadvantages:

  • More difficult to query.
  • DB layer no longer has a straight-forward representation for what constitutes one app-level object.
  • We would lose DB-level constraint checking.
  • Number of rows on one table will grow 100s-1000s of times faster. Likely future pain point, performance-wise.
  • Limited indexing possible.
  • DB schema is nonsensical as far as ORM is concerned. Batteries included web app stuff is preserved but custom data models are going to require custom queries.

3. Use PostgreSQL hstore or json fields

Either of these field types would do the trick for storing schemaless data within the context of a relational DB. The only reason I don't jump to this solution immediately is that it is relatively new (introduced in version 8.4, so not that new), I have zero previous exposure to it, and I am suspicious. It strikes me as wrong for precisely the same reasons I would feel uneasy throwing all my nice, easily normalized data into Mongo, even though Mongo can handle references between documents.

Advantages:

  • We get the benefits of the Django ORM and the built-in auth and session management.
  • Everything stays in one backend that we've previously used on other projects successfully.

Disadvantages:

  • No experience with this, personally.
  • It doesn't look like a very highly used feature. It looks like they get recommended quite a bit to people looking at NOSQL solutions but I don't see a lot of evidence that they are being chosen. This makes me think I must be missing something.
  • All values stored are strings. Lose DB-level constraint checking.
  • The data in the hstore will never be displayed to the user unless they specifically view a document, but the metadata stored in more standard columns will be. We will be beating that metadata up and I worry the rather large hstores we will be creating might come with performance drawbacks.
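For concreteness, the hstore variant of option 3 would look roughly like this (a sketch; the table and key names are illustrative, and CREATE EXTENSION requires PostgreSQL 9.1+, while earlier versions load the contrib script instead):

```sql
CREATE EXTENSION IF NOT EXISTS hstore;

CREATE TABLE documents (
    id         serial PRIMARY KEY,
    doc_class  text NOT NULL,          -- metadata stays in normal columns
    created_at timestamptz NOT NULL DEFAULT now(),
    fields     hstore NOT NULL         -- the 150+ schemaless values live here
);

-- Fetch one value; note everything comes back as text
SELECT fields -> 'field_name' FROM documents WHERE id = 1;

-- A GIN index makes key/containment queries usable if ever needed
CREATE INDEX documents_fields_idx ON documents USING gin (fields);
```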

4. Go full bore document-oriented

Make all the things documents (in the MongoDB sense). Create a single collection of type Document and call it a day. Bring all peripheral data (including data on user accounts, groups, etc) into mongo as well. This solution is obviously better than EAV modeling but it feels wrong to me for the same reason #3 felt wrong - they both feel like using your hammer as a screwdriver too.

Advantages:

  • No need to model data upfront. Have one collection with documents of type Document and call it a day.
  • Known good scaling characteristics, should the collection need to grow to encompass millions or even billions of documents.
  • JSON format (BSON) is intuitive for developers.
  • As I understand it (which is only vaguely at this point), by being paranoid with regard to write-concern level even a single instance can provide pretty strong data safety in the event of anything and everything up to a hard drive crash.

Disadvantages:

  • The ORM is out the window for Django trunk. Freebies that go out the window with it: the auth framework, the sessions framework, the admin interface, surely many other things.
  • Must either use mongo's referencing capabilities (which require multiple queries) or denormalize data. Not only do we lose freebies that we got from Django, we also lose freebies like JOINs we took for granted in PostgreSQL.
  • Data safety. When one reads about MongoDB, it seems there is always at least one person referring to how it will up and lose your data. They never cite a particular occurrence and it might all just be hogwash or just related to the old default fire and forget write-concern but it still worries me. We will of course be utilizing a fairly paranoid backup strategy in any case (if data is corrupted silently that could well be immaterial of course..).

5. PostgreSQL and MongoDB

Relational data goes in the relational database and document data goes in the document-oriented database. The documents table on the relational database contains all of the data we might need to index or slice and dice on as well as a MongoDB ObjectId which we would use when we needed to query for the actual values of the fields on the documents. We wouldn't be able to use the ORM or the built-in admin for the values of the documents themselves but that's not that big of a loss since the whole app is basically an admin interface for the documents and we would have likely had to customize that specific part of the ORM to an unacceptable degree to make it work just the way we need.

Advantages:

  • Each backend does only what it is good at.
  • References between models are preserved without requiring multiple queries.
  • We get to keep the batteries Django gave us as far as users, sessions, etc are concerned.
  • Only need one documents table no matter how many different classes of documents are created.
  • The less often queried document data is strongly separated from the far more often queried metadata.

Disadvantages:

  • Retrieving document data will require 2 sequential queries, first against the SQL DB and then against MongoDB (though this is no worse than if the same data had been stored in Mongo and not denormalized).
  • Writing will no longer be atomic. A write against a single Mongo document is guaranteed to be atomic, and PG obviously can make atomicity guarantees, but ensuring atomicity of writes across both will require application logic, no doubt with a performance and complexity penalty.
  • Two backends = two query languages = two different programs with dissimilar admin requirements = two databases vying for memory.

SSIS File System Task Copy File Could not find part of the path error

Posted: 27 Mar 2013 01:50 PM PDT

I have created an SSIS package that uses a File System Task (copy) that copies a file from one folder to another. The package runs fine when I run it from my local computer, but when I import the package into our SQL Server and create a job to run the package, it fails and I see the following error in the log:

Started:  3:20:01 PM
Error: 2013-03-27 15:20:01.70
   Code: 0xC002F304
   Source: copy template File System Task
   Description: An error occurred with the following error message:
   "Could not find a part of the path
   'P:\IT\PROVIDERS\PROVIDERS AUDIT\File Template\QnxtProviderAuditList_yyyymmdd.xlsx'."
End Error
DTExec: The package execution returned DTSER_FAILURE (1).
Started:  3:20:01 PM
Finished: 3:20:01 PM
Elapsed:  0.452 seconds.
The package execution failed.
The step failed.

The file path in the error is the file that needs to be copied (sourceconnection).

I can copy the file path directly from the error and go to it (via Windows Explorer) on the physical SQL Server machine, so I do not understand why the package is saying it could not find the path.

Does anyone have any idea why this package would run fine on my local machine but fails with the above error when run from the SQL Server? P:\ is a shared drive which the SQL Server has access to so I doubt it's an access/privileges issue.

How does PostgreSQL physically order new records on disk (after a cluster on primary key)?

Posted: 27 Mar 2013 01:42 PM PDT

I need to know how PostgreSQL orders records on disk. In this case, I would like to take advantage of index combination as stated in the docs, which as I understand uses bitmaps to get matching rows and returns them according to their physical location. The table in question has been clustered by its primary key.

As I understand it, PostgreSQL does not automatically maintain the clustered order after a CLUSTER finishes (although it does remember that it clustered according to a certain index). Now, since this is the primary key, I wonder if the physical storage order would still follow it (which, if true, I would like to use to our advantage for a specific query).

In summary, how does PostgreSQL order its new records, especially after clustering?

Thanks very much!
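Not the asker's setup, but a minimal sketch of the behavior in question (table and index names are illustrative):

```sql
-- One-time physical reordering by the primary key index.
-- PostgreSQL records the choice (pg_index.indisclustered) but does NOT
-- maintain the order afterwards.
CLUSTER my_table USING my_table_pkey;

-- Later inserts are simply placed wherever free space is found,
-- so new rows are NOT kept in primary-key order:
INSERT INTO my_table (id, payload) VALUES (42, 'new row');

-- Re-clustering must be done manually and takes an exclusive lock:
CLUSTER my_table;
```

In other words, the physical ordering decays as rows are inserted and updated until the next manual CLUSTER.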

MySQL - Normalization over meaningless data - Compound Primay Keys or Surrogate ID field

Posted: 27 Mar 2013 03:05 PM PDT

Please forgive my title; if you have a suggestion for it, feel free to comment.

I have a database:

DROP TABLE IF EXISTS `books`;

CREATE TABLE `books` (
  `isbn` VARCHAR(255) NOT NULL,
  `title` VARCHAR(255) NULL DEFAULT NULL,
  PRIMARY KEY (`isbn`)
) COMMENT 'Books used at this school';

DROP TABLE IF EXISTS `classes`;

CREATE TABLE `classes` (
  `class_id` INT(10) NOT NULL AUTO_INCREMENT,
  `teacher_id` SMALLINT(5) NULL DEFAULT NULL,
  PRIMARY KEY (`class_id`)
) COMMENT 'Classes at the school';

DROP TABLE IF EXISTS `b_c`;

CREATE TABLE `b_c` (
  `isbn` VARCHAR(255) NOT NULL,
  `class_id` INT(10) NOT NULL,
  PRIMARY KEY (`isbn`)
) COMMENT 'Books to classes';

ALTER TABLE `b_c` ADD FOREIGN KEY (isbn) REFERENCES `books` (`isbn`) ON UPDATE CASCADE;
ALTER TABLE `b_c` ADD FOREIGN KEY (class_id) REFERENCES `classes` (`class_id`) ON UPDATE CASCADE;

The issue I'm having is that I would like to normalize data as much as possible (I don't want multiple entries for the same relationship to be entered into the table b_c), but I would like to only store what data is absolutely pertinent.

My first idea to deal with this is to just create a compound primary key for the b_c table consisting of the fields isbn and class_id, which would solve the issue of having duplicate relationships in the table. However, I have heard strong opinions on having a unique identifier for every row in a table like this. The justification for having a unique identifier for every row seems to be that it's useful to be able to specify a specific row, though I don't see a situation in which this would become useful. Can someone offer an example?

Another criticism I've heard is that using compound PKs in this way can make JOINs extremely taxing. Can someone comment on the performance of these two different methods?

The question boils down to: "Is it worth it to add an id field to the b_c table, or is the use of compound PKs enough to properly represent the relationship between the books and classes tables?"
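For what it's worth, a sketch of the compound-key version of the junction table (note that the definition above has PRIMARY KEY (`isbn`) alone, which would limit each book to a single class):

```sql
CREATE TABLE `b_c` (
  `isbn` VARCHAR(255) NOT NULL,
  `class_id` INT(10) NOT NULL,
  PRIMARY KEY (`isbn`, `class_id`),   -- one row per book/class pair
  KEY (`class_id`)                    -- supports the FK and class-to-books lookups
) COMMENT 'Books to classes';
```

With InnoDB the composite primary key doubles as the uniqueness constraint, so duplicate relationships are rejected without needing a surrogate id column.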

If you have any other comments about the design not directly pertaining to the question, I would love to hear them, and thank you in advance for your help.

Record with variable number of columns?

Posted: 27 Mar 2013 12:35 PM PDT

I am trying to model a cable mapping for devices. Each device has multiple cards, and each of those cards has multiple ports. Since both the number of cards and the number of ports vary, I am confused about how to establish the correct normalized form and relationships so that one device record can have an arbitrary number of cards and those cards an arbitrary number of ports. Any thoughts?
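A sketch of one normalized shape for this (all table and column names are made up for illustration): each card row points at its device, and each port row points at its card, so the counts per device and per card are unconstrained.

```sql
CREATE TABLE device (
  device_id INT PRIMARY KEY,
  name      VARCHAR(100) NOT NULL
);

CREATE TABLE card (
  card_id   INT PRIMARY KEY,
  device_id INT NOT NULL REFERENCES device (device_id),
  slot      INT NOT NULL,
  UNIQUE (device_id, slot)          -- one card per slot per device
);

CREATE TABLE port (
  port_id   INT PRIMARY KEY,
  card_id   INT NOT NULL REFERENCES card (card_id),
  port_no   INT NOT NULL,
  UNIQUE (card_id, port_no)         -- one row per physical port
);

-- A cable map is then a relationship between two ports:
CREATE TABLE cable (
  cable_id  INT PRIMARY KEY,
  port_a    INT NOT NULL REFERENCES port (port_id),
  port_b    INT NOT NULL REFERENCES port (port_id)
);
```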

SQL Server 2008 R2 SP1 - Tempdb “Available free space” showing negative value [duplicate]

Posted: 27 Mar 2013 11:56 AM PDT

Please help as this issue is driving me nuts...

Tempdb has four data files & one log file. They are all located on the same dedicated drive.

When I right click on tempdb in SQL Server Management Studio (SSMS) to view Properties --> General...I'm seeing "Size" as ~65000 MB with "Space Available" as ~64000 MB. However, when I try to shrink the files via the GUI, I see that the "currently allocated space" hasn't changed & is still showing the initial size.

This is also where the "Available free space" is shown as a negative number. The only workaround I have found so far is to restart SQL Server to flush out tempdb, but I can't keep doing this.

Has anyone ever come across this & know the root cause/permanent solution for it please?

Thanks in advance!

SQL Server 2008 R2 SP1 - Tempdb "Available free space" showing negative value

Posted: 27 Mar 2013 05:21 PM PDT

Please help as this issue is driving me nuts...

Tempdb has four data files & one log file. They are all located on the same dedicated drive.

When I right click on tempdb in SQL Server Management Studio (SSMS) to view Properties --> General...I'm seeing "Size" as ~65000 MB with "Space Available" as ~64000 MB. However, when I try to shrink the files via the GUI, I see that the "currently allocated space" hasn't changed & is still showing the initial size.

This is also where the "Available free space" is shown as a negative number. The only workaround I have found so far is to restart SQL Server to flush out tempdb, but I can't keep doing this.

Has anyone ever come across this & know the root cause/permanent solution for it please?

Thanks in advance!

How this simple Stored Procedure ended up in Deadlock

Posted: 27 Mar 2013 01:07 PM PDT

Thanks to all; this forum is really helpful.

After a few minutes of the system running we get "Try restarting the transaction," and it ends up in a deadlock.

We use MySQL 5.5.28, and the Java code calling this stored procedure is in a synchronized block. However, we are still getting this deadlock. When I ran show engine innodb status;, I saw the following query in the deadlock:

INSERT INTO Site_Msg_Tag (Site_Msg_Tag_Id,Unit_Id,Log_Timestamp,Interval_Id,Digest_Matched)
VALUES (NULL,siteId,logTimestamp,IntervalId,digestMatched);

Please refer below for entire Stored Procedure and the table it's updating:

CREATE PROCEDURE sp_CreateCtrMsgTagIfNotExist (
  siteId INT,
  logTimestamp DATETIME,
  IntervalId INT,
  digestMatched TINYINT(1),
  OUT returnpk INT
)
BEGIN
   SET returnpk = 0;
   START TRANSACTION;

   SELECT Site_Msg_Tag_Id INTO returnpk FROM Site_Msg_Tag
   WHERE Unit_Id = siteId AND Log_Timestamp = logTimestamp AND Interval_Id = IntervalId;

   IF (returnpk IS NULL OR returnpk = 0)
   THEN /*1*/
      INSERT INTO Site_Msg_Tag (Site_Msg_Tag_Id,Unit_Id,Log_Timestamp,Interval_Id,Digest_Matched)
      VALUES (NULL,siteId,logTimestamp,IntervalId,digestMatched);

      SELECT LAST_INSERT_ID() INTO returnpk;
   END IF; /*1*/

   COMMIT;
END |

DELIMITER ;

Table :

CREATE TABLE `Site_Msg_Tag` (
  `Site_Msg_Tag_Id` int(11) NOT NULL AUTO_INCREMENT,
  `Unit_Id` int(11) NOT NULL,
  `Log_Timestamp` datetime NOT NULL,
  `Interval_Id` int(11) NOT NULL,
  `Digest_Matched` tinyint(1) DEFAULT '0',
  PRIMARY KEY (`Site_Msg_Tag_Id`),
  UNIQUE KEY `ix_Site_Msg_Tag_Ts_Unit` (`Unit_Id`,`Log_Timestamp`,`Interval_Id`),
  UNIQUE KEY `ix_Site_Msg_Tag_Unit_Ts` (`Log_Timestamp`,`Unit_Id`,`Interval_Id`),
  KEY `FK_Site_Msg_Tag_Interval_Id` (`Interval_Id`),
  CONSTRAINT `FK_Site_Msg_Tag_Interval_Id` FOREIGN KEY (`Interval_Id`) REFERENCES `Interval` (`Interval_Id`),
  CONSTRAINT `FK_Site_Msg_Tag_Unit_Id` FOREIGN KEY (`Unit_Id`) REFERENCES `Unit` (`Unit_Id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

I saw in this thread, "Will these two queries result in a deadlock if executed in sequence?", that they solved the issue by writing the query in a more efficient way. Any suggestions? Thanks in advance.
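One commonly suggested way to sidestep the select-then-insert race entirely (a sketch only, not tested against the asker's schema) is to lean on the existing unique key and do the whole thing in a single statement:

```sql
-- Relies on UNIQUE KEY (Unit_Id, Log_Timestamp, Interval_Id).
-- On a duplicate, LAST_INSERT_ID(expr) sets the session's
-- LAST_INSERT_ID to the existing row's id instead of inserting.
INSERT INTO Site_Msg_Tag (Unit_Id, Log_Timestamp, Interval_Id, Digest_Matched)
VALUES (siteId, logTimestamp, IntervalId, digestMatched)
ON DUPLICATE KEY UPDATE
    Site_Msg_Tag_Id = LAST_INSERT_ID(Site_Msg_Tag_Id);

SELECT LAST_INSERT_ID() INTO returnpk;
```

This removes the window between the SELECT and the INSERT in which two sessions can take conflicting gap locks.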

ISO Week vs SQL Server Week

Posted: 27 Mar 2013 03:11 PM PDT

Okay, so I have a report that does a this-week vs. last-week comparison, and our customer noticed that their data was "funky". Upon further investigation we found it was not computing weeks correctly according to the ISO standard. I ran this script as a test case.

SET DATEFIRST 1
SELECT DATEPART(WEEK, '3/26/13')
    , DATEPART(WEEK, '3/27/12')
    , DATEPART(WEEK, '3/20/12')
    , DATEPART(WEEK, '1/2/12')
SELECT DATEPART(ISO_WEEK, '3/26/13')
    , DATEPART(ISO_WEEK, '3/27/12')
    , DATEPART(ISO_WEEK, '3/20/12')
    , DATEPART(ISO_WEEK, '1/2/12')

When run I got these results.

ResultSet

I thought this was peculiar, so I did some more digging and found that SQL Server counts the week containing January 1st as the first week of the year, whereas ISO counts the week containing the first Thursday of January as the first week of the year.

The question then ends up being twofold. Question 1: why is this? Question 2: is there any way to change this so I'm not having to modify all of my code to use ISO_WEEK everywhere?
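As far as I know there is no server-level setting that makes DATEPART(WEEK, ...) follow the ISO rules; the usual workaround is to centralize the ISO_WEEK call in one place, for example (a sketch; the function name is made up):

```sql
CREATE FUNCTION dbo.IsoWeek (@d DATE)
RETURNS INT
AS
BEGIN
    -- ISO 8601 week number; unlike WEEK, this is independent of DATEFIRST
    RETURN DATEPART(ISO_WEEK, @d);
END
```

Reports then only need to swap in the function name rather than repeat the DATEPART logic everywhere.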

MySQL subqueries that use range based on values of main queries don't use indices properly

Posted: 27 Mar 2013 06:20 PM PDT

I think I've isolated a problem that has been affecting many of my queries lately. And would like some help to figure out a solution for this.

OK, so my finding is that a query which runs very fast on its own, touching only a couple of rows, can use indexes improperly when it appears as a subquery whose range is based on values from the main query.

Let's take an example:

DROP TEMPORARY TABLE IF EXISTS Person;
DROP TEMPORARY TABLE IF EXISTS CofeeBreaks;

CREATE TEMPORARY TABLE IF NOT EXISTS Person
(
    `person_id` INT(11) AUTO_INCREMENT,
    `age` INT,
    `lastCofee` DATETIME,
    KEY `idkey` (`person_id`) USING BTREE,
    KEY `datekey` (`lastCofee`) USING BTREE
) ENGINE = MEMORY;

CREATE TEMPORARY TABLE IF NOT EXISTS CofeeBreaks
(
    `id` INT(11) AUTO_INCREMENT,
    `cofeeBreakStart` DATETIME,
    `cofeeBreakEnd` DATETIME,
    KEY `brekIdKey`(`id`) USING BTREE
) ENGINE = MEMORY;

INSERT INTO Person (age, lastCofee) VALUES (24, '2013-03-27 14:45:34');
INSERT INTO Person (age, lastCofee) VALUES (34, '2013-03-27 14:46:38');
INSERT INTO Person (age, lastCofee) VALUES (26, '2013-03-27 15:25:24');
INSERT INTO Person (age, lastCofee) VALUES (28, '2013-03-27 16:33:54');
INSERT INTO Person (age, lastCofee) VALUES (46, '2013-03-27 17:11:03');

INSERT INTO CofeeBreaks (cofeeBreakStart, cofeeBreakEnd) VALUES ('2013-03-27 15:11:03', '2013-03-27 17:25:24');
INSERT INTO CofeeBreaks (cofeeBreakStart, cofeeBreakEnd) VALUES ('2013-03-27 14:45:34', '2013-03-27 15:25:24');
INSERT INTO CofeeBreaks (cofeeBreakStart, cofeeBreakEnd) VALUES ('2013-03-27 17:11:03', '2013-03-27 17:11:03');

SELECT * FROM Person
WHERE lastCofee BETWEEN '2013-03-27 15:11:03' AND '2013-03-27 17:11:03';

SELECT
    *,
    (SELECT AVG(Person.age) FROM Person
     WHERE Person.lastCofee BETWEEN CofeeBreaks.cofeeBreakStart AND CofeeBreaks.cofeeBreakEnd) AS averageAge
FROM CofeeBreaks;

So the explain results for the first select are as follow:

1   SIMPLE  Person  range   datekey datekey 9       1   Using where  

But the second query doesn't use the index properly in the subquery and analyses more rows than necessary:

id  select_type         table        type   possible_keys  key  key_len  ref  rows
1   PRIMARY             CofeeBreaks  ALL                                      3
2   DEPENDENT SUBQUERY  Person       ALL    datekey                           5

As we can see, the subquery needs to analyse all rows in the Person table, even though none of the CofeeBreaks ranges surrounds all of the 5 persons.

The way I've been fixing the performance issues in a very busy database is by doing a select into a temporary table and then looping over the rows of that table, updating it with the aggregates that I need. This is obviously ugly code and shouldn't be built that way. But I really haven't found a way to optimize this kind of subquery, and I've tried a lot of possible approaches without success.

Thanks in advance for any inputs.
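One rewrite that avoids the dependent subquery (a sketch only, not tested against the asker's data) is to turn it into a join plus GROUP BY, which gives the optimizer a chance to drive the range lookup from the index:

```sql
SELECT cb.id, cb.cofeeBreakStart, cb.cofeeBreakEnd,
       AVG(p.age) AS averageAge
FROM CofeeBreaks AS cb
LEFT JOIN Person AS p
       ON p.lastCofee BETWEEN cb.cofeeBreakStart AND cb.cofeeBreakEnd
GROUP BY cb.id, cb.cofeeBreakStart, cb.cofeeBreakEnd;
```

Whether this actually uses `datekey` depends on the optimizer version and the table sizes, so it is worth verifying with EXPLAIN.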

psql.exe crash when issuing \connect

Posted: 27 Mar 2013 10:46 AM PDT

I'm trying to use psql 8.4.16 from a windows 7 32bit PC, to connect to a local database.

When I open a command line console I run "psql.exe -U myuser" to open the interactive console.

The next command is "\connect mydb", and it just crashes, showing the typical Windows message "psql - the PostgreSQL interactive terminal has stopped working", which gives me two options: find a solution online or close the program.

Any hint?

Edit: I've found the problem; I was mixing psql.exe with some PostgreSQL DLLs from previous versions.

Expand sparse table with self outer join on distinct values

Posted: 27 Mar 2013 06:03 PM PDT

I have this:

g1       | g2        | x
---------+-----------+-----
New York | Monday    | 210
New York | Tuesday   | 258
Chicago  | Monday    | 30
Chicago  | Wednesday | 25
LA       | Thursday  | 40

But I want this:

g1       | g2        | x
---------+-----------+-----
New York | Monday    | 210
New York | Tuesday   | 258
New York | Wednesday | NULL
New York | Thursday  | NULL
Chicago  | Monday    | 30
Chicago  | Tuesday   | NULL
Chicago  | Wednesday | 25
Chicago  | Thursday  | NULL
LA       | Monday    | NULL
LA       | Tuesday   | NULL
LA       | Wednesday | NULL
LA       | Thursday  | 40

In other words, I want a record for every combination of DISTINCT g1 and DISTINCT g2. I should note that g1 and g2 are arbitrary groupings (not always location- or date-related). How do I fill in the table with these NULL values based on the cross product?
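A common pattern for this (a sketch; `t` stands in for the real table name) is to build the full cross product of the distinct values and left-join the sparse data back onto it:

```sql
SELECT a.g1, b.g2, t.x
FROM       (SELECT DISTINCT g1 FROM t) AS a
CROSS JOIN (SELECT DISTINCT g2 FROM t) AS b
LEFT JOIN  t ON t.g1 = a.g1 AND t.g2 = b.g2
ORDER BY a.g1, b.g2;
```

Combinations with no underlying row come back with x = NULL, as in the desired output.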

SYSDBA password in Oracle DB

Posted: 27 Mar 2013 11:56 AM PDT

I am connecting to Oracle DB using sysdba:

sqlplus / as sysdba  

What is the default password for the sysdba user? I searched the net and found that the default password is change_on_install, but it is not working for me.

I am interested to know if Oracle DB maintains some file or DB table where it stores the passwords.

SQL Server 2008 Error for Outlook BCM [closed]

Posted: 27 Mar 2013 11:00 AM PDT

I have Outlook Business Contact Manager installed and had no issues until yesterday. I tried to add a new contact this morning; however, I received the following error:

BCM Error severity levels greater than 18 can only be specified by members of the   sysadmin role, using the WITH LOG option. Uncommittable transaction is detected at  the end of the batch. The transaction is rolled back.  

Can anyone help me solve this issue? I have SQL server 2008 installed.

Creating a copy of production database for reporting purposes

Posted: 27 Mar 2013 06:49 PM PDT

We previously ran Crystal Reports straight off our production database. This caused a direct impact to the users using the database. We moved to a new (dedicated) server and a new database and ditched Crystal Reports in favor of SSRS. We also have a dedicated reporting server to which our production database is copied over to twice a day. We run queries off of the reporting server as to not slow down the production environment.

Is this the best solution to our problem? The backup restorations kick everyone off the reporting server whenever they occur, and management is asking that we step up this process (backup restoration) to once every two hours. However, even just twice a day gives us issues. For example, as I write this, the database is stuck in "Restoring..." and has been for at least 12 hours. So while I wait for the DBAs to come in (I only write reports), I thought I'd ask: is this the best possible setup we've got going on?

SSIS Package Data Flow Task RollBack on Error

Posted: 27 Mar 2013 07:50 PM PDT

I tried running a package and it failed, but I noticed in the output box (in Visual Studio) that it managed to write 2000+ rows to a table. I was wondering if an SSIS package would roll back the transactions in my Data Flow Task if the overall package failed. I found this thread:

http://www.sqlservercentral.com/Forums/Topic998096-391-1.aspx#bm998104

which states that in order for the package to rollback DFT changes:

Set the TransactionOption property of the package to "Required".
Set the TransactionOption property of the DFT to "Supported".

I had the TransactionOption property of both the package and the DFT set to "Supported", so I am guessing that the rows written to the table were not rolled back. Do you all agree that this is the best way to make sure table changes are rolled back in the event the package fails? Will this also roll back File System Tasks, such as file moves on the local file system?

MySQL, Installation process, my.ini

Posted: 27 Mar 2013 04:57 PM PDT

Recently I decided to install Apache + PHP + MySQL separately from each other instead of using one of the packages such as WAMP, XAMPP, Zend Server, etc. So far I have installed the Apache server with the PHP engine. Now, for the MySQL part, I would be happy to get some answers to the questions below:

  • After I unzip the zip file downloaded from the MySQL website I have my-default.ini. If I am not wrong, it's basically a template: I should copy the file, rename the copy to my.ini, and put it back in the folder where my-default.ini is.
  • Just for general knowledge: which configuration file can I edit so that, for example, I can rename my.ini to anotherName.ini and the MySQL server will look for a file named anotherName.ini?
  • Also, I am looking for a good guide to my.ini configuration.

Thank you all and have a nice day.

Convert rows to columns in SSIS. Unpivot?

Posted: 27 Mar 2013 07:59 PM PDT

If I have data like this

"","",WAREHOUSE ID,1,2,3,4,5...  "","",WAREHOUSE CODE,BOS,HOU,LAX,NYC,CHI...  "","",WAREHOUSE NAME,BOSTON,HOUSTON,LOS ANGELES,NEW YORK CITY,CHICAGO...  

But want to transform it like this

 WAREHOUSE ID, WAREHOUSE CODE, WAREHOUSE NAME   1,BOS,BOSTON   2,HOU,HOUSTON   3,LAX,LOS ANGELES   ...  

If, as above, [Column 2] is the column that contains what should be the header row, and Columns 3 through 40 contain what should be row values, which columns get selected and set to pass through for the UNPIVOT to perform the desired transposition?

Or is UNPIVOT not even what I'm looking for?

Import batches of rows from Excel in parallel

Posted: 27 Mar 2013 08:02 PM PDT

We receive an Excel file weekly currently containing about 250k rows. The file grows by about 300 rows each week.

In SSIS I've hooked up an Excel source to an OLE DB destination to import the file, but it takes a long time.

I've looked at using the SQL command option to specify how to select part of the sheet, but I can't see how to turn this into a parallel operation.

Is it possible in SSIS to split the Excel file (by rows) and import it in parallel?

How do I list all databases and tables with psql command line tool?

Posted: 27 Mar 2013 11:32 AM PDT

I am trying to learn PostgreSQL administration and have started learning the psql command line tool.

When I have logged in with psql --username=postgres, how do I list all databases and tables?

I have tried with \d, d and dS+ but nothing is listed. I have created two databases and a few tables with pgAdmin III, so they should be listed.
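For reference, a short psql session showing the usual meta-commands (the backslash prefix is required, which may be why nothing was listed):

```
postgres=# \l          -- list all databases
postgres=# \c mydb     -- connect to a database
mydb=# \dt             -- list tables in the current database
mydb=# \d mytable      -- describe one table
```

Note that \dt only shows tables in the database you are currently connected to, so you have to \c into each database first.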

[MS SQL Server] owners_id remains NULL

[MS SQL Server] owners_id remains NULL


owners_id remains NULL

Posted: 27 Mar 2013 02:34 AM PDT

I recently moved my data and log files to different locations on the server box to keep them all in the same location (housekeeping maintenance). The database owner used to be the previous DBA, and when I reattached the databases my domain account became the owner.

I ran a query on sys.databases and the owner_sid has my domain account's SID. However, when I ran

SELECT [name], SUSER_NAME([owner_sid]) AS OWNERS_ID, [owner_sid], * FROM master.[sys].[databases]

my databases, except for model, msdb, and master, have NULL for SUSER_NAME([owner_sid]). I can change the db owner to sa, but I cannot change the owner to another domain account even though I know it exists. These discrepancies are pretty strange, and I don't think there are any orphaned users since the SIDs actually match. Any ideas on what's going on?
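For what it's worth, the usual statements for inspecting and changing a database owner look like this (database and account names are placeholders):

```sql
-- Compare each database's owner SID against the server principals:
SELECT d.name, d.owner_sid, sp.name AS owner_login
FROM sys.databases AS d
LEFT JOIN sys.server_principals AS sp ON sp.sid = d.owner_sid;

-- Set the owner explicitly (the login must exist at the server level):
ALTER AUTHORIZATION ON DATABASE::MyDb TO [DOMAIN\SomeUser];
```

If SUSER_NAME() returns NULL while the join above finds a match, the SID is known to the server; if neither resolves it, the SID may belong to a login that was never created on this instance.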

SA Disabled\Locked

Posted: 26 Mar 2013 10:52 PM PDT

Hi Experts,

We have two situations here:

1. 'SA' is locked and we don't have any login with the sysadmin privilege.
2. SA is disabled and there are no other logins on that server except BUILTIN\Administrator.

Can anyone please help me to log in to the server with admin rights, or to enable/unlock SA?

TIA

Missing Lookup operator?

Posted: 26 Mar 2013 10:30 PM PDT

Hi All

Consider the following tables and their indexes:

create table test1 (Col1 int identity (1,1) primary key clustered, Col2 int, Col3 int, Col4 int)
create nonclustered index ncx on test1 (Col4)
create table test2 (Col1 int identity (1,1) primary key clustered, Col2 int, Col3 int, Col4 int)
create nonclustered index ncx2 on test2 (Col3,Col4)

Consider the following query:

select test2.Col2, test1.Col2 from test1
inner join test2 on test1.Col1 = test2.Col1
where test2.Col1 < 5

The execution plan shows 2 clustered index seeks and a nested loops join. Because there are no indexes on the columns in my select list (test2.Col2, test1.Col2), shouldn't I be seeing lookups? Thanks

connecting sql server 2008 to visual studio 2008

Posted: 26 Mar 2013 08:55 PM PDT

I cannot connect to my server. Whenever I am connecting, this error is displayed:

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (.Net SqlClient Data Provider)

Can anyone please help to solve this? I have been digging through the net for all the possible solutions, with no luck. Thanks in advance.

Cant see SQL Server 2008 R2 on local, no matter what

Posted: 26 Mar 2013 11:22 AM PDT

Hi, no matter what I try, I still can't access the instance of MSSQLSERVER I installed on my machine, and I'm trying to do it through MS SSMS on the local machine.

I see the instance MSSQLSERVER in configuration. I started all services (SQL Server, SQL Browser), enabled TCP (port 1433) (Shared Memory also ENABLED), and restarted all services. Using Windows authentication in SSMS, going to .\MSSQLSERVER, localhost\MSSQLSERVER, t410\MSSQLServer... user name = T410\lenovo, which is confirmed correct... and I am still getting error 87, "A network-related or instance-specific error occurred while connecting to MSSQL server."

I checked the firewall for tcp/1433: http://msdn.microsoft.com/en-us/library/ms175043.aspx

In my SSMS Server Name dropdown, I still have a stored sqlexpress instance, which should be left from my previous installation of Express. But now I'm on 2008 R2, running W7 64. I also have a <Microsoft SQL Server 2008> folder, I think left from the previous install; it has only Configuration Tools. All of the new install is in <Microsoft SQL Server 2008 R2>.

Looks like the only thing left is the firewall? That's what I learned from the web, but I'm local, so that shouldn't be the case either. Is there any more diagnostics I can run? Not sure about isql -L.

Appreciate all help. How do I make it work? Thanks, V

Cannot create a database...Keep getting error

Posted: 26 Mar 2013 06:44 AM PDT

Hey guys,

Ran into a problem I have never seen before... I am logged on to a SQL Server box as sa with sa rights. I go to create a database, give it a name, the drives are already populated, and when I click OK, I get the following error:

CREATE FILE encountered an operating system error 5 (Access denied) while attempting to open or create the file blah blah blah...

What is this error? I even went to the drive where it will be created and created a txt file on that drive, so:

  • I am logged in as sa on SQL Server
  • I can create a file on the drive

Why is it not letting me create a database?

[Articles] What Do You Want to Know About I/O?

[Articles] What Do You Want to Know About I/O?


What Do You Want to Know About I/O?

Posted: 26 Mar 2013 11:00 PM PDT

Steve Jones is looking forward to learning more about I/O. There is a panel of experts, taking questions at SQL Intersection in a few weeks.

[SQL 2012] Low memory usage and low PLE on 23GB virtual server

[SQL 2012] Low memory usage and low PLE on 23GB virtual server


Low memory usage and low PLE on 23GB virtual server

Posted: 26 Mar 2013 10:24 PM PDT

I am trying to help a customer with a performance problem on a SQL Server that runs several different OLTP systems. The system is continuously handling hundreds of batches per second. CPU load is very low; the disk is 100% busy almost all the time. Users are complaining about longer response times than expected. This is a brand new server, so we have no previous baseline.

This is a virtual server using Hyper-V; the physical server is only used for this virtual server.

  • 16 logical CPUs
  • 28 GB memory
  • 23 GB allocated to SQL Server using minimum and maximum server memory
  • SQL Server 2012, Standard Edition
  • Lock Pages in Memory is NOT enabled

When looking at memory usage on the machine, there does not seem to be any external memory pressure: there is no paging, and there seems to always be physical memory available. When trying to diagnose the problem I found something that I think might be the cause of the heavy disk read activity. I watch the following perfmon counters:

  • SQL Server Memory Manager: Free Memory (MB)
  • SQL Server Memory Manager: Database Cache Memory (MB)
  • SQL Server Buffer Manager: Page Life Expectancy

When the server is loaded I would expect Free Memory to be very low, Database Cache Memory to be close to the maximum, and Page Life Expectancy to be high. What I am seeing instead is that Free Memory is constantly about 16 GB, Database Cache Memory varies between 200 MB and 2 GB, and Page Life Expectancy repeatedly drops to 0, climbs to 100, and drops to 0 again. So there is lots of free memory available, but it is not getting used!

I have used SQL Profiler to look for any SQL commands that have a large number of reads, but I have not found anything significant. There are simply a very large number of small requests.

Do you have any ideas about what could be causing this behavior? Any ideas about how to continue trying to find the cause of the problem?

/SG

Page Life Expectancy performance counter value not changing for a long time

Posted: 26 Mar 2013 07:28 PM PDT

My server is SQL Server 2012 Enterprise Edition. A few days ago I saw that the Page Life Expectancy counter does not change, but it is not "0" either; maybe it changes once a day. I did a full scan of a big table, but it made no difference. The current value is 348,611. What's wrong? Thanks

SSIS 2012 Centralized Error Capture

Posted: 26 Mar 2013 07:41 AM PDT

I'm using SSIS 2012. I'm trying to collect all package errors as a formatted string and communicate them from an OnError event (at the control flow level) as an email. I have found examples here:

http://agilebi.com/jwelch/2008/01/15/handling-multiple-errors-in-ssis-revisited/
http://agilebi.com/jwelch/2007/05/05/handling-multiple-errors-in-ssis/

The only problem is, these examples are in VB.NET, and I haven't found anything like this done in C#. I've tried VB.NET-to-C# conversion tools, but I receive errors when I attempt the conversions. How can I implement these same solutions in C#? The script task I am writing is configured for .NET 4.0. Thanks. csdunn1us

Microsoft Systems Center 2012 SQL Server Requirements - Choosing Between SQL 2008R2 And SQL 2012

Posted: 26 Mar 2013 03:16 AM PDT

While MSSC (Microsoft Systems Center) 2012 supports the use of either SQL 2008 R2 or SQL 2012, there's no info at the System Center 2012 website (that I can find) that details the pros/cons of each version of SQL Server, pointing out why you would select one version over the other and/or what to consider based on which version of SQL you go with.

My company is implementing SC 2012, and I have been asked to decide between SQL 2008 R2 and SQL 2012. I'd like to find out if there are pros/cons to one or the other as they relate specifically to Systems Center 2012. Any System Center 2012 users out there with some thoughts/input/references on deciding which version of SQL Server to go with for SC 2012? There are clearly outlined differences between SQL 2008 R2 and SQL 2012 in general, but what I'm looking for is anything that is specific to SQL Server for use by System Center 2012. Thanks

[T-SQL] system tables and views

[T-SQL] system tables and views


system tables and views

Posted: 26 Mar 2013 11:43 PM PDT

Does a database diagram exist of SQL Server's system tables? I am looking for documentation that will show me on which columns to join system tables so that I can develop my own investigative queries. I am familiar with many helpful posts that provide queries; however, invariably they require me to rely on someone else's experience to identify the keys on which to join tables and to trust that the joins are correct. SQL Server documentation of system tables, similar to http://msdn.microsoft.com/en-us/library/ms187997.aspx, but showing the mapping between system tables, with primary keys identified, is what I'm looking for.

problem with union

Posted: 26 Mar 2013 07:26 PM PDT

Hi all, I have a problem with my "union" query; this is a sample of it. What I'd like to obtain is my 'effectif_enseignant' and my 'effectif_biatss' in two separate columns. Can somebody help me please? (I don't know if I'm writing in the right part of this forum, so please excuse me if it's not the correct area.) Thanks in advance, Bastien

select COUNT(perso_id) as 'effectif_enseignant', REGION_NOM, ACAD_NOM, TYPO_NOM, cat_perso_nom, ETAB_RATT_NOM, ZONE_GEO_NOM
from VIVIER
where CATEGORIE_PERSONNEL.CAT_PERSO_CODE = 'ENS'
and FLAG_ACTIVITE = 1
union
select COUNT(perso_id) as 'effectif_biatss', REGION_NOM, ACAD_NOM, TYPO_NOM, cat_perso_nom, ETAB_RATT_NOM, ZONE_GEO_NOM
from VIVIER
where CATEGORIE_PERSONNEL.CAT_PERSO_CODE = 'BIA'
and FLAG_ACTIVITE = 1
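A UNION stacks the two counts into one column; to get them side by side, the usual approach is conditional aggregation in a single pass. A sketch under two assumptions: that CAT_PERSO_CODE is available directly in VIVIER (the original qualifies it with CATEGORIE_PERSONNEL without joining that table, which would not parse), and that cat_perso_nom is dropped from the grouping since it differs per category. Note the original also lacks the GROUP BY that a COUNT alongside plain columns requires.

```sql
SELECT REGION_NOM, ACAD_NOM, TYPO_NOM, ETAB_RATT_NOM, ZONE_GEO_NOM,
       COUNT(CASE WHEN CAT_PERSO_CODE = 'ENS' THEN perso_id END) AS effectif_enseignant,
       COUNT(CASE WHEN CAT_PERSO_CODE = 'BIA' THEN perso_id END) AS effectif_biatss
FROM VIVIER
WHERE FLAG_ACTIVITE = 1
GROUP BY REGION_NOM, ACAD_NOM, TYPO_NOM, ETAB_RATT_NOM, ZONE_GEO_NOM;
```

COUNT ignores NULLs, so each CASE expression counts only the rows of its own category.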

SQL Query Problem: How to aggregate discount percentages?

Posted: 26 Mar 2013 02:30 AM PDT

I have a scenario whereby I need to aggregate n discounts to get a total discount %. Each discount must be applied net of the previous discount. For example: I buy a hat for £100 and I have two valid vouchers. The first gets me 15% off: 100 - (100 * 15/100) = £85. The second voucher is worth 5%: 85 - (85 * 5/100) = £80.75. So our total discount is 19.25%. Sequence is important, so a further field records the order in which the discounts are applied. So in our table we might have:

SaleID | Discount | Sequence
1        15%        1
1         5%        2
2         5%        1
3         5%        1
3        10%        2
3        25%        3

Is anyone able to put together an elegant query that will give the total discount for each sale using the above table? (I can think of some less elegant ways to do it, but there must be a cleaner way...?) As in the example above, I expect 19.25% for SaleID 1. Many Thanks.
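Because the discounts compound multiplicatively, the total is 1 minus the product of the remaining fractions, and a product can be aggregated in SQL with EXP(SUM(LOG(...))); note that the Sequence column then turns out not to matter, since multiplication is order-independent. A sketch assuming the data lives in a table called Sales (hypothetical name) and Discount is stored as a number, 15 meaning 15%:

```sql
SELECT SaleID,
       100 * (1 - EXP(SUM(LOG(1.0 - Discount / 100.0)))) AS TotalDiscountPct
FROM Sales
GROUP BY SaleID;
-- SaleID 1: 1 - (0.85 * 0.95) = 0.1925, i.e. 19.25%
```

T-SQL's LOG is the natural logarithm and EXP its inverse, so EXP(SUM(LOG(x))) is the product of the x values; this breaks down only if a single discount is 100% (LOG(0) is undefined).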

[SQL Server 2008 issues] SQL Query Confusion

[SQL Server 2008 issues] SQL Query Confusion


SQL Query Confusion

Posted: 26 Mar 2013 10:41 AM PDT

Hi, I have 2 tables.

Table Member with columns A, B, C, D:
apple  bread  carrot  Chocolate
Mango  Begal  Onion   Candy

Table Util with columns A, B, C, D, X, Y:
apple  bread   carrot  Chocolate  coffee    chedder
Mango  Begal   Onion   Candy      tea       Provolon
Kiwi   biscuit garlic  peanut     greentea  mozrella

Now I want to know whether table Util has any data not in table Member, comparing ONLY on columns A, B, C, D. I wrote the following query, which did not give me a result:

Select * from Util U
Left Outer join on Member m
on U.A=m.A AND U.B=m.B AND U.C=m.C AND U.D=m.D
where m.A is NULL OR m.B is NULL OR m.C is NULL OR m.D is NULL

The next query I wrote was like this, and it gives me the last row in table Util as the result:

Select * from Util U
where U.A not in (Select A from Member)
or U.B not in (Select B from Member)
or U.C not in (Select C from Member)
OR U.D not in (Select D from Member)

I want to know what is wrong in my 1st query: why is the left outer join not working? Kindly please help. Thanks
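The first query fails because of the stray ON before the table name ("Left Outer join on Member m"), which is a syntax error; with that removed, the anti-join pattern does return the unmatched row. An alternative that compares exactly the four columns is EXCEPT, which also treats NULLs as equal. A sketch:

```sql
-- Corrected anti-join (one NULL test on a join column is enough):
SELECT U.*
FROM Util U
LEFT OUTER JOIN Member m
  ON U.A = m.A AND U.B = m.B AND U.C = m.C AND U.D = m.D
WHERE m.A IS NULL;

-- Or, comparing only columns A through D:
SELECT A, B, C, D FROM Util
EXCEPT
SELECT A, B, C, D FROM Member;
```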

KILLED/ROLLBACK STATE - SERVICE RESTART

Posted: 20 Mar 2013 06:57 PM PDT

Hi all, is there any fix other than a SQL service restart when a transaction is stuck in the KILLED/ROLLBACK state (after killing a blocked transaction) in sysprocesses? I had this situation and once unsuccessfully tried moving the database to offline mode (courtesy: Google) to end the transaction. Any help will be appreciated.
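One thing worth checking before resorting to a restart is whether the rollback is actually making progress; KILL ... WITH STATUSONLY reports an estimate for a session already being killed. A sketch (session id 57 is illustrative):

```sql
KILL 57 WITH STATUSONLY;
-- Reports an estimated rollback completion percentage and time remaining.
-- If it stays at 0% indefinitely, the session is often waiting on something
-- external (e.g. a linked server or OLE DB call) and only a restart clears it.
```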

reorganize index

Posted: 18 Mar 2013 05:16 PM PDT

Hi Guys, I'm going to reorganize indexes with > 10% fragmentation. Is there a need to update statistics after the re-org? I understand that for a rebuild, SQL Server will automatically update the stats. My question is: does a re-org require an update of statistics? Are the old statistics still accurate after a re-org?

From the Microsoft note: "Operations such as rebuilding, defragmenting, or reorganizing an index do not change the distribution of data. Therefore, you do not need to update statistics after performing ALTER INDEX REBUILD, DBCC DBREINDEX, DBCC INDEXDEFRAG, or ALTER INDEX REORGANIZE operations. The query optimizer updates statistics when you rebuild an index on a table or view with ALTER INDEX REBUILD or DBCC DBREINDEX; however, this statistics update is a byproduct of re-creating the index. The query optimizer does not update statistics after DBCC INDEXDEFRAG or ALTER INDEX REORGANIZE operations."

Thanks
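As the quoted note says, REORGANIZE does not touch statistics, and since the data distribution doesn't change, they remain as accurate as they were before the re-org; a refresh is only needed if the stats were already stale from data churn. A minimal sketch (dbo.MyTable is a placeholder; in practice this would be driven from sys.dm_db_index_physical_stats and a fragmentation threshold):

```sql
ALTER INDEX ALL ON dbo.MyTable REORGANIZE;      -- defragment in place, stats untouched
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;    -- optional, only if stats are stale anyway
```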

Difference between?

Posted: 26 Mar 2013 03:34 AM PDT

1) Declare @i int = 0
   select @i = id + 1 from user; select @i  /* here id is an identity column */

2) select @i = max(id) + 1 from user; select @i;

The above queries give the same output, but is there any difference in terms of performance?
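Beyond performance, the two forms are not equivalent: in form 1 the variable is reassigned on every row the scan visits, so the final value is taken from whichever row happened to be read last, an order SQL Server does not guarantee. MAX(id) is deterministic and, with an index on the identity column, is typically answered by a single seek to the highest key rather than a scan. A sketch ("user" is bracketed because it is a reserved word):

```sql
DECLARE @i int;

-- Nondeterministic: @i ends up as (id of the last row scanned) + 1
SELECT @i = id + 1 FROM dbo.[user];

-- Deterministic, and usually a cheap index seek on an identity column
SELECT @i = MAX(id) + 1 FROM dbo.[user];
```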

data design considerations

Posted: 26 Mar 2013 01:52 AM PDT

The scenario is that I want a primary key to identify a new entry in a main table. Call it "record_id." This will also be a foreign key in many other tables.

The most important of those tables will hold a complete history of all changes to the main record. Each record in the history table will have both the record_id as well as a unique history_id. So I'm envisioning a main table that generates the record_id, and a history table which will FK to the main table as well as generate its own unique internal PK as history_id.

The thing I'm struggling with is that the "main" table seems to have only one purpose, which is to generate the PK. All the other data would be stored in the history table, because each piece of it might potentially change (this is storing data that is altered by users in something like a workflow application). So the main table is reduced to functioning the way a sequence does in Oracle. It holds no data, and there's no real reason even to use it beyond its ability to create the record_id.

This isn't really a problem, but it seems odd. Every time I think of other fields that might belong in the main table, such as create-date, the argument disappears, because that date will be in the first record of the history table as well as in another audit table. I get the feeling that something obvious is slipping right past my thought process. A composite key may fit here somehow. Though maybe I should just relax, allow this "main" table to hold only the record_id, and recognize that its usefulness is limited to that: a SQL Server version of an Oracle sequence. Your thoughts are welcome!

Data warehouse File groups and Table partition designing

Posted: 26 Mar 2013 09:54 AM PDT

Hi Experts,

Project: data warehouse migration from Oracle to SQL Server 2012.

In Oracle:
- Number of schemas: 6
- Tables: each schema has 3 partitioned tables and 3 unpartitioned tables
- Every schema has its own tablespace; all non-partitioned tables live in that tablespace
- Every partitioned table (three per schema) is partitioned on a date column, i.e. each year a new partition is added to the table (PART_2011, PART_2012, PART_2013, etc.)

In SQL Server 2012:
- All schemas from Oracle are migrated into one database, each under a separate schema name
- Number of schemas: 6
- Tables: each schema has 3 partitioned tables and 3 unpartitioned tables

My question: can you please suggest how to plan the filegroups and partitions in SQL Server, i.e. filegroups for each schema, or filegroups for every year? Let me know if you need more info.
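For reference, a per-year layout in SQL Server maps dates to filegroups through a partition function and a partition scheme; a minimal sketch of the yearly pattern the Oracle tables used (all names and boundary dates are illustrative):

```sql
-- RANGE RIGHT: each boundary date starts a new partition
CREATE PARTITION FUNCTION pf_yearly (date)
AS RANGE RIGHT FOR VALUES ('2011-01-01', '2012-01-01', '2013-01-01');

-- Four partitions (pre-2011, 2011, 2012, 2013) mapped to four filegroups
CREATE PARTITION SCHEME ps_yearly
AS PARTITION pf_yearly TO (FG_PRE2011, FG_2011, FG_2012, FG_2013);

-- A partitioned table then places rows by its date column:
-- CREATE TABLE schema1.Fact ( ... ) ON ps_yearly(load_date);
```

Adding a yearly partition is then ALTER PARTITION SCHEME ... NEXT USED plus ALTER PARTITION FUNCTION ... SPLIT RANGE, which mirrors the yearly ADD PARTITION habit from Oracle.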

Query - Please Help!

Posted: 26 Mar 2013 03:21 AM PDT

Someone please tell me what I'm doing wrong..... :( All I want to do is get a list of databases which don't have extended properties. Query below:

exec ('create table #t_nodb(dbname varchar(120))
insert into #t_nodb (dbname)
exec sp_MSforeachdb ''if ''?'' in (''tempDB'') return
use ?
select ''?'' where ''?'' not in (select ''?'' from sys.extended_properties)''
select * from #t_nodb
drop table #t_nodb')

Thanks!
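The nested quoting is hard to get right with EXEC wrapped around sp_MSforeachdb: every quote inside the inner string needs its doubling doubled again, so ''?'' at that depth does not survive as a string literal. A sketch of an alternative that drops the outer EXEC and most of the quoting, assuming "has extended properties" means sys.extended_properties is non-empty in that database (note sp_MSforeachdb is undocumented and unsupported):

```sql
DECLARE @t TABLE (dbname sysname);

INSERT INTO @t (dbname)
EXEC sp_MSforeachdb
  'IF NOT EXISTS (SELECT 1 FROM [?].sys.extended_properties)
     SELECT ''?'';';

SELECT dbname FROM @t;
```

Using the three-part name [?].sys.extended_properties also avoids the USE ? / RETURN dance for skipping tempdb.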

Deadlock Analysis Results from trace flag 1222

Posted: 26 Mar 2013 12:43 AM PDT

Hi. I have just captured deadlock info into the error log using trace flag 1222. Can anybody help me dissect the results? It looks like a minefield of info!!!! :-)

FULLSCAN vs. REBUILD

Posted: 26 Mar 2013 06:53 AM PDT

Hi All, apart from the fact that REBUILD eliminates fragmentation in the indexes, is there any other advantage over updating the statistics for the indexes with the FULLSCAN option? One disadvantage is the longer running time for REBUILD, though not much longer than UPDATE with FULLSCAN. If FULLSCAN produces the best update of the stats, why should I run an everyday REBUILD for all indexes when I know that only a few of them get fragmented? Then I would only need to REBUILD those few. Regards, IgorMi
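The selective approach described here is the common pattern: rebuild only the indexes past a fragmentation threshold, and refresh statistics separately where needed. A minimal sketch for finding the rebuild candidates (the thresholds are illustrative, not prescriptive):

```sql
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30   -- rebuild candidates
  AND ips.page_count > 1000;                  -- ignore tiny indexes
-- Rebuild those, then UPDATE STATISTICS ... WITH FULLSCAN on the rest as needed.
```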

Cant create a database...Not sure why

Posted: 26 Mar 2013 06:54 AM PDT

Hey guys, I ran into a problem I have never seen before. I am logged on to a SQL Server box as sa, with sa rights. I go to create a database, give it a name (the drive paths are already populated), and when I click OK, I get the following error: CREATE FILE encountered an operating system error 5 (Access denied) while attempting to open or create the file blah blah blah... What is this error? I even went to the drive where the database would be created and created a txt file there. So: I am logged in as sa on SQL Server, and I can create a file on the drive. Why is it not letting me create a database? Any suggestions?

Maintaining Compression

Posted: 26 Mar 2013 02:44 AM PDT

Recently I compressed all objects within a database to page compression, regardless of whether the object was a heap or a clustered object. Heaps keep their page compression on existing pages, but new INSERTs that create new pages on heaps will be row compressed. I'm looking to maintain page compression and run a weekly rebuild on the heaps where row compression exists.

Is the best way to approach this to use the sys.dm_db_index_physical_stats DMV and compare page_count <> compressed_page_count? Where those two are not equal, is it fair to say there is row compression? Is there another way to go about this? Below is an example of the script and partial output of that script. Shouldn't I see more of a match between page_count and compressed_page_count? Why are there so many mismatches?

SELECT o.name, ips.partition_number, ips.index_type_desc, ips.record_count,
       ips.avg_record_size_in_bytes, ips.min_record_size_in_bytes, ips.max_record_size_in_bytes,
       ips.page_count, ips.compressed_page_count
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'DETAILED') ips
JOIN sys.objects o on o.object_id = ips.object_id
ORDER BY record_count DESC;

index_type_desc      page_count  compressed_page_count
NONCLUSTERED INDEX   1096438     1077279
CLUSTERED INDEX      1961930     1958193
HEAP                 2134800     2134793
CLUSTERED INDEX      891358      891356
NONCLUSTERED INDEX   253437      253432
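One way to narrow the weekly rebuild to heaps whose configured compression is PAGE but where some pages have drifted is to join the physical-stats DMV to sys.partitions, whose data_compression_desc column reports the NONE/ROW/PAGE setting per partition. A sketch (this is an assumption-laden approach, not an established recipe):

```sql
SELECT OBJECT_NAME(ips.object_id) AS heap_name,
       ips.partition_number,
       ips.page_count,
       ips.compressed_page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') ips
JOIN sys.partitions p
  ON p.object_id = ips.object_id
 AND p.index_id = ips.index_id
 AND p.partition_number = ips.partition_number
WHERE p.data_compression_desc = 'PAGE'               -- setting says PAGE...
  AND ips.index_id = 0                               -- ...heaps only...
  AND ips.compressed_page_count < ips.page_count     -- ...but not every page is
ORDER BY ips.page_count - ips.compressed_page_count DESC;
-- ALTER TABLE ... REBUILD on the heaps returned restores page compression.
```

As for the small mismatches on the B-tree rows in the posted output: a page is only page-compressed once it fills, so partially full pages remain row compressed, which likely explains near-miss counts like 2134800 vs 2134793.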

SQL DB to XML file on disk

Posted: 26 Mar 2013 05:52 AM PDT

Hi, I am a newbie to the world of SQL Server. I need to extract all records from a DB table (say Employee) and create an XML file on disk. EMP table --> Name, Dept, Salary. I tried the below, but I'm not sure how to write this to an XML file on disk. Any help on this is highly appreciated.

create proc [dbo].[EMPLOYEE]
as
declare @XmlOutput xml
set @XmlOutput = (select name, dept, salary
                  from dbo.EMPLOYEE
                  FOR XML AUTO, ROOT('CallData'), ELEMENTS)
select @XmlOutput AS CALL_DATA
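T-SQL itself cannot write files to disk, so the usual route is to run the FOR XML query from the command line with the bcp utility's queryout mode. A sketch assuming the dbo.EMPLOYEE table above, a trusted connection, and placeholder server, database, and path names:

```sql
-- Run from a Windows command prompt, not from SSMS:
--
-- bcp "SELECT name, dept, salary FROM MyDb.dbo.EMPLOYEE FOR XML AUTO, ROOT('CallData'), ELEMENTS"
--     queryout "C:\temp\employees.xml" -S MyServer -T -w
--
-- -T : trusted (Windows) authentication
-- -w : Unicode output, appropriate for XML
```

Alternatives include saving the query result from a client application, or an SSIS package with a flat-file destination, if bcp is not available on the machine.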

Crosstab, dynamic number of columns, No Pivot function!

Posted: 26 Mar 2013 04:06 AM PDT

Hi, is it possible to write a query that turns a result set into a crosstab and automatically knows how many columns are needed, but WITHOUT using the PIVOT function? Here is a table and data to illustrate what I'm talking about:

CREATE TABLE Lines
(
    line varchar(5) NULL,
    stopid varchar(5) NULL,
    boardings int NULL
)

insert into Lines (line, stopid, boardings) values (2,1,58)
insert into Lines (line, stopid, boardings) values (2,2,37)
insert into Lines (line, stopid, boardings) values (2,3,40)
insert into Lines (line, stopid, boardings) values (2,4,41)
insert into Lines (line, stopid, boardings) values (4,1,13)
insert into Lines (line, stopid, boardings) values (4,2,21)
insert into Lines (line, stopid, boardings) values (4,3,25)
insert into Lines (line, stopid, boardings) values (4,4,20)
insert into Lines (line, stopid, boardings) values (4,5,12)
insert into Lines (line, stopid, boardings) values (4,6,9)
insert into Lines (line, stopid, boardings) values (4,7,4)
insert into Lines (line, stopid, boardings) values (4,8,3)
insert into Lines (line, stopid, boardings) values (4,9,1)

So when I select line 2, there would be four columns, each with the boardings number in it; when I select line 4, there would be nine columns. Thanks in advance for any help given!! -Martin
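Without PIVOT, the classic technique is a dynamic cross-tab: build one MAX(CASE ...) expression per distinct stopid, then execute the assembled string. A sketch against the Lines table above (the stop_N column-name pattern is my invention):

```sql
DECLARE @line varchar(5) = '4',
        @cols nvarchar(max),
        @sql  nvarchar(max);

-- One CASE column per distinct stopid for the chosen line
SELECT @cols = STUFF((
    SELECT ', MAX(CASE WHEN stopid = ''' + stopid + ''' THEN boardings END) AS [stop_' + stopid + ']'
    FROM Lines
    WHERE line = @line
    GROUP BY stopid
    ORDER BY CAST(stopid AS int)
    FOR XML PATH('')), 1, 2, '');

SET @sql = N'SELECT line, ' + @cols + N' FROM Lines WHERE line = @l GROUP BY line;';
EXEC sp_executesql @sql, N'@l varchar(5)', @l = @line;
-- Line 4 yields nine stop columns; line 2 yields four.
```

The FOR XML PATH('') trick is the usual pre-2017 string aggregation; STUFF strips the leading comma.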

restore database without overwriting users

Posted: 26 Mar 2013 01:18 AM PDT

I am receiving a daily download of a database and restoring it locally to do some custom reporting on. I have this working from batch files running nightly. The problem I have is that when I restore the database, it also overwrites the database's local users, which include a user I added to access the tables from Visual Studio. How can I restore a database but leave the local users alone? Thank you in advance.
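A RESTORE always replaces the entire database, users included, so the usual workaround is to re-add (or remap) the local user in the same nightly batch right after the restore. A sketch assuming a server-level login named ReportUser already exists and the database/path names are placeholders:

```sql
RESTORE DATABASE ReportDb FROM DISK = 'C:\downloads\daily.bak' WITH REPLACE;
GO
USE ReportDb;
IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = 'ReportUser')
    CREATE USER ReportUser FOR LOGIN ReportUser;
ELSE
    ALTER USER ReportUser WITH LOGIN = ReportUser;  -- remap an orphaned SID
EXEC sp_addrolemember 'db_datareader', 'ReportUser';
```

If the user already exists inside the restored backup but with a mismatched SID (an "orphaned user"), the ALTER USER ... WITH LOGIN branch fixes the mapping without dropping anything.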

SQL Server and NoSQL

Posted: 26 Mar 2013 04:22 AM PDT

Hi there, I need some advice/articles to go through. We have an ecommerce website using SQL Server 2008 R2. For the searches we implement a ranking system using full-text search and some other ranking criteria, for example the number of units a product has sold. The search is working pretty well, but my manager wants to explore the use of a NoSQL infrastructure to speed up our search results. Two questions:
1. Can NoSQL be completely integrated with SQL Server, so we can use the data in SQL to populate the NoSQL store somehow?
2. Will we have the flexibility to modify our searches according to our specific business needs?
Any advice/articles pointing in the right direction will be appreciated. Thanks.

sql server question

Posted: 26 Mar 2013 04:03 AM PDT

Hi, it's a SQL Server 2008 R2 environment. There are some stored procedures in a database under a schema called schema1. I granted userA EXECUTE, SELECT, and VIEW DEFINITION on the schema1 schema. But userA is still unable to see the stored procs under schema1. Am I missing anything here? Thanks.

Curious: performance difference when using function in a where clause.

Posted: 21 Mar 2013 11:12 PM PDT

Hello all, this is just something I noticed and am wondering about (I think this is an unexpected effect); there is no problem to solve. I get a performance difference when I use a function in a WHERE clause. See the code; the performance numbers are below each query. Sorry, I was not able to isolate the problem and give definitions and data to reproduce this. Please read the code.

select * from client_order
where ClientNr = 12345   -- not ABS(ClientNr) = 12345
and startdttm = '2012-03-16'
-- (1 row(s) affected)
-- Table 'Encounter'. Scan count 30, logical reads 62, physical reads 61, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
-- SQL Server Execution Times: CPU time = 0 ms, elapsed time = 765 ms.
-- Estimated and actual execution plan: Estimated Subtree Cost: 0.106408

select * from client_order
where ABS(ClientNr) = 12345   -- not ClientNr = 12345
and startdttm = '2012-03-16'
-- (1 row(s) affected)
-- Table 'client_order'. Scan count 1, logical reads 4, physical reads 4, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
-- SQL Server Execution Times: CPU time = 0 ms, elapsed time = 49 ms.
-- Estimated and actual execution plan: Estimated Subtree Cost: 0.0183334

Thanks for your time and attention, ben brugman

Maintaining Compression

Posted: 26 Mar 2013 02:44 AM PDT

Duplicate Post. My apologies. Proxy server at work is acting up.

Problem in 1 Billion Records Table

Posted: 24 Mar 2013 03:34 PM PDT

Hi, I have one huge table with 1 billion records. I tried to delete unnecessary records, but it took a lot of time and the delete was unsuccessful. My data is now 26,491,552 KB and the index 20,569,168 KB, and the insert and update processes have become very slow. What is the best way to handle this table to make reads and writes faster? Thanks in advance.
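For purges at this scale, the standard technique is deleting in small batches so each transaction stays short, locks stay granular, and the transaction log can be reused between batches instead of growing to hold one giant delete. A minimal sketch (the table name and purge predicate are placeholders):

```sql
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.BigTable
    WHERE CreatedDate < '2010-01-01';   -- hypothetical purge predicate
    SET @rows = @@ROWCOUNT;
    -- In SIMPLE recovery the log space is reused between batches;
    -- in FULL recovery, schedule log backups alongside the loop.
END;
```

If most of the table is being removed, inserting the keeper rows into a new table and swapping names is often faster than deleting the majority in place.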

SSIS Warning - Global Shared Memory

Posted: 07 Apr 2009 10:27 PM PDT

Hi, I'm busy rewriting DTS packages as SSIS packages. As and when I finish a package, I run it in debug mode via Microsoft Visual Studio and then examine the Execution Results to see the messages generated. Now, it may or may not matter how I run the package, but the following warning has been generated:

[SSIS.Pipeline] Warning: Warning: Could not open global shared memory to communicate with performance DLL; data flow performance counters are not available. To resolve, run this package as an administrator, or on the system's console.

I'm one of those people who likes to run programs and get no warnings at all (if it can be helped). Can someone kindly explain what it means and whether I can get rid of it?

Track SQL severity levels in SCOM?

Posted: 12 Mar 2013 05:50 AM PDT

Trying to set up alerts and/or email notifications in SCOM 2007 that are triggered by SQL Server (2005 & 2008) severity levels 19-25 and errors 823-825. Several DB servers at my organization cannot send dbmail due to compliance issues, so I need another way to send alerts. Does anyone know if this can be accomplished in SCOM 2007? I can only find SCOM severity levels 0, 1, and 2 in FMSQLOPERATIONS.alert.vAlert. Maybe I can log these, or pull them from the SQL boxes' Windows application logs using SCOM. Anybody have any experience with this?

Minimal Logging Enhancement in SQL Server 2008

Posted: 25 Mar 2013 09:53 PM PDT

Hi folks,

SQL Server 2008 has enhanced the INSERT..SELECT statement to allow minimal logging in certain cases, documented here: http://msdn.microsoft.com/en-us/library/ms174335(v=sql.100).aspx (see section: Best Practices for Bulk Loading Data).

We planned to use this feature in a situation where we wanted to insert millions of rows into an empty heap (or an empty clustered index) and made sure that all prerequisites are fulfilled:
a) the target table is a heap or an empty B-Tree,
b) TABLOCK is being used on the target table,
c) the database is in SIMPLE recovery model,
d) the target table is not used in replication.

The INSERT is done from a SELECT from a temp table (#temp) having around 300,000 rows (300,000 distinct Ids) joined with another table that has ~500 rows for each of the distinct #temp.Id. Thus 500 x 300,000 = 150 million rows (around 10 GB). We are on Microsoft SQL Server 2008 (SP1) - 10.0.2740.0 (X64).

But it seems minimal logging is not working for us, as I see the transaction log growing to almost 10 GB during the insert when no other processes are running. We also face a transaction-log-full issue when other processes are running along with this insert (transaction log size = 20 GB). There were no backups being run at the time of testing, which might otherwise have caused full logging.

Is minimally logged INSERT..SELECT available in all versions of SQL Server 2008, or did it get included in one of the Cumulative Updates? I would really like to make use of this new feature. Any ideas on how to achieve it would be appreciated.
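For reference, the minimally logged pattern the documentation describes looks like the sketch below (dbo.TargetHeap and dbo.Detail are placeholder names for the poster's tables). One commonly overlooked piece of the same 2008 feature set is trace flag 610, which extends minimal logging to some inserts into non-empty B-trees.

```sql
INSERT INTO dbo.TargetHeap WITH (TABLOCK)   -- TABLOCK is required for minimal logging
SELECT t.Id, d.Col1, d.Col2
FROM #temp t
JOIN dbo.Detail d ON d.Id = t.Id;           -- hypothetical join: ~500 detail rows per Id
-- Also required: empty heap/B-tree target, SIMPLE (or BULK_LOGGED) recovery,
-- and the target must not be published for replication.
```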

Optimal values needed for Server

Posted: 26 Mar 2013 01:11 AM PDT

Hi, can anyone tell me what these counters are: total CPU usage, context switches/sec, paging rate, network bandwidth utilization, disk queue length? What value ranges should they normally fall within, and how can we tell that something is wrong using these counters?

Parallel Thread Deadlocks Help?

Posted: 25 Mar 2013 10:10 PM PDT

Trace flag 1222 enabled; Profiler capture taken; Database Engine Tuning Advisor run. I believe I'm getting "intra-query parallel thread deadlocks" and am not really sure how to go about fixing the problem. I know the problems are coming from a management-tool database that we use in IT to remote into and patch machines, etc. This particular database has regular integrity checks as well as weekly index rebuilds. I captured the deadlock using Profiler and then imported it into the Database Engine Tuning Advisor. Its recommendation was to create various indexes, which I applied, but this made no difference to the problem. Here is the deadlock info I'm getting, but I'm not quite sure how to troubleshoot it. Any help would be much appreciated.

SSMS crashing at startup

Posted: 28 Aug 2011 07:23 PM PDT

Hello, I have a Windows 2008 server with an instance of SQL Server 2008 installed on it. After installing an instance of SQL Server 2008 R2, I am no longer able to open SQL Server Management Studio; it crashes at startup with the message "SQL Server Management Studio has stopped working". When I click the details button I get the following information:

Description: Stopped working
Problem Event Name: CLR20r3
Problem Signature 01: ssms.exe
Problem Signature 02: 2009.100.1600.1
Problem Signature 03: 4bb679e7
Problem Signature 04: mscorlib
Problem Signature 05: 2.0.0.0
Problem Signature 06: 4d8c1599
Problem Signature 07: f50
Problem Signature 08: 7
Problem Signature 09: Exception
OS Version: 6.1.7600.2.0.0.272.7
Locale ID: 4108

The detailed log repeats the same signature values, with AppPath=C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe and a LoadedModule list of roughly 108 DLLs, including the ApexSQL Refactor 2008 add-in assemblies (ApexSql.Refactor.SsmsLoader.dll, ApexSql.Refactor.Ssms2008.dll, ApexSql.Common.CommonLib.dll) alongside the standard SSMS and .NET 2.0 modules.

I have tried to uninstall SSMS and reinstall it, but the same error occurs. Thank you for your help.
