Thursday, May 30, 2013

[how to] Oracle datafile extension .dat or .dbf

Oracle datafile extension .dat or .dbf

Posted: 30 May 2013 09:08 PM PDT

I have seen these 2 extensions used for datafiles, .dat and .dbf while creating and/or altering a tablespace. I'm not sure what the difference between the 2 extensions is, or if .dat is incorrect.

Here are 2 examples from Oracle Database SQL Reference 10g Release 2 (10.2)

.dat

    CREATE TABLESPACE tbs_01
        DATAFILE 'tbs_f2.dat' SIZE 40M
        ONLINE;

.dbf

    CREATE TABLESPACE tbs_03
        DATAFILE 'tbs_f03.dbf' SIZE 20M
        LOGGING;
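For what it's worth, the suffix appears to be purely a naming convention: as far as I know Oracle does not interpret the datafile extension at all, so either form works. A quick sketch (tablespace and file names made up for illustration):

    -- Oracle ignores the suffix; .dbf is merely the common convention
    CREATE TABLESPACE tbs_demo
        DATAFILE 'tbs_demo.anything' SIZE 10M;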

How to setup SQL active/active cluster to achieve Blue / Green instance switching?

Posted: 30 May 2013 08:00 PM PDT

I am wondering if anyone has ever used a multi-instance cluster (nee 'Active/Active') to achieve blue/green (or A/B) deployment scenarios, and what the best way of configuring it is (using SQL 2012 / Windows 2008 R2)?

To be specific, the scenario I want to achieve is to be able to switch between which cluster instance is being connected to by clients without either the clients or the SQL instances knowing (I stress I'm not talking about node failover here). I'm envisaging that the best way to achieve this is something like:

  • Setup 2 node cluster, each of which has InstanceA and InstanceB instances
  • Configure both InstanceA and InstanceB to listen as if they were the default instance on their cluster address
  • Use DNS to switch which virtual address clients actually connect to.

This should hopefully enable me to do the following:

  • Deploy database to instance A, and have clients connect to it via DNS alias as if default instance
  • Deploy new version of database to instance B
  • Vet new version of database (connecting explicitly to cluster\InstanceB)
  • Redirect DNS alias to point to instance B's cluster name
  • Clients now connect to InstanceB without realising anything's changed
  • Both instances can still failover to the other node in a true outage
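The DNS switch in step 4 could, for instance, be a single CNAME repoint on the DNS server. A sketch using dnscmd (the DNS server, zone, alias, and cluster names below are all hypothetical):

    rem Repoint the client-facing alias from cluster A's network name to cluster B's
    dnscmd dc01 /RecordDelete corp.local sqlprod CNAME /f
    dnscmd dc01 /RecordAdd corp.local sqlprod CNAME sqlclusterb.corp.local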

Joining the dots, it seems like this should be possible:

... but I've never seen a full example. Has anyone done it? Will what's proposed above work? What have I missed?

Linked Server Login Timeout but SqlCmd works

Posted: 30 May 2013 08:48 PM PDT

I've got a SQL 2005 SP4 server that connects to a 2008 SP3 instance via linked servers using the SQL Server server type. Every once in a while, one of those linked servers will start throwing login timeouts. To rule out firewalls, I can RDP to the server and run sqlcmd and get in just fine, even making sure to use the same login. I'm thinking that SQL has somehow cached something that prevents it from finding the right address. The remote server name is defined in that machine's hosts file. So far, only a reboot fixes the issue.

Edit: the linked server is set up using a remote SQL login.

Any ideas?

Sync two Oracle production server databases

Posted: 30 May 2013 05:54 PM PDT

I have an Oracle database that runs a 6-hour batch job every day. This process slows down performance during that 6-hour window.

Is there any method by which I could build another server that runs the batch job and, once it is done, syncs the data back to the production server? (The total time must be shorter than 6 hours.)

Please advise.

Thanks, Shawn

All four data nodes in MySQL Cluster in same node group

Posted: 30 May 2013 04:47 PM PDT

I am testing MySQL Cluster 7.2. I have two servers, mysql1 and mysql2. I want the management server and two data nodes to run on mysql1 and two other data nodes to run on mysql2. My config.ini file looks like this:

    [ndb_mgmd]
    hostname=mysql1
    datadir=/var/mysql-cluster/ndb_data
    NodeId=1

    [ndbd default]
    noofreplicas=2
    datadir=/var/mysql-cluster/ndb_data
    DataMemory=8M

    [ndbd]
    hostname=mysql1
    NodeId=3

    [ndbd]
    hostname=mysql2
    NodeId=4

    [ndbd]
    hostname=mysql1
    NodeId=13

    [ndbd]
    hostname=mysql2
    NodeId=14

    [mysqld]
    NodeId=50

With this configuration I would expect, as per the MySQL Cluster documentation, that the two data nodes 3 and 4 would be in nodegroup 0, while the two data nodes 13 and 14 would be in nodegroup 1.

However, when I start everything up and show the nodes, I see this:

    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     4 node(s)
    id=3    @192.168.0.42  (mysql-5.5.30 ndb-7.2.12, starting, Nodegroup: 0, Master)
    id=4    @192.168.0.43  (mysql-5.5.30 ndb-7.2.12, starting, Nodegroup: 0)
    id=13   @192.168.0.42  (mysql-5.5.30 ndb-7.2.12, starting, Nodegroup: 0)
    id=14   @192.168.0.43  (mysql-5.5.30 ndb-7.2.12, starting, Nodegroup: 0)

    [ndb_mgmd(MGM)] 1 node(s)
    id=1    @192.168.0.42  (mysql-5.5.30 ndb-7.2.12)

    [mysqld(API)]   1 node(s)
    id=50 (not connected, accepting connect from any host)

Everything seems to be in nodegroup 0! What do I have to do to get 3 and 4 in one group and 13 and 14 in another?
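For what it's worth, node group membership can also be pinned explicitly with the NodeGroup parameter in each [ndbd] section rather than relying on automatic assignment. A sketch based on the config above (I believe this parameter is supported in 7.2, but check the documentation for your exact version):

    [ndbd]
    hostname=mysql1
    NodeId=3
    NodeGroup=0

    [ndbd]
    hostname=mysql2
    NodeId=4
    NodeGroup=0

    [ndbd]
    hostname=mysql1
    NodeId=13
    NodeGroup=1

    [ndbd]
    hostname=mysql2
    NodeId=14
    NodeGroup=1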

PostgreSQL 9.2.4 (Windows 7) - Service won't start, “could not load pg_hba.conf”

Posted: 30 May 2013 03:56 PM PDT

I am trying to get Postgres 9.2.4 to run as a service on Windows 7. After installing postgres, the service was running fine. However, after setting postgres up as a server for another program, the service stopped running. When I try to start the service now, I get a message saying :

"The postgresql-x64-9.2 - PostgreSQL Server 9.2 service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs."

When I try running the program that should use the database server, I get this error :

"A problem was encountered while attempting to log into or create the production database. Details: Could not connect to the server; Could not connect to remote socket. The application must now close"

I have also encountered this error once while opening the same program :

"A problem was encountered while attempting to log into or create the production database. Details: FATAL: could not load pg_hba.conf The application must now close."

I have tried running the service logged on as a local system account as well as my own account (In the postgres service properties) to no avail. I also tried restarting my computer. After a lot of troubleshooting online, I learned that a good thing to check is the pg_log file. Here are the contents of the latest pg_log entry :

    2013-05-29 14:59:45 MDT LOG:  database system was interrupted; last known up at 2013-05-29 14:58:01 MDT
    2013-05-29 14:59:45 MDT LOG:  database system was not properly shut down; automatic recovery in progress
    2013-05-29 14:59:45 MDT LOG:  record with zero length at 0/175BB98
    2013-05-29 14:59:45 MDT LOG:  redo is not required
    2013-05-29 14:59:45 MDT LOG:  database system is ready to accept connections
    2013-05-29 14:59:45 MDT LOG:  autovacuum launcher started
    2013-05-29 15:07:00 MDT LOG:  local connections are not supported by this build
    2013-05-29 15:07:00 MDT CONTEXT:  line 1 of configuration file "C:/PostgreSQL/data/pg_hba.conf"
    2013-05-29 15:07:00 MDT FATAL:  could not load pg_hba.conf
    2013-05-29 15:07:00 MDT LOG:  local connections are not supported by this build
    2013-05-29 15:07:00 MDT CONTEXT:  line 1 of configuration file "C:/PostgreSQL/data/pg_hba.conf"
    2013-05-29 15:07:00 MDT FATAL:  could not load pg_hba.conf
    2013-05-29 15:09:03 MDT LOG:  received fast shutdown request
    2013-05-29 15:09:03 MDT LOG:  aborting any active transactions
    2013-05-29 15:09:03 MDT LOG:  autovacuum launcher shutting down
    2013-05-29 15:09:03 MDT LOG:  shutting down
    2013-05-29 15:09:03 MDT LOG:  database system is shut down

It seems to be having issues with the pg_hba.conf file, which looks like this :

    local all all trust
    host all all 127.0.0.1 255.255.255.255 trust
    host all all 0.0.0.0 0.0.0.0 trust

As per many suggestions online, I tried editing the top line to a number of different alternatives (host all all trust / host all 127.0.0.1/32 trust / host all 192.168.0.100/24 trust , etc.). This made sense to me, as the log file was saying that local connections are unsupported by postgres and was also pointing to that line. However, none of my changes had any effect. I tried restarting my computer after every change but nothing made any difference.
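One thing the log seems to point at: "local" lines in pg_hba.conf refer to Unix-domain socket connections, which, as far as I can tell, the Windows builds of PostgreSQL 9.2 do not support, so a file for this server would use only "host" lines. A minimal sketch (trust is wide open and only sensible for local testing):

    # TYPE  DATABASE        USER            ADDRESS                 METHOD
    host    all             all             127.0.0.1/32            trust
    host    all             all             ::1/128                 trust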

When I searched for examples of what a pg_hba.conf file normally looks like, the examples looked slightly different from my file. I noticed that in the PostgreSQL program file, in addition to pg_hba.conf, there was also a "20130529-150444-old-pg_hba.conf" file which looked a lot more like the examples I was finding online. This file has several lines of comments before these last few lines :

# TYPE  DATABASE        USER            ADDRESS                 METHOD    # IPv4 local connections:  host    all             all             127.0.0.1/32            md5  # IPv6 local connections:  host    all             all             ::1/128                 md5  # Allow replication connections from localhost, by a user with the  # replication privilege.  #host    replication     postgres        127.0.0.1/32            md5  #host    replication     postgres        ::1/128                 md5  

I was hoping that this was the original pg_hba.conf file and that if I replaced the new file with the contents of the old one, postgres would start working again. No such luck. I have been hoping for more error files to be logged in pg_log to see if the previously stated error had disappeared or changed to something else, but no more files have been logged.

I have been troubleshooting online for a few days now and nothing I've found has worked. Sorry for having such a long question, but I wanted to be thorough and include all relevant information. I would appreciate it if anyone could shed some light on this problem or offer suggestions.

Different dates Oracle 11g with TOAD

Posted: 30 May 2013 03:25 PM PDT

I have the following queries:

    SELECT to_date(to_char(to_date('01-FEB-1949'))) FROM DUAL;
    /* this returns 2/1/2049 */

    SELECT to_date(to_char(to_date('01-FEB-1949'),'dd-MON-yyyy')) FROM DUAL;
    /* this returns 2/1/1949 */

Why does the first one return the year 2049 instead of 1949?

By Googling I have found that I can "force" the client date format to be the one desired by changing this key in the registry:

    KEY_OraClient11g_home1
    NLS_DATE_FORMAT : YYYY/MM/DD
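What seems to be happening: with a default NLS_DATE_FORMAT of DD-MON-RR, to_char() emits only a two-digit year, and the RR windowing rules then map 49 back to 2049. A session-level check and override (no registry change needed) might look like this sketch:

    -- See what format the session is currently using
    SELECT value FROM nls_session_parameters WHERE parameter = 'NLS_DATE_FORMAT';

    -- Force a four-digit year for this session
    ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY';
    SELECT to_date(to_char(to_date('01-FEB-1949'))) FROM dual;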

Thanks in advance!

How should I arrange a database replication for my site?

Posted: 30 May 2013 05:46 PM PDT

Here is my problem. I have a busy Drupal site struggling under high load. After applying all caches I see that the database is the bottleneck. I have two servers to handle the site, A and B, on the same rack/subnet. Server A is the frontend web server and is set to send all database queries to server B. Currently there is no database set up on A. The database on B is MariaDB 10. CPU-wise, server A is much less powerful than B, but it has the same amount of RAM. The load on server A is very low (< 0.5); the load on server B is not (> 5). The reads/writes ratio is currently 92% / 8%.

So my questions are:

- Is there any benefit in defining a master/slave setup on these two servers?

- If it is a good idea to go the master/slave route, how would you arrange the servers? (Which server should be the master? Which one should be the frontend?)

Why do these queries show up in the slow-query log? Interpreting EXPLAIN

Posted: 30 May 2013 08:26 PM PDT

I'm having a hard time interpreting the EXPLAIN results of these queries. They both end up in the slow-query log, but the execution time is ~0.0050ms and the final result set is always under 100 rows. What's wrong here? Is my second "improved" version any better? Any suggestions?

    mysql> # Original
    mysql> EXPLAIN SELECT SQL_NO_CACHE relSrc, relDst, 1 as relType, relTypeDesc, 0 as fracQty, '24794' as source FROM productsRelationships1
        -> LEFT JOIN productsRelationshipsDesc on 1=relTypeID
        -> WHERE relDst='24794' OR relSrc='24794'
        -> UNION ALL
        -> SELECT relSrc, relDst, 2 as relType, relTypeDesc, 0 as fracQty, '24794' as source FROM productsRelationships2
        -> LEFT JOIN productsRelationshipsDesc on 2=relTypeID
        -> WHERE relDst='24794' OR relSrc='24794'
        -> UNION ALL
        -> SELECT relSrc, relDst, 3 as relType, relTypeDesc, 0 as fracQty, '24794' as source FROM productsRelationships3
        -> LEFT JOIN productsRelationshipsDesc on 3=relTypeID
        -> WHERE relDst='24794' OR relSrc='24794'
        -> UNION ALL
        -> SELECT relSrc, relDst, 5 as relType, relTypeDesc, 0 as fracQty, '24794' as source FROM productsRelationships5
        -> LEFT JOIN productsRelationshipsDesc on 5=relTypeID
        -> WHERE relDst='24794' OR relSrc='24794'
        -> UNION ALL
        -> SELECT relSrc, relDst, 6 as relType, relTypeDesc, fracQty, '24794' as source FROM productsRelationships6
        -> LEFT JOIN productsRelationshipsDesc on 6=relTypeID
        -> WHERE relDst='24794' OR relSrc='24794'
        -> UNION ALL
        -> SELECT relSrc, relDst, 7 as relType, relTypeDesc, 0 as fracQty, '24794' as source FROM productsRelationships7
        -> LEFT JOIN productsRelationshipsDesc on 7=relTypeID
        -> WHERE relDst='24794' OR relSrc='24794'
        -> ORDER BY relType, relSrc, RelDst;
    +----+--------------+---------------------------+-------------+---------------------+---------------+---------+-------+-------+-----------------------------------------+
    | id | select_type  | table                     | type        | possible_keys       | key           | key_len | ref   | rows  | Extra                                   |
    +----+--------------+---------------------------+-------------+---------------------+---------------+---------+-------+-------+-----------------------------------------+
    |  1 | PRIMARY      | productsRelationships1    | index       | PRIMARY,src-1       | src-1         | 2       | NULL  |   663 | Using where; Using index                |
    |  1 | PRIMARY      | productsRelationshipsDesc | ref         | relTypeID           | relTypeID     | 1       | const |     1 | Using index                             |
    |  2 | UNION        | productsRelationships2    | index       | src-dst-2           | src-dst-2     | 4       | NULL  | 13126 | Using where; Using index                |
    |  2 | UNION        | productsRelationshipsDesc | ref         | relTypeID           | relTypeID     | 1       | const |     1 | Using index                             |
    |  3 | UNION        | productsRelationships3    | index       | PRIMARY             | PRIMARY       | 4       | NULL  | 11459 | Using where; Using index                |
    |  3 | UNION        | productsRelationshipsDesc | ref         | relTypeID           | relTypeID     | 1       | const |     1 | Using index                             |
    |  4 | UNION        | productsRelationships5    | index       | PRIMARY,src-5       | src-5         | 2       | NULL  |   369 | Using where; Using index                |
    |  4 | UNION        | productsRelationshipsDesc | ref         | relTypeID           | relTypeID     | 1       | const |     1 | Using index                             |
    |  5 | UNION        | productsRelationships6    | index_merge | PRIMARY,src-6,dst-6 | dst-6,PRIMARY | 2,2     | NULL  |     2 | Using union(dst-6,PRIMARY); Using where |
    |  5 | UNION        | productsRelationshipsDesc | ref         | relTypeID           | relTypeID     | 1       | const |     1 | Using index                             |
    |  6 | UNION        | productsRelationships7    | index       | PRIMARY,src-7       | src-7         | 2       | NULL  |     1 | Using where; Using index                |
    |  6 | UNION        | productsRelationshipsDesc | ref         | relTypeID           | relTypeID     | 1       | const |     1 | Using index                             |
    | NULL | UNION RESULT | <union1,2,3,4,5,6>      | ALL         | NULL                | NULL          | NULL    | NULL  |  NULL | Using filesort                          |
    +----+--------------+---------------------------+-------------+---------------------+---------------+---------+-------+-------+-----------------------------------------+
    13 rows in set (0.00 sec)

    mysql> # Improved?
    mysql> EXPLAIN SELECT SQL_NO_CACHE relSrc, relDst, relType, fracQty, source, relTypeDesc FROM (
        -> SELECT relSrc, relDst, 1 as relType, 0 as fracQty, '24794' as source FROM productsRelationships1
        -> WHERE relDst='24794' OR relSrc='24794'
        -> UNION ALL
        -> SELECT relSrc, relDst, 2 as relType, 0 as fracQty, '24794' as source FROM productsRelationships2
        -> WHERE relDst='24794' OR relSrc='24794'
        -> UNION ALL
        -> SELECT relSrc, relDst, 3 as relType, 0 as fracQty, '24794' as source FROM productsRelationships3
        -> WHERE relDst='24794' OR relSrc='24794'
        -> UNION ALL
        -> SELECT relSrc, relDst, 5 as relType, 0 as fracQty, '24794' as source FROM productsRelationships5
        -> WHERE relDst='24794' OR relSrc='24794'
        -> UNION ALL
        -> SELECT relSrc, relDst, 6 as relType, fracQty, '24794' as source FROM productsRelationships6
        -> WHERE relDst='24794' OR relSrc='24794'
        -> UNION ALL
        -> SELECT relSrc, relDst, 7 as relType, 0 as fracQty, '24794' as source FROM productsRelationships7
        -> WHERE relDst='24794' OR relSrc='24794'
        -> ) AS rels
        -> LEFT JOIN productsRelationshipsDesc ON relType=relTypeID
        -> ORDER BY relType, relSrc, RelDst;
    +----+--------------+---------------------------+-------------+---------------------+---------------+---------+--------------+-------+------------------------------------------------------+
    | id | select_type  | table                     | type        | possible_keys       | key           | key_len | ref          | rows  | Extra                                                |
    +----+--------------+---------------------------+-------------+---------------------+---------------+---------+--------------+-------+------------------------------------------------------+
    |  1 | PRIMARY      | <derived2>                | ALL         | NULL                | NULL          | NULL    | NULL         |    38 | Using filesort                                       |
    |  1 | PRIMARY      | productsRelationshipsDesc | ref         | relTypeID           | relTypeID     | 1       | rels.relType |     1 | Using index                                          |
    |  2 | DERIVED      | productsRelationships1    | index       | PRIMARY,src-1       | src-1         | 2       | NULL         |   663 | Using where; Using index                             |
    |  3 | UNION        | productsRelationships2    | index       | src-dst-2           | src-dst-2     | 4       | NULL         | 13126 | Using where; Using index                             |
    |  4 | UNION        | productsRelationships3    | index       | PRIMARY             | PRIMARY       | 4       | NULL         | 11459 | Using where; Using index                             |
    |  5 | UNION        | productsRelationships5    | index       | PRIMARY,src-5       | src-5         | 2       | NULL         |   369 | Using where; Using index                             |
    |  6 | UNION        | productsRelationships6    | index_merge | PRIMARY,src-6,dst-6 | dst-6,PRIMARY | 2,2     | NULL         |     2 | Using union(dst-6,PRIMARY); Using where; Using index |
    |  7 | UNION        | productsRelationships7    | index       | PRIMARY,src-7       | src-7         | 2       | NULL         |     1 | Using where; Using index                             |
    | NULL | UNION RESULT | <union2,3,4,5,6,7>      | ALL         | NULL                | NULL          | NULL    | NULL         |  NULL |                                                      |
    +----+--------------+---------------------------+-------------+---------------------+---------------+---------+--------------+-------+------------------------------------------------------+
    9 rows in set (0.00 sec)

These are the table definitions:

    mysql> SHOW CREATE TABLE productsRelationships1\G
    *************************** 1. row ***************************
           Table: productsRelationships1
    Create Table: CREATE TABLE `productsRelationships1` (
      `relSrc` smallint(5) unsigned NOT NULL,
      `relDst` smallint(5) unsigned NOT NULL,
      PRIMARY KEY (`relSrc`,`relDst`),
      UNIQUE KEY `src-1` (`relSrc`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8
    1 row in set (0.00 sec)

    mysql> SHOW CREATE TABLE productsRelationships2\G
    *************************** 1. row ***************************
           Table: productsRelationships2
    Create Table: CREATE TABLE `productsRelationships2` (
      `relSrc` smallint(5) unsigned NOT NULL,
      `relDst` smallint(5) unsigned NOT NULL,
      KEY `src-dst-2` (`relSrc`,`relDst`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8
    1 row in set (0.00 sec)

    mysql> SHOW CREATE TABLE productsRelationships3\G
    *************************** 1. row ***************************
           Table: productsRelationships3
    Create Table: CREATE TABLE `productsRelationships3` (
      `relSrc` smallint(5) unsigned NOT NULL,
      `relDst` smallint(5) unsigned NOT NULL,
      PRIMARY KEY (`relSrc`,`relDst`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8
    1 row in set (0.00 sec)

    mysql> SHOW CREATE TABLE productsRelationships5\G
    *************************** 1. row ***************************
           Table: productsRelationships5
    Create Table: CREATE TABLE `productsRelationships5` (
      `relSrc` smallint(5) unsigned NOT NULL,
      `relDst` smallint(5) unsigned NOT NULL,
      PRIMARY KEY (`relSrc`,`relDst`),
      UNIQUE KEY `src-5` (`relSrc`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8
    1 row in set (0.00 sec)

    mysql> SHOW CREATE TABLE productsRelationships6\G
    *************************** 1. row ***************************
           Table: productsRelationships6
    Create Table: CREATE TABLE `productsRelationships6` (
      `relSrc` smallint(5) unsigned NOT NULL,
      `relType` tinyint(2) unsigned NOT NULL DEFAULT '6',
      `fracQty` int(2) unsigned NOT NULL,
      `relDst` smallint(5) unsigned NOT NULL,
      PRIMARY KEY (`relSrc`,`relDst`),
      UNIQUE KEY `src-6` (`relSrc`),
      UNIQUE KEY `dst-6` (`relDst`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8
    1 row in set (0.00 sec)

    mysql> SHOW CREATE TABLE productsRelationships7\G
    *************************** 1. row ***************************
           Table: productsRelationships7
    Create Table: CREATE TABLE `productsRelationships7` (
      `relSrc` smallint(5) unsigned NOT NULL,
      `relDst` smallint(5) unsigned NOT NULL,
      PRIMARY KEY (`relSrc`,`relDst`),
      UNIQUE KEY `src-7` (`relSrc`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8
    1 row in set (0.00 sec)

    mysql> SHOW CREATE TABLE productsRelationshipsDesc\G
    *************************** 1. row ***************************
           Table: productsRelationshipsDesc
    Create Table: CREATE TABLE `productsRelationshipsDesc` (
      `relTypeID` tinyint(2) unsigned NOT NULL AUTO_INCREMENT,
      `relTypeDesc` varchar(100) NOT NULL,
      UNIQUE KEY `relTypeID` (`relTypeID`,`relTypeDesc`)
    ) ENGINE=InnoDB AUTO_INCREMENT=8 DEFAULT CHARSET=utf8
    1 row in set (0.00 sec)

And this is the amount of data in every table.

    mysql> SELECT COUNT(1) FROM productsRelationships1\G
    *************************** 1. row ***************************
    COUNT(1): 663
    1 row in set (0.00 sec)

    mysql> SELECT COUNT(1) FROM productsRelationships2\G
    *************************** 1. row ***************************
    COUNT(1): 263
    1 row in set (0.00 sec)

    mysql> SELECT COUNT(1) FROM productsRelationships3\G
    *************************** 1. row ***************************
    COUNT(1): 8551
    1 row in set (0.01 sec)

    mysql> SELECT COUNT(1) FROM productsRelationships5\G
    *************************** 1. row ***************************
    COUNT(1): 369
    1 row in set (0.00 sec)

    mysql> SELECT COUNT(1) FROM productsRelationships6\G
    *************************** 1. row ***************************
    COUNT(1): 80
    1 row in set (0.00 sec)

    mysql> SELECT COUNT(1) FROM productsRelationships7\G
    *************************** 1. row ***************************
    COUNT(1): 0
    1 row in set (0.00 sec)

    mysql> SELECT COUNT(1) FROM productsRelationshipsDesc\G
    *************************** 1. row ***************************
    COUNT(1): 7
    1 row in set (0.00 sec)

What are reasonable options for an in-memory multi-core database?

Posted: 30 May 2013 01:03 PM PDT

I'm going to preface this with pointing out that I only need somewhat persistent data. The purpose of this database platform would be to support statistical analysis in R. I usually build my tables from csv files I get from clients and query those tables to build flat files to dump into R. I can either import a .csv type file or run a query from R. So, essentially I'm performing a lot of inner and outer joins on the entire data set to get the resulting output I need. To date, my databases haven't exceeded 5-10GB. I may have projects in the near future that will be larger but I don't see anything that would exceed memory. I need maximum speed for a little while.

To admit a little guilt - I would be happy with SQLite if it supported full joins (without getting too hacky) and if it had good multi-core support. I like its simplicity - it just doesn't perform well enough. Or I'm too ignorant.

Options I have explored are:

  • PostgreSQL in a ramdisk - unsure if a ramdisk would actually be necessary but I've seen a lot of info on the topic.

  • Using MySQL memory tables - I haven't looked to see if other databases have a similar feature. I'm sure they do.

  • McObject's eXtremeDB - It doesn't quite seem like a good fit for me. It's designed to be an embedded DB.

  • VoltDB - I was excited about this option until I read that they don't quite have outer joins and self joins working. Their SQL seems a little too limited.

I'm switching from my laptop (running ubuntu) which frequently overheats to an Amazon EC2 instance which I can scale up as much as I need. Thus the need for good multi-core support. I'll likely build my tables in an on-demand instance and do the heavy querying in spot instances. My laptop has already conditioned me for periodic shut-downs so, I'm not too worried about that. I've already built an instance with R and have been having fun playing with AWS for other projects over the last few months.

I'm not beholden to any specific database platform; however, I have reached a point of information paralysis. Reasonable solutions and things to consider will be very helpful. I'm not looking for a step-by-step how-to - that's what Google and the rest of Stack Exchange are for. I've also been avoiding Amazon's RDS service for this. I'm not exactly sure why - probably so I can use spot instances.

I'm also open to the idea that I'm looking at my problem all wrong. Should I abandon SQL all together?

Oracle pivot on a column with delimited data

Posted: 30 May 2013 12:59 PM PDT

My data is like:

    keycol,col1,col2,col3
    1,a;b;c,some data,some other data
    2,x,some data,some other data
    3,y;z,some data,some other data

Where a column is delimited in the source system with semicolons.

And I want to pivot it to:

    1,a,some data,some other data
    1,b,some data,some other data
    1,c,some data,some other data
    2,x,some data,some other data
    3,y,some data,some other data
    3,z,some data,some other data

I found a technique here, but I can't quite get it to work:

    CREATE TABLE yt
        (keycol int, col1 varchar2(5), col2 varchar2(9), col3 varchar2(15))
    ;

    INSERT ALL
       INTO yt (keycol, col1, col2, col3)
           VALUES (1, 'a;b;c', 'some data', 'some other data')
    SELECT * FROM dual
    ;

    INSERT ALL
       INTO yt (keycol, col1, col2, col3)
           VALUES (2, 'x', 'some data', 'some other data')
    SELECT * FROM dual
    ;

    INSERT ALL
       INTO yt (keycol, col1, col2, col3)
           VALUES (3, 'y;z', 'some data', 'some other data')
    SELECT * FROM dual
    ;

I thought I could include the keycol in the CONNECT BY this way to get parallel recursive chains, but I guess it doesn't work like that. I'm pretty sure I've done this with recursive CTEs in SQL Server.

    SELECT keycol
      ,trim(regexp_substr(col1, '[^;]+', 1, level)) col1
      ,col2
      ,col3
    FROM yt t
    CONNECT BY keycol = PRIOR keycol AND instr(col1, ';', 1, level - 1) > 0

http://sqlfiddle.com/#!4/3d378

FWIW, I'm on Oracle 10g.
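For reference, a workaround often suggested for this pattern on 10g is to add a non-deterministic PRIOR term so that Oracle does not flag the self-referencing CONNECT BY as a cycle. Treat this as a sketch rather than a tested answer:

    -- The PRIOR dbms_random.value term defeats Oracle's loop detection,
    -- allowing one recursive chain per keycol value
    SELECT keycol
      ,trim(regexp_substr(col1, '[^;]+', 1, level)) col1
      ,col2
      ,col3
    FROM yt t
    CONNECT BY keycol = PRIOR keycol
           AND instr(col1, ';', 1, level - 1) > 0
           AND PRIOR dbms_random.value IS NOT NULL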

Autogrow. Primary vs additional data files

Posted: 30 May 2013 12:18 PM PDT

My databases all use autogrow, which grows the primary MDF file by a percentage. But one of the databases, from a third-party application, grows by adding additional .NDF files.

Where is this option set? When I look at the autogrow settings, there is the option to grow or not, by percentage or by x MB, and an option for limited or unlimited growth. But I see nothing that tells it whether to grow the primary MDF or to grow by adding additional NDFs.

And, is there a way to combine these NDF files back into the primary MDF?
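As far as I know, autogrow itself only ever grows existing files; new NDFs would have to be added by the application or a job. The per-file growth settings can be inspected for the database in question with something like:

    -- One row per data/log file; growth is in 8 KB pages unless is_percent_growth = 1
    SELECT name, type_desc, size, growth, is_percent_growth, max_size
    FROM sys.database_files;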

Thanks!

RZ

SSIS How can I write to a Specific Cell in an Excel Sheet

Posted: 30 May 2013 12:59 PM PDT

I am trying to complete what I thought would be a very simple task, but after hours of looking through various articles and attempting different methods, I still have not been able to write to a specific cell using SSIS.

All I am trying to do is write "DOB" in cell D2 of an Excel sheet.

I tried using a SQL command in the Execute SQL Task component to UPDATE the one cell, but kept getting error messages. Below is the code I tried. SSIS came back with an error saying it was expecting at least 1 parameter...

update [Sheet1$D2:D2] SET F1='DOB'  

I also tried

INSERT INTO [Sheet1$D2:D2] VALUES ('DOB')   

but got the following error message:

This table contains cells that are outside the range of cells defined in this spreadsheet.

I tried a few different C# and VB scripts but none of them did the trick. Any ideas or suggestions?

I tried modifying the script in the below article to accomplish my task but was unsuccessful

http://bidn.com/blogs/KeithHyer/bidn-blog/2475/updating-a-single-excel-cell-using-ssis

I'm thinking there's got to be an easier way.

How much data can SQL Server full text indexing handle?

Posted: 30 May 2013 06:30 PM PDT

I realize that the question is vague and it depends on hardware and our needs.

We currently have 5 million rows of data, a total of 5GB of data which we want to index using full text indexing. Our data increases quite rapidly and it's not unreasonable to assume that in a few years it will be closer to a billion rows and a TB of data.

The index is searchable by web site users, and they expect responses within a second or two.

Is it reasonable to assume that this data set will be indexable using SQL Server 2012 full text indexing? Is it common to do full text indexing of this amount of data? And is there any good reading on the subject, for example from others' experience?

PK as ROWGUIDCOL or use a separate rowguid column?

Posted: 30 May 2013 03:02 PM PDT

There's a long-winded debate going on here so I'd like to hear other opinions.

I have many tables with uniqueidentifier clustered PK. Whether this is a good idea is out of scope here (and it's not going to change anytime soon).

Now, the database has to be merge published and the DEVs are advocating the use of a separate rowguid column instead of marking the existing PK as the ROWGUIDCOL.

Basically, they say that the application should never bring into its domain something that is used by replication only (it's only "DBA stuff" for them).

From a performance standpoint, I see no reason why I should add a new column to do something I could do with an existing one. Moreover, since it's only "DBA stuff", why not let the DBA choose?
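Concretely, reusing the existing PK is a one-line change (table and column names here are hypothetical):

    -- Mark the existing uniqueidentifier PK column as the ROWGUIDCOL for replication
    ALTER TABLE dbo.Orders
        ALTER COLUMN OrderGuid ADD ROWGUIDCOL;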

I kind of understand the DEVs' point, but I still disagree.

Thoughts?

EDIT: I just want to add that I'm in the minority in this debate and the DEVs questioning my position are people I respect and trust. This is the reason why I resorted to asking for opinions.
I might also be missing something and could have misunderstood their point.

How can I achieve a unique constraint with two fields?

Posted: 30 May 2013 02:32 PM PDT

I have a table with, e.g., Name and IsDeleted fields. I want to add a constraint so that only one row per Name value can have IsDeleted set to false. There can be many duplicate Name values, but all the others must have IsDeleted set to true.

How would I write this check constraint?
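If this happens to be SQL Server 2008 or later, one way to express the rule is a filtered unique index rather than a check constraint (the table name and the bit typing of IsDeleted are my assumptions):

```sql
-- At most one row per Name may have IsDeleted = 0
CREATE UNIQUE NONCLUSTERED INDEX UQ_MyTable_Name_Active
    ON dbo.MyTable (Name)
    WHERE IsDeleted = 0;
```

PostgreSQL has an equivalent in partial unique indexes (`CREATE UNIQUE INDEX ... WHERE NOT IsDeleted`).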

memory used by Locks

Posted: 30 May 2013 05:08 PM PDT

I am kind of curious: on one of our SQL Server 2012 Enterprise Edition instances with 128 GB of RAM, the database is 370 GB and growing, and the amount of memory used by locks (the OBJECTSTORE_LOCK_MANAGER memory clerk) shows 7,466,016 KB. I can also confirm that by looking at the perf counter:

select * from sys.dm_os_performance_counters where counter_name = 'Lock Memory (KB)'  

However, when I run query

select count(*) from sys.dm_tran_locks  

it shows only 16 locks. So what is using over 7 GB of lock memory? Is there a way to find out?

Does that mean that once memory for locks has been allocated, SQL Server does not deallocate it? In the past hour I have not seen the lock count exceed 500, but the lock memory stays the same.

EDIT: Max Server Memory is 106 GB. We do not use Lock Pages in Memory, and I do not see any memory pressure or any errors in the error log in the past 12 hours. The Available MBytes counter shows more than 15 GB of available memory.

EDIT 2: Activity Monitor consistently shows 0 waiting tasks, so there is obviously no blocking.

Considering that a SQL Server lock takes about 100 bytes of memory, 7 GB is a lot, and I am trying to find out what is using it.
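One way to dig further is to watch the clerk itself over time and compare it against the transient lock count (SQL Server 2012 column names):

```sql
-- In SQL Server 2012, pages_kb replaced single_pages_kb/multi_pages_kb
SELECT type, name, memory_node_id, pages_kb
FROM sys.dm_os_memory_clerks
WHERE type = 'OBJECTSTORE_LOCK_MANAGER'
ORDER BY pages_kb DESC;
```

As far as I understand, the lock manager grows its store in blocks (partitioned per NUMA node on larger boxes) and does not necessarily give the memory back while the instance is below its memory target, so a past spike, such as an overnight batch window, can leave the clerk large even when sys.dm_tran_locks is nearly empty.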

EDIT 3: I ran the server dashboard report "Top Transactions by Lock Count" and it says "currently no locking transactions are running on the system". However, lock memory still shows the value stated above. The DB is busiest during overnight hours.

Postgres won't shut down due to WAL archiving

Posted: 30 May 2013 07:03 PM PDT

I told Postgres to shut down using the init.d script (Linux) over 18 hours ago.

I can still see the processes running:

-bash-3.2$ ps -fe | grep postg
postgres  2299  3265  0 16:06 pts/5    00:00:00 ps -fe
postgres  2300  3265  0 16:06 pts/5    00:00:00 grep postg
root      3263 10185  0 May23 pts/5    00:00:00 su - postgres
postgres  3265  3263  0 May23 pts/5    00:00:01 -bash
root      5985 13676  0 May20 pts/3    00:00:00 su - postgres
postgres  5987  5985  0 May20 pts/3    00:00:01 -bash
postgres 14266     1  0 May23 ?        00:06:34 /usr/pgsql-9.1/bin/postmaster -p 5432 -D /var/lib/pgsql/9.1/data
postgres 14268 14266  0 May23 ?        00:01:51 postgres: logger process
postgres 14270 14266  0 May23 ?        00:01:30 postgres: writer process
postgres 14281 14266  0 May23 ?        00:00:09 postgres: archiver process   last was 000000010000028F000000A3
postgres 14282 14266  0 May23 ?        00:03:07 postgres: stats collector process
postgres 14283 14266  0 May23 ?        00:56:49 postgres: wal sender process postgres 10.40.227.238(12032) streaming 28F/A4000650
postgres 14306 14266  9 May28 ?        04:01:55 postgres: opera_man om 10.40.227.146(44745) SELECT

On the standby server (running normally) I see that:

$ ps -fe | grep postg
cluser   20724  7090  0 09:54 pts/0    00:00:00 psql -U postgres report
postgres 20726 21475  0 09:54 ?        00:01:12 postgres: postgres report [local] idle
postgres 21475     1  0 Apr24 ?        00:00:03 /usr/pgsql-9.1/bin/postmaster -p 5432 -D /var/lib/pgsql/9.1/data
postgres 21477 21475  0 Apr24 ?        00:00:01 postgres: logger process
postgres 21478 21475  0 Apr24 ?        05:34:10 postgres: startup process   recovering 000000010000028F000000A4
postgres 21485 21475  0 Apr24 ?        00:07:16 postgres: writer process
postgres 21486 21475  0 Apr24 ?        00:00:18 postgres: stats collector process
postgres 24091 21475  0 May23 ?        00:46:49 postgres: wal receiver process   streaming 28F/A40006E0
cluser   32136 30224  0 16:09 pts/16   00:00:00 grep postg

The log shows 'FATAL: the database system is shutting down'. What could be the reason for this, and how do I get it running again?
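A likely explanation: if the init.d script issued a "smart" shutdown (the default mode), the postmaster waits for all regular connections to finish, and the backend still running a long SELECT (pid 14306 in the listing above) keeps it waiting indefinitely. A sketch of how to confirm and unblock it, using the 9.1 column names (procpid/current_query; they were renamed in 9.2) — use with care:

```sql
-- See which client sessions are still connected
SELECT procpid, usename, current_query
FROM pg_stat_activity;

-- Terminate the remaining client backends so the shutdown can complete
SELECT pg_terminate_backend(procpid)
FROM pg_stat_activity
WHERE procpid <> pg_backend_pid();
```

Alternatively, `pg_ctl stop -m fast` performs a fast shutdown that disconnects clients itself rather than waiting for them.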

Algorithm for finding the longest prefix

Posted: 30 May 2013 03:28 PM PDT

I have two tables.

The first is a table with prefixes:

code name price
343  ek1   10
3435 nt     4
3432 ek2    2

The second holds call records with phone numbers:

number        time
834353212     10
834321242     20
834312345     30

I need to write a script which finds the longest prefix from the prefixes table for each record, and writes all this data to a third table, like this:

number        code   ....
834353212     3435
834321242     3432
834312345     343

For number 834353212 we must trim the leading '8' and then find the longest code from the prefix table, which is 3435.
We must always drop the first '8', and the prefix must match at the beginning of the number.

I solved this task a long time ago, in a very bad way: a terrible Perl script which issued a lot of queries for each record. The script:

  1. Takes a number from the calls table and, in a loop, builds substrings from length(number) down to 1 => $prefix

  2. Runs the query: select count(*) from prefixes where code like '$prefix'

  3. If count > 0, takes the first prefix and writes it into the table

The first problem is the number of queries: it is call_records * length(number). The second problem is the LIKE expressions; I am afraid those are slow.

I tried to solve the second problem by:

CREATE EXTENSION pg_trgm;
CREATE INDEX prefix_idx ON prefix USING gist (code gist_trgm_ops);

That sped up each query, but did not solve the problem in general.

I have 20k prefixes and 170k numbers now, and my old solution is too slow. It looks like I need a new solution without loops: ideally a single query for each call record, or something along those lines.
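Since pg_trgm is mentioned, I assume PostgreSQL; in that case the whole job can be done in one statement with DISTINCT ON, picking the longest matching code per number. Table and column names are taken from the question, and the number column is assumed to be text (cast it first otherwise):

```sql
-- For each number: drop the leading '8', keep the longest matching prefix
SELECT DISTINCT ON (c.number)
       c.number,
       p.code
FROM   calls    c
JOIN   prefixes p ON substring(c.number from 2) LIKE p.code || '%'
ORDER  BY c.number, length(p.code) DESC;
```

Numbers with no matching prefix drop out of an inner join; switch to a LEFT JOIN if they should be kept with a NULL code.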

I can't start MySQL 5.6 server due to "TIMESTAMP with implicit DEFAULT value is deprecated" error

Posted: 30 May 2013 03:17 PM PDT

OK, here is my story: I went to the mysql.com site and downloaded the file mysql-5.6.11-winx64.zip to C:\, then unzipped it. Then I went to the bin folder and started the server using this command:

C:\mysql-5.6.11-winx64\bin\mysqld --standalone --local-infile=1  

Everything was OK, as the MySQL server started smoothly. I then stopped the server using this command:

C:\mysql-5.6.11-winx64\bin\mysqladmin -u root shutdown  

The server shut down properly.

I used this method to start and stop the MySQL server a few times without any problem.

However, yesterday I started the MySQL server, but at the end of the day I turned off my PC while the MySQL server was still running (i.e., I did not shut down MySQL using "bin\mysqladmin -u root shutdown" before turning off my PC).

Also, while my PC was being turned off, Windows 7 was downloading update packages from the internet, so the Windows 7 configuration could have changed.

But today I could not start the MySQL server using the above command, as there's an error:

[Warning] TIMESTAMP with implicit DEFAULT value is deprecated.
Please use --explicit_defaults_for_timestamp server option (see documentation for more details).

I searched the internet, and some people said that I have to add this line to the my.cnf file:

explicit_defaults_for_timestamp = TRUE  

However, there is no my.cnf file in MySQL 5.6. There are a lot of cnf files in MySQL 5.6, but with different names:

mysql-5.6.11-winx64\data\auto.cnf
mysql-5.6.11-winx64\mysql-test\include\default_client.cnf
mysql-5.6.11-winx64\mysql-test\include\default_my.cnf
mysql-5.6.11-winx64\mysql-test\include\default_mysqld.cnf
mysql-5.6.11-winx64\mysql-test\include\default_mysqld_autosize.cnf
mysql-5.6.11-winx64\mysql-test\include\default_mysqld_ndbd.cnf

I tried adding explicit_defaults_for_timestamp = TRUE to all of the above cnf files, but it didn't help; I still couldn't start MySQL.
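Two things worth noting here. First, that TIMESTAMP message is only a [Warning], so it is not by itself what stops the server; the real failure should appear further down in the error log (the HOSTNAME.err file in the data directory) and, given the unclean power-off, may well be InnoDB crash-recovery trouble. Second, mysqld does not read any of those mysql-test cnf files; on Windows it looks for my.ini/my.cnf in a few fixed locations or in the basedir, or you can point at one explicitly. A minimal sketch, with an assumed path:

```ini
# C:\mysql-5.6.11-winx64\my.ini — start the server with, e.g.:
#   mysqld --defaults-file=C:\mysql-5.6.11-winx64\my.ini --standalone --local-infile=1
# (--defaults-file must be the first option on the command line)
[mysqld]
explicit_defaults_for_timestamp = TRUE
```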

I don't want to reinstall, because I created a big DB in the current MySQL server.

So how do I fix it?

Note: the first time I ran the MySQL server, Win7 popped up a message asking something (I can't remember exactly) like "do you allow ... Firewall". Do you think that could be causing the issue, since Win7 updated its configuration and somehow reset the firewall so the MySQL server couldn't start?

How to set SQL Server index pages per fragment?

Posted: 30 May 2013 02:50 PM PDT

I have SQL Server 2008 and a number of databases. I have discovered that one of my tables' indexes is extremely fragmented (how I know: http://msdn.microsoft.com/en-us/library/ms189858.aspx).

The avg_fragmentation_in_percent is a whopping 97% or more, and the fragment count is in the thousands. Very interestingly, avg_fragment_size_in_pages is just slightly more than 1. I can defragment it, which helps, but I'd like to prevent this from happening in the first place.

Why are the pages per fragment so low? I have FILEGROWTH = 128MB for the DB as a whole, but this particular table is the most active. So is there a way to tell SQL Server to allocate more growth to this table or index so that the pages-per-fragment ratio is higher?
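For what it's worth, FILEGROWTH only governs how the data file itself grows; it does not influence how extents are assigned to a particular index. Fragments of roughly one page usually point to page splits caused by inserts in non-sequential key order (typical of random GUID-style keys). One mitigation sketch is to rebuild with free space reserved so pages can absorb new rows before splitting; the index and table names here are placeholders:

```sql
-- Leave 10% free space per leaf page to reduce future page splits
ALTER INDEX IX_MyIndex ON dbo.MyTable
REBUILD WITH (FILLFACTOR = 90);
```

The right fill factor is a trade-off: lower values delay splits but make the index larger, so it is worth measuring fragmentation growth between maintenance windows before settling on a value.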

pgAdmin3 can't connect properly to Postgres 9.2

Posted: 30 May 2013 02:48 PM PDT

I have installed pgAdmin3 version 1.10.2; however, when it connects to a PostgreSQL server version 9.2, it says:

Warning:

This version of pgAdmin has only been tested with PostgreSQL version 8.4 and below and may not function correctly with this server. Please upgrade pgAdmin.

Then it will throw this error:

An error has occurred:

ERROR: column "datconfig" does not exist LINE 1: ...b.dattablespace AS spcoid, spcname, datallowconn, datconfig,...

etc.

Also, I am quite stuck on this version of pgAdmin3 as I am using Ubuntu Lucid. So is there a workaround to make pgAdmin3 work properly while keeping my current version?

Relation to original tables or to existing linking table

Posted: 30 May 2013 11:27 AM PDT

In my database I have a table with different settings for my app. Each setting is related to a guest (table guests) and an event (table events). So basically each guest has specific settings for each event he is linked to.

Every guest which has settings is already linked to the events for other reasons, so there is an existing event_guest table with the necessary links.

So I'm not exactly sure about how I should link the settings table with the others.

Option 1

I link the settings with the table event_guest which links guests and events.

[Diagram: settings table referencing the event_guest linking table]

Option 2

I link the settings with the "original" tables guests and events.

[Diagram: settings table referencing the guests and events tables directly]

Spontaneously I would go with option 1, but I'm a little bit unsure about it...

My concern with option 1 is that if I have a lot of deep relations, maybe even another table after settings, I will need more complex SQL queries to get, for example, data from settings, guests, and events. Could this become a performance issue?

Which is the better solution and what are its advantages and disadvantages?
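For what it's worth, option 1 can be sketched like this, assuming event_guest has a composite key over the two foreign keys (column names are illustrative):

```sql
-- Option 1: settings references the linking table's composite key
CREATE TABLE settings (
    event_id  int NOT NULL,
    guest_id  int NOT NULL,
    -- ...actual setting columns here...
    PRIMARY KEY (event_id, guest_id),
    FOREIGN KEY (event_id, guest_id)
        REFERENCES event_guest (event_id, guest_id)
);
```

One argument for option 1: the database itself then guarantees that settings can only exist for guest/event pairs that are actually linked, whereas with option 2 a settings row could reference a guest and an event that have no row in event_guest at all.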

OK to put temp tablespace on volatile storage or to omit it from backups? (Postgresql)

Posted: 30 May 2013 07:14 PM PDT

I would intuit that it's fine, but I just want to make sure there are no gotchas from a recovery point of view:

If I were to lose my temp tablespace upon system crash, would this prevent proper crash recovery?

Also, if I were to omit the temp tablespace from the base backup, would that prevent proper backup recovery?

mysql optimize table crash

Posted: 30 May 2013 02:14 PM PDT

When I run OPTIMIZE TABLE `table` on a MyISAM table of about 300 MB, the table ends up crashed and must be repaired. What could cause this problem? The same thing happens with other tables over 300 MB.
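OPTIMIZE TABLE on MyISAM rebuilds the table, which needs temporary space (in tmpdir and next to the .MYD/.MYI files) and room in the sort buffers; running out mid-rebuild is one common way to end up with a table marked as crashed. After a failure, a recovery sequence along these lines may help (table name taken from the question):

```sql
-- Verify the damage first; EXTENDED is slower but more thorough
CHECK TABLE `table` EXTENDED;
REPAIR TABLE `table`;
```

Checking free disk space in tmpdir and the datadir during the OPTIMIZE run, and reviewing the server error log for the exact failure message, should narrow down the cause.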

Is it possible to have extra tables in a Slave with MySQL Replication

Posted: 30 May 2013 01:14 PM PDT

As my title mentions, I have a Master and a Slave database.

Master is for operational data and my slave is mainly for reporting.

The issue is that I need to create extra tables on the reporting side that can't be on the master, but the way my replication is set up at the moment (the simplest one mentioned in the official docs), this breaks replication.

How could I add tables on the slave without the master caring about it? Is it even possible?
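Extra slave-only tables do not break replication by themselves; they only cause trouble if statements replicated from the master touch tables of the same name. To make the slave explicitly skip anything the master might do to the reporting tables, a replication filter on the slave is one option (the database name and naming pattern here are assumptions):

```ini
# my.cnf on the slave — ignore replicated statements for reporting tables
[mysqld]
replicate-ignore-table      = mydb.report_summary
replicate-wild-ignore-table = mydb.report\_%
```

The filters take effect after a slave restart; keeping the slave-only tables under a distinctive prefix (as sketched) makes the wildcard rule easy to maintain.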

Replication issue - CREATE SELECT alternative?

Posted: 30 May 2013 03:14 PM PDT

I have a MySQL 5.1 slave for our BI team.

They need to run some CREATE ... SELECT statements with big SELECT queries (several million rows).

As CREATE ... SELECT is DDL, if replication attempts to update rows in the same tables as the SELECT statement, replication is blocked until the CREATE ... SELECT finishes.

Do you know a good non-blocking alternative to those CREATE ... SELECT statements?

I thought of SELECT ... INTO OUTFILE followed by LOAD DATA INFILE, but that would fill up our disks, as BI guys like to do... :)
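One commonly suggested split, which keeps everything inside MySQL, is to issue the quick DDL first and then do the copy as ordinary DML, so the long-running part is an INSERT ... SELECT rather than a CREATE ... SELECT (table names are illustrative):

```sql
-- Step 1: fast DDL that only defines the structure
CREATE TABLE bi_summary (
    customer_id int NOT NULL,
    total       decimal(12,2)
);

-- Step 2: the long-running part is now plain DML
INSERT INTO bi_summary
SELECT customer_id, SUM(amount)
FROM   orders
GROUP  BY customer_id;
```

Whether this fully removes the blocking depends on the storage engine and isolation level in play, so it is worth testing against a realistic replication load.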

Max.

query processor ran out of internal resources and could not produce a query plan

Posted: 30 May 2013 07:16 PM PDT

This is showing up in the logs several times a night. How do I find the query causing the issue? SQL Server 2008 R2 SP1.

Thank you

Can I monitor the progress of importing a large .sql file in sqlite3 using zenity --progress?

Posted: 30 May 2013 04:14 PM PDT

I'm trying to monitor the progress of a sqlite3 command importing a large .sql file into a database using zenity --progress.

I've tried the following, which imports the file; however, no progress is shown:

sqlite3 DATABASE < import_file.sql | zenity --progress --percentage=0 --auto-close  

I know I need to provide zenity with a numeric source for the progress, but I don't really know how to obtain the number.

Can anyone help me?
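One approach, assuming the pv utility is installed: pv knows the size of the .sql file, so with -n it emits an integer percentage (on stderr) as the data flows through, which is exactly the numeric input zenity --progress expects on stdin:

```shell
# pv streams the file into sqlite3 and prints 0-100 progress numbers on
# stderr; the subshell redirection feeds those numbers to zenity
(pv -n import_file.sql | sqlite3 DATABASE) 2>&1 \
    | zenity --progress --percentage=0 --auto-close
```

Note the progress reflects bytes read from the file, not statements committed, so it can appear to finish slightly before sqlite3 does.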
