Saturday, June 22, 2013

[how to] (newb) Riak nodes automatically shutting down

(newb) Riak nodes automatically shutting down

Posted: 22 Jun 2013 06:23 PM PDT

I'm going through the book "Seven Databases in Seven Weeks" and am on the Riak chapter. I've installed Riak successfully and am able to start the dev1, dev2, and dev3 nodes.

If I run ps I see the three processes:

24447 ttys002    0:05.33 /Users/admin/riak-1.3.1/dev/dev1/erts-5.9.3.1/bin/beam.smp -K true -A 64 -
24692 ttys003    0:02.50 /Users/admin/riak-1.3.1/dev/dev2/erts-5.9.3.1/bin/beam.smp -K true -A 64 -
24815 ttys004    0:01.13 /Users/admin/riak-1.3.1/dev/dev3/erts-5.9.3.1/bin/beam.smp -K true -A 64 -

If I ping soon after starting the nodes then I get a pong.

HOWEVER, after about 30 seconds the nodes seem to shut down. They don't show up in a ps check, are not responding to pings, and I get the following error when I try to cluster join:

Node dev2@127.0.0.1 is not reachable!  

What's going on?

Capture Data Change Vs Triggers

Posted: 22 Jun 2013 12:35 PM PDT

I am constructing a database that needs to keep track of historical data. It is an electronic medical records system. All data needs to be updated in real time so the user can see what changes happened in the application.

Would CDC be the way to go, or should I just have triggers on the database that push the information into audit tables?
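
For reference, the trigger route I have in mind is a sketch like this (all table and column names are hypothetical):

-- dbo.Patients is the tracked table; dbo.Patients_Audit stores one row per change.
CREATE TRIGGER trg_Patients_Audit
ON dbo.Patients
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Rows in "inserted" are new/updated values; rows only in "deleted" were removed.
    INSERT INTO dbo.Patients_Audit (PatientId, Operation, ChangedAt)
    SELECT COALESCE(i.PatientId, d.PatientId),
           CASE WHEN i.PatientId IS NULL THEN 'D'
                WHEN d.PatientId IS NULL THEN 'I'
                ELSE 'U' END,
           GETDATE()
    FROM inserted AS i
    FULL OUTER JOIN deleted AS d ON i.PatientId = d.PatientId;
END;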

How to determine that the log chain is broken

Posted: 22 Jun 2013 12:10 PM PDT

MS SQL Server 2005.
I have a DB with the full recovery model. If I execute on my database

ALTER DATABASE test SET RECOVERY SIMPLE
ALTER DATABASE test SET RECOVERY FULL

or

BACKUP LOG test WITH TRUNCATE_ONLY

then if I try to run a log backup

BACKUP LOG test TO DISK = 'backupfile.bak'

I receive the error message:

BACKUP LOG cannot be performed because there is no current database backup.

Question.
How can I check that the log chain is broken, i.e. that a log backup would fail, without actually running BACKUP LOG?
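
For reference, a sketch of the kind of check I am after, based on sys.database_recovery_status (my understanding is that last_log_backup_lsn goes NULL once the chain is broken, until a new full backup is taken):

-- If last_log_backup_lsn is NULL, the next BACKUP LOG will fail
-- until a full backup re-establishes the chain.
SELECT d.name,
       r.last_log_backup_lsn
FROM sys.databases AS d
JOIN sys.database_recovery_status AS r
    ON d.database_id = r.database_id
WHERE d.name = 'test';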

Migrating Oracle packages to SqlServer

Posted: 22 Jun 2013 08:31 PM PDT

I'm moving a database from Oracle to SQL Server (2012, if that matters). The process of migrating tables/views is more or less straightforward and automated. Triggers are also not an issue since I don't have too many of them.

However, I wonder what the best practice is for moving Oracle packages. As far as I know, SQL Server doesn't have anything similar. My idea is to map each package to a SQL Server schema (thus, for instance, I'll have 'package1', 'package2', etc. schemas in SQL Server, each of them containing no tables, just functions/procedures/types), as sketched below. I believe this will preserve the original purpose of the packages and at the same time reduce the number of changes required on the application side.
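
As a sketch of what I mean (all names are placeholders):

-- One schema per former Oracle package; procedures keep their package-local names.
CREATE SCHEMA package1;
GO

CREATE PROCEDURE package1.get_customer
    @customer_id int
AS
BEGIN
    SELECT * FROM dbo.customers WHERE id = @customer_id;
END;
GO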

Does this make sense, or am I on the wrong track?

Thank you.

SQL Server Partitioning...Really? [closed]

Posted: 22 Jun 2013 09:35 AM PDT

I was just looking at SQL Server's (2008) implementation of partitioning. Is it me, or is SQL Server's implementation kind of a disaster from a management standpoint when compared to other RDBMSs (Oracle, Greenplum, etc.)? I ask because it seems as though you have to create separate filegroups and files for each partition. This doesn't seem to be an implementation that supports a very dynamic data set. It's great for data that is very well defined and known, but it seems the DBA must go through a lot of work to maintain partitions as the partition key values change. Can somebody educate me a bit more on why the underlying partitioning file storage mechanism isn't a bit more abstracted?
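
For reference, the moving parts I'm describing look roughly like this (filegroup names and boundary values are made up, and the filegroups must already exist):

-- Each partition maps to a filegroup; accommodating a new range typically
-- means adding a filegroup/file and splitting the partition function.
CREATE PARTITION FUNCTION pf_OrderDate (datetime)
    AS RANGE RIGHT FOR VALUES ('2012-01-01', '2013-01-01');

CREATE PARTITION SCHEME ps_OrderDate
    AS PARTITION pf_OrderDate TO (fg2011, fg2012, fg2013);

CREATE TABLE dbo.Orders (
    OrderId   int      NOT NULL,
    OrderDate datetime NOT NULL
) ON ps_OrderDate (OrderDate);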

NpgSQL performance and reliability?

Posted: 22 Jun 2013 05:57 AM PDT

Is NpgSQL a good choice for serious LOB apps (WinForms / WPF)?

I did some testing and it strikes me as a solid .NET provider, but I was hoping to confirm this with someone who has more experience using NpgSQL in LOB apps.

Cannot connect to server. Too many symbolic links. Postgres Error

Posted: 22 Jun 2013 07:54 AM PDT

I get this error when I try connecting to a PostgreSQL server:

psql: could not connect to server: Too many levels of symbolic links
    Is the server running locally and accepting
    connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?

What can I do to fix this?

I have Postgres installed on my Mac.

csv file handling

Posted: 22 Jun 2013 12:25 PM PDT

I am using an EAV-like structure to store CSV files. There aren't any predetermined columns, so it needs to be able to expand the columns/data on demand. Is this a fairly good way of handling such a task, or is there a better/more efficient way of doing this?

Below is the SQL structure and an example query that the application builds on the fly to help search through the data and display it.

CREATE TABLE IF NOT EXISTS `table` (
    `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
    `some_link_id` int(10) unsigned NOT NULL,
    `date_created` datetime NOT NULL,
    PRIMARY KEY (`id`),
    KEY `some_link_id` (`some_link_id`),
    CONSTRAINT `FK_some_fk_name` FOREIGN KEY (`some_link_id`)
        REFERENCES `some_other_table` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `table` (`id`, `some_link_id`, `date_created`) VALUES
(111, 12, '2013-06-22 02:31:00'),
(112, 16, '2013-06-22 03:34:31');

CREATE TABLE IF NOT EXISTS `table_eav` (
    `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
    `name` varchar(50) NOT NULL,
    `data_id` int(10) unsigned NOT NULL,
    `value` text NOT NULL,
    PRIMARY KEY (`id`),
    UNIQUE KEY `name_data_id` (`name`,`data_id`),
    KEY `FK_table_eav_table` (`data_id`),
    CONSTRAINT `FK_table_table_eav` FOREIGN KEY (`data_id`)
        REFERENCES `table` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `table_eav` (`id`, `name`, `data_id`, `value`) VALUES
(495, 'name', 112, 'brian'),
(496, 'email', 112, 'email@email.com');

This is the query that gets built automagically by the application (I get the distinct selection from the name column). I started with a stored procedure but needed to generate the columns dynamically on the fly...

select *
from (
    select pcd.id,
           max(if(pcde.name = 'name',  value, '')) as `name`,
           max(if(pcde.name = 'email', value, '')) as `email`
    from `table` pcd
    inner join `table_eav` pcde on pcde.data_id = pcd.id
    where pcd.some_link_id = 16
    group by pcd.id
) as t1

Estimated vs Actual rows differences (actual much smaller than estimated) - sort

Posted: 22 Jun 2013 04:46 AM PDT

I'm running a query that is processing some nodes out of an XML document. My estimated subtree cost is in the millions, and it seems it all comes from a sort operation SQL Server is performing on some data that I extract from XML columns via XPath. The Sort operation has an estimated number of rows of around 19 million, whereas the actual row count is about 800. The query itself runs reasonably well (1 - 2 seconds), but the discrepancy has me wondering about the query performance: why is this difference so large?

Filter on a window function without writing an outer SELECT statement

Posted: 22 Jun 2013 11:35 AM PDT

Since window functions cannot be included in the WHERE clause of the inner SELECT, is there another method that could be used to write this query without the outer SELECT statement? I'm using Oracle. Here is the sqlfiddle.

SELECT MERCHANTID, WAREHOUSEID, PRODUCTCODE
FROM (
  SELECT 0
  , WMP.MERCHANTID
  , WMP.WAREHOUSEID
  , WMP.PRODUCTCODE
  , RANK() OVER (PARTITION BY ML.MASTERMERCHANTID, WMP.PRODUCTCODE
                 ORDER BY ML.PREFERENCEORDER ASC NULLS LAST) MERCH_RANK
  , RANK() OVER (PARTITION BY WMP.MERCHANTID, WMP.PRODUCTCODE
                 ORDER BY WM.PREFERENCEORDER ASC NULLS LAST) WARE_RANK
  FROM MW_WAREHOUSEMERCHANTPRODUCT WMP
    LEFT OUTER JOIN MW_WAREHOUSEMERCHANT WM ON 0=0
                AND WMP.MERCHANTID  = WM.MERCHANTID
                AND WMP.WAREHOUSEID = WM.WAREHOUSEID
    LEFT OUTER JOIN MW_MERCHANTLINK ML ON 0=0
                AND WMP.MERCHANTID = ML.LINKEDMERCHANTID
    LEFT OUTER JOIN MW_WAREHOUSEMERCHANTPRODUCT MASTER ON 0=0
                AND ML.MASTERMERCHANTID = MASTER.MERCHANTID
                AND WMP.PRODUCTCODE     = MASTER.PRODUCTCODE
  WHERE 0=0
    AND WMP.STOCKLEVEL > 0
    AND NVL(MASTER.STOCKLEVEL, 0) <= 0
)
WHERE 0=0
  AND MERCH_RANK = 1
  AND WARE_RANK  = 1
;

Errors while creating multiple mysql-5.5 instances

Posted: 22 Jun 2013 05:35 AM PDT

I have installed a 3rd MySQL instance on my testing server.

2 instances are already running without any issues.

I installed the 3rd instance from the mysql-5.5.30 zip source and it installed successfully, but when I try to start the 3rd instance of MySQL it says:

MySQL server PID file could not be found! [FAILED]

Starting MySQL........................................................

The server quit without updating PID file.

1st instance running on 3305

BASEDIR: /usr/local/mysql
Configuration File: /etc/my.cnf
Socket: /tmp/mysql.sock

2nd instance running on 3306

BASEDIR: /backup/mysql-cbt
Configuration File: /backup/mysql-cbt/my.cnf
Socket: /backup/mysql-cbt/mysql.sock

3rd instance running on 3307

BASEDIR: /home/mysql-5/
Configuration File: /home/mysql-5/my.cnf
Socket: /home/mysql-5/mysql.sock

The error log is as follows.

130513 11:22:23 mysqld_safe Starting mysqld daemon with databases from /backup/mysql-cbt/data
130513 11:22:23 InnoDB: The InnoDB memory heap is disabled
130513 11:22:23 InnoDB: Mutexes and rw_locks use GCC atomic builtins
130513 11:22:23 InnoDB: Compressed tables use zlib 1.2.3
130513 11:22:23 InnoDB: Using Linux native AIO
130513 11:22:23 InnoDB: Initializing buffer pool, size = 128.0M
130513 11:22:23 InnoDB: Completed initialization of buffer pool
130513 11:22:23 InnoDB: highest supported file format is Barracuda.
130513 11:22:23 InnoDB: Waiting for the background threads to start
130513 11:22:24 InnoDB: 5.5.30 started; log sequence number 1595675
130513 11:22:24 [ERROR] /backup/mysql-cbt/bin/mysqld: unknown option '--safe-show-database'
130513 11:22:24 [ERROR] Aborting
130513 11:22:24 InnoDB: Starting shutdown...
130513 11:22:25 InnoDB: Shutdown completed; log sequence number 1595675
130513 11:22:25 [Note] /backup/mysql-cbt/bin/mysqld: Shutdown complete
130513 11:22:25 mysqld_safe mysqld from pid file /backup/mysql-cbt/cbt-instance.pid ended

Still unable to figure out this error. How can I start the 3rd instance?

Installation

Here is the story from the beginning. I installed MySQL from source:

  1. extracted the source mysql-5.5.30-linux2.6-x86_64.tar.gz to a directory, then recursively changed ownership of the mysql directory to user & group mysql.
  2. in the scripts directory of mysql I ran this command as root:
[root@server /backup/mysql-cbt/scripts]# ./mysql_install_db --basedir=/backup/mysql-cbt/ --datadir=/backup/mysql-cbt/data --defaults-file=/backup/mysql-cbt/my.cnf --user=mysql
Installing MySQL system tables...
OK
Filling help tables...
OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:

/backup/mysql-cbt//bin/mysqladmin -u root password 'new-password'
/backup/mysql-cbt//bin/mysqladmin -u root -h 69.16.196.128 password 'new-password'

Alternatively you can run:
/backup/mysql-cbt//bin/mysql_secure_installation
which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the manual for more instructions.

You can start the MySQL daemon with:
cd /backup/mysql-cbt/ ; /backup/mysql-cbt//bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl
cd /backup/mysql-cbt//mysql-test ; perl mysql-test-run.pl

Please report any problems with the /backup/mysql-cbt//scripts/mysqlbug script!

When I restart this instance, it gives the error about updating the PID file and exits.

Which step is missing?

My my.cnf file (/backup/mysql-cbt/my.cnf):

[mysqld]
federated
basedir   = /backup/mysql-cbt
datadir   = /backup/mysql-cbt/data
log-error = /backup/mysql-cbt/cbt-instance.err
pid-file  = /backup/mysql-cbt/cbt-instance.pid
#socket   = /tmp/mysql.sock
socket    = /backup/mysql-cbt/mysql.sock
port      = 3310
user      = mysql
tmpdir    = /backup/mysql-cbt/tmpdir

/etc/init.d/mysql_cbt

# If you want to affect other MySQL variables, you should make your changes
# in the /etc/my.cnf, ~/.my.cnf or other MySQL configuration files.

# If you change base dir, you must also change datadir. These may get
# overwritten by settings in the MySQL configuration files.

basedir=/backup/mysql-cbt
datadir=/backup/mysql-cbt/data

# Default value, in seconds, afterwhich the script should timeout waiting
# for server start.
# Value here is overriden by value in my.cnf.
# 0 means don't wait at all
# Negative numbers mean to wait indefinitely
service_startup_timeout=900

# Lock directory for RedHat / SuSE.
lockdir='/var/lock/subsys'
lock_file_path="$lockdir/mysql"

# The following variables are only set for letting mysql.server find things.

# Set some defaults
mysqld_pid_file_path=/backup/mysql-cbt/mysql-cbt-instance.pid
if test -z "$basedir"
then
  basedir=/usr/local/mysql-cbt
  bindir=/usr/local/mysql-cbt/bin
  if test -z "$datadir"
  then
    datadir=/backup/mysql-cbt/data
  fi
  sbindir=/backup/mysql-cbt/bin
  libexecdir=/backup/mysql-cbt/bin
else
  bindir="$basedir/bin"
  if test -z "$datadir"
  then
    datadir="$basedir/data"
  fi
  sbindir="$basedir/sbin"
  libexecdir="$basedir/libexec"
fi

How do I identify the remote db agent name to use in create_database_destination on Oracle 11gR2?

Posted: 22 Jun 2013 07:35 PM PDT

I am trying to set up DBMS_SCHEDULER in Oracle 11g to run a remote database job.

I have a remote Oracle 11g R2 database on unix and a local one on Windows.

I read that you can install the Oracle Scheduler agent from the 11g client install for machines that don't have Oracle installed, but that this is not needed for running remote jobs if Oracle is present on both machines. With the remote agent installation, you run schagent and provide parameters to register the agent with the remote machine, but I can't find any instructions on the web about how to register remote agents when both machines have Oracle installed, or what to use as the agent name in that case.

I have added an entry to tnsnames.ora for the remote DB and can tnsping, etc.

If I run the dbms_scheduler.create_database_destination procedure, it requires an agent name, but where can I find this for the database, and how can I check whether it is running on Unix or Windows?
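
For reference, the call I am trying to make looks like this (destination and TNS names are placeholders; the agent parameter is the part I cannot determine):

BEGIN
  DBMS_SCHEDULER.create_database_destination(
    destination_name => 'REMOTE_DB_DEST',   -- placeholder name
    agent            => '???',              -- the agent name I am looking for
    tns_name         => 'REMOTEDB',         -- my tnsnames.ora entry
    comments         => 'remote 11gR2 database on unix');
END;
/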

Database migration, how to

Posted: 22 Jun 2013 02:42 PM PDT

I have two databases, DB1 and DB2. In both databases exist these two tables: mo_sms and mt_sms. This is the structure of those tables:

CREATE TABLE IF NOT EXISTS `mo_sms` (
  `id_MO` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `sms_proceso` char(1) NOT NULL COMMENT 's=SMS c=CHAT d=Descargas',
  `msisdn` varchar(20) NOT NULL,
  `texto_sms` varchar(160) DEFAULT NULL,
  `brand_id` int(10) unsigned NOT NULL,
  `fecha_sms` datetime NOT NULL,
  `comando_id` int(10) unsigned NOT NULL DEFAULT '0',
  `alias_desc` varchar(25) DEFAULT NULL,
  `shortcode_id` int(10) unsigned NOT NULL,
  `precio` float(11,2) unsigned DEFAULT '0.00' COMMENT 'Precio del MO',
  `id_user` int(10) unsigned NOT NULL,
  `state` char(1) NOT NULL DEFAULT '0' COMMENT '0=Por procesar 1=Procesado',
  `tipo_sms` tinyint(4) NOT NULL DEFAULT '0' COMMENT '0=Normal, <>0 dependera del tipo predeterminado',
  `anio_sms` smallint(4) unsigned NOT NULL DEFAULT '0',
  `mes_sms` smallint(2) unsigned zerofill NOT NULL DEFAULT '00',
  PRIMARY KEY (`id_MO`),
  KEY `ix_carrier` (`brand_id`),
  KEY `ix_fecha_sms` (`fecha_sms`),
  KEY `ix_fecha_carrier_keyword` (`fecha_sms`,`brand_id`,`alias_desc`),
  KEY `ix_msisdn` (`msisdn`),
  KEY `ix_sms_proceso` (`sms_proceso`),
  KEY `ix_sms_proceso_state` (`sms_proceso`,`state`),
  KEY `ix_id_user` (`id_user`),
  KEY `ix_fecha_sms_user` (`fecha_sms`,`id_user`),
  KEY `ix_varios` (`anio_sms`,`mes_sms`,`comando_id`,`shortcode_id`,`brand_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='Tabla de MO' AUTO_INCREMENT=82972;

CREATE TABLE IF NOT EXISTS `mt_sms` (
  `id_MT` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `sms_proceso` char(1) NOT NULL DEFAULT 'c' COMMENT 's=SMS c=CHAT d=Descargas',
  `msisdn` varchar(20) NOT NULL,
  `texto_sms` varchar(160) DEFAULT NULL,
  `brand_id` int(10) unsigned NOT NULL,
  `fecha_sms` datetime NOT NULL,
  `comando_id` int(10) unsigned NOT NULL DEFAULT '0',
  `alias_desc` varchar(25) DEFAULT NULL,
  `shortcode_id` int(10) unsigned NOT NULL,
  `id_user` int(10) unsigned NOT NULL,
  `tipo_sms` tinyint(4) unsigned NOT NULL DEFAULT '0' COMMENT '0=Normal, <>0 dependera del tipo predeterminado',
  `id_MO` int(11) unsigned NOT NULL,
  `state` char(1) DEFAULT '0' COMMENT '0=Por Procesar 1=Procesado',
  `anio_sms` smallint(4) unsigned NOT NULL DEFAULT '0',
  `mes_sms` smallint(2) unsigned zerofill NOT NULL DEFAULT '00',
  PRIMARY KEY (`id_MT`),
  KEY `ix_carrier` (`brand_id`),
  KEY `ix_fecha_sms` (`fecha_sms`),
  KEY `ix_fecha_carrier_keyword` (`fecha_sms`,`brand_id`,`alias_desc`),
  KEY `ix_msisdn` (`msisdn`),
  KEY `ix_sms_proceso` (`sms_proceso`),
  KEY `ix_id_user` (`id_user`),
  KEY `ix_fecha_sms_user` (`fecha_sms`,`id_user`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='Tabla de MT' AUTO_INCREMENT=93853;

I have some values in DB2 that should be migrated (inserted) into DB1. My problem is that I don't know how to get the records from the DB2.mo_sms table, together with their related rows from DB2.mt_sms, and then insert them into DB1.mo_sms and DB1.mt_sms. For example, I'll get all the records from mo_sms with fecha_sms LIKE '%2013-04-19%'; if I then insert those records into DB1.mo_sms, new IDs will be generated and the integrity will be lost. Should I do this programmatically, or is there a way to do it using just SQL queries? I'm using MySQL as the DBMS.
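
For example, a sketch of the offset idea I am considering, assuming both schemas are reachable from the same connection (the date filter matches the example above; columns are listed in table order, and ids are shifted by a fixed offset so the mo/mt link survives):

-- Offsets larger than any existing id in DB1.
SET @mo_off := (SELECT COALESCE(MAX(id_MO), 0) FROM DB1.mo_sms);
SET @mt_off := (SELECT COALESCE(MAX(id_MT), 0) FROM DB1.mt_sms);

-- Copy parents with shifted ids.
INSERT INTO DB1.mo_sms
SELECT id_MO + @mo_off, sms_proceso, msisdn, texto_sms, brand_id, fecha_sms,
       comando_id, alias_desc, shortcode_id, precio, id_user, state,
       tipo_sms, anio_sms, mes_sms
FROM DB2.mo_sms
WHERE fecha_sms >= '2013-04-19' AND fecha_sms < '2013-04-20';

-- Copy children, shifting both their own ids and the id_MO link the same way.
INSERT INTO DB1.mt_sms
SELECT id_MT + @mt_off, sms_proceso, msisdn, texto_sms, brand_id, fecha_sms,
       comando_id, alias_desc, shortcode_id, id_user, tipo_sms,
       id_MO + @mo_off, state, anio_sms, mes_sms
FROM DB2.mt_sms
WHERE fecha_sms >= '2013-04-19' AND fecha_sms < '2013-04-20';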

Optimize UNION query in MYSQL

Posted: 22 Jun 2013 05:35 PM PDT

I have a problem with a UNION query in MySQL. We have 10 million players on our website and we would like to select players with a multi-criteria system. For example, selecting US people, men, > 35 yo.

We are using "vertical partionning" : 1 table per criter. For exemple :

* user_country
  - id_user
  - id_country

We would like to do this kind of query:

SELECT id_inscri FROM userdata_langue
WHERE id_langue='43'
UNION
SELECT id_inscri FROM userdata_sexe
WHERE sexe='2'
UNION
SELECT id_inscri FROM userdata_nb_jour
WHERE nb_jour>='31'
UNION
SELECT id_inscri FROM userdata_last
WHERE last<='2013-04-07'
  AND last>='2013-04-03';

Do you have any idea on how to optimize that ?

Thanks!

François

----- More details: EXPLAIN of the query:

id    select_type   table             type   possible_keys  key        key_len  ref    rows     Extra
1     PRIMARY       userdata_langue   ref    id_langue      id_langue  1        const  398846   Using index
2     UNION         userdata_sexe     ref    sexe           sexe       1        const  1667137  Using index
3     UNION         userdata_nb_jour  range  nb_jour        nb_jour    2        NULL   5830     Using where; Using index
4     UNION         userdata_last     range  last           last       3        NULL   371614   Using where; Using index
NULL  UNION RESULT  <union1,2,3,4>    ALL    NULL           NULL       NULL     NULL   NULL

SHOW CREATE TABLE

Table: userdata_langue

CREATE TABLE `userdata_langue` (
  `id_inscri` bigint(20) NOT NULL,
  `id_langue` tinyint(3) unsigned NOT NULL,
  PRIMARY KEY (`id_inscri`),
  KEY `id_langue` (`id_langue`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
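
For completeness: since the goal is players matching all criteria at once (US and men and > 35), I understand UNION returns players matching any single criterion; an AND-style selection over the per-criterion tables would be a join, sketched as:

-- Assumes each per-criterion table has id_inscri as its primary key,
-- as userdata_langue does above.
SELECT l.id_inscri
FROM userdata_langue l
JOIN userdata_sexe    s ON s.id_inscri = l.id_inscri AND s.sexe = '2'
JOIN userdata_nb_jour n ON n.id_inscri = l.id_inscri AND n.nb_jour >= 31
JOIN userdata_last    t ON t.id_inscri = l.id_inscri
                       AND t.last BETWEEN '2013-04-03' AND '2013-04-07'
WHERE l.id_langue = '43';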

How can the InnoDB ibdata1 file grow by 5X even with innodb_file_per_table set?

Posted: 22 Jun 2013 06:35 PM PDT

I have innodb_file_per_table set, and just today my ibdata1 file jumped from 59M to 323M after I made several changes to an 800M table to reduce it to about 600M. That particular table's .ibd file shrank, but the server's ibdata1 file went crazy. Any ideas?

Is there slowdown inserting into an InnoDB table that has no index set?

Posted: 22 Jun 2013 03:35 PM PDT

I have an old application with lots of InnoDB tables that have no indexes at all, not even a primary key.

Those tables only contain a few thousand rows.

Would it be faster to INSERT data into these tables if I set a primary key (which I don't otherwise need)?

Loading XML documents to Oracle 11g DB with control file

Posted: 22 Jun 2013 01:35 PM PDT

I am using the Oracle 11g XML database and trying to load XML documents into this DB with a control file and the sqlldr utility. All these XML files have an element that contains a date string with a time stamp (and the letter T in the middle of it). Oracle rejects this date string because of the T in it, and thus the XML file is not loaded into the DB.

I want to use the Oracle function TO_TIMESTAMP_TZ on the date string during the data load, but I do not know how to do it. That's where I need help. If there is any other way to import the XML (with date string with timestamp), I will try that also.

Here is the date entry in the XML file:

<ns3:EntryDateTime cls="U">2013-04-20T21:02:52.468-04:00</ns3:EntryDateTime>  

And here is the entire control file:

load data
infile 'filelist.dat'
append into table STXP xmltype(XMLDATA)
(
  filename filler char(120),
  XMLDATA lobfile(filename) terminated by eof
)

I believe that I can also execute the above control file with the sqlldr utility from the SQL*Plus command line, but I'm not sure about this option. If this is possible, I guess I can ALTER SESSION (to somehow set the date format) on the command line before executing the control file.

The filelist.dat mentioned above contains entries for the input XML files, one XML file per line. The above date entry is required in each XML file. Each XML file has about 50 different elements, some required and some optional. I would greatly appreciate your help.

UPDATE: I successfully registered the schema that contains the definition for the date string, along with 100 other schemas, using a script. Since this script is very large, I am posting only 2 registration portions of it:

DECLARE
  SCHEMAURL VARCHAR2( 100 );
  SCHEMADOC VARCHAR2( 100 );
BEGIN
  SCHEMAURL := 'http://www.some.org/stxp/DataTypes.xsd';
  SCHEMADOC := 'DataTypes.xsd';
  DBMS_XMLSCHEMA.registerSchema(
    SCHEMAURL,
    BFILENAME( 'XSD_DIR', SCHEMADOC ),
    LOCAL     => TRUE,  -- local
    GENTYPES  => TRUE,  -- generate object types
    GENBEAN   => FALSE, -- no java beans
    GENTABLES => TRUE,  -- generate object tables
    OWNER     => USER );

  SCHEMAURL := 'http://www.some.org/stxp/STXP.xsd';
  SCHEMADOC := 'STXP.xsd';
  DBMS_XMLSCHEMA.registerSchema(
    SCHEMAURL,
    BFILENAME( 'XSD_DIR', SCHEMADOC ),
    LOCAL     => TRUE,  -- local
    GENTYPES  => TRUE,  -- generate object types
    GENBEAN   => FALSE, -- no java beans
    GENTABLES => TRUE,  -- generate object tables
    OWNER     => USER );
END;
/

The 2nd registration above is the last one in the script, and it creates the table STXP, into which I am trying to load about 800 XML files. Each XML file has a root element called stxp.

This is the relevant definition of the date string:

<xsd:simpleType name="DT" xdb:SQLType="TIMESTAMP WITH TIME ZONE">
  <xsd:restriction base="xsd:dateTime"/>
</xsd:simpleType>

And this is how I am using the above definition:

<element name="EntryDateTime" type="oth:DT"/>  

When I make the above element optional (for testing purposes) and remove the date string entry (mentioned near the top of this question) from my XML file, the XML file is loaded successfully into the Oracle XML database. When I put this entry back into the XML file (because it is required), Oracle rejects it.

Because I let Oracle take care of populating the STXP table with data from the XML files, I am not sure if I can set a trigger to pre-process the date string from the input XML file before saving it in the database. I think there is a way to do it in the control file.

Enabling/disabling/changing Oracle auditing without a shutdown?

Posted: 22 Jun 2013 04:35 AM PDT

I have a large database that needs auditing at a very detailed level (every select, update, insert, and delete, along with the actual text of the statement) for about half the users. I know how to do this (here is a related question for anyone interested), but I also realize we cannot do this for any extended amount of time because of how quickly we would be collecting massive amounts of data. So while there is a scheduled downtime coming up during which we can implement the auditing, fine-tuning it later (as management changes what data they desire) or disabling it once we have enough data would require taking the database down again. While this wouldn't be too horrible if we could schedule a short downtime late at night, it would be really nice to avoid it altogether, but every reference I've seen so far requires the database to be brought down and back up.

So, my question (which I believe to be general enough for the purposes of this site, even though the back story is specific) is whether there is a way to enable/disable/change auditing without shutting down the database.

Edit: Oracle version 11gR2. As for AUD$ vs. FGA, I'm not sure what FGA is, but AUD$ is the table which will hold the data, so I am assuming that one.
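
For reference, this seems to be what FGA (fine-grained auditing via DBMS_FGA) refers to; as far as I can tell, its policies can be added and dropped at runtime without a restart (schema, table, and policy names below are placeholders):

BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'APP',           -- placeholder schema
    object_name     => 'ORDERS',        -- placeholder table
    policy_name     => 'ORDERS_AUDIT',
    statement_types => 'SELECT,INSERT,UPDATE,DELETE',
    audit_trail     => DBMS_FGA.DB + DBMS_FGA.EXTENDED);  -- also capture SQL text
END;
/

-- and later, without a shutdown:
BEGIN
  DBMS_FGA.DROP_POLICY('APP', 'ORDERS', 'ORDERS_AUDIT');
END;
/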

MySQL - run SELECT statement on another server without defining the table structure

Posted: 22 Jun 2013 08:35 AM PDT

In MySQL I can query information on another server using federated tables, as long as I've defined the same table structure locally.

In MS SQL Server, however, I can run any SQL statement against a linked server. Is it possible to do the same thing in MySQL?
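
For reference, the federated-table approach I mean looks like this (connection details are illustrative):

-- The local definition must mirror the remote table's structure.
CREATE TABLE remote_users (
  id   INT NOT NULL,
  name VARCHAR(50),
  PRIMARY KEY (id)
) ENGINE=FEDERATED
CONNECTION='mysql://user:pass@remote-host:3306/appdb/users';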

Tools and methodologies to keep two DBs aligned

Posted: 22 Jun 2013 12:35 PM PDT

I have 2 DBs whose schemas represent the same semantic objects. The first one is the production DB (non-RDBMS, an in-house in-memory DB with a shitload of RAM). The other is Postgres.

Once in a while the production DB is changed (schema upgrade).

Question: what is the best practice to keep DBs of different types aligned semantically?

consequences of using "innodb_flush_method = O_DIRECT" without having a battery backed write cache? or on a KVM guest?

Posted: 22 Jun 2013 04:35 PM PDT

MySQL 5.5.29 InnoDB - 128GB RAM - 32 cores - RAID 10 SSD.

Our server, a dedicated KVM guest on a 'bare metal' host, is hosting our heavy read-write DB server. Everything is file-per-table. The InnoDB buffer pool is 96GB with 2 x 1GB log files, and at peak time about 20 minutes of writes fill up those logs.

How bad a situation would it be if O_DIRECT (we are currently running the default) were enabled during a high workload without a battery-backed write cache, and a total crash occurred on the OS or the parent host, or the power was cut?

Does a battery-backed write cache make a difference if the server is a VM guest of the parent anyway?


DB2 to require password each time

Posted: 22 Jun 2013 09:35 AM PDT

I am using db2inst1 to connect to a database in DB2 which I have installed on my machine. Currently, the db2inst1 user does not require username/password authentication (it borrows them from the OS). I would like to change that, and force a username/password to be requested every time a connection is initiated.

More specifically, this is how the authentication configuration looks:

db2 get dbm cfg|grep -i auth

 GSS Plugin for Local Authorization    (LOCAL_GSSPLUGIN) =
 Server Connection Authentication          (SRVCON_AUTH) = NOT_SPECIFIED
 Database manager authentication        (AUTHENTICATION) = CLIENT
 Alternate authentication           (ALTERNATE_AUTH_ENC) = NOT_SPECIFIED
 Cataloging allowed without authority   (CATALOG_NOAUTH) = NO
 Trusted client authentication          (TRUST_CLNTAUTH) = SERVER
 Bypass federated authentication            (FED_NOAUTH) = NO

db2 connect to dbName

   Database Connection Information

 Database server        = DB2/LINUXX8664 10.1.0
 SQL authorization ID   = DB2INST1
 Local database alias   = DBNAME

db2 connect to dbName user db2inst1 using password

SQL1639N  The database server was unable to perform authentication because
security-related database manager files on the server do not have the required
operating system permissions.  SQLSTATE=08001

I have played with some authentication combinations for "AUTHENTICATION" and "TRUST_CLNTAUTH" without much luck.
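
For reference, the kind of combination I have been trying looks like this (my understanding is that AUTHENTICATION SERVER makes the server validate a userid/password on every connection; the instance must be recycled for the change to take effect):

db2 update dbm cfg using AUTHENTICATION SERVER
db2stop
db2start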

How can I set a default session sql_mode for a given user?

Posted: 22 Jun 2013 02:35 PM PDT

I want to have a default session sql_mode that is different from the global sql_mode.

I can set it manually, but I'd like to do it automatically.

I thought I could add it to .my.cnf, but that doesn't work. I tried adding it to the [mysql] group in .my.cnf, and it does not throw an error.

Yet when I connect, my session sql_mode still inherits the global sql_mode.

I'm using MySQL 5.5.16.
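
For what it's worth, a client-side sketch I am considering (assuming the mysql command-line client, whose init-command option runs a statement right after connecting; the sql_mode value is just an example):

[mysql]
init-command="SET SESSION sql_mode='STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION'"

The server-side init_connect variable looks like another candidate, though I understand it is skipped for users with the SUPER privilege.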

oracle streams apply: how to get a reason why LCR message was not applied

Posted: 22 Jun 2013 07:35 AM PDT

I've set up bidirectional Oracle Streams replication (11gR1) using identical scripts on both machines (DB1 and DB2). Although changes from DB1 are being applied to DB2, changes from DB2 to DB1 aren't.

I have only one rule for the capture processes, which checks the apply tag to prevent cyclic propagation, and no rules for the apply processes. LCRs from DB2 are dequeued at DB1 by the apply reader process (update LCRs are among the dequeued messages for sure, because when I issue 50 inserts at DB2, the dequeued-messages counter at DB1 increases by 50), but they aren't processed by the apply coordinator and servers:

Apply           Apply           Apply Apply
Name            Queue           Tag   Status
--------------- --------------- ----- ------------------------------
FROM_DB2_APP    FROM_DB2_APP_Q  02    ENABLED

Apply                                Messages       Last
Name            State                Dequeued        SCN
--------------- ------------------ ---------- ----------
FROM_DB2_APP    DEQUEUE MESSAGES          102    1118751

Apply                             Trans   Trans   Trans   Trans   Trans   Trans
Name            State             Appld  Assign    Rcvd   Ignrd  Rollbk   Error
--------------- --------------- ------- ------- ------- ------- ------- -------
FROM_DB2_APP    IDLE                  0       0       0       0       0       0

Apply           Server  Messages
Name                ID   Applied State
--------------- ------ --------- ------------------------------
FROM_DB2_APP         1         0 IDLE

As far as I understand, in that case LCRs can be silently ignored (without throwing an apply error) only if the SCN of the LCR is less than the instantiation SCN for the table, but the instantiation SCN is 1114348 (< 1118751):

Source       Object       Object       Instantiation
Database     Owner        Name                   SCN
------------ ------------ ------------ -------------
DB2          DUMMYUSR     DUMMYTBL           1114348

Oracle provides means to deal with errors, but how can I check why a message was not applied if there was no error?

SQL Server 2008 Setup Error 0x80070490

Posted: 22 Jun 2013 06:35 AM PDT

I am trying to install SQL Server 2008 x64 on Windows 2008 R2 and keep getting the following error:

SQL Server Setup has encountered the following error: Element not found. (Exception from HRESULT: 0x80070490)

I have applied all required patches and there are no other instances of SQL Server on the machine.

Any clues as to what the cause might be?

Thanks.

Replicating data from Oracle to MySQL

Posted: 22 Jun 2013 10:35 AM PDT

I work with a vendor that does data analytics, and they currently receive a replication stream from some of our databases using a product called Goldengate (which is very expensive). Goldengate has been great - it replicates transactions from the Tandem-NSK source and can apply the changes into any supported database - they're using MySQL at the remote end. We're switching our billing system to Oracle, and while we could continue to use Goldengate to move these logs, I'd like to see if there's another option.

We initially chose Goldengate because nothing else could get data out of the Tandem NSK, but now that we're moving to Oracle, there may be some more native (or at least simpler) choices. I've got nothing against them - like I said, it works great - but I'm hoping that two mainstream databases are easier to replicate between than the NSK.

Are there any products or methods that would help get transactional data from an Oracle system into a MySQL database? I'm not sure if there's any way to do this kind of replication natively (I know we can do Oracle -> MSSQL using native replication, but I'm not aware of any way to target MySQL), or if anybody knows of a product that could facilitate this (and costs less than Goldengate).

Thanks for any suggestions!

Users cannot view tables in non-default schema in SSMS

Posted: 22 Jun 2013 08:35 PM PDT

I'm having an issue setting the VIEW DEFINITION permission appropriately at the schema level for one of my users. I've created the schema TestSchema and added some tables. The user currently has permissions to access & modify the tables (SELECT, UPDATE, DELETE, etc.) through the db_datareader and db_datawriter roles. However, they cannot see any of the tables in the SSMS object explorer.

I've tried granting permissions to view definitions:

grant view definition on SCHEMA :: [TestSchema] to [User]  

That didn't work. I tried setting the table-level permission:

grant view definition on [TestSchema].[NewTable] to [User]  

That also didn't work. Then I tried just a blanket grant:

grant view definition to [User]  

And that did work; they can now see TestSchema, as well as other schemas that they shouldn't have access to.

My goal here is to allow the user to view all tables within a given schema. How do I accomplish that? If I should be able to do this by default, what permissions should I be looking at to find why I can't?

How can I create different sets of sequential values based on another column in a table?

Posted: 22 Jun 2013 07:46 PM PDT

I've got a table that holds two types of notes: Collection Notes and Delivery Notes. They are identical data structures, hence using the same table.

CREATE TABLE Notes (
   Id int IDENTITY(1,1) NOT NULL,
   Type int NOT NULL,
   CustomerId int NOT NULL,
   -- etc
)

I am migrating data from a legacy system into this table and there is a requirement that Collection and Delivery notes have their own sequential numbers.

I had previously implemented two sequence tables as

CREATE TABLE CollectionNoteSequence (
    Id int IDENTITY(1,1) NOT NULL,
    NoteId int NOT NULL
)

where the Id column is the unique, sequential Id for collection notes, and NoteId is a foreign key to Notes.Id.

It's getting towards time to do the final (real) data migration and this setup seems hard to work with.

Is there a way I could bin the two sequence tables and add a NoteNo field to the Notes table, such that NoteNo would be sequential depending on Notes.Type? Is this a composite key or something?

The new table might look like

CREATE TABLE Notes (
   Id int IDENTITY(1,1) NOT NULL,
   NoteNo int NOT NULL,
   Type int NOT NULL,
   CustomerId int NOT NULL,
   -- etc
)

and the data would look like:

Id    NoteNo    Type    CustomerId
1     4000      1       123
2     4001      1       456
3     15123     2       789
4     4002      1       753
5     15124     2       741

I'm using MS SQL Server 2008.
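
For the one-off migration, a sketch of back-filling NoteNo per type with ROW_NUMBER() (the offsets 3999 and 15122 are assumptions derived from the sample starting values 4000 and 15123 above):

-- Assign per-type sequential numbers during migration.
WITH numbered AS (
    SELECT Id, Type,
           ROW_NUMBER() OVER (PARTITION BY Type ORDER BY Id) AS rn
    FROM Notes
)
UPDATE n
SET NoteNo = CASE n.Type
                 WHEN 1 THEN 3999  + numbered.rn  -- collection notes
                 WHEN 2 THEN 15122 + numbered.rn  -- delivery notes
             END
FROM Notes AS n
JOIN numbered ON numbered.Id = n.Id;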
