Saturday, June 8, 2013

[how to] Oracle Select with xmlquery() error - How to ignore errors?

Oracle Select with xmlquery() error - How to ignore errors?

Posted: 08 Jun 2013 06:51 PM PDT

In trying to extract a subset of data from an Oracle DB table, one of the columns is XMLType, which we query with xmlquery() for the relevant information.

The problem is that the XML data is from an outside source, which often gives us malformed XML with unescaped special characters.

ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00244: invalid use of less-than ('<') character (use &lt;)
Error at line 1
31011. 00000 -  "XML parsing failed"
*Cause:    XML parser returned an error while trying to parse the document.
*Action:   Check if the document to be parsed is valid.

It would be OK for my purposes to ignore these rows.

How can I ignore these parse errors and have Oracle continue on with the SQL statement?

Thanks
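A common workaround (a sketch, not tested against this schema; the function, table, and column names are placeholders) is to wrap the parse in a small PL/SQL function that swallows the exception, so malformed documents come back NULL and the offending rows can be filtered out:

```sql
-- Hypothetical helper: returns the parsed XMLType, or NULL if parsing fails.
CREATE OR REPLACE FUNCTION try_parse_xml(p_doc IN CLOB) RETURN XMLTYPE IS
BEGIN
  RETURN XMLTYPE(p_doc);
EXCEPTION
  WHEN OTHERS THEN
    RETURN NULL;  -- malformed XML: treat the row as unparseable
END;
/

-- Use it to skip rows whose XML does not parse
-- (my_table / xml_col are placeholder names):
SELECT t.id,
       XMLQUERY('/order/item/text()'
                PASSING try_parse_xml(t.xml_col.getclobval())
                RETURNING CONTENT) AS item
FROM   my_table t
WHERE  try_parse_xml(t.xml_col.getclobval()) IS NOT NULL;
```

This parses each document twice; if that is too slow, the helper's result can be materialized once per row in an inline view first.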

mysql innodb_flush_log_at_trx_commit clarification

Posted: 08 Jun 2013 07:18 PM PDT

I have been reading up on InnoDB as a storage engine, mainly because I have recently moved to AWS, and they do not recommend MyISAM, which my databases currently run on (and have for about 10 years).

So moving to a new engine is a little bit of a scary operation.

I am apprehensive about the move, as it has been on myISAM for so long without issue, but if it gives me peace of mind long term, then it is better for the DB.

I have read the whitepapers on InnoDB and it seems fairly straightforward. The one thing that I want to clear up in my mind is the innodb_flush_log_at_trx_commit option.

By default this is set to 1; however, from my reading this causes additional overhead. If the data were credit card transactions or something like that, I can understand the need for it, but when dealing with non-life-changing data, it seems that innodb_flush_log_at_trx_commit=2 is a better option.

What I want to know is: this does not affect the time at which the query is actually committed, does it? It only affects recovery?

I just want to make sure that when I do an insert or update, the query will run right at the time of processing and not one second later, no matter what the flush log option is set to.

My understanding is that in the case of a crash, a setting of 1 will allow it to recover all queries run on the server, whereas a setting of 2 may lose the last second or two of data when recovering from the crash. Is this correct?

Also, if there are slow periods of updates/inserts (i.e., not much happening on the server), does setting it to 2 add additional overhead, or is it a case of the benefits of innodb_flush_log_at_trx_commit=2 when the database is busy outweighing any additional overhead during slow times?
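For what it's worth, innodb_flush_log_at_trx_commit is a dynamic variable in MySQL 5.5, so the two modes can be compared on a running server without a restart (a sketch; run as a privileged user):

```sql
-- Check the current value, then switch to flush-once-per-second mode.
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';

SET GLOBAL innodb_flush_log_at_trx_commit = 2;
-- Commits still return immediately; the log buffer is written at each
-- commit but only flushed to disk roughly once per second.
```

Setting it back to 1 is equally immediate, so a benchmark under your real write load is cheap to run.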

MySQL Logging Options

Posted: 08 Jun 2013 07:22 PM PDT

I want to know what logging options are available to me in MySQL; I find a lot more results about binary logging than I do about general/error logging and I'm not sure what options are available to me for such logs.

Could anyone tell me a list of the my.cnf variables I can set to get different types of logs?

Cheers!

Chris
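As a starting point, these are the my.cnf settings I'd look at for the main log types in MySQL 5.x (a sketch; the file paths are placeholders):

```ini
[mysqld]
# Error log: startup/shutdown problems and critical errors
log_error           = /var/log/mysql/error.log

# General query log: every statement the server receives (heavy; debugging only)
general_log         = 1
general_log_file    = /var/log/mysql/general.log

# Slow query log: statements slower than long_query_time seconds
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 2

# Binary log: data changes only; used for replication and point-in-time recovery
log-bin             = /var/log/mysql/mysql-bin
```

general_log and slow_query_log are also dynamic, so they can be toggled at runtime with SET GLOBAL for short debugging sessions.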

Moving my SQL Server to another computer

Posted: 08 Jun 2013 06:54 PM PDT

I am in the process of running a lot of data on a computer that has more power than the one I am currently using. However, I run the data into a SQL Server database that I have already created and do not want to change. Is there a way to set up the same SQL Server database on the other computer?
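The usual way (a sketch; the database name and paths are placeholders) is to back the database up on the old machine, copy the .bak file across, and restore it on the new one:

```sql
-- On the old computer:
BACKUP DATABASE MyDb TO DISK = N'C:\Backups\MyDb.bak' WITH INIT;

-- Copy MyDb.bak to the new computer, then restore it there:
RESTORE DATABASE MyDb
FROM DISK = N'C:\Backups\MyDb.bak'
WITH MOVE 'MyDb'     TO N'D:\Data\MyDb.mdf',      -- logical file names: check
     MOVE 'MyDb_log' TO N'D:\Data\MyDb_log.ldf';  -- with RESTORE FILELISTONLY
```

Detaching/attaching the .mdf and .ldf files works too, but backup/restore keeps the source database online throughout.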

finding the path to start postgres

Posted: 08 Jun 2013 07:41 PM PDT

On my local machine, when I want to start the postgres server (postgres was installed via Homebrew), I do this

pg_ctl -D /usr/local/var/postgres -l logfile start  

I just installed Postgres on an Ubuntu VPS (without Homebrew, and not on OS X, obviously), so the files are set up a little differently. Based on what I did on the local machine to start pg, I tried to find the path to the Postgres folder on the VPS. From my home/michael directory, I went up two levels with cd ../../ to a folder which contains these folders

bin   etc         lib         mnt   root  selinux  tmp  vmlinuz
boot  home        lost+found  opt   run   srv      usr
dev   initrd.img  media       proc  sbin  sys      var

Inside etc, there's a postgresql inside of which there's a 9.1 folder, inside of which is main that has all these files

environment  pg_hba.conf    postgresql.conf  pg_ctl.conf  pg_ident.conf  start.conf  

Based on how I start pg on my local machine, I'm assuming pg_ctl.conf is the file I'm looking for. I tried to start pg this way

 pg_ctl -D /etc/postgresql/9.1/main -l logfile start  

but I get the response

-bash: pg_ctl: command not found  

I'm assuming that I have to start the server once before launch of the app and then let it run, but I can't get it started.

Update

Once I finished installing postgres, I ran this command

sudo service postgresql restart  

I'm not sure if that starts Postgres once and I never need to start it again (hence this question is moot), or if that was just something I do at install time and I then need to start it every time I use it, as I do on my local machine.
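On Debian/Ubuntu the layout differs from Homebrew's: /etc/postgresql/9.1/main holds only configuration (pg_ctl.conf is just extra options passed to pg_ctl), the data directory is typically /var/lib/postgresql/9.1/main, and the pg_ctl binary is not on the PATH. A sketch of the equivalent commands (paths assumed from the stock 9.1 package):

```shell
# The packaged way: manage the cluster via the init script or the wrapper
sudo service postgresql status
sudo pg_ctlcluster 9.1 main start

# pg_ctl itself lives under /usr/lib/postgresql/9.1/bin; run it as postgres
sudo -u postgres /usr/lib/postgresql/9.1/bin/pg_ctl \
    -D /var/lib/postgresql/9.1/main status
```

Whether the cluster starts at boot is controlled by start.conf in the config directory, so after `sudo service postgresql restart` it normally keeps running and comes back on reboot.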

Why do shadow processes exist?

Posted: 08 Jun 2013 08:22 PM PDT

Oracle's architecture has two types of processes, server (foreground) processes and background processes, but:

Some people use it to refer to foreground processes, others use it for background processes

This is ambiguous. Please explain to me: why does the concept of a shadow process exist?

I can connect to a mysql database in Mysql Workbench but not through the command line

Posted: 08 Jun 2013 09:12 AM PDT

I am connecting through a local socket/pipe, and the username/password is root/root's password. I can log in and view 4 schemas I have created; however, when I use the mysql command at the command line and log in to localhost, the only schemas are "test" and "information_schema". What am I doing incorrectly, and how do I log in correctly?

Optimizing a query with ORDER BY in a derived table

Posted: 08 Jun 2013 02:27 PM PDT

The query below takes too long to execute (58 seconds). If I run two more queries like this with different nid values, the second and third queries take much more time to execute. How can I optimize it?

SELECT consoleId,
       servertime,
       servertime AS servertimeUNIX,
       nocName,
       eventIdx,
       nocStatus  AS status,
       nid,
       site,
       machine,
       clientversion,
       timeExecuted,
       COUNT(*)   AS eventCount
FROM   (SELECT consoleId,
               servertime,
               nocName,
               eventIdx,
               nocStatus,
               nid,
               site,
               machine,
               clientversion,
               timeExecuted
        FROM   temptest
        ORDER  BY tid DESC) AS x
WHERE  servertime >= 1367902800
       AND servertime <= 1370581199
       AND nid = 1124
GROUP  BY FROM_UNIXTIME(servertime, '%Y-%m-%d'),
          site,
          machine,
          nid;

The table definition is

CREATE TABLE `temptest` (
  `tid`                INT(10) NOT NULL AUTO_INCREMENT,
  `eventIdx`           INT(11) NOT NULL DEFAULT '0',
  `servertime`         INT(11) DEFAULT NULL,
  `nocName`            VARCHAR(50) DEFAULT NULL,
  `site`               VARCHAR(50) DEFAULT NULL,
  `machine`            VARCHAR(64) DEFAULT NULL,
  `clientversion`      VARCHAR(20) DEFAULT NULL,
  `nid`                INT(10) DEFAULT NULL,
  `nocStatus`          VARCHAR(250) DEFAULT NULL,
  `consoleId`          INT(11) DEFAULT NULL,
  `timeExecuted`       VARCHAR(100) DEFAULT NULL,
  `machineManufacture` VARCHAR(100) DEFAULT NULL,
  PRIMARY KEY (`tid`),
  KEY `consoleindex` (`consoleId`),
  KEY `nitification` (`servertime`, `nid`)
) ENGINE=InnoDB AUTO_INCREMENT=8229683 DEFAULT CHARSET=latin1
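One thing worth noting: the ORDER BY tid DESC inside the derived table forces MySQL 5.x to materialize and sort the whole table before the WHERE is ever applied, and the sort order is then discarded by the GROUP BY anyway (which non-grouped row is returned per group is indeterminate either way). A sketch of the same query without the derived table, so the (servertime, nid) index can be used:

```sql
SELECT consoleId, servertime, servertime AS servertimeUNIX, nocName,
       eventIdx, nocStatus AS status, nid, site, machine,
       clientversion, timeExecuted, COUNT(*) AS eventCount
FROM   temptest
WHERE  servertime BETWEEN 1367902800 AND 1370581199
       AND nid = 1124
GROUP  BY FROM_UNIXTIME(servertime, '%Y-%m-%d'), site, machine, nid;

-- If the servertime range is wide, an index led by the highly selective
-- nid column may filter better (a suggestion, not tested on this data):
-- ALTER TABLE temptest ADD KEY nid_time (nid, servertime);
```

If the "latest row per group" was the actual intent of the ORDER BY, that needs to be expressed explicitly (e.g. via MAX(tid) and a self-join) rather than relying on the derived table's ordering.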

Why does the plan cache contain estimated and not actual execution plans?

Posted: 08 Jun 2013 02:38 PM PDT

I can think of many reasons behind this decision of storing estimated plans in the plan cache and not the actual plan. But I can't find the "correct" answer.

Query with two inner joins not running fast enough

Posted: 08 Jun 2013 06:13 AM PDT

I don't know why MySQL is too slow for querying about 39000 records which include inner join with other two tables. I have country, region and city tables as follows:

tbl_country

CREATE TABLE `tbl_country` (
  `id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
  `countryName` varchar(128) NOT NULL,
  `iso2` varchar(2) DEFAULT NULL,
  `iso3` varchar(3) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `countryName` (`countryName`),
  UNIQUE KEY `iso2` (`iso2`),
  UNIQUE KEY `iso3` (`iso3`)
) ENGINE=InnoDB AUTO_INCREMENT=276 DEFAULT CHARSET=utf8

tbl_country_region

CREATE TABLE `tbl_country_region` (
  `id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
  `countryId` smallint(5) unsigned NOT NULL,
  `region` varchar(128) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `countryId` (`countryId`,`region`),
  CONSTRAINT `tbl_country_region_ibfk_1` FOREIGN KEY (`countryId`)
    REFERENCES `tbl_country` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=5433 DEFAULT CHARSET=utf8

tbl_country_city

CREATE TABLE `tbl_country_city` (
  `id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
  `regionId` smallint(5) unsigned NOT NULL,
  `city` varchar(128) NOT NULL,
  `cityType` enum('Kota','Kabupaten') DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `regionId` (`regionId`,`city`,`cityType`),
  KEY `city_index` (`city`),
  CONSTRAINT `tbl_country_city_ibfk_1` FOREIGN KEY (`regionId`)
    REFERENCES `tbl_country_region` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=48304 DEFAULT CHARSET=utf8

Query :

SELECT t1.city, t1.cityType, t2.region, t3.countryName
FROM tbl_country_city t1
    INNER JOIN tbl_country_region t2
        ON t2.id = t1.regionId
    INNER JOIN tbl_country t3
        ON t3.id = t2.countryId
ORDER BY t1.city

and it took about 1.032 s.

I ran SHOW PROFILE and here are the results:

+------------------------------+----------+
| Status                       | Duration |
+------------------------------+----------+
| Waiting for query cache lock | 0.000004 |
| Sending data                 | 0.000710 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000816 |
| Waiting for query cache lock | 0.000006 |
| Sending data                 | 0.000734 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000717 |
| Waiting for query cache lock | 0.000004 |
| Sending data                 | 0.000703 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000698 |
| Waiting for query cache lock | 0.000004 |
| Sending data                 | 0.000715 |
| Waiting for query cache lock | 0.000004 |
| Sending data                 | 0.000684 |
| Waiting for query cache lock | 0.000004 |
| Sending data                 | 0.000718 |
| Waiting for query cache lock | 0.000004 |
| Sending data                 | 0.000706 |
| Waiting for query cache lock | 0.000004 |
| Sending data                 | 0.000740 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000786 |
| Waiting for query cache lock | 0.000006 |
| Sending data                 | 0.000795 |
| Waiting for query cache lock | 0.000006 |
| Sending data                 | 0.000716 |
| Waiting for query cache lock | 0.000006 |
| Sending data                 | 0.000716 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000757 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000709 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000740 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000714 |
| Waiting for query cache lock | 0.000004 |
| Sending data                 | 0.000719 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000712 |
| Waiting for query cache lock | 0.000004 |
| Sending data                 | 0.000724 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000741 |
| Waiting for query cache lock | 0.000006 |
| Sending data                 | 0.000739 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000774 |
| Waiting for query cache lock | 0.000006 |
| Sending data                 | 0.000719 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000716 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000724 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000744 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000744 |
| Waiting for query cache lock | 0.000006 |
| Sending data                 | 0.000703 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000691 |
| Waiting for query cache lock | 0.000004 |
| Sending data                 | 0.000719 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000720 |
| Waiting for query cache lock | 0.000006 |
| Sending data                 | 0.000715 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000720 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000943 |
| Waiting for query cache lock | 0.000006 |
| Sending data                 | 0.000819 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000703 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000732 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000708 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000703 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.000702 |
| Waiting for query cache lock | 0.000004 |
| Sending data                 | 0.000776 |
| Waiting for query cache lock | 0.000006 |
| Sending data                 | 0.000442 |
| Waiting for query cache lock | 0.000005 |
| Sending data                 | 0.018132 |
| end                          | 0.000018 |
| removing tmp table           | 0.392880 |
| end                          | 0.000019 |
| query end                    | 0.000009 |
| closing tables               | 0.000015 |
| freeing items                | 0.000029 |
| logging slow query           | 0.000003 |
| cleaning up                  | 0.000005 |
+------------------------------+----------+

I used the EXPLAIN command and here are the results:

+----+-------------+-------+-------+-------------------+-------------+---------+------------+------+----------------------------------------------+
| id | select_type | table | type  | possible_keys     | key         | key_len | ref        | rows | Extra                                        |
+----+-------------+-------+-------+-------------------+-------------+---------+------------+------+----------------------------------------------+
|  1 | SIMPLE      | t3    | index | PRIMARY           | countryName | 386     | NULL       |  275 | Using index; Using temporary; Using filesort |
|  1 | SIMPLE      | t2    | ref   | PRIMARY,countryId | countryId   | 2       | nfis.t3.id |    6 | Using index                                  |
|  1 | SIMPLE      | t1    | ref   | regionId          | regionId    | 2       | nfis.t2.id |   50 | Using index                                  |
+----+-------------+-------+-------+-------------------+-------------+---------+------------+------+----------------------------------------------+

Why is the query so slow?

MySQL is too slow [duplicate]

Posted: 08 Jun 2013 03:47 AM PDT

This question is an exact duplicate of:

Query with two inner joins not running fast enough

mysqldump freezing on a specific table

Posted: 08 Jun 2013 06:23 AM PDT

I dumped a database (sys_data) which is very big (800 GB, all data in one ibdata file) from a remote server, but the dump got stuck at one table (tb_trade_376). My dump command:

mysqldump -uxx -pxx -h192.168.1.xxx --single-transaction sys_data > /home/sys_data.sql

When the dump was blocked:

show processlist;

5306612 | root | 192.168.1.161:57180 | sys_data | Query | 23955 | Sending data | SELECT /*!40001 SQL_NO_CACHE */ * FROM `tb_trade_376`

On the other hand, I can dump the table tb_trade_376 successfully if I dump just that table:

mysqldump -uxx -pxx -h192.168.1.xxx \
  --single-transaction sys_data tb_trade_376 > /home/tb_trade_376.sql

This works well and quickly! The table tb_trade_376 has about 700,000 to 800,000 rows.

What is the next step in investigating why I can't dump the whole database? How can I make it work?
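As a next diagnostic step (a sketch; the credentials and host are the placeholders from the question), dumping table by table narrows down whether one specific table stalls, or whether the stall only happens partway through the long-running snapshot:

```shell
# Dump each table into its own file; the table where this loop hangs is
# the one to investigate (locks, huge rows/blobs, network timeouts).
for t in $(mysql -uxx -pxx -h192.168.1.xxx -N -e 'SHOW TABLES' sys_data); do
  echo "dumping $t"
  mysqldump -uxx -pxx -h192.168.1.xxx --single-transaction sys_data "$t" \
    > "/home/sys_data_$t.sql"
done
```

Note that per-table dumps taken this way are not one consistent snapshot across tables, unlike a single --single-transaction dump of the whole database.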

Importing SQL Server database from a .sql file

Posted: 08 Jun 2013 12:37 AM PDT

I'd like to import a database without knowing its name from a .sql file.

In MySQL it is easy because there are tools like HeidiSQL with which I can easily import my database from a .sql file, but in SQL Server I can't find a way to do it.

I tried to read the file in notepad but the encoding makes the content unreadable.

When I execute the query RESTORE HEADERONLY FROM DISK = N'C:\Planning\bdd.sql' I get an error.

  1. How can I do it with SQL server?
  2. Can SQL Server Management Studio do this task or do I have to install a new tool?
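If the file really is a script of CREATE/INSERT statements, it can be run from the command line with `sqlcmd -S .\SQLEXPRESS -i C:\Planning\bdd.sql` (instance name is a placeholder), or opened and executed in Management Studio. If it is actually a backup that was renamed to .sql (which the unreadable encoding suggests), only RESTORE will read it; a sketch:

```sql
-- If bdd.sql is really a backup, inspect it first:
RESTORE FILELISTONLY FROM DISK = N'C:\Planning\bdd.sql';

-- Then restore it under a name of your choosing; the logical file
-- names below come from the FILELISTONLY output (placeholders here):
RESTORE DATABASE SomeName
FROM DISK = N'C:\Planning\bdd.sql'
WITH MOVE 'LogicalDataName' TO N'C:\Data\SomeName.mdf',
     MOVE 'LogicalLogName'  TO N'C:\Data\SomeName_log.ldf';
```

RESTORE HEADERONLY succeeding or failing on the file is itself the test: it works on genuine backups and errors out on plain script files.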

What is the timeout of a query in SQL Server 2008

Posted: 08 Jun 2013 02:14 AM PDT

I was running a query to delete rows from a huge table in batches. After around 3 hours, sometimes 4, I get a message saying

Invalid object name '#table_name#'

Is the connection lost after some time, or something like that? What is the cause here?
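SQL Server has no server-side timeout that kills a query after 3-4 hours by itself, but if the connection is reset (client timeout, network drop), the session's #temp tables are dropped, which would produce exactly this "Invalid object name" error on the next batch. A sketch of a batched delete that avoids session-scoped temp tables entirely (the table, column, and cutoff are placeholders):

```sql
-- Delete in small batches; no #temp table, so a reconnect cannot
-- orphan any intermediate state.
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM dbo.huge_table
    WHERE  created_at < '2012-01-01';

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left to delete
END
```

Small batches also keep each transaction (and its log usage and lock footprint) short, which makes an interrupted run cheap to resume.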

Are there tips for writing Oracle DDL Triggers for auditing? Or should I just use AUDIT?

Posted: 08 Jun 2013 06:46 AM PDT

I've been tinkering with writing a trigger to log my table creation and manipulation for later tracking and review.

The challenge is that my trigger logic is stored in the same schema on which I've put the trigger. Because of this, whenever I recompile the trigger or the package or whatever, the trigger itself fires. Also, if another component requires a recompile, the trigger fires.

The trigger has been failing and, in fact, locking my database session. I have started putting guards on the trigger to prevent this and haven't worked it out yet.

Perhaps someone on the site has experience or advice? Or just tell me to use AUDIT commands instead?
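If all that's needed is a record of DDL activity, the built-in audit facility avoids the recursive-trigger problem entirely; a sketch (assumes the AUDIT_TRAIL initialization parameter is enabled and you have the AUDIT privileges):

```sql
-- Record table DDL (CREATE/DROP/TRUNCATE TABLE), one row per statement:
AUDIT TABLE BY ACCESS;

-- Review what was captured:
SELECT username, action_name, obj_name, timestamp
FROM   dba_audit_trail
WHERE  action_name LIKE '%TABLE%'
ORDER  BY timestamp;
```

Unlike a DDL trigger in the audited schema, AUDIT runs no user code at DDL time, so recompiles cannot make it fire on itself or lock the session.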

Import Data replaces NULL with 0

Posted: 08 Jun 2013 03:06 AM PDT

I am using SQL Server 2008 R2 Express. I am getting some problems when I use the Import Data feature (via SSMS) to import data from one database to another. In the wizard I chose the "identity insert" option to copy the identity property, and with that the identity columns are imported properly. But after importing, I compared the source and destination databases and found some differences: all NULL values in the table were replaced by 0. How can I avoid this? Is there any option to copy the data as-is while using the Import Data wizard? Please help.
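If the wizard keeps replacing NULLs, a plain INSERT ... SELECT preserves them exactly when both databases are on the same instance; a sketch (database, table, and column names are placeholders):

```sql
-- Copy rows verbatim, keeping identity values and NULLs as they are.
SET IDENTITY_INSERT TargetDb.dbo.MyTable ON;

INSERT INTO TargetDb.dbo.MyTable (Id, Col1, Col2)
SELECT Id, Col1, Col2            -- no ISNULL/COALESCE, so NULL stays NULL
FROM   SourceDb.dbo.MyTable;

SET IDENTITY_INSERT TargetDb.dbo.MyTable OFF;
```

IDENTITY_INSERT requires the explicit column list shown, and can be ON for only one table per session at a time.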

Why don't databases create their own indexes automatically?

Posted: 08 Jun 2013 10:57 AM PDT

I would have thought that databases would know enough about what they encounter often and be able to respond to the demands they're placed under that they could decide to add indexes to highly requested data.

I want to change ORACLE level#2 to log archiving mode instead of the current mode

Posted: 08 Jun 2013 02:38 PM PDT

I want to switch the database to archive log mode so I can use the redo log files to replicate the database to another server.

This is easy and possible when the server is a single instance, not multi-clustered as in the current case. What are the procedures for making this change when there are two clustered servers?

I am using Oracle Fail Safe Manager.

I am trying to set up Streams replication to another server. I have two clustered servers running Windows Server 2003, with the Cluster service in use.

Jobs view is not visible to read-only account users

Posted: 08 Jun 2013 03:06 AM PDT

I am the admin of one of the databases, and I have created a view in the SYS schema using the code below, which returns the status of the running jobs:

CREATE OR REPLACE VIEW CAF_DB_REFRESH AS
SELECT *
FROM all_scheduler_job_run_details
WHERE OWNER = 'CAFANALYST'
  AND log_id > 2573
ORDER BY log_date;

I have created a public synonym for this view and granted access to a read-only account, which will be used by all users on my team.

When they run the following query, they get only the column headers with no rows:

select * from CAF_DB_REFRESH;  

Could you please tell me what the problem is, and whether any special privileges are required for the read-only account? If so, please advise which privilege is best so the data can be seen, but not modified, by the read-only user.
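One privilege sketch worth double-checking (object names from the question; readonly_user is a placeholder): a synonym by itself grants nothing, so the read-only account needs an explicit SELECT grant on the view, issued by the view's owner:

```sql
-- As the view owner (here SYS):
GRANT SELECT ON caf_db_refresh TO readonly_user;

CREATE PUBLIC SYNONYM caf_db_refresh FOR sys.caf_db_refresh;
```

If that grant is already in place and the view returns rows for you but not for them, also check whether the scheduler log rows matching log_id > 2573 have since been purged.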

Should I increase max_connections in AWS RDS t1-micro for MySQL?

Posted: 08 Jun 2013 07:25 AM PDT

I have an AWS RDS t1.micro running MySQL 5.5. It gives me a "too many connections" error. I checked, and it allows a maximum of 34 concurrent connections. From what I have read, I can increase this maximum by creating a DB parameter group for this micro instance.

My confusion is

  • Should I increase the max connection value for the micro instance in a DB parameter group, or should I consider upgrading to the next RDS size, which allows more maximum connections (125)?
  • Should I increase max_connections on the micro RDS to 125, versus upgrading to an RDS small instance?
  • Why, and what factors should I base the decision on?

Thanks
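Before deciding, it helps to see what the instance is configured for and how close you actually get to the limit. By default, RDS MySQL derives max_connections from instance memory (the documented default parameter is {DBInstanceClassMemory/12582880}), which is where the low t1.micro value comes from. A quick check from any client:

```sql
-- Current effective limit on the instance:
SHOW VARIABLES LIKE 'max_connections';

-- High-water mark of simultaneous connections since startup:
SHOW STATUS LIKE 'Max_used_connections';
```

If Max_used_connections regularly approaches the limit, raising max_connections on a micro only trades the error for memory pressure; the per-connection buffers still have to fit in the instance's small RAM, which argues for the larger instance class instead.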

Foreign key with multiple references

Posted: 08 Jun 2013 03:25 PM PDT

I have the following three tables in a MySQL database named "THE_COLLEGE":

mysql> desc students;
+----------+-------------+------+-----+---------+-------+
| Field    | Type        | Null | Key | Default | Extra |
+----------+-------------+------+-----+---------+-------+
| STU_Id   | char(5)     | NO   | PRI |         |       |
| STU_Name | varchar(20) | YES  |     | NULL    |       |
+----------+-------------+------+-----+---------+-------+
2 rows in set (0.00 sec)

mysql> desc staff;
+----------+-------------+------+-----+---------+-------+
| Field    | Type        | Null | Key | Default | Extra |
+----------+-------------+------+-----+---------+-------+
| STF_Id   | char(5)     | NO   | PRI |         |       |
| STF_Name | varchar(20) | YES  |     | NULL    |       |
+----------+-------------+------+-----+---------+-------+
2 rows in set (0.00 sec)

mysql> desc users;
+------------+-------------+------+-----+---------+-------+
| Field      | Type        | Null | Key | Default | Extra |
+------------+-------------+------+-----+---------+-------+
| Id         | char(5)     | NO   |     |         |       |
| user_Name  | varchar(20) | YES  |     | NULL    |       |
| Password   | varchar(20) | YES  |     | NULL    |       |
+------------+-------------+------+-----+---------+-------+
3 rows in set (0.02 sec)

If a new row is to be inserted into the users table, that row's Id must be found in the students table or the staff table.

In other words, a staff member or a student can be a user.

Is this possible?

I tried with following foreign keys.

ALTER TABLE users
  ADD CONSTRAINT fky_stu
  FOREIGN KEY (ID)
  REFERENCES Students (STU_Id);

ALTER TABLE users
  ADD CONSTRAINT fky_stf
  FOREIGN KEY (ID)
  REFERENCES Staff (STF_Id);

But the users table then only allows a row to be inserted if the Id is found in both tables (students and staff).

I need it to allow the insert if the Id is found in either table (students or staff).
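A single foreign key cannot express "exists in table A or table B", and two foreign keys on the same column mean AND, exactly as observed. One workaround in MySQL 5.5+ (after dropping the two FK constraints) is a BEFORE INSERT trigger; a sketch using the tables from the question:

```sql
DELIMITER //

CREATE TRIGGER trg_users_check
BEFORE INSERT ON users
FOR EACH ROW
BEGIN
  -- Reject the row only when the Id appears in neither parent table.
  IF NOT EXISTS (SELECT 1 FROM students WHERE STU_Id = NEW.Id)
     AND NOT EXISTS (SELECT 1 FROM staff WHERE STF_Id = NEW.Id) THEN
    SIGNAL SQLSTATE '45000'
      SET MESSAGE_TEXT = 'Id must exist in students or staff';
  END IF;
END//

DELIMITER ;
```

Unlike a real foreign key, this does not cascade or block deletes from students/staff, so a matching BEFORE UPDATE trigger (and a policy for deleted parents) would be needed for full integrity. An alternative design is a single "person" parent table that both students and staff reference, which lets users carry an ordinary FK.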

Complicated join with where clause

Posted: 08 Jun 2013 03:25 AM PDT

I have four tables:

  • sales
  • sales_thumbs
  • sales_images
  • sales_sizes

sales table:

+--------------+---------------------+------+-----+---------+----------------+
| Field        | Type                | Null | Key | Default | Extra          |
+--------------+---------------------+------+-----+---------+----------------+
| sale_id      | bigint(20) unsigned | NO   | PRI | NULL    | auto_increment |
| store_id     | bigint(20) unsigned | NO   |     | NULL    |                |
| store_name   | varchar(100)        | NO   |     | NULL    |                |
| sale_url     | varchar(255)        | NO   | UNI | NULL    |                |
| headline     | varchar(200)        | NO   |     | NULL    |                |
| description  | text                | NO   |     | NULL    |                |
| category     | varchar(100)        | NO   |     | NULL    |                |
| sub_category | varchar(100)        | NO   |     | NULL    |                |
| brand        | varchar(100)        | NO   |     | NULL    |                |
| gender       | varchar(5)          | NO   |     | NULL    |                |
| full_img_url | varchar(240)        | NO   |     | NULL    |                |
| onsale       | int(1)              | NO   |     | NULL    |                |
| soldout      | int(1)              | NO   |     | 0       |                |
| created_at   | datetime            | NO   |     | NULL    |                |
| updated_at   | datetime            | NO   |     | NULL    |                |
| parsed       | int(1)              | NO   |     | 0       |                |
+--------------+---------------------+------+-----+---------+----------------+

sales_thumbs:

+------------+---------------------+------+-----+---------+-------+
| Field      | Type                | Null | Key | Default | Extra |
+------------+---------------------+------+-----+---------+-------+
| sale_id    | bigint(20) unsigned | NO   | MUL | NULL    |       |
| thumb_name | varchar(240)        | NO   | PRI | NULL    |       |
+------------+---------------------+------+-----+---------+-------+

sales_images:

+------------+---------------------+------+-----+---------+-------+
| Field      | Type                | Null | Key | Default | Extra |
+------------+---------------------+------+-----+---------+-------+
| sale_id    | bigint(20) unsigned | NO   | MUL | NULL    |       |
| image_name | varchar(240)        | NO   | PRI | NULL    |       |
+------------+---------------------+------+-----+---------+-------+

sales_sizes:

+---------+---------------------+------+-----+---------+-------+
| Field   | Type                | Null | Key | Default | Extra |
+---------+---------------------+------+-----+---------+-------+
| sale_id | bigint(20) unsigned | NO   | MUL | NULL    |       |
| size    | varchar(10)         | NO   |     | NULL    |       |
| country | varchar(20)         | NO   |     | NULL    |       |
+---------+---------------------+------+-----+---------+-------+

I'm looking to build a query that lets me:

  • SELECT * FROM sales WHERE [something] GROUP BY created_at LIMIT [something] OFFSET [something]
  • JOIN sales_thumbs and sales_images ON sale_id
  • (most importantly) JOIN sales_sizes ON sale_id WHERE sales_sizes.size = [size]

But here's the catch: I want to return ALL sales_sizes.size for a unique sale_id that has a specific size among all its sizes.

For example if sale_id = 8655 has the following sizes, I want to return all sizes if it has (for example) 'm' as one of its sizes...

mysql> select * from sales_sizes where sale_id = 8655;
+---------+------+---------+
| sale_id | size | country |
+---------+------+---------+
|    8655 | s    | eu      |
|    8655 | m    | eu      |
|    8655 | l    | eu      |
+---------+------+---------+
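One possible shape for the core of such a query is a subquery that first finds the sale_ids having the requested size, then joins back to sales_sizes so all sizes for those sales come through. A sketch (the 'm', LIMIT, and OFFSET values would be substituted from the GET string; the thumbs/images joins are left out here, since joining them in the same statement multiplies rows and they are often fetched in a second query):

```sql
SELECT s.*, ss.size, ss.country
FROM sales s
JOIN sales_sizes ss ON ss.sale_id = s.sale_id
WHERE s.sale_id IN (SELECT sale_id FROM sales_sizes WHERE size = 'm')
ORDER BY s.created_at DESC
LIMIT 20 OFFSET 20;
```

This returns one row per (sale, size) pair, so the application would group the rows by sale_id when rendering.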

Extra info:

I'm building the query from a GET string, which could look something like this:

www.mysite.com?category=shirts&gender=women&size=m&page=2

Hope someone can help

Thanks

Oracle schema import is not importing all the tables present in the schema dump file

Posted: 08 Jun 2013 09:25 AM PDT

I have exported an existing Oracle schema from another machine and then imported it on my local machine. The import reported success, but some tables that are present in the export dump file were not imported.

Here are the export and import commands I used.

Export Command:
---------------
exp sreeni/sreeni@agentrics1:1524/ezmodc full=n file=SreeniTrunkDump.dmp log=SreeniTrunkDump_Export.log

Import Command:
---------------
imp badri/badri@localhost:1521/xe file=SreeniTrunkDump.dmp log=BadriSchemaImport_Import.log full=y

The Oracle we are using is 10g EE.

What could be going wrong? Can you please suggest a solution to this issue?
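One thing worth checking: with full=n and no owner or tables parameter, exp runs in user mode and exports only objects owned by (or accessible to) the connecting user, which can silently skip tables owned by other schemas. A hedged sketch of making the scope explicit on both sides, using the real exp/imp owner, fromuser, and touser parameters (the schema names here just mirror the commands above):

```
exp sreeni/sreeni@agentrics1:1524/ezmodc owner=sreeni file=SreeniTrunkDump.dmp log=SreeniTrunkDump_Export.log

imp badri/badri@localhost:1521/xe fromuser=sreeni touser=badri file=SreeniTrunkDump.dmp log=BadriSchemaImport_Import.log
```

Comparing the table list in the export log against the import log is usually the quickest way to see exactly which tables were skipped and why.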

Execution of a java program by a trigger

Posted: 08 Jun 2013 12:25 AM PDT

Can an update trigger execute a java program?

I have a table T1 with a column named Flag. Whenever Flag changes, I want to run a trigger that results in an execution of a java program.
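In Oracle, the usual route is to load the Java class into the database, publish its method as a PL/SQL procedure, and call that procedure from the trigger. A sketch assuming Oracle (the class, method, and procedure names are illustrative; the class would first be loaded with the loadjava utility):

```sql
-- Publish a static Java method as a PL/SQL procedure...
CREATE OR REPLACE PROCEDURE notify_flag_change AS
LANGUAGE JAVA NAME 'MyNotifier.run()';
/

-- ...then invoke it whenever Flag is updated.
CREATE OR REPLACE TRIGGER t1_flag_trg
AFTER UPDATE OF Flag ON T1
FOR EACH ROW
BEGIN
  notify_flag_change;
END;
/
```

Note the Java runs inside the database JVM, not as an external OS process; launching an external program from a trigger generally requires an extra hop (e.g. a queue the external program polls).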

Twice job results in sql server, last one is left in progress

Posted: 08 Jun 2013 05:29 AM PDT

I have a job, MP - Create CSV, which executes an SSIS package in SQL Server 2008. When I check the job history I see the result at step 0; when I expand it, I have two results for step 1.
The first result has the Success icon with this message:

Message
Executed as user: companyname\sa. The step succeeded.

The second one is also for step 1, but with the In Progress icon and this message:

Message
Microsoft (R) SQL Server Execute Package Utility
Version 10.50.4000.0 for 64-bit
Copyright (C) Microsoft Corporation 2010. All rights reserved.

Started: 11:50:00 PM
DTExec: The package execution returned DTSER_SUCCESS (0).
Started: 11:50:00 PM
Finished: 11:50:26 PM
Elapsed: 26.645 seconds

My problem is that I have a report showing all jobs whose status is not equal to 1 (success), and the report no longer works because of this "In Progress" result.
Here is the SQL I use for the report:

SELECT *
FROM msdb..sysjobhistory
WHERE run_status != 1
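One way to make such a report more robust is to look only at the job-outcome rows, which sysjobhistory records with step_id = 0, so per-step rows (including extra rows holding the package's console output) cannot skew the result. A sketch:

```sql
-- Report only on the overall job outcome rows (step_id = 0),
-- ignoring the individual step rows.
SELECT *
FROM msdb..sysjobhistory
WHERE run_status <> 1
  AND step_id = 0;
```

This is a reporting-side workaround; it does not explain the duplicate step-1 row, but it keeps the report correct regardless.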

My questions are: why do I have two results for my step? I think I should have only one. And what should I do when a job has a step status left "In Progress"?

For more information: the job runs daily and succeeds every time. Thank you.

Want to Get Brands In Wordpress Database Structure Saved as Custom Post types

Posted: 08 Jun 2013 01:25 AM PDT

I am having difficulty fetching the brands per category in the sidebar of my website.

Here is the description:

I am using a feed uploader to upload feeds and create brands and product categories in WordPress.

The problem is that there is no link between product_category and product_brand, and I want to show in the sidebar only the brands belonging to a specific category, not the whole brand list, which grows too long.

So I tried these heavy queries to fetch the brands per category, but the INNER JOINs make the database too slow: the page keeps loading without showing the brands. The queries return the correct results; I just need to speed them up so visitors don't leave.

Here are the queries I am using:

$term = get_term_by('slug', get_query_var('term'), 'product_category');

$q5 = "SELECT DISTINCT p1.ID
       FROM $wpdb->posts p1
       INNER JOIN $wpdb->term_relationships tr1 ON (p1.ID = tr1.object_id)
       INNER JOIN $wpdb->term_taxonomy tt1 ON (tr1.term_taxonomy_id = tt1.term_taxonomy_id)
       WHERE tt1.term_id = '$term->term_id'";

$q2 = "SELECT tt2.term_id
       FROM $wpdb->posts p2
       INNER JOIN $wpdb->term_relationships tr2 ON (p2.ID = tr2.object_id)
       INNER JOIN $wpdb->term_taxonomy tt2 ON (tr2.term_taxonomy_id = tt2.term_taxonomy_id)
       WHERE p2.ID IN ($q5)";

$q3 = "SELECT DISTINCT tt3.*
       FROM $wpdb->posts p3
       INNER JOIN $wpdb->term_relationships tr3 ON (p3.ID = tr3.object_id)
       INNER JOIN $wpdb->term_taxonomy tt3 ON (tr3.term_taxonomy_id = tt3.term_taxonomy_id)
       INNER JOIN $wpdb->terms t3 ON t3.term_id = tt3.term_id
       WHERE 1=1
         AND tt3.term_id IN ($q2)
         AND tt3.taxonomy = 'product_brand'
       ORDER BY t3.name ASC";

$brands = $wpdb->get_results($q3);

The first two queries run fine, but the last one is too slow.

How can I make the last query run faster?
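One avenue worth trying: the two levels of IN (subquery) can be collapsed into a single self-join on term_relationships, which often lets MySQL pick a much better plan. A sketch of the equivalent logic, assuming the default wp_ table prefix (the term_id literal stands in for the bound $term->term_id):

```sql
-- Brands attached to any post that carries the given category term:
-- tr1 finds the category's posts, tr3 finds those posts' other terms,
-- and tt3 filters them down to the product_brand taxonomy.
SELECT DISTINCT tt3.*
FROM wp_term_relationships tr1
JOIN wp_term_taxonomy tt1 ON tt1.term_taxonomy_id = tr1.term_taxonomy_id
JOIN wp_term_relationships tr3 ON tr3.object_id = tr1.object_id
JOIN wp_term_taxonomy tt3 ON tt3.term_taxonomy_id = tr3.term_taxonomy_id
JOIN wp_terms t3 ON t3.term_id = tt3.term_id
WHERE tt1.term_id = 1
  AND tt3.taxonomy = 'product_brand'
ORDER BY t3.name ASC;
```

Running EXPLAIN on both forms, and caching the result per category with the Transients API, are the usual next steps if this is still too slow.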

User login error when trying to access secured SQL Server database

Posted: 08 Jun 2013 01:25 PM PDT

We have a username that was recently renamed from one username to another (think getting married). The Active Directory admin renamed the user because "it has always worked in the past".

One vendor package we use uses the built-in MS SQL Server security. Each module has three groups:

  • xxView = View Only
  • xxUser = Add, Update rows (cannot delete)
  • xxAdmin = Delete rows

So we can add a person to one of these groups and they get the appropriate access. I don't have the actual error message in front of me anymore, but it said that they are not authorized for the table CriticalVendorTable. It worked before the rename. The admin removed the person from each group and re-added them. Still no go. I even restarted the server and it still doesn't work. My best guess is that there is a UUID (or unique ID) somewhere that is causing problems.

The vendor's response is to delete the user and then re-add them. I have only had time to do some brief searching, but I found this page: AD User SID Mis-mapping. Would this be worth trying? Or would it be better to just delete the user and recreate them?
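Before deleting anything, it may be worth confirming the SID theory directly. A sketch of a diagnostic query against the affected database, using the standard catalog views (a NULL or differing server_sid for the renamed user would point at a mis-mapping):

```sql
-- Compare the SID stored for each Windows user/group inside the database
-- with the server-level login SID; a mismatch suggests a mis-mapped user.
SELECT dp.name,
       dp.sid AS database_sid,
       sp.sid AS server_sid
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp
       ON dp.name = sp.name
WHERE dp.type IN ('U', 'G');   -- Windows users and groups
```

If a mismatch shows up, remapping the database user to the login (ALTER USER ... WITH LOGIN = ...) is typically less disruptive than dropping and recreating the user, since it preserves the user's permissions.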
