Tuesday, June 25, 2013

[how to] How to set up ODBC for Oracle in Windows 7

How to set up ODBC for Oracle in Windows 7

Posted: 25 Jun 2013 07:37 PM PDT

Currently, I'm trying to figure out how to connect to an Oracle database from my client PC.

The purpose is to manipulate the database from FileMaker/Access (but mainly FileMaker).

My client environment (192.168.5.40):

  • Windows 7 (64-bit)
  • FileMaker 12

Server environment (192.168.10.100):

  • Oracle version 10.0.2

I've tried installing Instant Client (instantclient-basic-win-x86-64-10.2.0.3.0 and instantclient-odbc-win32-10.2.0.3-20061115).

It didn't work, so I tried a few different approaches, but I always get some kind of error. In particular, I'm stuck on the SysWOW64 issue.

Does anyone know exactly which files I have to install for this situation?

Thanks for the help:)
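
One detail worth checking, sketched here under the assumption that FileMaker/Access run as 32-bit applications and that the listener uses the default port 1521 with a service name of ORCL (both assumed, not taken from the question): a 32-bit ODBC Instant Client package has to be paired with a 32-bit Basic package of the same version, the DSN has to be created with the 32-bit ODBC Administrator at C:\Windows\SysWOW64\odbcad32.exe, and TNS_ADMIN should point at a directory containing a tnsnames.ora such as:

# tnsnames.ora (in the directory referenced by the TNS_ADMIN environment variable)
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.10.100)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = ORCL))
  )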

Is it possible to convert mysql binlog from statement format to row format?

Posted: 25 Jun 2013 08:19 PM PDT

The MySQL server in production is version 4.1, which doesn't support row-based binary logging. Nevertheless, I need a row-based binlog. Can I take the binlog generated by the old MySQL server and import it into another MySQL server of a higher version that supports row-based binlogs, in order to obtain a row-based binlog?
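
As far as I know there is no tool that rewrites an existing statement-based binlog into row format; the usual shape of this idea, if it works at all across such a version gap, is to replay or replicate the statements into a newer server that writes its own binlog in ROW format. A minimal sketch of that intermediate server's my.cnf (the server-id and log name are assumptions):

[mysqld]
server-id         = 2
log-bin           = mysql-bin
binlog_format     = ROW      # available from MySQL 5.1 onwards
log-slave-updates            # re-log statements that arrive via replication into this server's binlog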

Xtrabackup manager failed with apply-log step

Posted: 25 Jun 2013 06:27 PM PDT

I am using XtraBackup Manager to back up MySQL servers, and I have a problem with it. All my backups run fine, except on the server with about 200 GB of data, where the backup task always fails. I opened the log file and saw:

2013-06-26 01:15:09 +0700 [INFO] : [ XtraBackup step completed OK. Proceeding with apply-log step... ]
2013-06-26 01:15:09 +0700 [INFO] : [ Cleaning up files... ]
2013-06-26 01:15:21 +0700 [INFO] : [ Released lock on port 10000. ]
2013-06-26 01:15:21 +0700 [ERROR] : [ An error occurred while trying to perform the backup. Proceeding to log some details to help debug... ]
2013-06-26 01:15:21 +0700 [ERROR] : [ Error Caught: genericBackupTaker->takeFullBackupSnapshot: Error: Unable to create dir './tmp' in backup dir for apply log process to utilize. ]
2013-06-26 01:15:21 +0700 [ERROR] : [ Trace:
  #0 /usr/local/xtrabackup-manager/includes/fullonlyBackupTaker.class.php(110): genericBackupTaker->takeFullBackupSnapshot(Object(backupJob), Object(backupSnapshotGroup))
  #1 /usr/local/xtrabackup-manager/includes/backupSnapshotTaker.class.php(221): fullonlyBackupTaker->takeScheduledBackupSnapshot(Object(backupJob))
  #2 /usr/local/xtrabackup-manager/includes/cliHandler.class.php(902): backupSnapshotTaker->takeScheduledBackupSnapshot(Object(scheduledBackup))
  #3 /usr/local/xtrabackup-manager/includes/cliHandler.class.php(103): cliHandler->handleBackupActions(Array)
  #4 /usr/local/xtrabackup-manager/xbm(36): cliHandler->handleArguments(Array)
  #5 {main} ]

It looks like xtrabackup fails during the apply-log step (it cannot create the ./tmp directory in the backup directory). Does anyone have an idea how to solve this problem?

Here is info about my software:

  • XtraBackup Manager v0.81
  • xtrabackup version 2.1.3 for Percona Server
  • 5.1.59 InnoDB Backup Utility v1.5.1-xtrabackup
  • Mysql Server 5.1.69

When importing raw files for internal conversion, should I use a secondary database or just isolate them within the database?

Posted: 25 Jun 2013 05:21 PM PDT

When importing raw files for internal conversion, should I use a secondary database or just isolate them within the database?

I've got between 1.25 and 2GB of CSV files to be imported (and already have most of the process running smooth as butter) so my question is: does it make sense as a "best practice" to use a "secondary" database for the import or just load them into the database that I'm going to be working in?

Example:

FinalDatabase   -- this is the one I'm wanting to do, unless there's a net negative
    core.Users
    core.Addresses
    core.Emails
    core.OtherTables
    staging.UsersImport
    staging.ContactImport

OR

FinalDatabase
    core.Users
    core.Addresses
    core.Emails
    core.OtherTables
StagingDB
    dbo.UsersImport
    dbo.ContactImport

Obviously I'll be migrating from one set of tables to the other via scripts, so it doesn't make much difference how the staging table ends up being qualified, whether it's just

INSERT INTO core.Users (fieldlist) SELECT fieldlist FROM staging.Users  

or

INSERT INTO core.Users (fieldlist) SELECT fieldlist FROM StagingDB.dbo.Users  

The cons I've identified with the first style are the following:

  • Increases backup size
  • Increases file size (including increasing the file size of the existing database -- autogrowth issues)
  • Negates the potential use of multiple spindles (in my case not an option :( )

Some of the pros I've identified with the first style:

  • The original import data stays with the database in case of future questions (until someone truncates it, of course)
  • It gets backed up with the latter data in case you need to recover something

What would be considered a best practice in this situation and why?

Convert units of measurement

Posted: 25 Jun 2013 07:52 PM PDT

I'm looking to calculate the most suitable unit of measurement for a list of substances, where the substances are given in differing (but compatible) units of volume.

Unit Conversion Table

The unit conversion table stores various units and how those units relate:

id  unit          coefficient                  parent_id
36  "microlitre"  0.0000000010000000000000000  37
37  "millilitre"  0.0000010000000000000000000   5
 5  "centilitre"  0.0000100000000000000000000  18
18  "decilitre"   0.0001000000000000000000000  34
34  "litre"       0.0010000000000000000000000  19
19  "dekalitre"   0.0100000000000000000000000  29
29  "hectolitre"  0.1000000000000000000000000  33
33  "kilolitre"   1.0000000000000000000000000  35
35  "megalitre"   1000.0000000000000000000000   0

Sorting by the coefficient shows that the parent_id links a child unit to its numeric superior.

This table can be created in PostgreSQL using:

CREATE TABLE unit_conversion (
  id serial NOT NULL,                            -- Primary key.
  unit text NOT NULL,                            -- Unit of measurement name.
  coefficient numeric(30,25) NOT NULL DEFAULT 0, -- Conversion value.
  parent_id integer NOT NULL DEFAULT 0,          -- Relates units in order of increasing measurement volume.
  CONSTRAINT pk_unit_conversion PRIMARY KEY (id)
)

There should be a foreign key from parent_id to id.

Substance Table

The Substance Table lists specific quantities of substances. For example:

id  unit          label    quantity
 1  "microlitre"  mercury    5
 2  "millilitre"  water    500
 3  "centilitre"  water      2
 4  "microlitre"  mercury   10
 5  "millilitre"  water    600

The table might resemble:

CREATE TABLE substance (
  id bigserial NOT NULL,                -- Uniquely identifies this row.
  unit text NOT NULL,                   -- Foreign key to unit conversion.
  label text NOT NULL,                  -- Name of the substance.
  quantity numeric( 10, 4 ) NOT NULL,   -- Amount of the substance.
  CONSTRAINT pk_substance PRIMARY KEY (id)
)

Problem

How would you create a query that finds, for each substance, the measurement that represents the sum of its quantities using the fewest digits while still having a non-zero whole-number part (and optionally a fractional component)?

For example, how would you return:

quantity  unit        label
      15  microlitre  mercury
     3.1  centilitre  water

Mostly, I'm having trouble picking "centilitre" over "millilitre", taking into consideration that the millilitre's parent measurement unit is the centilitre.

Source Code

So far I have (obviously non-working):

-- Normalize the quantities
select
  sum( coefficient * quantity ) AS kilolitres
from
  unit_conversion uc,
  substance s
where
  uc.unit = s.unit
group by
  s.label

Ideas

Does this require using log10 to determine the number of digits?
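
Not necessarily. One way to sidestep digit counting, sketched against the two tables above under the assumption that "fewest digits" means "the largest unit that still yields a quantity of at least 1": normalize everything to the base unit, then pick, per substance, the unit with the largest coefficient whose converted total is still >= 1. Whether this matches the intended tie-breaking rules is worth checking against more examples.

-- A sketch in PostgreSQL against the unit_conversion and substance tables above.
WITH totals AS (
  SELECT s.label,
         SUM(s.quantity * uc.coefficient) AS kilolitres   -- everything in the base unit
  FROM   substance s
  JOIN   unit_conversion uc ON uc.unit = s.unit
  GROUP  BY s.label
)
SELECT DISTINCT ON (t.label)
       t.kilolitres / uc.coefficient AS quantity,
       uc.unit,
       t.label
FROM   totals t
JOIN   unit_conversion uc ON t.kilolitres / uc.coefficient >= 1  -- only units that keep a whole-number part
ORDER  BY t.label, uc.coefficient DESC;                          -- largest such unit wins per label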

Does my.cnf affect the number of inserts/sec?

Posted: 25 Jun 2013 04:17 PM PDT

I'm a complete MySQL novice, and after a few hours of InnoDB tuning I've gotten nowhere. Either I'm doing something wrong, which I really hope :), or the my.cnf settings don't affect insert performance?

Many websites say they do, so here goes. I'll start with the basics and try to explain my steps; bear with me, and I hope someone can point out what I'm doing wrong.

Server info: VPS at Tilaa, 8 GB RAM, 120 GB RAID 10, 4 × 2.4 GHz (at 90%, so about 8.6 GHz).

Database info: the table engine used is InnoDB. The table is named persons and its primary key is a "random" BIGINT (this ID is provided by a third party). There are 6 other columns, all INTs except one DATE. Two triggers fire, one after INSERT and one after UPDATE; they insert each row into a mapping table with 2 columns (INT, BIGINT), so I have a gapless table, which I need.

Current number of inserts:
At the moment I am able to insert 800 rows per second into the persons table. But given the "simple" table and the decent VPS, I should get more, right? I read somewhere that InnoDB is not very fond of the random ID part, but this still seems slow, right?

Steps I took: I opened the my.cnf file located at /etc/my.cnf and saw this:

[mysqld]
innodb_file_per_table=1
default-storage-engine=MyISAM

So basically this doesn't set much. After reading some questions on DBA Stack Exchange and surfing the Internet, I added these extra settings:

key_buffer = 128M
thread_stack = 4M
thread_cache_size = 8
table_cache = 8192
max_heap_table_size = 256M
query_cache_limit = 4M
query_cache_size = 512M
innodb_buffer_pool_size = 4G
innodb_buffer_pool_instances = 4
innodb_log_file_size = 512M

After saving these settings I deleted the files ib_logfile1, ib_logfile0 and ibdata1 in /var/lib/mysql (I read somewhere that this must be done).
Last, I restarted the MySQL server with cPanel WHM.

Finally, running the insert query (10,000 times) again takes 9.7 seconds, which is exactly the same as before the change to the config file.

Does anyone have an idea what I'm doing wrong or forgetting? Side note: when I open phpMyAdmin and go to system variables, I do indeed see buffer pool size 4096 MB (so the settings really did change?).

Any help is welcome!
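
One thing that often matters more than buffer sizes for pure insert throughput is how many transactions the test performs: with the default innodb_flush_log_at_trx_commit = 1, every autocommitted INSERT forces a log flush to disk. A sketch of the same 10,000-row test batched into a single transaction (table and column names here are placeholders, not the actual schema):

START TRANSACTION;
-- placeholder column list; substitute the real persons columns
INSERT INTO persons (id, col_a, col_b, col_c, col_d, col_e, visit_date)
VALUES (123456789012, 1, 2, 3, 4, 5, '2013-06-25');
-- ... the remaining 9,999 inserts ...
COMMIT;

If batching changes the timing dramatically, the bottleneck is the per-commit log flush rather than the buffer settings above.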

InnoDB pop queue

Posted: 25 Jun 2013 04:09 PM PDT

I have a question about implementing a queue with an InnoDB table, using the Python/Django framework. In a multiprocessing environment, my code hits a deadlock. Please see my question on StackOverflow for all the details.
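
As general context: deadlocks in InnoDB-backed queues usually come from several workers locking the same candidate rows. One common pattern in this MySQL era (no SKIP LOCKED yet) is to claim a row with a single-row UPDATE and only then read it back; a sketch with a hypothetical queue table:

-- Claim exactly one unclaimed row; the UPDATE itself is atomic.
UPDATE queue
SET    claimed_by = CONNECTION_ID()
WHERE  claimed_by IS NULL
ORDER  BY id
LIMIT  1;

-- Read back whatever this connection just claimed (zero rows means the queue was empty).
SELECT id, payload
FROM   queue
WHERE  claimed_by = CONNECTION_ID();

-- After processing, DELETE the row; on failure, reset claimed_by to NULL so another worker can retry.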

Postgresql not starting correctly, or is it repairing itself first?

Posted: 25 Jun 2013 07:30 PM PDT

I've never seen this problem before. I had problems and many PostgreSQL processes were stuck, so I killed them with -KILL...

When I try to restart, it says that it cannot restart, but the daemon continues to run, using a little CPU and a lot of I/O. Is it trying to repair the database?

I get no log output at all. I think there is a way to increase log verbosity; I'll look into that. At this point, the socket for connecting to the server doesn't get created, but the server does not quit or emit any error or message, so I have no clue what is going on!

If anyone has a clue, I'd be glad to hear about it.

How to find mongo document by ObjectID age

Posted: 25 Jun 2013 03:19 PM PDT

I have a collection of documents, and I'd like to pull the subset created after a certain point in time. I understand the creation timestamp is encoded in each document's ObjectId (assuming they are auto-generated). I see that ObjectId has a getTimestamp method that returns that portion of the ObjectId as an ISODate.

I'm not very fluent in mongo and am having trouble constructing this seemingly simple query.

For bonus points, once I figure out the "where clause", if you will, I want to select a single field from the documents, using mongodump or whatever else might be available, to export the results to a text file via the mongo shell.
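
A sketch of the usual trick, assuming a collection named mycoll and a field named myField (both hypothetical): build an ObjectId whose leading four bytes encode the cutoff time, then compare _id against it in the shell.

// mongo shell
var cutoff  = new Date("2013-06-01T00:00:00Z");                    // creation-time threshold
var seconds = Math.floor(cutoff.getTime() / 1000);                  // ObjectId stores whole seconds
var minId   = ObjectId(seconds.toString(16) + "0000000000000000");  // pad to 24 hex characters

// documents created after the cutoff, projecting a single field
db.mycoll.find({ _id: { $gt: minId } }, { myField: 1, _id: 0 });

From there, mongoexport with a --query built the same way, or piping the shell's output to a file, covers the text-file part.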

Wrong return results

Posted: 25 Jun 2013 01:09 PM PDT

I'm trying to grab all the rows that have a risk of Critical or High, with the description or synopsis or solution or cve like 'password'. But it keeps showing all rows, not just rows with a risk of Critical or High.

SELECT host, plugin_id, min(cve) as cve, risk, synopsis, description, solution, count(distinct(plugin_id))
from internal_network
where risk in ('Critical','High')
  and description like '%password%'
  or synopsis like '%password%'
  or solution like '%password%'
  or cve like '%password%'
group by risk, plugin_id, host, synopsis, description, solution;

 192.168.21.128 | 17651 |  | None | It is possible to retrieve the remote host's password policy using the supplied credentials. | Using the supplied credentials it was possible to extract the password policy for the remote Windows host. The password policy must conform to the Informational System Policy. | n/a | 1
 192.168.25.126 | 17651 |  | None | It is possible to retrieve the remote host's password policy using the supplied credentials. | Using the supplied credentials it was possible to extract the password policy for the remote Windows host. The password policy must conform to the Informational System Policy. | n/a | 1

If I execute the following query I get the correct results.

SELECT host, plugin_id, min(cve) as cve, risk, synopsis, description, solution, count(distinct(plugin_id))
from internal_network
where risk in ('Critical','High')
  and description like '%password%'
group by risk, plugin_id, host, synopsis, description, solution;

      host      | plugin_id |      cve      |   risk   | synopsis | description | solution | count
 ---------------+-----------+---------------+----------+----------+-------------+----------+-------
 172.18.12.150  |     11255 | CVE-1999-0502 | Critical | An account on the remote host uses a known password. | The account 'root' on the remote host has the password 'root'. An attacker may leverage this issue to gain total control of the affected system. | Change the password for this account or disable it. | 1
 10.124.2.10    |     61708 |               | Critical | A VNC server running on the remote host is secured with a weak password. | The VNC server running on the remote host is secured with a weak password. Nessus was able to login using VNC authentication and a password of 'password'. A remote, unauthenticated attacker could exploit this to take control of the system. | Secure the VNC service with a strong password. | 1
 172.18.12.130  |     10205 | CVE-1999-0651 | High     | The rlogin service is listening on the remote port. | The remote host is running the 'rlogin' service. This service is dangerous in the sense that it is not ciphered - that is, everyone can sniff the data that passes between the rlogin client and the rlogin server. This includes logins and passwords. | Comment out the 'login' line in /etc/inetd.conf | 1
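
The behaviour described is what AND/OR precedence produces: AND binds more tightly than OR, so in the first query the risk filter only applies to the description LIKE, and rows matching any of the other LIKEs come back regardless of risk. A sketch of the first query with the OR terms parenthesized:

SELECT host, plugin_id, min(cve) AS cve, risk, synopsis, description, solution,
       count(distinct plugin_id)
FROM   internal_network
WHERE  risk IN ('Critical', 'High')
  AND  (   description LIKE '%password%'
        OR synopsis    LIKE '%password%'
        OR solution    LIKE '%password%'
        OR cve         LIKE '%password%')
GROUP  BY risk, plugin_id, host, synopsis, description, solution;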

Query returning correct data, and additional data [on hold]

Posted: 25 Jun 2013 12:58 PM PDT

Thank you all in advance for any responses.

I am querying various Snort tables in order to produce a report. When I run my current query, I receive the expected results (as verified by our IDS console), but I also get incorrect results.

I suspect the issue is with my JOIN statements.

Here is a link for the Snort database layout:

http://acidlab.sourceforge.net/acid_db_er_v102.html

Here is my current query:

SELECT `event`.`sid`                AS `sid`,
       `event`.`cid`                AS `cid`,
       `event`.`signature`          AS `signature`,

       /* Other columns removed for brevity */

       `signature`.`sig_gid`        AS `sig_gid`,
       `sig_class`.`sig_class_id`   AS `sig_class_sig_class_id`,
       `sig_class`.`sig_class_name` AS `sig_class_name`
FROM   `event`
       INNER JOIN `iphdr`
               ON ( `event`.`cid` = `iphdr`.`cid` )
       INNER JOIN `sensor`
               ON ( `event`.`sid` = `sensor`.`sid` )
       INNER JOIN `signature`
               ON ( `event`.`signature` = `signature`.`sig_id` )
       INNER JOIN `sig_class`
               ON ( `signature`.`sig_class_id` = `sig_class`.`sig_class_id` )
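
One thing worth checking against the linked schema: event and iphdr are keyed by the (sid, cid) pair, so joining iphdr on cid alone can pair an event with packet headers from a different sensor that happens to reuse the same cid, which would produce exactly this kind of extra rows. A sketch of that one join rewritten to use both key columns (the rest of the query unchanged):

       INNER JOIN `iphdr`
               ON (     `event`.`sid` = `iphdr`.`sid`
                    AND `event`.`cid` = `iphdr`.`cid` )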

Oracle 11g bug? A record is not returned until I toggle the index to invisible and then back to visible

Posted: 25 Jun 2013 12:42 PM PDT

We are using Oracle 11g, 11.2.0.3.

We know a record exists in a table but a select is not returning it for some odd reason.

  1. The execution plan for the general query, not forcing any index, shows that the index oneIndex is used. No record is returned.
  2. Forcing another index with the same query returns the missing record.
  3. I set oneIndex to invisible and back to visible...
  4. Re-executing the initial query (not forcing any index) now does return the record (but somehow the execution plan no longer uses oneIndex). Maybe it woke up and found that the stats were old???
  5. Taking the first query again and forcing the use of the faulty index (oneIndex) now works fine.

Facts: Statistics were quite old.

Session parameters for Oracle SQL Developer:

ALTER SESSION SET
    OPTIMIZER_FEATURES_ENABLE = '11.2.0.1'
    OPTIMIZER_MODE = FIRST_ROWS_1
    "_HASH_JOIN_ENABLED" = FALSE
    "_PUSH_JOIN_PREDICATE" = TRUE
    "_PUSH_JOIN_UNION_VIEW" = TRUE
    "_COMPLEX_VIEW_MERGING" = TRUE
    "_TABLE_SCAN_COST_PLUS_ONE" = TRUE
    "_ORDERED_NESTED_LOOP" = TRUE
    "_NEW_INITIAL_JOIN_ORDERS" = TRUE
    "_UNNEST_SUBQUERY" = TRUE
    "_INDEX_JOIN_ENABLED" = TRUE
    "_LIKE_WITH_BIND_AS_EQUALITY" = TRUE
    NLS_SORT = BINARY
    NLS_COMP = BINARY
    NLS_NUMERIC_CHARACTERS = '.,'
    QUERY_REWRITE_ENABLED = FORCE
    CURSOR_SHARING = EXACT
    DB_FILE_MULTIBLOCK_READ_COUNT = 0
    OPTIMIZER_INDEX_COST_ADJ = 5
    OPTIMIZER_INDEX_CACHING = 95

The SQL is a bit hard to follow since it is produced by a content management application. I will not provide it for now.

  • Q1: Why is an index no longer chosen by the execution plan after switching to invisible and then back to visible?

  • Q2: How can Oracle momentarily not see a record when searched from an index instead of another index?

Page Break is splitting in the middle of a single row in SSRS

Posted: 25 Jun 2013 12:56 PM PDT

I have an SSRS report for an Invoice and it generally works perfectly but occasionally it will page break in the middle of a row in the main Tablix. The row will split and leave part of the text on one page and the rest on the next. The Tablix has no inherent page breaking. I was just relying on it to break between rows (regardless of which rows). There are a couple rows that repeat on each page. There is a second Tablix below the Detail one with Summary Totals. This is just some background info about the report. I'm relatively new to SSRS and haven't had the greatest luck with the formatting thus far.

Thanks for any help!

Is it possible to build an UNDO framework for postgres?

Posted: 25 Jun 2013 05:25 PM PDT

I was thinking of a table which would automatically log all modifications made to other tables, together with the command to undo each modification. Every time you issue an UNDO() command, you would read the statement from the log table and execute it.

Example:

You issue a

INSERT INTO sales (id, client_id, product, value)
VALUES (4621, 231, 'Hamburguer', 500)

so the log-trigger would do

INSERT INTO log (undo) VALUES ('DELETE FROM sales WHERE id = 4621')  

When you issue UNDO() the function would issue something like EVAL(SELECT undo FROM log ORDER BY id DESC LIMIT 1).

If anyone has experience with this, please tell me whether it is possible.
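
It is possible in principle. A minimal sketch of the idea in PL/pgSQL for the sales example above, with illustrative names (a real framework would also need UPDATE handling, robust quoting for every type, and a way to keep the compensating statements themselves from being logged again):

CREATE TABLE log (
    id   bigserial PRIMARY KEY,
    undo text NOT NULL
);

CREATE OR REPLACE FUNCTION sales_undo_log() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO log (undo)
        VALUES (format('DELETE FROM sales WHERE id = %s', NEW.id));
    ELSIF TG_OP = 'DELETE' THEN
        INSERT INTO log (undo)
        VALUES (format('INSERT INTO sales (id, client_id, product, value) VALUES (%s, %s, %L, %s)',
                       OLD.id, OLD.client_id, OLD.product, OLD.value));
    END IF;
    RETURN NULL;   -- AFTER row trigger: the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_sales_undo_log
    AFTER INSERT OR DELETE ON sales
    FOR EACH ROW EXECUTE PROCEDURE sales_undo_log();

-- UNDO(): run and discard the most recent compensating statement.
CREATE OR REPLACE FUNCTION undo() RETURNS void AS $$
DECLARE
    r log%ROWTYPE;
BEGIN
    SELECT * INTO r FROM log ORDER BY id DESC LIMIT 1;
    IF FOUND THEN
        EXECUTE r.undo;
        DELETE FROM log WHERE id = r.id;
    END IF;
END;
$$ LANGUAGE plpgsql;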

Latin1 to UTF8 on large MySQL database, minimal downtime

Posted: 25 Jun 2013 04:23 PM PDT

Let me lay out the basic scenario for the problem I'm up against.

  • I'm running MySQL 5.1 and have a single database with 2 large MyISAM tables (9M and 20M rows respectively).
  • Each of these tables has several VARCHAR and TEXT columns which are currently set to the latin1 character set and latin1_ci collation.
  • These tables are actively read / written to, so we want to avoid as much downtime as possible.

I'm confident in actually converting the data between the two character sets (using ALTER TABLE to convert columns to their binary counterparts, then back to the correct original column type in the correct charset). What I'm not sure about is how to go about the conversion without locking the entire table for the duration (which could take hours / days due to the number of rows and indexes).

One idea is to set up replication to a slave; turn off the replication; convert everything; re-enable replication and let the slave catch up; then finally promote the slave to master.

However, this doesn't solve the issue of how to set up replication in the first place. From what I've read, even mysqlhotcopy & xtrabackup still require table locks for MyISAM tables.

Is there another option I've missed, or am I going to have to turn off my entire application in order to set this up? Thanks for any advice.
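
For completeness, the two-step column conversion referred to above looks roughly like this per column (table, column names and lengths are illustrative); it doesn't change the locking question, which is the real problem here:

-- VARCHAR goes via VARBINARY, TEXT via BLOB, so the bytes are reinterpreted rather than transcoded.
ALTER TABLE big_table MODIFY title VARBINARY(255);
ALTER TABLE big_table MODIFY title VARCHAR(255) CHARACTER SET utf8;

ALTER TABLE big_table MODIFY body BLOB;
ALTER TABLE big_table MODIFY body TEXT CHARACTER SET utf8;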

Postgres Write Performance on Intel S3700 SSD

Posted: 25 Jun 2013 12:05 PM PDT

I'm not seeing the Postgres write performance increases I thought I would with a single SSD vs a hardware RAID 10 array of (8) 15k RPM SAS drives.

I have a Dell R820 with a PERC H700 hardware RAID card and 8 15k RPM SAS drives in a RAID 10 array, as well as an 800GB Intel s3700 SSD. The server has 128GB of RAM and 64 cores of Xeon E5-4640 at 2.40GHz, running CentOS 6.4 and Postgres 9.2.4.

I'm using pgbench to compare the SAS drives in a RAID 10 array to the single SSD.

15k RPM SAS RAID 10 Results

pgbench -U postgres -p 5432 -T 50 -c 10 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 10
number of threads: 1
duration: 50 s
number of transactions actually processed: 90992
tps = 1819.625430 (including connections establishing)
tps = 1821.417384 (excluding connections establishing)

Single Intel s3700 SSD Results

pgbench -U postgres -p 5444 -T 50 -c 10 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 10
number of threads: 1
duration: 50 s
number of transactions actually processed: 140597
tps = 2811.687286 (including connections establishing)
tps = 2814.578386 (excluding connections establishing)

In real world usage we have a very write-intensive process that takes about 7 minutes to complete, and the RAID 10 array and SSD are within 10 or 15 seconds of each other.

I expected far better performance from the SSD.

Here are Bonnie++ results for the SSD:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
openlink2.rady 252G   532  99 375323  97 183855  45  1938  99 478149  54 +++++ +++
Latency             33382us   82425us     168ms   12966us   10879us   10208us
Version  1.96       ------Sequential Create------ --------Random Create--------
openlink2.radyn.com -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  5541  46 +++++ +++ +++++ +++ 18407  99 +++++ +++ +++++ +++
Latency              1271us    1055us    1157us     456us      20us     408us

Here are Bonnie++ results for the RAID 10 15k RPM drives:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
openlink2.rady 252G   460  99 455060  98 309526  56  2156  94 667844  70 197.9  85
Latency             37811us   62175us     393ms   75392us     169ms   17633us
Version  1.96       ------Sequential Create------ --------Random Create--------
openlink2.radyn.com -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 12045  95 +++++ +++ +++++ +++ 16851  98 +++++ +++ +++++ +++
Latency              7879us     504us     555us     449us      24us     377us

Here are dd results for the SSD:

dd if=/dev/zero of=/path/on/ssd bs=1M count=4096 conv=fdatasync,notrunc
4294967296 bytes (4.3 GB) copied, 12.7438 s, 337 MB/s

And here are dd results for the RAID 10 15k RPM drives:

dd if=/dev/zero of=/path/on/array bs=1M count=4096 conv=fdatasync,notrunc
4294967296 bytes (4.3 GB) copied, 8.45972 s, 508 MB/s

I'd post the Postgres config, but it's clear the SSD isn't outperforming the RAID 10 array, so it doesn't seem applicable.

So is the SSD performing as it should be?

Or is the RAID 10 array with fast drives just so good that it outperforms a single SSD? A RAID 10 array of the SSDs would be awesome, but at $2,000 each the $8,000 price tag is hard to justify (unless we were sure to see the 2x to 5x gains we were hoping for in real-world performance).
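
One caveat about the pgbench runs shown above: both were initialized with scaling factor 1, so all 10 clients update the same single row in pgbench_branches and largely serialize on that row lock, which caps tps regardless of storage. Re-initializing with a scaling factor at least as large as the client count, for example:

pgbench -i -s 10 pgbench

tends to separate storage effects from lock contention before drawing conclusions about the SSD.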

pg_upgrade fails with lc_ctype cluster values do not match

Posted: 25 Jun 2013 07:46 PM PDT

I'm upgrading my PostgreSQL version 9.1.4 database to version 9.2.4. Both the old and the new version are the bundled versions of postgresapp.com for Mac OS X.

When trying to upgrade the database I get this error:

# pg_upgrade -b /tmp/Postgres.app/Contents/MacOS/bin \
    -B /Applications/Postgres.app/Contents/MacOS/bin \
    -d var-9.1 \
    -D var

Performing Consistency Checks
-----------------------------
Checking current, bin, and data directories                 ok
Checking cluster versions                                   ok
Checking database user is a superuser                       ok
Checking for prepared transactions                          ok
Checking for reg* system OID user data types                ok
Checking for contrib/isn with bigint-passing mismatch       ok
Creating catalog dump                                       ok

lc_ctype cluster values do not match:  old "de_DE", new "de_DE.UTF-8"
Failure, exiting

I searched for this error message and found no useful tips on how to fix it. Any ideas?
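
pg_upgrade insists that the new cluster was initialized with the same locale settings as the old one. If the 9.2 data directory (var) contains nothing but what initdb just created, one hedged way out is to re-run initdb for it with the locale the 9.1 cluster reports (check with SHOW lc_ctype and SHOW lc_collate on the old server) and then retry pg_upgrade, for example:

/Applications/Postgres.app/Contents/MacOS/bin/initdb -D var --lc-ctype=de_DE --lc-collate=de_DE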

Why is Postgres on 64-bit CentOS 6 significantly slower than Postgres on 32-bit CentOS 6

Posted: 25 Jun 2013 07:44 PM PDT

We have some Postgres + PostGIS applications that run well on CentOS 6 32-bit machines.

We've recently been testing them on CentOS 6 64-bit machines, with similar configuration (all our machines are managed by Puppet), and the applications run significantly slower.

Even loading the database schemas takes several times as long. (On the 32-bit machines, loading PostGIS, the schema, and fixtures takes 7 seconds; on the 64-bit machines, it takes 50-110 seconds.)

We initially had the problems with virtual servers, so we ran tests on a physical machine and found the same problems. Note that the physical 64-bit machine is slower than the virtual 32-bit machines.

The databases are not large at all.

We've experimented with various parameters in postgresql.conf and not gotten any improvement.

Is this a known issue with Postgres or PostGIS on 64-bit CentOS?

If not, how do we diagnose this?

How to repair Microsoft.SqlServer.Types assembly

Posted: 25 Jun 2013 04:39 PM PDT

When I run DBCC CHECKDB('mydb'), this is the only error message printed.

Msg 8992, Level 16, State 1, Line 1
Check Catalog Msg 3857, State 1: The attribute (clr_name=NULL) is required but is missing for row (assembly_id=1) in sys.assemblies.

It is referring to 'Microsoft.SqlServer.Types'. I do see that in this DB the clr_name is blank, but under the master DB there is a value there.

I tried to drop or alter the assembly to add this value, but it's restricted.

By the way, this DB was recently upgraded from SQL Server 2005 to 2008 R2.

Unable to connect to Amazon RDS instance

Posted: 25 Jun 2013 01:38 PM PDT

I recently created an Oracle instance on Amazon RDS. Unfortunately, I'm not able to connect to the instance using Oracle SQL Developer.

The (relevant) information I have from Amazon;

Endpoint - The DNS address of the DB Instance: xxx.yyy.eu-west-1.rds.amazonaws.com

DB Name - The definition of the term Database Name depends on the database engine in use. For the MySQL database engine, the Database Name is the name of a database hosted in your Amazon DB Instance. An Amazon DB Instance can host multiple databases. Databases hosted by the same DB Instance must have a unique name within that instance. For the Oracle database engine, Database Name is used to set the value of ORACLE_SID, which must be supplied when connecting to the Oracle RDS instance: ZZZ

Master Username - Name of master user for your DB Instance: org

Port - Port number on which the database accepts connections: 1521

From this information, the connection settings in SQL Developer are pretty obvious, so I don't really see what I could be missing...
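
Two checks that usually narrow this down: the RDS instance's security group must allow inbound traffic on port 1521 from the client's IP, and the values quoted above can be verified outside SQL Developer with an EZConnect-style string (the password placeholder is obviously an assumption, and this assumes the SID is also registered as a service name, which is typical on RDS):

sqlplus org/<password>@//xxx.yyy.eu-west-1.rds.amazonaws.com:1521/ZZZ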

Will partitions and indexes on the same table help the performance of inserts and selects?

Posted: 25 Jun 2013 02:39 PM PDT

I have a table containing the list of visitors and this table has the following information.

  • Visitor Browser Information
  • Visitor Location Information
  • Visitor Time Information
  • No of Visits

I have a second table that maintains the history of each visit; if the same visitor visits the site again, I insert into the second table and update the number of visits in the first table.

The kinds of reports that I have to generate from these tables are:

  1. Count of Visitors/day or days (Search Between days)
  2. Count of Visitors/month
  3. Count of Visitors/year
  4. Count of Visitors/browser or grouped by browsers

On average there are about 20,000 inserts to the second table and about 15,000 inserts to the first table, meaning 5,000 were updates to the first table (5,000 repeat visits).

I am considering partitioning the tables by month and sub-partitioning by day for reports 1, 2 and 3, and indexing the browser-related columns for report 4.

There will be more reports in the future; I'm not sure yet what they will filter on.

Does partitioning/sub-partitioning along with indexing help in the performance of inserts and selects?

Should I perform partitioning on both the tables?

I am currently using MySQL 5.5 + InnoDB
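
Partitioning helps the date-bounded reports mainly through partition pruning; whether it helps inserts is less clear-cut, and the browser report is better served by a plain index. A sketch of monthly range partitioning for the history table, with illustrative names, as a starting point:

-- Sketch only (MySQL 5.5, InnoDB); column and partition names are illustrative.
-- Note: every unique key, including the primary key, must contain the
-- partitioning column, which is often the deciding factor for tables keyed by a visitor id.
CREATE TABLE visit_history (
    visit_id   BIGINT      NOT NULL AUTO_INCREMENT,
    visitor_id BIGINT      NOT NULL,
    visit_time DATETIME    NOT NULL,
    browser    VARCHAR(64) NOT NULL,
    PRIMARY KEY (visit_id, visit_time),
    KEY idx_browser (browser)
) ENGINE=InnoDB
PARTITION BY RANGE COLUMNS (visit_time) (
    PARTITION p201305 VALUES LESS THAN ('2013-06-01'),
    PARTITION p201306 VALUES LESS THAN ('2013-07-01'),
    PARTITION p201307 VALUES LESS THAN ('2013-08-01'),
    PARTITION pmax    VALUES LESS THAN (MAXVALUE)
);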

"Arithmetic overflow" when initializing SQL Server 2012 replication from backup

Posted: 25 Jun 2013 03:39 PM PDT

I'm initializing SQL Server replication from a backup, by following instructions from here:

http://www.mssqltips.com/sqlservertip/2386/initialize-sql-server-replication-using-a-database-backup/

...but, when I execute

USE MyDatabase
GO
EXEC sp_addsubscription
    @publication = MyDatabasePublication,
    @subscriber = 'AMAZONA-XXXXXXX',
    @destination_db = MyDatabase,
    @sync_type = 'initialize with backup',
    @backupdevicetype = 'disk',
    @backupdevicename = 'D:\Temp\MyDatabasepublication.bak'

I get the following error:

Msg 8115, Level 16, State 2, Procedure sp_MSsetupnosyncsubscriptionwithlsn, Line 237
Arithmetic overflow error converting expression to data type nvarchar.

Any idea why, or at least where I can find this stored procedure so I can troubleshoot further?

SQL Server 2012, Standard Edition.

UPDATE: It looks like the problem is caused by the fact that the database was created using SQL Server 2008 R2 and then attached here. Anyway, I still need a solution for it.
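
To at least read the procedure named in the error, system replication procedures can usually be scripted out with sp_helptext on the publisher (this is an assumption about where this particular one is visible; it may need to be run in the distribution database instead):

EXEC sp_helptext 'sp_MSsetupnosyncsubscriptionwithlsn';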

limit the number of rows returned when a condition is met?

Posted: 25 Jun 2013 08:19 PM PDT

Is it possible to limit the number of rows returned when a condition is met? I am working on a query to check if a student is ready to graduate, and they need to meet a certain number of credits per subject. I don't want all classes, because any class past the number of credits needed can be used for electives.

EDIT: I forgot to mention that this is SQL 2008 R2

I was hoping to be able to do something like this (which I know doesn't work):

select top(sum(credits) > 3.0) c.name, c.subject, c.credits, c.pass
from classes as c
where c.Subject = 'Math' and c.passed = 1

Any help would be great


Data

Subject        | Name              | Grade | Credit | Pass
Social Studies | DL Psychology     | 83    | 0.50   | 1
Social Studies | Global Studies 10 | 82    | 1.00   | 1
Social Studies | Global Studies 9  | 83    | 1.00   | 1
Social Studies | Part. In Govern   | 84    | 0.50   | 1
Social Studies | US History 11     | 87    | 1.00   | 1

Query

select c.Subject,
       c.Name,
       c.credits,
       c.pass,
       c.score
from @classes as c
where (c.Subject = 'Social Studies' and c.pass = 1 and c.Name like '%Econ%')
   or (c.Subject = 'Social Studies' and c.pass = 1 and c.Name like '%Gov%')
   or (c.Subject = 'Social Studies' and c.pass = 1)
group by c.Subject, c.Name, c.credits, c.pass, c.score
having Sum(credits) <= 2.0

I'm expecting to see these rows returned.

Expected Results

Subject        | Name              | Grade | Credit | Pass
Social Studies | Part. In Govern   | 84    | 0.50   | 1
Social Studies | DL Psychology     | 83    | 0.50   | 1
Social Studies | Global Studies 10 | 82    | 1.00   | 1
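
SQL Server 2008 R2 has no windowed running SUM, but the same effect can be sketched with ROW_NUMBER plus a correlated subquery: keep each passing class only while the credits accumulated before it are still below the requirement. Column and table names follow the sample data above; the ordering (smallest credits first) and the 2.0 threshold are assumptions taken from the expected results.

;WITH ordered AS (
    SELECT c.Subject, c.Name, c.Grade, c.Credit, c.Pass,
           ROW_NUMBER() OVER (ORDER BY c.Credit ASC, c.Name) AS rn
    FROM   classes AS c
    WHERE  c.Subject = 'Social Studies'
      AND  c.Pass = 1
)
SELECT o.Subject, o.Name, o.Grade, o.Credit, o.Pass
FROM   ordered AS o
WHERE  ISNULL((SELECT SUM(o2.Credit)          -- credits accumulated before this row
               FROM   ordered AS o2
               WHERE  o2.rn < o.rn), 0) < 2.0 -- keep rows until the requirement is reached
ORDER  BY o.rn;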

tempdb logs on same drive as tempdb data or with other logs?

Posted: 25 Jun 2013 02:04 PM PDT

For many reasons I only have 3 hard drives (RAIDed and in an Always-On AG) for all my database files:

  • D: Data
  • E: Logs
  • F: Tempdb

Should the tempdb log file go on F: with the data file(s) or on E:?

My tempdb data file has the highest stalls by far, with the log file 4th out of 24.

In my limited DBA experience (I'm a developer), I would lean toward putting tempdb.ldf on E:, as the writes will all be sequential.

How to remove column output in a for xml path query with a group by expression?

Posted: 25 Jun 2013 11:38 AM PDT

I forgot how to prevent a column from being output in a FOR XML PATH query that uses a GROUP BY expression. I used this before, but somehow I lost the article. In the example below, I do not want idForSomething output in my result, but I want to use it as a condition for my inner query.

SELECT
    idForSomething,
    SUM(allSomething) AS [@sum],
    (SELECT innerSomething AS [@inner], innerSomething2 AS [@inner2]
     FROM   someTable s2
     WHERE  s2.innerSomething = s1.idForSomething
     FOR XML PATH('innerlist'), TYPE)
FROM  someTable s1
WHERE idForSomething = 1
GROUP BY idForSomething
FOR XML PATH('listofsomethings')

Added XML Body:

<listofsomethings @sum="10">
    <innerlist @inner="..." @inner2="..." />
    <innerlist @inner="..." @inner2="..." />
    <innerlist @inner="..." @inner2="..." />
</listofsomethings>

I will look around online again, but I'm asking for the syntax that tells SQL Server NOT to use the idForSomething column in the final output. I thought it was something like NOOUTPUT, but I can't remember, and that doesn't work.
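
A sketch for comparison: the outer column only needs to exist in the GROUP BY and the correlated WHERE, not in the SELECT list, so simply omitting it from the SELECT keeps it out of the XML without any special keyword.

SELECT
    SUM(allSomething) AS [@sum],
    (SELECT innerSomething AS [@inner], innerSomething2 AS [@inner2]
     FROM   someTable s2
     WHERE  s2.innerSomething = s1.idForSomething   -- still correlated on the grouped column
     FOR XML PATH('innerlist'), TYPE)
FROM  someTable s1
WHERE idForSomething = 1
GROUP BY idForSomething                             -- grouped, but never selected
FOR XML PATH('listofsomethings')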

Why I don't need to COMMIT in database trigger?

Posted: 25 Jun 2013 11:51 AM PDT

We can't COMMIT/ROLLBACK in DML triggers because the transaction is handled manually after the DML statement. However, database triggers seem to be an exception. For example, suppose there's a database trigger:

CREATE OR REPLACE TRIGGER user_login_as
  AFTER LOGON
  ON SCHEMA
BEGIN
  INSERT INTO user_login_log(username, log_date, action)
  VALUES (user, sysdate, 'User has logged in');
END user_login_as;

The trigger does not contain an autonomous transaction procedure with a commit inside it, so who is committing the insert? This trigger works like a charm and inserts a new record into the log table after user logon. It smells like hidden Oracle functionality, and I can't find any reference to it in the Oracle docs. I'm using Oracle 11g.

Inserting query result to another table hangs on "Copying to temp table on disk" on MySQL

Posted: 25 Jun 2013 01:10 PM PDT

I started the process of inserting returned results into another table. The query groups rows by indexed IDs, which reduces 149,000,000 rows down to 460,000 rows.

The query includes 3 INNER JOINs, with each of those tables having about 20,000,000 rows.

For further information: the process completes in about 12 seconds for a test file that has 1,000 input rows and returns 703 rows.

I started the query earlier ### we don't know when earlier is ###, but it is still running in the state "Copying to tmp table on disk" after 38,000 seconds (10 and a half hours).

I think there is a problem during the insertion process. What am I probably doing wrong here? If it helps, the computer's operating system is Windows 7, and it has 3 GB RAM and an Intel Core 2 Duo 2.27 GHz processor. ### you forgot to tell us details on the hard drive. One partition in, one out, same disk, same partitions, etc ###

Here's my query as it currently reads:

INSERT INTO kdd.contents
            (adid,
             descriptionwords,
             purchasedkeywordwords,
             titlewords)
SELECT t.adid,
       dt.tokensid,
       pkt.tokensid,
       tt.tokensid
FROM   kdd.training t
       INNER JOIN kdd.purchasedkeywordid_tokensid pkt
               ON t.keywordid = pkt.purchasedkeywordid
       INNER JOIN kdd.titleid_tokensid tt
               ON t.titleid = tt.titleid
       INNER JOIN kdd.descriptionid_tokensid dt
               ON t.descriptionid = dt.descriptionid
GROUP  BY adid;
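
For reference, two session variables decide when the implicit temporary table created for the GROUP BY spills from memory into the on-disk state shown above; the smaller of the two applies, and a temporary table also goes straight to disk if it contains TEXT/BLOB columns. The values below are only an example to experiment with, not a recommendation for a 3 GB machine:

SET SESSION tmp_table_size      = 256 * 1024 * 1024;  -- in-memory temp table limit
SET SESSION max_heap_table_size = 256 * 1024 * 1024;  -- must be raised together with tmp_table_size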

Primary key type change not reflected in foreign keys with MySQL Workbench

Posted: 25 Jun 2013 03:18 PM PDT

I have a problem with MySQL Workbench and primary/foreign keys.

I have some tables with PKs involved in relationships with other tables. If I modify the type of a PK, the type of the corresponding FK doesn't automatically update to reflect the change.

Is there any solution? Do I have to manually modify all the relations?
