Friday, June 14, 2013

[how to] the value of undo_sql in flashback_transaction_query view is always null

the value of undo_sql in flashback_transaction_query view is always null

Posted: 14 Jun 2013 09:07 PM PDT

So I have a table defined as follows:

create table test.test as (select * from dual);

and ran the following update statements:

update test.test set dummy = '1' where dummy = 'X';
commit;

I want to find the undo_sql for the transaction above, so I ran the following statement to find the transaction id

SQL> select dummy, versions_xid, versions_startscn, versions_endscn
       from test.test versions between scn minvalue and maxvalue;

D VERSIONS_XID     VERSIONS_STARTSCN VERSIONS_ENDSCN
- ---------------- ----------------- ---------------
1 0A000500B00A0000           2802291
X                                            2802291

and ran the following statement to find the undo_sql statement

SQL> select undo_sql from flashback_transaction_query
      where xid = (select versions_xid from test.test versions
                   between scn minvalue and maxvalue
                   where versions_xid is not null);

UNDO_SQL
--------------------------------------------------------------------------------

As you can see, I got nothing. If I run the following statement, I also get nothing:

SQL> select undo_sql from flashback_transaction_query where undo_sql is not null;

no rows selected

It looks like undo_sql is always null.

Is there anything I didn't configure? Here are the values from v$system_parameter:

SQL> select name, value from v$system_parameter where name like '%undo%';

NAME                 VALUE
----------------     ----------------
undo_management      AUTO
undo_tablespace      UNDOTBS1
undo_retention       18000
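For what it's worth, one common cause worth checking (an assumption to verify, not something visible in the output above) is that UNDO_SQL is only populated when minimal supplemental logging is enabled on the database:

```sql
-- Assumption: supplemental logging was never enabled on this database.
-- FLASHBACK_TRANSACTION_QUERY.UNDO_SQL stays NULL without it.
SELECT supplemental_log_data_min FROM v$database;  -- 'NO' would explain the NULLs

-- Enable minimal supplemental logging, then re-run the transaction and retry:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
```

Note that only transactions executed after supplemental logging was enabled will have their undo SQL recorded.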

Does SQL Server check disk space before allocating a new page?

Posted: 14 Jun 2013 06:27 PM PDT

I have a bet with my old boss. I bet her that SQL Server, when allocating a new extent, always allocates from the buffer pool and never checks whether there is some place on disk where the allocation could be stored. Essentially, she contends that SQL Server should check for available space on the LUN before allocating a page. This seems wrong, since I could place my storage on the moon, which would cause some serious latency. I feel that she really wants SQL Server to bring in a page from disk first and then perform the DML task(s).

Here is my "proof" that she is wrong. If you disagree with my "proof", then please most definitely respond with a better one!

Let us create a trivial database and table. The database's recovery model will be set to SIMPLE and AUTO_CREATE_STATISTICS will be shut off, in order to minimize log record bloat.

Before we begin, let me divulge the version of SQL Server that I am using.

    SELECT @@VERSION;
    -------------------------------------------------------------------------------------
    Microsoft SQL Server 2012 - 11.0.2100.60 (X64)
    Developer Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1)

Now, the code...

    USE master;
    GO

    IF DATABASEPROPERTYEX(N'PageAllocDemo', N'Version') > 0
    BEGIN
        ALTER DATABASE PageAllocDemo SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        DROP DATABASE PageAllocDemo;
    END;
    GO

    CREATE DATABASE PageAllocDemo;
    GO

    USE PageAllocDemo;
    GO
    SET NOCOUNT ON;
    GO

    -- Set the database to SIMPLE and turn off log generating crapola
    ALTER DATABASE PageAllocDemo SET RECOVERY SIMPLE;
    GO
    ALTER DATABASE PageAllocDemo SET AUTO_CREATE_STATISTICS OFF;
    GO

    CREATE TABLE dbo.X
    (
        c1 INT IDENTITY (1,1)
    ) ON [PRIMARY];
    GO

Now, let us check how many pages were allocated. I suspect zero, since we have only created a "logical" table, in our case an empty heap.

    -- How many pages are allocated to our table?
    DBCC IND (PageAllocDemo, X, -1);
    GO

Now, clear the log.

    -- Clear the log
    CHECKPOINT;
    GO

What is currently in the log?

    -- What is in the log right now?
    SELECT * FROM fn_dblog(NULL, NULL);
    GO

    /*
    ---------------------------------------
    -- Operation -------------- Context ---
    ---------------------------------------
    LOP_BEGIN_CKPT      LCX_NULL
    LOP_XACT_CKPT       LCX_BOOT_PAGE_CKPT
    LOP_END_CKPT        LCX_NULL
    */

This is expected, since we are in the SIMPLE recovery model. We will now create an explicit transaction that will insert one and only one record into our table; but before we do that, let us open Process Monitor and filter for our MDF and LDF files as well as the PID of the SQL Server process.


Begin the transaction:

    BEGIN TRAN
        INSERT INTO dbo.X DEFAULT VALUES;
    GO

Process Monitor shows two writes to the transaction log file.

Let us check the log records.

    -- What is in the log right now?
    SELECT * FROM fn_dblog(NULL, NULL);

    /*
    I omitted all the log records for PFS, GAM, SGAM, etc.
    ---------------------------------------------------------------
    -- Operation -------------- Context ------- Transaction ID  ---
    ---------------------------------------------------------------
    LOP_BEGIN_XACT      LCX_NULL                0000:0000030e
    LOP_BEGIN_XACT      LCX_NULL                0000:0000030f
    LOP_FORMAT_PAGE     LCX_HEAP                0000:0000030f
    LOP_COMMIT_XACT     LCX_NULL                0000:0000030f
    LOP_INSERT_ROWS     LCX_HEAP                0000:0000030e
    LOP_COMMIT_XACT     LCX_NULL                0000:0000030e
    */

Omitting the bitmap and PFS allocations, we can see that a page is allocated and one row is inserted, as one would anticipate.

How many pages are allocated to our heap?

    -- How many pages are allocated to our table?
    DBCC IND (PageAllocDemo, X, -1);
    GO

    /*
    One IAM page and one data page and nothing else
    ---------------------------------
    PageFID PagePID     IAMFID IAMPID
    ------- ----------- ------ ------
    1       264         NULL   NULL
    1       231         1      264
    */

This is as anticipated. We have one IAM page and one data page. Now, our penultimate action is committing the transaction. I expect a 512B log block flush to occur at this point.

  COMMIT TRAN;  


Let us finish the "proof" with a checkpoint operation. So far, nothing has been written to the data file, only to the log file.

    CHECKPOINT;
    GO


Cool, the data pages flushed to disk as expected.

My conclusion from the Process Monitor evidence is that SQL Server allocates the page in memory, adds the record in memory, and writes the page to disk, all without checking anything at the storage level.

Does anyone object to this hypothesis? If yes, why?

Contact system database design

Posted: 14 Jun 2013 03:31 PM PDT

I'm designing a database for contacts and I'm looking for some feedback on the current design.

What I want:

  • A contact can be a person or an organization.
  • A contact can have many groups.
  • A contact can have many emails and addresses.
  • A contact can have one organization (employed)
  • An organization can have many contacts (employees)
  • A group can have many contacts.

What I have:

  • Contact(Id, PersonId (FK), OrganizationId (FK), EmployerId (FK, self reference))
  • Person(Id, FirstName, LastName, Birthday)
  • Organization(Id, Name, Founded)
  • Group(Id, Name)
  • ContactGroup(Id, ContactId (FK), GroupId (FK))
  • Email(Id, ContactId (FK), Name)
  • Address(Id, ContactId (FK), Street, City, ZipCode, Country)

What bothers me most is that the contact entity has either one organization or one person, plus a self-reference (EmployerId) for when an organization has employees (which are also contact entities) and vice versa.


Is this a problem, or do you see a better way?
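One common alternative worth considering (a sketch only; column types and names beyond the question's are my own) is the supertype/subtype pattern: Contact is the supertype, Person and Organization are subtypes keyed by the contact id, and employment becomes a plain self-reference on Contact. Email, Address, and ContactGroup keep referencing Contact.Id unchanged:

```sql
CREATE TABLE Contact (
    Id          INT PRIMARY KEY,
    ContactType CHAR(1) NOT NULL,                 -- 'P' = person, 'O' = organization
    EmployerId  INT NULL REFERENCES Contact (Id)  -- employing organization, if any
);

CREATE TABLE Person (
    ContactId INT PRIMARY KEY REFERENCES Contact (Id),
    FirstName VARCHAR(50),
    LastName  VARCHAR(50),
    Birthday  DATE
);

CREATE TABLE Organization (
    ContactId INT PRIMARY KEY REFERENCES Contact (Id),
    Name      VARCHAR(100),
    Founded   DATE
);
```

This removes the either/or pair of nullable FKs from Contact; each contact row is exactly one subtype, indicated by ContactType.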

How to select a row on one distinct attribute?

Posted: 14 Jun 2013 02:16 PM PDT

I want to select * from A, but only where A.attr1 is distinct. How can I do this?
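Assuming "distinct" means one row per attr1 value (my reading of the question; the id and attr1 column names are assumptions), one portable approach is to join back to a representative row per value, for example the one with the smallest id:

```sql
-- Keep exactly one row for each distinct attr1 value:
-- the row with the lowest id in its group.
SELECT a.*
FROM A AS a
JOIN (SELECT attr1, MIN(id) AS min_id
      FROM A
      GROUP BY attr1) AS d
  ON a.id = d.min_id;
```

On engines with window functions, `ROW_NUMBER() OVER (PARTITION BY attr1 ORDER BY id) = 1` expresses the same idea.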

Errors keeping stored procedures through restore from backup

Posted: 14 Jun 2013 06:22 PM PDT

We are running SQL Server 2008 R2 on two identical servers. One, called LIVE, is our production environment. The other, called QA, is our dev environment. In order to check backup integrity and run integrity checks, and also so that our dev environment has reasonably fresh data, we restore LIVE's database backups on QA daily and then run integrity checks.

It all works well but for one issue: we need QA to maintain its own versions of all the stored procedures. We develop new SPs and modify existing SPs on QA, and test them there. Only after they have been tested do we push them over to LIVE. In order to retain QA's versions on QA, what I have been doing is scripting the CREATE PROCEDURE commands, which are saved to files. After the databases are restored, all the procedures are dropped. The CREATE PROCEDURE scripts are then executed, recreating all the databases' SPs as they were on QA prior to the restore from backup. For a time this worked well, although the occasional disappearance of an SP raised eyebrows. However, this week the process has not worked: several SPs disappeared each day, escalating to 20 disappearances yesterday and, ugh, over 100 this morning.

A look through the job history log indicates the nature of the problem:

Msg 102, Level 15, State 1, Server QA, Procedure pln_tradepaper, Line 196
Incorrect syntax near 'GO'.

Msg 102, Level 15, State 1, Server QA, Procedure pln_tradepaper, Line 230
Incorrect syntax near 'END'.

Msg 102, Level 15, State 1, Server QA, Procedure pln_tradepaper_type, Line 187
Incorrect syntax near 'GO'.

Msg 2714, Level 16, State 1, Server QA, Procedure pln_tradepaper_type, Line 199
There is already an object named '#temp1' in the database.

Msg 137, Level 15, State 2, Server QA, Procedure pln_tradepaper_type, Line 201
Must declare the scalar variable "@group".

Msg 102, Level 15, State 1, Server QA, Procedure pln_tradepaper_type, Line 211
Incorrect syntax near 'END'.

Msg 102, Level 15, State 1, Server QA, Procedure msg_upsert_asset, Line 46
Incorrect syntax near 'GO'.

Msg 137, Level 15, State 2, Server QA, Procedure msg_upsert_asset, Line 64
Must declare the scalar variable "@client_id".

And so forth, up to the limit for the log's message. I have run the command that generates the CREATE script for this particular database, and there are no syntax issues in it of any kind. Starting tomorrow and until this is resolved, I will be retaining the scripts as they are generated; unfortunately, they have been deleted after running up to this point, so I cannot rule out the possibility that the script did have syntax errors.

The command that generates the CREATE script is this:

bcp "SELECT definition + char(13) + 'GO' FROM db1.sys.sql_modules s INNER JOIN db1.sys.procedures p ON [s].[object_id] = [p].[object_id]" queryout "D:\scripts\SP_db1.sql" -S qa -U "sa" -P "password1" -t -w  

I have only two ideas as to what might be the issue. My leading suspicion is that bcp has a character limit on fields, which causes db1.sys.sql_modules.definition to be truncated. This would explain why the errors started with Incorrect syntax near 'GO' (the previous module's definition was truncated before its END, so we had a GO without the BEGIN being closed off; the other errors cascaded from there). MAX(LEN(definition)) for this database is 15134. However, the data type of db1.sys.sql_modules.definition is nvarchar(max), which as I read the documentation has no default limit.

The only other idea is that somehow I am not calling bcp correctly. Since I'm only selecting a single field, I turned the default \t field separator off by using -t with no argument. The default row separator \n should be fine. Perhaps I should not be specifying Unicode with -w, or something else is missing.

If it makes any difference, the CREATE scripts are executed with

sqlcmd -S QA -U "sa" -P "password1" -d db1 -i "D:\scripts\SP_db1.sql"  

I would appreciate any ideas anyone could provide as to why this issue is occurring. I am not opposed to a different approach to persisting SPs across restores; I haven't been able to shake the feeling that this is something of a Rube Goldberg solution.

How to group by when querying duplicate records?

Posted: 14 Jun 2013 12:51 PM PDT

I have a table with duplicates and I want to present it to the stakeholders so they can help me determine the source of duplication. Currently I can print 1 row for each duplicate entry, but I want to show all attributes for each. Here's the query I'm using:

select *, count(*), max(id) from foos group by name having count(*) > 1 order by name;  

I'm not sure how to alter the query to get every record whose name is shared with another record. Thanks for any guidance.
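One way to get the full rows, rather than one collapsed row per name as the GROUP BY query above does, is to filter on names that occur more than once (only the name and id columns are taken from the question; the rest of `*` is whatever foos contains):

```sql
-- Return every row whose name appears on at least one other row.
SELECT f.*
FROM foos AS f
WHERE f.name IN (SELECT name
                 FROM foos
                 GROUP BY name
                 HAVING COUNT(*) > 1)
ORDER BY f.name, f.id;
```

Sorting by name keeps each duplicate cluster together, which makes the output easy to walk through with stakeholders.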

[Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified

Posted: 14 Jun 2013 12:20 PM PDT

I am trying to open a program for the first time on Windows XP Pro that uses PostgreSQL 9. I'm getting an error message that says :

A problem was encountered while trying to log into or create the production database. Details: [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified

In my ODBC manager, I have a list of User DSN's and System DSN's. I tried installing a postgres odbc driver to see if that would help, but it didn't.

There is a connect.dat file in the program file with a line saying "OLE DB Provider = MSDASQL". Changing this entry alters the error message I get to "Provider cannot be found, it may not be properly installed".

I don't know what provider name to insert to get this to work properly. I have done extensive research on this error to no avail. Any suggestions would be greatly appreciated.

How do these snapshot isolation level configurations interact on a SQL Server 2005 instance?

Posted: 14 Jun 2013 01:49 PM PDT

I started troubleshooting a tempdb issue we are having on SQL Server 2005 Enterprise Edition. A developer is getting an error that tempdb is out of space. Technically, the error is:

Transaction aborted when accessing versioned row in table 'dbo.inserted' in database 'dbname'. Requested versioned row was not found. Your tempdb is probably out of space. Please refer to BOL on how to configure tempdb for versioning

I looked at the database configuration in sys.databases and found the following settings:

snapshot_isolation_state: 0
snapshot_isolation_state_desc: OFF
is_read_committed_snapshot_on: 1

I looked up what this meant in BOL and the salient information is as follows:

snapshot_isolation_state

State of snapshot-isolation transactions being allowed, as set by the ALLOW_SNAPSHOT_ISOLATION option:

0 = Snapshot isolation state is OFF (default). Snapshot isolation is disallowed.

1 = Snapshot isolation state ON. Snapshot isolation is allowed.

2 = Snapshot isolation state is in transition to OFF state. All transactions have their modifications versioned. Cannot start new transactions using snapshot isolation. The database remains in the transition to OFF state until all transactions that were active when ALTER DATABASE was run can be completed.

3 = Snapshot isolation state is in transition to ON state. New transactions have their modifications versioned. Transactions cannot use snapshot isolation until the snapshot isolation state becomes 1 (ON). The database remains in the transition to ON state until all update transactions that were active when ALTER DATABASE was run can be completed.

snapshot_isolation_state_desc

Description of state of snapshot-isolation transactions being allowed, as set by the ALLOW_SNAPSHOT_ISOLATION option:

  • OFF
  • ON
  • IN_TRANSITION_TO_ON
  • IN_TRANSITION_TO_OFF

is_read_committed_snapshot_on

1 = READ_COMMITTED_SNAPSHOT option is ON. Read operations under the read-committed isolation level are based on snapshot scans and do not acquire locks.

0 = READ_COMMITTED_SNAPSHOT option is OFF (default). Read operations under the read-committed isolation level use share locks.

If I'm understanding this correctly, snapshot_isolation_state is OFF, so snapshot isolation is disallowed. However, is_read_committed_snapshot_on = 1 indicates that the database is using snapshot-based reads. How is this possible if snapshot isolation is disallowed?
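As I read the documentation quoted above, the two options are independent, which would explain this combination: the database can disallow the SNAPSHOT isolation level while still versioning rows for READ COMMITTED. A sketch, with dbname as a placeholder:

```sql
-- ALLOW_SNAPSHOT_ISOLATION controls only whether sessions may request
-- SET TRANSACTION ISOLATION LEVEL SNAPSHOT.
ALTER DATABASE dbname SET ALLOW_SNAPSHOT_ISOLATION OFF;

-- READ_COMMITTED_SNAPSHOT makes the default READ COMMITTED level use
-- row versioning (in tempdb's version store) instead of shared locks.
ALTER DATABASE dbname SET READ_COMMITTED_SNAPSHOT ON;
```

Both features draw on the tempdb version store, which is consistent with the "tempdb is probably out of space" error even with snapshot isolation itself disallowed.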

related column contents

Posted: 14 Jun 2013 11:18 AM PDT

I've designed a MySQL database made of several independent/unrelated tables to use in my office. Different tables hold information about clients (names, addresses, etc.), visits to the office (dates, reason, etc.), or other information like comments (related to a specific visit). When a known client enters the office I can retrieve all the relevant data, insert the new visit data, and possibly comments about the visit.

I need to store visits in a specific order (1st visit 1st, etc.). Sometimes I forget to store a visit and have to enter the data later. The order of the visits gets messed up, and I manually correct it (with UPDATE commands) by changing the visits' auto-incrementing primary key number (which represents the order). This way I get the right order of visits, but the comments (related to the changed primary key) can no longer be retrieved.

Is there any way to make sure that changing the primary key makes the necessary changes to the rest of the database, so that everything works as before the changes? Do relational 1:many or other implementations accomplish that? And how? Please note that I'm an amateur, self-taught "database designer".
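A sketch of the relational approach (all table and column names are my own assumptions): if comments reference visits through a foreign key declared with ON UPDATE CASCADE, manually renumbering a visit's id propagates to its comments automatically. Both tables must be InnoDB for the constraint to be enforced:

```sql
CREATE TABLE visits (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    client_id  INT NOT NULL,
    visit_date DATE NOT NULL
) ENGINE=InnoDB;

CREATE TABLE comments (
    id       INT AUTO_INCREMENT PRIMARY KEY,
    visit_id INT NOT NULL,
    body     TEXT,
    FOREIGN KEY (visit_id) REFERENCES visits (id)
        ON UPDATE CASCADE   -- renumbering a visit updates its comments' visit_id
        ON DELETE CASCADE   -- deleting a visit removes its comments
) ENGINE=InnoDB;
```

That said, an arguably cleaner fix is to stop encoding order in the key at all: keep the auto-increment id stable and sort by a visit_date column instead, so late entries never require renumbering.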

Constraint based rules engine

Posted: 14 Jun 2013 09:54 AM PDT

I am evaluating software systems which configure products that tout constraint based versus rules based engines. Can someone explain (with maybe an example or two) to a non-expert what the difference is? My guess is that a constraint based engine is rules defining relationships between objects, and rules based engines are more like the WHERE clause of a SQL language query.

How to drop inheritance in Postgresql

Posted: 14 Jun 2013 06:10 PM PDT

I am new to PostgreSQL. The situation is that someone created a child table inheriting from the parent table, and then dropped the child table. But the parent table still has the down-arrow sign on it. I checked, and there's no other link/relation on the parent table. Is this really an issue? I was told that the parent table is still in a 'being inherited' state and causing a performance issue. How do I resolve this by removing the 'being inherited' status from the parent table?
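One way to check whether anything still inherits from the parent is to query the pg_inherits catalog directly (table names here are placeholders); a dropped child should no longer appear:

```sql
-- List any tables still inheriting from the parent:
SELECT c.relname AS child
FROM pg_inherits i
JOIN pg_class c ON c.oid = i.inhrelid
WHERE i.inhparent = 'parent_table'::regclass;

-- If a child still exists and should be detached, do it explicitly:
ALTER TABLE child_table NO INHERIT parent_table;
```

If the catalog query returns no rows, the down-arrow icon is most likely just a stale display in the GUI tool rather than a real state on the table.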

Transaction Log maintenance when switching to Simple Recovery

Posted: 14 Jun 2013 08:56 AM PDT

Background:

I recently inherited 50+ SQL Servers with 450+ databases. The nightly backups are roughly 8 TB and, needless to say, we're using more disk space than we'd like. All of the databases are set to FULL recovery and the transaction logs have never been backed up. I've gone through all of the SQL Servers and identified low-priority ones that only need a nightly backup and where a day of data loss is acceptable.

Question:

I'm switching a lot of low priority databases to SIMPLE recovery mode from FULL. Will the existing transaction logs be truncated (when checkpoints are created)? Some of the existing transaction logs are 50-100GBs; what is the best approach in determining what I should shrink them down to for the purposes of moving forward? I obviously don't want to keep them that large. Or, will they shrink on their own over time (I don't think they will)?
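As I understand it, after the switch to SIMPLE the log is truncated at checkpoints, but the log file itself never shrinks on its own, so a one-time shrink per database is needed. A sketch (database name, logical log file name, and target size are placeholders):

```sql
ALTER DATABASE LowPriorityDb SET RECOVERY SIMPLE;
GO
USE LowPriorityDb;
GO
-- Shrink the log file once to a reasonable working size (here 1 GB).
-- Size it to the largest transaction or index maintenance the database
-- actually runs, so it does not immediately have to grow again.
DBCC SHRINKFILE (LowPriorityDb_log, 1024);
GO
```

The logical file name for the shrink can be found in sys.database_files for each database.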

Stored procedures vs. inline SQL

Posted: 14 Jun 2013 10:58 AM PDT

I know stored procedures are more efficient through the execution path than inline SQL in applications. However, when pressed, I'm not super knowledgeable about why.

I'd like to know the technical reasoning for this (in a way that I can explain it to someone later).

Can anyone help me formulate a good answer?

How to find memory usage by table?

Posted: 14 Jun 2013 10:05 AM PDT

I want to discover how much memory is being used by each table. Is this information available in DMVs or elsewhere?
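For SQL Server, one commonly used sketch joins the buffer descriptors DMV to allocation units (run it in the target database; note that LOB allocation units map through partition_id rather than hobt_id, which this simplified version glosses over):

```sql
-- Buffer pool pages currently cached, grouped by table.
SELECT
    OBJECT_NAME(p.object_id) AS table_name,
    COUNT(*)                 AS cached_pages,
    COUNT(*) * 8 / 1024      AS cached_mb     -- pages are 8 KB
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
  ON bd.allocation_unit_id = au.allocation_unit_id
JOIN sys.partitions AS p
  ON au.container_id = p.hobt_id
WHERE bd.database_id = DB_ID()
GROUP BY p.object_id
ORDER BY cached_pages DESC;
```

This shows what is in memory right now, so the numbers shift as the workload touches different tables.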

Does the size of the primary key contribute to table size?

Posted: 14 Jun 2013 07:46 PM PDT

I have a table (InnoDB) with data length around 36G and index length 23G.

It had a composite primary key across three columns.

As an initial attempt to reduce the size of this table, I dropped the primary key (wasn't actually needed anyway). Yet, the data length and index length are the same.

Is this to be expected?
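Quite possibly, yes. In my understanding, InnoDB's data length and index length figures are estimates, and dropping a key does not rebuild the tablespace in place, so the change may only show up after a rebuild (table name is a placeholder):

```sql
-- Rebuild the table and refresh statistics to see the real effect:
OPTIMIZE TABLE mytable;  -- for InnoDB this rebuilds the table
ANALYZE TABLE mytable;   -- refresh the estimated data/index lengths
```

One caveat worth knowing: without an explicit primary key, InnoDB clusters rows on a hidden 6-byte row id, so the savings from dropping the composite key can be smaller than expected.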

Weird SQL error

Posted: 14 Jun 2013 09:15 AM PDT

I am seeing the error logs below on my SQL Server; I'm not sure if they are related to trace flag 3604. I also see something to do with a memory dump as well. Please advise.

 15:15:53.81 spid594   m_pageId = (1:1172094)    m_headerVersion = 1       m_type = 2
 15:15:53.81 spid594   m_typeFlagBits = 0x0      m_level = 0               m_flagBits = 0x0
 15:15:53.81 spid594   m_objId = 954538534       m_indexId = 6             m_prevPage = (1:1102845)
 15:15:53.81 spid594   m_nextPage = (1:1172095)  pminlen = 21              m_slotCnt = 212
 15:15:53.81 spid594   m_freeCnt = 2584          m_freeData = 5880         m_reservedCnt = 0
 15:15:53.81 spid594   m_lsn = (119582:10616:47) m_xactReserved = 0        m_xdesId = (0:753243376)
 15:15:53.81 spid594   m_ghostRecCnt = 0         m_tornBits = -2147483127

 Allocation Status
 -----------------
 15:15:53.81 spid594   GAM (1:1022464) = ALLOCATED
 15:15:53.81 spid594   SGAM (1:1022465) = NOT ALLOCATED
 15:15:53.81 spid594   PFS (1:1164672) = 0x40 ALLOCATED   0_PCT_FULL       DIFF (1:1022470) = CHANGED
 15:15:53.81 spid594   ML (1:1022471) = NOT MIN_LOGGED

 DATA:
 -----

 Memory Dump @0x3CFF0000
 -----------------------
 15:15:53.81 spid594   3CFF0000:  01020000 00000600 fdd31000 01001500 7fe21100  ....................
 15:15:53.81 spid594   3CFF0014:  0100d400 261ae538 180af816 7ee21100 01000000  ....&..8....~.......
 15:15:53.81 spid594   3CFF0028:  1ed30100 78290000 2f000000 f094e52c 00000000  ....x)../......,....
 15:15:53.81 spid594   3CFF003C:  09020080 00000000 00000000 00000000 00000000  ....................
 15:15:53.81 spid594   3CFF0050:  00000000 00000000 00000000 00000000 1a2c0d00  .................,..
 15:15:53.83 spid594   3CFF0064:  009c0300 00e30300 0097f79a 10000000 00040000  ....................

Auto-increment key and Foreign Key as composite for relational database

Posted: 14 Jun 2013 02:22 PM PDT

I do not have any experience in databases other than what I have from just playing with workbench and trying to understand how I need to setup this database for a project. I have been searching the internet and I have become increasingly confused as to how I will do what I want to do. Below is what I am trying to accomplish.

I have a need for a MySQL database that will hold patient information. Patient information will be name, id (unique), and so on as well as all of the tests and samples for each test.

  • Each patient can have multiple tests
  • Each test can have multiple samples
  • If I delete a patient then all test and samples are deleted
  • If I delete a test then all samples will be deleted
  • If I delete all samples of one test OR all tests of one patient then the test OR patient should not be deleted.

So far I understand a relational database is what I need. I also understand that I can use foreign keys to link the patient id's for each table as well as the test id for each table. I have also come to the conclusion that I would like data integrity so that I don't have to manually manage deleting and updating the rows. This would also not allow for orphaned rows. I believe this can be done with a composite key but this is where I start getting confused on how to set it up.

I would expect it to work like this (with each line being an insert statement):

Patient Table:

|-------------------|
|Patient ID  |Name  |
|-------------------|
|12345       |ANG   |
|54321       |JUE   |
|-------------------|

Tests Table:

|----------------------|
|Test ID  |Patient ID  |
|----------------------|
|1        |12345       |
|2        |12345       |
|3        |12345       |
|1        |54321       |
|2        |54321       |
|4        |12345       |
|----------------------|

Sample Table:

|----------------------------------|
|Sample ID  |Test ID  |Patient ID  |
|----------------------------------|
|1          |1        |12345       |
|2          |1        |12345       |
|3          |1        |12345       |
|1          |2        |12345       |
|1          |1        |54321       |
|1          |2        |54321       |
|2          |2        |54321       |
|2          |2        |12345       |
|----------------------------------|

Can this be done easily? I also know that this can be accomplished with one or two triggers, but I understood that triggers would not handle deletions and the like. Is there another way that is easier to implement? Also, data written to and read from this database is handled by my LabVIEW program.
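A sketch of the cascading design described above (MySQL/InnoDB; names and types are illustrative). Composite primary keys match the sample data, where test ids restart per patient and sample ids restart per test, and ON DELETE CASCADE gives exactly the deletion behavior in the bullet list:

```sql
CREATE TABLE patient (
    patient_id INT PRIMARY KEY,
    name       VARCHAR(50)
) ENGINE=InnoDB;

CREATE TABLE test (
    test_id    INT,
    patient_id INT,
    PRIMARY KEY (patient_id, test_id),
    FOREIGN KEY (patient_id) REFERENCES patient (patient_id)
        ON DELETE CASCADE          -- deleting a patient removes their tests
) ENGINE=InnoDB;

CREATE TABLE sample (
    sample_id  INT,
    test_id    INT,
    patient_id INT,
    PRIMARY KEY (patient_id, test_id, sample_id),
    FOREIGN KEY (patient_id, test_id) REFERENCES test (patient_id, test_id)
        ON DELETE CASCADE          -- deleting a test removes its samples
) ENGINE=InnoDB;
```

Deleting a patient cascades to tests, and from tests to samples; deleting all samples of a test leaves the test row itself untouched, as required. No triggers are needed for this.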

Automated query log

Posted: 14 Jun 2013 09:03 AM PDT

I would like to ask for some opinions, recommendations, or guidelines on how to create an automatic logging system for the executed queries on a MySQL database.

Some details:

  • it's a PHP + MySQL project
  • multiple users can use the database, based upon a user table and an authentication system made in PHP (they share one MySQL user)
  • so far some query logging is implemented; it is not very efficient because every single logging procedure is hard-coded in PHP
  • I am looking for a way to create a general procedure, to which I can pass the user id, and the rest is done automatically
  • I need to store the previous value, new value, column name, table name, user name, query type, and time stamp

I am familiar with TRIGGERS and STORED PROCEDURES, but can't really figure out if that's the way, and if yes, how (I don't need code, just pure logic, although a usable minimalistic example would be welcome).

Thank you.

Edit: Wow, is this question that hard?
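A minimal sketch of the trigger idea (all names are illustrative): an AFTER UPDATE trigger writes old and new values into an audit table, and since all clients share one MySQL user, the application user id is passed in through a session variable set by PHP once per connection:

```sql
CREATE TABLE audit_log (
    id          INT AUTO_INCREMENT PRIMARY KEY,
    app_user    INT,
    table_name  VARCHAR(64),
    column_name VARCHAR(64),
    old_value   TEXT,
    new_value   TEXT,
    changed_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

DELIMITER //
CREATE TRIGGER clients_au AFTER UPDATE ON clients
FOR EACH ROW
BEGIN
    -- <=> is NULL-safe equality; log only when the column actually changed.
    IF NOT (OLD.name <=> NEW.name) THEN
        INSERT INTO audit_log (app_user, table_name, column_name, old_value, new_value)
        VALUES (@app_user_id, 'clients', 'name', OLD.name, NEW.name);
    END IF;
END//
DELIMITER ;

-- In PHP, once per connection after authentication:
-- SET @app_user_id = <logged-in user's id>;
```

The drawback is one IF block per audited column and one trigger per table/operation, but the logging itself is no longer scattered through the PHP code.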

MySQL HDD activity with empty process list

Posted: 14 Jun 2013 11:29 AM PDT

What does this mean, and how can I understand what MySQL is doing? I use InnoDB and my database is huge. This happens when I disconnect my app.

Thanx.

Why don't databases create their own indexes automatically?

Posted: 14 Jun 2013 01:03 PM PDT

I would have thought that databases would know enough about what they encounter often and be able to respond to the demands they're placed under that they could decide to add indexes to highly requested data.

Is using SUM() twice suboptimal?

Posted: 14 Jun 2013 01:05 PM PDT

I know I have to write SUM twice, if I wish to use it in a HAVING clause (or use a derived table otherwise):

SELECT id,
       sum(hours) AS totalhours
FROM mytable
GROUP BY id
HAVING sum(hours) > 50;

My question now is, whether or not this is suboptimal. As a programmer, this query looks like the DB will calculate the sum twice. Is that so, or should I rely on optimizations the DB engine will do for me?

Update: an explain of a comparable query:

postgres=> explain select sum(counttodo) from orderline
           group by orderlineid having sum(counttodo) > 100;
                             QUERY PLAN
--------------------------------------------------------------------
 HashAggregate  (cost=1.31..1.54 rows=18 width=8)
   Filter: (sum(counttodo) > 100)
   ->  Seq Scan on orderline  (cost=0.00..1.18 rows=18 width=8)
(3 rows)

Oracle RAC 11g R2 DB silent install: parameter needed for system class

Posted: 14 Jun 2013 08:28 PM PDT

I'm installing Oracle RAC DB 11gR2 using response file and silent mode. The installation is failing with the error message:

[INS-35421] This options installs a single instance database only
You have chosen to perform a Desktop class install on a cluster. This options will not install oracle RAC

Which parameter is needed for a server-class install? I tried changing the value for oracle.install.db.config.starterdb.control from DB_CONTROL to GRID_CONTROL, but I still get the same warning.

Getting "BadImageFormatException" when attempting to connect through Visual Studio to Oracle

Posted: 14 Jun 2013 02:28 PM PDT

I get the following message: "Attempt to load Oracle client libraries threw BadImageFormatException. This problem will occur when running in 64 bit mode with the 32 bit Oracle client components installed." when I attempt to create a connection from both Visual Studio 2008 and 2012 to Oracle from my Windows 8 64bit workstation.

I'm not running a web app when this message happens. I'm only trying to establish an Oracle connection with Visual Studio. Any solutions posted on the internet that I have seen resolve this issue for others don't seem to apply to my circumstances.

  • I did not have this issue when I was using Windows 7 64bit.
  • I have set the Active Solution Platform to both Any CPU and x86 when I have a solution open, but I still get the same message with either setting.
  • I have the latest Oracle 32 bit Client since 64 bit Oracle client won't work with 32 bit Visual Studio.
  • At this point, I am only trying to connect to Oracle from Visual Studio and not run a web application. So IIS settings have nothing to do with this issue.

My environment is

  • Windows 8 64bit
  • Visual Studio 2012 and 2008
  • Oracle 32 bit Client. But also tried 64 Bit Client.
  • Oracle 11.2.0.3 database.
  • Active Solution Platform was tried with both Any CPU and x86.

Any suggestions would be appreciated.

SHOW TABLE STATUS very slow on InnoDB

Posted: 14 Jun 2013 12:28 PM PDT

Recently we've been migrating from MyISAM to InnoDB. I understand that MyISAM keeps meta information about each table to answer this kind of query cheaply. However, the database is about 1.8 GB with 1+ million records across 400 or so tables.

The problem comes when software like phpMyAdmin runs "SHOW TABLE STATUS FROM dbase;", which can take up to 50 seconds to execute. Is there a way to optimise this? (MySQL or MariaDB)

Thanks!
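One setting worth testing: as I understand it, older MySQL/MariaDB versions re-sample InnoDB index statistics on every metadata query such as SHOW TABLE STATUS by default, which is slow with hundreds of tables. Whether this is the cause here is an assumption to verify:

```sql
-- Stop InnoDB from recomputing statistics on metadata queries;
-- statistics are then refreshed only by ANALYZE TABLE and normal DML activity.
SET GLOBAL innodb_stats_on_metadata = 0;
```

To make the change survive a restart, set `innodb_stats_on_metadata = 0` in my.cnf as well.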

Difference between database vs user vs schema

Posted: 14 Jun 2013 03:19 PM PDT

I'm really confused with the terms database, user and schema. Can anyone explain how they are different from each other (if they are)?

If they are the same, what are the similarities between them? How do we use them? And how do we create them?

Designing Simple Schema for Disaggregation of Demand Forecast

Posted: 14 Jun 2013 01:28 PM PDT

I am doing a simple database design task as a training exercise where I have to come up with a basic schema design for the following case:

I have a parent-child hierarchy of products (example, Raw Material > Work in Progress > End Product).

  • Orders are placed at each level.
  • Number of orders shall be viewable in weekly buckets for the next 6 months.
  • Demand forecast can be done for each product level.
  • Demand forecast for any week within next 6 months can be done today.
  • Demand forecast is done for weekly buckets, for the next 6 months.

Demand forecast is usually done at a higher level in the hierarchy (Raw Material or Work in Progress level). It has to be disaggregated to a lower level (End Product).

There are 2 ways in which demand forecast can be disaggregated from a higher level to lower level:

  1. User specifies percentage distribution for end product. Say, there's a forecast of 1000 for Work In Progress.. and user says I want 40% for End Product 1 and 60% for End Product 2 in bucket 10.. Then for 10th week (Sunday to Saturday) from now, forecast value for End Product 1 would be 400 and, for End Product 2 would be 600.
  2. The user says: just disaggregate according to the orders placed against the end products in bucket 5. If the orders in bucket 5 for End Products 1 and 2 are 200 and 800 respectively, then the forecast value for EP1 would be ((200/1000) * 100)% and for EP2 would be ((800/1000) * 100)% of the forecast for Work In Progress.
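Method 2 above can be sketched in SQL as follows (table and column names are assumptions based on the ORDERS table described later; the 1000 is the parent-level forecast and the date range stands in for bucket 5):

```sql
-- Sketch: disaggregate a parent forecast of 1000 across end products in
-- proportion to their order counts in a reference bucket. The bucket-5
-- date range below is a placeholder.
SELECT o.prod_id,
       1000 * o.bucket_orders / SUM(o.bucket_orders) OVER () AS disagg_forecast
FROM (
    SELECT prod_id, COUNT(*) AS bucket_orders
    FROM   orders
    WHERE  delivery_date >= DATE '2013-07-07'   -- start of bucket 5
    AND    delivery_date <  DATE '2013-07-14'   -- end of bucket 5
    GROUP BY prod_id
) o;
```

With 200 and 800 orders this yields 200 and 800 as the disaggregated forecasts, matching the percentages in the example.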

Forecast shall be viewable in weekly buckets for the next 6 months and the ideal format should be:

product name | bucket number | week start date | week end date | forecast value | created_on  

PRODUCT_HIERARCHY table could look like this:

id  |   name              |   parent_id
---------------------------------------
1   |   raw material      |   (null)
2   |   work in progress  |   1
3   |   end product 1     |   2
4   |   end product 2     |   2

ORDERS table might look like this:

id | prod_id | order_date | delivery_date | delivered_date  

where,

prod_id is a foreign key referencing id in the PRODUCT_HIERARCHY table.

How should the forecast be stored? What would be a good basic schema for such a requirement?


My idea to select orders for 26 weekly buckets is:

SELECT
    COUNT(*) AS total_orders,
    WIDTH_BUCKET(
        delivery_date,
        SYSDATE,
        ADD_MONTHS(SYSDATE, 6),
        ROUND( (ADD_MONTHS(SYSDATE, 6) - SYSDATE) / 7 )  -- date difference in days / 7 ~ 26 weekly buckets
    ) AS bucket_no
FROM
    orders_table
WHERE
    delivery_date BETWEEN SYSDATE AND ADD_MONTHS(SYSDATE, 6)
GROUP BY
    WIDTH_BUCKET(
        delivery_date,
        SYSDATE,
        ADD_MONTHS(SYSDATE, 6),
        ROUND( (ADD_MONTHS(SYSDATE, 6) - SYSDATE) / 7 )
    );

But this will give weekly buckets starting from today irrespective of the day. How can I convert them to Sunday to Saturday weeks in Oracle?

Please help designing this database structure.

(will be using Oracle 11g)
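For the Sunday-to-Saturday bucketing asked about above, one sketch against the same hypothetical orders_table: Oracle's TRUNC(date, 'IW') truncates to the ISO week start (Monday), so shifting by one day yields Sunday-based weeks.

```sql
-- Sketch: group orders into Sunday-to-Saturday weeks (Oracle 11g).
-- TRUNC(d, 'IW') returns the Monday starting d's ISO week, so
-- TRUNC(d + 1, 'IW') - 1 returns the Sunday starting d's Sun-Sat week.
SELECT TRUNC(delivery_date + 1, 'IW') - 1  AS week_start,   -- Sunday
       TRUNC(delivery_date + 1, 'IW') + 5  AS week_end,     -- Saturday
       COUNT(*)                            AS total_orders
FROM   orders_table
WHERE  delivery_date BETWEEN SYSDATE AND ADD_MONTHS(SYSDATE, 6)
GROUP  BY TRUNC(delivery_date + 1, 'IW')
ORDER  BY week_start;
```

Grouping directly on the truncated date avoids WIDTH_BUCKET entirely and keeps the buckets aligned to calendar weeks rather than to "today".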

SQL Server BPA 2008R2

Posted: 14 Jun 2013 11:28 AM PDT

I have been using SQL Server BPA to get good information from Microsoft. I was using the 2005 version, which let me export the results in CSV format. Recently I got two new servers with SQL Server 2008 R2 installed, and since I know BPA 2005 won't run on those, I chose the R2 version. However, it only saves reports in XML format, not CSV. I tried opening the XML in Excel to convert it; it displays the details, but I can't narrow down the results. Any idea how to convert the results to CSV format?

mysql duplicate entry error 1062 when restoring backup

Posted: 14 Jun 2013 09:28 PM PDT

Sorry, I've seen similar threads, but I still couldn't find one addressing my issue; plus, I need some more info on this.

Requirement: To create an exact replica 'db4' of an existing DB 'db3'.

Procedure followed:

  • mysqldump -uuser -ppass db3 > db3.sql (size is 6G)
  • mysql -uuser -ppass db4 < db3.sql (db4 was a newly created blank database)

The 2nd step throws in the error:

ERROR 1062 (23000) at line 5524: Duplicate entry '600806' for key 1

I ran the 2nd step again with --force. The restore completed but with 2 additional similar errors:

ERROR 1062 (23000) at line 6309: Duplicate entry '187694' for key 1
ERROR 1062 (23000) at line 6572: Duplicate entry '1567400' for key 1

On completion, when I queried certain tables of the db4 database, I noticed that records were missing.

Question:

  1. Does this indicate a corrupted/problematic db3 database?

  2. How to proceed to create a 'consistent/working' replica (db4) of db3?

  3. If (2) fails, how to possibly troubleshoot and find the reason behind why it occurs?

Thanks,
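A sketch toward question 3 (table and column names are placeholders; use the table that the failing dump line belongs to): check whether the source table really contains duplicate key values, which would point to a corrupted unique index in db3 rather than a problem with the restore.

```sql
-- Sketch: look for duplicated primary-key values in the suspect db3 table.
-- 'mytable' and 'id' are placeholders for the table/column named at the
-- failing dump line. Any rows returned suggest index corruption, worth a
-- CHECK TABLE (and, for MyISAM, possibly REPAIR TABLE) pass.
SELECT id, COUNT(*) AS occurrences
FROM   db3.mytable
GROUP  BY id
HAVING COUNT(*) > 1;

CHECK TABLE db3.mytable;
```

If the source index is corrupted, duplicates can exist in the data file while the unique index silently misses them; the dump writes all rows out, and the restore then rightly rejects the duplicates.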

MySQL optimization - year column grouping - using temporary table, filesort

Posted: 14 Jun 2013 03:28 PM PDT

I have a transactions table with 600,000 records, and I need to list counts for the dashboard on a financial-year basis. The table uses MyISAM. I tried adding an index on the transaction date (tran_date). Even though the query uses the index, it creates a temporary table, and the temporary table plus the filesort make it slow. Is there any way to optimize the query to improve its execution time?

  SELECT COUNT( * ) AS cnt,
         CASE WHEN MONTH( tran_date ) >= 3
              THEN CONCAT( YEAR( tran_date ), '-', YEAR( tran_date ) + 1 )
              ELSE CONCAT( YEAR( tran_date ) - 1, '-', YEAR( tran_date ) )
         END AS financial_year
  FROM `transactions1`
  WHERE tran_date >= '2010-06-01'
  GROUP BY financial_year

  Showing rows 0 - 4 (5 total, Query took 1.2095 sec)
  id | select_type | table         | type  | possible_keys     | key       | key_len | ref  | rows   | Extra
  1  | SIMPLE      | transactions1 | range | PRIMARY,tran_date | tran_date | 8       | NULL | 346485 | Using where; Using index; Using temporary; Using filesort
  Keyname      | Type  | Unique | Packed | Field         | Cardinality | Collation
  PRIMARY      | BTREE | Yes    | No     | tran_date     | 205720      | A
               |       |        |        | tran_ID       | 617162      | A
  coupon_No    | BTREE | No     | No     | coupon_No     | 617162      | A
  account_type | BTREE | No     | No     | account_type  | 3           | A
  prodCode     | BTREE | No     | No     | prodCode      | 430         | A
               |       |        |        | tran_date     | 308581      | A
  tran_date    | BTREE | No     | No     | tran_date     | 205720      | A
  cust_ID      | BTREE | No     | No     | cust_ID       | 3265        | A
               |       |        |        | tran_date     | 308581      | A
               |       |        |        | account_type  | 308581      | A
               |       |        |        | points_earned | 617162      | A

Update :

I tried adding partitioning, but it was not much more helpful than the non-partitioned table. Would replication help with reading this table? There will be more grouping based on the dates (using the date functions) when reading the data.

Edit:

I altered the query and reduced the query execution time. The query I used is,

  SELECT SUM( count )
  FROM (
      SELECT COUNT( * ) AS count,
             CASE WHEN MONTH( tran_date ) >= 3
                  THEN CONCAT( YEAR( tran_date ), '-', YEAR( tran_date ) + 1 )
                  ELSE CONCAT( YEAR( tran_date ) - 1, '-', YEAR( tran_date ) )
             END AS format_date
      FROM transactions1
      GROUP BY tran_date
  ) AS s
  GROUP BY format_date

  Showing rows 0 - 4 (5 total, Query took 0.5636 sec)
  id | select_type | table         | type  | possible_keys | key       | key_len | ref  | rows   | Extra
  1  | PRIMARY     | <derived2>    | ALL   | NULL          | NULL      | NULL    | NULL | 229676 | Using temporary; Using filesort
  2  | DERIVED     | transactions1 | index | NULL          | tran_date | 8       | NULL | 617162 | Using index

But when using

  SELECT COUNT( * ) AS count,
         CASE WHEN MONTH( tran_date ) >= 3
              THEN CONCAT( YEAR( tran_date ), '-', YEAR( tran_date ) + 1 )
              ELSE CONCAT( YEAR( tran_date ) - 1, '-', YEAR( tran_date ) )
         END AS format_date
  FROM transactions1
  GROUP BY tran_date

  Showing rows 0 - 29 (229,676 total, Query took 0.0006 sec)

runs in less time without the SUM(count) over the derived table. Is there any other way to get the sum without using the subquery in MySQL, or can the subquery be optimized to use the index?
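One common approach for dashboard-style aggregates (a sketch, not from the question; the summary-table name is invented): pre-aggregate per day into a small table maintained by a trigger or nightly job, then compute the financial-year rollup from it. The rollup then scans a couple of thousand rows instead of 600,000.

```sql
-- Sketch: a per-day summary table that a trigger or scheduled job keeps
-- current. 'daily_tran_counts' is an assumed name.
CREATE TABLE daily_tran_counts (
    tran_date DATE NOT NULL PRIMARY KEY,
    cnt       INT UNSIGNED NOT NULL
);

-- One-time backfill from the base table.
INSERT INTO daily_tran_counts
SELECT tran_date, COUNT(*)
FROM   transactions1
GROUP  BY tran_date;

-- The dashboard query, now over ~one row per day instead of per transaction.
SELECT SUM( cnt ) AS cnt,
       CASE WHEN MONTH( tran_date ) >= 3
            THEN CONCAT( YEAR( tran_date ), '-', YEAR( tran_date ) + 1 )
            ELSE CONCAT( YEAR( tran_date ) - 1, '-', YEAR( tran_date ) )
       END AS financial_year
FROM   daily_tran_counts
WHERE  tran_date >= '2010-06-01'
GROUP  BY financial_year;
```

The temporary table and filesort still occur, but over so few rows that they no longer matter; the same summary table also serves the other date-based groupings mentioned in the update.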

Copy Database option not found in SQL Server Management Studio 2008 R2

Posted: 14 Jun 2013 12:50 PM PDT

I am unable to see the Copy Database option in SQL Server Management Studio 2008 R2.

Can anyone tell me what's going on, please? [Image: the Copy Database option is missing from the context menu]
