Thursday, September 12, 2013

[how to] Unable to insert into or update table


Unable to insert into or update table

Posted: 12 Sep 2013 08:26 PM PDT

I have the following table in a SQL Server 2000 database (Service Pack 2), with about 600,000 rows:

Id [int, primary key]
Form [int]
Element [int]
Data [char(8000)]
MoreData [varchar(8000)]

(I should note: the "Data" column probably holds fewer than 30 characters for almost every row. I have no idea why char(8000) was chosen previously.)

A simple INSERT INTO query with a single row of data was timing out on my website, and attempting to perform the same query directly in SQL Server Management Studio was fruitless after over 20 minutes of waiting. I ran DBCC SHOWCONTIG and received the following:

DBCC SHOWCONTIG scanning '(Table Name)' table...
Table: '(Table Name)' (926730454); index ID: 0, database ID: 8
TABLE level scan performed.
- Pages Scanned................................: 2555
- Extents Scanned..............................: 328
- Extent Switches..............................: 327
- Avg. Pages per Extent........................: 7.8
- Scan Density [Best Count:Actual Count].......: 97.56% [320:328]
- Extent Scan Fragmentation ...................: 99.39%
- Avg. Bytes Free per Page.....................: 349.8
- Avg. Page Density (full).....................: 95.68%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Thinking fragmentation was the problem, I ran DBREINDEX on the table for over two hours with no luck. After searching further, I also executed sp_lock and found over a million EXT (extent) locks, the bulk being attributed to the SPID for the website (same credentials I used to connect to the database in Management Studio). Most of the remainder were by SPID 123, which didn't come up using sp_who.

Is that number of locks (a million, for a fairly low traffic website) strange? Could they be caused by my attempts to rebuild the index when I killed the Management Studio window two or three times (for not responding)? Or could this be a problem with the physical drive?


Since gathering the above information a few hours ago, the number of locks has dropped to only 16, and the table is working normally again. If anyone can help, I'd still like some insight into what caused the problem so that it can be prevented in the future.
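For reference the next time this happens: on SQL Server 2000, lock counts by session and resource type can be pulled straight from master.dbo.syslockinfo (the same source sp_lock reads), which makes runaway extent-lock growth, and the session responsible for it, easy to spot. A hedged sketch:

SELECT req_spid AS spid,
       rsc_type AS resource_type,   -- extent locks show as type EXT in sp_lock output
       COUNT(*) AS lock_count
FROM master.dbo.syslockinfo
GROUP BY req_spid, rsc_type
ORDER BY lock_count DESC;

A killed index rebuild rolls back but can hold its locks for a long time while doing so, which is at least consistent with the symptoms described.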

How to become a Superman?

Posted: 12 Sep 2013 07:44 PM PDT

I have had this dilemma for many years and I still wonder how to become the most famous superhero of all time. I'm still hoping that the database will help me solve my childhood problem. You may wonder why I ask this kind of question in this thread.

Is it better, performance-wise, to check whether a field needs an update when you know the record does?

Posted: 12 Sep 2013 04:54 PM PDT

We have one table whose data needs to be placed in another system, and each record carries Insert, Update, and Delete flags (i, u, or d being set to 1 on the record). This way we can pull out the 50 records flagged with updated data to process, rather than processing all 50,000.

Since one record in this table holds data that has to be placed in multiple tables (and since it's an Entity Attribute Value structure, sometimes multiple attribute records go into one datatype table, e.g. product_varchar holds name, description, and url), when I go to update a value I always put WHERE tbl1.field <> tbl2.field in the where clause, so that instead of updating fields which already hold the same data I only update the ones which differ.

In terms of performance, is it better to check every time whether the fields differ before making them the same, or should I just update regardless?
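For concreteness, a minimal sketch of the pattern being discussed, with hypothetical table and column names (T-SQL UPDATE ... FROM syntax):

UPDATE tgt
SET    tgt.name = src.name
FROM   product_varchar AS tgt
JOIN   staging_product AS src
  ON   src.product_id = tgt.product_id
WHERE  tgt.name <> src.name;   -- skips rows already in sync
-- Caveat: NULL <> NULL evaluates to unknown, so rows where either side is NULL
-- are silently skipped unless the predicate also handles NULL mismatches.

In general, filtering out no-op updates is worthwhile when most rows are unchanged, because it avoids row locking, logging, and trigger/index maintenance for rows that would not actually change.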

Should I partition this large MySQL InnoDB table by date and if so, how?

Posted: 12 Sep 2013 04:13 PM PDT

Table Detail:

I have a fairly large (4.6 million rows, 74 columns (average data length is 239)) InnoDB table which stores data for about 1100 sensors polled every 10 minutes (about 150k rows inserted per day).


Goal:

I do not need data older than 30 days. Every day a query is executed to delete records older than 30 days.


Previous/Current Methods:

In previous incarnations, it was a MyISAM table and the query (DELETE FROM data WHERE timestamp < ...) seemed to run relatively quickly. Since conversion to InnoDB, delete operations are noticeably slower (I'm familiar with why).

I've converted the delete query to a stored procedure that iterates over the table in 1000-row chunks, which is more friendly to other clients accessing the table.

However, there is still a significant amount of time spent waiting for the deletion to finish (sometimes an hour or more). Research led me to the concept of horizontal partitioning, wherein the table would be split by date range, for example. Since the table is effectively a rolling log, my thought was to partition it by day, so that each day the oldest partition is simply dropped.


Question:

It appears that new partitions are not created automatically once a table has been partitioned; in other words, MySQL would not create a new partition at midnight. I was hoping there is some way to define a scheme by which MySQL would automatically write new rows to an appropriate new partition, which seems most logical for rolling logs such as this.

Is my understanding of partitioning incorrect, with regard to how new partitions are created and used? Ideally I'd like a "set and forget" approach; maybe partitioning is not the right solution.


Some additional info:

  • Most queries filter on the date column, which has an index
  • The hardware is a 2-processor Intel Xeon E5405 with 18 GB of RAM running Windows Server
  • The table create:

    CREATE TABLE `data` (
      `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
      `sensor_id` int(10) unsigned NOT NULL,
      `timestamp` datetime NOT NULL,
      -- 71 other columns...
      PRIMARY KEY (`id`),
      UNIQUE KEY `IX_sensor_timestamp` (`sensor_id`,`timestamp`),
      KEY `IX_timestamp` (`timestamp`),
      KEY `IX_sensor_id` (`sensor_id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=78542614 DEFAULT CHARSET=latin1
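Partitioning can deliver the drop-the-oldest-day behavior, but two caveats apply: MySQL requires every unique key (including the primary key) to contain the partitioning column, and nothing creates future partitions automatically, so a scheduled job (a MySQL event or cron task) has to roll them. A hedged sketch against the table above, assuming the primary key can be widened:

-- 1) Make the keys partition-compatible:
ALTER TABLE data
  DROP PRIMARY KEY,
  ADD PRIMARY KEY (`id`, `timestamp`);

-- 2) Partition by day:
ALTER TABLE data PARTITION BY RANGE (TO_DAYS(`timestamp`)) (
  PARTITION p20130912 VALUES LESS THAN (TO_DAYS('2013-09-13')),
  PARTITION p20130913 VALUES LESS THAN (TO_DAYS('2013-09-14')),
  PARTITION pmax      VALUES LESS THAN MAXVALUE
);

-- 3) Nightly maintenance: split pmax to create tomorrow's partition, and drop
--    the oldest day (near-instant compared with a row-by-row DELETE):
ALTER TABLE data REORGANIZE PARTITION pmax INTO (
  PARTITION p20130914 VALUES LESS THAN (TO_DAYS('2013-09-15')),
  PARTITION pmax      VALUES LESS THAN MAXVALUE
);
ALTER TABLE data DROP PARTITION p20130812;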

Query Execution Plan for MySQL

Posted: 12 Sep 2013 04:15 PM PDT

In Microsoft SQL Server you can view execution plans for queries to see how a query is executed, which joins take the most time, and whether any indexes are missing.

Now I want to do the same for MySQL scripts that run on a MySQL server; what programs or tools can I use for this?

I am aware of EXPLAIN; however, from my reading it only handles SELECT, which covers SELECT and INSERT INTO ... SELECT queries, but I also want to examine UPDATE and DELETE queries.

EDIT: we are running MySQL 5.5
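On 5.5 the usual workaround is to rewrite the UPDATE or DELETE as a SELECT with the same WHERE clause (and joins) and EXPLAIN that, since the access path used to find the rows is what matters; names here are hypothetical:

-- The statement to analyze:
--   UPDATE orders SET status = 'shipped' WHERE customer_id = 42 AND created < '2013-01-01';
-- EXPLAIN the equivalent row-finding SELECT instead:
EXPLAIN SELECT *
FROM orders
WHERE customer_id = 42 AND created < '2013-01-01';

For what it's worth, MySQL 5.6.3 and later accept EXPLAIN directly on UPDATE, DELETE, INSERT, and REPLACE statements.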

Error: "To run a ssis package outside of sql server data tools you must install send successful email of Integration Services or higher."

Posted: 12 Sep 2013 02:46 PM PDT

SSIS version: 2012. The package was originally a 2005 version. The package is on the file system.

DB Engine / Agent on Host 1, SSIS on Host 2.

Execution from Host 2 runs successfully. Execution from Host 1 gives the error in the question title. I saw this post, but the engine and agent are installed on Host 1. Has anyone seen this? Google returns no results when searching on the exact error (the title above is exact; the wording of the error message is odd).

Thank you!

Tablespaces in oracle

Posted: 12 Sep 2013 08:41 PM PDT

I understand that a tablespace is a logical collection of tables and database objects, and that it can be made up of multiple data files (the physical storage of those objects). Apart from this, what are the apparent advantages that tablespaces provide? Would anyone care to share use cases of how tablespaces/data files can help solve problems?
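A small illustration of the practical levers tablespaces give you (names, paths, and sizes are hypothetical): placing different kinds of data on different storage, capping growth per datafile, and switching a whole group of objects read-only in one operation:

CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/orcl/app_data01.dbf' SIZE 1G
  AUTOEXTEND ON NEXT 100M MAXSIZE 10G;

CREATE TABLE orders (
  id NUMBER PRIMARY KEY
) TABLESPACE app_data;

-- Freeze historical data, back it up once, then skip it in routine backups:
ALTER TABLESPACE app_data READ ONLY;

Tablespaces are also the unit for transportable tablespaces, tablespace point-in-time recovery, and per-user quotas.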

How to run a query across 3000 tables in SQL? [on hold]

Posted: 12 Sep 2013 01:33 PM PDT

I plan to create 3000 tables in my database, but I am worried about how I will retrieve data from that many tables.

Eg: Select * from table1, table2, table3.... table3000 where colmn='somevalue';

I can't type the names of all tables in the FROM clause.

Is it possible to work with that many tables in SQL? If not, which database would be best for working with that many tables?

Please help. Thanks in advance.
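Setting aside whether 3000 tables is a good design (a single partitioned table usually serves better), the FROM list does not have to be typed by hand: the data dictionary can generate the query. A hedged MySQL-flavored sketch (the schema name is hypothetical, and group_concat_max_len must be raised to fit 3000 statements):

SET SESSION group_concat_max_len = 1024 * 1024;

SELECT GROUP_CONCAT(
         CONCAT('SELECT * FROM `', table_name, '` WHERE colmn = ''somevalue''')
         SEPARATOR ' UNION ALL ')
FROM information_schema.tables
WHERE table_schema = 'mydb'
  AND table_name LIKE 'table%';
-- run the generated statement as a second step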

How can I monitor a production database to see if there are table scans happening?

Posted: 12 Sep 2013 02:20 PM PDT

How can I monitor a production database to see if there are table scans happening? And what indexes might fix them...

I'm told that we have adequate indexes, but I can't see every code path, and we have too many people touching the codebase (hey, lots of developers can sometimes be a good thing), so what's the most comprehensive way to see whether we've missed indexes based on our production load?

SQL Server 2008 R2, C#/ASP.NET code, NHibernate are the key factors in use.
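On SQL Server 2008 R2 the index-usage and missing-index DMVs answer exactly this from real production load (the counters reset at service restart, so let them accumulate first); a hedged sketch:

-- Cumulative scan counts per index; heavy user_scans against a heap or
-- clustered index usually means full table scans:
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_scans, s.user_seeks, s.user_lookups
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY s.user_scans DESC;

-- Candidate indexes the optimizer wished it had, ranked by estimated impact:
SELECT d.statement, d.equality_columns, d.inequality_columns, d.included_columns,
       gs.user_seeks, gs.avg_total_user_cost, gs.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g  ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS gs ON gs.group_handle = g.index_group_handle
ORDER BY gs.avg_user_impact * gs.user_seeks DESC;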

How Can I Determine Object Distribution in a Multiple-File Filegroup?

Posted: 12 Sep 2013 12:49 PM PDT

I am looking to eliminate an extraneous data file in a filegroup in my DB but don't want to use the dreaded DBCC SHRINKFILE method. I would prefer to rebuild the indexes into an alternate filegroup, perform the shrink with EMPTYFILE followed by removal of the file, and then rebuild back into the original filegroup. Is there a method to identify which table/index objects populate the utilized pages/extents in the target database file?
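One hedged option: on SQL Server 2012 and later, the undocumented sys.dm_db_database_page_allocations function reports the file each allocated page lives in, which can be grouped into an object-per-file map (on older versions, DBCC IND per table yields the same raw data). A sketch, assuming 2012+:

SELECT OBJECT_NAME(object_id)   AS object_name,
       index_id,
       allocated_page_file_id   AS file_id,
       COUNT(*)                 AS page_count
FROM sys.dm_db_database_page_allocations(DB_ID(), NULL, NULL, NULL, 'LIMITED')
GROUP BY object_id, index_id, allocated_page_file_id
ORDER BY page_count DESC;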

Duplicate key violates unique constraint

Posted: 12 Sep 2013 01:57 PM PDT

When I use this query everything is fine:

UPDATE users SET f_name='Mike', l_name='MyLastName', username='blabla', "password"='asdfsdaf', "class"=12, lang='en' WHERE id=50  

I created a view which combines two tables:

CREATE OR REPLACE VIEW students_data AS
   SELECT students.id, users.f_name, users.l_name, users.username, users."password",
          users."class", users.permission, users.sessionid, users.lastlogin, users.lang
     FROM students
     LEFT JOIN users ON students.id = users.id;

And I also created this rule:

CREATE OR REPLACE RULE students_data_update AS
    ON UPDATE TO students_data DO INSTEAD
  UPDATE users
     SET f_name = new.f_name, l_name = new.l_name, username = new.username,
         "password" = new."password", "class" = new."class", permission = new.permission,
         sessionid = new.sessionid, lastlogin = new.lastlogin, lang = new.lang;

When I execute this query I get an error:

UPDATE students_data SET f_name='Mikaa', l_name='MikeL', username='blabla', "password"='asdfsdaf', "class"=12, lang='en' WHERE id=50  

ERROR: duplicate key violates unique constraint "username" SQL status:23505

I have no idea why I get this error; username is a unique column, but that shouldn't cause any problems when I update it.

-- Table: users

-- DROP TABLE users;

CREATE TABLE users
(
  id serial NOT NULL,
  f_name character varying,
  l_name character varying,
  username character varying NOT NULL,
  "password" character varying NOT NULL,
  "class" integer NOT NULL,
  permission integer NOT NULL,
  sessionid character varying,
  lastlogin integer,
  lang character varying(5),
  CONSTRAINT users_pkey PRIMARY KEY (id),
  CONSTRAINT "class" FOREIGN KEY ("class")
      REFERENCES "class" (id) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT username UNIQUE (username)
)
WITHOUT OIDS;
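One likely culprit is visible in the rule as posted: its UPDATE has no WHERE clause, so a single update against the view rewrites every row in users, setting every username to the same value, which is exactly a unique-constraint violation. A sketch of the rule with the row restriction added:

CREATE OR REPLACE RULE students_data_update AS
    ON UPDATE TO students_data DO INSTEAD
  UPDATE users
     SET f_name = new.f_name, l_name = new.l_name, username = new.username,
         "password" = new."password", "class" = new."class", permission = new.permission,
         sessionid = new.sessionid, lastlogin = new.lastlogin, lang = new.lang
   WHERE users.id = old.id;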

Clustered index compression vs table compression - are they the same thing?

Posted: 12 Sep 2013 11:01 AM PDT

If I set compression (either page or row) on the clustered index of a table, is that the same as setting compression on the table?

SQL Server provides options for doing both, which suggests that they are different, but I was under the impression that a clustered index and a table were essentially the same thing, and my mental model of how clustered indexes work tells me that compressing the clustered index must also compress the table.
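The mental model is right, and the behavior can be stated concretely: for a clustered table the two statements below compress the same pages, because the clustered index leaf level is the table data. The separate options exist for heaps (tables without a clustered index) and for nonclustered indexes, which are compressed independently. Object names are hypothetical:

-- These act on the same data pages when the table has a clustered index:
ALTER TABLE dbo.MyTable REBUILD WITH (DATA_COMPRESSION = PAGE);
ALTER INDEX PK_MyTable ON dbo.MyTable REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Nonclustered indexes are separate structures and need their own setting:
ALTER INDEX IX_MyTable_Col ON dbo.MyTable REBUILD WITH (DATA_COMPRESSION = ROW);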

Constructing datetime from 2 fields: one integer and a datetime

Posted: 12 Sep 2013 10:38 AM PDT

I am looking for the most efficient way to combine 2 values: the first is stored as a datetime field (with hh:mm:ss.fff all being 0) and the second is a time-of-day value stored as an int in the format hhmmssmmm. I need to take these 2 fields and convert them to a single datetime value. For example:

date field = '2013-09-04'  time field = 192423998   

Should return:

2013-09-04 19:24:23.997  

I've seen solutions using DATEADD and CONVERT and I came up with my own (shown below) but I wonder if there's a better/more efficient way.

DECLARE @dt DATETIME = '2013-09-04'

SELECT @dt + ' ' + STUFF(STUFF(STUFF(RIGHT('0'+CAST(192423998 AS VARCHAR(9)),9),7,0,'.'),5,0,':'),3,0,':')

Note: In case you wonder why the RIGHT('0'+CAST(192423998 AS VARCHAR(9)),9) - this is because a time before 10 AM will only have one digit for the hour part.
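A hedged arithmetic alternative that avoids the string RIGHT/STUFF work entirely: split the int into hour/minute/second/millisecond parts with integer division and add each part onto the date. With the sample values it also yields 2013-09-04 19:24:23.997 (the .998 rounds to .997 because DATETIME stores time in 1/300-second ticks):

DECLARE @d DATETIME = '2013-09-04';
DECLARE @t INT = 192423998;

SELECT DATEADD(ms, @t % 1000,
       DATEADD(ss, (@t / 1000) % 100,
       DATEADD(mi, (@t / 100000) % 100,
       DATEADD(hh,  @t / 10000000, @d))));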

Oracle empty string converts to null

Posted: 12 Sep 2013 12:51 PM PDT

When doing an insert, an empty string is converted to null:

insert into test (f) values ('');  

Now, there is a row with f containing a null.

But, when I query the table, I cannot use '':

select * from test where f='';

no rows selected

I can use null:

select * from test where f is null;

____F_
NULL

So... it appears that Oracle decided that empty strings cannot be used for insert, but they remain empty strings when doing queries. Where is the documentation on when an empty string becomes a null and when it remains an empty string?
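The behavior is easy to demonstrate: Oracle treats a zero-length character value as NULL, so '' in a comparison is a NULL, and NULL = NULL evaluates to unknown rather than true, which is why f = '' can never match:

SELECT CASE WHEN '' IS NULL THEN 'empty string is NULL'
            ELSE 'empty string is not NULL'
       END AS result
FROM dual;
-- returns: empty string is NULL

The SQL Language Reference covers this under "Nulls", noting that a character value with a length of zero is treated as null (and warning that this may change in a future release).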

Backup SQL Server with a system backup

Posted: 12 Sep 2013 12:08 PM PDT

I have installed Microsoft SQL Server 2012 on a server that is also used as a file server. Each day the server is automatically backed up with Acronis Backup & Recovery to a USB disk that we swap every day.

My question is whether it is enough to make a system backup, or whether I also need to do a SQL Server backup. Are there any disadvantages to doing it like this?
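One caution and a hedged sketch: a plain file-level image of a running SQL Server instance is only consistent if the backup tool coordinates with the SQL Server VSS Writer service, and even then it gives no transaction-log chain for point-in-time recovery. Native backups alongside the Acronis job are cheap insurance (names and paths hypothetical):

BACKUP DATABASE MyDb
  TO DISK = N'D:\Backups\MyDb_full.bak'
  WITH CHECKSUM, INIT;

-- Needed for point-in-time recovery if the database uses the full recovery model:
BACKUP LOG MyDb
  TO DISK = N'D:\Backups\MyDb_log.trn';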

How to securely connect app and database servers?

Posted: 12 Sep 2013 08:44 PM PDT

(Updated) For a start, I have my app and database on separate servers. It's easy to connect them, except I am not sure how to secure my database server.

Here's what I've already done:

  • Ran mysql_secure_installation right after installing MySQL. So, all of these have been taken care of:

    • strong 64 char root password
    • no anonymous users
    • root login only on localhost
    • no test databases
  • A non-public network for the inter-server communication (in my my.cnf, there's something like this: bind-address = 10.128.2.18).

  • A separate user for the database, an unguessable username and 64 char strong password to go with it; and the ip addresses of the user accounts set to the private IP addresses of the app server. I created the user with command like this:

    GRANT ALL ON `app_db`.* TO 'db_user'@'10.128.2.9' IDENTIFIED BY 'password';  

The app is WordPress, so I need GRANT ALL to avoid any unexpected issues.

  • (Update) IPSec to secure the connection between the servers (instead of using tech like SSH Tunnel, SSL, VPN, or Tinc).

Options considered:

  • SSH Tunnel or SSL: I could connect the app server and database server via SSH tunnel or SSL, and enable localhost-only access to the database on the database server. But I am told it's a bad idea performance-wise as sending lots of data through SSH or SSL will eat CPU.

  • OpenVPN: It's great as well, but doing high availability/scaling on openvpn is not really easy. Setting it up itself is quite complex to be honest, and this takes difficulty in scaling to a whole new level.

So, what else do I need, or is this good enough? How do others do it? Please be as detailed as possible (a link to a tutorial or whatever you are suggesting would help a lot).

A query executed against a backup of a database performs better than in the original database

Posted: 12 Sep 2013 04:08 PM PDT

We have two databases with the same data; let's call them Database A, which is the original, and Database B, which is the backup of Database A. A query is performing much better in Database B: for example, it takes 42 ms there, while in Database A it takes 10 seconds. I noticed that the execution plans are different. I already asked the company's DBA, but he has no clue what the problem could be. There are 42 entries in the execution plan for Database A and only 33 in the plan for Database B; maybe that could be the problem, but I don't know.

UPDATE: I've been researching and I found out that this has something to do with the hash join. Database A performs a HASH JOIN on VW_PERSONA while Database B performs a NESTED LOOPS join, and the HASH JOIN performs badly in this case. What could be the reason that Database A prefers a HASH JOIN over a NESTED LOOPS join? I verified this by forcing a HASH JOIN with a hint in Database B, and it performed as badly as Database A.

Version of the Database: Oracle Database 10g Release 10.2.0.4.0 - 64bit Production

This is the query:

SELECT t1.ID, t1.CLAVEESTATUS, t1.IDEJECUTIVO, t1.GERENCIA_ID, t1.IDTIPOEJECUTIVO,
       t1.IDDOMICILIO, t1.IDPERSONA,
       t0.ID, t0.CLAVEESTATUS, t0.CORREOELECTRONICO, t0.DESCRIPCION, t0.IDGERENCIA,
       t0.CENTROOPERACION_ID, t0.IDDOMICILIO, t0.IDPERSONARESPONSABLE,
       t2.BCATPURAASEG, t2.CALLENUMERO, t2.CIUDAD, t2.CLAVECIUDAD, t2.CLAVEESTADO,
       t2.CLAVEPAIS, t2.CLAVEUSUARIOMODIFICACION, t2.CODIGOPOSTAL, t2.ESTADO,
       t2.FECHAMODIFICACION, t2.IDCOLONIA, t2.idPersona, t2.COLONIA, t2.TIPODOMICILIO,
       t2.idDomicilio,
       t3.IDPERSONA, t3.APELLIDOMATERNO, t3.APELLIDOPATERNO, t3.BCAPTURAASEG,
       t3.CLAVEESTADOCIVIL, t3.CLAVEESTADONACIMIENTO, t3.CLAVEMUNICIPIONACIMIENTO,
       t3.CLAVENACIONALIDAD, t3.CLAVERAMAACTIV, t3.CLAVERAMAPERSONAFISICA,
       t3.CLAVERESIDENCIA, t3.CLAVESEXO, t3.CLAVESITUACION, t3.CLAVESUBRAMAACTIV,
       t3.CLAVETIPOPERSONA, t3.CLAVETIPOSECTOR, t3.CLAVETIPOSECTORMORAL,
       t3.CLAVETIPOSITUACION, t3.CLAVEUSUARIOMODIFICACION, t3.CORREOSADICIONALES,
       t3.CURP, t3.EMAIL, t3.FACEBOOK, t3.FECHAALTA, t3.FECHABAJA, t3.FECHACONSTITUCION,
       t3.FECHAMODIFICACION, t3.FECHANACIMIENTO, t3.IDREPRESENTANTE, t3.LUGARNACIMIENTO,
       t3.NOMBRE, t3.NOMBRECOMPLETO, t3.NOMBRECONTACTO, t3.PAGINAWEB, t3.PUESTOCONTACTO,
       t3.RAZONSOCIAL, t3.CODIGORFC, t3.TELEFONOCASA, t3.TELEFONOCELULAR, t3.TELEFONOFAX,
       t3.TELEFONOOFICINA, t3.TELEFONOSADICIONALES, t3.TWITTER
FROM MIDAS.TOEJECUTIVO t1
LEFT OUTER JOIN MIDAS.TOGERENCIA t0 ON (t0.ID = t1.GERENCIA_ID)
LEFT OUTER JOIN MIDAS.VW_DOMICILIO t2 ON ((t2.idDomicilio = t1.IDDOMICILIO) AND (t2.idPersona = t1.IDPERSONA))
LEFT OUTER JOIN MIDAS.VW_PERSONA t3 ON (t3.IDPERSONA = t1.IDPERSONA);

The execution plans for that query:

Database A (performs badly; takes 10 seconds):

PLAN_TABLE_OUTPUT
Plan hash value: 1113260721

---------------------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name                | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                     |    79 |   108K|       |   996K  (1)| 03:19:14 |
|*  1 |  HASH JOIN OUTER                      |                     |    79 |   108K|       |   996K  (1)| 03:19:14 |
|   2 |   VIEW                                |                     |    79 | 40290 |       |   659  (13)| 00:00:08 |
|   3 |    NESTED LOOPS OUTER                 |                     |    79 | 43687 |       |   659  (13)| 00:00:08 |
|   4 |     VIEW                              |                     |    79 | 22199 |       |     7  (15)| 00:00:01 |
|*  5 |      HASH JOIN OUTER                  |                     |    79 |  5925 |       |     7  (15)| 00:00:01 |
|   6 |       TABLE ACCESS FULL               | TOEJECUTIVO         |    79 |  1343 |       |     3   (0)| 00:00:01 |
|   7 |       TABLE ACCESS FULL               | TOGERENCIA          |    31 |  1798 |       |     3   (0)| 00:00:01 |
|*  8 |     VIEW PUSHED PREDICATE             |                     |     1 |   272 |       |     8  (13)| 00:00:01 |
|   9 |      WINDOW SORT                      |                     |     1 |   195 |       |     8  (13)| 00:00:01 |
|  10 |       NESTED LOOPS OUTER              |                     |     1 |   195 |       |     7   (0)| 00:00:01 |
|  11 |        NESTED LOOPS                   |                     |     1 |   156 |       |     5   (0)| 00:00:01 |
|  12 |         NESTED LOOPS                  |                     |     1 |   126 |       |     4   (0)| 00:00:01 |
|  13 |          TABLE ACCESS BY INDEX ROWID  | DOMICILIO           |     1 |    96 |       |     3   (0)| 00:00:01 |
|* 14 |           INDEX RANGE SCAN            | IDX01_DOMICILIO_PER |     1 |       |       |     2   (0)| 00:00:01 |
|* 15 |          TABLE ACCESS BY INDEX ROWID  | POBLACION           |     1 |    30 |       |     1   (0)| 00:00:01 |
|* 16 |           INDEX UNIQUE SCAN           | PK_POBLACION        |     1 |       |       |     1   (0)| 00:00:01 |
|* 17 |         TABLE ACCESS BY INDEX ROWID   | POBLACION           |     1 |    30 |       |     1   (0)| 00:00:01 |
|* 18 |          INDEX UNIQUE SCAN            | PK_POBLACION        |     1 |       |       |     1   (0)| 00:00:01 |
|* 19 |        TABLE ACCESS BY INDEX ROWID    | CODIGO_POSTAL       |     1 |    39 |       |     2   (0)| 00:00:01 |
|* 20 |         INDEX RANGE SCAN              | PK_CODIGO_POSTAL    |     5 |       |       |     1   (0)| 00:00:01 |
|  21 |   VIEW                                | VW_PERSONA          |   917K|   782M|       |   995K  (1)| 03:19:06 |
|  22 |    MERGE JOIN OUTER                   |                     |   917K|   803M|       |   995K  (1)| 03:19:06 |
|  23 |     VIEW                              |                     |   917K|   787M|       |   760K  (1)| 02:32:05 |
|  24 |      MERGE JOIN OUTER                 |                     |   917K|   734M|       |   760K  (1)| 02:32:05 |
|  25 |       VIEW                            |                     |   917K|   691M|       |   531K  (1)| 01:46:13 |
|  26 |        MERGE JOIN OUTER               |                     |   917K|   631M|       |   531K  (1)| 01:46:13 |
|  27 |         VIEW                          |                     |   917K|   584M|       |   508K  (1)| 01:41:47 |
|  28 |          MERGE JOIN OUTER             |                     |   917K|    91M|       |   508K  (1)| 01:41:47 |
|  29 |           TABLE ACCESS BY INDEX ROWID | PERSONA             |   917K|    75M|       |   258K  (1)| 00:51:41 |
|  30 |            INDEX FULL SCAN            | PK_PERSONA          |   917K|       |       |  1546   (1)| 00:00:19 |
|* 31 |           SORT JOIN                   |                     |   916K|    16M|    84M|   250K  (1)| 00:50:06 |
|  32 |            TABLE ACCESS BY INDEX ROWID| PERSONA_EXT         |   916K|    16M|       |   245K  (1)| 00:49:01 |
|  33 |             INDEX FULL SCAN           | PK_PERSONA_EXT      |   916K|       |       |  1959   (1)| 00:00:24 |
|* 34 |         SORT JOIN                     |                     | 73382 |  3869K|    10M| 22109   (1)| 00:04:26 |
|  35 |          TABLE ACCESS BY INDEX ROWID  | PERSONA_MORAL       | 73382 |  3869K|       | 21130   (1)| 00:04:14 |
|  36 |           INDEX FULL SCAN             | PK_PERSONA_MORAL    | 73382 |       |       |   129   (1)| 00:00:02 |
|* 37 |       SORT JOIN                       |                     |   843K|    39M|   129M|   229K  (1)| 00:45:53 |
|  38 |        TABLE ACCESS BY INDEX ROWID    | PERSONA_FISICA      |   843K|    39M|       |   219K  (1)| 00:43:49 |
|  39 |         INDEX FULL SCAN               | PK_PERSONA_FISICA   |   843K|       |       |  1633   (1)| 00:00:20 |
|* 40 |     SORT JOIN                         |                     |   760K|    13M|    46M|   235K  (1)| 00:47:02 |
|  41 |      TABLE ACCESS BY INDEX ROWID      | RFC                 |   760K|    13M|       |   230K  (1)| 00:46:10 |
|  42 |       INDEX FULL SCAN                 | PK_RFC              |   760K|       |       |  1558   (1)| 00:00:19 |
---------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T3"."IDPERSONA"(+)="T1"."IDPERSONA")
   5 - access("T0"."ID"(+)="T1"."GERENCIA_ID")
   8 - filter("ID_DOMICILIO"(+)="T1"."IDDOMICILIO" AND "CUENTA"(+)="NUMERO"(+))
  14 - access("DOM"."ID_PERSONA"="T1"."IDPERSONA")
       filter("ID_PERSONA" IS NOT NULL)
  15 - filter("POBLACION"."CVE_TIPO_POBLAC"='ES' AND "POBLACION"."CVE_POBLAC_UBIC" IS NOT NULL AND
              "POBLACION"."CVE_POBLAC_UBIC"="DOM"."CVE_PAIS")
  16 - access("POBLACION"."CVE_POBLACION"="DOM"."CVE_ESTADO")
  17 - filter("CIUDAD"."CVE_POBLAC_UBIC" IS NOT NULL AND "CIUDAD"."CVE_TIPO_POBLAC"='CD' AND
              "CIUDAD"."CVE_POBLAC_UBIC"="DOM"."CVE_ESTADO")
  18 - access("CIUDAD"."CVE_POBLACION"="DOM"."CVE_CIUDAD")
  19 - filter("DOM"."COLONIA"="CP"."DESC_USR_CODPOST"(+))
  20 - access("DOM"."CODIGO_POSTAL"="CP"."CODIGO_POSTAL"(+))
  31 - access("PEXT"."ID_PERSONA"(+)="P"."ID_PERSONA")
       filter("PEXT"."ID_PERSONA"(+)="P"."ID_PERSONA")
  34 - access("MORAL"."ID_PERSONA"(+)="P"."ID_PERSONA")
       filter("MORAL"."ID_PERSONA"(+)="P"."ID_PERSONA")
  37 - access("FISICA"."ID_PERSONA"(+)="P"."ID_PERSONA")
       filter("FISICA"."ID_PERSONA"(+)="P"."ID_PERSONA")
  40 - access("RFC"."ID_PERSONA"(+)="P"."ID_PERSONA")
       filter("RFC"."ID_PERSONA"(+)="P"."ID_PERSONA")

Database B execution plan (performs better, takes 42 ms):

PLAN_TABLE_OUTPUT
Plan hash value: 2434982697

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                     |  1522 |  1863K|  1271   (7)| 00:00:16 |
|   1 |  NESTED LOOPS OUTER                 |                     |  1522 |  1863K|  1271   (7)| 00:00:16 |
|*  2 |   HASH JOIN RIGHT OUTER             |                     |    79 | 77578 |   619   (1)| 00:00:08 |
|   3 |    TABLE ACCESS FULL                | TOGERENCIA          |    31 |  1953 |     3   (0)| 00:00:01 |
|   4 |    NESTED LOOPS OUTER               |                     |    79 | 72601 |   616   (1)| 00:00:08 |
|   5 |     TABLE ACCESS FULL               | TOEJECUTIVO         |    79 |  1738 |     3   (0)| 00:00:01 |
|   6 |     VIEW PUSHED PREDICATE           | VW_PERSONA          |     1 |   897 |     8   (0)| 00:00:01 |
|   7 |      NESTED LOOPS OUTER             |                     |     1 |   260 |     8   (0)| 00:00:01 |
|   8 |       NESTED LOOPS OUTER            |                     |     1 |   232 |     6   (0)| 00:00:01 |
|   9 |        NESTED LOOPS OUTER           |                     |     1 |   175 |     5   (0)| 00:00:01 |
|  10 |         NESTED LOOPS OUTER          |                     |     1 |   118 |     4   (0)| 00:00:01 |
|  11 |          TABLE ACCESS BY INDEX ROWID| PERSONA             |     1 |    98 |     2   (0)| 00:00:01 |
|* 12 |           INDEX UNIQUE SCAN         | PK_PERSONA          |     1 |       |     2   (0)| 00:00:01 |
|  13 |          TABLE ACCESS BY INDEX ROWID| RFC                 |     1 |    20 |     2   (0)| 00:00:01 |
|* 14 |           INDEX UNIQUE SCAN         | PK_RFC              |     1 |       |     1   (0)| 00:00:01 |
|  15 |         TABLE ACCESS BY INDEX ROWID | PERSONA_FISICA      |     1 |    57 |     2   (0)| 00:00:01 |
|* 16 |          INDEX UNIQUE SCAN          | PK_PERSONA_FISICA   |     1 |       |     1   (0)| 00:00:01 |
|  17 |        TABLE ACCESS BY INDEX ROWID  | PERSONA_MORAL       |     1 |    57 |     1   (0)| 00:00:01 |
|* 18 |         INDEX UNIQUE SCAN           | PK_PERSONA_MORAL    |     1 |       |     1   (0)| 00:00:01 |
|  19 |       TABLE ACCESS BY INDEX ROWID   | PERSONA_EXT         |     1 |    28 |     2   (0)| 00:00:01 |
|* 20 |        INDEX UNIQUE SCAN            | PK_PERSONA_EXT      |     1 |       |     1   (0)| 00:00:01 |
|* 21 |   VIEW PUSHED PREDICATE             |                     |     1 |   272 |     8  (13)| 00:00:01 |
|  22 |    WINDOW SORT                      |                     |     1 |   195 |     8  (13)| 00:00:01 |
|  23 |     NESTED LOOPS OUTER              |                     |     1 |   195 |     7   (0)| 00:00:01 |
|  24 |      NESTED LOOPS                   |                     |     1 |   156 |     5   (0)| 00:00:01 |
|  25 |       NESTED LOOPS                  |                     |     1 |   126 |     4   (0)| 00:00:01 |
|  26 |        TABLE ACCESS BY INDEX ROWID  | DOMICILIO           |     1 |    96 |     3   (0)| 00:00:01 |
|* 27 |         INDEX RANGE SCAN            | IDX01_DOMICILIO_PER |     1 |       |     2   (0)| 00:00:01 |
|* 28 |        TABLE ACCESS BY INDEX ROWID  | POBLACION           |     1 |    30 |     1   (0)| 00:00:01 |
|* 29 |         INDEX UNIQUE SCAN           | PK_POBLACION        |     1 |       |     1   (0)| 00:00:01 |
|* 30 |       TABLE ACCESS BY INDEX ROWID   | POBLACION           |     1 |    30 |     1   (0)| 00:00:01 |
|* 31 |        INDEX UNIQUE SCAN            | PK_POBLACION        |     1 |       |     1   (0)| 00:00:01 |
|* 32 |      TABLE ACCESS BY INDEX ROWID    | CODIGO_POSTAL       |     1 |    39 |     2   (0)| 00:00:01 |
|* 33 |       INDEX RANGE SCAN              | PK_CODIGO_POSTAL    |     5 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("T0"."ID"(+)="T1"."GERENCIA_ID")
  12 - access("P"."ID_PERSONA"="T1"."IDPERSONA")
  14 - access("RFC"."ID_PERSONA"(+)="T1"."IDPERSONA")
       filter("RFC"."ID_PERSONA"(+)="P"."ID_PERSONA")
  16 - access("FISICA"."ID_PERSONA"(+)="T1"."IDPERSONA")
       filter("FISICA"."ID_PERSONA"(+)="P"."ID_PERSONA")
  18 - access("MORAL"."ID_PERSONA"(+)="T1"."IDPERSONA")
       filter("MORAL"."ID_PERSONA"(+)="P"."ID_PERSONA")
  20 - access("PEXT"."ID_PERSONA"(+)="T1"."IDPERSONA")
       filter("PEXT"."ID_PERSONA"(+)="P"."ID_PERSONA")
  21 - filter("ID_DOMICILIO"(+)="T1"."IDDOMICILIO" AND "CUENTA"(+)="NUMERO"(+))
  27 - access("DOM"."ID_PERSONA"="T1"."IDPERSONA")
       filter("ID_PERSONA" IS NOT NULL)
  28 - filter("POBLACION"."CVE_TIPO_POBLAC"='ES' AND "POBLACION"."CVE_POBLAC_UBIC" IS NOT NULL AND
              "POBLACION"."CVE_POBLAC_UBIC"="DOM"."CVE_PAIS")
  29 - access("POBLACION"."CVE_POBLACION"="DOM"."CVE_ESTADO")
  30 - filter("CIUDAD"."CVE_POBLAC_UBIC" IS NOT NULL AND "CIUDAD"."CVE_TIPO_POBLAC"='CD' AND
              "CIUDAD"."CVE_POBLAC_UBIC"="DOM"."CVE_ESTADO")
  31 - access("CIUDAD"."CVE_POBLACION"="DOM"."CVE_CIUDAD")
  32 - filter("DOM"."COLONIA"="CP"."DESC_USR_CODPOST"(+))
  33 - access("DOM"."CODIGO_POSTAL"="CP"."CODIGO_POSTAL"(+))

VW_PERSONA (VIEW):

CREATE OR REPLACE FORCE VIEW MIDAS.VW_PERSONA
(
   IDPERSONA, IDDOMICILIO, CLAVERESIDENCIA, CLAVENACIONALIDAD, CLAVETIPOPERSONA,
   FECHAALTA, FECHABAJA, CLAVETIPOSITUACION, TELEFONOCASA, TELEFONOFAX,
   TELEFONOOFICINA, CLAVETIPOSECTOR, NOMBRECOMPLETO, FECHAMODIFICACION,
   CLAVEUSUARIOMODIFICACION, BCAPTURAASEG, EMAIL, CURP, TELEFONOCELULAR,
   NOMBRECONTACTO, PUESTOCONTACTO, FACEBOOK, TWITTER, PAGINAWEB,
   TELEFONOSADICIONALES, CORREOSADICIONALES, IDREPRESENTANTE, RAZONSOCIAL,
   FECHACONSTITUCION, CLAVERAMAACTIV, CLAVESUBRAMAACTIV, CLAVETIPOSECTORMORAL,
   NOMBRE, APELLIDOPATERNO, APELLIDOMATERNO, CLAVESITUACION, CLAVEESTADOCIVIL,
   CLAVERAMAPERSONAFISICA, CLAVESEXO, FECHANACIMIENTO, LUGARNACIMIENTO,
   CLAVEESTADONACIMIENTO, CLAVEMUNICIPIONACIMIENTO, CODIGORFC, SIGLASRFC,
   FECHARFC, HOMOCLAVERFC
)
AS
   SELECT /*+ INDEX(pext PK_PERSONA_EXT) INDEX(p PK_PERSONA) INDEX(fisica PK_PERSONA_FISICA) INDEX(moral PK_PERSONA_MORAL) INDEX(rfc PK_RFC) */
          p.id_persona AS idPersona,
          p.id_domicilio AS idDomicilio,
          p.cve_residencia AS claveResidencia,
          p.cve_nacionalidad AS claveNacionalidad,
          CASE WHEN p.cve_per_juridica = 'PF' THEN 1 ELSE 2 END AS claveTipoPersona,
          p.f_alta AS fechaAlta,
          p.f_baja AS fechaBaja,
          p.sit_persona AS claveTipoSituacion,
          p.telef_casa AS telefonoCasa,
          p.telef_fax AS telefonoFax,
          p.telef_oficina AS telefonoOficina,
          p.cve_tipo_sector AS claveTipoSector,
          p.nombre AS nombreCompleto,
          p.fh_modificacion AS fechaModificacion,
          p.cve_usuar_modif AS claveUsuarioModificacion,
          p.b_captura_aseg AS bCapturaAseg,
          -- pext.e_mail AS email,
          -- trim(TRANSLATE(pext.e_mail,'áéíóúàèìòùãõâêîôôäëïöüçÁÉÍÓÚÀÈÌÒÙÃÕÂÊÎÔÛÄËÏÖÜÇ','aeiouaeiouaoaeiooaeioucAEIOUAEIOUAOAEIOOAEIOUC')) email,
          TRIM (pext.e_mail) email,
          pext.curp AS curp,
          pext.telef_celular AS telefonoCelular,
          pext.nombre_contacto AS nombreContacto,
          pext.puesto_contacto AS puestoContacto,
          pext.facebook,
          pext.twitter,
          pext.pagina_web AS paginaWeb,
          pext.tels_adicionales AS telefonosAdicionales,
          pext.correos_adicionales AS correosAdicionales,
          pext.id_representante AS idRepresentante,
          moral.nom_razon_social AS razonSocial,
          moral.f_constitucion AS fechaConstitucion,
          moral.cve_rama_activ AS claveRamaActiv,
          moral.cve_sub_rama_ac AS claveSubramaActiv,
          moral.cve_tipo_sector AS claveTipoSectorMoral,
          fisica.nombre_persona AS nombre,
          fisica.apellido_paterno AS apellidoPaterno,
          fisica.apellido_materno AS apellidoMaterno,
          fisica.cve_actividad_pf AS claveSituacion,
          fisica.cve_estado_civil AS claveEstadoCivil,
          fisica.cve_rama_pf AS claveRamaPersonaFisica,
          fisica.cve_sexo AS claveSexo,
          fisica.f_nacimiento AS fechaNacimiento,
          fisica.lugar_nacim AS lugarNacimiento,
          fisica.cve_estado_nacim AS claveEstadoNacimiento,
          fisica.cve_munic_nacim AS claveMunicipioNacimiento,
          rfc.siglas_rfc || rfc.f_rfc || rfc.homoclave_rfc AS codigoRfc,
          rfc.siglas_rfc AS siglasRfc,
          rfc.f_rfc AS fechaRfc,
          rfc.homoclave_rfc AS homoclaveRfc
     FROM SEYCOS.persona p
          LEFT JOIN SEYCOS.persona_ext pext ON (pext.id_persona = p.id_persona)
          LEFT JOIN SEYCOS.persona_moral moral ON (moral.id_persona = p.id_persona)
          LEFT JOIN SEYCOS.persona_fisica fisica ON (fisica.id_persona = p.id_persona)
          LEFT JOIN SEYCOS.RFC rfc ON (rfc.id_persona = p.id_persona);
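Hedged starting points for the investigation, since the two plans differ mainly in the join method chosen against VW_PERSONA's base tables: compare optimizer statistics between the databases (stale or missing stats are the most common reason one optimizer estimates 917K rows where the other estimates 1), and compare optimizer parameters. A sketch:

-- Statistics freshness on the view's base tables (run in both databases):
SELECT table_name, num_rows, last_analyzed
FROM   dba_tables
WHERE  owner = 'SEYCOS'
AND    table_name IN ('PERSONA', 'PERSONA_EXT', 'PERSONA_MORAL',
                      'PERSONA_FISICA', 'RFC');

-- Re-gather where stale:
EXEC DBMS_STATS.GATHER_TABLE_STATS('SEYCOS', 'PERSONA', cascade => TRUE);

-- Optimizer settings that commonly diverge between "identical" databases:
SELECT name, value
FROM   v$parameter
WHERE  name IN ('optimizer_index_cost_adj', 'optimizer_index_caching',
                'optimizer_mode', 'pga_aggregate_target');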

bulk_insert_buffer_size and InnoDB

Posted: 12 Sep 2013 10:47 AM PDT

I read the following on this page

http://dev.mysql.com/doc/mysql/en/server-system-variables.html

bulk_insert_buffer_size

MyISAM uses a special tree-like cache to make bulk inserts faster for INSERT ... SELECT, INSERT ... VALUES (...), (...), ..., and LOAD DATA INFILE. This variable limits the size of the cache tree in bytes per thread. Setting it to 0 disables this optimization. Note: This cache is used only when adding data to a non-empty table. The default value is 8MB. This variable was added in MySQL 4.0.3. This variable previously was named myisam_bulk_insert_tree_size.

I take this to mean that bulk_insert_buffer_size has no bearing on InnoDB tables, only on MyISAM tables. Is this correct?

Create Login command error

Posted: 12 Sep 2013 12:40 PM PDT

What is wrong in the following screenshot? (A screenshot of the error accompanied the original post.)

I am trying the solutions posted here and here to add a login.
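The error itself is not legible here, but for comparison, a minimal working pattern is below (names and password hypothetical); a common trip-up is that CREATE LOGIN is a server-level command while CREATE USER runs in the target database:

USE master;
CREATE LOGIN app_login WITH PASSWORD = 'Str0ng!Passw0rd1';
GO

USE MyDb;
CREATE USER app_user FOR LOGIN app_login;
GO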

Percona Xtradb Cluster : How to speed up insert?

Posted: 12 Sep 2013 11:53 AM PDT

I recently installed a 3-node full-master cluster based on Percona XtraDB (a very easy install). But now I need to do some tuning to handle more INSERT/UPDATE requests. Currently I make around 100 insertions every 5 minutes, plus around 400 updates in the same window. All these operations took less than 3 minutes on a single-server architecture, and now, with the 3-node cluster, they take more than 5 minutes...

Is there any tuning I can do to speed up these operations? Here is my current cnf configuration:

[mysqld]
datadir=/var/lib/mysql
user=mysql

wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_address=gcomm://dbnode01,dbnode02,dbnode03

binlog_format=ROW
default_storage_engine=InnoDB
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
wsrep_node_address=1.2.3.4
wsrep_cluster_name=my_cluster
wsrep_sst_method=xtrabackup
wsrep_sst_auth="user:password"

Here is the hardware configuration of the three servers:

Node#1

CPU: Single Processor Quad Core Xeon 3470 - 2.93Ghz (Lynnfield) - 1 x 8MB cache w/HT
RAM: 8 GB DDR3 Registered 1333
HDD: 500GB SATA II

Node#2

CPU: Single Processor Quad Core Xeon 1270 V2 - 3.50GHz (Ivy Bridge) - 1 x 8MB cache w/HT
RAM: 4 GB DDR3 1333
HDD: 1.00TB SATA II

Node#3

CPU: Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz (4-Cores)
RAM: 32G
HDD: 2T

UPDATE

Actually, there are around 2.4M records (24 fields each) in the table targeted by the INSERT/UPDATE statements (6 of the fields are indexed).
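A few hedged tuning candidates, not a definitive fix; values must be sized per node, and note that in Galera the slowest node throttles the whole cluster via flow control, so the 4 GB machine is the one to watch:

[mysqld]
innodb_buffer_pool_size = 2G        # biggest single win if the working set fits
innodb_log_file_size = 256M
innodb_flush_log_at_trx_commit = 2  # relaxes per-commit fsync; the cluster still
                                    # survives the loss of a single node
wsrep_slave_threads = 4             # apply replicated writesets in parallel

Batching the ~500 statements into fewer multi-row transactions also helps, since every Galera commit pays one cluster-wide certification round trip.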

How to design a table where each row has 5K boolean attributes?

Posted: 12 Sep 2013 10:10 AM PDT

I have about 2M rows and each row looks like the following.

244 true false ... true

-> One integer column (V) and about 5K boolean columns (B1, B2, ..., B5K) associated with the integer.

Due to the limit on the maximum number of columns that I can have in a row, I have separated the boolean columns (attributes) into a separate table.

Table V:

idx_V value_V
--------------
1     244
...

Table B:

idx_V idx_B value_B
--------------------
1     1     true
1     2     false
...
1     5K    true
...

This design works all right when I try to find V's that match one boolean column. For example, finding V's where the 2nd boolean attribute is true:

select value_V
  from V_Table as VT
  join B_Table as BT on BT.idx_V = VT.idx_V
 where BT.idx_B = 2
   and BT.value_B = true

But the query becomes awful when I have to find V's that match multiple boolean columns, sometimes even all 5K of them, like finding V's with B1=true, B2=false, B3=true, ..., and B5K=false.

My primary use of the tables would be the following 2:

  1. Find V's whose x1th, x2th, and xnth boolean columns are false/true (n can be anything between 1 and 5K)
  2. Sublists:
    • Find the sequence of the boolean columns for a specific V: T F T T F F ...
    • Find other V's that match the sequence found in 2-A

I'm thinking about constructing a varchar[5K] field to store the boolean sequence for 2, but it seems like there's just too much wasted space, since each boolean requires just 1 bit but I'd be allocating a byte.

What would be the best way to go about this?
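One hedged idea, assuming the platform has a fixed-length bit-string type (PostgreSQL syntax shown purely as an example of such an environment): 5K booleans pack into roughly 625 bytes per row, single-bit tests stay cheap, and the exact-sequence lookup in use case 2 collapses to an indexable equality:

CREATE TABLE v (
    idx_v   integer PRIMARY KEY,
    value_v integer NOT NULL,
    flags   bit(5000) NOT NULL
);

-- Use case 1: "B2 is true" (substring positions are 1-based):
SELECT value_v FROM v WHERE substring(flags from 2 for 1) = B'1';

-- Use case 2: rows whose whole boolean sequence matches row 1's:
SELECT idx_v FROM v WHERE flags = (SELECT flags FROM v WHERE idx_v = 1);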

Sproc - Adding columns with matching ID column [on hold]

Posted: 12 Sep 2013 01:38 PM PDT

I currently have a sproc that is used for reporting on a table similar to the one below:

ID   Weight   QTY   OtherData
1    20       12    1234
1    44       4     1234
2    12       1     74747
2    88       7     74747
3    18       8     222222
4    22       9     88888

What I am trying to achieve: when there are multiple instances of an ID (for example ID 1), the Weight and QTY fields need to be added together and the rows merged into one. The actual schema is far more complex than this, but luckily the only varying numbers should be Weight and QTY.

After many hours of googling I gave up and came to ask here. I am not looking for the exact code needed, just an example and a nudge in the right direction. Anticipated result:

ID   Weight   QTY   OtherDATA
1    64       16    1234
2    100      8     74747
3    18       8     222222
4    22       9     88888
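As a nudge rather than the exact production code: aggregate functions with GROUP BY do precisely this; SUM the additive columns and carry OtherData along with MAX (safe here because it is identical within an ID). The table name is hypothetical:

SELECT ID,
       SUM(Weight)    AS Weight,
       SUM(QTY)       AS QTY,
       MAX(OtherData) AS OtherData
FROM   dbo.Shipments
GROUP  BY ID;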

Why does MySQL ignore the index even on force for this order by?

Posted: 12 Sep 2013 03:32 PM PDT

I run an EXPLAIN:

mysql> explain select last_name from employees order by last_name;
+----+-------------+-----------+------+---------------+------+---------+------+-------+----------------+
| id | select_type | table     | type | possible_keys | key  | key_len | ref  | rows  | Extra          |
+----+-------------+-----------+------+---------------+------+---------+------+-------+----------------+
|  1 | SIMPLE      | employees | ALL  | NULL          | NULL | NULL    | NULL | 10031 | Using filesort |
+----+-------------+-----------+------+---------------+------+---------+------+-------+----------------+
1 row in set (0.00 sec)

The indexes in my table:

mysql> show index from employees;
+-----------+------------+---------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table     | Non_unique | Key_name      | Seq_in_index | Column_name   | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-----------+------------+---------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| employees |          0 | PRIMARY       |            1 | subsidiary_id | A         |           6 |     NULL | NULL   |      | BTREE      |         |               |
| employees |          0 | PRIMARY       |            2 | employee_id   | A         |       10031 |     NULL | NULL   |      | BTREE      |         |               |
| employees |          1 | idx_last_name |            1 | last_name     | A         |       10031 |      700 | NULL   |      | BTREE      |         |               |
| employees |          1 | date_of_birth |            1 | date_of_birth | A         |       10031 |     NULL | NULL   | YES  | BTREE      |         |               |
| employees |          1 | date_of_birth |            2 | subsidiary_id | A         |       10031 |     NULL | NULL   |      | BTREE      |         |               |
+-----------+------------+---------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
5 rows in set (0.02 sec)

There is an index on last_name but the optimizer does not use it.
So I do:

mysql> explain select last_name from employees force index(idx_last_name) order by last_name;
+----+-------------+-----------+------+---------------+------+---------+------+-------+----------------+
| id | select_type | table     | type | possible_keys | key  | key_len | ref  | rows  | Extra          |
+----+-------------+-----------+------+---------------+------+---------+------+-------+----------------+
|  1 | SIMPLE      | employees | ALL  | NULL          | NULL | NULL    | NULL | 10031 | Using filesort |
+----+-------------+-----------+------+---------------+------+---------+------+-------+----------------+
1 row in set (0.00 sec)

But still the index is not used! What am I doing wrong here?
Does it have to do with the fact that the index is NON_UNIQUE? By the way, last_name is VARCHAR(1000).

Update requested by @RolandoMySQLDBA

mysql> SELECT COUNT(DISTINCT last_name) DistinctCount FROM employees;
+---------------+
| DistinctCount |
+---------------+
|         10000 |
+---------------+
1 row in set (0.05 sec)

mysql> SELECT COUNT(1) FROM (SELECT COUNT(1) Count500,last_name FROM employees GROUP BY last_name HAVING COUNT(1) > 500) A;
+----------+
| COUNT(1) |
+----------+
|        0 |
+----------+
1 row in set (0.15 sec)
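A plausible explanation, consistent with the SHOW INDEX output above: idx_last_name is a prefix index (Sub_part = 700, because a full VARCHAR(1000) key exceeds the engine's key-length limit), and MySQL cannot use a column-prefix index to cover a query or satisfy ORDER BY, since the index does not hold the full values; FORCE INDEX cannot override that. A hedged way to test the hypothesis:

-- Add a column short enough to index in full, then compare plans:
ALTER TABLE employees ADD COLUMN last_name_short VARCHAR(255);
UPDATE employees SET last_name_short = LEFT(last_name, 255);
ALTER TABLE employees ADD INDEX idx_last_name_short (last_name_short);

EXPLAIN SELECT last_name_short FROM employees ORDER BY last_name_short;
-- expected: type=index, key=idx_last_name_short, and no "Using filesort"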

Import from incremental backups to a new host in Oracle 11g

Posted: 12 Sep 2013 02:20 PM PDT

I am using Oracle 11g. I would like to know whether it is possible to import from incremental level 0 and level 1 backups to a new host using RMAN. If yes, how can I do that?

For level 1 backups I am using the differential method.
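Yes, it is possible. A hedged RMAN sketch, assuming the level 0 and level 1 backup pieces (plus a controlfile autobackup) have been copied to the new host and an instance with a matching init.ora has been started NOMOUNT; RESTORE lays down the level 0 image and RECOVER applies the differential level 1 on top:

RMAN> SET DBID 123456789;                 # hypothetical DBID of the source database
RMAN> RESTORE CONTROLFILE FROM '/backup/c-123456789-20130912-00';
RMAN> ALTER DATABASE MOUNT;
RMAN> CATALOG START WITH '/backup/';      # register the copied backup pieces
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;                   # applies level 1, then any archived logs
RMAN> ALTER DATABASE OPEN RESETLOGS;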

Create table group sub-folders in Management Studio

Posted: 12 Sep 2013 04:04 PM PDT

When I was exploring the master database in SSMS I noticed that under the Tables folder there is another folder called System Tables that houses a bunch of tables. Is it possible for us to create a structure akin to System Tables within our own database?

I am looking to organize tables and stored procedures into project specific folders. Under the new setup - when I am referring to my table object I would have to use the following syntax (I am guessing here):

[dbName].[projectName].[dbo].[tableName]   

Also, apart from clearing up the clutter, does anybody foresee any performance improvement/degradation because of this reorganization? I use Microsoft SQL Server 2008 R2.
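SSMS has no user-defined sub-folders under Tables (System Tables is built in), but schemas give nearly the same grouping, and SSMS displays the schema prefix on every object. Note that the guessed four-part name would not work; the schema replaces dbo rather than adding a level. A sketch:

CREATE SCHEMA projectName AUTHORIZATION dbo;
GO

-- Move an existing table out of dbo into the project schema:
ALTER SCHEMA projectName TRANSFER dbo.tableName;
GO

-- Referenced afterwards with the ordinary three-part name:
SELECT * FROM [dbName].[projectName].[tableName];

Performance is unaffected for practical purposes; schemas change name resolution and permission scoping, not storage.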

Oracle schema migration to new database with zero downtime

Posted: 12 Sep 2013 10:20 AM PDT

I have a problem: I have two production schemas in one database, each serving a different application.

I need to migrate (move) one schema to an entirely new database that will be dedicated to that application, with zero downtime. So EXPORT/IMPORT cannot be used.

I have a license even for Active Data Guard. But if I set up Data Guard from the current database to the new DB, should I switch over, fail over, or do something else, so that the current primary continues in its primary role and only the new DB changes its role from standby to a fully operational primary? Then we simply point the application at the new DB and start it. Thanks for your help.
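For the role question specifically: a switchover would swap roles (the current database would become the standby), which is not what is described. Keeping the original as primary while turning the copy into an independent primary means activating the standby, which permanently breaks the Data Guard relationship; strictly zero downtime is not achievable, because writes to the migrated schema must pause while the standby catches up. A hedged sketch, run on the new (standby) database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
SQL> ALTER DATABASE OPEN;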

pg_dump from 9.2 to 8.4

Posted: 12 Sep 2013 05:11 PM PDT

I'm trying to restore a db dump from PostgreSQL 9.2 to PostgreSQL 8.4 (on separate machines, both running Windows 7).

I followed the instructions in this answer: Use pg_restore to restore from a newer version of PostgreSQL.

However, the screen keeps writing

invalid command \N
invalid command \N
invalid command \N
invalid command \N
invalid command \N
invalid command \N

How can I resolve this issue?
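The wall of invalid command \N messages usually means psql already failed on an earlier statement in the dump (often 9.2-only syntax the 8.4 server rejects) and is then misreading the COPY data rows as commands; the real error is the first one above the \N flood. A common hedged workaround when moving to an older server is to dump with portable INSERT statements instead of COPY:

pg_dump --no-owner --inserts -f dump.sql mydb
# (--column-inserts is slower still, but even more explicit)

psql -d mydb -f dump.sql    # on the 8.4 server; stop at the first real error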

Designing Simple Schema for Disaggregation of Demand Forecast

Posted: 12 Sep 2013 03:20 PM PDT

I am doing a simple database design task as a training exercise where I have to come up with a basic schema design for the following case:

I have a parent-child hierarchy of products (example, Raw Material > Work in Progress > End Product).

  • Orders are placed at each level.
  • Number of orders shall be viewable in weekly buckets for the next 6 months.
  • Demand forecast can be done for each product level.
  • Demand forecast for any week within next 6 months can be done today.
  • Demand forecast is done for weekly buckets, for the next 6 months.

Demand forecasting is usually done at a higher level in the hierarchy (the Raw Material or Work in Progress level). It then has to be disaggregated to a lower level (End Product).

There are 2 ways in which demand forecast can be disaggregated from a higher level to lower level:

  1. The user specifies a percentage distribution for the end products. Say there's a forecast of 1000 for Work in Progress, and the user says: I want 40% for End Product 1 and 60% for End Product 2 in bucket 10. Then for the 10th week (Sunday to Saturday) from now, the forecast value for End Product 1 would be 400 and for End Product 2 would be 600.
  2. The user says: just disaggregate according to the orders placed against the end products in bucket 5. If the orders in bucket 5 for End Products 1 and 2 are 200 and 800 respectively, then the forecast value for EP1 would be ((200/1000) * 100)% and for EP2 would be ((800/1000) * 100)% of the forecast for Work in Progress.

Forecast shall be viewable in weekly buckets for the next 6 months and the ideal format should be:

product name | bucket number | week start date | week end date | forecast value | created_on  

PRODUCT_HIERARCHY table could look like this:

id  |   name                |   parent_id
__________________________________________
1   |   raw material        |   (null)
2   |   work in progress    |   1
3   |   end product 1       |   2
4   |   end product 2       |   2

ORDERS table might look like this:

id | prod_id | order_date | delivery_date | delivered_date  

where,

prod_id is a foreign key that references the id of the PRODUCT_HIERARCHY table.

How should I store the forecast? What would be a good basic schema for such a requirement?
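As a starting point, a minimal FORECAST table matching the requested output format might look like this (column types are assumptions); storing one row per product, bucket, and creation date keeps both the percentage-based and order-based disaggregations as plain INSERT ... SELECT statements:

CREATE TABLE forecast (
  id             NUMBER PRIMARY KEY,
  prod_id        NUMBER NOT NULL REFERENCES product_hierarchy (id),
  bucket_no      NUMBER(2) NOT NULL,            -- 1..26
  week_start     DATE NOT NULL,                 -- Sunday
  week_end       DATE NOT NULL,                 -- Saturday
  forecast_value NUMBER NOT NULL,
  created_on     DATE DEFAULT SYSDATE NOT NULL
);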


My idea to select orders for 26 weekly buckets is:

SELECT
    COUNT(*) TOTAL_ORDERS,
    WIDTH_BUCKET(
        delivery_date,
        SYSDATE,
        ADD_MONTHS(sysdate, 6),
        TO_NUMBER( TO_CHAR(SYSDATE,'DD-MON-YYYY') - TO_CHAR(ADD_MONTHS(sysdate, 6),'DD-MON-YYYY') ) / 7
    ) BUCKET_NO
FROM
    orders_table
WHERE
    delivery_date BETWEEN SYSDATE AND ADD_MONTHS(sysdate, 6);

But this will give weekly buckets starting from today irrespective of the day. How can I convert them to Sunday to Saturday weeks in Oracle?
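For the Sunday-to-Saturday buckets: TRUNC(d, 'IW') truncates to the ISO week start (Monday) independent of NLS settings, so shifting the date forward one day before truncating and back one day afterwards yields the preceding Sunday. A hedged rework of the bucket query:

SELECT TRUNC(delivery_date + 1, 'IW') - 1 AS week_start_sunday,
       TRUNC(delivery_date + 1, 'IW') + 5 AS week_end_saturday,
       COUNT(*)                           AS total_orders
FROM   orders_table
WHERE  delivery_date BETWEEN SYSDATE AND ADD_MONTHS(SYSDATE, 6)
GROUP  BY TRUNC(delivery_date + 1, 'IW');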

Please help me design this database structure.

(will be using Oracle 11g)
