Saturday, March 2, 2013

[how to] Only one NULL for one part of primary key

Only one NULL for one part of primary key

Posted: 02 Mar 2013 05:51 PM PST

So, I have two date columns: "start date" and "finish date".
I want to make sure that there can be only one NULL value in the "finish date" column (a NULL marks the current state) per value of one part of the primary key (which is composite in my case).
Is there a way to do that?
I'm using SQL Server 2008.

EDIT:
I'm sorry for making no sense.
There should be only one NULL for one part of the primary key.
Here are the columns:


PeriodID | PersonID | StartDate | FinishDate
---------+----------+-----------+------------------
1        | 1        | 1/1/10    | 10/1/10
2        | 1        | 10/1/10   | NULL
3        | 1        | somedate  | MUST NOT BE NULL

And yes,"PersonID" is a foreign key.

MySQL design problem

Posted: 02 Mar 2013 08:44 PM PST

So I have nearly no experience in DB Design, so forgive me if I'm asking something super nooby and ridiculous :)

Basically my issue is this: I have an item I need to create, and this item will be constructed from 3-4 types of objects, with multiple object instances. Basically as per below:
Object 1: only 1 instance per item
Object 2: extends Object 1 and shares a foreign key, may have multiple instances, one instance mandatory
Object 3: extends Object 2 and shares a foreign key; may have multiple instances, one instance mandatory. There are 3 types of Object 3, which depend on a field from Object 2 (I haven't yet figured out how these need to link without code).
Object 4: extends Object 3 and shares a foreign key, may have multiple instances, not mandatory, so may be omitted.

My question is this: is there a way to dynamically generate Object 3 through MySQL? Like, if field #3 shows Object 3.1, call that table; if it shows 3.2, call that one instead. Similar to a constructor.

Also, is there a way for me to map related objects? For example, since everything is dependent on Object 1, I'd like to be able to call Object 1 and have it pull all related instances of Objects 2, 3, and 4 related to that specific object. I was thinking of creating a separate table to hold the primary keys of related objects, but I'm thinking there is probably a better way of doing that; see the sketch below.
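
A minimal sketch of one conventional way to model this chain with foreign keys (all names here are hypothetical), so that everything belonging to one Object 1 row can be pulled with plain JOINs rather than a separate mapping table:

-- Each level references its parent; InnoDB enforces the foreign keys.
CREATE TABLE object1 (
    id INT AUTO_INCREMENT PRIMARY KEY
) ENGINE=InnoDB;

CREATE TABLE object2 (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    object1_id INT NOT NULL,
    obj3_type  TINYINT NOT NULL,  -- discriminator for the Object 3 variants
    FOREIGN KEY (object1_id) REFERENCES object1 (id)
) ENGINE=InnoDB;

CREATE TABLE object3 (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    object2_id INT NOT NULL,
    FOREIGN KEY (object2_id) REFERENCES object2 (id)
) ENGINE=InnoDB;

-- Everything related to the Object 1 row with id = 1, in one query:
SELECT *
FROM object1 o1
JOIN object2 o2 ON o2.object1_id = o1.id
JOIN object3 o3 ON o3.object2_id = o2.id
WHERE o1.id = 1;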

Also, I've read that I shouldn't use stored procedures and should instead do everything in code. Is that correct? I would think that for my model it would be more convenient and cleaner to just call Object 1 and have MySQL do all the joins internally and return a single completed object.

Thank you very much for your help!!

Possible INDEX on a VARCHAR field in MySQL

Posted: 02 Mar 2013 06:57 PM PST

I am working in a MySQL database, with a table like this:

+--------------+
|  table_name  |
+--------------+
|    myField   |
+--------------+

...and I need to make a lot of queries like this (with 5-10 strings in the list):

SELECT myField FROM table_name
WHERE myField IN ('something', 'other stuff', 'some other a bit longer'...)

There will be around 24,000,000 unique rows.

1) Should I use a FULLTEXT or an INDEX key for my VARCHAR(150)?
2) If I increase the length from 150 to 220 or 250 characters, would it make a big difference? (Is there any way to calculate it?)
3) As I said, the values are going to be unique, so myField could be a PRIMARY KEY. Isn't it unusual to add a PRIMARY KEY to a field which is already a VARCHAR INDEX/FULLTEXT?
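
For what it's worth, a minimal sketch of the plain-index option: FULLTEXT indexes only serve word-oriented MATCH ... AGAINST searches, while exact-match IN (...) lists use an ordinary BTREE index, such as the one backing a PRIMARY KEY:

-- Making the unique column the primary key gives IN (...) an index to use.
ALTER TABLE table_name ADD PRIMARY KEY (myField);

SELECT myField FROM table_name
WHERE myField IN ('something', 'other stuff');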

How to improve INSERT speed in large table that has unique constraint

Posted: 02 Mar 2013 01:36 PM PST

I have a simple MyISAM table:

explain entities;
+------------+-------------+------+-----+-------------------+----------------+
| Field      | Type        | Null | Key | Default           | Extra          |
+------------+-------------+------+-----+-------------------+----------------+
| id         | int(11)     | NO   | PRI | NULL              | auto_increment |
| name       | varchar(50) | YES  | UNI | NULL              |                |
| created_at | timestamp   | NO   |     | CURRENT_TIMESTAMP |                |
+------------+-------------+------+-----+-------------------+----------------+

that is expected to have 10s of millions of rows (or even more) eventually. I noticed that as the number of rows in the table increases, insert statements start to take longer. This is expected, as the documentation states that "the size of the table slows down the insertion of indexes by log N, assuming B-tree indexes". I read in another question that LOAD INDEX INTO CACHE can be used to improve insert speed, but I found this to have a negligible effect. Are there any other pointers on how to improve insertion speed, other than the ones listed here? For example, would partitioning the table help with insertion speed? What guidelines can I use for determining if my insertion speeds are already at a near optimum?

System info: MySQL 5.1.67-0ubuntu0.10.04.1, running on a Linode VM with 512 MB RAM and a quad-core processor.
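
A minimal sketch of a common MyISAM bulk-load pattern; note that DISABLE KEYS defers rebuilding of non-unique indexes only, so the UNIQUE index on name is still checked row by row, which makes batching rows per statement the other main lever:

ALTER TABLE entities DISABLE KEYS;

-- Multi-row inserts amortize per-statement overhead.
INSERT INTO entities (name) VALUES ('alpha'), ('beta'), ('gamma');

ALTER TABLE entities ENABLE KEYS;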

Conditional Import

Posted: 02 Mar 2013 12:15 PM PST

I have 300+ flat files. I am importing/loading them using

LOAD DATA LOCAL INFILE 'D:\MxxT\tot2098.TXT' INTO TABLE Twenty.F98 FIELDS TERMINATED BY '' lines terminated by '\r\n' ;

on my MySQL server database.

Each of the 20 files makes 28 tables. I have a flag variable, MachineType, in all the flat files. After importing the 20 files, I have to run:

SET SQL_SAFE_UPDATES=0;

DELETE FROM Twenty.F98 WHERE MachineType <>1;

SELECT DISTINCT MachineType FROM Twenty.F98;

  1. Is there any way I can place a condition on the data when loading a file? (See the sketch after this list.)

  2. In MS Access I have used Schema.ini, which loads the files in a second or less. Is there such a mechanism available for MySQL?

  3. I have to create a dashboard after loading all the files. I am thinking of using Excel as the front end. Is there anything better than that?
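
For question 1, LOAD DATA itself cannot filter rows, so a common workaround is a staging table; a minimal sketch (Twenty.F98_staging is a hypothetical copy of F98's structure, and the file options mirror the original statement):

LOAD DATA LOCAL INFILE 'D:\MxxT\tot2098.TXT'
INTO TABLE Twenty.F98_staging
FIELDS TERMINATED BY '' LINES TERMINATED BY '\r\n';

-- Copy only the wanted rows, then clear the staging table.
INSERT INTO Twenty.F98
SELECT * FROM Twenty.F98_staging WHERE MachineType = 1;

TRUNCATE TABLE Twenty.F98_staging;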

Thanks !!

How to view the current settings of Autovacuum in Postgres?

Posted: 02 Mar 2013 12:56 PM PST

I've found a million examples of how to set Autovacuum's configuration settings, but can't seem to find how to list the current configuration.

Postgres 9.1 is the version I'm most interested in.
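
A minimal sketch of one way to list them in PostgreSQL 9.1:

-- Global autovacuum-related settings:
SELECT name, setting, source
FROM pg_settings
WHERE name LIKE '%autovacuum%';

-- Per-table overrides, if any, show up in reloptions:
SELECT relname, reloptions
FROM pg_class
WHERE reloptions IS NOT NULL;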

Why is optimistic locking faster than pessimistic locking?

Posted: 02 Mar 2013 03:54 PM PST

Both forms of locking cause a process to wait for a correct copy of the record if it's currently in use by another process. With pessimistic locking, the lock mechanism comes from the DB itself (a native lock object), whereas with optimistic locking, the lock mechanism is some form of row versioning, like a timestamp, used to check whether a record is stale.

But both cause a second process to hang. So I ask: why is optimistic locking generally considered faster than, or superior to, pessimistic locking? And are there use cases where pessimistic locking is preferred over optimistic? Thanks in advance!

How does optimistic locking actually enforce re-read/update?

Posted: 02 Mar 2013 01:40 PM PST

My understanding of optimistic locking is that it uses a timestamp on each record in a table to determine the "version" of the record, so that when the record is accessed by multiple processes at the same time, each has a reference to the record's version.

Then, when an update is performed, the timestamp is updated. Before an update is committed, it reads the timestamp on the record a second time. If the timestamp (version) that it has is no longer the timestamp on the record (because it's been updated since the first read), then the process must re-read the entire record and apply the update to the new version of it.

So, if anything I have stated is not correct, please begin by making clarifications for me. But, assuming I'm more or less correct here...

How does this actually manifest itself in an RDBMS? Is this second read/verification enforced in the application logic (the SQL itself), or is it a tuning parameter/configuration that the DBA sets?

I guess I'm wondering where the logic comes from to read the timestamp and perform a second update if the timestamp is stale. So I ask: does the application developer enforce optimistic locking, or is it enforced by the DBA? Either way, how? Thanks in advance!
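
For illustration, a minimal sketch of application-level optimistic locking (the account table and its version column are hypothetical):

-- The UPDATE succeeds only if the version read earlier is still current.
UPDATE account
SET    balance = 120,
       version = version + 1
WHERE  id = 42
  AND  version = 7;  -- version value from the original SELECT

-- 0 rows affected means another writer got there first;
-- the application must re-read the row and retry.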

Implementation of stack in MYSQL

Posted: 02 Mar 2013 07:23 PM PST

I have an abstract idea of implementing a stack in MySQL. I tried a lot of googling but couldn't reach any logical conclusion.

The stack must be a collection of tables, with the stack top pointing at a particular table. We may have various such stacks in our DB. The stack length may increase if, for example, the field from the table is full. Likewise, the stack length may decrease in the opposite manner!

Could you please give me an implementation idea? Is it really possible in the first place? Hope I'm clear with my question :-P

Please help! Thanks in advance!
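
For what it's worth, a stack is usually modeled as rows rather than as a collection of tables; a minimal MySQL sketch (all names hypothetical):

CREATE TABLE stack (
    id      INT AUTO_INCREMENT PRIMARY KEY,
    payload VARCHAR(255) NOT NULL
);

INSERT INTO stack (payload) VALUES ('first'), ('second');  -- push

SELECT id, payload FROM stack ORDER BY id DESC LIMIT 1;    -- peek at the top
DELETE FROM stack ORDER BY id DESC LIMIT 1;                -- pop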

Selecting with varbinary(max) criteria (in the where clause)

Posted: 02 Mar 2013 09:58 AM PST

Basic info

  • Database: SQL Server Express 2008 R2
  • Client: SQL Server Management Studio

Background (skip if not interested):

A project I'm maintaining uses an ORM, which apparently stored my enum values (which inherit from Byte) as binary-serialized .NET objects in a varbinary(max) field. I only found out this was happening after a new requirement emerged dictating that my code run under medium trust. Since the .NET binary formatter needs full trust to be called, it started crashing on the enums.

To clean the mess up I need to create migration scripts that will convert these varbinary(max) values back to integer values. There are only a handful of distinct values, so it shouldn't be a big problem (I thought).

The problem:

I am able to get string representations of the blobs when selecting:

SELECT BinValue FROM MyTable where Type = 'Object';  

It returns a string '0x...' (an array of hexadecimal values).

But when I try to select on the column using copy-and-paste making sure I have the exact binary equivalent:

SELECT ItemId FROM MyTable WHERE Value=convert(varbinary(max), '0x...')  

it does not return any records.

So is it possible to solve this using a client (such as Management Studio)?

If so, what would be the correct syntax?
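
A likely culprit is that CONVERT without a style argument treats the string as raw characters; in SQL Server 2008 and later, style 1 makes it parse a '0x...' string as a hex literal. A minimal sketch (the hex value below is a placeholder, not your actual data):

SELECT ItemId
FROM   MyTable
WHERE  Value = CONVERT(varbinary(max), '0x0001020304', 1);

-- Equivalent: paste the hex directly as a binary literal (no quotes):
SELECT ItemId
FROM   MyTable
WHERE  Value = 0x0001020304;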

UML class diagramm for database design?

Posted: 02 Mar 2013 08:18 AM PST

I used to make database designs using a French method called Merise, which mainly uses two diagrams to create the database (MCD and MLD). Then I heard that UML is also used for databases, not only for coding. I thought it had some special diagram for database design, but I noticed that some people use the class diagram for database design. I find it a poor fit for this because:

  • it uses many concepts that don't exist in an RDBMS (inheritance, methods, aggregation, composition, access modifiers)
  • UML is a modeling language, not a methodology, so it doesn't specify rules for how to pass from the diagram to the relational schema.

The question is: is UML a good choice for database design?

When to use identity increment?

Posted: 02 Mar 2013 06:57 AM PST

Many times I face these three choices to work with IDs:

  • Using an auto-increment identity column
  • Devising a new encoding (e.g., for students: the first 4 positions are the year the student joined the university, the 5th position is 0 for females and 1 for males, and the last 4 positions are a sequential number)
  • Not adding any extra column and using only unique information that already exists (e.g., the name, provided we are sure it's unique and no two entries can share it)

The question is: which choice do I make, and according to what criteria?
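
For illustration, a minimal sketch (SQL Server syntax; all names hypothetical) of a common compromise: a surrogate auto-increment key plus a UNIQUE constraint on the business identifier, so the encoded code stays enforced but never leaks into foreign keys:

CREATE TABLE Student (
    StudentID   INT IDENTITY(1,1) PRIMARY KEY,  -- surrogate key
    StudentCode CHAR(9) NOT NULL UNIQUE,        -- e.g. year + sex digit + sequence
    FullName    NVARCHAR(100) NOT NULL
);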

Unable to start Oracle Database Server : Get an error as I try to start it

Posted: 02 Mar 2013 05:13 AM PST

I just installed Oracle Database Express Edition 11g Release 2 for Windows. It created a shortcut icon on the desktop:

[screenshot: desktop shortcut icon]

but when I click this icon I see this dialog box:

[screenshot: error dialog]

What is it? How do I start my Oracle Database Server?

check constraint that has dynamic list

Posted: 02 Mar 2013 04:19 AM PST

I need to create a check constraint. The problem I face is that the list of values accepted by the constraint will change in the future (e.g., now "red, green and blue"; next month "red, green, blue and white"). How do I do this?
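
A minimal sketch of the usual alternative: a foreign key to a lookup table behaves like a CHECK constraint whose value list can change just by inserting or deleting rows (names hypothetical; in MySQL both tables must be InnoDB for the FK to be enforced):

CREATE TABLE allowed_color (
    color VARCHAR(20) PRIMARY KEY
);

INSERT INTO allowed_color VALUES ('red'), ('green'), ('blue');

CREATE TABLE item (
    id    INT PRIMARY KEY,
    color VARCHAR(20) NOT NULL,
    FOREIGN KEY (color) REFERENCES allowed_color (color)
);

-- Next month: just add the new value.
INSERT INTO allowed_color VALUES ('white');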

Scheduling a backup in SQL Server 2005

Posted: 02 Mar 2013 07:18 AM PST

I am trying to schedule a backup in SQL Server 2005 via the management option. I am unable to find the database name while scheduling a backup with the full option; the database name is not there in the selection area. What can be the reason?

Please help me resolve this problem.

Can I run a Report Server and a Mirror server from a working Log Shipping secondary instance?

Posted: 02 Mar 2013 07:35 AM PST

Our setup:

  • City A - Primary server for logshipping
  • City B - Secondary server for logshipping

Now there is a lot of data being updated from A to B.

And the server in City B is underutilized, and it is a very powerful machine.

Can I install a Reporting server and a mirroring server on the machine which hosts the secondary log shipping server, and have the Reporting and mirroring servers get their data from the secondary log shipping server?

We cannot have the data come across three times (once each for logs, reporting, and mirroring), and the secondary server is updated regularly enough for our requirements.

But the secondary log shipping server is obviously always in Standby/Read-only mode. Is this all possible?

Which one is the fastest way of moving a database from a local to a live server in SQL Server?

Posted: 02 Mar 2013 10:39 AM PST

I am using SQL Server 2008 and I have a big database, almost 500 MB of data. Now I want to upload or deploy this DB to the live server. Currently I generate a script, upload it to the live server, and deploy the database from the command line.

But this takes almost 2 hours and is very cumbersome too. There is one more problem: I am not able to use the import/export database option because it loses constraints like primary keys, and I also get errors while inserting records with that process.

Can you tell me how I can do all this with a proper data transfer?
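
For scale, a minimal backup/restore sketch, which usually beats scripting for a 500 MB database (paths and logical file names below are hypothetical):

BACKUP DATABASE MyDb TO DISK = 'C:\backups\MyDb.bak';

-- Copy the .bak file to the live server, then:
RESTORE DATABASE MyDb
FROM DISK = 'D:\incoming\MyDb.bak'
WITH MOVE 'MyDb'     TO 'D:\data\MyDb.mdf',
     MOVE 'MyDb_log' TO 'D:\data\MyDb_log.ldf';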

Backup failure in SQL Server 2005

Posted: 02 Mar 2013 01:39 AM PST

I'm unable to perform a backup in SQL Server 2005 installed on a local Windows 7 machine. I was initially able to do so, but now, every time I try, I get the following message:

Backup failed for Server 'EBENEZERAKAGLO\SQLEXPRESS'. (Microsoft.SqlServer.Smo)

System.Data.SqlClient.SqlError: Cannot open backup device 'C:\Program Files (x86)\Microsoft SQL Server\MSSQL.3\MSSQL\Backup\Naass.bak'. Operating system error 5(Access is denied.). (Microsoft.SqlServer.Smo)

What are the advantages of a database design with single table for MtM relationships?

Posted: 02 Mar 2013 07:07 AM PST

The database that backs our software product has a table design like the following:

Positions
ID  Name   Parts
1   One    12345
2   Two    12346
3   Three  12347

Collections
ID     TableID  Collection
12345  1;1      1;2
12346  1;1      3;4
12347  1;2;2    5;1;2

Parts
ID  Name
1   TestOne
2   TestTwo
3   TestThree
4   TestFour
5   TestFive

SubParts
1   SubPartOne
2   SubPartOne

From the above example, each Position has a collection of Parts, but these are mapped generically (without foreign key constraints) into the Collections table. The Collections table keeps track of all relationships between all objects, not just between the example tables shown above, and will be used any time a collection is used.

That means that if I want to get the Position with ID 1 and all of its parts, I have to do three queries:

SELECT * FROM Positions WHERE ID = 1
SELECT * FROM Collections WHERE ID = 12345
-- split the Collection string by the semicolons
SELECT * FROM Parts WHERE ID = 1 OR ID = 2
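
For contrast, a minimal sketch of a conventional junction table (the PositionParts name is hypothetical) that returns the same data in a single join:

CREATE TABLE PositionParts (
    PositionID INT NOT NULL,
    PartID     INT NOT NULL,
    PRIMARY KEY (PositionID, PartID),
    FOREIGN KEY (PositionID) REFERENCES Positions (ID),
    FOREIGN KEY (PartID)     REFERENCES Parts (ID)
);

SELECT p.*
FROM   Parts p
JOIN   PositionParts pp ON pp.PartID = p.ID
WHERE  pp.PositionID = 1;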

The advantages of the current approach are:

  • A collection of something can actually be selected from more than one table. For example, if SubPart inherits from Part, and a position contains a list of Parts, this is handled OK.

The disadvantages of the current approach are:

  • Speed? We need to do many queries to load data.

  • The Collections table is the largest or second largest table in our database.

  • No support from ORM frameworks? We are looking at switching our persistence layer to EF or NHibernate, etc., but I don't believe our current design will be supported by any of these frameworks.

My questions are:

  • Are there advantages or disadvantages to this approach that I haven't listed?

  • Is this design familiar to people and if so, does it have a name?

  • If the design is familiar, is it supported by out-of-the-box ORM frameworks?

Global locking for multi-master Oracle GoldenGate replication

Posted: 02 Mar 2013 09:43 AM PST

This is a very complex scenario, but I figured a state-of-the-art challenge might interest some of the many high-end users of dba.se.

Problem

I'm working on an intercontinental data replication solution for a document production system, somewhat similar to a wiki, utilizing Oracle GoldenGate. The primary goals are to increase application performance and availability around the globe.

The solution must allow for simultaneous read/write access to the same pool of data from multiple locations, which means that we need some clever way of preventing or resolving conflicting updates without user interaction.

Focusing on collision prevention, we must allow an object (a document, an illustration, a set of metadata etc) to be locked globally, thus preventing multiple users from simultaneously editing the same object from different locations - ultimately causing a conflict.

Similarly, an object must remain locked until every user's connected database has received the updated data for that object, lest a user start editing an old object without the latest updates.

Background

The application is somewhat latency sensitive, making access to a central data center slow from remote locations. Like many content focused systems, the read/write ratio is in the line of 4 to 1, making it a good candidate for a distributed architecture. If well-managed, the latter wil also work towards ensuring availability during site or network outages.

I have used a somewhat unconventional multi-loop bi-directional replication topology. This keeps the complexity at a manageable level {2(n-1) ways}, adds resilience for site outages, and allows for fairly simple addition or removal of sites. The slight drawback is that it may take up to 30 seconds for a transaction to be replicated between the most remote sites via the central master database.

A more conventional design with direct replication between all sites would cut that time in half, but would also significantly increase the complexity of the configuration {n(n-1) ways}.

With five locations that would mean a 20-way replication as opposed to the 8-way replication in my design.

This illustration shows my current test environment across data centers in Europe, Asia and North America. The production environment is expected to have additional locations.

[illustration of replication topology]

All the databases are Oracle 11.2.0.3 with Oracle GoldenGate 11.2.1.

My thoughts so far

I've been thinking along the lines of doing locking by inserting a row into a "locking" table over a database link to the central database, while letting the unlock (update or delete of the previously mentioned row) be replicated along with the updated data.

On behalf of the user we must then check the availability of a lock in both the central and local database before acquiring the lock and opening the object for editing. When editing is completed, we must release the lock in the local database which will then replicate the changes and the release of the lock to all other locations via the central database.

However, queries over a high-latency database link can sometimes be very slow (tests show anywhere from 1.5 to 7 seconds for a single insert), and I'm not sure we can guarantee that the update or delete statement that removes a lock is the last statement to be replicated.

Calling a remote PL/SQL procedure to do the checking and locking will at least limit the operation to a single remote query, but seven seconds is still a very long time. Something like two seconds would be more acceptable. I'm hoping the database links can be optimized somehow.
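
A minimal sketch of that idea (the global_lock table, the central_db link name, and the bind variables are all hypothetical):

-- Acquire: insert into the central locking table over the database link;
-- with object_id as the primary key, a concurrent second attempt fails fast.
INSERT INTO global_lock@central_db (object_id, locked_by, locked_at)
VALUES (:object_id, :user_id, SYSTIMESTAMP);
COMMIT;

-- Release: delete locally, so GoldenGate replicates the unlock in the same
-- transaction stream as (and after) the data changes themselves.
DELETE FROM global_lock WHERE object_id = :object_id;
COMMIT;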

There may also be additional issues, like trying to delete or update a row in the local locking table before that row has been successfully replicated from the central database.

On the bright side, with this kind of solution it should be relatively simple to let the application enter a read-only state if communication with the central database is disrupted, or to redirect clients if a data center becomes unavailable.

Has anyone done anything similar? What might be the best way to approach this?

Like I said initially, this is a rather complex solution; feel free to ask about anything left unclear or left out.

Slow query performance

Posted: 02 Mar 2013 05:31 PM PST

My database contains 20,000 rows and the following query is slow. Please help me speed it up:

SELECT r.report_id,
       r.user_id,
       u.user_name,
       u.user_mail,
       d.department_name,
       r.report_comment,
       r.report_target_date,
       r.report_create_date,
       r.report_revised_date,
       r.report_root_id
FROM   report r
       JOIN user u
         ON u.user_id = r.user_id
       JOIN department d
         ON u.department_id = d.department_id
WHERE  ( u.user_name LIKE Concat('%', 'hieu', '%') )
       AND ( d.department_name LIKE Concat('%', 'BOM', '%') )
       AND r.report_id IN (SELECT r.report_id
                           FROM   report r
                           WHERE  r.user_id = 1
                           UNION ALL
                           SELECT r.report_id
                           FROM   report r
                                  JOIN user u
                                    ON u.user_id = r.user_id
                                  JOIN department d
                                    ON u.department_id = d.department_id
                                  JOIN authority a
                                    ON r.user_id = a.user_src_id
                           WHERE  a.user_dest_id = 1)
       AND ( r.report_id IN (SELECT r.report_id
                             FROM   report r
                             WHERE  r.report_comment LIKE
                                    Concat('%', 'this', '%'))
              OR r.report_id IN (SELECT rc.report_id
                                 FROM   report_content rc
                                 WHERE  rc.visited_company LIKE
                                        Concat('%', 'this', '%')
                                         OR rc.report_description LIKE
                                            Concat('%', 'this', '%')) )
ORDER  BY r.report_target_date DESC
LIMIT  0, 30

CREATE TABLE:

CREATE TABLE IF NOT EXISTS `report` (
  `report_id` int(11) NOT NULL AUTO_INCREMENT,
  `user_id` int(11) NOT NULL,
  `report_comment` text CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL,
  `report_target_date` datetime NOT NULL,
  `report_create_date` datetime NOT NULL,
  `report_revised_date` datetime DEFAULT NULL,
  `report_revised_id` int(11) DEFAULT NULL,
  `report_root_id` int(11) DEFAULT NULL,
  `enable` int(2) NOT NULL DEFAULT '1',
  PRIMARY KEY (`report_id`),
  UNIQUE KEY `unique_report` (`report_id`),
  KEY `support_report` (`report_id`,`report_target_date`,`report_create_date`,`report_revised_date`,`report_revised_id`,`report_root_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=18497 ;

CREATE TABLE IF NOT EXISTS `report_content` (
  `report_content_id` int(11) NOT NULL AUTO_INCREMENT,
  `report_id` int(11) NOT NULL,
  `timestamp` text COLLATE utf8_unicode_ci NOT NULL,
  `visited_company` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `report_description` text COLLATE utf8_unicode_ci NOT NULL,
  PRIMARY KEY (`report_content_id`),
  UNIQUE KEY `unique_report` (`report_content_id`,`report_id`),
  UNIQUE KEY `unique_report_content` (`report_content_id`,`report_id`),
  KEY `support_report_content` (`report_content_id`,`report_id`,`visited_company`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=278 ;

My EXPLAIN:

id | select   | table          | type   | possible_keys       | key    | key_len | ref    | rows  | extra
1  | PRIMARY  | r              | ALL    | (NULL)              | (NULL) | (NULL)  | (NULL) | 20246 | using where/filesort
1  | PRIMARY  | p              | eq_ref | PRIMARY,unique_user | NULL   | 4       | NULL   | 1     | using where
5  | SUBQUERY | report_content | all    | NULL                | NULL   | 4       | NULL   | 160   | Using where
6  | UNION    | report_content | all    | NULL                | NULL   | 4       | NULL   | 160   | Using where

Now I have changed the query to the following:

SELECT r.report_id,
       r.user_id,
       u.user_name,
       u.user_mail,
       d.department_name,
       r.report_comment,
       r.report_target_date,
       r.report_create_date,
       r.report_revised_date,
       r.report_root_id
FROM   report AS r FORCE INDEX (unique_report),
       `user` AS u FORCE INDEX (unique_user),
       `department` AS d FORCE INDEX (unique_department)
WHERE  (u.user_name LIKE CONCAT('%', 'hieu', '%'))
       AND (d.department_name LIKE CONCAT('%', 'bom', '%'))
       AND r.report_id IN (SELECT r.report_id
                           FROM   report r
                           WHERE  r.user_id = 1
                           UNION ALL
                           SELECT r.report_id
                           FROM   report r FORCE INDEX (unique_report)
                                  JOIN USER u FORCE INDEX (unique_user)
                                    ON u.user_id = r.user_id
                                  JOIN department d FORCE INDEX (unique_department)
                                    ON u.department_id = d.department_id
                                  JOIN authority a FORCE INDEX (unique_authority)
                                    ON r.user_id = a.user_src_id
                           WHERE  a.user_dest_id = 1)
       AND (r.report_id IN (SELECT r.report_id
                            FROM   report r
                            WHERE  r.report_comment LIKE CONCAT('%', 'when', '%'))
             OR EXISTS (SELECT rc.report_id
                        FROM   report_content rc
                        WHERE  rc.visited_company LIKE CONCAT('%', 'when', '%')
                                OR MATCH (`timestamp`, report_description)
                                   AGAINST ('+when+' WITH QUERY EXPANSION)
                                   AND r.report_id = rc.report_id))
ORDER  BY r.report_target_date DESC
LIMIT  0, 30

It is faster than before: 1.97 s.

Can somebody help me speed it up further?
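
One more lever, sketched under the assumption that the remaining leading-wildcard LIKEs are the bottleneck: a BTREE index cannot serve LIKE '%...%', but a FULLTEXT index can serve word searches (InnoDB supports FULLTEXT from MySQL 5.6; earlier versions need MyISAM):

ALTER TABLE report_content
    ADD FULLTEXT INDEX ft_report_content (visited_company, report_description);

SELECT report_id
FROM   report_content
WHERE  MATCH (visited_company, report_description) AGAINST ('this');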

"Waiting for initial communication packet" error 0 in MySQL

Posted: 02 Mar 2013 02:01 AM PST

I am using MySQL server 5.1.51 on a Linux machine.

The application was working fine for the last 32 days, but since yesterday a number of clients have not been able to connect to the database from our application. They are getting the following error:

-2147467259 [MySQL] [ODBC 5.1 Driver] Lost Connection to MySQL server          at 'waiting for initial communication packet',system error: 0  

What can I do to fix this?

Firebird monitoring tables

Posted: 02 Mar 2013 11:30 AM PST

I am working with a Firebird 2.1.1 database and I started reading about the Monitoring tables.

I am trying to see, somehow in real time, which queries each client performs in the database.

I had a look at the MON$STATEMENTS table but it does not provide much info. I would have expected more content inside the table.

[screenshot from FlameRobin showing the table contents]

Am I looking in the wrong place or did I not understand the purpose / use of the Firebird monitoring tables?
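
For reference, a minimal sketch using the standard Firebird 2.1 monitoring tables, joining MON$STATEMENTS to MON$ATTACHMENTS to see each client's SQL text (note that the monitoring tables expose a snapshot taken when your transaction first queries them, not a live feed):

SELECT a.MON$USER,
       a.MON$REMOTE_ADDRESS,
       s.MON$SQL_TEXT
FROM   MON$STATEMENTS s
JOIN   MON$ATTACHMENTS a ON a.MON$ATTACHMENT_ID = s.MON$ATTACHMENT_ID
WHERE  s.MON$STATE = 1;  -- 1 = currently executing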

How to modify an update in Oracle so it performs faster?

Posted: 02 Mar 2013 08:11 AM PST

I have this query:

UPDATE
(
    SELECT h.valid_through_dt, h.LAST_UPDATE_TMSTMP
    FROM   ETL_FEE_SCH_TMP d, FEE_SCHEDULE_HISTORICAL h
    WHERE  h.FUND_ID = d.FUND_ID
    AND    h.FEETYPE_NAME = d.FEETYPE_NAME
    AND    h.BREAKPOINT_TYPE = d.BREAKPOINT_TYPE
    AND    h.BREAKPOINT_QTY = d.BREAKPOINT_QTY
    AND    h.LOW_BREAKPOINT_AMT = d.LOW_BREAKPOINT_AMT
    AND    h.VALID_THROUGH = TO_DATE('31-DEC-9999', 'dd-mon-yyyy')
    AND    h.universe = 'DC'
    AND    h.universe = d.universe
    AND    EXISTS
    (
        SELECT 1
        FROM   FEE_SCHEDULE s
        WHERE  s.FUND_ID = h.FUND_ID
        AND    s.FEETYPE_NAME = h.FEETYPE_NAME
        AND    s.BREAKPOINT_TYPE = h.BREAKPOINT_TYPE
        AND    s.BREAKPOINT_QTY = h.BREAKPOINT_QTY
        AND    s.LOW_BREAKPOINT_AMT = h.LOW_BREAKPOINT_AMT
        AND    s.universe = 'DC'
    )
) updateTable
SET updateTable.VALID_THROUGH = (SYSDATE - 1),
    updateTable.LAST_UPDATE_TMSTMP = SYSTIMESTAMP;

The trouble I am having is that this query takes a long time to run. I don't know whether it is possible to run it in parallel, or whether it would be easier to update via a cursor in a pipelined function.

What would you suggest?
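
One possible direction, sketched under two assumptions: that parallel DML is available (an Enterprise Edition feature) and that the updatable join view can be rewritten as EXISTS predicates on the base table. This is a sketch, not a drop-in replacement; verify it touches the same rows:

ALTER SESSION ENABLE PARALLEL DML;

UPDATE /*+ PARALLEL(h 8) */ FEE_SCHEDULE_HISTORICAL h
SET    h.VALID_THROUGH      = SYSDATE - 1,
       h.LAST_UPDATE_TMSTMP = SYSTIMESTAMP
WHERE  h.VALID_THROUGH = TO_DATE('31-DEC-9999', 'dd-mon-yyyy')
AND    h.UNIVERSE = 'DC'
AND    EXISTS (SELECT 1 FROM ETL_FEE_SCH_TMP d
               WHERE d.FUND_ID            = h.FUND_ID
               AND   d.FEETYPE_NAME       = h.FEETYPE_NAME
               AND   d.BREAKPOINT_TYPE    = h.BREAKPOINT_TYPE
               AND   d.BREAKPOINT_QTY     = h.BREAKPOINT_QTY
               AND   d.LOW_BREAKPOINT_AMT = h.LOW_BREAKPOINT_AMT
               AND   d.UNIVERSE           = h.UNIVERSE)
AND    EXISTS (SELECT 1 FROM FEE_SCHEDULE s
               WHERE s.FUND_ID            = h.FUND_ID
               AND   s.FEETYPE_NAME       = h.FEETYPE_NAME
               AND   s.BREAKPOINT_TYPE    = h.BREAKPOINT_TYPE
               AND   s.BREAKPOINT_QTY     = h.BREAKPOINT_QTY
               AND   s.LOW_BREAKPOINT_AMT = h.LOW_BREAKPOINT_AMT
               AND   s.UNIVERSE           = 'DC');

COMMIT;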

This is all the information that I believe is relevant.

This is the execution plan of the internal select:

Execution Plan
----------------------------------------------------------
Plan hash value: 57376096
---------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                         |     1 |   306 |  8427   (1)| 00:01:42 |
|   1 |  NESTED LOOPS                |                         |       |       |            |          |
|   2 |   NESTED LOOPS               |                         |     1 |   306 |  8427   (1)| 00:01:42 |
|   3 |    MERGE JOIN CARTESIAN      |                         |     1 |   192 |  8426   (1)| 00:01:42 |
|*  4 |     INDEX RANGE SCAN         | SYS_C000666             |     1 |    96 |     2   (0)| 00:00:01 |
|   5 |     BUFFER SORT              |                         |  3045K|   278M|  8425   (1)| 00:01:42 |
|   6 |      SORT UNIQUE             |                         |  3045K|   278M|  8425   (1)| 00:01:42 |
|*  7 |       TABLE ACCESS FULL      | FEE_SCHEDULE            |  3045K|   278M|  8425   (1)| 00:01:42 |
|*  8 |    INDEX RANGE SCAN          | FEE_SCHDL_IDX1          |     1 |       |     1   (0)| 00:00:01 |
|*  9 |   TABLE ACCESS BY INDEX ROWID| FEE_SCHEDULE_HISTORICAL |     1 |   114 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("D"."UNIVERSE"='DC')
   7 - filter("S"."UNIVERSE"='DC')
   8 - access("H"."UNIVERSE"='DC' AND "S"."FUND_ID"="H"."FUND_ID" AND
              "S"."FEETYPE_NAME"="H"."FEETYPE_NAME" AND
              "S"."BREAKPOINT_TYPE"="H"."BREAKPOINT_TYPE" AND
              "S"."BREAKPOINT_QTY"="H"."BREAKPOINT_QTY" AND
              "S"."LOW_BREAKPOINT_AMT"="H"."LOW_BREAKPOINT_AMT")
       filter("H"."FUND_ID"="D"."FUND_ID" AND
              "H"."FEETYPE_NAME"="D"."FEETYPE_NAME" AND
              "H"."BREAKPOINT_TYPE"="D"."BREAKPOINT_UNIT_TY

Table data:

UNIVERSE | FUND_ID    | FEETYPE_NAME  | BREAKPOINT_TYPE | BREAKPOINT_QTY | LOW_BREAKPOINT_AMT | HIGH_BREAKPOINT_AMT | FEE_PCT | FEE_SCHDL_SEQ_ID | GROUP_ID | LAST_UPDATE_TMSTMP   | VALID_FROM | VALID_THROUGH | INSERT_TMSTMP         | JOB_ID
DC       | DC9ZTPLPHO | DeferLoad     | Percentage      | 4              | 10000              | 300000              | 3.14    | 780250           | null     | 1/4/2012 3:59:54 PM  | 6/23/2012  | 12/31/9999    | 1/5/2011 3:59:54 PM   | 666
DC       | DCE86Y8XFU | RedemptionFee | Percentage      | 9              | 100                | 100500              | 7.67    | 780251           | null     | 6/4/2012 4:49:54 PM  | 11/12/2011 | 12/31/9999    | 8/17/2011 2:00:54 PM  | 666
DC       | DCAYL0KONA | FrontLoad     | Percentage      | 2              | 50000              | 601500              | 5.00    | 780252           | null     | 4/25/2012 4:49:54 PM | 8/2/2012   | 12/31/9999    | 12/19/2012 9:59:00 PM | 666
DC       | DC9ZTPLPHO | DeferLoad     | Percentage      | 7              | 80000              | 900000              | 2.24    | 780252           | null     | 4/25/2012 4:49:54 PM | 8/2/2012   | 12/31/9999    | 12/19/2012 9:59:00 PM | 666

This is the script of the historical table:

CREATE TABLE FEE_SCHEDULE_HISTORICAL
(
  UNIVERSE            VARCHAR2(2 BYTE)  NOT NULL,
  FUND_ID             VARCHAR2(10 BYTE) NOT NULL,
  FEETYPE_NAME        VARCHAR2(75 BYTE),
  BREAKPOINT_TYPE     VARCHAR2(50 BYTE),
  BREAKPOINT_QTY      VARCHAR2(10 BYTE),
  LOW_BREAKPOINT_AMT  NUMBER(19,6),
  HIGH_BREAKPOINT_AMT NUMBER(19,6),
  FEE_PCT             NUMBER(19,6),
  FEE_SCHDL_SEQ_ID    NUMBER NOT NULL,
  GROUP_ID            NUMBER,
  LAST_UPDATE_TMSTMP  DATE   NOT NULL,
  VALID_FROM          DATE   NOT NULL,
  VALID_THROUGH       DATE   NOT NULL,
  INSERT_TMSTMP       DATE   NOT NULL,
  JOB_ID              NUMBER NOT NULL
);

CREATE UNIQUE INDEX FEE_SCHDL_PK ON FEE_SCHEDULE_HISTORICAL(FEE_SCHDL_SEQ_ID);

CREATE UNIQUE INDEX FEE_SCHDL_HST_IDX ON FEE_SCHEDULE_HISTORICAL
(
    UNIVERSE,
    FUND_ID,
    FEETYPE_NAME,
    BREAKPOINT_TYPE,
    BREAKPOINT_QTY,
    LOW_BREAKPOINT_AMT,
    VALID_FROM,
    JOB_ID
);

CREATE INDEX FEE_SCHEDULE_HST_IDX2 ON FEE_SCHEDULE_HISTORICAL(LAST_UPDATE_TMSTMP);

CREATE INDEX FEE_SCHEDULE_HST_IDX3 ON FEE_SCHEDULE_HISTORICAL(VALID_THROUGH);

ALTER TABLE FEE_SCHEDULE_HISTORICAL ADD (
    CONSTRAINT FEE_SCHDL_PK
    PRIMARY KEY
    (FEE_SCHDL_SEQ_ID)
);

This is the other table:

CREATE TABLE FEE_SCHEDULE
(
  UNIVERSE            VARCHAR2(2 BYTE)  NOT NULL,
  FUND_ID             VARCHAR2(10 BYTE) NOT NULL,
  FEETYPE_NAME        VARCHAR2(75 BYTE),
  BREAKPOINT_TYPE     VARCHAR2(50 BYTE),
  BREAKPOINT_QTY      VARCHAR2(10 BYTE),
  LOW_BREAKPOINT_AMT  NUMBER(19,6),
  HIGH_BREAKPOINT_AMT NUMBER(19,6),
  FEE_PCT             NUMBER(19,6),
  JOB_RUN_ID          NUMBER NOT NULL,
  FILE_DATE           DATE   NOT NULL,
  CYCLE_DATE          DATE   NOT NULL
)

The temporary table is the result of FEE_SCHEDULE_HISTORICAL minus FEE_SCHEDULE.

Get the rank of a user in a score table

Posted: 02 Mar 2013 09:00 AM PST

I have a very simple MySQL table where I save highscores. It looks like this:

Id     Name     Score  

So far so good. The question is: how do I get a user's rank? For example, I have a user's Name or Id and want to get his rank, where all rows are ordered descending by Score.

An Example

Id | Name  | Score
1  | Ida   | 100
2  | Boo   | 58
3  | Lala  | 88
4  | Bash  | 102
5  | Assem | 99

In this case, Assem's rank would be 3, because he has the 3rd highest score.

The query should return one row, which contains (only) the required Rank.
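
A minimal sketch (the highscores table name is hypothetical): count how many rows beat the user's score and add one:

SELECT 1 + COUNT(*) AS Rank
FROM   highscores h
WHERE  h.Score > (SELECT Score FROM highscores WHERE Name = 'Assem');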
