Friday, September 13, 2013

[how to] Lock Pages in Memory keeps SQL database engine stuck at allocating 60MB of RAM?

Lock Pages in Memory keeps SQL database engine stuck at allocating 60MB of RAM?

Posted: 13 Sep 2013 08:17 PM PDT

I'm using Windows Server 2003 Enterprise and Microsoft SQL Server 2005 Standard Edition.

The problem is that SQL Server's memory allocation always gets stuck at a maximum of 1.7GB of RAM, even though I have 11GB of RAM free.

According to this thread, in which the original poster had the same issue: http://www.sqlservercentral.com/Forums/Topic499500-146-1.aspx#bm499829, I tried adding /PAE to my boot.ini, granting the Network Service account (which runs sqlservr.exe) the Lock Pages in Memory privilege, and reconfiguring SQL Server to use AWE to allocate memory.

But I had no luck: after restarting the server, the SQL database engine could not allocate more than 60MB of RAM, which is a far worse failure than I expected.

So after that I had to revert the Lock Pages in Memory setting (remove the Network Service account from the Lock Pages in Memory option) and restart the server, and it went back to the first problem: the SQL server database engine is again stuck at allocating 1.7GB of RAM. So does Lock Pages in Memory keep the SQL database engine stuck at allocating 60MB of RAM? And how do I resolve the first problem now?
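
For reference, a minimal sketch of the server-side AWE configuration that normally accompanies the Lock Pages in Memory privilege on 32-bit SQL Server 2005; the 10240MB cap is an assumed value, pick one that leaves headroom for the OS:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;
-- cap the buffer pool so AWE memory leaves room for the OS (value assumed)
EXEC sp_configure 'max server memory', 10240;
RECONFIGURE;
-- restart the SQL Server service for 'awe enabled' to take effect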

What do you call numerical coding systems that are left-aligned?

Posted: 13 Sep 2013 08:28 PM PDT

I'm writing an importing mechanism for the NAICS database. I have a few questions about this code format. I've seen it before and I like the setup. I'm going to ask some other questions about best practices and navigation of this data, and I'd like to simply refer to it by the right name.

Essentially, this is an example of the data:

CODE, TITLE
"21","Mining, Quarrying, and Oil and Gas Extraction"
"212","Mining (except Oil and Gas)"
"2121","Coal Mining"
"21211","Coal Mining"
"212111","Bituminous Coal and Lignite Surface Mining "
"212112","Bituminous Coal Underground Mining "

So if Bituminous Coal Underground Mining was your organization type, your code would be 212112. You could look up to find you were in the business-genre of Coal Mining, and up again to find you were in the business of Mining (except Oil and Gas), and up again to find you were in the business of "Mining, Quarrying, and Oil and Gas Extraction".

What is such a scheme called? Is there a term for this kind of organization of data?

I want to call it something like recursive-base10 or recursive-decimal; is there an established name for it, though?

Is this compound index unnecessary?

Posted: 13 Sep 2013 03:57 PM PDT

I have a large table containing sensor data, which includes the fields:

sensor_id
timestamp

I do queries against it using these fields almost exclusively. There are multiple sensors, and different sensors might have the same timestamp for a given set of data, so neither index can be unique.

I created three indexes: one for each of these columns (not unique), and a compound one for both (unique).
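
For illustration, that setup can be sketched like this (table and index names are assumed, not from the original post):

CREATE INDEX ix_sensor ON sensor_data (sensor_id);
CREATE INDEX ix_ts ON sensor_data (timestamp);
CREATE UNIQUE INDEX ix_sensor_ts ON sensor_data (sensor_id, timestamp);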

The table is constantly being written to, and queries to read data seem increasingly slow.

My question is, is the compound index unnecessary? Would it be faster to have only the two separate indexes? (Or remove those and keep only the compound index?) No other columns are used for filtering query data.

My question is similar to this one: Do I need separate indexes for each type of query, or will one multi-column index work?

SQL Server: Additional CPUs slow down batch

Posted: 13 Sep 2013 02:48 PM PDT

I'm running SQL Server 2012 on Windows Server 2008 R2 in a virtualized environment. I've observed the following under both VMware Workstation 9 and Hyper-V 2012 R1, and I don't know how to address it.

I've got a batch that takes around 5 minutes to run when there is a single CPU in the virtual machine. Bumping it up to anywhere from 2 to 8 CPUs causes the batch to take over 10 minutes. Watching Task Manager, I see there is little if any parallel execution and lots of context switching. If I limit sqlservr.exe to a single CPU by setting the processor affinity in Task Manager, the time drops back down to 5 minutes.
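
The same restriction can also be applied from inside SQL Server instead of Task Manager; a hedged sketch (SQL Server 2008 R2 and later syntax), together with the parallelism cap that is usually tried first:

-- pin the engine to CPU 0, the in-engine equivalent of the Task Manager change
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0;

-- or keep all CPUs but stop parallel plans:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;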

The particular batch I'm running makes heavy use of cursors and dynamic SQL, which cannot be eliminated.

The query has been profiled and optimized. Statistics are all up to date and indexes are rebuilt.

Is there anything I can do to SQL Server to get better behavior? This doesn't seem right. I would like to add CPU resources to the VM so that they can be used when necessary, without a drastic performance hit for serialized processing.

The CPU is an i7-4770K with VT-x enabled, tested both with and without hyperthreading enabled.

Easily, quickly replace MySQL table?

Posted: 13 Sep 2013 06:02 PM PDT

We have a process which regenerates a table from scratch, drops the original table, and moves the new table into the original's place. This should happen very quickly, but sometimes a website request hits the table after it's been dropped but before the new one is renamed. Apart from making the website more robust when there's a database error, is there an easier way to do this?
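
One approach worth sketching (table names assumed): MySQL executes a multi-table RENAME TABLE as a single atomic operation, so the swap never leaves a moment when the table is missing:

CREATE TABLE mytable_new LIKE mytable;
-- ... regenerate the data into mytable_new ...
RENAME TABLE mytable TO mytable_old, mytable_new TO mytable;
DROP TABLE mytable_old;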

Table design for multiple user-defined assignments

Posted: 13 Sep 2013 12:21 PM PDT

I have a Weather table which contains the weather information (precipitation, humidity, temperature, etc.) for nearly the last two years. New weather condition info is inserted into the table every 15 minutes using information from a local weather observation station.

Now users of a web application can define their own weather type definitions (meaning that they can define that "Hot" corresponds to when the temperature is above 26 degrees Celsius, etc.).

I currently have three tables:

Weather (table contains weather info updated every 15 mins)
WeatherID | ObservationTime | Temperature | Pressure | ....

WeatherGroup (table contains user-defined information)
WeatherGroupID | GroupName | IgnoreTemp | TempHigh | TempLow | IgnoreHumidity | ...
      1        | Snowy     |     0      |    0     |   -100  |      1         | ...

WeatherAssignment (table contains the mapping between a weather record and a weather group)
WeatherAssignmentID | WeatherID | WeatherGroupID
         1          |     23    |        2
         1          |     35    |        2
        ...

The issue is that pulling data every 15 minutes means one year of data contains 35,040 records, and the more users create their own weather definitions, the larger the WeatherAssignment table will grow. For the next four or five years I don't see this design causing any problems, but for those of you with database design experience: is there a better way to store this data?
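
One alternative, sketched with the column names shown above: since group membership is fully determined by the thresholds stored in WeatherGroup, the WeatherAssignment table could be dropped and membership derived at query time (or wrapped in a view):

SELECT w.WeatherID, g.WeatherGroupID
FROM Weather w
JOIN WeatherGroup g
  ON (g.IgnoreTemp = 1 OR w.Temperature BETWEEN g.TempLow AND g.TempHigh);
-- extend the ON clause the same way for the humidity and other thresholds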

How to configure a MySQL Enterprise cluster

Posted: 13 Sep 2013 11:38 AM PDT

I have two MySQL Enterprise servers, and I want to set them up as an active/standby cluster. How do I configure this?

Thank you.

What do you call a relationship between two entities without a foreign key?

Posted: 13 Sep 2013 09:54 AM PDT

Let's say I have two tables that are related in that one table contains a field that is a key to the other table. In other words, it would be a 1:1 or 1:* relationship depending on constraints.

Let's call it a Customer/Orders relationship to give it some context.

However, let's say the requirement is that there be no referential integrity because the table needs to be archived without affecting the rest of the system. In our case, they want to archive the customers, but leave the orders in the system.

Ignoring the fact that we now have dangling references, which are just a fact of life when you need to prune systems this way: what would you call this kind of "loose" relationship where there is no actual FK?

How might you illustrate this in an ERD without implying there is an FK, while still showing that there is a relationship?

I realize that the proper method might be to use an FK anyway and null the reference when archiving the customer, but that adds extra complexity they don't want to deal with. Further, they don't want to update the orders table after it has been finalized; they want to be able to find that customer number and go back into the archive to find the customer if need be.

Data retrieval speed tweaks in MS SQL 2005

Posted: 13 Sep 2013 12:25 PM PDT

I have a database in Microsoft SQL Server 2005.

I have a table with 3 columns, namely HASHKEY (BIGINT), NOTE_ID (INT) and TIME_OFFSET (INT).

The HASHKEY column holds 19-digit BIGINT values, which can contain duplicates, but the values are always 19 digits long.

I have an application that depends mainly on the data in this table. The application retrieves data from this table with a query like:

select HASHKEY, NOTE_ID, TIME_OFFSET
from TABLE_NAME
where HASHKEY in (<list of around 30000 hashkeys>)

But this query takes around 2 minutes to retrieve the data, and this is my problem: the application is a real-time processing application and needs to retrieve the data in about 5 seconds. How can I tweak things on the server so the retrieval time goes down?

I indexed the table by the HASHKEY column when I created it, but retrieving data still takes a long time.

Is there any setting I can change in the database so the time decreases? I welcome any type of solution, but I need to solve this; I am not very expert in this.

Also, HASHKEY is just a random 19-digit value, with no relation to other values.

The result of the following query is:

select count(HASHKEY) from TABLE_NAME
go
select count(distinct(HASHKEY)) from TABLE_NAME

Result:

225899932
189200251

Time taken: 2 minutes, 1 second

EDIT

This is the script to create the table:

USE [fp]
GO
/****** Object:  Table [dbo].[fp_core]    Script Date: 09/13/2013 22:40:43 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[fp_core](
    [hashkey] [bigint] NOT NULL,
    [note_id] [int] NOT NULL,
    [timeoffset] [int] NOT NULL
) ON [PRIMARY]

I will request data for a random set of around 30,000 hashkeys, with no particular ordering.

When I had around 20,000,000 rows in the table, the query took less than 2 seconds, but retrieval time keeps increasing.

EDIT

And here is the script to create the index; there is only one index on this table.

USE [fp]
GO
/****** Object:  Index [IX_fp_core]    Script Date: 09/13/2013 23:04:24 ******/
CREATE NONCLUSTERED INDEX [IX_fp_core] ON [dbo].[fp_core]
(
    [hashkey] ASC
) WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF) ON [PRIMARY]

Please help me.
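
Not from the original post, but one common tweak for this pattern can be sketched: stage the 30,000 keys in an indexed temp table and join to it, and consider a covering index so the lookups never touch the heap (SQL Server 2005 supports INCLUDE):

-- stage the keys (bulk-insert them however the application supplies them)
CREATE TABLE #keys (hashkey bigint NOT NULL PRIMARY KEY);

SELECT c.hashkey, c.note_id, c.timeoffset
FROM dbo.fp_core AS c
JOIN #keys AS k ON k.hashkey = c.hashkey;

-- covering variant of the existing index (name assumed):
CREATE NONCLUSTERED INDEX IX_fp_core_covering
ON dbo.fp_core (hashkey) INCLUDE (note_id, timeoffset);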

Converting .TPS (TopSpeed) to SQL

Posted: 13 Sep 2013 10:26 AM PDT

I have an older application that uses TopSpeed as the database engine. I want to use some of the data contained in these tables in another application that uses SQL. To accomplish this, I purchased the TPS ODBC driver and used Access to move the data from the TPS tables to a SQL database via the linked-tables feature.

This works fine, but I'm looking for an automated solution (plus, the Access way is messy). Is there a tool out there that could help?

MySQL: sysbench test - InnoDB vs Memory tables

Posted: 13 Sep 2013 08:40 AM PDT

I've done some tests to investigate a performance issue on a new HP Gen8 server (Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz).

I created two tables: the first one uses the InnoDB storage engine and the second one is an in-memory heap table.

System details:

sysbench-0.4.12-5.el6.x86_64
CentOS release 6.4 (Final)

Prepare stage:

InnoDB

# sysbench --db-driver=mysql --test=oltp --mysql-table-engine=innodb --oltp-table-size=1000000 --mysql-db=sbtest1 prepare  

Memory (heap)

# sysbench --db-driver=mysql --test=oltp --mysql-table-engine=heap --oltp-table-size=1000000 --mysql-db=sbtest2 prepare  

Testing stage:

Sysbench read-only test: a single table with 1 million rows; data size 559MB (527MB data + 31MB indexes)

InnoDB

# sysbench --db-driver=mysql --test=oltp --mysql-table-engine=innodb --num-threads=128 --max-requests=100000 --oltp-read-only run  

Total time: 16.3648s, TPS (transactions per second): 6111.40

Memory (heap)

# sysbench --db-driver=mysql --test=oltp --mysql-table-engine=heap --mysql-engine-trx=no --num-threads=128 --max-requests=100000 --oltp-read-only run  

This test ran much longer and I had to stop it, as the load on the server was very high, even though this is an in-memory table!?

SQL Query to fetch data from 4 different tables [migrated]

Posted: 13 Sep 2013 08:23 AM PDT

I have four tables, and I need to fetch data from one table with a WHERE condition; the output contains IDs from three different tables, and using those IDs I need to get the corresponding names.

Company:
CompanyID - PK
CompanyName
CompanyDescription

Users:
UserID - PK
FirstName
LastName
Email

TaskDetails:
TaskID - PK
CompanyID - FK of Company.CompanyID

TaskStatus:
TaskStatusID - PK
TaskID - FK of TaskDetails.TaskID
Status
Details
CreatedBy - FK of Users.UserID
UpdatedBy - FK of Users.UserID
CreatedAt
UpdatedAt

I need a query to return something like the below:

CompanyName, Email, Status, Details, CreatedAt, UpdatedAt, with a WHERE condition on the TaskStatus table: TaskStatus.UpdatedBy != 1 AND TaskStatus.UpdatedAt > '2013-08-01' AND TaskStatus.Status = 'COMPLETED'.
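
A hedged sketch of the requested join, assuming Email should come from the user referenced by UpdatedBy:

SELECT c.CompanyName, u.Email, ts.Status, ts.Details, ts.CreatedAt, ts.UpdatedAt
FROM TaskStatus ts
JOIN TaskDetails td ON td.TaskID = ts.TaskID
JOIN Company c     ON c.CompanyID = td.CompanyID
JOIN Users u       ON u.UserID = ts.UpdatedBy
WHERE ts.UpdatedBy <> 1
  AND ts.UpdatedAt > '2013-08-01'
  AND ts.Status = 'COMPLETED';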

On Oracle 12c, permitting and disallowing crash recovery

Posted: 13 Sep 2013 09:21 AM PDT

A developer who is testing his application on an Oracle 12c database has requested the following for two users:

  • USER0, who is permitted to start XA transactions and perform crash recovery,

and

  • USER1, who is permitted to start XA transactions but is not permitted to perform crash recovery.

I have not worked extensively with Oracle and I am having a hard time tracking down how to honour this request. Any pointers or references would be appreciated.
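
Not from the original post, but for orientation, these are the grants transaction managers commonly require for XA recovery; verify them against the 12c documentation before relying on them:

-- USER0: may start XA transactions and perform recovery
GRANT SELECT ON sys.dba_pending_transactions TO user0;
GRANT SELECT ON sys.pending_trans$ TO user0;
GRANT SELECT ON sys.dba_2pc_pending TO user0;
GRANT EXECUTE ON sys.dbms_xa TO user0;

-- USER1: may start XA transactions but gets none of the recovery views
GRANT EXECUTE ON sys.dbms_xa TO user1;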

Import Oracle schema data without losing modifications of stored procedures

Posted: 13 Sep 2013 05:57 PM PDT

I have this scenario:

  1. One huge (thousands of tables), complex production database, Oracle 8
  2. One huge (thousands of tables), complex development database, Oracle 9 ( same structure as production )
  3. Development database has modified stored procedures and packages as well as new ones
  4. Development database has new tables in some schemas
  5. Neither Oracle 8 exp nor Oracle 9 imp has a CONTENT option (that option exists only in the later Data Pump expdp/impdp utilities)

We usually do the following, because it's the only way to get the data updated properly (an import with ignore=yes would only insert new rows, and wouldn't update pre-existing rows that have the same PK but different values in non-PK columns):

  1. Delete one schema, then create the user again to have an empty schema
  2. Import from user to user to the empty schema

The problem is:

  1. How can we update the development database with fresh data from a production export without having to delete the schema first, given that there are new stored procedures/packages as well as modified ones in the development database?
  2. Comparing and re-creating only the modified or new stored procedures after deleting the schema, to get them back from a backup, would be too error-prone.
  3. There are (literally) thousands of tables, so we don't want to write a stored procedure to refresh the data in a certain order, etc. That would take months to write and test.

What would be an import-based solution to this?

EDIT: I failed to mention that prod is Solaris and dev is RedHat.
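
Not part of the original question, but one classic import-based approach can be sketched (user and file names assumed): empty the tables instead of dropping the schema, so the development code objects survive and the import never hits the stale-row problem:

# 1) TRUNCATE every table in the dev schema (disable FK constraints first if
#    necessary); procedures and packages are left untouched
# 2) import rows only into the existing, now-empty tables:
imp system/password file=prod_export.dmp fromuser=PRODUSER touser=DEVUSER rows=y ignore=y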

Query getting periodically stuck in 'copying to tmp table' state, never completes

Posted: 13 Sep 2013 12:44 PM PDT

I am running WordPress on a dedicated server with a MySQL backend. I have a query that usually takes <1 second to execute, but periodically this query gets stuck in a 'copying to tmp table' state and stays that way indefinitely until it is either killed or mysqld is restarted. After restarting mysqld the problem goes away: the (identical) query once again takes <1 second to execute. This leads me to believe this is a configuration problem.

How do I go about solving this problem? The query itself is not too intensive, and my server is not experiencing any sudden traffic spikes. The tables themselves are all InnoDB format.

Here is my my.cnf: http://pastebin.com/9UMPxfAr

The query:

SELECT ctt.term_id AS catid, p.ID, p.post_title AS title, GROUP_CONCAT(tt.term_id) as terms_id
FROM (SELECT * FROM wp_posts WHERE post_status = 'publish' AND post_type = 'post' AND post_date > '2013-09-09 17:00:00' AND post_date < '2013-09-10 08:14:00') AS p
        JOIN wp_postmeta AS pm
                ON (p.ID = pm.post_id AND pm.meta_key = 'wpo_sourcepermalink')
        JOIN wp_term_relationships AS ctr
                ON (p.ID = ctr.object_id)
        JOIN wp_term_taxonomy AS ctt
                ON (ctr.term_taxonomy_id = ctt.term_taxonomy_id AND ctt.taxonomy = 'category' AND ctt.term_id IN ('8','314','6'))
        JOIN wp_term_relationships AS tr
                ON (p.ID = tr.object_id)
        JOIN wp_term_taxonomy AS tt
                ON (tr.term_taxonomy_id = tt.term_taxonomy_id AND (tt.taxonomy = 'post_tag' OR tt.taxonomy = 'post_author'))
GROUP BY tr.object_id
ORDER BY pm.meta_value ASC;

An EXPLAIN of the query: http://pastebin.com/m5ndBfVX

And the output of "SHOW ENGINE INNODB STATUS" when a query is stuck in the 'copying to tmp table' state: http://pastebin.com/h0xv4Sfa
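
Not from the original post, but the knobs usually examined for long 'copying to tmp table' states can be sketched as follows; the 64MB figures are assumptions, so compare them with the posted my.cnf first:

SET GLOBAL tmp_table_size      = 64 * 1024 * 1024;
SET GLOBAL max_heap_table_size = 64 * 1024 * 1024;
-- both limits apply: an in-memory tmp table is converted to on-disk MyISAM
-- once it exceeds the smaller of the two values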

Accumulo table design methodology

Posted: 13 Sep 2013 10:23 AM PDT

I am just getting started with Accumulo and NoSQL databases, and I am looking for some discussion on table design. I get the key-value structure shown in the manual; however, if I am trying to recreate a relational database, I am not sure how relationships work. Can someone explain to some degree how to set up a "Hello World" database (i.e., a manager-employee database)? I want to use a key-value implementation.
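
Not an authoritative answer, but one common modelling pattern can be sketched as a key layout (all names invented for illustration): store each entity under its own row ID and materialize the relationship in both directions, since Accumulo sorts and indexes only by key:

rowID     columnFamily   columnQualifier   value
emp_001   info           name              Alice
emp_001   info           manager           emp_007
emp_007   info           name              Bob
emp_007   reports        emp_001           (empty)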

Disaster Recovery for PostgreSQL 9.0

Posted: 13 Sep 2013 07:18 PM PDT

We have a number of PostgreSQL 9.0 servers. We use binary replication to keep a hot standby instance of each. The only problem is that if someone drops something on the master, intentionally or not, the drop cascades to the replicas as well. I'm looking at possible ways to avoid this. One possible option seems to be point-in-time recovery. I'm just wondering what a good design for this could be. Any ideas? Let's assume the master is compromised and we lose everything we have there. How can we avoid losing the replica, or at least have a way to bring it back if it's dropped?
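
A minimal sketch of the WAL-archiving half of PITR on 9.0 (paths assumed): with a periodic base backup plus a WAL archive, a copy of the database can be restored to any instant before the destructive statement, independently of what cascades to the streaming replicas:

# postgresql.conf on the master
wal_level = archive
archive_mode = on
archive_command = 'cp %p /mnt/wal_archive/%f'    # archive location assumed

# recovery.conf when restoring, stopping just before the accident
restore_command = 'cp /mnt/wal_archive/%f %p'
recovery_target_time = '2013-09-13 12:00:00'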

Oracle Patch Update

Posted: 13 Sep 2013 11:20 AM PDT

We have an Oracle RAC production environment with primary and secondary DBs. Our DBA has asked to update the Oracle version from 11.2.0.1.0 (64-bit) to 11.2.0.3 (64-bit) with patches 6880880, 10404530, 16803769 and 16803775.

In our current database we have shared storage, ACL settings, security settings, Gateway/Heterogeneous connectivity, Data Guard, Data Guard Broker, a backup policy, and Oracle Clients installed on other machines.

The DBA has estimated that he needs to do the installation and settings from scratch and then test. So, when the version is updated, do we really need to reconfigure and reinstall everything (shared storage, ACL settings, security settings, Gateway/Heterogeneous connectivity, Data Guard, Data Guard Broker, backup policy, and the Oracle Clients installed on other machines)? If yes, that's fine; if not, I need to justify it.

I can understand that testing would be required.

Replicated Database Log File Maintenance

Posted: 13 Sep 2013 03:20 PM PDT

I have a database on the publisher that is involved in replication (the publication is configured for both merge and transactional). I'm trying to regain control of the log file for this particular database (VLF count, size, etc.).

Is there anything I need to do (or be cautious of) with the replication setup before trying to perform any maintenance on the log file? I am not an expert in the area of replication and cannot find anything solid that provides guidance as to what measures should be taken.

Edit: This would include working on the distribution database as well; data retention was not configured at all for some reason.
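
Not from the original post, but the log-file maintenance itself is usually sketched like this (logical file name and target size assumed); the replication-specific caution is to confirm the log reader and merge agents are caught up first, since log records still needed by replication cannot be truncated:

DBCC LOGINFO;                                  -- one row per VLF; count them
DBCC SHRINKFILE (MyDb_log, 0);                 -- shrink after a log backup
ALTER DATABASE MyDb
  MODIFY FILE (NAME = MyDb_log, SIZE = 8GB);   -- regrow in big chunks to keep the VLF count low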

SQL Server Designers, Failed Saves, and Generated Scripts

Posted: 13 Sep 2013 02:09 PM PDT

I am a big fan of the simple diagramming tool that comes with SSMS, and use it frequently. When I save changes to the model, I have it configured to automatically generate the change scripts that go along with the save. I then save (and source-control) the resulting change script. This works great and is an important piece of the process my team(s) use.

What occasionally happens is that a save fails, and I still get the option to save my change script. I then fix the problem and save again (which results in another change script).

I'm never clear what I need to do at this point to maintain a consistent set of change scripts. There seems to be overlap between the two scripts (the failed and the successful), but they are not identical.

If I want to continue to use this feature, what should I be doing with the resulting script as soon as I get a failed save of the model?

How to avoid empty rows in SSIS Excel Destination?

Posted: 13 Sep 2013 08:20 PM PDT

Does anyone have a way to avoid empty rows when using SSIS to export to Excel? Here's a simple example of one data flow task:

[screenshot: OLE DB Source]

Data Conversion (to handle the annoying UNICODE / NON-UNICODE deal):

[screenshot: Data Conversion]

The end result is either of the two below, depending on the value of "FirstRowHasColumnName" in the Excel Connection Manager. Note the blank rows.

[screenshot: output 1]

[screenshot: output 2]

How to add rows/columns to a table at run time in SSRS 2008

Posted: 13 Sep 2013 10:20 AM PDT

Usually we design a report table with x rows and y columns. But how can we create a report which adds rows and columns dynamically at run time, based on the result of the source query?

For example, I want to list studentId, studentName, and every course each student has enrolled in. As the number of courses differs from one person to another, I need to add the rows and related columns for the courses at run time, based on the query result. How can this be done? For example:

[screenshot: example report layout]

Thanks for your help in advance.

How to disable Oracle's MAX_ENABLED_ROLES limit

Posted: 13 Sep 2013 04:20 PM PDT

How can I disable Oracle's MAX_ENABLED_ROLES limit, or expand the limit's value? [Oracle 10g (win32)]
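
A hedged sketch: MAX_ENABLED_ROLES is a static initialization parameter, so raising it takes an spfile change plus an instance restart (the value 100 is illustrative; the parameter is deprecated in 10g, so check the documentation for your exact release):

ALTER SYSTEM SET MAX_ENABLED_ROLES = 100 SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP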

In MySQL, does the order of the columns in a WHERE clause affect query performance, and why?

Posted: 13 Sep 2013 01:20 PM PDT

I have a query that doesn't use any indexes:

SELECT 32,
       guid,
       1,
       1,
       1,
       0,
       5
FROM   test
WHERE  level >= 20
       AND ( ( fun_GetIndexValue(data, 354) >> 16 ) +
             ( fun_GetIndexValue(data, 355) >> 16 ) +
             ( fun_GetIndexValue(data, 356) >> 16 ) +
             ( fun_GetIndexValue(data, 357) >> 16 ) +
             ( fun_GetIndexValue(data, 358) >> 16 ) +
             ( fun_GetIndexValue(data, 359) >> 16 ) ) >= 1;

The level column has only about 80-90 distinct values, the table test has about a million rows, and the data column is passed to a function, so I think the query cannot use any indexes. But I found that if I put the level condition at the end, the query performs more slowly. Why is that?

Delete a word, its meanings, and its meanings' example sentences from the DB

Posted: 13 Sep 2013 05:20 PM PDT

I have three tables as below (simplified for demonstration):

words
=====
integer id
text    word

meanings
========
integer id
integer word_id
text    meaning

examples
========
integer id
integer meaning_id
text    sentence

where word_id stores the id of the word in the words table and meaning_id stores the id of the meaning in the meanings table. I am trying to figure out a SQL query that, given a word's id, deletes the word together with all its meanings and example sentences in one go. Is such a SQL query possible to compose? If so, how?

Edit1: I am using SQLite3 as the database.

Edit2: I figured out the following solution, which requires 3 SQL queries run in order:

DELETE FROM examples WHERE meaning_id IN (SELECT id FROM meanings WHERE word_id=the_given_id);
DELETE FROM meanings WHERE word_id=the_given_id;
DELETE FROM words WHERE id=the_given_id;

I'm still looking for the answer to my question: is it possible to do the whole process in one query?
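
Not a single DELETE over the existing schema, but a sketch of the usual SQLite approach: declare the references with ON DELETE CASCADE (supported since SQLite 3.6.19; foreign keys must be switched on per connection), after which one DELETE on words removes everything:

PRAGMA foreign_keys = ON;

CREATE TABLE meanings (
  id       INTEGER PRIMARY KEY,
  word_id  INTEGER REFERENCES words(id) ON DELETE CASCADE,
  meaning  TEXT
);

CREATE TABLE examples (
  id          INTEGER PRIMARY KEY,
  meaning_id  INTEGER REFERENCES meanings(id) ON DELETE CASCADE,
  sentence    TEXT
);

DELETE FROM words WHERE id = 123;  -- cascades to meanings and examples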

MySQL concurrent INSERTs

Posted: 13 Sep 2013 08:20 AM PDT

I have a MySQL database with InnoDB tables. Different client processes issue SELECT statements (to check the existence of a value) followed by an INSERT or UPDATE (depending on the result of the SELECT). What I fear is concurrent access to the data causing only INSERTs and no UPDATEs. Is LOCK TABLES ... WRITE the only solution?
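
Not the only alternative, but a sketch of the usual lock-free pattern, assuming the checked value carries a UNIQUE key (table and columns invented for illustration):

INSERT INTO counters (name, hits)
VALUES ('page1', 1)
ON DUPLICATE KEY UPDATE hits = hits + 1;
-- the SELECT-then-INSERT race disappears because the existence check and the
-- insert-or-update happen atomically within one statement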

How can I optimize this query and support multiple SKUs?

Posted: 13 Sep 2013 12:20 PM PDT

My current query can only select one SKU at a time. I can leave salesite_id constant; if there is a way to also allow varying salesite_ids, that would be good too, but it's not necessary. Any suggestions on indexes would also be much appreciated.

SELECT
    available - (
        SELECT COALESCE(sum(quantity), 0)
        FROM product_locks
        WHERE sku = 'sku1'
    ) - (
        SELECT COALESCE(sum(quantity), 0)
        FROM
            orderlineitems
            INNER JOIN responses_authnets ON responses_authnets.id = orderlineitems.response_id
        WHERE
            sku = 'sku1' AND responses_authnets.salesite_id = 'site_id_1'
    ) AS free,
    available AS total,
    sku,
    on_hold
FROM product_inventories
WHERE sku = 'sku1' AND salesite_id = 'site_id_1';
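
A hedged sketch of a multi-SKU rewrite: pre-aggregate the locks and orders per SKU and join, so one statement covers an arbitrary IN list (untested, names taken from the query above):

SELECT i.sku,
       i.available - COALESCE(l.qty, 0) - COALESCE(o.qty, 0) AS free,
       i.available AS total,
       i.on_hold
FROM product_inventories i
LEFT JOIN (SELECT sku, SUM(quantity) AS qty
           FROM product_locks
           GROUP BY sku) l ON l.sku = i.sku
LEFT JOIN (SELECT oli.sku, SUM(oli.quantity) AS qty
           FROM orderlineitems oli
           JOIN responses_authnets r ON r.id = oli.response_id
           WHERE r.salesite_id = 'site_id_1'
           GROUP BY oli.sku) o ON o.sku = i.sku
WHERE i.salesite_id = 'site_id_1'
  AND i.sku IN ('sku1', 'sku2');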

How to modify an update in Oracle so it performs faster?

Posted: 13 Sep 2013 02:20 PM PDT

I have this query:

UPDATE
(
    SELECT h.valid_through_dt, h.LAST_UPDATE_TMSTMP
    FROM   ETL_FEE_SCH_TMP d, FEE_SCHEDULE_HISTORICAL h
    WHERE  h.FUND_ID = d.FUND_ID
    AND    h.FEETYPE_NAME = d.FEETYPE_NAME
    AND    h.BREAKPOINT_TYPE = d.BREAKPOINT_TYPE
    AND    h.BREAKPOINT_QTY = d.BREAKPOINT_QTY
    AND    h.LOW_BREAKPOINT_AMT = d.LOW_BREAKPOINT_AMT
    AND    h.VALID_THROUGH = TO_DATE ('31-DEC-9999', 'dd-mon-yyyy')
    AND    h.universe = 'DC'
    AND    h.universe = d.universe
    AND    EXISTS
    (
        SELECT 1
        FROM FEE_SCHEDULE s
        WHERE s.FUND_ID = h.FUND_ID
        AND   s.FEETYPE_NAME = h.FEETYPE_NAME
        AND   s.BREAKPOINT_TYPE = h.BREAKPOINT_TYPE
        AND   s.BREAKPOINT_QTY = h.BREAKPOINT_QTY
        AND   s.LOW_BREAKPOINT_AMT = h.LOW_BREAKPOINT_AMT
        AND   s.universe = 'DC'
    )
) updateTable
SET updateTable.VALID_THROUGH = (SYSDATE - 1),
    updateTable.LAST_UPDATE_TMSTMP = SYSTIMESTAMP;

The trouble I am having is that this query takes a long time to run. I don't know whether it is possible to run it in parallel, or whether it would be easier to update a cursor in a pipelined function.

What would you suggest?

This is all the information that I believe is relevant.

This is the execution plan of the internal select:

Execution Plan
----------------------------------------------------------
Plan hash value: 57376096

---------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                     | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                          |     1 |   306 |  8427   (1)| 00:01:42 |
|   1 |  NESTED LOOPS                |                          |       |       |            |          |
|   2 |   NESTED LOOPS               |                          |     1 |   306 |  8427   (1)| 00:01:42 |
|   3 |    MERGE JOIN CARTESIAN      |                          |     1 |   192 |  8426   (1)| 00:01:42 |
|*  4 |     INDEX RANGE SCAN         | SYS_C000666              |     1 |    96 |     2   (0)| 00:00:01 |
|   5 |     BUFFER SORT              |                          |  3045K|   278M|  8425   (1)| 00:01:42 |
|   6 |      SORT UNIQUE             |                          |  3045K|   278M|  8425   (1)| 00:01:42 |
|*  7 |       TABLE ACCESS FULL      | FEE_SCHEDULE             |  3045K|   278M|  8425   (1)| 00:01:42 |
|*  8 |    INDEX RANGE SCAN          | FEE_SCHDL_IDX1           |     1 |       |     1   (0)| 00:00:01 |
|*  9 |   TABLE ACCESS BY INDEX ROWID| FEE_SCHEDULE_HISTORICAL  |     1 |   114 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("D"."UNIVERSE"='DC')
   7 - filter("S"."UNIVERSE"='DC')
   8 - access("H"."UNIVERSE"='DC' AND "S"."FUND_ID"="H"."FUND_ID" AND
              "S"."FEETYPE_NAME"="H"."FEETYPE_NAME" AND
              "S"."BREAKPOINT_TYPE"="H"."BREAKPOINT_TYPE" AND
              "S"."BREAKPOINT_QTY"="H"."BREAKPOINT_QTY" AND
              "S"."LOW_BREAKPOINT_AMT"="H"."LOW_BREAKPOINT_AMT")
       filter("H"."FUND_ID"="D"."FUND_ID" AND
              "H"."FEETYPE_NAME"="D"."FEETYPE_NAME" AND
              "H"."BREAKPOINT_TYPE"="D"."BREAKPOINT_UNIT_TY

Table data:

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
UNIVERSE|FUND_ID   |FEETYPE_NAME |BREAKPOINT_TYPE|BREAKPOINT_QTY|LOW_BREAKPOINT_AMT|HIGH_BREAKPOINT_AMT|FEE_PCT|FEE_SCHDL_SEQ_ID|GROUP_ID|LAST_UPDATE_TMSTMP  |VALID_FROM|VALID_THROUGH|INSERT_TMSTMP        |JOB_ID|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
DC      |DC9ZTPLPHO|DeferLoad    |Percentage     |4             |10000             |300000             |3.14   |780250          |null    |1/4/2012  3:59:54 PM|6/23/2012 |12/31/9999   |1/5/2011   3:59:54 PM|666   |
DC      |DCE86Y8XFU|RedemptionFee|Percentage     |9             |  100             |100500             |7.67   |780251          |null    |6/4/2012  4:49:54 PM|11/12/2011|12/31/9999   |8/17/2011  2:00:54 PM|666   |
DC      |DCAYL0KONA|FrontLoad    |Percentage     |2             |50000             |601500             |5.00   |780252          |null    |4/25/2012 4:49:54 PM|8/2/2012  |12/31/9999   |12/19/2012 9:59:00 PM|666   |
DC      |DC9ZTPLPHO|DeferLoad    |Percentage     |7             |80000             |900000             |2.24   |780252          |null    |4/25/2012 4:49:54 PM|8/2/2012  |12/31/9999   |12/19/2012 9:59:00 PM|666   |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

This is the script of the historical table:

CREATE TABLE FEE_SCHEDULE_HISTORICAL
(
  UNIVERSE                        VARCHAR2(2 BYTE) NOT NULL,
  FUND_ID                         VARCHAR2(10 BYTE) NOT NULL,
  FEETYPE_NAME                    VARCHAR2(75 BYTE),
  BREAKPOINT_TYPE                 VARCHAR2(50 BYTE),
  BREAKPOINT_QTY                  VARCHAR2(10 BYTE),
  LOW_BREAKPOINT_AMT              NUMBER(19,6),
  HIGH_BREAKPOINT_AMT             NUMBER(19,6),
  FEE_PCT                         NUMBER(19,6),
  FEE_SCHDL_SEQ_ID                NUMBER        NOT NULL,
  GROUP_ID                        NUMBER,
  LAST_UPDATE_TMSTMP              DATE          NOT NULL,
  VALID_FROM                      DATE          NOT NULL,
  VALID_THROUGH                   DATE          NOT NULL,
  INSERT_TMSTMP                   DATE          NOT NULL,
  JOB_ID                          NUMBER        NOT NULL
);

CREATE UNIQUE INDEX FEE_SCHDL_PK ON FEE_SCHEDULE_HISTORICAL(FEE_SCHDL_SEQ_ID);

CREATE UNIQUE INDEX FEE_SCHDL_HST_IDX ON FEE_SCHEDULE_HISTORICAL (
    UNIVERSE,
    FUND_ID,
    FEETYPE_NAME,
    BREAKPOINT_TYPE,
    BREAKPOINT_QTY,
    LOW_BREAKPOINT_AMT,
    VALID_FROM,
    JOB_ID
);

CREATE INDEX FEE_SCHEDULE_HST_IDX2 ON FEE_SCHEDULE_HISTORICAL(LAST_UPDATE_TMSTMP);

CREATE INDEX FEE_SCHEDULE_HST_IDX3 ON FEE_SCHEDULE_HISTORICAL(VALID_THROUGH);

ALTER TABLE FEE_SCHEDULE_HISTORICAL ADD (
    CONSTRAINT FEE_SCHDL_PK
    PRIMARY KEY
    (FEE_SCHDL_SEQ_ID)
);

This is the other table:

CREATE TABLE FEE_SCHEDULE
(
  UNIVERSE                        VARCHAR2(2 BYTE) NOT NULL,
  FUND_ID                         VARCHAR2(10 BYTE) NOT NULL,
  FEETYPE_NAME                    VARCHAR2(75 BYTE),
  BREAKPOINT_TYPE                 VARCHAR2(50 BYTE),
  BREAKPOINT_QTY                  VARCHAR2(10 BYTE),
  LOW_BREAKPOINT_AMT              NUMBER(19,6),
  HIGH_BREAKPOINT_AMT             NUMBER(19,6),
  FEE_PCT                         NUMBER(19,6),
  JOB_RUN_ID                      NUMBER        NOT NULL,
  FILE_DATE                       DATE          NOT NULL,
  CYCLE_DATE                      DATE          NOT NULL
);

The temporary table is the result of FEE_SCHEDULE_HISTORICAL minus FEE_SCHEDULE
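
Not from the original post, but since the question asks about parallelism, a hedged sketch: the updatable join can be rewritten as a plain UPDATE ... WHERE EXISTS (no column from the temp table is actually set) and run with parallel DML, which must be enabled per session (degree 8 assumed; parallel DML requires Enterprise Edition):

ALTER SESSION ENABLE PARALLEL DML;

UPDATE /*+ PARALLEL(h 8) */ fee_schedule_historical h
SET    h.valid_through = SYSDATE - 1,
       h.last_update_tmstmp = SYSTIMESTAMP
WHERE  h.valid_through = TO_DATE('31-DEC-9999', 'dd-mon-yyyy')
AND    h.universe = 'DC'
AND    EXISTS (SELECT 1 FROM etl_fee_sch_tmp d
               WHERE  d.fund_id = h.fund_id
               AND    d.feetype_name = h.feetype_name
               AND    d.breakpoint_type = h.breakpoint_type
               AND    d.breakpoint_qty = h.breakpoint_qty
               AND    d.low_breakpoint_amt = h.low_breakpoint_amt
               AND    d.universe = h.universe)
AND    EXISTS (SELECT 1 FROM fee_schedule s
               WHERE  s.fund_id = h.fund_id
               AND    s.feetype_name = h.feetype_name
               AND    s.breakpoint_type = h.breakpoint_type
               AND    s.breakpoint_qty = h.breakpoint_qty
               AND    s.low_breakpoint_amt = h.low_breakpoint_amt
               AND    s.universe = 'DC');

COMMIT;  -- a session that ran parallel DML must commit before reading the table again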

Query to find and replace text in all tables and fields of a MySQL db

Posted: 13 Sep 2013 06:20 PM PDT

I need to run a query to find and replace some text in all tables of a MySQL database.

I found this query, but it only looks for the text in the tbl_name table and just in the column field.

update tbl_name set column=REPLACE(column, 'fuschia', 'fuchsia');   

I need it to look in all tables and all fields (everywhere in the database).
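
Not a single statement, but the standard generator trick can be sketched: build one UPDATE per text column from information_schema, then review and run the generated statements (database name assumed):

SELECT CONCAT(
  'UPDATE `', table_schema, '`.`', table_name, '` ',
  'SET `', column_name, '` = REPLACE(`', column_name, '`, ''fuschia'', ''fuchsia'');'
) AS stmt
FROM information_schema.columns
WHERE table_schema = 'your_db'
  AND data_type IN ('char', 'varchar', 'tinytext', 'text', 'mediumtext', 'longtext');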
