Friday, May 24, 2013

[how to] Will Partitions and Indexes on the same table help in performance of Inserts and Selects?


Will Partitions and Indexes on the same table help in performance of Inserts and Selects?

Posted: 24 May 2013 02:09 PM PDT

I have a table containing the list of visitors and this table has the following information.

  • Visitor Browser Information
  • Visitor Location Information
  • Visitor Time Information
  • No of Visits

I have a second table that maintains the history of each visit: if the same visitor visits the site again, I insert a row into the second table and update the number of visits in the first table.

The kinds of reports that I have to generate from these tables are:

  1. Count of Visitors/day or days (Search Between days)
  2. Count of Visitors/month
  3. Count of Visitors/year
  4. Count of Visitors/browser or grouped by browsers

On average there are about 20000 inserts to the second table and about 15000 inserts to the first table, meaning the other 5000 were repeat visits that only updated the number of visits in the first table.

I am trying to decide whether to partition the tables by month and sub-partition them by day for reports 1-3, and whether to index the browser-related columns for report 4.

There will be more reports in the future, but I am not sure what clauses they will use.

Does partitioning/sub-partitioning along with indexing help in the performance of inserts and selects?

Should I perform partitioning on both the tables?

I am currently using MySQL 5.5 + InnoDB
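For reports 1-3, here is a minimal sketch of what monthly RANGE partitioning could look like on MySQL 5.5; the table and column names are hypothetical stand-ins for the actual schema. Day-level pruning already works through the TO_DAYS() ranges, so daily sub-partitions are not strictly required, and the browser index for report 4 is simply maintained inside each partition (that maintenance is the extra cost inserts pay):

    CREATE TABLE visit_history (
        visit_id   BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        visitor_id INT UNSIGNED    NOT NULL,
        visit_date DATE            NOT NULL,
        browser    VARCHAR(64),
        -- MySQL requires the partitioning column in every unique key:
        PRIMARY KEY (visit_id, visit_date),
        KEY idx_browser (browser)          -- supports report 4 (GROUP BY browser)
    ) ENGINE=InnoDB
    PARTITION BY RANGE (TO_DAYS(visit_date)) (
        PARTITION p201301 VALUES LESS THAN (TO_DAYS('2013-02-01')),
        PARTITION p201302 VALUES LESS THAN (TO_DAYS('2013-03-01')),
        PARTITION pmax    VALUES LESS THAN MAXVALUE
    );

A date-range report then only touches the partitions covering the requested period (partition pruning), while each insert pays for one extra index, so whether this nets out positive depends on the read/write mix.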

Which database is suitable for huge inserts in an analytics system? [closed]

Posted: 24 May 2013 12:52 PM PDT

I want to develop an analytics system for very high-traffic web sites that records many user behaviors, and I want to choose a proper database to do it well. This database must be suitable for huge insert volumes, multiple concurrent transactions, and real-time updates.

First of all, I want to know whether I should use NoSQL or SQL. Second, if NoSQL, which one? And if SQL, which?

Creating a server on client machine [closed]

Posted: 24 May 2013 12:50 PM PDT

I'm trying to learn SQL, and this post contains two questions.

  1. I know that MySQL is already installed on my machine (OS X 10.8) and I've seen the command line, but I'm not quite sure how to set up a local server for it. How can I set up a local server on my machine?

  2. Does Oracle offer a free version of their database to play with? Would setting up a local server for it be basically the same as for MySQL?

How often should MySQL (stock, Percona, etc.) be upgraded?

Posted: 24 May 2013 02:22 PM PDT

My organization runs several data-driven web applications backed by various flavors of MySQL on RHEL (i.e. we're just like everyone else!) Some of our web apps use the standard build of MySQL, and we use yum update periodically to update all yum-managed software on those systems. Other webapps on other hosts use the Percona build, which we installed manually. We use yum update periodically on those systems, too, but as our installation of Percona on those builds is not managed by yum, it isn't upgraded by that process.

Is there a rule of thumb for when to upgrade a component like MySQL (e.g. monthly, quarterly, or after each release)? We want to find the right balance between staying up-to-date and spending too much time on administration. What factors affect the decision? Regarding security, we use hardware firewalls and iptables to restrict access, so MySQL/Percona is not exposed to the outermost attack surface. The systems in question are dedicated, single-tenant hosts (i.e. there are no other users on the host).

Unable to restore cpanel database backup

Posted: 24 May 2013 08:01 PM PDT

I recently created a complete backup of my cPanel account to move it to another server. The INSERT queries that cPanel generated for the wp-posts table (corresponding to WordPress posts) are very large, since each contains the entire text of a post, so the database restore through cPanel fails. When I run such a query separately in phpMyAdmin it inserts 123 rows, though I expected 1 row, since it is one INSERT statement. How do I get these big queries to run?

Please suggest a solution to this.
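A common cause of restores failing on very large multi-row INSERT statements is the server's max_allowed_packet limit. As a hedged first check (the 64 MB value is an arbitrary example; the setting should also be put in my.cnf so it survives a restart):

    -- Check the current limit, in bytes (often 1MB on older builds):
    SHOW VARIABLES LIKE 'max_allowed_packet';

    -- Raise it for the running server (requires SUPER):
    SET GLOBAL max_allowed_packet = 67108864;  -- 64 MB

The client sending the statement has its own copy of this limit, so a command-line restore may also need mysql --max_allowed_packet=64M.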

Can you grant an Active Directory group rights to a linked server in SQL Server 2008 R2?

Posted: 24 May 2013 11:56 AM PDT

My company uses SQL Server 2008 R2. I am trying to grant an Active Directory group rights to a linked server, but I am unable to: it only lets me grant rights to individual Active Directory users. Does anyone know how to achieve this? Thanks in advance.

Backup on an empty SSAS database hangs

Posted: 24 May 2013 12:30 PM PDT

I created an SSIS package to iterate through all databases on an SSAS instance. It grabs the DatabaseId using AMO in a VBScript task, then generates the XMLA to back up each database.

Most databases back up with no problems, but when the ForEach loop gets to one particular database, the Execute Analysis Services DDL Task seems to hang. The .abf file is created, but the SSIS package never seems to get the response that the command has completed. It's as if the backup process is stuck on the file.

When I back up the database using the SSMS dialogs, the same problem occurs. I verified that other databases back up successfully using this method.

I should note that the database in question is 0.0 MB in "estimated" size according to the database properties in SSMS. Is the database truly empty, or could this be a sign of corruption?

Why doesn't the backup process complete? I get no error messages, just a long-running process that does not end. Is there anything I can do to find the root cause?

Debugging SSAS Timeouts

Posted: 24 May 2013 10:08 AM PDT

I have a drill-through problem apparently related to security. Users in one role are seeing timeout failures when trying to invoke a drill-through action. They have permission on the action through that role and seem to have all necessary dimension and cell security rights.

Profiler is not showing me any obvious reason this is failing. What else can I use to debug this?

If I modify the user's role membership, the action works as expected, so I'm confident this is related to security somehow.

SQL Server - Remove Duplicates in Order

Posted: 24 May 2013 09:12 AM PDT

When removing duplicate rows using the query below, how would I go about controlling which of the found duplicates is removed?

I am using this query, from this tutorial:

    DELETE FROM dbo.ATTENDANCE
    WHERE AUTOID NOT IN (
        SELECT MIN(AUTOID)
        FROM dbo.ATTENDANCE
        GROUP BY EMPLOYEE_ID, ATTENDANCE_DATE
    )

And this works great. The reason I am using this approach is that the only unique id available is the IDENTITY column, and to determine whether rows are duplicates I have to look at a combination of multiple columns.

But if I have a set of duplicate rows, how does SQL Server decide which to remove? And how would I force it to remove all but the one with the latest IDENTITY value?

    EMPLOYEE_ID     ATTENDANCE_DATE     AUTOID
    A001            2011-01-01          1
    A001            2011-01-01          2

If I run the query now, it removes the second row, the one with AUTOID 2. But that is the row I want to keep, because it is the latest one added; I am trying to remove all the others.

I am using SQL Server 2012 Express.
Thank you very much,
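For the record, with MIN(AUTOID) the survivor in each group is the earliest row, so the choice is deterministic rather than arbitrary. To keep the latest row instead, a minimal change to the same query:

    -- Keep the highest AUTOID per (EMPLOYEE_ID, ATTENDANCE_DATE) group,
    -- i.e. the most recently added duplicate:
    DELETE FROM dbo.ATTENDANCE
    WHERE AUTOID NOT IN (
        SELECT MAX(AUTOID)
        FROM dbo.ATTENDANCE
        GROUP BY EMPLOYEE_ID, ATTENDANCE_DATE
    )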

How can I request a flush of the PostgreSQL transaction logs?

Posted: 24 May 2013 08:53 PM PDT

I have the following problem: a "vertical" Linux distribution (Sophos UMT) comes with PostgreSQL 9.2 to store its configuration. Unfortunately, since the last update, it seems that the transaction logs (WAL) of some instances are growing without ever being flushed. This causes the pg_xlog folder to grow until it is several orders of magnitude larger than the base folder.

I'm now in a delicate situation: due to the excessive growth of the WAL files, the disk of one of these machines (a VM) will get full before Monday. I have already opened a support case with the vendor but, so far, they aren't being very helpful (they suggest we rebuild the VM with larger disks).

This database is never backed up because the software performs backups in a different way (it has its own backup procedure and sends backup files by email), and I suppose that this is the reason why the WAL files are growing so much.

I'm afraid I'm far from being a PostgreSQL expert, so it's very likely I am asking a silly or obvious question, but: what is the procedure for requesting that the WAL files be flushed?

So, ideally, I'm looking for a procedure that will allow me to flush these WAL files on the problematic system in order to buy myself enough time to get the vendor to issue a better fix.
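For what it's worth, a checkpoint can be requested explicitly, although on 9.2 WAL segments pinned by a failing archive_command or a large wal_keep_segments will not be freed by it. A sketch of the first things to look at (pg_ls_dir requires superuser):

    -- Request an immediate checkpoint; recyclable segments become reusable:
    CHECKPOINT;

    -- A failing archive_command pins WAL forever; check the archiving setup:
    SHOW archive_mode;
    SHOW archive_command;
    SHOW wal_keep_segments;

    -- How many WAL segment files are sitting in pg_xlog right now?
    SELECT count(*) FROM pg_ls_dir('pg_xlog') AS f WHERE f ~ '^[0-9A-F]{24}$';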

Updating a table with millions of records, it's been 4 days

Posted: 24 May 2013 03:36 PM PDT

I am currently updating a table with millions of records; it's been 4 days and the query is still executing.

I checked Activity Monitor; it shows that the query is running.

There are no errors at all in the event log.

Performance wise:

  • tempdb on disk A (850 GB free space)
  • database file on disk B (750 GB free space)
  • 16 GB RAM

Please suggest what I should do.

The query

    UPDATE dbo.table1
    SET costPercentage = ISNULL(t2.PaymentIndex, 1.0),
        t2.TopUp_Amt = (ISNULL(t2.PaymentIndex, 1.0) - 1.0)
            * ISNULL(dbo.table1.Initial_Tariff_Amt, 0.00),
        Total_Tariff_Inc_t2 = ISNULL(t2.PaymentIndex, 1.0)
            * ISNULL(dbo.table1.Initial_Tariff_Amt, 0.00)
    FROM dbo.table2 t2
    WHERE LEFT(dbo.test1.procodet, 3) = LEFT(t2.ProviderCode, 3) COLLATE database_default
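Two hedged observations. First, the only predicate linking the two tables is a LEFT(..., 3) comparison, which cannot use an index and can pair each row of one table with many rows of the other, so a multi-day runtime is plausible rather than a hang. Second, a separate session can show what the update is actually doing; 55 below is a placeholder for the UPDATE's real session id:

    -- Run from a second window: look at wait_type and blocking_session_id.
    SELECT r.session_id, r.status, r.command, r.wait_type, r.wait_time,
           r.blocking_session_id, r.cpu_time, r.logical_reads
    FROM sys.dm_exec_requests AS r
    WHERE r.session_id = 55;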

Troubleshooting a SQL Server instance that won't start

Posted: 24 May 2013 10:27 AM PDT

I've had a problem with one instance of SQL Server 2008 (the free version) not starting automatically. I always have to right click on the icon in the task bar and start it manually. It is set to "Automatic" start mode in the SQL Server Configuration Manager. I have checked the error log text file and there is nothing that looks likely. I don't see any attempt at automatic startup, just the log of my manual startup.

I then dug into the Event Viewer and I see two warning entries, given around computer boot time:

  1. Warning - SQLBrowser - The configuration of the SQL instance UPSWSDBSERVER is not valid.
  2. Warning - MSSQL$UPSWSDBSERVER -

    The description for Event ID ( 19011 ) in Source ( MSSQL$UPSWSDBSERVER ) cannot be found. The local computer may not have the necessary registry information or message DLL files to display messages from a remote computer. You may be able to use the /AUXSOURCE= flag to retrieve this description; see Help and Support for details. The following information is part of the event: (SpnRegister) : Error 1355.

UPSWSDBSERVER is the server instance for UPS WorldShip. This is NOT the instance that I am having trouble with. In fact I never have trouble with it... the UPS software works 100% reliably. The instance I am having trouble with is tied to a program called ShipWorks.

I have run the UPS diagnostic tool, which runs (what appears to be) a thorough check on its database; everything passes. I cannot see the UPSWSDBSERVER instance in the Configuration Manager. Where the heck is it? The Configuration Manager only lists the ShipWorks instance.

Are individual queries faster than joins?

Posted: 24 May 2013 12:20 PM PDT

Conceptual question: Are individual queries faster than joins? Or: should I try to squeeze every piece of information I want on the client side into one SELECT statement, or just use as many queries as seems convenient?

TL;DR: If my joined query takes longer than running individual queries, is this my fault or is this to be expected?

First off, I am not very database-savvy, so it may be just me, but I have noticed that when I have to get information from multiple tables, it is "often" faster to get this information via multiple queries on individual tables (maybe containing a simple inner join) and patch the data together on the client side than to try to write a (complex) joined query where I can get all the data in one query.

I have tried to put one extremely simple example together:

SQL Fiddle

Schema Setup:

    CREATE TABLE MASTER
    ( ID INT NOT NULL
    , NAME VARCHAR2(42 CHAR) NOT NULL
    , CONSTRAINT PK_MASTER PRIMARY KEY (ID)
    );

    CREATE TABLE DATA
    ( ID INT NOT NULL
    , MASTER_ID INT NOT NULL
    , VALUE NUMBER
    , CONSTRAINT PK_DATA PRIMARY KEY (ID)
    , CONSTRAINT FK_DATA_MASTER FOREIGN KEY (MASTER_ID) REFERENCES MASTER (ID)
    );

    INSERT INTO MASTER values (1, 'One');
    INSERT INTO MASTER values (2, 'Two');
    INSERT INTO MASTER values (3, 'Three');

    CREATE SEQUENCE SEQ_DATA_ID;

    INSERT INTO DATA values (SEQ_DATA_ID.NEXTVAL, 1, 1.3);
    INSERT INTO DATA values (SEQ_DATA_ID.NEXTVAL, 1, 1.5);
    INSERT INTO DATA values (SEQ_DATA_ID.NEXTVAL, 1, 1.7);
    INSERT INTO DATA values (SEQ_DATA_ID.NEXTVAL, 2, 2.3);
    INSERT INTO DATA values (SEQ_DATA_ID.NEXTVAL, 3, 3.14);
    INSERT INTO DATA values (SEQ_DATA_ID.NEXTVAL, 3, 3.7);

Query 1:

    select NAME from MASTER
    where ID = 1

Results:

    | NAME |
    --------
    |  One |

Query 2:

    select ID, VALUE from DATA
    where MASTER_ID = 1

Results:

    | ID | VALUE |
    --------------
    |  1 |   1.3 |
    |  2 |   1.5 |
    |  3 |   1.7 |

Query 3:

    select M.NAME, D.ID, D.VALUE
    from MASTER M INNER JOIN DATA D ON M.ID = D.MASTER_ID
    where M.ID = 1

Results:

    | NAME | ID | VALUE |
    ---------------------
    |  One |  1 |   1.3 |
    |  One |  2 |   1.5 |
    |  One |  3 |   1.7 |

Of course, I didn't measure any performance with these, but one may observe:

  • Queries 1+2 return the same amount of usable information as Query 3.
  • Queries 1+2 have to return 1 + 2×3 == 7 "data cells" to the client.
  • Query 3 has to return 3×3 == 9 "data cells" to the client, because the join naturally includes some redundancy in the result set.

Generalizing from this (as far-fetched as it is):

A joined query always has to return more data than the individual queries that retrieve the same amount of information. Since the database has to cobble the data together, one might assume that for large datasets the database has to do more work on a single joined query than on the individual ones, since (at least) it has to return more data to the client.

Would it follow that when I observe that splitting a client-side query into multiple queries yields better performance, this is just the way to go, or would it rather mean that I messed up the joined query?

How do you take export dump files on Oracle 7.3 running on Windows NT 3.51?

Posted: 24 May 2013 11:05 AM PDT

How do you take dump files on Oracle 7.3 running on Windows NT 3.51?

Should you go to the Orant/bin folder and run one of the programs there, like:

exp73 SYSTEM@[hostname] FULL=Y LOG=EXPORT.LOG  

It did not work.
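A hedged guess at the command line, since exp normally needs a password and an explicit FILE target ("manager" and "hostalias" are placeholders, and the exact flags of the 7.3-era exp may differ):

    exp73 system/manager@hostalias FULL=Y FILE=full.dmp LOG=export.log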


Trigger not Working

Posted: 24 May 2013 09:31 AM PDT

I created a trigger on table PENDING.

The PENDING table has 3 columns:

  • uniqueId
  • duration
  • maxDuration

I have another table, COUNT, with 2 columns:

  • req_id
  • total

Here is my trigger:

    CREATE TRIGGER plus3second BEFORE INSERT ON PENDING
    FOR EACH ROW
    BEGIN
      DECLARE req_id varchar(25);
      DECLARE total int(11);
      DECLARE duration int(2);

      SET req_id = SUBSTR(new.uniqueId, 1, 14);

      Select total into total from COUNT where req_id = 'req_id';

      IF total > 100 THEN
        SET duration = new.duration + 3;
        IF duration < new.maxDuration Then
          SET new.duration = duration;
        END IF;
      END IF;
    END

The trigger was created successfully. I fired these queries on COUNT and PENDING:

    insert into COUNT values ('77711422099653', 200);
    insert into PENDING (uniqueId, duration, maxDuration) values ('77711422099653919893277163', 3, 20);

But the trigger is not working... Where is the problem?
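Two likely culprits stand out in the posted code: the WHERE clause compares the column to the string literal 'req_id' instead of to the variable, and SELECT total INTO total lets the local variable shadow the column of the same name, so total never receives the table value. A sketch of the same trigger with unambiguous variable names (wrap it in DELIMITER directives when creating it from the mysql client):

    CREATE TRIGGER plus3second BEFORE INSERT ON PENDING
    FOR EACH ROW
    BEGIN
      DECLARE v_req_id   VARCHAR(25);
      DECLARE v_total    INT;
      DECLARE v_duration INT;

      SET v_req_id = SUBSTR(NEW.uniqueId, 1, 14);

      -- Compare the column to the variable, not to the literal 'req_id':
      SELECT total INTO v_total FROM COUNT WHERE req_id = v_req_id;

      IF v_total > 100 THEN
        SET v_duration = NEW.duration + 3;
        IF v_duration < NEW.maxDuration THEN
          SET NEW.duration = v_duration;
        END IF;
      END IF;
    END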

InnoDB table randomly takes a long time to execute insert queries; SHOW PROCESSLIST shows "query end"/"updating"

Posted: 24 May 2013 12:26 PM PDT

An InnoDB table randomly takes a long time to execute insert queries, and SHOW PROCESSLIST shows their state as "query end" or "updating". This happens for similar queries on the same table: the queries get stuck for five to fifteen minutes, and then the whole queued-up list is suddenly processed.

I cannot understand what happens in those moments.

The same queries, on the same MySQL version and configuration, have no problem on a different server.

Here are the InnoDB configuration variables:

    +---------------------------------+----------------------------------+
    | Variable_name                   | Value                            |
    +---------------------------------+----------------------------------+
    | innodb_adaptive_flushing        | ON                               |
    | innodb_adaptive_hash_index      | ON                               |
    | innodb_additional_mem_pool_size | 1073741824                       |
    | innodb_autoextend_increment     | 8                                |
    | innodb_autoinc_lock_mode        | 1                                |
    | innodb_buffer_pool_instances    | 1                                |
    | innodb_buffer_pool_size         | 8589934592                       |
    | innodb_change_buffering         | all                              |
    | innodb_checksums                | ON                               |
    | innodb_commit_concurrency       | 0                                |
    | innodb_concurrency_tickets      | 500                              |
    | innodb_data_file_path           | ibdata1:4G;ibdata2:4G:autoextend |
    | innodb_data_home_dir            | /var/lib/mysql                   |
    | innodb_doublewrite              | ON                               |
    | innodb_fast_shutdown            | 1                                |
    | innodb_file_format              | Antelope                         |
    | innodb_file_format_check        | ON                               |
    | innodb_file_format_max          | Antelope                         |
    | innodb_file_per_table           | ON                               |
    | innodb_flush_log_at_trx_commit  | 2                                |
    | innodb_flush_method             |                                  |
    | innodb_force_load_corrupted     | OFF                              |
    | innodb_force_recovery           | 0                                |
    | innodb_io_capacity              | 400                              |
    | innodb_large_prefix             | OFF                              |
    | innodb_lock_wait_timeout        | 50                               |
    | innodb_locks_unsafe_for_binlog  | ON                               |
    | innodb_log_buffer_size          | 33554432                         |
    | innodb_log_file_size            | 536870912                        |
    | innodb_log_files_in_group       | 2                                |
    | innodb_log_group_home_dir       | ./                               |
    | innodb_max_dirty_pages_pct      | 75                               |
    | innodb_max_purge_lag            | 0                                |
    | innodb_mirrored_log_groups      | 1                                |
    | innodb_old_blocks_pct           | 37                               |
    | innodb_old_blocks_time          | 0                                |
    | innodb_open_files               | 300                              |
    | innodb_print_all_deadlocks      | ON                               |
    | innodb_purge_batch_size         | 20                               |
    | innodb_purge_threads            | 0                                |
    | innodb_random_read_ahead        | OFF                              |
    | innodb_read_ahead_threshold     | 56                               |
    | innodb_read_io_threads          | 10                               |
    | innodb_replication_delay        | 0                                |
    | innodb_rollback_on_timeout      | OFF                              |
    | innodb_rollback_segments        | 128                              |
    | innodb_spin_wait_delay          | 6                                |
    | innodb_stats_method             | nulls_equal                      |
    | innodb_stats_on_metadata        | ON                               |
    | innodb_stats_sample_pages       | 8                                |
    | innodb_strict_mode              | OFF                              |
    | innodb_support_xa               | ON                               |
    | innodb_sync_spin_loops          | 30                               |
    | innodb_table_locks              | ON                               |
    | innodb_thread_concurrency       | 0                                |
    | innodb_thread_sleep_delay       | 10000                            |
    | innodb_use_native_aio           | ON                               |
    | innodb_use_sys_malloc           | ON                               |
    | innodb_version                  | 5.5.30                           |
    | innodb_write_io_threads         | 10                               |
    +---------------------------------+----------------------------------+

Can someone suggest why this happens, and what exactly is going on?
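A hedged starting point is to capture what the engine is doing during a stall; "query end"/"updating" pile-ups are often flush- or lock-related, and the LOG and FILE I/O sections of the engine status, plus the transaction tables, can show which:

    -- During a stall: check checkpoint age vs. log capacity (LOG section)
    -- and pending fsyncs/reads/writes (FILE I/O section):
    SHOW ENGINE INNODB STATUS\G

    -- Are sessions waiting on row locks?
    SELECT * FROM information_schema.INNODB_TRX;
    SELECT * FROM information_schema.INNODB_LOCK_WAITS;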

MySQL Err 1032 on only 1 Slave

Posted: 24 May 2013 10:50 AM PDT

I'm getting a very strange error on one of my MySQL servers and I'd appreciate some help.

The Setup

I have two separate environments:

  • Environment 1 consists of the Master server and 1 Slave.
  • Environment 2 consists of 1 Slave.

(There are other MySQL servers in both environments but they're working fine with identical setups/settings so they can be ignored in this instance.)

In environment 1 the Slave is replicating from the Master. Both servers are configured to use:

  • READ-COMMITTED
  • innodb_flush_method=O_DIRECT
  • innodb_support_xa = 0
  • innodb_flush_log_at_trx_commit = 1
  • and MIXED replication.

Replication in Environment 1 is working 100% and there are no issues at all.

Environment 2's Slave is replicating from the Master in environment 1. It's configured identically to the Slave in Environment 1 albeit with a different SERVER-ID.

The Problem

However, I keep getting Err 1032 on only 1 specific table on that slave.

I've double-checked the settings and they are correct, and I've re-created the databases in case something went wrong with the restore, yet I keep running into this error.

I know it's not latency or the program writing to the master, as only this 1 table is affected (and there are other servers in Environment 2 replicating correctly from Environment 1), and the Slave in Environment 1 works off the exact same Master with the same settings and no error.

I'd greatly appreciate any recommendations or advice.
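Error 1032 means the slave cannot find a row the event expects, i.e. that table's copy on the Environment 2 slave has drifted from the master. A sketch of the usual triage; note that skipping the event only unblocks replication and leaves the drift in place (Percona's pt-table-checksum can then confirm and locate the drift):

    SHOW SLAVE STATUS\G                      -- Last_SQL_Error identifies the failing event

    STOP SLAVE;
    SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;   -- temporary unblock only
    START SLAVE;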

Merging two Access tables into one

Posted: 24 May 2013 01:30 PM PDT

I have mostly identical tables; however, there are slight differences, like missing columns in the older ones. I want to merge 5 tables into one. Is there an easy way to do this?
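One common pattern, sketched here with hypothetical table and column names since the real schemas aren't shown: create the merged table from the widest (newest) table, then append the older ones, supplying Null for the columns they lack:

    -- Build the merged table from the table that has every column:
    SELECT ID, FullName, Email INTO Merged
    FROM Table2013;

    -- Append an older table that lacks the Email column:
    INSERT INTO Merged (ID, FullName, Email)
    SELECT ID, FullName, Null
    FROM Table2010;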

Configuring PostgreSQL for read performance

Posted: 24 May 2013 11:58 AM PDT

Our system writes a lot of data (it's a kind of Big Data system). The write performance is good enough for our needs, but the read performance is really too slow.

The primary key (constraint) structure is similar for all our tables: a timestamp (timestamp), an index (smallint), and a key (integer).

A table can have millions of rows, even billions of rows, and a read request is usually for a specific period (timestamp/index) and tag. It's common for a query to return around 200k rows. Currently, we can read about 15k rows per second, but we need to be 10 times faster. Is this possible, and if so, how?

Note: PostgreSQL is packaged with our software, so the hardware is different from one client to another.

[Edit] Added details below, performance was better for this test because I don't have access to the real setup right now. I will update as soon as I can access the setup.

[Edit2] Applied dezso's suggestions; see the configuration changes below and the specs of the server used for testing. Yes, it's a VM used for testing; the VM host is a Server 2008 R2 x64 with 24.0 GB of RAM.

Server Spec (Virtual Machine VMWare)

    Server 2008 R2 x64
    2.00 GB of memory
    Intel Xeon W3520 @ 2.67GHz (2 cores)

postgresql.conf optimisations

    shared_buffers = 512MB              (default: 32MB)
    effective_cache_size = 1024MB       (default: 128MB)
    checkpoint_segments = 32            (default: 3)
    checkpoint_completion_target = 0.9  (default: 0.5)
    default_statistics_target = 1000    (default: 100)
    work_mem = 100MB                    (default: 1MB)
    maintenance_work_mem = 256MB        (default: 16MB)

Table Definition

CREATE TABLE "AnalogTransition"  (    "KeyTag" integer NOT NULL,    "Timestamp" timestamp with time zone NOT NULL,    "TimestampQuality" smallint,    "TimestampIndex" smallint NOT NULL,    "Value" numeric,    "Quality" boolean,    "QualityFlags" smallint,    "UpdateTimestamp" timestamp without time zone, -- (UTC)    CONSTRAINT "PK_AnalogTransition" PRIMARY KEY ("Timestamp" , "TimestampIndex" , "KeyTag" ),    CONSTRAINT "FK_AnalogTransition_Tag" FOREIGN KEY ("KeyTag")        REFERENCES "Tag" ("Key") MATCH SIMPLE        ON UPDATE NO ACTION ON DELETE NO ACTION  )  WITH (    OIDS=FALSE,    autovacuum_enabled=true  );  

Query

The query takes about 30 seconds to execute in pgAdmin3, but we would like the same result in under 5 seconds if possible.

    SELECT
        "AnalogTransition"."KeyTag",
        "AnalogTransition"."Timestamp" AT TIME ZONE 'UTC',
        "AnalogTransition"."TimestampQuality",
        "AnalogTransition"."TimestampIndex",
        "AnalogTransition"."Value",
        "AnalogTransition"."Quality",
        "AnalogTransition"."QualityFlags",
        "AnalogTransition"."UpdateTimestamp"
    FROM "AnalogTransition"
    WHERE "AnalogTransition"."Timestamp" >= '2013-05-16 00:00:00.000'
      AND "AnalogTransition"."Timestamp" <= '2013-05-17 00:00:00.00'
      AND ("AnalogTransition"."KeyTag" = 56 OR "AnalogTransition"."KeyTag" = 57
        OR "AnalogTransition"."KeyTag" = 58 OR "AnalogTransition"."KeyTag" = 59
        OR "AnalogTransition"."KeyTag" = 60)
    ORDER BY "AnalogTransition"."Timestamp" DESC, "AnalogTransition"."TimestampIndex" DESC
    LIMIT 500000;

Explain (Edit2: Updated)

"Limit  (cost=0.00..125668.31 rows=500000 width=33) (actual time=2.193..3241.319 rows=500000 loops=1)"  "  Buffers: shared hit=190147"  "  ->  Index Scan Backward using "PK_AnalogTransition" on "AnalogTransition"  (cost=0.00..389244.53 rows=1548698 width=33) (actual time=2.187..1893.283 rows=500000 loops=1)"  "        Index Cond: (("Timestamp" >= '2013-05-16 01:00:00-04'::timestamp with time zone) AND ("Timestamp" <= '2013-05-16 15:00:00-04'::timestamp with time zone))"  "        Filter: (("KeyTag" = 56) OR ("KeyTag" = 57) OR ("KeyTag" = 58) OR ("KeyTag" = 59) OR ("KeyTag" = 60))"  "        Buffers: shared hit=190147"  "Total runtime: 3863.028 ms"  

In my latest test, it took 7 minutes to select my data! See below.

Explain (Edit3)

"Limit  (cost=0.00..313554.08 rows=250001 width=35) (actual time=0.040..410721.033 rows=250001 loops=1)"  "  ->  Index Scan using "PK_AnalogTransition" on "AnalogTransition"  (cost=0.00..971400.46 rows=774511 width=35) (actual time=0.037..410088.960 rows=250001 loops=1)"  "        Index Cond: (("Timestamp" >= '2013-05-22 20:00:00-04'::timestamp with time zone) AND ("Timestamp" <= '2013-05-24 20:00:00-04'::timestamp with time zone) AND ("KeyTag" = 16))"  "Total runtime: 411044.175 ms"  

Thanks a lot for your help!

MySQL stored routine performance while using PREPARE

Posted: 24 May 2013 08:10 PM PDT

Instead of maintaining stored routines for each database in my current environment, I have decided to create a separate database just for storing stored routines. I mainly use them for reporting. All databases have the same schema structure.

Stored routine body:

    PROCEDURE `sp_name`(IN db_name  VARCHAR(50)
                       ,IN stardate DATE
                       ,IN enddate  DATE)
    ......
    SET @sql_query = CONCAT(
    'SELECT
        .....
        .....
        .....
    FROM ', db_name, '.view
    WHERE
        .... stardate, .... enddate .....
    GROUP BY
        ..... , .....');

    PREPARE sql_query FROM @sql_query;
    EXECUTE sql_query;
    DEALLOCATE PREPARE sql_query;

Now I am facing a performance problem. When executing the stored routine from each database, execution time is less than 1 second; with the prepared statement it is 5-6 seconds. Could someone please advise whether there is room for any improvement?

DB2 db2fm process

Posted: 24 May 2013 04:09 PM PDT

The server has been up for 365 days; however, I see some weird repeated processes.

Are these normal?

ps -fea | grep db2fm

    db2inst1  643284  229516  29   May 25      - 212564:06 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
    db2inst1  671770  229516  56   May 14      - 227447:02 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
    db2inst1  757794 1237058   0   Apr 19  pts/7  0:00 /bin/sh /home/db2inst1/sqllib/bin/db2cc
    db2inst1  774232  229516  30   Sep 25      - 94218:54 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
    db2inst1  962750  229516  30   Jul 18      - 145256:01 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
    db2inst1  999450  229516  29   Aug 17      - 117710:27 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
    db2inst1 1179898  229516  58   Nov 02      - 75788:49 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S

ps -fea | grep db2agent

    db2inst1  409770  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
    db2inst1  450750  778412   0   Apr 18      -  0:03 db2agent (idle) 0
    db2inst1  618688  680100   0   Apr 19      -  0:00 db2agent (idle) 0
    db2inst1  651440  680100   0   Nov 17      -  0:20 db2agent (DATABASEA) 0
    db2inst1  655508  491676   0   Apr 19      -  0:04 db2agent (idle) 0
    db2inst1  684038  680100   0   Mar 23      -  0:03 db2agent (DATABASEA) 0
    db2inst1  790706  491676   0   Apr 19      -  0:00 db2agent (idle) 0
    db2inst1  880672  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
    db2inst1  913438  778412   0   Nov 16      -  0:20 db2agent (idle) 0
    db2inst1  946182  491676   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
    db2inst1  991312  778412   0   Apr 17      -  0:16 db2agent (idle) 0
    db2inst1 1077466  491676   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
    db2inst1 1134726  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
    db2inst1 1142964  491676   0   Apr 19      -  0:00 db2agent (idle) 0
    db2inst1 1233112  491676   0   Apr 19      -  0:00 db2agent (idle) 0
    db2inst1 1261748  778412   0   Jun 15      -  0:18 db2agent (idle) 0
    db2inst1 1384678  778412   0   Mar 23      -  0:27 db2agent (idle) 0
    db2inst1 1404936  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
    db2inst1 1421368  778412   0   Mar 22      -  0:04 db2agent (idle) 0
    db2inst1 1445936  491676   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
    db2inst1 1482864  491676   0   Jun 16      -  0:31 db2agent (idle) 0
    db2inst1 1503440  778412   0   Jun 15      -  0:56 db2agent (idle) 0
    db2inst1 1519842  778412   0   Mar 23      -  0:00 db2agent (DATABASEA) 0
    db2inst1 1531946  680100   0   Apr 19      -  0:00 db2agent (idle) 0
    db2inst1 1572884  680100   0   Apr 19      -  0:00 db2agent (idle) 0

Other info

    oslevel -g
    Fileset                                 Actual Level        Maintenance Level
    -----------------------------------------------------------------------------
    bos.rte                                 5.3.0.40            5.3.0.0

    db2fm -s -S
    Gcf module 'fault monitor' is NOT operable
    Gcf module '/home/db2inst1/sqllib/lib/libdb2gcf.a' state is AVAILABLE

    uptime
    02:14PM   up 365 days,  12:51,  6 users,  load average: 6.69, 6.89, 6.97

    db2level
    DB21085I  Instance "db2inst1" uses "64" bits and DB2 code release "SQL08020"
    with level identifier "03010106".
    Informational tokens are "DB2 v8.1.1.64", "s040812", "U498350", and FixPak "7"

How to snapshot or version a relational database when data changes?

Posted: 24 May 2013 02:10 PM PDT

My system receives data feeds. Each data feed will end up creating inserts and/or updates to most tables in the (relational) database.

I need to capture the snapshot of what the entire database looked like after each data feed is received. Basically I need a way to version the database each time a data feed is run through the system.

Note that by capturing a snapshot, I don't mean literally taking a snapshot of the database, but rather writing history records or some such mechanism so that I can query the database across "versions" to see what changed between versions (among other use cases).

Do known data model designs exist that can capture a snapshot of a database version like this?
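One known family of designs is the history (or audit/temporal) table: row versions are never updated in place, and every version is stamped with the feed run that produced it. A minimal sketch with hypothetical names:

    -- One feed run = one version number.
    CREATE TABLE feed_version (
        version_id INT PRIMARY KEY,
        loaded_at  TIMESTAMP NOT NULL
    );

    -- Each change closes the old row version and opens a new one.
    CREATE TABLE customer_history (
        customer_id INT NOT NULL,
        name        VARCHAR(100),
        valid_from  INT NOT NULL REFERENCES feed_version (version_id),
        valid_to    INT REFERENCES feed_version (version_id),  -- NULL = current
        PRIMARY KEY (customer_id, valid_from)
    );

    -- "What did customer 42 look like as of version 7?"
    SELECT *
    FROM customer_history
    WHERE customer_id = 42
      AND valid_from <= 7
      AND (valid_to IS NULL OR valid_to > 7);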

Linked Server to SQL Server 2000 server from SQL Server 2012

Posted: 24 May 2013 03:18 PM PDT

I have a new SQL Server 2012 instance, and am migrating a database from SQL Server 2000. Some of the stored procedures in the DB I am moving reference another database back on that 2000 server. What is the best way to handle this?

Would a linked server help? I am having trouble creating a link for testing. Some people recommend using ODBC; others seem to use the SQL Server 10 client driver (SQLNCLI10). Using the SQL Server 10 client driver seems better to me; are there issues installing it on a SQL Server 2012 instance?
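For what it's worth, the 2012-era native client (SQLNCLI11) cannot connect to SQL Server 2000 at all, which is why ODBC and SQLNCLI10 keep coming up. A sketch of the ODBC route; 'SQL2000DSN' is a hypothetical system DSN pointing at the 2000 server and 'linkuser' a hypothetical SQL login there:

    EXEC sp_addlinkedserver
         @server     = N'LEGACY2000',
         @srvproduct = N'',
         @provider   = N'MSDASQL',
         @datasrc    = N'SQL2000DSN';

    EXEC sp_addlinkedsrvlogin
         @rmtsrvname  = N'LEGACY2000',
         @useself     = 'FALSE',
         @locallogin  = NULL,          -- applies to all local logins
         @rmtuser     = N'linkuser',
         @rmtpassword = N'********';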

Optimization of a select statement

Posted: 24 May 2013 07:09 PM PDT

I'm using MySQL and have a table user_data like this:

    user_id         int(10) unsigned
    reg_date        int(10) unsigned
    carrier         char(1)

reg_date is the Unix timestamp of the registration date, and carrier is the carrier type, whose possible values can ONLY be 'D', 'A' or 'V'.

I need to write a SQL statement to select the number of registered users per carrier on each day from 2013/01/01 to 2013/01/31, so the desired result would look like:

    2013/01/01   D   10
    2013/01/01   A   31
    2013/01/01   V   24
    2013/01/02   D    9
    2013/01/02   A   23
    2013/01/02   V   14
    ....
    2013/01/31   D   11
    2013/01/31   A   34
    2013/01/31   V   22

Can anyone help me with this question? I'm required to give the BEST answer, which means I can add an index if necessary, but I need to keep the query efficient.

This is what I have right now:

    select FLOOR((FROM_UNIXTIME(reg_date)) / 1000000) as reg_day, carrier, count(user_id) as user_count
    from user_data
    where reg_date >= UNIX_TIMESTAMP('2013-01-01 00:00:00')
      and reg_date < UNIX_TIMESTAMP('2013-02-01 00:00:00')
    group by reg_day, carrier;

Thanks!
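One hedged refinement of the query above (untested against the real data): DATE(FROM_UNIXTIME(...)) expresses the per-day bucket directly instead of through numeric division, and a composite index can cover the whole query so only the index is read:

    -- Covering index: the range on reg_date drives the scan; carrier and
    -- user_id ride along so no table rows need to be touched.
    ALTER TABLE user_data ADD INDEX idx_reg_carrier (reg_date, carrier, user_id);

    SELECT DATE(FROM_UNIXTIME(reg_date)) AS reg_day,
           carrier,
           COUNT(*) AS user_count
    FROM user_data
    WHERE reg_date >= UNIX_TIMESTAMP('2013-01-01 00:00:00')
      AND reg_date <  UNIX_TIMESTAMP('2013-02-01 00:00:00')
    GROUP BY reg_day, carrier;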

The question has been moved here; please find all the updates at the link.

Rent weekly cost database design

Posted: 24 May 2013 01:09 PM PDT

I have a database containing a table BUILDING, each row of which holds details about one building. Another table, BUILDING_UNIT, contains rows with details about single building units, each referring via a foreign key to the owning BUILDING.ID. The BUILDING_UNIT table also refers to a table CATEGORY, which tells whether the BUILDING_UNIT is of category A, B, C or D, again with a foreign key pointing to CATEGORY.ID.

Now the final cost of renting a building unit depends on its building, its category, the number of days it is rented, and the specific period of the year. We only rent weekly, so I might as well use weeks only; however, I'd like the design to be as flexible as possible for the future.

I cannot settle on a table design that can represent this situation.

Do I have to use a table with coefficients for each day of the year and then a table with coefficients for A,B,C,D and then a table with coefficients for each Building and then somehow calculate a result?

Is there some standard and recognized implementation for problems of this type?

Thank you

EDIT: Note that the solution should abstract away from the formula for calculating the cost, which might change in the future. However, I might be asked to make a specific week of the year for building unit X inside building Y cost $300, and the week after $600. Generally, building units inside the same building cost the same in the same week, but that might change in the future, so I'd like to handle all the specific cases already.
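One recognized approach, sketched with hypothetical names (the ids 1/2/3 are placeholders): a rate table whose rows may target a single unit, a whole building, or a category, resolved most-specific-first. A per-week override for one unit is then just an ordinary row:

    CREATE TABLE RENTAL_RATE (
        ID               INT PRIMARY KEY,
        BUILDING_UNIT_ID INT,            -- narrowest scope
        BUILDING_ID      INT,
        CATEGORY_ID      INT,            -- broadest scope
        WEEK_START       DATE NOT NULL,
        WEEKLY_PRICE     DECIMAL(10,2) NOT NULL
    );

    -- Price of unit 1 (building 2, category 3) for a given week;
    -- the most specific row wins (LIMIT 1 / TOP 1 depending on the DBMS):
    SELECT WEEKLY_PRICE
    FROM RENTAL_RATE
    WHERE WEEK_START = '2013-07-01'
      AND (BUILDING_UNIT_ID = 1 OR BUILDING_ID = 2 OR CATEGORY_ID = 3)
    ORDER BY CASE
               WHEN BUILDING_UNIT_ID IS NOT NULL THEN 1
               WHEN BUILDING_ID      IS NOT NULL THEN 2
               ELSE 3
             END
    LIMIT 1;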

How to add 'root' MySQL user back on MAMP?

Posted: 24 May 2013 12:09 PM PDT

In phpMyAdmin, I removed the 'root' user by mistake while I was logged in as 'root'. How can I add the 'root' user back on MAMP?
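The commonly documented recovery route, assuming MAMP's mysqld can be restarted with the --skip-grant-tables option: connect without credentials, reload the grant tables so account-management statements work again, then recreate the account ('root' as the password mirrors MAMP's usual default; adjust as needed):

    FLUSH PRIVILEGES;   -- reload grant tables so GRANT works under --skip-grant-tables

    GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost'
        IDENTIFIED BY 'root' WITH GRANT OPTION;   -- creates the account in MySQL 5.x

    FLUSH PRIVILEGES;

Afterwards, restart MySQL normally, without --skip-grant-tables.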

Database Link from Oracle 10g on Windows over to MySQL database

Posted: 24 May 2013 04:02 PM PDT

Didn't see any answers on this subject so hoping for a miracle! :-)

I've got an Oracle 10g database running on Windows Server 2003 and I'd like to create a database link to an external MySQL database which is running in the Amazon Web Service RDS cloud.

We may have the option of upgrading the Oracle version to 11g if that would help?

Has anyone done anything like this before?

Thanks for your help! Josh

Is there a way to recover a dropped mysql database?

Posted: 24 May 2013 10:23 AM PDT

I accidentally dropped a MySQL database on my server. Are there any ways to recover a dropped database?
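Short of restoring a backup, the usual route is point-in-time recovery: restore the last dump, then replay the binary log up to just before the DROP DATABASE statement (e.g. with mysqlbinlog). A quick check of whether that door is even open; if log_bin is OFF and no backup exists, the data is effectively gone:

    SHOW VARIABLES LIKE 'log_bin';   -- must be ON for binlog replay
    SHOW BINARY LOGS;                -- which binlog files are still available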
