Monday, September 9, 2013

[how to] ER Schema "Optimisation"

ER Schema "Optimisation"

Posted: 09 Sep 2013 07:20 PM PDT

I am trying to make an ER schema for a person. The person will have a Current Address and a Permanent Address. So far, both Current Address and Permanent Address are composite attributes with the same sub-attributes.

Picture http://www.flickr.com/photos/101617879@N08/9715094700/

If you look at the picture, you can see that it is pretty much a copy of the same attributes. How can I combine them or otherwise make this better?
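
One common way to avoid duplicating the attribute group is to pull Address out into its own (possibly weak) entity with an address-type discriminator. A minimal relational sketch of that idea, with illustrative column names (the actual attributes in the linked diagram may differ):

CREATE TABLE person (
    person_id INT PRIMARY KEY,
    name      VARCHAR(100)
);

CREATE TABLE address (
    person_id    INT NOT NULL REFERENCES person (person_id),
    address_type VARCHAR(10) NOT NULL,  -- 'CURRENT' or 'PERMANENT'
    street       VARCHAR(100),
    city         VARCHAR(50),
    state        VARCHAR(50),
    zip          VARCHAR(10),
    PRIMARY KEY (person_id, address_type)
);

In ER terms, Address becomes one entity (or one composite attribute on a relationship) used twice, rather than two copies of the same attribute group.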

Classifieds Database Design

Posted: 09 Sep 2013 07:40 PM PDT

I have always worked with CMSs, but I am trying to get into using frameworks like Laravel and Yii. My main issue is that when working with CMSs I didn't have to think much about the database design, since it was done for me. I have my plan drawn out on paper, but I am not really sure where to go from here.

I am trying to develop a Craigslist clone, but a little more specific. I have Googled all over for designs, and this is currently what I have.

However, I want certain categories to have specific fields. Some categories may have fields in common with other categories, but not all categories are the same.

For example:

    - Categories
      - Electronics
        - Title
        - Type
        - Description
        - Year
      - Furniture
        - Title
        - Type
        - Description
        - Brand
    ....

Those are just two examples, but I have a huge list of Categories and the required fields for each category.

My current plan is to load all of these fields into the ad table. What effect will this have on performance? At some point there could be 60 fields attached to the ad table, but only 5-10 may be filled at a time, and the others would be empty/NULL.
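
For comparison, a sketch of the other common layout, where category-specific fields live in their own rows instead of as sparse columns on the ad table (all names here are illustrative, not a recommendation):

CREATE TABLE category (
    category_id INT PRIMARY KEY,
    name        VARCHAR(50) NOT NULL
);

-- which fields a category requires, e.g. Electronics -> Year, Furniture -> Brand
CREATE TABLE category_field (
    field_id    INT PRIMARY KEY,
    category_id INT NOT NULL REFERENCES category (category_id),
    name        VARCHAR(50) NOT NULL
);

CREATE TABLE ad (
    ad_id       INT PRIMARY KEY,
    category_id INT NOT NULL REFERENCES category (category_id),
    title       VARCHAR(100),
    description TEXT
);

-- one row per filled-in field of an ad, instead of 60 mostly-NULL columns
CREATE TABLE ad_field_value (
    ad_id    INT NOT NULL REFERENCES ad (ad_id),
    field_id INT NOT NULL REFERENCES category_field (field_id),
    value    VARCHAR(255),
    PRIMARY KEY (ad_id, field_id)
);

This trades the sparse columns for extra joins (the classic EAV trade-off), so neither layout is free; what matters most is how those fields need to be filtered and displayed.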

What is the best way to go about associating images with ads? I was thinking to just create an assets folder and create subfolders based on the ad id, and upload images to those subfolders of the corresponding ad id. Something like...

    - Public
      - Assets
        - Images
          - 1 (post id = 1)
            - img001.jpg
            - img002.jpg
          ...

What's the best way to set up this kind of database? Would sticking to MySQL be best for this? What if I want some states to have certain categories but not others?

DB2 10.1 Client throws SQL0552N running a script

Posted: 09 Sep 2013 05:05 PM PDT

I am trying to run an SQL script against a remote database using DB2 10.1 on AIX 6.1. I have granted my admin user (Adm101) on the database server SECADM. But when I run my script from the remote server with the DB2 client (Client101) I get:

SQL0552N "Client101" does not have the privilege to perform operation "CREATE TABLE". SQLSTATE=42502   

Adm101 is listed in the database but Client101 is not. How can I create a table from the remote client?
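
For reference, SQL0552N usually means the connected authorization ID lacks the named privilege, which a user holding SECADM or DBADM can grant explicitly; a rough sketch (the schema name is illustrative):

GRANT CREATETAB ON DATABASE TO USER client101;
-- if the tables go into a schema Client101 does not own, this may also be needed:
GRANT CREATEIN ON SCHEMA myschema TO USER client101;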

Which Postgresql Replication Solution to Use? (Asynchronous Multimaster / Bucardo?)

Posted: 09 Sep 2013 03:54 PM PDT

First, I'm not a DBA, so pardon me if any of this question seems "off."

I've written a peer-to-peer multiplayer game (the client) which connects to one of multiple servers for match making.

Currently, there is only one server (a linode, let's call it Server 1) which runs the game's custom matchmaking process and PostgreSQL 8.4 (I will be upgrading this to 9.1, 9.2 or 9.3 if necessary).

The matchmaking process uses libpq asynchronously for all SQL statements. Statements are not too complex, so load balancing is not an issue.

I plan to add more linodes (call them Servers 2, 3, 4, etc.) that run the matchmaking process and PostgreSQL as necessary. The challenge is that I want high availability for all clients. If Server 1 is unreachable, then Server 2 can be used instead, with access to all the same data.

The original plan was to have all servers connect to Server 1's database and asynchronously send SQL statements via libpq. The problem there is that if Server 1 is ever temporarily offline or unreachable, then every other server will fail.

The "simplest" solution I can imagine is for each of the servers to completely mirror the database. If Server 1 is down, clients can connect to Server 2 which reads and writes to its own database, replicating any changes to Servers 3 and 4 immediately, and Server 1 once it comes back online.

In this fashion, every server would hold an entire "mirrored" copy of the database.

After reading through the introductory sections of PostgreSQL 9.3's documentation on replication, it seems like the way to implement this solution would be asynchronous multi-master replication. (Is Bucardo the only choice here?)

The thing I'm worried about with asynchronous replication is SQL inserts. When a new client plays for the first time, a player database entry is created. If Servers 2, 3, and 4 are online and Server 1 is offline, will there be any issues with 1, 3 or 4 if 2 inserts a new player row? (Imagine 1 coming back online and immediately trying to insert another player row.)

Is asynchronous multi-master the right way to go for the above mentioned scenario?

Or, is there a simpler or easier solution that I am overlooking? Perhaps one that doesn't require middleware, but just uses PostgreSQL 9.3's built in replication?

SSRS partial printing issue

Posted: 09 Sep 2013 07:34 PM PDT

We have SSRS 2005 that publishes reports. The reports are often as big as 1000-3000 pages. The problem is that when users try to print a report, it prints some of the pages and stops, and this happens randomly. For instance, 10 people can print at approximately the same time and 4 could face the problem. When I look at the event log it shows an ASP.Net warning that says:

'The report execution <'session ID'> has expired or cannot be found. (rsExecutionNotFound)'.

In a development setting, I tried upgrading it to SSRS 2012 and moved all the reports. But I got the same error message:

An error occurred during printing. (0x80004005).

And the error log is:

'The report execution <'session ID'> has expired or cannot be found. (rsExecutionNotFound)'.

What I have already done is make changes to the config file and set EnableAuthPersistance to FALSE and CleanupCycleMinutes to 30 on the SSRS 2012. I have already ruled out network and hardware related issues by trying it on different servers.

Do you know why this is happening and why it is happening so randomly?

How to create a data warehouse with Kettle ETL and OLAP?

Posted: 09 Sep 2013 01:51 PM PDT

I'm building a data warehouse for an ERP system in Java and I have some doubts.

First, I'm using Kettle ETL to transform the data from the transactional tables and store it in a separate database for the data warehouse; for example, joining the data in the invoices, debit notes, and credit notes tables from the transactional database, formatting it, and saving it in the sales table of the data warehouse database.

Data in these tables can be modified or deleted within the past year, so the ETL checks and updates data from the last year, scheduled with Kitchen (a cron job).

For this reason the ETL only runs occasionally. But if I want to generate a sales report on demand, how do I show up-to-date data without affecting the speed of the report?

Because if I run the ETL right before the report, I add a big delay to it.

Second, in what cases should I use OLAP, and for what?

thanks

SSRS 2008 parameters question

Posted: 09 Sep 2013 12:43 PM PDT

SSRS allows you to create 5 types of parameters: Text, Boolean, Date/Time, Integer and Float.

I created a new Integer parameter which allows the user to enter a minimum price and view a report. However, if the user tries to enter, for example, 100.00, the report throws an error. Is there a way to convert that parameter to allow decimal data types?


Oracle no privileges on tablespace USERS

Posted: 09 Sep 2013 03:58 PM PDT

I have a brand new Oracle database that is giving the error:

ORA-01950: no privileges on tablespace 'USERS'  

I have done:

alter user kainaw quota 100M on 'USERS';
grant unlimited tablespace to kainaw;

Still, a single insert returns that error. Other than disk quota, what else causes the "no privileges on tablespace 'USERS'" error?

UPDATE:

Oracle version is 11.2.0.3.0 (11g). I am logging in from the command prompt on the server. So, I alter user kainaw as sysdba. Then, I log out and log in as user kainaw to test:

insert into i.test values (1);  

Note: i.test is a table with only a number field. I get the error above. I logout as kainaw, login as sysdba, play with permissions, logout, login, test, error, logout, login, ...

"Restore With Replace" or drop/restore with Instant Initialization on?

Posted: 09 Sep 2013 07:09 PM PDT

What exactly does the restore argument "With Replace" do? I'm looking at needing to restore a database back to a beginning point on a regular basis, and I've been trying to figure out if there are any disadvantages to using Restore With Replace versus Dropping/Deleting the database entirely and restoring it.

Will "With Replace" wipe the log files and reset whatever bits might be left in system databases as well? It seems that it would be much quicker, as I don't have to wait for the database to finish dropping (the database in question is around 2TB). I've already checked the TechNet article on the Restore arguments, it doesn't go into this specific question.

MySQL Lat/Lon Analytics [on hold]

Posted: 09 Sep 2013 10:20 AM PDT

db n00b here,

What is the best way to store GPS data points along with a product ID (or key) for location-based advertising? Each advertisement will post its location along with its currently displayed product to our central server. Our server will then store that data in an analytics table to show customers how much their product ad is displayed.

Should I compound (sum up the times) the data points daily, hourly or just throw them all in there (the database) one by one?
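
A minimal sketch of the "one row per data point" option, with illustrative names and types (whether to also keep hourly or daily rollup tables is exactly the trade-off being asked about):

CREATE TABLE ad_impression (
    impression_id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    product_id    INT NOT NULL,
    latitude      DECIMAL(9,6) NOT NULL,
    longitude     DECIMAL(9,6) NOT NULL,
    displayed_at  DATETIME NOT NULL,
    KEY idx_product_time (product_id, displayed_at)  -- supports per-product reporting over time
);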

Normalizing nearly identical tables

Posted: 09 Sep 2013 09:29 AM PDT

Background

I'm managing a relatively small database project in which we are adding support for reporting involving status updates for items in our product table. I was a bit thrown into this and I've only got about a month of ever writing SQL.

Problem Description

At its core we have a central table [product] with a bigint unique key. We now want to record various messages that come in from a satellite application. The messages come in 2 major types (MessageA and MessageB) that are almost identical; MessageB contains an extra column that MessageA doesn't possess. Also of note is that there is no overlap between the 2 message_type columns and no columns are NULL. That is to say, both messages have their own set of message_types.

MessageA:

  • id
  • timestamp
  • message_type
  • product_id
  • floor_id

MessageB:

  • id
  • timestamp
  • message_type
  • product_id
  • floor_id
  • section_number

What I tried

My initial design was to add 2 tables, one for each new message type, exactly mirroring their columns. This "seemed" more "normalized" based off my month or so of SQL experience. But after I started writing a query that tried to combine the data into a report, I couldn't come up with a non-redundant query to build the dataset. My primitive query looked like:

Pseudocode

(
    SELECT MessageA.*
    FROM product
    JOIN MessageA ON MessageA.product_id = product.id
    WHERE <filtering criteria on product>
)
UNION ALL
(
    SELECT MessageB.*
    FROM product
    JOIN MessageB ON MessageB.product_id = product.id
    WHERE <identical to first filter>
)

I'm a little paranoid about the long-term performance implications of querying [product] twice, since it's our biggest table (it adds up to 1M rows a year, maybe more), the DB runs largely unmaintained off-site on consumer-level hardware for an average life-cycle of 3-5 years between upgrades, and we have had some reports of issues trickle in from our largest sites. These 2 new tables would potentially grow at 3-7 times the rate of [product] (possibly 5 million rows per year or more).

I started to think it might be simpler to just have 1 table and make section_number nullable. If section_number IS NULL then it is of type A, otherwise it is of type B.
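
A sketch of that single-table variant (types are guesses from the column names; they would need to match the real MessageA/MessageB definitions):

CREATE TABLE message (
    id             BIGINT PRIMARY KEY,
    timestamp      DATETIME NOT NULL,
    message_type   INT NOT NULL,
    product_id     BIGINT NOT NULL REFERENCES product (id),
    floor_id       INT NOT NULL,
    section_number INT NULL  -- NULL means "MessageA", non-NULL means "MessageB"
);

With this shape, the report needs only one join against product, so the filter on product runs once instead of twice.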

The actual question

Is this a good idea?

Should I be worrying about this optimization?

Is this even an optimization or just a more accommodating design?

I'm looking for some guidance whether I should shape the data based on "input" format or "output". Normalization is elegant but at what point should I bend the schema to look like the desired output structure?

SQL Server: deadlocked on lock communication buffer resources

Posted: 09 Sep 2013 09:23 AM PDT

What could be a possible reason for this deadlock type (not deadlocks in general)?

Does being deadlocked on "lock communication buffer resources" indicate that the system is low on memory and the buffer count ran out of its limit?

Detailed Error: Transaction (Process ID 59) was deadlocked on lock communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction

How do I efficiently get "the most recent corresponding row"?

Posted: 09 Sep 2013 06:39 PM PDT

I have a query pattern that must be very common, but I don't know how to write an efficient query for it. I want to look up the rows of a table that correspond to "the most recent date not after" the rows of another table.

I have a table, "inventory" say, which represents the inventory I hold on a certain day.

date       | good | quantity
-----------------------------
2013-08-09 | egg  | 5
2013-08-09 | pear | 7
2013-08-02 | egg  | 1
2013-08-02 | pear | 2

and a table, "price" say, which holds the price of a good on a given day

date       | good | price
--------------------------
2013-08-07 | egg  | 120
2013-08-06 | pear | 200
2013-08-01 | egg  | 110
2013-07-30 | pear | 220

How can I efficiently get the "most recent" price for each row of the inventory table, i.e.

date       | pricing date | good | quantity | price
----------------------------------------------------
2013-08-09 | 2013-08-07   | egg  | 5        | 120
2013-08-09 | 2013-08-06   | pear | 7        | 200
2013-08-02 | 2013-08-01   | egg  | 1        | 110
2013-08-02 | 2013-07-30   | pear | 2        | 220

I know one way of doing this:

select inventory.date, max(price.date) as pricing_date, good
from inventory, price
where inventory.date >= price.date
and inventory.good = price.good
group by inventory.date, good

and then join this query again to inventory. For large tables even doing the first query (without joining again to inventory) is very slow. However, the same problem is quickly solved if I simply use my programming language to issue one max(price.date) ... where price.date <= date_of_interest ... order by price.date desc limit 1 query for each date_of_interest from the inventory table, so I know there is no computational impediment. I would, however, prefer to solve the whole problem with a single SQL query, because it would allow me to do further SQL processing on the result of the query.

Is there a standard way to do this efficiently? It feels like it must come up often and that there should be a way to write a fast query for it.

[EDIT: I'm using Postgres, but an SQL-generic answer would be appreciated]
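
Since PostgreSQL is an option: one pattern that often fits this "most recent corresponding row" problem is a lateral join (PostgreSQL 9.3+), which picks the latest matching price row per inventory row and can be driven by an index on price (good, date). A sketch:

SELECT i.date,
       p.date  AS pricing_date,
       i.good,
       i.quantity,
       p.price
FROM   inventory i
JOIN LATERAL (
       SELECT date, price
       FROM   price
       WHERE  price.good = i.good
       AND    price.date <= i.date
       ORDER BY price.date DESC
       LIMIT  1
     ) p ON true;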

Plan cache memory: parameterized SQL vs stored procedures

Posted: 09 Sep 2013 10:27 AM PDT

In making a case to disallow parameterized SQL in my company's development environment, the lead developer related a story about how the last time they used parameterized SQL the server had major performance issues. He said this was because the plan caching ate up almost all of the available memory on the server, and switching to stored procedures cleared up the performance issues.

My question: is there a major difference in the memory footprint of a compiled/cached stored procedure and the cached plan for a parameterized SQL?

I have a guess that they also reduced the number of calls by going to procs, and that probably had as much impact or more than going to procs by itself, but I don't know.
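
For what it's worth, the relative footprint of the two kinds of cached plans can be measured directly on the server in question rather than argued from anecdote; a sketch against the plan-cache DMV (SQL Server 2005+):

SELECT objtype,                                              -- 'Proc', 'Prepared', 'Adhoc', ...
       COUNT(*)                                              AS plan_count,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024      AS total_mb
FROM   sys.dm_exec_cached_plans
GROUP  BY objtype
ORDER  BY total_mb DESC;

'Prepared' rows cover parameterized SQL and 'Proc' rows cover stored procedures, so comparing those two buckets speaks directly to the question.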

Percona Xtradb Cluster : How to speed up insert?

Posted: 09 Sep 2013 09:15 AM PDT

I recently installed a 3-node full-master cluster based on Percona XtraDB (a very easy install). But now I need to do some tuning to speed up INSERT/UPDATE requests. Currently, I make around 100 insertions every 5 minutes, plus around 400 updates in the same period. All of these operations took less than 3 minutes when I was on a single-server architecture. Now, with the 3-node cluster, they take more than 5 minutes...

Is there any tuning I can do to speed up these operations? Here is my current cnf configuration:

[mysqld]
datadir=/var/lib/mysql
user=mysql

wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_address=gcomm://dbnode01,dbnode02,dbnode03

binlog_format=ROW
default_storage_engine=InnoDB
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
wsrep_node_address=1.2.3.4
wsrep_cluster_name=my_cluster
wsrep_sst_method=xtrabackup
wsrep_sst_auth="user:password"

Here is the hardware configuration of the 3 servers:

Node#1

CPU: Single Processor Quad Core Xeon 3470 - 2.93Ghz (Lynnfield) - 1 x 8MB cache w/HT
RAM: 8 GB DDR3 Registered 1333
HDD: 500GB SATA II

Node#2

CPU: Single Processor Quad Core Xeon 1270 V2 - 3.50GHz (Ivy Bridge) - 1 x 8MB cache w/HT
RAM: 4 GB DDR3 1333
HDD: 1.00TB SATA II

Node#3

CPU: Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz (4-Cores)
RAM: 32G
HDD: 2T

UPDATE

Actually, there are around 2.4M records (24 fields each) in the table targeted by the INSERT/UPDATE statements (6 fields indexed).
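
Not a tuned answer, but for context these are the settings most often discussed for Galera write throughput; the values below are purely illustrative and would need testing against this workload:

[mysqld]
wsrep_slave_threads            = 4     # apply replicated writesets in parallel
innodb_flush_log_at_trx_commit = 2     # relax per-node log flushing; Galera still certifies each writeset
wsrep_provider_options         = "gcs.fc_limit=256; gcs.fc_factor=0.99"   # loosen flow control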

mysql innodb space x did not exist in memory

Posted: 09 Sep 2013 09:33 AM PDT

In my InnoDB log I get the errors below. How do I fix this? What does it mean? Some tables are corrupted, but not all.

InnoDB: space id 1753 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/eav_attribute,
InnoDB: space id 1777 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/catalog_eav_attribute,
InnoDB: space id 1626 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/catalog_category_entity_int,
InnoDB: space id 1609 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/catalog_category_entity_text,
InnoDB: space id 1610 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/catalog_category_entity_varchar,
InnoDB: space id 1611 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/customer_eav_attribute,
InnoDB: space id 1746 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/customer_eav_attribute_website,
InnoDB: space id 1747 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/customer_form_attribute,
InnoDB: space id 1754 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/eav_attribute_label,
InnoDB: space id 1779 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/eav_attribute_option,
InnoDB: space id 1780 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/eav_attribute_option_value,
InnoDB: space id 1781 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/eav_entity_attribute,
InnoDB: space id 1784 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/eav_form_element,
InnoDB: space id 1792 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/googleshopping_attributes,
InnoDB: space id 1804 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/salesrule_product_attribute,
InnoDB: space id 1516 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/googlebase_attributes,
InnoDB: space id 1798 did not exist in memory. Retrying an open.
130906 16:31:27  InnoDB: error: space object of table test_magento/eav_attribute_set,

Is there any rule of thumb to optimize sql queries like this

Posted: 09 Sep 2013 07:31 PM PDT

This is my first question here, although I've been helped by this forum over a hundred times. I am having difficulty optimizing an SQL query; it takes hours to execute, and the record set is quite large. The query was not written by me. Just to find the bottleneck, I tried removing conditional clauses, but that doesn't make any difference. Indexing of the IDs is done.

Can any SQL guru here throw some light on it? Is there any room left for fine-tuning in the query below? The server hosting the database is DB2. I'm not too much of a pro in SQL. Thanks as always. Regards, Nuh

This is the query:

SELECT
    HEALTH_INSURANCE.RISK_DETAIL_ID,
    POLICY_RISK_COVER.RISK_COVER_ID,
    HEALTH_INSURANCE.RD_POLICY_SYSTEM_NO,
    RD_POLICY_END_NO_IDX,
    HEALTH_INSURANCE.RD_POLICY_ID,
    HEALTH_INSURANCE.RD_LEVEL1_ID,
    HEALTH_INSURANCE.RD_SUM_INSURED_AMT_LC,
    HEALTH_INSURANCE.RD_PREMIUM_AMT_LC,
    POLICY_RISK_COVER.PREMIUM_AMOUNT_FC,
    POLICY_RISK_COVER.SUM_INSURED_AMT_FC,
    HEALTH_INSURANCE.RD_REC_TYPE,
    HEALTH_INSURANCE.RD_EFFECT_FROM_DT,
    HEALTH_INSURANCE.RD_EFFECT_TO_DT,
    HEALTH_INSURANCE.RD_END_EFFECT_FROM_DT,
    HEALTH_INSURANCE.SEX_MAS_CD,
    HEALTH_INSURANCE.MARITAL_STATUS_CD,
    HEALTH_INSURANCE.EMP_CATG,
    HEALTH_INSURANCE.NO_OF_DEPENDENTS,
    CAST((CASE
              WHEN HEALTH_INSURANCE.AUTHORITY_LETTER_NO IS NULL
              THEN HEALTH_INSURANCE.EMP_AL_NO
              ELSE HEALTH_INSURANCE.AUTHORITY_LETTER_NO
          END) AS INT) AS EMP_AL_NO,
    HEALTH_INSURANCE.DOB,
    HEALTH_INSURANCE.EFF_DATE,
    HEALTH_INSURANCE.EFF_DATE2,
    HEALTH_INSURANCE.NAME,
    CAST((SUBSTR(HEALTH_INSURANCE.RELATIONSHIP_CD, 5, 2)) AS INT) AS RELATIONSHIP_CD_S,
    HEALTH_INSURANCE.RELATIONSHIP_CD,
    HEALTH_INSURANCE.DESIGNATION,
    HEALTH_INSURANCE.BRANCH,
    HEALTH_INSURANCE.BANK_ACCOUNT,
    HEALTH_INSURANCE.BANK_BRANCH_NAME,
    HEALTH_INSURANCE.PRE_EXISTING_AILMENT,
    HEALTH_INSURANCE.AUTHORITY_LETTER_NO,
    HEALTH_INSURANCE.AGE,
    HEALTH_INSURANCE.REGION,
    HEALTH_INSURANCE.CNIC,
    HEALTH_INSURANCE.CO_CODE,
    HEALTH_INSURANCE.EMP_LOCATION,
    HEALTH_INSURANCE.SUB_LOCATION,
    CATEGORY_LIMIT_HEADER.CLH_SYSTEM_NO,
    CATEGORY_LIMIT_HEADER.CTH_SYS_ID,
    CATEGORY_LIMIT_HEADER.CTH_POL_SYS_ID,
    CATEGORY_LIMIT_HEADER.CTH_END_NO_IDX,
    CATEGORY_LIMIT_HEADER.CTH_END_SR_NO,
    CATEGORY_LIMIT_HEADER.CTH_CATEGORY,
    CATEGORY_LIMIT_DETAIL.CLD_SYS_ID,
    CATEGORY_LIMIT_DETAIL.CLDH_SYS_ID,
    CATEGORY_LIMIT_DETAIL.CLD_COVER_CD,
    CATEGORY_LIMIT_DETAIL.CLD_END_IDX,
    CATEGORY_LIMIT_DETAIL.CLD_COVER_DESC,
    CATEGORY_LIMIT_DETAIL.CLD_CLM_TYPE_LIMIT,
    CATEGORY_LIMIT_DETAIL.CLD_CLM_REL,
    CATEGORY_LIMIT_DETAIL.CLD_CLM_AGE_FROM,
    CATEGORY_LIMIT_DETAIL.CLD_CLM_AGE_TO,
    CATEGORY_LIMIT_DETAIL.CLD_CLM_RB_LIMIT,
    CATEGORY_LIMIT_DETAIL.CLD_CATEGORY_LIMIT_FC,
    CATEGORY_LIMIT_DETAIL.CLD_CATEGORY_PREM_FC
FROM
    DB2ADMIN.HEALTH_INSURANCE AS HEALTH_INSURANCE
    INNER JOIN DB2ADMIN.POLICY_RISK_COVER AS POLICY_RISK_COVER
        ON HEALTH_INSURANCE.RD_POLICY_SYSTEM_NO = POLICY_RISK_COVER.RC_POLICY_SYSTEM_NO
        AND TRIM(RD_LEVEL1_ID) = TRIM(RC_LEVEL1_ID)
    INNER JOIN DB2ADMIN.CATEGORY_LIMIT_HEADER AS CATEGORY_LIMIT_HEADER
        ON HEALTH_INSURANCE.RD_POLICY_ID = CATEGORY_LIMIT_HEADER.CTH_POL_SYS_ID
        AND HEALTH_INSURANCE.EMP_CATG = CATEGORY_LIMIT_HEADER.CTH_CATEGORY
    INNER JOIN DB2ADMIN.CATEGORY_LIMIT_DETAIL AS CATEGORY_LIMIT_DETAIL
        ON CATEGORY_LIMIT_HEADER.CTH_SYS_ID = CATEGORY_LIMIT_DETAIL.CLDH_SYS_ID
        AND POLICY_RISK_COVER.RISK_COVER_CD = CATEGORY_LIMIT_DETAIL.CLD_COVER_CD
        AND HEALTH_INSURANCE.RELATIONSHIP_CD = CATEGORY_LIMIT_DETAIL.CLD_CLM_REL
WHERE
    COALESCE(HEALTH_INSURANCE.AGE, 1) BETWEEN CATEGORY_LIMIT_DETAIL.CLD_CLM_AGE_FROM
                                          AND CATEGORY_LIMIT_DETAIL.CLD_CLM_AGE_TO

MySQL shutdown unexpectedly

Posted: 09 Sep 2013 08:15 PM PDT

2013-09-09 10:21:44 5776 [Note] Plugin 'FEDERATED' is disabled.
2013-09-09 10:21:44 1624 InnoDB: Warning: Using innodb_additional_mem_pool_size is DEPRECATED. This option may be removed in future releases, together with the option innodb_use_sys_malloc and with the InnoDB's internal memory allocator.
2013-09-09 10:21:44 5776 [Note] InnoDB: The InnoDB memory heap is disabled
2013-09-09 10:21:44 5776 [Note] InnoDB: Mutexes and rw_locks use Windows interlocked functions
2013-09-09 10:21:44 5776 [Note] InnoDB: Compressed tables use zlib 1.2.3
2013-09-09 10:21:44 5776 [Note] InnoDB: Not using CPU crc32 instructions
2013-09-09 10:21:44 5776 [Note] InnoDB: Initializing buffer pool, size = 16.0M
2013-09-09 10:21:44 5776 [Note] InnoDB: Completed initialization of buffer pool
2013-09-09 10:21:44 5776 [Note] InnoDB: Highest supported file format is Barracuda.
2013-09-09 10:21:44 5776 [Note] InnoDB: The log sequence numbers 0 and 0 in ibdata files do not match the log sequence number 19862295 in the ib_logfiles!
2013-09-09 10:21:44 5776 [Note] InnoDB: Database was not shutdown normally!
2013-09-09 10:21:44 5776 [Note] InnoDB: Starting crash recovery.
2013-09-09 10:21:44 5776 [Note] InnoDB: Reading tablespace information from the .ibd files...
2013-09-09 10:21:44 5776 [ERROR] InnoDB: Attempted to open a previously opened tablespace. Previous tablespace casualme/scraped_user uses space ID: 2 at filepath: .\casualme\scraped_user.ibd. Cannot open tablespace mysql/innodb_index_stats which uses space ID: 2 at filepath: .\mysql\innodb_index_stats.ibd
InnoDB: Error: could not open single-table tablespace file .\mysql\innodb_index_stats.ibd
InnoDB: We do not continue the crash recovery, because the table may become
InnoDB: corrupt if we cannot apply the log records in the InnoDB log to it.
InnoDB: To fix the problem and start mysqld:
InnoDB: 1) If there is a permission problem in the file and mysqld cannot
InnoDB: open the file, you should modify the permissions.
InnoDB: 2) If the table is not needed, or you can restore it from a backup,
InnoDB: then you can remove the .ibd file, and InnoDB will do a normal
InnoDB: crash recovery and ignore that table.
InnoDB: 3) If the file system or the disk is broken, and you cannot remove
InnoDB: the .ibd file, you can set innodb_force_recovery > 0 in my.cnf
InnoDB: and force InnoDB to continue crash recovery here.

Hello guys, I need your help. My phpMyAdmin is not working. How can I make it work again? I have a very large amount of data; please help me fix this. I just want to get my database back.

I have the following files: db.opt, scraped_user.frm, scraped_user.ibd. Is there a way to recover my database from these files? I just want to get the tables' data.

I want to reinstall my XAMPP, but is there a way to back up my DB when I can't run it? Like copying the folder 'DBNAME' with the files db.opt, scraped_user.frm, scraped_user.ibd?
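
The log's own suggestion (3) corresponds to a my.ini/my.cnf change like the one below before restarting MySQL; the conventional advice is to start at 1, raise it only if startup still fails, treat the instance as read-only while it is set, and dump the data out as soon as the server starts:

[mysqld]
innodb_force_recovery = 1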

Why are these two INSERTs deadlocking? Is it the trigger? What does this deadlock trace 1222 log tell me?

Posted: 09 Sep 2013 04:29 PM PDT

We are seeing intermittent deadlocks in production when receiving multiple simultaneous API requests. Each request basically culminates in an INSERT statement into the same table, which is where we see the deadlock. I wrote a double-threaded console application that can reliably reproduce the issue by simply executing two API requests simultaneously, but only in production, not in staging. (This leads me to believe that there is something about our staging database -- possibly the volume of data, SQL Server 2012 vs 2005, or index tuning -- that differs from production in such a way that the deadlock is avoided. The code is identical, as I believe is the schema.)

Since I can now reproduce the deadlock, I was able to convince my boss to enable trace flag 1222 temporarily, and captured the log below:

Date,Source,Severity,Message  09/05/2013 16:32:19,spid71,Unknown,DBCC TRACEOFF 1222<c/> server process ID (SPID) 71. This is an informational message only; no user action is required.  09/05/2013 16:30:55,spid17s,Unknown,waiter id=processf34868 mode=X requestType=wait  09/05/2013 16:30:55,spid17s,Unknown,waiter-list  09/05/2013 16:30:55,spid17s,Unknown,owner id=processf35c18 mode=X  09/05/2013 16:30:55,spid17s,Unknown,owner-list  09/05/2013 16:30:55,spid17s,Unknown,objectlock lockPartition=0 objid=428945000 subresource=FULL dbid=8 objectname=MyDB.DomainTransferRAR id=lock120a72c80 mode=X associatedObjectId=428945000  09/05/2013 16:30:55,spid17s,Unknown,waiter id=processf35c18 mode=X requestType=convert  09/05/2013 16:30:55,spid17s,Unknown,waiter-list  09/05/2013 16:30:55,spid17s,Unknown,owner id=processf34868 mode=IX  09/05/2013 16:30:55,spid17s,Unknown,owner-list  09/05/2013 16:30:55,spid17s,Unknown,objectlock lockPartition=0 objid=2096426938 subresource=FULL dbid=8 objectname=MyDB.DomainTransferRANT id=lock11de95480 mode=IX associatedObjectId=2096426938  09/05/2013 16:30:55,spid17s,Unknown,resource-list  09/05/2013 16:30:55,spid17s,Unknown,VALUES (@p0<c/>  09/05/2013 16:30:55,spid17s,Unknown,(@p0 uniqueidentifier<c/>@p1 int<c/>@p2 int<c/>@p3 varchar(8000)<c/>@p4 char(5)<c/>@p5 int<c/>@p6 datetime<c/>@p7 datetime<c/>@p8 varchar(8000)<c/>@p9 char(5)<c/>@p10 int<c/>@p11 datetime<c/>@p12 datetime<c/>@p13 varchar(8000)<c/>@p14 int<c/>@p15 xml<c/>@p16 datetime<c/>@p17 datetime<c/>@p18 varchar(8000)<c/>@p19 datetime<c/>@p20 datetime<c/>@p21 varchar(8000)<c/>@p22 bit<c/>@p23 varchar(8000)<c/>@p24 varchar(8000)<c/>@p25 uniqueidentifier)INSERT INTO [dbo].[DomainTransferRANT]([DomainTransferRANTGUID]<c/> [PrebookedBillPackageId]<c/> [domainID]<c/> [DomainName]<c/> [GainingWNAccountID]<c/> [GainingRegistrantID]<c/> [dtGainingRequestSent]<c/> [dtGainingResponseReceived]<c/> [DomainTransferGainingPartyRespCode]<c/> [LosingWNAccountID]<c/> [LosingRegistrantID]<c/> [dtLosingRequestSent]<c/> [dtLosingResponseReceived]<c/> [DomainTransferLosingPartyRespCode]<c/> [SubmittedBillPackageId]<c/> [ExtraInfo]<c/> [dtRequestSentToRegistry]<c/> [dtResponseFromRegistry]<c/> [DomainTransferRegistryRespCode]<c/> [dtDeleted]<c/> [dtLastChecked]<c/> [DomainTransferStatusCode]<c/> [SendConfirmationForEachDomain]<c/> [SummaryStatus]<c/> [WHOISBeforeTransfer])  09/05/2013 16:30:55,spid17s,Unknown,inputbuf  09/05/2013 16:30:55,spid17s,Unknown,unknown  09/05/2013 16:30:55,spid17s,Unknown,frame procname=unknown line=1 sqlhandle=0x000000000000000000000000000000000000000000000000  09/05/2013 16:30:55,spid17s,Unknown,VALUES (@p0<c/> @p1<c/> @p2<c/> @p3<c/> @p4<c/> @p5<c/> @p6<c/> @p7<c/> @p8<c/> @p9<c/> @p10<c/> @p11<c/> @p12<c/> @p13<c/> @p14<c/> @p15<c/> @p16<c/> @p17<c/> @p18<c/> @p19<c/> @p20<c/> @p21<c/> @p22<c/> @p23<c/> @p24)  09/05/2013 16:30:55,spid17s,Unknown,INSERT INTO [dbo].[DomainTransferRANT]([DomainTransferRANTGUID]<c/> [PrebookedBillPackageId]<c/> [domainID]<c/> [DomainName]<c/> [GainingWNAccountID]<c/> [GainingRegistrantID]<c/> [dtGainingRequestSent]<c/> [dtGainingResponseReceived]<c/> [DomainTransferGainingPartyRespCode]<c/> [LosingWNAccountID]<c/> [LosingRegistrantID]<c/> [dtLosingRequestSent]<c/> [dtLosingResponseReceived]<c/> [DomainTransferLosingPartyRespCode]<c/> [SubmittedBillPackageId]<c/> [ExtraInfo]<c/> [dtRequestSentToRegistry]<c/> [dtResponseFromRegistry]<c/> [DomainTransferRegistryRespCode]<c/> [dtDeleted]<c/> [dtLastChecked]<c/> [DomainTransferStatusCode]<c/> [SendConfirmationForEachDomain]<c/> 
[SummaryStatus]<c/> [WHOISBeforeTransfer])  09/05/2013 16:30:55,spid17s,Unknown,frame procname=adhoc line=1 stmtstart=738 stmtend=2322 sqlhandle=0x0200000038d7940c60b76abf51c3cf1bc774fe27ba136260  09/05/2013 16:30:55,spid17s,Unknown,select @iLockDummy = 1 from DomainTransferRANT with (TABLOCKX<c/> HOLDLOCK) OPTION(EXPAND VIEWS)  09/05/2013 16:30:55,spid17s,Unknown,frame procname=MyDB.tri_DomainTransferRANT_InsUpd line=68 stmtstart=10336 stmtend=10544 sqlhandle=0x03000800c8c31e75b259fa002fa200000000000000000000  09/05/2013 16:30:55,spid17s,Unknown,executionStack  09/05/2013 16:30:55,spid17s,Unknown,process id=processf35c18 taskpriority=0 logused=2992 waitresource=OBJECT: 8:2096426938:0  waittime=171 ownerId=26880196297 transactionname=user_transaction lasttranstarted=2013-09-05T16:30:55.710 XDES=0x1c7b18b60 lockMode=X schedulerid=2 kpid=34404 status=suspended spid=69 sbid=0 ecid=0 priority=0 transcount=2 lastbatchstarted=2013-09-05T16:30:55.763 lastbatchcompleted=2013-09-05T16:30:55.710 clientapp=.Net SqlClient Data Provider hostname=WEB1 hostpid=13216 loginname=client isolationlevel=serializable (4) xactid=26880196297 currentdb=8 lockTimeout=4294967295 clientoption1=673316896 clientoption2=128056  09/05/2013 16:30:55,spid17s,Unknown,VALUES (@p0<c/>  09/05/2013 16:30:55,spid17s,Unknown,(@p0 uniqueidentifier<c/>@p1 int<c/>@p2 int<c/>@p3 varchar(8000)<c/>@p4 char(5)<c/>@p5 int<c/>@p6 datetime<c/>@p7 datetime<c/>@p8 varchar(8000)<c/>@p9 char(5)<c/>@p10 int<c/>@p11 datetime<c/>@p12 datetime<c/>@p13 varchar(8000)<c/>@p14 int<c/>@p15 xml<c/>@p16 datetime<c/>@p17 datetime<c/>@p18 varchar(8000)<c/>@p19 datetime<c/>@p20 datetime<c/>@p21 varchar(8000)<c/>@p22 bit<c/>@p23 varchar(8000)<c/>@p24 varchar(8000)<c/>@p25 uniqueidentifier)INSERT INTO [dbo].[DomainTransferRANT]([DomainTransferRANTGUID]<c/> [PrebookedBillPackageId]<c/> [domainID]<c/> [DomainName]<c/> [GainingWNAccountID]<c/> [GainingRegistrantID]<c/> [dtGainingRequestSent]<c/> [dtGainingResponseReceived]<c/> [DomainTransferGainingPartyRespCode]<c/> [LosingWNAccountID]<c/> [LosingRegistrantID]<c/> [dtLosingRequestSent]<c/> [dtLosingResponseReceived]<c/> [DomainTransferLosingPartyRespCode]<c/> [SubmittedBillPackageId]<c/> [ExtraInfo]<c/> [dtRequestSentToRegistry]<c/> [dtResponseFromRegistry]<c/> [DomainTransferRegistryRespCode]<c/> [dtDeleted]<c/> [dtLastChecked]<c/> [DomainTransferStatusCode]<c/> [SendConfirmationForEachDomain]<c/> [SummaryStatus]<c/> [WHOISBeforeTransfer])  09/05/2013 16:30:55,spid17s,Unknown,inputbuf  09/05/2013 16:30:55,spid17s,Unknown,unknown  09/05/2013 16:30:55,spid17s,Unknown,frame procname=unknown line=1 sqlhandle=0x000000000000000000000000000000000000000000000000  09/05/2013 16:30:55,spid17s,Unknown,VALUES (@p0<c/> @p1<c/> @p2<c/> @p3<c/> @p4<c/> @p5<c/> @p6<c/> @p7<c/> @p8<c/> @p9<c/> @p10<c/> @p11<c/> @p12<c/> @p13<c/> @p14<c/> @p15<c/> @p16<c/> @p17<c/> @p18<c/> @p19<c/> @p20<c/> @p21<c/> @p22<c/> @p23<c/> @p24)  09/05/2013 16:30:55,spid17s,Unknown,INSERT INTO [dbo].[DomainTransferRANT]([DomainTransferRANTGUID]<c/> [PrebookedBillPackageId]<c/> [domainID]<c/> [DomainName]<c/> [GainingWNAccountID]<c/> [GainingRegistrantID]<c/> [dtGainingRequestSent]<c/> [dtGainingResponseReceived]<c/> [DomainTransferGainingPartyRespCode]<c/> [LosingWNAccountID]<c/> [LosingRegistrantID]<c/> [dtLosingRequestSent]<c/> [dtLosingResponseReceived]<c/> [DomainTransferLosingPartyRespCode]<c/> [SubmittedBillPackageId]<c/> [ExtraInfo]<c/> [dtRequestSentToRegistry]<c/> [dtResponseFromRegistry]<c/> [DomainTransferRegistryRespCode]<c/> 
[dtDeleted]<c/> [dtLastChecked]<c/> [DomainTransferStatusCode]<c/> [SendConfirmationForEachDomain]<c/> [SummaryStatus]<c/> [WHOISBeforeTransfer])  09/05/2013 16:30:55,spid17s,Unknown,frame procname=adhoc line=1 stmtstart=738 stmtend=2322 sqlhandle=0x0200000038d7940c60b76abf51c3cf1bc774fe27ba136260  09/05/2013 16:30:55,spid17s,Unknown,select @iLockDummy = 1 from DomainTransferRAR with (TABLOCKX<c/> HOLDLOCK) OPTION(EXPAND VIEWS)  09/05/2013 16:30:55,spid17s,Unknown,frame procname=MyDB.tri_DomainTransferRANT_InsUpd line=67 stmtstart=10140 stmtend=10334 sqlhandle=0x03000800c8c31e75b259fa002fa200000000000000000000  09/05/2013 16:30:55,spid17s,Unknown,executionStack  09/05/2013 16:30:55,spid17s,Unknown,process id=processf34868 taskpriority=0 logused=3000 waitresource=OBJECT: 8:428945000:0  waittime=171 ownerId=26880196295 transactionname=user_transaction lasttranstarted=2013-09-05T16:30:55.710 XDES=0x1c7b18370 lockMode=X schedulerid=2 kpid=13932 status=suspended spid=93 sbid=0 ecid=0 priority=0 transcount=2 lastbatchstarted=2013-09-05T16:30:55.770 lastbatchcompleted=2013-09-05T16:30:55.710 clientapp=.Net SqlClient Data Provider hostname=MYDB hostpid=13216 loginname=client isolationlevel=serializable (4) xactid=26880196295 currentdb=8 lockTimeout=4294967295 clientoption1=673316896 clientoption2=128056  09/05/2013 16:30:55,spid17s,Unknown,process-list  09/05/2013 16:30:55,spid17s,Unknown,deadlock victim=processf35c18  09/05/2013 16:30:55,spid17s,Unknown,deadlock-list  09/05/2013 16:30:08,spid71,Unknown,DBCC TRACEON 1222<c/> server process ID (SPID) 71. This is an informational message only; no user action is required.  

One thing to note is that there is a trigger on the insert into the relevant table. The trigger is necessary to determine a status code for the overall record, which may depend on sibling records in the same table. For a long time we thought the trigger was the cause of the deadlocks, so we added increasingly aggressive locking hints to the trigger, culminating in the current setup where we do a TABLOCKX, HOLDLOCK on the relevant table(s) before the critical section. We figured this would completely prevent the deadlocks, at the expense of some performance, by effectively serializing all inserts. But it seems that is not the case. As I understand it, something else prior to our exclusive table locks must already be holding a shared or update lock. But what?

Other info that might help you help me: The table DomainTransferRANT is heavily indexed. Its primary key is a non-clustered GUID. There is a clustered index on another important INT column. And there are 7 other non-clustered indexes. Finally, there are several foreign key constraints.

xbase sql query for limiting the output

Posted: 09 Sep 2013 05:16 PM PDT

I want to query my xBase database and limit the output, like:

SELECT * FROM TB_TEST LIMIT 5;  

But it does not work on xBase.

Looking for a database design to model an availability problem (large data sets are expected) [on hold]

Posted: 09 Sep 2013 06:43 PM PDT

I am looking for a database design to store and query information about the availability of cars in a car-sharing community, where users can rent cars provided by other users.

The data which is relevant for this query will be:

  • general availability of a car - set by the car owner
  • rent contracts with a specific or recurring date/time - which reduces the availability

Some example queries that should be possible:

  1. get all cars available tomorrow from 8am to 2:30pm
  2. get all cars continuously available from 2013-10-01 until 2013-10-30
  3. get all cars available each Tuesday to Thursday from 10am to 6pm from 2013-11-01 until 2013-11-24

The last query is the most difficult. It seems that information about when cars are booked (not available) cannot be stored in just one date field, and it cannot be stored as a string (i.e. something similar to iCalendar), because that would be hard to query. I guess the dates need to be "normalized" somehow.

I think I am not the first one facing this kind of problem - does anyone know of open-source projects or literature dealing with this topic? Or could anyone help with a draft?

BTW: Large data sets are expected (100,000 to 1,000,000 cars and 10,000,000 or more booking contracts per year).
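
One way to make all three example queries expressible is to "normalize" both availability and bookings into concrete time ranges (recurring rules expanded into individual rows), so every query becomes a range-overlap check. A rough sketch with illustrative names, showing example query 1:

CREATE TABLE car (
    car_id   INT PRIMARY KEY,
    owner_id INT NOT NULL
);

-- general availability declared by the owner, expanded into concrete ranges
CREATE TABLE availability (
    car_id    INT NOT NULL REFERENCES car (car_id),
    starts_at TIMESTAMP NOT NULL,
    ends_at   TIMESTAMP NOT NULL
);

-- rent contracts that reduce availability; recurring contracts expanded the same way
CREATE TABLE booking (
    car_id    INT NOT NULL REFERENCES car (car_id),
    starts_at TIMESTAMP NOT NULL,
    ends_at   TIMESTAMP NOT NULL
);

-- example query 1: cars available tomorrow from 8am to 2:30pm
SELECT a.car_id
FROM   availability a
WHERE  a.starts_at <= TIMESTAMP '2013-09-10 08:00:00'
AND    a.ends_at   >= TIMESTAMP '2013-09-10 14:30:00'
AND    NOT EXISTS (
         SELECT 1
         FROM   booking b
         WHERE  b.car_id    = a.car_id
         AND    b.starts_at < TIMESTAMP '2013-09-10 14:30:00'
         AND    b.ends_at   > TIMESTAMP '2013-09-10 08:00:00'
       );

Queries 2 and 3 reduce to the same overlap logic over longer or repeated windows, and a composite index on (car_id, starts_at, ends_at) in both tables keeps the anti-join cheap at the stated volumes.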

SqlPackage does not pick up variables from profile

Posted: 09 Sep 2013 06:20 PM PDT

I want to upgrade a database using .dacpac and sqlpackage.exe

Here is how I run SqlPackage:

SqlPackage.exe
    /Action:Publish
    /SourceFile:"my.dacpac"
    /Profile:"myprofile.publish.xml"

The error I get is:

* The following SqlCmd variables are not defined in the target scripts: foo.

I have verified that the myprofile.publish.xml file does contain that variable:

<ItemGroup>
  <SqlCmdVariable Include="foo">
    <Value>bc\local</Value>
  </SqlCmdVariable>
</ItemGroup>

I also verified that the project that creates the dacpac publishes successfully from within Visual Studio using myprofile.publish.xml.

What else could I be missing?

(I'm using SQL Server 2012)
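
One thing that may be worth trying: SqlPackage also accepts SQLCMD variable values directly on the command line via the /Variables: parameter (short form /v:), which can help rule out whether the profile's variables are being picked up at all. A sketch using the variable name from the error:

SqlPackage.exe
    /Action:Publish
    /SourceFile:"my.dacpac"
    /Profile:"myprofile.publish.xml"
    /v:foo="bc\local"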

Speeding up mysqldump / reload

Posted: 09 Sep 2013 10:19 AM PDT

I am converting a large schema to file-per-table and will be performing a mysqldump/reload with --all-databases. I have edited my.cnf and changed "innodb_flush_log_at_trx_commit=2" to speed up the load. I am planning to "SET GLOBAL innodb_max_dirty_pages_pct=0;" at some point before the dump. I am curious to know which combination of settings will get me the fastest dump and reload times.

SCHEMA stats:

26 MyISAM tables, 413 InnoDB tables, ~240GB of data

[--opt= --disable-keys; --extended-insert; --quick, etc] --no-autocommit ??

vs prepending session vars like: "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;"

Are the mysqldump options equivalent or not really?
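
For concreteness, the two reload variants being compared might look roughly like this (file names illustrative). They are close but not identical: --no-autocommit writes SET autocommit=0 ... COMMIT around each table's inserts in the dump, while the manual variant also disables unique and foreign key checks for the whole session.

# dump (--opt is on by default and already includes --disable-keys, --extended-insert, --quick, ...)
mysqldump --all-databases --no-autocommit > all.sql

# reload, setting the session variables by hand instead
( echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;";
  cat all.sql;
  echo "COMMIT;" ) | mysql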

Thanks for your advice!

Database Mail sending functionality not working on local system

Posted: 09 Sep 2013 01:20 PM PDT

I am using the Database Mail functionality to send mail from a SQL Server 2008 database via the following stored procedure execution:

EXEC sp_send_dbmail @profile_name = 'MyProfile',
                    @recipients = 'abc@companyname.com',
                    @subject = 'Test message',
                    @body = 'Congrats Database Mail Received By you Successfully.'

I have tried my Gmail account profile on my local system and it works properly, but not with my company or Outlook profile.

Error message:

The mail could not be sent to the recipients because of the mail server failure. (Sending Mail using Account 5 . Exception Message: Could not connect to mail server. (A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond abc.j.i.ooo:pp). )

Reference

http://blogs.msdn.com/b/suhde/archive/2009/07/12/how-to-configure-sql-server-database-mail-to-send-email-using-your-windows-live-mail-account-or-your-gmail-account.aspx  

What could be the problem?

Thanks

createdb: could not connect to database postgres: FATAL: could not write init file

Posted: 09 Sep 2013 08:20 PM PDT

RedHat Enterprise Server 3.0 32 Bits

psql (PostgreSQL) 8.2.3

user: postgres

server is running:

/soft/postgres/8.2.3/bin/pg_ctl start
pg_ctl: another server may be running; trying to start server anyway
2013-05-09 11:23:07 BRST---:FATAL:  lock file "postmaster.pid" already exists
2013-05-09 11:23:07 BRST---:HINT:  Is another postmaster (PID 12810) running in data directory "/opt/psql/dba/bdadms/data1/pstg"?
pg_ctl: could not start server
Examine the log output.

I had just created a new database cluster with initdb; but when I run createdb:

8.2.3:postgres:pstg:>/soft/postgres/8.2.3/bin/createdb pstg
createdb: could not connect to database postgres: FATAL:  could not write init file
8.2.3:postgres:pstg:>/soft/postgres/8.2.3/bin/createdb postgres
createdb: could not connect to database template1: FATAL:  could not write init file
8.2.3:postgres:pstg:>/soft/postgres/8.2.3/bin/createdb template1
createdb: could not connect to database postgres: FATAL:  could not write init file

Any clues as to the cause and possible solutions to this problem?

Repeated values in group_concat

Posted: 09 Sep 2013 04:20 PM PDT

I have two tables; the first is food and the second is Activity:

INSERT INTO food
    (`id`, `foodName`)
VALUES
    (1, 'food1'),
    (2, 'food2'),
    (3, 'food3'),
    (4, 'food4'),
    (5, 'food5'),
    (6, 'food6');

CREATE TABLE Activity
    (`id` int, `place` varchar(14), `food_id` int, `timing` TIME, `date_and_time` DATETIME);

INSERT INTO Activity
    (`id`, `place`, `food_id`, `timing`, `date_and_time`)
VALUES
    (1, 'place1', 1, '10:30am', '2013-05-01'),
    (2, 'place1', 1, '12:30pm', '2013-05-01'),
    (3, 'place1', 1, '04:30pm', '2013-05-01'),
    (4, 'place2', 2, '10:30am', '2013-05-02'),
    (5, 'place2', 2, '12:30pm', '2013-05-02'),
    (6, 'place2', 2, '4:30pm', '2013-05-02'),
    (7, 'place1', 2, '10:30am', '2013-05-02'),
    (8, 'place1', 2, '12:30pm', '2013-05-02'),
    (9, 'place1', 2, '4:30pm', '2013-05-02'),
    (10, 'place2', 3, '10:30am', '2013-05-03'),
    (11, 'place2', 3, '12:30pm', '2013-05-03'),
    (12, 'place2', 3, '4:30pm', '2013-05-03');

For now I'm using the following query:

SELECT
    a.activity_type AS Activity,
    COUNT(DISTINCT p.id) AS Products,
    CONVERT(GROUP_CONCAT(p.category_id SEPARATOR ',  ') USING utf8) AS Categories
FROM
    food AS p
JOIN
    ( SELECT activity_type
           , prod_id
      FROM activity
      WHERE activity_type <> ''
      GROUP BY activity_type
             , prod_id
    ) AS a
    ON p.id = a.prod_id
GROUP BY
    activity_type
ORDER BY
    Products DESC;

Could you please help me? I need output in the format below:

place  | food_id | Timings                     | Date
       |         | Time1   | Time2   | Time3   |
-------+---------+---------+---------+---------+----------------------------
place1 | 1       | 10:30am | 12:30pm | 4:30pm  | 2013-05-01 (MAX timestamp)
place2 | 1       | 10:30am | 12:30am | 4:30am  | 2013-05-01 (MAX timestamp)
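
For reference, against the Activity table as defined above, a grouped query along these lines collapses the repeated timings into one ordered, concatenated column and keeps the latest date per group (a sketch; literal Time1/Time2/Time3 columns would additionally need CASE expressions over a per-row sequence number):

SELECT a.place,
       a.food_id,
       GROUP_CONCAT(a.timing ORDER BY a.date_and_time, a.id SEPARATOR ' | ') AS timings,
       MAX(a.date_and_time) AS max_date
FROM   Activity a
GROUP BY a.place, a.food_id;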

MySQL PDO Cannot assign requested address

Posted: 09 Sep 2013 02:20 PM PDT

Can someone help me with this error?

[08-Apr-2013 17:44:08 Europe/Berlin] PHP Warning:  PDO::__construct(): [2002] Cannot assign requested address (trying to connect via tcp://****:3306) in /var/www/***
[08-Apr-2013 17:44:08 Europe/Berlin] PHP Fatal error:  Uncaught exception 'PDOException' with message 'SQLSTATE[HY000] [2002] Cannot assign requested address' in /var/www/***

I have a server with a lot of connections per second; out of about 100 connections, a single one gets this error.

I've tried this recommendation from Stack Overflow; however, it does not solve my problem.

How much data is needed to show MySQL Cluster's performance scaling vs. InnoDB

Posted: 09 Sep 2013 09:20 AM PDT

I am evaluating MySQL Cluster as a possible replacement for an InnoDB schema. So far, I have tested it with 10s of MB of data, and found MySQL Cluster slower than InnoDB; however, I have been told MySQL Cluster scales much better.

How much data does it take to show a performance benefit to MySQL Cluster vs. an InnoDB schema? Or, is there a better way to demonstrate MySQL Cluster's merits?

EDIT

Perhaps an important note: My cluster is currently a heterogeneous cluster with 4 machines. On each machine, I have given an equal amount of Data and Index Memory; 4GB, 2GB, 2GB, and 1GB respectively. The machines are running i7's and are connected over a Gigabit Lan. NumOfReplicas is set to 2.

EDIT

This application is a low-usage analytics database, which has roughly 3 tables >= 200M rows and 5 tables <= 10K rows. When we use it, it takes 15 seconds to run our aggregate functions. My boss asked me to research MySQL Cluster, to see if we could increase performance, since we thought aggregate functions could run pretty well in parallel.

SQL Server: index creation date

Posted: 09 Sep 2013 12:08 PM PDT

In SQL Server 2005 and above, how can I find when an index was created?
