Tuesday, May 21, 2013

[how to] Properly index magento mysql tables to improve specific query performance

Properly index magento mysql tables to improve specific query performance

Posted: 21 May 2013 07:25 PM PDT

I am running Magento Community Edition and its database performance is horrible. I have the following query:

    SELECT `width_idx`.`value`, COUNT(DISTINCT width_idx.entity_id) AS `count`
    FROM `mage_catalog_product_flat_1` AS `e`
    INNER JOIN `mage_catalog_category_product_index` AS `cat_index`
        ON cat_index.product_id = e.entity_id
       AND cat_index.store_id = 1
       AND cat_index.visibility IN (2, 4)
       AND cat_index.category_id = '2'
    INNER JOIN `mage_catalog_product_index_price` AS `price_index`
        ON price_index.entity_id = e.entity_id
       AND price_index.website_id = '1'
       AND price_index.customer_group_id = 0
    INNER JOIN `mage_catalog_product_index_eav` AS `gender_idx`
        ON gender_idx.entity_id = e.entity_id
       AND gender_idx.attribute_id = '154'
       AND gender_idx.store_id = 1
       AND gender_idx.value IN ('70')
    LEFT JOIN `mage_review_entity_summary` AS `at_rating_summary`
        ON (at_rating_summary.`entity_pk_value` = e.entity_id)
       AND (at_rating_summary.entity_type = '1')
       AND (at_rating_summary.store_id = '1')
    INNER JOIN `mage_catalog_product_index_eav` AS `width_idx`
        ON width_idx.entity_id = e.entity_id
       AND width_idx.attribute_id = '153'
       AND width_idx.store_id = '1'
    GROUP BY `width_idx`.`value`;

The table *mage_catalog_product_flat_1* has 211,931 rows. Here is the EXPLAIN output:

    *************************** 1. row ***************************
               id: 1
      select_type: SIMPLE
            table: width_idx
             type: ref
    possible_keys: PRIMARY,IDX_MAGE_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID,IDX_MAGE_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID,IDX_MAGE_CATALOG_PRODUCT_INDEX_EAV_STORE_ID
              key: IDX_MAGE_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID
          key_len: 2
              ref: const
             rows: 18746
            Extra: Using where; Using index; Using filesort
    *************************** 2. row ***************************
               id: 1
      select_type: SIMPLE
            table: cat_index
             type: eq_ref
    possible_keys: PRIMARY,IDX_MAGE_CAT_CTGR_PRD_IDX_PRD_ID_STORE_ID_CTGR_ID_VISIBILITY,18A130F8AA1
              key: PRIMARY
          key_len: 10
              ref: const,shoestore_stg.width_idx.entity_id,const
             rows: 1
            Extra: Using where
    *************************** 3. row ***************************
               id: 1
      select_type: SIMPLE
            table: e
             type: eq_ref
    possible_keys: PRIMARY
              key: PRIMARY
          key_len: 4
              ref: shoestore_stg.cat_index.product_id
             rows: 1
            Extra: Using where; Using index
    *************************** 4. row ***************************
               id: 1
      select_type: SIMPLE
            table: at_rating_summary
             type: ref
    possible_keys: IDX_MAGE_REVIEW_ENTITY_SUMMARY_STORE_ID
              key: IDX_MAGE_REVIEW_ENTITY_SUMMARY_STORE_ID
          key_len: 2
              ref: const
             rows: 603
            Extra:
    *************************** 5. row ***************************
               id: 1
      select_type: SIMPLE
            table: gender_idx
             type: eq_ref
    possible_keys: PRIMARY,IDX_MAGE_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID,IDX_MAGE_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID,IDX_MAGE_CATALOG_PRODUCT_INDEX_EAV_STORE_ID,IDX_MAGE_CATALOG_PRODUCT_INDEX_EAV_VALUE
              key: PRIMARY
          key_len: 12
              ref: shoestore_stg.e.entity_id,const,const,const
             rows: 1
            Extra: Using where; Using index
    *************************** 6. row ***************************
               id: 1
      select_type: SIMPLE
            table: price_index
             type: eq_ref
    possible_keys: PRIMARY,IDX_MAGE_CATALOG_PRODUCT_INDEX_PRICE_CUSTOMER_GROUP_ID,IDX_MAGE_CATALOG_PRODUCT_INDEX_PRICE_WEBSITE_ID,IDX_MAGE_CAT_PRD_IDX_PRICE_WS_ID_CSTR_GROUP_ID_MIN_PRICE
              key: PRIMARY
          key_len: 8
              ref: shoestore_stg.width_idx.entity_id,const,const
             rows: 1
            Extra: Using index

The slow query log shows this query running for over a minute:

# Query_time: 66.226261  Lock_time: 0.000433 Rows_sent: 44  Rows_examined: 34271886   

How can I speed this query up?
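One avenue worth testing on a staging copy first: the filesort on width_idx (the driving table, 18,746 rows) goes away if that table can be read in `value` order. A composite index matching the width_idx predicates and covering the grouped column would allow that; the index name below is made up:

    -- Sketch only: composite index matching (attribute_id, store_id) lookups
    -- and covering the GROUP BY column plus the counted entity_id.
    ALTER TABLE mage_catalog_product_index_eav
        ADD INDEX ix_attr_store_value_entity (attribute_id, store_id, value, entity_id);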

Oracle database and data has been corrupted. Is there a way to get it back?

Posted: 21 May 2013 06:32 PM PDT

The Oracle database has been corrupted and both the data and backups were lost in the corruption. Is there anything I can do to recover my data? It would be terrible if it were all lost.

checksum(newid()) as primary key - what happens if a collision occurs?

Posted: 21 May 2013 06:12 PM PDT

Inspecting a rather critical database used by some software on my system, I found that one of the tables has a primary key on the column Id, where Id is calculated as checksum(newid()). This makes me uneasy: the NEWID() values are guaranteed unique, but by collapsing them with CHECKSUM to a 32-bit integer (presumably for performance reasons?), you get some chance of a collision. With 1M rows in the table, each new insert has roughly a 1,000,000 / 2^32, or about 1 in 4,300, chance of hitting an existing value, which is too much for my liking.

So:

  • Am I missing some crucial bit of information that says the above is actually okay?
  • If not: what happens if the application tries to insert a new row and checksum(newid()) produces a primary key that already exists? Will my table blow up? Will the insert simply fail, leaving it to the application to handle? (See the sketch after this list.)
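To see the failure mode concretely, here is a minimal T-SQL sketch that simulates a collision with a hand-picked duplicate value (the table and column names are made up). The second INSERT fails with error 2627 and is rolled back; the table itself stays intact, and it is up to the application to catch and handle the error:

    CREATE TABLE dbo.ChecksumDemo (Id int NOT NULL PRIMARY KEY, Payload varchar(20));

    INSERT INTO dbo.ChecksumDemo (Id, Payload) VALUES (42, 'first row');   -- succeeds
    INSERT INTO dbo.ChecksumDemo (Id, Payload) VALUES (42, 'second row');  -- fails:
    -- Msg 2627: Violation of PRIMARY KEY constraint ...
    -- The statement has been terminated.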

Best strategies to have a backup of an RDS MySQL database

Posted: 21 May 2013 02:57 PM PDT

I have an AWS RDS MySQL database. I'm not a DBA, so my knowledge is limited.

I want to design a strategy, using replication or backups, that achieves zero data loss in case of a failure.

I know replication and backup are different things. (And I'm ignoring that a mistaken DELETE statement can destroy data, in which case replication would not help.)

All I want is zero data loss when the database fails. AWS maintains snapshots, but a snapshot may be hours old, so data would be lost.

Should I consider setting up a DB server outside AWS? Should I use replication, or take regular backups?

Any other strategies that DBAs have in mind?

Merging two Access tables into one

Posted: 21 May 2013 11:45 AM PDT

I have mostly identical tables, but there are slight differences, like columns missing from the older ones. I want to merge 5 tables into one. Is there an easy way to do this?
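A minimal sketch of the usual approach, assuming made-up tables Orders2012 and Orders2013 where the older table lacks a Region column: list the full column set explicitly in a UNION ALL query, substituting NULL for any column a table is missing, and append the result into the merged table:

    INSERT INTO MergedOrders (OrderId, Amount, Region)
    SELECT OrderId, Amount, Region FROM Orders2013
    UNION ALL
    SELECT OrderId, Amount, NULL AS Region FROM Orders2012;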

Is there any way to prevent the memo structure from being pruned?

Posted: 21 May 2013 01:37 PM PDT

We know that the memo structure is pruned and some expensive alternative plans are discarded during optimization. Is there any way to prevent this and let the optimizer consider every possible plan, then select the best from all the alternatives?
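Not in any supported way that I know of. For study on a throwaway test instance, undocumented trace flags are commonly cited: 3604 routes optimizer output to the client, 8615 prints the final memo, and 8780 is reported to relax the optimizer's internal time limit so fewer alternatives are abandoned early. A sketch (unsupported, not for production; QUERYTRACEON requires sysadmin):

    SELECT COUNT(*)
    FROM sys.objects AS o
    JOIN sys.columns AS c ON c.object_id = o.object_id
    OPTION (RECOMPILE, QUERYTRACEON 3604, QUERYTRACEON 8615, QUERYTRACEON 8780);

Even with these, the search space for large joins is combinatorially huge, so "consider every possible plan" is not practically reachable.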

How to dive into PostgreSQL's optimizer and trace each step it takes to optimize a query?

Posted: 21 May 2013 05:12 PM PDT

SQL Server's MEMO structure is a kind of "paper trail" of the steps the optimizer takes in optimizing a query. Does PostgreSQL have a similar structure that records the steps its optimizer takes?
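The closest built-in facility I know of is the debug_print_* settings, which dump the parser, rewriter, and final planner trees to the server log. They show the chosen trees rather than each pruning decision; a step-by-step trace of the join search requires a server compiled with OPTIMIZER_DEBUG. A sketch:

    SET debug_print_plan = on;
    SET debug_pretty_print = on;
    SET client_min_messages = log;  -- echo the LOG-level output back to this session
    SELECT relname FROM pg_class WHERE relname = 'pg_class';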

MySQL Workbench - question about creating FK relationships

Posted: 21 May 2013 07:56 PM PDT

Forgive me, I'm a little new to databases, and I came across something while working with Workbench that I have a question about.

In a very simple testing database, with three tables: Company, Department, Employee

    Company          Department       Employee
    - id (PK)(AI)    - id (PK)(AI)    - id (PK)(AI)
    - Name           - Name           - Name

I created a 1-to-many relationship between Company and Department, and since an Employee can only have one Department and one Company, I also created 1-to-many relationships between Company and Employee and between Department and Employee.

When I did this, Workbench created a third relationship column in Employee called Company_Department_id, and all of my FK columns have now been checked as PK as well. The tables now look like this:

    Company          Department              Employee
    - id (PK)(AI)    - id (PK)(AI)           - id (PK)(AI)
    - Name           - Name                  - Name
                     - Company_id (PK)(FK)   - Company_id (PK)(FK)
                                             - Department_id (PK)(FK)
                                             - Company_Department_id (PK)(FK)

Questions

  1. Why does Workbench automatically create the Company_Department_id column? I feel like I should be able to delete it, but I just want to make sure I'm not missing something.

  2. How come Workbench automatically tagged each related column as a primary key? Is it because they are primary keys in their own tables? Should I keep them as primary keys?
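For comparison, here is a sketch of the DDL the diagram was probably meant to produce, using non-identifying relationships so the FK columns are ordinary columns rather than parts of the primary key (column sizes assumed):

    CREATE TABLE Company (
      id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
      Name VARCHAR(100) NOT NULL
    );

    CREATE TABLE Department (
      id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
      Name       VARCHAR(100) NOT NULL,
      Company_id INT UNSIGNED NOT NULL,
      FOREIGN KEY (Company_id) REFERENCES Company (id)
    );

    CREATE TABLE Employee (
      id            INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
      Name          VARCHAR(100) NOT NULL,
      Company_id    INT UNSIGNED NOT NULL,
      Department_id INT UNSIGNED NOT NULL,
      FOREIGN KEY (Company_id)    REFERENCES Company (id),
      FOREIGN KEY (Department_id) REFERENCES Department (id)
    );

In Workbench terms, the extra Company_Department_id column and the PK checkmarks come from drawing *identifying* relationships (the solid-line tool), which fold the parent's key into the child's primary key; redrawing them as non-identifying (dashed-line) relationships avoids both.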

Update my local db based on scripts executed in my server database

Posted: 21 May 2013 07:46 AM PDT

I need to build a solution that updates my local database based on changes made to my server database. First I need to detect the differences between the two databases; then, based on a version number, I can determine which scripts still need to be executed against my local db. I will create a control table to track what has and has not been executed. What I need is insight into how to store the scripts that were executed on the server database, so that I can execute them again on the local database and bring both to the same version. This should happen with a single button click in my application; in other words, I intend to automate the task.
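A minimal sketch of that control-table idea (all names hypothetical): the server keeps every change script with an increasing version number, each local database records the highest version it has applied, and the button click runs everything in between, in order.

    CREATE TABLE schema_migration (
        version     INT  NOT NULL PRIMARY KEY,  -- increases with every change
        script_body TEXT NOT NULL               -- the SQL that was run on the server
    );

    -- On the local database, a one-row marker table:
    CREATE TABLE schema_version_applied (
        version INT NOT NULL                    -- highest version executed locally
    );

    -- The application fetches and executes, in version order:
    SELECT version, script_body
    FROM schema_migration
    WHERE version > (SELECT MAX(version) FROM schema_version_applied)
    ORDER BY version;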

P.S.: Sorry for my English :-) this is the first time I've asked a question on an English forum.

Restore database created in SQL Server 2008 R2 to SQL Server 2008 R2 SP2

Posted: 21 May 2013 07:21 PM PDT

I have a .bak file created by some other user in (I think) SQL Server 2008 R2 (10.50), and I need to restore it on SQL Server 2008 R2 SP2 (10.52). I used a hex viewer to look into the .bak file and found a string containing something like "MSSQL10_50.SQL...", which is how I guessed it was created in R2.

I get errors when trying to restore it from the Management Studio interface, and also when I merely try to verify the .bak file using T-SQL:

    RESTORE VERIFYONLY FROM DISK = N'D:\aep_p.bak'
    GO

I get:

Msg 3241, Level 16, State 13, Line 1
The media family on device 'D:\aep_p.bak' is incorrectly formed. SQL Server cannot process this media family.
Msg 3013, Level 16, State 1, Line 1
VERIFY DATABASE is terminating abnormally.

My questions are:

  1. Is it possible to restore a .bak from R2 to R2 SP2?
  2. If yes, then why do I get that error?
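One quick check worth doing: HEADERONLY only reads the backup header and reports which server version wrote the file, so it distinguishes "wrong version" from "damaged or truncated file" (error 3241 most often means the latter, e.g. a file mangled by an ASCII-mode FTP transfer):

    RESTORE HEADERONLY FROM DISK = N'D:\aep_p.bak';

If even HEADERONLY fails with 3241, the file is likely damaged rather than merely from a different release; restoring a 10.50 backup onto 10.52 is otherwise fully supported, since you can always restore to the same or a newer version.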

MySQL many to many relation

Posted: 21 May 2013 07:45 AM PDT

Sorry if this seems a stupid question. I have a table of server addresses and a second table with a list of hardware devices that use those addresses. Multiple devices can use the same server address, and a device can use multiple server addresses, so there is a many-to-many mapping between them. I am a little confused about how to represent this relationship in SQL. So far I am using a third table holding the primary keys from the address table and the device table, and performing two joins in the query, but I'm not sure this is the best way.
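That third table (a junction table) is exactly the standard design. A sketch with assumed names, including the composite primary key that prevents duplicate links:

    CREATE TABLE server_address (
        id      INT PRIMARY KEY,
        address VARCHAR(255) NOT NULL
    );

    CREATE TABLE device (
        id   INT PRIMARY KEY,
        name VARCHAR(255) NOT NULL
    );

    CREATE TABLE device_address (
        device_id  INT NOT NULL,
        address_id INT NOT NULL,
        PRIMARY KEY (device_id, address_id),
        FOREIGN KEY (device_id)  REFERENCES device (id),
        FOREIGN KEY (address_id) REFERENCES server_address (id)
    );

    -- All addresses used by one device, via the two joins mentioned above:
    SELECT a.address
    FROM device AS d
    JOIN device_address AS da ON da.device_id = d.id
    JOIN server_address AS a  ON a.id = da.address_id
    WHERE d.name = 'device-42';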

Approaches for deleting unnecessary records from tables

Posted: 21 May 2013 12:31 PM PDT

We have a database that stores the temperatures of 20 thermometers every 7 seconds. For records older than 3 months, we want every minute to hold just one temperature instead of roughly eight. I was planning to do it as follows, as discussed in the question here:

  1. Select the (relatively few) surviving rows into a temporary table.
  2. Truncate the table.
  3. Re-insert the survivors.

But all devices insert values into the DB without interruption, so I cannot truncate and rename the temp tables; the system has to stay online the whole time. Or can I? If not, do I have to do it with DELETE followed by VACUUM?

By the way, we delete all unnecessary records older than 3 months just once; afterwards we have to repeat this process every month for the fourth month back (I hope that part is clear). For example, at the beginning of 05.2013 we have to delete the unnecessary records of 01.2013. How can I automate this process? Should I write a .bat file for a scheduled task, or is there a better approach?
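Deleting in place avoids the truncate-and-swap problem entirely, at the cost of a slower operation and a later VACUUM. A PostgreSQL sketch, assuming a hypothetical table readings(device_id, read_at, temperature): keep the earliest reading per device per minute and delete the rest of the old rows.

    DELETE FROM readings AS r
    USING (
        SELECT device_id,
               date_trunc('minute', read_at) AS minute,
               min(read_at)                  AS keep_time
        FROM readings
        WHERE read_at < now() - interval '3 months'
        GROUP BY device_id, date_trunc('minute', read_at)
    ) AS k
    WHERE r.device_id = k.device_id
      AND date_trunc('minute', r.read_at) = k.minute
      AND r.read_at < now() - interval '3 months'
      AND r.read_at <> k.keep_time;

    VACUUM ANALYZE readings;  -- make the freed space reusable afterwards

For the monthly repetition, a cron job (or Windows scheduled task) running the statement via psql is the usual approach.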

Need an Oracle query to get the Oracle Home directory for a particular instance

Posted: 21 May 2013 07:45 AM PDT

I am searching for a query that returns the Oracle Home for a particular instance name, say Eagle (when there is more than one instance on the host).
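One approach I know of, from SQL*Plus while connected to the instance in question (it needs execute rights on SYS.DBMS_SYSTEM, which are not granted by default):

    VARIABLE oh VARCHAR2(255)
    EXEC sys.dbms_system.get_env('ORACLE_HOME', :oh)
    PRINT oh

For instances you are not connected to, the OS-level oratab file (/etc/oratab, or /var/opt/oracle/oratab on some platforms) maps each SID to its Oracle Home.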

One filegroup, multiple datafiles, how to get list of tables in each file

Posted: 21 May 2013 06:13 PM PDT

In one of my production databases, I have a filegroup with 3 data files sitting on 3 different disks. There are over 50 tables on this filegroup.

How do I identify the location of each of the 50 tables among the 3 data files? I need to know on which data file the clustered index of each table resides.

Example :

    Table A : Datafile 1 : FG1
    Table B : Datafile 2 : FG1
    Table C : Datafile 1 : FG1
    Table D : Datafile 3 : FG1
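On SQL Server 2012 and later, one way to answer this is the undocumented sys.dm_db_database_page_allocations function (on 2008/2008 R2, the older DBCC IND serves the same purpose, table by table). A sketch that counts each user table's clustered-index pages per data file; note that with proportional fill, a single table's pages routinely spread across all files in the filegroup:

    SELECT o.name                     AS table_name,
           pa.allocated_page_file_id  AS data_file_id,
           COUNT(*)                   AS page_count
    FROM sys.objects AS o
    CROSS APPLY sys.dm_db_database_page_allocations(DB_ID(), o.object_id, 1, NULL, 'LIMITED') AS pa
    WHERE o.type = 'U'
    GROUP BY o.name, pa.allocated_page_file_id
    ORDER BY table_name, data_file_id;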

Dropping Hypothetical Indexes

Posted: 21 May 2013 10:12 AM PDT

In the past I thought I'd deleted hypothetical indexes using either a DROP INDEX statement for clustered indexes or a DROP STATISTICS statement for non-clustered indexes.

I have a database that is full of DTA remnants that I would like to cleanup; however, when I try to drop the object I always receive an error telling me that I cannot drop the object "because it does not exist or you do not have permission". I am a full sysadmin on the server so would expect to have rights to do anything.

I've tried this with both DROP STATISTICS and DROP INDEX statements, but both give me the same error.

Has anyone deleted these before and is there a trick I'm missing?


Addendum

Poking around in this, I just noticed that if I right-click the object, both the 'Script As' and 'DELETE' options are greyed out.
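Hypothetical indexes are flagged in sys.indexes, so a statement generator is an easy sanity check. A sketch (review the output before running it, and make sure you execute it in the database that owns the objects, since a wrong database context produces exactly the "does not exist or you do not have permission" error):

    SELECT 'DROP INDEX ' + QUOTENAME(i.name) + ' ON '
         + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name) + ';'
    FROM sys.indexes AS i
    JOIN sys.objects AS o ON o.object_id = i.object_id
    WHERE i.is_hypothetical = 1;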

Backup very large table in SQL Server

Posted: 21 May 2013 06:01 PM PDT

Hello everyone.

I'm sorry if this is an amateurish question, but I have to update certain values of a large table (for the sake of example, call it 'Resource'; it has over 5M rows), and therefore I have to make a backup before performing the changes. We do not have enough free space in the database to store a full copy of the table.

What is the best way? Is there a way to do this in blocks? I mean something like: back up the first 100K rows of the original table, update those 100K rows in the original table, delete those 100K rows from the backup table, back up the following 100K rows, and proceed analogously. Is this feasible?

Thank you very much in advance. Martín.

Is it better to use AUTOCOMMIT = 0

Posted: 21 May 2013 09:25 AM PDT

Is it better (for performance tuning) to set AUTOCOMMIT = 0 before bulk inserts of data in MySQL, if the insert query looks like this?

    INSERT INTO SomeTable (column1, column2) VALUES (val1, val2), (val3, val4), ...
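With autocommit disabled, the statements between the SET and the COMMIT form one transaction, so the log is flushed once instead of once per statement; this only matters for a transactional engine such as InnoDB (MyISAM ignores it), and multi-row VALUES lists like the one above already remove most of the per-statement overhead. A sketch:

    SET autocommit = 0;
    INSERT INTO SomeTable (column1, column2) VALUES (1, 'a'), (2, 'b');
    INSERT INTO SomeTable (column1, column2) VALUES (3, 'c'), (4, 'd');
    COMMIT;
    SET autocommit = 1;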

General tool to load dump files

Posted: 21 May 2013 09:12 AM PDT

I am a big fan of Postgres, both for its price and for its features. I am going to need to load into it both Oracle dumps and SQL Server files. I will ask, and beg, for plain .csv files plus schema DDL, but I suspect I will be given dmp files.

Is there a tool, most preferably open source one, that would allow me to read, profile and possibly load Oracle/SQL Server files into Postgres?

Thank you, Edmon

Backing up file groups with restore to earlier backup

Posted: 21 May 2013 09:24 AM PDT

I have a task: I have created a database with a filegroup named lamb that contains a few tables.

I need to load a csv file into a table in the lamb filegroup and back up that filegroup.

Then load another csv file and back up again.

But I need to be able to restore the first backup.

What backup and restore method does this? Is it possible to restore only the lamb filegroup?
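A sketch of the moving parts, with made-up database and path names. The catch: under SQL Server's recovery rules a restored filegroup must be rolled forward (via log backups, full recovery model) until it is consistent with the rest of the database, so you cannot keep the database at "now" while lamb alone travels back to the first backup. To actually get the first csv's data back, restore into a separate database and copy the rows over.

    -- after loading the first csv:
    BACKUP DATABASE Farm FILEGROUP = N'lamb' TO DISK = N'C:\backups\lamb_fg1.bak';
    BACKUP LOG Farm TO DISK = N'C:\backups\farm_log1.trn';

    -- after loading the second csv:
    BACKUP DATABASE Farm FILEGROUP = N'lamb' TO DISK = N'C:\backups\lamb_fg2.bak';

    -- filegroup restore (then roll forward with every later log backup):
    RESTORE DATABASE Farm FILEGROUP = N'lamb'
        FROM DISK = N'C:\backups\lamb_fg1.bak' WITH NORECOVERY;
    RESTORE LOG Farm FROM DISK = N'C:\backups\farm_log1.trn' WITH RECOVERY;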

High MySQL load, over 700% CPU

Posted: 21 May 2013 01:08 PM PDT

I have high MySQL load on a 64-bit Linux server with 24 GB RAM and an Intel(R) Core(TM) i7 CPU 950 @ 3.07GHz.

A lot of queries are stuck in the "Sending data" state.

Here is the MySQL process list:

    +------+--------------+-----------+--------------+---------+------+----------------+---------------------------------------------------------------------------------------------+
    | Id   | User         | Host      | db           | Command | Time | State          | Info                                                                                        |
    +------+--------------+-----------+--------------+---------+------+----------------+---------------------------------------------------------------------------------------------+
    | 1    | leechprotect | localhost | leechprotect | Sleep   | 507  |                |                                                                                             |
    | 422  | face_book    | localhost | face_book    | Query   | 0    | Sending data   | SELECT * FROM likes WHERE userid='100002047302002' and pageid='113623891994626' Limit 1     |
    | 440  | face_book    | localhost | face_book    | Query   | 0    | Sending data   | SELECT * FROM likes WHERE userid='100003610486105' and pageid='137067399778568' Limit 1     |
    | 745  | face_book    | localhost | face_book    | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100000156154972' and pageid='259472294088694' Limit 1     |
    | 813  | face_book    | localhost | face_book    | Query   | 1    | Sending data   | SELECT * FROM likes WHERE userid='100001079730269' and pageid='111612552199698' Limit 1     |
    | 817  | face_book    | localhost | face_book    | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100003718366503' and pageid='105790599509795' Limit 1     |
    | 888  | face_book    | localhost | face_book    | Query   | 0    | Sending data   | SELECT * FROM friends WHERE userid='100004673917946' and friendid='100004836366957' Limit 1 |
    | 930  | face_book    | localhost | face_book    | Query   | 4    | Sending data   | SELECT * FROM likes WHERE userid='100001857826693' and pageid='379878825440539' Limit 1     |
    | 940  | face_book    | localhost | face_book    | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100002525443288' and pageid='432454306781258' Limit 1     |
    | 976  | face_book    | localhost | face_book    | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100001786746020' and pageid='266169045612' Limit 1        |
    | 980  | face_book    | localhost | face_book    | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100000721604707' and pageid='188587591283392' Limit 1     |
    | 999  | face_book    | localhost | face_book    | Query   | 1    | Sending data   | SELECT * FROM likes WHERE userid='100001661124181' and pageid='161323847303028' Limit 1     |
    | 1033 | face_book    | localhost | face_book    | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100002273583367' and pageid='447287665321823' Limit 1     |
    | 1064 | face_book    | localhost | face_book    | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100003902289965' and pageid='315361025251697' Limit 1     |
    | 1100 | face_book    | localhost | face_book    | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100002720670629' and pageid='114370700612' Limit 1        |
    | 1109 | face_book    | localhost | face_book    | Query   | 1    | Sending data   | SELECT * FROM friends WHERE userid='100002279885489' and friendid='100002385585461' Limit 1 |
    | 1111 | face_book    | localhost | face_book    | Query   | 2    | Sending data   | SELECT * FROM friends WHERE userid='100001087100886' and friendid='100005456647732' Limit 1 |
    | 1132 | face_book    | localhost | face_book    | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100005404566097' and pageid='225594034251253' Limit 1     |
    | 1148 | face_book    | localhost | face_book    | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100005551654543' and pageid='104088939622341' Limit 1     |
    | 1172 | face_book    | localhost | face_book    | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100000009185323' and pageid='110343285691930' Limit 1     |
    | 1188 | face_book    | localhost | face_book    | Query   | 1    | Sending data   | SELECT * FROM likes WHERE userid='100003468150624' and pageid='182937471830173' Limit 1     |
    | 1192 | face_book    | localhost | face_book    | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100000619411698' and pageid='527695070578211' Limit 1     |
    | 1196 | face_book    | localhost | face_book    | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100002866966279' and pageid='244651575605946' Limit 1     |
    | 1208 | face_book    | localhost | face_book    | Query   | 1    | Sending data   | SELECT * FROM friends WHERE userid='100001057034709' and friendid='1080136538' Limit 1      |
    | 1230 | face_book    | localhost | face_book    | Query   | 1    | Sending data   | SELECT * FROM friends WHERE userid='100005246283878' and friendid='100002513789129' Limit 1 |
    | 1240 | face_book    | localhost | face_book    | Query   | 2    | Sending data   | SELECT * FROM friends WHERE userid='100005028574119' and friendid='100001229276848' Limit 1 |
    | 1241 | face_book    | localhost | face_book    | Query   | 2    | Sending data   | SELECT * FROM friends WHERE userid='1681467791' and friendid='1537753959' Limit 1           |
    | 1242 | face_book    | localhost | face_book    | Query   | 0    | Sending data   | SELECT * FROM friends WHERE userid='100001845705855' and friendid='1668437534' Limit 1      |
    | 1247 | face_book    | localhost | face_book    | Query   | 2    | Sending data   | SELECT * FROM friends WHERE userid='100003854517927' and friendid='100002322873932' Limit 1 |
    | 1257 | face_book    | localhost | face_book    | Query   | 0    | Sorting result | SELECT lastpost,id FROM facesessions ORDER BY lastpost DESC                                 |
    | 1276 | face_book    | localhost | face_book    | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100001579975664' and pageid='402312375509' Limit 1        |
    | 1284 | face_book    | localhost | face_book    | Query   | 1    | Sending data   | SELECT * FROM likes WHERE userid='100001827038305' and pageid='254365179238' Limit 1        |
    | 1291 | face_book    | localhost | face_book    | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='1587203387' and pageid='197678701083' Limit 1             |
    | 1309 | face_book    | localhost | face_book    | Query   | 1    | Sending data   | SELECT * FROM friends WHERE userid='100002156769339' and friendid='100001641695726' Limit 1 |
    | 1318 | face_book    | localhost | face_book    | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100002230633105' and pageid='343669279056732' Limit 1     |
    | 1325 | face_book    | localhost | face_book    | Sleep   | 8    |                |                                                                                             |
    | 1333 | face_book    | localhost | face_book    | Sleep   | 2    |                |                                                                                             |
    | 1338 | face_book    | localhost | face_book    | Sleep   | 0    |                |                                                                                             |
    | 1339 | root         | localhost |              | Query   | 0    |                | show processlist                                                                            |
    +------+--------------+-----------+--------------+---------+------+----------------+---------------------------------------------------------------------------------------------+
    Uptime: 508  Threads: 38  Questions: 65938  Slow queries: 0  Opens: 51156  Flush tables: 1  Open tables: 34  Queries per second avg: 129.799

and here is /etc/my.cnf

    [client]
    socket=/var/lib/mysql/mysql.sock

    [mysqld]
    tmpdir=/mysqltmp

    port = 3306
    socket = /var/lib/mysql/mysql.sock
    skip-networking
    skip-external-locking
    local-infile=0

    back_log = 100
    #skip-innodb
    max_connections = 50
    table_cache = 32
    key_buffer_size = 12284M
    myisam_sort_buffer_size = 512M
    myisam_max_sort_file_size = 8192M
    join_buffer_size = 512M
    read_buffer_size = 512M
    sort_buffer_size = 512M
    read_rnd_buffer_size = 512M
    table_definition_cache = 8000
    table_open_cache = 8000
    thread_cache_size = 4
    wait_timeout = 360
    interactive_timeout = 60
    connect_timeout = 10
    tmp_table_size = 256M
    max_heap_table_size = 256M
    max_allowed_packet = 512M
    max_seeks_for_key = 1000
    group_concat_max_len = 1024
    max_length_for_sort_data = 1024
    net_buffer_length = 16384
    max_connect_errors = 100000
    concurrent_insert = 2
    bulk_insert_buffer_size = 8M
    query_cache_limit = 1M
    query_cache_size = 256M
    query_cache_type = 1
    query_prealloc_size = 262144
    query_alloc_block_size = 65536
    range_alloc_block_size = 4096
    transaction_alloc_block_size = 8192
    transaction_prealloc_size = 4096
    default-storage-engine = MyISAM
    max_write_lock_count = 8

    innodb_open_files = 500
    innodb_data_file_path = ibdata1:10M:autoextend
    innodb_buffer_pool_size = 64M
    innodb_additional_mem_pool_size = 32M

    innodb_log_files_in_group = 2
    innodb_log_file_size = 5M
    innodb_log_buffer_size = 8M
    innodb_flush_log_at_trx_commit = 2
    innodb_thread_concurrency = 8

    #slow_query_log=0
    #long_query_time=1
    #slow_query_log_file=/var/log/mysql/log-slow-queries.log

    open_files_limit=50000

    [mysqld_safe]
    log-error=/var/log/mysqld.log
    pid-file=/var/run/mysqld/mysqld.pid
    nice = -5
    open-files-limit = 8192

    [mysqldump]
    quick
    max_allowed_packet = 512M

    [myisamchk]
    key_buffer_size = 6400M
    sort_buffer_size = 1M
    read_buffer_size = 1M
    write_buffer_size = 1M

    [mysqlhotcopy]
    interactive-timeout

I tried OPTIMIZE TABLE and adjusting my.cnf with mysqlreport; it's still the same.

I don't use InnoDB.

MySQL version:

    # mysql -V
    mysql  Ver 14.14 Distrib 5.1.68, for unknown-linux-gnu (x86_64) using readline 5.1

mysql> SHOW CREATE TABLE friends\G

    *************************** 1. row ***************************
           Table: friends
    Create Table: CREATE TABLE `friends` (
      `id` int(100) unsigned NOT NULL AUTO_INCREMENT,
      `userid` mediumtext COLLATE latin1_general_ci,
      `friendid` mediumtext COLLATE latin1_general_ci,
      `name` varchar(255) COLLATE latin1_general_ci NOT NULL,
      `dateline` varchar(255) COLLATE latin1_general_ci NOT NULL,
      `lastsend` varchar(255) COLLATE latin1_general_ci NOT NULL,
      `sendstatus` varchar(255) COLLATE latin1_general_ci NOT NULL,
      PRIMARY KEY (`id`),
      KEY `name` (`name`),
      KEY `lastsend` (`lastsend`),
      KEY `sendstatus` (`sendstatus`)
    ) ENGINE=MyISAM AUTO_INCREMENT=1079024 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci
    1 row in set (0.00 sec)

mysql> SHOW CREATE TABLE likes\G

    *************************** 1. row ***************************
           Table: likes
    Create Table: CREATE TABLE `likes` (
      `id` int(100) unsigned NOT NULL AUTO_INCREMENT,
      `userid` mediumtext COLLATE latin1_general_ci,
      `pageid` mediumtext COLLATE latin1_general_ci,
      `name` varchar(255) COLLATE latin1_general_ci NOT NULL,
      `link` varchar(255) COLLATE latin1_general_ci NOT NULL,
      `dateline` varchar(255) COLLATE latin1_general_ci NOT NULL,
      `lastsend` varchar(255) COLLATE latin1_general_ci NOT NULL,
      `sendstatus` varchar(255) COLLATE latin1_general_ci NOT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=MyISAM AUTO_INCREMENT=2008744 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci
    1 row in set (0.00 sec)

mysql> SHOW CREATE TABLE facesessions\G

    *************************** 1. row ***************************
           Table: facesessions
    Create Table: CREATE TABLE `facesessions` (
      `id` int(100) unsigned NOT NULL AUTO_INCREMENT,
      `session_key` varchar(255) COLLATE latin1_general_ci NOT NULL,
      `uid` varchar(255) COLLATE latin1_general_ci NOT NULL,
      `expires` varchar(255) COLLATE latin1_general_ci NOT NULL,
      `secret` varchar(255) COLLATE latin1_general_ci NOT NULL,
      `access_token` varchar(255) COLLATE latin1_general_ci NOT NULL,
      `sig` varchar(255) COLLATE latin1_general_ci NOT NULL,
      `username` varchar(255) COLLATE latin1_general_ci NOT NULL,
      `lastposttime` varchar(255) COLLATE latin1_general_ci NOT NULL,
      `lastpost` varchar(255) COLLATE latin1_general_ci NOT NULL,
      `nextsend` varchar(50) COLLATE latin1_general_ci DEFAULT NULL,
      `lastpoststatus` varchar(50) COLLATE latin1_general_ci DEFAULT NULL,
      `gender` varchar(20) COLLATE latin1_general_ci DEFAULT NULL,
      `birthday` varchar(20) COLLATE latin1_general_ci DEFAULT NULL,
      `location` varchar(50) COLLATE latin1_general_ci DEFAULT NULL,
      `imported` int(10) DEFAULT NULL,
      PRIMARY KEY (`id`),
      KEY `uid` (`uid`),
      KEY `access_token` (`access_token`),
      KEY `sig` (`sig`),
      KEY `username` (`username`),
      KEY `lastposttime` (`lastposttime`),
      KEY `lastpost` (`lastpost`),
      KEY `nextsend` (`nextsend`),
      KEY `lastpoststatus` (`lastpoststatus`)
    ) ENGINE=MyISAM AUTO_INCREMENT=16238 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci
    1 row in set (0.00 sec)

mysql> SELECT SUM(index_length) FROM information_schema.tables WHERE engine='MyISAM';

    +-------------------+
    | SUM(index_length) |
    +-------------------+
    |         150684672 |
    +-------------------+
    1 row in set (0.01 sec)
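The "Sending data" pile-up fits the schemas above: likes has no index on (userid, pageid) and friends none on (userid, friendid), so every one of those hot SELECTs scans the table. Since both columns are MEDIUMTEXT, a prefix index is one low-risk thing to test (index names made up; the ids shown are at most about 20 characters, hence the prefix length):

    ALTER TABLE likes   ADD INDEX ix_user_page   (userid(20), pageid(20));
    ALTER TABLE friends ADD INDEX ix_user_friend (userid(20), friendid(20));

Longer term, storing these ids as BIGINT UNSIGNED or VARCHAR(32) would make the indexes smaller and faster still.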

How do I execute an Oracle SQL script without sqlplus hanging on me?

Posted: 21 May 2013 12:05 PM PDT

For an automated task I would very much like to run some SQL scripts and make sure that sqlplus does not hang under any circumstances. That is:

  • If the script contains any named substitution variable that has to be entered by the user, sqlplus should return with an error instead of prompting for the variable -- I cannot use set define off, as some of these scripts need to accept command line parameters that have to be resolved with &1
  • The script must not "hang" when it doesn't contain an exit; at the end.

    Solved: I think now that I can achieve this by wrapping the sql-script in a secondary "caller script". I.e., the caller script calls the other script with @ and then has a fixed exit; after that. The other script doesn't need an exit that way.

  • Anything else: If it would require a prompt, it should return with an error.

How can I do this with Oracle (and sqlplus, or something else)?
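A sketch of that caller-script idea, extended with WHENEVER so any SQL or OS error terminates sqlplus with a nonzero exit code instead of leaving it at a prompt (the script names are placeholders):

    -- caller.sql
    WHENEVER SQLERROR EXIT SQL.SQLCODE
    WHENEVER OSERROR EXIT FAILURE
    @target_script.sql &1
    EXIT SUCCESS

Invoking it as sqlplus -S -L user/pass@db @caller.sql somearg, with standard input redirected from /dev/null, also covers the remaining prompt cases: with stdin empty, any unexpected prompt reads end-of-file and sqlplus exits instead of hanging.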

MySQL 5.6 on OS X 10.8.2 disconnects on user creation?

Posted: 21 May 2013 08:40 AM PDT

Via Homebrew I just upgraded from MySQL 5.5.29 to 5.6.10. All my queries and existing users work fine, but whenever I try to modify permissions or create users, MySQL now disconnects me. Are there any my.cnf settings I need to change for 5.6, or system table updates I need to run, etc.?

    mysql> create user 'something'@'localhost' identified by 'password';
    ERROR 2013 (HY000): Lost connection to MySQL server during query

    mysql> create user 'something'@'localhost' identified by 'password';
    ERROR 2006 (HY000): MySQL server has gone away
    No connection. Trying to reconnect...
    Connection id:    1
    Current database: *** NONE ***

    ERROR 2013 (HY000): Lost connection to MySQL server during query

    mysql> SHOW VARIABLES LIKE "%version%";
    ERROR 2006 (HY000): MySQL server has gone away
    No connection. Trying to reconnect...
    Connection id:    1
    Current database: *** NONE ***

    +-------------------------+---------------------+
    | Variable_name           | Value               |
    +-------------------------+---------------------+
    | innodb_version          | 1.2.10              |
    | protocol_version        | 10                  |
    | slave_type_conversions  |                     |
    | version                 | 5.6.10              |
    | version_comment         | Source distribution |
    | version_compile_machine | x86_64              |
    | version_compile_os      | osx10.8             |
    +-------------------------+---------------------+
    7 rows in set (0.00 sec)

Hard limitation for merge replication article count?

Posted: 21 May 2013 08:50 AM PDT

A bit of background. We developed an app that uses merge replication. Right now, we are currently publishing about 212 articles, and everything is fine, as it appears to fall into the 256 article count limit. (Using SQL Standard 2005 at the server, and 2005 Express at the subscriber level.)

We are in the process of developing a new version, with a new database built from the ground up, which will have a minimum requirement of SQL Server 2008 R2 Standard server-side and Express 2008 R2 at the client end; these also appear to have the same 256-article limit on published articles. (Yes, we will also support 2012; its limits are the same, though.)

The question comes in because we have successfully published over 400 articles in testing the new version, and everything works functionally (publication creation, snapshot and subscriber creation, and replication itself). Has anyone else exceeded the article count limit before, and is it just a soft, recommended limit? We have not seen any errors during either publication creation or subscriber creation.

The 256-article limit for merge publications is documented here: http://msdn.microsoft.com/en-us/library/ms143432%28v=sql.105%29

I'm waiting on a call back from Microsoft, but I'd like to know if anyone has seen this before, in any version of SQL Server.

postgis problem with shortest distance calculation

Posted: 21 May 2013 08:08 AM PDT

While working with pgRouting on PostGIS, I am using the shortest_path function to calculate the distance between two roads (lines).

The logic is based on a start point (start_id) and an end point (end_id), but in my data a linestring contains many internal points, like 'linestring(1 1,2 2,3 3,4 4,5 5)' (just for example).

It takes (1 1) as the start point and (5 5) as the end point.

If another line starts with (5 5), such as 'linestring(5 5,6 6)', it is reported as a route.

But a line that crosses the interior points of the linestring, like (2 2,3 3,4 4), is not treated as connected. Example:

    table roads:
    id  name  way
    1   A     linestring(1 1,2 2,3 3,4 4,5 5)
    2   B     linestring(5 5,6 6)
    3   c     linestring(2 1,2 2,2 3)

If I apply the shortest_path function from point (1 1) to (6 6), it shows the way; but from (1 1) to (2 3) it shows nothing, although there is a route (1 1, 2 2, 2 3).

Can anyone please help me find a solution?

Regards, Deepak M

Cannot GRANT privileges as root

Posted: 21 May 2013 09:16 AM PDT

Because of some problems, I decided to re-create all users except root@localhost. This works fine, but the newly created user has no right to do anything. What I want is simply to give all rights to root at some local IP. As root@localhost, I tried:

    CREATE USER 'root'@'10.0.3.210';
    GRANT ALL ON *.* TO 'root'@'10.0.3.210';

The first command works; the second one fails with the message:

ERROR 1045 (28000): Access denied for user 'root'@'localhost'  

I don't get why root@localhost can't do everything, I'm sure I didn't mess with its privileges. From

SHOW GRANTS FOR 'root'@'localhost'  

I get

    Grants for root@localhost
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN, PROCESS, FILE, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES, REPLICATION SLAVE, REPLICATION CLIENT, CREATE USER ON *.* TO 'root'@'localhost' WITH GRANT OPTION
    GRANT ALL PRIVILEGES ON `%`.* TO 'root'@'localhost' WITH GRANT OPTION

whatever this means. Am I missing a needed privilege? Can it be fixed?

I'm working with mysql Ver 14.14 Distrib 5.1.61, for debian-linux-gnu (x86_64).
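That SHOW GRANTS output is the explanation: GRANT ALL only succeeds if the grantor itself holds every global privilege, and this root@localhost is missing several (for example EVENT, TRIGGER, CREATE VIEW, and LOCK TABLES are absent from the list above, likely because the privilege tables predate the server version). A workaround sketch: grant exactly the list you do hold, rather than ALL:

    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN,
          PROCESS, FILE, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER,
          CREATE TEMPORARY TABLES, REPLICATION SLAVE, REPLICATION CLIENT,
          CREATE USER ON *.* TO 'root'@'10.0.3.210' WITH GRANT OPTION;

If the privilege tables are indeed older than the server (e.g. after an upgrade), running mysql_upgrade is also worth a look.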

SQL Server Alert Based On Severity Level Not Working

Posted: 21 May 2013 01:41 PM PDT

I am new to SQL Server alerts, and I am trying to set up an alert that will run a job and send a notification based on severity level.

To test whether the alert works correctly, I downloaded a sample corrupted database from the MSDN Blogs, restored it, and issued a query against it, which failed with error 824, severity 24.

But although this is a severe error and was logged to both the SQL Server error log and the Windows Application Log, nothing happened. I tested with an alert on the specific database and on all databases, and I even set up alerts on all other severity levels from 19 upward just to confirm that nothing fires.

However, an alert based on error number 824 did work, and an alert based on the performance counter's SQL Errors counter also worked correctly.

Any suggestions?
