Saturday, April 27, 2013

[how to] Mysql Forcing close of thread 946 user

Mysql Forcing close of thread 946 user

Posted: 27 Apr 2013 07:11 PM PDT

My MySQL server keeps shutting down and starting back up. Below is a snippet of the log file. In quite a number of places I notice this: Forcing close of thread 946 user:

130426 12:36:28 [Note] Event Scheduler: Purging the queue. 0 events
130426 12:36:28  InnoDB: Starting shutdown...
130426 12:36:31  InnoDB: Shutdown completed; log sequence number 0 1111309
130426 12:36:31 [Note] /usr/libexec/mysqld: Shutdown complete
130426 12:36:31 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
130426 12:36:32 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
130426 12:36:32  InnoDB: Initializing buffer pool, size = 8.0M
130426 12:36:32  InnoDB: Completed initialization of buffer pool
130426 12:36:32  InnoDB: Started; log sequence number 0 1111309
130426 12:36:32 [Note] Event Scheduler: Loaded 0 events
130426 12:36:32 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.1.67'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
130426 21:05:19 [Note] /usr/libexec/mysqld: Normal shutdown
130426 21:05:19 [Note] Event Scheduler: Purging the queue. 0 events
130426 21:05:21 [Warning] /usr/libexec/mysqld: Forcing close of thread 946  user: ''
130426 21:05:22  InnoDB: Starting shutdown...
130426 21:05:23  InnoDB: Shutdown completed; log sequence number 0 1111309
130426 21:05:23 [Note] /usr/libexec/mysqld: Shutdown complete
130426 21:05:23 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
130426 21:05:25 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
130426 21:05:25  InnoDB: Initializing buffer pool, size = 8.0M
130426 21:05:25  InnoDB: Completed initialization of buffer pool
130426 21:05:25  InnoDB: Started; log sequence number 0 1111309
130426 21:05:25 [Note] Event Scheduler: Loaded 0 events
130426 21:05:25 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.1.67'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
130428  0:47:12 [Note] /usr/libexec/mysqld: Normal shutdown
130428  0:47:12 [Note] Event Scheduler: Purging the queue. 0 events
130428  0:47:12  InnoDB: Starting shutdown...
130428  0:47:16  InnoDB: Shutdown completed; log sequence number 0 1111309
130428  0:47:16 [Note] /usr/libexec/mysqld: Shutdown complete

Postgres: count(*) vs count(id)

Posted: 27 Apr 2013 08:01 PM PDT

I saw in the documentation the difference between count(*) and count(pk). I had been using count(pk) (where pk is a SERIAL PRIMARY KEY) not knowing about the existence of count(*).

My question is about Postgres' internal optimizations. Is it smart enough to recognize that a SERIAL PRIMARY KEY exists in every row and can never be NULL, and just count rows, or will it do a redundant per-row check? I agree that this is probably too pointless an optimization to matter, but I'm just curious.

I took a look at the output of EXPLAIN and EXPLAIN VERBOSE for count(*), count(id) and count(id > 50) to see if EXPLAIN mentioned checking the predicates in its output. It doesn't.
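For reference, this is the kind of side-by-side check I ran (a minimal sketch against a hypothetical players table with a SERIAL PRIMARY KEY id):

EXPLAIN ANALYZE SELECT count(*)  FROM players;
EXPLAIN ANALYZE SELECT count(id) FROM players;
-- Both plans show an Aggregate fed by a Seq Scan. count(id) additionally
-- skips NULL inputs, but that check happens inside the aggregate itself,
-- which is presumably why EXPLAIN never lists it as a separate predicate.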

Dropping a group of schemas with similar name patterns

Posted: 27 Apr 2013 04:58 PM PDT

Consider a situation where one needs to perform a bunch of essentially identical operations, with the only variable being the name of some object.

In my case, I need to drop some schemas, all of the form ceu_shard_test_merge_*, to use shell globbing terminology. So, conceptually, this can be written as

DROP SCHEMA ceu_shard_test_merge_* CASCADE;  

by analogy with a Unix shell.

Of course this command doesn't work, so how can one do this with a single command? My understanding is that this cannot be done portably. I'm using PostgreSQL 8.4, but methods for more recent versions of PG are fine too.

It would be nice if the solution had a dry run or dummy option, so one could see what commands were going to be run before actually running them. Perhaps a way to just print the commands?

Also, an indication of how to deal with more general patterns than the example given would be nice.
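For the record, the closest I have come to a single command is generating the statements from the system catalog, which also gives the dry run I asked about (an untested sketch; pg_namespace and quote_ident are available in 8.4):

-- "dry run": just print the DROP statements
SELECT 'DROP SCHEMA ' || quote_ident(nspname) || ' CASCADE;'
FROM pg_namespace
WHERE nspname LIKE 'ceu_shard_test_merge_%';
-- (note: _ is itself a LIKE wildcard, which happens to be harmless here)

-- on 9.0+ the loop can be executed directly in a DO block
DO $$
DECLARE s name;
BEGIN
    FOR s IN SELECT nspname FROM pg_namespace
             WHERE nspname LIKE 'ceu_shard_test_merge_%'
    LOOP
        EXECUTE 'DROP SCHEMA ' || quote_ident(s) || ' CASCADE';
    END LOOP;
END $$;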

MySQL replication: most important config parameters for performance on slave server?

Posted: 27 Apr 2013 12:43 PM PDT

I'm setting up a MySQL master-slave configuration, where the slave is located on a much weaker server.

Since, if I understand correctly, the slave only replays updates/inserts, what are the most critical configuration parameters I can adjust in the slave's my.cnf to ensure the best performance and the smallest possible latency? (A sketch of what I have in mind follows the specs below.)

  • Master: 32GB RAM, Slave: 4GB RAM
  • Mostly innodb tables
  • Percona-server 5.5 on both master and slave
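To make the question concrete, this is the sort of slave-side my.cnf I have in mind; an untested sketch, and the values are guesses for a 4GB box rather than recommendations:

[mysqld]
# give InnoDB the bulk of the RAM that is safely available on 4GB
innodb_buffer_pool_size = 2G
# relax durability on the slave only; after a crash it can re-fetch
# the missing events from the master
innodb_flush_log_at_trx_commit = 2
# no binlog fsync pressure (and omit log-bin entirely unless this
# slave must feed further slaves)
sync_binlog = 0
# guard against accidental direct writes that would break replication
read_only = 1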

Database for opening times of locations

Posted: 27 Apr 2013 03:33 PM PDT

I'm designing a database for opening times and created this solution.

The database will be MySQL. The requirements are that a location has standard opening times for a week and can have special opening times, for example on Christmas.

So to get the opening times for an entry, I would first search 'special_opening' for the given day and, if that returns no results, load the data from 'opening'.
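To make the design concrete, here is a minimal sketch of the two tables and the fallback lookup I described (the column names are only my working guesses):

CREATE TABLE opening (
    location_id INT NOT NULL,
    weekday     TINYINT NOT NULL,   -- 0 = Monday ... 6 = Sunday, as in WEEKDAY()
    open_time   TIME NOT NULL,
    close_time  TIME NOT NULL,
    PRIMARY KEY (location_id, weekday, open_time)
);

CREATE TABLE special_opening (
    location_id INT NOT NULL,
    day         DATE NOT NULL,
    open_time   TIME NULL,          -- NULL would mean closed all day
    close_time  TIME NULL,
    PRIMARY KEY (location_id, day)
);

-- special times win; the standard weekly times are the fallback
SELECT s.open_time, s.close_time
FROM special_opening s
WHERE s.location_id = 42 AND s.day = '2013-12-25'
UNION ALL
SELECT o.open_time, o.close_time
FROM opening o
WHERE o.location_id = 42
  AND o.weekday = WEEKDAY('2013-12-25')
  AND NOT EXISTS (SELECT 1 FROM special_opening s2
                  WHERE s2.location_id = 42 AND s2.day = '2013-12-25');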

Is this a legitimate way to model the database? Are there better ways to do it?

Add Contents in a Column and make them 0

Posted: 27 Apr 2013 07:50 PM PDT

http://sqlfiddle.com/#!3/96f11/3

The required output is shown in the fiddle above.

That is: the UserID columns in the Filtered table and the Main table match. I need to take the Amt values from the Main table for the rows that appear in the Filtered table, add them up, add that total to the Amt column of the row whose UserID is Admin, and set the corresponding Amt rows to zero.

I need to apply this result to the Main table itself, as in the sketch below.
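Something like this untested sketch is what I imagine, using the fiddle's names Main(UserID, Amt) and Filtered(UserID):

BEGIN TRANSACTION;

-- add the filtered users' total to the Admin row
-- (assumes the Admin row itself is not in Filtered)
UPDATE Main
SET Amt = Amt + (SELECT SUM(m.Amt)
                 FROM Main m
                 JOIN Filtered f ON f.UserID = m.UserID)
WHERE UserID = 'Admin';

-- then zero out the rows that were just summed
UPDATE m
SET m.Amt = 0
FROM Main m
JOIN Filtered f ON f.UserID = m.UserID;

COMMIT;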

Can you help me do this?

How to purge logs by using Flashback feature in Oracle Database

Posted: 27 Apr 2013 08:40 AM PDT

I've configured the flash recovery area for an Oracle database. My question is: how can I delete the Flashback Database logs? Or is there a retention policy for them?

2. After enabling archive log mode and the flash recovery area, are archived logs that live outside the FRA purged automatically, or do I have to back them up via RMAN?
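For question 1, the only knob I have found so far is the retention target; as far as I can tell, Oracle ages flashback logs out of the FRA automatically based on it rather than letting you delete them by hand:

-- the retention target is in minutes, e.g. roughly one day:
ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET = 1440;

-- current flashback log coverage can be checked with:
SELECT * FROM V$FLASHBACK_DATABASE_LOG;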

Thanks,

How do I connect to a database with a blank password using a shell script?

Posted: 27 Apr 2013 03:25 PM PDT

## connect to mysql and source backup file ##
USER="root"
PASS=""

mysql -u$USER -p$PASS database_name < backup.sql

Above is the shell script I use to source the database, but it still asks for a password. I just want to know how I can source the database when the password is blank. If the root password is not blank, it works fine.
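The closest I have got is the variant below (untested). My understanding is that a bare -p always makes the client prompt, while --password= with the value attached, even an empty one, does not:

## connect to mysql and source backup file, tolerating an empty password ##
USER="root"
PASS=""

mysql -u"$USER" --password="$PASS" database_name < backup.sql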

Getting SELECT to return a constant value even if zero rows match

Posted: 27 Apr 2013 12:05 PM PDT

Consider this select statement:

SELECT *,
       1 AS query_id
FROM players
WHERE username='foobar';

It returns the column query_id with value 1 along with a player's other columns.

How would one make the above SQL return at least the query_id of 1 even if the select finds no rows that match?

BTW, it's PostgreSQL 8.4.
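The only idea I have come up with so far (an untested sketch) is to start from a one-row dummy relation and outer-join the real table, so the constant survives even when no player matches:

SELECT p.*, 1 AS query_id
FROM (VALUES (1)) AS dummy(x)
LEFT JOIN players p ON p.username = 'foobar';
-- always returns at least one row; the player columns are NULL on no match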

Cannot create PostgreSQL user

Posted: 27 Apr 2013 01:27 PM PDT

I'm using PostgreSQL 9.1.9 on Ubuntu 13.04.

Following a StackOverflow question, I tried to create a user/role in PostgreSQL:

giladnaaman@giladnaaman-Lenovo-G550:~/Programming/Eclipse/Hephaestus$ createuser --version
createuser (PostgreSQL) 9.1.9

giladnaaman@giladnaaman-Lenovo-G550:~/Programming/Eclipse/Hephaestus$ createuser f4
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) y
Shall the new role be allowed to create more new roles? (y/n) y
createuser: could not connect to database postgres: FATAL:  role "giladnaaman" does not exist

How can I solve this problem?
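My current guess (untested) is that createuser connects as my OS user by default, and no giladnaaman role exists yet, so running it as the postgres superuser might get around it:

sudo -u postgres createuser f4
# or, equivalently, name the connecting role explicitly:
createuser -U postgres f4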

Oracle transactions deadlock

Posted: 27 Apr 2013 02:43 PM PDT

How do I roll back all active transactions in an Oracle database?

I execute this query and see 4 transactions in ACTIVE status:

SELECT *
FROM V$TRANSACTION
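The only approach I have found so far (an untested sketch) is to locate the sessions that own those transactions and kill them, which should force their work to be rolled back:

SELECT s.sid, s.serial#
FROM v$session s
JOIN v$transaction t ON s.taddr = t.addr;

-- then, for each (sid, serial#) pair returned:
ALTER SYSTEM KILL SESSION '<sid>,<serial#>';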

PostgreSQL: setting high work_mem does not avoid disk merge

Posted: 27 Apr 2013 10:19 AM PDT

This is not quite my day with Postgres. On my server machine with PostgreSQL 9.2.3, I have set work_mem to 4MB to avoid Sort Method: external merge Disk: 2072kB, but it did not help:

cwu=# vacuum analyze web_city;
VACUUM
cwu=# SHOW work_mem;
 work_mem
----------
 4MB
(1 row)
cwu=# explain analyze select count(*) from web_city GROUP BY (left(name,5));
                                                        QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=18304.35..20487.34 rows=95562 width=10) (actual time=1557.871..1809.029 rows=64459 loops=1)
   ->  Sort  (cost=18304.35..18633.84 rows=131796 width=10) (actual time=1557.856..1707.069 rows=131796 loops=1)
         Sort Key: ("left"((name)::text, 5))
         Sort Method: external merge  Disk: 2072kB
         ->  Seq Scan on web_city  (cost=0.00..4842.45 rows=131796 width=10) (actual time=1.050..174.907 rows=131796 loops=1)
 Total runtime: 1828.936 ms
(6 rows)

Setting work_mem to 8MB finally helps:

cwu=# SET work_mem = '8MB';
SET
cwu=# explain analyze select count(*) from web_city GROUP BY (left(name,5));
                                                      QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=5501.43..6675.72 rows=93943 width=10) (actual time=207.628..244.667 rows=64459 loops=1)
   ->  Seq Scan on web_city  (cost=0.00..4842.45 rows=131796 width=10) (actual time=0.749..102.511 rows=131796 loops=1)
 Total runtime: 263.154 ms
(3 rows)

But why is 4MB not enough? In the Postgres wiki there is this note:

if you see a line like "Sort Method: external merge Disk: 7526kB" in there, you'd know a work_mem of at least 8MB would really improve how fast that query executed, by sorting in RAM instead of swapping to disk.

So I assumed the same would hold in my case.

EDIT: If I do:

cwu=# create index name_left_prefix on web_city(left(name, 5));  

then 4MB is finally enough. It seems that the index lowers memory usage. If anyone would be so kind as to explain this behaviour, I would be very grateful.

SAP Business Warehouse 0APO_LOCNO_ATTR datasource extraction

Posted: 27 Apr 2013 12:58 PM PDT

We are currently importing 0APO_LOCNO_ATTR from several different source systems. I want to be able to import the field PRECISID into BW from this datasource. Here is what I have discovered/tried so far.

  • Using RSA2, PRECISID is in the extract structure, but it is not marked as transferable. I have tried looking up the datasource in RSA6 to unhide this field, but it is not present in RSA6.

  • When I run RSA3 on this datasource, PRECISID is present and populated. However, when I display the fields of the datasource in RSA1 it is not present. I have tried replicating metadata, but that hasn't done anything. I was under the impression that if a field was present in RSA3 that it should be present in the fields of the PSA.

How to make a copy from an Emergency Database?

Posted: 27 Apr 2013 09:04 AM PDT

I have a database on a very old computer (Windows XP, SQL Server 8.0). Yesterday the database was marked as suspect, so I put it into the emergency state; the database is now read-only. The only way to make it functional again is to copy it into another database. Previously I used Backup to save the database to a file and Restore to load it into another database, but with the database in emergency mode that no longer works. I have tried Import/Export to another database, but there is one problem: some keys in some tables of the old database are marked as Identity Specification and are generated automatically, while in the new database those columns are not marked as Identity Specification, so a NULL error is raised.

So why doesn't Import/Export make an exact copy of the old database in the new one? And how can I make an exact copy of the database?
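The closest I have come is forcing the identity values across by hand, per affected table (an untested sketch; MyTable and its columns are placeholders):

SET IDENTITY_INSERT dbo.MyTable ON;

INSERT INTO dbo.MyTable (Id, Col1, Col2)
SELECT Id, Col1, Col2
FROM OldDb.dbo.MyTable;

SET IDENTITY_INSERT dbo.MyTable OFF;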

Maintenance in MySQL when innodb_file_per_table is disabled

Posted: 27 Apr 2013 04:04 PM PDT

I have read your post and I completely understand that OPTIMIZE TABLE, in an environment where innodb_file_per_table is disabled, does not shrink the global ibdata1 tablespace.

But what if I need to perform index maintenance on InnoDB tables with the ANALYZE command? Will that also grow the single tablespace?

What other alternatives are there to increase performance or do maintenance on the InnoDB engine when using a single tablespace, without letting ibdata1 grow out of control?
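For reference, the statement I mean is just this; my understanding is that it only refreshes index statistics rather than rebuilding anything, but I would like confirmation that it cannot grow ibdata1:

ANALYZE TABLE mydb.mytable;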

Regards, Matthew

slow queries - set weight to token type in tsquery - postgresql

Posted: 27 Apr 2013 08:04 PM PDT

Postgresql version 9.2.3!

I'm working on a database for mapping chemical names. My main table contains approx. 91 million records and is indexed with GIN. I want to query it with multiple names (I'm trying now with 100 entries), which I first put in a query table; I then create a tsquery column from the names and index it with GiST.

Main table structure:

CREATE TABLE IF NOT EXISTS pubchem_compounds_synonyms_all (
    cid     int NOT NULL references pubchem_compounds_index(cid) ON UPDATE CASCADE ON DELETE CASCADE,
    name    VARCHAR(2000) NOT NULL,
    synonym VARCHAR(2000) NOT NULL,
    PRIMARY KEY (cid, name, synonym)
);

CREATE TEMPORARY TABLE synonyms_temp AS SELECT * FROM pubchem_compounds_synonyms_all;

COPY synonyms_temp FROM '...' delimiters E'\t';
INSERT INTO pubchem_compounds_synonyms_all SELECT DISTINCT * FROM synonyms_temp;

I was trying different approaches, thus for testing gin index I created a clone:

CREATE TABLE synonyms_all_gin_tsvcolumn AS
SELECT * FROM pubchem_compounds_synonyms_all;

then:

ALTER TABLE synonyms_all_gin_tsvcolumn ADD COLUMN tsv_syns TSVECTOR;
UPDATE synonyms_all_gin_tsvcolumn SET tsv_syns = to_tsvector(synonym);
CREATE INDEX gin_tsv_colum_index ON synonyms_all_gin_tsvcolumn USING GIN (tsv_syns);

The query table is:

CREATE TABLE IF NOT EXISTS cmap_names_query (
    name VARCHAR(2000) PRIMARY KEY NOT NULL
);

As with the main table, I fill it using COPY FROM via a temp table, and then I add the tsquery column:

.....
ALTER TABLE cmap_names_query ADD COLUMN lexemes TSQUERY;
UPDATE cmap_names_query SET lexemes = plainto_tsquery(name);
CREATE INDEX cmap_gist_tsquery_index ON cmap_names_query USING gist (lexemes tsquery_ops);

The query is basically a join between both tables:

SELECT DISTINCT ON (c.lexemes) c.name, s.cid, s.synonym, ts_rank(s.tsv_syns, c.lexemes)
FROM synonyms_all_gin_tsvcolumn s
JOIN cmap_names_query c ON c.lexemes @@ s.tsv_syns;

lexemes is the gist indexed tsquery column on my query table, whereas tsv_syns is the gin indexed tsvector column in the main names table, the one with 91 million records.

The query is intended to match names, exact matches where possible, and it works very well for such a large table. Normal names, containing only letters, can be retrieved in microseconds. The problem is when a name contains numbers: to_tsvector and to_tsquery create one token for each number, and all together this makes the query for such entries rather slow. Instead of a few milliseconds, they take approx. 1-2 seconds each. I would like to bring these queries down to a few milliseconds like the other entries, but I don't know how. I have tested with and without ts_rank and found that ranking adds at most half a second to the total query time, so that's not my problem.

Some sample queries are:

Query plan:

Limit  (cost=32.13..36.14 rows=1 width=92) (actual time=40.001..40.002 rows=1 loops=1)
  Buffers: shared hit=201
  ->  Bitmap Heap Scan on synonyms_all_gin_tsvcolumn  (cost=32.13..100.24 rows=17 width=92) (actual time=39.998..39.998 rows=1 loops=1)
        Recheck Cond: (tsv_syns @@ plainto_tsquery('10-methoxyharmalan'::text))
        Buffers: shared hit=201
        ->  Bitmap Index Scan on gin_tsv_colum_index  (cost=0.00..32.13 rows=17 width=0) (actual time=39.984..39.984 rows=1 loops=1)
              Index Cond: (tsv_syns @@ plainto_tsquery('10-methoxyharmalan'::text))
              Buffers: shared hit=200
Total runtime: 40.037 ms

result: (cid |name|synonym|tsv vector)

5474706 | 6-Methoxyharmalan | 10-Methoxyharmalan | '10':1 'methoxyharmalan':2  

Query plan:

Limit  (cost=32.23..36.23 rows=1 width=92) (actual time=2215.127..2215.128 rows=1 loops=1)
  Buffers: shared hit=10086
  ->  Bitmap Heap Scan on synonyms_all_gin_tsvcolumn  (cost=32.23..148.34 rows=29 width=92) (actual time=2215.125..2215.125 rows=1 loops=1)
        Recheck Cond: (tsv_syns @@ plainto_tsquery('1,4-chrysenequinone'::text))
        Buffers: shared hit=10086
        ->  Bitmap Index Scan on gin_tsv_colum_index  (cost=0.00..32.22 rows=29 width=0) (actual time=2215.108..2215.108 rows=1 loops=1)
              Index Cond: (tsv_syns @@ plainto_tsquery('1,4-chrysenequinone'::text))
              Buffers: shared hit=10085
Total runtime: 2215.182 ms

result:

 180933 | 1,4-Chrysenedione | 1,4-Chrysenequinone | '1':1 '4':2 'chrysenequinon':3  

Query plan:

Limit  (cost=32.00..36.02 rows=1 width=92) (actual time=2852.934..2852.936 rows=1 loops=1)
  Buffers: shared hit=7292
  ->  Bitmap Heap Scan on synonyms_all_gin_tsvcolumn  (cost=32.00..36.02 rows=1 width=92) (actual time=2852.930..2852.930 rows=1 loops=1)
        Recheck Cond: (tsv_syns @@ plainto_tsquery('2-deoxy-D-glucose'::text))
        Buffers: shared hit=7292
        ->  Bitmap Index Scan on gin_tsv_colum_index  (cost=0.00..32.00 rows=1 width=0) (actual time=2852.897..2852.897 rows=121 loops=1)
              Index Cond: (tsv_syns @@ plainto_tsquery('2-deoxy-D-glucose'::text))
              Buffers: shared hit=7291
Total runtime: 2852.992 ms

result:

 40 | 2-deoxy-D-glucose | 2-deoxy-D-glucose | '2':1 'd':4 'deoxi':3 'deoxy-d-glucos':2 'glucos':5  

I wonder what the best way to make these last queries faster would be. I have tried a pre-processing script that removes all the numbers; it brings the search down to about 3 seconds in total, but I lose the exact/closest match I was looking for in some cases, so that's no use. Other approaches that came to mind were:

Sol 1: assign priority weight to word tokens in contrast to numeric ones   

I think this could be a potentially good solution for me, but as far as I have seen it cannot be done. Tsvectors/tsqueries can be labelled, but not individual token types. Or is there a way to label tokens differently within the same tsvector?

Sol 2: create a new configuration and delete numeric mappings with ALTER FULLTEXTMAPPING   or add numbers and characters like - ( ) to the stopword list  

As with the parser approach, this might lead to wrong matches, although since the positional information is kept it might perform well. I'm not sure how I should do this, though.
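For Sol 2, the syntax I think I would need is along these lines (untested): copy the english configuration, unmap the parser's numeric token types, and rebuild the tsvector column with it:

CREATE TEXT SEARCH CONFIGURATION chem_names ( COPY = pg_catalog.english );

-- int, uint and float are the default parser's numeric token types
ALTER TEXT SEARCH CONFIGURATION chem_names
    DROP MAPPING FOR int, uint, float;

UPDATE synonyms_all_gin_tsvcolumn
SET tsv_syns = to_tsvector('chem_names', synonym);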

My postgresql.conf parameters:

shared_buffers = 24GB
checkpoint_segments = 512
effective_cache_size = 72GB
work_mem = 1GB

System specs are: 4 x Intel X7550 (@2.00GHz, 32 cores total), 256GB memory, CentOS 5.6 (64-bit), 18TB local disk (SATA-RAID), kernel.shmmax=64GB, kernel.shmall=4GB.

I have tried lower amounts of shared_buffers and effective_cache_size (16GB and 32GB respectively) with no difference in performance, so I'm planning to change them back to those limits.

I tried a GiST index on querytree lexemes; it didn't make much difference.

I'm a little bit lost and I would appreciate any ideas or possible solutions to speed up my queries.

Thanks :)

PS: Any recommendations for NoSQL DBs that could improve performance?

how to import mysql system "mysql" database

Posted: 27 Apr 2013 05:04 PM PDT

This question has probably been answered already, but it's almost impossible to google.

When I perform a full backup of the MySQL server using

mysqldump --all-databases  

and then reinstall it from scratch, I want to import the FULL dump of all databases, including "mysql".

I successfully did that by executing

mysql -u -p < dumpfile  

but now, even though all the user records are in mysql.user, these accounts do not work.

I can't even change their passwords; I always get

ERROR 1133 (42000): Can't find any matching row in the user table

Restarting mysqld does not help.

answer:

It was fixed by executing

MariaDB [mysql]> flush privileges;
Query OK, 0 rows affected (0.26 sec)

Now all users work as before.

SUPER privilege not defined for master user in Amazon MySQL RDS

Posted: 27 Apr 2013 01:04 PM PDT

I have created a medium instance on Amazon RDS in the Asia Pacific (Singapore) region, with a master user and master password, and it connects fine from Workbench installed on my local PC. But when I try to create a function on that instance, it shows me the following error:

ERROR 1418: This function has none of DETERMINISTIC, NO SQL, or READS SQL DATA in its declaration and binary logging is enabled (you might want to use the less safe log_bin_trust_function_creators variable)

On my instance, the variable log_bin_trust_function_creators shows OFF. When I try to change the variable using

SET GLOBAL log_bin_trust_function_creators = 1;  

it gives me another error:

Error Code: 1227. Access denied; you need (at least one of) the SUPER privilege(s) for this operation

I don't know how to solve this error.

Can anybody help?

How to import table's data in MySql from Sql Server?

Posted: 27 Apr 2013 02:04 PM PDT

I am trying to export a table from SQL Server 2008 R2 to MySQL 5.5. For this I am using the SQL Server Import and Export Wizard of SQL Server 2008, but it gives an error.

The error may occur because the table in SQL Server has a column of data type image, while in MySQL that column's data type is LONGBLOB.

Please provide your expert answers. If this is not possible through the SQL Server Import and Export Wizard, please suggest another method for the data transfer.

Is it possible to pipe the result of a mysqldump straight to rsync as the source argument?

Posted: 27 Apr 2013 03:04 PM PDT

Is it possible to pipe the result of a mysqldump straight to rsync as the source argument?

Conceptually, I was thinking something like:

mysqldump -u root -p database_to_backup > db_backup_file.sql | sudo rsync -avz db_backup_file.sql myuser@mysite.com:/var/www/db_backup_file.sql  

I've seen people pipe the result to mysql for their one-liner backup solution, but I was curious whether it is possible with rsync. You know, because rsync is magic :)
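After writing this out, I suspect rsync wants real files on both ends, so the nearest streaming equivalent I can think of (untested) pipes through ssh instead:

mysqldump -u root -p database_to_backup | ssh myuser@mysite.com 'cat > /var/www/db_backup_file.sql'

# or keep the local file and let rsync run as a second step:
mysqldump -u root -p database_to_backup > db_backup_file.sql && rsync -avz db_backup_file.sql myuser@mysite.com:/var/www/db_backup_file.sql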

Thanks for your time!

Amazon RDS for MySQL vs installing MySQL on an Amazon EC2 instance

Posted: 27 Apr 2013 11:04 AM PDT

At work, we host all our webservers on Amazon EC2 and usually have used MySQL databases installed on the same box as our Apache webserver, and communicated with them on localhost. We now face a need to migrate our database to its own server for one of our systems. I have a choice between two solutions: use Amazon RDS, or just launch a new Amazon EC2 box and install MySQL on it.

RDS, being a dedicated database service provided by the same company as EC2, seems like it ought to be the obviously better option. However, when I look at the pricing for the two options (see http://aws.amazon.com/ec2/pricing and http://aws.amazon.com/rds/pricing) it seems that an RDS server costs almost twice as much as an EC2 server for a box with the same specs.

Given that I'm capable of handling backups myself and that EC2 offers the same ability to scale up the instance as required that RDS does, I can't see any reason at all to use RDS instead of EC2. It seems like I'm probably missing something big, though, because if I were right, nobody would use RDS. What exactly am I missing, and what are the advantages of RDS over installing your own database on an EC2 instance?

MySQL Workbench sync keeps requesting the same changes

Posted: 27 Apr 2013 06:04 PM PDT

I am using MySQL Workbench, and when I try to "synchronize" it with my remote database, it keeps detecting some changes to make.

Specifically, the most recurrent ones are:

  • Dropping foreign keys and re-creating the same ones again
  • Changing the collation

I was compliant and executed all the queries given to me (and added the semi-colon that they forgot). MySQL didn't complain and executed them.

However, it didn't help; I can run it 20 times in a row and it will still ask for the same useless changes.

Understanding COUNT() as `count`

Posted: 27 Apr 2013 11:04 AM PDT

I'm currently learning how to build a site with PHP and MySQL. However, I seem to fail to understand COUNT() as count and wouldn't mind some further explanation.

I get the principles of COUNT, 0 || 1, and how it returns all the values that pertain to a query. But I don't see how COUNT as count works.

Anyhow, here is the code I'm writing (so we have a working example), which is where I first became perplexed:

SELECT COUNT(`id`) AS `count`,
       `id`
FROM `user`
WHERE `email`='$email'
AND `password`='".md5($password)."'";
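To narrow down what confuses me: as far as I can tell, the alias only names the output column, for example:

SELECT COUNT(`id`) AS `count` FROM `user`;
-- returns one row with a single column named "count",
-- which PHP can then read as $row['count']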

If anyone can explain, it would be a great help!

Deleting Data From Multiple Tables

Posted: 27 Apr 2013 10:52 AM PDT

Suppose I have a table called UNIVERSITY containing university names:

universityID    universityNAME    isACTIVE
     7          GNDU              1
     6          PU                1
     5          PTU               1
     8          LPU               1

Now these university IDs have (obviously) been used in many tables within the database (named, say, Education): suppose 10 tables.

Q. Now what happens if I delete one university?

A. The universityID field in the other tables becomes NULL.

But I don't want that; rather, when I delete one university from the UNIVERSITY table, all rows referencing it in all 10 tables should be deleted too.

What would be the shortest and easiest MySQL query for this operation?

NOTE: I'm using PHP.
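From what I have read, the cleanest route might be declaring each referencing column with ON DELETE CASCADE, so one DELETE on UNIVERSITY propagates everywhere (an untested sketch; colleges is a made-up example table, and this needs InnoDB):

ALTER TABLE colleges
    ADD CONSTRAINT fk_colleges_university
    FOREIGN KEY (universityID)
    REFERENCES UNIVERSITY (universityID)
    ON DELETE CASCADE;

-- once all 10 tables are declared this way, a single statement suffices:
DELETE FROM UNIVERSITY WHERE universityID = 7;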

mysql performance / cache configuration enigma

Posted: 27 Apr 2013 10:04 AM PDT

I have two MySQL 5.1 instances (say A and B) hosting the same database schema. If I run the same query on both instances (with MySQL Workbench), I don't understand why I get very different response times on subsequent requests.

On instance A, first query execution takes 0.688s and second query execution takes 0.683s
On instance B, first query execution takes 0.688s and second query execution takes 0.027s

It looks like there's a cache configuration difference between the two instances but I can't find it.

Comparing the SHOW VARIABLES results on both instances gives only a few distinct values (and I don't see how these could impact query execution time):

general_log_file :          /path/to/file2.log               VS    /path/to/file1.log
hostname :                  mysql2                           VS    mysql1
pid_file :                  /var/lib/mysql/mysql2.pid        VS    /var/lib/mysql/mysql1.pid
slave_max_allowed_packet :  1073741824                       VS    (empty)
slow_query_log_file :       /var/lib/mysql/mysql2-slow.log   VS    /var/lib/mysql/mysql1-slow.log
system_time_zone :          CET                              VS    CEST
timestamp :                 1352219171                       VS    1352219229
version :                   5.1.66-0ubuntu0.10.04.1          VS    5.1.62-0ubuntu0.10.04.1

Just to mention, instance A is our test environment and instance B is our production environment.

Edit (recommended by @Rick James):

The following variables are strictly identical in both environments.

SHOW VARIABLES LIKE '%buffer%'

bulk_insert_buffer_size 8388608
innodb_buffer_pool_size 8388608
innodb_log_buffer_size  1048576
join_buffer_size    131072
key_buffer_size 16777216
myisam_sort_buffer_size 8388608
net_buffer_length   16384
preload_buffer_size 32768
read_buffer_size    131072
read_rnd_buffer_size    262144
sort_buffer_size    2097144
sql_buffer_result   OFF

SHOW VARIABLES LIKE 'innodb%'

innodb_adaptive_hash_index  ON
innodb_additional_mem_pool_size 1048576
innodb_autoextend_increment 8
innodb_autoinc_lock_mode    1
innodb_buffer_pool_size 8388608
innodb_checksums    ON
innodb_commit_concurrency   0
innodb_concurrency_tickets  500
innodb_data_file_path   ibdata1:10M:autoextend
innodb_data_home_dir
innodb_doublewrite  ON
innodb_fast_shutdown    1
innodb_file_io_threads  4
innodb_file_per_table   OFF
innodb_flush_log_at_trx_commit  1
innodb_flush_method
innodb_force_recovery   0
innodb_lock_wait_timeout    50
innodb_locks_unsafe_for_binlog  OFF
innodb_log_buffer_size  1048576
innodb_log_file_size    5242880
innodb_log_files_in_group   2
innodb_log_group_home_dir   ./
innodb_max_dirty_pages_pct  90
innodb_max_purge_lag    0
innodb_mirrored_log_groups  1
innodb_open_files   300
innodb_rollback_on_timeout  OFF
innodb_stats_method nulls_equal
innodb_stats_on_metadata    ON
innodb_support_xa   ON
innodb_sync_spin_loops  20
innodb_table_locks  ON
innodb_thread_concurrency   8
innodb_thread_sleep_delay   10000
innodb_use_legacy_cardinality_algorithm ON

The actual SELECT :

select
    post0_.id as id83_0_,
    thread1_.id as id81_1_,
    post0_.createDate as createDate83_0_,
    post0_.editCount as editCount83_0_,
    post0_.editDate as editDate83_0_,
    post0_.editor_id as editor7_83_0_,
    post0_.postType as postType83_0_,
    post0_.poster_id as poster8_83_0_,
    post0_.repliedTo_id as repliedTo9_83_0_,
    post0_.text as text83_0_,
    post0_.thread_id as thread10_83_0_,
    thread1_.created as created81_1_,
    thread1_.debate_id as debate16_81_1_,
    thread1_.description as descript4_81_1_,
    thread1_.lastPostDate as lastPost5_81_1_,
    thread1_.poster_id as poster17_81_1_,
    thread1_.status as status81_1_,
    thread1_.threadType as threadType81_1_,
    thread1_.title as title81_1_,
    thread1_.updated as updated81_1_,
    thread1_.viewCount as viewCount81_1_,
    thread1_.imageUrl as imageUrl81_1_,
    thread1_.mediaTypeValue as mediaTy11_81_1_,
    thread1_.mediaUrl as mediaUrl81_1_,
    thread1_.modoWeight as modoWeight81_1_,
    thread1_.shortName as shortName81_1_,
    thread1_.why as why81_1_,
    thread1_.theme_id as theme18_81_1_,
    thread1_.answer_debaterId as answer19_81_1_,
    thread1_.answer_pollId as answer20_81_1_,
    (SELECT COUNT(*)
     FROM Post p
     WHERE p.thread_id = thread1_.id) as formula9_1_
from
    Post post0_
inner join
    Thread thread1_
        on post0_.thread_id=thread1_.id
where
    post0_.postType in ('P', 'F')
    and thread1_.debate_id=69
    and thread1_.threadType<>'B'
    and length(post0_.text)>0
order by
    createDate desc limit 5

The EXPLAIN SELECT (same on both environments):

id  select_type         table     type  possible_keys               key                 key_len  ref                       rows  Extra
1   PRIMARY             thread1_  ref   PRIMARY,FK9545FA2AEBB74893  FK9545FA2AEBB74893  5        const                     3690  Using where; Using temporary; Using filesort
1   PRIMARY             post0_    ref   FK260CC0D74BC5F3            FK260CC0D74BC5F3    5        debate-atest.thread1_.id  2     Using where
2   DEPENDENT SUBQUERY  p         ref   FK260CC0D74BC5F3            FK260CC0D74BC5F3    5        debate-atest.thread1_.id  2     Using where; Using index

The CREATE TABLE statements (exactly the same on both except for the constraint names):

delimiter $$

CREATE TABLE `Post` (
  `postType` varchar(1) NOT NULL,
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `createDate` datetime DEFAULT NULL,
  `editCount` int(11) NOT NULL,
  `editDate` datetime DEFAULT NULL,
  `text` longtext,
  `editor_id` int(11) DEFAULT NULL,
  `poster_id` int(11) DEFAULT NULL,
  `repliedTo_id` int(11) DEFAULT NULL,
  `thread_id` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `createDateIx` (`createDate`),
  KEY `FK260CC0D74BC5F3` (`thread_id`),
  KEY `FK260CC07C1A4F95` (`poster_id`),
  KEY `FK260CC0960B3775` (`editor_id`),
  KEY `FK260CC0D7C95B5F` (`repliedTo_id`),
  CONSTRAINT `FK260CC0D7C95B5F` FOREIGN KEY (`repliedTo_id`) REFERENCES `Post` (`id`),
  CONSTRAINT `FK260CC07C1A4F95` FOREIGN KEY (`poster_id`) REFERENCES `Debater` (`id`),
  CONSTRAINT `FK260CC0960B3775` FOREIGN KEY (`editor_id`) REFERENCES `Debater` (`id`),
  CONSTRAINT `FK260CC0D74BC5F3` FOREIGN KEY (`thread_id`) REFERENCES `Thread` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=148523 DEFAULT CHARSET=utf8$$

and

delimiter $$

CREATE TABLE `Thread` (
  `threadType` varchar(1) NOT NULL,
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `created` datetime DEFAULT NULL,
  `description` varchar(700) DEFAULT NULL,
  `lastPostDate` datetime DEFAULT NULL,
  `status` int(11) NOT NULL,
  `title` varchar(100) DEFAULT NULL,
  `updated` datetime DEFAULT NULL,
  `viewCount` int(11) NOT NULL,
  `imageUrl` varchar(255) DEFAULT NULL,
  `mediaTypeValue` int(11) DEFAULT NULL,
  `mediaUrl` varchar(1000) DEFAULT NULL,
  `modoWeight` int(11) DEFAULT NULL,
  `shortName` varchar(15) DEFAULT NULL,
  `why` longtext,
  `debate_id` int(11) DEFAULT NULL,
  `poster_id` int(11) DEFAULT NULL,
  `theme_id` int(11) DEFAULT NULL,
  `answer_debaterId` int(11) DEFAULT NULL,
  `answer_pollId` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `answerPollIx` (`answer_pollId`),
  KEY `FK9545FA2AAF24B581` (`theme_id`),
  KEY `FK9545FA2A7C1A4F95` (`poster_id`),
  KEY `FK9545FA2AEBB74893` (`debate_id`),
  KEY `FK9545FA2A82957DB8` (`answer_debaterId`,`answer_pollId`),
  CONSTRAINT `FK9545FA2A82957DB8` FOREIGN KEY (`answer_debaterId`, `answer_pollId`) REFERENCES `Answer` (`debaterId`, `pollId`),
  CONSTRAINT `FK9545FA2A7C1A4F95` FOREIGN KEY (`poster_id`) REFERENCES `Debater` (`id`),
  CONSTRAINT `FK9545FA2AAF24B581` FOREIGN KEY (`theme_id`) REFERENCES `Thread` (`id`),
  CONSTRAINT `FK9545FA2AEBB74893` FOREIGN KEY (`debate_id`) REFERENCES `Debate` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=49829 DEFAULT CHARSET=utf8$$

Minimizing Indexed Reads with Complex Criteria

Posted: 27 Apr 2013 12:04 PM PDT

I'm optimizing a Firebird 2.5 database of work tickets. They're stored in a table declared as such:

CREATE TABLE TICKETS (
  TICKET_ID id PRIMARY KEY,
  JOB_ID id,
  ACTION_ID id,
  STATUS str256 DEFAULT 'Pending'
);

I generally want to find the first ticket that hasn't been processed and is in Pending status.

My processing loop would be:

  1. Retrieve 1st Ticket where Pending
  2. Do work with Ticket.
  3. Update Ticket Status => Complete
  4. Repeat.

Nothing too fancy. If I'm watching the database while this loop runs, I see the number of indexed reads climb with each iteration. The performance doesn't seem to degrade terribly as far as I can tell, but the machine I'm testing on is pretty quick. However, I've received reports of performance degradation over time from some of my users.

I've got an index on Status, but it still seems like it scans down the Ticket_Id column each iteration. It seems like I'm overlooking something, but I'm not sure what. Is the climbing number of indexed reads expected for something like this, or is the index misbehaving in some way?

-- Edits for comments --

In Firebird you limit row retrieval like:

Select First 1
  Job_ID, Ticket_Id
From
  Tickets
Where
  Status = 'Pending'

So when I say "first", I'm just asking it for a limited record set where Status = 'Pending'.
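One thing I am considering trying (untested) is a compound index, so that the status filter and the id traversal are served by the same index:

CREATE INDEX IX_TICKETS_STATUS_ID ON TICKETS (STATUS, TICKET_ID);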

Difference in between HAS MANY and BELONGS TO in Yii framework

Posted: 27 Apr 2013 10:23 AM PDT

I am new to the Yii Framework.

I discovered HAS_MANY and BELONGS_TO in the documentation; however, I am confused by these features.

Can someone give some real-life examples, with a diagram, that will clear up my doubts?
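As far as I understand so far, both relations describe the same foreign key seen from opposite sides; in plain SQL terms (a made-up example):

CREATE TABLE `user` (
    id INT PRIMARY KEY
);

CREATE TABLE `post` (
    id      INT PRIMARY KEY,
    user_id INT,
    FOREIGN KEY (user_id) REFERENCES `user` (id)
);

-- post.user_id points at user, so in Yii terms:
--   Post BELONGS_TO User  (the model whose table holds the FK)
--   User HAS_MANY Post    (the model whose table is pointed at)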

How to re-create the ##MS_PolicyEventProcessingLogin## principal

Posted: 27 Apr 2013 04:35 PM PDT

I'm getting a bunch of errors in my MS SQL Server logs about a missing ##MS_PolicyEventProcessingLogin## principal. I went to check the master database and yes, it is indeed missing. I don't know how this happened.

How do I recreate this principal?
