Friday, July 5, 2013

[how to] mysql tried to restart itself 3 or more times, but it failed finally

mysql tried to restart itself 3 or more times, but it failed finally

Posted: 05 Jul 2013 09:26 PM PDT

This has happened twice now, so it could be the same cause both times.

mysqld tried to restart itself three or more times, but ultimately failed.

Here is the MySQL log:

130706 09:43:27 mysqld_safe Number of processes running now: 0
130706 09:43:27 mysqld_safe mysqld restarted
130706  9:43:28  InnoDB: Initializing buffer pool, size = 8.0M
130706  9:43:28  InnoDB: Completed initialization of buffer pool
130706  9:43:29  InnoDB: Started; log sequence number 0 2552936
130706  9:43:29 [Note] Event Scheduler: Loaded 0 events
130706  9:43:29 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.1.69-log'  socket: '/var/www/data/mysql.sock'  port: 3306  Source distribution
130706 09:43:34 mysqld_safe Number of processes running now: 0
130706 09:43:34 mysqld_safe mysqld restarted
130706  9:43:36  InnoDB: Initializing buffer pool, size = 8.0M
130706  9:43:36  InnoDB: Completed initialization of buffer pool
130706  9:43:36  InnoDB: Started; log sequence number 0 2552936
130706  9:43:36 [Note] Event Scheduler: Loaded 0 events
130706  9:43:36 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.1.69-log'  socket: '/var/www/data/mysql.sock'  port: 3306  Source distribution
130706 09:43:38 mysqld_safe Number of processes running now: 0
130706 09:43:39 mysqld_safe mysqld restarted
130706  9:43:41  InnoDB: Initializing buffer pool, size = 8.0M
130706  9:43:41  InnoDB: Completed initialization of buffer pool
130706  9:43:41  InnoDB: Started; log sequence number 0 2552936
130706 09:43:42 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

And the following is the content of my.cnf (the server has 1.5 GB of RAM):

[mysqld]
datadir=/var/www/data
socket=/var/www/data/mysql.sock
user=mysql
symbolic-links=0
query_cache_size=128M
max_connections=512
key_buffer=512M
back_log=200
max_allowed_packet=256M
table_cache=512
sort_buffer_size=4M
myisam_sort_buffer_size=64M
thread_cache=128
wait_timeout=300
thread_concurrency=4
log_output=FILE
slow-query-log=1
slow-query-log-file=/var/www/logs/log-slow-query.log
long_query_time=1

[mysqld_safe]
log-error=/var/www/logs/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

But I can easily start it by hand using "service mysqld start".

Does anyone know what happened? Please help!
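One plausible culprit with settings like these on a 1.5 GB box is memory pressure: mysqld_safe restarting mysqld in a loop is the classic footprint of the process being killed (for example by the kernel OOM killer) rather than shutting down cleanly. A rough back-of-envelope worst case for the my.cnf above, as a sketch (the per-connection figure is an assumption covering sort/read buffers):

-- Worst-case memory demand in MB: global buffers plus
-- per-connection buffers multiplied by max_connections.
SELECT 512 + 128          -- key_buffer + query_cache_size
     + 512 * (4 + 1)      -- max_connections * (sort_buffer + ~1M other per-thread buffers)
       AS worst_case_mb;  -- ~3200 MB, on a 1.5 GB server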

Cannot change root access in MySQL to %

Posted: 05 Jul 2013 08:43 PM PDT

This is a new MySQL Server install. I created my root password as part of the setup (on CentOS 6.4).

When I connect to the server through a terminal, I can connect to MySQL and issue commands, using my root password.

select current_user;

gives me:

+----------------+
| current_user   |
+----------------+
| root@localhost |
+----------------+

If I do:

GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'xxxxxx' WITH GRANT OPTION  

I get:

Query OK, 0 rows affected (0.00 sec)  

But when I do:

SHOW GRANTS;  

Here's what I get:

+-----------------------------------------------------------------------------------------------------------------+
| Grants for root@localhost                                                                                        |
+-----------------------------------------------------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY PASSWORD '*fdfgdgdggfggfgfg' WITH GRANT OPTION   |
+-----------------------------------------------------------------------------------------------------------------+

Am I not supposed to see another line for root@% in addition to root@localhost?
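Worth noting while interpreting this: SHOW GRANTS without an argument only reports the account you are currently logged in as (root@localhost here), so the root@% row would not appear in it either way. A small sketch of how to inspect the new account explicitly:

-- Grants for a specific account, not just the current one:
SHOW GRANTS FOR 'root'@'%';

-- All accounts known to the server:
SELECT user, host FROM mysql.user;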

The real issue I'm having is that I can't connect to MySQL from outside of localhost (as it is, I'm logged in via a terminal session), and if the MySQL server is not giving root universal access (root@%), that would explain the problem. When I try to connect using PHP (from my local Mac), the following is the returned MySQLi object:

mysqli Object
(
    [affected_rows] =>
    [client_info] =>
    [client_version] => 50008
    [connect_errno] => 2002
    [connect_error] => Connection refused
    [errno] =>
    [error] =>
    [field_count] =>
    [host_info] =>
    [info] =>
    [insert_id] =>
    [server_info] =>
    [server_version] =>
    [stat] =>
    [sqlstate] =>
    [protocol_version] =>
    [thread_id] =>
    [warning_count] =>
)

I realize that granting root access from % is not a great idea, but at this point, I'm trying to figure out why I can't connect to the MySQL server, and when I solve that, I will restrict access.

"Error at line 1: ORA-00942: table or view does not exist." Sqlplus basic use, querying. Unable to retrieve schema from table "Course" or "emp"

Posted: 05 Jul 2013 06:38 PM PDT

I installed Oracle APEX 11g on my computer, which runs 64-bit Windows 7. When I open the command prompt and execute the query select * from course;, I receive the following error: "Error at line 1: ORA-00942: table or view does not exist." I receive the same error when executing select * from emp;, when connecting as a DBA, and when logging in with my regular user ID and password. I would like to learn SQL*Plus, but right now I just want to learn how to query tables. I understand the queries above, yet I still receive the error. Can someone please help? Thank you.
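As a starting point, it can help to check what the session can actually see, since ORA-00942 simply means the name resolves to nothing the current user owns or has privileges on. A sketch (EMP classically lives in the SCOTT demo schema, which may not be installed or unlocked on a given system):

-- Tables owned by the schema you are logged in as:
SELECT table_name FROM user_tables;

-- Tables visible to you in any schema, e.g. the demo EMP table:
SELECT owner, table_name FROM all_tables WHERE table_name = 'EMP';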

MySQL database datafiles in windows SystemData directory

Posted: 05 Jul 2013 02:45 PM PDT

When I create a database and some tables in MySQL Server, it creates data files under the Windows ProgramData directory, in C:\ProgramData\MySQL\MySQL Server 5.5\data\mydb\.

Each table has three files with these suffixes:

TableName.frm
TableName.MYD
TableName.MYI

I don't know exactly what these files are.

Please explain what they are and what they are used for.

Thanks in advance.

List of variable defaults for InnoDB engine

Posted: 05 Jul 2013 02:29 PM PDT

I would like to list the defaults for the global variables that are InnoDB-specific.

The problem is that the following commands don't list any InnoDB variables:

/usr/sbin/mysqld --no-defaults --verbose --help
/usr/sbin/mysqld --verbose --help

However, I can see the runtime values with:

mysql -e "show global variables"  
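For what it's worth, the runtime values can at least be narrowed to the InnoDB-specific ones with a pattern match (these are current values, which match the built-in defaults only if nothing overrides them):

SHOW GLOBAL VARIABLES LIKE 'innodb%';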

Find records in same table with different company id but some other same values

Posted: 05 Jul 2013 12:59 PM PDT

I have a subscriptions table that looks roughly like this:

table subscriptions
id (auto generated id)
company_id
city
state
subscription_date

I would like to find records that:

A) are of a different company (company_id) but have the same city/state

B) are of a different company (company_id) but have the same subscription date

For the address case, I was doing:

select distinct on (t1.company_id)
       t1.company_id, t2.company_id, t1.city, t1.state
from subscription t1, subscription t2
where t1.company_id <> t2.company_id
  and t1.city = t2.city
  and t1.state = t2.state
order by t1.company_id

Is this an efficient and accurate way to do it?
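For case B, a sketch that mirrors the same pattern (assuming the table as described above; only the join condition changes):

select distinct on (t1.company_id)
       t1.company_id, t2.company_id as other_company_id, t1.subscription_date
from subscription t1, subscription t2
where t1.company_id <> t2.company_id
  and t1.subscription_date = t2.subscription_date
order by t1.company_id;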

Return table based on 1, 2, 3, 4, 5 or 0 selected search options

Posted: 05 Jul 2013 03:45 PM PDT

Scenario: Let's say I have several tables for a software ticketing system: Developer, Supervisor, TicketType, TicketStatus, and a Ticket table to join them on. Assume I have done my inner joins and now have a complete table listing all assigned ticket info.

Id | TicketName | TicketStatus | Developer | Supervisor | DateAssigned | DateDue
---+------------+--------------+-----------+------------+--------------+----------
 1 | Report     | Open         | Josh      | Bob        | 20130702     | 20130705
 2 | App        | Pending      | Brian     | Bob        | 20130701     | 20130802
 3 | App        | Closed       | Brian     | Hannibal   | 20130701     | 2013----

Obviously the SQL for this is pretty basic:

SELECT TicketId, TicketName, TicketStatus ...
FROM Ticket
INNER JOIN TicketType ON [Ticket].TicketID = [TicketType].TicketName
INNER JOIN [Status] ON ....
....

So that's the DEFAULT static portion of the report. This is what initializes first; however, the client, customer, manager, etc. want the option of selecting zero, one, or more fields to filter the data on, for any field in the report. For example:

The user may choose 'app' from the TicketName category, 'Josh' as Developer, and BETWEEN date1 AND date2, while TicketStatus remains NULL or ' '. The table should display all 'app' tickets assigned to 'Josh' between the selected dates, without breaking because TicketStatus was not selected; tickets of any status should appear.

The user could also choose not to select any values, in which case the report should default to the example above without breaking. This means I cannot use NULL as a placeholder, because it would cause the initial report not to initialize (unless I did something wrong?).

The user can basically mix and match any combination of the fields, and the table should filter on those selected fields.

Question: How can someone return a table based on all or no options being selected, without breaking the query?

Additional Info:

Using SQL Server 2008 r2, Business Intelligence Development Studio, T-SQL.

I have tried declaring variables and using LIKE statements, but this requires the user to select an option before the report will return anything, and the user may not know the field values from memory.

CREATE PROCEDURE WorkAssignmentReport @Category varchar(25)
                                     ,@---Name varchar(25)
                                     ,@Stats varchar(25)
                                     ,@---Name varchar(25)
                                     .........
                                     ......

FROM [Ticket]
INNER JOIN dbo.[TicketType] ON [Ticket].TicketTypeID = [TicketType].TicketTypeID
INNER JOIN dbo.[Status] ON [Ticket].Stat ....
.....

WHERE [TicketType].Name LIKE @Category
  AND [Ticket].Name LIKE @ReportName
  AND [Status].StatusName LIKE @Status
  AND [Phase].PhaseName LIKE @PhaseName
  AND (BA.FullName = @Name1 OR AD.FullName = @Name1)
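One common way to make every filter optional is to treat NULL as "no filter" and short-circuit each predicate. A sketch, with table and column names assumed rather than taken from the real schema:

-- Hypothetical names; NULL (the default) means "don't filter on this column".
CREATE PROCEDURE WorkAssignmentReportSketch
    @Category  varchar(25) = NULL,
    @Status    varchar(25) = NULL,
    @Developer varchar(25) = NULL
AS
SELECT t.TicketId, t.TicketName, s.StatusName, d.FullName
FROM dbo.Ticket t
INNER JOIN dbo.TicketType tt ON t.TicketTypeID = tt.TicketTypeID
INNER JOIN dbo.[Status] s    ON t.StatusID = s.StatusID
INNER JOIN dbo.Developer d   ON t.DeveloperID = d.DeveloperID
WHERE (@Category  IS NULL OR tt.Name      = @Category)
  AND (@Status    IS NULL OR s.StatusName = @Status)
  AND (@Developer IS NULL OR d.FullName   = @Developer);

With NULL defaults, the report runs unfiltered when the user selects nothing, and each supplied value narrows the result without any special-case branching.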

PostGIS Query that selects from over 124GB of Indexed data, how to Optimize?

Posted: 05 Jul 2013 03:39 PM PDT

I have a query:

with contains as
( select reports.mmsi,
         ST_Contains(q346_pareas.geom, reports.position_geom) as ST
  from q346_pareas, reports
  where report_timestamp between '2013-05-02 07:00:00' and '2013-05-02 08:00:00'
    and (reports.position_geom && q346_pareas.geom)
)
select position_geom
from reports
where report_timestamp between '2013-05-02 07:00:00' and '2013-05-02 08:00:00'
  and reports.mmsi = ANY
      ( select mmsi
        from contains
        where contains.st = true
          and contains.st is not null
      );

Everything that I am selecting from is indexed. The reports table is 164 GB, with an index size of 124 GB.

The q346_pareas table holds a polygon made from these points:

 -72.2417667  66.268983,
-112.2589167  66.410833,
-125.5855167  77.681283,
 -63.14165    81.332367,
 -50.53       74.5,
 -42.1917     55.844,
 -48.583333   45.80575,
 -67.2276333  40.5562,
 -73.18415    46.8878167,
 -72.2417667  66.268983

The EXPLAIN ANALYZE output is:

                                        QUERY PLAN
------------------------------------------------------------------------------------------
 Nested Loop  (cost=192146.15..2907777.88 rows=577794 width=32) (actual time=2362936.415..3156249.235 rows=80034 loops=1)
   CTE contains
     ->  Nested Loop  (cost=418.62..153615.11 rows=527503 width=236) (actual time=1822818.674..2358354.485 rows=89045 loops=1)
         ->  Seq Scan on q346_pareas  (cost=0.00..1.01 rows=1 width=200) (actual time=0.012..0.015 rows=1 loops=1)
         ->  Bitmap Heap Scan on reports  (cost=418.62..20419.53 rows=6 width=36) (actual time=1822791.372..2358148.699 rows=89045 loops=1)
               Recheck Cond: (position_geom && q346_pareas.geom)
               Rows Removed by Index Recheck: 359425857
               Filter: ((report_timestamp >= '2013-05-02 07:00:00'::timestamp without time zone) AND (report_timestamp <= '2013-05-02 08:00:00'::timestamp without time zone))
               Rows Removed by Filter: 156874877
               ->  Bitmap Index Scan on position_idx  (cost=0.00..418.62 rows=5036 width=0) (actual time=1352314.373..1352314.373 rows=156963922 loops=1)
                     Index Cond: (position_geom && q346_pareas.geom)
   ->  HashAggregate  (cost=11206.14..11207.14 rows=100 width=4) (actual time=2358424.949..2358426.072 rows=466 loops=1)
         ->  CTE Scan on contains  (cost=0.00..10550.06 rows=262433 width=4) (actual time=1822818.688..2358404.878 rows=72833 loops=1)
               Filter: (st AND (st IS NOT NULL))
               Rows Removed by Filter: 16212
   ->  Bitmap Heap Scan on reports  (cost=27324.90..27429.29 rows=27 width=36) (actual time=1662.920..1711.195 rows=172 loops=466)
         Recheck Cond: ((mmsi = contains.mmsi) AND (report_timestamp >= '2013-05-02 07:00:00'::timestamp without time zone) AND (report_timestamp <= '2013-05-02 08:00:00'::timestamp without time zone))
         ->  BitmapAnd  (cost=27324.90..27324.90 rows=27 width=0) (actual time=1660.391..1660.391 rows=0 loops=466)
               ->  Bitmap Index Scan on report_mmsi_idx  (cost=0.00..578.71 rows=23532 width=0) (actual time=1578.394..1578.394 rows=180098 loops=466)
                     Index Cond: (mmsi = contains.mmsi)
               ->  Bitmap Index Scan on report_timestamp_idx  (cost=0.00..26456.54 rows=1157594 width=0) (actual time=49.747..49.747 rows=553929 loops=466)
                     Index Cond: ((report_timestamp >= '2013-05-02 07:00:00'::timestamp without time zone) AND (report_timestamp <= '2013-05-02 08:00:00'::timestamp without time zone))
 Total runtime: 3156257.974 ms
(23 rows)

The last time I ran the query, it took ~50 minutes. I understand that I will eventually reach the point where scanning a table with hundreds of millions of records takes a long time, and that the network will become a factor, but I ideally need to increase the time window of the query from one hour to two months.
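One direction worth trying, as a sketch against the same tables: fold the containment test into a single subquery instead of materializing the CTE, since in recent PostGIS versions ST_Contains already applies a bounding-box (&&) shortcut of its own:

select r.position_geom
from reports r
where r.report_timestamp between '2013-05-02 07:00:00' and '2013-05-02 08:00:00'
  and r.mmsi in (
        select r2.mmsi
        from reports r2
        join q346_pareas p on ST_Contains(p.geom, r2.position_geom)
        where r2.report_timestamp between '2013-05-02 07:00:00' and '2013-05-02 08:00:00'
  );

This also avoids scanning the CTE output twice, though the dominant cost in the plan above is clearly the bitmap heap scan with its huge index recheck.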

Any help is greatly appreciated.

pt-table-checksum - Diffs cannot be detected because no slaves were found (1)

Posted: 05 Jul 2013 10:13 AM PDT

I'm new to the Percona tools. I am trying to use pt-table-checksum as follows:

pt-table-checksum h=localhost -u root -p xyz --replicate=percona.checksums --create-replicate-table --databases=mysql

But I get this error:

Diffs cannot be detected because no slaves were found
Please read the --recursion-method documentation for information.

Here is my my.cnf:

#password       = your_password
port            = 3306
socket          = /tmp/mysql.sock
default-character-set=utf8

[mysqld]
datadir=/spacedb
socket=/tmp/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
init-connect='SET NAMES utf8'
character-set-server = utf8
max_connections = 300

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

[mysqladmin]
socket=/tmp/mysql.sock

[mysql]
socket=/tmp/mysql.sock
default-character-set=utf8

[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin
user       = multi_admin
password   = user

[mysqld1]
socket     = /tmp/mysql.sock
port       = 3306
pid-file   = /var/run/mysqld/mysqld.pid
datadir    = /spacedb
language   = /usr/share/mysql/english
server-id  = 1
log-bin    = mysql-bin
binlog_format = mixed

# Expire_logs + MaxBinlogSize
expire_logs_days = 7
max_binlog_size = 100M

sync_binlog = 1
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout=1

init-connect='SET NAMES utf8'
character-set-server = utf8
max_connections = 300

[mysqld2]
socket     = /tmp/mysql.sock2
port       = 3307
pid-file   = /var/run/mysqld/mysqld2.pid
datadir    = /repdb
language   = /usr/share/mysql/english
server-id  = 2
relay-log  = mysqld2-relay-bin

innodb_lock_wait_timeout=1
init-connect='SET NAMES utf8'
character-set-server = utf8

Do I have to use a DSN? And how does that work?
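For reference, a replica on a non-standard port on the same host (like the mysqld2 instance above) is exactly the case the default recursion methods tend to miss, and --recursion-method=dsn lets you list the replicas yourself in a table. A sketch, with the host and port assumed from the my.cnf above:

-- Table layout expected by --recursion-method=dsn (per the Percona Toolkit docs):
CREATE TABLE percona.dsns (
  id        int NOT NULL AUTO_INCREMENT,
  parent_id int DEFAULT NULL,
  dsn       varchar(255) NOT NULL,
  PRIMARY KEY (id)
);

-- One row per replica; here the mysqld2 instance on port 3307:
INSERT INTO percona.dsns (dsn) VALUES ('h=127.0.0.1,P=3307,u=root');

pt-table-checksum is then pointed at that table with --recursion-method dsn=h=localhost,D=percona,t=dsns.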

Cron job for Mysql Incremental backup

Posted: 05 Jul 2013 12:02 PM PDT

I want to run a cron job for MySQL incremental backups. Can anyone please help me? I tried:

$command = "mysqldump-h $dbhost -u $dbuser -p$dbpass ".    $dbname." > $backup_file 2>&1";    system($command, $returned);  

It worked, but mysqlbackup is not working. The MySQL documentation says that we need to use the mysqlbackup command rather than the mysqldump command.
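mysqlbackup ships with the commercial MySQL Enterprise Backup product, which is likely why it isn't available here. With a stock server, the usual incremental scheme is a periodic full mysqldump plus archiving the binary logs between fulls. A sketch of the SQL side (this assumes log-bin is enabled in my.cnf):

-- Run from the cron job: close the current binary log so the
-- finished logs can be copied to backup storage.
FLUSH LOGS;

-- Lists the log files; everything except the newest is safe to archive.
SHOW BINARY LOGS;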

Removing superfluous tables with a synchronization?

Posted: 05 Jul 2013 10:11 AM PDT

I'm using SQL Server 2008 R2's replication functionality to update my subscriber database through a transactional pull subscription. When I mark it for reinitialization, it does fix the schema and data of any modified local tables that exist in the publication snapshot, but it doesn't remove any new tables (and presumably SPs, triggers, etc.) that have been added. Is there any way to get the synchronization to remove superfluous objects, like tables that don't exist in the publication snapshot, in addition to updating and adding existing objects?

InnoDB query duration inconsistent

Posted: 05 Jul 2013 12:23 PM PDT

I am running a series of UPDATE statements on a nearly empty InnoDB table, and around 1 out of every 20-30 queries will inexplicably take 10 times as long as the others. For example, the first 20 updates will take 20 ms each, and the 21st update will suddenly take 200 ms. I've set up an incredibly basic test:

CREATE TABLE `test` (
  `col1` int(11) DEFAULT NULL,
  `col2` int(11) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8

I insert a single row into the table, and then I have a C# console program that does a series of updates:

for (int i = 0; i < 15; i++) {
    long start = CurrentTimeMillis();

    MySqlCommand cmd = new MySqlCommand();
    cmd.Connection = conn;
    cmd.CommandText = "UPDATE test SET col1=" + i + " WHERE col2=1";
    cmd.Prepare();
    cmd.ExecuteNonQuery();

    Console.WriteLine("Duration: " + (CurrentTimeMillis() - start));
}

This is the output I see from the program:

MySQL version : 5.5.17
Duration: 36
Duration: 30
Duration: 16
Duration: 26
Duration: 152
Duration: 47
Duration: 71
Duration: 77
Duration: 46
Duration: 28
Duration: 21
Duration: 25
Duration: 17

If I run SHOW PROFILE FOR QUERY on the query with the 152 ms time, both the "Updating" and "query end" stages are abnormally high.
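For anyone reproducing this, the per-stage numbers come from the profiler, roughly like so (a sketch; profiling is per-session in 5.5):

SET profiling = 1;
UPDATE test SET col1 = 42 WHERE col2 = 1;
SHOW PROFILES;               -- find the query id of the slow statement
SHOW PROFILE FOR QUERY 1;    -- per-stage timings, including "Updating" and "query end"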

If I switch the table to MyISAM, the query durations are perfect, but I don't want table locking. Does anyone have a guess as to what is making InnoDB act this way?

TOAST Table Growth Out of Control - FULLVAC Does Nothing

Posted: 05 Jul 2013 10:54 AM PDT

Recently, I've had a PostgreSQL 8.2.11 server upgraded to 8.4 in order to take advantage of autovacuum features and be in line with the 30-ish other PGSQL servers. This was done by a separate IT group who administers the hardware, so we don't have much choice on any other upgrades (we won't see 9+ for a while). The server exists in a very closed environment (isolated network, limited root privileges) and runs on RHEL 5.5 (i686). Since the upgrade, the database has been growing constantly, to the tune of 5-6 GB a day. Normally, the database as a whole is ~20 GB; currently, it is ~89 GB. We have a couple of other servers which run equivalent databases and actually synchronize the records to each other via a 3rd-party application (one I do not have access to the inner workings of). The other databases are ~20 GB, as they should be.

Running the following SQL, it's fairly obvious there's an issue with a particular table, and, more specifically, its TOAST table.

SELECT nspname || '.' || relname AS "relation",
       pg_size_pretty(pg_relation_size(C.oid)) AS "size"
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_relation_size(C.oid) DESC
LIMIT 20;

Which produces:

            relation            |  size
--------------------------------+---------
 pg_toast.pg_toast_16874        | 89 GB
 fews00.warmstates              | 1095 MB
 ...
(20 rows)

This TOAST table is for a table called "timeseries", which saves large records of blobbed data. A SUM(LENGTH(blob)/1024./1024.) over all the records in timeseries yields ~16 GB for that column. There should be no reason this table's TOAST table is as large as it is.

I've performed a VACUUM FULL VERBOSE ANALYZE timeseries, and the vacuum runs to completion with no errors.

INFO: vacuuming "pg_toast.pg_toast_16874"
INFO: "pg_toast_16874": found 22483 removable, 10475318 nonremovable row versions in 10448587 pages
DETAIL: 0 dead row versions cannot be removed yet.
Nonremovable row versions range from 37 to 2036 bytes long.
There were 20121422 unused item pointers.
Total free space (including removable row versions) is 0 bytes. 4944885 pages are or will become empty, including 0 at the end of the table. 4944885 pages containing 0 free bytes are potential move destinations.
CPU 75.31s/29.59u sec elapsed 877.79 sec.
INFO: index "pg_toast_16874_index" now contains 10475318 row versions in 179931 pages
DETAIL: 23884 index row versions were removed.
101623 index pages have been deleted, 101623 are currently reusable.
CPU 1.35s/2.46u sec elapsed 21.07 sec.

I REINDEXed the table, which freed some space (~1 GB). I can't CLUSTER the table, as there isn't enough space on disk for the process, and I'm holding off on rebuilding the table entirely because I'd like to find out why it is so much bigger than the equivalent databases we have.

I ran the "Show Database Bloat" query from the PostgreSQL wiki, and this is what I get:

 current_database | schemaname |         tablename          | tbloat | wastedbytes |           iname            | ibloat | wastedibytes
------------------+------------+----------------------------+--------+-------------+----------------------------+--------+--------------
 ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | idx_timeseries_synchlevel  |    0.0 |            0
 ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | idx_timeseries_localavail  |    0.0 |            0
 ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | idx_timeseries_expirytime  |    0.0 |            0
 ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | idx_timeseries_expiry_null |    0.0 |            0
 ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | uniq_localintid            |    0.0 |            0
 ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | pk_timeseries              |    0.1 |            0
 ptrdb04          | fews00     | idx_timeseries_expiry_null |    0.6 |           0 | ?                          |    0.0 |            0

It looks like the database doesn't consider this space "empty" at all, but I just don't see where all the disk space is coming from!

I suspect that this database server is deciding to use 4-5x as much disk space to save the same records pulled from the other data servers. My question is this: is there a way I can verify the physical on-disk size of a row? I'd like to compare the size of one row on this database to one on another, "healthy" database.
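On the row-size question, one sketch that may help compare the two servers (pg_column_size reports the stored size of a value, after any compression and TOASTing; the blob column name here is a placeholder for the real one):

SELECT pg_column_size(t.*)       AS row_bytes,
       pg_column_size(t.blobcol) AS blob_bytes   -- hypothetical column name
FROM fews00.timeseries t
ORDER BY row_bytes DESC
LIMIT 10;

Running the same query on a "healthy" server would show whether individual rows are genuinely stored larger here or whether the space is going somewhere else entirely.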

Thanks for any help you can provide!

UPDATE 1

I ended up rebuilding the table from a dumped schema due to its size (I couldn't leave it alone for another day). After synchronizing the data via the software synch process, the TOAST table was ~35 GB; however, I could only account for ~9 GB of it from that blob column, which should be the largest in terms of values. Not sure where the other 26 GB is coming from. I CLUSTERed, VACUUM FULLed, and REINDEXed, to no avail. The postgresql.conf files between the local and remote data servers are exactly the same. Is there any reason this database might be trying to store each record with a larger footprint on disk?

UPDATE 2 - Fixed

I finally decided to just completely rebuild the database from the ground up, even going as far as to reinstall the PostgreSQL84 packages on the system. The database path was reinitialized and the tablespaces wiped clean. The 3rd-party software synchronization process repopulated the tables, and the final size came out to ~12 GB! Unfortunately, this in no way helps to identify the exact source of the issue. I'm going to watch it for a day or two, see if there are any major differences in how the revitalized database handles the TOAST table, and post those results here.

"SQL1042C An unexpected system error" when creating a database

Posted: 05 Jul 2013 11:37 AM PDT

I installed DB2 Express-C 10.1 on OS X 10.8.4. I installed it in user mode, and it seems to have created an instance with my username (avernet). I checked that the instance is indeed there by running db2ilist. The database manager is running, as when I run db2start, I get the message SQL1026N The database manager is already active.

Now, I am trying to create a database by running db2 from the command line, and typing the command create database orbeon, but it returns:

SQL1042C  An unexpected system error occurred.  SQLSTATE=58004

What am I missing here? How can I create a simple database from the command line (I prefer not to bother with Data Studio as it isn't available for OS X)?

In db2dump/db2diag.log, I see the following error (full file, in case you're interested):

2013-06-29-15.31.38.948217-420 E159047E344          LEVEL: Error (OS)
PID     : 1766                 TID : 140735175762304 PROC : db2star2
INSTANCE: avernet              NODE : 000
HOSTNAME: huashan.local
FUNCTION: DB2 UDB, SQO Memory Management, sqloMemCreateSingleSegment, probe:100
CALLED  : OS, -, shmget                           OSERR: 17

Note that for the install to go through, I increased the OS X shared memory, per this recommendation.

mysql replication goes out of sync for some tables

Posted: 05 Jul 2013 11:48 AM PDT

We are running MySQL 5.1.61 on Red Hat systems and have the following setup:

One master and four slaves replicate from the master. We recently added a new slave, and over a few days we started noticing that on the newly added slave some tables (not all) lose some records. This happens only on this slave, and it is not regular; over a period of 3 weeks, this issue seems to have occurred on 5-7 days.

We use statement-based replication. I am not sure why this happens on only one slave. There seem to be no errors in the MySQL error logs. The only difference between the old slaves and the new slave is that the new slave has slightly less RAM than the other ones, but the new slave is not being used for anything right now.

Is there a way to troubleshoot this issue to see why it happens on only one slave? Could it be network-related, or something else? Any pointers on where to start looking?

Here is the memory info. Old slave:

             total       used       free
Mem:      24731204   24641572      89632

New slave

Mem:      16431192   10112880    6318312   

Force View's query plan to update?

Posted: 05 Jul 2013 02:48 PM PDT

I have a view whose query plan appears to be cached. Is there a way to force the view's plan to be recalculated on each access?

MySQL+Web-App Performance Issues using different servers

Posted: 05 Jul 2013 12:48 PM PDT

We are having a performance issue with our MySQL servers that does not make any sense. I have read countless articles from many people (mostly Percona) and have made my.cnf tweaks; we even managed to squeeze out another 30% more TPS thanks to those Percona articles. However, our problem is with our in-house web app (a Tomcat/Java/Apache stack). It performs poorly when connected to certain servers - the servers with the better hardware.

Here is the symptom:

If we point our test application server (Ubuntu, Apache, Tomcat, Java) at server MYSQL02, the application's performance is acceptable. However, if we point the application at MYSQL01 or MYSQL03 (and these two boxes are idle!), performance is poor, with high latencies. Example:

Retrieving an account (via our web-app) from MYSQL02 takes 2 to 3 seconds.
On MYSQL01 or MYSQL03 it takes 12 seconds or longer.

We cannot figure out why! The MySQL servers and MONyog do NOT report any problems! If we execute the statements (hundreds of them) manually, they return instant results, and their EXPLAIN plans show they are all using indexes. We do NOT get any slow-query, deadlock, or contention notifications.

Here is some basic information about our MySQL systems. They are all DEDICATED MySQL servers:

PROD (current production, not in replication farm, standalone)

Best/newest hardware configuration (8 core / 20GB / 15K RPM HW RAID 10)
MySQL: 5.0 (RHEL)

We are trying to migrate from this MySQL 5.0 to our MySQL 5.5 farm (see below).
This server will then be reloaded with MySQL 5.5 and return as the primary
production server.

MYSQL01

Second best hardware configuration (8 core / 12GB RAM / 15K RPM HW RAID 10)
Supermicro X8FAH; 2x XEON X5550 @ 2.66GHz
2x LSI MegaRAID SAS 9260-8i
MySQL: 5.5.31 (Ubuntu)
OS: Ubuntu 12.04 LTS x64

MYSQL02

Worst/oldest hardware configuration (4 core / 12GB RAM / 7.2K RPM SW {mdadm} RAID 5)
Supermicro X2DBE; 2x XEON 5150 @ 2.66GHz
MySQL: 5.5.31 (Ubuntu)
OS: Ubuntu 12.04 LTS x64

This is our EDI workhorse. It spends most of its day importing data from
our old mainframe system. It replicates the data to 01 and 03.

MYSQL03

Third best hardware configuration (8 core / 12GB RAM / 7.2K RPM HW RAID 10)
Supermicro X7DGB8-X; XEON E5410 @ 2.33GHz
MySQL: 5.5.31 (Ubuntu)
OS: Ubuntu 12.04 LTS x64

We used sysbench to test and tweak all the above systems and here are the test results with notes.

NOTE: TPS = Transactions Per Second

Results before applying any new tweaks:

PROD:    1,179.72 TPS <- Expected
MYSQL01:   442.92 TPS <- Why?
MYSQL02:   543.22 TPS <- Expected
MYSQL03:   904.22 TPS <- Surprising

Results after my.cnf tweaks:

MYSQL01: 655.00 TPS <- 32% improvement. Not bad, but still very poor compared to its siblings.
MYSQL02: 754.81 TPS <- 28% improvement. This was unexpected.
MYSQL03: 969.59 TPS <- 07% improvement.

We are unsure why MYSQL01's performance is so poor. We can only surmise that there is an OS, RAID card, or BIOS setting that may be improperly set. I am leaning towards the RAID card/configuration. The only way to know for sure is to shut down this server and scrutinize the configuration; a reload may be necessary. However, since the ultimate plan is to make the current PROD hardware the primary production MySQL server, we may leave MYSQL01 alone for now and re-purpose the hardware after migrating to the 5.5 farm. Still, we can't migrate until we figure out why our application behaves so poorly on certain hardware.

Anyone have any suggestions?

SQL Server database schema (and likely, some data changes) - how to auto-distribute over many database instances

Posted: 05 Jul 2013 10:48 AM PDT

Our development involves a SQL Server database (also might be Oracle or Postgres later) and we would sometimes make database schema changes or some other interventions in database.

What solutions exist to create a "patch" or "script" that distributes these changes to other installations of the same database (we do not have direct access to those)? It needs to alter the database schema and execute SQL and/or other complex, pre-programmed data alterations as defined by the person who designs the change. On the other instances, a system admin should be able to just run a program or press a button, and the changes would be applied automatically.

In addition, it is a plus if such a solution can take a database snapshot and derive a "difference" of the contents of a particular table that would then be distributed.

The solution can be commercial.

Thanks in advance!

MySQL slap with custom query

Posted: 05 Jul 2013 06:48 PM PDT

I want to conduct a stress test on our MySQL DB. I have the list of queries I need to execute. I have tried using Apache JMeter for this, but it is very time-consuming. Is it possible to run mysqlslap with a custom .sql file containing INSERT, UPDATE, and SELECT queries against a specified MySQL database?

Overview of how MongoDB uses its various threads

Posted: 05 Jul 2013 01:48 PM PDT

On one instance I have MongoDB using ~85 threads. In lieu of having time to investigate directly, I am curious:

  1. What are the threads used for? Do they fall into a few high-level usage categories?
  2. How can I control/influence the number of threads used? E.g. max connection params, etc.

MySQL information_schema doesn't update

Posted: 05 Jul 2013 08:48 PM PDT

I have a database, say abc, on my MySQL server. It has only one table, named test. test uses the InnoDB engine, and I've set innodb_file_per_table to true.

After I run the query delete from abc.test, I want to calculate the database size of abc. Here is the query I use:

SELECT
    table_schema "name",
    sum( IF(engine = "MyISAM", data_length + index_length - data_free,
            data_length + index_length)) "size"
FROM information_schema.TABLES
WHERE table_schema LIKE "abc";

The strange thing is that the database size doesn't decrease at all, even though the data in test is gone.

I've done this kind of test many times; this strange behavior happens sometimes.

I'm using percona mysql server 5.5.29-rel29.4.

Can anybody tell me what is wrong?
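One behavior worth ruling out first: with InnoDB, DELETE only marks pages as reusable inside the tablespace; the .ibd file never shrinks on its own, so the reported size can stay flat even though the rows are gone. A small sketch:

DELETE FROM abc.test;      -- frees space inside the file; reported size unchanged
OPTIMIZE TABLE abc.test;   -- rebuilds the tablespace, actually shrinking the .ibd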

Update:

Actually, I use another thread to check the database size periodically.

How to successfully run a batch file in an SQL Agent job?

Posted: 05 Jul 2013 07:19 PM PDT

I have a SQL Agent job which generates a specific report as a PDF file, then copies the PDF to a network directory and deletes the PDF file from the source directory.

The SQL job consists of 2 steps:

1. Generate the report
2. Copy the report to the network location

For step 2 I made a bat file which handles the copying and removal of the PDF file.

The bat-file is as follows:

set source_dir=C:\Reports\Energie\Uur
set dest_dir=\\KWS2-WEB-SERVER\Share\Reports\Uur

C:\Windows\System32\Robocopy.exe %source_dir% %dest_dir% /MOV /Z

However, when I run the job, it hangs on the second step. The status just stays at "Executing".

This is the command I entered for the step (the location of the bat file to execute):

cmd.exe /c "C:\Reports\rapport_uur_copy.bat"  

My job settings are as follows:

Step 1

Type: Operating system (CmdExec)

On Success: Go to the next step

On Failure: Quit the job reporting failure

Step 2

Type: Operating system (CmdExec)

On Success: Quit the job reporting success

On Failure: Quit the job reporting failure

Some facts:

  • I have read/write permissions on the network directory
  • I run the steps as the Administrator-account (the logged in user, default)
  • Step 1 succeeds
  • I run Windows Server 2008 R2 Standard
  • I have SQL Server 2008 R2
  • When I run the script from the cmd manually, it works (under Administrator account).

MySQL partitioned tables?

Posted: 05 Jul 2013 03:48 PM PDT

I have a database that supports a web application with several large tables. I'm wondering if partitioned tables will help speed up certain queries. Each of these tables has a column called client_id. Data for each client_id is independent of every other client_id. In other words, web queries will always contain a WHERE clause with a single client_id. I'm thinking this may be a good column on which to partition my large tables.

After reading up on partitioned tables, I'm still a little unsure as to how best to partition. For example, a typical table may have 50 million rows distributed more or less evenly across 35 client_ids. We add new client_ids periodically but in the short term the number of client_ids is relatively fixed.

I was thinking something along these lines:

CREATE TABLE foo (
    id INT NOT NULL PRIMARY KEY,
    ... more column defs here ...
    client_id int
)
PARTITION BY KEY(client_id)
PARTITIONS 35;

My question: is this an optimal strategy for partitioning these types of tables? My tests indicate a considerable speedup over indexing on client_id, but can I do better with some other form of partitioning (i.e. hash or range)?
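One detail to watch: as written, the CREATE TABLE above should actually be rejected, because MySQL requires the columns in the partitioning function to be part of every unique key, including the primary key. A sketch that satisfies that rule, and shows LIST partitioning as one alternative to KEY (the groupings are invented here):

CREATE TABLE foo (
    id INT NOT NULL,
    client_id INT NOT NULL,
    -- the partitioning column must appear in every unique key:
    PRIMARY KEY (id, client_id)
)
PARTITION BY LIST (client_id) (
    PARTITION p0 VALUES IN (1, 2, 3),
    PARTITION p1 VALUES IN (4, 5, 6)
    -- ... one grouping per set of client_ids ...
);

LIST (or RANGE) keeps each client's rows in a known partition, which lets the optimizer prune to a single partition for the typical single-client_id query; KEY/HASH spread rows by hash value, which prunes equally well for equality predicates but gives less control over placement.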

Limit memory used for mongoDb

Posted: 05 Jul 2013 09:48 AM PDT

Is there any way to limit RAM usage for MongoDB on Debian? I've been looking for a solution for about 8 hours, but have no results.

Pgpool, Postgresql and Apache tuning (1000 concurrent spatial queries)

Posted: 05 Jul 2013 08:48 AM PDT

I'm trying to configure a load-balancing system. I have a Python script, invoked through mod_wsgi on Apache, that generates a query and executes it on pgpool: request -> wsgi python -> pgpool -> postgresql. Pgpool is configured as a load balancer using 4 servers with 24 GB RAM and 350 GB SSDs. Our DB is about 150 GB, and a query takes about 2 seconds. These are the configurations:

Pgpool

  • num_init_children 500
  • max_pool 2
  • child_life_time 300 seconds

Apache (mpm_prefork)

  • StartServers 100
  • MinSpareServers 20
  • MaxSpareServers 45
  • ServerLimit 2000
  • MaxClients 100
  • MaxRequestsPerChild 1000

PostgreSQL

  • max_connections = 1000
  • shared_buffers = 6GB
  • work_mem = 4GB

It doesn't seem to be working. When I try to submit more than 150 concurrent queries, I get this error from the Python script, even though the pgpool log file doesn't contain any errors:

OperationalError: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.

Any ideas?
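One thing that stands out in these numbers, as rough worst-case arithmetic (each PostgreSQL backend can use up to work_mem per sort or hash operation, so real usage can exceed even this):

SELECT 500 * 2  AS pgpool_backend_connections,  -- num_init_children * max_pool
       1000 * 4 AS worst_case_work_mem_gb,      -- max_connections * work_mem
       24       AS ram_per_server_gb;

With work_mem = 4GB, even a few dozen concurrent queries sorting at once could demand far more memory than a server has, which is consistent with backends dying abruptly under load.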

innodb_file_format Barracuda

Posted: 05 Jul 2013 11:26 AM PDT

I have a couple of questions for those more familiar. Most of my instances have been running Antelope despite having support for Barracuda.

I was looking to play around with some compressed InnoDB tables. My understanding is that this is only available under the Barracuda format.

  1. I see innodb_file_format is dynamic, so I can just switch over without a bounce. Are there any implications of doing this that I should be aware of? All I can tell is that new tables, or subsequently altered ones, will be created with that format. Is this all correct?
  2. I was hoping not to have to go through and convert all my tables. Is it kosher to have Antelope and Barracuda tables coexisting in the same tablespace? Even if it works, are there any gotchas to look out for?

From what I've read and gathered from my tests, the answers are: Yes. Yes. I'm not sure.
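For anyone trying the same experiment, the switch plus a compressed table looks roughly like this in 5.5 (a sketch; the KEY_BLOCK_SIZE choice is arbitrary, and compression also requires innodb_file_per_table):

SET GLOBAL innodb_file_format = 'Barracuda';

CREATE TABLE t_compressed (
  id INT NOT NULL PRIMARY KEY,
  payload TEXT
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;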

Update

I've been running with some Dynamic and some Compressed tables in various instances since this post, without issue. Furthermore, I had neglected to read http://dev.mysql.com/doc/refman/5.5/en/innodb-file-format-identifying.html at the time:

After you enable a given innodb_file_format, this change applies only to newly created tables rather than existing ones. If you do create a new table, the tablespace containing the table is tagged with the "earliest" or "simplest" file format that is required for the table's features. For example, if you enable file format Barracuda, and create a new table that is not compressed and does not use ROW_FORMAT=DYNAMIC, the new tablespace that contains the table is tagged as using file format Antelope.

So tables will be created as Antelope even if you allow Barracuda, and the mixing is unavoidable unless you specify every table with ROW_FORMAT=DYNAMIC or as a compressed table.

There is no indication that you should do a complete dump and reload when introducing your first Barracuda table (such as is recommended when upgrading between major versions of MySQL).
