Sunday, July 7, 2013

MySQL #1064 error on line 2

Posted: 07 Jul 2013 09:04 PM PDT

I'm quite new to SQL and I seem to be running into an issue with the following code:

CREATE TABLE `read` (
  `order` NOT NULL UNIQUE AUTO_INCREMENT,
  `title` VARCHAR(50) NOT NULL,
  `url` VARCHAR(50) NOT NULL,
  PRIMARY KEY(`title`,`url`)
) DEFAULT CHARSET=UTF8;

It's returning the following error:

#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'NOT NULL UNIQUE AUTO_INCREMENT, title VARCHAR(50) NOT NULL, url VARCHAR(50' at line 2
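
For reference, here is a version that parses (a sketch of one plausible intent, not a confirmed fix): every column needs a data type, and MySQL requires an AUTO_INCREMENT column to be part of an index, so `order` gets INT and its own UNIQUE key:

CREATE TABLE `read` (
  `order` INT NOT NULL AUTO_INCREMENT,   -- a data type is mandatory; it was missing above
  `title` VARCHAR(50) NOT NULL,
  `url`   VARCHAR(50) NOT NULL,
  PRIMARY KEY (`title`, `url`),
  UNIQUE KEY (`order`)                   -- AUTO_INCREMENT must be indexed
) DEFAULT CHARSET=UTF8;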

Oracle 11g control files error

Posted: 07 Jul 2013 05:08 PM PDT

I use Oracle 11g R2, and when starting it up in SQL*Plus I faced this problem:

   ORA-00205: error in identifying control file, check alert log for more info          

Then I went to the alert log file and saw these errors:

ORA-00210: cannot open the specified control file
ORA-00202: control file: '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/ora_control1'
ORA-27037: unable to obtain file status
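
One way to see which control files the instance expects even though it cannot mount (a sketch, assuming the spfile/pfile is intact):

-- In SQL*Plus: NOMOUNT reads only the parameter file, not the control files
STARTUP NOMOUNT;
SHOW PARAMETER control_files;
-- then check that those paths exist and are readable by the oracle user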

Query mysql database through memcache

Posted: 07 Jul 2013 02:16 PM PDT

I have been trying to use memcache with MySQL 5.6, but I cannot figure out how to do this.

I have added an entry to innodb_memcache.containers so I can query the database through memcache.

What I would usually do with memcache is get the value for the key by doing this:

$value = $memcache->get($key);  

But in MySQL 5.6 there could potentially be many tables specified, so using PHP, how do I specify the table I am trying to query against through the memcache layer?
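
For what it's worth, a reading of the InnoDB memcached plugin docs suggests the key itself selects the mapping: prefix it with @@<container_name>. (a sketch; 'aaa' is a hypothetical name from innodb_memcache.containers):

// hypothetical container 'aaa' defined in innodb_memcache.containers
$value = $memcache->get('@@aaa.' . $key);  // read $key through the 'aaa' mapping

Is that the intended mechanism, or is there a better way to target a specific table?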

MySQL Migration from Slave to Master

Posted: 07 Jul 2013 04:26 PM PDT

I recently had a problem with our server. It required us to install a new hard drive and cPanel, and to set up the old hard drive as a slave. How can I migrate the MySQL database from the slave drive to the master drive? The slave contains the tables as MYD, MYI, and frm files. The database is roughly 20 GB.

I'm assuming SSH is the best way to do this. I am not very familiar with SSH commands and cannot find out how to do this. I've already set up the new database (on the fresh install on the master drive) with the same name, users, and passwords.
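
For concreteness, the kind of thing that might work over SSH (a sketch with assumed paths: the old drive mounted at /mnt/old, a database named mydb; MyISAM tables can be copied file-for-file while mysqld is stopped):

service mysql stop
cp -a /mnt/old/var/lib/mysql/mydb /var/lib/mysql/
chown -R mysql:mysql /var/lib/mysql/mydb
service mysql start

Is that safe here, or is a dump and restore the better route?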

mongos and mongod have the same default port?

Posted: 07 Jul 2013 11:18 AM PDT

Is it true that by default mongos and mongod use the same port, 27017?

I had to pick a different port for one of these two. For example --port 37017 for mongos.
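
For example, this is one way to separate them (a sketch; paths and hostnames are placeholders):

mongod --port 27017 --dbpath /data/db
mongos --port 37017 --configdb cfg1.example.net:27019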

Did anyone have the same confusion?

Thanks,

How to switch Quiescing to Normal Mode in Oracle?

Posted: 07 Jul 2013 10:40 AM PDT

I have two Oracle database instances, ORCL11 and ORCL10. I've configured them, and the configuration looks correct. Then I connect as the replication admin user on ORCL10:

conn repadmin/repadmin@orcl10

and run this command:

BEGIN
  DBMS_REPCAT.RESUME_MASTER_ACTIVITY(gname => 'scott_repg');
END;
/

Then I connect as the replication admin user on ORCL11 to check the group status:

conn repadmin/repadmin@orcl11
select gname, status from dba_repgroup;

It displays gname = SCOTT_REPG and status = NORMAL. But if I connect as the replication admin on ORCL10:

conn repadmin/repadmin@orcl10
select gname, status from dba_repgroup;

it displays gname = SCOTT_REPG and STATUS = QUIESCING. I thought this would not have any impact, so I tested my replication on ORCL10, for example:

conn system/ok@orcl10
insert into DEPT values (101,'Financial');
select * from DEPT;

It works on table DEPT on ORCL10 (1 row created), but when I switch to ORCL11 and run a simple query:

conn system/ok@orcl11
select * from DEPT;

it doesn't show anything. I've searched Google, but there is no tutorial on how to fix this. I think the problem is how to switch QUIESCING to NORMAL mode on ORCL10. I've tried this to fix it:

ALTER SYSTEM QUIESCE RESTRICTED;
ALTER SYSTEM UNQUIESCE;

But it still doesn't solve the problem. Can anyone help me fix it? Thanks!
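
One hedged avenue (an assumption, not a confirmed fix): pending administrative requests in DBA_REPCATLOG can hold a group in QUIESCING, and DBMS_REPCAT.DO_DEFERRED_REPCAT_ADMIN can push them through at the master definition site:

SELECT id, status, request FROM dba_repcatlog WHERE gname = 'SCOTT_REPG';

BEGIN
  DBMS_REPCAT.DO_DEFERRED_REPCAT_ADMIN(gname => 'SCOTT_REPG', all_sites => TRUE);
END;
/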

Should I deploy Mongodb sharding for 50 collections?

Posted: 07 Jul 2013 05:19 PM PDT

I have a 10-node cluster that runs about 50 concurrent jobs. Each job needs to read/write a separate collection, so I have roughly 50 collections. Each collection has about 20M records. Most of the time, jobs only need to do sequential reads/writes.

For simplicity, I could deploy a single instance of mongodb with no replication and no sharding, and have 50 separate collections. But the single node where mongodb runs becomes a hotspot, and the remaining 9 nodes can't share the read/write load.

So I would like to leverage the resources and balance the load. But I guess a 20M-record collection is not worth sharding, especially since I only need sequential reads/writes? I thought about merging the 50 collections into one big collection, but then I stumbled on the 16 MB document size limitation.
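
If sharding does turn out to be worthwhile, the shape of it in the mongo shell would be something like this (a sketch; the names are placeholders, and the shard key choice is the real question):

sh.enableSharding("jobsdb")
sh.shardCollection("jobsdb.job01", { _id: 1 })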

Any suggestions? Thanks,

How do I monitor bandwidth used by SQL Server on port 1433?

Posted: 07 Jul 2013 06:55 AM PDT

I have a single server running a .NET web application and a SQL Server database (2008 Standard). I'm planning to move the database onto a separate server, but in order to provision the network hardware I'd like to benchmark the data throughput between the web application and the database. Can port 1433 be monitored internally? If so, is there any tool native to Windows 2008 R2 that can do this, or would I need some 3rd-party application like Wireshark? My connection string references the database using server=localhost; is it still possible to tap into 1433 to see the bandwidth used on this port?

Essentially I'm trying to determine if I need a Gbit or 100 Mbit connection between the web server and database server. Any thoughts on this would be much appreciated.
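
A native first pass is the per-NIC performance counters, which measure the whole interface rather than port 1433 alone (a sketch; true per-port numbers still need a capture tool such as Wireshark):

Get-Counter -Counter "\Network Interface(*)\Bytes Total/sec" -SampleInterval 2 -MaxSamples 30

One caveat: with server=localhost, SqlClient may use shared memory rather than TCP, so forcing the protocol (server=tcp:localhost) may be needed before any traffic shows up on 1433 at all.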

/usr/sbin/mysqld was deleted and replaced by another mysqld with different inode

Posted: 07 Jul 2013 07:50 PM PDT

This happened on three of our MySQL servers running on RHEL. While checking for open and deleted files, I found that the mysqld in use had been deleted (as seen in lsof) and replaced by a similar mysqld (at /usr/sbin/mysqld) with a different inode. The size and blocks of both files (deleted and current) are the same, and the new mysqld (not in use) seems to be the same version as the deleted one. I am trying to figure out what could have caused this (there are no cron jobs running on the system). I checked system logs, database logs, and yum logs and found nothing relevant. Any input is appreciated.
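
Two checks that might narrow it down (a sketch; paths per stock RHEL):

# which package owns the binary, and does the on-disk file still verify against it?
rpm -qf /usr/sbin/mysqld
rpm -Vf /usr/sbin/mysqld

# compare the running (deleted) image with the new file via /proc
PID=$(pidof mysqld)
cmp /proc/$PID/exe /usr/sbin/mysqld && echo "contents identical"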

Thanks!

Creating a report on the MySQL JasperReports server

Posted: 07 Jul 2013 09:50 AM PDT

I have a local MySQL database containing a sample data mart. I have installed JasperReports Server on my computer. Using just this tool, I want to connect to the local MySQL database and create a PDF report (or any report) without using any tool except JasperReports Server. Can I do this? How?

Note: the report will be constructed directly on all the data residing in the MySQL database (no filtering). I want to see the report on the JasperReports report page, and if any change occurs in the MySQL database, it should be carried over to the report page of the JasperReports server.

Oracle remove and place (re-install) database on 10g

Posted: 07 Jul 2013 05:49 AM PDT

I need to put together a plan to execute some change scripts against an Oracle 10g database, which will add some values, change some values, drop a table, and create a new table.

I have a step for creating a DMP of the original database before the changes, and for running a script that will execute all the changes. But I need a back-out plan in case things go wrong. I was wondering what the best steps are to drop the database and import the old DMP file.

I know about the sqlplus command:

imp <touser>/<password>@<host_address>/<service> file=<filename>.dmp fromuser=<fromuser> touser=<touser> log=backup_import.log  

But what is the best way to drop the existing database? Is it just

drop user <schema_user> cascade;  
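
A hedged sketch of the back-out, assuming the DMP was taken as a schema-level export so that dropping and recreating the schema user suffices (placeholders as above):

DROP USER <schema_user> CASCADE;
CREATE USER <schema_user> IDENTIFIED BY <password> DEFAULT TABLESPACE <tablespace>;
GRANT CONNECT, RESOURCE TO <schema_user>;
-- then re-run the imp command shown above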

Thanks in advance.

K

SQL Agent embedded PowerShell script in CmdExec step fails with import-module sqlps

Posted: 07 Jul 2013 12:50 PM PDT

SQL Server 2008R2 PowerShell 2.1

I am trying to create a SQL Agent job that dynamically backs up all non-corrupted SSAS databases on an instance without the use of SSIS. In my SQL Agent job, when I create a CmdExec step and point to a PowerShell script file (.ps1) like this:

powershell.exe "c:\MyPSFile.ps1"   

the job executes successfully (or at least gets far enough to only encounter logic or other syntax issues).

This approach won't work for a final solution, because there is a requirement to keep the PowerShell script internal to SQL. So I have a different CmdExec step that embeds the PowerShell script like so:

powershell.exe "import-module sqlps –DisableNameChecking    $server_name = "localhost"  $backup_location = "C:\BackupsGoHere"    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | out-null  $server = New-Object Microsoft.AnalysisServices.Server  $server.connect($server_name)    # Generate an array of databases to be backed up  foreach ($database in ($server.get_Databases() | Where {$_.EstimatedSize -gt 0 -and $_.Cubes.Count -gt 0})) {      $directory_path = $backup_location + "\" + $database.Name      if (!(Test-Path -Path $directory_path)) {          New-Item $directory_path -type directory | out-null      }      [string] $timestamp = date      $timestamp = $timestamp.Replace(':','').Replace('/','-').Replace(' ','-')      $database.Backup("$directory_path\$database-$timestamp.abf")  }  $server.disconnect()"  

However, when executed with the embedded script, the job errors out quickly with the following response:

The specified module 'sqlps' was not loaded because no valid module file was found in any module directory.

Why can't I reference the module from an embedded script, but doing so in a ps1 file works just fine?

MySQL error 2006 MySQL server has gone away after upgrade to 5.6.10 using homebrew on Mac OS X 10.8.3

Posted: 07 Jul 2013 04:49 AM PDT

I upgraded my installation of MySQL on my Mac OS X 10.8.3 using homebrew

brew install mysql

Everything seemed to go smoothly, but...not really, as it turns out.

I cannot add a new user (neither through the command line nor through phpMyAdmin). I always get the error

Error in processing request: #2006 MySQL server has gone away

I can create new databases without a problem, and I can install, for example Joomla 3.1.1 with no problems.

However, when I try to install a new extension into my Joomla installation, I get errors. For example, I get the following error when I try to install a component from NoNumber:

1062 Duplicate entry '1' for key 'PRIMARY' SQL=INSERT INTO `u5z87_extensions` (`name`,`type`,`element`,`folder`,`client_id`,`enabled`,`access`) VALUES ('plg_system_nnframework','plugin','nnframework','system','0','1','1')  

Same thing for many other extensions.

I tried to start mysql with

mysql.server start --log-warnings=2

and it started with no problem

Starting MySQL   SUCCESS!  

I looked up issues where error #2006 was reported and how to solve them; they referred to modifying /etc/my.cnf, but there is no /etc/my.cnf configuration file on my system, and there wasn't one in the previous installation of MySQL 5.5.29 (also installed with homebrew), which had no problems.

I ran brew doctor and the output is below

$ brew doctor
Warning: "config" scripts exist outside your system or Homebrew directories.
`./configure` scripts often look for *-config scripts to determine if
software packages are installed, and what additional flags to use when
compiling and linking.

Having additional scripts in your path can confuse software installed via
Homebrew if the config script overrides a system or Homebrew provided
script of the same name. We found the following "config" scripts:

    /usr/local/opt/php54/bin/php-config

The output from brew --config is also below:

$ brew --config
HOMEBREW_VERSION: 0.9.4
ORIGIN: https://github.com/mxcl/homebrew
HEAD: 86ce842c700a47abdc354e80cc3b0b3938ab2132
HOMEBREW_PREFIX: /usr/local
HOMEBREW_CELLAR: /usr/local/Cellar
CPU: 8-core 64-bit nehalem
OS X: 10.8.3-x86_64
Xcode: 4.6.2
CLT: 4.6.0.0.1.1365549073
LLVM-GCC: build 2336
Clang: 4.2 build 425
X11: 2.7.4 => /opt/X11
System Ruby: 1.8.7-358
Perl: /usr/bin/perl
Python: /usr/bin/python
Ruby: /Users/alisamii/.rvm/rubies/ruby-1.9.3-p374/bin/ruby

Exploring a little further, I tried to revert to my previous install of MySQL using

brew switch mysql 5.5.29

This resulted in the following output:

$ brew switch mysql 5.5.29
Cleaning /usr/local/Cellar/mysql/5.5.29
Cleaning /usr/local/Cellar/mysql/5.6.10
135 links created for /usr/local/Cellar/mysql/5.5.29

I then accessed mysql from the command line and ran status:

mysql> status
--------------
mysql  Ver 14.14 Distrib 5.5.29, for osx10.8 (i386) using readline 5.1

Connection id:          13
Current database:
Current user:           alisamii@localhost
SSL:                    Not in use
Current pager:          stdout
Using outfile:          ''
Using delimiter:        ;
Server version:         5.6.10 Source distribution
Protocol version:       10
Connection:             Localhost via UNIX socket
Server characterset:    utf8
Db     characterset:    utf8
Client characterset:    utf8
Conn.  characterset:    utf8
UNIX socket:            /tmp/mysql.sock
Uptime:                 1 hour 56 min 24 sec

As you can see above, it reports mysql Ver 14.14 Distrib 5.5.29, for osx10.8 (i386) using readline 5.1

But, it also says Server version: 5.6.10 Source distribution

So something isn't syncing up.

Furthermore, when I show databases, I get the following output:

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
+--------------------+
1 row in set (0.00 sec)

But I know I have more than just an information_schema database. In phpMyAdmin, I have 14 databases displayed.

So, somehow, it seems to me that mysql is not connecting to the right server, as phpMyAdmin reports the same server information that mysql's status command reports: Software version: 5.6.10 - Source distribution.

At this point, I am completely lost and could really use some help.

When SQL Server uses Multi Page Allocations

Posted: 07 Jul 2013 06:49 PM PDT

What are the consumers of MPA (multi page allocations) in SQL Server?

I know data pages are always 8 KB. Is there a situation where data/index pages make use of MPA? It makes sense that execution plans can use MPA, as they can exceed 8 KB.

There is a blog here that suggests use of MPA, but it refers to stored procedures (one with 500 parameters). In the attached screenshot I see an execution plan using around 11 MB; does this use MPA? Is there a way to confirm that the memory allocation for the execution plan uses multi-page allocation?

[screenshot: execution plan memory usage, around 11 MB]

My confusion is about what actually uses the MPA (multi-page allocator). The example in the link I posted shows a complex execution plan that requires contiguous allocations over 8 KB.

An answer below suggests that many things, like linked servers or extended stored procedures, can use MPA, which for some reason I find hard to accept. For example, extended stored procedures can be C++ code using HeapAlloc, which SQL Server has little control over for managing its memory (it has to happen at the Windows OS level).

Even in SQL 2012, extended stored procedures still consume memory outside the buffer pool, so they have nothing to do with multi-page allocation. The same applies to linked servers, especially with third-party providers (e.g. Oracle).
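
For what it's worth, one way to see which clerks actually hold multi-page allocations (on 2008/2008 R2; in 2012 the single/multi split disappeared and the columns were merged into pages_kb):

SELECT type, name, SUM(multi_pages_kb) AS multi_pages_kb
FROM sys.dm_os_memory_clerks
GROUP BY type, name
HAVING SUM(multi_pages_kb) > 0
ORDER BY multi_pages_kb DESC;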

Is the key_buffer_size applicable to myisam tmp tables?

Posted: 07 Jul 2013 04:50 PM PDT

I have a database about 750GB in size. It's all innodb.

Larger analytical queries often need to GROUP BY several columns or use DISTINCT, so it's common for MySQL to have to create tmp tables. The tmp tables fit into memory.

My cache hit ratio (Key_reads / Key_read_requests) is 0.

When MySQL creates these tmp tables, I'm guessing it doesn't create pseudo-indexes that would be cached via key_buffer_size, does it?
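
For context, the counters behind the question (a sketch; names per MySQL 5.x):

SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';  -- implicit tmp tables, and how many spilled to disk
SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';  -- the MyISAM key cache being asked about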

Optimize UNION query in MySQL

Posted: 07 Jul 2013 09:03 PM PDT

I have a problem with a UNION query in MySQL. We have 10 million players on our website, and we would like to select players with a multi-criteria system. For example: selecting US people, men, more than 35 years of age.

We are using "vertical partitioning": one table per criterion. For example:

user_country
  - id_user
  - id_country

We would like to do this kind of query:

SELECT id_inscri FROM userdata_langue
WHERE id_langue='43'
UNION
SELECT id_inscri FROM userdata_sexe
WHERE sexe='2'
UNION
SELECT id_inscri FROM userdata_nb_jour
WHERE nb_jour>='31'
UNION
SELECT id_inscri FROM userdata_last
WHERE last<='2013-04-07' AND last>='2013-04-03';

How do I optimize that?

----- More details

Explain output of the query:

id    select_type   table             type   possible_keys  key        key_len  ref    rows     Extra
1     PRIMARY       userdata_langue   ref    id_langue      id_langue  1        const  398846   Using index
2     UNION         userdata_sexe     ref    sexe           sexe       1        const  1667137  Using index
3     UNION         userdata_nb_jour  range  nb_jour        nb_jour    2        NULL   5830     Using where; Using index
4     UNION         userdata_last     range  last           last       3        NULL   371614   Using where; Using index
NULL  UNION RESULT  <union1,2,3,4>    ALL    NULL           NULL       NULL     NULL   NULL

SHOW CREATE TABLE

CREATE TABLE `userdata_langue` (
  `id_inscri` bigint(20) NOT NULL,
  `id_langue` tinyint(3) unsigned NOT NULL,
  PRIMARY KEY (`id_inscri`),
  KEY `id_langue` (`id_langue`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

Update one table from another table while sorting that table based on one column

Posted: 07 Jul 2013 03:50 PM PDT

This is the problem I'm trying to figure out in MySQL. We have an old table containing forms submitted by our users. Somehow, the previous decision was that each time a user came to this survey, a new form would be submitted. So in the old table we easily have several rows with the same Firstname and Lastname but different values in the other columns, plus a timestamp column, Date_Submission.

Now we are trying to move everything to a new table, but this time we keep only one row for each person, and we want to keep the latest old data for that user (email, phone number, etc.).

I could do the following:

update newtable, oldtable set newtable.email = oldtable.email where newtable.firstname = oldtable.firstname and newtable.lastname = oldtable.lastname;

Apparently this won't give me the "latest" old data for each person.

So I tried this one:

update newtable, oldtable set newtable.email = oldtable.email where newtable.firstname = oldtable.firstname and newtable.lastname = oldtable.lastname order by oldtable.Date_Submission;

But then MySQL complains with:

"ERROR 1221 (HY000): Incorrect usage of UPDATE and ORDER BY".

So I'm wondering, what's the correct way to achieve this?
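
One pattern that should express "latest row per person" without ORDER BY (a sketch, assuming (firstname, lastname) plus Date_Submission identifies a unique latest row):

UPDATE newtable n
JOIN oldtable o
  ON o.firstname = n.firstname AND o.lastname = n.lastname
JOIN (
    SELECT firstname, lastname, MAX(Date_Submission) AS latest
    FROM oldtable
    GROUP BY firstname, lastname
) m
  ON m.firstname = o.firstname AND m.lastname = o.lastname
 AND o.Date_Submission = m.latest
SET n.email = o.email;

Is this the right approach, or is there something more idiomatic?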

increasing mysql table open cache?

Posted: 07 Jul 2013 02:50 PM PDT

I often read that it is best to increase this variable slowly. Can someone explain why? My status indicates that I should increase it. What is best practice, and how slow is "slowly"?
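
For context, the numbers usually behind "my status indicates that I should increase it" (a sketch; names per MySQL 5.x, and the SET value is only an example of a modest step):

SHOW GLOBAL STATUS LIKE 'Opened_tables';       -- climbing quickly suggests the cache is too small
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
SET GLOBAL table_open_cache = 512;             -- one modest step; re-check before raising again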

Thanks!

select count(*) in MySQL 5.5 InnoDB -- rewrite advice?

Posted: 07 Jul 2013 11:50 AM PDT

I need advice on how to rewrite a SELECT COUNT(*) query for InnoDB tables on MySQL 5.5. In the new environment it's very slow.

select count(*)
from mails3
join questions using (question_id)
where mails3.from_user_id = '86696'
  and mails3.is_community_star = 1
  and mails3.rating = 3
  and questions.is_rated_community = 1;

The query execution plan looks simple enough, but it is very slow:

*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: mails3
         type: ref
possible_keys: question_id_index,emails_doublestars,favorite_emails,user_responses,owner_responses,sentmail,watchlist,read_responses,rating_date,read_and_unrated
          key: emails_doublestars
      key_len: 8
          ref: const,const,const
         rows: 2784
        Extra: Using where
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: questions
         type: eq_ref
possible_keys: PRIMARY,is_rated_community_senddate_idx
          key: PRIMARY
      key_len: 5
          ref: answerology.mails3.QUESTION_ID
         rows: 1
        Extra: Using where
2 rows in set (0.00 sec)
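
One hedged idea, assuming the columns exist as used in the query: a covering index on mails3 so the count is satisfied from the index alone:

ALTER TABLE mails3
  ADD INDEX idx_user_star_rating_q (from_user_id, is_community_star, rating, question_id);

Would that be expected to help here, or is the join itself the bottleneck?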

Why does this procedure raise a privilege error?

Posted: 07 Jul 2013 08:50 AM PDT

I am having trouble getting the following procedure to run on a remote MySQL database; the given error is privilege-based (#1227). Locally, the procedure runs fine.

QUESTIONS

  • Could somebody help me understand which specific part of this code raises this error, i.e. requires the said privilege?
  • Is there any way I could have corresponding functionality without a privileged call?

[screenshot: the procedure source and the #1227 error, from the original post]

MySQL5.6 on Mac OS X 10.6.8 problems when granting permissions to root

Posted: 07 Jul 2013 10:50 AM PDT

I'm having serious problems with a MySQL 5.6 instance on a Mac Server.

We had to upgrade MySQL to a newer version, but it turned out to be a bad idea, as we lost access to it. We had a backup of the /data directory and the my.cnf file as well, so we set an init_file to restore the previous password and its permissions. We created a text file with this content:

UPDATE mysql.user SET Password=PASSWORD('myOriginalPassword') WHERE User='root';
FLUSH PRIVILEGES;

We have double-checked the init_file permissions (we even gave it a chmod 777 to make sure), but something is not working. When we run mysqld_safe --init_file=/usr/local/mysql/bin/init_file it tries to start the server, but then it shuts itself down. The log is as follows:

130308 13:23:52 mysqld_safe Starting mysqld daemon with databases from /usr/local/mysql/data
2013-03-08 13:23:53 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2013-03-08 13:23:53 18979 [Warning] Setting lower_case_table_names=2 because file system for /usr/local/mysql/data/ is case insensitive
2013-03-08 13:23:53 18979 [Note] Plugin 'FEDERATED' is disabled.
2013-03-08 13:23:53 18979 [Note] InnoDB: The InnoDB memory heap is disabled
2013-03-08 13:23:53 18979 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2013-03-08 13:23:53 18979 [Note] InnoDB: Compressed tables use zlib 1.2.3
2013-03-08 13:23:53 18979 [Note] InnoDB: CPU does not support crc32 instructions
2013-03-08 13:23:53 18979 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2013-03-08 13:23:53 18979 [Note] InnoDB: Completed initialization of buffer pool
2013-03-08 13:23:53 18979 [Note] InnoDB: Highest supported file format is Barracuda.
2013-03-08 13:23:53 18979 [Note] InnoDB: 128 rollback segment(s) are active.
2013-03-08 13:23:53 18979 [Note] InnoDB: Waiting for purge to start
2013-03-08 13:23:53 18979 [Note] InnoDB: 1.2.10 started; log sequence number 19440939
2013-03-08 13:23:53 18979 [ERROR] /usr/local/mysql/bin/mysqld: unknown option '--skip-locking'
2013-03-08 13:23:53 18979 [ERROR] Aborting

2013-03-08 13:23:53 18979 [Note] Binlog end
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'partition'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_DELETED'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_INSERTED'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_METRICS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMPMEM'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMP'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_LOCKS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_TRX'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'InnoDB'
2013-03-08 13:23:53 18979 [Note] InnoDB: FTS optimize thread exiting.
2013-03-08 13:23:53 18979 [Note] InnoDB: Starting shutdown...
2013-03-08 13:23:54 18979 [Note] InnoDB: Shutdown completed; log sequence number 19440949
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'BLACKHOLE'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'ARCHIVE'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'MRG_MYISAM'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'MyISAM'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'MEMORY'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'CSV'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'sha256_password'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'mysql_old_password'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'mysql_native_password'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'binlog'
2013-03-08 13:23:54 18979 [Note] /usr/local/mysql/bin/mysqld: Shutdown complete

130308 13:23:54 mysqld_safe mysqld from pid file /usr/local/mysql/data/server.myServerName.com.pid ended

The problem most likely comes from the [ERROR] lines, but as we haven't used --skip-locking, it's just confusing. If we try to log into mysql using mysql -r -p and enter the password we set, it can't connect to the server, as it is not started. Any ideas where this problem may come from?
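
A hedged reading of the [ERROR] line: the restored my.cnf (or another config file mysqld picks up; mysqld --verbose --help lists the search order) may still carry the pre-5.5 spelling of the option, which 5.6 rejects outright:

# skip-locking             <- pre-5.5 spelling, removed; aborts mysqld 5.6 at startup
skip-external-locking      # current spelling of the same option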

Importing data with a lot of SELECTS during transaction

Posted: 07 Jul 2013 01:50 PM PDT

I'm using SQL Server 2008 R2. I have a products table with an auto-increment integer id column as primary key, a unique product_no column, and 6 tables (such as articles and product_assets) with foreign keys to the products table.

I need to import about 1 million products into the products table and the other tables (about 16 million rows in total across all tables).

This import should run as quickly as possible (<= 6h), and I need to perform additional SELECTs per data row during the import to validate the data. The data source is a CSV file, so I'm opening a transaction for each line that represents an article (an entry for the articles table), because the import could fail at any line.

During import it should also be possible to read data from the mentioned tables with a different connection.

Currently this import runs for over 24h (the SELECTs that validate the data, which I have to run for every line of the CSV, take their toll).

Any advice on how to improve import performance in such a case?

I can split the CSV data in a preprocessing step into parts (e.g. by the last character of the product number) so that the parts don't interfere at row level, and run these parts in different threads simultaneously if that could improve the import speed. I mention this because the import machine and the SQL Server machine both have low CPU utilization during the import, so I guess a lot of time is wasted on network I/O (the machines are connected with gigabit ethernet).
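
One direction worth testing (a sketch; the file path, staging table, and flag column are placeholders): bulk-load the CSV into a staging table first, then validate set-based instead of once per line:

BULK INSERT staging_products
FROM 'C:\import\products.csv'
WITH (FIELDTERMINATOR = ';', ROWTERMINATOR = '\n', BATCHSIZE = 50000);

-- one validation pass instead of one SELECT per CSV line,
-- e.g. flag rows whose product_no already exists
UPDATE s
SET s.is_duplicate = 1
FROM staging_products AS s
JOIN products AS p ON p.product_no = s.product_no;

Would that kind of staging approach be expected to beat row-by-row SELECT validation here?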

PostgreSQL replication for archiving

Posted: 07 Jul 2013 06:49 AM PDT

I've looked around a bit and haven't found a very clear strategy or method for what I'm trying to do, which is surprising because I would think it would be a fairly common practice.

I currently use Bucardo to replicate in a master/slave setup. This works fine, except that I want the slave to retain all records ever written to it. The master is a smaller system which will need to have its tables truncated periodically, and I don't want these truncates to be carried over to the slave. I would think this would be a fairly common practice, but I have been unable to find a solution that will work.

Could anyone point me in the right direction? The right direction doesn't necessarily need to involve using Bucardo either.

Thanks

Replicate RDS-MySQL to a non-amazon host?

Posted: 07 Jul 2013 02:37 PM PDT

I have a fairly large MySQL (5.1) database running in Amazon as an RDS instance. I'm working on migrating it out of Amazon. To do that smoothly I'd like to set up the new hardware as a read-only replica of the RDS instance.

I know RDS supports replication within RDS. Is there any way to configure it to replicate to a host which is not in RDS?

How to prevent high memory, CPU and time consumption by MySQL restore?

Posted: 07 Jul 2013 07:50 AM PDT

I have a local test machine with 2 GB RAM and a dual core processor.

I imported a dump onto it using

mysql -uuser -psecret < script.sql  

script.sql is a 700 MB file which contains a single database with more than 100 tables.

Restore took approximately 25 minutes.

Question: why did it take so long? I have already optimized my server settings (the dump also contains extended inserts and disables keys). Is 700 MB too large for this kind of machine? mysqld was using 100% of one CPU core and 22% of the memory when I checked with htop.

Now what I actually want is to convert all the tables of that database to InnoDB, so I executed a simple shell command:

mysql -uuser -psecret -Bse "select Concat('Alter Table ' , TABLE_SCHEMA ,'.',TABLE_NAME , ' Engine= \'InnoDB\' ;') from information_schema.tables where ENGINE='myisam' and TABLE_SCHEMA='db_name';" > Innodb.sql

so Innodb.sql contains the script for converting tables to InnoDB.

Now when I run Innodb.sql, one of my tables takes more than 20 minutes to convert. It contains only 1,378,397 records.

In total it takes more than 30 minutes. In the meantime, memory usage by the mysqld daemon was 73%. The CPU usage was OK at that time.

How can I minimize the time consumption? Should I change the MySQL server settings, or something else?

If anybody wants my my.cnf setting I will share that.
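
For reference, the kind of settings that usually dominate a bulk InnoDB conversion on a 2 GB machine (assumptions to tune against, not measured recommendations):

[mysqld]
innodb_buffer_pool_size        = 1G
innodb_log_file_size           = 256M
innodb_flush_log_at_trx_commit = 2    # relax durability for the conversion window only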

How do I find my current SCN?

Posted: 07 Jul 2013 11:44 AM PDT

Given any version of Oracle:

  • How do I find my current SCN?
  • What is the maximum possible SCN?
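
For the first part, two queries that should work (a sketch; v$database.current_scn appeared around 10g, while the DBMS_FLASHBACK call goes back further):

SELECT current_scn FROM v$database;

SELECT dbms_flashback.get_system_change_number FROM dual;

For the second part, the ceiling commonly cited is 2^48 (roughly 2.8 * 10^14), though that is version lore rather than something guaranteed across all releases.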

How can I profile SQL Azure?

Posted: 07 Jul 2013 02:30 PM PDT

I am writing a web site that uses SQL Azure heavily. However, it is painfully slow.

Is there an easy way to profile the live SQL Azure instance?
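
SQL Trace/Profiler is not available against SQL Azure, but the plan-cache DMVs are exposed; a hedged substitute for finding the slowest statements:

SELECT TOP 10
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
          ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_time DESC;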
