Saturday, May 4, 2013

[how to] InnoDB errors after having deleted "ibdata1"


InnoDB errors after having deleted "ibdata1"

Posted: 04 May 2013 12:05 PM PDT

By accident, I have deleted the ibdata1 file in the MySQL directory because MySQL wasn't functioning correctly.

After reading many forums, I found a post that recommended deleting the three files ib_logfile0, ib_logfile1 and ibdata1 from the MySQL directory to let InnoDB load again and MySQL come up. And this is what I did.

But now, when InnoDB loads as an engine, the tables disappear.

After I do a graceful restart, InnoDB disappears totally from the server ("have_innodb | No"), but I can see all the tables again for the databases that use InnoDB.

What I'm looking for is a way to load all the tables correctly, have ibdata register them as InnoDB again, and then let the InnoDB engine function correctly.
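A quick way to confirm whether the InnoDB engine actually loaded after a restart (the "have_innodb | No" symptom above) is to ask the server directly:

```sql
-- List all storage engines and whether each is available
SHOW ENGINES;

-- On older servers (5.0/5.1) the dedicated variable still works:
SHOW VARIABLES LIKE 'have_innodb';

-- If InnoDB failed to start, the reason is normally in the error log:
SHOW VARIABLES LIKE 'log_error';
```

Bear in mind that ibdata1 holds the InnoDB data dictionary, so once it has been deleted the .frm files alone are not enough; the InnoDB tables generally have to be restored from a backup or re-imported from a mysqldump.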

Maximum memory setting in SQL Server

Posted: 04 May 2013 04:05 PM PDT

I'm running SQL Server 2008 and a web-based application on a single dedicated server, with only 2 GB of memory available.

As is noted elsewhere, SQL Server regularly takes up to 98% of physical memory, which appears to slow down the web application running on the server.

In Server Properties in SSMS, under Memory, Maximum Server Memory (in MB) is set to 2147483647.

My question is, what would be the recommended number to put in the maximum server memory box, given the amount of memory I have available, and that the same server is also running the web application?

Additionally, is it safe to make a change to this number, while SQL Server is running?

Thank you for your advice.
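As a rough sketch of the usual advice for a 2 GB box shared with a web application: cap SQL Server somewhere around 1024-1200 MB and adjust based on observed free memory (the 1024 below is an illustrative starting point, not a rule). 'max server memory' is a dynamic option, so the change takes effect immediately, without restarting SQL Server:

```sql
-- Make the advanced option visible
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap SQL Server's memory, leaving the rest for the OS and the web app
EXEC sp_configure 'max server memory (MB)', 1024;
RECONFIGURE;
```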

Modifying replication distributor & publisher

Posted: 04 May 2013 07:39 AM PDT

I have a SQL Server 2008 transactional replication setup as follows:

  • 192.168.100.1 <-- Distributor
  • 192.168.100.2 <-- Publisher
  • 192.168.100.3 <-- Subscriber

Our network team wants to change the IP address of the distributor, for example to 192.168.200.1.

Also, we want to move the publication database to a new server, so the replication source (the publisher) will also be moved and will have a new IP address as well.

The subscriber will remain as is.

I cannot afford a long downtime, as our systems run 24x7, and many huge tables are already in replication; generating a new snapshot would take a long time to replicate.

What is the best way to seamlessly reconfigure the replication and keep my data in sync without rebuilding it from scratch?

Thanks

SQL error "#1118 - row size too large" on the first row of the table only

Posted: 04 May 2013 07:26 AM PDT

I ran into a problem using a MySQL database. I have some columns of type TEXT, but when I try to enter data into the first row I get the error "#1118 - Row size too large. The maximum row size for the used table type, not counting BLOBs, is 8126. You have to change some columns to TEXT or BLOBs" for anything with more than 15 characters. This only occurs for the first row; all the other rows in the column and table work just fine. Help!

  Field  Type     Collation        Null  Default  Extra
  pid    int(11)                   No    None     AUTO_INCREMENT
  (columns 3, 6-10, 21-42 and 44-59 are all: text, utf8_general_ci, Null: Yes, Default: NULL)

  Indexes:
  Keyname  Type   Unique  Packed  Field  Cardinality  Collation  Null
  PRIMARY  BTREE  Yes     No      pid    61           A
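For context on the error: with InnoDB's older COMPACT/REDUNDANT row formats, each TEXT column stores a prefix of up to 768 bytes in the row itself, so forty-odd TEXT columns can exceed the ~8126-byte row limit as soon as they hold real data. If the server supports the Barracuda file format (MySQL 5.1 with the InnoDB plugin, or 5.5+), one common fix is the DYNAMIC row format, which keeps only a 20-byte pointer in-row for off-page columns. The table name below is a placeholder:

```sql
-- Requires innodb_file_per_table=1 and innodb_file_format=Barracuda
ALTER TABLE mytable ROW_FORMAT=DYNAMIC;
```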

SQL Server 2000 dateformat permanent change

Posted: 04 May 2013 09:58 AM PDT

I am using SQL Server 2000 for an old system web site. I've just hit an issue that is going to cause me endless hassles down the road.

When I installed the server on my local system I took all the defaults, but now any datetime value comes back in mm/dd/yyyy format.

Now this is not an issue for inserting data, but when getting the data out and writing it to a page it's in US format.

The site is going to be based in the UK on a UK server, so I need my local development system to mirror that, and I also want to avoid using CONVERT on every datetime field.

I've tried changing the database collation a few times without success.

So my question is this:

Is it possible to change the format of the datetime so that inserting 23/01/1972 is stored as such and not 1/23/1972?

SERVER Properties:

  • Default language: English (United States)
  • Collation: SQL_Latin1_General_Cp1_CI_AS

Database Properties

  • Collation: SQL_Latin1_General_CP1_CI_AS

Specifically, as an example:

CREATE TABLE [dbo].[lbg_app_section1](
    [id_app_section1] [int] IDENTITY(1,1) NOT NULL,
    [refid_applications] [int] NOT NULL,
    [dateofbirth] [datetime] NULL,
    [gender] [nvarchar](50) NULL,
    [ninumber] [nvarchar](15) NULL,
    [address_hsno] [nvarchar](60) NULL,
    [address_twncty] [nvarchar](60) NULL,
    [address_cnty] [nvarchar](60) NULL,
    [address_postcode] [nvarchar](15) NULL,
    [address_country] [nvarchar](30) NULL,
    [dt_section1_modified] [datetime] NULL
) ON [PRIMARY]
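One thing worth noting: datetime values are stored in an internal binary format, never as mm/dd/yyyy text; the format only appears when parsing input and displaying output. So rather than changing the storage, the session (or login) date settings can be changed. A sketch ('your_login' is a placeholder):

```sql
-- Per session: parse string literals as day/month/year
SET DATEFORMAT dmy;
-- Or set the whole session language, which also implies dmy
SET LANGUAGE British;

INSERT INTO [dbo].[lbg_app_section1] (refid_applications, dateofbirth)
VALUES (1, '23/01/1972');  -- parsed as 23 January 1972 under dmy

-- Per login, so new connections default to British date handling:
EXEC sp_defaultlanguage 'your_login', 'British';
```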

Is it safe to call pg_ctl reload while doing heavy writes?

Posted: 04 May 2013 05:22 AM PDT

I am running Postgres 8.4, and I would like to change the authentication method for a user. It's my understanding that I have to do this in pg_hba.conf, and then reload the configuration files with pg_ctl reload (or, in this case, service postgresql reload) for the settings to take effect.

However, I am also importing a ton of data from flat files using multiple processes.

Is it safe to do this?
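For what it's worth, a pg_hba.conf change applied with a reload only affects the authentication of new connections; sessions that are already established, including in-flight imports, are not touched, so the reload itself is generally safe during heavy writes. A minimal sketch of such a change (user name, address and method are placeholders):

```
# pg_hba.conf
# TYPE  DATABASE  USER      ADDRESS        METHOD
host    all       someuser  127.0.0.1/32   md5
```

followed by `service postgresql reload` (or `pg_ctl reload`) as described above.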

Calling a MySQL stored procedure with parameters in SSRS reporting through ODBC

Posted: 04 May 2013 09:06 AM PDT

In SSRS, when I pass hard-coded values to my stored procedure, it works fine. But it doesn't work when I try to pass the parameters.

Can you please tell me the right syntax for calling MySQL stored procedures in SSRS, through an ODBC datasource?
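I can't answer for every driver, but the usual pattern with an ODBC data source in SSRS is the ODBC call syntax with positional `?` placeholders, mapping the report parameters in order on the dataset's Parameters tab (the procedure name below is illustrative):

```sql
-- Dataset query (command type: Text), one ? per procedure argument
CALL my_stored_procedure(?, ?)
```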

How to create table set from several XSD in SQL Server 2012 or with VS 2012

Posted: 04 May 2013 03:44 PM PDT

Is there any way, in T-SQL or in C#, to create an XML schema collection from XSD files?

Problem is I cannot use:

USE Database_test
GO
CREATE XML SCHEMA COLLECTION Collection_from_XSD AS
N'
.................
text of XSD
...............
'
GO

Because I have a lot of XSDs (about 30) which have dependencies among themselves. So I'd like to do something like:

USE Database_test
GO

CREATE XML SCHEMA COLLECTION Collection_from_XSD AS
N'C:\work\xsd\exange_format\VymennyFormatTypy.xsd'

But this throws an error

Msg 2378, Level 16, State 1, Line 2 It was expected XML schema document.

So I'd like to ask for help creating this schema collection from XSD files, either in T-SQL or in C# in a Visual Studio 2012 project. I have access to SQL Server 2012 with full permission to manage the database, so I'd prefer T-SQL, but if there is no other choice I will use C#. The database will be used as a local database for the exchange format of addresses and real estate, so the first step is to populate the database from XML, and the second step is to update the database every week from other XML files.
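One T-SQL sketch, assuming the SQL Server service account can read the XSD path: bulk-load the file contents into an xml variable and pass that to CREATE XML SCHEMA COLLECTION, then append the dependent schemas in dependency order with ALTER XML SCHEMA COLLECTION:

```sql
USE Database_test;
GO
DECLARE @xsd xml;
SELECT @xsd = CONVERT(xml, BulkColumn)
FROM OPENROWSET(BULK 'C:\work\xsd\exange_format\VymennyFormatTypy.xsd', SINGLE_BLOB) AS x;

CREATE XML SCHEMA COLLECTION Collection_from_XSD AS @xsd;

-- Each further XSD, loaded the same way into a variable, can then be appended:
-- ALTER XML SCHEMA COLLECTION Collection_from_XSD ADD @next_xsd;
GO
```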

For example, I attached a picture of the XSD and the XSD summary from Visual Studio.

Thank you very much

full XSD scheme

XSD example

Duplicating a Database with RMAN

Posted: 04 May 2013 12:16 PM PDT

I would like to duplicate an Oracle database to the same server with RMAN. I've configured the flash recovery area for the database. My database uses an SPFILE, as shown below.

SQL> SELECT DECODE(value, NULL, 'PFILE', 'SPFILE') "Init File Type"
  2  FROM sys.v_$parameter WHERE name = 'spfile';

Init File Type
--------------
SPFILE

My questions are:

  1. If my source database uses an SPFILE, do I have to create a PFILE from the SPFILE? If so, how can I do that?

  2. What do they mean by this?

    You only need to create directories that are referenced in the PFILE or SPFILE.

    Which directories need to be created?

    Production database: /u01/app/oracle/oradata/DB11G/. My SPFILE (spfileDB11G.ora):

    DB11G.__java_pool_size=4194304
    DB11G.__large_pool_size=4194304
    DB11G.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    DB11G.__pga_aggregate_target=155189248
    DB11G.__sga_target=264241152
    DB11G.__shared_io_pool_size=0
    DB11G.__shared_pool_size=171966464
    DB11G.__streams_pool_size=4194304
    *.audit_file_dest='/u01/app/oracle/admin/DB11G/adump'
    *.audit_trail='db'
    *.compatible='11.2.0.0.0'
    *.control_files='/u01/app/oracle/oradata/DB11G/control01.ctl','/u01/app/oracle/oradata/DB11G/control02.ctl'#Restore Controlfile
    *.db_block_size=8192
    *.db_domain='localdomain'
    *.db_flashback_retention_target=720
    *.db_name='DB11G'
    *.db_recovery_file_dest='/u01/app/oracle/fra'
    *.db_recovery_file_dest_size=3221225472
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=DB11GXDB)'
    *.log_archive_dest_1='location=/u01/app/oracle/archive_logs'
    *.log_archive_format='db11g%r_%t_%s.arc'
    *.memory_target=417333248
    *.open_cursors=3^AC^@^@C"^@^@^D^@^@^@^@^@^@^@^@^@^A^DU^F^@^@00
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.undo_tablespace='UNDOTBS1'
  3. If the backup files (autobackup and backup set) are in a different path on the source host (not the default FRA path), how can I use the DUPLICATE command? By the way, only the flashback logs are in the default location (/u01/app/oracle/fra/flashback); the backups are under /u01/app/oracle/oradata/DB11G/autobackup and /u01/app/oracle/oradata/DB11G/backupset.
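On question 1: yes, and it is a single statement. A text PFILE can be generated from the running SPFILE at any time (the explicit target path below is illustrative); an edited copy is then used to start the auxiliary instance for DUPLICATE:

```sql
-- Writes init<SID>.ora into $ORACLE_HOME/dbs by default
CREATE PFILE FROM SPFILE;

-- Or to an explicit location:
CREATE PFILE='/u01/app/oracle/dup/initDUP.ora' FROM SPFILE;
```

As for question 2, the directories to pre-create are the ones the auxiliary instance's parameter file references: audit_file_dest, the control_files paths, db_recovery_file_dest, log_archive_dest_1 and the datafile directories, adjusted for the duplicate.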

What is the optimal response time for file growth

Posted: 04 May 2013 07:13 AM PDT

We normally set our file growth as a percentage (10%, 20%) or a fixed size (10 MB, 20 MB, etc.). If the disk response time is too high for the requested growth, users are affected and have to wait until the file growth completes. We can see the response time, in microseconds, in the default trace file, in the column named 'Duration'.

My question is: what is an acceptable response time for a growth event, in milliseconds?

How to tune mysql to be inmemory-like

Posted: 04 May 2013 06:28 PM PDT

I've given MySQL 5 GB of memory (I use InnoDB). But when I insert lots of data (the dump file is 1 GB), hard-drive I/O is still a bottleneck (the CPU is not busy but the hard drive is). Is it possible to keep MySQL from making the hard drive a bottleneck?

Thanx.
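Not an in-memory mode as such, but the standard levers for speeding up a large InnoDB import are to relax checking and flushing for the duration of the load. A sketch, run from the mysql client (the dump file name is a placeholder); everything disabled here should be re-enabled afterwards:

```sql
SET unique_checks = 0;
SET foreign_key_checks = 0;
SET autocommit = 0;

SOURCE dump.sql;   -- the 1 GB dump
COMMIT;

SET unique_checks = 1;
SET foreign_key_checks = 1;
SET autocommit = 1;
```

Setting innodb_flush_log_at_trx_commit=2 (or 0) in my.cnf for the duration of the import also removes a large amount of synchronous disk flushing, at the cost of durability if the server crashes mid-load.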

PostgreSQL backup error

Posted: 04 May 2013 05:38 PM PDT

I am trying to back up my company's PostgreSQL database using pgAdmin III: I selected our company DB in the tree, right-clicked it and selected 'Backup', chose the destination folder for the backup file, then clicked OK. I then got this error message:

C:/Program Files/PostgreSQL/9.0/bin\pg_dump.exe --host localhost --port 5432 --username "pgres1" --format tar --blobs --encoding UTF8 --verbose --file "E:\DB_Backup\DB_Backup_TEST.backup" \"CompanyDB_TEST\"
pg_dump: [archiver (db)] connection to database ""CompanyDB_TEST"" failed: FATAL: database ""CompanyDB_TEST"" does not exist
pg_dump: *** aborted because of error

Process returned exit code 1.

So can someone please help me by telling me what I am doing wrong here?

I am 100% sure that CompanyDB_TEST does exist.

I am running the PostgreSQL under Windows Server 2003.
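The suspicious part of that command line is the trailing `\"CompanyDB_TEST\"`: the backslash-escaped quotes end up as part of the database name, which is why the server reports `"CompanyDB_TEST"` (with quotes) as not existing. Running pg_dump by hand without the extra quoting should confirm this (same options as the generated command):

```
"C:/Program Files/PostgreSQL/9.0/bin/pg_dump.exe" --host localhost --port 5432 --username pgres1 --format tar --blobs --encoding UTF8 --verbose --file "E:\DB_Backup\DB_Backup_TEST.backup" CompanyDB_TEST
```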

How to check which tables in DB (MYSQL) updated in last 1 hour / last 1 minute?

Posted: 04 May 2013 01:05 PM PDT

I have to create an XLS data feed for a website, and I would like to know which tables are affected when I make a manual entry from the CMS.

If I have a freshly installed database and I make the first entry in it using the CMS, I would like to know which tables were updated/appended in the last minute in that DB.

It is somewhat similar to this question http://stackoverflow.com/questions/307438/how-can-i-tell-when-a-mysql-table-was-last-updated

But in my case I don't know which tables to check. I could check each and every table in the DB using the solution posted in that question, but I have a gut feeling that there is a better solution.
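Without instrumenting every table, information_schema offers a starting point. A sketch, with the caveat that in MySQL 5.x UPDATE_TIME is maintained reliably for MyISAM tables, while for InnoDB tables it may be NULL or stale:

```sql
SELECT TABLE_SCHEMA, TABLE_NAME, UPDATE_TIME
FROM information_schema.TABLES
WHERE UPDATE_TIME >= NOW() - INTERVAL 1 HOUR
ORDER BY UPDATE_TIME DESC;
```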

Mysql DB server hits 400% CPU

Posted: 04 May 2013 08:05 AM PDT

I have been facing problems with my database server for about a month. Below are the observations I see when it hits its peak:

  • load average: 40 to 50
  • CPU: 400%
  • idle: 45%
  • wait: 11%
  • vmstat procs: r = 14, b = 5

It then drains down within 5 minutes. When I check SHOW PROCESSLIST I see DML and SQL queries halted for some minutes and processing very slowly, even though every query is appropriately indexed and, most of the time, any query serving the application returns in less than 1 second.

  • Mysql Version : 5.0.77
  • OS : CentOS 5.4
  • Mem: 16GB RAM (80% allocated to INNODB_BUFFER_POOL_SIZE)
  • Database Size: 450 GB
  • 16 Processor & 4 cores
  • Not using innodb_file_per_table.
  • TPS ranges 50 to 200.
  • Master to a slave of the same configuration; Seconds_Behind_Master is 0.

The URL below shows SHOW INNODB STATUS \G and SHOW OPEN TABLES at the time of a spike; the load reduces within 5 minutes. In rare scenarios, about once in two months, the process takes 5 to 8 hours to drain back to normal. Throughout, I watch the load, the processor utilization, how it gradually splits its tasks, the InnoDB status and the I/O status; I need not do anything to bring it down. It serves the applications promptly, and after some time it drains down to normal. Can you find anything suspicious in the URL, such as locks or OS waits, or suggest what could have caused such spikes and what to triage first?

http://tinyurl.com/bm5v4pl -> "show innodb status \G and show open tables at DB spikes."

Also there are some concerns that I would like to share with you.

  1. Recently I have seen a table that gets only about 60 inserts per second. It predominantly locks for a while, waiting for the auto-increment lock to be released, so subsequent inserts pile up in the process list. After a while the table shows about 30 threads in IN_USE and later, I don't know how, they are freed and the queue clears. (During this period the load goes above 15 for about 5 minutes.)

  2. Suppose the application functionality should be shaped to best suit how the DB server reacts. There are 3 to 5 functionalities, each an independent entity schema-wise, yet whenever I see the locks, all the other schemas are affected too.

  3. What puzzles me most is the last one. The slave keeps in sync with the master with a delay of 0 seconds at all times, even though the slave applies everything through a single SQL thread fed in FIFO order from the relay log of the master's binary logs. If this single-threaded slave can keep its load low and stay up to date, do the hits on the master really need to be concurrent across functionalities, given that this concurrency seems to cause the OS-level I/O locks? Could this be organized in the application itself to keep the concurrency density thinner?

Slow SSRS Report in production

Posted: 04 May 2013 03:05 PM PDT

I have an SSRS report which gets its data by firing a series of stored procedures.

Now the report is timing out badly when run in production, yet when I pull down the prod database and restore it to development, the report runs fine.

I was thinking of setting up a SQL Server Profiler trace in production; hopefully that will tell me something, e.g. high disk I/O at the time the report is run.

What else should I be doing? Something with perfmon?

Error 1044 Access denied to user

Posted: 04 May 2013 04:05 PM PDT

This is driving me crazy.

When I try to create a few tables from my Workbench model I get this error 1044.

I've been trying to find a solution but nothing works for me.

Curiously, when I run SELECT USER(), CURRENT_USER(); I get two versions of the same user: one is techboy@(an IP address) and the other is techboy@%.

Access is denied to both at one point or another.

The MySQL server is remotely hosted, with the user permissions correctly set.
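One diagnostic worth running: techboy@% and techboy@(the IP address) are two distinct accounts in MySQL, each with its own privileges, and the server picks whichever matches most specifically. Comparing their grants often explains an intermittent 1044 (the IP below is a placeholder):

```sql
SHOW GRANTS FOR 'techboy'@'%';
SHOW GRANTS FOR 'techboy'@'203.0.113.5';  -- substitute the real IP from SELECT USER()
```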

Multiple database servers for performance vs failover

Posted: 04 May 2013 05:05 PM PDT

If I have two database servers, and I am looking for maximum performance vs high-availability, what configuration would be best?

Assuming the architecture is two load-balanced web/app servers in front of two DB servers, will I be able to have both DB servers active with synced data, with web1 talking to db1 and web2 to db2? Is this active/active?

I'm also aware that the two db servers can have their own schema to manually 'split' the db needs of the app. In this case daily backups would be fine. We don't have 'mission critical data.'

If it matters, we have traffic around 3,000-7,000 simultaneous users.

Access 2003 (SQL Server 2000) migration to SQL Azure

Posted: 04 May 2013 07:05 AM PDT

As my old Windows 2003 RAID controller started throwing errors, I am seriously thinking about switching the current Access 2003 (ADP/ADO) clients to a Windows SQL Azure solution, in place of the current SQL Server 2000.

Does anybody know if this is a feasible/painless operation?

Consolidating indexes

Posted: 04 May 2013 09:05 AM PDT

I have one big table that is used to generate business intelligence cube. Currently it has around 40M rows and 55 columns. A lot of the cube dimensions are generated by running 'select distinct' on a column of the big table.

Currently I have one index for each 'select distinct', 16 indexes in total. Would it be better to merge some of those indexes? My thinking is that the cube needs to process all rows anyway, and the indexes speed up the process because each is much smaller than the entire table; if I put two or three columns in one index, the index size will not grow much. Maybe the good candidates are columns with low selectivity.

Am I going in the right direction? Can the query make use of the second column (or third, fourth, ...) of the index?

I'm using SQL Server 2005 Standard Edition.
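On the last question: yes, but only behind the leading columns. An index on (col1, col2) supports seeks when col1 is constrained; a `SELECT DISTINCT col2` alone cannot seek, though the optimizer may still scan the narrower index instead of the 55-column table. A sketch with placeholder names:

```sql
CREATE INDEX IX_BigTable_c1_c2 ON dbo.BigTable (col1, col2);

-- Seek: leading column constrained, col2 available from the same index
SELECT DISTINCT col2 FROM dbo.BigTable WHERE col1 = 42;

-- No seek on col2 alone, but a scan of this index is still far
-- cheaper than scanning the whole table
SELECT DISTINCT col2 FROM dbo.BigTable;
```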

SQL Server distribution database log file grows uncontrollably after full database backup

Posted: 04 May 2013 11:05 AM PDT

We have a merge replication environment pushing to 8 subscribers, and it is working fine. Our distribution database is set up in Simple recovery mode. We have a maintenance plan that backs up all databases every day at 00:30. Once this process completes, the distribution log file grows over the next 30 minutes and absorbs all the remaining space on the hard drive (about 90 GB).

What then happens is that the distribution database shows as "Recovery Pending" and we cannot do anything until we restart the machine. After that I can shrink the log file down to 2 MB.

I have no idea why this is happening. The log file is running at about 10MB during the day. The database size is sitting at 15GB.

SSRS appears to be ignoring Permissions set using Report Manager

Posted: 04 May 2013 10:57 AM PDT

I have setup SSRS on SQL Server 2008 in native mode.

As an administrator I can log in to Report Manager, upload reports and run them, and also use the Web Service URL to generate reports.

I have also created a local user on the machine. I went into Report Manager as admin and, at the top level, set permissions that should assign the local user to all roles.

When I log in to the machine as that user and navigate to Report Manager, I just get the heading of the page but do not see any of the folders that are configured.

I've checked and the folders are set to inherit parent permissions and they are showing the newly created local user in there too.

It seems odd that I have set the permissions, yet SSRS is still not showing what I should be able to see. Is there another step I need to take other than configuring the permissions in Report Manager?

When logged in as the newly created local user:

Report Manager: shows the heading for the page, but no folders/items
Web Service URL (http://machine/ReportServer): rsAccessDeniedError

Restore SQL Server database using Windows Powershell 3.0

Posted: 04 May 2013 08:05 PM PDT

I'm trying to restore a SQL Server database with a PowerShell script, but I'm having problems.

Here is the error I'm getting:

Exception calling "SqlRestore" with "1" argument(s): "Restore failed for Server 'WUSFK250042-OLU\SQLSERVER2008R2'. "
At line:48 char:1
+ $smoRestore.SqlRestore($server)

Here is my code:

#clear screen
cls

#load assemblies
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
#Need SmoExtended for backup
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") | Out-Null
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoEnum") | Out-Null

$backupFile = "C:\SafewayRT\SafewayRTFUll.bak"

#we will query the file list from the backup header later
$server = New-Object ("Microsoft.SqlServer.Management.Smo.Server") "WUSFK250042-OLU\SQLSERVER2008R2"
$backupDevice = New-Object("Microsoft.SqlServer.Management.Smo.BackupDeviceItem") ($backupFile, "File")
$smoRestore = New-Object("Microsoft.SqlServer.Management.Smo.Restore")

#restore settings
$smoRestore.NoRecovery = $false
$smoRestore.ReplaceDatabase = $true
$smoRestore.Action = "Database"
$smoRestore.PercentCompleteNotification = 10
$smoRestore.Devices.Add($backupDevice)

#read the file list stored in the backup media
$smoRestoreDetails = $smoRestore.ReadFileList($server)

#display the logical data file name from the backup
"Logical file name from backup : " + $smoRestoreDetails.Rows[0]["LogicalName"]

#give the restored database a name
$smoRestore.Database = "Safeway_LogixRT"

#specify new data and log files (mdf and ldf)
$smoRestoreFile = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")
$smoRestoreLog = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")

#the logical file names must match those stored in the backup media
#(row 0 is normally the data file, row 1 the log file)
$smoRestoreFile.LogicalFileName = $smoRestoreDetails.Rows[0]["LogicalName"]
$smoRestoreFile.PhysicalFileName = $server.Information.MasterDBPath + "\" + $smoRestore.Database + "_Data.mdf"
$smoRestoreLog.LogicalFileName = $smoRestoreDetails.Rows[1]["LogicalName"]
$smoRestoreLog.PhysicalFileName = $server.Information.MasterDBLogPath + "\" + $smoRestore.Database + "_Log.ldf"
$smoRestore.RelocateFiles.Add($smoRestoreFile)
$smoRestore.RelocateFiles.Add($smoRestoreLog)

#restore database
$smoRestore.SqlRestore($server)

mysqlbackup mysql enterprise utility issue

Posted: 04 May 2013 12:05 PM PDT

I recently took a backup using mysqlbackup.

While restoring it, I noticed that the files copied into the datadir are owned by root:root (user:group) instead of mysql:mysql.

Is something wrong with how the backup was taken?
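Most likely nothing is wrong with the backup itself: the copy-back was presumably run as root, so the restored files simply inherit root's ownership. The usual fix, applied before starting mysqld (the datadir path is a placeholder):

```
chown -R mysql:mysql /var/lib/mysql
```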

Cross Database transactions - Always on

Posted: 04 May 2013 06:05 PM PDT

Recently we were working on a POC to get AlwaysOn working, and happened to see this article in Books Online:

http://technet.microsoft.com/en-us/library/ms366279.aspx

This article suggests that there would be logical inconsistency even in synchronous mode, but will this actually be the case?

Consider, for example, databases A and B on which a transaction runs, where A is in high-safety mode and B is not mirrored. The log of A has to reach the mirror before the principal database commits, and eventually the two-phase commit (including the transaction on B) succeeds; but the article suggests the log will not be transferred in the first place, even though the commit on B happens, which seems contradictory. Please help me understand whether the statement in the article is true, and if so, how it can be. :)

PS :Please let me know if I need to provide more information around this.

MySQL generic trigger to save identification data for later accessing of the changed row

Posted: 04 May 2013 07:05 PM PDT

I am pretty inexperienced with this.

I need a generic trigger able to create and save, in a fixed table, some sort of identification data for a changed row from any table. The identification data should be usable later to SELECT the changed row in the given table.

Can this be done without knowing the table structure in advance?

The only idea I had, though in my opinion it is far too inefficient, and it also requires knowing the column names in advance, is to save a hash:

MD5(concat(NEW.field1, NEW.field2, NEW.field3, ...))  

then

SELECT * FROM chaged_table WHERE hash = MD5(concat(field1, field2, field3, ...))  

to identify the changed row in the table which triggered the change.

I would greatly appreciate any help or suggestions!

Slow backup and extremely slow restores

Posted: 04 May 2013 10:05 AM PDT

I don't normally work with MySQL but with MS SQL, and am having issues restoring a dump backup of a 9 GB database. I converted it to MS SQL and it takes a grand total of 4 minutes to restore, but the MySQL DB takes over an hour on the same server. The MySQL database is using InnoDB; is there a way to speed up the restores? Both databases are on the same machine, Windows 2008 R2 in a VM with a dynamic SAN.

Correction: it takes MS SQL 1 minute to restore, and 1 hour to restore the same database in MySQL.

EDIT: mysql.ini (with commented lines removed):

[client]
no-beep
port=3306

[mysql]
default-character-set=utf8

[mysqld]
port=3306
basedir="C:\Program Files\MySQL\MySQL Server 5.5\"
datadir="C:\ProgramData\MySQL\MySQL Server 5.5\data\"
character-set-server=utf8
default-storage-engine=INNODB
sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
log-output=NONE
general-log=0
general_log_file="CM999-SV510.log"
slow-query-log=0
slow_query_log_file="CM999-SV510-slow.log"
long_query_time=10
log-error="CM999-SV510.err"
max_connections=100
query_cache_size=0
table_cache=256
tmp_table_size=22M
thread_cache_size=8
myisam_max_sort_file_size=100G
myisam_sort_buffer_size=43M
key_buffer_size=8M
read_buffer_size=64K
read_rnd_buffer_size=256K
sort_buffer_size=256K
innodb_additional_mem_pool_size=4M
innodb_flush_log_at_trx_commit=1
innodb_log_buffer_size=2M
innodb_buffer_pool_size=124M
innodb_log_file_size=63M
innodb_thread_concurrency=9
