Wednesday, May 8, 2013

[how to] MySQL: Slow INSERTs to MEMORY table

MySQL: Slow INSERTs to MEMORY table

Posted: 08 May 2013 08:57 PM PDT

I've been trying to work this issue out, and it's really been killing me! We've been experiencing some slow database queries on a Prestashop installation that we host, and a look into the query log revealed that it is fairly full of simple insert queries like this:

INSERT INTO `ps_guest` (`id_operating_system`,`id_web_browser`,`id_customer`,`javascript`,`screen_resolution_x`,`screen_resolution_y`,`screen_color`,`sun_java`,`adobe_flash`,`adobe_director`,`apple_quicktime`,`real_player`,`windows_media`,`accept_language`) VALUES ('3','1','0','0','0','0','0','0','0','0','0','0','0','');  

Since the data in this table is not something that we need to persist for the long term, we changed the table into a MEMORY table.

CREATE TABLE `ps_guest` (
  `id_guest` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `id_operating_system` int(10) unsigned DEFAULT NULL,
  `id_web_browser` int(10) unsigned DEFAULT NULL,
  `id_customer` int(10) unsigned DEFAULT NULL,
  `javascript` tinyint(1) DEFAULT '0',
  `screen_resolution_x` smallint(5) unsigned DEFAULT NULL,
  `screen_resolution_y` smallint(5) unsigned DEFAULT NULL,
  `screen_color` tinyint(3) unsigned DEFAULT NULL,
  `sun_java` tinyint(1) DEFAULT NULL,
  `adobe_flash` tinyint(1) DEFAULT NULL,
  `adobe_director` tinyint(1) DEFAULT NULL,
  `apple_quicktime` tinyint(1) DEFAULT NULL,
  `real_player` tinyint(1) DEFAULT NULL,
  `windows_media` tinyint(1) DEFAULT NULL,
  `accept_language` varchar(8) DEFAULT NULL,
  PRIMARY KEY (`id_guest`),
  KEY `id_customer` (`id_customer`),
  KEY `id_operating_system` (`id_operating_system`)
) ENGINE=MEMORY AUTO_INCREMENT=188 DEFAULT CHARSET=utf8

A MEMORY table should be fast in theory, right? And most of the time it is, but occasionally that same insert query still shows up in our slow query log.

Since this insert query does block visitor page loads, we hope to minimise/eliminate it completely.

What else should I be looking at? Below is my my.cnf:

[mysql]

# CLIENT #
port                           = 3306

[mysqld]

# GENERAL #
port                           = 3306
server-id                      = 1
user                           = mysql
default_storage_engine         = InnoDB
innodb_doublewrite             = 1
low_priority_updates           = 0
max_write_lock_count           = 1

# MyISAM #
key_buffer_size                = 8M
myisam_recover                 = FORCE,BACKUP

# SAFETY #
max_allowed_packet             = 16M
max_connect_errors             = 1000000
#skip_name_resolve
#sql_mode                       = ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTITUTION,NO_ZERO_DATE,NO_ZERO_IN_DATE,ONLY_FULL_GROUP_BY
sql_mode                       = NO_ENGINE_SUBSTITUTION
sysdate_is_now                 = 1
innodb                         = FORCE
innodb_strict_mode             = 1

# DATA STORAGE #
datadir                        = /var/lib/mysql/data/

# BINARY LOGGING #
log_bin                        = /var/lib/mysql/data/mysql-bin
expire_logs_days               = 7
sync_binlog                    = 0

# CACHES AND LIMITS #
tmp_table_size                 = 32M
max_heap_table_size            = 32M
query_cache_type               = 1
query_cache_size               = 256M
query_cache_limit              = 4M
max_connections                = 40
table_cache                    = 8196
thread_cache_size              = 32
open_files_limit               = 65535
table_definition_cache         = 4096
table_open_cache               = 8196
join_buffer_size               = 4M

# INNODB #
#innodb_flush_method            = O_DIRECT
innodb_io_capacity             = 75
innodb_log_files_in_group      = 2
innodb_log_file_size           = 384M
innodb_log_buffer_size         = 8M
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table          = 1
innodb_buffer_pool_size        = 1536M
innodb_thread_concurrency      = 5

# LOGGING #
log-warnings                   = 2
log_error                      = /var/lib/mysql/data/mysql-error.log
general-log                    = 0
general_log_file               = /var/lib/mysql/data/mysql-general.log
log_queries_not_using_indexes  = 1
long_query_time                = 3
slow_query_log                 = 1
slow_query_log_file            = /var/lib/mysql/data/mysql-slow.log
log_queries_not_using_indexes  = OFF
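
One thing worth ruling out, as a hedged guess from the settings above rather than a confirmed diagnosis: with query_cache_size = 256M, every INSERT into ps_guest must invalidate that table's cached result sets while holding the global query cache mutex, which can intermittently stall otherwise-trivial writes. These standard counters help confirm or exclude that:

SHOW GLOBAL STATUS LIKE 'Qcache%';            -- high Qcache_lowmem_prunes points at cache churn
SHOW GLOBAL STATUS LIKE 'Table_locks_waited'; -- MEMORY tables use table-level locks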

Join first table with nearest geometry from second table in PostGIS

Posted: 08 May 2013 08:57 PM PDT

There are 2 tables, users and dogs, both containing a geometry column geom.

How can I return results containing all columns of users and join all columns for the nearest (point) geometry from dogs?

My attempt below returns blanks for the columns from dogs. Any ideas why that happens?

Attempt

SELECT u.*, d.*
FROM users AS u
LEFT JOIN dogs AS d
    ON ST_DWithin(u.geom, d.geom, 10000)
ORDER BY ST_Distance(u.geom, d.geom)
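
Two notes, offered as a sketch rather than a confirmed fix. First, if the dogs columns come back NULL it usually means no dog fell within the 10000-unit radius; whether that unit is degrees or meters depends on the SRID, which is worth checking first. Second, the query above returns one row per dog in range rather than the single nearest one. A common pattern for "nearest per user" is DISTINCT ON (this assumes users has a primary key column id, which is not given in the question):

SELECT DISTINCT ON (u.id) u.*, d.*
FROM users AS u
LEFT JOIN dogs AS d
    ON ST_DWithin(u.geom, d.geom, 10000)
ORDER BY u.id, ST_Distance(u.geom, d.geom);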

IIS Application pool running under network service can't connect to SQL Server

Posted: 08 May 2013 05:14 PM PDT

I just deployed an ASP.NET MVC 4 application to IIS 7, but when I browse it I get the following error:

Cannot open database "PCSPI" requested by the login. The login failed.
Login failed for user 'NT AUTHORITY\SYSTEM'.

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details: System.Data.SqlClient.SqlException: Cannot open database "PCSPI" requested by the login. The login failed.
Login failed for user 'NT AUTHORITY\SYSTEM'.

Source Error:

Line 23:
Line 24:             //* Role initialization method
Line 25:             WebSecurity.InitializeDatabaseConnection(
Line 26:                 "DatabaseConnection",
Line 27:                 "UserProfile",

Source File: c:\Users\OSCAR\Documents\Visual Studio 2012\Projects\PCSPI\WebUI\Global.asax.cs    Line: 25

Stack Trace:

[SqlException (0x80131904): Cannot open database "PCSPI" requested by the login. The login failed.
Login failed for user 'NT AUTHORITY\SYSTEM'.]
   System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) +6675286
   System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) +688
   System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady) +4403
   System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) +84
   System.Data.SqlClient.SqlInternalConnectionTds.CompleteLogin(Boolean enlistOK) +55
   System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, SecureString newSecurePassword, Boolean ignoreSniOpenTimeout, TimeoutTimer timeout, Boolean withFailover) +368
   System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(ServerInfo serverInfo, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance, SqlConnectionString connectionOptions, SqlCredential credential, TimeoutTimer timeout) +6703926
   System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(TimeoutTimer timeout, SqlConnectionString connectionOptions, SqlCredential credential, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance) +6704427
   System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, SqlCredential credential, Object providerInfo, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance, SqlConnectionString userConnectionOptions) +610
   System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, DbConnectionPoolKey poolKey, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection, DbConnectionOptions userOptions) +1049
   System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnectionPool pool, DbConnectionOptions options, DbConnectionPoolKey poolKey, DbConnectionOptions userOptions) +74
   System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnectionOptions userOptions) +6706995
   System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnectionOptions userOptions) +78
   System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, UInt32 waitForMultipleObjectsTimeout, Boolean allowCreate, Boolean onlyOneCheckConnection, DbConnectionOptions userOptions, DbConnectionInternal& connection) +2192
   System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal& connection) +116
   System.Data.ProviderBase.DbConnectionFactory.TryGetConnection(DbConnection owningConnection, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal& connection) +1012
   System.Data.ProviderBase.DbConnectionClosed.TryOpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions) +6711619
   System.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry) +152
   System.Data.SqlClient.SqlConnection.Open() +229
   WebMatrix.Data.Database.EnsureConnectionOpen() +115
   WebMatrix.Data.<QueryInternal>d__0.MoveNext() +79
   System.Linq.Enumerable.FirstOrDefault(IEnumerable`1 source) +247
   WebMatrix.WebData.SimpleMembershipProvider.CheckTableExists(IDatabase db, String tableName) +90
   WebMatrix.WebData.SimpleMembershipProvider.CreateTablesIfNeeded() +64
   WebMatrix.WebData.WebSecurity.InitializeMembershipProvider(SimpleMembershipProvider simpleMembership, DatabaseConnectionInfo connect, String userTableName, String userIdColumn, String userNameColumn, Boolean createTables) +95
   WebMatrix.WebData.WebSecurity.InitializeProviders(DatabaseConnectionInfo connect, String userTableName, String userIdColumn, String userNameColumn, Boolean autoCreateTables) +86
   WebUI.MvcApplication.Application_Start() in c:\Users\OSCAR\Documents\Visual Studio 2012\Projects\PCSPI\WebUI\Global.asax.cs:25

[HttpException (0x80004005): Cannot open database "PCSPI" requested by the login. The login failed.
Login failed for user 'NT AUTHORITY\SYSTEM'.]
   System.Web.HttpApplicationFactory.EnsureAppStartCalledForIntegratedMode(HttpContext context, HttpApplication app) +12864673
   System.Web.HttpApplication.RegisterEventSubscriptionsWithIIS(IntPtr appContext, HttpContext context, MethodInfo[] handlers) +175
   System.Web.HttpApplication.InitSpecial(HttpApplicationState state, MethodInfo[] handlers, IntPtr appContext, HttpContext context) +304
   System.Web.HttpApplicationFactory.GetSpecialApplicationInstance(IntPtr appContext, HttpContext context) +404
   System.Web.Hosting.PipelineRuntime.InitializeApplication(IntPtr appContext) +475

[HttpException (0x80004005): Cannot open database "PCSPI" requested by the login. The login failed.
Login failed for user 'NT AUTHORITY\SYSTEM'.]
   System.Web.HttpRuntime.FirstRequestInit(HttpContext context) +12881540
   System.Web.HttpRuntime.EnsureFirstRequestInit(HttpContext context) +159
   System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context) +12722601

But when I change the identity of the application pool, the login that IIS uses to access SQL Server changes, so I worked out that I need to allow access for the NETWORK SERVICE account on my SQL Server. The problem is I can't seem to figure out how.

Please, how can I grant access to NETWORK SERVICE?
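
A minimal sketch of the usual grants, assuming the database is named PCSPI as in the error above, run by a sysadmin on the instance (the two role memberships are an assumption; grant whatever the application actually needs):

CREATE LOGIN [NT AUTHORITY\NETWORK SERVICE] FROM WINDOWS;
USE PCSPI;
CREATE USER [NT AUTHORITY\NETWORK SERVICE] FOR LOGIN [NT AUTHORITY\NETWORK SERVICE];
EXEC sp_addrolemember 'db_datareader', 'NT AUTHORITY\NETWORK SERVICE';
EXEC sp_addrolemember 'db_datawriter', 'NT AUTHORITY\NETWORK SERVICE';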

Any help will be appreciated, thanks in advance

Is the key_buffer_size applicable to myisam tmp tables?

Posted: 08 May 2013 02:51 PM PDT

I have a database about 750GB in size. It's all innodb.

Larger analytical queries often need to group by several columns or use distinct so it's common that MySQL will have to create tmp tables. The tmp tables fit into memory.

My cache hit ratio (Key_reads / Key_read_requests) is 0.

When MySQL creates these tmp tables, I'm guessing it doesn't create pseudo-indexes on them that would use the key buffer. Is key_buffer_size applicable to these implicit MyISAM tmp tables at all?
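
A quick way to see whether those implicit tmp tables stay in memory or spill to disk (disk-based implicit tmp tables are MyISAM in MySQL 5.x, which is the only case where the key buffer could matter here):

SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';   -- compare disk-based vs total
SHOW GLOBAL VARIABLES LIKE 'tmp_table_size';
SHOW GLOBAL VARIABLES LIKE 'max_heap_table_size';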

What is the syntax for querying a spatial column?

Posted: 08 May 2013 02:39 PM PDT

What is the right syntax to get all the points within the bounding box? The table consists of a lat and lng column and a spatial column "pt" generated from the lat/lng values.

I calculated the top-left and bottom-right coords for the MBR and tested with the following query (without the haversine):

SELECT *
FROM tb_gps
WHERE (lng BETWEEN 4.88263070241 AND 4.90030729759)
  AND (lat BETWEEN 52.3677890788 AND 52.3785809212);

Testing the spatial column generates an error:

SELECT *
FROM tb_gps
WHERE MBRWITHIN(pt, MULTIPOINT(52.3641917981 4.87673850395, 52.3821782019 4.90619949605))

This one gives null results:

SELECT *
FROM tb_gps AS t
WHERE MBRCONTAINS( GeomFromText( 'MULTIPOINT(52.3641917981 4.87673850395, 52.3821782019 4.90619949605)' ), t.pt ) = 1;

The MySQL version is:

protocol_version    10
version             5.1.68-cll

It could be that I am using the wrong syntax; I am not sure. Could someone help me out with this?

(The coords in the second query are different, but that's not really the point.)
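
A hedged sketch of the usual pattern on MySQL 5.1: the bare MULTIPOINT(...) call is what raises the syntax error, so the geometry has to go through GeomFromText, and a closed POLYGON is the conventional way to express the box. The coordinate order below follows the question's (lat lng) and must match whatever order was used when pt was populated, which the question doesn't state:

SELECT *
FROM tb_gps
WHERE MBRContains(
        GeomFromText('POLYGON((52.3641917981 4.87673850395,
                               52.3641917981 4.90619949605,
                               52.3821782019 4.90619949605,
                               52.3821782019 4.87673850395,
                               52.3641917981 4.87673850395))'),
        pt);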

How do I identify the remote db agent name to use in create_database_destination on Oracle 11gR2?

Posted: 08 May 2013 01:07 PM PDT

I am trying to setup DBMS_SCHEDULER in Oracle 11g to run a remote database job.

I have a remote Oracle 11g R2 database on unix and a local one on Windows.

I read that you can install the Oracle Scheduler agent from the 11g client install for machines that don't have Oracle installed, but that this is not needed for running remote jobs if Oracle is present on both machines. With the remote agent installation, you run schagent and provide parameters to register the agent with the remote machine, but I can't find any instructions on the web about how to register remote agents when both machines have Oracle installed, or what to use as the agent name in this case.

I have added an entry to tnsnames.ora for the remote DB and can tnsping, etc.

If I run the dbms_scheduler.create_database_destination procedure, it requires an agent name, but where can I find this for the database, and how can I check that it is running on Unix or Windows?
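
For reference, a sketch of the call itself; every name below is hypothetical, and this does not answer where the agent name comes from, it only shows where it goes:

BEGIN
  DBMS_SCHEDULER.CREATE_DATABASE_DESTINATION(
    destination_name => 'REMOTE_DB_DEST',
    agent            => 'REMOTE_HOST_AGENT',  -- the name the remote agent is registered under
    tns_name         => 'REMOTEDB',           -- the tnsnames.ora entry mentioned above
    comments         => 'Remote 11gR2 database');
END;
/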

Are SQL server statistics stored in database or buffer pool?

Posted: 08 May 2013 01:07 PM PDT

Just wondering: are statistics kept in the database itself, or only in memory? If I backup/restore the database from a prod server to a development server, will it keep the same statistics, so that execution plans won't be much different when I run queries on the development server?
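
Statistics are stored in the database itself, so they survive a backup/restore. A quick way to inspect them on the restored copy (the table and statistic names below are hypothetical):

SELECT name, stats_id, auto_created, user_created
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.SomeTable');

DBCC SHOW_STATISTICS ('dbo.SomeTable', 'SomeStatisticName');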

General tool to load dump files

Posted: 08 May 2013 11:22 AM PDT

I am a big fan of Postgres, both for its price and for its features. I am going to need to load both Oracle and SQL Server dump files into it. I will try to ask, and beg, for plain .csv files plus the schema DDL, but I suspect that I will be given .dmp files.

Is there a tool, most preferably an open source one, that would allow me to read, profile, and possibly load Oracle/SQL Server dump files into Postgres?

Thank you, Edmon

Restore exceeds licensed limit of 10240 MB per database. All I really need is the DB structure

Posted: 08 May 2013 02:35 PM PDT

I received a .bak file from a customer that I need to restore, but the restore exceeds the 10240 MB per-database licensed limit of SQL Server 2008 R2 Express. All I really need is the DB structure. Is there a way to restore anyway, or for me to restore just the structure?
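
The backup can at least be inspected without restoring it, which confirms the database name, size, and source version (the path below is hypothetical):

RESTORE HEADERONLY   FROM DISK = 'C:\backups\customer.bak';  -- name, size, source version
RESTORE FILELISTONLY FROM DISK = 'C:\backups\customer.bak';  -- logical/physical file layout

The usual workaround, restoring on a Developer or Standard instance and scripting out the schema from there, still requires a non-Express instance somewhere.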

Export complete database to a remote server including (Tables, Stored Procedures, Triggers)

Posted: 08 May 2013 07:37 PM PDT

How do we export a complete Microsoft SQL Server database from one server to a remote server, including all the stored procedures and triggers? The export facility provided in Management Studio seems to only export the tables and nothing else.

EXTRA NOTE:

Many thanks guys for your kind and detailed response. To be more specific, I'm trying to upload a local database that I created along with my .NET application to my web hosting server, which I have limited access to. I can only connect to my database through SQL Management Studio, and there is no cPanel or Plesk type configuration that would allow me to perform a backup/restore operation or anything fancy. I did as Kin suggested and generated a script and ran it on the remote database. The following is the list of errors generated, which seem to be permission issues. Is there any way I could elevate the security of my user account to bypass these errors on my side? Or is there any other way of performing this export?

Msg 102, Level 15, State 1, Line 3
Incorrect syntax near 'CONTAINMENT'.
Msg 15247, Level 16, State 1, Line 1
User does not have permission to perform this action.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.
Msg 15247, Level 16, State 11, Line 1
User does not have permission to perform this action.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.
Msg 15247, Level 16, State 1, Line 1
User does not have permission to perform this action.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.
Msg 102, Level 15, State 6, Line 1
Incorrect syntax near 'FILESTREAM'.
Msg 102, Level 15, State 6, Line 1
Incorrect syntax near 'TARGET_RECOVERY_TIME'.

How can I minimize SQL Server data loss in AWS in the event of failure?

Posted: 08 May 2013 09:32 AM PDT

If I am running SQL Server in AWS, what is a low-cost solution to minimize my risk of data loss in the event of failure at the database level?

Let's assume I am performing backups every 6 hours, so in the worst case I can have my servers back up in a couple of hours, but depending on the time of the failure I could lose 6 hours of data.

Let's assume I am OK with the 2 hours to bring the servers back up, but I am not OK with 6 hours of data loss.

Is there a way to narrow my data-loss gap to around 15 minutes or less without getting into replication, etc.?

I am open to any way to host SQL Server in AWS
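
A minimal sketch of the usual low-cost answer, transaction log backups on a short schedule (this assumes the database is in FULL recovery; names and paths are hypothetical, and in practice you'd generate a unique file name per run and ship the files off the instance, e.g. to S3):

-- run every 15 minutes from SQL Agent or a scheduled task
BACKUP LOG MyDb
TO DISK = 'D:\Backups\MyDb_20130508_0915.trn';  -- unique name per run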

How can I start my Oracle database? Is it not allowing user login? [closed]

Posted: 08 May 2013 09:33 AM PDT

Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\Documents and Settings\bh>sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Wed May 8 20:17:40 2013

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: system
Enter password:

ERROR:
ORA-01033: ORACLE initialization or shutdown in progress
Process ID: 0
Session ID: 0 Serial number: 0
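
ORA-01033 means the instance is running but the database never finished opening (or closing), so ordinary users can't log in. The usual first step, sketched under the assumption of local SYSDBA access on the database machine, is to connect internally and try to open it, consulting the alert log if anything fails:

sqlplus / as sysdba

SQL> SELECT status FROM v$instance;   -- typically STARTED or MOUNTED in this state
SQL> ALTER DATABASE OPEN;             -- or SHUTDOWN ABORT followed by STARTUP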

Should I favor using a param or a scalar function result within a SELECT?

Posted: 08 May 2013 08:46 AM PDT

I have a select statement in SQL Server that must compare a large set of rows against a single calculated date.

The calculation itself is negligible; it's just looking for the prior Monday. Given that the column this calculated date will be matched against is indexed, is there any difference between passing the calculated date in from application code as a parameter and using a scalar function to calculate the date?

Said another way: should I favor using a param or calling a scalar function, or are they basically equal in this scenario?
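
For concreteness, a sketch of the two variants being weighed (the table, column, and function names are hypothetical):

-- Variant 1: the date is computed in application code and passed in
SELECT *
FROM dbo.Orders
WHERE OrderDate >= @PriorMonday;

-- Variant 2: a scalar UDF computes it inside the query
SELECT *
FROM dbo.Orders
WHERE OrderDate >= dbo.fn_PriorMonday(GETDATE());

As long as the function is applied to a constant rather than to the indexed column itself, both forms can use an index seek; the practical difference is typically that the parameter gives the optimizer a concrete value to estimate against, while the UDF's result is opaque at compile time.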

Saving a Microsoft SQL Database to a file and opening it on another server under any database name

Posted: 08 May 2013 08:54 AM PDT

I am trying to find an efficient way of physically saving and opening a Microsoft SQL Server database, like how it's done with Access. I would like to be able to save my entire database into a file and open it on some other server. I have already tried backup/restore, but it's very tricky and does not work all the time. Is there a way to do it other than the backup/restore procedure?

The following is what I get when I try to do the restore. I tried every possible approach that I could think of, but it simply doesn't restore. (The error was posted as a screenshot, which is not reproduced here.)
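
The closest SQL Server equivalent to "save the file and open it elsewhere" is detach/attach, sketched here with hypothetical names and paths:

EXEC sp_detach_db @dbname = 'MyDb';
-- copy MyDb.mdf and MyDb_log.ldf to the other server's data folder, then:
CREATE DATABASE MyDb
ON (FILENAME = 'D:\Data\MyDb.mdf'),
   (FILENAME = 'D:\Data\MyDb_log.ldf')
FOR ATTACH;

One caveat that also bites backup/restore: a database can never be attached or restored to an older SQL Server version than the one it came from, which is a common reason these moves "don't work".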

SQL Server Trigger Performance - Is Using a Trigger a Good Way?

Posted: 08 May 2013 08:56 AM PDT

We are using SQL Server 2008 for a J2EE web application. I need to maintain audits of updates, with the previous value and new value for each column, on one of my tables, which is very heavy: around 100 columns and 10K records, increasing rapidly on a daily basis. Around 200 users work with this application, and updates occur very frequently. I know handling it via code will require very frequent connections to the database. I want to know which is the better approach: will handling it through code provide better performance than using a SQL Server trigger?
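
For scale, a minimal sketch of what the trigger route looks like for a single audited column (all table and column names are hypothetical; a real version repeats this pattern, or unpivots, for each of the ~100 columns, which is exactly where the per-update cost comes from):

CREATE TRIGGER trg_BigTable_AuditUpdate
ON dbo.BigTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- record old/new values only for rows where the column actually changed
    INSERT INTO dbo.BigTable_Audit (RowId, ColumnName, OldValue, NewValue, ChangedAt)
    SELECT d.Id, 'Status', d.Status, i.Status, GETDATE()
    FROM deleted AS d
    JOIN inserted AS i ON i.Id = d.Id
    WHERE ISNULL(d.Status, '') <> ISNULL(i.Status, '');
END;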

RavenDB - Disaster Recovery on a Replicated Instance

Posted: 08 May 2013 03:34 PM PDT

  • Assume that I have two RavenDB instances.
  • Instance 1 contains one document: users/1.
  • Instance 2 contains three documents: users/2, users/3, and users/4.


  • Assume that I configure and enable replication for both instances.
    • That is, Instance 1 will replicate to Instance 2 and Instance 2 will replicate to Instance 1.
  • After replication, both instances will have all four documents.
    • But both instances will agree that users/1's Raven-Replication-Source is Instance 1, and users/2, users/3, and users/4 will have their Raven-Replication-Source set to Instance 2.


  • Assume that Instance 1 suffers a catastrophic failure and I need to reinstall RavenDB.


  • My testing shows that in this scenario: users/2, users/3, and users/4 will again be replicated to Instance 1.
  • However, users/1 will not be replicated to Instance 1 because its Raven-Replication-Source is Instance 1.

My Question:

  • How can I force Instance 2 to replicate users/1 back to Instance 1, even though users/1 was originally created on Instance 1 and its Raven-Replication-Source is Instance 1?

Choosing a database for storing large amount of data and allowing for thousands of writes

Posted: 08 May 2013 08:43 PM PDT

I am working on a project that has some serious requirements regarding its database, and I'm having a hard time choosing the best solution. The details are:

  • Database will easily reach tens of terabytes of data
  • Schema is simple in nature. In relational database terms it would consist of 3 tables, each with millions to billions of rows. Each table has only a few columns. Records are around 10 KB in size each
  • Allow for tens of thousands of writes per second. I don't have a good estimate for the reads, but they should be fewer. We will possibly move to bulk inserts (write clients will insert on average 10 new rows per second, so I can combine those and have 5000 bulk inserts per second)
  • Obviously it will need to be able to scale horizontally easily. These numbers are just for a start and I expect they will multiply over the years. Hopefully :)
  • SQL solution is not required
  • Read queries performed will be over ranges of data. Geospatial support would be nice though not needed. Secondary keys also would be cool
  • Should be easy to access via a C/C++ application. I am still considering Java as our choice of platform to build the servers that will talk to the database, but probably will end up with C/C++ (for reasons I am not going to put up here)
  • CAP related - obviously we would like to have it all and it won't happen. Availability wise we will be fine with some read/write delays (in terms of seconds). Eventually consistent is OK as long as it doesn't take ages for the database to become consistent. Partition tolerance should be enough to cover the numbers listed earlier. So I can't really put some serious accent on which of those three is most important for us. Only data loss is not acceptable :)
  • Cross-data centre replication would be nice, although this is not planned in the scope of the project.
  • Updates/Deletes of the data are minimal. Just insert and read.
  • There may be some Map/Reduce queries to the data but most likely they will not be executed very often and their results will be cached. The very least the heavy queries can be performed on a replica of the database so writes can continue while the heavy analysis is performed.
  • Schema can be easily fixed, flexibility is not necessary although if we can have it without sacrificing something else - why not? In fact a key-value store with 3-4 buckets is an option.

Basically the design of the project resembles a one-way VoIP application with constant recording of the data. A set of clients constantly push data in the database (several thousands of clients) at 10Hz rate and a similar number of clients constantly read the data (at 1Hz rate). Applying Pub/Sub solution would take off some of the database load so we are considering that as part of the solution. Any suggestions on that are also welcome. This also means the working set is much smaller. Real-time data is being passed between the clients and the recorded data is accessed far less frequently.
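
As a back-of-envelope check on those numbers (taking "several thousand" writing clients to mean roughly 5,000, which is my reading rather than a stated figure):

5,000 clients x 10 rows/sec          = 50,000 rows/sec (i.e. the 5,000 bulk inserts/sec of 10 rows each)
50,000 rows/sec x ~10 KB per record  = ~500 MB/s of sustained raw ingest, before indexes and replication

That sustained ingest rate, arguably more than the total volume, is what narrows the candidate list.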

I do not have experience with NoSQL solutions, but probably that is the way to go in this case.

So far I have a few names in mind:

  • MongoDB (so far seems to match my requirements)
  • CouchBase (though I have yet to find the documentation how to use it from C)
  • MySQL with NDB (probably will fail in time - I haven't found anyone using it at such scale, but I am an old MySQL user and have quite happily lived with it for ages)
  • Cassandra (I'm having a hard time getting my head around their data models so far, but I will figure it out)

Does anyone have any suggestions based on experience with data warehouses at that scale? Which database solution would be the best in the provided scenario?

Thanks in advance to anyone who posts an answer :)

Edit: I missed something and one of the answers reminded me to add it - we are running *nix boxes only (blade servers).

Modifying replication distributor & publisher

Posted: 08 May 2013 12:08 PM PDT

I have a SQL Server 2008 transactional replication setup as follows:

  • 192.168.100.1 <-- Distributor
  • 192.168.100.2 <-- Publisher
  • 192.168.100.3 <-- Subscriber

Our network team wants to change the IP address of the distributor, for example to 192.168.200.1.

Also, we want to move the publication database to a new server, so the replication source (publisher) will also be moved and will have a new IP address as well.

The subscriber will remain as is.

I cannot afford long downtime, as our systems run 24x7, and there are so many huge tables already in replication that generating a new snapshot would take a long time to replicate.

What is the best way to seamlessly re-configure the replication and keep my data in sync without rebuilding it from scratch?
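
Before planning a rebuild, it's worth checking how each side actually references the distributor; if everything is registered by hostname rather than by IP address, the address change alone may be transparent. Standard replication helper procedures, run at the publisher:

EXEC sp_helpdistributor;       -- the distributor name the publisher is bound to
EXEC sp_helpsubscriberinfo;    -- how the subscribers are registered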

Thanks

SQL Server: Worse performance on a new server

Posted: 08 May 2013 02:12 PM PDT

We've been on a dedicated server (single quad-core, 6 GB RAM) and are moving to a new dedicated server (2x hex-core, 32 GB RAM). Both are Windows Server 2008, SQL Server 2008. The performance on the new server is slightly worse than the old, slower server.

In testing, our ASP.NET application runs 10 - 20% slower. Running individual expensive queries with STATISTICS IO and STATISTICS TIME shows 10 - 20% greater elapsed time on the new server. SQL Profiler shows higher CPU usage on expensive queries.

Task Manager on the new server shows sqlserver.exe is consuming 22 GB of RAM, but the CPU values always stay very low.

I've updated all statistics, rebuilt or reorganized indexes, etc. Execution plans should be cached on the new server at this point, given the amount of testing I've done. If there are any missing indexes (I don't think there are), they affect the old and new servers equally. The new server has a restored backup of the same data as the old.

I'd expected that the performance on the new server would be better, but of more concern is load. If the old server is performing better even under load, then what will happen when this new, slightly worse server has to take that load?

What else could I be missing here?

EDIT: MAXDOP set to 6.

Old server has OS, databases, and tempdb on the same physical drives (RAID 10): a total of four 15K 3 Gb/s 3.5-inch SAS drives. New server has three drive sets: OS on RAID 1, database on RAID 10, tempdb on RAID 5. A total of nine 15K 6 Gb/s 2.5-inch SAS drives.

Old server has 1 x Intel Xeon E5620 2.40 GHz quad-core, 8 threads (w/ H/T). New server has 2 x Intel Xeon E5-2640 2.5 GHz six-core, 12 threads (w/ H/T).

EDIT 2: Here's the final analysis:

The power plan was on balanced, not high performance. Switched that over.

Tempdb was on a RAID 5, not RAID 10. Added another HD to create two physically distinct RAID 10 configs, one for tempdb and one for everything else.

Excluded SQL-related files (mdf, ldf, ndf, bak) from virus scanning.

Rebuilt all indexes following the move to the new server. They were very fragmented -- possibly as a result of backup, copy, restore?

And I realized that the processor jump was not that big. The queries aren't going to execute that much faster, but with more processors, more cores, more RAM, we'll be more scalable.
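
For anyone repeating this analysis, fragmentation after a move can be measured with the standard DMV before deciding to rebuild:

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30;   -- common rebuild threshold

(A restore is page-for-page, so the fragmentation most likely came along from the old server rather than being caused by the backup/copy/restore itself.)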

Msg 21, Level 21, State 1, Line 1 Warning: Fatal error 9001

Posted: 08 May 2013 07:54 PM PDT

We recently moved to a SQL Server that's on a VPS, and every now and then we get a 9001 error ("the log for the database is not available") when trying to do an update.

Rebooting the server fixes the problem.

I changed the maximum server memory (in MB) to 2000 because it's a 4GB VPS.

But I just don't know what's causing the error.
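
Error 9001 means the database's transaction log became unavailable, which on a VPS often points at the storage layer rather than at SQL Server itself; after a reboot it is worth confirming integrity and checking the Windows event log for I/O errors. A minimal check (the database name is hypothetical):

DBCC CHECKDB ('MyDb') WITH NO_INFOMSGS;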

Update one table from another table while sorting that table based on one column

Posted: 08 May 2013 03:06 PM PDT

This is the problem I'm trying to figure out in MySQL. We have an old table containing some forms submitted by our users. Somehow, the previous decision was that each time a user came to this survey, a new form would be submitted. So in the old table we easily have several rows with the same Firstname and Lastname but different values in the other columns, and there's a timestamp column Date_Submission as well.

Now we are trying to move everything to a new table, but this time, for each person we only keep one row. And we want to keep the latest of the old data for that user (like email, phone number, etc.).

I could do the following:

update newtable, oldtable
set newtable.email = oldtable.email
where newtable.firstname = oldtable.firstname
  and newtable.lastname = oldtable.lastname;

Apparently this won't give me the "latest" old data for each person.

So I tried this one:

update newtable, oldtable
set newtable.email = oldtable.email
where newtable.firstname = oldtable.firstname
  and newtable.lastname = oldtable.lastname
order by oldtable.Date_Submission;

But then MySQL will complain with:

"ERROR 1221 (HY000): Incorrect usage of UPDATE and ORDER BY".

So I'm wondering, what's the correct way to achieve this?
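
A minimal sketch of one standard approach: find each person's latest submission first, then join that single row back for the update (this assumes firstname/lastname identify a person and that timestamps are unique per person; ties at the same MAX timestamp would need an extra tie-breaker):

UPDATE newtable AS n
JOIN (
    SELECT o.firstname, o.lastname, o.email
    FROM oldtable AS o
    JOIN (
        SELECT firstname, lastname, MAX(Date_Submission) AS latest
        FROM oldtable
        GROUP BY firstname, lastname
    ) AS m
      ON m.firstname = o.firstname
     AND m.lastname  = o.lastname
     AND m.latest    = o.Date_Submission
) AS latest_rows
   ON latest_rows.firstname = n.firstname
  AND latest_rows.lastname  = n.lastname
SET n.email = latest_rows.email;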

increasing mysql table open cache?

Posted: 08 May 2013 02:06 PM PDT

I often read that it is best to increase this variable slowly. Can someone explain why? My status indicates that I should increase it... What is best practice, and what counts as "slowly"?
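
The usual yardstick ("slowly" generally means raising it in steps while watching these standard counters and the OS file-descriptor limit):

SHOW GLOBAL STATUS LIKE 'Opened_tables';        -- fast growth suggests the cache is too small
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
SHOW GLOBAL VARIABLES LIKE 'open_files_limit';  -- each cached table costs file descriptors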

Thanks!

select count(*) in mysql 5.5 innodb-- rewrite advice?

Posted: 08 May 2013 10:06 AM PDT

I need advice on how to rewrite a select count(*) query for InnoDB tables on MySQL 5.5. In the new environment it is very slow...

select count(*)
from mails3
join questions using (question_id)
where mails3.from_user_id = '86696'
  and mails3.is_community_star = 1
  and mails3.rating = 3
  and questions.is_rated_community = 1;

The query execution plan looks simple enough, but it is very slow:

*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: mails3
         type: ref
possible_keys: question_id_index,emails_doublestars,favorite_emails,user_responses,owner_responses,sentmail,watchlist,read_responses,rating_date,read_and_unrated
          key: emails_doublestars
      key_len: 8
          ref: const,const,const
         rows: 2784
        Extra: Using where
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: questions
         type: eq_ref
possible_keys: PRIMARY,is_rated_community_senddate_idx
          key: PRIMARY
      key_len: 5
          ref: answerology.mails3.QUESTION_ID
         rows: 1
        Extra: Using where
2 rows in set (0.00 sec)
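
A hedged suggestion rather than a guaranteed fix: a composite index covering all three constant predicates plus the join column would let the mails3 side be resolved entirely from the index (the index name is made up; the column order follows the WHERE clause above):

ALTER TABLE mails3
  ADD INDEX ix_user_star_rating_q (from_user_id, is_community_star, rating, question_id);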

Why does this procedure raise a privilege error?

Posted: 08 May 2013 08:06 AM PDT

I am having trouble getting the following procedure to run on a remote MySQL database; the given error is privilege-based (#1227). Locally, the procedure runs fine.

QUESTIONS

  • Could somebody help me understand which specific part of this code raises this error, i.e. requires the said privilege?
  • Is there any way I could have corresponding functionality without a privileged call?

(The procedure in question was posted as a screenshot, which is not reproduced here.)

Postgres performance difference on insert

Posted: 08 May 2013 12:57 PM PDT

I was trying to partition one large table and came across Postgres behavior that I could not explain. Maybe you will have some suggestions?

I went with quite a straightforward approach (I know it is not the most efficient, but I had some time):

create table target_table as
select * from parent_table limit 0;

insert into target_table select * from parent_table where date_field >= '20120101' and date_field <= '20120131' and tariff_id <> -1;
insert into target_table select * from parent_table where date_field >= '20120201' and date_field <= '20120228' and tariff_id <> -1;
insert into target_table select * from parent_table where date_field >= '20120301' and date_field <= '20120331' and tariff_id <> -1;
insert into target_table select * from parent_table where date_field >= '20120401' and date_field <= '20120430' and tariff_id <> -1;
insert into target_table select * from parent_table where date_field >= '20120501' and date_field <= '20120531' and tariff_id <> -1;
.....

And here are the results showing how much time each command took to execute:

08:44:42  [INSERT - 8981055 row(s), 119.962 secs]  Command processed
08:46:36  [INSERT - 8222656 row(s), 114.057 secs]  Command processed
08:48:43  [INSERT - 8981454 row(s), 126.789 secs]  Command processed
08:51:02  [INSERT - 8929325 row(s), 139.815 secs]  Command processed
08:53:13  [INSERT - 9465383 row(s), 130.752 secs]  Command processed
08:55:25  [INSERT - 9302664 row(s), 131.911 secs]  Command processed
08:57:36  [INSERT - 9581798 row(s), 130.853 secs]  Command processed
08:59:58  [INSERT - 10080875 row(s), 142.071 secs]  Command processed
09:15:03  [INSERT - 9717698 row(s), 905.030 secs]  Command processed
10:24:22  [INSERT - 10169181 row(s), 4159.346 secs]  Command processed
11:33:11  [INSERT - 10000601 row(s), 4128.590 secs]  Command processed
12:38:36  [INSERT - 10576846 row(s), 3924.579 secs]  Command processed

I can't find an explanation for why inserts from the same table sometimes took 2 minutes and sometimes took over an hour! What could be the reasons for such behavior?

Some info:

The PG server is running on an Amazon xlarge EC2 instance.
I am the only user, and it was not overloaded with other tasks.
work_mem = '1GB'
parent_table is pretty large: 800 million rows.
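
One usual suspect for exactly this pattern, steady inserts that suddenly slow by an order of magnitude, is checkpoint and WAL behavior; EBS-backed EC2 I/O burstiness is another. Both are guesses to verify rather than a diagnosis. These are worth checking:

SHOW checkpoint_segments;
SHOW checkpoint_completion_target;
SELECT checkpoints_timed, checkpoints_req, buffers_checkpoint
FROM pg_stat_bgwriter;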

SQL Server split mdf into multiple files

Posted: 08 May 2013 10:49 AM PDT

I have a database called example.mdf with a total size of 1 GB which suffers from performance issues. I checked the allocated hardware and it is higher than required, and I double-checked the design and everything looks normal. But when I looked at the .mdf files in their physical location (C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\example.mdf), I found that the database is split into multiple files: example_1.mdf, example_2.mdf, example_3.mdf, up to example_7.mdf.

I have another database file on the same SQL Server that has the same issue.

Why does this happen? Does it affect performance? How can I prevent or stop SQL Server from splitting my .mdf files? Moreover, I need to combine the already-split files back together.
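
SQL Server does not split .mdf files on its own; secondary data files only exist if something added them to the database. A quick way to see which files the database really consists of (standard catalog view):

USE example;
SELECT name, physical_name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;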

Search every column in every table in Sybase Database

Posted: 08 May 2013 08:06 PM PDT

I've been tasked with creating an application that pulls data from our Micros Point of Sale system. The POS is using a Sybase database running on one of our servers. The database schema is very convoluted. I've been able to figure out most of the schema to gain access to the data I need; however, there are a few things still left to find. I know what they are called in the actual POS, but I can't seem to find them anywhere in the database (although I haven't actually looked through all 200+ tables).

I'm wondering if there's any way to run a query to search for a specific string in all the columns in all the tables. BTW, I'm using the "Interactive SQL" application that comes with the Sybase Database Client software to connect to the database.
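
A hedged sketch, assuming the Micros back end is Sybase SQL Anywhere and its SYS.SYSTABLE / SYS.SYSCOLUMN / SYS.SYSDOMAIN catalogs are available (all of that is an assumption about the environment): generate one SELECT per character column, then paste and run the generated statements in Interactive SQL.

SELECT 'SELECT ''' || t.table_name || '.' || c.column_name || ''' AS hit, COUNT(*) AS n ' ||
       'FROM "' || t.table_name || '" ' ||
       'WHERE "' || c.column_name || '" LIKE ''%SearchString%'';'   -- SearchString is a placeholder
FROM SYS.SYSTABLE t
JOIN SYS.SYSCOLUMN c ON c.table_id = t.table_id
JOIN SYS.SYSDOMAIN d ON d.domain_id = c.domain_id
WHERE t.table_type = 'BASE'
  AND d.domain_name IN ('char', 'varchar', 'long varchar');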

MySQL5.6 on Mac OS X 10.6.8 problems when granting permissions to root

Posted: 08 May 2013 09:06 AM PDT

I'm having serious problems with a MySQL 5.6 instance on a Mac Server.

We had to upgrade MySQL to a newer version, but it turned out to be a bad idea, as we lost control of it. We had a backup of the /data directory and the my.cnf file as well, so we tried setting an init_file to restore the previous password and its permissions. We created a text file with this content:

UPDATE mysql.user SET Password=PASSWORD('myOriginalPassword') WHERE User='root';
FLUSH PRIVILEGES;

We have double-checked the init_file permissions (we even gave it a chmod 777 to make sure), but something is not working. When we run mysqld_safe --init_file=/usr/local/mysql/bin/init_file it tries to start the server, but then it shuts itself down. The log is as follows:

130308 13:23:52 mysqld_safe Starting mysqld daemon with databases from /usr/local/mysql/data
2013-03-08 13:23:53 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2013-03-08 13:23:53 18979 [Warning] Setting lower_case_table_names=2 because file system for /usr/local/mysql/data/ is case insensitive
2013-03-08 13:23:53 18979 [Note] Plugin 'FEDERATED' is disabled.
2013-03-08 13:23:53 18979 [Note] InnoDB: The InnoDB memory heap is disabled
2013-03-08 13:23:53 18979 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2013-03-08 13:23:53 18979 [Note] InnoDB: Compressed tables use zlib 1.2.3
2013-03-08 13:23:53 18979 [Note] InnoDB: CPU does not support crc32 instructions
2013-03-08 13:23:53 18979 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2013-03-08 13:23:53 18979 [Note] InnoDB: Completed initialization of buffer pool
2013-03-08 13:23:53 18979 [Note] InnoDB: Highest supported file format is Barracuda.
2013-03-08 13:23:53 18979 [Note] InnoDB: 128 rollback segment(s) are active.
2013-03-08 13:23:53 18979 [Note] InnoDB: Waiting for purge to start
2013-03-08 13:23:53 18979 [Note] InnoDB: 1.2.10 started; log sequence number 19440939
2013-03-08 13:23:53 18979 [ERROR] /usr/local/mysql/bin/mysqld: unknown option '--skip-locking'
2013-03-08 13:23:53 18979 [ERROR] Aborting

2013-03-08 13:23:53 18979 [Note] Binlog end
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'partition'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_DELETED'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_INSERTED'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_METRICS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMPMEM'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMP'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_LOCKS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_TRX'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'InnoDB'
2013-03-08 13:23:53 18979 [Note] InnoDB: FTS optimize thread exiting.
2013-03-08 13:23:53 18979 [Note] InnoDB: Starting shutdown...
2013-03-08 13:23:54 18979 [Note] InnoDB: Shutdown completed; log sequence number 19440949
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'BLACKHOLE'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'ARCHIVE'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'MRG_MYISAM'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'MyISAM'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'MEMORY'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'CSV'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'sha256_password'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'mysql_old_password'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'mysql_native_password'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'binlog'
2013-03-08 13:23:54 18979 [Note] /usr/local/mysql/bin/mysqld: Shutdown complete

130308 13:23:54 mysqld_safe mysqld from pid file /usr/local/mysql/data/server.myServerName.com.pid ended

The problem almost certainly comes from the [ERROR] lines, but as we haven't used --skip-locking ourselves, it's just confusing. If we try to log into MySQL using mysql -r -p and the password we set, it can't connect to the server, as the server is not started. Any ideas where this problem may come from?
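
A likely culprit, though it's an inference from the log rather than something verified: the restored my.cnf comes from an older MySQL, and the skip-locking option was removed in 5.5+, so mysqld aborts at startup before the init_file ever runs. Editing my.cnf along these lines should let the server start:

# in my.cnf, under [mysqld]:
# skip-locking          <- remove this line, or replace it with:
skip-external-locking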

EMS SQL manager permission problem

Posted: 08 May 2013 08:48 AM PDT

I have a strange problem with EMS SQL Manager.

I'm using MySQL 5.5 with Linux servers. One of my developers needs to edit stored procedures, so I granted him:

GRANT CREATE ROUTINE, ALTER ROUTINE ON `testdb`.* TO 'testuser'@'192.168.13.11'  

When he clicked "edit stored procedure" in EMS SQL Manager, what he got was:

SELECT command denied to user 'testuser'@'192.168.13.11' for table 'user'  

I found out that EMS SQL Manager asks for the SELECT privilege on some tables, so I needed to grant this user some extra permissions:

GRANT SELECT ON `mysql`.`user` TO 'testuser'@'192.168.13.11';
GRANT SELECT ON `mysql`.`db` TO 'testuser'@'192.168.13.11';
GRANT SELECT ON `mysql`.`tables_priv` TO 'testuser'@'192.168.13.11';
GRANT SELECT ON `mysql`.`columns_priv` TO 'testuser'@'192.168.13.11';
GRANT SELECT ON `mysql`.`procs_priv` TO 'testuser'@'192.168.13.11';

So my question is: WHY does EMS SQL Manager need those privileges? I can create and drop procedures in the mysql command line client without a problem.

Update

After some investigation, I still do not understand EMS SQL Manager. These are the actions EMS SQL Manager takes when the Compile button is clicked:

Executed  : 5/8/2013 3:10:40 PM
Operation : CONNECT
Result    : "OK."

Executed  : 5/8/2013 3:10:40 PM
Operation : show variables like 'character_set_results'
Result    : "OK."

Executed  : 5/8/2013 3:10:40 PM
Operation : SELECT * FROM mysql.user ORDER BY User, Host
Result    : "SELECT command denied to user 'ems'@'192.168.13.11' for table 'user'"

Executed  : 5/8/2013 3:10:40 PM
Operation : DISCONNECT
Result    : "OK."

Executed  : 5/8/2013 3:10:41 PM
Operation : select p.`db`, p.`name`, p.`type`, p.`specific_name`, p.`language`, p.`sql_data_access`, p.`is_deterministic`, p.`security_type`, p.`param_list`, p.`returns`, p.`body`, p.`definer`, p.`created`, p.`modified`, p.`sql_mode`, p.`comment` from `mysql`.`proc` as p where p.`type` = 'PROCEDURE' and p.`db`='reportingdb' and p.`name`='100_rpt_campaign_ip'
Result    : "OK."

Executed  : 5/8/2013 3:10:41 PM
Operation : SHOW CREATE PROCEDURE `reportingdb`.`100_rpt_campaign_ip`
Result    : "OK."

So my question: what is the Compile button trying to do? Why does it want to select from mysql.user? I think that editing a stored procedure is two steps, drop and re-create; why does it want to do more?

Resolving ORA-01033 "initialization or shutdown in progress"

Posted: 08 May 2013 12:16 PM PDT

I followed the instructions at http://www.oracle-base.com/articles/10g/oracle-data-pump-10g.php to create a database export.

I used:

expdp scott/tiger@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log  

to create the export. However I get this error message:

UDE-00008: operation generated ORACLE error 12154
ORA-01033: ORACLE initialization or shutdown in progress

How do I resolve this issue?
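
ORA-01033 here is the same condition as in the earlier question in this digest: the instance is up but the database is not open. A sketch of the usual check, assuming local SYSDBA access on the database server:

sqlplus / as sysdba

SQL> SELECT status FROM v$instance;  -- expect STARTED or MOUNTED rather than OPEN
SQL> ALTER DATABASE OPEN;            -- if this errors, the alert log says why

Once the database reports OPEN, rerun the expdp command above.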
