Thursday, February 28, 2013

[how to] unable to change engine to MyISAM in mysql

unable to change engine to MyISAM in mysql

Posted: 28 Feb 2013 08:07 PM PST

I have installed MySQL 5.6.10 on Mac OS X 10.6.5.

Now the issue is that MySQL is using InnoDB as the default engine.

I have checked which engines are supported:

show engines\G
*************************** 1. row ***************************
      Engine: FEDERATED
     Support: NO
     Comment: Federated MySQL storage engine
Transactions: NULL
          XA: NULL
  Savepoints: NULL
*************************** 2. row ***************************
      Engine: MRG_MYISAM
     Support: YES
     Comment: Collection of identical MyISAM tables
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 3. row ***************************
      Engine: MyISAM
     Support: YES
     Comment: MyISAM storage engine
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 4. row ***************************
      Engine: BLACKHOLE
     Support: YES
     Comment: /dev/null storage engine (anything you write to it disappears)
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 5. row ***************************
      Engine: CSV
     Support: YES
     Comment: CSV storage engine
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 6. row ***************************
      Engine: MEMORY
     Support: YES
     Comment: Hash based, stored in memory, useful for temporary tables
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 7. row ***************************
      Engine: ARCHIVE
     Support: YES
     Comment: Archive storage engine
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 8. row ***************************
      Engine: InnoDB
     Support: DEFAULT
     Comment: Supports transactions, row-level locking, and foreign keys
Transactions: YES
          XA: YES
  Savepoints: YES
*************************** 9. row ***************************
      Engine: PERFORMANCE_SCHEMA
     Support: YES
     Comment: Performance Schema
Transactions: NO
          XA: NO
  Savepoints: NO

I want to change the default to MyISAM, so I changed the setting in my.cnf:

[mysqld]
default-storage-engine=MyISAM

After that I restarted MySQL from System Preferences, but if I check the engines again and create a table, it is still created with InnoDB rather than MyISAM.

Can anyone tell me how I can change the default to MyISAM?

Here is the content of my my.cnf file:

# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html

[mysqld]
default-storage-engine=MyISAM

# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M

# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin

# These are commonly set, remove the # and set as required.
# basedir = .....
# datadir = .....
# port = .....
# server_id = .....
# socket = .....

# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M

# sql_mode=STRICT_TRANS_TABLES
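One way to narrow this down (a diagnostic sketch, assuming the server has been restarted since my.cnf was edited) is to ask the running server what it actually picked up:

-- Check the engine the running server is actually using.
SHOW VARIABLES LIKE 'default_storage_engine';

-- If it still reports InnoDB, the server is probably reading a different
-- my.cnf than the one edited; setting the variable at runtime confirms
-- that the value itself is accepted:
SET GLOBAL default_storage_engine = 'MyISAM';

If the runtime SET works but the option file has no effect, running mysqld --verbose --help and looking at the "Default options are read from the following files" section shows which configuration files the server reads, and in what order.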

"custom archiver out of memory" error when restoring large DB using pg_restore

Posted: 28 Feb 2013 05:37 PM PST

I'm trying to restore a local copy of a 30 GB DB (with only one table) dumped using the Heroku wrappers (described here), using pg_restore. The Heroku DB is on 9.1.7, and I have had the same result using 9.1.5 and 9.1.7 under OS X 10.8.

Using the command:

pg_restore --clean --no-acl --no-owner -d db_name -U username -h localhost -v db.dump  

I get the following output on the command line ("cleaning" commands omitted) after 20 minutes and seeing the DB size grow to about 30 GB:

pg_restore: creating SCHEMA public
pg_restore: creating COMMENT SCHEMA public
pg_restore: creating EXTENSION plpgsql
pg_restore: creating COMMENT EXTENSION plpgsql
pg_restore: creating TABLE video_data
pg_restore: creating SEQUENCE video_data_id_seq
pg_restore: creating SEQUENCE OWNED BY video_data_id_seq
pg_restore: executing SEQUENCE SET video_data_id_seq
pg_restore: creating DEFAULT id
pg_restore: restoring data for table "video_data"
pg_restore(11938) malloc: *** mmap(size=18446744071605936128) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
pg_restore: [custom archiver] out of memory
pg_restore: *** aborted because of error

Turning the log level up to DEBUG1 reveals the underlying error:

...(many of these checkpoint hints)...
LOG:  checkpoints are occurring too frequently (15 seconds apart)
HINT:  Consider increasing the configuration parameter "checkpoint_segments".
ERROR:  extra data after last expected column
CONTEXT:  COPY video_data, line 463110542: "3599794500:59:01497 797942  2012    23422-08-96 2400892 08.794219   08 0617 2342118769-08 72    115 2046225 1..."
STATEMENT:  COPY video_data (id, video_id, views, favorites, comments, created_at, likes, dislikes) FROM stdin;

I have tried increasing SHMMAX and SHMALL to 2 GB, raising checkpoint_segments to 30, and upgrading PG so the patchlevels match. A search turns up a small handful of people with this problem several years back, with no resolution.

Any recommendations?

How much data is needed to show MySQL Cluster's performance scaling vs. InnoDB

Posted: 28 Feb 2013 03:36 PM PST

I am evaluating MySQL Cluster as a possible replacement for an InnoDB schema. So far, I have tested it with 10s of MB of data, and found MySQL Cluster slower than InnoDB; however, I have been told MySQL Cluster scales much better.

How much data does it take to show a performance benefit to MySQL Cluster vs. an InnoDB schema? Or, is there a better way to demonstrate MySQL Cluster's merits?

Open source tools to manage Data Quality

Posted: 28 Feb 2013 03:30 PM PST

Are there open source (or commercial) tools that can report data quality issues in a data warehouse using the Kimball star schema model? It doesn't have to be the silver bullet. I'm just looking for something to start with that can give basic details of data quality.

Load balancing after failover in a SQL Server cluster

Posted: 28 Feb 2013 03:24 PM PST

This is what I was asked today in my interview for a SQL Server DBA position:

It is an active-active cluster configuration. The primary node is in NY and the other node is in Rio. The NY server fails and the failover cluster takes over its activities, but the Rio server becomes overloaded because of the additional activity. What would you do to solve the overload problem?

Please help, as I am a newbie to the clustering concept.

Thanks. GTT

Compression vs shrinking a database

Posted: 28 Feb 2013 01:40 PM PST

Are compressing and shrinking a database in SQL Server the same thing? If I compress a table, will it decrease query and DML performance?
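They are different operations; a minimal sketch of each, with hypothetical table and database names:

-- Data compression rebuilds a table's pages in compressed form (an
-- Enterprise feature); it trades CPU for smaller pages and less I/O.
ALTER TABLE dbo.MyTable REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Shrinking releases unused space from the database files back to the
-- OS; it does not compress row data and tends to fragment indexes.
DBCC SHRINKDATABASE (MyDatabase, 10);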

Find all the queries executed recently on a database

Posted: 28 Feb 2013 01:21 PM PST

[I am a beginner level T-SQL Programmer]
[..and hopefully I am on the right stack exchange site]

I would like to get a list of all the queries I executed (in the least, those I executed today since morning). I need to make a report on the execution times of the queries.

Online search didn't yield me much useful info. The only query that I found online which seemed pretty close is

SELECT
    deqs.last_execution_time AS [Time],
    dest.TEXT AS [Query]
FROM
    sys.dm_exec_query_stats AS deqs
    CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
ORDER BY
    deqs.last_execution_time DESC

This query returned some odd results (most of them a bunch of stored procedures). Moreover, all the results show queries executed since this afternoon (I need queries from the morning).

I couldn't find anything in the previous questions (if similar question has already been asked, please point me to it).

I saw some suggestions regarding SQL Profiler, but I guess profiler would help me only if I have already started the trace (correct me if I am wrong).

Can someone suggest how I should go about getting a list of all queries that have been executed on the database since morning (including the query execution times)?

[It would be helpful (not a requirement) if I could also somehow get the username of the user who executed each query]
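For what it's worth, a hedged variation on the DMV query above adds a time filter. Note that sys.dm_exec_query_stats only reflects plans still in the plan cache, so statements whose plans have been evicted will be missing, and the DMVs do not record which user ran a statement; that really does require a trace or an Extended Events session started in advance:

-- Cached-plan statistics for plans last executed since midnight today.
SELECT
    deqs.last_execution_time,
    deqs.execution_count,
    dest.text AS query_text
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
WHERE deqs.last_execution_time >= CAST(GETDATE() AS date)
ORDER BY deqs.last_execution_time DESC;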

MySQL specific database configuration file

Posted: 28 Feb 2013 12:48 PM PST

In MySQL's configuration file I've globally disabled autocommit, like so:

[mysqld]
autocommit=0

I need to turn MySQL's autocommit back on for one specific Ruby on Rails database, though. It could be set for the user or for the database itself (it doesn't matter). I'm thinking it would look something like this:

[mysqld]
autocommit=0
execute_sql="Custom SQL to set autocommit for a database"
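No my.cnf option scopes autocommit to a single database, so a sketch of the usual workaround is to override the global default per session, e.g. from the application's connection setup:

-- Run once per connection (for example in a Rails connect hook);
-- this overrides the global autocommit=0 for that session only.
SET SESSION autocommit = 1;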

Book / Tutorials / Website - Creating Custom C Database

Posted: 28 Feb 2013 05:03 PM PST

I want to create a custom database for my project. Kindly recommend a good book, tutorial, or website that provides detailed information.

The features I want to implement are full-text search, a server/client application, and encryption of the text, as all the text is currently stored as plain text.

The application should be cross-platform compatible; I am using MinGW32 through Cygwin to make it portable.

mysql query - how to delete duplicates based on highest int?

Posted: 28 Feb 2013 10:48 AM PST

I'm running a reverse auction website and I need to delete the duplicate bids, keeping only the highest amount. lastPriceBid_rounded is the "amount"; a duplicate is defined by the userId.

+----------------------+--------------+------+-----+-------------------+-----------------------------+
| Field                | Type         | Null | Key | Default           | Extra                       |
+----------------------+--------------+------+-----+-------------------+-----------------------------+
| lastPriceBid         | varchar(125) | YES  |     | NULL              |                             |
| dateBidded           | varchar(125) | YES  |     | NULL              |                             |
| owner                | varchar(65)  | YES  |     | NULL              |                             |
| status               | int(11)      | YES  |     | 0                 |                             |
| userId               | varchar(125) | YES  |     | NULL              |                             |
| siteId               | int(11)      | YES  |     | NULL              |                             |
| lastPriceBid_rounded | int(11)      | YES  |     | NULL              |                             |
| from_nickname        | varchar(125) | YES  |     | NULL              |                             |
| id                   | int(11)      | NO   | PRI | NULL              | auto_increment              |
| time_updated         | timestamp    | NO   |     | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
+----------------------+--------------+------+-----+-------------------+-----------------------------+

So far I've tried

DELETE n1
FROM offers n1, offers n2
WHERE n1.lastPriceBid_rounded < n2.lastPriceBid_rounded
AND n1.userId = n2.userId;

but it doesn't seem to work fully; I still have some duplicates (I can see them when I use SELECT DISTINCT).
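One likely culprit: when two bids from the same userId share the same lastPriceBid_rounded, neither amount is strictly less than the other, so the self-join deletes neither row. A sketch of a tie-safe variant uses the id column as a tiebreaker:

-- Delete every row that has a "better" sibling for the same user:
-- either a strictly higher amount, or the same amount with a higher id.
DELETE n1
FROM offers n1
JOIN offers n2
  ON n1.userId = n2.userId
 AND (n1.lastPriceBid_rounded < n2.lastPriceBid_rounded
      OR (n1.lastPriceBid_rounded = n2.lastPriceBid_rounded
          AND n1.id < n2.id));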

Which text index should I create for the Xoops engine to achieve better search results?

Posted: 28 Feb 2013 06:04 PM PST

In one of our projects we use the Xoops engine to manage content. In the MySQL slow query log, most entries look like the following:

SELECT p.uid, f.forum_id, p.topic_id, p.poster_name, p.post_time, f.forum_name, p.post_id, p.subject
FROM xps33_bb_posts p, xps33_bb_posts_text pt, xps33_bb_forums f
WHERE p.post_id = pt.post_id AND p.approved = 1 AND p.forum_id = f.forum_id
AND f.forum_id IN (1,4,61,7,9,17,20,45,35,44,38,39,43,53,54,55,56,57,58,60,14,29,40,26,18,41,33,24,32,59,25)
AND ((p.subject LIKE '%rivi%' OR pt.post_text LIKE '%orvi%')
AND (p.subject LIKE '%care%' OR pt.post_text LIKE '%gor%'))
ORDER BY p.post_time DESC LIMIT 0, 5;

I can't change the queries, as that would involve changing the engine, which is not an option at the moment. But I can help the engine search faster. As I understand it, since the tables use the MyISAM engine I can create fulltext indexes, which should make searching faster; am I right?

So, in general, which indexes should I create to keep queries like the one above from running for a long time?
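A minimal sketch of the indexes in question, assuming MyISAM tables (the index names are made up). One caveat: predicates of the form LIKE '%word%' will not use a FULLTEXT index, and their leading wildcard rules out ordinary B-tree indexes too, so the queries would have to be rewritten with MATCH ... AGAINST to benefit:

-- FULLTEXT indexes on the searched columns (MyISAM supports these natively).
ALTER TABLE xps33_bb_posts ADD FULLTEXT ft_subject (subject);
ALTER TABLE xps33_bb_posts_text ADD FULLTEXT ft_post_text (post_text);

-- A predicate shape that can actually use them:
-- WHERE MATCH(p.subject) AGAINST('care') OR MATCH(pt.post_text) AGAINST('gor')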

Making a message system

Posted: 28 Feb 2013 12:34 PM PST

I have created a messaging system with MySQL; the structure is as follows:

`message_id` int(11) NOT NULL AUTO_INCREMENT,
`message_from` int(11) NOT NULL,
`message_to` int(11) NOT NULL,
`message_body` varchar(255) NOT NULL,
`message_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`message_deleted` tinyint(1) NOT NULL DEFAULT '0',
`message_read` tinyint(1) NOT NULL DEFAULT '0',
PRIMARY KEY (`message_id`),
KEY `message_from` (`message_from`),
KEY `message_to` (`message_to`)

Currently the PK is an AUTO_INCREMENT column, so I am wondering whether this kind of key is efficient for this type of system. The main reason I am asking is that I have been having some issues with my "load more messages" function: because messages get deleted, the PKs are not consecutive, e.g. 1, 2, 3, 10, 255, 300. The "more" function runs:

SELECT
  (SELECT MAX(message_id)
   FROM member_messages
   WHERE (message_to = :userId AND message_from = :friendId)
      OR (message_to = :friendId AND message_from = :userId)) AS max_id,
  MSG.*
FROM member_messages AS MSG
WHERE (MSG.message_to = :userId AND MSG.message_from = :friendId)
   OR (MSG.message_to = :friendId AND MSG.message_from = :userId)
  AND MSG.message_deleted = '0'
HAVING MSG.message_id >= max_id - 20
   AND MSG.message_id <= :upperId
ORDER BY message_id ASC

In effect this query should return the messages among the last 20 entered, but because the PKs are not consecutive, sometimes no messages are returned. E.g. if max_id is 200, then 200 - 20 = 180, and the messages with IDs in that range may have been deleted or may not belong to this member (userId) and recipient (friendId). Any insight into this problem, as to whether I should change the key type or change the query itself? Thanks for your time.

EDIT

Just a side note: I used ASC because I want the newest messages to appear at the bottom of the DIV, while the fetched results that are older appear at the top.
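For reference, a sketch of a gap-proof alternative: instead of doing arithmetic on message_id, let the database pick the previous 20 rows by position (the placeholders follow the question's naming):

-- Fetch the 20 messages immediately older than :upperId, regardless of
-- gaps left by deletions; the application can re-sort ASC for display.
SELECT *
FROM member_messages
WHERE ((message_to = :userId AND message_from = :friendId)
    OR (message_to = :friendId AND message_from = :userId))
  AND message_deleted = '0'
  AND message_id < :upperId
ORDER BY message_id DESC
LIMIT 20;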

String handling in postgres 8.4

Posted: 28 Feb 2013 05:07 PM PST

This started as a question about indexes, but then when I got my answer I realized I would have to try to convert the answer into 8.4 syntax (I can't believe format() does not exist there).

I thought I was handling the variables correctly, but for some reason I can't get the typing to function properly:

CREATE OR REPLACE FUNCTION add_column(schema_name TEXT, table_name TEXT,
  column_name TEXT, data_type TEXT) RETURNS BOOLEAN AS $BODY$
DECLARE
  _tmp text;
BEGIN
  EXECUTE 'SELECT COLUMN_NAME FROM information_schema.columns WHERE
      table_schema='''+|| schema_name+'''
      AND table_name='''+||table_name+'''
      AND column_name='''+||column_name+''''
  INTO _tmp;

  IF _tmp IS NOT NULL THEN
    RAISE NOTICE USING MESSAGE = 'Column '+||column_name+' already exists in '+||schema_name+'.'+||table_name;
    RETURN FALSE;
  END IF;

  EXECUTE format('ALTER TABLE %I.%I ADD COLUMN %I %s;', schema_name,
    table_name, column_name, data_type);

  RAISE NOTICE USING MESSAGE = 'Column '+||column_name+' added to '+||schema_name+'.'+||table_name;
  RETURN TRUE;
END;
$BODY$ LANGUAGE 'plpgsql';

Here is the error message I get:

ERROR: operator does not exist: text + unknown

Does anyone know how I can get this to work in 8.4?
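For context, the stray + in '''+|| is the immediate problem: string concatenation in PostgreSQL is || alone, and 8.4 simply lacks format(). A sketch of the 8.4-friendly dynamic SQL, using quote_ident() and quote_literal() (both available in 8.4) in place of format():

-- Concatenate with || only; quote_literal guards string values in the
-- information_schema lookup, quote_ident guards identifiers in the DDL.
EXECUTE 'SELECT column_name FROM information_schema.columns'
     || ' WHERE table_schema = ' || quote_literal(schema_name)
     || ' AND table_name = '     || quote_literal(table_name)
     || ' AND column_name = '    || quote_literal(column_name)
INTO _tmp;

EXECUTE 'ALTER TABLE ' || quote_ident(schema_name) || '.'
     || quote_ident(table_name)
     || ' ADD COLUMN ' || quote_ident(column_name) || ' ' || data_type || ';';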

postgres GROUP BY and ORDER BY problem

Posted: 28 Feb 2013 07:46 PM PST

I have two tables like this:

CREATE TABLE cmap5 (
   name     VARCHAR(2000),
   lexemes  TSQUERY
);

and

CREATE TABLE IF NOT EXISTS synonyms_all_gin_tsvcolumn (
    cid      int NOT NULL references pubchem_compounds_index(cid) ON UPDATE CASCADE ON DELETE CASCADE,
    name     VARCHAR(2000) NOT NULL,
    synonym  VARCHAR(2000) NOT NULL,
    tsv_syns TSVECTOR,
    PRIMARY KEY (cid, name, synonym)
);

My current query is:

SELECT s.cid, s.synonym, c.name, ts_rank(s.tsv_syns, c.lexemes, 16)
FROM synonyms_all_gin_tsvcolumn s, cmap5 c
WHERE c.lexemes @@ s.tsv_syns

and the output is:

cid     | synonym                                | name (query)              | rank
5474706 | 10-Methoxyharmalan                     | 10-methoxyharmalan        | 0.0901673
1416    | (+/-)12,13-EODE                        | 12,13-EODE                | 0.211562
5356421 | LEUKOTOXIN B (12,13-EODE)              | 12,13-EODE                | 0.211562
180933  | 1,4-Chrysenequinone                    | 1,4-chrysenequinone       | 0.211562
5283035 | 15-Deoxy-delta-12,14-prostaglandin J2  | 15-delta prostaglandin J2 | 0.304975
5311211 | 15-deoxy-delta 12 14-prostaglandin J2  | 15-delta prostaglandin J2 | 0.304975
5311211 | 15-deoxy-Delta(12,14)-prostaglandin J2 | 15-delta prostaglandin J2 | 0.304975
5311211 | 15-Deoxy-delta-12,14-prostaglandin J2  | 15-delta prostaglandin J2 | 0.304975
5311211 | 15-Deoxy-delta 12, 14-Prostaglandin J2 | 15-delta prostaglandin J2 | 0.304975

I would like to return the name matches of all rows in cmap5 against my main table, ranked by the ts_rank function, but for each row in cmap5 I want to:

  • select only the best X cids for each query (group by cid), or
  • ORDER BY my results as 1 + ts_rank/count(cid)

To get the best match I add SELECT DISTINCT ON (c.name), but when the rank is tied I want to get the cid with more matches to the query. I have tried adding a simple GROUP BY at the end of the query, but I get an error. How could I do this?
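A sketch of one way to express "best cid per name" in a single pass, assuming "best" means highest rank and, on ties, the cid with the most matching synonym rows:

-- Aggregate per (name, cid), then keep the top cid for each name.
SELECT DISTINCT ON (c.name)
       c.name,
       s.cid,
       MAX(ts_rank(s.tsv_syns, c.lexemes, 16)) AS rank,
       COUNT(*) AS matches
FROM synonyms_all_gin_tsvcolumn s
JOIN cmap5 c ON c.lexemes @@ s.tsv_syns
GROUP BY c.name, s.cid
ORDER BY c.name, rank DESC, matches DESC;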

SQL Server 2005 Replication

Posted: 28 Feb 2013 02:06 PM PST

I am in the process of creating replication between two remote servers; server 1 is the Distributor and Publisher, and server 2 is the Subscriber.

server 1
windows 2003 server
192.168.10.1 connected by vpn
SQL Server 2005
domain1.local

server 2
windows 2003 server
192.168.10.6 connected by vpn
SQL Server 2005
domain2.local

When I set up replication, everything looked fine until I looked at the sync status, which said:

The Agent could not be started
An exception occurred while executing a Transact-SQL statement or batch
SQLServerAgent Error: request to run job server1-username (blah blah blah)
from user sa refused because the job is already running from a request by
user sa. Changed database context to 'technical'. (Error 22022)

I have cleared jobs in the server agent as well as restarted the service.

Could this be something to do with authentication between two non-trusted domains? I can browse and even control each SQL Server via SQL Server Management Studio, but just not set up replication.

Yes, I can manage each SQL Server in SSMS, and we are using merge replication with a snapshot.

How to Correctly Estimate the Average Number of I/O Operations my RAID 10 disk can offer?

Posted: 28 Feb 2013 03:41 PM PST

The average Input/Output Operations per Second a disk can sustain is a very important factor in the tuning of MySQL / InnoDB databases, as it drives the speed with which Dirty InnoDB Buffer Pool pages are flushed back to the disk.

Is there a method or a formula I can use to estimate the average I/O Operations per Second my RAID 10 disk array can run on?
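For a rough starting point there is a common back-of-the-envelope model (an approximation, not a substitute for measuring with a benchmark tool such as fio or sysbench). With n identical spindles that each sustain d IOPS, RAID 10 gives roughly:

read IOPS  ~ n * d         (every spindle can serve reads)
write IOPS ~ (n * d) / 2   (each write lands on both disks of a mirror pair)

For example, 8 disks at 150 IOPS each gives roughly 1200 read IOPS and 600 write IOPS; real numbers depend on controller cache, stripe size, and the read/write mix.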

Mysqldump tables excluding some fields

Posted: 28 Feb 2013 11:15 AM PST

Is there a way to mysqldump a table without some fields?

Let me explain:
I have a MySQL database called tests. In tests I have 3 tables: USER, TOTO, and TATA. I just want to mysqldump some fields of the table USER, excluding fields like mail, ip_login, etc.

How can I do this?
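mysqldump has no column-exclusion option, so a sketch of a common workaround (the kept column names here are hypothetical) is to stage the wanted columns in a scratch table and dump that instead:

-- Copy only the columns to keep; mail, ip_login, etc. are left out.
CREATE TABLE USER_export AS
SELECT id, username, created_at
FROM USER;
-- Then: mysqldump tests USER_export > user_export.sql
-- and DROP TABLE USER_export afterwards.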

How to do something like UPDATE DELAYED in MySQL

Posted: 28 Feb 2013 12:29 PM PST

I have an averages table that should keep track of an average value over time. I don't want to have a row for each value, just a single row that continuously updates the average. What I've come up with is this:

set @value=4;
set @name="myAverageValue";
UPDATE `timing` SET
  `max` = greatest(`max`,@value),
  `average` = `average` + ((@value - `average`) / (`count` + 1)),
  `count` = `count` + 1
WHERE `name` = @name

Many clients may be doing this at the same time, and I don't want there to be any locking issues. I don't care what order the updates run in, since in the end it will all come out the same. I just want a query that sends the UPDATE to the database, which will process it eventually, similar to an INSERT DELAYED. Does UPDATE LOW_PRIORITY do this, or does it cause the client to wait until the update has run?
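For reference, a sketch of the LOW_PRIORITY form. Unlike INSERT DELAYED it does not return control early: the client still blocks until the UPDATE runs; LOW_PRIORITY only makes the write wait until no readers are using the table, and it only matters for engines with table-level locking such as MyISAM:

UPDATE LOW_PRIORITY `timing` SET
  `max` = GREATEST(`max`, @value),
  `average` = `average` + ((@value - `average`) / (`count` + 1)),
  `count` = `count` + 1
WHERE `name` = @name;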

SQL Server 2012 database backup successfully report but no backup file

Posted: 28 Feb 2013 06:29 PM PST


I have created a maintenance plan in SQL Server 2012, and every day the maintenance plan should back up the database. There is no database backup file in the folder where the backups are supposed to be stored, yet the SQL Server log history for this maintenance plan shows it as successful.

What is the problem?

I am using SQL Server 2012. The operating system is Windows Server 2008 R2.

Thank you for paying attention.


phpmyadmin - Login issue, takes me back to the login page after logging in

Posted: 28 Feb 2013 03:08 PM PST

I set up phpMyAdmin quite a while ago, and it worked well; I got everything configured and working. However, when I went to log in recently, it just took me back to the login page. The credentials are correct, but nevertheless I always end up at the login screen.

I can't find anything in my error logs, and in the access logs everything is status code 200, except the POST which is 302 followed by 304s. None of that really stands out.

I'm using Nginx+PHP5-FPM+MySQL if that changes anything.

What types of things might I have changed that would cause this sort of activity?

Replication master binlog rotation when network is unavailable

Posted: 28 Feb 2013 02:29 PM PST

I recently experienced an issue where the binlog file on the master rotated while network connectivity between the slave and master was unavailable.

After solving the network issue, the slave was not able to follow the master from the binlog and position it had been using.

It was solved by purging the binlogs up to the last binlog on the master, and pointing the slave at that last binlog and at the position it had been following in the previous binlog.

I am wondering: is this issue normal?

Take individual MySQL database offline

Posted: 28 Feb 2013 05:29 PM PST

MySQL does not have any SQL commands or internal mechanisms for

  • making an individual database unavailable / offline
  • moving an individual database

This being the case, how can you take an individual database offline?
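A sketch of one workaround, assuming the application connects with a known account (the hypothetical 'appuser'): revoke access so the database is effectively offline, then grant it back afterwards:

-- Make the database unreachable for the application account.
REVOKE ALL PRIVILEGES ON mydb.* FROM 'appuser'@'%';
FLUSH PRIVILEGES;

-- Later, bring it "online" again:
GRANT ALL PRIVILEGES ON mydb.* TO 'appuser'@'%';
FLUSH PRIVILEGES;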

Choosing shard key and friendly URL Ids for my MongoDB

Posted: 28 Feb 2013 07:29 PM PST

I have decided to use MongoDB as my Database for a web application. However, I have some difficulties to get started and I hope that you can help me out with a few questions.

I am developing my application in ASP.NET with MongoDB as the back-end. I intend to start with a single server plus one replica, but I want to build it right so I won't have problems sharding the database in the future if I have to.

One of my biggest problems is choosing the right shard key and friendly URLs for my website.

I have a folders collection, with files as an embedded collection inside the folders collection. Each user can create any number of folders and add files to them. Each folder belongs to one user. I wanted to know: what is the best shard key for this type of collection? Many queries will query by the user, getting a folder and its items by querying the folders collection by its unique id. I will also use the id in the URL to get the folder and its files, e.g. mywebsite.com/folder/[the_id_of_the_folder]

I will also use paging in my application, so I need to query the data (also in a sharded environment) and get, for example, the last 10 records, or page 2 with 10 records, all ordered by the last time they were inserted/updated.

  • So my first question is: what is the best shard key to use for a single machine, considering that I will have to shard in the future?
  • Does the shard key have to be the primary unique id of the document in MongoDB?
  • How can I generate more user-friendly URLs? I'd prefer a numerical value instead of a GUID (is there an option to convert it?)

Help will be much appreciated, as I am stuck and cannot continue until I solve this.

How to write a shell script to check the MySQL database backup status

Posted: 28 Feb 2013 08:29 PM PST

I have MySQL production databases running on the main server (Linux), and every night a backup is taken from this server to a local server. Now I want a shell script to check whether all the databases' backups are up to date (for example: I have twenty databases on the main server, so I want to know whether all of them have been backed up completely), and to compare each database's current size with its previous size. In essence, I want this script to assure me that my backup is complete, and if it is not, to keep a record of that occurrence.

Please help me find a solution to this problem.

Creating a New Database Across a Network with SQL Server 2008 R2

Posted: 28 Feb 2013 08:08 PM PST

I appreciate that the above should not be done ('Creating a New Database Across a Network with SQL Server 2008 R2'), for all of the reasons outlined here, but a client wants to know how to do this and cannot be told otherwise.

To set up and test this I have two laptops connected across a local area network (machine one is 'VAIOE', machine two is 'VAION'). I am attempting to create a new database, using VAIOE running SQL Server 2008 R2 Developer Edition, across the network on a shared drive on VAION called 'CShare' (VAION's C: drive). I have enabled all of the appropriate permissions, such that I am able to navigate (using VAIOE) to '\\VAION\CShare\Databases\' and create a new Access database (for example).

Now, to attempt to create a database (from VAIOE onto the C: drive of VAION) using SQL Server 2008 R2 Management Studio, I open a new script and write (using master):

CREATE DATABASE [networkedR2] ON PRIMARY
( NAME = N'networkedR2', FILENAME = N'\\VAION\CShare\aaData\aaSSDBs\networkedR2.mdf' ,
  SIZE = 3072KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
LOG ON
( NAME = N'networkedR2_log', FILENAME = N'\\VAION\CShare\aaData\aaSSDBs\networkedR2_log.ldf' ,
  SIZE = 1024KB , MAXSIZE = 2048GB , FILEGROWTH = 10%)
GO

and I get the following error

Msg 5133, Level 16, State 1, Line 1
Directory lookup for the file "\\VAION\CShare\aaData\aaSSDBs\networkedR2.mdf" failed with the operating system error 5(failed to retrieve text for this error. Reason: 15105).
Msg 1802, Level 16, State 1, Line 1
CREATE DATABASE failed. Some file names listed could not be created. Check related errors.

indicating I do not have the correct permissions. Can someone tell me what I have done wrong here, as I have hit a wall?

Edit: I am aware that for earlier versions of SQL Server the DBCC TRACEON(1807, -1) flag was required. This should not be the case here.

Is MySQL Replication Affected by a High-Latency Interconnect?

Posted: 28 Feb 2013 12:26 PM PST

We've got a vanilla master and slave MySQL setup that reside in different data centers, and another slave in the same datacenter as the master.

The bandwidth between the datacenters is pretty high (in network benchmarks we've done we can reach 15 MB/second), but latency exists: it is around 28 ms. That's not high by any means, but it is much higher than the sub-millisecond latency within a single datacenter.

Occasionally, we experience serious lag (2,000 seconds and more) on the remote slave, while the local slave stays up to date. Looking at the lagging remote slave, the SQL thread usually spends its time waiting for the IO thread to update the relay log. The master shows "waiting for net" or something of the sort at the same time.

So it means it's network, but we still have free bandwidth at the time this happens.

My question is: can the latency between the datacenters affect the performance of replication? Does the slave IO thread just stream the events until the master stops sending them, or does it poll the master somehow between events?

Sql Anywhere 11: Restoring incremental backup failure

Posted: 28 Feb 2013 07:08 PM PST

We want to create remote incremental backups after a full backup. This will allow us to restore in the event of a failure and bring up another machine with backups as close to real time as possible, using SQL Anywhere network servers.

We are doing a full backup as follows:

dbbackup -y -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\full

This makes a backup of the database and log files and can be restored as expected. For incremental backups I've tried both live and incremental transaction logs with a renaming scheme if there are multiple incremental backups:

dbbackup -y -t -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\inc

dbbackup -y -l -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\live

However, on applying the transaction logs on restore I always receive an error when applying the transaction logs to the database:

10092: Unable to find table definition for table referenced in transaction log

The transaction log restore command is:

dbeng11 "c:\dbpath\dbname.db" -a "c:\backuppath\dbname.log"  

The error doesn't specify which table it can't find, but this is a controlled test and no tables are being created or dropped. I insert a few rows, then kick off an incremental backup before attempting to restore.

Does anyone know the correct way to do incremental backup and restore on Sql Anywhere 11?

UPDATE: Thinking it might be related to the complexity of the target database, I made a new blank database and network service, then added one table with two columns and inserted a few rows. I made a full backup, then inserted and deleted a few more rows and committed the transactions, then made an incremental backup. This also failed with the same error when applying the incremental transaction-log backups after restoring the full backup.

Edit:

You can follow this link to see the same question with slightly more feedback on SA: http://sqlanywhere-forum.sybase.com/questions/4760/restoring-incrementallive-backup-failure
