Wednesday, March 6, 2013

[how to] Minimising downtime implementing merge replication on live system

Minimising downtime implementing merge replication on live system

Posted: 06 Mar 2013 09:01 PM PST

Background

We have three servers running SQL Server 2008 which all contain a Product database. These are currently kept in sync once per day by sending a copy of the MDF to the two subscribing servers, detaching the existing database and replacing it with the latest copy.

The two Subscribers also contain a Customer database which is not present on the Publisher. The Customer database contains customer purchase information, so it needs to reference ProductIDs in the Product table.

We have implemented Merge Replication on a similar set up in our testing environment and are preparing to implement these changes on our live system.

The end result should look something like: http://puu.sh/2dl2z (Apologies for the linked image, I can't paste images due to a lack of reputation).

The Problem

In order to implement these changes, we will need to generate a snapshot and synchronise the data between the Publisher and the Subscribers. While this is occurring, any client application which relies on the data in the Product database will fail, and any Stored Procedures on the Customer database which make cross database joins will also fail.

We'd like to keep this downtime to a minimum or eliminate it completely.

We've had some crazy ideas thrown around which may or may not work (such as using synonyms to point a subscriber at another server's Product table while snapshot generation and synchronisation take place), but they don't seem very elegant, if they'll work at all. I'm hoping someone else has had a similar issue and might have some insight :)
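To make the synonym idea a little more concrete, this is roughly what we had in mind (the linked server and database names below are placeholders, not our real configuration):

-- During snapshot generation, point Product lookups at another server's copy
-- via a linked server (names are hypothetical)
CREATE SYNONYM dbo.Product_Current FOR OTHERSERVER.ProductDB.dbo.Product;

-- Once synchronisation is finished, repoint the synonym at the local table
DROP SYNONYM dbo.Product_Current;
CREATE SYNONYM dbo.Product_Current FOR ProductDB.dbo.Product;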

PostgreSQL user can not connect to server after changing password

Posted: 06 Mar 2013 08:42 PM PST

I've run into this with 4 roles I created:
After changing the password for a user in pgAdmin III using the GUI (1), that user cannot log in any more.
pgAdmin III shows the error message:

An error has occurred:    Error connecting to the server: FATAL:  password authentication failed for user "sam"  FATAL:  password authentication failed for user "sam"  

My system: Postgresql 9.2 on Ubuntu 12.04

Is there any way to fix this? Thanks.

(1): log in with the postgres account, right-click the user under Login Roles, go to the 'Definition' tab and enter the password.
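For reference, the equivalent change can be made in plain SQL while connected as the postgres superuser, which may help rule out a GUI quirk (the role name and password below are placeholders):

-- Reset the password for the role that can no longer log in
ALTER ROLE sam WITH PASSWORD 'new_password';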

mySQL Quickly find rows linked to another row

Posted: 06 Mar 2013 08:30 PM PST

For example, if I have a posts table with an id column, and a likes table with id, post_id and user_id columns, how can I set up the database so the likes table is organized by post_id and queries are faster? I can't make post_id the primary key because there is more than one like for each post.
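To make it concrete, this is the kind of thing I imagine might be needed (a sketch using my column names):

-- A secondary index makes lookups of all likes for a given post fast,
-- without requiring post_id to be the primary key
ALTER TABLE likes ADD INDEX idx_likes_post (post_id);

-- Or, if each user can like a post only once, a composite primary key also works:
-- ALTER TABLE likes ADD PRIMARY KEY (post_id, user_id);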

Log Shipping - RESTORE WITH STANDBY - on SQL Server 2012 keeps breaking

Posted: 06 Mar 2013 08:38 PM PST

We are using log shipping and RESTORE WITH STANDBY on SQL Server 2012 in order to restore the database in read-only mode for reporting purposes. However, the log shipping setup keeps breaking after completing a restore of one or two log backups. Log-shipping only breaks when it is running as RESTORE WITH STANDBY; RESTORE WITH NORECOVERY does not cause any problems.

My only intuition is that the primary database is not very dynamic, so perhaps the lack of transactions causes issues with the RESTORE process?

Any ideas, known fixes?

I had it working for a few days by running a regular job that did heavy updates on two tables. When that job stopped running, the log shipping setup quickly failed, unable to process the .trn file. I reset log shipping and tried to see if it would keep running with just a small update (changing the value of one column of one record in a table); however, it still failed.
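For reference, the restore step is the standby form of the log restore, roughly like the sketch below (the database name, backup path and undo file are placeholders):

-- Standby restore keeps the database readable between log restores;
-- the undo file holds uncommitted transactions rolled back for read access
RESTORE LOG ReportingDB
FROM DISK = N'\\share\logs\ReportingDB_20130306.trn'
WITH STANDBY = N'D:\SQLData\ReportingDB_undo.dat';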

Thanks for all your responses.

Automatically create linked row in a different table

Posted: 06 Mar 2013 07:41 PM PST

What I mean is that, for example, if I have a table user with id, name and prefId, and a second table preferences with id etc., is it possible that when I add a row to user, a row is automatically created in preferences and that row's id stored in prefId? Or should I create the preferences row first and set prefId manually? And is this good practice?
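One way I imagine this could work is a trigger, sketched below in MySQL syntax with the table and column names from my example (it assumes preferences.id is AUTO_INCREMENT); whether this is good practice is part of my question.

DELIMITER $$
CREATE TRIGGER user_create_prefs
BEFORE INSERT ON `user`
FOR EACH ROW
BEGIN
  -- Create an empty preferences row and remember its generated id on the new user row
  INSERT INTO preferences () VALUES ();
  SET NEW.prefId = LAST_INSERT_ID();
END$$
DELIMITER ;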

Why do MyISAM MySQL table indexes go out of date?

Posted: 06 Mar 2013 08:08 PM PST

I have a few MyISAM tables where only the auto-increment primary index is up to date with the number of rows, but the other indexes are not. Any idea why this happens?
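If it is just the index statistics (cardinality) that look stale, they can normally be refreshed per table; a sketch with a placeholder table name:

-- Rebuilds the key distribution statistics for a MyISAM table
ANALYZE TABLE my_table;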

Ways to make this stored procedure run faster and more optimised in SQL Server for ASP.NET?

Posted: 06 Mar 2013 06:18 PM PST

I have a very complex stored procedure which runs very slowly and takes a long time to respond. I am using it to build a form and am working with the ASP.NET Entity Framework. I am wondering whether there are better ways of doing this, or of optimising it to make it run faster, e.g. views, or doing single selects at the application level with multiple queries, etc.

here is the stored procedure ..

ALTER PROCEDURE [dbo].[_t_ORDER_SHEET_GoodsReceiving]    @COM_KEY int = -1,  @BS_KEY int = -1,  @StartDate VARCHAR(20) = '',  @EndDate VARCHAR(20) = '',  @status int = 0,  @cpokey int = -1,  @po varchar(50) = '',  @invno varchar(50) = '',  @dlvemail varchar(50) = ''  AS  BEGIN  IF @po <> ''  BEGIN  SSELECT     s.[OS_KEY] as OS_KEY,[OS_CPO_KEY],[OS_PO_NUMBER],[OS_ACCOUNT],[OS_DATE_APPROVE], [OS_DATE_SUBMIT], OS_COM_KEY, OS_BS_KEY, OS_USER_ORDER, OS_USERID_ORDER, OS_USER_APPROVE, OS_USERID_APPROVE,                 IsNull([OS_TOTAL_ITEMS],0) AS OS_TOTAL_ITEMS,[OS_RECEIVED_GOODS],[OS_SUPPLY_STORE], OS_DLV_EMAIL, OS_DLV_STREET, OS_DLV_SUBURB, OS_DLV_POSTCODE, OS_DLV_STATE, OS_DLV_CONTACT, OS_DLV_PHONE,                 [OS_RECEIVED_DATE],                  IsNull(RXQty,0) as RXQty, SentQty,RxDate, InvAmt, RXAmt, Extra,      d.[DSP_KEY],[DSP_PO_KEY],[DSP_BS_KEY],[DSP_SHIP],[DSP_DATE_SENT],[DSP_TIME_SENT],[DSP_SR_CANCELLED],[DSP_DATE_RETURNED],[DSP_DAYS_TO_DESP],[DSP_CARTONS],[DSP_TY_PACK_UNIT],[DSP_TY_RECEIVER],[DSP_BDLV_KEY],[DSP_CON_NOTE],[DSP_TY_CARRIER],[DSP_TOT_ITEMS]      ,[DSP_CCMS],[DSP_BACKORD],[DSP_INVOICE],[DSP_TY_JOB],[DSP_COMMENT],[DSP_TY_PICKER],[DSP_PACKER],DocumentDate as InvoiceDate, DlvRxPerson,                   (CASE WHEN ISNULL(r.RXQty, 0) = 0 THEN 'Not Received' WHEN r.RXQty = SentQty THEN 'Fully Received' ELSE 'Partially Received' END) AS RStatus,          (CASE          WHEN DSP_SHIP =0 THEN 'Std'            ELSE 'Backorder' END) as BOrders, t_BAY_WAREHOUSE.BWH_LOCATION          FROM         t_BAY_WAREHOUSE INNER JOIN                       t_BUSINESS ON dbo.t_BAY_WAREHOUSE.BWH_KEY = dbo.t_BUSINESS.BS_DEF_WAREHOUSE RIGHT OUTER JOIN                         t_ORDER_SHEET s inner join t_ORDER_LINE l on s.OS_KEY = l.OL_OS_KEY LEFT JOIN t_DESPATCH d ON s.OS_CPO_KEY = d.DSP_PO_KEY              left join (select  RXDocumentNumber, SUM(IsNull(RX_QTY,0)) as RXQty, MAX(RX_DATE) as RxDate, SUM(CONVERT(float, s.UnitPrice) * CONVERT(float, x.RX_QTY)) AS RXAmt from t_GOODSRX AS x INNER JOIN                                                     dbo.t_INVOICE_LINEITEMS AS s ON x.PKLineItems = s.PKLineItems GROUP BY RXDocumentNumber) r on r.RXDocumentNumber = d.DSP_INVOICE              inner join (select DocumentNumber, sum(CONVERT(int,Qty)) as SentQty , SUM(CONVERT(float, i.UnitPrice)* CONVERT(float, i.Qty)) AS InvAmt from t_INVOICE_LINEITEMS AS i where (UnitUsed = 'PCS' OR UnitUsed = 'Pair' OR UnitUsed='EA' or UnitUsed='PR' or UnitUsed='BX')  and ItemCode not like 'EMB%' and ItemCode not like 'SRT%' and ItemCode not like 'IND%' and ItemCode not like 'HAN%' and Qty  not like '%[a-z,A-Z]%' and Qty not like '%.%' GROUP BY DocumentNumber) i on i.DocumentNumber = d.DSP_INVOICE              left join (select DocumentNumber, SUM(CONVERT(float,UnitPrice)*CONVERT(float,Qty))as Extra from t_INVOICE_LINEITEMS  where   (ItemCode  like 'EMB%' or ItemCode like 'SRT%' or ItemCode like 'SEW%'or ItemCode like 'IND%' or ItemCode like 'HAN%') and UnitPrice not like '%[a-z,A-Z]%' and Qty  not like '%[a-z,A-Z]%' and Qty not like '%.%' GROUP BY DocumentNumber) e on e.DocumentNumber = d.DSP_INVOICE              inner join t_INVOICE_HEADER h on h.DocumentNumber = d.DSP_INVOICE              inner join (select OS_KEY, ISNULL(DlvREName, OS_DLV_EMAIL) as DlvRxPerson from t_ORDER_SHEET ss left join (select UserName, FirstName + ' ' + LastName as DlvREName from  t_USER_PROFILES p inner join t_USERS u on p.UserId = u.UserId) i on i.UserName = ss.OS_DLV_EMAIL) dd on dd.OS_KEY = s.OS_KEY  ON                         
dbo.t_BUSINESS.BS_KEY = s.OS_BS_KEY    WHERE    OS_COM_KEY = @COM_KEY             AND (OS_PO_NUMBER = @po OR @po = '')    GROUP BY s.[OS_KEY],[OS_CPO_KEY],[OS_PO_NUMBER],[OS_ACCOUNT],[OS_RECEIVED_GOODS],[OS_SUPPLY_STORE],  OS_TOTAL_ITEMS, OS_USER_ORDER, OS_USERID_ORDER, OS_USER_APPROVE, OS_USERID_APPROVE, OS_DLV_EMAIL, OS_DLV_STREET, OS_DLV_SUBURB, OS_DLV_POSTCODE, OS_DLV_STATE, OS_DLV_CONTACT, OS_DLV_PHONE, d.[DSP_KEY],[DSP_PO_KEY],[DSP_BS_KEY],[DSP_SHIP],[DSP_DATE_SENT],[DSP_TIME_SENT],[DSP_SR_CANCELLED],[DSP_DATE_RETURNED],[DSP_DAYS_TO_DESP],[DSP_CARTONS],[DSP_TY_PACK_UNIT],[DSP_TY_RECEIVER],[DSP_BDLV_KEY],[DSP_CON_NOTE],[DSP_TY_CARRIER],[DSP_TOT_ITEMS]      ,[DSP_CCMS],[DSP_BACKORD],[DSP_INVOICE],[DSP_TY_JOB],[DSP_COMMENT],[DSP_TY_PICKER],[DSP_PACKER],[OS_RECEIVED_DATE],SentQty, RxDate, InvAmt, RXAmt, Extra, [OS_DATE_APPROVE], [OS_DATE_SUBMIT], OS_COM_KEY, OS_BS_KEY, RXQty, DocumentDate, DlvRxPerson,BWH_LOCATION    ORDER BY OS_CPO_KEY DESC  END    ELSE  IF @cpokey <> -1  BEGIN  ------------------------------  -----------------------------  END    ELSE  if @invno <> ''  BEGIN  --------------------------------------  -----------------------  END  ELSE  IF @status <= 0  BEGIN  -------------------------------------  ------------------------------------  END    ELSE IF @status = 1  BEGIN  ----------------------------  -----------------------------  END    ELSE IF @status = 2  BEGIN  -----------------------------------  --------------------------  END    ELSE IF @status = 3  BEGIN  --------------------------------------------  -------------------------------------------  END    END  

Designing ERD1 and ERD2 with crows foot

Posted: 06 Mar 2013 04:01 PM PST

What would be the perfect ERD1 & ERD2 for the following case study? And what are the attributes of the entities?

Five Aside Football Independent League (FAFIL)

The Five Aside Football Independent League (FAFIL) is an amateur organisation which was set-up to organise and record the results of a number of five aside football clubs who during the football season play relatively friendly matches with each other. FAFIL has recently decided to record the details of each team and which other team they play against by using a relational database system and is calling on you as renowned experts in this field to solve their problems.

The organisation consists of a number of clubs which regularly take part in the prestigious challenge cup tournament, which consists of each club putting forward a team to compete for the honour of winning the challenge cup by scoring the most points when competing with the other clubs. Each club may have more than one team, but can only enter one team into this prestigious tournament.

Throughout the season FAFIL has to arrange and record the order in which each team will play against each other and the venue and date that the match will take place on, as well as arranging for a designated FAFIL referee to oversee the match. The venue will be held at one of the team's home ground and consequently be an away match for the other team. Each team must play all other teams twice, once at home and once away.

Therefore the first task which the database system must provide is to automatically create a fixtures list which will take as input the existing teams and produce a listing having each team playing all the others both home and away. The organisers can then arrange the matches required and record the referee and venue accordingly. Difficulties have arisen in the past when during the season it was discovered that either the referee or the venues were double booked on the same day.

Once the matches are arranged the results must be recorded as and when they come in. Points are awarded to the teams for each match they play according to the end result, two points for a win and one point for a draw and zero points if they lost or failed to show.

Once all the matches have been played it should be an easy job to calculate which team has the most points. Teams with equal points are ordered by the difference between goals scored and goals conceded. It is hoped that the database system will be able to record the points and allocate them to the correct team as the results are put into the system.

Due to some very aggressive players and the occasional hospital bill during the last season it has been decided to record and allocate penalty-points against players if they misbehave.

Accordingly the FAFIL referees were sent instructions to allocate penalty-points, such as: 1 - bad language, 5 - rude gestures, 10 - stabbings, and these will be allocated against each player as the season progresses. It was further decided that any player who exceeds 10 penalty-points, e.g. stabs another player and then swears at them, must not be allowed to take part in any further matches.

It has also been decided to reward the best player with a highly prized book voucher. Accordingly each time a match is played, one player is chosen as the 'player of the match'. At the end of the season the player who was 'player of the match' the most is awarded the prize, if there are a number of players who have been nominated the same number of times, they must share the book voucher as FAFIL is not made of money.

In an aid to assist your design plans, FAFIL has supplied a typical list of questions which it believes should be easy for the new database to answer and hence save its organisers a lot of time and effort.

Typical Set of Questions.

  1. List the different types of penalty-points which can be awarded, giving the highest scoring ones first.

  2. Provide an alphabetical list of all the teams and their players.

  3. List all the players' names and how many times they have been awarded 'player of the match', showing who is in the lead for the prized book voucher.

(Note this has to be calculated live by counting how often each player has been chosen as player of the match)

  4. Produce a list of all teams playing all other teams, indicating who is at home and who is away.
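Purely as an illustration of the level of detail expected in an answer, a fragment of one possible physical model for the fixtures part might look like the sketch below; the entities, attributes and names are assumptions to be refined in the actual ERD.

-- Illustrative sketch only: one possible shape for teams and fixtures
CREATE TABLE team (
    team_id   INT PRIMARY KEY,
    club_id   INT NOT NULL,
    team_name VARCHAR(50) NOT NULL
);

CREATE TABLE fixture (
    fixture_id   INT PRIMARY KEY,
    home_team_id INT NOT NULL REFERENCES team(team_id),
    away_team_id INT NOT NULL REFERENCES team(team_id),
    referee_id   INT,            -- designated FAFIL referee
    venue_id     INT,            -- the home team's ground
    match_date   DATE,
    home_goals   INT,
    away_goals   INT
);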

Efficient way to move rows across the tables?

Posted: 06 Mar 2013 03:51 PM PST

This is a somewhat long question, as I would like to explain all the details of the problem.

System Description

We have a queue of incoming messages from external system(s). Messages are immediately stored in an INBOX table. A few worker threads fetch chunks of work from that table (first marking some messages with an UPDATE, then SELECTing the marked messages). The workers do not process the messages themselves; they dispatch them to different internal components (called 'processors'), depending on the message command.

Each message contains several text fields (the longest is around 200 characters), a few IDs, some timestamps, etc.; 10-15 columns in total.

Each internal component (i.e. processor) that processes messages works differently. Some process the message immediately; others trigger some long operation, even communicating via HTTP with other parts of the system. In other words, we cannot just process a message from the INBOX and then remove it. We must work with that message for a while (an async task).

Still, there are not too many processors in the system, up to 10.

Messages are all internal, i.e. it is not important for users to browse or paginate them. A user may request a list of relevant processed messages, but that is not a mission-critical feature, so it does not have to be fast. Invalid messages may occasionally be deleted.

It is important to emphasize that the expected traffic might be quite high, and we don't want bottlenecks caused by bad database design. The database is MySQL.

Decision

One of the decisions is not to have one big table for all messages with a flags column indicating the various message states. The idea is to have a table per processor and to move messages around. For example, received messages will be stored in INBOX, then moved by the dispatcher to e.g. a PROCESSOR_1 table, and finally moved to an ARCHIVE table. There should not be more than 2 such movements.

While a message is in the processing state, we do allow flags for indicating processing-specific states, if any. In other words, a PROCESSOR_X table may track the state of its messages, since the number of messages currently being processed will be significantly smaller.

The reason for this is not to use one BIG table for everything.

Question

Since we are moving messages around, I wonder how expensive this is with high volumes. Which of the following scenarios is better:

(A) have separate, similar tables, as explained, and move complete message rows: read the complete row from INBOX, write it to the PROCESSOR table (with some additional columns), then delete it from INBOX;

or

(B) to avoid physically moving the content, have one big MESSAGES table that just stores the content (and still not the state). We would still have the other tables, as explained above, but they would contain just the message IDs and the additional columns. Now, when a message is about to move, we physically move much less data: just the IDs. The rest of the message remains in the MESSAGES table, unmodified, the whole time.

In other words, is there a penalty in a SQL join between one small and one huge table?
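For clarity, the movement in scenario (A) would be roughly the following sketch; the worker_tag marker and the column names are hypothetical:

-- Scenario (A): move a batch of dispatched messages from INBOX to a processor table
START TRANSACTION;

INSERT INTO processor_1 (id, command, payload, received_at)
SELECT id, command, payload, received_at
FROM inbox
WHERE worker_tag = 42;   -- rows previously marked by the dispatcher

DELETE FROM inbox
WHERE worker_tag = 42;

COMMIT;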

Thank you for your patience, hope I was clear enough.

Open a Locked Text File with SQL Server

Posted: 06 Mar 2013 08:20 PM PST

When I try to

BULK INSERT table FROM 'c:\file.txt'  

I get

Msg 4861, Level 16, State 1, Line 1
Cannot bulk load because the file "c:\file.txt" could not be opened. Operating system error code 32 (The process cannot access the file because it is being used by another process.).

The error occurs because the file is a log file that is kept open by another process.

However with C# I can open the file with System.IO.FileShare.ReadWrite as:

using (System.IO.FileStream fileStream = new System.IO.FileStream("c:\\file.txt", System.IO.FileMode.Open, System.IO.FileAccess.Read, System.IO.FileShare.ReadWrite))
{
    using (System.IO.StreamReader streamReader = new System.IO.StreamReader(fileStream))
    {
        file = streamReader.ReadToEnd();
    }
}

Is there a way to have that ReadWrite share functionality within SQL Server (bulk insert or any other)?

Thanks

Relational database for address model

Posted: 06 Mar 2013 07:40 PM PST

I want to design an "Address" model for all types of entities, like user, business, etc.

I currently have two main models: one is User and the other is Business. Each one has various address types, as below.

User
  1. Contact Address
  2. Billing Address

Business
  1. Contact Address
  2. something

So I created an Address model with an addresstype column, like this:

Address
  id
  addresstype
  user
  addressline1
  addressline2

Relationship

User --OneToMany--> Business
User --OneToMany--> Address (user column)

With the above relations, the addresstype and user columns tie an Address to a User, but a Business address has no relationship with Address at all.

So how can I design this in an efficient way?
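One common way to model this (a sketch, not necessarily the only good design) is to keep Address free of owner-specific columns and attach addresses to their owners through typed join tables:

-- Sketch: both User and Business can own any number of typed addresses
CREATE TABLE address (
    id           INT PRIMARY KEY,
    addressline1 VARCHAR(100),
    addressline2 VARCHAR(100)
);

CREATE TABLE user_address (
    user_id     INT NOT NULL,
    address_id  INT NOT NULL REFERENCES address(id),
    addresstype VARCHAR(20) NOT NULL,   -- 'CONTACT', 'BILLING', ...
    PRIMARY KEY (user_id, address_id, addresstype)
);

CREATE TABLE business_address (
    business_id INT NOT NULL,
    address_id  INT NOT NULL REFERENCES address(id),
    addresstype VARCHAR(20) NOT NULL,
    PRIMARY KEY (business_id, address_id, addresstype)
);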

sql 2005 upgrade to sql 2005 service pack 4 (SP4) via command line

Posted: 06 Mar 2013 03:01 PM PST

I have been wracking my brain over upgrading SQL Server 2005 to SQL Server 2005 SP4. I have a script for installing new instances of SQL Server for all versions, and they all work wonderfully. The problem is that SQL Server 2012 can't upgrade SQL 2005 SP3 and below, so I have to install SQL 2005 SP4 first AND THEN upgrade through SQL 2012, which is highly annoying.

I need to do this silently via the command line, but I cannot get it to work at all. Here are some samples I tried:

"SQLSP4.exe /qb /ACTION=Upgrade /INSTANCENAME="FOOBAR""

"SQLSP4.exe /quiet /instancename="FOOBAR""

among a couple other variations. I could not find anything online at all. In fact, the one helpful thread ended in saying that Microsoft didn't have any support for this, but I'm not sure about that.

What can be done here? Does anyone have any experience in updating service packs via command line?

Useful topics on database theories [closed]

Posted: 06 Mar 2013 12:24 PM PST

Useful topics to write about on different Database Theories?

Hi, I want to know what useful topics I can research on database theories.

For example, I have already written up e-learning, and I based it on the four topics below:

  • How has e-learning affected education
    • What can we gain from e-learning
    • Benefits of e-learning
    • Limitations of e-learning

Now, for database theories, I am going to write it up in a synthesis matrix, but what topics should I talk about?

Here is a synthesis matrix example if you don't know what that is: http://writingcenter.fiu.edu/resources/synthesis-matrix-2.pdf Thanks

Using MySQL InnoDB as an Archive

Posted: 06 Mar 2013 12:37 PM PST

My site has a main MySQL InnoDB table that it does most of its work on. New rows get inserted at a rate of 1 million per week, and rows older than a week get moved over to an archive table on a daily basis. These archived rows are processed once a week for things like finding trends.

This archive table consequently grows by 1 million rows every week, and querying it can get really slow. Is MySQL suited for archiving data, or is my strategy fundamentally flawed?
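One pattern that is often suggested for this kind of archive is range partitioning by date, so that the weekly processing and any purging only touch the relevant partitions; a minimal sketch assuming a created_at date column (the column and partition names are hypothetical):

-- Sketch: archive table partitioned by month of created_at
CREATE TABLE archive (
    id         BIGINT NOT NULL,
    created_at DATE   NOT NULL,
    payload    TEXT,
    PRIMARY KEY (id, created_at)   -- partitioning column must be part of every unique key
)
PARTITION BY RANGE (TO_DAYS(created_at)) (
    PARTITION p2013_02 VALUES LESS THAN (TO_DAYS('2013-03-01')),
    PARTITION p2013_03 VALUES LESS THAN (TO_DAYS('2013-04-01')),
    PARTITION pmax     VALUES LESS THAN MAXVALUE
);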

Please advise, thank you!

Sequential joining of tables in order

Posted: 06 Mar 2013 12:33 PM PST

Table albums has (among other fields) a field id.

Table photos has a column id, a field album which is a foreign key referring to an album's id, and some other fields (which are irrelevant to the question I ask).

So an album contains several photos (photos in an album are ordered by photo id).

Now I introduce one more "level of indirection": Bunches. There may be several albums in one bunch.

For this I add the fields bunch and seq INT UNSIGNED to the table albums. bunch is the ID of the bunch the album belongs to, and seq is the position of the album within the bunch.

Now the problem:

Suppose we are given a bunch ID.

I want to write a SELECT query which selects all photos from albums belonging to the given bunch, ordered first by the seq of the albums and then by the IDs of the photos within each album.
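In other words, I am after something along the lines of the following sketch, using the columns described above (123 stands for the given bunch ID):

SELECT p.*
FROM albums a
JOIN photos p ON p.album = a.id
WHERE a.bunch = 123        -- the given bunch ID
ORDER BY a.seq, p.id;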

TSQL DateTime DataType - fractional seconds vs milliseconds

Posted: 06 Mar 2013 12:15 PM PST

We want to get all records from a single day, so everything between 2013-03-05 00:00:00.000 and 2013-03-05 23:59:59.999. We're getting overlap at the back end with things that happened at 2013-03-06 00:00:00.000.

So, in testing I run in SSMS:

select cast('2013-03-05 23:59:59.999' as datetime)
select cast('2013-03-05 23:59:59.996' as datetime)
select cast('2013-03-05 23:59:59.994' as datetime)

And get:

2013-03-06 00:00:00.000
2013-03-05 23:59:59.997
2013-03-05 23:59:59.993

The millisecond part comes back off by a little. In the case of the first line it really matters, because I don't want 2013-03-06 00:00:00.000 - I want a millisecond before midnight.

The T-SQL documentation states that datetime uses fractional seconds, as opposed to the .NET DateTime, which uses milliseconds. This would seem to be a syntax difference only, but if you really care about millisecond precision in T-SQL, are you forced to use datetime2 across the board? Casting from datetime2 to datetime still mucks things up by a couple of milliseconds.

In c# DateTime.Parse("03/5/2013 23:59:59.999").Millisecond still returns 999.

Am I interpreting this all right? If I care about millisecond precision, am I stuck with datetime2 only?

Normally that would be an easy fix but in this case we're using a Microsoft system function that receives datetime as parameter.
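For context, when we control the predicate ourselves the usual workaround is a half-open range instead of an end-of-day literal, roughly like the sketch below (table and column names are placeholders); the difficulty here is that the system function takes a single datetime value:

-- Everything on 2013-03-05, regardless of the datatype's fractional-second precision
SELECT *
FROM dbo.Orders
WHERE OrderDate >= '20130305'
  AND OrderDate <  '20130306';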

SQL Server Database Hot Backups: Which Technique(s) to Use for Multi-Level Replication?

Posted: 06 Mar 2013 01:57 PM PST

I'm reading the many other posts here about mirroring vs. replication, but I'd still appreciate it if someone with some experience would weigh in.

My small development team needs to have local automatic synchronization of databases which are hosted remotely.

The network situation is a little complicated:

  • The actual databases (dev, test, prod) are being hosted remotely on VMs in a DMZ. Our team does not have direct access to the DMZ.
  • Instead we have VPN access to the host's internal network, and the firewall between the internal network and the DMZ currently only allows RDP to pass through.
  • Locally we have a development database server, and individual developers have local SQL Servers on their development laptops.

We have requested and received an always-on VM in the remote host internal network that we plan to use to monitor the health of our dev/test/prod servers as well as to maintain hot backups of our databases.

So we are looking at two levels of data synchronization:

  1. Constantly (but asynchronously) updated copies of dev/test/prod databases on our monitoring VM.
  2. Occasionally-updated copies of the databases on our local development database server and/or developer laptops.

We aren't looking to implement fail-over at this point, because even the "prod" environment is not critical (it's more of a pre-prod environment).

We also need this to be as simple as possible, and SQL Server 2012 is not an option. We are stuck with 2008.

Here's what I'm thinking:

  1. Implement database mirroring between our dev/test/prod database servers and our always-on monitoring VM in the host's network.
  2. Implement log shipping to occasionally (when on the VPN) synchronize our development database server and laptops with the DB copies on the always-on monitoring VM.

My questions are:

  1. Is this the best and/or simplest way to accomplish what we are trying to do? If not, what is better and why?
  2. Is it even possible to do log shipping off a mirrored database?
  3. What ports are required to be open in order to implement database mirroring (or log shipping if that's better) between dev/test/prod in the DMZ and our monitor VM in the internal network?
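Regarding question 3, the port I expect matters for mirroring is the endpoint's listener port, which would be created with something like the sketch below (5022 is just the conventional default, not necessarily what we will use):

-- Sketch: a database mirroring endpoint listening on TCP 5022
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);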

Thank you for your time.

Manually moving SQL instance to another node causes immediate database mirror failover

Posted: 06 Mar 2013 12:33 PM PST

I have a two-node SQL cluster (2008 R2).

Some of the databases within that SQL instance are mirrored to another server on a remote site, using the High safety with automatic failover. The mirroring connection timeout value for those databases is set to 90 seconds.
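For reference, the 90-second value mentioned above is the per-database PARTNER TIMEOUT option, set with something like this sketch (the database name is a placeholder):

-- Mirroring partner timeout of 90 seconds before the partner is considered lost
ALTER DATABASE MirroredDb SET PARTNER TIMEOUT 90;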

When I move SQL from one node in the cluster to another node, using the Failover Cluster Manager's "Move this service or application to another node" option, the mirrored databases instantly fail over to the mirror.

This is undesirable behaviour. My reason for setting the mirroring connection timeout value is that I only want to fail over to the database mirror if the cluster fails completely and there are no functioning nodes.

Is there any way to achieve this? It feels as though it should be possible, otherwise the concept of mixing clustering and automatic failover database mirroring would be unworkable as every node failover within the cluster would trigger a mirror failover.

Thanks.

Write differences between varchar and nvarchar

Posted: 06 Mar 2013 01:38 PM PST

Currently in our SQL Server 2012 database we're using varchar, and we'd like to change that to nvarchar. I've generated a script to do that.

My question is: are there any differences in how SQL Server writes to varchar columns vs. nvarchar columns? We have a number of back-end procedures that I'm concerned about.

Edit:
Not sure if this helps, but the columns don't have indexes, f/k, or constraints on them.
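For context, the generated script is essentially a series of statements like the sketch below (the table, column and length are hypothetical), so my concern is whether writes behave any differently once the columns are nvarchar:

-- Sketch: changing a varchar column to nvarchar (storage per character roughly doubles)
ALTER TABLE dbo.Customers
    ALTER COLUMN Notes NVARCHAR(200) NULL;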

MS Access ER diagram

Posted: 06 Mar 2013 04:04 PM PST

I have a database schema for Microsoft Access. I'm interested in how I can convert the schema into an ER diagram. Is there a tool that can do this based on the schema's relations?

How to properly kill MySQL?

Posted: 06 Mar 2013 07:48 PM PST

I have CentOS 64bit with CPanel installed and I use

service mysql stop  

However, it keeps printing ..... for minutes and never stops. It used to be instant. Any idea why it does that and how to fix it? Right now I have to do killall -9 mysql, but is there a better way?

The server is also very very active.

mysqld_safe version different than mysqld?

Posted: 06 Mar 2013 12:05 PM PST

Is it a problem that mysqld_safe reports a different version than the mysqld server? I noticed this in my mysqld log during startup:

120420 19:30:06 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
120420 19:30:06  InnoDB: Initializing buffer pool, size = 8.0M
120420 19:30:06  InnoDB: Completed initialization of buffer pool
120420 19:30:06  InnoDB: Started; log sequence number 0 44233
120420 19:30:06 [Note] Event Scheduler: Loaded 0 events
120420 19:30:06 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.1.61'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution

Yet, when I show variables I get this

SHOW VARIABLES LIKE "%version%";

+-------------------------+-------------------------------------------+
| Variable_name           | Value                                     |
+-------------------------+-------------------------------------------+
| innodb_version          | 5.5.30                                    |
| protocol_version        | 10                                        |
| slave_type_conversions  |                                           |
| version                 | 5.5.30-cll                                |
| version_comment         | MySQL Community Server (GPL) by Atomicorp |
| version_compile_machine | x86_64                                    |
| version_compile_os      | Linux                                     |
+-------------------------+-------------------------------------------+
7 rows in set (0.00 sec)

Based on golimar's questions I ran a

ps aux

and I see this

mysql 633 31.4 33.5 11942788 5452172 ? SNl 08:18 101:31 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/host.reviewcritical.com.err --open-files-limit=4096 --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock --port=3306

root 32674 0.0 0.0 108292 1576 ? SN 08:18 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --socket=/var/lib/mysql/mysql.sock --pid-file=/var/run/mysqld/mysqld.pid --basedir=/usr --user=mysql

Tuning Postgres config for new Database Server

Posted: 06 Mar 2013 04:20 PM PST

I have finally gotten a purpose-built database machine for a project I'm working on.

The server is a 4-core Xeon with 64GB of RAM and a RAID 10 array of 10k RPM drives.

I have just moved the database over to the new machine; the performance with the same code is worse than when it was running on a VM.

I'm looking for suggestions on what settings to adjust to what values.

Currently, I've upped shared_buffers to 60GB and adjusted the kernel settings needed to make that change.

temp_buffers is 32MB

work_mem is 5MB

I'm working on doing some stuff I'd like to get loaded in quickly so I have synchronous_commit set to off.

Can anyone point me in the right direction as to how to improve the speed? I had it running quite a bit faster on a slower machine with much less memory and drives shared with the machines that were making calls to it, so I'm not really sure what the issues are.

Update 2013-03-06: Performance falls off a cliff shortly after a run starts. I'm not sure what to do. Current settings:

shared_buffers = 12GB
temp_buffers = 32MB
work_mem = 5MB
maintenance_work_mem = 256MB
fsync = off
synchronous_commit = off
wal_buffers = 32MB
checkpoint_segments = 256
checkpoint_completion_target = .9
effective_cache_size = 50GB
auto_vacuum = on
autovacuum_naptime = 1min

The task is a long script that takes data from a copied-in table and normalizes it into the database. So there are occasional big reads to pick up 1000 rows or more, then lots of little reads to de-duplicate each record and find IDs etc., then some inserts along the way as needed, and finally lots of inserts at the end. Then repeat.

Any suggestions, or ideas about what's falling over? This is one of my slower queries; I'd love ideas on how to speed it up.

EXPLAIN (ANALYZE, BUFFERS)
select provider_id, count(list_alias.name_part_id)
from list_alias
where provider_id in (1,4,5,6,7,8)
and name_part_id in (5,7,8,3,111)
group by provider_id
order by count(list_alias.name_part_id) desc
limit(1)

The output.

"Limit  (cost=31.62..31.62 rows=1 width=8) (actual time=0.157..0.157 rows=0 loops=1)"  "  Buffers: shared hit=17 read=1"  "  ->  Sort  (cost=31.62..31.62 rows=1 width=8) (actual time=0.153..0.153 rows=0 loops=1)"  "        Sort Key: (count(name_part_id))"  "        Sort Method: quicksort  Memory: 25kB"  "        Buffers: shared hit=17 read=1"  "        ->  GroupAggregate  (cost=0.00..31.61 rows=1 width=8) (actual time=0.147..0.147 rows=0 loops=1)"  "              Buffers: shared hit=17 read=1"  "              ->  Index Scan using "list_alias provider_id" on list_alias  (cost=0.00..31.59 rows=1 width=8) (actual time=0.146..0.146 rows=0 loops=1)"  "                    Index Cond: (provider_id = ANY ('{1,4,5,6,7,8}'::integer[]))"  "                    Filter: (name_part_id = ANY ('{5,7,8,3,111}'::integer[]))"  "                    Buffers: shared hit=17 read=1"  "Total runtime: 0.238 ms"  

Global locking for multi-master Oracle GoldenGate replication

Posted: 06 Mar 2013 05:21 PM PST

This is a very complex scenario, but I figured a state-of-the-art challenge might interest some of the many high-end users of dba.se.

Problem

I'm working on an intercontinental data replication solution for a document production system, somewhat similar to a wiki, utilizing Oracle GoldenGate. The primary goals are to increase application performance and availability around the globe.

The solution must allow for simultaneous read/write access to the same pool of data from multiple locations, which means that we need some clever way of preventing or resolving conflicting updates without user interaction.

Focusing on collision prevention, we must allow an object (a document, an illustration, a set of metadata etc) to be locked globally, thus preventing multiple users from simultaneously editing the same object from different locations - ultimately causing a conflict.

Similarly, an object must remain locked until every user's connected database has received the updated data for that object, lest a user start editing an old version of the object without the latest updates.

Background

The application is somewhat latency sensitive, making access to a central data center slow from remote locations. Like many content-focused systems, the read/write ratio is in the order of 4 to 1, making it a good candidate for a distributed architecture. If well managed, the latter will also help ensure availability during site or network outages.

I have used a somewhat unconventional multi-loop bi-directional replication topology. This keeps the complexity at a manageable level {2(n-1) ways}, adds resilience for site outages and allows for fairly simple addition or removal of sites. The slight drawback is that it may take up to 30 seconds for a transaction to be replicated between the most remote sites via the central master database.

A more conventional design with direct replication between all sites would cut that time in half, but would also significantly increase the complexity of the configuration {n(n-1) ways}.

With five locations that would mean a 20-way replication as opposed to the 8-way replication in my design.

This illustration shows my current test environment across data centers in Europe, Asia and North America. The production environment is expected to have additional locations.

Illustration of replication topology

All the databases are Oracle 11.2.0.3 with Oracle GoldenGate 11.2.1.

My thoughts so far

I've been thinking along the lines of doing locking by inserting a row into a "locking" table over a database link to the central database, while letting the unlock (update or delete of the previously mentioned row) be replicated along with the updated data.

On behalf of the user we must then check the availability of a lock in both the central and local database before acquiring the lock and opening the object for editing. When editing is completed, we must release the lock in the local database which will then replicate the changes and the release of the lock to all other locations via the central database.

However, queries over a high-latency database link can sometimes be very slow (tests show anywhere from 1.5 to 7 seconds for a single insert), and I'm not sure we can guarantee that the update or delete statement that removes a lock is the last statement to be replicated.

Calling a remote PL/SQL procedure to do the checking and locking would at least limit the operation to a single remote call, but seven seconds is still a very long time; something like two seconds would be more acceptable. I'm hoping the database links can be optimized somehow.
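To make the idea concrete, the remote call I have in mind is something like the following sketch; the GLOBAL_LOCKS table, the CENTRAL database link, the unique constraint on object_id and the error handling are all assumptions, not an existing implementation:

-- Sketch: acquire a global lock by inserting a row in the central database
CREATE OR REPLACE PROCEDURE acquire_global_lock(p_object_id IN NUMBER,
                                                p_user_id   IN NUMBER,
                                                p_acquired  OUT NUMBER) AS
BEGIN
  -- assumes a unique constraint on global_locks.object_id
  INSERT INTO global_locks@central (object_id, locked_by, locked_at)
  VALUES (p_object_id, p_user_id, SYSTIMESTAMP);
  COMMIT;
  p_acquired := 1;
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN   -- someone else already holds the lock
    ROLLBACK;
    p_acquired := 0;
END;
/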

There may also be additional issues, like trying to delete or update a row in the local locking table before that row has been successfully replicated from the central database.

On the bright side, with this kind of solution it should be relatively simple to let the application enter a read-only state if communication with the central database is disrupted, or to redirect clients if a data center becomes unavailable.

Has anyone done anything similar? What might be the best way to approach this?

Like I said initially, this is a rather complex solution, feel free to ask about anything left unclear or left out.

SSDT 2010 and SQL70590 errors

Posted: 06 Mar 2013 12:14 PM PST

I have Visual Studio 2010 SP1 with SSDT (with the December 2012 update). I have created a new SQL Server project and imported objects from a database on a SQL Server 2008 R2 instance. The database has a compatibility level of 80, which can't be changed at this time.

Anyway, when I do a build (which has to be clean in order for me to do a schema compare), it throws SQL70590 errors on variables, although SQL Server itself has no problem with them. Looking at them, it seems to be because of the case of the variable names (e.g. declared as @VARIABLE but referenced as @Variable).

The database collation is set to Latin1_General_BIN; is this why the error is being thrown? I can't see why SSDT treats it as an error, yet the procedure code runs without error on the database.
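For illustration, the pattern being flagged is simply a case mismatch like the sketch below, which (as noted above) executes without complaint on the server itself:

-- SSDT raises SQL70590 because the declaration and the reference differ in case
DECLARE @VARIABLE int;
SET @Variable = 1;   -- referenced as @Variable, declared as @VARIABLE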

Is there a way to write intelligent queries against a merge table in MySQL that represents a bunch of tables partitioned by date?

Posted: 06 Mar 2013 01:33 PM PST

I have a set of MySQL tables that are partitioned by date, with a merge table representing all of them. For the query I am writing, the results are joined by date, so rows in table A_2012-12-05 will only join to rows in B_2012-12-05, and rows in A_2012-12-06 will only join to rows in B_2012-12-06, etc. Is there a way to write an intelligent query so that MySQL won't go looking for rows in B_2012-12-06 to join with rows in A_2012-12-05?


Edit by RolandoMySQLDBA 11:17 EDT

Please insert the output of these into this section:

They all look pretty much like this:

CREATE TABLE `mrg_31_session` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `phpsessionid` varchar(32) NOT NULL,
  `start_datetime` datetime NOT NULL,
  PRIMARY KEY (`id`),
  KEY `ix_phpsessionid` (`phpsessionid`),
  KEY `ix_startdatetime` (`start_datetime`)
) ENGINE=MRG_MyISAM DEFAULT CHARSET=latin1 PACK_KEYS=1 UNION=(`session_20121205`,`session_20121204`,`session_20121203`,`session_20121202`,`session_20121201`,`session_20121130`,`session_20121129`,`session_20121128`,`session_20121127`,`session_20121126`,`session_20121125`,`session_20121124`,`session_20121123`,`session_20121122`,`session_20121121`,`session_20121120`,`session_20121119`,`session_20121118`,`session_20121117`,`session_20121116`,`session_20121115`,`session_20121114`,`session_20121113`,`session_20121112`,`session_20121111`,`session_20121110`,`session_20121109`,`session_20121108`,`session_20121107`,`session_20121106`,`session_20121105`)

Mysql innoDB write operations are extremely slow

Posted: 06 Mar 2013 07:33 PM PST

I'm having serious performance problems with MySQL and the InnoDB engine. Even the simplest table makes write operations (creating the table, inserting, updating and deleting) horribly slow, as you can see in the following snippet.

mysql> CREATE TABLE `test` (`id` int(11) not null auto_increment,
    -> PRIMARY KEY(`id`)) ENGINE=InnoDB;
Query OK, 0 rows affected (4.61 sec)

mysql> insert into test values ();
Query OK, 1 row affected (1.92 sec)

mysql> insert into test values ();
Query OK, 1 row affected (0.88 sec)

mysql> insert into test values ();
Query OK, 1 row affected (1.10 sec)

mysql> insert into test values ();
Query OK, 1 row affected (6.27 sec)

mysql> select * from test;
+----+
| id |
+----+
|  1 |
|  2 |
|  3 |
|  4 |
+----+
4 rows in set (0.00 sec)

mysql> delete from test where id = 2;
Query OK, 1 row affected (0.28 sec)

mysql> delete from test where id = 3;
Query OK, 1 row affected (6.37 sec)

I have been looking at htop and the long waits are not caused by abnormal CPU load; it's almost zero, and memory usage is also normal. If I create the same table using the MyISAM engine, it works normally. My my.cnf file contains this (if I remember correctly, I haven't changed anything from the default Debian configuration):

[client]
port        = 3306
socket      = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket      = /var/run/mysqld/mysqld.sock
nice        = 0

[mysqld]
user        = mysql
pid-file    = /var/run/mysqld/mysqld.pid
socket      = /var/run/mysqld/mysqld.sock
port        = 3306
basedir     = /usr
datadir     = /var/lib/mysql
tmpdir      = /tmp
language    = /usr/share/mysql/english
skip-external-locking
bind-address        = 127.0.0.1
key_buffer          = 40M
max_allowed_packet  = 16M
thread_stack        = 128K
thread_cache_size   = 8
myisam-recover      = BACKUP
max_connections     = 100
table_cache         = 64
thread_concurrency  = 10
query_cache_limit   = 1M
query_cache_size    = 40M
log_slow_queries    = /var/log/mysql/mysql-slow.log
long_query_time     = 2
log-queries-not-using-indexes
expire_logs_days    = 10
max_binlog_size     = 100M

[mysqldump]
quick
quote-names
max_allowed_packet  = 16M

[isamchk]
key_buffer      = 16M

!includedir /etc/mysql/conf.d/

I have also tried to restart the server, but it doesn't solve anything.

The slow queries log doesn't give any extra information.

How large should MySQL's innodb_buffer_pool_size be?

Posted: 06 Mar 2013 05:38 PM PST

I have a busy database with solely InnoDB tables which is about 5GB in size. The database runs on a Debian server with SSD disks, and I've set max_connections = 800, which sometimes saturates and grinds the server to a halt. The average query rate is about 2.5K queries per second. So I need to optimize memory usage to make room for the maximum possible number of connections.

I've seen suggestions that innodb_buffer_pool_size should be up to 80% of the total memory. On the other hand, I get this warning from the tuning-primer script:

Max Memory Ever Allocated : 91.97 G
Configured Max Per-thread Buffers : 72.02 G
Configured Max Global Buffers : 19.86 G
Configured Max Memory Limit : 91.88 G
Physical Memory : 94.58 G

Here are my current innodb variables:

| innodb_adaptive_flushing          | ON                     |
| innodb_adaptive_hash_index        | ON                     |
| innodb_additional_mem_pool_size   | 20971520               |
| innodb_autoextend_increment       | 8                      |
| innodb_autoinc_lock_mode          | 1                      |
| innodb_buffer_pool_instances      | 1                      |
| innodb_buffer_pool_size           | 20971520000            |
| innodb_change_buffering           | all                    |
| innodb_checksums                  | ON                     |
| innodb_commit_concurrency         | 0                      |
| innodb_concurrency_tickets        | 500                    |
| innodb_data_file_path             | ibdata1:10M:autoextend |
| innodb_data_home_dir              |                        |
| innodb_doublewrite                | ON                     |
| innodb_fast_shutdown              | 1                      |
| innodb_file_format                | Antelope               |
| innodb_file_format_check          | ON                     |
| innodb_file_format_max            | Antelope               |
| innodb_file_per_table             | ON                     |
| innodb_flush_log_at_trx_commit    | 2                      |
| innodb_flush_method               | O_DIRECT               |
| innodb_force_load_corrupted       | OFF                    |
| innodb_force_recovery             | 0                      |
| innodb_io_capacity                | 200                    |
| innodb_large_prefix               | OFF                    |
| innodb_lock_wait_timeout          | 50                     |
| innodb_locks_unsafe_for_binlog    | OFF                    |
| innodb_log_buffer_size            | 4194304                |
| innodb_log_file_size              | 524288000              |
| innodb_log_files_in_group         | 2                      |
| innodb_log_group_home_dir         | ./                     |
| innodb_max_dirty_pages_pct        | 75                     |
| innodb_max_purge_lag              | 0                      |
| innodb_mirrored_log_groups        | 1                      |
| innodb_old_blocks_pct             | 37                     |
| innodb_old_blocks_time            | 0                      |
| innodb_open_files                 | 300                    |
| innodb_purge_batch_size           | 20                     |
| innodb_purge_threads              | 0                      |
| innodb_random_read_ahead          | OFF                    |
| innodb_read_ahead_threshold       | 56                     |
| innodb_read_io_threads            | 4                      |
| innodb_replication_delay          | 0                      |
| innodb_rollback_on_timeout        | OFF                    |
| innodb_rollback_segments          | 128                    |
| innodb_spin_wait_delay            | 6                      |
| innodb_stats_method               | nulls_equal            |
| innodb_stats_on_metadata          | ON                     |
| innodb_stats_sample_pages         | 8                      |
| innodb_strict_mode                | OFF                    |
| innodb_support_xa                 | ON                     |
| innodb_sync_spin_loops            | 30                     |
| innodb_table_locks                | ON                     |
| innodb_thread_concurrency         | 4                      |
| innodb_thread_sleep_delay         | 10000                  |
| innodb_use_native_aio             | ON                     |
| innodb_use_sys_malloc             | ON                     |
| innodb_version                    | 1.1.8                  |
| innodb_write_io_threads           | 4                      |

A side note that might be relevant: when I try to insert a large post (say over 10KB) from Drupal (which sits on a separate web server) into the database, it takes forever and the page does not return correctly.

Given all this, I'm wondering what my innodb_buffer_pool_size should be for optimal performance. I'd appreciate suggestions for setting this and the other parameters optimally for this scenario.

NuoDB with Amazon S3

Posted: 06 Mar 2013 01:31 PM PST

I'm thinking about developing my new mobile and web app against NuoDB. I believe in their technology. The things I care most about are backup and restore in case of data corruption, and read/write speed scalability.

From what I've read, you can use NuoDB with Amazon S3 as the storage for the archive node. That looks very useful, because you aren't limited in how much data you can store in a single bucket. Furthermore, it's easy to back up the data: you can detach the archive node to get a backup, and restore it by attaching the archive node back and letting the other nodes sync with it.

Because I expect a lot of data (terabytes), it seems like a good solution, because I will not need to export the data, compress it and store it on EBS volume(s).

1) What I wanted to know is: how efficient is S3 as an archive node with NuoDB for high read/write scenarios?

2) Is the approach I described above a good backup option to use with NuoDB?

3) Or would I be better off using MySQL load balancers? I've read that there are many solutions you can use on Amazon RDS to scale the DB horizontally without any code or DB modifications (ScaleBase?).
