[how to] Minimising downtime implementing merge replication on live system |
- Minimising downtime implementing merge replication on live system
- PostgreSQL user can not connect to server after changing password
- mySQL Quickly find rows linked to another row
- Log Shipping - RESTORE WITH STANDBY - on SQL Server 2012 keeps breaking
- Automatically create linked row in a different table
- Why do MyISAM MySQL table indexes go out of date?
- Ways to make this stored procedure run faster and optimised in SQL Server for ASP.NET?
- Designing ERD1 and ERD2 with crows foot
- Efficient way to move rows across the tables?
- Open a Locked Text File with SQL Server
- relational database for address model
- sql 2005 upgrade to sql 2005 service pack 4 (SP4) via command line
- Useful topics on database theories [closed]
- Using MySQL InnoDB as an Archive
- Sequential joining of tables in order
- TSQL DateTime DataType - fractional seconds vs milliseconds
- SQL Server Database Hot Backups: Which Technique(s) to Use for Multi-Level Replication?
- Manually moving SQL instance to another node causes immediate database mirror failover
- Write differences between varchar and nvarchar
- MS Access ER diagram
- How to properly kill MySQL?
- mysqld_safe version different than mysqld?
- Tuning Postgres config for new Database Server
- Global locking for multi-master Oracle GoldenGate replication
- SSDT 2010 and SQL70590 errors
- Is there a way to write intelligent queries against a merge table in MySQL that represents a bunch of tables partitioned by date?
- Mysql innoDB write operations are extremely slow
- How large should mysql innodb_buffer_pool_size be?
- NuoDB with Amazon S3
Minimising downtime implementing merge replication on live system Posted: 06 Mar 2013 09:01 PM PST Background: We have three servers running SQL Server 2008 which all contain a Product database. These are currently kept in sync once per day by sending a copy of the MDF to the two subscribing servers, detaching the existing DB and replacing it with the latest copy. The two Subscribers also contain a Customer database which is not present on the Publisher. The Customer database contains customer purchase information, so it needs to reference ProductIDs in the Product table. We have implemented merge replication on a similar setup in our testing environment and are preparing to implement these changes on our live system. The end result should look something like: http://puu.sh/2dl2z (Apologies for the linked image, I can't paste images due to a lack of reputation). The Problem: In order to implement these changes, we will need to generate a snapshot and synchronise the data between the Publisher and the Subscribers. While this is occurring, any client application which relies on the data in the Product database will fail, and any stored procedures on the Customer database which make cross-database joins will also fail. We'd like to keep this downtime to a minimum or eliminate it completely. We've had some crazy ideas thrown around which may or may not work (using synonyms to point a subscriber to another server's Product table while snapshot generation/synchronisation takes place), but they don't seem very elegant, if they'll work at all. I'm hoping someone else has had a similar issue and might have some insight :) |
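The synonym idea mentioned above could be sketched roughly as follows. This is only a sketch of that suggestion, not a tested procedure: the linked server name (BACKUPSRV) and the synonym/object names are hypothetical, and it only helps if the Customer database's stored procedures reference the synonym rather than the Product tables directly.
    -- Hypothetical names: BACKUPSRV (linked server still holding a current Product copy),
    -- dbo.ProductSyn (synonym used by the cross-database procedures).
    -- While the snapshot is being generated/applied, repoint the synonym remotely:
    DROP SYNONYM dbo.ProductSyn;
    CREATE SYNONYM dbo.ProductSyn FOR BACKUPSRV.Product.dbo.Product;

    -- Once the subscriber has been synchronised, point it back at the local copy:
    DROP SYNONYM dbo.ProductSyn;
    CREATE SYNONYM dbo.ProductSyn FOR Product.dbo.Product;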
PostgreSQL user can not connect to server after changing password Posted: 06 Mar 2013 08:42 PM PST I've run into this with 4 roles I created: My system: PostgreSQL 9.2 on Ubuntu 12.04. Is there any way to fix this? Thanks. (1): log in with the postgres account, right-click the user under Login Roles, go to the 'Definition' tab and enter a password |
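For reference, the pgAdmin steps in (1) are roughly equivalent to the SQL below (the role name is a placeholder). It may also be worth confirming that the role still has the LOGIN attribute and that pg_hba.conf uses an authentication method (e.g. md5) that matches how the password was set.
    -- 'myuser' is a placeholder role name.
    ALTER ROLE myuser WITH LOGIN PASSWORD 'new_password';

    -- Check the role's attributes; rolcanlogin should be true and rolvaliduntil not expired:
    SELECT rolname, rolcanlogin, rolvaliduntil FROM pg_roles WHERE rolname = 'myuser';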
mySQL Quickly find rows linked to another row Posted: 06 Mar 2013 08:30 PM PST For example if I have a |
Log Shipping - RESTORE WITH STANDBY - on SQL Server 2012 keeps breaking Posted: 06 Mar 2013 08:38 PM PST We are using log shipping and RESTORE WITH STANDBY on SQL Server 2012 in order to restore the database in read-only mode for reporting purposes. However, the log shipping setup keeps breaking after completing a restore of one or two log backups. Log shipping only breaks when it is running as RESTORE WITH STANDBY; RESTORE WITH NORECOVERY does not cause any problems. My only intuition about this is that the primary database is not that dynamic. Therefore, when there are no transactions, this causes issues with the RESTORE process, maybe? Any ideas, known fixes? I had it working for a few days by running a regular job that does heavy updating on two tables. When the job stopped running, the log shipping setup quickly failed, unable to process the .trn file. I reset log shipping and tried to see if it would keep running by just doing a small update, changing the value of one column of one record in a table, however it still failed. Thanks for all your responses. |
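For anyone comparing setups, the difference in question is roughly between the two restore forms below (database name and paths are placeholders). WITH STANDBY additionally needs a writable undo (.tuf) file, which is one of the usual things to check when the restore job starts failing.
    -- Placeholders: AdventureLogs, the backup share and undo file paths.
    -- What the log shipping restore job effectively runs in STANDBY mode:
    RESTORE LOG AdventureLogs
        FROM DISK = N'\\share\logship\AdventureLogs_20130306.trn'
        WITH STANDBY = N'D:\SQLData\AdventureLogs_undo.tuf';

    -- The NORECOVERY form that reportedly works without problems:
    RESTORE LOG AdventureLogs
        FROM DISK = N'\\share\logship\AdventureLogs_20130306.trn'
        WITH NORECOVERY;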
Automatically create linked row in a different table Posted: 06 Mar 2013 07:41 PM PST What I mean is that for example if I have a table |
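Based on the title, a common way to create a linked row automatically is an AFTER INSERT trigger. A minimal sketch follows, assuming MySQL (the question does not state the DBMS) and hypothetical tables table_a and table_b; the real column names depend on the schema in the question.
    -- Hypothetical tables and columns, MySQL syntax assumed.
    DELIMITER //
    CREATE TRIGGER trg_table_a_insert
    AFTER INSERT ON table_a
    FOR EACH ROW
    BEGIN
        -- create the linked row in the other table automatically
        INSERT INTO table_b (a_id, created_at)
        VALUES (NEW.id, NOW());
    END//
    DELIMITER ;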
Why do MyISAM MySQL table indexes go out of date? Posted: 06 Mar 2013 08:08 PM PST I have a few MyISAM tables where only the auto-increment primary index is up to date with the number of rows, but the other indexes are not. Any idea why this happens? |
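Since MyISAM index statistics (cardinality) can drift from the real row count, a quick check/refresh cycle is usually the first diagnostic step; the table name below is a placeholder.
    -- 'mytable' is a placeholder.
    CHECK TABLE mytable;          -- reports table/index inconsistencies
    ANALYZE TABLE mytable;        -- refreshes the key distribution (cardinality) shown by SHOW INDEX
    REPAIR TABLE mytable QUICK;   -- MyISAM only: rebuilds the index file if it is actually damaged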
Ways to make this stored procedure run faster and optimised in SQL Server for ASP.NET? Posted: 06 Mar 2013 06:18 PM PST I have a very complex stored procedure which runs very slowly and takes a long time to respond. I am using it to create a form, working with the ASP.NET Entity Framework. I am wondering whether there are better ways of doing this, or of optimising it to make it run faster, e.g. views, doing single selects at the code level with multiple queries, etc. Here is the stored procedure: |
Designing ERD1 and ERD2 with crows foot Posted: 06 Mar 2013 04:01 PM PST What would be the perfect ERD1 & ERD2 for the following case study? And what are the attributes of the entities? Five Aside Football Independent League (FAFIL) The Five Aside Football Independent League (FAFIL) is an amateur organisation which was set up to organise and record the results of a number of five-a-side football clubs who, during the football season, play relatively friendly matches with each other. FAFIL has recently decided to record the details of each team and which other teams they play against by using a relational database system, and is calling on you as renowned experts in this field to solve their problems. The organisation consists of a number of clubs which regularly take part in the prestigious challenge cup tournament, which consists of each club putting forward a team to compete for the honour of winning the challenge cup by scoring the most points when competing with the other clubs. Each club may have more than one team, but can only enter one team into this prestigious tournament. Throughout the season FAFIL has to arrange and record the order in which each team will play against the others, and the venue and date that each match will take place on, as well as arranging for a designated FAFIL referee to oversee the match. The venue will be one of the teams' home grounds, and the match will consequently be an away match for the other team. Each team must play all other teams twice, once at home and once away. Therefore the first task which the database system must provide is to automatically create a fixtures list which will take as input the existing teams and produce a listing having each team playing all the others both home and away. The organisers can then arrange the matches required and record the referee and venue accordingly. Difficulties have arisen in the past when, during the season, it was discovered that either the referee or the venues were double-booked on the same day. Once the matches are arranged, the results must be recorded as and when they come in. Points are awarded to the teams for each match they play according to the end result: two points for a win, one point for a draw, and zero points if they lost or failed to show. Once all the matches have been played it should be an easy job to calculate which team has the most points. Teams with equal points scored are ordered by the difference between goals scored and goals conceded. It is hoped that the database system will be able to record the points and allocate them to the correct team as the results are put into the system. Due to some very aggressive players and the occasional hospital bill during the last season, it has been decided to record and allocate penalty points against players if they misbehave. Accordingly, the FAFIL referees were sent instructions to allocate penalty points, such as: 1 - bad language, 5 - rude gestures, 10 - stabbings, and these will be allocated against each player as the season progresses. It was further decided that any player who exceeds 10 penalty points, e.g. stabs another player and then swears at them, must not be allowed to take part in any further matches. It has also been decided to reward the best player with a highly prized book voucher. Accordingly, each time a match is played, one player is chosen as the 'player of the match'.
At the end of the season the player who was 'player of the match' the most times is awarded the prize; if there are a number of players who have been nominated the same number of times, they must share the book voucher, as FAFIL is not made of money. As an aid to your design plans, FAFIL has supplied a typical list of questions which it believes should be easy for the new database to answer and hence save its organisers a lot of time and effort. Typical Set of Questions.
(Note this has to be calculated live by counting how often each player has been chosen as player of the match)
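As one possible starting point for the ERD, here is a simplified, assumption-laden relational sketch of the main entities described in the case study; all names and column choices are illustrative only, not the intended model answer. Points per match and the over-10-penalty-points disqualification would then be derived with queries rather than stored.
    -- Illustrative sketch only.
    CREATE TABLE club    (club_id INT PRIMARY KEY, club_name VARCHAR(50));
    CREATE TABLE team    (team_id INT PRIMARY KEY,
                          club_id INT REFERENCES club(club_id),
                          home_venue VARCHAR(50));
    CREATE TABLE player  (player_id INT PRIMARY KEY,
                          team_id INT REFERENCES team(team_id),
                          player_name VARCHAR(50));
    CREATE TABLE referee (referee_id INT PRIMARY KEY, referee_name VARCHAR(50));
    CREATE TABLE fixture (fixture_id INT PRIMARY KEY,
                          home_team_id INT REFERENCES team(team_id),
                          away_team_id INT REFERENCES team(team_id),
                          referee_id   INT REFERENCES referee(referee_id),
                          venue        VARCHAR(50),
                          match_date   DATE,
                          home_goals   INT,
                          away_goals   INT,
                          player_of_match_id INT REFERENCES player(player_id));
    CREATE TABLE penalty (penalty_id INT PRIMARY KEY,
                          player_id  INT REFERENCES player(player_id),
                          fixture_id INT REFERENCES fixture(fixture_id),
                          points     INT,
                          reason     VARCHAR(50));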
|
Efficient way to move rows across the tables? Posted: 06 Mar 2013 03:51 PM PST This is a somewhat long question, as I would like to explain all the details of the problem. System Description: We have a queue of incoming messages from external system(s). Messages are immediately stored in, e.g., an INBOX table. A few worker threads fetch job chunks from the table (first mark some messages with an UPDATE, then SELECT the marked messages). The workers do not process the messages; they dispatch them to different internal components (called 'processors'), depending on the message command. Each message contains several text fields (the longest is around 200 varchars), a few IDs and some timestamp(s), etc.; 10-15 columns in total. Each internal component (i.e. processor) that processes messages works differently. Some process the message immediately, others trigger some long operation, even communicating via HTTP with other parts of the system. In other words, we cannot just process a message from the INBOX and then remove it. We must work with that message for a while (as an async task). Still, there are not too many processors in the system, up to 10. Messages are all internal, i.e. it is not important for the user to browse them, paginate, etc. The user may require a list of processed relevant messages, but that's not a mission-critical feature, so it does not have to be fast. Some invalid messages may be deleted sometimes. It's important to emphasise that the expected traffic might be quite high - and we don't want bottlenecks because of bad database design. The database is MySQL.
Decision: One of the decisions is not to have one big table for all messages, with some flag column that would indicate the various message states. The idea is to have a table per processor, and to move messages around. For example, received messages will be stored in INBOX, then moved by the dispatcher to some PROCESSOR_1 table, and finally moved to an ARCHIVE table. There should not be more than 2 such movements. While in the processing state, we do allow flags to be used for indicating processing-specific states, if any. In other words, a PROCESSOR_X table may track the state of its messages, since the number of currently processing messages will be significantly smaller. The reason for this is not to use one BIG table for everything.
Question: Since we are moving messages around, I wonder how expensive this is with high volumes. Which of the following scenarios is better: (A) to have all the separate, similar tables, as explained, and move complete message rows, e.g. read the complete row from INBOX, write it to the PROCESSOR table (with some additional columns), delete it from INBOX; or (B) to prevent physical movement of the content, have one big MESSAGES table that just stores the content (and still not the state). We would still have the other tables, as explained above, but they would contain just IDs of messages and additional columns. So now, when a message is about to move, we physically move much less data - just the IDs. The rest of the message remains in the MESSAGES table, unmodified all the time. In other words, is there a penalty in a SQL join between one small and one huge table? Thank you for your patience, hope I was clear enough. |
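For what it's worth, option (A) typically boils down to an INSERT ... SELECT followed by a DELETE in one transaction, while option (B) moves only the message IDs and joins back to the payload table when reading. A rough sketch of both, where all table and column names are assumptions and the tables are presumed to already exist:
    -- Option (A): physically move whole rows (hypothetical schema).
    START TRANSACTION;
    INSERT INTO processor_1 (id, command, body, received_at)
        SELECT id, command, body, received_at FROM inbox WHERE claimed_by = 42;
    DELETE FROM inbox WHERE claimed_by = 42;
    COMMIT;

    -- Option (B): keep the payload in one MESSAGES table, move only IDs.
    START TRANSACTION;
    INSERT INTO processor_1_queue (message_id)
        SELECT message_id FROM inbox_queue WHERE claimed_by = 42;
    DELETE FROM inbox_queue WHERE claimed_by = 42;
    COMMIT;

    -- Reading a message in (B) is then a join of a small queue table to the big table:
    SELECT m.*
    FROM   processor_1_queue q
    JOIN   messages m ON m.message_id = q.message_id;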
Open a Locked Text File with SQL Server Posted: 06 Mar 2013 08:20 PM PST When I try to I get an error, since the file is a log file opened by another process. However, with C# I can open the file with Is there a way to have that ReadWrite share functionality within SQL Server (bulk insert or any other)? Thanks |
relational database for address model Posted: 06 Mar 2013 07:40 PM PST I want to design an "Address" model for all types of entities, like user, business, etc. I currently have two main models: one is User and the other is Business. Each one has various address types, like below. So I created an Address model with an addresstype column, like this. Relationship: User --OneToMany--> Business, User --OneToMany--> Address (User column). Now, using the above relations, addresstype and the user column are related, but a Business address has no relationship with Address. So how do I design this in an efficient way? |
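One common way to model this without a "polymorphic" owner column is to keep Address generic and link it through small join tables, one per owning entity. A sketch follows; all table and column names are illustrative, and the stub User/Business tables are only there to make the example self-contained.
    -- Illustrative only; adjust names and types to the actual models.
    CREATE TABLE app_user (user_id INT PRIMARY KEY, user_name VARCHAR(50));
    CREATE TABLE business (business_id INT PRIMARY KEY, business_name VARCHAR(50));

    CREATE TABLE address (
        address_id   INT PRIMARY KEY,
        addresstype  VARCHAR(20),     -- e.g. 'HOME', 'OFFICE', 'BILLING'
        line1        VARCHAR(100),
        city         VARCHAR(50),
        postal_code  VARCHAR(20)
    );

    CREATE TABLE user_address (
        user_id     INT REFERENCES app_user(user_id),
        address_id  INT REFERENCES address(address_id),
        PRIMARY KEY (user_id, address_id)
    );

    CREATE TABLE business_address (
        business_id INT REFERENCES business(business_id),
        address_id  INT REFERENCES address(address_id),
        PRIMARY KEY (business_id, address_id)
    );
This keeps Address free of owner-specific columns, so both User and Business can reference addresses in the same way.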
sql 2005 upgrade to sql 2005 service pack 4 (SP4) via command line Posted: 06 Mar 2013 03:01 PM PST I have been wracking my brain over upgrading SQL Server 2005 to SQL Server 2005 SP4. I have a script for installing new instances of SQL Server for all versions and they all work wonderfully. The problem is, SQL Server 2012 can't upgrade SQL 2005 SP3 and below. So I have to install up to SQL 2005 SP4, AND THEN upgrade through SQL 2012, which is highly annoying. I need to do this silently via the command line, but I cannot get it to work at all. Here are some samples I tried: "SQLSP4.exe /qb /ACTION=Upgrade /INSTANCENAME="FOOBAR"" "SQLSP4.exe /quiet /instancename="FOOBAR"" among a couple of other variations. I could not find anything online at all. In fact, the one helpful thread ended by saying that Microsoft didn't have any support for this, but I'm not sure about that. What can be done here? Does anyone have any experience in updating service packs via the command line? |
Useful topics on database theories [closed] Posted: 06 Mar 2013 12:24 PM PST Useful topics to write about on different database theories? Hi, I want to know what useful topics I can research on database theories. For example, I have already written up e-learning, and I based the e-learning piece on these four topics below:
Now, for database theories, I am going to write it up in a synthesis matrix, but what topics should I talk about? Here is a synthesis matrix example if you don't know what that is: http://writingcenter.fiu.edu/resources/synthesis-matrix-2.pdf Thanks |
Using MySQL InnoDB as an Archive Posted: 06 Mar 2013 12:37 PM PST My site has a main MySQL InnoDB table that it does most of its work on. New rows get inserted at a rate of 1 million per week, and rows older than a week get moved over to an archive table on a daily basis. These archived rows are processed once a week for things like finding trends. This archive table consequently grows by 1 million new rows every week, and querying it can get really slow. Is MySQL suited for archiving data, or is my strategy very flawed? Please advise, thank you! |
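If the archive stays in MySQL, one common approach is to range-partition the archive table by month (or week), so the weekly trend queries scan only the recent partitions and old data can be dropped cheaply. A hedged sketch with an assumed schema; the column names and dates are placeholders, and it requires MySQL 5.1+ with the partitioning key included in the primary key.
    -- Assumed table/column names.
    CREATE TABLE archive (
        id          BIGINT NOT NULL,
        created_at  DATETIME NOT NULL,
        payload     VARCHAR(200),
        PRIMARY KEY (id, created_at)
    ) ENGINE=InnoDB
    PARTITION BY RANGE (TO_DAYS(created_at)) (
        PARTITION p2013_02 VALUES LESS THAN (TO_DAYS('2013-03-01')),
        PARTITION p2013_03 VALUES LESS THAN (TO_DAYS('2013-04-01')),
        PARTITION pmax     VALUES LESS THAN MAXVALUE
    );

    -- Purging a month then becomes a metadata operation instead of a huge DELETE:
    ALTER TABLE archive DROP PARTITION p2013_02;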
Sequential joining of tables in order Posted: 06 Mar 2013 12:33 PM PST Table Table So in an album there are several photos (photos in an album are ordered by Now I introduce one more "level of indirection": Bunches. There may be several albums in one bunch. For this I add fields Now the problem: suppose we are given a bunch ID. I want to make a SELECT query which selects all photos from albums belonging to the given bunch, ordered first by |
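Since the actual column names were lost in this digest, the following is only a guess at the intended query shape: it assumes hypothetical columns bunch_id and album_order on the album table, and album_id and photo_order on the photo table, with a two-level ordering (albums first, photos within each album second).
    -- Table and column names are guesses, not the asker's schema.
    SELECT p.*
    FROM   album a
    JOIN   photo p ON p.album_id = a.id
    WHERE  a.bunch_id = ?                 -- the given bunch ID
    ORDER BY a.album_order, p.photo_order;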
TSQL DateTime DataType - fractional seconds vs milliseconds Posted: 06 Mar 2013 12:15 PM PST Wanting to get all records from a single day. So everything between So, in testing I run this in SSMS: And get: The millisecond part comes back off by a little. In the case of the first line it really matters, because I don't want 2013-03-06 00:00:00.000 - I want a millisecond before midnight. TSQL states in the documentation that it uses fractional seconds, as opposed to .NET DateTime, which uses milliseconds. This would seem to be a syntax difference only, but if you really care about millisecond precision in TSQL, are you forced to use In C# Am I interpreting this all right? If I care about millisecond precision am I stuck with Normally that would be an easy fix, but in this case we're using a Microsoft system function that receives |
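For context, datetime has roughly 1/300-second precision (values land on .000/.003/.007), so '23:59:59.999' rounds up to the next day. The usual way to sidestep the "millisecond before midnight" endpoint entirely is a half-open range; a sketch, where the table and column names are placeholders:
    -- Placeholders: dbo.SomeTable, EventDate.
    DECLARE @day date = '2013-03-06';

    SELECT *
    FROM   dbo.SomeTable
    WHERE  EventDate >= CAST(@day AS datetime)
    AND    EventDate <  DATEADD(DAY, 1, CAST(@day AS datetime));

    -- If an inclusive datetime endpoint really is unavoidable, the last
    -- representable datetime value within the day is 23:59:59.997.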
SQL Server Database Hot Backups: Which Technique(s) to Use for Multi-Level Replication? Posted: 06 Mar 2013 01:57 PM PST I'm reading the many other posts here about mirroring vs. replication, but I'd still appreciate it if someone with some experience would weigh in. My small development team needs to have local automatic synchronization of databases which are hosted remotely. The network situation is a little complicated:
We have requested and received an always-on VM in the remote host's internal network that we plan to use to monitor the health of our dev/test/prod servers, as well as to maintain hot backups of our databases. So we are looking at two levels of data synchronization:
We aren't looking to implement fail-over at this point, because even the "prod" environment is not critical (it's more of a pre-prod environment). We also need this to be as simple as possible, and SQL Server 2012 is not an option. We are stuck with 2008. Here's what I'm thinking:
My questions are:
Thank you for your time. |
Manually moving SQL instance to another node causes immediate database mirror failover Posted: 06 Mar 2013 12:33 PM PST I have a two-node SQL cluster (2008 R2). Some of the databases within that SQL instance are mirrored to another server on a remote site, using the "High safety with automatic failover" mode. The mirroring connection timeout value for those databases is set to 90 seconds. When I move SQL from one node in the cluster to another node, using the Failover Cluster Manager application's option of "Move this service or application to another node", the databases that are mirrored instantly fail over to the mirror. This is undesirable behaviour. My reason for setting the mirroring connection timeout value is that I only want to fail over to the database mirror if the cluster fails completely and there are no functioning nodes. Is there any way to achieve this? It feels as though it should be possible, otherwise the concept of mixing clustering and automatic-failover database mirroring would be unworkable, as every node failover within the cluster would trigger a mirror failover. Thanks. |
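One workaround sometimes used for planned node moves is to temporarily disable automatic failover by removing the witness before the move and re-adding it afterwards; mirroring stays in high-safety (synchronous) mode in the meantime. A hedged sketch, where the database name and witness address are placeholders:
    -- Placeholders: MyDb and the witness endpoint address. Run on the principal.
    -- Before the planned cluster node move: no witness means no automatic failover.
    ALTER DATABASE MyDb SET WITNESS OFF;

    -- After the instance is back up on the new node, restore automatic failover:
    ALTER DATABASE MyDb SET WITNESS = 'TCP://witness.example.com:5022';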
Write differences between varchar and nvarchar Posted: 06 Mar 2013 01:38 PM PST Currently in our SQL Server 2012 database, we're using My question is are there any differences in how SQL Server writes to Edit: |
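As background for the write question: nvarchar stores UTF-16 data (typically 2 bytes per character, before any compression), while varchar in a single-byte code page stores 1 byte per character, so the same string simply occupies twice the space on the page and in the log. A quick way to see the difference (the values are arbitrary examples):
    DECLARE @v varchar(50)  = 'Database';
    DECLARE @n nvarchar(50) = N'Database';

    SELECT DATALENGTH(@v) AS varchar_bytes,   -- 8
           DATALENGTH(@n) AS nvarchar_bytes;  -- 16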
MS Access ER diagram Posted: 06 Mar 2013 04:04 PM PST I have a database schema for Microsoft Access. I'm interested in how I can convert the schema into an ER diagram. Is there any tool that can do this based on the schema's relations? |
How to properly kill MySQL? Posted: 06 Mar 2013 07:48 PM PST I have CentOS 64-bit with cPanel installed, and I use However, it keeps doing ..... for minutes and it never stops. It used to be instant. Any idea why it does that and how to fix it? Right now I have to do The server is also very, very active. |
mysqld_safe version different than mysqld? Posted: 06 Mar 2013 12:05 PM PST Is it a problem that mysqld socket has a different version than the mysqld server? I noticed this in my mysqld log during startup Yet, when I show variables I get this
Based on golimar's questions I ran a
and I see this
|
Tuning Postgres config for new Database Server Posted: 06 Mar 2013 04:20 PM PST I have finally gotten a purpose-built database machine for a project I'm working on. The server is a 4-core Xeon with 64GB of RAM and a RAID10 of 10krpm drives. I have just got the database moved over to the new machine; the performance with the same code is worse than when it was running on a VM. I'm looking for suggestions on which settings to adjust to which values. Currently, I've upped shared_buffers to 60GB and adjusted the kernel settings needed to make that change.
I'm working on doing some stuff I'd like to get loaded in quickly, so I have Can anyone point me in the right direction as to how to improve the speed? I had it running quite a bit faster on a slower machine with much less memory and drives shared with the machines that were making calls to it, so I'm not really sure what the issues are. Update: 2013-03-06 Performance is falling off a cliff shortly after a run starts. Not sure what to do. Settings: The task is a long script that's taking data from a copied-in table and normalizing it into the database. So occasionally big reads to pick up 1000 rows or more, then lots of little reads to de-duplicate the record and find IDs etc., then some inserts along the way that are needed, and finally lots of inserts at the end. Then repeat. Any suggestions, or ideas about what's falling off? This is one of my slower queries; I'd love ideas on how to speed it up. The output: |
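As a very rough sanity check (these numbers are rules of thumb, not measurements for this workload), the starting points commonly suggested for a dedicated 64 GB PostgreSQL 9.x box are far lower than shared_buffers = 60GB. The current values can be inspected from SQL and then adjusted in postgresql.conf:
    -- Inspect the settings most relevant to this workload.
    SELECT name, setting, unit
    FROM   pg_settings
    WHERE  name IN ('shared_buffers', 'effective_cache_size', 'work_mem',
                    'maintenance_work_mem', 'checkpoint_segments', 'wal_buffers');

    -- Starting points often quoted for a dedicated 64 GB server (set in
    -- postgresql.conf, then reload/restart; these are assumptions to tune from):
    --   shared_buffers       = 16GB      -- roughly 25% of RAM, rather than 60GB
    --   effective_cache_size = 48GB
    --   work_mem             = 32MB
    --   maintenance_work_mem = 1GB
    --   checkpoint_segments  = 32        -- pre-9.5 setting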
Global locking for multi-master Oracle GoldenGate replication Posted: 06 Mar 2013 05:21 PM PST This is a very complex scenario, but I figured a state-of-the-art challenge might interest some of the many high-end users of dba.se.
Problem: I'm working on an intercontinental data replication solution for a document production system, somewhat similar to a wiki, utilizing Oracle GoldenGate. The primary goals are to increase application performance and availability around the globe. The solution must allow for simultaneous read/write access to the same pool of data from multiple locations, which means that we need some clever way of preventing or resolving conflicting updates without user interaction. Focusing on collision prevention, we must allow an object (a document, an illustration, a set of metadata, etc.) to be locked globally, thus preventing multiple users from simultaneously editing the same object from different locations - ultimately causing a conflict. Similarly, an object must remain locked until any user's connected database has received the updated data for that object, lest a user start editing an old object without the latest updates.
Background: The application is somewhat latency-sensitive, making access to a central data center slow from remote locations. Like many content-focused systems, the read/write ratio is in the region of 4 to 1, making it a good candidate for a distributed architecture. If well managed, the latter will also work towards ensuring availability during site or network outages. I have used a somewhat unconventional multi-loop bi-directional replication topology. This keeps the complexity at a manageable level {2(n-1) ways}, adds resilience for site outages and allows for fairly simple addition or removal of sites. The slight drawback is that it may take up to 30 seconds for a transaction to be replicated between the most remote sites via the central master database. A more conventional design with direct replication between all sites would cut that time in half, but would also significantly increase the complexity of the configuration {n(n-1) ways}. With five locations that would mean 20-way replication as opposed to the 8-way replication in my design. This illustration shows my current test environment across data centers in Europe, Asia and North America. The production environment is expected to have additional locations. All the databases are Oracle 11.2.0.3 with Oracle GoldenGate 11.2.1.
My thoughts so far: I've been thinking along the lines of doing locking by inserting a row into a "locking" table over a database link to the central database, while letting the unlock (update or delete of the previously mentioned row) be replicated along with the updated data. On behalf of the user we must then check the availability of a lock in both the central and the local database before acquiring the lock and opening the object for editing. When editing is completed, we must release the lock in the local database, which will then replicate the changes and the release of the lock to all other locations via the central database. However, queries over a high-latency database link can sometimes be very slow (tests show anywhere from 1.5 seconds to 7 seconds for a single insert), and I'm not sure if we can guarantee that the update or delete statement that removes a lock is the last statement to be replicated.
Calling a remote PL/SQL procedure to do the checking and locking will at least limit the operation to a single remote query, but seven seconds is still a very long time. Something like two seconds would be more acceptable. I'm hoping the database links can be optimized somehow. There may also be additional issues, like trying to delete or update a row in the local locking table before that row has been successfully replicated from the central database. On the bright side, with this kind of solution it should be relatively simple to let the application enter a read-only state if communication with the central database is disrupted, or to redirect clients if a data center should become unavailable. Has anyone done anything similar? What might be the best way to approach this? Like I said initially, this is a rather complex solution; feel free to ask about anything left unclear or left out. |
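To make the "locking table" idea above concrete, here is a minimal sketch of what the central table and the remote call might look like. All object, link and value names are hypothetical; the point of the unique/primary key is that a double lock fails fast with a duplicate-key error instead of needing a separate existence check.
    -- Hypothetical names: global_locks, central_db (database link), 'DOC-12345'.
    -- On the central database:
    CREATE TABLE global_locks (
        object_id   VARCHAR2(100) NOT NULL,
        locked_by   VARCHAR2(100) NOT NULL,
        locked_at   TIMESTAMP DEFAULT SYSTIMESTAMP,
        CONSTRAINT pk_global_locks PRIMARY KEY (object_id)
    );

    -- From a regional site: one round trip over the database link. A duplicate
    -- key error (ORA-00001) means someone else already holds the lock.
    INSERT INTO global_locks@central_db (object_id, locked_by)
    VALUES ('DOC-12345', 'user_eu_01');
    COMMIT;

    -- Releasing the lock locally, letting GoldenGate replicate the delete
    -- along with the object's updated data:
    DELETE FROM global_locks WHERE object_id = 'DOC-12345';
    COMMIT;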
SSDT 2010 and SQL70590 errors Posted: 06 Mar 2013 12:14 PM PST I have Visual Studio 2010 SP1 with SSDT (with the Dec 2012 update). I have created a new SQL Server project and imported objects from a database on a SQL Server 2008 R2 instance. The database has a compatibility mode of 80, and I can't change this at this time. Anyway, when I do a build, which has to be clean in order for me to do a schema compare, it is throwing The database collation is set to |
Is there a way to write intelligent queries against a merge table in MySQL that represents a bunch of tables partitioned by date? Posted: 06 Mar 2013 01:33 PM PST I have a set of MySQL tables that are partitioned by date, with a merge table representing all of them. For the query I am writing, the results are joined by date, so rows in table A_2012-12-05 will only join to rows in B_2012-12-05, and rows in A_2012-12-06 will only join to rows in B_2012-12-06, etc. Is there a way to write an intelligent query so that MySQL won't go looking for rows in B_2012-12-06 to join with rows in A_2012-12-05? Edit by RolandoMySQLDBA 11:17 EDT Please insert the output of these into this section: They all look pretty much like this: |
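MRG_MyISAM merge tables don't prune underlying tables the way native partitioning does, so a common workaround is to store the date as an explicit, indexed column in every underlying table and repeat it in both the join condition and the WHERE clause. It won't skip the wrong-day tables entirely, but index lookups in them return nothing quickly. A sketch with assumed names:
    -- a_merge/b_merge are the merge tables; some_id and log_date are assumed columns
    -- present (and indexed, e.g. as (log_date, some_id)) in every underlying table.
    SELECT a.*, b.*
    FROM   a_merge a
    JOIN   b_merge b
           ON  b.some_id  = a.some_id
           AND b.log_date = a.log_date          -- keeps the join within the same day
    WHERE  a.log_date BETWEEN '2012-12-05' AND '2012-12-06';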
Mysql innoDB write operations are extremely slow Posted: 06 Mar 2013 07:33 PM PST I'm having serious performance problems with MySQL and the InnoDB engine. Even the simplest table makes write operations (creating the table, inserting, updating and deleting) horribly slow, as you can see in the following snippet. I have been looking at htop and the long waiting times are not because of abnormal CPU load. It's almost zero, and memory usage is also normal. If I create the same table using the MyISAM engine, then it works normally. My my.cnf file contains this (if I remember right, I haven't changed anything from the default Debian configuration): I have also tried restarting the server, but it doesn't solve anything. The slow query log doesn't give any extra information. |
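On a default Debian config, symptoms like this (near-zero CPU, only InnoDB affected) very often come from the per-commit log flush (innodb_flush_log_at_trx_commit = 1 by default) hitting a disk with no write cache, since MyISAM does no such flush. Two quick, hedged experiments to confirm:
    -- 1) Batch the test statements in one transaction so there is a single log flush:
    START TRANSACTION;
    -- ... the same INSERT/UPDATE/DELETE statements from the snippet ...
    COMMIT;

    -- 2) Or relax durability temporarily (can lose up to ~1s of transactions on a crash):
    SET GLOBAL innodb_flush_log_at_trx_commit = 2;
If either change brings the timings back to normal, the bottleneck is the synchronous log flush rather than MySQL itself.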
How large should mysql innodb_buffer_pool_size be? Posted: 06 Mar 2013 05:38 PM PST I have a busy database with solely InnoDB tables, which is about 5GB in size. The database runs on a Debian server using SSD disks, and I've set max connections = 800, which sometimes saturates and grinds the server to a halt. The average number of queries per second is about 2.5K. So I need to optimize memory usage to make room for the maximum possible connections. I've seen suggestions that innodb_buffer_pool_size should be up to 80% of the total memory. On the other hand, I get this warning from the tuning-primer script: Here are my current innodb variables: A side note that might be relevant: I see that when I try to insert a large post (say over 10KB) from Drupal (which sits on a separate web server) into the database, it lasts forever and the page does not return correctly. Regarding this, I'm wondering what my innodb_buffer_pool_size should be for optimal performance. I appreciate your suggestions to set this and other parameters optimally for this scenario. |
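A common sizing check is to compare the buffer pool with the actual InnoDB data + index footprint; with only about 5 GB of InnoDB data, a pool somewhat larger than that footprint is usually enough, leaving the rest of the RAM for the OS and the per-connection buffers of 800 connections. A hedged query to measure the footprint (information_schema sizes are approximate):
    -- Total InnoDB data + indexes, in GB.
    SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS innodb_gb
    FROM   information_schema.tables
    WHERE  engine = 'InnoDB';

    -- innodb_buffer_pool_size itself is set in my.cnf and, on MySQL 5.5/5.6,
    -- requires a restart to change, e.g.:
    --   innodb_buffer_pool_size = 6G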
NuoDB with Amazon S3 Posted: 06 Mar 2013 01:31 PM PST I am thinking about developing my new mobile and web app against NuoDB. I believe in their technology. The things I care about most are backup and restore in case of data corruption, and read/write speed scalability. From what I've read, you can use NuoDB with Amazon S3 as the archive node for storing the data. That looks very useful, because you aren't limited in how much data you can store in a single bucket. Furthermore, it's easy to back up the data, because you can detach the archive node and you get a backup of the data, which you can restore by attaching the archive node back and letting the other node sync with it. Because I expect a lot of data (terabytes), it seems like a good solution, because I will not need to export the data, compress it and store it on EBS volume(s). 1) What I wanted to know is: how efficient is S3 as an archive node with NuoDB for high read/write scenarios? 2) Is this a good backup option to use with NuoDB (the one that I described above)? 3) Or would I be better off using MySQL load balancers? I've read that there are many solutions that you can use on Amazon RDS to scale the DB horizontally without any code and DB modification (ScaleBase?). |