[how to] Any solution to fix circular replication fail-over?
- Any solution to fix circular replication fail-over?
- SUM 1 column with same ID and SELECT multiple columns from multiple tables
- Duplicate GROUP_CONCAT values
- Why monitor bytes sent and received on mysql
- SQL Server freezes (because of application), need logging
- Grouping results based on link table
- How to optimize table_cache?
- Will table partitioning speed up my queries?
- Where should a default setting field be?
- Create multiple columns from a single column filtering criteria?
- How would you track DB storage use and then extrapolate it?
- What does the wait type PREEMPTIVE_XE_CALLBACKEXECUTE mean?
- bad ELF interpreter while installing Oracle 9.2.0.4 in Red Hat 6.2
- Transactional Replication - Snapshot metrics
- How to determine Oracle LOB storage footprint?
- Where to store versioned/historized records?
- Update column value in entire database
- Filtering results by date [migrated]
- MySQL Master-Slave setup: master is not indexed and slave is indexed
- "Mysql2::Error: Table doesn't exist" after power outage
- InnoDB tables inaccessible after reboot
- Fulltext stoplist replication
- Process attempted to unlock a resource it does not own
- compare the same table
- sql server database sharding - what to do with common data / non sharded data
- ProcessID Owner with third party app
- Merge two packages into one package in Oracle 10g
- Sys.WebForms.PageRequestManager Error in SSRS
- Could not open File Control Bank (FCB) for invalid file ID X when restoring from snapshot
Any solution to fix circular replication fail-over? Posted: 01 Jul 2013 08:45 PM PDT Here's my current situation: I've developed a P.O.S. system for our corporation, which has 25+ branches across the country. Each branch has its own P.O.S. database on a WAMP server. I want every event that occurs at any branch to be reflected at the other branches. Some people advised me to use a ring topology (circular replication), but it has a big problem: when one server goes down, replication stops and events go through an infinite loop. Please advise. |
SUM 1 column with same ID and SELECT multiple columns from multiple tables Posted: 01 Jul 2013 07:00 PM PDT I would like to SUM the values of the TimeTook column when the MPRO column is the same. I also need to grab data from multiple tables and select multiple columns. I have two queries with the information and need to know how to combine them into one. |
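A sketch of how the two queries might be combined, assuming hypothetical table and column names (the question does not show its schema): join the tables, group by the shared MPRO column, and SUM inside the grouped query.

```sql
-- All table names besides the MPRO/TimeTook columns are placeholders.
SELECT j.MPRO,
       j.JobName,
       c.CustomerName,
       SUM(t.TimeTook) AS TotalTimeTook
FROM   jobs j
JOIN   time_entries t ON t.MPRO = j.MPRO
JOIN   customers c    ON c.CustomerID = j.CustomerID
GROUP BY j.MPRO, j.JobName, c.CustomerName;
```

Every non-aggregated column in the SELECT list must also appear in the GROUP BY clause.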
Duplicate GROUP_CONCAT values Posted: 01 Jul 2013 02:13 PM PDT I am writing a plugin for WordPress using a custom post type and an additional alias table to hold repetitions of the post events. The problem comes with duplicate post meta values. wp_post_meta table: wp_post table: wp_calendar_alias MySQL QUERY: The query gives me a single-row result, but the post_meta_values repeat the same values. post_meta_keys would as well, except I applied DISTINCT to it. The problem is that post_meta_values may have the same meta_value but for different meta_keys. I am still a MySQL novice, so I am not sure what to do. It looks like it applies the alias results to every post_meta result. Not sure how to make it not do that. Thanks, Anthony |
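The duplication described here is typically join fan-out: joining two one-to-many tables multiplies rows before GROUP_CONCAT runs. One hedged workaround, using simplified WordPress-style names, is to concatenate key and value together so DISTINCT can still distinguish equal values stored under different keys:

```sql
-- Simplified sketch; the real query in the question also joins the alias table.
SELECT p.ID,
       GROUP_CONCAT(DISTINCT CONCAT(m.meta_key, '=', m.meta_value)
                    ORDER BY m.meta_key SEPARATOR ';') AS metas
FROM   wp_posts    p
JOIN   wp_postmeta m ON m.post_id = p.ID
GROUP BY p.ID;
```

Because DISTINCT now sees the key=value pair, two identical values under different keys are kept, while true duplicates caused by the alias-table fan-out collapse to one.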
Why monitor bytes sent and received on mysql Posted: 01 Jul 2013 03:02 PM PDT I want to know why we should monitor bytes sent and received on MySQL. I hope I have not asked a vague question. Thanks. |
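For context, MySQL exposes these counters as status variables; sampling them periodically and taking the difference gives network throughput per interval:

```sql
-- Cumulative byte counts since server start; diff two samples to get a rate.
SHOW GLOBAL STATUS LIKE 'Bytes_sent';
SHOW GLOBAL STATUS LIKE 'Bytes_received';
```

Sudden jumps in these counters can point at queries returning far more data than expected (SELECT * over wide tables, missing WHERE clauses), or at replication and backup traffic competing with the application.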
SQL Server freezes (because of application), need logging Posted: 01 Jul 2013 04:58 PM PDT We have an application running on top of our SQL Server 2005 instance, and a couple of times a week this application (unannounced) will cause the SQL Server to freeze. I can't even restart the SQL Server service; I have to restart the entire machine. Needless to say, I can't open a query window to run
At the time it freezes, the CPU is pegged at 90-97% and memory is maxed at 8 GB. The server has 12 GB, but max server memory is set to 8192 MB for SQL Server. |
Grouping results based on link table Posted: 01 Jul 2013 01:52 PM PDT I have three tables: users, groups, and user_groups. I want to create a query that returns an array of group names, each containing an array of users. I don't want to return groups which have no users, but I would like to return users which have no group. At the moment I am getting the groups which have members first, then iterating through the groups and adding users. Is there a way to combine these queries? |
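One way to combine the two passes into a single query, as a sketch with assumed column names: start from users and LEFT JOIN toward groups, so users without a group survive (with a NULL group) while groups with no users never appear.

```sql
-- Column names (id, name, user_id, group_id) are assumptions.
SELECT g.name AS group_name,   -- NULL for users with no group
       u.name AS user_name
FROM   users u
LEFT JOIN user_groups ug ON ug.user_id  = u.id
LEFT JOIN groups g       ON g.id        = ug.group_id
ORDER BY g.name, u.name;
```

The application layer can then fold the flat rows into the nested groups-of-users structure in a single pass.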
How to optimize table_cache? Posted: 01 Jul 2013 12:22 PM PDT I searched Google but couldn't find an exact answer; some set this value to 10000 or even 20000, while others say that even 1000 is too much for 1 GB of RAM. Confusion. My number of But also some people say that the problem is usually in creating a lot of temp tables; as we can see from the stats, that's not my case. What's the ratio between thanks |
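As background, the usual way to judge this setting is to compare the cache size against the table-open counters (the variable is named table_open_cache from MySQL 5.1 on; older servers call it table_cache):

```sql
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
SHOW GLOBAL STATUS LIKE 'Open_tables';    -- tables currently held open
SHOW GLOBAL STATUS LIKE 'Opened_tables';  -- cumulative opens since startup
```

If Opened_tables grows quickly relative to uptime while Open_tables sits at the cache limit, the cache is likely too small; if Open_tables stays well below the limit, raising it further buys nothing.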
Will table partitioning speed up my queries? Posted: 01 Jul 2013 04:56 PM PDT My database is set up with what we call our staging tables and our live tables. Currently we have about 2 million rows in a table called products. Our customers, which we call manufacturers, load their data into the staging tables, and when they are done updating, adding, and deleting data, we push that data into our live tables. These live tables feed a web service for a mobile app and websites. The push of data from staging to live consists of deleting all the data in the live tables and inserting all of the data from the staging tables. All of this is segregated by a column called Some manufacturers have 500 products, some have 75,000. Depending on this, we sometimes have REALLY REALLY slow web service responses because of paging through all of the 2 million records. Deleting the data from the live tables also seems to be getting ridiculously slow. Would partitioning my products table by |
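If the segregating column is a manufacturer ID, a range-partitioning sketch in T-SQL might look like the following (all names and boundary values are placeholders; the actual column name is cut off in the digest). The main win for the delete-and-reload cycle is that a whole partition can be switched out almost instantly instead of deleted row by row:

```sql
-- Hypothetical: partition the live products table by a ManufacturerID column.
CREATE PARTITION FUNCTION pfManufacturer (int)
    AS RANGE RIGHT FOR VALUES (100, 200, 300);  -- placeholder boundaries

CREATE PARTITION SCHEME psManufacturer
    AS PARTITION pfManufacturer ALL TO ([PRIMARY]);

CREATE TABLE dbo.products_live (
    ManufacturerID int           NOT NULL,
    ProductID      int           NOT NULL,
    Name           nvarchar(200) NOT NULL
) ON psManufacturer (ManufacturerID);
```

Note that partition elimination only speeds up reads when queries actually filter on the partitioning column; queries that scan across manufacturers see little benefit.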
Where should a default setting field be? Posted: 01 Jul 2013 10:40 AM PDT I am creating a database to track occurrences of incidents on buses, based on an existing system. Currently the drivers can set a default bus number, which is used to pre-populate the bus number on the incident form. They can then change it if they are driving another bus. This is a selection made by the driver, not a bus assigned to them, per se. I was going to put this field in the staff table, not the bus table. However, someone said I should not, and that I should put it in the bus table. I guess it would technically work either way, but I don't feel like the field belongs to the bus - it belongs to the staff member. Am I overthinking this? |
Create multiple columns from a single column filtering criteria? Posted: 01 Jul 2013 11:25 AM PDT Thanks in advance for your time. What I need is to generate a multiple-column result from a single query. The thing is this: I have a table storing the following: What I need is to get the following result: I have tried using GROUP BY like this: But the output is far from what I am expecting to get: The idea is to get the proper object returned, to avoid overloading the server-side code with unnecessary processing, so if the database can do the handling in one call that would be great. I googled for hours and all I could find were results for 'single column from multiple rows', which is not what I need. Thanks for your help in advance. The final result set with many users should look like this: Updated: After more research I found this: http://stackoverflow.com/questions/14172282/sql-query-needed-to-get-result-in-two-columns-grouped-by-type-and-percentage-in This is almost what I need; after some editing I still can't get it to return plain numbers instead of percentages. I lack the knowledge for working with grouped CASE expressions; I will keep trying, and any help is greatly appreciated. Solution: I managed to do this. It was easy, but my lack of knowledge got in the way. I read the chapter "Generating Summaries" in the MySQL Cookbook, which explains that you can use COUNT, SUM, and other aggregate functions inside your query to compute the values of the columns you define. This is the query I ended up using: This will fill those values by counting only when the criteria are met. |
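The "Generating Summaries" technique the poster landed on is conditional aggregation; a minimal sketch with invented table and column names:

```sql
-- Counts per user, split into columns by a status column (names are hypothetical).
SELECT user_id,
       COUNT(CASE WHEN status = 'open'   THEN 1 END) AS open_count,
       COUNT(CASE WHEN status = 'closed' THEN 1 END) AS closed_count
FROM   events
GROUP BY user_id;
```

COUNT ignores the NULL that CASE yields when its condition is false, so each output column counts only the rows matching its criterion, which gives plain numbers rather than percentages.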
How would you track DB storage use and then extrapolate it? Posted: 01 Jul 2013 09:26 AM PDT I need to track the size of our various databases and their growth rate, so that we can estimate where they will be at certain times. I'm using Oracle 11g first, but will eventually apply this to other DBMSs like DB2 and Sybase. I'm not even sure where to begin or what commands/queries I'll need to run. |
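For the Oracle 11g starting point, allocated and used space can be read from the data dictionary; capturing these numbers on a schedule into a history table provides the trend data to extrapolate from:

```sql
-- Total space allocated to data files:
SELECT SUM(bytes) / 1024 / 1024 AS total_mb
FROM   dba_data_files;

-- Space actually used by segments, broken down by schema:
SELECT owner,
       SUM(bytes) / 1024 / 1024 AS used_mb
FROM   dba_segments
GROUP BY owner
ORDER BY used_mb DESC;
```

A simple linear fit over the stored samples is usually enough for a first-pass capacity forecast; the equivalent views differ per DBMS (DB2 and Sybase expose their own catalog tables).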
What does the wait type PREEMPTIVE_XE_CALLBACKEXECUTE mean? Posted: 01 Jul 2013 10:15 AM PDT I have two instances in the last week where a Can anyone explain what this wait type means? More background info: In the first instance, I ran a T-SQL command to drop the session and it completed a day later. Using During this time, Object Explorer's queries to gather metadata were being blocked and receiving lock timeouts, but there were no other complaints (or so I thought at the time). I tried to drop another session on Friday, and the same behavior occurred, except it didn't just go away after a day as with the first event. Instead, I found out this morning that the client application could not connect. It was being blocked by the A restart of the SQL service cleared out the blocking query. I can't find anything that helps to diagnose what this wait type is... and why my event sessions won't drop like they should. Can you help? Server info: SQL 2008 R2 Enterprise with SP2 |
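For reference, the drop command involved would be of this shape (the session name is a placeholder), and sys.dm_xe_sessions shows which Extended Events sessions are currently running on the server:

```sql
-- List running Extended Events sessions:
SELECT name, create_time
FROM   sys.dm_xe_sessions;

-- Drop one by name (placeholder session name):
DROP EVENT SESSION [MySession] ON SERVER;
```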
bad ELF interpreter while installing Oracle 9.2.0.4 in Red Hat 6.2 Posted: 01 Jul 2013 08:56 AM PDT Fellow DBAs, when trying to run runInstaller for Oracle 9.2.0.4 on a Red Hat server I get the following error: Both the installer disks and the OS are 64-bit. Since we haven't installed that version of Oracle on that version of Red Hat before, we wonder whether the OS is incorrectly installed (i.e. missing libraries) or whether Oracle 9.2.0.4 simply won't run on Red Hat 6.2. Currently we don't have Oracle support. Please shed some light on this. |
Transactional Replication - Snapshot metrics Posted: 01 Jul 2013 08:00 PM PDT I have an application that is using Transactional Replication. We are using snapshots to initialize the subscriptions. All publishers and subscribers are using SQL Server 2008. Due to the way our deployment works, I end up having to reinitialize the subscribers every time we do a build. This is time consuming and annoys the business, so I have to make sure that our developers are aware of the impact their changes will have on the size of the snapshots and the time it will take to apply them. I also want to measure the impact of settings changes that I make on the publications and agents (bcp threads, batch size, etc). The metrics I want to capture are:
Is it possible to get this information by querying the distribution database? I realize that I could get the snapshot size by looking at the filesystem after it is generated, but I would prefer to avoid that if possible. |
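A hedged starting point: the distribution database keeps snapshot agent history, so generation timing can be read from there (exact columns may vary by version; the ones below are believed correct for SQL Server 2008):

```sql
-- Run in the distribution database.
SELECT TOP (20)
       h.agent_id,
       h.start_time,
       h.duration,     -- elapsed seconds of the agent session
       h.comments
FROM   dbo.MSsnapshot_history AS h
ORDER BY h.start_time DESC;
```

Snapshot file size itself is not recorded here, so combining this history with a scripted check of the snapshot folder may still be the most practical approach.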
How to determine Oracle LOB storage footprint? Posted: 01 Jul 2013 08:41 AM PDT With Timing is fairly easy to profile, but what's the easiest way to get a reasonably accurate measurement of how much space a specific LOB column takes up? |
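One sketch for the space question: LOBs are stored in their own segments, so joining DBA_LOBS to DBA_SEGMENTS reports allocated space per LOB column (the schema and table names are placeholders; note this measures allocated segment space, not the logical lengths DBMS_LOB.GETLENGTH would sum):

```sql
SELECT l.column_name,
       s.segment_name,
       s.bytes / 1024 / 1024 AS lob_mb
FROM   dba_lobs     l
JOIN   dba_segments s
  ON   s.owner        = l.owner
 AND   s.segment_name = l.segment_name
WHERE  l.owner      = 'MYSCHEMA'
  AND  l.table_name = 'MYTABLE';
```

Allocated space includes undo/versioning overhead and freed-but-unreclaimed chunks, so it can run noticeably higher than the raw data volume.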
Where to store versioned/historized records? Posted: 01 Jul 2013 11:38 AM PDT In our application we are versioning records using triggers. The application is supposed to handle millions of records in the future, and the history/versioning tables are there to ensure that there is an audit trail for various transactions. Currently everything is stored in one database. My concern is that these tables will bloat the transactional database, and I really want to move all record history into another database, mainly because:
Are there any performance advantages to choosing this strategy or is the extra database another moving part which is not needed? What is the prescribed solution for this sort of thing? |
Update column value in entire database Posted: 01 Jul 2013 11:01 AM PDT In my database, I have around 30 tables, each with an How can I update the value of |
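Since the actual column name is cut off in the digest, here is a generic sketch: MySQL's information_schema can generate one UPDATE per table that actually contains the column (common_col and your_db are placeholders):

```sql
-- Emits an UPDATE statement for every table in the schema that has the column.
SELECT CONCAT('UPDATE `', table_name, '` SET common_col = 0;') AS stmt
FROM   information_schema.columns
WHERE  table_schema = 'your_db'
  AND  column_name  = 'common_col';
```

The generated statements can then be run by hand, or built into dynamic SQL with PREPARE/EXECUTE inside a stored procedure.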
Filtering results by date [migrated] Posted: 01 Jul 2013 08:32 AM PDT I have the following query: I want to select all items before '06/26/2013', INCLUDING '06/26/2013', but the result is everything before '06/26/2013' NOT including '06/26/2013'. Any ideas how to do that? I tried >= but nothing changed. |
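The usual culprit is a time component in the column: '06/26/2013' parses as midnight, so rows from later that day fall outside a <= comparison. A robust fix is a half-open interval against the next day; a sketch assuming T-SQL (the literal format suggests SQL Server) with an invented table and column:

```sql
-- Includes all of 2013-06-26 regardless of time-of-day values.
SELECT *
FROM   orders
WHERE  order_date < DATEADD(day, 1, '20130626');
```

The half-open form also works unchanged for date, datetime, and datetime2 columns, which BETWEEN-with-23:59:59 endpoints do not.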
MySQL Master-Slave setup: master is not indexed and slave is indexed Posted: 01 Jul 2013 12:56 PM PDT Is it possible to have a MySQL master-slave setup where the master has no indexes (for faster insertion) and the slave will be indexed? Thanks in advance! |
"Mysql2::Error: Table doesn't exist" after power outage Posted: 01 Jul 2013 09:31 AM PDT The power went out while my site was running, and now when trying to access my site running locally on my Mac I'm getting this error: Is there anything I can do other than dropping the database and recreating it? EDIT: I'm getting this when trying to drop the database: |
InnoDB tables inaccessible after reboot Posted: 01 Jul 2013 11:32 AM PDT After each reboot I can't see my InnoDB tables. However, when I use the command line I can see my tables but cannot access them; when I try to access a table it says the table doesn't exist. Edit by RolandoMySQLDBA: Please run the following in MySQL: Please run the following in the OS: Hi RolandoMySQLDBA, thank you for your quick reply. Here are all those outputs. My data directory is not in the "/var/lib/mysql" path; I changed it to "/var/www/xampp/mysql/data". So I am going to execute the following code. Hi Mannoj, these tables were created in this same database. I have some errors in my "/var/log/mysql/error.log" that I don't understand. Here they are: Thanks to both of you. |
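For what it's worth, a common cause after relocating the data directory is that the shared tablespace and log files (ibdata1, ib_logfile*) did not move along with the per-table .frm files, so InnoDB's data dictionary no longer matches what MySQL sees on disk. A hedged my.cnf sketch for the path mentioned in the question; innodb_force_recovery should only ever be raised temporarily, to dump data:

```ini
# Assumption: datadir was relocated as described in the question.
[mysqld]
datadir = /var/www/xampp/mysql/data
# Uncomment only to salvage data with mysqldump; remove afterwards:
# innodb_force_recovery = 1
```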
Fulltext stoplist replication Posted: 01 Jul 2013 07:45 PM PDT In MS SQL Server 2008 R2 there is replication of a table with a fulltext index on it, but the stoplist that is associated with the replicated fulltext index doesn't replicate. Is there any way to replicate the stoplist as well? |
Process attempted to unlock a resource it does not own Posted: 01 Jul 2013 01:45 PM PDT SQL Server 2005 SP4 32-Bit I have a DBCC CHECKDB job running nightly. Last night, soon after the job started, I got the errors below. The database is NOT in suspect mode, and CHECKDB comes back clean when I run it now. The database is fully accessible. Should I be concerned? I'd hate to go back to a backup at this point. |
compare the same table Posted: 01 Jul 2013 11:45 AM PDT I am facing an issue with the following query: when I execute it, it takes very long. I broke the query into two parts and compared them with a shell script, but is there any chance to do it with one query? Any suggestion welcome. |
sql server database sharding - what to do with common data / non sharded data Posted: 01 Jul 2013 06:45 PM PDT We have a very large-scale, enterprise-level database. As part of our business model, all web users hit our web servers at the same time each month, which in turn hammers our SQL box. The traffic is very heavy and continues to grow heavier as the company grows. SQL proc optimization has been performed, and hardware has already been scaled up to a very high level. We are now looking to shard the database to ensure that we can handle company growth and future loads. We have decided which particular data should be sharded; it is a highly utilized subset of our database. However, my question is about the non-sharded data which is common/universal. Examples of data like this might be an Inventory table, or possibly an Employee table, user table, etc. I see two options to handle this common/universal data: 1) Design 1 - Place the common/universal data in an external database. All writes occur there. This data is then replicated down to each shard, allowing each shard to read it and inner join to it in T-SQL procs. 2) Design 2 - Give each shard its own copy of all common/universal data. Let each shard write locally to these tables and use SQL merge replication to update/sync this data on all other shards. Concerns about design #1: 1) Transactional issues: if you have a situation in which you must write or update data in a shard and then write/update a common/universal table in one stored proc, for instance, you will no longer be able to do this easily. The data now exists on separate SQL instances and databases. You may need to involve MS DTC to see if you can wrap these writes into a transaction, since they are in a separate database. Performance is a concern here, and possible rewrites may be involved for procs that write to sharded and common data. 2) A loss of referential integrity: it is not possible to do cross-database referential integrity. 3) Recoding large areas of the system so that it knows to write common data to the new universal database but read common data from the shards. 4) Increased database trips: as in #1 above, when you run into a situation in which you must update sharded data and common data, you are going to make multiple round trips to accomplish this, since the data is now in separate databases. There is some network latency here, but I am not as worried about this issue as about the above three. Concerns about design #2: in design #2, each shard gets its own instance of all common/universal data. This means that all code that joins to or updates common data continues to work/run just like it does today. Very little recoding/rewriting is needed from the development team. However, this design completely depends on merge replication to keep data in sync across all shards. The DBAs are highly skilled and are very concerned that merge replication may not be able to handle this, and that should merge replication fail, recovery from the failure is not great and could impact us very negatively. I am curious to know if anyone has gone with design option #2. I am also curious to know if I am overlooking a third or fourth design option that I do not see. Thank you in advance. |
ProcessID Owner with third party app Posted: 01 Jul 2013 11:35 AM PDT I am having problems trying to audit a third-party app with a SQL Server backend. The app manages users and user access via a SQL table, using a single SQL login to access the databases. I am trying to audit this by using the host PID, much like Windows Task Manager associates a PID and owner with every process. Here is what I have attempted and tried
Any thoughts on how I can get the Owner? |
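One possible lead, offered as a hedged sketch: sys.dm_exec_sessions exposes the client-side process ID as host_process_id, which can then be matched to a PID and owner on the client machine:

```sql
SELECT session_id,
       host_name,
       host_process_id,   -- the client application's OS process ID
       login_name,
       program_name
FROM   sys.dm_exec_sessions
WHERE  is_user_process = 1;
```

The owner still has to be resolved on the client itself (for example via Task Manager or tasklist /v against that PID), since SQL Server only ever sees the single shared login.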
Merge two packages into one package in Oracle 10g Posted: 01 Jul 2013 11:39 AM PDT I want to combine two different packages into one single new package in Oracle 10g. Is there some way to accomplish this? |
Sys.WebForms.PageRequestManager Error in SSRS Posted: 01 Jul 2013 11:45 AM PDT When I execute an SSRS report I get the error below, shown at the bottom left with a yellow exclamation mark. What could be the cause of this error in SSRS? Please help! |
Could not open File Control Bank (FCB) for invalid file ID X when restoring from snapshot Posted: 01 Jul 2013 11:49 AM PDT We have a scenario in which we, during integration tests, make a backup of an existing database and restore it under a new name. We then create a snapshot of our "new" database. After each test we restore the database from the snapshot, so that different tests don't affect each other. This works well except in a specific scenario where the restore fails with the following error message:
The code that produces that looks like this: In almost all cases the above code works just fine; however, when the test executes a few specific SQL commands that use temporary tables AND performs deletes, the above error occurs. Note, however, that in other cases commands that use temporary tables work just fine. Likewise, other deletes work well too. Furthermore, this ONLY happens when we run the tests on our build server against a separate database machine. Running the tests locally against a local SQL Server instance works fine. So does running the tests locally against the same database server that the build server uses. Finally, note that other tests that use this rollback setup work fine on the build server. Even the rollback after the now-failing tests ran fine at a previous point in time, but has now stopped working. The only real difference between now and then seems to be that some additional data has been added. We're using SQL Server 2008 R2 and .NET 4. UPDATE: Based on the answer by Pondlife we attempted to run DBCC, but without success, as it reported the snapshot as suspect. We then checked the logs and found "Error: 17053, Severity: 16, State: 1.". Trying to find information about that, we found that one cause could be using VMware with a PVSCSI adapter configured. The build server does run on VMware (the DB server is native), but it is configured with an LSI Logic SAS adapter, not a PVSCSI adapter. UPDATE 2: We've narrowed this down to tests which cause relatively many (330) deletes. If we change a test whose teardown usually causes this error so that the actual test code only triggers a single delete instead of hundreds, it works fine. |
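Since the actual code is omitted in the digest, here is a generic sketch of the create/revert cycle being described (database, logical file, and path names are all placeholders):

```sql
-- Create the snapshot once, after restoring the backup under the new name.
-- NAME must match the logical data file name of the source database.
CREATE DATABASE TestDb_snap ON
    (NAME = TestDb_Data, FILENAME = 'C:\snapshots\TestDb.ss')
AS SNAPSHOT OF TestDb;

-- After each test, revert; open connections to TestDb must be closed first.
USE master;
ALTER DATABASE TestDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
RESTORE DATABASE TestDb FROM DATABASE_SNAPSHOT = 'TestDb_snap';
ALTER DATABASE TestDb SET MULTI_USER;
```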
You are subscribed to email updates from Recent Questions - Database Administrators Stack Exchange.