[how to] Subscriber database to be in sync using Distributor
- Subscriber database to be in sync using Distributor
- Get the list of all Tables, Views, Stored procedures that are not added in a publication for replication
- How to determine if there are [idle connections with] uncommitted transactions in PostgreSQL?
- MySQL upgrade 5.0.88 to latest
- Oracle's UTL_FILE
- SQL 2008: Why is *any* user, with minimal permissions able to restart/stop the SQL instance through Management Studio
- Transfer data from DynamoDB to SQL server 2008?
- How do I define a Standard Deviation function in Pentaho Schema Workbench
- How to handle update or insert situations
- Could too many idle connections affect PostgreSQL 9.2 performance?
- How to speed up queries on a large 220 million rows table (9 gig data)?
- How can I remove a bad execution plan from SQL Azure?
- SQL server slow on Xeon machine, fast on Corei7 x990
- SQL Index order and performance based on cardinality and data
- AS400 CL Programming - Export Summary data to Excel
- Column partially determining accepted values of another column, should be somehow normalized?
- Is a REINDEX required after CLUSTER?
- Main Considerations When Moving From MS Access Programming to SQL Server
- Should all queries where you expect a specific order, include an ORDER BY clause?
- Use single table with foreign key or two twin tables for text entries?
- What index to add in case of join by two optional fields
- Most efficient way to sort data fields into SQL
- Backing up a 400-500 GB database on MySQL/Amazon RDS?
- View SQL's errorlog in Amazon RDS
- Why would mysql "show global status" query be taking 15 minutes?
- Is there a way to export Oracle's UNDO?
- MySQL-5.5 InnoDB memory issue
- MySQL table relations, inheritance or not?
- Is there a way to do data export from Amazon Web Services RDS for SQL Server to On-Premises SQL Server?
Subscriber database to be in sync using Distributor Posted: 10 Apr 2013 08:59 PM PDT I have database D1 as a publisher and D2 as a subscriber, and they use a distributor 'DIST' which resides on the subscriber side. Both D1 and D2 have DR databases, called DR1 and DR2. I have set up transactional replication from D1 to D2, from D1 to DR1, and from D2 to DR2. While my production traffic goes to database D1, data is replicated to databases D2, DR1, and DR2, so obviously there might be some delay in the data transferred to DR1 and DR2. Let's say a disaster happens and my D1, DIST, and D2 database servers crash. In that case I have to divert my traffic to DR1 and set up transactional replication between DR1 and DR2 (with the NOSYNC option, as I don't want to reinitialize it), but because of the delay in data transfer there might be a data mismatch between DR1 and DR2. Is there any setting on the distributor that can guarantee that the data transferred to DR1 and DR2 is in the same transaction? Or is there any alternative way to handle this problem? Note: I am using SQL Server 2008 transactional replication. I cannot afford to reinitialize the replication between DR1 and DR2.
Get the list of all Tables, Views, Stored procedures that are not added in a publication for replication Posted: 10 Apr 2013 08:57 PM PDT How do I get the list of all tables, views, and stored procedures that are NOT added to a publication for replication? Can you help me write a stored procedure for that? I am new to replication. Thanks
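A minimal sketch of one way to approach this for transactional replication, assuming the publication database contains the dbo.sysarticles table that replication creates there; the object-type filter and aliases are my own:

    -- Run in the publication database: list user tables, views and
    -- stored procedures that no article in any publication covers.
    SELECT o.name, o.type_desc
    FROM sys.objects AS o
    LEFT JOIN dbo.sysarticles AS a
        ON a.objid = o.object_id
    WHERE o.type IN ('U', 'V', 'P')   -- tables, views, procedures
      AND o.is_ms_shipped = 0
      AND a.objid IS NULL;

Wrapping this in a stored procedure is then just a matter of putting a CREATE PROCEDURE around the query.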
How to determine if there are [idle connections with] uncommitted transactions in PostgreSQL? Posted: 10 Apr 2013 08:33 PM PDT According to a comment on this question I asked about idle connections in PostgreSQL 9.2, some uncommitted transactions (possibly related to some of those idle connections) might cause some performance issues. What is a good way to determine if there are uncommitted transactions (bonus points if there is a way to know if the connection they're on is idle or not)? Thanks very much!
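A hedged sketch: in PostgreSQL 9.2 the pg_stat_activity view gained a state column, so sessions that are sitting idle while still holding an open transaction can be listed directly; the ordering is my own choice:

    -- Idle sessions with an open (uncommitted) transaction
    SELECT pid, usename, state, xact_start, state_change
    FROM pg_stat_activity
    WHERE state = 'idle in transaction'
    ORDER BY xact_start;  -- oldest open transaction first

A state of 'idle in transaction' answers both parts of the question at once: the backend is idle and its transaction has not been committed.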
MySQL upgrade 5.0.88 to latest Posted: 10 Apr 2013 07:07 PM PDT Very interesting and informative blog. I have a question: I have a requirement to upgrade from 5.0.88 to the latest version.
Please do guide me. Thanks, Devi
Oracle's UTL_FILE Posted: 10 Apr 2013 06:49 PM PDT I am learning Oracle PL/SQL by myself. I need to create a procedure that exports a table to a CSV file, and for that I am thinking of UTL_FILE. Is there any good book or site to get me started? I already looked and found some samples that seem good, like this one: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::p11_question_id:88212348059 but they come with few explanations, and the Oracle docs are not very user friendly. I come from JavaScript and don't have much experience with databases; from the samples I see on the internet, PL/SQL scripts look like C to me.
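A minimal PL/SQL sketch of the UTL_FILE pattern in question; the directory object DATA_DIR, the table name and the columns are all hypothetical:

    CREATE OR REPLACE PROCEDURE export_to_csv IS
      v_file UTL_FILE.FILE_TYPE;
    BEGIN
      -- DATA_DIR must be an existing Oracle directory object you can write to
      v_file := UTL_FILE.FOPEN('DATA_DIR', 'export.csv', 'w');
      FOR rec IN (SELECT id, name FROM some_table) LOOP
        UTL_FILE.PUT_LINE(v_file, rec.id || ',' || rec.name);
      END LOOP;
      UTL_FILE.FCLOSE(v_file);
    EXCEPTION
      WHEN OTHERS THEN
        -- close the handle on error, then re-raise
        IF UTL_FILE.IS_OPEN(v_file) THEN
          UTL_FILE.FCLOSE(v_file);
        END IF;
        RAISE;
    END;
    /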
SQL 2008: Why is *any* user, with minimal permissions able to restart/stop the SQL instance through Management Studio Posted: 10 Apr 2013 05:45 PM PDT So I'm on a project where I need to help administer SQL Server 2008. I'm coming from a MySQL background. Why is it that a basic user, with the most basic/default permissions, is able to restart or stop the entire SQL instance through Management Studio, but not view a simple table?
Transfer data from DynamoDB to SQL server 2008? Posted: 10 Apr 2013 03:23 PM PDT I want to transfer data from Amazon DynamoDB to SQL Server 2008. What tools or methods can be used to do this?
How do I define a Standard Deviation function in Pentaho Schema Workbench Posted: 10 Apr 2013 07:34 PM PDT I'm building an OLAP analysis with Pentaho's BI Suite (Community Edition). Many of my measures are standard deviations of the variables in my fact tables. Does someone have a tip on how to define a standard deviation aggregation function in Schema Workbench? Lots of my jobs could benefit from it.
How to handle update or insert situations Posted: 10 Apr 2013 03:24 PM PDT From time to time my stored procedures need to update a row, or insert it if it doesn't exist yet; probably there is a fault in the design of the app which calls the sproc. Is there a better way to achieve updating or inserting data in one approach?
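One common answer for SQL Server is a single MERGE statement; this is a minimal sketch with hypothetical table, column, and parameter names, not the asker's actual code:

    -- Inside a procedure taking @CustomerId and @Name as parameters
    MERGE dbo.Customers AS t
    USING (SELECT @CustomerId AS CustomerId, @Name AS Name) AS s
        ON t.CustomerId = s.CustomerId
    WHEN MATCHED THEN
        UPDATE SET Name = s.Name
    WHEN NOT MATCHED THEN
        INSERT (CustomerId, Name) VALUES (s.CustomerId, s.Name);

Note that under concurrent callers MERGE alone is not race-free; a HOLDLOCK hint on the target table is the usual addition.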
Could too many idle connections affect PostgreSQL 9.2 performance? Posted: 10 Apr 2013 08:34 PM PDT Some queries on my database server seem to take a long time to respond, and I believe the CPU usage is high. I am also using PgBouncer with transaction-level pooling. I am suspecting that I can easily reduce the number of idle connections, but can too many idle connections affect PostgreSQL 9.2 performance in the first place? Thanks very much!
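A hedged sketch of how one might gauge the scale of the problem on 9.2, by counting backends per state in pg_stat_activity:

    -- How many connections are active, idle, or idle in transaction?
    SELECT state, count(*) AS connections
    FROM pg_stat_activity
    GROUP BY state
    ORDER BY connections DESC;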
How to speed up queries on a large 220 million rows table (9 gig data)? Posted: 10 Apr 2013 02:51 PM PDT The issue: We have a social site (MySQL) where members can rate each other for compatibility or matching. The user_match_ratings table now holds over 220 million rows (about 9 GB of data).
TABLE INFO: mysql> SHOW COLUMNS FROM user_match_ratings;
SAMPLE QUERY: The table has 3 indexes set up:
mysql> SHOW INDEX FROM user_match_ratings;
| Table              | Non_unique | Key_name                  | Seq_in_index | Column_name   | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
| user_match_ratings |          0 | PRIMARY                   |            1 | id            | A         |   220781193 | NULL     | NULL   |      | BTREE      |         |
| user_match_ratings |          1 | user_match_ratings_index1 |            1 | rater_user_id | A         |    11039059 | NULL     | NULL   |      | BTREE      |         |
| user_match_ratings |          1 | user_match_ratings_index1 |            2 | created_at    | A         |   220781193 | NULL     | NULL   |      | BTREE      |         |
| user_match_ratings |          1 | user_match_ratings_index2 |            1 | rated_user_id | A         |     4014203 | NULL     | NULL   |      | BTREE      |         |
| user_match_ratings |          1 | user_match_ratings_index2 |            2 | rater_user_id | A         |   220781193 | NULL     | NULL   |      | BTREE      |         |
| user_match_ratings |          1 | user_match_ratings_index3 |            1 | rated_user_id | A         |     2480687 | NULL     | NULL   |      | BTREE      |         |
So even with the indexes these queries are slow. My question: Would moving this table/data to another database, on a server with enough RAM to hold the data in memory, speed these queries up? Also, is there anything about the way the table/indexes are set up that we can improve on to make these queries faster?
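For reference, the kind of change often suggested for this pattern is a covering composite index, so the query can be answered from the index alone. This sketch assumes a query filtering on rater_user_id plus a created_at range, and the rating column name is my own guess:

    ALTER TABLE user_match_ratings
        ADD INDEX idx_rater_created_rating (rater_user_id, created_at, rating);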
How can I remove a bad execution plan from SQL Azure? Posted: 10 Apr 2013 02:33 PM PDT
(Source was bad indexes -> bad stats, etc. That's all fixed, but a bad plan won't go away.) UPDATE: I selected @mrdenny's solution as he got there first. I am, however, successfully using @Aaron Bertrand's script to perform the work. Thanks to everybody for the help!!
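On SQL Azure, where DBCC FREEPROCCACHE is not available, one approach that fits the question is to force recompilation of the affected object so the cached plan is discarded on next use; the procedure name below is hypothetical:

    -- Marks the object so its plan is recompiled on the next execution
    EXEC sp_recompile N'dbo.MyProblemProcedure';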
SQL server slow on Xeon machine, fast on Corei7 x990 Posted: 10 Apr 2013 02:33 PM PDT I have searched this problem but have had no luck finding any improvement. I have a complex application that uses stored procedures in SQL Server. Some of the stored procedures use cursors due to the complex nature of the SQL in them. A few months ago I purchased a new machine expecting an improvement in speed. The new machine is a single-CPU Xeon E5-2690 with 32 GB of RAM. The application itself runs faster on the new machine, as expected, but the database is much slower. One of the stored procedures is more than 20x slower: what takes seconds on the old machine takes minutes on the Xeon machine. The database and application are identical. I generate the DB from scripts and have done back-to-back tests with clean, empty databases; it is still many times slower. I have searched for answers and tried what little I could find, with no help. I have looked at the SQL Server settings on both machines many times and can't find a difference. I tried turning off HT on the Xeon because I found a post saying it helped someone, but it didn't help me. I have started to move some of the DB processing into the application to try and reduce the time penalty of running on the new Xeon. Any help would be appreciated. Cheers.
SQL Index order and performance based on cardinality and data Posted: 10 Apr 2013 06:09 PM PDT I've been working quite a bit on a query that's not operating very efficiently on a DB2 database. The thing will return up to 30k rows or so and take up to 10 seconds. I've been working on getting this thing to run faster and I think I've finally tracked it down to an improper index. A new and more appropriate index should fix the problem, but I'm not entirely sure how it should be ordered, as there appear to be different ways for different data. The query itself isn't terribly complicated. Here's a brief rundown: A few details about these tables:
The join between A and B is taking a third of the query cost. The explain plan shows a FETCH operation that uses a particular index four times. I was able to get a VERY detailed explain plan report and found that this operation is returning all 400k rows, then performing the join, which only needs about 26k of them. I found a short-circuit method that added this to the predicate: Logically this doesn't affect the result set, but it does tweak the optimizer. Adding this cut the query time in half. Checking the detailed explain showed that it added an additional filter factor that reduced the number of estimated rows in that FETCH operation to 26k or so, which is about right. This in turn reduced the estimate for the number of rows from A that would be returned overall, and enabled index scans on A that hadn't been used prior, due to the optimizer thinking it could be grabbing nearly half of A. The result was a much faster query. However, the COALESCE is a hacky bit and not suited for production, although it was handy for finding an issue. So now it falls to creating a new index on B to fix this problem. The DB2 query analyzer suggested an index on all six fields, starting with B.TYPE_CODE and following with the rest in no particular logical order. This would cover all selected fields and the table itself would not need to be touched, which certainly has its advantages. It has also been suggested that an index on (B.A_ID1, B.A_ID2, B.TYPE_CODE) could be used, as the more selective columns come first, which would narrow the results faster. I've seen different suggestions based on the cardinality of the data, so I'm wondering if anyone has some advice on how to construct the index here. I've been doing a lot of reading on indexes lately, but it can be difficult to find a good guide on certain aspects.
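For reference, that second suggestion expressed as DDL; the index name is hypothetical:

    -- Most selective columns first, per the suggestion in the question
    CREATE INDEX b_join_idx ON B (A_ID1, A_ID2, TYPE_CODE);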
AS400 CL Programming - Export Summary data to Excel Posted: 10 Apr 2013 07:35 PM PDT I'm new here! I can already export detailed reports from an AS400 QM query (WRKQMQRY). But here's the issue: I'm summing data and then returning the summary data, and I want to produce an Excel file. Here's my code for creating the report file: This command executes the query, RPTHLDITMS, then saves the results to a flat file, RPTHLDITMX. Thanks!
Column partially determining accepted values of another column, should be somehow normalized? Posted: 10 Apr 2013 02:57 PM PDT I have a table that describes a set of properties (yes, it's metadata; in this case there's a valid reason to have it in the database). Among other things it records the type of the data that can be assigned to each property (Type) and a default value. The valid values for the default value are thus restricted by the Type column, but the default value is genuinely an attribute of the table; it is not uniquely determined by the Type column. I'm not sure though: should this be somehow normalized, or is it right as it is? Edit: here's an approximate description of the current structure of the relevant part of the database, as requested. Don't mind the Values table, that's a separate issue. Just consider that ValueType restricts the set of permitted DefaultValue values. Also, consider this only as an example. I'm interested in the problem in general, namely columns that limit the set of valid values of another column but don't determine its exact value, and thus, as far as I understand, don't constitute multivalued dependencies. (As an aside, it would be useful to have an integrity constraint that enforces the limit; I'm not sure this issue can be separated from the former.)
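A hedged sketch of one standard way to get the integrity constraint mentioned in the aside: a lookup table of allowed (Type, Value) pairs and a composite foreign key; all names are hypothetical:

    CREATE TABLE AllowedValues (
        Type  varchar(50)  NOT NULL,
        Value varchar(255) NOT NULL,
        PRIMARY KEY (Type, Value)
    );

    CREATE TABLE Properties (
        Name         varchar(50)  NOT NULL PRIMARY KEY,
        Type         varchar(50)  NOT NULL,
        DefaultValue varchar(255) NOT NULL,
        -- the (Type, DefaultValue) pair must exist in AllowedValues,
        -- so Type restricts, without determining, the default
        FOREIGN KEY (Type, DefaultValue)
            REFERENCES AllowedValues (Type, Value)
    );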
Is a REINDEX required after CLUSTER? Posted: 10 Apr 2013 04:35 PM PDT I'm considering using CLUSTER to reorder a table by an index. I understand that this recreation of the table data makes all the existing indexes either bloat or be useless. I've seen some indications that a REINDEX is required after a CLUSTER. I've found other references that indicate that CLUSTER does a REINDEX. The official documentation says nothing at all about REINDEX being part of CLUSTER or required (although it does suggest running ANALYZE after the CLUSTER). Can anyone definitively (i.e. with some sort of reference to official docs) say whether or not a REINDEX is required after a CLUSTER?
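For reference, the sequence the question attributes to the documentation, written out as SQL; table and index names are hypothetical:

    CLUSTER my_table USING my_index;  -- rewrite the table in index order
    ANALYZE my_table;                 -- refresh planner statistics afterwards, per the docs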
Main Considerations When Moving From MS Access Programming to SQL Server Posted: 10 Apr 2013 05:31 PM PDT First post, be gentle... I am a 100% self-taught MS Access programmer (the main part of my job is programming), and I am now building larger databases: still using MS Access as the UI, but SQL Server to store all data and do more of the 'work'. In essence my question is: what subject matters do I need to know for SQL Server that I probably didn't learn or need when using Access? I'm not looking for you to tell me how to do anything, more what you think are the most important things I should go and research. There are a lot of subjects and a hell of a lot of detail, and I don't want to find myself a long way down a less valuable path... Brain dump:
If it helps, I work for a mid-sized retailer and the databases I predominantly work on cover such things as
Thanks in advance, Simon
Should all queries where you expect a specific order, include an ORDER BY clause? Posted: 10 Apr 2013 04:00 PM PDT In order to better understand the SQL Server query processor, I've been thinking about how the ORDER BY clause works. It appears SQL Server will provide results in exactly the same order for any given set of data that remains the same and never, ever changes. Once you introduce any kind of uncertainty, such as developers changing something, the order of results can no longer be expected to be the same. Simply seeing the results in the same order each time you execute the query is not a guarantee. Try this: The results: As you can see, adding a simple index on the fields selected in the query alters the order of the results. From here on, I'm adding an ORDER BY clause to every query where I expect a specific order.
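A hedged sketch of the kind of repro the question describes, with hypothetical names:

    CREATE TABLE dbo.OrderTest (Id int IDENTITY PRIMARY KEY, Val varchar(10));
    INSERT INTO dbo.OrderTest (Val) VALUES ('b'), ('a'), ('c');

    SELECT Val FROM dbo.OrderTest;      -- happens to come back in insert order

    CREATE INDEX IX_OrderTest_Val ON dbo.OrderTest (Val);

    SELECT Val FROM dbo.OrderTest;      -- an index scan may now return a different order

    SELECT Val FROM dbo.OrderTest ORDER BY Val;  -- only this order is guaranteed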
Use single table with foreign key or two twin tables for text entries? Posted: 10 Apr 2013 07:42 PM PDT I have a table of answers. The content column is a varchar, since we want to be able to search through the answer content quickly. However, we also want to be able to store full text as answers. There are two approaches: A. Using two "twin" tables (with the same column names): one table whose content column is a varchar, and one whose content column is a full-text type. B. Using a single main table plus a second table for the long text, linked by a foreign key. This approach seems cleaner; however, it means doing two UPDATEs and INSERTs for each entry modification, and a JOIN on the two tables to pull out the information. In general, is it bad practice to have "twin" tables in a database? Thanks!
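A hedged sketch of the two layouts as DDL, with hypothetical names and MySQL-style types:

    -- A. Twin tables with the same column names
    CREATE TABLE answers_short (id INT PRIMARY KEY, content VARCHAR(255));
    CREATE TABLE answers_long  (id INT PRIMARY KEY, content TEXT);

    -- B. One main table plus a text table referencing it by foreign key
    CREATE TABLE answers (
        id      INT PRIMARY KEY,
        content VARCHAR(255)
    );
    CREATE TABLE answers_text (
        answer_id INT PRIMARY KEY,
        content   TEXT,
        FOREIGN KEY (answer_id) REFERENCES answers (id)
    );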
What index to add in case of join by two optional fields Posted: 10 Apr 2013 04:01 PM PDT I have a query, similar to the one below, that joins two tables by fields which can have NULL values. The result matches the data if both tables have the same values, or treats the condition as optional. The best index for table B that I could think of helps a bit, and from the execution plan I can see the index is used, but 90% of the cost is spent in Nested Loops (Inner Join). Is there any way to get this query running faster?
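One technique often suggested for joins on optional keys is to split the OR condition into UNION ALL branches, so each branch can seek on a plain single-column index; a generic sketch with hypothetical tables and key columns:

    -- Branch 1: rows where the first key is present
    SELECT a.id, b.id AS b_id
    FROM A AS a
    JOIN B AS b ON b.key1 = a.key1
    WHERE a.key1 IS NOT NULL
    UNION ALL
    -- Branch 2: fall back to the second key
    SELECT a.id, b.id AS b_id
    FROM A AS a
    JOIN B AS b ON b.key2 = a.key2
    WHERE a.key1 IS NULL;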
Most efficient way to sort data fields into SQL Posted: 10 Apr 2013 08:01 PM PDT I'm trying to decide on the most efficient way to sort various data values. Here's how the data arrives: device X sends a text string "name=value&name2=value&name=value". On arrival that string is stuffed into a SQL row, along with the unique address of the sending device. This keeps data flowing easily into a SQLite database. My parsing script first gets all unique device addresses. Those are put in a hash for the parser and inserted into a new database (the hash contains the rowid from the db after the insert, with more logic to keep race conditions out of the mix). Then each row of string data is split on the '&' and '=' delimiters. Here's the general table layout: Each row is read from the rawData, sorted, and marked as processed (makes it easy to muck around with the parsing script). Data is placed into these: I'm trying to decide on the most efficient method of inserting this data. It ends up as thousands of rows. I suppose I could keep a hash of known device/name pairs; then if the hash doesn't know about a new name, I can go ahead and insert it and refresh the hash... Am I missing something totally obvious? The goal is to keep selects to a minimum for efficiency!
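A hedged sketch of the lookup-or-insert step in SQLite, which keeps SELECTs to the single one needed to fetch the id; the table and column names are hypothetical and assume a UNIQUE constraint on the pair:

    -- Insert the pair only if it is not already known
    INSERT OR IGNORE INTO device_names (device_id, name) VALUES (:device, :name);
    -- Fetch the id either way (also usable to refresh the in-memory hash)
    SELECT id FROM device_names WHERE device_id = :device AND name = :name;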
Backing up a 400-500 GB database on MySQL/Amazon RDS? Posted: 10 Apr 2013 07:37 PM PDT We have an Amazon RDS small instance for a MySQL data warehousing database. The database size will be around 400-500 GB very shortly. What is the best way to back up our database? Can it be done efficiently with the automatic backups and DB snapshots available in RDS? Or should we consider other options, and what would those be: using EC2, hosting in some other cloud, or using another database?
View SQL's errorlog in Amazon RDS Posted: 10 Apr 2013 07:48 PM PDT Is there a way to view the SQL Server error logs in Amazon's RDS? I usually use sp_ReadErrorlog, but in RDS we need to be a member of the securityadmin role.
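A hedged sketch, assuming the rdsadmin helper procedure that RDS for SQL Server exposes for this purpose (the parameter values are illustrative):

    -- @index 0 = current log; @type 1 = SQL error log, 2 = SQL Agent log
    EXEC rdsadmin.dbo.rds_read_error_log @index = 0, @type = 1;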
Why would mysql "show global status" query be taking 15 minutes? Posted: 10 Apr 2013 05:01 PM PDT I'm reviewing the slow log, and on one of my slaves the average time for SHOW GLOBAL STATUS is 914s. Any idea how to determine the cause of this?
Is there a way to export Oracle's UNDO? Posted: 10 Apr 2013 06:01 PM PDT I tried the exp utility to dump the whole database. It looks like this exports only the latest version of the data, skipping the undo log, as flashback queries show. What I'm trying to do is capture DB changes and make a backup for later use, with the ability to flash back to a timestamp. With an RMAN backup I have a similar situation. Update: I managed to do what I needed only by increasing undo retention, directly copying the data files, and modifying the control file on the cloned instance.
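For reference, the undo-retention change mentioned in the update looks like this; the value (24 hours, in seconds) is illustrative:

    ALTER SYSTEM SET UNDO_RETENTION = 86400;  -- keep undo available for ~24 hours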
MySQL-5.5 InnoDB memory issue Posted: 10 Apr 2013 09:01 PM PDT The version in use is mysql-5.5.24. In the Enterprise version of MySQL I am not seeing the free space I expect. Is there such a difference between the Enterprise and Community versions of MySQL?
MySQL table relations, inheritance or not? Posted: 10 Apr 2013 03:01 PM PDT I'm building a micro CMS, using MySQL as the RDBMS and the Doctrine ORM for mapping. I would like to have two types of pages: Static Page and Blog Page. A static page would have page_url and page_content stored in the database. A blog page would have page_url but no page_content; the blog would have Posts, Categories... Let's say I have a route whose page URL can be home, or news, or blog... That page can be a Static Page, in which case I would just print page_content, but it can also be a Blog Page, in which case I would print the latest posts as content. How should I relate these Static Page and Blog Page tables? Is this inheritance, since both are pages with their URL but with different content? Should I use inheritance, so that both Static and Blog page extend a Page that holds page_url? Or should I make another table, page_types, to store information about the available page types?
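A hedged sketch of the inheritance (class-table) option the question describes, with hypothetical names:

    -- Parent table holds what both page types share
    CREATE TABLE pages (
        id       INT PRIMARY KEY AUTO_INCREMENT,
        page_url VARCHAR(255) NOT NULL UNIQUE,
        type     ENUM('static', 'blog') NOT NULL
    );

    -- Only static pages carry their own content
    CREATE TABLE static_pages (
        page_id      INT PRIMARY KEY,
        page_content TEXT NOT NULL,
        FOREIGN KEY (page_id) REFERENCES pages (id)
    );

    -- Blog pages get their content from posts that reference the page
    CREATE TABLE posts (
        id      INT PRIMARY KEY AUTO_INCREMENT,
        page_id INT NOT NULL,
        title   VARCHAR(255) NOT NULL,
        body    TEXT,
        FOREIGN KEY (page_id) REFERENCES pages (id)
    );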
Is there a way to do data export from Amazon Web Services RDS for SQL Server to On-Premises SQL Server? Posted: 10 Apr 2013 07:59 PM PDT We have a large Amazon Web Services RDS for SQL Server instance, and we would like to do incremental data transfers from RDS to an on-premises SQL Server on a regular basis. The on-prem server will be used to feed information into other systems, on an acceptable one-day delay. However, reading through the docs and searching on Google, forums, etc., we have not found a seamless way to do off-AWS data transfers using RDS for SQL Server. Built-in SQL Server features such as Change Data Capture (CDC) are turned off, as are replication and off-site backup/restore. Is there a way to do this, or is it a limitation of using RDS?