Wednesday, April 10, 2013

[how to] Subscriber database to be in sync using Distributor

Subscriber database to be in sync using Distributor

Posted: 10 Apr 2013 08:59 PM PDT

I have database D1 as a publisher and D2 as a subscriber, and they use a distributor 'DIST' which resides on the subscriber side. Both D1 and D2 have DR databases, called DR1 and DR2. I have set up transactional replication from D1 to D2, from D1 to DR1, and from D2 to DR2, as shown in the diagram below.

My production traffic goes to database D1 and the data is replicated to databases D2, DR1, and DR2, so there may be some delay before the data reaches DR1 and DR2. Now suppose a disaster happens and the servers hosting D1, DIST, and D2 all crash. In that case I have to divert my traffic to DR1 and set up transactional replication from DR1 to DR2 (with the nosync option, as I don't want to reinitialize it), but because of the replication delay there might be a data mismatch between DR1 and DR2.

Is there any setting on the distributor that can guarantee that the data delivered to DR1 and DR2 stays within the same transaction? Or is there any alternative way to handle this problem? Note: I am using SQL Server 2008 transactional replication, and I cannot afford to reinitialize the replication between DR1 and DR2.

Get the list of all Tables, Views, Stored procedures that are not added to a publication for replication

Posted: 10 Apr 2013 08:57 PM PDT

How do I get the list of all tables, views, and stored procedures that are NOT added to a publication for replication? Can you help me write a stored procedure for that? I am new to replication. Thanks.
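For reference, a minimal sketch of one way to approach this, assuming the database is already enabled for transactional publishing so the dbo.sysarticles metadata table exists (the object-type filter and naming are assumptions to adapt):

  -- Sketch: list user tables, views, and stored procedures in the current
  -- (published) database that no transactional-replication article references.
  SELECT o.name, o.type_desc
  FROM sys.objects AS o
  LEFT JOIN dbo.sysarticles AS a
         ON a.objid = o.object_id
  WHERE o.type IN ('U', 'V', 'P')   -- tables, views, stored procedures
    AND o.is_ms_shipped = 0         -- skip system objects
    AND a.objid IS NULL             -- not part of any article
  ORDER BY o.type_desc, o.name;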

How to determine if there are [idle connections with] uncommitted transactions in PostgreSQL?

Posted: 10 Apr 2013 08:33 PM PDT

According to a comment on this question I asked about idle connections in PostgreSQL 9.2, some uncommitted transactions (possibly related to some of those idle connections) might cause some performance issues.

What is a good way to determine if there are uncommitted transactions (bonus points if there is a way to know if the connection they're on is idle or not)?
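A minimal check, assuming the PostgreSQL 9.2 pg_stat_activity layout where the state column reports 'idle in transaction':

  -- Sessions holding an open (uncommitted) transaction, and how long
  -- they have been sitting in that state.
  SELECT pid,
         usename,
         state,                                 -- 'idle in transaction' = open transaction on an idle connection
         now() - state_change AS in_state_for,
         query                                  -- last statement the session ran
  FROM pg_stat_activity
  WHERE state IN ('idle in transaction', 'idle in transaction (aborted)')
  ORDER BY state_change;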

Thanks very much!

MySQL upgrade 5.0.88 to latest

Posted: 10 Apr 2013 07:07 PM PDT

Very interesting and informative blog.

I have a question: I have a requirement to upgrade from 5.0.88 to the latest version.

  1. Why should I upgrade?
  2. If so, which version should I consider?
  3. What are the known risks?
  4. Is upgrading step by step suggested, or is a binary replacement suggested?

Please do guide me.

Thanks, Devi

Oracle's UTL_FILE

Posted: 10 Apr 2013 06:49 PM PDT

I am learning Oracle PL/SQL by myself. I need to create a procedure that exports a table to a CSV file, and for that I am thinking of using UTL_FILE. Is there a good book or site to get me started? I have already looked and found some samples that seem good, like this one:

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::p11_question_id:88212348059

but I found few explanations, and the Oracle docs are not very user friendly.

I come from JavaScript and don't have much experience with databases. From the samples I see on the internet, PL/SQL scripts look like C to me.
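For what it's worth, a minimal sketch of the UTL_FILE approach, assuming an Oracle directory object named DATA_DIR already exists and using a placeholder table emp with made-up columns:

  -- Sketch: write a table out as CSV with UTL_FILE.
  -- DATA_DIR, emp, and its columns are placeholders, not real objects here.
  DECLARE
    v_file UTL_FILE.FILE_TYPE;
  BEGIN
    v_file := UTL_FILE.FOPEN('DATA_DIR', 'emp.csv', 'w', 32767);
    FOR r IN (SELECT empno, ename, sal FROM emp) LOOP
      UTL_FILE.PUT_LINE(v_file, r.empno || ',' || r.ename || ',' || r.sal);
    END LOOP;
    UTL_FILE.FCLOSE(v_file);
  EXCEPTION
    WHEN OTHERS THEN
      IF UTL_FILE.IS_OPEN(v_file) THEN
        UTL_FILE.FCLOSE(v_file);
      END IF;
      RAISE;
  END;
  /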

SQL 2008: Why is *any* user, with minimal permissions, able to restart/stop the SQL instance through Management Studio?

Posted: 10 Apr 2013 05:45 PM PDT

So I'm on a project where I need to help administer SQL Server 2008. I'm coming from a MySQL background.

Why is it that a basic user, with the most basic/default permissions, is able to restart or stop the entire SQL Server instance through Management Studio, yet cannot view a simple table?

Transfer data from DynamoDB to SQL server 2008?

Posted: 10 Apr 2013 03:23 PM PDT

I want to transfer data from Amazon DynamoDB to SQL Server 2008. What tools or methods can be used to do this?

How do I define a Standard Deviation function in Pentaho Schema Workbench

Posted: 10 Apr 2013 07:34 PM PDT

I'm building an OLAP Analysis with Pentaho's BI Suite (Community Edition). Many of my measures are standard deviations of the variables in my fact tables.

Does anyone have a tip on how to define a standard deviation aggregation function in Schema Workbench? Many of my jobs could benefit from it.

How to handle update or insert situations

Posted: 10 Apr 2013 03:24 PM PDT

From time to time my stored procedures look like this:

create procedure handle_data
    @fk int,
    @value varchar(10)
as
begin
    if exists (select * from my_table where id = @fk)
    begin
        update my_table
        set value = @value
        where id = @fk;
    end
    else
    begin
        insert into my_table (fk, value)
        select @fk, @value;
    end
end

Probably there is a fault in the design of the app that calls this sproc. Should I avoid building apps where the same stored procedures and methods handle both inserting new data and updating existing data?

Is there a better way to update or insert data in a single approach? I am using SQL Server 2005/2008.
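For reference, on SQL Server 2008 the same upsert can be expressed as a single statement with MERGE; a sketch using the table and parameters from the procedure above:

  -- Sketch: upsert in one statement (SQL Server 2008 and later).
  merge my_table as t
  using (select @fk as fk, @value as value) as s
      on t.id = s.fk
  when matched then
      update set t.value = s.value
  when not matched then
      insert (fk, value) values (s.fk, s.value);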

Could too many idle connections affect PostgreSQL 9.2 performance?

Posted: 10 Apr 2013 08:34 PM PDT

Some queries on my database server seem to take a long time to respond, and I believe the CPU usage is high. When running ps aux, I see ~250 "idle" connections (which I believe to be too many). I haven't started doing a full diagnosis, but I wanted to know if this is a good place to start looking.

I am also using PgBouncer with transaction-level pooling. I am suspecting that I can easily reduce the number of idle connections by tuning the pool size. However, I don't want to start doing too many changes unless there is a good reason to do it.

Can many idle connections in PostgreSQL 9.2 affect performance?
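As a starting point, a quick way to see how the connections break down, assuming PostgreSQL 9.2's pg_stat_activity:

  -- Count sessions by state ('idle', 'idle in transaction', 'active', ...).
  SELECT state, count(*) AS sessions
  FROM pg_stat_activity
  GROUP BY state
  ORDER BY sessions DESC;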

Thanks very much!

How to speed up queries on a large 220-million-row table (9 GB of data)?

Posted: 10 Apr 2013 02:51 PM PDT

The issue:

We have a social site where members can rate each other for compatibility or matching. This user_match_ratings table contains over 220 million rows (9 GB of data and almost 20 GB of indexes). Queries against this table routinely show up in slow.log (threshold > 2 seconds), and it is the most frequently logged slow query in the system:

Query_time: 3  Lock_time: 0  Rows_sent: 3  Rows_examined: 1051
  "select rating, count(*) as tally from user_match_ratings where rated_user_id = 395357 group by rating;"
Query_time: 4  Lock_time: 0  Rows_sent: 3  Rows_examined: 1294
  "select rating, count(*) as tally from user_match_ratings where rated_user_id = 4182969 group by rating;"
Query_time: 3  Lock_time: 0  Rows_sent: 3  Rows_examined: 446
  "select rating, count(*) as tally from user_match_ratings where rated_user_id = 630148 group by rating;"
Query_time: 5  Lock_time: 0  Rows_sent: 3  Rows_examined: 3788
  "select rating, count(*) as tally from user_match_ratings where rated_user_id = 1835698 group by rating;"
Query_time: 17  Lock_time: 0  Rows_sent: 3  Rows_examined: 4311
  "select rating, count(*) as tally from user_match_ratings where rated_user_id = 1269322 group by rating;"

MySQL version:

  • protocol_version: 10, version: 5.0.77-log
  • version_bdb: Sleepycat Software: Berkeley DB 4.1.24: (January 29, 2009)
  • version_compile_machine: x86_64, version_compile_os: redhat-linux-gnu

TABLE INFO :: mysql> SHOW COLUMNS FROM user_match_ratings;

  • id int(11) NO PRI NULL auto_increment
  • rater_user_id int(11) NO MUL NULL
  • rated_user_id int(11) NO MUL NULL
  • rating varchar(1) NO NULL
  • created_at datetime NO NULL

SAMPLE QUERY:

mysql> select * from mutual_match_ratings where id=221673540;
  id        | rater_user_id | rated_user_id | rating | created_at
  221673540 | 5699713       | 3890950       | N      | 2013-04-09 13:00:38

The table has 3 indexes set up:

  1. single index on rated_user_id
  2. composite index on rater_user_id & created_at
  3. composite index on rated_user_id & rater_user_id

mysql> show index from user_match_ratings;

| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
| user_match_ratings | 0 | PRIMARY | 1 | id | A | 220781193 | NULL | NULL | | BTREE | |
| user_match_ratings | 1 | user_match_ratings_index1 | 1 | rater_user_id | A | 11039059 | NULL | NULL | | BTREE | |
| user_match_ratings | 1 | user_match_ratings_index1 | 2 | created_at | A | 220781193 | NULL | NULL | | BTREE | |
| user_match_ratings | 1 | user_match_ratings_index2 | 1 | rated_user_id | A | 4014203 | NULL | NULL | | BTREE | |
| user_match_ratings | 1 | user_match_ratings_index2 | 2 | rater_user_id | A | 220781193 | NULL | NULL | | BTREE | |
| user_match_ratings | 1 | user_match_ratings_index3 | 1 | rated_user_id | A | 2480687 | NULL | NULL | | BTREE | |


So even with these indexes, the queries are slow.

My Question:

Would moving this table/data to another database, on a server with enough RAM to hold the data in memory, speed up these queries? Is there anything about the way the tables/indexes are set up that we can improve to make these queries faster?
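For reference, one option often tried for this query shape (a sketch, not a tested fix for this particular table) is a covering index on (rated_user_id, rating), so the GROUP BY can be answered from the index alone:

  -- Sketch: covering index for
  --   select rating, count(*) ... where rated_user_id = ? group by rating
  -- Building it on a 220-million-row table will take significant time and disk space.
  ALTER TABLE user_match_ratings
    ADD INDEX idx_rated_user_rating (rated_user_id, rating);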

How can I remove a bad execution plan from SQL Azure?

Posted: 10 Apr 2013 02:33 PM PDT

DBCC FREEPROCCACHE doesn't work in SQL Azure. How else can I force a plan to kick itself out of the cache in a way that won't hurt a production system (i.e. I can't just go altering tables willy-nilly)? This is specifically for SQL generated by Entity Framework, so these aren't stored procs that I manage - it's effectively dynamic SQL.

(Source was bad indexes -> bad stats, etc. That's all fixed but a bad plan won't go away.)

UPDATE: I selected @mrdenny's solution as he got there first. I am, however, successfully using @Aaron Bertrand's script to perform the work. Thanks to everybody for the help!!

SQL server slow on Xeon machine, fast on Corei7 x990

Posted: 10 Apr 2013 02:33 PM PDT

I have searched this problem but have had no luck finding any improvement.

I have a complex application that uses stored procedures in SQL Server.

Some of the stored procedures use cursors due to the complex nature of the SQL in them.

A few months ago I purchased a new machine expecting an improvement in speed.

The new machine has a single Xeon E5-2690 CPU with 32 GB of RAM.

The same application runs faster on the new machine as expected. But the database is much slower. One of the stored procedures is more than 20x slower. What takes seconds on the old machine takes minutes on the Xeon machine.

The database and application are identical. I generate the DB from scripts and have done back to back tests with clean empty databases. Still many times slower.

I have searched for answers and tried what little I could find, no help.

I have looked at the settings of SQL Server on both machines, many times, and can't find a difference.

I tried turning off hyper-threading on the Xeon because I found a post saying it helped someone, but it didn't help me.

I have started to move some of the DB processing into the application to try to reduce the time penalty of running on the new Xeon.

Any help would be appreciated. Cheers.

SQL Index order and performance based on cardinality and data

Posted: 10 Apr 2013 06:09 PM PDT

I've been working quite a bit on a query that's not operating very efficiently on a DB2 database. It will return up to 30k rows or so and take up to 10 seconds. I've been working on getting it to run faster and I think I've finally tracked it down to an improper index. A new and more appropriate index should fix the problem, but I'm not entirely sure how it should be ordered, as there appear to be different orderings suited to different data.

The query itself isn't terribly complicated. Here's a brief rundown:

SELECT
  ...
  B.TYPE_CODE,
  B.FIELD,
  ...
FROM A
INNER JOIN B
  ON A.ID1 = B.A_ID1
 AND A.ID2 = B.A_ID2
--A few inner joins and some left joins--
WHERE
  B.TYPE_CODE IN (:parameter)  --No more than four values within the IN clause

A few details about these tables:

  • A has about 900k rows and B has about 2.7 million.
  • B.TYPE_CODE has 15 possible values. Stats are accurate. The four parameters together are contained in about 400k rows (about 15% of all entries in the table). The A_ID1/2 columns are much more selective, occurring 15 times at most, of course.
  • B already has an index on (TYPE_CODE, A_ID1, A_ID2). It is being used, but it's not optimal at all.
  • There are 6 columns being selected from B by the DB2-optimized query. It grabs A_ID1 and A_ID2 along with TYPE_CODE and FIELD, of course, but it's also grabbing B.ID1 and B.ID2, which together form the primary key for B. These fields are not used in the original query, but the optimizer includes them anyway; I don't think they're used in any joins or filters. They're just there.
  • B will have far more reads than writes.

The join between A and B is taking a third of the query cost. The explain plan shows a FETCH operation that uses a particular index four times. I was able to get a VERY detailed explain plan report and found that this operation is returning all 400k rows, then performing the join, which only needs about 26k of them.

I found a short-circuit method that added this to the predicate:

AND COALESCE(B.TYPE_CODE, B.TYPE_CODE) = B.TYPE_CODE  

Logically this doesn't affect the result set, but it does tweak the optimizer. Adding this cut the query time in half. Checking the detailed explain showed that it added an additional filter factor that reduced the number of estimated rows in that FETCH operation to 26k or so, which is about right. This in turn reduced the estimate for the number of rows from A that would be returned overall and enabled index scans on A that hadn't been used before, since the optimizer had been assuming it could be grabbing nearly half of A. The result was a much faster query. However, the COALESCE is a hack and not suited for production, though it was handy for finding the issue.

So now it falls to creating a new index on B to fix this problem. The DB2 query analyzer suggested an index on all six fields, starting with B.TYPE_CODE and following with the rest in no particular logical order. This would cover all selected fields and the table itself would not need to be touched, which certainly has its advantages. It has also been suggested that an index on (B.A_ID1, B.A_ID2, B.TYPE_CODE) could be used as the more selective columns are first, which would narrow the results faster. I've seen different suggestions based on the cardinality of the data, so I'm wondering if anyone has some advice on how to construct the index here. I've been doing a lot of reading on indexes lately, but it can be difficult to find a good guide on certain aspects.
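As a reference point, here is a sketch of the narrower candidate index mentioned above, with the more selective join columns first (DB2 for LUW syntax is assumed; the index name is made up):

  -- Candidate index: selective join columns first, TYPE_CODE last for the IN filter.
  CREATE INDEX B_AID_TYPECODE_IX
      ON B (A_ID1, A_ID2, TYPE_CODE);

  -- Refresh statistics afterwards so the optimizer can cost it properly, e.g.:
  -- RUNSTATS ON TABLE yourschema.B AND INDEXES ALL;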

AS400 CL Programming - Export Summary data to Excel

Posted: 10 Apr 2013 07:35 PM PDT

I'm new here! I can already export detailed reports from an AS400 QM query (WRKQMQRY). But here's the issue:

I'm summing data and then returning the summary data, but I want to produce an Excel file.

Here's my code for creating the report file:

     STRQMQRY   QMQRY(BAAI.CUS/RPTHLDITMS) OUTPUT(*OUTFILE) +
                OUTFILE(QQRYLIB/RPTHLDITMX) OUTMBR(*FIRST +
                *REPLACE) ALWQRYDFN(*YES)

This command executes the query, RPTHLDITMS, then saves the results to a flat file, RPTHLDITMX.

Thanks!

Column partially determining accepted values of another column, should be somehow normalized?

Posted: 10 Apr 2013 02:57 PM PDT

I have a table that describes a set of properties (yes, it's metadata; in this case there's a valid reason to have it in the database). Among other things I record the type of the data that can be assigned to each property (Type) and a default value. The valid default values are thus restricted by the Type column, but the default value is genuinely an attribute of the table; it is not uniquely determined by the Type column.

I'm not sure though, should this be somehow normalized or is it right as it is?

Edit: here's an approximate description of the current structure of the relevant part of the database, as requested. Don't mind the Values table; that's a separate issue. Just consider that ValueType restricts the set of permitted DefaultValue values.

Also, consider this only as an example. I'm interested in the problem in general, namely columns that limit the set of valid values of another column but don't determine its exact value, and thus, as far as I understand, don't constitute multivalued dependencies. (As an aside, it would be useful to have an integrity constraint that enforces the limit; I'm not sure this issue can be separated from the former.)

Properties (
  ID int PRIMARY KEY,
  Name varchar(100),
  ValueType int REFERENCES ValueTypes(ID),
  DefaultValue int REFERENCES Values(ID)
)

ValueTypes (
  ID int PRIMARY KEY,
  Name varchar(100),
  ...
)

Values (
  ID int PRIMARY KEY,
  ...
)
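On the aside about enforcing the restriction, one common sketch (assuming Values also records each value's type, a column the structure above does not show) is a composite foreign key, so that the default value must carry the property's own type:

  -- Sketch only: the ValueType column on "Values" is an assumption.
  CREATE TABLE ValueTypes (
    ID int PRIMARY KEY,
    Name varchar(100)
  );

  CREATE TABLE "Values" (
    ID int PRIMARY KEY,
    ValueType int NOT NULL REFERENCES ValueTypes(ID),
    UNIQUE (ID, ValueType)            -- lets other tables reference the pair
  );

  CREATE TABLE Properties (
    ID int PRIMARY KEY,
    Name varchar(100),
    ValueType int NOT NULL REFERENCES ValueTypes(ID),
    DefaultValue int,
    FOREIGN KEY (DefaultValue, ValueType) REFERENCES "Values" (ID, ValueType)
  );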

Is a REINDEX required after CLUSTER?

Posted: 10 Apr 2013 04:35 PM PDT

I'm considering using CLUSTER to reorder a table by an index. I understand that this recreation of the table data makes all the existing indexes either bloat or be useless. I've seen some indications that a REINDEX is required after a CLUSTER. I've found other references that indicate that CLUSTER does a REINDEX. The Official Documentation says nothing at all about REINDEX being part of CLUSTER or required (Although it does suggest running ANALYZE after the CLUSTER)

Can anyone definitively (i.e. with some sort of reference to official docs) say whether or not a REINDEX is required after a CLUSTER?

Main Considerations When Moving From MS Access Programming to SQL Server

Posted: 10 Apr 2013 05:31 PM PDT

First post, be gentle...

I am a 100% self-taught MS Access programmer (the main part of my job is programming). I am now building larger databases, still using MS Access as the UI but SQL Server to store all the data and do more of the 'work'.

In essence my question is; what subject matters do I need to know for SQL Server that I probably didn't learn or need when using Access?

Not looking for you to tell me how to do anything, more what you think are the most important things I should go and research - there are a lot of subjects and a hell of a lot of detail, and I don't want to find myself a long way down a less valuable path...

Brain dump:

  • Maintenance: which tasks are the most important (check database integrity, shrink the database, update statistics, rebuild indexes, etc.)? See the sketch after this list.
  • Indexes - I don't know as much as I should; is there a good book/blog etc. that can teach me from the basics upwards?
  • Anything else I have missed (there's probably lots, as I said I am new to SQL Server...)
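A sketch of what those routine maintenance tasks look like in T-SQL (database, schema, and table names are placeholders; in practice they are usually scheduled through SQL Server Agent jobs or maintenance plans):

  DBCC CHECKDB ('YourDatabase');             -- integrity check
  ALTER INDEX ALL ON dbo.YourTable REBUILD;  -- rebuild fragmented indexes
  UPDATE STATISTICS dbo.YourTable;           -- refresh statistics for one table
  EXEC sp_updatestats;                       -- or refresh statistics database-wide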

If it helps, I work for a mid-sized retailer and the databases I predominantly work on cover such things as:

  • Reporting platform (summarises sales/ receipts/ inventory etc from main system and provides fast reporting)
  • A reconciliation tool between a third party and what our stores put through the registers (imports data from the third party and cross-references the transaction logs)
  • Stores all data to do with our promotions; product, prices, projections, actual results etc

Thanks in advance, Simon

Should all queries where you expect a specific order, include an ORDER BY clause?

Posted: 10 Apr 2013 04:00 PM PDT

In order to better understand the SQL Server query processor, I've been thinking about how the ORDER BY clause works, and how SQL Server provides results.

It appears SQL Server will provide results in exactly the same order for any given set of data that remains the same and never, ever changes. Once you introduce any kind of uncertainty such as developers changing something, the order of results can no longer be expected to be the same.

Simply seeing the results in the same order, each time you press F5 does not guarantee the results you expect.

Try this:

USE TempDB;
DROP TABLE TestOrderBy;
DROP TABLE TestOrderBy2;

CREATE TABLE TestOrderBy
(
    ID INT NOT NULL CONSTRAINT PK_TestOrderBy PRIMARY KEY CLUSTERED IDENTITY(1,1)
    , SomeData varchar(255)
);
INSERT INTO TestOrderBy (SomeData) VALUES ('E');
INSERT INTO TestOrderBy (SomeData) VALUES ('D');
INSERT INTO TestOrderBy (SomeData) VALUES ('C');
INSERT INTO TestOrderBy (SomeData) VALUES ('B');
INSERT INTO TestOrderBy (SomeData) VALUES ('A');

CREATE TABLE TestOrderBy2
(
    ID INT NOT NULL CONSTRAINT PK_TestOrderBy2 PRIMARY KEY CLUSTERED IDENTITY(1,1)
    , SomeData varchar(255)
);
INSERT INTO TestOrderBy2 (SomeData) VALUES ('E');
INSERT INTO TestOrderBy2 (SomeData) VALUES ('D');
INSERT INTO TestOrderBy2 (SomeData) VALUES ('C');
INSERT INTO TestOrderBy2 (SomeData) VALUES ('B');
INSERT INTO TestOrderBy2 (SomeData) VALUES ('A');

SELECT SomeData
FROM TestOrderBy
UNION ALL
SELECT SomeData
FROM TestOrderBy2;

CREATE INDEX IX_TestOrderBy_SomeData ON TestOrderBy (SomeData);
CREATE INDEX IX_TestOrderBy2_SomeData ON TestOrderBy2 (SomeData);

SELECT SomeData
FROM TestOrderBy
UNION ALL
SELECT SomeData
FROM TestOrderBy2;

The results:

(screenshot of the two result sets, returned in different orders)

As you see, adding a simple index on the fields selected in the query alters the order of the results.

From here on, I'm adding ORDER BY unless I truly don't care.
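For completeness, the deterministic version of the final query is just the same UNION ALL with an explicit sort:

  SELECT SomeData
  FROM TestOrderBy
  UNION ALL
  SELECT SomeData
  FROM TestOrderBy2
  ORDER BY SomeData;   -- the only way to guarantee the order of the results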

Use single table with foreign key or two twin tables for text entries?

Posted: 10 Apr 2013 07:42 PM PDT

I have a table answers that has several columns, including the content itself, such as:

ID | q_id | list_id | user_module_id | content | updated_timestamp  

The content column is a varchar since we want to be able to search through the answer content quickly.

However, we also want to be able to store full text as answers. There are two approaches:

A. Using two "twin" tables (with the same column names): a table with a content column of type varchar and a table with a content column of type text.

B. Using a single answers table and storing in the content column a hash key such as q_id7user_module_id362, then using another table that contains only the text entry:

hashkey | content  

This approach seems cleaner; however, it means doing two UPDATEs and INSERTs for each entry modification and a JOIN on the two tables to pull out the information.

In general, is it bad practice to have "twin" tables in a database?

Thanks!

What index to add in case of join by two optional fields

Posted: 10 Apr 2013 04:01 PM PDT

I have a query similar to the one below. It joins two tables on fields which can have NULL values; the result matches rows where both tables have the same data, and treats the condition as optional when the value in B is NULL.

SELECT B.X, A.Y
FROM tableA A
INNER JOIN tableB B
    ON ISNULL(B.level5, A.f05) = A.f05
   AND ISNULL(B.level6, A.f06) = A.f06

The best index for table B I could come up with is this:

CREATE NONCLUSTERED INDEX [Index_On_Table_B] ON B
    ( X ASC )
    INCLUDE (level5, level6)

It helps a bit, and from the execution plan I can see the index is used, but 90% of the cost is spent in the Nested Loops (Inner Join) operator.

Is there any way how to get this query working faster?
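One thing that may be worth testing (a sketch only, and it assumes A.f05 and A.f06 are never NULL, which the ISNULL form effectively requires for a match anyway) is rewriting the predicate so the columns of B are not wrapped in a function, together with an index on B that leads with level5 and level6:

  SELECT B.X, A.Y
  FROM tableA AS A
  INNER JOIN tableB AS B
      ON (B.level5 = A.f05 OR B.level5 IS NULL)
     AND (B.level6 = A.f06 OR B.level6 IS NULL);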

Most efficient way to sort data fields into SQL

Posted: 10 Apr 2013 08:01 PM PDT

I'm trying to decide on the most efficient way to sort various data values. Here's how the data arrives: Device X sends a text string "name=value&name2=value&name=value"

On arrival, that string is stuffed into a SQL row along with the unique address of the sending device. This keeps data flowing easily into an SQLite database.

My parsing script first gets all unique device addresses. Those are put in a hash for the parser and inserted into a new database (the hash contains the rowid from the DB after the insert), with some more logic to keep race conditions out of the mix.

Then each row of string data is split up by the &, then by the =.

Here's the general table layout:

rawData(address, string, timestamp, processed)  

Each row is read from rawData, sorted, and marked as processed (which makes it easy to muck around with the parsing script).

Data is placed into these:

devices(rowid, address)
dataNames(rowid, devicesid, name)
dataValues(nameId, value, timestamp)

I'm trying to decide on the most efficient method of inserting this data. It ends up as thousands of rows. I suppose I could keep a hash of known device/name pairs; then, if the hash doesn't know about a new name, I can go ahead and insert it and refresh the hash...

Am I missing something totally obvious? The goal is to keep selects to a minimum for efficiency!
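A sketch of one way to cut down on lookup SELECTs, assuming SQLite and the dataNames table above with a uniqueness constraint added (the :devicesid and :name parameters are placeholders):

  -- Let the database deduplicate names instead of checking a hash first.
  CREATE TABLE IF NOT EXISTS dataNames (
      id        INTEGER PRIMARY KEY,
      devicesid INTEGER NOT NULL,
      name      TEXT    NOT NULL,
      UNIQUE (devicesid, name)
  );

  INSERT OR IGNORE INTO dataNames (devicesid, name) VALUES (:devicesid, :name);
  SELECT id FROM dataNames WHERE devicesid = :devicesid AND name = :name;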

Backing up a 400-500 GB database on MySQL/Amazon RDS?

Posted: 10 Apr 2013 07:37 PM PDT

We have an Amazon RDS small instance for a MySQL data warehousing database. The database size will be around 400-500 GB very shortly. What is the best way to back up our database?

Can it be done efficiently with automatic backups and DB snapshots available with RDS?

Or should we consider some other options? What would those be? Using EC2, hosting in some other cloud, or using any other database?

View SQL's errorlog in Amazon RDS

Posted: 10 Apr 2013 07:48 PM PDT

Is there a way to view the SQL Server error logs in Amazon's RDS? I usually use sp_ReadErrorlog, but in RDS we need to be a member of the securityadmin role.

Why would mysql "show global status" query be taking 15 minutes?

Posted: 10 Apr 2013 05:01 PM PDT

I'm reviewing the slow log, and on one of my slaves the average time for SHOW GLOBAL STATUS is 914s.

Any idea how to determine the cause of this?

Is there a way to export Oracle's UNDO?

Posted: 10 Apr 2013 06:01 PM PDT

I tried the exp utility to dump the whole database. It looks like this exports only the latest version of the data, skipping the undo log. Using flashback queries I see:

01466. 00000 -  "unable to read data - table definition has changed"
*Cause:    Query parsed after tbl (or index) change, and executed
           w/old snapshot

What I'm trying to do is capture DB changes and make a backup for later use, with the ability to flash back to a timestamp.

With an RMAN backup I have a similar situation:

ORA-01555: snapshot too old: rollback segment number 3 with name "_SYSSMU3_2472002983$" too small
01555. 00000 -  "snapshot too old: rollback segment number %s with name \"%s\" too small"
*Cause:    rollback records needed by a reader for consistent read are overwritten by other writers.

Update: I managed to do what I needed only by increasing undo retention, copying the data files directly, and modifying the control file on the cloned instance.
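For reference, a sketch of the undo-retention side of that workaround and of a flashback query (the retention value and table name are just illustrative):

  ALTER SYSTEM SET UNDO_RETENTION = 86400;   -- try to keep undo for 24 hours

  -- Read a table as of a point in time, as long as the undo is still available:
  SELECT *
  FROM some_table AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);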

MySQL-5.5 InnoDB memory issue

Posted: 10 Apr 2013 09:01 PM PDT

Version in use is mysql-5.5.24.

In the Enterprise version of MySQL, innodb_buffer_pool space is released after a while; however, this does not appear to happen in the Community version.

I am not seeing free space in innodb_buffer_pool even after all the apps are shut down.

Is there such a difference between the Enterprise and Community versions of MySQL?

MySQL table relations, inheritance or not?

Posted: 10 Apr 2013 03:01 PM PDT

I'm building a micro CMS, using MySQL as the RDBMS and the Doctrine ORM for mapping.

I would like to have two types of pages: Static Page and Blog Page. A static page would have page_url and page_content stored in the database. A blog page would have page_url but no page_content; the blog would have posts, categories...

Let's say I have a route like this:

/{pageurl}  

This is a page whose URL can be home, news, blog... That page can be a Static page, in which case I would just print page_content; but it can also be a Blog page, in which case I would print the latest posts as content.

How should I relate these Static Page and Blog Page tables? Is this inheritance, since both are pages with their own URL but different content? Should I use inheritance, so that both Static and Blog page extend a Page that holds page_url? Or should I make another table, page_types, and store the information about available page types there?
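For illustration, a sketch of the inheritance option as tables (what Doctrine calls class table inheritance); the names and the page_type discriminator column are assumptions:

  CREATE TABLE pages (
      id        INT AUTO_INCREMENT PRIMARY KEY,
      page_url  VARCHAR(255) NOT NULL UNIQUE,
      page_type VARCHAR(20)  NOT NULL      -- 'static' or 'blog' (discriminator)
  );

  CREATE TABLE static_pages (
      page_id      INT PRIMARY KEY,
      page_content TEXT NOT NULL,
      FOREIGN KEY (page_id) REFERENCES pages (id)
  );

  CREATE TABLE blog_pages (
      page_id INT PRIMARY KEY,
      FOREIGN KEY (page_id) REFERENCES pages (id)
      -- posts, categories, etc. would reference blog_pages
  );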

Is there a way to do data export from Amazon Web Services RDS for SQL Server to On-Premises SQL Server?

Posted: 10 Apr 2013 07:59 PM PDT

We have a large Amazon Web Services RDS for SQL Server instance and we would like to do incremental data transfers from RDS to an on-premises SQL Server on a regular basis.

The on-premises server will be used to feed information into other systems, with an acceptable one-day delay.

However, reading through the docs and searching Google, forums, etc., we have not found a seamless way to do off-AWS data transfers using RDS for SQL Server.

Built-in SQL Server functions such as Change Data Capture (CDC) are turned off as well as Replication and off-site Backup/Restore services.

Is there a way to do this or is it a limitation of using RDS?
