Sunday, September 8, 2013

[how to] How to Partition an existing Table that already has a clustered index with a primary key



How to Partition an existing Table that already has a clustered index with a primary key

Posted: 08 Sep 2013 08:59 PM PDT

I have this table:

CREATE TABLE [dbo].[entry](
    [ID] [bigint] IDENTITY(1,1) NOT NULL,
    [EntryDate] [datetime] NOT NULL,
    [createddate] [datetime] NOT NULL,
    CONSTRAINT [PK_Entry_ID] PRIMARY KEY CLUSTERED ([ID] ASC)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
          ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
) ON [PRIMARY]

The primary key is referenced by another table. To partition this table I have already done the following steps:

1. CREATE PARTITION FUNCTION EntryFunc (DATE)

AS RANGE LEFT

FOR VALUES ('2011-01-01')

2. CREATE PARTITION SCHEME EntryScheme

AS PARTITION EntryFunc

TO ([FileGroup2], [PRIMARY])

The above two steps completed successfully, but when I try to partition the table I cannot drop the clustered primary key index, because it is referenced by another table. My goal is to partition the Entry table by created date.
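
For reference, the usual workaround is to drop the incoming foreign keys, rebuild the clustered index on the partition scheme, and recreate the primary key as a nonclustered constraint. A minimal sketch, assuming a single referencing table child with constraint FK_child_entry and column entry_id (all hypothetical names):

-- 3. Drop the incoming foreign key, then the clustered primary key.
ALTER TABLE dbo.child DROP CONSTRAINT FK_child_entry;
ALTER TABLE dbo.entry DROP CONSTRAINT PK_Entry_ID;

-- 4. Build the clustered index on the partition scheme, keyed on the partitioning column.
CREATE CLUSTERED INDEX CIX_entry_createddate
    ON dbo.entry (createddate)
    ON EntryScheme (createddate);

-- 5. Recreate the primary key as a nonclustered constraint and restore the foreign key.
ALTER TABLE dbo.entry ADD CONSTRAINT PK_Entry_ID PRIMARY KEY NONCLUSTERED ([ID]);
ALTER TABLE dbo.child ADD CONSTRAINT FK_child_entry FOREIGN KEY (entry_id) REFERENCES dbo.entry ([ID]);

One thing to double-check: the partition function above is declared over DATE while createddate is DATETIME, and the partitioning column's data type has to match the partition function's parameter type, so one of the two will need adjusting.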

Merging Large Database Tables Quickly

Posted: 08 Sep 2013 08:53 PM PDT

So I found this other post, Merge multiple tables in a database, and that's the general idea, but at an insanely large scale.

Anyway -- over the years our site has adapted, and three different things have developed in the tables that are essentially the same thing: submitted content. I have journals, images, and links. There's no real reason for these to be separate, and for the benefit of table organization I'd like to reduce them to a single table: posts.

So the problem is this:

Journals has 2.6 million rows. Images has 1.4 million rows. And Links has a cute 45 thousand rows.

Now, some of the columns need to be remapped for the sake of logic: simple stuff like the "friends_only" column becoming "privacy".

What would the recommended course of action be for migrating these tables, or is sticking with a Perl script that runs through all 4 million+ records the best way of doing it?
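
For comparison, a set-based INSERT ... SELECT per source table usually beats iterating 4 million rows from a script, and the column remapping can live in the select list. A minimal sketch; the posts columns (type, user_id, body, privacy, created_at) and the source column names are hypothetical and would need to match the real schema:

-- One statement per source table, so each migration can be verified (and retried) on its own.
INSERT INTO posts (type, user_id, body, privacy, created_at)
SELECT 'journal', user_id, body, friends_only, created_at
FROM journals;

INSERT INTO posts (type, user_id, body, privacy, created_at)
SELECT 'image', user_id, caption, friends_only, created_at
FROM images;

INSERT INTO posts (type, user_id, body, privacy, created_at)
SELECT 'link', user_id, url, friends_only, created_at
FROM links;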

Can't connect to service after updating ODBC driver

Posted: 08 Sep 2013 08:23 PM PDT

I am upgrading a program at work and one of the changes is that it now uses PostgreSQL 9.2.4 instead of 8. I was getting a 'client encoding mismatch' error, so I updated the ODBC driver, and the problem went away. However, with the new driver, my program does not want to connect to a custom service that it uses anymore.

The custom service uses postgres a lot. The error I'm getting is '(10061) connection is forcefully rejected'. Postgres is configured to accept connections from any IP address, so I'm not sure why I'm getting this error. The program will connect fine to the custom service with the old version of the ODBC driver, but as soon as I start using the new driver, it does not want to connect. I've checked the services list and both postgres and the custom service are started.

At one point, while trying to connect to the custom service, I was getting an error that said something like "OLE DB error: cannot send query to the backend". However, I can't seem to reproduce this error message anymore; it is simply not connecting.

I don't have a lot of database experience, so I apologize if this information is confusing or incomplete. Please let me know if you need clarification on anything.

Any suggestions would be appreciated, even if they are just ideas on how to troubleshoot this issue.

Create a high availability cluster configuration for PostgreSQL on Windows

Posted: 08 Sep 2013 06:06 PM PDT

I want a server to be replicated. Here is the scenario: when my primary server goes down or fails, I want the secondary (standby) server to take over. I'm using Windows 7 for the primary server, another machine with the same OS, and Windows XP in a virtual machine for the secondary server. Any help would be appreciated.

What's the fastest and most accurate way to verify data migration?

Posted: 08 Sep 2013 04:45 PM PDT

I'm migrating ~1TB of data from SQL Server (SS) to MySQL. The verification process is multi-stage.

  1. Set up a linked server to MySQL and run BINARY_CHECKSUM on the source table (SS) and the destination table (MySQL).

  2. If the checksums do not match, implement checksums on each row - e.g. add a column on each destination table with a corresponding MD5 hash of the record values, then compare the MD5 from source to destination (see the sketch after this list).
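
For step 2, the per-row hash can be computed on both sides with built-in functions and compared by id. A minimal sketch with a hypothetical table (id, col1, col2, col3); note that the bytes fed to the hash must match on both sides (character encoding, NULL handling, hex case), otherwise rows will never compare equal:

-- SQL Server side: lower-case hex MD5 of the concatenated columns (CONCAT turns NULL into '').
SELECT id,
       LOWER(CONVERT(char(32), HASHBYTES('MD5', CONCAT(col1, '|', col2, '|', col3)), 2)) AS row_md5
FROM dbo.SourceTable;

-- MySQL side: mirror the same concatenation rule with IFNULL.
SELECT id,
       MD5(CONCAT(IFNULL(col1, ''), '|', IFNULL(col2, ''), '|', IFNULL(col3, ''))) AS row_md5
FROM SourceTable;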

Any ideas on how to make this easier, faster, better?

Thanks in advance!

How do I convert unicode data from UTF-16LE to UTF-8 when migrating from SQL Server to MySQL?

Posted: 08 Sep 2013 04:25 PM PDT

Objective: ensure existing Unicode (UTF-16LE) data displays the same when moved to MySQL (UTF-8).

Background: I'm migrating 1TB from SQL Server to MySQL. The current collation in SS is "Latin1_General_CP1_CI_AS". Given that MySQL supports Latin1, I am assuming there shouldn't be a problem with the collation.

My main concern is the Character Set conversion. In SS, all numerical data-types are set to Character Set = NULL. The remaining are set to "UNICODE". Since SS Unicode is UTF-16LE and it needs to be UTF-8 in MySQL, I fear I'm stuck verifying each character in all non-numerical columns. That would be very long and intensive given ~4k tables.

Potential solutions:

  1. Using SqlDataReader, validate each record value, change it if necessary, and then save the updated results to an export file.

  2. BCP the data out of SS to .txt files, then reopen each .txt file and re-save it with UTF-8 encoding.

Problems encountered: when converting the character set from UTF-16LE to UTF-8 using a SqlDataReader, it works great except when the source value is already in UTF-8. Ideally, the solution would only run when the data actually needs to be converted.
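
One way to narrow this down on the SQL Server side (a sketch; the names id, SomeTextCol, and dbo.SomeTable are hypothetical): a value that survives a round trip through the single-byte code page contains only characters that convert trivially, so only the rows that fail the round trip need the careful UTF-16LE to UTF-8 handling.

-- Rows returned here contain characters outside the Latin1 (CP1252) code page;
-- everything else is effectively plain single-byte text and needs no special conversion.
SELECT id, SomeTextCol
FROM dbo.SomeTable
WHERE SomeTextCol <> CONVERT(nvarchar(4000),
                             CONVERT(varchar(4000), SomeTextCol COLLATE Latin1_General_CP1_CI_AS));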

Does anyone know of an easy way to ensure data is correctly set and verified from one database to another? Am I overthinking this? Is this even an issue?

Thanks in advance!

Select on several {name,value} pairs

Posted: 08 Sep 2013 05:42 PM PDT

I have this schema:

CREATE TABLE Pairs (
    id    VARCHAR(8)    NOT NULL,
    name  VARCHAR(40)   NOT NULL,
    value VARCHAR(1000) NOT NULL,
    PRIMARY KEY (id, name)
);

I want to write a query to retrieve the id which is present in several rows with several constraints on name and value like:

  • name like 'forname%' AND value like 'CAMI%'
  • name like 'projet%' AND value like '%gociation'
  • name ... AND value ...
  • ...

The cardinality of the constraints is unknown at compile time, it comes from a GUI.

I wrote this query:

select * from (
    SELECT * FROM Pairs WHERE ( name =    'projet'   ) AND ( value like '%gociation' )
    union
    SELECT * FROM Pairs WHERE ( name like 'forname%' ) AND ( value like 'CAMI%' )
) as u
order by id

the result is:

id      name      value
AEMIMA  projet    renegociation
AENABE  projet    renegociation
AEREDH  projet    renegociation
AGOGLY  projet    renegociation
AHOGAL  projet    renegociation
AHSPID  projet    renegociation    <<<<<<<<<<<<
AHSPID  fornameE  CAMILLE          <<<<<<<<<<<<
AIOSAP  projet    renegociation
AIPNEU  projet    renegociation

Only the <<<<<<<<<<<< marked lines are good and the id I want is AHSPID.

How can I eliminate the ids which don't match ALL the constraints?
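
This is the classic relational-division pattern: keep one OR branch per constraint, group by id, and keep only the ids that satisfy as many branches as there are constraints. A minimal sketch for the two conditions above; the GUI code would generate the branches and set the HAVING count to the number of constraints:

SELECT id
FROM Pairs
WHERE (name =    'projet'   AND value LIKE '%gociation')
   OR (name LIKE 'forname%' AND value LIKE 'CAMI%')
GROUP BY id
HAVING COUNT(DISTINCT name) = 2;

COUNT(DISTINCT name) works here because each constraint targets a different name; if two constraints could match the same name, a per-branch flag (e.g. MAX(CASE WHEN ... THEN 1 END) summed per id) would be needed instead.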

Postgres corrupted DB recovery

Posted: 08 Sep 2013 07:44 PM PDT

Is there a way to recover data from a partially damaged Postgres DB? My HDD was damaged in transport. I was able to recover most of the files, but I cannot connect to the DB or use dump; I'm getting this error:

psql: FATAL:  could not read block 0 in file "base/18819/11700": read only 0 of 8192 bytes.   

This is one of the missing files, though most of them are fine. Is there a way?

Thanks

SQL Server 2008R2 Database refresh in isolated environment

Posted: 08 Sep 2013 07:43 PM PDT

I would greatly appreciate your help on my database "refresh" issue.

The scenario:

I have 12 databases on a QA server that were restored there from a Production server approx. 2 years ago. Now the QAs need to sync those 12 databases with the databases on the Prod server; all account logins (and corresponding permissions) have to remain unchanged in the QA environment - the QAs only need to "refresh" the databases so that the data is as current as it is in the Prod environment.

For security reasons there is no (and cannot be any) connection between the two servers (hence it is not possible to import data with an append option), so I had to ask the DBAs on that Prod server to back up the databases and place the backup files in a shared folder (already done).

My question is: what is the best way to "refresh" the 12 databases in the QA environment? Is it to drop the old databases and restore them from the backup files (and then what happens to the current QA server logins?), or is it to restore the databases from the backups without dropping the 12 old databases? Is that even possible, would the data just be appended to the existing data, and would the current logins stay unchanged?
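
For reference, logins live at the server level, so restoring over the existing QA databases does not change them; what can break are the database users inside the restored databases, which come from Prod and may end up orphaned from the QA logins. A minimal sketch per database (database, path, and user names are hypothetical):

-- Restore over the existing QA database from the Prod backup file.
-- Add MOVE clauses if the QA data/log file paths differ from Prod.
RESTORE DATABASE SalesDB
FROM DISK = N'\\shared\backups\SalesDB.bak'
WITH REPLACE;

-- Re-point an orphaned database user at the existing QA login of the same name.
ALTER USER app_user WITH LOGIN = app_user;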

Thank you in advance for any input.

Connection issues with JDBC to SQL Server named instance

Posted: 08 Sep 2013 11:57 AM PDT

We have a clustered instance, Instance1, assigned its own virtual network name and IP (say 172.0.0.1). This instance is configured to listen on port 1433 in SQL Server network configuration (verified in the logs).

We can verify connectivity to Instance1 within SSMS (specifying 172.0.0.1,1433 as the server) using the credentials for apiUser. When the connection string is specified as given below, the connection fails and we see error 18456 state 38 in the logs.

The database db1 exists and the login has a mapped user in db1; in fact db1 is the default database for the login. When logging in via SSMS, the database context is successfully set to db1 upon successful login.

We are at a loss as to why error 18456 state 38 would be thrown in this scenario. Per Aaron Bertrand's article, Troubleshooting Error 18456, the issue seems to be with a missing specified database in the connection string or permissions for the login to open the specified database, but this doesn't appear to be the case. Any help would be greatly appreciated.

<connection-url>jdbc:sqlserver://172.0.0.1:1433;databaseName=db1</connection-url> <user-name>apiUser</user-name> <password>#####</password>

Why does "GRANT SELECT ... TO Role;" not allow members of Role to SELECT?

Posted: 08 Sep 2013 03:23 PM PDT

Please consider the following scenario on an Oracle 11g database.

The user ADMIN performs the following:

CREATE USER Alice IDENTIFIED BY pwdalice;
GRANT CREATE SESSION TO Alice;
CREATE ROLE Viewer IDENTIFIED BY pwdviewer;
GRANT Viewer TO Alice;
GRANT SELECT ON Table_1 TO Viewer;

Then Alice logs into the database as 'Alice' and executes the following command:

SELECT * FROM Table_1;  

Alice gets the following error:

SELECT * FROM Table_1
              *
ERROR at line 1:
ORA-00942: table or view does not exist

I thought that granting privileges to a role would enable its member-users to get those privileges. However, this scenario shows that it is not so. What am I missing here? How can I grant SELECT to Alice using a role?

Update:

Following the helpful answers, I tried three fixes with no success.

1) Using Fully-qualified Table Names

I had missed the schema name in the SELECT * FROM Table_1; command. However, even after adding the schema name as shown below, the error still occurs.

Alice executes:

SELECT * FROM ADMIN.Table_1;  

Gets the error:

SELECT * FROM ADMIN.Table_1
                    *
ERROR at line 1:
ORA-00942: table or view does not exist

2) Using a synonym for the fully-qualified table name

Unfortunately, this does not seem to solve the problem either.

Alice executes the following:

CREATE SYNONYM Syn_Table_1 FOR ADMIN.Table_1;
CREATE SYNONYM Syn_Table_1 FOR ADMIN.Table_1
*
ERROR at line 1:
ORA-01031: insufficient privileges

3) Altering the session

ALTER SESSION SET current_schema = ADMIN;

Session altered.

SELECT * FROM Table_1;

SELECT * FROM Table_1
              *
ERROR at line 1:
ORA-00942: table or view does not exist
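
One more thing worth checking (a sketch, not a definitive diagnosis): Viewer was created with IDENTIFIED BY, so it is a password-protected role, and if it is not enabled in Alice's session its privileges are invisible and ORA-00942 is exactly what Oracle reports. Alice could try enabling it explicitly:

-- Enable the password-protected role for this session, then retry the query.
SET ROLE Viewer IDENTIFIED BY pwdviewer;
SELECT * FROM ADMIN.Table_1;

-- SESSION_ROLES shows which roles are currently enabled.
SELECT * FROM SESSION_ROLES;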

Do id columns help speed up select statements?

Posted: 08 Sep 2013 08:18 AM PDT

I have a table which contains data from aggregation software. The columns are mostly int columns, but three of them are string columns. It looks sort of like:

userId  someData1    someData2         someData3    otherIdColumn  childId  websiteId
-------------------------------------------------------------------------------------
1       somestring   someotherstring   justastring  3              2        1
1       somestring1  someotherstring1  justastring  3              3        3
1       somestring2  someotherstring2  justastring  2              2        1
1       somestring3  someotherstring3  justastring  1              9        5
2       somestring4  someotherstring4  justastring  4              2        10

and so on.

The table contains something like 80m rows for now. When I try to run a query using aggregation functions (SUM) and grouping by one of the string columns (someData1, someData2, someData3), it takes a very, VERY long time (more than 10 minutes per query).

I'm trying to optimize the table right now. The first thing I did was add indexes to the string columns, but I want to make it even faster. I thought of adding an ID column (pk, ai, nn), thinking it would make the select queries faster.

What do you think about it? Do you have any advice on how else I can optimize this table? Note: there are only about 5 columns that I group by; 3 of them are string columns, and they are the ones causing the problem.
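
A surrogate id primary key by itself will not speed up a GROUP BY on a string column. What usually helps is a covering index whose leading column is the grouped column and which also contains the summed column, so the aggregate is answered from the index alone. A minimal sketch (MySQL assumed; the table name stats and the summed column childId are assumptions, not from the question):

-- Covering index: leading column matches the GROUP BY, trailing column feeds the SUM.
ALTER TABLE stats ADD INDEX ix_somedata1_childid (someData1, childId);

-- This aggregate can then be satisfied entirely from the index ("Using index" in EXPLAIN).
SELECT someData1, SUM(childId) AS total
FROM stats
GROUP BY someData1;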

postgres-xc : Address already in use

Posted: 08 Sep 2013 08:33 AM PDT

I am starting a postgres-xc server with two datanodes and one coordinator.
I used same parameter for both nodes but from one node I am getting the following error:

LOG:  could not bind IPv4 socket: Address already in use
HINT:  Is another postmaster already running on port 5660? If not, wait a few seconds and retry.
WARNING:  could not create listen socket for "localhost"
FATAL:  could not create any TCP/IP sockets

and the other node gives me the following error:

LOG:  autovacuum launcher started
ERROR:  PGXC Node datanode1: object already defined
STATEMENT:  CREATE NODE datanode1 WITH (TYPE = 'datanode', PORT = 15671)
LOG:  PGXC node datanode2: Applying default host value: localhost
STATEMENT:  CREATE NODE datanode2 WITH (TYPE = 'datanode', PORT = 15672)

Kindly share your input on these issues.

postgres-xc - ERROR: object already defined

Posted: 08 Sep 2013 05:09 AM PDT

I am configuring two data nodes and a coordinator on postgres-xc. I get the following error:

ERROR:  PGXC Node datanode1: object already defined
CREATE NODE
 pgxc_pool_reload
------------------
 t
(1 row)

Please provide input on this problem.

Accessing unsharded database in a MongoDB shard cluster without going through mongos?

Posted: 08 Sep 2013 06:49 PM PDT

As far as I can understand, not all databases in a MongoDB shard cluster have to be sharded; I can keep some databases unsharded, on only one of the shards.

So suppose I have a shard cluster, with shards (i.e. replica sets) rs0 and rs1. rs0 has a database foo, which is unsharded. I have no plan to shard it or move it to rs1.

Is there any harm in letting some clients connect to rs0 directly and use the foo database without going through mongos?

I have only tested that it is possible to connect directly to rs0 and make queries in foo, but I don't know if it is potentially dangerous in my use case.

Locking in "Read Committed Snapshot"

Posted: 08 Sep 2013 03:19 PM PDT

If an update command is run on a table with the "Read Committed Snapshot" isolation level and the commit is pending,

eg:

1) update table1 set col1 = col1 + 1 where PKcol < 3

2) update table1 set col1 = col1 + 1 where PKcol = 3

3) update table1 set col1 = col1 + 1 where NonPKcol < 3

4) update table1 set col1 = col1 + 1 where NonPKcol = 3

5) update table1 set col1 = col1 + 1 where PKcol < 3 and NonPKcol = 5

(In the above cases, PKcol is the table's primary key and NonPKcol is a non-key column.)

will the update lock only the rows satisfying the WHERE condition? (Is that based on the value, an index, or the primary key column?)
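
One way to answer this empirically (a sketch; 55 is a placeholder for the spid of the session holding the open transaction): run one of the updates above without committing, then inspect its locks from another connection.

-- Shows which resources (KEY, RID, PAGE, OBJECT) the open transaction has locked and in
-- what mode (U, X, IX, ...). Replace 55 with the updating session's spid.
SELECT resource_type, request_mode, request_status, resource_description
FROM sys.dm_tran_locks
WHERE request_session_id = 55;

Whether single keys or whole ranges end up locked depends mostly on whether the WHERE clause can seek on an index (the primary key for PKcol, or a secondary index for NonPKcol), and this view makes that visible.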

Backup / Export data from MySQL 5.5 attachments table keeps failing!

Posted: 08 Sep 2013 11:19 AM PDT

Can anyone please help? I have a large table in a MySQL 5.5 database. It is a table which holds a mixture of blobs/binary data and data rows with links to file paths. It has just over a million rows.

I am having desperate problems getting the data out of this table to migrate it to another server.

I have tried all sorts of things: mysqldump (with and without --quick), dumping the results of a query via the command line, and using a MySQL admin tool (Navicat) to open and export the data to a file or CSV, or to do a line-by-line data transfer to another DB and/or another server, but all to no avail.

When trying to use the DB admin tool (Navicat), it gets to approx 250k records and then fails with an "Out of memory" error. I am not able to get any error messages from the other processes I have tried, but they seem to fall over at approximately the same number of records.

I have tried playing with the MySQL memory variables (buffer size, log file size, etc) and this does seem to have an effect on where the export stops (currently I have actually made it worse).

Also - max_allowed_packet is set to something ridiculously large as I am aware this can be a problem too.

I am really shooting in the dark, and I keep going round and round trying the same things and getting no further. Can anyone give me any specific guidance, or perhaps recommend any tools which I might be able to use to extract this data?

Thanks in hope and advance!

A little more information below - following some questions and advice:

The size of the table I am trying to dump is difficult to say, but the SQL dump gets to 27 GB when mysqldump dies. It could be approximately 4 times that in total.

I have tried running the following mysqldump command:

mysqldump --single-transaction --quick mydatabase attachments --password=abc123 -u root > d:\attachments.sql   

And this gives the error:

mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table attachments at row: 251249

The server has 8 GB RAM; some of the relevant settings are copied below. It is an InnoDB database/table.

innodb_buffer_pool_size=3000M
innodb_log_file_size=1113M
max_allowed_packet=2024M
query_cache_size=52M
tmp_table_size=500M
myisam_sort_buffer_size=50M
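
One approach that sidesteps a single giant dump (a sketch; it assumes attachments has an integer primary key named id, which the question does not state): export the table in id ranges, so a failing chunk can be retried on its own and the server never has to stream the whole table over one connection.

-- Repeat with successive ranges (250001-500000, ...) until the whole table is exported.
-- The target directory may be restricted by the secure_file_priv setting; blob columns
-- are written escaped and can be reloaded with LOAD DATA INFILE using the same options.
SELECT *
INTO OUTFILE 'd:/attachments_000001_250000.csv'
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
FROM attachments
WHERE id BETWEEN 1 AND 250000;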

Constraint to one of the primary keys as foreign key

Posted: 08 Sep 2013 01:52 PM PDT

Table1: grid_col (col_id, f_id, f_value)
    (col_id, f_id) is the primary key.

Table2: grid (grid_id, col_id, text)
    (grid_id) is the primary key.

I want a constraint on grid that col_id must be present in grid_col. I can't have a foreign key constraint here. I could create a function-based constraint that scans grid_col while inserting into grid, but that increases the chances of a deadlock. How can I add a constraint here?
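
One common workaround (a sketch, assuming the schema can tolerate an extra table): introduce a table that holds each col_id exactly once and have both grid_col and grid reference it, so ordinary foreign keys enforce the relationship without a function-based constraint.

-- col_id's type is assumed to be INT here; match whatever grid_col actually uses.
CREATE TABLE grid_column (
    col_id INT PRIMARY KEY
);

ALTER TABLE grid_col
    ADD CONSTRAINT fk_grid_col_column FOREIGN KEY (col_id) REFERENCES grid_column (col_id);

ALTER TABLE grid
    ADD CONSTRAINT fk_grid_column FOREIGN KEY (col_id) REFERENCES grid_column (col_id);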

disk I/O error in SQLite

Posted: 08 Sep 2013 02:19 PM PDT

What are the possible things that would trigger the "disk I/O error"? I've been having this problem and I couldn't find a solution. I have a SQLite3 database, and I'm trying to insert data from a file that contains SQL inserts.

Sample data in the file:

insert into files (filesize, filedate, md5, fullpath, origin) values (5795096, 1370159412, "e846355215bbb9bf5f30102a49304ef1", "SDs/16G-1/DSC00144.JPG", "SDs");
insert into files (filesize, filedate, md5, fullpath, origin) values (5435597, 1370159422, "1a7bcf3a4aaee3e8fdb304ab995ff80f", "SDs/16G-1/DSC00145.JPG", "SDs");
insert into files (filesize, filedate, md5, fullpath, origin) values (5121224, 1370159432, "16d28e83599c731657a6cd7ff97a4903", "SDs/16G-1/DSC00146.JPG", "SDs");

I tried inserting that in the db file with the following command:

$ sqlite3 allfiles.db < insert.sql  

See below the error that I get:

Error: near line 27: disk I/O error
Error: near line 28: disk I/O error
Error: near line 34: disk I/O error
Error: near line 39: disk I/O error
Error: near line 47: disk I/O error
Error: near line 129: disk I/O error

The input lines that don't generate an error are successfully inserted, but I don't understand why some lines have errors and are not inserted into the DB. There's nothing special about the lines with errors, and if I run the command again I get errors in different lines, which means it's random (not related to the data itself). I tried adding pragma synchronous = off; and pragma temp_store = memory;, with no success.

I'm running this on Lubuntu, which runs in a VirtualBox virtual machine. The host machine is Windows 7. The working directory of the files is a shared folder, i.e. it's a folder on the host machine. If I run it in a "local folder" on the guest machine, the error doesn't happen, although for some reason it's much slower. In any case, I'd like to know about the I/O error.

Creating the MySQL slow query log file

Posted: 08 Sep 2013 12:19 PM PDT

What do I need to do to generate the slow query log file in MySQL?

I did:

log_slow_queries = C:\Program Files\MySQL\MySQL Server 5.1\mysql-slow.log
long_query_time  = 1

What more do I need to do?
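
If the file still is not created, the slow log may simply not be enabled. A sketch of checking and switching it on at runtime in MySQL 5.1 (the path is the one from the question):

-- Check whether the slow log is on and where it writes.
SHOW VARIABLES LIKE 'slow_query%';
SHOW VARIABLES LIKE 'long_query_time';

-- Enable it without a restart (it can also be set in my.ini as slow_query_log = 1).
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL slow_query_log_file = 'C:/Program Files/MySQL/MySQL Server 5.1/mysql-slow.log';
SET GLOBAL long_query_time = 1;

Note that a long_query_time set with SET GLOBAL only takes effect for new connections.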

how to add attachment(text file) to database mail?

Posted: 08 Sep 2013 07:19 AM PDT

I have this scenario:

Daily I run a SQL job that applies new updates to one table. This job creates one text file per day containing all of the new updates.

I can send a mail to the client saying the job completed successfully; now I need to send him the text file as an attachment.

Is there any way to send an attachment through the GUI (SQL Server job settings)?

I can't run the script

EXEC sp_send_dbmail  

I googled for this scenario but found no information for the GUI end; I could only find how to do it with scripts.
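
For comparison, the script form that supports attachments, which could be placed in a Transact-SQL job step added through the GUI, looks roughly like this (profile name, recipient, and file path are placeholders):

EXEC msdb.dbo.sp_send_dbmail
    @profile_name     = 'JobMailProfile',
    @recipients       = 'client@example.com',
    @subject          = 'Daily update job completed',
    @body             = 'The daily update job completed successfully; the update file is attached.',
    @file_attachments = 'D:\exports\daily_updates.txt';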

Efficient way to move rows across the tables?

Posted: 08 Sep 2013 05:19 AM PDT

This is a somewhat long question, as I would like to explain all the details of the problem.

System Description

We have a queue of incoming messages from external system(s). Messages are immediately stored in a table, e.g. INBOX. A few worker threads fetch chunks of work from the table (first marking some messages with an UPDATE, then SELECTing the marked messages). The workers do not process the messages; they dispatch them to different internal components (called 'processors'), depending on the message command.

Each message contains several text fields (the longest is around 200 characters), a few ids, some timestamps, etc.; 10-15 columns total.

Each internal component (i.e. processor) that processes messages works differently. Some process the message immediately; others trigger some long operation, even communicating via HTTP with other parts of the system. In other words, we cannot just process a message from the INBOX and then remove it. We must work with that message for a while (an async task).

Still, there are not too many processors in the system, up to 10.

Messages are all internal, i.e. it is not important for users to browse or paginate them. A user may request a list of relevant processed messages, but that is not a mission-critical feature, so it does not have to be fast. Some invalid messages may occasionally be deleted.

It's important to emphasize that the expected traffic might be quite high, and we don't want bottlenecks because of bad database design. The database is MySQL.

Decision

One of the decisions is not to have one big table for all messages with a flags column indicating the various message states. The idea is to have a table per processor and to move messages around. For example, received messages are stored in INBOX, then moved by the dispatcher to e.g. the PROCESSOR_1 table, and finally moved to the ARCHIVE table. There should not be more than 2 such movements.

While in the processing state, we do allow flags for indicating processing-specific states, if any. In other words, a PROCESSOR_X table may track the state of its messages, since the number of currently processing messages will be significantly smaller.

The reason for this is not to use one BIG table for everything.

Question

Since we are moving messages around, I wonder how expensive this is with high volumes. Which of the following scenarios is better:

(A) Have all the separate, similar tables, as explained, and move complete message rows: read the complete row from INBOX, write it to the PROCESSOR table (with some additional columns), and delete it from INBOX.

or

(B) To avoid physical movement of the content, have one big MESSAGES table that just stores the content (and still not the state). We would still have the other tables, as explained above, but they would contain just message IDs and additional columns. Now, when a message is about to move, we physically move much less data (just IDs), and the rest of the message remains in the MESSAGES table, unmodified, the whole time.

In other words, is there a penalty for a SQL join between one small table and one huge table?
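
For concreteness, a minimal sketch of what one "movement" looks like under each option (the columns command, payload, received_at, the *_QUEUE table names, and the id value 42 are placeholders, not the real schema):

-- (A) Move the whole row: copy it to the processor table, then delete it from INBOX,
--     inside one transaction so the message is never lost or duplicated.
START TRANSACTION;
INSERT INTO PROCESSOR_1 (id, command, payload, received_at)
    SELECT id, command, payload, received_at FROM INBOX WHERE id = 42;
DELETE FROM INBOX WHERE id = 42;
COMMIT;

-- (B) Move only the id: the content stays put in MESSAGES.
START TRANSACTION;
INSERT INTO PROCESSOR_1_QUEUE (message_id) SELECT message_id FROM INBOX_QUEUE WHERE message_id = 42;
DELETE FROM INBOX_QUEUE WHERE message_id = 42;
COMMIT;

-- Reading a message under (B) is then a join against the big table:
SELECT m.*
FROM PROCESSOR_1_QUEUE q
JOIN MESSAGES m ON m.id = q.message_id;

A join on the primary key of MESSAGES is generally cheap, so option (B) mostly trades a small join cost for much smaller writes on every move.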

Thank you for your patience; I hope I was clear enough.

SSIS Script to split string into columns

Posted: 08 Sep 2013 04:19 PM PDT

I have a dataset (a log file) with a number of columns; one of them is "Other-Data" below (an unordered string), and I need to parse the string to create derived columns according to the u values (U1, U2, U3, etc.). The output columns should be something like:

U1    U2   U3                   U4    U5    etc.
null  odw  odw : CH : de : hom  null  null
EUR   sss  DE:de:hom            null  null
EUR   crm  crm                  null  null

Other-Data:

u3=odw : CH : de : hom;u2=odw : Product : DSC-HX20V;~oref=http://www.bidl.ch/lang/de/product/dsc-h-series/dsc-hx20v
u1=EUR;u2=sss:Checkout-Step4:Orderacknowledgement;u3=DE:de:hom;u11=1;u12=302338533;u13=SVE1511C5E;u14=575.67;~oref=https://shop.bidl.de/shop/bibit/success.do
u15=1187;u13=SVE14A1C5E~VAIOEWY401;u11=1~1;u10=843.9~121.14;u9=1038~149;u3=crm : FI : fi : hom;u1=EUR;u2=crm : Checkout : Order acknowledgement;~oref=https://shop.bidl.fi/shop/bibit/success.do

Can anyone help with this?
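
For comparison, a set-based way to pivot these pairs (a sketch that assumes the rows already sit in a table dbo.weblog(id, other_data) and that STRING_SPLIT is available, i.e. SQL Server 2016+; inside an SSIS Script Component the same split-on-';' then split-on-'=' logic would be written in the script language instead):

SELECT  l.id,
        MAX(CASE WHEN kv.k = 'u1' THEN kv.v END) AS U1,
        MAX(CASE WHEN kv.k = 'u2' THEN kv.v END) AS U2,
        MAX(CASE WHEN kv.k = 'u3' THEN kv.v END) AS U3,
        MAX(CASE WHEN kv.k = 'u4' THEN kv.v END) AS U4,
        MAX(CASE WHEN kv.k = 'u5' THEN kv.v END) AS U5
FROM dbo.weblog AS l
CROSS APPLY (
    -- Split "Other-Data" on ';', then split each piece into key and value at the first '='.
    SELECT  LTRIM(LEFT(s.value, CHARINDEX('=', s.value) - 1)) AS k,
            SUBSTRING(s.value, CHARINDEX('=', s.value) + 1, 4000) AS v
    FROM STRING_SPLIT(l.other_data, ';') AS s
    WHERE CHARINDEX('=', s.value) > 0
) AS kv
GROUP BY l.id;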

Impact of changing the DB compatibility level for a Published replicated DB from 90 to 100

Posted: 08 Sep 2013 10:19 AM PDT

I have a SQL Server 2008 R2 server with a bunch of published databases that are currently operating under compatibility level 90 (2005).

The subscription databases are also on SQL Server 2008 R2; however, the destination databases are set to compatibility level 100, and replication is working fine.

If I change the compatibility level for the Published databases, will it affect replication in any way, or will it just be a case of reinitializing all the subscriptions and restarting replication?

I suspect that changing the published database compatibility level may change how the replication stored procedures function slightly, but I'm not 100% sure.

Is this the case?

Is there a way to do data export from Amazon Web Services RDS for SQL Server to On-Premises SQL Server?

Posted: 08 Sep 2013 06:19 AM PDT

We have a large Amazon Web Services RDS for SQL Server instance and we would like to do incremental data transfers from RDS to On-Premises SQL Server on a regular basis.

The On-prem server will be used to feed information into other systems with an acceptable 1-day delay.

However, reading through the docs and searching Google, forums, etc., we have not found a seamless way to do off-AWS data transfers using RDS for SQL Server.

Built-in SQL Server features such as Change Data Capture (CDC) are turned off, as are replication and off-site backup/restore.

Is there a way to do this or is it a limitation of using RDS?
