[how to] How to Partition an existing Table which already has a clustered index with a primary key |
- How to Partition an existing Table which already has a clustered index with a primary key
- Merging Large Database Tables Quickly
- Can't connect to service after updating ODBC driver
- Create a high availability cluster configuration PostgreSQL for Windows
- What's the fastest and most accurate way to verify data migration?
- How do I convert unicode data from UTF-16LE to UTF-8 when migrating from SQL Server to MySQL?
- Select on several {name,value} pairs
- Postgres corrupted DB recovery
- SQL Server 2008R2 Database refresh in isolated environment
- Connection issues with JDBC to SQL Server named instance
- Why does "GRANT SELECT ... TO Role;" not allow members of Role to SELECT?
- Do id columns help speed up select statements?
- postgres-xc : Address already in use
- postgres-xc - ERROR: object already defined
- Accessing unsharded database in a MongoDB shard cluster without going through mongos?
- Locking in "Read Committed Snapshot"
- Backup / Export data from MySQL 5.5 attachments table keeps failing!
- Constraint to one of the primary keys as foreign key
- disk I/O error in SQLite
- Creating the MySQL slow query log file
- How to add an attachment (text file) to Database Mail?
- Efficient way to move rows across the tables?
- SSIS Script to split string into columns
- Impact of changing the DB compatibility level for a Published replicated DB from 90 to 100
- Is there a way to do data export from Amazon Web Services RDS for SQL Server to On-Premises SQL Server?
How to Partition an existing Table which already has a clustered index with a primary key Posted: 08 Sep 2013 08:59 PM PDT I have one table: CREATE TABLE [dbo].[entry]( [ID] [bigint] IDENTITY(1,1) NOT NULL, [EntryDate] [datetime] NOT NULL, [createddate] [datetime] NOT NULL, CONSTRAINT [PK_Entry_ID] PRIMARY KEY CLUSTERED ( [ID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY] ) ON [PRIMARY] whose primary key is referenced by another table. To partition this table I have already completed the following steps: 1. CREATE PARTITION FUNCTION EntryFunc (DATE) AS RANGE LEFT FOR VALUES ('2011-01-01') 2. CREATE PARTITION SCHEME EntryScheme AS PARTITION EntryFunc TO ([FileGroup2], [PRIMARY]) These 2 steps completed successfully, but when I try to partition the table I am not able to drop the clustered primary key index because it is referenced by the other table. My goal is to partition the Entry table by created date. |
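A possible route, sketched below for SQL Server 2008+: drop the referencing foreign keys, recreate the primary key as nonclustered, and build the clustered index on the partition scheme keyed on createddate. The foreign-key and index names other than PK_Entry_ID are hypothetical, and the partition function would ideally be declared on DATETIME so its parameter type matches createddate.

-- Sketch only; the referencing table and FK name are placeholders.
ALTER TABLE [dbo].[OtherTable] DROP CONSTRAINT [FK_OtherTable_Entry];   -- hypothetical referencing FK

ALTER TABLE [dbo].[entry] DROP CONSTRAINT [PK_Entry_ID];                -- the clustered PK

ALTER TABLE [dbo].[entry]                                               -- keep the PK, but nonclustered
    ADD CONSTRAINT [PK_Entry_ID] PRIMARY KEY NONCLUSTERED ([ID]) ON [PRIMARY];

CREATE CLUSTERED INDEX [CIX_entry_createddate]                          -- partitioned clustered index
    ON [dbo].[entry] ([createddate])
    ON EntryScheme ([createddate]);

ALTER TABLE [dbo].[OtherTable]                                          -- restore the FK dropped above
    ADD CONSTRAINT [FK_OtherTable_Entry]
    FOREIGN KEY ([EntryID]) REFERENCES [dbo].[entry] ([ID]);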
Merging Large Database Tables Quickly Posted: 08 Sep 2013 08:53 PM PDT So I found this other post, Merge multiple tables in a database, and that's the general idea, but at an insanely large scale. Anyway -- over the years, our site has adapted and three different things have developed in the tables that are kind of the same thing: submitted content. I have journals, images, and links; however, there's no REAL reason for these to be separate, and for the benefit of table organization, I'd like to reduce them to a single table: posts. So the problem is this: Journals has 2.6 million rows, Images has 1.4 million rows, and Links has a cute 45 thousand rows. Now, some of the columns need to be remapped for the sake of logic - simple stuff like the "friends_only" column becoming "privacy", which is simple enough. What would the recommended course of action be for migrating these tables, or is sticking with a Perl script that runs through all 4 million+ records the best way of doing it? |
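If the target schema is already designed, a set-based INSERT ... SELECT per source table is usually much faster than a row-by-row Perl loop. The column names below are placeholders; the point is the remapping (e.g. friends_only feeding privacy) and keeping the old id plus a source tag so existing references can be rewritten afterwards.

-- Sketch, assuming MySQL and hypothetical columns on posts/journals.
INSERT INTO posts (old_id, source, user_id, title, body, privacy, created_at)
SELECT id, 'journal', user_id, title, body, friends_only, created_at
FROM journals;
-- ...then the same pattern for images and links, remapping their columns.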
Can't connect to service after updating ODBC driver Posted: 08 Sep 2013 08:23 PM PDT I am upgrading a program at work and one of the changes is that it now uses PostgreSQL 9.2.4 instead of 8. I was getting a 'client encoding mismatch' error, so I updated the ODBC driver, and the problem went away. However, with the new driver, my program no longer wants to connect to a custom service that it uses. The custom service uses Postgres a lot. The error I'm getting is '(10061) connection is forcefully rejected'. Postgres is configured to accept connections from any IP address, so I'm not sure why I'm getting this error. The program connects fine to the custom service with the old version of the ODBC driver, but as soon as I start using the new driver, it does not want to connect. I've checked the services list and both Postgres and the custom service are started. At one point, while trying to connect to the custom service, I was getting an error that said something like "OLE DB error: cannot send query to the backend". However, I can't seem to reproduce that error message anymore; it simply does not connect. I don't have a lot of database experience, so I apologize if this information is confusing or incomplete. Please let me know if you need clarification on anything. Any suggestions would be appreciated, even if they are just ideas on how to troubleshoot this issue. |
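For what it's worth, Winsock error 10061 is "connection refused", i.e. nothing accepted the TCP connection on the host/port the service dialled; pg_hba.conf rejections normally produce a different message after the connection is made. Two quick, standard checks from a session that does work (whether they explain this particular failure is only a guess):

SHOW listen_addresses;   -- should be '*' or include the interface the service uses
SHOW port;               -- confirm the service's connection settings point at this port (default 5432)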
Create a high availability cluster configuration PostgreSQL for Windows Posted: 08 Sep 2013 06:06 PM PDT I want a server to be replicated. Here is the scenario: when my primary server goes down or fails, I want the secondary (standby) server to take over. I'm using Windows 7 for the primary server, another machine on the same OS, and Windows XP in a virtual machine for the secondary server. Any help would be appreciated. |
What's the fastest and most accurate way to verify data migration? Posted: 08 Sep 2013 04:45 PM PDT I'm migrating ~1TB of data from SQL Server to MySQL. The verification process is multi-stage.
Any ideas on how to make this easier, faster, better? Thanks in advance! |
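One cheap first pass, sketched with made-up table and column names: compare row counts and a keyed aggregate on both sides, and only drill into the tables where those disagree. (MySQL's CHECKSUM TABLE can serve as an extra spot check on the destination, but its value is not comparable to anything SQL Server produces.)

-- SQL Server side
SELECT COUNT(*) AS row_count, SUM(CAST(id AS BIGINT)) AS id_sum FROM dbo.customer;
-- MySQL side
SELECT COUNT(*) AS row_count, SUM(id) AS id_sum FROM customer;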
How do I convert unicode data from UTF-16LE to UTF-8 when migrating from SQL Server to MySQL? Posted: 08 Sep 2013 04:25 PM PDT Objective: Ensure existing Unicode (UTF-16LE) data is displayed the same when moved to MySQL (UTF-8). Background: I'm migrating 1TB from SQL Server to MySQL. The current collation in SQL Server is "Latin1_General_CP1_CI_AS". Given that MySQL supports Latin1, I am assuming there shouldn't be a problem with the collation. My main concern is the character set conversion. In SQL Server, all numerical data types are set to Character Set = NULL; the remaining columns are set to "UNICODE". Since SQL Server Unicode is UTF-16LE and it needs to be UTF-8 in MySQL, I fear I'm stuck verifying each character in all non-numerical columns. That would be very long and intensive given ~4k tables. Potential solutions: 1. Using SqlDataReader, validate each record value, change it if necessary, and then save the updated results to an export file. 2. BCP data out of SQL Server and save it to a .txt file, then reopen each .txt file and re-save it with UTF-8 encoding. Problems encountered: when converting the character set from UTF-16LE to UTF-8 using a SqlDataReader, it works great except when the source value to be converted is already UTF-8. Ideally, the solution would only run if the data needed to be converted. Does anyone know of an easy way to ensure data is correctly converted and verified from one database to another? Am I overthinking this? Is this even an issue? Thanks in advance! |
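One way to avoid per-character validation, sketched with made-up names: declare the encoding explicitly at both the column definition and the import step, and let MySQL do the conversion. This assumes the BCP export has already been re-saved as UTF-8, since (as far as I know) MySQL 5.5's LOAD DATA cannot read UTF-16 files directly; use utf8mb4 instead of utf8 if supplementary characters may be present.

CREATE TABLE customer (
    id INT NOT NULL PRIMARY KEY,
    full_name VARCHAR(200) CHARACTER SET utf8 NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

LOAD DATA INFILE '/tmp/customer.txt'     -- path and delimiters are examples
INTO TABLE customer
CHARACTER SET utf8
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';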
Select on several {name,value} pairs Posted: 08 Sep 2013 05:42 PM PDT I have this schema: I want to write a query to retrieve the
The cardinality of the constraints is unknown at compile time; it comes from a GUI. I wrote this query: the result is: Only the How can I eliminate the lines which don't match ALL the constraints? |
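Since the schema image didn't survive, here is the usual shape of the answer with assumed names: an attribs(item_id, name, value) table and N {name,value} pairs supplied by the GUI. Filtering with OR and then requiring the number of distinct matched names to equal N ("relational division") keeps only the items that satisfy every constraint:

SELECT a.item_id
FROM attribs AS a
WHERE (a.name = 'color' AND a.value = 'red')    -- pairs generated from the GUI input
   OR (a.name = 'size'  AND a.value = 'XL')
GROUP BY a.item_id
HAVING COUNT(DISTINCT a.name) = 2;              -- 2 = number of pairs in this run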
Postgres corrupted DB recovery Posted: 08 Sep 2013 07:44 PM PDT Is there a way to recover data from a partially damaged Postgres DB? My HDD was damaged in transport; I was able to recover most of the files, but I cannot connect to the DB or run a dump - I'm getting this error: This is one of the missing files, though most of them are fine. Is there a way? Thanks |
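A last-resort technique that sometimes lets a dump get past bad blocks, assuming the server starts at all - run it only against a copy of the recovered data directory: PostgreSQL's zero_damaged_pages setting makes reads zero out pages with damaged headers instead of erroring, at the price of silently losing the rows on those pages.

SET zero_damaged_pages = on;      -- superuser only, per-session
SELECT count(*) FROM some_table;  -- hypothetical table; a full read will skip/zero damaged pages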
SQL Server 2008R2 Database refresh in isolated environment Posted: 08 Sep 2013 07:43 PM PDT I would greatly appreciate your help on my database "refresh" issue. The scenario: I have 12 databases on a QA server that were restored there from a Production server approx. 2 years ago. Now the QA team needs to sync those 12 databases with the databases on the Prod server; all account logins (and corresponding permissions) have to remain unchanged in the QA environment - they only need to "refresh" the databases so that the data is as current as it is in the Prod environment. For security reasons there is no (and cannot be) connection between the two servers (hence it is not possible to import data with an append option), so I had to ask the DBAs on that Prod server to back up the databases and place the backup files in a shared folder (already there). My question is - what is the best way to "refresh" the 12 databases in the QA environment? Is it to drop the old databases and restore them from the backup files (and in that case, what would happen to the current QA server logins?), or is it to try to restore the databases from backups without dropping the 12 old databases - and is that even possible? Would the data just be appended to the existing data and the current logins stay unchanged? Thank you in advance for any input. |
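A sketch of the usual per-database routine (database, file names and paths are placeholders), assuming the QA instance-level logins should stay exactly as they are: restore over the old copy, then re-point the restored database users (which come from Prod) at the existing QA logins so no permissions change.

RESTORE DATABASE [AppDb]
FROM DISK = N'\\share\backups\AppDb.bak'
WITH REPLACE,
     MOVE N'AppDb_Data' TO N'D:\Data\AppDb.mdf',
     MOVE N'AppDb_Log'  TO N'L:\Logs\AppDb.ldf';

USE [AppDb];
EXEC sp_change_users_login 'Report';              -- list users orphaned by the restore
ALTER USER [app_user] WITH LOGIN = [app_user];    -- re-map each one to the existing QA login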
Connection issues with JDBC to SQL Server named instance Posted: 08 Sep 2013 11:57 AM PDT We have a clustered instance, Instance1, assigned its own virtual network name and IP (say 172.0.0.1). This instance is configured to listen on port 1433 in SQL Server network configuration (verified in the logs). We can verify connectivity to Instance1 within SSMS (specifying 172.0.0.1,1433 as the server) using the credentials for apiUser. When the connection string is specified as given below, the connection fails and we see error 18456 state 38 in the logs. The database db1 exists and the login has a mapped user in db1; in fact db1 is the default database for the login. When logging in via SSMS, the database context is successfully set to db1 upon successful login. We are at a loss as to why error 18456 state 38 would be thrown in this scenario. Per Aaron Bertrand's article, Troubleshooting Error 18456, the issue seems to be with a missing specified database in the connection string or permissions for the login to open the specified database, but this doesn't appear to be the case. Any help would be greatly appreciated.
|
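Error 18456 state 38 is logged when the login authenticates but the database it asks for cannot be opened, so two server-side checks are worth running; the queries below use the names from the question, and whether they reveal the cause here is only a guess.

SELECT name, default_database_name, is_disabled
FROM sys.server_principals
WHERE name = N'apiUser';                         -- confirm what the server thinks the default DB is

EXEC master.dbo.xp_readerrorlog 0, 1, N'18456';  -- current SQL Server log filtered on 18456; the entry names the database the client actually asked for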
Why does "GRANT SELECT ... TO Role;" not allow members of Role to SELECT? Posted: 08 Sep 2013 03:23 PM PDT Please consider the following scenario on an Oracle 11g database. The user ADMIN performs the following: Then Alice logs into the database as 'Alice' and executes the following command: Alice gets the following error: I thought that granting privileges to a role would enable its member users to get those privileges. However, this scenario shows that it is not so. What am I missing here? How can I grant SELECT to Alice using a role? Update: Following the helpful answers, I tried 3 fixes with no success. 1) Using fully-qualified table names: I had missed including the schema name. Alice executes: Gets the error: 2) Using a synonym for the fully-qualified table name: Unfortunately, this does not seem to solve the problem either. Alice executes the following: 3) Altering the session |
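For reference, the sequence that normally works, with placeholder names (schema ADMIN, table T, role READER_ROLE, user ALICE). The grant alone is not enough if the query isn't schema-qualified, since granting through a role creates no synonym; note also that role privileges are not visible inside definer's-rights PL/SQL.

-- As ADMIN:
CREATE ROLE reader_role;
GRANT SELECT ON admin.t TO reader_role;
GRANT reader_role TO alice;

-- As ALICE (new session, or enable the role explicitly):
SET ROLE reader_role;        -- unnecessary if it is one of Alice's default roles
SELECT * FROM admin.t;       -- must be schema-qualified unless a synonym exists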
Do id columns help speed up select statements? Posted: 08 Sep 2013 08:18 AM PDT I have a table which contains data from aggregation software. The columns are mostly int columns, but three of them are string columns. It looks sort of like: and so on. The table contains something like 80m rows for now. When I try to run a query using aggregation functions (SUM) and grouping by one of the string columns (someData1, someData2, someData3), it takes a very VERY long time (more than 10 minutes for a query). I'm trying to optimize the table right now; the first thing I did was add indexes to the string columns, but I want to make it even faster. I thought of adding an ID column (pk, ai, nn), as if it will make the select queries faster. What do you think about it? Do you have any advice on how I can optimize this table further? Note: I have only about 5 columns that I group by. 3 of them are string columns, and they are causing the problem. |
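A surrogate id primary key on its own won't speed up a GROUP BY ... SUM; what usually helps is a composite index that covers both the grouping column and the summed column (so the query is answered from the index alone), or a periodically refreshed summary table. Table and metric column names below are made up:

ALTER TABLE stats                                          -- hypothetical table
    ADD INDEX idx_data1_metric (someData1, metricValue);   -- add a prefix length if someData1 is long

SELECT someData1, SUM(metricValue)                         -- the query this index serves
FROM stats
GROUP BY someData1;

CREATE TABLE stats_by_data1 AS                             -- optional pre-aggregated summary table
SELECT someData1, SUM(metricValue) AS total
FROM stats
GROUP BY someData1;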
postgres-xc : Address already in use Posted: 08 Sep 2013 08:33 AM PDT I am starting a postgres-xc server with two datanodes and one coordinator, and one of the nodes gives me the following error: Kindly share your inputs on this issue. |
postgres-xc - ERROR: object already defined Posted: 08 Sep 2013 05:09 AM PDT I am configuring two data nodes and a coordinator on postgres-xc. I get the following error: Please provide input on this problem. |
Accessing unsharded database in a MongoDB shard cluster without going through mongos? Posted: 08 Sep 2013 06:49 PM PDT As far as I can understand, not all databases in a MongoDB shard cluster have to be sharded; I can keep some databases unsharded, on only one of the shards. So suppose I have a shard cluster, with shards (i.e. replica sets) Is there any harm in letting some clients connect to I have only tested that it is possible to connect directly to |
Locking in "Read Committed Snapshot" Posted: 08 Sep 2013 03:19 PM PDT If an UPDATE command is run on a table under the "Read Committed Snapshot" isolation level and the commit is pending, e.g.:
1) update table1 set col1 = col1 + 1 where PKcol < 3
2) update table1 set col1 = col1 + 1 where PKcol = 3
3) update table1 set col1 = col1 + 1 where NonPKcol < 3
4) update table1 set col1 = col1 + 1 where NonPKcol = 3
5) update table1 set col1 = col1 + 1 where PKcol < 3 and NonPKcol = 5
(In the above, PKcol is the primary key of the table and NonPKcol is a non-primary-key column.) Are updates locked only for the rows satisfying the WHERE condition? (Is that based on the value, an index, or the primary key column?) |
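READ_COMMITTED_SNAPSHOT changes how readers behave (they read row versions instead of blocking), but the UPDATE itself still takes exclusive row/key locks, and which rows those cover depends on how the WHERE clause is evaluated: a seek on PKcol tends to lock just the qualifying keys, while a predicate on an unindexed NonPKcol is typically satisfied by a scan that touches far more rows. One way to see it for each of the five statements, sketched below:

-- Session 1: run one of the updates and leave the transaction open
BEGIN TRANSACTION;
UPDATE table1 SET col1 = col1 + 1 WHERE PKcol < 3;

-- Session 2: inspect the locks the open transaction is holding
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();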
Backup / Export data from MySQL 5.5 attachments table keeps failing! Posted: 08 Sep 2013 11:19 AM PDT Can anyone please help! - I have a large table in a MySQL 5.5 database. It is a table which holds a mixture of blobs/binary data and plain data rows with links to file paths. It has just over a million rows. I am having desperate problems getting the data out of this table to migrate it to another server. I have tried all sorts - mysqldump (with and without --quick), dumping the results of a query via the command line, using a MySQL admin tool (Navicat) to open and export the data to file or CSV, and doing a data transfer (line by line) to another DB and/or another server, but all to no avail. When trying to use the DB admin tool (Navicat), it gets to approx 250k records and then fails with an "Out of memory" error. I am not able to get any error messages from the other processes I have tried, but they seem to fall over at approximately the same number of records. I have tried playing with the MySQL memory variables (buffer size, log file size, etc.) and this does seem to have an effect on where the export stops (currently I have actually made it worse). Also - max_allowed_packet is set to something ridiculously large as I am aware this can be a problem too. I am really shooting in the dark, and I keep going round and round trying the same things and getting no further. Can anyone give me any specific guidance, or perhaps recommend any tools which I might be able to use to extract this data? Thanks in hope and advance! A little more information below, following some questions and advice: The size of the table I am trying to dump is difficult to say, but the SQL dump gets to 27GB when the mysqldump dies. It could be approximately 4 times that in total. I have tried running the following mysqldump command: And this gives the error:
The server has 8GB of RAM; some of the relevant settings are copied below. It is an InnoDB database/table. |
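One workaround that avoids any single huge client-side result set: export the table in primary-key ranges with SELECT ... INTO OUTFILE (table name, path and range below are placeholders), scripting successive ranges; mysqldump's --where option can be used the same way. Blob columns should round-trip as long as the ESCAPED BY clause is kept and LOAD DATA on the other side uses the same options.

SELECT *
FROM attachments
WHERE id BETWEEN 1 AND 100000
INTO OUTFILE '/var/tmp/attachments_000001_100000.dat'
  FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\'
  LINES TERMINATED BY '\n';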
Constraint to one of the primary keys as foreign key Posted: 08 Sep 2013 01:52 PM PDT I want a constraint on grid such that col_id must be present in grid_col. I can't have a foreign key constraint here. I could create a function-based constraint which scans grid_col while inserting into grid, but that increases the chance of deadlocks. How can I add a constraint here? |
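The schema isn't shown, so this is a guess at the shape: if grid_col has a composite primary key that merely includes col_id, then col_id alone isn't unique and a standard foreign key can't point at it. A portable workaround is a parent table of unique column ids that both tables reference; it guarantees the id is a known column, though not specifically that it appears in grid_col. (InnoDB alone also allows a foreign key onto any indexed, even non-unique, column, but that is engine-specific behaviour.)

CREATE TABLE col (
    col_id INT PRIMARY KEY
);

CREATE TABLE grid_col (
    grid_id INT NOT NULL,
    col_id  INT NOT NULL,
    PRIMARY KEY (grid_id, col_id),
    FOREIGN KEY (col_id) REFERENCES col (col_id)
);

CREATE TABLE grid (
    grid_id INT PRIMARY KEY,
    col_id  INT NOT NULL,
    FOREIGN KEY (col_id) REFERENCES col (col_id)
);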
disk I/O error in SQLite Posted: 08 Sep 2013 02:19 PM PDT What are the possible things that would trigger the "disk I/O error"? I've been having this problem and I couldn't find a solution. I have a SQLite3 database, and I'm trying to insert data from a file that contains SQL inserts. Sample data in the file: I tried inserting that into the db file with the following command: See below the error that I get: The input lines that don't generate an error are successfully included, but I don't understand why some lines produce errors and are not inserted into the DB. There's nothing special about the lines with errors, and if I run the command again I get errors on different lines, which means it's random (not related to the data itself). I tried adding |
Creating the MySQL slow query log file Posted: 08 Sep 2013 12:19 PM PDT What do I need to do to generate the slow query log file in MySQL? I did: What more do I need to do? |
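For reference, the runtime settings that turn the log on in MySQL 5.1+ are below (put the same values in my.cnf under [mysqld] so they survive a restart, and make sure the log directory is writable by the mysqld user; the path is an example).

SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';
SET GLOBAL long_query_time = 1;      -- seconds; the default is 10
SET GLOBAL slow_query_log = 'ON';

SHOW VARIABLES LIKE 'slow_query%';   -- verify
SHOW VARIABLES LIKE 'long_query_time';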
How to add an attachment (text file) to Database Mail? Posted: 08 Sep 2013 07:19 AM PDT I have a scenario: daily I run a SQL job to apply new updates to one table. This job creates one text file daily; the text file contains all the new updates. I can send a mail to the client saying the job completed successfully - now I need to send them the text file as an attachment. Is there any way to send an attachment through the GUI (SQL Server job settings)? I can't run a script. I googled this scenario but could not find anything for the GUI - I could only find it done with scripts. |
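As far as I know the built-in job notification can't attach a file, so the usual workaround is a final Transact-SQL job step that calls Database Mail directly; the profile name, recipient and file path below are examples.

EXEC msdb.dbo.sp_send_dbmail
     @profile_name     = N'DBMailProfile',
     @recipients       = N'client@example.com',
     @subject          = N'Daily update job completed',
     @body              = N'The daily update ran successfully; the change file is attached.',
     @file_attachments = N'D:\Exports\daily_updates.txt';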
Efficient way to move rows across the tables? Posted: 08 Sep 2013 05:19 AM PDT This is a somewhat long question as I would like to explain all the details of the problem. System Description: We have a queue of incoming messages from external system(s). Messages are immediately stored in the e.g. INBOX table. A few worker threads fetch a chunk of work from the table (first mark some messages with UPDATE, then SELECT the marked messages). The workers do not process the messages; they dispatch them to different internal components (called 'processors'), depending on the message command. Each message contains several text fields (the longest is around 200 varchars), a few ids, some timestamp(s), etc.; 10-15 columns total. Each internal component (i.e. processor) that processes messages works differently. Some process the message immediately, others trigger some long operation, even communicating via HTTP with other parts of the system. In other words, we cannot just process a message from the INBOX and then remove it. We must work with that message for a while (an async task). Still, there are not too many processors in the system, up to 10. Messages are all internal, i.e. it is not important for the user to browse them, paginate, etc. A user may require a list of processed relevant messages, but that's not a mission-critical feature, so it does not have to be fast. Some invalid messages may be deleted sometimes. It's important to emphasize that the expected traffic might be quite high - and we don't want bottlenecks because of bad database design. The database is MySQL. Decision: One of the decisions is not to have one big table for all messages, with some flag column that would indicate the various message states. The idea is to have a table per processor, and to move messages around. For example, received messages are stored in INBOX, then moved by the dispatcher to e.g. the PROCESSOR_1 table, and finally moved to the ARCHIVE table. There should not be more than 2 such movements. While in the processing state, we do allow flags for indicating processing-specific states, if any. In other words, the PROCESSOR_X table may track the state of its messages, since the number of currently processing messages will be significantly smaller. The reason for this is to avoid using one BIG table for everything. Question: Since we are moving messages around, I wonder how expensive this is at high volumes. Which of the following scenarios is better: (A) have all the separate, similar tables, as explained, and move complete message rows, e.g. read the complete row from INBOX, write it to the PROCESSOR table (with some additional columns), and delete it from INBOX; or (B) to avoid physically moving the content, have one big MESSAGES table that just stores the content (and still not the state). We would still have the other tables, as explained above, but they would contain just IDs pointing to messages, plus additional columns. So now, when a message is about to move, we physically move much less data - just the IDs. The rest of the message remains in the MESSAGES table, unmodified the whole time. In other words, is there a penalty in a SQL join between one smaller and one huge table? Thank you for your patience; I hope I was clear enough. |
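For scenario (A), the safe shape of one dispatch batch is a copy plus delete inside a single transaction (column names below are invented), so a failure can't leave a message in both tables or in neither. For scenario (B), a join between a small per-processor id table and the big MESSAGES table is normally cheap as long as it is done on the primary key, so the extra join is rarely the deciding factor.

START TRANSACTION;

INSERT INTO processor_1 (id, command, payload, received_at, state)
SELECT id, command, payload, received_at, 'NEW'
FROM inbox
WHERE worker_tag = 42;       -- rows previously marked by the UPDATE step

DELETE FROM inbox
WHERE worker_tag = 42;

COMMIT;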
SSIS Script to split string into columns Posted: 08 Sep 2013 04:19 PM PDT I have a dataset (log file) with a number of columns; one of them is "Other-Data" below (an unordered string), and I need to parse that string to create the derived columns according to the u value (U1, U2, U3, etc...). The output columns should be something like: Other-Data: Can anyone help with this? |
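The sample string didn't survive, so assume a delimited format like 'u1=...;u2=...;u3=...'. If a T-SQL source query (or one derived expression per key) is acceptable instead of an SSIS Script Component, one value can be pulled out per key like this (SQL Server 2008+ syntax; returns NULL when the key is absent):

DECLARE @s   varchar(4000) = 'u1=alpha;u2=beta;u3=gamma';  -- made-up sample row
DECLARE @key varchar(10)   = 'u2=';
DECLARE @start int = NULLIF(CHARINDEX(@key, @s), 0) + LEN(@key);

SELECT CASE WHEN @start IS NULL THEN NULL
            ELSE SUBSTRING(@s, @start, CHARINDEX(';', @s + ';', @start) - @start)
       END AS u2_value;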
Impact of changing the DB compatibility level for a Published replicated DB from 90 to 100 Posted: 08 Sep 2013 10:19 AM PDT I have a SQL Server 2008 R2 server with a bunch of published databases that are currently operating under compatibility level 90 (2005). The subscribers are also SQL Server 2008 R2; however, the destination databases are set to compatibility level 100 and replication is working fine. If I change the compatibility level for the published databases, will it affect replication in any way, or will it just be a case of reinitializing all the subscriptions and restarting replication? I suspect that changing the published database compatibility level may slightly change how the replication stored procedures function, but I'm not 100% sure. Is this the case? |
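For reference, the level itself is checked and changed as below (the database name is an example); since the real question is whether the replication agents cope with the change, trying it first against a test copy of the publication is the safer route.

SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'PublishedDb';

ALTER DATABASE [PublishedDb] SET COMPATIBILITY_LEVEL = 100;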
Is there a way to do data export from Amazon Web Services RDS for SQL Server to On-Premises SQL Server? Posted: 08 Sep 2013 06:19 AM PDT We have a large Amazon Web Services RDS for SQL Server instance and we would like to do incremental data transfers from RDS to an on-premises SQL Server on a regular basis. The on-prem server will be used to feed information into other systems on an acceptable 1-day delay. However, reading through the docs and searching on Google, forums, etc., we have not found a seamless way to do off-AWS data transfers using RDS for SQL Server. Built-in SQL Server features such as Change Data Capture (CDC) are turned off, as are Replication and off-site backup/restore services. Is there a way to do this or is it a limitation of using RDS? |