[how to] Doing a point in time restore with CDC enabled; Possible?
- Doing a point in time restore with CDC enabled; Possible?
- Mysql: update query with subquery
- Recover oracle 10g database from old data files
- MySQL innodb engine table and database backup and recovery
- Move SQL Server 2008 database from machine A to B with no downtime
- The remote copy of database “<DBNAME>” is not related to the local copy of the database
- Is there any way to use different InnoDB settings for different databases on the same server?
- Query to return fields of distinct values per key
- Selecting the highest value in a column?
- Oracle's UTL_FILE when connected locally or through listener has different behaviour
- E/R diagram for sports league database
- bulk insert not working in stored proc when using exec
- Composite Primary Key efficiency as a Foreign Key
- How to remove diacritics in computed persisted columns? COLLATE is non-deterministic and cannot be used
- Initializing Transactional Replication From two Different Database Backups
- Performance: order of tables in joined statement
- How to design database for polymorphic relationships
- MYSQL LOAD DATA INFILE taking a long time
- Unclear oracle problem with getting response of query
- Restrict range of dynamic ports available to SQL Server
- Probable circular dependency, but query still runs
- Calculating the median value in a MySQL table with CodeIgniter
- Access denied for user 'root'@'%'
- Parallel inserts cause deadlock on DB2
- Pgpool, Postgresql and Apache tuning (1000 concurrent spatial queries) [on hold]
- Can I move rows between partitions by updating the partition key?
- Tool to generate large datasets of test data
Doing a point in time restore with CDC enabled; Possible? Posted: 03 Oct 2013 08:42 PM PDT I discovered this week, the hard way, that the database restore options […] Doing research on MSDN etc., I cannot find any documentation on restoring a database with […] I was able to find one attempt that did the full and subsequent logs without the […] If the intent is to […]
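The excerpt above is truncated, but SQL Server does support this via the `KEEP_CDC` restore option (available from SQL Server 2008 R2 SP1 and SQL Server 2012). A rough sketch with placeholder names and paths — the exact placement of `KEEP_CDC` (it cannot be combined with `NORECOVERY`, so it belongs on the restore step that runs recovery) should be verified against your version's documentation:

```sql
-- Placeholder database name, paths, and timestamp. KEEP_CDC preserves
-- Change Data Capture metadata when restoring to another server or
-- under a different database name.
RESTORE DATABASE MyDb
    FROM DISK = N'C:\Backups\MyDb_full.bak'
    WITH NORECOVERY;

-- KEEP_CDC goes on the final restore, the one that brings the
-- database online:
RESTORE LOG MyDb
    FROM DISK = N'C:\Backups\MyDb_log1.trn'
    WITH STOPAT = N'2013-10-03T12:00:00', RECOVERY, KEEP_CDC;
```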
Mysql: update query with subquery Posted: 03 Oct 2013 08:25 PM PDT Hello, I need to perform the following query. The tables are as follows: Taxonomy(id, oldId, sourceId, name, language); TaxonomyMapping(id, oldId, sourceId, name, language), an exact copy of Taxonomy; CategoryMapping(en_ID, language_ID). What I'm trying to accomplish: the original Taxonomy table has categories in a certain language; translations of them exist, and the pairs are known in CategoryMapping. Now I need to add the English translation to the sourceId. Can somebody help me with this? At the moment it fills one id for all rows; I suspect that the oldId (ta.oldId = oldId) I use is not the oldId of the table being updated. Is it possible to do a query like this, or should I look for another solution?
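Since the actual query was lost from the excerpt, here is a sketch of the multi-table `UPDATE ... JOIN` form MySQL supports, using the table and column names from the question; the join conditions are assumptions about how the mapping tables relate, so adjust them to your data:

```sql
-- Assumed relationships: CategoryMapping links an English row (en_ID)
-- to a translated row (language_ID). Qualifying every column with its
-- table alias avoids the "same oldId for all rows" correlation bug
-- described in the question.
UPDATE Taxonomy AS ta
JOIN CategoryMapping AS cm ON cm.language_ID = ta.oldId
JOIN TaxonomyMapping AS en ON en.oldId = cm.en_ID
SET ta.sourceId = en.id;
```

The key point is that MySQL's multi-table `UPDATE` lets you avoid a correlated subquery entirely, which is usually both clearer and faster.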
Recover oracle 10g database from old data files Posted: 03 Oct 2013 09:16 PM PDT I have an Oracle 10g database. Unfortunately, our machine crashed and we lost all the backups and Oracle archive logs, but we have old image copies of our data files and one of the control files (made by copy & paste). How can I restore the database from these copies? I tried to create a new database and replace its files with the old ones, but I couldn't: I got a message that we need to open the database with RESETLOGS, but we can't open it without first performing incomplete recovery. Please help me.
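For what it's worth, restoring old image copies typically ends in an incomplete recovery followed by `OPEN RESETLOGS`, not a normal open. A rough SQL*Plus sketch (run as SYSDBA; file locations assumed to be already copied into place, and a DBA or Oracle Support should validate this against your exact situation before you touch the files):

```sql
-- All control file copies referenced by the spfile must be identical,
-- so duplicate the one surviving copy to every configured location first.
STARTUP MOUNT;

-- Incomplete recovery against a restored (old) control file:
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
-- apply whatever redo you still have, then answer CANCEL

ALTER DATABASE OPEN RESETLOGS;
```

Any changes made after those image copies were taken are unrecoverable without the archive logs.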
MySQL innodb engine table and database backup and recovery Posted: 03 Oct 2013 01:54 PM PDT I am using storage snapshots to back up the MySQL database for MyISAM tables. Can you guide me on how to back up and restore an InnoDB database and its tablespaces using storage snapshots?
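A common pattern, sketched below with no guarantees for your particular storage layer: briefly quiesce MySQL so the snapshot is consistent, take the snapshot, then release. InnoDB can usually crash-recover from a plain point-in-time snapshot on its own, but the lock also gives you a stable binary log position if replication matters:

```sql
-- Quiesce writes; all InnoDB data, log, and ibdata files must live
-- on the same snapshot volume for the copy to be consistent.
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;  -- record the binlog coordinates if needed

-- ... take the storage snapshot now, from the shell ...

UNLOCK TABLES;
```

Restore is the reverse: present the snapshot volume, start mysqld, and let InnoDB run its crash recovery.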
Move SQL Server 2008 database from machine A to B with no downtime Posted: 03 Oct 2013 05:45 PM PDT I've got a bit of a problem. I have a SQL Server database on box A with approximately 60 GB of data. I need to move this to a new machine, but I cannot have any downtime at all. New data arrives in the DB every second, so we can't simply do it at some random time in the middle of the night. Is there any way to achieve this? I am pretty much able to do whatever I want, so any suggestion will be useful.
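One commonly used approach on SQL Server 2008 is database mirroring (log shipping is the lower-tech cousin): keep the new machine in sync continuously, then fail over in seconds. The full setup (endpoints on both machines, a full plus log restore `WITH NORECOVERY` on the target) is omitted here; names are placeholders:

```sql
-- On the principal (box A), after the mirror copy on the new machine
-- has been restored WITH NORECOVERY and endpoints exist on both sides:
ALTER DATABASE MyDb
    SET PARTNER = N'TCP://newmachine.domain.local:5022';

-- Once the mirroring session reports SYNCHRONIZED, the actual move is
-- a manual failover that typically takes seconds, not hours:
ALTER DATABASE MyDb SET PARTNER FAILOVER;
```

The application's connection string still needs to be repointed (or use a failover partner in the connection string), which is the only brief interruption.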
The remote copy of database “&lt;DBNAME&gt;” is not related to the local copy of the database Posted: 03 Oct 2013 11:41 AM PDT I'm trying to join a secondary DB to my SQL Server 2012 Availability Group following this article: Join a Secondary Database to an Availability Group (SQL Server). When I attempt this, the local SQL Server Windows event log gives error 1407: […] I have followed all the steps and created a recent backup and restored it with NORECOVERY. The error log of the destination SQL Server gives these errors: […] The cluster and all its nodes are online. The AG dashboard does not give any additional information to help diagnose the problem. Opening all firewalls did not solve the issue. The Windows event log gives this line: […] However, the DB is left in Restoring mode when you restore it and specify the NORECOVERY flag, as per the tutorial guidelines. How can I join the DB to my availability group?
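A frequent cause of the "not related" error is a broken backup chain: the secondary needs the full backup *and* every transaction log backup taken since, restored in order, all `WITH NORECOVERY`, before the join succeeds. A sketch with assumed names:

```sql
-- On the secondary replica. The full backup was already restored
-- WITH NORECOVERY; now bring the log chain up to date the same way:
RESTORE LOG MyDb
    FROM DISK = N'C:\Backups\MyDb_log.trn'
    WITH NORECOVERY;

-- Then join the still-restoring database to the availability group:
ALTER DATABASE MyDb SET HADR AVAILABILITY GROUP = MyAG;
```

If any log backup was taken on the primary (including by a maintenance plan) after your copy, the LSN chains diverge and you get exactly this error; re-copying the newest log backups usually fixes it.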
Is there any way to use different InnoDB settings for different databases on the same server? Posted: 03 Oct 2013 12:28 PM PDT From what I know, this is not possible, but I'm surprised it's not a highly requested feature. I'd love to be able to have one DB using InnoDB in fully ACID-compliant mode and another DB on the same server, used for non-critical data, running InnoDB with more relaxed settings for disk writes. Is there a way to achieve this setup?
Query to return fields of distinct values per key Posted: 03 Oct 2013 08:35 PM PDT In the process of building a data warehouse we are creating some aggregation tables for a reporting engine. We would like to build a single aggregation table from the fact table that holds a list of all possible values of certain fields present in the fact table. I would like to create a table that has the following output structure: […] So that for each Org, the non-NULL DISTINCT list in each column represents the list of distinct values in the fact table for that column. I can't use a DISTINCT operator because that looks for distinct combinations of fields, not each field distinct on its own; the number of rows for each Org should be at most the number of distinct values for that Org. The database I'm using is SQL Server 2012, if it matters.
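The desired output structure was lost from the excerpt, but the usual technique for this shape of problem is: rank the distinct values of each column independently per Org, then align the ranks with a FULL JOIN so each column's list is packed from the top. `Fact(Org, FieldA, FieldB)` is an assumed shape; extend the pattern per column:

```sql
WITH a AS (
    SELECT DISTINCT Org, FieldA,
           DENSE_RANK() OVER (PARTITION BY Org ORDER BY FieldA) AS rn
    FROM Fact
    WHERE FieldA IS NOT NULL
),
b AS (
    SELECT DISTINCT Org, FieldB,
           DENSE_RANK() OVER (PARTITION BY Org ORDER BY FieldB) AS rn
    FROM Fact
    WHERE FieldB IS NOT NULL
)
-- One row per rank; each column holds its own n-th distinct value,
-- NULL once that column's distinct list runs out.
SELECT COALESCE(a.Org, b.Org) AS Org, a.FieldA, b.FieldB
FROM a
FULL OUTER JOIN b ON a.Org = b.Org AND a.rn = b.rn
ORDER BY Org, COALESCE(a.rn, b.rn);
```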
Selecting the highest value in a column? Posted: 03 Oct 2013 12:13 PM PDT I am dealing with a table that has 3 columns and I need a way to select, for every […] The […] I.e., for […]
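The details are truncated, but this reads like the classic greatest-value-per-group problem. A standard sketch with assumed names (`mytable(grp, val, other_col)`):

```sql
-- For every grp, return the full row(s) carrying that group's
-- maximum val: join back against the per-group maximum.
SELECT t.grp, t.val, t.other_col
FROM mytable AS t
JOIN (
    SELECT grp, MAX(val) AS max_val
    FROM mytable
    GROUP BY grp
) AS m
  ON m.grp = t.grp
 AND m.max_val = t.val;
```

Note that ties within a group return multiple rows; add a deterministic tiebreaker if exactly one row per group is required.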
Oracle's UTL_FILE when connected locally or through listener has different behaviour Posted: 03 Oct 2013 03:35 PM PDT We have a procedure which creates a CSV file with UTL_FILE (env is Oracle 11.2.0.1 and RHEL 6.4). Until now, we were storing that CSV file into system […] So we added the […] With this, logging into the system as […] And now, as far as I know (and Oracle's documentation says), this should work, but it doesn't... not at all. If I connect locally with sqlplus, it works. But if I connect from a remote machine or through the listener, it doesn't! I'll show you: Local connection with sqlplus: […] Now, let's try it again connecting through the listener. Let's see where my tnsnames is pointing: […] Now, if I change […] ???!! I don't understand this behaviour. As Oracle's documentation says: […]
The owner of the shadow process is the same in both methods (i.e. the ''oracle'' system user), so why this difference? Does anybody have a clue? Am I missing something? Regards!
E/R diagram for sports league database Posted: 03 Oct 2013 04:43 PM PDT I am attempting to create a data model for a sports league. There are two entities in my domain. The first entity is a Manager and the second is a Team. A Manager has […]
A Team has one Manager. If these are the design constraints, then how can we connect these two entities?
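The Manager side of the constraint is truncated above, but one way to encode "a Team has one Manager" in DDL is a `NOT NULL` foreign key; adding `UNIQUE` makes the relationship one-to-one (a manager runs at most one team). Names are illustrative:

```sql
CREATE TABLE Manager (
    ManagerID int PRIMARY KEY,
    Name      varchar(100) NOT NULL
);

CREATE TABLE Team (
    TeamID    int PRIMARY KEY,
    Name      varchar(100) NOT NULL,
    -- NOT NULL: every team has a manager.
    -- UNIQUE: drop this if one manager may run several teams.
    ManagerID int NOT NULL UNIQUE REFERENCES Manager(ManagerID)
);
```

In E/R terms this is a mandatory 1:1 (or 1:N without the UNIQUE) relationship from Team to Manager, with the foreign key placed on the "many"/dependent side.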
bulk insert not working in stored proc when using exec Posted: 03 Oct 2013 01:50 PM PDT We have a SQL Server 2005 database with a stored procedure that needs to do a bulk insert. The query in the stored procedure is defined as such: […] and when the stored procedure is called from the web app, nothing gets inserted; but if the stored procedure is changed to just run the bulk insert using a hard-coded SQL statement and is called from the web app: […] the data gets loaded. The developer needs to be able to use a dynamic version eventually, because the path and file will change and be passed from the web app as a parameter. What's even more confusing is that when you execute the stored procedure from a query window on the database server, the bulk insert works either way; it only has a problem when called from the web app.
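A likely culprit, offered as a hypothesis: dynamic SQL executes under the *caller's* security context rather than through ownership chaining, so the web app's login needs bulk-load rights (`ADMINISTER BULK OPERATIONS` or the `bulkadmin` role) plus read access to the file, while a sysadmin running it from SSMS sails through either way. A sketch with placeholder names; note that `EXECUTE AS` alone may not confer the server-level permission (certificate signing or granting rights to the app login may be required):

```sql
CREATE PROCEDURE dbo.LoadFile
    @path nvarchar(260)
WITH EXECUTE AS OWNER   -- caveat: server-level perms may still be needed
AS
BEGIN
    DECLARE @sql nvarchar(max);
    -- Validate @path against a whitelist before concatenating:
    -- a raw parameter in dynamic SQL is an injection vector.
    SET @sql = N'BULK INSERT dbo.TargetTable FROM ''' + @path + N'''
        WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'');';
    EXEC sys.sp_executesql @sql;
END
```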
Composite Primary Key efficiency as a Foreign Key Posted: 03 Oct 2013 07:32 AM PDT I have a table with a composite primary key (consisting of 4 columns) which is used to ensure no duplicates are entered into the table. I am now in need of a new table which will need to reference the keys in this table as foreign keys. My question is which approach is more efficient for lookup speed: 1) create the new table including all 4 columns and reference them all in a foreign key; or 2) create a new identity column in the primary-key table and use this as a foreign key in the new table. This database is expected to hold a very large amount of data, so I have built it up until now with a view to minimising the amount of data held in each table. With this in mind, option 2 would be the best approach, since I will save two int columns and a datetime column for every row, but I want to avoid increasing the lookup time unnecessarily.
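Option 2 is the conventional design: a narrow surrogate key for joins and foreign keys, with the four-column business key demoted to a unique constraint so duplicate prevention is kept. Illustrative names:

```sql
CREATE TABLE Parent (
    ParentID int IDENTITY(1,1) PRIMARY KEY,   -- narrow surrogate key
    ColA int      NOT NULL,
    ColB int      NOT NULL,
    ColC int      NOT NULL,
    ColD datetime NOT NULL,
    -- the original 4-column key survives as a uniqueness guarantee:
    CONSTRAINT UQ_Parent_Business UNIQUE (ColA, ColB, ColC, ColD)
);

CREATE TABLE Child (
    ChildID  int IDENTITY(1,1) PRIMARY KEY,
    ParentID int NOT NULL REFERENCES Parent(ParentID)  -- 4 bytes, not ~20
);
```

The narrower key also shrinks every index on `Child` that includes the foreign key, which usually matters more at scale than the extra 4 bytes in `Parent`.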
How to remove diacritics in computed persisted columns? COLLATE is non-deterministic and cannot be used Posted: 03 Oct 2013 09:59 AM PDT I have a function that includes: […] This is useful, for example, to remove accents in French; for example: […] gives […] But using […] Q1. Is there another (quick and easy) way to remove accents like this, with a deterministic function? Q2. (Bonus question) The reason I use this computed persisted column is 1) to search and 2) to detect duplicates. For example, the user may enter the customer's last name as either 'Gagne' or 'Gagné' or 'GAGNE' or 'GAGNÉ', and the app will find it using the persisted computed column (and the update/insert will balk at the duplicate). Is there a better way to do this? EDIT: Using SQL Server 2012 and SQL Azure.
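One deterministic, if brute-force, alternative sketch: an explicit `REPLACE` chain over the accented characters you actually need. `REPLACE` and `UPPER` are deterministic built-ins, so the column can be `PERSISTED` (and indexed). Table and column names are placeholders, and the character list below is deliberately incomplete:

```sql
-- Extend the REPLACE chain to cover every accent your data contains;
-- for French that is roughly é è ê ë à â ç î ï ô ù û ü.
ALTER TABLE dbo.Customer ADD LastNameSearch AS
    UPPER(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(LastName,
        N'é', N'e'), N'è', N'e'), N'ê', N'e'),
        N'ë', N'e'), N'à', N'a'))
    PERSISTED;

-- A unique index on the normalized column then rejects
-- 'Gagné' vs 'GAGNE' style duplicates at insert/update time:
CREATE UNIQUE INDEX UX_Customer_LastNameSearch
    ON dbo.Customer (LastNameSearch);
```

For Q2, a unique index on the normalized column (as above) is the usual way to make duplicate detection a constraint rather than application logic; whether the index should be unique depends on whether distinct customers may legitimately share a last name.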
Initializing Transactional Replication From Two Different Database Backups Posted: 03 Oct 2013 08:53 AM PDT I have two different large databases (200 GB each) whose data I want to replicate to a central database (a third DB). I followed this link and it was very useful for a single database. But my problem is that I have to restore two different backups to a single database. Is that possible at all?
Performance: order of tables in joined statement Posted: 03 Oct 2013 08:02 AM PDT I have the following SQL statement, running on a SQLite database on a Windows Mobile device: […] As there are several hundred thousand records in each table and the device isn't really new, this takes some time. Would the performance be better if I switched the tables, somehow like this: […] Please note: in the first statement I select from table1 and join table2; in the second, it is switched. Why, or why not, would it be faster?
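In general the SQLite query planner chooses the join order itself, so the written order mostly matters when it lacks useful indexes or statistics; it is worth inspecting the plan before hand-reordering. A generic sketch with assumed table and column names:

```sql
-- Shows the planner's chosen join order and index usage
-- without actually executing the query:
EXPLAIN QUERY PLAN
SELECT t1.id, t2.value
FROM table1 AS t1
JOIN table2 AS t2 ON t2.table1_id = t1.id
WHERE t1.some_column = 'x';

-- An index on the join column is usually the real fix on slow devices:
CREATE INDEX IF NOT EXISTS idx_table2_fk ON table2(table1_id);
```

If the plan shows a full scan of the inner table for every outer row, the missing index, not the textual table order, is almost certainly the bottleneck.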
How to design database for polymorphic relationships Posted: 03 Oct 2013 10:13 AM PDT I have the following scenario:
How can a model be made without any repetition, one which correctly addresses the polymorphic nature of the attributes?
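The scenario above is truncated, but the standard repetition-free answer is a supertype/subtype (shared primary key) design: one base table owns the identity, subtype tables hold type-specific columns, and polymorphic attachments reference the base table only. Names are illustrative:

```sql
-- Supertype: everything that can carry attributes gets one row here.
CREATE TABLE Entity (
    EntityID   int PRIMARY KEY,
    EntityType varchar(20) NOT NULL   -- e.g. 'product', 'article'
);

-- Subtypes share the supertype's key (1:1), no duplicated columns:
CREATE TABLE Product (
    EntityID int PRIMARY KEY REFERENCES Entity(EntityID)
    -- product-specific columns ...
);
CREATE TABLE Article (
    EntityID int PRIMARY KEY REFERENCES Entity(EntityID)
    -- article-specific columns ...
);

-- The "polymorphic" attachment needs only one real foreign key:
CREATE TABLE Attribute (
    AttributeID int PRIMARY KEY,
    EntityID    int NOT NULL REFERENCES Entity(EntityID),
    Name        varchar(50)  NOT NULL,
    Value       varchar(255)
);
```

Unlike the type-column-plus-unenforced-id pattern common in ORMs, every reference here is a genuine foreign key the database can enforce.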
MYSQL LOAD DATA INFILE taking a long time Posted: 03 Oct 2013 03:13 PM PDT I have a MySQL DB running on a Raspberry Pi. Under normal circumstances MySQL actually runs slightly quicker than it did on my much more powerful desktop. However, I am trying to insert 60 million records into the database using LOAD DATA INFILE. I tried it all in one go (a 1.2 GB file) and it was still trying to load the data 1.5 days later. So I tried loading in 100,000-row chunks, which was fine for the first 3 million records but soon started to grind to a halt. I then removed the indexes from the table, and it seems to run a bit quicker, but I noticed that for each 100,000 rows I insert, the time increases by about 20 seconds. What is strange is that when I did a database restore from my original desktop machine's database (an identical DB with 60 million rows in the main table), the restore only took about 1 hour. What is causing the slowdown for LOAD DATA INFILE? I should point out that I am using InnoDB. EDIT: I reduced the chunks to 1,000 records and left it running, which did appear to speed things up: after about 1 hour it had inserted 24 million records, though each insert of 1,000 was taking about 30 seconds. However, I then decided to stop it and restarted the Raspberry Pi. When I ran the import again, lo and behold, the initial inserts were back to less than one second. So my question is: do I need to clear a cache or something? MySQL appears to be getting bogged down, rather than LOAD DATA INFILE itself being slow. It is almost as if it is filling up memory and not releasing it, or something much more technical to do with MySQL.
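Typical session-level mitigations for InnoDB bulk loads are sketched below (file path and table name assumed). On a Raspberry Pi the dominant factors are usually an undersized `innodb_buffer_pool_size` and slow SD-card fsyncs; setting `innodb_flush_log_at_trx_commit = 2` for the duration of the load is a common, slightly durability-relaxed speedup that mirrors why the dump restore was so much faster:

```sql
-- Per-session toggles; restore them after the load completes.
SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;

LOAD DATA INFILE '/path/to/chunk.csv'
INTO TABLE big_table
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';

COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;
```

Loading into an empty table in primary-key order and adding secondary indexes afterwards (which you partially did) avoids the per-row B-tree maintenance that makes each successive chunk slower.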
Unclear oracle problem with getting response of query Posted: 03 Oct 2013 08:21 AM PDT The problem is very unclear. I have an application server that constructs queries and sends them to Oracle to get a response. The Oracle database is installed on a high-performance server: […] The application server was tested in different locations inside the intranet, including the high-performance server where the database is located. Several versions of the application server (from old to newest) were tested as well. The problem: when the application server sends a query to the database, it has to wait about 300 seconds to get a response and construct its logged output. But when I connect to the database with PL/SQL Developer and run that query from the IDE, I get a response in less than one second. Does this sound familiar to anyone? We've been racking our brains trying to determine where the problem could be. If you need more information, I'll update this post immediately.
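This "fast in the IDE, slow from the app" pattern has two usual suspects worth ruling out first: PL/SQL Developer fetches only the first screenful of rows (so the IDE timing can be misleading), and the app may get a different execution plan due to bind variables or different session/optimizer settings. A first diagnostic sketch, with an assumed text fragment to find the statement:

```sql
-- Compare what the two clients actually executed: if the app and the
-- IDE show different plan_hash_value for the same statement text,
-- the problem is plan-related (bind peeking, session settings), not
-- network-related.
SELECT sql_id,
       child_number,
       plan_hash_value,
       ROUND(elapsed_time / NULLIF(executions, 0)) AS avg_elapsed_us
FROM v$sql
WHERE sql_text LIKE '%distinctive_fragment_of_the_query%';
```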
Restrict range of dynamic ports available to SQL Server Posted: 03 Oct 2013 10:38 AM PDT Is there a way to limit the range of dynamic ports available to SQL Server from the database side, or database server side? Our SOP is to use static ports across the network firewall and a vendor is having trouble locking down their ports. Theoretically, if we allowed a range of 1000 ports inside the dynamic range (49152–65535) across the firewall how would I limit SQL Server to only assign a dynamic port inside that range? |
Probable circular dependency, but query still runs Posted: 03 Oct 2013 08:27 PM PDT I built this ER diagram in MySQL Workbench and it will run and generate the database. My concern is that the relationship between the […] Because I know people will ask: this is something I was working on in a class. I'm not asking for help with homework; this is something we were working on in class, and I took a different route than the professor. I'm asking for clarification about database modelling theory.
Calculating the median value in a MySQL table with CodeIgniter Posted: 03 Oct 2013 05:25 PM PDT This is my model file in CodeIgniter: […] I need to calculate the median for this variable: […]
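The model code is missing from the excerpt, but the classic user-variable median query for (pre-window-function) MySQL looks like the sketch below; `scores(value)` is an assumed table, and in CodeIgniter it can be run verbatim via `$this->db->query(...)`:

```sql
-- Number the rows in sorted order, then average the middle row
-- (odd count) or the two middle rows (even count).
SELECT AVG(t.value) AS median
FROM (
    SELECT @rn := @rn + 1 AS rn, s.value
    FROM scores AS s, (SELECT @rn := 0) AS init
    ORDER BY s.value
) AS t
JOIN (SELECT COUNT(*) AS c FROM scores) AS n
  ON t.rn IN (FLOOR((n.c + 1) / 2), CEILING((n.c + 1) / 2));
```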
Access denied for user 'root'@'%' Posted: 03 Oct 2013 08:56 PM PDT I used to be able to access the root user in MySQL just fine. But recently, I am no longer able to. I am able to log in fine: […] Here is the mysql status after login: […] But when I want to do any action, such as: […] I understand […]
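Since login succeeds but individual actions fail, this is probably a privilege problem rather than an authentication one: `'root'@'%'` exists but holds fewer grants than `'root'@'localhost'`. A repair sketch, run from a session that still has `GRANT OPTION` (typically `root@localhost`; the password is a placeholder):

```sql
-- First see what the failing account actually has:
SHOW GRANTS FOR 'root'@'%';

-- Then re-grant as needed (MySQL 5.x era syntax):
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%'
    IDENTIFIED BY 'new_password'
    WITH GRANT OPTION;
FLUSH PRIVILEGES;
```

Exposing root over `'%'` is itself risky; a dedicated admin account restricted to known hosts is the safer long-term fix.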
Parallel inserts cause deadlock on DB2 Posted: 03 Oct 2013 07:38 AM PDT The scenario is the following: the application attempts to insert into the same table from two parallel threads, making 4,000 insertions in each thread (one transaction per thread). It always causes the DB to fail with the following exception:
The full log is: […] It looks like the lock space is exhausted much earlier than the transaction log is. Solutions which I see (and which seem to work):
Neither of them is strictly acceptable because:
The test application works OK on MySQL, HSQL and MSSQL as-is (with no further tricks), so I believe there should be a way to make it work on DB2 without code refactoring, while staying at an SQL-92-compliant level. Is this possible with enterprise-level DB2? Perhaps I'm missing something trivial; any feedback is welcome.
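If the failure is lock-list exhaustion (row locks escalating to table locks, which then deadlock the two writers), enlarging the lock list from the DB2 command line is the usual first step; the values below are illustrative, not recommendations:

```sql
-- Run via the db2 CLP against the database in question.
-- LOCKLIST is in 4 KB pages; MAXLOCKS is the percentage one
-- application may consume before escalation kicks in.
UPDATE DB CFG FOR mydb USING LOCKLIST 8192 MAXLOCKS 60;
```

This keeps the application code and isolation level untouched; committing in smaller batches would achieve the same bounded lock footprint from the application side, at the cost of the refactoring you want to avoid.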
Pgpool, Postgresql and Apache tuning (1000 concurrent spatial queries) [on hold] Posted: 03 Oct 2013 09:25 AM PDT I'm trying to configure a load-balancing system. I have a Python script, invoked through mod_wsgi on Apache, that generates a query and executes it on pgpool: request -> wsgi python -> pgpool -> postgresql. Pgpool is configured as a load balancer using 4 servers with 24 GB RAM and 350 GB SSDs. Our DB is about 150 GB and a query takes about 2 seconds. These are the configurations: Pgpool
Apache (mpm_prefork)
PostgreSQL
It doesn't seem to work. When I try to submit more than 150 concurrent queries, although the pgpool log file doesn't show any errors, I get this error from the Python script:
Any ideas? |
Can I move rows between partitions by updating the partition key? Posted: 03 Oct 2013 09:56 AM PDT I would think that this would be a fairly simple question, but I've actually had a difficult time finding an answer. The question: can you move rows of data within a partitioned table from one partition to another by simply updating the partition column so that it crosses the partition boundary? For example, if I have a table that has a partition key: […] with the partition function that maps to the primary key: […] can I move a row from the first partition to the third partition by changing the SampleID from 1 to (say) 500,000? Note: I'm tagging this as both SQL Server 2005 and 2008, since they both support partitioning. Do they handle it differently?
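The short answer is yes, on both 2005 and 2008: updating the partitioning column relocates the row (internally a delete from one partition plus an insert into the other, within the same statement). A quick way to verify, using the names from the question; the partition function name `pfSampleRange` is a made-up placeholder:

```sql
UPDATE dbo.SampleTable
SET SampleID = 500000
WHERE SampleID = 1;

-- $PARTITION applies the named partition function to a value and
-- returns the partition number the row now maps to:
SELECT SampleID,
       $PARTITION.pfSampleRange(SampleID) AS partition_number
FROM dbo.SampleTable
WHERE SampleID = 500000;
```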
Tool to generate large datasets of test data Posted: 03 Oct 2013 07:28 AM PDT Many times, when trying to come up with an efficient database design, the best course of action is to build two sample databases, fill them with data, and run some queries against them to see which one performs better. Is there a tool that will generate large (~10,000-record) sets of test data relatively quickly, ideally straight into the database? I'm looking for something that at least works with MySQL.
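Dedicated generators exist, but for a quick 10,000-row fill, plain SQL does the job by cross-joining a digits table; MySQL flavour, with `test_data(id, payload)` as an assumed target:

```sql
CREATE TABLE digits (d int NOT NULL);
INSERT INTO digits VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);

-- A 4-way cross join yields 10^4 = 10,000 rows; add another join
-- per extra order of magnitude.
INSERT INTO test_data (id, payload)
SELECT n.id, CONCAT('row-', n.id)
FROM (
    SELECT a.d + b.d*10 + c.d*100 + e.d*1000 + 1 AS id
    FROM digits a, digits b, digits c, digits e
) AS n;
```

Wrapping the generated number in functions like `MD5()` or `RAND()` gives varied payloads when uniform strings would skew index statistics.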
You are subscribed to email updates from Recent Questions - Database Administrators Stack Exchange. To stop receiving these emails, you may unsubscribe now. Email delivery powered by Google.