[how to] Altering the location of Oracle-Suggested Backup
- Altering the location of Oracle-Suggested Backup
- How do I safely import MySQL data in a directory to a running MariaDB server?
- MySQL Replication - Used to work, now get error 2026
- Creating a database redundancy based on Mysql
- Database update rather than cycle through?
- How do I get the aggregate of a window function in Postgres?
- mysql works, mysqldump doesn't
- Optimizing PostgreSQL for transient data
- Which one is more efficient: select from linked server or insert into linked server?
- Count Most Recent Browser Used By All Users
- compilation error while referencing pl/sql variable in FOR loop of a procedure
- mongostat causing CPU 100%
- How do I get Pentaho Reporting/Kettle to produce PDF reports by business entity?
- Relational Database. Data Input. SQL/PHP [on hold]
- Installing Full Text Search to SQL Server 2008 R2
- multi master replication
- SQL server windows authentication through network
- How to merge two rows in a table? [on hold]
- linked list in SQL
- Most efficient ordering post database design
- How to do incremental/differential backup every hour in Postgres 9.1?
- Deploying to SSIS catalog and get permissions error
- InnoDB Failure of some kind
- MySQL is running but not working
- Mistake during Oracle 11g PITR
- How can I query data from a linked server, and pass it parameters to filter by?
- DBCC CHECKDB Notification
- Problem with order by in MySQL subquery
- How to check growth of database in mysql?
Altering the location of Oracle-Suggested Backup Posted: 12 Jul 2013 08:29 PM PDT On one database, the Oracle-Suggested Backup scheduled from Enterprise Manager always ends up in the recovery area, despite the RMAN configuration showing that the device type disk format points elsewhere. As far as I can see, the scheduled backup job simply asks RMAN to back up the database. If I run the script manually, the backupset is placed at the location given in the RMAN configuration; when the script is run from the job scheduler, the backupset goes to the RECO group on ASM. Why might Oracle still choose to dump the backupset to the recovery area, and ultimately, how can I change the backup destination?
How do I safely import MySQL data in a directory to a running MariaDB server? Posted: 12 Jul 2013 08:15 PM PDT We're integrating another office into ours, and that includes some of their data. I have a newly installed MariaDB 5.5 server with its data dir at /home/data/MariaDB; I've done a fresh install into that directory and the server is up and running. I also inherited some data from a now-defunct MySQL server, which sits in its own directory. I want to move the data for three DBs -- "proj_DB_2136/", "proj_DB_4216/", and "proj_DB_6217/" -- into my working MariaDB server: same names, same data, everything. I only need to do this once, so I can start working on the old data on my new server. I'd like to do it the right way, without losing anything!
What's the correct way to get this data safely brought over?
MySQL Replication - Used to work, now get error 2026 Posted: 12 Jul 2013 07:00 PM PDT I've had replication running for about 2 years, with 1 master and 2 slaves, both connecting over SSL. I'm using MySQL 5.5.15 on the master (CentOS) and MySQL 5.14.46 on the slaves (on a Windows machine - don't ask). Last week both slaves stopped replicating off the master, giving me error code 2026 and saying that they can't connect to the master. Looking in the error logs, both stopped being able to connect to the server at the same time - 16:46 in the afternoon. I was here, I'm the only system admin, and I wasn't fiddling with anything on the server; in fact, it has run smoothly for a long time. The SSL certificates appear to still be set up correctly on the master. Has anybody else had a problem like this?
Creating a database redundancy based on Mysql Posted: 12 Jul 2013 06:13 PM PDT My web application runs on Tomcat behind a load balancer, and I am planning to deploy it on multiple servers. The webapp needs a MySQL database for managing profiles and similar data. Right now I can only have a single master database, and I want all my front-end servers to talk to it; obviously that makes it my single point of failure, hence the need for redundancy/dynamic failover. Can you please guide me on this? What makes my requirement a little more complicated - and what I couldn't find in the available articles - is how to set up a connection to such a cluster. Below is how the datasource in server.xml is configured in my environment when it talks to a single DB: removeAbandonedTimeout="60" logAbandoned="true" url="jdbc:mysql://localhost:53306/master?autoReconnect=true" /> The only thing I can replace is the URL, but how is the question - and what should localhost point to? I'd really appreciate any response/suggestions here; please feel free to ask for any information you feel I haven't supplied. Suggesting an alternative approach to my problem is equally welcome, but please note I can't change the webapp as it is a 3rd-party application.
Database update rather than cycle through? Posted: 12 Jul 2013 06:32 PM PDT I'm new when it comes to MySQLi and databases, so I'm not sure how exactly to title this question. I have a PHP script set up so that it inserts data into the database. Pretty self-explanatory. However, I noticed that the database runs through each row instead of just inserting the data - possibly to check for duplicates? Wouldn't it be much faster not to have it query each row and just insert the data? The data I'm inserting will never be duplicated anyway unless I force it. How can I get it to insert the data without running through each value in the row? My apologies if this is off-topic.
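For illustration only - a minimal MySQL sketch contrasting the two patterns the question hints at; the table and column names are assumptions, not from the original post. A plain INSERT does not read existing rows at all, whereas a duplicate check (in PHP or via a prior SELECT) is what causes the row-by-row work.

```sql
-- Hypothetical table standing in for the one described in the question.
CREATE TABLE log_entries (
    id      INT AUTO_INCREMENT PRIMARY KEY,
    user_id INT NOT NULL,
    payload VARCHAR(255) NOT NULL
);

-- Pattern A: check-then-insert reads existing rows before every write.
SELECT COUNT(*) FROM log_entries WHERE user_id = 42 AND payload = 'abc';
-- ...and only insert if the count came back as 0.

-- Pattern B: a plain INSERT touches no existing rows at all.
INSERT INTO log_entries (user_id, payload) VALUES (42, 'abc');
```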
How do I get the aggregate of a window function in Postgres? Posted: 12 Jul 2013 08:59 PM PDT I have a table containing two columns of permutations/combinations of integer arrays, and a third column containing a value. I want to find out the average and standard deviation for each permutation, as well as for each combination. I can do that with a plain GROUP BY query. However, that query can get pretty slow when I have a lot of data, because the "foo" table (which in reality consists of 14 partitions, each with roughly 4 million rows) needs to be scanned twice. Recently, I learned that Postgres supports "window functions", which are basically like a GROUP BY for a particular column. I modified my query to use these. While this works for the "combo_count" column, the "combo_average_value" and "combo_stddev" columns are no longer accurate. It appears that the average is being taken for each permutation, and then being averaged a second time for each combination, which is incorrect. How can I fix this? Can window functions even be used as an optimization here?
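A hedged Postgres sketch of the mixed GROUP BY/window approach described above; the schema foo(perm, combo, value) is an assumption, since the original queries were stripped from the digest. It illustrates why the combination figures come out wrong: window functions are evaluated after GROUP BY, so the window aggregate averages the per-permutation averages instead of the underlying rows.

```sql
-- Assumed schema: foo(perm int[], combo int[], value numeric).
SELECT perm,
       combo,
       avg(value)    AS perm_average_value,
       stddev(value) AS perm_stddev,
       -- This averages the per-(perm, combo) averages within each combo,
       -- unweighted by group size, which is the inaccuracy described above.
       avg(avg(value)) OVER (PARTITION BY combo) AS combo_average_value
FROM   foo
GROUP  BY perm, combo;
```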
mysql works, mysqldump doesn't Posted: 12 Jul 2013 04:23 PM PDT I have MySQL 5.5 on my Ubuntu 12.04 server. The mysql client command works perfectly, but mysqldump gives me an error referring to /var/run/mysqld/mysqld.sock. In my.cnf, the socket is set to a different path, so I have no idea where /var/run/mysqld/mysqld.sock is coming from. Thanks.
Optimizing PostgreSQL for transient data Posted: 12 Jul 2013 08:58 PM PDT I have several tables, each with 100-300 columns of integer types, that hold highly volatile data. The datasets are keyed by one or two primary keys, and when a refresh occurs, the whole dataset is deleted and new data is inserted in one transaction. Dataset size is usually a few hundred rows, but can be up to several thousand rows in extreme cases. Refresh occurs once per second, and dataset updates for different keys are usually disjoint, so dropping and recreating the table is not feasible. How do I tune Postgres to handle such a load? I can use the latest and greatest version if that makes any difference.
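As an illustration of the workload being described (not a tuning recommendation), here is a hedged SQL sketch of the per-key refresh cycle; the table and column names are assumptions.

```sql
-- Assumed shape: a wide table of integer columns keyed by dataset_key.
-- Each refresh replaces one key's rows in a single transaction, roughly once per second.
BEGIN;
DELETE FROM volatile_data WHERE dataset_key = 42;
INSERT INTO volatile_data (dataset_key, col_001, col_002)  -- ...and so on, up to ~300 columns
VALUES (42, 1, 2),
       (42, 3, 4);                                          -- usually a few hundred rows per key
COMMIT;
```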
Which one is more efficient: select from linked server or insert into linked server? Posted: 12 Jul 2013 08:17 PM PDT Suppose I have to export data from one server to another (through linked servers). Which statement will be more efficient: executing the insert on the source server (pushing rows into the linked target), or executing it on the target server (pulling rows from the linked source)? Which one will be faster and consume fewer resources in total (on both the source and target server)? Both servers are SQL Server 2005.
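A hedged T-SQL reconstruction of the two statements being compared; the server, database, and table names are assumptions, since the original code blocks were not included in the digest.

```sql
-- Option 1: run on the source server, pushing rows to the linked target.
INSERT INTO [TargetServer].TargetDB.dbo.DestTable (Col1, Col2)
SELECT Col1, Col2
FROM   dbo.SourceTable;

-- Option 2: run on the target server, pulling rows from the linked source.
INSERT INTO dbo.DestTable (Col1, Col2)
SELECT Col1, Col2
FROM   [SourceServer].SourceDB.dbo.SourceTable;
```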
Count Most Recent Browser Used By All Users Posted: 12 Jul 2013 03:43 PM PDT I have a table with columns including a user ID, a Browser_ID, and a timestamp. My current count query appears to work fine, but it includes a lot of old browsers that I know are no longer being used. How would I select only the most recent browser used by each user, based on the timestamp (or on Browser_ID, which could also work since they are written sequentially)? I also intend to ignore data from users who have not logged in during the past year, but that is simple to add later (I only mention it so that no one asks why I would want old browser data for inactive users - I do plan on addressing that as well, but I already know how to do it). If anyone is up for a challenge: some users log in from a couple of different computers - or perhaps from their phone and from their desk. It would be great if I could determine that they frequently log in from two different places and collect that data as well, but this might be too complicated. If anyone figures out how to add this complicated step, I'll gladly award a 50 point bounty to the answer.
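A minimal sketch of one way to count only each user's latest browser, assuming a hypothetical table logins(User_ID, Browser_ID, Login_Time) and a database with window-function support; every name here is an assumption, since the real schema and query were not included.

```sql
SELECT Browser_ID,
       COUNT(*) AS users_on_this_browser
FROM (
    -- Keep only the most recent login row per user.
    SELECT User_ID,
           Browser_ID,
           ROW_NUMBER() OVER (PARTITION BY User_ID ORDER BY Login_Time DESC) AS rn
    FROM   logins
) AS latest
WHERE  rn = 1
GROUP  BY Browser_ID;
```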
compilation error while referencing pl/sql variable in FOR loop of a procedure Posted: 12 Jul 2013 02:32 PM PDT I have written a PL/SQL procedure to get the output into a file for each sale_location, but I am getting compilation errors when referencing the loop variable. Can anyone please help? Thanks.
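For reference, a minimal PL/SQL sketch of referencing a cursor FOR-loop record correctly; the procedure, table, and column names other than sale_location are assumptions, since the original script and error text were not included.

```sql
CREATE OR REPLACE PROCEDURE report_by_location IS
BEGIN
  -- "loc" exists only inside the LOOP ... END LOOP block, and its columns
  -- must be referenced as record fields (loc.sale_location), not bare names.
  FOR loc IN (SELECT DISTINCT sale_location FROM sales) LOOP
    DBMS_OUTPUT.PUT_LINE('Processing location: ' || loc.sale_location);
  END LOOP;
END report_by_location;
/
```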
mongostat causing CPU 100% Posted: 12 Jul 2013 01:39 PM PDT On a 64-bit Debian 6 OS with 4 CPU cores and 8GB RAM, I can reproduce an issue with mongostat. Whenever I create an index, reIndex, or even build an index in the background, mongostat hits the problem: nothing appears after that, only CPU load at the limit until Ctrl + C stops mongostat. A few seconds later, CPU load scales down to ~50% and all is fine again. Is MongoDB expecting more CPUs, or what's wrong?
How do I get Pentaho Reporting/Kettle to produce PDF reports by business entity? Posted: 12 Jul 2013 12:42 PM PDT We use Kettle/PDI to handle usage records from legacy systems and feed them to an analysis database/DWH; from this legacy data we also produce, on a monthly basis, a report of each customer's activity and a backend commission report. Right now I compile this into one enormous PDF using Pentaho Report Designer, then print it and hand it to the Finance gals, who send checks out to the customer for the amount on the statement I've made. They hand-collate all this. Obviously we would like to put these online, since we already have a file area for each customer. What I need is for Pentaho to produce, instead of a new page, a new PDF file for each business, and then name it with the business ID number and month, or something equally meaningful. Is this possible? We have experimented with splitting up the PDF, but it takes someone several hours to process and is not pleasant work at all. It seems this should be possible, but I do not know enough of the intricacies of Pentaho to make it work.
Relational Database. Data Input. SQL/PHP [on hold] Posted: 12 Jul 2013 12:12 PM PDT I am starting to explore database design using PHP and SQL, just looking to build a high-level understanding as I read through the details of SQL, PHP, and RDBMSs. Example: two tables, CUSTOMER and ADDRESS; customerid and addressid are the respective auto-incrementing primary keys, and addressid is a foreign key in the CUSTOMER table. Goal: customers use a web form to populate the database with their customer and address details. I split addresses into a separate table in order to avoid data duplication. If a customer filling out the web form has an address that is already stored, how would I go about creating a lookup to select and link the new customer details to the address already there? As it stands, entering the address again would create a new address record, distinguished from existing records only by the automatically created new addressid. I appreciate this question is basic and probably similar to others somewhere on the web; any pointers towards useful articles/information sources would be much appreciated! Kind regards
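A minimal sketch of the lookup-then-link flow described above, using MySQL syntax; the address column names and sample values are assumptions, not from the question.

```sql
-- 1. Look for an existing address matching the form input.
SELECT addressid
FROM   ADDRESS
WHERE  line1 = '1 High Street' AND city = 'Springfield' AND postcode = 'AB1 2CD';

-- 2a. If a row came back, link the new customer to that existing addressid.
INSERT INTO CUSTOMER (name, addressid) VALUES ('Jane Doe', 17);

-- 2b. Otherwise insert the address first, then the customer with the new key.
INSERT INTO ADDRESS (line1, city, postcode) VALUES ('1 High Street', 'Springfield', 'AB1 2CD');
INSERT INTO CUSTOMER (name, addressid) VALUES ('Jane Doe', LAST_INSERT_ID());
```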
Installing Full Text Search to SQL Server 2008 R2 Posted: 12 Jul 2013 01:03 PM PDT I installed SQL Server 2008 R2 Express successfully, then realised I need Full Text Search. So I downloaded the "Advanced Services Installation Package", but when I run it there is no option in the "Feature Selection" part for Full Text Search. Please don't tell me I have to uninstall and reinstall?
multi master replication Posted: 12 Jul 2013 02:22 PM PDT Let us say I have master (m) to slave (s) replication set up. Now I introduce another database/schema (foo), not associated with the master in any way, and I want this schema to be replicated onto the slave(s). Can I do this? AFAIK, this cannot be done. What is the best way to pull it off? The reason I want to do this is that I want to join tables from foo with tables on s/m. The replication need not happen in real time; it can be a daily cron job too. Is mysqldump the way to go? Is there some hack that I can pull off?
SQL server windows authentication through network Posted: 12 Jul 2013 12:44 PM PDT I am using SQL Server 2008 R2. There are about 50 PCs in my office connected on a network. On one SQL Server 2008 R2 Express instance (installed on Windows 8) I created Windows accounts for all users, with the same names as on their own PCs, and then created Windows-authentication logins in SQL Server. Now all the users are able to connect to SQL Server using Windows authentication. I am trying to do the same for another SQL Server 2008 R2 Express instance, installed on Windows XP SP3, but it is not working: when I try to connect to SQL Server using Windows authentication from a networked PC, a message comes up like "Login failed for user 'PC91\Guest'." It is not recognizing the Windows account of my PC. Why does this happen? Please tell me a solution.
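For reference, a minimal T-SQL sketch of the kind of Windows-authenticated login the question describes creating; the machine and account names are made up.

```sql
-- Create a login for a local Windows account on the database server,
-- then map it to a database user so it can connect.
CREATE LOGIN [DBSERVER01\alice] FROM WINDOWS;
CREATE USER  [DBSERVER01\alice] FOR LOGIN [DBSERVER01\alice];
```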
How to merge two rows in a table? [on hold] Posted: 12 Jul 2013 11:12 AM PDT I have a table in which two rows refer to the same thing and need to be merged. What I want is to replicate all the values from the other columns of one row onto the other row. What is the easiest way to do this?
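A minimal sketch, assuming a hypothetical Postgres-style table t(id, col_a, col_b) and that row id = 2 should be folded into row id = 1; none of these names come from the question, whose table and column names were not included.

```sql
-- Copy the values from the row being merged away into the surviving row...
UPDATE t AS dst
SET    col_a = src.col_a,
       col_b = src.col_b
FROM   t AS src
WHERE  dst.id = 1
AND    src.id = 2;

-- ...then remove the now-redundant row.
DELETE FROM t WHERE id = 2;
```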
linked list in SQL Posted: 12 Jul 2013 10:48 AM PDT Though SQL is geared more toward table-like operations and not so much toward recursive ones, say we would like to implement the linked (or doubly linked) list concept, like we have, for instance, in C. Though I tagged SQL Server, this is an academic question, so a solution in any other system is also good - even if we just reach the conclusion that this is something that should never be brought into the database.
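A minimal T-SQL sketch of one common way to model and walk a singly linked list; the table, column names, and sample head id are assumptions.

```sql
-- Each node points at the next one; NULL marks the tail.
CREATE TABLE list_node (
    id      INT PRIMARY KEY,
    value   VARCHAR(50) NOT NULL,
    next_id INT NULL REFERENCES list_node (id)
);

-- Walk the list from an assumed head node (id = 1) with a recursive CTE.
WITH walk AS (
    SELECT id, value, next_id, 1 AS position
    FROM   list_node
    WHERE  id = 1
    UNION ALL
    SELECT n.id, n.value, n.next_id, w.position + 1
    FROM   list_node AS n
    JOIN   walk AS w ON n.id = w.next_id
)
SELECT id, value, position
FROM   walk
ORDER  BY position;
```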
Most efficient ordering post database design Posted: 12 Jul 2013 12:06 PM PDT I have a posts table that has a post_order column in which I store the order of each post. When I change the order of a row from 25 to 15, I have to update every row from 15 onward. That's fine for a few rows, but with thousands of rows it's awful. Is there a better design for ordering posts that is more efficient?
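For illustration, a hedged MySQL sketch of the reordering update being described (names other than posts and post_order are assumptions), showing why a single move touches many rows.

```sql
-- Moving the post currently at position 25 up to position 15:
-- every post between the two positions has to be shifted down by one.
UPDATE posts
SET    post_order = post_order + 1
WHERE  post_order >= 15
AND    post_order <  25;

-- Then place the moved post (the id is hypothetical) into its new slot.
UPDATE posts
SET    post_order = 15
WHERE  id = 123;
```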
How to do incremental/differential backup every hour in Postgres 9.1? Posted: 12 Jul 2013 09:05 PM PDT I am trying to do an hourly hot incremental backup of a single Postgres server to S3. I have WAL archiving configured in postgresql.conf, and I took a base backup, which produced a big base.tar file in the archive folder and added some files with long names, which I assume are the WALs.
Deploying to SSIS catalog and get permissions error Posted: 12 Jul 2013 12:58 PM PDT When I attempt to deploy to the SSIS 2012 catalog on a new server, I get a permissions error. I have researched it on the web for several hours, and none of the information available online fixes my issue.
InnoDB Failure of some kind Posted: 12 Jul 2013 01:56 PM PDT I have MySQL 5.5 installed. I tried to install Joomla, but it failed. I went into its SQL and replaced ENGINE=InnoDB with MyISAM, and the install worked. InnoDB is listed under SHOW ENGINES;. Any idea what the cause is, or how to fix this so other InnoDB-based sites can be used? These are the errors I had:
MySQL is running but not working Posted: 12 Jul 2013 03:56 PM PDT In an attempt to tune MySQL to make it work with a recent installation of Drupal, I had to modify the MySQL settings on my server. After modifying the configuration file (/etc/my.cnf), MySQL stopped working. After some attempts I made it start again, but now none of my PHP/MySQL websites are able to connect to their databases. Here is why it is so confusing:
Almost all of my websites that use MySQL report one connection error, and another reports a different one. This is my current my.cnf; I commented most of it out to return it to its simplest version. How can I make the web side connect to MySQL?
Mistake during Oracle 11g PITR Posted: 12 Jul 2013 10:56 AM PDT I tried using SET UNTIL TIME and mistyped the date. Can anyone help me understand how to get my backups back into a manageable state? After the accidental recovery, most of my backupsets disappeared. I recovered them and used 'catalog recovery area', and they're listed in 'list backupset'. But something still isn't right: when I do a PITR now, I get messages that my dbf files aren't available, and while 'list backupset' seems to show backupsets, they are listed differently than the files which weren't included in the 'bad' recovery. Gists with the error and the list of backupsets are here: https://gist.github.com/akinsgre/5561254
How can I query data from a linked server, and pass it parameters to filter by? Posted: 12 Jul 2013 04:00 PM PDT I have a really big query that needs to be run against multiple databases, with the results appended to a temp table and returned. The query runs quickly if run locally on the individual servers, but it takes a long time when run from a linked server using 4-part names. The problem appears to be that it queries the linked server for the unfiltered result set first, and then joins it to the local table afterwards. If I hardcode the Ids to filter the result set on the linked server, it runs in just a few seconds. Is there a way to run this query so that the result set from the linked server is filtered by the Ids I have locally?
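A hedged T-SQL reconstruction of the pattern described above; the server, database, table, and column names are assumptions, since the original query was not included in the digest.

```sql
-- Four-part-name form: the remote table is pulled unfiltered, then joined locally.
INSERT INTO #Results (Id, Amount)
SELECT r.Id, r.Amount
FROM   [LinkedServer].RemoteDB.dbo.BigTable AS r
JOIN   #FilterIds AS f ON f.Id = r.Id;

-- Hardcoded-filter form that runs fast, per the question: the WHERE clause
-- can be evaluated on the remote side.
INSERT INTO #Results (Id, Amount)
SELECT r.Id, r.Amount
FROM   [LinkedServer].RemoteDB.dbo.BigTable AS r
WHERE  r.Id IN (1, 2, 3);
```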
DBCC CHECKDB Notification Posted: 12 Jul 2013 12:28 PM PDT There are plenty of questions on DBA.SE regarding DBCC CHECKDB. I came across this article by Cindy Gross, which has some very good notes. In it she mentions using a SQL Server Agent job so that, if errors are found during execution of DBCC CHECKDB, the job fails and can send a notification. Now I am curious whether anyone knows if the Check Database Integrity Task in a maintenance plan would do the same thing. MSDN does not mention that it will, and truthfully I have not been in an environment where it has come across a corruption issue, so I can't say that it does. This would be versus simply setting up a SQL Agent job with multiple steps that runs the specific command against each database, as Cindy suggested. Thoughts? Obviously the proof is in the pudding, so providing more than just a guess would be helpful...
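For reference, a minimal sketch of the per-database Agent job step approach mentioned above: each step runs CHECKDB directly, so a detected corruption fails that step and can fire a notification. The database name is just an example.

```sql
-- One Agent job step per database, e.g. a step named "Check UserDB1":
DBCC CHECKDB (N'UserDB1') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```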
Problem with order by in MySQL subquery Posted: 12 Jul 2013 03:32 PM PDT I'm running a query that takes about 2 seconds when the subquery does not include the ORDER BY, and far longer when it does. I'm confused, since it seems that I'm using an index for both queries, and as I've understood it, the subquery should be able to use the index for its ORDER BY. Any ideas where I went wrong? The table structure, the Tester index, and the output of SHOW CREATE TABLE SALE follow.
How to check growth of database in mysql? Posted: 12 Jul 2013 04:56 PM PDT I want to know whether there is any method to check the growth of a database in terms of its files' size on disk.
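A hedged MySQL sketch of one common way to look at per-database size via information_schema; this gives a point-in-time figure, so tracking growth would mean recording it regularly (for example from a scheduled job). Nothing here comes from the original post.

```sql
SELECT table_schema                                              AS database_name,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 2)   AS size_mb
FROM   information_schema.tables
GROUP  BY table_schema
ORDER  BY size_mb DESC;
```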