[how to] Taking a backup of a streaming WAL slave
- Taking a backup of a streaming WAL slave
- How to keep the structure of the Tablix when there is no data to show
- Database Structure and Design
- Understanding multiple indexes with MySQL (and performance)
- What Are the Log File Names?
- Cannot connect remotely to a named instance
- Detect Convert_Implicit usages
- How do I avoid duplicate rows using FREETEXT?
- Vacuum settings for mostly append-only data
- How can I join two tables while filtering the results with a "WHERE IN" clause?
- Why does MySQL ignore the index even on force for this order by?
- Issue adding shared data source (.rsds) to SSRS project
- Why aren't the rows of an EXPLAIN accurate?
- Why is DBMS_SCHEDULER giving me an error?
- Query Optimization for added value
- How do I delete only related records in a multi-key MERGE in SQL Server?
- Oracle impdp - Importing file from remote server
- Can I give the same name to a table and a stored procedure?
- Transactional Replication Failing Suddenly - Distributor Error
- SymmetricDS fails to start for PostgreSQL 9.2
- Large table and poor performance - what to do next?
- Drop a SQL Server database
- Help with Postgres 9.1 data import (is 4x expected data size, 4x slower than MySQL, and index still needed)
- Import from incremental backups to a new host in Oracle 11g
- Waiting for response from Postgres server
- Oracle RAC 11g R2 DB silent install: parameter needed for system class
- Which database could handle storage of billions/trillions of records?
- Designing Simple Schema for Disaggregation of Demand Forecast
- ORA-09925: Unable to create audit trail file
Taking a backup of a streaming WAL slave Posted: 13 Aug 2013 09:14 PM PDT I have an environment with a pair of PostgreSQL servers set up to do WAL streaming replication. I also have a test environment, and periodically I want to snapshot the production DBMS and restore it onto the test server. Right now I run a backup on the master and then restore that archive on the test server. Unfortunately that really slows things down on the production server, so I'd like to switch to pulling the clone from the replication slave. But I can't - the error I get is:
What do I need to do to accomplish my goal of not killing performance on the master when I need to clone it? Running on Ubuntu 12.04 x86_64, PostgreSQL server.
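A minimal sketch of the usual fix, assuming PostgreSQL 9.2 or later (9.1 standbys cannot serve base backups, which is one common cause of this error). Host names and paths here are hypothetical:

```sql
-- Run on the slave: confirm it really is a streaming standby.
SELECT pg_is_in_recovery();   -- returns true on a replica

-- Prerequisites the slave must expose (set in postgresql.conf):
SHOW max_wal_senders;         -- must be > 0 so the standby can serve a backup
SHOW wal_level;               -- 'hot_standby' (or higher) is required

-- With those in place, point the base backup at the slave instead of the
-- master, e.g. from the test server's shell:
--   pg_basebackup -h slave.example.com -U replication -D /var/lib/postgresql/clone -X stream
```

If the slave is stuck on 9.1, the fallback is a filesystem-level copy of the standby's data directory taken while it is shut down.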
How to keep the structure of the Tablix when there is no data to show Posted: 13 Aug 2013 06:38 PM PDT We have an SSRS report which shows the result of a dataset in a tablix. If the query in the dataset returns no data, the header of the tablix gets displayed but not the tablix cells below the header. I know there is a property meant for this, and I even set a rule for each cell value of the tablix, but neither keeps the structure. Do you have any idea? Thanks for your help in advance.
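Besides the tablix's no-rows property, one workaround is to pad the dataset itself so a detail row always renders. A hedged sketch; the table, columns, and parameter are hypothetical stand-ins for the real report query:

```sql
-- Report parameter in SSRS; declared here only to keep the sketch runnable.
DECLARE @From datetime = '20130101';

SELECT OrderId, Amount
FROM   dbo.SalesDetail
WHERE  OrderDate >= @From
UNION ALL
-- Emit a single all-NULL row only when the real query is empty, so the
-- tablix keeps its structure instead of collapsing to just the header.
SELECT NULL, NULL
WHERE  NOT EXISTS (SELECT 1 FROM dbo.SalesDetail WHERE OrderDate >= @From);
```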
Database Structure and Design Posted: 13 Aug 2013 06:49 PM PDT Okay, my project in simple words. Multiple shop sites are hosted on different hosting services, receiving orders from customers. All the orders are to be directed to the main company website, where the employees will receive the orders, complete them, and update the system. So it's two-way traffic of information between the shop sites and the company website. It's basically a network of content-writing websites where orders will be received and sent to the main writers' lounge hosted on the company website. Initially the plan was to build one centralized database on the company website and remotely connect all the websites to it, BUT many factors caused the recent change in plans, such as security, vulnerability, sustainability of each website, and speed issues, since all the websites would be connecting remotely to the database hosted on the company website. There are almost 20 tables in the database, give or take, and the client does not want to invest in dedicated hosting or anything, hence it has to be different websites on different hosting plans. I would appreciate any of your professional thoughts on this. Diversify your answer by assuming two situations: 1) All the websites are hosted with different hosting service providers and the read:write ratio is 70:30. 2) All the websites are hosted with the same hosting service provider but on different hosting accounts, not a dedicated VPS; this might help in overcoming the delay issues in remote database queries. Solutions that we could think of: * Host one centralized database on the company website and connect all the sites remotely to it. * Host a personal database for each website on its own hosting, with a separate database for the writers hosted on the main company website, and copy the data daily from the multiple databases to the writers' database (sketched below). Although we don't think this method is really worth it, because it will delay the process anyway and hinder the smoothness of the system.
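For option 2, a hedged sketch of what the per-site schema could carry to make the daily copy tractable; every name below is hypothetical:

```sql
-- Each shop keeps its own orders table, tagged with the originating site
-- and a marker recording whether the row has reached the central database.
CREATE TABLE orders (
    order_id   INT AUTO_INCREMENT PRIMARY KEY,
    site_code  VARCHAR(16)  NOT NULL,   -- identifies the shop site
    customer   VARCHAR(128) NOT NULL,
    details    TEXT,
    created_at DATETIME     NOT NULL,
    synced_at  DATETIME     NULL        -- NULL until pushed to the writers DB
);

-- The nightly job: read the backlog, insert it into the central writers
-- database, then mark the rows as sent.
SELECT * FROM orders WHERE synced_at IS NULL;
UPDATE orders SET synced_at = NOW() WHERE synced_at IS NULL;
```

The same marker also gives the reverse path (writers' status updates flowing back to the shops) a natural shape.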
Understanding multiple indexes with MySQL (and performance) Posted: 13 Aug 2013 03:50 PM PDT I have a "phones" table with around 120,000 records; it contains phone_id, phone_number, phone_user_name, phone_last_contacted, etc. The phone_id is the index and looks like: 1, 2, 3, 4 ... 120,000. But I often update different fields like "phone_last_contacted":
The "slow query" log says that the query reads many records before finding the right one to update. Would it be a performance upgrade if I set phone_number to be the index, considering that all phone_numbers are unique in this table? If phone_number is the index, does this mean that MySQL will know how to go directly to the row to update, instead of reading many rows to find the right one? Is that the purpose of an index, or of multiple indexes?
What Are the Log File Names? Posted: 13 Aug 2013 05:33 PM PDT I moved my database files. When I go to attach the MDF file by itself, SQL Server tells me that it can't find the log files. How do I ask the MDF what files it is expecting before attaching it? More background information: I have a bunch of SAN backup volumes. I attached them to Windows through iSCSI, and now half the drive letters are messed up. Moreover, multiple volumes should be mapped to the same drive letter, so I can't restore the correct drive letters. I know that the files are all there, but I don't know how many and which LDFs/NDFs I should be attaching with each MDF.
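One way to interrogate the MDF header without attaching it is the undocumented DBCC CHECKPRIMARYFILE; a hedged sketch with a hypothetical path:

```sql
-- Option 3 lists every file (LDF/NDF) the primary MDF expects, with each
-- logical name and the path it was last known at.
DBCC CHECKPRIMARYFILE ('D:\Data\MyDatabase.mdf', 3);

-- Option 0 just confirms the file is a primary MDF; option 1 adds the
-- database name, version, and collation.
DBCC CHECKPRIMARYFILE ('D:\Data\MyDatabase.mdf', 0);
```

Being undocumented, it can change between versions, but it is a common answer for exactly this situation.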
Cannot connect remotely to a named instance Posted: 13 Aug 2013 01:52 PM PDT So, I have two servers (ServerA and ServerB) that I've set up with Windows Server 2008 R2 and SQL Server 2008 R2. My goal is to replicate between these two in a failover fashion (with ServerC as a witness instance, but that's for later). Both servers are configured in the same way. Firewall rules added, SQL Server installed with the same settings. Network via TCP/IP and named pipes enabled. I can connect with SSMS to ServerA fine, but I get the following when connecting to ServerB:
Now, the instance name is correct and I can successfully connect to ServerA with the instance name (they're the same on both servers). If I leave out the instance name, I get another error saying it just timed out. The only thing that works is running SSMS on the local machine via shared memory, but obviously that doesn't solve the problem. I feel like I'm missing something obvious. Help?
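The classic culprit for "named instance works locally, times out remotely" is the SQL Server Browser service: it answers instance-name lookups on UDP 1434, which firewalls often block even when the TCP port is open. A hedged check, run locally on ServerB:

```sql
-- Pull the port the named instance is actually listening on from the
-- current error log.
EXEC sys.xp_readerrorlog 0, 1, N'Server is listening on';

-- If Browser/UDP 1434 is blocked, connecting with an explicit port skips
-- name resolution entirely, e.g. in SSMS use a server name of the form:
--   ServerB\MyInstance,50123   (port number hypothetical; use the one above)
```

If that connects, either open UDP 1434 for the Browser service or keep using the fixed port.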
Detect Convert_Implicit usages Posted: 13 Aug 2013 12:33 PM PDT I suspect that my SQL Server is using CONVERT_IMPLICIT a lot, because many of my fields are mapped as string in NHibernate while they are varchar in the database. Is there any option to detect when the server is using the CONVERT_IMPLICIT function? Thanks
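One common approach is to scan the plan cache for plans containing CONVERT_IMPLICIT; a minimal sketch:

```sql
-- Casting the XML plan to nvarchar keeps the LIKE predicate simple.
SELECT TOP (50)
       st.text     AS sql_text,
       qp.query_plan,
       cp.usecounts
FROM   sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)   AS st
WHERE  CAST(qp.query_plan AS nvarchar(max)) LIKE N'%CONVERT_IMPLICIT%'
ORDER BY cp.usecounts DESC;
```

Fixing the NHibernate mappings to use AnsiString for those properties (so parameters arrive as varchar rather than nvarchar) removes the conversions at the source.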
How do I avoid duplicate rows using FREETEXT? Posted: 13 Aug 2013 12:09 PM PDT I have the following query: How do I avoid duplicate rows in these results?
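Duplicates with FREETEXT usually mean the full-text match is joined against something that multiplies rows; DISTINCT, or a keyed join against FREETEXTTABLE, collapses them. A hedged sketch with hypothetical names:

```sql
DECLARE @SearchTerm nvarchar(100) = N'example phrase';

-- FREETEXTTABLE returns one row per matching key, so joining on [KEY]
-- yields each article at most once; DISTINCT guards the rest of the join.
SELECT DISTINCT a.ArticleId, a.Title
FROM   dbo.Articles AS a
JOIN   FREETEXTTABLE(dbo.Articles, Body, @SearchTerm) AS ft
       ON ft.[KEY] = a.ArticleId;
```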
Vacuum settings for mostly append-only data Posted: 13 Aug 2013 06:18 PM PDT I have a table with the following characteristics:
From my shallow knowledge of Postgres VACUUM, it seems like this table might be a good candidate for a non-standard VACUUM strategy. A few questions I have...
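Whatever the exact questions, the usual lever for this shape of table is per-table autovacuum storage parameters: vacuum rarely (an append-only table has almost nothing to reclaim) but analyze often (the planner needs to see the new rows). A hedged sketch; the table name and thresholds are assumptions to adapt:

```sql
ALTER TABLE events SET (
    autovacuum_vacuum_scale_factor  = 0.2,     -- few dead tuples: vacuum rarely
    autovacuum_vacuum_threshold     = 100000,
    autovacuum_analyze_scale_factor = 0.02     -- but analyze frequently
);
```

An occasional manual VACUUM is still worth scheduling so the free-space map and transaction-ID age stay healthy.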
How can I join two tables while filtering the results with a "WHERE IN" clause? Posted: 13 Aug 2013 04:26 PM PDT What I am trying to do is set up a selection filter for a cascading list on my site. By selection filter, I mean I am trying to limit and sort the results based on what the user selects. I have set up the ability for my users to click a button and mark an item as a favorite; that information is stored in billdata.favoritebills. Also, for efficiency, I have saved vote totals for each bill in billdata.billvotesmeta, which contains the columns billid, totalvotes, yesvotes, presentvotes, and novotes. The primary key is billid. My database is set up as follows: billdata.bills, billdata.favoritebills, billdata.billvotesmeta. In this particular filter, I want to limit the results to ones the user has marked as a favorite, and at the same time order them from least to greatest total number of votes. billdata.favoritebills contains the columns userid and billid; a (userid, billid) combination either exists when marked as a favorite or is deleted when it is unmarked. The resulting output from PHP is: What should happen is that it limits the billdata.bills.* selection to only favorites and then orders them by the joined table's total vote count. The data from billdata.bills is the information used to build the page. I think the problem is in the "WHERE IN" part.
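What the description amounts to is an ordinary inner join rather than a WHERE IN; a minimal sketch using the column names from the question (assuming billdata.bills also keys on billid, with ? as the PHP prepared-statement placeholder):

```sql
SELECT b.*, v.totalvotes
FROM   billdata.bills         AS b
JOIN   billdata.favoritebills AS f ON f.billid = b.billid  -- only favorites survive
JOIN   billdata.billvotesmeta AS v ON v.billid = b.billid
WHERE  f.userid = ?
ORDER BY v.totalvotes ASC;    -- least to greatest total votes
```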
Why does MySQL ignore the index even on force for this order by? Posted: 13 Aug 2013 08:41 PM PDT I run an EXPLAIN on my query. The indexes in my table: There is an index on last_name, but the optimizer does not use it. Even with FORCE INDEX, the index is still not used! What am I doing wrong here? Update requested by @RolandoMySQLDBA
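A hedged illustration of why the optimizer may be right to refuse: an index on last_name orders the rows, but if the query needs columns outside the index, every index entry costs a random lookup back into the table, and a sequential scan plus filesort wins. The index only becomes attractive when it covers the query:

```sql
ALTER TABLE employees ADD INDEX idx_last_name (last_name);

-- Covered query: answerable from the index alone, so EXPLAIN should show
-- "Using index" and no filesort.
EXPLAIN SELECT last_name FROM employees ORDER BY last_name;

-- Uncovered query: even with the hint, this may still be resolved by a
-- scan + filesort, because each index entry would need a random row fetch.
EXPLAIN SELECT * FROM employees FORCE INDEX (idx_last_name) ORDER BY last_name;
```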
Issue adding shared data source (.rsds) to SSRS project Posted: 13 Aug 2013 11:01 AM PDT I'm trying to add a shared data source to an SSRS project. The .rsds file is stored on our intranet. I right-click on Shared Data Sources in the Solution Explorer and select Add -> Existing Item. I enter the URL in the box that pops up, find the .rsds file I want to add to the set of Shared Data Sources, and add it. Instead of adding the file to Shared Data Sources, however, it gets added to the Reports folder in Solution Explorer and isn't available to use as a data source. This is my first time using a .rsds like this, but a coworker who has done it before was just as confused as I am. Is there something I'm doing wrong, or is there a glitch in the system somewhere along the way?
Why aren't the rows of an EXPLAIN accurate? Posted: 13 Aug 2013 11:22 AM PDT I have the following table: I do an EXPLAIN; the rows estimate is 9894, but I was expecting 10000. I then re-run: mysql> explain select last_name from employees order by last_name; The rows estimate is now 10031. Does anyone know why rows is never exactly 10000? I have noticed this in other cases as well.
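This is expected: for InnoDB, the rows figure in EXPLAIN (and the Rows column of SHOW TABLE STATUS) is an estimate derived from sampling a handful of random index pages, not from counting. A minimal demonstration:

```sql
ANALYZE TABLE employees;              -- re-samples the statistics
SHOW TABLE STATUS LIKE 'employees';   -- Rows is an estimate too

EXPLAIN SELECT last_name FROM employees ORDER BY last_name;
-- rows will land near the true count (9894, 10031, ...) but rarely exactly
-- on it, and it changes whenever the statistics are recomputed.
```

Only SELECT COUNT(*) actually counts; everything the optimizer prints is sampled page density multiplied out.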
Why is DBMS_SCHEDULER giving me an error? Posted: 13 Aug 2013 11:27 AM PDT I am trying to run DBMS_SCHEDULER using this on Oracle 11g: This is the error I get (the parser's list of expected symbols): ) , * & = - + < / > at in is mod remainder not rem => , ; for group having intersect minus order start union where
I don't understand why that is.
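That symbol list is PL/SQL's parser (PLS-00103) reporting what it expected next, which almost always means CREATE_JOB was invoked as a plain SQL statement or with a malformed argument list. It is a procedure, so it must run inside a PL/SQL block with named parameters; a hedged sketch with hypothetical job details:

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_CLEANUP',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN my_pkg.cleanup; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',
    enabled         => TRUE);
END;
/
```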
Query Optimization for added value Posted: 13 Aug 2013 10:30 AM PDT I have a query that's behaving a bit oddly. In my database I have a table called "records". It tells me a bunch of information about what applications a user ran on my company's machines. I'm trying to aggregate some statistics but am having some odd issues with a query. This query runs in about 6.5 minutes (~30 million entries in "records"). I would expect it to take longer when divisionName isn't specified, but it seems to be taking an unreasonable amount of time to finish (overnight and still chugging). Is there an alternate structure to speed up the query? I have an index on (computerID, divisionName) in locate and (programID, programName) in Programs, as well as a multitude of indexes on records. Note: Programs contains 4 fields and locate contains 2, so I don't think the joins are exceptionally large.
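Without the query text this is only a hedged guess, but the usual cure for "fast when filtered, interminable when not" is a records-side index that covers the join keys the aggregate touches, so the 30-million-row table is walked once in index order. Column names follow the question; the right column list depends on the real query:

```sql
CREATE INDEX idx_records_computer_program
    ON records (computerID, programID);
-- Re-check the plan (EXPLAIN the original aggregate) before and after: when
-- divisionName is unspecified the optimizer must visit every division, so
-- indexes on the small locate/Programs tables help far less than this one.
```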
How do I delete only related records in a multi-key MERGE in SQL Server? Posted: 13 Aug 2013 01:37 PM PDT Suppose you have something like this: Source table (variable): Target table: I want to merge
(That last rule is hard to describe, sorry!) For instance: Source: Target: Merge result: Result Target: So... here's what I have so far, which takes care of the inserts and updates. How do I do the scoped delete?
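A hedged sketch of the standard pattern for the delete rule: scope the target through an updatable CTE so that WHEN NOT MATCHED BY SOURCE can only ever see rows whose first key exists in the source; unrelated target rows are invisible to the MERGE and therefore survive. The object names (@Source, dbo.Target, Key1/Key2/Val) are hypothetical stand-ins for the question's tables:

```sql
WITH scoped AS (
    -- Only target rows that share a Key1 with the source take part.
    SELECT t.*
    FROM   dbo.Target AS t
    WHERE  t.Key1 IN (SELECT s.Key1 FROM @Source AS s)
)
MERGE scoped AS t
USING @Source AS s
   ON t.Key1 = s.Key1 AND t.Key2 = s.Key2
WHEN MATCHED THEN
    UPDATE SET t.Val = s.Val
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Key1, Key2, Val) VALUES (s.Key1, s.Key2, s.Val)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;   -- deletes only within the scoped key groups
```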
Oracle impdp - Importing file from remote server Posted: 13 Aug 2013 01:13 PM PDT Am I missing something, or is it not possible to import a dump (.dmp) file using impdp to a database on a server other than where it was created? All of my investigations lead me to believe that this cannot be done... that the dump file needs to reside locally on the server where the data is to be imported, or be NFS-mounted so it appears to be local. This seems to be a capability that the old "exp/imp" utilities used to have but that no longer exists. I know you can move data using impdp and the NETWORK_LINK option, but to use this, the data must physically reside in a schema within the database instance on the remote side; it can't reside in a dump/exported file. For example: I have Server "A" and Server "B", each with an Oracle instance on it. On "A", I perform an export using expdp of schema "TESTDATA" to a dump file named "testdata.dmp", where it is stored on "A". At some point in the future I would like to restore the contents of the "TESTDATA" dump file (testdata.dmp) to a new schema (TESTDATA2) on server "B". At this point, is it true that my only options are to:
Either I'm missing something here, or Oracle left a huge gap in functionality between impdp and imp.
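For completeness, a hedged sketch of the NETWORK_LINK route, which sidesteps the dump file entirely by pulling schema contents straight from A's instance over a database link (link name, credentials, and TNS alias are hypothetical); run on server B:

```sql
CREATE DATABASE LINK a_link
  CONNECT TO testdata IDENTIFIED BY secret
  USING 'SERVERA_TNS';

-- Then from B's command line, no .dmp ever touches B's filesystem:
--   impdp testdata2/pwd NETWORK_LINK=a_link SCHEMAS=TESTDATA \
--         REMAP_SCHEMA=TESTDATA:TESTDATA2
```

If the data must come from the existing testdata.dmp, though, the file really does have to be made visible to B's server (copied or NFS-mounted) and registered via a DIRECTORY object; that limitation is real.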
Can I give the same name to a table and a stored procedure? Posted: 13 Aug 2013 12:13 PM PDT Is it possible to name a stored procedure the same as a table? I have an existing table named 'buyers', and I'm trying to create a stored procedure named 'buyers', but I'm getting this error:
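The error arises because tables and stored procedures share one namespace per schema, so dbo cannot hold both a 'buyers' table and a 'buyers' procedure. Placing the procedure in a different schema is a minimal workaround, sketched here with a hypothetical schema name:

```sql
CREATE SCHEMA api;
GO
-- dbo.buyers (the table) is untouched; the procedure lives in api.
CREATE PROCEDURE api.buyers
AS
BEGIN
    SELECT * FROM dbo.buyers;
END;
GO
EXEC api.buyers;
```

That said, sharing a name across object types invites confusion; a prefix such as usp_buyers is the more conventional fix.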
Transactional Replication Failing Suddenly - Distributor Error Posted: 13 Aug 2013 02:11 PM PDT I have two databases (both SQL Server 2008 R2) taking part in transactional replication, which exists to pass data to a customer of ours, so I am troubleshooting from the Publisher side. Replication had been working fine for months; then, about a week ago, the customer reported that data was no longer showing up in their subscription tables. So I started troubleshooting and found this: (the error = The process could not execute '{call sp_MSget_subscription_guid(20)}' on ''.) It seems that the distributor has failed for this subscription. My questions (I am relatively new to replication and have been poring through documentation):
Note: This is the only subscription we have. Update: Query result (mentioned in the comments below)
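A hedged starting point for the diagnosis, run on the Distributor against the distribution database (these are the standard replication system tables; the agent id should line up with the 20 in the failing call):

```sql
USE distribution;

-- Most recent errors logged by any replication agent.
SELECT TOP (20) time, error_text
FROM   dbo.MSrepl_errors
ORDER BY time DESC;

-- The distribution agents this Distributor knows about; one id should
-- match the sp_MSget_subscription_guid(20) in the error above.
SELECT id, name, subscriber_db
FROM   dbo.MSdistribution_agents;
```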
SymmetricDS fails to start for PostgreSQL 9.2 Posted: 13 Aug 2013 06:09 PM PDT I'm trying to get SymmetricDS up and running with PostgreSQL. I've followed the tutorial (almost) exactly. (I have not set up a separate node yet since, for my purposes, that node needs to be truly separate, on a different VM.) Unfortunately, I am not able to get the database import step to work, as SymmetricDS will not connect to the database. Following advice from Connecting to local instance of PostgreSQL from JDBC, I ensured that the second SLOC in the config file matched. I've also ensured that the connection settings line up (or something to this effect? I'm very new to databases). If I had to guess where I went wrong, it'd be here. I edited the engine file accordingly. As far as I know, this is all that is needed to get SymmetricDS up and running (at least for the import step). Obviously, something went wrong; a stack trace is included below. What did I miss? Shell log: Some more resources as I find them in my sea of open tabs:
Large table and poor performance - what to do next? Posted: 13 Aug 2013 09:08 PM PDT We have a Windows 2008 R2 / SQL Server 2008 R2 (Standard) server that is used to host a single database. The database itself mainly consists of a single large table containing both live and historical data. The table is currently 101 million rows of 35 columns, and growing at the rate of around 250,000 rows a day. Splitting the table into smaller tables unfortunately isn't really an option due to a mass of legacy code. The database itself (around 100 GB) is held as a single file on a single SSD drive. The server has another two 10K SAS disks used for the boot OS, paging, etc., and the server has 22 GB of RAM. Although everything's running fine, we have a few dozen users who need to query the data in this table. We have limited control over what these queries do: sometimes it's pulling a few hundred rows from yesterday, at other times it's tens of thousands of rows from 6 months ago. 99.9% of the activity is reading rows; there is very little writing apart from the live data being inserted. We have indexes in place that are helping, but the ultimate bottleneck appears to be disk I/O. The SSD in place isn't the fastest, and as a result we're looking at retrofitting a RAID 1+0 array of high-end SSD drives to increase performance (we've checked the array card can handle the throughput). Assuming we have this array in place, what is the best plan to increase read throughput to this database? If we have a super-fast SSD array, is that enough? Or would partitioning the database into separate files on separate logical drives be a better idea, even though they're essentially destined for the same disks? Similarly, would splitting database and log files across logical drives in the same array make any difference?
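On the file-layout half of the question, a hedged sketch of the multi-file option: add a filegroup with several files (SQL Server stripes allocations across them) and move the big table onto it by rebuilding its clustered index. Every name, path, and size below is hypothetical, and on a single RAID 1+0 array this mostly buys allocation parallelism and manageability rather than raw throughput; the array itself remains the ceiling:

```sql
ALTER DATABASE BigDb ADD FILEGROUP FG_Data2;
ALTER DATABASE BigDb ADD FILE
    (NAME = BigDb_Data2a, FILENAME = 'E:\Data\BigDb_Data2a.ndf', SIZE = 20GB),
    (NAME = BigDb_Data2b, FILENAME = 'F:\Data\BigDb_Data2b.ndf', SIZE = 20GB)
TO FILEGROUP FG_Data2;

-- Rebuild the existing clustered index onto the new filegroup (assumes the
-- table already has one named CIX_BigTable on these columns).
CREATE CLUSTERED INDEX CIX_BigTable
    ON dbo.BigTable (RecordDate, RecordId)
    WITH (DROP_EXISTING = ON)
    ON FG_Data2;
```

Separating the log file onto its own spindles does pay off independently, because log I/O is sequential and suffers when mixed with random data reads.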
Drop a SQL Server database Posted: 13 Aug 2013 10:03 AM PDT It is taking too long to drop a 500 GB+ SQL Server database. My question: what ways can we drop a database other than through the GUI or T-SQL? Would deleting the underlying .mdf/.ldf files work?
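Dropping doesn't rewrite 500 GB; it deallocates, so a slow DROP is almost always waiting on open connections or a large rollback rather than on file size. A minimal sketch of the usual sequence, with a hypothetical database name:

```sql
-- Kick out all other sessions, rolling back their work immediately.
ALTER DATABASE BigDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE BigDb;

-- Alternative: take the database OFFLINE first, then DROP; the catalog
-- entry is removed without touching the physical files, which can be
-- deleted from disk at leisure afterwards.
-- ALTER DATABASE BigDb SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- DROP DATABASE BigDb;
```

Deleting the .mdf/.ldf files by hand while the database is still registered just leaves a broken catalog entry behind, so it is not a shortcut on its own.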
Help with Postgres 9.1 data import (is 4x expected data size, 4x slower than MySQL, and index still needed) Posted: 13 Aug 2013 01:33 PM PDT I have 25 billion rows to import. Here are my commands: I'm running Postgres 9.1. There are no other tables in the database. This is not a production environment. The files are TSV text files. The only output from each command is the row count. Am I doing something wrong here, or is this the expected behavior?
Subquestion 2: If storage requirements are indeed 4x the expected size, can I speed up importing (i.e. with some configuration change, etc.)? Subquestion 3: I need an index on id1, so what will the storage requirements be for that during and after creation (I planned to run CREATE INDEX after the load)?
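On subquestions 2 and 3, a hedged sketch of the standard 9.1 bulk-load recipe; the table, columns, and path are hypothetical. The size inflation is mostly per-row overhead (roughly a 24-byte tuple header plus a 4-byte item pointer per row), which no setting removes, but load and index-build speed respond well to this:

```sql
-- UNLOGGED skips WAL entirely (fine for a non-production, rebuildable box).
CREATE UNLOGGED TABLE pairs (id1 bigint, id2 bigint);

-- COPY's default text format already expects tab-separated values.
COPY pairs FROM '/data/pairs.tsv';

-- Build the index after loading, with a generous session-local sort budget.
SET maintenance_work_mem = '2GB';
CREATE INDEX pairs_id1_idx ON pairs (id1);
```

During CREATE INDEX expect temporary disk usage on the order of the finished index; a b-tree over one bigint column of 25 billion rows will run to hundreds of gigabytes.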
Import from incremental backups to a new host in Oracle 11g Posted: 13 Aug 2013 01:31 PM PDT I am using Oracle 11g. I would like to know whether it is possible to import from incremental level 0 and level 1 backups to a new host using RMAN. If yes, how can I do that? For level 1 I am using the differential method.
Waiting for response from Postgres server Posted: 13 Aug 2013 09:44 AM PDT I have a problem when querying from my application (Java) to PostgreSQL. Here is my query list (in one user session): In my case, I have to insert 500k rows into the db, and when the row count reaches ~64k I end up waiting a long time (30-60 seconds). I checked the Postgres server log file; it's empty. The reason may be server config or the schema (constraints, indexes...). So far I have no solution. Please help me. Many thanks. P/S: Here is my detailed query (you can copy & paste to view) - thanks:
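Stalls at a steady row count with an empty server log usually point at the client committing (and flushing WAL) per row, or at index/constraint maintenance growing with the table. A hedged sketch of the usual fix, with hypothetical names: batch the inserts, or better, stream them with COPY:

```sql
BEGIN;
-- Multi-row VALUES: thousands of rows per statement, one commit per batch.
INSERT INTO items (id, payload) VALUES
    (1, 'a'),
    (2, 'b'),
    (3, 'c');
COMMIT;

-- Fastest of all, one COPY for the whole 500k rows (from a file or, via
-- JDBC's CopyManager, from the application):
-- COPY items (id, payload) FROM STDIN;
```

From Java specifically, addBatch/executeBatch on a PreparedStatement inside one transaction gets most of the benefit with the least code change.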
Oracle RAC 11g R2 DB silent install: parameter needed for system class Posted: 13 Aug 2013 09:04 PM PDT I'm installing Oracle RAC DB 11gR2 using a response file and silent mode. The installation fails with the error message:
Which parameter is needed for a server-class install? I tried changing the value of the parameter I suspected, with no luck.
Which database could handle storage of billions/trillions of records? Posted: 13 Aug 2013 08:54 PM PDT We are looking at developing a tool to capture and analyze netflow data, of which we gather tremendous amounts. Each day we capture about ~1.4 billion flow records, which would look like this in JSON format: We would like to be able to do fast searches (less than 10 seconds) on the data set, most likely over narrow slices of time (10-30 minute intervals). We also want to index the majority of the data points so we can do searches on each of them quickly. We would also like to have an up-to-date view of the data when searches are executed. It would be great to stay in the open-source world, but we are not opposed to looking at proprietary solutions for this project. The idea is to keep approximately one month of data, which would be ~43.2 billion records. A rough estimate that each record would contain about 480 bytes of data would equate to ~18.7 terabytes of data in a month, and maybe three times that with indexes. Eventually we would like to grow the capacity of this system to store trillions of records. We have (very basically) evaluated Couchbase, Cassandra, and MongoDB so far as possible candidates for this project, but each proposes its own challenges. With Couchbase the indexing is done at intervals and not during insertion of the data, so the views are not up to date; Cassandra's secondary indexes are not very efficient at returning results, as they typically require scanning the entire cluster; and MongoDB looks promising but appears to be far more difficult to scale, as it is master/slave/sharded. Some other candidates we plan to evaluate are Elasticsearch, MySQL (not sure if this is even applicable), and a few column-oriented relational databases. Any suggestions or real-world experience would be appreciated.
Designing Simple Schema for Disaggregation of Demand Forecast Posted: 13 Aug 2013 03:02 PM PDT I am doing a simple database design task as a training exercise where I have to come up with a basic schema design for the following case: I have a parent-child hierarchy of products (example, Raw Material > Work in Progress > End Product).
Demand Forecast is usually done at a higher level in the hierarchy (Raw Material or Work in Progress level). It has to be disaggregated to a lower level (End Product). There are 2 ways in which demand forecast can be disaggregated from a higher level to a lower level:
Forecast shall be viewable in weekly buckets for the next 6 months and the ideal format should be: PRODUCT_HIERARCHY table could look like this: ORDERS table might look like this: where,
How to store the forecast? What would be a good basic schema for such a requirement? My idea for selecting orders in 26 weekly buckets is: But this will give weekly buckets starting from today, irrespective of the day of the week. How can I convert them to Sunday-to-Saturday weeks in Oracle? Please help in designing this database structure. (Will be using Oracle 11g.)
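On the Sunday-to-Saturday point specifically: TRUNC(d, 'IW') truncates to the ISO week's Monday regardless of NLS settings, so shifting by one day yields stable Sunday-based buckets. A hedged sketch against the ORDERS table, with assumed ORDER_DATE and QUANTITY columns:

```sql
SELECT TRUNC(order_date + 1, 'IW') - 1 AS week_start,   -- the bucket's Sunday
       SUM(quantity)                   AS weekly_demand
FROM   orders
WHERE  order_date >= TRUNC(SYSDATE + 1, 'IW') - 1       -- start of this week
AND    order_date <  TRUNC(SYSDATE + 1, 'IW') - 1 + 26 * 7
GROUP BY TRUNC(order_date + 1, 'IW') - 1
ORDER BY week_start;
```

Avoid TRUNC(d, 'D') for this: it honours NLS_TERRITORY, so the week start silently moves with session settings.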
ORA-09925: Unable to create audit trail file Posted: 13 Aug 2013 02:43 PM PDT I'm using an Oracle 11.1.0.6.0 database on a 64-bit Linux server in the Amazon cloud. I'm getting the following error when I run the connect command. When I run the df -h command, I don't see the file system; it just shows as below. I do not understand the cause of this issue. Please let me know how to solve it.
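ORA-09925 means the server process could not write its audit file, so the first check is where the audit destination points and whether that directory exists, has free space, and is writable by the oracle OS user. A hedged sketch (SQL*Plus, connected locally AS SYSDBA, since ordinary connects are failing); the path is hypothetical:

```sql
SHOW PARAMETER audit_file_dest;

-- If the destination is missing or unwritable, repoint it at a valid,
-- writable directory and restart the instance.
ALTER SYSTEM SET audit_file_dest = '/u01/app/oracle/admin/ORCL/adump'
  SCOPE = SPFILE;
```

Given that df -h no longer shows the expected file system, a dropped mount is the likeliest root cause: the directory under audit_file_dest simply isn't there any more.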