[how to] Using a PostgreSQL tablespace from two different servers |
- Using a PostgreSQL tablespace from two different servers
- Better way to iterate through tables with foreign key
- How to create a trigger in a different database?
- Oracle: list user-created tables in the SYS schema
- SQL Server - compare pictures in db as varbinary [on hold]
- What happens if the mysql database's innodb log files are lost?
- Connecting to an external database with pgAdmin III
- Extract time portion of Datetime2(7) in hhmmssfff format
- Connect to SQL Server 2012 running on Azure VM via local SSMS 2012
- Retrieve list of matched words in PostgreSQL
- Table redirect / filter / trigger on select
- View with fallback (performance/optimization question)
- Distributed transaction and Read Committed Snapshot
- How do I determine how much data is being written per day through insert, update and delete operations?
- Oracle Patch Update
- Updateable Subscriptions: Rows do not match between Publisher and Subscriber
- Replicated Database Log File Maintenance
- How to avoid empty rows in SSIS Excel Destination?
- How to add rows/columns to the table in runtime in SSRS 2008
- How to disable oracle's MAX_ENABLED_ROLES limit
- In MySQL, does the order of the columns in a WHERE clause affect query performance, and why?
- effective mysql table/index design for 35 million rows+ table, with 200+ corresponding columns (double), any combination of which may be queried
- Delete word, its meanings, its meaning's example sentences from DB
- How can I optimize this query and support multiple SKUs?
- How to modify an update in Oracle so it performs faster?
- Query to find and replace text in all tables and fields of a mysql db
Using a PostgreSQL tablespace from two different servers Posted: 14 Aug 2013 08:27 PM PDT I have a PostgreSQL database living on an external hard disk (RAID0 via Thunderbolt). So far I have accessed it from a PostgreSQL server running on my MacBook Pro. As my work on this database is getting more and more intensive, the queries are getting more and more complex, too. Therefore I'd like to use my brand spanking new iMac from now on. My question is: can I somehow tell the new PostgreSQL server (running on the iMac) to use the data that is already living in the tablespace on the external disk? Or will I have to export & import all the data manually (pg_dump, pg_restore)? |
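A note on the dump-and-restore route mentioned above: a tablespace belongs to the cluster that created it, and two PostgreSQL instances must never point at the same data files, so copying the data across is the safe option. A minimal sketch, with host, user and database names as placeholders:

    # On (or against) the MacBook Pro instance: dump in custom format
    pg_dump -h macbook.local -U myuser -Fc -f mydb.dump mydb

    # On the iMac: create an empty database and restore into it
    createdb -U myuser mydb
    pg_restore -h localhost -U myuser -d mydb mydb.dump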
Better way to iterate through tables with foreign key Posted: 14 Aug 2013 08:13 PM PDT I have a master table (M), and 2 slave tables (S1, S2). S1 and S2 reference M through one-to-many foreign keys (F1, F2). Now, I'm writing a PHP (or whatever language) function to get some records from M, along with information stored in S1 and S2. Basically, there are 2 ways to do it:
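A minimal illustration of the two usual approaches, with hypothetical table and column names:

    -- (a) One round trip: join the master to both slaves and group the rows in PHP.
    SELECT m.id, m.name, s1.detail AS s1_detail, s2.detail AS s2_detail
    FROM M AS m
    LEFT JOIN S1 AS s1 ON s1.m_id = m.id
    LEFT JOIN S2 AS s2 ON s2.m_id = m.id
    WHERE m.id IN (1, 2, 3);

    -- (b) Several round trips: fetch the M rows first, then query S1 and S2
    --     separately using an IN list built from the fetched ids.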
This may sound idiotic, but I'm thinking about which is the better approach. The concerns are:
|
How to create a trigger in a different database? Posted: 14 Aug 2013 07:46 PM PDT Is it possible to create a stored procedure that creates a table trigger (DDL) in a different database than the one the stored procedure itself resides in? The databases are on the same server instance. If yes, then how? For example, this does not work: When called like this: It returns this error:
Which is fair enough. Is there a way to achieve what I want? |
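One commonly cited workaround is to run the CREATE TRIGGER batch through the target database's own sp_executesql, since calling a system procedure by its three-part name executes it in that database's context. A hedged sketch; the database, table and trigger names are placeholders:

    DECLARE @sql nvarchar(max);
    SET @sql = N'
    CREATE TRIGGER dbo.trg_MyTable_Audit
    ON dbo.MyTable
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        -- trigger body goes here
    END';

    -- Runs the batch in OtherDb, so the trigger is created there
    -- rather than in the database the procedure lives in.
    EXEC OtherDb.sys.sp_executesql @sql;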
Oracle: list user-created tables in the SYS schema Posted: 14 Aug 2013 05:46 PM PDT I need to delete all tables (hundreds) in the SYS schema that someone accidentally created with SQL*Plus. Looking in |
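A hedged starting point for identifying the accidental objects: filter DBA_OBJECTS by owner and creation date (the cutoff date is a placeholder) and review the list carefully before dropping anything owned by SYS, since the data dictionary itself lives there:

    SELECT object_name, created
    FROM   dba_objects
    WHERE  owner = 'SYS'
    AND    object_type = 'TABLE'
    AND    created > DATE '2013-08-01'
    ORDER  BY created;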
SQL Server - compare pictures in db as varbinary [on hold] Posted: 14 Aug 2013 08:00 PM PDT Is it possible to create a stored procedure that returns a similarity ratio for images saved in the db as varbinary? Input parameters could be an image id, and the output would be the nearest ratio. This is my db structure |
What happens if the mysql database's innodb log files are lost? Posted: 14 Aug 2013 07:51 PM PDT What I did was then removed the files: then modified the my.cnf file, variable: and then: and allowed the files to be recreated. I later discovered that the global variable The question is, how much data was lost? Note: I still have the old files ib_logfile0, ib_logfile1, not deleted yet. And the website relying on the database appears to be working. |
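For reference, the usually recommended way to remove or resize the redo logs on MySQL/MariaDB of this era is to shut down cleanly first, so the log files contain nothing that has not yet been applied to the tablespace; a hedged sketch:

    -- 1) Make the next shutdown a full flush/purge:
    SET GLOBAL innodb_fast_shutdown = 0;
    -- 2) Shut the server down cleanly (e.g. mysqladmin shutdown).
    -- 3) Move ib_logfile0 / ib_logfile1 aside (keep them until verified).
    -- 4) Change innodb_log_file_size in my.cnf.
    -- 5) Start the server; InnoDB recreates the log files at the new size.

If the files were removed without that clean shutdown, whatever redo they held that had not yet been flushed to the data files is what is at risk.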
Connecting to an external database with pgAdmin III Posted: 14 Aug 2013 03:40 PM PDT I'm trying to connect to an external database from pgAdmin III (which is installed on both machines). The client complains:
The server, meanwhile, explicitly states that all connections from the internal network are accepted.
I have already restarted the postmaster for the changes to take effect, and have gone so far as to restart the entire machine. What else could be going wrong here? |
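The two usual suspects are listen_addresses in postgresql.conf, which by default accepts TCP connections only from localhost, and the pg_hba.conf rules; a hedged sketch, with the subnet and auth method as placeholders:

    # postgresql.conf (changing listen_addresses needs a full restart):
    listen_addresses = '*'          # or a specific interface address
    port = 5432

    # pg_hba.conf (a reload is enough after editing):
    # TYPE  DATABASE  USER  ADDRESS          METHOD
    host    all       all   192.168.1.0/24   md5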
Extract time portion of Datetime2(7) in hhmmssfff format Posted: 14 Aug 2013 12:38 PM PDT I need to extract the time portion of a datetime2(7) column in hhmmssfff format and I am doing it like this: Is there a better approach than these ugly REPLACE/CONVERT/LEFT calls? I need this to join to a DimTime dimension whose key is in hhmmssfff format. |
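A couple of hedged alternatives, assuming the column is named MyDateTime2Col:

    -- SQL Server 2012 and later: FORMAT produces the key directly
    SELECT FORMAT(MyDateTime2Col, 'HHmmssfff') AS TimeKey
    FROM   dbo.MyTable;

    -- Earlier versions: style 114 is hh:mi:ss:mmm, so one REPLACE of the colons
    -- yields hhmmssfff without LEFT()
    SELECT REPLACE(CONVERT(char(12), CAST(MyDateTime2Col AS time(3)), 114), ':', '') AS TimeKey
    FROM   dbo.MyTable;

FORMAT is the tidier of the two but is noticeably slower on large rowsets, which may matter when loading a fact table.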
Connect to SQL Server 2012 running on Azure VM via local SSMS 2012 Posted: 14 Aug 2013 01:06 PM PDT I have been testing VMs on Azure. I have created a SQL VM running SQL 2012 on Windows 2012 and would like to connect to it via SSMS 2012 on my local machine instead of connecting via RDP through the Azure Portal. Thanks! |
Retrieve list of matched words in PostgreSQL Posted: 14 Aug 2013 12:44 PM PDT I'm new to PostgreSQL and not really familiar with much beyond basic queries/inserts. I've added a tsvector column to the table I'm searching and have set it to index a specific column of that table. Now what I'm trying to do is find out which words matched in a query. If I had the query It would obviously return:
Is there a way I can get it to return a list instead, so I just see big and red in separate rows? |
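One hedged approach is to split the search string into words and keep only those that the tsvector column matches; the table, column and search string below are placeholders:

    SELECT w AS matched_word
    FROM   unnest(string_to_array('big red', ' ')) AS words(w)
    WHERE  EXISTS (
             SELECT 1
             FROM   my_table t
             WHERE  t.tsv_col @@ plainto_tsquery('english', w)
           );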
Table redirect / filter / trigger on select Posted: 14 Aug 2013 10:43 AM PDT Is there any way to redirect queries to different tables / views based on the package that's referencing the table? I.e. packages A and B both have "select grade from schema1.grd_tbl", but I want package A to get the percent grade that's stored in the table, and package B to get a letter grade that's calculated from the percent. I'd like to avoid modifying the (dozens of) packages that reference the table. I'd rather 'spoof' the table somehow if we can, replacing the percent in the grade column with a letter when called from those packages. (The column's varchar2; percents are coded as characters.) First thought was to create a synonym pointing to a view that massages the column based on the calling package, but the code fully qualifies the table name in most cases so that doesn't seem doable. I went looking for something equivalent to a trigger on select; the closest I've found is fine-grained audit, and it's not row-based. Is there magic somewhere that might let me do this? Any hints appreciated. Perry. |
View with fallback (performance/optimization question) Posted: 14 Aug 2013 03:44 PM PDT I have a table with tariffs for stuff; the tariffs table is not important in this scenario, the "tariff values" are. In this Demonstration SQL Fiddle I have, for example, a default tariffplan (key = If I have tariffs defined for items So, what I do is I select the This results, as expected, in: Because I want to abstract this away I want to put this into a table valued function so I can create a "dynamic view": This should result in a "virtual table" (or "dynamic view") similar to the This results in: All I need to do now is stuff this query into a TVF: And there we have it. We can call our function ("dynamic view") as intended (and also use it in selects/joins etc.) Now my first question is: I have a feeling all these So I'm hoping someone here has some ideas on how to improve this. My second question is: What if I had a product (tariff_type (Demonstrated in this SQL fiddle) In the above example I use another |
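For what it's worth, one way to keep the fallback logic reusable without the overhead of a multi-statement function is an inline (single-statement) table-valued function, which the optimizer expands into the calling query. A hedged sketch with hypothetical table, column and key names:

    CREATE FUNCTION dbo.GetEffectiveTariffs (@plan varchar(10))
    RETURNS TABLE
    AS
    RETURN
        SELECT d.item,
               COALESCE(p.value, d.value) AS value   -- plan-specific value, else default
        FROM dbo.tariff_values AS d                  -- rows of the default plan
        LEFT JOIN dbo.tariff_values AS p
               ON  p.item = d.item
               AND p.[plan] = @plan
        WHERE d.[plan] = 'default';
    GO

    -- Usage, also possible inside joins:
    SELECT * FROM dbo.GetEffectiveTariffs('custom1');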
Distributed transaction and Read Committed Snapshot Posted: 14 Aug 2013 08:56 AM PDT Distributed transactions are not supported for snapshot isolation level in SQL Server 2008 R2. But what about read committed snapshot? |
How do I determine how much data is being written per day through insert, update and delete operations? Posted: 14 Aug 2013 09:57 AM PDT The longevity of SSDs is largely determined by the number of bytes written by insert, update and delete operations. What is the best way to accurately determine how much data is being written by MariaDB 5.5 on a daily basis, so that I can use this to estimate the likely longevity of SSDs if used in a heavy-write database environment? The current setup is that all tables are InnoDB. Can I use |
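A hedged starting point: these InnoDB status counters are cumulative byte counts since server start, so sampling them 24 hours apart and taking the difference gives an approximate bytes-written-per-day figure (they do not include binary log writes):

    SHOW GLOBAL STATUS LIKE 'Innodb_data_written';
    SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';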
Oracle Patch Update Posted: 14 Aug 2013 11:15 AM PDT We have an Oracle RAC production environment with primary and secondary DBs. Our DBA has asked to update the Oracle version from 11.2.0.1.0 (64-bit) to 11.2.0.3 (64-bit) with patches 6880880, 10404530, 16803769 and 16803775. In our current database we have shared storage, ACL settings, security settings, gateway/heterogeneous connectivity, Data Guard, Data Broker, a backup policy, and Oracle Client installed on other machines. The DBA has estimated that he needs to do the installation and settings from scratch and test. So, when the version is updated, do we really need to reconfigure and reinstall everything (shared storage, ACL settings, security settings, gateway/heterogeneous connectivity, Data Guard, Data Broker, backup policy and Oracle Client installed on other machines)? If yes, that's fine, but if not, then I need to justify it. I can understand testing would be required. |
Updateable Subscriptions: Rows do not match between Publisher and Subscriber Posted: 14 Aug 2013 09:05 PM PDT I have transactional replication with updatable subscribers set up in SQL Server 2008. It has 3 subscribers and 1 publisher. I had to set up replication again due to some errors related to the database and the application which uses the database. However, now I run into issues when I try updating a section in my application. It does not go through with the update and gives the following error:
The update statement obviously doesn't go through. However, when I try it a second time it works. Replication is working. Everything seems to be replicating. Can anyone explain why this error would occur and how I can resolve this issue? I would really appreciate the help!... |
Replicated Database Log File Maintenance Posted: 14 Aug 2013 03:05 PM PDT I have a database on the publisher that is involved in replication (publication configured for merge and transaction). Trying to regain control of the log file for this particular database (VLF count, size, etc.). Is there anything I need to do (or be cautious of) with the replication setup before trying to perform any maintenance on the log file? I am not an expert in the area of replication and cannot find anything solid that provides guidance as to what measures should be taken. Edit: This would include working on the distribution database as well, data retention was not configured at all for some reason. |
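A hedged pre-flight check before touching the log of a published database: see what is preventing log truncation and whether the log reader still has undelivered transactions (the database name is a placeholder):

    SELECT name, log_reuse_wait_desc
    FROM   sys.databases
    WHERE  name = N'MyPublishedDb';

    DBCC SQLPERF (LOGSPACE);            -- current log size and percent used
    DBCC OPENTRAN (N'MyPublishedDb');   -- oldest active and oldest replicated transaction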
How to avoid empty rows in SSIS Excel Destination? Posted: 14 Aug 2013 08:05 PM PDT Does anyone have a way to avoid empty rows when using SSIS to export to Excel? Here's a simple example of one data flow task: OLE DB Source: Data Conversion (to handle the annoying UNICODE / NON-UNICODE issue): The end result is either of the two below, depending on the value of "FirstRowHasColumnName" in the Excel Connection Manager. Note the blank rows. |
How to add rows/columns to the table in runtime in SSRS 2008 Posted: 14 Aug 2013 10:05 AM PDT Usually we design the table to have x number of rows and y number of columns in a report. But how can we create a report which adds rows and columns dynamically at run time based on the result of the source query? For example, I want to list studentId, studentName and any course each student has enrolled in. As the number of courses differs from one person to the other, I should add the rows and related columns for courses at run time based on the query result. How can it be done? For example: Thanks for your help in advance. |
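One hedged way to get dynamic course columns in SSRS 2008 is to return one row per student/course from the dataset and let a matrix (tablix) generate the columns through a column group on the course name; the table and column names below are placeholders:

    SELECT s.StudentId, s.StudentName, c.CourseName
    FROM dbo.Students AS s
    JOIN dbo.Enrollments AS e ON e.StudentId = s.StudentId
    JOIN dbo.Courses     AS c ON c.CourseId  = e.CourseId;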
How to disable oracle's MAX_ENABLED_ROLES limit Posted: 14 Aug 2013 04:05 PM PDT How can I disable Oracle's MAX_ENABLED_ROLES limit, or expand the value of the limit? [Oracle 10g (win32)] |
In MySQL, does the order of the columns in a WHERE clause affect query performance, and why? Posted: 14 Aug 2013 01:05 PM PDT I have a query that doesn't use any indexes: The |
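For AND-ed predicates the MySQL optimizer considers the conditions as a set, so swapping their textual order does not change the chosen plan; a hedged illustration with a hypothetical table:

    EXPLAIN SELECT * FROM orders WHERE customer_id = 42 AND status = 'shipped';
    EXPLAIN SELECT * FROM orders WHERE status = 'shipped' AND customer_id = 42;

Both EXPLAIN outputs should be identical; what matters for performance is whether an index exists on the filtered columns, not the order of the conditions in the WHERE clause.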
effective mysql table/index design for 35 million rows+ table, with 200+ corresponding columns (double), any combination of which may be queried Posted: 14 Aug 2013 07:05 PM PDT I am looking for advice on table/index design for the following situation: I have a large table (stock price history data, InnoDB, 35 million rows and growing) with a compound primary key (assetid (int), date (date)). In addition to the pricing information, I have 200 double values that need to correspond to each record. I initially stored the 200 double columns directly in this table for ease of update and retrieval, and this had been working fine, as the only querying done on this table was by assetid and date (these are religiously included in any query against this table), and the 200 double columns were only read. My database size was around 45 GB.

However, now I have the requirement to be able to query this table by any combination of these 200 columns (named f1, f2, ... f200), for example: I have not historically had to deal with this large an amount of data before, so my first instinct was that indexes were needed on each of these 200 columns, or I would wind up with large table scans, etc. To me this meant that I needed a table for each of the 200 columns with primary key, value, and an index on the values. So I went with that. I filled up and indexed all 200 tables. I left the main table intact with all 200 columns, as regularly it is queried over an assetid and date range and all 200 columns are selected. I figured that leaving those columns in the parent table (unindexed) for read purposes, and then additionally having them indexed in their own tables (for join filtering) would be most performant. I ran explains on the new form of the query. Indeed my desired result was achieved: explain shows me that the rows scanned are much smaller for this query. However, I wound up with some undesirable side effects.

1) My database went from 45 GB to 110 GB. I can no longer keep the db in RAM (I have 256 GB of RAM on the way, however). 2) Nightly inserts of new data now need to be done 200 times instead of once. 3) Maintenance/defrag of the new 200 tables takes 200 times longer than just the 1 table. It cannot be completed in a night. 4) Queries against the f1, etc. tables are not necessarily performant. For example: the above query, while explain shows that it is looking at < 1000 rows, can take 30+ seconds to complete. I assume this is because the indexes are too large to fit in memory.

Since that was a lot of bad news, I looked further and found partitioning. I implemented partitions on the main table, partitioned on date every 3 months. Monthly seemed to make sense to me, but I have read that once you get over 120 partitions or so, performance suffers. Partitioning quarterly will leave me under that for the next 20 years or so. Each partition is a bit under 2 GB. I ran explain partitions and everything seems to be pruning properly, so regardless I feel the partitioning was a good step, at the very least for analyze/optimize/repair purposes. I spent a good deal of time with this article http://ftp.nchu.edu.tw/MySQL/tech-resources/articles/testing-partitions-large-db.html My table currently is partitioned with the primary key still on it. The article mentions that primary keys can make a partitioned table slower, but if you have a machine that can handle it, primary keys on the partitioned table will be faster. Knowing I have a big machine on the way (256 GB RAM), I left the keys on.

So as I see it, here are my options. Option 1: Remove the extra 200 tables and let the query do table scans to find the f1, f2, etc. values.
Non-unique indexes can actually hurt performance on a properly partitioned table. Run an explain before the user runs the query and deny them if the number of rows scanned is over some threshold I define. Save myself the pain of the giant database. Heck, it will all be in memory soon anyway. Sub-question: does it sound like I have chosen an appropriate partition scheme?

Option 2: Partition all the 200 tables using the same 3-month scheme. Enjoy the smaller row scans and allow the users to run larger queries. Now that they are partitioned, at least I can manage them 1 partition at a time for maintenance purposes. Heck, it will all be in memory soon anyway. Develop an efficient way to update them nightly. Sub-question: do you see a reason that I may avoid primary key indexes on these f1, f2, f3, f4... tables, knowing that I always have assetid and date when querying? It seems counterintuitive to me, but I am not used to data sets of this size. That would shrink the database a bunch, I assume.

Option 3: Drop the f1, f2, f3 columns in the master table to reclaim that space. Do 200 joins if I need to read 200 features; maybe it won't be as slow as it sounds.

Option 4: You all have a better way to structure this than I have thought of so far. * NOTE: I will soon be adding another 50-100 of these double values to each item, so I need to design knowing that is coming. Thanks for any and all help.

Update #1 - 3/24/2013: I went with the idea suggested in the comments I got below and created one new table with the following setup: I partitioned the table in 3-month intervals. I blew away the earlier 200 tables so that my database was back down to 45 GB and started filling up this new table. A day and a half later it completed, and my database now sits at a chubby 220 GB! It does allow the possibility of removing these 200 values from the master table, as I can get them from one join, but that would really only give me back 25 GB or so, maybe. I asked it to create a primary key on assetid, date, feature and an index on value, and after 9 hours of chugging it really hadn't made a dent and seemed to freeze up, so I killed that part off. I rebuilt a couple of the partitions but it did not seem to reclaim much/any space. So that solution looks like it probably isn't going to be ideal. Do rows take up significantly more space than columns, I wonder; could that be why this solution took up so much more space?

I came across this article http://www.chrismoos.com/2010/01/31/mysql-partitioning-tables-with-millions-of-rows and it gave me an idea. He says: "At first, I thought about RANGE partitioning by date, and while I am using the date in my queries, it is very common for a query to have a very large date range, and that means it could easily span all partitions." Now I am range partitioning by date as well, but will also be allowing searches by large date range, which will decrease the effectiveness of my partitioning. I will always have a date range when I search; however, I will also always have a list of assetids. Perhaps my solution should be to partition by assetid and date, where I identify typically searched assetid ranges (which I can come up with; there are standard lists: S&P 500, Russell 2000, etc.). This way I would almost never look at the entire data set. Then again, I am primary keyed on assetid and date anyway, so maybe that wouldn't help much. Any more thoughts/comments would be appreciated. Thanks |
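Since the exact DDL of the new table isn't shown, here is a hedged sketch of the row-per-feature layout described in Update #1, with the same 3-month range partitioning carried over; the column types and partition names are assumptions:

    CREATE TABLE asset_feature_values (
        assetid  INT      NOT NULL,
        `date`   DATE     NOT NULL,
        feature  SMALLINT NOT NULL,   -- which of f1..f200 this row holds
        value    DOUBLE   NOT NULL,
        PRIMARY KEY (assetid, `date`, feature)
    ) ENGINE=InnoDB
    PARTITION BY RANGE COLUMNS (`date`) (
        PARTITION p2013q1 VALUES LESS THAN ('2013-04-01'),
        PARTITION p2013q2 VALUES LESS THAN ('2013-07-01'),
        PARTITION p2013q3 VALUES LESS THAN ('2013-10-01'),
        PARTITION pmax    VALUES LESS THAN (MAXVALUE)
    );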
Delete word, its meanings, its meaning's example sentences from DB Posted: 14 Aug 2013 05:05 PM PDT I have three tables as below (simplified for demonstration): where, Edit1: I am using SQLite3 as the database. Edit2: I figured out the following solution, which requires 3 SQL queries in order: I'm still looking for the answer to my question: is it possible to do the whole process in one query? |
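A hedged sketch of how this can collapse to a single statement in SQLite: declare the foreign keys ON DELETE CASCADE (the table and column names below are placeholders for the simplified schema), remembering that SQLite only enforces them when foreign-key support is switched on for the connection:

    PRAGMA foreign_keys = ON;

    CREATE TABLE words    (id INTEGER PRIMARY KEY, word TEXT NOT NULL);
    CREATE TABLE meanings (id INTEGER PRIMARY KEY,
                           word_id INTEGER NOT NULL
                               REFERENCES words(id) ON DELETE CASCADE,
                           meaning TEXT NOT NULL);
    CREATE TABLE examples (id INTEGER PRIMARY KEY,
                           meaning_id INTEGER NOT NULL
                               REFERENCES meanings(id) ON DELETE CASCADE,
                           sentence TEXT NOT NULL);

    -- One statement removes the word, its meanings, and their example sentences:
    DELETE FROM words WHERE word = 'example';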
How can I optimize this query and support multiple SKUs? Posted: 14 Aug 2013 12:05 PM PDT My current query can only select one SKU at a time. I can leave |
How to modify an update in Oracle so it performs faster? Posted: 14 Aug 2013 02:05 PM PDT I have this query: The trouble I am having is that this query takes a long time to run. I don't know whether it is possible to run it in parallel, or whether it would be easier to update via a cursor in a pipelined function. What would you suggest? This is all the information that I believe is relevant. This is the execution plan of the internal select: Table data: This is the script of the historical table: This is the other table: The temporary table is the result of FEE_SCHEDULE_HISTORICAL minus FEE_SCHEDULE |
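Two hedged alternatives that often help with a slow correlated UPDATE; the join and set columns below are assumptions, since the real ones aren't shown:

    -- 1) Rewrite as MERGE so FEE_SCHEDULE_HISTORICAL is read once rather than per row
    --    (assumes one historical row per key):
    MERGE INTO fee_schedule f
    USING fee_schedule_historical h
    ON (f.fee_id = h.fee_id)
    WHEN MATCHED THEN
      UPDATE SET f.fee_amount = h.fee_amount;

    -- 2) Allow the DML to run in parallel (needs Enterprise Edition and spare CPU):
    ALTER SESSION ENABLE PARALLEL DML;
    -- then run the MERGE/UPDATE with a PARALLEL hint, e.g. MERGE /*+ PARALLEL(f 4) */ ...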
Query to find and replace text in all tables and fields of a mysql db Posted: 14 Aug 2013 06:05 PM PDT I need to run a query to find and replace some text in all tables of a mysql database. I found this query, but it only looks for the text in the tbl_name table and just in the column field. I need it to look in all tables and all fields: (everywhere in the database) |
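One hedged approach is to generate an UPDATE ... REPLACE() statement for every text column from information_schema, then review and run the generated statements; the schema name and the search/replace strings are placeholders:

    SELECT CONCAT(
             'UPDATE `', table_schema, '`.`', table_name, '` ',
             'SET `', column_name, '` = REPLACE(`', column_name, '`, ''old text'', ''new text'');'
           ) AS stmt
    FROM   information_schema.columns
    WHERE  table_schema = 'my_database'
    AND    data_type IN ('char', 'varchar', 'tinytext', 'text', 'mediumtext', 'longtext');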