[how to] 2 Outer Joins on Same Table? [closed]
- 2 Outer Joins on Same Table? [closed]
- How to avoid empty rows in SSIS Excel Destination?
- Need to divide a single column into multiple columns in SQL Server 2005
- Column of generic/variable type?
- How can I create a checksum to validate that there are no changes to int/string pairs in a table?
- PostgreSQL backup and restore
- MySQL Cluster Setup for WAN on Windows
- Connecting to MySQL 5.1 using IBM Data Studio
- Is there any query to fetch a single row's data in chunks?
- SQL Server Replication Alternative Software
- Forgotten PostgreSQL Windows password
- High CPU usage on SQL server - Slow queries
- Benchmarking MongoDB vs MySQL
- How to run an exe file that exists on a different server from a SQL Server job
- Insert performance with Geography column
- More CPU cores vs faster disks
- Is it a good practice to create tables dynamically in a site?
- How to add rows/columns to the table in runtime in SSRS 2008
- How to disable Oracle's MAX_ENABLED_ROLES limit
- Avoiding Multiple Queries when Searching for Records Associated with a Set of Records
- SQL Server 2008 execution plan different on two production servers
- In MySQL, does the order of the columns in a WHERE clause affect query performance, and why?
- Effective MySQL table/index design for a 35+ million row table with 200+ corresponding columns (double), any combination of which may be queried
- MySQL Replication using SSL
- Finding out the hosts blocked by the MySQL server
- Delete a word, its meanings, and its meanings' example sentences from the DB
- How can I optimize this query and support multiple SKUs?
- How to modify an update in Oracle so it performs faster?
- Query to find and replace text in all tables and fields of a MySQL db
2 Outer Joins on Same Table? [closed] Posted: 15 Jun 2013 02:32 PM PDT Here is a question that has been boggling me for a few days now; I searched and searched but couldn't find any convincing answer. A simple question: why is it restricted to have two outer joins on the same table in SQL, even when different columns are being used? I can overcome this using a nested subquery or ANSI joins, but why is it restricted in the first place? I'm referring to the error "ORA-01417: a table may be outer joined to at most one other table". Thanks, Shubham
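For illustration, a minimal sketch of the restriction and the ANSI workaround; the table and column names here are made up, not taken from the original post:

```sql
-- Old-style Oracle (+) syntax: "assignments" is outer joined to BOTH
-- employees and departments, which raises ORA-01417.
SELECT e.name, d.dept_name, a.role
FROM   employees e, departments d, assignments a
WHERE  e.emp_id  = a.emp_id  (+)
AND    d.dept_id = a.dept_id (+);

-- ANSI join syntax expresses the same intent without the restriction:
SELECT e.name, d.dept_name, a.role
FROM   employees e
CROSS JOIN departments d
LEFT JOIN assignments a
       ON a.emp_id  = e.emp_id
      AND a.dept_id = d.dept_id;
```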
How to avoid empty rows in SSIS Excel Destination? Posted: 15 Jun 2013 03:24 PM PDT Does anyone have a way to avoid empty rows when using SSIS to export to Excel? Here's a simple example of one data flow task: an OLE DB Source and a Data Conversion (to handle the annoying UNICODE / NON-UNICODE issue). The end result is either of the two below, depending on the value of "FirstRowHasColumnName" in the Excel Connection Manager. Note the blank rows.
Need to divide a single column into multiple columns in SQL Server 2005 Posted: 15 Jun 2013 11:41 AM PDT The table stores its data in a single column, and I want to display it split across multiple columns.
Column of generic/variable type? Posted: 15 Jun 2013 10:03 AM PDT In SQL, I would like to create a table that can contain generic data types; the type can change from row to row. The solution I could come up with is: the 'type' column stores the type of the data contained in 'value'. Is there a better solution?
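A minimal sketch of the text-plus-type approach described above (table and column names are assumptions; the CAST to SIGNED is MySQL syntax, other dialects would cast to INT):

```sql
CREATE TABLE generic_values (
    id    INT          NOT NULL PRIMARY KEY,
    type  VARCHAR(20)  NOT NULL,   -- e.g. 'int', 'text', 'date'
    value VARCHAR(255) NOT NULL    -- everything stored as text
);

INSERT INTO generic_values (id, type, value) VALUES (1, 'int',  '42');
INSERT INTO generic_values (id, type, value) VALUES (2, 'date', '2013-06-15');

-- Reading the data back requires casting per row:
SELECT CAST(value AS SIGNED) AS int_value
FROM   generic_values
WHERE  type = 'int';
```

The usual trade-off: simple to store, but there is no type safety, no useful indexing on the typed value, and every consumer has to know how to cast.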
How can I create a checksum to validate that there are no changes to int/string pairs in a table? Posted: 15 Jun 2013 02:11 PM PDT The business case: my task is to give the administrator an easy way to be sure that there are no differences between the paper document and the digital document. The document is stored in three SQL tables. There are other columns in these tables that can be updated freely after the boss signs the document, without the boss needing to sign it again. My idea is to compute some kind of SHA hash or checksum over the combination of these three tables, which stays the same as long as the document is unchanged. I started thinking that for int/float pairs I could compute sum(int * float). How can I shrink extra-large numbers so they are easily readable and comparable by humans? Is there another way to ensure that there were no changes to my protected columns over a period of time? I am using SQL Server 2005/2008. My reporting engine is mostly SQL Server Reporting Services, but I prefer to do this logic at the database level, not the reporting level. I do not want to lock editing of the document, because sometimes the boss pushes users to go back and correct the document before signing.
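A minimal sketch of one way to do this in T-SQL, assuming a hypothetical doc_lines table with protected columns qty and descr (not the poster's real schema): CHECKSUM_AGG over BINARY_CHECKSUM yields one small integer per document that can be stored at signing time and compared later.

```sql
SELECT doc_id,
       CHECKSUM_AGG(BINARY_CHECKSUM(qty, descr)) AS doc_checksum
FROM   doc_lines
GROUP  BY doc_id;
-- Caveat: BINARY_CHECKSUM can collide; HASHBYTES('SHA1', ...) over a
-- concatenation of the protected columns is stronger, but harder to compare
-- by eye.
```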
PostgreSQL backup and restore Posted: 15 Jun 2013 08:53 AM PDT I'm trying to back up the database I made in PostgreSQL, but it doesn't seem to work. Can someone help me, please? :)
MySQL Cluster Setup for WAN on Windows Posted: 15 Jun 2013 03:15 PM PDT I would like to know whether replication or clustering for MySQL is possible over a WAN environment. For example
Can they replicate each other's data? Are there any guides that I can follow to set up clustering over a WAN?
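As a point of reference, standard MySQL replication (as opposed to NDB Cluster, whose node configuration lives in config.ini and is generally not recommended across a WAN) is driven by a handful of SQL statements. Host names and credentials below are placeholders, and server-id plus log-bin must already be set in each my.cnf:

```sql
-- On the master:
CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On the slave, point at the master across the WAN:
CHANGE MASTER TO
    MASTER_HOST = 'master.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'repl_password',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS  = 4;
START SLAVE;
SHOW SLAVE STATUS\G
```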
Connecting to MySQL 5.1 using IBM Data Studio Posted: 15 Jun 2013 01:28 PM PDT I am trying to connect to a MySQL database (version 5.1) hosted on a web server using IBM Data Studio 4.1, and I am getting an error. I downloaded the connector file from "http://dev.mysql.com/downloads/connector/j/", but IBM Data Studio doesn't recognize this .zip file as the right .jar file. I was able to connect using SQL Developer, but it isn't as friendly as I wished. So, can someone please help me with the right .jar file and instructions for updating Data Studio with the jar so that I can connect successfully?
Is there any query to fetch a single row's data in chunks? Posted: 15 Jun 2013 07:45 AM PDT The data is stored in a single row. Is there any query that can help me retrieve this row in parts or chunks?
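A minimal sketch, assuming a long text column named payload in a table t (names are hypothetical): SUBSTRING pulls the value back in fixed-size chunks and works the same way in MySQL and SQL Server.

```sql
SELECT SUBSTRING(payload, 1,    4000) AS chunk_1,
       SUBSTRING(payload, 4001, 4000) AS chunk_2,
       SUBSTRING(payload, 8001, 4000) AS chunk_3
FROM   t
WHERE  id = 1;
```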
SQL Server Replication Alternative Software Posted: 15 Jun 2013 08:19 PM PDT We have used SQL Server replication for a long time and have had some issues with it: sometimes we needed to reinitialize subscriptions to fix problems, and other times we needed to destroy the whole replication structure and rebuild it again. Our main concern is that once we have a replication issue, almost every time the easy solution is to reinitialize the replication, which is not acceptable for our business requirements. Now we are preparing to release a big new project and are looking for third-party software to handle SQL Server replication. Our setup includes servers distributed across branches (in different countries) plus mobile clients (laptops with local SQL Server databases), and we need to replicate data between all of these, with the ability to filter articles. Would somebody please suggest some alternative solutions for us?
Forgotten PostgreSQL Windows password Posted: 15 Jun 2013 06:40 AM PDT This morning I've been trying to connect to the PostgreSQL database on my Windows 7 Professional desktop. The default user is 'postgres', but sure enough I forgot what password I used when I originally installed it. I googled and found a post about resetting the password: http://www.postgresql.org/message-id/6BCB9D8A16AC4241919521715F4D8BCE476A42@algol.sollentuna.se I followed it, but the end result is a bit different from what is described in the post. I used
to reset the password for my database, but instead of a success message I am getting a
system error. Please guide me if I have missed something or am approaching this wrong. Any help will be really appreciated. Thanks!
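For reference, the commonly cited recovery path (not necessarily the procedure from the linked post; paths and service names vary by installation):

```sql
-- 1) In pg_hba.conf, temporarily change the METHOD for local connections to
--    "trust" and restart the PostgreSQL service.
-- 2) Connect with psql (no password will be requested) and run:
ALTER USER postgres WITH PASSWORD 'new_password';
-- 3) Set pg_hba.conf back to md5 (or its previous method) and restart again.
```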
High CPU usage on SQL server - Slow queries Posted: 15 Jun 2013 03:32 PM PDT Our MS SQL Server is using about 95% of the CPU. After a server (hardware) restart or a SQL Service restart, the usage is 0% and slowly increases over the course of 1-3 days, depending on how much it is used. When it's over 80%, every query is extremely slow. Our website deals with a lot of big queries, so some of them take 45-60 seconds; after a restart (CPU usage less than 80%), the same queries take 11-20 seconds. How can I fix this? I've read online that affinity masks can adjust the CPU usage, but the affinity settings are disabled and I cannot change them. Is this because I only have one processor? There are plenty of tricks to apply to the queries themselves, but our websites and services are quite big and there is simply too much to change; most of them are already pretty well optimized. I cannot keep restarting the SQL Service, even though it only takes 2 seconds, because we have an alarm service that lets people call in and record a message; a selected group is then called and hears the recorded message. This system is used by hundreds of search-and-rescue teams, and if the SQL Service restarts during an alarm, the alarm is terminated and the person who called it in will not be notified. I have searched all over the place but found nothing except material about "affinity masks", which I cannot change. There must be a way to clear out the CPU cache without terminating current queries... right?
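Before restarting anything, it usually helps to identify which statements are burning the CPU. This diagnostic sketch lists the most CPU-hungry cached plans:

```sql
SELECT TOP 20
       qs.total_worker_time / 1000 AS total_cpu_ms,
       qs.execution_count,
       qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset END
                   - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM   sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER  BY qs.total_worker_time DESC;
```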
Benchmarking MongoDB vs MySQL Posted: 15 Jun 2013 10:16 AM PDT I know that MongoDB will use available free memory for caching and swap to disk as needed to yield memory to other applications on the same server. For the best performance you'll want enough RAM to keep your indices and frequently used data (the "working set") in memory. So let's assume that I have 1 GB of data in MySQL which took 5 GB of disk space when restored into MongoDB. I currently have 2 GB of RAM, and when we run analytic queries, MongoDB takes 6 times longer than the execution time in MySQL. My "working set" is around 500 MB and there is 1 GB of free RAM on my machine, yet query performance is much worse than in MySQL, so I am a bit confused about how to calculate how much RAM I need to run analytic queries in MongoDB so that query execution time beats MySQL.
How to run an exe file that exists on a different server from a SQL Server job Posted: 15 Jun 2013 07:45 AM PDT We have an exe file on a server, say Server1, which should be run from a SQL job that exists on a different server, e.g. Server2. How can it be done? I know how to do it if it's a local file. Thanks for your help.
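A sketch of one approach, assuming the exe is reachable over a UNC share and the executing account on Server2 has rights on that share (server and path names are placeholders):

```sql
EXEC xp_cmdshell '\\Server1\tools\MyTask.exe';
-- Alternatively, a SQL Agent job step of type "Operating system (CmdExec)"
-- can run the same UNC command without enabling xp_cmdshell.
```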
Insert performance with Geography column Posted: 15 Jun 2013 10:17 AM PDT I've been tasked with inserting data into a SQL Server table with a geography column. I've found that my insert times (for the same 1.5 million rows) keep increasing: I started with no geography column and it took 6 minutes, then I added a geography column and it took 20 minutes (again, the same data), then I added a spatial index and it took 1 hour and 45 minutes. I'm new to anything spatial, but this seems like really bad performance. Is there anything I can do to speed this up, or is this just the performance I'm going to see when dealing with SQL Server spatial data?
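One common mitigation, sketched with placeholder object names: disable the spatial index for the bulk load and rebuild it once afterwards, rather than maintaining it row by row across 1.5 million inserts.

```sql
ALTER INDEX SIX_MyTable_GeoCol ON dbo.MyTable DISABLE;
-- ... perform the bulk insert here ...
ALTER INDEX SIX_MyTable_GeoCol ON dbo.MyTable REBUILD;
```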
More CPU cores vs faster disks Posted: 15 Jun 2013 08:47 AM PDT I'm part of a small company, so as usual I am covering a number of different roles. The latest of these is procuring a dedicated SQL Server box for our .NET web app. We've been quoted on a dual Xeon E5-2620 (six-core) 2.00 GHz CPU configuration (12 cores in total) with 32 GB of RAM. This has left us with a limited budget for the disk array, which would essentially consist of two 2.5" SAS 300 GB drives (15k RPM) in a RAID 1 configuration. I know that the disk setup is sub-optimal for SQL Server, and I'd really like to push for RAID 10 so we can put the database, log files and tempdb on their own drives. In order to make this fit our budget, should I consider reducing the number of CPU cores, or would I get better bang for the buck by keeping the cores and using fewer drives, perhaps four in a dual RAID 1 setup? Here are some additional stats
Is it a good practice to create tables dynamically in a site? Posted: 15 Jun 2013 03:29 PM PDT A friend asked me to build a site with a few "static" and "dynamic" tables. In fact he wants a few tables that can't be deleted, and some "dynamic" tables that can be created directly by site users according to their needs, i.e. if a user needs some "optional" that doesn't exist in the current db, he creates a new table for his specific need. I think this is not a good way to do it; I think it is better to have a list of all possible optionals in a table and then flag them for each user. This is my idea. My friend says it would be better to dynamically create tables and connect them to the user to dynamically define new flowers. Wouldn't it be better to have every possible flower as a field of the
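A minimal sketch of the "one lookup table plus per-user flags" design argued for above (all names are made up for illustration):

```sql
CREATE TABLE optionals (
    optional_id INT PRIMARY KEY,
    name        VARCHAR(100) NOT NULL      -- e.g. every possible "flower"
);

CREATE TABLE user_optionals (
    user_id     INT NOT NULL,
    optional_id INT NOT NULL,
    PRIMARY KEY (user_id, optional_id),
    FOREIGN KEY (optional_id) REFERENCES optionals (optional_id)
);

-- A user "creates" a new optional by inserting a row, not a new table:
INSERT INTO optionals (optional_id, name) VALUES (42, 'tulip');
INSERT INTO user_optionals (user_id, optional_id) VALUES (7, 42);
```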
How to add rows/columns to the table in runtime in SSRS 2008 Posted: 15 Jun 2013 08:29 AM PDT Usually we design a table to have x rows and y columns in a report, but how can we create a report that adds rows and columns dynamically at run time based on the result of the source query? For example, I want to list StudentId, StudentName and every course each student has enrolled in. As the number of courses differs from one person to the other, the rows and related columns for courses should be added at run time based on the query result. How can it be done? Thanks for your help in advance.
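A sketch of the approach usually taken in SSRS: return one row per student/course from the dataset query and let a Matrix (Tablix) with a column group on CourseName generate the columns at run time. Table and column names below are assumptions:

```sql
SELECT s.StudentId,
       s.StudentName,
       c.CourseName
FROM   Students    AS s
JOIN   Enrollments AS e ON e.StudentId = s.StudentId
JOIN   Courses     AS c ON c.CourseId  = e.CourseId
ORDER  BY s.StudentId, c.CourseName;
```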
How to disable Oracle's MAX_ENABLED_ROLES limit Posted: 15 Jun 2013 01:28 PM PDT How can I disable Oracle's MAX_ENABLED_ROLES limit, or expand the value of the limit? [Oracle 10g (win32)]
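For what it's worth, MAX_ENABLED_ROLES is a static initialization parameter (and is deprecated in 10g), so it cannot be switched off at run time; raising it is done in the spfile and needs an instance restart. A hedged sketch:

```sql
ALTER SYSTEM SET max_enabled_roles = 100 SCOPE = SPFILE;
-- then restart the instance (SQL*Plus):
SHUTDOWN IMMEDIATE;
STARTUP;
```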
Avoiding Multiple Queries when Searching for Records Associated with a Set of Records Posted: 15 Jun 2013 08:29 PM PDT So, I am sure I have done something really stupid while designing this, and I'm open to schema changes if they'll really help me out. On to the problem: I have a custom shopping cart system (backed by a MySQL database) that includes a products table and a price_rules table used for computing discounts and applying promo-code discounts. Some price rules don't have promo codes attached to them; some are simply "10% off product X from March 1st through April 1st" or similar. Because a single price rule can apply to many individual products, I also have a join table called price_rule_product. When showing a set of products (for example, on the main shop page or when listing all products in a category), I'm currently running a separate query for each product to look for price rules that apply to that product. Here's what one of those queries looks like: Oh SQL gods, I pray you have some suggestions/solutions for this. It is causing some significant performance issues, and I'm just not experienced enough with SQL to figure out where to go from here. EDIT: Here's the output of EXPLAIN SELECT, both with and without DISTINCT: WITH DISTINCT WITHOUT DISTINCT
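A sketch of collapsing the per-product lookups into a single query; the column names are guesses based on the table names mentioned in the post:

```sql
SELECT prp.product_id,
       pr.*
FROM   price_rules        AS pr
JOIN   price_rule_product AS prp ON prp.price_rule_id = pr.id
WHERE  prp.product_id IN (1, 2, 3, 4, 5)          -- all products on the page
  AND  (pr.start_date IS NULL OR pr.start_date <= CURDATE())
  AND  (pr.end_date   IS NULL OR pr.end_date   >= CURDATE());
-- The application then groups the rules by product_id instead of issuing one
-- query per product.
```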
SQL Server 2008 execution plan different on two production servers Posted: 15 Jun 2013 05:28 PM PDT We have two servers running SQL Server 2008 R2 with identical databases, on which we implemented a data warehousing solution. The cube-processing operation was taking a VERY long time on one of the servers, which led us to dig into the queries generated by SQL Server. We found the query that was taking a long time and saw that it is generated from a many-to-many relationship in the data warehouse, where six joins of the same table are executed; this table contains about 4M records. We looked at the execution plans of this query on both servers. On the first server the execution plan uses parallelism and executes in 90 seconds, whereas the second uses only sequential execution, which results in a 14 HOUR execution time. The data on the two servers is different; the server that takes more time has more data (obviously). We tried updating statistics, rebuilding indexes, recomputing execution plans, and copying statistics from one server to the other, but no result! Hope you can help us with this problem, because we're running on a production server and clients are waiting to see their reports from the data warehouse. Thanks in advance
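One thing worth comparing before anything else is the parallelism configuration of the two instances, since it commonly explains a serial-only plan on one server; a quick diagnostic sketch:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism';
EXEC sp_configure 'cost threshold for parallelism';
```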
In MySQL, does the order of the columns in a WHERE clause affect query performance, and why? Posted: 15 Jun 2013 09:29 AM PDT I have a query that doesn't use any indexes.
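A sketch for testing this directly (the table and columns are hypothetical): MySQL normalizes the WHERE clause during optimization, so both orderings should produce identical EXPLAIN output and identical performance.

```sql
EXPLAIN SELECT * FROM orders WHERE status = 'open' AND customer_id = 42;
EXPLAIN SELECT * FROM orders WHERE customer_id = 42 AND status = 'open';
```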
Effective MySQL table/index design for a 35+ million row table with 200+ corresponding columns (double), any combination of which may be queried Posted: 15 Jun 2013 11:29 AM PDT I am looking for advice on table/index design for the following situation: I have a large table (stock price history data, InnoDB, 35 million rows and growing) with a compound primary key (assetid (int), date (date)). In addition to the pricing information, I have 200 double values that need to correspond to each record. I initially stored the 200 double columns directly in this table for ease of update and retrieval, and this had been working fine, as the only querying done on this table was by assetid and date (these are religiously included in any query against this table), and the 200 double columns were only read. My database size was around 45 GB.

However, I now have the requirement to be able to query this table by any combination of these 200 columns (named f1, f2, ... f200), for example: I have not historically had to deal with this large an amount of data, so my first instinct was that indexes were needed on each of these 200 columns, or I would wind up with large table scans, etc. To me this meant that I needed a table for each of the 200 columns, with primary key, value, and an index on the values. So I went with that. I filled up and indexed all 200 tables. I left the main table intact with all 200 columns, as it is regularly queried over an assetid and date range with all 200 columns selected. I figured that leaving those columns in the parent table (unindexed) for read purposes, and additionally having them indexed in their own tables (for join filtering), would be most performant. I ran explains on the new form of the query, and indeed my desired result was achieved: explain shows that the rows scanned are much smaller for this query. However, I wound up with some undesirable side effects. 1) My database went from 45 GB to 110 GB; I can no longer keep the db in RAM (I have 256 GB of RAM on the way, however). 2) Nightly inserts of new data now need to be done 200 times instead of once. 3) Maintenance/defrag of the new 200 tables takes 200 times longer than for just the one table; it cannot be completed in a night. 4) Queries against the f1, etc. tables are not necessarily performant. For example, the above query, while explain shows that it is looking at fewer than 1000 rows, can take 30+ seconds to complete. I assume this is because the indexes are too large to fit in memory.

Since that was a lot of bad news, I looked further and found partitioning. I implemented partitions on the main table, partitioned on date every 3 months. Monthly seemed to make sense to me, but I have read that once you get over 120 partitions or so, performance suffers; partitioning quarterly will leave me under that for the next 20 years or so. Each partition is a bit under 2 GB. I ran explain partitions and everything seems to be pruning properly, so regardless, I feel the partitioning was a good step, at the very least for analyze/optimize/repair purposes. I spent a good deal of time with this article: http://ftp.nchu.edu.tw/MySQL/tech-resources/articles/testing-partitions-large-db.html My table is currently partitioned with the primary key still on it. The article mentions that primary keys can make a partitioned table slower, but if you have a machine that can handle it, primary keys on the partitioned table will be faster. Knowing I have a big machine on the way (256 GB RAM), I left the keys on.

So as I see it, here are my options.

Option 1: Remove the extra 200 tables and let the query do table scans to find the f1, f2, etc. values. Non-unique indexes can actually hurt performance on a properly partitioned table. Run an explain before the user runs the query and deny it if the number of rows scanned is over some threshold I define. Save myself the pain of the giant database. Heck, it will all be in memory soon anyway. Sub-question: does it sound like I have chosen an appropriate partition scheme?

Option 2: Partition all the 200 tables using the same 3-month scheme. Enjoy the smaller row scans and allow the users to run larger queries. Now that they are partitioned, at least I can manage them one partition at a time for maintenance purposes. Heck, it will all be in memory soon anyway. Develop an efficient way to update them nightly. Sub-question: do you see a reason to avoid primary key indexes on these f1, f2, f3, f4, ... tables, knowing that I always have assetid and date when querying? It seems counterintuitive to me, but I am not used to data sets of this size; that would shrink the database a bunch, I assume.

Option 3: Drop the f1, f2, f3 columns in the master table to reclaim that space. Do 200 joins if I need to read 200 features; maybe it won't be as slow as it sounds.

Option 4: You all have a better way to structure this than I have thought of so far.

*NOTE: I will soon be adding another 50-100 of these double values to each item, so I need to design knowing that is coming. Thanks for any and all help.

Update #1 - 3/24/2013: I went with the idea suggested in the comments below and created one new table with the following setup: I partitioned the table in 3-month intervals. I blew away the earlier 200 tables so that my database was back down to 45 GB and started filling up this new table. A day and a half later it completed, and my database now sits at a chubby 220 GB! It does allow the possibility of removing these 200 values from the master table, as I can get them from one join, but that would really only give me back 25 GB or so. I asked it to create a primary key on (assetid, date, feature) and an index on value, and after 9 hours of chugging it really hadn't made a dent and seemed to freeze up, so I killed that part off. I rebuilt a couple of the partitions, but it did not seem to reclaim much/any space. So that solution looks like it probably isn't going to be ideal. Do rows take up significantly more space than columns, I wonder? Could that be why this solution took up so much more space? I came across this article: http://www.chrismoos.com/2010/01/31/mysql-partitioning-tables-with-millions-of-rows It gave me an idea, where he says "At first, I thought about RANGE partitioning by date, and while I am using the date in my queries, it is very common for a query to have a very large date range, and that means it could easily span all partitions." I am now range partitioning by date as well, but I will also be allowing searches by large date ranges, which will decrease the effectiveness of my partitioning. I will always have a date range when I search, but I will also always have a list of assetids. Perhaps my solution should be to partition by assetid and date, where I identify typically searched assetid ranges (which I can come up with; there are standard lists: S&P 500, Russell 2000, etc.). This way I would almost never look at the entire data set. Then again, I am primary keyed on assetid and date anyway, so maybe that wouldn't help much. Any more thoughts/comments would be appreciated. Thanks
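The exact schema from the update was lost in this digest, so the following is only a sketch of what the tall "one row per (assetid, date, feature)" table with 3-month RANGE partitioning might look like, with assumed names and types:

```sql
CREATE TABLE asset_features (
    assetid  INT      NOT NULL,
    trade_dt DATE     NOT NULL,
    feature  SMALLINT NOT NULL,        -- 1..200, identifies f1..f200
    value    DOUBLE   NOT NULL,
    PRIMARY KEY (assetid, trade_dt, feature)
) ENGINE = InnoDB
PARTITION BY RANGE (TO_DAYS(trade_dt)) (
    PARTITION p2013q1 VALUES LESS THAN (TO_DAYS('2013-04-01')),
    PARTITION p2013q2 VALUES LESS THAN (TO_DAYS('2013-07-01')),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
);
```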
MySQL Replication using SSL Posted: 15 Jun 2013 07:29 PM PDT I am in the process of replicating my database so I can have a master-slave configuration. One of the issues I have is with security: I am generating my server/client keys and certificates using openssl, and I also generate my own CA key and certificate to self-sign them. I understand the issues with self-signed certificates on a public website, but do you think this will be as serious a problem when used for replication?
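For context, the SQL side of SSL replication looks roughly like the sketch below; file paths and credentials are placeholders, and the ssl-ca/ssl-cert/ssl-key options must also be present in each server's my.cnf:

```sql
-- On the master, force the replication account to use SSL:
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'repl_password' REQUIRE SSL;

-- On the slave, point CHANGE MASTER TO at the self-signed CA:
CHANGE MASTER TO
    MASTER_HOST = 'master.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'repl_password',
    MASTER_SSL = 1,
    MASTER_SSL_CA   = '/etc/mysql/ssl/ca-cert.pem',
    MASTER_SSL_CERT = '/etc/mysql/ssl/client-cert.pem',
    MASTER_SSL_KEY  = '/etc/mysql/ssl/client-key.pem';
START SLAVE;
```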
Finding out the hosts blocked by the MySQL server Posted: 15 Jun 2013 06:29 PM PDT Can someone tell me how to list the hosts that are blocked by the MySQL server because they crossed the max_connect_errors limit? Is there any table in which the MySQL server keeps this data? I am using mysql-server-5.1.63.
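As far as I know, 5.1 has no table that exposes the blocked-host cache (the performance_schema.host_cache table only appeared in MySQL 5.6); the cache can only be cleared, which unblocks every host at once:

```sql
FLUSH HOSTS;
-- On 5.6+ the equivalent diagnostic would be:
-- SELECT * FROM performance_schema.host_cache WHERE SUM_CONNECT_ERRORS > 0;
```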
Delete a word, its meanings, and its meanings' example sentences from the DB Posted: 15 Jun 2013 02:29 PM PDT I have three tables as below (simplified for demonstration). Edit 1: I am using SQLite3 as the database. Edit 2: I figured out the following solution, which requires three SQL queries run in order. I'm still looking for the answer to my question: can the whole process be done in one query?
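A sketch of doing it declaratively instead of with three separate DELETEs; the table and column names are assumptions, and SQLite only enforces this when PRAGMA foreign_keys = ON and the tables were created with the cascading constraints:

```sql
PRAGMA foreign_keys = ON;

CREATE TABLE words    (id INTEGER PRIMARY KEY, word TEXT NOT NULL);
CREATE TABLE meanings (id INTEGER PRIMARY KEY,
                       word_id INTEGER NOT NULL
                           REFERENCES words(id) ON DELETE CASCADE,
                       meaning TEXT NOT NULL);
CREATE TABLE examples (id INTEGER PRIMARY KEY,
                       meaning_id INTEGER NOT NULL
                           REFERENCES meanings(id) ON DELETE CASCADE,
                       sentence TEXT NOT NULL);

-- With the cascades in place, one statement removes the word, its meanings,
-- and its meanings' example sentences:
DELETE FROM words WHERE word = 'serendipity';
```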
How can I optimize this query and support multiple SKUs? Posted: 15 Jun 2013 10:28 AM PDT My current query can only select one SKU at a time.
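Since the original query was lost in this digest, only a generic sketch is possible: an IN list (or a join against a temporary table of SKUs) lets one query cover many SKUs at once. Table and column names are placeholders:

```sql
SELECT *
FROM   products
WHERE  sku IN ('SKU-001', 'SKU-002', 'SKU-003');
```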
How to modify an update in Oracle so it performs faster? Posted: 15 Jun 2013 12:28 PM PDT I have this query: The trouble that I am having is that it takes a long time to run. I don't know whether it is possible to run it in parallel, or whether it would be easier to update through a cursor in a pipelined function. What would you suggest? This is all the information that I believe is relevant. This is the execution plan of the internal select: Table data: This is the script of the historical table: This is the other table: The temporary table is the result of FEE_SCHEDULE_HISTORICAL minus FEE_SCHEDULE.
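The original statement was lost in this digest, so the following is only a sketch of two common approaches with placeholder column names; only the table names FEE_SCHEDULE and FEE_SCHEDULE_HISTORICAL come from the post:

```sql
-- 1) Parallel DML (must be enabled per session before the UPDATE):
ALTER SESSION ENABLE PARALLEL DML;
UPDATE /*+ PARALLEL(fs, 4) */ fee_schedule fs
SET    fs.rate = (SELECT h.rate
                  FROM   fee_schedule_historical h
                  WHERE  h.fee_id = fs.fee_id)
WHERE  EXISTS (SELECT 1
               FROM   fee_schedule_historical h
               WHERE  h.fee_id = fs.fee_id);
COMMIT;

-- 2) Often faster still: rewrite the correlated UPDATE as a MERGE so the
--    source table is scanned once:
MERGE INTO fee_schedule fs
USING fee_schedule_historical h
ON    (fs.fee_id = h.fee_id)
WHEN MATCHED THEN UPDATE SET fs.rate = h.rate;
```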
Query to find and replace text in all tables and fields of a MySQL db Posted: 15 Jun 2013 04:29 PM PDT I need to run a query to find and replace some text in all tables of a MySQL database. I found a query, but it only looks for the text in the tbl_name table and only in the column field. I need it to look in all tables and all fields, everywhere in the database.
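One commonly used workaround, sketched with placeholder values: use information_schema to generate one UPDATE ... REPLACE() statement per text column, then review and run the generated statements.

```sql
SELECT CONCAT('UPDATE `', table_name, '` SET `', column_name, '` = REPLACE(`',
              column_name, '`, ''old text'', ''new text'');') AS stmt
FROM   information_schema.columns
WHERE  table_schema = 'my_database'
  AND  data_type IN ('char', 'varchar', 'text', 'mediumtext', 'longtext');
```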