[how to] How to avoid large joins when determining which rows a user is authorized for?
- How to avoid large joins when determining which rows a user is authorized for?
- Splitting SQL query with many joins into smaller ones helps?
- Retrieving 10 records from multiple tables
- Wrong in select statement [closed]
- MySQL Table Grant All users
- Postgres connection access denied on IPv6 address
- Is a geographically distributed application with SQL Server replication a good idea?
- Reduce Clustered Index seek cost SQL Server 2008 R2
- How to add custom sequence number in table which is going to be populated automatically
- Exporting database Oracle 7.3 on Windows NT 3.51
- MySql is not optimizing the query properly
- Best practice: Unions or a derived table?
- Migration of SQL Server to Oracle Database
- Unable to drop the table since its showing waiting for meta data lock
- Fulltext stoplist replication
- SQL server ODBC connection for Active Directory user who doesn't log in to the windows
- Process attempted to unlock a resource it does not own
- Setting DATEFIRST to 6
- mysql second slave not syncing while first slave works fine
- How does MySQL determine the 'index first key' and 'index last key' with indexes?
- compare the same table
- Show processlist / Max concurrent connections seem to max out at 131
- Deadlock free MERGE-alike in MySQL?
- sql server database sharding - what to do with common data / non sharded data
- set the clock on mysql [closed]
- Does pt-table-checksum point out rows that are different between Master and Slave?
- Database Design - Creating Multiple databases to avoid the headache of limit on table size
- Is it OK to blindly add missing indexes?
How to avoid large joins when determining which rows a user is authorized for? Posted: 01 Jun 2013 09:08 PM PDT Typical web app situation: you have logged-in users querying for resources exposed through an API. As a hypothetical example, company administrators querying for their employees' time sheets: say I wanted to get all the time sheets that I'm authorized for / have access to as an admin of that company. Without any optimizations, in the above I'd have to join admin with company with employees with time_sheets to determine all the time_sheets I can look at. You can see how, for a wider schema, the joins could add up very fast. I realize that in modern databases joins are ludicrously fast, but I'm guessing this is still a case you don't want to completely ignore. Is there a good solution to this? One could potentially have a separate admin-to-time_sheets many-to-many join table just to speed up those queries, but then maintaining it might be tricky. Any suggestions?
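A rough sketch of the two approaches described above, with invented table and column names since the question does not include a schema:

    -- Chained-join approach: derive authorization at query time
    SELECT ts.*
    FROM admins a
    JOIN companies   c  ON c.id = a.company_id
    JOIN employees   e  ON e.company_id = c.id
    JOIN time_sheets ts ON ts.employee_id = e.id
    WHERE a.user_id = 42;

    -- Denormalized alternative: a maintained admin-to-time_sheet join table
    CREATE TABLE admin_time_sheets (
        admin_id      INT NOT NULL,
        time_sheet_id INT NOT NULL,
        PRIMARY KEY (admin_id, time_sheet_id)
    );

    SELECT ts.*
    FROM admin_time_sheets ats
    JOIN time_sheets ts ON ts.id = ats.time_sheet_id
    WHERE ats.admin_id = 42;

The second shape trades a cheap two-table read for the maintenance burden mentioned in the question: the join table has to be kept in sync whenever admins, employees, or time sheets change hands.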
Splitting SQL query with many joins into smaller ones helps? Posted: 01 Jun 2013 05:23 PM PDT We need to do some reporting every night on our SQL Server 2008 R2. Calculating the reports takes several hours, so in order to shorten the time we precalculate a table. This table is created by joining 12 quite big tables (tens of millions of rows each). Until a few days ago, calculating this aggregation table took about 4 hours. Our DBA then split the big join into 3 smaller joins (each joining 4 tables). Each intermediate result is saved into a temporary table, which is used in the next join. The result of the DBA's enhancement is that the aggregation table is now calculated in 15 minutes. I wondered how that is possible. The DBA told me it is because the amount of data the server must process is smaller; in other words, that in the original big join the server has to work with more data than in the smaller joins combined. However, I would presume the optimizer would take care of doing this efficiently with the original big join, splitting the joins on its own and passing only the columns needed on to the next joins. The other thing he did was create an index on one of the temporary tables. Once again, I would think the optimizer would create the appropriate hash tables if needed and altogether better optimize the computation. I talked about this with our DBA, but he was himself uncertain about what caused the improvement in processing time. He just mentioned that he would not blame the server, as computing such big data can be overwhelming, and that it is possible the optimizer has a hard time predicting the best execution plan. This I understand, but I would like a more definite answer as to exactly why. So, the questions are:
1. What could possibly cause the big improvement?
2. Is it a standard procedure to split big joins into smaller ones?
3. Is the amount of data the server has to process really smaller with multiple smaller joins?
Here is the original query: The new split joins after the DBA's great work:
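A hedged sketch of the pattern the DBA applied. The actual queries did not survive the digest, so the table and column names below are invented:

    -- Stage 1: join the first four tables into a temp table
    SELECT a.key_id, a.col1, b.col2, c.col3, d.col4
    INTO   #stage1
    FROM   dbo.TableA a
    JOIN   dbo.TableB b ON b.key_id = a.key_id
    JOIN   dbo.TableC c ON c.key_id = a.key_id
    JOIN   dbo.TableD d ON d.key_id = a.key_id;

    -- Index the intermediate result so the next stage can seek on the join key
    CREATE CLUSTERED INDEX ix_stage1 ON #stage1 (key_id);

    -- Stage 2: join the intermediate result to the next group of tables
    SELECT s.key_id, s.col1, s.col2, s.col3, s.col4, e.col5, f.col6
    INTO   #stage2
    FROM   #stage1 s
    JOIN   dbo.TableE e ON e.key_id = s.key_id
    JOIN   dbo.TableF f ON f.key_id = s.key_id;
    -- ...stage 3 follows the same shape and produces the final aggregation table.

Materializing each stage gives the optimizer actual row counts (and, with the added index, a useful access path) for the following stage, which is one common explanation for this kind of speedup.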
Retrieving 10 records from multiple tables Posted: 01 Jun 2013 03:38 AM PDT I have two tables where I store post data. What I want to do is get 10 records from those two tables. Table 1 holds all posts; Table 2 holds the posts the user has read. There are some scenarios:
How can I get this done in one query?
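The scenarios themselves are not included in the digest, so only a hedged illustration is possible. Assuming, for example, that the goal is the 10 newest posts the user has not yet read (table and column names invented):

    SELECT p.*
    FROM   posts p
    LEFT JOIN posts_read r
           ON r.post_id = p.id
          AND r.user_id = 123          -- the user in question
    WHERE  r.post_id IS NULL           -- keep only posts with no "read" row for this user
    ORDER BY p.created_at DESC
    LIMIT  10;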
Wrong in select statement [closed] Posted: 01 Jun 2013 02:07 AM PDT When I test this with SQL Navigator, it works fine: But with SQL Developer, I get an error: Why is that?
MySQL Table Grant All users Posted: 01 Jun 2013 04:50 AM PDT Is it possible to assign a grant on a table for all users, or a default permission, so that when new users are created they will have the specific grants (SELECT, UPDATE, INSERT, DELETE) for that table?
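For reference, the per-user form of the statement in question, with placeholder database, table, and account names:

    GRANT SELECT, INSERT, UPDATE, DELETE
    ON    mydb.mytable
    TO    'someuser'@'%';

The question is whether the same privileges can be applied by default to every account, present and future, rather than repeating this grant per user.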
Postgres connection access denied on IPv6 address Posted: 01 Jun 2013 08:55 PM PDT Installed PostgreSQL 9.1 x64 on Windows, set up a listen address, but when connecting with pgAdmin I get the following error. Not sure why PostgreSQL is seeing my IPv6 address and not my regular IP address: To get authentication working, based on the error message, I updated
That worked, but it's ugly, and too specific. I tried the following based on the PostgreSQL docs, but none of them worked; I get the same 'access denied' error: I got this one working, which covers the entire IPv6 address space, but how can I specify an IPv6 range for more restriction? Questions
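For illustration, pg_hba.conf accepts CIDR masks for IPv6 addresses just as it does for IPv4, so entries of varying restrictiveness might look like the sketch below (the database and user columns are left wide open as placeholders):

    # IPv6 loopback only
    host    all    all    ::1/128           md5
    # a single /64 subnet
    host    all    all    fd00:1234::/64    md5
    # the entire IPv6 address space (what reportedly worked, but is very broad)
    host    all    all    ::/0              md5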
Is a geographically distributed application with SQL Server replication a good idea? Posted: 01 Jun 2013 08:59 PM PDT We are currently thinking about a target architecture for a geographically distributed application. This application tracks and schedules the movements of all material units across various production sites. There will be two logical entities:
Here is the logical diagram of the application: The implementation of this application cannot be directly derived from the previous diagram because we have to take into account the following constraints:
Therefore, I thought about a new architecture where we would create identical instances of the global entity (same executables) with synchronized databases for each area: I was wondering if anyone has already used SQL Server replication to achieve the same kind of goal. Is it the right way to proceed? All recommendations/warnings are welcome! Thanks a lot, Sébastien.
Reduce Clustered Index seek cost SQL Server 2008 R2 Posted: 01 Jun 2013 05:24 PM PDT I am running a query against a table with rows in the lakhs (hundreds of thousands), and it takes more than 20 minutes to fetch the data. After looking at the execution plan, I noticed that the clustered index seek cost may be the reason. How can I reduce the clustered index seek cost of the query below? The foreign key lookups cost around 13% to 23%. Revised Query: Revised Execution Plan:
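The query and plan did not survive the digest, so only a generic, hedged sketch is possible. A common lever when a clustered index seek dominates a plan is a covering nonclustered index tailored to the query's predicate and select list (names invented):

    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId, OrderDate)
    INCLUDE (TotalAmount, Status);   -- cover the selected columns to avoid key lookups

Whether this applies here depends entirely on the actual query shape, which is not shown.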
How to add custom sequence number in table which is going to be populated automatically Posted: 01 Jun 2013 04:56 PM PDT I have a table something like this I want the doc_number column to have sequential numbers for each tip. Can I have a column in MSSQL 2005 that will automatically get these values? Should this be done with a computed column, a default (constraint), or a trigger?
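The table definition did not survive the digest, so the sketch below invents an id column and assumes doc_number is a plain nullable INT. Two hedged options in SQL Server 2005: compute the per-tip number on read with ROW_NUMBER, or assign it at insert time with a trigger.

    -- Computed on read: no stored column needed
    SELECT id, tip,
           ROW_NUMBER() OVER (PARTITION BY tip ORDER BY id) AS doc_number
    FROM   dbo.Documents;

    -- Assigned on insert: one possible trigger shape (not safe under heavy
    -- concurrency without extra locking; shown only as a sketch)
    CREATE TRIGGER trg_Documents_DocNumber ON dbo.Documents
    AFTER INSERT
    AS
    BEGIN
        UPDATE d
        SET    doc_number = x.next_no
        FROM   dbo.Documents d
        JOIN  (SELECT i.id,
                      ISNULL(m.max_no, 0)
                      + ROW_NUMBER() OVER (PARTITION BY i.tip ORDER BY i.id) AS next_no
               FROM   inserted i
               LEFT JOIN (SELECT tip, MAX(doc_number) AS max_no
                          FROM dbo.Documents GROUP BY tip) m ON m.tip = i.tip) x
               ON x.id = d.id;
    END;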
Exporting database Oracle 7.3 on Windows NT 3.51 Posted: 01 Jun 2013 01:51 AM PDT How do I take dump files of Oracle 7.3 running on Windows NT 3.51? Should I go to the Orant/bin folder and run one of the programs there, like> That did not work. These services:
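On this vintage the usual tool is the classic Export utility run from Orant\bin; the exact executable name on the 7.3 Windows client is an assumption here (it may be exp73.exe rather than exp.exe). A hedged example of a full export:

    C:\ORANT\BIN> exp73 system/manager file=full_export.dmp full=y log=full_export.log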
MySql is not optimizing the query properly Posted: 01 Jun 2013 06:41 AM PDT I have a table structure as follows: NOTE: I have a UNIQUE index = When I run this query: MySQL uses the unique index and the execution time is 0.7 milliseconds. BUT when I run this query, MySQL does not use the UNIQUE index and the execution time is 76 milliseconds. MySQL: 5.5.27, InnoDB version: 1.1.8. My question is: why is MySQL behaving this way? Can someone please help me with this? EDIT:
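The table structure and the two queries did not survive the digest, so only a generic, hedged illustration of the usual diagnostic steps is possible (placeholder table, column, and index names):

    -- Compare the optimizer's choice for the fast and slow variants
    EXPLAIN SELECT * FROM mytable WHERE col_a = 'x' AND col_b = 'y';

    -- Test whether forcing the unique index changes the plan and the timing
    SELECT * FROM mytable FORCE INDEX (uq_cols) WHERE col_a = 'x' AND col_b = 'y';

    -- Refresh the statistics the optimizer bases its choice on
    ANALYZE TABLE mytable;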
Best practice: Unions or a derived table? Posted: 01 Jun 2013 04:16 PM PDT I've inherited a medium-sized database with a terrible schema. The sanitized portion in question is like so: And two similar tables, customers and locations: So, yes, when an invoice is not yet fulfilled, it's in the "pending" table; then, when the company's finished with it, it gets moved to the "done" table. So any given invoice will only be in one of the two tables. Unfortunately, we're still actively using the proprietary software that runs on this schema, so I am powerless to fix it. The "pending" table will usually have around 9,000 rows, whereas the "done" table is ~800,000 and counting. For two years (the full extent of my LAMP/MySQL experience), I have been writing UNIONs to capture information from the customers or locations tables, with the first query joining customers/locations/both against "pending" and the second joining customers/locations/both against "done". I've recently had a nagging feeling that writing two nearly identical queries UNIONed together is probably newbish and should be done differently. So I finally applied my more recent learning to a problem I had "solved" in my first few weeks of working with MySQL and realized that I can just use a derived table with a UNION inside that subquery to achieve the same results with less typing. The execution time is similar, usually 0.01s for the derived query and slightly less than that for the UNIONed query, but the derived table query uses far fewer characters and requires less time to type out. So my question for those with more experience: is using a derived table subquery more sane and maintainable in the long run? It's easier for me, the programmer, so that makes me suspicious that maybe it's not the best practice! Sample queries below:
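The sample queries themselves were lost in the digest; a hedged sketch of the two patterns being compared, with invented column names:

    -- Pattern 1: two near-identical joins glued together with UNION
    SELECT c.name, p.invoice_no, p.amount
    FROM   pending p JOIN customers c ON c.id = p.customer_id
    WHERE  c.id = 42
    UNION ALL
    SELECT c.name, d.invoice_no, d.amount
    FROM   done d JOIN customers c ON c.id = d.customer_id
    WHERE  c.id = 42;

    -- Pattern 2: one join against a derived table that unions the two invoice tables
    SELECT c.name, i.invoice_no, i.amount
    FROM  (SELECT invoice_no, amount, customer_id FROM pending
           UNION ALL
           SELECT invoice_no, amount, customer_id FROM done) AS i
    JOIN   customers c ON c.id = i.customer_id
    WHERE  c.id = 42;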
Migration of SQL Server to Oracle Database Posted: 01 Jun 2013 05:52 AM PDT In SQL Server, my procedure creates temporary tables whose column names come dynamically from each user, inserts data into those temporary tables, and uses the data further based on the requirement. Finally, we drop the temporary tables. There is no problem when we create temporary tables with the same name but different columns, because in SQL Server temporary tables are session-based. Now I want to migrate the SQL logic to Oracle Database. Is there any solution to achieve this requirement in Oracle Database Server? Answers are greatly appreciated.
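A hedged sketch of the SQL Server behaviour being relied on (names and columns invented): each session gets its own private copy of a #table, so two users can create a table with the same name but different column lists at the same time.

    -- Session-scoped temp table; the column list varies per user in practice
    CREATE TABLE #work (col_a INT, col_b NVARCHAR(50));

    INSERT INTO #work (col_a, col_b) VALUES (1, N'example');

    SELECT col_a, col_b FROM #work;

    DROP TABLE #work;

The migration question is which Oracle construct can provide the same per-session, per-shape behaviour.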
Unable to drop the table since its showing waiting for meta data lock Posted: 01 Jun 2013 12:16 AM PDT We are trying to drop one table, but it hangs, and SHOW PROCESSLIST shows the DROP as 'waiting for metadata lock'. We are even unable to perform any operation on that particular table. Does anybody know how to resolve this?
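For illustration, a common way to find the session holding the metadata lock, assuming InnoDB on MySQL 5.5 or later (where the INNODB_TRX view is available):

    SHOW PROCESSLIST;   -- look for long-running or idle sessions touching the table

    -- Open transactions that may still hold metadata locks
    SELECT trx_id, trx_mysql_thread_id, trx_started, trx_query
    FROM   information_schema.INNODB_TRX;

    -- If a stale session turns out to be the holder, it can be terminated (use with care)
    KILL 12345;   -- placeholder thread id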
Fulltext stoplist replication Posted: 01 Jun 2013 07:16 PM PDT In MS SQL Server 2008 R2 there is replication of a table with a full-text index on it, but the stoplist associated with the replicated full-text index doesn't replicate. Is there any way to replicate the stoplist as well?
SQL server ODBC connection for Active Directory user who doesn't log in to the windows Posted: 01 Jun 2013 02:16 PM PDT Do you think it is possible to create a SQL Server ODBC connection for an Active Directory user who doesn't log in to Windows? Ideally this type of user would be used in a batch process: one person logs in, creates the batch process, and runs it as the other user. Note: I don't want to use SQL Server authentication; I would like to use Active Directory instead. Thanks.
Process attempted to unlock a resource it does not own Posted: 01 Jun 2013 01:16 PM PDT SQL Server 2005 SP4 32-Bit I have a DBCC CHECKDB job running nightly. Last night, soon after the job started, I got the errors below. The database is NOT in suspect mode, and CHECKDB comes back clean when I run it now. The database is fully accessible. Should I be concerned? I'd hate to go back to a backup at this point. |
Setting DATEFIRST to 6 Posted: 01 Jun 2013 05:16 PM PDT I would like to set But: I get many languages, but none has the Question: Can I add another language based on an existing one, and set the field Can I modify an existing syslanguage? All I want is to set it to 6 permanently, because I need it to get the right week number. Thank you for your help!
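For illustration, the session-level form of the setting being discussed, plus a quick check of its effect (a permanent, server-wide setting is what the question is actually after, and is not shown here):

    SET DATEFIRST 6;    -- Saturday becomes day 1 of the week for this session

    SELECT @@DATEFIRST              AS datefirst_in_effect,
           DATEPART(dw, GETDATE())  AS day_of_week,
           DATEPART(wk, GETDATE())  AS week_number;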
mysql second slave not syncing while first slave works fine Posted: 01 Jun 2013 09:11 PM PDT I have a master (m) - slave (s1) setup using MySQL 5.1.45. When I try to add a second slave (s2), the slave lags behind and never catches up on the sync. Even after having synced s2 with the whole system offline, and with Seconds_Behind_Master = 0, after a few hours s2 falls out of sync again. What is strange is that s1 always stays in sync. Any ideas? iperf results between servers: vmstat for s2: top output on s2: Any ideas? Is there any chance that the MySQL version is the culprit, in conjunction with the nearly five-fold increase in traffic to the master? If that is the case, then why does s1 sync and not s2? Any idea whether 5.6.x solves similar problems?
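For illustration, the fields usually compared between the healthy and the lagging slave while the gap grows (the actual output is not included in the digest):

    SHOW SLAVE STATUS\G
    -- Slave_IO_Running / Slave_SQL_Running      : both should be Yes on each slave
    -- Seconds_Behind_Master                     : growing on s2, steady at 0 on s1
    -- Read_Master_Log_Pos vs Exec_Master_Log_Pos: shows whether the IO thread or the
    --                                             single SQL thread is the one falling behind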
How does MySQL determine the 'index first key' and 'index last key' with indexes? Posted: 01 Jun 2013 09:16 PM PDT In other words, when executing a SELECT query, how does MySQL use an index to find the first and last key of the result range in the first step?
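A hedged illustration of the idea with an invented table: for a range predicate, the optimizer derives a first key and a last key from the WHERE clause and scans the B-tree between them.

    CREATE TABLE t (id INT PRIMARY KEY, a INT, KEY idx_a (a)) ENGINE=InnoDB;

    EXPLAIN SELECT * FROM t WHERE a BETWEEN 10 AND 20;
    -- type: range, key: idx_a
    -- the index is entered at a = 10 (index first key) and scanned forward
    -- until a value greater than 20 (index last key) is reached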
compare the same table Posted: 01 Jun 2013 11:16 AM PDT I am facing an issue with the following query. When I execute the query, it takes very long. I broke the query into two parts and compared them with a shell script, but is there any chance to go with one query? Any suggestions are welcome.
Show processlist / Max concurrent connections seem to max out at 131 Posted: 31 May 2013 11:16 PM PDT When I put my database under a load test (basically just a basic write and basic read from our app), and run
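The rest of the post did not survive the digest. For illustration, the server-side settings and counters usually checked when connections appear to hit a ceiling like this:

    SHOW VARIABLES LIKE 'max_connections';
    SHOW STATUS    LIKE 'Max_used_connections';
    SHOW STATUS    LIKE 'Threads_connected';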
Deadlock free MERGE-alike in MySQL? Posted: 01 Jun 2013 01:16 AM PDT I am trying to implement some parts of So, whatever we try, deadlocks occur. What we do is start a transaction, then I can't just switch isolation levels because
And per http://stackoverflow.com/a/2032096/308851
I can't require every Drupal setup to have
So what now?
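For reference, the usual MERGE-alike construct in MySQL is INSERT ... ON DUPLICATE KEY UPDATE; a minimal sketch with an invented table follows. Whether it sidesteps the deadlocks described depends on the isolation level and unique-index layout, which the digest does not show:

    INSERT INTO counters (name, value)
    VALUES ('hits', 1)
    ON DUPLICATE KEY UPDATE value = value + VALUES(value);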
sql server database sharding - what to do with common data / non sharded data Posted: 01 Jun 2013 06:16 PM PDT We have a very large scale, enterprise-level database. As part of our business model, all web users hit our web servers at the same time each month, which in turn hammers our SQL box. The traffic is very heavy and continues to grow heavier as the company grows. SQL proc optimization has been performed and hardware has already been scaled up to a very high level. We are looking to shard the database now to ensure that we can handle company growth and future loads. We have decided which particular data should be sharded: it is a subset of our database which is highly utilized. However, my question is regarding the non-sharded data which is common/universal. An example of data like this might be an Inventory table, or possibly an Employee table, user table, etc. I see two options to handle this common/universal data:

1) Design 1 - Place the common/universal data in an external database. All writes will occur here. This data will then be replicated down to each shard, allowing each shard to read this data and inner join to it in T-SQL procs.

2) Design 2 - Give each shard its own copy of all common/universal data. Let each shard write locally to these tables and use SQL merge replication to update/sync this data on all other shards.

Concerns about design #1:

1) Transactional issues: if you have a situation in which you must write or update data in a shard and then write/update a common/universal table in one stored proc, for instance, you will no longer be able to do this easily. The data now lives on separate SQL instances and databases. You may need to involve MS DTC to see if you can wrap these writes in a transaction, since they are in separate databases. Performance is a concern here, and possible rewrites may be needed for procs that write to both sharded and common data.

2) A loss of referential integrity: it is not possible to enforce cross-database referential integrity.

3) Recoding large areas of the system so that it knows to write common data to the new universal database but read common data from the shards.

4) Increased database trips: as in #1 above, when you run into a situation in which you must update sharded data and common data, you are going to make multiple round trips to accomplish this, since the data is now in separate databases. There is some network latency here, but I am not as worried about this issue as about the three above.

Concerns about design #2: each shard gets its own instance of all common/universal data. This means that all code that joins to or updates common data continues to work/run just as it does today; very little recoding/rewriting is needed from the development team. However, this design depends completely on merge replication to keep the data in sync across all shards. The DBAs are highly skilled and are very concerned that merge replication may not be able to handle this, and that should merge replication fail, recovery from the failure is not great and could impact us very negatively.

I am curious to know if anyone has gone with design option #2. I am also curious to know if I am overlooking a 3rd or 4th design option that I do not see. Thank you in advance.
set the clock on mysql [closed] Posted: 01 Jun 2013 09:16 AM PDT I would like to know where MySQL gets its current time. I am updating a table and the time used is 2 hours early. The time on the server is completely different, so I'm a little confused... I tried but I get
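For illustration, the variables that usually explain a constant offset like this (the command the poster tried is not shown in the digest):

    SELECT @@global.time_zone, @@session.time_zone, NOW(), UTC_TIMESTAMP();

    SET time_zone = '+00:00';   -- session-level override, shown only as a hedged test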
Does pt-table-checksum point out rows that are different between Master and Slave? Posted: 01 Jun 2013 08:57 PM PDT Trying to verify the replication integrity of my replica. And I found that I have a simple Master->Slave setup. I created a sample table with 10 rows on the Master, which has successfully replicated to the Slave. On the slave I intentionally changed a column in one of the rows of that table. Executed - And indeed my output was - Strangely, the DIFF column says 0, which is incorrect. What am I missing here? Percona Toolkit version: 2.1.3
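For reference, a hedged sketch of how differences are usually read back, assuming the default checksum table percona.checksums: the DIFFS column printed on the master can show 0 even when a replica differs, because the comparison has to be evaluated on the replica itself.

    -- Run on the slave after pt-table-checksum has finished on the master
    SELECT db, tbl, COUNT(*) AS differing_chunks
    FROM   percona.checksums
    WHERE  master_cnt <> this_cnt
       OR  master_crc <> this_crc
       OR  ISNULL(master_crc) <> ISNULL(this_crc)
    GROUP BY db, tbl;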
Database Design - Creating Multiple databases to avoid the headache of limit on table size Posted: 01 Jun 2013 05:45 PM PDT I need to design a database to store information contained in millions of log files generated by devices on trial on a network. My first approach was storing all the information in tables contained in a single database, but this approach seems to fail as the data to be stored is too large. So I thought of creating a separate database for each device and storing each device's log files in its own database. My question is whether this approach can spare me the headache involved with the table size limit. My platform specification is given below:
I want to build my database design in a way that I will not need to apply table-shrinking procedures in the future.
Is it OK to blindly add missing indexes? Posted: 01 Jun 2013 07:52 AM PDT I often use SSMS to test my slow stored procedures for missing indexes. Whenever I see a "Missing Index (Impact xxx)" my kneejerk reaction is to just create the new index. This results in a faster query every time as far as I can tell. Any reason why I shouldn't continue to do this? |
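For reference, the suggestions SSMS surfaces come from the missing-index DMVs, which can also be queried directly; a sketch:

    SELECT mid.statement           AS table_name,
           mid.equality_columns,
           mid.inequality_columns,
           mid.included_columns,
           migs.avg_user_impact,   -- the "Impact" figure shown in the plan
           migs.user_seeks
    FROM   sys.dm_db_missing_index_details     AS mid
    JOIN   sys.dm_db_missing_index_groups      AS mig
           ON mig.index_handle = mid.index_handle
    JOIN   sys.dm_db_missing_index_group_stats AS migs
           ON migs.group_handle = mig.index_group_handle
    ORDER BY migs.avg_user_impact * migs.user_seeks DESC;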