[how to] Finding stored procedures with missing indexes
- Finding stored procedures with missing indexes
- Simple scenario needs to go from Unnormalized Data to 3NF - Review Please
- Saving settings for multiple accounts in SQL Server
- table design to track significant events associated with an item
- I would like to query a range of criteria on multiple columns in MySQL
- mysql_connect() system error :111
- Fast paginated result, for various filter clauses
- Improving performance of SQL Server after migration to cluster servers [closed]
- Type conversion in expression may affect "CardinalityEstimate" in query plan choice?
- RSExecRole missing
- Implementation of stack in MySQL
- Want to Get Brands In Wordpress Database Structure Saved as Custom Post types
- MySQL incredibly slow on WAMP and XAMPP
- PostgreSQL replication for archiving
- column precision exceeded. ORA-01438 is so unmeaningful
- Applying user-defined fields to arbitrary entities in a SQL Server database
- Access denied Error in MySQL and Drupal
- ON DELETE cascade alternate
- Parallel Statistics Update
- Stored Procedures Best Practices: Fenced or Unfenced?
- How to prevent high memory, CPU and time consumption by MySQL restore?
- MySQL Performance Monitoring
Finding stored procedures with missing indexes Posted: 09 Mar 2013 08:13 PM PST How can I scan for stored procedures that need indexes to be created? Does anyone have any ideas? Basically, we have stored procedures that run very slowly. I'm wondering if there is some kind of automated script or app that can scan the database and indicate whether an index needs to be created?
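One approach (a sketch, not the only answer): on SQL Server 2008+ you can search the plan cache for cached stored-procedure plans whose XML contains a MissingIndex element. The `TOP (50)` and ordering by worker time are arbitrary choices here, and the plan cache only reflects plans that are currently cached.

```sql
-- Sketch: find cached stored-procedure plans that report a missing index.
SELECT TOP (50)
       OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
       ps.total_worker_time,                     -- rough "how slow" signal
       qp.query_plan                             -- inspect for index details
FROM   sys.dm_exec_procedure_stats AS ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
WHERE  qp.query_plan.exist('declare default element namespace
         "http://schemas.microsoft.com/sqlserver/2004/07/showplan";
         //MissingIndex') = 1
ORDER BY ps.total_worker_time DESC;
```

The missing-index DMVs (`sys.dm_db_missing_index_details` and friends) give similar information server-wide, but this variant ties the suggestions back to specific procedures, which is what the question asks for.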
Simple scenario needs to go from Unnormalized Data to 3NF - Review Please Posted: 09 Mar 2013 07:26 PM PST Here is the simple scenario. It is an animal charity, and at the moment, to track their establishment's costs, they have a spreadsheet like so: 1NF: Am I right in saying that the above is already in 1NF because according to the definition of 1NF:
Then, I went on to figure out what I would make my primary key. Consequently, went for the following:
2NF: Am I also right in saying that the above is already in 2NF, because again according to the definition:
3NF: Let me start by the definition here:
Here I am stuck; I do not know how to get it into 3NF. As it currently stands, it is possible that the Name attribute would duplicate (say they might order some more Dog food), and of course, as you can see, the Bought From attribute is already repeating. I would greatly appreciate some help. Thank you.
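Since the original spreadsheet columns were lost from this digest, here is a hypothetical 3NF decomposition using assumed column names (supplier, item, cost, date): the repeating Name and Bought From attributes move into their own tables so the purchases table stores only keys and purchase-specific facts.

```sql
-- Hypothetical 3NF shape; names and types are illustrative only.
CREATE TABLE supplier (
    supplier_id INT PRIMARY KEY,
    name        VARCHAR(100) NOT NULL   -- "Bought From" lives here once
);

CREATE TABLE item (
    item_id INT PRIMARY KEY,
    name    VARCHAR(100) NOT NULL       -- "Name" lives here once
);

CREATE TABLE purchase (
    purchase_id   INT PRIMARY KEY,
    item_id       INT  NOT NULL REFERENCES item (item_id),
    supplier_id   INT  NOT NULL REFERENCES supplier (supplier_id),
    purchase_date DATE NOT NULL,
    cost          DECIMAL(10, 2) NOT NULL
);
```

Every non-key attribute now depends on the key of its own table and nothing else, which is the transitive-dependency test 3NF asks for.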
Saving settings for multiple accounts in SQL Server Posted: 09 Mar 2013 07:25 PM PST I'm using SQL Server 2012 to store the account settings for multiple accounts (each user account will have a set of settings). These settings could have any number of fields (the set has not been finalized and could grow), but each user account will have the same set of settings (with different values, of course). Is it better (faster, with a smaller footprint, and more efficient) to:
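The concrete options were lost from this digest, but the usual trade-off is one wide column-per-setting table versus a key/value (EAV) table. A sketch of the key/value variant, with assumed names, since it tolerates a growing set of settings without schema changes:

```sql
-- Key/value variant: one row per account per setting.
-- Adding a new setting requires no ALTER TABLE.
CREATE TABLE account_setting (
    account_id    INT           NOT NULL,
    setting_name  VARCHAR(100)  NOT NULL,
    setting_value NVARCHAR(400) NULL,
    PRIMARY KEY (account_id, setting_name)
);
```

The wide-table variant is faster to read all settings at once and allows proper types per column, but every new setting is a schema change; the key/value variant trades type safety for flexibility.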
table design to track significant events associated with an item Posted: 09 Mar 2013 08:59 PM PST Of these 20 or so events, only a handful will always take place, while the rest are just potential events. Also, many - but not all - of these events have at least two points which need storage:
The events data will be updated in the database from external sources in the form of What would be the best design for storing this milestone event data? Or would you try a less normalized approach and flatten all the possible events out into one very wide events table that mimics what the Or is there another pattern you would use?
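A normalized sketch of the "one row per item per event" alternative, with assumed names; the two storage points per event mentioned above are guessed here to be a planned and an actual timestamp:

```sql
-- One row per (item, event type); absent rows mean the event
-- has not happened and is not scheduled.
CREATE TABLE item_event (
    item_id      INT      NOT NULL,
    event_type   INT      NOT NULL,   -- FK to a ~20-row lookup table
    planned_date DATETIME NULL,       -- assumption: first storage point
    actual_date  DATETIME NULL,       -- assumption: second storage point
    PRIMARY KEY (item_id, event_type)
);
```

Compared with a 40-plus-column flat table, this stays narrow as event types are added, and "which events are still pending for item X" becomes a simple anti-join instead of a 20-branch CASE.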
I would like to query a range of criteria on multiple columns in MySQL Posted: 09 Mar 2013 12:02 PM PST I have 3 columns in a MySQL table using the InnoDB engine. I want to be able to search for some values on those columns and also anything close to those values. For example: we have 3 columns named "Fast", "Smart" and "Sharp", of type boolean/int. I have 1000 records in the table; as an example, I list 5 here. Let's say we want to search for people who are smart and sharp, as 0 1 1, but if there is no record with the value 0 1 1, we want the closest to it, which is 1 1 1 or 1 0 1, etc. Right now when I search, I get only the exact value; if the value doesn't exist, I get nothing back from the DB. So how can I achieve this, and if I get a bunch of results, how can I sort them from the closest to my query to the farthest?
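Since the columns are 0/1 flags, "closeness" can be expressed as a Hamming distance: count how many flags differ from the target. A sketch, with the table name `people` assumed:

```sql
-- Order rows by how many of the three flags differ from the target (0,1,1).
-- Distance 0 = exact match; 1 = one flag off; and so on.
SELECT p.*,
       ABS(fast - 0) + ABS(smart - 1) + ABS(sharp - 1) AS distance
FROM   people AS p
ORDER BY distance ASC
LIMIT 10;
```

This always returns rows (never an empty result for a missing exact match), sorted closest-first. Note the expression defeats index use, which is fine at 1,000 rows but worth revisiting at larger scale.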
mysql_connect() system error :111 Posted: 09 Mar 2013 10:01 AM PST I have 2 VPS. I would like to connect from the first to the second's MySQL DB like this: I got this error:
My server on the first VPS works on port 222, and I don't have MySQL on it. On the second, MySQL is running on port 3306. The first and second servers' firewalls have these rules: What am I doing wrong?
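For context: system error 111 is ECONNREFUSED, meaning nothing accepted the TCP connection on the second server. The two usual causes are MySQL bound only to localhost and a missing remote grant. A sketch of the fix, with database, user, and host placeholders assumed:

```sql
-- Step 1 (in my.cnf on the second server, then restart mysqld):
--   bind-address = 0.0.0.0   -- default 127.0.0.1 refuses remote TCP (error 111)
--   and make sure skip-networking is NOT set.

-- Step 2: grant the remote web server access (placeholder names).
GRANT ALL PRIVILEGES ON mydb.* TO 'appuser'@'first.server.ip'
    IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;
```

If both are in place and it still fails, verify the firewall rules actually allow inbound 3306 from the first server's IP.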
Fast paginated result, for various filter clauses Posted: 09 Mar 2013 07:39 AM PST I have been working on obtaining a paginated result from a table in my database (11g). While I have a query that does work (i.e., the result is correct), it does not perform as well as I'd like, and I am trying to improve its efficiency (est. 60 calls per second on that query alone). First of all, I read What is an Effective Way to Count the Number of Pages in Oracle? and the article it points to, but unfortunately it does not discuss the execution plan of the queries presented at all (I'll get to see them come Monday). Here is the query I had come up with (the table is partitioned on Note: most of the queries will be carried out with a single Note: ~72 partitions in total, but only ~39 active at most, ~300,000 different values of It does work, but there is a nasty surprise in the execution plan: the 2nd query (with In the execution plan this shows up as: I can create an index over the table if necessary, and I can transform the existing partitioned index ( Is there a way to prevent the "materialization" of the view and have Oracle build it on the fly as the outer query needs more data (i.e., moving to lazy evaluation of the inner queries)?
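The original query was lost from this digest, but the classic Oracle 11g pagination shape that tends to avoid materializing the whole result is the ROW_NUMBER inline view. All object and bind names below are placeholders:

```sql
-- Sketch: keyset-style pagination via ROW_NUMBER (placeholder names).
-- With an index matching (filter_col, created_date, id), Oracle can often
-- use a COUNT STOPKEY / WINDOW NOSORT STOPKEY plan and stop early.
SELECT *
FROM  (SELECT t.*,
              ROW_NUMBER() OVER (ORDER BY t.created_date, t.id) AS rn
       FROM   my_table t
       WHERE  t.filter_col = :val)
WHERE rn BETWEEN :first_row AND :last_row;
```

Whether the optimizer actually short-circuits depends on the ORDER BY being satisfiable by an index; checking the plan for STOPKEY operators is the quick way to confirm lazy evaluation is happening.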
Improving performance of SQL Server after migration to cluster servers [closed] Posted: 09 Mar 2013 05:00 AM PST Since migrating our database (SQL Server 2008) to a cluster server, it has become very slow, and we cannot fetch data and reports from it. What are the most important things we should check after such a change to get the performance back on track?
Type conversion in expression may affect "CardinalityEstimate" in query plan choice? Posted: 09 Mar 2013 08:36 PM PST I maintain an archive database that stores historical data in partitioned views. The partitioning column is a datetime. Each table under the view stores one month of data. We constrain the events on each table with a check constraint on the datetime column. This allows the optimizer to limit which tables are searched for queries that filter on the event datetime column. The names of the check constraints were generated by SQL Server, so it's hard to know what they do by looking at their names. I want the constraint names to have the form 'CK_TableName_Partition'. I can generate a rename script using this query and copying the data from the sql_text column. The WHERE clause matches check constraints whose names look like they were generated by SQL Server:
What does that mean in this context? Is such a complex filter confusing the optimizer? Is it something I should be worried about?
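The generating query itself was lost from this digest, so here is an assumed reconstruction of the pattern: build `sp_rename` calls for system-named check constraints (SQL Server's auto-names start with `CK__`). Schema, filter, and target-name format follow the question's 'CK_TableName_Partition' convention:

```sql
-- Sketch: emit one sp_rename statement per system-named check constraint.
-- The LIKE with ESCAPE matches names beginning with literal 'CK__'.
SELECT 'EXEC sp_rename '''
       + QUOTENAME(SCHEMA_NAME(t.schema_id)) + '.' + QUOTENAME(cc.name)
       + ''', ''CK_' + t.name + '_Partition'', ''OBJECT'';' AS sql_text
FROM   sys.check_constraints AS cc
JOIN   sys.tables AS t ON t.object_id = cc.parent_object_id
WHERE  cc.name LIKE 'CK|_|_%' ESCAPE '|';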
RSExecRole missing Posted: 09 Mar 2013 05:08 AM PST I have multiple servers, and 4 of them have Reporting Services installed with the A few days ago I was setting up security for a new login and I found out that the I have found a way to create it on the Has anyone encountered this problem before? Can anyone help me with a script and an explanation of this role?
Implementation of stack in MySQL Posted: 08 Mar 2013 11:15 PM PST I had an abstract idea of implementing a stack in MySQL. I tried a lot of Googling but couldn't reach any logical conclusion. The stack must be a collection of tables with the stack top pointing at a particular table. We may have various such stacks in our DB. The stack length may increase if, for example, the field from the table is full. Likewise, the stack length may decrease in the opposite manner. Could you please give me an implementation idea? Is it really possible in the first place? I hope I'm clear with my question.
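A simpler alternative to a table-per-frame design (a sketch, with assumed names): model each stack as rows in one table, where push is an INSERT and pop reads and deletes the highest id. MySQL's `DELETE ... ORDER BY ... LIMIT` makes the pop straightforward:

```sql
-- One table holds many stacks; stack_id distinguishes them.
CREATE TABLE stack_entry (
    id       INT AUTO_INCREMENT PRIMARY KEY,
    stack_id INT          NOT NULL,
    value    VARCHAR(255) NOT NULL
) ENGINE = InnoDB;

-- push onto stack 1
INSERT INTO stack_entry (stack_id, value) VALUES (1, 'hello');

-- pop from stack 1 (transaction keeps read + delete consistent under concurrency)
START TRANSACTION;
SELECT id, value FROM stack_entry
 WHERE stack_id = 1 ORDER BY id DESC LIMIT 1 FOR UPDATE;
DELETE FROM stack_entry
 WHERE stack_id = 1 ORDER BY id DESC LIMIT 1;
COMMIT;
```

Growing and shrinking is then just row count, with no DDL needed per push or pop.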
Want to Get Brands In Wordpress Database Structure Saved as Custom Post types Posted: 09 Mar 2013 12:34 PM PST I am having difficulty fetching the brands per category in the sidebar of my website. Here is the description: I am using the feeds and feed uploader to upload feeds and create brands and product categories in WordPress. The problem is there is no link within the So I tried out these heavy queries to fetch the brands per category, but using the Here are the queries I am using. The first two queries run fine, but the last one makes the database query too slow, so any help would be appreciated.
MySQL incredibly slow on WAMP and XAMPP Posted: 09 Mar 2013 08:34 PM PST I've installed WAMP on my new Windows 8 machine, and for some reason when I use phpMyAdmin to import an SQL file (it's only about 5 MB), it takes several minutes before timing out. And when I check, I can see it's only imported a fraction of the tables. I've tried other SQL files and it's the same story. These same SQL files import perfectly fine (and in a couple of seconds) on my Windows 7 machine, also running WAMP. I've since uninstalled WAMP and tried XAMPP, and the problem still exists. Even just browsing the databases with phpMyAdmin takes a long time between page loads. I am guessing it's a problem with MySQL. Any suggestions would be helpful.
PostgreSQL replication for archiving Posted: 09 Mar 2013 04:09 AM PST I've looked around a bit and haven't found a very clear strategy or method for what I'm trying to do, which is surprising because I would think it would be a fairly common practice. I currently use Bucardo to replicate in a master-slave setup. This works fine, except that I want the slave to retain all records ever written to it. The master is a smaller system whose tables will need to be truncated periodically, and I don't want these truncates to be carried over to the slave. I would think this would be a fairly common practice, but I have been stumped finding a solution that will work. Could anyone point me in the right direction? The right direction doesn't necessarily need to involve Bucardo. Thanks.
column precision exceeded. ORA-01438 is so unmeaningful Posted: 09 Mar 2013 05:34 PM PST I am not sure if this is a question for the Stack Exchange DB Admin group; please let me know if it's not. Let's suppose you have a Java batched prepared statement writing data to the database. If one of the columns has a value larger than its specified precision, it throws an ORA-01438 exception. Is there any way to find out which column that is?
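One workaround on the database side (a sketch; owner and table names are placeholders): pull each NUMBER column's declared precision and scale from the data dictionary, then compare your batch values against those limits in application code before executing the insert.

```sql
-- List the numeric capacity of every NUMBER column in the target table,
-- so the application can pinpoint which value exceeds its precision.
SELECT column_name, data_precision, data_scale
FROM   all_tab_columns
WHERE  owner = 'MYSCHEMA'          -- placeholder
  AND  table_name = 'MYTABLE'      -- placeholder
  AND  data_type = 'NUMBER'
ORDER BY column_id;
```

A value fits NUMBER(p, s) when it has at most p - s digits before the decimal point, which is exactly the check ORA-01438 performs without telling you the column.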
Applying user-defined fields to arbitrary entities in a SQL Server database Posted: 09 Mar 2013 07:34 PM PST Currently we have an old (rather crude) system that has user defined fields, which are mapped against rows in arbitrary tables. This was an after-the-fact modification based on a customer request, and it wasn't really designed to scale well. Our system has around 60 different types of entities, which makes things even more complicated. Essentially the implementation looks like this:
etc... This gets nice and fun when we generate our own ways to index compound primary keys, but that's another DailyWTF-worthy story. Obviously this is pretty hideous, and leads to some spectacularly horrific queries being generated, but it's worked alright for now because we limit each entity to a maximum of 5 user defined fields. As a quick disclaimer, I wasn't with the company when this design decision was made! ;) Anyway, we're about to start a shiny new project and will inevitably need a better way of doing this, with no restrictions on the number of UDFs we can apply to entities, increased performance, and less horror in the generated query department. If we could run a NoSQL solution like Mongo or Redis I'd be happy and wouldn't need to bother you all, but sadly that's not an option. Instead, I need a way to do this from within SQL Server 2008 R2. So far, I've come up with the following options:
So, do any of these ideas have merit? Is there a better alternative?
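The candidate options were lost from this digest, but one common SQL Server shape for unlimited UDFs across 60 entity types is a single value table using `sql_variant`, so one column can hold different primitive types. A sketch with assumed names:

```sql
-- One generic UDF value table keyed by (entity type, entity id, field id).
-- field definitions (name, declared type, validation) live in udf_field.
CREATE TABLE udf_field (
    field_id       INT IDENTITY PRIMARY KEY,
    entity_type_id INT          NOT NULL,
    field_name     NVARCHAR(100) NOT NULL
);

CREATE TABLE udf_value (
    entity_type_id INT         NOT NULL,
    entity_id      BIGINT      NOT NULL,
    field_id       INT         NOT NULL REFERENCES udf_field (field_id),
    value          SQL_VARIANT NULL,
    PRIMARY KEY (entity_type_id, entity_id, field_id)
);
```

This lifts the 5-field cap and keeps queries uniform across entity types; the trade-offs are weaker typing than real columns and `sql_variant`'s indexing and comparison quirks, so sparse columns or an XML column are the usual alternatives to weigh against it.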
Access denied Error in MySQL and Drupal Posted: 09 Mar 2013 12:34 AM PST We are trying to deploy a Drupal application on our production server. We see this strange error We made all necessary corrections in We are able to connect to the database server from our web server through the command line. There is no firewall enabled on either server. Any update we make to the database means the site goes down and displays an Can anyone guide me to resolve this issue? Thanks.
ON DELETE cascade alternate Posted: 09 Mar 2013 01:34 PM PST I am trying to create a database for 2 tables like:
Table pseudo-code: Table2: There can be 2 records in Is there any other MySQL method/constraint that can fulfill the above requirement?
Parallel Statistics Update Posted: 09 Mar 2013 05:55 PM PST In SQL Server 2008 or later, is
Stored Procedures Best Practices: Fenced or Unfenced? Posted: 09 Mar 2013 08:12 AM PST I believe I understand the reasons behind fenced and unfenced stored procedures. Fenced procedures run "outside" of the database (in our case, DB2) to prevent possible corruption of the database engine should there be issues with things like pointers. Unfenced procedures run "inside" the database, which means better performance. From what I have researched, SQL PL is always basically unfenced, because it is SQL and therefore cannot access memory the way programming languages can. C/C++ and Java procedures can run fenced or unfenced, but since they can access memory, they should be run fenced unless the quality of the code is certain not to crash and the performance is needed. First of all, am I correct in my understanding of the above? Next, is it generally a best practice to start out with all stored procedures (even those defined as SQL PL) fenced? Are there any other best practices for stored procedures, especially related to fencing and/or security?
How to prevent high memory, CPU and time consumption by MySQL restore? Posted: 09 Mar 2013 04:34 AM PST I have a local test machine with 2 GB RAM and a dual-core processor. I imported a dump on it using
The restore took approximately 25 minutes. Question: why did it take so long? I have already optimized my server settings, and the dump contains extended inserts and disables keys. Is 700 MB too large for this kind of machine? Now what I actually want is to convert all tables of that database to InnoDB. So for that I executed a simple shell command:
Now when I run InnoDB.sql, one of my tables takes more than 20 minutes for conversion. It contains only 1,378,397 records. In total it takes more than 30 minutes. In the meantime, memory usage by the mysqld daemon was 73%; CPU usage was OK at that time. How can I minimize the time consumption? Should I change my MySQL server settings, or something else? If anybody wants my my.cnf settings, I will share them.
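One session-level recipe that often shortens large InnoDB imports (a sketch; `innodb.sql` stands in for the generated conversion script): disable per-row checks and autocommit for the duration of the load so InnoDB batches its work, then restore them.

```sql
-- Run in the importing session before sourcing the large script.
SET foreign_key_checks = 0;
SET unique_checks = 0;
SET autocommit = 0;

SOURCE innodb.sql;   -- the ALTER/convert script (placeholder name)

COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;
```

Beyond this, the single biggest server-side lever is usually `innodb_buffer_pool_size` in my.cnf; on a 2 GB machine a few hundred MB is a common starting point, sized so the machine does not swap.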
MySQL Performance Monitoring Posted: 09 Mar 2013 02:19 PM PST I already have system availability monitoring for all my MySQL instances in Nagios. I'm looking to get some trend monitoring going for activity. It'd be nice to see how all the I've seen that Cacti exists and seems pretty popular. What have your experiences with Cacti been (pros/cons)? Are there any other common solutions I should investigate?