[how to] I've backlogged queries on my SQL Server - what now?
- I've backlogged queries on my SQL Server - what now?
- How to use group by to select only group row in these case scenarios
- 'ambiguous column name' error refers to database, table, and column
- Moving large databases
- MySQL - password encryption type
- Databases not coming up in SQL Server 2008 R2
- Improving performance of query by using greater than on a primary key
- PostgreSQL: Group By primary key or DISTINCT increase query time over 1000x with limit
- Restore to last transaction log restore
- SQL server agent SSIS error
- Is this a good strategy for importing a large amount of data and decomposing as an ETL?
- Dropping Hypothetical Indexes
- SA permissions issues with many nested objects
- How should I set up my social network database design?
- Error "#1118 - row size too large" on the first row of the table only
- Database user specified as a definer
- Problem in Fetching Result set in Desired Format
- Best practice for tempdb log file
- How do I execute an Oracle SQL script without sqlplus hanging on me?
- Database schema design help needed
- Object name 'Clean Up History' in use when trying to complete the maintenance plan wizard in SQL Server 2008 R2
- Second time query execution using different constants makes it faster?
- SSRS appears to be ignoring Permissions set using Report Manager
- Privileges needed for Oracle Text
- Linked Server Query Results: OPENQUERY vs Distributed Transactions
- Merge Replication Subquery Filters
I've backlogged queries on my SQL Server - what now? Posted: 20 Jul 2013 05:07 PM PDT So, I'm a bit of a plonker, right... By accident, I issued in the region of 500,000 individual queries against my DB. It's been running for 6 hours now. I restarted the server, but they continue to insert. I need them to stop - how can I do this?
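Since this digest dropped the original code, here is a sketch of how the runaway sessions could be found and stopped, assuming SQL Server 2005+ and sysadmin rights; the filter and the session_id are placeholders to adapt:

```sql
-- List user sessions and the statement each one is currently running.
SELECT s.session_id, s.login_name, r.command, t.text AS current_sql
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_requests AS r ON r.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE s.is_user_process = 1;   -- narrow this to the offending login/host

-- Terminate one offender; repeat (or generate KILL statements) for the rest.
KILL 53;   -- 53 is a placeholder session_id taken from the query above
```

Note that if the inserts come from an application that keeps resubmitting, the connection source also needs disabling, or the queue will simply refill after each KILL.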
How to use group by to select only group row in these case scenarios Posted: 20 Jul 2013 08:22 PM PDT I'm learning SQL for the first time and I was given these queries to do. I wrote those statements to solve the problem and they work, but they don't use
Can somebody please suggest how I change this? They should only use
'ambiguous column name' error refers to database, table, and column Posted: 20 Jul 2013 10:14 AM PDT sqlite3 gives an 'ambiguous column name' error with the query below. The error itself references what seems to be about as unambiguous a column name as one can imagine, identifying the database as 'sourcedb', the table as 'answers', and the field as 'markup'. Why is this query triggering the error? Some additional detail...
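The usual cause is that the join exposes two columns with the same name, and the message names only one of the candidates. Prefixing every reference with a table alias resolves it; a minimal sketch, with table and column names that are illustrative rather than the OP's actual schema:

```sql
SELECT a.markup AS answer_markup,
       q.markup AS question_markup     -- both tables expose a "markup" column
FROM sourcedb.answers   AS a
JOIN sourcedb.questions AS q ON q.id = a.question_id;
```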
Moving large databases Posted: 20 Jul 2013 09:08 PM PDT I have a CentOS server and /var/lib/mysql/ is 125GB (the disk has 1GB of free space). Ordinarily I would use mysqldump to back up the databases, but I don't normally work with such large databases, so I need to know the safest way of copying them over to a new server. All advice appreciated!
MySQL - password encryption type Posted: 20 Jul 2013 09:13 AM PDT I would like to confirm whether MySQL uses SHA1 for 'PASSWORD' hashing, and why the test below doesn't work:
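The OP's test did not survive this digest, but for reference: before MySQL 5.7, PASSWORD() is a double SHA-1, so a single SHA1() call, or hashing the hex string instead of the raw digest, will not match:

```sql
-- These two expressions agree on MySQL < 5.7 (mysql_native_password):
SELECT PASSWORD('secret')                               AS built_in,
       CONCAT('*', UPPER(SHA1(UNHEX(SHA1('secret')))))  AS manual;
-- SHA1('secret') alone will NOT match: the inner digest must be fed to
-- the outer SHA1 as raw bytes (hence UNHEX), then hex-encoded with '*'.
```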
Databases not coming up in SQL Server 2008 R2 Posted: 20 Jul 2013 11:14 AM PDT Here is something that happened on my system that I just noticed. None of the databases are showing up in the Object Explorer window, even though the server connects successfully. It shows only one database, which I created exactly a year ago. How can I get those databases back? They are really important.
Improving performance of query by using greater than on a primary key Posted: 20 Jul 2013 06:05 AM PDT I have a Reporting table where I store a description. I want to query the description field and save results in another table: if there is no entry in the second table, I create an entry with default values; if there is an entry, I update the second table with the first 3 characters of the description field. Since the first table has millions of rows, I search it with
but EXPLAIN shows that it scans the same number of rows as when I query the first table from the start.
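Since the original query was lost in this digest, here is the usual keyset ("seek") pattern the title refers to, with illustrative names; the key point is remembering the highest primary key already processed so each pass seeks past it instead of rescanning:

```sql
SELECT id, description
FROM reporting            -- placeholder table name
WHERE id > @last_id       -- highest id handled in the previous batch
ORDER BY id
LIMIT 1000;
```

Note that EXPLAIN's rows estimate for a range scan can still show a large number; comparing actual execution time (or handler counters) is more telling than the estimate alone.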
PostgreSQL: Group By primary key or DISTINCT increase query time over 1000x with limit Posted: 19 Jul 2013 11:23 PM PDT Also see http://stackoverflow.com/questions/17741167/hibernate-jpa-improve-performance-of-distinct-query but I realized this is mainly a PostgreSQL issue. My application uses a 3rd-party extension to PostgreSQL for searching chemical structures. This is in general slow. I cannot change the SQL directly, as the application uses Hibernate and a native query is not an option. I have a many-to-many relationship where the link table has an additional column: basically, a Mixture consists of Elements, and an Element occurs in the Mixture at a certain percentage. I use Spring Data JPA with QueryDSL, Hibernate, and PostgreSQL, and I have a query with 2 joins. I'm searching for all Mixtures that contain an Element matching the given criteria. Because a mixture can have multiple elements that match the criteria, the query may return the same entity multiple times. I want to prevent that, hence DISTINCT or GROUP BY primary key. The query is also paged, meaning it uses limit and offset. The query runs perfectly fine without either DISTINCT or GROUP BY, but then I can get duplicate rows. If I add either, the query is over 1000 times slower. Query with DISTINCT (note: SQL from Hibernate): EXPLAIN ANALYZE with DISTINCT: also http://explain.depesz.com/s/ocC The long time is caused by searching the 3rd-party index for chemical structure search. For some reason the whole index is searched.
If the DISTINCT is removed, limit and offset are correctly applied to the 3rd-party index and the query is fast. EXPLAIN ANALYZE: also http://explain.depesz.com/s/gU2 Is there any way I can tune PostgreSQL to apply the 3rd-party index only according to the limit and offset clauses when using DISTINCT? EDIT: After further thinking about the issue I came to the conclusion that there is no solution I can implement. With GROUP BY or DISTINCT the whole query obviously must be run regardless of the limit clause, and if the whole query is run, the 3rd-party index must be used, and that takes time (without that index such a search would take minutes, not seconds). Now about statistics. Here is a quote from the supplier:
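Although the OP cannot change Hibernate's generated SQL, for readers who can: rewriting the duplicate-producing join as an EXISTS test returns each mixture once without DISTINCT, so LIMIT can stop the scan early. A sketch with illustrative names, where the `@@` predicate stands in for the 3rd-party structure-search operator:

```sql
SELECT m.*
FROM mixture AS m
WHERE EXISTS (
    SELECT 1
    FROM mixture_element AS me
    JOIN element AS e ON e.id = me.element_id
    WHERE me.mixture_id = m.id
      AND e.structure @@ 'query-structure'   -- placeholder search predicate
)
ORDER BY m.id
LIMIT 25 OFFSET 0;
```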
Restore to last transaction log restore Posted: 20 Jul 2013 03:20 PM PDT We have a log shipping setup where we replicate from our primary database to a secondary. That database is, by way of log shipping, in standby and thus read-only. We set it up so we could run reports from it, but it seems the reporting software needs to go through the application server, which needs to log onto the server - in this case, Microsoft Dynamics AX and Microsoft Dynamics ERP. Our plan is to have a tertiary database - this would also get log backups via log shipping, but in a different way - we would like to roll the database back to what it was at the last transaction log restore, then restore the latest log file(s). Is this possible, and how would we go about it? EDIT 2013/07/21: Allow me to rephrase my question. If I have a SQL database with full transaction logging, can I tell it to roll back all changes to a certain point, and if so, how may I accomplish this? Thanks
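On the rephrased question: SQL Server cannot roll a recovered database backward, but a restore sequence can stop at a chosen moment with STOPAT, which gives the same effect. A sketch with placeholder database name, paths, and timestamp:

```sql
RESTORE DATABASE AXDB
    FROM DISK = N'C:\Backups\AXDB_full.bak'
    WITH NORECOVERY, REPLACE;

RESTORE LOG AXDB
    FROM DISK = N'C:\Backups\AXDB_log1.trn'
    WITH STOPAT = '2013-07-20 12:00:00', RECOVERY;  -- recover as of this instant
```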
SQL server agent SSIS error Posted: 20 Jul 2013 01:30 AM PDT I get the following error when I execute my package as a SQL Server Agent job. It is an SSIS 2008 package running on a SQL Server 2008 instance. My package security is DontSaveSensitive. I don't even know how to begin fixing this error. Where should I check first?
Is this a good strategy for importing a large amount of data and decomposing as an ETL? Posted: 20 Jul 2013 02:31 PM PDT I have a set of five tables (a highly decomposed schema for an ETL, if I understand the nomenclature) that I'm going to load via bulk import, then run some inserts from those five tables into a slew of other tables, including inserts that rely on the values I just inserted into the first tables. I can do my inserts as an A, B, C process: I insert into the first table, then insert into some table S where exists in A + T (T being some other table that has preloaded "configuration data"), then insert into Z where exists in B + U, etc. Should I be trying to batch those inserts with a cursor (I know, stone the traitor) or should I just run the raw inserts? Should I stage out the inserts as:
OR should I insert into all the tables where the data is needed, but do it via cursor, in a "for loop" style pattern of 100k rows at a time? FWIW, this is a behavior I saw from the DBAs at my last job, so I figure that's "what I'm supposed to do" (the batch process via cursors), but maybe I don't understand enough about what they were doing (they were also live-loading into systems that already had data, and were loading new data afterwards). Also bear in mind that I'm normally a C# dev, but I've got the most TSQL experience here and I'm trying to make the best process I can for raw-loading this data, as opposed to our "current" method that is mostly webservice fetches and NHibernate save-commits. Things I think are important to the question:
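For what it's worth, the batching the DBAs were doing doesn't require a row-by-row cursor; a WHILE loop around a set-based, TOP-capped insert keeps each transaction small. A sketch with illustrative table and column names:

```sql
DECLARE @batch INT = 100000;
WHILE 1 = 1
BEGIN
    -- Insert the next slice of A-rows (joined to config table T) missing from S.
    INSERT INTO S (key_col, config_val)
    SELECT TOP (@batch) a.key_col, t.config_val
    FROM A AS a
    JOIN T AS t ON t.key_col = a.key_col
    WHERE NOT EXISTS (SELECT 1 FROM S WHERE S.key_col = a.key_col)
    ORDER BY a.key_col;

    IF @@ROWCOUNT < @batch BREAK;   -- final (possibly partial) batch done
END
```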
Dropping Hypothetical Indexes Posted: 20 Jul 2013 11:31 AM PDT In the past I thought I'd deleted hypothetical indexes using either a DROP INDEX statement for clustered indexes or a DROP STATISTICS statement for non-clustered indexes. I have a database that is full of DTA remnants that I would like to clean up; however, when I try to drop the object I always receive an error telling me that I cannot drop the object "because it does not exist or you do not have permission". I am a full sysadmin on the server, so I would expect to have rights to do anything. I've tried this with both DROP STATISTICS and DROP INDEX statements but both give me the same error. Has anyone deleted these before? Is there a trick I'm missing? Addendum: Poking around in this, I just noticed that if I right-click on the object, both the 'Script As' and 'DELETE' options are greyed out.
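One approach that may help: hypothetical indexes are flagged in sys.indexes, so the DTA leftovers can be enumerated and DROP statements generated from the catalog rather than guessed at. A sketch:

```sql
-- Generate a DROP INDEX statement for every hypothetical (DTA) index.
SELECT 'DROP INDEX ' + QUOTENAME(i.name)
     + ' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id))
     + '.' + QUOTENAME(o.name) + ';'
FROM sys.indexes AS i
JOIN sys.objects AS o ON o.object_id = i.object_id
WHERE i.is_hypothetical = 1;
```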
SA permissions issues with many nested objects Posted: 20 Jul 2013 05:32 PM PDT I have a broker application that's relatively complicated. Today, after I made some changes, I started getting the error:
The whole scenario up to the point of the error is: (In Database ABC)
The check in the trigger is, I believe, what is causing the issue. If I run the update manually, it works fine. I have also used
Other relevant facts:
Is there some sort of strange scoping happening because all this is running in the context of broker? Updates Some more info:
How should I set up my social network database design? Posted: 20 Jul 2013 08:31 AM PDT I am designing a DB for a social network type website where users enter a lot of information, varying from family member details, education, and employment to personal favorites such as TV, movies, music, food, books, etc. I'm using InnoDB and expect exponentially increasing writes but far fewer reads. I already have 26 tables.
Error "#1118 - row size too large" on the first row of the table only Posted: 20 Jul 2013 02:30 AM PDT I ran into a problem using a MySQL database. I have some columns of type TEXT, but when I try to enter data into the first row I get the error "#1118 - Row size too large. The maximum row size for the used table type, not counting BLOBs, is 8126. You have to change some columns to TEXT or BLOBs" for anything with more than 15 characters. This only occurs for the first row; all the other rows in the column and table work just fine. Help! Indexes: Server version: 5.5.9
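For context: on InnoDB's older Antelope/COMPACT row format, even TEXT columns keep a 768-byte prefix in-row, so many TEXT columns can still exceed the ~8K row limit. A common mitigation, sketched with a placeholder table name (MySQL 5.5 syntax):

```sql
SET GLOBAL innodb_file_format = Barracuda;   -- prerequisite for DYNAMIC on 5.5
SET GLOBAL innodb_file_per_table = 1;

-- DYNAMIC stores long columns fully off-page, leaving only a 20-byte pointer in-row.
ALTER TABLE mytable ROW_FORMAT = DYNAMIC;
```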
Database user specified as a definer Posted: 20 Jul 2013 10:31 AM PDT I have a view in my database. The problem is below. Error SQL query: MySQL said:
I Googled for a solution: the user was created for a specific host and not globally. How do I create the user for any host ("global")?
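A sketch of both common fixes, with placeholder names: recreate the missing definer with a wildcard host (which is what "global" amounts to), or repoint the view at a definer that exists:

```sql
-- '%' matches any host, unlike a user created as 'someuser'@'localhost'.
CREATE USER 'someuser'@'%' IDENTIFIED BY 'password';
GRANT SELECT ON mydb.* TO 'someuser'@'%';

-- Or re-create the view under an existing definer:
ALTER DEFINER = CURRENT_USER
    VIEW mydb.myview AS
    SELECT 1 AS col;   -- re-issue the view's real SELECT here
```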
Problem in Fetching Result set in Desired Format Posted: 20 Jul 2013 04:31 AM PDT I have 2 tables having the following columns. The query is
The result set returned is as follows.
One person can have many item types and can order the same item many times. I want to show the record set in such a manner that, whether or not the person purchased the item, if the item is present in the record set I draw a column and show the corresponding results (order_date, status, etc.) sequentially; if an item was purchased 3 times, I want to show the item with its 3 corresponding records. For example, if there are 3 items A, B, C, then the output should be shown on the screen as:
PID A B C
1 Y N Y
2/5/2012 5/5/2012 7/5/2012
----------- Y
17/5/2012
2 Y N
12/6/2012 15/6/2012
Now suppose there is another item. What I am doing is first fetching the result set from the above query, then traversing through it to check the existence of each item; if the item is present I show it and its corresponding values. So first I have to traverse the whole result set to get the item_type, and then show it.
Best practice for tempdb log file Posted: 20 Jul 2013 12:30 AM PDT I have read many blogs here and there about how to configure tempdb data files, but I didn't find any information regarding the tempdb log file. Here's the strategy that I'm presently using with my tempdb:
But like I said, nobody talks about the tempdb log file. What should I do with it? On my setup, this file is in the same place as the tempdb data files. What size and autogrowth value should I use for the tempdb log file?
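tempdb's log is recreated at startup like the data files, so the usual advice is a single log file, pre-sized to what a restart-to-restart cycle actually uses, with a fixed (not percentage) growth increment. A sketch with illustrative sizes; `templog` is the default logical name:

```sql
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 4096MB, FILEGROWTH = 512MB);
```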
How do I execute an Oracle SQL script without sqlplus hanging on me? Posted: 20 Jul 2013 03:32 PM PDT For an automated task I would very much like to run some SQL scripts and make sure that
How can I do this with Oracle (and
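Though the script itself was lost in this digest, the standard way to keep sqlplus from sitting at a prompt is to make the script exit explicitly and map failures to the process exit code:

```sql
-- script.sql
WHENEVER SQLERROR EXIT SQL.SQLCODE   -- abort and return the error code
WHENEVER OSERROR  EXIT FAILURE
SET ECHO ON FEEDBACK ON

-- ... the actual DDL/DML goes here ...

EXIT SUCCESS                         -- never fall through to the prompt
```

Invoked as `sqlplus -S user/password@db @script.sql`, the shell then sees a zero exit code on success and non-zero on error.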
Database schema design help needed Posted: 20 Jul 2013 09:31 AM PDT I am developing a PHP application expecting millions of records, both parent and children. My goal is to design an optimized database to achieve high speed and performance. This application will have 1M users; each user will have their own dashboard where they can create their own users/roles, pages, and posts. I am thinking about two possible solutions
I think with #1 the query will be slow, as all users will share one table; the second doesn't seem a perfect solution either, as the number of tables will increase exponentially. Please suggest the best possible solution and share tips for designing an optimized MySQL schema.
Object name 'Clean Up History' in use when trying to complete the maintenance plan wizard in SQL Server 2008 R2 Posted: 20 Jul 2013 03:30 AM PDT I am trying to create a maintenance plan on an instance running SQL Server 2008 R2 SP1 (no CUs installed). When completing the wizard I get the following error:
I've checked both
The server is being backed up by Microsoft DPM. Could it be that DPM inserts a cleanup job somewhere that I don't see?
Second time query execution using different constants makes it faster? Posted: 20 Jul 2013 12:31 PM PDT Can someone explain, or direct me to an explanation of, how execution on indexes happens with different constants at intervals in MySQL. I notice that only the first execution on the table takes time; after that, with different constants, the query executes very quickly. I would like to know how to execute the query so that it takes the same amount of time every time it executes with different constants. Is there a parameter to set off/on? First execution time: 9 mins. Later execution time: 18 secs.
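The usual explanation is cache warm-up: the first run pulls index and data pages from disk into the InnoDB buffer pool (and possibly the query cache), and later runs hit memory. To make timings more comparable, at least bypass the query cache:

```sql
-- SQL_NO_CACHE skips the query cache; any remaining speed-up after the first
-- run comes from the buffer pool, which only a server restart fully clears.
SELECT SQL_NO_CACHE col FROM big_table WHERE indexed_col = 12345;
```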
SSRS appears to be ignoring Permissions set using Report Manager Posted: 20 Jul 2013 06:32 PM PDT I have set up SSRS on SQL Server 2008 in native mode. As an administrator I can log in to Report Manager, upload reports and run them, and also use the Web Service URL to generate reports. I have also created a local user on the machine; I went into Report Manager as admin and, at the top level, set permissions that assign the local user to all roles. When I log in to the machine as that user and navigate to Report Manager, I just get the heading for the page but do not see any of the folders that are configured. I've checked, and the folders are set to inherit parent permissions, and they show the newly created local user too. It seems odd that I have set the permissions, yet SSRS is still not showing what I should be able to see. Is there another step I need to take besides configuring the permissions in Report Manager? When logged in as the newly created local user:
Privileges needed for Oracle Text Posted: 20 Jul 2013 01:35 PM PDT I'm new to Oracle Text and I am using it to support a search feature in an application I have written. Everything seems to be working correctly, except I'm not happy that I had to store a bunch of things in the ctxsys schema in order to get around insufficient-privilege issues. (This is with Oracle 10gR2.) Here is the SQL script I have to set up Oracle Text for my application
This script is intended to be run by my personal user, which has the DBA role. All the relevant tables are owned by the "DMG" schema, which has very limited privileges. As you can see, I am using a user_datastore to aggregate text from multiple tables into a single document. Also, I could only make this work if the indexing procedure, the index itself, and the dbms_job to sync the index periodically were all owned by ctxsys, which seems like a bad idea. Preferably, I would like everything to be owned by the DMG schema. The biggest issue I ran into was in the create index statement. It would always fail with
After some digging I figured out that the create index statement was attempting to create some tables under the ctxsys schema called
How can I make everything owned by the DMG schema (i.e. what are the minimum privileges I need to grant the DMG schema)? Is there a way for me to remove the grants on the DMG tables to ctxsys? EDIT: I was originally incorrect in saying that giving the DMG schema the
This is a very strange message because there are NO quotas set up anywhere on this database. I have no idea how it is hitting a quota limit.
Linked Server Query Results: OPENQUERY vs Distributed Transactions Posted: 19 Jul 2013 11:30 PM PDT I have two databases on different servers. One is named REPORTDB and the other is named TRANSDB. TRANSDB is referenced as a linked server on REPORTDB named TRANS. I execute a query like this from REPORTDB using four-part naming and distributed transactions: I expect the result of this query to generate a CREATE SYNONYM statement for each table in TRANSDB inside my REPORTDB (Table1, Table2, Table3). However, it only creates one statement, for the last table returned in the result set from TRANSDB (Table3). Using OPENQUERY, this correctly returns three CREATE SYNONYM statements for Table1, Table2, and Table3. Since I do not wish to use
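A likely cause, though the original query was dropped from this digest: `SELECT @var = expr FROM ...` over multiple rows assigns once per row, so only the last assignment survives. Concatenating every row into the variable in one pass avoids that; a sketch with placeholder names:

```sql
DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql = @sql
    + 'CREATE SYNONYM ' + QUOTENAME(name)
    + ' FOR TRANS.TRANSDB.dbo.' + QUOTENAME(name) + ';' + CHAR(10)
FROM TRANS.TRANSDB.sys.tables;   -- four-part name via the linked server

PRINT @sql;   -- all three CREATE SYNONYM statements, not just the last
```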
Merge Replication Subquery Filters Posted: 20 Jul 2013 05:19 AM PDT I am using merge replication with SQL 2012. I have a number of articles in my publication and will have to come up with quite an elaborate set of filters. If I add a filter on my article it can contain a subquery, but I noticed it doesn't pick up on changes properly. For instance, take Table1 and Table2. The filter on Table1 is
If more records are added to Table2, I would want the contents of Table1 to be changed accordingly when the sync process next occurs. I found out that this is not working. Is it because Table2 was not an article in my merge replication publication when I tried it, or is it because merge replication doesn't support subqueries like this and doesn't recognise when a filter has changed? That would mean the sync process would have to scan each filter, work out what tables the subquery relates to, and figure out whether these have changed. Does it do something like this? UPDATE: I have tried a number of things, but they don't work. So the only other option mentioned is a view, but I cannot get the view to work either. The first problem is that if I replicate a view it just gets replicated as a view, and the underlying table needs to be there. But Simon, I read your other post and that is not what you are saying. I think what you are saying is that I should be able to select a table in my replication publication and then filter it based on a view like this,
But when I look in Management Studio I see a filter for my table which looks like this,
And that part of my filter is fixed; I cannot edit [dbo].[berm] and replace it with a view. So what I think you are suggesting is that I can change the above to,
And the BermView is a view which filters the Berm table. Is that right, and if so, how?
You are subscribed to email updates from Recent Questions - Database Administrators Stack Exchange.