[how to] Was it mandatory to put a "distinct" field as the first field in a query? |
- Was it mandatory to put a "distinct" field as the first field in a query?
- Counting the same column of different value sets in a single group by clause
- Why isn't postgres prompting me about a new user?
- Can listen_addresses really be set to a list?
- Show clients with staff assigned and no staff assigned
- ORA-12560 again
- Dividing DBCC CHECKDB over multiple days
- Configuring PostgreSQL to match server configuration
- Problems working with decode function when there is more than one expression
- Index will be removed
- How to troubleshoot db/app latency during transaction log backup
- startup mount issues with Oracle
- SQL Server 2008 - Question about index behaviour
- "Use Database" command inside a stored procedure
- How does MySQL or PostgreSQL deal with multi-column indexes in ActiveRecord?
- Run Multiple Scripts In One Transaction Across Multiple Servers [duplicate]
- mongodb config servers not in sync
- MySQL: ERROR 126 (HY000): Incorrect key file for table + slow logical backups
- Why does my SQL Server show more than half a million active tasks?
- How to get data comparisons from two mysql tables [on hold]
- Moving large databases
- Dropping Hypothetical Indexes
- SA permissions issues with many nested objects
- Database user specified as a definer
- How do I execute an Oracle SQL script without sqlplus hanging on me?
- SSRS appears to be ignoring Permissions set using Report Manager
- How can I copy data from one MySQL server to another based on a SELECT statement (then delete the data from the original)?
- Couldn't install SQL Server 2012 on machine with Windows 7 SP1, VS 2010 SP1
- SQL Query Formatter
Was it mandatory to put a "distinct" field as the first field in a query? Posted: 19 Aug 2013 08:48 PM PDT Just out of curiosity: it looks like a syntax error is raised when the DISTINCT column is not the first thing in the select list. See this example in SQLite. Why is that? Do I really have a syntax error? |
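A minimal SQLite sketch of the behaviour being described (table and data are made up): DISTINCT applies to the whole select list, so it must come immediately after SELECT.

```sql
-- Hypothetical table for illustration.
CREATE TABLE t (a INTEGER, b INTEGER);
INSERT INTO t VALUES (1, 10), (1, 20), (1, 10);

-- Valid: DISTINCT is a keyword applying to the whole select list,
-- so it appears immediately after SELECT.
SELECT DISTINCT a, b FROM t;

-- Syntax error in SQLite (and standard SQL): DISTINCT cannot be
-- attached to a single column in the middle of the select list.
-- SELECT a, DISTINCT b FROM t;
```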
Counting the same column of different value sets in a single group by clause Posted: 19 Aug 2013 08:42 PM PDT I have a table (SQLite DB) like this,
Now I need to count how many rows fall into each of two different value sets. As far as I can tell, I can't do this all at once, only with two separate SQL statements. So, can I merge the two queries into a single one? |
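Assuming the goal is counting two different value sets per group, conditional aggregation does it in one pass; the table and column names below are made up since the original table definition was not shown.

```sql
-- Hypothetical table and value sets: count rows matching each set,
-- per group, in a single GROUP BY pass.
SELECT grp,
       SUM(CASE WHEN val IN (1, 2, 3) THEN 1 ELSE 0 END) AS count_set_a,
       SUM(CASE WHEN val IN (4, 5)    THEN 1 ELSE 0 END) AS count_set_b
FROM mytable
GROUP BY grp;
```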
Why isn't postgres prompting me about a new user? Posted: 19 Aug 2013 08:37 PM PDT I'm following these instructions to get postgres working with rails on Windows 7:
But this isn't happening. After I run step 1, it asks me for a password, then skips straight to step 5. This is completely screwing me over! Where did steps 2 through 4 go? I then tried doing it manually, but I'm becoming really miserable about this. Please help... |
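One hedged workaround while the interactive prompts misbehave is to create the role directly in SQL from psql; the role name and options below are placeholders.

```sql
-- Run from psql as the postgres superuser; names are placeholders.
CREATE ROLE myapp WITH LOGIN PASSWORD 'secret' CREATEDB;

-- Verify the role and its attributes:
SELECT rolname, rolcreatedb, rolcanlogin
FROM pg_roles
WHERE rolname = 'myapp';
```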
Can listen_addresses really be set to a list? Posted: 19 Aug 2013 08:19 PM PDT I have a VM with IP address 192.168.0.192 running PostgreSQL. If I specify a single value, I can connect from another VM at 192.168.0.191 and from localhost. But I can't seem to use a list to tell PostgreSQL to use those two addresses. If I change listen_addresses to a list, I can no longer connect from 192.168.0.191. I notice that almost all examples on Stack Exchange set listen_addresses to '*'. Is this because the list form does not work? |
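The list form does work, but listen_addresses names the local interface addresses the server binds to, not the clients that may connect; remote clients are authorized in pg_hba.conf. A sketch, assuming 192.168.0.192 is the server's own address:

```
# postgresql.conf: listen_addresses is a comma-separated list of LOCAL
# addresses the server binds to, not a list of allowed clients.
listen_addresses = 'localhost, 192.168.0.192'   # the server's own addresses

# Which remote hosts may connect is decided in pg_hba.conf, for example:
# host    all    all    192.168.0.191/32    md5
```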
Show clients with staff assigned and no staff assigned Posted: 19 Aug 2013 08:12 PM PDT I am trying to write a query to show a client list including whether or not there is a staff member assigned. If I use my first query, it shows all clients in a group, although the staff assignment is left out. If I add the staff table in, it returns an empty result. Can anyone steer me to the correct way to build this? |
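A LEFT JOIN keeps every client and shows the staff member only when one is assigned; the table and column names below are guesses, since the original queries were not included.

```sql
-- Placeholder schema: clients, groups, staff, and a client_staff link table.
SELECT c.client_name,
       g.group_name,
       COALESCE(s.staff_name, 'Unassigned') AS staff_name
FROM clients c
JOIN groups g            ON g.group_id  = c.group_id
LEFT JOIN client_staff cs ON cs.client_id = c.client_id
LEFT JOIN staff s        ON s.staff_id  = cs.staff_id;
```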
ORA-12560 again Posted: 19 Aug 2013 02:32 PM PDT I have a Windows Server 2008 virtual machine with Oracle XE 11.2.0; the Oracle service is running and I can ping the machine from the Windows 7 64-bit host. For several days I was able to connect to Oracle from the host machine. Today, I'm again greeted by the lovely ORA-12560 (TNS: protocol adapter error) message. Yes, I have checked the usual configuration; yes, the Oracle service is running. Most forum entries on this error message suggest the usual fixes. Sigh. It was working for several days. Any ideas? |
Dividing DBCC CHECKDB over multiple days Posted: 19 Aug 2013 02:32 PM PDT I'm working on implementing Paul Randal's method of manually spreading DBCC CHECKDB over several days for very large databases, which basically consists of dividing the tables in the database into buckets of roughly equal size, running DBCC CHECKTABLE against a different bucket each night, and running DBCC CHECKALLOC and DBCC CHECKCATALOG regularly alongside it.
Has anyone used this technique? Any existing scripts out there? I'm concerned this may not actually cover everything that CHECKDB does; the Books Online documentation for CHECKDB says that in addition to CHECKALLOC, CHECKCATALOG and CHECKTABLE, it also validates the contents of every indexed view, validates link-level consistency between table metadata and FILESTREAM files and directories, and validates the Service Broker data in the database.
So here are my questions: do the individual commands above cover those extra checks, and if not, how do people handle them?
(Note: this will be a standard routine for thousands of existing databases across hundreds of servers, or at least every database over a certain size. This means that options like restructuring all databases to use CHECKFILEGROUP aren't really practical for us.) |
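A rough T-SQL sketch of the bucket idea (not Paul Randal's actual script; the weekday-based bucketing below is crude and size-blind):

```sql
-- Run the cheap whole-database checks every night, and CHECKTABLE
-- against a different subset of tables each night of the week.
DBCC CHECKALLOC;
DBCC CHECKCATALOG;

DECLARE @bucket int = DATEPART(WEEKDAY, GETDATE());   -- 1..7
DECLARE @sch sysname, @obj sysname, @sql nvarchar(max);

DECLARE c CURSOR FOR
    SELECT s.name, t.name
    FROM sys.tables t
    JOIN sys.schemas s ON s.schema_id = t.schema_id
    WHERE t.object_id % 7 = @bucket - 1;               -- crude bucketing

OPEN c;
FETCH NEXT FROM c INTO @sch, @obj;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'DBCC CHECKTABLE (''' + QUOTENAME(@sch) + N'.'
             + QUOTENAME(@obj) + N''') WITH NO_INFOMSGS;';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM c INTO @sch, @obj;
END
CLOSE c; DEALLOCATE c;
```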
Configuring PostgreSQL to match server configuration Posted: 19 Aug 2013 02:17 PM PDT We are currently running the web application and database server on a single dedicated machine (hardware configuration: http://www.hetzner.de/en/hosting/produkte_rootserver/ex10). Around 50 GB of RAM is free; PostgreSQL takes only 600 MB while web server processes take 7 GB. Average CPU load is 25%. Software is Ubuntu 12.04 and Postgres 9.1. Database size is 15 GB. As load increased, our application response time grew from 230 ms to 450 ms over the last few months; the application accounts for 40% of that and the DB for 60%. We cache a lot of things, but we were wondering if we could gain something by tweaking the Postgres configuration. After a bit of research we found that the default PostgreSQL configuration (https://gist.github.com/darkofabijan/9453c793ceec1ac6274d) is really conservative and that we should definitely tweak it. After running pgtune we got a set of recommended values. Once we started running our PostgreSQL server with the recommended values, we got somewhat regular spikes where application response time jumped to 2000 ms+, and the increase was in database response time. After running it for a couple of hours we reverted to the original Ubuntu 12.04/PostgreSQL 9.1 configuration. Obviously we don't have much experience with running DBs. Both concrete recommendations regarding the pgtune-suggested values and pointers to good resources would be much appreciated. Edit 1: |
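For illustration only, these are the kinds of postgresql.conf settings pgtune typically raises on a 64 GB box running 9.1; the right numbers depend on testing against the real workload, and one common culprit for periodic latency spikes after such a change is checkpoint I/O, which log_checkpoints can confirm.

```
# Illustrative values only; 64 GB RAM and PostgreSQL 9.1 assumed.
shared_buffers               = 8GB     # 9.1 default is only a few tens of MB
effective_cache_size         = 40GB    # rough share of RAM the OS can cache
work_mem                     = 32MB    # per sort/hash, per connection
maintenance_work_mem         = 1GB
checkpoint_segments          = 32      # 9.1-era setting; spreads checkpoint I/O
checkpoint_completion_target = 0.9
log_checkpoints              = on      # see whether spikes line up with checkpoints
```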
Problems working with the decode function when there is more than one expression Posted: 19 Aug 2013 09:18 PM PDT I am using Oracle 11g and have two tables: acct and addruse. I am listing all the mailing addresses. The query I need should compare the "mailing address" column from the acct table with the "type" column in the addruse table and list all those addresses. If the mailing address field is null, that means it's using a primary address, so it should grab the primary address from the addruse table; but if there is no primary address then it should grab whatever there is in the addruse table. This is what I have so far. I don't know how to change it so that it first checks for a primary address and, if there is no primary address, grabs whatever there is, as for acctnbr = 000003. |
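One hedged way to express the fallback logic, with guessed column names, is to rank each account's addresses by preference and keep the best one:

```sql
-- Column names are guesses based on the question text (Oracle 11g).
-- Rank addresses: explicit mailing type first, then primary, then anything
-- else, and keep the best-ranked row per account.
SELECT acctnbr, addr_line1, city, state
FROM (
    SELECT a.acctnbr, u.addr_line1, u.city, u.state,
           ROW_NUMBER() OVER (
               PARTITION BY a.acctnbr
               ORDER BY CASE
                            WHEN u.type = a.mailing_address THEN 1
                            WHEN u.type = 'PRIMARY'         THEN 2
                            ELSE 3
                        END
           ) AS rn
    FROM acct a
    JOIN addruse u ON u.acctnbr = a.acctnbr
)
WHERE rn = 1;
```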
Index will be removed Posted: 19 Aug 2013 12:50 PM PDT I'm trying to simply change the length of a column within a table through SQL Server Management Studio, but I receive a 'Validation Warning'. I remember from experience that if I actually go through with this, the binding will not be re-added; in the past I've had to manually write a script to re-create the view. This doesn't seem right: why would it just remove it without re-adding it at the end of the operation? It makes me think I don't have something set up correctly, or am going about this the wrong way. Am I? Or is it 'correct' procedure to have to manually re-add things in this way after table changes are made? |
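If the warning is about a schema-bound (indexed) view on that column, one hedged approach is to make the change in T-SQL and re-create the dependent objects yourself; all object names below are hypothetical.

```sql
-- Hypothetical names: drop the dependents, alter the column, re-create them.
DROP INDEX IX_vOrders ON dbo.vOrders;   -- index on the view, if one exists
DROP VIEW dbo.vOrders;

ALTER TABLE dbo.Orders ALTER COLUMN CustomerRef varchar(100) NOT NULL;
GO

CREATE VIEW dbo.vOrders WITH SCHEMABINDING AS
    SELECT OrderId, CustomerRef FROM dbo.Orders;
GO

CREATE UNIQUE CLUSTERED INDEX IX_vOrders ON dbo.vOrders (OrderId);
```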
How to troubleshoot db/app latency during transaction log backup Posted: 19 Aug 2013 09:14 AM PDT I have an application that uses a SQL Server (2008 R2) database, and we are periodically having performance problems with the application that coincide with our 15-minute transaction log backups. The application controls latency-sensitive industrial machinery, and we are finding that when the system has a high transaction volume, the application has queries that run too long (>5 seconds for a brief period of time). These queries only run slowly at the same time the transaction log backups are happening, so it appears to be related to the backups. When the system is performing without problems, our transaction log backups take 2-4 seconds; when we are under a heavier load and have problems, they take 6-7 seconds. That is apparently just enough to cause some FIFO message dispatch queues to fill up in the application. First of all, I was under the impression that transaction log backups should be pretty transparent to the application, with no locking or anything else going on. Does this point to some kind of IO contention being involved, given that we see database latency while the transaction log backup is running? What are the pros and cons of moving to something like a 5-minute transaction log backup cadence instead of 15 minutes? The disk backend is a NetApp FAS2220 with a bunch of 600 GB 10k SAS drives. The DBA is convinced that this is an application problem and not a database problem, so I need to know how to troubleshoot this and pin it down as either an application or a database problem. TLDR: Database or application latency seen under heavy load during transaction log backup. How to troubleshoot and resolve? |
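Log backups take no locks, but they do add sequential read I/O on the log file and write I/O to the backup target, so I/O contention is the usual suspect. One hedged way to check is to snapshot per-file I/O stall counters just before and just after a backup window and compare. As for cadence, more frequent log backups each have less log to read, so the individual interruptions get shorter, at the cost of happening more often.

```sql
-- Snapshot per-file I/O stall counters; capture before and after a log
-- backup window and diff the numbers to see where the time is being spent.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name                  AS logical_file,
       mf.type_desc             AS file_type,       -- ROWS vs LOG
       vfs.num_of_reads,  vfs.io_stall_read_ms,
       vfs.num_of_writes, vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;
```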
startup mount issues with Oracle Posted: 19 Aug 2013 10:10 AM PDT I am a newbie with Oracle so bear with me. I am trying to simply shut down and restart an Oracle DB instance on a Windows Server 2008 machine (Oracle 11.2.0). Logged in via SQL*Plus, I issue the shutdown command and get the expected responses. I exit out of SQL*Plus and try to log back in (ORACLE_SID is set correctly) so I can issue STARTUP MOUNT, but I get an error. All of the Windows services involving my database dantest3 are started. Doing research, I found the oradim command, but while that starts my database and allows me to log in, that is the full startup, and I want to be in "mount" mode only. Any advice or suggestions would be greatly appreciated! |
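For reference, a hedged sketch of the usual sequence; STARTUP requires a SYSDBA connection, which is why logging back in as an ordinary user after the shutdown fails.

```sql
-- Run from a command prompt on the server:
--   C:\> set ORACLE_SID=DANTEST3
--   C:\> sqlplus / as sysdba
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;                    -- instance started, control files read, DB not open
SELECT status FROM v$instance;    -- should report MOUNTED
-- ALTER DATABASE OPEN;           -- only when you want the full startup
```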
SQL Server 2008 - Question about index behaviour Posted: 19 Aug 2013 01:31 PM PDT I have a general question about advanced issues regarding index behaviour. In short, roughly one year ago we dynamically dropped and recreated a number of indexes with different filters, but using the same names as before. Our tests seemed to run OK, but we later found out that our testing environments resolved to using the plans according to the old index definitions, while our production environment used the new ones. The tests had therefore produced the wrong results, and we changed these indexes back to the old versions in production, where the old filter definitions were immediately applied to the plans. This worked fine for the past 6 months. Now, however, we have the opposite problem. Our production environment has suddenly fallen back to using the plans for these 6-month-old, falsely filtered indexes, where until a few weeks back it was still using the ones the existing indexes are supposed to use. We've tested fixing this by again dropping these problem indexes, and this time creating them with a different name entirely. This seems to be working fine. But my question is this: considering that the indexes have been dropped (not just renamed), then created with the same name, and the query plan cache has been cleared AND the statistics updated several times, how is it possible that SQL Server seems to have a mind of its own and has now resolved to using ancient plans that I didn't even know could still exist? Basically, how exactly does SQL Server store and use the data associated with indexes and their plans? How can you force SQL Server to clear that cache, wherever it may be, completely, so that it absolutely cannot simply decide to use ancient detrimental plans based on nothing more than the same index name? How does all of this work, so that we can understand it and never have to deal with this issue again? Thanks! EDIT: It's now all but confirmed that these 6-month-old filtered indexes were the reason. I restored the DB to a testing environment and ran problem queries against it, which produced the wrong execution plans compared to another, ancient testing environment. Checking each and every one of the indexes used by the older, functional environment, every single index definition was identical. I updated the stats, reorganized and rebuilt the indexes so none of them had fragmentation above 35%, cleared the query plan cache, and still the problem persisted. I then proceeded to find specifically those indexes involved in the query that were briefly filtered 6 months ago, dropped them and recreated them (the first time they still didn't work right; after the second attempt, after another restore, they DID start working right). After this, I dropped and created the indexes with different names that hadn't been used before, otherwise using the same definitions. This fixed it every time, and the execution planner would then use the correct indexes. Then I dropped these and created the indexes again using the filter definitions from 6 months back, again with new names to ensure the query planner would use the new definitions instead of some ghost statistics from older ones. The failed plan produced with these indexes was identical to the one initially seen when all definitions, fragmentation, statistics, etc. had been cleared and checked.
This proves, once and for all, that despite all the seemingly available metadata, the execution planner was working all along under the assumption that the indexes were filtered and thus not usable. Do any of you know what could be going on? Or is this something that should be reported as a bug, regardless of how rare it might be? Because the implications of this behavior, and the effects it's already had, are severe enough that I'm currently considering logging every index name just so none of them will ever be reused. Otherwise this paints a grim picture: SQL Server may store ancient statistics in the background and start using them at any time, completely nullifying the structure and purpose of new indexes which may be business-critical. While it seems extremely likely that the failover had something to do with this, I still can't understand how perfectly working indexes could suddenly be replaced by outdated and completely wrong definitions and statistics, to the point where no amount of rebuilding them or updating the applicable metadata would help, and with no real way to even diagnose that this has happened, other than a sudden decrease in performance and quirky behaviour on the query planner's part. |
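Two things worth checking in a situation like this are what filter definition the server actually has stored for each index, and whether a truly cold plan cache still produces the bad plan. A hedged sketch with placeholder index and table names:

```sql
-- Confirm the index definitions the server actually has (filtered or not):
SELECT OBJECT_NAME(i.object_id)               AS table_name,
       i.name                                 AS index_name,
       i.has_filter,
       i.filter_definition,
       STATS_DATE(i.object_id, i.index_id)    AS stats_last_updated
FROM sys.indexes AS i
WHERE i.name IN (N'IX_Problem1', N'IX_Problem2');   -- placeholder names

-- Throw away every cached plan (drastic; fine on a test restore):
DBCC FREEPROCCACHE;

-- Rebuild statistics for one table with a full scan:
UPDATE STATISTICS dbo.ProblemTable WITH FULLSCAN;    -- placeholder name
```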
"Use Database" command inside a stored procedure Posted: 19 Aug 2013 02:40 PM PDT I would like to dynamically use a database inside a stored procedure but the What alternatives are there? |
How does MySQL or PostgreSQL deal with multi-column indexes in ActiveRecord? Posted: 19 Aug 2013 09:31 AM PDT I'm creating indexes for my models right now and I want to know how MySQL and PostgreSQL deal with an index with more than 1 column like: That should utilize the index when I do a query like (I think): But will it also utilize the index if I only use the username in a query? |
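For what it's worth, both MySQL and PostgreSQL follow the leftmost-prefix rule for multicolumn B-tree indexes. A sketch in plain SQL, since the actual ActiveRecord migration and queries are only assumed here; EXPLAIN on either database will confirm which case uses the index.

```sql
-- Assumed two-column index (the original migration was not shown):
CREATE INDEX index_users_on_username_and_email ON users (username, email);

-- Uses the index: filters on the full prefix.
SELECT * FROM users WHERE username = 'alice' AND email = 'a@example.com';

-- Can still use the index: username is the leftmost column.
SELECT * FROM users WHERE username = 'alice';

-- Generally cannot use it efficiently: email alone skips the leading column.
SELECT * FROM users WHERE email = 'a@example.com';
```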
Run Multiple Scripts In One Transaction Across Multiple Servers [duplicate] Posted: 19 Aug 2013 01:25 PM PDT This question already has an answer here:
We have deployment scripts that we need to run on databases that are spread across multiple servers. One script only runs on one database, but the scripts depend on each other. We are looking for a way to run all of the scripts as one big transaction so that all scripts either commit or roll back as a whole. How do I do this? I would prefer a way to do this from ADO.NET, but SSMS is cool, too. My current solution (which does not work) is that I begin a transaction in every database, run all my scripts, and then commit/rollback once everything is good. However, I can't run all my scripts, since cross-database dependencies are blocking indefinitely. |
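One hedged option on SQL Server is a distributed transaction spanning linked servers (MSDTC and the RPC OUT option must be configured; server names and statements below are placeholders). From ADO.NET, wrapping all the connections in a single TransactionScope achieves the same promotion to a distributed transaction.

```sql
-- Sketch using linked servers and MSDTC; everything here is a placeholder.
BEGIN DISTRIBUTED TRANSACTION;

    EXEC ('UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE Id = 1;')
        AT [ServerA];
    EXEC ('UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE Id = 1;')
        AT [ServerB];

COMMIT TRANSACTION;   -- or ROLLBACK TRANSACTION on error
```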
mongodb config servers not in sync Posted: 19 Aug 2013 09:56 AM PDT I have a setup with 2 shards, each with 2 replica servers, plus 3 config servers and 2 mongos. I have the following problems: 1) the mongo config servers are out of sync; 2) I used this document to sync the servers: http://docs.mongodb.org/manual/tutorial/replace-config-server/; 3) after the sync I restarted one mongos server and see errors in its logs. The first mongos also has the error "warning: error loading initial database config information :: caused by :: Couldn't load a valid config for collection.documents after 3 attempts. Please try again." but works for now. The second mongos doesn't work after the restart. What are the next steps to recover the config servers? All advice is welcome. |
MySQL: ERROR 126 (HY000): Incorrect key file for table + slow logical backups Posted: 19 Aug 2013 02:56 PM PDT I've got the '/tmp' directory mounted with 'tmpfs' and for some reason this is causing the error above. Please note that the same query works fine when the '/tmp' dir is mounted with an ext4 file system. EDIT: this happens on Server_01, but it also happened on a server with far fewer tables (Server_02). I was using a query to list all databases, but as it didn't work I was watching disk space on '/tmp' instead. Basically I've got a problem with logical backups on the server with ~8000 DBs: it takes many hours (~24) to complete. I've created a simple BASH script (please see below), and backups run very fast initially and then slow down dramatically. Script: |
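Error 126 on a query that needs on-disk temporary tables very often just means the tmpdir filled up, which a small tmpfs will do quickly. A couple of hedged checks:

```sql
-- Where MySQL writes on-disk temporary tables, and how often it needs to:
SHOW VARIABLES LIKE 'tmpdir';
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';
```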
Why does my SQL Server show more than half a million active tasks? Posted: 19 Aug 2013 08:04 PM PDT I ran a query against the task DMV on a SQL Server instance and found it returned about 633,000 records. How can I close/kill the useless tasks? The MDW data collector allocates about 4,000 pages in tempdb each time it runs, and this causes IO pressure when the server is busy. This is a production server, so we do not want to restart the service. The version number is 11.0.3000. scheduler_id ranges from 0 to 47, the rows are spread fairly evenly across schedulers, and the other columns are NULL. |
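Assuming the statement queried sys.dm_os_tasks, a hedged follow-up is to break the count down by task state, since most of those rows are normally pending or idle work items rather than runnable tasks, and then look only at tasks tied to real user sessions.

```sql
-- Break the 633,000 rows down by state to see what they actually are.
SELECT t.task_state, COUNT(*) AS tasks
FROM sys.dm_os_tasks AS t
GROUP BY t.task_state
ORDER BY tasks DESC;

-- Requests tied to an actual user session, which are the ones that matter:
SELECT r.session_id, r.status, r.command, r.wait_type
FROM sys.dm_exec_requests AS r
WHERE r.session_id > 50;
```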
How to get data comparisons from two mysql tables [on hold] Posted: 19 Aug 2013 11:01 AM PDT What I have: the following structure: Table ChrM
Table DMRs
What I want: select all values from table DMRs that fulfill the following condition, for every LOCUS in table ChrM (173 loci in total), to get a (LOCUS, idDMRs, EndPos, StartPos, OTHER1, OTHER2, OTHERn) result set. As a newbie to MySQL I tried doing this one by one: printing Table ChrM on paper, inserting each LOCUS's StartPoint and EndPoint by hand, and querying LOCUS by LOCUS. The results are what I want, but I wonder if I will ever finish analyzing all my data, since I have 50 different DMRs tables to analyze and would have to do this 173 times per table. This is the result I want to get for every LOCUS. So, in conclusion, I would like help on how to solve this problem, or any advice on what I should read or know in order to solve it on my own. |
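Assuming ChrM carries StartPoint/EndPoint per LOCUS and DMRs carries its own StartPos/EndPos (the exact columns and overlap rule are guesses), a single range join covers all 173 loci in one statement:

```sql
-- Column names are guesses from the question text: ChrM(LOCUS, StartPoint,
-- EndPoint) and DMRs(idDMRs, StartPos, EndPos, ...). One join replaces the
-- 173 hand-written per-locus queries.
SELECT c.LOCUS, d.*
FROM ChrM AS c
JOIN DMRs AS d
  ON  d.StartPos >= c.StartPoint
  AND d.EndPos   <= c.EndPoint
ORDER BY c.LOCUS, d.idDMRs;
```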
Moving large databases Posted: 19 Aug 2013 09:09 PM PDT I have a CentOS server and /var/lib/mysql/ is 125 GB (the disk has 1 GB of free space). Ordinarily I would use mysqldump to back up the databases, but I don't normally work with such large databases, so I need to know the safest way of copying the databases over to a new server. All advice appreciated! |
Dropping Hypothetical Indexes Posted: 19 Aug 2013 12:09 PM PDT In the past I thought I'd deleted hypothetical indexes using either a DROP INDEX statement for clustered indexes or a DROP STATISTICS statement for non-clustered indexes. I have a database that is full of DTA remnants that I would like to clean up; however, when I try to drop the object I always receive an error telling me that I cannot drop the object "because it does not exist or you do not have permission". I am a full sysadmin on the server so would expect to have rights to do anything. I've tried this with both DROP STATISTICS and DROP INDEX statements but both give me the same error. Has anyone deleted these before, and is there a trick I'm missing? Addendum: poking around in this, I just noticed that if I right-click on the object, both the 'Script As' and 'DELETE' options are greyed out. |
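A hedged way to find the DTA leftovers and generate the corresponding DROP statements: hypothetical indexes are flagged in sys.indexes, and DTA's statistics-only remnants usually follow the _dta_stat naming convention.

```sql
-- Generate DROP INDEX statements for hypothetical (DTA) indexes.
SELECT 'DROP INDEX ' + QUOTENAME(i.name)
       + ' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id))
       + '.' + QUOTENAME(o.name) + ';' AS drop_stmt
FROM sys.indexes AS i
JOIN sys.objects AS o ON o.object_id = i.object_id
WHERE i.is_hypothetical = 1;

-- DTA statistics-only leftovers show up separately in sys.stats.
SELECT 'DROP STATISTICS ' + QUOTENAME(SCHEMA_NAME(o.schema_id))
       + '.' + QUOTENAME(o.name) + '.' + QUOTENAME(s.name) + ';' AS drop_stmt
FROM sys.stats AS s
JOIN sys.objects AS o ON o.object_id = s.object_id
WHERE s.name LIKE '\_dta\_stat%' ESCAPE '\';
```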
SA permissions issues with many nested objects Posted: 19 Aug 2013 06:09 PM PDT I have a broker application that's relatively complicated. Today, after I made some changes, I started getting the error:
The whole scenario up to the point of the error is: (In Database ABC)
The check in the trigger is, I believe, what is causing the issue. If I run the update manually, it works fine. Other relevant facts:
Is there some sort of strange scoping happening because all this is running in the context of Service Broker? Updates: some more info:
|
Database user specified as a definer Posted: 19 Aug 2013 11:09 AM PDT I have a view in my database; the problem is the error below. MySQL said:
I Googled for a solution: the user is created for a specific host and not globally. How do I create the user globally? |
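Assuming the error is the usual "The user specified as a definer (...) does not exist", two common fixes are sketched below with placeholder names: recreate the definer account with a wildcard host, or redefine the view with a definer that does exist.

```sql
-- Option 1: recreate the missing definer account, using '%' as the host
-- so it matches connections from any host (placeholder names).
CREATE USER 'appuser'@'%' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON mydb.* TO 'appuser'@'%';
FLUSH PRIVILEGES;

-- Option 2: redefine the view with a definer that exists.
ALTER DEFINER = CURRENT_USER VIEW mydb.myview AS
    SELECT id, name FROM mydb.mytable;   -- placeholder for the real view body
```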
How do I execute an Oracle SQL script without sqlplus hanging on me? Posted: 19 Aug 2013 04:09 PM PDT For an automated task I would very much like to run some SQL scripts and make sure that
How can I do this with Oracle (and sqlplus)? |
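A hedged sketch of the usual approach: put WHENEVER SQLERROR and an explicit EXIT in the script so sqlplus terminates on its own and reports failure through its exit code (the script contents are placeholders).

```sql
-- deploy.sql -- placeholder script contents
-- invoke as: sqlplus -S -L user/password@db @deploy.sql
WHENEVER SQLERROR EXIT SQL.SQLCODE   -- stop and return a non-zero exit code
WHENEVER OSERROR  EXIT FAILURE
SET ECHO ON

CREATE TABLE demo_tab (id NUMBER PRIMARY KEY);
INSERT INTO demo_tab VALUES (1);
COMMIT;

EXIT                                 -- without this, sqlplus waits for input
```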
SSRS appears to be ignoring Permissions set using Report Manager Posted: 19 Aug 2013 07:09 PM PDT I have set up SSRS on SQL Server 2008 in native mode. As an administrator I can log in to Report Manager, upload reports and run them, and also use the Web Service URL to generate reports. I have also created a local user on the machine. I went into Report Manager as admin and, at the top level, set permissions that should assign the local user to all roles. When I log in to the machine as that user and then navigate to Report Manager, I just get the heading for the page but do not see any of the folders that are configured. I've checked, and the folders are set to inherit parent permissions, and they are showing the newly created local user in there too. It seems odd that I have set the permissions, yet SSRS is still not showing what I should be able to see. Is there another step I need to take other than configuring the permissions in Report Manager? When logged in as the newly created local user: |
How can I copy data from one MySQL server to another based on a SELECT statement (then delete the data from the original)? Posted: 19 Aug 2013 09:07 PM PDT I have a very large log table from which I want to copy rows, putting them into the same table structure on a new server. I don't want to copy everything, only old rows, so that the table on the main server stays small-ish. So I have to SELECT the data I want and only move (and delete) that. Keep in mind that there is a lot of data, and I don't want to copy it all with a full dump. The best I've come up with is a PHP script, which I will post as a tentative answer, although I'm certain it's not the best option. |
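For the SQL side of such a script, the usual pattern is to move old rows in small batches: copy a slice, delete the same slice, repeat until nothing is affected. Table names, the cutoff, and the assumption that the archive table is reachable from this connection (e.g. a FEDERATED table, or a second connection handled by the script) are all placeholders.

```sql
-- Batch logic sketch with placeholder names; small batches keep locks and
-- the binary log manageable on the busy source table.
INSERT INTO archive_log
SELECT *
FROM   access_log
WHERE  created_at < '2013-01-01'
ORDER  BY id
LIMIT  10000;

DELETE FROM access_log
WHERE  created_at < '2013-01-01'
ORDER  BY id
LIMIT  10000;

-- Repeat both statements until zero rows are affected.
```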
Couldn't install SQL Server 2012 on machine with Windows 7 SP1, VS 2010 SP1 Posted: 19 Aug 2013 06:55 PM PDT I am trying to install SQL Server 2012 RTM on my PC. I have Windows 7 SP1 and VS 2010 SP1 installed, but it keeps giving this error:
When I go to the Microsoft link I find
How can I resolve this? At first I thought adding the current user to the Distributed Replay controller was causing the problem, so I uninstalled everything and tried installing again, but couldn't succeed. Can anyone suggest what may be causing this problem and a possible fix for it? I also have SQL Server 2008 R2 installed, which I want to keep alongside 2012. Additional information: the installer is on a local drive (my hard disk), so there should be no network error, and it's the RTM version of SQL 2012. I found the logs below in C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log\ Feature: SQL Client Connectivity SDK Status: Failed: see logs for details Reason for failure: An error occurred during the setup process of the feature. Next Step: Use the following information to resolve the error, and then try the setup process again. Component name: SQL Server Native Client Access Component Component error code: 1316 Component log file: C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log\20121022_164723\sqlncli_Cpu64_1.log Error description: A network error occurred while attempting to read from the file: D:\Microsoft SQL Server 2012 RTM\1033_ENU_LP\x64\setup\x64\sqlncli1.msi Error help link: go.microsoft.com/… This problem is not yet solved, although other people have installed SQL 2012 with this installer on their PCs. I think something is wrong with my PC or some settings I am using. Please can anyone help me out? |
SQL Query Formatter Posted: 19 Aug 2013 01:40 PM PDT Are there any (Linux-based) SQL query formatting programs/plugins/extensions? I use PostgreSQL and MySQL, but other DBs are welcome as well. I can use a VM to test with, but would prefer a Linux (Ubuntu) based solution. I have seen an online version but nothing installable. Eclipse-based IDEs are a plus as well. For example, turning a one-line query into something readable (a sketch follows below). There are online examples, but I would rather this be in a local environment. UPDATE: Looking at this: |
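As an illustration of the kind of reformatting meant here (the query itself is made up):

```sql
-- Before (one line):
select id,name,created_at from users u join orders o on o.user_id=u.id where o.total>100 order by o.total desc;

-- After (formatted):
SELECT id,
       name,
       created_at
FROM   users u
       JOIN orders o ON o.user_id = u.id
WHERE  o.total > 100
ORDER  BY o.total DESC;
```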