Monday, August 19, 2013

Is it mandatory to put a "distinct" field as the first field in a query?

Posted: 19 Aug 2013 08:48 PM PDT

Just out of curiosity: it looks like a distinct field must be placed ahead of any other fields. Am I wrong?

See this example in SQLite:

sqlite> select ip, distinct code from parser;   -- syntax error?
Error: near "distinct": syntax error
sqlite> select distinct code, ip from parser;   -- works

Why is that? Do I really have a syntax error?
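Worth noting why the second form works: DISTINCT is not a function applied to one column but a quantifier on the whole result row, so it may only appear immediately after SELECT. A quick illustration:

sqlite> select distinct ip, code from parser;   -- legal: de-duplicates whole (ip, code) rows
sqlite> select ip, distinct code from parser;   -- illegal: DISTINCT cannot follow a column

So yes, it really is a syntax error, and in every SQL dialect, not just SQLite.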

Counting the same column of different value sets in a single group by clause

Posted: 19 Aug 2013 08:42 PM PDT

I have a table (SQLite DB) like this,

CREATE TABLE parser (ip text, user text, code text);

Now I need to count how many rows have a code value of either 1, 2, or 3, and how many do not, grouped by the ip field.

But as far as I can tell, I can't do this in one statement; I need two separate queries.

e.g.

select count(*) as cnt, ip
from parser
where code in (1, 2, 3)
group by ip
order by cnt DESC
limit 10

Plus a second query using not in.

So, can I merge the two queries into a single one?
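One common way to merge them (a sketch, untested against your data) is conditional aggregation, which counts both sets in a single pass:

select ip,
       sum(case when code in (1, 2, 3) then 1 else 0 end) as cnt_in,
       sum(case when code not in (1, 2, 3) then 1 else 0 end) as cnt_not_in
from parser
group by ip
order by cnt_in DESC
limit 10;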

Why isn't postgres prompting me about a new user?

Posted: 19 Aug 2013 08:37 PM PDT

I'm following these instructions to get postgres working with rails on Windows 7:

Run "Start Command Prompt with Ruby" then...

  1. createuser -U postgres -P NewUserName
  2. When it prompts for a password for new role, assign it a new password.
  3. No, the new role should not be a superuser.
  4. Yes, the new role should be allowed to create databases.
  5. No, the new role should not be allowed to create more new roles.
  6. Enter the postgresPWD you made when we installed PostgreSQL.

But this isn't happening. After I run #1, it asks me for a password, then skips straight to 5. This is completely screwing me over! Where did 2 through 4 go?

So then I tried doing it manually with createuser -s -U postgres -d -r. (I have to pass -U postgres because otherwise it asks for a password for a user that seems to be based on my Windows account, and no password I give it is accepted; I tried fixing this by changing pg_hba.conf, after which NOTHING worked.) Doing it manually succeeded, but when I then tried createdb newdb_name it said: createdb: could not connect to database template1: FATAL: password authentication failed for user "{my windows account}".

So I guess I could createdb as postgres, but then my rails commands such as rake db:create would still fail. I need the active account to have full rights so that I can keep doing rails work.

I'm becoming really miserable about this. Please help...
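For what it's worth: newer versions of createuser (9.2 and later, if memory serves) no longer ask the role-attribute questions unless you pass --interactive; they assume defaults instead. One way to sidestep the prompts entirely (a sketch; the role name is yours to choose) is to state every attribute on the command line:

createuser -U postgres -P -d -S -R NewUserName

Here -P prompts for the new role's password, -d allows creating databases, -S denies superuser, and -R denies creating further roles, which matches steps 2-5 above.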

Can listen_addresses really be set to a list?

Posted: 19 Aug 2013 08:19 PM PDT

I have a VM with IP address 192.168.0.192 running postgreSQL.

If I specify

listen_addresses = '*'  

then I can connect from another VM at 192.168.0.191 and from localhost.

But I can't seem to use a list to tell postgreSQL to use those two addresses. If I change listen_addresses to a list:

listen_addresses = '192.168.0.191, localhost'  

then I can no longer connect from 192.168.0.191.

I notice that almost all examples on stackexchange set listen_addresses to '*'. Is this because the list form does not work?
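One thing worth checking before concluding that the list form is broken: listen_addresses names the local interfaces the server binds to, not the clients allowed to connect. So on the VM at 192.168.0.192, the list form would presumably need to be:

listen_addresses = '192.168.0.192, localhost'

Whether 192.168.0.191 may then connect is governed by pg_hba.conf, not by listen_addresses.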

Show clients with staff assigned and no staff assigned

Posted: 19 Aug 2013 08:12 PM PDT

I am trying to write a query to show a client list including whether or not there is a staff member assigned. If I use this query:

SELECT g.name AS group,
       cl.name_lastfirst_cs AS client
FROM clients cl, groups g, link_group lg
WHERE lg.zrud_group = ?
  AND lg.zrud_group = g.zzud_group
  AND cl.zzud_client = lg.zrud_client
ORDER BY cl.name_lastfirst_cs

It shows all clients in a group although the staff assignment is left out. If I add the staff in:

SELECT g.name AS group,
       cl.name_lastfirst_cs AS client,
       s.staff_name_cs AS Staff
FROM clients cl, groups g, link_group lg, staff s
WHERE lg.zrud_group = 'NEWHAN'
  AND lg.zrud_group = g.zzud_group
  AND cl.zzud_client = lg.zrud_client
ORDER BY cl.name_lastfirst_cs

It returns an empty result set. Can anyone steer me to the correct way to build this?
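A sketch of a likely fix: the second query never joins staff to the other tables, and an inner join would in any case drop clients with no staff. An explicit outer join along these lines may be what's needed; the s.zrud_client linking column is an assumption, since the real staff-to-client link isn't shown:

SELECT g.name AS group_name,
       cl.name_lastfirst_cs AS client,
       s.staff_name_cs AS staff
FROM link_group lg
JOIN groups g ON g.zzud_group = lg.zrud_group
JOIN clients cl ON cl.zzud_client = lg.zrud_client
LEFT JOIN staff s ON s.zrud_client = cl.zzud_client   -- assumed linking column
WHERE lg.zrud_group = 'NEWHAN'
ORDER BY cl.name_lastfirst_cs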

ORA-12560 again

Posted: 19 Aug 2013 02:32 PM PDT

I have a Windows Server 2008 virtual machine with Oracle XE 11.2.0, the Oracle service is running and I can ping the machine from the Windows 7 64-bit host. The host's tnsnames.ora is correctly set up.

For several days I was able to connect to Oracle from the host machine.

Today, I'm again enlightened by the lovely error message ORA-12560 TNS protocol adapter error when I start sqlplus.exe.

Yes I have set ORACLE_SID and ORACLE_HOME environment variables.

Yes tnsnames.ora is correctly set up.

Yes the Oracle service is running.

Most forum entries on this error message suggest using tnsping, which should be in the bin folder of my Oracle client installation. However, a complete search for tnsping on the host machine turned up no results.

Sigh. It was working for several days.

Any ideas?
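Two hedged observations: tnsping ships with the full client/server installation but not with the Instant Client, which would explain the empty search. And since ORA-12560 is raised before the network is ever touched, it can help to rule tnsnames.ora out entirely with an EZCONNECT string (host, port and service name below are assumptions):

sqlplus system/password@//vmhost:1521/XE

If that works while the tnsnames.ora alias does not, the problem is in the local environment (ORACLE_SID/ORACLE_HOME/PATH), not on the server.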

Dividing DBCC CHECKDB over multiple days

Posted: 19 Aug 2013 02:32 PM PDT

I'm working on implementing Paul Randal's method of manually spreading DBCC CHECKDB over several days for very large databases, which basically consists of:

  • Dividing the tables in the database roughly equally between 7 buckets
  • Running a DBCC CHECKALLOC twice a week
  • Running a DBCC CHECKCATALOG once a week
  • Running a DBCC CHECKTABLE on one bucket each day of the week

Has anyone used this technique? Any existing scripts out there?
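I can't speak for existing scripts, but a minimal sketch of the bucket idea looks like this, using a crude modulo-7 split on object_id; a production version would balance the buckets by table size instead:

-- run one bucket of CHECKTABLEs per day of the week
DECLARE @bucket int = DATEPART(weekday, GETDATE()) - 1;   -- 0..6, varies with DATEFIRST
DECLARE @tbl nvarchar(300), @sql nvarchar(max);
DECLARE c CURSOR FAST_FORWARD FOR
    SELECT QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
    FROM sys.tables t
    JOIN sys.schemas s ON s.schema_id = t.schema_id
    WHERE t.object_id % 7 = @bucket;
OPEN c;
FETCH NEXT FROM c INTO @tbl;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'DBCC CHECKTABLE (''' + @tbl + N''') WITH NO_INFOMSGS;';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM c INTO @tbl;
END
CLOSE c;
DEALLOCATE c;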

I'm concerned this may not actually cover everything that CHECKDB does; the Books Online documentation for CHECKDB says that in addition to CHECKALLOC, CHECKCATALOG and CHECKTABLE, it also:

  • Validates the contents of every indexed view in the database.
  • Validates link-level consistency between table metadata and file system directories and files when storing varbinary(max) data in the file system using FILESTREAM. (SQL 2008 only)
  • Validates the Service Broker data in the database.

So here are my questions:

  1. Are these additional checks necessary/important? (Indexed views are probably the bigger concern for me; I don't think we are using Service Broker or FILESTREAM yet.)

  2. If so, are there ways to perform these additional checks separately?

  3. CHECKALLOC and CHECKCATALOG seem to run very quickly, even on large dbs. Any reason not to run these every day?

(Note: this will be a standard routine for thousands of existing databases across hundreds of servers, or at least every database over a certain size. This means that options like restructuring all databases to use CHECKFILEGROUP aren't really practical for us.)

Configuring PostgreSQL to match server configuration

Posted: 19 Aug 2013 02:17 PM PDT

We are currently running the web application and database server on a single dedicated machine. This is the hardware configuration: http://www.hetzner.de/en/hosting/produkte_rootserver/ex10. Around 50 GB of RAM is free. PostgreSQL takes only 600 MB, while web server processes take 7 GB. Average CPU load is 25%.

Software is Ubuntu 12.04 and Postgres 9.1. Database size is 15 GB.

As load increased, our application response time grew from 230 ms to 450 ms over the last few months. The application accounts for 40% of that response time and the DB for 60%.

We cached a lot of things, but we were wondering if we could gain something by tweaking the Postgres configuration. After a bit of research we found that the default PostgreSQL configuration (https://gist.github.com/darkofabijan/9453c793ceec1ac6274d) is really conservative and that we should definitely tweak ours.

After running pgtune we got the following recommended values:

maintenance_work_mem = 1GB
checkpoint_completion_target = 0.7
effective_cache_size = 44GB
work_mem = 288MB
wal_buffers = 4MB
checkpoint_segments = 8
shared_buffers = 15GB
max_connections = 200

Once we started running our PostgreSQL server with the recommended values, we got fairly regular spikes where application response time jumped to 2000 ms+, with the increase coming from database response time. After running it for a couple of hours we reverted to the original Ubuntu 12.04/PostgreSQL 9.1 configuration.

Obviously we don't have much experience with running DBs. Both concrete recommendations regarding the pgtune-suggested values and pointers to good resources would be much appreciated.
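Not an answer, but one low-risk first step, given that the spikes appeared when shared_buffers jumped from 24MB to 15GB: make checkpoint activity visible in the log and smooth it out, then correlate the spikes with the logged checkpoints. A sketch (these are common starting points, not tuned recommendations):

log_checkpoints = on
checkpoint_segments = 32
checkpoint_completion_target = 0.9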

Edit 1:

            name            |         current_setting          |        source
----------------------------+----------------------------------+----------------------
 application_name           | psql                             | client
 client_encoding            | UTF8                             | client
 DateStyle                  | ISO, MDY                         | configuration file
 default_text_search_config | pg_catalog.english               | configuration file
 external_pid_file          | /var/run/postgresql/9.1-main.pid | configuration file
 lc_messages                | en_US.UTF-8                      | configuration file
 lc_monetary                | en_US.UTF-8                      | configuration file
 lc_numeric                 | en_US.UTF-8                      | configuration file
 lc_time                    | en_US.UTF-8                      | configuration file
 log_line_prefix            | %t                               | configuration file
 log_timezone               | localtime                        | environment variable
 max_connections            | 100                              | configuration file
 max_stack_depth            | 2MB                              | environment variable
 port                       | 5432                             | configuration file
 shared_buffers             | 24MB                             | configuration file
 ssl                        | on                               | configuration file
 TimeZone                   | localtime                        | environment variable
 unix_socket_directory      | /var/run/postgresql              | configuration file

Problems working with decode function when there is more than one expression

Posted: 19 Aug 2013 09:18 PM PDT

I am using Oracle 11g. I have these two tables:

Table acct:

acctnbr || Name    || Mailing address
000001  || Slater  ||
000002  || Baker   || Alt
000003  || Parke   ||

Table addruse:

acctnbr || Address    || Type
000001  || 1 aby rd   || Pri
000001  || 2 ctr strt || Res
000002  || 3 fbry rd  || Alt
000003  || 4 jnry rd  || Res

I am listing all the mailing addresses. The query I need should compare the "mailing address" column from the acct table with the "type" column in the addruse table and list all matching addresses. If the mailing address field is null, that means the account uses a primary address, so the query should grab the primary address from addruse; but if there is no primary address either, it should grab whatever address addruse has. This is what I have so far:

select distinct a.acctnbr, a.Name,
       DECODE(a.mailingaddress, NULL, 'PRI', a.mailingaddress) MAILCD,
       b.address
from acct a, addruse b
where a.acctnbr = b.acctnbr
and   DECODE(a.mailingaddress, NULL, 'PRI', a.mailingaddress) = b.type (+)

I don't know how to change this so that it first checks for a primary address and, if there is no primary address, grabs whatever is there, as in the case of acctnbr = 000003.
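One way to express the fallback (a sketch in ANSI join syntax; MIN() is just an arbitrary way to pick "whatever is there" when no matching row exists):

select a.acctnbr,
       a.Name,
       nvl(a.mailingaddress, 'PRI') as mailcd,
       coalesce(pri.address, anyaddr.address) as address
from acct a
left join addruse pri
       on pri.acctnbr = a.acctnbr
      and pri.type = nvl(a.mailingaddress, 'PRI')
left join (select acctnbr, min(address) as address
           from addruse
           group by acctnbr) anyaddr
       on anyaddr.acctnbr = a.acctnbr;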

Index will be removed

Posted: 19 Aug 2013 12:50 PM PDT

I'm trying to simply change the length of a column within a table through SQL Server Management Studio, but receive a 'Validation Warning':

'MyTable' table
- Warning: The following schema-bound objects will be modified:
  - View 'dbo.MyViewName': indexes and schema binding will be removed.

I remember from experience that if I actually go through with this, the binding will not be re-added; in the past I've had to write a script to re-create the view manually. This doesn't seem right: why would it just remove the binding without re-adding it at the end of the operation?

It makes me think I don't have something set up correctly, or that I'm going about this the wrong way. Am I? Or is it 'correct' procedure to have to manually re-add things in this way after table changes are made?
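As a side note, one way to protect yourself before making such a change is to capture the view's index definitions first so they can be re-created afterwards; sp_helpindex works against indexed views too:

EXEC sp_helpindex 'dbo.MyViewName';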

How to troubleshoot db/app latency during transaction log backup

Posted: 19 Aug 2013 09:14 AM PDT

I have an application that uses a SQL Server (2008 R2) database, and we periodically have performance problems with the application that coincide with our 15-minute transaction log backups.

The application controls latency-sensitive industrial machinery, and we are finding that when the system has a high transaction volume, the application has queries that run too long (>5 seconds for a brief period of time). These queries only run slowly while the transaction log backups are happening, so the problem appears to be related to the backups. When the system is performing without problems, our transaction log backups take 2-4 seconds; under heavier load, when we have problems, they take 6-7 seconds. That is apparently just enough to cause some FIFO message dispatch queues in the application to fill up.

First of all, I was under the impression that transaction log backups should be pretty transparent to the application, with no locking or anything else going on. Does database latency during the transaction log backup point to some kind of IO contention being involved? What are the pros and cons of moving to something like a 5-minute transaction log backup cadence instead of 15 minutes?

Disk backend is a NetApp FAS2220 with a bunch of 600 GB 10k SAS drives. The DBA is convinced that this is an application problem and not a database problem so I need to know how to troubleshoot this to either being a problem with the application or the database.
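One way to test that claim is to sample the I/O stall counters immediately before and after a backup window and diff the numbers; if the stalls on the data and log files jump during the backup, the storage is the bottleneck. A sketch:

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.type_desc,
       vfs.num_of_reads, vfs.io_stall_read_ms,
       vfs.num_of_writes, vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id;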

TLDR: Database or application latency seen under heavy load during transaction log backup. How to troubleshoot and resolve?

startup mount issues with Oracle

Posted: 19 Aug 2013 10:10 AM PDT

I am a newbie with Oracle so bear with me.

I am trying to simply shutdown and restart an Oracle db instance on a Windows Server 2008 machine (Oracle 11.2.0). Logged in via sqlplus, I issue the command shutdown immediate;. I get the

Database closed.
Database dismounted.
ORACLE instance shut down.

responses. I exit out of sqlplus and try to log back in (ORACLE_SID is set correctly) so I can issue the startup mount; command, but I get the

ORA-12514: TNS:listener does not currently know of service requested in connect descriptor  

error. All of the Windows services involving my database dantest3 are started. Doing research, I found the oradim command, but while that starts my database and allows me to log in, it is a full startup, and I only want to be in "mount" mode.

I am logging in with the following command -

sqlplus sys/[passwordforacct]@dantest3 as sysdba  

If I try the simple sqlplus / as sysdba, I get a

ORA-12560: TNS:protocol adapter error  

oradim command -

oradim -start -sid dantest3  

Requested variables -

TNS_ADMIN = C:\Oracle\product\11.2.0\dbhome_1\NETWORK\ADMIN
ORACLE_SID = dantest3
ORACLE_HOME_11gR2 = C:\Oracle\product\11.2.0\dbhome_1

Any advice or suggestions would be greatly appreciated!
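For what it's worth, the usual way to reach a down instance is a local bequeath connection, which bypasses the listener entirely (so ORA-12514 cannot occur):

C:\> set ORACLE_SID=dantest3
C:\> sqlplus / as sysdba
SQL> startup mount

The ORA-12560 you get on that local connect suggests the environment may be pointing at a client home rather than the database home (ORACLE_HOME/PATH), which would be worth checking; this is a hedged guess, not a diagnosis.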

SQL Server 2008 - Question about index behaviour

Posted: 19 Aug 2013 01:31 PM PDT

I have a general question about advanced issues regarding index behaviour.

In short, roughly one year ago we dynamically dropped and recreated a number of indexes with different filters, but using the same names as before. Our tests seemed to run fine, but we later found out that our testing environments were still using plans based on the old index definitions, while our production environment used the new ones. The tests had therefore produced the wrong results, and we changed these indexes back to the old versions in production, where the old filter definitions were immediately picked up by the plans.

This worked fine for the past 6 months. Now, however, we have the opposite problem: our production environment has suddenly fallen back to using plans based on those 6-month-old, wrongly filtered index definitions, where until a few weeks ago it was still using the plans the existing indexes should produce.

We've tested fixing this by again dropping these problem indexes, and this time creating them with a different name entirely. This seems to be working fine.

But my question is this: considering that the indexes have been dropped (not just renamed) and then created with the same name, and that the query plan cache has been cleared AND the statistics updated several times, how is it possible that SQL Server seems to have a mind of its own and has now resolved to using ancient plans that I didn't even know could still exist?

Basically, how exactly does SQL Server store and use the data associated with indexes and their plans? How can you force SQL Server to clear that cache, wherever it may be, completely so that it can absolutely not simply decide to use ancient detrimental plans simply based on the same index name? How does all of this work, so that we can understand it and will never have to deal with this issue again?
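For reference, the cache-clearing and inspection steps usually suggested for this kind of investigation look like the following; the object and index names are placeholders:

DBCC FREEPROCCACHE;                                   -- drop all cached plans
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;          -- refresh the table's statistics
DBCC SHOW_STATISTICS ('dbo.MyTable', 'IX_MyIndex');   -- inspect what the optimizer sees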

Thanks!

EDIT:

It's now all but confirmed that these 6-month-old filtered indexes were the reason. I restored the DB to a testing environment and ran the problem queries against it, which produced the wrong execution plans compared to another, ancient testing environment. Checking each and every one of the indexes used by the older, functional environment, every single index definition was identical. I updated the stats, reorganized and rebuilt the indexes so none of them had fragmentation above 35, cleared the query plan cache, and still the problem persisted.

I then proceeded to find specifically those indexes involved in the query that were briefly filtered 6 months ago, and dropped and recreated them (the first time they still didn't work right; after a second attempt following another restore, they DID start working right). After this, I dropped the indexes and recreated them with a different name that hadn't been used before, otherwise using the same definitions. This fixed it every time, and the execution planner would then use the correct indexes. Then I dropped these and created the indexes again using the filter definitions from 6 months back, again with a new name to ensure the query planner would use the new definitions instead of ghost statistics from older ones. The failed plan produced with these indexes was identical to the one initially seen when all definitions, fragmentation, statistics etc. had been cleared and checked: proof that, despite all the seemingly correct metadata, the execution planner was working under the assumption that the indexes were filtered, and thus unusable, all along.

Do any of you know what could be going on? Or is this something that should be reported as a bug, however rare it might be? The implications of this behavior, and the effects it has already had, are severe enough that I'm currently considering logging every index name just so none of them will ever be reused. Otherwise this paints a grim picture: SQL Server may keep ancient statistics in the background and start using them at any time, completely nullifying the structure and purpose of new indexes that may be business-critical. While it seems extremely likely that the failover had something to do with this, I still can't understand how perfectly working indexes could suddenly be replaced by outdated and completely wrong definitions and statistics, to the point where no amount of rebuilding them or updating the applicable metadata would help, and with no real way to diagnose that it has happened other than a sudden decrease in performance and quirky behaviour on the query planner's part.

"Use Database" command inside a stored procedure

Posted: 19 Aug 2013 02:40 PM PDT

I would like to dynamically use a database inside a stored procedure but the use database_name command is not supported inside procedures.

What alternatives are there?
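Assuming this is SQL Server, the two usual workarounds are dynamic SQL (a USE lasts only for the dynamic batch that issues it) and three-part names; database and table names below are placeholders:

-- the USE applies only inside this dynamic batch
EXEC (N'USE OtherDb; SELECT COUNT(*) FROM dbo.SomeTable;');

-- or skip USE entirely and qualify the object
SELECT COUNT(*) FROM OtherDb.dbo.SomeTable;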

How does MySQL or PostgreSQL deal with multi-column indexes in ActiveRecord?

Posted: 19 Aug 2013 09:31 AM PDT

I'm creating indexes for my models right now and I want to know how MySQL and PostgreSQL deal with an index with more than 1 column like:

add_index :users, [:username, :created_at]  

That should utilize the index when I do a query like (I think):

User.where("username = ? and created_at > ?", 'jim', DateTime.now.yesterday)  

But will it also utilize the index if I only use the username in a query?

User.where("username = ?", 'jim')  

Run Multiple Scripts In One Transaction Across Multiple Servers [duplicate]

Posted: 19 Aug 2013 01:25 PM PDT

We have deployment scripts that we need to run on databases that are spread across multiple servers. One script only runs on one database, but the scripts depend on each other. We are looking for a way to run all of the scripts as one big transaction so that all scripts either commit or rollback as a whole.

How do I do this?

I would prefer a way to do this from ADO.NET, but SSMS is cool, too.

My current solution (which does not work) is to begin a transaction in every database, run all my scripts, and then commit/rollback once everything is good. However, I can't run all my scripts, because cross-database dependencies block indefinitely.
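If the servers are linked SQL Server instances with MSDTC available and RPC OUT enabled, one SSMS-side option is a distributed transaction with EXEC ... AT (server and procedure names are placeholders); from ADO.NET, a TransactionScope spanning connections to several servers escalates to the same DTC mechanism:

BEGIN DISTRIBUTED TRANSACTION;
EXEC (N'EXEC dbo.deploy_script_one;') AT ServerA;
EXEC (N'EXEC dbo.deploy_script_two;') AT ServerB;
COMMIT TRANSACTION;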

mongodb config servers not in sync

Posted: 19 Aug 2013 09:56 AM PDT

I have a setup with 2 shards, each with 2 replica servers, plus 3 config servers and 2 mongos. I have the following problems:

1) mongo config servers out of sync:

Aug 14 09:46:48 server mongos.27017[10143]: Sun Aug 11 09:46:48.987 [CheckConfigServers] ERROR: config servers not in sync! config servers mongocfg1.testing.com:27000 and mongocfg3.testing.com:27000 differ
chunks: "d2c08c5f1ee6048e5f6fab30e37a70f0"        chunks: "7e643e9402ba90567ddc9388c2abdb8a"
databases: "6f35ec52b536eee608d5bc706a72ec1e"     databases: "6f35ec52b536eee608d5bc706a72ec1e"

2) I used this document to sync the servers: http://docs.mongodb.org/manual/tutorial/replace-config-server/

3) After the sync I restarted one mongos server, and I see this in the logs:

Thu Aug 15 09:56:05.376 [mongosMain] MongoS version 2.4.4 starting: pid=1575 port=27111 64-bit host=web-inno.innologica.com (--help for usage)
Thu Aug 15 09:56:05.376 [mongosMain] git version: 4ec1fb96702c9d4c57b1e06dd34eb73a16e407d2
Thu Aug 15 09:56:05.376 [mongosMain] build info: Linux ip-10-2-29-40 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_49
Thu Aug 15 09:56:05.376 [mongosMain] options: { configdb: "mongocfg1.testing.com:27000,mongocfg2.testing.com:27000,mongocfg3.testing.com:27000", keyFile: "/mongo_database/pass.key", port: 27111 }
Thu Aug 15 09:56:05.582 [mongosMain] SyncClusterConnection connecting to [mongocfg1.testing.com:27000]
Thu Aug 15 09:56:05.583 [mongosMain] SyncClusterConnection connecting to [mongocfg2.testing.com:27000]
Thu Aug 15 09:56:05.583 [mongosMain] SyncClusterConnection connecting to [mongocfg3.testing.com:27000]
Thu Aug 15 09:56:05.585 [mongosMain] SyncClusterConnection connecting to [mongocfg1.testing.com:27000]
Thu Aug 15 09:56:05.586 [mongosMain] SyncClusterConnection connecting to [mongocfg2.testing.com:27000]
Thu Aug 15 09:56:05.586 [mongosMain] SyncClusterConnection connecting to [mongocfg3.testing.com:27000]
Thu Aug 15 09:56:07.213 [Balancer] about to contact config servers and shards
Thu Aug 15 09:56:07.213 [websvr] admin web console waiting for connections on port 28111
Thu Aug 15 09:56:07.213 [Balancer] starting new replica set monitor for replica set replica01 with seed of mongo1.testing.com:27020,mongo2.testing.com:27020,mongo3.testing.com:27017
Thu Aug 15 09:56:07.214 [Balancer] successfully connected to seed mongo1.testing.com:27020 for replica set replica01
Thu Aug 15 09:56:07.214 [Balancer] changing hosts to { 0: "mongo1.testing.com:27020", 1: "mongo2.testing.com:27020" } from replica01/
Thu Aug 15 09:56:07.214 [Balancer] trying to add new host mongo1.testing.com:27020 to replica set replica01
Thu Aug 15 09:56:07.215 [Balancer] successfully connected to new host mongo1.testing.com:27020 in replica set replica01
Thu Aug 15 09:56:07.215 [Balancer] trying to add new host mongo2.testing.com:27020 to replica set replica01
Thu Aug 15 09:56:07.215 [Balancer] successfully connected to new host mongo2.testing.com:27020 in replica set replica01
Thu Aug 15 09:56:07.215 [mongosMain] waiting for connections on port 27111
Thu Aug 15 09:56:07.427 [Balancer] Primary for replica set replica01 changed to mongo1.testing.com:27020
Thu Aug 15 09:56:07.429 [Balancer] replica set monitor for replica set replica01 started, address is replica01/mongo1.testing.com:27020,mongo2.testing.com:27020
Thu Aug 15 09:56:07.429 [ReplicaSetMonitorWatcher] starting
Thu Aug 15 09:56:07.430 [Balancer] starting new replica set monitor for replica set replica02 with seed of mongo5.testing.com:27020,mongo6.testing.com:27020
Thu Aug 15 09:56:07.431 [Balancer] successfully connected to seed mongo5.testing.com:27020 for replica set replica02
Thu Aug 15 09:56:07.432 [Balancer] changing hosts to { 0: "mongo5.testing.com:27020", 1: "mongo6.testing.com:27020" } from replica02/
Thu Aug 15 09:56:07.432 [Balancer] trying to add new host mongo5.testing.com:27020 to replica set replica02
Thu Aug 15 09:56:07.432 [Balancer] successfully connected to new host mongo5.testing.com:27020 in replica set replica02
Thu Aug 15 09:56:07.432 [Balancer] trying to add new host mongo6.testing.com:27020 to replica set replica02
Thu Aug 15 09:56:07.433 [Balancer] successfully connected to new host mongo6.testing.com:27020 in replica set replica02
Thu Aug 15 09:56:07.712 [Balancer] Primary for replica set replica02 changed to mongo5.testing.com:27020
Thu Aug 15 09:56:07.714 [Balancer] replica set monitor for replica set replica02 started, address is replica02/mongo5.testing.com:27020,mongo6.testing.com:27020
Thu Aug 15 09:56:07.715 [Balancer] config servers and shards contacted successfully
Thu Aug 15 09:56:07.715 [Balancer] balancer id: web-inno.innologica.com:27111 started at Aug 15 09:56:07
Thu Aug 15 09:56:07.715 [Balancer] SyncClusterConnection connecting to [mongocfg1.testing.com:27000]
Thu Aug 15 09:56:07.716 [Balancer] SyncClusterConnection connecting to [mongocfg2.testing.com:27000]
Thu Aug 15 09:56:24.438 [mongosMain] connection accepted from 127.0.0.1:55303 #1 (1 connection now open)
Thu Aug 15 09:56:24.443 [conn1]  authenticate db: admin { authenticate: 1, nonce: "6cc9a76b79656179", user: "admin", key: "xxxxxxxxxxxxxxxxxxx" }
Thu Aug 15 09:56:26.676 [conn1] creating WriteBackListener for: mongo1.testing.com:27020 serverID: 520c7b87e4a4c3afa569b21a
Thu Aug 15 09:56:26.676 [conn1] creating WriteBackListener for: mongo2.testing.com:27020 serverID: 520c7b87e4a4c3afa569b21a
Thu Aug 15 09:56:26.678 [conn1] creating WriteBackListener for: mongo5.testing.com:27020 serverID: 520c7b87e4a4c3afa569b21a
Thu Aug 15 09:56:26.678 [conn1] creating WriteBackListener for: mongo6.testing.com:27020 serverID: 520c7b87e4a4c3afa569b21a
Thu Aug 15 09:56:26.679 [conn1] SyncClusterConnection connecting to [mongocfg1.testing.com:27000]
Thu Aug 15 09:56:26.679 [conn1] SyncClusterConnection connecting to [mongocfg2.testing.com:27000]
Thu Aug 15 09:56:26.680 [conn1] SyncClusterConnection connecting to [mongocfg3.testing.com:27000]
Thu Aug 15 09:57:33.704 [conn1] warning: inconsistent chunks found when reloading collection.documents, previous version was 8651|7||51b5c7a96b2903a0b3fac106, this should be rare
Thu Aug 15 09:57:33.714 [conn1] warning: ChunkManager loaded an invalid config for collection.documents, trying again
Thu Aug 15 09:57:34.065 [conn1] warning: inconsistent chunks found when reloading collection.documents, previous version was 8651|7||51b5c7a96b2903a0b3fac106, this should be rare
Thu Aug 15 09:57:34.076 [conn1] warning: ChunkManager loaded an invalid config for collection.documents, trying again
Thu Aug 15 09:57:34.491 [conn1] warning: inconsistent chunks found when reloading collection.documents, previous version was 8651|7||51b5c7a96b2903a0b3fac106, this should be rare
Thu Aug 15 09:57:34.503 [conn1] warning: ChunkManager loaded an invalid config for collection.documents, trying again
Thu Aug 15 09:57:34.533 [conn1] Assertion: 13282:Couldn't load a valid config for collection.documents after 3 attempts. Please try again.
0xa82161 0xa46e8b 0xa473cc 0x8b857e 0x93cb52 0x93f329 0x93ff18 0x94311f 0x9740e0 0x991865 0x669887 0xa6e8ce 0x7f4456361851 0x7f445570790d
 /usr/bin/mongos(_ZN5mongo15printStackTraceERSo+0x21) [0xa82161]
 /usr/bin/mongos(_ZN5mongo11msgassertedEiPKc+0x9b) [0xa46e8b]
 /usr/bin/mongos() [0xa473cc]
 /usr/bin/mongos(_ZN5mongo12ChunkManager18loadExistingRangesERKSs+0x24e) [0x8b857e]
 /usr/bin/mongos(_ZN5mongo8DBConfig14CollectionInfo5shardEPNS_12ChunkManagerE+0x52) [0x93cb52]
 /usr/bin/mongos(_ZN5mongo8DBConfig14CollectionInfoC1ERKNS_7BSONObjE+0x149) [0x93f329]
 /usr/bin/mongos(_ZN5mongo8DBConfig5_loadEv+0xa48) [0x93ff18]
 /usr/bin/mongos(_ZN5mongo8DBConfig4loadEv+0x1f) [0x94311f]
 /usr/bin/mongos(_ZN5mongo4Grid11getDBConfigESsbRKSs+0x480) [0x9740e0]
 /usr/bin/mongos(_ZN5mongo7Request5resetEv+0x1d5) [0x991865]
 /usr/bin/mongos(_ZN5mongo21ShardedMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x67) [0x669887]
 /usr/bin/mongos(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x42e) [0xa6e8ce]
 /lib64/libpthread.so.0(+0x7851) [0x7f4456361851]
 /lib64/libc.so.6(clone+0x6d) [0x7f445570790d]
Thu Aug 15 09:57:34.549 [conn1] scoped connection to mongocfg1.testing.com:27000,mongocfg2.testing.com:27000,mongocfg3.testing.com:27000 not being returned to the pool
Thu Aug 15 09:57:34.549 [conn1] warning: error loading initial database config information :: caused by :: Couldn't load a valid config for collection.documents after 3 attempts. Please try again.
Thu Aug 15 09:57:34.549 [conn1] AssertionException while processing op type : 2004 to : collection.system.namespaces :: caused by :: 13282 error loading initial database config information :: caused by :: Couldn't load a valid config for collection.documents after 3 attempts. Please try again.
Thu Aug 15 09:57:37.722 [Balancer] SyncClusterConnection connecting to [mongocfg1.testing.com:27000]
Thu Aug 15 09:57:37.723 [Balancer] SyncClusterConnection connecting to [mongocfg2.testing.com:27000]
Thu Aug 15 09:57:37.723 [Balancer] SyncClusterConnection connecting to [mongocfg3.testing.com:27000]

The first mongos also logs this error:

warning: error loading initial database config information :: caused by :: Couldn't load a valid config for collection.documents after 3 attempts. Please try again.

but it works for now.

The second mongos doesn't work after the restart:

mongos> show collections
Thu Aug 15 09:57:34.550 JavaScript execution failed: error: {
    "$err" : "error loading initial database config information :: caused by :: Couldn't load a valid config for collection.documents after 3 attempts. Please try again.",
    "code" : 13282
} at src/mongo/shell/query.js:L128
mongos>

What are the next steps to recover the config servers?

All advice is welcome.
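For reference, one way to measure how far apart the config databases actually are is to compare per-database hashes on each config server and see which collections differ:

mongo mongocfg1.testing.com:27000/config --eval "printjson(db.runCommand({dbHash: 1}))"
mongo mongocfg2.testing.com:27000/config --eval "printjson(db.runCommand({dbHash: 1}))"
mongo mongocfg3.testing.com:27000/config --eval "printjson(db.runCommand({dbHash: 1}))"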

MySQL: ERROR 126 (HY000): Incorrect key file for table + slow logical backups

Posted: 19 Aug 2013 02:56 PM PDT

I've got the /tmp directory mounted as tmpfs, and for some reason this causes the following error:

mysql> SELECT DISTINCT table_schema FROM information_schema.tables
    -> WHERE table_schema NOT IN ('information_schema','mysql');
ERROR 126 (HY000): Incorrect key file for table '/tmp/#sql_29ef_0.MYI'; try to repair it

# df -h /tmp/
Filesystem            Size  Used Avail Use% Mounted on
tmpfs                 2.0G   12K  2.0G   1% /tmp
# df -i /tmp/
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
tmpfs                2041621       7 2041614    1% /tmp
# mount | grep /tmp
tmpfs on /tmp type tmpfs (rw,size=2048M)

Please note that the same query works fine when the /tmp dir is mounted with the ext4 file system.
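If tmpfs and MySQL's on-disk temporary tables don't get along on this box, one workaround (a sketch; requires a server restart) is to point MySQL's scratch directory at a disk-backed path, sidestepping the tmpfs ceiling entirely:

[mysqld]
tmpdir = /var/tmp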

EDIT:

Server_01

# cd /var/lib/mysql ; ls -lR | grep -c "\.frm$"
1876765

but this also happened on a server with far fewer tables:

Server_02

# cd /var/lib/mysql ; ls -lR | grep -c "\.frm$"
49514

I was using this query to list all databases, but as it didn't work with tmpfs I've just replaced it with a simpler one (SHOW DATABASES).

I was watching disk space on /tmp mounted with tmpfs and there was still plenty of space, so I'm not sure how it could have run out.

Basically, I've got a problem with logical backups on the server with ~8000 DBs: the task takes many hours (~24) to complete. I've created a simple Bash script (please see below); instead of mysqldump I'm using mydumper, as initial tests showed that it's much faster.

Backups run very fast initially and then slow down dramatically:

# ./backup.sh
Backing up database number: 1
Completed in: 0.016
Backing up database number: 2
Completed in: 0.078
Backing up database number: 3
Completed in: 0.074
Backing up database number: 4
Completed in: 0.068
Backing up database number: 5
Completed in: 0.071
Backing up database number: 6
Completed in: 0.060
Backing up database number: 7
Completed in: 0.067
Backing up database number: 8
Completed in: 0.070
Backing up database number: 9
Completed in: 0.065
.....
Backing up database number: 107
Completed in: 10.749
Backing up database number: 108
Completed in: 12.125
Backing up database number: 109
Completed in: 11.313
Backing up database number: 110
Completed in: 11.572
Backing up database number: 111
Completed in: 11.371
.....

Script:

#!/usr/bin/env bash

DATA_DIR="/tmp/mysqlbackup"
LOCKFILE=/tmp/backup.lock
NOW=$(date +%Y%m%d)
COUNT=0   # incremented before each echo, so the first run prints 1

if [ -f "$LOCKFILE" ]; then
    echo "$(basename $0) is already running: PID $(cat $LOCKFILE)"
    exit 0
else
    echo $$ > $LOCKFILE

    if [ ! -d $DATA_DIR/$NOW ]; then
        mkdir -m 700 -p $DATA_DIR/$NOW
    fi

    while read DB; do
        (( COUNT++ ))
        echo "Backing up database number: $COUNT"
        START=$(date +%s.%N)
        mydumper -e -o $DATA_DIR/$NOW/$DB -B "$DB"
        ELAPSED=$(date +%s.%N)
        printf "Completed in: %.3F\n" $(echo $ELAPSED - $START | bc)
    done <<< "$(mysql -A -B -N -e "SHOW DATABASES" | egrep -v '(mysql|_schema|log)')"
    # note: the original pattern '(mysql|*_schema|log)' is not a valid regex;
    # '_schema' matches information_schema and performance_schema

    echo "Removing backup dir..."
    rm -rf $DATA_DIR/$NOW
    rm -f $LOCKFILE
    exit 0
fi

Why does my SQL Server show more than half a million active tasks?

Posted: 19 Aug 2013 08:04 PM PDT

select task_state, count(*)
from sys.dm_os_tasks
group by task_state

I ran the above statement on a SQL Server instance, and found it had about 633,000 records.

task_state
-------------- -----------
RUNNABLE       2
RUNNING        32
DONE           633115
SUSPENDED      99

How can I close/kill the useless tasks?

The MDW data collector allocates about 4000 pages in tempdb each time it runs, and this causes IO pressure when the server is busy.

This is a production server, so we do not want to restart the service. The version number is 11.0.3000.

Max Worker Count: 1216
CPU Count: 48
Hyperthread ratio: 12
MaxDop: 8

scheduler_id ranges from 0 to 47, the row counts are spread evenly across schedulers, and the other columns are NULL.
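To check whether those DONE entries correspond to anything still executing, the tasks can be joined against active requests; a sketch:

SELECT t.task_state,
       COUNT(*) AS tasks,
       COUNT(r.session_id) AS with_active_request
FROM sys.dm_os_tasks AS t
LEFT JOIN sys.dm_exec_requests AS r
       ON r.task_address = t.task_address
GROUP BY t.task_state;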

How to get data comparisons from two mysql tables [on hold]

Posted: 19 Aug 2013 11:01 AM PDT

What I have: the following structure:

Table ChrM

  • LID (PRIMARY with auto increment)
  • LOCUS (varchar (9)) Example value: (At1g30500) (Fixed Number of LOCUS values =173 rows)
  • StartPoint (INT) (Exclusive value assigned by locus)
  • EndPoint (INT) (Exclusive value assigned by locus)

    The actual table looks like the following:

    | LID |   LOCUS   | StartPoint | EndPoint |
        1   AT1G30500        734        273
        2   AT3G15690      11415       8848
        3   AT3G05690     262900     195300
        .       .             .          .
      173   At5g01410     366700     264418

Table DMRs

  • idDMRs (Primary with auto increment)
  • StartPos (INT)(Exclusive value assigned by idDMRs)
  • EndPos (INT) (Exclusive value assigned by idDMRs)
  • Other...
The actual table looks like this:

    | idDMRs | StartPos | EndPos | OTHER 1 | OTHER 2 | OTHER n |
         1       1003       716       .        .         .
         .          .         .       .        .         .
       156     197126    195371      .        .         .
       172     197299    196119      .        .         .
       201     197233    195621      .        .         .
         .          .         .       .        .         .
     20633     262857    262857      .        .         .

What I want: select all rows from table DMRs that fulfill the following condition,

EndPos >= EndPoint AND StartPos <= StartPoint   

This for every LOCUS in table ChrM (173 times in total), to get a (LOCUS, idDMRs, EndPos, StartPos, OTHER1, OTHER2, OTHERn) result set.

As a newbie to MySQL, I tried doing this one by one: printing table ChrM on paper, taking each LOCUS's StartPoint and EndPoint by hand, and querying LOCUS by LOCUS in the following manner:

SELECT * FROM DMRs
WHERE EndPos >= 195300 AND StartPos <= 262900;

The results are what I want, but I wonder if I will ever finish analyzing all my data this way, given that I have 50 different DMRs tables to analyze and would have to do this 173 times per table.

Example of my query output:

    | idDMRs | StartPos | EndPos | OTHER 1 | OTHER 2 | OTHER n |
       156     197126    195371      .         .         .
       172     197299    195419      .         .         .
       201     197233    195621      .         .         .
This is the result I want to get for every LOCUS:

    | LOCUS     | idDMRs | StartPos | EndPos | OTHER 1 | OTHER 2 | OTHER n |
     AT3G05690     156     197126    195371      .         .         .
     AT3G05690     172     197299    195419      .         .         .
     AT3G05690     201     197233    195621      .         .         .

So, in conclusion, I would like help on how to solve this problem, or any advice on what I should read or know in order to solve it on my own.
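The per-LOCUS loop can be replaced by a single join whose ON clause carries the range condition (a non-equi join), so one statement covers all 173 loci; a sketch:

SELECT c.LOCUS, d.*
FROM ChrM AS c
JOIN DMRs AS d
  ON d.EndPos >= c.EndPoint
 AND d.StartPos <= c.StartPoint
ORDER BY c.LOCUS, d.idDMRs;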

Moving large databases

Posted: 19 Aug 2013 09:09 PM PDT

I have a CentOS server and /var/lib/mysql/ is 125 GB (the disk has 1 GB of free space).

Ordinarily I would use mysqldump to back up the databases, but I don't normally work with such large databases, so I need to know the safest way of copying them over to a new server.

All advice appreciated!
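With only 1 GB free there is no room for a local dump file, so one commonly used approach is to stream the dump straight to the new server over SSH (hostnames and credentials are placeholders; --single-transaction gives a consistent snapshot for InnoDB tables):

mysqldump --all-databases --single-transaction --quick \
  | ssh user@newserver "mysql"

Alternatively, stopping MySQL and copying /var/lib/mysql with rsync gives a binary-identical copy, provided both servers run the same MySQL version.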

Dropping Hypothetical Indexes

Posted: 19 Aug 2013 12:09 PM PDT

In the past I thought I'd deleted hypothetical indexes using either a DROP INDEX statement for clustered indexes or a DROP STATISTICS statement for non-clustered indexes.

I have a database that is full of DTA remnants that I would like to clean up; however, when I try to drop an object I always receive an error telling me that I cannot drop it "because it does not exist or you do not have permission". I am a full sysadmin on the server, so I would expect to have rights to do anything.

I've tried this with both DROP STATS and DROP INDEX statements but both give me the same error.

Has anyone deleted these before and is there a trick I'm missing?


Addendum

Poking around in this, I just noticed that if I right-click on the object, both the 'Script As' and 'DELETE' options are greyed out.
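DTA leftovers are flagged in the catalog, so they can at least be enumerated; hypothetical indexes are normally removed with DROP INDEX (names in the DROP are placeholders taken from the query's output):

SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
       OBJECT_NAME(object_id) AS table_name,
       name AS index_name
FROM sys.indexes
WHERE is_hypothetical = 1;

DROP INDEX [_dta_index_name] ON [dbo].[TableName];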

SA permissions issues with many nested objects

Posted: 19 Aug 2013 06:09 PM PDT

I have a broker application that's relatively complicated.

Today, after I made some changes, I started getting the error:

The server principal 'sa' is not able to access the database 'XYZ' under the current security context.

The whole scenario up to the point of the error is:

(In Database ABC)

  • A broker message gets submitted into a queue
  • The queue reader picks it up
  • The queue reader updates a table to indicate the work is starting
  • This table has an update trigger on it. The trigger
    • Checks a table in database XYZ to make sure the inserted value for this field is valid
    • The table is accessed via a synonym

I believe the check in the trigger is what is causing the issue.

If I run the update manually, it works fine. I have also used EXECUTE AS to run the update manually as sa which works fine.

Other relevant facts:

  • sa is the owner of both database ABC and database XYZ
  • there's no funny business with the sa account - it's db_owner role in both DBs as well

Is there some sort of strange scoping happening because all this is running in the context of broker?

Updates

Some more info:

  • DB ownership chaining is on at the server level but not in the DBs. I turned it on and it made no difference.
  • Using a three part name instead of a synonym didn't make a difference
  • Profiler trace shows that the statements are running as SA and not another login
  • Setting both databases TRUSTWORTHY to ON didn't make a difference
  • If I run the queue activation proc manually, it processes correctly (under my credentials).
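One more thing worth checking, since cross-database access from an activated procedure depends on ownership chaining: whether both databases really carry the same owner SID (a database can show sa as its nominal owner while the SIDs differ, e.g. after a restore). A sketch:

SELECT name, owner_sid, is_trustworthy_on, is_db_chaining_on
FROM sys.databases
WHERE name IN ('ABC', 'XYZ');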

Database user specified as a definer

Posted: 19 Aug 2013 11:09 AM PDT

I have a view in my database. The problem is below.

Error SQL query:

SELECT *   FROM `lumiin_crm_prod`.`v_contact`   LIMIT 1 ;  

MySQL said:

1449 - The user specified as a definer ('lumicrm'@'%') does not exist

I Googled for a solution.

The user is created for a specific host and not globally.

How do I create the user globally?
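A "global" user in MySQL terms is simply one whose host part is the wildcard '%'. Recreating the definer that way should clear error 1449 (the password and privilege level are placeholders to adjust):

CREATE USER 'lumicrm'@'%' IDENTIFIED BY 'choose_a_password';
GRANT SELECT ON lumiin_crm_prod.* TO 'lumicrm'@'%';
FLUSH PRIVILEGES;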

How do I execute an Oracle SQL script without sqlplus hanging on me?

Posted: 19 Aug 2013 04:09 PM PDT

For an automated task I would very much like to run some SQL scripts and make sure that sqlplus does not hang under any circumstances, i.e.:

  • If the script contains any named substitution variable that has to be entered by the user, sqlplus should return with an error instead of prompting for the variable -- I cannot use set define off, as some of these scripts need to accept command line parameters that have to be resolved with &1
  • The script must not "hang" when it doesn't contain an exit; at the end.

    Solved: I think now that I can achieve this by wrapping the sql-script in a secondary "caller script". I.e., the caller script calls the other script with @ and then has a fixed exit; after that. The other script doesn't need an exit that way.

  • Anything else: If it would require a prompt, it should return with an error.

How can I do this with Oracle (and sqlplus or something else)?
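A sketch of the caller-script idea combined with fail-instead-of-prompt behaviour: WHENEVER SQLERROR turns script errors into exit codes, and redirecting stdin from /dev/null should make any unexpected prompt read EOF and terminate rather than hang (file and connection names are placeholders):

-- caller.sql
WHENEVER SQLERROR EXIT FAILURE
WHENEVER OSERROR EXIT FAILURE
@target_script.sql &1
EXIT;

Invoked as:

sqlplus -S -L user/password@db @caller.sql some_arg < /dev/null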

SSRS appears to be ignoring Permissions set using Report Manager

Posted: 19 Aug 2013 07:09 PM PDT

I have set up SSRS on SQL Server 2008 in native mode.

As an administrator I can login to report manager, upload reports and run them, and also use the Web Service URL to generate reports.

I have also created a local user on the machine, I went into Report Manager as Admin, and at the top level set permissions that should assign the local user to all roles.

When I login to the machine as that user, and then navigate to Report Manager I just get the heading for the page, but do not see any of the folders that are configured.

I've checked and the folders are set to inherit parent permissions and they are showing the newly created local user in there too.

It seems odd that I have set the permissions, yet SSRS is still not showing what I should be able to see. Is there another step I need to take other than configuring the permissions in Report Manager?

When logged in as the newly created local user:

Report Manager - Shows the heading for the page, but no folders/items
Web Service URL (http://machine/ReportServer) - rsAccessDeniedError

How can I copy data from one MySQL server to another based on a SELECT statement (then delete the data from the original)?

Posted: 19 Aug 2013 09:07 PM PDT

I have a very large log table from which I want to copy rows into the same table structure on a new server. I don't want to copy everything, only old rows, so that the table on the main server stays small-ish. So I have to SELECT the data I want and only move (and delete) that.

Keep in mind that there is a lot of data, and I don't want to copy it all with a mysqldump if I don't need to, especially if it's going to lock the table while it's being copied.

The best I've come up with is a PHP script, which I will post as a tentative answer, although I'm certain it's not the best option.
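For comparison, a sketch of the dump-then-delete approach without a PHP middleman, assuming an indexed date column named log_date (adjust to your schema); --where restricts the dump to just the old rows, and streaming over SSH avoids a local file:

mysqldump --single-transaction --no-create-info \
    --where="log_date < '2013-01-01'" mydb biglog \
  | ssh user@newserver "mysql mydb"

mysql mydb -e "DELETE FROM biglog WHERE log_date < '2013-01-01'"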

Couldn't install SQL Server 2012 on machine with Windows 7 SP1, VS 2010 SP1

Posted: 19 Aug 2013 06:55 PM PDT

I am trying to install SQL Server 2012 RTM on my PC. I have Windows 7 SP1 and VS 2010 SP1 installed, but it keeps giving this error:

A network error occurred while attempting to read from the file:
D:\Microsoft SQL Server 2012 RTM\1033_ENU_LP\x64\setup\x64\sqlncli1.msi
Error help link:
http://go.microsoft.com/fwlink?LinkId=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2100.60&EvtType=sqlncli.msi%40PublishProduct%401316

When I go to the Microsoft link I find

We're sorry
There is no additional information about this issue in the Error and Event Log Messages or Knowledge Base databases at this time. You can use the links in the Support area to determine whether any additional information might be available elsewhere.

How can I resolve this? At first I thought adding the current user to the Distributed Replay controller was causing the problem, so I uninstalled everything and tried installing again, but I couldn't succeed. Can anyone suggest what may be causing this problem and a possible fix for it? I also have SQL Server 2008 R2 installed, which I want to keep alongside 2012.

Additional information:

The installer is on a local drive, i.e. on my hard disk, so there should be no network error, and it is the RTM version of SQL Server 2012. I found the logs below in C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log\

Feature: SQL Client Connectivity SDK Status: Failed: see logs for details Reason for failure: An error occurred during the setup process of the feature. Next Step: Use the following information to resolve the error, and then try the setup process again. Component name: SQL Server Native Client Access Component Component error code: 1316 Component log file: C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log\20121022_164723\sqlncli_Cpu64_1.log Error description: A network error occurred while attempting to read from the file: D:\Microsoft SQL Server 2012 RTM\1033_ENU_LP\x64\setup\x64\sqlncli1.msi Error help link: go.microsoft.com/…

This problem is not yet solved, although other people have installed SQL 2012 with this installer on their PCs. I think something is wrong with my PC or some settings I am using. Can anyone help me out?

SQL Query Formatter

Posted: 19 Aug 2013 01:40 PM PDT

Are there any (Linux based) SQL Query Formatting programs/plugins/extensions?

I use PostgreSQL and MySQL, but other DBs are welcome as well.

I can use a VM to test with but would prefer a Linux (Ubuntu) based solution.

I have seen an online version but nothing installable.

Eclipse-based IDEs are a plus as well.

Example:

select f1, f2, fname, lName from tblName where f1 = true and fname is not null order by lName asc  

to something like

SELECT f1, f2, fname, lName
FROM tblName
WHERE f1 = true
  AND fname IS NOT NULL
ORDER BY lName ASC

Here is an online example:

But I would rather this be in a local environment
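One locally installable, Linux-friendly option I'm aware of is sqlformat from the Python sqlparse library, which turns the first query above into roughly the second:

pip install sqlparse
sqlformat --reindent --keywords upper query.sql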
