Tuesday, March 19, 2013

[how to] clusterdb stopped working for PostgreSQL database in Ubuntu 12.04 server

clusterdb stopped working for PostgreSQL database in Ubuntu 12.04 server

Posted: 19 Mar 2013 09:02 PM PDT

After a fresh PostgreSQL install (version 9.2) on my Ubuntu 12.04 server, the clusterdb utility stopped working. I used the same config files as I did with version 9.1.

Below is the actual command:

clusterdb -v -h localhost -p <port#> -U postgres -d <dbname>  

And this is the response that I'm getting:

clusterdb: could not connect to database <dbname>: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port <port#>?

I'm able to connect to the server locally and from outside. I can also use other utilities such as pg_dump with no problems.

My pg_hba.conf file is below:

# Database administrative login by Unix domain socket
local   all   postgres   peer
# TYPE  DATABASE  USER  ADDRESS  METHOD
# "local" is for Unix domain socket connections only
local   all   all   peer
# IPv4 local connections:
host    all   all   127.0.0.1/32   md5
# IPv6 local connections:
host    all   all   ::1/128        md5
# Allow replication connections from localhost, by a user with the replication privilege.
local   all   postgres   md5
host    all   all   x.x.x.x/x      md5   # my network
host    all   all   x.x.x.x/x      md5   # internal ip of the server

Were any security changes implemented in PostgreSQL 9.2 vs. 9.1 that might prevent clusterdb from working properly?
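
One diagnostic avenue (a hedged sketch, not a confirmed fix): on Ubuntu, a freshly installed 9.2 cluster often comes up on a different port (e.g. 5433) if another cluster already owns 5432, which would produce exactly this "Connection refused". These commands check what the 9.2 cluster is actually listening on:

# List clusters with their ports and status (postgresql-common utility)
pg_lsclusters

# Confirm the port and listen_addresses the 9.2 cluster is configured with
grep -E '^\s*(port|listen_addresses)' /etc/postgresql/9.2/main/postgresql.conf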

Appropriate Action for Deadlocks / Blocking

Posted: 19 Mar 2013 08:00 PM PDT

I've set up my databases so that any time a procedure is blocked for more than 45 seconds, the database notifies the DBA email. Is it bad practice to set up a way to automatically kill the process that is doing the blocking? I'm assuming yes; however, waiting until an off-hours DBA can get to a computer and fix the problem seems problematic as well. Is there a better way to handle blocking processes?
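
For context, something along these lines (a hedged sketch; the 45-second threshold matches the alerting already in place) identifies the head blockers before any decision to kill. Auto-killing is usually discouraged because KILL forces the victim's transaction to roll back, which can take longer than the block itself:

-- Sessions that have been blocking others for more than 45 seconds
SELECT w.blocking_session_id,
       MAX(w.wait_duration_ms) AS max_wait_ms
FROM sys.dm_os_waiting_tasks AS w
WHERE w.blocking_session_id IS NOT NULL
GROUP BY w.blocking_session_id
HAVING MAX(w.wait_duration_ms) > 45000;

-- KILL 53;  -- deliberate, manual step; the session id here is hypothetical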

SQL Server 2008 R2 Restore COPY_ONLY full backup with transaction logs

Posted: 19 Mar 2013 07:11 PM PDT

After doing some research I cannot seem to find an answer to this question.

Background: I am attempting to set up a backup plan that fits the following four requirements:

1. Reliability of backups, having nightly full backups
2. Transaction log backups that could be restored from
3. A low amount of disc space used
4. The backups must be accessible locally for an auditing tool

So to fit those needs I am thinking weekly full backups, daily differentials, and hourly transaction log backups. Then each night a COPY_ONLY backup would run that can be shipped offsite; this backup is COPY_ONLY so that the backup chain is not broken, and we have reliable nightly full backups offsite without having to eat up so much local disc space.

Question: Is it possible to restore from the COPY_ONLY backup, and then restore the transaction logs after it?

Let me just give an example so you know what I am talking about.

Using the below list, I am wondering if it is possible to restore FullbackupCOPY_ONLYC.bak followed by TransactionbackupG.trn, TransactionbackupH.trn, and finally TransactionbackupI.trn (a sketch of this sequence follows the list).

---List of Backups---
FullbackupA.bak            01/01/2013 00:00:00
DifferntialbackupA.bak     02/01/2013 00:00:00
FullbackupCOPY_ONLYA.bak   02/01/2013 00:00:00
TransactionbackupA.trn     02/01/2013 01:00:00
TransactionbackupB.trn     02/01/2013 02:00:00
TransactionbackupC.trn     02/01/2013 03:00:00
DifferntialbackupB.bak     03/01/2013 00:00:00
FullbackupCOPY_ONLYB.bak   03/01/2013 00:00:00
TransactionbackupD.trn     03/01/2013 01:00:00
TransactionbackupE.trn     03/01/2013 02:00:00
TransactionbackupF.trn     03/01/2013 03:00:00
DifferntialbackupC.bak     04/01/2013 00:00:00
FullbackupCOPY_ONLYC.bak   04/01/2013 00:00:00
TransactionbackupG.trn     04/01/2013 01:00:00
TransactionbackupH.trn     04/01/2013 02:00:00
TransactionbackupI.trn     04/01/2013 03:00:00
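
The sequence being asked about would look roughly like this (a hedged sketch; paths and the database name are hypothetical). A COPY_ONLY full backup is a valid base for later log restores: COPY_ONLY leaves the differential base untouched, and full backups never break the log chain.

RESTORE DATABASE MyDb
    FROM DISK = 'D:\Backups\FullbackupCOPY_ONLYC.bak'
    WITH NORECOVERY, REPLACE;

RESTORE LOG MyDb FROM DISK = 'D:\Backups\TransactionbackupG.trn' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = 'D:\Backups\TransactionbackupH.trn' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = 'D:\Backups\TransactionbackupI.trn' WITH NORECOVERY;

RESTORE DATABASE MyDb WITH RECOVERY;  -- bring the database online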

Maybe this whole setup is not reasonable; I am fairly new to SQL Server and am trying to learn as I go. Any advice/help would be appreciated.

What settings should I use in my.cnf?

Posted: 19 Mar 2013 09:01 PM PDT

We have about 5-6 different internal web apps (staff use only) that we will be transferring across to a new virtual server; however, I need to set up our my.cnf settings first, and I am lost (I am a web designer, so this is a little out of my league!).

Our internal systems have about 100 tables apiece, primarily using InnoDB but in some cases MyISAM. Also, one of our web apps has quite a lot of data (100,000+ records) and the SQL queries used to select the data are quite inefficient (few indexes used, nested SELECTs, etc.). There are roughly 50 users at any one time.

Our new server will be the HS-250 from HeartInternet.
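
For what it's worth, a hedged starting point only, not a tuned configuration; the right numbers depend on how much RAM the HS-250 dedicates to MySQL (roughly 4GB is assumed here), and all of these are standard my.cnf settings:

[mysqld]
# InnoDB: the buffer pool is the single most important setting
innodb_buffer_pool_size = 2G
innodb_log_file_size    = 256M
innodb_flush_log_at_trx_commit = 2   # relaxes durability slightly for speed

# MyISAM index cache, for the remaining MyISAM tables
key_buffer_size = 256M

# ~50 concurrent staff users leaves plenty of headroom here
max_connections  = 150
table_open_cache = 1024

# Helps inefficient nested SELECTs that build internal temporary tables
tmp_table_size      = 64M
max_heap_table_size = 64M

# Log the slow queries so they can eventually be indexed/rewritten
slow_query_log  = 1
long_query_time = 2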

Many thanks for the help,

Can't start SQL services

Posted: 19 Mar 2013 04:03 PM PDT

So I am having issues starting my SQL services. Currently startup is failing due to:

FileMgr::StartLogFiles: Operating system error 2(The system cannot find the file specified.) occurred while creating or opening file 'E:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\modellog.ldf'. Diagnose and correct the operating system error, and retry the operation.

The issue herein lies in the fact that this .ldf file is not on the E: drive. There is no E: drive; the file is located on the F: drive.

I cannot find anywhere to tell SQL Server to change this. I cannot drop/move/detach the databases, because I can't even get the service to start in order to connect.

This is happening on 3 different instances of SQL, after volume moves. Any ideas?

SQL 2008
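
One possibly relevant approach (a hedged sketch; the default instance name is assumed, and modellog is assumed to be model's logical log file name): master stores the file locations of the system databases, so the usual recovery path is to start the instance with minimal configuration plus trace flag 3608 (which recovers only master), repoint model's log file, then restart normally:

REM Start with minimal configuration; 3608 skips recovery of everything but master
NET START MSSQLSERVER /f /T3608

REM Repoint model's log to where the file actually lives now (F: here)
SQLCMD -E -S . -Q "ALTER DATABASE model MODIFY FILE (NAME = modellog, FILENAME = 'F:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\modellog.ldf');"

REM Restart without the startup flags
NET STOP MSSQLSERVER
NET START MSSQLSERVER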

SQL Server 2008 Restore

Posted: 19 Mar 2013 06:49 PM PDT

I'm trying to restore from a database backup to SQL Server 2008. The backup came from a different server. I'm executing the following query:

RESTORE DATABASE Database_Name
FROM DISK = 'C:\Path\To\Backup.file'
WITH REPLACE
GO

If the database does not exist, I'm getting the following error:

Msg 3118, Level 16, State 1, Line 1
The database "Database_Name" does not exist. RESTORE can only create a database when restoring either a full backup or a file backup of the primary file.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.

The file is a full backup.

If the database does exist, I get the following error:

Msg 3154, Level 16, State 4, Line 1
The backup set holds a backup of a database other than the existing 'Database_Name' database.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.

From what I understand, WITH REPLACE is supposed to be a workaround for this.

I'm stumped.
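
Not an answer as such, but a hedged diagnostic sketch: error 3154 usually means the file contains a backup of a different database (or multiple backup sets) than expected, and the header will say so. The logical file names below are hypothetical placeholders.

RESTORE HEADERONLY  FROM DISK = 'C:\Path\To\Backup.file';  -- backup sets, types, originating database names
RESTORE FILELISTONLY FROM DISK = 'C:\Path\To\Backup.file'; -- logical/physical file names for WITH MOVE

-- If several backup sets are present, target one explicitly and relocate the files:
RESTORE DATABASE Database_Name
FROM DISK = 'C:\Path\To\Backup.file'
WITH FILE = 1,   -- position reported by RESTORE HEADERONLY
     MOVE 'LogicalDataName' TO 'C:\Data\Database_Name.mdf',
     MOVE 'LogicalLogName'  TO 'C:\Data\Database_Name_log.ldf',
     REPLACE;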

Is it ever harmful to set a database owner in SQL Server?

Posted: 19 Mar 2013 04:32 PM PDT

There are a bunch of databases on one of our SQL servers that have no owner. Generally speaking, is it harmful to give them one? For example:

USE dbName;
GO

EXEC sp_changedbowner 'sa';
GO

I know sa may not be the best choice, but it is just an example. My primary concern is whether creating an owner where there is none may cause issues with software that can currently connect OK.
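
As background, a hedged sketch for finding the databases in question, i.e. those whose owner SID no longer maps to a server login:

SELECT d.name, sp.name AS owner_login
FROM sys.databases AS d
LEFT JOIN sys.server_principals AS sp ON d.owner_sid = sp.sid
WHERE sp.name IS NULL;   -- databases with no resolvable owner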

Custom sp_who/sp_whoUsers

Posted: 19 Mar 2013 02:50 PM PDT

I need to allow a client in a dev DW SQL 2008 R2 environment to view and kill processes, but I do not want to grant VIEW SERVER STATE to this person (he's a former SQL DBA and is considered a potential internal threat).

When I run the following, it returns one row as if the user ran the sp themselves with their current permissions.

USE [master]
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE PROCEDURE [dbo].[usp_who] WITH EXECUTE AS OWNER
AS
BEGIN
    SET NOCOUNT ON;
    EXEC master.dbo.sp_who;
END

Changing the "with execute as" to "self" (I'm a sysadmin) returns the same results. I've also tried the below instead of calling sp_who, and it only returns one row.

select * from sysprocesses  

It seems that the context isn't switching, or persisting, throughout the execution of the procedure. And this is to say nothing of how I'm going to allow this person to "kill" processes.

Does anyone have a solution or some suggestions for this seemingly unique problem?
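
For reference, the usual explanation is that EXECUTE AS in a procedure impersonates a database user, so server-level permissions such as VIEW SERVER STATE never come into play. A hedged sketch of the standard workaround, module signing (certificate name, login name and password are all hypothetical; the procedure lives in master, as above):

USE master;
GO
-- Certificate that will carry the permission
CREATE CERTIFICATE usp_who_cert
    ENCRYPTION BY PASSWORD = 'Str0ngP@ssword!'
    WITH SUBJECT = 'Signs usp_who';
GO
ADD SIGNATURE TO dbo.usp_who
    BY CERTIFICATE usp_who_cert WITH PASSWORD = 'Str0ngP@ssword!';
GO
-- Login created from the certificate; it exists only to hold the grant
CREATE LOGIN usp_who_login FROM CERTIFICATE usp_who_cert;
GRANT VIEW SERVER STATE TO usp_who_login;
GO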

sysadmin cannot drop login when ddl audit trigger enabled

Posted: 19 Mar 2013 12:19 PM PDT

I have a DDL trigger defined for database- and server-level events in SQL Server 2008 R2 (Standard) which exists to log all DDL events into an audit table in an admin database. Its only function is to extract the relevant data from EVENTDATA() and insert it into this table. For the purpose of the insert (as only sysadmins can access the admin database), I have created a dedicated SQL login with only INSERT permission on this table, and granted IMPERSONATE on this login to public. This is intended to prevent permissions-related errors when the trigger fires and attempts to insert into the audit table but the caller does not have the necessary access to the database/table.

Here is the trigger definition:

CREATE TRIGGER [ddl_log]
ON ALL SERVER
FOR DDL_DATABASE_LEVEL_EVENTS, DDL_SERVER_LEVEL_EVENTS
AS
BEGIN
    DECLARE @data XML
    SET @data = EVENTDATA()

    EXECUTE AS LOGIN = '<dummy login>'
    INSERT admin.dbo.ddl_audit (PostTime, DB_User, [Event], [TSQL], Host, DatabaseName)
    VALUES (
        GETDATE(),
        CONVERT(NVARCHAR(100), ORIGINAL_LOGIN()),
        @data.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'),
        @data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(2000)'),
        CONVERT(NVARCHAR(100), HOST_NAME()),
        @data.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'nvarchar(100)')
    );
    REVERT
END

I have had no issues with this trigger since implementing it months ago. However, it now appears to be preventing even a sysadmin from executing ALTER LOGIN, DROP LOGIN, etc. under certain circumstances, as follows:

My environment also includes MS Dynamics GP 2010 (Great Plains). Great Plains allows an admin to manage users, and for each new Great Plains user, the software creates a SQL login for that user in SQL Server. Resetting a password in the Great Plains interface resets the SQL password. And so forth...

However, even when logged into Great Plains as 'sa', as long as the above trigger is enabled, any attempt to alter or drop a login fails with error 15151 (Cannot alter the login 'loginname', because it does not exist or you do not have permission). If I disable the trigger, everything works normally.

The same operations executed in SSMS, or through some other interface, succeed, even for non-sysadmins who have some level of DDL permissions. It only fails when performed in Great Plains. A profiler trace of the operation shows that GP is merely submitting a standard T-SQL ALTER LOGIN or DROP LOGIN statement, and that the statement correctly shows as called by the sa account. It does not appear that the session ever switched to a different context, other than for the insert into the audit table (which it never actually got to, as no record was logged for the statement). And just in case the session somehow was maintaining the wrong context after that impersonation, I tried making the dummy-insert login a sysadmin, with no success.

My question is, are there certain combinations of SET options/connection settings, etc. that could result in this type of behavior or issues with DDL trigger execution that could prevent a sysadmin from performing certain operations? Or is there some other avenue to investigate that I am completely missing?

Create Temp Table From Stored Procedure Result

Posted: 19 Mar 2013 12:23 PM PDT

Is it possible to create a temp table from a stored procedure's result set without having to specify column data types? If not, is it possible to at least create a physical table from the SP? The reason I ask is that I have an SP that returns too many columns for me to declare in a CREATE TABLE command.

Is there something like

create table from

where the columns do not need to be specified and SQL automatically determines the column attributes?

I am running SQL Server 2008.
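
One technique that seems applicable (a hedged sketch; the server string and object names are hypothetical, and the 'Ad Hoc Distributed Queries' option must be enabled): OPENROWSET executes the procedure and lets SELECT ... INTO infer every column definition from the result set:

SELECT *
INTO #proc_result    -- use a permanent table name instead for a physical table
FROM OPENROWSET(
    'SQLNCLI',
    'Server=(local);Trusted_Connection=yes;',
    'EXEC MyDatabase.dbo.MyWideProcedure'
);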

trap xp_logininfo stored procedure error

Posted: 19 Mar 2013 12:03 PM PDT

SQL Server 2000, Windows Server 2003.

I am trying to find all the invalid domain logins created in the SQL Server 2000 instance, in a script using xp_logininfo, and I get the message "Could not obtain information about Windows NT group/user". How can I trap this error in T-SQL scripts and continue? Or is there any other way to validate a domain ID against an AD group?
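
A hedged sketch of one way this is often handled on SQL Server 2000 (which predates TRY/CATCH): call xp_logininfo one login at a time and test the return code, so a single failure does not stop the whole script:

DECLARE @login sysname, @rc int;

DECLARE login_cur CURSOR FOR
    SELECT name FROM master.dbo.syslogins WHERE isntname = 1;
OPEN login_cur;
FETCH NEXT FROM login_cur INTO @login;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC @rc = master.dbo.xp_logininfo @login;
    IF @rc <> 0
        PRINT 'Could not resolve against AD: ' + @login;
    FETCH NEXT FROM login_cur INTO @login;
END
CLOSE login_cur;
DEALLOCATE login_cur;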

Thanks

Pan

Load Sybase .db into new Sybase developer server

Posted: 19 Mar 2013 02:06 PM PDT

We've got a few .db files lying around here which were created by Sybase (version 8.x, I guess), and I have the task of extracting some data from those old databases.

Now I've installed the latest Sybase developer version (15.7) and want to add these databases to the Sybase server.

Unfortunately I do not have any experience with Sybase, and I couldn't find any documentation for this task in the official Sybase manual.

Is this even possible? Does any of you know of an article describing the integration of existing complete databases, or the necessary commands for the task?

InnoDB real-time locks?

Posted: 19 Mar 2013 11:46 AM PDT

How can I view real-time locks in InnoDB when information_schema.innodb_locks is not available?
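
A hedged sketch of the usual fallback when the information_schema lock tables are missing (they arrived with the InnoDB plugin / MySQL 5.5): the InnoDB monitor output, whose TRANSACTIONS section lists current lock waits:

SHOW ENGINE INNODB STATUS;

-- Legacy trick for more lock detail written to the error log:
-- creating this magically-named table enables the lock monitor.
CREATE TABLE innodb_lock_monitor (a INT) ENGINE=INNODB;
DROP TABLE innodb_lock_monitor;   -- disables it again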

Investigating high SQL waits, especially LCK_M_U

Posted: 19 Mar 2013 01:26 PM PDT

Recently I have been struggling with SQL Server performance, and although we have fixed a huge multitude of basic errors in the config it is still performing badly. To clarify, it is not overall performance but rather fairly frequent timeouts (from the client application). Previously I had looked to memory as a cause, but that has now been resolved and we are still seeing the same behaviour.

Looking at the graphs from Management Data Warehouse, I can see that LCK_M_U/X/IX waits make up the majority of our waits around the time a user experiences a timeout. Everything I read says I need to look at the queries and processes running, but I have yet to find anything aimed at a level I can understand. The locks, as you can see in the picture below, spike at times which coincide with the errors on the users' side. Is there a clever DMV or some such that I can query to work out what statement is creating the lock? Is it a case of trawling through a trace to find the details? Any guidance is greatly appreciated, and apologies if the information is not clear.

[Image: Management Data Warehouse wait-statistics graph showing spikes in LCK_M_* waits]
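
If it helps, a hedged sketch of the kind of DMV query that can catch the offender in the act (run while a spike is happening):

SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.wait_type LIKE 'LCK%';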

Update strings for all cmdexec Jobs

Posted: 19 Mar 2013 11:47 AM PDT

Does anyone know if it's possible (and, furthermore, advisable) to update all cmdexec strings in jobs on a server?

We'd like to change our logging location, and in principle I'd think this could be done very simply by changing the /O "\\LOCATION\file.log" strings.

In fact, I can see how you'd do that:

USE MSDB
GO
UPDATE dbo.sysjobsteps
SET [command] = REPLACE([command], '\\OLDLOCATION\... ', '\\NEWLOCATION\... ')
WHERE ...

BUT I've had it drummed into me that you should never update the sys tables manually, so the idea of doing this makes me beyond nervous. For anything else I'd use the dbo.sp_update_job proc (http://msdn.microsoft.com/en-gb/library/ms188745.aspx for those who've strayed here looking for something else job-related), but that procedure doesn't seem to be able to handle this.
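
For completeness, the step-level sibling of that proc does take the command text, so something like this (a hedged sketch; the job name and command are hypothetical) avoids touching sysjobsteps directly, at the cost of one call per step:

EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'NightlyLoad',          -- hypothetical job
    @step_id  = 1,
    @command  = N'runetl.exe /O "\\NEWLOCATION\file.log"';  -- hypothetical command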

I have a hunch that this might possibly be an exception to the "Never Update the Sys Tables" rule, though. I can't imagine that the cmdexec string is involved in any further relations in the way that, for example, enabled/disabled status evidently is. Wishful thinking?

So I don't know for sure, and I can't seem to find out. Anyone know or have any experience with this?

Thanks Dan

PostgreSQL 9.2: vacuum returning disk space to operating system

Posted: 19 Mar 2013 03:11 PM PDT

VACUUM usually does not return disk space to the operating system, except in some special cases. From the docs:

The standard form of VACUUM removes dead row versions in tables and indexes and marks the space available for future reuse. However, it will not return the space to the operating system, except in the special case where one or more pages at the end of a table become entirely free and an exclusive table lock can be easily obtained. In contrast, VACUUM FULL actively compacts tables by writing a complete new version of the table file with no dead space. This minimizes the size of the table, but can take a long time. It also requires extra disk space for the new copy of the table, until the operation completes.

The question is: how can the state where one or more pages at the end of a table become entirely free be achieved? This can be done via VACUUM FULL, but I haven't got enough disk space to run it. So are there any other possibilities?
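
One manual trick sometimes used (a hedged sketch; the table, column and key range are hypothetical): re-UPDATE the rows that live near the end of the file so their new versions land in free space earlier in the table, after which a plain VACUUM may find the tail pages entirely free and truncate them:

UPDATE my_table SET some_col = some_col
WHERE id BETWEEN 900000 AND 1000000;  -- rows believed to sit on the tail pages

VACUUM my_table;  -- may now truncate the emptied tail and return it to the OS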

In what data type should I store an email address in database?

Posted: 19 Mar 2013 12:20 PM PDT

I understand that a 254-character email address is valid, but implementations I have researched tend to use varchar(60) to varchar(80) or equivalent. For example, this SQL Server recommendation uses varchar(80).

Is there a reason not to use the full 254-character maximum? Doesn't a varchar by definition use only as much storage as needed to hold the data?

Are there significant performance implications/trade-offs which cause so many implementations to use less than the full 254 possible characters?
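
For what it's worth, declaring the full maximum costs nothing for short values; a varchar column stores the actual characters plus a small length header. A minimal sketch (table and column names are hypothetical):

CREATE TABLE contact (
    contact_id int IDENTITY(1,1) PRIMARY KEY,
    email      varchar(254) NOT NULL   -- a 10-character address still uses ~12 bytes
);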

SQL Server in a state of confusion: Does the database exist or doesn't it?

Posted: 19 Mar 2013 01:54 PM PDT

Got a really weird, annoying problem. Somehow the instance of SQL Server 2008 R2 running on our server has gotten somewhat corrupted.

First, we noticed that the database we created yesterday was missing. So we looked around and found that it was still there, but detached. So we tried to attach the mdf, but got a message something like The file is currently in use.

I thought that was odd, so I restarted SQL Server. Same thing... okay, time for drastic measures... so I stopped the service, zipped up the mdf, started the service, unzipped it and then tried to restore it. The above message was gone, but then I got:

Cannot attach a database with the same name as an existing database  

Ouch. Of course it's not showing in the database explorer, so no idea what's going on... last resort:

DROP DATABASE [DatabaseName]  

Of course that didn't work... it tells me the database does not exist. So I'm stuck: at one point MS SQL thinks the db does exist, and at another point it thinks the db does not exist. It's obviously in a state of confusion.

Has anyone seen this before? Got any ideas on how to fix it?
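
A hedged diagnostic step that may clarify which of the two states the instance actually believes: ask the catalog directly.

SELECT name, state_desc, user_access_desc
FROM sys.databases
WHERE name = N'DatabaseName';   -- the name used in the DROP attempt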

Retrieving all PK and FK with pgadmin

Posted: 19 Mar 2013 04:11 PM PDT

I have a big database from which I need to extract all primary keys and foreign keys for each table.

I have pgAdmin III.

Is there a way to do this automatically, rather than going over each table manually?
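
For instance (a hedged sketch, runnable from pgAdmin's SQL window): pg_constraint holds every PK and FK, and pg_get_constraintdef() prints each one in DDL form:

SELECT conrelid::regclass        AS table_name,
       conname                   AS constraint_name,
       CASE contype WHEN 'p' THEN 'PRIMARY KEY' ELSE 'FOREIGN KEY' END AS constraint_type,
       pg_get_constraintdef(oid) AS definition
FROM pg_constraint
WHERE contype IN ('p', 'f')
ORDER BY table_name, contype;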

Overview of how MongoDB uses its various threads

Posted: 19 Mar 2013 08:07 PM PDT

On one instance I have MongoDB using ~85 threads. Lacking the time to investigate directly, I am curious:

  1. What are the threads used for? Do they fall into a few high-level usage categories?
  2. How can I control/influence the number of threads used? E.g. max connection params, etc. (a sketch follows below)
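
On the second point, a hedged sketch (2.x-era mongod is assumed, where each client connection gets its own thread, so capping connections caps the bulk of the threads):

mongod --maxConns 500 --config /etc/mongod.conf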

What are the differences between leaf and non-leaf pages?

Posted: 19 Mar 2013 11:36 AM PDT

I've been running some index usage reports, and I'm trying to get a definition of Leaf and Non-leaf. There seem to be both Leaf and Non-leaf Inserts, Updates, Deletes, Page Merges, and Page Allocations. I really don't know what they mean, or whether one is better than the other.

If someone could give a simple definition of each, and also explain why Leaf or Non-leaf matters, it would be appreciated!
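
For context, these counters surface from sys.dm_db_index_operational_stats, where the leaf_* columns track the bottom (data) level of each B-tree and the nonleaf_* columns track the internal navigation levels above it. A hedged sketch:

SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.index_id,
       s.leaf_insert_count, s.leaf_update_count, s.leaf_delete_count,
       s.nonleaf_insert_count, s.nonleaf_allocation_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS s;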

MS Access 2007-2010 SQL - how to create multiple rows from a single-row record

Posted: 19 Mar 2013 12:51 PM PDT

I need help updating the table below using SQL in an MS Access 2007-2010 database.

I have a table called table1 which has the entries below:

table1:
---------------------------------
| col1 | col2 | col3        |
---------------------------------
| A1   | abcd | 123;456;789 |
| B2   | efgh | 246;369     |
| C3   | ijk  |             |
---------------------------------

Using SQL, I want to change the above table (specifically, turning the multiple values in column col3 into individual rows) as shown below:

---------------------------
| col1 | col2 | col3 |
---------------------------
| A1   | abcd | 123  |
| A1   | abcd | 456  |
| A1   | abcd | 789  |
| B2   | efgh | 246  |
| B2   | efgh | 369  |
| C3   | ijk  |      |
---------------------------

How do I do it? I have tried various forums but could not come up with an exact solution so far.
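
Access SQL alone has no string-splitting function, so one common route is a short VBA routine instead (a hedged sketch; it assumes a target table table2 with the same three columns already exists):

Sub SplitCol3()
    ' Read each row of table1, split col3 on ';', insert one row per value
    Dim db As DAO.Database, rs As DAO.Recordset
    Dim parts() As String, i As Integer
    Set db = CurrentDb
    Set rs = db.OpenRecordset("SELECT col1, col2, col3 FROM table1")
    Do While Not rs.EOF
        If Len(Nz(rs!col3, "")) = 0 Then
            ' Keep rows with an empty col3 as a single row
            db.Execute "INSERT INTO table2 (col1, col2) VALUES ('" & rs!col1 & "', '" & rs!col2 & "')"
        Else
            parts = Split(rs!col3, ";")
            For i = LBound(parts) To UBound(parts)
                db.Execute "INSERT INTO table2 (col1, col2, col3) VALUES ('" & rs!col1 & "', '" & rs!col2 & "', '" & parts(i) & "')"
            Next i
        End If
        rs.MoveNext
    Loop
    rs.Close
End Sub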

Appreciate your help.

Thanks, Jak.

Stress test to saturate memory for MySQL innodb

Posted: 19 Mar 2013 01:30 PM PDT

I'm investigating Windows 32-bit memory problems (the 2G limit) and am after a test which will max out MySQL's memory: the innodb_buffer_pool, per-connection memory and/or any other uses of memory.

Perhaps a query I could use for mysqlslap?
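
Perhaps something like this (a hedged sketch using standard mysqlslap flags): auto-generated load where the row count stresses the buffer pool and the concurrency stresses per-connection buffers:

mysqlslap --user=root --password \
  --auto-generate-sql --auto-generate-sql-add-autoincrement \
  --auto-generate-sql-write-number=1000000 \
  --concurrency=200 --iterations=5 \
  --number-of-queries=100000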

How to search whole MySQL database for a particular string

Posted: 19 Mar 2013 11:03 AM PDT

Is it possible to search a whole database's tables (every row and column) to find a particular string?

I have a database named A with about 35 tables. I need to search for the string "hello", and I don't know in which table this string is saved. Is it possible?

Using MySQL

I am a Linux admin and I am not familiar with databases; it would be really helpful if you could explain the query as well.
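
Since you're comfortable on the command line, a hedged sketch of the blunt-but-effective approach: dump the whole database to a text file and grep it. --skip-extended-insert puts one row per INSERT line, so matches are easy to read:

mysqldump --skip-extended-insert -u root -p A > A.sql
grep -n 'hello' A.sql    # each matching line is one INSERT, i.e. one row

The "-- Table structure for table ..." comments in the dump tell you which table a matching INSERT belongs to.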

Splitting TempDB into multiples files equal to number of CPUs

Posted: 19 Mar 2013 04:45 PM PDT

The article SQL Server tempdb Best Practices Increase Performance suggests that I should split tempdb into a number of files equal to the number of cores. So for 4 cores you get 4 files.

By having the larger number of files, you can increase the number of physical I/O operations that SQL Server can push to the disk at any one time. The more I/O that SQL Server can push down to the disk level, the faster the database will run. With standard databases, SQL Server can cache a large amount of the data that it needs into memory. Because of the high-write nature of the tempdb, the data needs to be written to the disk before it can be cached back up into memory.

Though it sounds good in theory, is it really that good as a general optimisation? Is it something that may only apply to specific systems where I/O is very high?
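
For reference, the mechanics look like this (a hedged sketch; drive letters and sizes are hypothetical). Equal sizes matter because SQL Server fills files proportionally, and much of the practical benefit is reduced allocation-page (PFS/SGAM) contention rather than raw I/O throughput:

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 4GB, FILEGROWTH = 512MB);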

multivalued weak key in ER database modeling

Posted: 19 Mar 2013 11:44 AM PDT

I was wondering, since I didn't find any clarification for this: I want to store movies that exist in different formats (DVD, Blu-ray, etc.), where the price of each format differs and so does the quantity of each format, so I came up with this:

[Image: ER diagram of the first design]

Is this correct from a design perspective? Does this imply redundancy? I don't understand how this will be stored in a table. Would it be better to do it like this:

[Image: ER diagram of the alternative design]

Thanks in advance.

EDIT: I am adding some more descriptive information about what I want to store at this point of the design. I want to store information about sales. For each movie that exists in the company, I need to store format, price and stock quantity. I will also need to store customer information: a unique id, name, surname, address, the movies that he/she has already bought, and his/her credit card number. Finally, I will have a basket that temporarily keeps items (let's suppose that items other than movies exist) that the customer wants to buy.
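
For what it's worth, a hedged sketch of how the weak-entity version typically lands in tables (names are hypothetical): one row per (movie, format) pair, with price and stock hanging off that pair, which avoids storing movie details redundantly:

CREATE TABLE movie (
    movie_id INT PRIMARY KEY,
    title    VARCHAR(200) NOT NULL
);

CREATE TABLE movie_format (
    movie_id  INT NOT NULL REFERENCES movie(movie_id),
    format    VARCHAR(20) NOT NULL,    -- 'dvd', 'bluray', ...
    price     DECIMAL(8,2) NOT NULL,
    stock_qty INT NOT NULL,
    PRIMARY KEY (movie_id, format)     -- format acts as the partial (discriminator) key
);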

Microsoft Office Access database engine could not find the object 'tableName'

Posted: 19 Mar 2013 02:44 PM PDT

First a little background: I am using MS Access to link to tables in an Advantage database. I created a System DSN. In the past in Access I've created a new database and, using the external data wizard, successfully linked to tables. Those databases and the linked tables are working fine.

Now I am trying to do the same thing: create a new Access db and link to this same DSN. I get as far as seeing the tables, but after making my selection I get the error: "The Microsoft Office Access database engine could not find the object 'tableSelected'. Make sure the object exists and that you spell its name and the path name correctly."

I've tried creating another data source (system and user) with no luck. Environment is Win XP, Access 2007, Advantage DB 8.1.
