Sunday, September 1, 2013

[how to] Does limit impact affected rows or not?

Does limit impact affected rows or not?

Posted: 01 Sep 2013 05:31 PM PDT

I have this table:

CREATE TABLE IF NOT EXISTS `usergroups` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `user_id` int(11) unsigned NOT NULL,
  `group_id` smallint(5) unsigned NOT NULL,
  PRIMARY KEY (`group_id`,`user_id`),
  KEY `id` (`id`),
  KEY `user_id` (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=5496;

When I run this query:

EXPLAIN SELECT `UserGroup`.`user_id`
FROM `usergroups` AS `UserGroup`
WHERE `UserGroup`.`group_id` = 1
LIMIT 30

its output is:

id  select_type  table      type  possible_keys  key      key_len  ref    rows  Extra
1   SIMPLE       UserGroup  ref   PRIMARY        PRIMARY  2        const  543   Using index

I think something is wrong: it reports 543 rows, but I expected at most 30 rows because of the LIMIT. Is that right?

Restoring tab exports to a mysql database with integrity

Posted: 01 Sep 2013 02:33 PM PDT

I have a database with a lot of data for a client that's on a shared web host. Doing a plain mysqldump is causing the job to get killed on the server, so we were looking at backing up the tables individually.

I've found that the --tab option will automatically create separate files for each table, but my question is about restoring them while maintaining integrity. This site shows various ways to write shell scripts that loop through the list of backup files and load them one by one. However, I suspect this may not guarantee data integrity.

Is there a more automated way I can lock the database and load the individual tab files, to ensure that I have the database restored to the state it was backed up in?
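
A rough sketch of the kind of single-session restore I have in mind (the file paths and table names are just placeholders): disable foreign key and unique checks, load each tab file, then switch the checks back on:

SET foreign_key_checks = 0;
SET unique_checks = 0;

LOAD DATA LOCAL INFILE '/backup/table1.txt' INTO TABLE table1;
LOAD DATA LOCAL INFILE '/backup/table2.txt' INTO TABLE table2;
-- ... one LOAD DATA per tab file ...

SET unique_checks = 1;
SET foreign_key_checks = 1;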

MySQL: update instead of delete if foreign key constraint?

Posted: 01 Sep 2013 07:46 PM PDT

I have a bit of a strange question. I know about insert on duplicate key update. My question is, is there something similar for deletes that fail because of foreign key constraints?

For example:

delete from table1 where value='something';

But, table2 has a foreign key that depends on the value I want to delete in table1, so the delete fails. I'd like to do something like this:

delete from table1 where value='something' on foreign key fail update some_other_value='something else';

I know that looks really weird, but I have a good reason for doing it (without getting into details, it has to do with versioning historical data that can't be destroyed in the event that a value is referenced elsewhere.) I can figure out how to do this with more than one query, of course, but I'd like to do it in a single query if I can. I'm pretty sure it's not possible, but I'd like to ask before giving up :)
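
For reference, the multi-statement fallback I can already do looks roughly like this (a sketch; the table2 column that references table1 is an assumed name):

-- Rough sketch: in one transaction, update rows that are still referenced
-- and delete the rest. table2.table1_value is a hypothetical column name.
START TRANSACTION;

UPDATE table1
SET some_other_value = 'something else'
WHERE value = 'something'
  AND EXISTS (SELECT 1 FROM table2 WHERE table2.table1_value = table1.value);

DELETE FROM table1
WHERE value = 'something'
  AND NOT EXISTS (SELECT 1 FROM table2 WHERE table2.table1_value = table1.value);

COMMIT;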

Thanks!

Relation between max values for table_open_cache and open_files_limit?

Posted: 01 Sep 2013 09:44 AM PDT

From MySQL documentation:

Max value limit for open_files_limit is 65536 and table_open_cache is 524288.

When I applied the maximum limits, the MySQL server started, but with the following warnings in the error log:

2013-09-01 11:34:42 57231 [Warning] Buffered warning: Could not increase number of max_open_files to more than 65536 (request: 1048687)
2013-09-01 11:34:42 57231 [Warning] Buffered warning: Changed limits: table_cache: 32713 (requested 524288)

I see that the maximum limit for open_files_limit is lower than that of table_open_cache. This seems strange to me, since open_files_limit should always be greater than table_open_cache.

Nevertheless, I set open_files_limit to a value larger than 65536 and it worked.

I am now wondering how these two are related. Is there a mistake in the MySQL documentation?

P.S. I modified /etc/security/limits.conf along the way to allow such high limits.
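
For reference, the values the server actually ended up with can be checked at runtime:

SHOW GLOBAL VARIABLES
WHERE Variable_name IN ('open_files_limit', 'table_open_cache');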

Expanding from one to one to one to many relationship for 2 tables

Posted: 01 Sep 2013 12:34 PM PDT

I have 4 tables: tblSData, tblGF, tblGFAlert and tblEAlert. Each row in tblSData can have a row in tblGFAlert or tblEAlert. The problem is that each main row in tblSData may now have more than one row in tblEAlert. At the moment I can pick and display everything nicely because tblSData holds the foreign keys. Below are the tables and the current query; how do I overcome this problem?

SELECT tblSData.header,
       tblGFIn.gFName AS gFNameIn,
       tblGFOut.gFName AS gFNameOut,
       CAST(DATE_ADD(tblSData.dateTimer, INTERVAL '".$gmtValue."' HOUR_MINUTE) AS CHAR) AS dateTimer,
       CAST(tblSData.sInsertDateTime AS CHAR) AS sInsertDateTime,
       tblGFAIn.gMessage,
       tblGFAOut.gMessage,
       tblEAlert.eMessage
FROM tblSData
LEFT JOIN tblGF AS tblGFIn ON tblSData.gFInID = tblGFIn.gFID
LEFT JOIN tblGF AS tblGFOut ON tblSData.gFOutID = tblGFOut.gFID
LEFT JOIN tblGFAlert AS tblGFAIn ON tblSData.gFAInID = tblGFAIn.gAlertID
LEFT JOIN tblGFAlert AS tblGFAOut ON tblSData.gFAOutID = tblGFAOut.gAlertID
LEFT JOIN tblEAlert ON tblSData.eAlertID = tblEAlert.eAlertID
WHERE tblSData.aID=".$aID."
ORDER BY tblSData.dateTimer ASC

CREATE TABLE IF NOT EXISTS `tblSData` (
  `sDataID` int(11) NOT NULL AUTO_INCREMENT,
  `header` varchar(255) NOT NULL,
  `aID` int(5) NOT NULL,
  `gFInID` int(5) NOT NULL,
  `gFOutID` int(5) NOT NULL,
  `gFAInID` int(5) NOT NULL,
  `gFAOutID` int(5) NOT NULL,
  `eAlertID` int(5) NOT NULL,
  `dateTimer` datetime NOT NULL,
  PRIMARY KEY (`sDataID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1;

CREATE TABLE IF NOT EXISTS `tblGAlert` (
  `gAlertID` int(11) NOT NULL AUTO_INCREMENT,
  `eID` int(5) NOT NULL,
  `aID` int(5) NOT NULL,
  `gEntryStatus` enum('Do','Ad','Ej') NOT NULL,
  `gDateTime` datetime NOT NULL,
  `gInsertDateTime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `gMessage` varchar(255) NOT NULL,
  PRIMARY KEY (`gAlertID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1;

CREATE TABLE IF NOT EXISTS `tblEAlert` (
  `eAlertID` int(11) NOT NULL AUTO_INCREMENT,
  `sDataID` int(5) NOT NULL,
  `eID` int(5) NOT NULL,
  `aID` int(5) NOT NULL,
  `eDateTime` datetime NOT NULL,
  `eInsertDateTime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `eMessage` varchar(255) NOT NULL,
  PRIMARY KEY (`eAlertID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1;

Below is some sample data: in tblEAlert, rows 1 and 2 are linked to tblSData row 1, and rows 3, 4, 5 and 6 are linked to tblSData row 2.

Sample data for tblSData:

1,"A1",1122,100,102,1,2,1,2013-07-13 15:30:19
2,"A3",1122,104,103,3,4,3,2013-07-13 15:45:19
3,"A4",1122,105,108,5,6,7,2013-07-13 15:55:19

Sample data for tblEAlert:

1,1,1,1122,2013-07-13 15:30:19,2013-07-13 15:30:19,"Alert 1"
2,1,2,1122,2013-07-13 15:30:19,2013-07-13 15:30:19,"Alert 2"
3,2,2,1122,2013-07-13 15:45:19,2013-07-13 15:45:19,"Alert 2"
4,2,3,1122,2013-07-13 15:45:19,2013-07-13 15:45:19,"Alert 3"
5,2,4,1122,2013-07-13 15:45:19,2013-07-13 15:45:19,"Alert 4"
6,2,5,1122,2013-07-13 15:45:19,2013-07-13 15:45:19,"Alert 5"
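
One direction I am considering for the one-to-many part is to join tblEAlert on its sDataID column and collapse the alert messages with GROUP_CONCAT; a trimmed-down sketch (the other joins from the query above are left out):

SELECT tblSData.sDataID,
       tblSData.header,
       GROUP_CONCAT(tblEAlert.eMessage ORDER BY tblEAlert.eAlertID SEPARATOR '; ') AS eMessages
FROM tblSData
LEFT JOIN tblEAlert ON tblEAlert.sDataID = tblSData.sDataID
GROUP BY tblSData.sDataID, tblSData.header
ORDER BY tblSData.sDataID;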

How to automatically backup MongoDB?

Posted: 01 Sep 2013 07:51 AM PDT

How do I automatically back up MongoDB, for example daily?

P.S. Ideally both daily and monthly backups.

Stored procedure to handle empty sets

Posted: 01 Sep 2013 04:12 AM PDT

How can I make a stored procedure print a specific message if an empty set was returned by the query?
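
Assuming MySQL (the DBMS isn't stated above), a minimal sketch of the idea with a hypothetical items table:

DELIMITER //
CREATE PROCEDURE get_items_or_message()
BEGIN
    -- if the query would return nothing, return a message row instead
    IF (SELECT COUNT(*) FROM items WHERE active = 1) = 0 THEN
        SELECT 'No rows found' AS message;
    ELSE
        SELECT * FROM items WHERE active = 1;
    END IF;
END //
DELIMITER ;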

best ETL design to transfer transaction tables records into the data-warehouse

Posted: 01 Sep 2013 11:19 AM PDT

I have two types of tables to populate the data warehouse with every day. Lookup or configuration tables with a few hundred records are easy: I just truncate and refill them.

For transaction tables, which have many records, I usually load incrementally: the ETL runs daily and adds yesterday's records.

There are two problems I always face:

  1. When the job fails for any reason, I lose that day's transactions.
  2. When the job runs twice for any reason, or I run it twice, I get duplicates.

I am now trying to design a way to overcome these two problems, and to build the ETL so that it can fix itself automatically if either event occurs.

I want it to check for missing days and run the ETL for those days, and to check for duplicates and delete them.

Below are the approaches I have thought of (a sketch of the first follows the next paragraph):

  1. Take in the last 5 days regardless: every day the ETL runs, it deletes the last 5 days from the target and refills them.
  2. Check the destination tables for missing dates in the last month, then query the source only for those missing days.

Keep in mind that the source is a huge table in a production environment, so any query against it has to be as optimized as possible.
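
A minimal sketch of option 1 above, which also makes reruns idempotent (the table and column names are placeholders, not my real schema):

-- Hedged sketch: reload a fixed window so failed or duplicate runs self-correct.
-- dw.fact_transactions and src.transactions are hypothetical names.
DELETE FROM dw.fact_transactions
WHERE txn_date >= CURRENT_DATE - INTERVAL 5 DAY;

INSERT INTO dw.fact_transactions (txn_id, txn_date, amount)
SELECT txn_id, txn_date, amount
FROM src.transactions
WHERE txn_date >= CURRENT_DATE - INTERVAL 5 DAY;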

Thanks.

Trying to create PostGIS table

Posted: 31 Aug 2013 11:04 PM PDT

I have created a database following the documentation at

http://postgis.refractions.net/documentation/manual-1.5/ch02.html#id2648455

and now I can't even create a simple non-spatial table. I used:

sudo su - postgres
createdb lab
psql lab

CREATE TABLE lab_member

I use \dt to list the tables I have now,

and there are just two tables, neither of which I created:

        List of relations
 Schema |       Name       | Type  |  Owner
--------+------------------+-------+----------
 public | geometry_columns | table | postgres
 public | spatial_ref_sys  | table | postgres
(2 rows)

I want to COPY a CSV file into the PostGIS database, but I can't even create an empty table to load it into.

Please help me create a table!
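
For reference, a minimal working example of what I am trying to do (the columns are placeholders); CREATE TABLE needs a column list and a terminating semicolon:

CREATE TABLE lab_member (
    id    serial PRIMARY KEY,
    name  text,
    email text
);

-- server-side path; use \copy from psql instead if the CSV sits on the client
COPY lab_member (name, email) FROM '/tmp/lab_member.csv' CSV HEADER;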

MySQL Replication fails - commands received in wrong order?

Posted: 31 Aug 2013 11:59 PM PDT

The other day, I had this issue with one of our MySQL 5.1.60 master and 5.1.61 slave setups:

Last_Error: Error 'Can't create table 'foobar.#sql-4b87_2' (errno: 150)' on query.  Default database: 'foobar'. Query: 'alter table picture add index FKDD905 (pictureSource_id),   add constraint FKDD905 foreign key (pictureSource_id) references picturesource (id)'  

Got to mention, that we just "host" the MySQL servers/setups; the content (including INDEXes and such) comes from the customer. We just run it.

Okay, so here's what happened, as far as I understand it:

  1. On the master, the customer created the table "picturesource"
  2. On the master, they added the index "FKDD905" to the table "picture", referencing the field "id" from "picturesource" as a Foreign Key
  3. The slave tried to execute "alter table add index…".
  4. The slave tried to execute "add table picturesource…".

In that order.

On the slave, that failed. It failed, because the table picturesource did not exist on the slave. How can that happen? Why did the slave try to add the index BEFORE creating the table? I mean, on the master, the table MUST have existed before they added the key.

Don't know if it's important, but binlog_format = MIXED. I manually added the (empty) table picturesource and could then restart the slave process (→ START SLAVE;). In a subsequent SHOW FULL PROCESSLIST\G, I then saw that it was doing a "copy to tmp table" for the picture table; the picture table is huge (~7 GB).

Can someone clarify?

How to access a SQL Server database from other computer connected to the same workgroup?

Posted: 01 Sep 2013 12:46 AM PDT

I have created a C# application which uses a SQL Server database. I have other computers connected to me and to each other in a workgroup. I have shared my C# application with others. When they open the application they get the error

A network related or instance-specific error occured while establishing a connection to SQL Server. the server was not found or was not accessible

But the application is working fine on my PC. The connection string I am using is

Data Source=ASHISHPC1\SQLEXPRESS;Initial Catalog=ACW;User ID=ash159;Password=ashish159  

which is stored in a .config file.

What must I do? I have enabled TCP/IP on the server, but the same error persists. Does something need to change in the connection string, or is it something else?

Please help. Thank you.

my.cnf validation

Posted: 01 Sep 2013 01:18 PM PDT

We have moved from an old server with 8 GB RAM to a new server with 16 GB RAM so that we could have better performance.

The server is still consuming a lot of memory.

The tables in the database are not designed for InnoDB. The DB physical file size is approximately 2.8 GB.

my.cnf parameters are :

[client]
#password           = your_password
port                = 3306
socket              = /var/lib/mysql/mysql.sock

[mysqld]
port = 3306
socket = /var/lib/mysql/mysql.sock
skip-locking
#skip-bdb#niraj
skip-external-locking
key_buffer                  = 128M
max_length_for_sort_data    = 1024
max_tmp_tables              = 32M
table_cache                 = 64
max_allowed_packet          = 128M
sort_buffer_size            = 32M
read_buffer_size            = 10M
join_buffer_size            = 256M
read_rnd_buffer_size        = 64M
myisam_sort_buffer_size     = 256M
thread_cache_size           = 64
query_cache_size            = 256M
thread_concurrency          = 8
max_connect_errors          = 100
log-bin=mysql-bin
server-id                            = 1
set-variable = max_connections       = 10000
set-variable = connect_timeout       = 280
set-variable = interactive_timeout   = 280
set-variable = net_read_timeout      = 300
innodb_buffer_pool_size              = 3G
innodb_additional_mem_pool_size      = 32M
innodb_log_file_size                 = 768M
innodb_log_buffer_size               = 16M
#innodb_flush_log_at_trx_commit      = 1
innodb_lock_wait_timeout             = 50

[mysqldump]
quick
max_allowed_packet          = 64M

[mysql]
no-auto-rehash

[isamchk]
key_buffer              = 64M
sort_buffer_size        = 256k
read_buffer             = 256k
write_buffer            = 256k

[myisamchk]
key_buffer              = 64M
sort_buffer_size        = 256M
read_buffer             = 256k
write_buffer            = 256k

[mysqlhotcopy]
interactive-timeout

Can anyone validate this my.cnf and suggest why it is taking so much memory?
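
As a rough, back-of-the-envelope check (not an exact formula, and which per-session buffers to count is itself an assumption), the theoretical worst case can be estimated directly in SQL:

SELECT ( @@key_buffer_size
       + @@query_cache_size
       + @@innodb_buffer_pool_size
       + @@innodb_log_buffer_size
       + @@max_connections * ( @@sort_buffer_size
                             + @@read_buffer_size
                             + @@read_rnd_buffer_size
                             + @@join_buffer_size
                             + @@thread_stack ) )
       / 1024 / 1024 / 1024 AS worst_case_gb;

With max_connections = 10000 and join_buffer_size = 256M, the per-connection term alone could in theory exceed 16 GB many times over; maybe that is part of the problem?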

How do I migrate varbinary data to Netezza?

Posted: 01 Sep 2013 03:18 PM PDT

I got a warning message while migrating DDL from SQL Server to Netezza:

Warning: [dbo].[spec_binarymessage].[blobdata] data type [varbinary] is not supported the target system and will be scripted as VARCHAR(16000).

I'm wondering whether this kind of data conversion will cause issues such as data truncation.

How can I get my linked server working using Windows authentication?

Posted: 01 Sep 2013 04:18 PM PDT

I'm trying to get a linked server to ServerA created on another server, ServerB using "Be made using the login's current security context" in a domain environment. I read that I'd need to have SPNs created for the service accounts that run SQL Server on each of the servers in order to enable Kerberos. I've done that and both now show the authentication scheme to be Kerberos, however, I'm still facing the error:

"Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'".  

In Active Directory, I can see that the service account for ServerB is trusted for delegation to MSSQLSvc, but I noticed that the service account for ServerA does not yet have "trust this user for delegation" enabled. Does the target server also need to have that option enabled? Is anything else necessary to be able to use the current Windows login to use a linked server?

Connection to local SQL Server 2012 can be established from SSMS 2008 but not from SSMS 2012

Posted: 01 Sep 2013 05:18 PM PDT

I have two local SQL Server instances running on my local machine. The first is SQL Server 2008 R2 Enterprise Edition (named MSSQLSERVER) and the 2nd is SQL Server 2012 Business Intelligence Edition.

My problem is with SSMS 2012, which can connect to remote servers but not to the local 2012 instance; I can, however, connect to this instance from SSMS 2008.

The error message I get when trying to login is

Login Failed. The login is from an untrusted domain and cannot be used with Windows Authentication. (Microsoft SQL Server, Error: 18452)

I must point out that I don't have the necessary privileges to access SQL Server Configuration Manager (blocked by group policy).

Any help would be appreciated.

Design consideration regarding state handling: how to store multiple, variable states for one entity

Posted: 01 Sep 2013 10:18 AM PDT

First, I have to admit that neither I nor my colleagues are database professionals. For a new project we came across a design question we couldn't easily solve; all our ideas had some disadvantages, so we couldn't figure out the best way to go.

We have a main entity "Transaction" which should be processed by "ProcessingRules". The processing rules can be configured by the users in the web application (each rule has a different execution schedule: one might run every hour, whereas others might run nightly).

Let's say Transaction gets 10,000 new records a day.

This would lead to a DB design where I need to keep the State "Processed YES/NO" for each "ProcessingRule" and "Transaction".

I thought the proper way is to have a relation table between "ProcessingRule" and "Transaction". If no record is present, the transaction has not been processed by that rule yet.

Transaction [0..1] ------ [*] TransactionRuleProcessing [*] ------- [0..1] ProcessingRule

But when I think of the Query, this would lead into a WHERE NOT EXISTS (SELECT 1 FROM TransactionRuleProcessing...) query for the rule to identify new or unprocessed records.

If we have a large number of rows in Transaction, I think this will affect performance, because the NOT EXISTS has to join the whole table against the state table.

On the other side, if we had only one state directly on the Transaction table, we could add an index and there would be no join between the large Transaction table and the state table.

Question:

Is it true that such a NOT EXISTS query would have to join the whole Transaction table with the TransactionRuleProcessing table to identify rows with no matching state record (i.e. not yet processed)? How could this affect performance with a large Transaction table? What would you recommend instead for flagging a record with a variable number of states?
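
For concreteness, a sketch of the query and a supporting index as I picture them (table and column names are just placeholders based on the description above):

SELECT t.*
FROM Transaction t
WHERE NOT EXISTS (
    SELECT 1
    FROM TransactionRuleProcessing trp
    WHERE trp.transaction_id = t.id
      AND trp.rule_id = 42
);

-- A composite index should let the NOT EXISTS probe be a per-row index lookup
-- rather than a full join of both tables:
CREATE INDEX idx_trp_rule_txn
    ON TransactionRuleProcessing (rule_id, transaction_id);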

Any ideas are very much appreciated.

How to handle "many columns" in OLAP RDBMS

Posted: 01 Sep 2013 02:18 PM PDT

I have a fact that has around 1K different numerical attributes (i.e. columns). I would like to store this in to a column-oriented DB and perform cube analysis on it.

I tried to design a star schema, but I'm not sure how to handle this many columns. Normalising it sounds wrong, but I can't just have flat columns either. The combination of attributes are also too diverse to have a simple dimension table for this, even if I'd reduce the numerical values into categories (ranges), which is an option. I thought about storing them as XML or JSON for each row, but that doesn't sound great either.

If it helps, I'm planning to use Amazon's redshift for the DB.

Note: We have strong preference for RedShift as it fits perfectly for at least other few operations we do on this data. Hence I want to avoid other technologies like HBase if possible.

Mysql DB server hits 400% CPU

Posted: 01 Sep 2013 12:18 PM PDT

I have been facing a problem with my database server for about a month. Below are the observations I see when it hits the peak:

 - load average: 40 to 50
 - CPU %: 400%
 - idle %: 45%
 - wait %: 11%
 - vmstat procs: r -> 14 and b -> 5

It then drains down within 5 minutes. When I check SHOW PROCESSLIST, I see DML and SQL queries halted for some minutes, and processing is very slow. Yet each query is indexed appropriately and there is normally no delay; most of the time any query executed to serve the application returns in less than 1 second.

  • Mysql Version : 5.0.77
  • OS : CentOS 5.4
  • Mem: 16GB RAM (80% allocated to INNODB_BUFFER_POOL_SIZE)
  • Database Size: 450 GB
  • 16 Processor & 4 cores
  • Not in per-table model.
  • TPS ranges 50 to 200.
  • Master to a slave of the same configuration, and Seconds_Behind_Master is 0.

The URL below shows SHOW INNODB STATUS \G and SHOW OPEN TABLES at the time of the spike; the spike subsided within 5 minutes. In rare cases, maybe once in two months, the processes take 5 to 8 hours to drain back to normal. Each time I watch the load and processor utilization and how the server gradually works through its tasks, monitoring the processlist, InnoDB status and I/O status; I don't have to do anything to bring it down. It serves the application promptly and after some time drains back to normal. Can you find anything suspicious in the output, such as locks or OS waits? Any suggestion on how to triage this initially, or what could have caused such spikes?

http://tinyurl.com/bm5v4pl -> "show innodb status \G and show open tables at DB spikes."

Also there are some concerns that I would like to share with you.

  1. Recently I have seen a table that gets only about 60 inserts per second. It predominantly locks for a while waiting for the auto-increment lock to be released, so subsequent inserts pile up in the processlist. After a while the table shows In_use of about 30 threads, and later something frees them and clears the backlog, though I don't know what. (During this period the load stays above 15 for 5 minutes.)

  2. Suppose the application functionality should be shaped to best suit how the DB server reacts. There are 3 to 5 functionalities, each an independent entity schema-wise, yet whenever I see the locks, all the other schemas are affected too.

  3. The last point is the fuzziest. The slave stays in sync with the master with a delay of 0 seconds at all times, even though the slave applies changes with a single SQL thread fed in FIFO order from the relay log of the master's binary logs. If this single-threaded slave can keep the load low and stay up to date, do the hits for these functionalities really need to be concurrent on the master, given that I assume the concurrency is what causes the possible I/O locks at the OS level? Could this be organized in the application itself to keep the concurrency density thinner?

What are the different ways to keep track of active and archived data?

Posted: 01 Sep 2013 02:18 AM PDT

I'm looking for different ways to keep track of both active and archived data so I can weigh their pros and cons.

The system:
I have a computer with a database on it. The database has several tables in it; one of which contains a list of users that can use the computer; and several tables for auditing (user 1 did this, user 2 did that, etc). This database is a slave of a master database in which a Content Management System is used to say, add a new user and see reports on what user did what.

Example:
As stated above, I have a table (lets call it users) that keeps track of all the users that are allowed to use the computer. As time goes by users will be added and removed. The problem is the audit tables keep track of a user ID so if the user is removed I lose the user information because the rows can't be joined. One idea I had was to use MySql's triggers so that if a user is added, an insert trigger is triggered and inserts a copy of the data to an 'archived' user table (lets call it users_archive). That way the computer can use users to determine if the user has permission to use it and reports can use users_archive for reports.

This seems like the easiest and simplest way to do it, but a Google search hasn't turned up any other ways to do something like this.
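
A sketch of the trigger idea described above (the users columns are placeholders):

CREATE TABLE users_archive LIKE users;

DELIMITER //
CREATE TRIGGER trg_users_archive
AFTER INSERT ON users
FOR EACH ROW
BEGIN
    -- copy the new row so reports can still join on user_id after a delete from users
    INSERT INTO users_archive (user_id, name)
    VALUES (NEW.user_id, NEW.name);
END //
DELIMITER ;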

Database stuck in restoring and snapshot unavailable

Posted: 01 Sep 2013 11:18 AM PDT

I tried to restore my database from a snapshot. This usually took around a minute to complete the last couple of times. When I did it today, it didn't complete for around 30 minutes and the spid was in a suspended state. I stopped the query and now my database is stuck in restoring state and my snapshot is unavailable. Am I screwed?

USE master;
RESTORE DATABASE QA FROM DATABASE_SNAPSHOT = 'QA_Snap_Testing';
GO

Multiple database servers for performance vs failover

Posted: 01 Sep 2013 07:18 PM PDT

If I have two database servers, and I am looking for maximum performance vs high-availability, what configuration would be best?

Assuming the architecture is two load-balanced web/app servers in front of two db servers, will I be able to have both db servers active with synced data, with web1 to db1, web2 to db2 setup? Is this active/active?

I'm also aware that the two db servers can have their own schema to manually 'split' the db needs of the app. In this case daily backups would be fine. We don't have 'mission critical data.'

If it matters, we have traffic around 3,000-7,000 simultaneous users.

Access 2003 (SQL Server 2000) migration to SQL Azure

Posted: 01 Sep 2013 08:18 AM PDT

As my old Windows 2003 RAID controller started throwing errors, I am seriously thinking about switching current Access 2003 (adp/ADO) clients to use a Windows SQL Azure solution, in place of current SQL Server 2000.

Does anybody know if this is a feasible/painless operation?

Truncate table is taking too long in PostgreSQL

Posted: 01 Sep 2013 05:57 AM PDT

I have many databases on the same server, all created from the same template. When I execute a TRUNCATE on the exceptions table in each database, it works fine and executes immediately, but in the database named db_edr_s1 the same TRUNCATE on the exceptions table takes far too long, approximately 5 minutes.

The version is 9.1.

Any other info needed?
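
One thing I can check while the TRUNCATE hangs (from another session on db_edr_s1) is whether it is simply waiting on a lock; a sketch using the 9.1 column names:

-- pg_stat_activity still exposes procpid/current_query in 9.1
SELECT l.pid, l.mode, l.granted, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE l.relation = 'exceptions'::regclass;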

InnoDB - How to get top locked tables and rows which are locked

Posted: 01 Sep 2013 03:18 AM PDT

I was searching for a tool or query that can give me the top locked tables and which particular rows are locked. Is it possible to get this?
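
For what it's worth, the INFORMATION_SCHEMA InnoDB tables (MySQL 5.5+, or 5.1 with the InnoDB plugin) can show who is blocking whom; a sketch:

SELECT r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query           AS waiting_query,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query           AS blocking_query,
       l.lock_table,
       l.lock_index
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx   r ON r.trx_id  = w.requesting_trx_id
JOIN information_schema.innodb_trx   b ON b.trx_id  = w.blocking_trx_id
JOIN information_schema.innodb_locks l ON l.lock_id = w.requested_lock_id;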

PostGIS: Remove stale/obsolete geometry columns from the geometry_columns table [on hold]

Posted: 31 Aug 2013 10:22 PM PDT

I understand that the function Populate_Geometry_Columns() inserts records in the geometry_columns table for geometry columns that are not yet listed there. However, an invocation of Probe_Geometry_Columns() reports:

select probe_geometry_columns();
              probe_geometry_columns
---------------------------------------------------
 probed:1086 inserted:0 conflicts:1086 stale:53612
(1 row)

This is after a call to Populate_Geometry_Columns(). Is there an easy/documented way to get rid of the conflicts and the stale entries?

EDIT: As per Darrell Fuhriman's answer on GIS.SX, there are three ways:

  • truncate the geometry_columns table and rerun SELECT probe_geometry_columns()
  • Run a custom script that operates on PostGis's internal tables
  • Upgrade to PostGis 2.0 or later

PgAdmin III - How to connect to database when password is empty?

Posted: 01 Sep 2013 07:00 PM PDT

I have installed PostgreSQL 9.1 on my PC (Win 7). I have a small Java application connecting successfully to it with login=sa and password="". The connection works.

However, it is refused from PgAdmin III itself. I get:

Error connecting to the server: fe_sendauth: no password supplied  

How do I connect to my database from PgAdmin III with an empty password?

EDIT

This is just a test, not production code.
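
Since this is only a test setup, one workaround I could live with (assuming the role really is named sa) is to give it a non-empty password, which pgAdmin can then prompt for:

-- run from a session that can already connect, e.g. psql as postgres
ALTER ROLE sa WITH PASSWORD 'sa';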
