Friday, October 11, 2013

How to add Master and Slave PostgreSQL 9.1 Database in Rails app

Posted: 11 Oct 2013 04:55 PM PDT

I am working on a Rails app that uses PostgreSQL 9.1 as its database. In PostgreSQL I have configured streaming replication. Now, how do I add the master and slave PostgreSQL databases to the Rails application (in the database.yml file)? I have specified the database details as:

development:
  adapter: postgresql
  encoding: unicode
  database: app_development
  pool: 5
  username: username
  password: password
  host: master ip
  port: port number

development:
  adapter: postgresql
  encoding: unicode
  database: app_development
  pool: 5
  username: username
  password: password
  host: slave ip
  port: port number

development:
  adapter: postgresql
  encoding: unicode
  database: app_development
  pool: 5
  username: username
  password: password
  host: slave ip
  port: port number

But executing this produces the error "ActiveRecord::StatementInvalid": "PG::ReadOnlySqlTransaction: ERROR: cannot execute INSERT in a read-only transaction".

How do I specify the master and slave databases in the database.yml file?

Regards, Balu
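One possible layout (a sketch, not a verified fix; the hostnames are placeholders): YAML keys must be unique, so three identical development: entries collapse into one, and Rails ends up connecting only to the last host listed, the read-only slave, which is exactly what produces PG::ReadOnlySqlTransaction on INSERT. Giving each connection its own key avoids that; reads can then be routed to the slave with a replication gem such as Octopus, while ActiveRecord's default connection stays on the master.

development:
  adapter: postgresql
  encoding: unicode
  database: app_development
  pool: 5
  username: username
  password: password
  host: master.example.com   # hypothetical master host
  port: 5432

development_slave:
  adapter: postgresql
  encoding: unicode
  database: app_development
  pool: 5
  username: username
  password: password
  host: slave.example.com    # hypothetical slave host
  port: 5432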

Help with a PIVOT

Posted: 11 Oct 2013 03:45 PM PDT

I have a view that gives data which looks like this:

Rank | Score
-----|------
  1  |   1
  2  |  2.5
  3  |   5
  1  |   7
  2  |  2.5
  3  |   8
  1  |   9
  2  |  2.5
  3  |   5

What I would like to do is apply a PIVOT so that it appears in a 3x3 matrix. The result is always guaranteed to have 9 rows, with RANK going 123..123..123.

  1  |  2  |  3
-----|-----|-----
  1  | 2.5 |  5
  7  | 2.5 |  8
  9  | 2.5 |  5

I've attempted to use a PIVOT, but have been unsuccessful (the part about the aggregate function always confuses me):

SELECT *
FROM MyView
PIVOT (
    MAX([Rank]) FOR [Rank] IN ([1], [2], [3])
) pvt

How can I shape my data into the form that I am looking for (PIVOT or otherwise)?

Sorry if the requirement is a bit odd; I'm working with a piece of reporting software that will do what I want if I can shape my data into that form.
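A sketch of one way to get there, assuming SQL Server (the bracketed-column PIVOT syntax suggests it) and relying on the guaranteed 1-2-3 cycling: number the rows within each Rank so PIVOT has a grouping key, then aggregate Score.

SELECT [1], [2], [3]
FROM (
    SELECT [Rank], Score,
           ROW_NUMBER() OVER (PARTITION BY [Rank] ORDER BY (SELECT NULL)) AS grp
    FROM MyView
) AS src
PIVOT (
    MAX(Score) FOR [Rank] IN ([1], [2], [3])
) AS pvt;

ORDER BY (SELECT NULL) is a placeholder; if the view exposes a column that preserves the original row order, use it there so the three output rows come back in a deterministic order.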

How to show NULL results from query?

Posted: 11 Oct 2013 07:38 PM PDT

I'm trying to create a procedure with filters for searching.

If a parameter is NULL, the value of the parameter falls back to the column value (so the filter matches everything).

Here is my problem: when both the parameter and the column value are NULL, I get no results because of this.

I know that the query COLUMN_NAME IS NULL returns the row even when the value is NULL, but I don't know how to implement this here.

I tried to do this with CASE, but without success.

CREATE PROCEDURE SEARCH_QUERYS
        @ID   int,
        @LIVE bit,
        @DATA datetime
AS
BEGIN
        SELECT * FROM TB_SEARCH
        WHERE
        ID = ISNULL(@ID, ID) AND
        LIVE = ISNULL(@LIVE, LIVE) AND
        DATA = ISNULL(@DATA, DATA)
END
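A sketch of one common rewrite: the problem is that ISNULL(@DATA, DATA) turns into DATA = DATA when the parameter is NULL, and that comparison is never true for a NULL column value. Letting each filter short-circuit on the parameter instead keeps those rows:

CREATE PROCEDURE SEARCH_QUERYS
        @ID   int,
        @LIVE bit,
        @DATA datetime
AS
BEGIN
        SELECT *
        FROM TB_SEARCH
        WHERE (@ID   IS NULL OR ID   = @ID)
          AND (@LIVE IS NULL OR LIVE = @LIVE)
          AND (@DATA IS NULL OR DATA = @DATA)
END

A NULL parameter now skips its filter entirely, so rows whose column value is NULL are no longer dropped; OPTION (RECOMPILE) on the SELECT is a common companion to keep plans reasonable for such catch-all searches.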

How can I create an ad-hoc domain user on SQL Server

Posted: 11 Oct 2013 05:30 PM PDT

Okay, so here's the problem. I have an application I need to install and run locally for testing and debugging. The application has been designed to run on a domain and as such expects to log into the database using domain credentials.

I can't modify the section of code that handles the logins without a serious amount of work, which I don't have time for this week, nor do I have any ability to add the machine hosting the database to the domain (contractors' machines aren't allowed on the domain under the I.T. policy).

So I'm in a bit of a bind. I cannot use the CREATE LOGIN [DOMAIN\User] FROM WINDOWS routine, as I get the following error:

Windows NT user or group 'DOMAIN\User' not found. Check the name again.  

And I can't just insert into the relevant security tables (sys.server_principals, sys.syslogins, etc.) because:

Ad hoc updates to system catalogs are not allowed.  

Is there any way I can back door my local SQL Server installation to accept these user credentials from the application without being attached to the domain?

Of course adding the machine to the domain and logging in as a domain user would resolve this issue in a couple of minutes, but political issues prevent that from happening.
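One workaround sometimes used for this exact bind (a sketch, untested and dependent on NTLM fallback behavior): create a local Windows account with the same username and password as the domain account, since NTLM can fall back to a matching local account when the domain is unreachable, then create the login for the local principal:

-- on the SQL Server host, after creating a local Windows user "User"
-- whose password matches the domain account (MACHINENAME is the local
-- computer name; both names here are illustrative):
CREATE LOGIN [MACHINENAME\User] FROM WINDOWS;

From the client side, runas /netonly /user:DOMAIN\User app.exe runs the program locally as the current user while presenting DOMAIN\User for network connections. Whether this satisfies the application's login code depends on how it builds its connection string.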

COUNT WITH JOIN

Posted: 11 Oct 2013 02:56 PM PDT

I have the following tables:

fss_post(post_id, text)
fss_comment(idComment, post_id, text)

Is it possible to write a query that gives me one row for each post_id with the total comment count for that post?

EXAMPLE:

post_id    comment_count
101010     5
101011     0

And so on... Thanks.
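A sketch of the standard pattern: a LEFT JOIN keeps posts that have no comments, and counting the joined key (rather than *) makes those posts report 0:

SELECT p.post_id,
       COUNT(c.idComment) AS comment_count
FROM fss_post p
LEFT JOIN fss_comment c ON c.post_id = p.post_id
GROUP BY p.post_id;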

Getting MySQL User Variable in Sequence, While Using ORDER BY RAND()

Posted: 11 Oct 2013 05:32 PM PDT

I am running a SELECT statement, which will ultimately be part of an INSERT ... SELECT statement.

I want a result like this

ID    COUNTER
 3    1
 7    2
 1    3
20    4
12    5

When I run this query, I get everything in sequence:

SELECT 16132, @counter := @counter + 1 AS counter
FROM table t
INNER JOIN (SELECT @counter := -1) b
LIMIT 5;

But I want it randomized, so I add ORDER BY RAND() at the end of the query; then I end up with "random" values in the counter column. I think the counter values reflect the scan order of the larger, entire result set rather than the final sorted rows. Is there a way to get a counter column that stays in sequence while using ORDER BY RAND()?
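A sketch of one workaround: shuffle and LIMIT inside a derived table first, then assign the counter in the outer query, so numbering happens after the randomization (table_name and id are placeholders; user-variable assignments like this work in MySQL 5.x but are deprecated in 8.0):

SELECT r.id, @counter := @counter + 1 AS counter
FROM (
    SELECT id
    FROM table_name
    ORDER BY RAND()
    LIMIT 5
) r
CROSS JOIN (SELECT @counter := 0) init;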

Trigger needs dbms_alert declared

Posted: 11 Oct 2013 12:21 PM PDT

A trigger has given me the following:

 ORA-04098: trigger 'adminuser.DB_ERROR_CHANGE_TRIG' is invalid and failed re-validation  

This is while working on altering schemas/triggers/etc. from an import of a client database.

I ran

show errors trigger adminuser.db_error_change_trig  

and the results were:

LINE/COL ERROR
-------- -----------------------------------------------------------------
2/5      PL/SQL: Statement ignored
2/5      PLS-00201: identifier 'DBMS_ALERT' must be declared

Now, from googling, I have found that I need to grant explicit execute privileges, which I did with:

grant execute on dms_public to public  

However, this has not worked.

How can I determine what is going on, and how can I fix it? Also, is dbms_alert schema-specific? My understanding is that the full name of the object is SYS.dbms_alert; is there a separate one for each schema, or just one global one?
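A sketch of the usual remedy: DBMS_ALERT is a single SYS-owned package (there is no per-schema copy), and because definer's-rights PL/SQL ignores privileges received through roles, the trigger owner needs a direct grant, after which the trigger can be recompiled:

-- as a suitably privileged user:
GRANT EXECUTE ON SYS.DBMS_ALERT TO adminuser;

-- then revalidate the trigger:
ALTER TRIGGER adminuser.db_error_change_trig COMPILE;

Note that the grant statement quoted above names dms_public, which does not match dbms_alert; that alone could explain why the grant appeared to have no effect.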

Best Practices for Testing Application Dashboard Data

Posted: 11 Oct 2013 03:16 PM PDT

We have a dashboard that contains metrics based on a 6 month reporting period. Users log in and see their metrics based on the current 6-month reporting period.

Every 6 months, we load new metrics into our DEV/QA environment and configure them for the upcoming reporting period so we can test. But our application and ETL use GETDATE(), so for the ETL to calculate properly we have been setting the server's date into the future so that everything calculates and displays as if we were in the future.

Our DBA group is now telling us that this should not be done and that no one else in the industry does this. Can someone give me examples of what they have done for testing in this situation? Should we be looking at redesigning our ETL and application, and what are some ideas for best practices?

We have unit testing for the application, so it will work. We are more worried that the data from the ETL will not be correct on Day 1 of the new period. It is very important that the metrics are displaying properly and the data is correct on Day 1.
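One commonly suggested redesign (a sketch with hypothetical names): route every current-date lookup through a wrapper that can be overridden per environment, so DEV/QA can shift time without touching the server clock:

-- hypothetical config table and wrapper function
CREATE TABLE dbo.EtlConfig (DateOverride datetime NULL);
GO
CREATE FUNCTION dbo.EffectiveDate() RETURNS datetime
AS
BEGIN
    -- use the override when one is set (DEV/QA); fall back to the real clock
    RETURN (SELECT COALESCE(MAX(DateOverride), GETDATE()) FROM dbo.EtlConfig);
END;

The ETL and application would call dbo.EffectiveDate() instead of GETDATE(); production simply leaves the table empty.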

MySQL Query Optimization - Indexes & Joins 12 million+ records

Posted: 11 Oct 2013 12:54 PM PDT

I'm running the following query:

SELECT
    `track_title`.`title`,
    `track`.`id`,
    `artist`.`name` artist,
    `album`.`title` album,
    `st_tag`.`tag`,
    `track`.`year`
FROM `track`
JOIN `artist` ON `track`.`artist_id` = `artist`.`id`
JOIN `album` ON `track`.`album_id` = `album`.`id`
JOIN `track_title` ON `track`.`id` = `track_title`.`id`
JOIN `track_tags` ON `track`.`id` = `track_tags`.`track_id`
JOIN `st_tag` ON `track_tags`.`tag_id` = `st_tag`.`id`
WHERE `track_title`.`title_sch` LIKE "% love %"
ORDER BY `track_title`.`title`
LIMIT 0, 500;

Right now it takes at least 15 minutes to execute. Both track and track_title have about 12 million records; album and artist have at least a million each. Can this be sped up with better indexes? I've been playing with combinations of indexes to speed up the queries, but I'm still having trouble getting it right.

Table structure:

CREATE TABLE `album` (
  `id` int(10) unsigned NOT NULL,
  `title` varchar(1000) default NULL,
  `title_sch` varchar(1000) default NULL,
  `flags` smallint(5) unsigned NOT NULL default '0',
  PRIMARY KEY (`id`),
  KEY `title_sch` (`title_sch`(255)),
  KEY `album_title` USING BTREE (`title`(255)),
  KEY `album_cluster` USING BTREE (`id`,`title`(255),`title_sch`(255))
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `artist` (
  `id` int(10) unsigned NOT NULL,
  `name` varchar(1000) default NULL,
  `name_sch` varchar(1000) default NULL,
  `flags` smallint(5) unsigned NOT NULL default '0',
  PRIMARY KEY (`id`),
  KEY `name_sch` (`name_sch`(255)),
  KEY `name` USING BTREE (`name`(255)),
  KEY `artist_name` USING BTREE (`name`(255)),
  KEY `artist_cluster` USING BTREE (`id`,`name`(255),`name_sch`(255))
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `st_tag` (
  `id` smallint(5) unsigned NOT NULL auto_increment,
  `tag` varchar(50) NOT NULL,
  `genre` smallint(5) unsigned NOT NULL,
  `parent` smallint(5) unsigned default NULL,
  `depth` tinyint(3) unsigned NOT NULL default '0',
  `display` tinyint(4) NOT NULL default '-1',
  PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=352 DEFAULT CHARSET=utf8;

CREATE TABLE `track` (
  `id` int(10) unsigned NOT NULL,
  `artist_id` int(10) unsigned NOT NULL,
  `album_id` int(10) unsigned NOT NULL,
  `status` smallint(5) unsigned NOT NULL default '0',
  `flags` smallint(5) unsigned NOT NULL default '0',
  `year` smallint(5) unsigned default NULL,
  `duration` smallint(5) unsigned NOT NULL,
  `track_no` smallint(5) unsigned default NULL,
  `disc_no` tinyint(3) unsigned default NULL,
  `explicit` tinyint(3) unsigned NOT NULL default '0',
  `popularity` tinyint(3) unsigned NOT NULL default '0',
  PRIMARY KEY (`id`),
  KEY `album_id` (`album_id`),
  KEY `artist_id` (`artist_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `track_tags` (
  `id` int(10) unsigned NOT NULL auto_increment,
  `track_id` int(10) unsigned NOT NULL,
  `tag_id` smallint(5) unsigned NOT NULL,
  `status` tinyint(3) unsigned NOT NULL default '0',
  PRIMARY KEY (`id`),
  KEY `track_id` (`track_id`),
  KEY `tag_id` (`tag_id`)
) ENGINE=InnoDB AUTO_INCREMENT=26661380 DEFAULT CHARSET=utf8;

CREATE TABLE `track_title` (
  `id` int(10) unsigned NOT NULL,
  `title` varchar(1000) default NULL,
  `title_sch` varchar(1000) default NULL,
  `version` varchar(100) default NULL,
  PRIMARY KEY (`id`),
  KEY `title` (`title`(255)),
  KEY `title_sch` (`title_sch`(255)),
  KEY `title_cluster` USING BTREE (`id`,`title`(255),`title_sch`(255))
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
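One observation worth testing before adding more B-tree indexes (a sketch): the predicate LIKE "% love %" starts with a wildcard, so no index on title_sch can be range-scanned and the 12-million-row table is read in full. A full-text index sidesteps that; note that InnoDB only gained FULLTEXT support in MySQL 5.6, so on 5.5 this particular table would need to be MyISAM:

ALTER TABLE track_title ADD FULLTEXT INDEX ft_title_sch (title_sch);

SELECT tt.id, tt.title
FROM track_title tt
WHERE MATCH(tt.title_sch) AGAINST ('+love' IN BOOLEAN MODE);

Filtering track_title down first and joining the other tables afterwards should also shrink the sort that ORDER BY title currently performs on the full join result.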

Are de-normalised tables preferred in any aspect of a project? [on hold]

Posted: 11 Oct 2013 10:15 AM PDT

In an interview, the interviewer asked me a question about denormalization of tables and its usage in applications.

Guessing, I answered: yes, it might be. With a denormalized table you have all the columns in a single table and don't require any joins, so your biggest plus will be performance, provided proper lookup conditions are given. And you can use a denormalized table for reporting.

Is what I answered correct?

use function arguments or variable in postgres function

Posted: 11 Oct 2013 07:37 PM PDT

I want to set the default value for the columns building and floor in my table points with the function test(b, f) below:

CREATE OR REPLACE FUNCTION test(b text, f text) RETURNS void AS
$$
BEGIN
    ALTER TABLE points ALTER COLUMN building SET DEFAULT b;
    ALTER TABLE points ALTER COLUMN floor SET DEFAULT f;
END;
$$
LANGUAGE plpgsql;

When I run select test('Shopping Mall', '03'), it gives me the error below:

 column b does not exist  

It seems that I cannot use the function argument b in the ALTER ... query?
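A sketch of the usual fix: PL/pgSQL substitutes variables only in ordinary DML, not in utility commands like ALTER TABLE, so the statement has to be assembled as dynamic SQL. format() with %L (available since PostgreSQL 9.1) quotes the values as literals:

CREATE OR REPLACE FUNCTION test(b text, f text) RETURNS void AS
$$
BEGIN
    EXECUTE format('ALTER TABLE points ALTER COLUMN building SET DEFAULT %L', b);
    EXECUTE format('ALTER TABLE points ALTER COLUMN floor SET DEFAULT %L', f);
END;
$$
LANGUAGE plpgsql;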

Issue In INSERT INTO SELECT Statement [duplicate]

Posted: 11 Oct 2013 12:46 PM PDT


Can anyone please advise how to rectify the code below?

INSERT INTO table1 values(10, SELECT col1 FROM table2, null, null, 'variable');

--- Here table1 has only 4 columns (the actual table has 100 columns, and I don't want to list each column and then enter a value for it).

Is it possible to put a SELECT statement inside VALUES during table insertion?

This is not a duplicate of the earlier question, because here I want to insert data into table1 by fetching a few columns' data from table2, while the rest of the data needs some mathematical calculation (you can assume the rest of the data is variable).

If this has been answered elsewhere, please provide a link.
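Two common shapes for this (a sketch; the WHERE clause is a placeholder you would supply): either switch the whole statement to INSERT ... SELECT and put the constants in the select list, or keep VALUES and use a scalar subquery, which must return exactly one value:

-- variant 1: INSERT ... SELECT (one inserted row per row of table2)
INSERT INTO table1
SELECT 10, col1, NULL, NULL, 'variable'
FROM table2;

-- variant 2: scalar subquery inside VALUES (subquery must return one row;
-- the WHERE condition here is purely illustrative)
INSERT INTO table1
VALUES (10, (SELECT col1 FROM table2 WHERE id = 1), NULL, NULL, 'variable');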

Creating domain user login from another domain?

Posted: 11 Oct 2013 10:00 AM PDT

I have two domains named "Domain1" and "Domain2". I have SQL server 2005 in Domain2, and users in Domain1.

How can I create a login in SQL Server 2005 (in Domain2) for a user who is in Domain1?
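A sketch of the straightforward case, assuming a trust relationship exists between Domain2 and Domain1 (without a trust, the server in Domain2 cannot resolve Domain1 accounts at all; SomeUser is a placeholder):

CREATE LOGIN [Domain1\SomeUser] FROM WINDOWS;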

db2 database alias not listed via list database directory but still unable to create that database

Posted: 11 Oct 2013 04:13 PM PDT

I am trying to create a database by logging on to the Linux server on which the DB2 server is installed, as the instance owner:

db2 => list database directory

 System Database Directory

 Number of entries in the directory = 1

 Database 1 entry:

 Database alias                       = SAMPLE
 Database name                        = SAMPLE
 Local database directory             = /home/mylogin/sqllib
 Database release level               = f.00
 Comment                              =
 Directory entry type                 = Indirect
 Catalog database partition number    = 0
 Alternate server hostname            =
 Alternate server port number         =

db2 => create database testdb2
SQL1005N  The database alias "TESTDB2" already exists in either the local
database directory or system database directory.

db2 => list database directory on /home/mylogin/sqllib

 Local Database Directory on /home/mylogin/sqllib

 Number of entries in the directory = 1

 Database 1 entry:

 Database alias                       = SAMPLE
 Database name                        = SAMPLE
 Database directory                   = SQL00001
 Database release level               = f.00
 Comment                              =
 Directory entry type                 = Home
 Catalog database partition number    = 0
 Database member number               = 0

db2 => quit

I also checked whether any instances other than mine exist, but there are none:

DB20000I  The QUIT command completed successfully.
-bash-4.1$ db2ilist
mylogin

While the alias testdb2 is not listed by list database directory, db2 claims that the alias already exists in either the system database directory or the local database directory.

If this is true, how do I get the equivalent of list database directory that lists testdb2 as well?
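One avenue to check (a sketch; the path is whatever your instance reports): a database can exist on disk under the instance's default database path (DFTDBPATH) without being cataloged in the system directory, which yields exactly this SQL1005N. Listing the local directory on that path, rather than on /home/mylogin/sqllib, may reveal the leftover testdb2, which can then be cataloged and dropped:

db2 get dbm cfg | grep -i dftdbpath
db2 list database directory on /path/reported/by/dftdbpath
db2 catalog database testdb2 on /path/reported/by/dftdbpath
db2 drop database testdb2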

Way to apply a differential backup to a different full backup of the same database

Posted: 11 Oct 2013 05:21 PM PDT

I have the following situation: our SQL Server runs in a cloud service. The database to be backed up is 60 GB. We need a local copy of the database for some tests, and it needs to be up to date, but 60 GB is impractical to download daily.

We thought to make a full backup, download it once, and make differential backups on the following days. Once a week, we can stop database operations to take a last differential against last week's full backup, take a new full backup, and then resume database operations. At that moment the local DB, when restored with the last diff backup (made while operations were stopped), has the same data as the last weekly full backup.

The idea behind this is: is it possible to take the diff backup from the last full backup and restore it onto our local database? If it is not possible, does anyone have an idea how we can get all the data, up to date, from our cloud database?
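For reference, a sketch of how differentials chain in SQL Server (paths are placeholders): a differential can only be restored on top of the exact full backup it was taken against, and that full must have been restored WITH NORECOVERY, so the local routine each day would be:

-- restore the week's full backup, leaving the database in restoring state:
RESTORE DATABASE MyDb FROM DISK = 'C:\backups\full.bak' WITH NORECOVERY, REPLACE;

-- then apply the newest differential and bring the database online:
RESTORE DATABASE MyDb FROM DISK = 'C:\backups\diff_latest.bak' WITH RECOVERY;

Because WITH RECOVERY finalizes the database, the next day's differential cannot be applied on top; either re-run both restores from the retained full.bak, or restore WITH STANDBY to keep the copy readable while still accepting later differentials.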

Is it safe to kill ANALYZE TABLE query on a MyISAM table?

Posted: 11 Oct 2013 09:26 AM PDT

I have a huge MyISAM table that is blocking my application. I frequently find ANALYZE TABLE queries running on that table. I guess that's because a join is being performed on the table, or at least an index is being used; I'm not sure.

So I'm thinking of running an OPTIMIZE TABLE query on the table to speed things up. But there are currently multiple ANALYZE TABLE queries locking the table, so is it safe to kill such queries? I couldn't find enough information in MySQL's documentation on the matter.

Thanks.
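For reference, a sketch of ending just the statement rather than the whole connection (the Id comes from the process list; 12345 is a placeholder):

SHOW FULL PROCESSLIST;   -- find the Id of the connection running ANALYZE TABLE
KILL QUERY 12345;        -- terminates the statement but keeps the connection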

scramble the value of one column and unscramble it?

Posted: 11 Oct 2013 10:26 AM PDT

I want to change all the values in one column, as it is a production database and I don't want to give the data to testers without obfuscation.

Example:

Name             Name
----     -->     ----
ABC              BCD
PQR              QRS

I have looked into scrambling and other approaches, but my issue is that I want to be able to change the obfuscated data back to the original values when needed.

I need a generic script so I can apply it to an arbitrary table if required.
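A sketch matching the example above, assuming a DBMS that has TRANSLATE() (Oracle, PostgreSQL, SQL Server 2017+; my_table and Name are placeholders): each letter is mapped to the next one, and swapping the two alphabets reverses the mapping. Note this is obfuscation, not encryption; anyone who notices the pattern can undo it.

-- scramble: A->B, B->C, ..., Z->A
UPDATE my_table
SET Name = TRANSLATE(Name, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'BCDEFGHIJKLMNOPQRSTUVWXYZA');

-- unscramble: swap the two alphabets
UPDATE my_table
SET Name = TRANSLATE(Name, 'BCDEFGHIJKLMNOPQRSTUVWXYZA', 'ABCDEFGHIJKLMNOPQRSTUVWXYZ');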

Error: "Storage Engine for the Table Doesn't Support Nullable Columns" (SequelPro)

Posted: 11 Oct 2013 02:26 PM PDT

I'm trying to load a very normal .csv file (created with Excel 2011 for Mac) into Sequel Pro (using MySQL) on my Mac, and I've recently started getting this error consistently. Can anybody tell me what it is and how to fix it?

An error occurred while trying to add the new table 'wblist' by

CREATE TABLE `wblist` (
  `FILE` VARCHAR(255),
  `FIRSTNAME` VARCHAR(255),
  `MIDDLE` VARCHAR(255),
  `LASTNAME` VARCHAR(255),
  `FULLNAME` VARCHAR(255),
  `GENDER` VARCHAR(255),
  `ADDRESS` VARCHAR(255),
  `CITY` VARCHAR(255),
  `STATE` VARCHAR(255),
  `ZIP` VARCHAR(255),
  `PHONE` BIGINT(11),
  `UNIT` VARCHAR(255),
  `JOB` VARCHAR(255),
  `AREA` VARCHAR(255),
  `TIME` VARCHAR(255),
  `MAILINGADDRESS` VARCHAR(255),
  `MAILINGCITY` VARCHAR(255),
  `MAILINGSTATE` VARCHAR(255),
  `MAILINGZIP` VARCHAR(255),
  `ID` BIGINT(11),
  `CONFIDENCE` VARCHAR(255),
  `BIRTHDATE` VARCHAR(255),
  `AGE` INT(11),
  `RACE` VARCHAR(255),
  `ETHNICITY` VARCHAR(255),
  `RELIGION` VARCHAR(255),
  `PARTY` VARCHAR(255),
  `REGISTRATIONDATE` VARCHAR(255),
  `VOTERSTATUS` VARCHAR(255),
  `OtherPhone` VARCHAR(255),
  `POSSIBLEADDRESS` VARCHAR(255),
  `POSSIBLEMAILADDRESS` VARCHAR(255),
  `RECID` VARCHAR(255)
) ENGINE=CSV;

MySQL said: The storage engine for the table doesn't support nullable columns

This is stopping me before I'm able to import the table. Thanks for the help!
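A sketch of the cause and two ways out: the CSV storage engine requires every column to be declared NOT NULL (it also supports no indexes), so either add NOT NULL to each column or load into a regular engine such as InnoDB. Abbreviated to three of the columns above, the pattern is:

-- option 1: keep ENGINE=CSV, declare every column NOT NULL
CREATE TABLE `wblist` (
  `FILE` VARCHAR(255) NOT NULL,
  `FIRSTNAME` VARCHAR(255) NOT NULL,
  `RECID` VARCHAR(255) NOT NULL
) ENGINE=CSV;

-- option 2: same column list, ENGINE=InnoDB, which allows NULLs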

optimizing MySQL for traffic analytics system

Posted: 11 Oct 2013 08:26 PM PDT

Background:

I've developed a URL-shortener system like Bitly, with the same features, so the system also tracks clicker info and presents it as analytics graphs to the person who shortened the link. Currently I'm using MySQL and have a table that stores click info with this schema:

visit_id (int)
ip (int)
date (datetime)
country
browser
device
os
referrer (varchar)
url_id (int)  // foreign key to the shortened URL

For now, just the url_id field has an index.

The system should present click analytics for whatever time period the user wants, for example the past hour, the past 24 hours, the past month, and so on.

For example, to generate graphs for the past month, I run the following queries:

SELECT ALL DAY(date) AS period, COUNT(*)
FROM (
    SELECT *
    FROM visits
    WHERE url_id = '$url_id'
) AS URL
WHERE date > DATE_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 MONTH)
GROUP BY DAY(date)

-- another query to display clicker browsers in this period
-- another query to display clicker countries in this period
-- ...

Issues:

  • For a shortened link with about 500,000 clicks, it takes about 3-4 seconds to calculate just the first query, so about 10-12 seconds for all the queries, which is terrible.
  • Lots of memory and CPU are needed to run such queries.

Questions:

1- How do I improve and optimize the structure so that the analytics for high-traffic links are shown in less than 1 second (like Bitly and similar web apps) and with less CPU and RAM usage? Should I index the fields date, country, browser, device, os, referrer? If yes, how do I do that for date, given that I sometimes group clicks by DAY(date), sometimes by HOUR(date), sometimes by MINUTE(date), and so on? (See the index sketch at the end of this post.)

2- Is MySQL suitable for this application? Assume that at maximum my application should handle 100 million links and 10 billion clicks on them in total. Should I consider switching to a NoSQL solution, for example?

3- If MySQL is OK, is my database design and table structure proper and well designed for my application's needs, or do you have better recommendations and suggestions?

UPDATE: I made an index on the referrer column, but it didn't help at all and even hurt performance; I think that's because of the column's low cardinality (the same goes for the others) and the resulting index being large relative to my server's RAM.

I don't think indexing those columns alone will solve my problem; my current ideas are these:

1- If staying with MySQL, maybe generating statistics for high-traffic links with background processing is better than calculating them live at request time.

2- Using a caching solution like memcached to help MySQL with high-traffic links.

3- Using a NoSQL database such as MongoDB and approaches like MapReduce, which I am only poorly familiar with and have never used.

What do you think?
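As flagged in question 1, a sketch of the most likely quick win if MySQL is kept: a composite index matching the equality filter and the date range, plus a flattened query so the optimizer can use it directly:

ALTER TABLE visits ADD INDEX idx_url_date (url_id, date);

SELECT DAY(date) AS period, COUNT(*) AS clicks
FROM visits
WHERE url_id = 123   -- placeholder for '$url_id'
  AND date > DATE_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 MONTH)
GROUP BY DAY(date);

The column order matters: the equality column (url_id) comes first so the date range can scan the same index. For the per-browser and per-country breakdowns at this scale, pre-aggregated summary tables maintained in the background (idea 1 above) are the standard route to sub-second reads.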

Primary replica set server goes secondary after secondary fails

Posted: 11 Oct 2013 05:26 PM PDT

I have a 2-server replica set in which, after the secondary fails, the primary steps down to secondary while the failed member is in STARTUP2 (recovering). The problem with this is that I can't use the collection stored in that replica set freely; I get errors trying to use the collection:

pymongo.errors.OperationFailure: database error: ReplicaSetMonitor no master found for set: rs2  

Sometimes, if I restart the mongod instances, the server rs2-1 is primary for a while, but after some time (while the secondary is recovering) I see this in the logs of rs2-1 (the primary):

Tue May  7 17:43:40.677 [rsHealthPoll] replSet member XXX.XXX.XXX.XXX:27017 is now in state DOWN
Tue May  7 17:43:40.677 [rsMgr] can't see a majority of the set, relinquishing primary
Tue May  7 17:43:40.682 [rsMgr] replSet relinquishing primary state
Tue May  7 17:43:40.682 [rsMgr] replSet SECONDARY
Tue May  7 17:43:40.682 [rsMgr] replSet closing client sockets after relinquishing primary

Is there an easy way to make the primary stay primary after the secondary fails? Am I doing something wrong?

Thanks in advance!
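The "can't see a majority" log line is the design working as intended: with only two voting members, the survivor can never see a majority, so it cannot remain (or become) primary. The usual fix is a third vote, typically a data-less arbiter on any small machine (a sketch; the hostname is a placeholder):

// from the mongo shell, connected to the current primary:
rs.addArb("arbiter.example.com:27017")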

MySQL backup InnoDB

Posted: 11 Oct 2013 01:26 PM PDT

I have a VoIP server running 24x7. At low-peak hours at least 150+ users are connected. My server runs MySQL with the InnoDB engine on Windows 2008. I would like to take at least two full database backups without shutting down my service.

As per Peter Zaitsev, the founder of Percona, mysqldump --single-transaction is not always good.

read here if you are interested

As I'm not a DBA, I would like to know which would be the best solution for taking a database backup in my scenario.

Thanks,
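For reference, a sketch of the common low-effort option, assuming all tables really are InnoDB: a logical dump inside a single transaction reads a consistent snapshot without blocking writers, with the caveat from the Percona reference above that one long-running dump transaction can itself hurt a busy server:

mysqldump --single-transaction --routines --triggers --all-databases > full_backup.sql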

Strange characters in mysqlbinlog output

Posted: 11 Oct 2013 11:26 AM PDT

Has anyone experienced this? Data replicates fine, but when it is output by mysqlbinlog there are hidden characters that break the input.

  • mysqlbinlog Ver 3.3 for Linux at x86_64
  • mysql 5.5.28 server

Thanks! Julie

12c Grid Agent installation issues - Shell path is incorrect or not defined

Posted: 11 Oct 2013 08:26 AM PDT

I am installing 12c Grid Control agent on my server via Enterprise manager.

The target server is a UNIX server running AIX. I add the target host manually, enter authentication information, host name, installation directory, etc., and all goes well until I click the "Deploy Agent" button; then I get the following error:

"Shell path is incorrect or not defined.:/bin/bash(SH_PATH),-c(SH_ARGS) on host MyHostName"

and details are :

"Check the property values in the following files in this order, ssPaths_.properties or sPaths.properties or Paths.properties, in "/u02/app/oracle/middleware/oms/oui/prov/resources"

There isn't a /u02/app directory on my target server, only /u01/app/oracle, so I assume the above directory is on the same server as Enterprise Manager.

I did, however, find a directory with files of the format ssPaths_.properties: /u01/app/oracle/product/11.2.0.3/oui/prov/resources

Since my platform is AIX, I used the file ssPaths_aix.properties.

On my target server, which bash returns /usr/bin/bash, which ssh returns /usr/bin/ssh, and which sh returns /usr/bin/sh.

So I edited the ssPaths_aix.properties file so that SH_PATH=/usr/bin/sh, but I am still getting the same error. I have also tried SH_PATH=/usr/bin/bash, but nothing seems to make any difference. One possibility I'm considering is that I am editing the wrong file.

Can anyone help?

DB2 to require password each time

Posted: 11 Oct 2013 04:07 PM PDT

I am using db2inst1 to connect to a database in DB2, which I have installed on my machine. As a result, the db2inst1 user does not need username/password authentication (it borrows the credentials from the OS). I would like to change that and force a username/password to be requested every time a connection is initiated.

More specifically, this is how the authentication configuration looks like:

db2 get dbm cfg|grep -i auth

 GSS Plugin for Local Authorization    (LOCAL_GSSPLUGIN) =
 Server Connection Authentication          (SRVCON_AUTH) = NOT_SPECIFIED
 Database manager authentication        (AUTHENTICATION) = CLIENT
 Alternate authentication           (ALTERNATE_AUTH_ENC) = NOT_SPECIFIED
 Cataloging allowed without authority   (CATALOG_NOAUTH) = NO
 Trusted client authentication          (TRUST_CLNTAUTH) = SERVER
 Bypass federated authentication            (FED_NOAUTH) = NO

db2 connect to dbName

   Database Connection Information

 Database server        = DB2/LINUXX8664 10.1.0
 SQL authorization ID   = DB2INST1
 Local database alias   = DBNAME

db2 connect to dbName user db2inst1 using password

SQL1639N  The database server was unable to perform authentication because
security-related database manager files on the server do not have the required
operating system permissions.  SQLSTATE=08001

I have played with some authentication combinations for "AUTHENTICATION" and "TRUST_CLNTAUTH" without much luck.
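A sketch of the usual direction (untested here): AUTHENTICATION CLIENT is precisely the setting that delegates authentication to the client/OS, so switching it to SERVER makes the server demand a user ID and password on every connection. The SQL1639N is a separate symptom that often means the instance's security files have lost their setuid permissions, which db2iupdt repairs:

db2 update dbm cfg using AUTHENTICATION SERVER
db2stop
db2start
# if SQL1639N persists, as root (the path is an assumption for a V10.1 install):
/opt/ibm/db2/V10.1/instance/db2iupdt db2inst1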

Connecting to a SQL Server database from a Flash program

Posted: 11 Oct 2013 12:26 PM PDT

I currently have the ability to utilize Microsoft SQL Server 2012. I am developing a project with Adobe Flash Builder 4.7.

If I link my database with Adobe Flash Builder, are there any additional steps I must take to make the database live? Or, as long as my computer is running, will this database be accessible from any device that uses it?

In other words, is this a LAN-only system, or does it automatically make itself available to the programs I link to it?

Does the order of columns in a table's definition matter?

Posted: 11 Oct 2013 03:17 PM PDT

When defining a table, it's helpful to order the columns in logical groups and the groups themselves by purpose. The logical ordering of columns in a table conveys meaning to the developer and is an element of good style.

That is clear.

What is not clear, however, is whether the logical ordering of columns in a table has any impact on their physical ordering at the storage layer, or if it has any other impact that one might care about.

Apart from the impact on style, does column order ever matter?

RMAN & Archivelog Deletion Policy confusion

Posted: 11 Oct 2013 03:12 PM PDT

I've never been able to fully grasp the notion of DELETE INPUT / DELETE ALL INPUT when backing up archivelogs with RMAN.

Tied to the same issue is CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP x TIMES TO DISK.

Deleting input, or deleting all input, seems counterintuitive to me when the deletion policy is set to 7 times to match an overall backup retention of 7 days.

Is there anyone here who can explain, to someone who may be just a little dense, how this all plays together? And what should the archivelog deletion policy be if the goal is to have one week's worth of backups available at all times? All backups are backed up weekly to a tape drive. With seven days overall, and 7 times in the archivelog deletion policy, am I backing everything up 7 times, or 7 times 7 times?

Thank you for any help anyone can offer.
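For reference, a sketch of how the pieces interact (my reading of the behavior, not gospel): DELETE INPUT removes the just-backed-up logs from the destination RMAN read them from, DELETE ALL INPUT removes them from every archivelog destination, and the deletion policy gates both, since a log only qualifies for deletion once it satisfies the policy:

-- logs become deletable only after they appear in 7 backups on disk
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 7 TIMES TO DISK;

-- backs up archivelogs, then deletes only the input copies the policy allows
BACKUP ARCHIVELOG ALL DELETE INPUT;

So "7 times" does not multiply against the 7-day retention: each log simply stays on disk until seven separate backups contain it, after which DELETE INPUT may purge it.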

PostgreSQL: The database cluster initialization failed

Posted: 11 Oct 2013 01:47 PM PDT

Both C:\PostgreSQL and C:\PostgreSQL\data give the postgres user full access and admin rights.

I ran postgresql-9.1.2-1-windows.exe as the postgres user with admin rights, with target C:\PostgreSQL.

Every way I try, I get "The database cluster initialization failed."

Questions

  • Is it OK to run everything w/o being a Windows Service?
  • Is there a work-around to install as Windows Service?

I am trying to set up PostGIS to work with GeoDjango.

I was able to manually install PostGIS. I am new to PostgreSQL and am having a crisis of confidence over all of this, coming from MySQL to PostgreSQL for the first time.


Pertinent log output from C:\Users\Larry\AppData\Local\Temp\install-postgresql.log:

WScript.Network initialized...
Called IsVistaOrNewer()...
    'winmgmts' object initialized...
    Version:6.1
    MajorVersion:6
Ensuring we can read the path C: (using icacls) to Larry:
    Executing batch file 'radA3CF7.bat'...
    Output file does not exists...
Called IsVistaOrNewer()...
    'winmgmts' object initialized...
    Version:6.1
    MajorVersion:6
Ensuring we can read the path C:\PostgreSQL (using icacls) to Larry:
    Executing batch file 'radA3CF7.bat'...
    Output file does not exists...
Called IsVistaOrNewer()...
    'winmgmts' object initialized...
    Version:6.1
    MajorVersion:6
Ensuring we can read the path C:\PostgreSQL\data (using icacls) to Larry:
    Executing batch file 'radA3CF7.bat'...
    Output file does not exists...
Called IsVistaOrNewer()...
    'winmgmts' object initialized...
    Version:6.1
    MajorVersion:6
Ensuring we can write to the data directory (using icacls) to Larry:
    Executing batch file 'radA3CF7.bat'...
    Output file does not exists...
Failed to ensure the data directory is accessible (C:\PostgreSQL\data)
    Executing batch file 'radA3CF7.bat'...
    Output file does not exists...
Called Die(Failed to initialise the database cluster with initdb)...
Failed to initialise the database cluster with initdb

Suggestions?
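One workaround sometimes suggested when the installer's initdb step fails (a sketch; paths assume the C:\PostgreSQL target above): run initdb by hand as the postgres user, then register the Windows service. That also answers the two questions above: running without a service is fine for development, and pg_ctl register can add the service later.

C:\PostgreSQL\bin\initdb.exe -D C:\PostgreSQL\data -U postgres -E UTF8 -W
C:\PostgreSQL\bin\pg_ctl.exe register -N postgresql-9.1 -D C:\PostgreSQL\data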
