[how to] How to add Master and Slave PostgreSQL 9.1 Database in Rails app
- How to add Master and Slave PostgreSQL 9.1 Database in Rails app
- Help with a PIVOT
- How to show NULL results from query?
- How can I create an ad-hoc domain user on SQL Server
- COUNT WITH JOIN
- Getting MySQL User Variable in Sequence, While Using ORDER BY RAND()
- Trigger needs dbms_alert declared
- Best Practices for Testing Application Dashboard Data
- MySQL Query Optimization - Indexes & Joins 12 million+ records
- Are de-normalised tables preferred in any aspect of a project? [on hold]
- use function arguments or variable in postgres function
- Issue In INSERT INTO SELECT Statement [duplicate]
- Creating domain user login from another domain?
- db2 database alias not listed via list database directory but still unable to create that database
- Way to make differential backup from full different but from same database
- Is it safe to kill ANALYZE TABLE query on a MyISAM table?
- scramble the value of one column and unscramble it?
- Error: "Storage Engine for the Table Doesn't Support Nullable Columns" (SequelPro)
- optimizing MySQL for traffic analytics system
- Primary replica set server goes secondary after secondary fails
- MySQL backup InnoDB
- Strange characters in mysqlbinlog output
- 12c Grid Agent installation issues - Shell path is incorrect or not defined
- DB2 to require password each time
- Connecting to a SQL Server database from a Flash program
- Does the order of columns in a table's definition matter?
- RMAN & Archivelog Deletion Policy confusion
- PostgreSQL: The database cluster initialization failed
How to add Master and Slave PostgreSQL 9.1 Database in Rails app Posted: 11 Oct 2013 04:55 PM PDT I am working on a Rails app using PostgreSQL 9.1 as the database. In PostgreSQL I have configured streaming replication. Now, how do I add the master and slave PostgreSQL databases to the Rails application (in the database.yml file)? I have specified the database details as:

    development:
      adapter: postgresql
      encoding: unicode
      database: app_development
      pool: 5
      username: username
      password: password
      host: master ip
      port: port number

    development:
      adapter: postgresql
      encoding: unicode
      database: app_development
      pool: 5
      username: username
      password: password
      host: slave ip
      port: port number

    development:
      adapter: postgresql
      encoding: unicode
      database: app_development
      pool: 5
      username: username
      password: password
      host: slave ip
      port: port number

But while executing, this produces the error "ActiveRecord::StatementInvalid": "PG::ReadOnlySqlTransaction: ERROR: cannot execute INSERT in a read-only transaction". How do I specify the master and slave databases in the database.yml file? Regards, Balu
Help with a PIVOT Posted: 11 Oct 2013 03:45 PM PDT I have a view that gives data which looks like this:

    Rank | Score
    -----+------
       1 | 1
       2 | 2.5
       3 | 5
       1 | 7
       2 | 2.5
       3 | 8
       1 | 9
       2 | 2.5
       3 | 5

What I would like to do is apply a PIVOT so that it appears as a 3x3 matrix. The result is always guaranteed to have 9 rows, with Rank cycling 123..123..123.

      1 |   2 | 3
    ----+-----+---
      1 | 2.5 | 5
      7 | 2.5 | 8
      9 | 2.5 | 5

I've attempted to use a PIVOT, but have been unsuccessful (the part about the aggregate function always confuses me). How can I shape my data into the form that I am looking for (PIVOT or otherwise)? Sorry if the requirement is a bit odd; I'm working with a piece of reporting software that will do what I want, provided I can shape my data into that form.
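A minimal T-SQL sketch of one way to get that 3x3 shape, assuming the view is called RankScores (the view name and the use of ROW_NUMBER to line the groups up are assumptions):

    -- Number each occurrence of a Rank value so the three 1/2/3 groups line up,
    -- then pivot Rank into columns. A real ordering column in place of
    -- (SELECT NULL) would make the within-group ordering deterministic.
    SELECT [1], [2], [3]
    FROM (
        SELECT [Rank], [Score],
               ROW_NUMBER() OVER (PARTITION BY [Rank] ORDER BY (SELECT NULL)) AS grp
        FROM RankScores
    ) AS src
    PIVOT (MAX([Score]) FOR [Rank] IN ([1], [2], [3])) AS pvt
    ORDER BY grp;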
How to show NULL results from query? Posted: 11 Oct 2013 07:38 PM PDT I'm trying to create a procedure with filters for a search. If a parameter is NULL, the filter should fall back to the column's own value. Here is my problem: when both the parameter and the column value are NULL, I get no results, because NULL is never equal to NULL. I know that the predicate "COLUMN_NAME IS NULL" returns those rows even when the value is NULL, but I don't know how to combine it with the parameter logic. I tried to do this with CASE, but without success.
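A common pattern for optional filters is to short-circuit on the parameter rather than comparing the column to itself; a sketch assuming a T-SQL procedure with a parameter @Name and a column called name (both names are assumptions):

    -- When @Name is NULL the whole predicate is TRUE, so rows whose column is NULL
    -- come back as well; when @Name has a value, it behaves like a normal filter.
    SELECT *
    FROM dbo.MyTable
    WHERE (@Name IS NULL OR name = @Name);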
How can I create an ad-hoc domain user on SQL Server Posted: 11 Oct 2013 05:30 PM PDT Okay, so here's the problem. I have an application I need to install and run locally for testing and debugging. The application has been designed to run on a domain and as such expects to log into the database using domain credentials. I can't modify the section of code that handles the logins without a serious amount of work, which I don't have time for this week, nor do I have any ability to add the machine hosting the database to the domain (contractors' machines aren't allowed to be added to the domain under the I.T. policy). So I'm in a bit of a bind. I can't just insert into the relevant security views (sys.server_principals, sys.syslogins, etc.) because the system catalog can't be modified directly. Is there any way I can back-door my local SQL Server installation to accept these user credentials from the application without being attached to the domain? Of course, adding the machine to the domain and logging in as a domain user would resolve this issue in a couple of minutes, but political issues prevent that from happening.
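For context, this is the statement that would normally be used on a domain-joined host, plus the SQL-authentication fallback that does work locally; every name and password below is a placeholder:

    -- Only works when the SQL Server host can resolve the domain account:
    CREATE LOGIN [SOMEDOMAIN\appuser] FROM WINDOWS;

    -- Local fallback for testing: a SQL-authenticated login plus a matching user.
    -- Requires the instance to allow SQL Server (mixed mode) authentication.
    CREATE LOGIN app_test_login WITH PASSWORD = 'UseAStrongPasswordHere1!';
    CREATE USER app_test_user FOR LOGIN app_test_login;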
COUNT WITH JOIN Posted: 11 Oct 2013 02:56 PM PDT I have the following tables: Is it possible to write a query that gives me, as a result, one row for each post_id with the total comment count for that post? EXAMPLE: And so on... Thanks.
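A sketch of the usual shape of such a query; the table definitions did not survive in the digest, so the names posts and comments (joined on post_id) are assumptions:

    -- LEFT JOIN keeps posts with zero comments; COUNT(c.post_id) ignores the NULLs
    -- produced for those posts, so they show a count of 0.
    SELECT p.post_id,
           COUNT(c.post_id) AS comment_count
    FROM posts AS p
    LEFT JOIN comments AS c
           ON c.post_id = p.post_id
    GROUP BY p.post_id;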
Getting MySQL User Variable in Sequence, While Using ORDER BY RAND() Posted: 11 Oct 2013 05:32 PM PDT I am running a query that assigns a user-variable counter to each row. I want a result like this: ID, COUNTER. When I run this query, I get everything in sequence, but I want the row order to be randomized, so I add ORDER BY RAND(); then the counter no longer comes out in sequence.
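One way to keep the counter sequential while the rows themselves are shuffled is to randomize in a derived table first and apply the user variable in the outer query; a sketch with an assumed table name:

    -- The inner query fixes the random order; the outer query then numbers the rows
    -- in the order they come back, so COUNTER stays 1, 2, 3, ...
    SELECT shuffled.id,
           (@counter := @counter + 1) AS counter
    FROM (SELECT id FROM my_table ORDER BY RAND()) AS shuffled
    CROSS JOIN (SELECT @counter := 0) AS init;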
Trigger needs dbms_alert declared Posted: 11 Oct 2013 12:21 PM PDT A trigger has given me an error saying it needs DBMS_ALERT declared; this happened while altering schemas/triggers/etc. from an import of a client database. I ran a check and, from the results and some googling, I found that I need to grant explicit execute privileges on DBMS_ALERT to the public user, which I did. However, this has not worked.
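A hedged sketch of the grants usually involved here: definer's-rights PL/SQL (including triggers) cannot see privileges that arrive through a role, so the owning schema typically needs a direct grant (the schema name below is a placeholder):

    -- Run as SYS or another DBA:
    GRANT EXECUTE ON SYS.DBMS_ALERT TO trigger_owner;

    -- Broader alternative, if granting to every user is acceptable:
    GRANT EXECUTE ON SYS.DBMS_ALERT TO PUBLIC;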
Best Practices for Testing Application Dashboard Data Posted: 11 Oct 2013 03:16 PM PDT We have a dashboard that contains metrics based on a 6 month reporting period. Users log in and see their metrics based on the current 6-month reporting period. Every 6 months, we load new metrics in our DEV/QA environment and configure them for the upcoming reporting period so we can test. But, our application and ETL use GETDATE(), so for the ETL to calculate properly, we have been setting the server's DATE in the future so everything will calculate and display as if we were in the future.
We have unit tests for the application, so that side is covered. We are more worried that the data from the ETL will not be correct on Day 1 of the new period. It is very important that the metrics display properly and the data is correct on Day 1.
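One pattern teams use to avoid moving the server clock is to route date lookups through a small wrapper that can be offset per environment; a hypothetical T-SQL sketch (the table, function, and column names are assumptions, and the application and ETL would need to call the wrapper instead of GETDATE()):

    -- Holds a per-environment offset: 0 in production, positive in DEV/QA
    -- when testing a future reporting period.
    CREATE TABLE dbo.AppClock (DateOffsetDays INT NOT NULL);
    INSERT INTO dbo.AppClock (DateOffsetDays) VALUES (0);
    GO

    CREATE FUNCTION dbo.GetAppDate()
    RETURNS DATETIME
    AS
    BEGIN
        DECLARE @offset INT;
        SELECT @offset = DateOffsetDays FROM dbo.AppClock;
        RETURN DATEADD(DAY, @offset, GETDATE());
    END;
    GO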
MySQL Query Optimization - Indexes & Joins 12 million+ records Posted: 11 Oct 2013 12:54 PM PDT I'm running the following query: Right now it takes at least 15 minutes to execute. Both track and track_title have about 12 million records. Album and artist have at least a million each. Can this be sped up with better indexes? I've been playing with combinations of indexes to speed up the queries, but I'm still having trouble getting it right. Table structure:
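Since the query and table structure did not survive in the digest, only a generic sketch is possible: the usual first step is to make sure every join column (and the most selective WHERE column) is covered by an index, then re-check the plan with EXPLAIN. All index and column names below are assumptions:

    -- Hypothetical foreign-key indexes for the joins described in the question.
    ALTER TABLE track       ADD INDEX idx_track_album   (album_id);
    ALTER TABLE track_title ADD INDEX idx_title_track   (track_id);
    ALTER TABLE album       ADD INDEX idx_album_artist  (artist_id);
    -- Then run EXPLAIN on the slow query to confirm the optimizer actually uses them.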
Are de-normalised tables preferred in any aspect of a project? [on hold] Posted: 11 Oct 2013 10:15 AM PDT While going through an interview, my interviewer asked me a question about denormalization of tables and its usage in applications. Guessing, I answered: yes, it might be. With a denormalized table you have all the columns in a single table, so you do not require any kind of joins; your biggest plus point will be performance, provided proper lookup conditions are given. And you can use a denormalized table for reporting. Is what I answered correct?
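To illustrate the reporting point in that answer, a sketch of a denormalized reporting table that trades redundancy (and the work of keeping copies in sync) for join-free reads; all names are illustrative only:

    -- Denormalized reporting table: customer and product names are duplicated
    -- from their parent tables so report queries are single-table scans.
    CREATE TABLE sales_report (
        order_id      INT            NOT NULL,
        order_date    DATE           NOT NULL,
        customer_name VARCHAR(100)   NOT NULL,  -- copied from customers
        product_name  VARCHAR(100)   NOT NULL,  -- copied from products
        quantity      INT            NOT NULL,
        total_amount  DECIMAL(10,2)  NOT NULL
    );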
use function arguments or variable in postgres function Posted: 11 Oct 2013 07:37 PM PDT I want to set the default value for a column from inside a function. When I run it, it seems that I cannot use a function argument (or a variable) in the statement.
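A plain ALTER TABLE inside a function cannot substitute a function argument into the DEFAULT clause directly; a common workaround is dynamic SQL. A sketch assuming PL/pgSQL and placeholder table, column, and argument names:

    CREATE OR REPLACE FUNCTION set_column_default(p_default integer)
    RETURNS void
    LANGUAGE plpgsql AS
    $$
    BEGIN
        -- format() quotes the value safely (%L) and EXECUTE runs the built statement,
        -- so the argument can be used where a literal is required.
        EXECUTE format(
            'ALTER TABLE my_table ALTER COLUMN my_column SET DEFAULT %L',
            p_default
        );
    END;
    $$;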
Issue In INSERT INTO SELECT Statement [duplicate] Posted: 11 Oct 2013 12:46 PM PDT This question already has an answer here: Can anyone please advise how to rectify the code below:

    INSERT INTO table1 VALUES (10, SELECT col1 FROM table2, null, null, 'variable');

Here table1 has only 4 columns (in the actual table there are 100 columns, and I don't want to mention each column and then enter a value). Is it possible to mention a SELECT statement inside VALUES during table insertion? This question was not answered earlier because here I want to insert data into table1 by fetching a few columns' data from table2, while the rest of the data needs some mathematical calculation (you can assume the rest of the data are variables). If this has already been answered, please provide me a link.
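The usual fix is to move everything into the SELECT list instead of mixing VALUES with a bare subquery; a sketch using the names from the question (whether extra filtering on table2 is needed is left open):

    -- Constants and NULLs simply ride along in the SELECT list;
    -- one row is inserted into table1 for every row returned from table2.
    INSERT INTO table1
    SELECT 10, col1, NULL, NULL, 'variable'
    FROM table2;

If only a single row is wanted, wrapping the SELECT in parentheses as a scalar subquery inside VALUES can also work, provided it returns exactly one value.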
Creating domain user login from another domain? Posted: 11 Oct 2013 10:00 AM PDT I have two domains named "Domain1" and "Domain2". I have SQL Server 2005 in Domain2, and users in Domain1. How can I create a login in SQL Server 2005 (in Domain2) for a user that is present in Domain1?
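Assuming a trust relationship exists between Domain2 and Domain1 (without it the server cannot resolve the account), the login is created with the usual FROM WINDOWS syntax; the user and group names below are placeholders:

    -- Individual user from the other domain:
    CREATE LOGIN [Domain1\SomeUser] FROM WINDOWS;

    -- Or grant a whole group from Domain1 in one go:
    CREATE LOGIN [Domain1\SomeGroup] FROM WINDOWS;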
db2 database alias not listed via list database directory but still unable to create that database Posted: 11 Oct 2013 04:13 PM PDT I am trying to create a database by logging on, as the instance owner, to the Linux server on which the DB2 server is installed. I also checked whether any instances other than mine exist, but there aren't any. While the alias testdb2 is not listed by LIST DATABASE DIRECTORY, I am still unable to create that database. If this is true, how do I get the equivalent of ...?
Way to make differential backup from full different but from same database Posted: 11 Oct 2013 05:21 PM PDT I have the following situation: our SQL Server is running in a cloud service. The database to be backed up is 60 GB. We need a local copy of the database for some tests, and it needs to be up to date, but 60 GB is impractical to download daily. We thought to take a full backup, download it once, and take differential backups on the following days. Once a week, we can stop database operations to take a last differential backup against the previous week's full backup, then take a new full backup and start database operations again. At that moment the local DB, when restored with the last diff backup (made while operations were stopped), has the same data as the last week's full backup. The idea behind this is: is it possible to use a diff backup from the latest full backup and restore it onto our local database? If it is not possible, does anyone have an idea of what we can do to get all the data, up to date, from our cloud database?
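For reference, a minimal T-SQL sketch of the weekly-full / daily-differential cycle described above; the database name, file names, and paths are placeholders, and note that a differential can only be restored on top of the specific full backup it was taken against:

    -- Weekly full backup (taken in the cloud, downloaded once):
    BACKUP DATABASE MyDb TO DISK = N'D:\backup\mydb_full.bak' WITH INIT;

    -- Daily differential, always based on that most recent full backup:
    BACKUP DATABASE MyDb TO DISK = N'D:\backup\mydb_diff.bak' WITH DIFFERENTIAL, INIT;

    -- Local restore: the matching full first, left non-recovered, then the newest differential.
    RESTORE DATABASE MyDb FROM DISK = N'D:\backup\mydb_full.bak' WITH NORECOVERY, REPLACE;
    RESTORE DATABASE MyDb FROM DISK = N'D:\backup\mydb_diff.bak' WITH RECOVERY;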
Is it safe to kill ANALYZE TABLE query on a MyISAM table? Posted: 11 Oct 2013 09:26 AM PDT I have a huge MyISAM table that is blocking my application. I frequently find an ANALYZE TABLE query running against it for a long time, so I'm thinking of killing that query. Is it safe to do so? Thanks.
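For reference, a sketch of how the statement would be terminated once its connection id is known; the id below is a placeholder taken from the SHOW PROCESSLIST output:

    -- Find the connection running the long ANALYZE TABLE:
    SHOW FULL PROCESSLIST;

    -- Terminate just that statement and keep the connection open...
    KILL QUERY 12345;

    -- ...or drop the whole connection:
    KILL CONNECTION 12345;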
scramble the value of one column and unscramble it? Posted: 11 Oct 2013 10:26 AM PDT I want to change all the values in one column, as it is a production database and I don't want to give the data to testers without obfuscation. Example: I have searched for scrambling and other approaches, but my issue is that I want to be able to change the obfuscated data back to its original values when needed. I need a generic script so I can apply it to an arbitrary table if required.
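One reversible approach is to park the original values in a mapping table the testers cannot read, overwrite the column, and join the map back later; a MySQL-flavoured sketch with placeholder table and column names:

    -- 1. Preserve the originals, keyed by primary key.
    CREATE TABLE customer_name_map AS
    SELECT customer_id, customer_name AS original_value
    FROM customer;

    -- 2. Obfuscate the live column.
    UPDATE customer
    SET customer_name = CONCAT('USER_', customer_id);

    -- 3. Unscramble on demand by joining the map back.
    UPDATE customer AS c
    JOIN customer_name_map AS m ON m.customer_id = c.customer_id
    SET c.customer_name = m.original_value;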
Error: "Storage Engine for the Table Doesn't Support Nullable Columns" (SequelPro) Posted: 11 Oct 2013 02:26 PM PDT I'm trying to load a very normal .csv file (that was created from Excel 2011 for Mac) into SequelPro (using MySQL) with my Mac -- and I've recently started getting this error consistently. Can anybody let me know what it is and how to fix it? This is stopping me before I'm able to import the table. Thanks for the help! |
optimizing MySQL for traffic analytics system Posted: 11 Oct 2013 08:26 PM PDT Background: I've developed a URL shortener system like Bitly with the same features, so the system also tracks clicker info and presents it as analytics graphs to the person who shortened the link. Currently I'm using MySQL and have a table to store click info with this schema, and for now just the ... The system should present click analytics for the time period the user wants, for example the past hour, the past 24 hours, the past month, and so on. For example, to generate graphs for the past month, I do the following queries. Issues:
Questions:
1- How can I improve and optimize the structure so that the analytics for high-traffic links are shown in less than 1 second (like Bitly and similar web apps) and with less CPU and RAM usage? Should I make an index on the fields?
2- Is MySQL suitable for this application? Assume that, at maximum, my application should handle 100 million links and 10 billion clicks on them in total. Should I consider switching to a NoSQL solution, for example?
3- If MySQL is OK, are my database design and table structure proper and well designed for my application's needs, or do you have better recommendations and suggestions?
UPDATE: I made an index on the column ... I think making an index on these columns would not help to solve my problem; my idea is about one of these:
1- If using MySQL, maybe generating statistics for high-traffic links with background processing is better than calculating them live at the user's request.
2- Using some caching solution like memcached to help MySQL with high-traffic links.
3- Using a NoSQL database such as MongoDB and solutions like Map-Reduce, which I am only poorly familiar with and have never used.
What do you think?
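A sketch of the two MySQL-side ideas mentioned above, a composite index matching the per-link, per-period lookups and a background-maintained summary table; all table and column names are assumptions, since the real schema was not included:

    -- Composite index so per-link time-range scans don't read the whole table:
    ALTER TABLE clicks ADD INDEX idx_link_time (link_id, clicked_at);

    -- Pre-aggregated daily stats the dashboard can read in a handful of rows:
    CREATE TABLE click_stats_daily (
        link_id   INT UNSIGNED NOT NULL,
        stat_date DATE         NOT NULL,
        clicks    INT UNSIGNED NOT NULL DEFAULT 0,
        PRIMARY KEY (link_id, stat_date)
    );

    -- Background job run periodically to refresh today's bucket:
    INSERT INTO click_stats_daily (link_id, stat_date, clicks)
    SELECT link_id, DATE(clicked_at), COUNT(*)
    FROM clicks
    WHERE clicked_at >= CURRENT_DATE
    GROUP BY link_id, DATE(clicked_at)
    ON DUPLICATE KEY UPDATE clicks = VALUES(clicks);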
Primary replica set server goes secondary after secondary fails Posted: 11 Oct 2013 05:26 PM PDT I have a two-server replica set in which, after the secondary fails, the primary goes into secondary mode while the secondary is in STARTUP2 (recovering). The problem with this is that I can't use the collection stored in that replica set freely; I'm getting errors when trying to use the collection: Sometimes, if I restart the mongod instances, the server rs2-1 is the primary for a while, but after some time (while the secondary is recovering) I see this in the logs of rs2-1 (the primary): Is there an easy way to make the primary stay primary after the secondary fails? Am I doing something wrong? Thanks in advance!
MySQL backup InnoDB Posted: 11 Oct 2013 01:26 PM PDT I have a VoIP server running 24x7. At the low-peak hour at least 150+ users are connected. My server has MySQL running with the InnoDB engine on the Windows 2008 platform. I would like to take at least two full database backups without shutting down my service. As per Peter Zaitsev, the founder of Percona, mysqldump --single-transaction is not always good; read here if you are interested. As I'm not a DBA, I'd like to know which would be the best solution for taking a database backup in my scenario? Thanks,
Strange characters in mysqlbinlog output Posted: 11 Oct 2013 11:26 AM PDT Has anyone experienced this? Data replicates fine, but when it is output by mysqlbinlog there are hidden characters that break the input?
Thanks! Julie
12c Grid Agent installation issues - Shell path is incorrect or not defined Posted: 11 Oct 2013 08:26 AM PDT I am installing the 12c Grid Control agent on my server via Enterprise Manager. The target server is a UNIX server with the AIX OS on it. I add the target host manually, enter authentication information, host name, installation directory, etc., and all goes well until I click the "Deploy Agent" button; then I get the following error: "Shell path is incorrect or not defined.:/bin/bash(SH_PATH),-c(SH_ARGS) on host MyHostName" and the details are: "Check the property values in the following files in this order, ssPaths_.properties or sPaths.properties or Paths.properties, in "/u02/app/oracle/middleware/oms/oui/prov/resources"". There isn't a /u02/app directory on my target server, only /u01/app/oracle, so I assume the above directory is on the same server as Enterprise Manager. I did, however, find a directory containing files of the format ssPaths_<platform>.properties: /u01/app/oracle/product/11.2.0.3/oui/prov/resources. Since my platform is AIX, I used the file ssPaths_aix.properties. On my target server, which bash returns /usr/bin/bash, which ssh returns /usr/bin/ssh, and which sh returns /usr/bin/sh. So I edited the ssPaths_aix.properties file so that SH_PATH=/usr/bin/sh, but I am still getting the same error. I have also tried setting SH_PATH=/usr/bin/bash, but nothing seems to make any difference. I am thinking one possibility is that I am editing the wrong file. Can anyone help?
DB2 to require password each time Posted: 11 Oct 2013 04:07 PM PDT I am using db2inst1 to connect to a database in DB2 which I have installed on my machine. Therefore, the db2inst1 user does not require username/password authentication (it borrows them from the OS). I would like to change that and force a username/password to be requested every time a connection is initiated. More specifically, this is what the authentication configuration looks like:
I have played with some authentication combinations for "AUTHENTICATION" and "TRUST_CLNTAUTH" without much luck.
Connecting to a SQL Server database from a Flash program Posted: 11 Oct 2013 12:26 PM PDT I currently have the ability to use Microsoft SQL Server 2012. I am developing a project with Adobe Flash Builder 4.7. If I link my database with Adobe Flash Builder, are there any additional steps I must take in order to make the database live, or, as long as my computer is running, will this database be accessible from any device that uses it? In other words, is this a LAN-only system, or does it automatically make itself available to the programs I link to it?
Does the order of columns in a table's definition matter? Posted: 11 Oct 2013 03:17 PM PDT When defining a table, it's helpful to order the columns in logical groups and the groups themselves by purpose. The logical ordering of columns in a table conveys meaning to the developer and is an element of good style. That is clear. What is not clear, however, is whether the logical ordering of columns in a table has any impact on their physical ordering at the storage layer, or if it has any other impact that one might care about. Apart from the impact on style, does column order ever matter?
RMAN & Archivelog Deletion Policy confusion Posted: 11 Oct 2013 03:12 PM PDT Never been able to fully grasp the notion of delete input/delete input all when backup up Archivelog with rman. Tied to the same issue is the configure Archivelog deletion policy to backed up X times to disk. Deleting input, or deleting input all, seems counterintuitive to me when the deletion policy is set to 7 times to match an overall backup policy of 7 days. Is there anyone here who can explain to someone who may be just a little dense how this all plays together? And what should the Archivelog deletion policy be if the goal is to have one week's worth of backups available at all times? All backups are weekly backed up to a tape drive. With seven days as overall, and 7 times on Archivelog deletions policy, am I backing it up 7 times or 7 times 7 times? Thank you for any help anyone can offer…… |
PostgreSQL: The database cluster initialization failed Posted: 11 Oct 2013 01:47 PM PDT Both ... I ran postgresql-9.1.2-1-windows.exe as the postgres user, running as admin. Target ... Every way I try, I get "The database cluster initialization failed." Questions ...
I am trying to set up PostGIS to work with GeoDjango. I was able to manually install PostGIS. I'm new to PostgreSQL and am having a crisis of confidence over all of this, coming from MySQL to PostgreSQL for the first time. Pertinent log output from C:\Users\Larry\AppData\Local\Temp\install-postgresql.log: Suggestions?