Sunday, March 31, 2013

[how to] How do you find the right Path? [closed]


How do you find the right Path? [closed]

Posted: 31 Mar 2013 03:34 PM PDT

Now, I understand that this might be the wrong place to be posting, and I apologize if it is, but I am out of ideas and out of my depth in trying to understand the path to becoming a DBA.

I have just started to teach myself SQL and I really enjoy it, so naturally the first question that comes to mind is, "How do I turn this into my career?"

I've been searching all the forums I can find to try to understand what I need to do to get on the right path, with no luck. I have found a great deal of useful websites, but nothing that can help me right now.

So now I turn to you, ladies and gents. I am aware that it won't happen overnight, and I'm not worried about starting at the bottom and working my way up, which brings me to my question(s)...

Where do I start?

How do I start?

What position am I supposed to be applying for?

Thanks in Advance,

Fabian.

Why is there a hash match operator in this very basic query?

Posted: 31 Mar 2013 04:31 PM PDT

I'm beginning to learn about reading execution plans and making queries more efficient.

Consider these two basic queries

select distinct pat_id, drug_class, drug_name from rx

select pat_id, drug_class, drug_name from rx

and their execution plans

(screenshot of the two execution plans omitted)

index being used:

CREATE CLUSTERED INDEX [ix_overlap] ON [dbo].[rx]
(
    [pat_id] ASC,
    [fill_date] ASC,
    [script_end_date] ASC,
    [drug_name] ASC
)

Even though the first query supposedly has the higher cost by a 4:1 margin, it runs faster than the second one. Why does simply adding DISTINCT to the query introduce the hash match operator (which I assume to always be bad; corrections are welcome)? And why does it have the higher query cost relative to the second query if it runs faster?
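
For readers following along, a minimal sketch of one common remedy, assuming the goal is to de-duplicate over exactly these three columns (the index below is hypothetical, not from the original post): an index whose key order matches the DISTINCT columns delivers the rows pre-sorted, so the optimizer can remove duplicates with a Stream Aggregate instead of building a hash table.

-- Hypothetical covering index: sorted on the DISTINCT columns, so duplicate
-- removal can become a Stream Aggregate rather than a Hash Match (Aggregate).
CREATE NONCLUSTERED INDEX ix_rx_distinct
ON dbo.rx (pat_id, drug_class, drug_name);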

Restore standby database with undo rollback file

Posted: 31 Mar 2013 05:57 PM PDT

I made a full backup of a specific database and restored it with standby recovery, which leaves the database read-only. I tried to insert or edit values in a table in this database, but nothing was inserted.

Those values are stored in the undo (rollback) file. After I restore the database to read-write, how can I recover the inserted values into the table and check that they exist there?
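
For context, a minimal sketch of the standby workflow (database and file names are placeholders, not from the original post). Note that WITH STANDBY keeps the database read-only precisely so tentative recovery work can be undone; bringing it online with WITH RECOVERY rolls back whatever is in the undo file rather than applying it.

-- Sketch only; paths are placeholders.
RESTORE DATABASE MyDb
    FROM DISK = N'C:\Backups\MyDb_full.bak'
    WITH STANDBY = N'C:\Backups\MyDb_undo.dat';

-- Bring the database online read-write; the undo file's contents are
-- rolled back at this point, not re-applied.
RESTORE DATABASE MyDb WITH RECOVERY;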

Limiting number of results in a Partition using OVER(PARTITION BY)

Posted: 31 Mar 2013 05:56 PM PDT

In the following query, why do we have to limit the results returned from each partition using the clause WHERE foo.row_num < 3 outside of the subquery foo, rather than with WHERE row_num < 3 inside the subquery?

Query

SELECT pid, land_type, row_num, road_name, round(CAST(dist_km AS numeric), 2) AS dist_km
FROM (
    SELECT ROW_NUMBER()
    OVER (
        PARTITION BY loc.pid
        ORDER BY ST_Distance(r.the_geom, loc.the_geom)
    ) as row_num,
    loc.pid, loc.land_type, r.road_name,
    ST_Distance(r.the_geom, loc.the_geom)/1000 as dist_km
    FROM ch05.land AS loc
    LEFT JOIN ch05.road AS r
    ON ST_DWithin(r.the_geom, loc.the_geom, 1000)
    WHERE loc.land_type = 'police station'
) AS foo
WHERE foo.row_num < 3
ORDER BY pid, row_num;

Query that does not work

SELECT pid, land_type, row_num, road_name, round(CAST(dist_km AS numeric), 2) AS dist_km
FROM (
    SELECT ROW_NUMBER()
    OVER (
        PARTITION BY loc.pid
        ORDER BY ST_Distance(r.the_geom, loc.the_geom)
    ) as row_num,
    loc.pid, loc.land_type, r.road_name,
    ST_Distance(r.the_geom, loc.the_geom)/1000 as dist_km
    FROM ch05.land AS loc
    LEFT JOIN ch05.road AS r
    ON ST_DWithin(r.the_geom, loc.the_geom, 1000)
    WHERE loc.land_type = 'police station'
    AND row_num < 3
) AS foo
ORDER BY pid, row_num;

Error: ERROR: column "row_num" does not exist
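
The short reason: in SQL's logical order of evaluation, window functions are computed after the WHERE clause, so an alias like row_num does not exist yet at the level where the inner WHERE runs; it only becomes filterable one query level up. The same working query, sketched as a CTE (equivalent to the subquery form above):

WITH ranked AS (
    SELECT ROW_NUMBER() OVER (
               PARTITION BY loc.pid
               ORDER BY ST_Distance(r.the_geom, loc.the_geom)
           ) AS row_num,
           loc.pid, loc.land_type, r.road_name,
           ST_Distance(r.the_geom, loc.the_geom)/1000 AS dist_km
    FROM ch05.land AS loc
    LEFT JOIN ch05.road AS r
      ON ST_DWithin(r.the_geom, loc.the_geom, 1000)
    WHERE loc.land_type = 'police station'
)
SELECT pid, land_type, row_num, road_name,
       round(CAST(dist_km AS numeric), 2) AS dist_km
FROM ranked
WHERE row_num < 3   -- the alias is visible here, one level above the window function
ORDER BY pid, row_num;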

Very fast replication

Posted: 31 Mar 2013 05:05 PM PDT

I used this guide to make replication between my two db servers : http://wiki.postgresql.org/wiki/Streaming_Replication

But when I update a row on the master server, it is available on the slave instantly. It should be asynchronous, shouldn't it? What am I missing?
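
A hedged note: asynchronous only means the master does not wait for the slave to acknowledge, not that changes arrive slowly; under light load, replay lag is typically milliseconds, so "instant" is the expected behavior. One way to inspect the lag on the master (column names as in PostgreSQL 9.1/9.2):

SELECT client_addr, state, sent_location, replay_location
FROM pg_stat_replication;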

Thank You

mysql optimize table crash

Posted: 31 Mar 2013 11:27 AM PDT

When I try OPTIMIZE TABLE `table` on a MyISAM table of about 300 MB, the table crashes and must be repaired. What could cause this problem? The same problem occurs on other tables over 300 MB.
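
For reference, a hedged recovery sketch using standard MyISAM maintenance statements (the table name is the placeholder from the question); this restores the table but does not address the root cause, which could be anything from tmpdir running out of space to hardware faults:

CHECK TABLE `table`;    -- reports the corruption
REPAIR TABLE `table`;   -- rebuilds the MyISAM data and index files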

Which postgres do I have installed? What directory needs to be in my path to run initdb?

Posted: 31 Mar 2013 10:41 AM PDT

I need to use postgresql with ruby on rails.

I have these directories

/Library/PostgreSQL  

and

/usr/local/Cellar/postgresql/usr:local:Cellar:postgresql:9.2.2  

This 2nd directory name is confusing. In Finder, when I'm in the directory /usr/local/Cellar/postgresql, the entry usr:local:Cellar:postgresql:9.2.2 is actually one directory with forward slashes in its name (usr/local/Cellar/postgresql). It's in Terminal where I see the colons.

"brew info postgres" returns the following:

postgresql: stable 9.2.2
http://www.postgresql.org/
Depends on: readline, ossp-uuid
/usr/local/Cellar/postgresql/usr:local:Cellar:postgresql:9.2.2 (2819 files, 39M)
https://github.com/mxcl/homebrew/commits/master/Library/Formula/postgresql.rb

So do I have postgres installed properly? Do I have 2 versions, one in /Library/PostgreSQL and one in /usr/local/Cellar (via Homebrew)?

Regarding trying to use initdb to create my first postgres database:

I have oh-my-zsh installed and I am trying to run the initdb command, but I get "command not found". Which path should I put in my PATH statement so that the command can be found?

/usr/local/var/postgres/server.log did not exist, so I created it.

pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start says the server is starting, and when I do ps auxwww | grep postgres

I get:

postgres   260   0.0  0.1  2522180   8084   ??  SNs  Tue07PM   0:58.79  /System/Library/Frameworks/CoreServices.framework/Frameworks/Metadata.framework/Versions/A/Support/mdworker MDSImporterWorker com.apple.Spotlight.ImporterWorker.504  

so I think that means it is running, right?

Well, I found the initdb script file in both the Library and brew locations. I navigated to the brew directory containing initdb and tried to run the initdb command, and got "zsh: command not found".

I then navigated to the Library containing initdb, tried to run it, and got the same results.

Isn't the current directory part of the path, since I'm IN IT?
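
A hedged aside on that last point: on most Unix shells the current directory is deliberately not on PATH, which is why a command fails by bare name even when you are standing right next to it. A sketch, assuming the Homebrew install (paths are common Homebrew defaults, so verify them locally):

./initdb --version                    # explicit ./ prefix works without any PATH change
export PATH="/usr/local/bin:$PATH"    # Homebrew-linked binaries; add this to ~/.zshrc
initdb /usr/local/var/postgres        # now resolvable via PATH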

Summary of my questions:

  • Which postgres do I have installed? Is it installed twice, and should I uninstall a version? If so, how?
  • What directory needs to be in my path to run initdb and do I really need to run it?
  • Should the postgres server start when I start my computer? I heard it should. How do I set that up?

MySQL master-master replication issue

Posted: 31 Mar 2013 05:18 PM PDT

I have set up the 2 database servers with master-master replication. On both servers, the following parameters show properly when I check the slave status:

Slave_IO_Running: Yes

Slave_SQL_Running: Yes

But there is a database size difference between these 2 servers.
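
To quantify the difference, a hedged sketch to run on both servers (information_schema figures are approximate for InnoDB, so modest discrepancies between replicas are normal):

SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
GROUP BY table_schema;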

Could someone give me a solution for this?

Copying my Oracle 10g Express database to another PC

Posted: 31 Mar 2013 07:36 AM PDT

I have Oracle 10g Express. How can I make a copy of my database and application?

I don't want to make a backup; I want to move my DB and application to another PC.
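
One hedged approach, using the classic export/import utilities that ship with 10g (credentials and file names below are placeholders, and Oracle's documented XE backup/restore scripts are an alternative route):

# On the source PC:
exp system/password FULL=y FILE=xe_full.dmp
# Copy xe_full.dmp to the new PC, install Oracle 10g Express there, then:
imp system/password FULL=y FILE=xe_full.dmp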

MySql Cluster for educational use

Posted: 31 Mar 2013 09:50 AM PDT

I am a student and I am writing a Master of Science thesis about in-memory databases. I want to use MySQL Cluster for benchmarks in my work.
Can I use MySQL Cluster free of charge for educational purposes, or do I need to pay for it?

Oracle 11g R2 on CentOS 6.3, and Net Configuration Assistant

Posted: 31 Mar 2013 10:41 AM PDT

This is my first time installing Oracle 11g R2 on Linux.

I followed all the instructions in the Oracle documentation.

But during the installation process, the Net Configuration Assistant failed.

I continued the process, and after it finished, I couldn't connect via SQL*Plus.

This is my tnsnames.ora

# tnsnames.ora Network Configuration File: /Oracle/u01/app/oracle/product/11.2.0/db_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl.localhost)
    )
  )

and listener.ora

# listener.ora Network Configuration File: /Oracle/u01/app/oracle/product/11.2.0/db_1/network/admin/listener.ora
# Generated by Oracle configuration tools.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost.localdomain)(PORT = 1521))
    )
  )

ADR_BASE_LISTENER = /Oracle/u01/app/oracle

Output of lsnrctl status

LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 29-MAR-2013 02:35:48

Copyright (c) 1991, 2009, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost.localdomain)(PORT=1521)))
TNS-12541: TNS:no listener
 TNS-12560: TNS:protocol adapter error
  TNS-00511: No listener
   Linux Error: 111: Connection refused

I opened ./netca to reconfigure the listener.

But the program told me that the port is in use. I deleted the listener and added a new one, but got the same warning that the port is in use.

Solution:

Finally I found the solution: I hadn't changed ownership of the directory containing the Oracle software to the oracle user and oinstall group. I think there are some scripts or programs related to the listener that the oracle user could not run.
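
A sketch of that fix (the path comes from the tnsnames.ora header above; adjust for your layout, and run as root):

chown -R oracle:oinstall /Oracle/u01/app/oracle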

MySQL second slave not syncing while first slave works fine

Posted: 31 Mar 2013 10:14 AM PDT

I have a master (m) - slave (s1) setup using MySQL 5.1.45.

When I try to add a second slave (s2) the slave lags behind and never catches up on the sync.

Even after syncing s2 with the whole system offline (Seconds_Behind_Master = 0), after a few hours s2 falls out of sync again.

The strange thing is that s1 is always in sync.

Any ideas?

SHOW SLAVE STATUS \G  (on slave2)
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: xxx.xxx.xxx.xxx
                  Master_User: xxxx_xxxx5
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.013165
          Read_Master_Log_Pos: 208002803
               Relay_Log_File: xxxxxxxxxx-relay-bin.000100
                Relay_Log_Pos: 1052731555
        Relay_Master_Log_File: mysql-bin.013124
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: xxxxxxxxx
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 1052731410
              Relay_Log_Space: 44233859505
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 69594
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:

iperf results between servers:

M -> s2
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec    502 MBytes    420 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.05 GBytes    902 Mbits/sec

M -> s1
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec    637 MBytes    534 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec    925 MBytes    775 Mbits/sec

vmstat for s2

vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0    268 126568 199100 22692944    0    0   100   836    8   81  1  0 96  3

vmstat 2 10
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0    268 1150144 197128 21670808    0    0   100   835    9   81  1  0 96  3  0
 0  0    268 1144464 197160 21674940    0    0   644  3096 1328 1602  0  0 97  2  0
 0  2    268 1140680 197176 21679624    0    0   846  5362 1002 1567  0  0 98  2  0
 0  1    268 1135332 197192 21685040    0    0   960  3348  850 1193  0  0 98  1  0
 0  0    268 1130776 197204 21688752    0    0   576  2894  978 1232  0  0 98  2  0
 0  0    268 1127060 197264 21693556    0    0   586  5202 1075 1505  0  0 97  3  0
 0  0    268 1122184 197272 21698412    0    0   896  1160  614  727  0  0 98  1  0
 0  0    268 1118532 197300 21702780    0    0   586  5070 1279 1708  0  0 93  6  0
 0  0    268 1114000 197324 21705820    0    0   402  1522  947  942  0  0 95  4  0
 0  0    268 1109708 197336 21710188    0    0   704  9150 1224 2109  0  0 97  2  0

top output on s2

top - 14:44:25 up 16:36,  1 user,  load average: 1.62, 1.47, 1.42
Tasks: 140 total,   1 running, 139 sleeping,   0 stopped,   0 zombie
Cpu0  :  2.9%us,  1.1%sy,  0.0%ni, 73.8%id, 21.8%wa,  0.0%hi,  0.4%si,  0.0%st
Cpu1  :  0.8%us,  0.3%sy,  0.0%ni, 95.5%id,  3.3%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  0.6%us,  0.3%sy,  0.0%ni, 97.7%id,  1.4%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.5%us,  0.2%sy,  0.0%ni, 98.9%id,  0.4%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  0.0%us,  0.0%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  0.0%us,  0.0%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  0.0%us,  0.0%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us,  0.0%sy,  0.0%ni, 99.9%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  24744184k total, 24005508k used,   738676k free,   199136k buffers
Swap:  1050616k total,      268k used,  1050348k free, 22078920k cached

Mysqldump from Amazon RDS

Posted: 31 Mar 2013 10:02 AM PDT

I have a MySQL database deployed on Amazon Web Services RDS, and a crontab job that runs the mysqldump command. The problem is that the dump freezes the RDS instance, and my website is down and inaccessible for almost half an hour or more. Is there any way to stop the freeze, so the website stays accessible while the dump runs?
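
A hedged mitigation, assuming the tables are InnoDB (the endpoint, user, and database names below are placeholders): --single-transaction dumps from a consistent snapshot without holding table locks, and --quick streams rows instead of buffering whole tables in memory.

mysqldump --single-transaction --quick \
    -h myinstance.rds.amazonaws.com -u admin -p mydatabase > backup.sql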

MySQL - ERROR 1045 (28000): Access denied for user

Posted: 31 Mar 2013 10:25 AM PDT

I just installed a fresh copy of Ubuntu 10.04.2 LTS on a new machine. I logged into MySQL as root:

david@server1:~$ mysql -u root -p123  

I created a new user called repl. I left the host blank, so the new user may have access from any location.

mysql> CREATE USER 'repl' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)

I checked the user table to verify the new user repl was properly created.

mysql> select host, user, password from user;
+-----------+------------------+-------------------------------------------+
| host      | user             | password                                  |
+-----------+------------------+-------------------------------------------+
| localhost | root             | *23AE809DDACAF96AF0FD78ED04B6A265E05AA257 |
| server1   | root             | *23AE809DDACAF96AF0FD78ED04B6A265E05AA257 |
| 127.0.0.1 | root             | *23AE809DDACAF96AF0FD78ED04B6A265E05AA257 |
| ::1       | root             | *23AE809DDACAF96AF0FD78ED04B6A265E05AA257 |
| localhost |                  |                                           |
| server1   |                  |                                           |
| localhost | debian-sys-maint | *27F00A6BAAE5070BCEF92DF91805028725C30188 |
| %         | repl             | *23AE809DDACAF96AF0FD78ED04B6A265E05AA257 |
+-----------+------------------+-------------------------------------------+
8 rows in set (0.00 sec)

I then exit and try to log in as user repl, but access is denied.

david@server1:~$ mysql -u repl -p123
ERROR 1045 (28000): Access denied for user 'repl'@'localhost' (using password: YES)
david@server1:~$ mysql -urepl -p123
ERROR 1045 (28000): Access denied for user 'repl'@'localhost' (using password: YES)
david@server1:~$

Why is access denied?
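
One hedged explanation, based on the user table shown above: MySQL sorts more specific host values first, so the anonymous accounts (''@'localhost' and ''@'server1') match a local login before 'repl'@'%' does, and the password check then fails against the wrong account. A sketch of the usual fix:

DROP USER ''@'localhost';
DROP USER ''@'server1';
FLUSH PRIVILEGES;
-- then retry: mysql -u repl -p123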

For an InnoDB-only DB, which of these elements can be removed?

Posted: 31 Mar 2013 10:41 AM PDT

So, I'm trying to set up a Drupal 7 my.cnf file that combines best practices from various performance blogs. I'm realizing, though, that some of them are older than others, and many don't assume InnoDB. Of this list of settings, which are irrelevant if you're building for InnoDB?

[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
open-files-limit = 4096

[mysqld]
port = 3306
user = mysql
default_storage_engine
default-storage-engine = InnoDB
socket = /var/run/mysqld/mysqld.sock
pid_file = /var/run/mysqld/mysqld.pid
basedir = /usr
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
local-infile = 0
automatic_sp_privileges = 0
safe-user-create = 1
secure-auth = 1
secure-file-priv = /tmp
symbolic-links = 0
key_buffer_size = 32M
myisam-recover = BACKUP,FORCE
concurrent_insert = 2
max_allowed_packet = 16M
max_connect_errors = 1000000
datadir = /var/lib/mysql
tmp_table_size = 64M
max_heap_table_size = 64M
query_cache_type = 1
query_cache_size = 0
query_cache_limit = 8M
query_cache_min_res_unit = 1K
default-storage-engine = InnoDB
thread_stack = 256K
thread_cache_size = 128
max_connections = 128
open_files_limit = 65535
skip-locking
skip-bdb
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_cache_size = 256K
sync_binlog = 256
expire_logs_days = 14
max_binlog_size = 1G
binlog_do_db = include_database_name
binlog_ignore_db = include_database_name
max_user_connections = 150
key_buffer = 16M
key_cache_block_size = 4K
bulk_insert_buffer_size = 8M
myisam_sort_buffer_size = 64M
join_buffer_size = 8M
read_buffer_size = 2M
sort_buffer_size = 3M
read_rnd_buffer_size = 64M
table_cache = 4096
table_definition_cache = 4096
table_open_cache = 16384
optimizer_search_depth = 4
collation-server = utf8_general_ci
interactive_timeout = 400
wait_timeout = 300
connect_timeout = 10
thread_concurrency = 8
back_log = 2048
open-files = 10000
query_prealloc_size = 65536
query_alloc_block_size = 131072

MySQL inserts/deletes slowing down after a while after Start/Reboot

Posted: 31 Mar 2013 10:49 AM PDT

The setup here is: OS: Ubuntu 12.04 (1 core and 1.75 GB of RAM); MySQL version: 5.5.

There are about 8 to 9 tables in the database. On initially starting MySQL, queries are fast, at around 0.05 - 0.10 seconds; however, after some time the INSERTs and DELETEs slow down to around 1 - 2 seconds, while the SELECTs stay at around 0.05 - 0.10 seconds.

Then, after rebooting the system, the situation reverts to how it was at the beginning: everything is fast, but the INSERTs and DELETEs slow down tremendously again after a while.

Note: I'm just doing simple inserts and deletes on very few records, and the database starts with empty tables.

Any insight?

Edit: CREATE TABLE for one of the tables

CREATE TABLE `FeedUps` (
  `post_id` int(10) unsigned DEFAULT NULL,
  `liker` int(10) unsigned DEFAULT NULL,
  UNIQUE KEY `post_id` (`post_id`,`liker`),
  KEY `liker` (`liker`),
  CONSTRAINT `FeedUps_ibfk_1` FOREIGN KEY (`post_id`)
      REFERENCES `Feed` (`post_id`) ON DELETE CASCADE ON UPDATE CASCADE,
  CONSTRAINT `FeedUps_ibfk_2` FOREIGN KEY (`liker`)
      REFERENCES `Users` (`user_id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=latin1
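
A hedged note on the symptom described above: INSERTs and DELETEs are the statements that must flush InnoDB's redo log at commit, so log-flush settings are a common first suspect on small cloud VMs. A sketch of the usual knobs (values are placeholders to experiment with, and the second setting consciously relaxes durability):

[mysqld]
innodb_buffer_pool_size = 512M         # placeholder sizing for a 1.75 GB box
innodb_flush_log_at_trx_commit = 2     # flush redo log ~once per second, not per commit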

Is it possible to have extra tables in a slave with MySQL replication?

Posted: 31 Mar 2013 12:58 PM PDT

As my title mention I have a Master and a Slave database.

The master is for operational data and my slave is mainly for reporting.

The issue is that I need to create extra tables for reporting that can't be on the master, but the way my replication is set up at the moment (the simplest one mentioned by the official docs), this breaks replication.

How could I add tables on the slave without the master caring about it? Is it even possible?
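
One hedged approach: put the slave-only tables in a schema of their own and add a replication filter on the slave so events for that schema are never applied; the master then needs no knowledge of it. A sketch for the slave's my.cnf (the schema name is hypothetical):

[mysqld]
replicate-wild-ignore-table = reporting.%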

MySQL 5.1.67 - Replication failure, master log truncated or corrupted

Posted: 31 Mar 2013 01:11 PM PDT

I have run into some issues with replication.

After an in-place upgrade of MySQL from 5.0.77 to 5.1.67 (Percona release 14.4), replication between a master and multiple slaves is causing issues. The slaves are at the same version as the master.

The replication mode is mixed (SBR and RBR).

The master seems to be truncating the log incorrectly, seemingly at random, and the slaves subsequently stop replicating because a corrupt log is sent through.

mysqlbinlog also segfaults when trying to read the log.

This is the error being thrown by the slaves (table names/server names obfuscated):

Last_Error: Could not execute Update_rows event on table ozdb1.tableA; Incorrect string value: '\x8EIQ\x13)\x00...' for column 'Description1' at row 2, Error_code: 1366; Incorrect string value: '\x8E\x08\x0F\x0F\x0F\x0F...' for column 'Description2' at row 1, Error_code: 1366; Incorrect string value: '\xBD\x06DINI...' for column 'Description3' at row 1, Error_code: 1366; Corrupted replication event was detected, Error_code: 1610; handler error HA_ERR_CORRUPT_EVENT; the event's master log apdb041-bin.005603, end_log_pos 1070537659

Currently the only way to recover is via a full refresh of the slaves.

Could there be an issue on the master server? Nothing in the disk, network, or logging metrics indicates a reason why the MySQL master would incorrectly truncate the log.

What could be causing the master to incorrectly truncate or not close off the log properly?

I'd be happy to hear about any extra detail which might be required to help answer this question!

Replication issue - CREATE SELECT alternative?

Posted: 31 Mar 2013 11:58 AM PDT

I have a MySQL 5.1 slave for our BI team.

They need to run some CREATE ... SELECT statements with big SELECT queries (several million rows).

Since CREATE ... SELECT is DDL, if replication attempts to update some rows in the same tables as the SELECT statement, replication is blocked until the CREATE ... SELECT finishes.

Do you know a good non-blocking alternative to those CREATE ... SELECT statements?

I thought of a SELECT ... INTO OUTFILE followed by LOAD DATA INFILE, but they will fill up our disks, as BI guys like to do... :)
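
One hedged alternative (table and column names below are placeholders): split the DDL from the DML, so no single statement holds the source tables for the whole run.

CREATE TABLE bi_copy LIKE source_table;   -- pure DDL, effectively instant

INSERT INTO bi_copy                       -- repeat over id ranges so each
SELECT * FROM source_table                -- statement holds locks only briefly
WHERE id BETWEEN 1 AND 100000;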

Max.

SQL Server replication subscriptions marked as inactive

Posted: 31 Mar 2013 02:58 PM PDT

Is there any way to force SQL Server NOT to mark subscriptions as inactive, ever?

It happens sporadically when there are connection issues, and I don't want to have to reinitialize the subscription every time.

Note, I'm not talking about the subscriptions being marked as expired...just as inactive.
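
A hedged pointer rather than a definitive answer: subscriptions get deactivated when they fall outside the distribution database's maximum retention window, so lengthening that window at the distributor makes the marking less likely; whether it can be disabled entirely is a separate question. A sketch, with a placeholder value:

-- Run at the distributor; value is in hours.
EXEC sp_changedistributiondb
    @database = N'distribution',
    @property = N'max_distretention',
    @value = N'120';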

Thank you.

How to drop a DB2 instance when the instance owner was removed

Posted: 31 Mar 2013 09:58 AM PDT

This is a real sticky situation. I was handed a machine (running AIX 7.1), and my first task was to re-install the DB2 server on it. But someone before me had conveniently removed the instance owner account, and probably recreated it. Now the problem is this:

1) When I try to uninstall DB2, it says the instance is active and has to be dropped first.

2) When I try to drop this instance, DB2 says there is no such instance.

I am quite new to DB2 administration and not sure how to proceed here. Any help is appreciated.
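
A hedged starting point using DB2's standard instance utilities (the install path below is a typical default and may differ on your machine; run as root):

cd /opt/IBM/db2/V9.7/instance   # hypothetical install path
./db2ilist                      # list the instances DB2 still knows about
./db2idrop <instance_name>      # drop the instance by its listed name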

Thanks

How to Convert Horizontal to Vertical Array?

Posted: 31 Mar 2013 10:58 AM PDT

I need to create a query (suitable for Standard Edition) that displays data from multiple columns (Columns 1-6, with corresponding Date Started and Date Completed data) vertically, but also includes the column name in a preceding column to identify it, along with other data (Record Number, Status).

Sample data:

+--------------+--------+------------+-------------+---------------+
| RecordNumber | Status | ColumnName | DateStarted | DateCompleted |
+--------------+--------+------------+-------------+---------------+
|            1 | Open   | Column 1   | 2012-01-01  | 2012-02-01    |
|            2 | Hold   | Column 2   | 2012-01-03  | 2012-03-01    |
|            1 | Open   | Column 3   | 2012-02-05  | 2012-04-06    |
|            3 | Closed | Column 4   | 2012-05-10  | 2012-07-25    |
|            2 | Hold   | Column 5   | 2012-03-09  | 2012-04-01    |
|            1 | Open   | Column 6   | 2012-10-10  | 2012-12-12    |
+--------------+--------+------------+-------------+---------------+
DECLARE @Data AS TABLE
(
    RecordNumber    integer NOT NULL,
    [Status]        varchar(10) NOT NULL,
    ColumnName      varchar(10) NOT NULL,
    DateStarted     date NOT NULL,
    DateCompleted   date NOT NULL
);

INSERT @Data
(
    RecordNumber,
    [Status],
    ColumnName,
    DateStarted,
    DateCompleted
)
VALUES
    (1, 'Open', 'Column 1', '20120101', '20120201'),
    (2, 'Hold', 'Column 2', '20120103', '20120301'),
    (1, 'Open', 'Column 3', '20120205', '20120406'),
    (3, 'Closed', 'Column 4', '20120510', '20120725'),
    (2, 'Hold', 'Column 5', '20120309', '20120401'),
    (1, 'Open', 'Column 6', '20121010', '20121212');
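
If the source is a wide table with one started/completed pair per column (an assumption; the post does not show the source shape), a CROSS APPLY (VALUES ...) unpivot works in Standard Edition. All source table and column names below are hypothetical:

SELECT s.RecordNumber, s.[Status],
       v.ColumnName, v.DateStarted, v.DateCompleted
FROM dbo.SourceTable AS s
CROSS APPLY (VALUES
    ('Column 1', s.Column1Started, s.Column1Completed),
    ('Column 2', s.Column2Started, s.Column2Completed)
    -- ...continue through Column 6
) AS v (ColumnName, DateStarted, DateCompleted);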

Ensure correct username when using pg_restore

Posted: 31 Mar 2013 07:58 AM PDT

I have just installed postgres 9.1.6 on a local Ubuntu server. Now I'm trying to restore a database dump from a database on Heroku. The local database is set up like this:

sudo -u postgres psql -c "create user app_user with password 'pass';"
sudo -u postgres psql -c "create database app_production owner app_user;"

Now, when I try to restore the dump, I use the following command:

pg_restore --verbose --schema=public --no-acl --no-owner --jobs=8 --exit-on-error --username=app_user --dbname=app_production /tmp/app_production.dump  

Now, in psql, using \l to see ownerships, I get the following:

                                 List of databases
      Name       |   Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------------+-----------+----------+-------------+-------------+-----------------------
 app_production  | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres        | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0       | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                 |           |          |             |             | postgres=CTc/postgres
 template1       | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                 |           |          |             |             | postgres=CTc/postgres

As you can see, the ownership of the app_production database has now changed from app_user to postgres. I would have expected the owner of the app_production database to still be app_user, so what am I doing wrong?
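
A hedged workaround for the ownership itself (it does not explain why pg_restore behaved this way): a superuser can reassign the database owner afterwards.

ALTER DATABASE app_production OWNER TO app_user;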

BTW, the --schema=public flag was added because I was getting a weird error:

"Could not execute query: ERROR: must be owner of extension plpgsql"

Another thing is that the owner in the dump is the user the database had on Heroku, which would be something like 'jebf473b73bv73v749b7'.

How to script out push subscription creation at the subscriber?

Posted: 31 Mar 2013 12:58 PM PDT

I'm trying to set up a push subscription to a SQL Server publication from the subscriber.

I could set up the subscription at the publisher using the Replication Wizard in Management Studio. However, I would prefer to script the process relative to the subscriber so I can automate the deployment of a new SQL Server subscriber instance.

Initially, I'm happy to prompt for the name of the publisher before deployment. If I can get this working, I will look for a way to inject the correct value for my environment automatically.

What is a simple way to do this for a SQL Server instance that has to create multiple subscriptions at different publishers?

I'm open to using any supported SQL Server scripting solution: SMO, RMO, Sqlcmd, WMI, PSDrive, even pure T-SQL.

I've attempted to solve this problem in two ways. The first is a complete solution using T-SQL, but it involves some manual steps.

Using T-SQL

I have a manual solution in T-SQL. The solution is based on the output of the Management Studio Replication Script Generator.

Using Management Studio, I run the following script to generate a T-SQL script that I can run at the publisher:

PRINT N'
EXECUTE MyDatabase.dbo.sp_addsubscription
  @publication = N''MyPublication'',
  @subscriber = ''' + CAST(SERVERPROPERTY('ServerName') AS SYSNAME) + ''',
  @destination_db = ''SubscriberDatabase'',
  @subscription_type = N''Push'',
  @sync_type = N''automatic'',
  @article = N''all'',
  @update_mode = N''read only'',
  @subscriber_type = 0;

EXECUTE MyDatabase.dbo.sp_addpushsubscription_agent
  @publication = N''MyPublication'',
  @subscriber = ''' + CAST(SERVERPROPERTY('ServerName') AS SYSNAME) + ''',
  @subscriber_db = ''SubscriberDatabase'',
  @job_login = null,
  @job_password = null,
  @subscriber_security_mode = 1,
  @frequency_type = 64,
  @frequency_interval = 1,
  @frequency_relative_interval = 1,
  @frequency_recurrence_factor = 0,
  @frequency_subday = 4,
  @frequency_subday_interval = 5,
  @active_start_time_of_day = 0,
  @active_end_time_of_day = 235959,
  @active_start_date = 0,
  @active_end_date = 0,
  @dts_package_location = N''Distributor'';';

On the MYSUBSCRIBER instance, the output would look like this:

EXECUTE MyDatabase.dbo.sp_addsubscription
  @publication = N'MyPublication',
  @subscriber = 'MYSUBSCRIBER',
  @destination_db = 'SubscriberDatabase',
  @subscription_type = N'Push',
  @sync_type = N'automatic',
  @article = N'all',
  @update_mode = N'read only',
  @subscriber_type = 0;

EXECUTE MyDatabase.dbo.sp_addpushsubscription_agent
  @publication = N'MyPublication',
  @subscriber = 'MYSUBSCRIBER',
  @subscriber_db = 'SubscriberDatabase',
  @job_login = null,
  @job_password = null,
  @subscriber_security_mode = 1,
  @frequency_type = 64,
  @frequency_interval = 1,
  @frequency_relative_interval = 1,
  @frequency_recurrence_factor = 0,
  @frequency_subday = 4,
  @frequency_subday_interval = 5,
  @active_start_time_of_day = 0,
  @active_end_time_of_day = 235959,
  @active_start_date = 0,
  @active_end_date = 0,
  @dts_package_location = N'Distributor';

I copy the output and execute the script at the publisher instance to set up the subscription.

I think I can't automate this in pure T-SQL without editing the script before running it, because T-SQL by design does not handle user input.

Using PowerShell and RMO

PowerShell has simple ways to process user input, so this seems like a good way to prototype the automation process.

MSDN has an eight-step guide to set up a push subscription using the .NET Replication Management Objects (RMO).

Here are the first two steps:

  1. Create a connection to the Publisher by using the ServerConnection class.
  2. Create an instance of the TransPublication class by using the Publisher connection from step 1. Specify Name, DatabaseName, and ConnectionContext.

I'm trying to translate these steps into a PowerShell script, but I can't get past step 2.

In the following code examples, I use fictional object names. I believe this does not affect the answerability of the question because the error message is identical when I use the real object names.

First attempt: setting the properties

My first attempt is to create the TransPublication object and then set its properties. The code looks like this:

Add-Type -AssemblyName "Microsoft.SqlServer.Rmo, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";

$Publisher = New-Object Microsoft.SqlServer.Management.Common.ServerConnection MyServer

$Publication = New-Object Microsoft.SqlServer.Replication.TransPublication
$Publication.Name = 'MyPublication'
$Publication.DatabaseName = 'MyDatabase'
$Publication.ConnectionContext = $Publisher

When I execute this script, I see the following error:

Exception setting "ConnectionContext": "Cannot convert the "server='(local)';Trusted_Connection=true;multipleactiveresultsets=false" value
of type "Microsoft.SqlServer.Management.Common.ServerConnection" to type "Microsoft.SqlServer.Management.Common.ServerConnection"."
At line:8 char:14
+ $Publication. <<<< ConnectionContext = $Publisher
    + CategoryInfo          : InvalidOperation: (:) [], RuntimeException
    + FullyQualifiedErrorId : PropertyAssignmentException

It looks like it's failing because it can't convert the type ServerConnection to the type ServerConnection. I don't understand how this could fail for the stated reason, because the value is already of the required type.
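
A hedged aside (an assumption, not a confirmed diagnosis): an identical-looking type name can still fail to convert when the two ServerConnection types come from different loaded assembly versions. Loading ConnectionInfo explicitly at the same version as RMO is one way to test that theory:

Add-Type -AssemblyName "Microsoft.SqlServer.Rmo, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"
Add-Type -AssemblyName "Microsoft.SqlServer.ConnectionInfo, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"

$Publisher = New-Object Microsoft.SqlServer.Management.Common.ServerConnection 'MyServer'
$Publisher.Connect()   # verify the connection object itself works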

Second attempt: overloading the constructor

My second attempt is to specify the property values of the TransPublication object in the constructor. The code looks like this:

Add-Type -AssemblyName "Microsoft.SqlServer.Rmo, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";

$Publisher = New-Object Microsoft.SqlServer.Management.Common.ServerConnection MyServer

$Publication = New-Object Microsoft.SqlServer.Replication.TransPublication 'MyPublication', 'MyDatabase', $Publisher

When I execute this script, I see the following error:

New-Object : Cannot find an overload for "TransPublication" and the argument count: "3".
At line:5 char:26
+ $Publication = New-Object <<<<  -TypeName Microsoft.SqlServer.Replication.TransPublication 'MyPublication', 'MyDatabase', $Publisher
    + CategoryInfo          : InvalidOperation: (:) [New-Object], MethodException
    + FullyQualifiedErrorId : ConstructorInvokedThrowException,Microsoft.PowerShell.Commands.NewObjectCommand

It looks like the New-Object cmdlet can't find the three-argument constructor documented by MSDN:

public TransPublication(
    string name,
    string databaseName,
    ServerConnection connectionContext
)


As far as I can tell, I'm overloading the constructor correctly.

Am I doing something wrong? Is there something unusual about my environment? Am I better off using another solution?

Can I monitor the progress of importing a large .sql file in sqlite3 using zenity --progress?

Posted: 31 Mar 2013 01:58 PM PDT

I'm trying to monitor the progress of a sqlite3 command importing a large .sql file into a database using zenity --progress.

I've tried the following which will import the file, however progress is not shown:

sqlite3 DATABASE < import_file.sql | zenity --progress --percentage=0 --auto-close  

I know I need to provide zenity a numeric source for the progress, but I don't really know how to obtain the number.

Can anyone help me?
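
One hedged approach, assuming the pv utility is installed: pv -n copies the file to stdout while printing an integer percentage to stderr, which is exactly the numeric feed zenity wants.

(pv -n import_file.sql | sqlite3 DATABASE) 2>&1 | zenity --progress --percentage=0 --auto-close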

Why is Database Administration so hard?

Posted: 31 Mar 2013 05:26 AM PDT

I know a lot of Database Administrators, and they are all over 28-29 years old.

Is all database administration like that? I mean, does it take at least 7-8 years of experience?

Or is being a database administrator that hard?

Are there any good tools for monitoring PostgreSQL databases?

Posted: 31 Mar 2013 11:33 AM PDT

I am planning to have a few highly loaded PostgreSQL databases very soon. I have some experience managing MySQL databases under high load, but now we have to use PostgreSQL.

I want to know what the best tools are for day-to-day database management and status reporting. (Of course the console is the best one, but I want to know about other options too.)

All experience is welcome!

How do I list all databases and tables using psql?

Posted: 31 Mar 2013 08:45 PM PDT

I am trying to learn PostgreSQL administration and have started learning how to use the psql command line tool.

When I log in with psql --username=postgres, how do I list all databases and tables?

I have tried \d, d and dS+ but nothing is listed. I have created two databases and a few tables with pgAdmin III, so I know they should be listed.
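
For reference, a short sketch of the standard psql meta-commands (backslash commands are handled by psql itself, not sent to the server; the database name is a placeholder):

\l          -- list all databases
\c mydb     -- connect to one of them
\dt         -- list tables in the current database
\dt *.*     -- list tables in all schemas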
