Saturday, June 29, 2013

[how to] Get the Time of Last update of a column

Get the Time of Last update of a column

Posted: 29 Jun 2013 08:38 PM PDT

Good evening. This command gives the date of the last update for a table:

SELECT UPDATE_TIME
FROM   information_schema.tables
WHERE  TABLE_SCHEMA = 'MyDB'
  AND  TABLE_NAME = 'MyTable'

But I want to find the time of the last update of a particular column of a table. I can't use triggers; I want to get the last update time of a specific column from a system table.

I hope I explained my problem well. Thanks
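
For what it's worth, MySQL's information_schema tracks UPDATE_TIME per table, not per column, so no system table can answer this directly. A minimal sketch of one workaround, assuming a hypothetical column MyColumn and accepting that the timestamp is maintained by the application's UPDATE statements rather than a trigger:

ALTER TABLE MyTable
  ADD COLUMN MyColumn_updated_at TIMESTAMP NULL;

-- Every statement that changes MyColumn must also set the timestamp:
UPDATE MyTable
SET    MyColumn = 'new value',
       MyColumn_updated_at = CURRENT_TIMESTAMP
WHERE  Id = 42;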

MySQL design for big data

Posted: 29 Jun 2013 12:19 PM PDT

This is my first post on this forum, although I have been following it for a while. Here's my question, and I really need help with this one.

I have a database which contains trip services. Every trip service has 30 fields of data, plus GPS position logs of vehicle movement, Asterisk call logs, receipts, etc. By my calculation, over a 1-year period the database size would be at least 2.3 TB. Needless to say, querying needs to work in many-to-many fashion, all data needs to be available within seconds, and I need to keep it at my disposal for at least 3 years. In the future there will be around 2,000 queries per second generating about 4,000 records per second across roughly 6,000 fields, plus approximately 1,000-2,000 updates and 4,000 SELECT operations.

My question is: what is the best practice to keep my database responsive and effective, and what hardware is most suitable for those demands? I have read the normalization article and I understand the goal. The hardware I am planning to get is as follows (HP ProLiant running CentOS):

CPU:     2 x Intel Xeon Processor E5-2620
RAM:     48 GB
HDD:     8 x HP 300 GB 6G SAS 15K rpm SFF (2.5-inch)
Network: 4 x Gigabit

Thanks in advance,

best regards
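
As a starting point for the design side of the question, here is a minimal sketch of time-based range partitioning for the high-volume GPS log data, under assumed names (the gps_log table and its columns are illustrative, not from the post). Partitioning by date lets scans prune old data and lets aged partitions be dropped cheaply:

CREATE TABLE gps_log (
  vehicle_id INT NOT NULL,
  logged_at  DATETIME NOT NULL,
  lat DECIMAL(9,6) NOT NULL,
  lon DECIMAL(9,6) NOT NULL,
  PRIMARY KEY (vehicle_id, logged_at)  -- partition column must be in every unique key
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(logged_at)) (
  PARTITION p2013h1 VALUES LESS THAN (TO_DAYS('2013-07-01')),
  PARTITION p2013h2 VALUES LESS THAN (TO_DAYS('2014-01-01')),
  PARTITION pmax    VALUES LESS THAN MAXVALUE
);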

What is the best way to create database table? [on hold]

Posted: 29 Jun 2013 08:04 AM PDT

I am developing a web application similar to CarDekho or CarWale. Like those websites, my application contains full details of cars, such as brands and models, and car details like basic overview, pictures, features, specification, dimensions, colors, etc.

My question is: what is the best practice for creating the database tables? What I have concluded so far is that I will create different tables for brands and models and use a foreign key to join them. But for car details, should I put every piece of detailed information in one table, or should I keep it in different tables and join them on a model primary key?
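
A minimal sketch of the normalized layout described above, with illustrative table and column names (all assumptions, not from the question). Wide attribute groups live in one-to-one detail tables keyed by the model, which keeps the core table narrow without duplicating data:

CREATE TABLE brand (
  brand_id INT AUTO_INCREMENT PRIMARY KEY,
  name     VARCHAR(100) NOT NULL
);

CREATE TABLE model (
  model_id INT AUTO_INCREMENT PRIMARY KEY,
  brand_id INT NOT NULL,
  name     VARCHAR(100) NOT NULL,
  FOREIGN KEY (brand_id) REFERENCES brand (brand_id)
);

-- One-to-one detail table; other groups (features, colors, specification)
-- can follow the same pattern, each keyed by model_id.
CREATE TABLE model_dimensions (
  model_id  INT PRIMARY KEY,
  length_mm INT,
  width_mm  INT,
  height_mm INT,
  FOREIGN KEY (model_id) REFERENCES model (model_id)
);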

How to select rows which have max and min of count?

Posted: 29 Jun 2013 01:18 PM PDT

I have the following table in SQL Server 2012:

CREATE TABLE [MyTable] (
    [Id] UNIQUEIDENTIFIER NOT NULL PRIMARY KEY DEFAULT NEWID(),
    [MyGroup] INT NOT NULL
);

I want to output a table of the form

    | MyGroup | Count
Max |         |
Min |         |

For example, if I have 30 rows in MyTable where MyGroup = 1, 20 rows where MyGroup = 2, and 10 rows where MyGroup = 3, then

    | MyGroup | Count
Max | 1       | 30
Min | 3       | 10

What sort of query would output this information?
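
A sketch of one way to do this: group first in a CTE, then pick the extremes (valid on SQL Server 2012; note that ties would return multiple rows per label):

WITH counts AS (
    SELECT MyGroup, COUNT(*) AS Cnt
    FROM   MyTable
    GROUP  BY MyGroup
)
SELECT 'Max' AS Label, MyGroup, Cnt
FROM   counts
WHERE  Cnt = (SELECT MAX(Cnt) FROM counts)
UNION ALL
SELECT 'Min', MyGroup, Cnt
FROM   counts
WHERE  Cnt = (SELECT MIN(Cnt) FROM counts);

With the example data above, this returns Max | 1 | 30 and Min | 3 | 10.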

Difference between partitioning and fragmentation

Posted: 29 Jun 2013 05:47 AM PDT

I'm trying to learn something about distributed databases, and I stumbled upon the concepts of fragmentation and partitioning. Now I'm asking myself: what's the difference? It seems to me as if both do the same thing and produce similar results (dividing a relation horizontally, vertically, or both)...

MariaDB header error

Posted: 29 Jun 2013 03:55 PM PDT

I had a MySQL server on my Ubuntu dev box (I used a "how to install a LAMP stack on Ubuntu" type of tutorial) and I decided to install MariaDB.

So I uninstalled MySQL and installed MariaDB.

~$ mysql --version
mysql  Ver 15.1 Distrib 10.0.3-MariaDB, for debian-linux-gnu (i686) using readline 5.1
~$ mysqld --version
mysqld  Ver 10.0.3-MariaDB-1~precise-log for debian-linux-gnu on i686 (mariadb.org binary distribution)

The installation worked fine and all my databases work fine, except that I get this error when I run my code:

Warning: mysqli::mysqli() [mysqli.mysqli]: Headers and client library minor version mismatch. Headers:50529 Library:100003 in /var/www/test.php on line 5

this is my php code:

$mysqli = new mysqli("localhost", "root", "", "test");

if (mysqli_connect_errno()) {
    printf("Connect failed: %s\n", mysqli_connect_error());
    exit();
}

$query = "SELECT * FROM test";

if ($result = $mysqli->query($query)) {
    while ($row = $result->fetch_assoc()) {
        printf("%s (%s)\n", $row["id"], $row["name"]);
    }
    $result->free();
}

$mysqli->close();

and the mysql info:

Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 40
Server version: 10.0.3-MariaDB-1~precise-log mariadb.org binary distribution

Copyright (c) 2000, 2013, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> select version();
+------------------------------+
| version()                    |
+------------------------------+
| 10.0.3-MariaDB-1~precise-log |
+------------------------------+
1 row in set (0.00 sec)

MariaDB [test]> show tables;
+----------------+
| Tables_in_test |
+----------------+
| test           |
+----------------+
1 row in set (0.01 sec)

MariaDB [test]> show create table test\G
*************************** 1. row ***************************
       Table: test
Create Table: CREATE TABLE `test` (
  `id` int(11) DEFAULT NULL,
  `name` varchar(255) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

MariaDB [test]> select * from test;
+------+------+
| id   | name |
+------+------+
|    1 | dev  |
|    2 | qa   |
+------+------+
2 rows in set (0.01 sec)

I tried a mysql_upgrade:

mysql_upgrade --force
Phase 1/3: Fixing table and database names
Phase 2/3: Checking and upgrading tables
Processing databases
information_schema
mysql
mysql.column_stats                                 OK
mysql.columns_priv                                 OK
mysql.db                                           OK
mysql.event                                        OK
mysql.func                                         OK
mysql.gtid_slave_pos                               OK
mysql.help_category                                OK
mysql.help_keyword                                 OK
mysql.help_relation                                OK
mysql.help_topic                                   OK
mysql.host                                         OK
mysql.index_stats                                  OK
mysql.innodb_index_stats                           OK
mysql.innodb_table_stats                           OK
mysql.ndb_binlog_index                             OK
mysql.plugin                                       OK
mysql.proc                                         OK
mysql.procs_priv                                   OK
mysql.proxies_priv                                 OK
mysql.servers                                      OK
mysql.slave_master_info                            OK
mysql.slave_relay_log_info                         OK
mysql.slave_worker_info                            OK
mysql.table_stats                                  OK
mysql.tables_priv                                  OK
mysql.time_zone                                    OK
mysql.time_zone_leap_second                        OK
mysql.time_zone_name                               OK
mysql.time_zone_transition                         OK
mysql.time_zone_transition_type                    OK
mysql.user                                         OK
performance_schema
test
test.test                                          OK
Phase 3/3: Running 'mysql_fix_privilege_tables'...
OK

As far as I know, everything there is OK. What's the problem?

Also, is MariaDB ready for production?

How to manage constants within an Oracle Database

Posted: 29 Jun 2013 09:17 AM PDT

This question touches on a few different parts of Oracle that I'm not particularly familiar with so bear with me:

I am trying to redesign some of the procedures and functions within the database I manage. One of the most annoying things is the use of integers to express the status of rows.

For example rows that need to be processed get the number 3, rows that have been processed get the number 0 and rows that are in the middle of processing get number 1. This is a simplified example.

What I was hoping to do was turn those integers into named constants so that, when the procedures are written, the words are self-documenting...

I've tried to use packages and functions to manage these constants. Everything comes out a bit messy. The cleanest I've found is:

CREATE OR REPLACE FUNCTION PROCESSED RETURN NUMBER AS
BEGIN
    RETURN 0;
END;
/

This allows you to write the SQL below, which looks relatively neat.

SELECT rows
FROM   table
WHERE  status = PROCESSED;

The problem I've found is that these columns are indexed for fast lookups, but using the functions means the indexes aren't used.

With that background, my question is: how should constants be managed effectively in Oracle? Which solution has the best trade-off between visual simplicity, logical organisation and database performance?
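
For comparison, a minimal sketch of the usual package-based variant (names are illustrative assumptions). Plain SQL cannot reference package constants directly, hence the wrapper function; marking it DETERMINISTIC at least tells the optimizer the value never changes:

CREATE OR REPLACE PACKAGE status_pkg AS
    c_processed  CONSTANT NUMBER := 0;
    c_processing CONSTANT NUMBER := 1;
    c_pending    CONSTANT NUMBER := 3;
    FUNCTION processed RETURN NUMBER DETERMINISTIC;
END status_pkg;
/

CREATE OR REPLACE PACKAGE BODY status_pkg AS
    FUNCTION processed RETURN NUMBER DETERMINISTIC IS
    BEGIN
        RETURN c_processed;  -- PL/SQL code can use the constant directly
    END;
END status_pkg;
/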

database structure - complicated requirements

Posted: 29 Jun 2013 04:45 AM PDT

I have a project to build a website, but it's complicated, and I'm having trouble figuring out the best way to build the database to handle these particular requirements.

The site is for local builders and farmers (and anyone else who uses heavy equipment) to rent their machinery amongst themselves. Users should be able to sign up and list an item of equipment, which is then searchable and bookable by other users of the site.

So a builder might sign up and upload a listing for his concrete mixer. Then another user can search for concrete mixers to hire between two dates and place a booking for the mixer through the site.

So far so good.

The problem is that the builder should be able to set a default per-day rate, but they should also be able to say that throughout the month of July, or on the last two weekends in August, the mixer's default daily rate is different. So basically every day could end up having a different rate, and I'm having trouble figuring out the most efficient way to structure the database, and how to calculate the total cost of renting for several days if there's potentially a different rate every day.

At the moment I'm imagining having to loop through a 365-element array, but that can't be right. I'm a bit new to this, so I'm probably just confused.
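
A minimal sketch of one common pattern, with illustrative names (all assumptions): a default rate on the equipment row plus date-ranged overrides, so only the exceptions are stored rather than 365 rows per item:

CREATE TABLE equipment (
    equipment_id       INT PRIMARY KEY,
    owner_id           INT NOT NULL,
    default_daily_rate DECIMAL(10,2) NOT NULL
);

CREATE TABLE rate_override (
    equipment_id INT NOT NULL,
    start_date   DATE NOT NULL,
    end_date     DATE NOT NULL,
    daily_rate   DECIMAL(10,2) NOT NULL,
    PRIMARY KEY (equipment_id, start_date),
    FOREIGN KEY (equipment_id) REFERENCES equipment (equipment_id)
);

Pricing a booking then means summing, for each day in the requested range, the override rate where one applies and the default otherwise; with a helper calendar table of dates that becomes a single join rather than a loop.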

When creating remote BLOB store is "RBSFilestreamFile" always the name of the file to be added to the FILEGROUP?

Posted: 29 Jun 2013 04:45 PM PDT

When creating a remote BLOB store in SQL Server (2008 R2), is "RBSFilestreamFile" always the name of the file when adding it to the FILEGROUP, as in this query? (This is the name I've seen used in every example I've found online, but I need to know for sure.)

ADD FILE (name = RBSFilestreamFile, filename = 'c:\Blobstore')
    TO FILEGROUP RBSFilestreamProvider

I'm asking because I'm working on an application for restoring SharePoint content databases and need to know if I can hardcode this string into the application.

I know you can create filegroups and files with any name you want, but specifically for setting up RBS with SharePoint using SQL Server 2008's built-in FILESTREAM provider, are these names an expected convention? I.e., will it work if I name my FILEGROUP and/or FILE something else?

What is proper database design for a drupal table with an order field?

Posted: 29 Jun 2013 01:45 AM PDT

I am adding a custom widget to an instance of Drupal 6.x & MySQL 5.5 and came across a problem with updating rows.

I have a table of recipe ingredients where multiple ingredients are tied to a single recipe by node id (nid) & version id (vid).

The primary key is vid, nid & order.

The vid & nid are related to the recipe nid & vid fields.

The table schema is:

+-----------------+------------------+
| Field           | Type             |
+-----------------+------------------+
| vid             | int(10) unsigned |
| nid             | int(10) unsigned |
| order           | int(10) unsigned |
| name            | varchar(255)     |
| unit_of_measure | varchar(32)      |
| quantity        | int(10) unsigned |
+-----------------+------------------+

The problem comes when attempting to reorder the ingredients.

For instance:

+-----+-----+-------+---------+-----------------+----------+
| vid | nid | order | name    | unit_of_measure | quantity |
+-----+-----+-------+---------+-----------------+----------+
|  5  |  1  |   1   | Chicken | Lb              |    1     |
|  5  |  1  |   2   | Rice    | Cup             |    2     |
|  5  |  1  |   3   | Thyme   | Tbsp            |    3     |
+-----+-----+-------+---------+-----------------+----------+

I want to move Thyme to the top of the list, but I can't change the order for Thyme to 1, since that primary key already exists (Chicken). I can't move Chicken down to order 2 because Rice is already there, etc...

My position is that we should add a unique auto-incrementing int field to serve as the sole primary key. This would let us reorder the rows without incident, at the cost that two rows might end up with the same nid, vid & order.

My coworker's position is that adding a unique auto-increment int field is bad design, because there should never be two different rows with the same vid, nid & order. But following this belief, there are two ways to implement a reorder of the rows (see the sketch after this list):

  1. Update each row's order with some large number (e.g. 1001, 1002, 1003) so the original order values no longer conflict, then update each row with the correct order values (1, 2, 3).
  2. Delete every row with the same nid & vid, then insert all the rows again in the correct order.
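
A minimal sketch of option 1 using the example data above (the table name ingredients is an assumption; order needs backticks because it is a reserved word in MySQL):

-- Pass 1: shift the conflicting keys out of the way.
UPDATE ingredients SET `order` = `order` + 1000 WHERE vid = 5 AND nid = 1;

-- Pass 2: write the final positions.
UPDATE ingredients SET `order` = 1 WHERE vid = 5 AND nid = 1 AND `order` = 1003;  -- Thyme
UPDATE ingredients SET `order` = 2 WHERE vid = 5 AND nid = 1 AND `order` = 1001;  -- Chicken
UPDATE ingredients SET `order` = 3 WHERE vid = 5 AND nid = 1 AND `order` = 1002;  -- Rice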

From the database's perspective, what is the correct approach?

what is the best way to update 2 tables via data entry form in access 2007 database?

Posted: 29 Jun 2013 02:45 AM PDT

I have a database with more than 500,000 records. I use Table_a to accession the files by ID number, plus name, date, address, telephone, etc. Then I use Table_b to archive files and record the position where I should keep each file, just by entering the ID. Is there any way to update the database so it all happens in one action? I mean, after accessioning the file, could I have a check box or a drop-down list that moves the record directly to archiving at the same time?

SQL Server Replication: "ALTER TABLE ALTER COLUMN" is not propagated to subscribers

Posted: 29 Jun 2013 07:45 AM PDT

We are running SQL Server 2008 R2 SP1 as publisher & distributor, and SQL Server 2005 SP3 as subscriber. Replication of schema changes is activated, and replication has been running for years, including frequent schema changes (new columns, new constraints, etc.).

The following statement was run on the publisher:

use myDatabase
alter table Tbl_TypeLignePaye
    alter column natureTypeLignePaye nvarchar(12)
go

where the column natureTypeLignePaye was originally nvarchar(3) null.

The query ran without errors on the main database. The results are as follows:

  1. The field natureTypeLignePaye still appears as nvarchar(3) in the object browser.
  2. The column properties screen indicates an nvarchar type with a length of 12.
  3. The change was not propagated to the subscribers.

Any idea what is going on with this database? In summary: on the publisher, the object browser window and the column properties window give inconsistent data for the field's type and length, and the change never reached the subscribers.
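
One thing worth checking (a sketch; the publication name MyPublication is an assumption) is whether DDL replication is actually enabled for this publication:

-- Run in the publication database.
EXEC sp_helppublication @publication = N'MyPublication';
-- In the result set, replicate_ddl = 1 means ALTER TABLE statements
-- should be propagated to subscribers; 0 means they are silently skipped.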

Is it possible to have extra tables in a Slave with MySQL Replication

Posted: 29 Jun 2013 01:45 PM PDT

As my title mentions, I have a Master and a Slave database.

The Master is for operational data, and my Slave is mainly for reporting.

The issue is that I need to create extra tables on the reporting side that can't exist on the Master, but with the way my replication is currently set up (the simplest one mentioned by the official documentation), this breaks replication.

How could I add tables on the Slave without the Master caring about it? Is it even possible?

What fillfactor for caching table?

Posted: 29 Jun 2013 06:14 AM PDT

I have a heavily updated / accessed table where I store serialized Java objects. They stay in the table for 2-3 hours (and are also updated during that period) and are then removed. The table size is around 300 MB. I have noticed it is VACUUMed very, very often, and I wonder whether changing the fillfactor would help?
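
A minimal sketch of the change (the table name java_objects is an assumption). A lower fillfactor leaves free space on each page, giving HOT updates room to write new row versions in place, which can reduce vacuum pressure:

-- Leave half of each heap page free for updated row versions.
ALTER TABLE java_objects SET (fillfactor = 50);

-- Rewrite the table so existing pages pick up the new setting.
VACUUM FULL java_objects;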

Replication issue - CREATE SELECT alternative?

Posted: 29 Jun 2013 03:45 PM PDT

I have a MySQL 5.1 slave for our BI team.

They need to run some CREATE ... SELECT statements with big SELECT queries (several million rows).

Since CREATE ... SELECT is DDL, if replication attempts to update rows in the same tables as the SELECT statement, replication is blocked until the CREATE ... SELECT finishes.

Do you know a good non-blocking alternative to those CREATE ... SELECT statements?

I thought of SELECT ... INTO OUTFILE followed by LOAD DATA INFILE, but that would fill up our disks, as BI guys like to do... :)

Max.
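
One commonly suggested workaround is to split the DDL from the DML, so the long-running statement is an ordinary INSERT rather than a blocking CREATE ... SELECT. A sketch with illustrative table names (and the caveat that under statement-based replication the long INSERT can still hold locks, so chunking it by key range helps):

-- The DDL finishes instantly; no big SELECT is attached to it.
CREATE TABLE bi_copy LIKE source_table;

-- The long-running part is now plain DML, which can be run in chunks
-- to keep each lock window short.
INSERT INTO bi_copy
SELECT * FROM source_table
WHERE  id BETWEEN 1 AND 1000000;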

Which text index should I create for the Xoops engine to achieve better search results?

Posted: 29 Jun 2013 09:45 AM PDT

In one of our projects we use the Xoops engine to manage content. In the MySQL slow query log, most queries look like the following:

SELECT p.uid, f.forum_id, p.topic_id, p.poster_name, p.post_time,
       f.forum_name, p.post_id, p.subject
FROM   xps33_bb_posts p,
       xps33_bb_posts_text pt,
       xps33_bb_forums f
WHERE  p.post_id = pt.post_id
  AND  p.approved = 1
  AND  p.forum_id = f.forum_id
  AND  f.forum_id IN (1,4,61,7,9,17,20,45,35,44,38,39,43,53,54,55,56,57,58,60,14,29,40,26,18,41,33,24,32,59,25)
  AND  ((p.subject LIKE '%rivi%' OR pt.post_text LIKE '%orvi%')
   AND  (p.subject LIKE '%care%' OR pt.post_text LIKE '%gor%'))
ORDER  BY p.post_time DESC
LIMIT  0, 5;

I can't change the queries, as that would involve changing the engine, which is not an option at the moment. But I can help the engine search faster. As I understand it, since the tables use the MyISAM engine I can create full-text indexes, which should make searching faster. Am I right?

So, in general, which indexes should I create to keep the following queries from running for a long time?

+----+-------------+-------+--------+---------------------+---------+---------+--------------------+--------+-----------------------------+  | id | select_type | table | type   | possible_keys       | key     | key_len | ref                | rows   | Extra                       |  +----+-------------+-------+--------+---------------------+---------+---------+--------------------+--------+-----------------------------+  |  1 | SIMPLE      | p     | ALL    | PRIMARY,forumid_uid | NULL    | NULL    | NULL               | 144090 | Using where; Using filesort |  |  1 | SIMPLE      | f     | eq_ref | PRIMARY             | PRIMARY | 4       | diginew.p.forum_id |      1 | Using where                 |  |  1 | SIMPLE      | pt    | eq_ref | PRIMARY             | PRIMARY | 4       | diginew.p.post_id  |      1 | Using where                 |  +----+-------------+-------+--------+---------------------+---------+---------+--------------------+--------+-----------------------------+  3 rows in set (0.00 sec)  
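
A sketch of the full-text indexes themselves (MyISAM supports FULLTEXT). One important caveat: leading-wildcard LIKE '%...%' predicates cannot use these indexes, so the queries would have to be switched to MATCH ... AGAINST to benefit:

ALTER TABLE xps33_bb_posts      ADD FULLTEXT INDEX ft_subject (subject);
ALTER TABLE xps33_bb_posts_text ADD FULLTEXT INDEX ft_post_text (post_text);

-- Usable only via the full-text syntax, e.g.:
-- SELECT post_id FROM xps33_bb_posts_text
-- WHERE MATCH (post_text) AGAINST ('+orvi +gor' IN BOOLEAN MODE);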

How to drop a DB2 instance when the instance owner was removed

Posted: 29 Jun 2013 10:45 AM PDT

This is a real sticky situation. I was handed a machine (running AIX 7.1), and my first task was to re-install the DB2 server on it. But someone before me had conveniently removed the instance owner account, and probably recreated it. Now the problem is this:

1) When I try to uninstall DB2, it says the instance is active and has to be dropped first.

2) When I try to drop the instance, DB2 says there is no such instance.

I am quite new to DB2 administration and not sure how to proceed here. Any help is appreciated.

Thanks

Postgresql could not find pg_xlog

Posted: 29 Jun 2013 08:20 PM PDT

I'm trying to mount the WAL to a different disk. I stopped the server, and did the following:

(I'm using Fabric)

# Move pg_xlog to wals/path.dev/
dest_dir = "/home/ec2-user/%s/wals/%s" % (config['main_dir'], dev.replace('/', '.'))
src_dir = "/var/lib/pgsql/%s/data/pg_xlog" % version

# Create the destination directory
sudo("mkdir %s" % (dest_dir), warn_only=True)

# Mount .fab-pg/wals/path.to.dev/
mount(dev, dest_dir, fs)

# Point it at new origin
sudo("sudo mv %s* %s" % (src_dir, dest_dir))
dest_dir = "%s/pg_xlog" % dest_dir  # pg_xlog was moved into the directory
sudo("ln -s %s %s" % (dest_dir, src_dir))

# Grant postgres permissions
sudo("chown postgres %s" % dest_dir)
sudo("chown postgres %s" % src_dir)

PG Log

LOG:  received smart shutdown request
LOG:  autovacuum launcher shutting down
LOG:  shutting down
LOG:  database system is shut down
LOG:  database system was shut down at 2012-12-13 03:00:46 UTC
FATAL:  required WAL directory "pg_xlog" does not exist
LOG:  startup process (PID 30971) exited with exit code 1
LOG:  aborting startup due to startup process failure

Ls

[ec2-user@ip-10-205-21-74 fab-postgres]$ sudo ls -al /var/lib/pgsql/9.2/data
total 96
drwx------ 14 postgres postgres  4096 Dec 13 03:02 .
drwx------  4 postgres postgres  4096 Dec 12 04:03 ..
drwx------  5 postgres postgres  4096 Dec 12 03:57 base
drwx------  2 postgres postgres  4096 Dec 13 03:00 global
drwx------  2 postgres postgres  4096 Dec 12 03:57 pg_clog
-rw-------  1 postgres postgres  4476 Dec 12 03:57 pg_hba.conf
-rw-------  1 postgres postgres  1636 Dec 12 03:57 pg_ident.conf
drwx------  2 postgres postgres  4096 Dec 13 03:02 pg_log
drwx------  4 postgres postgres  4096 Dec 12 03:57 pg_multixact
drwx------  2 postgres postgres  4096 Dec 13 03:02 pg_notify
drwx------  2 postgres postgres  4096 Dec 12 03:57 pg_serial
drwx------  2 postgres postgres  4096 Dec 12 03:57 pg_snapshots
drwx------  2 postgres postgres  4096 Dec 13 03:00 pg_stat_tmp
drwx------  2 postgres postgres  4096 Dec 12 03:57 pg_subtrans
drwx------  2 postgres postgres  4096 Dec 12 03:57 pg_tblspc
drwx------  2 postgres postgres  4096 Dec 12 04:03 pg_twophase
-rw-------  1 postgres postgres     4 Dec 12 03:57 PG_VERSION
lrwxrwxrwx  1 root     root        30 Dec 13 03:01 pg_xlog -> .fab-pg/wals/.dev.xvdf/pg_xlog
-rw-------  1 postgres postgres 19587 Dec 12 03:57 postgresql.conf
-rw-------  1 postgres postgres    59 Dec 13 03:02 postmaster.opts

ls

[ec2-user@ip-10-205-21-74 fab-postgres]$ sudo ls -al ~/.fab-pg/wals/.dev.xvdf/
total 12
drwxr-xr-x 3 root     root     4096 Dec 13 03:01 .
drwxrwxr-x 3 ec2-user ec2-user 4096 Dec 13 03:01 ..
drwx------ 3 postgres postgres 4096 Dec 12 04:03 pg_xlog

After fixing pg_xlog to point to the correct (absolute) place:

[ec2-user@ip-10-205-21-74 fab-postgres]$ sudo ls -al /var/lib/pgsql/9.2/data
total 96
drwx------ 14 postgres postgres  4096 Dec 13 14:42 .
drwx------  4 postgres postgres  4096 Dec 13 14:38 ..
drwx------  5 postgres postgres  4096 Dec 13 14:38 base
drwx------  2 postgres postgres  4096 Dec 13 14:38 global
drwx------  2 postgres postgres  4096 Dec 13 14:38 pg_clog
-rw-------  1 postgres postgres  4476 Dec 13 14:38 pg_hba.conf
-rw-------  1 postgres postgres  1636 Dec 13 14:38 pg_ident.conf
drwx------  2 postgres postgres  4096 Dec 13 14:42 pg_log
drwx------  4 postgres postgres  4096 Dec 13 14:38 pg_multixact
drwx------  2 postgres postgres  4096 Dec 13 14:42 pg_notify
drwx------  2 postgres postgres  4096 Dec 13 14:38 pg_serial
drwx------  2 postgres postgres  4096 Dec 13 14:38 pg_snapshots
drwx------  2 postgres postgres  4096 Dec 13 14:38 pg_stat_tmp
drwx------  2 postgres postgres  4096 Dec 13 14:38 pg_subtrans
drwx------  2 postgres postgres  4096 Dec 13 14:38 pg_tblspc
drwx------  2 postgres postgres  4096 Dec 13 14:38 pg_twophase
-rw-------  1 postgres postgres     4 Dec 13 14:38 PG_VERSION
lrwxrwxrwx  1 root     root        45 Dec 13 14:40 pg_xlog -> /home/ec2-user/.fab-pg/wals/.dev.xvdf/pg_xlog
-rw-------  1 postgres postgres 19587 Dec 13 14:38 postgresql.conf
-rw-------  1 postgres postgres    59 Dec 13 14:42 postmaster.opts

Mongo replicated shard member not able to recover, stuck in STARTUP2 mode

Posted: 29 Jun 2013 03:45 AM PDT

I have following setup for a sharded replica set in Amazon VPC:

mongo1: 8G RAM Duo core (Primary)

mongo2: 8G RAM Duo core (Secondary)

mongo3: 4G RAM (Arbiter)

Mongo1 is the primary member in the replica set with a 2 shard setup:

mongod --port 27000 --dbpath /mongo/config --configsvr
mongod --port 27001 --dbpath /mongo/shard1 --shardsvr --replSet rssh1
mongod --port 27002 --dbpath /mongo/shard2 --shardsvr --replSet rssh2

Mongo2 is the secondary member in the replica set, mirrors mongo1 exactly:

mongod --port 27000 --dbpath /mongo/config --configsvr
mongod --port 27001 --dbpath /mongo/shard1 --shardsvr --replSet rssh1   # Faulty process
mongod --port 27002 --dbpath /mongo/shard2 --shardsvr --replSet rssh2

Then, for some reason, the 27001 process on mongo2 crashed last week due to an out-of-memory condition (cause unknown). When I discovered the issue (the application still worked, getting data from the primary) and restarted the 27001 process, it was too late for it to catch up with shard1 on mongo1. So I followed 10gen's recommendation:

  • emptied the directory /mongo/shard1
  • restarted the 27001 process using the command

    mongod --port 27001 --dbpath /mongo/shard1 --shardsvr --replSet rssh1  

However, it's been 24+ hours now and the node is still in STARTUP2 mode. I have about 200 GB of data in shard1, and it appears about 160 GB made it over to /mongo/shard1 on mongo2. The following is the replica set status command output (run on mongo2):

rssh1:STARTUP2> rs.status()
{
     "set" : "rssh1",
     "date" : ISODate("2012-10-29T19:28:49Z"),
     "myState" : 5,
     "syncingTo" : "mongo1:27001",
     "members" : [
          {
               "_id" : 1,
               "name" : "mongo1:27001",
               "health" : 1,
               "state" : 1,
               "stateStr" : "PRIMARY",
               "uptime" : 99508,
               "optime" : Timestamp(1351538896000, 3),
               "optimeDate" : ISODate("2012-10-29T19:28:16Z"),
               "lastHeartbeat" : ISODate("2012-10-29T19:28:48Z"),
               "pingMs" : 0
          },
          {
               "_id" : 2,
               "name" : "mongo2:27001",
               "health" : 1,
               "state" : 5,
               "stateStr" : "STARTUP2",
               "uptime" : 99598,
               "optime" : Timestamp(1351442134000, 1),
               "optimeDate" : ISODate("2012-10-28T16:35:34Z"),
               "self" : true
          },
          {
               "_id" : 3,
               "name" : "mongoa:27901",
               "health" : 1,
               "state" : 7,
               "stateStr" : "ARBITER",
               "uptime" : 99508,
               "lastHeartbeat" : ISODate("2012-10-29T19:28:48Z"),
               "pingMs" : 0
          }
     ],
     "ok" : 1
}

It would appear most of the data from the primary was replicated, but not all. The logs show some errors, but I don't know if they're related:

Mon Oct 29 19:39:59 [TTLMonitor] assertion 13436 not master or secondary; cannot currently read from this replSet member ns:config.system.indexes query:{ expireAfterSeconds: { $exists: true } }
Mon Oct 29 19:39:59 [TTLMonitor] problem detected during query over config.system.indexes : { $err: "not master or secondary; cannot currently read from this replSet member", code: 13436 }
Mon Oct 29 19:39:59 [TTLMonitor] ERROR: error processing ttl for db: config 10065 invalid parameter: expected an object ()
Mon Oct 29 19:39:59 [TTLMonitor] assertion 13436 not master or secondary; cannot currently read from this replSet member ns:gf2.system.indexes query:{ expireAfterSeconds: { $exists: true } }
Mon Oct 29 19:39:59 [TTLMonitor] problem detected during query over gf2.system.indexes : { $err: "not master or secondary; cannot currently read from this replSet member", code: 13436 }
Mon Oct 29 19:39:59 [TTLMonitor] ERROR: error processing ttl for db: gf2 10065 invalid parameter: expected an object ()
Mon Oct 29 19:39:59 [TTLMonitor] assertion 13436 not master or secondary; cannot currently read from this replSet member ns:kombu_default.system.indexes query:{ expireAfterSeconds: { $exists: true } }
Mon Oct 29 19:39:59 [TTLMonitor] problem detected during query over kombu_default.system.indexes : { $err: "not master or secondary; cannot currently read from this replSet member", code: 13436 }
Mon Oct 29 19:39:59 [TTLMonitor] ERROR: error processing ttl for db: kombu_default 10065 invalid parameter: expected an object ()

Everything on primary appeared to be fine. No errors in the log.

I tried the steps twice, once with the mongo config server running and once with it down; both gave the same result.

This is a production setup and I really need to get the replica set back up and working; any help is much appreciated.

Ensure correct username when using pg_restore

Posted: 29 Jun 2013 08:45 AM PDT

I have just installed Postgres 9.1.6 on a local Ubuntu server. Now I'm trying to restore a database dump from a database on Heroku. The local database is set up like this:

sudo -u postgres psql -c "create user app_user with password 'pass';"
sudo -u postgres psql -c "create database app_production owner app_user;"

Now, when I try to restore the dump, I use the following command:

pg_restore --verbose --schema=public --no-acl --no-owner --jobs=8 --exit-on-error --username=app_user --dbname=app_production /tmp/app_production.dump  

Now in psql, using \l to see ownership, I get the following:

                                     List of databases
      Name       |   Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------------+-----------+----------+-------------+-------------+-----------------------
 app_production  | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres        | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0       | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                 |           |          |             |             | postgres=CTc/postgres
 template1       | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                 |           |          |             |             | postgres=CTc/postgres

As you can see, the ownership of the app_production database has changed from app_user to postgres. I would have expected the owner of the app_production database to still be app_user, so what am I doing wrong?

BTW, the --schema=public flag was added because I was getting a weird error:

"Could not execute query: ERROR: must be owner of extension plpgsql"

Another thing is that the owner of the dump is the user that the database had on Heroku, which would be something like 'jebf473b73bv73v749b7'.
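
A minimal sketch of one way to repair the ownership after the fact, assuming the only problem is the database-level owner (run as a superuser):

-- Hand the database back to the intended role.
ALTER DATABASE app_production OWNER TO app_user;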

Can I monitor the progress of importing a large .sql file in sqlite3 using zenity --progress?

Posted: 29 Jun 2013 05:45 PM PDT

I'm trying to monitor the progress of a sqlite3 command importing a large .sql file into a database using zenity --progress.

I've tried the following, which imports the file; however, progress is not shown:

sqlite3 DATABASE < import_file.sql | zenity --progress --percentage=0 --auto-close  

I know I need to provide zenity with a numeric source for the progress, but I don't really know how to obtain the number.

Can anyone help me?

What's the difference between a temp table and table variable in SQL Server?

Posted: 29 Jun 2013 01:48 PM PDT

This seems to be an area with quite a few myths and conflicting views.

So what is the difference between a table variable and a local temporary table in SQL Server?
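
As a reference point, a small sketch showing the two constructs side by side. Both are backed by tempdb; the temp table is scoped to the session and supports DDL and statistics after creation, while the table variable is scoped to the batch and is not rolled back with the surrounding transaction:

-- Local temporary table: visible for the rest of the session.
CREATE TABLE #orders_tmp (id INT PRIMARY KEY, total MONEY);
CREATE INDEX ix_total ON #orders_tmp (total);  -- DDL after creation is allowed

-- Table variable: visible only in this batch; indexes must be declared
-- inline as PRIMARY KEY / UNIQUE constraints (through SQL Server 2012).
DECLARE @orders_tv TABLE (id INT PRIMARY KEY, total MONEY);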

