Tuesday, August 13, 2013

[how to] Taking a backup of a streaming WAL slave

Taking a backup of a streaming WAL slave

Posted: 13 Aug 2013 09:14 PM PDT

I have an environment with a pair of PostgreSQL servers set up to do WAL streaming replication.

I also have a test environment and periodically I want to snapshot the production DBMS and restore that onto the test server.

Right now I do:

ID=db-$(date +%F_%T)
pg_basebackup --format=tar --pgdata=- --xlog --gzip --label=$ID

and then restore that archive on the test server.

Unfortunately that really slows things down on the production server, so I'd like to switch to pulling the clone from the replication slave.

But I can't - the error I get is:

pg_basebackup: could not connect to server: FATAL: recovery is still in progress, can't accept WAL streaming connections

What do I need to do to accomplish my goal of not killing performance on the master when I need to clone it?


Running on Ubuntu 12.04 x86_64, postgresql server 9.1.9-0ubuntu12.04

How to keep the structure of the Tablix when there is no data to show

Posted: 13 Aug 2013 06:38 PM PDT

We have an SSRS report which shows the result of a dataset in a tablix. If the query in the dataset returns no data, the header of the tablix gets displayed but not the tablix cells below the header. I know there is a NoRowMessage property, but that is not what we want, because when I specify a message, SSRS shows the message instead of the empty tablix. I must show the tablix structure as it is, but with empty cells.

I even set the expression for each cell value of the tablix, using the IsNothing function, to show a blank ("") if the value is null, but it didn't help.

Do you have any ideas? Thanks for your help in advance.
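One workaround that is sometimes used, sketched here only: if the dataset is T-SQL, have the query itself return a single all-NULL row when the real result is empty, so the tablix always has one (blank) detail row to render. Table and column names below are placeholders, not the real report schema.

-- Hypothetical dataset query; MyTable, Col1, Col2 and @Param stand in for the real ones.
SELECT Col1, Col2
FROM   MyTable
WHERE  SomeFilter = @Param
UNION ALL
SELECT NULL, NULL                                   -- dummy row keeps the tablix structure visible
WHERE  NOT EXISTS (SELECT 1 FROM MyTable WHERE SomeFilter = @Param);

Combined with the IsNothing expressions already in place, the dummy row renders as empty cells rather than a NoRowMessage.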

Database Structure and Design

Posted: 13 Aug 2013 06:49 PM PDT

Okay, my project in simple words: multiple shop sites are hosted on different hosting services, receiving orders from customers. All the orders are to be directed to the main company website, where the employees will receive the orders, complete them, and update the system. So it's two-way traffic of information between the shop sites and the company website.

It's basically a network of content-writing websites where orders will be received and sent to the main writers' lounge hosted on the company website. Initially the plan was to build one centralized database on the company website and remotely connect all the websites to it, BUT there are many factors causing the recent change in plans, such as security, vulnerability, sustainability of each website, and speed, since all the websites would be connecting remotely to the database hosted on the company website. There are almost 20 tables in the database, give or take. And the client does not want to invest in dedicated hosting or anything, hence it has to be different websites on different hosting plans.

I would appreciate any of your professional thoughts on this. Diversify your answer by assuming two situations: 1) all the websites are hosted with different hosting service providers and the read:write ratio is 70:30; 2) all the websites are hosted with the same hosting service provider but on different hosting accounts, not a dedicated VPS, which might help in overcoming the delay issues in remote database queries.

Solutions that we could think of:

  • Host one centralized database on the company website and connect all the sites remotely to it.
  • Host a personal database for each website on its own hosting, with a separate database for the writers hosted on the main company website, and copy the data daily from the multiple databases to the writers' database. We don't think this method is really worth it, though, because it will delay the process and hinder the smoothness of the system.

Understanding multiple indexes with MySQL (and performance)

Posted: 13 Aug 2013 03:50 PM PDT

I have a "phones" table with around 120 000 records, it contains phone_id, phone_number, phone_user_name, phone_last_contacted, etc.

The phone_id column is the index, and its values look like 1, 2, 3, 4 ... 120 000.

But I often update different fields like "phone_last_contacted":

UPDATE phones SET phone_last_contacted = '01-01-2013' WHERE phone_number = '+123456789'

The "slow query" log says that query read many records before finding the right one to update;

Would it improve performance if I made phone_number the index, considering that all phone numbers are unique in this table? If phone_number is indexed, does this mean that MySQL will know how to go directly to the row to update, instead of reading many rows to find the right one? Is that the purpose of an index (or of multiple indexes)?
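For reference, a minimal sketch of what that would look like, assuming phone_number really is unique as described:

-- Add a unique index so the WHERE clause can seek directly to one row.
ALTER TABLE phones ADD UNIQUE INDEX idx_phone_number (phone_number);

-- Check the plan with the equivalent SELECT (EXPLAIN on an UPDATE needs MySQL 5.6+).
EXPLAIN SELECT phone_id
FROM   phones
WHERE  phone_number = '+123456789';

With the index in place, the UPDATE locates one row via the index instead of scanning the table, which is exactly the purpose of an index on the column you filter by.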

What Are the Log File Names?

Posted: 13 Aug 2013 05:33 PM PDT

I moved my database files. When I go to attach the MDF file by itself, SQL Server tells me that it can't find the log files. How do I go about asking the MDF what files it expects before attaching it?

More background information: I have a bunch of SAN backup volumes. I attached them to Windows through iSCSI, and now half the drive letters are messed up. Moreover, multiple volumes should be mapped to the same drive letter, so I can't restore the correct drive letters.

I know that the files are all there, but I don't know how many and which LDF/NDFs I should be attaching with each MDF.
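One commonly cited approach is DBCC CHECKPRIMARYFILE, which reads the file list out of a primary data file without attaching it. It is undocumented, so treat the following as a sketch to test first, and the path is just a placeholder:

-- Option 3 returns the database name and the full set of files the MDF expects.
DBCC CHECKPRIMARYFILE ('D:\Data\MyDatabase.mdf', 3);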

Cannot connect remotely to a named instance

Posted: 13 Aug 2013 01:52 PM PDT

So, I have two servers (ServerA and ServerB) that I've set up with Windows Server 2008 R2 and SQL Server 2008 R2. My goal is to replicate between these two in a failover fashion (with ServerC as a witness instance, but that's for later).

Both servers are configured in the same way: firewall rules added, SQL Server installed with the same settings, networking via TCP/IP and named pipes enabled. I can connect with SSMS to ServerA fine, but I get the following when connecting to ServerB:

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (.Net SqlClient Data Provider)

Now, the instance name is correct and I can successfully connect to ServerA with the instance name (they're the same on both servers). If I leave out the instance name, I get another error saying it just timed out.

The only thing that works is running SSMS on the local machine via shared memory, but obviously that doesn't solve the problem. I feel like I'm missing something obvious. Help?

Detect Convert_Implicit usages

Posted: 13 Aug 2013 12:33 PM PDT

I suspect that my SQL Server is using CONVERT_IMPLICIT a lot, because many of my fields are mapped as string in NHibernate while they are varchar in the database. Is there any way to detect when the server is using the CONVERT_IMPLICIT function?

Thanks
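One way to look for it, sketched here and somewhat expensive to run on a busy server, is to search the cached plans for CONVERT_IMPLICIT:

-- Scan the plan cache for plans containing implicit conversions.
SELECT TOP (50)
       st.text       AS sample_statement,
       qp.query_plan
FROM   sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)  st
WHERE  CAST(qp.query_plan AS nvarchar(max)) LIKE N'%CONVERT_IMPLICIT%';

If the conversions come from NHibernate sending nvarchar parameters against varchar columns, mapping those properties as AnsiString on the NHibernate side is the usual fix.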

How do I avoid duplicate rows using FREETEXT?

Posted: 13 Aug 2013 12:09 PM PDT

I have the following query:

SELECT cat.CategoryId, cat.CategoryName, cat.CategoryImageURL, cat.CategoryImageData,
       ap.AppName, ap.AppImageData, ap.AppDetails, ap.AppId
FROM Category cat, App ap
WHERE FREETEXT (cat.CategoryName, 'App')

How do I avoid duplicate rows in these results?
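The duplicates most likely come from the comma-join between Category and App having no join condition, which produces a cross join. Below is a sketch of a version with an explicit join; it assumes App carries a CategoryId column, which may not match the real schema:

SELECT cat.CategoryId, cat.CategoryName, cat.CategoryImageURL, cat.CategoryImageData,
       ap.AppName, ap.AppImageData, ap.AppDetails, ap.AppId
FROM   Category cat
JOIN   App ap ON ap.CategoryId = cat.CategoryId   -- assumed relationship between the tables
WHERE  FREETEXT (cat.CategoryName, 'App');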

Vacuum settings for mostly append-only data

Posted: 13 Aug 2013 06:18 PM PDT

I have a table with the following characteristics:

  1. We INSERT a few 100k rows each day
  2. We never UPDATE the rows
  3. We DELETE "old" data once a week

From my shallow knowledge of Postgres VACUUM, it seems like this table might be a good candidate for a non-standard VACUUM strategy. A few questions I have...

  1. Apart from the weekly purging of old data, will a VACUUM do anything? Seems to me there is nothing to compact/clean if we're only doing INSERTs.
  2. Would it be a horrible idea to simply disable AUTOVACUUM on that table and manually VACUUM FULL after our weekly DELETE?
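For reference, a minimal sketch of the per-table settings asked about in (2); the table name is a placeholder. A plain VACUUM after the weekly DELETE is usually preferred over VACUUM FULL, since it marks the dead space reusable without rewriting the whole table:

-- Turn off autovacuum for this table only.
ALTER TABLE append_only_table SET (autovacuum_enabled = false);

-- After the weekly DELETE: reclaim dead rows and refresh planner statistics.
VACUUM ANALYZE append_only_table;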

How can I join two tables while filtering the results with a "WHERE IN" clause?

Posted: 13 Aug 2013 04:26 PM PDT

What I am trying to do is set up a selection filter for a cascading list on my site. When I say selection filter, I mean I am trying to limit, and sort the results based on what the user selects.

I have set up the ability for my users to click a button and mark an item as a favorite. That information is stored in "billdata.favoritebills".

Also, for efficiency I have saved vote totals for each bill in billdata.billvotesmeta, which contains the columns billid, totalvotes, yesvotes, presentvotes, and novotes. The primary key is billid.

My database is set up as follows:

billdata.bills

billdata.favoritebills

billdata.billvotesmeta

In this particular filter I am trying to set up, I want to limit the results to only being ones that the user has marked as a favorite and at the same time I want to order them from least to greatest total number of votes.

billdata.favoritebills contains the columns userid and billid; a combination either exists (the bill is marked as a favorite) or is deleted when the bill is unmarked as a favorite.

$testquery = "SELECT * FROM bills INNER JOIN bills ON billdata.billvotesmeta.billid = billdata.bills.id WHERE id IN ( SELECT billid FROM billdata.favoritebills WHERE userid='$_SESSION[userid]' ) ORDER BY DESC billdata.favoritebills.totalvotes";    $result = mysqli_query(dbcxn('bill'),$testquery)       or mysqli_debug();    $fetched = mysqli_fetch_assoc($result);  

The resulting output from PHP is:

Warning: mysqli_debug() expects exactly 1 parameter, 0 given in /opt/lampp/htdocs/unipartisan/includes/billlistcompiler.inc on line 38
Warning: mysqli_fetch_assoc() expects parameter 1 to be mysqli_result, boolean given in /opt/lampp/htdocs/unipartisan/includes/billlistcompiler.inc on line 39
1NULL

What should happen is that it limits the billdata.bills.* selection to only being favorites and then orders them by the joined table's total vote count. The data from billdata.bills is the information used to build the page.

I think the problem is in the WHERE id IN ( SELECT billid FROM billdata.favoritebills WHERE userid='$_SESSION[userid]' ) part; maybe this is not compatible with a query that has a join, as I have not found anywhere else detailing a WHERE IN clause used along with a join.
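A WHERE ... IN subquery is fine alongside a join. The more likely problems are that the query joins bills to itself instead of to billvotesmeta, and that DESC comes before the column in the ORDER BY (totalvotes also lives in billvotesmeta, not favoritebills). A sketch of a corrected query, assuming the column names described above:

SELECT b.*, m.totalvotes
FROM   billdata.bills b
JOIN   billdata.billvotesmeta m ON m.billid = b.id
WHERE  b.id IN (SELECT f.billid
                FROM   billdata.favoritebills f
                WHERE  f.userid = ?)          -- bind the session user id instead of interpolating it
ORDER  BY m.totalvotes ASC;                   -- least to greatest total votes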

Why does MySQL ignore the index even on force for this order by?

Posted: 13 Aug 2013 08:41 PM PDT

I run an EXPLAIN:

mysql> explain select last_name from employees order by last_name;
+----+-------------+-----------+------+---------------+------+---------+------+-------+----------------+
| id | select_type | table     | type | possible_keys | key  | key_len | ref  | rows  | Extra          |
+----+-------------+-----------+------+---------------+------+---------+------+-------+----------------+
|  1 | SIMPLE      | employees | ALL  | NULL          | NULL | NULL    | NULL | 10031 | Using filesort |
+----+-------------+-----------+------+---------------+------+---------+------+-------+----------------+
1 row in set (0.00 sec)

The indexes in my table:

mysql> show index from employees;
+-----------+------------+---------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table     | Non_unique | Key_name      | Seq_in_index | Column_name   | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-----------+------------+---------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| employees |          0 | PRIMARY       |            1 | subsidiary_id | A         |           6 |     NULL | NULL   |      | BTREE      |         |               |
| employees |          0 | PRIMARY       |            2 | employee_id   | A         |       10031 |     NULL | NULL   |      | BTREE      |         |               |
| employees |          1 | idx_last_name |            1 | last_name     | A         |       10031 |      700 | NULL   |      | BTREE      |         |               |
| employees |          1 | date_of_birth |            1 | date_of_birth | A         |       10031 |     NULL | NULL   | YES  | BTREE      |         |               |
| employees |          1 | date_of_birth |            2 | subsidiary_id | A         |       10031 |     NULL | NULL   |      | BTREE      |         |               |
+-----------+------------+---------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
5 rows in set (0.02 sec)

There is an index on last_name but the optimizer does not use it.
So I do:

mysql> explain select last_name from employees force index(idx_last_name) order by last_name;
+----+-------------+-----------+------+---------------+------+---------+------+-------+----------------+
| id | select_type | table     | type | possible_keys | key  | key_len | ref  | rows  | Extra          |
+----+-------------+-----------+------+---------------+------+---------+------+-------+----------------+
|  1 | SIMPLE      | employees | ALL  | NULL          | NULL | NULL    | NULL | 10031 | Using filesort |
+----+-------------+-----------+------+---------------+------+---------+------+-------+----------------+
1 row in set (0.00 sec)

But still the index is not used! What am I doing wrong here?
Does it have to do with the fact that the index is NON_UNIQUE? By the way, last_name is VARCHAR(1000).

Update requested by @RolandoMySQLDBA

mysql> SELECT COUNT(DISTINCT last_name) DistinctCount FROM employees;
+---------------+
| DistinctCount |
+---------------+
|         10000 |
+---------------+
1 row in set (0.05 sec)

mysql> SELECT COUNT(1) FROM (SELECT COUNT(1) Count500,last_name FROM employees GROUP BY last_name HAVING COUNT(1) > 500) A;
+----------+
| COUNT(1) |
+----------+
|        0 |
+----------+
1 row in set (0.15 sec)

Issue adding shared data source (.rsds) to SSRS project

Posted: 13 Aug 2013 11:01 AM PDT

I'm trying to add a shared data source to an SSRS project. The rsds is stored online on our intranet. I right-click on Shared Data Sources in the Solution Explorer and select Add -> Existing Item. I enter the URL in the box that pops up, find the .rsds file I want to add to the set of Shared Data Sources, and add it. Instead of adding the file to Shared Data Sources, however, it gets added to the Reports folder in Solution Explorer and isn't available to use as a data source. This is my first time using a .rsds like this, but a coworker who has done it before was just as confused as I am.

Is there something I'm doing wrong, or is there a glitch in the system somewhere along the way?

Why isn't the rows value of an EXPLAIN accurate?

Posted: 13 Aug 2013 11:22 AM PDT

I have the following table:

mysql> select count(*) from employees;
+----------+
| count(*) |
+----------+
|    10000 |
+----------+
1 row in set (0.03 sec)

I do an EXPLAIN: mysql> explain select last_name from employees order by last_name;

+----+-------------+-----------+------+---------------+------+---------+------+------+----------------+
| id | select_type | table     | type | possible_keys | key  | key_len | ref  | rows | Extra          |
+----+-------------+-----------+------+---------------+------+---------+------+------+----------------+
|  1 | SIMPLE      | employees | ALL  | NULL          | NULL | NULL    | NULL | 9894 | Using filesort |
+----+-------------+-----------+------+---------------+------+---------+------+------+----------------+
1 row in set (0.00 sec)

The rows are 9894. I was expecting 10000.
I do:

mysql> analyze table employees;
+---------------------+---------+----------+----------+
| Table               | Op      | Msg_type | Msg_text |
+---------------------+---------+----------+----------+
| sql_dummy.employees | analyze | status   | OK       |
+---------------------+---------+----------+----------+
1 row in set (0.04 sec)

and re-run EXPLAIN:

mysql> explain select last_name from employees order by last_name;

+----+-------------+-----------+------+---------------+------+---------+------+-------+----------------+
| id | select_type | table     | type | possible_keys | key  | key_len | ref  | rows  | Extra          |
+----+-------------+-----------+------+---------------+------+---------+------+-------+----------------+
|  1 | SIMPLE      | employees | ALL  | NULL          | NULL | NULL    | NULL | 10031 | Using filesort |
+----+-------------+-----------+------+---------------+------+---------+------+-------+----------------+
1 row in set (0.01 sec)

The rows are now 10031.

Does anyone know why the rows value is never exactly 10000? I have noticed this in other cases as well.
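If the table is InnoDB, the rows column is only an estimate taken from sampled index statistics, not an exact count, which is why it drifts around the true value and changes after ANALYZE TABLE. The same kind of approximation shows up elsewhere, for example:

-- The Rows column here is the same sampled estimate for InnoDB tables.
SHOW TABLE STATUS LIKE 'employees';

-- An exact figure always requires actually counting the rows.
SELECT COUNT(*) FROM employees;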

Why is DBMS_SCHEDULER giving me an error?

Posted: 13 Aug 2013 11:27 AM PDT

I am trying to create a job with DBMS_SCHEDULER using this on Oracle 11g:

BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'RUN_ACCTUSE',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'declare
  in_clause varchar2(2560);
  sel_query varchar2(2560);
  n number := 0;
begin
  for x in (select distinct userfieldcd from acctuserfield)
  loop
      if n <> 0 then
           in_clause := in_clause || ', ';
      end if;
      in_clause := in_clause ||  '''' || x.userfieldcd || '''';
      n := 1;
  end loop;
  sel_query := 'select * from (select Acctnbr, userfieldcd, value from acctuserfield) pivot (max(value) for userfieldcd in ('||in_clause||'))';
  dbms_output.put_line (sel_query);
  execute immediate 'create or replace view VIEW_ACCTUSE as ' ||sel_query;
end;',
    start_date      => '14-AUG-13 5.00.00AM US/Pacific'
    repeat_interval => 'freq=daily',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'Update VIEW_ACCTUSE.');
END;
/

This is the error I get:
ORA-06550: line 19, column 19: PLS-00103: Encountered the symbol "SELECT" when expecting one of the following:

) , * & = - + < / > at in is mod remainder not rem =>
<> or != or ~= >= <= <> and or like like2
like4 likec between || multiset member submultiset
ORA-06550: line 19, column 95:
PLS-00103: Encountered the symbol "(" when expecting one of the following:

, ; for group having intersect minus order start union where
connect

I don't understand why that is.
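The error most likely comes from the single quotes inside the job_action string: each embedded quote ends the outer literal early, so the parser sees the inner SELECT as raw PL/SQL. There is also a missing comma after the start_date line. A sketch of the same call using Oracle's alternative quoting syntax, with the job body shortened to a placeholder:

BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'RUN_ACCTUSE',
    job_type        => 'PLSQL_BLOCK',
    -- q'[ ... ]' lets the block keep its own single quotes without doubling them.
    job_action      => q'[
      declare
        sel_query varchar2(2560);
      begin
        sel_query := 'select * from acctuserfield';   -- placeholder for the real pivot logic
        execute immediate 'create or replace view VIEW_ACCTUSE as ' || sel_query;
      end;
    ]',
    start_date      => SYSTIMESTAMP,        -- note the comma the original is missing here
    repeat_interval => 'freq=daily',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'Update VIEW_ACCTUSE.');
END;
/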

Query Optimization for added value

Posted: 13 Aug 2013 10:30 AM PDT

I have a query that's behaving a bit oddly. In my database I have a table called "records". It tells me a bunch of information about what applications a user ran on my company's machines. I'm trying to aggregate some statistics, but am having some odd issues with a query.

This query runs in about 6.5 minutes (~30 million entries in "records"). I would expect it to take longer when divisionName isn't specified, but it seems to be taking an unreasonable amount of time to finish (overnight and still chugging).

select divisionName, programName, count(usageID)
    from records R
    right join Programs P
        on P.programID=R.usageProgramID
    right join locate L
        on L.computerID=R.usageComputerID
    where divisionName="umbrella"
    group by programName
    order by programName asc
    INTO OUTFILE '/tmp/lab_prog_umbrella.csv'
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n';

Is there an alternate structure to speed up the query? I have an index on (computerID,divisionName) in locate and (programID,programName) in Programs as well as a multitude of indexes in records.

Note: Programs contains 4 fields and locate contains 2. I don't think the joins are exceptionally large.
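If the outer-join behavior (keeping programs or locations that have no usage rows) is not actually needed, a plain inner-join version with indexes on the join columns of records is a reasonable thing to test. A sketch under those assumptions, keeping the same column names:

-- Possible supporting indexes on the big table (names are illustrative):
-- ALTER TABLE records ADD INDEX idx_usage_prog (usageProgramID);
-- ALTER TABLE records ADD INDEX idx_usage_comp (usageComputerID);

SELECT L.divisionName, P.programName, COUNT(R.usageID) AS launches
FROM   records R
JOIN   Programs P ON P.programID  = R.usageProgramID
JOIN   locate   L ON L.computerID = R.usageComputerID
GROUP  BY L.divisionName, P.programName
ORDER  BY P.programName ASC;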

How do I delete only related records in a multi-key MERGE in SQL Server?

Posted: 13 Aug 2013 01:37 PM PDT

Suppose you have something like this:

Source table (variable):

Values (
    LeftId INT NOT NULL,
    RightId INT NOT NULL,
    CustomValue varchar(100) NULL
)

Target table:

Mapping (
    LeftId INT NOT NULL,
    RightId INT NOT NULL,
    CustomValue varchar(100) NULL
)

I want to merge Values into Target with the following rules:

  • Match on source.LeftId = target.LeftId AND source.RightId = target.RightId
    • when matched in target, update CustomValue
    • when not matched in target, insert
  • Delete any unmatched rows in the target that do match a LeftId in the source, i.e. only delete records that relate to the LeftIds of what I'm merging.

(That last rule is hard to describe, sorry!)

For instance:

Source:

1, 10, foo
1, 11, foo

Target:

1, 10, bar
1, 12, foo
2, 20, car

Merge result:

Result Target:

1, 10, foo (updated)
1, 11, foo (inserted)
1, 12, foo (deleted)
2, 20, car (unchanged)

So...

Here's what I have so far, which takes care of update and insert:

MERGE Mapping AS target
USING (SELECT LeftId, RightId, CustomValue FROM @Values)
   AS source (LeftId, RightId, CustomValue)
   ON target.LeftId = source.LeftId
      AND target.RightId = source.RightId
WHEN NOT MATCHED THEN
  INSERT (LeftId, RightId, CustomValue)
  VALUES (source.LeftId, source.RightId, source.CustomValue)
WHEN MATCHED THEN
  UPDATE SET
    CustomValue = source.CustomValue;

How do I do the delete part of my rule?
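One common pattern, sketched here against the tables above, is to restrict the target through a CTE so that WHEN NOT MATCHED BY SOURCE only ever sees rows whose LeftId appears in the source:

WITH target AS
(
    SELECT m.LeftId, m.RightId, m.CustomValue
    FROM   Mapping m
    WHERE  m.LeftId IN (SELECT v.LeftId FROM @Values v)   -- limits the delete scope
)
MERGE target
USING @Values AS source
    ON  target.LeftId  = source.LeftId
    AND target.RightId = source.RightId
WHEN MATCHED THEN
    UPDATE SET CustomValue = source.CustomValue
WHEN NOT MATCHED BY TARGET THEN
    INSERT (LeftId, RightId, CustomValue)
    VALUES (source.LeftId, source.RightId, source.CustomValue)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;

Because the CTE filters the target to the LeftIds being merged, the final DELETE clause only removes rows like (1, 12) in the example and leaves (2, 20) untouched.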

Oracle impdp - Importing file from remote server

Posted: 13 Aug 2013 01:13 PM PDT

Am I missing something, or is it not possible to import a dump (.dmp) file using impdp to a database on another server other than where it was created? All of my investigations lead me to believe that this cannot be done: the dump file needs to reside locally on the server where the data is to be imported, or be NFS mounted so it appears to be local. This seems to be a capability that the old "exp/imp" utilities used to have, but which no longer exists. I know you can move data using impdp and the REMOTE_LINK option, but in order to use this, the data must physically reside in a schema within the database instance on the remote side for it to be copied. It can't reside in a dump/exported file.

For example: I have Server "A" and Server "B", each with an Oracle instance on it. On "A", I perform an export using expdp of schema "TESTDATA" to a dump file named "testdata.dmp", which is stored on "A". At some point in the future I would like to restore the contents of the "TESTDATA" dump file (testdata.dmp) to a new schema (TESTDATA2) on server "B".

At this point, is it true that my only options are to:

  1. Copy testdata.dmp to server "B" and perform an import directly on server "B"
  2. NFS mount the directory containing testdata.dmp on server "A" from server "B" so the dmp file appears local to server "B" and then perform the import.
  3. Create a temporary schema (TMPSCHEMA) on server "A", import the test.dmp file to the temporary schema using the REMAP_SCHEMA option in impdp, and then perform an impdp on server "B" using the REMOTE_LINK option pulling the data from TMPSCHEMA on "A"

Either I'm missing something here, or Oracle left a huge gap in functionality between impdp and imp.

Can I give the same name to a table and a stored procedure?

Posted: 13 Aug 2013 12:13 PM PDT

Is it possible to name a stored procedure the same as a table? I have an existing table named 'buyers', and I'm trying to create a stored procedure named 'buyers', but I'm getting this error:

There is already an object named 'buyers' in the database.
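Tables and stored procedures share one namespace per schema (both are rows in sys.objects), so a table and a procedure in the same schema cannot share a name. A sketch of the usual workarounds: put the procedure in a different schema, or give it a prefixed name (the schema name here is illustrative):

CREATE SCHEMA api;
GO
-- dbo.buyers (the table) and api.buyers (the procedure) can coexist.
CREATE PROCEDURE api.buyers
AS
BEGIN
    SELECT * FROM dbo.buyers;
END;
GO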

Transactional Replication Failing Suddenly - Distributor Error

Posted: 13 Aug 2013 02:11 PM PDT

I have two databases (both SQL Server 2008 R2) taking part in transactional replication. This exists to pass data to a customer of ours, so I am troubleshooting from the Publisher side.

Replication has been working fine for months, then all of a sudden (about a week ago) the customer reported that data is no longer showing up in their subscription tables. So - I started troubleshooting and found this:

Replication Error

(the error = The process could not execute '{call sp_MSget_subscription_guid(20)}' on ''.)

It seems that the distributor has failed for this subscription.

My questions (I am relatively new to replication and have been poring through documentation):

  • How can I clear this up and get data flowing again?
    • My initial thought is to reinitialize the subscription, but the only snapshot that exists is the initial snapshot taken when replication began. I get the sense that reinitializing will just reapply that snapshot and force all data back down again.
  • Can I just restart the Distributor Agent?
    • From what I am reading, this may be a viable option - but I can't find the agent anywhere in SSMS (although I am sure that is due to my inexperience with replication).

Note: This is the only subscription we have.

Update: Query result (mentioned in the comments below)

SymmetricDS fails to start for PostgreSQL 9.2

Posted: 13 Aug 2013 06:09 PM PDT

I'm trying to get SymmetricDS up and running with PostgreSQL. I've followed the tutorial (almost) exactly. (I have not set up a separate node yet since, for my purposes, I need that separate node to be truly separate and on a different VM.) Unfortunately, I am not able to get the database import step to function, as SymmetricDS will not connect to the database.

Following advice from Connecting to local instance of PostgreSQL from JDBC, I ensured that the second non-comment line in pg_hba.conf was sensible; PostgreSQL will (should) accept all connections made over TCP/IP on lo using client-side ident authentication. (The linked answer does call for md5 as opposed to ident; this had no visible effect and, according to the stack trace, is probably not what JDBC is expecting.)

I've ensured that symmetricds is a system user and is a user registered with PostgreSQL. If memory serves, I did this with something like

ADD USER symmetricds WITH PASSWORD sds-pass;
GRANT ALL PRIVILEGES TO test FOR USER symmetricds;

(or something to this effect? I'm very new to databases.) If I had to guess where I went wrong, it'd be here.
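For reference, a sketch of the standard PostgreSQL spelling of what those two statements seem to intend (the statements above are not valid syntax as written):

-- Create the role with a password (the password is a quoted string literal).
CREATE USER symmetricds WITH PASSWORD 'sds-pass';

-- Let it use the test database.
GRANT ALL PRIVILEGES ON DATABASE test TO symmetricds;

Note also that the stack trace below reports "Ident authentication failed", which points at the pg_hba.conf method rather than the grants: password (md5) authentication for the relevant host line, followed by a reload, is what a JDBC connection with db.user/db.password normally expects.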

I edited the engine file corp-000.properties to use the PostgreSQL versions of connection details (the file comes set for MySQL) and I filled in the appropriate credentials.


As far as I know, this is all that is needed to get SymmetricDS up and running (at least for the import step). Obviously, something went wrong; a stack trace is included below. What did I miss?

Shell log:

[root@dbrepa samples]# cat /var/lib/pgsql/9.2/data/pg_hba.conf  # PostgreSQL Client Authentication Configuration File  # ===================================================  # ...    # TYPE  DATABASE        USER            ADDRESS                 METHOD    # "local" is for Unix domain socket connections only  local   all             all                                     peer  # IPv4 local connections:  host    all             all             127.0.0.1/32            ident  # IPv6 local connections:  host    all             all             ::1/128                 ident  # Allow replication connections from localhost, by a user with the  # replication privilege.  #local   replication     postgres                                peer  #host    replication     postgres        127.0.0.1/32            ident  #host    replication     postgres        ::1/128                 ident      [root@dbrepa samples]# grep symmetricds /etc/passwd  symmetricds:x:501:501::/home/symmetricds:/bin/bash      [root@dbrepa samples]# service psql start  Starting psql service:                                     [  OK  ]      [root@dbrepa samples]# su - symmetricds  [symmetricds@dbrepa ~]$ psql test  psql (9.2.4)  Type "help" for help.    test=> \l                                     List of databases     Name    |  Owner   | Encoding |   Collate   |    Ctype    |    Access privileges  -----------+----------+----------+-------------+-------------+--------------------------   postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |   template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres             +             |          |          |             |             | postgres=CTc/postgres   template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres             +             |          |          |             |             | postgres=CTc/postgres   test      | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres            +             |          |          |             |             | postgres=CTc/postgres   +             |          |          |             |             | symmetricds=CTc/postgres  (4 rows)    test=> \q  [symmetricds@dbrepa ~]$ exit  logout      [root@dbrepa samples]# cat ../engines/corp-000.properties  #  # Licensed to JumpMind Inc under one or more contributor  # ...  #  # You should have received a copy of the GNU General Public License,  # version 3.0 (GPLv3) along with this library; if not, see  # <http://www.gnu.org/licenses/>.  #  # ...  #    engine.name=corp-000    # The class name for the JDBC Driver  db.driver=org.postgresql.Driver    # The JDBC URL used to connect to the database  db.url=jdbc:postgresql://localhost/corp?stringtype=unspecified    # The user to login as who can create and update tables  db.user=symmetricds    # The password for the user to login as  db.password=sds-pass    registration.url=  sync.url=http://localhost:8080/sync/corp-000    # Do not change these for running the demo  group.id=corp  external.id=000    # Don't muddy the waters with purge logging  job.purge.period.time.ms=7200000    # This is how often the routing job will be run in milliseconds  job.routing.period.time.ms=5000  # This is how often the push job will be run.  job.push.period.time.ms=10000  # This is how often the pull job will be run.  
job.pull.period.time.ms=10000      [root@dbrepa samples]# ../bin/dbimport --engine corp-000 --format XML create_sample.xml  Log output will be written to ../logs/symmetric.log  [] - AbstractCommandLauncher - Option: name=engine, value={corp-000}  [] - AbstractCommandLauncher - Option: name=format, value={XML}  -------------------------------------------------------------------------------  An exception occurred.  Please see the following for details:  -------------------------------------------------------------------------------  org.postgresql.util.PSQLException: FATAL: Ident authentication failed for user "symmetricds"          at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:398)          at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:173)          at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:64)          at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:136)          at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:29)          at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:21)          at org.postgresql.jdbc4.AbstractJdbc4Connection.<init>(AbstractJdbc4Connection.java:31)          at org.postgresql.jdbc4.Jdbc4Connection.<init>(Jdbc4Connection.java:24)          at org.postgresql.Driver.makeConnection(Driver.java:393)          at org.postgresql.Driver.connect(Driver.java:267)          at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)          at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)          at org.apache.commons.dbcp.BasicDataSource.validateConnectionFactory(BasicDataSource.java:1556)          at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1545)   [wrapped] org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (FATAL: Ident authentication failed for user "symmetricds")          at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1549)          at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1388)          at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)          at org.jumpmind.symmetric.AbstractCommandLauncher.testConnection(AbstractCommandLauncher.java:325)   [wrapped] java.lang.RuntimeException: org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (FATAL: Ident authentication failed for user "symmetricds")          at org.jumpmind.symmetric.AbstractCommandLauncher.testConnection(AbstractCommandLauncher.java:329)          at org.jumpmind.symmetric.AbstractCommandLauncher.getDatabasePlatform(AbstractCommandLauncher.java:336)          at org.jumpmind.symmetric.DbImportCommand.executeWithOptions(DbImportCommand.java:113)          at org.jumpmind.symmetric.AbstractCommandLauncher.execute(AbstractCommandLauncher.java:130)          at org.jumpmind.symmetric.DbImportCommand.main(DbImportCommand.java:72)  -------------------------------------------------------------------------------  

Some more resources as I find them in my sea of open tabs:

Large table and poor performance - what to do next?

Posted: 13 Aug 2013 09:08 PM PDT

We have a Windows 2008 R2 / SQL Server 2008 R2 (Standard) server that is used to host a single database. The database itself mainly consists of a single large table containing both live and historical data. The table is currently 101 million rows of 35 columns, and growing at the rate of around 250,000 rows a day. Splitting the table into smaller tables unfortunately isn't really an option due to a mass of legacy code.

The database itself (around 100Gb) is held as a single file on a single SSD drive. The server has another two 10K SAS disks used for the boot OS, paging etc, and the server has 22Gb of RAM.

Although everything's running fine, we have a few dozen users who need to query the data in this table. We have limited control over what these queries do: sometimes it's pulling a few hundred rows from yesterday, at other times it's tens of thousands of rows from 6 months ago. 99.9% of the activity is reading rows; there is very little writing apart from the live data being INSERTed throughout the day. At peak times, simple queries that return a lot of data can take half an hour or more to complete.

We have indexes in place that are helping, but the ultimate bottleneck appears to be disk I/O. The SSD in place isn't the fastest, and as a result we're looking at retrofitting a RAID1+0 array of high-end SSD drives to increase performance (we've checked the array card can handle the throughput).

Assuming we have this array in place, what is the best plan to increase read throughput to this database? If we have a super-fast SSD array, is that enough? Or would partitioning the database into separate files on separate logical drives be a better idea, even though they're essentially destined for the same disks? Similarly, would splitting database and log files across logical drives in the same array make any difference?
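If you do end up splitting the database across logical drives, the mechanics are just extra files and filegroups; whether that helps on an array that is already striping is exactly the open question above. A sketch, with database name, paths and sizes as placeholders:

ALTER DATABASE BigDb ADD FILEGROUP FG_History;

ALTER DATABASE BigDb
ADD FILE
(
    NAME = N'BigDb_History1',
    FILENAME = N'E:\Data\BigDb_History1.ndf',
    SIZE = 50GB,
    FILEGROWTH = 5GB
)
TO FILEGROUP FG_History;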

Drop a SQL Server database

Posted: 13 Aug 2013 10:03 AM PDT

It is taking too long to drop a 500 GB+ SQL Server database. My question is: in what ways can we drop a database other than through the GUI or T-SQL? Would deleting the .mdf and .ldf files from the server help? I can detach the database and delete the mdf and ldf files. I just want to know the fastest way to drop it. It took me close to 11 hours without any results, so I killed it, and I was hoping someone might know how to do it.
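A drop that runs for hours is usually waiting on open connections or rolling back in-flight transactions rather than on the file size (the drop itself just deallocates the files). A common sketch is to force the database into single-user mode first; the database name is a placeholder:

ALTER DATABASE BigOldDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE BigOldDb;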

Help with Postgres 9.1 data import (is 4x expected data size, 4x slower than MySQL, and index still needed)

Posted: 13 Aug 2013 01:33 PM PDT

I have 25 billion <int, int, float> rows that I'm trying to import into Postgres, and after 77% of the data being imported, the Postgres data folder is taking up 840GB, about 4x the storage requirements for that many rows at 12 bytes each. Additionally, importing is taking 4x longer than my same import on MySQL (as described in MySQL MyISAM index causes query to match no rows; indexes disabled, rows match).

Here are my commands:

mydb=# CREATE TABLE mytable (id1 int, id2 int, score float)
$ psql mydb -c "COPY mytable (id1, id2, score) FROM 'file_000'"
$ psql ...
$ psql mydb -c "COPY mytable (id1, id2, score) FROM 'file_099'" # 100 files total

I'm running Postgres 9.1. There are no other tables in the database. This is not a production environment. The files are TSV text files. The only output from each COPY command is something like "COPY 256448118" -- at least until I ran out of disk space.

Am I doing something wrong here, or is this the expected behavior?

Subquestion 1: Where is this extra storage overhead coming from, and can I avoid it?

  • Update: It looks like there is a HeapTupleHeader of 23 bytes on each row, so that probably explains this overhead (source: StackOverflow post). Any way to avoid this?

Subquestion 2: If storage requirements are indeed 4x that of expected size, can I speed up importing (i.e. with some configuration change, etc)?

Subquestion 3: I need an index on id1, so what will storage requirements be for that during and after creation (I planned to run CREATE INDEX id1x ON mytable (id1))?
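On subquestion 3, building the index after the load rather than before is the usual approach, and giving the build more memory speeds up the sort it performs; a minimal sketch (the memory figure is just an example, session-level setting):

-- More memory for the sort that CREATE INDEX performs.
SET maintenance_work_mem = '1GB';

CREATE INDEX id1x ON mytable (id1);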

Import from incremental backups to a new host in Oracle 11g

Posted: 13 Aug 2013 01:31 PM PDT

I am using Oracle 11g. I would like to know whether it is possible to import from incremental level 0 and level 1 backups to a new host using RMAN. If yes, how can I do that?

For level 1 I am using differential method.

Waiting for response from Postgres server

Posted: 13 Aug 2013 09:44 AM PDT

I have a problem when querying from an application (Java) to PostgreSQL. Here is my query list (in one user session):

select count (id) from tb_a;
select count (id) from tb_a;
select nextval ('cssq_01');
select nextval ('cssq_02');

insert into tb_a;
insert into tb_b;

insert into tb_c;
insert into tb_d;
insert into tb_e;

select id, name from tb_a join tb_e;

insert / update ...

In my case, I have to insert 500k rows into the database, and when the row count reaches ~64k I end up waiting a long time (30-60 seconds). I checked the Postgres server log file; it's empty. The cause may be server config or schema (constraints, indexes...). So far I have no solution.

Please help me. Many thanks.

P.S.: Here is my detailed query log (you can copy & paste to view) - thanks:

START_TIME = 1370253312380    Hibernate: select count(lttbdabfin0_.id) as col_0_0_ from public.lttb_dab_financial_trans lttbdabfin0_ where lttbdabfin0_.identify_number=? and lttbdabfin0_.cif_num=? and lttbdabfin0_.account_num=? and lttbdabfin0_.trans_date=? and lttbdabfin0_.branch_code=? and lttbdabfin0_.events_id=? limit ?    Hibernate: select cstbevents0_.id as id8_0_, cstbevents0_.auth_stat as auth2_8_0_, cstbevents0_.checker_dt_stamp as checker3_8_0_, cstbevents0_.checker_id as checker4_8_0_, cstbevents0_.rules_id as rules10_8_0_, cstbevents0_.services_id as services11_8_0_, cstbevents0_.maker_dt_stamp as maker5_8_0_, cstbevents0_.maker_id as maker6_8_0_, cstbevents0_.mod_no as mod7_8_0_, cstbevents0_.name as name8_0_, cstbevents0_.record_stat as record9_8_0_ from public.cstb_events cstbevents0_ where cstbevents0_.id=?    INFO : com.dtsc.loyalty.logger.query -  time="03/06/2013 16:55:12" user_ip="-" username="-" business_unit="-" screen="-" function="-" object_name="-" object_id="-" action="com.dtsc.loyalty.core.query.service.EventQueryServiceImpl.getEvent" result="SUCCESS"    Hibernate: select tdtbinputp0_.id as id34_, tdtbinputp0_.auth_stat as auth2_34_, tdtbinputp0_.checker_dt_stamp as checker3_34_, tdtbinputp0_.checker_id as checker4_34_, tdtbinputp0_.code as code34_, tdtbinputp0_.program_list_id as program12_34_, tdtbinputp0_.maker_dt_stamp as maker6_34_, tdtbinputp0_.maker_id as maker7_34_, tdtbinputp0_.mod_no as mod8_34_, tdtbinputp0_.record_stat as record9_34_, tdtbinputp0_.unit_name as unit10_34_, tdtbinputp0_.value as value34_ from public.tdtb_input_param_values tdtbinputp0_ cross join public.lttm_program_list lttmprogra1_ where lower(lttmprogra1_.code)=? and tdtbinputp0_.program_list_id=lttmprogra1_.id and lower(tdtbinputp0_.code)=? and tdtbinputp0_.maker_dt_stamp=(select max(tdtbinputp2_.maker_dt_stamp) from public.tdtb_input_param_values tdtbinputp2_ cross join public.lttm_program_list lttmprogra3_ where lower(lttmprogra3_.code)=? and tdtbinputp2_.program_list_id=lttmprogra3_.id and lower(tdtbinputp2_.code)=?) limit ?  Hibernate: select cstbrules0_.id as id12_0_, cstbrules0_.auth_stat as auth2_12_0_, cstbrules0_.checker_dt_stamp as checker3_12_0_, cstbrules0_.checker_id as checker4_12_0_, cstbrules0_.description as descript5_12_0_, cstbrules0_.expression as expression12_0_, cstbrules0_.maker_dt_stamp as maker7_12_0_, cstbrules0_.maker_id as maker8_12_0_, cstbrules0_.mod_no as mod9_12_0_, cstbrules0_.name as name12_0_, cstbrules0_.record_stat as record11_12_0_ from public.cstb_rules cstbrules0_ where cstbrules0_.id=?    Hibernate: select nextval ('public.cslt_dabfinancialtrans')  H  ibernate: select cstbelemen0_.rules_id as rules11_12_1_, cstbelemen0_.id as id1_, cstbelemen0_.id as id6_0_, cstbelemen0_.auth_stat as auth2_6_0_, cstbelemen0_.checker_dt_stamp as checker3_6_0_, cstbelemen0_.checker_id as checker4_6_0_, cstbelemen0_.elements_id as elements10_6_0_, cstbelemen0_.rules_id as rules11_6_0_, cstbelemen0_.is_out as is5_6_0_, cstbelemen0_.maker_dt_stamp as maker6_6_0_, cstbelemen0_.maker_id as maker7_6_0_, cstbelemen0_.mod_no as mod8_6_0_, cstbelemen0_.record_stat as record9_6_0_ from public.cstb_element_rule cstbelemen0_ where cstbelemen0_.rules_id=?  
Hibernate: select cstbelemen0_.id as id7_0_, cstbelemen0_.auth_stat as auth2_7_0_, cstbelemen0_.checker_dt_stamp as checker3_7_0_, cstbelemen0_.checker_id as checker4_7_0_, cstbelemen0_.maker_dt_stamp as maker5_7_0_, cstbelemen0_.maker_id as maker6_7_0_, cstbelemen0_.mod_no as mod7_7_0_, cstbelemen0_.name as name7_0_, cstbelemen0_.record_stat as record9_7_0_ from public.cstb_elements cstbelemen0_ where cstbelemen0_.id=?  Hibernate: select nextval ('public.cslt_elementvalue')  Hibernate: select cstbelemen0_.id as id7_0_, cstbelemen0_.auth_stat as auth2_7_0_, cstbelemen0_.checker_dt_stamp as checker3_7_0_, cstbelemen0_.checker_id as checker4_7_0_, cstbelemen0_.maker_dt_stamp as maker5_7_0_, cstbelemen0_.maker_id as maker6_7_0_, cstbelemen0_.mod_no as mod7_7_0_, cstbelemen0_.name as name7_0_, cstbelemen0_.record_stat as record9_7_0_ from public.cstb_elements cstbelemen0_ where cstbelemen0_.id=?    Hibernate: select nextval ('public.cslt_elementvalue')    Hibernate: select cstbelemen0_.id as id7_0_, cstbelemen0_.auth_stat as auth2_7_0_, cstbelemen0_.checker_dt_stamp as checker3_7_0_, cstbelemen0_.checker_id as checker4_7_0_, cstbelemen0_.maker_dt_stamp as maker5_7_0_, cstbelemen0_.maker_id as maker6_7_0_, cstbelemen0_.mod_no as mod7_7_0_, cstbelemen0_.name as name7_0_, cstbelemen0_.record_stat as record9_7_0_ from public.cstb_elements cstbelemen0_ where cstbelemen0_.id=?  Hibernate: select nextval ('public.cslt_elementvalue')  Hibernate: select cstmidenti0_.id as id19_, cstmidenti0_.auth_stat as auth2_19_, cstmidenti0_.can_search as can3_19_, cstmidenti0_.checker_dt_stamp as checker4_19_, cstmidenti0_.checker_id as checker5_19_, cstmidenti0_.code as code19_, cstmidenti0_.maker_dt_stamp as maker7_19_, cstmidenti0_.maker_id as maker8_19_, cstmidenti0_.mod_no as mod9_19_, cstmidenti0_.name as name19_, cstmidenti0_.record_stat as record11_19_ from public.cstm_identifier_type cstmidenti0_ where cstmidenti0_.code=? limit ?    INFO : com.dtsc.loyalty.logger.query -  time="03/06/2013 16:55:12" user_ip="-" username="-" business_unit="-" screen="-" function="-" object_name="-" object_id="-" action="com.dtsc.loyalty.core.query.service.IdentifierTypeQueryServiceImpl.getIdentifierType" result="SUCCESS"    Hibernate: select cstmmember0_.id as id20_, cstmmember0_.auth_stat as auth2_20_, cstmmember0_.checker_dt_stamp as checker3_20_, cstmmember0_.checker_id as checker4_20_, cstmmember0_.customer_id as customer9_20_, cstmmember0_.maker_dt_stamp as maker5_20_, cstmmember0_.maker_id as maker6_20_, cstmmember0_.mod_no as mod7_20_, cstmmember0_.record_stat as record8_20_ from public.cstm_members cstmmember0_ where cstmmember0_.customer_id in (select cstbidenti1_.customer_id from public.cstb_identifier cstbidenti1_ cross join public.cstm_identifier_type cstmidenti2_ where cstbidenti1_.identifier_type_id=cstmidenti2_.id and cstbidenti1_.id_value=? and cstmidenti2_.code=? group by cstbidenti1_.customer_id having count(cstbidenti1_.customer_id)=?)    
INFO : com.dtsc.loyalty.logger.query -  time="03/06/2013 16:55:12" user_ip="-" username="-" business_unit="-" screen="-" function="-" object_name="-" object_id="-" action="com.dtsc.loyalty.core.query.service.CustomerQueryServiceImpl.getMember" result="SUCCESS"    Hibernate: select nextval ('public.CSSQ_CUSTOMER')  Hibernate: select nextval ('public.CSSQ_MEMBER')  Hibernate: select nextval ('public.cssq_identifier')    Hibernate: select nextval ('public.CSSQ_CUSTOMER_ATTRIBUTE')    INFO : com.dtsc.loyalty.logger.query -  time="03/06/2013 16:55:12" user_ip="-" username="-" business_unit="-" screen="-" function="-" object_name="-" object_id="-" action="com.dtsc.loyalty.core.command.service.CustomerCommandServiceImpl.createMember" result="SUCCESS"    Hibernate: select cstbaccoun0_.id as id5_, cstbaccoun0_.auth_stat as auth2_5_, cstbaccoun0_.checker_dt_stamp as checker3_5_, cstbaccoun0_.checker_id as checker4_5_, cstbaccoun0_.maker_dt_stamp as maker5_5_, cstbaccoun0_.maker_id as maker6_5_, cstbaccoun0_.mod_no as mod7_5_, cstbaccoun0_.name as name5_, cstbaccoun0_.record_stat as record9_5_ from public.cstb_account_type cstbaccoun0_ where lower(cstbaccoun0_.name)=? limit ?  INFO : com.dtsc.loyalty.logger.query -  time="03/06/2013 16:55:12" user_ip="-" username="-" business_unit="-" screen="-" function="-" object_name="-" object_id="-" action="com.dtsc.loyalty.core.query.service.AccountTypeQueryServiceImpl.getAccountType" result="SUCCESS"    Hibernate: select cstmcurren0_.id as id16_, cstmcurren0_.auth_stat as auth2_16_, cstmcurren0_.checker_dt_stamp as checker3_16_, cstmcurren0_.checker_id as checker4_16_, cstmcurren0_.DECIMAL as DECIMAL16_, cstmcurren0_.maker_dt_stamp as maker6_16_, cstmcurren0_.maker_id as maker7_16_, cstmcurren0_.mod_no as mod8_16_, cstmcurren0_.name as name16_, cstmcurren0_.record_stat as record10_16_ from public.cstm_currency cstmcurren0_ where lower(cstmcurren0_.name)=? limit ?    INFO : com.dtsc.loyalty.logger.query -  time="03/06/2013 16:55:12" user_ip="-" username="-" business_unit="-" screen="-" function="-" object_name="-" object_id="-" action="com.dtsc.loyalty.core.query.service.CurrencyQueryServiceImpl.getCurrency" result="SUCCESS"    Hibernate: select nextval ('public.CSSQ_ACCOUNT')    INFO : com.dtsc.loyalty.logger.query -  time="03/06/2013 16:55:12" user_ip="-" username="-" business_unit="-" screen="-" function="-" object_name="-" object_id="-" action="com.dtsc.loyalty.core.command.service.AccountCommandServiceImpl.createAccount" result="SUCCESS"    Hibernate: insert into public.lttb_dab_financial_trans (account_num, auth_stat, branch_code, checker_dt_stamp, checker_id, cif_num, code, events_id, external_trans_ref, identify_name, identify_number, maker_dt_stamp, maker_id, mod_no, point_value, reason_notes, record_stat, service_period, trans_date, transaction_ref, was_contribute, was_old, id) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)    Hibernate: insert into public.lttb_element_values (auth_stat, checker_dt_stamp, checker_id, elements_id, dab_financial_trans_id, maker_dt_stamp, maker_id, mod_no, record_stat, value, id) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)  Hibernate: insert into public.lttb_element_values (auth_stat, checker_dt_stamp, checker_id, elements_id, dab_financial_trans_id, maker_dt_stamp, maker_id, mod_no, record_stat, value, id) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)  
Hibernate: insert into public.lttb_element_values (auth_stat, checker_dt_stamp, checker_id, elements_id, dab_financial_trans_id, maker_dt_stamp, maker_id, mod_no, record_stat, value, id) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)  Hibernate: insert into public.cstm_customer (auth_stat, checker_dt_stamp, checker_id, maker_dt_stamp, maker_id, mod_no, name, record_stat, type, id) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)    Hibernate: insert into public.cstm_members (auth_stat, checker_dt_stamp, checker_id, customer_id, maker_dt_stamp, maker_id, mod_no, record_stat, id) values (?, ?, ?, ?, ?, ?, ?, ?, ?)    Hibernate: insert into public.cstb_identifier (auth_stat, checker_dt_stamp, checker_id, customer_id, identifier_type_id, id_value, maker_dt_stamp, maker_id, mod_no, record_stat, id) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)  Hibernate: insert into public.cstb_customer_attribute (code, customer_id, value, id) values (?, ?, ?, ?)  Hibernate: insert into public.cstb_accounts (auth_stat, balance, block_amount, checker_dt_stamp, checker_id, account_type_id, currency_id, customer_id, local_balance, maker_dt_stamp, maker_id, mod_no, open_date, record_stat, status, id) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)    Hibernate: select cstmmercha0_.id as id21_, cstmmercha0_.auth_stat as auth2_21_, cstmmercha0_.checker_dt_stamp as checker3_21_, cstmmercha0_.checker_id as checker4_21_, cstmmercha0_.customer_id as customer9_21_, cstmmercha0_.maker_dt_stamp as maker5_21_, cstmmercha0_.maker_id as maker6_21_, cstmmercha0_.mod_no as mod7_21_, cstmmercha0_.record_stat as record8_21_ from public.cstm_merchant cstmmercha0_ cross join public.cstm_customer cstmcustom1_ where cstmmercha0_.customer_id=cstmcustom1_.id and lower(cstmcustom1_.name)=? limit ?    INFO : com.dtsc.loyalty.logger.query -  time="03/06/2013 16:55:12" user_ip="-" username="-" business_unit="-" screen="-" function="-" object_name="-" object_id="-" action="com.dtsc.loyalty.core.query.service.CustomerQueryServiceImpl.getMerchant" result="SUCCESS"    Hibernate: select cstbaccoun0_.id as id4_, cstbaccoun0_.auth_stat as auth2_4_, cstbaccoun0_.balance as balance4_, cstbaccoun0_.block_amount as block4_4_, cstbaccoun0_.checker_dt_stamp as checker5_4_, cstbaccoun0_.checker_id as checker6_4_, cstbaccoun0_.account_type_id as account14_4_, cstbaccoun0_.currency_id as currency15_4_, cstbaccoun0_.customer_id as customer16_4_, cstbaccoun0_.local_balance as local7_4_, cstbaccoun0_.maker_dt_stamp as maker8_4_, cstbaccoun0_.maker_id as maker9_4_, cstbaccoun0_.mod_no as mod10_4_, cstbaccoun0_.open_date as open11_4_, cstbaccoun0_.record_stat as record12_4_, cstbaccoun0_.status as status4_ from public.cstb_accounts cstbaccoun0_ cross join public.cstb_account_type cstbaccoun1_ where lower(cstbaccoun1_.name)=? and cstbaccoun0_.account_type_id=cstbaccoun1_.id and cstbaccoun0_.customer_id=? limit ?    
INFO : com.dtsc.loyalty.logger.query -  time="03/06/2013 16:55:12" user_ip="-" username="-" business_unit="-" screen="-" function="-" object_name="-" object_id="-" action="com.dtsc.loyalty.core.query.service.AccountQueryServiceImpl.getAccount" result="SUCCESS"    Hibernate: select cstbservic0_.id as id13_0_, cstbservic0_.auth_stat as auth2_13_0_, cstbservic0_.checker_dt_stamp as checker3_13_0_, cstbservic0_.checker_id as checker4_13_0_, cstbservic0_.merchant_id as merchant10_13_0_, cstbservic0_.maker_dt_stamp as maker5_13_0_, cstbservic0_.maker_id as maker6_13_0_, cstbservic0_.mod_no as mod7_13_0_, cstbservic0_.name as name13_0_, cstbservic0_.record_stat as record9_13_0_ from public.cstb_services cstbservic0_ where cstbservic0_.id=?    Hibernate: select lttbprogra0_.services_id as services1_26_0_, lttbprogra0_.auth_stat as auth2_26_0_, lttbprogra0_.checker_dt_stamp as checker3_26_0_, lttbprogra0_.checker_id as checker4_26_0_, lttbprogra0_.program_list_id as program9_26_0_, lttbprogra0_.maker_dt_stamp as maker5_26_0_, lttbprogra0_.maker_id as maker6_26_0_, lttbprogra0_.mod_no as mod7_26_0_, lttbprogra0_.record_stat as record8_26_0_ from public.lttb_program_service lttbprogra0_ where lttbprogra0_.services_id=?  Hibernate: select count(cstbtransa0_.id) as col_0_0_ from public.cstb_transaction cstbtransa0_ where cstbtransa0_.transaction_ref=? limit ?    Hibernate: select nextval ('public.cssq_transaction')    Hibernate: select nextval ('public.cssq_transaction')    INFO : com.dtsc.loyalty.logger.query -  time="03/06/2013 16:55:12" user_ip="-" username="-" business_unit="-" screen="-" function="-" object_name="-" object_id="-" action="com.dtsc.loyalty.core.command.service.TransactionCoreServiceImpl.onBussiness" result="SUCCESS"  Hibernate: insert into public.cstb_transaction (amount, auth_stat, checker_dt_stamp, checker_id, credit_or_debit, accounts_id, exchange_rate_id, services_id, customer_id, is_gl, maker_dt_stamp, maker_id, mod_no, record_stat, transaction_ref, type, id) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)    Hibernate: insert into public.cstb_transaction (amount, auth_stat, checker_dt_stamp, checker_id, credit_or_debit, accounts_id, exchange_rate_id, services_id, customer_id, is_gl, maker_dt_stamp, maker_id, mod_no, record_stat, transaction_ref, type, id) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)    Hibernate: update public.lttb_dab_financial_trans set account_num=?, auth_stat=?, branch_code=?, checker_dt_stamp=?, checker_id=?, cif_num=?, code=?, events_id=?, external_trans_ref=?, identify_name=?, identify_number=?, maker_dt_stamp=?, maker_id=?, mod_no=?, point_value=?, reason_notes=?, record_stat=?, service_period=?, trans_date=?, transaction_ref=?, was_contribute=?, was_old=? where id=?    INFO : com.dtsc.loyalty.logger.query -  time="03/06/2013 16:55:12" user_ip="-" username="-" business_unit="-" screen="-" function="-" object_name="-" object_id="-" action="com.dtsc.loyalty.tld.listener.service.TldMdbServiceImpl.processMessage" result="SUCCESS"    END_TIME = 1370253313548    EXECUTE_TIME = 1168 milisecond ~ 1 second  

Oracle RAC 11g R2 DB silent install: parameter needed for system class

Posted: 13 Aug 2013 09:04 PM PDT

I'm installing Oracle RAC DB 11gR2 using response file and silent mode. The installation is failing with the error message:

[INS-35421] This options installs a single instance database only
You have chosen to perform a Desktop class install on a cluster. This options will not install oracle RAC

Which parameter is needed for a server-class install? I tried changing the value for oracle.install.db.config.starterdb.control from DB_CONTROL to GRID_CONTROL, but I still get the same warning.

Which database could handle storage of billions/trillions of records?

Posted: 13 Aug 2013 08:54 PM PDT

We are looking at developing a tool to capture and analyze netflow data, of which we gather tremendous amounts. Each day we capture about ~1.4 billion flow records, which would look like this in JSON format:

{
    "tcp_flags": "0",
    "src_as": "54321",
    "nexthop": "1.2.3.4",
    "unix_secs": "1352234521",
    "src_mask": "23",
    "tos": "0",
    "prot": "6",
    "input": "105",
    "doctets": "186",
    "engine_type": "0",
    "exaddr": "2.3.4.5",
    "engine_id": "2",
    "srcaddr": "9.8.7.6",
    "dst_as": "12345",
    "unix_nsecs": "752265174",
    "sysuptime": "2943529544",
    "dst_mask": "24",
    "dstport": "80",
    "last": "2943523241",
    "srcport": "52672",
    "dpkts": "4",
    "output": "111",
    "dstaddr": "6.5.4.3",
    "first": "2943517993"
}

We would like to be able to do fast searches (less than 10 seconds) on the data set, most likely over narrow slices of time (10-30 minute intervals). We also want to index the majority of the data points so we can do searches on each of them quickly. We would also like to have an up-to-date view of the data when searches are executed. It would be great to stay in the open source world, but we are not opposed to looking at proprietary solutions for this project.

The idea is to keep approximately one month of data, which would be ~43.2 billion records. A rough estimate that each record would contain about 480 bytes of data, would equate to ~18.7 terabytes of data in a month, and maybe three times that with indexes. Eventually we would like to grow the capacity of this system to store trillions of records.

We have (very basically) evaluated couchbase, cassandra, and mongodb so far as possible candidates for this project, however each proposes their own challenges. With couchbase the indexing is done at intervals and not during insertion of the data so the views are not up to date, cassandra's secondary indexes are not very efficient at returning results as they typically require scanning the entire cluster for results, and mongodb looks promising but appears to be far more difficult to scale as it is master/slave/sharded. Some other candidates we plan to evaluate are elasticsearch, mysql (not sure if this is even applicable), and a few column oriented relational databases. Any suggestions or real world experience would be appreciated.

Designing Simple Schema for Disaggregation of Demand Forecast

Posted: 13 Aug 2013 03:02 PM PDT

I am doing a simple database design task as a training exercise where I have to come up with a basic schema design for the following case:

I have a parent-child hierarchy of products (example, Raw Material > Work in Progress > End Product).

  • Orders are placed at each level.
  • Number of orders shall be viewable in weekly buckets for the next 6 months.
  • Demand forecast can be done for each product level.
  • Demand forecast for any week within next 6 months can be done today.
  • Demand forecast is done for weekly buckets, for the next 6 months.

Demand forecasting is usually done at a higher level in the hierarchy (Raw Material or Work in Progress level). It has to be disaggregated to a lower level (End Product).

There are 2 ways in which demand forecast can be disaggregated from a higher level to lower level:

  1. User specifies a percentage distribution for end products. Say there's a forecast of 1000 for Work In Progress, and the user says: I want 40% for End Product 1 and 60% for End Product 2 in bucket 10. Then for the 10th week (Sunday to Saturday) from now, the forecast value for End Product 1 would be 400 and for End Product 2 would be 600.
  2. User says: just disaggregate according to orders placed against end products in bucket 5. If the orders in bucket 5 for End Products 1 and 2 are 200 and 800 respectively, then the forecast value for EP1 would be ((200/1000) * 100)% and for EP2 would be ((800/1000) * 100)% of the forecast for Work in Progress.

Forecast shall be viewable in weekly buckets for the next 6 months and the ideal format should be:

product name | bucket number | week start date | week end date | forecast value | created_on  

PRODUCT_HIERARCHY table could look like this:

id  |   name                |   parent_id
__________________________________________
1   |   raw material        |   (null)
2   |   work in progress    |   1
3   |   end product 1       |   2
4   |   end product 2       |   2

ORDERS table might look like this:

id | prod_id | order_date | delivery_date | delivered_date  

where,

prod_id is a foreign key that references id in the PRODUCT_HIERARCHY table.

How to store forecast? What would be a good basic schema for such a requirement?
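A minimal sketch of a forecast table matching the output format listed above; column names and types are only a starting point, not a definitive design:

CREATE TABLE demand_forecast (
    id             NUMBER  PRIMARY KEY,
    prod_id        NUMBER  NOT NULL REFERENCES product_hierarchy (id),
    bucket_no      NUMBER  NOT NULL,
    week_start     DATE    NOT NULL,
    week_end       DATE    NOT NULL,
    forecast_value NUMBER  NOT NULL,
    created_on     DATE    DEFAULT SYSDATE NOT NULL
);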


My idea to select orders for 26 weekly buckets is:

SELECT
    COUNT(*) TOTAL_ORDERS,
    WIDTH_BUCKET(
        delivery_date,
        SYSDATE,
        ADD_MONTHS(sysdate, 6),
        TO_NUMBER( TO_CHAR(SYSDATE,'DD-MON-YYYY') - TO_CHAR(ADD_MONTHS(sysdate, 6),'DD-MON-YYYY') ) / 7
    ) BUCKET_NO
FROM
    orders_table
WHERE
    delivery_date BETWEEN SYSDATE AND ADD_MONTHS(sysdate, 6);

But this will give weekly buckets starting from today irrespective of the day. How can I convert them to Sunday to Saturday weeks in Oracle?
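One possible approach: TRUNC(date, 'IW') truncates to the Monday of the ISO week, so shifting by a day turns it into Sunday-to-Saturday buckets. A sketch using that instead of WIDTH_BUCKET, against the same orders_table:

SELECT TRUNC(delivery_date + 1, 'IW') - 1 AS week_start,   -- the Sunday that starts the week
       TRUNC(delivery_date + 1, 'IW') + 5 AS week_end,     -- the following Saturday
       COUNT(*)                           AS total_orders
FROM   orders_table
WHERE  delivery_date BETWEEN SYSDATE AND ADD_MONTHS(SYSDATE, 6)
GROUP  BY TRUNC(delivery_date + 1, 'IW')
ORDER  BY week_start;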

Please help designing this database structure.

(will be using Oracle 11g)

ORA-09925: Unable to create audit trail file

Posted: 13 Aug 2013 02:43 PM PDT

I'm using an Oracle 11.1.0.6.0 database on a 64-bit Linux server in the Amazon cloud.

I'm getting the following error when I run the connect command:

SQL> connect / as sysdba
ERROR:
ORA-09925: Unable to create audit trail file
Linux-x86_64 Error: 30: Read-only file system
Additional information: 9925
ORA-09925: Unable to create audit trail file
Linux-x86_64 Error: 30: Read-only file system
Additional information: 9925

When I run the df -h command I don't see the file system; it just shows this:

$ df -h
Filesystem Size Used Avail Use% Mounted on

I do not understand the cause of this issue. Please let me know how to solve it.
