Thursday, October 3, 2013

[how to] Doing a point in time restore with CDC enabled; Possible?

Doing a point in time restore with CDC enabled; Possible?

Posted: 03 Oct 2013 08:42 PM PDT

I discovered this week, the hard way, that the database restore options NORECOVERY and KEEP_CDC are mutually exclusive. So this raises the question: how do you do a database restore, keeping CDC intact, with both full and log backups?

Doing research on MSDN etc., I cannot find any documentation on restoring a database with KEEP_CDC using any option other than a single full database restore with RECOVERY specified.

I was able to find one attempt that did the full and subsequent log restores without the KEEP_CDC option, waiting until the final log. Only then was the database brought online with the RECOVERY and KEEP_CDC options. The result was a corrupt CDC schema, as demonstrated here.

If the intent is to KEEP_CDC on restore, are you truly limited to a full backup only, or is there a mechanism similar to the attempt above to keep CDC intact during multi-file restores on a server other than the original?
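
For reference, the multi-step sequence described above looks roughly like this (a sketch only, with hypothetical backup file names; as noted, KEEP_CDC is only accepted on the step that brings the database online):

RESTORE DATABASE MyDb FROM DISK = N'C:\backups\MyDb_full.bak'
    WITH NORECOVERY, REPLACE;
RESTORE LOG MyDb FROM DISK = N'C:\backups\MyDb_log1.trn'
    WITH NORECOVERY;
-- final log: the only point at which RECOVERY and KEEP_CDC can be combined
RESTORE LOG MyDb FROM DISK = N'C:\backups\MyDb_log2.trn'
    WITH RECOVERY, KEEP_CDC;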

Mysql: update query with subquery

Posted: 03 Oct 2013 08:25 PM PDT

Hello, I need to perform the following query.

Query

update Taxonomy
set sourceId = (
    select id from TaxonomyMapping a
    where a.oldId = (
        select cm.en_ID
        from TaxonomyMapping ta
        join CategoryMapping cm on ta.oldId = cm.language_ID
        where ta.oldId = oldId
        limit 1
    )
)
where id > -1;

The tables are as following:

Taxonomy(id, oldId, sourceId, name, language)

TaxonomyMapping(id, oldId, sourceId, name, language) Exact copy of Taxonomy

CategoryMapping(en_ID, language_ID)

What I'm trying to accomplish: the original Taxonomy table has categories in a certain language, and their translations are known in CategoryMapping. I now need to put the English translation into sourceId. Can somebody help me with this? At the moment the query fills the same id for all rows; I suspect that the oldId in the correlation (ta.oldId = oldId) is not the oldId of the table being updated. Is it possible to do a query like this, or should I search for another solution?
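
For what it's worth, here is a sketch of one possible rewrite, assuming the intent is to correlate on the outer Taxonomy row's oldId (the correlation the original subquery appears to miss); whether it matches your data is an assumption:

update Taxonomy t
join CategoryMapping cm on cm.language_ID = t.oldId
join TaxonomyMapping tm on tm.oldId = cm.en_ID
set t.sourceId = tm.id;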

Recover oracle 10g database from old data files

Posted: 03 Oct 2013 09:16 PM PDT

I have an Oracle 10g database.

Unfortunately, our machine and system crashed and we lost all the backups and Oracle archive logs, but we have old image copies of our data files and one of the control files (made by copy & paste).

How can I restore our database from these copies?

I tried to create a new database and swap in the old files, but I couldn't (I got a message that we need to open the database with RESETLOGS, but we can't open it that way without performing incomplete recovery).

Please help me.
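
For reference, the incomplete-recovery path that message refers to looks roughly like this (a sketch only, assuming the copied data files and control file are usable; any available redo would be applied before cancelling):

STARTUP MOUNT;
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
-- apply whatever redo is available, then type CANCEL
ALTER DATABASE OPEN RESETLOGS;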

MySQL innodb engine table and database backup and recovery

Posted: 03 Oct 2013 01:54 PM PDT

I am using storage snapshots to back up a MySQL database with MyISAM tables. Can you guide me on how to take InnoDB database and tablespace backups, and restore them, using storage snapshots?
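
As a rough sketch of one common snapshot approach (an assumption, not your storage vendor's documented procedure): briefly quiesce writes, record the binary log position, take the storage snapshot from the OS/storage side, then release the lock.

FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;   -- note the binlog file and position for point-in-time recovery
-- take the storage snapshot here (outside MySQL)
UNLOCK TABLES;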

Move SQL Server 2008 database from machine A to B with no downtime

Posted: 03 Oct 2013 05:45 PM PDT

I've got a bit of a problem. I have a SQL Server database on box A which holds approximately 60 GB of data. I need to move this to a new machine, but I cannot have any downtime at all. Every second, new data arrives in the database, so we can't do it at some random time in the middle of the night.

Is there any way to achieve this? I am pretty much able to do whatever I want, so any suggestion will be useful.
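
One commonly used route is a log-shipping style cutover; a minimal sketch, with hypothetical share and file names, would be:

-- On box A:
BACKUP DATABASE MyDb TO DISK = N'\\share\MyDb_full.bak';
-- On box B:
RESTORE DATABASE MyDb FROM DISK = N'\\share\MyDb_full.bak' WITH NORECOVERY;
-- Repeat BACKUP LOG / RESTORE LOG ... WITH NORECOVERY until cutover, then:
BACKUP LOG MyDb TO DISK = N'\\share\MyDb_tail.trn' WITH NORECOVERY;   -- tail-log backup, takes the source offline
RESTORE LOG MyDb FROM DISK = N'\\share\MyDb_tail.trn' WITH RECOVERY;  -- on box B

The only interruption is the final tail-log/cutover step, which is typically seconds rather than a maintenance window.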

The remote copy of database “<DBNAME>” is not related to the local copy of the database

Posted: 03 Oct 2013 11:41 AM PDT

I'm trying to join a secondary DB to my SQL Server 2012 Availability Group following this article:

Join a Secondary Database to an Availability Group (SQL Server)

When I attempt this, the local SQL Server Windows event log gives error 1407:

The remote copy of database "<DBNAME>" is not related to the local copy of the database  

I have followed all the steps and created a recent backup and restore with NORECOVERY.
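
For context, the sequence being followed (per the linked article) is roughly the following; the names are the same placeholders used above:

-- On the secondary, after taking full and log backups on the primary:
RESTORE DATABASE [DBNAME] FROM DISK = N'\\share\DBNAME_full.bak' WITH NORECOVERY;
RESTORE LOG [DBNAME] FROM DISK = N'\\share\DBNAME_log.trn' WITH NORECOVERY;
-- The step that raises error 1407:
ALTER DATABASE [DBNAME] SET HADR AVAILABILITY GROUP = [AGNAME];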

The error log of the destination SQL server gives these errors:

AlwaysOn: The local replica of availability group '<AGNAME>' is going offline because the corresponding resource in the Windows Server Failover Clustering (WSFC) cluster is no longer online. This is an informational message only. No user action is required.

The state of the local availability replica in availability group '<AGNAME>' has changed from 'SECONDARY_NORMAL' to 'RESOLVING_NORMAL'. The replica state changed because of either a startup, a failover, a communication issue, or a cluster error. For more information, see the availability group dashboard, SQL Server error log, Windows Server Failover Cluster management console or Windows Server Failover Cluster log.

The state of the local availability replica in availability group '<AGNAME>' has changed from 'RESOLVING_NORMAL' to 'NOT_AVAILABLE'. The replica state changed because of either a startup, a failover, a communication issue, or a cluster error. For more information, see the availability group dashboard, SQL Server error log, Windows Server Failover Cluster management console or Windows Server Failover Cluster log.

The cluster and all its nodes are online. The AG dashboard does not give any additional information to help diagnose the problem.

Opening all firewalls did not solve the issue.

The Windows Event log gives this line:

The database 'DelphiOBUDB' is marked RESTORING and is in a state that does not allow recovery to be run.  

However, the DB is left in Restoring mode when you restore it and specify the NORECOVERY flag as per the tutorial guidelines.

How can I join the DB to my availability group?

Is there any way to use different InnoDB settings for different databases on the same server?

Posted: 03 Oct 2013 12:28 PM PDT

From what I know, this is not possible, but I'm surprised it's not a highly requested feature. I'd love to be able to have one database using InnoDB in ACID-compliant mode and another database on the same server, used for non-critical data, that uses InnoDB with more relaxed settings for disk writes.

Is there a way to achieve this setup?

Query to return fields of distinct values per key

Posted: 03 Oct 2013 08:35 PM PDT

In the process of building a data warehouse, we are creating some aggregation tables for a reporting engine. We would like to build a single aggregation table from the fact table that holds the list of all possible values of certain fields present in the fact table.

I would like to create a table that has the following output structure:

|Org   |Gender| Age|State|
|------|------|----|-----|
|     1|     M|   1|   FL|
|     1|     F|   2|   VA|
|     1|  NULL|   3|   CA|
|     1|  NULL|NULL|   NJ|
|     2|     M|   2|   VA|
|     2|  NULL|   3| NULL|
|     2|  NULL|   5| NULL|

So that, for each Org, the non-NULL DISTINCT list in each column represents the list of distinct values in the fact table for that column. I can't use a DISTINCT operator because that would look for the list of distinct combinations of fields, not make each field distinct on its own; the number of rows for each Org should be at most the number of distinct values in any one column for that Org.

The database I'm using is SQL Server 2012, if it matters.
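
For reference, a sketch of one way to produce that shape (the fact table name FactTable is hypothetical): take the distinct non-NULL values of each column per Org, number them, and stitch the columns back together with FULL OUTER JOINs.

WITH g AS (
    SELECT Org, Gender,
           ROW_NUMBER() OVER (PARTITION BY Org ORDER BY Gender) AS rn
    FROM (SELECT DISTINCT Org, Gender FROM FactTable WHERE Gender IS NOT NULL) x
), a AS (
    SELECT Org, Age,
           ROW_NUMBER() OVER (PARTITION BY Org ORDER BY Age) AS rn
    FROM (SELECT DISTINCT Org, Age FROM FactTable WHERE Age IS NOT NULL) x
), s AS (
    SELECT Org, State,
           ROW_NUMBER() OVER (PARTITION BY Org ORDER BY State) AS rn
    FROM (SELECT DISTINCT Org, State FROM FactTable WHERE State IS NOT NULL) x
)
SELECT COALESCE(g.Org, a.Org, s.Org) AS Org, g.Gender, a.Age, s.State
FROM g
FULL OUTER JOIN a ON a.Org = g.Org AND a.rn = g.rn
FULL OUTER JOIN s ON s.Org = COALESCE(g.Org, a.Org) AND s.rn = COALESCE(g.rn, a.rn);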

Selecting the highest value in a column?

Posted: 03 Oct 2013 12:13 PM PDT

I am dealing with a table that has 3 columns, and I need a way to select, for every PrntP_PstnGrpID, the matching row that has the highest ChldLevel.

The PrntP_PstnGrpID could have multiple entries, but each entry will have an ever-increasing ChldLevel, starting at 1. How can I write a select statement to pull the row with the maximum ChldLevel for every PrntP_PstnGrpID?

E.g., for PrntP_PstnGrpID = 10, which has 3 entries with ChldLevel 1, 2, 3, I want to get the row with ChldLevel = 3; but for PrntP_PstnGrpID = 5, which has 5 entries with ChldLevel 1, 2, 3, 4, 5, I want the row with ChldLevel = 5.

CREATE TABLE [dbo].[P_PositionGroupFlat] (
    [ID]              INT            NOT NULL,
    [PrntP_PstnGrpID] INT            NOT NULL,
    [ChldLevel]       INT            NOT NULL,
    [Sort]            VARCHAR (8000) NOT NULL
);
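
A minimal sketch using a window function (assuming ties on ChldLevel are not a concern):

SELECT ID, PrntP_PstnGrpID, ChldLevel, Sort
FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY PrntP_PstnGrpID
                              ORDER BY ChldLevel DESC) AS rn
    FROM dbo.P_PositionGroupFlat
) t
WHERE rn = 1;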

Oracle's UTL_FILE when connected locally or through listener has different behaviour

Posted: 03 Oct 2013 03:35 PM PDT

We have a procedure which creates a CSV file with UTL_FILE (env is Oracle 11.2.0.1 and RHEL 6.4). Until now, we were storing that CSV file in the home of the oracle system user (the Oracle owner), /home/oracle/csv/. It works fine, but now we're required to store the CSV file in a different system user's home (e.g. the reports system user's home, /home/reports/csv/).

So we added the oracle user to the reports group as its secondary group, and then changed the permissions on reports's home so that it is accessible to the owner and its group.

# id oracle
uid=500(oracle) gid=500(oracle) groups=500(oracle),502(reports)
# id reports
uid=502(reports) gid=502(reports) groups=502(reports)
# chmod 770 /home/reports
# ls -la /home/reports/
total 52
drwxrwx---  8 reports reports 4096 oct  3 12:58 .
drwxr-xr-x. 5 root    root    4096 oct  2 11:05 ..
drwxrwxrwx  2 reports reports 4096 oct  3 12:59 csv

With this, logging into the system as oracle, I can write, read and execute files in reports's home.

# su - oracle
oracle ~$ touch /home/reports/csv/test.txt
oracle ~$ ls -la /home/reports/csv/test.txt
total 8
-rw-rw-r-- 1 oracle oracle 0 oct  3 17:51 test.txt

And now, as far as I know (and Oracle's documentation says), this should work, but it doesn't... not at all. If I connect locally with sqlplus, it works. But if I connect from a remote machine or through the listener, it doesn't!

I'll show you. Local connection with sqlplus:

oracle ~$ export ORACLE_SID=MYDB
oracle ~$ sqlplus -S informes
Enter password:

select a.directory_name,
       a.directory_path,
       b.grantee,
       b.privilege
from all_directories a,
     all_tab_privs b
where a.directory_name = b.table_name
  and DIRECTORY_NAME='CSVFOLDER';

DIRECTORY_NAME                 DIRECTORY_PATH       GRANTEE   PRIVILEGE
------------------------------ -------------------- --------- ---------
CSVFOLDER                      /home/reports/csv    INFORMES  READ
CSVFOLDER                      /home/reports/csv    INFORMES  WRITE

show user
USER is "INFORMES"

declare
  output_fich utl_file.file_type;
begin
  output_fich := utl_file.fopen('CSVFOLDER','testfile.csv','W');
  utl_file.put_line (output_fich, 'test line');
  utl_file.fclose(output_fich);
end;
/

PL/SQL procedure successfully completed.

host ls -la /home/reports/csv/testfile.csv
-rw-rw-r-- 1 oracle oracle 10 oct  3 18:20 /home/informesestados/tmp/testfile.csv

host rm /home/reports/csv/testfile.csv

Now, let's try it again connecting through the listener. First, let's see where my tnsnames is pointing to:

oracle ~$ cat $ORACLE_HOME/network/admin/tnsnames.ora | grep MYDB
MYDB = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = FR-BD1-tmp)(PORT = 1522))) (CONNECT_DATA = (SERVICE_NAME = MYDB)))

oracle ~$ ping FR-BD1-tmp
PING fr-bd1-tmp (192.168.78.3) 56(84) bytes of data.
64 bytes from fr-bd1-tmp (192.168.78.3): icmp_seq=1 ttl=64 time=0.047 ms
64 bytes from fr-bd1-tmp (192.168.78.3): icmp_seq=2 ttl=64 time=0.025 ms
^C
--- fr-bd1-tmp ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1386ms
rtt min/avg/max/mdev = 0.025/0.036/0.047/0.011 ms

oracle ~$ /sbin/ifconfig | grep "inet addr"
          inet addr:192.168.78.3  Bcast:192.168.78.255  Mask:255.255.255.0

oracle ~$ lsnrctl services LISTENER_MYBD

LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 03-OCT-2013 18:33:04

Copyright (c) 1991, 2009, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=fr-bd1-tmp)(PORT=1522)))
Services Summary...
Service "mydb" has 1 instance(s).
  Instance "mydb", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:45 refused:0
         LOCAL SERVER
Service "mydb_node1" has 1 instance(s).
  Instance "mydb", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:3 refused:0 state:ready
         LOCAL SERVER
The command completed successfully

oracle ~$ sqlplus -S informes@mydb

Enter password:
select a.directory_name,
       a.directory_path,
       b.grantee,
       b.privilege
from all_directories a,
     all_tab_privs b
where a.directory_name = b.table_name
  and DIRECTORY_NAME='CSVFOLDER';

DIRECTORY_NAME                 DIRECTORY_PATH       GRANTEE   PRIVILEGE
------------------------------ -------------------- --------- ---------
CSVFOLDER                      /home/reports/csv    INFORMES  READ
CSVFOLDER                      /home/reports/csv    INFORMES  WRITE

show user
USER is "INFORMES"

declare
  output_fich utl_file.file_type;
begin
  output_fich := utl_file.fopen('INFORMES','testfile.csv','W');
  utl_file.put_line (output_fich, 'test line');
  utl_file.fclose(output_fich);
end;
/
declare
*
ERROR at line 1:
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
ORA-06512: at line 4

Now, if I change the permissions on reports's home to be accessible to everyone, the UTL_FILE procedure works through the listener connection!

# chmod 777 /home/reports
# ls -la /home/reports/
total 52
drwxrwxrwx  8 reports reports 4096 oct  3 12:58 .
drwxr-xr-x. 5 root    root    4096 oct  2 11:05 ..
drwxrwxrwx  2 reports reports 4096 oct  3 12:59 csv

# su - oracle
oracle ~$ sqlplus -S informes@mydb
Enter password:
declare
  output_fich utl_file.file_type;
begin
  output_fich := utl_file.fopen('CSVFOLDER','testfile.csv','W');
  utl_file.put_line (output_fich, 'test line');
  utl_file.fclose(output_fich);
end;
/

PL/SQL procedure successfully completed.

host ls -la /home/reports/csv/testfile.csv
-rw-rw-r-- 1 oracle oracle 10 oct  3 18:59 /home/informesestados/tmp/testfile.csv

???!!

I don't get why it behaves this way. As Oracle's documentation says:

(http://docs.oracle.com/cd/B28359_01/appdev.111/b28419/u_file.htm)

On UNIX systems, the owner of a file created by the FOPEN function is the owner of the shadow process running the instance

The owner of the shadow process is the same in both methods (i.e. the oracle system user), so... why is this?

Does anybody have a clue? Am I missing something?

Regards!

E/R diagram for sports league database

Posted: 03 Oct 2013 04:43 PM PDT

I am attempting to create a data model for a sports league.

There are two entities in my domain. The first entity is a Manager and the second is a Team.

A Manager has

  • personnel id
  • first name
  • last name
  • birth date
  • birthplace

A Team has one Manager

If these are the design constraints, then how can we connect these two entities?
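
A minimal DDL sketch under the stated constraints (column types are assumptions); the foreign key in Team captures the "a Team has one Manager" relationship, and the UNIQUE constraint additionally makes each Manager manage at most one Team, which may or may not be desired:

CREATE TABLE Manager (
    personnel_id INT PRIMARY KEY,
    first_name   VARCHAR(50),
    last_name    VARCHAR(50),
    birth_date   DATE,
    birthplace   VARCHAR(100)
);

CREATE TABLE Team (
    team_id    INT PRIMARY KEY,
    manager_id INT NOT NULL UNIQUE,  -- each Team has exactly one Manager
    FOREIGN KEY (manager_id) REFERENCES Manager (personnel_id)
);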

bulk insert not working in stored proc when using exec

Posted: 03 Oct 2013 01:50 PM PDT

We have a SQL Server 2005 database with a stored procedure that needs to do a bulk insert.

The query in the stored procedure is defined as such:

declare @query nvarchar(max)

set @query = N'bulk insert uploadtmp
from ''\\xxxxxxxxxxxxxxx.local\webapps\root\DOE_SPECIAL_ED\uplds\2014\201455E002.txt''
with (datafiletype=''char'', fieldterminator=''\t'', rowterminator=''\n'')'

exec sp_executesql @query

When the stored procedure is called from the web app, nothing gets inserted; but if the stored procedure is changed to just run the bulk insert using a hard-coded SQL statement and is called from the web app:

bulk insert uploadtmp
from '\\xxxxxxxxxxxxxxx.local\webapps\root\DOE_SPECIAL_ED\uplds\2014\201455E002.txt'
with (datafiletype='char', fieldterminator='\t', rowterminator='\n')

the data gets loaded. The developer eventually needs a dynamic version, because the path and file will change and be passed from the web app as a parameter. What's even more confusing is that when you execute the stored procedure from a query window on the database server, the bulk insert works either way; it only has a problem when called from the web app.

Composite Primary Key efficiency as a Foreign Key

Posted: 03 Oct 2013 07:32 AM PDT

I have a table with a Composite Primary key (consisting of 4 columns) which is used to ensure no duplicates are entered into the table. I am now in need of a new table which will need to reference the keys in this table as foreign keys.

My question is which approach is more efficient for lookup speeds:

1) Do I create the new table including all 4 columns and reference them all in a foreign key.

or

2) Do I create a new identity column in the Primary Key table and use this as a foreign key in the new table.

This database is expected to hold a very large amount of data, so I have built it up until now with a view to minimising the amount of data held in each table. With this in mind, option 2 would be the best approach, since I will save two int columns and a datetime column on every row, but I want to avoid increasing the lookup time unnecessarily.
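
For concreteness, option 2 would look roughly like this (table and constraint names are hypothetical); the surrogate key needs a UNIQUE constraint so it can be the target of the foreign key while the composite key stays the primary key:

ALTER TABLE dbo.ExistingTable
    ADD ExistingTableID INT IDENTITY(1,1) NOT NULL;
ALTER TABLE dbo.ExistingTable
    ADD CONSTRAINT UQ_ExistingTable_ID UNIQUE (ExistingTableID);

CREATE TABLE dbo.NewTable (
    NewTableID      INT IDENTITY(1,1) PRIMARY KEY,
    ExistingTableID INT NOT NULL
        REFERENCES dbo.ExistingTable (ExistingTableID)
);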

How to remove diacritics in computed persisted columns? COLLATE is non-deterministic and cannot be used

Posted: 03 Oct 2013 09:59 AM PDT

I have a function that includes:

SELECT @pString = CAST(@pString AS VARCHAR(255)) COLLATE SQL_Latin1_General_Cp1251_CS_AS  

This is useful, for example, to remove accents in French; for example:

UPPER(CAST('Éléctricité' AS VARCHAR(255)) COLLATE SQL_Latin1_General_Cp1251_CS_AS)  

gives ELECTRICITE.

But using COLLATE makes the function non-deterministic and therefore I cannot use it as a computed persisted value in a column.

Q1. Is there another (quick and easy) way to remove accents like this, with a deterministic function?

Q2. (Bonus question) The reason I have this computed persisted column is 1) to search and 2) to detect duplicates. For example, the user may enter the customer's last name as 'Gagne', 'Gagné', 'GAGNE' or 'GAGNÉ', and the app will find it using the persisted computed column (and the update/insert will balk at the duplicate). Is there a better way to do this?

EDIT: Using SQL Server 2012 and SQL-Azure.
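
For Q2, one alternative worth considering (an assumption that an accent- and case-insensitive collation on the search column is acceptable; table and column names are hypothetical) is to collate the column itself so comparisons ignore accents, which avoids the computed column entirely:

ALTER TABLE dbo.Customer
    ALTER COLUMN LastName NVARCHAR(255)
    COLLATE SQL_Latin1_General_CP1_CI_AI;  -- restate nullability to match the existing column

-- 'Gagne', 'Gagné', 'GAGNE' and 'GAGNÉ' now compare as equal:
SELECT * FROM dbo.Customer WHERE LastName = N'Gagne';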

Initializing Transactional Replication From two Different Database Backup

Posted: 03 Oct 2013 08:53 AM PDT

I have two different large databases (200 GB each) whose data I want to replicate to a central database (a third DB). I followed this link and it was very useful for a single database.

But my problem is I have to restore two different backups to a single database. Is that possible at all?

Performance: order of tables in joined statement

Posted: 03 Oct 2013 08:02 AM PDT

I have the following SQL statement, running on a SQLite database on a Windows mobile device.

SELECT
    table1.uniqueidentifier1, table1.int1, table1.varchar1,
    table1.decimal1, table1.decimal2
FROM table1
INNER JOIN table2 ON table1.PK = table2.FK
WHERE table2.uniqueidentifier2 IN (uniqueidentifier1, uniqueidentifier2, ....)
ORDER BY table1.varchar1

As there are several hundred thousand records in each table and the device isn't really new this takes some time.

Would the performance be better, if I switched the tables, somehow like this:

SELECT
    table1.uniqueidentifier1, table1.int1, table1.varchar1,
    table1.decimal1, table1.decimal2
FROM table2
INNER JOIN table1 ON table1.PK = table2.FK
WHERE table2.uniqueidentifier2 IN (uniqueidentifier1, uniqueidentifier2, ....)
ORDER BY table1.varchar1

Please note: in the first statement I select from table1 and join table2; in the second, it is switched.

Why would it, or wouldn't it, be faster?

How to design database for polymorphic relationships

Posted: 03 Oct 2013 10:13 AM PDT

I have the following scenario:

  • There are some categories
  • Each category has attributes
  • An attribute could be free text or a collection of options
  • There is a product, assigned to a category, which needs to store the values/options of its attributes

How can a model be made without any repetition, one which correctly addresses the polymorphic nature of attributes?
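
One common shape for this is an attribute/option/value design; a rough sketch with assumed table and column names (not the only way to model it):

CREATE TABLE category (
    id   INT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

CREATE TABLE attribute (
    id          INT PRIMARY KEY,
    category_id INT NOT NULL REFERENCES category (id),
    name        VARCHAR(100) NOT NULL,
    input_type  VARCHAR(10)  NOT NULL    -- 'text' or 'option'
);

CREATE TABLE attribute_option (
    id           INT PRIMARY KEY,
    attribute_id INT NOT NULL REFERENCES attribute (id),
    option_value VARCHAR(255) NOT NULL
);

CREATE TABLE product (
    id          INT PRIMARY KEY,
    category_id INT NOT NULL REFERENCES category (id),
    name        VARCHAR(255) NOT NULL
);

CREATE TABLE product_attribute_value (
    product_id   INT NOT NULL REFERENCES product (id),
    attribute_id INT NOT NULL REFERENCES attribute (id),
    text_value   VARCHAR(255) NULL,                         -- used when input_type = 'text'
    option_id    INT NULL REFERENCES attribute_option (id), -- used when input_type = 'option'
    PRIMARY KEY (product_id, attribute_id)
);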

MYSQL LOAD DATA INFILE taking a long time

Posted: 03 Oct 2013 03:13 PM PDT

I have a MySQL DB running on a Raspberry Pi. Under normal circumstances MySQL actually runs slightly quicker than it did on my much more powerful desktop. However, I am trying to insert 60 million records into the database using LOAD DATA INFILE. I tried it all in one go (a 1.2 GB file) and it was still trying to load the data 1.5 days later. So I tried loading in chunks of 100,000, which was fine for the first 3 million records but soon started to grind to a halt. I then removed the indexes from the table and it seems to run a bit quicker, but I noticed that for each 100,000 rows I insert, the time increases by about 20 seconds.

What is strange is that when I did a database restore from my original desktop machines database (an identical db with 60million rows in the main table) the restore only took about 1 hour.

What is causing the slowdown for LOAD DATA INFILE?

I should point out that I am using InnoDB.

EDIT:

I reduced the chunks to 1,000 records and left it running, which did appear to speed things up: after about 1 hour it had inserted 24 million records, though each insert of 1,000 was taking about 30 seconds. I then decided to stop it and restarted the Raspberry Pi. When I ran the import again, lo and behold, the initial inserts were back to less than one second.

So my question is: do I need to clear a cache or something? MySQL appears to be getting bogged down, rather than the actual LOAD DATA INFILE being slow. It is almost as if it is filling up memory and not releasing it, or something much more technical to do with MySQL.
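
For reference, the knobs most often adjusted around big InnoDB bulk loads look like this (a sketch only; the path and table name are hypothetical, and whether it is safe to disable the checks here is an assumption; loading in primary-key order usually helps most):

SET unique_checks = 0;
SET foreign_key_checks = 0;
LOAD DATA INFILE '/path/to/chunk.csv'
    INTO TABLE mytable
    FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';
SET unique_checks = 1;
SET foreign_key_checks = 1;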

Unclear oracle problem with getting response of query

Posted: 03 Oct 2013 08:21 AM PDT

The problem is very unclear.

I have an application server that constructs queries and sends them to Oracle to get a response.

The Oracle database is installed on a high-performance server:
Linux 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

The application server was tested in different locations inside the intranet, including the high-performance server where the database is located.

Several versions of the application server (from old to newest) were tested also.

The problem:

When the application server sends a query to the database, it has to wait about 300 seconds to get the response and construct the logged output.

But when I connect to the database with PL/SQL Developer and run that query from the IDE, I get a response in less than one second.

Does this sound familiar to you, guys?

We've been cracking our heads trying to determine where the problem could be.

If you need more information, I'll update this post immediately.

Restrict range of dynamic ports available to SQL Server

Posted: 03 Oct 2013 10:38 AM PDT

Is there a way to limit the range of dynamic ports available to SQL Server from the database side, or the database server side? Our SOP is to use static ports across the network firewall, and a vendor is having trouble locking down their ports. Theoretically, if we allowed a range of 1000 ports inside the dynamic range (49152–65535) across the firewall, how would I limit SQL Server to only assign a dynamic port inside that range?

Probable circular dependency, but query still runs

Posted: 03 Oct 2013 08:27 PM PDT

I built this ER diagram in MySQL Workbench, and it will run and generate the database. My concern is that the relationship between employees, departments, positions, and employment_statuses is circular.

Because I know people will ask: this is something I was working on in a class, and I'm not asking for help with homework. I took a different route than the professor, and I'm asking for clarification about database modelling theory.

[ER diagram image]

Calculating the median value in a MySQL table with CodeIgniter

Posted: 03 Oct 2013 05:25 PM PDT

This is my model file in CodeIgniter. I need to calculate the median for this variable: budget_breakdown.breakdown_grants. How can I do this calculation?

function reports_details1() {
    $this->db->select('budget_breakdown.breakdown_grants');
    //survey_respondent_info.state,survey_respondent_info.survey_id,budgets.budget_id,
    $this->db->from('survey_respondent_info');
    $this->db->join('budgets',
                    'budgets.survey_id=survey_respondent_info.survey_id', 'left');
    $this->db->join('budget_breakdown',
                    'budgets.budget_id=budget_breakdown.budget_id', 'left');
    $this->db->where('budgets.budget_option_id', 2);
    $query1 = $this->db->get();
    $result = $query1->result();
    return $result;
}
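
MySQL has no built-in MEDIAN(), so the calculation typically happens in SQL rather than in CodeIgniter; a sketch using the same join/filter as the model above and the classic user-variable row-numbering approach (treat it as an assumption about your schema, not a drop-in answer):

SELECT AVG(t.breakdown_grants) AS median_grants
FROM (
    SELECT bb.breakdown_grants, @rn := @rn + 1 AS rn
    FROM budget_breakdown bb
    JOIN budgets b ON b.budget_id = bb.budget_id AND b.budget_option_id = 2
    JOIN (SELECT @rn := 0) init
    ORDER BY bb.breakdown_grants
) t
JOIN (
    SELECT COUNT(*) AS cnt
    FROM budget_breakdown bb
    JOIN budgets b ON b.budget_id = bb.budget_id AND b.budget_option_id = 2
) c
  ON t.rn IN (FLOOR((c.cnt + 1) / 2), CEIL((c.cnt + 1) / 2));

The result could then be fetched with $this->db->query(...) instead of the query builder.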

Access denied for user 'root'@'%'

Posted: 03 Oct 2013 08:56 PM PDT

I used to access the root user in MySQL just fine. But recently, I am no longer able to.

I am able to log in fine:

 mysql -u root -p  

Here is the MySQL status after login:

mysql> status
--------------
mysql  Ver 14.14 Distrib 5.5.28, for debian-linux-gnu (i686) using readline 6.2

Connection id:          37
Current database:
Current user:           root@localhost
SSL:                    Not in use
Current pager:          stdout
Using outfile:          ''
Using delimiter:        ;
Server version:         5.5.28-0ubuntu0.12.04.3 (Ubuntu)
Protocol version:       10
Connection:             Localhost via UNIX socket
Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    utf8
Conn.  characterset:    utf8
UNIX socket:            /var/run/mysqld/mysqld.sock
Uptime:                 4 min 16 sec

Threads: 1  Questions: 112  Slow queries: 0  Opens: 191  Flush tables: 1  Open tables: 6  Queries per second avg: 0.437
--------------

But when I want to do any action, such as :

mysql> CREATE DATABASE moyennegenerale;
ERROR 1044 (42000): Access denied for user 'root'@'%' to database 'moyennegenerale'

I understand % is used to signify any host, but my status clearly states localhost. Does somebody have an idea of what might be going on?
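
A quick diagnostic sketch for this situation: check which account you actually authenticated as and what grants it has (a 'root'@'%' row may exist with fewer privileges than 'root'@'localhost'):

SELECT CURRENT_USER();
SELECT user, host FROM mysql.user WHERE user = 'root';
SHOW GRANTS FOR CURRENT_USER();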

Parallel inserts cause deadlock on DB2

Posted: 03 Oct 2013 07:38 AM PDT

The scenario is the following: the application attempts to insert into the same table from two parallel threads, making 4000 insertions in each thread (a separate transaction per thread). It always causes the DB to fail with the following exception:

com.ibm.db2.jcc.a.pn: The current transaction has been rolled back because of a deadlock or timeout. Reason code "2".. SQLCODE=-911, SQLSTATE=40001, DRIVER=3.52.95

The full log is below ([#1] indicates the first thread/transaction, [#2] correspondingly the second):

SQL: create table line (id integer generated by default as identity, line_number integer not null, constraint line_pk primary key (id))
[#1] SQL: insert into line (line_number) values (1)
[#1] SQL: insert into line (line_number) values (2)
[#2] SQL: insert into line (line_number) values (1)
[#2] SQL: insert into line (line_number) values (2)
[#1] SQL: insert into line (line_number) values (3)
[#2] SQL: insert into line (line_number) values (3)
[#1] SQL: insert into line (line_number) values (4)
[#2] SQL: insert into line (line_number) values (4)
...
[#2] SQL: insert into line (line_number) values (1608)
[#1] SQL: insert into line (line_number) values (1608)
[#2] SQL: insert into line (line_number) values (1609)
[#2] SQL: insert into line (line_number) values (1610)
[#2] SQL: insert into line (line_number) values (1611)
...
[#2] SQL: insert into line (line_number) values (1654)
[#2] SQL: insert into line (line_number) values (1655)
[#1] [org.epo.lifesciences.slice.db.DBTest] Thread #1 has failed
org.springframework.dao.DeadlockLoserDataAccessException: StatementCallback; SQL [insert into line (line_number) values (1608)]; The current transaction has been rolled back because of a deadlock or timeout.  Reason code "2".. SQLCODE=-911, SQLSTATE=40001, DRIVER=3.52.95; nested exception is com.ibm.db2.jcc.a.pn: The current transaction has been rolled back because of a deadlock or timeout.  Reason code "2".. SQLCODE=-911, SQLSTATE=40001, DRIVER=3.52.95
        at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTransl
        at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLException
        at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:407)
        at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:519)
     ...
[#2] SQL: insert into line (line_number) values (1656)
[#2] SQL: insert into line (line_number) values (1657)
...
[#2] SQL: insert into line (line_number) values (3999)
[#2] SQL: insert into line (line_number) values (4000)
[#2] Thread #2 completed

It looks like the lock space is exhausted much earlier than the transaction log. Solutions which I see (and which seem to work):

  • Increase the lock space, in particular by tweaking these parameters (see the sketch after this list):
    • locklist – maximum storage for lock list configuration parameter
    • maxlocks – maximum percent of lock list before escalation configuration parameter
  • Commit more frequently.
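
For reference, the first route would look roughly like this from the DB2 command line (a sketch, not a recommendation; the database name and values are placeholders):

UPDATE DB CFG FOR mydb USING LOCKLIST 8192 MAXLOCKS 30;
-- or, where the self-tuning memory manager is acceptable:
UPDATE DB CFG FOR mydb USING LOCKLIST AUTOMATIC MAXLOCKS AUTOMATIC;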

Both of them are not strictly acceptable because:

  • One needs to know in advance what the size of locklist should be, and it must be increased as the number of insertions grows, so DB2 cannot automatically adapt to an increasing workload efficiently. Also, the DB instance needs to be restarted when this parameter is changed.
  • Frequent commits also mean that there should be a way to roll back all previous commits if something goes wrong at a certain point in time. That complicates the application, as it needs to implement "revert" logic (savepoints won't help). Moreover, intermediate commits will be visible to data requesters, so the application also needs to track such "uncompleted" tasks and hide the data from clients.

The test application works OK on MySQL, HSQL and MSSQL as is (with no further tricks). So I believe there should be a way to make it work on DB2 without code refactoring, while staying at an SQL-92 compliant level. Is it possible with enterprise-level DB2? Perhaps I am missing something trivial; any feedback is welcome.

Pgpool, Postgresql and Apache tuning (1000 concurrent spatial queries) [on hold]

Posted: 03 Oct 2013 09:25 AM PDT

I'm trying to configure a load-balancing system. I have a Python script, invoked through mod_wsgi on Apache, that generates a query and executes it on pgpool: request -> wsgi python -> pgpool -> postgresql. Pgpool is configured as a load balancer using 4 servers with 24 GB of RAM and a 350 GB SSD each. Our DB is about 150 GB and a query takes about 2 seconds. These are the configurations:

Pgpool

  • num_init_children 500
  • max_pool 2
  • child_life_time 300 seconds

Apache (mpm_prefork)

  • StartServers 100
  • MinSpareServers 20
  • MaxSpareServers 45
  • ServerLimit 2000
  • MaxClients 100
  • MaxRequestsPerChild 1000

PostgreSQL

  • max_connections = 1000
  • shared_buffers = 6GB
  • work_mem = 4GB

It doesn't seem to be working. When I try to submit more than 150 concurrent queries, although the pgpool log file doesn't contain any errors, I get this error from the Python script:

OperationalError: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.

Any ideas?

Can I move rows between partitions by updating the partition key?

Posted: 03 Oct 2013 09:56 AM PDT

I would think that this would be a fairly simple question, but I've actually had a difficult time finding an answer for it.

The question: Can you move rows of data within a partitioned table from one partition to another by simply updating the partition column so that it crosses the partition boundary?

For example, if I have a table that has a partition key:

CREATE TABLE SampleTable
(
    SampleID INT PRIMARY KEY,
    SampleResults VARCHAR(100) NOT NULL
)

With the partition function that maps to the primary key:

CREATE PARTITION FUNCTION MyPartitionFunc (INT) AS
RANGE LEFT FOR VALUES (10000, 20000);

Can I move a row from the first partition to the third partition by changing the SampleID from 1 to (say) 500,000?

Note: I'm tagging this as both SQL Server 2005 and 2008, since they both support partitioning. Do they handle it differently?
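
For what it's worth, the effect is easy to observe with the $PARTITION function; a small sketch against the objects defined above (the specific IDs are arbitrary):

-- Which partition does a given SampleID map to?
SELECT SampleID,
       $PARTITION.MyPartitionFunc(SampleID) AS partition_number
FROM SampleTable
WHERE SampleID IN (1, 500000);

-- The "move" in question would simply be an update of the partitioning column:
UPDATE SampleTable SET SampleID = 500000 WHERE SampleID = 1;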

Tool to generate large datasets of test data

Posted: 03 Oct 2013 07:28 AM PDT

Many times when trying to come up with an efficient database design the best course of action is to build two sample databases, fill them with data, and run some queries against them to see which one performs better.

Is there a tool that will generate (ideally straight into the database) large (~10,000 records) sets of test data relatively quickly? I'm looking for something that at least works with MySQL.
