Tuesday, March 12, 2013

[how to] List DB2 backups

List DB2 backups

Posted: 12 Mar 2013 08:22 PM PDT

Is there a way to list the DB2 backups? All I can find is db2 list history backup all for <database>, but I think you need to check through all of those entries to see whether the backup images have since been deleted. This seems like a simple question, but I'm coming up blank.
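For reference, a minimal sketch of the history command mentioned above (the database name sample is a placeholder):

db2 list history backup all for sample

The output comes from the recovery history file, which is why entries can persist for backup images that have since been deleted from disk.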

How do I remove duplicate records in a join table in psql?

Posted: 12 Mar 2013 08:13 PM PDT

I have a table that has a schema like this:

create_table "questions_tags", :id => false, :force => true do |t|          t.integer "question_id"          t.integer "tag_id"        end          add_index "questions_tags", ["question_id"], :name => "index_questions_tags_on_question_id"        add_index "questions_tags", ["tag_id"], :name => "index_questions_tags_on_tag_id"  

I would like to remove records that are duplicates, i.e. they have both the same tag_id and question_id as another record.

What does the SQL look like for that?
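A minimal sketch of one common approach, assuming PostgreSQL (ctid identifies the physical row, so exactly one copy of each duplicate pair survives):

DELETE FROM questions_tags a
USING questions_tags b
WHERE a.ctid < b.ctid
  AND a.question_id = b.question_id
  AND a.tag_id = b.tag_id;

Adding a unique index on (question_id, tag_id) afterwards keeps the duplicates from coming back.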

Strategies for organising SQL Server with a large amount of RAM

Posted: 12 Mar 2013 05:24 PM PDT

We now have a new server for our DB, and among other things we have 128GB of RAM available (previously I had 16GB). I know SQL Server is very good at managing its resources, but I was wondering if there are any special settings or strategies that I should employ in either the server/DB settings or processing code (stored procs, indexes, etc.) to ensure that SQL Server takes best advantage of the available RAM.

The DB is about 70GB and it's non-transactional (it's a data warehouse). So basically a large WRITE followed by massive READs is the normal flow of things.
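One setting worth checking first is max server memory, so the OS and anything else on the box keep some headroom; a sketch (the 114688 MB figure is only an assumption that leaves roughly 16GB free):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 114688;
RECONFIGURE;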

Leaf nodes for averages

Posted: 12 Mar 2013 03:53 PM PDT

I have the following MDX tuple calculation for my KPI in Dashboard Designer:

AVG([Person].[Person].children,[Measures].[Hours])  

This works perfectly when, for instance, I view it by Team name.

However, when I view it by the [Person] it's returning no values. Does AVG not work when you're looking directly at the leaf nodes or something? Or is there something else I'm doing wrong?

Specify Server for DBMS_Scheduler Job in Policy Managed RAC

Posted: 12 Mar 2013 02:33 PM PDT

A unit test requires a dbms_scheduler job to run on the same RAC node that the unit test is being run from. I know that with an Admin-managed database this could be done by creating a service that limits the available instances and then using that service in a job class that the job uses. My question is: how can this be done in 11.2 with policy management?
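For reference, a sketch of the Admin-managed technique described above, with hypothetical service and class names (the open question remains how to constrain the service itself under policy management):

BEGIN
  -- Tie a job class to a service that runs only where the test runs
  DBMS_SCHEDULER.CREATE_JOB_CLASS(
    job_class_name => 'NODE1_JOB_CLASS',
    service        => 'node1_svc');
END;
/

Jobs created with job_class => 'NODE1_JOB_CLASS' then run only on instances offering node1_svc.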

Pools can be created that have only a single server in them and databases can be assigned to multiple pools, but as I understand it, a server can only be assigned to a single pool. Therefore, a service can't be created that uses a single server and still have other services that use a pool defined with multiple servers including that one.

I also know that services can be created as either SINGLETON or UNIFORM, but since SINGLETON doesn't provide for allowed servers or even preferred servers, I'm not sure how this would help.

Surely I am missing something that makes this all possible.

After streaming replication has failed, how to get it back again?

Posted: 12 Mar 2013 01:08 PM PDT

I have a similar problem to this: Replication has failed; how to get going on again?

Essentially my slave failed, and now complains "requested WAL segment 0000000100000135000000E4 has already been removed"

In my case I have done a full base backup again, as per the instructions at http://wiki.postgresql.org/wiki/Binary_Replication_Tutorial#Binary_Replication_in_6_Steps: I shut the master down, did a full rsync, started up the slave, then started the master. And I still get the same error.
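One hedged mitigation for the future, once the standby is caught up again, is to keep more WAL around on the master so a lagging slave does not outrun the recycled segments; a sketch for a 9.x-era postgresql.conf (the value is an assumption to size against available disk space):

# postgresql.conf on the master
wal_keep_segments = 256   # ~4GB of retained WAL with 16MB segments

Also note that an rsync taken while the master is running can miss WAL generated during the copy unless it is bracketed by pg_start_backup()/pg_stop_backup().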

What is a good, repeatable way to calculate MAXDOP on SQL Server?

Posted: 12 Mar 2013 05:48 PM PDT

When setting up a new SQL Server 2012, I use the following code to determine a good starting point for the MAXDOP setting:

/* If this instance is hosting a Sharepoint database, you MUST specify MAXDOP=1
   according to http://blogs.msdn.com/b/rcormier/archive/2012/10/25/you-shall-configure-your-maxdop-when-using-sharepoint-2013.aspx */

DECLARE @CoreCount int;
DECLARE @NumaNodes int;

SET @CoreCount = (SELECT i.cpu_count FROM sys.dm_os_sys_info i);
SET @NumaNodes = (SELECT MAX(c.memory_node_id) + 1
                  FROM sys.dm_os_memory_clerks c
                  WHERE memory_node_id < 64);

IF @CoreCount > 4 /* If less than 5 cores, don't bother... */
BEGIN
    DECLARE @MaxDOP int;
    SET @MaxDOP = @CoreCount * 0.75;
    IF @MaxDOP > (@CoreCount / @NumaNodes)
        SET @MaxDOP = (@CoreCount / @NumaNodes);
    PRINT 'Suggested MAXDOP = ' + CAST(@MaxDOP as varchar(max));
END

I realize this is a bit subjective and can vary based on many things; however, I'm attempting to create a tight catch-all piece of code to use as a starting point for a new server.

Does anyone have any input on this code?

UNION is slow but both queries are fast separately

Posted: 12 Mar 2013 02:57 PM PDT

I don't know what else to do about this one. I have one table that has start and stop columns, and I want to return the results of it joined both by start and by stop, with a clear distinction between the two. Both queries run fast separately:

SELECT
    UNIX_TIMESTAMP(CONVERT_TZ(start_dev, '+00:00', GetCarrierTimezone(a0.carrier_id))) AS alertStart,
    NULL AS alertStop,
    c0.name AS carrier_name,
    carrier_image,
    l0.Latitude,
    l0.Longitude
FROM
    carriers AS c0
        INNER JOIN start_stop AS a0 ON a0.carrier_id = c0.id
            INNER JOIN pcoarg AS l0 ON a0.startLogId = l0.id
WHERE
        FIND_IN_SET(a0.carrier_id, '89467,1,64578,222625,45013') > 0
    AND
        start_dev > '2013-03-11 11:46:48'
    AND
        start_dev = (SELECT MIN(start_dev) FROM start_stop AS a1 WHERE a0.carrier_id = a1.carrier_id AND DATE(a1.start_dev) = DATE(a0.start_dev))
    AND IsNotificationInSchedule(22, start_dev) > 0

So this one takes 0.063 seconds. But if I combine the two in a UNION (it doesn't matter whether it's UNION ALL, UNION DISTINCT, or anything else), it takes about 0.400 seconds.

SELECT * FROM
(
    (
        SELECT
            UNIX_TIMESTAMP(CONVERT_TZ(start_dev, '+00:00', GetCarrierTimezone(a0.carrier_id))) AS alertStart,
            NULL AS alertStop,
            c0.name AS carrier_name,
            carrier_image,
            l0.Latitude,
            l0.Longitude
        FROM
            carriers AS c0
                INNER JOIN start_stop AS a0 ON a0.carrier_id = c0.id
                    INNER JOIN pcoarg AS l0 ON a0.startLogId = l0.id
        WHERE
                FIND_IN_SET(a0.carrier_id, '89467,1,64578,222625,45013') > 0
            AND
                start_dev > '2013-03-11 11:46:48'
            AND
                start_dev = (SELECT MIN(start_dev) FROM start_stop AS a1 WHERE a0.carrier_id = a1.carrier_id AND DATE(a1.start_dev) = DATE(a0.start_dev))
            AND IsNotificationInSchedule(22, start_dev) > 0
    ) UNION ALL (
        SELECT
            NULL AS alertStart,
            UNIX_TIMESTAMP(CONVERT_TZ(stop_dev, '+00:00', GetCarrierTimezone(a0.carrier_id))) AS alertStop,
            c0.name AS carrier_name,
            carrier_image,
            l0.Latitude,
            l0.Longitude
        FROM
            start_stop AS a0
                INNER JOIN carriers AS c0 ON a0.carrier_id = c0.id
                    INNER JOIN pcoarg AS l0 ON a0.stopLogId = l0.id
        WHERE
                FIND_IN_SET(a0.carrier_id, '89467,1,64578,222625,45013') > 0
            AND
                stop_dev > '2013-03-11 11:46:48'
            AND
                stop_dev = (SELECT MAX(stop_dev) FROM start_stop AS a1 WHERE a0.carrier_id = a1.carrier_id AND DATE(a1.stop_dev) = DATE(a0.stop_dev))
            AND IsNotificationInSchedule(22, start_dev) > 0
    )
) AS startStops
ORDER BY IF(alertStart IS NULL, alertStop, alertStart)

Here is EXPLAIN on single query:

1  PRIMARY             c0  ALL  PRIMARY                                                                                                                          17  Using where
1  PRIMARY             a0  ref  PRIMARY,startstop_carriers_stopdev_idx,georefidx,startstop_carriers_startdev_idx  startstop_carriers_stopdev_idx  4  test_backoffice.c0.id          72  Using where
1  PRIMARY             l0  ref  id ASC  id ASC  4  test_backoffice.a0.startLogId   1  Using where
2  DEPENDENT SUBQUERY  a1  ref  PRIMARY,startstop_carriers_stopdev_idx,georefidx,startstop_carriers_startdev_idx  startstop_carriers_stopdev_idx  4  test_backoffice.a0.carrier_id  72  Using where; Using index

And here is the EXPLAIN for the UNION:

1  PRIMARY             <derived2>  system                                                                                                                                    0  const row not found
2  DERIVED             c0          ALL  PRIMARY                                                                                                                             17  Using where
2  DERIVED             a0          ref  PRIMARY,startstop_carriers_stopdev_idx,georefidx,startstop_carriers_startdev_idx  startstop_carriers_stopdev_idx  4  test_backoffice.c0.id          72  Using where
2  DERIVED             l0          ref  id ASC  id ASC  4  test_backoffice.a0.startLogId   1  Using where
3  DEPENDENT SUBQUERY  a1          ref  PRIMARY,startstop_carriers_stopdev_idx,georefidx,startstop_carriers_startdev_idx  startstop_carriers_stopdev_idx  4  test_backoffice.a0.carrier_id  72  Using where; Using index
4  UNION               c0          ALL  PRIMARY                                                                                                                             17  Using where
4  UNION               a0          ref  PRIMARY,startstop_carriers_stopdev_idx,georefidx,startstop_carriers_startdev_idx  startstop_carriers_stopdev_idx  4  test_backoffice.c0.id          72  Using where
4  UNION               l0          ref  id ASC  id ASC  4  test_backoffice.a0.stopLogId    1  Using where
5  DEPENDENT SUBQUERY  a1          ref  PRIMARY,startstop_carriers_stopdev_idx,georefidx,startstop_carriers_startdev_idx  startstop_carriers_stopdev_idx  4  test_backoffice.a0.carrier_id  72  Using where; Using index
   UNION RESULT        <union2,4>  ALL

Help on this one would be greatly appreciated. :)

EDIT:

I'm getting inconsistent results. If I remove the CONVERT_TZ, for example, and try to get the timezone outside the union, I get very fast results; but if I alias the result column, it automatically goes back down to the same underperforming query:

SELECT
    *,
    GetCarrierTimezone(carrier_id) timezone
FROM
(

this takes 0.374s

SELECT
    *,
    GetCarrierTimezone(carrier_id)
FROM
(

while this takes 0.078s (mostly the lag from the DB to my machine).

Are there any good open source tools for DB objects end user manipulation?

Posted: 12 Mar 2013 12:15 PM PDT

I've recently been tasked with providing some object-adjustment features for our end users: simple things like changing the value of two or three fields in specific business-known tables, without the need to call the IT department, and with some logging and auditing for our most paranoid managers.

Is there any software that already does this? (gather the table schema and data from another DB and provide a user with really simple adjusting capabilities)

Feature requests keep piling up: support for SQL Server and Oracle databases, auditing both automatic and by user choice, running processes, and what not.

Are there any good tools that provide this kind of meta/high-level simple database interaction?

Using pgAdmin SQL Editor to execute a master file containing multiple sql files

Posted: 12 Mar 2013 08:59 PM PDT

I'm using pgAdminIII SQL Editor to develop a really long script. I'd like to break the script into smaller, more manageable scripts and include each sql file in a master file, then just execute the master file.

example: master.sql

contents (I don't know the syntax to use):

file1.sql
file2.sql
file3.sql

I've found several tutorials about using psql -f in the command-line and \i to include these files, but I'd rather use a GUI to execute my scripts while I develop and test locally.

Is this possible? Any references/documentation would be very helpful.

EDIT: For clarity, I'm not asking about better products to use other than pgAdmin (unless the product can do what I'm asking above), nor am I asking how to do this in psql - I already have documentation for that and I don't want to use the command line. Preference is for executing the master.sql script file in a sql editor.
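For reference, this is what the include syntax looks like when master.sql is run through psql, and why it cannot run in pgAdmin's editor: the \i meta-command is interpreted by the psql client itself, while pgAdmin's query tool sends the text straight to the server (mydb is a placeholder):

-- master.sql, executed with: psql -d mydb -f master.sql
\i file1.sql
\i file2.sql
\i file3.sql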

Sum Up values in a query based on other information

Posted: 12 Mar 2013 03:05 PM PDT

I am trying to grab the sum of two columns when another column is the same. I currently have a record-set that looks like this:

(screenshot of the result set omitted)

I get these results by running this statement:

select distinct a.eventnum, a.num_cust, a.out_minutes, d.xpers
FROM mv_outage_duration a
INNER JOIN mv_aeven d
  ON d.Num_1 = a.eventnum
  and (d.DEV_NAME = 'T007F12127')
  and d.rev_num = (select max(rev_num)
                   from mv_aeven d
                   where a.eventnum = d.Num_1)
group by a.eventnum, a.num_cust, a.out_minutes, d.xpers

How do I get the sum of Num_cust and Out_Minutes for the record if the eventnum is the same?

I'd like to return one and only one row for each event number, and if there is more than one step, I'd like to add up the Num_cust and Out_minutes for each step.

I've tried

select distinct a.eventnum, a.num_cust, a.out_minutes, d.xpers, sum(a.Num_cust)
FROM mv_outage_duration a
INNER JOIN mv_aeven d
  ON d.Num_1 = a.eventnum
  and (d.DEV_NAME = 'T007F12127')
  and d.rev_num = (select max(rev_num)
                   from mv_aeven d
                   where a.eventnum = d.Num_1)
group by a.eventnum, a.num_cust, a.out_minutes, d.xpers

and it just returns the results as a new column sum(a.num_cust).

(screenshot of the result with the extra sum(a.num_cust) column omitted)

I also tried

select distinct a.eventnum, a.num_cust, a.out_minutes, d.xpers, sum(select Num_cust from mv_outage_duration a where a.eventnum = d.num_1)
FROM mv_outage_duration a
INNER JOIN mv_aeven d
  ON d.Num_1 = a.eventnum
  and (d.DEV_NAME = 'T007F12127')
  and d.rev_num = (select max(rev_num)
                   from mv_aeven d
                   where a.eventnum = d.Num_1)
group by a.eventnum, a.num_cust, a.out_minutes, d.xpers

...but that just wouldn't run at all.

Here are some statements to set everything up:

Create table mv_outage_duration( eventnum, num_cust, out_minutes, restore_dts, off_dts, cause, feeder, dev_name)

create table mv_aeven (Num_1, rev_num, xpers, weather_code, completion_remarks)

Insert into "mv_outage_duration" (EVENTNUM,NUM_CUST,OUT_MINUTES,RESTORE_DTS,OFF_DTS,CAUSE,FEEDER,DEV_NAME) values ('T00000000133',79,11,'20130307085914CS','20130307084811CS','10','17FL012011','T007F12127');
Insert into "mv_outage_duration" (EVENTNUM,NUM_CUST,OUT_MINUTES,RESTORE_DTS,OFF_DTS,CAUSE,FEEDER,DEV_NAME) values ('T00000000133',61,13,'20130307090200CS','20130307084811CS','10','17FL012011','T007F12127');
Insert into "mv_outage_duration" (EVENTNUM,NUM_CUST,OUT_MINUTES,RESTORE_DTS,OFF_DTS,CAUSE,FEEDER,DEV_NAME) values ('T00000000014',61,4,'20130304140400CS','20130304135945CS','09','17FL012011','T007F12127');
Insert into "mv_outage_duration" (EVENTNUM,NUM_CUST,OUT_MINUTES,RESTORE_DTS,OFF_DTS,CAUSE,FEEDER,DEV_NAME) values ('T00000000173',79,1,'20130307161532CS','20130307161424CS','01','17FL012011','T007F12127');
Insert into "mv_outage_duration" (EVENTNUM,NUM_CUST,OUT_MINUTES,RESTORE_DTS,OFF_DTS,CAUSE,FEEDER,DEV_NAME) values ('T00000000173',61,3,'20130307161800CS','20130307161424CS','01','17FL012011','T007F12127');

Insert into "mv_aeven" (NUM_1,REV_NUM,XPERS,WEATHER_CODE,COMPLETION_REMARKS) values ('T00000000014',10,796072,'LIGHTNING IN AREA','COMPLETETION REMARKS FROM TRUCK ON TOFS. ');
Insert into "mv_aeven" (NUM_1,REV_NUM,XPERS,WEATHER_CODE,COMPLETION_REMARKS) values ('T00000000014',11,796072,'NORMAL FOR SEASON','COMPLETETION REMARKS FROM TRUCK ON TOFS.');
Insert into "mv_aeven" (NUM_1,REV_NUM,XPERS,WEATHER_CODE,COMPLETION_REMARKS) values ('T00000000173',7,79607,'LIGNTNING IN AREA','wetr');
Insert into "mv_aeven" (NUM_1,REV_NUM,XPERS,WEATHER_CODE,COMPLETION_REMARKS) values ('T00000000173',6,79607,'LIGNTNING IN AREA','wetr');
Insert into "mv_aeven" (NUM_1,REV_NUM,XPERS,WEATHER_CODE,COMPLETION_REMARKS) values ('T00000000133',7,796072,'THUNDERSTORM','Testing Step Restore for Kasey');
Insert into "mv_aeven" (NUM_1,REV_NUM,XPERS,WEATHER_CODE,COMPLETION_REMARKS) values ('T00000000133',6,796072,'THUNDERSTORM','Testing Step Restore for Kasey');
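A sketch of the aggregation being asked for, using the tables above (this assumes xpers is constant within an event, so MAX() can carry it through the grouping):

select a.eventnum,
       sum(a.num_cust)    as num_cust,
       sum(a.out_minutes) as out_minutes,
       max(d.xpers)       as xpers
from   mv_outage_duration a
inner join mv_aeven d
  on   d.Num_1 = a.eventnum
 and   d.DEV_NAME = 'T007F12127'
 and   d.rev_num = (select max(rev_num) from mv_aeven x where x.Num_1 = a.eventnum)
group by a.eventnum;

Grouping only by eventnum (instead of by the per-step columns) is what collapses the steps into one row per event.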

Install PostgreSQL 9.2 on Windows using WIN1252 encoding

Posted: 12 Mar 2013 01:26 PM PDT

I had installed PostgreSQL 9.2 earlier and it always installed with the encoding being WIN1252 (the default database was WIN1252). Then, some time ago, I reinstalled it with the encoding being UTF8 (I don't exactly remember what I did). I am now trying to re-install PostgreSQL again with the encoding set to WIN1252. I am installing PostgreSQL 9.2.2 from the installer executable and using an options file. I am setting the locale to "English, United States" and the installer-language to "en". Are these the wrong values to use? Is there some internal variable I must have set to UTF8 that PostgreSQL is reading to know to use UTF8? I don't see any reference to UTF8 anywhere when I install PostgreSQL. After I install it, it shows my database is UTF8 and the client_encoding variable is set to WIN1252.
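If the installer keeps producing a UTF8 template, one workaround is to create the database explicitly afterwards; a sketch, assuming the "English, United States" locale maps to the 1252 code page (mydb is a placeholder):

CREATE DATABASE mydb
  ENCODING 'WIN1252'
  LC_COLLATE 'English_United States.1252'
  LC_CTYPE   'English_United States.1252'
  TEMPLATE template0;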

Can the same database be log-shipping secondary and primary at the same time?

Posted: 12 Mar 2013 11:15 AM PDT

Here is my scenario:

Database DB1 on Server1 is log shipping primary in data center.

Database DB1 on Server2 is log shipping secondary; Server2 is in remote location. Logs are shipped from data center to remote location via shared virtual Jungle Disk drive accessible both from data center and remote location via internet.

In case I fail over to Server2 I would like to have log backups as well.

So my thinking is: after configuring DB1 on Server2 as the log shipping secondary, I would then also configure it as a log shipping primary (even though these log backups won't get shipped anywhere from Server2). While database DB1 on Server2 is in "secondary" mode, the log backup job would probably be disabled.

Is this a valid use for log shipping?

Performing SELECT on EACH ROW in CTE or Nested QUERY?

Posted: 12 Mar 2013 11:45 AM PDT

This is a problem in PostgreSQL.

I have a table which stores a tree of users:

+------+---------+
|  id  | parent  |
+------+---------+
|  1   |   0     |
|  2   |   1     |
|  3   |   1     |
|  4   |   2     |
|  5   |   2     |
|  6   |   4     |
|  7   |   6     |
|  8   |   6     |
+------+---------+

I can query a complete tree from any node by using the connectby function, and I can separately query the size of a tree in terms of total nodes in it, for example:

tree for #1 has size 7
tree for #5 has size 0
tree for #6 has size 2, and so on

Now I want to do something like selecting all possible trees from this table (which is again carried out by connectby), count the size of each, and create another dataset with records of ID and size of the underlying tree, like this:

+------------------+-------------+
|  tree_root_node  |  tree_size  |
+------------------+-------------+
|        1         |      7      |
|        2         |      3      |
|        3         |      0      |
|        4         |      3      |
|        5         |      0      |
|        6         |      2      |
|        7         |      0      |
|        8         |      0      |
+------------------+-------------+

The problem is, I am unable to perform the same SELECT statement for every available row in the original table in order to fetch each tree and calculate its size; and even if I could, I don't know how to create a separate dataset using the fetched and calculated data.

I am not sure whether this could be a simple use of some functions available in Postgres, or whether I'd have to write a function for it; I simply don't know what this kind of query is called, and googling for hours plus searching for another hour here on dba.stackexchange returned nothing.

Can someone please point me in the right direction?
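One direction worth trying is a single recursive CTE instead of per-row connectby calls; a sketch, assuming the table is called user_tree with the id and parent columns shown (tree_size here counts all descendants of each node):

WITH RECURSIVE sub AS (
    SELECT id AS root, id FROM user_tree
    UNION ALL
    SELECT s.root, t.id
    FROM user_tree t
    JOIN sub s ON t.parent = s.id
)
SELECT root AS tree_root_node, COUNT(*) - 1 AS tree_size
FROM sub
GROUP BY root
ORDER BY root;

Each node seeds its own traversal, so the GROUP BY produces one row per tree root in a single statement.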

SQL Server 2008 can't repair consistency

Posted: 12 Mar 2013 05:52 PM PDT

I have a problem with a SQL Server 2008 database.

Launching

DBCC CHECKDB  

I get this error:

SQL Server detected a logical consistency-based I/O error: incorrect checksum (expected: 0xd2e00940; actual: 0x925ef494). It occurred during a read of page (1:15215) in database ID 22 at offset 0x000000076de000 in file 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\storico_ita_tlx.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

I found the table causing the problem:

DBCC CHECKTABLE  

Msg 824, Level 24, State 2, Line 8
SQL Server detected a logical consistency-based I/O error: incorrect checksum (expected: 0xd2e00940; actual: 0x925ef494). It occurred during a read of page (1:15215) in database ID 22 at offset 0x000000076de000 in file 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\storico_ita_tlx.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately.

Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

So I tried with the repair operations:

DBCC CHECKTABLE (table_name, REPAIR_ALLOW_DATA_LOSS)  

but I get the same error:

Msg 824, Level 24, State 2, Line 8
SQL Server detected a logical consistency-based I/O error: incorrect checksum (expected: 0xd2e00940; actual: 0x925ef494). It occurred during a read of page (1:15215) in database ID 22 at offset 0x000000076de000 in file 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\storico_ita_tlx.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately.

Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

I've also tried setting the DB in SINGLE_USER mode, but with no result.

I am not able to delete or truncate the table, as I always get the same error.

The table does not have any constraints. It has one PK and one index, but I can't drop either of them.
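For reference, the usual shape of the repair attempt (the database name is assumed from the .mdf path above); if the page is physically unreadable, even this can keep failing, and restoring from a clean backup is the standard fallback:

ALTER DATABASE storico_ita_tlx SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB ('storico_ita_tlx', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE storico_ita_tlx SET MULTI_USER;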

How can I store a pdf in PostgreSQL

Posted: 12 Mar 2013 04:45 PM PDT

I have to store .pdf files in a table.

I have a table, state, with columns:

id_state,
name,
pdffile (bytea)

I want to store the pdf files in the pdffile column.

How can I do this?
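A minimal sketch of the insert itself; bytea literals can be written in PostgreSQL's hex format (the hex below is just the '%PDF-1.4' header bytes, for illustration):

INSERT INTO state (id_state, name, pdffile)
VALUES (1, 'example', '\x255044462d312e34'::bytea);

In practice the client driver binds the whole file's contents as a binary parameter instead of building the literal by hand.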

How do I prevent deadlocks in my application?

Posted: 12 Mar 2013 11:53 AM PDT

I am developing an LMS application in a PHP framework (CodeIgniter 2.1.0). I am using a MySQL database. All the tables in the database use the InnoDB engine, and I have created indexes on each table. Now I am doing load testing with JMeter 2.9 locally, with 200 concurrent users. During the load testing, in a specific page action, I got a "Deadlock found" error. I changed my original query to a new one, but the same error keeps occurring.

I have written a save_interactions function which takes four parameters (interaction array, module_id, course_id, user_id) and is called many times by the AJAX script. The following script inserts the record if the specific interaction_id is not present in that table; otherwise the update query fires.

public function save_interactions($interaction_array, $modid, $cid, $uid)
{
    foreach ($interaction_array as $key => $interact_value)
    {
        $select_query = $this->db->query("SELECT COUNT(*) AS total FROM `scorm_interactions` WHERE `mod_id`='".$modid."' AND `course_id`='".$cid."' AND `user_id`='".$uid."' AND `interaction_id`='".$interact_value[0]."'");
        $fetchRow = $select_query->row_array();

        if ($fetchRow['total'] == 1)
        {
            $update_data = array(
                "interaction_type" => $interact_value[1],
                "time" => $interact_value[2],
                "weighting" => $interact_value[3],
                "correct_response" => $interact_value[4],
                "learner_response" => $interact_value[5],
                "result" => $interact_value[6],
                "latency" => $interact_value[7],
                "objectives" => $interact_value[8],
                "description" => $interact_value[9]
            );
            $this->db->where('mod_id', $modid);
            $this->db->where('course_id', $cid);
            $this->db->where('user_id', $uid);
            $this->db->where('interaction_id', $interact_value[0]);
            $this->db->update('scorm_interactions', $update_data);
        }
        else
        {
            $insert_data = array(
                "user_id" => $uid,
                "course_id" => $cid,
                "mod_id" => $modid,
                "interaction_id" => $interact_value[0],
                "interaction_type" => $interact_value[1],
                "time" => $interact_value[2],
                "weighting" => $interact_value[3],
                "correct_response" => $interact_value[4],
                "learner_response" => $interact_value[5],
                "result" => $interact_value[6],
                "latency" => $interact_value[7],
                "objectives" => $interact_value[8],
                "description" => $interact_value[9]
            );
            $this->db->insert('scorm_interactions', $insert_data);
        }
    }
}

I got this type of error:

Deadlock found when trying to get lock; try restarting transaction

UPDATE `scorm_interactions` SET
    `interaction_type` = 'choice',
    `time` = '10:45:31',
    `weighting` = '1',
    `correct_response` = 'Knees*',
    `learner_response` = 'Knees*',
    `result` = 'correct',
    `latency` = '0000:00:02.11',
    `objectives` = 'Question2_1',
    `description` = ''
WHERE
    `mod_id` = '4' AND
    `course_id` = '5' AND
    `user_id` = '185' AND
    `interaction_id` = 'Question2_1';

Filename: application/models/user/scorm1_2_model.php Line Number: 234

Can anyone please suggest how to avoid the deadlock?
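One commonly suggested restructuring is to replace the SELECT-then-INSERT/UPDATE pattern with a single statement, which removes the race between concurrent requests; a sketch, assuming a unique key exists on (user_id, course_id, mod_id, interaction_id) and showing only a few of the columns:

INSERT INTO scorm_interactions
    (user_id, course_id, mod_id, interaction_id, interaction_type, result)
VALUES
    (185, 5, 4, 'Question2_1', 'choice', 'correct')
ON DUPLICATE KEY UPDATE
    interaction_type = VALUES(interaction_type),
    result           = VALUES(result);

The remaining columns follow the same VALUES() pattern, and the whole per-row round trip collapses into one statement.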

Breaking Semisynchronous Replication in MySQL 5.5

Posted: 12 Mar 2013 12:25 PM PDT

I've set up Semisynchronous Replication between two MySQL 5.5 servers running on Windows 7.

My application is running and updating the database on the master server, and the same changes are being applied to the slave database server.

But sometimes, for unknown reasons, replication breaks.

On running the command:

SHOW STATUS LIKE 'Rpl_semi_sync%';  

It gives this status:

'Rpl_semi_sync_master_no_times', '0'
'Rpl_semi_sync_master_no_tx', '0'
'Rpl_semi_sync_master_status', 'ON'     <<-------------
'Rpl_semi_sync_master_timefunc_failures', '0'
'Rpl_semi_sync_master_tx_avg_wait_time', '338846'
'Rpl_semi_sync_master_tx_wait_time', '29479685'
'Rpl_semi_sync_master_tx_waits', '87'
'Rpl_semi_sync_master_wait_pos_backtraverse', '0'
'Rpl_semi_sync_master_wait_sessions', '0'
'Rpl_semi_sync_master_yes_tx', '3106'

Ideally, in semisynchronous replication, when the sync breaks the status should switch to OFF, since the master is not able to receive any acknowledgement from the slave. Please help us in this regard.

Script to get duration

Posted: 12 Mar 2013 02:53 PM PDT

I am new to PostgreSQL. I am trying to write a query which can give me the duration between the two times. The fields are in the format yyyymmddhhmmss. Some rows have an empty Start_Time or End_Time; I want to skip those rows and get the output for the rest.

Start_Time       End_Time
20130312080535   20130312080550
20130312080018   20130312080028
20130312080030   20130312080049
20130311154049   20130311154138
20130311225510
20130311152500   20130311152538
20130311225510
20130311152539   20130311152614
20130311152740   20130311152806
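A sketch of one way to do this in PostgreSQL, assuming the columns are stored as text in a table called log_times (the empty values are filtered out before the conversion):

SELECT start_time,
       end_time,
       to_timestamp(end_time, 'YYYYMMDDHH24MISS')
         - to_timestamp(start_time, 'YYYYMMDDHH24MISS') AS duration
FROM log_times
WHERE coalesce(start_time, '') <> ''
  AND coalesce(end_time, '') <> '';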

Is there an execution difference between a JOIN condition and a WHERE condition?

Posted: 12 Mar 2013 02:40 PM PDT

Is there a performance difference between these two example queries?

Query 1:

select count(*)
from   table1 a
join   table2 b
on     b.key_col = a.key_col
where  b.tag = 'Y'

Query 2:

select count(*)
from   table1 a
join   table2 b
on     b.key_col = a.key_col
   and b.tag = 'Y'

Notice the only difference is the placement of the supplemental condition; the first uses a WHERE clause and the second adds the condition to the ON clause.

When I run these queries on my Teradata system, the explain plans are identical and the JOIN step shows the additional condition in each case. However, on this SO question regarding MySQL, one of the answers suggested that the second style is preferred because WHERE processing occurs after the joins are made.

Is there a general rule to follow when coding queries like this? I'm guessing it must be platform dependent since it obviously makes no difference on my database, but perhaps that is just a feature of Teradata. And if it is platform dependent, I'd like very much to get a few documentation references; I really don't know what to look for.

DB2 Server Table Space Locked

Posted: 12 Mar 2013 02:11 PM PDT

At work we keep receiving the following DataException seemingly at random when one of our processes tries to write/access a table for one of our clients:

com.ibm.db.DataException: A database manager error occurred. :
[IBM][CLI Driver][DB2/NT64] SQL0290N  Table space access is not allowed.  SQLSTATE=55039

Has anyone encountered this? I'm not the person who primarily does administrative tasks on our databases, but even they seem to be having difficulty finding the root of this problem. Any suggestions? This error comes up for only one of our clients at a time, and it generally seems to rotate. We have Rackspace service, but they won't be of much help unless we can provide screenshots, etc. at the exact moment this occurs.

Apologies if this post may be too vague, please let me know what information to supply to clarify things more. I'm one of the developers in my office, but I don't primarily handle the administrative tasks on our databases.

edit: We spoke with IBM, and this could possibly be caused by some sort of virus scan being run by IBM/Rackspace as part of maintenance. They said this somewhat dubiously, though, so I doubt it is the culprit, because the tables remained locked for variable amounts of time.
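One way to see the table space state at the moment the error occurs (a sketch; run from the DB2 command line while connected to the affected database):

db2 list tablespaces show detail

A state other than 0x0000 (Normal) in the output would indicate which operation, such as a backup or load, is holding the table space.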

Relational database for address model

Posted: 12 Mar 2013 12:07 PM PDT

I want to design an "Address" model for all types of entities like users, businesses, etc.

I have two types of main models: one is User and the other is Business. Each one has different address types like below.

User

1. Contact Address
2. Billing Address

Business

1. Contact Address
2. something

So I created an Address model with an addresstype column, like this:

Address

id
addresstype
user
addressline1
addressline2

Relationships:

  • User – One to many –> Business
  • User – One to many –> Address (User Column)

With the above relationships, the addresstype and user columns tie an Address to a User, but a Business's address is not related to Address at all.

How can I design this one in an efficient way?
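One conventional way out, sketched below with assumed table and column names: give Address one nullable foreign key per owner type and enforce that exactly one is set, so both User and Business addresses live in the same table:

CREATE TABLE address (
    id            INT PRIMARY KEY,
    addresstype   VARCHAR(20) NOT NULL,
    user_id       INT NULL REFERENCES users (id),
    business_id   INT NULL REFERENCES business (id),
    addressline1  VARCHAR(200),
    addressline2  VARCHAR(200),
    CHECK ((user_id IS NULL) <> (business_id IS NULL))
);

The alternative is one address table per owner type, which avoids the nullable keys at the cost of duplicated structure.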

unable to login oracle as sysdba

Posted: 12 Mar 2013 06:38 PM PDT

I just got 11gR2 running and was able to connect as sysdba. I shut down and started up the database to mount a new pfile. Now, I cannot log in as sysdba. My parameter for the password file is:

 *.remote_login_passwordfile='EXCLUSIVE'  

I am using sqlplus within the server. This is not a remote connection.

[oracle@oel56 ~]$ sqlplus /nolog

SQL*Plus: Release 11.2.0.1.0 Production on Tue Feb 5 22:50:46 2013

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

SQL> conn / as sysdba
ERROR:
ORA-01031: insufficient privileges

Here's some more information:

[oracle@oel56 ~]$ grep -E "ine SS_DBA|ine SS_OPER" $ORACLE_HOME/rdbms/lib/config.c
#define SS_DBA_GRP "oracle"
#define SS_OPER_GRP "oracle"
[oracle@oel56 ~]$ id oracle
uid=500(oracle) gid=500(oracle) groups=500(oracle),54321(oinstall),54322(dba),54323(oper) context=user_u:system_r:unconfined_t

Is there a combination of columns in sys.dm_exec_sessions that is unique per the server?

Posted: 12 Mar 2013 04:42 PM PDT

In SQL Server, each session has its own spid. Spids are unique at any given moment, but spids, like process and thread identifiers in the OS, are recycled.

However sys.dm_exec_sessions has other columns with session metadata. Is there a combination of columns that is guaranteed to be unique for a server instance?
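A hedged sketch of the pair most often relied on for this (a common working assumption rather than a documented guarantee: session_id is recycled, but not together with an identical login_time):

SELECT session_id, login_time
FROM sys.dm_exec_sessions;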

"connectivity libraries needed are not found" error in IBM Data Studio

Posted: 12 Mar 2013 02:37 PM PDT

UPDATE

I am getting the following error when I try to create a new database in IBM Data Studio v3.1.1.0.

The connectivity libraries that are needed for local or remote non-JDBC operations were not found. To provide these libraries, you can install IBM data server client or a local DB2 server instance.  

I have already started the instance using

db2start  

command.

After searching exhaustively, I am not able to find any help on the internet regarding this error.

How to insert into junction table using triggers

Posted: 12 Mar 2013 01:39 PM PDT

Sorry in advance if this is "basic SQL." I wanted to know how to update my junction tables automatically. For example, these are my tables.

Artist and Song are base tables and SongArtist is the junction table. Everything in SongArtist is PK and FK.

CREATE TABLE IF NOT EXISTS `Artist` (
  `artistID` INT NOT NULL AUTO_INCREMENT ,
  `artistName` VARCHAR(150) NOT NULL ,
  PRIMARY KEY (`artistID`) )
ENGINE = InnoDB

CREATE TABLE IF NOT EXISTS `Song` (
  `songName` VARCHAR(150) NOT NULL ,
  `songID` INT NOT NULL AUTO_INCREMENT ,
  PRIMARY KEY (`songID`) )
ENGINE = InnoDB

CREATE TABLE IF NOT EXISTS `SongArtist` (
  `songID` INT NOT NULL ,
  `artistID` INT NOT NULL ,
  PRIMARY KEY (`songID`, `artistID`) ,
  INDEX `fk_Artist_Artist_idx` (`artistID` ASC) ,
  INDEX `fk_Song_Song_idx` (`songID` ASC) ,
  CONSTRAINT `fk_Song_Song`
    FOREIGN KEY (`songID` )
    REFERENCES `Song` (`songID` )
    ON DELETE CASCADE
    ON UPDATE CASCADE,
  CONSTRAINT `fk_Artist_Artist`
    FOREIGN KEY (`artistID` )
    REFERENCES `Artist` (`artistID` )
    ON DELETE CASCADE
    ON UPDATE CASCADE)
ENGINE = InnoDB

I created some triggers like this, but they don't seem to work: I can't just INSERT INTO the junction table and add a new row when I only know one field of it, because both columns are part of the PK.

CREATE TRIGGER after_song_insert AFTER INSERT ON Song
FOR EACH ROW
BEGIN
    INSERT INTO SongArtist (songID) values (songID);
END;

CREATE TRIGGER after_song_update AFTER UPDATE ON Song
FOR EACH ROW
BEGIN
    INSERT INTO SongArtist (songID) values (songID);
END;

CREATE TRIGGER after_song_delete AFTER DELETE ON Song
FOR EACH ROW
BEGIN
    DELETE FROM SongArtist (songID) values (songID);
END;
$$

DELIMITER ;

What should I do?
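Since artistID simply isn't known inside a trigger on Song, the usual approach is to skip the triggers and insert the link row from application code once both keys exist; a sketch in MySQL (names and values are hypothetical):

INSERT INTO Artist (artistName) VALUES ('Some Artist');
SET @artist_id = LAST_INSERT_ID();

INSERT INTO Song (songName) VALUES ('Some Song');
SET @song_id = LAST_INSERT_ID();

INSERT INTO SongArtist (songID, artistID) VALUES (@song_id, @artist_id);

The ON DELETE CASCADE clauses already remove SongArtist rows when either parent is deleted, so the after-delete trigger is unnecessary.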

Data dictionary best practices in SQL Server 2008 r2

Posted: 12 Mar 2013 07:38 PM PDT

We are interested in sharing the metadata and data dictionary among the team. I know that we can use Extended Properties for this purpose, but in my experience they get out of date easily, because team members tend to forget to update them or skip this step.
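For reference, a sketch of the Extended Properties route mentioned above (object names are hypothetical):

EXEC sys.sp_addextendedproperty
     @name  = N'MS_Description',
     @value = N'Customer master table',
     @level0type = N'SCHEMA', @level0name = N'dbo',
     @level1type = N'TABLE',  @level1name = N'Customer';

The values can be read back from sys.extended_properties, which makes it possible to generate a dictionary report automatically even when updates are sporadic.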

I'm wondering if there is a more convenient way to create the data dictionary which can be maintained with the least amount of effort and time.

Thank you.

InnoDB - High disk write I/O on ibdata1 file and ib_logfile0

Posted: 12 Mar 2013 12:12 PM PDT

Server Specification: a VPS with the following info:

model name : Intel(R) Xeon(R) CPU E5649 @ 2.53GHz
MemTotal:    2058776 kB
MemFree:      244436 kB

We are running IP.Board from Invision Power Services. We use innodb_file_per_table and have reloaded the database to reduce the ibdata1 size. However, we still have problems with high CPU and I/O usage lately, despite the reduced ibdata1 size.

From my inspection, I believe it is caused by high I/O usage on ibdata1. Below is the data I obtained using pt-ioprofile -cell sizes (in Percona Toolkit). Basically, it's the total I/O amount collected over a period of 30 seconds.

# pt-ioprofile -cell sizes
Fri Jul 20 10:22:23 ICT 2012
Tracing process ID 8581
     total      pread       read     pwrite      fsync       open      close   getdents      lseek      fcntl filename
   6995968          0          0    6995968          0          0          0          0          0          0 /db/mysql/ibdata1
   1019904          0          0    1019904          0          0          0          0          0          0 /db/mysql/ib_logfile0
    204800     204800          0          0          0          0          0          0          0          0 /db/mysql/admin_phpbb3forum/phpbb_posts.ibd
     49152      49152          0          0          0          0          0          0          0          0 /db/mysql/admin_ips/ips_reputation_cache.ibd
     32768      32768          0          0          0          0          0          0          0          0 /db/mysql/admin_ips/ips_reputation_totals.ibd
     29808          0          0          0          0          0          0      29808          0          0 /db/mysql/admin_ips/
... (other trivial I/O records truncated)

Running iotop, I see DISK WRITE going up and down between roughly 200 KB/s and 2 MB/s.

My question is: why do we have high write I/O on ibdata1 and ib_logfileX while we have only about 5-10 small updates per second to our session tables, which are also MEMORY tables (only about 300K in size)? It puzzles me because there is also no equivalent write I/O on any other table file, which indicates that the write I/O is not caused by UPDATE/INSERT/DELETE.

Note that I'm only a programmer who just by chance has the duty to maintain this, so please feel free to ask for more info. I've done a lot of things to this server, but please don't assume that I have done everything I should have.

Additional info:

# ls -l /db/mysql/ib*
-rw-rw---- 1 mysql mysql  18874368 Jul 21 01:26 /db/mysql/ibdata1
-rw-rw---- 1 mysql mysql 134217728 Jul 21 01:26 /db/mysql/ib_logfile0
-rw-rw---- 1 mysql mysql 134217728 Jul 21 01:26 /db/mysql/ib_logfile1

and

mysql> SHOW VARIABLES LIKE 'innodb%';
+-------------------------------------------+------------------------+
| Variable_name                             | Value                  |
+-------------------------------------------+------------------------+
| innodb_adaptive_flushing                  | ON                     |
| innodb_adaptive_flushing_method           | estimate               |
| innodb_adaptive_hash_index                | ON                     |
| innodb_adaptive_hash_index_partitions     | 1                      |
| innodb_additional_mem_pool_size           | 20971520               |
| innodb_autoextend_increment               | 8                      |
| innodb_autoinc_lock_mode                  | 1                      |
| innodb_blocking_buffer_pool_restore       | OFF                    |
| innodb_buffer_pool_instances              | 1                      |
| innodb_buffer_pool_restore_at_startup     | 0                      |
| innodb_buffer_pool_shm_checksum           | ON                     |
| innodb_buffer_pool_shm_key                | 0                      |
| innodb_buffer_pool_size                   | 402653184              |
| innodb_change_buffering                   | all                    |
| innodb_checkpoint_age_target              | 0                      |
| innodb_checksums                          | ON                     |
| innodb_commit_concurrency                 | 0                      |
| innodb_concurrency_tickets                | 500                    |
| innodb_corrupt_table_action               | assert                 |
| innodb_data_file_path                     | ibdata1:10M:autoextend |
| innodb_data_home_dir                      |                        |
| innodb_dict_size_limit                    | 0                      |
| innodb_doublewrite                        | ON                     |
| innodb_doublewrite_file                   |                        |
| innodb_fake_changes                       | OFF                    |
| innodb_fast_checksum                      | OFF                    |
| innodb_fast_shutdown                      | 1                      |
| innodb_file_format                        | Barracuda              |
| innodb_file_format_check                  | ON                     |
| innodb_file_format_max                    | Barracuda              |
| innodb_file_per_table                     | ON                     |
| innodb_flush_log_at_trx_commit            | 2                      |
| innodb_flush_method                       | O_DIRECT               |
| innodb_flush_neighbor_pages               | 0                      |
| innodb_force_load_corrupted               | OFF                    |
| innodb_force_recovery                     | 0                      |
| innodb_ibuf_accel_rate                    | 100                    |
| innodb_ibuf_active_contract               | 1                      |
| innodb_ibuf_max_size                      | 201310208              |
| innodb_import_table_from_xtrabackup       | 0                      |
| innodb_io_capacity                        | 4000                   |
| innodb_kill_idle_transaction              | 0                      |
| innodb_large_prefix                       | OFF                    |
| innodb_lazy_drop_table                    | 0                      |
| innodb_lock_wait_timeout                  | 50                     |
| innodb_locks_unsafe_for_binlog            | OFF                    |
| innodb_log_block_size                     | 4096                   |
| innodb_log_buffer_size                    | 4194304                |
| innodb_log_file_size                      | 134217728              |
| innodb_log_files_in_group                 | 2                      |
| innodb_log_group_home_dir                 | ./                     |
| innodb_max_dirty_pages_pct                | 75                     |
| innodb_max_purge_lag                      | 0                      |
| innodb_mirrored_log_groups                | 1                      |
| innodb_old_blocks_pct                     | 37                     |
| innodb_old_blocks_time                    | 0                      |
| innodb_open_files                         | 300                    |
| innodb_page_size                          | 16384                  |
| innodb_purge_batch_size                   | 20                     |
| innodb_purge_threads                      | 1                      |
| innodb_random_read_ahead                  | OFF                    |
| innodb_read_ahead                         | linear                 |
| innodb_read_ahead_threshold               | 56                     |
| innodb_read_io_threads                    | 24                     |
| innodb_recovery_stats                     | OFF                    |
| innodb_recovery_update_relay_log          | OFF                    |
| innodb_replication_delay                  | 0                      |
| innodb_rollback_on_timeout                | OFF                    |
| innodb_rollback_segments                  | 128                    |
| innodb_show_locks_held                    | 10                     |
| innodb_show_verbose_locks                 | 0                      |
| innodb_spin_wait_delay                    | 6                      |
| innodb_stats_auto_update                  | 0                      |
| innodb_stats_method                       | nulls_equal            |
| innodb_stats_on_metadata                  | OFF                    |
| innodb_stats_sample_pages                 | 8                      |
| innodb_stats_update_need_lock             | 1                      |
| innodb_strict_mode                        | OFF                    |
| innodb_support_xa                         | ON                     |
| innodb_sync_spin_loops                    | 30                     |
| innodb_table_locks                        | ON                     |
| innodb_thread_concurrency                 | 0                      |
| innodb_thread_concurrency_timer_based     | OFF                    |
| innodb_thread_sleep_delay                 | 10000                  |
| innodb_use_global_flush_log_at_trx_commit | ON                     |
| innodb_use_native_aio                     | ON                     |
| innodb_use_sys_malloc                     | ON                     |
| innodb_use_sys_stats_table                | OFF                    |
| innodb_version                            | 1.1.8-rel27.1          |
| innodb_write_io_threads                   | 24                     |
+-------------------------------------------+------------------------+
90 rows in set (0.00 sec)

From @RolandoMySQLDBA: Please run this

SET @TimeInterval = 300;
SELECT variable_value INTO @num1 FROM information_schema.global_status
WHERE variable_name = 'Innodb_os_log_written';
SELECT SLEEP(@TimeInterval);
SELECT variable_value INTO @num2 FROM information_schema.global_status
WHERE variable_name = 'Innodb_os_log_written';
SET @ByteWrittenToLog = @num2 - @num1;
SET @KB_WL = @ByteWrittenToLog / POWER(1024,1) * 3600 / @TimeInterval;
SET @MB_WL = @ByteWrittenToLog / POWER(1024,2) * 3600 / @TimeInterval;
SET @GB_WL = @ByteWrittenToLog / POWER(1024,3) * 3600 / @TimeInterval;
SELECT @KB_WL,@MB_WL,@GB_WL;

and show the output. This will tell you how many bytes per hour are written to ib_logfile0/ib_logfile1, based on the next 5 minutes.

Above SQL query result (at 8 AM local time, when the number of members online is about 25% of the daytime stat):

mysql> SELECT @KB_WL,@MB_WL,@GB_WL;
+--------+----------+-------------------+
| @KB_WL | @MB_WL   | @GB_WL            |
+--------+----------+-------------------+
|  95328 | 93.09375 | 0.090911865234375 |
+--------+----------+-------------------+
1 row in set (0.00 sec)

Performance implications of MySQL VARCHAR sizes

Posted: 12 Mar 2013 01:33 PM PDT

Is there a performance difference in MySQL between varchar sizes? For example, varchar(25) and varchar(64000). If not, is there a reason not to declare all varchars with the max size just to ensure you don't run out of room?

Comfortable sqlplus interface?

Posted: 12 Mar 2013 11:26 AM PDT

I find sqlplus's interface rather outdated. It's quite nice to have some commands or keywords at my disposal, but, for example, no arrow-up key for the previous history entry is available.

What is a good replacement / extension for sqlplus? Could be a GUI or better (so it stays useful via SSH) a command line utility.

SQL*Plus is the main command line tool to operate with the Oracle Database.
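One widely used workaround that keeps SQL*Plus itself is to run it under a readline wrapper; a sketch, assuming the rlwrap utility is installed (the connection string is a placeholder):

rlwrap sqlplus user/password@db

rlwrap adds arrow-key history and line editing to any command-line program, so the combination stays useful over SSH.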
