Saturday, August 24, 2013

[how to] Is STRICT_TRANS_TABLES required for replication?

Is STRICT_TRANS_TABLES required for replication?

Posted: 24 Aug 2013 08:23 PM PDT

Is STRICT_TRANS_TABLES required for replication? In particular, I need to know about it for Percona Cluster.
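For context, a quick way to inspect and set the mode on each node (a sketch; the mode string shown is only an example, not a recommendation):

-- Inspect the SQL mode on each node; differing modes between nodes
-- is what typically makes replicated statements behave differently:
SELECT @@GLOBAL.sql_mode;

-- Example of setting it explicitly at runtime:
SET GLOBAL sql_mode = 'STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION';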

Schema Change Script : Validation and test cases

Posted: 24 Aug 2013 07:14 PM PDT

I have written a schema change script to move a certain column C from Table A to Table B. This is my logic/algorithm:

  1. If Column C does not exist already in Table B, add it.
  2. If column C exists in Table B, update the newly added column's values (which would initially be null) to the corresponding matching values from Table A, on the condition A.ColumnZ (primary key) = B.ColumnZ (foreign key).
  3. Drop Column C from Table A.

I have written the script (a simple one, though). Since database changes are highly critical (this will be run on the production database, and the column contains values related to users' billing information), I want to test it completely before deployment.

I would like to get suggestions for test cases to validate the script. Since this is the first time we're making such change scripts, any other advice on potential problems with change scripts would be appreciated as well.
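For reference, a minimal sketch of the three steps (assuming MySQL; table and column names are placeholders, and column C's type is assumed to be INT):

-- Step 1: add the column to Table B
-- (the "if not exists" check would query information_schema.COLUMNS first)
ALTER TABLE TableB ADD COLUMN C INT NULL;

-- Step 2: populate it from Table A via the key relationship
UPDATE TableB b
JOIN   TableA a ON a.ColumnZ = b.ColumnZ
SET    b.C = a.C;

-- Step 3: drop the column from Table A only after verifying the copied values
ALTER TABLE TableA DROP COLUMN C;

Natural test cases fall out of each step: rows in B with no match in A (should stay null), rows in A with no match in B (values that would be silently lost), and row/value counts before and after.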

What is the difference between COUNT(*) and COUNT(*) OVER()

Posted: 24 Aug 2013 04:59 PM PDT

Take the following code example:

SELECT MaritalStatus,
       COUNT(*) AS CountResult,
       COUNT(*) OVER() AS CountOverResult
FROM (schema).(table)
GROUP BY MaritalStatus

COUNT(*) returns all rows ignoring nulls, right?

What does COUNT(*) OVER() do?
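For illustration, a self-contained sketch (SQL Server syntax, hypothetical data) showing how the two differ in a grouped query:

SELECT MaritalStatus,
       COUNT(*)        AS CountResult,      -- rows in each group
       COUNT(*) OVER() AS CountOverResult   -- rows in the final grouped result
FROM (VALUES ('M'), ('M'), ('S')) x(MaritalStatus)
GROUP BY MaritalStatus;
-- Returns ('M', 2, 2) and ('S', 1, 2): the window function is evaluated
-- after GROUP BY, so here it counts the groups, not the underlying rows.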

Need to learn SQL quickly

Posted: 24 Aug 2013 03:02 PM PDT

I am a 14-year-old freelance web and software developer. In the middle of a project, a client threw a database requirement at me. I need to learn/be able to use SQL very soon. What resources do you professionals suggest?

Thank you!

-TJonS

Is there a logical way to visualise a join by looking at it quickly

Posted: 24 Aug 2013 01:40 PM PDT

As the question asks: is there a logical way of visualising a join in your head quickly? I always have trouble visualising the reason for a join from looking at the code, and have to go and visually inspect the tables to see what the relationships are.

For example ( a simple one )

SELECT *
FROM Production.Product p
JOIN Sales.SalesOrderDetail d
  ON p.ProductID = d.ProductID

I could logically look at this query and form the idea in my head:

I'm looking for Products that have Orders/Sales.

What quick processes do you have for visualising joins?

How to create an incremental database copy in PostgreSQL?

Posted: 24 Aug 2013 08:57 PM PDT

Virtual machine software like VirtualBox allows one to make incremental VM clones. That is, data, once "touched" (opened writable), is copied and stored in the incremental cache of the new clone.

I am searching for the same option for a Postgres DBMS.

How can I set up an incremental database copy, where unmodified entries are read from the original database, and touched/modified rows from the local copy?

If not on the DBMS level, how can I emulate such behavior at the file-system/storage level using a separate DBMS instance?

Background: The idea is to utilize the powerful database server without incurring much resource overhead for a staging/developer database copy.
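For context, PostgreSQL has no built-in copy-on-write clone; the closest in-engine operation is a template copy, which is a full (not incremental) copy. A sketch (database names are hypothetical, and the source must have no active connections while copying):

CREATE DATABASE staging_copy TEMPLATE production_db;

An incremental behaviour like VirtualBox's would have to come from the storage layer instead, e.g. a copy-on-write filesystem snapshot (ZFS, LVM) mounted as the data directory of a second Postgres instance.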

Feel free to edit the subject or post to improve clarity.
Thanks.

Why do large tables slow down queries in databases?

Posted: 24 Aug 2013 11:48 AM PDT

Kind of a generic question, but I have noticed that large tables in certain CMSes can slow them down to a crawl. The solution I've seen for this is simply to truncate those tables.

Why do large tables slow down SQL queries so much?

How do Indexes help large tables?

How does disk speed factor into this?
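To make the index effect concrete, a small hypothetical example (MySQL syntax; the posts table and author_id column are made up):

-- Without an index the engine reads every row of the table:
EXPLAIN SELECT * FROM posts WHERE author_id = 42;   -- full table scan

-- With an index it descends a B-tree and touches only matching rows,
-- so cost grows roughly logarithmically with table size, not linearly:
CREATE INDEX idx_posts_author ON posts (author_id);
EXPLAIN SELECT * FROM posts WHERE author_id = 42;   -- indexed lookup

Disk speed matters most in the unindexed case: a full scan of a table larger than RAM has to come off disk every time.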

Is there a way to auto-resume an incomplete job after a SQL Server restart?

Posted: 24 Aug 2013 07:47 PM PDT

I ran into a problem where a job stopped after a SQL Server restart.

Is there a way to auto-resume an incomplete job after a SQL Server restart?
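One commonly suggested approach (a sketch, not a tested solution): SQL Server Agent supports schedules that fire when the Agent service starts, so a wrapper job can re-launch interrupted work after a restart. The job name here is hypothetical, and the job's own logic must be written to be safely restartable:

EXEC msdb.dbo.sp_add_schedule
     @schedule_name = N'OnAgentStartup',
     @freq_type     = 64;   -- 64 = start whenever SQL Server Agent starts

EXEC msdb.dbo.sp_attach_schedule
     @job_name      = N'ResumeInterruptedWork',
     @schedule_name = N'OnAgentStartup';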

Thanks!

Comparing binary 0x and 0x00 turns out to be equal on SQL Server

Posted: 24 Aug 2013 01:13 PM PDT

It seems that SQL Server considers 0x and 0x00 equal values:

SELECT CASE WHEN 0x = 0x00 THEN 1 ELSE 0 END  

This outputs 1.

How can I get true binary bit-for-bit comparison behavior? Also, what are the exact rules under which two (var)binary values are considered equal?

Also note the following behavior:

--prints just one of the values
SELECT DISTINCT [Data]
FROM (VALUES (0x), (0x00), (0x0000)) x([Data])

--prints the obvious length values 1, 2 and 3
SELECT DATALENGTH([Data]) AS [DATALENGTH], LEN([Data]) AS [LEN]
FROM (VALUES (0x), (0x00), (0x0000)) x([Data])

The background of the question is that I'm trying to deduplicate binary data. I need to GROUP BY binary data, not just compare two values. I'm glad I even noticed this problem.

Note that HASHBYTES does not support LOBs. I'd also like to find a simpler solution.
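One workaround worth testing: the behaviour above suggests SQL Server treats trailing 0x00 bytes as insignificant when comparing binary values, so bringing the length into the comparison restores bit-for-bit semantics. A sketch using the sample values from the question:

-- Grouping on the value AND its byte length separates 0x, 0x00 and 0x0000:
SELECT [Data], DATALENGTH([Data]) AS ByteLen
FROM (VALUES (0x), (0x00), (0x0000)) x([Data])
GROUP BY [Data], DATALENGTH([Data]);
-- yields three rows, whereas GROUP BY [Data] alone yields one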

Finding swap causes of MySQL

Posted: 24 Aug 2013 03:49 AM PDT

In my centos 6.3 server I have a MySQL 5.5.33 database.
It has 17 tables (15 InnoDB, 2 MyISAM) and 6.7M rows in total. I refactored my schemas and added indexes based on my slow logs. My average query time is 20-30 ms, and my database performs well.

But I have some cron queries that run every 3 hours. They don't use any index and run very slowly; each query takes nearly 1500-2000 ms. I don't plan to add new indexes for them, because I would have to add many indexes, and those queries run very rarely.

When I restart my database server, swap usage is normally zero. After some time it grows gradually. After 13 days, MySQL is using 650MB of swap. I want to find what causes this swapping and try to reduce the swap without degrading performance.

I want to be sure whether the cron queries or something else is causing this swap usage.

My top results:

top - 13:33:01 up 13 days, 11:04,  1 user,  load average: 0.77, 1.02, 1.07
Tasks: 148 total,   1 running, 147 sleeping,   0 stopped,   0 zombie
Cpu(s): 27.4%us,  5.3%sy,  0.0%ni, 59.1%id,  7.8%wa,  0.0%hi,  0.3%si,  0.0%st
Mem:   1020564k total,   854184k used,   166380k free,    73040k buffers
Swap:  2097144k total,   643036k used,  1454108k free,    94000k cached

  PID USER        PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  SWAP COMMAND
 9573 mysql       20   0 2336m 328m 3668 S  7.3 33.0 349:14.25 554m mysqld
15347 examplecom  20   0  219m  32m  10m S  2.7  3.2   0:02.66    0 php-cgi
15343 examplecom  20   0  215m  28m  10m S 10.0  2.9   0:05.80    0 php-cgi
15348 examplecom  20   0  215m  28m  10m S 12.3  2.8   0:03.62    0 php-cgi
15346 examplecom  20   0  215m  28m  10m S  9.6  2.8   0:06.39    0 php-cgi
15350 examplecom  20   0  212m  25m  10m S 10.0  2.6   0:02.19    0 php-cgi
15345 examplecom  20   0  211m  24m  10m S  6.6  2.5   0:04.28    0 php-cgi
15349 examplecom  20   0  209m  22m  10m S  5.3  2.2   0:02.66    0 php-cgi
12771 apache      20   0  334m 5304 2396 S  0.0  0.5   0:02.53  10m httpd
12763 apache      20   0  335m 5224 2232 S  0.3  0.5   0:02.33  11m httpd

How to access a SQL Server database from other computer connected to the same workgroup?

Posted: 24 Aug 2013 05:12 PM PDT

I have created a C# application which uses a SQL Server database. I have other computers connected to me and to each other in a workgroup. I have shared my C# application with others. When they open the application they get the error

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible

But the application is working fine on my PC. The connection string I am using is

Data Source=ASHISHPC1\SQLEXPRESS;Initial Catalog=ACW;User ID=ash159;Password=ashish159  

which is stored in a .config file.

What must I do? I have enabled TCP/IP on the server, but the same error persists. Does something need to change in the connection string, or is it something else?
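Besides enabling TCP/IP, two usual suspects are the SQL Server Browser service (which resolves named instances such as \SQLEXPRESS for remote clients) and the Windows firewall. One thing worth trying is bypassing instance-name resolution by pointing the connection string at the instance's TCP port directly (the port shown is hypothetical; SQL Express often uses a dynamic port, visible in SQL Server Configuration Manager):

Data Source=ASHISHPC1,1433;Initial Catalog=ACW;User ID=ash159;Password=ashish159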

Please help. Thank you.

Update year alone in date (Oracle 11g)

Posted: 24 Aug 2013 02:49 PM PDT

When I checked one of my client's databases, I found some mistakes that will cause problems. The date shows as 01-01-01, and when I try to display it in dd-MM-yyyy format the result is 01-01-0001. How can I get rid of this problem? Is it possible to update the year alone in these dates?

I can find the dates with this problem by using this query

select from_date from date_table where length(extract(year from from_date))='2';

Can I update this using the logic that

  • Find the dates using the above query

  • Update the year on the condition that if the year is between '00' and '13', prefix it with '20'; otherwise prefix it with '19'

How can I do this? Can I do it with an Oracle query, or should I write code to do it?

The database is Oracle 11g R2 running on RHEL 5.
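Applying the stated rule directly in SQL is possible; a sketch (untested, so back up and verify on a copy first) using ADD_MONTHS to shift whole years. Note the predicate here also catches one-digit years (the 01-01-0001 case), which the LENGTH-based query above misses:

UPDATE date_table
SET    from_date = CASE
         WHEN EXTRACT(YEAR FROM from_date) <= 13
           THEN ADD_MONTHS(from_date, 2000 * 12)   -- e.g. year 13 -> 2013
           ELSE ADD_MONTHS(from_date, 1900 * 12)   -- e.g. year 75 -> 1975
       END
WHERE  EXTRACT(YEAR FROM from_date) < 100;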

Oracle intermittent performance problem

Posted: 24 Aug 2013 06:51 PM PDT

I have some complex queries that are usually fast (< 1 sec), but occasionally slow (> 10 sec, even minutes).

How do I track this down or identify what condition is causing it?

One theory was caching: perhaps the query is fast when the blocks are cached in memory. I tried flushing both the shared pool and the buffer cache ("alter system flush buffer_cache"), but the query is still fast after doing that.

Is there a way to nail down precisely what was going on during a specific execution that was bad, and which part of query execution took the longest?
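If the Diagnostics Pack is licensed, Active Session History records what each execution waited on, which answers exactly that question after the fact. A sketch (&slow_sql_id is a placeholder for the statement's sql_id):

SELECT sample_time, session_state, event, sql_plan_hash_value
FROM   v$active_session_history
WHERE  sql_id = '&slow_sql_id'
ORDER  BY sample_time;
-- A sql_plan_hash_value that changes between runs would point at plan
-- instability; long waits on I/O events would point back at caching.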

Use surrogate or natural PK if a row is uniquely identified by two or more columns?

Posted: 24 Aug 2013 08:26 PM PDT

I am having a hard time choosing between natural and surrogate PK for my database table.

The database is for a ranking system for a MOBA game. Here is more or less what I would do if I were to go the natural route.

Player table

- PlayerID Int AI NN UQ
- PlayerName Varchar PK
- ServerName Varchar PK
- Registered date, level, hero etc misc columns

Rank table

- PlayerName FK references player
- ServerName FK references player
- RankID Int AI NN UQ
- PlayerRank Int NN UQ

The thing is, each row in the player table is uniquely identified by the pair of PlayerName and ServerName. I think using a surrogate key in this case is not really appropriate, but I would like to hear suggestions on this.
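For comparison, a sketch of the surrogate-key variant (MySQL syntax; column lengths and the Ranking table name are assumptions). The natural key stays enforced through a unique constraint, so nothing is lost, and the FK shrinks to a single integer:

CREATE TABLE Player (
    PlayerID   INT AUTO_INCREMENT PRIMARY KEY,
    PlayerName VARCHAR(64) NOT NULL,
    ServerName VARCHAR(64) NOT NULL,
    UNIQUE KEY uq_player (PlayerName, ServerName)   -- natural key preserved
    -- registered date, level, hero etc.
) ENGINE=InnoDB;

CREATE TABLE Ranking (
    RankID     INT AUTO_INCREMENT PRIMARY KEY,
    PlayerID   INT NOT NULL,
    PlayerRank INT NOT NULL UNIQUE,
    FOREIGN KEY (PlayerID) REFERENCES Player (PlayerID)
) ENGINE=InnoDB;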

MySQL SSL encryption

Posted: 24 Aug 2013 03:15 PM PDT

Does setting MASTER_SSL to 1 in CHANGE MASTER ensure encryption without specifying the options MASTER_SSL_CA, MASTER_SSL_CAPATH, MASTER_SSL_CERT and MASTER_SSL_KEY?

After setting MASTER_SSL to 1, SHOW SLAVE STATUS says Master_SSL_Allowed: Yes, but does that ensure the transferred data will be encrypted?

Master_SSL_Allowed: Yes
Master_SSL_CA_File:
Master_SSL_CA_Path:
   Master_SSL_Cert:
 Master_SSL_Cipher:
    Master_SSL_Key:
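For comparison, a variant that pins the certificates down explicitly, so the slave verifies the master rather than merely permitting SSL (paths are hypothetical):

STOP SLAVE;
CHANGE MASTER TO
    MASTER_SSL      = 1,
    MASTER_SSL_CA   = '/etc/mysql/certs/ca-cert.pem',
    MASTER_SSL_CERT = '/etc/mysql/certs/client-cert.pem',
    MASTER_SSL_KEY  = '/etc/mysql/certs/client-key.pem';
START SLAVE;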

Thanks!

mongodb replication node stuck at “STARTUP2” with optimeDate as 1970

Posted: 24 Aug 2013 04:15 AM PDT

I have just set up a replica set with three nodes. The third node is stuck at stateStr STARTUP2 with "optimeDate" : ISODate("1970-01-01T00:00:00Z"). However, it shows no error message. Is this all right? On the primary, rs.status() yields

{      "set" : "qdit",      "date" : ISODate("2013-06-18T22:49:41Z"),      "myState" : 1,      "members" : [          {              "_id" : 0,              "name" : "q.example.com:27017",              "health" : 1,              "state" : 1,              "stateStr" : "PRIMARY",              "uptime" : 2940,              "optime" : {                  "t" : 1371593311,                  "i" : 1              },              "optimeDate" : ISODate("2013-06-18T22:08:31Z"),              "self" : true          },          {              "_id" : 1,              "name" : "q1.example.com:27017",              "health" : 1,              "state" : 2,              "stateStr" : "SECONDARY",              "uptime" : 457,              "optime" : {                  "t" : 1371593311,                  "i" : 1              },              "optimeDate" : ISODate("2013-06-18T22:08:31Z"),              "lastHeartbeat" : ISODate("2013-06-18T22:49:40Z"),              "lastHeartbeatRecv" : ISODate("2013-06-18T22:49:40Z"),              "pingMs" : 0,              "syncingTo" : "twitnot.es:27017"          },          {              "_id" : 2,              "name" : "q2.example.com:27017",              "health" : 1,              "state" : 5,              "stateStr" : "STARTUP2",              "uptime" : 300,              "optime" : {                  "t" : 0,                  "i" : 0              },              "optimeDate" : ISODate("1970-01-01T00:00:00Z"),              "lastHeartbeat" : ISODate("2013-06-18T22:49:40Z"),              "lastHeartbeatRecv" : ISODate("2013-06-18T22:49:41Z"),              "pingMs" : 7          }      ],      "ok" : 1  }  

also

db.printSlaveReplicationInfo() yields

source:   qdit1.queuedit.com:27017
     syncedTo: Tue Jun 18 2013 22:08:31 GMT+0000 (UTC)
         = 2894 secs ago (0.8hrs)
source:   qdit2.queuedit.com:27017
     syncedTo: Thu Jan 01 1970 00:00:00 GMT+0000 (UTC)
         = 1371596205 secs ago (380998.95hrs)

Is this all right? Also, how can I test my replication, especially the third node?

How to repair Microsoft.SqlServer.Types assembly

Posted: 24 Aug 2013 06:15 PM PDT

When I run DBCC CHECKDB('mydb'), this is the only error message printed.

Msg 8992, Level 16, State 1, Line 1
Check Catalog Msg 3857, State 1: The attribute (clr_name=NULL) is required but is missing for row (assembly_id=1) in sys.assemblies.

It is referring to 'Microsoft.SqlServer.Types'. I do see that in this db the clr_name is blank, but in the master db there is a value there.

I tried to drop or alter the assembly to add this value, but it's restricted.

By the way, this db was recently upgraded from SQL Server 2005 to 2008 R2.

Unable to connect to Amazon RDS instance

Posted: 24 Aug 2013 02:15 PM PDT

I recently created an oracle instance on Amazon RDS. Unfortunately, I'm not able to connect to the instance using Oracle SQL Developer.

The (relevant) information I have from Amazon:

Endpoint - The DNS address of the DB Instance: xxx.yyy.eu-west-1.rds.amazonaws.com

DB Name - The definition of the term Database Name depends on the database engine in use. For the MySQL database engine, the Database Name is the name of a database hosted in your Amazon DB Instance. An Amazon DB Instance can host multiple databases. Databases hosted by the same DB Instance must have a unique name within that instance. For the Oracle database engine, Database Name is used to set the value of ORACLE_SID, which must be supplied when connecting to the Oracle RDS instance: ZZZ

Master Username - Name of master user for your DB Instance: org

Port - Port number on which the database accepts connections: 1521

From this information, the connection settings in SQL Developer are pretty obvious, so I don't really see what I could be missing...

Will Partitions and Indexes on the same table help in performance of Inserts and Selects?

Posted: 24 Aug 2013 07:15 PM PDT

I have a table containing the list of visitors and this table has the following information.

  • Visitor Browser Information
  • Visitor Location Information
  • Visitor Time Information
  • No of Visits

I have a second table that maintains the history of each visit: if the same visitor visits the site again, I insert into the second table and update the number of visits in the first table.

The kinds of reports that I have to generate for these tables are:

  1. Count of Visitors/day or days (Search Between days)
  2. Count of Visitors/month
  3. Count of Visitors/year
  4. Count of Visitors/browser or grouped by browsers

On average there are about 20,000 inserts to the second table and about 15,000 inserts to the first table, meaning 5,000 were updates to the first table (5,000 repeat visits).

I need to decide between partitioning the tables by month (sub-partitioned by day) for reports 1-3, and indexing the browser-related columns for report 4.

There will be more reports in the future not sure on what clauses.

Does partitioning/sub-partitioning along with indexing help the performance of inserts and selects?

Should I perform partitioning on both tables?

I am currently using MySQL 5.5 + InnoDB
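For the date-based reports, a range-partitioning sketch (MySQL 5.5 syntax; table and column names are placeholders). One caveat to factor into the decision: MySQL requires the partitioning column to be part of every unique key, which constrains primary key design:

ALTER TABLE visit_history
PARTITION BY RANGE (TO_DAYS(visit_time)) (
    PARTITION p201307 VALUES LESS THAN (TO_DAYS('2013-08-01')),
    PARTITION p201308 VALUES LESS THAN (TO_DAYS('2013-09-01')),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
);

With this in place, a per-day or per-month report only reads the partitions its date range touches (partition pruning), while inserts pay a small, usually negligible, routing cost.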

Efficient way to perform approximated search?

Posted: 24 Aug 2013 11:15 AM PDT

I have to perform a join between table_a and table_b using three fields as the key, one of them being the date of the event; say both tables have id1, id2 and evnt_time for each record.

As it happens though, the evnt_time can be displaced by a few seconds between these tables. Thus the join has to behave sort of like table_a.id1 = table_b.id1 and table_a.id2 = table_b.id2 and table_a.evnt_time ~ table_b.evnt_time, where:

  • ~ means approximately, given + or - seconds
  • There must be a way to give ~ a parameter for how precise it should be, e.g. table_a.evnt_time ~(2) table_b.evnt_time will join table_a.evnt_time with table_b.evnt_time - 2s, table_b.evnt_time - 1s, table_b.evnt_time, table_b.evnt_time + 1s and table_b.evnt_time + 2s.

To tackle the situation there are a few possibilities I am experimenting with, but my doubt is about which architecture I should use to perform a very efficient "approximated join" - these tables are partitioned, sub-partitioned, and each sub-partition may contain billions of records... (although I have a reasonable amount of resources).

For one, I thought of storing the unique second of the event (i.e. the second it happened, counted from the Julian calendar epoch) on both sides, so the real join (given a "~(2)") could simply look like:

select *
from
    table_a,
    table_b
where
    table_a.id1 = table_b.id1
    and table_a.id2 = table_b.id2
    and (table_a.evnt_sec = table_b.evnt_sec
         or table_a.evnt_sec = table_b.evnt_sec + 1
         or table_a.evnt_sec = table_b.evnt_sec + 2
         or table_a.evnt_sec = table_b.evnt_sec - 1
         or table_a.evnt_sec = table_b.evnt_sec - 2)

But I am unsure of how efficiently that would perform with the indexes and scans.
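A variation worth benchmarking: the five OR branches can be collapsed into one range predicate, which gives the optimizer a chance at a single range scan over an (id1, id2, evnt_sec) index (a sketch):

select *
from   table_a a
       join table_b b
         on  a.id1 = b.id1
         and a.id2 = b.id2
         and a.evnt_sec between b.evnt_sec - 2 and b.evnt_sec + 2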

This is just an example; I am not tied to any particular architecture. Also, I am using Oracle 11gR2.

MySQL: Error reading communication packets

Posted: 24 Aug 2013 07:15 AM PDT

I get this warning in mysql,

[Warning] Aborted connection 21 to db: 'MyDB' user: 'MyUser' host: 'localhost' (Got an error reading communication packets)  

I have been through a few topics on Google, and following some suggestions I increased max_allowed_packet from 128 to 512 to 1024, but the behaviour is still the same.

I am using Drupal 7, and yes there are lots of blob data types, but 1024 MB of max_allowed_packet should be enough in my opinion.

Is there any other workaround to overcome this warning?

EDIT:

I added some settings per @Rolando's suggestions/answer; I still get the same warning.

My MySQL config looks like this:

[client]
port        = 3306
socket      = /tmp/mysql.sock
default-character-set = utf8

[mysqld]
port        = 3306
socket      = /tmp/mysql.sock
skip-external-locking
key_buffer_size = 16K
max_allowed_packet = 1024M
table_open_cache = 128
sort_buffer_size = 64K
read_buffer_size = 256K
read_rnd_buffer_size = 256K
net_buffer_length = 2K
thread_stack = 192K
# Query cache disabled
thread_cache_size = 8
myisam-recover = BACKUP
max_connections = 100
thread_concurrency = 10
tmp_table_size = 128M
max_heap_table_size = 128M
log_error                = /var/log/mysql/mysql-error.log
log_slow_queries        = /var/log/mysql/mysql-slow.log
long_query_time = 2
log_warnings = 2
server-id   = 1
binlog-format = row
replicate-same-server-id = 0
auto-increment-increment = 2
auto-increment-offset = 1
log_bin = mysql-bin
log-slave-updates
relay-log=mysqld-relay-bin
expire_logs_days        = 10
max_binlog_size         = 100M
innodb_data_home_dir = /var/db/mysql
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /var/db/mysql
innodb_buffer_pool_size = 8G
character-set-server = utf8
#innodb_additional_mem_pool_size = 2M
innodb_log_file_size = 2047M
innodb_log_buffer_size = 32M
innodb_flush_log_at_trx_commit = 2
innodb_thread_concurrency = 8
innodb_lock_wait_timeout = 50
innodb_flush_method = O_DIRECT

[mysqldump]
quick
quote-names
max_allowed_packet = 16M
default-character-set = utf8

[mysql]
default-character-set = utf8

[myisamchk]
key_buffer_size = 32M
sort_buffer_size = 32M

[mysqlhotcopy]
interactive-timeout

[mysqld_save]
syslog

My application uses only InnoDB. The few databases, like mysql, that came with the standard MySQL installation are the only ones using the MyISAM engine; I guess those should not be my concern, though.

As you can see, I have replication too; the warning is the same on the replicated server, whose config is identical to this one.
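Aborted connections are often caused by timeouts or clients disconnecting uncleanly rather than by packet size, so these are worth checking before raising max_allowed_packet any further (a sketch):

SHOW GLOBAL VARIABLES LIKE 'wait_timeout';
SHOW GLOBAL VARIABLES LIKE 'net_read_timeout';
SHOW GLOBAL VARIABLES LIKE 'net_write_timeout';
SHOW GLOBAL STATUS    LIKE 'Aborted_clients';
SHOW GLOBAL STATUS    LIKE 'Aborted_connects';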

How to remove column output in a FOR XML PATH query with a GROUP BY expression?

Posted: 24 Aug 2013 12:15 PM PDT

I forgot how to remove a column from the output of a FOR XML PATH query that uses a GROUP BY expression. I used this before, but somehow I lost the article. In the example below, I do not wish to have idForSomething output in my result, but I want to use it as a condition for my inner query.

SELECT
    idForSomething,
    SUM(allSomething) AS [@sum],
    (SELECT innerSomething AS [@inner], innerSomething2 AS [@inner2]
     FROM someTable s2
     WHERE s2.innerSomething = s1.idForSomething
     FOR XML PATH('innerlist'), TYPE)
FROM someTable s1
WHERE idForSomething = 1
GROUP BY idForSomething
FOR XML PATH('listofsomethings')

Added XML Body:

<listofsomethings @sum="10">
    <innerlist @inner="..." @inner2="..." />
    <innerlist @inner="..." @inner2="..." />
    <innerlist @inner="..." @inner2="..." />
</listofsomethings>

I will look around again online, but I am asking for the syntax to tell SQL Server NOT to output the "idForSomething" column in the final result. I thought it was something like NOOUTPUT, but I can't remember, and it does not work.
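One detail that may solve this outright: SQL Server only requires the column in the GROUP BY clause, not in the SELECT list, and FOR XML PATH emits only selected columns. A sketch of the same query with idForSomething removed from the output:

SELECT
    SUM(allSomething) AS [@sum],
    (SELECT innerSomething AS [@inner], innerSomething2 AS [@inner2]
     FROM someTable s2
     WHERE s2.innerSomething = s1.idForSomething
     FOR XML PATH('innerlist'), TYPE)
FROM someTable s1
WHERE idForSomething = 1
GROUP BY idForSomething          -- still grouped, just not selected
FOR XML PATH('listofsomethings')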

Best cloud platform for PostgreSQL

Posted: 24 Aug 2013 10:18 AM PDT

Currently, in our project, we use dedicated servers for our PostgreSQL database.

In theory, we can run anything on some cloud platforms. However, PostgreSQL configuration is strictly related to hardware configuration. What we are looking for is a cloud solution with native PostgreSQL support.

Here is the list of desired features:

  • automatic database configuration adjustment depending on selected resources
  • replication setup out of the box

So what are the options and the best pick for such a service?

Inserting query result to another table hangs on "Copying to temp table on disk" on MySQL

Posted: 24 Aug 2013 04:15 PM PDT

I started the process of inserting returned results into another table. The query groups rows by indexed IDs; this reduces 149,000,000 rows to 460,000 rows.

The query includes 3 table INNER JOINs, with each table having about 20,000,000 rows.

Further information: the process completes in about 12 seconds for a test file with 1000 input rows, returning 703 rows.

I started the query earlier (exactly when is not recorded), but it is still running in the state "Copying to temp table on disk" after 38,000 seconds (ten and a half hours).

I think there is a problem during the insertion process. What am I probably doing wrong here? If it helps, the machine runs Windows 7 and has 3 GB RAM and an Intel Core 2 Duo 2.27 GHz processor. (Details on the hard drive layout were not given: one partition in, one out, same disk, same partitions, etc.)

Here's my query as it currently reads:

INSERT INTO kdd.contents
            (adid,
             descriptionwords,
             purchasedkeywordwords,
             titlewords)
SELECT t.adid,
       dt.tokensid,
       pkt.tokensid,
       tt.tokensid
FROM   kdd.training t
       INNER JOIN kdd.purchasedkeywordid_tokensid pkt
               ON t.keywordid = pkt.purchasedkeywordid
       INNER JOIN kdd.titleid_tokensid tt
               ON t.titleid = tt.titleid
       INNER JOIN kdd.descriptionid_tokensid dt
               ON t.descriptionid = dt.descriptionid
GROUP  BY adid;
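For diagnosis, a couple of server-side counters worth checking while such a statement runs (a sketch): how often grouping spills to disk, and how large an in-memory temporary table is allowed to grow:

SHOW GLOBAL VARIABLES LIKE 'tmp_table_size';
SHOW GLOBAL VARIABLES LIKE 'max_heap_table_size';
SHOW GLOBAL STATUS    LIKE 'Created_tmp_disk_tables';
SHOW GLOBAL STATUS    LIKE 'Created_tmp_tables';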

Primary key type change not reflected in foreign keys with MySQL Workbench

Posted: 24 Aug 2013 05:15 PM PDT

I have a problem with MySQL Workbench and primary/foreign keys.

I have some tables with PKs involved in relationship with other tables. If I modify the type of the PK, the type of the FK doesn't automatically update to reflect the change.

Is there any solution? Do I have to manually modify all the relations?

How to troubleshoot enq: TX - row lock contention?

Posted: 24 Aug 2013 07:22 AM PDT

I have the following situation.

I have RAC. On both nodes there are locks.

On the First Node

    SID  EVENT                          USERNAME  BLOCKING_SESSION  ROW_WAIT_OBJ#  OBJECT_NAME  LOCKWAIT          SQL_ID         STATUS
1   102  enq: TX - row lock contention  MYUSER    155               136972         TABLE1V      0000000810EFA958  5f4bzdg49fdxq  ACTIVE
2   111  enq: TX - row lock contention  MYUSER    155               136972         TABLE1V      0000000810EFAC98  5f4bzdg49fdxq  ACTIVE

Blocking session info

    SID  EVENT                        USERNAME  ROW_WAIT_OBJ#  OBJECT_NAME         LOCKWAIT  SQL_ID         STATUS
1   155  SQL*Net message from client  MYUSER    136971         MyTABLEIMAGES_IDPK            4hw85z8absbjc  INACTIVE

On the Second Node

    SID  EVENT                          USERNAME  BLOCKING_SESSION  ROW_WAIT_OBJ#  OBJECT_NAME  LOCKWAIT          SQL_ID         STATUS
1   65   enq: TX - row lock contention  MYUSER    155               137033         FactTABLE1V  0000000810EF9B58  1mznc2z75ksdx  ACTIVE
2   111  enq: TX - row lock contention  MYUSER    155               136972         TABLE1V      0000000810EF9818  5f4bzdg49fdxq  ACTIVE

Blocking session info

    SID  EVENT                        USERNAME  ROW_WAIT_OBJ#  OBJECT_NAME  SQL_ID  STATUS
1   155  SQL*Net message from client  MYUSER    127176         MYTableLOG           INACTIVE

Additional Info : Blocking session SQL_TEXT

create or replace procedure ACTIONProcedureDELETE
(
  p_ID NUMBER
)
is
  cursor oldval is
    select r.id, r.sessionstatus
    from MyTABLEIMAGES r
    where r.idparent = p_ID;
begin
  update actionmyTableblock r set r.status = 'False' where ID = p_ID;

  for oldvalItem in oldval loop
    if oldvalItem.Sessionstatus = 'True' then
      update MyTABLEIMAGES r set r.sessionstatus = 'False' where r.id = oldvalItem.Id;
    else
      update MyTABLEIMAGES r set r.sessionstatus = 'True' where r.id = oldvalItem.Id;
    end if;
  end loop;
end ACTIONProcedureDELETE;

How do I troubleshoot this?

As you can see, the blocking session is INACTIVE but still holds the locks.

If I look in v$sql_bind_capture, there is no VALUE_STRING for the blocking session's sql_id.

Where do I start?

I can guess that there is a missing commit/rollback, but the application developer says: "Everything is OK, I have written commits where necessary."
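One place to start (a sketch): an INACTIVE session still holds its row locks as long as its transaction is open, and open transactions are visible in v$transaction, including when they began:

SELECT s.sid, s.serial#, s.status, s.program, t.start_time
FROM   v$session s
       JOIN v$transaction t ON t.ses_addr = s.saddr
WHERE  s.sid = 155;
-- A row with an old start_time means session 155 has an uncommitted
-- transaction despite sitting idle, which supports the missing-commit theory.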

Please help.

What specific tests need to be performed during a database migration from DB2 to Oracle?

Posted: 24 Aug 2013 01:35 PM PDT

Currently I am going to be testing (I am a Data Analyst) part of a larger migration project where they are planning to migrate from DB2 to Oracle. My question may be a little broad; however, any information relating to the following questions would certainly be helpful.

1) How to validate both the source and target tables for:

  • Matching of the total number of tables

  • Matching of the data fields under each table

2) How to validate from the front end (UI) that the right data is getting populated into the right tables. Challenges:

  • There are 500+ UIs. What is the best way to test in an optimized way (to avoid any duplicate testing)?
  • Should I also focus on mapping tables to UI screens?

3) Any specific challenges I need to watch for during the migration:

  • Data locks, spacing issues, noisy words
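For point 1, table counts can be compared straight from the two catalogs; a sketch (the DB2 schema name is a placeholder, and both CARD and Oracle's num_rows assume freshly gathered statistics):

-- Oracle target:
SELECT table_name, num_rows
FROM   user_tables
ORDER  BY table_name;

-- DB2 source:
SELECT tabname, card
FROM   syscat.tables
WHERE  tabschema = 'MYSCHEMA'
ORDER  BY tabname;

Exact validation would replace the statistics columns with per-table SELECT COUNT(*) runs on both sides.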
