Monday, July 1, 2013

[how to] Any solution to fix circular replication fail-over?

Any solution to fix circular replication fail-over?

Posted: 01 Jul 2013 08:45 PM PDT

Here's my current situation ...

I've developed a P.O.S. system for our corporation, which has 25+ branches across the country; each branch has its own P.O.S. database on a WAMP server.

I want every event that occurs at any branch to be reflected at the other branches.

Some people advised me to use a ring topology (circular replication), but it has a big problem: when one server goes down, replication stops and the ring can loop indefinitely.

Please advise.
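Stock MySQL of this era has no automatic ring repair; a common manual workaround, sketched here with assumed hostnames and coordinates, is to repoint the node downstream of the failed server at the failed server's own master, closing the ring until the server returns:

    -- If the ring is A -> B -> C -> A and B dies, run this on C so it
    -- reads from A directly. Host, log file, and position are placeholders
    -- you would take from B's last-known coordinates.
    STOP SLAVE;
    CHANGE MASTER TO
        MASTER_HOST     = 'branch-a.example.com',
        MASTER_USER     = 'repl',
        MASTER_PASSWORD = '...',
        MASTER_LOG_FILE = 'mysql-bin.000123',
        MASTER_LOG_POS  = 4;
    START SLAVE;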

SUM 1 column with same ID and SELECT multiple columns from multiple tables

Posted: 01 Jul 2013 07:00 PM PDT

I would like to SUM the values of the TimeTook column when the MPRO column is the same. I also need to pull data from multiple tables and select multiple columns. I have two queries with the information and need to know how to combine them into one.

    SQL1 = "SELECT MPRO.MPRO, SUM(MPRO.TimeTook) AS TimeTook, MPRO.Carrier, MPRO.Product, MPRO.Province, MPRO.Record, MASTER.DateIn, JOBS.JobTypeCode "      SQL2 = "FROM Worklog.Worklog.MPRO MPRO INNER JOIN WorkLog.WorkLog.MASTER "      SQL3 = "MASTER ON MPRO.Record=MASTER.Record JOIN WorkLog.WorkLog.JOBS "      SQL4 = "JOBS ON MPRO.MPro=JOBS.MPro WHERE (MASTER.DateIn>={ts '" + Str(StartYear) + "-" + StartMonthStr + "-" + StartDayStr + " 00:00:00'} "      SQL5 = "AND MASTER.DateIn<={ts '" + Str(EndYear) + "-" + EndMonthStr + "-" + EndDayStr + " 00:00:01'}) ) "      SQL7 = "GROUP BY MPRO.MPRO, MPRO.Carrier, MPRO.Product, MPRO.Province, MPRO.Record, MASTER.DateIn, JOBS.JobTypeCode"        SQLStatement = "SELECT MPRO, Carrier, Product, Province, SUM(TimeTook) AS TimeTook FROM MPRO GROUP BY MPRO, Carrier, Product, Province"      SQLStatement = SQL1 + SQL2 + SQL3 + SQL4 + SQL5 + SQL7e  

Duplicate GROUP_CONCAT values

Posted: 01 Jul 2013 02:13 PM PDT

I'm writing a plugin for WordPress that uses a custom post type and an additional alias table to hold the repetitions of a post's events. The problem comes with duplicate post meta values.

wp_post_meta table:

meta_id | post_id | meta_key           | meta_value
--------+---------+--------------------+---------------------
2748    | 5456    | calendar_venue_id  | 5
2749    | 5456    | calendar_start_date| 2013-06-28 21:13:00
2750    | 5456    | calendar_end_date  | 2013-06-28 22:13:00
2751    | 5456    | calendar_link      | null
2752    | 5456    | calendar_recur     | D
2753    | 5456    | _edit_last         | 1
2754    | 5456    | _edit_lock         | 1372460794:1
2755    | 5456    | calendar_repeat_on | 16
2756    | 5456    | calendar_repeats   | 1
2757    | 5456    | calendar_recur_end | 1

wp_post table:

ID   | post_author | post_date           | post_date_gmt       | post_content  | post_title           | post_excerpt | post_status | post_type
-----+-------------+---------------------+---------------------+---------------+----------------------+--------------+-------------+----------
5456 | 1           | 2013-06-28 21:48:18 | 2013-06-28 21:48:18 | werk w/ me!!! | come one alias table | null         | publish     | event

wp_calendar_alias

alias_event | alias_venue | alias_start_date    | alias_end_date
------------+-------------+---------------------+---------------------
5456        | 5           | 2013-06-28 21:06:00 | 2013-06-28 22:06:00
5456        | 5           | 2013-06-28 21:13:00 | 2013-06-28 22:13:00
5456        | 5           | 2013-06-29 21:06:00 | 2013-06-29 22:06:00
5456        | 5           | 2013-06-29 21:13:00 | 2013-06-29 22:13:00
5456        | 5           | 2013-06-30 21:06:00 | 2013-06-30 22:06:00
5456        | 5           | 2013-06-30 21:13:00 | 2013-06-30 22:13:00
5456        | 5           | 2013-07-01 21:07:00 | 2013-07-01 22:07:00
5456        | 5           | 2013-07-01 21:13:00 | 2013-07-01 22:13:00
5456        | 5           | 2013-07-02 21:07:00 | 2013-07-02 22:07:00
5456        | 5           | 2013-07-02 21:13:00 | 2013-07-02 22:13:00
5456        | 5           | 2013-07-03 21:07:00 | 2013-07-03 22:07:00
5456        | 5           | 2013-07-03 21:13:00 | 2013-07-03 22:13:00
5456        | 5           | 2013-07-04 21:07:00 | 2013-07-04 22:07:00
5456        | 5           | 2013-07-04 21:13:00 | 2013-07-04 22:13:00
5456        | 5           | 2013-07-05 21:07:00 | 2013-07-05 22:07:00
5456        | 5           | 2013-07-05 21:13:00 | 2013-07-05 22:13:00
5456        | 5           | 2013-07-06 21:07:00 | 2013-07-06 22:07:00
5456        | 5           | 2013-07-06 21:13:00 | 2013-07-06 22:13:00
5456        | 5           | 2013-07-07 21:13:00 | 2013-07-07 22:13:00
5456        | 5           | 2013-07-08 21:13:00 | 2013-07-08 22:13:00
5456        | 5           | 2013-07-09 21:13:00 | 2013-07-09 22:13:00
5456        | 5           | 2013-07-10 21:13:00 | 2013-07-10 22:13:00

MySQL query:

SELECT GROUP_CONCAT(DISTINCT postmeta.meta_key ORDER BY postmeta.meta_key ASC) AS post_meta_keys,
  GROUP_CONCAT(postmeta.meta_value ORDER BY postmeta.meta_key ASC) AS post_meta_values,
  GROUP_CONCAT(DISTINCT alias.alias_start_date ORDER BY alias.alias_start_date ASC) AS alias_start_dates,
  GROUP_CONCAT(DISTINCT alias.alias_end_date ORDER BY alias.alias_start_date ASC) AS alias_end_dates,
  posts.*,
  venues.*
FROM wp_posts AS posts
LEFT JOIN wp_postmeta AS postmeta ON posts.ID = postmeta.post_id
LEFT JOIN wp_calendar_alias AS alias ON posts.ID = alias.alias_event
LEFT JOIN wp_calendar_venues AS venues ON alias.alias_venue = venues.venue_id
WHERE posts.ID = 5456 AND posts.post_type = 'event'
GROUP BY postmeta.post_id

The query gives me a single-row result, but post_meta_values repeats the same values (post_meta_keys would as well, except I applied DISTINCT to it). The problem is that post_meta_values may legitimately contain the same meta_value under different meta_keys, so DISTINCT is not an option there.

I am still a MySQL novice, so I am not sure what to do. It looks like the alias rows get applied to every post_meta row; I'm not sure how to stop that.
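For what it's worth, the row multiplication comes from joining two independent one-to-many tables (wp_postmeta and wp_calendar_alias) before aggregating: every meta row pairs with every alias row. A hedged sketch that aggregates each side in its own derived table first, using the table names from the question:

    SELECT pm.post_meta_keys, pm.post_meta_values,
           al.alias_start_dates, al.alias_end_dates,
           posts.*
    FROM wp_posts AS posts
    LEFT JOIN (
        SELECT post_id,
               GROUP_CONCAT(meta_key ORDER BY meta_key) AS post_meta_keys,
               GROUP_CONCAT(meta_value ORDER BY meta_key) AS post_meta_values
        FROM wp_postmeta
        GROUP BY post_id
    ) AS pm ON posts.ID = pm.post_id
    LEFT JOIN (
        SELECT alias_event,
               GROUP_CONCAT(alias_start_date ORDER BY alias_start_date) AS alias_start_dates,
               GROUP_CONCAT(alias_end_date ORDER BY alias_start_date) AS alias_end_dates
        FROM wp_calendar_alias
        GROUP BY alias_event
    ) AS al ON posts.ID = al.alias_event
    WHERE posts.ID = 5456 AND posts.post_type = 'event';
    -- (venue columns omitted for brevity; join wp_calendar_venues similarly if needed)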

Thanks, Anthony

Why monitor bytes sent and received on mysql

Posted: 01 Jul 2013 03:02 PM PDT

I want to know why we should monitor bytes sent and received on MySQL. I hope I have not asked a vague question.
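For reference, these are the Bytes_sent and Bytes_received status counters; watching them over time shows how much network traffic the server generates, which helps spot things like unbounded SELECTs or a saturating link. They can be read with:

    SHOW GLOBAL STATUS LIKE 'Bytes_%';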

Thanks.

SQL Server freezes (because of application), need logging

Posted: 01 Jul 2013 04:58 PM PDT

We have an application running on top of our SQL Server 2005 instance, and a couple of times a week this application (unannounced) will cause the SQL Server to freeze. I can't even restart the SQL Server service; I have to restart the entire machine.

Needless to say, I can't open a query window to run sp_who2 to find the cause. It could be days before the issue shows up again. Is there any type of logging I can put in place to better track down what is causing the SQL Server to freeze up?

exec xp_readerrorlog only shows me what happened after the restart, so it isn't much help.

At the time it freezes, CPU is pegged at 90-97% and memory is maxed at 8 GB. The server has 12 GB, but max server memory is set to 8192 MB for SQL Server.
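One option, offered as a sketch rather than a known fix: have an Agent job snapshot the active requests to a table every minute or so, so there is evidence to read after the next freeze. The table name is illustrative; the DMVs exist in SQL Server 2005.

    CREATE TABLE dbo.RequestSnapshot (
        capture_time        datetime,
        session_id          smallint,
        status              nvarchar(30),
        wait_type           nvarchar(60),
        blocking_session_id smallint,
        cpu_time            int,
        sql_text            nvarchar(max)
    );

    -- Run this from a SQL Agent job on a short schedule:
    INSERT INTO dbo.RequestSnapshot
    SELECT GETDATE(), r.session_id, r.status, r.wait_type,
           r.blocking_session_id, r.cpu_time, t.text
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.session_id > 50;   -- skip system sessions

The dedicated administrator connection (ADMIN:servername) is also worth trying the next time it hangs, since it uses a reserved scheduler and may get in when normal connections can't.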

Grouping results based on link table

Posted: 01 Jul 2013 01:52 PM PDT

I have three tables:

users

+----+-------+
| id | name  |
+----+-------+
| 1  | dave  |
| 2  | cher  |
| 3  | meg   |
| 4  | sarah |
+----+-------+

groups

+----+-------+
| id | name  |
+----+-------+
| 1  | admin |
| 2  | super |
+----+-------+

user_groups

+----+---------+----------+
| id | user_id | group_id |
+----+---------+----------+
| 1  | 1       | 1        |
| 1  | 2       | 1        |
+----+---------+----------+

I want to create a query that returns an array of group names, each containing an array of users. I don't want to return groups which have no users, but I would like to return users which have no group.

admin
    - dave
    - cher
- meg
- sarah

At the moment I am getting the groups which have users first:

SELECT name FROM groups
INNER JOIN user_groups ON groups.id = user_groups.group_id

Then I iterate through the groups, adding the users of each:

SELECT id, name FROM users
INNER JOIN user_groups ON users.id = user_groups.user_id
WHERE user_groups.group_id = $group_id

Is there a way to combine these queries?
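A hedged sketch of a single query: drive from users so that ungrouped users survive the join, then rebuild the nesting in application code from the flat rows. Groups without users drop out naturally, because their id never appears in user_groups.

    SELECT g.name AS group_name, u.id AS user_id, u.name AS user_name
    FROM users u
    LEFT JOIN user_groups ug ON u.id = ug.user_id
    LEFT JOIN groups g ON ug.group_id = g.id
    ORDER BY g.name, u.name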

How to optimize table_cache?

Posted: 01 Jul 2013 12:22 PM PDT

I searched Google but couldn't find an exact answer: some set this value to 10000 or even 20000, while others say that even 1000 is too much for 1 GB of RAM. Confusing.

My Opened_tables counter keeps growing. MySQLTuner says I should raise the table_cache value; I tried raising it to 2K, then 3K, then 4K, and then set it to 512 to see what happens. Here is the report:

Currently running supported MySQL version 5.0.67-community-nt-log
Operating on 32-bit architecture with less than 2GB RAM
Archive Engine Installed
Berkeley DB Engine Not Installed
Federated Engine Installed
InnoDB Engine Installed
ISAM Engine Not Installed
NDBCLUSTER Engine Not Installed
Data in InnoDB tables: 12M (Tables: 3)
Data in MEMORY tables: 380K (Tables: 2)
Data in MyISAM tables: 83M (Tables: 84)
Total fragmented tables: 16
All database users have passwords assigned
Up for: 16h 12m 55s (161K q [2.000 qps], 6K conn, TX: 281M, RX: 22M)
Reads / Writes: 69% / 31%
Total buffers: 128.0M global + 2.1M per thread (100 max threads)
Maximum possible memory usage: 334.3M (16% of installed RAM)
Slow queries: 1% (11/161K)
Highest usage of available connections: 10% (10/100)
Key buffer size / total MyISAM indexes: 32.0M/4.1M
Key buffer hit rate: 97% (849K cached / 20K reads)
Query cache efficiency: 28% (26K cached / 93K selects)
Query cache prunes per day: 0
Sorts requiring temporary tables: 1% (1 temp sorts / 21K sorts)
Temporary tables created on disk: 1% (2 on disk / 15K total)
Thread cache hit rate: 99% (10 created / 6K connections)
Table cache hit rate: 5% (139 open / 2K opened)
Open file limit used: 20% (232/1K)
Table locks acquired immediately: 99% (116K immediate / 116K locks)
InnoDB data size / buffer pool: 12.5M/32.0M
Run OPTIMIZE TABLE to defragment tables for better performance
MySQL started within last 24 hours - recommendations may be inaccurate
Increase table_cache gradually to avoid file descriptor limits
table_cache (> 512)
Scan Complete

mysql> show global STATUS LIKE 'open%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_files    | 222   |
| Open_streams  | 0     |
| Open_tables   | 127   |
| Opened_tables | 2335  |
+---------------+-------+
4 rows in set (0.00 sec)

But Opened_tables is still growing. Should I raise table_cache again, or is it not worth the attention?

Also, some people say the problem is usually creating a lot of temporary tables; as we can see from the stats, that's not my case. What's the right ratio between table_cache and RAM?
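A hedged way to reason about it: each table-cache entry holds open file descriptors for MyISAM tables, so the practical limit is open_files_limit rather than RAM directly, and a rough health check is whether Opened_tables keeps climbing after the cache has warmed up. The setting and counters can be watched with:

    SHOW GLOBAL VARIABLES LIKE 'table_cache';
    SHOW GLOBAL STATUS LIKE 'Open%';
    -- Dynamic in 5.0; raise gradually and re-check Opened_tables growth:
    SET GLOBAL table_cache = 1024;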

Thanks.
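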

Will table partitioning speed up my queries?

Posted: 01 Jul 2013 04:56 PM PDT

My database is set up with what we call our staging tables and our live tables. Currently we have about 2 million rows in a table called products. Our customers, which we call manufacturers, load their data into the staging tables, and when they are done updating, adding, and deleting data, we push it into our live tables.

These live tables feed a web service for a mobile app and websites. Pushing data from staging to live consists of deleting all the data in the live tables and inserting all of the data from the staging tables. All of this is segregated by a ManufacturerID column that exists in every table.

Some manufacturers have 500 products, some have 75,000. Depending on this, we sometimes have REALLY slow web service responses because of paging through all 2 million records. Deleting the data from the live tables also seems to be getting ridiculously slow.

Would partitioning my products table by ManufacturerID help this situation? From what I have read, this would basically mean that when I query my products I would only be querying a small subset of the table for that ManufacturerID, and therefore see a huge improvement in overall response time.
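For illustration only, and assuming SQL Server (the question doesn't name the engine): a partition scheme on ManufacturerID would let reads touch one partition and let the push replace one manufacturer's slice without scanning the rest. The boundary values and the ProductID column below are invented placeholders.

    CREATE PARTITION FUNCTION pfManufacturer (int)
        AS RANGE LEFT FOR VALUES (100, 200, 300);   -- illustrative boundaries

    CREATE PARTITION SCHEME psManufacturer
        AS PARTITION pfManufacturer ALL TO ([PRIMARY]);

    -- Rebuild the clustered index onto the scheme, leading on ManufacturerID:
    CREATE CLUSTERED INDEX cix_products
        ON dbo.products (ManufacturerID, ProductID)
        ON psManufacturer (ManufacturerID);

Even without partitioning, a clustered index leading on ManufacturerID often buys much of the same benefit for this access pattern.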

Where should a default setting field be?

Posted: 01 Jul 2013 10:40 AM PDT

I am creating a database to track occurrences of incidents on buses, based on an existing system. Currently the drivers can set a default bus number, which is used to pre-populate the bus number on the incident form; they can change it if they are driving another bus. This is a selection made by the driver, not a bus assigned to them, per se. I was going to put this field in the staff table, not the bus table. However, someone said I should not, and that I should put it in the bus table. I guess it would technically work either way, but I don't feel like the field belongs to the bus; it belongs to the staff member. Am I overthinking this?

Create multiple columns from a single column filtering criteria?

Posted: 01 Jul 2013 11:25 AM PDT

Thanks in advance for your time. What I need to achieve is to generate a multi-column result from a single query. The thing is this:

I have a table that stores the following:

id | user_id | revision
1  | 1       | Approved
2  | 1       | Rejected
3  | 1       | Pending
4  | 1       | Pending
5  | 1       | Pending

What I need is to get the following result:

Total | User | Pending | Rejected | Approved
5     | 1    | 3       | 1        | 1

I have tried using GROUP BY like this:

SELECT count(id) Total, user_id User, revision
FROM table1
GROUP BY user_id, revision

But the output is far from what I am expecting to get:

Total | User | revision
3     | 1    | Pending
1     | 1    | Rejected
1     | 1    | Approved

The idea is to get a properly shaped result back, to avoid burdening the server-side code with unnecessary processing; if the database can do the handling in one call, that would be great.

I tried googling for hours, and all I could get were results for 'single column from multiple rows', which is not what I need. Thanks for your help in advance.

The final result set with many users should look like this:

Total | User | Pending | Rejected | Approved
5     | 1    | 3       | 1        | 1
5     | 2    | 4       | 0        | 1
5     | 7    | 2       | 3        | 0

Update: After more research I found this: http://stackoverflow.com/questions/14172282/sql-query-needed-to-get-result-in-two-columns-grouped-by-type-and-percentage-in

This is almost what I need; after some editing I still can't get it to return plain counts instead of percentages. I lack the knowledge of working with grouped CASE expressions. I will keep trying; if any of you can help, it would be greatly appreciated.

Solution: I managed to do this. It was easy, but my lack of knowledge got in the way. I read the chapter "Generating Summaries" in the MySQL Cookbook, which explains that you can put COUNT, SUM, and other aggregate functions inside your query to produce the proper values in the columns you define. This is the query I ended up using:

SELECT user_id as userid, count(id) Total,
  COUNT(IF(revision = 'pending', 1, NULL)) as pending,
  COUNT(IF(revision = 'approved', 1, NULL)) as approved,
  COUNT(IF(revision = 'rejected', 1, NULL)) as rejected
FROM table1
GROUP BY user_id

This fills each column with a count only when the criterion is met.

How would you track DB storage use and then extrapolate it?

Posted: 01 Jul 2013 09:26 AM PDT

I need to track the size of the various DBs we have, then see their growth rate so that we can estimate where they will be at certain times. I'm using Oracle 11g first, but will eventually apply this to other DBMSs like DB2 and Sybase.

I'm not even sure where to begin or what commands/queries I'll need to run.
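A hedged starting point for the Oracle 11g side: sample allocated and used space on a schedule into a history table, then fit a trend to the samples. dba_data_files and dba_segments are standard dictionary views; the history table name is illustrative.

    CREATE TABLE db_size_history (
        sample_date DATE,
        total_mb    NUMBER,
        used_mb     NUMBER
    );

    -- Run daily (e.g. from DBMS_SCHEDULER) and extrapolate from the deltas:
    INSERT INTO db_size_history
    SELECT SYSDATE,
           (SELECT SUM(bytes) / 1024 / 1024 FROM dba_data_files),
           (SELECT SUM(bytes) / 1024 / 1024 FROM dba_segments)
    FROM dual;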

What does the wait type PREEMPTIVE_XE_CALLBACKEXECUTE mean?

Posted: 01 Jul 2013 10:15 AM PDT

I have two instances in the last week where a DROP command for an Extended Events session has taken over a day to complete, sitting with the wait: PREEMPTIVE_XE_CALLBACKEXECUTE.

Can anyone explain what this wait type means?

More Background Info: In the first instance, I ran a T-SQL command to drop the session and it completed a day later. Using sp_whoisactive, I saw that the query was waiting on PREEMPTIVE_XE_CALLBACKEXECUTE.

During this time, Object Explorer's queries to gather metadata were being blocked and receiving lock timeouts, but there were no other complaints (or so I thought at the time).

I tried to drop another session on Friday, and the same behavior occurred, except it didn't just go away after a day as with the first event. Instead, I found out this morning that the client application could not connect; it was being blocked by the DROP EVENT SESSION query.

A restart of the SQL Service cleared out the blocking query.

I can't find anything that helps to diagnose what this wait type is... or why my event sessions won't drop like they should. Can you help?

Server Info: SQL 2008 R2 Enterprise with SP2
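For anyone stuck in the same state, a hedged diagnostic to run while the DROP hangs (these columns are standard in 2008 R2):

    SELECT session_id, wait_type, wait_duration_ms, blocking_session_id
    FROM sys.dm_os_waiting_tasks
    WHERE wait_type = 'PREEMPTIVE_XE_CALLBACKEXECUTE';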

bad ELF interpreter while installing Oracle 9.2.0.4 in Red Hat 6.2

Posted: 01 Jul 2013 08:56 AM PDT

Fellow DBAs

When trying to run runInstaller for Oracle 9.2.0.4 on a Red Hat server, I get the following error:

./runInstaller: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory

Both the installer disks and the OS are 64-bit.

Since we haven't installed that version of Oracle on that version of Red Hat before, we wonder whether the OS is incorrectly installed (i.e. missing libraries) or whether Oracle 9.2.0.4 won't run on Red Hat 6.2.

Currently, we don't have Oracle support.

Please shed some light on this.
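One observation worth checking: /lib/ld-linux.so.2 is the 32-bit loader, and the 9.2 runInstaller is a 32-bit binary, so this error usually means the 32-bit runtime is missing from the 64-bit OS. A hedged first step on RHEL 6:

    # Install the 32-bit C library so 32-bit binaries can run at all:
    yum install glibc.i686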

Transactional Replication - Snapshot metrics

Posted: 01 Jul 2013 08:00 PM PDT

I have an application that is using Transactional Replication. We are using snapshots to initialize the subscriptions. All publishers and subscribers are using SQL Server 2008.

Due to the way our deployment works, I end up having to reinitialize the subscribers every time we do a build. This is time-consuming and annoys the business, so I have to make sure our developers are aware of the impact their changes will have on the size of the snapshots and the time it takes to apply them. I also want to measure the impact of settings changes I make on the publications and agents (bcp threads, batch size, etc.).

The metrics I want to capture are:

  1. The size of the snapshot generated for each publication.
  2. The time it took to generate the snapshot.
  3. The time it took to apply the snapshot at the subscriber.

Is it possible to get this information by querying the distribution database? I realize that I could get the snapshot size by looking at the filesystem after it is generated, but I would prefer to avoid that if possible.
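A hedged place to start: the distribution database keeps snapshot agent run history, which covers metric 2 (duration), and the distribution agent history in dbo.MSdistribution_history covers metric 3; snapshot size itself isn't recorded directly, so the filesystem check may be unavoidable for metric 1. Assuming the default database name of distribution:

    USE distribution;
    SELECT a.publication, h.start_time, h.duration, h.comments
    FROM dbo.MSsnapshot_history h
    JOIN dbo.MSsnapshot_agents a ON a.id = h.agent_id
    ORDER BY h.start_time DESC;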

How to determine Oracle LOB storage footprint?

Posted: 01 Jul 2013 08:41 AM PDT

With SECUREFILE storage I can specify whether I want compression (and the level of it) and deduplication, and it's all a trade-off between time and space.

Timing is fairly easy to profile but what's the easiest way to get a reasonably accurate measurement of how much space a specific LOB column takes up?
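A hedged approach: LOB data lives in its own segment, so joining the dictionary views gives the on-disk footprint per column (MY_TABLE is a placeholder):

    SELECT l.column_name, s.bytes / 1024 / 1024 AS mb
    FROM user_lobs l
    JOIN user_segments s ON s.segment_name = l.segment_name
    WHERE l.table_name = 'MY_TABLE';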

Where to store versioned/historized records?

Posted: 01 Jul 2013 11:38 AM PDT

In our application we version records using triggers. The application is expected to handle millions of records in the future, and the history/versioning tables are there to ensure there is an audit trail for various transactions.

Currently everything is stored in one database. My concern is that these tables will bloat the transactional database, and I really want to move all record history into another database, mainly because:

  1. transactional database can be backed up more frequently since it is smaller.
  2. indices can be scheduled to be rebuilt more frequently on the transactional db.
  3. The history can be archived more easily if needed.

Are there any performance advantages to choosing this strategy or is the extra database another moving part which is not needed? What is the prescribed solution for this sort of thing?

Update column value in entire database

Posted: 01 Jul 2013 11:01 AM PDT

In my database, I have around 30 tables, each with an emp_number (INT) column.

How can I update the value of emp_number in all 30 tables?
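A hedged sketch, assuming MySQL (the engine isn't stated): generate the 30 UPDATE statements from information_schema rather than writing them by hand, then review and run the generated statements. The values in the generated UPDATE are placeholders.

    SELECT CONCAT('UPDATE `', table_name,
                  '` SET emp_number = 1002 WHERE emp_number = 1001;') AS stmt
    FROM information_schema.columns
    WHERE table_schema = DATABASE()
      AND column_name = 'emp_number';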

Filtering results by date [migrated]

Posted: 01 Jul 2013 08:32 AM PDT

I have the following query:

SELECT STOCK.ITEM_DATE, STOCK.AMOUNT, STOCK.OPERATIONTYPE, ITEM.ITEM_NAME
FROM ITEM INNER JOIN STOCK ON STOCK.ITEM_ID = ITEM.ITEM_ID
WHERE STOCK.ITEM_DATE > to_date('06/26/2013','mm/dd,yyyy')
  AND ITEM.CATEGORY_ID = 1

I want to select all items before '06/26/2013', INCLUDING '06/26/2013', but the result is everything before '06/26/2013' NOT including '06/26/2013' itself.

Any ideas how to do that? I tried >= but nothing changed.
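Two hedged observations, assuming Oracle from the to_date syntax: the format mask 'mm/dd,yyyy' has a comma where the literal has a slash, so the parsed date may not be what's intended; and if ITEM_DATE carries a time-of-day, rows on the boundary day fall outside a plain comparison. Truncating the column handles the boundary day:

    SELECT STOCK.ITEM_DATE, STOCK.AMOUNT, STOCK.OPERATIONTYPE, ITEM.ITEM_NAME
    FROM ITEM INNER JOIN STOCK ON STOCK.ITEM_ID = ITEM.ITEM_ID
    WHERE TRUNC(STOCK.ITEM_DATE) <= to_date('06/26/2013', 'mm/dd/yyyy')
      AND ITEM.CATEGORY_ID = 1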

MySQL Master-Slave setup: master is not indexed and slave is indexed

Posted: 01 Jul 2013 12:56 PM PDT

Is it possible to have a MySQL master-slave setup where the master has no indexes (for faster insertion) and the slave is indexed?

Thanks in advance!

"Mysql2::Error: Table doesn't exist" after power outtage

Posted: 01 Jul 2013 09:31 AM PDT

The power went out while my site was running, and now when trying to access the site running locally on my Mac I get this error:

ActiveRecord::StatementInvalid: Mysql2::Error: Table 'mysite.users' doesn't exist:  

Is there anything I can do other than dropping the database and recreating it?

EDIT:

I'm getting this when trying to drop the database:

rake db:drop
Couldn't drop MySite : #<Mysql2::Error: Unknown database 'mysite'>

InnoDB tables inaccessible after reboot

Posted: 01 Jul 2013 11:32 AM PDT

After each reboot I can't see my InnoDB tables. When I use the command line I can see the tables listed, but I cannot access them.

mysql> show tables;
+----------------------+
| Tables_in_xpl_ticket |
+----------------------+
| active_list          |
| bill_block           |
| bill_block_list      |
| block                |
| block_list           |
| box                  |
| cisco_switch         |
| mac_add              |
| mac_change           |
| month                |
| new_connect          |
| new_user             |
| open                 |
| package_change       |
| paid_list            |
| pay                  |
| problem              |
| re_open              |
| refund               |
| ticket               |
| user                 |
| user_log             |
+----------------------+
22 rows in set (0.00 sec)

But when I try to access a table, it says the table doesn't exist:

mysql> select * from active_list;
ERROR 1146 (42S02): Table 'xpl_ticket.active_list' doesn't exist

Edit by RolandoMySQLDBA

Please run the following in MySQL:

SELECT VERSION();
SELECT CONCAT('[',table_name,']') tbl_name
FROM information_schema.tables WHERE table_schema='xpl_ticket';

Please run the following in the OS:

cd /var/lib/mysql/xpl_ticket
ls -l

Hi RolandoMySQLDBA,

Thank you for your quick reply. Here are all those outputs.

mysql> SELECT VERSION();
+-------------------------+
| VERSION()               |
+-------------------------+
| 5.5.31-0ubuntu0.13.04.1 |
+-------------------------+
1 row in set (0.00 sec)

mysql> SELECT CONCAT('[',table_name,']') tbl_name
    -> FROM information_schema.tables WHERE table_schema='xpl_ticket';
+-------------------+
| tbl_name          |
+-------------------+
| [active_list]     |
| [bill_block]      |
| [bill_block_list] |
| [block]           |
| [block_list]      |
| [box]             |
| [cisco_switch]    |
| [mac_add]         |
| [mac_change]      |
| [month]           |
| [new_connect]     |
| [new_user]        |
| [open]            |
| [package_change]  |
| [paid_list]       |
| [pay]             |
| [problem]         |
| [re_open]         |
| [refund]          |
| [ticket]          |
| [user]            |
| [user_log]        |
+-------------------+
22 rows in set (0.03 sec)

My data directory is not in the "/var/lib/mysql" path; I have changed it to "/var/www/xampp/mysql/data". So I executed the following there instead:

Lurid / # cd /var/www/xampp/mysql/data/xpl_ticket/
Lurid xpl_ticket # ls -l
total 265
-rwxrwxrwx 1 root root 9272 Jun  6 12:48 active_list.frm
-rwxrwxrwx 1 root root 8654 Jun  6 12:48 bill_block.frm
-rwxrwxrwx 1 root root 9272 Jun  6 12:48 bill_block_list.frm
-rwxrwxrwx 1 root root 8654 Jun  6 12:48 block.frm
-rwxrwxrwx 1 root root 9272 Jun  6 12:48 block_list.frm
-rwxrwxrwx 1 root root 8802 Jun  6 12:48 box.frm
-rwxrwxrwx 1 root root 8648 Jun  6 12:48 cisco_switch.frm
-rwxrwxrwx 1 root root   65 Jun  6 12:48 db.opt
-rwxrwxrwx 1 root root 8690 Jun  6 12:48 mac_add.frm
-rwxrwxrwx 1 root root 8698 Jun  6 12:48 mac_change.frm
-rwxrwxrwx 1 root root 8618 Jun  6 12:48 month.frm
-rwxrwxrwx 1 root root 8716 Jun  6 12:48 new_connect.frm
-rwxrwxrwx 1 root root 9002 Jun  6 12:48 new_user.frm
-rwxrwxrwx 1 root root 8680 Jun  6 12:48 open.frm
-rwxrwxrwx 1 root root 8724 Jun  6 12:48 package_change.frm
-rwxrwxrwx 1 root root 8692 Jun  6 12:48 paid_list.frm
-rwxrwxrwx 1 root root 8592 Jun  6 12:48 pay.frm
-rwxrwxrwx 1 root root 8802 Jun  6 12:48 problem.frm
-rwxrwxrwx 1 root root 8670 Jun  6 12:48 refund.frm
-rwxrwxrwx 1 root root 8714 Jun  6 12:48 re_open.frm
-rwxrwxrwx 1 root root 8900 Jun  6 12:48 ticket.frm
-rwxrwxrwx 1 root root 8704 Jun  6 12:48 user.frm
-rwxrwxrwx 1 root root 8808 Jun  6 12:48 user_log.frm

Hi Mannoj

These tables were created in this same database.

I have some errors in my "/var/log/mysql/error.log". I don't understand what they mean. Here they are:

130611 12:41:28 [Note] /usr/sbin/mysqld: Normal shutdown

130611 12:41:29 [Note] Event Scheduler: Purging the queue. 0 events
130611 12:41:30  InnoDB: Starting shutdown...
130611 12:41:31  InnoDB: Shutdown completed; log sequence number 1595675
130611 12:41:31 [Note] /usr/sbin/mysqld: Shutdown complete

130611 12:43:15 [Note] Plugin 'FEDERATED' is disabled.
130611 12:43:15 InnoDB: The InnoDB memory heap is disabled
130611 12:43:15 InnoDB: Mutexes and rw_locks use GCC atomic builtins
130611 12:43:15 InnoDB: Compressed tables use zlib 1.2.7
130611 12:43:15 InnoDB: Using Linux native AIO
130611 12:43:15 InnoDB: Initializing buffer pool, size = 128.0M
130611 12:43:15 InnoDB: Completed initialization of buffer pool
InnoDB: The first specified data file ./ibdata1 did not exist:
InnoDB: a new database to be created!
130611 12:43:15  InnoDB: Setting file ./ibdata1 size to 10 MB
InnoDB: Database physically writes the file full: wait...
130611 12:43:15  InnoDB: Log file ./ib_logfile0 did not exist: new to be created
InnoDB: Setting log file ./ib_logfile0 size to 5 MB
InnoDB: Database physically writes the file full: wait...
130611 12:43:16  InnoDB: Log file ./ib_logfile1 did not exist: new to be created
InnoDB: Setting log file ./ib_logfile1 size to 5 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Doublewrite buffer not found: creating new

Thanks to both of you.

Fulltext stoplist replication

Posted: 01 Jul 2013 07:45 PM PDT

In MS SQL Server 2008 R2 I have replication of a table with a fulltext index on it, but the stoplist associated with the replicated fulltext index doesn't replicate.

Is there any way to replicate the stoplist as well?
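A hedged workaround, since stoplists are not replication articles as far as I can tell: script the stoplist at the publisher and recreate it at the subscriber by hand. The name and word below are placeholders.

    -- At the subscriber:
    CREATE FULLTEXT STOPLIST MyStoplist;
    ALTER FULLTEXT STOPLIST MyStoplist ADD 'someword' LANGUAGE 1033;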

Process attempted to unlock a resource it does not own

Posted: 01 Jul 2013 01:45 PM PDT

SQL Server 2005 SP4 32-Bit

I have a DBCC CHECKDB job running nightly. Last night, soon after the job started, I got the errors below. The database is NOT in suspect mode, and CHECKDB comes back clean when I run it now. The database is fully accessible. Should I be concerned? I'd hate to go back to a backup at this point.

2013-04-02 02:10:55.53 spid56  Error: 1203, Severity: 20, State: 1.
2013-04-02 02:10:55.53 spid56  Process ID 56 attempted to unlock a resource it does not own: PAGE: 34:1:388664. Retry the transaction, because this error may be caused by a timing condition. If the problem persists, contact the database administrator.
2013-04-02 02:10:55.58 spid56  Error: 3314, Severity: 17, State: 3.
2013-04-02 02:10:55.58 spid56  During undoing of a logged operation in database 'MY_DATABASE_NAME', an error occurred at log record ID (1342973:12519:37). Typically, the specific failure is logged previously as an error in the Windows Event Log service. Restore the database or file from a backup, or repair the database.

Compare the same table

Posted: 01 Jul 2013 11:45 AM PDT

I am facing an issue with the following query: when I execute it, it takes very long. I broke the query into two parts and compared the results with a shell script, but is there any way to do it in one query?

Any suggestions welcome.

select distinct substring(mobile_num,3,12)
from mobile
where status ='INACTIVE'
  and date(unsub_date) >= DATE(CURDATE() - INTERVAL 90 DAY)
  and mobile_num not in (select distinct mobile_num from mobile where status='ACTIVE')
order by updtm;

mobile_num  | varchar(12)
keyword     | varchar(45)
sub_date    | datetime
unsub_date  | datetime
circle_name | varchar(45)
type        | varchar(45)
status      | varchar(45)
operator    | varchar(45)
act_mode    | varchar(45)
deact_mode  | varchar(45)
id          | bigint(20)
updtm       | timestamp
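A hedged rewrite to try: NOT IN (subquery) is a notorious slow spot in older MySQL, and wrapping unsub_date in date() prevents index use on that column. An anti-join form of the same logic, assuming an index on (mobile_num, status) could be added:

    SELECT DISTINCT SUBSTRING(m.mobile_num, 3, 12)
    FROM mobile m
    LEFT JOIN mobile a
           ON a.mobile_num = m.mobile_num AND a.status = 'ACTIVE'
    WHERE m.status = 'INACTIVE'
      AND m.unsub_date >= CURDATE() - INTERVAL 90 DAY
      AND a.mobile_num IS NULL
    ORDER BY m.updtm;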

sql server database sharding - what to do with common data / non sharded data

Posted: 01 Jul 2013 06:45 PM PDT

We have a very large-scale, enterprise-level database. As part of our business model, all web users hit our web servers at the same time each month, which in turn hammers our SQL box. The traffic is very heavy and continues to grow as the company grows. SQL proc optimization has been performed, and the hardware has already been scaled up to a very high level.

We are looking to shard the database now to ensure that we can handle company growth and future loads.

We have decided what particular data should be sharded. It is a subset of our database which is highly utilized.

However, my question is about the non-sharded data which is common/universal. Examples of such data might be an Inventory table, or possibly an Employee table, a user table, etc.

I see two options to handle this common/universal data:

1) Design 1: place the common/universal data in an external database. All writes occur there, and the data is then replicated down to each shard, allowing each shard to read it and inner join to it in T-SQL procs.

2) Design 2: give each shard its own copy of all common/universal data. Let each shard write locally to these tables and use SQL merge replication to update/sync this data on all other shards.

Concerns about design #1:

1) Transactional issues: if you have a situation in which you must write or update data in a shard and then write/update a common/universal table in one stored proc, for instance, you will no longer be able to do this easily. The data now lives on separate SQL instances and databases. You may need to involve MS DTC to see if you can wrap these writes into a distributed transaction. Performance is a concern here, and rewrites may be needed for procs that write to both sharded and common data.

2) A loss of referential integrity: it is not possible to enforce cross-database referential integrity.

3) Recoding large areas of the system so that it knows to write common data to the new universal database but read common data from the shards.

4) Increased database trips: as in #1 above, when you must update both sharded and common data you will make multiple round trips, since the data is now in separate databases. There is some network latency here, but I am not as worried about this as about the above three.

Concerns about design #2:

In design #2, each shard gets its own instance of all common/universal data. This means that all code that joins to or updates common data continues to work just as it does today, and very little recoding/rewriting is needed from the development team. However, this design depends completely on merge replication to keep the data in sync across all shards. Our DBAs are highly skilled but are very concerned that merge replication may not be able to handle this, and that should it fail, recovery is not straightforward and could impact us very negatively.

I am curious to know if anyone has gone with design option #2. I am also curious to know if I am overlooking a 3rd or 4th design option that I do not see.

Thank you in advance.

ProcessID Owner with third party app

Posted: 01 Jul 2013 11:35 AM PDT

I am having problems trying to audit a third-party app with a SQL Server backend.

The app manages users and user access via a SQL table, using a single SQL login to access the databases.

I am trying to audit this by using the host PID, much like Windows Task Manager associates a PID and an owner with every process.

Here is what I have attempted so far:

  • I was able to figure out how to extract the app.exe PID.
  • I cannot figure out how to get the Windows owner associated with that PID.
  • I have tried using xp_cmdshell to query the Windows tasklist, and even wrote a .NET console app (called from SQL) to collect the information, but every time I try to extract the owner it is blank.

Any thoughts on how I can get the Owner?
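One hedged thing to verify: tasklist only includes the owner (the "User Name" column) in verbose mode, and when invoked through xp_cmdshell it runs as the SQL Server service account, which may lack the rights to read other users' process tokens; that alone can produce blank owners.

    EXEC xp_cmdshell 'tasklist /v /fo csv';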

Merge two packages into one package in Oracle 10g

Posted: 01 Jul 2013 11:39 AM PDT

I want to combine two different packages into a single new package in Oracle 10g. Is there a way to accomplish this?
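There is no single command for this, as far as I know; a hedged manual route is to pull both packages' DDL and splice the specs and bodies together under the new name (PKG_ONE and PKG_TWO are placeholders):

    SELECT DBMS_METADATA.GET_DDL('PACKAGE', 'PKG_ONE') FROM dual;
    SELECT DBMS_METADATA.GET_DDL('PACKAGE', 'PKG_TWO') FROM dual;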

Sys.WebForms.PageRequestManager Error in SSRS

Posted: 01 Jul 2013 11:45 AM PDT

When I execute an SSRS report, I get the error below:

Sys.WebForms.PageRequestManagerServerErrorException: An unknown error occurred
while processing the request on the server. The status code returned from the
server was: 500
Line: 5
Char: 62099
Code: 0

I'm getting this error at the bottom left, at a yellow exclamation mark.

What could be the cause of this error in SSRS?

Please help!

Could not open File Control Bank (FCB) for invalid file ID X when restoring from snapshot

Posted: 01 Jul 2013 11:49 AM PDT

We have a scenario in which, during integration tests, we make a backup of an existing database and restore it under a new name. We then create a snapshot of our "new" database. After each test we restore the database from the snapshot so that different tests don't affect each other. This works well except in one specific scenario, where the restore fails with the following error message:

System.Data.SqlClient.SqlException : Unable to open the physical file "\server\path\MyDb_IntegrationTestsSnapshot.ss". Operating system error 58: "58(failed to retrieve text for this error. Reason: 15105)". Could not open File Control Bank (FCB) for invalid file ID 2 in database 'MyDb_IntegrationTestsSnapshot'. Verify the file location. Execute DBCC CHECKDB. RESTORE DATABASE is terminating abnormally.

The code that produces that looks like this:

using (var con = new SqlConnection(ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString))
{
    con.Open();
    var databaseName = GetDatabaseName();
    var sql =
        string.Format(
            @"use master
            alter database {0} set single_user with rollback immediate;
            RESTORE DATABASE {0} FROM DATABASE_SNAPSHOT = '" + GetSnapshotName() + @"';
            ALTER DATABASE {0} SET MULTI_USER WITH NO_WAIT",
            databaseName);
    var command = new SqlCommand(sql, con);

    command.ExecuteNonQuery();

    con.Close();

    SqlConnection.ClearAllPools();
}

In almost all cases the above code works just fine. However, when the test executes a few specific SQL commands that use temporary tables AND perform deletes, the above error occurs. Note, however, that in other cases commands that use temporary tables work just fine; likewise, other deletes work well too.

Furthermore, this ONLY happens when we run the tests on our build server against a separate database machine. Running the tests locally against a local SQL Server instance works fine, as does running the tests locally against the same database server that the build server uses.

Finally, note that other tests that use this rollback setup work fine on the build server. Even the rollback after the now-failing tests ran just fine at a previous point in time, but it has now stopped working. The only real difference between then and now seems to be that some additional data has been added.

We're using SQL Server 2008 R2 and .NET 4.

UPDATE: Based on the answer by Pondlife, we attempted to run DBCC, but without success, as it reported the snapshot as suspect. We then checked the logs and found "Error: 17053, Severity: 16, State: 1.". Searching for information about that, we found that one cause can be running on VMware with a PVSCSI adapter configured. The build server does run on VMware (the DB server is native), but it is configured with an LSI Logic SAS adapter, not PVSCSI.

UPDATE 2: We've narrowed this down to tests that cause relatively many (330) deletes. If we change a test whose teardown usually causes this error so that the actual test code triggers only a single delete instead of hundreds, it works fine.
