Wednesday, July 31, 2013

How do I determine when scaling is necessary or helpful?

Posted: 31 Jul 2013 05:43 PM PDT

I have a MySQL server that I use as the back-end storage for an online game. The game writes to the database to persist character information such as world location, health status, ability stats, etc. This data is written to the database every few seconds for each character and vehicle. I have been noticing that as more and more users join the game, the queries I run against the database seem slower. Also, as I make the online world more complex via content added through the database, the game tends to slow.

I am looking for the bottleneck, and I am wondering how to determine whether scaling my database (vertically or horizontally) will make a difference. Knowing the database is MySQL, how do I go about seeing if my single database server is the bottleneck and if scaling it would be helpful? Thank you.
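(A sketch of typical first checks, not from the original post; the statements are standard MySQL, and the 1-second threshold is just an example.)

SHOW GLOBAL STATUS LIKE 'Threads_running';  -- queries actually executing concurrently
SHOW GLOBAL STATUS LIKE 'Slow_queries';     -- cumulative count of slow statements
SHOW FULL PROCESSLIST;                      -- what is running (and waiting) right now

-- Capture statements slower than 1 second for offline analysis:
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 1;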

How to allow each user to access his own db, and superuser to all dbs in PostgreSQL?

Posted: 31 Jul 2013 03:57 PM PDT

I want to set up some pretty traditional permissions: each user can access his own db, and the superuser can access all the dbs.

So far I got the first part right (I think). I edited pg_hba.conf, and changed this:

local all all
host all all
host all all

to this:

local sameuser all
host sameuser all
host sameuser all

But now I can't access any db with my superuser. I tried adding "localhost all myuser", but it doesn't seem to work.

Any ideas?
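(A sketch of one layout that typically achieves this, not from the original post: pg_hba.conf is matched top-down and the first matching line wins, so a line for the superuser role - assumed here to be named "postgres" - goes before the sameuser lines. The auth methods are examples.)

local   all        postgres                        peer
host    all        postgres    127.0.0.1/32        md5
local   sameuser   all                             md5
host    sameuser   all         127.0.0.1/32        md5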

PostgreSQL COPY: is it always faster to delete and recreate indexes?

Posted: 31 Jul 2013 06:46 PM PDT

In this answer, Erwin Brandstetter recommends:

It is also substantially faster to delete indexes before a huge bulk INSERT / COPY and recreate them afterwards as it is much more costly to incrementally adjust indexes for every inserted row than to create them at once.

1) Is this true even when the table is already large? If not, is there an easy way to estimate the point of diminishing returns? For example, I'm doing a COPY of 30,000 records at a time into a table which already has, say, 100 million rows. Each batch is relatively small, but on the other hand the batches on hand to import at the moment will more than double the size of the table.

2) If I were to delete the index before COPY, would recreating it be faster if the COPY is done with records already in index order?

There is only one index on the table, which is a 5-column primary key.

I'm doing this on a non-production machine, and I've already disabled fsync, synchronous_commit, autovacuum, etc. I'm executing the COPY operation from four separate processes concurrently on an 8-core machine. I've got 12GB of memory, so I could set maintenance_work_mem to 1GB or more if it would help.
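(For reference, a sketch of the drop/reload/recreate pattern being asked about; big_table, its key columns, and the file path are placeholders, and this makes no claim about where the break-even point lies.)

BEGIN;
SET LOCAL maintenance_work_mem = '1GB';   -- speeds up the index build step
ALTER TABLE big_table DROP CONSTRAINT big_table_pkey;
COPY big_table FROM '/path/to/batch.csv' WITH (FORMAT csv);
ALTER TABLE big_table ADD PRIMARY KEY (c1, c2, c3, c4, c5);
COMMIT;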

Changing autogrow settings with database mirroring

Posted: 31 Jul 2013 02:15 PM PDT

Recently I changed the Autogrow settings for some of the databases on our SQL Server 2008 R2 server. These are involved in a database mirroring configuration, with the principal on SERVER1 and the mirror on SERVER2.

This week I failed over three databases - now SERVER1 is the mirror and SERVER2 is the principal for these databases. It appears that the autogrow settings did not carry over to SERVER2 properly, as the databases now show that they grow by a percentage (two are set to 32768%, the other to 131072%).

This is different from the settings that used to be there (I believe it was the default, 10%), and also different from the 256MB I set on SERVER1.

To make things more confusing, this is only happening on the primary file - the secondary files and the log file have retained the settings I set on SERVER1.

My suspicion is that this is a bug - I did patch SQL Server after changing the autogrow settings. My question is: has anyone seen this scenario before? Is there a method to make sure all of the settings are correct on both servers without failing all the databases over?

UPDATE: Using a 4th database that I'll call DB1, I set the autogrow on SERVER1 to 512MB (after failing the database over, witnessing the same issue, and failing it back). When I failed it over to SERVER2 after that, it showed growth of 65536%. The takeaway is that the is_percent_growth value in sys.master_files is not moving to SERVER2, but the growth value is.

UPDATE 2: With DB1 on SERVER1, I changed the autogrow to 10% and failed it over to SERVER2. The 10% value remained. I then repeated this, setting autogrow back to 256MB. Again, the growth value changed but is_percent_growth did not.
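(A sketch for comparing the settings, not from the original post: run this on each server and diff the output. growth is in 8 KB pages when is_percent_growth = 0.)

SELECT DB_NAME(database_id) AS db,
       name AS logical_file,
       is_percent_growth,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(20)) + '%'
            ELSE CAST(growth * 8 / 1024 AS varchar(20)) + ' MB'
       END AS growth_setting
FROM sys.master_files
ORDER BY db, logical_file;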

Transfer logins to another server

Posted: 31 Jul 2013 01:30 PM PDT

I moved a database from SQL Server 2005 to SQL Server 2012. Is there any way of restoring all the logins on the new server?

SQL Server logs in Gmail [on hold]

Posted: 31 Jul 2013 02:09 PM PDT

First off, I'm using SQL Server 2008 R2.

So we all know Gmail uses the conversation view for emails. I like this...except when my SQL Server backup logs get emailed to me from the same email address. So Gmail collapses them into a single conversation even though the logs are for different databases.

Is there a way to trick Gmail into separating them out based on the database? Do I have to create a new distribution list for each database, or do I have to create a new Operator (SQL Server Agent -> Operator) for every database?

Does anyone else have to deal with this issue?

What is a good name for a column representing the display order of the rows?

Posted: 31 Jul 2013 02:11 PM PDT

For example, a junction table associating a product and its pictures.

create table product_picture (
  product_id bigint not null references product(id),
  picture_id bigint not null references picture(id),
  some_column int not null
);

What is a good, short, general, name for "some_column" if it represents the display order of the photos?

"order", "sort", and "sequence" are out, as they are keywords.

Calculating employee's work time

Posted: 31 Jul 2013 11:44 AM PDT

I need to store and analyze employee's work time.

Currently I've developed this table structure:

CREATE TABLE `hirefire` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `employeeid` int(10) unsigned NOT NULL,
  `hired` date NOT NULL,
  `fired` date DEFAULT NULL,
  `firereason` enum('fired','left','parental','vacation','death') COLLATE utf8_bin DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `hired` (`hired`),
  KEY `fired` (`fired`),
  KEY `employee` (`employeeid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin

The fields employeeid, hired, and fired are self-explanatory (fired may be NULL when a person was hired but not yet fired).

firereason:

  • 'fired' - forced to be fired by the boss
  • 'left' - left job by his own will
  • 'parental' - parental leave
  • 'vacation' - his non-paid vacation
  • 'death' - he or she died

This table schema may (I think) be good for checking whether a given employee is listed as working at least a day in a given month (BTW, SQL code for this would be appreciated).
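(Since the SQL for the month check was requested, here is a minimal sketch for MySQL, using July 2013 as the example month; a stint with fired IS NULL is treated as still ongoing.)

SELECT DISTINCT employeeid
FROM hirefire
WHERE hired <= '2013-07-31'
  AND (fired IS NULL OR fired >= '2013-07-01');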

But now a complexity comes out:

We will also need to be able to count an employee's total working years. Note that parental leave and non-paid vacations should not be subtracted from the working years. Here my database structure becomes inconvenient for this kind of calculation.

Which DB structure would you suggest? (Is my variant above the best structure for these tasks, or can it be improved?)

What about allocating a whole DB row for every hire/fire event (that is, having a separate row for the hire date and the fire date, rather than pairing them on the same row)? For which tasks would this be better, and for which worse?

MySQL 5.

Just how secure is the data I put on SQL Azure?

Posted: 31 Jul 2013 01:20 PM PDT

My wife has been asking me to create a web application for her business, and from what I've read, SQL Azure would be perfect. Except for a couple of small (really, really big) gotchas.

  • She works in the financial industry and is HEAVILY regulated.
  • The data she works with is very sensitive and has to have a minimum risk of exposure.

So my question is: if I put her company's information on SQL Azure, how secure will it be? I'm not asking about general SQL Server/Azure security. I'm talking about the fact that this is a shared space. How possible is it to hack into the data, bypassing the security that I put into place?

Reporting Services TempDB keeps going into restoring mode

Posted: 31 Jul 2013 10:53 AM PDT

I am looking into an issue with SQL Server 2008 Reporting Services. It had been working fine until yesterday morning. Now I notice that ReportServerTempDB keeps going into Restoring... mode. I restored it to online mode with RESTORE DATABASE [ReportServerQ4TempDB] WITH RECOVERY.

The reports work fine after this, and then after 10-15 minutes ReportServerTempDB goes back into Restoring mode. Report execution gives the following error:

An error occurred within the report server database. This may be due to a connection failure, timeout or low disk condition within the database. (rsReportServerDatabaseError) For more information about this error navigate to the report server on the local server machine, or enable remote errors...

Any suggestions why it is doing that?

I have checked the following:

  1. Memory - 50% available
  2. CPU - only 10% utilized
  3. Storage - 30% space remaining

There are multiple instances of SSRS running, and the SQL instance hosting the database for SSRS is on the same server.

I have checked that AutoClose is not enabled for this database, and I have discovered that the database goes into Restoring... mode every hour, when our log backup job runs.


MySQL table architecture

Posted: 31 Jul 2013 09:58 AM PDT

Background information:

I have a table containing upload information. Every day I get more and more uploads, but most of the queries I run center around information gathered within the last 12 months; frequently an even tighter time scale is used, and queries are restricted to information gathered in the last 30 days.

This system has been in existence since 2004, when there were 400 uploads; today there are 2+ million.

Table structure:

CREATE TABLE `data_mediagallery` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `status` tinyint(3) unsigned NOT NULL DEFAULT '0',
  `contenttype` char(40) NOT NULL DEFAULT '',
  `filename` varchar(100) NOT NULL DEFAULT '',
  `injector` char(40) NOT NULL DEFAULT '',
  `hits` int(11) DEFAULT '0',
  `message` longtext,
  `date` datetime DEFAULT NULL,
  `title` varchar(80) NOT NULL DEFAULT '',
  `tags` varchar(255) NOT NULL DEFAULT '',
  `metadata` blob,
  `location` char(8) NOT NULL DEFAULT '',
  `uid` int(11) unsigned NOT NULL DEFAULT '0',
  `filesize` bigint(20) NOT NULL DEFAULT '0',
  `upload` datetime DEFAULT NULL,
  `privacy` tinyint(3) unsigned NOT NULL DEFAULT '0',
  `width` int(10) unsigned NOT NULL DEFAULT '0',
  `height` int(10) unsigned NOT NULL DEFAULT '0',
  `offensive` int(10) unsigned NOT NULL DEFAULT '0',
  `sourcelocation` char(8) NOT NULL DEFAULT '',
  `autoblog` tinyint(1) NOT NULL DEFAULT '0',
  `extension` char(10) NOT NULL DEFAULT '',
  `filetype` tinyint(3) unsigned NOT NULL DEFAULT '0',
  `conversiontime` float NOT NULL DEFAULT '0',
  `converttime` datetime DEFAULT NULL,
  `sender` varchar(100) NOT NULL DEFAULT '',
  `vhost` int(10) unsigned NOT NULL DEFAULT '0',
  `channel` int(10) unsigned NOT NULL DEFAULT '0',
  `rotation` tinyint(3) unsigned NOT NULL DEFAULT '0',
  `ofilesize` bigint(20) NOT NULL DEFAULT '0',
  `moderationstatus` tinyint(3) unsigned NOT NULL DEFAULT '0',
  `rating` decimal(8,6) DEFAULT NULL,
  `votecount` int(10) unsigned NOT NULL DEFAULT '0',
  `url` varchar(150) NOT NULL DEFAULT '',
  `geo_latitude` double DEFAULT NULL,
  `geo_longitude` double DEFAULT NULL,
  `length` decimal(8,2) DEFAULT '0.00',
  `parentid` int(11) NOT NULL DEFAULT '0',
  `language` char(2) NOT NULL DEFAULT '',
  `author` varchar(100) NOT NULL DEFAULT '',
  `context` tinyint(3) unsigned NOT NULL DEFAULT '0',
  `externalid` varchar(255) DEFAULT NULL,
  `originalsaved` bit(1) NOT NULL DEFAULT b'1',
  `hidden` tinyint(4) NOT NULL DEFAULT '0',
  `commentcount` int(11) NOT NULL DEFAULT '0',
  `approvedcomments` int(11) NOT NULL DEFAULT '0',
  `notdeniedcomments` int(11) NOT NULL DEFAULT '0',
  `lastupdatetime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `channelleft` int(10) unsigned NOT NULL DEFAULT '0',
  `originalLocation` char(8) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`),
  KEY `upload` (`upload`),
  KEY `vhostupload` (`vhost`,`upload`),
  KEY `vhostmodstatus` (`vhost`,`status`,`moderationstatus`,`uid`),
  KEY `complexfiletype` (`vhost`,`status`,`moderationstatus`,`filetype`,`channel`),
  KEY `vhostcontext` (`vhost`,`moderationstatus`,`context`,`parentid`,`status`,`filetype`),
  KEY `externalid` (`externalid`),
  KEY `externalcomments`,
  KEY `vhostchannel` (`vhost`,`status`,`moderationstatus`,`context`,`channelleft`)
) ENGINE=InnoDB;

Questions

Is there a way to partition the table that would make the most sense? Does partitioning even make sense? How do I deal with new data if I do partition?
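(One commonly suggested direction, as a sketch rather than a recommendation: RANGE-partition on the upload date, since the queries filter on recency. Note that MySQL requires the partitioning column to appear in every unique key, so the primary key would have to become (id, upload) and upload would have to be NOT NULL - both real schema changes to weigh. Partition names and boundaries below are examples.)

ALTER TABLE data_mediagallery
  MODIFY `upload` datetime NOT NULL,
  DROP PRIMARY KEY,
  ADD PRIMARY KEY (`id`, `upload`);

ALTER TABLE data_mediagallery
  PARTITION BY RANGE (TO_DAYS(`upload`)) (
    PARTITION p2011 VALUES LESS THAN (TO_DAYS('2012-01-01')),
    PARTITION p2012 VALUES LESS THAN (TO_DAYS('2013-01-01')),
    PARTITION p2013 VALUES LESS THAN (TO_DAYS('2014-01-01')),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
  );

New rows land in pmax, which can periodically be split with ALTER TABLE ... REORGANIZE PARTITION.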

Can I force a user to use WITH NOLOCK?

Posted: 31 Jul 2013 02:10 PM PDT

Can I force a user's queries to always run with the hint NOLOCK? e.g. they type:

select * from customer  

But what is executed on the server is:

select * from customer with (nolock)  

THIS QUESTION IS NOT about the various pros and cons of NOLOCK.
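(For context only, since the pros and cons are out of scope: NOLOCK on every table is equivalent to reading at the READ UNCOMMITTED isolation level, which can be set per session - so the practical question becomes whether that setting can be forced for a given user's sessions.)

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
select * from customer;   -- now behaves as if every table had WITH (NOLOCK)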

How to determine Windows Server start time using T-SQL

Posted: 31 Jul 2013 09:22 AM PDT

I am looking for a way to find out when the Windows server was last started, using only T-SQL commands.

I am trying to stay within the default security configurations (i.e. I don't want to enable xp_cmdshell).
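(One known approach, sketched here rather than asserted as the only one: sys.dm_os_sys_info exposes ms_ticks, the number of milliseconds since the operating system started, so subtracting it from the current time approximates the Windows start time without xp_cmdshell.)

SELECT DATEADD(SECOND, -CAST(ms_ticks / 1000 AS int), GETDATE()) AS windows_start_time
FROM sys.dm_os_sys_info;   -- converted to seconds so the value stays within int range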

TokuDB database size unknown in phpmyadmin

Posted: 31 Jul 2013 01:51 PM PDT

On my MySQL server, I installed the TokuDB storage engine and converted the database engine from InnoDB to TokuDB, but my table sizes show as unknown in phpMyAdmin.

select query not displaying column names with records

Posted: 31 Jul 2013 03:47 PM PDT

If I execute a select query, it displays only records, not column names, as follows:

mysql> select movie_desc from movies limit 1\G;  

************* 1. row *************
If the story-writer of 'Alias Janaki' had gone to a Rajamouli or a Shekar Kammula to narrate the story, the listener could have said this after the narration: "Flashback over?

What is the problem?

Best practice: either create multiple databases or multiple users

Posted: 31 Jul 2013 06:45 PM PDT

I have several applications currently running on my local machine. I have installed Oracle 11g and created a global database named oracle11. Now, should I create a different database for each application using Database Configuration Assistant, or should I create different users for each application under that global database? What would be the best practice? If I create different users, they behave much the same as separate databases, right?

What about a production environment where the server houses a couple of applications? Do I need to do the same as on my local machine, creating one database and different users for each application?
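(A sketch of the users-as-schemas approach, not from the original post; names, password, and tablespace are placeholders. In Oracle, a user is effectively a schema, so one database with one user per application keeps the applications separated.)

CREATE USER app1 IDENTIFIED BY "app1_password"
  DEFAULT TABLESPACE users
  QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW, CREATE SEQUENCE TO app1;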

DBCC CHECKDB unfixable corruption: Indexed view contains rows that were not produced by the view definition

Posted: 31 Jul 2013 08:34 AM PDT

TL;DR: I've got an unfixable corruption in an indexed view. Here are the details:


Running

DBCC CHECKDB([DbName]) WITH EXTENDED_LOGICAL_CHECKS, DATA_PURITY, NO_INFOMSGS, ALL_ERRORMSGS

on one of my databases produces the following error:

Msg 8907, Level 16, State 1, Line 1 The spatial index, XML index or indexed view 'ViewName' (object ID 784109934) contains rows that were not produced by the view definition. This does not necessarily represent an integrity issue with the data in this database. (...)

CHECKDB found 0 allocation errors and 1 consistency errors in table 'ViewName'.

repair_rebuild is the minimum repair level (...).

I do understand that this message indicates that the materialized data of the indexed view 'ViewName' is not identical with what the underlying query produces. However, manually verifying the data does not turn up any discrepancies:

SELECT * FROM ViewName WITH (NOEXPAND)
EXCEPT
SELECT ...
from T1 WITH (FORCESCAN)
join T2 on ...

SELECT ...
from T1 WITH (FORCESCAN)
join T2 on ...
EXCEPT
SELECT * FROM ViewName WITH (NOEXPAND)

NOEXPAND was used to force use of the (only) index on ViewName. FORCESCAN was used to prevent indexed view matching from happening. The execution plan confirms both measures to be working.

No rows are returned by either query, meaning that the two result sets are identical. (There are only integer and guid columns; collations do not come into play.)

The error cannot be fixed by recreating the index on the view or by running DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS. Repeating the fixes also did not help. Why does DBCC CHECKDB report this error? How do I get rid of it?

(Even if rebuilding fixed it, my question would still stand - why is an error reported although my data checking queries run successfully?)

PostgreSQL CREATE TABLE creates with incorrect owner

Posted: 31 Jul 2013 10:59 AM PDT

I'm using PostgreSQL 9.2.4. When I create a table as a non-superuser in a database owned by that non-superuser, it is owned by the postgres user, so I can't put any data into it unless I explicitly grant myself permission.

I created the non-superuser like this:

admin_user=# create role "test1" NOINHERIT LOGIN ENCRYPTED PASSWORD 'wibble' CONNECTION LIMIT 10;  

Then I created a database owned by test1 like this:

admin_user=# create database "test1$db1" with owner "test1";  

Then I started a new psql as user test1, and created a table:

test1$db1=> create table test_table (column_name varchar(50));  

But I couldn't insert into it:

test1$db1=> insert into test_table values ('some data');
ERROR:  permission denied for relation test_table

Checking the permissions shows that the table is owned by the postgres user:

test1$db1=> \dt
             List of relations
 Schema |    Name    | Type  |  Owner
--------+------------+-------+----------
 public | test_table | table | postgres

However, I can grant myself permissions and do stuff:

test1$db1=> grant insert, select on test_table to test1;
GRANT
test1$db1=> insert into test_table values ('some data');
INSERT 0 1
test1$db1=> select * from test_table;
 column_name
-------------
 some data
(1 row)

What's going on? I'm pretty sure this used to work. And the PostgreSQL docs for CREATE TABLE say:

CREATE TABLE will create a new, initially empty table in the current database. The table will be owned by the user issuing the command.

Having to grant permissions to myself on my own tables doesn't sound like it's what I should have to do.

Any help much appreciated!
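(A quick sanity check worth running before blaming the packaging - an editor's suggestion, not from the original post: confirm which role the session is actually creating objects as.)

SELECT current_user, session_user;   -- both should be test1 here
SHOW role;                           -- 'none' unless SET ROLE was used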

[UPDATE]

It looks like this might be caused by some kind of change to the PostgreSQL package at http://apt.postgresql.org/pub/repos/apt/

We have three machine images, all of which installed PostgreSQL from that source:

  • Installed 23 July 2013, 18:47 UTC -- correct table ownership
  • Installed 23 July 2013, 19:39 UTC -- correct table ownership
  • Installed 24 July 2013, 13:16 UTC -- error

So perhaps something has been changed in the packaging. I'll drop a line to the mailing list for that.

Proper MySQL database maintenance

Posted: 31 Jul 2013 01:31 PM PDT

I hope this isn't too broad of a question. I'm certain it'll be able to help me and any future dba noobs that stumble upon it.

I'm a systems admin that got put into the role of DBA (because I helped the CEO with his Outlook, so I can obviously manage our databases!). It's not that big or busy of a database server... a mysqldump is about 6GB in size, and it's taken us 5 years to get it that big. Munin reports that we're averaging 40-60 queries a second at our peak hours.

My boss paid for me to take the Oracle University systems admin course, but having gone through it, it simply explains the different parts of the mysql server, what things do and how they do it. But it's not that in-depth and you definitely don't come out of that course a DBA.

So as the current DBA, what should I do to make sure everything is running smoothly? Are there daily maintenance tasks I can perform? Are there certain metrics I should be checking? Or to put it another way: as DBAs, what do YOU do on a daily basis to keep everything in good shape?

If it'll help tailor the answer a little bit, here are some specifics. We have 171 tables; all but 3 are InnoDB, the others MyISAM. We have master/slave replication set up between our primary datacenter and our disaster recovery site, using RBR. The version is 5.5.28.

What can I do?
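(Not an exhaustive answer, but a sketch of routine checks that fit the setup described - mostly InnoDB with master/slave replication; the table name in the last line is a placeholder.)

-- Replication health on the slave: Slave_IO_Running, Slave_SQL_Running,
-- Seconds_Behind_Master, and any Last_SQL_Error.
SHOW SLAVE STATUS\G

-- InnoDB internals: pending I/O, lock waits, buffer pool usage.
SHOW ENGINE INNODB STATUS\G

-- Periodically verify the few MyISAM tables, which are more corruption-prone:
CHECK TABLE some_myisam_table;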

Teradata: Disadvantages of collect stats

Posted: 31 Jul 2013 01:13 PM PDT

I want to know whether any disadvantage can occur from doing collect stats on a column that is widely used in join conditions across many procedures (e.g. the column is accessed 300+ times in 60 days).

While searching on Google, I learned that the only issue that can cause performance degradation is out-of-date stats. In this case, that is not going to happen, as we run collect stats whenever the table's data changes.

Is there any other disadvantage to adding collect stats, other than space consumption?

Can't change root password: "The system cannot find the file specified."

Posted: 31 Jul 2013 08:46 AM PDT

I'm trying to change the root password in MySQL on my development machine (I've just installed MySQL, so it currently doesn't have a password), but it keeps failing with the following error message:

The system cannot find the file specified.

I'm using MySQL 5.1.70 (x86, 32-bit) on Windows 7 SP1 (64 bits). I've added MySQL's "bin" directory to my "Path" environment variable.

In the comments of the MySQL documentation, I read that I should have installed the service using the absolute path, so I stopped MySQL and uninstalled it:

C:\Windows\system32>mysqld --remove
Service successfully removed.

Then I installed it again, using the absolute path this time:

C:\Windows\system32>C:\web\mysql-5.1.70\bin\mysqld.exe --install
Service successfully installed.

I started MySQL, and tried to change the password again:

C:\Windows\system32>mysqladmin -u root password Pe%8XiduwOqdZ<ZFE5!
The system cannot find the file specified.

I also tried with quotes:

C:\Windows\system32>mysqladmin -u root password 'Pe%8XiduwOqdZ<ZFE5!'
The system cannot find the file specified.

I also tried to change the current directory to MySQL's "bin" directory:

C:\Windows\system32>cd C:\web\mysql-5.1.70\bin

C:\web\mysql-5.1.70\bin>mysqladmin -u root password Pe%8XiduwOqdZ<ZFE5!
The system cannot find the file specified.

C:\web\mysql-5.1.70\bin>mysqladmin -u root password 'Pe%8XiduwOqdZ<ZFE5!'
The system cannot find the file specified.

What's wrong?
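(An editor's observation, not part of the original post: in cmd.exe the < character is the input-redirection operator, and single quotes do not escape it, so everything after < in this password is treated as a filename - which is exactly what "The system cannot find the file specified" suggests. Double quotes should keep cmd from interpreting it:)

C:\web\mysql-5.1.70\bin>mysqladmin -u root password "Pe%8XiduwOqdZ<ZFE5!"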

InnoDB tables inaccessible after reboot

Posted: 31 Jul 2013 11:46 AM PDT

After each reboot, I can't see my InnoDB tables. However, when I use command mode I can see my tables, but cannot access them.

mysql> show tables;
+----------------------+
| Tables_in_xpl_ticket |
+----------------------+
| active_list          |
| bill_block           |
| bill_block_list      |
| block                |
| block_list           |
| box                  |
| cisco_switch         |
| mac_add              |
| mac_change           |
| month                |
| new_connect          |
| new_user             |
| open                 |
| package_change       |
| paid_list            |
| pay                  |
| problem              |
| re_open              |
| refund               |
| ticket               |
| user                 |
| user_log             |
+----------------------+
22 rows in set (0.00 sec)

But when I want to access a table, it says the table doesn't exist:

mysql> select * from active_list;
ERROR 1146 (42S02): Table 'xpl_ticket.active_list' doesn't exist

Edit by RolandoMySQLDBA

Please run the following in MySQL:

SELECT VERSION();
SELECT CONCAT('[',table_name,']') tbl_name
FROM information_schema.tables WHERE table_schema='xpl_ticket';

Please run the following in the OS:

cd /var/lib/mysql/xpl_ticket
ls -l

Hi RolandoMySQLDBA,

Thank you for your quick reply. Here are all those outputs.

mysql> SELECT VERSION();
+-------------------------+
| VERSION()               |
+-------------------------+
| 5.5.31-0ubuntu0.13.04.1 |
+-------------------------+
1 row in set (0.00 sec)

mysql> SELECT CONCAT('[',table_name,']') tbl_name
    -> FROM information_schema.tables WHERE table_schema='xpl_ticket';
+-------------------+
| tbl_name          |
+-------------------+
| [active_list]     |
| [bill_block]      |
| [bill_block_list] |
| [block]           |
| [block_list]      |
| [box]             |
| [cisco_switch]    |
| [mac_add]         |
| [mac_change]      |
| [month]           |
| [new_connect]     |
| [new_user]        |
| [open]            |
| [package_change]  |
| [paid_list]       |
| [pay]             |
| [problem]         |
| [re_open]         |
| [refund]          |
| [ticket]          |
| [user]            |
| [user_log]        |
+-------------------+
22 rows in set (0.03 sec)

My data directory is not in the "/var/lib/mysql" path; I have changed it to "/var/www/xampp/mysql/data". So I execute the following there instead:

Lurid / # cd /var/www/xampp/mysql/data/xpl_ticket/
Lurid xpl_ticket # ls -l
total 265
-rwxrwxrwx 1 root root 9272 Jun  6 12:48 active_list.frm
-rwxrwxrwx 1 root root 8654 Jun  6 12:48 bill_block.frm
-rwxrwxrwx 1 root root 9272 Jun  6 12:48 bill_block_list.frm
-rwxrwxrwx 1 root root 8654 Jun  6 12:48 block.frm
-rwxrwxrwx 1 root root 9272 Jun  6 12:48 block_list.frm
-rwxrwxrwx 1 root root 8802 Jun  6 12:48 box.frm
-rwxrwxrwx 1 root root 8648 Jun  6 12:48 cisco_switch.frm
-rwxrwxrwx 1 root root   65 Jun  6 12:48 db.opt
-rwxrwxrwx 1 root root 8690 Jun  6 12:48 mac_add.frm
-rwxrwxrwx 1 root root 8698 Jun  6 12:48 mac_change.frm
-rwxrwxrwx 1 root root 8618 Jun  6 12:48 month.frm
-rwxrwxrwx 1 root root 8716 Jun  6 12:48 new_connect.frm
-rwxrwxrwx 1 root root 9002 Jun  6 12:48 new_user.frm
-rwxrwxrwx 1 root root 8680 Jun  6 12:48 open.frm
-rwxrwxrwx 1 root root 8724 Jun  6 12:48 package_change.frm
-rwxrwxrwx 1 root root 8692 Jun  6 12:48 paid_list.frm
-rwxrwxrwx 1 root root 8592 Jun  6 12:48 pay.frm
-rwxrwxrwx 1 root root 8802 Jun  6 12:48 problem.frm
-rwxrwxrwx 1 root root 8670 Jun  6 12:48 refund.frm
-rwxrwxrwx 1 root root 8714 Jun  6 12:48 re_open.frm
-rwxrwxrwx 1 root root 8900 Jun  6 12:48 ticket.frm
-rwxrwxrwx 1 root root 8704 Jun  6 12:48 user.frm
-rwxrwxrwx 1 root root 8808 Jun  6 12:48 user_log.frm

Hi Mannoj

These tables were created in this same database.

I have some errors in my "/var/log/mysql/error.log". I don't understand what they are. Here they are:

130611 12:41:28 [Note] /usr/sbin/mysqld: Normal shutdown
130611 12:41:29 [Note] Event Scheduler: Purging the queue. 0 events
130611 12:41:30  InnoDB: Starting shutdown...
130611 12:41:31  InnoDB: Shutdown completed; log sequence number 1595675
130611 12:41:31 [Note] /usr/sbin/mysqld: Shutdown complete

130611 12:43:15 [Note] Plugin 'FEDERATED' is disabled.
130611 12:43:15 InnoDB: The InnoDB memory heap is disabled
130611 12:43:15 InnoDB: Mutexes and rw_locks use GCC atomic builtins
130611 12:43:15 InnoDB: Compressed tables use zlib 1.2.7
130611 12:43:15 InnoDB: Using Linux native AIO
130611 12:43:15 InnoDB: Initializing buffer pool, size = 128.0M
130611 12:43:15 InnoDB: Completed initialization of buffer pool
InnoDB: The first specified data file ./ibdata1 did not exist:
InnoDB: a new database to be created!
130611 12:43:15  InnoDB: Setting file ./ibdata1 size to 10 MB
InnoDB: Database physically writes the file full: wait...
130611 12:43:15  InnoDB: Log file ./ib_logfile0 did not exist: new to be created
InnoDB: Setting log file ./ib_logfile0 size to 5 MB
InnoDB: Database physically writes the file full: wait...
130611 12:43:16  InnoDB: Log file ./ib_logfile1 did not exist: new to be created
InnoDB: Setting log file ./ib_logfile1 size to 5 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Doublewrite buffer not found: creating new

Thanks to both of you.

Fulltext stoplist replication

Posted: 31 Jul 2013 07:47 PM PDT

In MS SQL Server 2008 R2, I have replication of a table with a full-text index on it, but the stoplist associated with the replicated full-text index doesn't replicate.

Is there any way to replicate the stoplist as well?

sql server database sharding - what to do with common data / non sharded data

Posted: 31 Jul 2013 06:47 PM PDT

We have a very large scale, enterprise-level database. As part of our business model, all web users hit our web servers at the same time each month, which in turn hammers our SQL box. The traffic is very heavy and continues to grow as the company grows. SQL proc optimization has been performed, and hardware has already been scaled up to a very high level.

We are looking to shard the database now to ensure that we can handle company growth and future loads.

We have decided what particular data should be sharded. It is a subset of our database which is highly utilized.

However, my question is regarding the non-sharded data which is common/universal. An example of data like this may be an Inventory table, or possibly an Employee table, user table, etc.

I see two options to handle this common/universal data:

1) Design 1 - Place the common/universal data in an external database. All writes will occur there. This data will then be replicated down to each shard, allowing each shard to read this data and inner join to it in T-SQL procs.

2) Design 2 - Give each shard its own copy of all common/universal data. Let each shard write locally to these tables and use SQL merge replication to update/sync this data on all other shards.

Concerns about design #1

1) Transactional issues: if you have a situation in which you must write or update data in a shard and then write/update a common/universal table in one stored proc, you will no longer be able to do this easily. The data now exists on separate SQL instances and databases. You may need to involve MS DTC to see if you can wrap these writes into a transaction, since they are in a separate database. Performance is a concern here, and possible rewrites may be involved for procs that write to sharded and common data.

2) A loss of referential integrity. It is not possible to enforce cross-database referential integrity.

3) Recoding large areas of the system so that it knows to write common data to the new universal database but read common data from the shards.

4) Increased database trips. Like #1 above, when you run into a situation in which you must update sharded data and common data, you are going to make multiple round trips to accomplish this, since the data is now in separate databases. There is some network latency here, but I am not as worried about this issue as about the above three.

Concerns about design #2

In design #2, each shard gets its own instance of all common/universal data. This means that all code that joins to or updates common data continues to work/run just like it does today. Very little recoding/rewriting is needed from the development team. However, this design completely depends on merge replication to keep the data in sync across all shards. The DBAs are highly skilled and are very concerned that merge replication may not be able to handle this, and that should merge replication fail, recovery from the failure is not great and could impact us very negatively.

I am curious to know if anyone has gone with design option #2. I am also curious to know if I am overlooking a 3rd or 4th design option that I do not see.

Thank you in advance.

Defragmenting SQL Server data and log files live with MoveFile API

Posted: 31 Jul 2013 02:26 PM PDT

My employer is looking at deploying NTFS defragmentation software that uses the Windows MoveFile API to defragment open files. This would get deployed to thousands of SQL Server servers running SQL Server versions from 2005 to 2012 and Windows versions from 2003 to 2008 R2. If it matters, the product I'm talking about is PerfectDisk, but I believe there are similar programs that work the same way.

Testing so far hasn't turned up many problems, aside from occasional I/O performance issues, which is no surprise and can be worked around by rescheduling and tuning the defragmentation. However, I'm more concerned about the risk of data corruption.

Does anyone here have experience with running this type of software in production on database servers? Did you experience any data corruption?

I'm rather uneasy about this despite being unable to find any solid evidence that it poses a problem.

Thanks for any replies.

Edited to add: This horrible idea was thankfully just forgotten about, possibly in part due to some of the warnings I gave.

What's the difference between a temp table and table variable in SQL Server?

Posted: 31 Jul 2013 08:43 AM PDT

This seems to be an area with quite a few myths and conflicting views.

So what is the difference between a table variable and a local temporary table in SQL Server?
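(For readers unfamiliar with the two constructs being compared, a minimal sketch of each; the column list is arbitrary.)

-- Local temporary table: lives in tempdb, dropped when the session ends.
CREATE TABLE #customer_staging (id int PRIMARY KEY, name nvarchar(100));

-- Table variable: also backed by tempdb, but scoped to the batch or procedure.
DECLARE @customer_staging TABLE (id int PRIMARY KEY, name nvarchar(100));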

Logging/profiling an unpredictably slow stored procedure

Posted: 31 Jul 2013 10:40 AM PDT

I have a stored procedure that sometimes runs significantly slower than at other times. I have two different timers measuring the execution time at different scopes. One is in the client code, measuring only the OCI execute call. The second is inside the PL/SQL procedure itself, timing the inside of the procedure's main block.

Normally, if the procedure's timer takes, say, 5 seconds to run, the client's timer is only a few milliseconds greater. This is completely reasonable. The problem is that sometimes the client's timer will be much greater than the procedure's, even double.

I've eliminated any issue on the client or in the transport layer. I enabled client and server logging in sqlnet.ora at support level. On the client, the gap between request and response packets is essentially equal to the client code's timer. On the server, a similar gap is seen between the incoming request and the outgoing response.

The question remains: if the overall call takes 10 seconds and the inside of the procedure takes only 5 seconds, where did the other 5 seconds go?

I don't expect the exact answer to the above question, but I would like to hear ideas on how to go about finding the answer. What logs can I turn on to give me insight into what Oracle is doing both after the request arrives but before the actual call to the procedure, and after the procedure until the response is sent back to the client?

Edit: Perhaps there's something wrong with my timing methodology? I'm using dbms_utility.get_cpu_time(), which provides time in hundredths of a second, hence the division by 100:

PROCEDURE foo (elapsed_time OUT NUMBER) IS
  start_time NUMBER;
BEGIN
  start_time := dbms_utility.get_cpu_time();
  -- ... PL/SQL stuffs
  elapsed_time := (dbms_utility.get_cpu_time() - start_time) / 100;
END;
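(An editor's observation that may bear on the question: dbms_utility.get_cpu_time() measures CPU time consumed by the session, not elapsed wall-clock time, so any time the procedure spends waiting - on I/O, locks, or the network - is invisible to it. dbms_utility.get_time() returns elapsed hundredths of a second instead; a sketch of the wall-clock variant:)

PROCEDURE foo (elapsed_time OUT NUMBER) IS
  start_time NUMBER;
BEGIN
  start_time := dbms_utility.get_time();   -- wall-clock hundredths of a second
  -- ... PL/SQL stuffs
  elapsed_time := (dbms_utility.get_time() - start_time) / 100;
END;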

Any Open Source / free ETL out there?

Posted: 31 Jul 2013 12:35 PM PDT

I was using Pentaho Data Integration even before Pentaho bought it and called it that. I have the last free version.

I went on their website recently to see if they had released another version, only to find out my favorite open source ETL is not so open anymore, and not quite free.

Do any of you know of alternative affordable, easy-to-use ETL tools?
