Saturday, September 7, 2013

Row not being deleted from the publication when it's deleted in a subscription

Posted: 07 Sep 2013 05:41 PM PDT

We have merge replication configured in our environment (1 Publisher, 3 Subscribers), and merge replication is bi-directional by default. It's working fine for inserts: if a row is inserted at the publisher, it shows up at the subscribers, and vice versa. The real issue is deletes: if a row is deleted at the publisher, it is deleted at the subscribers, but if a row is deleted at any of the 3 subscribers, it is not deleted at the publisher.

When a row is deleted at the publisher, the corresponding row is recorded in MSmerge_tombstone (Publisher). But in the case of a subscriber, the corresponding row (rowguid) is not recorded in MSmerge_tombstone (Subscriber). I think that might be the issue, even though the merge delete trigger exists on these tables at the subscribers. Please help me out, as this is a production issue.

Publisher(2008R2 sp1-Enterprise), Subscribers(2008R2 sp1-Standard)

Clients require instance name to connect to clustered instance

Posted: 07 Sep 2013 12:59 PM PDT

We have a clustered instance, say Instance1, assigned its own virtual network name and IP, via Failover Cluster Manager. This instance is configured to listen on port 1433 in SQL Server network configuration (verified in the logs).

Within our intranet, we can connect to this instance by using just the IP address, say 172.0.1.2 (as well as the FQDN) using SSMS. However, some external clients cannot connect unless they specify 172.0.1.2\Instance1 explicitly.

I don't understand why this is the case. Given that the IP is assigned to the instance itself, it shouldn't be necessary to specify the instance name.

I noted in SQL Server network configuration that the IP address assigned via Failover Cluster Manager is not explicitly listed as an active enabled address. Should that be added?

MySQL failures after changing innodb_flush_method to O_DIRECT and innodb_log_file_size

Posted: 07 Sep 2013 08:34 AM PDT

We changed innodb_flush_method to O_DIRECT, increased innodb_log_file_size, and changed 3 other settings (innodb_buffer_pool_size, innodb_additional_mem_pool_size, innodb_log_buffer_size) while MySQL was stopped. Unfortunately, DirectAdmin restarted MySQL before the old log files were deleted.

Later, when we tried to fix the situation, we stopped MySQL (and stopped DA) and deleted the log files, but it didn't help: the databases were corrupted. We finally created a new instance and recovered everything from backups.

We had errors like these:

2013-09-06 01:40:24 7ff777a0c700 InnoDB: Error: page 4 log sequence number 40425569713
2013-09-06 01:40:24 7ff777a0c700 InnoDB: Error: page 3 log sequence number 40425569736
2013-09-06 01:40:24 7ff777a0c700 InnoDB: Error: page 28 log sequence number 40425569736
2013-09-06 01:40:24 7ff777a0c700 InnoDB: Error: page 4 log sequence number 40425542325

Can someone tell me what really happened? We've seen bad things happen when you change innodb_log_file_size without deleting the log files, but we were able to recover from that before.

Was setting innodb_flush_method to O_DIRECT the cause here?

Bigdata - When do we choose NoSQL databases (Document stores, key-value stores, Column family and Graph)?

Posted: 07 Sep 2013 06:20 AM PDT

We have different kind of big data which can grow from Tera to Exabytes.

The below are the characteristics of our data:

  1. Statistical data for analytics (Huge growth)
  2. Files (images, PDFs, Videos etc.) (Huge growth, Frequent reads, streaming)
  3. JSON data with constant growth (cached in RAM, constant writes/updates, very high read frequency)
  4. Dynamic JSON data without a constant size (huge growth, constant writes/updates, frequent reads)

We are using C++ (for some of the REST services) and Scala (Lift web framework) for our development. Kindly let us know when to use each of the NoSQL databases below:

  1. Document store i) MongoDB
  2. Graph Database i) Neo4J
  3. Column Family - BigTable Clones i) Cassandra ii) HBase iii) Hyper Table
  4. Key-value store - Dynamo Clones i) Redis ii) Membase iii) Riak

Please share any resources (blogs, books, etc.) for more details.

Update row to another table using mysql trigger

Posted: 07 Sep 2013 08:31 AM PDT

I have the following 2 tables (places & offer_company) in a MySQL database named COUPON1.81.

CREATE TABLE IF NOT EXISTS `places` (
  `id` int(60) NOT NULL AUTO_INCREMENT,
  `name` varchar(255) NOT NULL,
  `Description` text NOT NULL,
  `Company_id` int(60) NOT NULL,
  `Active` int(60) NOT NULL,
  [...]

CREATE TABLE IF NOT EXISTS `offer_company` (
  `id` int(60) NOT NULL AUTO_INCREMENT,
  `company_name` varchar(150) NOT NULL,
  `Address` text NOT NULL,
  `places_id` int(60) NOT NULL,
  [...]

I need to update the Company_id column in the places table when a new row is inserted into the offer_company table.

For example, if I insert a new row into offer_company with the values ('1', 'CSB(pvt)limited', 'srilanka', 123), then the trigger should update the matching row in the places table.

The places_id value is selected from the places table.

places table

+-----+--------------+---------------------------+------------+
| id  | name         | Description               | Company_id |
+-----+--------------+---------------------------+------------+
| 123 | eiffel tower | is an iron lattice tower  | 0          |
| 124 | sigiriya     | sigiriya Rock             | 0          |
+-----+--------------+---------------------------+------------+

offer_company table

+----+-----------------+----------------+-----------+
| id | company_name    | Address        | places_id |
+----+-----------------+----------------+-----------+
| 1  | CSB(pvt)limited | 89/A,Srilanka  | 123       |
+----+-----------------+----------------+-----------+

After the places_id is read from the new offer_company row, Company_id should be updated in the places table:

places table

+-----+--------------+---------------------------+------------+
| id  | name         | Description               | Company_id |
+-----+--------------+---------------------------+------------+
| 123 | eiffel tower | is an iron lattice tower  | 1          |
| 124 | sigiriya     | sigiriya Rock             | 0          |
+-----+--------------+---------------------------+------------+

I added this trigger

--
-- Triggers `offer_company`
--
DROP TRIGGER IF EXISTS `add`;
DELIMITER //
CREATE TRIGGER `add` AFTER INSERT ON `offer_company`
FOR EACH ROW INSERT INTO places (Company_id)
    SELECT new.id
    FROM offer_company
    WHERE offer_company.places_id = new.places_id
//
DELIMITER ;

How can I solve this problem?
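A minimal sketch of what the trigger could look like instead, assuming the goal is to update the existing places row rather than insert a new one (table and column names are taken from the question; the trigger name is illustrative):

```sql
DROP TRIGGER IF EXISTS `add_company_to_place`;
DELIMITER //
CREATE TRIGGER `add_company_to_place` AFTER INSERT ON `offer_company`
FOR EACH ROW
  -- NEW.id and NEW.places_id come straight from the inserted row,
  -- so no extra SELECT on offer_company is needed
  UPDATE places
     SET Company_id = NEW.id
   WHERE places.id = NEW.places_id;
//
DELIMITER ;
```

An AFTER INSERT trigger cannot INSERT into the same table it fires on anyway, so an UPDATE against the other table is the usual shape for this kind of cross-table sync.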

Implicitly (or proxy) convert to Nvarchar on incoming queries from legacy or 3rd party applications

Posted: 07 Sep 2013 02:26 AM PDT

We have a 3rd party application of which we don't have the source code. This application runs queries against our database without using the "N" prefix before literal strings, in both INSERT statements and WHERE clauses.

So what we need is a way to translate a query like this:

SELECT Whatever FROM Person WHERE Name = 'John';  

to this:

SELECT Whatever FROM Person WHERE Name = N'John';  

Is there a way to force this implicitly on SQL Server (2005 or higher; if an upgrade is needed, it would be acceptable)?

Otherwise, does anyone know of a 3rd-party proxy that could translate the queries? Building one would be costly, but I think it is feasible. Or could we use some other trick, like a trigger?

I have already checked that the 3rd party application can handle multi-byte characters in the strings that are sent and returned; that is not a problem at all. The only problem is that it formats the queries without the N prefix. (It should have been using parameters, but we can't change that anyway.)
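For reference, the reason the N prefix matters: a bare 'literal' is parsed as varchar in the database's default collation, so characters outside that collation's code page can be lost before the comparison even runs. A small illustration (the table variable is hypothetical, and the degradation depends on the server's default collation):

```sql
-- assuming a default collation whose code page lacks the Greek letter
DECLARE @t TABLE (Name nvarchar(50));
INSERT INTO @t VALUES (N'Ωmega');

SELECT * FROM @t WHERE Name = 'Ωmega';   -- literal may degrade to '?mega': no match
SELECT * FROM @t WHERE Name = N'Ωmega';  -- nvarchar literal: matches
```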

How to trace the exact query fired in SQL Server

Posted: 07 Sep 2013 10:22 AM PDT

I am facing a problem with performance tuning. We are using an application called ARAS (one of the leading PLM packages) which uses SQL Server as the back end.

Every time we search for data through the ARAS interface, it queries SQL Server and gives us the response.

But I am unable to find which query is being sent to SQL Server. I tried this query:

SELECT deqs.last_execution_time AS [Time], dest.TEXT AS [Query]
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
ORDER BY deqs.last_execution_time DESC;

This didn't help me much either.

We are using SQL Server 2008 Express and ARAS 9.3.
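One option that might help: the DMV above only shows statements that are already cached, but you can watch statements as they execute instead. A sketch using sys.dm_exec_requests, run while the ARAS search is in flight:

```sql
-- currently executing statements, excluding this session
SELECT r.session_id,
       r.start_time,
       t.text AS [Query]
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
ORDER BY r.start_time DESC;
```

SQL Server Express does not ship the Profiler UI, but server-side traces (sp_trace_create and friends) and this DMV approach still work against it.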

ERROR 1136: need help with MySQL

Posted: 07 Sep 2013 03:44 AM PDT

I am working on SQL via MySQL. I need help with some code.
I am able to create the table; however, I am having trouble inserting the data into the table without getting ERROR 1136.

CREATE DATABASE IF NOT EXISTS `contact_info`;
USE `contact_info`;
DROP TABLE IF EXISTS `contact_info`;
CREATE TABLE `contact_info` (
  `firstName` varchar(25),
  `middleInitial` char(1),
  `lastName` varchar(25),
  `suffixDescription` varchar(5),
  `titleDescription` varchar(5),
  `jobTitle` varchar(40),
  `department` varchar(30),
  `email` varchar(35),
  `url` varchar(50) DEFAULT NULL COMMENT 'Web address for station',
  `IMaddress` varchar(25),
  `phoneNumber` varchar(25),
  `phoneDescription` varchar(10),
  `birthday` date,
  `notes` varchar(255),
  `companyName` varchar(30),
  `addressLine1` varchar(40),
  `addressLine2` varchar(40),
  `city` varchar(20),
  `state_province` varchar(15),
  `zip_postalcode` varchar(10),
  `country_region` varchar(15),
  `companyURL` varchar(50),
  `companyPhone` varchar(12)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO contact_info
  (`firstName`, `middleInitial`, `lastName`, `suffixDescription`,
   `titleDescription`, `jobTitle`, `department`, `email`, `url`,
   `IMaddress`, `phoneNumber`, `phoneDescription`, `birthday`, `notes`,
   `companyName`, `addressLine1`, `addressLine2`, `city`,
   `state_province`, `zip_postalcade`)
VALUES
  ('Jacob', 'P', 'Jacobson', 'Jr.', 'Mr.', 'Director', 'Finace',
   'rjameson@concor.com', 'www.concor.com/~rjames', 'jpjacobson',
   '323-546-6834 ext. 29', 'work', '1969-07-13',
   'All meetings must be scheduled through Charlene Reynolds',
   'Concor International, Inc.', '143 South Main Street', null,
   'Los Angeles', 'CA', '90012-3712', 'USA', 'www.concor.com',
   '323-546-6834'),
  ('Charlene', null, 'Reynolds', null, 'Ms.',
   'Assistant to Finance Director', 'Finace', 'creynolds@concor.com',
   null, 'charreynolds', '323-546-6834 ext. 30', 'work', '1972-06-15',
   'Very nice, but can make things difficult if you make her angry.',
   'Concor International, Inc.', '143 South Main Street', null,
   'Los Angeles', 'CA', '90012-3712', 'USA', 'www.concor.com',
   '323-546-6834'),
  ('Karson', 'B', 'Campbell', null, 'Dr.', 'Chief Resident',
   'Pediatrics', 'kbc232@mvch.org', null, null, '585-544-1212', 'home',
   '1955-01-05', 'Wife: Molly Kids: Cassidy, Justine, and Cory.',
   'Mountain View Hospital', null, null, null, null, null, null),
  ('Les', 'M', 'Nelson', 'Ph.D.', 'Prof.', 'Professor', 'Business',
   'lmnbus@rit.edu', null, null, '585-475-0000', 'work', '1964-03-23',
   'Hates when work is submitted late', 'RIT', 'Bldg. 12',
   '102 Lomb Memorial Drive', 'Rochester', 'NY', '14623', 'USA',
   'www.rit.com', null),
  ('Rachel', null, 'Woods', null, 'Miss.', null, null, 'goof@go.com',
   null, 'Goofy12', '585-475-0000', 'cell', '1985-05-06',
   'Favorite color is purple.', null, null, null, null, null, null,
   null, null, null),
  ('David', 'W', 'Shanley', null, 'Mr.', null, null, null, null, null,
   null, null, null, null, null, null, null, null, null, null, null,
   null, null);

When I run the insert in MySQL, I get "ERROR 1136 (21S01): Column count doesn't match value count at row 1". I looked on Google and in some books I own, and even tried this: Getting error 1136 in this MySQL query. Still at a standstill.
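For what it's worth, ERROR 1136 means the column list and each VALUES row must have the same length: the INSERT above names 20 columns but supplies 23 values in the first row (country_region, companyURL and companyPhone are missing from the list, and `zip_postalcade` looks like a typo for `zip_postalcode`). A corrected sketch of just the first row:

```sql
INSERT INTO contact_info
  (firstName, middleInitial, lastName, suffixDescription, titleDescription,
   jobTitle, department, email, url, IMaddress, phoneNumber,
   phoneDescription, birthday, notes, companyName, addressLine1,
   addressLine2, city, state_province, zip_postalcode, country_region,
   companyURL, companyPhone)  -- 23 columns to match the 23 values below
VALUES
  ('Jacob', 'P', 'Jacobson', 'Jr.', 'Mr.', 'Director', 'Finace',
   'rjameson@concor.com', 'www.concor.com/~rjames', 'jpjacobson',
   '323-546-6834 ext. 29', 'work', '1969-07-13',
   'All meetings must be scheduled through Charlene Reynolds',
   'Concor International, Inc.', '143 South Main Street', NULL,
   'Los Angeles', 'CA', '90012-3712', 'USA', 'www.concor.com',
   '323-546-6834');
```

Note the later rows in the original statement supply different value counts as well (the Karson row has only 21), so each row needs the same check.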

SQL Server backup job last 5 days

Posted: 07 Sep 2013 01:35 AM PDT

We have a nightly backup job on SQL Server 2012, but it appends to the .bak file instead of overwriting it, so while googling I discovered master.dbo.xp_delete_file.

I noticed that I can delete several backups at once that are older than a date parameter.

I'm thinking of keeping 3 days of backups, thus deleting like this:

declare @dt datetime
select @dt = getdate() - 3
EXECUTE master.dbo.xp_delete_file 0,
    N'D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\',
    N'bak', @dt, 1

All seems OK, but for the backup itself, can I create filenames like this? (note the @today in CONCAT())

declare @today varchar(10);
SELECT @today = CONVERT(char(10), GetDate(), 126);

BACKUP DATABASE perfMaster
TO DISK = CONCAT('D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\perfMaster_', @today, '.bak')
WITH NOFORMAT, NOINIT,
    NAME = N'Full Database perfMaster Backup', SKIP, NOREWIND, NOUNLOAD,
    STATS = 20

Or what methods do you employ to accomplish backups with some days history?
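As a side note, BACKUP DATABASE ... TO DISK expects a constant or a variable, not an expression like CONCAT(), so a sketch along these lines might be closer to what actually runs (same path as above; INIT is used instead of NOINIT since the complaint was about appending):

```sql
DECLARE @today varchar(10) = CONVERT(char(10), GETDATE(), 126);
DECLARE @file  nvarchar(260) =
    N'D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Backup\perfMaster_'
    + @today + N'.bak';

BACKUP DATABASE perfMaster TO DISK = @file
WITH NOFORMAT, INIT,  -- INIT overwrites the media set instead of appending
     NAME = N'Full Database perfMaster Backup',
     SKIP, NOREWIND, NOUNLOAD, STATS = 20;
```

With a date-stamped filename per day plus the xp_delete_file cleanup above, you get a rolling window of daily backups.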

Looking for a database design to model an availability problem (large data sets are expected) [on hold]

Posted: 07 Sep 2013 12:47 PM PDT

I am looking for a database design to store and query information about the availability of cars in a yet-to-be-designed car-sharing community, where users can rent cars provided by other users. There will be a (proximity) search which should only show available cars.

I expect to have the following data: car data with general availability, set by the car owner; and rental contracts for a specific or recurring date/time, which reduce the availability.

Here some example queries that should be possible:

  1. get all cars available now
  2. get all cars available tomorrow from 8am to 2:30pm
  3. get all cars continuously available from 2013-10-01 until 2013-10-30
  4. get all cars available each Tuesday to Thursday from 10am to 6pm from 2013-11-01 until 2013-11-24

No matter whether SQL or NoSQL is used for the model, any ideas are welcome. Important: the database design should scale well, as large data sets are expected.

Cardinality showing the nature of relationships among tables

Posted: 07 Sep 2013 08:19 AM PDT

Let's say there's a table named Department and a table named Instructor. We have conditions that say:

  • A Department can have many instructors.
  • An instructor can be associated with only a single department.

Now the point is, seen from the Department's angle the relationship is of type "one to many" (because one department can accommodate many instructors), but seen from the instructor's angle the relationship is of type "one to one" (because one instructor can join only one department).

If we implement "one to many" (from the department's angle) by making a foreign key in the instructor table that refers to the department table, e.g.:

Instructors table :

create table instructors(
  inst_id number primary key not null,
  inst_name varchar2(30) not null,
  dept_id number not null constraint dept_id_fk references department(dept_id)
);

Department Table :

create table department(
  dept_id number primary key not null,
  dept_name varchar2(15) not null
);

Do these statements sufficiently satisfy both "one to one" (from the instructor table's angle) and "one to many" (from the department's angle)?

or

Do we need to change the SQL code to introduce the "one to one" aspect too?
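For what it's worth, a NOT NULL foreign key column as defined above already captures both readings: each instructors row carries exactly one dept_id (the instructor-side "one"), while nothing stops many instructors from sharing the same dept_id (the department-side "many"). Only if each department could have at most one instructor would an extra constraint be needed, e.g. (constraint name is illustrative):

```sql
-- only needed for a strict 1:1 between instructor and department
ALTER TABLE instructors ADD CONSTRAINT dept_id_uniq UNIQUE (dept_id);
```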

Cannot create an instance of OLE DB provider "OraOLEDB.Oracle" for linked server

Posted: 07 Sep 2013 05:19 AM PDT

I am having a bit of a linked server problem.

I created a SQL Server linked server and it's been working fine for the last 2 years, but suddenly it stopped working.

SQL Server Microsoft SQL Server 2005 - 9.00.5324.00 (X64) Aug 24 2012 18:28:47 Copyright (c) 1988-2005 Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 5.2 (Build 3790: Service Pack 2)

Oracle client version Oracle - OraClient11g_home1

Cannot create an instance of OLE DB provider "OraOLEDB.Oracle" for linked server

PS: we have enabled Allow InProcess in SQL Server for the Oracle provider.

Does anyone have any ideas on this, and what can be done to avoid it happening again?

Many thanks

Ray

MySQL datatype for a username of maximum 15 characters

Posted: 06 Sep 2013 11:19 PM PDT

I currently have a database with the "user" table having username as one of the columns.

I allow a maximum of 15 characters for the username in my system... Is it fine to have the username column as a varchar(15) datatype?
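Yes, VARCHAR(15) is a natural fit here: it caps input at 15 characters and stores only the bytes actually used plus a one-byte length prefix (for columns under 256 bytes). A sketch; the UNIQUE key is an assumption, usual for usernames:

```sql
CREATE TABLE `user` (
  id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  username VARCHAR(15) NOT NULL,
  UNIQUE KEY uq_username (username)  -- also gives fast lookups by username
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```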

mysqldump freezing on a specific table

Posted: 07 Sep 2013 03:19 PM PDT

I dumped a database (sys_data) which is very big (800GB, all data in one ibdata file) from a remote server, but the dump blocks at one table (tb_trade_376). My dump command:

mysqldump -uxx -pxx -h192.168.1.xxx --single-transaction sys_data > /home/sys_data.sql

When the dump was blocked:

show processlist;

5306612 | root | 192.168.1.161:57180 | sys_data | Query | 23955 | Sending data | SELECT /*!40001 SQL_NO_CACHE */ * FROM `tb_trade_376`

On the other hand I can dump the table tb_trade_376 successfully if I just dump the table only.

mysqldump -uxx -pxx -h192.168.1.xxx \
    --single-transaction sys_data tb_trade_376 > /home/tb_trade_376.sql

This works well and quickly! The table tb_trade_376 has about 700,000-800,000 rows.

What is the next step in investigating why I can't dump the whole database? How can I make it work?

Why would increase in innodb_buffer_pool_size slow down MySQL?

Posted: 07 Sep 2013 09:19 AM PDT

5.1.68-cll - MySQL Community Server on CentOS

The system has 32GB of RAM.

I increased innodb_buffer_pool_size from 10240M to 15360M (10GB -> 15GB).

Time taken for a series of identical operations increased from 720 to 822 seconds (14% increase).

This was the result of only a single test at each setting, but 4 previous tests performed a few months ago resulted in times between 726 and 740s.

I just tried running it again with 8GB, and the time taken was 719s.

Why would more memory result in a slower process?

EDIT: More details on process

The process that I'm testing involves emptying some tables and rebuilding them from data in existing tables. I'm not sure if it's using INSERT ... SELECT or if it's SELECTing the data and then using PHP to build long INSERT statements. If that matters, I can find out.

There are no schema definition changes being made.

Here is the output of numactl --hardware while the server is relatively idle:

root@server [~]# numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 32740 MB
node 0 free: 6216 MB
node distances:
node   0
  0:  10

And free -m

root@server [~]# free -m
             total       used       free     shared    buffers     cached
Mem:         32081      25864       6216          0       2591      12791
-/+ buffers/cache:      10482      21599
Swap:        15994         16      15977

Edit by RolandoMySQLDBA

Please run this query

SELECT
    InnoDBSpace / POWER(1024,1) InnoDB_KB,
    InnoDBSpace / POWER(1024,2) InnoDB_MB,
    InnoDBSpace / POWER(1024,3) InnoDB_GB
FROM
(
    SELECT SUM(data_length+index_length) InnoDBSpace
    FROM information_schema.tables
    WHERE ENGINE='InnoDB'
) A;

RESULT:

InnoDB_KB  InnoDB_MB   InnoDB_GB
8413536    8216.34375  8.02377319335938

and this one

SHOW GLOBAL STATUS LIKE 'innodb_buffer_pool_pages%';  

RESULT:

Innodb_buffer_pool_pages_data     410035
Innodb_buffer_pool_pages_dirty    204
Innodb_buffer_pool_pages_flushed  826954
Innodb_buffer_pool_pages_free     99231
Innodb_buffer_pool_pages_misc     15022
Innodb_buffer_pool_pages_total    524288

During running of process:

root@server [~]# numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 32740 MB
node 0 free: 5461 MB
node distances:
node   0
  0:  10

and:

             total       used       free     shared    buffers     cached
Mem:         32081      26658       5423          0       2603      12948
-/+ buffers/cache:      11106      20975
Swap:        15994         16      15977

In place upgrade from MySQL 5.5 to 5.6.11 removes all users from user table

Posted: 07 Sep 2013 02:19 PM PDT

On Windows, I upgraded from 5.1 to 5.5 no problem.

  1. Copied my 5.1 data folder into my 5.5 instance
  2. Started mysqld skipping grants
  3. Ran mysql_upgrade

All good, but going from 5.5 to 5.6:

  1. Copied 5.5 data folder to 5.6 instance
  2. Started mysqld skipping grants
  3. Ran mysql_upgrade

but I get:

C:\Users\QAdmin>mysql_upgrade
Looking for 'mysql.exe' as: C:\Program Files\MySQL\MySQL Server 5.6\bin\mysql.exe
Looking for 'mysqlcheck.exe' as: C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqlcheck.exe
Running 'mysqlcheck' with connection arguments: "--port=3306"
Running 'mysqlcheck' with connection arguments: "--port=3306"
mysql.user_info                                    OK
Running 'mysql_fix_privilege_tables'...
Running 'mysqlcheck' with connection arguments: "--port=3306"
C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqlcheck.exe: Got error: 1130: Host 'localhost' is not allowed to connect to this MySQL server when trying to connect
FATAL ERROR: Upgrade failed

If I look at the mysql.user table it is completely empty.

  • Has anyone seen this or does anyone know what is going on?
  • Does the user table get erased during the "upgrade", so that mysql_upgrade can't connect afterwards?

Thanks.

SSRS 2008 R2 setup issue

Posted: 07 Sep 2013 05:19 PM PDT

I have installed SSRS 2008 R2 on my desktop and server. When I hit the Reports link on my desktop

http://mypc/Reports_mypc/Pages/Folder.aspx

all I get to see is the home page of the desktop SSRS instance.

I can't create a new folder or data source or anything of the sort.

On the server where I am attempting to set up SSRS 2008 R2, all I get is a white screen that shows the virtual folder name in large fonts, followed by the version of the Reporting Services server on the next line. This is not leaving me any clues as to what needs to be fixed. On both PCs I am using the credentials of the local admin. Any clues on what needs to be fixed?

How to sync MySQL database between two servers

Posted: 07 Sep 2013 06:19 AM PDT

I need to sync a MySQL database between two servers.

If a new record is added on server A, then server B should fetch the new record.

If a new record is added on server B, then server A should fetch the new record.
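What is described here is essentially master-master (circular) replication. A minimal configuration sketch, assuming two servers; the increment/offset pair keeps AUTO_INCREMENT values from colliding when both sides insert:

```ini
# server A (my.cnf)
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2   # both servers step by 2
auto_increment_offset    = 1   # A generates odd ids

# server B (my.cnf)
[mysqld]
server-id                = 2
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 2   # B generates even ids
```

Each server is then pointed at the other with CHANGE MASTER TO and START SLAVE, so writes on either side replicate to the other.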

I would like to query a range of criteria on multiple columns in MySQL

Posted: 07 Sep 2013 10:19 AM PDT

I have 3 columns in a MySQL table, using the InnoDB engine. I want to be able to search for some values on those columns and also anything close to those values. For example:

We have 3 columns, "Fast", "Smart" and "Sharp", of type boolean/int. I have 1000 records in the table; as an example, I list 5 here.

Fast | Smart | Sharp
--------------------
 0   |   1   |   1
 1   |   1   |   1
 0   |   0   |   1
 1   |   1   |   1
 1   |   0   |   0

Let's say we want to search for people who are smart and sharp, i.e. 0 1 1, but if there is no record with the value 0 1 1, we want the closest to it, such as 1 1 1 or 1 0 1. Right now I only get exact matches; if the value doesn't exist, I get nothing back from the DB. How can I achieve this, and if I get a bunch of results, how can I sort them from the closest to my query to the farthest?
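One way to get "closest first" ordering is to treat each row as a bit vector and sort by its Hamming distance from the target. A sketch, assuming the table is called people (a hypothetical name) and the target is (0, 1, 1):

```sql
SELECT Fast, Smart, Sharp,
       -- number of columns that differ from the target (0, 1, 1)
       ABS(Fast - 0) + ABS(Smart - 1) + ABS(Sharp - 1) AS distance
FROM people
ORDER BY distance ASC
LIMIT 10;
```

Exact matches come back with distance 0, and everything else follows from closest to farthest. The expression can't use an index, so on a large table it is a full scan; at 1000 rows that is unlikely to matter.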

What is the actual difference between innodb_autoinc_lock_mode 0,1,2

Posted: 06 Sep 2013 10:19 PM PDT

I would like to understand the differences between innodb_autoinc_lock_mode options 0,1 and 2 when parallel load data infiles are given.

I see that with option "0" it locks the entire table and processes the first transaction, say TX1, requested for N records. When the next transaction, TX2, arrives while TX1 is still being uploaded via load data, it has to wait in the queue for TX1 to complete. It then takes max(AI_column)+1 from the table and uploads the next set of load data. In this case it doesn't jump the auto-increment numbers.

With option "1" I also see that it locks the entire table and processes the first transaction TX1, and the next transaction TX2 has to wait in the queue for TX1 to complete. It then takes max(AI_column)+1 and uploads the next set of load data, but afterwards the counter ends up at max(AI_column)+some creepy jump.

But with option "2" it doesn't lock the entire table. Instead it performs inserts for all processes simultaneously, inserting records for whichever request comes in, and all threads finish in an average time of around 1.21 sec for 7 parallel threads using load data with 1,000,000 records each. In this case the transactions are interleaved, and the counter again ends up at max(AI_column)+some creepy jump at the end.

I'm using MySQL 5.1.61.

  • My question is: what is the use of option 1?
  • And why has it stayed the default in versions since MySQL 5.1.22?
  • Has anyone come across any disasters with option 2?

My project requires multiple processes using load data on one table. I have tested the above options and decided to go with option 2. Kindly correct me if my experiments are wrong.

How to avoid using the IN clause in an SQL query used in a stored procedure

Posted: 07 Sep 2013 08:23 AM PDT

Is there a better way to write the SQL other than using the IN clause in the stored procedure below? When I use this IN clause, I suffer a performance dip due to the large number of records involved in the User and Member tables.

CREATE PROCEDURE [dbo].[sp_MemberIdsFromUserIds] @dtUserIds UNIQUETABLE readonly
AS
BEGIN
    SELECT userID,
           Memberid
    FROM   Member
           INNER JOIN USER
             ON UserMemberID = MemberiD
    WHERE  userID IN (SELECT UniqueId
                      FROM   @dtUserIds)
END

Kindly suggest an alternative, as I don't have full knowledge of writing SPs.
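One common rewrite is to join directly against the table-valued parameter instead of using IN, which often lets the optimizer pick a better plan. A sketch with the same names as above (which table owns each join column is assumed from the original):

```sql
CREATE PROCEDURE [dbo].[sp_MemberIdsFromUserIds]
    @dtUserIds UNIQUETABLE READONLY
AS
BEGIN
    SELECT u.userID,
           m.Memberid
    FROM   Member AS m
           INNER JOIN [USER] AS u      -- USER is a reserved word, hence brackets
               ON u.UserMemberID = m.MemberiD
           INNER JOIN @dtUserIds AS d  -- join the TVP instead of IN (...)
               ON d.UniqueId = u.userID;
END
```

If UniqueId can contain duplicates, add DISTINCT or switch to EXISTS to avoid duplicate output rows.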

What is the difference between int(8) and int(5) in mysql?

Posted: 07 Sep 2013 12:19 PM PDT

I found out that if you have a field defined as INT(8) without ZEROFILL, it behaves exactly like INT(5):

in both cases the range is

−2,147,483,648 to 2,147,483,647, i.e. −(2^31) to 2^31 − 1

Or am I missing something?

I found this answer: http://dba.stackexchange.com/a/370/12923

The (5) represents the display width of the field. From the manual, it states:

The display width does not constrain the range of values that can be stored in the column. Nor does it prevent values wider than the column display width from being displayed correctly. For example, a column specified as SMALLINT(3) has the usual SMALLINT range of -32768 to 32767, and values outside the range permitted by three digits are displayed in full using more than three digits.

The display width, from what I can tell, can be used to left-pad numbers that are less than the defined width. So 00322, instead of 322. TBH, I've never used it.

But it doesn't affect the storage size of the column. An int will take up more space than a smallint.

So there seems to be no difference, then.
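Right: the display width only becomes visible with ZEROFILL. A quick illustration (table name is illustrative; note that ZEROFILL also implies UNSIGNED):

```sql
CREATE TABLE t (a INT(5) ZEROFILL, b INT(8) ZEROFILL);
INSERT INTO t VALUES (322, 322);
SELECT * FROM t;
-- a: 00322     (padded to 5 digits)
-- b: 00000322  (padded to 8 digits)
```

Storage is 4 bytes for both columns; only the client-side rendering differs.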

MySQL InnoDB write operations are extremely slow

Posted: 07 Sep 2013 07:19 AM PDT

I'm having serious performance problems with MySQL and the InnoDB engine. Even the simplest table makes writing operations (creating the table, inserting, updating and deleting) horribly slow, as you can see in the following snippet.

mysql> CREATE TABLE `test` (`id` int(11) not null auto_increment,
    -> PRIMARY KEY(`id`)) ENGINE=InnoDB;
Query OK, 0 rows affected (4.61 sec)

mysql> insert into test values ();
Query OK, 1 row affected (1.92 sec)

mysql> insert into test values ();
Query OK, 1 row affected (0.88 sec)

mysql> insert into test values ();
Query OK, 1 row affected (1.10 sec)

mysql> insert into test values ();
Query OK, 1 row affected (6.27 sec)

mysql> select * from test;
+----+
| id |
+----+
|  1 |
|  2 |
|  3 |
|  4 |
+----+
4 rows in set (0.00 sec)

mysql> delete from test where id = 2;
Query OK, 1 row affected (0.28 sec)

mysql> delete from test where id = 3;
Query OK, 1 row affected (6.37 sec)

I have been watching htop, and the long wait times are not due to abnormal CPU load; it's almost zero, and memory usage is also normal. If I create the same table using the MyISAM engine, it works normally. My my.cnf file contains this (if I remember right, I haven't changed anything from the default Debian configuration):

[client]
port        = 3306
socket      = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket      = /var/run/mysqld/mysqld.sock
nice        = 0

[mysqld]
user        = mysql
pid-file    = /var/run/mysqld/mysqld.pid
socket      = /var/run/mysqld/mysqld.sock
port        = 3306
basedir     = /usr
datadir     = /var/lib/mysql
tmpdir      = /tmp
language    = /usr/share/mysql/english
skip-external-locking
bind-address        = 127.0.0.1
key_buffer          = 40M
max_allowed_packet  = 16M
thread_stack        = 128K
thread_cache_size   = 8
myisam-recover      = BACKUP
max_connections     = 100
table_cache         = 64
thread_concurrency  = 10
query_cache_limit   = 1M
query_cache_size    = 40M
log_slow_queries    = /var/log/mysql/mysql-slow.log
long_query_time     = 2
log-queries-not-using-indexes
expire_logs_days    = 10
max_binlog_size     = 100M

[mysqldump]
quick
quote-names
max_allowed_packet  = 16M

[isamchk]
key_buffer      = 16M

!includedir /etc/mysql/conf.d/

I have also tried to restart the server, but it doesn't solve anything.

The slow query log doesn't give any extra information.

MySQL table relations, inheritance or not?

Posted: 07 Sep 2013 06:19 PM PDT

I'm building a micro CMS, using MySQL as the RDBMS and the Doctrine ORM for mapping.

I would like to have two types of pages: Static Page and Blog Page. A static page would have page_url and page_content stored in the database. A blog page would have page_url but no page_content; the blog would have posts, categories...

Lets say I have route like this:

/{pageurl}  

This is a page whose URL can be home, news, blog... That page can be a Static page, in which case I would just print page_content. But it can also be a Blog page, in which case I would print the latest posts as the content.

How should I relate the Static Page and Blog Page tables? Is this inheritance, since both are pages with their own URL but different content? Should I use inheritance, so that both Static and Blog page extend a Page that holds page_url? Or should I make another table, page_types, and store information about the available page types there?
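One way to sketch the page_types variant in plain SQL (names are illustrative, not Doctrine-generated):

```sql
CREATE TABLE page (
  id           INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  page_url     VARCHAR(255) NOT NULL UNIQUE,  -- the /{pageurl} route key
  page_type    ENUM('static', 'blog') NOT NULL,
  page_content TEXT NULL                      -- populated only for static pages
) ENGINE=InnoDB;
```

In Doctrine terms this maps naturally to single-table inheritance with page_type as the discriminator column, which keeps the /{pageurl} lookup a single indexed query; a separate table per page type (class-table inheritance) works too, at the cost of a join.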
