Saturday, August 3, 2013

[how to] Percona XtraDB Cluster With Async slaves


Percona XtraDB Cluster With Async slaves

Posted: 03 Aug 2013 08:07 PM PDT

I have set up a 3-node Percona cluster in Amazon EC2. As our client required, I also had to set up asynchronous slaves in another datacenter. Each async slave points to a single Percona cluster node with log-slave-updates and bin-log enabled.

However, the async slaves fail to maintain replication for long: at least once a week I have to rebuild them from scratch. These async slaves are used only for backup purposes.

Is there an alternative (better) way of doing this?

Percona Cluster ---> async slaves ?

Make "NOT NULL" default in Postgresql 9.2

Posted: 03 Aug 2013 08:19 PM PDT

I use NOT NULL a lot when creating columns.

Is there a way to make this the default when defining columns? In other words, make column is_nullable default to NO. (I realize this would make it harder for me to set it to YES later on.)

phpmyadmin Lock Down

Posted: 03 Aug 2013 08:51 PM PDT

I have a client who wants to use phpMyAdmin to manage his MySQL 5.x database on my server. I found an article on how to hide databases from phpMyAdmin, but I don't like this approach, because if I don't specify the databases to hide, then every time I add a new database it becomes available to the client through phpMyAdmin.

Article: http://www.electrictoolbox.com/hide-databases-phpmyadmin/

Command to hide a database in the phpMyAdmin config.inc.php file:

$cfg['Servers'][$i]['hide_db'] = 'hidedb1|hidedb2|etc...';  

I also created a separate username and password in MySQL too.

Is there a way to reverse this, so I can tell phpMyAdmin which databases to show for this login?

Thanks,

Frank G.

Does the mysql parameter "max_connections" apply for unix socket connections?

Posted: 03 Aug 2013 07:52 AM PDT

The documentation does not specify whether max_connections applies only to TCP connections.

As written, it seems to apply to any kind of connection, but I'd like to be sure, because otherwise I could use a socket connection when the "Too many connections" problem arises, even if the (last) SUPER connection slot is already in use.

DB: Having redundant fields in Table

Posted: 03 Aug 2013 08:23 AM PDT

This is a basic DB design question, but I have now run into the same situation multiple times. I'll try to illustrate it with the following example.

Let's say I am designing the database for messaging on a classifieds platform.

A user publishes many ads, and other users can message her about a specific ad.

I thus have an ads table, a user table and a message table in my DB.

Since users message people about specific ads, storing both the ad AND the user in a message is redundant.

On the other hand, when listing all incoming messages for a given user, it would be much more efficient to have the user as a field in the message table too, because a user might have many ads and only very few of them have messages.

So should the message table now have a user field or not?
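For comparison, the normalized baseline can be sketched concretely. The schema below is invented for illustration (table and column names are my assumptions, checked with SQLite via Python): the message stores only the ad and the sender, and the recipient is recovered with one extra join through the ad's owner. The duplicate user column in the message table would then be a deliberate denormalization for read speed, not a correctness requirement.

```python
import sqlite3

# Hypothetical minimal schema for the classifieds example (names invented).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users    (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE ads      (id INTEGER PRIMARY KEY, owner_id INT REFERENCES users(id));
CREATE TABLE messages (id INTEGER PRIMARY KEY,
                       ad_id     INT REFERENCES ads(id),
                       sender_id INT REFERENCES users(id),
                       body      TEXT);
INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
INSERT INTO ads VALUES (100, 1);                            -- alice's ad
INSERT INTO messages VALUES (1000, 100, 2, 'still available?');
""")

# Inbox for user 1: the recipient is derived via the ad's owner,
# so the messages table needs no recipient/user column of its own.
inbox = con.execute("""
    SELECT u.name, m.body
      FROM messages m
      JOIN ads   a ON a.id = m.ad_id
      JOIN users u ON u.id = a.owner_id
     WHERE u.id = 1
""").fetchall()
print(inbox)
```

Whether to also store the recipient directly in the message row is then a measured trade-off: it saves one join on the hot inbox query at the cost of keeping the copy consistent.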

how to use index for asp.net site? [on hold]

Posted: 03 Aug 2013 01:11 AM PDT

I have started learning about indexes in SQL Server. I have a table with some fields, and I created an index on the Price column of my table:

CREATE INDEX pricetag ON pricetable (Price)

When can I use an index? Does it matter whether the data type is integer, or whether the table lacks a constraint like a primary or foreign key? When I execute the query above, I get the following error.

My table has ID as its primary key.

Cannot find the object "pricetable" because it does not exist or you do not have permissions.

I want to know when to use an index, and where to apply one, in SQL Server.

Moving a database to a new server

Posted: 03 Aug 2013 07:58 PM PDT

My database server is being decommissioned at work. We have a new database server to take its place. Both servers have the same operating system and SQL Server 2008. I need to move all my databases and data over to the new server.

What is the best way to do this to ensure as few problems as possible?

My data is not just tables, views and stored procedures. There are also several SQL Server Agent jobs and several database maintenance plans and scheduled backups.

The system is not being used at night, so I have a late night opportunity to take the database offline, if need be.

Can I just copy the MSSQL directory over to the new server, or should I back up each database individually? If I do back up each database, should I back up the system databases (like master, model and msdb) too?

Connect to remote Oracle Database

Posted: 03 Aug 2013 04:08 PM PDT

I need to make an application connect to an Oracle database located on the same network (pinging the host works fine) but not on the same machine, i.e. it's a remote server. I developed the software using Qt, and I was able to connect to that database successfully from my machine, but I am not able to do the same from the machine where my software is supposed to run in production.

I've tried to understand what is so special about my machine that my software can connect to the Oracle database from it but not from the target machine, and the only difference seems to be that I've installed the Oracle database here, while I'm not allowed to do the same there.

The error returned by Qt is simply that I am unable to log on, while the QOCI driver reports that it wasn't possible to create the environment. Interestingly enough, the same problem occurred on my machine back when I had neither the Oracle database nor the Oracle Client installed. The Oracle Client is already installed on the official machine, with no positive results.

So I'd like to know exactly what I need to do to make my software work. Install the database on the official machine after all? Not viable. Install the Oracle Client? I've already done it. Add the ORACLE_HOME registry/environment variable? Did both. Copy the dbhome_1/BIN folder with its many DLLs and executables to the official machine? Did that too, and it only made the application start crashing while trying to connect. Now I'm out of ideas: why can I connect my app to this Oracle database from my machine, but not from that other machine?

SQL Server logs in Gmail

Posted: 03 Aug 2013 05:25 AM PDT

First off, I'm using SQL Server 2008 R2.

When I set up my maintenance plan and got to the "Select Report Options" step, I selected to email the report to the Operator I already have set up. This does not allow me to alter the subject, which I believe Gmail uses when it collapses emails into conversations.

Is there a way to trick Gmail into separating them out based on the database? Or is there a better way from within SQL Server Management Studio to email the reports? I have multiple instances, each with its own Operator; each Operator uses the same email address (no_reply@noneofyourbusiness.com) but a different name. Since the subject is the same, Gmail still collapses them into one big conversation.

Does anyone else have to deal with this issue?

Select rows, where 3 columns have the same values

Posted: 03 Aug 2013 03:02 AM PDT

I have a structure like this:

 col0 col1 col2 col3
  1    1    1    1
  1    1    1    2
  1    1    1    3
  1    2    1    1
  1    2    3    1
  2    3    4    5
  2    3    4    9

I now want to select all rows, that match other rows on the first three columns, so the result of this query using the sample data would be:

 col0 col1 col2 col3
  1    1    1    1
  1    1    1    2
  1    1    1    3
  2    3    4    5
  2    3    4    9

How can I achieve this? I'm stuck at the very beginning...

Thanks in advance! Alex

Update: clarified the question. Thanks Erwin.
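One standard way to express this (a sketch only; the table name t and the use of SQLite are my choices for a runnable illustration) is to group on the first three columns, keep the combinations that occur more than once, and join back to fetch the full rows:

```python
import sqlite3

# Recreate the question's sample data in a hypothetical table t.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (col0 INT, col1 INT, col2 INT, col3 INT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", [
    (1, 1, 1, 1), (1, 1, 1, 2), (1, 1, 1, 3),
    (1, 2, 1, 1), (1, 2, 3, 1),
    (2, 3, 4, 5), (2, 3, 4, 9),
])

# Keep only rows whose (col0, col1, col2) combination appears more than once.
rows = con.execute("""
    SELECT t.col0, t.col1, t.col2, t.col3
      FROM t
      JOIN (SELECT col0, col1, col2
              FROM t
             GROUP BY col0, col1, col2
            HAVING COUNT(*) > 1) dup
        ON dup.col0 = t.col0 AND dup.col1 = t.col1 AND dup.col2 = t.col2
     ORDER BY t.col0, t.col1, t.col2, t.col3
""").fetchall()
print(rows)
```

The same GROUP BY ... HAVING COUNT(*) > 1 derived-table pattern works in MySQL and PostgreSQL as well.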

MySQL 5.5 High CPU Usage

Posted: 03 Aug 2013 05:23 PM PDT

I have been working with a client with a high-traffic website (500k visitors and 600-800 active users at any given time). It runs WordPress, and the DB uses the MyISAM engine. The problem we had was high CPU usage on the server: the CPU load was 15-20 all the time. We used LiteSpeed and MySQL 5.1 on CentOS 5.9, on a dual Xeon L5506 server with 12 GB RAM and SATA HDDs.

So I analyzed the database, found that the data and indexes of that DB total only 4 GB, and decided to convert it to InnoDB. Once we did, we ended up with a CPU load of 80-150 and the server was about to crash. So we moved MySQL to another server with the same configuration, but running MySQL 5.5.

On the new DB server the CPU load is 1-2, and the web server is still at a constant CPU load of 4-6.

Here is my my.cnf

[mysqld]
innodb_file_per_table=1
local-infile = 0
default-storage-engine = InnoDB
max_connections = 1000

innodb_buffer_pool_size = 8G
innodb_flush_method = O_DIRECT
innodb_log_file_size = 256M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2
innodb_thread_concurrency = 8
innodb_file_format = Barracuda

myisam_sort_buffer_size = 16MB

query_cache_type = 1
query_cache_limit = 2M
query_cache_size = 256M
thread_cache_size = 16K

key_buffer_size = 128M
max_heap_table_size = 128M
tmp_table_size = 128M
join_buffer_size = 32M
read_buffer_size = 32M
read_rnd_buffer_size = 1M
sort_buffer_size = 32M
table_cache = 4K
open_files_limit = 65535

log-slow-queries = /var/log/mysql/slowqueries.log
long_query_time = 3

PS: Not all of our DBs are InnoDB, so the MyISAM values were kept after careful analysis.

Statistics (over 4 hours):

Questions since startup: 7,339,471
ø per hour:   1,704,102
ø per minute: 28,402
ø per second: 473

Traffic              ø per hour
Received  4.8 GiB    1.1 GiB
Sent      248.5 GiB  57.7 GiB
Total     253.3 GiB  58.8 GiB

How to run a SELECT query within while loop in PHP?

Posted: 03 Aug 2013 04:51 PM PDT

Within a PHP while loop over one SELECT result, I need to run another MySQL SELECT query:

$result1 = $mysqli->query("SELECT * FROM table1");

while ( $row = $result1->fetch_assoc() ) {

    if ( $row['X'] == 'X' ) {
        $result2 = $mysqli->query("SELECT * FROM table2");
        .....
    }

}

but this does not work. I cannot JOIN the tables, as the if logic is too complicated to fold into a single SELECT across the different tables.

Delete from table where multiple fields match select subquery from other table

Posted: 03 Aug 2013 09:50 AM PDT

I want to delete an entry in a table where multiple fields match the result of a SELECT subquery that takes data from another table.

This is what I have so far, though it doesn't work:

DELETE FROM table1
WHERE table1.id IN
   (SELECT id
    FROM table1 a JOIN table2 b
      ON a.field1 = b.field1
     AND a.field2 = b.field2
     AND a.field3 = b.field3
     AND b.id = ?
   )
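One plausible reason the statement fails is the unqualified id in the subquery, which is ambiguous once both table1 and table2 are in scope. A minimal runnable sketch of the same pattern, with invented sample data and the column qualified as a.id, behaves as intended in SQLite:

```python
import sqlite3

# Invented sample data: table1 row 1 matches table2 row 10 on all three fields.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (id INTEGER PRIMARY KEY, field1, field2, field3);
CREATE TABLE table2 (id INTEGER PRIMARY KEY, field1, field2, field3);
INSERT INTO table1 VALUES (1, 'a', 'b', 'c'), (2, 'x', 'y', 'z');
INSERT INTO table2 VALUES (10, 'a', 'b', 'c');
""")

# Same delete pattern as the question, but with a.id qualified.
con.execute("""
    DELETE FROM table1
    WHERE table1.id IN
       (SELECT a.id
        FROM table1 a JOIN table2 b
          ON a.field1 = b.field1
         AND a.field2 = b.field2
         AND a.field3 = b.field3
        WHERE b.id = ?)
""", (10,))

remaining = [r[0] for r in con.execute("SELECT id FROM table1")]
print(remaining)  # only the non-matching row is left
```

If this is MySQL, note that it additionally rejects selecting from the delete target inside a subquery (error 1093); the usual workaround is wrapping the subquery in an extra derived table.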

Oracle 11g http listener configuration

Posted: 03 Aug 2013 02:50 AM PDT

Could someone please point me to how to configure the Oracle DB to serve PL/SQL Server Pages? I have successfully created and mapped a DAD with DBMS_EPG.

Here is the listener:

# listener.ora Network Configuration File: D:\app\Ja\product\11.2.0\dbhome_1\NETWORK\ADMIN\listener.ora
# Generated by Oracle configuration tools.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 8080))
    )
  )

ADR_BASE_LISTENER = D:\app\Ja

When I try to browse to

localhost:8080/my_dad/home

I get the error "No data received".

How do I solve this?

MySql one time event never runs?

Posted: 03 Aug 2013 12:50 PM PDT

Please have a look at the events below:

1) CREATE EVENT Test1
   ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 SECOND
   ON COMPLETION PRESERVE ENABLE
   DO ...

2) CREATE EVENT Test2
   ON SCHEDULE EVERY 20 SECOND STARTS CURRENT_TIMESTAMP
   ON COMPLETION PRESERVE ENABLE
   DO ...

I expect event Test1 to run once after 20 seconds, but it never runs. Event Test2 works fine.

Any idea? Thanks.

Update: sorry, it is the ALTER that is not working.

At first I ran:

CREATE EVENT Test1 ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 SECOND ON COMPLETION PRESERVE ENABLE DO ...

Then shortly after I ran:

ALTER EVENT Test1 ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 SECOND ON COMPLETION PRESERVE ENABLE DO ...

I expected event Test1 to run again after another 20 seconds, but it didn't.

How can row estimates be improved in order to reduce chances of spills to tempdb

Posted: 03 Aug 2013 02:40 AM PDT

I notice that when there are spill-to-tempdb events (causing slow queries), the row estimates are often way off for a particular join. I've seen spill events occur with merge and hash joins, and they often increase the runtime 3x to 10x. This question concerns how to improve row estimates, under the assumption that doing so will reduce the chance of spill events.

The actual number of rows is 40k.

For this query, the plan shows a bad row estimate (11.3 rows):

select Value
  from Oav.ValueArray
 where ObjectId = (select convert(bigint, Value) NodeId
                     from Oav.ValueArray
                    where PropertyId = 3331
                      and ObjectId = 3540233
                      and Sequence = 2)
   and PropertyId = 3330
option (recompile);

For this query, the plan shows a good row estimate (56k rows):

declare @a bigint = (select convert(bigint, Value) NodeId
                       from Oav.ValueArray
                      where PropertyId = 3331
                        and ObjectId = 3540233
                        and Sequence = 2);

select Value
  from Oav.ValueArray
 where ObjectId = @a
   and PropertyId = 3330    -- Workpiece Side Active Objects
option (recompile);

Can statistics or hints be added to improve the row estimates for the first case? I tried adding statistics with particular filter values (PropertyId = 3330), but either I could not get the combination right, or it is being ignored because the ObjectId is unknown at compile time and an average over all ObjectIds is being chosen.

Is there any mode where it would do the probe query first and then use the result to determine the row estimates, or must it fly blind?

This particular property has many values (40k) on a few objects and zero on the vast majority. I would be happy with a hint where the max expected number of rows for a given join could be specified. This is a generally haunting problem because some parameters may be determined dynamically as part of the join or would be better placed within a view (no support for variables).

Are there any parameters that can be adjusted to minimize the chance of spills to tempdb (e.g. min memory per query)? The ROBUST PLAN hint had no effect on the estimate.

Add partition works but not the drop partition

Posted: 03 Aug 2013 04:50 AM PDT

We have set up an event as below. The add-partition branch works well (we can see the partition list growing daily), but the drop-partition branch does not. Any idea why?

IF EXISTS(SELECT * FROM INFORMATION_SCHEMA.PARTITIONS
   WHERE TABLE_NAME = 'testPart1' AND PARTITION_NAME =
   CONCAT(' p', DATE_FORMAT( DATE_SUB( CURDATE(), INTERVAL 14 DAY ), '%Y%m%d' ))) THEN

    SET @stmt := CONCAT(
        'ALTER TABLE testPart1 DROP PARTITION '
    ,   ' p'
    ,   DATE_FORMAT( DATE_SUB( CURDATE(), INTERVAL 14 DAY ), '%Y%m%d' )
    );
    PREPARE stmt FROM @stmt;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;

END IF;

IF NOT EXISTS(SELECT * FROM INFORMATION_SCHEMA.PARTITIONS
   WHERE TABLE_NAME = 'testPart1' AND PARTITION_NAME =
   CONCAT(' p', DATE_FORMAT( DATE_ADD( CURDATE(), INTERVAL 2 DAY ), '%Y%m%d' ))) THEN

    SET @stmt := CONCAT(
        'ALTER TABLE testPart1 ADD PARTITION ('
    ,   'PARTITION p'
    ,   DATE_FORMAT( DATE_ADD( CURDATE(), INTERVAL 2 DAY ), '%Y%m%d' )
    ,   ' VALUES LESS THAN ('
    ,   TO_DAYS( CURDATE() ) + 2
    ,   '))'
    );
    PREPARE stmt FROM @stmt;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;

END IF;

Slow SSRS Report in production

Posted: 03 Aug 2013 07:50 AM PDT

I have an SSRS report which gets its data by firing a series of stored procedures.

The report times out badly when run in production, yet when I pull down the prod database and restore it to development, the report runs fine.

I was thinking of setting up a SQL Server Profiler trace in production; hopefully that will tell me something, e.g. high disk I/O at the time the report is run.

What else should I be doing? Something with perfmon?

Shrinking the SYSTEM tablespace in Oracle

Posted: 03 Aug 2013 06:50 AM PDT

Our SYSTEM tablespace grew out of control because of the SYS.AUD$ table.

We have truncated SYS.AUD$, but the datafile is still very big (~30 GB).

Resizing doesn't work because "the file contains used data beyond requested RESIZE value".

What should I do here?

Here's our version information:

  • Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
  • PL/SQL Release 11.2.0.1.0 - Production
  • CORE 11.2.0.1.0 Production
  • TNS for 64-bit Windows: Version 11.2.0.1.0 - Production
  • NLSRTL Version 11.2.0.1.0 - Production

ORA-16000 when trying to perform select on read only Oracle database

Posted: 03 Aug 2013 08:51 PM PDT

My application's SQL encounters an ORA-16000 error when trying to access a read-only Oracle database.

ORA-16000: database open for read-only access
ORA-06512: at "SYS.XMLTYPE", line 272
ORA-06512: at line 1
### The error may exist in com/xyz/report/dao/Transaction001Mapper.xml

This is the query that involves XMLTYPE; INTERFACE_CONTENT is a CLOB column:

SELECT CONTENTS FROM ERRCODES
 WHERE CODE = (SELECT xmltype(INTERFACE_CONTENT).extract('/Message/PaymentStatus/ps:FIToFIPmtStsRpt/ps:TxInfAndSts/ps:StsRsnInf/ps:Rsn/ps:Prtry/text()','xmlns="urn:abcde" xmlns:head="urn:iso:std:iso:20022:tech:xsd:head.001.001.01" xmlns:ps="urn:iso:std:iso:20022:tech:xsd:pacs.002.001.03"').getstringval() APP_CODE
                 FROM MESSAGE_EXTERNAL_INTERACTION MEI
                WHERE MEI.MSG_TYPE = 'Pacs_002'
                  AND MEI.MID = 'MY_ID')

I also call the EXTRACTVALUE() method a lot on an XMLTYPE field.

The SQL works perfectly if the database is not read-only (i.e. read-write).

What is the issue here? Is this related to some missing privileges/grants?

Is a join optimized to a where clause at runtime?

Posted: 03 Aug 2013 05:30 AM PDT

When I write a query like this...

select *
  from table1 t1
  join table2 t2
    on t1.id = t2.id

Does the SQL optimizer (not sure if that is the correct term) translate that to:

select *
  from table1 t1, table2 t2
 where t1.id = t2.id

Essentially, is the JOIN syntax in SQL Server just an easier way to write SQL, or is it actually used at run time?

Edit: I almost always use, and will continue to use, the JOIN syntax. I am just curious what happens.
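For what it's worth, the two spellings denote the same inner join: JOIN ... ON is just the ANSI-92 syntax for the older comma-plus-WHERE form, and the optimizer produces the same plan for both. A toy cross-check (invented tables, run through SQLite here rather than SQL Server) showing the result sets are identical:

```python
import sqlite3

# Two small invented tables with an overlapping id range.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (id INT, v TEXT);
CREATE TABLE table2 (id INT, w TEXT);
INSERT INTO table1 VALUES (1, 'a'), (2, 'b'), (3, 'c');
INSERT INTO table2 VALUES (2, 'B'), (3, 'C'), (4, 'D');
""")

# ANSI JOIN form.
join_rows = con.execute(
    "SELECT * FROM table1 t1 JOIN table2 t2 ON t1.id = t2.id ORDER BY t1.id"
).fetchall()

# Old-style comma form with the predicate in WHERE.
where_rows = con.execute(
    "SELECT * FROM table1 t1, table2 t2 WHERE t1.id = t2.id ORDER BY t1.id"
).fetchall()

print(join_rows == where_rows)  # the two forms return identical rows
```

The practical argument for the JOIN syntax is readability and safety (forgetting the WHERE predicate in the comma form silently yields a cross join), not performance.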

Restore database from backup file of different version / edition

Posted: 03 Aug 2013 12:20 PM PDT

I read that it's possible to restore a database in SQL Server as long as you're restoring from an older version to a newer version, for backwards compatibility reasons.

Does anyone know offhand whether you can restore a database from a *.bak file across different editions of SQL Server? We are moving a very large database via FTP, which will take a couple of days, so we'd rather only do this once. If nobody responds by the time the transfer finishes, we will obviously test it ourselves and answer our own question.

Below is a query to get the version details of SQL Server. productversion is in the format {major revision}.{minor revision}.{release revision}.{build number}. In my case, the {release revision} is 5500 for the source and 5512 for the target, so that looks okay. However, the edition is different.

Query:

SELECT
    SERVERPROPERTY('productversion'),
    SERVERPROPERTY('productlevel'),
    SERVERPROPERTY('edition')

Source database:

10.0.5500.0
SP3
Developer Edition (64-bit)

Target database:

10.0.5512.0
SP3
Enterprise Edition (64-bit)

Restore SQL Server database using Windows Powershell 3.0

Posted: 02 Aug 2013 11:50 PM PDT

I'm trying to restore a SQL Server database with a PowerShell script, but I'm having problems.

Here is the error I'm getting:

Exception calling "SqlRestore" with "1" argument(s): "Restore failed for Server 'WUSFK250042-OLU\SQLSERVER2008R2'. "
At line:48 char:1
+ $smoRestore.SqlRestore($server)

Here is my code:

#clear screen
cls

#load assemblies
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
#Need SmoExtended for backup
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") | Out-Null
[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoEnum") | Out-Null

$backupFile = "C:\SafewayRT\SafewayRTFUll.bak"

#we will query the database name from the backup header later
$server = New-Object ("Microsoft.SqlServer.Management.Smo.Server") "WUSFK250042-OLU\SQLSERVER2008R2"
$backupDevice = New-Object ("Microsoft.SqlServer.Management.Smo.BackupDeviceItem") ($backupFile, "File")
$smoRestore = new-object ("Microsoft.SqlServer.Management.Smo.Restore")

#restore settings
$smoRestore.NoRecovery = $false;
$smoRestore.ReplaceDatabase = $true;
$smoRestore.Action = "Database"
$smoRestorePercentCompleteNotification = 10;
$smoRestore.Devices.Add($backupDevice)

#get database name from backup file
$smoRestoreDetails = $smoRestore.ReadFileList($server)

#display database name
"Database Name from Backup Header : " + $smoRestoreDetails.Rows[0]["Safeway_LogixRT"]

#give a new database name
$smoRestore.Database = $smoRestoreDetails.Rows[0]["Safeway_LogixRT"]

#specify new data and log files (mdf and ldf)
$smoRestoreFile = New-Object ("Microsoft.SqlServer.Management.Smo.RelocateFile")
$smoRestoreLog = New-Object ("Microsoft.SqlServer.Management.Smo.RelocateFile")

#the logical file names should be the logical filename stored in the backup media
$smoRestoreFile.LogicalFileName = $smoRestoreDetails.Rows[0]["Safeway_LogixRT"]
$smoRestoreFile.PhysicalFileName = $server.Information.MasterDBPath + "\" + $smoRestore.Database + "_Data.mdf"
$smoRestoreLog.LogicalFileName = $smoRestoreDetails.Rows[0]["Safeway_LogixRT"] + "_Log"
$smoRestoreLog.PhysicalFileName = $server.Information.MasterDBLogPath + "\" + $smoRestore.Database + "_Log.ldf"
$smoRestore.RelocateFiles.Add($smoRestoreFile)
$smoRestore.RelocateFiles.Add($smoRestoreLog)

#restore database
$smoRestore.SqlRestore($server)

How to run a cold backup with Linux/tar without shutting down MySQL slave?

Posted: 03 Aug 2013 03:50 PM PDT

I run the following before tar-ing up the data directory:

STOP SLAVE;
FLUSH TABLES WITH READ LOCK;
FLUSH LOGS;

However, tar will sometimes complain that the ibdata* and ib_logfile* files were updated during the process. What am I missing?

The slave is a cold standby machine, so there are no client processes running while tar runs.

CentOS 5.6 64-bit, MySQL 5.1.49-log, source distribution.

Run Unix command using PL/SQL

Posted: 03 Aug 2013 08:50 AM PDT

Is it possible to run a Unix command using a query in Oracle?

I want to run simple commands (like df -h) using a query.

Is this at all possible, or am I wasting my time? I don't want to use a different language like Java or C to call a procedure; it needs to be purely PL/SQL.

Theta join explanation

Posted: 03 Aug 2013 09:38 AM PDT

Given the following question, I'd be glad if someone could help me.

Take the following relational schemes:

R(A, B, C)
S(D, E, F)

where the A and D attributes are the primary keys.

Assume to have an instance r of R with n tuples and an instance s of S with m tuples.

Moreover, assume to have a referential integrity constraint between C and the primary key of S.

1) How many tuples does the θ-join between s and r contain if the join predicate is C = D?

2) How many tuples does the θ-join between s and r contain if the join predicate is B = E?

My answers:

1) n tuples, since it all depends on how many records of the second relation match the records of the first one

2) zero, since there are no common attributes

Is my reasoning correct?

Any help will be highly appreciated. Thanks in advance!
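The first answer can be sanity-checked numerically: with the referential integrity constraint from C to S's key, each tuple of r matches exactly one tuple of s on C = D, so that θ-join has exactly n tuples. For B = E, though, the count depends on the stored values rather than on attribute names, so it can be anywhere from 0 to n·m. A tiny check with invented instances:

```python
# R(A, B, C) with n = 3 tuples; each C value references a key of S.
r = [(1, 'x', 10), (2, 'y', 10), (3, 'z', 20)]
# S(D, E, F) with m = 3 tuples; D is the primary key.
s = [(10, 'x', 'p'), (20, 'q', 'r'), (30, 's', 't')]

# Theta-join on C = D: every r tuple finds exactly one s partner.
theta_cd = [(t, u) for t in r for u in s if t[2] == u[0]]
print(len(theta_cd))   # 3, i.e. n

# Theta-join on B = E: the count is data-dependent, not zero by definition.
theta_be = [(t, u) for t in r for u in s if t[1] == u[1]]
print(len(theta_be))   # 1 here: the value 'x' happens to appear in both columns
```

So the reasoning for (1) holds, while for (2) nothing about the schema alone fixes the result size.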
