Wednesday, September 25, 2013

Does MySQL close cursor if stored procedure encounters an exception?

Posted: 25 Sep 2013 06:39 PM PDT

I've seen two different approaches to cursor handling in MySQL stored procedures. Now I'm curious about their robustness, given the risk of a SQLEXCEPTION inside a stored routine.

Assume the cursor is open when the DBMS raises a SQLEXCEPTION and control passes to the EXIT HANDLER, if one is declared...

Approach #1:

DELIMITER $$

CREATE PROCEDURE `test1`()
BEGIN
    DECLARE `EOS` BOOLEAN DEFAULT FALSE;
    DECLARE `buffer` INT UNSIGNED;

    DECLARE `MyCursor` CURSOR FOR
        SELECT `id`
        FROM `MyTable`
        LIMIT 50;

    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
        ROLLBACK;
        SIGNAL SQLSTATE 'ERR0R' SET MESSAGE_TEXT = 'Oops... FATAL ERROR!', MYSQL_ERRNO = 5656;
    END;

    DECLARE CONTINUE HANDLER FOR NOT FOUND SET `EOS` = TRUE;

    START TRANSACTION;

    OPEN `MyCursor`;

    cycle: LOOP
        FETCH `MyCursor` INTO `buffer`;

        IF `EOS` THEN
            LEAVE cycle;
        END IF;

        -- INSERTs, UPDATEs and DELETEs against other tables
        -- Section that might throw a SQLEXCEPTION
    END LOOP cycle;

    CLOSE `MyCursor`;

    COMMIT;
END$$

DELIMITER ;

Note that if a SQLEXCEPTION is thrown in the commented section, then MyCursor will probably not be closed. Or will it be closed and deallocated in this case?

Somewhere I've seen the following...

Approach #2:

DELIMITER $$

CREATE PROCEDURE `test2`()
BEGIN
    DECLARE `EOS` BOOLEAN DEFAULT FALSE;
    DECLARE `buffer` INT UNSIGNED;

    DECLARE `MyCursor` CURSOR FOR
        SELECT `id`
        FROM `MyTable`
        LIMIT 50;

    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
        ROLLBACK;
        CLOSE `MyCursor`; -- <---- is this even needed here?
        SIGNAL SQLSTATE 'ERR0R' SET MESSAGE_TEXT = 'Oops... FATAL ERROR!', MYSQL_ERRNO = 5858;
    END;

    DECLARE CONTINUE HANDLER FOR NOT FOUND SET `EOS` = TRUE;

    START TRANSACTION;

    OPEN `MyCursor`;

    cycle: LOOP
        FETCH `MyCursor` INTO `buffer`;

        IF `EOS` THEN
            LEAVE cycle;
        END IF;

        -- INSERTs, UPDATEs and DELETEs against other tables
        -- Section that might throw a SQLEXCEPTION
    END LOOP cycle;

    CLOSE `MyCursor`;

    COMMIT;
END$$

DELIMITER ;

And now I'm a bit confused about CURSOR handling.

Question: Which of these approaches is more accurate and reliable? If I don't close the cursor in the EXIT HANDLER, will it impact anything? And if it does, how "bad" might the consequences be?
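For what it's worth, the MySQL manual states that a cursor is closed automatically at the end of the BEGIN ... END block in which it was declared, which suggests approach #1 does not leak the cursor. If you do close it in the handler, note that CLOSE raises error 1326 (cursor is not open) when the exception fired before OPEN ran; a nested block with its own handler can swallow that. A minimal sketch combining both ideas (RESIGNAL requires MySQL 5.5+; table and column names as in the question):

DELIMITER $$

CREATE PROCEDURE `test3`()
BEGIN
    DECLARE `EOS` BOOLEAN DEFAULT FALSE;
    DECLARE `buffer` INT UNSIGNED;

    DECLARE `MyCursor` CURSOR FOR
        SELECT `id`
        FROM `MyTable`
        LIMIT 50;

    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
        -- close defensively; swallow error 1326 in case OPEN never ran
        BEGIN
            DECLARE CONTINUE HANDLER FOR SQLEXCEPTION BEGIN END;
            CLOSE `MyCursor`;
        END;
        ROLLBACK;
        RESIGNAL; -- re-raise the original error
    END;

    DECLARE CONTINUE HANDLER FOR NOT FOUND SET `EOS` = TRUE;

    START TRANSACTION;

    OPEN `MyCursor`;

    cycle: LOOP
        FETCH `MyCursor` INTO `buffer`;

        IF `EOS` THEN
            LEAVE cycle;
        END IF;

        -- work that might throw a SQLEXCEPTION
    END LOOP cycle;

    CLOSE `MyCursor`;

    COMMIT;
END$$

DELIMITER ;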

Branching Source Control at development

Posted: 25 Sep 2013 05:38 PM PDT

At my current employer, the developers are branching for development per story and merging for release in Git - and frankly I'm jealous. But I haven't been able to wrap my head around how a SQL Server DB workflow should work in this manner, especially considering the tooling that I am aware of / using. So I'm curious if anyone has some thoughts.

Our SQL Server environment

 [Prod] <- [Testing] <- [Dev Release] <-> [Dev01],[Dev02],[Dev03]  

Each team has a dev server stack where stories are worked. There are 25+ databases per server. When product/project determines which stories are to be released, those changes are merged to Dev Release - and eventually flow back to team dev. Dev Release is then used to script the deployment. Thus any ongoing projects (long-term work, third-party integrations, etc.) aren't included with the release. This means there is no guaranteed point at which Dev Release and any team dev server are fully synchronized. Also of note, the sequence of changes made to Dev Release is unrelated to the sequence in which changes are applied within team dev...

Some ponderings:

One way (problem): If we have a single branch for each team dev server, it appears easy to keep in sync with tooling (Red Gate Source Control, etc.)... but prepping for a release (in Git) either means cherry-picking commits (too much work) or copying files and losing dev commit history (too much work). (I assume that SVN's file deltas would preserve dev history - but it would still be a pain, as it's not a merge... guessing.) Although dev history is lost, we would still have a list of release changes - but that doesn't work for me. This feels wrong.

Another way (problem): If we branch per story, then it would be fairly trivial to merge to Dev Release and preserve object history. However, there wouldn't be a single branch that matches the state of the database, and thus tooling (Red Gate, etc.) will always complain that the DB and the repo are out of sync. This feels annoying.

At my previous employer, we branched for release using SVN. All forward development teams worked towards a common release - thus forward changes were developed against the trunk (or a dev branch reconciled to the trunk), and hotfixes were applied to the release branches. Depending upon the product, we had different workflows for publishing - but the idea was similar. Almost all the links I've found (even on this site) seem to point to that workflow - as described very well here: http://www.infoq.com/articles/agile-version-control

However, all the teams (excluding hotfixes) were working towards a common release - thus the dev branches had to reconcile / merge. So, if we had a project / integration that wasn't in the release, we'd have to back its changes out of the trunk for the freeze - and reapply them once the next release cycle started. Annoying - but doable. Of note, the changes to the database could be applied in a serial fashion - as they were (mostly) all related. So it's no surprise that tooling (SSDT / Red Gate / etc.) seems to be geared for this...

So, if anybody has any pointers or helpful URLs, or can point out where my ponderings have gone awry, I'd be grateful for the info.

SCCM 2007 SQL Query Code - Help Needed

Posted: 25 Sep 2013 05:25 PM PDT

I am trying to write a report showing all server collections with maintenance windows. I want to show only those collections whose computers are also members of my "All Server" collection, so as to identify them as servers. I cannot use the OS caption LIKE '%Server%' because some of these computers run Windows 7 but are used in a server role.

Here is the code I came up with, but the output is blank. Any help would be very much appreciated. Thank you very much.

Select Distinct
    v_Collection.CollectionID as [Collection ID],
    v_Collection.Name as [Collection Name],
    v_ServiceWindow.Description as [Maintenance Window Description],
    v_ServiceWindow.StartTime as [Maintenance Window Start Time],
    v_ServiceWindow.Duration as [Maintenance Window Duration]
From v_Collection
Join v_ServiceWindow on
    v_ServiceWindow.CollectionID = v_Collection.CollectionID
Join v_FullCollectionMembership on
    v_FullCollectionMembership.CollectionID = v_ServiceWindow.CollectionID
Where v_FullCollectionMembership.Name In
    (
    Select Distinct v_Collection.CollectionID
    From v_Collection
    Where v_Collection.CollectionID = 'XYZ'
    )
ORDER BY [Collection ID];
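One detail that stands out (an observation, not a verified fix): the IN subquery compares v_FullCollectionMembership.Name, a computer name, against v_Collection.CollectionID values, so nothing can ever match and the result set comes back empty. A sketch of a membership-based filter instead, assuming the standard SCCM v_FullCollectionMembership.ResourceID column and that 'XYZ' is the "All Server" collection ID:

Select Distinct
    v_Collection.CollectionID as [Collection ID],
    v_Collection.Name as [Collection Name],
    v_ServiceWindow.Description as [Maintenance Window Description],
    v_ServiceWindow.StartTime as [Maintenance Window Start Time],
    v_ServiceWindow.Duration as [Maintenance Window Duration]
From v_Collection
Join v_ServiceWindow on
    v_ServiceWindow.CollectionID = v_Collection.CollectionID
Join v_FullCollectionMembership on
    v_FullCollectionMembership.CollectionID = v_ServiceWindow.CollectionID
Where v_FullCollectionMembership.ResourceID In
    (
    Select ResourceID
    From v_FullCollectionMembership
    Where CollectionID = 'XYZ' -- the "All Server" collection
    )
ORDER BY [Collection ID];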

Merge reports 'You must rerun snapshot because current snapshot files are obsolete', gets caught in Death Loop

Posted: 25 Sep 2013 06:56 PM PDT

We have a very large merge publication for a data warehouse, with some large fact tables (.bcp files on the order of 20GB), that is causing us tremendous trouble.

Specifically, we created a publication and a subscription, and things ran OK with a couple of hiccups (disk space). After the first hiccup (log space exhausted) we saw this:

You must rerun snapshot because current snapshot files are obsolete.

On restart, it did pretty much what one would expect (went through the process, skipping things it had already done).

At the end we saw the 'nothing to do' message, but it immediately went back into the ftp/apply process, failing when it tried to apply the .sch file with 'can't drop the table because it is being used in replication'.

So things are obviously hosed. Restarting from zero might be advised, but we ran the 'remove publication' sproc on the offending subscriber last night, so we should have been clean.

Also, this whole process (downloading and applying) took 9 hours, so if possible I would like to get to the root of it before restarting. Why would the snapshot have become obsolete in such a short period? What could have gone awry? If you have any questions or need info, let me know.

Experience with ScaleArc sharding?

Posted: 25 Sep 2013 06:34 PM PDT

Does anyone have experience with ScaleArc sharding and its ability to scale? Does ScaleArc need to have the whole dataset in memory, or does it behave well with disk storage? I'm thinking of big database applications, 10TB and above.

Unify case when changing field values

Posted: 25 Sep 2013 04:16 PM PDT

I have the following code:

$sql =
    'select *,'.
    'case when fisicas_cpf is not null then null else razao_social end as razao_social,'.
    'case when fisicas_cpf is not null then null else nome_fant end as nome_fant '.
    'from pessoas p join fisicas f join juridicas j '.
    'on p.fisicas_cpf=f.cpf or p.juridicas_cnpj=j.cnpj '.
    'group by p.id'
;

I want to unify the case statements because they verify the same thing.

I want something like this:

'case when fisicas_cpf is not null then null else razao_social,nome_fant end as razao_social,nome_fant '.  

Here, @AaronB says that a CASE expression returns just one value. Does that apply to my situation, or is there a way of doing it?
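For context: a CASE (or IF()) expression yields exactly one scalar per output column, so the combined syntax above cannot work; two output columns need two expressions. The closest you can get to writing the condition once is computing a flag in a derived table and reusing it. A sketch, with the assumption that razao_social and nome_fant live on juridicas (the question doesn't say), and LEFT JOINs so rows of either kind survive - adjust to the original join logic as needed:

select q.*,
       if(q.is_pf, null, q.razao_social) as razao_social_out,
       if(q.is_pf, null, q.nome_fant)    as nome_fant_out
from (
    select p.*, j.razao_social, j.nome_fant,
           (p.fisicas_cpf is not null) as is_pf   -- the shared condition, computed once
    from pessoas p
    left join fisicas f   on p.fisicas_cpf   = f.cpf
    left join juridicas j on p.juridicas_cnpj = j.cnpj
) q;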

Updating an indexed column performance impact in SQL server

Posted: 25 Sep 2013 01:08 PM PDT

I am trying to understand whether there is any performance impact when updating a column that is the key column of a non-clustered index in SQL Server 2008. For example:

I have a column IsActive in the table TestTable, and I have created a non-clustered index on IsActive. Now I am trying to update the value of that column with an update query like

UPDATE TestTable SET IsActive = 1 WHERE IsActive = 0

I need to understand whether this will perform better with the index or without it. I am asking because the index must be maintained when indexed values are updated. So will this index maintenance have an impact on query performance? Note: the table has in excess of 100 million rows.
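For a one-off bulk flip like this, one common pattern (a sketch, with a hypothetical index name) is to disable the non-clustered index for the duration of the update and rebuild it afterwards, so the index is written once instead of row by row; the trade-off is that the UPDATE must scan the table while the index is disabled:

-- hypothetical index name
ALTER INDEX IX_TestTable_IsActive ON dbo.TestTable DISABLE;

UPDATE dbo.TestTable SET IsActive = 1 WHERE IsActive = 0;

ALTER INDEX IX_TestTable_IsActive ON dbo.TestTable REBUILD;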

Use of Case Statement result in Join

Posted: 25 Sep 2013 12:18 PM PDT

Let's say I have a Case statement, something like this:

SELECT
    CASE
        WHEN SC.Original_Order_Id IS NOT NULL
            THEN SC.Original_Order_Id
        WHEN P.Product_Name like '%Bundled%'
            THEN R.Original_order_Id
        WHEN P.Product_Name like '%SUMB%'
            THEN R.Original_order_Id
        WHEN P.Product_Name like '%SUM-1B%'
            THEN R.Original_order_Id
        ELSE (Select Original_Order_Id
              From Support_Contract
              Where End_Date = SC.End_Date
              And Registration_ID = (Select Original_Registration_Id
                                     From Registration_Conversion
                                     Where Conversion_Registration_Id = R.Registration_Id))
    END AS Original_Order_Id
FROM dbo.Support_Contract AS SC
LEFT JOIN dbo.Registration AS R
    ON SC.Registration_Id = R.Registration_Id
LEFT JOIN dbo.Product AS P
    ON SC.Contract_Product_Id = P.Product_Id

Is there a good way to use that value in a join (ideally without the use of a derived table)? Specifically, I have another table named Order_ that I would like to join against (or retrieve from), using the result of the case statement to retrieve the Order_Number where Order_.Order__ID = Original_Order_Id.
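One pattern worth considering (a sketch, not a tested solution): SQL Server's OUTER APPLY lets you compute the CASE once under an alias and then join on that alias, which avoids wrapping the whole query in a classic derived table. Abbreviating the CASE from above:

SELECT X.Original_Order_Id, O.Order_Number
FROM dbo.Support_Contract AS SC
LEFT JOIN dbo.Registration AS R
    ON SC.Registration_Id = R.Registration_Id
LEFT JOIN dbo.Product AS P
    ON SC.Contract_Product_Id = P.Product_Id
OUTER APPLY (
    SELECT CASE
               WHEN SC.Original_Order_Id IS NOT NULL THEN SC.Original_Order_Id
               -- ... remaining WHEN / ELSE branches exactly as in the query above ...
           END AS Original_Order_Id
) AS X
LEFT JOIN dbo.Order_ AS O
    ON O.Order__ID = X.Original_Order_Id;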

MySQL on DRBD very fragile

Posted: 25 Sep 2013 11:51 AM PDT

My website (a Magento store) runs perfectly fine most of the time, until at some point mysqld gets hammered with a few more INSERTs than normal, and then locks up. I run MySQL on Linodes with Debian 7.0 and ext4 on DRBD (for failover).

When it locks up, no queries get through and all activity halts. I then SSH into the machine and see that the load is at 0.00, and the mysqld process itself is in the Sl state. I can't find any dead or zombie processes. I also issued SHOW FULL PROCESSLIST;, which just showed that some INSERTs were waiting.

Where can I start to find the cause of this? Can it be related to DRBD or ext4 somehow? Can it be some mysqld setting hitting a ceiling?

This is my custom my.cnf: https://gist.github.com/jonathanselander/0c4f22ba06b613083d3c

Failover scenario with principal "hard down"

Posted: 25 Sep 2013 02:59 PM PDT

My company has a failover pair of SQL Server 2005 instances, which provide database availability for a UL-licensed alarm/call center, where uptime and availability are critical for life safety.

Here's an example line from the connectionStrings of one of the business-critical applications (sanitized for our protection):

<add name="MyCompany.MyApp.Properties.Settings.MyDbConnectionString"
     connectionString="Data Source=Principal-DB;Failover Partner=Mirror-DB;
     Initial Catalog=MyDb; Integrated Security=True; Persist Security Info=True;"
     providerName="System.Data.SqlClient" />

About half an hour ago, this was the scenario as we believe it to be:

  1. DBA reports need for hardware maintenance on Principal-DB server to all users at a specified time, and recommends they close and restart client applications if they experience database connection issues.
  2. DBA fails over from Principal-DB to Mirror-DB. Client applications are unaffected.
  3. DBA powers down Principal-DB server.
  4. Some client applications begin to timeout on pooled connections (as would be expected). Users experiencing these errors close and restart as recommended by the DBA.
  5. Those client applications now fail to start altogether, due to a failure to receive a response from the database.
  6. Hilarity ensues.

Our theory was that, because Principal-DB was not in a state to respond to connection requests at all - whereas it would normally refuse such connections quickly if it were in the restoring state - the client applications ended up waiting out the entire allotted connection timeout (15 seconds by default) for the primary server to respond, then returning the timeout error without ever attempting to connect to the listed failover partner.

The quick fix was to push an update to the client application containing an App.config that swapped the Data Source and Failover Partner instances, so Mirror-DB was now the server to which the app attempted to connect first. When we fail back to Principal-DB, we will have to undo this change with another application update.
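A sketch of that swapped entry, mirroring the sanitized string above:

<add name="MyCompany.MyApp.Properties.Settings.MyDbConnectionString"
     connectionString="Data Source=Mirror-DB;Failover Partner=Principal-DB;
     Initial Catalog=MyDb; Integrated Security=True; Persist Security Info=True;"
     providerName="System.Data.SqlClient" />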

I need a more permanent fix. This was not the expected behavior for a failover pair, and it cannot be allowed to happen again. There must be a way to configure the client application so that it correctly tries the failover partner in this circumstance before returning an error.

SQL Agent still attempting replication for deleted publication

Posted: 25 Sep 2013 07:58 PM PDT

We recently retired a replication publication, deleting each (push) subscription and then deleting the publication, all from within SSMS. However, we have continued to get the following message once every minute in the four days since deleting the publication.

Replication-Replication Distribution Subsystem: agent (null) failed. The publication 'Replicate_ASG_Reference_Data' does not exist.

Why is it still trying to run a deleted publication, and how do I stop it?

Additional information: This is on SQL Server 2008 SP3 running on Windows Server 2008 R2.
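A starting point (an assumption about the cause, not a confirmed diagnosis): the Distribution Agent job for the retired publication may have been left behind in SQL Server Agent, and that orphaned job is what keeps firing. Something like this on the distributor would list candidate jobs to inspect or delete:

SELECT name, enabled
FROM msdb.dbo.sysjobs
WHERE name LIKE '%Replicate_ASG_Reference_Data%';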

SQL Server backup fails on disk space error

Posted: 25 Sep 2013 01:35 PM PDT

SQL Server 2008 R2 RTM running on Windows 2008 64bit.

Having problems backing up a single database with the following command

backup database [somedatabase]
to disk = '\\somehost\d$\BACKUP\somedirectory\some_database.bak'
with compression, init, stats=1

The database backs up just fine for a while

93 percent processed.
94 percent processed.

Then I get an error about disk space:

Msg 3202, Level 16, State 1, Line 1
Write on "\\somehost\d$\BACKUP\somedirectory\some_database.bak" failed: 1130 (Not enough server storage is available to process this command.)
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.

This doesn't make any sense, because there is over 800GB of available disk space and the backup only uses 121GB.

In SQL Server's logs I get this message

Date        9/25/2013 1:04:56 PM
Log         SQL Server (Current - 9/25/2013 12:15:00 PM)
Source      Backup

Message
BackupIoRequest::ReportIoError: write failure on backup device
'\\somehost\d$\BACKUP\somedirectory\some_database.bak'. Operating
system error 1130 (Not enough server storage is available to process
this command.).

Some information:

  1. Backups are executed locally on the server, and the backup files are saved to \\somehost, which is another server close by.

  2. Both the local and remote hosts have sufficient disk space.

  3. Compression is on

Any assistance in getting to the real cause of these backup failures would be appreciated.

Thank you

Update 16:48

As of R2, compression is a backup option on Standard Edition. Thank you RLF. I am still having the same problem with or without it; the only difference is that the backup fails at 18% without compression. I decided to monitor disk space to make sure nothing else was eating up GBs during the backup.

Free space during backup

As expected, available disk space went down to 600GB while using compression and 400GB without. Disk space never dipped below 400GB, though.
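Worth testing (a suggestion, not a confirmed diagnosis): OS error 1130 is returned by the remote file server's SMB service rather than by a full disk, so free space on either side may not be the limiting factor at all. Backing up to a local path and copying the file afterwards would isolate the share; a sketch with a hypothetical local path:

backup database [somedatabase]
to disk = 'D:\BACKUP\some_database.bak'
with compression, init, stats=1;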

Is a failed task considered to have run to completion in SQL Server maintenance plan terminology?

Posted: 25 Sep 2013 03:02 PM PDT

The replies to my previous question, "How to specify a SQL Server maintenance plan subtask to run unconditionally on success or failure of preceding tasks?", told me that I should change the "Evaluation operation:" in my maintenance plan from "Success" to "Completion".

[screenshots: precedence constraint "Evaluation operation" setting]

Still, I am in doubt about the term "Completion" and whether the proposed solution will fully address my situation.

All tasks are configured over "All user databases", and:

  • I did not notice this immediately when I added my own user database (for diagnostics, monitoring and testing), which I then took offline;
  • I am not permitted to change the maintenance plans without prior authorization from company management

[screenshot: maintenance plan task sequence]

So the nightly database backup task (penultimate in the screenshot above) failed. There are no diagnostics or logging beyond the fact that all the tasks failed.

I corrected the situation by detaching the culprit offline database. After that, the maintenance plan job started to complete successfully (including the backup task).

But I could not understand:
if I leave a user database offline and all maintenance plan tasks fail to run because of it, are the tasks considered to have come to "completion"?

Some doubts remain because I made other changes which, I am sure, could not have prevented maintenance plan completion. But who knows...

UPDATE: As I could see from the duration of the maintenance plan job compared with successful runs, the tasks were not even started, because each task's predecessor did not start - and those probably failed because they could not find "All user databases".

So, the definition of:

Completion requires only that the precedence executable has completed, without regard to outcome, in order for the constrained executable to run.

IMO, this does not assure that the previous task actually completed, because it could simply never have been started and run at all.

query cache hit value is not changing in my database

Posted: 25 Sep 2013 06:22 PM PDT

The query cache hit value in my database is not changing, but only for some queries. If I write

select * from drugs_info limit 1000;   

the query cache hit value changes, but if I execute the following queries two or three times the query cache hit value does not change. The queries are as follows:

select * from drugs_info limit 10000;
select * from drugs_info limit 15000;

I don't know what the problem is.

The table, which contains 64500 rows:

CREATE TABLE drugs_info (
    did int(11) NOT NULL AUTO_INCREMENT,
    brand_name varchar(150) DEFAULT NULL,
    generic varchar(500) DEFAULT NULL,
    tradename varchar(150) DEFAULT NULL,
    manfactured varchar(100) DEFAULT NULL,
    unit varchar(300) DEFAULT NULL,
    type varchar(50) DEFAULT NULL,
    quantity varchar(50) DEFAULT NULL,
    price float DEFAULT NULL,
    PRIMARY KEY (did),
    KEY id1 (brand_name,generic)
) ENGINE=InnoDB AUTO_INCREMENT=64379 DEFAULT CHARSET=latin1;

and the query:

select * from drugs_info;  

and my settings are

mysql> show status like 'qcache%';
+-------------------------+---------+
| Variable_name           | Value   |
+-------------------------+---------+
| Qcache_free_blocks      | 1       |
| Qcache_free_memory      | 8958376 |
| Qcache_hits             | 0       |
| Qcache_inserts          | 0       |
| Qcache_lowmem_prunes    | 0       |
| Qcache_not_cached       | 2       |
| Qcache_queries_in_cache | 0       |
| Qcache_total_blocks     | 1       |
+-------------------------+---------+

Why is this?
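One pattern that would explain this (an inference from the status above, not a confirmed diagnosis): Qcache_inserts stays at 0 while Qcache_not_cached grows, which means the bigger result sets are never stored at all - and any result larger than query_cache_limit is simply not cacheable. Comparing the cache variables against the result sizes would confirm it:

SHOW VARIABLES LIKE 'query_cache%';
-- query_cache_limit: largest result set the cache will store (1MB by default)
-- query_cache_size : total memory given to the cache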

postgresql replication - pg_stat_replication is showing empty columns

Posted: 25 Sep 2013 12:55 PM PDT

I have a PostgreSQL 9.2 streaming replication setup. It appears that the slave is getting the updates from the master and is in sync. I've verified this by looking at the pg_xlog directory and the process list.

$ ps aux | grep 'postgres.*rec'
postgres 26349  2.3 42.9 38814656 18604176 ?  Ss  Sep20  24:06 postgres: startup process   recovering 000000010000026E00000073
postgres 26372  4.9  0.1 38959108 78880 ?     Ss  Sep20  51:27 postgres: wal receiver process   streaming 26E/731E05F0

And the startup logs on the slave also look alright.

2013-09-21 03:02:38 UTC LOG:  database system was shut down in recovery at 2013-09-21 03:02:32 UTC
2013-09-21 03:02:38 UTC LOG:  incomplete startup packet
2013-09-21 03:02:38 UTC FATAL:  the database system is starting up
2013-09-21 03:02:38 UTC LOG:  entering standby mode
2013-09-21 03:02:38 UTC LOG:  redo starts at 26E/71723BB8
2013-09-21 03:02:39 UTC FATAL:  the database system is starting up
2013-09-21 03:02:39 UTC LOG:  consistent recovery state reached at 26E/75059C90
2013-09-21 03:02:39 UTC LOG:  invalid xlog switch record at 26E/75059E98
2013-09-21 03:02:39 UTC LOG:  database system is ready to accept read only connections
2013-09-21 03:02:39 UTC LOG:  streaming replication successfully connected to primary

What worries me is the pg_stat_replication view on the master. It shows that there is a client connected, but doesn't show anything more:

archive=> select * from pg_stat_replication;
 pid | usesysid |  usename   | application_name | client_addr | client_hostname | client_port | backend_start | state | sent_location | write_location | flush_location | replay_location | sync_priority | sync_state
-----+----------+------------+------------------+-------------+-----------------+-------------+---------------+-------+---------------+----------------+----------------+-----------------+---------------+------------
 999 |    16384 | replicator | walreceiver      |             |                 |             |               |       |               |                |                |                 |               |
(1 row)

Is this the expected behavior? I remember seeing values for client_addr, sent_location, replay_location etc. when I did a test run some time back. Is there anything that I'm missing?
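One known cause worth ruling out: in PostgreSQL 9.2, most columns of pg_stat_replication (client_addr, sent_location, replay_location, and so on) are visible only to superusers; other roles see them as empty. The archive=> prompt suggests a non-superuser session, so re-running the query as a superuser is a quick check. A sketch:

-- run as a superuser, e.g. postgres, and compare:
SELECT usename, client_addr, state, sent_location, replay_location
FROM pg_stat_replication;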

SQL-Server 2008R2: Maintenance plan fails - failover server

Posted: 25 Sep 2013 09:20 PM PDT

I am trying to get a maintenance plan (called New HS BKUP) to execute and I am receiving the following error:

[298] SQLServer Error: 15404, Could not obtain information about Windows NT
group/user 'OLD-SRVR-NAME\sqladmin', error code 0x534. [SQLSTATE 42000]
(ConnIsLoginSysAdmin)

I have 2 databases in a failover setup. The one that I'm referencing here is the secondary/target server. I had the server named OLD-SRVR-NAME, but then I couldn't get mail to send, so I renamed the server PRD-DB and reinstalled SQL Server. I also reinstalled the management components.

Here are some queries that I've run:

select ownersid from msdb.dbo.sysssispackages WHERE name = 'New HS BKUP'  

0x01

select @@servername  

PRD-DB

So what I'm getting from these results is that the job I'm trying to execute is owned by sa, yet it fails saying that it's looking for OLD-SRVR-NAME\sqladmin.

The server name isn't even OLD-SRVR-NAME any more - it's PRD-DB - so I can't figure out why the job is referencing the old server name at all. This job has been recreated many times with different names since I changed the server's name and reinstalled SQL Server.

Before I hit this problem I worked on getting the mail to send (which it does now). In order to do this I had to force regenerate the master key, in case that is important here.
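One thing to check (a hedged guess: the ownersid queried above belongs to the SSIS package, whereas the 15404 / ConnIsLoginSysAdmin error is usually raised when SQL Server Agent cannot resolve the login that owns the Agent job itself): compare the Agent job's owner against the server principals. A sketch using the standard msdb catalog:

SELECT j.name AS job_name,
       sp.name AS owner_login
FROM msdb.dbo.sysjobs AS j
LEFT JOIN sys.server_principals AS sp
    ON j.owner_sid = sp.sid
WHERE j.name LIKE '%New HS BKUP%';
-- a NULL owner_login (or one naming OLD-SRVR-NAME) would explain the error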

SQL Server: subscriber as publisher and hierarchical replication

Posted: 25 Sep 2013 04:23 PM PDT

In Oracle Streams, one-way replication from one DB to another is a basic building block for many replication topologies (N-way, hierarchical, combined and so on), and changes can be captured and applied for the same table at the same time.

But I can't find anywhere in the SQL Server documentation whether a table (article) can be a source (publisher) and a destination (subscriber) simultaneously. Is it possible, for example, to set up bidirectional replication using two pairs of publisher->subscriber transactional replication?

Naturally I am more interested in a multi-level hierarchy: for example, one table is replicated from the root node through intermediate nodes to the leaves, a second one from the leaves to the root, and a third one bidirectionally - and for all tables the intermediate nodes can perform DMLs that are replicated too. With Oracle Streams this is easy to achieve, and even more sophisticated configurations are possible, but are they with SQL Server?

UPDATE: It seems it is possible with merge replication and republishing (http://msdn.microsoft.com/en-us/library/ms152553.aspx), but what about transactional replication?

Cannot see Green Button with white arrow in object explorer; DTC config problems?

Posted: 25 Sep 2013 02:23 PM PDT

I have SQL Servers across locations set up for replication. On one of the SQL Server instances that I installed, I can see the green button with the white arrow in Object Explorer when I connect to it locally. However, when I connect to it from any other location, I cannot see it, nor can I start or stop the SQL Server Agent, even though locally I am able to do that.

Additionally, MSDTC has only 2 items sent, both of which were rejected, whereas other servers have hundreds committed and sent. Is there something wrong with the DTC settings?

Please help.

MySQL failover - Master to Master Replication

Posted: 25 Sep 2013 05:23 PM PDT

My company is trying to implement a MySQL failover mechanism, to achieve higher availability in our web services tier - we commercialize a SaaS solution. To that end we have some low-end VMs scattered across different geographical locations, each containing a MySQL 5.5 server with several DBs that, for the time being, merely slave-replicate from the production server - the objective up until now was just checking the latency and general resilience of MySQL replication.

The plan, however, is to add a master-master replication environment between two servers in two separate locations, and these two instances would handle all the DB writes. The idea wouldn't necessarily imply concurrency; rather, the intention is to have a single one of the instances handling the writes, and upon a downtime situation to use a DNS failover service to direct the requests to the secondary server. After the primary comes back online, the binlog generated in the meantime on the secondary would be replicated back, and the DNS failover would restore the requests to the first one.

I am not an experienced administrator, so I'm asking for your own thoughts and experiences. How wrong is this train of thought? What can obviously go wrong? Are there any much better alternatives? Bash away!

Thanks!

multi-master to single-slave replication at table level with PostgreSQL or MySQL

Posted: 25 Sep 2013 01:23 PM PDT

Here is my scenario

Master1 hosting DB1
Master2 hosting DB2
...
MasterN hosting DBN

replicate to:

Slave1 hosting DB1,DB2... DBN

I've read similar questions, and they recommend starting different instances at Slave1 and simply doing MasterN -> Slave1(instanceN) replication, as instructed here:

Single slave - multiple master MySQL replication

That would be a piece of cake, but running different instances might be a waste of resources.

I really want to achieve this with a single DBMS instance at Slave1, preferably with PostgreSQL, but I can try MySQL if there is a solution for it.

Any help is appreciated.

FOR UPDATE or INSTEAD OF UPDATE Trigger?

Posted: 25 Sep 2013 06:42 PM PDT

I am working on my project at university - a database in MS SQL Server for some sort of travel agency.

I would like to create a trigger to check whether I can add another reservation for a specific holiday (e.g. Hawaii, 01-08 Sep 2013). I wrote a 'FOR UPDATE', 'FOR INSERT' trigger, but I have a problem with checking availability: when I count the number of places left, the inserted/updated reservation is already in the DB, so I can never fill the last place. Of course I could change the trigger to 'INSTEAD OF UPDATE', but I don't know if that is a good solution. I have more constraints in this trigger, so I don't know if I should change them too. The additional constraints I have are:

  • check that someone has not already booked another holiday in the same period - cross-check the dates of all reservations made by a specific participant

  • check that the selected transport is available for the specific holiday - for each holiday (term) only specific transports are available, etc.

Here you can find part of my DB Diagram: http://s11.postimage.org/t7m32amrn/Scheme.jpg

I hope it will help everyone understand what I mean.

I would be very grateful for any help.
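A minimal sketch of the AFTER-trigger route, with hypothetical table and column names since the schema is only available as an image: because an AFTER trigger sees the new row already in the table, it can simply count the reservations and roll back when the holiday is oversold, which sidesteps the "row not yet inserted" accounting problem of the INSTEAD OF rewrite:

CREATE TRIGGER trg_Reservation_CheckPlaces
ON dbo.Reservation
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- hypothetical schema: Reservation(Holiday_Id, ...), Holiday(Holiday_Id, Max_Places)
    IF EXISTS (
        SELECT 1
        FROM dbo.Holiday AS h
        WHERE h.Holiday_Id IN (SELECT Holiday_Id FROM inserted)
          AND h.Max_Places < (SELECT COUNT(*)
                              FROM dbo.Reservation AS r
                              WHERE r.Holiday_Id = h.Holiday_Id)
    )
    BEGIN
        RAISERROR('No places left for this holiday.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;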

How to search whole MySQL database for a particular string

Posted: 25 Sep 2013 12:22 PM PDT

Is it possible to search all of a database's tables (every row and column) to find a particular string?

I have a database named A with about 35 tables. I need to search for the string "hello", and I don't know in which table this string is saved. Is that possible?

Using MySQL

I am a Linux admin and I am not familiar with databases; it would be really helpful if you could explain the query as well.
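Two common approaches, sketched here on the assumption that a one-off search is enough: from the shell, dump the database and grep it (mysqldump A | grep -i hello), or use information_schema to generate one LIKE probe per text column and then run the statements it prints:

-- generates one SELECT per text-ish column in database A;
-- run the statements it prints to find rows containing 'hello'
SELECT CONCAT(
    'SELECT ''', table_name, '.', column_name, ''' AS hit, COUNT(*) ',
    'FROM `', table_schema, '`.`', table_name, '` ',
    'WHERE `', column_name, '` LIKE ''%hello%'';'
) AS probe_query
FROM information_schema.columns
WHERE table_schema = 'A'
  AND data_type IN ('char', 'varchar', 'text', 'mediumtext', 'longtext');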

SQL Server connection failed for domain user from Mac OS machine

Posted: 25 Sep 2013 02:08 PM PDT

I have SQL Server 2008 R2 SP1. I am using a connection string to connect to SQL Server.

In the connection string I am passing the driver path, server name, TCP port number, the user name I want to connect with, the password, the DSN name, the TDS driver version number, and the DB name.

  • I am using the FreeTDS driver to connect from the Mac OS machine.
  • On the SQL Server side, Windows authentication mode is enabled.
  • When I try to connect with the user name and password of the user who installed SQL Server, or of a SQL Server admin, it works fine.

But if I try with a normal domain user, I get the SQL Server connection failed error.

After searching, I found that some authentication step is failing. Can anyone help me resolve this issue?

I believe some settings need to be changed, or some modification is needed in the connection string.

Please help me with it.

Thanks, Tausif.

Is it bad to assign specific processors to SQL Server 2008 R2 cluster instances?

Posted: 25 Sep 2013 08:14 PM PDT

I have a SQL Server 2008 R2 failover cluster with a passive node and an active node. The servers are physically identical. I have two instances of SQL Server, one set to use all the CPUs in the first 3 NUMA nodes, the other set to use all the CPUs in the 4th NUMA node.

This seems to be working okay, even when failover occurs, but is this a bad idea?

What would happen in failover if the passive server didn't have as many processors?

Trouble setting up Nagios to monitor Oracle services

Posted: 25 Sep 2013 03:23 PM PDT

I've got an install of Nagios XI that doesn't seem to want to talk to any of my Oracle services here. I've pulled out the monitoring command and am running it manually - after setting ORACLE_HOME and LD_LIBRARY_PATH, of course - but it keeps generating the following error:

/usr/local/nagios # libexec/check_oracle_health --connect "oracle-server:1551" --username user --password "pass" --name OFFDB1 --mode tablespace-can-allocate-next --warning 20 --critical 30
CRITICAL - cannot connect to oracle-server:1551. ORA-12504: TNS:listener was not given the SERVICE_NAME in CONNECT_DATA (DBD ERROR: OCIServerAttach)

I'm still fairly new to Oracle, but my googling seems to indicate that SERVICE_NAME should match the GLOBAL_DBNAME in listener.ora, which is OFFDB1. Do I need to do something else here, like modify the connect string?

As a note, there are multiple instances of Oracle sharing the target box, but each seems to be installed to a separate partition and runs its own listener on its own port.
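For context, ORA-12504 means the listener never received a service name, which fits a bare host:port connect string. Assuming the tool hands --connect through to the Oracle client (an assumption about check_oracle_health's plumbing), either the EZCONNECT form oracle-server:1551/OFFDB1 or a tnsnames.ora alias should supply it. A sketch of the latter, with values taken from the question:

OFFDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle-server)(PORT = 1551))
    (CONNECT_DATA =
      (SERVICE_NAME = OFFDB1)
    )
  )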

How to use the Tablediff Utility with Replication Filters

Posted: 25 Sep 2013 04:03 PM PDT

I am using merge replication in SQL 2012. I am trying out the tablediff utility to show non-convergence.

I see a problem with this approach, though, because I am using parameterized filters to filter the subscriptions.

In this instance the TableDiff utility tells me that there are missing rows in the subscription and generates the SQL to insert them. If I swap the source and destination around it generates a whole lot of delete statements to remove the records at the publication.

This is not a correct result. It should check only the rows that exist at the subscriber, or understand the filters and know which rows should be at the subscriber.

Can I use this utility in this situation? If not, is there an alternative? I find it surprising that none of the documentation about this utility mentions the problem.

SQL Server Database Mirroring : strange client application behavior upon failover

Posted: 25 Sep 2013 07:28 PM PDT

I am putting up an HA SQL Server environment based on three SQL Server 2008 R2 machines and database mirroring.

I'll name them:

  • principal.company.intra
  • mirror.company.intra
  • witness.company.intra

The "company.intra" domain is the holding company's domain.

Both database engines are listening on the static port 52002, so client applications access them the following way:

principal.company.intra,52002 & mirror.company.intra,52002  

The endpoints are called EP_Mirroring at the principal and the mirror, and EP_Witness at the witness; they listen on port 5022 at the principal, 5023 at the mirror and 5024 at the witness.

Service accounts are correctly configured and granted connect permissions on each other's endpoints.

The mirroring feature is working fine, and databases correctly fail over in case of a manual T-SQL failover or a simulated system failure.

The problem is with applications behaving strangely upon failover.

The application testing context is the following:

A small .NET app consisting of two textboxes and a button:

When the button is clicked, it makes a stored proc call and fills Textbox1 with the sp's output and Textbox2 with the Data source property of my SqlConnection.

The connection string looks like:

Data source=principal.company.intra,52002;failover partner=mirror.company.intra,52002;
initial catalog=TEST_FAILOVER;user ID=user;password=pass;Connection Timeout=30

I launched this app from my laptop, located in another domain: laptop.childcompany.com

Scenario 1:

  1. I launch the app
  2. Button pressed, TB1 : sp output / TB2 : principal.company.intra,52002
  3. Database Failover
  4. Button pressed, connection timeout
  5. Button pressed, connection timeout
  6. DB Failover at mirror (back to principal)
  7. Button pressed, TB1 : sp output / TB2 : principal.company.intra,52002

Scenario 2:

  1. Database Failover
  2. Launching the app
  3. Button pressed, TB1 : sp output / TB2 : mirror.company.intra,52002
  4. Database Failover at mirror (back to principal)
  5. Button pressed, connection error
  6. Button pressed, TB1 : sp output / TB2 : principal.company.intra,52002
  7. Database Failover
  8. Button pressed, connection timeout
  9. Button pressed, connection timeout

I then tried to launch the app on the mirror server, locally via RDP

Scenario 3

  1. Launching the app
  2. Button pressed, TB1 : sp output / TB2 : principal.company.intra,52002
  3. Database Failover
  4. Button pressed, connection error
  5. Button pressed, TB1 : sp output / TB2 : MIRROR

I checked MSDN for OLE DB connectivity behavior in case of failover, to understand why in the third scenario the Data Source property of my connection was set to MIRROR and not mirror.company.intra,52002.

I learned that the failover partner property of my connection string is only used when the initial connection to the principal fails (which explains why scenario 2 worked correctly), but that for an existing connection, the principal server has already provided the app with the mirror server address (in something called the "failover cache"). The provided information is wrong - no listening port, and a host name instead of an FQDN - and thus fails outside the holding company's domain.

Next step, I checked the sys.database_mirroring view :

select M.*
from sys.databases D
inner join sys.database_mirroring M on D.database_id = M.database_id
where D.name = 'TEST_FAILOVER'

And I noticed that the "mirroring_partner_instance" field contains "MIRROR" instead of "mirror.company.intra,52002".

A quick check of Books Online informed me that this is the field used to inform client apps of the failover partner upon initial connection to the principal DB engine.

So, finally, my question is:

Is there a way to correct that behavior, and make the "mirroring_partner_instance" field hold the FQDN of the mirror, with the listening port?

At endpoint creation? At mirroring configuration level (via an alter database statement)? DB Engine configuration?

I already successfully tried setting up a SQL Server Native client alias at client computers as a workaround, however I'd really prefer if the principal server returned the correct failover information to client applications.
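A hedged avenue to investigate: mirroring_partner_instance appears to reflect the mirror's own server name (its @@SERVERNAME) rather than anything set at endpoint or ALTER DATABASE level, so checking what the mirror reports about itself is a reasonable first step. A sketch - with the caveat that even a renamed server cannot carry the 52002 client port in mirroring metadata, so the alias may remain the practical fix:

-- run on the mirror:
SELECT @@SERVERNAME;

-- if it returns the short host name, the FQDN can be registered instead
-- (takes effect after a SQL Server service restart):
EXEC sp_dropserver 'MIRROR';
EXEC sp_addserver 'mirror.company.intra', local;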

What are the advantages of having a centralised SQL database server for development?

Posted: 25 Sep 2013 12:46 PM PDT

I have an idea to put to my fellow developers, and I basically wanted feedback.

The idea is to have a centralised SQL Server database server for all developers to use.

Each developer would have their own database on the server for the specific project (for example, SW_ProjectName).

Therefore, if a developer was off for a week and we needed to hit a deadline, we could easily hook into their database by changing our local config connection string.

In addition to this, there would be a test database (for example, ProjectName_Testing) which the developers would use to test their sprint before deploying to staging.

Can anybody think of any advantages and disadvantages of this approach, compared with everybody having their own local databases for development?

SQL Server Rebuild/Reorganize Fragmented Indexes causes CRC error or consistency errors

Posted: 25 Sep 2013 07:50 PM PDT

In our test lab I've been experimenting with different jobs to keep our critical indexes from becoming too fragmented.

I'm currently using the approach described here: sys.dm_db_index_physical_stats (under Examples -> section D: Using sys.dm_db_index_physical_stats in a script to rebuild or reorganize indexes).

Basically, every hour I query the sys.dm_db_index_physical_stats dynamic management view; if an index is between 5% and 30% fragmented I reorganize it, and if it is more than 30% fragmented I rebuild it. It seems to work fine during most of our testing; however, twice I've run into a problem where the scheduled job fails with an error:

The operating system returned error 23(Data error (cyclic redundancy check).) to SQL Server during a read at offset 0x00000eae3b2000 in file 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\database.mdf'.
Additional messages in the SQL Server error log and system event log may provide more detail.
This is a severe system-level error condition that threatens database integrity and must be corrected immediately.
Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online. [SQLSTATE HY000] (Error 823)

When I run DBCC CHECKDB, a problem is reported in one of my indexes, and the only way I'm able to fix it is by using

DBCC CHECKDB ('dbname', REPAIR_ALLOW_DATA_LOSS)  

I'm not positive, but I suspect this error is caused by rebuilding or reorganizing indexes while my load tests are running in the test lab.

I've searched around and found nobody else reporting this consistency error related to rebuilding indexes. You can see more information about my problem on my blog post: SQL Server Index Corruption: CRC Error & DBCC CHECKDB

Is my approach to tuning indexes flawed? Should I not be trying to rebuild indexes on a "live" database (while traffic is hitting it)? Should I be using SQL Server Enterprise Edition's ability to rebuild an index WITH (ONLINE = ON)? Any help is appreciated.

I'm running SQL Server 2008 R2 Standard.
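For reference, a condensed sketch of the hourly decision query described above (the thresholds are the ones from the question; the Books Online version also skips very small indexes):

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       CASE
           WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
           ELSE 'REORGANIZE'
       END AS recommended_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5
  AND ips.index_id > 0;   -- skip heaps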

some SQL Server MSX targets show as Normal, Blocked

Posted: 25 Sep 2013 08:17 PM PDT

I've centralized my farm's jobs in MSX, covering about 70 instances of SQL Server. Seemingly at random, at least once a week, several instances stop accepting instructions. Within the MSX "Manage Target Servers" window, instead of all instances showing as Normal, they show as Normal, Blocked. I have failed to figure out why this happens. Defecting and re-enlisting usually cures this ill, but that means having to re-add the target to any appropriate jobs, so I would rather resolve the cause than endure the workaround.

Any advice? I would like to keep using SQL Server's tools for multi-server administration, but am open to other avenues.
