Friday, April 19, 2013

[how to] Relational vs Non-Relational Database for Events Database

Relational vs Non-Relational Database for Events Database

Posted: 19 Apr 2013 04:43 PM PDT

I'm trying to find out whether a SQL or NoSQL solution would be better for creating an events database. I'm building a ticketing system, similar to Ticketmaster. I know that for either database type, storage is the simple part. The deciding factor is the performance of the following queries:

  • Select events by location within a specific date range.
  • Select all events in a given location (city, town, etc.), sort by date.
  • Search for events by keyword, within a specific date range, within a specific location.

Events basically have ID, NAME, LOCATION, VENUE, START DATE, END DATE

In a relational schema I would have an EVENTS table, a DATES table for storing dates separately (because events can occur on more than one date and may repeat), and a VENUES table from which the event location (country, city, etc.) can be cross-referenced.
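As a sketch of the relational variant described above (all table and column names below are made up for illustration), the schema and the first query might look like:

```sql
-- Hypothetical schema: venues carry the location, dates live in their own table
CREATE TABLE venues (
  id      INT PRIMARY KEY,
  name    VARCHAR(255),
  city    VARCHAR(100),
  country VARCHAR(100)
);

CREATE TABLE events (
  id       INT PRIMARY KEY,
  name     VARCHAR(255),
  venue_id INT REFERENCES venues(id)
);

CREATE TABLE event_dates (
  event_id   INT REFERENCES events(id),
  start_date DATE,
  end_date   DATE
);

-- "Select events by location within a specific date range":
SELECT e.*
  FROM events e
  JOIN venues v      ON v.id = e.venue_id
  JOIN event_dates d ON d.event_id = e.id
 WHERE v.city = 'Berlin'                               -- example location
   AND d.start_date BETWEEN '2013-05-01' AND '2013-05-31';
```

With indexes on (city) and (event_id, start_date), all three queries in the list reduce to straightforward index-backed joins.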

I have no experience with NoSQL databases, so if you vote for NoSQL, please suggest how you see the "schema" being organized and which particular DB.

I hope this question is specific enough. Query performance is the deciding factor.

Accidentally Deleted SQL Server Log Reader Service Account

Posted: 19 Apr 2013 05:07 PM PDT

Uh, I accidentally deleted the security login on my SQL Server that allows for the reading of log files.

Now I can no longer see any of the log files in the SQL Server Logs node of SSMS :(

What account is it so that I can add back the login?

I think it started with "BUILT IN" or something.

PostgreSQL and default Schemas

Posted: 19 Apr 2013 04:11 PM PDT

Whenever I create a brand new database in PostgreSQL Maestro it creates the following list of default schemas:

(screenshot of the default schema list omitted)

Now, from my understanding, schemas are like folders for organization, etc. So my question is: are all these schemas needed when I create a new DB? If so, what are they used for on the PG side, as I won't ever use them myself?

I can understand information_schema, as this is the default for a MySQL server install, though I don't get why each database would need its own as opposed to one for the entire server, but to each DB type his own, I guess.
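If it helps, you can check for yourself which of those schemas actually contain objects with a quick query against the standard PostgreSQL system catalogs:

```sql
-- Count the objects (tables, indexes, sequences, etc.) living in each schema:
SELECT n.nspname AS schema, COUNT(c.oid) AS objects
FROM pg_catalog.pg_namespace n
LEFT JOIN pg_catalog.pg_class c ON c.relnamespace = n.oid
GROUP BY n.nspname
ORDER BY n.nspname;
```

pg_catalog and information_schema are created by PostgreSQL itself in every database; any extras in that screenshot are likely created by the client tool (PostgreSQL Maestro) rather than by the server.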

SQL web-application connection string for a member of a domain group

Posted: 19 Apr 2013 05:09 PM PDT

In my testing environment, I have a local sql express (2008R) server instance. I've created a domain group and given it a login in the database.

But now, how do I construct the connection string for my web application? Do I give it the username and password of one of the members of the group?

And, what do I put for a "user" in the Database security section? The group itself doesn't have a password.

Do I have to add the service account from the application pool to the AD group?

Here is what I have for a connection string; but this allows all connections, it seems:

<add name="development"
     connectionString="server=myserver\sqlexpress; database=mydatabase;
                       Persist Security Info=True; Integrated Security=SSPI;
                       Trusted_Connection=True; Pooling=True"
     providerName="System.Data.SqlClient" />

Accidentally turned down SQL memory too low to log in, how do I fix?

Posted: 19 Apr 2013 02:09 PM PDT

I turned down the Maximum Memory of my SQL Server instance to 15 MB... Now I can't log in to increase it. How can I increase the maximum memory without logging in?

Version is SQL 2008 R2
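One recovery path (a sketch, not from the original post; the value below is just an example) is to start the instance in minimal-configuration mode with the -f startup flag, connect through the Dedicated Administrator Connection (admin:ServerName) using sqlcmd or SSMS, and raise the setting:

```sql
-- Run after connecting via the DAC to an instance started with -f:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;  -- example value, in MB
RECONFIGURE;
```

Afterwards, remove the -f flag and restart the service normally.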

Unable to import exported Wordpress DB - #2006 - MySQL server has gone away

Posted: 19 Apr 2013 01:08 PM PDT

I'm having a nightmare trying to clone a live Wordpress site into a local dev environment using MAMP Pro and BackupBuddy.

I have no problem with setup. The problem is that there seems to be some issue within my database that prevents import.

Backupbuddy fails at the database import stage.

If I switch and try to import manually, via MAMP PRO's phpMyAdmin, I get the following:

#2006 - MySQL server has gone away

The import always stops at the same place: with my wp_options table.

Experimenting, I was able to import the full database when the wp_options table was NOT included. Thereafter, trying to import only the wp_options table leads to the same error.

The wp_options table is only 3 MB. The whole database is 13 MB.

I followed suggestions online relating to increasing some basic configuration values. In MAMP PRO's startMysql.sh (MAMP.app/bin/startMysql.sh):

max_allowed_packet=100M

I also added the following to my php.ini (MAMP PRO.app --> File menu --> Edit Template --> PHP ..)

max_allowed_packet = 100M
wait_timeout = 6000

Memory limit (in php.ini) is 256MB, max input time is 600, and max execution time is 600.

These values should be more than enough to prevent the "#2006 - MySQL server has gone away" error, but I'm still getting it.
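Since edits to startMysql.sh or php.ini may not reach the MySQL server at all, it is worth confirming the value the server is actually running with, and overriding it at runtime if needed (a sketch; SET GLOBAL requires the SUPER privilege and takes effect for new connections):

```sql
-- Verify the value the server is actually using:
SHOW VARIABLES LIKE 'max_allowed_packet';

-- Override at runtime without a restart (100 MB here):
SET GLOBAL max_allowed_packet = 104857600;
```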

Further, I have tried importing the database as XML and as CSV. So far, I simply cannot get a working copy of this database going in my localhost environment.

I even wanted to simply export the actual database files and copy them across, but then I found that WordPress's tables live in .frm, .MYD, and .MYI files, while my export is a .sql file that phpMyAdmin would presumably convert and segment out; but then the import fails.

Does anyone have any suggestions?

Reasons for incremental backup on Progress OpenEdge

Posted: 19 Apr 2013 02:00 PM PDT

In the case that storage is not a problem: are there actually any good reasons for doing incremental backups instead of just doing full backups?

Edit
This could actually refer to any database with full & incremental backups. In this case we use a Progress OpenEdge RDBMS with support for both backup plans as well as real time transactional log archiving. Apart from that I don't think the question must relate to a single vendor. The choice of full/incremental backup can apply to lots of different database engines.

Why use separate databases for high traffic/volume tables?

Posted: 19 Apr 2013 01:27 PM PDT

While looking at the database structure of an application I am using I recognized that it uses 3 different databases on the same SQL Server instance for different things.

The first one contains the main data that changes rarely.

The second one contains event logs with high traffic and volume and the last one is an archive database for old event logs.

I was wondering what the benefit of this structure might be, because the databases are running on the same instance and the database files are located on the same disk. Therefore I would not expect any performance improvements from this.

I thought, maybe I am overlooking something and somebody can point me to benefits that I did not think of.

Update:
Some good points were made regarding maintenance and security. But I am still wondering if it is possible to get a performance improvement.

A more general question would be: can the performance of one table suffer from other large tables in the same database (due to fragmentation or for some other reason), or are those effects probably negligible?

Clustered vs Nonclustered Index

Posted: 19 Apr 2013 02:18 PM PDT

My database currently has a primary key / clustered index on the ID column of each table. However, the application that connects to the database always queries views defined as:

SELECT * FROM Table WHERE DeletedDate IS NULL  

Would it be best to add the DeletedDate to the clustered index or is it better practice to have a nonclustered index with the DeletedDate as an include?
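The options being weighed could be sketched like this (the table name comes from the question; index names and the included column are made up):

```sql
-- Option 1: a nonclustered index keyed on DeletedDate, with an INCLUDE:
CREATE NONCLUSTERED INDEX IX_Table_DeletedDate
    ON dbo.[Table] (DeletedDate)
    INCLUDE (SomeColumn);  -- hypothetical included column

-- A third possibility, if the server is SQL Server 2008 or later:
-- a filtered index that matches the view's predicate exactly.
CREATE NONCLUSTERED INDEX IX_Table_NotDeleted
    ON dbo.[Table] (ID)
    WHERE DeletedDate IS NULL;
```

Widening the clustered index itself (ID, DeletedDate) is rarely attractive, since every nonclustered index would then carry the extra key column.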

IIS takes longer to run queries than SSMS

Posted: 19 Apr 2013 12:23 PM PDT

I'm completely stumped on this one. We're running a pretty usual intranet app here, classic ASP frontend and SQL Server 2005 backend. Occasionally, once or twice a week, a certain stored proc takes ~50x longer to run from the app than when I run it in Management Studio. This slowness lasts the entire day and inexplicably goes away the next day.

So essentially what's happening is this: once or twice a week, when I fire up the intranet page that runs the stored procedure and displays the results, it takes roughly 115 seconds to run. I then fire up SQL Server Management Studio and run the same query, which this time around takes only 3-4 seconds.

The next day, the intranet page is taking as long as SSMS to run the very same stored procedure, with nothing having been changed in the interim (not to my knowledge anyway).

Other details:

  • the IIS connects to SQL Server 2005 using Provider=sqloledb as the driver
  • the slowdown lasts the entire day, regardless of the number of active users
  • I can replicate the slowness by running the same query from Excel, using the same driver/connection string
  • I ran a trace on the SQL server, and it turns out that on those particularly slow days, the duration of the stored proc really is longer when executed via IIS than as a direct query

As such, I was wondering whether any of you have encountered similar behaviour before, whether you've found a solution to it, and what your advice would be in tackling/solving it?

Thanks for your help,

Please find Trace results in .csv format below:

EventClass,TextData,ApplicationName,CPU,Reads,Writes,Duration (s),ClientProcessID,SPID,StartTime,EndTime
12,exec spStubbornProc,Internet Information Services,"106,016","7,059,999",1,115.80,5476,85,18/04/2013 08:17:15,18/04/2013 08:19:11
12,exec dbo.spStubbornProc,Microsoft SQL Server Management Studio - Query,"3,141","146,051",0,3.40,5876,84,18/04/2013 08:20:45,18/04/2013 08:20:48
12,exec dbo.spStubbornProc,Microsoft SQL Server Management Studio - Query,"2,563","147,387",0,2.93,5876,84,18/04/2013 08:21:04,18/04/2013 08:21:07
12,exec spStubbornProc,Internet Information Services,"103,156","7,083,365",7,118.73,5476,80,18/04/2013 09:39:41,18/04/2013 09:41:40
12,exec dbo.spStubbornProc,Microsoft SQL Server Management Studio - Query,"2,406","175,029",1,2.57,5876,84,18/04/2013 10:08:58,18/04/2013 10:09:01
12,exec spStubbornProc,Internet Information Services,"112,218","7,103,267",7,114.72,5476,75,18/04/2013 14:06:26,18/04/2013 14:08:21
12,exec spStubbornProc,Internet Information Services,"93,515","7,101,229",3,93.60,6092,60,18/04/2013 19:37:02,18/04/2013 19:38:35
12,exec spStubbornProc,Internet Information Services,"2,500","148,775",1,3.42,5160,75,19/04/2013 09:16:01,19/04/2013 09:16:04

Access 2007 - Splitting single record into component parts similar to Microsoft Money's split transaction

Posted: 19 Apr 2013 02:57 PM PDT

I currently import a bank statement into Access 2007 and would like to be able to categorise each record into its component parts, similar to the way that Microsoft Money splits a transaction. I have a Microsoft Money screen grab showing a payment to Amazon split into 3 component categories; however, I can't post it as I do not have 10 reputation.

I hope therefore that someone may have used this product to know what I am referring to.

If so, is there a way to achieve this using tables/queries?

Or, can it be done programmatically?

Or, is this just a step too far for someone with my limited Access knowledge?

Tool to export data with all relational data?

Posted: 19 Apr 2013 02:11 PM PDT

Is there a tool to export data from selected rows in a table together with all related data stored in other tables linked by relational design?

The purpose is to ease ad hoc migrations of bits of data between servers. I am looking specifically for a tool for MySQL InnoDB with defined foreign keys.

Limiting user access to tables based on a ROLE

Posted: 19 Apr 2013 05:20 PM PDT

I have a SQL Server 2005 database to which I would like to LIMIT access. I have done the following:

  • Created a database role called NO_HR
  • Added the tables to this role under Securables that should be blocked
  • Set all permissions to DENY for these tables
  • Assigned my users this role under Database Users -> Role Members

I would expect the user to have whatever public access is allowed to the database but be denied access to the tables as defined in the role.

Problem is, users can access all tables as if the rules in the role are not taken into account. My question: How do I block access to tables based on role membership?
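For reference, the setup described above would look something like this in T-SQL (the table and user names are placeholders):

```sql
-- Create the role and deny access to the sensitive tables:
CREATE ROLE NO_HR;
DENY SELECT, INSERT, UPDATE, DELETE ON dbo.Salaries TO NO_HR;  -- placeholder table

-- Add a database user to the role:
EXEC sp_addrolemember @rolename = 'NO_HR', @membername = 'DOMAIN\SomeUser';
```

Note that permission checks, including DENY, are bypassed entirely for logins in the sysadmin server role, which is a common reason a DENY appears not to work.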

A few facts:

  • SQL Server 2005 Standard
  • Windows 2003 R2 Enterprise
  • Users are DOMAIN logins

Any help would be appreciated.

SG

SQL Server to compress the backup file from procedure

Posted: 19 Apr 2013 02:56 PM PDT

Is it possible to compress a backup file to .zip (or another) in a stored procedure invoked from a maintenance plan?

My maintenance plan contains these steps:

  • check DB (consistency, re_index, maybe not important)

  • full backup

  • check the backup file using RESTORE VERIFYONLY

  • shrink database

After the RESTORE VERIFYONLY / shrink database steps I want to compress this file.
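Not part of the original plan, but worth noting: if the goal is simply a smaller file, SQL Server 2008+ can compress natively during the backup itself, with no external zip step (a sketch; database name and path are examples, and WITH COMPRESSION requires 2008 Enterprise or 2008 R2 Standard and later):

```sql
BACKUP DATABASE MyDb
TO DISK = N'D:\Backups\MyDb.bak'
WITH COMPRESSION, CHECKSUM;

RESTORE VERIFYONLY FROM DISK = N'D:\Backups\MyDb.bak';
```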

Server side trace to database

Posted: 19 Apr 2013 05:23 PM PDT

How can I set up a server side trace that dumps to a database instead of a trace file?

I set up and tested the trace configuration I want with SQL Server Profiler; it dumps only the needed info straight to a separate database (without a .trc file).
When I export this trace as a SQL statement, however, I see it is configured to save to a .trc file instead of a database.

Either my googling-fu is letting me down or nobody is interested in this.
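For background: server-side traces can only write to files (it is Profiler that streams rows into a table), so a common workaround is to trace to a file and periodically load it into a table. A sketch, with the path and table names assumed:

```sql
-- Load a server-side trace file into a table for querying:
SELECT *
INTO TraceDb.dbo.TraceData            -- destination table, created on the fly
FROM sys.fn_trace_gettable(N'C:\Traces\mytrace.trc', DEFAULT);
```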

Proper way to export database to other servers

Posted: 19 Apr 2013 05:24 PM PDT

I am using SSMS (I am a software developer, not a DBA), and with each application release I need to release the database with its tables (some with data, some with schema only).

I tried doing 3 things (SSMS):

  1. Using Database --> Tasks --> Generate Scripts... This is fine for small databases, but when the script is over 200 MB, users report memory problems, both when running sqlcmd.exe and from SSMS. Moreover, I have to remember to script triggers, and I am never sure whether the scripts generate an identical database.
  2. I tried using the Back Up option, but I guess this option is intended for backup rather than exporting data; it prevents me from exporting only the schema, for example. Backups also keep histories for differential restores, which is not what I intend.

  3. I tried Script Database as... CREATE To..., but this doesn't keep triggers or data. Moreover, this script uses hardcoded paths to the .mdf and .ldf files, which change from server to server.

What is the correct approach to replicate a database to another server?
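For completeness, backup/restore is usually the most faithful way to copy a complete database, and the hardcoded-path problem is handled with WITH MOVE (a sketch; database name, logical file names, and paths are examples):

```sql
-- On the source server:
BACKUP DATABASE MyDb TO DISK = N'C:\Temp\MyDb.bak';

-- On the target server, relocating the files as needed:
RESTORE DATABASE MyDb
FROM DISK = N'C:\Temp\MyDb.bak'
WITH MOVE 'MyDb'     TO N'D:\Data\MyDb.mdf',      -- logical data file name
     MOVE 'MyDb_log' TO N'D:\Data\MyDb_log.ldf';  -- logical log file name
```

The logical file names can be read from the backup itself with RESTORE FILELISTONLY.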

P.S. I am new to this Stack Exchange site; I read several questions but didn't find an answer.

Migrating from SQL Server 2005 to SQL Server 2008 R2 on a different box

Posted: 19 Apr 2013 05:24 PM PDT

We currently have a SQL Server 2005 instance hosting a lot of application-specific databases (ArchiveManager, SolarWinds, Kaspersky, etc.). We are planning to deploy a new SQL Server 2008 R2 instance on a new server and slowly migrate across to this instance. All of the applications have fairly straightforward migration plans for changing the database location, normally revolving around stopping services, detaching, attaching, and rerunning installers.

All looks good so far. My only concern is with SQL Server Agent jobs and logins. Initially I was under the impression that these were all stored in the system databases and that I could move those. I understand now that this is not the case. Microsoft KB 918992 suggests running a script to generate a stored procedure, then calling it and running the generated script on the new instance. There seem to be a lot of caveats to it, though. Another option would be to use SSIS to build "Transfer Logins" and "Transfer Jobs" packages and run those (both editions are Standard), but I am unversed in SSIS so am a touch scared to use that.

I suppose my question boils down to this:

  • Is the Microsoft KB the best way to migrate logins to the new SQL instance?
  • Is the only way to migrate tasks to the new instance via SSIS?

I am trying to get it as clear in my head as possible for my planning documentation so am very appreciative of any replies/guidance.
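The KB 918992 approach boils down to something like this (a sketch; the procedure itself is created by the script in the KB):

```sql
-- On the old (2005) instance, after installing the KB 918992 procedures:
EXEC sp_help_revlogin;
-- Copy the generated CREATE LOGIN statements, which preserve SIDs and
-- password hashes, and run them on the new 2008 R2 instance.
```

Because the SIDs are preserved, database users in restored/attached databases map back to the migrated logins without orphaning.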

Replication on MySQL server

Posted: 19 Apr 2013 01:34 PM PDT

I had to stop the slave server to test something. After I started the server again, there is a problem with replication on the MySQL server.

On the problematic server

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Connecting to master
                  Master_Host: servera
                  Master_User: replica
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: servera-bin.000024
          Read_Master_Log_Pos: 808459481
               Relay_Log_File: serverb-relay-bin.000071
                Relay_Log_Pos: 4
        Relay_Master_Log_File: servera-bin.000024
             Slave_IO_Running: No
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 808459481
              Relay_Log_Space: 106
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 1129
                Last_IO_Error: error connecting to master 'replica@servera:3306' - retry-time: 60  retries: 86400
               Last_SQL_Errno: 0
               Last_SQL_Error:

on the problematic server:

SELECT user, host FROM mysql.user WHERE Repl_slave_priv = 'Y';
+---------+-----------+
| user    | host      |
+---------+-----------+
| root    | localhost |
| root    | serverb   |
| root    | 127.0.0.1 |
| replica | servera   |
| replica | serverb   |
+---------+-----------+

on the main server:

SELECT user, host FROM mysql.user WHERE Repl_slave_priv = 'Y';
+---------+-----------+
| user    | host      |
+---------+-----------+
| root    | localhost |
| root    | servera   |
| root    | 127.0.0.1 |
| replica | servera   |
| replica | serverb   |
+---------+-----------+

According to what I've read, I need to execute the following command on the main server:

mysql> FLUSH HOSTS;   

What will happen then? If there are applications connected to it, will it disconnect them as well?

Query taking a long time to execute; we expect results quickly. Can anybody rewrite it?

Posted: 19 Apr 2013 03:02 PM PDT

The query is taking a long time to execute; we expect results almost instantly. If possible, please help me rewrite it. Also suggest any indexes to add to the tables, and whether further optimization is needed. If possible, explain the root cause.

Query And Explain Plan:

mysql> explain SELECT * FROM (`xgen_studentMLs`)
    LEFT JOIN `xgen_quizzes`
           ON xgen_studentMLs.quizid = xgen_quizzes.idnum
    WHERE `student` = 27126
      AND xgen_studentMLs.topic = 1829
      AND xgen_studentMLs.metatype = 'topic'
      AND (xgen_studentMLs.department='Nursing' OR xgen_studentMLs.department='Biology')
      AND (xgen_quizzes.book IS NULL OR xgen_quizzes.book=0)
    ORDER BY xgen_studentMLs.timestamp DESC LIMIT 100;

+----+-------------+-----------------+--------+---------------+---------+---------+------------------------------+------+-----------------------------+
| id | select_type | table           | type   | possible_keys | key     | key_len | ref                          | rows | Extra                       |
+----+-------------+-----------------+--------+---------------+---------+---------+------------------------------+------+-----------------------------+
|  1 | SIMPLE      | xgen_studentMLs | range  | student,mult  | mult    | 265     | NULL                         |   18 | Using where; Using filesort |
|  1 | SIMPLE      | xgen_quizzes    | eq_ref | PRIMARY       | PRIMARY | 4       | prepu.xgen_studentMLs.quizid |    1 | Using where                 |
+----+-------------+-----------------+--------+---------------+---------+---------+------------------------------+------+-----------------------------+

Table structure:

mysql> show create table xgen_quizzes\G
*************************** 1. row ***************************
       Table: xgen_quizzes
Create Table: CREATE TABLE `xgen_quizzes` (
  `idnum` int(11) NOT NULL AUTO_INCREMENT,
  `userid` int(11) NOT NULL DEFAULT '0',
  `timestarted` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `timefinished` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `questionlist` mediumtext NOT NULL,
  `topics` mediumtext NOT NULL,
  `totalnum` int(11) NOT NULL DEFAULT '0',
  `completednum` int(11) NOT NULL DEFAULT '0',
  `assignment` int(11) NOT NULL DEFAULT '0',
  `department` varchar(255) NOT NULL DEFAULT '',
  `book` int(11) NOT NULL DEFAULT '0',
  `cqs` mediumtext NOT NULL,
  `metatype` varchar(25) DEFAULT 'topic',
  PRIMARY KEY (`idnum`),
  KEY `userid` (`userid`),
  KEY `assignment` (`assignment`)
) ENGINE=InnoDB AUTO_INCREMENT=13547573 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

mysql> show create table xgen_studentMLs\G
*************************** 1. row ***************************
       Table: xgen_studentMLs
Create Table: CREATE TABLE `xgen_studentMLs` (
  `student` int(11) NOT NULL,
  `topic` int(11) NOT NULL,
  `ML` float NOT NULL,
  `MI` int(3) NOT NULL DEFAULT '0',
  `timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `department` varchar(255) NOT NULL,
  `metatype` varchar(25) DEFAULT 'topic',
  `quizid` int(11) NOT NULL DEFAULT '0',
  KEY `student` (`student`),
  KEY `mult` (`topic`,`department`,`student`,`timestamp`,`ML`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
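For context, the "Using filesort" in the plan comes from ordering by timestamp while the chosen `mult` index is ordered by (topic, department, student, ...). One hypothetical index to experiment with (not from the original post) puts the equality columns first and timestamp last, so the ORDER BY can read the index in order:

```sql
-- Hypothetical composite index; department is filtered afterwards because
-- the OR condition prevents it from being used as an index prefix here:
ALTER TABLE xgen_studentMLs
  ADD KEY idx_student_topic_ts (student, topic, metatype, timestamp);
```

Whether this beats the existing `mult` index would need to be verified with EXPLAIN on real data.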

Merge Replication identity field issues

Posted: 19 Apr 2013 12:21 PM PDT

One of our clients is using our software with merge replication of a database on a SQL Server 2008 R2 machine. There are two production environments in separate geographical locations, only one of which is live at any one time: basically one live and one on standby. Only the live database is updated by the applications. Every couple of months they fail over between the datacentres and the standby environment becomes the live one. There is an instance of SQL Server 2008 in each datacentre, and merge replication is used to keep them in sync. This was all working OK until the beginning of the year, when we started getting replication errors on some lTID columns in various tables that have the Identity property set.

The errors were like this one:

The insert failed. It conflicted with an identity range check constraint in database 'GateMain', replicated table 'dbo.tGateCalcsLog', column 'lTID'. If the identity column is automatically managed by replication, update the range as follows: for the Publisher, execute sp_adjustpublisheridentityrange; for the Subscriber, run the Distribution Agent or the Merge Agent.

Then, after the last failover, we noticed an issue with the lTID values in one specific table. Our application relies on the lTID values always incrementing in order, such that the highest lTID value is always the newest entry in the table. We've found that, due to how the identity ranges are managed by replication, after a failover the lTID range of the now-live database server may contain values lower than those already present in the table. Is there a way to manage this in merge replication so we can guarantee that the next identity value allocated to the lTID column is greater than any lTID currently in the table? Or do we need to use a different type of replication, or possibly mirroring?
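As the error text itself suggests, the publisher-side range can be refreshed manually; a sketch (the table name is taken from the error message):

```sql
-- Run at the publisher to allocate a fresh identity range for the article:
EXEC sys.sp_adjustpublisheridentityrange @table_name = N'tGateCalcsLog';
```

Note, though, that merge replication allocates identity ranges per node and does not guarantee monotonic values across failovers, so a rowversion or datetime column is generally a safer "newest row" marker than an identity.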

MySQL Slaves lag behind master

Posted: 19 Apr 2013 02:02 PM PDT

I have one master and four slaves. Sometimes all my slaves lag behind the master. I have implemented the heartbeat for monitoring replication lag. Now I am trying to find why the slaves are lagging behind the master.

I looked at the slow queries (for today) on the master and found that the slowest DML query took 138 seconds. But the slaves were lagging by about 1400 seconds, and there were no slow DML queries (update, delete, insert, etc.) on the slaves at all.

Points to be taken into consideration:

  1. All tables are InnoDB.
  2. 68 GB of RAM (Master as well as slaves).
  3. Data size about 1 TB.
  4. Master and slaves have been running for a long time.

What may be the reason for lag?

MySQL replication between VPS and shared host

Posted: 19 Apr 2013 01:02 PM PDT

I have a VPS where I have ssh access and shared host with CPanel. Both have MySQL.

I need to create a master/slave setup between the two servers. As per this tutorial, How To Set Up Database Replication In MySQL, I will need to modify the my.cnf file.

Is it possible to achieve this with the following permissions on both the servers?

  • VPS

    I have root access with ssh login, so I can modify the /etc/my.cnf file.

  • Shared host

    Cpanel with phpmyadmin.

Partition of tablespace

Posted: 19 Apr 2013 08:02 PM PDT

My Postgres server runs on a Windows server, and three tablespaces have been created. The PostgreSQL server is installed on the C: drive and the tablespaces are on the D: and E: drives. How do I take a backup and restore it? Is it similar to a normal backup? And when restoring on another machine, do I need the same drive setup, or can I restore everything to a single drive? Also, from the tablespace folder, is it possible to retrieve the data of only one table, which is huge?

SQL Server equivalent to functionality of Oracle RAC?

Posted: 19 Apr 2013 12:55 PM PDT

I did some Googling and couldn't find an answer to this question more recent than a few years ago, so I thought I'd ask. Oracle's RAC feature offers load-balancing for both read and write transactions, as well as scale-out and high-availability without downtime (at least, as I understand it - we're about to deploy our first databases that use RAC, so we'll see how it goes).

Is there any SQL Server feature set (or third party component you could install on top) that delivers equivalent functionality? We've always used Windows clustering, where a failover event causes about 20-30 seconds of SQL downtime - always tolerable, but not ideal. Now, with AlwaysOn in SQL 2012, SQL Server shrinks that to about 15 seconds and adds the concept of read-only-secondary databases, but they still require that write transactions are choked through a single connection point (much improved, since many transactions are just read, but still not really load balancing), and in the case of a node failure or the need to patch, there's still downtime.

I suppose it's just more curiosity - I feel like this is the only area that SQL Server falls behind Oracle (at least among the features I've personally seen used). I wanted to see if there are any options out there to close that gap and possibly improve our own SQL Server deployment while we wait for Microsoft's equivalent feature to be added - maybe in SQL 2014/2015?

Call a stored procedure from a trigger

Posted: 19 Apr 2013 04:56 PM PDT

I have created a stored procedure in mysql using the following syntax.

DROP PROCEDURE IF EXISTS `sp-set_comment_count`;

DELIMITER $$

CREATE PROCEDURE `sp_set-comment_count` (IN _id INT)
BEGIN
   -- AC - AllCount
   DECLARE AC INT DEFAULT 0;

   SELECT COUNT(*) AS ac
     INTO AC
     FROM usergroups AS ug
LEFT JOIN usergroup_comments AS ugm ON ugm.`gid` = ug.`id`
LEFT JOIN mediagallery AS dm ON ugm.mid = dm.`id`
    WHERE dm.`status` NOT IN (200, 201, 202, 203, 204, 205)
      AND ug.`id` = _id;

   UPDATE usergroups
      SET allCount = AC,
    WHERE usergroups.`id` = _id;

END $$
DELIMITER ;

FYI I've greatly simplified the stored procedure but I do know it works without any issues.

What I'd like to be able to do is set up a trigger from usergroup_comments that works like this.

DROP TRIGGER IF EXISTS `usergroups_comments_insert`

CREATE TRIGGER `usergroups_comments_insert` AFTER INSERT ON `usergroups_comment`
    FOR EACH ROW
    BEGIN
       CALL sp-set-comment_count(NEW.`gid`);
    END;

But for some reason, every time I do, MySQL throws a less-than-helpful error at me, stating that there's a syntax error on line 4.

I've combed through the mysql documentation and found some information on restrictions of triggers but found it to be fairly convoluted.

http://dev.mysql.com/doc/refman/5.1/en/stored-program-restrictions.html

Any ideas would be helpful.

How to safely change MySQL innodb variable 'innodb_log_file_size'?

Posted: 19 Apr 2013 01:16 PM PDT

So I'm fairly new to tuning InnoDB. I'm slowly changing tables (where necessary) from MyISAM to InnoDB. I've got about 100 MB in InnoDB, so I increased the innodb_buffer_pool_size variable to 128 MB:

mysql> show variables like 'innodb_buffer%';
+-------------------------+-----------+
| Variable_name           | Value     |
+-------------------------+-----------+
| innodb_buffer_pool_size | 134217728 |
+-------------------------+-----------+
1 row in set (0.00 sec)

I then went to change the innodb_log_file_size value, since the example my.cnf on MySQL's InnoDB configuration page recommends setting the log file size to 25% of the buffer pool size. So now my my.cnf looks like this:

# innodb
innodb_buffer_pool_size = 128M
innodb_log_file_size = 32M

When I restart the server, I get this error:

110216 9:48:41 InnoDB: Initializing buffer pool, size = 128.0M
110216 9:48:41 InnoDB: Completed initialization of buffer pool
InnoDB: Error: log file ./ib_logfile0 is of different size 0 5242880 bytes
InnoDB: than specified in the .cnf file 0 33554432 bytes!
110216 9:48:41 [ERROR] Plugin 'InnoDB' init function returned error.
110216 9:48:41 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.

So my question: Is it safe to delete the old log_files, or is there another method to change the innodb_log_file_size variable?
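For what it's worth, the commonly recommended sequence (a sketch, for the MySQL versions that cannot resize the redo logs automatically) is a clean shutdown followed by moving the old files aside, rather than deleting them outright:

```sql
-- 1. Ask InnoDB for a slow, fully-flushed shutdown:
SET GLOBAL innodb_fast_shutdown = 0;
-- 2. Stop mysqld, then (at the OS level) move ib_logfile0 and ib_logfile1
--    out of the data directory instead of deleting them, so they can be
--    restored if anything goes wrong.
-- 3. Update innodb_log_file_size in my.cnf and start mysqld; new log files
--    of the configured size are created automatically.
```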

How do you document your databases?

Posted: 19 Apr 2013 02:52 PM PDT

I find that most of my clients are not documenting their databases at all, and I find that pretty scary. To introduce some better practice, I would like to know what tools/processes you are using.

  • How do you document your database? (SQL-Server)
  • What tool do you use?
  • Documentation Storage Format for database schema/meta-data?
    • Word documents
    • Excel spreadsheet
    • Plain Text
  • Documentation process or policies?

I am not talking about reverse engineering or documenting an existing database, but mainly about documentation best practices while you develop your system/database.
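One low-friction practice worth mentioning (a sketch; the object names are examples) is to keep descriptions inside the database itself via extended properties, so they travel with the schema and can be extracted by most documentation tools:

```sql
EXEC sys.sp_addextendedproperty
     @name  = N'MS_Description',
     @value = N'Master list of customers; one row per legal entity.',
     @level0type = N'SCHEMA', @level0name = N'dbo',
     @level1type = N'TABLE',  @level1name = N'Customers';
```

The same procedure works at column level by adding @level2type = N'COLUMN'.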
