Friday, August 16, 2013

[how to] Mailing list system database schema design

Mailing list system database schema design

Posted: 16 Aug 2013 05:16 PM PDT

I'm working on a simple mailing list system that stores messages and keeps track of each message's recipients, so I have 3 tables:

  • Mails (MessageId, MessageSubject, MessageBody)
  • Mails-Recipients (MessageId, RecipientId)
  • Recipients (RecipientId, RecipientName, RecipientEmail)

The problem is: when sending a message to a large number of recipients, say 1,000, I have to add 1,000 rows to the Mails-Recipients table. Then, to get the recipient info for a single message, I have to run a join over those 1,000 rows, and with a large number of messages this table will grow rapidly, causing performance issues.

Is there any better schema design for this system that can help maintain a good performance?
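For what it's worth, the three-table layout above is the standard normalized design for a many-to-many relationship, and a join over 1,000 rows is cheap if it is driven by an index. A hedged sketch (MySQL syntax assumed; the index name is made up) of the keys that keep the lookup fast even as the table grows:

```sql
-- Composite primary key: lookups by MessageId become index range scans,
-- and duplicate (message, recipient) pairs become impossible.
ALTER TABLE `Mails-Recipients`
    ADD PRIMARY KEY (MessageId, RecipientId),
    ADD INDEX idx_recipient (RecipientId);  -- for "which messages did X receive?"

-- Fetching the recipients of one message touches only that message's rows:
SELECT r.RecipientName, r.RecipientEmail
FROM `Mails-Recipients` mr
JOIN Recipients r ON r.RecipientId = mr.RecipientId
WHERE mr.MessageId = 42;
```

With those indexes in place, growth of the link table mainly costs disk, not query time; partitioning or archiving old messages usually only becomes necessary at much larger scale.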

Database utilizing the whole of swap space

Posted: 16 Aug 2013 04:29 PM PDT

When I load my Progress database, it almost instantly eats up all available RAM, as well as swap space. I've been told that adding extents might fix this, but I'm not sure how or where to verify that, nor how or where to add the extents. I did find some documentation, but without being able to identify the source of the problem I don't think I can apply it. I would post some additional data, but unfortunately I don't even know what would be of use. Thanks

MySQL Grouping Order

Posted: 16 Aug 2013 03:29 PM PDT

Is it possible to force the order in which a result set is grouped? I have a table with multiple languages in it, and I'm doing a GROUP_CONCAT to get a comma separated list of the product name in each language, for each product_id. However it doesn't seem possible to get MySQL to return that concatenated string in any particular order of language_id's. What I'd like is to be able to order the grouping by language id, so that they'll always come out in a pre-determined order.

Is this possible? If so, how? If not possible within the SELECT statement, is there a modification I can make to the table to adjust how the GROUP BY would order the result?
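For reference, MySQL's GROUP_CONCAT accepts its own ORDER BY (and SEPARATOR) clause inside the call, which is normally all that is needed here. A sketch with assumed table and column names:

```sql
SELECT product_id,
       GROUP_CONCAT(name ORDER BY language_id SEPARATOR ', ') AS names
FROM product_names
GROUP BY product_id;
```

Once the aggregate itself is ordered, the order in which rows reach the GROUP BY is irrelevant, so no table modification should be needed.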

Dynamic SQL is not allowed error for creating function in MySQL

Posted: 16 Aug 2013 01:35 PM PDT

Following this question and Angelo's answer, I created a MySQL function as below:

DELIMITER $$

CREATE FUNCTION hierarchy_connect_by_parent_eq_prior_id(value INT, reqdTable VARCHAR(50))
RETURNS INT
NOT DETERMINISTIC
READS SQL DATA
BEGIN
    DECLARE _id INT;
    DECLARE _parent INT;
    DECLARE _next INT;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET @id = NULL;

    SET _parent = @id;
    SET _id = -1;
    SET @table_name = reqdTable;

    IF @id IS NULL THEN
        RETURN NULL;
    END IF;

    LOOP
        SET @sql_text = concat('SELECT MIN(id) INTO @id FROM ', @table_name, ' WHERE parent_id = _parent AND id > _id;');
        PREPARE stmt FROM @sql_text;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;

        IF @id IS NOT NULL OR _parent = @start_with THEN
            SET @level = @level + 1;
            RETURN @id;
        END IF;

        SET @level := @level - 1;

        SET @sql_text = concat('SELECT id, parent_id INTO _id, _parent FROM ', @table_name, ' WHERE id = _parent;');
        PREPARE stmt FROM @sql_text;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;
    END LOOP;
END

But it fails with the following error:

 #1336 - Dynamic SQL is not allowed in stored function or trigger   

What should I do?
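Error 1336 is a hard restriction: PREPARE/EXECUTE is not allowed inside stored functions or triggers, only inside stored procedures. One workaround sketch (hypothetical names; the loop logic from the question would be carried over) is to recast the function as a procedure that returns its value through an OUT parameter:

```sql
DELIMITER $$
CREATE PROCEDURE hierarchy_next_id(IN reqdTable VARCHAR(50), OUT next_id INT)
BEGIN
    -- dynamic SQL is legal here because this is a procedure, not a function
    SET @sql_text = CONCAT('SELECT MIN(id) INTO @found_id FROM ', reqdTable,
                           ' WHERE parent_id = @parent AND id > @last_id');
    PREPARE stmt FROM @sql_text;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
    SET next_id = @found_id;
END$$
DELIMITER ;
```

The usual alternative, if it must remain a function (e.g. to be usable inside a SELECT), is to give up the dynamic table name and write one function per table.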

Run Multiple Scripts In One Transaction Across Multiple Servers

Posted: 16 Aug 2013 05:22 PM PDT

We have deployment scripts that we need to run on databases that are spread across multiple servers. One script only runs on one database, but the scripts depend on each other. We are looking for a way to run all of the scripts as one big transaction so that all scripts either commit or rollback as a whole.

How do I do this?

I would prefer a way to do this from ADO.NET, but SSMS is cool, too.

My current solution (which does not work) is to begin a transaction in every database, run all my scripts, and then commit/rollback once everything is done. However, I can't run all my scripts, because the cross-database dependencies block indefinitely.
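One direction to investigate, sketched below with hypothetical server and procedure names: if the instances are linked servers and MSDTC is running, T-SQL can enlist all the work in a single distributed transaction, which commits or rolls back atomically across servers. (From ADO.NET, wrapping the connections in a System.Transactions.TransactionScope achieves the same promotion to a distributed transaction.)

```sql
-- Assumes linked servers [ServerA]/[ServerB] and MSDTC are configured;
-- the names below are placeholders, not from the question.
BEGIN DISTRIBUTED TRANSACTION;

EXEC ('EXEC DeployDbA.dbo.run_deploy_script') AT [ServerA];
EXEC ('EXEC DeployDbB.dbo.run_deploy_script') AT [ServerB];

COMMIT TRANSACTION;   -- or ROLLBACK TRANSACTION if any step failed
```

This does not remove the blocking problem by itself: cross-database reads inside one big transaction can still block each other, so script ordering will still matter.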

SQL Server 2012 High Availability on VMware virtualization

Posted: 16 Aug 2013 02:15 PM PDT

We're going to host a large SQL Server 2012 under a VMware virtualization and we need it to be highly available.

These are the three options I found to be supported, but I couldn't find much information comparing which is better and why (and mostly, what the disadvantages are):

  1. VMware HA - pro: cheap, easy to use and configure; just let the VM team deal with it. con: no HA during system upgrades, e.g. SQL upgrades, Windows service packs/KBs, etc.
  2. SQL Server Failover Cluster (FCS) - pro: well known, supported, saves space (compared to an AlwaysOn Availability Group), HA during system upgrades. con: I haven't heard of successful clusters over VMs, or at least good ones. I've heard it's a mess and to "stay away from it". A bit harder to administer; some problems may take days to figure out (no one with a lot of knowledge).
  3. SQL Server AlwaysOn Availability Group - pro: HA for everything! System, DB, etc. con: new, added administration and hard to configure, new problems to deal with, double the space(!), "no strings attached" (to other servers).

That's pretty much it.

I would appreciate any info you may have about this, and would appreciate even more recommendations (hopefully based on test cases).

~Thanks.

Which databases can have row and column level security based upon Active Directory?

Posted: 16 Aug 2013 01:27 PM PDT

I have various database types, Oracle 11g, MySQL 5, Sql Server 2012. I have direct access. I can have other databases if I need them such as PostgreSQL.

What I'm doing is protecting data at the database and table level as opposed to the application level, though there will be some mixing.

What I'm hoping to do is control all the security from Active Directory. I don't want to do security in MySQL, security in Oracle, etc. That is a lot of duplicate entry.

I will have a webpage send credentials to the database engine to determine which bits of data the user can see.

Thanks for any ideas.

After subscribing to a replication set, why can't I see the replicated tables on the local database?

Posted: 16 Aug 2013 12:16 PM PDT

I'm pretty new to databases in general (about two weeks), but I'm learning fast. If I'm using the wrong terminology anywhere, let me know in the comments. Bear with me. :)

Background

The company I work for is using PostgreSQL 9.1 (with pgAdmin III) to manage their data. The scenario is this. Say we have a master database M. Only M will be receiving direct submissions of data. However, on a regular basis (or even a push-notification basis?), another database, T, needs to pull the data from M such that specific tables of M are replicated in T. As far as this problem is concerned, this is one-way communication.

The (slight) catch: T is not read-only. Changes made in T still need to persist in T, and T only. So this problem is not strictly replication, but I believed it was a start. (In my mind, T could replicate M but not push its own changes back to M, thus preserving integrity.) If there is a better way to go about this, please don't hesitate to comment with some links or keywords. :)

Problem Statement

I need to set up Master-Slave replication of a subset of tables in the Master database. That is, specific tables from the master must always be pushed onto the slave, but the slave is free to make its own local changes.

I am using pgAdmin III with Slony-I. On the master, I've set up a Slony cluster and added my admin node, my master node, and my slave node. I believe I correctly made paths and listens for open communication between the master and the slave. I'll post their configuration as screenshots at the end.

I've created a table in the postgres database called replicateme. (For simplicity, I'm using the default username/password/database for this research.)

\c postgres

CREATE TABLE replicateme (
    product_no integer PRIMARY KEY,
    name       text,
    price      numeric
);

I created a new replication set and created a new replicated table, public.replicateme.

At this point, after refreshing, I do not see the replication set show up on the pgAdmin instance on the slave node. This is odd because the nodes themselves do show up. If I connect to the master database 'normally,' the set (and table) show up, obviously. From the requirements though, I don't believe I'll be able to connect directly to the master database.

Thinking 'Oh, it's because I'm not subscribed,' I went on to create the subscription. I was only able to create it on the Master database though. Even after creation, I was left depressingly data-less.

Is there an extra step I'm missing? I'm hardly CLI averse, but pgAdmin/Slony was advertised to the development team as completely workable through the GUI. I'm getting the feeling it's not, but I cannot find any documentation that makes it clear where pgAdmin stops and the terminal begins. What is left to be done?


Master: (screenshot of master node configuration)

Slave: (screenshot of slave node configuration)

Ghost Cleanup process has high CPU usage in SQL Server 2012 SP1

Posted: 16 Aug 2013 11:21 AM PDT

Has anyone encountered this? We are using SQL Server 2012 Enterprise Edition with SP1. During a time of low/no activity, I noticed the Ghost Cleanup process utilizing CPU at a very high rate, from 75% to 99%; these aren't bursts but sustained usage.

Is this a bug with SQL 2012? I have seen this in SQL 2008 R2 and there was a fix for it but I haven't seen anything for 2012.

ERROR: text search configuration name "english" must be schema-qualified

Posted: 16 Aug 2013 11:06 AM PDT

Trying to do some full text searching with postgres and when running the following command

UPDATE "EMAIL" SET PostText = to_tsvector('pg_catalog.english', "TEXT");  

I'm getting the error:

ERROR:  text search configuration name "english" must be schema-qualified  

Is there something I'm missing? I've also tried

UPDATE "EMAIL" SET PostText = to_tsvector('english', "TEXT");  

Edit 1: The EMAIL table has the column "TEXT" in it. Both identifiers are in all caps, FWIW.
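A workaround sketch that sometimes sidesteps this name-resolution error is to pass the configuration as a regconfig value rather than a plain string, making the schema qualification explicit at the type level:

```sql
UPDATE "EMAIL"
SET PostText = to_tsvector('pg_catalog.english'::regconfig, "TEXT");
```

It is also worth checking SHOW default_text_search_config and \dF in psql to confirm that an english configuration is actually visible on the search_path.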

MySQL - unique constraints design

Posted: 16 Aug 2013 03:27 PM PDT

I am using MySQL and looking to solve a problem with relational data. I am not sure it is possible but wanted to ask the community.

Please consider the diagram:

(diagram of the Users, Roles, Events, EventRoles and UserEventRoles tables)

I would like to add a unique constraint such that a User can have only one Role per Event (stored in the UserEventRoles table). I'm not sure how this can be done; perhaps with a different design? Another goal is to avoid keys getting "out of sync", e.g. a user's role pointing to an event where that role does not exist.

EventRoles table describes which Roles are valid at each Event.

I am sure this problem has come up before in other database designs, and I would like to know what the best approach may be.

Also here is a link to sql fiddle to play around schema and some sample data.

sqlFiddle
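The textbook shape for this (a sketch; column names are guesses based on the description) is a composite unique key plus a composite foreign key: the unique key enforces one role per user per event, and the composite FK guarantees the (event, role) pair exists in EventRoles, which is exactly the "out of sync" case to prevent:

```sql
-- EventRoles must expose the pair as a key so it can be referenced
ALTER TABLE EventRoles
    ADD UNIQUE KEY uq_event_role (EventId, RoleId);

ALTER TABLE UserEventRoles
    -- one row (hence one role) per user per event
    ADD UNIQUE KEY uq_user_event (UserId, EventId),
    -- the chosen role must be one that is valid for that event
    ADD CONSTRAINT fk_user_event_role
        FOREIGN KEY (EventId, RoleId)
        REFERENCES EventRoles (EventId, RoleId);
```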

multiple line text values in mysqldump text export file

Posted: 16 Aug 2013 10:19 AM PDT

I'm trying to export a 100+ million record table into a text file. My plan is to split the text file into small pieces by size or line count, then import them.

One text field contains multiple lines, like a blog post body. In the exported text file it comes out as multiple lines, whereas I want one line per row so I can process the file line by line.

I tried various fields-terminated-by, lines-terminated-by and fields-escaped-by parameters for the export, but nothing turned that multi-line text into a single quoted, comma-separated line.

Quoting does work when I export the data in SQL format, but I haven't succeeded in converting the newline characters in the text field to \r\n or \n or whatever those characters are. Even when I escape them, they are still exported as literal newlines inside the quotes.
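One sketch worth trying: skip the tab-delimited format entirely and dump in SQL format with one INSERT per row. In SQL-format dumps, newlines inside string values are written as the escape sequence \n, so each row stays on a single physical line and the file can be split and processed line by line. (The flags below are standard mysqldump options; the database and table names are placeholders.)

```shell
# one single-line INSERT statement per row; embedded newlines become \n
mysqldump --extended-insert=FALSE --skip-comments mydb bigtable > bigtable.sql

# split into chunks of 100000 rows for piecewise import
split -l 100000 bigtable.sql bigtable_part_
```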

Relations, dependencies and normal form

Posted: 16 Aug 2013 02:54 PM PDT

I have some issues in understanding about relations and dependencies. I will show two examples and what I want to know:

Assume the relation GTX {G,T,X} with the following dependencies:

G,X -> T
T -> X

or this one:

Relation {W,X,Y,Z}:

X,Y -> Z
Z -> W
  • So to my question, how do I properly draw a dependency diagram for this kind of relations? (tutorials or/and explanations are much appreciated)
  • How do I see what the primary key is? (I know what a pk is)
  • In which normal form are the relations?

I have never been able to understand this fully and always got confused by the tutorials I found. I appreciate all answers!

MySQL: ERROR 126 (HY000): Incorrect key file for table

Posted: 16 Aug 2013 09:46 AM PDT

I've got '/tmp' directory mounted with 'tmpfs' and for some reason this is causing the following error:

mysql> SELECT DISTINCT table_schema FROM information_schema.tables
    -> WHERE table_schema NOT IN ('information_schema','mysql');
ERROR 126 (HY000): Incorrect key file for table '/tmp/#sql_29ef_0.MYI'; try to repair it

# df -h /tmp/
Filesystem            Size  Used Avail Use% Mounted on
tmpfs                 2.0G   12K  2.0G   1% /tmp

# df -i /tmp/
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
tmpfs                2041621       7 2041614    1% /tmp

# mount | grep /tmp
tmpfs on /tmp type tmpfs (rw,size=2048M)

Please note that the same query works fine when '/tmp' dir is mounted with ext4 file system.

Regards
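If moving /tmp off tmpfs is not an option, a common workaround (a sketch; the path below is an assumption) is to point MySQL's temporary directory elsewhere, since the failing #sql_*.MYI file is an internal temporary table created under tmpdir:

```ini
# my.cnf -- the directory must exist and be writable by the mysql user
[mysqld]
tmpdir = /var/lib/mysql-tmp
```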

MySQL Access Denied, tried a few things, pulling hair

Posted: 16 Aug 2013 02:21 PM PDT

I'm trying to get to know Django (my first attempts at a framework, or any backend work for that matter), and I'm seriously stumped by MySQL, and SQL in general.

I'm trying to create a new database and I get:

ERROR 1044 (42000): Access denied for user ''@'localhost' to database 'dbname'  

So I've tried the advice here: Access denied for user localhost

That might work, but using mysql -uroot -p I can't seem to remember my password. I don't recall setting one, but leaving it blank doesn't work either.

Also, I have to run mysql using:

/Applications/MAMP/Library/bin/mysql  

Because:

mysql  

results in:

ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)  

I'm guessing that's related to the error above, right?

I'm in a right mess here, SQL and the command line intimidate the heck out of me and it seems so easy to break stuff. If anyone could offer some pointers that would be great.

EDIT: Update on the issue;

Ugh, moved on to the next step of the Django tut. I get:

raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb

So I run:

pip install MySQL-python

and I get:

/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:33:10: fatal error: 'stdio.h' file not found

isn't stdio.h something to do with C?

I'm clearly not cut out for this back end stuff. I'm going back to JavaScript where it's all fluffy and forgiving!

Setting up SQL Server service broker from one database to multiple databases

Posted: 16 Aug 2013 09:55 AM PDT

I would like to set up SQL Server Service Broker so that DatabaseA sends the same data to DatabaseB and DatabaseC. In this case, all databases reside on the same instance.

I have looked at the following examples; however, they only show how to set up Service Broker between two databases:

I would be grateful if anyone could provide any assistance on this.

Why does my SQL Server show more than half a million active tasks?

Posted: 16 Aug 2013 06:21 PM PDT

select task_state, count(*)
from sys.dm_os_tasks
group by task_state

I ran the above statement on a SQL Server instance, and found it had about 633,000 records.

task_state
-------------- -----------
RUNNABLE       2
RUNNING        32
DONE           633115
SUSPENDED      99

How can I close/kill these useless tasks?

The MDW data collector allocates about 4,000 pages in tempdb each time it runs.

This causes I/O pressure when the server is busy.

This is a production server, so we do not want to restart the service. The version number is 11.0.3000.

Max Worker Count: 1216
CPU Count: 48
Hyperthread ratio: 12
MaxDop: 8

scheduler_id ranges from 0 to 47, with the rows spread evenly across them; the other columns are null.

What's stored in data_dir/base of postgresql

Posted: 16 Aug 2013 06:48 PM PDT

I suddenly found my PostgreSQL instance out of space. It's a temporary instance running on a RAM disk, with limited space (around 800 MB) allowed. Data is not actually stored permanently on it; all tables are dropped after some calculation is done.

I checked the folder and found tons of data under base folder:

$ du -h
6.0M    ./pgsql_tmp
706M    ./16384
6.0M    ./12780
6.0M    ./12772
6.0M    ./1
730M    .

My question is: what is inside that folder? Any chance I can clean it out?
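Each numbered directory under base/ holds one database's files, named by the database OID, so the 706M 16384 directory is almost certainly a user database rather than junk. A couple of sketch queries to confirm what it is and what is eating the space:

```sql
-- map base/<oid> directory names to databases
SELECT oid, datname FROM pg_database;

-- inside the large database: the biggest relations first
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```

Dropping tables returns the space once their files are unlinked; if tables are merely emptied, TRUNCATE or VACUUM FULL is what actually shrinks the files. pgsql_tmp holds sort/spill files and is cleaned up on its own.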

View with fallback (performance/optimization question)

Posted: 16 Aug 2013 03:57 PM PDT

I have a table with tariffs for stuff; the tariffs table is not important in this scenario, the "tariff values" are. In this Demonstration SQL Fiddle the tariff_plan is the FK to the tariffplans table (not included in the example). The tariff for each "thing" is the tariff_type (simplified to a simple char for demonstration purposes).

I have, for example, a default tariffplan (key = default); this is the tariff that goes for each customer unless another value is defined for the same tariff_type for that customer. A customer is assigned a tariffplan (key = plan_x in my example).

I have tariffs defined for items a, b, c and d in the default plan. In plan_x I define "override" values for a and c.

So, what I do is I select the default plan (alias p below for primary) and left-join the "override" plan (plan_x) to it (alias s below for secondary):

select *
from tariff_values as p
left outer join tariff_values s
    on (p.tariff_type = s.tariff_type) and (s.tariff_plan = 'plan_x')
where (p.tariff_plan = 'default')

This results, as expected, in:

id   tariff_plan tariff_type tariff_foo tariff_bar id   tariff_plan tariff_type tariff_foo tariff_bar
---- ----------- ----------- ---------- ---------- ---- ----------- ----------- ---------- ----------
1    default     a           0.10       0.20       5    plan_x      a           0.09       0.19
2    default     b           0.55       0.66       NULL NULL        NULL        NULL       NULL
3    default     c           1.99       2.99       6    plan_x      c           0.99       1.99
4    default     d           9.99       6.33       NULL NULL        NULL        NULL       NULL

Because I want to abstract this away I want to put this into a table valued function so I can create a "dynamic view":

select * from dbo.get_tariffplan_for('plan_x', default);  

This should result in a "virtual table" (or "dynamic view") similar to the tariff_values table; that is, not having two tariff_foo's and two tariff_bar's and leaving the application to decide which one to use. And so, I resort to ISNULL and CASE WHEN ... constructs to "override" the default values:

select p.tariff_type,
    ISNULL(s.tariff_foo, p.tariff_foo) as tariff_foo,
    ISNULL(s.tariff_bar, p.tariff_bar) as tariff_bar,
    ISNULL(s.tariff_plan, p.tariff_plan) as tariff_plan,
    CASE WHEN s.id IS NULL THEN 1 ELSE 0 END as isfallback
from tariff_values as p
left outer join tariff_values s
    on (p.tariff_type = s.tariff_type) and (s.tariff_plan = 'plan_x')
where (p.tariff_plan = 'default')

This results in:

tariff_type tariff_foo tariff_bar tariff_plan isfallback
----------- ---------- ---------- ----------- -----------
a           0.09       0.19       plan_x      0
b           0.55       0.66       default     1
c           0.99       1.99       plan_x      0
d           9.99       6.33       default     1

All I need to do now is stuff this query into a TVF:

CREATE FUNCTION get_tariffplan_for
(
    @customerplan as varchar(50),
    @defaultplan as varchar(50) = 'default'
)
RETURNS TABLE
AS RETURN
(
    select p.tariff_type,
        ISNULL(s.tariff_foo, p.tariff_foo) as tariff_foo,
        ISNULL(s.tariff_bar, p.tariff_bar) as tariff_bar,
        ISNULL(s.tariff_plan, p.tariff_plan) as tariff_plan,
        CASE WHEN s.id IS NULL THEN 1 ELSE 0 END as isfallback
    from tariff_values as p
    left outer join tariff_values s
        on (p.tariff_type = s.tariff_type) and (s.tariff_plan = @customerplan)
    where (p.tariff_plan = @defaultplan)
);

And there we have it. We can call our function ("dynamic view") as intended (and also use it in selects/joins etc.)

select * from dbo.get_tariffplan_for('plan_x', default);

--or:

select *
from foo
inner join dbo.get_tariffplan_for('plan_x', default) bar
    on foo.tariff_type = bar.tariff_type

Now my first question is:

I have a feeling all these ISNULL (or COALESCE) and/or CASE WHEN ... stunts seem to complicate things unnecessarily and something tells me this can be done more efficiently. However, I can't come up with a better and/or more efficient alternative.

So I'm hoping someone here has some ideas on how to improve this.

My second question is:

What if I had a product (tariff_type q, for example) that I sold exclusively to some customer? The tariff wouldn't be in the default tariff plan, so I'd have to add another select (with a union) to get all exclusive tariffs for that customer into the result set. That would result in a query like this:

select p.tariff_type,
    ISNULL(s.tariff_foo, p.tariff_foo) as tariff_foo,
    ISNULL(s.tariff_bar, p.tariff_bar) as tariff_bar,
    ISNULL(s.tariff_plan, p.tariff_plan) as tariff_plan,
    CASE WHEN s.id IS NULL THEN 1 ELSE 0 END as isfallback,
    0 as isexclusive
from tariff_values as p
left outer join tariff_values s
    on (p.tariff_type = s.tariff_type) and (s.tariff_plan = @customerplan)
where (p.tariff_plan = @defaultplan)

UNION

--Exclusive values
select p.tariff_type,
    p.tariff_foo,
    p.tariff_bar,
    p.tariff_plan,
    0 as isfallback,
    1 as isexclusive
from tariff_values p
left outer join tariff_values s
    on (p.tariff_type = s.tariff_type) AND (s.tariff_plan = 'default')
where p.tariff_plan = 'plan_x'
    and s.id is null

(Demonstrated in this SQL fiddle)

In the above example I use another left join with s.id is null (though that could be rewritten in other ways: using count, exists, not in, having, whatever) to retrieve the customer-exclusive tariffs. But maybe there's a better way than the union?
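One alternative sketch that answers both questions at once: filter each plan into its own derived table and FULL OUTER JOIN them. Fallbacks (no override row) and customer-exclusive tariffs (no default row) then fall out of the same join, with no UNION and one COALESCE per column:

```sql
select COALESCE(s.tariff_type, p.tariff_type)  as tariff_type,
       COALESCE(s.tariff_foo,  p.tariff_foo)  as tariff_foo,
       COALESCE(s.tariff_bar,  p.tariff_bar)  as tariff_bar,
       COALESCE(s.tariff_plan, p.tariff_plan) as tariff_plan,
       CASE WHEN s.id IS NULL THEN 1 ELSE 0 END as isfallback,
       CASE WHEN p.id IS NULL THEN 1 ELSE 0 END as isexclusive
from (select * from tariff_values where tariff_plan = @defaultplan)  p
full outer join
     (select * from tariff_values where tariff_plan = @customerplan) s
    on p.tariff_type = s.tariff_type;
```

Whether this actually beats the original is a question for the execution plan; the ISNULL/CASE expressions themselves cost next to nothing, so the original query is not as inefficient as it looks.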

Altering the location of Oracle-Suggested Backup

Posted: 16 Aug 2013 02:05 PM PDT

On one database, the Oracle-Suggested Backup scheduled from Enterprise Manager always ends up in the recovery area, despite RMAN configuration showing that device type disk format points elsewhere.

As far as I can see, the scheduled backup job is simply:

run {
  allocate channel oem_disk_backup device type disk;
  recover copy of database with tag 'ORA_OEM_LEVEL_0';
  backup incremental level 1 cumulative
    copies=1 for recover of copy with tag 'ORA_OEM_LEVEL_0' database;
}

Asking RMAN to show all reveals that device type disk is indeed configured to store elsewhere:

CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/s01/backup/PROD11/PROD11_%U';

If I run the script manually, the backup set is placed at the above location; when the script is run from the job scheduler, the backup set goes to the RECO disk group on ASM.

Why might Oracle still choose to dump the backupset to the db_recovery_file_dest?

Ultimately, how can I change the backup destination?
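Two hedged things to check. First, CONFIGURE CHANNEL settings apply to automatically allocated channels; the job script above allocates its channel manually and gives it no FORMAT clause, so the configured format is bypassed. Second, the datafile image copies maintained by recover copy of database go to the fast recovery area by default. A sketch of the job script with an explicit format:

```sql
run {
  allocate channel oem_disk_backup device type disk
    format '/s01/backup/PROD11/PROD11_%U';
  recover copy of database with tag 'ORA_OEM_LEVEL_0';
  backup incremental level 1 cumulative
    copies=1 for recover of copy with tag 'ORA_OEM_LEVEL_0' database;
}
```

Since this is the OEM-scheduled job, the cleaner route may be to change the backup destination in the Enterprise Manager job settings rather than hand-editing the generated script.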

Database restructure - beneficial?

Posted: 16 Aug 2013 09:06 PM PDT

I have a table for email messages. Then, I have a table that has the message parts.

The Parts table contains a field_id, a message_id, and data.

(At the time I named the table Parts and yet used field_id for the column; just an FYI.)

So, for example, a field_id of 2 would be the subject of the message. I have parts for the subject, date, HTML body, and text body. Due to this structure, I issue approximately 2 more queries per email (one for the parts, and another for the email addresses associated with the email) than if I were to push all the data into the messages table. I originally found this structure to be best, but I'm beginning to think it might be wrong and not ideal for performance.

My question is: will it be in my best interest to restructure the database? I'd rather not.

I was thinking about moving the htmlbody, textbody, subject and date into the messages table. Another solution would be to grab all the emails and their data from the Parts table in one query: I could grab all the ids in one query and then do an IN (ids) for the second query.

CREATE TABLE IF NOT EXISTS `messages` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `user_id` int(10) NOT NULL,
  `account_folder_id` int(10) NOT NULL,
  `hash` varchar(255) NOT NULL,
  `uid` int(10) NOT NULL,
  `seen` tinyint(1) NOT NULL,
  `flagged` tinyint(1) NOT NULL,
  `date_created` int(11) NOT NULL DEFAULT '0',
  `last_modified` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  UNIQUE KEY `hash` (`hash`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

CREATE TABLE IF NOT EXISTS `message_parts_data` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `message_id` int(10) NOT NULL,
  `field_id` int(10) NOT NULL,
  `data` text NOT NULL,
  `date_created` int(11) NOT NULL DEFAULT '0',
  `last_modified` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

CREATE TABLE IF NOT EXISTS `fields` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `name` text,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

INSERT INTO `fields` (`id`, `name`) VALUES
(1, 'To'),
(2, 'Subject'),
(3, 'Date'),
(4, 'From'),
(5, 'CC'),
(7, 'ReplyTo'),
(8, 'textHtml'),
(9, 'textPlain'),
(11, 'Forward');
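Before restructuring, it may be worth benchmarking the batching idea from the question, since it removes the extra per-email queries without any schema change. A sketch against the schema above:

```sql
-- one round trip for the parts of a whole page of messages
SELECT p.message_id, f.name AS field, p.data
FROM message_parts_data p
JOIN fields f ON f.id = p.field_id
WHERE p.message_id IN (1, 2, 3)      -- ids collected by the first query
ORDER BY p.message_id;

-- supporting index; the schema above only indexes the surrogate id
ALTER TABLE message_parts_data ADD INDEX idx_message (message_id);
```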

Thanks

MYSQL Timezone support

Posted: 16 Aug 2013 04:06 PM PDT

We are on a shared hosting plan, and the host says they do not provide MySQL time zone support on it. I can create the time-zone-related tables in our own database and populate them with the required data (taken from our local MySQL time zone tables). How can I view the code/syntax behind MySQL's CONVERT_TZ function?

Thanks, Arun
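For what it's worth, CONVERT_TZ is a native (built-in) function, so there is no SQL source to view; it looks up the mysql.time_zone% tables, which on a self-managed server are populated with mysql_tzinfo_to_sql. A sketch of the usual verification:

```sql
-- returns NULL until the server's time zone tables are populated
SELECT CONVERT_TZ('2013-08-16 12:00:00', 'UTC', 'America/New_York');
```

On shared hosting without access to the mysql schema, the usual fallback is to store timestamps in UTC and convert in application code, since CONVERT_TZ cannot be pointed at tables in your own database.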

replication breaks after upgrading master

Posted: 16 Aug 2013 12:05 PM PDT

I have a replication setup with a 5.1.30 master and a 5.5.16 slave, and replication is working well.

Now I have upgraded the MySQL master to 5.1.47.

As far as I know, we have to turn off binary logging with sql_log_bin=0 before running the mysql_upgrade program, in order to upgrade the replication setup as well.

The problem here is that the binary log was not turned off while the mysql_upgrade program was running.

The reason, I found, is that in 5.1 sql_log_bin is a session variable, and the mysql_upgrade program runs in another session.

So how do I upgrade the replication setup along with the server, without any breakage of replication?

Any suggestions are really useful.

Need to suppress rowcount headers when using \G

Posted: 16 Aug 2013 01:05 PM PDT

Is there a command to suppress the row-count headers and asterisks when using \G to execute a SQL statement? I am executing mysql with the -s and --skip-column-names options, but these don't suppress the row headers.
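As far as I know mysql has no flag that drops the *** N. row *** separators that \G prints, so the usual trick is to filter them out of the output. A sketch:

```shell
# delete the "*************** N. row ***************" separator lines
strip_row_headers() {
  sed '/^\*\{5,\} [0-9][0-9]*\. row \*\{5,\}$/d'
}

# usage: mysql -s --skip-column-names -e 'SELECT ... \G' | strip_row_headers
```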

How to search whole MySQL database for a particular string

Posted: 16 Aug 2013 05:06 PM PDT

Is it possible to search a whole database's tables (every row and column) to find a particular string?

I have a database named A with about 35 tables. I need to search for the string "hello", and I don't know in which table it is saved. Is this possible?

Using MySQL.

I am a Linux admin and I am not familiar with databases; it would be really helpful if you could explain the query as well.
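A common low-tech approach for a one-off search, sketched below: dump only the data and grep it. Each INSERT line in the dump names its table, so the matching line tells you where the string lives. (The database name A is from the question; the user and file names are placeholders.)

```shell
mysqldump --no-create-info --extended-insert=FALSE -u someuser -p A > A_data.sql
grep -n "hello" A_data.sql    # matching lines begin: INSERT INTO `tablename` ...
```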

multivalued weak key in ER database modeling

Posted: 16 Aug 2013 03:05 PM PDT

I was wondering, since I didn't find any clarification on this. I want to store movies that exist in different formats (DVD, Blu-ray, etc.), where the price and stock quantity of each format differ, so I came up with this:

(first design diagram)

Is this correct from a design perspective? Does it imply redundancy? I don't understand how this would be stored in a table. Would it be better to do it like this:

(second design diagram)

Thanks in advance.

EDIT: I'll add some more descriptive information about what I want to store at this point of the design. I want to store information about sales. For each movie the company carries, I need to store its format, price and stock quantity. I will also need to store customer information: a unique id, name, surname, address, credit card number, and the movies he/she has already bought. Finally, I will have a basket that temporarily keeps the items the customer wants to buy (let's suppose that items other than movies exist, too).

Microsoft Office Access database engine could not find the object 'tableName'

Posted: 16 Aug 2013 06:06 PM PDT

First, a little background: I am using MS Access to link to tables in an Advantage database, via a System DSN I created. In the past I've created a new Access database and, using the external data wizard, successfully linked to tables. Those databases and their linked tables are working fine.

Now I am trying to do the same thing: create a new Access DB and link to the same DSN. I get as far as seeing the tables, but after making my selection I get the error: "The Microsoft Office Access database engine could not find the object 'tableSelected'. Make sure the object exists and that you spell its name and the path name correctly."

I've tried creating another data source (both system and user) with no luck. Environment: Win XP, Access 2007, Advantage DB 8.1.

MYSQL 5.5 Fail start Fedora 16

Posted: 16 Aug 2013 10:05 AM PDT

I installed mysql and mysql-server from the repos (MySQL version 5.5). Then tried to start it, but got an error.

[root@server]# service mysqld start
Redirecting to /bin/systemctl start mysqld.service
Job failed. See system logs and 'systemctl status' for details.

Here is the log:

121118  2:41:38 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
121118  2:41:38 [Note] Plugin 'FEDERATED' is disabled.
121118  2:41:38 InnoDB: The InnoDB memory heap is disabled
121118  2:41:38 InnoDB: Mutexes and rw_locks use GCC atomic builtins
121118  2:41:38 InnoDB: Compressed tables use zlib 1.2.5
121118  2:41:38 InnoDB: Using Linux native AIO
/usr/libexec/mysqld: Can't create/write to file '/tmp/ibhsfQfU' (Errcode: 13)
121118  2:41:38  InnoDB: Error: unable to create temporary file; errno: 13
121118  2:41:38 [ERROR] Plugin 'InnoDB' init function returned error.
121118  2:41:38 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
121118  2:41:38 [ERROR] Unknown/unsupported storage engine: InnoDB
121118  2:41:38 [ERROR] Aborting

121118  2:41:38 [Note] /usr/libexec/mysqld: Shutdown complete

121118 02:41:38 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

This is a fresh installation; nothing was changed prior to that, I just ran yum update.

Here is the systemctl status trace

[root@linyansho /]# systemctl status mysqld.service
mysqld.service - MySQL database server
  Loaded: loaded (/lib/systemd/system/mysqld.service; disabled)
  Active: failed since Sun, 18 Nov 2012 02:45:19 +0300; 5min ago
  Process: 864 ExecStartPost=/usr/libexec/mysqld-wait-ready $MAINPID (code=exited, status=1/FAILURE)
  Process: 863 ExecStart=/usr/bin/mysqld_safe --basedir=/usr (code=exited, status=0/SUCCESS)
  Process: 842 ExecStartPre=/usr/libexec/mysqld-prepare-db-dir %n (code=exited, status=0/SUCCESS)
  CGroup: name=systemd:/system/mysqld.service
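Errcode 13 is EACCES: mysqld cannot create its temporary file in /tmp. On Fedora this usually means /tmp has lost its world-writable sticky mode or its SELinux label. A hedged sketch of the usual repair:

```shell
# restore standard /tmp permissions, relabel for SELinux, then retry
chmod 1777 /tmp
restorecon -Rv /tmp
systemctl start mysqld.service
```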

Sql Anywhere 11: Restoring incremental backup failure

Posted: 16 Aug 2013 11:05 AM PDT

We want to create remote incremental backups after a full backup. This will allow us to restore in the event of a failure and bring up another machine with as close to real time backups as possible with SQL Anywhere network servers.

We are doing a full backup as follows:

dbbackup -y -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\full

This makes a backup of the database and log files and can be restored as expected. For incremental backups I've tried both live and incremental transaction logs with a renaming scheme if there are multiple incremental backups:

dbbackup -y -t -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\inc

dbbackup -y -l -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\live

However, on applying the transaction logs on restore I always receive an error when applying the transaction logs to the database:

10092: Unable to find table definition for table referenced in transaction log

The transaction log restore command is:

dbeng11 "c:\dbpath\dbname.db" -a "c:\backuppath\dbname.log"  

The error doesn't specify what table it can't find but this is a controlled test and no tables are being created or dropped. I insert a few rows then kick off an incremental backup before attempting to restore.

Does anyone know the correct way to do incremental backup and restore on Sql Anywhere 11?

UPDATE: Thinking it might be related to the complexity of the target database, I made a new blank database and network service, then added one table with two columns and inserted a few rows. I made a full backup, then inserted and deleted a few more rows and committed the transactions, then made an incremental backup. Applying the incremental transaction log backups after restoring the full backup failed with the same error...

Edit:

You can follow this link to see the same question with slightly more feedback on SA: http://sqlanywhere-forum.sybase.com/questions/4760/restoring-incrementallive-backup-failure
