Sunday, April 7, 2013

[how to] MySQL: Is "using temporary" always a bad thing?

MySQL: Is "using temporary" always a bad thing?

Posted: 07 Apr 2013 05:10 PM PDT

Is using temporary in the output of explain always bad?

To give an example, I have a three-way join on three very large tables. The indexes are correct. The columns being joined on are indexed and have matching column lengths. The only red flag is using temporary. Is this simply unavoidable given the circumstances?

These are in-memory temp tables, not disk tables, so I'm thinking, maybe it's just how MySQL works, and not always a bad thing. (To give a sense of size... the row product is 1300 x 1 x 3).
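For context, "Using temporary" typically appears when a DISTINCT, GROUP BY, or ORDER BY cannot be satisfied by the join order plus an index. A hypothetical repro with invented tables:

EXPLAIN
SELECT DISTINCT a.id, b.name
FROM a
JOIN b ON b.a_id = a.id
JOIN c ON c.b_id = b.id
ORDER BY b.name;
-- Extra: "Using temporary; Using filesort" -- the server materializes an
-- implicit temp table (in memory if it fits) to deduplicate and sort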

DBCC CHECKDB Notification

Posted: 07 Apr 2013 06:34 PM PDT

There are plenty of questions on DBA.SE regarding DBCC CHECKDB and how to resolve problems when errors are returned. My specific question is about actually getting notified that DBCC CHECKDB returned errors. Most DBAs know that you can automate the command and should run it often.

I came across this article by Cindy Gross, which has some very good notes. In it she mentions that if a SQL Server Agent job step finds errors from the execution of the CHECKDB command, it will fail that step (or the job, depending on configuration). She points to Paul Randal's blog post on the topic here.

Now I am curious whether anyone knows if the Check Database Integrity Task in a maintenance plan would do the same thing. MSDN does not mention that it will, and I have honestly not been in an environment where it has come across a corruption issue, so I can't say that it does. This would be versus simply setting up a SQL Agent job with multiple steps that runs the specific command against each database, as Cindy suggested.

Thoughts? Obviously the proof is in the pudding, so providing more than just a guess would be helpful...
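For reference, a minimal sketch of the Agent-job-step approach Cindy describes (the database name is a placeholder): a plain T-SQL step like this fails on corruption, because CHECKDB raises high-severity errors, and a failed step can then fire a job or operator notification.

-- T-SQL job step: any corruption error raised by CHECKDB fails this step,
-- which the job's notification settings can then report
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;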

MySQL - large database

Posted: 07 Apr 2013 05:14 PM PDT

I need guidelines to set up a DBMS (MySQL or SQL Server 2008). The database size is expected to reach 80-100 GB within the next few months. One main table will contain textual content such as essays. Can MySQL handle a database this big, and will full-text search really work for me at that size? Should I consider MySQL or something else?

Thanks
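For a rough feasibility check, a minimal MySQL full-text setup might look like this (table and column names are invented; in MySQL 5.5 and earlier FULLTEXT requires MyISAM, while 5.6+ also supports it on InnoDB):

CREATE TABLE essay (
    id   INT AUTO_INCREMENT PRIMARY KEY,
    body MEDIUMTEXT NOT NULL,
    FULLTEXT KEY ft_body (body)
) ENGINE=MyISAM;

SELECT id
FROM essay
WHERE MATCH(body) AGAINST ('search phrase' IN NATURAL LANGUAGE MODE);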

Archive partition before deleting it?

Posted: 07 Apr 2013 10:51 AM PDT

I have managed to create an event that, with the following script, can both create and drop partitions. What I need help with now is how to save the partition before it is deleted. Should I use the Percona tools or mysqldump itself?

-- note: the stray leading space in the original CONCAT(' p', ...) is removed;
-- it made the EXISTS check compare against ' pYYYYMMDD', which never matches
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.PARTITIONS
           WHERE TABLE_NAME = 'testPart1'
             AND PARTITION_NAME = CONCAT('p', DATE_FORMAT(DATE_SUB(CURDATE(), INTERVAL 14 DAY), '%Y%m%d'))) THEN
    SET @stmt := CONCAT(
        'ALTER TABLE testPart1 DROP PARTITION p'
    ,   DATE_FORMAT(DATE_SUB(CURDATE(), INTERVAL 14 DAY), '%Y%m%d')
    );
    PREPARE stmt FROM @stmt;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
END IF;

IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.PARTITIONS
               WHERE TABLE_NAME = 'testPart1'
                 AND PARTITION_NAME = CONCAT('p', DATE_FORMAT(DATE_ADD(CURDATE(), INTERVAL 2 DAY), '%Y%m%d'))) THEN
    SET @stmt := CONCAT(
        'ALTER TABLE testPart1 ADD PARTITION ('
    ,   'PARTITION p'
    ,   DATE_FORMAT(DATE_ADD(CURDATE(), INTERVAL 2 DAY), '%Y%m%d')
    ,   ' VALUES LESS THAN ('
    ,   TO_DAYS(CURDATE()) + 2
    ,   '))'
    );
    PREPARE stmt FROM @stmt;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
END IF;
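One way to archive first, assuming MySQL 5.6+ (on 5.5, mysqldump with a WHERE clause covering the partition's date range is the usual fallback): swap the expired partition into an empty, identically defined table, then dump or keep that table. The partition name below is illustrative.

CREATE TABLE testPart1_archive LIKE testPart1;
ALTER TABLE testPart1_archive REMOVE PARTITIONING;

-- moves the partition's rows into testPart1_archive as a metadata-only swap;
-- the partition is left empty (the archive table must be empty beforehand)
ALTER TABLE testPart1 EXCHANGE PARTITION p20130324 WITH TABLE testPart1_archive;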

SQL Server taking a lot of memory

Posted: 07 Apr 2013 05:39 PM PDT

I have SQL Server Express on my VPS (2 GB RAM) with a database size of 1.5 GB that I expect to grow to 15 GB. I have noticed that SQL Server is using 1.5 GB of RAM even when only small queries have been run against the database.

Is there any relation between memory use and database size? Can I tune my SQL Server so it handles memory better and does not use as much as it does now?

Here is the memory status of SQL Server:

VM Reserved,1455488
VM Committed,19320
AWE Allocated,0
Reserved Memory,1024
Reserved Memory In Use,0
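Caching data pages up to the configured ceiling is normal SQL Server behavior, and Express editions already cap the buffer pool at roughly 1 GB. If you still want to lower the ceiling, a hedged sketch (the value is only an example):

EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
-- cap the buffer pool so the OS and other processes keep some headroom
EXEC sys.sp_configure N'max server memory (MB)', 1024;
RECONFIGURE;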

Unique index for subquery with RANK() function

Posted: 07 Apr 2013 09:36 AM PDT

I have the following tables:

CREATE TABLE Revision (
    RevisionId INT PRIMARY KEY IDENTITY,
    UserName NVARCHAR(256) NOT NULL,
    DateTime DATETIME NOT NULL DEFAULT GETDATE()
)

CREATE TABLE MyEntity (
    MyEntityId INT NOT NULL,
    RevisionId INT NOT NULL FOREIGN KEY REFERENCES Revision (RevisionId),
    Deleted BIT NOT NULL DEFAULT 0,
    Name NVARCHAR(256) NOT NULL,
    Body NVARCHAR(MAX) NOT NULL,
    PRIMARY KEY (MyEntityId, RevisionId)
)

That is, the MyEntity table has all changes to an entity. When an entity is created, modified, or deleted, a new record is inserted into MyEntity and into Revision, so that the entire history is tracked.

I would like a view to have the latest version of each entity:

CREATE VIEW MyEntityLatest WITH SCHEMABINDING AS
SELECT
    Latest.MyEntityId,
    Latest.Name,
    Latest.Body
FROM dbo.MyEntity
INNER JOIN (
    SELECT
        MyEntityId,
        RevisionId,
        Name,
        Body,
        RANK() OVER (PARTITION BY MyEntityId ORDER BY RevisionId DESC) AS RevisionIdDesc
    FROM MyEntity
    WHERE Deleted = 0
) AS Latest
    ON dbo.MyEntity.MyEntityId = Latest.MyEntityId
    AND dbo.MyEntity.RevisionId = Latest.RevisionId
WHERE Latest.RevisionIdDesc = 1

However, I would like to create a unique index (and hence constraint) such that the Name is unique for the latest revision only. I cannot create an index on the view because of the subquery in the view.

How can I accomplish this?
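One common workaround, assuming SQL Server 2008 or later and that you can maintain a flag when inserting revisions (the IsLatest column below is mine, not part of the posted schema): enforce the rule with a filtered unique index on the table instead of indexing the view.

ALTER TABLE dbo.MyEntity ADD IsLatest BIT NOT NULL DEFAULT 1;

-- uniqueness applies only to rows that are both current and not deleted
CREATE UNIQUE NONCLUSTERED INDEX UX_MyEntity_Name_Latest
    ON dbo.MyEntity (Name)
    WHERE IsLatest = 1 AND Deleted = 0;

The application (or a trigger) then has to clear the flag on the previous revision inside the same transaction that inserts the new one.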

How to import .mdf database to MySQL (.sql) database? [duplicate]

Posted: 07 Apr 2013 10:33 AM PDT

This question already has an answer here:

Two database files named "xxx.mdf" and "xxx.ldf" were provided, and I want to import the database into phpMyAdmin. For that I want to make a .sql file and run multiple queries from it... I need urgent information.

What kind of owner should be defined when creating a SQL Server DB for a web site?

Posted: 07 Apr 2013 10:24 AM PDT

What kind of owner should be defined when creating a SQL Server DB for a web site? I am trying to create a DB and I need to select an owner. My DB will serve an ASP.NET web site, and I wonder which owner would be safe to define for this DB. Thanks.

Adding users to a list and sorting by category?

Posted: 07 Apr 2013 05:29 AM PDT

I'm using SQL Server 2012 and I would like to know how to solve this problem.

So:

I want to create a page that allows admins to manage a project, and part of it, would be to add members/employees to the project (Project Table).

Now, I have another table, users, containing all the users on the system.

Each project has a different department (departments can be added - for example, hardware, software, accounting etc).

What I would like to solve is: how would I go about creating a new department and adding employees to the project? That is, how should the tables be structured, and which tables need to be created?

Steps

  1. Add user to a project
  2. Assign user role
  3. Assign user department
  4. Create department (will this have to include creation of another table?)
  5. Add users to that department (will this have to include creation of another table?)

How can I solve this in terms of database design?
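For what it's worth, one plausible shape under these requirements (all names are illustrative, and it assumes Project and Users tables keyed by ProjectId and UserId already exist): departments become rows rather than tables, and a junction table records each user's role and department on each project, so steps 4 and 5 need no new table per department.

CREATE TABLE Department (
    DepartmentId INT IDENTITY PRIMARY KEY,
    Name         NVARCHAR(100) NOT NULL UNIQUE
);

-- one row per user per project; carries that user's role and department
CREATE TABLE ProjectMember (
    ProjectId    INT NOT NULL REFERENCES Project (ProjectId),
    UserId       INT NOT NULL REFERENCES Users (UserId),
    DepartmentId INT NOT NULL REFERENCES Department (DepartmentId),
    Role         NVARCHAR(50) NOT NULL,
    PRIMARY KEY (ProjectId, UserId)
);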

How to Handle Time Zones Properly in SQL Server?

Posted: 07 Apr 2013 11:06 AM PDT

I have an issue that needs to be fixed quickly. My local development server is in the Middle East, but my production server is in the UK. I need to show dates to users in their own time zone. For example, if a user is in Saudi Arabia, I need to show the time in the Saudi Arabian format. Should I create a new database table called TimeZone and save the time in UTC? Please advise.
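Storing UTC once and converting at display time is the usual pattern. A minimal sketch, assuming SQL Server 2008's datetimeoffset functions and invented table names (the per-user offset would come from your TimeZone table):

CREATE TABLE Event (
    EventId       INT IDENTITY PRIMARY KEY,
    OccurredAtUtc DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()  -- always store UTC
);

-- render for a Saudi user (UTC+03:00); look the offset up per user
SELECT SWITCHOFFSET(TODATETIMEOFFSET(OccurredAtUtc, '+00:00'), '+03:00') AS LocalTime
FROM Event;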

MySQL format numbers with comma [closed]

Posted: 07 Apr 2013 12:14 AM PDT

How can I print the integer part of a number with commas separating every three digits?

For example,

12345    -> 12,345
1234.567 -> 1,234.567
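MySQL's built-in FORMAT() appears to cover exactly this; note that it rounds to the decimal count you pass:

SELECT FORMAT(12345, 0);     -- '12,345'
SELECT FORMAT(1234.567, 3);  -- '1,234.567'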

How do I find an invalid utf8 character "somewhere" in my MySQL/trac database

Posted: 06 Apr 2013 09:27 PM PDT

I have an installation of trac, which uses MySQL. It has been upgraded so many times, moved servers etc. and chances are that the MySQL character sets were not always set correctly or consistently over the years. Currently all of them are utf8.

When attempting to upgrade the data using "trac-admin wiki update", I'm getting an error message that a byte ("UnicodeDecodeError: 'utf8' codec can't decode byte 0xa0 in position 4274: invalid start byte") is not valid unicode. Unfortunately trac-admin gives me no hint where (table/row/column) to look for that byte sequence, or what I could do to fix it.

My question is not about trac/trac-admin, however, it's about the database. How would you go about finding, "somewhere" in the database, the offending bytes, and replacing them with something that is at least valid utf8. I have attempted to mysqldump the database and to re-import it, but MySQL gives no indication that anything might be wrong. The invalid bytes get re-imported.

Ideas?
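One way to hunt for the raw byte is a binary comparison against the candidate columns; a sketch, where the table and column names are guesses at the trac schema:

SELECT name, version
FROM wiki
WHERE CAST(text AS BINARY) LIKE CONCAT('%', UNHEX('A0'), '%');
-- caveat: 0xA0 is also a valid continuation byte (e.g. 0xC2 0xA0 is a
-- UTF-8 non-breaking space), so inspect the hits by hand before replacing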

Restoring a copy of a SQL Server 2008 backup file to SQL Server 2008 R2 degraded performance

Posted: 07 Apr 2013 03:47 PM PDT

I am trying to upgrade from SQL Server 2008 to SQL Server 2008 R2. But when I restored a SQL Server 2008 backup file to SQL Server 2008 R2, the performance of stored procedures degraded significantly. I took a full backup from the old SQL Server 2008 instance and restored it on the new SQL Server 2008 R2 instance. I checked the execution plans of several stored procedures to make sure no indexes are missing, but everything is the same.

Can anyone please advise whether I need to take any further steps to get better performance?

Thanks, Dony
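A common post-restore checklist when moving a database up a version, as a hedged sketch (database and table names are placeholders): statistics carried over from the old instance can mislead the newer optimizer, so refreshing them is usually the first thing to try.

USE MyDb;
EXEC sp_updatestats;                          -- refresh all statistics

-- for the worst offenders, a rebuild also refreshes stats with a full scan
ALTER INDEX ALL ON dbo.SomeHotTable REBUILD;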

MySQL5.6 on Mac OS X 10.6.8 problems when granting permissions to root

Posted: 07 Apr 2013 04:35 PM PDT

I'm having serious problems with a MySQL 5.6 instance on a Mac Server.

We had to upgrade MySQL to a newer version, but it turned out to be a bad idea, as we lost control of it. We had a backup of the /data directory and the my.cnf file as well, so we tried setting an init_file to restore the previous password and its permissions. We created a text file with this content:

UPDATE mysql.user SET Password=PASSWORD('myOriginalPassword') WHERE User='root';
FLUSH PRIVILEGES;

We have double checked the init_file permissions (we even gave it a chmod 777, to make sure it worked), but something is not working. When we run mysqld_safe --init_file=/usr/local/mysql/bin/init_file it tries to start the server, but then it shuts itself down. The log is as follows:

130308 13:23:52 mysqld_safe Starting mysqld daemon with databases from /usr/local/mysql/data
2013-03-08 13:23:53 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2013-03-08 13:23:53 18979 [Warning] Setting lower_case_table_names=2 because file system for /usr/local/mysql/data/ is case insensitive
2013-03-08 13:23:53 18979 [Note] Plugin 'FEDERATED' is disabled.
2013-03-08 13:23:53 18979 [Note] InnoDB: The InnoDB memory heap is disabled
2013-03-08 13:23:53 18979 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2013-03-08 13:23:53 18979 [Note] InnoDB: Compressed tables use zlib 1.2.3
2013-03-08 13:23:53 18979 [Note] InnoDB: CPU does not support crc32 instructions
2013-03-08 13:23:53 18979 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2013-03-08 13:23:53 18979 [Note] InnoDB: Completed initialization of buffer pool
2013-03-08 13:23:53 18979 [Note] InnoDB: Highest supported file format is Barracuda.
2013-03-08 13:23:53 18979 [Note] InnoDB: 128 rollback segment(s) are active.
2013-03-08 13:23:53 18979 [Note] InnoDB: Waiting for purge to start
2013-03-08 13:23:53 18979 [Note] InnoDB: 1.2.10 started; log sequence number 19440939
2013-03-08 13:23:53 18979 [ERROR] /usr/local/mysql/bin/mysqld: unknown option '--skip-locking'
2013-03-08 13:23:53 18979 [ERROR] Aborting

2013-03-08 13:23:53 18979 [Note] Binlog end
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'partition'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_DELETED'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_INSERTED'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_METRICS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMPMEM'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMP'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_LOCKS'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_TRX'
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'InnoDB'
2013-03-08 13:23:53 18979 [Note] InnoDB: FTS optimize thread exiting.
2013-03-08 13:23:53 18979 [Note] InnoDB: Starting shutdown...
2013-03-08 13:23:54 18979 [Note] InnoDB: Shutdown completed; log sequence number 19440949
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'BLACKHOLE'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'ARCHIVE'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'MRG_MYISAM'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'MyISAM'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'MEMORY'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'CSV'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'sha256_password'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'mysql_old_password'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'mysql_native_password'
2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'binlog'
2013-03-08 13:23:54 18979 [Note] /usr/local/mysql/bin/mysqld: Shutdown complete

130308 13:23:54 mysqld_safe mysqld from pid file /usr/local/mysql/data/server.myServerName.com.pid ended

The problem apparently comes from the [ERROR] lines, but as we haven't used --skip-locking ourselves, it's just confusing. If we try to log into mysql using mysql -r -p and set the password we used, it can't connect to the server, as the server is not started. Any ideas where this problem may come from?
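For what it's worth, that [ERROR] line usually points at a stale directive in the my.cnf carried over from the old install: skip-locking was renamed long ago and removed outright in MySQL 5.5, so 5.6 refuses to start when it sees it. A hedged my.cnf fix:

# old my.cnf line that 5.6 rejects:
#   skip-locking
# current spelling of the same option:
skip-external-locking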

SQL Server 2005 upgrade service pack via command line

Posted: 06 Apr 2013 11:01 PM PDT

I have been wracking my brain over upgrading SQL Server 2005 to SQL Server 2005 SP4. I have a script for installing new instances of SQL Server for all versions, and they all work wonderfully. The problem is that SQL Server 2012 can't upgrade SQL Server 2005 SP3 and below. So I have to install up to SQL Server 2005 SP4, AND THEN upgrade through SQL Server 2012, which is highly annoying.

I need to do this silently via the command line, but I cannot get it to work at all. Here are some samples I tried:

 "SQLSP4.exe /qb /ACTION=Upgrade /INSTANCENAME="FOOBAR""   "SQLSP4.exe /quiet /instancename="FOOBAR""  

among a couple of other variations. I could not find anything online at all. In fact, the one helpful thread ended by saying that Microsoft didn't have any support for this, but I'm not sure about that.

What can be done here? Does anyone have any experience in updating service packs via command line?

MySQL performance tuning + queries stuck on "Copying to tmp table"

Posted: 07 Apr 2013 09:01 AM PDT

The latter part of the question's title (queries stuck on "Copying to tmp table") has been addressed many times, and I have spent a fair amount of time researching it. I would appreciate it if you guys could help me come to a conclusion - especially with my particular server setup in mind.

Quick overview of server:
- Dedicated server with 2 cores and 64 GB RAM
- Only runs MySQL

The setup has not been tweaked at all, so the current config is probably way off in places. Hopefully your scolding will be educational.

The web application running on the server is a Magento site with 25,000+ products. The query giving the most headache is the one generating sitemaps.

Currently, the following query has been stuck for a little over an hour on "Copying to tmp table":

Note: I do not really need input on how to increase performance by optimizing this query; I would rather see how much time I can shave off the queries as they already exist.

SELECT DISTINCT `e`.`entity_id`, `ur`.`request_path` AS `url`, `stk`.`is_in_stock`
FROM `catalog_product_entity` AS `e`
INNER JOIN `catalog_product_website` AS `w` ON e.entity_id=w.product_id
LEFT JOIN `core_url_rewrite` AS `ur` ON e.entity_id=ur.product_id AND ur.category_id IS NULL AND ur.store_id='1' AND ur.is_system=1
INNER JOIN `catalog_category_product_index` AS `cat_index` ON e.entity_id=cat_index.product_id AND cat_index.store_id='1' AND cat_index.category_id IN ('2', '3', '68', '86', '145', '163', '182', '196', '198', '214', '249', '252', '285', '286', '288', '289', '290', '292', '549') AND cat_index.position!=0
INNER JOIN `cataloginventory_stock_item` AS `stk` ON e.entity_id=stk.product_id AND stk.is_in_stock=1
INNER JOIN `catalog_product_entity_int` AS `t1_visibility` ON e.entity_id=t1_visibility.entity_id AND t1_visibility.store_id=0
LEFT JOIN `catalog_product_entity_int` AS `t2_visibility` ON t1_visibility.entity_id = t2_visibility.entity_id AND t1_visibility.attribute_id = t2_visibility.attribute_id AND t2_visibility.store_id='1'
INNER JOIN `catalog_product_entity_int` AS `t1_status` ON e.entity_id=t1_status.entity_id AND t1_status.store_id=0
LEFT JOIN `catalog_product_entity_int` AS `t2_status` ON t1_status.entity_id = t2_status.entity_id AND t1_status.attribute_id = t2_status.attribute_id AND t2_status.store_id='1'
WHERE (w.website_id='1')
  AND (t1_visibility.attribute_id='102')
  AND ((IF(t2_visibility.value_id > 0, t2_visibility.value, t1_visibility.value)) IN (3, 2, 4))
  AND (t1_status.attribute_id='96')
  AND ((IF(t2_status.value_id > 0, t2_status.value, t1_status.value)) IN (1))

Relevant config:

Server buffers:

max_connections = 1500;
key_buffer_size = 22G;
innodb_buffer_pool_size = 16G;
innodb_additional_mem_pool_size = 2G;
innodb_log_buffer_size = 400M;
query_cache_size = 64M;

Per thread buffers:

read_buffer_size = 2M;
read_rnd_buffer_size = 16M;
sort_buffer_size = 128M;
thread_stack = 192K;
join_buffer_size = 8M;

Question: Do any of these variables seem way off to any of you?

The above memory limits would allow MySQL to use about 130% of the RAM physically installed on the system. Obviously, something must change.

According to the MySQLTuner Perl script, I am currently in danger of using:
Total buffers: 40.7G global + 28.2M per thread (1500 max threads)
Maximum possible memory usage: 82.0G (130% of installed RAM)

Question: Which of the above variables would give the biggest performance increase? Or put differently: is it more useful to increase the global server buffers or the per-thread buffers?

The two variables that seem to affect "Copying to tmp table" the most are:
- tmp_table_size
- max_heap_table_size

Mine are both set to 0.25G

Question: Any recommendations specifically for those two variables?
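A hedged sketch for experimenting with the pair (the two limits apply together and the smaller one wins; note that any implicit temp table containing BLOB or TEXT columns goes to disk regardless of these settings):

SET GLOBAL tmp_table_size      = 268435456;  -- 256M
SET GLOBAL max_heap_table_size = 268435456;

-- compare the disk-spill counters before and after
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';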

There are two suggested fixes that seem to get mentioned more than others:
- Kill process. Do flush table.
- Kill process. Do repair / optimize tables.

Question: Which of the two suggested solutions above do you think is the most viable?

how to dump a single stored procedure from a database

Posted: 07 Apr 2013 02:01 PM PDT

I need to create a dump file that contains a single stored procedure from a database, not all the routines from that database.
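mysqldump's --routines switch is all-or-nothing per schema, so one workable sketch is to pull the single definition yourself and save the output (schema and procedure names are placeholders):

SHOW CREATE PROCEDURE mydb.my_proc;

-- or fetch the body from the data dictionary
SELECT ROUTINE_DEFINITION
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_SCHEMA = 'mydb'
  AND ROUTINE_NAME   = 'my_proc'
  AND ROUTINE_TYPE   = 'PROCEDURE';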

How to find when the last reindexing happened in SQL Server?

Posted: 07 Apr 2013 04:01 AM PDT

I have set up a job for re-indexing, and I want to know the datetime when the last re-indexing happened.
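If the rebuild runs as a SQL Server Agent job, its history in msdb records this. A sketch, with the job name as a placeholder (agent_datetime is an undocumented msdb helper that combines the integer date and time columns):

SELECT TOP (1)
       j.name,
       msdb.dbo.agent_datetime(h.run_date, h.run_time) AS last_run
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
WHERE j.name = N'Reindex Job'
  AND h.step_id = 0                -- job outcome rows only
ORDER BY h.run_date DESC, h.run_time DESC;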

Deriving formulas for input/output

Posted: 07 Apr 2013 08:01 PM PDT

I'm currently enrolled in a DBS class and am having a problem with an assignment. I've searched around and have been unable to understand what it is I'm meant to be doing with this derivation formula.

A plant file with TREE-GENUS as the key field includes records with the following TREE-GENUS values: Tsuga, Ficus , Arbutus, Quercus, Melaleuca, Tristaniopsis, Cornus, Sequoiadendron, Lithocarpus, Liriodendron, Pittosporum.
Suppose that records with these search field values are inserted into a random (heap) file with a maximum of 3 records per block. Derive a formula for the expected number of disk I/O to scan these records and to search for a particular record

I've been using some software that was given with the assignment, and it also asks for the maximum number of blocks allowed, which is not given in the brief above. I'm not really sure how to derive a formula for this. I've assumed that because there are 3 records per block, the 11 records require 4 blocks, and that a random heap file uses 1 disk I/O per read/write.

If this is a larger topic than is worth explaining, a link to a few reliable pages would also be helpful.
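For orientation, the standard heap-file cost model goes roughly like this (hedged, since the course may define terms slightly differently): with n records and a blocking factor of bfr records per block,

b = \lceil n / \mathit{bfr} \rceil = \lceil 11 / 3 \rceil = 4

A full scan then reads all b = 4 blocks. Searching for a particular record reads on average b/2 = 2 blocks when the record exists (it is equally likely to be in any block), and all b = 4 blocks when it does not.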

innobackupex is failing while exporting backup

Posted: 07 Apr 2013 03:01 AM PDT

I am using a Perl script that works fine on another of my boxes; it takes individual schema backups and finally backs up all schemas together using Percona innobackupex.

I have the following information in the log (posting one day's log details):

--slave-info is used with --no-lock but without --safe-slave-backup. The binlog position cannot be consistent with the backup data.
Died at /apps/mysql/scripts/mysql_backup.pl line 214.

[Fri Oct 26 04:20:01 2012] [14670]
[Fri Oct 26 04:20:01 2012] [14670] Starting individual backups
[Fri Oct 26 04:20:01 2012] [14670]
[Fri Oct 26 04:20:01 2012] [14670] Backing up proddb
[Fri Oct 26 04:20:01 2012] [14670] Creating /tmp/mysql_backup-14670.fifo
[Fri Oct 26 04:20:01 2012] [14670] Starting /usr/local/bin/zipmt -s -t 4 -c - < /tmp/mysql_backup-14670.fifo > /apps/dbbackup/proddb02.backup-2012-10-26-042001-proddb.dmp.bz2
[Fri Oct 26 04:20:01 2012] [14670] Starting /usr/bin/innobackupex --slave-info --no-lock --stream=tar --databases proddb ./ > /tmp/mysql_backup-14670.fifo
[Fri Oct 26 04:20:02 2012] [14670] FAIL: /usr/bin/innobackupex --slave-info --no-lock --stream=tar --databases proddb ./ > /tmp/mysql_backup-14670.fifo failed 256

Full details are in /apps/dbbackup/backup.log.

Cause:
/usr/bin/innobackupex --slave-info --no-lock --stream=tar --databases proddb ./ > /tmp/mysql_backup-14670.fifo failed 256

The named pipes still exist in /tmp even after the backup has failed:

ls -lrth /tmp
=============
prw-r--r--. 1 root root 0 Oct 25 04:20 mysql_backup-18215.fifo
prw-r--r--. 1 root root 0 Oct 26 04:20 mysql_backup-14670.fifo
prw-r--r--. 1 root root 0 Oct 27 04:20 mysql_backup-11278.fifo
prw-r--r--. 1 root root 0 Oct 28 04:20 mysql_backup-7163.fifo
prw-r--r--. 1 root root 0 Oct 29 04:20 mysql_backup-4191.fifo
prw-r--r--. 1 root root 0 Oct 30 04:20 mysql_backup-595.fifo

Any idea, why the backup is failing?

Trying to create PostGIS database

Posted: 07 Apr 2013 05:34 AM PDT

I am trying to create a spatially enabled PostGIS database. I am following the PostGIS documentation, http://postgis.refractions.net/documentation/manual-1.5/ch02.html#id2648455.

Following the short version, I executed:

createdb yourdatabase
createlang plpgsql yourdatabase

But I did not execute the last three commands,

psql -d yourdatabase -f postgis.sql
psql -d yourdatabase -f postgis_comments.sql
psql -d yourdatabase -f spatial_ref_sys.sql

Is my database still spatially enabled? If not, can you tell me where these .sql files are located so I can execute those last commands? I am getting the following error:

postgis.sql: No such file or directory  

I am using Ubuntu 12.04

Same field but different contents

Posted: 07 Apr 2013 01:01 AM PDT

I'm building a database for a divination system. Just to make it easy, I'll explain it in terms of the i ching.

Every element of a hexagram is a different table. There's one for yin and yang, one for the 3 positions in a trigram, one for the trigrams, one for the hexagrams. There's also a table for the correspondences, which is linked to each of the other tables - correspondences for yin or yang, for the 3 positions, the trigrams, and the hexagrams. There is also a table with commentary, which also links to each of the other tables (except for the correspondence table).

So, the table for the trigrams, for example, might be structured like this:

TriID
TriName
Line1                    -FK (YinYangID)
Line2                    -FK (YinYangID)
Line3                    -FK (YinYangID)
TrigramMeaning
Correspondence           -FK (CorrespondencesID)
Commentary               -FK (CommentariesID)

With data like:

TriID  TriName  Line1  Line2  Line3  TriMeaning  Correspondence  Commentary
1      Qian     Yin    Yin    Yin    Receptive   1               1
2      Kun      Yang   Yang   Yang   Generation  2               2

The Correspondence table looks something like this

CorrID  Person  Season  Color  Animal  Number  Element
1       Father  Winter  Blue   Bear    3       Metal
2       Mother  Summer  Red    Deer    6       Earth

(I know that all my I Ching data's wrong...)

The idea is that the same correspondences are required for every table, so the correspondence table would have an entry for each of the other tables. CorrespondencesIDs 1 through 4 might apply to the YinYang table, for example (Old and New Yin, old and new Yang), and then 5 through 13 to each of the trigrams, and so on.

It seems like this would work just right. On the other hand, it feels intuitively like I'm doing something wrong and breaking some rules. Since every entry on the correspondence table will be different, it seems like there should maybe just be similar entries on every table:

TrigramPerson  TrigramSeason  TrigramColor  TrigramAnimal  

and then

HexagramPerson  HexagramSeason  HexagramColor  

and so on.

What is the smart / best way to structure the database? Apologies if this is a dumb question - I'm new to database design, and some of these concepts still confuse the heck out of me.
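One common way out of the "same correspondence columns on every table" dilemma is a supertype: every symbol (yin/yang line, position, trigram, hexagram) gets a row in one parent table, and correspondences and commentary reference that parent. A T-SQL-flavored sketch with invented names:

CREATE TABLE Symbol (
    SymbolId   INT IDENTITY PRIMARY KEY,
    SymbolType VARCHAR(20) NOT NULL   -- 'yinyang', 'position', 'trigram', 'hexagram'
);

-- one correspondence row per symbol, whatever kind of symbol it is
CREATE TABLE Correspondence (
    CorrId   INT IDENTITY PRIMARY KEY,
    SymbolId INT NOT NULL REFERENCES Symbol (SymbolId),
    Person   VARCHAR(50),
    Season   VARCHAR(50),
    Color    VARCHAR(50),
    Animal   VARCHAR(50),
    Number   INT,
    Element  VARCHAR(50)
);

Each concrete table (Trigram, Hexagram, and so on) then carries a SymbolId foreign key instead of its own correspondence columns.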
