Sunday, July 21, 2013

[how to] Calculate transactions per second on MySQL NDB Cluster



Calculate transactions per second on MySQL NDB Cluster

Posted: 21 Jul 2013 07:22 PM PDT

What is the formula to calculate the TPS (transactions per second) of MySQL NDB Cluster?

For example, I have 5 clients, each submitting 200 queries.
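A rough way to measure this on an SQL node (a sketch; Com_commit and Com_rollback only count transactions executed through that mysqld, so sum across all SQL nodes if the load is spread) is to sample the status counters before and after the run:

SHOW GLOBAL STATUS LIKE 'Com_commit';
SHOW GLOBAL STATUS LIKE 'Com_rollback';
SHOW GLOBAL STATUS LIKE 'Uptime';

-- TPS = (delta Com_commit + delta Com_rollback) / elapsed seconds.
-- Worked example with the numbers above: 5 clients x 200 queries = 1,000
-- statements; if the whole run takes 4 seconds, throughput is 1000 / 4 = 250/s.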

Is SERVICE_NAME forced to be unique per logical database within a network? Would the opposite make sense?

Posted: 21 Jul 2013 08:20 PM PDT

I would like to know your opinion on these two statements, to see if I have understood the concept correctly.

Putting RAC configuration aside, I am talking about logical databases; I understand RAC is a group of hosts serving the same logical database by means of analogous instances.

My question is about the uniqueness of SERVICE_NAME in a network. I also understand that two SERVICE_NAMEs can't point to one database, but I would like to know whether two different databases, each with its own unique DB_NAME and SID, may expose the same SERVICE_NAME, and whether that is likely to be useful in some way.

MySQL LIMIT with a huge offset returns zero rows

Posted: 21 Jul 2013 08:59 PM PDT

I have a table with about 1,000,000 rows. This query works well:

SELECT * FROM `articles`  WHERE `articles`.`hash` NOT  IN (  '1z8y'  )  LIMIT 2000,10   

But this query returns zero rows:

SELECT * FROM `articles`  WHERE `articles`.`hash` NOT  IN (  '1z8y'  )  LIMIT 800000,10   

What's the problem? Is there some configuration setting involved?
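One thing worth ruling out first (a diagnostic sketch): LIMIT 800000,10 can only return rows if at least 800,001 rows match the WHERE clause, so check how many actually qualify:

SELECT COUNT(*)
FROM `articles`
WHERE `articles`.`hash` NOT IN ('1z8y');

If the count comes back below 800,000, the empty result is expected behaviour rather than a server or configuration problem.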

Transpose two-dimensional array from n rows to 2 columns

Posted: 21 Jul 2013 02:34 PM PDT

Background

Using PostgreSQL 9.1, so WITH ORDINALITY (a 9.4 feature) is not available.

Problem

Looking to simplify the code that pivots a two-dimensional array.

Code

A working, overly-verbose example that illustrates the problem is:

SELECT
  u.aspect,
  u.preference
FROM (
  SELECT
    t.aspect_preference AS aspect,
    -- Skip every second row
    seq % 2 AS seq,
    lead( aspect_preference, 1 ) OVER (ORDER BY t.seq) AS preference
  FROM (
    SELECT
      unnest( '{ {"COLOUR_SCHEME", "RASPBERRY_BLISS"}, {"FONT", "TERMES_HEROS"}, {"LIST_LAYOUT", "BULLET_SNOWFLAKE"} }'::text[] ) aspect_preference,
      -- Maintain array order after unnesting to a result set
      generate_series( 1,
       (array_ndims( '{ {"COLOUR_SCHEME", "RASPBERRY_BLISS"}, {"FONT", "TERMES_HEROS"}, {"LIST_LAYOUT", "BULLET_SNOWFLAKE"} }'::text[] ) *
        array_length( '{ {"COLOUR_SCHEME", "RASPBERRY_BLISS"}, {"FONT", "TERMES_HEROS"}, {"LIST_LAYOUT", "BULLET_SNOWFLAKE"} }'::text[], 1 ) )
      ) seq
  ) t
  ORDER BY
    t.seq
) u
WHERE
  u.seq = 1

This produces the correct results, with the unnested array rows pivoted into the desired columns:

aspect          | preference
----------------+--------------------
COLOUR_SCHEME   | RASPBERRY_BLISS
FONT            | TERMES_HEROS
LIST_LAYOUT     | BULLET_SNOWFLAKE

Question

What is a simpler way to write this query (e.g., fewer lines of code, fewer nested SELECTs, or fewer references to the array)?
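One possible simplification (a minimal sketch relying on 9.1's support for set-returning functions in the SELECT list): reference the array literal once in the innermost subquery, generate the row subscripts with generate_series, and index both columns directly, avoiding the window function and the modulo filter entirely:

SELECT arr[i][1] AS aspect,
       arr[i][2] AS preference
FROM (
  SELECT arr,
         generate_series(1, array_length(arr, 1)) AS i
  FROM (
    SELECT '{ {"COLOUR_SCHEME", "RASPBERRY_BLISS"},
              {"FONT", "TERMES_HEROS"},
              {"LIST_LAYOUT", "BULLET_SNOWFLAKE"} }'::text[] AS arr
  ) t
) u;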

Replication Not Replicating

Posted: 21 Jul 2013 08:38 AM PDT

I'm trying to set up transactional replication between two SQL Server 2008 R2 databases and, no matter what I try, it just doesn't seem to work, and I'm not sure where to look for further information about what may be going wrong.

I've created the publication and subscription by running through the wizards on both servers, and everything appears to be up and running - I get successful messages on the Publisher that it's created the publication and added the articles to the publication etc. I also subscribe the subscription database successfully to this publication. The replication monitor shows everything OK - but the synchronisation status of both the publication and the subscription always show as "No replicated transactions available" even when I manually add / modify data in the publication database.

I've tried deleting the replication data from both servers, dropping and recreating the databases from backups, reinitializing the subscription database, etc., all with no joy.

Any advice on where to look for more info on what may be going on would be greatly appreciated.
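Two places worth checking from T-SQL (a diagnostic sketch using the standard replication system objects, not a guaranteed fix): whether the Log Reader Agent is harvesting transactions at all, and whether commands are queued in the distribution database:

-- On the publisher: throughput/latency counters per published database
EXEC sp_replcounters;

-- On the distributor: are any commands waiting to be delivered?
USE distribution;
SELECT COUNT(*) AS pending_commands FROM dbo.MSrepl_commands;

If sp_replcounters shows no replicated transactions, the problem is usually upstream of the Distribution Agent, at the Log Reader Agent or the publication database itself.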

Preventing mongodb crashes

Posted: 21 Jul 2013 12:09 PM PDT

I am using mongodb with the web server.

Recently, I observed that the mongodb server crashed after running out of disk space ("no space left") for journaling.

So, after some online help I could repair and restart the DB.

But, for production, I am worried now.

Is there a way in mongodb, when bound to one machine, to truncate the journal and prealloc files without a restart of mongodb or any other manual intervention?

PostgreSQL out of memory while trying to allocate ~500 MB

Posted: 21 Jul 2013 06:12 PM PDT

I started getting out-of-memory errors on a postgresql-9.1 server running on x64 Debian with 4 GB of RAM, dedicated to postgresql.

top returns something like this, which suggests the problem is not the available memory on the machine itself:

Mem:   4113160k total,  3235968k used,   877192k free,    71028k buffers
Swap:   379896k total,       20k used,   379876k free,  2865136k cached

And the config:

max_connections = 100           # (change requires restart)
ssl = true                      # (change requires restart)
shared_buffers = 1500MB         # min 128kB
work_mem = 2MB
maintenance_work_mem = 128MB
effective_cache_size = 3GB
log_min_duration_statement = 2000
log_lock_waits = on
track_counts = on
autovacuum = on

The query logged here is made by OpenFire, but the interesting part is that the OpenFire database is almost empty. Still, this could be part of a bigger transaction that takes so much memory (like trying to populate the list of users from LDAP). Right now no table has more than 50 rows.

2013-07-19 13:29:46 UTC ERROR:  out of memory
2013-07-19 13:29:46 UTC DETAIL:  Failed on request of size 536870912.
2013-07-19 13:29:46 UTC STATEMENT:  SELECT bytes from ofRRDs where id = $1
TopMemoryContext: 68688 total in 10 blocks; 2184 free (7 chunks); 66504 used
  TopTransactionContext: 8192 total in 1 blocks; 7328 free (0 chunks); 864 used
  TableSpace cache: 8192 total in 1 blocks; 3216 free (0 chunks); 4976 used
  Operator lookup cache: 24576 total in 2 blocks; 11888 free (5 chunks); 12688 used
  MessageContext: 8192 total in 1 blocks; 6976 free (0 chunks); 1216 used
  Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  smgr relation table: 24576 total in 2 blocks; 9808 free (4 chunks); 14768 used
  TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks); 32 used
  Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  PortalMemory: 8192 total in 1 blocks; 7888 free (0 chunks); 304 used
    PortalHeapMemory: 15360 total in 4 blocks; 6304 free (0 chunks); 9056 used
      ExecutorState: 452800624 total in 4 blocks; 18040 free (9 chunks); 452782584 used
        ExprContext: 8192 total in 1 blocks; 8160 free (0 chunks); 32 used
  Relcache by OID: 24576 total in 2 blocks; 13872 free (3 chunks); 10704 used
  CacheMemoryContext: 817840 total in 20 blocks; 53136 free (1 chunks); 764704 used
    unnamed prepared statement: 8192 total in 1 blocks; 1088 free (0 chunks); 7104 used
    ofpresence_pk: 2048 total in 1 blocks; 776 free (0 chunks); 1272 used
    pg_toast_470250828_index: 2048 total in 1 blocks; 664 free (0 chunks); 1384 used
    ofrrds_pk: 2048 total in 1 blocks; 776 free (0 chunks); 1272 used
    ofuserflag_etime_idx: 2048 total in 1 blocks; 776 free (0 chunks); 1272 used
    ofuserflag_stime_idx: 2048 total in 1 blocks; 776 free (0 chunks); 1272 used
    ofuserflag_pk: 2048 total in 1 blocks; 712 free (0 chunks); 1336 used
    pg_index_indrelid_index: 2048 total in 1 blocks; 728 free (0 chunks); 1320 used
    pg_db_role_setting_databaseid_rol_index: 2048 total in 1 blocks; 712 free (0 chunks); 1336 used
    pg_opclass_am_name_nsp_index: 3072 total in 2 blocks; 1544 free (2 chunks); 1528 used
    pg_foreign_data_wrapper_name_index: 3072 total in 2 blocks; 1768 free (4 chunks); 1304 used
    pg_enum_oid_index: 3072 total in 2 blocks; 1768 free (4 chunks); 1304 used
    pg_class_relname_nsp_index: 3072 total in 2 blocks; 1656 free (3 chunks); 1416 used
    pg_foreign_server_oid_index: 3072 total in 2 blocks; 1768 free (4 chunks); 1304 used
    pg_statistic_relid_att_inh_index: 3072 total in 2 blocks; 1496 free (2 chunks); 1576 used
    pg_cast_source_target_index: 3072 total in 2 blocks; 1656 free (3 chunks); 1416 used
    pg_language_name_index: 3072 total in 2 blocks; 1768 free (4 chunks); 1304 used
    pg_collation_oid_index: 3072 total in 2 blocks; 1768 free (4 chunks); 1304 used
    pg_amop_fam_strat_index: 3072 total in 2 blocks; 1496 free (2 chunks); 1576 used
    pg_index_indexrelid_index: 3072 total in 2 blocks; 1720 free (3 chunks); 1352 used
    pg_ts_template_tmplname_index: 3072 total in 2 blocks; 1704 free (4 chunks); 1368 used
    pg_ts_config_map_index: 3072 total in 2 blocks; 1544 free (2 chunks); 1528 used
    pg_opclass_oid_index: 3072 total in 2 blocks; 1720 free (3 chunks); 1352 used
    pg_foreign_data_wrapper_oid_index: 3072 total in 2 blocks; 1768 free (4 chunks); 1304 used
    pg_ts_dict_oid_index: 3072 total in 2 blocks; 1768 free (4 chunks); 1304 used
    pg_conversion_default_index: 3072 total in 2 blocks; 1544 free (2 chunks); 1528 used
    pg_operator_oprname_l_r_n_index: 3072 total in 2 blocks; 1496 free (2 chunks); 1576 used
    pg_trigger_tgrelid_tgname_index: 3072 total in 2 blocks; 1704 free (4 chunks); 1368 used
    pg_enum_typid_label_index: 3072 total in 2 blocks; 1704 free (4 chunks); 1368 used
    pg_ts_config_oid_index: 3072 total in 2 blocks; 1768 free (4 chunks); 1304 used
    pg_user_mapping_oid_index: 3072 total in 2 blocks; 1768 free (4 chunks); 1304 used
    pg_opfamily_am_name_nsp_index: 3072 total in 2 blocks; 1544 free (2 chunks); 1528 used
    pg_foreign_table_relid_index: 3072 total in 2 blocks; 1768 free (4 chunks); 1304 used
    pg_type_oid_index: 3072 total in 2 blocks; 1720 free (3 chunks); 1352 used
    pg_aggregate_fnoid_index: 3072 total in 2 blocks; 1768 free (4 chunks); 1304 used
    pg_constraint_oid_index: 3072 total in 2 blocks; 1768 free (4 chunks); 1304 used
    pg_rewrite_rel_rulename_index: 3072 total in 2 blocks; 1704 free (4 chunks); 1368 used
    pg_ts_parser_prsname_index: 3072 total in 2 blocks; 1704 free (4 chunks); 1368 used
    pg_ts_config_cfgname_index: 3072 total in 2 blocks; 1704 free (4 chunks); 1368 used
    pg_ts_parser_oid_index: 3072 total in 2 blocks; 1768 free (4 chunks); 1304 used
    pg_operator_oid_index: 3072 total in 2 blocks; 1720 free (3 chunks); 1352 used
    pg_namespace_nspname_index: 3072 total in 2 blocks; 1720 free (3 chunks); 1352 used
    pg_ts_template_oid_index: 3072 total in 2 blocks; 1768 free (4 chunks); 1304 used
    pg_amop_opr_fam_index: 3072 total in 2 blocks; 1496 free (2 chunks); 1576 used
    pg_default_acl_role_nsp_obj_index: 3072 total in 2 blocks; 1544 free (2 chunks); 1528 used
    pg_collation_name_enc_nsp_index: 3072 total in 2 blocks; 1544 free (2 chunks); 1528 used
    pg_ts_dict_dictname_index: 3072 total in 2 blocks; 1704 free (4 chunks); 1368 used
    pg_type_typname_nsp_index: 3072 total in 2 blocks; 1704 free (4 chunks); 1368 used
    pg_opfamily_oid_index: 3072 total in 2 blocks; 1768 free (4 chunks); 1304 used
    pg_class_oid_index: 3072 total in 2 blocks; 1720 free (3 chunks); 1352 used
    pg_proc_proname_args_nsp_index: 3072 total in 2 blocks; 1544 free (2 chunks); 1528 used
    pg_attribute_relid_attnum_index: 3072 total in 2 blocks; 1656 free (3 chunks); 1416 used
    pg_proc_oid_index: 3072 total in 2 blocks; 1720 free (3 chunks); 1352 used
    pg_language_oid_index: 3072 total in 2 blocks; 1768 free (4 chunks); 1304 used
    pg_namespace_oid_index: 3072 total in 2 blocks; 1720 free (3 chunks); 1352 used
    pg_amproc_fam_proc_index: 3072 total in 2 blocks; 1496 free (2 chunks); 1576 used
    pg_foreign_server_name_index: 3072 total in 2 blocks; 1768 free (4 chunks); 1304 used
    pg_attribute_relid_attnam_index: 3072 total in 2 blocks; 1704 free (4 chunks); 1368 used
    pg_conversion_oid_index: 3072 total in 2 blocks; 1768 free (4 chunks); 1304 used
    pg_user_mapping_user_server_index: 3072 total in 2 blocks; 1704 free (4 chunks); 1368 used
    pg_conversion_name_nsp_index: 3072 total in 2 blocks; 1704 free (4 chunks); 1368 used
    pg_authid_oid_index: 3072 total in 2 blocks; 1720 free (3 chunks); 1352 used
    pg_auth_members_member_role_index: 3072 total in 2 blocks; 1704 free (4 chunks); 1368 used
    pg_tablespace_oid_index: 3072 total in 2 blocks; 1720 free (3 chunks); 1352 used
    pg_database_datname_index: 3072 total in 2 blocks; 1720 free (3 chunks); 1352 used
    pg_auth_members_role_member_index: 3072 total in 2 blocks; 1704 free (4 chunks); 1368 used
    pg_database_oid_index: 3072 total in 2 blocks; 1720 free (3 chunks); 1352 used
    pg_authid_rolname_index: 3072 total in 2 blocks; 1720 free (3 chunks); 1352 used
  MdSmgr: 8192 total in 1 blocks; 7872 free (0 chunks); 320 used
  LOCALLOCK hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  Timezones: 83472 total in 2 blocks; 3744 free (0 chunks); 79728 used
  ErrorContext: 8192 total in 1 blocks; 8160 free (0 chunks); 32 used
2013-07-19 13:30:45 UTC ERROR:  out of memory
2013-07-19 13:30:45 UTC DETAIL:  Failed on request of size 536870912.
2013-07-19 13:30:45 UTC STATEMENT:  SELECT bytes from ofRRDs where id = $1
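The failed allocation (536870912 bytes = 512 MB) matches a single large value being read from ofRRDs.bytes; the ExecutorState context holds roughly that amount. A diagnostic sketch, assuming bytes is a bytea column, to see how big the stored values really are:

SELECT id, octet_length(bytes) AS value_size_bytes
FROM ofRRDs
ORDER BY octet_length(bytes) DESC
LIMIT 10;

If one row really is ~512 MB, the out-of-memory error is about that single value, not about shared_buffers or the table row counts.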

Should I shrink the log file? [duplicate]

Posted: 21 Jul 2013 07:37 AM PDT

This question already has an answer here:

My log file was too big, so I made its growth unrestricted. Should I shrink it with SQL Server Management Studio? What are the downsides, if any, of doing this?
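For reference, the shrink itself is a two-step sketch, assuming a FULL recovery model database named MyDatabase (names and the path are placeholders): back up the log first so the inactive portion can be released, then shrink to an explicit target rather than to zero:

USE MyDatabase;
BACKUP LOG MyDatabase TO DISK = N'X:\backups\MyDatabase_log.trn';  -- hypothetical path
DBCC SHRINKFILE (MyDatabase_log, 1024);  -- logical log file name; target size in MB

The main downside of repeated shrinking is that the log simply grows again, paying file-growth pauses and fragmentation each time, so settling on a sensible fixed size usually beats shrinking to the minimum.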

Need help understanding query plan: estimates vs. actuals [on hold]

Posted: 21 Jul 2013 07:34 AM PDT

I'd like to understand a section of the query plan that seems to mess up my query.

[Screenshot of the query plan]

The picture shows that there's a clustered index seek, followed by compute scalar, followed by sort.

The part that I do not understand is: if Compute Scalar's Estimated Rows are 327,561, then why is Sort's Estimated Number of Rows only 14? Does it assume Compute Scalar will only return 14 results even though its input is estimated at 327,561 rows?

I tried running UPDATE STATISTICS on the register_hdrs table WITH SAMPLE 100 PERCENT; it didn't seem to change the behavior at all.

The suggested indexes improve the performance of the clustered index seek, but the sort still estimates the number of rows at 14, so the query plan remains the same.

What I need it to do is pick a different path: there's another table in the query which, when picked, speeds up the query about 14-fold (it goes from running in 14-15 seconds to 1 second). I believe the under-estimate on the number of rows in the sort is the problem, and is why the slow plan gets picked.

I'm not looking to change the query, because the application generates it in a specific way and it wouldn't be possible to change it at this point. I'm looking into how to force SQL Server to give a better estimate for the rows in the sort, which I believe would result in SQL Server picking a different execution plan.
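Since the query text can't be edited, one hedged option is a plan guide, which attaches a hint to the exact statement the application sends without touching the application (the statement text below is a placeholder; it must match the application's SQL exactly):

EXEC sp_create_plan_guide
    @name = N'pg_register_hdrs_recompile',
    @stmt = N'SELECT ... FROM register_hdrs ...',  -- replace with the app's exact text
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = NULL,
    @hints = N'OPTION (RECOMPILE)';

OPTION (RECOMPILE) makes the optimizer re-estimate with the actual runtime values on each execution; if parameter sniffing is behind the 14-row estimate, this alone can flip the plan.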

Transaction Rollback on DROP DATABASE

Posted: 21 Jul 2013 07:44 AM PDT

Does MySQL use transactions for the DROP DATABASE statement (all tables use the InnoDB engine)?

Basically, I would like to know whether any data will be lost if I kill a DROP DATABASE query while it's running.

I've tested this with a DELETE on a table, and killing it rolled the data back, but I don't have a big enough database to test DROP DATABASE itself:

mysql> SELECT COUNT(*) FROM t1;
+----------+
| COUNT(*) |
+----------+
| 10000000 |
+----------+
1 row in set (8.57 sec)

mysql> DELETE FROM t1;

Another MySQL session:

mysql> SHOW PROCESSLIST;
+-----+------+-----------+------+---------+------+----------+-------------------+-----------+---------------+
| Id  | User | Host      | db   | Command | Time | State    | Info              | Rows_sent | Rows_examined |
+-----+------+-----------+------+---------+------+----------+-------------------+-----------+---------------+
| 211 | root | localhost | test | Query   |    0 | init     | SHOW PROCESSLIST  |         0 |             0 |
| 242 | root | localhost | test | Query   |    4 | updating | DELETE FROM t1    |         0 |        879266 |
+-----+------+-----------+------+---------+------+----------+-------------------+-----------+---------------+
2 rows in set (0.00 sec)

mysql> KILL 242;
Query OK, 0 rows affected (0.00 sec)

back to first session

mysql> SELECT COUNT(*) FROM t1;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id:    283
Current database: test

+----------+
| COUNT(*) |
+----------+
| 10000000 |
+----------+
1 row in set (6.91 sec)
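For what it's worth, DROP DATABASE and other DDL statements in MySQL cause an implicit commit, so unlike the DELETE above they cannot be rolled back by killing the session part-way; tables already dropped stay dropped. A quick demonstration sketch:

START TRANSACTION;
DROP TABLE t1;   -- DDL triggers an implicit commit
ROLLBACK;        -- too late: the drop is already committed
SHOW TABLES;     -- t1 is gone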

Open source tools for Oracle schema compares

Posted: 21 Jul 2013 10:23 AM PDT

I'm looking for an open source tool to generate DDL scripts; many tools call these synchronization scripts. We essentially want to compare schemas of different environments (i.e., DEV to QA, QA to PROD) to manage object migrations/deployments a little more easily. Do open source tools like this exist for Oracle?

Pulling autogrowth events by file name from default trace

Posted: 21 Jul 2013 07:47 AM PDT

I use the query below to get the auto-growth events that occurred for databases on a SQL Server instance.

DECLARE @trcfilename VARCHAR(1000);
SELECT @trcfilename = path FROM sys.traces WHERE is_default = 1
SELECT COUNT(*) as no_of_autogrowths,
       sum(duration/(1000*60)) as time_in_min,
       Filename
FROM ::fn_trace_gettable(@trcfilename, default)
WHERE (EventClass = 92 OR EventClass = 93)
GROUP BY Filename

It outputs the number of auto-growths, the time taken for auto-growth, and the logical name of the file. But I want the physical name of the file (the mdf and ldf file names) instead of the logical name. I don't know whether I can get the physical name from sys.traces; please help me with an alternate way to do it.
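One way to add the physical name (a sketch, assuming the default trace rows carry DatabaseID for event classes 92 and 93, which they do) is to join the trace output to sys.master_files on the database ID plus the logical file name:

DECLARE @trcfilename VARCHAR(1000);
SELECT @trcfilename = path FROM sys.traces WHERE is_default = 1;

SELECT COUNT(*) AS no_of_autogrowths,
       SUM(t.Duration / (1000 * 60)) AS time_in_min,
       t.FileName AS logical_name,
       mf.physical_name
FROM ::fn_trace_gettable(@trcfilename, DEFAULT) AS t
JOIN sys.master_files AS mf
  ON  mf.database_id = t.DatabaseID
  AND mf.name = t.FileName COLLATE database_default
WHERE t.EventClass IN (92, 93)
GROUP BY t.FileName, mf.physical_name;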

Domain users cannot access databases after migration

Posted: 21 Jul 2013 01:34 PM PDT

I migrated databases to new servers; however, the applications that were previously used with the databases are failing to load. I have changed the connection strings, etc. The jobs also seem to be failing. I have a domain account that is the job owner. However, when I try to execute the job under my username I get the following error:

Executed as user: NT AUTHORITY\SYSTEM. Login failed for user.....[SQLSTATE 28000] (Error 18456).

Is this related to domain users not having appropriate read and write access to the database? Also, how would I give all domain users permission to execute stored procedures?
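For the second question, one common pattern (a sketch; the database and domain group names are placeholders) is a database role holding a blanket EXECUTE grant:

USE MyDatabase;

CREATE ROLE db_executor;
GRANT EXECUTE TO db_executor;

-- Map the domain group into the database and add it to the role:
CREATE LOGIN [MYDOMAIN\Domain Users] FROM WINDOWS;
CREATE USER [MYDOMAIN\Domain Users] FOR LOGIN [MYDOMAIN\Domain Users];
EXEC sp_addrolemember 'db_executor', 'MYDOMAIN\Domain Users';

The login failure itself may be a separate issue: after a migration, logins referenced by jobs and applications often do not exist yet on the new server, or database users are orphaned from them.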

Creating a global temp table in MySQL

Posted: 21 Jul 2013 11:33 AM PDT

I am working on a MySQL stored procedure.

I need to create a temp table and I want to access this temp table whenever I execute this stored procedure.

I know we can't access the temp table in another request in MySQL.

Is there a way to create a temp table globally, or how can I access a temp table across multiple requests?
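MySQL has no global temporary tables, so a common workaround (a sketch; all names are hypothetical) is a permanent table keyed by the writer's connection ID, which any session, including later calls of the procedure, can read:

CREATE TABLE IF NOT EXISTS proc_scratch (
  session_id BIGINT UNSIGNED NOT NULL,  -- CONNECTION_ID() of the writer
  item_id    INT NOT NULL,
  payload    VARCHAR(255),
  KEY idx_session (session_id)
) ENGINE=InnoDB;

-- Inside the stored procedure:
DELETE FROM proc_scratch WHERE session_id = CONNECTION_ID();
INSERT INTO proc_scratch (session_id, item_id, payload)
VALUES (CONNECTION_ID(), 1, 'example');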

Which one to use? InnoDB Replication or MySQL Cluster?

Posted: 21 Jul 2013 08:33 AM PDT

I am building a URL shortener and I use InnoDB as the storage engine for link data. I will start with a single VPS containing both the application instance and the MySQL database instance on the same virtual machine. I want the application to be easily scalable: I want to be able to add new nodes and make the system perform better as easily as possible when needed.

I have been reading about MySQL Cluster and MySQL Replication, but I haven't been able to decide which one to use. You can imagine that a (popular) URL shortener will be both write and read intensive. What is the structure that you would use in such a case? Would you go for cluster or replication?

Then, based on the choice of cluster or replication, what is the infrastructure/configuration I am supposed to have in order to be able to expand from a single InnoDB-engined database to a cluster or replication structure? I want to start correctly; I don't want to be stuck in a situation where I cannot expand/improve the database system when I need to.
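For the replication route, starting correctly mostly means enabling binary logging and a server ID on the first box from day one, so a replica can be attached later without rebuilding; a minimal sketch (host and credentials are placeholders):

-- In my.cnf on the existing server, before going live:
--   [mysqld]
--   server-id = 1
--   log-bin   = mysql-bin

-- Later, on a newly added read replica:
CHANGE MASTER TO
  MASTER_HOST = 'db1.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;

This fits a read-heavy shortener well (reads fan out to replicas, writes stay on the master); NDB Cluster is a much bigger operational commitment and usually only pays off when you need write scaling and synchronous redundancy.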

Thanks a lot,

sr.

MongoDB problems recovering a member of the replica set

Posted: 21 Jul 2013 08:34 PM PDT

I have a sharded database with 2 replica sets (RS1 and RS2), each RS with 2 servers. I had a problem yesterday with one member of RS2: the mongod instance crashed, throwing an error. After that I tried to recover the member by making it sync with the other member of the replica set (it took a long time to finish the sync), and then I got the same error again:

Tue May  7 12:37:57.023 [rsSync] Fatal Assertion 16233
0xdcf361 0xd8f0d3 0xc03b0f 0xc21811 0xc218ad 0xc21b7c 0xe17cb9 0x7f57205f2851 0x7f571f99811d
 /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xdcf361]
 /usr/bin/mongod(_ZN5mongo13fassertFailedEi+0xa3) [0xd8f0d3]
 /usr/bin/mongod(_ZN5mongo11ReplSetImpl17syncDoInitialSyncEv+0x6f) [0xc03b0f]
 /usr/bin/mongod(_ZN5mongo11ReplSetImpl11_syncThreadEv+0x71) [0xc21811]
 /usr/bin/mongod(_ZN5mongo11ReplSetImpl10syncThreadEv+0x2d) [0xc218ad]
 /usr/bin/mongod(_ZN5mongo15startSyncThreadEv+0x6c) [0xc21b7c]
 /usr/bin/mongod() [0xe17cb9]
 /lib64/libpthread.so.0(+0x7851) [0x7f57205f2851]
 /lib64/libc.so.6(clone+0x6d) [0x7f571f99811d]
Tue May  7 12:37:57.155 [rsSync] ***aborting after fassert() failure

Tue May  7 12:37:57.155 Got signal: 6 (Aborted).

Tue May  7 12:37:57.159 Backtrace:
0xdcf361 0x6cf729 0x7f571f8e2920 0x7f571f8e28a5 0x7f571f8e4085 0xd8f10e 0xc03b0f 0xc21811 0xc218ad 0xc21b7c 0xe17cb9 0x7f57205f2851 0x7f571f99811d
 /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xdcf361]
 /usr/bin/mongod(_ZN5mongo10abruptQuitEi+0x399) [0x6cf729]
 /lib64/libc.so.6(+0x32920) [0x7f571f8e2920]
 /lib64/libc.so.6(gsignal+0x35) [0x7f571f8e28a5]
 /lib64/libc.so.6(abort+0x175) [0x7f571f8e4085]
 /usr/bin/mongod(_ZN5mongo13fassertFailedEi+0xde) [0xd8f10e]
 /usr/bin/mongod(_ZN5mongo11ReplSetImpl17syncDoInitialSyncEv+0x6f) [0xc03b0f]
 /usr/bin/mongod(_ZN5mongo11ReplSetImpl11_syncThreadEv+0x71) [0xc21811]
 /usr/bin/mongod(_ZN5mongo11ReplSetImpl10syncThreadEv+0x2d) [0xc218ad]
 /usr/bin/mongod(_ZN5mongo15startSyncThreadEv+0x6c) [0xc21b7c]
 /usr/bin/mongod() [0xe17cb9]
 /lib64/libpthread.so.0(+0x7851) [0x7f57205f2851]
 /lib64/libc.so.6(clone+0x6d) [0x7f571f99811d]

Any idea why this may be happening? How can I make this server sync and work? My last surviving server is now running as secondary; is there a way to make it primary for a while to get the data out of it?

Thanks in advance!

MySQL server crashed.

Posted: 21 Jul 2013 04:34 PM PDT

Help! I managed to crash MySQL last night. I am on a Mac using the native version that came with Mountain Lion. I was upgrading from 5.5 to 5.6. I have followed instructions in this forum to delete the installation, but trying to re-install 5.5 reports that there is a newer version and won't install. Trying to install 5.6 fails. I found this error in the console:

4/21/13 10:16:56.000 PM kernel[0]: exec of /Volumes/mysql-5.6.11-osx10.7-x86/mysql-5.6.11  osx10.7-x86.pkg/Contents/Resources/preflight denied since it was quarantined by TextEdit    and created without user consent, qtn-flags was 0x00000006  

Help me please? I am stuck and in a world of hurt and despair.

MySQL database drop insanely slow

Posted: 21 Jul 2013 12:33 PM PDT

I just installed MySQL 5.0.45-log Source distribution on my Debian 6.0.6 server.

I installed it under my user home directory like I'm used to doing.

But this time the queries are extremely slow to run.

Running a CREATE TABLE or a database drop takes ages. I can literally watch TV in the meantime.

So I did a profiling of the database drop statement.

mysql> SHOW PROFILES;
+----------+--------------+------------------------------+
| Query_ID | Duration     | Query                        |
+----------+--------------+------------------------------+
|        1 | 369.54719400 | drop database db_madeintouch |
|        2 |   0.00004600 | SELECT DATABASE()            |
+----------+--------------+------------------------------+
2 rows in set (0.00 sec)

mysql> SHOW PROFILE FOR QUERY 1;
+----------------------+-----------+
| Status               | Duration  |
+----------------------+-----------+
| (initialization)     | 0.000001  |
| checking permissions | 369.54705 |
| Opening table        | 0.000103  |
| System lock          | 0.000003  |
| Table lock           | 0.000018  |
| query end            | 0.000004  |
| freeing items        | 0.000004  |
| logging slow query   | 0.000002  |
+----------------------+-----------+
8 rows in set (0.00 sec)

We can see that the time taken for checking permissions is 369 seconds.

I also ran SHOW ENGINE INNODB STATUS.

mysql> show engine innodb status\G
*************************** 1. row ***************************
Status:
=====================================
130415 23:11:27 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 9 seconds
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 781, signal count 781
Mutex spin waits 0, rounds 8629, OS waits 231
RW-shared spins 379, OS waits 190; RW-excl spins 380, OS waits 342
------------
TRANSACTIONS
------------
Trx id counter 0 7599
Purge done for trx's n:o < 0 7430 undo n:o < 0 0
History list length 3
Total number of lock structs in row lock hash table 0
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0 0, not started, process no 14133, OS thread id 140617364518656
MySQL thread id 16, query id 1305 localhost stephane checking permissions
drop database db_madeintouch
---TRANSACTION 0 0, not started, process no 14133, OS thread id 140617364383488
MySQL thread id 13, query id 1307 localhost stephane
show engine innodb status
---TRANSACTION 0 7597, COMMITTED IN MEMORY, process no 14133, OS thread id 140617364518656 dropping table
COMMITTING , undo log entries 16
MySQL thread id 16, query id 1305 localhost stephane checking permissions
drop database db_madeintouch
--------
FILE I/O
--------
I/O thread 0 state: waiting for i/o request (insert buffer thread)
I/O thread 1 state: waiting for i/o request (log thread)
I/O thread 2 state: waiting for i/o request (read thread)
I/O thread 3 state: waiting for i/o request (write thread)
Pending normal aio reads: 0, aio writes: 0,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 1; buffer pool: 0
174 OS file reads, 3781 OS file writes, 2099 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 0, seg size 2,
0 inserts, 0 merged recs, 0 merges
Hash table size 17393, used cells 122, node heap has 1 buffer(s)
0.00 hash searches/s, 0.00 non-hash searches/s
---
LOG
---
Log sequence number 0 7801057
Log flushed up to   0 7798962
Last checkpoint at  0 7798962
1 pending log writes, 0 pending chkp writes
1535 log i/o's done, 0.00 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 22136914; in additional pool allocated 1048576
Buffer pool size   512
Free buffers       2
Database pages     509
Modified db pages  18
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages read 183, created 1444, written 6980
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
No buffer pool page gets since the last printout
--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
1 read views open inside InnoDB
Main thread process no. 14133, id 140617334142720, state: waiting for server activity
Number of rows inserted 0, updated 0, deleted 0, read 0
0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
----------------------------
END OF INNODB MONITOR OUTPUT
============================

1 row in set (0.00 sec)

And here is my environment.

mysql> SHOW VARIABLES;

auto_increment_increment = 1
auto_increment_offset = 1
automatic_sp_privileges = ON
back_log = 50
basedir = /home/stephane/programs/mysql-5.0.45/install/
binlog_cache_size = 32768
bulk_insert_buffer_size = 8388608
character_set_client = latin1
character_set_connection = latin1
character_set_database = latin1
character_set_filesystem = binary
character_set_results = latin1
character_set_server = latin1
character_set_system = utf8
character_sets_dir = /home/stephane/programs/mysql-5.0.45/install/share/mysql/charsets/
collation_connection = latin1_swedish_ci
collation_database = latin1_swedish_ci
collation_server = latin1_swedish_ci
completion_type = 0
concurrent_insert = 1
connect_timeout = 5
datadir = /home/stephane/programs/mysql/install/data/
date_format = %Y-%m-%d
datetime_format = %Y-%m-%d %H:%i:%s
default_week_format = 0
delay_key_write = ON
delayed_insert_limit = 100
delayed_insert_timeout = 300
delayed_queue_size = 1000
div_precision_increment = 4
engine_condition_pushdown = OFF
expire_logs_days = 0
flush = OFF
flush_time = 0
ft_boolean_syntax = + -><()~*:""&|
ft_max_word_len = 84
ft_min_word_len = 4
ft_query_expansion_limit = 20
ft_stopword_file = (built-in)
group_concat_max_len = 1024
have_archive = NO
have_bdb = NO
have_blackhole_engine = NO
have_compress = YES
have_crypt = YES
have_csv = NO
have_dynamic_loading = YES
have_example_engine = NO
have_federated_engine = NO
have_geometry = YES
have_innodb = YES
have_isam = NO
have_merge_engine = YES
have_ndbcluster = NO
have_openssl = NO
have_ssl = NO
have_query_cache = YES
have_raid = NO
have_rtree_keys = YES
have_symlink = YES
hostname = server1
init_connect =
init_file =
init_slave =
innodb_additional_mem_pool_size = 1048576
innodb_autoextend_increment = 8
innodb_buffer_pool_awe_mem_mb = 0
innodb_buffer_pool_size = 8388608
innodb_checksums = ON
innodb_commit_concurrency = 0
innodb_concurrency_tickets = 500
innodb_data_file_path = ibdata1:10M:autoextend
innodb_data_home_dir =
innodb_doublewrite = ON
innodb_fast_shutdown = 1
innodb_file_io_threads = 4
innodb_file_per_table = OFF
innodb_flush_log_at_trx_commit = 1
innodb_flush_method =
innodb_force_recovery = 0
innodb_lock_wait_timeout = 50
innodb_locks_unsafe_for_binlog = OFF
innodb_log_arch_dir =
innodb_log_archive = OFF
innodb_log_buffer_size = 1048576
innodb_log_file_size = 5242880
innodb_log_files_in_group = 2
innodb_log_group_home_dir = ./
innodb_max_dirty_pages_pct = 90
innodb_max_purge_lag = 0
innodb_mirrored_log_groups = 1
innodb_open_files = 300
innodb_rollback_on_timeout = OFF
innodb_support_xa = ON
innodb_sync_spin_loops = 20
innodb_table_locks = ON
innodb_thread_concurrency = 8
innodb_thread_sleep_delay = 10000
interactive_timeout = 28800
join_buffer_size = 131072
key_buffer_size = 16384
key_cache_age_threshold = 300
key_cache_block_size = 1024
key_cache_division_limit = 100
language = /home/stephane/programs/mysql-5.0.45/install/share/mysql/english/
large_files_support = ON
large_page_size = 0
large_pages = OFF
lc_time_names = en_US
license = GPL
local_infile = ON
locked_in_memory = OFF
log = ON
log_bin = OFF
log_bin_trust_function_creators = OFF
log_error = /home/stephane/programs/mysql/install/mysql.error.log
log_queries_not_using_indexes = OFF
log_slave_updates = OFF
log_slow_queries = ON
log_warnings = 1
long_query_time = 10
low_priority_updates = OFF
lower_case_file_system = OFF
lower_case_table_names = 0
max_allowed_packet = 1047552
max_binlog_cache_size = 18446744073709551615
max_binlog_size = 1073741824
max_connect_errors = 10
max_connections = 100
max_delayed_threads = 20
max_error_count = 64
max_heap_table_size = 16777216
max_insert_delayed_threads = 20
max_join_size = 18446744073709551615
max_length_for_sort_data = 1024
max_prepared_stmt_count = 16382
max_relay_log_size = 0
max_seeks_for_key = 18446744073709551615
max_sort_length = 1024
max_sp_recursion_depth = 0
max_tmp_tables = 32
max_user_connections = 0
max_write_lock_count = 18446744073709551615
multi_range_count = 256
myisam_data_pointer_size = 6
myisam_max_sort_file_size = 9223372036854775807
myisam_recover_options = OFF
myisam_repair_threads = 1
myisam_sort_buffer_size = 8388608
myisam_stats_method = nulls_unequal
net_buffer_length = 2048
net_read_timeout = 30
net_retry_count = 10
net_write_timeout = 60
new = OFF
old_passwords = OFF
open_files_limit = 1024
optimizer_prune_level = 1
optimizer_search_depth = 62
pid_file = /home/stephane/programs/mysql/install/data/server1.pid
port = 3306
preload_buffer_size = 32768
profiling = OFF
profiling_history_size = 15
protocol_version = 10
query_alloc_block_size = 8192
query_cache_limit = 1048576
query_cache_min_res_unit = 4096
query_cache_size = 0
query_cache_type = ON
query_cache_wlock_invalidate = OFF
query_prealloc_size = 8192
range_alloc_block_size = 2048
read_buffer_size = 258048
read_only = OFF
read_rnd_buffer_size = 258048
relay_log_purge = ON
relay_log_space_limit = 0
rpl_recovery_rank = 0
secure_auth = OFF
secure_file_priv =
server_id = 1
skip_external_locking = ON
skip_networking = OFF
skip_show_database = OFF
slave_compressed_protocol = OFF
slave_load_tmpdir = /tmp/
slave_net_timeout = 3600
slave_skip_errors = OFF
slave_transaction_retries = 10
slow_launch_time = 2
socket = /tmp/mysql.sock
sort_buffer_size = 65528
sql_big_selects = ON
sql_mode =
sql_notes = ON
sql_warnings = OFF
ssl_ca =
ssl_capath =
ssl_cert =
ssl_cipher =
ssl_key =
storage_engine = MyISAM
sync_binlog = 0
sync_frm = ON
system_time_zone = MSK
table_cache = 4
table_lock_wait_timeout = 50
table_type = MyISAM
thread_cache_size = 0
thread_stack = 131072
time_format = %H:%i:%s
time_zone = SYSTEM
timed_mutexes = OFF
tmp_table_size = 33554432
tmpdir = /tmp/
transaction_alloc_block_size = 8192
transaction_prealloc_size = 4096
tx_isolation = REPEATABLE-READ
updatable_views_with_limit = YES
version = 5.0.45-log
version_comment = Source distribution
version_compile_machine = x86_64
version_compile_os = unknown-linux-gnu
wait_timeout = 28800

225 rows in set (43.41 sec)

SQL Server replication conflicts after migration from 2000 to 2008

Posted: 21 Jul 2013 07:34 PM PDT

I got a suggestion over at Stack Overflow to post here; grateful for any and all help.

Please bear with me; I think this might take a while to explain. For many years now my company has hosted a solution for a client involving a complex web application with a smaller mobile solution, consisting of IIS 6 for the web app, SQL Server 2000 on its own server, and a Visual Studio 2005 Pocket PC app replicating with SQL Server via merge replication. This whole time the mobile solution has been very solid and did not require many updates, so we have replicated with sscesa20.dll the entire time.

We recently migrated this entire solution as follow:

  • Web Box - New Win Server 2008 R2 running IIS 7.5
  • SQL Server Box - New Win Server 2008 R2 running SQL Server 2008
  • Mobile app - small updates converted to Visual Studio 2008 and Windows for Mobile 5

The new web box received the 64 bit version of SQL Server Compact 3.5 tools and we now call sqlcesa35.dll from the mobile device to perform merge replication.

The basic idea of the entire process is that mobile field users get assigned safety inspections to perform on buildings. When a facility in the system needs an inspection, an inspection record is created via the web app in the DB. A status flag is set such that HOST_NAME() is utilized to make sure only records for a given inspector with this particular status will show up on their mobile device. The user can sync multiple times in the field, sending their data up to the SQL Server/web app and receiving more inspections down, or other updates such as lookup table data: typical merge replication here, and it has been working great for years. Once the field user changes the status of the inspection, it will travel from the mobile device to the SQL Server database and be removed from their iPaq. The inspection has additional workflow on the web app from here on out.

Now on to the problem. We migrated everything, publishing the exact same subset of tables with the same joins/filters. As far as I can tell, the settings on the publication are the same. However, when a user gets a new inspection down to the handheld for the very first time, enters data, then synchronizes back to the database, every row has a conflict. Since we have default conflict resolution, the publisher wins and the data collected in the field is lost. The inspection now looks blank, just as it did when it first came down to the mobile device. If the user syncs again, with or without changes on the mobile (subscriber), all is well. Any future changes from the mobile device are intact.

It is as if the web/db data is newer than the handheld data. I am 100% sure it is not. I have looked at table triggers, web app logic, etc. We were very careful not to include any application changes to the DB/web app/mobile app with respect to data manipulation during this migration.

Here is a summary of the order of operation:

New row created in the database >> Mobile user receives data >> mobile user updates data >> synchronizes - data is lost. Conflicts show up for all data lost.

From here on out any additional mobile changes are captured. Merge replication works in both directions flawlessly.
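Since the publisher wins every conflict, it may help to inspect what the merge agent actually logged for those rows (a sketch; the publication and conflict-table names are placeholders, the real conflict table name is generated per article):

EXEC sp_helpmergeconflictrows
     @publication    = N'MyPublication',
     @conflict_table = N'MSmerge_conflict_MyPublication_Inspections';

The losing rows and their reason text can hint at why the subscriber's first-sync changes are being treated as stale.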

Thanks for taking the time to read; please help. I am stuck after 3 days.

Running a TPC-C Benchmark Without sleep() i.e. key-in + think time

Posted: 21 Jul 2013 10:33 AM PDT

We are running a TPC-C benchmark against a PostgreSQL 9.2 server using the JdbcRunner 1.2 implementation. During the first few tests we were not getting a smooth TPS graph, even in the supposedly steady state (the rate would at times spike down to almost 0 from around 300). During these tests we had not specified the sleep times for transactions (different sleep times are recommended for different transaction types). So, by default, all agents (for example, 100) continuously ran a mix of five transaction types without any sleeps in between. Later, we found out that we could in fact specify the sleep times; and when we did, we could see a smoother graph without much variation.

So, the question is: could not specifying the sleep times be the real reason behind the bad results in the first few tests?

What utilities can I use to simulate I/O load for a DB restore and DBCC check?

Posted: 21 Jul 2013 04:49 PM PDT

I'm trying to figure out the peak load I can sustain on new hardware I procured for validating backups and performing DBCC checks. I've been using CrystalDiskMark to get throughput stats, which helped me benchmark sequential I/O for the copy/restore tasks. I'm having trouble gauging how much random I/O I can sustain for the DBCC check. I'm thinking about using Iometer and SQLIOSim, but want to know what configuration would work best to simulate a DBCC check.

The hardware I'm testing consists of one R720 with dual E5-2609s for 8 cores, 32 GB RAM, Windows 2008 R2 Standard, SQL Server 2008 R2 Standard with SP2, and a PowerVault 3620f with 24 15k SAS spindles hooked up to two dual port HBAs on the R720. I've been experimenting with 4, 8 and 12 spindle RAID 0 groups (I can afford to lose the fault tolerance as the DBs have a life expectancy of minutes as part of the testing process).

I'm thinking I can run multiple simultaneous DBCC checks with the above hardware without hitting disk contention. I have the option to upgrade the RAM to 64 GB and the O/S to Enterprise but probably can't upgrade the SQL to Enterprise due to licensing costs.

Any suggestions on how to determine the max random I/O for DBCC using iometer, sqliosim or another utility would be deeply appreciated.

Reindexing plan

Posted: 21 Jul 2013 07:38 AM PDT

I have a reindexing plan failing with the error message:

Refresh failed for PhysicalPartition '1'

(Details: SQL Server Enterprise edition running in cluster.)

Searching through BOL and other online resources gave no pointers at all. What could this be related to?
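One way to narrow it down is to bypass the maintenance plan and rebuild directly in T-SQL, which usually surfaces a more specific error than SMO's "Refresh failed" wrapper (a sketch; table and index names are placeholders):

ALTER INDEX ALL ON dbo.MyTable REBUILD;

-- If the failing object is partitioned, a single partition can be rebuilt in isolation:
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD PARTITION = 1;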

How to connect to a Database made by Oracle SQL Database?

Posted: 21 Jul 2013 06:34 PM PDT

So I am fairly new at this; please keep that in mind in your answers. I installed Oracle Database on my Windows PC. It came in two zip files. I installed it, and the online portion of it works fine. I can log in with username sys and password **. What I am trying to do is connect to this newly created database from another computer through SQL Developer. I have read that in order to do this, you need to change the hostname of the database from "localhost" to an IP address. How do I do that, and is there anything else I need to do to make this work?

I also found this LINK. Is this something I should do? I do not have a Domain though.

listener.ora

# listener.ora Network Configuration File:
# C:\app\hyderz\product\11.2.0\dbhome_2\network\admin\listener.ora
# Generated by Oracle configuration tools.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = CLRExtProc)
      (ORACLE_HOME = C:\app\hyderz\product\11.2.0\dbhome_2)
      (PROGRAM = extproc)
      (ENVS = "EXTPROC_DLLS=ONLY:C:\app\hyderz\product\11.2.0\dbhome_2\bin\oraclr11.dll")
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1522))
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1522))
    )
  )

ADR_BASE_LISTENER = C:\app\hyderz

tnsnames.ora

# tnsnames.ora Network Configuration File:
# C:\app\hyderz\product\11.2.0\dbhome_2\network\admin\tnsnames.ora
# Generated by Oracle configuration tools.

LISTENER_ORCL =
  (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1522))

ORACLR_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1522))
    )
    (CONNECT_DATA =
      (SID = CLRExtProc)
      (PRESENTATION = RO)
    )
  )

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )
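Assuming the goal is simply remote connectivity, the usual edit (a sketch; substitute your machine's real hostname or a static IP for 192.168.1.50) is to replace localhost in both files, then restart the listener:

# listener.ora -- change only the TCP ADDRESS line:
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.50)(PORT = 1522))

# tnsnames.ora on the connecting machine -- point ORCL at the same host:
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.50)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )

# then restart the listener: lsnrctl stop, lsnrctl start

In SQL Developer on the other computer, a basic connection to that host, port 1522, service name orcl should then work, firewall permitting.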

Backup plan for a MySQL NDB Cluster database (not InnoDB)

Posted: 21 Jul 2013 02:34 PM PDT

I have a database which will grow to more than 250 GB. All data is in the NDB engine (2 data nodes) and no other MySQL engine is used for data storage.

  • What are the best approaches or plans for MySQL Cluster NDB database backup?

  • Does MySQL Enterprise Backup (part of MySQL Cluster CGE) or any other tool perform backups for NDB? (See the sketch below.)

  • Can a beginner like me do these NDB backups easily, or does it need specialised skills?
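For the first two points: NDB has its own native online backup, driven from the management client rather than from mysqldump or MySQL Enterprise Backup. A minimal sketch:

ndb_mgm -e "START BACKUP"

This takes a cluster-wide snapshot of all data nodes (written to the BACKUP directory on each data node) and is restored with ndb_restore; note it covers NDB tables only, so schema and any non-NDB objects still need a mysqldump-style dump.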

Kind regards,

How do I know what indexes to create for a table

Posted: 21 Jul 2013 01:24 PM PDT

Is there a way to figure out which indexes are best to create for a table?
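As a rule of thumb, indexes follow the queries rather than the table: collect the WHERE, JOIN, and ORDER BY columns of your most frequent statements and index those, most selective equality columns first. A hypothetical sketch:

-- If the workload often runs:
--   SELECT * FROM orders WHERE customer_id = ? ORDER BY created_at DESC;
-- then a matching composite index is:
CREATE INDEX idx_orders_customer_created ON orders (customer_id, created_at);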

Deleting folders after Oracle upgrade

Posted: 21 Jul 2013 09:33 AM PDT

I upgraded from Oracle 10g to Oracle 11g in Windows Server 2003.

New Oracle home : C:\app\oracle\product\11.2.0\db_1

Old Oracle 10g was installed on: C:\oracle\product\10.2.0\

Questions

  • Is the 10g folder still useful?
  • Could I delete the 10g folder without a problem?

How to optimize a log process in MySQL?

Posted: 21 Jul 2013 03:34 PM PDT

In my project, I have about 100,000 users and can't control their behavior. Now, what I would like to do is log their activity for a certain task. Every activity is one record, which includes columns like user_id and some tag_ids.

The problem I have is that these tasks can in some cases go up to 1,000,000 per year per user. So if I stored all these activities in one table, it would obviously become very big (= slow).

What is best to do here? Create a single table per user (so I'd have 100,000 log tables) or put all these activities in one table? And what kind of engine should I use?

One important thing to note: although I simplified the situation a bit, this isn't a pure append-only log, because users can also change values in these records (like the tag_ids).
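Creating 100,000 tables is generally a dead end (the data dictionary and filesystem suffer long before the rows do); the more common pattern is one InnoDB table, partitioned by time, with the user leading a secondary index. A sketch with hypothetical column names:

CREATE TABLE activity_log (
  id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  user_id    INT UNSIGNED    NOT NULL,
  tag_id     INT UNSIGNED    NOT NULL,
  created_at DATETIME        NOT NULL,
  PRIMARY KEY (id, created_at),            -- partition key must be in every unique key
  KEY idx_user_time (user_id, created_at)  -- serves per-user range scans
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(created_at)) (
  PARTITION p2013 VALUES LESS THAN (TO_DAYS('2014-01-01')),
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);

InnoDB rather than MyISAM also keeps the later in-place updates to tag_id row-locked and crash-safe.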

How to Rename Table or Field With Merge Replication

Posted: 21 Jul 2013 07:51 PM PDT

I am using merge replication with SQL 2012. I am trying to rename a published table, and rename a published field.

These operations are not permitted in merge replication.

However, I can think of an action plan if these changes are needed. For instance, renaming a field would include the following steps (see the sketch after the list):

  1. Adding the new field
  2. Migrating the data from the old field to the new field
  3. Wait for a while, perhaps a month
  4. Delete the old field
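A hedged sketch of steps 1 and 2 (table and column names are placeholders); with merge replication's default "replicate schema changes" setting in SQL 2012, the ALTER TABLE issued at the publisher is itself propagated to subscribers:

-- Step 1: add the new column on the publisher
ALTER TABLE dbo.Inspections ADD InspectorNotes NVARCHAR(500) NULL;

-- Step 2: one-time backfill from the old column
UPDATE dbo.Inspections
SET InspectorNotes = LegacyNotes
WHERE InspectorNotes IS NULL;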

The problem is after step 2. We can force a software upgrade to the clients so that data is written into the new field.

But some data may have been modified in the old field at one of the subscribers before they ran the sync that delivers the schema change. This would also be before they were forced to do the update.

The migration could be done more than once, but by the time you get to step 4 it's hard to know which modifications have been made to the old field and which have been made to the new field.

Designing a database for a site that stores content from multiple services?

Posted: 21 Jul 2013 05:34 PM PDT

I'm building a site that implements David Allen's Getting Things Done that pulls in your email, Facebook newsfeed, tweets from those you follow on Twitter, and more services are planned. The problem is that I'm not a DBA, and I'm not sure how to design the database so that as I add features to the site, I won't have to artificially corrupt people's raw data for the purposes of storing it (for example, I want to add the ability to get RSS feeds sometime in the future, but I'm not sure how I'd do that without making a mess).

I've put down my initial ideas using DBDesigner 4, below, you'll find the diagram and the SQL.

A few notes to help clarify things.

  • The Accounts table is for storing authentication tokens and such for facebook, twitter, and such.
  • The messages table is incomplete.
  • The password fields in emailconfiguration and users are encrypted, users with a one-way hash, emailconfiguration with a two-way.
  • I'm using a MySQL database using the InnoDB storage engine on Amazon RDS.
  • Each project may have one context associated with it.
  • Each message may have a project and context, but it's not required.
  • The imap, smtp, and pop3 tables exist to remove duplication within email configuration.
  • Queries to this database are generated by Korma, a Clojure library.

Can someone please point me in the right direction? I'd also be willing to look at using a NoSQL database if suggested. Thank you for your time and consideration.

[Site database schema diagram]

Here's the SQL create script just in case anyone wants to see it.

CREATE TABLE Pop3 (
  domain VARCHAR NOT NULL,
  host VARCHAR NULL,
  port INTEGER UNSIGNED NULL,
  ssl BOOL NULL,
  PRIMARY KEY(domain)
)
TYPE=InnoDB;

CREATE TABLE Imap (
  domain VARCHAR NOT NULL,
  Host VARCHAR NULL,
  port INTEGER UNSIGNED NULL,
  ssl BOOL NULL,
  PRIMARY KEY(domain)
)
TYPE=InnoDB;

CREATE TABLE users (
  Username VARCHAR NOT NULL AUTO_INCREMENT,
  email VARCHAR NULL,
  password_2 VARCHAR NULL,
  activation VARCHAR NULL,
  is_active BOOL NULL,
  PRIMARY KEY(Username)
)
TYPE=InnoDB;

CREATE TABLE smtp (
  domain VARCHAR NOT NULL,
  host VARCHAR NULL,
  port INTEGER UNSIGNED NULL,
  ssl BOOL NULL,
  PRIMARY KEY(domain)
)
TYPE=InnoDB;

CREATE TABLE projects (
  id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  users_Username VARCHAR NOT NULL,
  name VARCHAR NULL,
  description TEXT NULL,
  context INTEGER UNSIGNED NULL,
  PRIMARY KEY(id, users_Username),
  INDEX projects_FKIndex1(users_Username),
  FOREIGN KEY(users_Username)
    REFERENCES users(Username)
      ON DELETE NO ACTION
      ON UPDATE NO ACTION
)
TYPE=InnoDB;

-- ------------------------------------------------------------
-- This is the table where access info for facebook, twitter, and others is stored.
-- ------------------------------------------------------------

CREATE TABLE Accountsi (
  id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  users_Username VARCHAR NOT NULL,
  owner INTEGER UNSIGNED NULL,
  service VARCHAR NULL,
  username VARCHAR NULL,
  send INTEGER UNSIGNED NULL,
  receive INTEGER UNSIGNED NULL,
  info TEXT NULL,
  PRIMARY KEY(id, users_Username),
  INDEX Accountsi_FKIndex1(users_Username),
  FOREIGN KEY(users_Username)
    REFERENCES users(Username)
      ON DELETE NO ACTION
      ON UPDATE NO ACTION
)
TYPE=InnoDB;

CREATE TABLE EmailConfiguration (
  id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  users_Username VARCHAR NOT NULL,
  owner INTEGER UNSIGNED NOT NULL,
  address VARCHAR NULL,
  psswd VARCHAR BINARY NULL,
  domain VARCHAR NULL,
  PRIMARY KEY(id, users_Username),
  INDEX EmailConfiguration_FKIndex1(users_Username),
  FOREIGN KEY(users_Username)
    REFERENCES users(Username)
      ON DELETE NO ACTION
      ON UPDATE NO ACTION
)
TYPE=InnoDB;

CREATE TABLE Messages (
  id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  users_Username VARCHAR NOT NULL,
  message_id VARCHAR NULL,
  user_id VARCHAR NULL,
  account INTEGER UNSIGNED NULL,
  service VARCHAR NULL,
  project INTEGER UNSIGNED NOT NULL,
  context INTEGER UNSIGNED NOT NULL,
  PRIMARY KEY(id, users_Username),
  INDEX Messages_FKIndex1(users_Username),
  FOREIGN KEY(users_Username)
    REFERENCES users(Username)
      ON DELETE NO ACTION
      ON UPDATE NO ACTION
)
TYPE=InnoDB;

CREATE TABLE context (
  id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  projects_id INTEGER UNSIGNED NOT NULL,
  projects_users_Username VARCHAR NOT NULL,
  users_Username VARCHAR NOT NULL,
  name VARCHAR NULL,
  description TEXT NULL,
  PRIMARY KEY(id, projects_id, projects_users_Username, users_Username),
  INDEX context_FKIndex1(projects_id, projects_users_Username),
  INDEX context_FKIndex2(users_Username),
  FOREIGN KEY(projects_id, projects_users_Username)
    REFERENCES projects(id, users_Username)
      ON DELETE NO ACTION
      ON UPDATE NO ACTION,
  FOREIGN KEY(users_Username)
    REFERENCES users(Username)
      ON DELETE NO ACTION
      ON UPDATE NO ACTION
)
TYPE=InnoDB;
