Sunday, August 25, 2013

[how to] Are duplicates ever acceptable in a database

Are duplicates ever acceptable in a database

Posted: 25 Aug 2013 07:54 PM PDT

A warehouse system I am working on allows inwards consignments [type = 0] and outwards consignments [type = 1]. The user selects multiple products for each consignment and then multiple warehouse locations for each product.

The system needs to keep track of stock coming in and allow reporting and checking for available stock etc.

When a consignment goes out, the system must select which products to use (based on first in first out or other custom logic). Products from an inwards consignment can go out on multiple outwards consignments. Products from an outwards consignment can come from multiple inwards consignments.

I have designed the following table structure:

products [id, description]

locations [id, description]

consignments [id, description, type]

consignment_products [id, consignment_id, product_id]

consignment_product_locations [id, consignment_product_id, qty, notes]

consignment_outwards [id, consignment_product_location_id, consignment_product_id, qty]

All the tables hold the info for both inwards and outwards consignments.

consignment_product_locations.qty = the qty that the user selected for the consignment

consignment_outwards.qty - the breakdown (detail) of the assigned inwards consignment product quantities.

  • Is this the best way to design the database?

  • consignment_product_locations.qty is used for inwards consignments. However, for outwards consignments, it will always be the sum of the matching consignment_outwards.qty values. Is it acceptable to store a calculated field in this case, to stay consistent with inwards consignments and to aid reporting? Or would it be better to fill in the quantity for inwards consignments but leave qty NULL for outwards consignments?

  • Would it be better to split consignment_product_locations into 2: consignment_product_locations_inwards [id, consignment_product_id, qty, notes] and consignment_product_locations_outwards [id, consignment_product_id, notes]?

(NB: this would make reporting harder)
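On the second bullet, here is a minimal sketch of the alternative - deriving the outwards quantity instead of storing it. This is illustrative only, assuming MySQL-flavoured SQL and the table names listed above:

CREATE VIEW consignment_product_location_qty AS
SELECT cpl.id,
       cpl.qty     AS stored_qty,    -- filled in for inwards consignments
       SUM(co.qty) AS outwards_qty   -- derived for outwards consignments
FROM consignment_product_locations cpl
LEFT JOIN consignment_outwards co
       ON co.consignment_product_location_id = cpl.id
GROUP BY cpl.id, cpl.qty;

Reporting could then read one column or the other depending on the consignment type, without a stored duplicate that can drift out of sync.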

What to do after deleting a million records

Posted: 25 Aug 2013 08:38 PM PDT

We are implementing a new feature in our system that will cause about a million records (each record is tiny, basically a GUID, a date, and four smallint fields) to be purged from a table every night. Basically it's a caching table, and once the data is 7 days old we do:

DELETE FROM scheduleCache WHERE schDateCreated < '2013-08-26'

This will run every night at 1am, and will purge about a million records every time it runs.

Is there anything I should be noting or doing for a table like this? Any properties I should put on the table, or any routines I should run regularly to "clean up"? I've never dealt with a table like this before.

The table has a single clustered index (GUID + one of the smallint fields), and we have a weekly index rebuild that runs Sunday mornings.
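As a point of comparison (not part of the original question), large purges like this are often done in smaller batches so the delete does not hold long locks or bloat the transaction log. A rough T-SQL sketch using the table above; the batch size is an assumption:

DECLARE @rows int;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    -- Delete at most 50,000 rows per iteration, repeating until nothing is left.
    DELETE TOP (50000) FROM scheduleCache
    WHERE schDateCreated < DATEADD(DAY, -7, GETDATE());
    SET @rows = @@ROWCOUNT;
END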

MySQL 5.6 DateTime Incorrect datetime value: '2013-08-25T17:00:00+00:00' with Error Code 1292

Posted: 25 Aug 2013 03:45 PM PDT

I'm using MySQL 5.6 and I have a program that runs the following SQL statement: UPDATE m_table SET s_time = '2013-08-25T17:00:00+00:00' WHERE id = '123' against my database.

Unfortunately, I get the following error: Incorrect datetime value: '2013-08-25T17:00:00+00:00' for column 's_time' at row 1

The datatype for s_time is DateTime.

I have already attempted to set the allow_invalid_dates property using the workbench.

Can anyone understand and please explain this error to me? I know that if I manually change the statement to UPDATE m_table SET s_time = '2013-08-25 17:00:00' WHERE id = '123', the statement works.

Unfortunately, I cannot modify the program that supplies the SQL statement (which I'm told is valid by the creator of the program) and I also cannot understand what the +00:00 symbolises.

Thanks

MySQL / Fusion IO Configuration Question

Posted: 25 Aug 2013 02:33 PM PDT

I need a little configuration optimization. I think what I have is pretty good but feel like there's still room for improvement.

Current Config

[mysqld]
user=mysql
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock

#innodb
innodb_log_file_size = 256M
innodb_log_buffer_size = 32M

innodb_read_io_threads=8
innodb_write_io_threads=8

innodb_flush_log_at_trx_commit = 2
innodb_lock_wait_timeout=50
innodb_file_per_table
innodb_buffer_pool_size=16G
innodb_buffer_pool_instances=8
innodb_io_capacity=10000
#eliminating double buffering
innodb_flush_method = O_DIRECT
flush_time=86400

# Disabling symbolic-links is recommended to prevent assorted security risks;
# to do so, uncomment this line:
# symbolic-links=0
#tmpdir=/dev/shm
#tmpdir=/usr/tmpfs
skip-name-resolve
#skip-locking
#safe-show-database
query_cache_limit=4M
query_cache_size=256M
sort_buffer_size=8M
read_rnd_buffer_size=1M
max_connections=5000
interactive_timeout=60
wait_timeout=300
connect_timeout=30
thread_cache_size=32
key_buffer=124M
tmp_table_size=4096M
max_heap_table_size=256M
join_buffer=16M
max_connect_errors=2000
table_cache=2048
thread_concurrency=12
long_query_time=5
log-slow-queries=/var/log/mysql-slow.log
#table_definition_cache=384
max_allowed_packet=1024M
#server-id=20
#log-bin=mysql-bin
#expire_logs_days=10

event_scheduler=ON

#master-host =
#master-user =
#master-password =
#master-port = 3306

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

Dedicated Database server specs

  • VMWare VM
  • Dual Quad Xeon X5680 @ 3.33GHz (8 vCPU)
  • 24gb RAM
  • Fusion IO mounted on /var/lib/mysql
  • Rough total of 500 tables between all databases

I know this kind of question really requires eyes on the database server, but I will gladly append information to this question as needed in order to get a much better configuration.

Thanks for the help in advance.

Is there a way to implement a cross-database task on MSSQL 2012 with the Availability Groups feature?

Posted: 25 Aug 2013 01:15 PM PDT

We use SQL Server 2012 and its new Availability Groups (AG) feature. There is a task for moving old data of some tables from one database to another database. Both databases are included into different availability groups. Previously (before using the AG feature) the task was resolved by adding the second server instance as a linked server (sp_addlinkedserver) and executing a distributed transaction in the following way:

  1. begin transaction
  2. insert old data into server2.table2 from server1.table1
  3. delete old data from server1.table1
  4. commit transaction
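Sketched in T-SQL, the previous linked-server approach looked roughly like this (server, table, and column names below are placeholders, not taken from the original post):

DECLARE @cutoff datetime;
SET @cutoff = '20130101';  -- placeholder cut-off date

BEGIN DISTRIBUTED TRANSACTION;

    INSERT INTO [server2].[ArchiveDb].dbo.table2 (id, payload, created_at)
    SELECT id, payload, created_at
    FROM dbo.table1
    WHERE created_at < @cutoff;

    DELETE FROM dbo.table1
    WHERE created_at < @cutoff;

COMMIT TRANSACTION;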

Unfortunately, distributed transactions are not supported for AG because databases may become inconsistent in case of failover (http://technet.microsoft.com/en-us/library/ms366279.aspx).

Is there some way to implement this task while keeping the AG feature and without implementing the rollback logic in case of exceptions?

Dynamic form from a database

Posted: 25 Aug 2013 12:45 PM PDT

I am considering developing a dynamic form so that all the fields and validation types can be stored in the database. For example I could do a table design something like this:

Field ID  | Field title | Field Type | Values
1         | Name        | Text       |
2         | Gender      | Radio      | Male,Female
3         | Location    | Text       |
4         | street      | Text       |

I could then query the above table and generate HTML accordingly. The reason I am doing this is that each network/category can have different fields or the same fields. I also want the admin to have the ability to add more fields on the back-end so they will appear on the add-sale page.

If a user clicks the submit button to submit an order, how do I populate all the fields in the order table?

It is going to be problematic: you have to make sure all the fields are already included in the order table whenever you add more fields from the back-end. Is there a better way?
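One common way to handle this (a hedged sketch, not from the original question) is to keep the submitted values in a separate key-value table instead of adding columns to the order table. The table and column names below are assumptions:

CREATE TABLE order_field_values (
  order_id  INT NOT NULL,   -- FK to the orders table
  field_id  INT NOT NULL,   -- FK to the field-definition table above
  value     TEXT,           -- the value the user submitted
  PRIMARY KEY (order_id, field_id)
);

-- On submit, insert one row per rendered field:
INSERT INTO order_field_values (order_id, field_id, value)
VALUES (1001, 2, 'Male');

New fields added by the admin then need no schema change; the trade-off is that reporting queries have to pivot or join this table.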

Restoring a database to a point in time: Database I want not showing up in dropdown list

Posted: 25 Aug 2013 12:58 PM PDT

I want to restore to a point in time from a database that is already attached. When I try to do this using SSMS I cannot find the database I want in the dropdown list, but oddly it has a couple of databases in the list that have been deleted.

I tried closing/reopening SSMS in hopes that it would refresh, but no such luck.

Anyone know why the database is not showing up in the "From database:" drop down list?

EDIT:

If I try to restore a backup to a point in time then no backup sets appear for me to select. However, if I restore the backup to the most recent point in time and then restore the transaction log then I'm able to select a point in time when restoring the transaction log. But then when I query the database I find that it's still restored to the most recent point in time from the database backup rather than the time I selected for the transaction log.

How do I restore to a point in time? The database uses the full recovery model, but I'm not sure if I should be doing anything else as part of the backup to be able to restore to a point in time. Any advice would be greatly appreciated!
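For reference, a point-in-time restore is normally done in T-SQL by restoring the full backup WITH NORECOVERY and then applying the log backup(s) WITH STOPAT. The sketch below is illustrative only; database and file names are placeholders:

RESTORE DATABASE MyDb
    FROM DISK = N'C:\Backups\MyDb_full.bak'
    WITH NORECOVERY, REPLACE;

RESTORE LOG MyDb
    FROM DISK = N'C:\Backups\MyDb_log.trn'
    WITH STOPAT = '2013-08-25T10:30:00', RECOVERY;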

Mysql encrypted database contents by proxy?

Posted: 25 Aug 2013 10:39 AM PDT

Forgive the amateur nature of this question. I work with WordPress, often transforming the vanilla flavour into custom-made versions for clients.

Increasingly, clients require more and more marketing information, which often includes recording traffic source conversions, contact data, measuring performance by user, etc. The amount of lead generation happening on these sites is increasing too.

I looked at some solutions, including encrypting the information, but that would always mean hacking each and every plugin, and of course the WordPress core, which is something I want to avoid. There is also the same underlying problem: the decryption files would still be on the server, giving a hacker the means to decrypt (at least for the easy solutions).

So my question:

Is there a better server-based utility where I can simply point the database path, username, and password at some utility that encrypts and decrypts information in another database as data is passed to it?

This would be a tidy way of encrypting this information. I could specify a database elsewhere with much harder security and specify some sort of proxy database setting in the wp file.

Any ideas? What I am looking for is a super clean and easy way for very basic server admins to increase the encryption of data inside WordPress in a universal method, i.e. one that works for all data, with no need to modify any WordPress plugins or core.

Force SQL Server database speed [on hold]

Posted: 25 Aug 2013 12:30 PM PDT

How can I optimize my database speed? Can I disable some functions for it? I'm using indexes and stored procedures. Are there more performance tools?

Sphinx: Indexer causes MySQL to hang

Posted: 25 Aug 2013 02:09 PM PDT

I'm not sure what's going on. I run indexer --all --rotate; when it finishes, MySQL hangs and stops accepting new connections. From my observation, as soon as the indexer finishes, all UPDATE, INSERT, and DELETE queries go into the "query end" state.

  • The MySQL tables are not corrupt

  • I'm using Percona MySQL 5.6.12-56

  • The tables are InnoDB

  • I tried installing Sphinx from source and from RPM, and also tried Sphinx 2.1.1 and Sphinx 2.0.8

indexer --all --rotate
Sphinx 2.1.1-beta (rel21-r3701)
Copyright (c) 2001-2013, Andrew Aksyonoff
Copyright (c) 2008-2013, Sphinx Technologies Inc (http://sphinxsearch.com)

using config file '/etc/sphinx/sphinx.conf'...
indexing index 'online'...
collected 27114 docs, 99.0 MB
sorted 258.8 Mhits, 100.0% done
total 27114 docs, 98993190 bytes
total 119.609 sec, 827633 bytes/sec, 226.68 docs/sec
total 21 reads, 4.497 sec, 53362.9 kb/call avg, 214.1 msec/call avg
total 2510 writes, 3.210 sec, 968.1 kb/call avg, 1.2 msec/call avg
rotating indices: successfully sent SIGHUP to searchd (pid=12773).

processlist when it hangs:

Id     User      Host       db        Command  Time  State                            Info
31891  forum_DB  localhost  forum_DB  Query    346   query end                        INSERT INTO ibf_sessions (`id`,`member_name`,`member_id$
31905  forum_DB  localhost  forum_DB  Query    346   query end                        DELETE FROM ibf_sessions WHERE (id='yandex=95108240250_$
31964  forum_DB  localhost  forum_DB  Query    345   query end                        INSERT INTO ibf_sessions (`id`,`member_name`,`member_id$
32062  forum_DB  localhost  forum_DB  Query    343   query end                        INSERT INTO ibf_topic_views (`views_tid`) VALUES(599181$
32077  forum_DB  localhost  forum_DB  Query    343   query end                        INSERT INTO ibf_topic_views (`views_tid`) VALUES(599181$
32353  forum_DB  localhost  forum_DB  Query    338   query end                        INSERT INTO ibf_core_like_cache (`like_cache_id`,`like_$
32443  forum_DB  localhost  forum_DB  Query    336   query end                        INSERT INTO ibf_core_like_cache (`like_cache_id`,`like_$
32450  forum_DB  localhost  forum_DB  Query    336   query end                        INSERT INTO ibf_sessions (`id`,`member_name`,`member_id$
32518  forum_DB  localhost  forum_DB  Query    335   query end                        INSERT INTO ibf_sessions (`id`,`member_name`,`member_id$
32617  forum_DB  localhost  forum_DB  Query    333   query end                        INSERT INTO ibf_core_like_cache (`like_cache_id`,`like_$
32642  forum_DB  localhost  forum_DB  Query    332   query end                        INSERT INTO ibf_sessions (`id`,`member_name`,`member_i
...
37207  online    localhost  online    Query    247   Waiting for query cache lock     SELECT id, short_story, title, date, alt_name, category$
37216  forum_DB  localhost  forum_DB  Query    247   query end                        INSERT INTO ibf_sessions (`id`,`member_name`,`member_id$
37228  online    localhost  online    Query    247   Waiting for query cache lock     SELECT id, short_story, title, date, alt_name, category$
37232  online    localhost  online    Query    247   System lock                      SELECT id, autor, date, short_story, SUBSTRING(full_story, 1, 15) as fu$
37239  online    localhost  online    Query    247   FULLTEXT initialization          SELECT id, short_story, title, date, alt_name, category, flag F$
37243  music     localhost  music     Query    247   Waiting for query cache lock     TRUNCATE TABLE dle_login_log    0   0
37250  online    localhost  online    Query    246   Sending data                     SELECT COUNT(*) as count FROM dle_post WHERE approve=1 AND allow_main=1$
37253  files     localhost  files     Query    246   Waiting for query cache lock     TRUNCATE TABLE dle_views        0   0
37264  music     localhost  music     Query    246   Waiting for table metadata lock  TRUNCATE TABLE dle_login_log    0   0
37271  files     localhost  files     Query    245   Waiting for table metadata lock  SELECT COUNT(*) as count, news_id FROM dle_views GROUP $
37279  online    localhost  online    Query    245   Sending data                     SELECT COUNT(*) as count FROM dle_post WHERE approve=1 AND allow_main=1$
37288  files     localhost  files     Query    244   Waiting for table metadata lock  SELECT COUNT(*) as count, news_id FROM dle_views GROUP $
37289  online    localhost  online    Query    244   FULLTEXT initialization          SELECT id, short_story, title, date, alt_name, category, flag F$
37291  files     localhost  files     Query    244   Waiting for table metadata lock  SELECT COUNT(*) as count, news_id FROM dle_views GROUP $
37292  online    localhost  online    Query    244   Waiting for query cache lock     TRUNCATE TABLE dle_login_log    0   0
37296  online    localhost  online    Query    244   Sending data                     SELECT COUNT(*) as count FROM dle_post WHERE approve=1 AND ...
...

cat processlist-2013-08-25-11-52.log | wc -l
352

sphinx.conf

source online_posts
{
        type                    = mysql

        sql_host                =
        sql_user                =
        sql_pass                =
        sql_db                  = online_test
        sql_port                = 3306  # optional, default is 3306

        sql_query               = \
                SELECT * FROM post

        #sql_attr_uint          = group_id
        sql_attr_timestamp      = date

        sql_query_pre = SET NAMES utf8
        sql_query_pre = SET CHARACTER SET utf8
        sql_query_pre = SET SESSION query_cache_type=OFF

        sql_query_info          = SELECT * FROM post WHERE id=$id
}

index online
{
        source                  = online_posts
        path                    = /var/lib/sphinx/online
        docinfo                 = extern
        charset_type            = utf-8
        morphology              = stem_enru

        min_word_len            = 2
        min_prefix_len          = 0
        min_infix_len           = 2

        charset_table = 0..9, A..Z->a..z, _, a..z, U+410..U+42F->U+430..U+44F, U+430..U+44F

        enable_star = 1
}

indexer
{
        mem_limit       = 512M
}

searchd
{
        listen          = 9312
        listen          = 9306:mysql41
        log             = /var/log/sphinx/searchd.log
        query_log       = /var/log/sphinx/query.log
        read_timeout    = 5
        max_children    = 30
        pid_file        = /var/run/sphinx/searchd.pid
        max_matches     = 1000
        seamless_rotate = 1
        preopen_indexes = 1
        unlink_old      = 1
        workers         = threads # for RT to work
        binlog_path     = /var/lib/sphinx/
}

Every time I run the indexer I get the following in /var/log/mysql.log:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%$
syntax error, unexpected '-', expecting FTS_TEXT or FTS_TERM or FTS_NUMB or '('
syntax error, unexpected '-', expecting FTS_TEXT or FTS_TERM or FTS_NUMB or '('
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%$

It looks like others have this problem as well, but I don't understand how it was solved: http://sphinxsearch.com/forum/view.html?id=11072

Restrict access to rows by username

Posted: 25 Aug 2013 11:32 AM PDT

In my database I store data of different users (e.g. addresses or invoices) in the corresponding tables. I want to make sure that a logged-in user only has access to its own data in the tables (so that a user cannot read, e.g., an address of another user).

Currently this is done in the application accessing the mysql server.

Because the application will be split into multiple independent parts, written in different languages, I'm looking for a solution that is closer to the database; otherwise I need to make sure that the access rules are identical in every application.

While I also have the alternative of a middleware on my list, I'm looking for an in-database solution in the first place.

I already have a structure running in a test environment (the version below is shortened for illustration). But I would like to know if there is a better solution to achieve this.

I already know that a VIEW with the algorithm MERGE is limited in what can be in the query and which JOINs can be done in the VIEW, if the VIEW is to stay in a state where INSERT and UPDATE queries are still available.

Tables

CREATE TABLE IF NOT EXISTS `User` (
  `_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `mysqluser` varchar(120) COLLATE utf8_unicode_ci NOT NULL,
  PRIMARY KEY (`_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

CREATE TABLE IF NOT EXISTS `_Address` (
  `_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `owner` int(10) unsigned NOT NULL,
  PRIMARY KEY (`_id`),
  KEY `owner` (`owner`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

CREATE TABLE IF NOT EXISTS `_Invoice` (
  `_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `owner` int(10) unsigned NOT NULL,
  `address` int(10) unsigned NOT NULL,
  PRIMARY KEY (`_id`),
  KEY `owner` (`owner`),
  KEY `address` (`address`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

Views

CREATE ALGORITHM=MERGE DEFINER=`cm_root`@`localhost` SQL SECURITY DEFINER VIEW `Address` AS
  SELECT `a`.`_id` AS `_id`
    FROM `_Address` `a` JOIN `User` `u` ON (`a`.`owner` = `u`.`_id`)
   WHERE (`u`.`mysqluser` = substring_index(user(),'@',1));

CREATE ALGORITHM=MERGE DEFINER=`cm_root`@`localhost` SQL SECURITY DEFINER VIEW `Invoice` AS
  SELECT `a`.`_id` AS `_id`, `a`.`address` AS `address`
    FROM `_Invoice` `a` JOIN `User` `u` ON (`a`.`owner` = `u`.`_id`)
   WHERE (`u`.`mysqluser` = substring_index(user(),'@',1));

Constraints

ALTER TABLE `_Address`
  ADD CONSTRAINT `_owner_address_fk` FOREIGN KEY (`owner`) REFERENCES `User` (`_id`) ON DELETE CASCADE ON UPDATE CASCADE;

ALTER TABLE `_Invoice`
  ADD CONSTRAINT `_owner_invoice_fk` FOREIGN KEY (`owner`) REFERENCES `User` (`_id`) ON DELETE CASCADE ON UPDATE CASCADE,
  ADD CONSTRAINT `_address_fk` FOREIGN KEY (`address`) REFERENCES `_Address` (`_id`) ON DELETE CASCADE ON UPDATE CASCADE;
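For completeness - and this is an assumption about the setup, not something stated above - this scheme only restricts anything if the application logins are granted privileges on the views and not on the underlying tables, for example:

-- Hedged sketch; 'appdb' and the user name are placeholders.
CREATE USER 'alice'@'localhost' IDENTIFIED BY 'secret';
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.Address TO 'alice'@'localhost';
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.Invoice TO 'alice'@'localhost';
-- deliberately no grants on appdb._Address or appdb._Invoice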

DBCC checkdb showing failure after a restore

Posted: 25 Aug 2013 03:38 PM PDT

We restored a backup into a new database. After running a few delete queries we received the following message:

Msg 824, Level 24, State 2, Line 1 SQL Server detected a logical consistency-based I/O error:...

I ran DBCC checkdb and saw numerous lines such as:

Msg 2576, Level 16, State 1, Line 1
The Index Allocation Map (IAM) page (0:0) is pointed to by the previous pointer of IAM page (1:783) in object ID 0, index ID -1, partition ID 0, alloc unit ID 72057599497994240 (type Unknown), but it was not detected in the scan.

and

Msg 8939, Level 16, State 98, Line 1
Table error: Object ID 2083043448, index ID 1, partition ID 72057608168079360, alloc unit ID 72057608304721920 (type LOB data), page (1:307605). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed. Values are 12716041 and -4.
Msg 8965, Level 16, State 1, Line 1
Table error: Object ID 2083043448, index ID 1, partition ID 72057608168079360, alloc unit ID 72057608304721920 (type LOB data). The off-row data node at page (1:307605), slot 0, text ID 1368915968 is referenced by page (1:307603), slot 0, but was not seen in the scan.
Msg 8965, Level 16, State 1, Line 1
Table error: Object ID 2083043448, index ID 1, partition ID 72057608168079360, alloc unit ID 72057608304721920 (type LOB data). The off-row data node at page (1:307605), slot 1, text ID 903086080 is referenced by page (1:307648), slot 0, but was not seen in the scan.
Msg 8929, Level 16, State 1, Line 1
Object ID 2083043448, index ID 1, partition ID 72057608168079360, alloc unit ID 72057609480241152 (type In-row data): Errors found in off-row data with ID 1368915968 owned by data record identified by RID = (1:43506:13)
Msg 8929, Level 16, State 1, Line 1
Object ID 2083043448, index ID 1, partition ID 72057608168079360, alloc unit ID 72057609480241152 (type In-row data): Errors found in off-row data with ID 903086080 owned by data record identified by RID = (1:102876:38)

The final section of output was:

CHECKDB found 1 allocation errors and 12 consistency errors in database 'Clients'.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (Clients).
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

My question is: Should I attempt to restore the database again or proceed with attempting to repair? I only hesitate because it is a rather large DB and restoring takes quite a while.

UPDATE: a bit more info.
All of the table errors are related to a single table I could just drop and recreate. However I don't understand what the IAM error really means or its impact.

Reverse connect by prior level value for arbitrarily-deep hierarchy

Posted: 25 Aug 2013 02:39 PM PDT

Background

Using a menu hierarchy to drive a login process for users. Users have the ability to set their preferred menu item. When they log in, if they have a preferred menu item set, the system directs them to that item. If no preferred menu item is set, they log into the default menu item for their "most important" role.

Code

The query uses connect by prior to get the list of menus:

SELECT
    LEVEL AS menu_level,
    t.name AS menu_name,
    t.id AS menu_id
  FROM
    jhs_menu_items t, (
      SELECT
        jmi.id
      FROM
        jhs_users ju
      JOIN jhs_user_role_grants jurg ON
        ju.id = jurg.usr_id
      LEFT OUTER JOIN user_menu_preferences ump ON
        ju.id = ump.jhs_usr_id
      LEFT OUTER JOIN default_menu_preferences dmp ON
        jurg.rle_id = dmp.jhs_rle_id
      JOIN jhs_menu_items jmi ON
        -- Retrieve the user's preferred menu item, failing to the default
        -- if no preference is set.
        jmi.id = coalesce(
          ump.jhs_menu_items_id,
          dmp.jhs_menu_items_id
        )
      WHERE
        ju.username = 'USERNAME' AND
        ROWNUM = 1
      ORDER BY
        dmp.role_priority_sort
    ) menu_preference
  -- Derive the menu hierarchy starting at the user's preference, going back to
  -- the root menu item.
  START WITH t.id = menu_preference.id
  CONNECT BY PRIOR t.mim_id = t.id

Problem

A root menu item has NULL for its parent (mim_id). The user's menu preference is a menu item leaf node, which can be found at any level in the hierarchy (the maximum depth is 3, in this case).

When the data is returned, the values for the LEVEL pseudocolumn (alias MENU_LEVEL) are in reverse order:

╔════════════╦═══════════╦══════════════╗
║ MENU_LEVEL ║ MENU_NAME ║ MENU_ITEM_ID ║
╠════════════╬═══════════╬══════════════╣
║          1 ║ MenuTab3  ║ 100436       ║
║          2 ║ MenuTab2  ║ 101322       ║
║          3 ║ MenuTab1  ║ 101115       ║
╚════════════╩═══════════╩══════════════╝

This should actually return:

╔════════════╦═══════════╦══════════════╗
║ MENU_LEVEL ║ MENU_NAME ║ MENU_ITEM_ID ║
╠════════════╬═══════════╬══════════════╣
║          3 ║ MenuTab3  ║ 100436       ║
║          2 ║ MenuTab2  ║ 101322       ║
║          1 ║ MenuTab1  ║ 101115       ║
╚════════════╩═══════════╩══════════════╝

However, since the hierarchy is connected by starting from the user's preferred menu item, and worked back up to the root menu item, it makes sense that LEVEL is counting "backwards".

Having the level reversed means we can ask, "What is the 3rd-level menu item for the user named 'USERNAME'?" Expressed as a SQL WHERE clause:

WHERE menu_level = 3 AND username = 'USERNAME';  

Question

How would you reverse the value of LEVEL for an arbitrarily-deep hierarchy?

For example, something like:

SELECT
  LEVEL AS MENU_LEVEL_UNUSED,
  max(LEVEL) - LEVEL + 1 AS MENU_LEVEL
FROM ...

Obviously that won't work because max is an aggregate function.
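For what it's worth, the analytic form of MAX (computed over the whole connect-by result) sidesteps the aggregate problem. A rough sketch against the tables above, with the preference subquery replaced by a placeholder bind variable:

SELECT
    MAX(LEVEL) OVER () - LEVEL + 1 AS menu_level,
    t.name AS menu_name,
    t.id   AS menu_id
  FROM jhs_menu_items t
  START WITH t.id = :preferred_menu_item_id   -- placeholder for the preference lookup
  CONNECT BY PRIOR t.mim_id = t.id;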

Fiddle

http://sqlfiddle.com/#!4/60678/3/0

Strangely, I'm seeing different behaviour in Fiddle's 11g R2 instance than in the local Oracle instance -- the ROWNUM is picking up "1" in the Fiddle when it should be picking up "3". This prevents seeing the menu hierarchy, and hence the LEVEL. Not sure why.

Ideas

  • We could add a column to jhs_menu_items that stores the depth. This is a bit redundant, though, because the hierarchy itself contains that information.
  • We could wrap the jhs_menu_items table in a view that calculates the depth. This could get computationally expensive.
  • Is this a good candidate for WITH?

SQL Server transaction log backups: test whether tail log follows last known log backup

Posted: 25 Aug 2013 04:23 PM PDT

We are using SQL Server with full recovery mode. Given a full backup and a series of log backups, we would like to be able to check whether the log chain is complete from the last full backup to the current tail log. (Without actually restoring these backups; the purpose here is to test the consistency of the backups.)

I already know how to do this for the existing backups: using RESTORE HEADERONLY I get the FirstLSN and LastLSN of every file, which can be compared for consecutive files, in order to determine whether they are compatible.
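As an aside, the same chain check can be run against the backup history in msdb rather than the files themselves. A hedged sketch (LAG requires SQL Server 2012+, so on 2005 a self-join would be needed; 'MyDb' is a placeholder database name):

;WITH log_chain AS (
    SELECT backup_finish_date,
           first_lsn,
           last_lsn,
           LAG(last_lsn) OVER (ORDER BY backup_finish_date) AS prev_last_lsn
    FROM msdb.dbo.backupset
    WHERE database_name = 'MyDb'
      AND type = 'L'                 -- log backups only
)
SELECT *
FROM log_chain
WHERE prev_last_lsn IS NOT NULL
  AND first_lsn <> prev_last_lsn;    -- any row returned indicates a gap in the chain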

However, I don't know how to check whether the tail log follows the last log backup.

If I had the FirstLSN of the tail log, I could compare it to the LastLSN of the last log backup. But how can I obtain the FirstLSN of the tail log?

I need a solution that works from SQL Server 2005 upwards (ideally using T-SQL). So far, I have searched Google to no avail. By the way, I first posted this on Stack Overflow, but migrated it here since it was flagged off-topic there.

Managing constant changes to a database

Posted: 25 Aug 2013 11:52 AM PDT

Here's the deal: I'm in charge of creating a website that will host small web apps. These apps will be photo contests, creative sentences, and other similar gibberish. Since the apps will be constantly changing and new ones will come up, what's the best way to save the config specs for each one? Should I create new tables and fields as I need them? Or should I serialize this data and put it in one field?

In this case, instead of having a single apps table, I was thinking of having a table for each app. This way I could deal with apps in a more modular way, making changes easier, or more painful (in case the changes apply to all the apps).

Or, if I serialize everything, then I would only have one table and I won't need to make changes to it, but this option seems a bit lame to me.

Altering MySQL table engine from ndbcluster to InnoDB doesn't work

Posted: 25 Aug 2013 10:16 AM PDT

I ran into this issue when I tried to alter my table engine from ndbcluster to InnoDB: it did not work. But when I changed the engine from InnoDB to ndbcluster, it worked.

It did not throw any error, but it did not work either. The MySQL site says it won't work; I just want to know the reason why.

For a client requirement I may need to alter the engine on an installed database without losing any data.

Any ideas or help that anybody can provide to resolve this issue would be appreciated.

How to disable flashback query logging for a specific table (Oracle)?

Posted: 25 Aug 2013 09:16 AM PDT

We have a specific table that has a lot of activity and it creates a lot of change records. The consequence is that the flashback data only goes back a couple of days. That is OK for many cases but it would be beneficial to have access to more historical data.

We would like to either restrict logging on that table or disable it completely. I imagine we may be able to do this by tablespace; I just have not found much on how to make these changes.

MS Access: error 3045 while linking to back-end

Posted: 25 Aug 2013 12:16 PM PDT

In our environment multiple users each work with their own copied front-end, with tables linked to a single networked back-end (both files are .mdb).

Each user has permissions to read and write to the location of BE, and both FE and BE are set to "shared access".

Recently a single user started getting error: 3045 "Could Not Use Database; File Already in Use" while trying to open forms using linked data.

The same error appears while trying to re-link the tables using the manager. All other users can still normally work.

I tried restarting his workstation, removing the local files, and re-copying the FE, and it didn't work.

There is no .ldb file left after all users close the FE; compacting after that and restarting the FE of the unlucky user didn't help either.

Said user is working on AC2010, but some others are working on AC2007 and AC2003.

Please help!

edit: BE is stored on Windows Server 2003.

Security for Oracle linked server from SQL Server

Posted: 25 Aug 2013 03:16 PM PDT

I'm trying to make the linked server from SQL Server 2005 to Oracle more secure by not having user/pass on "Be made using this security context:" fields so only a specific user can access the data from linked server.

So I tried mapping a SQL security user to an Oracle user/pass by inserting it in the "Local server login to remote server login mappings:" area, then setting "For a login not defined in the list above, connections will:" to "Not be made".

When I click OK, I get:

Access to the remote server is denied because no login-mapping exists  

Is it not possible to map a SQL login to an Oracle login? Is there any way to get this to work?
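For reference, the same mapping can be attempted in T-SQL rather than through the SSMS dialog; a hedged sketch with placeholder names (whether the provider accepts the mapping is exactly what the question is about):

EXEC sp_addlinkedsrvlogin
     @rmtsrvname  = N'ORACLE_LINK',   -- linked server name (placeholder)
     @useself     = N'False',
     @locallogin  = N'ReportUser',    -- the one SQL login allowed through
     @rmtuser     = N'ora_user',      -- placeholder Oracle account
     @rmtpassword = N'secret';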

Statistical Analysis of Data that has to be done in an order?

Posted: 25 Aug 2013 09:16 PM PDT

Bear with me - this is the first time I've tried that in SQL Server; normally I have been doing it on the front end ;)

I am implementing some analysis on time-coded data series. This is not super complicated stuff, but some of it requires numbers we do not store in the database, which have to be calculated by aggregating the numbers in a specific algorithm IN ORDER.

To give an example:

  • I have a list of trades and I need to know the maximum loss we had in the account, so I need to aggregate the plus/minus and then take the most extreme negative and positive.

This cannot be pre-calculated due to dynamic filtering - there are a number of filters that can be applied to the data.

So far - in the past - I pulled the data into the application; now, for the standard stuff, I plan to try to keep that in SQL Server.

My problem now is - I can see how that works (acceptable) in SQL Server:

[TradeBegin],
Pnl,
SUM (Pnl) OVER (ORDER BY [TradeBegin] ROWS UNBOUNDED PRECEDING) [Total]

But if I put that into a view and then filter out rows, the sum is still calculated from the beginning. And I need a view because I want (need) to map that standard analysis data into an ORM (so dynamic SQL is out). Does anyone have an idea how to do that?
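One way to illustrate the issue (a sketch only, with an assumed table name and parameter, not the actual schema): the filter has to be applied before the window function runs, for instance inside an inline table-valued function rather than a plain view:

-- Hedged sketch: dbo.Trades and @from are assumptions.
CREATE FUNCTION dbo.TradeRunningTotal (@from datetime)
RETURNS TABLE
AS RETURN
(
    SELECT
        [TradeBegin],
        Pnl,
        SUM(Pnl) OVER (ORDER BY [TradeBegin] ROWS UNBOUNDED PRECEDING) AS [Total]
    FROM dbo.Trades
    WHERE [TradeBegin] >= @from   -- filter applied before the running total
);

Many ORMs can map a table-valued function much like a view, which may be a workable compromise between dynamic filtering and a fixed view definition.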

Record versioning and promotion

Posted: 25 Aug 2013 06:09 PM PDT

Let's say we have this hierarchy:

-World
--USA
---WA
----Seattle
-----Downtown
------1st Ave
-------945 1st ave
------3rd Ave
-----South
----Bellevue
---TX
---MI
--Canada
---BC

Now, this will not reflect real life completely, but:

  1. At location World we have some global settings. Let's say: list of Races, list of Languages, etc. All locations underneath it can access this data.

  2. Country level holds data about laws, currency, etc.

  3. Each city can modify laws. After a city modifies a law, it is available to the city itself and to the regions within the city. After a 3-step approval process, it can become a country-level law, and all children will inherit the new version of the law. Each city can still modify the new version of a law to have changes.

  4. Every city has sandboxes. This means that different organizations within a city can test out editing different versions of laws without affecting the city's master revision.

Currently we solve this issue like this:

We have table Location with HierarchyId
Then we have table Law that has columns StateType, Location_Id, etc.
StateType can be: Master (Available to all within country), Edited (Edited for a specific location), Approved (Approved at a specific location).

Because of this we have to use table-valued functions in our application to filter the specific records that are relevant to the current location. (Some laws come from the country level, some from the city level, some from the organization level.) This makes the database much slower.

We cannot use native unique constraints on our fields, because all fields must be unique only within a branch, so we have to use custom check constraints to keep data clean.

Also, inserting and editing records is much slower, since we have to promote some records to the country level and then apply data transformation for other countries.

Currently we have around 150 tables, and we use a lot of joins.

I'm not sure what the best way is to model complex hierarchical data with versioning. (I'm thinking we need an enhanced version of source control.)

Are there any disadvantages to partitioning on financial year?

Posted: 25 Aug 2013 01:16 PM PDT

Our current set up has one table per financial year (May 1- April 30). Each table has approx 1.5 million rows. We have about 8 years of data, and will obviously be adding each year.

The majority of queries are within the financial year/one partition. Either select * from sales where datetime between '2013-05-01' and '2014-04-30 23:59:59' or some period within that year.

My plan is to have a range partition on an InnoDB table. e.g.

PARTITION BY RANGE COLUMNS(datetime)
(PARTITION p0 VALUES LESS THAN ('1999-05-01') ENGINE = InnoDB,
 PARTITION p1 VALUES LESS THAN ('2000-05-01') ENGINE = InnoDB,
 PARTITION p2 VALUES LESS THAN ('2001-05-01') ENGINE = InnoDB,
...

This means that the PK has to become PRIMARY KEY (index,datetime).

Are there any significant disadvantages to partitioning compared to having an unpartitioned table? I know that means the PK is now length 12 and all further indexes will have that prepended to it. Does that make a difference? The table needs to work faster on reads than writes, and there are a fair few indexes on it.

  • Is there any functional difference between partitioning on RANGE COLUMNS(datetime) and RANGE TO_DAYS(datetime) using MySQL 5.5+?
  • Is partitioning on datetime OK? One alternative is adding in a column for financial year and indexing/partitioning on that? (And then adding another index on datetime for other queries).

We do sometimes need to query across all time or over "the last X months", but this is pretty rare. The main advantages of moving to a single table are eliminating the logic in the application that works out which table to insert/update/select from, and not needing to build unions in those situations where we need more than one table. A comparison of the two partitioning forms is sketched below.
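For comparison with the RANGE COLUMNS definition above, the TO_DAYS form would look roughly like this (a hedged sketch, not from the original question; partition names and dates mirror the fragment above):

PARTITION BY RANGE (TO_DAYS(datetime))
(PARTITION p0 VALUES LESS THAN (TO_DAYS('1999-05-01')) ENGINE = InnoDB,
 PARTITION p1 VALUES LESS THAN (TO_DAYS('2000-05-01')) ENGINE = InnoDB,
 PARTITION p2 VALUES LESS THAN (TO_DAYS('2001-05-01')) ENGINE = InnoDB);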

Problems in opening mysql query browser

Posted: 25 Aug 2013 08:16 AM PDT

My MySQL Query Browser doesn't run. I just started working with XAMPP and PHP.

So I just installed XAMPP on my local machine. Now I have a problem opening MySQL Query Browser... What I'm getting while trying to open MySQL is:

Could not connect to host 'localhost'. MySQL Error Nr. 2002
Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

Click the Ping button to see if there is a networking problem.

Can anyone help me fix the above problem?

MongoDB: move documents before capping

Posted: 25 Aug 2013 11:16 AM PDT

The cappedCollection concept works well for most of my projects where cleaning old data without care makes sense.

For other projects, I need a more complex and safer concept. The requirement is nearly the same as logrotate. The data is appended to the main collection, without compression/compact and with no index except a timestamp for simple queries by time. => The focus is on writing and persistent data.

Similar to the logrotate rules, I'd like the main collection not to become too large => capped by size; if possible, capping by timestamp might be a plus.

This sounds like a cappedCollection, but I do not want any data loss when it's capped. The old data should be stored in another db's collection that must be compact:true and non-capped. Its name depends on the current month, which makes sure that there will be at most 12 "archive" collections per year.

Example:

liveDB.mainCollection_capped grows and starts capping.

Before removing old documents, these are safely moved into archiveDB.compactArchiveCollection201303.

No data is lost and the main collection remains small and fast. Storing the data in another database avoids db locks, e.g. repairDatabase tasks on an archive file will not affect or delay the main collection.

Is there a good practice for achieving this - as reliably and automatically as possible - without writing all the data-transfer logic in a cron job, which must never ever be missed, because data is lost if capping starts before the old data is copied into the archive?

SUPER privilege not defined for master user in Amazon MySQL RDS

Posted: 25 Aug 2013 04:16 PM PDT

I have created one medium instance on Amazon RDS in the Asia Pacific (Singapore) region. I have created my master user with a master password, and it is working/connecting fine with Workbench installed on my local PC. When I try to create a function on that instance, it shows me the following error:

ERROR 1418: This function has none of DETERMINISTIC, NO SQL, or READS SQL DATA in its declaration and binary logging is enabled (you might want to use the less safe log_bin_trust_function_creators variable)

On my instance, the variable log_bin_trust_function_creators shows OFF. Now when I try to change that variable using

SET GLOBAL log_bin_trust_function_creators = 1;  

it gives me another error

Error Code: 1227. Access denied; you need (at least one of) the SUPER privilege(s) for this operation

I don't know how to solve this error.

Can anybody help???

How to import a table's data into MySQL from SQL Server?

Posted: 25 Aug 2013 07:16 PM PDT

I am trying to export a table from SQL Server 2008 R2 to MySQL 5.5. For this I am using the SQL Server Import and Export Wizard of SQL Server 2008, but it is giving an error.

The error may be occurring because the table in SQL Server has a column with data type image, while in the MySQL table this column's data type is LONGBLOB.

Please provide your expert answers. If it is not possible through the SQL Server Import and Export Wizard, then please suggest any other method for data transfer.

Is it possible to pipe the result of a mysqldump straight to rsync as the source argument?

Posted: 25 Aug 2013 05:16 PM PDT

Is it possible to pipe the result of a mysqldump straight to rsync as the source argument?

Conceptually, I was thinking something like:

mysqldump -u root -p database_to_backup > db_backup_file.sql | sudo rsync -avz db_backup_file.sql myuser@mysite.com:/var/www/db_backup_file.sql  

I've seen people pipe the result to mysql for their one liner backup solution, but I was curious if it was possible with rsync. You know--- cause rsync is magic :)

Thanks for your time!

Deleting Data From Multiple Tables

Posted: 25 Aug 2013 02:16 PM PDT

Suppose I've a table called UNIVERSITY containing university names:

universityID    universityNAME  isACTIVE
     7            GNDU             1
     6            PU               1
     5            PTU              1
     8            LPU              1

Now these university IDs have (obviously) been used in many tables within the database (named e.g. Education) - suppose 10 tables.

Q. Now what happens if I delete one university?

A. The universityID field in the other tables becomes NULL.

But I don't want this; rather, when I delete one university from the UNIVERSITY table, all rows referencing it in all 10 tables should get deleted.

What would be the shortest and easiest MySQL query for this operation?

NOTE: I'm using PHP.
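One common way to get this behaviour (a hedged sketch, not from the original question) is a foreign key with ON DELETE CASCADE in each referencing table; this requires InnoDB, and 'colleges' below is a hypothetical referencing table name:

ALTER TABLE colleges
  ADD CONSTRAINT fk_colleges_university
  FOREIGN KEY (universityID) REFERENCES UNIVERSITY (universityID)
  ON DELETE CASCADE;

-- With such constraints on all 10 tables, a single statement removes the
-- university and every row that references it:
DELETE FROM UNIVERSITY WHERE universityID = 7;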

Minimizing Indexed Reads with Complex Criteria

Posted: 25 Aug 2013 06:16 PM PDT

I'm optimizing a Firebird 2.5 database of work tickets. They're stored in a table declared as such:

CREATE TABLE TICKETS (
  TICKET_ID id PRIMARY KEY,
  JOB_ID id,
  ACTION_ID id,
  STATUS str256 DEFAULT 'Pending'
);

I generally want to find the first ticket that hasn't been processed and is in Pending status.

My processing loop would be:

  1. Retrieve 1st Ticket where Pending
  2. Do work with Ticket.
  3. Update Ticket Status => Complete
  4. Repeat.

Nothing too fancy. If I'm watching the database while this loop runs, I see the number of indexed reads climb with each iteration. The performance doesn't seem to degrade terribly that I can tell, but the machine I'm testing on is pretty quick. However, I've received reports of performance degradation over time from some of my users.

I've got an index on Status, but it still seems like it scans down the Ticket_Id column each iteration. It seems like I'm overlooking something, but I'm not sure what. Is the climbing number of indexed reads for something like this expected, or is the index misbehaving in some way?

-- Edits for comments --

In Firebird you limit row retrieval like:

Select First 1
  Job_ID, Ticket_Id
From
  Tickets
Where
  Status = 'Pending'

So when I say "first", I'm just asking it for a limited record set where Status = 'Pending'.
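As a hedged aside (not part of the original question): an index on STATUS alone has very low selectivity once most tickets are 'Complete', so a compound index matching this access pattern may be worth testing; the index name is an assumption:

/* Firebird: compound index covering the WHERE clause and the retrieved key */
CREATE INDEX IDX_TICKETS_STATUS_ID ON TICKETS (STATUS, TICKET_ID);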
