Sunday, October 13, 2013

[how to] Conditional Join - Specific Column Select



Conditional Join - Specific Column Select

Posted: 13 Oct 2013 08:17 PM PDT

I have tables like these:

Table Employee ( Emp_Id, Full_Name, ... )
Table Order ( Submitter_Id, Recipient_Id, ... )

Then say that I have a conditional join like this:

select isnull(Emp.Full_Name, Emp2.Full_name) as Recipient
       -- , other fields
from Order
    left outer join Employee Emp on Emp.Emp_Id = Order.Recipient_Id
    left outer join Employee Emp2 on Emp2.Emp_Id = Order.Submitter_Id
-- where clauses

Brief explanation: the query returns the name of the recipient for a specific order. An order may have its recipient id set; if it is not set, the submitter id is used instead.

The constraint: both the Order and Employee tables contain a large number of records, so joining them twice is a costly operation. 80%-90% of the records in Order already have recipient_id set, so the join on submitter_id is usually wasted work. Using isnull in the join condition tends to result in an index scan (in my experience). A subquery in the column list might help, but the cost can be high because it can execute row by row.

Is there any form of conditional join for a case like this?
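For reference, here is a minimal sketch of the single-join variant alluded to above, with ISNULL moved into the join condition (the form that, in my experience, tends to be resolved with an index scan); the table and column names are the ones from the question:

SELECT Emp.Full_Name AS Recipient
       -- , other fields
FROM [Order] O
    LEFT OUTER JOIN Employee Emp
        ON Emp.Emp_Id = ISNULL(O.Recipient_Id, O.Submitter_Id)
-- where clauses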

How do you disable anonymous login? - MySQL

Posted: 13 Oct 2013 05:50 PM PDT

According to the MySQL documentation, you can harden a MySQL server by adding passwords, or removing the anonymous accounts.

If you want to prevent clients from connecting as anonymous users without a password, you should either assign a password to each anonymous account or else remove the accounts.

Before hardening, my users table looked like this.

mysql> select user,host,password from mysql.user;
+------------------+-----------+-------------------------------------------+
| user             | host      | password                                  |
+------------------+-----------+-------------------------------------------+
| root             | localhost | *F3A2A51A9B0F2BE246XXXXXXXXXXXXXXXXXXXXXX |
| root             | gitlab    |                                           |
| root             | 127.0.0.1 |                                           |
| root             | ::1       |                                           |
|                  | localhost |                                           |
|                  | gitlab    |                                           |
| debian-sys-maint | localhost | *95C1BF709B26A5BAXXXXXXXXXXXXXXXXXXXXXXXX |
| myuser           | localhost | *6C8989366EAF75BB6XXXXXXXXXXXXXXXXXXXXXXX |
+------------------+-----------+-------------------------------------------+

I've removed all anonymous accounts, so that the user table now looks like this. (I'm using puppet to manage the users, but puppet effectively performs a DROP USER command.)

mysql> select user,host,password from mysql.user;
+------------------+-----------+-------------------------------------------+
| user             | host      | password                                  |
+------------------+-----------+-------------------------------------------+
| root             | localhost | *F3A2A51A9B0F2BE246XXXXXXXXXXXXXXXXXXXXXX |
| debian-sys-maint | localhost | *95C1BF709B26A5BAXXXXXXXXXXXXXXXXXXXXXXXX |
| myuser           | localhost | *6C8989366EAF75BB6XXXXXXXXXXXXXXXXXXXXXXX |
+------------------+-----------+-------------------------------------------+

Why is it that I am still able to login to my test system without a username or a password?
What do I need to do to prevent any unwanted users from logging in?

root@gitlab:~# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 98
Server version: 5.5.32-0ubuntu0.12.04.1 (Ubuntu)
....
mysql>

Update: I've also just discovered that I can log in as root, without entering a password.

Update2: I found this question which has some good information, but does not solve the issue.

There are no anonymous users

mysql> select user,host,password from mysql.user where user='';
Empty set (0.00 sec)

I log in as root@localhost

mysql> select USER(),CURRENT_USER();
+----------------+----------------+
| USER()         | CURRENT_USER() |
+----------------+----------------+
| root@localhost | root@localhost |
+----------------+----------------+

I do not have a default password, or skip-grant-tables defined in my.cnf

root@gitlab:~# cat /etc/mysql/my.cnf |grep -i 'skip-grant-tables'|wc -l
0
root@gitlab:~# cat /etc/mysql/my.cnf |grep -i 'pass'|wc -l
0

Update3:

I have tried performing these steps with puppet (which should perform a flush privileges automatically). I have also manually flushed privileges and tried restarting MySQL.

Update4:
I've also tried changing the mysql root password and flushed privileges. No luck, I can still log in as any user without a password.

What is the standard isolation level for a db used for money data?

Posted: 13 Oct 2013 03:51 PM PDT

When using a database server for monetary/financial data, I can only assume that using transactions is mandatory.
But what I am not sure about is which isolation level is usually used. Is REPEATABLE READ enough, or only SERIALIZABLE?

MySQL server vs client isolation levels

Posted: 13 Oct 2013 11:31 AM PDT

What is the difference between setting the transaction isolation level in the client configuration and the server configuration?
If I set the client isolation level to SERIALIZABLE while the server's is REPEATABLE READ, what is the result for the transactions? Both for transactions of the same client and for transactions of other clients on the same tables?
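To make the distinction concrete, a small sketch in standard MySQL syntax: the server-wide setting is only a default for sessions that do not choose their own level, while a session-level setting overrides it for that one connection only. The transaction-isolation option in my.cnf sets the same global default at startup.

SET GLOBAL TRANSACTION ISOLATION LEVEL REPEATABLE READ;   -- default for new connections
SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;     -- overrides the default for this connection only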

Parsing an XML field to connect products to multiple categories?

Posted: 13 Oct 2013 10:43 AM PDT

I receive an unstructured XML data feed from my vendor. It actually contains two files. The first one is a category list - New Products;Hot Sellers;Jewelery;Glass Ware;Men's Jewelery;Women's Jewelery;Gold Jewelery;Sterling Silver Jewelery;Watches;Bracelets;Diamond Jewelry;Ring's;Gold Rings;Bulova; and more (in fact there are over 200 categories, which I am working on turning into categories and multiple levels of subcategories). This is followed by all of the 3000+ products. Parsing most of this and generating a table is pretty straightforward (even though I am pretty new to this); the problem I have is that there is a category field that contains multiple comma-separated values.

Jewelery Women's Jewelery, Women's Watches, Watches, Gold Watches, Fashion Jewelery, Bulova

I am looking for a straightforward (and simple) way to parse the field to extract and assign the respective product to all of its categories. Also, what is the best way to design the categories table? Right now I am looking at:

Parent: Categories
    Children: 1.) Jewelry   2.) Glassware   3.) ?
    Children of Jewelery: 1.) Men's Jewelery   2.) Women's Jewelery

Each of these children would contain a number of categories: Watches, Rings, Bracelets, Necklaces, Brand Names, Gold, Sterling Silver, Costume, Fashion, Etc.

I have not yet selected a platform as this is critical to populating my store.

So the first question is: how do I populate my categories table from a semicolon-delimited file? Secondly, what is the best way to parse a multi-valued (comma-separated) single field so that the product is assigned to ALL categories appropriate to said item? I am looking at using a Windows-based platform.
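As a sketch of the relational shape this usually takes (the table and column names here are hypothetical, not from the feed): a self-referencing categories table for the hierarchy, plus a junction table so one product can belong to any number of categories.

CREATE TABLE categories (
    category_id INT NOT NULL PRIMARY KEY,
    name        VARCHAR(100) NOT NULL,
    parent_id   INT NULL,                 -- NULL for a top-level category
    FOREIGN KEY (parent_id) REFERENCES categories (category_id)
);

CREATE TABLE product_categories (
    product_id  INT NOT NULL,
    category_id INT NOT NULL,
    PRIMARY KEY (product_id, category_id),
    FOREIGN KEY (category_id) REFERENCES categories (category_id)
);

Splitting the semicolon- or comma-separated strings then just produces one row in product_categories per (product, category) pair.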

How to re setup my mysql like it was with all the data

Posted: 13 Oct 2013 10:07 AM PDT

My computer had slowed down and I made an image file backup. I then reinstalled windows. How can I use the files that I have backed up to give the newly installed mysql the same data it had last time?

I haven't got any exports.

How to set Innodb Buffer Pool Size

Posted: 13 Oct 2013 07:40 AM PDT

I have a database that uses InnoDB exclusively. Its innodb_buffer_pool_size is currently 128MB, which I know is far too small for my workload.

But no matter what I do it won't change. I have added a value of 1G both in /etc/my.cnf and also in /etc/mysql/my.cnf and restarted the mysql service. It doesn't seem to update.

My MySQL keeps stopping, and when I run mysqltuner and tuning-primer.sh both point to the fact that I have to increase innodb_buffer_pool_size, but it simply won't increase.

Is there any other way to enforce this?
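One thing worth double-checking first: the setting has to sit under the [mysqld] section of a my.cnf the server actually reads (a value placed under [client] or [mysqld_safe], or in a file the server ignores, silently has no effect). After a restart, the value the server really picked up can be verified with:

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';   -- reported in bytes; 1G should show as 1073741824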

How to combine date and time to datetime2 in SQL Server?

Posted: 13 Oct 2013 08:47 AM PDT

Given the following components

DECLARE @D DATE = '2013-10-13'
DECLARE @T TIME(7) = '23:59:59.9999999'

What is the best way of combining them to produce a DATETIME2(7) result with value '2013-10-13 23:59:59.9999999'?

Some things which don't work are listed below.


SELECT @D + @T   

Operand data type date is invalid for add operator.


SELECT CAST(@D AS DATETIME2(7)) + @T   

Operand data type datetime2 is invalid for add operator.


SELECT DATEADD(NANOSECOND,DATEDIFF(NANOSECOND,CAST('00:00:00.0000000' AS TIME),@T),@D)  

The datediff function resulted in an overflow. The number of dateparts separating two date/time instances is too large. Try to use datediff with a less precise datepart.


SELECT CAST(@D AS DATETIME) + @T   

The data types datetime and time are incompatible in the add operator.


SELECT CAST(@D AS DATETIME) + CAST(@T AS DATETIME)  

Returns a result but loses precision 2013-10-13 23:59:59.997
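One workaround that seems to keep the full precision (a sketch, not necessarily the only or best option) is to cast the time to DATETIME2(7) first and then shift it forward by the whole-day offset of @D:

-- CAST(@T AS DATETIME2(7)) gives 1900-01-01 23:59:59.9999999;
-- DATEADD then moves it forward by the number of days between 1900-01-01 and @D.
SELECT DATEADD(DAY, DATEDIFF(DAY, '19000101', @D), CAST(@T AS DATETIME2(7)));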

Does MySQL reindex on update

Posted: 13 Oct 2013 10:13 AM PDT

I have a users table with various fields which are getting updated frequently, i.e. the table has:

userid        - PRIMARY int
profile_views - int
ratings       - int
last_active   - int (to store UNIX timestamps for sorting purposes)

I also have an index on last_active to get the last active users. When a user logs in or performs a certain action, last_active gets updated, and the same goes for profile_views and ratings.

My question is: does MySQL reindex the whole table, or only the last_active index, when I update the last_active value or update profile_views or ratings?

table:

CREATE TABLE IF NOT EXISTS `users` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `ip_address` varbinary(16) NOT NULL,
  `username` varchar(100) NOT NULL,
  `password` varchar(80) NOT NULL,
  `salt` varchar(40) DEFAULT NULL,
  `email` varchar(100) NOT NULL,
  `title` varchar(255) NOT NULL,
  `detail` varchar(500) NOT NULL,
  `genre_id` int(2) NOT NULL,
  `category` tinyint(1) NOT NULL,
  `profile_views` int(11) NOT NULL,
  `ratings ` int(11) NOT NULL,
  `last_active` int(11) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `genre_id` (`genre_id`),
  KEY `last_active` (`last_active`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=4059 ;

and second table to join to

CREATE TABLE IF NOT EXISTS `users_genre` (
  `userid` int(11) NOT NULL,
  `genre_id` int(11) NOT NULL,
  KEY `userid` (`userid`),
  KEY `genre_id` (`genre_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Thanks

Is using DISTINCT as a hint in subqueries useful?

Posted: 13 Oct 2013 06:42 AM PDT

Does adding DISTINCT in the following example have any impact on the query running time?
Is it wise to use it as a hint sometimes?

SELECT *
FROM   A
WHERE  A.SomeColumn IN (SELECT DISTINCT B.SomeColumn FROM B)

What is Required to use Oracle Performance Manager to monitor SQL Server?

Posted: 13 Oct 2013 09:45 AM PDT

I am working with a customer who has several SQL Server and Oracle DBMS server instances. My customer would like to set up performance data collection, monitoring and alerting for all of their databases. My strength is in SQL Server and I know lots of products/tools for doing this there, but there seem to be very few options for one tool or product that can do both Oracle and SQL Server at the same time.

One option that has been suggested is Oracle Performance Manager, which apparently can also monitor SQL Server. I have been trying to read up on it, but there is a lot of documentation out there and I am not very familiar with its implicit context (the Oracle product suites), so I am having trouble filtering it down to the parts that I need right now to make a recommendation to my client.

Here are the things that I am trying to figure out:

  1. Does Oracle Performance Manager fulfill any/all of these Performance functions for Oracle databases?:

    a. Performance Data Collection (ie., retained in a database)

    b. Performance Monitoring (live views of performance)

    c. Performance Alerting (live notification of exceeding configurable thresholds)

    d. Performance Reporting (this would be nice but not necessary, I know how to write a report from data tables, even in Oracle)

  2. Which can Oracle Performance Manager fulfill for SQL Server databases?

  3. What specific Oracle Product(s) does my customer need to have or purchase in order to get these functions?

  4. Are there any other requirements or restrictions that we should know about?

Any help on this is much appreciated.

Mysql Server Login and Permission Deny Error

Posted: 13 Oct 2013 02:11 PM PDT

I have a problem with MySQL server. After upgrading MySQL server from 5.3 to 5.4, I am not able to log-in using my MySQL root password (Yes, I am able to log-in without password!).

I am also not able to grant permissions to users, and not able to make changes in databases using the root account, which also prevents me from taking database backups.

When I look at the MySQL databases using the root account, nothing is shown, while on another server I am able to find all the entries.

Kindly please help me to resolve this issue as it is critical.

Log of mysql is as below.

  130906 18:46:13 [Warning] Found invalid password for user: 'root@x.x.x.x'; Ignoring user  1843 130906 18:46:13 [ERROR] Missing system table mysql.proxies_priv; please run mysql_upgrade to create it  1844 130906 18:46:14 [ERROR] Can't open and lock privilege tables: Table 'mysql.servers' doesn't exist  1845 130906 18:46:14 [ERROR] Native table 'performance_schema'.'events_waits_current' has the wrong structure  1846 130906 18:46:14 [ERROR] Native table 'performance_schema'.'events_waits_history' has the wrong structure  1847 130906 18:46:14 [ERROR] Native table 'performance_schema'.'events_waits_history_long' has the wrong structure  1848 130906 18:46:14 [ERROR] Native table 'performance_schema'.'setup_consumers' has the wrong structure  1849 130906 18:46:14 [ERROR] Native table 'performance_schema'.'setup_instruments' has the wrong structure  1850 130906 18:46:14 [ERROR] Native table 'performance_schema'.'setup_timers' has the wrong structure  1851 130906 18:46:14 [ERROR] Native table 'performance_schema'.'performance_timers' has the wrong structure  1852 130906 18:46:14 [ERROR] Native table 'performance_schema'.'threads' has the wrong structure  1853 130906 18:46:14 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_thread_by_event_name' has the wrong structure  1854 130906 18:46:14 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_instance' has the wrong structure  1855 130906 18:46:14 [ERROR] Native table 'performance_schema'.'events_waits_summary_global_by_event_name' has the wrong structure  1856 130906 18:46:14 [ERROR] Native table 'performance_schema'.'file_summary_by_event_name' has the wrong structure  1857 130906 18:46:14 [ERROR] Native table 'performance_schema'.'file_summary_by_instance' has the wrong structure  1858 130906 18:46:14 [ERROR] Native table 'performance_schema'.'mutex_instances' has the wrong structure  1859 130906 18:46:14 [ERROR] Native table 'performance_schema'.'rwlock_instances' has the wrong structure  1860 130906 18:46:14 [ERROR] Native table 'performance_schema'.'cond_instances' has the wrong structure  1861 130906 18:46:14 [ERROR] Native table 'performance_schema'.'file_instances' has the wrong structure  1861 130906 18:46:14 [ERROR] Native table 'performance_schema'.'file_instances' has the wrong structure  1862 130906 18:46:14 [ERROR] Column count of mysql.db is wrong. Expected 22, found 13. Created with MySQL 50045, now running 50532. Please use mysql_upgrade to fix this error.  1863 130906 18:46:14 [ERROR] mysql.user has no `Event_priv` column at position 29  1864 130906 18:46:14 [ERROR] Cannot open mysql.event  1865 130906 18:46:14 [ERROR] Event Scheduler: An error occurred when initializing system tables. Disabling the Event Scheduler.  1866 130906 18:46:14 [Note] /usr/libexec/mysqld: ready for connections.  1867 Version: '5.5.32-cll-lve' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Community Server (GPL) by Atomicorp  

SQL Server 2012 Page Compression - Determine compression ratio?

Posted: 13 Oct 2013 05:20 PM PDT

We're using SQL Server 2012 and have page compression turned on for a number of our larger tables. Is there a way we can determine the compression ratio we're getting? We'd like to be able to determine the benefits of compression.
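One built-in option that may help (a sketch; the table name below is a placeholder for one of the page-compressed tables): ask the engine to estimate what the data would occupy under a different compression setting and compare that with the current size it reports for the same object.

EXEC sp_estimate_data_compression_savings
     @schema_name      = N'dbo',
     @object_name      = N'YourLargeTable',   -- placeholder table name
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = N'NONE';             -- estimate the uncompressed size for comparison

The ratio of the current size to the estimated 'NONE' size gives an approximation of the compression ratio.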

linked server to specific access table

Posted: 13 Oct 2013 08:26 AM PDT

I have built a linked server which links my SQL Server database to an Access DB. Using this object someone is able to see all tables in Access. What I would like to do is to isolate it just to 1 table. So my question is how to build a linked server (from SQL Server) connected only to one specific table in Access?

Accumulo table design methodology

Posted: 13 Oct 2013 11:26 AM PDT

I am just getting started with Accumulo and NoSQL databases and I am looking for some discussion on table design. I get the key-value structure that is shown in the manual. However, if I am trying to recreate a relational database, I am not sure how relationships work. Can someone explain to some degree how to set up a "Hello World" database (i.e., a manager-employee database)? I want to use a key-value implementation.

Way to make differential backup from full different but from same database

Posted: 13 Oct 2013 03:26 PM PDT

I have the following situation: our SQL Server is running in a cloud service. The database to be backed up is 60 GB. We need to make a local copy of the database for some tests, and it needs to be up to date, but 60 GB is impractical to download daily.

We thought about making a full backup, downloading it once, and then making differential backups on the following days. Once a week, we can stop database operations to take the last differential backup against last week's full backup, take a new full backup, and then resume database operations. At that moment the local db, when restored with the last diff backup (made while database operations were stopped), has the same data as last week's full backup.

The idea behind this is to find out whether it is possible to use the diff backup from the last full backup and restore it on our local database. If it is not possible, does anyone have an idea of what we can do to get all the data, up to date, from our cloud database?
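A rough sketch of what that could look like in T-SQL, assuming SQL Server on both ends (database name and paths are placeholders): each differential contains everything changed since its base full backup, so locally the already-downloaded full is restored WITH NORECOVERY and the newest differential is applied on top of it.

-- Weekly, on the cloud server:
BACKUP DATABASE MyDb TO DISK = N'D:\backup\MyDb_full.bak' WITH INIT;

-- Daily differential (everything changed since that full backup):
BACKUP DATABASE MyDb TO DISK = N'D:\backup\MyDb_diff.bak' WITH DIFFERENTIAL, INIT;

-- Locally, after downloading only the (much smaller) differential:
RESTORE DATABASE MyDb FROM DISK = N'C:\backup\MyDb_full.bak' WITH NORECOVERY, REPLACE;
RESTORE DATABASE MyDb FROM DISK = N'C:\backup\MyDb_diff.bak' WITH RECOVERY;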

Disaster Recovery for PostgreSQL 9.0

Posted: 13 Oct 2013 07:26 PM PDT

We have a number of PostgreSQL 9.0 servers. We use binary replication to have a hot standby instance of each. The only problem is that if someone drops the master, intentionally or not, this will cascade to the replicas as well. I'm looking at possible ways to avoid this. One possible option seems to be Point in Time Recovery. I'm just wondering what a good design for this could be. Any ideas? Let's assume the master is compromised and we lose everything we have there. How can we avoid losing the replica, or at least have a way to bring it back if it's dropped?
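For context, continuous WAL archiving (the basis of point-in-time recovery) is enabled in postgresql.conf on 9.0 roughly like this; the archive destination is just an example and should live somewhere the master cannot destroy:

wal_level = archive
archive_mode = on
archive_command = 'cp %p /mnt/wal_archive/%f'   # example path on separate storage

Combined with a periodic base backup (pg_start_backup(), copy of the data directory, pg_stop_backup()), this allows restoring to a point in time just before the accidental drop, independently of what happens on the streaming replicas.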

Oracle Patch Update

Posted: 13 Oct 2013 12:26 PM PDT

We have an Oracle RAC production environment with primary and secondary DB. Our DBA has asked to update oracle version from 11.2.0.1.0(64 bit) to 11.2.0.3(64 bit) with patch 6880880,10404530,16803769 and 16803775.

In our current database we have shared storage, ACL settings, security settings, Gateway/Heterogeneous connectivity, Data Guard, Data Broker, a backup policy, and Oracle Client installed on other machines.

The DBA has estimated that he needs to do the installation and settings from scratch and then test. So, when the version is updated, do we really need to reconfigure and reinstall everything (shared storage, ACL settings, security settings, Gateway/Heterogeneous connectivity, Data Guard, Data Broker, backup policy, and the Oracle Client installed on other machines)? If yes, that's fine, but if not, then I need to justify it.

I can understand testing would be required..

Updateable Subscriptions: Rows do not match between Publisher and Subscriber

Posted: 13 Oct 2013 09:26 PM PDT

I have transactional replication with updatable subscriptions set up in SQL Server 2008. It has 3 subscribers and 1 publisher. I had to set up replication again due to some errors related to the database and the application which uses the database. However, now I run into issues when I try updating a section in my application. It does not go through with the update and gives the following error:

Execption Message: Updateable Subscriptions: Rows do not match between Publisher and Subscriber. Run the Distribution Agent to refresh rows at the Subscriber. Updateable Subscriptions: Rolling back transaction. The transaction ended in the trigger. The batch has been aborted. Exception Triggered by Method: OnError

The update statement obviously doesn't go through. However, when I try it a second time it works. Replication is working; everything seems to be replicating.

Can anyone explain why this error would occur and how I can resolve this issue. I would really appreciate the help!...

How to add rows/columns to the table in runtime in SSRS 2008

Posted: 13 Oct 2013 10:26 AM PDT

Usually we design the table to have x number of rows and y number of columns in a report. But how can we create a report which adds the rows and columns dynamically at run time based on the result of the source query?

For example, I want to list studentId, StudentName and any course each student has enrolled in. As the number of courses differs from one person to another, I should add the rows and the related columns for courses at run time based on the query result. How can it be done? For example:

(image omitted)

Thanks for your help in advance.

How to disable oracle's MAX_ENABLED_ROLES limit

Posted: 13 Oct 2013 04:26 PM PDT

How can I disable Oracle's MAX_ENABLED_ROLES limit, or increase the value of the limit? [Oracle 10g (win32)]

In MySQL, does the order of the columns in a WHERE clause affect query performance,why?

Posted: 13 Oct 2013 01:26 PM PDT

I have a query that doesn't use any indexes:

SELECT 32,
       guid,
       1,
       1,
       1,
       0,
       5
FROM   test
WHERE  level >= 20
       AND ( ( fun_GetIndexValue(data, 354) >> 16 ) +
             ( fun_GetIndexValue(data, 355) >> 16 ) +
             ( fun_GetIndexValue(data, 356) >> 16 ) +
             ( fun_GetIndexValue(data, 357) >> 16 ) +
             ( fun_GetIndexValue(data, 358) >> 16 ) +
             ( fun_GetIndexValue(data, 359) >> 16 ) ) >= 1;

The level column has only about 80-90 distinct values, the table test has about a million rows, and the data column is passed to the function, so I think the query cannot use any indexes. But I found that if I put the level condition at the end, the query performs slower. Why is that?

Efficiently Filter Large Set With Disjunctions

Posted: 13 Oct 2013 07:26 AM PDT

Let's say I have a single table

CREATE TABLE Ticket (
    TicketId int NOT NULL,
    InsertDateTime datetime NOT NULL,
    SiteId int NOT NULL,
    StatusId tinyint NOT NULL,
    AssignedId int NULL,
    ReportedById int NOT NULL,
    CategoryId int NULL
);

In this example TicketId is the Primary Key.

I want users to be able to create "partially ad-hoc" queries against this table. I say partially because a few parts of the query will always be fixed:

  1. The query will always perform a range filter on an InsertDateTime
  2. The query will always ORDER BY InsertDateTime DESC
  3. The query will page results

The user can optionally filter on any of the other columns. They can filter on none, one, or many. And for each column the user may select from a set of values which will be applied as a disjunction. For example:

SELECT
    TicketId
FROM (
    SELECT
        TicketId,
        ROW_NUMBER() OVER(ORDER BY InsertDateTime DESC) as RowNum
    FROM Ticket
    WHERE InsertDateTime >= '2013-01-01' AND InsertDateTime < '2013-02-01'
      AND StatusId IN (1,2,3)
      AND (CategoryId IN (10,11) OR CategoryId IS NULL)
    ) _
WHERE RowNum BETWEEN 1 AND 100;

Now assume the table has 100,000,000 rows.

The best I can come up with is a covering index that includes each of the "optional" columns:

CREATE NONCLUSTERED INDEX IX_Ticket_Covering ON Ticket (
    InsertDateTime DESC
) INCLUDE (
    SiteId, StatusId, AssignedId, ReportedById, CategoryId
);

This gives me a query plan as follows:

  • SELECT
    • Filter
      • Top
        • Sequence Project (Compute Scalar)
          • Segment
            • Index Seek

It seems pretty good. About 80%-90% of the cost comes from the Index Seek operation, which is ideal.

Are there better strategies for implementing this kind of searching?

I don't necessarily want to offload the optional filtering to the client because in some cases the result set from the "fixed" part could be 100s or 1000s of rows. The client would then also be responsible for sorting and paging, which might be too much work for the client.

MySQL crashes with Signal 8 after "Table has no partition for value" error

Posted: 13 Oct 2013 02:08 PM PDT

We have a MySQL database that was running well, and suddenly it stopped; below is a snippet of the log file. What is the best remedy in this scenario?

130202  1:08:59 [ERROR] Event Scheduler: [root@localhost][packets.ipByte_SecondAnalysis] Table has no partition for value 735266  130202  1:08:59 [Note] Event Scheduler: [root@localhost].[packets.ipByte_SecondAnalysis] event execution failed.  17:09:02 UTC - mysqld got signal 8 ;  This could be because you hit a bug. It is also possible that this binary  or one of the libraries it was linked against is corrupt, improperly built,  or misconfigured. This error can also be caused by malfunctioning hardware.  We will try our best to scrape up some info that will hopefully help  diagnose the problem, but since we have already crashed,   something is definitely wrong and this may fail.    key_buffer_size=8388608  read_buffer_size=131072  max_used_connections=2  max_threads=151  thread_count=3  connection_count=2  It is possible that mysqld could use up to   key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 338489 K  bytes of memory  Hope that's ok; if not, decrease some variables in the equation.    Thread pointer: 0x4223f70  Attempting backtrace. You can use the following information to find out  where mysqld died. If you see no messages after this, something went  terribly wrong...  stack_bottom = 7f6d2c794d98 thread_stack 0x40000  /usr/libexec/mysqld(my_print_stacktrace+0x2e)[0x78329e]  /usr/libexec/mysqld(handle_fatal_signal+0x493)[0x66e2c3]  /lib64/libpthread.so.0[0x3b4300f4a0]  /usr/libexec/mysqld(_ZN12ha_partition21min_rows_for_estimateEv+0x4c)[0x8efe8c]  /usr/libexec/mysqld(_ZN12ha_partition16records_in_rangeEjP12st_key_rangeS1_+0x35)[0x8efec5]  /usr/libexec/mysqld[0x71efc7]  /usr/libexec/mysqld[0x71f62f]  /usr/libexec/mysqld[0x71f8ab]  /usr/libexec/mysqld(_ZN10SQL_SELECT17test_quick_selectEP3THD6BitmapILj64EEyyb+0x1060)[0x727310]  /usr/libexec/mysqld[0x5b02f5]  /usr/libexec/mysqld(_ZN4JOIN8optimizeEv+0x4cd)[0x5b11fd]  /usr/libexec/mysqld(_Z12mysql_selectP3THDPPP4ItemP10TABLE_LISTjR4ListIS1_ES2_jP8st_orderSB_S2_SB_yP13select_resultP18st_select_lex_unitP13st_select_lex+0x105)[0x5b4b25]  /usr/libexec/mysqld(_Z13handle_selectP3THDP3LEXP13select_resultm+0x174)[0x5ba3e4]  /usr/libexec/mysqld[0x57ba37]  /usr/libexec/mysqld(_Z21mysql_execute_commandP3THD+0x304e)[0x581d5e]  /usr/libexec/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x19d)[0x58340d]  /usr/libexec/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x12f1)[0x584eb1]  /usr/libexec/mysqld(_Z24do_handle_one_connectionP3THD+0xd2)[0x613cd2]  /usr/libexec/mysqld(handle_one_connection+0x50)[0x613de0]  /lib64/libpthread.so.0[0x3b430077f1]  /lib64/libc.so.6(clone+0x6d)[0x3b42ce570d]    Trying to get some variables.  Some pointers may be invalid and cause the dump to abort.  Query (7f6cc8223980): is an invalid pointer  Connection ID (thread ID): 43  Status: NOT_KILLED    The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains  information that should help you find out what is causing the crash.  130202 01:09:02 mysqld_safe Number of processes running now: 0  130202 01:09:02 mysqld_safe mysqld restarted  130202  1:09:02 [Warning] The syntax '--log' is deprecated and will be removed in a future release. Please use '--general-log'/'--general-log-file' instead.  130202  1:09:02 [Note] Plugin 'FEDERATED' is disabled.  
130202  1:09:02 InnoDB: The InnoDB memory heap is disabled  130202  1:09:02 InnoDB: Mutexes and rw_locks use GCC atomic builtins  130202  1:09:02 InnoDB: Compressed tables use zlib 1.2.3  130202  1:09:02 InnoDB: Using Linux native AIO  130202  1:09:02 InnoDB: Initializing buffer pool, size = 1.0G  130202  1:09:02 InnoDB: Completed initialization of buffer pool  130202  1:09:02 InnoDB: highest supported file format is Barracuda.  InnoDB: Log scan progressed past the checkpoint lsn 178008262046  130202  1:09:02  InnoDB: Database was not shut down normally!  InnoDB: Starting crash recovery.  InnoDB: Reading tablespace information from the .ibd files...  InnoDB: Restoring possible half-written data pages from the doublewrite  InnoDB: buffer...  InnoDB: Doing recovery: scanned up to log sequence number 178008589529  130202  1:09:02  InnoDB: Starting an apply batch of log records to the database...  InnoDB: Progress in percents: 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99   InnoDB: Apply batch completed  130202  1:09:03  InnoDB: Waiting for the background threads to start  130202  1:09:04 InnoDB: 1.1.8 started; log sequence number 178008589529  /usr/libexec/mysqld: File '/var/log/allQuery.log' not found (Errcode: 13)  130202  1:09:04 [ERROR] Could not use /var/log/allQuery.log for logging (error 13). Turning logging off for the whole duration of the MySQL server process. To turn it on again: fix the cause, shutdown the MySQL server and restart it.  130202  1:09:04 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306  130202  1:09:04 [Note]   - '0.0.0.0' resolves to '0.0.0.0';  130202  1:09:04 [Note] Server socket created on IP: '0.0.0.0'.  
130202  1:09:04 [ERROR] Missing system table mysql.proxies_priv; please run mysql_upgrade to create it  130202  1:09:04 [ERROR] Native table 'performance_schema'.'events_waits_current' has the wrong structure  130202  1:09:04 [ERROR] Native table 'performance_schema'.'events_waits_history' has the wrong structure  130202  1:09:04 [ERROR] Native table 'performance_schema'.'events_waits_history_long' has the wrong structure  130202  1:09:04 [ERROR] Native table 'performance_schema'.'setup_consumers' has the wrong structure  130202  1:09:04 [ERROR] Native table 'performance_schema'.'setup_instruments' has the wrong structure  130202  1:09:04 [ERROR] Native table 'performance_schema'.'setup_timers' has the wrong structure  130202  1:09:04 [ERROR] Native table 'performance_schema'.'performance_timers' has the wrong structure  130202  1:09:04 [ERROR] Native table 'performance_schema'.'threads' has the wrong structure  130202  1:09:04 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_thread_by_event_name' has the wrong structure  130202  1:09:04 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_instance' has the wrong structure  130202  1:09:04 [ERROR] Native table 'performance_schema'.'events_waits_summary_global_by_event_name' has the wrong structure  130202  1:09:04 [ERROR] Native table 'performance_schema'.'file_summary_by_event_name' has the wrong structure  130202  1:09:04 [ERROR] Native table 'performance_schema'.'file_summary_by_instance' has the wrong structure  130202  1:09:04 [ERROR] Native table 'performance_schema'.'mutex_instances' has the wrong structure  130202  1:09:04 [ERROR] Native table 'performance_schema'.'rwlock_instances' has the wrong structure  130202  1:09:04 [ERROR] Native table 'performance_schema'.'cond_instances' has the wrong structure  130202  1:09:04 [ERROR] Native table 'performance_schema'.'file_instances' has the wrong structure  130202  1:09:04 [Note] Event Scheduler: Loaded 2 events  130202  1:09:04 [Note] /usr/libexec/mysqld: ready for connections.  

Delete word, its meanings, its meaning's example sentences from DB

Posted: 13 Oct 2013 05:26 PM PDT

I have three tables as below (simplified for demonstration):

words
=====
integer id
text    word

meanings
========
integer id
integer word_id
text    meaning

examples
========
integer id
integer meaning_id
text    sentence

where word_id stores the id of the word in the words table and meaning_id stores the id of the meaning in the meanings table. I am trying to figure out an SQL query that, given a word's id, deletes the word along with all its meanings and example sentences in one go. Is such an SQL query possible to compose? If so, how?

Edit1: I am using SQLite3 as the database.

Edit2: I figured the following solution which requires 3 sql queries in order:

DELETE FROM examples WHERE meaning_id IN (SELECT id FROM meanings WHERE word_id=the_given_id);
DELETE FROM meanings WHERE word_id=the_given_id;
DELETE FROM words WHERE id=the_given_id;

I'm still looking for the answer to my question: is the whole process possible to be done in one query?
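One possibility, if redefining the tables is an option, is to let SQLite's foreign keys do the cascading so that a single DELETE is enough (a sketch; note that foreign key enforcement must be switched on per connection):

PRAGMA foreign_keys = ON;

CREATE TABLE meanings (
    id      INTEGER PRIMARY KEY,
    word_id INTEGER NOT NULL REFERENCES words(id) ON DELETE CASCADE,
    meaning TEXT
);

CREATE TABLE examples (
    id         INTEGER PRIMARY KEY,
    meaning_id INTEGER NOT NULL REFERENCES meanings(id) ON DELETE CASCADE,
    sentence   TEXT
);

-- With the constraints above, this one statement also removes the word's
-- meanings and their example sentences:
DELETE FROM words WHERE id = 1;   -- the given word id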

MySQL concurrent INSERTs

Posted: 13 Oct 2013 09:26 AM PDT

I have a MySQL database with InnoDB tables. There are different client processes making SELECT (to check the existence of a value) and INSERT or UPDATE (depending on the result of the SELECT) statements. What I fear is that concurrent access to the data could result in two INSERTs where there should have been an INSERT followed by an UPDATE. Is LOCK TABLES ... WRITE the only solution?
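A sketch of one alternative that avoids the SELECT-then-INSERT race without locking the whole table, assuming a UNIQUE or PRIMARY KEY exists on the column being checked (the table and column names below are hypothetical):

INSERT INTO counters (item_id, hits)
VALUES (42, 1)
ON DUPLICATE KEY UPDATE hits = hits + 1;   -- a concurrent duplicate INSERT becomes an UPDATE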

How can I optimize this query and support multiple SKUs?

Posted: 13 Oct 2013 03:26 PM PDT

My current query can only select one SKU at a time. I can leave salesite_id constant. If there is a way to also have varying salesite_ids, that would be good too, but it is not necessary. Any suggestions on indexes would also be much appreciated.

SELECT
    available - (
        SELECT COALESCE(sum(quantity), 0)
        FROM product_locks
        WHERE sku = 'sku1'
    ) - (
        SELECT COALESCE(sum(quantity), 0)
        FROM
            orderlineitems
            INNER JOIN responses_authnets ON responses_authnets.id = orderlineitems.response_id
        WHERE
            sku = 'sku1' AND responses_authnets.salesite_id = 'site_id_1'
    ) AS free,
    available AS total,
    sku,
    on_hold
FROM product_inventories
WHERE sku = 'sku1' AND salesite_id = 'site_id_1';
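A sketch of the same calculation reshaped so that several SKUs can be returned in one statement (salesite_id kept constant, as allowed above); it assumes orderlineitems carries the sku column, as the original query implies:

SELECT
    pi.sku,
    pi.available AS total,
    pi.on_hold,
    pi.available
      - COALESCE(pl.locked_qty, 0)
      - COALESCE(ol.ordered_qty, 0) AS free
FROM product_inventories pi
LEFT JOIN (
    SELECT sku, SUM(quantity) AS locked_qty
    FROM product_locks
    GROUP BY sku
) pl ON pl.sku = pi.sku
LEFT JOIN (
    SELECT oli.sku, SUM(oli.quantity) AS ordered_qty
    FROM orderlineitems oli
    INNER JOIN responses_authnets ra ON ra.id = oli.response_id
    WHERE ra.salesite_id = 'site_id_1'
    GROUP BY oli.sku
) ol ON ol.sku = pi.sku
WHERE pi.salesite_id = 'site_id_1'
  AND pi.sku IN ('sku1', 'sku2', 'sku3');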

How to modify an update in Oracle so it performs faster?

Posted: 13 Oct 2013 02:26 PM PDT

I have this query:

UPDATE
(
    SELECT h.valid_through_dt, h.LAST_UPDATE_TMSTMP
    FROM   ETL_FEE_SCH_TMP d, FEE_SCHEDULE_HISTORICAL h
    WHERE  h.FUND_ID = d.FUND_ID
    AND    h.FEETYPE_NAME = d.FEETYPE_NAME
    AND    h.BREAKPOINT_TYPE = d.BREAKPOINT_TYPE
    AND    h.BREAKPOINT_QTY = d.BREAKPOINT_QTY
    AND    h.LOW_BREAKPOINT_AMT = d.LOW_BREAKPOINT_AMT
    AND    h.VALID_THROUGH = TO_DATE ('31-DEC-9999', 'dd-mon-yyyy')
    AND    h.universe = 'DC'
    AND    h.universe = d.universe
    AND    EXISTS
    (
        SELECT 1
        FROM FEE_SCHEDULE s
        WHERE s.FUND_ID = h.FUND_ID
        AND   s.FEETYPE_NAME = h.FEETYPE_NAME
        AND   s.BREAKPOINT_TYPE = h.BREAKPOINT_TYPE
        AND   s.BREAKPOINT_QTY = h.BREAKPOINT_QTY
        AND   s.LOW_BREAKPOINT_AMT = h.LOW_BREAKPOINT_AMT
        AND   s.universe = 'DC'
    )
) updateTable
SET
    updateTable.VALID_THROUGH = (SYSDATE - 1),
    updateTable.LAST_UPDATE_TMSTMP = SYSTIMESTAMP;

The trouble that I am having is that this query takes a long time to run. I don't know whether it is possible to run it in parallel, or whether it would be easier to update via a cursor in a pipelined function.

What would you suggest?

This is all the information that I believe is relevant.

This is the execution plan of the internal select:

Execution Plan  ----------------------------------------------------------  Plan hash value: 57376096  ---------------------------------------------------------------------------------------------------------  | Id  | Operation                    | Name                     | Rows  | Bytes| Cost (%CPU)| Time     |  ---------------------------------------------------------------------------------------------------------  |   0 | SELECT STATEMENT             |                          |     1 |   306 |  8427   (1)| 00:01:42 |  |   1 |  NESTED LOOPS                |                          |       |       |            |          |  |   2 |   NESTED LOOPS               |                          |     1 |    306|  8427   (1)| 00:01:42 |  |   3 |    MERGE JOIN CARTESIAN      |                          |     1 |    192|  8426   (1)| 00:01:42 |  |*  4 |     INDEX RANGE SCAN         | SYS_C000666              |     1 |     96|     2   (0)| 00:00:01 |  |   5 |     BUFFER SORT              |                          |  3045K|   278M|  8425   (1)| 00:01:42 |  |   6 |      SORT UNIQUE             |                          |  3045K|   278M|  8425   (1)| 00:01:42 |  |*  7 |       TABLE ACCESS FULL      | FEE_SCHEDULE             |  3045K|   278M|  8425   (1)| 00:01:42 |  |*  8 |    INDEX RANGE SCAN          | FEE_SCHDL_IDX1           |     1 |       |     1   (0)| 00:00:01 |  |*  9 |   TABLE ACCESS BY INDEX ROWID| FEE_SCHEDULE_HISTORICAL  |     1 |   114 |     1   (0)| 00:00:01 |  ---------------------------------------------------------------------------------------------------------  Predicate Information (identified by operation id):  ---------------------------------------------------     4 - access("D"."UNIVERSE"='DC')     7 - filter("S"."UNIVERSE"='DC')     8 - access("H"."UNIVERSE"='DC' AND "S"."FUND_ID"="H"."FUND_ID" AND                "S"."FEETYPE_NAME"="H"."FEETYPE_NAME" AND                "S"."BREAKPOINT_TYPE"="H"."BREAKPOINT_TYPE" AND                "S"."BREAKPOINT_QTY"="H"."BREAKPOINT_QTY" AND                "S"."LOW_BREAKPOINT_AMT"="H"."LOW_BREAKPOINT_AMT")         filter("H"."FUND_ID"="D"."FUND_ID" AND                "H"."FEETYPE_NAME"="D"."FEETYPE_NAME" AND                "H"."BREAKPOINT_TYPE"="D"."BREAKPOINT_UNIT_TY  

Table data:

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------  UNIVERSE|FUND_ID   |FEETYPE_NAME |BREAKPOINT_TYPE|BREAKPOINT_QTY|LOW_BREAKPOINT_AMT|HIGH_BREAKPOINT_AMT|FEE_PCT|FEE_SCHDL_SEQ_ID|GROUP_ID|LAST_UPDATE_TMSTMP  |VALID_FROM|VALID_THROUGH|INSERT_TMSTMP        |JOB_ID|  ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------  DC      |DC9ZTPLPHO|DeferLoad    |Percentage     |4             |10000             |300000             |3.14   |780250          |null    |1/4/2012  3:59:54 PM|6/23/2012 |12/31/9999   |1/5/2011   3:59:54 PM|666   |  DC      |DCE86Y8XFU|RedemptionFee|Percentage     |9             |  100             |100500             |7.67   |780251          |null    |6/4/2012  4:49:54 PM|11/12/2011|12/31/9999   |8/17/2011  2:00:54 PM|666   |  DC      |DCAYL0KONA|FrontLoad    |Percentage     |2             |50000             |601500             |5.00   |780252          |null    |4/25/2012 4:49:54 PM|8/2/2012  |12/31/9999   |12/19/2012 9:59:00 PM|666   |  DC      |DC9ZTPLPHO|DeferLoad    |Percentage     |7             |80000             |900000             |2.24   |780252          |null    |4/25/2012 4:49:54 PM|8/2/2012  |12/31/9999   |12/19/2012 9:59:00 PM|666   |  ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------  

This is the script of the historical table:

CREATE TABLE FEE_SCHEDULE_HISTORICAL  (    UNIVERSE                        VARCHAR2(2 BYTE) NOT NULL,    FUND_ID                         VARCHAR2(10 BYTE) NOT NULL,    FEETYPE_NAME                    VARCHAR2(75 BYTE),    BREAKPOINT_TYPE                 VARCHAR2(50 BYTE),    BREAKPOINT_QTY                  VARCHAR2(10 BYTE),    LOW_BREAKPOINT_AMT              NUMBER(19,6),    HIGH_BREAKPOINT_AMT             NUMBER(19,6),    FEE_PCT                         NUMBER(19,6),    FEE_SCHDL_SEQ_ID                NUMBER        NOT NULL,    GROUP_ID                        NUMBER,    LAST_UPDATE_TMSTMP              DATE          NOT NULL,    VALID_FROM                      DATE          NOT NULL,    VALID_THROUGH                   DATE          NOT NULL,    INSERT_TMSTMP                   DATE          NOT NULL,    JOB_ID                          NUMBER        NOT NULL  );    CREATE UNIQUE INDEX FEE_SCHDL_PK ON FEE_SCHEDULE_HISTORICAL(FEE_SCHDL_SEQ_ID);    CREATE UNIQUE INDEX FEE_SCHDL_HST_IDX ON FEE_SCHEDULE_HISTORICAL (      UNIVERSE,      FUND_ID,      FEETYPE_NAME,      BREAKPOINT_TYPE,      BREAKPOINT_QTY,       LOW_BREAKPOINT_AMT,      VALID_FROM,      JOB_ID  )    CREATE INDEX FEE_SCHEDULE_HST_IDX2 ON FEE_SCHEDULE_HISTORICAL(LAST_UPDATE_TMSTMP)    CREATE INDEX FEE_SCHEDULE_HST_IDX3 ON FEE_SCHEDULE_HISTORICAL(VALID_THROUGH)    ALTER TABLE FEE_SCHEDULE_HISTORICAL ADD (      CONSTRAINT FEE_SCHDL_PK      PRIMARY KEY      (FEE_SCHDL_SEQ_ID)  );  

This is the other table:

CREATE TABLE FEE_SCHEDULE  (    UNIVERSE                        VARCHAR2(2 BYTE) NOT NULL,    FUND_ID                         VARCHAR2(10 BYTE) NOT NULL,    FEETYPE_NAME                    VARCHAR2(75 BYTE),    BREAKPOINT_TYPE                 VARCHAR2(50 BYTE),    BREAKPOINT_QTY                  VARCHAR2(10 BYTE),    LOW_BREAKPOINT_AMT              NUMBER(19,6),    HIGH_BREAKPOINT_AMT             NUMBER(19,6),    FEE_PCT                         NUMBER(19,6),    JOB_RUN_ID                      NUMBER        NOT NULL,    FILE_DATE                       DATE          NOT NULL,    CYCLE_DATE                      DATE          NOT NULL  )  

The temporary table is the result of FEE_SCHEDULE_HISTORICAL minus FEE_SCHEDULE

[SQL 2012] Looking for a faster count than count(*) for my sp.



Looking for a faster count than count(*) for my sp.

Posted: 04 Oct 2013 05:09 AM PDT

I have the following sp which is being slowed down by the following count statement:-(select COUNT(*) from EMPLOYMENT_OPPORTUNITIES_CTE) as TotalCount FROM ( I have tried:--(SELECT-- Total_Rows= SUM(st.row_count)--FROM-- sys.dm_db_partition_stats st--WHERE-- object_name(object_id) = 'EMPLOYMENT_OPPORTUNITIES_CTE' AND (index_id < 2))-- as TotalCount FROM ( which is very fast, but does not give me a TotalCount value.I have also tried: (SELECT rowcntFROM sys.sysindexesWHERE id = OBJECT_ID('EMPLOYMENT_OPPORTUNITIES_CTE') AND (indid = 0 OR indid = 1)) as TotalCount FROM ( which is also very fast, but still no TotalCount value.Am i stuck with count(*), or is there a faster way to do it, in this situation?ThanksUSE [JobPortal9_10_13]GO/**********/SET ANSI_NULLS ONGOSET QUOTED_IDENTIFIER ONGO-- ============================================= -- -- Description: <Description,[Get All Jobs Searched Structured SQL],> -- -- ============================================= ALTER Procedure [dbo].[zianGetAllJobsSearched10_1_13FAST] -- Add the parameters for the stored procedure here @Title varchar(250), @Industry int, @Industry2 int, @Industry3 int, @Date int, @JobTitle int, @JobType int, @Experience int, @Education int, @State int, @City int, @Salary int, @MaxSalary int, @fromRec int, @toRec int, @SortType VARCHAR(50), @SortOrder VARCHAR(10) WITH RECOMPILE AS DECLARE @ActualDate DateTime = cast(((select dateadd(d,@Date,GETDATE()))) as varchar(20));IF (@Title = '') SET @Title = NULL;WITH EMPLOYMENT_OPPORTUNITIES_CTE(id,title,contactperson,lastmodified,description,workexperiencetypeid, workexperience,jobtypeid,AcademicExperienceTypeId, industryid,industryid2,industryid3,salaryminid,salarymaxid, --jobTitle, city,state, PostalCode,positions,lastmodified2) --,deadline)AS( SELECT e.id,title,contactperson,lastmodified,e.description,workexperiencetypeid, isnull((select we.[Name] from workexperiencetypes we where we.id=e.workexperiencetypeid),'') as workexperience, isnull((select jot.[Name] from jobtypes jot where jot.id=e.jobtypeid),'') as jobtype, isnull((select edu.[Name] from Degree edu where edu.Id=e.AcademicExperienceTypeId),'') as education, isnull((select ind.[Name] from industries ind where ind.id=e.industryid),'') as industryname, isnull((select ind.[Name] from industries ind where ind.id=e.industryid2),'') as industryname2, isnull((select ind.[Name] from industries ind where ind.id=e.industryid3),'') as industryname3, salaryminid,salarymaxid, --isnull((select jt.[Name] from jobTitles jt where jt.id=e.jobtypeid),'') as jobTitle, isnull((select ci.[Name] from cities ci where ci.id=c.cityid),'') as city, isnull((select r.[AbbreviatedName] from regions r where r.id=c.regionid),'') as state, isnull((select comp.[PostalCode] from Companys comp where comp.Id=c.id),'') as PostalCode, positions, substring(cast(e.LastModified as varchar(20)),1,12) as lastmodified2 --ROW_NUMBER() OVER (ORDER BY (SELECT 1)) RowNumber from EmploymentOpportunities e With (nolock) --left outer join companys c on e.officeid=c.id --inner join companys c on e.officeid=c.id inner join companys c on e.companyid=c.id where (@Title IS NULL or title = @Title) and (@Industry = 0 OR e.industryid = @Industry) and (@Industry2 = 0 OR Industryid2 = @Industry2) and (@Industry3 = 0 OR Industryid3 = @Industry3) and (@Date = 0 OR lastmodified >= @Date) and lastmodified is not null and lastmodified > @ActualDate --and e.LastModified >''' + cast(((select dateadd(d,@Date,GETDATE()))) as varchar(20)) + '''' --and (@JobTitle = 0 OR title = @JobTitle) and 
(@JobType = 0 OR jobtypeid = @JobType) and (@Experience = 0 OR workexperiencetypeid = @Experience) and (@Education = 0 OR academicexperiencetypeid = @Education) and (@State = 0 OR c.RegionId = @State) and (@City = 0 OR c.CityId = @City) and (@Salary = 0 OR SalaryMinID >= @Salary) and (@MaxSalary = 0 OR SalaryMaxID <= @MaxSalary) )SELECT id,title as Title,contactperson as ContactPerson,lastmodified,description as Description,workexperience,jobtypeid as jobtype,academicexperiencetypeid as education,industryid as industryname,industryid2 as industryname2,industryid3 as industryname3,salaryminid as salary,salarymaxid as maxsalary,--jobtitle as jobTitle,city + ', ' + state + ', ' + PostalCode as Location,positions as Positions,--deadline,rownumber as RowNumber,--(select COUNT(*) from EMPLOYMENT_OPPORTUNITIES_CTE) as TotalCount FROM ( --(SELECT-- Total_Rows= SUM(st.row_count)--FROM-- sys.dm_db_partition_stats st--WHERE-- object_name(object_id) = 'EMPLOYMENT_OPPORTUNITIES_CTE' AND (index_id < 2))-- as TotalCount FROM ( (SELECT rowcntFROM sys.sysindexesWHERE id = OBJECT_ID('EMPLOYMENT_OPPORTUNITIES_CTE') AND (indid = 0 OR indid = 1)) as TotalCount FROM ( SELECT id,title,contactperson,lastmodified,description,workexperience,jobtypeid,academicexperiencetypeid,industryid,industryid2,industryid3,salaryminid,salarymaxid,--jobtitle,city,state,PostalCode,--Location,positions,--deadline,ROW_NUMBER() OVER (ORDER BY LastModified DESC ) RowNumber FROM EMPLOYMENT_OPPORTUNITIES_CTE ) p1WHERE RowNumber BETWEEN @fromRec AND @toRec ORDER BY CASE WHEN @SortType = 'LastModified' AND @SortOrder = 'DESC' THEN LastModified END DESC, CASE WHEN @SortType = 'LastModified' AND @SortOrder != 'DESC' THEN LastModified END, CASE WHEN @SortType = 'City' AND @SortOrder = 'DESC' THEN City END DESC, CASE WHEN @SortType = 'City' AND @SortOrder != 'DESC' THEN City END, CASE WHEN @SortType = 'State' AND @SortOrder = 'DESC' THEN State END DESC, CASE WHEN @SortType = 'State' AND @SortOrder != 'DESC' THEN State END, CASE WHEN @SortType = 'Title' AND @SortOrder = 'DESC' THEN Title END DESC, CASE WHEN @SortType = 'Title' AND @SortOrder != 'DESC' THEN Title END OPTION(Maxdop 8)
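One alternative worth benchmarking on SQL 2012 (a simplified sketch, not the full procedure; the CTE body keeps all the filters it has today) is to let a window aggregate produce the total in the same pass that numbers the rows, instead of evaluating the CTE a second time:

;WITH EMPLOYMENT_OPPORTUNITIES_CTE AS (
    SELECT id, title, lastmodified
    FROM EmploymentOpportunities
    WHERE lastmodified IS NOT NULL        -- plus the other filters from the procedure
)
SELECT id,
       title,
       lastmodified,
       ROW_NUMBER() OVER (ORDER BY lastmodified DESC) AS RowNumber,
       COUNT(*) OVER ()                               AS TotalCount   -- rows returned by the CTE
FROM EMPLOYMENT_OPPORTUNITIES_CTE;

The sys.sysindexes and sys.dm_db_partition_stats attempts above return nothing because EMPLOYMENT_OPPORTUNITIES_CTE is a common table expression, not a table, so there is no row-count metadata for it to read.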

Help! Granting user access to SSRS reports

Posted: 12 Oct 2013 11:24 AM PDT

I have a report server on SS2012. I granted users/(security group) access to SSRS reports using Security -> New Role Assignment -> Browser. However, users get the following error: "An error has occurred during report processing. (rsProcessingAborted) The permissions granted to user 'xxx' are insufficient for performing this operation. (rsAccessDenied)" What's the issue here? Do I need to grant the user/SG additional read access to the backend database the reports refer to? Thanks in advance!

Saturday, October 12, 2013

[how to] MySQL Database "Table Doesn't Exist" When Clicked in phpMyAdmin



MySQL Database "Table Doesn't Exist" When Clicked in phpMyAdmin

Posted: 12 Oct 2013 02:36 PM PDT

I recently updated MAMP (the localhost stack for Mac) to the latest version, 2.2, in order to get the latest versions of Apache, MySQL, and PHP. After the upgrade, all my localhost websites are unusable. They can't load in the browser (with MAMP running). I see that the MySQL DB files end in .frm (form). When I click on a table in phpMyAdmin, it says "Table Doesn't Exist," even though the tables are listed in phpMyAdmin and in the folder for that particular database inside MAMP/db. How do I fix this, to be able to edit my websites locally?

How to split/explode comma delimited string field into SQL query

Posted: 12 Oct 2013 05:57 AM PDT

I have field id_list='1234,23,56,576,1231,567,122,87876,57553,1216'

and I want to use it to search IN this field:

SELECT *
FROM table1
WHERE id IN (id_list)
  • id is integer

  • id_list is varchar/text

But in this way this doesn't work, so I need in some way to split id_list into select query.

What solution should I use here? I'm using the T-SQL Sybase ASA 9 database (SQL Anywhere).

The way I see this, I could create my own function that walks the string with a WHILE loop, extracts each element by searching for the delimiter position, and inserts the elements into a temp table which the function then returns as its result, roughly as sketched below.
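A sketch of that loop, written in SQL Server style T-SQL only to illustrate the idea (SQL Anywhere 9 syntax for variables, temporary tables and functions differs, so it would need adapting):

DECLARE @id_list VARCHAR(1000) = '1234,23,56,576,1231,567,122,87876,57553,1216';
DECLARE @pos INT;

CREATE TABLE #ids (id INT);

WHILE LEN(@id_list) > 0
BEGIN
    SET @pos = CHARINDEX(',', @id_list);
    IF @pos = 0
    BEGIN
        -- last (or only) element left in the string
        INSERT INTO #ids VALUES (CAST(@id_list AS INT));
        SET @id_list = '';
    END
    ELSE
    BEGIN
        INSERT INTO #ids VALUES (CAST(LEFT(@id_list, @pos - 1) AS INT));
        SET @id_list = SUBSTRING(@id_list, @pos + 1, LEN(@id_list));
    END
END

SELECT t.*
FROM table1 t
JOIN #ids i ON i.id = t.id;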

MySQL needs more space

Posted: 12 Oct 2013 04:55 AM PDT

I am using a program to import a Wikipedia dump into my local MySQL server. The program is still running; I started it four days ago. Unfortunately drive C: is about to become full. I have two HDDs connected to my PC, each 80GB, and the second HDD is empty. How can I prevent the program from throwing an exception? It has no pause option. Is it possible to use the second HDD in this scenario?

(image omitted)

Design DB for users with different information fields?

Posted: 12 Oct 2013 10:46 AM PDT

Let's say I want to register all faculty of a university; they are in different fields and have different resume information. Here we have some common fields and some field-specific fields. For example, for CS, economics, and medicine we may have:

CS:

ID: 123
Name: Ali
Lname: Alipour
Projects: p1, p3, p6
Programming Skills: C [5 star], Java [5 star], JS [3 star], etc.
Research Interest: Network

Medicine faculty:

ID: 456
Name: Jafar
Lname: Jafarson
Hospital experience: hospital1 (2 years), Hospital2 (1 month), ...
Research Interest: Human body

Economic faculty:

ID: 789
Name: Sadegh
Lname: Alipour
Company: Company1 (company_name, add, tel, etc), Company2 (company_name, add, tel, etc)
Research Interest: online currencies

We also may have students in the system:

ID: st_123
Name: mahdi
Lname: Mahdiyar
Major: CS
reg_date: 2013.09.09

My first guess was to make a user_table and place all the common fields in it, and then make a field-specific table for each major: CS_faculty_table, Medicine_faculty_table, Economic_faculty_table and student_table. Then I thought about inheritance and polymorphism in OO programming languages and whether something similar is available in the DB world. Then I saw the accepted answer of this question, which says you can have different fields inside a JSON document and store it in your DB as a BLOB field!

I'm new to the DB field and don't have much experience designing DBs, so I wanted to know what the best approach is in this kind of situation.

I want a general answer. I don't know whether the choice of DB matters or not, and I don't mind changing my DB to another open-source DB (SQL or NoSQL). I had PostgreSQL in mind.
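For what it's worth, the "first guess" above is a known pattern (sometimes called class table inheritance): a common table for the shared fields plus one narrow table per specialization. A minimal PostgreSQL-flavoured sketch with assumed names:

CREATE TABLE person (
    id          serial PRIMARY KEY,
    name        text NOT NULL,
    lname       text NOT NULL,
    person_type text NOT NULL          -- e.g. 'cs_faculty', 'medicine_faculty', 'student'
);

CREATE TABLE cs_faculty (
    person_id         integer PRIMARY KEY REFERENCES person(id),
    research_interest text
);

CREATE TABLE student (
    person_id integer PRIMARY KEY REFERENCES person(id),
    major     text,
    reg_date  date
);

Multi-valued items such as projects or programming skills would then go into their own child tables rather than into delimited strings.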

Transactions' order of commitment within Serializable schedules

Posted: 12 Oct 2013 02:31 AM PDT

The following diagram was taken from a book (T1 and T2 are transactions which read and write database objects A and B). For convenience, I quoted a few lines of text from that book which describe the diagram; the quote is below.

"Even though the actions of T1 and T2 are interleaved, the result of this schedule is equivalent to running T1(in its entirety) and then running T2. Intuitively, T1's read and write of B is not influenced by T2's actions on A, and the net effect is the same if these actions are 'swapped' to obtain the serial schedule T1;T2."

My question is regarding this "Even though the actions of T1 and T2 are interleaved, the result of this schedule is equivalent to running T1(in its entirety) and then running T2." statement. How can this be true if T2 commits before T1? Please give a detailed answer.

A serializable schedule

MySQL: Optimizing for large but discrete data sets

Posted: 12 Oct 2013 10:17 AM PDT

In brief: I'm developing a database that handles GTFS datasets from multiple transit agencies. Each dataset contains millions of rows in the stop_times.txt file (and thus its corresponding table). Updating the table gets slower and slower as it gets bigger. I can deal with a couple of million rows from a single agency, but what happens when I add 10 more feeds? 50?

Now, the data sets are completely independent of one another. I won't be trying to join information across DART, MTA, and Transport for London. I feel like it would be very bad database design, but I'm tempted to create a separate table for each and forget about the whole thing.

I'm sure this has been answered somewhere, but I really don't know what I'm searching for. I've read up a bit on partitioning, but I'm not sure if that will solve my problem. Would adding a hash partition on my agency_id field solve issues with exploding BTREE indexes?

Here's my current table structure:

CREATE TABLE `stop_times` (
  `trip_id` bigint(20) unsigned DEFAULT NULL,
  `arrival_time` time DEFAULT NULL,
  `departure_time` time DEFAULT NULL,
  `stop_id` bigint(20) unsigned DEFAULT NULL,
  `stop_sequence` smallint(5) unsigned DEFAULT NULL,
  `stop_headsign` tinytext,
  `route_id` mediumint(8) unsigned DEFAULT NULL,
  `feed_id` smallint(5) unsigned DEFAULT NULL,
  `update_id` int(10) unsigned DEFAULT NULL,
  UNIQUE KEY `stop_sequence` (`trip_id`,`stop_sequence`) USING HASH,
  KEY `trip_id` (`trip_id`) USING HASH,
  KEY `departure_time` (`departure_time`) USING BTREE,
  KEY `stop_id` (`stop_id`) USING HASH,
  KEY `feed_id` (`feed_id`) USING HASH,
  KEY `update_id` (`update_id`) USING HASH
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

Thanks in advance for the help.

Cannot create stored procedure

Posted: 12 Oct 2013 08:13 AM PDT

I have entered the following statement into the MySQL 5.6 command line client. However, the following error was received. I haven't even been able to add END // and DELIMITER ; after the SELECT statement.

At the same time, I was wondering: after the stored procedure has been created successfully, how do I CALL the stored procedure not from the command line, but from Java code?

Kindly assist. Greatly appreciated!

(image omitted)

synchronizing local and server database

Posted: 12 Oct 2013 08:19 AM PDT

I am developing billing system software. For this I have created a database that contains many tables. The database is on the local system, and all transaction data will be stored in the local system's database.

I would like to provide backup tables on a server. Whenever the person (the billing software user) wants, he/she can upload the data to the server (only newly added data should be uploaded if existing data is already there).

If the local system's data is corrupted or gets deleted for some reason, it can be downloaded from the database server. All of these features should be available from within the billing software. How do I do this?

Should I join datetime to a date using cast or range?

Posted: 12 Oct 2013 05:28 AM PDT

This question is a take-off from the excellent one posed here:

Cast to date is sargable but is it a good idea?

In my case, I am not concerned with the WHERE clause but in joining to an events table which has a column of type DATE

One table has DATETIME2 and the other has DATE... so I can effectively JOIN using a CAST( AS DATE) or I can use a "traditional" range query (>= date AND < date+1).

My question is which is preferable? The DATETIME values will almost never match the predicate DATE value.

I expect to stay on the order of 2M rows having the DATETIME and under 5k having the DATE (if this consideration makes a difference)

Should I expect the same behavior on the JOIN as I might using the WHERE clause? Which should I prefer to retain performance with scaling? Does the answer change with MSSQL 2012?

My generalized use-case is to treat my events table like a calendar table

SELECT
    events.columns
    ,SOME_AGGREGATIONS(tasks.column)
FROM
    events
LEFT OUTER JOIN
    tasks
        --This states my intent more clearly
        ON CAST(tasks.datetimecolumn AS DATE) = events.datecolumn
        --But is this more effective/scalable?
        --ON tasks.datetimecolumn >= events.datecolumn
        --AND tasks.datetimecolumn < DATEADD(day,1,events.datecolumn)
GROUP BY
    events.columns
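A third option that may be worth sketching (assuming SQL Server 2008+ and that tasks is a base table you are allowed to alter): persist the cast as a computed column and index it, so the join predicate stays as readable as the CAST version while still getting an index seek:

ALTER TABLE tasks
    ADD datetimecolumn_date AS CAST(datetimecolumn AS DATE) PERSISTED;

CREATE INDEX IX_tasks_datetimecolumn_date
    ON tasks (datetimecolumn_date);

-- the join could then read:
--     ON tasks.datetimecolumn_date = events.datecolumn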

Booted by MYSQL Error (2003) 10060 mid way through work

Posted: 12 Oct 2013 10:24 AM PDT

I was working on some queries and then my HeidiSQL froze, I tried to reboot the connection and I get good old MYSQL Error (2003) (10060). It worked just fine before that.

I haven't made any firewall changes, and I checked the "white list" of IPs on AWS; it was still fine. I have encountered this error code before, but never in the middle of a session with no changes on my side.

Thoughts?

Edit 1:
I turned the firewall off and still got the same error.

Edit 2:
It suddenly works again, but I would like to know why this happened. Connection issues?

insufficient privileges while executing oracle stored procedure?

Posted: 12 Oct 2013 08:26 AM PDT

I'm getting an insufficient privileges error while executing the following Oracle stored procedure. I'm using Oracle Database 10g Express Edition.

CREATE OR REPLACE PROCEDURE sp_update_acounts(
    accounts_file_dir  IN VARCHAR2,
    accounts_file_name IN VARCHAR2)
IS
BEGIN
  EXECUTE IMMEDIATE 'CREATE OR REPLACE DIRECTORY ext_accounts_dir AS '''||accounts_file_dir||'''';

  EXECUTE IMMEDIATE 'grant read, write on directory ext_accounts_dir to myuser';

  EXECUTE IMMEDIATE 'drop table crm_account_stage';

  EXECUTE IMMEDIATE 'CREATE TABLE crm_account_stage (entity_account_id NUMBER(19,0), crm_id VARCHAR2(255 CHAR))
    ORGANIZATION EXTERNAL (TYPE ORACLE_LOADER DEFAULT DIRECTORY ext_accounts_dir
    ACCESS PARAMETERS (FIELDS TERMINATED BY '','' ( entity_account_id CHAR(225), crm_id CHAR(225)))
    LOCATION ('''||accounts_file_name||''''||') )';

  MERGE INTO ua_crm_accounts acc
  USING (
    SELECT entity_account_id,
           crm_id
    FROM crm_account_stage) acc_stage
  ON (acc_stage.entity_account_id = acc.pkey)
  WHEN MATCHED THEN
    UPDATE SET acc.crm_id = acc_stage.crm_id;

END;

I used the post Update oracle sql database from CSV to build this SP. I could compile the stored procedure successfully. I have all the rights for the Oracle user because I'm the admin; I have granted all possible rights.

But when I execute the SP I get an error like:

Error starting at line 13 in command:  execute sp_update_acounts('C:\Users\surenr\Desktop\UA\Intitial-Conversion','acc_data.csv')  Error report:  ORA-01031: insufficient privileges  ORA-06512: at "myuser.SP_UPDATE_ACOUNTS", line 7  ORA-06512: at line 1  01031. 00000 -  "insufficient privileges"  *Cause:    An attempt was made to change the current username or password             without the appropriate privilege. This error also occurs if             attempting to install a database without the necessary operating             system privileges.             When Trusted Oracle is configure in DBMS MAC, this error may occur             if the user was granted the necessary privilege at a higher label             than the current login.  *Action:   Ask the database administrator to perform the operation or grant             the required privileges.             For Trusted Oracle users getting this error although granted the             the appropriate privilege at a higher label, ask the database             administrator to regrant the privilege at the appropriate label.  

Update: I'm not trying to update a user or password, but the error message reads as if I were trying to modify user details. When I run the same code step by step outside the stored procedure, it executes without any problem.

What could be the reason for this? How can I resolve the issue?
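One thing worth checking (a sketch, not a confirmed diagnosis): inside a definer's-rights PL/SQL procedure, privileges acquired through roles (including DBA) are disabled, which would explain why the same statements work interactively but fail inside the procedure. Direct grants, or declaring the procedure with invoker's rights, are the usual workarounds:

-- grant the needed system privileges directly to the user, not via a role:
GRANT CREATE ANY DIRECTORY TO myuser;
GRANT CREATE TABLE TO myuser;

-- or declare the procedure with invoker's rights:
CREATE OR REPLACE PROCEDURE sp_update_acounts(
    accounts_file_dir  IN VARCHAR2,
    accounts_file_name IN VARCHAR2)
AUTHID CURRENT_USER
IS
BEGIN
  NULL;  -- body as above
END;
/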

One Materialized Views in Two Refresh Groups

Posted: 12 Oct 2013 02:40 AM PDT

I have five materialized views that I want to refresh on two occasions: every Sunday, and on the 1st of every month. I created a refresh group for the weekly refresh and that works fine. But when I tried to create the second refresh group for the monthly refresh, I got "materialized view is already in a refresh group".

Can a materialized view belong to only one refresh group?

What options do I have to refresh it in different intervals?
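If a materialized view can indeed belong to only one refresh group, one possible workaround (a sketch with made-up names) is to keep the weekly refresh group and schedule the monthly refresh as a separate job that calls DBMS_MVIEW.REFRESH directly:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'MONTHLY_MV_REFRESH',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''MV1,MV2,MV3,MV4,MV5'', ''C''); END;',
    start_date      => TRUNC(ADD_MONTHS(SYSDATE, 1), 'MM'),
    repeat_interval => 'FREQ=MONTHLY;BYMONTHDAY=1',
    enabled         => TRUE);
END;
/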

"ORA-03113: end-of-file on communication channel" on startup

Posted: 12 Oct 2013 06:26 AM PDT

I have been reading posts here, on Oracle support, and anywhere else I can find for the last three days and I've given up on this problem...

An Oracle database hung. Shutdown of the database sat for a few hours and then it quit. It wouldn't restart. The server was restarted. Oracle was restarted. Going step by step: startup nomount works, alter database mount works, alter database open returns ORA-03113. This is all on localhost - not over the network. The machine has no firewall of any kind running.

Any idea how to get past this ORA-03113 error? I've been on the phone with support in India for the last 4.5 hours and I haven't found anyone helpful yet.

Create Login command error

Posted: 12 Oct 2013 03:26 PM PDT

What is wrong here in the following screenshot? (screenshot of the CREATE LOGIN statement and its error)

I am trying solutions posted here and here to add a login

MySQL gives me:“Can't open and lock privilege tables: Table 'host' is read only”

Posted: 12 Oct 2013 12:26 PM PDT

I am facing a problem restoring a MySQL database. My primary database was MySQL 5.1 and now I am trying to copy it to MySQL 5.5. The database was backed up using Xtrabackup.

I am using Ubuntu 12.04.3 LTS on this server, MySQL version is: 5.5.32-0ubuntu0.12.04.1-log.

I have followed all the steps to restore using Xtrabackup; this created the database files, which I have copied to a tmp directory.

I have modified my.cnf to point to this tmp directory. I have changed the tmp directory's permissions and changed the ownership of the files to the mysql user.

drwxr-xr-x 12 mysql mysql 4096 Sep 10 10:04 base  

Now when I start the MySQL server I get this error:

[ERROR] Fatal error: Can't open and lock privilege tables: Table 'host' is read only

I have given a try as follows:

  1. I even tried installing MySQL 5.1 to see if that was the issue.
  2. I tried chcon mysql_db_t to change the context, but it gives me:

    can't apply partial context to unlabelled file

  3. I have used --skip-grant to get into the database, but with this I can only access InnoDB tables; MyISAM tables throw a read-only error.

  4. After --skip-grant, I ran mysql_upgrade. This throws errors stating that many tables are read-only.
  5. I have removed AppArmor too, and restarted.
  6. I have previously restored a different database (5.1 to 5.5) on Ubuntu 12.04.2 LTS without any issues.

Can someone please point me in the right direction? I am not sure what's wrong with the permissions.

Percona Xtradb Cluster : How to speed up insert?

Posted: 12 Oct 2013 02:26 PM PDT

I recently installed a 3-node full-master cluster based on Percona XtraDB (a very easy install). But now I need to do some tuning to speed up INSERT/UPDATE requests. Currently I make around 100 inserts every 5 minutes, plus around 400 updates in the same interval. All of these operations took less than 3 minutes on a single-server architecture; now, with the 3-node cluster, they take more than 5 minutes...

Is there any tuning I can do to speed these operations up? Here is my current cnf configuration:

[mysqld]
datadir=/var/lib/mysql
user=mysql

wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_address=gcomm://dbnode01,dbnode02,dbnode03

binlog_format=ROW
default_storage_engine=InnoDB
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
wsrep_node_address=1.2.3.4
wsrep_cluster_name=my_cluster
wsrep_sst_method=xtrabackup
wsrep_sst_auth="user:password"

Here are the 3-server hard config :

Node#1

CPU: Single Processor Quad Core Xeon 3470 - 2.93Ghz (Lynnfield) - 1 x 8MB cache w/HT  RAM: 8 GB DDR3 Registered 1333  HDD: 500GB SATA II  

Node#2

CPU: Single Processor Quad Core Xeon 1270 V2 - 3.50GHz (Ivy Bridge) - 1 x 8MB cache w/HT  RAM: 4 GB DDR3 1333  HDD: 1.00TB SATA II  

Node#3

CPU: Intel(R) Xeon(R) CPU E3-1245 V2 @ 3.40GHz (4-Cores)  RAM: 32G  HDD: 2T  

UPDATE

Actually, there are around 2.4M records (24 fields each) in the table targeted by the INSERT/UPDATE statements (6 of the fields are indexed).
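For reference, these are the settings usually looked at first for Galera/Percona XtraDB write throughput; the values below are placeholders, not recommendations measured against this workload:

[mysqld]
# let certified writesets be applied in parallel on each node
wsrep_slave_threads            = 4

# size the buffer pool, redo log and replication cache to the working set (placeholders)
innodb_buffer_pool_size        = 2G
innodb_log_file_size           = 512M
wsrep_provider_options         = "gcache.size=1G"

# relax per-commit flushing; Galera replication still protects against
# single-node loss, but this is a durability trade-off to weigh
innodb_flush_log_at_trx_commit = 2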

How to design a table where each row has 5K boolean attributes?

Posted: 12 Oct 2013 11:26 AM PDT

I have about 2M rows and each row looks like the following.

244 true false ... true

-> One integer column (V) and about 5K boolean columns (B1, B2, ..., B5K) associated with the integer.

Due to the limit on the maximum number of columns a row can have, I have separated the boolean columns (attributes) into a separate table.

Table V:

idx_V value_V
--------------
1     244
...

Table B:

idx_V idx_B value_B
--------------------
1     1     true
1     2     false
...
1     5K    true
...

This design works alright when I try to find V's that match one boolean column. For example, finding V's where the 2nd boolean attribute is true:

select value_V
from V_Table as VT
     join B_Table as BT on VT.idx_V = BT.idx_V
where BT.idx_B = 2
  and BT.value_B = true

But the query becomes awful when I have to find V's that match multiple boolean columns, sometimes even all 5K of them, e.g. finding V's with B1=true, B2=false, B3=true, ... and B5K=false.

My primary use of the tables would be the following 2:

  1. Find V's that x1th, x2th and xnth boolean columns are false/true (n can be anything between 1 and 5K)
  2. Sublists:
    • Find the sequence of the boolean columns for a specific V: T F T T F F ...
    • Find other V's that match the sequence found in 2-A

To do 2, I'm thinking about adding a varchar(5K) field to store the boolean sequence, but that seems wasteful: each boolean only needs 1 bit, yet I would be allocating a byte.

What would be the best way to go about this?
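One direction that might be worth sketching (purely illustrative; the extra columns and index names below are made up): keep table B for per-attribute lookups (use case 1), and also store the whole packed sequence once per V, so that use case 2 becomes an indexed equality lookup rather than 5K joins. Packing 8 booleans per byte needs only 625 bytes per row:

ALTER TABLE V_Table
    ADD COLUMN bool_seq     BINARY(625) NULL,   -- 5K booleans packed 8 per byte
    ADD COLUMN bool_seq_md5 CHAR(32)    NULL;   -- hash of the packed value, cheap to index

CREATE INDEX idx_bool_seq_md5 ON V_Table (bool_seq_md5);

-- "find other V's that have the same sequence as V = 1":
SELECT v2.idx_V
FROM   V_Table v1
JOIN   V_Table v2
       ON  v2.bool_seq_md5 = v1.bool_seq_md5   -- index seek on the hash
       AND v2.bool_seq     = v1.bool_seq       -- guard against hash collisions
WHERE  v1.idx_V = 1
  AND  v2.idx_V <> 1;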

Is there a way to implement a cross-database task on SQL Server 2012 with the Availability Groups feature?

Posted: 12 Oct 2013 10:25 AM PDT

We use SQL Server 2012 and its new Availability Groups (AG) feature. There is a task for moving old data of some tables from one database to another database. Both databases are included into different availability groups.

Previously (before using the AG feature) the task was resolved by adding the second server instance as a linked server (sp_addlinkedserver) and executing a distributed transaction in the following way:

  1. begin transaction
  2. insert old data into server2.table2 from server1.table1
  3. delete old data from server1.table1
  4. commit transaction

Unfortunately, distributed transactions are not supported for AG because databases may become inconsistent in case of failover (http://technet.microsoft.com/en-us/library/ms366279.aspx).

Is there some way to implement this task with keeping the AG feature and without implementing the rollback logic in case of exceptions?
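For what it's worth, one pattern that can sidestep the need for an atomic cross-server transaction is to make the move idempotent, so each step can be retried after a failover without corrupting anything. Whether a single cross-server statement still enlists MSDTC depends on linked-server and transaction settings, so treat this purely as a sketch (all object and column names are made up):

-- Step 1: copy old rows that are not on server2 yet
INSERT INTO [server2].[db2].[dbo].[table2] (id, payload)
SELECT t1.id, t1.payload
FROM   [db1].[dbo].[table1] AS t1
WHERE  t1.created < DATEADD(YEAR, -1, SYSDATETIME())
  AND  NOT EXISTS (SELECT 1
                   FROM [server2].[db2].[dbo].[table2] AS t2
                   WHERE t2.id = t1.id);

-- Step 2: delete locally only what is confirmed to exist remotely
DELETE t1
FROM   [db1].[dbo].[table1] AS t1
WHERE  t1.created < DATEADD(YEAR, -1, SYSDATETIME())
  AND  EXISTS (SELECT 1
               FROM [server2].[db2].[dbo].[table2] AS t2
               WHERE t2.id = t1.id);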

Migrating from SQL Server to MySQL using MySQL Workbench tool

Posted: 12 Oct 2013 10:24 AM PDT

I'm trying to migrate a few tables from SQL Server to MySQL using the MySQL Workbench migration wizard. Everything works fine for the structure migration, but when I get to the data migration section it throws an error for one table:

ERROR: dbo.Documents:SQLExecDirect(SELECT [DocumentID], [CategoryID], CAST([DocumentName] as NVARCHAR(255)) as [DocumentName], [Active], [NavigatorID], CAST([DocumentText] as NTEXT) as [DocumentText], [UseSubtitle], CAST([DocumentSubtitle] as NVARCHAR(255)) as [DocumentSubtitle], CAST([DocumentPlainText] as NTEXT) as [DocumentPlainText], [DocumentType], CAST([DocumentLink] as NVARCHAR(255)) as [DocumentLink], [Sitemap], CAST([SubtitleImage] as NVARCHAR(255)) as [SubtitleImage], CAST([MetaTags] as NVARCHAR(8000)) as [MetaTags], CAST([MetaDescription] as NVARCHAR(8000)) as [MetaDescription], [AccessLevel] FROM [ctool_test].[dbo].[Documents]): 42000:1131:[Microsoft][ODBC SQL Server Driver][SQL Server]The size (8000) given to the convert specification 'nvarchar' exceeds the maximum allowed for any data type (4000).

2131:[Microsoft][ODBC SQL Server Driver][SQL Server]The size (8000) given to the convert specification 'nvarchar' exceeds the maximum allowed for any data type (4000).

From what I understand, it limits columns with nvarchar data to a maximum size of 4000, whereas MySQL can handle 65535.

Any clue how I can get this to work?

Thanks

Restoring database to UNC path on local drive

Posted: 12 Oct 2013 10:26 AM PDT

When I try to restore a database using a restore command with a local UNC path:

RESTORE DATABASE [dbname]
FROM DISK = N'\\PC91\D\backup.BAK' WITH FILE = 1,
  MOVE N'test' TO N'\\PC91\D\dbname.MDF',
  MOVE N'test_log' TO N'\\PC91\D\dbname_log.LDF',
  NOUNLOAD, STATS = 10

I get an error:

Msg 3634, Level 16, State 1, Line 1
The operating system returned the error '5(Access is denied.)' while attempting 'CreateFileW' on '\PC91\D\dbname.MDF'.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.

If I use a local drive letter instead, then it works:

RESTORE DATABASE [dbname]
FROM DISK = N'D:\backup.BAK' WITH FILE = 1,
  MOVE N'test' TO N'D:\dbname.MDF',
  MOVE N'test_log' TO N'D:\dbname_log.LDF',
  NOUNLOAD, STATS = 10

This command restores the database to the same folder. So why is there an error when I specify the network path?

how to run Db2 export command in shell

Posted: 12 Oct 2013 09:26 AM PDT

I am trying to run the following DB2 command through the Python pyodbc module:

IBM DB2 Command : "DB2 export to C:\file.ixf of ixf select * from emp_hc"

I am successfully connected to the DSN using the pyodbc module in Python, and it works fine for SELECT statements,

but when I try to execute the following command from Python IDLE 3.3.2:

cursor.execute(" export to ? of ixf select * from emp_hc",r"C:\file.ixf") pyodbc.ProgrammingError: ('42601', '[42601] [IBM][CLI Driver][DB2/LINUXX8664] SQL0104N An unexpected token "db2 export to ? of" was found following "BEGIN-OF-STATEMENT". Expected tokens may include: "". SQLSTATE=42601\r\n (-104) (SQLExecDirectW)')

or cursor.execute(" export to C:\file.ixf of ixf select * from emp_hc")

Traceback (most recent call last): File "", line 1, in cursor.execute("export to C:\myfile.ixf of ixf select * from emp_hc") pyodbc.ProgrammingError: ('42601', '[42601] [IBM][CLI Driver][DB2/LINUXX8664] SQL0007N The character "\" following "export to C:" is not valid. SQLSTATE=42601\r\n (-7) (SQLExecDirectW)')

Am I doing something wrong? Any help will be greatly appreciated.


From what I have learned, db2 export is a command run in the shell, not through SQL via ODBC.

Can you please give me some more information on how to run the command in the shell? I am confused about what that means. Any guide or quick tutorial would be great.
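If running the DB2 command-line client is not an option, one avenue worth checking (a sketch, not verified against this setup) is the SYSPROC.ADMIN_CMD stored procedure, which accepts EXPORT as an ordinary SQL call over ODBC; note that the output file is written on the database server, not on the Windows client:

-- executable through pyodbc as a normal statement, e.g.
--   cursor.execute("CALL SYSPROC.ADMIN_CMD('EXPORT TO /tmp/file.ixf OF IXF SELECT * FROM emp_hc')")
CALL SYSPROC.ADMIN_CMD('EXPORT TO /tmp/file.ixf OF IXF SELECT * FROM emp_hc')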

Import from incremental backups to a new host in Oracle 11g

Posted: 12 Oct 2013 05:26 PM PDT

I am using Oracle 11g. I would like to know whether it is possible to import from incremental level 0 and level 1 backups to a new host using RMAN. If yes, how can I do that?

For level 1 I am using the differential method.

InnoDB Tablespace critical error in great need of a fix

Posted: 12 Oct 2013 07:26 AM PDT

Link to screenshot : http://www.nouvellesduquartier.com/i/1/p/Munin_%20Critical_MySql_InnoDB_.JPG (The value reported is outside the allowed range - Byte free, free, gauge, warn, critic)

Question: Could the error shown on the screenshot be the reason why my site is very slow?

If so, I really need help to fix it, since I am far from being an engineer! Thank you in advance.

Synchronize MySQL databases between local and hosted servers automatically

Posted: 12 Oct 2013 05:26 AM PDT

We have many websites with development, staging and production servers. We have many developers across many projects, and we need a solution to synchronize each developer's database with the staging database. Once that works, we can move the changes to the live database.

It needs to be fully automatic, so that developers don't have to run the tool every time.

effective mysql table/index design for 35 million rows+ table, with 200+ corresponding columns (double), any combination of which may be queried

Posted: 12 Oct 2013 06:26 PM PDT

I am looking for advice on table/index design for the following situation:

I have a large table (stock price history data, InnoDB, 35 million rows and growing) with a compound primary key (assetid (int), date (date)). In addition to the pricing information, I have 200 double values that need to correspond to each record.

CREATE TABLE `mytable` (
`assetid` int(11) NOT NULL,
`date` date NOT NULL,
`close` double NOT NULL,
`f1` double DEFAULT NULL,
`f2` double DEFAULT NULL,
`f3` double DEFAULT NULL,
`f4` double DEFAULT NULL,
 ... skip a few ...
`f200` double DEFAULT NULL,
PRIMARY KEY (`assetid`, `date`)
) ENGINE=`InnoDB` DEFAULT CHARACTER SET latin1 COLLATE latin1_swedish_ci
  ROW_FORMAT=COMPACT CHECKSUM=0 DELAY_KEY_WRITE=0
  PARTITION BY RANGE COLUMNS(`date`) PARTITIONS 51;

I initially stored the 200 double columns directly in this table for ease of update and retrieval, and this had been working fine, as the only querying done on this table was by assetid and date (these are religiously included in any query against this table), and the 200 double columns were only read. My database size was around 45 GB.

However, I now have the requirement to be able to query this table by any combination of these 200 columns (named f1, f2, ..., f200), for example:

select * from mytable
where assetid in (1,2,3,4,5,6,7,....)
and date > '2010-1-1' and date < '2013-4-5'
and f1 > -0.23 and f1 < 0.9
and f117 > 0.012 and f117 < .877
etc,etc

I have not historically had to deal with this amount of data, so my first instinct was that indexes were needed on each of these 200 columns, or I would wind up with large table scans, etc. To me this meant that I needed a table for each of the 200 columns with a primary key, the value, and an index on the values. So I went with that.

CREATE TABLE `f1` (
`assetid` int(11) NOT NULL DEFAULT '0',
`date` date NOT NULL DEFAULT '0000-00-00',
`value` double NOT NULL DEFAULT '0',
PRIMARY KEY (`assetid`, `date`),
INDEX `val` (`value`)
) ENGINE=`InnoDB` DEFAULT CHARACTER SET latin1 COLLATE latin1_swedish_ci ROW_FORMAT=COMPACT CHECKSUM=0 DELAY_KEY_WRITE=0;

I filled up and indexed all 200 tables. I left the main table intact with all 200 columns, as it is regularly queried over an assetid and date range with all 200 columns selected. I figured that leaving those columns in the parent table (unindexed) for read purposes, and additionally having them indexed in their own tables (for join filtering), would be most performant. I ran explains on the new form of the query:

select count(p.assetid) as total
from mytable p
inner join f1 f1 on f1.assetid = p.assetid and f1.date = p.date
inner join f2 f2 on f2.assetid = p.assetid and f2.date = p.date
where p.assetid in(1,2,3,4,5,6,7)
and p.date >= '2011-01-01' and p.date < '2013-03-14'
and(f1.value >= 0.96 and f1.value <= 0.97 and f2.value >= 0.96 and f2.value <= 0.97)

Indeed my desired result was achieved: explain shows that far fewer rows are scanned for this query. However, I wound up with some undesirable side effects.

1) my database went from 45 GB to 110 GB. I can no longer keep the db in RAM. (I have 256 GB of RAM on the way, however.)

2) nightly inserts of new data now need to be done 200 times instead of once

3) maintenance/defrag of the new 200 tables takes 200 times longer than the single table. It cannot be completed in a night.

4) queries against the f1, etc. tables are not necessarily performant. For example:

select min(value) from f1
where assetid in (1,2,3,4,5,6,7)
and date >= '2013-3-18' and date < '2013-3-19'

The above query, while explain shows that it is looking at < 1000 rows, can take 30+ seconds to complete. I assume this is because the indexes are too large to fit in memory.

Since that was a lot of bad news, I looked further and found partitioning. I implemented partitions on the main table, partitioned by date every 3 months. Monthly seemed to make sense to me, but I have read that once you get over 120 partitions or so, performance suffers. Partitioning quarterly will keep me under that for the next 20 years or so. Each partition is a bit under 2 GB. I ran explain partitions and everything seems to be pruning properly, so regardless I feel the partitioning was a good step, at the very least for analyze/optimize/repair purposes.

I spent a good deal of time with this article

http://ftp.nchu.edu.tw/MySQL/tech-resources/articles/testing-partitions-large-db.html

My table is currently partitioned with the primary key still on it. The article mentions that primary keys can make a partitioned table slower, but if you have a machine that can handle it, primary keys on the partitioned table will be faster. Knowing I have a big machine on the way (256 GB RAM), I left the keys on.

So, as I see it, here are my options.

Option 1

1) Remove the extra 200 tables and let the query do table scans to find the f1, f2, etc. values. Non-unique indexes can actually hurt performance on a properly partitioned table. Run an explain before the user runs the query and deny it if the number of rows scanned is over some threshold I define. Save myself the pain of the giant database. Heck, it will all be in memory soon anyway.

sub-question:

Does it sound like I have chosen an appropriate partition scheme?

Option 2

Partition all 200 tables using the same 3-month scheme. Enjoy the smaller row scans and allow the users to run larger queries. Now that they are partitioned, at least I can manage them one partition at a time for maintenance purposes. Heck, it will all be in memory soon anyway. Develop an efficient way to update them nightly.

sub-question:

Do you see a reason to avoid primary key indexes on these f1, f2, f3, f4, ... tables, knowing that I always have assetid and date when querying? It seems counter-intuitive to me, but I am not used to data sets of this size. That would shrink the database a bunch, I assume.

Option 3

Drop the f1, f2, f3 columns in the master table to reclaim that space. Do 200 joins if I need to read 200 features; maybe it won't be as slow as it sounds.

Option 4

You all have a better way to structure this than I have thought of so far.

* NOTE: I will soon be adding another 50-100 of these double values to each item, so I need to design knowing that is coming.

Thanks for any and all help.

Update #1 - 3/24/2013

I went with the idea suggested in the comments below and created one new table with the following setup:

create table `features` (
  assetid int,
  date    date,
  feature varchar(4),
  value   double
);

I partitioned the table in 3 month intervals.

I blew away the earlier 200 tables so that my database was back down to 45 GB, and started filling up this new table. A day and a half later it completed, and my database now sits at a chubby 220 GB!

It does allow the possibility of removing these 200 values from the master table, since I can get them from one join, but that would really only give me back 25 GB or so.

I asked it to create a primary key on (assetid, date, feature) and an index on value, and after 9 hours of chugging it really hadn't made a dent and seemed to freeze up, so I killed that part off.

I rebuilt a couple of the partitions, but it did not seem to reclaim much, if any, space.

So that solution looks like it probably isn't going to be ideal. Do rows take up significantly more space than columns, I wonder? Could that be why this solution took up so much more space?

I came across this article

http://www.chrismoos.com/2010/01/31/mysql-partitioning-tables-with-millions-of-rows

it gave me an idea.

where he says

"At first, I thought about RANGE partitioning by date, and while I am using the date in my queries, it is very common for a query to have a very large date range, and that means it could easily span all partitions."

Now I am range-partitioning by date as well, but I will also be allowing searches over large date ranges, which will decrease the effectiveness of my partitioning. I will always have a date range when I search, but I will also always have a list of assetids. Perhaps my solution should be to partition by assetid and date, where I identify typically-searched assetid ranges (which I can come up with: there are standard lists, S&P 500, Russell 2000, etc.). This way I would almost never look at the entire data set.

Then again, I am primary keyed on assetid and date anyway, so maybe that wouldn't help much.
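If the assetid lists are what you always filter on, a sketch of combining the existing date ranges with hash subpartitions on assetid might look like the following (it replaces the current RANGE COLUMNS scheme and is untested against this data set):

ALTER TABLE mytable
PARTITION BY RANGE (TO_DAYS(`date`))
SUBPARTITION BY HASH (`assetid`)
SUBPARTITIONS 8 (
    PARTITION p2012q1 VALUES LESS THAN (TO_DAYS('2012-04-01')),
    PARTITION p2012q2 VALUES LESS THAN (TO_DAYS('2012-07-01')),
    -- ... one partition per quarter ...
    PARTITION pmax    VALUES LESS THAN MAXVALUE
);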

Any more thoughts/comments would be appreciated.

Thanks.

SELECTing multiple columns through a subquery

Posted: 12 Oct 2013 01:26 PM PDT

I am trying to SELECT two columns from the subquery in the following query, but have been unable to do so. I tried creating an alias table, but still couldn't get them.

SELECT DISTINCT petid, userid,
(SELECT MAX(comDate) FROM comments WHERE petid=pet.id) AS lastComDate,
(SELECT userid FROM comments WHERE petid=pet.id ORDER BY id DESC LIMIT 1) AS lastPosterID
FROM pet LEFT JOIN comments ON pet.id = comments.petid
WHERE userid='ABC' AND deviceID!='ABC' AND comDate>=DATE_SUB(CURRENT_TIMESTAMP, INTERVAL 2 MONTH);

Basically, I am trying to get the lastComDate & lastPosterID from the same row - the row which is the latest one in comments for the specific pet. Please suggest how I can get them in an efficient way.

The above query works, but seems overkill, as the same row is fetched twice. Moreover, the ORDER BY clause is significantly slower than the aggregate function, as I found while profiling the query. So, a solution that avoids sorting would be appreciated.
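A sketch of one common rewrite (assuming comments.id increases with comDate, which the ORDER BY id DESC already implies, and that userid/deviceID in the WHERE clause belong to pet): fetch the latest comment id per pet in a derived table, then join back to comments once, so there are no correlated subqueries and no per-row sort:

SELECT p.id AS petid, p.userid,
       c.comDate AS lastComDate,
       c.userid  AS lastPosterID
FROM pet AS p
JOIN (SELECT petid, MAX(id) AS max_id
      FROM comments
      GROUP BY petid) AS last_c ON last_c.petid = p.id
JOIN comments AS c ON c.id = last_c.max_id
WHERE p.userid = 'ABC'
  AND p.deviceID != 'ABC'
  AND c.comDate >= DATE_SUB(CURRENT_TIMESTAMP, INTERVAL 2 MONTH);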

Designing Simple Schema for Disaggregation of Demand Forecast

Posted: 12 Oct 2013 04:26 PM PDT

I am doing a simple database design task as a training exercise where I have to come up with a basic schema design for the following case:

I have a parent-child hierarchy of products (example, Raw Material > Work in Progress > End Product).

  • Orders are placed at each level.
  • Number of orders shall be viewable in weekly buckets for the next 6 months.
  • Demand forecast can be done for each product level.
  • Demand forecast for any week within next 6 months can be done today.
  • Demand forecast is done for weekly buckets, for the next 6 months.

Demand Forecast is usually done at the higher level in hierarchy (Raw Material or Work in Progress level) It has to be disaggregated to a lower level (End Product).

There are 2 ways in which demand forecast can be disaggregated from a higher level to lower level:

  1. User specifies percentage distribution for end product. Say, there's a forecast of 1000 for Work In Progress.. and user says I want 40% for End Product 1 and 60% for End Product 2 in bucket 10.. Then for 10th week (Sunday to Saturday) from now, forecast value for End Product 1 would be 400 and, for End Product 2 would be 600.
  2. User says, just disaggregate according to orders placed against end products in Bucket 5, and orders in bucket 5 for End Product 1 and 2 are 200 and 800 respectively, then forecast value for EP1 would be ((200/1000) * 100)% and for EP2 would be ((800/1000) * 100)% of forecast for 'Work in Progress'.

Forecast shall be viewable in weekly buckets for the next 6 months and the ideal format should be:

product name | bucket number | week start date | week end date | forecast value | created_on  

PRODUCT_HIERARCHY table could look like this:

id  |   name                |   parent_id
------------------------------------------
1   |   raw material        |   (null)
2   |   work in progress    |   1
3   |   end product 1       |   2
4   |   end product 2       |   2

ORDERS table might look like this:

id | prod_id | order_date | delivery_date | delivered_date  

where,

prod_id is a foreign key that references the id of the PRODUCT_HIERARCHY table.

How to store forecast? What would be a good basic schema for such a requirement?


My idea for selecting orders into 26 weekly buckets is:

SELECT
    COUNT(*) AS TOTAL_ORDERS,
    WIDTH_BUCKET(
        delivery_date,
        SYSDATE,
        ADD_MONTHS(SYSDATE, 6),
        CEIL((ADD_MONTHS(SYSDATE, 6) - SYSDATE) / 7)
    ) AS BUCKET_NO
FROM
    orders_table
WHERE
    delivery_date BETWEEN SYSDATE AND ADD_MONTHS(SYSDATE, 6)
GROUP BY
    WIDTH_BUCKET(
        delivery_date,
        SYSDATE,
        ADD_MONTHS(SYSDATE, 6),
        CEIL((ADD_MONTHS(SYSDATE, 6) - SYSDATE) / 7)
    );

But this will give weekly buckets starting from today, irrespective of the day of the week. How can I convert them to Sunday-to-Saturday weeks in Oracle?
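A sketch of one way to get Sunday-to-Saturday buckets (using the orders_table/delivery_date names from the query above): TRUNC(d, 'IW') returns the Monday of d's ISO week, so shifting the date by one day before truncating makes the weeks start on Sunday:

SELECT week_start_date,
       week_start_date + 6                                                    AS week_end_date,
       FLOOR((week_start_date - (TRUNC(SYSDATE + 1, 'IW') - 1)) / 7) + 1      AS bucket_no,
       COUNT(*)                                                               AS total_orders
FROM  (SELECT TRUNC(delivery_date + 1, 'IW') - 1 AS week_start_date
       FROM   orders_table
       WHERE  delivery_date BETWEEN SYSDATE AND ADD_MONTHS(SYSDATE, 6))
GROUP BY week_start_date;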

Please help designing this database structure.

(will be using Oracle 11g)

Cast to date is sargable but is it a good idea?

Posted: 12 Oct 2013 05:25 AM PDT

In SQL Server 2008 the date datatype was added.

In this connect item you can see that casting a datetime column to date is sargable and may use an index on the datetime column.

select *
from T
where cast(DateTimeCol as date) = '20130101';

The other option you have is to use a range instead.

select *
from T
where DateTimeCol >= '20130101' and
      DateTimeCol < '20130102'

Are these queries equally good or should one be preferred over the other?

SQL Fiddle

T SQL Table Valued Function to Split a Column on commas

Posted: 12 Oct 2013 07:55 AM PDT

I wrote a table-valued function in Microsoft SQL Server 2008 to take a comma-delimited column in a database and spit out separate rows for each value.

Ex: "one,two,three,four" would return a new table with only one column containing the following values:

one
two
three
four

Does this code look error prone to you guys? When I test it with

SELECT * FROM utvf_Split('one,two,three,four',',')   

it just runs forever and never returns anything. This is getting really disheartening, especially since there are no built-in split functions in MSSQL Server (WHY WHY WHY?!) and all the similar functions I've found on the web are absolute trash or simply irrelevant to what I'm trying to do.

Here is the function:

USE *myDBname*
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER FUNCTION [dbo].[utvf_SPlit] (@String VARCHAR(MAX), @delimiter CHAR)
RETURNS @SplitValues TABLE
(
    Asset_ID VARCHAR(MAX) NOT NULL
)
AS
BEGIN
    DECLARE @FoundIndex INT
    DECLARE @ReturnValue VARCHAR(MAX)

    SET @FoundIndex = CHARINDEX(@delimiter, @String)

    WHILE (@FoundIndex <> 0)
    BEGIN
        DECLARE @NextFoundIndex INT
        SET @NextFoundIndex = CHARINDEX(@delimiter, @String, @FoundIndex+1)
        SET @ReturnValue = SUBSTRING(@String, @FoundIndex, @NextFoundIndex-@FoundIndex)
        SET @FoundIndex = CHARINDEX(@delimiter, @String)
        INSERT @SplitValues (Asset_ID) VALUES (@ReturnValue)
    END

    RETURN
END
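For comparison, here is a sketch of the same function with the loop made to terminate: in the original, @FoundIndex is recomputed from the start of the string on every pass, so it never changes and the WHILE loop never ends. The sketch below walks a start position forward instead, and also emits the final token after the last delimiter:

ALTER FUNCTION [dbo].[utvf_SPlit] (@String VARCHAR(MAX), @delimiter CHAR(1))
RETURNS @SplitValues TABLE
(
    Asset_ID VARCHAR(MAX) NOT NULL
)
AS
BEGIN
    DECLARE @Start INT = 1
    DECLARE @FoundIndex INT = CHARINDEX(@delimiter, @String, @Start)

    WHILE (@FoundIndex > 0)
    BEGIN
        -- token between the current position and the next delimiter
        INSERT @SplitValues (Asset_ID)
        VALUES (SUBSTRING(@String, @Start, @FoundIndex - @Start))

        SET @Start = @FoundIndex + 1                           -- step past the delimiter
        SET @FoundIndex = CHARINDEX(@delimiter, @String, @Start)
    END

    -- trailing token after the last delimiter (or the whole string if there is none)
    INSERT @SplitValues (Asset_ID)
    VALUES (SUBSTRING(@String, @Start, LEN(@String) - @Start + 1))

    RETURN
END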

[MS SQL Server] Backup to remote location fails

[MS SQL Server] Backup to remote location fails


Backup to remote location fails

Posted: 12 Oct 2013 01:01 AM PDT

I created a maintenance plan on a server named Rem-rem-db to back up the database to another database server, Rem-rem-db2. I specify the UNC path \\Rem-rem-db2\dbbackups. I granted permissions to the SQLServerAgent account to read and write to that folder. I logged in to the remote server using the SQLServerAgent account and created a file to make sure the account did have permissions, but the job fails: it can't access that path. I have done this before and it worked. Any ideas would be greatly appreciated.

Date: 10/12/2013 9:40:12 AM
Log: Job History (Backup_RemressPP_Prod_Remote.Subplan_1)
Step ID: 1
Server: Rem-VUE-DB
Job Name: Backup_RemressPP_Prod_Remote.Subplan_1
Step Name: Subplan_1
Duration: 00:00:01
Sql Severity: 0
Sql Message ID: 0
Operator Emailed / Net sent / Paged: (none)
Retries Attempted: 0

Message:
Executed as user: RemRESSIG\SQLServerAgent. Microsoft (R) SQL Server Execute Package Utility Version 10.50.4000.0 for 64-bit Copyright (C) Microsoft Corporation 2010. All rights reserved. Started: 09:40:12 AM Progress: 2013-10-12 09:40:12.88 Source: {16BD13D9-C4AB-44B5-B374-DE038612FA94} Executing query "DECLARE @Guid UNIQUEIDENTIFIER EXECUTE msdb..sp...".: 100% complete End Progress
Error: 2013-10-12 09:40:13.16 Code: 0xC002F210 Source: Back Up Database (Full) Execute SQL Task Description: Executing the query "EXECUTE master.dbo.xp_create_subdir N'\\Rem-vue-db..." failed with the following error: "xp_create_subdir() returned error 5, 'Access is denied.'". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. End Error
Error: 2013-10-12 09:40:13.23 Code: 0xC002F210 Source: Back Up Database (Full) Execute SQL Task Description: Executing the query "BACKUP DATABASE [RemressPP_Prod] TO DISK = N'\\cy..." failed with the following error: "Cannot open backup device '\\Rem-vue-db2\dbbackups\RemressPP_Prod\RemressPP_Prod_backup_2013_10_12_094013_0914381.bak'. Operating system error 3 (The system cannot find the path specified.). BACKUP DATABASE is terminating abnormally.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. End Error
DTExec: The package execution returned DTSER_FAILURE (1). Started: 09:40:12 AM Finished: 09:40:13 AM Elapsed: 0.827 seconds. The package execution failed. The step failed.

Page Life Expectancy

Posted: 11 Oct 2013 07:46 AM PDT

Using a script, I got a number for PLE. How do I conclude that this PLE is optimum for my instance?
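For context, this is typically the counter such scripts read (nothing instance-specific assumed here); rather than comparing it to the old 300-second rule of thumb, the value is usually judged relative to the buffer pool size and watched as a trend over time:

SELECT object_name, counter_name, cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';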

TempDB question

Posted: 11 Oct 2013 07:42 AM PDT

Does having multiple tempdb files on different drives help with contention?
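As background: allocation-page contention in tempdb is usually addressed by having several equally sized data files, whether or not they sit on separate drives (separate drives mainly help raw I/O throughput). A sketch of adding a file, with a placeholder path and example sizes:

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,                       -- hypothetical logical name
          FILENAME = 'E:\TempDB\tempdev2.ndf',   -- placeholder path
          SIZE = 4096MB,
          FILEGROWTH = 512MB);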

[T-SQL] set date/time to today date but certain time

[T-SQL] set date/time to today date but certain time


set date/time to today date but certain time

Posted: 11 Oct 2013 01:36 AM PDT

Hi - What is the best way to set a field value to today's date, with the time always set to 4:00:00? Please advise.
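One common way to build such a value (a sketch; it works on older SQL Server versions too, since it avoids the date type):

-- midnight today, then add 4 hours
SELECT DATEADD(HOUR, 4, DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()), 0)) AS today_at_4am;

-- e.g. in an UPDATE (table and column names are hypothetical):
-- UPDATE dbo.SomeTable
-- SET    ScheduledAt = DATEADD(HOUR, 4, DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()), 0));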

Pass a table variable column value to a SP output variable

Posted: 11 Oct 2013 01:13 AM PDT

How do I pass a table variable column value to an SP output variable? I want the values in column v (a table variable column) to be assigned to @HCodes (the SP output variable). I have a query like this:

DECLARE @MaterialCode varchar(max)

SELECT @MaterialCode = materialItemCode
FROM local_MaterialsItemsContainers MIC
INNER JOIN local_MaterialsItems MI
    ON (MIC.materialItemIncId = MI.materialItemIncId
    AND MIC.materialItemSqlId = MI.materialItemSqlId
    AND MI.isDeleted = 0x0)
WHERE charindex(MIC.materialItemContainerCode, 'MC-00000030-13-0001') <> 0

select @MaterialCode

DECLARE @HCodes nvarchar(MAX)

declare @t table (id int identity(1, 1), v varchar(50))

INSERT @t
SELECT LH.hazardCode
from Local_MaterialsItems MI
INNER JOIN Local_MaterialsItemsHazards MIH
    on MI.materialItemIncId = MIH.materialItemIncId
    AND MI.materialItemSqlId = MIH.materialItemSqlId
INNER JOIN Local_Hazards LH
    on MIH.hazardIncId = LH.hazardIncId
    AND MIH.hazardSqlId = LH.hazardSqlId
where charindex(MI.materialItemCode, @MaterialCode) <> 0

declare @Numbers table (i int identity(1, 1), j bit)

insert into @Numbers
select top 50 null from master.dbo.syscolumns a, master.dbo.syscolumns b

select distinct left(substring(','+v, i + 1, 50), charindex(',', substring(','+v+',', i + 1, 50))-1) as v
from @t t, @Numbers
where substring(','+v, i, 1) = ','
order by v

Friday, October 11, 2013

[SQL Server] Stupid Question: Return Object Explorer to its original position

[SQL Server] Stupid Question: Return Object Explorer to its original position


Stupid Question: Return Object Explorer to its original position

Posted: 26 Nov 2010 07:57 AM PST

Stupid question, but I accidentally moved Object Explorer from its original default position in SSMS. By default, Object Explorer is to the left and the query window is to the right. Now Object Explorer seems to have been "undocked" and the query window occupies the screen from left to right, such that if I open a new query it opens "behind" Object Explorer. How do I restore the original "default" view in SSMS?

Search This Blog