Monday, October 7, 2013

[how to] postgres create extension postgis error

postgres create extension postgis error

Posted: 07 Oct 2013 08:20 PM PDT

While setting up the PostgreSQL database "map", which uses the PostGIS extension, user "mapmaker" runs CREATE EXTENSION postgis; and gets the following error:

permission denied to create extension "postgis"
HINT:  Must be superuser to create this extension.

But user "mapmaker" is the database owner, set up from sudo -u postgres psql via the commands:

CREATE DATABASE map OWNER mapmaker;
GRANT ALL PRIVILEGES ON DATABASE map TO mapmaker;

Once mapmaker is made a superuser at the role level, I no longer receive the error and the extension is created. So I understand that all I have to do is adjust mapmaker's permission to superuser via the postgres user. But I am interested in knowing why this is the case, if mapmaker was granted all privileges on the database map. Are extensions treated differently? In order to use extensions, does a user have to be a superuser at the role level, or can the permission be granted at the database level?

I did see http://stackoverflow.com/questions/16527806/cannot-create-extension-without-superuser-role but the answer to the question did not explain why and, unfortunately, I do not have enough points to comment, hence the question.

PostgreSQL 9.1.9 PostGIS 2.0.3
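
For context, a minimal sketch of the usual workaround, assuming a superuser role (e.g. postgres) is available to run the statement once per database:

-- Run once in the "map" database while connected as a superuser;
-- afterwards mapmaker can use the objects PostGIS installs.
-- Ownership of the database does not help here: extension creation is
-- gated on the superuser attribute of the role, not on database ACLs.
CREATE EXTENSION postgis;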

Run a program on a remote computer after a SQL job has completed

Posted: 07 Oct 2013 07:24 PM PDT

We have a SQL Agent job that performs a few tasks every night, scheduled to run via SQL Agent. The database server is SQL Server 2012, if that makes a difference.

We have another process (a .NET program) that needs to run on a different server after the job has finished on each server. The program cannot be installed on the database server.

Is there a way to get a SQL Agent Job to run a program on another computer after it has finished, or is there an easy way to indicate to a program on another computer that the job has finished?
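
One low-tech way to signal completion (a sketch; the UNC path is hypothetical, and xp_cmdshell must be explicitly enabled) is a final job step that drops a flag file where the remote program can poll for it:

-- Hypothetical final T-SQL job step: write a marker file to a share the
-- .NET program watches. Assumes xp_cmdshell is enabled and the SQL Agent
-- service account can write to \\appserver\flags.
EXEC master.dbo.xp_cmdshell 'echo done > \\appserver\flags\nightly_job.done';

A status table that the remote program polls over a normal SQL connection works just as well and avoids enabling xp_cmdshell.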

Seeing recompiles after changing AUTO_UPDATE_STATISTICS to OFF

Posted: 07 Oct 2013 03:34 PM PDT

The reasons why the plan cache is flushed are listed here: http://msdn.microsoft.com/en-us/library/bb522682.aspx, and AUTO_UPDATE_STATISTICS is not in that list. Yet when I changed this option to OFF in production, I saw a CPU spike to 100% that lasted about 4 minutes, along with a lot of recompiles. What am I missing?
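
For reference, the change in question, plus one way to watch recompile activity while experimenting (a sketch; the database name is hypothetical):

-- The setting change being discussed (database name hypothetical):
ALTER DATABASE MyDb SET AUTO_UPDATE_STATISTICS OFF;

-- Recompile activity can be sampled from the performance counters:
SELECT counter_name, cntr_value
  FROM sys.dm_os_performance_counters
 WHERE counter_name = 'SQL Re-Compilations/sec';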

Help designing tables for storing lab test results: values include numbers, <number, >number

Posted: 07 Oct 2013 03:18 PM PDT

We're using MySQL 5.5 and attempting to design tables to store the results from tests run in a lab. The result of a test can be either a plain number, e.g. "5", or a bounded value such as "< 5" or "> 25", and this is where we're getting stuck. The < and > values come from the sensitivity limits of the tests; above or below a certain threshold for each test, the exact amount can't be detected. Values right at the threshold are possible, so in the example above both "< 5" and "5" are valid results for that particular test, and we need to be able to distinguish between them.

There's a fixed list of tests, so adding new ones easily isn't a concern. Our preliminary design has a TestResults table with each test as a column. We were thinking of maybe adding a column for the tests that need < and >, just to hold those markers, or of having "flag" values in the application code that signal a value is below/above the threshold, with the thresholds kept in a reference table. Neither solution seems that great, though. We don't want to store the results as plain strings because we'll need statistics, averages, etc. on the results.

Anyway, we were hoping someone out there has done something like this and has ideas on the best way to do it. We've been Googling (e.g. on LIMS) and haven't found anything on database designs, but this doesn't seem like it would be that uncommon a problem. Any ideas on the most appropriate way to model this type of data would be greatly appreciated.
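
One pattern that keeps the numbers queryable (a sketch; table and column names are hypothetical) is to split each result into a numeric value plus a qualifier, rather than one column per test:

-- "5" -> ('=', 5.000);  "< 5" -> ('<', 5.000);  "> 25" -> ('>', 25.000)
CREATE TABLE test_result (
  result_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  test_id   INT UNSIGNED NOT NULL,   -- which test was run
  sample_id INT UNSIGNED NOT NULL,   -- which sample it was run on
  qualifier ENUM('=', '<', '>') NOT NULL DEFAULT '=',
  value     DECIMAL(10,3) NOT NULL
) ENGINE=InnoDB;

Averages and other statistics can then run over value, optionally filtered or weighted by qualifier.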

How to optimize a query that includes counts of rows in multiple tables?

Posted: 07 Oct 2013 05:05 PM PDT

My query looks like this:

EXPLAIN SELECT ...
  FROM USER u
  LEFT OUTER JOIN
       ( SELECT COUNT(*) AS notified_count, user_id
           FROM user_email_tracking
          WHERE email_type_id = 4
          GROUP BY user_id ) email_tracking
    ON u.id = email_tracking.user_id
  JOIN ( SELECT user_id, COUNT(*) AS subscriptionCount
           FROM user_filter
          GROUP BY user_id ) uf
    ON u.id = uf.user_id
 WHERE u.stage = 10

My intuition is that this should be fast. The u.stage = 10 predicate narrows things down a lot; it applies to only a few hundred users. If I could filter by that first and then perform all the other work, this query should take a few seconds.

But user_filter is huge, and user_email_tracking is pretty big (both are indexed on their user_id columns). If the aggregations have to run over everything, it's much slower. I also must avoid joining user_filter against user_email_tracking on user_id, since that would be a huge table times a huge table.

If I only had one count to compute, I could move the GROUP BY all the way outside and just have a row for each row of user_filter (or user_email_tracking).

How to get SQL to do these things in the right order?

Engine is InnoDB.
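
One shape worth trying (a sketch, not a guaranteed win): filter on u.stage = 10 first and compute each count with a correlated subquery, so each aggregate probes its user_id index once per matching user instead of grouping an entire table:

SELECT u.id,
       (SELECT COUNT(*)
          FROM user_email_tracking t
         WHERE t.user_id = u.id AND t.email_type_id = 4) AS notified_count,
       (SELECT COUNT(*)
          FROM user_filter f
         WHERE f.user_id = u.id) AS subscriptionCount
  FROM USER u
 WHERE u.stage = 10;

Note the original inner JOIN also drops users with no user_filter rows; the subquery form returns them with a zero count, so add HAVING subscriptionCount > 0 if the old behavior is wanted.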

How to import .FRM Tables into MySQL DB as .SQL?

Posted: 07 Oct 2013 03:03 PM PDT

After upgrading to the latest version of MAMP (2.2), all my localhost websites and databases are unusable. All that remains of the databases are the .frm files (the table definition files). In phpMyAdmin the tables look like they are present, but clicking one produces an error saying the table was not found / doesn't exist.

TempDB on Solid State Drives on SQL Server Cluster

Posted: 07 Oct 2013 07:18 PM PDT

Can solid state drives be used for TempDB on a SQL Server failover cluster if they are local drives on both machines? This is supported in SQL Server 2012, but our cluster is SQL Server 2005, and the application it runs can't be upgraded to 2012.

If the SSDs were on the same local path on both servers, wouldn't the instance just rebuild TempDB when it failed over? Wouldn't the cluster recognize any drive the OS recognizes? Or are there other issues that need to be accounted for?

I realize that Microsoft won't support it, but the performance gain would be great. TempDB is the most heavily used database on that cluster, and we can't change any code in the application it runs.
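
For what it's worth, repointing TempDB is only a file relocation (a sketch; the T: drive letter is an assumption, and the same local path must exist on every node):

-- tempdev/templog are the default logical file names; the change takes
-- effect at the next service start.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');

Whether the 2005 cluster will tolerate a non-clustered disk on failover is the real question, as noted above.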

Cloning rows with minor changes

Posted: 07 Oct 2013 01:47 PM PDT

I am attempting to clone a row in the database; that part is not difficult. However, the row has a reference to another row in the same table, and the clone needs to use a different value there, perhaps one provided by a previous insert.

For example this is the table structure.

CREATE TABLE `usergroup` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `vhost` int(11) unsigned NOT NULL,
  `groupType` tinyint(3) unsigned NOT NULL DEFAULT '1',
  `name` varchar(80) NOT NULL,
  `description` text,
  `note` text,
  `logo` int(11) unsigned NOT NULL DEFAULT '0',
  `created` datetime DEFAULT NULL,
  `treeleft` int(10) NOT NULL,
  `treeright` int(10) NOT NULL,
  `parentGroup` int(11) unsigned NOT NULL DEFAULT '0',
  `geo_radius` int(11) unsigned NOT NULL DEFAULT '0',
  `geo_autoRemove` tinyint(4) NOT NULL DEFAULT '1',
  `geo_autoAdd` tinyint(4) NOT NULL DEFAULT '1',
  PRIMARY KEY (`id`),
  KEY `vhost` (`vhost`,`groupType`,`parentGroup`,`created`),
  KEY `tree` (`vhost`,`treeleft`,`treeright`)
) ENGINE=InnoDB

With the following rows

INSERT INTO `usergroup` (id, vhost, groupType, name, description, note, created, treeleft,
                         treeright, parentGroup, geo_radius, geo_autoRemove, geo_autoAdd)
VALUES (9696, 222, 1, 'Group 1.1', NULL, NULL, NULL, 1, 12, 0, 0, 0, 0),
       (9697, 222, 1, 'Group 1.2', '', '', NULL, 2, 3, 9696, 0, 0, 0),
       (9698, 222, 1, 'Group 1.3', '', '', NULL, 4, 9, 9696, 0, 0, 0),
       (9700, 222, 1, 'Group 1.4', NULL, NULL, NULL, 5, 6, 9698, 0, 0, 0),
       (9701, 222, 1, 'Group 1.5', NULL, NULL, NULL, 7, 8, 9698, 0, 0, 0),
       (9702, 222, 1, 'Group 1.6', NULL, NULL, NULL, 10, 11, 9696, 0, 0, 0),
       (9706, 222, 1, 'Group 1.9', NULL, NULL, NULL, 13, 16, 0, 0, 0, 0),
       (9707, 222, 1, 'Group 1.10', NULL, NULL, NULL, 14, 15, 9706, 0, 0, 0),
       (9704, 222, 1, 'Group 1.7', NULL, NULL, NULL, 17, 18, 0, 0, 0, 0),
       (9705, 222, 1, 'Group 1.8', NULL, NULL, NULL, 19, 20, 0, 0, 0, 0),
       (9799, 222, 1, 'Group 1.11', '', '', '2012-06-21 17:11:41', 21, 22, 0, 0, 0, 0),
       (9800, 222, 1, 'Group 1.12', '', '', '2012-06-24 10:21:34', 23, 24, 0, 0, 0, 0);

I now want to clone all these usergroups into a new vhost, 333 for example.

INSERT INTO usergroup (vhost, groupType, name, description, note,
                       created, treeleft, treeright, parentGroup,
                       geo_radius, geo_autoRemove, geo_autoAdd)
SELECT 333 AS vhost, groupType, name, description,
       note, created, treeleft, treeright, parentGroup,
       geo_radius, geo_autoRemove, geo_autoAdd
  FROM usergroup
 WHERE vhost = 222;

The problem, of course, is that while everything else copies over perfectly, parentGroup is incorrect: it still points at the ids in vhost 222. I suppose inserting by joining the table with itself might work, but I need to be able to clone a vhost that has up to 8 levels of depth (and as few as two in some places).

Does anyone have any ideas?
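
One sketch of the self-join idea (assuming treeleft is unique within a vhost, which the sample data suggests): copy first, then remap parentGroup in a single pass by matching each cloned parent through its treeleft value:

-- child: cloned rows in 333 whose parentGroup still points into vhost 222
-- old_parent: the original parent row; new_parent: its clone in 333
UPDATE usergroup AS child
  JOIN usergroup AS old_parent
    ON old_parent.id = child.parentGroup AND old_parent.vhost = 222
  JOIN usergroup AS new_parent
    ON new_parent.vhost = 333 AND new_parent.treeleft = old_parent.treeleft
   SET child.parentGroup = new_parent.id
 WHERE child.vhost = 333 AND child.parentGroup <> 0;

Because the match key is an attribute rather than the id chain, one UPDATE handles any depth of nesting.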

Create new database with large initial size

Posted: 07 Oct 2013 01:50 PM PDT

I want to create a new database with a 200 GB MDF and a 50 GB LDF, but it is very slow: it's been about 15 minutes and the database still hasn't been created. Is that normal? If so, what does it do that takes so long, and can I speed up the creation?

I am using SQL Server 2008 R2 on Windows Server 2008 R2, with 16 GB RAM (which I limited to 12 GB in SSMS) and a Core i7 CPU.
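
For reference, the statement under discussion looks roughly like this (a sketch; names, paths, and file layout are hypothetical):

CREATE DATABASE BigDb
ON PRIMARY (NAME = BigDb_data, FILENAME = 'D:\Data\BigDb.mdf', SIZE = 200GB)
LOG ON     (NAME = BigDb_log,  FILENAME = 'L:\Log\BigDb.ldf',  SIZE = 50GB);

One relevant detail: log files are always zero-initialized, so the 50 GB LDF alone accounts for a long wait; data files can skip zeroing only when instant file initialization is enabled for the service account.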

Is DBCA in Oracle for a single database or the entire system?

Posted: 07 Oct 2013 12:12 PM PDT

I ask this because I was going to create a simple phone-list database for learning. But when I ran dbca, it asked whether I wanted to install the sample schemas (HR, etc.), and it also prompted for default locations for all archive log files and passwords for SYS, SYSMAN, and so on. I thought dbca was just for creating a single database.

What is dbca for?

How to completely remove EnterpriseDB Installation of Postgresql on Ubuntu?

Posted: 07 Oct 2013 12:10 PM PDT

All,

I'm trying to install/run PostgreSQL 9.3, but am receiving a multitude of errors that seem to point to a failed installation of PostgreSQL from the EnterpriseDB package.

For example, trying to start the server with this command:

sudo /etc/init.d/postgresql-9.3 restart (or start)

...yields the following:

Restarting PostgreSQL 9.3:
-su: /opt/PostgreSQL/9.3/bin/pg_ctl: No such file or directory
PostgreSQL 9.3 did not start in a timely fashion, please see /opt/PostgreSQL/9.3/data/pg_log/startup.log for details

Except there is no /opt/PostgreSQL/ folder; I've been told that its creation was attempted during the EnterpriseDB installation. Why the start command is trying to go there, I do not know!

I've started another thread which gave me a very temporary solution, but that doesn't seem to help after I do a restart of my machine.

So the question: how do I completely remove the failed EnterpriseDB installation?

I've tried these commands, but the issue still lingers:

sudo apt-get autoremove postgresql-9.3-postgis pgadmin3
sudo apt-get --purge remove postgresql\*

Any thoughts or recommendations are welcome!

Thank you!

-m

SQL Server 2012 Resource Governor and memory used by CLR

Posted: 07 Oct 2013 11:34 AM PDT

In SQL Server 2012, memory used by the CLR is now part of what is controlled by "max server memory".

a) Does Resource Governor in SQL Server 2012 Enterprise (MAX_MEMORY_PERCENT on a resource pool) govern CLR memory?

b) Also, is MAX_MEMORY_PERCENT a percentage of "max server memory"? My max server memory is 28 GB on a server with 32 GB of RAM. Does setting MAX_MEMORY_PERCENT to 10 mean that particular resource pool gets 2.8 GB?
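
For concreteness, the setting in question is applied like this (a sketch; the pool name is hypothetical):

CREATE RESOURCE POOL clr_pool WITH (MAX_MEMORY_PERCENT = 10);
ALTER RESOURCE GOVERNOR RECONFIGURE;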

How to determine which `Foos` have no associated `Bars` [duplicate]

Posted: 07 Oct 2013 10:53 AM PDT

I want to find all Foos that do not have any associated Bars.

Here's the query I'm using. It's returning no records:

SELECT *
  FROM foos f
 WHERE f.id NOT IN (SELECT b.foo_id FROM bars b);

However, I know that the foo with id = 1583 has no associated Bars, so there should be at least one result in my previous query.

Can somebody point out a mistake I am making? Thanks.
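
For future readers, an equivalent formulation worth comparing (a sketch): NOT EXISTS is not affected by NULL values of b.foo_id, which are the classic reason a NOT IN (subquery) returns no rows at all:

SELECT *
  FROM foos f
 WHERE NOT EXISTS (SELECT 1 FROM bars b WHERE b.foo_id = f.id);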

built-in administrator is disabled and sa is lost [duplicate]

Posted: 07 Oct 2013 10:41 AM PDT

I set a password for the SQL Server SA account on an instance and then disabled the Windows built-in Administrator account. Now I've forgotten the SA account's password. How can I recover access to the server? I can still log on to the Windows OS.
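
The usual recovery route (a sketch only; it assumes the instance is restarted with the -m single-user flag, which lets a member of the machine's local Administrators group connect with sysadmin rights) ends with re-creating a sysadmin login:

-- Run while connected during single-user (-m) startup; the login name
-- and password here are placeholders.
CREATE LOGIN recovery_admin WITH PASSWORD = 'Str0ng!Passw0rd';
EXEC sp_addsrvrolemember 'recovery_admin', 'sysadmin';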

mysql match against boolean mode

Posted: 07 Oct 2013 10:12 AM PDT

I'm using MATCH AGAINST in boolean mode to add a simple search functionality to my site.

The column to be searched is a varchar with no full-text index. The column values can be alphanumeric: all letters, all numbers, or certain combinations of both (e.g. 'ABC-123').

There seem to be a few problems. First, the search doesn't work when there's a hyphen (like ABC-123). Second, it will not find partial matches. For a value of 856059, the searches behave like this:

MATCH (field) AGAINST ('+856059' IN BOOLEAN MODE) -- finds the record
MATCH (field) AGAINST ('+85605' IN BOOLEAN MODE)  -- does not find the record
MATCH (field) AGAINST ('+8560' IN BOOLEAN MODE)   -- does not find the record

I'm not sure why it works this way. For various reasons I can't add a FULLTEXT index to the column and use the "default" (natural language) match mode.
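
For reference (a sketch of boolean-mode behavior, not a claim that it fits the constraints above): boolean mode matches whole words only, unless the trailing truncation operator * is used, which enables prefix matching:

MATCH (field) AGAINST ('+8560*' IN BOOLEAN MODE)  -- matches 856059 by prefix

Hyphens are word separators to the full-text parser, so ABC-123 is treated as the two words ABC and 123 rather than one token.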

Optimizing a query using derived tables

Posted: 07 Oct 2013 07:39 PM PDT

This is actually a cross-post. I had posted this already on StackOverflow, and then someone suggested me to post here instead. If required, I'm willing to delete the StackOverflow post, if it's more relevant to keep a single copy here.

As part of a daily cron job, I need to run a query that processes a whole lot of data. The data relates to visitors coming to a website; the job updates it with what we have captured previously.

The query relies on two derived tables (SELECT queries in the FROM clause) to do its work:

SELECT new_visits.visitor_id, new_visits.visit_id,
       new_visits.visit_first_action_time,
       new_visits.purchased AS purchased,
       IFNULL(existing_visitors.purchased, 0) AS existing_purchased
  FROM ( SELECT tv.visitor_id, tv.visit_id, tv.visit_first_action_time,
                IF(tc.idgoal = 0, 1, 0) AS purchased
           FROM tbl_log_visit tv
           LEFT OUTER JOIN tbl_log_conversion tc
             ON tv.visit_id = tc.visit_id AND tc.idgoal = 0
          WHERE tv.idsite = 12 AND tv.visit_id >= 477256
          ORDER BY tv.visit_id
          LIMIT 1000 ) new_visits
  LEFT JOIN ( SELECT visitor_id, MAX(visit_seq) AS visit_seq, purchased
                FROM tbl_last_input_visit
               WHERE site_id = 12
               GROUP BY visitor_id, purchased ) existing_visitors
    ON new_visits.visitor_id = existing_visitors.visitor_id
 ORDER BY new_visits.visitor_id, new_visits.visit_id;

With smaller datasets, this query works just fine. However, as the data grows, the query becomes progressively slower, to the point where it now takes around 30 seconds to execute (at the start it took around 1.5 seconds).

The query plan is as follows:

+----+-------------+-----------------+-------+-----------------------------------------------------------------------------------+---------------+---------+------------+---------+---------------------------------+
| id | select_type | table           | type  | possible_keys                                                                     | key           | key_len | ref        | rows    | Extra                           |
+----+-------------+-----------------+-------+-----------------------------------------------------------------------------------+---------------+---------+------------+---------+---------------------------------+
|  1 | PRIMARY     | <derived2>      | ALL   | NULL                                                                              | NULL          | NULL    | NULL       |    1000 | Using temporary; Using filesort |
|  1 | PRIMARY     | <derived3>      | ALL   | NULL                                                                              | NULL          | NULL    | NULL       |  705325 |                                 |
|  3 | DERIVED     | tbl_input_visit | ref   | visitorid_seq,visitorid_idx                                                       | idvisitor_seq | 4       |            |  490047 | Using where                     |
|  2 | DERIVED     | tv              | range | PRIMARY,index_idsite_config_datetime,index_idsite_datetime,index_idsite_idvisitor | PRIMARY       | 4       | NULL       | 4781309 | Using where                     |
|  2 | DERIVED     | tc              | ref   | PRIMARY                                                                           | PRIMARY       | 8       | tv.idvisit |       1 | Using index                     |
+----+-------------+-----------------+-------+-----------------------------------------------------------------------------------+---------------+---------+------------+---------+---------------------------------+

At this point, one option I have explored is creating temporary tables, but the overhead of doing so is quite significant. I also realise that since this query relies on derived tables, MySQL will not be able to use the underlying tables' indexes on the derived results.

Here are the CREATE statements for the tables involved:

CREATE TABLE `tbl_last_input_visit` (
  `site_id` int(10) unsigned NOT NULL,
  `visitor_id` binary(8) NOT NULL,
  `visit_seq` int(10) unsigned NOT NULL,
  `purchase_cycle_seq` int(10) unsigned NOT NULL,
  `visit_in_cycle_seq` int(10) unsigned NOT NULL,
  `purchased` smallint(5) unsigned NOT NULL COMMENT 'l_ij',
  UNIQUE KEY `idvisitor_seq` (`site_id`,`visitor_id`,`visit_seq`),
  KEY `idvisitor_idx` (`site_id`,`visitor_id`)
) ENGINE=InnoDB

CREATE TABLE `tbl_log_visit` (
  `visit_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `idsite` int(10) unsigned NOT NULL,
  `visitor_id` binary(8) NOT NULL,
  `visit_last_action_time` DATETIME,
  `config_id` int(10) unsigned NOT NULL,
  PRIMARY KEY (`visit_id`),
  KEY `index_idsite_config_datetime` (`idsite`,`config_id`,`visit_last_action_time`),
  KEY `index_idsite_datetime` (`idsite`,`visit_last_action_time`),
  KEY `index_idsite_idvisitor` (`idsite`,`visitor_id`)
) ENGINE=InnoDB

CREATE TABLE `tbl_log_conversion` (
  `visit_id` int(10) unsigned NOT NULL,
  `site_id` int(10) unsigned NOT NULL,
  `visitor_id` binary(8) NOT NULL,
  `idgoal` int(10) NOT NULL,
  `idorder` int(10) NOT NULL,
  PRIMARY KEY (`visit_id`,`idgoal`),
  UNIQUE KEY `unique_idsite_idorder` (`site_id`,`idorder`)
) ENGINE=InnoDB

Is there some way I can go about improving the performance of this query?
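
One avenue to test (a sketch, not a guaranteed fix; the index name is hypothetical): give the second derived table a covering index, so its WHERE site_id = 12 ... GROUP BY visitor_id, purchased and the MAX(visit_seq) can all be resolved from the index without touching the 705k rows themselves:

ALTER TABLE tbl_last_input_visit
  ADD KEY idx_site_visitor_purchased (site_id, visitor_id, purchased, visit_seq);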

Database design: Modifying an open source database without a lot of changes

Posted: 07 Oct 2013 10:17 AM PDT

  • I am adding more functionality/features to an open source project.
  • The project is a forum where each post can be given a specific list of tags.
  • The tags are homogeneous, i.e. they are all of one kind, e.g. Subject.
  • I am required to add multiple kinds of tags, e.g. Subject, Course, College, Experience. All of them should have the same features as the current tag.
  • Currently each post can have multiple tags of the same kind. The requirement is to support multiple kinds, where each kind allows multiple tags.
  • Below is a snippet of the already existing data model.

[Image: a snippet of the already existing data model]

  • How do I make changes to the already existing database? (Please don't discard the question as opinion-based; I have seen many others with similar doubts.)
  • Do I need to add a separate table for each kind of tag and redo whatever the original tag did?
  • Or should I create a level above tags, i.e. a type of tag, so that all kinds inherit the basic tag functionality? (See the sketch after this list.)
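
A sketch of that third option (names hypothetical), with one extra level instead of one new table per kind of tag:

CREATE TABLE tag_type (
  tag_type_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name        VARCHAR(80) NOT NULL        -- 'Subject', 'Course', ...
) ENGINE=InnoDB;

CREATE TABLE tag (
  tag_id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  tag_type_id INT UNSIGNED NOT NULL,
  label       VARCHAR(80) NOT NULL,
  FOREIGN KEY (tag_type_id) REFERENCES tag_type (tag_type_id)
) ENGINE=InnoDB;

If the existing model links posts to tags through a join table, that link keeps working unchanged, since every tag still has a single id.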

One or two tables (for two things: cancellations and moves)?

Posted: 07 Oct 2013 10:54 AM PDT

A lesson may be canceled, moved to another time, and possibly switched to another teacher.

In other words, we have

lessonid INT UNSIGNED
canceled BIT(1)
time TIME
teacherid INT UNSIGNED

Should all these "lesson modifications" be stored in one table? Or one table for canceled and another for moved lessons?

I am inclined to store them in one table (with a possibly NULL time and a probably NULL teacherid), because then I can have a common "modification ID" (the primary key of the table) for both cancellations and moves to another time. But I am open to hearing arguments for and against.

MySQL 5
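
A sketch of the single-table variant (column names chosen freely):

CREATE TABLE lesson_modification (
  modification_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  lesson_id       INT UNSIGNED NOT NULL,
  canceled        BIT(1) NOT NULL DEFAULT b'0',
  new_time        TIME NULL,           -- NULL for a plain cancellation
  new_teacher_id  INT UNSIGNED NULL    -- NULL when the teacher is unchanged
) ENGINE=InnoDB;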

Script out XMLA for all databases

Posted: 07 Oct 2013 01:26 PM PDT

I have several SSAS databases, and I want to script out the XMLA for them on a nightly basis. This would be a second-tier backup in addition to the regular backups we have.

How do I auto-generate XMLA scripts for all the databases?

Connecting to SQL Azure from SSMS throws exception 'sys.configurations'

Posted: 07 Oct 2013 01:45 PM PDT

When connecting to SQL Server Azure from SQL Server Management Studio, the following exception occurs:

Cannot connect to xxxxxxxxx.database.windows.net

An exception occurred while executing a Transact-SQL statement or batch.
(Microsoft.SqlServer.ConnectionInfo)

Invalid object name 'sys.configurations'

Any clue?

And here's the whole message:

Cannot connect to ndkkyxvogj.database.windows.net.

===================================

An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)

Program Location:

   at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteWithResults(String sqlCommand)
   at Microsoft.SqlServer.Management.UI.VSIntegration.ObjectExplorer.ObjectExplorer.CheckDacAndSmo(SqlConnection sqlConnection)
   at Microsoft.SqlServer.Management.UI.VSIntegration.ObjectExplorer.ObjectExplorer.ValidateConnection(UIConnectionInfo ci, IServerType server)
   at Microsoft.SqlServer.Management.UI.ConnectionDlg.Connector.ConnectionThreadUser()

===================================

Invalid object name 'sys.configurations'. (.Net SqlClient Data Provider)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=11.00.9164&EvtSrc=MSSQLServer&EvtID=208&LinkId=20476

Server Name: ndkkyxvogj.database.windows.net
Error Number: 208
Severity: 16
State: 1
Line Number: 17

Program Location:

   at Microsoft.SqlServer.Management.Common.ConnectionManager.ExecuteTSql(ExecuteTSqlAction action, Object execObject, DataSet fillDataSet, Boolean catchException)
   at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteWithResults(String sqlCommand)

Can non-domain-connected users use Report Builder

Posted: 07 Oct 2013 02:07 PM PDT

I'm dealing with a scenario where SSRS is being provided to remote clients on a different domain. We've discovered that they can launch Report Builder (a ClickOnce application) from their Report Manager browser session, but RB then uses their local credentials when trying to do anything (e.g. hit a datasource) in the RB session.

This article (http://technet.microsoft.com/en-us/library/ms365173(v=sql.105).aspx) seems to indicate that it won't be possible to hack our way around this; whereas this one about Azure (http://msdn.microsoft.com/en-us/library/windowsazure/dn189155.aspx) appears to indicate that a local RB session may be able to interact successfully with Azure - does this suggest there's a glimmer of hope?

We are currently running SQL 2008 R2.

Query_cache doesn't work with join

Posted: 07 Oct 2013 12:25 PM PDT

I have a simple join query, but for some reason the query cache won't cache it!

SELECT id, news, approve, FIXED, DATE, allow_main
  FROM post
  LEFT JOIN post_plus ON post.id = post_plus.news_id
 WHERE approve = 1 AND allow_main = 1
 ORDER BY FIXED DESC, DATE DESC
 LIMIT 7, 7;

Without the JOIN post_plus ON (post.id = post_plus.news_id), it is cached as expected.

The query cache is on:

| query_cache_limit            | 10485760  |
| query_cache_min_res_unit     | 4096      |
| query_cache_size             | 536870912 |
| query_cache_strip_comments   | OFF       |
| query_cache_type             | ON        |
| query_cache_wlock_invalidate | OFF       |

Server version: 5.6.12-56 Percona Server (GPL), Release rc60.4, Revision 393

Both tables are InnoDB with the utf8 charset.

mysqldump freezing on a specific table

Posted: 07 Oct 2013 03:25 PM PDT

I dumped a database (sys_data) which is very big (800 GB, all data in one ibdata file) from a remote server, but the dump got stuck on one table (tb_trade_376). My dump command:

mysqldump -uxx -pxx -h192.168.1.xxx --single-transaction sys_data > /home/sys_data.sql

When the dump was blocked:

show processlist;

5306612 | root | 192.168.1.161:57180 | sys_data | Query | 23955 | Sending data | SELECT /*!40001 SQL_NO_CACHE */ * FROM `tb_trade_376`

On the other hand I can dump the table tb_trade_376 successfully if I just dump the table only.

mysqldump -uxx -pxx -h192.168.1.xxx \
  --single-transaction sys_data tb_trade_376 > /home/tb_trade_376.sql

This works well and quickly! The table tb_trade_376 has about 700,000-800,000 rows.

What is the next step in investigating why I can't dump the whole database? How can I make it work?

In place upgrade from MySQL 5.5 to 5.6.11 removes all users from user table

Posted: 07 Oct 2013 02:25 PM PDT

On Windows, I upgraded from 5.1 to 5.5 with no problem:

  1. Copied my 5.1 data folder into my 5.5 instance
  2. Started mysqld skipping grants
  3. Ran mysql_upgrade

All good, but going from 5.5 to 5.6:

  1. Copied 5.5 data folder to 5.6 instance
  2. Started mysqld skipping grants
  3. Ran mysql_upgrade

but I get:

C:\Users\QAdmin>mysql_upgrade
Looking for 'mysql.exe' as: C:\Program Files\MySQL\MySQL Server 5.6\bin\mysql.exe
Looking for 'mysqlcheck.exe' as: C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqlcheck.exe
Running 'mysqlcheck' with connection arguments: "--port=3306"
Running 'mysqlcheck' with connection arguments: "--port=3306"
mysql.user_info                                    OK
Running 'mysql_fix_privilege_tables'...
Running 'mysqlcheck' with connection arguments: "--port=3306"
C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqlcheck.exe: Got error: 1130: Host 'localhost' is not allowed to connect to this MySQL server when trying to connect
FATAL ERROR: Upgrade failed

If I look at the mysql.user table it is completely empty.

  • Has anyone seen this, or does anyone know what is going on?
  • Does the user table get erased during the "upgrade", so that mysql_upgrade can no longer connect?

Thanks.

SSRS 2008 R2 setup issue

Posted: 07 Oct 2013 05:25 PM PDT

I have installed SSRS 2008 R2 on my desktop and on a server. When I hit the Reports link on my desktop,

http://mypc/Reports_mypc/Pages/Folder.aspx  

all I get is the home page of the desktop SSRS instance.

I can't create a new folder or data source or anything of the sort.

On the server where I am attempting to set up SSRS 2008 R2, all I get is a white screen that shows the virtual folder name in a large font, followed by the version of the Reporting Services server on the next line. This leaves me no clues as to what needs to be fixed. On both PCs I am using the credentials of the local admin. Any clues on what needs to be fixed?

I would like to query a range of criteria on multiple columns in MySQL

Posted: 07 Oct 2013 11:53 AM PDT

I have 3 columns in a MySQL table, using the InnoDB engine. I want to be able to search for certain values on those columns, and also for anything close to those values. For example:

We have 3 columns, "Fast", "Smart", and "Sharp", of type boolean/int. I have 1000 records in the table; as an example, I list 5 here:

Fast | Smart | Sharp
--------------------
  0  |   1   |   1
  1  |   1   |   1
  0  |   0   |   1
  1  |   1   |   1
  1  |   0   |   0

Let's say we search for people who are smart and sharp, i.e. 0 1 1. If there is no record with the value 0 1 1, we want to get the closest to it, which would be 1 1 1 or 1 0 1, etc. Right now I only get exact matches; if the exact value doesn't exist, I get nothing back from the DB. How can I achieve this, and if I get a bunch of results, how can I sort them from the closest to my query to the farthest?
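
One way to express "closest" (a sketch; the table name people is hypothetical) is to count how many columns differ from the target combination and sort by that count:

-- Target is (fast, smart, sharp) = (0, 1, 1); distance 0 = exact match.
SELECT fast, smart, sharp,
       ABS(fast - 0) + ABS(smart - 1) + ABS(sharp - 1) AS distance
  FROM people
 ORDER BY distance
 LIMIT 10;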

What is the difference between int(8) and int(5) in mysql?

Posted: 07 Oct 2013 01:25 PM PDT

I found out that a field defined as INT(8) without ZEROFILL behaves exactly like INT(5):

in both cases the range is

−2,147,483,648 to 2,147,483,647, from −(2^31) to 2^31 − 1  

Or am I missing something?

I found this answer: http://dba.stackexchange.com/a/370/12923

The (5) represents the display width of the field. The manual states:

The display width does not constrain the range of values that can be stored in the column. Nor does it prevent values wider than the column display width from being displayed correctly. For example, a column specified as SMALLINT(3) has the usual SMALLINT range of -32768 to 32767, and values outside the range permitted by three digits are displayed in full using more than three digits.

The display width, from what I can tell, can be used to left-pad numbers that are less than the defined width. So 00322, instead of 322. TBH, I've never used it.

But it doesn't affect the storage size of the column. An int will take up more space than a smallint.

So there seems to be no difference, then.
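
A quick way to see the display width doing the only thing it does (a sketch): compare ZEROFILL columns of different widths.

CREATE TABLE t (a INT(5) ZEROFILL, b INT(8) ZEROFILL, c INT(5));
INSERT INTO t VALUES (322, 322, 322);
SELECT * FROM t;   -- a = 00322, b = 00000322, c = 322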

MySQL table relations, inheritance or not?

Posted: 07 Oct 2013 06:25 PM PDT

I'm building a micro CMS, using MySQL as the RDBMS and the Doctrine ORM for mapping.

I would like to have two types of pages: a static page and a blog page. A static page would have page_url and page_content stored in the database. A blog page would have page_url but no page_content; the blog would have posts, categories, and so on.

Let's say I have a route like this:

/{pageurl}  

This is a page whose URL can be home, news, blog, and so on. That page can be a static page, in which case I would just print page_content. But it can also be a blog page, in which case I would print the latest posts as the content.

How should I relate these static-page and blog-page tables? Is this inheritance, since both are pages with their own URL but different content? Should I use inheritance, so that both static and blog page extend a Page that holds page_url? Or should I make another table, page_types, and store the available page types there?
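
A sketch of the single-table version of this (one page table with a type discriminator; names hypothetical):

CREATE TABLE page (
  page_id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  page_url     VARCHAR(255) NOT NULL UNIQUE,
  page_type    ENUM('static', 'blog') NOT NULL,
  page_content TEXT NULL               -- used only by static pages
) ENGINE=InnoDB;

Doctrine can map this shape directly as single-table inheritance, with page_type as the discriminator column.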

Most idiomatic way to implement UPSERT in Postgresql nowadays

Posted: 07 Oct 2013 07:16 PM PDT

I've read about different UPSERT implementations in PostgreSQL, but all of these solutions are relatively old or relatively exotic (using a writable CTE, for example).

And I'm just not enough of a PostgreSQL expert to tell immediately whether these solutions are old because they are well established, or whether they are (as almost all of them seem to be) just toy examples not appropriate for production use.

So my question is the following: given that it is the year 2012, what is the most common, most concurrency-safe way to implement UPSERT in PostgreSQL?
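
For the record, the era-appropriate pattern (before INSERT ... ON CONFLICT existed) is the retry loop from the PostgreSQL documentation, sketched here against a hypothetical db(key, data) table:

CREATE FUNCTION merge_db(k INT, v TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- First try to update an existing row.
        UPDATE db SET data = v WHERE key = k;
        IF found THEN
            RETURN;
        END IF;
        -- Not there: try to insert. A concurrent insert raises
        -- unique_violation, in which case we loop and retry the UPDATE.
        BEGIN
            INSERT INTO db (key, data) VALUES (k, v);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing, loop again
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;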

What's the difference between a CTE and a Temp Table?

Posted: 07 Oct 2013 06:06 PM PDT

What is the difference between a Common Table Expression (CTE) and a temp table? And when should I use one over the other?

CTE

WITH cte (Column1, Column2, Column3)
AS
(
    SELECT Column1, Column2, Column3
    FROM SomeTable
)

SELECT * FROM cte

Temp Table

SELECT Column1, Column2, Column3
INTO #tmpTable
FROM SomeTable

SELECT * FROM #tmpTable
