Tuesday, June 11, 2013

[how to] Prune unused joins

Prune unused joins

Posted: 11 Jun 2013 08:49 PM PDT

I expected that selecting from a view would not join the tables from which I don't query any values, but that doesn't seem to be the case.

I have a bunch of tables with the following structure:

CREATE TABLE ind1 (year integer, id integer, ind1 float);
ALTER TABLE ind1 ADD PRIMARY KEY (year, id);
INSERT INTO ind1 VALUES (2000, 1, 0.0);
INSERT INTO ind1 VALUES (2000, 2, 0.3);
INSERT INTO ind1 VALUES (2000, 3, 1.1);
INSERT INTO ind1 VALUES (2001, 1, 0.0);
INSERT INTO ind1 VALUES (2001, 2, 0.3);
INSERT INTO ind1 VALUES (2001, 3, 1.1);
INSERT INTO ind1 VALUES (2002, 1, 0.0);
INSERT INTO ind1 VALUES (2002, 2, 0.3);
INSERT INTO ind1 VALUES (2002, 3, 1.1);
VACUUM ANALYZE ind1;

There are also ind2, ind3, ... . The set of possible year and id values is identical in all tables and is also given in an extra table:

CREATE TABLE id (id integer PRIMARY KEY);
INSERT INTO id VALUES (1);
INSERT INTO id VALUES (2);
INSERT INTO id VALUES (3);
VACUUM ANALYZE id;

Now I create a view to show all entries for a given year:

CREATE VIEW ind_2000 AS SELECT id, ind1, ind2, ind3
  FROM (SELECT id, 2000 AS year FROM id) T
  LEFT JOIN ind1 USING (year, id)
  LEFT JOIN ind2 USING (year, id)
  LEFT JOIN ind3 USING (year, id);

However, when selecting only the columns id, ind1, and ind2, all tables are still joined, although this is not necessary to answer the query correctly:

EXPLAIN ANALYZE SELECT id, ind1, ind2 FROM ind_2000;  

The entire code is in this gist.

How do I tell the optimizer to avoid the useless joins?

(Using PostgreSQL 8.4)

EDIT: The behavior persists even when dropping the time dimension; see option1.sql in the gist.
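
For what it's worth: PostgreSQL 9.0 introduced left-join removal, which drops a LEFT JOIN whose columns are unreferenced when the join key is provably unique (as it is here via the primary key on (year, id)), so the original view would be pruned as hoped after an upgrade. On 8.4 there is no such pruning; one sketch of a workaround is to define narrower views for the column combinations actually queried (names mirror the tables above):

-- workaround sketch for 8.4: only join the tables whose columns are needed
CREATE VIEW ind_2000_12 AS SELECT id, ind1, ind2
  FROM (SELECT id, 2000 AS year FROM id) T
  LEFT JOIN ind1 USING (year, id)
  LEFT JOIN ind2 USING (year, id);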

Does the weekday number change according to localization in SQL Server?

Posted: 11 Jun 2013 03:27 PM PDT

Does the weekday number change according to the server/database regional configuration in SQL Server (2005+)?
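
In short, DATEPART(weekday, ...) depends on the session's DATEFIRST setting, whose default follows the language (SET LANGUAGE / the login's default language) rather than the server collation. A sketch of a DATEFIRST-independent weekday (Monday = 1 through Sunday = 7):

DECLARE @d datetime;
SET @d = '20130611';  -- a Tuesday
-- normalize away @@DATEFIRST so the result is the same in every session
SELECT (DATEPART(weekday, @d) + @@DATEFIRST - 2) % 7 + 1 AS iso_weekday;  -- returns 2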

Convert the table PROCESSLIST from InnoDB to MYISAM

Posted: 11 Jun 2013 09:27 PM PDT

I use MySQL. My database's name is "db", and I have a table called "Test".

I want to recover the last update time, so I used this query:

SELECT update_time
FROM information_schema.tables
WHERE table_schema='db'
AND table_name='Test';

But the result is NULL, so I converted my table from InnoDB to MyISAM with this statement:

mysql> ALTER TABLE db.Test ENGINE=MYISAM;  

and it worked.

But the problem is that when I want to convert the table PROCESSLIST from InnoDB to MYISAM I get this message:

mysql> use information_schema
Database changed
mysql> ALTER TABLE PROCESSLIST ENGINE=MYISAM;
ERROR 1044 (42000): Access denied for user 'root'@'localhost' to database 'information_schema'

So, is there another way to find the last update time for this table (PROCESSLIST)?
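
Note that information_schema tables such as PROCESSLIST are virtual, in-memory views, so their storage engine cannot be changed. If the goal is just a reliable last-update time for an InnoDB table, one sketch (table and trigger names are illustrative) is to maintain it yourself:

CREATE TABLE table_activity (
  table_name  VARCHAR(64) PRIMARY KEY,
  last_update DATETIME
) ENGINE=InnoDB;

CREATE TRIGGER test_last_update AFTER UPDATE ON db.Test
FOR EACH ROW
  REPLACE INTO table_activity VALUES ('Test', NOW());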

How to compare an attribute value of one table to all attribute values of another table?

Posted: 11 Jun 2013 09:29 PM PDT

I want to get all values of A.A and compare each to all values of B.A before determining whether or not to display the A record in the result set. I would only display an A record if the value of an individual A.A does not exist as any B.A.

Can anybody help me with this one?
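
A minimal sketch, assuming both tables have a column named A:

SELECT A.*
FROM A
LEFT JOIN B ON B.A = A.A
WHERE B.A IS NULL;

-- equivalently:
SELECT *
FROM A
WHERE NOT EXISTS (SELECT 1 FROM B WHERE B.A = A.A);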

Automated query log

Posted: 11 Jun 2013 06:43 PM PDT

I would like to ask for some opinions, recommendations, or guidelines on how to create an automatic logging system for the executed queries on a MySQL database.

Some details:

  • it's a PHP + MySQL project
  • multiple users can use the database, based upon a user table and an authentication system made in PHP (they share one MySQL user)
  • so far some query logging is implemented, but it is not very efficient because every single logging procedure is hard-coded in PHP
  • I am looking for a way to create a general procedure to which I can pass the user id, and the rest is done automatically
  • I need to store the previous value, new value, column name, table name, user name, query type, and timestamp

I am familiar with TRIGGERS and STORED PROCEDURES, but can't really figure out if that's the way to go, and if so, how (I don't need code, just the pure logic, although a usable minimalistic example would be welcome).
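
Triggers are a common fit when everything goes through one MySQL user: have PHP set a session variable with the application user id right after connecting (e.g. SET @app_user_id = 42;), then let per-table triggers write to a log table. A minimal sketch with illustrative names (MySQL triggers must be written per table, and per column you want to track):

CREATE TABLE query_log (
  id          INT AUTO_INCREMENT PRIMARY KEY,
  app_user_id INT,
  table_name  VARCHAR(64),
  column_name VARCHAR(64),
  old_value   TEXT,
  new_value   TEXT,
  query_type  VARCHAR(10),
  logged_at   TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TRIGGER orders_log_update AFTER UPDATE ON orders
FOR EACH ROW
  INSERT INTO query_log (app_user_id, table_name, column_name, old_value, new_value, query_type)
  VALUES (@app_user_id, 'orders', 'status', OLD.status, NEW.status, 'UPDATE');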

Thank you.

How to/advice for hosting a multi-user database on the cloud with specific objectives

Posted: 11 Jun 2013 11:03 AM PDT

I have developed an application that users will be able to download and create an account with through our company. This application will always be connected to the internet. I have been testing the application with a local SQL database up to this point, but our goal is to have the database hosted online, and I need to start thinking big picture. I have come up with a couple of solutions; I would just like to hear some input/advice from someone who is more experienced in the database world.

Here is the problem:

Our users sell products and we are an order manager. The tables that are going to have lots of entries will be Orders, Shipments, Archived Orders, Archived Shipments, etc. (especially for large companies that have thousands of daily orders.) I am trying to decide the best way to organize the tables/database to accommodate the growth our company will receive once the product is released (a couple hundred customers, a couple thousand in a couple years, not unlikely.) Just off the top of my head, I have a couple ideas:

1) One large Orders table, one large Shipments table. These tables store the thousands of orders/shipments that are added daily, they can be queried based off some company index. These tables will obviously get large (hundreds of customers adding thousands of orders/shipments daily)

2) Individual Orders and Shipments tables for each customer. So the name of the table would essentially be the organizer for different companies' orders/shipments. Is this a logical answer for a situation like this?

3) Some sort of system where, say, the first X customers' orders/shipments are added to one database's tables, and then when we get more than X customers, another database/server would be created to divvy up the server power/storage.

Obviously you can tell I am not an expert and some of my questions may seem vague, but all I am looking for is some advice/possible solutions to move forward and start integrating my application with databases/servers on the cloud, with the mindset that many customers will be added in a small period of time and I will need to appropriately divide/add resources/servers/tables. Any input is appreciated.
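
For what it's worth, option 1 with a customer/company key is the conventional relational design; a minimal sketch with illustrative, T-SQL-flavored names:

CREATE TABLE Orders (
  order_id   BIGINT IDENTITY(1,1) PRIMARY KEY,
  company_id INT NOT NULL,
  placed_at  DATETIME NOT NULL,
  status     VARCHAR(20) NOT NULL
);

-- keeps each company's order lookups selective even as the table grows
CREATE INDEX IX_Orders_Company ON Orders (company_id, placed_at);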

How can extract the table schema from just the .frm file?

Posted: 11 Jun 2013 11:03 AM PDT

I have extracted the MySQL data directory from a backup and need to get the schema from an old table; however, it's a backup from a different machine.

I have read a fair number of tutorials today on how to do this, but each time it seems to fail, or I end up having to reinstall MySQL because it hangs or crashes. (Create a different database, create a table with the same name, replace the file, stop/start the engine, recover from the frm. Tried in varying orders and combinations.)

Is there any external tool that can extract the schema from the .frm file? I can see the column names if I open up the file. I had a look but I can't seem to find anything that will enable me to do this.
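
One external tool worth trying (assuming it fits your versions): mysqlfrm from the MySQL Utilities package can usually reconstruct a CREATE TABLE statement from a lone .frm file in diagnostic mode, without a running server. The path below is illustrative:

mysqlfrm --diagnostic /path/to/backup/datadir/mydb/mytable.frm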

Thanks in advance.

App can't find or connect to database

Posted: 11 Jun 2013 10:55 AM PDT

I'm trying to install a web app on my Ubuntu 12.04 server. The app is based on PHP and MySQL, and I've got LAMP already working OK on my server - in fact I've already got other PHP & MySQL apps installed and working OK.

The problem I'm facing is that when I try to run this app I get the error

Error: A Database connection using "Mysql" was missing or unable to connect. The database server returned this error: SQLSTATE[HY000] [2003] Can't connect to MySQL server on '5.175.145.251' (111)

I have double and triple checked the username and password that I've put into database.php and have confirmed that they are the same login credentials as what I use for phpMyAdmin, and I'm even referring to the correct database (which, as can be seen in phpMyAdmin, has been set up correctly).

For the install, I have followed the instructions at http://sourceforge.net/p/coopos/wiki/Install/ The only place I have differed is "Step III. Configure the Virtual Host", the reason being that I cannot move from "/var/www" to "/home/username/public_html", as I already have other apps running under "/var/www".

My installation of the app (with all the error messages) can be seen at

http://5.175.145.251/pos  

I would really appreciate some help on this, please.

Is there a way, in a single SQL statement to ensure that all items in a list are present in some column?

Posted: 11 Jun 2013 01:01 PM PDT

What I'm looking for is a way of verifying that all of a list of items (let's say 'FOO', 'BAR', and 'BAZ') are present in a given table.column. This would be something like SELECT something FROM sometable WHERE somevalue IN ('THIS', 'THAT', 'OTHER THING') ... except it's sort of the inverse:

Ideally the statement would give me everything from ('FOO', 'BAR', 'BAZ') NOT IN sometable.somecolumn; then I could use that as my check (assert that my SQL query returns an empty set, or take the resulting set and complain that each of these is an "unrecognized" key). (A brute-force approach would be to insert all of the terms/tags/items into a temporary, single-column table and perform a JOIN on my actual table's column for the set of rows which are NOT IN it.)

I'm sure I'm missing some syntactic trick that'll seem obvious once I see an example.

In this particular case my table structures are something like:

CREATE TABLE items (id     INTEGER PRIMARY KEY, item TEXT UNIQUE NOT NULL);

CREATE TABLE tags  (tag_id INTEGER PRIMARY KEY, tag  TEXT UNIQUE NOT NULL);

CREATE TABLE item_tag (item_id integer, tag_id integer,
                       FOREIGN KEY(item_id) REFERENCES items(id),
                       FOREIGN KEY(tag_id) REFERENCES tags(tag_id),
                       PRIMARY KEY (item_id, tag_id));

... but I'm simply trying to come up with a template for taking an arbitrary number of "tags" and ensuring that all of them are valid entries in the "tags.tag" column. (The application will have to support arbitrarily complex queries for subsets of these tags to return DISTINCT subsets of the items, but I want to raise an error on any non-existent tag before building the JOIN's WHERE expression.)

Obviously I could just loop over the tags doing a separate SELECT tag_id FROM tags where tag=? ... entailing numerous round trips to the database. But this seems silly when I could send all of them to some sort of query in a single statement.
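
One sketch of the single-statement version: build the candidate list as an inline derived table and keep only the entries with no match in tags.tag (UNION ALL is used here for portability; engines that support a VALUES list can use that instead):

SELECT candidate.tag
FROM (
            SELECT 'FOO' AS tag
  UNION ALL SELECT 'BAR'
  UNION ALL SELECT 'BAZ'
) AS candidate
LEFT JOIN tags ON tags.tag = candidate.tag
WHERE tags.tag IS NULL;
-- an empty result means every tag in the list exists; otherwise the rows returned
-- are exactly the "unrecognized" tags to complain about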

Combining multiple queries

Posted: 11 Jun 2013 06:47 PM PDT

I have 2 queries, given below. Now I want to have one query, combining the two, so as to get a table with the columns (Tag Number, Primary Name, and Discovered Name).

Query 1:

select  RT.[TagName] as 'Tag Number',
        RA.SourceName as 'Primary Document',
        RA.SourceRevision
from dbo.REP_TTA as RT
join dbo.REP_ASS as RA on RA.TargetName=RT.TagName
where RA.[AssocName]='is a document for'

Query 2:

select RT.[TagName] as 'Tag Number',
       RA.SourceName as 'Discoverd Document',
       RA.SourceRevision
from dbo.REP_TransposedTagAttributes as RT
join dbo.REP_ASS as RA on RA.TargetName=RT.TagName
where RA.[AssociationName]='refers to'
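
One sketch of a combined query: join the association table twice, once per association type, so each tag row carries both documents. This assumes the column is AssociationName in both cases and that REP_TTA and REP_TransposedTagAttributes refer to the same table; LEFT JOINs keep tags that have only one of the two associations:

SELECT RT.[TagName]   AS 'Tag Number',
       RAP.SourceName AS 'Primary Document',
       RAD.SourceName AS 'Discoverd Document'
FROM dbo.REP_TransposedTagAttributes AS RT
LEFT JOIN dbo.REP_ASS AS RAP
       ON RAP.TargetName = RT.TagName
      AND RAP.[AssociationName] = 'is a document for'
LEFT JOIN dbo.REP_ASS AS RAD
       ON RAD.TargetName = RT.TagName
      AND RAD.[AssociationName] = 'refers to';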

Multiple table rows in one row

Posted: 11 Jun 2013 03:48 PM PDT

I have a table from which I want to get the classes where the 3rd-position candidate got more marks than the difference between the 1st and 2nd.

class  position  name  total_marks
10     1         a     100
10     2         b     200
10     3         c     150
11     1         d     300
11     2         e     400
11     3         f     200

I want to write a query that finds this situation in all classes.
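
A sketch using one self-join per position (the table name marks is illustrative, and this assumes exactly one row per class and position):

SELECT p1.class
FROM marks p1
JOIN marks p2 ON p2.class = p1.class AND p2.position = 2
JOIN marks p3 ON p3.class = p1.class AND p3.position = 3
WHERE p1.position = 1
  AND p3.total_marks > ABS(p1.total_marks - p2.total_marks);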

How to partition a very large table?

Posted: 11 Jun 2013 01:16 PM PDT

I have a very big table. I want to partition it, but I cannot.

Database size: 1TB, Free space 200GB

Table:

  • Size: 165 columns (row length 4216 KB, no LOBs), 500 million rows, 600GB of data.
  • Possible partitioning: a day per partition
  • Number of rows per day/partition: 2 million

In order to partition it, I need to create a clustered index. But creating the partition requires free space of the same size as the table, and I do not have an extra 600GB.

Are there any options for how I can partition this table?

EDIT 1:

I have tried copying the data into a separate table.

However, when I try to DELETE (or INSERT) 1 day of data into another table, I get an error that the transaction log is full and my transaction is being rolled back. My transaction log is approximately 20 GB and I cannot make it any bigger.
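
The log fills because the whole day is copied in one transaction. A sketch of copying in small batches so each transaction stays well inside the 20 GB log (table, column, and key names are illustrative; with FULL recovery, taking log backups between batches is what actually lets the log space be reused):

DECLARE @rows INT;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.BigTable_Staging (Id, LogDate /* , other columns */)
    SELECT TOP (100000) s.Id, s.LogDate /* , other columns */
    FROM dbo.BigTable AS s
    WHERE s.LogDate >= '20130601' AND s.LogDate < '20130602'
      AND NOT EXISTS (SELECT 1 FROM dbo.BigTable_Staging AS t WHERE t.Id = s.Id);

    SET @rows = @@ROWCOUNT;
    -- take a log backup here (or run the database in SIMPLE recovery during the copy)
END;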

MySQL Dumped data not displayed

Posted: 11 Jun 2013 01:06 PM PDT

I am using MySQL 5.0.88. I imported a 20 GB dump file on my local machine. After a successful import I checked my DB, but there are no records in it. When I looked at the data directory, the ibdata1 file size was 30 GB. There is no other database inside the data directory. How do I get the dumped data?

Edit by RolandoMySQLDBA

Request #1 : Please run this query and post its output

SELECT table_schema DB, SUM(data_length+index_length) InnoDBSpace
FROM information_schema.tables
WHERE engine='InnoDB'
GROUP BY table_schema;

The output was

+------------------------+-------------+
| DB                     | InnoDBSpace |
+------------------------+-------------+
| sample_development    |    18513920 |
| sample_production     |    18857984 |
+------------------------+-------------+

Request #2 : Please run these queries and post their output

SHOW VARIABLES LIKE 'innodb%';
SHOW VARIABLES LIKE 'datadir';

mysql> SHOW VARIABLES LIKE 'innodb%';
+-----------------------------------------+------------------------+
| Variable_name                           | Value                  |
+-----------------------------------------+------------------------+
| innodb_additional_mem_pool_size         | 1048576                |
| innodb_autoextend_increment             | 8                      |
| innodb_buffer_pool_awe_mem_mb           | 0                      |
| innodb_buffer_pool_size                 | 8388608                |
| innodb_checksums                        | ON                     |
| innodb_commit_concurrency               | 0                      |
| innodb_concurrency_tickets              | 500                    |
| innodb_data_file_path                   | ibdata1:10M:autoextend |
| innodb_data_home_dir                    |                        |
| innodb_adaptive_hash_index              | ON                     |
| innodb_doublewrite                      | ON                     |
| innodb_fast_shutdown                    | 1                      |
| innodb_file_io_threads                  | 4                      |
| innodb_file_per_table                   | OFF                    |
| innodb_flush_log_at_trx_commit          | 1                      |
| innodb_flush_method                     |                        |
| innodb_force_recovery                   | 0                      |
| innodb_lock_wait_timeout                | 50                     |
| innodb_locks_unsafe_for_binlog          | OFF                    |
| innodb_log_arch_dir                     |                        |
| innodb_log_archive                      | OFF                    |
| innodb_log_buffer_size                  | 1048576                |
| innodb_log_file_size                    | 5242880                |
| innodb_log_files_in_group               | 2                      |
| innodb_log_group_home_dir               | ./                     |
| innodb_max_dirty_pages_pct              | 90                     |
| innodb_max_purge_lag                    | 0                      |
| innodb_mirrored_log_groups              | 1                      |
| innodb_open_files                       | 300                    |
| innodb_rollback_on_timeout              | OFF                    |
| innodb_support_xa                       | ON                     |
| innodb_sync_spin_loops                  | 20                     |
| innodb_table_locks                      | ON                     |
| innodb_thread_concurrency               | 8                      |
| innodb_thread_sleep_delay               | 10000                  |
| innodb_use_legacy_cardinality_algorithm | ON                     |
+-----------------------------------------+------------------------+

mysql> SHOW VARIABLES LIKE 'datadir';
+---------------+-----------------+
| Variable_name | Value           |
+---------------+-----------------+
| datadir       | /var/lib/mysql/ |
+---------------+-----------------+

Request #3 : Please run this query and post its output

I need to see the top of the import file. Please run this in the OS and post its output

# head -30 dump.sql

-> head -30 db.sql
-- MySQL dump 10.11
--
-- Host: localhost    Database: sample_production
-- ------------------------------------------------------
-- Server version   5.0.88-community

/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;

--
-- Table structure for table `GODZIRRA`
--

DROP TABLE IF EXISTS `GODZIRRA`;
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `GODZIRRA` (
  `id` int(11) NOT NULL default '0'
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
/*!40101 SET character_set_client = @saved_cs_client */;

Request #4 : Please show the mysqld process from the command line # ps -ef | grep mysqld | grep -v grep

-> ps -ef | grep mysqld | grep -v grep
root     27852     1  0 Jun10 ?        00:00:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/localhost.localdomain.pid
mysql    27876 27852  0 Jun10 ?        00:00:00 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --pid-file=/var/lib/mysql/localhost.localdomain.pid --skip-external-locking

Request #5 : Please run this query as shown

SELECT
    IFNULL(ENGINE,'Total') "Storage Engine",
    LPAD(CONCAT(FORMAT(DAT/POWER(1024,pw1),2),' ',SUBSTR(units,pw1*2+1,2)),17,' ') Data,
    LPAD(CONCAT(FORMAT(NDX/POWER(1024,pw2),2),' ',SUBSTR(units,pw2*2+1,2)),17,' ') Indexes,
    LPAD(CONCAT(FORMAT(TBL/POWER(1024,pw3),2),' ',SUBSTR(units,pw3*2+1,2)),17,' ') Total
FROM
(
    SELECT ENGINE,DAT,NDX,TBL,IF(px>4,4,px) pw1,IF(py>4,4,py) pw2,IF(pz>4,4,pz) pw3
    FROM
    (
        SELECT *,
            FLOOR(LOG(IF(DAT=0,1,DAT))/LOG(1024)) px,
            FLOOR(LOG(NDX)/LOG(1024)) py,
            FLOOR(LOG(TBL)/LOG(1024)) pz
        FROM
        (
            SELECT
                ENGINE,
                SUM(data_length) DAT,
                SUM(index_length) NDX,
                SUM(data_length+index_length) TBL
            FROM
            (
               SELECT engine,data_length,index_length FROM
               information_schema.tables WHERE table_schema NOT IN
               ('information_schema','performance_schema','mysql')
               AND ENGINE IS NOT NULL
            ) AAA
            GROUP BY ENGINE WITH ROLLUP
        ) AAA
    ) AA
) A,(SELECT ' BKBMBGBTB' units) B;

Backup not creating a new file

Posted: 11 Jun 2013 11:23 AM PDT

I run a daily backup of my database through command prompt:

c:\sqlcmd -H localhost -Q "BACKUP DATABASE test TO DISK='c:\test.bak'"  

But the new backup replaces the previous backup. Please tell me what to change so that all backups will be stored on disk.
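
One sketch: put a date stamp in the file name so each day's backup lands in its own file (the path is illustrative):

sqlcmd -H localhost -Q "DECLARE @name nvarchar(260); SET @name = N'c:\test_' + CONVERT(nvarchar(8), GETDATE(), 112) + N'.bak'; BACKUP DATABASE test TO DISK = @name"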

Database Mail sending functionality not working on local system

Posted: 11 Jun 2013 12:45 PM PDT

I am using the Database Mail functionality to send mail from a SQL Server 2008 database via the following stored procedure execution:

EXEC sp_send_dbmail @profile_name = 'MyProfile',
                    @recipients = 'abc@companyname.com',
                    @subject = 'Test message',
                    @body = 'Congrats Database Mail Received By you Successfully.'

I have tried with my Gmail account profile on my local system and it works properly, but not with my company or Outlook profile.

Error message:

The mail could not be sent to the recipients because of the mail server failure. (Sending Mail using Account 5 . Exception Message: Could not connect to mail server. (A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond abc.j.i.ooo:pp). )

Reference

http://blogs.msdn.com/b/suhde/archive/2009/07/12/how-to-configure-sql-server-database-mail-to-send-email-using-your-windows-live-mail-account-or-your-gmail-account.aspx  

What would be the problem?
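
The per-message error detail in msdb usually names the exact SMTP failure (wrong server name, blocked port, or authentication); a quick look at the last few failures is often enough:

SELECT TOP (10) f.send_request_date, f.recipients, l.description
FROM msdb.dbo.sysmail_faileditems AS f
JOIN msdb.dbo.sysmail_event_log  AS l ON l.mailitem_id = f.mailitem_id
ORDER BY f.send_request_date DESC;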

Thanks

In the Vertica database, what is a namespace?

Posted: 11 Jun 2013 10:29 AM PDT

In the Vertica database, what does the term "namespace" mean?

I have reviewed the entire Vertica documentation and cannot find what this means.

createdb: could not connect to database postgres: FATAL: could not write init file

Posted: 11 Jun 2013 07:26 PM PDT

RedHat Enterprise Server 3.0 32 Bits

psql (PostgreSQL) 8.2.3

user: postgres

server is running:

/soft/postgres/8.2.3/bin/pg_ctl start
pg_ctl: another server may be running; trying to start server anyway
2013-05-09 11:23:07 BRST---:FATAL:  lock file "postmaster.pid" already exists
2013-05-09 11:23:07 BRST---:HINT:  Is another postmaster (PID 12810) running in data directory "/opt/psql/dba/bdadms/data1/pstg"?
pg_ctl: could not start server
Examine the log output.

I had just created a new database cluster with initdb; but when I run createdb:

8.2.3:postgres:pstg:>/soft/postgres/8.2.3/bin/createdb pstg
createdb: could not connect to database postgres: FATAL:  could not write init file
8.2.3:postgres:pstg:>/soft/postgres/8.2.3/bin/createdb postgres
createdb: could not connect to database template1: FATAL:  could not write init file
8.2.3:postgres:pstg:>/soft/postgres/8.2.3/bin/createdb template1
createdb: could not connect to database postgres: FATAL:  could not write init file

Any clues as to the cause and possible solutions to this problem?

Repeated values in group_concat

Posted: 11 Jun 2013 03:47 PM PDT

I have two tables; the first is food and the second is Activity:

INSERT INTO food
    (`id`, `foodName`)
VALUES
    (1, 'food1'),
    (2, 'food2'),
    (3, 'food3'),
    (4, 'food4'),
    (5, 'food5'),
    (6, 'food6');

CREATE TABLE Activity
    (`id` int, `place` varchar(14), `food_id` int, `timing` TIME, `date_and_time` DATETIME);

INSERT INTO Activity
    (`id`, `place`, `food_id`, `timing`, `date_and_time`)
VALUES
    (1, 'place1', 1, '10:30am', '2013-05-01'),
    (2, 'place1', 1, '12:30pm', '2013-05-01'),
    (3, 'place1', 1, '04:30pm', '2013-05-01'),
    (4, 'place2', 2, '10:30am', '2013-05-02'),
    (5, 'place2', 2, '12:30pm', '2013-05-02'),
    (6, 'place2', 2, '4:30pm', '2013-05-02'),
    (7, 'place1', 2, '10:30am', '2013-05-02'),
    (8, 'place1', 2, '12:30pm', '2013-05-02'),
    (9, 'place1', 2, '4:30pm', '2013-05-02'),
    (10, 'place2', 3, '10:30am', '2013-05-03'),
    (11, 'place2', 3, '12:30pm', '2013-05-03'),
    (12, 'place2', 3, '4:30pm', '2013-05-03');

For now I'm using the following query:

SELECT
    a.activity_type AS Activity,
    COUNT(DISTINCT p.id) AS Products,
    CONVERT(GROUP_CONCAT(p.category_id SEPARATOR ',  ') USING utf8) AS Categories
FROM
    food AS p
JOIN
    ( SELECT activity_type
           , prod_id
      FROM activity
      WHERE activity_type <> ''
      GROUP BY activity_type
             , prod_id
    ) AS a
    ON p.id = a.prod_id
GROUP BY
    activity_type
ORDER BY
    Products DESC ;

Could you please help me? I need output in the format below:

place  | food_id | Time1   | Time2   | Time3  | Date
-------+---------+---------+---------+--------+---------------------------
place1 | 1       | 10:30am | 12:30pm | 4:30pm | 2013-05-01 (MAX timestamp)
place2 | 1       | 10:30am | 12:30am | 4:30am | 2013-05-01 (MAX timestamp)
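
A sketch against the Activity data above: one row per place and food, the timings concatenated in order, and the latest date taken with MAX(). Splitting the timings into separate Time1/Time2/Time3 columns would then be done in the application or with additional expressions:

SELECT place,
       food_id,
       GROUP_CONCAT(timing ORDER BY id SEPARATOR ' | ') AS timings,
       MAX(date_and_time) AS latest_date
FROM Activity
GROUP BY place, food_id;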

Moving one TempdB on 3 instance server

Posted: 11 Jun 2013 08:53 PM PDT

I have a large virtual SQL Server (Full 2008R2).

I run 3 SQL instances and would like to relocate the TempDB data file to another location, splitting it off from the TempDB log file.

The trouble I am having is that, even though the ALTER DATABASE statement executes successfully, restarting that particular instance's SQL Server service does not relocate the database file.

Do I have to restart the whole server to move this database?
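
No full server restart should be needed; only the instance that owns this tempdb has to be restarted. A sketch of the usual sequence (the logical names are often tempdev/templog but should be checked first, and the paths below are illustrative; a common gotcha is using the wrong logical name or restarting a different instance's service):

-- check the current logical and physical names for this instance's tempdb
SELECT name, physical_name FROM tempdb.sys.database_files;

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'E:\TempDBData\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'F:\TempDBLog\templog.ldf');

-- restart only that instance's SQL Server service, then re-run the SELECT to verify,
-- and delete the old files by hand afterwards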

MySQL PDO Cannot assign requested address

Posted: 11 Jun 2013 01:26 PM PDT

Can someone help me with this error?

[08-Apr-2013 17:44:08 Europe/Berlin] PHP Warning:  PDO::__construct(): [2002] Cannot assign requested address (trying to connect via tcp://****:3306) in /var/www/***
[08-Apr-2013 17:44:08 Europe/Berlin] PHP Fatal error:  Uncaught exception 'PDOException' with message 'SQLSTATE[HY000] [2002] Cannot assign requested address' in /var/www/***

I have a server with a lot of connections per second; out of about 100 connections, a single one gets this error.

I've tried this recommendation from Stack Overflow; however, it does not solve my problem.
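
If the error appears only under a high rate of short-lived connections, it is usually local ephemeral-port exhaustion rather than a MySQL problem. One sketch of a mitigation is to reuse connections instead of opening a new TCP socket per request (DSN and credentials are placeholders):

$pdo = new PDO('mysql:host=127.0.0.1;dbname=mydb', $user, $pass, array(
    PDO::ATTR_PERSISTENT => true   // reuse an existing connection when available
));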

Alternative tools to export Oracle database to SQL Server?

Posted: 11 Jun 2013 03:26 PM PDT

I've got an Oracle database that I need to export (schema and data) to SQL Server.

I am trying the Microsoft SQL Server Migration Assistant for Oracle, but it is horribly slow, grossly inefficient, and very un-user-friendly. For example, I was having problems connecting to the SQL Server DB during data migration - but it still spent ~5 minutes preparing all the data before attempting a connection to SQL Server, so when it failed, the 5 minutes of preparatory work were wasted.

Right now, I'm just trying to connect to another Oracle DB using this tool, I left it overnight and came back this morning, and it's still stuck on 19% of "Loading objects..." And this is on a machine with a good 18GB RAM, of which maybe 8.5 GB currently in use. Task Manager shows me that SSMAforOracle.exe is using 0 CPU, 0 PF Delta, and no change whatsoever in memory usage. In other words: frozen stiff. Absolutely intolerable.

Are there any other tools out there that can migrate an Oracle DB to SQL Server a little more efficiently?

Performing SELECT on EACH ROW in CTE or Nested QUERY?

Posted: 11 Jun 2013 06:19 PM PDT

This is a problem in PostgreSQL.

I have a table which stores a tree of users:

      +------+---------+
      |  id  | parent  |
      +------+---------+
      |  1   |   0     |
      |  2   |   1     |
      |  3   |   1     |
      |  4   |   2     |
      |  5   |   2     |
      |  6   |   4     |
      |  7   |   6     |
      |  8   |   6     |
      +------+---------+

I can query a complete tree from any node by using the connectby function, and I can separately query the size of a tree in terms of the total nodes in it, for example:

tree for #1 has size 7
tree for #5 has size 0
tree for #6 has size 2, and so on

Now I want to do something like selecting all possible trees from this table (again via connectby), count the size of each, and create another data set with records of ID and the size of the underlying tree, like this:

      +------------------+-------------+
      |  tree_root_node  |  tree_size  |
      +------------------+-------------+
      |      1           |     7       |
      |      2           |     3       |
      |      3           |     0       |
      |      4           |     3       |
      |      5           |     0       |
      |      6           |     2       |
      |      7           |     0       |
      |      8           |     0       |
      +------------------+-------------+

The problem is, I am unable to perform the same SELECT statement for every available row in the original table in order to fetch the tree and calculate the size, and even if I could, I don't know how to create a separate data set using the fetched and calculated data.

I am not sure whether this could be a simple use of some functions available in Postgres, or whether I'd have to write a function for it; I simply don't know what this kind of query is called, but googling for hours and searching for another hour here at dba.stackexchange returned nothing.

Can someone please point me in the right direction?
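
If you are on PostgreSQL 8.4 or later, one sketch that avoids calling connectby once per row is a recursive CTE that enumerates every (root, descendant) pair and then counts per root (the table name users is assumed from the layout above):

WITH RECURSIVE subtree AS (
    SELECT id AS root, id AS node
    FROM users
  UNION ALL
    SELECT s.root, u.id
    FROM subtree s
    JOIN users u ON u.parent = s.node
)
SELECT root         AS tree_root_node,
       COUNT(*) - 1 AS tree_size      -- subtract the root itself
FROM subtree
GROUP BY root
ORDER BY root;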

List all permissions for a given role?

Posted: 11 Jun 2013 08:09 PM PDT

I've searched around all over and haven't found a conclusive answer to this question.

I need a script that can give ALL permissions for an associated role.

Any thoughts, or is it even possible?

This gets me CLOSE - but I can't seem to flip it around and give the summary for roles, rather than users.

http://consultingblogs.emc.com/jamiethomson/archive/2007/02/09/SQL-Server-2005_3A00_-View-all-permissions--_2800_2_2900_.aspx

WITH perms_cte as
(
        select USER_NAME(p.grantee_principal_id) AS principal_name,
                dp.principal_id,
                dp.type_desc AS principal_type_desc,
                p.class_desc,
                OBJECT_NAME(p.major_id) AS object_name,
                p.permission_name,
                p.state_desc AS permission_state_desc
        from    sys.database_permissions p
        inner   JOIN sys.database_principals dp
        on      p.grantee_principal_id = dp.principal_id
)
--role members
SELECT rm.member_principal_name, rm.principal_type_desc, p.class_desc,
       p.object_name, p.permission_name, p.permission_state_desc, rm.role_name
FROM   perms_cte p
right outer JOIN (
    select role_principal_id, dp.type_desc as principal_type_desc,
           member_principal_id, user_name(member_principal_id) as member_principal_name,
           user_name(role_principal_id) as role_name --,*
    from    sys.database_role_members rm
    INNER   JOIN sys.database_principals dp
    ON      rm.member_principal_id = dp.principal_id
) rm
ON     rm.role_principal_id = p.principal_id
order by 1

Named Pipe Provider Error code 40

Posted: 11 Jun 2013 06:26 PM PDT

I have literally tried everything, from enabling named pipes to adding exceptions for the ports in the firewall, to everything possible in the Surface Area Configuration. I can connect to the SQL instance (using TCP and Named Pipes) with SQL Server Management Studio. But sqlcmd throws an error:

Login timeout expired  

Help!
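
A couple of sketches that name the protocol and instance explicitly often isolate whether the failure is protocol-specific or an instance-resolution problem (server and instance names below are placeholders):

sqlcmd -S tcp:MYSERVER,1433 -E -Q "SELECT @@SERVERNAME"
sqlcmd -S np:\\MYSERVER\pipe\MSSQL$MYINSTANCE\sql\query -E -Q "SELECT @@SERVERNAME"
sqlcmd -S lpc:MYSERVER\MYINSTANCE -E -Q "SELECT @@SERVERNAME"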

ERWIN create and use database

Posted: 11 Jun 2013 11:29 AM PDT

I'm trying to generate a physical database from my logical model using ERwin 7.

All is well in the script that is generated automatically; the only problem is that the model objects are created in the "master" database, which is the SQL Server default, rather than in the database that I created along with the model.

I can't find a way to associate my model with the database I'm creating along with the model.

Here is the script that ERWIN auto-generates:

CREATE DATABASE Movies
go
ALTER DATABASE Movies SET RECOVERY FULL
go
CREATE SCHEMA fbd AUTHORIZATION dbo
go
CREATE DEFAULT Default_Value_movies_gen AS 1
go
CREATE DEFAULT Default_Value_titulo_movies AS 'No title'
go

But it should automatically generate:

CREATE DATABASE Movies
go
-- Here there should be "USE Movies", to use the database that ERwin has just created
ALTER DATABASE Movies SET RECOVERY FULL
go
CREATE SCHEMA fbd AUTHORIZATION dbo
go
CREATE DEFAULT Default_Value_movies_gen AS 1
go
CREATE DEFAULT Default_Value_titulo_movies AS 'No title'
go

Any ideas?

Restore SQL Server 2012 backup to a SQL Server 2008 database?

Posted: 11 Jun 2013 08:23 PM PDT

Is there a way to restore a SQL Server 2012 database backup to a SQL Server 2008 instance?

I tried to attach the file; it does not work.

How to recover MySQL table structure from FRM files

Posted: 11 Jun 2013 03:48 PM PDT

What I have learned from Google searching:

With MyISAM tables you can simply copy the FRM, MYD, and MYI files. In the folder for the DB I'm trying to recover, there are only FRM files and a "db.opt" file, so they must not be MyISAM tables.

Situation:

My computer crashed, but the data is still on the hard drive. The MySQL 5.5.8 server was installed with WAMP on the crashed computer.

I have tried copying the entire data folder over, but MySQL only creates a blank database - no tables.

There is an FRM file for each of the tables I wish to recover. However, they are not readable text files. Is there any way to recover the table structure from these files?

This post http://stackoverflow.com/a/7688688/1188138 indicates that the structure is in fact in those files, but does not provide a method of retrieval.
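
One observation, hedged as an assumption about your setup: .frm files plus db.opt with no .ibd files means these are InnoDB tables created with innodb_file_per_table off, so both the data dictionary entries and the row data live in the shared ibdata1 file. Copying only the database folder can therefore never bring the tables back; the system tablespace and redo logs have to come across together. A sketch with illustrative WAMP-style paths:

rem stop MySQL first, then copy the shared tablespace and redo logs along with the db folder
copy  D:\old_disk\wamp\bin\mysql\mysql5.5.8\data\ibdata1      C:\wamp\bin\mysql\mysql5.5.8\data\
copy  D:\old_disk\wamp\bin\mysql\mysql5.5.8\data\ib_logfile*  C:\wamp\bin\mysql\mysql5.5.8\data\
xcopy D:\old_disk\wamp\bin\mysql\mysql5.5.8\data\mydb         C:\wamp\bin\mysql\mysql5.5.8\data\mydb\ /E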

Thank you so much - I have tried everything I can think of.

INFORMATION_SCHEMA Selection Error

Posted: 11 Jun 2013 02:17 PM PDT

I'm trying to select data from information_schema but I'm getting the following error. How can I fix this?

mysql> SELECT * FROM information_schema.tables ;
ERROR 1018 (HY000): Can't read dir of '.' (errno: 13)
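
errno 13 is a filesystem "permission denied": the server cannot read its own data directory, which information_schema.tables has to scan. A sketch of the usual checks on Linux (the path assumes the default datadir):

ls -ld /var/lib/mysql
chown -R mysql:mysql /var/lib/mysql
chmod 755 /var/lib/mysql
# on SELinux systems, also:
restorecon -R /var/lib/mysql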

Moving tables to another SQL2008 database (including indexes, triggers, etc.)

Posted: 11 Jun 2013 09:17 AM PDT

I need to move a whole bunch (100+) of large (millions of rows) tables from one SQL2008 database to another.

I originally just used the Import/Export Wizard, but all the destination tables were missing primary and foreign keys, indexes, constraints, triggers, etc. (Identity columns were also converted to plain INTs, but I think I just missed a checkbox in the wizard.)

What's the right way to do this?

If this were just a couple of tables, I would go back to the source, script out the table definition (with all indexes, etc), then run the index creation portions of the script on the destination. But with so many tables, this seems impractical.

If there wasn't quite so much data, I could use the "Create Scripts..." wizard to script out the source, including data, but a 72m row script just doesn't seem like a good idea!
