Friday, April 12, 2013

[how to] Modeling issue with derived types

Modeling issue with derived types

Posted: 12 Apr 2013 08:09 PM PDT

I have a superclass Region and its derived classes Origine and Destination as follows:

    public class Region
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public virtual List<Agent> Agents { get; set; }
    }

    public class Origine : Region
    {
        public virtual ICollection<OrigineAgent> OrigineAgents { get; set; }
        public virtual ICollection<Destination> Destinations { get; set; }
    }

    public class Destination : Region
    {
        public virtual Origine Origine { get; set; }
    }

OrigineAgent derives from Agent:

    public class Agent
    {
        public int Id { get; set; }
    }

    public class OrigineAgent : Agent
    {
        public bool HasNetwork { get; set; }
    }

When I finished modeling this, I ended up with the Origine type having two lists: Agents and OrigineAgents.
What do I have to change in my model to avoid this repetition?

MySQL error creating table

Posted: 12 Apr 2013 05:26 PM PDT

I was creating a database with MySQL, and this error appeared:

Can't create table 'projectefinal.apuntats' (errno: 150)

I've got this:

    CREATE DATABASE IF NOT EXISTS projectefinal

    CREATE TABLE IF NOT EXISTS usuaris(
        id INT NOT NULL AUTO_INCREMENT,
        nom TEXT NOT NULL,
        cognoms TEXT NOT NULL,
        dni VARCHAR(9) NOT NULL,
        aturat BOOLEAN NOT NULL,
        PRIMARY KEY(id),
        UNIQUE(dni)
    );

    CREATE TABLE IF NOT EXISTS cursos(
        id INT NOT NULL AUTO_INCREMENT,
        nom TEXT NOT NULL,
        maximspersones INT NOT NULL,
        PRIMARY KEY(id)
    );

    CREATE TABLE IF NOT EXISTS apuntats(
        id INT NOT NULL AUTO_INCREMENT,
        idcurs INT NOT NULL,
        idusuari INT NOT NULL,
        coneixements TEXT,
        motiu TEXT,
        FOREIGN KEY(idcurs)
            REFERENCES id(curs)
            ON DELETE CASCADE,
        FOREIGN KEY(idusuari)
            REFERENCES id(usuaris)
            ON DELETE CASCADE,
        PRIMARY KEY(id)
    );

I've read before that this may happen because a referenced table is not created, but it is; I created them one by one ("usuaris" first, "cursos" second, and finally "apuntats") and it still gave me that error. Any help on how to fix it?

Thanks again, as always!
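For reference, errno 150 usually means MySQL could not build a foreign key, and the REFERENCES clauses above have the table and column swapped. A minimal corrected sketch of the apuntats table, assuming the intent is to point idcurs at cursos.id and idusuari at usuaris.id:

    CREATE TABLE IF NOT EXISTS apuntats(
        id INT NOT NULL AUTO_INCREMENT,
        idcurs INT NOT NULL,
        idusuari INT NOT NULL,
        coneixements TEXT,
        motiu TEXT,
        PRIMARY KEY(id),
        -- the syntax is REFERENCES table(column), not REFERENCES column(table)
        -- note: foreign keys are only enforced by the InnoDB engine
        FOREIGN KEY(idcurs) REFERENCES cursos(id) ON DELETE CASCADE,
        FOREIGN KEY(idusuari) REFERENCES usuaris(id) ON DELETE CASCADE
    );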

MySQL: OPTIMIZE after TRUNCATE?

Posted: 12 Apr 2013 06:50 PM PDT

Using MySQL (either InnoDB or MyISAM tables -- I have both), if I TRUNCATE a table and then LOAD DATA INFILE, should I issue an OPTIMIZE TABLE call after the LOAD? It's unclear to me whether this is a helpful action.

Service Broker - Communication not happening between servers despite ssbdiagnose saying all is well

Posted: 12 Apr 2013 05:20 PM PDT

I am setting up Service Broker communication between two servers, call them A and B. I used this article as an example:

http://blogs.msdn.com/b/sql_service_broker/archive/2008/08/26/securing-a-dialog-with-certificates.aspx?Redirected=true

A is SQL 2005, B is SQL 2012. It doesn't work, and I have not been able to track down a good reason why. What I can see is:

The Profiler on the receiving end doesn't even show any Service Broker events, indicating nothing is getting to it.

But using ssbdiagnose as below, it says congratulations, you have no errors:

    C:\Users\me>ssbdiagnose -level info -E configuration from service ServerAServiceTest -S ServerA -d MyDB to Service ServerBServiceTest -S ServerB -d MyDB on Contract myContract

    Microsoft SQL Server 11.0.2100.60
    Service Broker Diagnostic Utility
    0 Errors, 0 Warnings

Also, in the transmission queue, the only thing that seems particularly off is that 'to_broker_instance' is null, even though I explicitly specified that information when setting up the route.

Further, no errors are showing up in transmission_status.

Also, SQL Server error logs are shedding no light.

As for firewall issues, well, these are test servers not accessible from the outside so I tried turning the firewalls off altogether.

One thing that is bothersome: I keep getting these:

An exception occurred while enqueueing a message in the target queue. Error: 15581 State: 7. Please create a master key in the database or open the master key in the session before performing this operation.

I do open the key and these go away. But I shouldn't have to repeatedly open the key every time I want to do something, should I? I suspect this is part of the problem, even though, as mentioned, the errors go away.
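For what it's worth, error 15581 stops recurring once the database master key can be decrypted automatically by the service master key. A minimal sketch, assuming the master key already exists and its password is known:

    -- run in the database that raises error 15581
    OPEN MASTER KEY DECRYPTION BY PASSWORD = 'the master key password';
    ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;  -- lets SQL Server open it automatically from now on
    CLOSE MASTER KEY;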

Sorry for the somewhat open question - even some help identifying where to get more informative errors or debugging info would be great. This is new territory for me.

How do I optimize a large table so queries that target only recent data perform optimally?

Posted: 12 Apr 2013 01:49 PM PDT

So I have this table that is ever growing. Most queries target just recent data, say up to one month old. I suppose this is a common problem, but I have no idea how it can be solved.

I am open to changing the design, or to any mechanism in SQL Server that solves this. I have limited options to try different solutions, as the database is in production and it's hard to reproduce.

    CREATE TABLE [dbo].[mydata](
        [ID] [int] IDENTITY(1,1) PRIMARY KEY NOT NULL,
        [Code] [varchar](20) NOT NULL,   -- index1 UNIQUE NONCLUSTERED INDEX
        [Data2] [varchar](20) NULL,
        [Data3] [nvarchar](50) NOT NULL,
        ... bunch of DATA around 5kb
        [Time_1] [datetime] NULL,    -- time created, -- index2 NONCLUSTERED INDEX
        [Time_2] [datetime] NULL,    -- time finished (usually within a few days) -- index3 NONCLUSTERED INDEX
        [Status] [int] NOT NULL,     -- active
        [Modid] [timestamp] NOT NULL
    )
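One option worth sketching here (SQL Server 2008 and later) is a filtered index that covers only recent rows, so queries against the last month scan a much smaller structure. The predicate must be a literal, so the index has to be recreated periodically with a newer cutoff; the columns below are taken from the table above, and the date literal is an assumption:

    -- a minimal sketch: rebuild periodically (e.g. monthly) with a more recent cutoff date
    CREATE NONCLUSTERED INDEX IX_mydata_recent
        ON [dbo].[mydata] ([Time_1])
        INCLUDE ([Code], [Status])
        WHERE [Time_1] >= '20130301';

Table partitioning on Time_1 is the heavier-weight alternative if the table keeps growing.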

Shadow paging and MVCC

Posted: 12 Apr 2013 11:35 AM PDT

If I understand it correctly, in an MVCC database, when someone updates some data, the old version is kept as-is and all modifications are made on a copy of the data. So what is the difference from the shadow paging mechanism? Citing Wikipedia:

Shadow paging is a copy-on-write technique for avoiding in-place updates of pages. Instead, when a page is to be modified, a shadow page is allocated. Since the shadow page has no references (from other pages on disk), it can be modified liberally, without concern for consistency constraints, etc. When the page is ready to become durable, all pages that referred to the original are updated to refer to the new replacement page instead. Because the page is "activated" only when it is ready, it is atomic.

How to identify computer specifications

Posted: 12 Apr 2013 10:07 AM PDT

We need to assemble a system that can store streaming data from a dozen sensors, including several cameras, averaging >5 Gb / minute. The desired data storage system is a single machine running a DBMS in the back of a car, so we have some significant space limitations. The sensors may or may not be network sensors (i.e. they will be in physical proximity to the data storage system, so they could be connected to the same computer if it can handle the computational load). Reading the data back out from the DB at this time is not that important, as there are two distinct time phases to this project: (1) data collection, and then (2) reading. Worst case scenario, we can pull the data after the data collection and put it into another database, but it would be better to have limited reading available during the writes in order to verify the data quality.

My prior database experience is with SQL Server, but with much lower write requirements. Others have suggested using Cassandra instead for this type of an application. In either case, I've no real idea of how to identify the necessary hardware specifications. I could try to search for a hard drive with fast enough write speeds, and then add LOTS of memory - but surely there is a more methodical way of approaching the problem.

Can anybody make suggestions on how to design a computer to support a database system? Links to other successful systems, or better forums for asking this question would be appreciated.

Thank you.

Extra steps after changing storage engine and adding indexes

Posted: 12 Apr 2013 12:50 PM PDT

Someone asked me to look into his website for some quick optimization; I'm a programmer and I don't have much experience optimizing databases.

I have a PHP/MySQL site that uses the MyISAM storage engine. It doesn't have any indexes on its columns.

I want to change the engine to InnoDB and add indexes to the columns.

Some tables have a couple of hundred thousand rows, so it's not a very small database.

My question is mostly about the data that is already in the database. Do I need to do anything after these changes to make the already-stored data aware of them, or to make it compatible with these changes?
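For context, both operations rewrite the table and apply to every row already stored, so no separate conversion step is needed. A minimal sketch of the statements involved; the table and column names here are hypothetical:

    ALTER TABLE orders ENGINE = InnoDB;                        -- rewrites all existing rows in the InnoDB format
    ALTER TABLE orders ADD INDEX idx_customer (customer_id);   -- builds the index over the data already stored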

Recursive query in MySQL

Posted: 12 Apr 2013 04:30 PM PDT

I am using the GetAncestry function found at this post. The problem is that when I try to use it in a SELECT query, MySQL hangs. I am not sure why this happens; my MySQL version is 5.5.16. Any help is appreciated.

    DELIMITER $$
    DROP FUNCTION IF EXISTS `junk`.`GetAncestry` $$
    CREATE FUNCTION `junk`.`GetAncestry` (GivenID INT) RETURNS VARCHAR(1024)
    DETERMINISTIC
    BEGIN
        DECLARE rv VARCHAR(1024);
        DECLARE cm CHAR(1);
        DECLARE ch INT;

        SET rv = '';
        SET cm = '';
        SET ch = GivenID;
        WHILE ch > 0 DO
            SELECT IFNULL(parent_id,-1) INTO ch FROM
            (SELECT parent_id FROM pctable WHERE id = ch) A;
            IF ch > 0 THEN
                SET rv = CONCAT(rv,cm,ch);
                SET cm = ',';
            END IF;
        END WHILE;
        RETURN rv;
    END $$
    DELIMITER ;
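One thing worth checking (a guess, not a confirmed diagnosis): in MySQL, a SELECT ... INTO that matches no row leaves the target variable unchanged, so if the walk ever reaches an id that does not exist in pctable, ch never changes and the WHILE loop spins forever. A defensive sketch of that step inside the loop:

    -- replace the SELECT ... INTO with a scalar subquery so that "no matching row" yields NULL;
    -- NULL > 0 is not true, so WHILE ch > 0 then terminates instead of looping forever
    SET ch = (SELECT IFNULL(parent_id, -1) FROM pctable WHERE id = ch);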

Best way to perform backups with filegroups and then restore those backups

Posted: 12 Apr 2013 03:43 PM PDT

Scenario:

Database consists of these file groups:

    PRIMARY
    xxTEST2
    xxTEST3
    xxTEST4

Database consists of the following files in those file groups:

    xxTEST_RESTORE     - Rows Data - PRIMARY - xxTEST_RESTORE.mdf
    xxTEST_RESTORE2    - Rows Data - xxTEST2 - xxTEST_RESTORE2.ndf
    xxTEST_RESTORE3    - Rows Data - xxTEST3 - xxTEST_RESTORE3.ndf
    xxTEST_RESTORE4    - Rows Data - xxTEST4 - xxTEST_RESTORE4.ndf
    xxTEST_RESTORE_log - Log       - N/A     - xxTEST_RESTORE_log.ldf

Two tables are created with rows:

    dbo.COUNTRIES
    dbo.CUSTOMERS

Then a full database backup is performed, and rows are added to the Customers table.

Question:

  1. Want to add 1 record and have a backup of this

  2. Want to add a 2nd record and have a backup of that

  3. Want to roll back to the previous time when the 1st record was added (Q #1 above)

    • Should I perform a TRANS LOG backup after adding the 1st record or a FileGroup backup?

    • What order of backups should I perform to ensure I can roll back properly? I keep getting errors that either the database is stuck in a restoring state, or it doesn't like the transaction log I pick, etc.

I am looking for the proper sequence for the backups and then the restore.
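A minimal sketch of one sequence that supports this kind of point-in-time rollback, assuming the full recovery model, the full backup already taken above, and placeholder file paths:

    -- after adding the 1st record
    BACKUP LOG xxTEST_RESTORE TO DISK = N'C:\Backups\xxTEST_RESTORE_log1.trn';
    -- after adding the 2nd record
    BACKUP LOG xxTEST_RESTORE TO DISK = N'C:\Backups\xxTEST_RESTORE_log2.trn';

    -- to roll back to the point where only the 1st record existed:
    RESTORE DATABASE xxTEST_RESTORE FROM DISK = N'C:\Backups\xxTEST_RESTORE_full.bak'
        WITH NORECOVERY, REPLACE;
    RESTORE LOG xxTEST_RESTORE FROM DISK = N'C:\Backups\xxTEST_RESTORE_log1.trn'
        WITH RECOVERY;   -- stop here; do not apply the 2nd log backup

Filegroup backups are a separate concern (piecemeal backup/restore of individual filegroups) and are not required for this kind of rollback.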

Why are constraints applied in the database rather than in code?

Posted: 12 Apr 2013 12:29 PM PDT

Why are constraints applied in the database? Wouldn't it be more flexible to put them in code?
I'm reading a beginner's book on implementing databases, so this is a beginner's question!

Let's say I design a database model, Which has this entity model:

    Person, has subtypes: Employee, Student
    Student's subtypes: Graduate, Undergraduate
    Employee's subtypes: Teacher, Administrator

A registered person in the system can only be a Student or an Employee. The Person entity requires uniqueness of the social security number (of which every person has only one, of course).

If one day the college decides that teachers (an Employee subtype) can also be students, taking courses in their free time (it's very improbable, but I can't think of anything else right now), it's much harder to change a database design that could have thousands of entries than to change the logic in code that didn't allow a person to be registered both as a student and as an employee.

Why do we care about business rules in database design rather than in code?

How does a MySQL transaction work?

Posted: 12 Apr 2013 05:22 PM PDT

I already know how to use transactions; what I want to know is how a MySQL transaction handles the data being processed...

For example, consider the following:

  1. I have 2 databases (db1 and db2), and both databases are geographically far from each other, more like cities away.

  2. db1 has a table named tbl_orig.

  3. db2 has a table named tbl_backup.

  4. I want to use a MySQL transaction when copying the table from db1 (tbl_orig) into db2 (tbl_backup), so that in case of a power outage or anything else that would interrupt the process, my database is prevented from ending up in an unstable state.

Say I use the following commands below:

    mysql> start transaction;
    mysql> insert into db2.tbl_backup select * from db1.tbl_orig;
    mysql> commit;

In the above SQL statements, between the "start transaction" and the "commit" lines, what exactly is happening to the rows indicated in the insert statement?

Do all of the rows affected by the insert statement get transferred from db1 into db2 immediately, waiting for the commit line to execute in order to make the insert official?

Or do the rows affected by the insert statement stay inside db1, waiting to be transferred into db2 once the "commit" command executes?

I want to know these facts because I will be copying data from a database over the internet, and I'm worried that I might lose some data in the process... Any help would be greatly appreciated.

What is the best SQL Login Role for the following scenario?

Posted: 12 Apr 2013 10:20 AM PDT

I want to create a login for a new user who could only create and manage their own databases. Other databases on the server should be read-only to that user. What would be a good set of roles/permissions to use to implement this?

Thank you for your help!

P.S. I am using SQL Server 2008 R2.
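One possible starting point, sketched for SQL Server 2008 R2 (the login name is hypothetical, and read-only access still has to be granted per existing database):

    CREATE LOGIN AppDev WITH PASSWORD = 'Str0ngP@ssword!';

    -- members of dbcreator can create databases and become the owner (dbo) of the databases they create
    EXEC sp_addsrvrolemember @loginame = 'AppDev', @rolename = 'dbcreator';

    -- repeat for each existing database that should be read-only to this login
    USE SomeExistingDb;
    CREATE USER AppDev FOR LOGIN AppDev;
    EXEC sp_addrolemember @rolename = 'db_datareader', @membername = 'AppDev';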

The 'Data-based filtering for mining models' feature is not included error

Posted: 12 Apr 2013 12:19 PM PDT

The 'Data-based filtering for mining models' feature is not included in SKU Standard 64 bit edition.

I have upgraded to Enterprise Edition, but I still get the error.

Why is that? Is there something I have to do so SSAS finds out it's actually an Enterprise edition?

Recommended Approach to Programmatically Backing Up/Restoring Oracle DB

Posted: 12 Apr 2013 10:20 AM PDT

Supposing I have an ever-growing Oracle DB on one server, and I want to duplicate the schema and the data on another server - what would be the best approach to achieving this as part of a bespoke .NET app?

What I've Tried/Researched:

  • RMAN - seems to be the most appropriate technology available for writing batch files to do a backup but I don't feel as though I'd have as much control over it as if it were developed from scratch in C#.
  • PL/SQL - I could use stored procs to export data to csv and combine it with C#?
  • DataReaders and official Oracle APIs - Seems to me like the most favourable approach as it is what I'm most familiar with - but am I reinventing the wheel, given that the above two technologies have the power to create these backups?
  • 'OOTB' oracle executables - Taking oracle database dump via c# .net http://rockingtechnology.blogspot.co.uk/2011/06/oracle-backup-and-restore-code-in-cnet.html

Windows native backups make SQL Server think that a database backup has been done

Posted: 12 Apr 2013 12:33 PM PDT

Hello, community. I have Windows Server 2008 R2 hosting a few virtual machines. One of the machines is a Windows 2008 R2 server with SQL Server 2008 R2 Express installed. Here's the thing: I wrote a script that backs up the databases at 05:00 AM every day. Mon, Wed, Fri: full backups. Tue, Thu, Sat, Sun: transaction log backups.

A few days ago I had to try to restore data from the backups and I couldn't. I received an error message that my transaction log is too recent to use. It was a Tuesday, so basically I had to restore Monday's full backup and Tuesday's early-morning transaction log backup.

I started to research the cause of this and soon discovered that every day at 04:00 AM and 11:00 PM SQL Server backs up all the databases to some VIRTUAL_DEVICE with two different SIDs. I realized that Windows Server Backup starts at 11:00 PM on that particular virtual machine; the backup contains only bare-metal recovery + system state + C:. Later I understood where the 04:00 AM backup comes from. It's a similar story: at 04:00 AM a backup of the Hyper-V host starts with the same parameters (it's strange, but SQL Server somehow realizes that the machine is being backed up).

So we have:

  • Backup of Virtual machine at 11:00PM
  • Backup of Hyper-V host at 04:00AM

After the procedures described above, the event log receives a message that SQL Server has been backed up. But there's no physical place where those "backups" are stored. Since SQL Server thinks it has been backed up, it changes the LSNs, so correct transaction log backups can't really be restored after another correct full backup.

Please note: the Hyper-V host and the virtual machine itself are backed up to a dedicated backup volume. After assigning a letter to that volume I can see the VHDs of the disks that were backed up, and there is no sign of any SQL database backups inside.

The main problem is how to stop SQL Server from reacting to these system backups.
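For whoever looks into this, a sketch of a query against msdb that lists which backups were taken through a virtual device (the VSS/Windows Server Backup path) rather than by the script:

    SELECT bs.database_name,
           bs.backup_start_date,
           bs.type,                      -- D = full, L = log
           bs.is_snapshot,               -- 1 for VSS snapshot backups
           bmf.physical_device_name
    FROM msdb.dbo.backupset AS bs
    JOIN msdb.dbo.backupmediafamily AS bmf
      ON bs.media_set_id = bmf.media_set_id
    WHERE bmf.device_type = 7            -- 7 = virtual device
    ORDER BY bs.backup_start_date DESC;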

Looking forward to your replies, and thanks in advance. Alexey.

High amount of Read Misses and Pages To Be Flushed

Posted: 12 Apr 2013 05:16 PM PDT

I am running a MySQL database backend for a Moodle installation, and after a few months performance really starts to suffer (up to 30 seconds for some pages to load). Investigating the InnoDB buffer pool, I found that the buffer pool size seemed to be correct (innodb_buffer_pool_wait_free = 0). However, I also found that I have an exceedingly high percentage of read misses (52%) and what seems like a rather large number of pages to be flushed (31 million). I'm currently running the slow query log, but the lag on page loads seems like too much to come from simply an unoptimized query.

I haven't been able to find any explanation of why these could both be so high. Does anybody have an explanation for why Read Misses and Pages To Be Flushed would show those values?

Update: I am restarting the servers on a weekly basis during a scheduled down period. I still cannot imagine why this is getting so large. Is there no auto-flush mechanism built in?
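For anyone comparing numbers, a sketch of the status counters these figures come from: the read-miss rate is Innodb_buffer_pool_reads (reads that had to go to disk) divided by Innodb_buffer_pool_read_requests (all logical reads), and dirty pages are flushed continuously in the background rather than waiting for a manual flush:

    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';  -- logical reads
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';          -- logical reads satisfied from disk (misses)
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';    -- pages modified but not yet flushed
    SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';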

Main Considerations When Moving From MS Access Programming to SQL Server

Posted: 12 Apr 2013 12:08 PM PDT

First post, be gentle...

I am a 100% self-taught MS Access programmer (the main part of my job is programming), and I am now building larger databases; still using MS Access as the UI, but SQL Server to store all the data and do more of the 'work'.

In essence my question is; what subject matters do I need to know for SQL Server that I probably didn't learn or need when using Access?

Not looking for you to tell me how to do anything, more what you think are the most important things I should go and research - there are a lot of subjects and a hell of a lot of detail, and I don't want to find myself a long way down a less valuable path...

Brain dump:

  • Maintenance: what are the most important (check database, reduce database, update statistics, rebuild etc)
  • Indexes - I don't know as much as I should, is there a good book/ blog etc that can teach me the basics upwards?
  • Anything else I have missed (there's probably lots, as I said I am new to SQL Server...)

If it helps, I work for a mid-sized retailer, and the databases I predominantly work on cover such things as:

  • Reporting platform (summarises sales/ receipts/ inventory etc from main system and provides fast reporting)
  • A reconciliation tool between a third party and what our stores put through the registers (imports data from the third party and cross-references the transaction logs)
  • Stores all data to do with our promotions; product, prices, projections, actual results etc

Thanks in advance, Simon

MySQL PDO Cannot assign requested address

Posted: 12 Apr 2013 12:47 PM PDT

Can someone help me with this error?

    [08-Apr-2013 17:44:08 Europe/Berlin] PHP Warning:  PDO::__construct(): [2002]
    Cannot assign requested address (trying to connect via tcp://****:3306) in
    /var/www/***
    [08-Apr-2013 17:44:08 Europe/Berlin] PHP Fatal error:  Uncaught exception
    'PDOException' with message 'SQLSTATE[HY000] [2002] Cannot assign requested
    address' in /var/www/***

I have a server with a lot of connections per second; out of about 100 connections, a single one gets this error.

I've tried this recommendation from Stack Overflow; however, it does not solve my problem.

What must be in place to validate an XMLTYPE against a schema?

Posted: 12 Apr 2013 12:03 PM PDT

I have a procedure that generates an XMLTYPE and I want to validate it against a schema. The problem is that there seems to be a permissions issue running createSchemaBasedXML because when I run the procedure as AUTHID DEFINER it gives the error "ORA-31050: Access denied", but when I run it as AUTHID CURRENT_USER it actually returns a validation specific error (I'll deal with that separately). CURRENT_USER is not an acceptable solution.

My supposition is that CURRENT_USER works because the user has the XMLADMIN role. Granting the permissions the role includes does not resolve the issue, so it must be the role's ability to bypass the ACLs.

The thing is, querying RESOURCE_VIEW for the ACL that protects the resource shows that it is protected by /sys/acls/all_owner_acl.xml. DBMS_XDB.getPrivileges shows that the xsd has all the following permissions:

    <read-properties/>
    <read-contents/>
    <write-config/>
    <link/>
    <unlink/>
    <read-acl/>
    <write-acl-ref/>
    <update-acl/>
    <resolve/>
    <link-to/>
    <unlink-from/>
    <dav:lock/>
    <dav:unlock/>
    <dav:write-properties/>
    <dav:write-content/>
    <dav:execute/>
    <dav:take-ownership/>
    <dav:read-current-user-privilege-set/>

Using DBMS_XDB.getAclDocument shows a principal of dav:owner has all privileges, so that must not be enough to allow the owner of the schema to create schema based XML. With this thought in mind I created a block to run DBMS_XDB.createResource creating a new ACL. I can successfully create the ACL and from SQLDeveloper I can see that it exists in the location I created it in.

There are any number of places I could be going wrong in this process, so the core of what I am looking for is this:

What must be in place to validate an XMLTYPE against a schema?

=== Update 4/3/2013 ===
I can set the ACL for my xsd file to /sys/acls/all_all_acl.xml and back to /sys/acls/all_owner_acl.xml. As before, neither of these resolves the permissions issue. I also tried a createResource using the ACL definition copied from all_owner_acl.xml and using the same path as those files, /sys/acls/. This at least successfully sets the ACL. I have run it for all my XSDs and queried RESOURCE_VIEW to confirm that they are set to the new ACL. After all this, though, createSchemaBasedXML still gives an access denied error. Is my ACL correct, or might there be another issue?

    <acl xmlns="http://xmlns.oracle.com/xdb/acl.xsd"
         xmlns:dav="DAV:"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
                             http://xmlns.oracle.com/xdb/acl.xsd" shared="true">
      <ace>
        <grant>true</grant>
        <principal>MY_ORACLE_USER</principal>
        <privilege>
          <all/>
        </privilege>
      </ace>
    </acl>

=== Update 4/9/2013 ===
I have an anonymous block that can successfully validate XML based on a schema. This again points to a permissions issue that allows this to work when roles are enabled, but not when they are not available.

=== Update 4/12/2013 ===
Every bit of evidence I get seems to indicate that there is something wrong with my ACL, or perhaps more likely with the way I set the ACL. Taking the XDBADMIN role away from the user causes the anonymous block to fail with an access denied error, even though I have granted all the permissions the role gives according to dba_tab_privs. My setACL call follows this form:

DBMS_XDB.setACL('/sys/schemas/MY_ORACLE_USER/account.xsd', '/sys/acls/acl_acc.xml');  

A complete test case can be found on Oracle Communities.

1286 - Unknown storage engine 'InnoDB'

Posted: 12 Apr 2013 02:01 PM PDT

I am trying to use Roundcube and it recently just broke. I don't know whether this is due to a recent MySQL update or not, but in phpMyAdmin I get the following error if I try to view a table:

1286 - Unknown storage engine 'InnoDB'  

and

    mysql> SHOW ENGINES;
    +--------------------+---------+----------------------------------------------------------------+--------------+------+------------+
    | Engine             | Support | Comment                                                        | Transactions | XA   | Savepoints |
    +--------------------+---------+----------------------------------------------------------------+--------------+------+------------+
    | MRG_MYISAM         | YES     | Collection of identical MyISAM tables                          | NO           | NO   | NO         |
    | CSV                | YES     | CSV storage engine                                             | NO           | NO   | NO         |
    | MyISAM             | DEFAULT | MyISAM storage engine                                          | NO           | NO   | NO         |
    | BLACKHOLE          | YES     | /dev/null storage engine (anything you write to it disappears) | NO           | NO   | NO         |
    | FEDERATED          | NO      | Federated MySQL storage engine                                 | NULL         | NULL | NULL       |
    | PERFORMANCE_SCHEMA | YES     | Performance Schema                                             | NO           | NO   | NO         |
    | ARCHIVE            | YES     | Archive storage engine                                         | NO           | NO   | NO         |
    | MEMORY             | YES     | Hash based, stored in memory, useful for temporary tables      | NO           | NO   | NO         |
    +--------------------+---------+----------------------------------------------------------------+--------------+------+------------+
    8 rows in set (0.00 sec)

and

    [mysqld]
    default-storage-engine=MyISAM
    local-infile=0
    symbolic-links=0
    skip-networking
    max_connections = 500
    max_user_connections = 20
    key_buffer = 512M
    myisam_sort_buffer_size = 64M
    join_buffer_size = 64M
    read_buffer_size = 12M
    sort_buffer_size = 12M
    read_rnd_buffer_size = 12M
    table_cache = 2048
    thread_cache_size = 16K
    wait_timeout = 30
    connect_timeout = 15
    tmp_table_size = 64M
    max_heap_table_size = 64M
    max_allowed_packet = 64M
    max_connect_errors = 10
    query_cache_limit = 1M
    query_cache_size = 64M
    query_cache_type = 1
    low_priority_updates=1
    concurrent_insert=ALWAYS
    log-error=/var/log/mysql/error.log
    tmpdir=/home/mysqltmp
    myisam_repair_threads=4

    [mysqld_safe]
    open_files_limit = 8192
    log-error=/var/log/mysql/error.log

    [mysqldump]
    quick
    max_allowed_packet = 512M

    [myisamchk]
    key_buffer = 64M
    sort_buffer = 64M
    read_buffer = 16M
    write_buffer = 16M

Any ideas on how to fix this? It used to work just fine.
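A couple of checks that may help narrow this down (a sketch; the presence of PERFORMANCE_SCHEMA in the engine list suggests MySQL 5.5 or later, where a missing InnoDB usually means the engine failed to initialize at startup rather than that it was never installed):

    -- was the built-in InnoDB explicitly disabled?
    SHOW GLOBAL VARIABLES LIKE 'ignore_builtin_innodb';
    -- if not, the startup failure reason will be in the error log configured above
    -- (/var/log/mysql/error.log); a typical cause after an update is an innodb_log_file_size
    -- setting that no longer matches the existing ib_logfile0/ib_logfile1 files.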

Alternative tools to export Oracle database to SQL Server?

Posted: 12 Apr 2013 03:01 PM PDT

I've got an Oracle database that I need to export (schema and data) to SQL Server.

I am trying the Microsoft SQL Server Migration Assistant for Oracle, but it is horribly slow, grossly inefficient and very un-user-friendly, e.g. I was having problems connecting to the SQL Server DB during data migration - but it still spent ~5 minutes preparing all the data before attempting a connection to SQL Server, then when it failed, the 5 minutes of preparatory work were wasted.

Right now, I'm just trying to connect to another Oracle DB using this tool, I left it overnight and came back this morning, and it's still stuck on 19% of "Loading objects..." And this is on a machine with a good 18GB RAM, of which maybe 8.5 GB currently in use. Task Manager shows me that SSMAforOracle.exe is using 0 CPU, 0 PF Delta, and no change whatsoever in memory usage. In other words: frozen stiff. Absolutely intolerable.

Are there any other tools out there that can migrate an Oracle DB to SQL Server a little more efficiently?

Performing SELECT on EACH ROW in CTE or Nested QUERY?

Posted: 12 Apr 2013 04:01 PM PDT

This is a problem in PostgreSQL.

I have a table which stores a tree of users:

    +------+---------+
    |  id  | parent  |
    +------+---------+
    |  1   |   0     |
    |  2   |   1     |
    |  3   |   1     |
    |  4   |   2     |
    |  5   |   2     |
    |  6   |   4     |
    |  7   |   6     |
    |  8   |   6     |
    +------+---------+

I can query a complete tree from any node by using the connectby function, and I can separately query the size of a tree in terms of the total nodes in it, for example:

tree for #1 has size 7
tree for #5 has size 0
tree for #6 has size 2, and so on

Now I want to do something like selecting all possible trees from this table (again using connectby), count the size of each, and create another dataset with records of the ID and the size of the underlying tree, like this:

    +------------------+-------------+
    |  tree_root_node  |  tree_size  |
    +------------------+-------------+
    |      1           |     7       |
    |      2           |     3       |
    |      3           |     0       |
    |      4           |     3       |
    |      5           |     0       |
    |      6           |     2       |
    |      7           |     0       |
    |      8           |     0       |
    +------------------+-------------+

The problem is, I am unable to perform the same SELECT statement for every available row in the original table in order to fetch the tree and calculate its size, and even if I could, I don't know how to create a separate dataset using the fetched and calculated data.

I am not sure whether this could be a simple use of some functions available in Postgres, or whether I'd have to write a function for it; I simply don't know what this kind of query is called, and googling for hours and searching for another hour here at dba.stackexchange returned nothing.

Can someone please point me in the right direction?
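One way to get the root/size pairs without calling connectby once per row is a recursive CTE (available in PostgreSQL 8.4 and later). A minimal sketch, assuming the table is called users(id, parent) and "size" means the number of descendants, excluding the root itself:

    WITH RECURSIVE sub AS (
        SELECT id AS tree_root_node, id AS node
        FROM users
      UNION ALL
        SELECT s.tree_root_node, u.id
        FROM sub s
        JOIN users u ON u.parent = s.node
    )
    SELECT tree_root_node, COUNT(*) - 1 AS tree_size   -- subtract 1 so the root does not count itself
    FROM sub
    GROUP BY tree_root_node
    ORDER BY tree_root_node;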

List all permissions for a given role?

Posted: 12 Apr 2013 01:01 PM PDT

I've searched all over and haven't found a conclusive answer to this question.

I need a script that can give ALL permissions for an associated role.

Any thoughts, or is it even possible?

This gets me CLOSE - but I can't seem to flip it around and give the summary for roles, rather than users.

http://consultingblogs.emc.com/jamiethomson/archive/2007/02/09/SQL-Server-2005_3A00_-View-all-permissions--_2800_2_2900_.aspx

    WITH perms_cte as
    (
            select USER_NAME(p.grantee_principal_id) AS principal_name,
                    dp.principal_id,
                    dp.type_desc AS principal_type_desc,
                    p.class_desc,
                    OBJECT_NAME(p.major_id) AS object_name,
                    p.permission_name,
                    p.state_desc AS permission_state_desc
            from    sys.database_permissions p
            inner   JOIN sys.database_principals dp
            on      p.grantee_principal_id = dp.principal_id
    )
    --role members
    SELECT rm.member_principal_name, rm.principal_type_desc, p.class_desc,
           p.object_name, p.permission_name, p.permission_state_desc, rm.role_name
    FROM    perms_cte p
    right outer JOIN (
        select role_principal_id, dp.type_desc as principal_type_desc,
               member_principal_id, user_name(member_principal_id) as member_principal_name,
               user_name(role_principal_id) as role_name --,*
        from    sys.database_role_members rm
        INNER   JOIN sys.database_principals dp
        ON      rm.member_principal_id = dp.principal_id
    ) rm
    ON      rm.role_principal_id = p.principal_id
    order by 1
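If the goal is simply "every permission granted to each role," a sketch that filters the grantee principals down to database roles (type 'R') may be closer to what's needed:

    SELECT dp.name AS role_name,
           perm.class_desc,
           OBJECT_NAME(perm.major_id) AS object_name,
           perm.permission_name,
           perm.state_desc
    FROM sys.database_permissions AS perm
    JOIN sys.database_principals AS dp
      ON perm.grantee_principal_id = dp.principal_id
    WHERE dp.type = 'R'           -- database roles only
    ORDER BY dp.name, object_name;

Note this lists explicit grants only; permissions implied by fixed roles such as db_datareader are not stored in sys.database_permissions.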

Named Pipe Provider Error code 40

Posted: 12 Apr 2013 06:01 PM PDT

I have literally tried everything, from enabling named pipes to adding exceptions for the ports in the firewall, to everything possible in the Surface Area Configuration. I can connect to the SQL instance (using both TCP and named pipes) with SQL Server Management Studio, but sqlcmd throws an error:

Login timeout expired  

Help!

MySQL auto increment problem with deleting rows / archive table

Posted: 12 Apr 2013 08:01 PM PDT

A hosted server is running "maintenance" each weekend. I am not privy to the details.

In a database on this server there is a MyISAM table. This table never holds more than 1000 rows and usually much less. It is MyISAM so that the auto increment does not reset (and with so few rows it really doesn't matter). Rows are regularly deleted from this table and moved to an archive table (1M rows).

The problem is lately the auto increment has "rolled back" slightly after each maintenance.

Is there any easy way to verify the auto increment of the insert table by reading the max id from both the insert and the archive table?

I'd rather not verify before each insert unless that is the only solution.

Here are the basic table layouts:

    CREATE TABLE x
    (
        xid int(10) unsigned NOT NULL AUTO_INCREMENT, //snip
        PRIMARY KEY (xid)
    ) ENGINE=MyISAM AUTO_INCREMENT=124 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    CREATE TABLE xhistory
    (
        xid int(10) unsigned NOT NULL DEFAULT '0', //snip
        PRIMARY KEY (xid)
    ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

Far from perfect workaround: (this was somewhat urgent, I had to manually update over 100 rows)

select xid from xhistory where x=?  

Check if the just-inserted row in x already exists in history. If it does:

select greatest(max(x.xid),max(xhistory.xid)) as newval from x,xhistory  

Find a new id.

INSERT INTO x SELECT * FROM x AS iv WHERE iv.xid=? ON DUPLICATE KEY UPDATE xid=?  

And update our row with this id.
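Another option worth sketching: after each weekend maintenance, read the highest id across both tables and push the AUTO_INCREMENT back above it, rather than checking before every insert (the value has to be computed first, since ALTER TABLE does not accept a subquery there):

    -- 1) find the next safe id
    SELECT GREATEST(
             COALESCE((SELECT MAX(xid) FROM x), 0),
             COALESCE((SELECT MAX(xid) FROM xhistory), 0)
           ) + 1 AS next_id;

    -- 2) then, using the value returned above:
    -- ALTER TABLE x AUTO_INCREMENT = <next_id>;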

Cannot generate reports from SQL Management Data Warehouse

Posted: 12 Apr 2013 05:01 PM PDT

I'm running SQL Server 2008 R2 and have installed the MDW on one server and have a Data Collector collecting and uploading the server activity, query results, and Disk activity data to the MDW. When I select any of the reports from the MDW with Data Collection > Reports > Management Data Warehouse I receive the error:

Exception has been thrown by the target of an invocation - > Object reference not set to an instance of an object.

This occurs for all 3 reports, even after I've waited some time and data has been uploaded from the data collector. I do not have SSRS running, but I read that it isn't necessary.

Any suggestions?

How to properly secure MySQL database?

Posted: 12 Apr 2013 12:01 PM PDT

We have a web application based on the famous triad apache + php + mysql which we sell to our customers and gets installed on their servers.

Currently, we are using MySQL 5.1.41, which has only a single registered user, root, with a password and access allowed only from localhost.

The problem is, if someone creates their own MySQL installation and then copies our database files from their original location into that independent installation, they would be able to access the content.

Is there a way to also prevent this kind of improper access to our webapp database? Can it be enforced on the MySQL side, or must it be handled in our own application?

Where is the MySQL variable - innodb_flush_method?

Posted: 12 Apr 2013 11:25 AM PDT

I would like to tweak the value of innodb_flush_method to find out its performance impact on a database server. That variable is listed when I run the command SHOW VARIABLES.

But I could not find it in the configuration file for the MySQL server, my.cnf. I'm using XAMPP 1.6.3 with PHP 5.2.3 and MySQL 5.0.45. Its MySQL configuration file path is /xampp/mysql/bin/my.cnf.
I searched for the variable in the file, but no luck.
Where can I find it? Am I just overlooking it?

UPDATE
I upgraded to XAMPP 1.8.1. Its configuration file path is /xampp/mysql/bin/my.ini.
I still could not find that variable.

Bulk insert into SQL Server from VMWare guest using distributed switch

Posted: 12 Apr 2013 07:01 PM PDT

This is most likely not a SQL Server issue, but the setup seems to only be affecting BULK INSERTs to SQL Servers.

We have recently moved VM Hardware and all the guests that were moved had their virtual switches changed from standard to distributed.

I then started receiving

A fatal error occurred while reading the input stream from the network. The session will be terminated (input error: 64, output error: 0)

on two SQL Servers during BULK INSERT operations. One of the SQL Servers was a VM with the new configuration and the other was a physical server. Both BULK INSERT operations originated from a VM with the new configuration. The BULK INSERTs would not fail every time; it was very random when they would.

When we changed the virtual switch to be a standard switch instead of a distributed switch the issue goes away.

I am looking for an explanation of why it doesn't work with a distributed switch, more than for a resolution. My guess would be that the BULK INSERT operation is serial, and with a distributed switch the packets are being routed through different hosts, some of which may be busier than others, and are arriving at the destination server beyond some latency threshold. (Note: there is nothing in the Windows event log at the times of the errors on either the source or destination server.)
