[how to] Modeling issue with derived types
- Modeling issue with derived types
- mysql error creating table
- MySQL: OPTIMIZE after TRUNCATE?
- Service Broker - Communication not happening between servers despite ssbdiagnose saying all is well
- How do I optimize a large table so queries that target only recent data perform optimally?
- Shadow paging and MVCC
- How to identify computer specifications
- extra steps after changing storage engine and adding index
- Recursive query in mysql
- Best way to perform backups with filegroups and then restore those backups
- Why are constraints applied in the database rather than in code?
- How do MySQL transactions work?
- What is the best SQL Login Role for the following scenario?
- The 'Data-based filtering for mining models' feature is not included error
- Recommended Approach to Programmatically Backing Up/Restoring Oracle DB
- Windows native backups make SQL Server think that a database backup has been done
- High amount of Read Misses and Pages To Be Flushed
- Main Considerations When Moving From MS Access Programming to SQL Server
- MySQL PDO Cannot assign requested address
- What must be in place to validate an XMLTYPE against a schema?
- 1286 - Unknown storage engine 'InnoDB'
- Alternative tools to export Oracle database to SQL Server?
- Performing SELECT on EACH ROW in CTE or Nested QUERY?
- List all permissions for a given role?
- Named Pipe Provider Error code 40
- MySQL auto increment problem with deleting rows / archive table
- Cannot generate reports from SQL Management Data Warehouse
- How to properly secure MySQL database?
- Where is the MySQL variable - innodb_flush_method?
- Bulk insert into SQL Server from VMWare guest using distributed switch
Modeling issue with derived types Posted: 12 Apr 2013 08:09 PM PDT I have a superclass Region and its derived classes OrigineRegion and DestinationRegion as follows: OrigineAgent derives from Agent: When I finished modeling this, I ended up with the Origine type having two lists: Agents and OrigineAgents.
mysql error creating table Posted: 12 Apr 2013 05:26 PM PDT I was creating a DB with MySQL, and this error appeared:
I've got this: I've read before that this may happen because a table is not created, but it is. I went one by one ("usuaris" first, "curs" for the second, and finally "apuntats") and it gave me that error. Any help on how to fix it? Thanks again, as always!
MySQL: OPTIMIZE after TRUNCATE? Posted: 12 Apr 2013 06:50 PM PDT Using MySQL (either InnoDB or MyISAM tables--I have both), if I |
Service Broker - Communication not happening between servers despite ssbdiagnose saying all is well Posted: 12 Apr 2013 05:20 PM PDT I am setting up Service Broker communication between two servers, which we will call A and B. I used this article as an example: A is SQL 2005, B is SQL 2012. It doesn't work, and I have not been able to track down a good reason why. What I can see is: the Profiler on the receiving end doesn't even show any Service Broker events, indicating nothing is getting to it. But using ssbdiagnose as below, it says congratulations, you have no errors: Also, in the transmission queue, the only thing I see that seems particularly off is that 'to_broker_instance' is null, though I explicitly specified that info when setting up the route. Further, no errors are showing up in transmission_status, and the SQL Server error logs are shedding no light. As for firewall issues, well, these are test servers not accessible from the outside, so I tried turning the firewalls off altogether. One thing that is bothersome: I keep getting these: An exception occurred while enqueueing a message in the target queue. Error: 15581 State: 7. Please create a master key in the database or open the master key in the session before performing this operation. I do open the key and these go away. But I shouldn't have to repeatedly open the key every time I want to do something, should I? I suspect this is part of the problem, even though, as mentioned, the errors go away. Sorry for the somewhat open question - even some help identifying where to get more informative errors or debugging info would be great. This is new territory for me.
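For reference, the usual way to make error 15581 go away permanently is to encrypt the database master key with the service master key, so SQL Server can open it automatically instead of requiring OPEN MASTER KEY in every session. A minimal sketch; the database name and password are hypothetical:

    USE BrokerTargetDb;  -- hypothetical database name
    -- If the database has no master key yet:
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Passw0rd#1';
    -- If it already exists, open it once and add service-master-key encryption
    -- so it no longer has to be opened manually:
    OPEN MASTER KEY DECRYPTION BY PASSWORD = 'Str0ng!Passw0rd#1';
    ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;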
How do I optimize a large table so queries that target only recent data perform optimally? Posted: 12 Apr 2013 01:49 PM PDT So I have this table that is ever growing. Most queries target just recent data, say one month old. I suppose this is a common problem, but I have no idea how it can be solved. I am open to changing the design, or to any mechanism in MS SQL Server that solves this. I have limited options to try different solutions, as the database is in production and it's hard to reproduce.
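One common approach in SQL Server is to range-partition the table on the date column so queries against recent data only touch the newest partitions (partitioning requires Enterprise Edition; on Standard, a filtered index covering the recent range is a lighter alternative). A hedged sketch with hypothetical table and column names:

    CREATE PARTITION FUNCTION pf_EventsByMonth (datetime)
        AS RANGE RIGHT FOR VALUES ('2013-01-01', '2013-02-01', '2013-03-01', '2013-04-01');

    CREATE PARTITION SCHEME ps_EventsByMonth
        AS PARTITION pf_EventsByMonth ALL TO ([PRIMARY]);

    -- Hypothetical table, clustered on the partitioning column:
    CREATE TABLE dbo.Events
    (
        EventId   bigint IDENTITY(1,1) NOT NULL,
        CreatedAt datetime NOT NULL,
        Payload   nvarchar(400) NULL,
        CONSTRAINT PK_Events PRIMARY KEY CLUSTERED (CreatedAt, EventId)
    ) ON ps_EventsByMonth (CreatedAt);

    -- Queries that filter on CreatedAt now benefit from partition elimination:
    SELECT COUNT(*) FROM dbo.Events
    WHERE CreatedAt >= DATEADD(MONTH, -1, GETDATE());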
Shadow paging and MVCC Posted: 12 Apr 2013 11:35 AM PDT If I get it right, in an MVCC-model database, if someone is going to update some data, the old version of it is kept as is and all modifications are made on copied data. So what is the difference from the shadow paging mechanism? Citing Wikipedia:
How to identify computer specifications Posted: 12 Apr 2013 10:07 AM PDT We need to assemble a system that can store streaming data from a dozen sensors, including several cameras, averaging >5 Gb / minute. The desired data storage system is a single machine running a DBMS in the back of a car, so we have some significant space limitations. The sensors may or may not be network sensors (i.e. they will be in physical proximity to the data storage system, so they could be connected to the same computer if it can handle the computational load). Reading the data back out from the DB at this time is not that important, as there are two distinct time phases to this project: (1) data collection, and then (2) reading. Worst case scenario, we can pull the data after the data collection and put it into another database, but it would be better to have limited reading available during the writes in order to verify the data quality. My prior database experience is with SQL Server, but with much lower write requirements. Others have suggested using Cassandra instead for this type of an application. In either case, I've no real idea of how to identify the necessary hardware specifications. I could try to search for a hard drive with fast enough write speeds, and then add LOTS of memory - but surely there is a more methodical way of approaching the problem. Can anybody make suggestions on how to design a computer to support a database system? Links to other successful systems, or better forums for asking this question would be appreciated. Thank you. |
extra steps after changing storage engine and adding index Posted: 12 Apr 2013 12:50 PM PDT Someone told me to look into his website for quick optimization; I'm a programmer and I don't have much experience optimizing databases. I have a PHP/MySQL site that uses the MyISAM storage engine and doesn't have any indexes on its columns. I want to change the engine to InnoDB and add indexes to the columns. Some tables have a couple hundred thousand rows, so it's not a very small database. My question is mostly about the data that is already in the database. Do I need to do anything after these changes to make the already-stored data aware of them, or to make it compatible with these changes?
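For what it's worth, both changes rebuild the table, so the rows that are already stored are carried over and indexed automatically; there is no separate migration step. A sketch with hypothetical table and column names:

    ALTER TABLE orders ENGINE = InnoDB;                               -- rebuilds the table as InnoDB
    ALTER TABLE orders ADD INDEX idx_orders_customer (customer_id);   -- indexes existing rows too

    -- Verify the engine and check that queries actually use the new index:
    SHOW TABLE STATUS LIKE 'orders';
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;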
Recursive query in mysql Posted: 12 Apr 2013 04:30 PM PDT I am using the GetAncestry function found at this post. The problem is that when I try to use it in a SELECT query, MySQL hangs, and I am not sure why that happens. My MySQL version is "5.5.16". Any help is appreciated.
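The linked function is not shown here, but a typical GetAncestry implementation walks the parent chain in a WHILE loop, and the usual reason it hangs is a row whose chain never reaches the root, or a missing id (SELECT ... INTO then leaves the loop variable unchanged). A hedged sketch with a depth guard; the table name (tree) and columns (id, parent_id) are hypothetical:

    DELIMITER //
    CREATE FUNCTION GetAncestry(GivenID INT) RETURNS VARCHAR(1024)
        READS SQL DATA
    BEGIN
        DECLARE rv    VARCHAR(1024) DEFAULT '';
        DECLARE cm    CHAR(1)       DEFAULT '';
        DECLARE ch    INT           DEFAULT 0;
        DECLARE depth INT           DEFAULT 0;

        SET ch = GivenID;
        WHILE ch > 0 AND depth < 100 DO
            -- The scalar subquery returns NULL when the id is missing, which ends the loop.
            SET ch = (SELECT IFNULL(parent_id, 0) FROM tree WHERE id = ch);
            IF ch IS NULL THEN SET ch = 0; END IF;
            IF ch > 0 THEN
                SET rv = CONCAT(rv, cm, ch);
                SET cm = ',';
            END IF;
            SET depth = depth + 1;
        END WHILE;
        RETURN rv;
    END //
    DELIMITER ;

    -- Usage: SELECT id, GetAncestry(id) FROM tree;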
Best way to perform backups with filegroups and then restore those backups Posted: 12 Apr 2013 03:43 PM PDT Scenario: Database consists of these file groups: Database consists of the following files in those file groups: 2 tables are created with row: Then a full database backup is performed, and rows are added to the Question:
Looking for the proper sequence for the backup and then the restore.
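For reference, a minimal sketch of the sequence under the full recovery model, with hypothetical names (database SalesDb, filegroup FG_Data): take a full backup plus filegroup and log backups, then restore the filegroup backup and roll it forward with every later log backup and a tail-log backup.

    -- Backup side
    BACKUP DATABASE SalesDb TO DISK = N'C:\Backup\SalesDb_full.bak';
    BACKUP DATABASE SalesDb FILEGROUP = N'FG_Data'
        TO DISK = N'C:\Backup\SalesDb_FG_Data.bak';
    BACKUP LOG SalesDb TO DISK = N'C:\Backup\SalesDb_log1.trn';

    -- Restore side: bring the filegroup back, then restore the log chain
    BACKUP LOG SalesDb TO DISK = N'C:\Backup\SalesDb_tail.trn' WITH NORECOVERY;  -- tail-log backup
    RESTORE DATABASE SalesDb FILEGROUP = N'FG_Data'
        FROM DISK = N'C:\Backup\SalesDb_FG_Data.bak' WITH NORECOVERY;
    RESTORE LOG SalesDb FROM DISK = N'C:\Backup\SalesDb_log1.trn' WITH NORECOVERY;
    RESTORE LOG SalesDb FROM DISK = N'C:\Backup\SalesDb_tail.trn' WITH RECOVERY;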
Why are constraints applied in the database rather than in code? Posted: 12 Apr 2013 12:29 PM PDT Why are constraints applied in the database? Would it not be more flexible to have them in code? Let's say I design a database model which has this entity model: a registered person on the system can be only a Student or an Employee, and the Person entity requires uniqueness of the social number (of which every person has exactly one, of course). If one day the college decides that teachers (an Employee subtype) can also be students, taking courses in their free time (it's very improbable, but I can't think of anything else right now), it's much harder to change the database design, which could have thousands of entries, than to just change the logic in code that didn't allow a person to be registered both as a student and as an employee. Why do we care about business rules in database design rather than in code?
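As a small illustration of the trade-off being discussed: a rule declared in the schema is enforced for every client and code path, while a rule in application code is only as reliable as the code that remembers to check it. A hypothetical rendering of the social-number rule:

    CREATE TABLE Person
    (
        PersonId     INT PRIMARY KEY,
        SocialNumber CHAR(11) NOT NULL,
        CONSTRAINT UQ_Person_SocialNumber UNIQUE (SocialNumber)
    );
    -- If the business rule is later relaxed, that is one schema change
    -- (ALTER TABLE ... DROP CONSTRAINT ...) rather than an audit of every code path.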
How do MySQL transactions work? Posted: 12 Apr 2013 05:22 PM PDT I already know how to use transactions; what I want to know is how a MySQL transaction handles the data being processed. For example, consider the following: say I use the following commands below: In the above SQL statements, between the "start transaction" and the "commit" lines, what exactly is happening to the rows indicated in the insert statement? Are all of the rows affected by the insert statement from db1 transferred into db2 immediately and waiting for the commit line to execute in order to make the insert official? Or are the rows affected by the insert statement still inside db1, waiting to be transferred into db2 once the "commit" command executes? I want to know these facts because I will be copying data from a database over the internet, and I'm worried that I might lose some data in the process. Any help would be greatly appreciated.
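The actual statements were not included in the digest, but the pattern described sounds like an INSERT ... SELECT across two schemas inside one transaction. A hypothetical sketch (InnoDB tables assumed); roughly speaking, the rows are written to db2 as uncommitted row versions and only become visible to other sessions at COMMIT, while ROLLBACK discards them:

    START TRANSACTION;
    INSERT INTO db2.orders_archive (id, customer_id, total)
        SELECT id, customer_id, total
        FROM db1.orders
        WHERE created_at < '2013-01-01';
    -- Until COMMIT, the inserted rows exist in db2 only as uncommitted row versions:
    -- other sessions do not see them, and ROLLBACK would discard them entirely.
    COMMIT;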
What is the best SQL Login Role for the following scenario? Posted: 12 Apr 2013 10:20 AM PDT I want to create a login for a new user who could only create and manage their own databases. Other databases on the server should be read-only to that user. What would be a good set of roles/permissions to use to implement this? Thank you for your help! P.S. I am using SQL Server 2008 R2.
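One hedged way to set this up on SQL Server 2008 R2 (names are hypothetical): give the login the dbcreator server role, so it can create databases and automatically becomes dbo of each one it creates, and map it as db_datareader in every existing database that should be read-only to it.

    CREATE LOGIN AppOwner WITH PASSWORD = 'Str0ng!Passw0rd#1';
    EXEC sp_addsrvrolemember @loginame = 'AppOwner', @rolename = 'dbcreator';

    -- Repeat in each existing database that should be read-only for this login:
    USE ExistingDb;
    CREATE USER AppOwner FOR LOGIN AppOwner;
    EXEC sp_addrolemember @rolename = 'db_datareader', @membername = 'AppOwner';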
The 'Data-based filtering for mining models' feature is not included error Posted: 12 Apr 2013 12:19 PM PDT The 'Data-based filtering for mining models' feature is not included in SKU Standard 64 bit edition. I have upgraded to Enterprise Edition but I still get the error. Why is that? Is there something I have to do so SSAS finds out it's actually an Enterprise Edition?
Recommended Approach to Programmatically Backing Up/Restoring Oracle DB Posted: 12 Apr 2013 10:20 AM PDT Supposing I have an ever-growing Oracle DB on one server, and I want to duplicate this schema and the data on another server - what would be the best approach to achieving this as part of a bespoke .net app? What I've Tried/Researched:
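One option worth considering is driving Data Pump from PL/SQL via DBMS_DATAPUMP, since it can be called by a bespoke app over a normal database connection. A hedged sketch of a schema export; the directory object, schema, and file names are hypothetical, and the import side uses operation => 'IMPORT' analogously:

    DECLARE
        h         NUMBER;
        job_state VARCHAR2(30);
    BEGIN
        h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
        DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'app_schema.dmp',
                               directory => 'DATA_PUMP_DIR');
        DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR',
                                      value => q'[IN ('APP_SCHEMA')]');
        DBMS_DATAPUMP.START_JOB(h);
        DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
    END;
    /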
Windows native backups make SQL Server think that a database backup has been done Posted: 12 Apr 2013 12:33 PM PDT Hello, community. I have Windows Server 2008 R2, which hosts a few virtual machines. One of the machines is a Windows 2008 R2 server with SQL Server 2008 R2 Express installed. Here's the thing: I wrote a script that backs up the databases at 05:00 AM every day. Mon, Wed, Fri - full backups. Tue, Thu, Sat, Sun - transaction log backups. A few days ago I had to try to restore data from the backups and I couldn't. I received an error message that my transaction log was too recent to use. It was on Tuesday, so basically I had to restore Monday's full backup and Tuesday's early-morning transaction log backup. I started to research the cause and soon discovered that every day at 04:00 AM and 11:00 PM SQL Server backs up all the databases to some VIRTUAL_DEVICE with two different SIDs. I realized that at 11:00 PM Windows Server Backup starts on that particular virtual machine. That backup contains only Bare metal recovery + System state + C:. Later I understood where the backup at 04:00 AM comes from; it's a similar story. At 04:00 AM a backup of the Hyper-V host starts with the same parameters (it's strange, but SQL Server somehow realizes that the machine is being backed up). So we have:
After the procedures described above, the Event Log receives a message that SQL Server has been backed up, but there is no physical place where those "backups" are stored. Since SQL Server thinks it has been backed up, it changes the LSNs, so correct transaction log backups can't really be restored after another correct full backup. Please note: the Hyper-V host and the virtual machine itself are backed up to a separate volume reserved for backups. After assigning a letter to that volume I can see the VHDs of the disks that had been backed up, and no sign of any SQL database backups inside. The main problem is to stop SQL Server from reacting to the system backups. Looking forward to your replies and thanks in advance, Alexey.
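To see exactly which backups are breaking the chain, msdb records every backup, including the ones taken by the VSS/SQL Writer to a virtual device. A sketch of a query that lists them (device_type 7 means a virtual device; is_copy_only shows whether the system backup was taken as copy-only and is therefore harmless to the LSN chain):

    SELECT  b.database_name,
            b.backup_start_date,
            b.type,                    -- D = full, L = log, I = differential
            b.is_copy_only,
            mf.device_type,            -- 7 = virtual device (VSS), 2 = disk
            mf.physical_device_name
    FROM    msdb.dbo.backupset         AS b
    JOIN    msdb.dbo.backupmediafamily AS mf
            ON mf.media_set_id = b.media_set_id
    ORDER BY b.backup_start_date DESC;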
High amount of Read Misses and Pages To Be Flushed Posted: 12 Apr 2013 05:16 PM PDT I am running a MySQL database backend for a Moodle installation, and after a few months performance really starts to suffer (up to 30 seconds for some pages to load). Investigating the InnoDB buffer pool, I found that the buffer pool size seemed to be correct (innodb_buffer_pool_wait_free = 0). However, I also found that I have an exceedingly high percentage of Read Misses (52%) and what seems like a rather large number of Pages To Be Flushed (31 million). I'm currently running the slow query log, but the lag on page loads seems like too much to come from simply an unoptimized query. I haven't been able to find any explanation of why those could both be so high. Does anybody have an explanation for why Read Misses and Pages To Be Flushed would have those values? Update: I am restarting the servers on a weekly basis during a scheduled down period. I still cannot imagine why this is getting so large. Is there no auto-flush mechanism built in?
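For reference, both figures can be recomputed from the raw InnoDB counters, which makes it easier to watch how they evolve between restarts. A sketch (these are standard MySQL status variables):

    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';  -- logical reads
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';          -- reads that missed the buffer pool
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';    -- pages waiting to be flushed
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total';
    -- miss rate ~= Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests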
Main Considerations When Moving From MS Access Programming to SQL Server Posted: 12 Apr 2013 12:08 PM PDT First post, be gentle... I am a 100% self-taught MS Access programmer (the main part of my job is programming), and I am now building larger databases; still using MS Access as the UI but SQL Server to store all data and do more of the 'work'. In essence my question is: what subject matter do I need to know for SQL Server that I probably didn't learn or need when using Access? I'm not looking for you to tell me how to do anything, more what you think are the most important things I should go and research - there are a lot of subjects and a hell of a lot of detail, and I don't want to find myself a long way down a less valuable path... Brain dump:
If it helps, I work for a mid-sized retailer, and the databases I predominantly work on cover such things as
Thanks in advance, Simon
MySQL PDO Cannot assign requested address Posted: 12 Apr 2013 12:47 PM PDT Can someone help me with this error? I have a server with a lot of connections per second; out of about 100 connections, a single one gets this error. I've tried this recommendation from Stack Overflow; however, it does not solve my problem.
What must be in place to validate an XMLTYPE against a schema? Posted: 12 Apr 2013 12:03 PM PDT I have a procedure that generates an XMLTYPE and I want to validate it against a schema. The problem is that there seems to be a permissions issue running createSchemaBasedXML, because when I run the procedure as AUTHID DEFINER it gives the error "ORA-31050: Access denied", but when I run it as AUTHID CURRENT_USER it actually returns a validation-specific error (I'll deal with that separately). CURRENT_USER is not an acceptable solution. My supposition is that CURRENT_USER works because the user has the XMLADMIN role. Granting the permissions the role includes does not resolve the issue, so it must be the role's ability to bypass the ACLs. The thing is, querying RESOURCE_VIEW for the ACL that protects the resource shows that it is protected by Using There are any number of places I could be going wrong in this process, so the core of what I am looking for is this: what must be in place to validate an XMLTYPE against a schema? === Update 4/3/2013 === === Update 4/9/2013 === === Update 4/12/2013 === A complete test case can be found on Oracle Communities.
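For context, the basic validation flow looks roughly like the sketch below, assuming the schema is already registered with DBMS_XMLSCHEMA and the definer has read access to it in the XML DB repository (the ORA-31050 in the question points at that repository ACL rather than at the validation call itself). The URL and document are hypothetical:

    DECLARE
        doc XMLTYPE := XMLTYPE('<order xmlns="http://example.com/order"><id>1</id></order>');
    BEGIN
        -- Associate the document with the registered schema, then validate it.
        doc := doc.createSchemaBasedXML('http://example.com/order.xsd');
        doc.schemaValidate();   -- raises an error if the document does not conform
    END;
    /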
1286 - Unknown storage engine 'InnoDB' Posted: 12 Apr 2013 02:01 PM PDT I am trying to use Roundcube and it recently just broke. I don't know whether this is due to a MySQL update that happened recently, but in phpMyAdmin I get the following error if I try to view a table: and and Ideas as to how to fix this? It used to work just fine.
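A couple of quick checks that usually narrow this down: confirm whether InnoDB is disabled (for example skip-innodb in my.cnf) or simply failed to start after the update, typically because the existing ib_logfile size no longer matches innodb_log_file_size; the MySQL error log will say which.

    SHOW ENGINES;                                        -- is InnoDB listed and enabled?
    SHOW GLOBAL VARIABLES LIKE 'have_innodb';            -- pre-5.6: YES/DEFAULT vs DISABLED/NO
    SHOW GLOBAL VARIABLES LIKE 'ignore_builtin_innodb';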
Alternative tools to export Oracle database to SQL Server? Posted: 12 Apr 2013 03:01 PM PDT I've got an Oracle database that I need to export (schema and data) to SQL Server. I am trying the Microsoft SQL Server Migration Assistant for Oracle, but it is horribly slow, grossly inefficient and very un-user-friendly, e.g. I was having problems connecting to the SQL Server DB during data migration - but it still spent ~5 minutes preparing all the data before attempting a connection to SQL Server, then when it failed, the 5 minutes of preparatory work were wasted. Right now, I'm just trying to connect to another Oracle DB using this tool, I left it overnight and came back this morning, and it's still stuck on 19% of "Loading objects..." And this is on a machine with a good 18GB RAM, of which maybe 8.5 GB currently in use. Task Manager shows me that Are there any other tools out there that can migrate an Oracle DB to SQL Server a little more efficiently? |
Performing SELECT on EACH ROW in CTE or Nested QUERY? Posted: 12 Apr 2013 04:01 PM PDT This is a problem in PostgreSQL. I have a table which stores the tree of users:

+------+---------+
|  id  | parent  |
+------+---------+
|   1  |    0    |
|   2  |    1    |
|   3  |    1    |
|   4  |    2    |
|   5  |    2    |
|   6  |    4    |
|   7  |    6    |
|   8  |    6    |
+------+---------+

I can query a complete tree from any node by using the connectby function, and I can separately query the size of a tree in terms of the total nodes in it, for example
Now I want to do something like selecting all possible trees from this table (which is again carried out by connectby), counting the size of each and creating another dataset with records of the ID and the size of the underlying tree, like this:

+------------------+-------------+
|  tree_root_node  |  tree_size  |
+------------------+-------------+
|         1        |      7      |
|         2        |      3      |
|         3        |      0      |
|         4        |      3      |
|         5        |      0    	 |
|         6        |      2      |
|         7        |      0      |
|         8        |      0      |
+------------------+-------------+

The problem is, I am unable to perform the same SELECT statement for every available row in the original table in order to fetch the tree and calculate the size, and even if I could, I don't know how to create a separate dataset using the fetched and calculated data. I am not sure whether this could be a simple use of some functions available in Postgres, or whether I'd have to write a function for it, or simply what this kind of query is called, but googling for hours and searching for another hour over here at dba.stackexchange returned nothing. Can someone please point me in the right direction?
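One way to get the whole result set in a single statement, without calling connectby once per row, is a recursive CTE (PostgreSQL 8.4+). A sketch assuming the table is called users(id, parent); it counts all descendants of every node:

    WITH RECURSIVE sub AS (
        SELECT id AS root, id AS node
        FROM   users
        UNION ALL
        SELECT s.root, u.id
        FROM   sub s
        JOIN   users u ON u.parent = s.node
    )
    SELECT root         AS tree_root_node,
           count(*) - 1 AS tree_size      -- subtract 1 to exclude the root itself
    FROM   sub
    GROUP BY root
    ORDER BY root;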
List all permissions for a given role? Posted: 12 Apr 2013 01:01 PM PDT I've searched all over and haven't found a conclusive answer to this question. I need a script that can give ALL permissions for an associated role. Any thoughts, or is it even possible? This gets me CLOSE - but I can't seem to flip it around and give the summary for roles, rather than users.
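The script referenced in the question is not shown, but the usual way to pivot this to roles is to join sys.database_permissions to sys.database_principals and filter on type = 'R'. A sketch with a hypothetical role name:

    SELECT  pr.name            AS role_name,
            pe.class_desc,
            pe.permission_name,
            pe.state_desc,     -- GRANT / DENY / GRANT_WITH_GRANT_OPTION
            CASE WHEN pe.class = 1
                 THEN OBJECT_SCHEMA_NAME(pe.major_id) + '.' + OBJECT_NAME(pe.major_id)
            END                AS securable
    FROM    sys.database_principals  AS pr
    JOIN    sys.database_permissions AS pe
            ON pe.grantee_principal_id = pr.principal_id
    WHERE   pr.type = 'R'                 -- database roles only
      AND   pr.name = 'MyAppRole';        -- hypothetical role name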
Named Pipe Provider Error code 40 Posted: 12 Apr 2013 06:01 PM PDT I have literally tried everything, from enabling named pipes to adding exceptions for the ports in the firewall, to everything possible in surface area configuration. I can connect to the SQL instance (using TCP and Named Pipes) with SQL Server Management Studio. But Help!
MySQL auto increment problem with deleting rows / archive table Posted: 12 Apr 2013 08:01 PM PDT A hosted server is running "maintenance" each weekend. I am not privy to the details. In a database on this server there is a MyISAM table. This table never holds more than 1000 rows and usually much less. It is MyISAM so that the auto increment does not reset (and with so few rows it really doesn't matter). Rows are regularly deleted from this table and moved to an archive table (1M rows). The problem is that lately the auto increment has "rolled back" slightly after each maintenance. Is there any easy way to verify the auto increment of the insert table by reading the max id from both the insert and the archive table? I'd rather not verify before each insert unless that is the only solution. Here are the basic table layouts: Far-from-perfect workaround (this was somewhat urgent; I had to manually update over 100 rows): check whether the just-inserted row in x exists in history. If it does: find a new id and update our row with this id.
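A hedged sketch of the verification described in the question, with hypothetical table names live_t and history_t: compute the highest id seen in either table, and if the table's AUTO_INCREMENT has fallen below it, raise it again (ALTER TABLE needs a literal value, so the computed number has to be substituted in by the calling code).

    SELECT GREATEST(
               COALESCE((SELECT MAX(id) FROM live_t),    0),
               COALESCE((SELECT MAX(id) FROM history_t), 0)
           ) + 1 AS next_id;

    -- If AUTO_INCREMENT has rolled back below next_id, bump it (literal value required):
    ALTER TABLE live_t AUTO_INCREMENT = 100001;   -- substitute the computed next_id here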
Cannot generate reports from SQL Management Data Warehouse Posted: 12 Apr 2013 05:01 PM PDT I'm running SQL Server 2008 R2 and have installed the MDW on one server and have a Data Collector collecting and uploading the server activity, query results, and Disk activity data to the MDW. When I select any of the reports from the MDW with Data Collection > Reports > Management Data Warehouse I receive the error:
This occurs for all 3 reports, even after I've waited some time and data has been uploaded from the data collector. I do not have SSRS running, but I read that it isn't necessary. Any suggestions?
How to properly secure MySQL database? Posted: 12 Apr 2013 12:01 PM PDT We have a web application based on the famous triad Apache + PHP + MySQL, which we sell to our customers and which gets installed on their servers. Currently we are using MySQL 5.1.41, which has only a single user registered, The problem is, if someone creates their own MySQL installation and then copies our database from its original location to that independent installation, they would be able to access its content. Is there a way to prevent this kind of improper access to our web app's database? Can it be enforced MySQL-side, or must it be something in our own application?
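MySQL 5.1 has no built-in data-at-rest encryption, so copying the data directory to another server does expose the contents; one hedged mitigation is to encrypt the sensitive columns so the raw table files are useless without a key that only the application holds. The column, table, and key names below are hypothetical:

    -- ssn_enc should be VARBINARY/BLOB; @app_key is supplied by the application and never stored in the DB.
    INSERT INTO customers (id, ssn_enc)
    VALUES (1, AES_ENCRYPT('123-45-6789', @app_key));

    SELECT CAST(AES_DECRYPT(ssn_enc, @app_key) AS CHAR) AS ssn
    FROM   customers
    WHERE  id = 1;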
Where is the MySQL variable - innodb_flush_method? Posted: 12 Apr 2013 11:25 AM PDT I would like to tweak the value of innodb_flush_method to find out its performance impact on a database server. That variable is listed when I run the command But I could not find it in the configuration file for the MySQL Server - UPDATE |
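For reference, the variable only appears in the configuration file if someone has set it explicitly; an empty value in SHOW VARIABLES means the platform default (fdatasync on Linux). It is also read-only at runtime, so changing it means editing the option file and restarting. A sketch:

    SHOW GLOBAL VARIABLES LIKE 'innodb_flush_method';
    -- To change it, add a line such as the following under the [mysqld] section
    -- of my.cnf / my.ini and restart mysqld (O_DIRECT shown here only as an example):
    --   innodb_flush_method = O_DIRECT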
Bulk insert into SQL Server from VMWare guest using distributed switch Posted: 12 Apr 2013 07:01 PM PDT This is most likely not a SQL Server issue, but the setup seems to only be affecting BULK INSERTs to SQL Servers. We have recently moved VM hardware, and all the guests that were moved had their virtual switches changed from standard to distributed. I then started receiving
on two SQL servers during BULK INSERT operations. One of the SQL servers was a VM with the new configuration and the other was a physical server. Both BULK INSERT operations originated from a VM with the new configuration. The BULK INSERTs would not fail every time; it was very random when they would. When we changed the virtual switch to be a standard switch instead of a distributed switch, the issue went away. I am looking for an explanation of why it doesn't work with a distributed switch rather than a resolution. My guess would be that the BULK INSERT operation is serial and that, with a distributed switch, the packets are being routed through different hosts, some of which may be busier than others, and are arriving at the destination server beyond some latency threshold. (Note: there is nothing in the Windows event log at the times of the errors on either the source or destination server.)