[how to] Database Volume Statistics in Oracle - dba_hist_tbspc_space_usage
- Database Volume Statistics in Oracle - dba_hist_tbspc_space_usage
- Large INSERTs performance optimization
- How to perform ETL from RDBMS to Neo4j
- Ubuntu/MySQL - Error "Unable to locate package innotop"
- How can I benchmark a PostgreSQL query?
- Multi-variance products
- How to shred .docx XML?
- Why is MySQL order by performance poor within a single partition of a partitioned table?
- Disk space full but logical space available in database
- SQL Server simple model database transaction log full 'CHECKPOINT'
- Drop database locked the server
- File locations for Full Text Catalogs using SQL Server 2005 on VMware
- Oracle 11g - what's the most appropriate NLS_SORT parameter to cover all/most European languages?
- MySQL Synchronise Data From One DB Table To Another & Vice-Versa Using Triggers
- SQL Server-MySQL master-master replication
- createdb: could not connect to database postgres: FATAL: could not write init file
- Real-time merge of two SQL Server databases on different servers
- MySQL PDO Cannot assign requested address
- RID vs INCLUDE on a large field
- 1286 - Unknown storage engine 'InnoDB'
- Alternative tools to export Oracle database to SQL Server?
- Performing SELECT on EACH ROW in CTE or Nested QUERY?
- "ORA-01950: no privileges on tablespace 'SYSTEM' " : What error is it?
- List all permissions for a given role?
- Named Pipe Provider Error code 40
- MySQL auto increment problem with deleting rows / archive table
- MySQL Data Synchronization
Database Volume Statistics in Oracle - dba_hist_tbspc_space_usage Posted: 12 May 2013 08:36 PM PDT I want to generate a database volume consumption report in Oracle 11g for a specific time period. For example: 1) total volume (excluding index size) at 12 AM; 2) total volume (excluding index size) at 4 AM. Once I have that, the difference between the two gives me the database consumption during that period. I have tried using dba_hist_tbspc_space_usage. Is this the right view to query for this information? Note that index size needs to be excluded. Please let me know your thoughts on this.
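A minimal sketch of how dba_hist_tbspc_space_usage might be queried for two points in time; the snapshot IDs and the block-size arithmetic are assumptions to adapt. Note the view reports usage per tablespace, so index segments cannot be excluded from it directly — that would need dba_segments, as in the second query.

    -- Hedged sketch: tablespace usage at two AWR snapshots (IDs are placeholders).
    -- TABLESPACE_USEDSIZE is in database blocks, hence the block_size join.
    SELECT t.tablespace_name,
           u.snap_id,
           u.tablespace_usedsize * t.block_size / 1024 / 1024 AS used_mb
    FROM   dba_hist_tbspc_space_usage u
    JOIN   v$tablespace v    ON v.ts# = u.tablespace_id
    JOIN   dba_tablespaces t ON t.tablespace_name = v.name
    WHERE  u.snap_id IN (1234, 1238)   -- snapshots taken at 12 AM and 4 AM
    ORDER  BY t.tablespace_name, u.snap_id;

    -- Index size (to subtract) would have to come from dba_segments:
    SELECT SUM(bytes) / 1024 / 1024 AS index_mb
    FROM   dba_segments
    WHERE  segment_type LIKE 'INDEX%';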
Large INSERTs performance optimization Posted: 12 May 2013 05:09 PM PDT I have 15 Amazon AWS EC2 t1.micro instances which simultaneously populate an Amazon RDS MySQL db.m2.xlarge database with data using large INSERTs (40,000 rows per query). The queries are sent continuously. The table is InnoDB with two INT columns, and there is an index on both columns. CPU utilization of the RDS instance is about 30% while receiving data. With one EC2 instance, the speed is orders of magnitude faster than when I run 15 instances simultaneously, and the 15-instance group works slower and slower until the speed becomes totally unsatisfactory. How can I optimize the performance of this process?
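One angle worth trying (an assumption about the bottleneck, not a diagnosis): with 15 concurrent writers, every huge multi-row INSERT commits in its own implicit transaction and contends on the log flush. A hedged sketch, with table and column names invented:

    -- Relax durability while bulk loading (trade-off: a crash can lose
    -- roughly the last second of commits). On RDS this variable may have
    -- to be set via the parameter group rather than SET GLOBAL.
    SET GLOBAL innodb_flush_log_at_trx_commit = 2;

    -- Group several bulk INSERTs into one explicit transaction per client,
    -- so the log is flushed once per batch instead of once per statement.
    START TRANSACTION;
    INSERT INTO t (a, b) VALUES (1, 1), (2, 2) /* ... thousands of rows ... */;
    INSERT INTO t (a, b) VALUES (3, 3), (4, 4) /* ... */;
    COMMIT;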
How to perform ETL from RDBMS to Neo4j Posted: 12 May 2013 04:01 PM PDT I have a mature 50+ table web application based on MySQL. In order to do some advanced data mining, I want to use Neo4j and the goodness of Cypher. However, I'm having a hard time migrating my data from the RDBMS to Neo4j. I don't want to do it by hand, because that is generally unmaintainable. I use Pentaho, which supports the Neo4j JDBC driver, but I'm having a hard time wrapping my head around loading the extracted data into Neo4j, since the GUI is clearly designed for RDBMSs and there are no sources on how to do it. Has anyone run into a similar problem and found a solution?
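One common pattern (an assumption, since the question leaves the loading side open) is to extract nodes and relationships to CSV files with plain SQL and bulk-import the files on the Neo4j side, bypassing the GUI entirely. A hedged MySQL sketch, with table and column names invented for illustration:

    -- Export one CSV per node label and per relationship type
    -- (paths and columns are hypothetical).
    SELECT id, name
    INTO OUTFILE '/tmp/customers.csv'
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    FROM customers;

    SELECT o.customer_id, o.id          -- start node, end node of the relationship
    INTO OUTFILE '/tmp/placed_order.csv'
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    FROM orders o;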
Ubuntu/MySQL - Error "Unable to locate package innotop" Posted: 12 May 2013 04:58 PM PDT I am using Ubuntu 12.04.2 LTS with MySQL 5.5.29. I'm trying to install innotop for database monitoring, but I get the error "Unable to locate package innotop". I checked the Ubuntu package site at http://packages.ubuntu.com/search?keywords=innotop&searchon=names&suite=raring&section=all and there is no mention of it, though I know I used to see it there; that did not help. I think I might have had this problem once before and fixed it by installing some prerequisite, but I can't recall it anymore. I checked Google for the error message, but found no mention.
How can I benchmark a PostgreSQL query? Posted: 12 May 2013 01:02 PM PDT I want to benchmark a query containing a user-defined function I have written for PostgreSQL. Are there any standard ways to run such a benchmark? I know that timing can be turned on with \timing in psql.
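A minimal sketch of the usual server-side tool: EXPLAIN ANALYZE runs the query and reports the actual plan and per-node timings, which isolates the function's cost from client overhead. The function and table names below are placeholders:

    -- \timing in psql prints client-side elapsed time per statement.

    -- Server-side measurement of the actual execution plan and runtime:
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT my_udf(col)
    FROM   sample_table;

    -- Run it several times; the first run often pays cache-warming costs,
    -- so later runs better reflect steady-state performance.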
Multi-variance products Posted: 12 May 2013 11:26 AM PDT My current system houses different variants of products in different tables;
Products can reside under one product group and can have multi-variant colors, sizes and models. The logic for filling the database now relies fully on the website scripts, meaning that when I create a new product in my back office, I fill in and relate the colors, sizes and models to the product and make sure they have a combination of themselves. This structure grows quickly: 3 color variants with 6 sizes and 2 models already turn into 36 different products. I wonder if there is a more logical way to do this database-wise. The problem is that in the future a product might get another property (let's say sublabel). How could I make this method ever-extendable? I have read about EAV, but that's really too much to take on for now. Before I dive in at the deep end, I wonder if there are other approaches to this matter.
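One extensible alternative (a sketch, not a recommendation — all names are invented) is the option/option-value model used by many shop schemas: properties such as color or size become rows rather than columns, so adding "sublabel" later is an INSERT instead of a schema change:

    CREATE TABLE product (
        id   INT PRIMARY KEY,
        name VARCHAR(100)
    );
    CREATE TABLE option_type (              -- 'Color', 'Size', 'Model', later 'Sublabel'
        id   INT PRIMARY KEY,
        name VARCHAR(50)
    );
    CREATE TABLE option_value (             -- 'Red', 'XL', ...
        id             INT PRIMARY KEY,
        option_type_id INT,
        value          VARCHAR(50),
        FOREIGN KEY (option_type_id) REFERENCES option_type (id)
    );
    CREATE TABLE variant (                  -- one sellable combination of a product
        id         INT PRIMARY KEY,
        product_id INT,
        FOREIGN KEY (product_id) REFERENCES product (id)
    );
    CREATE TABLE variant_value (            -- which option values make up the variant
        variant_id      INT,
        option_value_id INT,
        PRIMARY KEY (variant_id, option_value_id),
        FOREIGN KEY (variant_id)      REFERENCES variant (id),
        FOREIGN KEY (option_value_id) REFERENCES option_value (id)
    );

The 36 combinations then exist as 36 variant rows sharing one product row, and a new property type never touches the schema.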
How to shred .docx XML? Posted: 12 May 2013 12:08 PM PDT I am trying to import XML (actually from a docx file) into a SQL Server 2008 database. I am almost a novice in XML programming. I have googled a lot, but almost all the examples out there use a simple XML file; here the XML file is a little complex (please see below). Can you please give me some idea of how I should create the table for this XML and what query I should run in SQL Server? I need values for all the tags, e.g. the w:rsidP, w:rsidRDefault and w:rsidR attributes of w:p, plus w:pStyle, w:bookmarkStart, w:t, etc.
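A hedged sketch of the shredding side: WordprocessingML lives in the w: namespace, so the query must declare it before nodes()/value() can address the elements. How the XML gets into the variable (e.g. OPENROWSET ... SINGLE_BLOB) and the column types are assumptions:

    DECLARE @doc XML;
    -- Assumed: the document.xml extracted from the .docx has been loaded into @doc.

    WITH XMLNAMESPACES ('http://schemas.openxmlformats.org/wordprocessingml/2006/main' AS w)
    SELECT p.node.value('@w:rsidR',        'VARCHAR(16)')   AS rsidR,
           p.node.value('@w:rsidRDefault', 'VARCHAR(16)')   AS rsidRDefault,
           p.node.value('@w:rsidP',        'VARCHAR(16)')   AS rsidP,
           p.node.value('(w:pPr/w:pStyle/@w:val)[1]', 'VARCHAR(50)')   AS pStyle,
           p.node.value('(w:r/w:t)[1]',               'NVARCHAR(MAX)') AS text_run
    FROM   @doc.nodes('/w:document/w:body/w:p') AS p(node);   -- one row per paragraph

The result of this SELECT can feed an INSERT into whatever target table matches the attributes you need.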
Why is MySQL order by performance poor within a single partition of a partitioned table? Posted: 12 May 2013 10:30 AM PDT I have to store some sequence-numbered data in MySQL. I have about 300,000 data items per day over about a 10-year span. Let's say the table structure is just sequence number (a BIGINT) and data (a VARCHAR). One very common query I'll be making is for all data for a single day, ordered by sequence number. Storing the data in one table per day works really well. Querying all data from a table takes 0.8 seconds (which is acceptable), and adding ORDER BY sequence number takes it up to 1.0 seconds (still acceptable), but this results in a massive number of tables that I'd rather have fewer of. Storing in one table per month (and adding an indexed day-number field), the time to get a day's data goes up to 1.6 seconds (not great), but adding partitioning by day number brings it right back down to 0.8. However... when I add the ORDER BY sequence number to the partitioned table query, the time goes up to 2.5 seconds! Does this make sense? I would have thought that since my WHERE clause (day number = X) instantly limits the query to all the data in a single partition, it should then have about the same performance as one of my original day tables (which it does, until I add the ORDER BY clause, then everything goes to hell). It's like it is doing the ORDER BY before checking the partitions or something. Anyone got any ideas? (Either "yes, that'll happen, and here's why", or "here's what you need to do to speed it up".) Thanks.
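One thing worth checking (a guess at the cause, offered as a sketch): if no index leads with the filter column and continues with the sort column, MySQL falls back to a filesort even when only one partition is read. A composite index covering both the WHERE and the ORDER BY may let it return rows already in order; all names below are placeholders:

    -- Hypothetical monthly table: day_no is the partition/filter column.
    ALTER TABLE monthly_data
        ADD INDEX idx_day_seq (day_no, seq_no);

    -- With the composite index, this can read rows already in order:
    EXPLAIN SELECT data
            FROM monthly_data
            WHERE day_no = 17
            ORDER BY seq_no;
    -- Check whether "Using filesort" disappears from the Extra column.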
Disk space full but logical space available in database Posted: 12 May 2013 11:28 AM PDT We have a fairly big SQL Server 2008 R2 database that resides on an SSD drive. The drive itself only has ~110GB of space, and the database files are the only files on the drive. The database is in "Simple" recovery mode and has only two files, an .MDF and an .LDF. The disk is now nearly full: the MDF is currently 109GB in size. However, SSMS tells me there's nearly 18GB of "Space Available" (on the 'General' properties page), and the shrink dialog shows the same free space if I go through the motions of shrinking the file. How should we proceed? Given there's apparently 18GB of free space, will SQL Server automatically use that free space, or is it as simple as: we really need more disk space?
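A hedged sketch for confirming where the free space lives: FILEPROPERTY reports used versus allocated space per file (size is in 8 KB pages, hence the division by 128; the database name is a placeholder):

    USE MyDatabase;   -- placeholder name
    SELECT name,
           size / 128                                                  AS allocated_mb,
           CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT) / 128          AS used_mb,
           (size - CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT)) / 128 AS free_mb
    FROM   sys.database_files;
    -- Free space inside the MDF is reused for new data automatically;
    -- the file will not need to grow until free_mb is exhausted.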
SQL Server simple model database transaction log full 'CHECKPOINT' Posted: 12 May 2013 10:15 AM PDT I have a SQL Server 2012 SP1 database whose transaction log is full. It consists of a single data file and a single log file, both about 800MB. I had 5GB of free disk space, and I've enlarged it to 20GB+, just in case there was a disk space problem. The database is in the SIMPLE recovery model. I cannot do a full database backup - it fails with
I cannot add a log file - it fails with the same error. I cannot change the recovery model to full or bulk-logged - it fails with the same error. I cannot execute a manual CHECKPOINT - it fails with the same error. DBCC LOGINFO (dbid) shows that there are a couple of hundred VLFs, all in status 2. I'm not using change tracking, mirroring, or transaction log shipping. What can I do to resolve this? (As recommended, moved from Stack Overflow to dba.stackexchange.)
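A small diagnostic sketch: sys.databases records why the log cannot be truncated, which usually narrows a case like this down (the database name is a placeholder):

    SELECT name, log_reuse_wait_desc
    FROM   sys.databases
    WHERE  name = 'MyDatabase';   -- placeholder
    -- Values such as ACTIVE_TRANSACTION, REPLICATION or CHECKPOINT
    -- indicate what is pinning the log in SIMPLE recovery.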
Drop database locked the server Posted: 12 May 2013 11:11 AM PDT I tried to drop one of the databases on the server. That locked the entire server up, so I stopped the mysql process via the console. I restarted the server, started MySQL, and now it seems hung on: debian-sys-maint | localhost | | Query | 192 | Opening tables | select count(*) into @discard from — not necessarily hung, but it takes a long time to move on to a different query. The other databases seem locked, but I see no locks in "show open tables". Any idea what's going on and what I could try? Thanks. Edit: it seems like the locks are moving from database to database and table to table. I guess it's doing some kind of check? Edit 2: I think I understand what's happening; the answer to why it's doing this is: Linux Mint trigger slowly query on mysql on system booting
File locations for Full Text Catalogs using SQL Server 2005 on VMware Posted: 12 May 2013 10:17 AM PDT I have an issue with one of our development SQL Servers. We have databases that use full-text search catalogs. Just some background info: our servers are configured to use separate drives for logs, data, and the default installation location. This server is a virtual machine. When I try to create a new full-text search catalog, it forces me to create it in the default installation directory, citing that you can't create a catalog on a removable drive. This seems kind of contrived, since this is a VM, none of the drives are "physical", and the drive it will let me install it on isn't any less "removable" than the others. It does this on any of the test servers, but for this conversation the specifications are:
All of these drives are attached to the VM as hard disks, not mapped drives. But when creating a new catalog, I always receive an error unless I put the catalog on the same drive the instance is installed on. Is this just a quirk of SQL Server, or is there a workaround? I know that in newer versions of SQL Server the catalogs are stored internally, so this isn't an issue there, but a number of our servers are still on SQL Server 2005 and will stay there for quite some time. Has anyone else seen this issue? Did you find a solution?
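For reference, a sketch of the T-SQL form being blocked, with placeholder names and paths; in SQL Server 2005 the IN PATH clause places the catalog files on disk, and the error in question fires when the server decides the target drive is removable:

    USE MyDatabase;   -- placeholder
    CREATE FULLTEXT CATALOG MyCatalog
        IN PATH N'E:\FTData'   -- the drive SQL Server rejects as "removable"
        AS DEFAULT;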
Oracle 11g - what's the most appropriate NLS_SORT parameter to cover all/most European languages? Posted: 12 May 2013 09:50 AM PDT I'm developing an Oracle-based application which is likely to be used by clients in many different (mostly European) countries. I want the program to change its NLS_SORT on startup so that sorting is case-insensitive, using the ALTER SESSION SET NLS_SORT='???'; command. I'm going to have a config file containing the NLS_SORT setting the user wants, so it is editable, but I want a default which will cover European languages. Which is the best one to choose? Thanks!
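A hedged sketch of one common choice: GENERIC_M is Oracle's multilingual sort aimed broadly at Western European languages, and the _CI suffix makes it case-insensitive; NLS_COMP=LINGUISTIC is also needed for WHERE-clause comparisons to honor it. Whether it fits every target language is an assumption to verify per locale:

    ALTER SESSION SET NLS_COMP = LINGUISTIC;
    ALTER SESSION SET NLS_SORT = GENERIC_M_CI;

    -- Quick check that 'a' and 'A' now compare equal linguistically:
    SELECT CASE WHEN 'a' = 'A' THEN 'case-insensitive' ELSE 'case-sensitive' END AS result
    FROM dual;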
MySQL Synchronise Data From One DB Table To Another & Vice-Versa Using Triggers Posted: 12 May 2013 10:56 AM PDT We are migrating from one database structure to another - very slowly. The system is MySQL. There are two databases; for example's sake, we will call them old_db and new_db. Both databases have tables that will need to be synchronised for around a year while the business's application is migrated from the old DB to the new DB. Again, to slim down the example, we have one table in each database: old_db.myoldertable and new_db.mynewtable. Structure: they look similar in structure, but this is simplified. Problem:
Requirements
Solutions?
EDIT: So far I have looked at
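A minimal sketch of the trigger-based approach, assuming both tables share an id column and a data column (all names are placeholders). Since MySQL does fire the target table's triggers on trigger-initiated inserts, a session-variable guard is needed to stop the pair of triggers from ping-ponging rows between the databases:

    DELIMITER $$
    CREATE TRIGGER old_db.sync_to_new
    AFTER INSERT ON old_db.myoldertable
    FOR EACH ROW
    BEGIN
        IF @syncing IS NULL THEN
            SET @syncing = 1;
            INSERT INTO new_db.mynewtable (id, data)   -- columns assumed
            VALUES (NEW.id, NEW.data);
            SET @syncing = NULL;
        END IF;
    END$$
    DELIMITER ;

    -- A mirror trigger on new_db.mynewtable inserts into old_db.myoldertable
    -- with the same @syncing guard; UPDATE/DELETE triggers follow the same shape.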
SQL Server-MySQL master-master replication Posted: 12 May 2013 07:48 AM PDT We are looking into setting up a master-master replicated MSSQL database, and I am interested to hear of any potential pitfalls we may run into along the way and, generally, any advice you might have for us. To expand on our situation a bit: we currently have an internal SQL 2008 R2 server and a web-based MySQL server. We use SSIS a lot to transfer data between the two servers. This leaves the data on the two servers out of sync, as the SSIS packages run hourly or nightly. We have investigated MSSQL->MySQL replication but have never found a comprehensive solution. This has brought us to what we are hoping to set up: the web side using both the current MySQL database and an additional replicated MSSQL database. Specific areas that I'm interested in finding out more about are:
Thanks for your time |
createdb: could not connect to database postgres: FATAL: could not write init file Posted: 12 May 2013 06:41 PM PDT RedHat Enterprise Server 3.0, 32-bit; psql (PostgreSQL) 8.2.3; user: postgres; the server is running. I had just created a new database cluster with initdb, but when I run createdb it fails with the error in the title. Any clues as to the cause and possible solutions to this problem?
Real-time merge of two SQL Server databases on different servers Posted: 12 May 2013 07:28 PM PDT One of the requirements our customer has given us is to merge two databases running on two different machines in real time. This means that we have to build some sync policy (or use SQL Server tools to manage it). But I have some doubts:
MySQL PDO Cannot assign requested address Posted: 12 May 2013 01:07 PM PDT Can someone help me with this error? I have a server with a lot of connections per second; out of about 100 connections, a single one gets this error. I've tried this recommendation from Stack Overflow; however, it does not solve my problem.
RID vs INCLUDE on a large field Posted: 12 May 2013 07:57 AM PDT I have a table that stores notes. I have done a lot of reading recently about how SQL Server handles indexes (2005 and forward). I have a clustered index on ID. [I have considered changing the clustered index to parentId, parentType, since that is reasonably narrow and it is static.] The overwhelming percentage of queries against this table are going to be along the lines of: The question I want to ask today (though any feedback is welcome) is this: the NC index I could add is: This would be useful in creating little lists of the notes, where we might include who- and when-type info. I am hesitant to include a Assuming I don't include the While I have read quite a bit about how expensive RID lookups are, it still has to be better to have this index as opposed to doing a table scan, RIGHT? [Apologies for the code block; I have added the 4 spaces, but maybe I did it wrong?]
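A sketch of the trade-off being weighed, with all table and column names invented for illustration: the covering variant carries the large field into every index leaf row, while the narrow variant leaves it to a RID lookup per matching row:

    -- Narrow NC index: small, but fetching the note text costs a RID lookup per row.
    CREATE NONCLUSTERED INDEX ix_notes_parent
        ON dbo.Notes (parentId, parentType)
        INCLUDE (createdBy, createdOn);

    -- Covering variant: no lookups for the common query, but every leaf row
    -- carries the big field, bloating the index.
    CREATE NONCLUSTERED INDEX ix_notes_parent_covering
        ON dbo.Notes (parentId, parentType)
        INCLUDE (createdBy, createdOn, noteText);

And yes, for a selective parentId/parentType predicate, even the narrow index plus RID lookups generally beats a full table scan.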
1286 - Unknown storage engine 'InnoDB' Posted: 12 May 2013 05:07 PM PDT I am trying to use Roundcube, and it recently just broke. I don't know if this is due to a recent MySQL update or not, but in phpMyAdmin I get the above error if I try to view a table. Any ideas as to how to fix it? It used to work just fine.
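A couple of diagnostic statements worth running first (standard MySQL, nothing assumed beyond server access): they show whether InnoDB is registered at all and whether it failed to load at startup — a common aftermath of an upgrade with mismatched ib_logfile sizes or a leftover skip-innodb option:

    SHOW ENGINES;                      -- InnoDB should list Support = YES or DEFAULT
    SHOW VARIABLES LIKE 'innodb%';     -- an empty result means the engine never loaded
    -- If InnoDB is missing, the MySQL error log usually states why
    -- (e.g. a log file size mismatch after the upgrade).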
Alternative tools to export Oracle database to SQL Server? Posted: 12 May 2013 03:07 PM PDT I've got an Oracle database that I need to export (schema and data) to SQL Server. I am trying the Microsoft SQL Server Migration Assistant for Oracle, but it is horribly slow, grossly inefficient and very un-user-friendly. For example, I was having problems connecting to the SQL Server DB during data migration - but it still spent ~5 minutes preparing all the data before attempting a connection to SQL Server; then, when it failed, the 5 minutes of preparatory work were wasted. Right now I'm just trying to connect to another Oracle DB using this tool; I left it overnight and came back this morning, and it's still stuck at 19% of "Loading objects...". And this is on a machine with a good 18GB of RAM, of which maybe 8.5GB is currently in use. Task Manager shows me that Are there any other tools out there that can migrate an Oracle DB to SQL Server a little more efficiently?
Performing SELECT on EACH ROW in CTE or Nested QUERY? Posted: 12 May 2013 04:07 PM PDT This is a problem in PostgreSQL. I have a table which stores the tree of users:

    +------+--------+
    |  id  | parent |
    +------+--------+
    |  1   |   0    |
    |  2   |   1    |
    |  3   |   1    |
    |  4   |   2    |
    |  5   |   2    |
    |  6   |   4    |
    |  7   |   6    |
    |  8   |   6    |
    +------+--------+

I can query a complete tree from any node by using the connectby function, and I can separately query the size of a tree in terms of the total nodes in it, for example
Now I want to do something like selecting all possible trees from this table (which is again carried out by connectby), count the size of each, and create another dataset with records of ID and size of the underlying tree, like this:

    +----------------+-----------+
    | tree_root_node | tree_size |
    +----------------+-----------+
    |       1        |     7     |
    |       2        |     3     |
    |       3        |     0     |
    |       4        |     3     |
    |       5        |     0     |
    |       6        |     2     |
    |       7        |     0     |
    |       8        |     0     |
    +----------------+-----------+

The problem is, I am unable to perform the same SELECT statement for every available row in the original table in order to fetch the tree and calculate its size, and even if I could, I don't know how to create a separate dataset using the fetched and calculated data. I am not sure whether this is a simple use of some functions available in Postgres, or whether I'd have to write a function for it; I simply don't know what this kind of query is called, and googling for hours plus searching for another hour here at dba.stackexchange returned nothing. Can someone please point me in the right direction?
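A hedged sketch without connectby: a recursive CTE can enumerate every (root, descendant) pair in one pass and then aggregate per root. It counts all descendants of each node (count(*) - 1 excludes the node's own row); the table is assumed to be named users(id, parent) as above:

    WITH RECURSIVE subtree(root, id) AS (
        SELECT id, id FROM users              -- every node roots its own tree
        UNION ALL
        SELECT s.root, u.id
        FROM   subtree s
        JOIN   users u ON u.parent = s.id     -- walk down one level at a time
    )
    SELECT root     AS tree_root_node,
           count(*) - 1 AS tree_size          -- descendants only, not the root itself
    FROM   subtree
    GROUP  BY root
    ORDER  BY root;

Note that tree_size here counts the full subtree under each node, which is one consistent reading of the desired output.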
"ORA-01950: no privileges on tablespace 'SYSTEM' " : What error is it? Posted: 12 May 2013 07:55 PM PDT Earlier I created a user : but when I login as I am using Oracle Database 11g Express Edition Release 11.2.0.2.0 |
List all permissions for a given role? Posted: 12 May 2013 02:07 PM PDT I've searched all over and haven't found a conclusive answer to this question. I need a script that can give ALL permissions for an associated role. Any thoughts, or is it even possible? This gets me CLOSE - but I can't seem to flip it around and give the summary for roles rather than users.
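A sketch against the SQL Server catalog views (that the platform is SQL Server is an assumption; the views exist in 2005+). It lists explicit permissions granted to one role, with the role name as a placeholder:

    SELECT r.name                   AS role_name,
           p.class_desc,
           OBJECT_NAME(p.major_id)  AS object_name,   -- NULL for non-object grants
           p.permission_name,
           p.state_desc                               -- GRANT / DENY
    FROM   sys.database_permissions p
    JOIN   sys.database_principals r
           ON r.principal_id = p.grantee_principal_id
    WHERE  r.type = 'R'                               -- roles only
      AND  r.name = 'MyRole';                         -- placeholder

Dropping the last predicate summarizes all roles at once.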
Named Pipe Provider Error code 40 Posted: 12 May 2013 06:07 PM PDT I have literally tried everything, from enabling named pipes to adding exceptions for the ports in the firewall, to everything possible in Surface Area Configuration. I can connect to the SQL instance (using TCP and Named Pipes) with SQL Server Management Studio. But Help!
MySQL auto increment problem with deleting rows / archive table Posted: 12 May 2013 08:07 PM PDT A hosted server is running "maintenance" each weekend. I am not privy to the details. In a database on this server there is a MyISAM table. This table never holds more than 1000 rows, and usually far fewer. It is MyISAM so that the auto-increment does not reset (and with so few rows it really doesn't matter). Rows are regularly deleted from this table and moved to an archive table (1M rows). The problem is that lately the auto-increment has "rolled back" slightly after each maintenance. Is there an easy way to verify the auto-increment of the insert table by reading the max id from both the insert and the archive table? I'd rather not verify before each insert unless that is the only solution. Here are the basic table layouts: Far-from-perfect workaround (this was somewhat urgent; I had to manually update over 100 rows): check whether the just-inserted row in x exists in history; if it does, find a new id and update our row with it.
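A hedged sketch of the one-time repair: read the highest id from both tables, then push the live table's counter past it. The table names x and history follow the question's wording; the numeric value is a placeholder:

    -- Find the highest id across the live and archive tables:
    SELECT GREATEST(
             (SELECT COALESCE(MAX(id), 0) FROM x),
             (SELECT COALESCE(MAX(id), 0) FROM history)
           ) AS max_id;

    -- Then bump the counter above it. The value must be a literal;
    -- MySQL does not allow a subquery in this position:
    ALTER TABLE x AUTO_INCREMENT = 1000001;

Re-running the SELECT after each weekend maintenance would at least detect the rollback before it causes duplicate-key collisions.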
MySQL Data Synchronization Posted: 12 May 2013 06:22 PM PDT I have a user who will be away from the office for a while but still wants to be able to insert a load of records into our MySQL database. The user will have a laptop, but no reliable internet connection, so no VPN etc. I'm aware that MySQL has a master/slave replication feature, but unless I've misunderstood, this only provides read-only synchronization for the slaves. I'm thinking that I can take a snapshot of the master database and copy it to a local server on the user's laptop. From that point onwards the local and master databases will of course be out of sync. However, because the user only intends to add records while away, and not delete or update them, I'm thinking, perhaps naively, that a re-sync should be relatively easy to do with some SQL scripting. Is this the right way to go? Are there any hidden dangers to be aware of?
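A sketch of the re-sync under the stated assumption (inserts only, no updates or deletes); the table, schema and key names are invented. The main hidden danger is auto-increment collisions between rows inserted on the laptop and rows inserted on the master in the meantime, which offsetting the laptop's counter before the trip avoids:

    -- Before the trip, on the laptop copy: move its counter out of the
    -- master's range so offline inserts can never collide.
    ALTER TABLE records AUTO_INCREMENT = 1000000000;

    -- After the trip, replay the laptop's new rows into the master.
    -- INSERT IGNORE skips any row whose key already exists:
    INSERT IGNORE INTO master_db.records (id, created_at, payload)
    SELECT id, created_at, payload
    FROM   laptop_dump.records
    WHERE  id >= 1000000000;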