[how to] How to set up ODBC for Oracle in Windows 7
- How to set up ODBC for Oracle in Windows 7
- Is it possible to convert mysql binlog from statement format to row format?
- Xtrabackup manager failed with apply-log step
- When importing raw files for internal conversion, should I use a secondary database or just isolate them within the database?
- Convert units of measurement
- Does my.cfg affect number of inserts / sec?
- InnoDB pop queue
- Postgresql not starting correctly, or is it repairing itself first?
- How to find mongo document by ObjectID age
- Wrong return results
- Query returning correct data, and additional data [on hold]
- Oracle 11g bug? Not returning the record until I toggled the index to invisible and then back to visible
- Page Break is splitting in the middle of a single row in SSRS
- Is it possible to build an UNDO framework for postgres?
- Latin1 to UTF8 on large MySQL database, minimal downtime
- Postgres Write Performance on Intel S3700 SSD
- pg_upgrade fails with lc_ctype cluster values do not match
- Why is Postgres on 64-bit CentOS 6 significantly slower than Postgres on 32-bit CentOS 6
- How to repair Microsoft.SqlServer.Types assembly
- Unable to connect to Amazon RDS instance
- Will Partitions and Indexes on the same table help in performance of Inserts and Selects?
- "Arithmetic overflow" when initializing SQL Server 2012 replication from backup
- limit the number of rows returned when a condition is met?
- tempdb logs on same drive as tempdb data or with other logs?
- How to remove column output in a for xml path query with a group by expression?
- Why don't I need to COMMIT in a database trigger?
- Inserting query result to another table hangs on "Copying to temp table on disk" on MySQL
- Primary key type change not reflected in foreign keys with MySQL Workbench
How to set up ODBC for Oracle in Windows 7 Posted: 25 Jun 2013 07:37 PM PDT Currently, I'm trying to figure out how to connect to an Oracle database from my client PC. The purpose is to manipulate the database from within FileMaker/Access (but mainly FileMaker). My environment (192.168.5.40) is: …
Server environment (192.168.10.100): …
I've tried to install Instant Client, but it didn't work right, so I tried a few ways; however, I always get some kind of error. Does anyone know exactly which files I have to install for this situation? Thanks for the help :)
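In case it helps future readers, here is a minimal sketch of the usual Instant Client ODBC setup on Windows; the folder, alias, and service name are illustrative assumptions, not values from the post:

    rem Unzip instantclient-basic and instantclient-odbc into the same folder,
    rem then register the ODBC driver that ships with it:
    cd C:\oracle\instantclient_11_2
    odbc_install.exe

    rem Tell the client where tnsnames.ora lives:
    setx TNS_ADMIN C:\oracle\network\admin

    # C:\oracle\network\admin\tnsnames.ora -- alias and service name are made up
    ORCL =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.10.100)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = orcl))
      )

After this, the Instant Client driver should appear in the Windows ODBC Data Source Administrator, where a DSN pointing at the alias can be created for FileMaker/Access to use.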
Is it possible to convert mysql binlog from statement format to row format? Posted: 25 Jun 2013 08:19 PM PDT The MySQL server online is version 4.1, which doesn't support row-based binary logging. Nevertheless, I need a row-based binlog. Can I take the binlog generated by the old MySQL server and replay it into another MySQL server of a higher version that supports row-based logging, to get a row-based binlog?
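Whether 4.1 binlogs replay cleanly on a much newer server is version-dependent, but the outline is testable; a hedged sketch with illustrative host and file names, assuming a 5.1+ target configured for row logging:

    -- On the newer server, before replaying (or set binlog_format=ROW in my.cnf):
    SET GLOBAL binlog_format = 'ROW';

    # Then, from a shell, pipe the old statement-format binlog through the client:
    # mysqlbinlog mysql-bin.000042 | mysql -h newserver -u root -p

The newer server re-executes the statements and writes its own binlog, which will be in row format.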
Xtrabackup manager failed with apply-log step Posted: 25 Jun 2013 06:27 PM PDT I am using XtraBackup Manager to back up a MySQL server, and I have a problem with it. All my backups run fine, but on the server with about 200 GB of data the backup task always fails. I open the log file and see: … It looks like xtrabackup gets stuck while performing the apply-log step. Does anyone have an idea how to solve this problem? Here is some info about my software: …
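For what it's worth, the apply-log phase can be run by hand to watch where it stalls; the path below is an illustrative assumption, and a larger --use-memory (the default buffer is tiny) often makes a big difference on a 200 GB backup:

    innobackupex --apply-log --use-memory=2G /backups/mysql/2013-06-25_full/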
When importing raw files for internal conversion, should I use a secondary database or just isolate them within the database? Posted: 25 Jun 2013 05:21 PM PDT I've got between 1.25 and 2 GB of CSV files to be imported (and already have most of the process running smooth as butter), so my question is: does it make sense as a "best practice" to use a "secondary" database for the import, or just load them into the database that I'm going to be working in? Example: … OR … Obviously I'll be migrating from one table to the others via scripts, so it doesn't make much difference what the four-part name is going to be, whether it's just … or … The cons I've identified with the first style are the following: …
Some of the pros I've identified with the first style: …
What would be considered a best practice in this situation, and why?
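For concreteness, a sketch of the two naming styles being weighed; all database, schema, and table names here are invented placeholders:

    -- Style 1: a separate staging database on the same instance
    SELECT * FROM ImportStaging.dbo.RawCustomers;

    -- Style 2: an isolated schema inside the working database
    SELECT * FROM MainDb.staging.RawCustomers;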
Convert units of measurement Posted: 25 Jun 2013 07:52 PM PDT Looking to calculate the most suitable unit of measurement for a list of substances, where the substances are given in differing (but compatible) unit volumes.
Unit Conversion Table: The unit conversion table stores various units and how those units relate: … Sorting by the coefficient shows that the … This table can be created in PostgreSQL using: … There should be a foreign key from …
Substance Table: The Substance Table lists specific quantities of substances. For example: … The table might resemble: …
Problem: How would you create a query that finds a measurement to represent the sum of the substances using the fewest digits that has a whole number (and optionally a real component)? For example, how would you return: … Mostly, I'm having trouble picking "centilitre" over "millilitre", taking into consideration that the millilitre's parent measurement unit is the centilitre.
Source Code: So far I have (obviously non-working): …
Ideas: Does this require using log10 to determine the number of digits?
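Since the original tables were elided from this digest, here is a hedged sketch under assumed names -- unit_conversion(unit_name, coefficient), where coefficient converts the unit to a common base, and substance(name, quantity, unit) -- using log() (base 10 in PostgreSQL) to count integer digits:

    SELECT u.unit_name,
           total.amount / u.coefficient AS display_value
    FROM (SELECT SUM(s.quantity * c.coefficient) AS amount
            FROM substance s
            JOIN unit_conversion c ON c.unit_name = s.unit) AS total
    CROSS JOIN unit_conversion u
    WHERE total.amount / u.coefficient >= 1             -- keep a whole-number part
    ORDER BY floor(log(total.amount / u.coefficient)),  -- fewest integer digits wins
             u.coefficient DESC                         -- prefer the larger unit on ties
    LIMIT 1;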
Does my.cfg affect number of inserts / sec? Posted: 25 Jun 2013 04:17 PM PDT I'm a complete noob to MySQL, and after a few hours of InnoDB tuning I got nowhere. Either I'm doing something wrong, which I really hope :), or my.cfg settings don't affect insert performance? On many websites I read that they do, so here goes; I'll start with the basics and try to explain my steps. Bear with me, and I hope someone can point out what I'm doing wrong. Server info: VPS on Tilaa, 8 GB RAM, 120 GB RAID 10, 4 x 2.4 GHz (90%, so 8.6 GHz). Database info: the table engine used is InnoDB. Table name: … Current number of inserts: … Steps I took: I open the … So basically this doesn't do a lot. After reading some questions on DBA Stack Exchange and surfing the Internet, I added these extra settings: … After saving these settings I delete the files: … Finally, running the insert query (10,000 times), I again see a 9.7-second runtime, which is exactly the same as before the change to the cfg file. Does anyone have an idea what I'm doing wrong or forgetting? Side note: when I open phpMyAdmin and go to system variables, I do indeed see a buffer pool size of 4096 MB (so the settings really did change?). Any help is welcome!
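A hedged note for readers with the same symptom: when the benchmark is 10,000 single-row INSERTs under autocommit, throughput is usually bound by the log flush at each commit rather than by buffer-pool size, so my.cnf tuning barely moves it. Two common levers, with placeholder table and column names:

    -- Batch the rows into one transaction: one log flush instead of 10,000.
    START TRANSACTION;
    INSERT INTO test_table (val) VALUES (1);
    INSERT INTO test_table (val) VALUES (2);
    -- ... remaining rows ...
    COMMIT;

    -- Or relax durability so each commit no longer waits on a disk flush:
    SET GLOBAL innodb_flush_log_at_trx_commit = 2;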
InnoDB pop queue Posted: 25 Jun 2013 04:09 PM PDT I have a question about implementing a queue with an InnoDB table, using the Python/Django framework. In a multiprocessing environment, my code hits a deadlock. Please see my question on Stack Overflow for all the details.
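For context, the usual InnoDB "pop" looks like the sketch below -- lock one unclaimed row, mark it, commit -- and the SELECT ... FOR UPDATE is exactly where gap-lock deadlocks tend to surface under concurrency; the table is an invented example:

    START TRANSACTION;
    SELECT id FROM task_queue
     WHERE claimed = 0
     ORDER BY id
     LIMIT 1
     FOR UPDATE;                  -- concurrent poppers block or deadlock here
    UPDATE task_queue SET claimed = 1
     WHERE id = 42;               -- the id returned by the SELECT
    COMMIT;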
Postgresql not starting correctly, or is it repairing itself first? Posted: 25 Jun 2013 07:30 PM PDT I never saw this problem before. I had problems and many postgresql processes were stuck, so I killed them with -KILL... When I try to restart, it says that it cannot start, but the daemon continues to run, using a little CPU and a lot of I/O. Is it trying to repair the database? I get no log at all; I think there is a way to increase log output, and I'll look into that. At this point, the socket to connect to the server doesn't get created, but the server does not quit or emit any error or message, so I have no clue what is going on. If anyone has a clue, I'd be glad to hear about it.
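That behaviour is consistent with crash recovery: after a kill -9, Postgres replays WAL on startup and creates no socket and accepts no connections until replay finishes. A sketch of postgresql.conf settings that surface recovery progress in the log; values are illustrative:

    logging_collector = on
    log_directory = 'pg_log'      # relative to the data directory
    log_min_messages = info       # use debug1 for more startup detail

With logging on, startup after a hard kill typically reports "database system was interrupted" followed by redo progress messages.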
How to find mongo document by ObjectID age Posted: 25 Jun 2013 03:19 PM PDT I have a collection of documents from which I'd like to pull the subset created after a certain point in time. I understand the timestamp of creation is encoded in each document's ObjectID (assuming they are auto-generated). I see the ObjectId has a getTimestamp method that returns that portion of the ObjectID as an ISODate. I'm not very fluent in mongo and am having trouble constructing this seemingly simple query. For bonus points: once I figure out the "where clause", if you will, I want to select a single field from the documents, using mongodump or whatever else might be available, to export the results to a text file via a mongo shell.
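A sketch in the mongo shell: an ObjectId's first four bytes are the creation time in seconds, so a cutoff date can be turned into a synthetic ObjectId and range-compared against _id. The collection and field names are invented:

    var cutoff = new Date("2013-06-01T00:00:00Z");
    var oidCutoff = ObjectId(
        Math.floor(cutoff.getTime() / 1000).toString(16) + "0000000000000000");
    db.mycollection.find({ _id: { $gt: oidCutoff } },
                         { myfield: 1, _id: 0 })
                   .forEach(function (doc) { print(doc.myfield); });

Saved as getrecent.js, something like `mongo mydb getrecent.js > out.txt` redirects the printed field values to a text file.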
Wrong return results Posted: 25 Jun 2013 01:09 PM PDT I'm trying to grab all the rows that have a risk of critical or high, where the description, synopsis, solution, or cve is like 'password'. But it keeps showing all rows, not just rows with a risk of critical or high. If I execute the following query I get the correct return: …
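The symptom described -- the risk filter being ignored -- is the classic AND/OR precedence slip: AND binds tighter than OR, so the risk condition attaches to only one LIKE branch. A sketch of the parenthesized form, with table and column names assumed from the description:

    SELECT *
    FROM findings
    WHERE risk IN ('Critical', 'High')
      AND ( description LIKE '%password%'
         OR synopsis    LIKE '%password%'
         OR solution    LIKE '%password%'
         OR cve         LIKE '%password%' );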
Query returning correct data, and additional data [on hold] Posted: 25 Jun 2013 12:58 PM PDT Thank you all in advance for any responses. I am querying various Snort tables in order to produce a report. When I run my current query, I receive the expected results (as verified by our IDS console), but I also get incorrect results. I suspect the issue is with my JOIN statements. Here is a link to the Snort database layout: http://acidlab.sourceforge.net/acid_db_er_v102.html Here is my current query: …
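Since the query itself was elided from this digest, here is a sketch of the join shape that schema calls for: the detail tables hang off event by the composite key (sid, cid), and joining on only half of that key is a common source of extra rows. The selected columns are illustrative:

    SELECT s.sig_name, e.timestamp, i.ip_src, i.ip_dst
    FROM event e
    JOIN signature s ON s.sig_id = e.signature
    JOIN iphdr i     ON i.sid = e.sid
                    AND i.cid = e.cid;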
Oracle 11g bug? Not returning the record until I toggled the index to invisible and then back to visible Posted: 25 Jun 2013 12:42 PM PDT We are using Oracle 11g, 11.2.0.3. We know a record exists in a table, but a SELECT is not returning it for some odd reason.
Facts: statistics were quite old. Session parameters for Oracle SQL Developer: … The SQL is a bit hard to follow since it is produced by a content-management application; I will not provide it for now.
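For reference, a sketch of the workaround described plus the statistics refresh that the "quite old" statistics suggest; owner, index, and table names are placeholders:

    ALTER INDEX app_owner.suspect_idx INVISIBLE;
    ALTER INDEX app_owner.suspect_idx VISIBLE;

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP_OWNER',
                                    tabname => 'SUSPECT_TABLE',
                                    cascade => TRUE);
    END;
    /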
Page Break is splitting in the middle of a single row in SSRS Posted: 25 Jun 2013 12:56 PM PDT I have an SSRS report for an invoice, and it generally works perfectly, but occasionally it will page-break in the middle of a row in the main tablix. The row splits, leaving part of the text on one page and the rest on the next. The tablix has no inherent page breaking; I was just relying on it to break between rows (regardless of which rows). A couple of rows repeat on each page, and there is a second tablix below the detail one with summary totals. That is just some background info about the report. I'm relatively new to SSRS and haven't had the greatest luck with formatting thus far. Thanks for any help!
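A hedged pointer for readers with the same symptom: the renderers will split a detail row across pages unless the row is marked to be kept together. In the report's RDL, that is the KeepTogether element on the row's static tablix member (surrounding RDL trimmed to the relevant part):

    <TablixMember>
      <KeepTogether>true</KeepTogether>
    </TablixMember>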
Is it possible to build an UNDO framework for postgres? Posted: 25 Jun 2013 05:25 PM PDT I was thinking of a table which would automatically log all transactions made to other tables, along with the command needed to undo each modification. So every time you issue a statement, a log trigger records its inverse. Example: you issue a DELETE, so the log trigger records the INSERT that would restore the deleted row. When you issue the undo command, the logged statements are replayed in reverse. Someone experienced, please tell me if this is possible.
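A minimal sketch of the logging half, assuming a two-column items(id, name) table; format() quotes identifiers (%I) and literals (%L) so the stored statement replays safely:

    CREATE TABLE undo_log (
        id       bigserial PRIMARY KEY,
        issued   timestamptz NOT NULL DEFAULT now(),
        undo_sql text NOT NULL
    );

    CREATE OR REPLACE FUNCTION log_undo_delete() RETURNS trigger AS $$
    BEGIN
        -- Record the INSERT that would restore the row being deleted.
        INSERT INTO undo_log (undo_sql)
        VALUES (format('INSERT INTO %I VALUES (%L, %L)',
                       TG_TABLE_NAME, OLD.id, OLD.name));
        RETURN OLD;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER items_undo_delete
        BEFORE DELETE ON items
        FOR EACH ROW EXECUTE PROCEDURE log_undo_delete();

Undo then means executing the undo_sql rows in reverse id order (and deleting them as they are replayed).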
Latin1 to UTF8 on large MySQL database, minimal downtime Posted: 25 Jun 2013 04:23 PM PDT Let me lay out the basic scenario for the problem I'm up against.
I'm confident in actually converting the data between the two character sets (using …). One idea is to set up replication to a slave; turn off replication; convert everything; re-enable replication and let the slave catch up; then finally promote the slave to master. However, this doesn't solve the issue of how to set up replication in the first place. From what I've read, even mysqlhotcopy and xtrabackup still require table locks for MyISAM tables. Is there another option I've missed, or am I going to have to turn off my entire application in order to set this up? Thanks for any advice.
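For reference, a sketch of the column-level conversion usually meant here, for latin1 columns that already hold UTF-8 bytes; the two-step trip through VARBINARY stops MySQL from transcoding the bytes a second time. Table, column, and length are placeholders:

    ALTER TABLE t MODIFY c VARBINARY(255);
    ALTER TABLE t MODIFY c VARCHAR(255) CHARACTER SET utf8;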
Postgres Write Performance on Intel S3700 SSD Posted: 25 Jun 2013 12:05 PM PDT I'm not seeing the Postgres write-performance increases I thought I would with a single SSD vs a hardware RAID 10 array of (8) 15k RPM SAS drives. I have a Dell R820 with a PERC H700 hardware RAID card and 8 15k RPM SAS drives in a RAID 10 array, as well as an 800 GB Intel S3700 SSD. The server has 128 GB of RAM and 64 cores of Xeon E5-4640 at 2.40 GHz, running CentOS 6.4 and Postgres 9.2.4. I'm using pgbench to compare the SAS drives in the RAID 10 array to the single SSD.

15k RPM SAS RAID 10 Results

pgbench -U postgres -p 5432 -T 50 -c 10 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 10
number of threads: 1
duration: 50 s
number of transactions actually processed: 90992
tps = 1819.625430 (including connections establishing)
tps = 1821.417384 (excluding connections establishing)

Single Intel S3700 SSD Results

pgbench -U postgres -p 5444 -T 50 -c 10 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 10
number of threads: 1
duration: 50 s
number of transactions actually processed: 140597
tps = 2811.687286 (including connections establishing)
tps = 2814.578386 (excluding connections establishing)

In real-world usage we have a very write-intensive process that takes about 7 minutes to complete, and the RAID 10 array and SSD are within 10 or 15 seconds of each other. I expected far better performance from the SSD.

Here are Bonnie++ results for the SSD:

Version 1.96          ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1       -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine          Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
openlink2.rady   252G   532  99 375323  97 183855  45  1938  99 478149  54 +++++ +++
Latency               33382us   82425us     168ms   12966us   10879us   10208us

Version 1.96          ------Sequential Create------ --------Random Create--------
openlink2.radyn.com   -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                   16   5541  46 +++++ +++ +++++ +++ 18407  99 +++++ +++ +++++ +++
Latency                1271us    1055us    1157us     456us      20us     408us

Here are Bonnie++ results for the RAID 10 15k RPM drives:

Version 1.96          ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1       -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine          Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
openlink2.rady   252G   460  99 455060  98 309526  56  2156  94 667844  70 197.9  85
Latency               37811us   62175us     393ms   75392us     169ms   17633us

Version 1.96          ------Sequential Create------ --------Random Create--------
openlink2.radyn.com   -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                   16  12045  95 +++++ +++ +++++ +++ 16851  98 +++++ +++ +++++ +++
Latency                7879us     504us     555us     449us      24us     377us

Here are dd results for the SSD:

dd if=/dev/zero of=/path/on/ssd bs=1M count=4096 conv=fdatasync,notrunc
4294967296 bytes (4.3 GB) copied, 12.7438 s, 337 MB/s

And here are dd results for the RAID 10 15k RPM drives:

dd if=/dev/zero of=/path/on/array bs=1M count=4096 conv=fdatasync,notrunc
4294967296 bytes (4.3 GB) copied, 8.45972 s, 508 MB/s

I'd post the Postgres config, but it's clear the SSD isn't outperforming the RAID 10 array, so it doesn't seem applicable. So is the SSD performing as it should be?
Or is the RAID 10 with fast drives just so good that it outperforms a single SSD? A RAID 10 array of the SSDs would be awesome, but at $2,000 each the $8,000 price tag is hard to justify (unless we were sure to see the 2x to 5x gains we were hoping for in real-world performance).
pg_upgrade fails with lc_ctype cluster values do not match Posted: 25 Jun 2013 07:46 PM PDT I'm upgrading my PostgreSQL 9.1.4 database to version 9.2.4. Both the old and the new version are the bundled versions from postgresapp.com for Mac OS X. When trying to upgrade the database I get this error: … I searched for this error message and found no useful tip to fix it. Any ideas?
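A hedged pointer: pg_upgrade requires the new cluster's locale settings to match the old cluster's, so the usual fix is to look up the old value and re-initialize the 9.2 cluster with it; the data directory and locale below are illustrative:

    psql -p 5432 -c 'SHOW lc_ctype;'               # ask the old 9.1 cluster
    initdb -D /path/to/9.2/data --locale=en_US.UTF-8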
Why is Postgres on 64-bit CentOS 6 significantly slower than Postgres on 32-bit CentOS 6 Posted: 25 Jun 2013 07:44 PM PDT We have some Postgres + PostGIS applications that run well on CentOS 6 32-bit machines. We've recently been testing them on CentOS 6 64-bit machines with similar configuration (all our machines are managed by Puppet), and the applications run significantly slower. Even loading the database schemas takes several times as long: on the 32-bit machines, loading PostGIS, the schema, and fixtures takes 7 seconds; on the 64-bit machines, it takes 50-110 seconds. We initially had the problems with virtual servers, so we ran tests on a physical machine and found the same problems; note that the physical 64-bit machine is slower than the virtual 32-bit machines. The databases are not large at all. We've experimented with various parameters in postgresql.conf and not gotten any improvement. Is this a known issue with Postgres or PostGIS on 64-bit CentOS? If not, how do we diagnose this?
How to repair Microsoft.SqlServer.Types assembly Posted: 25 Jun 2013 04:39 PM PDT When I run CHECKDB('mydb'), this is the only error message printed: … It refers to 'Microsoft.SqlServer.Types'. I see that in this db the clr_name is blank, but under the master db there is a value there. I tried to drop or alter the assembly to add this value, but it's restricted. By the way, this db was recently upgraded from SQL Server 2005 to 2008 R2.
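For readers making the same comparison, the clr_name check described can be done against the sys.assemblies catalog view (the view and column are real; 'mydb' is the asker's database name):

    USE mydb;
    SELECT name, clr_name FROM sys.assemblies
    WHERE name = N'Microsoft.SqlServer.Types';

    USE master;
    SELECT name, clr_name FROM sys.assemblies
    WHERE name = N'Microsoft.SqlServer.Types';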
Unable to connect to Amazon RDS instance Posted: 25 Jun 2013 01:38 PM PDT I recently created an Oracle instance on Amazon RDS. Unfortunately, I'm not able to connect to the instance using Oracle SQL Developer. The (relevant) information I have from Amazon: Endpoint - the DNS address of the DB Instance: xxx.yyy.eu-west-1.rds.amazonaws.com. DB Name - the definition of the term Database Name depends on the database engine in use. For the MySQL database engine, the Database Name is the name of a database hosted in your Amazon DB Instance; an Amazon DB Instance can host multiple databases, and databases hosted by the same DB Instance must have unique names within that instance. For the Oracle database engine, Database Name is used to set the value of ORACLE_SID, which must be supplied when connecting to the Oracle RDS instance: ZZZ. Master Username - name of the master user for your DB Instance: org. Port - port number on which the database accepts connections: 1521. From this information the connection settings in SQL Developer are pretty obvious, so I don't really see what I could be missing...
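One hedged thing to check given those values: for Oracle on RDS the DB Name sets a SID, not a service name, so in SQL Developer the SID field (not Service name) should carry ZZZ. The equivalent full connect descriptor, built only from the values quoted above:

    org@(DESCRIPTION=
          (ADDRESS=(PROTOCOL=TCP)(HOST=xxx.yyy.eu-west-1.rds.amazonaws.com)(PORT=1521))
          (CONNECT_DATA=(SID=ZZZ)))

If the attempt times out rather than failing authentication, the instance's security group is usually what is blocking port 1521.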
Will Partitions and Indexes on the same table help in performance of Inserts and Selects? Posted: 25 Jun 2013 02:39 PM PDT I have a table containing the list of visitors, and this table has the following information: …
I have a second table that maintains the history of each visit; if the same visitor visits the site, I insert into the second table and update the number of visits in the first table. The kinds of reports that I have to generate for this table are: …
On average there are about 20,000 inserts to the second table and about 15,000 inserts to the first table, meaning 5,000 were updates to the first table (5,000 repeat visits). I need to decide on partitioning the tables by month with sub-partitioning by day for reports 1-3, and indexing the browser-related columns for report 4; there will be more reports in the future, and I am not sure of their clauses yet. Does partitioning/sub-partitioning along with indexing help the performance of inserts and selects? Should I perform partitioning on both tables? I am currently using MySQL 5.5 + InnoDB.
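A sketch of the monthly RANGE partitioning being considered for the history table, with an index for the browser report; all names and columns are assumptions, and note that MySQL 5.5 can only sub-partition a RANGE partition by HASH or KEY:

    CREATE TABLE visit_history (
        visitor_id BIGINT       NOT NULL,
        visited_at DATETIME     NOT NULL,
        browser    VARCHAR(64),
        KEY idx_browser (browser)
    ) ENGINE=InnoDB
    PARTITION BY RANGE (TO_DAYS(visited_at)) (
        PARTITION p201306 VALUES LESS THAN (TO_DAYS('2013-07-01')),
        PARTITION p201307 VALUES LESS THAN (TO_DAYS('2013-08-01'))
    );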
"Arithmetic overflow" when initializing SQL Server 2012 replication from backup Posted: 25 Jun 2013 03:39 PM PDT I'm initializing SQL Server replication from a backup, by following instructions from here: ...but, when I execute I get the following error: Any idea why, or at least where can I find this stored procedure to troubleshoot further? SQL Server 2012, Standard Edition. UPDATE: It looks like that the problem is caused by the fact that database was created using SQL Server 2008R2 and then attached here. Anyway, still need a solution for it. |
limit the number of rows returned when a condition is met? Posted: 25 Jun 2013 08:19 PM PDT Is it possible to limit the number of rows returned when a condition is met? I am working on a query to check whether a student is ready to graduate; they need to meet a certain number of credits per subject. I don't want all classes, because any class past the number of credits needed can be used for electives. EDIT: I forgot to mention that this is SQL Server 2008 R2. I was hoping to be able to do something like this (which I know doesn't work): … Any help would be great. Data: … Query: … I'm expecting to see these rows returned. Expected Results: …
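Here is a hedged 2008 R2-compatible sketch of the shape usually used for this: number the classes per subject, compute a running credit total with a correlated subquery (SUM() OVER (ORDER BY ...) only arrives in 2012), and keep each row only while the credits before it are still short of the requirement. All table and column names are assumptions:

    WITH ranked AS (
        SELECT student_id, subject, class_name, credits,
               ROW_NUMBER() OVER (PARTITION BY student_id, subject
                                  ORDER BY class_name) AS rn
        FROM student_classes
    )
    SELECT r.student_id, r.subject, r.class_name, r.credits
    FROM ranked AS r
    JOIN subject_requirements AS q
      ON q.subject = r.subject
    WHERE COALESCE((SELECT SUM(r2.credits)
                    FROM ranked AS r2
                    WHERE r2.student_id = r.student_id
                      AND r2.subject    = r.subject
                      AND r2.rn         < r.rn), 0) < q.required_credits;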
tempdb logs on same drive as tempdb data or with other logs? Posted: 25 Jun 2013 02:04 PM PDT For many reasons I have only 3 hard drives (RAIDed, and in an Always On AG) for all my database files: …
Should the tempdb log file go on F: with the data file(s), or on E:? My tempdb data file has the highest stalls by far, with the log file 4th out of 24. In my limited DBA experience (I'm a developer), I would lean toward putting tempdb.ldf on E:, as the writes will all be sequential.
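If the log does move, it is a one-statement change plus an instance restart (tempdb is recreated on startup); the path is illustrative:

    ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = N'E:\SQLLogs\templog.ldf');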
How to remove column output in a for xml path query with a group by expression? Posted: 25 Jun 2013 11:38 AM PDT I forgot how to keep a column from being output in a FOR XML PATH query. Added XML body: … I will look around again online, but I am asking for the SQL Server syntax to NOT use the "idForSomething" column in the final output. I thought it was something like NOOUTPUT, but I can't remember, and it does not work.
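The usual trick, since there is no NOOUTPUT-style modifier: group in a derived table and let only the wanted columns reach the FOR XML projection. Apart from the asker's idForSomething, the names are placeholders:

    SELECT t.something_name
    FROM (SELECT idForSomething,
                 MAX(name) AS something_name
          FROM dbo.SomeTable
          GROUP BY idForSomething) AS t
    FOR XML PATH('row'), ROOT('rows');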
Why don't I need to COMMIT in a database trigger? Posted: 25 Jun 2013 11:51 AM PDT We can't COMMIT/ROLLBACK in DML triggers, because the transaction is handled manually after the DML statement. However, database triggers seem to be an exception. For example, suppose there's a database trigger: … The trigger does not contain an autonomous-transaction procedure with a COMMIT inside it, so who is committing the insert? This trigger works like a charm and inserts a new record into the log table after user logon. It smells like hidden Oracle functionality, and I can't find any reference in the Oracle docs about it. I'm using Oracle 11g.
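A sketch of the kind of trigger being described -- an AFTER LOGON ON DATABASE trigger writing an audit row; the log table and columns are assumptions:

    CREATE OR REPLACE TRIGGER logon_audit_trg
    AFTER LOGON ON DATABASE
    BEGIN
        INSERT INTO logon_log (username, logon_time)
        VALUES (SYS_CONTEXT('USERENV', 'SESSION_USER'), SYSDATE);
        -- no COMMIT here, yet the row persists after logon
    END;
    /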
Inserting query result to another table hangs on "Copying to temp table on disk" on MySQL Posted: 25 Jun 2013 01:10 PM PDT I started the process of inserting returned results into another table. The query groups the rows by indexed IDs, which reduces 149,000,000 rows to 460,000 rows. The query includes 3 tables: … For further information: the process completes in about 12 seconds for a test file with 1,000 input rows, returning 703 rows. I started the query earlier, but it is still running in the state "Copying to temp table on disk" after 38,000 seconds (10 and a half hours). I think there is a problem during the insertion process. What am I probably doing wrong here? If it helps, the computer's operating system is Windows 7, with 3 GB RAM and an Intel Core 2 Duo 2.27 GHz processor. Here's my query as it currently reads: …
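A hedged note on the state itself: MySQL spills an implicit GROUP BY temporary table to disk once it outgrows both tmp_table_size and max_heap_table_size, and always when the intermediate result carries TEXT/BLOB columns. The session ceilings can be raised before running the INSERT ... SELECT; 256 MB is an illustrative value for a 3 GB machine:

    SET SESSION tmp_table_size      = 256 * 1024 * 1024;
    SET SESSION max_heap_table_size = 256 * 1024 * 1024;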
Primary key type change not reflected in foreign keys with MySQL Workbench Posted: 25 Jun 2013 03:18 PM PDT I have a problem with MySQL Workbench and primary/foreign keys. I have some tables whose PKs are involved in relationships with other tables. If I modify the type of a PK, the type of the FK doesn't automatically update to reflect the change. Is there any solution, or do I have to manually modify all the relations?