[how to] Oracle datafile extension .dat or .dbf
- Oracle datafile extension .dat or .dbf
- How to setup SQL active/active cluster to achieve Blue / Green instance switching?
- Linked Server Login Timeout but SqlCmd works
- Sync two Oracle production server database
- All four data nodes in MySQL Cluster in same node group
- PostgreSQL 9.2.4 (Windows 7) - Service won't start, “could not load pg_hba.conf”
- Different dates Oracle 11g with TOAD
- How should I arrange a database replication for my site?
- Why do these queries show up in the slow-query log? Interpreting EXPLAIN
- What are reasonable options for an in-memory multi-core database?
- Oracle pivot on a column with delimited data
- Autogrow. Primary vs additional data files
- SSIS How can I write to a Specific Cell in an Excel Sheet
- How much data can SQL Server full text indexing handle?
- PK as ROWGUIDCOL or use a separate rowguid column?
- How can I achieve a unique constraint with two fields?
- memory used by Locks
- Postgres won't shut down due to WAL archiving
- Algorithm for finding the longest prefix
- I can't start MySQL 5.6 server due to "TIMESTAMP with implicit DEFAULT value is deprecated" error
- How to set SQL Server index pages per fragment?
- pgAdmin3 can't connect properly to Postgres 9.2
- Relation to original tables or to existing linking table
- OK to put temp tablespace on volatile storage or to omit it from backups? (Postgresql)
- mysql optimize table crash
- Is it possible to have extra tables in a Slave with MySQL Replication
- Replication issue - CREATE SELECT alternative?
- query processor ran out of internal resources and could not produce a query plan
- Can I monitor the progress of importing a large .sql file in sqlite3 using zenity --progress?
Oracle datafile extension .dat or .dbf Posted: 30 May 2013 09:08 PM PDT I have seen these two extensions used for datafiles. Here are two examples from Oracle Database SQL Reference 10g Release 2 (10.2):
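For illustration, a minimal sketch (tablespace names and file paths are hypothetical, not from the reference): Oracle does not enforce any particular datafile extension, so both of these statements are valid.

```sql
-- .dbf is merely the common convention; .dat (or anything else) works too.
CREATE TABLESPACE ts_conventional
  DATAFILE '/u01/oradata/orcl/ts_conventional01.dbf' SIZE 100M;

CREATE TABLESPACE ts_alternative
  DATAFILE '/u01/oradata/orcl/ts_alternative01.dat' SIZE 100M;
```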
How to setup SQL active/active cluster to achieve Blue / Green instance switching? Posted: 30 May 2013 08:00 PM PDT I am wondering if anyone has ever used a multi-instance cluster (nee 'Active/Active') to achieve blue/green (or A/B) deployment scenarios, and what the best way of configuring it is (using SQL 2012 / Windows 2008 R2)? To be specific, the scenario I want to achieve is to be able to switch between which cluster instance is being connected to by clients without either the clients or the SQL instances knowing (I stress I'm not talking about node failover here). I'm envisaging that the best way to achieve this is something like:
This should hopefully enable me to do the following:
Joining the dots, it seems like this should be possible:
... but I've never seen a full example. Has anyone done it? Will what's proposed above work? What have I missed?
Linked Server Login Timeout but SqlCmd works Posted: 30 May 2013 08:48 PM PDT I've got a SQL 2005 SP4 server that connects to a 2008 SP3 instance via linked servers using the SQL Server server type. Every once in a while, one of those linked servers will start throwing login timeouts. To rule out firewalls, I can RDP to the server and run sqlcmd and get in just fine, even making sure to use the same login. I'm thinking that SQL has somehow cached something that prevents it from finding the right address. The remote server name is defined in that machine's hosts file. So far, only a reboot fixes the issue. *Edit: the linked server is set up using a remote SQL login. Any ideas?
Sync two Oracle production server database Posted: 30 May 2013 05:54 PM PDT I have an Oracle database that runs a six-hour batch job every day. This process slows down performance during that six-hour timeframe. Is there any method by which I could build another server that runs the batch job and then, once it is done, syncs the data back to the production server? (The time taken must be shorter than six hours.) Please advise. Thanks, Shawn
All four data nodes in MySQL Cluster in same node group Posted: 30 May 2013 04:47 PM PDT I am testing MySQL Cluster 7.2. I have two servers. With this configuration I would expect, as per the MySQL Cluster documentation, that the two data nodes 3 and 4 would be in nodegroup 0, while the two data nodes 13 and 14 would be in nodegroup 1. However, when I start everything up and show the nodes, I see this: Everything seems to be in nodegroup 0! What do I have to do to get 3 and 4 in one group and 13 and 14 in another?
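A hedged sketch of a config.ini that should pin the nodes as described (only the node IDs come from the question; hostnames and data directories are omitted): explicit NodeGroup assignments override the automatic grouping, which otherwise follows the order in which the [ndbd] sections appear.

```ini
[ndbd default]
NoOfReplicas=2

[ndbd]
NodeId=3
NodeGroup=0

[ndbd]
NodeId=4
NodeGroup=0

[ndbd]
NodeId=13
NodeGroup=1

[ndbd]
NodeId=14
NodeGroup=1
```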
PostgreSQL 9.2.4 (Windows 7) - Service won't start, “could not load pg_hba.conf” Posted: 30 May 2013 03:56 PM PDT I am trying to get Postgres 9.2.4 to run as a service on Windows 7. After installing postgres, the service was running fine. However, after setting postgres up as a server for another program, the service stopped running. When I try to start the service now, I get a message saying:
When I try running the program that should use the database server, I get this error:
I have also encountered this error once while opening the same program:
I have tried running the service logged on as a local system account as well as my own account (in the Postgres service properties) to no avail. I also tried restarting my computer. After a lot of troubleshooting online, I learned that a good thing to check is the pg_log file. Here are the contents of the latest pg_log entry: It seems to be having issues with the pg_hba.conf file, which looks like this: As per many suggestions online, I tried editing the top line to a number of different alternatives (host all all trust / host all 127.0.0.1/32 trust / host all 192.168.0.100/24 trust, etc.). This made sense to me, as the log file was saying that local connections are unsupported by postgres and was also pointing to that line. However, none of my changes had any effect. I tried restarting my computer after every change but nothing made any difference. When I searched for examples of what a pg_hba.conf file normally looks like, the examples looked slightly different from my file. I noticed that in the PostgreSQL program file, in addition to pg_hba.conf, there was also a "20130529-150444-old-pg_hba.conf" file which looked a lot more like the examples I was finding online. This file has several lines of comments before these last few lines: I was hoping that this was the original pg_hba.conf file and that if I replaced the new file with the contents of the old one, postgres would start working again. No such luck. I have been hoping for more error files to be logged in pg_log to see if the previously stated error had disappeared or changed to something else, but no more files have been logged. I have been troubleshooting online for a few days now and nothing I've found has worked. Sorry for having such a long question, but I wanted to be thorough and include all relevant information. I would appreciate it if anyone could shed some light on this problem or offer suggestions.
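For comparison, a typical stock pg_hba.conf for PostgreSQL 9.x on Windows looks roughly like this (a sketch, not the questioner's actual file). Note that `local` lines are Unix-socket records and are indeed unsupported on Windows; TCP connections need `host` records:

```
# TYPE  DATABASE  USER  ADDRESS       METHOD
host    all       all   127.0.0.1/32  md5
host    all       all   ::1/128       md5
```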
Different dates Oracle 11g with TOAD Posted: 30 May 2013 03:25 PM PDT I have the following queries: Why does the first one return the year By Googling I have found that I can "force" the client date format to be the one desired by changing the Thanks in advance!
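The truncated sentence presumably refers to the session's date format; in Oracle clients such as TOAD this is usually governed by the NLS_DATE_FORMAT setting (an assumption here, since the question's detail is cut off). A sketch:

```sql
-- Force an explicit date format for the current session:
ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS';
```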
How should I arrange a database replication for my site? Posted: 30 May 2013 05:46 PM PDT Here is my problem. I have a busy Drupal site struggling under high load. After applying all caches I see that the database is the bottleneck. I have two servers to handle the site: A and B, on the same rack/subnet. Server A is the frontend web server and is set to hand off all database queries to server B. Currently there is no database set up on A. The database on B is MariaDB 10. CPU-wise, server A is much less powerful than B, but it has the same amount of RAM. The load on server A is very low (< 0.5). The load on server B is not low (> 5). The reads/writes ratio is currently 92% / 8%. So my questions are: - Is there any benefit in defining a master/slave database on these two servers? - If it is a good idea to go the master/slave route, how do you arrange the servers? (Which server should be the master? Which one should be the frontend?)
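A minimal sketch of the usual arrangement for such a read-heavy ratio (server IDs and option values below are assumptions): keep the stronger machine B as the master for writes, and make A a read-only slave so Drupal's reads can be served locally on the web server.

```ini
# my.cnf on B (master):
[mysqld]
server-id = 1
log_bin   = mysql-bin

# my.cnf on A (slave):
[mysqld]
server-id = 2
read_only = 1
```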
Why do these queries show up in the slow-query log? Interpreting EXPLAIN Posted: 30 May 2013 08:26 PM PDT I'm having a hard time interpreting the These are the table definitions And this is the amount of data in every table.
What are reasonable options for an in-memory multi-core database? Posted: 30 May 2013 01:03 PM PDT I'm going to preface this with pointing out that I only need somewhat persistent data. The purpose of this database platform would be to support statistical analysis in R. I usually build my tables from csv files I get from clients and query those tables to build flat files to dump into R. I can either import a .csv type file or run a query from R. So, essentially I'm performing a lot of inner and outer joins on the entire data set to get the resulting output I need. To date, my databases haven't exceeded 5-10GB. I may have projects in the near future that will be larger but I don't see anything that would exceed memory. I need maximum speed for a little while. To admit a little guilt - I would be happy with sqlite if it supported full joins (without getting too hacky) and if it had good multi-core support. I like its simplicity - it just doesn't perform well enough. Or I'm too ignorant. Options I have explored are:
I'm switching from my laptop (running ubuntu) which frequently overheats to an Amazon EC2 instance which I can scale up as much as I need. Thus the need for good multi-core support. I'll likely build my tables in an on-demand instance and do the heavy querying in spot instances. My laptop has already conditioned me for periodic shut-downs so, I'm not too worried about that. I've already built an instance with R and have been having fun playing with AWS for other projects over the last few months. I'm not beholden to any specific database platform; however, I have reached a point of information paralysis. Reasonable solutions and things to consider will be very helpful. I'm not looking for a step-by-step how to - that's what Google and the rest of stack exchange is for. I've also been avoiding Amazon's RDS service for this. I'm not exactly sure why - probably so I can use spot instances. I'm also open to the idea that I'm looking at my problem all wrong. Should I abandon SQL altogether?
Oracle pivot on a column with delimited data Posted: 30 May 2013 12:59 PM PDT My data is like: Where a column is delimited in the source system with semicolons. And I want to pivot it to: I found a technique here, but I can't quite get it to work: I thought I could include the keycol in the CONNECT BY this way to get parallel recursive chains, but I guess it doesn't work like that. I'm pretty sure I've done this with recursive CTEs in SQL Server. http://sqlfiddle.com/#!4/3d378 FWIW, I'm on Oracle 10g.
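A hedged sketch of the standard Oracle 10g row-splitting technique, assuming a source table `src(keycol, vals)` where `vals` holds the semicolon-delimited data: the `PRIOR keycol = keycol` condition keeps each recursive chain within one row, and the `PRIOR DBMS_RANDOM.VALUE` predicate stops CONNECT BY from flagging it as a cycle.

```sql
SELECT keycol,
       REGEXP_SUBSTR(vals, '[^;]+', 1, LEVEL) AS val
FROM   src
CONNECT BY PRIOR keycol = keycol
       AND LEVEL <= LENGTH(vals) - LENGTH(REPLACE(vals, ';')) + 1
       AND PRIOR DBMS_RANDOM.VALUE IS NOT NULL;
```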
Autogrow. Primary vs additional data files Posted: 30 May 2013 12:18 PM PDT My databases all use autogrow, which grows the primary MDF file by a percentage. But one of the databases, from a third-party application, grows by adding additional .NDF files. Where is this option set? When I look at the autogrow settings, there is the option to grow or not, by percentage or by xMB, and an option for limited or unlimited growth. But I see nothing that tells it whether to grow the primary MDF, or to grow by adding additional NDFs. And, is there a way to combine these NDF files back into the primary MDF? Thanks! RZ
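On the second question, a sketch (the database and logical file names below are hypothetical): `DBCC SHRINKFILE ... EMPTYFILE` migrates a secondary file's pages to the remaining files in the same filegroup, after which the empty file can be dropped.

```sql
USE MyDatabase;
DBCC SHRINKFILE (N'MyDatabase_Secondary1', EMPTYFILE);
ALTER DATABASE MyDatabase REMOVE FILE MyDatabase_Secondary1;
```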
SSIS How can I write to a Specific Cell in an Excel Sheet Posted: 30 May 2013 12:59 PM PDT I am trying to complete what I thought would be a very simple task, but after hours of looking through various articles and attempting different methods, I still have not been able to write to a specific cell using SSIS. All I am trying to do is write "DOB" in cell D2 in an Excel sheet. I tried using SQL COMMAND in the Execute SQL Task component to do the UPDATE of the one cell but kept getting error messages. Below is the code I tried. SSIS came back with an error saying it was expecting at least 1 parameter... I also tried but got the following error message:
I tried a few different C# and VB scripts but none of them did the trick. Any ideas or suggestions? I tried modifying the script in the article below to accomplish my task but was unsuccessful: http://bidn.com/blogs/KeithHyer/bidn-blog/2475/updating-a-single-excel-cell-using-ssis I'm thinking there's got to be an easier way.
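One commonly cited workaround, untested here: with a Jet/ACE OLE DB connection to the workbook, a single cell can be addressed as a one-cell range, where `F1` is the provider's generated name for the range's first column (the sheet name is an assumption):

```sql
UPDATE [Sheet1$D2:D2] SET F1 = 'DOB'
```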
How much data can SQL Server full text indexing handle? Posted: 30 May 2013 06:30 PM PDT I realize that the question is vague and it depends on hardware and our needs. We currently have 5 million rows of data, a total of 5GB of data which we want to index using full text indexing. Our data increases quite rapidly and it's not unreasonable to assume that in a few years it will be closer to a billion rows and a TB of data. The index is searchable by web site users, and they expect responses within a second or two. Is it reasonable to assume that this data set will be indexable using SQL Server 2012 full text indexing? Is it common to do full text indexing of this amount of data? And is there any good reading on the subject, for example from others' experience?
PK as ROWGUIDCOL or use a separate rowguid column? Posted: 30 May 2013 03:02 PM PDT There's a long-winded debate going on here so I'd like to hear other opinions. I have many tables with uniqueidentifier clustered PK. Whether this is a good idea is out of scope here (and it's not going to change anytime soon). Now, the database has to be merge published and the DEVs are advocating the use of a separate rowguid column instead of marking the existing PK as the ROWGUIDCOL. Basically, they say that the application should never bring into its domain something that is used by replication only (it's only "DBA stuff" for them). From a performance standpoint, I see no reason why I should add a new column to do something I could do with an existing one. Moreover, since it's only "DBA stuff", why not let the DBA choose? I kind of understand the DEVs' point, but I still disagree. Thoughts? EDIT: I just want to add that I'm in the minority in this debate and the DEVs questioning my position are people I respect and trust. This is the reason why I resorted to asking for opinions.
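For reference, marking an existing uniqueidentifier PK as the ROWGUIDCOL requires no new column (table and column names below are hypothetical):

```sql
ALTER TABLE dbo.MyTable
  ALTER COLUMN MyTableId ADD ROWGUIDCOL;
```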
How can I achieve a unique constraint with two fields? Posted: 30 May 2013 02:32 PM PDT I have a table with e.g. Name and IsDeleted fields. I want to add a row constraint so that only one Name value can have IsDeleted as 'false'. There can be many duplicate Name values, but they must all have IsDeleted as true. How would I write this check constraint?
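Assuming SQL Server 2008 or later (the table name below is made up), a filtered unique index expresses exactly this rule - any number of deleted duplicates, but at most one active row per Name:

```sql
CREATE UNIQUE NONCLUSTERED INDEX UQ_MyTable_Name_Active
    ON dbo.MyTable (Name)
    WHERE IsDeleted = 0;
```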
memory used by Locks Posted: 30 May 2013 05:08 PM PDT I am kind of curious: on a SQL 2012 Enterprise Edition instance with 128 GB of RAM (the database is 370 GB and growing), the amount of memory used by the locks (OBJECTSTORE_LOCK_MANAGER) memory clerk shows 7466016 KB. I can also confirm that by looking at the perf counter However, when I run a query it shows only 16 locks. So what is using over 7 GB of locks? Is there a way to find out? Does that mean that once memory for locks has been allocated, SQL has not yet deallocated it? In the past hour I do not see the lock count exceeding 500, but lock memory stays the same. EDIT: Max Server Memory is 106 GB. We do not use lock pages in memory, and I do not see any memory pressure or any errors in the error log in the past 12 hours. The Available MBytes counter shows more than 15 GB of available memory. EDIT 2: Activity Monitor consistently shows 0 waiting tasks, so obviously no blocking. Considering a SQL Server lock takes about 100 bytes of memory, 7 GB is a lot of memory, and I am trying to find out what is using it. EDIT 3: I ran the server dashboard report for top transactions by lock count and it says "currently no locking transactions are running on the system". However, lock memory still shows as stated above. The DB is most busy during overnight hours.
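As a starting point for digging in, the lock manager's clerk can be inspected directly with a standard SQL Server 2012 DMV (no assumed names here):

```sql
SELECT type, memory_node_id, pages_kb
FROM   sys.dm_os_memory_clerks
WHERE  type = 'OBJECTSTORE_LOCK_MANAGER';
```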
Postgres won't shut down due to WAL archiving Posted: 30 May 2013 07:03 PM PDT I commanded Postgres to shut down using the init.d scripts (Linux) over 18 hours ago. I can still see the processes running. On the standby server (running normally) I see that: The log shows 'FATAL: the database system is shutting down'. What could be the reason for this, and how do I get it back running?
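For reference, the shutdown modes can be driven directly with pg_ctl (the data directory path below is hypothetical): fast mode cancels active sessions instead of waiting for them, and immediate mode is the last resort, since it forces crash recovery on the next start.

```
pg_ctl -D /var/lib/postgresql/9.2/main -m fast stop
# if it still hangs:
pg_ctl -D /var/lib/postgresql/9.2/main -m immediate stop
```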
Algorithm for finding the longest prefix Posted: 30 May 2013 03:28 PM PDT I have two tables. The first one is a table with prefixes. The second is call records with phone numbers. I need to write a script which finds the longest prefix from the prefixes for each record, and writes all this data to a third table, like this: For number 834353212 we must trim the '8', and then find the longest code from the prefix table; it's 3435. I solved this task a long time ago, in a very bad way: a terrible Perl script which did a lot of queries for each record. This script:
The first problem is the query count - it's I tried to solve the second problem by: That speeds up each query, but did not solve the problem in general. I have 20k prefixes and 170k numbers now, and my old solution is bad. It looks like I need some new solution without loops - only one query for each call record, or something like this.
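A single set-based query can replace the per-record loop entirely. A sketch in MySQL syntax, with assumed names `prefixes(code)` and `calls(number)`: for each call, strip the leading '8' and take the longest code that prefixes the remainder.

```sql
SELECT c.number,
       (SELECT p.code
        FROM   prefixes p
        WHERE  SUBSTRING(c.number, 2) LIKE CONCAT(p.code, '%')
        ORDER  BY LENGTH(p.code) DESC
        LIMIT  1) AS longest_code
FROM   calls c;
```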
I can't start MySQL 5.6 server due to "TIMESTAMP with implicit DEFAULT value is deprecated" Error? Posted: 30 May 2013 03:17 PM PDT OK, here is my story: I went to the mysql.com site and downloaded the file Everything was OK, as the MySQL server started smoothly. I then stopped the server using this command: The server was shut down properly. I started and stopped the MySQL server this way a few times without any problem. However, yesterday I started the MySQL server, but then, at the end of the day, I turned off my PC while the MySQL server was still in starting mode (i.e., I did not shut down MySQL using " Also, when my PC got turned off at that time, Windows 7 was starting to download some packages from the internet to update itself, so the configuration of Windows 7 could have changed. But today I could not start the MySQL server using the above command, as there's an error: [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use the --explicit_defaults_for_timestamp server option (see documentation for more details). I searched the internet, and some people said that I have to go to the my.cnf file and add this line: However, there is no my.cnf file in MySQL 5.6; there are a lot of .cnf files in MySQL 5.6, but with different names: I tried to add I don't want to reinstall because I created a big DB in the current MySQL server. So how do I fix it? Note: the first time I ran the MySQL server, Windows 7 popped up a message saying something (I can't remember exactly) such as "do you allow ... Firewall", so do you think that is causing the issue, since Windows 7 got its configuration updated and somehow reset the firewall so the MySQL server couldn't start?
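On Windows, MySQL 5.6 reads my.ini from its base directory; the shipped my-default.ini is only a template and is not read automatically. A sketch, assuming that is the missing piece: copy my-default.ini to my.ini and add the option under the [mysqld] section.

```ini
[mysqld]
explicit_defaults_for_timestamp = TRUE
```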
How to set SQL Server index pages per fragment? Posted: 30 May 2013 02:50 PM PDT I have SQL Server 2008 and a number of databases. I have discovered that one of my tables' indexes is extremely fragmented (how I know: http://msdn.microsoft.com/en-us/library/ms189858.aspx). The Why is the pages per fragment so low? I have
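For reference (the index and table names below are hypothetical): rebuilding coalesces the index into fewer, larger fragments, and the same DMV from the linked article can confirm the effect afterwards.

```sql
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD;

SELECT avg_fragmentation_in_percent,
       fragment_count,
       avg_fragment_size_in_pages
FROM   sys.dm_db_index_physical_stats(
           DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED');
```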
pgAdmin3 can't connect properly to Postgres 9.2 Posted: 30 May 2013 02:48 PM PDT I have installed
Then it will throw this error:
etc. Also I am quite stuck with my working version of
Relation to original tables or to existing linking table Posted: 30 May 2013 11:27 AM PDT In my database I have a table with different Every guest which has settings is already linked to the events for other reasons, so there is an existing So I'm not exactly sure about how I should link the settings table with the others. Option 1 I link the settings with the table Option 2 I link the settings with the "original" tables Spontaneously I would go with option 1, but I'm a little bit confused about it... My concern with option 1 is that if I have a lot of deep relations, maybe even another table after Which is the better solution, and what are its advantages and disadvantages?
OK to put temp tablespace on volatile storage or to omit it from backups? (Postgresql) Posted: 30 May 2013 07:14 PM PDT I would intuit that it's fine, but I just want to make sure there are no gotchas from a recovery point of view: If I were to lose my temp tablespace upon system crash, would this prevent proper crash recovery? Also, if I were to omit the temp tablespace from the base backup, would that prevent proper backup recovery?
mysql optimize table crash Posted: 30 May 2013 02:14 PM PDT When I try
Is it possible to have extra tables in a Slave with MySQL Replication Posted: 30 May 2013 01:14 PM PDT As my title mentions, I have a Master and a Slave database. The Master is for operational data and my Slave mainly for reporting stuff. The issue is that I need to create extra tables on the reporting side that can't be on the Master, but the way my replication is set up (the simplest one mentioned by the official doc) at the moment, this breaks the replication system. How could I add tables on the Slave without the Master caring about it? Is it even possible?
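One common arrangement, sketched with a made-up schema name: keep the reporting-only tables in a schema of their own on the slave, and tell the slave to ignore any master events touching that schema, so a name clash can never break replication.

```ini
# my.cnf on the slave only:
[mysqld]
replicate-ignore-db         = reporting
replicate-wild-ignore-table = reporting.%
```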
Replication issue - CREATE SELECT alternative? Posted: 30 May 2013 03:14 PM PDT I have a MySQL 5.1 slave for our BI team. They need to run some CREATE ... SELECT statements with big SELECT queries (several million rows). As CREATE SELECT is DDL, if the replication attempts to update some rows in the same tables as the SELECT statement, replication is blocked until the CREATE SELECT finishes. Do you know a good non-blocking alternative to those CREATE SELECT statements? I thought of a SELECT INTO OUTFILE then LOAD DATA INFILE, but they will fill up our disks, as BI guys like to do... :) Max.
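A commonly suggested split, sketched with hypothetical names: separating the DDL from the read makes the CREATE itself instantaneous, leaving only an ordinary INSERT ... SELECT to contend with replication (whether that fully unblocks things depends on the isolation level and lock behavior in play).

```sql
CREATE TABLE bi_extract LIKE source_table;
INSERT INTO bi_extract SELECT * FROM source_table;
```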
query processor ran out of internal resources and could not produce a query plan Posted: 30 May 2013 07:16 PM PDT This is showing up in the logs several times a night. How do I find the query causing the issue? SQL Server 2008 R2 SP1. Thank you
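This message corresponds to error 8623, so one possible way to catch the offending statement is an Extended Events session filtering error_reported on that number (a sketch; the session name is arbitrary):

```sql
CREATE EVENT SESSION [catch_8623] ON SERVER
ADD EVENT sqlserver.error_reported
    (ACTION (sqlserver.sql_text, sqlserver.session_id)
     WHERE error = 8623)
ADD TARGET package0.ring_buffer;
ALTER EVENT SESSION [catch_8623] ON SERVER STATE = START;
```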
Can I monitor the progress of importing a large .sql file in sqlite3 using zenity --progress? Posted: 30 May 2013 04:14 PM PDT I'm trying to monitor the progress of a sqlite3 command importing a large .sql file into a database using I've tried the following, which will import the file; however, progress is not shown: I know I need to provide Can anyone help me?
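A sketch using pv (assumes pv is installed; the file and database names are made up): `pv -n` writes the percentage of the input consumed to stderr, which `zenity --progress` reads on stdin.

```shell
(pv -n big_dump.sql | sqlite3 mydb.sqlite) 2>&1 \
  | zenity --progress --title="Importing SQL" --auto-close
```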
You are subscribed to email updates from Recent Questions - Database Administrators Stack Exchange.